Analogously to Pauan, I'd love to have you try wart. It's much slower and has fewer libraries than arc. But on the other hand it's cleaner, and has fexprs, python-like indent-sensitivity and python-like keyword args. At the very least, maybe worth a few minutes' play? (And tell me what you think? :) Somebody using it may well be the impetus I need to go back to work on speeding it up.
Hello akkartik, I'm glad to be back. I've been lurking on and off for the past few years, but it will be nice to get involved again.
I guess I should be fair and try yours out too while I'm at it, though I have to say that the whitespace sensitivity makes me somewhat cautious. For some reason, I've always liked the more traditional sexpr syntax for lisp. Does wart mind if I use parens for everything?
Even more disconcerting would probably be your infix support. Maybe it won't be as much of a problem as I'm expecting, but I like being able to use random symbols in my variable names.
Also, numbered git commits strike me as a little odd, as do the numbered source files. In the latter case it's little more than taste, and I see that Pauan is doing the same thing. Maybe you have a good reason for it and I'd start doing the same if I only understood.
Anyway, enough casting of doubt on someone else's pet project. I'm certainly interested in a clean language with the possibility of fexprs to try out and maybe even keyword args. I'm not sure what I would actually need them for, but more power is never something I will turn down.
I'm also somewhat interested in the wat/taf projects, but they seem to be a bit more experimental right now.
As for my project plans, I'm thinking of doing two or three web service projects on the side, as a long term investment counter-point to my current hourly contracting job.
The first one I'm thinking of focusing on is a sort of easy, data-driven unit-testing-as-a-service concept. If you've ever seen the fit or fitnesse testing frameworks, this idea was originally based on those. Instead of writing unit test code, you would use a website to input test values and corresponding expected results in a table format. The first row of the table specifies the function or object being tested and its arguments or property names, and each row after would give the values for that test case, with the last column or set of columns specifying the return value. Fitnesse did that for c# and java, but it had a few major flaws. First, it would only interact with classes that inherited from the fitnesse test classes, so you were forced to write test harness code anyway. Second, the user had to format the tables manually using a wiki format, so it required a bit too much manual formatting, and there wasn't any way to provide additional metadata or add any more dimensions to the tables.
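To make the table format concrete, here's a minimal sketch in Python of how such a table could be executed. All the names (`run_table`, the `env` lookup dict) are mine for illustration; this isn't how fitnesse actually works internally.

```python
# Sketch of table-driven testing: first row names the function and its
# arguments plus an 'expected' column; each later row is one test case.

def run_table(table, env):
    """Run every test case in the table; return the rows that failed,
    paired with the actual value the function returned."""
    header = table[0]
    func = env[header[0]]          # look up the function under test by name
    failures = []
    for row in table[1:]:
        *args, expected = row      # last column is the expected return value
        actual = func(*args)
        if actual != expected:
            failures.append((row, actual))
    return failures

def add(a, b):
    return a + b

table = [
    ["add", "a", "b", "expected"],
    [2, 3, 5],
    [0, 0, 0],
    [-1, 1, 0],
]

print(run_table(table, {"add": add}))  # → [] (no failures)
```

The point of the website would then just be editing `table` through a friendly grid UI instead of a wiki page.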
The first few incarnations would probably be something really simple that would only be useful for testing code locally, but eventually it would be expanded to a web service that supports multiple languages and has a way to point it at any vcs repo and run tests interactively in the cloud. This would hopefully be a cheapish testing and specification solution for startups or foss projects that would be easy enough to add to existing code that people would actually do it. That and an enterprise version that can be deployed internally, which would hopefully make it so that business analysts and the QA team can write, run, and review the tests without having the dev team write a separate test harness for them.
There's a bit more to the idea, but right now none of it has been written, so I probably shouldn't advertise features that I may never get to, or that might not even be feasible in the first place. Of course, I like talking even more than I like coding, so I'm sure we can discuss it if you're interested. In fact, I would be very open to discussion, as I'm sure what other people actually want/need/would be willing to pay for in a test system won't match up exactly with my own ideas.
"In the latter case its little more than taste, and I see that Pauan is doing the same thing."
The reason I numbered them was just to make it easier to navigate the code. As a user, if you see "01 nu.rkt" you know it's loaded first, and that "02 arc.arc" is loaded second. And each one builds on the stuff defined previously, so you can read it in a linear order.
I only did that for the stuff that's automatically loaded when you start up Arc. You'll notice that the "lib" and "app" stuff is un-numbered. And I don't expect user-created code or libraries to use numbers! So I definitely don't take it as far as wart does.
Ah, I was unaware of fitnesse! Thanks for the pointer, that's a really neat idea. Tests are a huge part of wart's goal of 'empowering outsiders'.
Sucks that fitnesse is stuck in the java ecosystem. Just porting it to lisp/arc/wart would be awesome for starters...
Arguably much of the benefit of testing comes from the same person doing both programming and testing. Organizations which separate those two functions tend to gradually suck. If you accept that, inserting a web interface between oneself and one's tests seems like a bad idea. Perhaps fitnesse-like flows would be best utilized to involve the non-programmer in integration testing, testing the entire site as a whole rather than individual functions. Perhaps script interactions with a (non-ajax for starters!) app so that the CEO/QA engineer doesn't have to know about REST and PUT/GET? Hmm, that would be cool..
"Organizations which separate those two functions tend to gradually suck"
Hmm... Well, that's certainly a valid opinion, and it may even be true in a lot of cases. However, I think the issue is largely due to two other related issues: 1) the requirements aren't specified clearly enough, and are dissociated from the tests as well, and 2) they just don't have good enough tools.
Tests can serve many purposes. The most basic, given the fundamental concept of a test, is to tell you when something works, or when it doesn't. TDD focuses on the former; unit testing and regression testing focus more on the latter. Tests can be used at different points in the development cycle, and if you use the wrong kind at the wrong point, it's not really all that helpful.
My concern is that it's too difficult to write the correct kind of test, so most developers use the wrong kind or don't use any at all. There's nothing really wrong with that; I think it's just an unfortunate inefficiency, like trying to debug without stacktraces. >.> Hmm. Something to look forward to going back to arc I suppose. Anyway, my goal is to make testing easy enough to do, either for developers who just want a quick way to check if they broke something after making a 'minor' change, or for larger companies that want to know that all their requirements have actually been met.
So, to solve the first problem I'm hoping to utilize a lot of reflection and code inspection so that at least the outline of the test cases can be generated automatically, if not more. Then it should be really easy for the programmer to just add the missing details, either as specific test vectors or by using a more general definition of requirements using something like QuickCheck's generators.
In the long run the plan is for the tool to be able to work in the other direction, from requirements to tests. Hopefully, with better tool support and more intelligent interaction with the system under test, the architects will be able to specify the requirements and the tool will be able to verify that the code meets them.
Yes, divorcing tests from code could mean that different people do them. It doesn't have to be the case, but it becomes a possibility. And that means they could have a different perspective on the operation of the system, but not necessarily a worse one. If it's the architects or BAs writing the tests, then they might actually have more information about how the system should be working than the programmers, especially in the case that the programmers are outsourced. At that point, allowing someone else to write the tests is an improvement. When developers write the tests, it doesn't help if they use the same incorrect understanding of the requirements for both the tests and the code.
Hopefully, by making an easy enough tool that supports rapidly filling in tests based on code analysis (which would help anyone who doesn't know much about the actual code base match it up with the requirements they have), I can reduce the boilerplate and barriers to testing and make it a much easier tool to develop with. Maybe if it gets easy enough, developers will find that testing actually saves enough time to be worth the few seconds spent specifying test vectors for each method. And if it can do a good enough job of turning requirements into tests in a way that is clear enough to double as documentation, it should save the architects and BAs enough time, as well as make implementation easier for developers, that I might actually be able to sell licenses :P
"If it's the architects or BAs writing the tests, then they might actually have _more_ information about how the system should be working than the programmers,"
Oh, absolutely. I didn't mean to sound anti-non-programmer.
I tend to distrust labels like 'architect' and 'programmer'. Really there's only two kinds of people who can work on something: those who are paid to work on it, and those who have some other (usually richer and more nuanced) motivation to do so. When I said, "Organizations which separate those two functions tend to gradually suck", I was implicitly assuming both sides were paid employees.
If non-employees (I'll call them, oh I don't know, owners) contribute towards building a program, the result is always superior. Regardless of how they contribute. They're just more engaged, more aware of details, faster to react to changes in requirements (which always change). Your idea sounds awesome because it helps them be more engaged.
But when it's all paid employees and you separate them into testers and devs, then a peculiar phenomenon occurs. The engineers throw half-baked crap over to testers because, well, it's not their job to test. And test engineers throw releases back at the first sign of problems because, well, it's not their job to actually do anything constructive. A lot of shuffling back and forth happens, and both velocity and safety suffer, because nobody cares about the big picture of the product anymore.
(Agh, this is not very clear. I spend a lot of time thinking about large organizations. Another analogous example involves the patent office: http://akkartik.name/blog/2010-12-19-18-19-59-soc. Perhaps that'll help triangulate on where I'm coming from.)
(BTW, I've always wondered: what's that cryptic string in your profile?)
Don't feel like you have to be fair :) The world isn't fair, and I understand about differences in taste. My goal with wart and some other stuff has been to figure out how to empower outsiders to bend a codebase to their will and taste with a minimum of effort. For example, one experiment I'd love to perform on you is to measure how long it takes you to fork wart to toss out infix and get back your beloved hyphens :) But no pressure.