Even the bracket function shorthand expands into S-expressions:
[+ 1 2] -> (square-brackets + 1 2)
He does have somewhat of a point about # and ; but I could easily change PyArc so they also expand into S-expressions:
#\x -> (char x)
;x -> (comment x)
He didn't mention string syntax, but I could do that too:
"x" -> (string x)
Thus, they are merely syntactic shorthands, so we don't have to type as much. In fact, in PyArc, syntax (including ssyntax) is expanded at read time, so PyArc's eval doesn't actually know about #\ ' ` , or ,@; it only knows about S-expressions.
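To make the read-time idea concrete, here's a minimal sketch in Python (PyArc's host language). The function name and the list-based S-expression representation are hypothetical, not PyArc's actual internals; the point is only that shorthand becomes plain S-expressions before eval ever runs:

```python
# Read-time expansion sketch: shorthand tokens become plain
# S-expressions (represented here as Python lists), so the evaluator
# only ever sees uniform structure.
def expand(token):
    if token.startswith("'"):            # 'x   -> (quote x)
        return ["quote", expand(token[1:])]
    if token.startswith("#\\"):          # #\x  -> (char x)
        return ["char", token[2:]]
    if token.startswith(";"):            # ;x   -> (comment x)
        return ["comment", token[1:]]
    if token.startswith('"') and token.endswith('"'):
        return ["string", token[1:-1]]   # "x"  -> (string x)
    return token                         # ordinary atom

print(expand("'a"))    # ['quote', 'a']
print(expand("#\\x"))  # ['char', 'x']
```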
Also, I read that guy's articles a while back... I laughed when he said that cons cells were the primary reason people aren't using Lisp.
Yes, sometimes you do end up using (car) or (cdr) in Lisp programs, especially if you're writing utilities, but a lot of list processing happens with higher-order functions like map, filter, etc.
He seems to like Mathematica a lot, and dislikes Lisp because it's not Mathematica. That's all well and good, but it'd be nice if he actually tried to understand Lisp before bashing it. Here are some choice quotes:
I became aware of this essay in early 2000s. Kent himself
mentions it when he sees fit. I actually never red it. I just
knew that it's something lispers quote frequently like a
gospel. In the back of my mind, i tend to think it is something
particular to lisp, and thus is stupid, because many
oft-debated lisp issues (e.g. single vs multi semantic space),
never happens in a lisp-like language Mathematica which i'm
a expert, nor does it happen in any dynamic langs i have
So... he doesn't read what other people have to say, but because it relates to Lisp, he assumes it must be stupid and wrong, and then he claims that Lisp sucks because other languages don't have the same issue, even though he doesn't even know what the issue is.
Actually, the article he linked to mentions that a generic copy function is possible, but would require making arbitrary choices about how it would behave in different situations. Also in the paper he referenced: "However, the problems cited here are quite general, and occur routinely in other dynamically-typed languages as well as user programs." I'm actually tempted to make a generic copy function in Arc.
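Those "arbitrary choices" are easy to see if you sketch such a function. Here's a rough Python illustration (not Arc, and not the paper's code): every branch is a decision the language would be making on the programmer's behalf:

```python
# Sketch of a generic copy, showing why "copy" forces decisions:
# how deep to go, what counts as a container, what to do with
# unknown types, whether immutable values may be shared.
def copy(x, deep=False):
    if isinstance(x, list):
        return [copy(e, deep) for e in x] if deep else list(x)
    if isinstance(x, dict):
        return ({k: copy(v, deep) for k, v in x.items()}
                if deep else dict(x))
    if isinstance(x, (int, float, str, type(None))):
        return x                     # immutable: sharing is safe (a choice!)
    raise TypeError(f"no copy rule for {type(x).__name__}")

a = [1, {"k": [2, 3]}]
b = copy(a)                # shallow: b[1] is the same dict as a[1]
c = copy(a, deep=True)     # deep: fully independent structure
```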
For example, the difference between (list 1 2 3), '(1 2 3),
(quote (1 2 3)) is a frequently asked question.
It is? They all produce the same output: a list of 3 integers. Simple question, simple answer. In fact, '(1 2 3) expands to (quote (1 2 3)) so the last two are truly identical.
OK, I want to create a nested list in Lisp (always of only
integers) from a text file, such that each line in the text file
would be represented as a sublist in the 'imported' list.
Example of input
3 10 2
4 1
11 18
example of output:
((3 10 2) (4 1) (11 18))
Using only the core arc.arc, here is the solution I came up with:
This could, of course, be made cleaner with a (readlines) macro. So... I guess his problems are more with Common Lisp, Scheme, and Elisp? Arc, using just base functionality, isn't that far behind Ruby. Here's the readlines version, which is basically the same length as the Ruby version:
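For comparison, here's the whole task as a sketch in Python (the snippet is mine, not the Arc or Ruby version discussed above; the function name is made up):

```python
# Read a file of whitespace-separated integers into a nested list,
# one sublist per line: "3 10 2\n4 1\n11 18" -> [[3,10,2],[4,1],[11,18]]
import tempfile, os

def read_nested(path):
    with open(path) as f:
        return [[int(tok) for tok in line.split()] for line in f]

# demo with a temporary file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("3 10 2\n4 1\n11 18\n")
print(read_nested(f.name))   # [[3, 10, 2], [4, 1], [11, 18]]
os.unlink(f.name)
```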
I actually like the Ruby version because it reads in a left-to-right fashion: read the lines from the file, then map them, then split them, etc., whereas the Arc version has the order jumbled up. It might be interesting to write an Arc version that reads left-to-right like that.
Also, he's comparing apples to oranges. Ruby comes with a "readlines" function built-in, but he defined a "readlines" in elisp. Thus, half of the verbosity in elisp is because it doesn't have a built-in "readlines" function. If you assume that "readlines" is built-in, then the elisp one is in the same league as Ruby, for conciseness.
Dispite being a expert in trees, the lisp's cons business is
truely a pain to deal with. A large part of my time spent on
elisp programing is on the debugging the cons business.
But in lisp, it is at the low level exposed to programers the
“con” sequenced in a particular way, and accessed with
car,cdr, caard.. etc. A lisper may argue that a programer can
simply use higher-level constructs like “nth” or “list” to deal
with lists. However, one really cannot pretend that cons
doesn't exist in lisp because the language takes the con cell
as its fundamental primitive. In short, a programer cannot
program in lisp in a real world situation without having a good
understanding of the cons. (partly due to the language itself,
partly due to (i think vast majority) of existing code all
directly deals with cons)
all this may seem shallow and does not constitute a problem
for any serious, professional programer. But i think it is a
damnation to lisp and the major (unfixable) cause of
preventing lisp from becoming widely used in the future
(despite that lisp had been somewhat mainstream in the
1980s, which i wasn't a programer then). Because, the
reason for high-level lang being what they are is from all
these little details. In general, high-level lang being what
they are more and more abstract from the
Well... yeah. In other languages you need to worry about how many bits an int or a floating-point number has. In some popular languages (C, C++) you need to worry about whether a string is a Pascal string or a C string. In most popular languages you need to worry about bytes vs. Unicode code points. And in Java, you need to worry about int vs. Integer: thanks to boxing, two "equal" Integers can still fail an == comparison.
Whoops, there goes your "Lisp loses because it's less abstract" argument... The reason Lisp hasn't caught on isn't that other languages "hide the details" better, because they actually hide the details worse, in many ways.
So... if we don't use cons cells to construct lists, what do we use? Arrays like in Python, C, etc.? Objects like in JS and Lua? Really, there has to be a primitive somewhere, and cons cells are nice because they can construct lists, but also other things as well, when you need to. It's a flexible base datatype.
I guess he just hates that Lisp exposes the cons function. He wants (list) to be the only way to construct lists, meaning that all conses would be proper. But... are improper lists difficult to deal with in practice? I would expect most lists to be proper, with improper lists only occurring due to a mistake or because the programmer intended them to be improper.
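To show what "flexible base datatype" means, here's a toy cons cell in Python (a sketch of the concept, not PyArc's actual representation, and the helper names are made up): the same two-slot pair builds proper lists, bare pairs, and trees.

```python
# Cons cells as a tiny universal building block.
class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def lst(*items):
    """Build a proper list: lst(1, 2, 3) == Cons(1, Cons(2, Cons(3, None)))."""
    out = None
    for x in reversed(items):
        out = Cons(x, out)
    return out

def to_py(c):
    """Walk the cdr chain of a proper list back into a Python list."""
    out = []
    while c is not None:
        out.append(c.car)
        c = c.cdr
    return out

proper = lst(1, 2, 3)     # the list (1 2 3)
pair = Cons(1, 2)         # the improper pair (1 . 2)
print(to_py(proper))      # [1, 2, 3]
```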
Not to say that he might not have a point or two, but it's kinda bogged down by the rest of it. An amusing read, nonetheless. My suggestion to him: keep using Mathematica, since you seem to like it a lot.
He does have one point, though: Arc seems to be missing some higher-order functions, like "get all the nodes at level n. Map a function to level n. Map a function to just leafs" But... if we ever needed such functionality, it shouldn't be hard to write a function to do that, right? So what's the problem?
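Indeed, those helpers are short to write. Here's a rough Python sketch of the two he names (my own function names, nested Python lists standing in for Lisp trees):

```python
# Map a function over just the leaves of a nested list, or over the
# nodes at a given depth n.
def map_leaves(f, tree):
    if isinstance(tree, list):
        return [map_leaves(f, t) for t in tree]
    return f(tree)                          # a leaf

def map_level(f, tree, n):
    if n == 0:
        return f(tree)                      # reached level n: apply f
    return [map_level(f, t, n - 1) for t in tree]

print(map_leaves(lambda x: x * 10, [1, [2, [3]]]))   # [10, [20, [30]]]
print(map_level(len, [[1, 2], [3]], 1))              # [2, 1]
```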
By the way, he mentions how in Mathematica, it's all lists, so it basically uses alists rather than hashes, etc. Then he says that the compiler should automatically figure out how to optimize it, or allow the programmer to give type hints. That's not a bad idea, but I was thinking of something more generic: iterators.
By defining a base "sequence of something" type, all you would need to do is change (coerce) so it coerces your type to an iterator, and everything would work, à la Python. Though I think it's possible to do it better than Python does. One could do the same for hashes, defining a "mapping" or "collection" type.
In any case, there are solutions to the issues he brings up, so he seems to be nitpicking in an attempt to discredit Lisp in any way possible. Makes me wonder why he isn't just happy using Mathematica, since he likes it so much? Why is he trying so hard to help solve what he perceives to be problems in Lisp, if Mathematica is so good?
I think you've edited your post a few times, and given how long it is I'm not quite sure which parts have changed. Perhaps in the future you could break up your thoughts among multiple comments? One advantage would be that I could reply to each point separately.
I don't think cons cells are a hack, but I do think it's a hack to use them for things other than sequences. Since we almost always want len(x)=len(cdr(x))+1 for cons cells, rather than len(x)=2, they aren't really useful as their own two-element collection type.
Yeah, I'm tempted to agree with you. In the arc.arc source code, pg even mentions a solution to improper lists: allow any symbol to terminate a list, rather than just nil.
Of course, an easier fix would be to change `cons` so it throws an error if the second argument isn't a cons or nil. Honestly, are improper lists useful often enough to warrant overloading the meaning of cons? We could have a separate data-type for B-trees.
That's one area that I can actually agree with the article, but that has nothing to do with conses in general (when used to create proper lists), only with improper lists. And contrary to what he says, it's not an "unfixable problem", instead it would probably take only 2 lines of Python code to fix it.
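Roughly what that handful-of-lines fix looks like, sketched in Python with lists standing in for conses (not PyArc's real cons; nil is modeled as None here):

```python
# A strict cons: reject anything but a list (or nil) as the cdr,
# so improper lists can't be built by accident.
def cons(car, cdr):
    if not isinstance(cdr, (list, type(None))):
        raise TypeError("cons: cdr must be a list or nil")
    return [car] + (cdr or [])

print(cons(1, [2, 3]))   # [1, 2, 3]
print(cons(1, None))     # [1]
# cons(1, 2) -> TypeError
```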
One thing though... function argument lists:
(fn (a b . c))
Of course I can special-case this in PyArc, so . has special meaning only in the argument list. This is, in fact, what I do right now. But that may make it harder for implementations in say... Common Lisp or Scheme (assuming you're piggybacking, rather than writing a full-on parser/reader/interpreter).
If so... then you may end up with the odd situation that it's possible to create improper lists using the (a . b) syntax, but not possible to create them using `cons`.
By the way... how about this: proper lists would have a type of 'list and improper lists would have a type of 'cons. Yes, it would break backwards compatibility, but it might be a good idea in the long run. Or we could have lists have a type of 'cons and improper lists have a type of 'pair.
I don't know if improper lists are really a problem, just hackish. :) My "solution" would be to remove the need for them by changing the rest parameter syntax (both in parameter lists and in destructuring patterns).
"how about this: proper lists would have a type of 'list and improper lists would have a type of 'cons."
I don't think I like the idea of something's type changing when it's modified. But then that's mostly because I don't think the 'type function is useful on a high level; it generally seems more flexible to do (if afoo.x ...) rather than (case type.x foo ..), 'cause that way something can have more than one "type." Because of this approach, I end up using the 'type type just to identify the kind of concrete implementation a value has, and the way I think about it, the concrete implementation is whatever invariants are preserved under mutation.
That's just my take on it. Your experience with 'type may vary. ^_^