The '(frst . rest)' signals an error before compilation reaches the ,@.
The problem is, I think, that ac-qq1 doesn't expect to encounter an improper list. So, with arc1 you can't use a dot in a quasiquote to denote an improper list.
However, as vsingh pointed out, an unescaped cons operation will get you an improper list in a quasiquote where a dot doesn't.
This seems like a bug indeed, and it is easy to fix: simply replace this in ac-qq1:

((pair? x)
 (map (lambda (x) (ac-qq1 level x env)) x)))

with a recursion over both car and cdr, so that dotted pairs are handled too, e.g.:

((pair? x)
 (cons (ac-qq1 level (car x) env)
       (ac-qq1 level (cdr x) env)))
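For illustration, here is the same distinction in Python (a sketch of mine, not the thread's code), modeling cons cells as 2-tuples and nil as None: a map-style walk assumes a proper list and fails on a dotted pair, while a car/cdr walk handles both.

```python
# Cons cells modeled as (car, cdr) tuples; nil as None.
# '(1 2) is (1, (2, None)); the dotted '(1 . 2) is just (1, 2).

def walk_map_style(x, f):
    # assumes a proper list: only ever recurses down the cdr chain,
    # so a non-pair, non-nil cdr (a dotted tail) blows up
    if x is None:
        return None
    car, cdr = x
    return (f(car), walk_map_style(cdr, f))

def walk_cons_style(x, f):
    # recurses on car *and* cdr, so dotted pairs work too
    if not isinstance(x, tuple):
        return None if x is None else f(x)
    return (walk_cons_style(x[0], f), walk_cons_style(x[1], f))
```

The cons-style walk is exactly the shape of the ac-qq1 fix above.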
Scheme uses the convention to place ? and ! at the end of the names for predicate and setter procedures respectively. Arc has no comparable convention.
Scheme:     Arc:
pair?       acons
number?     number
set!        set
set-car!    scar
A convention such as this (or Common Lisp's trailing p for predicates) seems like a good thing to me. Does Arc avoid it because it "costs" an extra character?
Well I don't think it's ugly, but ugliness aside it's very descriptive in one char. I'd personally like to see this convention carried over. Ruby uses the same convention so it's not just Scheme.
If this was intentionally left out, my guess would be that PG was trying to save those chars for something meaningful to the compiler (syntax), not for aesthetics.
It should be possible to detect, at define-time, whether a function/macro has side-effects (see the ideas for a conservative purity analyzer in http://arclanguage.org/item?id=558).
So if the function/macro has side-effects, def/mac could e.g. also assign it to a second symbol with "!" appended.
'?' could also be done similarly, but would be more difficult. Here is a start:
def/mac should also assign its function/macro to a second symbol with "?" appended if:
1) The function/macro has no side-effects (no "!")
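As a rough illustration of what such a conservative analysis might look like (a Python sketch of mine, operating on Arc-style code represented as nested lists of strings; the particular set of impure primitives is an assumption, and a real analyzer would also have to handle macros, shadowing, and higher-order calls):

```python
# Conservative purity check: an expression is flagged impure if any
# known side-effecting operator appears anywhere inside it.
IMPURE_OPS = {"set", "=", "scar", "scdr", "push", "pop", "pr", "prn"}

def impure(expr, known_impure=IMPURE_OPS):
    if isinstance(expr, str):   # a bare symbol has no effects itself
        return False
    if not expr:
        return False
    head = expr[0]
    if isinstance(head, str) and head in known_impure:
        return True
    # conservative: impure if any subexpression is impure
    return any(impure(e, known_impure) for e in expr)
```

def/mac could run something like this over the body and decide whether to also bind the "!" name.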
Leaving aside the issues of detecting a query, I think it's a really bad idea to have the compiler force (badly guessed) semantic knowledge on me.
It's my code, I wrote it, and I don't want any compiler telling me that a query that caches previous results must be imperative! and not end in a ? .
I also think needlessly polluting a namespace with additional symbols is a bad idea. You only get one namespace in Arc; try to preserve it for the future.
That makes sense. '!' is already being used (though the convention doesn't interfere with the syntax at the moment).
Bear with me here, but '!' means the function isn't pure, right? If so, who cares? It seems like an ivory tower thing. '?' is fine, though maybe prefix notation could be considered.
Just because something is academic doesn't mean it's not worthwhile. For instance, map, keep, reduce, etc. are different if passed a function with side-effects. What happens if you give them a function which modifies the list as it's being traversed? A pathological example:
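To make that concrete (a made-up Python example, standing in for an Arc map with an impure function): iterate over a list with a function that appends to the very list being traversed, and the traversal never catches up.

```python
xs = [1, 2, 3]
seen = []
for x in xs:            # stand-in for (map f xs)
    xs.append(x)        # the "mapped" function mutates the list...
    seen.append(x * 2)
    if len(xs) > 100:   # ...so without this guard it would run forever
        break
```

The list grows one element per iteration, exactly as fast as the traversal advances, so the loop only ends because of the guard.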
However, if map is passed a pure function, you can e.g. run one copy of the function on each core and run it in parallel.
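A Python sketch of that payoff (the thread-pool split is my illustration; any parallel map works here, precisely because the function is pure):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x        # pure: depends only on x, touches nothing else

with ThreadPoolExecutor(max_workers=4) as ex:
    result = list(ex.map(square, [1, 2, 3, 4]))
# Order is preserved and the result equals the sequential map,
# because square has no side effects for the workers to race on.
```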
And "dangerSet" is both non-Lispy and non-Arcy. It's non-Arcy because it's long. And it's non-Lispy because of that capital letter: danger-set would be marginally better. But nevertheless, pair? is shorter than is-pair, and (in my opinion) reads much more nicely.
From the copyright file in arc1.tar: "This software is copyright (c) Paul Graham and Robert Morris. Permission to use it is granted under the Perl Foundations's [sic] Artistic License 2.0."
Since, as I've pointed out, the editor can handle the parens for you completely and unambiguously, even without any special commands, you could simply turn off the parens that exist in addition to indentation, i.e. make them invisible.
If you then additionally make the editor display the opening parenthesis as a colon then, voila, you have the visually pleasing colon syntax.
In such a mode you'd always have to be indentation perfect, but just as with colon syntax, you can simply switch to a more traditional editing mode.
Thanks, although it seems to concern mostly macros.
PG commented "The single biggest compromise I had to make because of MzScheme was not being able to put objects like functions and hashtables into macroexpansions." Not exactly sure what he meant by that, or if it had anything to do with the evaling functions directly.
I guess what I am really wondering is if there is an intrinsic reason why evaling a function directly can't work, or if support for it just doesn't happen to be in Arc currently because PG never needed it.
[Note: Like functions, hash tables can't be printed out in a way
that can be read back in. We hope to fix that though.]
The problem that I see with macros+hashtables in macroexpansion, as well as with printing/reading-back-in functions+macros+hashtables, is that currently an expression is compiled, before it is evaled, and loses its original uncompiled form. That's why macros are currently not truly first-class.
If used with eval or in macroexpansions, this only causes a problem for macros and hashtables, not for functions, or for hashtables used as functions. As nlavine pointed out, it only takes an additional line of code in "ac" to make your example work.
First, disallowing arithmetic operators in symbol names, so that (a+b/c) is interpreted like (a + b / c). All the replies to that, up till now, can be summed up like this:
eds: *I don't really think it is worth it*
kennytilton: *impoverishing the Arc naming syntax*
cadaver: *seems not such a good idea after all*
Secondly, whether or not to have some language support for infix. As can be seen in the previous discussion, with eds's system all you need for infix support is to swap positions wherever a literal is encountered in functional position. Paul Graham said that literals in functional position are valuable real estate; nevertheless, a comment regarding such an idea can be found in the arc1 source.
One good point of having support is brevity for math-infix users. The only bad point that I see is that we use up the valuable number-in-functional-position real estate, which could have been used for something else.
Supporting infix may not only be good for math. Consider the following:
(sort (fn (sm gr) (sm < gr)) somelist)
I'm sorry that I can't supply any good examples of heavy maths in real-world programs, though I don't doubt that such exist (ciphers?). On the other hand, if a program makes no heavy use of maths at all, except for a single mathematical formula that the programmer would like to write in infix, then using a separate macro package would introduce a dependency, and that might make the programmer grudgingly write out the formula in prefix. Another good case for Lisp infix is that when, like me, you tend to copy others' non-Lisp formulae, then in eds's infix system they would look more like their original form.
Supporting infix may not only be good for math. Consider the following: (sort (fn (sm gr) (sm < gr)) somelist)
Consider: (sort < somelist)
And if (sort (fn (x y) (< x y)) somelist) looks awkward, then maybe the issue is prefix altogether? I think anyone trying Arc who is new to Lisp might try just Lisping (in Arc, of course) for a few weeks before even thinking about changing the language. These things take time, and until one has done enough coding to get fluent (or thrown up one's hands and said "it has been three weeks and I still hate this!"), one cannot even form an opinion about whatever that thing might be. It is like an editor or IDE -- I hated the IDE I love now, but made myself wait a month before ripping the vendor a new one. Now people accuse me of being on their sales team.
Case in point: Arc. It is hard judging the brevity/parens stuff because I am in pain anyway without a decent IDE, without a debugger, without a standard library... but I slogged my way thru a port of Cells from Lisp to Arc and some of the brevity is growing on me (I deliberately stopped doing a transliteration and reinvented over the keyboard so I could feel the language better) and at this point I think I can say I would not kick Arc out of bed. Something like that.
btw, if we are just talking about one math formula in a blue moon, why bother? I mean, it would be fun if it had no cost, sure, but apparently pg has plans for numbers in the functional position. Anyway...
Although literals in functional position may be valuable real estate, I think that infix math is valuable enough to justify using it (at least until we think of something more important).
The only other thing that was suggested in the comment in ar-apply was that literals in functional position might be constant functions. I think infix syntax gives the programmer far more expressive power than being able to denote constant functions.
If an unbound symbol is evaluated, a global AUTOLOAD function is called with that symbol as an argument. AUTOLOAD could then parse this symbol and add, at runtime, the necessary functions/variables to the global environment. Invoking AUTOLOAD only on a page-fault basis may not be too inefficient. I have very often seen this done in Perl.
By the way, I like the Perl-like syntax just fine.
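In Python terms, the mechanism might look like this (a sketch; AutoloadEnv and the example hook are invented names, and the real thing would live in the Arc runtime's symbol lookup):

```python
class AutoloadEnv(dict):
    """Global environment that consults a hook on unbound names."""
    def __missing__(self, name):
        value = self.autoload(name)  # called once per unbound name
        self[name] = value           # cache it: the "page fault" happens once
        return value

    def autoload(self, name):
        # hypothetical policy: lazily define a couple of names on demand
        if name == "greet":
            return lambda: "hello"
        raise NameError(name)

env = AutoloadEnv()
env["greet"]()   # first lookup triggers autoload; later lookups hit the cache
```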
Since a number-in-functional-position introduces a context for its own s-expression (not for any of its sub-expressions), it would be possible to expand intrasymbol arithmetic, much like with the . and ! notations, only in infix-context.
(1 + x/y            ;expanded: x / y
   + (w/bar func))  ;not expanded, function call
This would effectively keep you from referring to any symbols that contain arithmetic operators from infix-context, but maybe this is not really such a problem.
For this to work you'd need truly first-class macros, since it is not possible to definitely detect infix-context at compile-time:
(1 + x/y) ;infix, but may be subject to macro expansion
((sqrt 16)*4 + x/y) ;need to decide at runtime
Being able to work with symbols, instead of the functions they are bound to, would also be better for precedence analysis, since infix is essentially a visual thing.
Possibly. But you still need whitespace in the first position in the call or Arc will try to evaluate the variable a+b rather than the expression (+ a b).
As for precedence analysis, you need some evaluation to happen in order to know that the object in the functional position is a number, after which all your symbols representing functions have been evaluated too.
But even if you did manage to get a hold of the symbols, you would have to make arbitrary decisions about what is an operator and what is a function. (Unless you start evaluating stuff, which gets us back to where we started.) For example, even though expt isn't defined as an operator in the current version, this expression still works:
(2 expt 3)
You might not get proper precedence when using it like this, but at least it is interpreted correctly as a function.
True, (a+b) has no infix context; I hadn't thought about that. Seems not such a good idea after all.
--
Regarding symbol precedence
Getting hold of the symbols: it's the same with macros, you need some evaluation to happen to know that the object in the functional position is a macro. That's why I said truly first-class macros; the functional position is evaluated first and only then a decision is made to either evaluate the already compiled form in the case of a function call, or to macro-expand the uncompiled form in case of a macro expansion.
Arbitrary decisions about operator/function: macros have all the power, you can handle precedence exactly like in your current system, but get the precedence information by looking up a table with the symbols as keys. This will work exactly the same for functions and yield more predictable precedence for operators.
On the other hand, I'd prefer that functions with undefined precedence were forbidden wherever precedence matters, because it's very easy to unwittingly redefine a function that has precedence defined for it. By itself, (2 expt 3) is fine; no precedence handling necessary.
Think of it as: infix-functions make precedence invisible (masked through the binding, you see the symbol not the function); infix-symbols make precedence obvious.
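A Python sketch of what a symbol-keyed precedence table buys you (the table entries and function names are my assumptions; in Arc this lookup would happen inside the infix macro, on the symbols themselves rather than on the functions bound to them):

```python
# Precedence attached to symbols, not to the functions bound to them.
PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2, "expt": 3}

def to_prefix(tokens):
    """Infix token list -> prefix s-expression (shunting-yard, left-assoc)."""
    out, ops = [], []
    def reduce_op():
        op = ops.pop()
        rhs, lhs = out.pop(), out.pop()
        out.append([op, lhs, rhs])
    for tok in tokens:
        if isinstance(tok, str) and tok in PRECEDENCE:
            # left-associativity: reduce while the stacked op binds as tightly
            while ops and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                reduce_op()
            ops.append(tok)
        else:
            out.append(tok)
    while ops:
        reduce_op()
    return out[0]
```

Redefining what "+" is bound to changes nothing here, because precedence is resolved before any evaluation happens.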
EDIT: Actually, you wouldn't need first-class macros, if you could redefine "fn" and "set" to handle specially all functions that get bound/assigned to symbols that have some precedence defined for them.
There seems to be some difficulty doing this in arc1; "fn" and "set" are kind of special:
arc> set
Error: "reference to undefined identifier: _set"
arc> fn
Error: "reference to undefined identifier: _fn"
:def func ()
  :if cond
    true
    false
-><-  ;forceful break of suggested indentation through backspace
So, I suppose, that's not an issue. What is an issue is that, if you revisit your code and want to add/remove something, you'll either need to use smart editor commands that let you navigate and edit your code based on s-expressions rather than rows/columns, or you'll have to sort out closing parens by hand.
If you have indentation based syntax, the sorting out by hand is replaced by matching the indentation level to the s-expression you want to modify; much easier, especially if you have programmed in non-lisp languages and are new to lisp, like me.
Here, for what it's worth, comes my insight:
practically the same functionality that is gained by your proposed system could be gained by making the editor interpret a forceful break of indentation as a command to add/remove closing parenthesis of the last s-expression, e.g.:
(if cond
    (do (one)
        (two)))
        -><-      ;we start here and want to add another expression to (do ...
    -><-)         ;we break the editor's suggested indentation once and the editor
                  ;places if's ')' after the cursor
-><-))            ;we break indentation twice and the editor
                  ;places do's ')' after the cursor
        (three)-><-))
This is just one case of course. The editor also has to manage closing parenthesis when you break suggested indentation in the middle of an s-expression, and there are likely to be other things I've not thought of, but essentially, the editor can unambiguously manage parenthesis for you by responding to user-override of the current indentation level.
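For what it's worth, the core bookkeeping is mechanical. A toy Python sketch (mine, under the simplifying assumption of one form per line, which real Lisp code of course violates):

```python
def close_parens(lines):
    """Infer closing parens from indentation (one form per line)."""
    out, stack = [], []        # stack holds indent levels of open forms
    for text in lines + [""]:  # sentinel blank line closes everything
        indent = len(text) - len(text.lstrip(" "))
        closes = 0
        # a dedent (or the sentinel) closes every deeper open form
        while stack and (not text.strip() or indent <= stack[-1]):
            stack.pop()
            closes += 1
        if closes:
            out[-1] += ")" * closes
        if text.strip():
            stack.append(indent)
            out.append(" " * indent + "(" + text.strip())
    return out
```

An editor mode would run this logic incrementally, moving the inferred closers past the cursor each time the user overrides the suggested indentation.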
I almost feel up to trying to implement this in Emacs Lisp. But maybe I should just try to learn, I don't know, quack-mode (which I'm using right now) or SLIME, of which I've only just heard through this forum.
"you'll either need to use smart editor commands that let you navigate and edit your code based on s-expressions rather than rows/columns, or you'll have to sort out closing parens by hand."
The beauty of parentheses is precisely being able to edit code in meaningful chunks because the parentheses naturally organize our code that way, which is part of why I think Arc's philosophy of "First, we kill all the parentheses" is away from goodness.
When I do edit code as if it were just so many lines and characters (about half the time -- after a dozen years I still have not mastered more than a few keychords) a simple "reformat" keychord automatically puts everything where it should be. And I am rarely disappointed by mistakes because the editor is still giving me cues by auto-indenting when I hit TAB and by blinking matching parens as I type.
I would suggest folks spend a few weeks writing Lisp before they try to change it, they might be surprised how they end up feeling about the parens. Unfortunately pg himself has it in for parens, so I cannot blame you all for the syntax massacre I am witnessing. :) A non-problem is being solved.