Something like marginalia might prove to be better than the arcfn docs. Not only because the docs would be fully integrated with the source code, but because it would also solve the multiple-dialects problem: if a given piece of code can be tagged with a dialect name, then automation could also apply a dialect filter.
Of course this would probably be quite a bit of an undertaking.
The crux is colocating the rendered docs online with the repo. Would marginalia help us use github's hosting with github pages, managing branches, etc? If it does I think I'd be willing to go on a significant undertaking.
Marginalia is clojure specific so I expect it will not help other than to provide ideas.
To create an arc equivalent you'd probably need to build an arc library that provides some code inspection/dissection capabilities and, ideally, can attach metadata to any given function or macro. With such a library you could then build a script to auto-generate the docs.
As for GitHub syncing: well, no. I'm guessing users would need to trigger the script and then check in the updated docs.
This is still better for a few reasons: 1. developers can generate docs locally that are in sync with the code base they are actually using (checked out or branched); 2. even if the online docs get out of sync for a while, you're still only a script trigger away from updating all outstanding changes.
The alternative is what you just went through; having someone remind you to do the work manually as an after-thought, which I've only seen happen once.
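For what it's worth, the kind of doc-generation script described above could start as small as this Python sketch (the regex, file format, and function names here are all hypothetical; a real Arc version would use proper code inspection rather than regexes):

```python
import re

# Hypothetical sketch of a doc-generation pass: scan Arc source text for
# top-level (def ...) and (mac ...) forms and emit a markdown listing.
# A real Arc library would inspect code directly instead of using regexes.
DEF_PATTERN = re.compile(r"^\((def|mac)\s+(\S+)", re.MULTILINE)

def extract_definitions(source):
    """Return (kind, name) pairs for each top-level def/mac form."""
    return DEF_PATTERN.findall(source)

def to_markdown(source):
    """Render the extracted definitions as a simple markdown listing."""
    lines = ["# API"]
    for kind, name in extract_definitions(source):
        lines.append("- `%s` (%s)" % (name, kind))
    return "\n".join(lines)
```

Developers could run something like this locally against whatever branch they have checked out and commit the regenerated docs, which is exactly the "script trigger away" workflow described above.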
If you were to draw a line with liberal at one end and conservative at the other, then plot various languages on that line, you'd find arc and clojure close together rather than far apart.
Even Yegge's post would seem to confirm this with his own data:
1. I assume arc to be like lisp. Many of the things that need a new interpreter atop scheme (lisp-1, quasiquote, unhygienic macros) are borrowings from lisp. The only major scheme-ism arc retains is call/cc. Am I missing anything?
2. I actually think arc is more batshit liberal than traditional lisp. Perhaps I'm reading too much into the absence of a module system, but I consider it a deliberate omission.
3. You're assuming that there's only one way to build out the missing infrastructure, and that is to be more like clojure. I'm skeptical of that.
Yeah, I can see how one could draw a somewhat larger distinction between Arc and Clojure using the conservative/liberal categories, but even so I'm not convinced it stands up as a good overall measure of similarity/difference (even if it's a valid criterion on its own). So setting that one criterion aside for a moment, I look at Arc and Clojure and here's what I also see:
- They are both Lisps with dynamic type handling, prefix notation, strings as sequences, heavy use of hash-maps, a large number of functions and macros with similar names doing the same things.
- The syntaxes are so similar you could pretty much swap out color scheme templates (in fact I initially used the arc textmate bundle for Clojure).
- They both depend upon a pre-existing language for compilation: Clojure drops into java, Arc drops into scheme.
- Online comparisons between them are more frequent than between arc and any other language.
I could go on, but I think you get the point.
And while I know you could create a list to show differences, I'm simply saying that overall they are very similar, which was my initial claim.
As for "having to be more like Clojure" to advance, well no, but I'm pretty sure that it's more likely than any other unstated options. Here's what I do see to support this:
Ok, I'm convinced :) By pg's comments in particular.
(There's a lot of red herrings in your comment, though. That they look the most similar syntactically isn't too surprising, and it's irrelevant that erlang can't compete, or that they build on an existing language, etc. You're measuring position to compare trajectories.)
In any case, it feels quite delusional to compare arc to clojure at the moment :)
Let me ask you this: if you have clojure why do you care about arc? Genuinely curious.
> “There's a lot of red herrings in your comment, though. That they look the most similar syntactically isn't too surprising, and it's irrelevant that erlang can't compete, or that they build on an existing language…”
I realize many of my arguments have holes in them when viewed independently, but put together those “soft arguments” paint an overall picture that supports my claim better than had I not given them. Admittedly, the "they build on an existing language" argument is pretty weak on its own.
> “You're measuring position to compare trajectories”
Position is not something you can measure; it's something you can map/use. That aside, I don't believe what I think you intended to say is true. I used position to substantiate the claim of similarity, and I used trends to substantiate the trajectory (i.e. the last half of my reply; what people are saying and what they are doing are trends).
> "if you have Clojure why do you care about arc?"
I like both languages. I don’t use Arc because it's unfinished and unusable for the things I need to do. I often get the sense that when I promote Clojure on the Arc forum people get defensive and wonder why I'm even around, but remember I am only saying what pg himself is:
> Would you recommend Arc to modern startups in general?
PG: No, I don't think so, not in its current state.
> Why not?
PG: It's still missing some things that most people take for granted.
> Are there any Lisps you would recommend.. ?
PG: Clojure is probably the best bet...
I like the arc language and I like the community. I view arc as unfinished and look forward to giving it a shot when it is, but until then I am going to continue to be realistic about the current state and promote Clojure as an option.
"I used trends to substantiate the trajectory, i.e. the last half of my reply, what people are saying and what they are doing are trends."
If you think arc is whatever pg says it is, then what we are saying and doing here is irrelevant :)
If you think arc is whatever we're saying and doing here, the things we take from clojure seem a tiny minority compared to the sum total of conversations. I'd say arc starts out kinda like clojure, but is looking to steal ideas from all sorts of languages including erlang.
I tried to phrase my question very carefully to avoid seeming defensive about something I don't care about. So let me just come out and say it: I have no problem with you talking about clojure all you want here. If you did, maybe I'd get to talk to you more! :) Arc is indeed absolutely unfinished, no disagreement there either.
Maybe what I'm actually defensive about is the prospect of any two languages becoming more and more similar. That just seems bad, nothing good can come of it. I'd rather see arc and clojure evolve in different directions and give me more ideas and more data about how good those ideas are.
Copying ideas and creating hybrids is totally fine, that's what we are good at. But then the hybrid starts at the intersection of its influences and sets off on a whole different trajectory.
So let me rephrase my question: is there something you miss from arc in clojure that has you wishing for a superset?
(The answer may take time to emerge from the subconscious. At least it has for me in similar questions.)
My comments are a targeted response to statements made in a specific comment (http://arclanguage.org/item?id=18421). I think you're treating the conversation in this thread the way you treat this forum: as some place to mingle languages and abstract all ideas. That's fine if you want to do that, but I'd suggest you'll be more effective if you follow the thread and consider context when responding.
And I'm suggesting more care be taken in this regard, because it leads to you being offended (i.e. "If you think arc is whatever pg says it is, then what we are saying and doing here is irrelevant...") by statements that should be considered only in relation to the thread. It also leads to unintentional attacks such as "In any case, it feels quite delusional to compare arc to clojure ...". I don't think you realize you suggested I'm being delusional by adding that comment... Arc and Clojure are both modern lisps that have reduced the overuse of brackets, so is it really delusional to suggest these two languages are more similar than not? Am I committing some injustice by telling someone who is already leaning towards Clojure that these languages are very similar anyway?
I'm going to end this thread here as I think it has already gone off the rails.
Sorry I'm causing offense. It was indeed utterly unintentional, and fwiw, I actually was never in the slightest conscious of feeling offended by anything in this thread. I'm entirely the offender and not an offended party.
I didn't mean to trigger associations like "injustice". When I said "delusional to compare arc to clojure" I was trying to be self-deprecating. Arc is just a toy, clojure is real. You're absolutely right in your defense of clojure.
Not to worry, I've known you long enough now to know you're not mean spirited or intending offence. Hopefully I didn't get too grumpy, but I needed to put an end to the thread because I could see we had different ideas on what the thread was even about.
> These parentheses weren't decorative though, they exposed the structure of the code, and they are a minimal price to pay for extra clarity.
Well, clarity is dependent upon the reader and, more specifically, how the reader reads.
My path to clarity is as follows: I first scan code at a high level (often referred to as speed reading), where I find having more parentheses simply gets in the way. The second step is fixation, where I stare at a small block of code. With both Arc and Clojure I have no trouble moving from scanning to fixation and then to understanding (clarity). It could be that the author struggles because of his/her reading method. Chances are the author learned with traditional L.I.S.P (Lots of Infuriatingly Stupid Parentheses), and the newer lisp dialects therefore work against his/her reading training. I, however, started with Arc & Clojure, and I experience the same lack of clarity when reading the traditional LISP dialects. To each their own.
Original blogger here. Apologies about the lack of indentation, I'm not sure how I missed that, it is a markdown conversion thing.
The point I was making in the blog is expressed more eloquently in , but to summarise: the difference between LISP and the countless languages that have risen and fallen during LISP's lifetime is that LISP directly represents the parse tree. By disposing of those parentheses in the (if ) statement, Arc has introduced the need to count the terms in the list to understand the sense of the code, in other words to parse it. Clearly the same is true of Python, C++, Haskell and many other languages I respect, but by placing a barrier between the code and the parse tree, Arc has abandoned the principle that makes LISP a global, rather than a local, minimum in the history of programming languages, and I think this is a mistake.
It sounds like what you like to see are s-expressions that fit these patterns:
- homogeneous list of any number of X
- fixed-length heterogeneous list of (X Y Z)
- expression... which fits one of these patterns:
  - call to X with args Y
Arc and Clojure make this more complicated by adding more cases, and I'll highlight Clojure's cases here:
- map from X to Y
- homogeneous vector of any number of X
- vector alternating between X and Y, with no excess (like a map)
- call to 'cond with args alternating between (expression) and (expression), with no excess
- call to 'case with args starting with (expression), followed by alternations between (anything) and (expression), perhaps with (expression) as excess
- function body which might begin with a docstring and/or a metadata map
Between Arc and Clojure, I'm pretty sure Arc is intrinsically harder to auto-indent, because it doesn't distinguish between different cases using different kinds of brackets. Racket's a great example of what I mean; parens are used when the subsequent lines should be indented like function arguments, and square brackets are used otherwise.
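To make the Racket convention concrete, here's a toy indentation rule in Python (the function name and inputs are hypothetical; a real editor tracks far more state than this):

```python
# Hypothetical sketch of the bracket-sensitive indentation described above.
# We assume we know the column of the open bracket and the width of the
# head symbol; none of this is a real editor API.
def indent_for(bracket_col, head_width, bracket):
    if bracket == "(":
        # Parens: align continuation lines under the first argument,
        # i.e. just past "(head ".
        return bracket_col + 1 + head_width + 1
    # Square brackets: a fixed body indent of one column past the bracket.
    return bracket_col + 1
```

The point is that the bracket kind alone picks the rule; a language that uses parens for every case (as Arc does) forces the indenter to guess which rule applies.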
For structured editing -- what you pursue -- we probably want even more static structure than what we want for auto-indentation.
I don't want to prod you to spend your time writing tools for niche languages or designing niche languages of your own, if that's not what you already want to do, but I'd like to ask what kinds of hypothetical languages you would like best....
Would you be eager to work with a lisp-like language where the AST has a few built-in notions of homogeneous lists, heterogeneous tuples, function calls, etc.? For instance, Lark defines an alternative to s-expressions that's more tailored to the ways the syntax is actually used (https://github.com/munificent/lark).
On the other hand, would you be eager to work with a language where the AST has an endless variety of possible cases, which can be extended by any programmer who wants to write an editor widget? Racket does something related to this, because it has an expressive pattern language for writing macros, and macros written this way generate good parse errors automatically (http://docs.racket-lang.org/syntax/Parsing_Syntax.html).
Personally, I've been kinda dreaming about both of these approaches to structured editing for a long time, but I'm still working on the run-time and link-time semantics of my language designs, so I've been unambitious when it comes to editing-time semantics. :)
Fair enough, you're not saying anything wrong here, but I still do not agree. This really is about the trade-offs each of us is willing to make.
i.e. programmatically, yes, you might need to count terms, but for code readability I do not count the terms; I let code indentation guide me.
I can certainly see how this code-indentation factor may seem too free in form or structure to appeal to many, but having fewer parentheses is a huge readability/enjoyability win. A win that, at least for me, leads to huge gains in productivity.
Also, you do have the option to pretend the token count is even:
(def example (x)
  (if (is x 1) "One"
      (is x 2) "Two"))
It's up to you. In my mind, power & flexibility are the big draws to LISP. If I wanted to be directed by the language as opposed to empowered by it, I would just use C.
1. Once I had to create a custom macro where I might have needed to count the terms, but it's a rare event.
Welcome! I can see how lispers wouldn't like the arc approach, but I don't see how it makes arc more like non-lisps. What traditional language requires counting positions? I usually associate that with lisps.
I've also extracted the _ handling into a new function called (ugh) functionize. And I've fixed zap to show how you can make other functions "_ aware".
But like I said, this isn't as valuable in arc since you already have the [.. _ ..] syntax. All it permits is more widespread ssyntax use. You can say _.x instead of [_ x], and you can chain ssyntax around obstacles.
I found the idea interesting because it asks the question: do we need [.. _ ..] syntax, or can we eliminate the need for this reader macro just with a few simple changes to the functions where we most often use it?
This is a horrible change. I didn't respond to the original post because of "Thumper's rule," and I couldn't rebut thaddeus in time to stop you. :(
== Problems with functionize in general ==
The functionize-based utilities are discontinuous in the way they detect underscores: the body can use _ three times, or two times, or one time, but as soon as it uses _ zero times, it means something completely different. Thanks to this, I can break several layers of code by making a single local edit. But will I? Yes:
The 'treemem function detects occurrences of _ without regard for quoting or local scopes. So if I use an _ to activate one functionize-based utility, then I'll accidentally activate all the other functionize-based utilities which surround that one. If I want to avoid refactoring several layers of code each time I edit, I pretty much have two options:
- I can avoid putting an _ anywhere in my code, in which case this functionize feature won't be very useful to me.
- I can make sure to activate each and every functionize utility as soon as I use it, in which case they would have been better off as 'let variants. For instance, I might settle on the idiom (zap (do '_ ...) foo), but it would be more convenient to say (zaplet orig foo ...).
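The discontinuity being criticized can be sketched outside of Arc; here's a Python analogue (entirely hypothetical, just to illustrate the two-cases semantics, not how the Anarki code actually works):

```python
# Hypothetical analogue of the functionize behavior: if the expression
# mentions the placeholder "_", wrap it into a function of "_"; if it
# doesn't, assume it already evaluates to a function. The two cases have
# completely different semantics, so deleting the last "_" from a body
# doesn't degrade gracefully; it flips into the other case entirely.
def functionize(expr):
    if "_" in expr.split():   # crude stand-in for arc's treemem check
        return eval("lambda _: " + expr)
    return eval(expr)         # assumed to already name a function
```

For example, `functionize("_ * 2")` builds a doubling function, while `functionize("abs")` just returns the existing `abs` function; there is no continuum between those behaviors.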
== Problems in Arc ==
Your Anarki commit is one of those things that is "guaranteed to break all your code." Personally, I like using the pattern (zap [map [...] _] args), which now breaks since the _ activates zap's automatic function wrapper. It seems you would want me to write (zap (map [...] _) args) instead, but for compatibility with Rainbow, Jarc, etc., I think I'll define a macro (itfn ...) and write (zap (itfn:map (itfn:...) it) args). Effectively, I'll be recreating the [...] functionality in the way I like.
Meanwhile, Arc already covers a lot of the functionality of Clojure's -> and ->> operators using (aand ...). If you still miss -> or ->>, I recommend just implementing an 'aand variant that doesn't short-circuit on nil. Call it 'ado or something.
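The suggested non-short-circuiting 'ado variant can be sketched like this (in Python rather than Arc, purely as an illustration of the threading idea):

```python
from functools import reduce

def ado(value, *fns):
    # Thread value through each function in turn, like Clojure's ->,
    # but without aand's short-circuit on nil/None.
    return reduce(lambda acc, f: f(acc), fns, value)

def aand(value, *fns):
    # aand-style variant: stop as soon as any step yields None (Arc's nil).
    for f in fns:
        if value is None:
            return None
        value = f(value)
    return value
```

The only difference between the two is whether a nil result midway aborts the pipeline, which is exactly the distinction between 'aand and the proposed 'ado.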
Just spotted your comment while I'm working on something else. Haven't digested it all, but judging from the first five words -- feel free to revert! It was definitely intended as an experiment, and I'm not attached to it. I may well do so myself later today if you don't get to it first.
Ok, done reading now, and you're right, I'll revert it.
I can only defend myself against the wart section :) In wart the pipe operator can only take two args and is intended to be used in infix. I use a non-infix transform for more args, and for prefix mode in general: https://github.com/akkartik/wart/commit/ec0f9a38b8
My weak defense for the rest: functionize and the _ syntax was only intended for tiny expressions.
"I can only defend myself against the wart section :) In wart the pipe operator can only take two args and is intended to be used in infix. I use a non-infix transform for more args, and prefix mode in general"
Oh, so you're pursuing both options at once. I look forward to you figuring out what kind of indentation you prefer here. :) My "considerations about wart" section was only wishy-washy anyway.
"For the rest, my weak defense is that functionize and the _ syntax is only intended for tiny expressions."
In Penknife, when I used the a'b operator as sugar for (b a), I found I ended up with a few really long lines of a.b.c'd'e.f, so it kinda suffered from its own success. ^_^ My a'b is the same as your (a -> b._), and it exactly corresponds to your no-underscore special case, (a -> b), so I expect you to have the same issue.
I suspect these syntaxes actually have a special tendency to let sugar accumulate, driving them away from the ideal "tiny expressions" case. Specifically, they make it possible to inject new code without breaking apart the surrounding sugar first:
a.b.f.c.d # before refactoring
a.b."foo".c.d # illegal
a.b -> (_ "foo") -> _.c.d # legal? (not quite the example you gave)
a.b'[itfn:it s.foo].c.d # Penknife code of similar generality
Fortunately for wart, its infix operators allow whitespace in between, which possibly means you can write these long expressions on multiple lines. (That wasn't the case in Penknife.)