For me the big reason is differences in the way macros are handled. Racket is a Scheme, so its macros are hygienic and require a bit more ceremony to create. More seriously, they have phase separation, which constrains what code you can call from within macros: https://docs.racket-lang.org/guide/phases.html
Arc is more like Common Lisp in that you can call whatever you want while expanding a macro, and if you make a mistake you might end up with an infinite regress of macroexpansion or something like that. Its macros are not hygienic, which again creates room for certain kinds of bugs, but some of us here tend to think of those as learning experiences ^_^ whose benefits outweigh their pain.
Another minor difference is that Arc is a lisp-1 like Scheme and unlike Common Lisp.
I think there is one way to consider Arc to be a language with good hygiene: we can program so that if we ever use a name as a global variable, we never use it as a local variable, and vice versa. As long as an Arc program follows this rule and the usual (w/uniq ...) idiom, it won't encounter hygiene issues.
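Here's a minimal sketch of the kind of bug and the fix; repeat-bad and repeat-ok are hypothetical names for illustration (Arc's real repeat is already defined in arc.arc):

```
; Unhygienic: the macro's own "i" leaks into the caller's code.
(mac repeat-bad (n . body)
  `(let i 0
     (while (< i ,n)
       ,@body
       (++ i))))

(let i 10
  (repeat-bad 3 (prn i)))  ; prints the macro's i (0, 1, 2), not 10

; The usual fix: generate a fresh name with w/uniq.
(mac repeat-ok (n . body)
  (w/uniq gi
    `(let ,gi 0
       (while (< ,gi ,n)
         ,@body
         (++ ,gi)))))
```

The rule above covers the other direction of capture: if while, <, and ++ are only ever used as globals, a caller can't accidentally shadow them either.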
Paul Graham has this to say about hygiene in the tutorial:
Some people worry unduly about this kind of bug. It caused the
Scheme committee to adopt a plan for "hygienic" macros that was
probably a mistake. It seems to me that the solution is not to
encourage the noob illusion that macro calls are function calls.
People writing macros need to remember that macros live in the land
of names. Naturally in the land of names you have to worry about
using the wrong names, just as in the land of values you have to
remember not to use the wrong values-- for example, not to use zero
as a divisor.
However, he's only careful about one direction of variable capture. Here's one example from the tutorial where he doesn't mind capturing the names let, repeat, push, and rev:
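The definition in question, reconstructed from the names it captures (treat this as approximate rather than a verbatim quote of the tutorial):

```
(mac n-of (n expr)
  (w/uniq ga
    `(let ,ga nil
       (repeat ,n (push ,expr ,ga))
       (rev ,ga))))
```

(w/uniq protects against the macro's temporary variable capturing names in expr, but nothing stops a caller's local named let, repeat, push, or rev from breaking the expansion.)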
I think he gets away with this because he's following that rule I mentioned, keeping a careful separation between the names of locals and globals.
It seems we don't particularly follow that rule here on the Arc Forum. For instance, a few of us have agreed that a certain behavior in Arc 3.1 is a bug: When we make a function call to a local variable, we don't want a global macro of the same name to take effect, which is what happens in Arc 3.1. If we were keeping locals and globals separate, we wouldn't even encounter this problem.
Which means that if we want to write macros that are hygienic, we can't write them in quite the way we see in arc.arc or the tutorial. If we're dedicated to hygiene, we might even want to rewrite arc.arc to fix its hygiene issues... but that's practically the whole language, so it effectively starts to be a new language project. The Anarki arc2.hygiene branch, Penknife, ar, Semi-Arc, and Nulan are several examples of Arc-based or Arc-inspired projects that pursued some kind of hygiene.
If we don't mind the lack of hygiene in arc.arc but only care about proper hygiene for our own new macros, it is possible to be diligent about hygiene in plain Arc 3.1 or Anarki:
(mac n-of (n expr)
  (w/uniq ga
    (rep.let ga nil
      (rep.repeat n (rep.push expr ga))
      `(',rev ,ga))))
Coding this way looks a little arcane and loses some of Arc's brevity, but one of the techniques here is to embed the rev function into the syntax as a first-class value. By putting most of the macro implementation into an embedded function, it can become rather familiar-looking again:
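For instance, here is one way that intermediate step might look, a sketch assuming plain Arc 3.1 (the (',fn-value ...) trick embeds the function the same way (',rev ,ga) does above):

```
(mac n-of (n expr)
  `(',(fn (n body)
        (let a nil
          (repeat n (push (body) a))
          rev.a))
    ,n (fn () ,expr)))
```

The generated code calls an embedded first-class function with the evaluated n and a thunk for expr, so the macro body can use let, repeat, and push normally without capturing the caller's names.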
Here's a macro that makes this even more convenient:
(mac qq-with args
  (let (body . rev-bindings) rev.args
    (let (vars vals) (apply map list (pair rev.rev-bindings))
      `(',list `',(fn ,vars ,body) ,@vals))))

(mac n-of (n expr)
  (qq-with n n expr `(fn () ,expr)
    (let a nil
      (repeat n (push (expr) a))
      rev.a)))
I think if I programmed much in Arc again, I'd start by defining that macro or something like it. :)
As it is, right now I'm just settling into a macro system I designed. I don't have convenient gensym side effects like Arc does, and I want the generated code to be serializable (not containing opaque first-class functions), so my options are limited. I still can and do implement macros, but the implementation of each macro is pretty verbose.
I think Clojure fits most of the criteria that would lead someone to choose Arc. I think Clojure's main flaw compared to Arc is that it's a bit cumbersome to do iteration, because there's no general support for tail call optimization.
Arc has a few things positively going for it:
* Arc's implementation doesn't implement the whole language from scratch. Instead, syntaxes, data representations, and primitive operations are inherited from Racket, and most of the high-level tools are implemented in Arc itself as a library. What remains in the Arc implementation is a small, unintimidating core focused on some compilation and pattern-matching features. Since the core is small, it's easy to make certain modifications if needed. (Modifications to things inherited from Racket, like changing the reader syntax, are more challenging.)
* It extends s-expression syntax in a few minor ways. I think one of these extensions, the (a:b:c d) shortcut for (a (b (c d))), is particularly compelling. It tends to reduce lines of code, indentation, and parentheses all at once:
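For instance (a small made-up illustration, assuming a table h is in scope):

```
; Without the shortcut:
(prn (len (keys h)))

; With it:
(prn:len:keys h)
```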
* Paul Graham wrote influential essays about language design that led to the release of Arc. Some people, including me, came to Arc because they read those essays and liked the high-level goals and priorities they expressed. Arc probably isn't even the best existing manifestation of those goals, but it is a Schelling point at least.
I'm drifting off topic, but imagine this: When you call (eval ...), imagine you pass in the global namespace that the code will run in. (Maybe we're using aw's extension for this purpose.) When you pass in a namespace that contains your own implementation of (eval ...) itself, you've effectively modified the compiler, but only as far as that specific code is concerned! As long as our custom compilers are written in Arc, we can treat them like we treat Arc libraries, and we can mix code that uses different compilers. We can have all kinds of compilers open in split windows at the same time. :-p
We already have plenty of examples of first-class namespaces, like aw's implementation posted recently. So all this would take is an implementation of Arc in Arc. Do we have one of those? I thought I heard of one at some point.
My excitement is not because I think a pileup of various compilers in a single codebase is a great idea, but because I think it's easier to share code this way. Compiler hacks are prone to merge conflicts that inhibit code sharing, but sharing libraries is... well, not perfect in Arc either, but it's at least ameliorable by doing some simple search-and-replace renaming or by agreeing on a namespacing framework (like my framework in Lathe).
This came to mind because I was recently realizing that in my language Staccato, my Staccato self-compiler was approximating a style of programming much like that split window of yours, without sacrificing modularity.
Clojure also has a pretty cool way to not have to call (uniq) by hand. If, inside a backquote, you append a # to a symbol, Clojure will replace that symbol with a gensym. And it'll use the same gensym every time you use that symbol in the backquoted form.
(defmacro and
  "Evaluates exprs one at a time, from left to right. If a form
  returns logical false (nil or false), and returns that value and
  doesn't evaluate any of the other expressions, otherwise it returns
  the value of the last expr. (and) returns true."
  {:added "1.0"}
  ([] true)
  ([x] x)
  ([x & next]
   `(let [and# ~x]
      (if and# (and ~@next) and#))))
See how it uses and#, but it doesn't capture the variable and?
I'm not entirely sure how you would capture a variable (e.g., Arc's "aand"); if you try to, Clojure errors by default. There's probably some way, but I don't know offhand.
Mature: more people have used Clojure, kicked at the wheels, found bugs.
Who wants unhygienic macros? People here and those who like Common Lisp. Why? They're more flexible. They give you enough rope to hang yourself, but if you exercise taste in using them life can be quite good.
We often prefer arc because we want to poke under the hood and understand how interpreters and compilers work.
If your goal is to learn, it mostly doesn't matter what you use. Just build. The journey is what matters. Use something where you have someone to ask questions of. (Like here.)
(If your goal is something more specific, then Arc might well not be a good idea. But then neither will anything else, most likely. But you'll still have built something by the time you realize that you chose wrong.)
Looks like Arc vs. Clojure has been pretty well covered by the other comments. To take a step back and look at your goals, in case you might find it useful... consider separating the need to eat from your other ambitions.
The demand for hackers is very high right now, so it's easy to find work.
Most people when they get a job and make more money, immediately raise their standard of living. I.e. they find a nicer place to live, they eat more expensive food, maybe buy a car or get a nicer one, etc. But you don't have to do that if you don't want to. You can, if you choose, keep your expenses low while working, and save a lot of money instead.
When your expenses are lower than your income, you don't need to work full time. For example, you could work part time. Or, you could work full time for part of a year and not work the rest of the year.
With "I need to eat" covered, then you have time to pursue your interests without fearing that you're going to starve if you don't get things going quickly enough.
YC has a less than 3% acceptance rate (https://blog.ycombinator.com/yc-portfolio-stats), so applying to YC isn't a great strategy for keeping from starving. (YC is great if what you want to do is build a world changing startup. For meeting your own basic income needs there are many far easier and much more certain ways to do that).
I don't mean to discourage you in any way from applying to YC if you want to do a startup. Just suggesting you have a plan B for the "so I can eat" part :)
I think you might find TripleByte's blog post on what kinds of programmers YC companies want to hire interesting for several reasons:
First, if you want to do a startup, it's interesting to see what kinds of technical skills have turned out to be important for startups.
Second, if you want to create an app, it's interesting to see what kinds of technical skills have turned out to be important for startups creating apps.
And third, if you want to get a job, it's interesting to see what kinds of technical skills are most in demand.
A highlight is that the most demand is for product-focused programmers.
Thus, if I were looking for something to study for the purpose of starting a startup, or creating an app that lots of people use/love, or for finding work, I'd consider focusing on:
But keep in mind that for most apps, for most startups, you don't need Lisp. Reddit, for example, started in Lisp and switched to Python because the libraries were better. Nor are most of the YC companies using Lisp.
Of course, every startup is different, and every app is different. For a particular app, or for a particular startup, Lisp might be a strong advantage. For Paul Graham's original startup ViaWeb, for example, Lisp was a decisive advantage.
Lisp is a programmable programming language. When do you need to program your programming language? When your programming language isn't doing enough for you on its own.
As other programming languages have gotten better, there's less of a gap between them and Lisp. ViaWeb was using Lisp competing against companies writing in C++. Nowadays the mainstream languages are higher level.
Lisp is a useful skill to learn because if you ever do get into a situation where it would be helpful to be able to program your programming language you can say "aha! A macro would make this easier!"
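A tiny illustration is Arc's anaphoric aif; here's a simplified version of the idea (the real definition in arc.arc also handles multiple clauses):

```
; Bind the test's result to "it" so the body can use it.
(mac aif (expr . body)
  `(let it ,expr
     (if it ,@body)))

(aif (find odd '(2 4 5 6))
  (prn "first odd number: " it))
```

That kind of control construct is awkward to add as a library in most other languages.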
And yet, to get into YC, or to write an app that lots of people use/love, most of the time, in most cases, that's not necessary. (Or else startups would be looking for Lisp programmers).
Yeah, I wouldn't use Arc in your situation. It can still be good for prototyping new ideas, but I'm not sure the gap with other languages is large enough to take on the risk of painting yourself into a corner.
It would be nice, but I have a hard time recommending it to anyone with no releases since 2009.
Perhaps Anarki would be something to recommend. But that's not really being driven forwards either. Any changes are the result of design by committee, which doesn't tend to lead to great design.
It doesn't seem fair to call Anarki design by committee. It's closer to a small number of scatterbrained people who periodically have a shiny new idea and add it in in anarchist fashion. Maybe design by Dory? http://www.imdb.com/character/ch0003708 :)
That said, Arc is probably a better language if you want to hack the language, and Clojure is better for almost everything else. This is especially true if you need libraries, or want something that Just Works^TM.
The people that run HN don't seem to be interested in doing so, and they almost never post here (except like one time: http://arclanguage.org/item?id=19389). So even an updated Arc release hasn't happened, never mind actually letting the community push Arc forwards.
The community is definitely stagnant. PG has been the driving force behind its publicity, so because he hasn't said anything about Arc in a while, there isn't new blood coming in.
I'm still not following what your program does. (Can you describe its inputs and how it transforms them in English?)
I remember you asked a similar question last year: http://arclanguage.org/item?id=19109. Perhaps it would help to connect up how your question here relates to that thread?
Here's how you run the code in that thread for reading lines from a file with anarki:
$ cat x
ab
cd ef
ghi
$ cat x.arc
(write:w/infile file "x"
(drain (readline file)))
$ ./arc x.arc
("ab" "cd ef" "ghi")
#t
Here is a follow-on problem as I'm going through the tutorial: obj does not work, and the error message seems to access memory not involved with the obj.
arc> (printlst alist)
ARR(1)="PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG"
ARR(2)=" N ARR,FND,I,RSD,RTN,STOP,TXT"
ARR(3)=" W !!,"PASTE""
ARR(4)=" F R !,X:15 Q:'$T S ARR($I(ARR))=X"
ARR(5)=" K RSDS"
""
arc> (= codes (obj "Boston" 'bos "San Francisco" 'sfo "Paris" 'cdg))
Error: "list-ref: contract violation\n expected: exact-nonnegative-integer?\n given: '(((codes (obj \"Boston\" (quote bos . nil) \"San Francisco\" (quote sfo . nil) \"Paris\" (quote cdg . nil) . nil) . nil) . nil))\n argument position: 2nd\n other arguments...:\n '(\"\\nARR(1)=\\\"PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG\\\"\" \"\\nARR(2)=\\\" N ARR,FND,I,RSD,RTN,STOP,TXT\\\"\" \"\\nARR(3)=\\\" W !!,\\\"PASTE\\\"\\\"\" \"\\nARR(4)=\\\" F R !,X:15 Q:'$T S ARR($I(ARR))=X\\\"\" \"\\nARR(5)=\\\" K RSDS\\..."
arc>
Judging by that error message, it looks like the variable "=" or one of its dependencies might have been reassigned somewhere along the line. The second argument in that error message indicates that = is getting hold of your read-in data somehow, so it might be something you've defined for processing this data.
The dependencies of = include expand=list, expand=, map, pair, and setforms (among others), so if any of these has been overwritten, it might do something like what you're seeing.
By the way, I think if you're not using Anarki, there's a known bug in (readline ...) where it will spuriously combine each empty line with the following line (https://sites.google.com/site/arclanguagewiki/arc-3_1/known-...). Maybe this could explain the extra \n you're getting.
Hmm, not sure what happened. Not sure what you mean by memory problems, but I've never seen flakiness in a session this short. Perhaps something in your earlier session was accidentally a control character or something. Keep an eye out for it and I will too.
Here's a full session I tried out on linux:
$ arc
arc> (def printlst (thelist) (if (is thelist nil) (prn "") (do (prn (car thelist)) (printlst (cdr thelist)))))
#<procedure: printlst>
arc> (def readit () (drain (readline (stdin))))
#<procedure: readit>
arc> (= alist (readit))
ARR(1)="PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG"
ARR(2)=" N ARR,FND,I,RSD,RTN,STOP,TXT"
ARR(3)=" W !!,"PASTE""
ARR(4)=" F R !,X:15 Q:'$T S ARR($I(ARR))=X"
ARR(5)=" K RSDS"
("" "ARR(1)=\"PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG\"" "" "ARR(2)=\" N ARR,FND,I,RSD,RTN,STOP,TXT\"" "" "ARR(3)=\" W !!,\"PASTE\"\"" "" "ARR(4)=\" F R !,X:15 Q:'$T S ARR($I(ARR))=X\"" "" "ARR(5)=\" K RSDS\"")
arc> (printlst alist)
ARR(1)="PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG"
ARR(2)=" N ARR,FND,I,RSD,RTN,STOP,TXT"
ARR(3)=" W !!,"PASTE""
ARR(4)=" F R !,X:15 Q:'$T S ARR($I(ARR))=X"
ARR(5)=" K RSDS"
""
arc> (= codes (obj "Boston" 'bos "San Francisco" 'sfo "Paris" 'cdg))
#hash(("Boston" . bos) ("Paris" . cdg) ("San Francisco" . sfo))
arc>
So I tried again with arc running on Racket under Linux. Here's what I found:
arc> (def readit () (drain (readline (stdin))))
#<procedure: readit>
arc> (readit)
ARR(1)="PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG"
ARR(2)=" N ARR,FND,I,RSD,RTN,STOP,TXT"
ARR(3)=" W !!,"PASTE""
ARR(4)=" F R !,X:15 Q:'$T S ARR($I(ARR))=X"
ARR(5)=" K RSDS"
("\nARR(1)=\"PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG\"" "\nARR(2)=\" N ARR,FND,I,RSD,RTN,STOP,TXT\"" "\nARR(3)=\" W !!,\"PASTE\"\"" "\nARR(4)=\" F R !,X:15 Q:'$T S ARR($I(ARR))=X\"" "\nARR(5)=\" K RSDS\"" "\n")
arc> (= alist (readit))
ARR(1)="PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG"
ARR(2)=" N ARR,FND,I,RSD,RTN,STOP,TXT"
ARR(3)=" W !!,"PASTE""
ARR(4)=" F R !,X:15 Q:'$T S ARR($I(ARR))=X"
ARR(5)=" K RSDS"
("\nARR(1)=\"PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG\"" "\nARR(2)=\" N ARR,FND,I,RSD,RTN,STOP,TXT\"" "\nARR(3)=\" W !!,\"PASTE\"\"" "\nARR(4)=\" F R !,X:15 Q:'$T S ARR($I(ARR))=X\"" "\nARR(5)=\" K RSDS\"" "\n")
arc> alist
("\nARR(1)=\"PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG\"" "\nARR(2)=\" N ARR,FND,I,RSD,RTN,STOP,TXT\"" "\nARR(3)=\" W !!,\"PASTE\"\"" "\nARR(4)=\" F R !,X:15 Q:'$T S ARR($I(ARR))=X\"" "\nARR(5)=\" K RSDS\"" "\n")
arc> (len alist)
6
arc> (def printlst (thelist) (if (is thelist nil) (prn "") (do (prn (car thelist)) (printlst (cdr thelist)))))
#<procedure: printlst>
arc> (printlst alist)
ARR(1)="PARSE ; PARSE OUTPUT OF ^%RFIND INTO RSD/RTN/TAG"
ARR(2)=" N ARR,FND,I,RSD,RTN,STOP,TXT"
ARR(3)=" W !!,"PASTE""
ARR(4)=" F R !,X:15 Q:'$T S ARR($I(ARR))=X"
ARR(5)=" K RSDS"
""
arc>
I think the issue with the \n is sending data between Windows and Linux.
I did have to key in Ctrl-D twice to actually get the function to finish reading. Is there a better way to do this?
This probably seems weird, but I capture the data on a Windows system, then e-mail the data to a Linux system which is where arc resides. I assume Windows and Linux have different line endings.
Perhaps I need to check out the community version of arc?
Has anyone figured out a way to compile an arc routine? I saw an earlier thread on it, but no resolution.
This was surprisingly easy to follow considering I was rusty on ccc; I didn't need to consult the definition of ccc because your descriptions made it easy to deduce. Maybe this also makes it a nice tutorial for ccc?!
Another reaction: I wish there was a way to encode such 'derivations' of code in code, so it could lie in arc.arc without seeming inscrutable. It's only inscrutable without your accompanying prose.
I think for it to be a good tutorial, it ought to do something that's useful on its own. Maybe an implementation of generators or something like that. `capture-dyn` is a useful tool for working with continuations (it seems like), but I think as a tutorial it may be a bit esoteric since you'd already need to know about the interaction of continuations with dynamic scope to see when and why you'd want to use `capture-dyn`.
This raises a question I've been mulling for a year now: is a global variable just an implicit parameter passed into all functions? What does 'functional' really mean? I've been exploring new interfaces at the lowest levels of the OS that make all operations referentially transparent without any constraints on mutability. For example, the 'print' syscall takes a screen object as an ingredient. (The real hardware screen is represented as nil/0.) Is that as 'functional' as Haskell?
Store-passing style! I've been using it in my programs too. It even came up again on David Barbour's blog yesterday.[1] It's kinda funny how everything keeps coming back to the same continuation-passing styles and store-passing styles.
I think I've followed all these to a nice, general conclusion in Staccato. :)
Staccato's going to have at least two completely different (and not necessarily compatible) families of side effects: At compile time, macros will install definitions as a form of side effect. At run time, microservices will have continuous reactive side effects for communicating with each other. Nevertheless, I'm taking one consistent approach to side effects that should work in both cases. (It might be pretty inefficient when used for continuous reactive effects, but I'm optimistic.)
Staccato has no static type system (yet?), but even if it did, I would expect to have to think about run time errors anyway: Usually, a program can be written that bides its time until the Earth gets consumed by the sun, and then there's no way it'll successfully proceed to return a value. So, I accept dynamic errors, but I'll be mindful of where and when any given error could gum up the works, e.g. whether it happens on the browser side or the server side, and whether it happens before or after some other side effects occur.
So the kind of purity I'm going for involves quasi-determinism in the sense I've seen Lindsey Kuper use it when talking about LVars[2]: As long as a program isn't swallowed by the sun or otherwise interrupted, it will always return the same value. I'll still need to be mindful of where and when errors may occur in the language (e.g. server-side or client-side), so that I know which of the program's side effects should be aborted or reverted.
If a "side effect" only communicates with the language implementation itself (e.g. for debugging or profiling), that's fine. We already trust the language implementation to implement the language semantics in a single deterministic way, so we can trust it to respond to these communications in a single deterministic way as well!
If a "side effect" is tame enough that it can be removed by dead code elimination if the result value is never used, that's fine too. Arguably it has no side effects at all; the effects are all represented in its result. This means there can be some minimal support for operations that read some value from an opaque external reference (e.g. a file handle, a socket). In order to preserve quasi-determinism, the output may only vary if the input does, so these operations will tend to take an explicit parameter designating the time/world at which to do the reading. If a program makes pervasive use of these operations, it will take the shape of a sort of store-passing style, though it doesn't ever need to return a new version of the store. (For context: Whereas Haskell's State monad is used for store-passing style, its Reader monad is simplified for this special case.)
I'll do all other side effects using a commutative monad. By commutativity, any two commands in this monad can be reordered, which should guide me toward easy refactoring, extensibility, and concurrency. If I need to write any code that depends on the result of an effect, this can't usually be done in a commutative way, but a commutative effect could still set up an asynchronous callback, which can run a separate set of commutative effects in a future tick. If a computation spans more than a few of these ticks, it'll start to look like continuation-passing style. Staccato's syntax is actually pretty nice for continuation-passing style code, so this isn't a problem. CPS necessarily sequentializes the code, but when I need concurrency, I can synchronize between concurrent code the way JavaScript programmers often do these days, using promises.[3]
[3] To preserve quasi-determinism and to ensure that no two promise allocations give the same promise as their result, the allocation of a promise will itself be an asynchronous operation. (This is sort of a note to myself, because I haven't written up designs for the promise primitives yet.)
Sounds like it. The idea is that you have some irreducible amount of imperative state out in the "real world" (files in the OS, your hardware screen, etc), and the goal is to keep that as small a part of your language as possible -- so that the rest of your language can be as functional as possible.
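A sketch of that style in Arc, with hypothetical names (pure-prn, and a "screen" modeled as just a list of lines): each operation takes the current screen value and returns the updated one, instead of mutating anything.

```
; A "screen" here is just a list of lines, newest first.
(def pure-prn (screen line)
  (cons line screen))

; Thread the screen through explicitly, store-passing style.
(withs (s0 nil
        s1 (pure-prn s0 "hello")
        s2 (pure-prn s1 "world"))
  rev.s2)  ; => ("hello" "world")
```

The real hardware screen then only enters the picture at the very edge, when some final value is handed off to the OS.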
My understanding is that Racket, like most other high-level languages, has traditionally[1] provided support for concurrency but not parallelism. You still benefit from atomic, though, because otherwise a thread can be stopped at any time, and some other thread restarted in its place. You can have race conditions on a single OS thread.
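For instance, here's the kind of single-OS-thread race this guards against, sketched in Arc (the behavior without atomic is illustrative; the exact interleaving depends on the scheduler):

```
(= counter 0)

; Ten green threads each increment the counter 1000 times.
; ++ is a read-modify-write, so without atomic a thread can be
; preempted between the read and the write, losing an update.
(repeat 10
  (thread (repeat 1000 (atomic (++ counter)))))
```

With the atomic wrapper each increment is indivisible; drop it and the final count may come up short even though everything runs on one OS thread.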
I'm under the impression that when I create a future it doesn't run until I call touch, which would defeat parallelism if I'm right. Is it supposed to do that? I guess not. Any example that works?
(= f ($.future (fn () (for i 0 (< i 100) (++ i) (prn i)))))
($.touch f)
0
1
2
3
4
5
6
...
Going by http://docs.racket-lang.org/guide/parallelism.html#%28part._..., it looks like any operation in a future that might be expensive is a "blocking operation," which can only be resumed by touching the future. Even multiplying a floating-point number by a fixed-point integer is expensive enough to be blocking!
Without testing it myself, I'd guess there are a few things that might be blocking in your example:
* Converting a number to a string.
* Looking up the current value of stdout. This depends on the current parameterization, which is probably carried on the continuation in the form of continuation marks. According to http://docs.racket-lang.org/reference/futures.html, "work in a future is suspended if it depends in some way on the current continuation, such as raising an exception."
* Actually writing to the output stream.
Maybe "visualize-futures" would show you what's going on in particular.
I finally sat down to test it, and it looks like all three of those are blocking operations, just as I thought.
arc> (= g 1)
1
arc> ($.future:fn () (= g 2))
#<future>
arc> g
2
As a baseline, that future seems to work. It simply assigns a variable, which the documentation explicitly says is a supported operation, so there wasn't a lot that could go wrong.
arc> (= f ($.future:fn () (= g $.number->string.3)))
#<future>
arc> g
2
arc> $.touch.f
"3"
arc> g
"3"
That future had to call Racket's number->string, and it blocked until it was touched. The same thing happens with Arc's (= g string.3).
arc> (= f ($.future:fn () (= g ($.current-output-port))))
#<future>
arc> g
"3"
arc> $.touch.f
#<output-port:stdout>
arc> g
#<output-port:stdout>
That future blocked due to calling Racket's current-output-port. The same thing happens with Arc's (= g (stdout)).
arc> (= sout (stdout))
#<output-port:stdout>
arc> (= f ($.future:fn () ($.display #\! sout) (= g 5)))
#<future>
arc> g
#<output-port:stdout>
arc> $.touch.f
!5
arc> g
5
That future blocked on calling Racket's display operation. It finally output the #\! character when it was touched. The same thing happens with Arc's (disp #\! out), and the same thing happens if I pass in a string instead of a single character.
I tried using visualize-futures from Arc, but I ran across some errors. Here's the first one:
It seems to be dividing by zero there. I tried it in Racket, but I got the same error. This error can be fixed by tacking on (sleep 0.1) so that the total duration isn't close to zero:
(visualize-futures:withs
  (g 5
   f ($.future:fn () (= g $.number->string.6)))
  (sleep 0.1)
  wrn.g
  (wrn $.number->string.6)
  wrn.g
  (wrn $.touch.f)
  wrn.g)
However, even that code gives me trouble in Anarki; the window that Racket pops up is unresponsive for some reason. So here's the same test in Racket, where the window actually works:
If I look in the timeline and select the two red dots, this information comes up:
Event: block
Time: +0.0 ms
Future ID: 1
Process ID: 1
Primitive: number->string
Event: block
Time: +109.744140625 ms
Future ID: 1
Process ID: 0
Primitive: number->string
It looks like the first one is the number->string call inside the future, and the second one is the call that occurs outside the future. I guess it's still considered a blocking operation even if it happens in the main process, but fortunately it doesn't stop the whole program. :)
So number->string is a primitive that's considered complicated enough to put the future in a blocked state. To speculate, maybe the Racket project doesn't want to incur the performance cost of having the future's process load the code for every single Racket primitive, or maybe they just haven't implemented this one operation yet.
Going by this, futures can be useful, but they have a pretty limited set of operations. Still, mutation is pretty powerful: If needed, maybe it's possible to set up an execution harness where the future assigns the operation it wants to perform to a variable, and then some monitoring thread takes care of it, assigning the result back to a variable that the future can read.
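An untested sketch of that harness in Arc (all names hypothetical): the future publishes a thunk in a global, a monitor thread runs it, and the future spins until the result appears.

```
(= request* nil result* nil)

; Monitor thread: perform whatever the future asks for.
(thread
  (while t
    (awhen request*
      (= result* (it)
         request* nil))
    (sleep 0.01)))

(= f ($.future:fn ()
       ; Ask the monitor to run a blocking primitive for us...
       (= request* (fn () string.3))
       ; ...and busy-wait until it has done so.
       (while (no result*) nil)
       result*))
```

Whether the busy-wait itself stays future-safe would need testing; this is just the shape of the idea.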
Meanwhile, I wonder why the pop-up doesn't seem to work from Anarki. I seem to remember other Racket GUI operations haven't worked for me either. If the GUI works for anyone else, it might be that I'm on Windows.
Tried using worker processes; the single Arc http server is still a major contention point. As I understand it now, the solution for good http server performance would be to sit a bunch of Arc servers behind a reverse proxy like nginx. That explains why the Arc http server isn't more complicated: it doesn't need to be.
I'd be curious to see your experiment. How did you measure contention?
I'm not aware of any arc servers in the wild using multiple servers. This isn't for performance reasons but just correctness; we don't have a database that can keep concurrent writes from stepping on each other.
Very empirically. Basically, on localhost, I have a client sending a load of basic requests (one thread per request) and a server printing "hello world" in the repl for each of them in a minimalistic defop. The server ends up printing on the repl long after the client has finished sending requests. The absolute number of requests we are talking about here is on the order of 200; the server takes 5 seconds (give or take) to print the last "hello world", on a recent 2.5GHz CPU. I have to run more tests to see if the client may be responsible for a share of that latency. I don't see the OS or the repl as bottlenecks (I've printed to the repl at tremendous rates in the past). I may be completely wrong, as I'm a beginner in networking.
> I'm not aware of any arc servers in the wild using multiple servers. This isn't for performance reasons but just correctness; we don't have a database that can keep concurrent writes from stepping on each other.
I've replaced the diskvars' backing files with SQLite entries. SQLite is advertised on its website as a competitor to fopen, which makes it a perfect choice for diskvars in my opinion (again, I'm ultimately a beginner on that matter).
Ran the same test: 200 requests in about 2 seconds (my test machine is a laptop). The weird thing is that the percentiles are multiples of 16...
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests
Server Software:
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 11 bytes
Concurrency Level: 1
Time taken for tests: 2.456 seconds
Complete requests: 200
Failed requests: 0
Total transferred: 17800 bytes
HTML transferred: 2200 bytes
Requests per second: 81.44 [#/sec] (mean)
Time per request: 12.279 [ms] (mean)
Time per request: 12.279 [ms] (mean, across all concurrent requests)
Transfer rate: 7.08 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.9 0 16
Processing: 0 12 9.1 16 31
Waiting: 0 3 6.3 0 16
Total: 0 12 9.3 16 31
Percentage of the requests served within a certain time (ms)
50% 16
66% 16
75% 16
80% 16
90% 16
95% 31
98% 31
99% 31
100% 31 (longest request)
Alright, it's off by almost 10x, so my test sucks. The client code must be wrong. Thinking about it quickly, it's probably not so easy to send a lot of requests in parallel... I'll use ab in the future. Thanks a lot for checking that.
That sounds amazing! I'd never heard that about sqlite. Patches would be most welcome. If you tell me your github username I'll give you commit rights to anarki.
> That sounds amazing! I'd never heard that about sqlite.
Cool happy that sounds like interesting stuff.
> If you tell me your github username
Please find my github profile on my arc forum profile.
> I'll give you commit rights to anarki
I would be happy to contribute. Well, I have to; it's a duty, given that I've been handed such an amazing language in the first place.
So I have these SQLite-backed objs. I've coded worker processes which you can spawn and kill (they use the db to register, take jobs, and kill themselves). You can give them any job by supplying a list. They return the result (using the db again).
I'm planning to write a cluster.arc, which will manage a bunch of http servers that you can spawn and kill the same way as the worker processes (well, you could do that using the worker processes themselves; I don't remember why I didn't retain that idea, but it will come back to me). Easy to use behind a reverse proxy; that's the goal.
I have a with-lock macro, which takes an id as its argument; it's like atomic but associated with an id (it uses the db again, so it works across processes; there is something similar in Racket using files).
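Here's an untested in-process sketch of that shape (hypothetical code; the version described above goes through the db, so it also works across processes):

```
(= locks* (table))

; Like atomic, but scoped to an id: one Racket semaphore per id.
(mac with-lock (id . body)
  `(do (atomic (or= (locks* ,id) ($.make-semaphore 1)))
       ($.call-with-semaphore (locks* ,id)
         (fn () ,@body))))
```

Two (with-lock 'accounts ...) blocks then exclude each other, while locks with different ids can proceed concurrently.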
I have let1, alet1, when1 and aor. Not sure those macros are all relevant.
I'll be happy to contribute the relevant parts. Give me a few weeks though. I still have to test most of these things, and I'll need time to extract them from the pet project.
Since SQLite is daemon-less, it doesn't work well when multiple processes try to access the same database; one starts receiving "database is locked" errors. I'll see if I can find a daemon-less database engine which handles that. I want to run multiple Arc http servers accessing a single database behind a load balancer, and I want a client app to have parallelism using worker processes, which would use the same kind of database engine. So I'm looking for a daemon-less database engine that multiple processes can access concurrently, without me having to implement anything to support that, and which works under Windows, Linux and OS X. Does anyone know one? I'm looking at Sophia right now [1]
What I could do, however, is have a master process start an http server with a daemon-less database. At that point, one can read and write the database at any time via http requests. Then I can make the worker-processes thing work. Then Arc has parallelism.
I think John Shutt was proposing people do something similar in his Kernel thesis: https://web.cs.wpi.edu/~jshutt/kernel.html. Only he never liked quasiquote for some reason and wanted to construct macro expressions explicitly using list, cons, etc. Manuel Simoni has my favorite expression of the justification: http://arclanguage.org/item?id=17666