I picked that habit up from some of the examples in the forum, and it works for me. Is there a reason why one would really want multiple return values?
The one nice thing about multiple values that I don't think returning lists accomplishes is that if you have a function that doesn't expect to receive multiple values, it will just use the first value returned. For example, in CL, #'truncate returns two values, the quotient and the remainder. But if you pass the return values of #'truncate to #'+, it just pretends you only passed a single value.
* (truncate 5 3)
1
2
* (+ (truncate 5 3) 6)
7
I don't know of any way to make this work implicitly with returning lists... you would need to explicitly test if you were receiving a list and then destructure it accordingly. (Please correct me if you know of a better way around this.)
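In Arc terms, a minimal sketch of the difference (trunc2 here is a made-up stand-in, since Arc has no multiple values): if the function returns a list, callers have to pull it apart by hand.

; hypothetical truncate-like function returning a two-element list
(def trunc2 (n d)
  (list (trunc (/ n d)) (mod n d)))

; the result can't be fed straight to +; the caller must take the car explicitly
(+ (car (trunc2 5 3)) 6)  ; => 7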
Anyway, I suppose the programmer knows what the output of a given function is, and shouldn't be surprised if it churns out a list rather than two values ;).
What if your function originally just returned one value, but at some later point you realize that a second value would be useful in some situations?
With multiple return values you can just extend it without breaking existing clients. If, on the other hand, you add a list wrapper around the returned values, all call sites must be changed to take the car of the list.
That would be useful indeed. The flip side of the coin might be something that was sort of mentioned in On Lisp: if all functions return only one value (be it a list or a single value), you can write a general memoize layer around functions that doesn't have to check how many values are returned.
I also noticed a carif function in arc. If you are worried about single values that will become lists in the future, you might start using carif in your current clients.
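For reference, carif is (if I remember the Anarki definition correctly) essentially just:

(def carif (x)
  (if (acons x) (car x) x))  ; a list gives its car, anything else comes back as-is

So (carif 3) is 3 and (carif '(3 4)) is 3, which lets old call sites keep working if a single return value later grows into a list.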
Returning multiple values puts multiple non-GC'd values on the stack. This is fast, but capturing and using those values is somewhat inconvenient for the programmer. Returning a cons creates one other cons in the heap that will need to be GC'd for each additional value returned.
In theory we could do this with arc2c. However, I'm not 100% sure this is necessary with a "really good" optimizing compiler.
We could defer destructuring of arguments in arc2c to as late as possible, so that we could optimize away the 'cons cells when they are used only for returning multiple values. That is arguably difficult, though, since each stage in arc2c expects arguments to already have their destructuring expanded, i.e. (let (foo bar) niaw ...) => (let tmp niaw (with (foo (car tmp) bar (cadr tmp)) ...)). If arguments are kept in non-destructured form, we would need to modify the way function parameters are stored (to handle (o foo (expression))) and check each stage in the compiler.
Edit: When optimizing raymyers' treeparse, I actually transformed parsers to CPS form in order to return multiple values without all the construction and deconstruction of 'cons cells, which helped reduce some of the overhead.
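A minimal sketch of that idea, with made-up parser names rather than the actual treeparse code: instead of consing up a (parsed remaining) list on every call, each parser hands both values straight to a continuation.

; list-returning style: allocates a cons per call just to carry two values
(def parse-a (input)
  (if (and input (is (car input) #\a))
      (list (car input) (cdr input))
      nil))

; CPS style: pass the two "return values" directly to the pass continuation
(def parse-a/k (input pass fail)
  (if (and input (is (car input) #\a))
      (pass (car input) (cdr input))
      (fail)))

(parse-a/k (coerce "abc" 'cons) (fn (c rest) c) (fn () nil))  ; => #\a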
That points me in the right direction at least. I'd want everything in the list to just be seen as a string, so "#\p" would indeed be smaller than "(bite #\p)" if the character "#" has a lower ASCII code than "(".
coerce looked promising, but I'm not sure how to force a list like (bite #\p) into the string "(bite #\p)".
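One way that should work (a sketch; write prints the readable form of whatever it's given): capture its output with tostring.

(tostring (write '(bite #\p)))  ; => the string "(bite #\p)"

Sorting a list of such strings then compares them character by character, which gives the ordering described above.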
Could be. Right now I'm still using the regular arc2, and am trying to figure out how to deal with the mud's input: whether I have to make separate threads for reading and writing, or whether I can make use of peekc in some way to figure out when the mud has stopped sending characters to the input port. It is fun though :)
Hmm, still a bit puzzled. I seem to be able to read just fine (in various ways: readc, readline), but writing to (cadr inout) doesn't seem to trigger new input from the mud to read.
All of you, thanks for the help so far. I'll tinker on with it later, time for my primary arc project ;).
For the output problem, try flushing the out stream with flush-socket on Anarki, or the corresponding flush-output from mzscheme. Maybe the output is written to a buffer, and there isn't yet enough data queued to force the buffer to be flushed on its own.
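Something along these lines, as a sketch (assuming inout is the (in out) pair you read and write through):

; write a command plus newline to the mud, then force the buffer out
(def mudsend (inout cmd)
  (disp cmd (cadr inout))
  (writec #\newline (cadr inout))
  (flush-socket (cadr inout)))  ; or mzscheme's flush-output, if flush-socket isn't available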
peekc hangs when I'm reading the output from telnet, but works fine (returning nil) when I'm reading out the output after the last char from a stored pipe.
In my experience peekc doesn't seem to work properly (i.e. as advertised) at all.
I assume you're doing peekc so that you can do something else while waiting for a character? If that is what you're trying to do, I suggest you use threads:
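Something like this rough sketch (assuming in is the port the mud's output arrives on): a background thread keeps pulling characters and printing them, leaving the main thread free for writing.

; background reader: echo whatever the mud sends, as it arrives
(thread
  (whilet c (readc in)
    (pr c)))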
(First time I've worked with threads.) readmud only reads one character at a time, and I want it to just keep on writing as long as there is data. I'll try to add a thread that calls readmud again the moment it returns something sensible.
Partial success. The reading bit seems to function fine (if slowly ;), and I can now use arc while the mud's output is being read in the background, but I still cannot write. I did add mzscheme's flush-output (though perhaps in some wrong way). Will look at it during the next work break ;).
Maybe the problem with peekc is that it tries to read a character from the socket, but the server has yet to write that character, and peekc doesn't return nil because the server hasn't yet closed its output stream, so there is no end-of-file.
Why? because there isn't that much in lisp/scheme/arc that comes naturally to me :).
Not sure about how to go about pushing something on something else. Never been in an environment where we worked with cvs or git or anything. Solitary scientists :)
It would be appreciated! :P Functional programming helps because usually it's the last thing you wrote that is wrong, but occasionally I'm completely puzzled by error messages (only to find out that one of my comments started with a : rather than a ;).
Hmm. Anyway, it looks like it might be useful to subtype function closures into continuation and non-continuation functions (as an aside, this would probably also be useful for optimization: when a continuation function exits, it can't be called again and its closure can be immediately freed or reused, unless we use 'ccc, and even then we could just copy the continuation into a non-continuation closure).
Then when a backtrace is requested we simply scan through the stack for continuation-type functions, and scan through their closures for continuation-types, and so on until we end up on a closure without any continuation-type functions.
While scanning the stack you have to be careful not to include functional arguments as if they were called functions. To give descriptive names to functions I would transform every lambda expression into a named function, e.g.:
(def f (x) x) --> (set f (fn (x) x)) --> (set f (__named '(f x) (x) x))
and for anonymous functions:
(fn (a b) (+ a b)) --> (__named '(fn (a b)) (a b) (+ a b))
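Here __named is just the hypothetical helper from this proposal, not an existing Arc form; an interpreter could expand it to a plain fn and ignore the name, while a compiler like arc2c would record the name in the closure so it shows up in backtraces. Roughly:

(mac __named (name args . body)
  `(fn ,args ,@body))  ; a plain interpreter ignores name; arc2c would stash it in the closure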
> While scanning the stack you have to pay attention to not include functional arguments as if they were called functions.
Which is why I was proposing to subtype closures into continuations and non-continuations. Normal functions that are passed around are non-continuations, while continuation closures are created during CPS-conversion. Of course we'd probably need to add code in 'ccc to copy a continuation closure into a non-continuation version of the closure.
Nice. I remembered there already was something, but couldn't find it. I was trying to find it using site:arclanguage.org and ruby python c, but it didn't show up obviously enough in Google.
Both of you, thanks! More arc learned: (1) "withs" is "with, sequentially", so that "range" makes sense in the function "awaydir". (2) cons isn't just appending a number to a list.
I like the table!key syntax; I'm not sure what to think about world.0 as a syntax, though, especially because world.0.0 doesn't give what I'd expect (the first element of the first element of world), but just returns the same as world.0.
(def makeworld (x)
  (= world (n-of x (n-of x nil))) ; world of nil
  nil)
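For what it's worth, n-of re-evaluates its expression each time, so every row is a fresh list rather than one shared list repeated x times:

(makeworld 3)
world  ; => ((nil nil nil) (nil nil nil) (nil nil nil)), three distinct rows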
Actually, almkglor's code would (I'm pretty sure) have worked with just a normal with; function bodies aren't evaluated at definition time, so (fn (x) (afergergghersgergferg x)) would not give "Error: afergergghersgergferg is not a function" until it's called. If a function called afergergghersgergferg had been defined in the meantime, it should have worked even with a normal with statement.
Anyway, you're not alone in being surprised that x.y.z expands to (x y z) instead of (x (y z)).
Not quite. It works in the toplevel because doing (def name args ...) is effectively (= name (fn args ...)); this establishes a global, not a lexical, binding. Using let/with/withs adds a lexical binding, and since functions (or "closures") close over their scopes, they can't get at functions defined at the same time. Thus, you need to use withs. Observe:
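A minimal sketch (the names here are made up):

; with binds in parallel, so double is not visible inside quadruple
(with (double    (fn (x) (* x 2))
       quadruple (fn (x) (double (double x))))
  (quadruple 5))
; => error when quadruple is called: double is unbound (unless a global double happens to exist)

; withs binds sequentially, so double is already in scope when quadruple is built
(withs (double    (fn (x) (* x 2))
        quadruple (fn (x) (double (double x))))
  (quadruple 5))
; => 20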
That implies that at the top level, the order in which I define my functions (which use each other) doesn't matter..
Nice! Then I can group them in a way that is more sensible for humans.
Anyway, the program is now far enough that I've a critter running around the field and filling its memory with associations between the things it encounters. Next stop is to let it learn what is edible and to let it steer itself based on its observations :).
So... (let newdir '(nil nil) ...) should become (let newdir (list nil nil) ...), creature!pos is equivalent to (creature 'pos), and you can make a temporary anonymous function within a function but still give it a name (delimit, through the with command) that does the comparing. Three new things learned :).
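That last pattern looks roughly like this (clamp-pos and delimit are my names, not the actual code):

; keep a coordinate inside the world bounds, via a locally named helper
(def clamp-pos (pos size)
  (with (delimit (fn (n) (max 0 (min n (- size 1)))))
    (map delimit pos)))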
I'll see if I can improve it some more though; I might be able to do something with testing whether position + the car/cadr of the direction would put a creature out of bounds in the world array, and then multiply the direction by -1. Not at home now, so it will have to wait.
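Something like this might do for that bounds check (a sketch; it assumes pos and dir are two-element lists and the world is square with side size):

; flip a direction component when the next step would leave the world
(def bounce (pos dir size)
  (map (fn (p d)
         (let q (+ p d)
           (if (or (< q 0) (>= q size)) (- d) d)))
       pos dir))

(bounce '(0 0) '(-1 1) 5)  ; => (1 1)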
Still here, but dividing my time between finishing my PhD and learning to use arc to code a pet project of mine.
Can't imagine pg just abandoning arc, but a bit more excitement from the regulars would be a good thing. Perhaps we can do something like start writing the great computer language shootout programs in arc (although not for speed; we'd be able to go for brevity and clarity)?
As for the shootout, that would be a good idea, but the current implementations (arc2, arc2c, or others) aren't mature enough to compete.
I mean, arc2 takes 3 seconds to start up (and it can't even take command-line arguments), and arc2c isn't meta-circular yet (and lacks a lot of things, including command-line args, though that is easy to add), so a real benchmark would need both arcs (one to compile, the other to run the compiler). I don't know the precise status of other implementations, but I guess they aren't mature enough either.
And that's a chicken-and-egg problem, but I don't think the shootout admins would agree to add an Arc implementation until the language is a little more popular (otherwise they'd have thousands of languages referenced).