I don't see what your argument is against setf being =, except to maintain some compatibility with CL, which PG already said was not his intent with Arc.
I agree with him. The symbol = means "equality", not "definition". This convention is a bad habit languages have inherited from C (or an even older one). The = could be a function to test equality of numbers (maybe for a speed improvement), or even a replacement for "is". The name "is" suggests type checking rather than comparison (in my opinion, though that is not the real problem here).
If brevity is the target, we should use := (or $) instead.
It would have zero effect on program size if there are no undeclared parameters, so in the default case the code remains identical to the previous behavior.
In terms of token count, by itself this feature might lead to a modest 2% token count decrease. It would probably go up to 5%-10% if combined with a partial application feature, such as described in http://arclanguage.org/item?id=645
I plan on creating an improved version of this that includes partial application and gives more examples with more impressive decreases in token count.
Well, the splitn in the top-level post showed a decreased token count (the very last line of the splitn function could be removed), but I'm working on something clearer...
table is a hash table, k is a key, and gethash returns the value in table for k, or 0 if no value for k is found. Think of incf as ++: it will increment the value by 1 and set that as the value for k in table.
And the problem with Arc is that currently the default value for an entry in a hash table is nil, rather than zero. If h is a hash table and you know (h 'foo) is 1, you can safely say
(++ (h 'foo))
But if you don't know whether (h 'foo) has a value yet you have to check explicitly:
How about if <code>h</code> takes an optional second argument, which is the default, and the macros are smart enough that you can do <code>(++ (h 'foo 0))</code>?
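For comparison, here is a rough Python sketch of the same idiom: dict.get with an explicit default plays the role of the suggested <code>(h 'foo 0)</code>, so the increment works whether or not the key exists yet (the names here are illustrative, not from the thread):

```python
# Counting occurrences when a key may be missing.
# counts.get(word, 0) returns the stored value, or the supplied
# default 0 when the key is absent -- no explicit membership check.
counts = {}
for word in ["foo", "bar", "foo"]:
    counts[word] = counts.get(word, 0) + 1

print(counts["foo"])         # 2
print(counts.get("baz", 0))  # 0 -- never stored, default kicks in
```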
Oh, it definitely throws strong typing right out of the window.
The reason I suggested it is that almost all of the time, when you go to increment a nil value, you're working with an uninitialized element (not necessarily in a hash map), and treating that as 0 (since you're doing an increment) would in a certain sense be reasonable behaviour.
But I guess you're right, in the case where nil does represent an error, it'll be two steps backwards when you go to debug the thing.
It happens when you want to count the frequency of items in a list, and I've been doing that all the time (categorizing gene frequencies in an agent-based model).
In Ruby I'd extend the Array class with this code:
class Array
  def categorize
    hash = Hash.new(0)
    self.each {|item| hash[item] += 1}
    return hash
  end
end
although the other day I saw someone achieve the same thing using a hack on inject (the `; hash' part is only there because inject demands it; the work is done earlier).
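The inject version isn't quoted above, but the same fold-based approach can be sketched in Python with functools.reduce — this is my reconstruction of the idea, not the original code. The step function must hand the accumulator back explicitly, which is exactly what the trailing `; hash' does in the Ruby inject version:

```python
from functools import reduce

def categorize(items):
    # The step function must return the accumulator itself, just as
    # the Ruby inject hack needs "; hash" to pass the hash along.
    def step(hash_, item):
        hash_[item] = hash_.get(item, 0) + 1
        return hash_
    return reduce(step, items, {})

print(categorize(["a", "b", "a"]))  # {'a': 2, 'b': 1}
```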
Why would you extend rather than subclass the Array class? It kind of confirms all of my worst fears about Ruby's too-easy class reopening. (What happens when someone else defines an Array method called "categorize" for a totally unrelated purpose?)
I think that the Python syntax for this is
h[x] = h.get(x, 0) + 1
It isn't quite as concise as the Common Lisp but more so than Arc. I'd be curious to see what the Common Lisp looks like if you are doing something more complicated than an in-place increment. E.g. the equivalent of:
I'm a PhD, working alone on projects, and the scripts I write are generally < 300 lines plus 6 functions from a library I wrote. The agent-based models I write are ~200 lines, no libraries.
For me there's not much risk in redefining things.
A hashtable containing integer values is a common implementation for the collection data structure known as a Bag or Counted Set. The value indicates how many instances of the key appear in the collection. Incrementing the value would be equivalent to adding a member instance. Giving a zero default is a shortcut to avoid having to check for membership.
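A minimal Python sketch of such a bag, assuming nothing beyond the built-in dict (Python's collections.Counter provides this behaviour out of the box; the class and method names here are mine):

```python
class Bag:
    """A counted set: maps each member to how many times it was added."""
    def __init__(self):
        self.counts = {}

    def add(self, item):
        # The zero default means adding never requires a membership check.
        self.counts[item] = self.counts.get(item, 0) + 1

    def count(self, item):
        # Absent members count as zero occurrences.
        return self.counts.get(item, 0)

bag = Bag()
bag.add("x"); bag.add("x"); bag.add("y")
print(bag.count("x"))  # 2
print(bag.count("z"))  # 0
```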
prn used alone would have a return value of simply "hello, ". With tostring, the return value is "hello, world\n".
Though it might be worth noting that in the source file, arc.arc, there looks to be the beginnings of a printf-like function:
(let argsym (uniq)

  (def parse-format (str)
    (rev (accum a
      (with (chars nil i -1)
        (w/instring s str
          (whilet c (readc s)
            (case c
              #\# (do (a (coerce (rev chars) 'string))
                      (nil! chars)
                      (a (read s)))
              #\~ (do (a (coerce (rev chars) 'string))
                      (nil! chars)
                      (readc s)
                      (a (list argsym (++ i))))
              (push c chars))))
        (when chars
          (a (coerce (rev chars) 'string)))))))

  (mac prf (str . args)
    `(let ,argsym (list ,@args)
       (pr ,@(parse-format str))))
)
So that you could say, apparently:
(let var "world"
  (prf "hello, ~" var))
Granted, the hello world example isn't very illuminating as to the benefits of using prf over the regular pr / prn. But, hey, the option's there in some base form (after all, the code's rather experimental at this stage).
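For readers who don't follow the Arc above, the core idea can be sketched in Python: scan the format string and substitute the next positional argument for each ~. This is a simplified analogue (it ignores the # escape and the extra character the Arc version consumes after ~; all names here are mine):

```python
def format_tilde(fmt, *args):
    """Substitute successive args for each '~' in fmt."""
    out = []
    i = 0  # index of the next argument to consume
    for ch in fmt:
        if ch == "~":
            out.append(str(args[i]))
            i += 1
        else:
            out.append(ch)
    return "".join(out)

def prf(fmt, *args):
    """Rough analogue of Arc's prf: format, then print."""
    print(format_tilde(fmt, *args))

prf("hello, ~", "world")  # prints: hello, world
```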
I suspect it is a debugging thing. Since pr and prn return their first argument verbatim, you can put them in the middle of working code to find out what the return value of something is, without breaking anything. As a trivial example:
(+ (prn 2) 3) ; prints 2 and returns 5
Maybe you wouldn't need it if you had a fancy debugging suite, but it can be useful if you are debugging manually.
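The same trick is easy to sketch in Python with a hypothetical pass-through helper (not part of any library):

```python
def prn(x):
    """Print x and return it unchanged, so it can wrap any subexpression."""
    print(x)
    return x

# Wrapping an operand prints the intermediate value without
# changing what the expression evaluates to.
result = prn(2) + 3  # prints 2; result is 5
print(result)        # 5
```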
I think the reason is that returning the concatenation would be very expensive. It would basically require a temporary write stream just to put together the return value. In the majority of cases, the return value is probably not even used.
To get the effect you want, simply concatenate as a separate step:
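The separate-step version being suggested might look like this in Python, where an io.StringIO buffer plays roughly the role of Arc's tostring: build the string first, then print it, so the concatenation cost is only paid when you actually want the value (this sketch is my illustration, not code from the thread):

```python
import io

# Accumulate the output into a buffer as a separate step...
buf = io.StringIO()
buf.write("hello, ")
buf.write("world\n")
s = buf.getvalue()

# ...then print it once, and reuse s wherever the return value is needed.
print(s, end="")
```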