Arc Forum
3 points by rocketnia 4839 days ago | link | parent

It's a good topic, but I have several sorta fundamental objections to the article's assumptions.

---

It's an interesting fact that practitioners of either language can point to lack of features in the other. That has some pretty obvious corollaries as well.

1. There may be such a thing as the most powerful language right now, but it may involve trade-offs (I don't know what it is, but one exists. I'll call it "Alpha" so as not to offend anyone).

2. There is such a thing as the language that will be the best for the next 10 to 100 years (This one may or may not exist in some form today; it might be unified from several current languages as Seibel alludes. I'll use his name and call it "Omega").

I'm not sure where you're getting those particular corollaries. Seibel's leaving open the possibility that there are multiple local optima that can't be unified into Omega.

---

3. There is such a thing as the most powerful language that could exist on current machine architectures (This one almost certainly doesn't exist yet, and may never be embodied in an implementation. It's just the limit, in the calculus sense, of what we can hope to achieve with a language along the axes of expressiveness, terseness, maintainability, readability and flexibility. This one I'll call 0).

I have the same objection to this corollary, but I'd also like to point out that, when judged based on their applicability to current machine architectures, languages all tend to approximate the optimal power of machine code. Any language that lets you be picky about your machine code is just as powerful in this sense. It's easy to imagine they're all inconvenient and insufficiently abstract to have JITting and garbage collection, but that's not necessarily true either. (How about D?)

---

Haskell/Common Lisp unification theory

You go on to talk about what features would need to be tacked on to either language to get to the middle. You're not considering the possibility that the "missing" features hurt more than they help. Particularly:

---

How do you implement full currying and optional/default/keyword/rest arguments? [...] sometimes you want flexible arguments (and counter-arguing by saying "well, if you need them, you've factored your application wrong" is missing the point; expressiveness is a measure of power, remember, and having to think about the world in a particular way is a strike against you in that sense) [...]

I may not want to have to think about the world as containing keyword arguments, either. They make the function a more complicated concept, so that it takes more edge-case consideration (and code) to get everything working intuitively.

This isn't to say that Common Lisp and Haskell don't still have features left to adopt from each other. I just don't expect them to completely get along, just like I don't expect apple pies to develop marinara sauce.

---

Skipping back a bit:

How do you blur read/compile/run time when one of your goals is to have a complete type system?

I'm sure I'm not nearly the first one to try, but I'll give my take on it, just in case. ^_^

IMO, the fact that something is a macro is part of its static type (something I originally said at http://arclanguage.org/item?id=10739). Writing a macro is a matter of creating a new kind of syntax transformation and a new type inference case at the same time.

If a macro can only take a single, well-formed local expression as an argument (for now), just like a function, and if its statically known information includes the fact that it accepts expressions of type a and outputs expressions of type b, then it's simple to just treat it as a value of type a -> b for inference purposes.

If we want to make the macro definitions themselves type-safe (ensuring that we're being honest about the input and output expression types), that may require a significant effort, but it isn't necessary. After all, a runtime error in a macro definition will be caught at compile time, just like a static type inconsistency. (Well, I suppose that doesn't prevent errors in libraries that provide macros.)

At that point, it would be useful to extend macros so they can take multiple expressions, like curried functions do. For that to happen, the masquerading static type of a macro, like a -> b -> c -> d, needs to be a front for the actual behavior of taking N expressions and outputting one. Penknife accomplishes this sort of thing by having a parse fork (which holds the syntax behavior) return another parse fork. Similarly, in Haskell a macro implementation could return another macro, with each macro containing information about which expression to use in place of it in case it's used as an argument rather than an operator.

In summary, a macro would be an abstraction consisting of two statically known values: The expression to use in place of it when it's used as an argument (which doubles as the specification of its masqueraded type) and the expression translation behavior.

Once all this is done, macros still aren't a great way to extend Haskell's syntax. They only use function-style syntax, and their expressions can't even have unbound variables like (let ...) can, since those variables would have undefined types when trying to disambiguate Haskell's infix syntax. Most of all, you can't create nicer alternatives to "if ... then ... else ...", "\ ... -> ...", "let ... = ... in ...", and so forth. This could partly be helped using a sort of reader macro ("$my-let ..."), but dropping infix syntax in favor of prefix syntax is certainly one way to do it, and as you've pointed out, that's not an especially unpopular option.

(I'm noticing there's a bit of a brevity boost with infix syntax, though. It means you typically only have to add "1 + " to an expression, rather than "(+ 1 " on one side and ")" on the other, for a total of one edit versus two. I guess that makes it a way to boost brevity in a less customizable language. ...And now that I've said that, the "too many parentheses!" barrier to lisp adoption makes a whole lot of sense again. XD )

---

I haven't a clue how to place my own bet. I tried starting this paragraph both with "My bet's on Lisp..." and "My bet's on Haskell...", but each beginning got to a logical dead end within two sentences.

I'm betting on Penknife. :-p Actually, I'm not that arrogant, but you know my bets already. ^_^

(For those just joining us, I'm betting on any and every build language with sufficiently customizable syntax. I expect they'll all dogpile on top of each other (which is to say, they'll generalize each other) so that it's not clear which is best. http://arclanguage.org/item?id=12993)



4 points by waterhouse 4838 days ago | link

(I'm noticing there's a bit of a brevity boost with infix syntax, though. It means you typically only have to add "1 + " to an expression, rather than "(+ 1 " on one side and ")" on the other, for a total of one edit versus two. I guess that makes it a way to boost brevity in a less customizable language.

With ssyntax, you usually only have to add "inc:" or "inc." to the left side of an expression (depending on whether it's already a function/macro call or just a symbol). This is why I love the hell out of ssyntax. It only doesn't work if the expression is already of the form "a.b", in which case you have to add some parentheses. Currently, "inc:inc.2" expands to "(compose inc inc.2)".

Recommendation: Have "a:b.c" expand to (a:b c); this will make it always, or nearly always (I haven't thought much about using it with quote or quasiquote), possible to tack an extra function/macro call onto an expression with a single edit. I think it's also more consistent: currently we have a.b.c -> (a.b c) [-> ((a b) c)], which is useful for referencing nested data structures; and it fits that pattern to have a:b.c -> (a:b c) [-> ((compose a b) c) -> (a (b c))]. I really think this would be a big improvement.

Anyway, this ssyntax stuff has the major advantage that it works to tack on any single-argument function/macro, rather than just the very few that are important enough to have their own special character. I find this extremely convenient: I can add "time:" and "prn:" to an expression when I want to debug it, and then remove it just as easily. Combined with my 'cp function (http://arclanguage.org/item?id=12873), this has in my past made for some very pleasant debugging.
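
For instance, a made-up session ('firstn is just a stand-in for whatever expression I happen to be debugging):

  arc> (firstn 2 '(a b c d))
  (a b)
  arc> (prn:firstn 2 '(a b c d))
  (a b)
  (a b)
One edit tacks the trace on, and one edit takes it off again.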

Aesthetically, tacking "func:" onto "expr" is like adding an element to the tree (in fact, it literally does "expr -> (list func expr) = (cons func (cons expr nil))"), which is an O(1) operation, and it makes sense for it to always be a simple O(1) editing operation. (Adding something to the left side, then remembering to look for the right side so you can add a close-paren, is not O(1) to me.) In fact, it's a common O(1) operation, and it should be O(1); and the same should go for its opposite, removing an element previously tacked on.

Imma go figure out how to change ac.scm. ...And I've kludged it by making . and ! expand before : and ~ in the definition of "expand-ssyntax" in ac.scm.

  ((or (insym? #\. sym) (insym? #\! sym)) expand-sexpr) ;switched order
  ((or (insym? #\: sym) (insym? #\~ sym)) expand-compose)
  --ac.scm
This should work fine for the near future, as I don't have any reason to mix "." and ":" in expressions other than with "." as the last ssyntax character. I think it should ideally expand right-to-left no matter what the ssyntax characters are; note that, under this scheme, "a:b:c" would expand to (compose a:b c) -> (compose (compose a b) c), which is equivalent to the current expansion (compose a b c), and the Arc Compiler could transform the former into the latter.

Arc seems to load properly, and the ssyntax works the way I like it.

  arc> factorial.3
  6
  arc> inc:factorial.3
  7
  arc> factorial:inc:factorial.3
  5040

-----

2 points by rocketnia 4838 days ago | link

In Arc I sometimes like to say (a.b:c.d e f), which this change breaks. But I don't mind that. ^_^

Anyway, what you're talking about is already how Penknife works: Infix operators on the right are always handled first. An interesting result is that a!b ssyntax is a bit redundant under this setup. Here's an Arc demonstration:

  ; with ! ssyntax
  a.b!c.d        ->  (a.b!c d)
                 ->  ((a.b (quote c)) d)
  
  ; without ! ssyntax
  a.b:quote.c.d  ->  (a.b:quote.c d)
                 ->  ((a.b:quote c) d)
                 ->  (((compose a.b quote) c) d)
                 ->  ((a.b (quote c)) d)
So now the following ssyntaxes are all abbreviations for things that are a little more verbose but just as edit-efficient:

  a!b  -> a:quote.b
  .a   -> get.a
  !a   -> get:quote.a
  ~a   -> no:a
The ssyntaxes left over are a:b, a.b, and a&b.

In Penknife, I've been thinking about having an infix operator ` such that a`b.c is (b a c). Arc could do this if a`b expanded to something like (opcurry b a), where 'opcurry was a metafn such that ((opcurry a b c) d e f) expanded to (a b c d e f). Then the only essential ssyntaxes would be a`b and a.b:

  a&b  -> a`andf.b
  a:b  -> a`compose.b
  a!b  -> a`compose.quote.b
  .a   -> get.a
  !a   -> get`compose.quote.a
  ~a   -> no`compose.a
Of course, at a certain point the verbosity is a bit silly. I've defined ` as a curry function in Penknife so that I can test out a`foo.b with everyday functions foo, like 1`+.2 and so forth, and I've found it pretty cumbersome to actually type. Still, it's less cumbersome than 1.+(2), I suppose. :-p
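
Just to sketch that curry idea in Arc (with 'opcurry as a purely illustrative name; a real 'opcurry metafn would expand at compile time and so would also work for macros, whereas this runtime version only works for functions):

  (def opcurry args
    (fn rest
      (apply car.args (join cdr.args rest))))
  ; ((opcurry + 1 2) 3 4)  =>  (+ 1 2 3 4)  =>  10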

Maybe it'll be useful in an axiomatic way. Perhaps a Penknife-like or Arc-like language can have ` and . as its only basic infix operators, with all other infix operators being abbreviations defined in terms of those two....

Hmm, an alternate axiomatic approach is to treat a`b.c as a single ternary operator a{b}c.

  a&b  -> a{andf}b
  a:b  -> a{compose}b
  a.b  -> a{call-op}b  ; where (call-op a b) expands to (a b)
  a!b  -> a{compose}quote{call-op}b
  .a   -> get{call-op}a
  !a   -> get{compose}quote{call-op}a
  ~a   -> no{compose}a
This is essentially equivalent to Penknife's approach, except that Penknife uses Haskell-style naming rules (infix identifier or alpha identifier) rather than delimiters. So it looks like the a`b.c approach is just a way to simulate this syntax system within itself. Pretty interesting. XD

-----

3 points by akkartik 4838 days ago | link

Have "a:b.c" expand to (a:b c)

Wart unconsciously got this right (compose is ^):

  wart> (wt-transform 'a^b.c)
  (call* (compose* a b) c)

-----

1 point by Inaimathi 4837 days ago | link

About the corollaries. Sorry, I meant to acknowledge the possibility of multiple, non-unifiable languages at the "top tier", but not focus on it. On second reading, I seem to have left the notion out altogether, which was unintentional; Seibel explicitly makes your point about local maxima, and I didn't intend for it to seem like unification was the only possibility. I just happened to be interested in thinking about it because it's not the first time I've heard the idea that "Lisp + Haskell would rule the world".

>I may not want to have to think about the world as containing keyword arguments, either. They make the function a more complicated concept, so that it takes more edge-case consideration (and code) to get everything working intuitively.

This is a quibble, and the one point I potentially disagree with. If you're speaking as the language implementer, fair enough, I could see how it would be harder to provide optional/keyword/default/rest/body args, but how does having this option hurt you as a language user? I'm wondering because my experience with this particular feature has been that it's very useful in some cases, but doesn't get in the way at all otherwise (which is why I'd consider it a net win if it could somehow be made to co-exist with auto-currying).

-----

3 points by rocketnia 4836 days ago | link

how does having this option hurt you as a language user?

I agree it's a minor quibble, but I am talking about an inconvenience from the standpoint of a language user. ^_^

A better example is call/cc. If the language I'm using has reentrant continuations and I want to write a loop utility that calls a callback several times, I can't just maintain the state of the loop as a local variable, 'cause if a continuation call restores a previous iteration of the loop, the variable holding the loop's state will actually stay as it was before the jump. The resulting iteration order will be a bit weird for my taste. For instance, it could appear to be 12345-jump-367 when I want 12345-jump-34567.
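
Here's a simplified Arc sketch of the problem (the 'my-loop utility and the rest of the setup are made up purely for illustration):

  ; a loop utility that keeps its state in a mutable local
  (def my-loop (n f)
    (let i 0
      (while (< i n)
        (++ i)
        (f i))))

  (let k nil
    (my-loop 5
      (fn (i)
        (when (is i 3)
          (ccc (fn (c) (= k c))))  ; capture a continuation mid-loop
        (pr i)))
    (prn)
    (let jump k
      (= k nil)                    ; only jump back once
      (when jump (jump nil))))
This prints 12345 and then just 3: re-entering iteration 3 restores the control flow of that callback call, but not the mutated loop variable, which is still 5, so the loop exits right away instead of replaying 345.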

Arc has reentrant continuations, and we don't have to use them, but it's still somewhat necessary to worry about that issue when making libraries that might be used by people who do use continuations.

(It's only somewhat necessary because even standard Arc loop constructs like 'each already make that mistake, so the problem's bigger than one library.)

As for keyword arguments in particular, I actually brought up a case pretty recently (http://arclanguage.org/item?id=13085) where the definition of a library utility was more cumbersome in an Arc with keyword arguments than in one without them. It's a utility for making a function that, when called, calculates another function and calls it with the same arguments:

  ; common definitions
  (def call (func . args)
    (apply func args))
  (mac late body
    `(fn-late:fn () ,@body))
  
  ; fn-late in a language without keyword arguments
  (def fn-late (body)
    (fn args
      (apply call.body args)))
  
  ; fn-late in a language with keyword arguments
  (def fn-late (body)
    (fn-that-handles-keys-explicitly keyword-args rest-args
      (apply-with-keys call.body keyword-args rest-args)))
  
  ; same fn-late as above, but less ridiculously verbose
  (def fn-late (body)
    (kfn keys rest
      (kapply call.body keys rest)))
Again, people who don't use the extra feature can get by with the simpler version, but a general-use library should still take the extra effort.

IMO, complexity in a few examples like these is just fine if the feature is good enough to make lots of other cases more pleasant. Where it really gets tricky is when multiple language features put together breed tough design corner cases:

  threads + mutation -> locks
  call/cc + mutation -> dynamic-wind
  call/cc + dynamic-wind -> What continuation exists *during* a wind?
  call/cc + locks -> Does a captured continuation keep its locks?
  auto-curry + keyword args -> Which intermediate fn gets each key?
There's probably some diabolical combination of individually nice language features such that a language would be nicer with one left out.

-----

4 points by akkartik 4829 days ago | link

Who cares? :)

A clean model for locks is an asset to any language. Using them well is up to the programmer.

The need to treat libraries as black boxes is holding back programming language design. It forces the set of inputs a library can accept to grow monotonically, causing a sort of tragedy of the commons.

I've had your comment on my browser for a week now. As other tabs have come and gone, one other story has stuck around as well, reread and mulled over: http://www.miller-mccune.com/culture-society/a-psychological... Now I find myself thinking the two have a slight overlap: paranoia :)

-----

1 point by rocketnia 4829 days ago | link

A clean model for locks is an asset to any language.

Do you mean any platform? 'Cause Penknife's an example of a language where

1) I don't expect to find an uncontroversial model for locks,

2) I don't want people to have to write thread-safe code just so that it's gemlike ('cause I know I won't get anything written myself that way XD ),

3) I intend for Penknife programs to be able to call out to other running platforms, including the underlying platform, when they need to, and

4) if Penknife is successful, I fully expect to see a Penknife fork that adopts threads as part of its own runtime. I don't expect it to be as timeless.

The need to treat libraries as black boxes is holding back programming language design. It forces the set of inputs a library can accept to grow monotonically, causing a sort of tragedy of the commons.

It took me a while to figure out what you meant here. Are you talking about closed frameworks that are gradually opened up in successive versions as the real-world needs are observed and the design solidifies?

I agree about those being frustratingly limiting, but the kind of omissions I'm talking about aren't the same thing as being closed. A language designed without feature X doesn't necessarily hide a secret world of X just waiting to be unleashed. ^_^

-----

4 points by akkartik 4828 days ago | link

Attempt 2:

Libs arose in static languages. They save effort recompiling common code. But you only save compilation time if libs are more stable than non-libs. If libs are islands in your sea of code.

In the desktop era libs were units of commerce. To sell a library you promised 'code reuse'.

But code reuse is still a mirage. Upgrading a lib remains akin to transplanting an organ into a new body. Always the fear of rejection.

As libs mature upgrades get smoother, but at the cost of degrees of freedom. Constraints are only added, never removed.

Let's try backing off from the mirage. We have dynamic languages now, no compilation. We aren't trying to sell code. Let's not get hung up on backwards compatibility, on breaking the code of others. It's often easy to fix. Just make sure code screams when it breaks. Unit tests, etc.

Let's try keeping code from others in the same directory as our own. Rather than segregate it into a separate ghetto, let us hack on it like it's our own.

Any codebase will form islands over time. That doesn't make solidity a virtue. Let us not prematurely generalize interfaces.

-----

1 point by rocketnia 4828 days ago | link

That's a bit easier to understand. ^_^

---

But code reuse is still a mirage. Upgrading a lib remains akin to transplanting an organ into a new body. Always the fear of rejection.

For the same reason as that, I don't especially believe in upgrading libraries. If that's code reuse, I agree it's a mirage, but my idea of code reuse is just to use the same code as someone else has used.

---

Constraints are only added, never removed.

Are you sure? A library that prides itself on perfect backward compatibility will only remove constraints and add features, never the reverse. Or are you talking about the fact that sometimes new features are limited in seemingly arbitrary ways because that's necessary for them to be able to cohabitate with features that came before?

---

Let's not get hung up on backwards compatibility, on breaking the code of others. It's often easy to fix. Just make sure code screams when it breaks. Unit tests, etc.

I agree. Fostering a community is probably easier when maintaining backwards compatibility, but I prefer the idea of encouraging people to use whatever version they like.

Let's try keeping code from others in the same directory as our own. Rather than segregate it into a separate ghetto, let us hack on it like it's our own.

...Then again, as waterhouse points out at http://arclanguage.org/item?id=13260, fostering a community is probably easier when there's a single standard basis everyone's using.

Another advantage to segregation is that occasionally the organ transplant does work, and then you've spent minimal effort to catch up with the community and the cutting edge.

Furthermore, the strategy you're talking about sounds like a practice that already exists to a degree (in blogs and forums) and can be promoted in any language, so I'm not worried the languages will get in your way here. Of course, lots of things are doable in many languages but are better suited to certain ones. Can you think of any particular language features that help out, besides modules? Any that become less important, besides modules? :-p

---

Let us not prematurely generalize interfaces.

I don't know what to tell ya. Sometimes I'm not sure I make anything but generalized interfaces. :-p

EDIT: Hmm, I just found this, which is pretty interesting: http://codecourse.sourceforge.net/materials/The-Importance-o...

-----

2 points by akkartik 4828 days ago | link

Saving keystrokes on kindle makes me seem certain :)

Also can't read pdfs on kindle for a few days.

I meant constraints for lib implementor. Libs rarely delete stale features.

Upgrading libs is simply the most obvious pain point. Successful upgrade is worse, causes complacency. Using abstractions without understanding is cargo-culting. We've all done it.

Easier to learn up front than debug later, to learn in isolation than in a big system.

I can try this in arc because: it's dynamic; few and concise libs, less catching up; we've often ignored bc.

Community is lower priority for now. Let's see what happens.

-----

4 points by rocketnia 4827 days ago | link

Successful upgrade is worse, causes complacency. Using abstractions without understanding is cargo-culting. We've all done it.

I accept the risk of not being an expert in everything I'm using. Otherwise I'd never attempt anything, and I'd never learn. :)

It does go against my grain in a certain way: I like to build things that are precisely what I want, so it's bothersome for those things to depend on anything I don't control. However, usually I don't care about the whole codebase being precise, just the observable outcome of it.

---

Easier to learn up front than debug later, to learn in isolation than in a big system.

I think that's the opposite of your point. I take it by "big system" you mean a large group of concepts with so much interconnectedness between them that it's hard to understand any one concept before understanding them all. But you've mainly been saying you'd rather people not think of each other's code as being separate from theirs. That you'd rather have all the code in a project be developed as one big system.

Well, maybe your ideal scenario is for each borrowable codebase to be a small system, such that people can easily learn it and integrate it into their big-system applications. But what about a borrowable codebase that depends on another borrowable codebase? If that project treats the depended-upon codebase as its own code, it'll become a big system, and it'll be harder to borrow. If instead it just comes with advice about what dependencies to track down to get it to work, it's back to cargo-culting.

Also, if my own application is a big system (thanks to all the borrowed code I've added), that's bound to be annoying to me when I try to understand it in six months.

---

I can try this in arc because: it's dynamic; few and concise libs, less catching up; we've often ignored bc.

The last two are probably related. :) With a smaller quantity of well-borrowed code to break and with more of it being easy to fix, there's less short-term value in preserving backward compatibility.

I agree about it being healthier not to promise or rely on backward compatibility. I think there is a language trait that can actively help with that: hackability. If my code can load versions two and three of a library and patch them together, then I don't have to go patch them together on a textual level and drastically increase the amount of code and complexity that belongs to my project.

Of course, even with hackability handy, it's still possible to take full ownership of certain pieces of borrowed code when it makes more sense to do that, so it's not like we have to choose. :) And both of these approaches are taken in Arc programming already.

---

Community is lower priority for now. Let's see what happens.

Well, I consider myself to be programming for the community of all my future selves. ^^ It's 'cause I have a lot of concrete project ideas that are too grandiose for me to have fun working on right away. In the occasional case I decide to implement one of the more urgent or less fickle ideas, it really pays off that I've amassed a lot of abstract, future-proofed libraries in the meantime. So I'm often interested in developing things that make future-proofing itself easier, like module systems. Ironically, they have the most immediate use for me.

-----

3 points by akkartik 4827 days ago | link

There is no borrow. When I see external code that may be useful to my pgm I want to understand it, tailor it for my needs, prune the unnecessary.

I want to see how long I can keep the codebase small.

You don't return to all your code. 90% of my code never gets read again. Not worth thinking about.

When I do return to old code I lately have no trouble. I think it's the unit tests. Little easily digestible atoms of semantics.

If everyone does this less code will be shared, but shared code will become more timeless. Easy to return to it in six months.

You don't have to be an expert in other people's code, design decisions, error paths. Just concepts and happy paths. So you see opportunities to delete.

Perhaps borrowing code is also learned from static languages. Just a delete-if loop in c can be nontrivial to read.

-----

2 points by rocketnia 4827 days ago | link

We're really getting to the heart of things now, I think. :)

---

There is no borrow. When i see external code that may be useful to my pgm i want to understand it, tailor it for my needs, prune the unnecessary.

That's one kind of borrowing, as far as I'm concerned. All I'm talking about is tech one person comes up with first and another person takes advantage of. Whether it's called code reuse or borrowing or whatever doesn't matter to me.

Borrowing code is essential for arriving at the best combinations of ideas. Two heads are better than one and all that.

---

I want to see how long I can keep the codebase small.

I want my code to be small only because I want it to be valuable to me when I go back to read it. Value comes first.

And on that topic...

You don't return to all your code. 90% of my code never gets read again. Not worth thinking about.

...if I believed that, then maybe I wouldn't even care if my code were small. :)

I return to my code more and more as I've developed better tools to future-proof it. If any piece of code isn't worth returning to, then why was it written in the first place?

Okay, maybe for one-shot scripts or something. ^_^ Like I said, I primarily write abstract, generalized code (or at least that's what I perceive I do).

---

When I do return to old code I lately have no trouble. I think it's the unit tests. Little easily digestible atoms of semantics.

Comments are good for that too, but unit tests are certainly easier to keep up-to-date. :)

---

If everyone does this less code will be shared, but shared code will become more timeless. Easy to return to it in six months.

You don't have to be an expert in other people's code, design decisions, error paths. Just concepts and happy paths. So you see opportunities to delete.

I think we're on common ground here. ^_^

---

Perhaps borrowing code is also learned from static languages. Just a delete-if loop in c can be nontrivial to read.

It took me a Google search to find the meaning of 'delete-if, but I get it now. :) http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec...

What do you mean by a static language? You keep referring to static and dynamic languages, but in my experience "dynamic language" is a term for a language with dynamic typing. I think C loop boilerplate is something to blame on the lack of a convenient closure/block syntax, not the type system.

If by "static language" you mean a language that can't be changed, that makes way more sense. But even in a language like Arc that's open to customization, we still share code.

Actually, you probably mean something different by "borrow" too, such as using code as a black box. In that case, you make sense again. :-p

The full translation is "Perhaps the practice of taking code for granted has come from a history full of languages where code isn't formatted in a personalizable enough way for its purpose to be clear." I don't want to put words in your mouth, though. XD

-----

3 points by akkartik 4823 days ago | link

(Ok, back to the land of internet.)

"Perhaps the practice of taking code for granted has come from a history full of languages where code isn't formatted in a personalizable enough way for its purpose to be clear."

Yep. There's more pressure to take code for granted in languages so verbose that everything is non-trivial to read. It seems to have been a slippery slope from having such languages to assuming take-for-granted-ability was an unabashed virtue.

That is perhaps the biggest disadvantage of C and relatives[1]: they kept programmers from bulking up on their reading muscles, the ability to read concise code patterns at a glance. Perhaps this is partly what Dijkstra was referring to as 'brain damage'. (http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498....) [2] Fortunately it is curable, no matter what he said.

[1] What I was referring to as static languages, where functions can't be redefined dynamically.

[2] In seeking out this reference I just found a Dijkstra quote about the brain damage done by lisp! http://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/EW...

-----

2 points by evanrmurphy 4827 days ago | link

> Saving keystrokes on kindle makes me seem certain

Ah, so is this why you've been sounding so prophetic lately? :)

-----

1 point by akkartik 4829 days ago | link

No, I think evan got it right: constrain as little as possible. You can constrain more later if you need to.

STL needs a crystalline/congealed interface. You can't inspect running sources, insert prints. All you have are compiler errors and API docs. We don't have those constraints; let's not tether ourselves to libs. Libs and APIs are for more static languages. We don't need a new configuration of sources to just work. Let's back off. Just tell me the moment something fails.

-----

1 point by evanrmurphy 4829 days ago | link

> The need to treat libraries as black boxes is holding back programming language design. It forces the set of inputs a library can accept to grow monotonically, causing a sort of tragedy of the commons.

Very interesting idea. So what are you proposing, that library users be more willing to hack internals instead of always wishing for endless customizability from the outside?

> reread and mulled over: [...]

That link didn't work for me. Is this the same story? http://www.miller-mccune.com/culture-society/a-psychological...

Update: waterhouse and his unmatched internet-fu beat me to correct the link.

-----

1 point by waterhouse 4829 days ago | link

That link doesn't work. My awesome internet-fu (ability to use Google) has found the correct link:

http://www.miller-mccune.com/culture-society/a-psychological...

-----

1 point by akkartik 4829 days ago | link

thanks - I'd typed it in by hand :)

-----

2 points by evanrmurphy 4839 days ago | link

Just a nit: I'm pretty sure akkartik didn't write the article (he's one of the commenters), but your use of the second person here makes it sound like you're addressing him as the author.

-----

1 point by rocketnia 4838 days ago | link

My mistake. I did think akkartik was the author. XD

-----

1 point by evanrmurphy 4838 days ago | link

You should copy your comment to the article's thread or link back to here. Would be a shame if the OP missed out on your thoughtful response.

-----

2 points by Inaimathi 4837 days ago | link

I found it; better late than never, I guess.

This is a much more thoughtful response than I was expecting, and I've linked it from the original blog post.

-----

1 point by rocketnia 4837 days ago | link

Oh, sorry! I was working on a more focused and less personalized-for-someone-else version of the response yesterday, but it takes me a while sometimes to get to things. XD Glad you found it!

-----