Arc Forum
1 point by Pauan 4300 days ago | link | parent

"Whoops. You mean "var" in the Clojure or Common Lisp 'defvar sense, don't you?"

Yup, the Clojure sense.

---

"Did you mean to say "mutable" there instead of "immutable"? I've been talking about mutability being farther from the core."

Nope. I see immutability as being more important than vau. But for practicality's sake, you do need some mutability, so I also see mutability as being important (just less than immutability).

You're right that it's possible to define the language with $vau, eval, and immutability, but without mutability... I think the end result would be a massive pain and very impractical for actually getting things done, so I still consider mutability important.

My point was that I don't really see these things in some sort of rigid hierarchy where "mutability is frivolous and vau is central". They're used for different purposes and they're both important, for different reasons.

---

"I'm heavily influenced by the desire to partially evaluate fexprs."

Yeah, I get that. But I'm not going to contort my language to conform to some notion of purity, especially not to enable some technique like partial evaluation which may or may not work and may not be beneficial even if it does work. Sure, it might be great if you could partially evaluate Nulan, but if you can't, that's fine too.

I already decided that it's just fine if Nulan can't be compiled and must be interpreted. Until I run into some major roadblocks, I'm just going to do the simplest/best thing I can. I'll worry about speed and purity later. I already said my idea about immutable environments is about practical benefits, not speed. The speed is just a nice bonus, that's all.

---

"If a single mutable container is passed through every corner of the code, even one uncompilable expression will make it difficult to know the contents of that container, which will make it difficult to compile any of the expressions which follow that one."

This is true of vars in general. I'm not exactly sure how that situation is made worse for mutable environments, but you're the one with the partial evaluation ideas. I haven't thought about this much.

---

"Maybe the interactions with continuations and concurrency could persuade me. Do you intend to have '$let-var be just like Racket's 'parameterize, or would you just use mutate-and-mutate-back?"

I'm currently using mutate-and-mutate-back because it's dirt-simple. And so, how $let-var interacts with threads will depend on how vars themselves interact with threads. I haven't really given that much thought to multicore yet: right now I'm focusing on just getting the damn thing working at all, so all my talk about multicore is about the future.
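For concreteness, here's a minimal Python sketch of the mutate-and-mutate-back approach (the Var class and let_var helper are made-up names for illustration, not Nulan's actual API):

  import contextlib

  class Var:
      """A trivially mutable cell, standing in for a Nulan var."""
      def __init__(self, value):
          self.value = value

  @contextlib.contextmanager
  def let_var(var, value):
      """Set the var, run the body, then restore the old value.

      Unlike Racket's parameterize, the temporary value is visible to
      every thread while the body runs, which is why the thread story
      for $let-var depends on the thread story for vars.
      """
      old = var.value
      var.value = value
      try:
          yield
      finally:
          var.value = old

  current_out = Var("stdout")
  with let_var(current_out, "logfile"):
      print(current_out.value)   # logfile
  print(current_out.value)       # stdout again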

---

"I find that unsettling, but I don't have a reason why. Not even an impractical one. :-p"

Yeah it disturbs me too, but like you I can't find any real reason why. It's not any less dangerous than what Arc/Scheme/JavaScript/etc. do, and in fact it's actually far safer because it's selective and you can use various constructs to control mutability.

---

"Given mutable boxes, people are going to create their own stateful interfaces on top of them, so you're going to get stateful interfaces different from boxes anyway. I think 'set-the-next-global-env is consistent with that language experience."

Yeah, but I still need to think about what the core of the language is and what the idioms are. Saying, "mutable boxes make such and such thing possible, so eventually somebody will add it in, so you might as well add it in now" doesn't convince me because languages can and do influence what people do and also how easy it is to do things. My view can be summed up as: the default experience matters.

---

"Anyway, the top-level namespace doesn't have to be stored in a data structure. The language itself can hold onto it."

And how would the language itself hold onto it? Presumably in a data structure. And $vau already reifies the dynamic environment, so I can't just pretend that the environment is some closed-box that isn't accessible to the programmer.

I don't currently make a distinction between top-level or local-level environments: they're the exact same from both the implementation's perspective and $vau's perspective. I like that consistency, so I'd need to see a good incentive to make them different.

---

"Oh, I'd take a more syntax-side approach: Define a new expression type that holds a symbol. When that expression is evaluated, it's like evaluating the symbol, but without auto-unwrapping. That way the programmer always has the option to write any part of the code in a more precise style, at the cost of brevity."

So you'd be wrapping a symbol rather than a var? Well, okay, but I can't think of any situation where I'd want to auto-unwrap something other than a var, so tying it to var seems okay to me.

Wait... there is one thing... $lazy creates a special data structure which is auto-evaled, and currently there's no way to disable that. So I guess I could use the same auto-unwrap mechanism for both $lazy and vars.



1 point by rocketnia 4300 days ago | link

"Nope. I see immutability as being more important than vau."

Then we're talking past each other here. I don't care to compare these two.

---

"This is true of vars in general. I'm not exactly sure how that situation is made worse for mutable environments, but you're the one with the partial evaluation ideas. I haven't thought about this much."

I'm not sure what you're saying here, but I'll explain again. Passing a mutable box to every part of the code means any part could clobber that box, inhibiting our ability to predict the behavior of the remaining code.

The box you're passing around every part of the code is the one that contains the namespace.
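To make the concern concrete, here's a hypothetical Python sketch (not Nulan code): because every part of the program can see the one namespace box, no caller can assume its contents survive a call into unknown code.

  # The entire top-level namespace lives in a single mutable box.
  namespace_box = [{"double": lambda x: x * 2}]

  def lookup(name):
      return namespace_box[0][name]

  def unknown_code():
      # Any code with access to the box can replace the whole namespace...
      namespace_box[0] = {"double": lambda x: x + 100}

  print(lookup("double")(21))   # 42
  unknown_code()
  print(lookup("double")(21))   # 121 -- so a compiler can't treat 'double' as known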

---

"And how would the language itself hold onto it? Presumably in a data structure."

That's an implementation detail. It doesn't make a difference to the users of the language.

To implement a language that "holds" something without a data structure, you can implement it in a language that can already do that. ^_-

We already have lots and lots of languages with that feature. Any language with non-first-class variables can qualify (e.g. Haskell, Java, or Arc). Some languages might let you hold values in event queues (e.g. JavaScript or E) or in a language-wide stack (e.g. Forth, Factor, or assembly language).
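As a tiny illustration of that point (a Python sketch, not anyone's actual implementation): an interpreter can keep its top-level bindings in an ordinary host-language variable, so the interpreted program can define and look up names without the namespace ever appearing as a first-class value.

  # The host language "holds" the namespace; interpreted code only gets
  # these two operations and never sees the dict itself.
  _toplevel = {}

  def define(name, value):
      _toplevel[name] = value

  def lookup(name):
      return _toplevel[name]

  define("answer", 42)
  print(lookup("answer"))   # 42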

---

"And $vau already reifies the dynamic environment, so I can't just pretend that the environment is some closed-box that isn't accessible to the programmer."

All it needs is the immutable part. You're putting a box around it, and I consider that to be harmful, unnecessary complexity.

---

"I don't currently make a distinction between top-level or local-level environments: they're the exact same from both the implementation's perspective and $vau's perspective. I like that consistency, so I'd need to see a good incentive to make them different."

Hmm, why are you even wrapping a local environment in a box? Why not use '$let for every locally scoped definition?

-----

1 point by Pauan 4300 days ago | link

"I'm not sure what you're saying here, but I'll explain again. Passing a mutable box to every part of the code means any part could clobber that box, inhibiting our ability to predict the behavior of the remaining code."

Yeah, but only certain functions are capable of clobbering that box, and because environments are immutable... it should be possible to statically determine at compile-time whether such a function occurs or not. Racket can already do this: it statically determines whether a module uses "set!" or not. It should be a piece of cake to do the same in Nulan, at compile-time.

As I already said, when a vau is created, the lexical scope is the current value of the environment variable, not the environment variable itself. So you know exactly what bindings are present within the vau's body when the vau is called, even with environment mutation. That should give you plenty to reason about, right?

And because the dynamic environment is really just the lexical environment of the call site, that should be immutable as well... which means fexpr inlining should be possible even though the environment is stored in a variable. Unless I'm missing something. I haven't really thought this through, so I wouldn't be surprised.
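Here's a rough Python sketch of that claim (illustrative names only): the vau captures the current value of the environment variable as a read-only snapshot, so later mutation of the variable can't change what the vau's body sees.

  from types import MappingProxyType

  # The "environment variable" is a mutable cell, but what gets captured
  # at vau-creation time is its current value: an immutable snapshot.
  current_env = [{"x": 1}]

  def make_vau():
      lexical_scope = MappingProxyType(dict(current_env[0]))  # snapshot
      def body():
          return lexical_scope["x"]
      return body

  f = make_vau()
  current_env[0] = {"x": 999}   # mutating the variable afterwards...
  print(f())                    # ...still prints 1; the snapshot is fixed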

---

"All it needs is the immutable part. You're putting a box around it, and I consider that to be harmful, unnecessary complexity."

Yet clearly there needs to be some kind of mutability... and I don't see your environment-mutability schemes as being any less complex than variables. Since I've already decided to include variables as part of the language, I might as well use 'em. Now, if my language didn't have variables to begin with, then I'd agree with you that they'd be more complicated than your ideas.

---

"Hmm, why are you even wrapping a local environment in a box? Why not use '$let for every locally scoped definition?"

That's to allow $def! and $set! to create local bindings. Though you're right that if I gave up on internal-definition forms I could probably remove var from local environments.

I'm not convinced that's a net win, but it would solve the problem of $def! creating a local binding sometimes and mutating a var in other cases... so that's pretty nice.
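To illustrate what the box buys here (a Python sketch with invented names): the local environment's value stays immutable, and a local $def! just swaps in an extended copy, which is how a new binding can appear without any mapping being mutated.

  from types import MappingProxyType

  def make_local_env(bindings):
      # The environment value is immutable; only the surrounding box
      # (a one-element list, standing in for a var) is mutable.
      return [MappingProxyType(dict(bindings))]

  def local_def(env_box, name, value):
      # $def!-style: replace the whole environment with an extended copy,
      # so a new local binding appears without mutating any mapping.
      extended = dict(env_box[0])
      extended[name] = value
      env_box[0] = MappingProxyType(extended)

  env = make_local_env({"x": 1})
  local_def(env, "y", 2)
  print(dict(env[0]))   # {'x': 1, 'y': 2}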

---

But I get the distinct impression you're only being so anti-var because they interfere with your whole partial evaluator scheme... maybe it would be better to figure out a way to deal with vars in your partial evaluator? Unless you're saying that's impossible?

In particular, languages like Scheme/Racket/JavaScript seem to do just fine with mutable variables. Yeah they could be even faster with immutable everything, but I wouldn't exactly call them slow languages...

I know Racket/JS use JIT rather than AoT. I'm okay with that. I'm even okay with Nulan staying interpreter-only, until it becomes too slow to be usable. Python/Ruby seem fast enough to me and they're interpreted (albeit written in C... or is it C++?)

I haven't even gotten to the point of porting Nulan from pure-Python to PyPy, so I have no clue how fast PyPy will make it run. I still have high hopes that PyPy will give it Good Enough(tm) speed for the kinds of things I want to do. That's enough for me.

-----

1 point by Pauan 4300 days ago | link

"So I guess I could use the same auto-unwrap mechanism for both $lazy and vars."

Okay, I've got a semantics I'm reasonably satisfied with... there will be three built-in primitives: `implicit`, `$explicit`, and `implicit?`.

* implicit accepts a function as its argument and returns a wrapped structure that, when evaluated, will call the function argument.

* $explicit evaluates its argument and, if it's implicit, returns the underlying function.

* implicit? just tells you whether the data structure is implicit or not.

This can be used to implement auto-unwrapping behavior for vars and also laziness.
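Here's a minimal Python sketch of those three primitives as just described (illustrative only; in Nulan the evaluator itself would do the unwrapping):

  class Implicit:
      """A wrapper that the evaluator calls automatically on evaluation."""
      def __init__(self, thunk):
          self.thunk = thunk

  def implicit(thunk):
      # implicit: wrap a function so that evaluating the result calls it
      return Implicit(thunk)

  def explicit(value):
      # $explicit: if the value is implicit, return the underlying function
      return value.thunk if isinstance(value, Implicit) else value

  def implicit_p(value):
      # implicit? (spelled implicit_p here): is this an implicit wrapper?
      return isinstance(value, Implicit)

  # What the evaluator would do when it encounters an implicit value:
  def evaluate(value):
      return value.thunk() if isinstance(value, Implicit) else value

  w = implicit(lambda: 42)
  print(implicit_p(w))    # True
  print(evaluate(w))      # 42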

---

By the way, if "eval" were mutable, I could actually implement the whole implicit/explicit thing in Nulan itself. The downside is that every call to eval would have extra overhead because it would have to unwrap the variable. I'm also not convinced it's a good idea to let the language change evaluation in arbitrary ways, but it's an idea I'll have to think about.

-----

1 point by rocketnia 4300 days ago | link

Your implicit wrappers sound like they would be too slippery. First I'd try (all implicit? lst)--sorry, I'm using Arc syntax here--and I'd find out all the implicit wrappers were unwrapped before they got to 'implicit?. Then I'd try something like (all [implicit? ($explicit _)] lst), but no dice; the implicit wrappers are unwrapped before they get to [implicit? ($explicit _)]. Either I'd have to pass an fexpr (which 'all shouldn't even support, for API simplicity's sake), or I'd have to hand-roll a new version of 'all. (EDIT: Or maybe 'all would just have to use '$explicit in the first place.)

I think aw's implicit global variables are somewhat better. Things are only unwrapped once--when a symbol is evaluated--and after that they're first-class values, without the evaluator oppressing them at every turn.

---

"This can be used to implement auto-unwrapping behavior for vars and also laziness."

I don't think it can model laziness (since it forces at every step). I recommend modeling laziness by way of a doppelganger standard library that builds promises out of promises. Eager utilities would still see these promises as first-class promise objects. I explored this not too long ago as a way to port Haskell code to JavaScript... but I used a different technique than this one, and I grew to dislike it.
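A small Python sketch of the "promises out of promises" idea (my own illustration, not the Underreact code linked below): the lazy list is a memoized promise whose forced value contains further promises, so an eager utility just sees first-class promise objects.

  class Promise:
      """A memoized thunk: forced at most once."""
      def __init__(self, thunk):
          self._thunk, self._forced, self._value = thunk, False, None
      def force(self):
          if not self._forced:
              self._value, self._thunk, self._forced = self._thunk(), None, True
          return self._value

  # A lazy list cell is a promise of (head, promise-of-tail) or None.
  def lazy_range(n, limit):
      return Promise(lambda: None if n >= limit else (n, lazy_range(n + 1, limit)))

  def lazy_map(f, plist):
      # Builds a promise out of a promise: nothing runs until forced.
      def step():
          cell = plist.force()
          return None if cell is None else (f(cell[0]), lazy_map(f, cell[1]))
      return Promise(step)

  doubled = lazy_map(lambda x: x * 2, lazy_range(0, 3))
  cell = doubled.force()
  print(cell[0])                 # 0
  print(cell[1].force()[0])      # 2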

The code which uses that technique I now dislike is at (https://github.com/rocketnia/underreact/blob/865ccdb1a2c8dc0...), if you're willing to trudge through its boilerplate. (I'm linking to a specific commit because I plan to delete it for being too long out of date and too hard to maintain.)

---

"I'm also not convinced it's a good idea to let the language change evaluation in arbitrary ways, but it's an idea I'll have to think about."

It might be a challenge to keep things efficient and modular. But it's just the kind of challenge you'd look forward to, I'm sure. :-p

There's nothing wrong with pursuing this kind of thing. A minimalistic Kernel-style 'eval is already user-extensible in a way, since whenever it evaluates a cons cell, it executes user code (an operative or applicative).

-----

1 point by Pauan 4300 days ago | link

"Your implicit wrappers sound like they would be too slippery."

Yeah, I've already changed it since then. It's in a lot of flux right now!

---

"I think aw's implicit global variables are somewhat better. Things are only unwrapped once--when a symbol is evaluated--and after that they're first-class values, without the evaluator oppressing them at every turn."

Not true. aw's implicit system is the same as Arc/Nu and Nulan, except that ar hardcodes symbol names while Arc/Nu and Nulan use first-class mutable thunks; that's all.

aw's implicit global system works by having a table of symbols. The compiler will look at this table and if it finds a symbol in it, it will wrap it as a function call. In other words... in ar, if 'foo is in the implicit table, this:

  (foo bar qux)
Would be compiled into this:

  ((foo) bar qux)
And then foo is a function that returns the value. This is just like Arc/Nu and Nulan except both Arc/Nu and Nulan handle it in much cleaner ways:

In particular, aw's system hardcodes the symbols, which means it isn't late-bound and works poorly with multiple namespaces. Because Arc/Nu uses values rather than symbols, both of those work perfectly.

As for Nulan... it doesn't have late binding or multiple namespaces (by default), so that's not a problem either way. :P
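For comparison, a toy Python sketch of the symbol-table rewrite described above (hypothetical, not aw's actual ar code): the compiler consults a fixed set of implicit symbols and rewrites each occurrence into a call that fetches the value.

  # Toy "compiler" pass over s-expressions represented as nested lists/strings.
  implicit_symbols = {"foo"}

  def compile_expr(expr):
      if isinstance(expr, list):
          return [compile_expr(sub) for sub in expr]
      if expr in implicit_symbols:
          return [expr]          # foo  ==>  (foo), a call that fetches the value
      return expr

  print(compile_expr(["foo", "bar", "qux"]))   # [['foo'], 'bar', 'qux']

The hardcoded table is exactly the part that makes it early-bound: the decision is baked in at compile time rather than carried by the value itself.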

---

"I don't think it can model laziness (since it forces at every step)."

Why not? It may call the implicit thunk repeatedly, but the $lazy vau caches it so the expression is only evaluated once:

  $defv! $lazy Env; X ->
    $lets: U:      uniq
           Saved:  var U
      implicit; ->
        $if Saved = U
          $set-var! Saved: eval X
          Saved
The only issue I see with this is efficiency, which I'm not really worried about right now. That particular problem could be solved by giving a way for the implicit wrapper to mutate itself... then, when evaluating the thunk, it would mutate itself to point directly to the evaluated expression. It would still be slightly inefficient because it would still have to unwrap the implicit, but hey, I can only do so much.
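Translated into a rough Python sketch (invented names, just to show the caching logic and the self-mutating optimization described above):

  _UNSET = object()   # plays the role of the uniq sentinel

  def lazy(expr_thunk):
      # $lazy-style caching: the implicit thunk may be called many times,
      # but the expression itself is evaluated only once.
      saved = [_UNSET]
      def implicit_thunk():
          if saved[0] is _UNSET:
              saved[0] = expr_thunk()
          return saved[0]
      return implicit_thunk

  class SelfMutatingImplicit:
      # The optimization mentioned above: after the first evaluation the
      # wrapper rewrites its own thunk to return the value directly.
      def __init__(self, expr_thunk):
          self.thunk = lambda: self._first(expr_thunk)
      def _first(self, expr_thunk):
          value = expr_thunk()
          self.thunk = lambda: value
          return value

  p = lazy(lambda: print("evaluated") or 42)
  print(p(), p())               # prints "evaluated" once, then 42 42

  s = SelfMutatingImplicit(lambda: 7 * 6)
  print(s.thunk(), s.thunk())   # 42 42; after the first call, thunk just returns 42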

---

"There's nothing wrong with pursuing this kind of thing."

There's nothing wrong with any of these choices. The question is whether you like them or not, whether they support your goals or not.

-----