Arc Forum
2 points by Pauan 4527 days ago | link | parent

Yes, I pretty much agree with what you're saying.

---

"How do you feel about Lisp versus SML?"

I've never used SML and have only read a little about it. It looks a lot like Haskell, which I don't have much experience with either.

But from what I've seen, I don't like static type systems. I think making them optional is fantastic, but I don't like having them shoved down my throat.

I think you should be able to write your program in care-free Ruby/Arc style, and then once things have settled down, go back in and add type annotations for speed/safety. But you shouldn't have to use the type system right from the start.
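For example, in Python (illustrative names only; the checker is assumed to be an external tool such as mypy), the same function can start out annotation-free and gain optional type annotations later, without any change in runtime behavior:

```python
# First pass: care-free, no type annotations.
def area(w, h):
    return w * h

# Later pass: the same logic with optional annotations added.
# An external checker (e.g. mypy) can now verify call sites,
# but the annotations don't change how the code runs.
def area_annotated(w: float, h: float) -> float:
    return w * h

assert area(3, 4) == 12
assert area_annotated(3.0, 4.0) == 12.0
```

Python isn't Ruby or Arc, but its annotations have exactly this opt-in flavor: you can ignore them entirely, or layer them on once the design has settled.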

The problem is that a lot of the people who find static type systems useful are also the kind of people who like safety a lot, so they want the type system on all the time. Not just to protect their code, but to prevent other programmers from making mistakes.

I don't like that mindset. Which is why I prefer languages like Ruby and Arc, even with their flaws. I don't think any restriction should be added in to prevent stupid people from doing stupid things. I think the language should only add in restrictions if it helps the smart people to do smart things. And for no other reason.

So as long as the type system helps smart people to do smart things, and doesn't get in the way too much, then sure, I think it's great. But if it gets in the way, or it's done to prevent stupid people from doing stupid things... no thanks.



2 points by Pauan 4527 days ago | link

Along that line of reasoning, I've been thinking about adding a static type checker to Nulan. But I want it to use a blacklist approach rather than a whitelist.

What I mean by that is: if it can be guaranteed at compile time that a program is in error, then it should throw a well-formatted and precise error that makes it easy to fix the problem.

But if there's a chance that the program is correct, the type system should allow it. This is the opposite of the stance in Haskell/SML, which says: if it cannot be guaranteed at compile-time that a program is valid, then the program is rejected.

Here's an example of what I'm talking about:

  def foo ->
    bar 10 20

The variable `bar` isn't defined. This can be determined at compile-time. Thus, Nulan throws this error at compile-time:

  NULAN.Error: undefined variable: bar
    bar 10 20  (line 2, column 3)
    ^^^

The error message is precise, and pinpoints the exact source of the error, making it easy to fix. And likewise, this program...

  def foo -> 1
    5
   
  foo 2

...creates a function `foo` that requires that its first argument is the number `1`. It then calls the function with the number `2`. This situation can be determined at compile-time, and so I would like for Nulan to throw this error:

  NULAN.Error: expected 1 but got 2
    foo 2  (line 4, column 5)
        ^

But with this program...

  def foo -> 1
    5
   
  foo a + b

...it might not be possible to determine whether the first argument to `foo` is the number `1` or not. If this were Haskell/SML, it might refuse to run the program. But in Nulan, I would simply defer the check to runtime.

This means that every program that is valid at runtime is also valid according to the type-checker. Thus the type-checker is seen as a useful tool that helps catch some errors at compile-time, unlike Haskell/SML, which attempt to catch all errors at compile-time.

I think this kind of hybrid system is better than pure dynamic/pure static typing.
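A minimal sketch of this blacklist idea in Python (all names hypothetical; a real checker would work on the syntax tree, not on concrete values): the checker gives one of three verdicts, and only a provable error is rejected at compile time, while "maybe" is deferred to a runtime check.

```python
# Hypothetical "blacklist" checker: reject only what is provably
# wrong; defer anything uncertain to a runtime check.
OK, MAYBE, ERROR = "ok", "maybe", "error"

def check_arg(expected, arg):
    """Check a call site where the callee requires `expected`.
    `arg` is None when its value is unknown at compile time."""
    if arg is None:
        return MAYBE          # e.g. `foo a + b`: check at runtime
    if arg == expected:
        return OK             # provably fine
    return ERROR              # provably wrong: compile-time error

# foo requires its first argument to be the number 1:
assert check_arg(1, 1) == OK
assert check_arg(1, 2) == ERROR   # like `foo 2` above
assert check_arg(1, None) == MAYBE
```

A whitelist checker would instead return ERROR for the unknown case, which is exactly the Haskell/SML stance being rejected here.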

-----

1 point by rocketnia 4527 days ago | link

How is this different from preventing stupid people from doing stupid things?

I've said this recently, but I like static typing when it contributes to the essential details of the program, rather than merely being a redundant layer for enhancing confidence in one's own code. Static typing is particularly meaningful at module boundaries, where it lets people establish confidence about each other's programs.

Anyway, enhanced usability is nothing to scoff at either. If you find this kind of static analysis important, I look forward to what you accomplish. :)

-----

1 point by Pauan 4527 days ago | link

"How is this different from preventing stupid people from doing stupid things?"

Because the only difference is whether the error occurs at compile-time or run-time. I'm not adding in additional restrictions to make the type-system happy: if the type system can't understand it, it just defers the checking until run-time.

Thus, the type system takes certain errors that would have happened at run-time, and instead makes them happen at compile-time, which is better because it gives you early error detection. What the type system doesn't do is restrict the programmer in order to make it easier to detect errors at compile-time.

---

"If you find this kind of static analysis important"

Not really, no. Useful? Yeah, a bit. It's nice to have some early detection on errors. But my goals aren't to guarantee things. So whether you have the type-checker on or off just determines when you get the errors. A small bonus, but nothing huge. So I'd be fine with not having any static type checker at all.

-----

1 point by rocketnia 4526 days ago | link

The way I see it, what you're talking about still seems like a way to cater to stupid programming. Truly smart programmers don't generate any errors unless they mean to. ;)

---

"What the type system doesn't do is restrict the programmer in order to make it easier to detect errors at compile-time."

Guarantees don't have to "restrict the programmer." If you take your proposal, but add a type annotation form "(the <type> <term>)" that guarantees it'll reject any program for which the type can't be sufficiently proved at compile time, you've still done nothing but give the programmer more flexibility. (Gradual typing is a good approach to formalizing this sufficiency: http://ecee.colorado.edu/~siek/gradualtyping.html)
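A rough runtime-only analogue in Python (the `the` helper is hypothetical): the annotation adds a check where the programmer asks for one and leaves unannotated code alone. A gradual checker would additionally prove and erase the check at compile time where it can, and reject the program where a stated annotation is unprovable.

```python
def the(type_, value):
    """Hypothetical `(the <type> <term>)` form, checked at runtime.
    A gradual type checker would discharge this statically where
    the type can be proved, and reject unprovable annotations."""
    if not isinstance(value, type_):
        raise TypeError(f"expected {type_.__name__}, got {value!r}")
    return value

x = the(int, 40) + 2   # annotated term: checked
y = 40 + 2             # unannotated term: untouched
assert x == y == 42
```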

I think restriction comes into play when one programmer decides they'll be better off if they encourage other programmers to follow certain conventions, or if they follow certain conventions on their own without immediate selfish benefit. This happens all the time, and some may call it cargo culting, but I think ultimately it's just called society. :-p

-----

1 point by Pauan 4526 days ago | link

"The way I see it, what you're talking about still seems like a way to cater to stupid programming. Truly smart programmers don't generate any errors unless they mean to. ;)"

Then I'll reclarify and say "any programmer who's just as smart as me", thereby nullifying the argument that a "sufficiently smart programmer would never make the mistake in the first place".

---

"If you take your proposal, but add a type annotation form [...]"

Sure, if it's optional, and not idiomatic to use it all the time. The problem that I see with languages that emphasize static typing is that even if it's technically possible to disable the type checker, it's seen as very bad form, and you'll get dirty looks from others.

The idioms and what is seen as "socially acceptable" matter just as much as whether it's "technically possible". If I add in type checking, it'll be in a care-free "sure use it if you want, but you don't have to" kind of way. I've seen very few languages that add in static typing with that kind of flavor to it.

---

"This happens all the time, and some may call it cargo culting, but I think ultimately it's just called society. :-p"

And I am very much so against our current society and its ways of doing things, but now we're straying into non-programming areas...

-----

1 point by rocketnia 4526 days ago | link

"And I am very much so against our current society and its ways of doing things, but now we're straying into non-programming areas..."

Yeah, I know, your and my brands of cynicism are very different. :) Unfortunately, I actually consider this one of the most interesting programming topics right now. On the 20th (two days ago) I started thinking of formulating a general-purpose language where the primitives are the claims and communication avenues people share with each other, and the UI tries its best to enable a human to access their space of feedback and freedom in an intuitive way.

I'd like to encourage others to think about how they'd design such a system, but I know this can be a very touchy subject. It's really more philosophy and sociology than programming, and I can claim no expertise. If anyone wants to discuss this, please contact me in private if there's a chance you'll incite hard feelings.

-----

1 point by Pauan 4526 days ago | link

"http://ecee.colorado.edu/~siek/gradualtyping.html "

I like that article, I think that'll be useful to me, thanks.

-----

1 point by nburns 4527 days ago | link

I think that C has a good solution. It will compile any code that's possible to compile, but it will output warnings. I don't think it's necessary to halt compilation just to get the programmer's attention. That's what Java does, and it really annoys me.

If the type-checking is not strictly necessary, maybe you should make it an option, like -Wall.

-----

1 point by Pauan 4527 days ago | link

Yes, absolutely. There are certain errors that absolutely cannot be worked around, like an undefined variable. Those are errors that actually halt the program. But the rest should be optional.

-----

1 point by akkartik 4527 days ago | link

I've learned through bitter experience to treat all C warnings as errors, and more. The presence of a single uninitialized local variable somewhere in your program makes the entire program undefined. Where undefined means "segfaults in an entirely random place."

-----

1 point by nburns 4524 days ago | link

I think that's a good practice in general. But when you are experimenting and debugging, it can be useful to eliminate chunks of code by expedient means, which often generates warnings that you don't care about.

-----

2 points by akkartik 4521 days ago | link

I find programming to fractally involve debugging all the time. So if I allowed warnings when debugging I'd be dead :)

You're right that there are exceptions. I think of warnings as something to indulge in only in the short term, the extreme short term; I try very hard never to commit a patch that causes warnings. It really isn't that hard in the moment, and the cost rises steeply thereafter.

Incidentally, I'm only this draconian with C/C++. Given their utterly insane notions of undefined behavior I think it behooves us to stay where the light shines brightest. Whether we agree with individual warning types or not, it's easier to just say no.

But with other languages, converting errors to warnings is a good thing in general. Go, for example, goes overboard by not permitting one to define unused variables.

-----

2 points by nburns 4527 days ago | link

"The problem is that a lot of the people who find static type systems useful are also the kind of people who like safety a lot, so they want the type system on all the time. Not just to protect their code, but to prevent other programmers from making mistakes.

I don't like that mindset. Which is why I prefer languages like Ruby and Arc, even with their flaws. I don't think any restriction should be added in to prevent stupid people from doing stupid things. I think the language should only add in restrictions if it helps the smart people to do smart things. And for no other reason."

I could not agree more. I think that the idea of preventing mistakes via restrictive language features is one of the dominant ideas behind object-oriented languages. Consider the keywords "private" and "protected;" they literally have no effect other than to cause compile-time errors.

It seems to me, intuitively, that the kinds of mistakes that can be easily caught by the compiler at compile time are in general the kinds of mistakes that are easily caught, period. The kinds of bugs that are hard to find are the ones that happen at runtime and propagate before showing themselves, and they are literally impossible for the compiler to find, because that would require the compiler to solve problems that are provably uncomputable.

At my last job, I was working on fairly complicated web applications in PHP, and even though occasionally I'd run into a bug that could have been prevented by static type-checking, it was always in code that I had just written and wasn't hard to find. By eliminating things like variable declarations, PHP code can be made very succinct, and I think that simplicity and succinctness more than offset the risks that come from a permissive language. But I've never used a language that came with type-checking optional, so I have never made an apples-to-apples comparison.

PHP is actually an interesting example, because in PHP, the rules for variable declarations are basically inverted from normal: you have to declare global variables in every function that you use them (or access them through the $GLOBALS array), but you don't have to declare function-scope variables at all. It makes a lot of sense if you think about it.

-----

1 point by rocketnia 4526 days ago | link

"Consider the keywords "private" and "protected;" they literally have no effect other than to cause compile-time errors."

Would you still consider this semantics restrictive if the default were private scope and a programmer could intentionally expose API functionality using "package," "protected," and "public"?

IMO, anonymous functions make OO-style private scope easy and implicit, without feeling like a limitation on the programmer.
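In Python terms (a standard closure idiom, not tied to any particular OO language), the state below is private purely through scoping, with no access-modifier keyword in sight:

```python
def make_counter():
    count = 0  # effectively "private": reachable only via the closure

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

tick = make_counter()
assert tick() == 1
assert tick() == 2
# No code outside make_counter can name `count` directly.
```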

---

"It seems to me, intuitively, that the kinds of mistakes that can be easily caught by the compiler at compile time are in general the kinds of mistakes that are easily caught, period."

I think that's true, yet not as trivial as you suggest. In general, the properties a compiler can verify are those that can be "easily" expressed in mathematics, where "easily" means the proof-theoretical algorithms of finding proofs, verifying proofs, etc. (whatever the compiler needs to do) have reasonable computational complexity. Mathematics as a whole is arbitrarily hard, but I believe human effort has computational complexity limits too, and I see no clear place to draw the line between what computers can verify and what humans can verify. Our type systems and other tech will keep getting better.

---

"The kinds of bugs that are hard to find are the ones that happen at runtime and propagate before showing themselves, and they are literally impossible for the compiler to find, because that would require the compiler to solve problems that are provably uncomputable."

I believe you're assuming a program must run Turing-complete computations at run time. While Turing-completeness is an extremely common feature of programming languages, not all languages encourage it, especially not if their type system is used for theorem proving. From a theorems-as-types point of view, the run time behavior of a mathematical proof is just the comfort in knowing its theorem is provable. :-p If you delay that comfort forever in a nonterminating computation, you're not proving anything.

Functional programming with guaranteed termination is known as total FP. "Epigram [a total FP language] has more static information than we know what to do with." http://strictlypositive.org/publications.html

---

"PHP is actually an interesting example, because in PHP, the rules for variable declarations are basically inverted from normal: you have to declare global variables in every function that you use them (or access them through the $GLOBALS array), but you don't have to declare function-scope variables at all. It makes a lot of sense if you think about it."

I find this annoying. My style of programming isn't absolutely pure functional programming, but it often approximates it. In pure FP, there's no need to have the assignment syntax automatically declare a local variable. That's because there's no assignment syntax! Accordingly, if a variable is used but not defined, it must be captured from a surrounding scope, so it's extraneous to have to declare it as a nonlocal variable.

I understand if PHP's interpreter doesn't have the ability to do static analysis to figure out the free variables of an anonymous function. That's why I would use Pharen, a language that compiles to PHP. (http://arclanguage.org/item?id=16586)

-----

1 point by nburns 4524 days ago | link

>> "Consider the keywords "private" and "protected;" they literally have no effect other than to cause compile-time errors."
>>
>> Would you still consider this semantics restrictive if the default were private scope and a programmer could intentionally expose API functionality using "package," "protected," and "public"?

Actually, in C++ the default for class members is private...

It's simply a true statement that "private" generates no machine language. All it does is cause compilation to fail. Whether or not this is a good thing is a matter of opinion.

>> IMO, anonymous functions make OO-style private scope easy and implicit, without feeling like a limitation on the programmer.

If you're speaking of lexical closures, I think you're right. You don't need to declare variables as private, because you can use the rules of scoping to make them impossible to refer to. You can achieve the same thing with a simpler syntax and more succinct code.

>> I believe you're assuming a program must run Turing-complete computations at run time. While Turing-completeness is an extremely common feature of programming languages, not all languages encourage it, especially not if their type system is used for theorem proving.

I'm not assuming that programming languages must be Turing complete. It happens to be true of all general-purpose languages that are in common use today.

>> Functional programming with guaranteed termination is known as total FP. "Epigram [a total FP language] has more static information than we know what to do with." http://strictlypositive.org/publications.html

I'll take a look at that language. I think that in 50 years' time, we might all be using non-Turing-complete languages. Making a language Turing complete is the easiest way to ensure that it can solve any problem, but isn't necessarily the best way.

(Technically, a language has to be Turing complete to solve literally any problem, but my hunch is that all problems of practical utility can be solved without resorting to a Turing machine.)

-----