"When mathematics made its way to software the notion of rigor hardened to a sort of math envy. Endless man-years have been spent trying to "prove programs correct" rather than to do computer-aided proofs as a human activity. But the best practitioners have always known that you can't take the human out of the proof life-cycle."
Reflexively, I don't like drawing a distinction between human activity and mathematics. What else in the universe does mathematics except humans?
Any field is going to keep humans in the loop or else it won't be useful to humans. Any field is going to strive to minimize humans' responsibilities in the loop or else it won't be useful to humans.
Right now I'm in the mood to drill down on "field" and "useful." A "field" implies there's some kind of shared research going on, and "useful" implies that it comes in handy for people's naive needs. (Who knows if something that's useful is really beneficial?)
Shared research, in itself, crucially involves summaries and citations to cash in on prior work. But when we snapshot these summaries and citations (e.g. writing them on paper), the discreteness of our snapshots permeates into discreteness of the citation expressions, which then permeates into discrete methods for locally assessing the value of the research, which then permeates into discrete valuations of the research. So I suspect this one useful application domain -- that is, the meta-theory of snapshotted shared research -- is sufficient to give a strong identity to the field of logic, regardless of what culture currently surrounds it and what other application domains intersect with it.
I thought I was going to say "mathematics," but "logic" is a culturally closer term for what I described. The impression I've gotten is like this: Logic gives you a canvas for reasoning. Mathematics equips that canvas with intermediate scaffolding for the subjects you care about. It's paint by number! :-p
"Just look at ACL2."
Er, what about ACL2?
"The most harmful result of this slippery slope has been to enshrine and ossify notation as something most programmers are to be prevented from changing."
For at least this reply, I'd like to break down "harmful" into "less useful" and "less beneficial."
A programmer will find it more useful to be able to use the notation of their choice, especially notation of their own design. However, if many programmers each willingly stray from standard notations, they make the world of program code less useful in at least one way, to at least one audience: Newcomers face a steep learning curve, encountering different notations everywhere they look.
As for whether this is "beneficial" one way or the other, I don't know. I just wanted to cast doubt on the moral connotations of "useful" again.
"We don't pay nearly enough attention to the fact that large programs can't be taken in at a glance, and so at large scales the visual effects of notation must be subordinated to a tactile feedback-based environment."
I don't know if they can be completely "taken in," but making them interactive is great. Logic (in the sense I described above) is already essentially interactive because of the process of tracking down citations to convince oneself of any pieces that are hard to believe. It would be pretty useful to be able to navigate a program to see various views that are bite-size enough to take in.
Ooh, as an extrapolation of your unit-test-based ideas, it would be especially nifty to view a codebase as a fractal pattern where every possible run-time state corresponds to one location in the fractal. XD Do just enough fuzzing to draw a good density of pixels for the overall program fractal, render the errors in red, and then do more localized fuzzing when the user zooms in. Some design flaws and corner cases may be visually apparent without even being proper errors!
If you're starting with a conventional assembly language, you can probably display each subroutine call as a pyramid of the calls that happen inside, with each level being some fraction as tall as the previous one. Conditionals would be treated similarly to calls, except that they don't always happen, so sometimes they can be omitted from the inner levels of a pyramid. Non-call, non-conditional statements would be omitted from the fractal altogether to save room.
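The pyramid idea can be sketched as plain layout math. Here's a toy version, assuming calls form a tree, children split the parent's horizontal span equally, and each nesting level shrinks by a fixed fraction; every name and constant here is made up for illustration:

```python
# Toy sketch of the pyramid layout: each call's pyramid contains the
# pyramids of its inner calls, with each level some fraction as tall
# as the one above it. Non-call statements are simply absent.

SHRINK = 0.5       # each inner level is this fraction as tall as its parent
MIN_HEIGHT = 0.01  # stop recursing once a pyramid is too small to render

def layout(call_tree, x=0.0, width=1.0, height=1.0):
    """Return (name, x, width, height) boxes for a nested call tree.

    `call_tree` is a (name, children) pair; children share the parent's
    horizontal span equally.
    """
    name, children = call_tree
    boxes = [(name, x, width, height)]
    inner_height = height * SHRINK
    if children and inner_height >= MIN_HEIGHT:
        child_width = width / len(children)
        for i, child in enumerate(children):
            boxes.extend(layout(child, x + i * child_width,
                                child_width, inner_height))
    return boxes

tree = ("main", [("parse", [("lex", [])]), ("eval", [])])
boxes = layout(tree)
```

A renderer could then color each box by the error density the fuzzer found inside it, and the zoom interaction just re-runs `layout` with a narrower root span.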
Sorry, I guess I got carried away on a specific idea. XD Your ideas are already pretty nice too.
"you can probably display each subroutine call as a pyramid of the calls that happen inside"
Oops, just after I posted this I realized that the "calls that happen inside" a subroutine may vary depending on the program state or arguments. If these separate possibilities are not shown as separate locations in the fractal, then zooming in won't be sufficient to narrow down these parameters in the view. If they are shown as separate locations, then a mere 8-bit parameter will shrink each sub-pyramid by a factor of 256.
It's likely the zooming interface will need to be accompanied by a timeline scrubber or tableau view to visualize number-like parameters.
Pointer-like parameters would get out of hand either way, let alone strings, lists, trees, and functions.
The author has more faith in the morality of human values than I do. I think there's quite a lot that's just an accident of the culture we're currently in.
On the other hand, the author has far less confidence in the resilience of art, science, and philosophy than I do. I think any competitive process will spend some effort on research to make sure it's spending the rest of its effort wisely. In fact, I strongly suspect that we have no innate goal to pursue, so all our effort eventually serves the needs of research.
What's the problem here? The fact that people are being bombarded with lies? Consider the alternative:
- If we tell the same "truth" to everyone, then everyone will find themselves persuaded by the same few marketing pitches, and the wealth gap increases.
- If we educate everyone well, we develop technology faster, and that technology further disrupts our social ties to each other, displaces jobs, increases the risk of world annihilation, accelerates our population growth, and introduces new intelligent beings who may have even more severe problems to deal with than humans do.
- If we take our claims seriously enough not to include a snide hint of "you'd better fact-check this yourself," then the burden of proof remains with the speaker, so fewer people will be able to invest in the task of passing along the message.
- If we try to tell the whole truth to someone, but fail to discuss each topic in the proper proportion, then they may be frustrated by non-sequiturs, or they may mistake certain topics to be more relevant than they are.
There's probably no way to tell the truth to humans without being deceptive and exploitative. There's probably no such thing as truth at all, only the natural selection of processes that are good at doing research.
That doesn't mean we won't care about this, but the result of our caring is to bombard humans with lies.
I think if we stop encoding our claims using discrete snapshots, we can stop judging them on the discrete scale of whether they're "true" or not, and we might find a better overall quality of communication. (Of course, I don't mean we could stop altogether. I mean we could use discreteness less frequently.)
To return to your statement that "any field is going to strive to minimize humans' responsibilities in the loop or else it won't be useful to humans," this isn't a simple gradient descent problem, where there's a single tightly constrained way to add automation and we always minimize responsibility in the exact same way. There are lots of different ways to automate, and we think up new ways all the time. My claim is that it's worth being thoughtful and restrained about what automation we introduce, thinking hard about what its effects may be, and how we may roll it back if we find overly high unanticipated costs.
BTW, I encountered another example of runaway automation since I wrote my earlier comment. http://www.foreignaffairs.com/articles/141729/francis-fukuya... was a great narration of the recent history of the US in using people as automation by creating rules and institutions, and the incredibly poor consequences of some of those decisions.
I don't know :) I struggled to share some half-baked ideas in my post above, but yeah it's very hard to do rollbacks in the real world when people start to rely on infrastructure. Even my example of forcing people to sell houses in some situations is probably politically infeasible.
I'm far more confident :) just pointing at these problems to motivate that certain kinds of constructs might be counter-productive in software because they encourage the wrong kinds of dynamics. For example, I think namespaces and modules are socially counter-productive even when implemented in a technically impeccable manner, because they encourage division of labor. In even the smallest of projects, without thinking about it, we often start out with the goal of working with someone, and proceed to do so by defining an interface and then working apart. That is no fun, and it is gradually self-limiting. I think we do this because it's so easy to create interfaces and silos, to isolate ourselves from each other's changes using namespaces and similar constructs. By creating these constructs, by making them too easily available, we forget to ask if we should use them. This is, I think, an example of taking the human out of the loop. But even here, the only way I can think of to do a rollback is by trying to build a software stack from scratch.
"For example, I think namespaces and modules are socially counter-productive even when implemented in a technically impeccable manner, because they encourage division of labor. [...] it's so easy to create interfaces and silos [...]"
Well, I think module systems are an essentially flawed idea just like everything else, and "socially counter-productive" is something I might say alluding to specific ways they're used, but I don't think they're implicated like that.
These days it's very easy to whip up ad hoc module systems with precise encapsulation properties, because we've already had such a push toward OO encapsulation; shared-nothing concurrency; proof systems with cut elimination; a concept of independent free will of each human; etc. However, some of the easiest ad hoc systems have obvious shallow flaws, like allowing accidental namespace collisions.
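To make that concrete: the easiest possible ad hoc module system is just a flat dictionary of definitions, and the collision flaw falls right out of it. A toy sketch, with purely illustrative names:

```python
# The easiest ad hoc module system: a flat registry of definitions.
# Its shallow flaw is that accidental namespace collisions are silent.

registry = {}

def define(name, value):
    registry[name] = value   # flaw: silently overwrites any earlier definition

def lookup(name):
    return registry[name]

define("parse", lambda s: s.split())   # one author's parser...
define("parse", lambda s: int(s))      # ...clobbered by a colliding definition
```

After the second `define`, the first author's behavior is gone with no warning, which is exactly the kind of shallow flaw a real module system papers over.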
On a good day, encapsulation is great for vagueness: If I have to encode a vague concept in a precise programming language anyway, I'm going to think of more than one way to formalize it, and encapsulation gives me a buffer for trying several possible formalizations without changing the rest of the system. Encapsulation lowers the cost to tackle vague projects. It's arguably the kind of rollback you're looking for!
A technically impeccable module system would lower the cost to create technically impeccable encapsulations, thus facilitating vague projects better than ever... on a good day.
However, module systems will always have incremental upgrades to work on, especially regarding performance and logical completeness, not to mention keeping up with innovations in how we choose to share our vague concepts or how we choose to audit them back to their authors. And module systems will at best be a bit of a siren's song, since they facilitate the irresponsible practice of making things.
"In even the smallest of projects, without thinking about it, we often start out with the goal of working with someone, and proceed to do so by defining an interface and then working apart."
Yeah, even in cases where people specifically try to work together more seamlessly, it's hard. To be blunt about it, human bodies are already silos, and our natural language utterances are practically discrete snapshots already.
As a way forward, I'm interested in encapsulation methods that don't lie about how encapsulated they really are in practice, and I'm interested in finding a more "direct manipulation" style of programming rather than the text-editing/tree-editing we use now.
I'm kind of liking David Barbour's plan and your plan of keeping a history recording. Perhaps this recording could be the primary thing that is actually "made," and programs can be invoked from examples there rather than meticulously authored.
That's interesting. I guess I'm only opposed to namespacing once the experiments stop ^_^. But it seems hard to draw that line..
"I'm kind of liking David Barbour's plan and your plan of keeping a history recording. Perhaps this recording could be the primary thing that is actually "made," and programs can be invoked from examples there rather than meticulously authored."
Interesting. I'm not sure how that would work, but in the past week I thought of a maybe-similar hypothesis: perhaps we need notational help only for scenarios. So when you open a new codebase, the first thing you should see is example runs. If you're lucky they'll be curated by the previous programmer, guiding your attention first to the most basic functionality. But regardless, you see some notational representation for scenarios, while the code that implements that functionality remains in a sort of assembly-like language. I'm not opposed to high-level languages, but if we have a limited amount of bandwidth for managing notations, perhaps we should spend it on tests before code.
And yeah, I too noticed that my attention has moved from notation to tools, just like David Barbour. It happened without conscious desire to imitate him, as far as I am aware. Perhaps that means we're on the right track.
I'm probably at the limit of my ideas :) I don't want to overstate how confident I feel about them. So far the best way I can think of to prevent module overuse from making codebases harder to understand is: don't support namespaces.
But I hadn't considered inexact alternatives, and I still don't understand what that entails. But it might well work better than dropping namespaces entirely. So my previous comment was in the vein of vague encouragement. Just throwing out ideas as they occur to me.
"What if the only way to share a program were by example?"
There might well be something here! We'd need a system that can integrate multiple scenarios into a single function. But I have no idea how to build such a system, and it seems far harder than what I'm trying to build. Then again, maybe that will change as we talk about it.
"What would distinguish program notation from scenario notation?"
I don't know yet. I suspect every program would need to invent its own notations mimicking its target domain. A browser may need notations like "when click(...) ...", while a server might need a notation like "when received(...) ...". Maybe those two can be integrated.
The general template for a scenario seems to be when X then Y. X would include events and fakes to insert into the system at various points, either carefully orchestrated or randomized like in Quickcheck. Y would include assertions of constraints of various kinds, either on the output, or on the trace generated by the system, or on invariants at various static points in the code. That's what I have so far. Can you think of anything else?
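One way to read the template concretely: X is a list of events/fakes to inject, Y is a list of assertions over the output and the trace the run produced. A minimal sketch, where `run_scenario` and everything around it are hypothetical names, and the QuickCheck-style randomized variant of X is left out:

```python
# Minimal "when X then Y" scenario runner: feed each event in `when` to
# the system, record a trace of (event, output) pairs, then check every
# predicate in `then` against the whole trace.

def run_scenario(system, when, then):
    trace = []
    for event in when:
        output = system(event)
        trace.append((event, output))
    return all(check(trace) for check in then)

# Example system under test: a running counter.
state = {"n": 0}
def counter(event):
    state["n"] += event
    return state["n"]

ok = run_scenario(
    counter,
    when=[1, 2, 3],                                   # orchestrated events
    then=[lambda tr: tr[-1][1] == 6,                  # assertion on the output
          lambda tr: all(out >= 0 for _, out in tr)]  # invariant over the trace
)
```

Assertions on invariants at static points in the code would need the system to emit those points into the trace too, but the shape stays the same.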
"But I hadn't considered inexact alternatives, and I still don't understand what that entails. But it might well work better than dropping namespaces entirely."
What's your goal with dropping namespaces? You keep bringing up namespaces as though they're what I'm talking about, but I don't even know what kind of namespaces you have in mind. :)
I am indeed interested in dropping certain approaches almost entirely in favor of inexact alternatives, but until I understand what those alternatives are, I can't claim it'll work very well. :-p
One example of an inexact encapsulation technique is physical separation: Two people build Lego structures on separate corners of a table, and sometimes they may construct bridges linking the two, but they can return to working on separate parts at any time. (Unfortunately, this is still an example of making things.)
There are probably lessons in architecture, improvisational theatre, sociology, and a lot of other fields where the boundaries are soft. Just because the boundaries are soft doesn't mean they can't go fast enough to compete with the automation power of logical precision... or does it?
One thing that might help in the meantime is to think not about making things but about augmenting ourselves. This way we may still be dealing with discrete this-person/that-person divisions, but at least we aren't dealing with a discrete creator/creation divide on top of that.
(David Barbour has been talking about encouraging the self-augmenting task of programming the programmer-UI, but I'm not sure I remember his motivation. Maybe I'm getting this from him.)
"The general template for a scenario seems to be when X then Y. [...] That's what I have so far. Can you think of anything else?"
That actually sounds a lot like the Era module system. Not in the idiosyncratic details, but in the fact that we're both managing top-level sets of things that do Y when hedged under some modality X. I'm thinking of them as programs with imports, and you're thinking of them as testing scenarios with initial conditions. Perhaps scenarios are a special case of programs?
I'd say "perhaps they're complements," except that it's hard to be specific about any concept without it turning into a program at some point.
This might be why I'm interested in understanding programs as a special case of scenarios, if that's possible.
Sorry, no offense intended. I think I'm ascribing to you something closer to the mainstream view than my own, so you're a stand-in for the world :) Not entirely without reason; you haven't said anything in this thread, but at other times you've wanted to define your modules in such a way that they'd continue to work regardless of what surrounding modules contain. I'm not sure how far you've moved away from such ideas, and I don't mean to attack you, so where I imply 'you', please assume 'everyone but me'. :)
My radical approach is: "let the caller hack my library to get it to work." Have you been converted quite that far? :)
Er, sorry, I think I must talk in a decisive way that makes me look offended. I'm so indecisive in general, I have to take what I can get. :)
I don't know what would have offended me, so don't worry about it.
"I think I'm ascribing to you something closer to the mainstream view than my own, so you're a stand in for the world :)"
As someone who has spent this thread denouncing the evils of truth and making things, when I hear you calling me mainstream, it's heartwarming. :)
"Not entirely without reason; you haven't said anything in this thread but at other times you've wanted to define your modules in such a way that they'd continue to work regardless of what surrounding modules contain."
Yes, that's something I still consider an essential feature of a module system. And I think module systems themselves are a matter of course if we have a certain culture around collecting discrete snapshots (research).
But I'm interested in how much we can avoid discrete snapshots in the first place, as well as what different cultural attitudes we could adopt.
"My radical approach is: "let the caller hack my library to get it to work." Have you been converted quite that far? :)"
That depends on what context we're discussing this in.
- In the here and now, I'm taking this day to day. I don't have a strong opinion on how much I should indulge my fondness of making things, what I should make or procure for whom, and how much I should work closely with whom. Nor do I have a strong opinion on what you should do. :-p Or maybe it's more accurate to say that I might have strong opinions if presented with more specific situations.
- For thousands of years down the line, I hope we get to "let the caller communicate with me if they want to, under the ethical and logistical supervision of our local peers." There's rarely a "library" involved, and if there is, it's the one who's saying "me" here.
- In the projects I've been working on and pondering, I already have a (very incomplete) plan to allow Era modules to break each other's encapsulation. The idea is that one module should have special privileges with another module as long as it can prove it already knows that module's implementation code or has that module author's permission. A user must have the implementation code of a module in order to install it in the first place, so they can of course hack on it, break compatibility, and find themselves rewriting a whole chain of dependencies to use their customization; but this way they can sometimes just add a new module that takes advantage of having extra insider knowledge about the original.
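A toy model of that "prove you already know the implementation" check, not Era's actual design, just an illustration where knowledge of the source text is demonstrated by presenting its hash:

```python
import hashlib

# Toy model of insider-knowledge access: a module exposes only a public
# surface, but a caller that can prove it already knows the module's
# implementation code (here, by re-presenting the source text, checked
# via its hash) gets the internals too. All names are illustrative.

class Module:
    def __init__(self, public_api, implementation_code, internals):
        self.public_api = public_api
        self._code_hash = hashlib.sha256(
            implementation_code.encode()).hexdigest()
        self._internals = internals

    def open_internals(self, claimed_code):
        """Grant insider access only to callers who know the source text."""
        claimed = hashlib.sha256(claimed_code.encode()).hexdigest()
        if claimed != self._code_hash:
            raise PermissionError(
                "caller cannot prove knowledge of the implementation")
        return self._internals

source = "def double(x): return x * 2"
mod = Module(public_api={"double": lambda x: x * 2},
             implementation_code=source,
             internals={"source": source})

insider = mod.open_internals(source)  # succeeds: we know the code
```

A caller who passes anything but the exact source gets a `PermissionError`, which is the toy analogue of a failed proof obligation.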
In fact, now that I think about it, this "extra insider knowledge" access is similar to the extra information you're trying to access from your scenarios concept. Hey, we might be able to design a formal system together. :) Did you ever get around to looking at linear type systems?
EDIT to add: I was encouraged to think about this "extra insider knowledge" plan thanks to a recent LtU thread about "private" and "public" access controls: http://lambda-the-ultimate.org/node/4965. The Era module system has only one top-level definition per module, so a private definition doesn't make sense unless there's some way to break into it. Before that thread, I was already worried about what it would take to allow an author to prove extra things about the other definitions they had already published, but that's where I started thinking about a solution.
"Did you ever get around to looking at linear type systems?"
I was (and still am) waiting for you to teach them to me :p
My recent explorations are hardening my bias for operational/ugly/imperative approaches over declarative/elegant/functional ones. The trouble with being elegant is that the implementation is harder to tinker with without causing subtle issues. Later readers have to understand global properties at a deep level.
Don't get me wrong, there's room for high-level languages in my world-view. They're just harder to communicate the big picture of, and I want to start with simpler problems. High-level languages and type systems are hard crystals, and I'm currently more interested in building squishy cells.
"I'm currently more interested in building squishy cells."
I respect that, but just to get it written down, here's what I've come up with so far:
All lambda expressions must be lambda-lifted so they're at the top level of some module. (I guess this might count as a lambda-free logical framework.)
Every function type (X -> Y) is annotated as (X -> [C] Y) with the implementation code C of that function's source module. A simple export/import of an X-to-Y function will actually bind an import variable of the type (existscode C. (SatisfiesSignature S C * (X -> [C] Y))), where S is the import signature.
If a module has access to installed implementation code C and it knows SatisfiesSignature S C and SatisfiesSignature S C', then it can conclude C = C'. It can take advantage of this knowledge to get access to fuller knowledge about how an (X -> [C'] Y) function behaves.
"I was (and still am) waiting for you to teach them to me :p"
Really? I don't remember where we left off, but let's get in touch on this.
"...if many programmers each willingly stray from standard notations, they make the world of program code less useful in at least one way, to at least one audience: Newcomers face a steep learning curve, encountering different notations everywhere they look."
But if you rely less on notation and more on interactive feedback, this might be less of a problem :)