T-expressions are my attempt at answering the question 'what's the minimal clean notation for expressing Prolog terms in S-expressions?'
PicoLisp's Pilog, for example, just decides that 'a term is a list'. That's very clean and simple, but unfortunately it means you can't distinguish between a term AND a list, and it's often very important to tell the two apart.
So the simplest solution is just to reserve one symbol (I use '/' because it's available and not special-syntax on most Lisps... though any symbol would do, and if I were writing from scratch, I might think about repurposing '.') to indicate 'the start of a Prolog-like term'.
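To make the idea concrete, here's a minimal Python sketch (mine, not anything from the original posts) of the reserved-symbol convention: S-expressions modeled as nested Python lists, with the symbol '/' in head position marking 'this is a term, not list data'. The names TERM_MARK, is_term and is_plain_list are my own inventions.

```python
# S-expressions as nested Python lists; the reserved symbol "/" in head
# position marks a Prolog-like term rather than ordinary list data.

TERM_MARK = "/"  # the reserved symbol; any symbol would do

def is_term(sexpr):
    """True if this expression is a term: a list starting with the marker."""
    return isinstance(sexpr, list) and len(sexpr) > 0 and sexpr[0] == TERM_MARK

def is_plain_list(sexpr):
    """True if this expression is ordinary list data."""
    return isinstance(sexpr, list) and not is_term(sexpr)

# (a b c)        -> plain data
# (/ parent a b) -> a term, e.g. Prolog's parent(a, b)
print(is_plain_list(["a", "b", "c"]))            # True
print(is_term([TERM_MARK, "parent", "a", "b"]))  # True
```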
The rest of that post series is really just about teasing out some of the implications of this, because I think it opens a lot of interesting possibilities for adding a very minimal notion of 'typed expression' to Lisp. Minimal in that there is no specification of what the meaning of a type is; just that a certain expression is more than just a raw list.
There are a number of applications of this: for example, we can replace almost all special non-S-expression 'reader macro' syntax.
Also, having lists which are not Lisp function/macro calls means we could implement 'implicit cons', which lets us drop a lot of complexity in list-builder expressions. For example:
If x were (a b) and y were (c d e), then a standard Lisp expression might look like this:

(cons 1 (cons (car x) (cdr y)))

which evaluates to (1 a d e).
But if we had a Lisp where function calls (or all forms) were term expressions, i.e. prefixed with '/', then we could write that expression as:
(1 (/ car x) / cdr y)
Suddenly we get a lot cleaner notation... fewer nested parens, less awkward syntax, and we can apply this technique to a lot of things.
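Here's a rough Python model of how 'implicit cons' could evaluate that form; the data encoding (tuples starting with '/' for terms) and the function names are assumptions of mine, not anything defined in the posts.

```python
# Evaluate a list form by consing its elements together, where a trailing
# (/ ...) term supplies the TAIL of the list instead of becoming an element.

def ev(form, env):
    """Evaluate a tiny subset: symbols look up in env, (/ op arg) applies op."""
    if isinstance(form, str):
        return env[form]
    if isinstance(form, tuple) and form[0] == "/":   # a term: (/ op arg)
        op, arg = form[1], form[2]
        ops = {"car": lambda l: l[0], "cdr": lambda l: l[1:]}
        return ops[op](ev(arg, env))
    return form                                      # literal

def implicit_cons(forms, env):
    """(1 (/ car x) / cdr y): the final term is the tail, the rest are elements."""
    *heads, tail = forms
    return [ev(f, env) for f in heads] + list(ev(tail, env))

env = {"x": ["a", "b"], "y": ["c", "d", "e"]}
# (1 (/ car x) / cdr y)  ==>  (1 a d e)
print(implicit_cons([1, ("/", "car", "x"), ("/", "cdr", "y")], env))
# [1, 'a', 'd', 'e']
```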
For another example: say we had a list which was generated from a procedure (local or remote) and we wanted to show that it continues at a remote website:

(1 2 3 / more-at-URL)
where "more-at-URL" is either a function that evaluates to a list, or just a piece of syntax that's parsed by some higher level.
This is a basic "lazy list" technique, well known to Lisp people for decades... but the fact that we don't need to hide it behind lambdas, that we can do this inline and mark up ANY list with a CDR that is an arbitrary expression, is, I think, an important capability worth thinking about.
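For what it's worth, the lazy-tail idea can be modeled in Python like this (my own sketch; Cons, more_at_url and to_list are hypothetical names): a cons cell whose tail may be a zero-argument callable, forced only when the walk reaches it.

```python
# A cons cell whose CDR may be data, another cell, or an unevaluated
# expression (here: any callable) that produces the rest of the list.

class Cons:
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

def more_at_url():
    # Hypothetical stand-in for fetching the remote continuation of the list.
    return Cons(4, Cons(5, None))

def to_list(cell):
    out = []
    while cell is not None:
        if callable(cell):        # an unevaluated tail: force it now
            cell = cell()
            continue
        out.append(cell.head)
        cell = cell.tail
    return out

lst = Cons(1, Cons(2, Cons(3, more_at_url)))   # (1 2 3 . <expression>)
print(to_list(lst))   # [1, 2, 3, 4, 5]
```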
I also think that we can then take this syntax and extend it down to the level of formal logic, because what we are essentially doing is writing (slightly extended) First-Order Predicate Logic expressions... and so we might want to think about exact mappings between FOPL and our Lisp expressions. And vice versa: I think we need to look hard at what Lisp, especially the S-expression 'dotted pair' allows us to express that standard vanilla FOPL does not. And ask what that extra piece of information might imply for logic - in the vein of, eg, what the HiLog researchers found, since this is a very similar kind of syntactic extension to FOPL as HiLog is. See, for example, the 1980s Sinclair Spectrum MicroProlog, which uses S-expressions internally to store Prolog expressions, and which discovered the dotted-pair technique for FOPL expressions and called it 'meta-variables'.
Final note: my syntax in the blog posts assumes that I have a custom reader macro, so I don't put a space after the / wherever it appears, in order to make the expressions look a bit cleaner... but if you were embedding T-expressions on top of S-expressions in an existing Lisp, you would put a space there. You'd also need to sort out some way of disambiguating, in all cases, between /-as-symbol and /-as-syntax. This last bit might be a little tricky.
Thanks! The reason it's hard to get is probably there's very little to get - this is just a syntax/notation tweak, I haven't built a proper language on top of it yet.
The main idea is pretty much just: S-expressions have dots; what if the dotted part could be not just an atom but a full structured expression, like a Lisp form (or a Prolog term, if you're familiar with Prolog)?
This very small idea opens up a number of new possibilities in creating very compressed notation, I think. It's mostly useful for people interested in building Domain Specific Languages, where you want a minimum amount of friction from the underlying syntax, and no limit on the kinds of expressions that can be represented.
With a particular emphasis on something that can be embedded easily into raw text as a markup language, without needing to remap a lot of characters like in HTML, but that also allows the full syntactic power of an arbitrary programming or data-representation language (eg Lisp or Prolog) if required. Also something that can be used to represent very low-level concerns, like circular references and read-time macro insertion and other tricky syntax-level things that Lisps tend to use reader macros to do.
Again, this is purely at the syntax level. Translating an actual language to work smoothly on top of this syntax is a much bigger project. But this syntax stands midway between raw S-expressions and 'heavier' syntaxes that have a notion of 'typed expressions' which S-expressions don't natively provide.
Think of T-expressions as a proposal for a 'concrete syntax tree' format: ie, an unambiguous representation of an arbitrary Abstract Syntax Tree on top of the usual Lisp/S-expression components of pairs and symbols. Raw S-expressions don't quite provide enough tools to express all possible Abstract Syntax Trees (because the CDR of a dotted pair can't distinguish between a list and a typed expression). But T-expressions can.
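As a sketch of that last claim, here's one way to model it in Python (the data model and names are my assumptions): with only pairs, a dotted CDR can't say whether it continues the list or is a typed node; one extra Term wrapper restores the distinction.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Pair:
    car: Any
    cdr: Any

@dataclass
class Term:           # a "typed expression": head names the type
    head: Any
    args: tuple

# (a b c)                    -- the CDR continues the list
plain = Pair("a", Pair("b", Pair("c", None)))

# (a . /url "http://example.com")  -- the CDR is a typed expression
typed = Pair("a", Term("url", ("http://example.com",)))

def cdr_kind(p):
    if isinstance(p.cdr, (Pair, type(None))):
        return "list"
    if isinstance(p.cdr, Term):
        return "typed-expression"
    return "atom"

print(cdr_kind(plain))   # list
print(cdr_kind(typed))   # typed-expression
```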
And then `quasiquote` itself is a macro. It's an interesting macro to implement, and I just spent like two years trying to streamline the way I was implementing it for my languages 'cause I kept adding bells and whistles to it. :)
If you write a slightly different macro where the syntax for (unquote ___) is (/ . ___), then you could use it like this, which matches the syntax of your example exactly:
(t-expr (1 (/ car x) / cdr y))
I think the advantages of quasiquotation specifically shine when the quoted data is also program code in the same language (or like you just said, a DSL). In that specific case, I like that the code I'm writing looks the same regardless of whether it's under an unquote or not. There's a practical advantage in terms of being able to copy the code back and forth as necessary.
It sounds like you're looking at this as a data encoding? A lot of data encodings take pains not to be Turing-complete. Then again, for inexpressive languages like HTML, there are a lot of Turing-complete templating languages designed to dynamically generate them.
You can of course experiment with that in most Lisp dialects using a quasiquoted term where all the unquotes have functions in them:

(t-expr `(1 ,(lambda () (car x)) . ,(lambda () (cdr y))))
And you could write a macro that puts in those lambdas automatically so it's not so verbose. Then your parser could use the Lisp's reader, wrap the result in (t-expr ...), and eval it in an appropriate namespace to yield your compiled program-syntax-with-functions-in-it. Then you could pass that syntax through an interpreter.
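Here's a rough Python analogue of that flow (all names and the 'unquote' encoding are assumptions of mine): "compile" a template so each unquote hole becomes a zero-argument function, then let an interpreter walk the result and call the functions it finds.

```python
# Stage 1: replace ('unquote', expr) nodes with thunks closing over env.
# Stage 2: an interpreter walks the compiled tree, forcing any thunks.

def compile_template(tmpl, env):
    if isinstance(tmpl, tuple) and tmpl[0] == "unquote":
        expr = tmpl[1]
        return lambda: eval(expr, {}, env)   # the lambda "put in automatically"
    if isinstance(tmpl, list):
        return [compile_template(t, env) for t in tmpl]
    return tmpl

def interpret(node):
    if callable(node):
        return interpret(node())
    if isinstance(node, list):
        return [interpret(n) for n in node]
    return node

env = {"x": ["a", "b"]}
compiled = compile_template([1, ("unquote", "x[0]"), 2], env)
print(interpret(compiled))   # [1, 'a', 2]
```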
Is this all making sense? XD I started going through ideas a little fast, and maybe I missed your point a long time ago.