1 point by Pauan 4196 days ago | link | parent

Once I add in things like infix, I can change optional arguments to use =:

  (var {a b = 5} {1})
The above expands to this:

  (var (list a (is b 5)) (list 1))
And then "is" would implement pattern match behavior.


1 point by rocketnia 4196 days ago | link

"(var {a b = 5} {1})"

I don't especially like the idea of infix operators breaking up juxtaposed identifiers like "a b". Infix operators are visible boundaries, and juxtaposition has no boundary except whitespace.

However, this is lisp syntax. Juxtaposition is an element separator, not an operator of its own. Ultimately, I don't know where I stand on this. :)

---

"And then "is" would implement pattern match behavior."

Because of my preference for patterns to be inverted expressions, I would rather (is b 5) match only t or nil. If the input is t, then b will be bound to 5. If the input is nil, b will be bound to anything but 5.

Not that I would have any use for such an operator.

---

The inverse of the optional arg pattern (o b 5) would be a splicing(!) expression which usually returned a length-1 subsequence containing b, but might(!) instead return a length-0 subsequence if b were 5. I wouldn't have a use for this operator either.

Actually, it might not be as simple as that. Should the expression 5 become the pattern 5 when we invert (o b 5)? What would that mean?

-----

1 point by Pauan 4196 days ago | link

"The way I like to look at it, the correspondence between patterns and expressions is based on one being the inverse of the other. If a value is matched and then reconstructed using the same code, it should become the same value.

For example, since (cons a b) as an expression takes two inputs and gives one cons cell as output, its pattern version takes one cons cell as input and gives two outputs."

I think you're thinking in terms of Haskell pattern matching. I view pattern matching as being more general and dynamic, not exclusively about destructuring the components of data. I see "data destructuring" as being one kind of pattern matching. Nulan supports this kind, of course, but it supports other forms too. For instance, regexps are a form of pattern matching that slices and dices strings, but you can't recreate the string from the regexp.

---

"The pattern version of (and ...) would take one input and give at least one output. (I'm ignoring zero-argument (and) here.) If the input is nil, the outputs can be picked arbitrarily from the space of all possible values, as long as at least one of them is nil. If the input is not nil, then the last output will be equal to that value, and the other outputs can be any arbitrary non-nil values. Thanks to all this nondeterminism, an 'and pattern isn't exactly useful or even meaningful."

That's not how Nulan's pattern matching works. At least, for now. The point of Nulan's pattern matching is simply to take an environment, a pattern, and some blob of data and return an environment that possibly binds new variables to the data. This means a pattern could insert arbitrary stuff into the environment, or return an empty environment, etc.

I'm not big on the whole staticy guarantee thing, where you try to prevent bad stuff from happening. So my stance is, "well, if you wanna do that stuff, go ahead, even though I personally don't see much use for it." And if somebody happens to write a pattern matcher that does funky stuff, I can simply choose to not use it. That's what modules are for: to decide which things you want to use and not use.

http://arclanguage.org/item?id=4460

---

"One refinement of this interface is to pick the outputs such that they're all equal to the input. That seems to be what you're going for."

That is kind of the point of "and", yes.

---

"The same implementation fits the requirements for a pattern version of 'or, so why choose one of these names over the other?"

No it doesn't. "or" matches only a single pattern, the first one that works. "and" always matches all the patterns, or fails. This is analogous to how "or" and "and" work in pretty much every programming language.
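
A tiny Python sketch of that difference (hypothetical names; only symbol and equality patterns, for brevity), where a pattern takes an environment and some data and returns an extended environment on success or None on failure:

  # Sketch only: "and" threads the env through every sub-pattern; "or" takes the first match.
  def match(env, pattern, data):
      if isinstance(pattern, str):              # a symbol: bind it
          return {**env, pattern: data}
      return env if pattern == data else None   # anything else: compare by equality

  def match_and(env, patterns, data):
      for p in patterns:                        # every sub-pattern must match the same data
          env = match(env, p, data)
          if env is None:
              return None
      return env

  def match_or(env, patterns, data):
      for p in patterns:                        # the first sub-pattern that matches wins
          result = match(env, p, data)
          if result is not None:
              return result
      return None

  print(match_and({}, ["a", 5], 5))   # {'a': 5}
  print(match_and({}, ["a", 6], 5))   # None (6 doesn't match, so the whole "and" fails)
  print(match_or({}, [6, "a"], 5))    # {'a': 5} (6 fails, "a" matches)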

---

"If pattern-and's outputs are always equal, then expression-and's inputs should always be equal too, right?"

I have no clue what you mean by expression-and. I guess you're talking about using "and" in a non-pattern context. Right now, "and" behaves like it does in Arc (and many other languages). Pattern matching and calling are two separate things and have two separate implementations.

---

"We can dodge both of these conundrums by defining 'as as its own operator and just giving up on finding inverted counterparts of 'as, 'and, and 'or. It's okay for some computations to be non-invertible--unless, of course, the language design demands otherwise."

I see no reason to define "as" as its own operator since I can simply use "and" to get the same effect. I guess "as" might be slightly more self-documenting.

---

"Besides inverting expressions, you and Racket (not to mention my past self) do some extra things with pattern matching beyond inversion: You allow the same variable to appear more than once in a pattern, and you allow backtracking when a match fails."

Yes.

---

"If the variables are seen as logic variables, then multiple variable occurrences make sense as multiple constraints to be unified. Racket and Nulan operations can only operate on values that have been determined up to equality, so the logic variable semantics degenerates into an equality test.

In another language, (and ...) as a pattern might constrain its arguments without actually making an arbitrary choice. (At this point it's even tempting to remove notions of "input" and "output" and just reason about a constraint graph, but I don't know how to work that way. In my vague thought experiments, the extra polarity comes in handy for choosing orders of evaluation and such.)"

No clue what you're talking about here.

---

"Personally, I would expect this unconstrained foo to shadow nonlocal variables named "foo" just as if it were constrained to a particular value. However, if I tried to access it, I would expect some kind of uninitialized variable error. Is your version of "The Right Thing(tm)" the same as mine?"

If one of the branches does not bind "foo" and you try to use "foo", it will use the lexically scoped "foo":

  (let foo 5
    (let (or {foo} bar) 2
      # foo is 5
      # bar is 2
      ))
That's a good point, though: I should support the ability to write a pattern matcher that binds non-matching branches to variables that throw an error when accessed.

That way you can have a version of "or" that uses the lexical scope for non-matching branches, or one that uses special variables or whatever.
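
A minimal sketch of the error-on-access idea in Python (names are made up): the or-pattern binds the losing branch's variables to a placeholder, and the environment lookup raises when it sees one.

  # Sketch only: a placeholder value that errors when read.
  class Unbound:
      def __init__(self, name):
          self.name = name

  def lookup(env, name):
      value = env[name]
      if isinstance(value, Unbound):
          raise NameError("variable " + repr(name) + " was not bound by the pattern")
      return value

  env = {"foo": 5}
  # suppose the or-pattern matched the "bar" branch, not the "foo" branch:
  env = {**env, "foo": Unbound("foo"), "bar": 2}
  print(lookup(env, "bar"))   # 2
  lookup(env, "foo")          # raises NameError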

---

"In other news, this kind of (o a 5) syntax means you're putting expressions inside your patterns inside your expressions."

No, it doesn't. These are all patterns:

  (o a 5)
  {a (o b 5)}
  (list a (o b 5))
"{a (o b 5)}" is just syntax sugar for "(list a (o b 5))"

A pattern is defined as:

  * A list, in which case the first element has a %pattern-match property that is a vau that describes how to do the pattern matching.

  * A symbol, in which case the value is bound to the symbol.

  * Everything else is compared by equality.
So in the above case, "list" has a %pattern-match property, and "o" also has a %pattern-match property. It's just simple recursion with first-class environments. Very simple to understand and implement.
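
A hypothetical Python sketch of those three cases (a plain dict stands in for a first-class environment, and a handler table stands in for the %pattern-match property):

  # Sketch only: dispatch on the shape of the pattern.
  HANDLERS = {}

  def match(env, pattern, data):
      if isinstance(pattern, tuple):                     # a list: dispatch on its head
          return HANDLERS[pattern[0]](env, pattern[1:], data)
      if isinstance(pattern, str):                       # a symbol: bind it
          return {**env, pattern: data}
      return env if pattern == data else None            # everything else: equality

  def match_list(env, patterns, data):
      if len(patterns) != len(data):
          return None
      for p, d in zip(patterns, data):                   # recurse left-to-right
          env = match(env, p, d)
          if env is None:
              return None
      return env

  HANDLERS["list"] = match_list

  print(match({}, ("list", "a", ("list", "b", 5)), [1, [2, 5]]))   # {'a': 1, 'b': 2}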

---

"You may need to ask yourself just what variables should be in scope in that inner expression; do they include some variables "previously" bound in the pattern? There was no need for this notion of "previously" until now."

This is a non-issue for the most part because if a variable occurs twice, it will test for equality. But yes, it currently evaluates the pattern in the environment of the previous patterns.

Because Nulan uses first-class environments, it's also easy to evaluate the patterns in the dynamic environment, or the lexical environment. I think I'll change it to use the lexical environment.

---

"I don't especially like the idea of infix operators breaking up juxtaposed identifiers like "a b". Infix operators are visible boundaries, and juxtaposition has no boundary except whitespace."

Personally, I think it looks nice and short. I might choose an operator other than "=", but I don't see a problem with the idea itself. You just have to keep in mind that "=" is an infix operator, that's all.

---

"Because of my preference for patterns to be inverted expressions, I would rather (is b 5) match only t or nil. If the input is t, then b will be bound to 5. If the input is nil, b will be bound to anything but 5."

I don't have much interest in constraining pattern matching to only be inversions of expressions. User code can supply any kind of pattern matching they want. And the reason for using "is" is because "a = b" expands to "(is a b)". It isn't actually tied to the is-expression, as you might call it.

Also, I would probably use "if" for such an expression. As in:

  (var (if a)     %f)    -> error
  (var (if a 1)   %f)    -> error
  (var (if a 1 2) %f)    -> 2
  
  (var (if a)     "foo") -> "foo"
  (var (if a 1)   "foo") -> 1
  (var (if a 1 2) "foo") -> 1
That actually seems pretty useful. Thanks for the idea.
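
A minimal Python sketch of that table (purely illustrative; %f is modelled as False, and the return value is what "a" would end up bound to):

  # Sketch only: branches is [], [then], or [then, else], like (if a), (if a 1), (if a 1 2).
  def if_pattern(data, branches):
      if data:                                  # truthy input
          return branches[0] if branches else data
      if len(branches) == 2:                    # falsy input with an else branch
          return branches[1]
      raise ValueError("falsy input and no else branch")

  print(if_pattern("foo", []))        # 'foo'
  print(if_pattern("foo", [1]))       # 1
  print(if_pattern("foo", [1, 2]))    # 1
  print(if_pattern(False, [1, 2]))    # 2
  # if_pattern(False, []) and if_pattern(False, [1]) both raise, matching the errors above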

-----

3 points by rocketnia 4196 days ago | link

"I have no clue what you mean by expression-and. I guess you're talking about using "and" in a non-pattern context."

Yep, specifically an expression context. :-p

---

"I think you're thinking in terms of Haskell pattern matching. I view pattern matching as being more general and dynamic, not exclusively about destructuring the components of data."

I see that, of course. I'm not even saying I disagree with adding that functionality, just saying I'd choose operator names so that whenever a pattern and an expression are identical, they correspond in semantics too.

Haskell's pattern matching supports backtracking if the pattern fails, but its expressions don't support the same feature, so I hesitate to cite Haskell as a good example of what I'm going for. It is closer, though.

---

Me: "The same implementation fits the requirements for a pattern version of 'or, so why choose one of these names over the other?"

You: "No it doesn't. "or" matches only a single pattern, the first one that works. "and" always matches all the patterns, or fails. This is analogous to how "or" and "and" work in pretty much every programming language."

I was still talking about pattern-expression symmetry, trying to explain it with enough examples to convey how I reason about it and why I might make different choices than you do.

Your chosen definitions of pattern-and and pattern-or are (in most ways) not inverses of your definitions of expression-and and expression-or.

The point of pattern-as is that it takes one input and gives some outputs equal to it. If on the other hand you give several equal inputs to expression-and or expression-or, they'll all be equal to the output. Thus, both of these expression operators are special cases of the inverted interface of pattern-as.

---

"No clue what you're talking about here."

Logic variables are variables whose value is defined by solving a system of partial descriptions. Here's a sketch of an example:

  (introduce-logic-vars (a b c d)
    ; We don't know anything about a, b, c, or d yet.
    (= a (cons b c))
    (= (cons d a) c)
    ; Now we know a is the infinite list (b d b d b d ...).
    ; The order of = doesn't matter.
    (= a (cons b a))
    ; Now we know b and d are identical. 
    (= c "not a cons")
    ; Now we know there's an error, since the descriptions are
    ; inconsistent.
    
    )
The terms "unification" and "constraint" may come in handy too, but these are strongly associated with logic programming and constraint programming respectively, and I'm not sure which kind of programming I'm talking about here.

---

Me: "In other news, this kind of (o a 5) syntax means you're putting expressions inside your patterns inside your expressions."

You: "No, it doesn't."

The "5" part is an expression, right?

Suppose the pattern were {a b c (o d (cons b e)) e}. Should the expression (cons b e) see the value of b as bound by the pattern, or the version in the nearest non-pattern surrounding scope? What version of e should it see?

---

"That actually seems pretty useful. Thanks for the idea."

Well, I'm glad you found something that might be useful, but I think that idea is all yours, lol. ^_^

-----

1 point by Pauan 4195 days ago | link

"I see that, of course. I'm not even saying I disagree with adding that functionality, just saying I'd choose operator names so that whenever a pattern and an expression are identical, they correspond in semantics too."

It's up to the object to decide whether to ensure that correspondence or not. What operator name would you suggest for my "and" pattern matching?

---

"I was still talking about pattern-expression symmetry, trying to explain it with enough examples to convey how I reason about it and why I might make different choices than you do."

Yes, I know, and I'm saying I don't see any real benefit to that. So you're going to have to make a harder case to convince me.

---

"You're choosing definitions of pattern-and and pattern-or are (in most ways) not inverses of your definitions of expression-and and expression-or."

Yes, but I think it fits with the intuitive understanding of what "and" and "or" do. "and" makes sure all its arguments are "truthy" and "or" selects the first "truthy" argument. In this case, "truthiness" means whether the pattern matches or not. In an expression context, "truthiness" would mean "not %f".

---

"The point of pattern-as is that it takes one input and gives some outputs equal to it. If on the other hand you give several equal inputs to expression-and or expression-or, they'll all be equal to the output. Thus, both of these expression operators are special cases of the inverted interface of pattern-as."

Yes, and "and" is a superset of "as" because it behaves exactly like "as" when the first argument is a symbol and the second argument is a pattern:

  (var (and a {b c}) {1 2})
  (var (as a {b c}) {1 2})
But as you rightfully pointed out, "and" can do things besides that, because it can have more than 2 arguments, and it doesn't require a symbol as the first one. So "as" is effectively a restricted form of "and"; in other words, "and" is a superset of "as".

---

"Logic variables are variables whose value is defined by solving a system of partial descriptions."

Okay. I have no clue how to implement such a system, so I'll stick with my nice simple system.

---

"The "5" part is an expression, right?"

In that particular case, I suppose so, yes.

If you call "pattern-match", it's a pattern, if you call "eval", it's an expression. "o" happens to call "eval" on its second argument, so it's an expression.

---

"Suppose the pattern were {a b c (o d (cons b e)) e}. Should the expression (cons b e) see the value of b as bound by the pattern, or the version in the nearest non-pattern surrounding scope? What version of e should it see?"

The "o" pattern matcher can actually choose whether its second argument is evaluated in the pattern-match environment or the enclosing environment. I suppose the "obvious" thing to do would be to evaluate it in the pattern-match environment.

As for "e"... the "list" pattern matcher does its pattern matching from left-to-right by recursing on the cdr of the list. Which means at the time (cons b e) is evaluated, "e" is not in the pattern-match environment.

As for what version it should see... I like the simplicity of doing the pattern matching from left-to-right, so I suppose it would see whatever "e" is bound to (or not) in the pattern-match environment at the time it's evaluated. In that case, it'll probably use the "e" from the enclosing scope.

---

"Well, I'm glad you found something that might be useful, but I think that idea is all yours, lol. ^_^"

Yes, but your idea sparked the idea in my head.

-----

2 points by rocketnia 4195 days ago | link

"I have no clue how to implement [logic variables]..."

I'll pretend you're curious though~! :D

In proof theory terms, constraint logic programming is closely related to proof search: Given a theorem, it finds a proof. Functional programming is closely related to proof normalization: Given a proof, it finds a simpler way to express the same proof (by, for example, reducing the proof all the way to a literal value).

Going by this correspondence, one way to combine these kinds of programming is by putting the theorem-heavy constraint logic code into a type system for the proof-heavy functional code. Agda is probably a very expressive language for this approach, since the full computational power of its "static" type system can potentially live until run time. Other typed functional languages with implicit arguments or type classes fit this bill too, but the phase separation might stifle it.

If you're not so interested in type systems, other languages have tackled this combination of programming paradigms in a more procedural way. It looks like http://en.wikipedia.org/wiki/Oz_(programming_language) and its "see also" links are a good starting point.

A search for [racket constraint logic] brings up Racklog (http://docs.racket-lang.org/racklog/index.html). It doesn't look like Racklog has numeric constraints, and it's based on Prolog besides, so it's clearly in the logic programming camp.

I don't really have a clue how this stuff is implemented, but I fear it might just be a lot of backtracking search most of the time. :-p I'm guessing (ISO standard) Prolog is particularly doomed to backtracking search, since some of its predicates have side effects.

There's probably some kind of relationship between depth-first vs. breadth-first search and strict vs. lazy evaluation....

---

"[...]so I'll stick with my nice simple system."

"I like the simplicity of doing the pattern matching from left-to-right[...]"

As Rich Hickey says, "simple" and "easy" are distinct concepts. An easy implementation doesn't necessarily lead to a simple result.

In his "Simple Made Easy" talk, he specifically considered 'fold more complex than 'reduce due to the way it imposed a left-to-right or right-to-left ordering that wasn't necessarily part of the programmer's intent.

Ironically, he also dissed type systems. :)

-----

2 points by Pauan 4193 days ago | link

"It looks like http://en.wikipedia.org/wiki/Oz_(programming_language) and its "see also" links are a good starting point."

  fun {Fact N}
     if N =< 0 then 1 else N*{Fact N-1} end
  end

  fun {Comb N K}
     {Fact N} div ({Fact K} * {Fact N-K}) % integers can't overflow in Oz (unless no memory is left)
  end

  fun {SumList List}
     case List of nil then 0
     [] H|T then H+{SumList T} % pattern matching on lists
     end
  end
I found the above to be cool: notice that the function arguments match the function call. This is like what akkartik was talking about: http://arclanguage.org/item?id=16924

-----

1 point by Pauan 4194 days ago | link

"I'll pretend you're curious though~! :D"

I admit I am vaguely curious about lots of things. That Racklog in particular seems interesting.

---

"As Rich Hickey says, "simple" and "easy" are distinct concepts. An easy implementation doesn't necessarily lead to a simple result."

I am aware. But until I either A) find a better system that I can actually understand and implement or B) find some severe flaw with my system, I'll stick with my system.

In addition, because of immutable environments, variable bindings are strictly from top-to-bottom. So enforcing that same kind of strict left-to-right ordering is more consistent and I think intuitive for the programmer. So I would say it is both easy and simple.

---

"In his "Simple Made Easy" talk, he specifically considered 'fold more complex than 'reduce due to the way it imposed a left-to-right or right-to-left ordering that wasn't necessarily part of the programmer's intent."

Well, okay, but sometimes you really do want that different ordering.

---

"Ironically, he also dissed type systems. :)"

I guess it depends highly on what he meant by "type systems". I haven't seen that video so I don't know.

-----

1 point by rocketnia 4195 days ago | link

"What operator name would you suggest for my "and" pattern matching?"

I've been using "as" in this conversation, but I suppose I'd try to name it based on how its corresponding expression would behave (even if it isn't really usable as an expression). Since the point of this pattern is to act as a T-junction whose inputs and outputs are all equal, how about "idfn"?

If all else fails, I might name it something like "pand" just so it didn't conflict with 'and. But I don't seriously expect you to reach the same conclusion. :)

---

"So you're going to have to make a harder case to convince me."

Last time I brought this up, I gave an example implementation of a pattern-matching macro (with no backtracking) that took its operators from the same environment as the functions. In fact, the operators were implemented as Racket procedures with multiple return values.

At the time, I think I might have cited the advantage that a single macro could be used for patterns and expressions without any difference in the code it emits.

Personally, I find that a weak argument; there aren't likely to be very many macros that take advantage of this. Since you're embracing fexprs rather than macros, I don't expect it to convince you either, but I'm bringing it up just in case.

-----

1 point by Pauan 4195 days ago | link

"How about "idfn"?"

That's not a bad name choice at all, except that idfn takes a single argument, so it feels weird to have it take two with the pattern match version.

---

"If all else fails, I might name it something like "pand" just so it didn't conflict with 'and. But I don't seriously expect you to reach the same conclusion. :)"

That would be a good idea if there were multiple competing implementations for the "and" pattern. In this case, I think my implementation is reasonably intuitive and simple. So there's not much reason to rename it to avoid that sorta thing.

By the way, I know it's a bit of a pain, but it is possible to just do this:

  (def pand and)
And now you can use "pand" instead of "and" in your own code.

---

"Since you're embracing fexprs rather than macros"

I'm actually only embracing fexprs because the implementation is super easy and makes dealing with environments easier.

I could instead have gone with the macro route and had a "call-with-current-environment" (analogous to call-with-current-continuation) instead.

That might actually be a better idea. It kinda bugs me having to specify the environment in vaus even if you aren't going to use it.

---

"As you indicate, this has to do with the pattern choosing to call 'eval on an argument. I think the scope question will come up pretty much any time a pattern does that. If different patterns do different things here, that's okay, but I'd hesitate to introduce that inconsistency without a good reason."

My philosophy in this case is basically "don't worry about it, just let the patterns do what they want to do. If I make it easy to do The Right Thing(tm), it'll be okay"

In other words, if I make it easy (and idiomatic) to evaluate in the pattern-match environment, then pattern matchers will naturally go that route unless they have a specific reason to do something else, in which case they're probably right.

-----

2 points by Pauan 4193 days ago | link

I think I'll use "let" for the "as" pattern:

  -> (let a {b c d}) ...

-----

1 point by rocketnia 4193 days ago | link

I kinda like it. Do you find it more readable?

[1] Not that it fits my scheme...

-----

2 points by Pauan 4193 days ago | link

Yes, significantly more readable.

[1] Yes I know.

-----

1 point by rocketnia 4195 days ago | link

"But as you rightfully pointed out, "and" can do things besides that, because it can have more than 2 arguments, and it doesn't require a symbol as the first one."

I didn't suppose 'as would have those restrictions. The reason 'as is a restricted version of 'and is that the arguments of 'as are fully determined by its value, while 'and is nondeterministic in that direction.

(I'm not talking about your pattern-and. That's just pattern-as to me.)

---

"In that particular case, I suppose so, yes."

That's the particular case I was talking about. :)

As you indicate, this has to do with the pattern choosing to call 'eval on an argument. I think the scope question will come up pretty much any time a pattern does that. If different patterns do different things here, that's okay, but I'd hesitate to introduce that inconsistency without a good reason.

Still, getting something that works for now is a good reason. :-p

-----