Thursday, December 22, 2005

Expressions and statements

This post explaining the I/O monad to a newbie on the Haskell mailing list motivates it in terms of expressions and statements:
We have a[n] "evaluation machine" and a[n] "execution machine". The former
evaluates expressions, the latter executes I/O actions. When the program is
started, the execution machine wants to execute the actions from the "main to
do list".
Interesting. Imperative languages like C distinguish expressions and statements syntactically, but with various implicit conversions between them, e.g., expression-statements and procedures that execute statements but then return values. Haskell separates statements from expressions with the type system, and keeps them apart more strictly (no pun intended). But in a sense they're both making the same distinction.
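For instance, here's a minimal Haskell sketch of the distinction (my example, not the one from the mailing list): evaluating an expression of type IO () merely produces an action, and nothing happens until the execution machine runs that action from main's "to do list".
greet :: IO ()
greet = do
  name <- getLine            -- statement-like: sequences an I/O action
  putStrLn ("hi, " ++ name)  -- but the whole do-block is itself an expression

main :: IO ()
main = greet                 -- only actions reachable from main get executed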

Tuesday, December 20, 2005

I dare you

Read these three posts about Real's business practices and try not to get angry.

12 Weeks With Geeks

This looks interesting:
http://www.projectaardvark.com/movie/
It features some of your favorite outspoken software evangelists, Joel Spolsky and Paul Graham.

I'm not sure what I'll think of it, because on the one hand they're billing it as a more honest look into the software development process, which should be informative, but on the other hand, the preview hints at a number of the myths computer people like to believe about themselves.

I find myself increasingly upset at the way people in software are portrayed both by the media and by themselves--as idiot savants, socially inept geniuses, lone ranger superheroes, and revenge-of-the-nerds underdogs. It's dangerously delusional, it fosters extreme anti-social behavior, allows for all sorts of irresponsibility and incredibly bad software, continually dissuades women from entering the field, and perpetuates the belief that each hacker can solve any problem alone with complete disregard and disdain for a century of progress and collective knowledge.

How long can we continue duping the world into relying on us to design the heart of every automated mechanism known to man? How long will people continue believing in the myth of the 15-year-old hacker genius while simultaneously decrying the unreliability of software before the cognitive dissonance finally cracks?

At any rate, I don't know if that's where this movie is going. You can only tell so much from a preview. But it should be very interesting, especially about the product lifecycle and management perspectives.

Try-catch-finally in Scheme

I need a little break from all this parsing crap. Let's have some macro fun. Here's a PLT Scheme version of Java's try-catch-finally. Note the use of symbolic literals, rather than the usual syntax-rules/syntax-case literals list. This is due to my philosophy on macro keywords.
(define-for-syntax (symbolic-identifier=? id1 id2)
  (eq? (syntax-object->datum id1)
       (syntax-object->datum id2)))

(define-syntax (try stx)
  (syntax-case* stx (catch finally) symbolic-identifier=?
    [(_ e (catch ([pred handler] ...)) (finally e0 e1 ...))
     #'(dynamic-wind void
                     (lambda ()
                       (with-handlers ([pred handler] ...)
                         e))
                     (lambda () e0 e1 ...))]
    [(_ e (catch ([pred handler] ...)))
     #'(with-handlers ([pred handler] ...) e)]
    [(_ e (finally e0 e1 ...))
     #'(dynamic-wind void
                     (lambda () e)
                     (lambda () e0 e1 ...))]))
Example use:
(try (begin (printf "hi")
            (printf " there")
            (printf ", world")
            (newline)
            (error 'blah))
     (catch ([exn? (lambda (exn) 3)]))
     (finally (printf "finally!~n")))

Monday, December 19, 2005

Dynamic scope

Now I'm confused by another aspect of JavaScript scope:
var f = function() { alert("default") }
function test(b) {
if (b) {
function f() { alert("shadowed") }
}
f()
}
Evaluate test(true), and you'll see "shadowed". Evaluate test(false), and you'll see "default". Apparently function declarations are another instance where JavaScript is not lexically scoped (in addition to this, arguments, and the behavior of references to unbound variables).

Update: Oh, I forgot to mention the with construct -- another instance of dynamic scope.
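Here's a minimal example of what I mean: inside the with block, whether x refers to the lexically visible local or to a property of obj can't be known until runtime.
function lookup(obj) {
  var x = "lexical";
  with (obj) {
    return x;  // resolved against obj first, at runtime
  }
}
lookup({});                // "lexical"
lookup({ x: "dynamic" });  // "dynamic"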

Update: Also note that this is not the same thing as the behavior of var. If we write this function instead:
var f = function() { alert("default") };
function test(b) {
if (b) {
var f = function() { alert("overridden") };
}
f()
}
Then with test(true) we still get "overridden", but with test(false) we get an error from trying to apply undefined as a function. In other words, in this example, we have normal lexical scope with hoisting.

Update: I think this settles it--it's pretty much what it looks like. In chapter 13 of ECMA-262, the dynamic semantics of a FunctionDeclaration is to create a new binding in the "variable object" (ECMA-262 terminology for an environment frame) upon evaluation of the declaration.

If you imagine an environment as a list of frames, then in a lexically scoped language, this is a statically computable list. In a language with dynamic scope, frames in the environment may be generated or mutated at runtime, so it's not statically computable. JavaScript has some of both, so there are some things you can know statically about a variable reference, and some things you can't.

Update: The saga continues... I realized that I couldn't actually derive the above example from the official ECMAScript grammar! I see that others have made this surprising discovery as well: you can't nest FunctionDeclarations beneath Statements. Furthermore, you can't begin an ExpressionStatement with a FunctionExpression. As a result, it's impossible to derive the above program.

However, in practice, JavaScript engines do appear to accept the program, and use the dynamic semantics from chapter 13 as I describe above. But for strict ECMAScript, I can't seem to produce an example. I thought I could still force the issue by placing the nested function at the top-level of the enclosing function body and conditionally returning. But I was thwarted by two more interesting phenomena. First of all, when a var declaration and a function declaration compete to bind the same identifier, the var always wins, apparently. So this always produces "default", regardless of the value of b:
function test(b) {
  var f = function() { alert("default") };
  if (!b) {
    f();
    return;
  }
  function f() { alert("overridden") }
  f()
}
Second, nested function declarations appear to be hoisted to the top of the function body. So the following example always produces "overridden":
var f = function() { alert("default") };
function test3(b) {
if (!b) {
f();
return;
}
function f() { alert("overridden") }
f()
}
Curious. At any rate, the behavior of popular JavaScript engines (Firefox and Rhino) seems to be a generalization of ECMAScript (by allowing FunctionDeclarations to appear nested within Statements) that is consistent with the spec (by obeying the behavior specified by section 13).

Now I need to confirm these two more properties of the language with the spec.

Gotcha gotcha

In "a huge gotcha with Javascript closures," Keith Lea describes an example function written in JavaScript with surprising behavior. But he misattributes the unexpected results to the language spec's discussion of "joining closures." The real culprit, rather, is JavaScript's rules for variable scope. Let me explain.

Here's Keith's example:
function loadme() {
  var arr = ["a", "b", "c"];
  var fs = [];
  for (var i in arr) {
    var x = arr[i];
    var f = function() { alert(x) };
    f();
    fs.push(f);
  }
  for (var j in fs) {
    fs[j]();
  }
}
We might expect the function to produce "a", "b", "c", "a", "b", "c", but surprisingly, it displays "a", "b", "c", "c", "c", "c"! Wha' happened?

The answer is that, in JavaScript, all var declarations are hoisted to the top of the nearest enclosing function definition. So while x appears to be local to the for-loop, it is in fact allocated once at the top of the loadme function and in scope throughout the function body. In other words, the above function is equivalent to:
function loadme() {
  var x;
  var arr = ["a", "b", "c"];
  var fs = [];
  for (var i in arr) {
    x = arr[i];
    var f = function() { alert(x) };
    f();
    fs.push(f);
  }
  for (var j in fs) {
    fs[j]();
  }
}
In this version, it's clear why the function behaves as it does: there is only one variable x, shared by every closure, and the loop overwrites it on each iteration--so by the time the saved closures run, x holds "c".

To get the desired behavior, you need to use nested functions instead of loops, e.g.:
function loadme() {
  var arr = ["a", "b", "c"];
  var fs = [];
  (function loop(i) {
    if (i < arr.length) {
      var x = arr[i];
      var f = function() { alert(x) };
      f();
      fs.push(f);
      loop(i + 1);
    }
  })(0);
  for (var j in fs) {
    fs[j]();
  }
}
Now x is truly local to the loop body, and the function produces "a", "b", "c", "a", "b", "c" as expected.

Update: My suggested fix is really bad advice for ECMAScript Edition 3! I should never recommend using tail recursion in a language that is not properly tail recursive. If all goes according to plan for proper tail calls in Edition 4, my suggestion would be fine for that language. But today, you can't use tail recursion. Instead, you should use any of the existing lexical scope mechanisms in ECMAScript for binding another variable to the loop variable i.

Solution 1 - wrap the closure in another closure that gets immediately applied:
function loadme() {
  var arr = ["a", "b", "c"];
  var fs = [];
  for (var i in arr) {
    var x = arr[i];
    var f = (function() {
      var y = x;
      return function() { alert(y) }
    })();
    f();
    fs.push(f);
  }
  for (var j in fs) {
    fs[j]();
  }
}
Solution 2 - wrap the loop body in a closure that gets immediately applied (suggested in the comments by Lars):
function loadme() {
  var arr = ["a", "b", "c"];
  var fs = [];
  for (var i in arr) {
    (function(i) {
      var x = arr[i];
      var f = function() { alert(x) };
      f();
      fs.push(f);
    })(i);
  }
  for (var j in fs) {
    fs[j]();
  }
}
Solution 3 - wrap the closure in a let-binding (in JavaScript 1.7, currently implemented only in Firefox 2):
function loadme() {
  var arr = ["a", "b", "c"];
  var fs = [];
  for (var i in arr) {
    var x = arr[i];
    var f = let (y = x) function() { alert(y) };
    f();
    fs.push(f);
  }
  for (var j in fs) {
    fs[j]();
  }
}
Solution 4 - ditch var entirely and only use let (again, currently specific to Firefox 2):
function loadme() {
  let arr = ["a", "b", "c"];
  let fs = [];
  for (let i in arr) {
    let x = arr[i];
    let f = function() { alert(x) };
    f();
    fs.push(f);
  }
  for (let j in fs) {
    fs[j]();
  }
}
The moral of the story is that in ECMAScript Edition 4, let is a "better var", behaving more like you'd expect. But until you can rely on let as a cross-browser feature, you'll have to use solutions like 1 and 2.

Tuesday, December 13, 2005

Semanticist's cookbook

Is this a dumb idea? I'd like to create a semantic modeling wiki where people can describe various techniques for modeling different kinds of programs, programming languages, and programming language features mathematically. Sort of a clearinghouse or cookbook for semantics. It could feature links to literature, and various categorizations so that people could find recipes by following different paths. For example, a reader might discover the exception monad by alternatively searching for control features, or error handling, or denotational modeling, or monads, etc.

Perhaps a better-designed database would be more organized, but I think wikis are good for continuously reorganizing, as well as for community contribution.

The Semanticist's Cookbook. I like it!

Update: Stubbed at http://semanticscookbook.org.

Update: Maybe I'll install a wiki another time but I don't have time for this now.

Tuesday, November 29, 2005

What makes a good paper?

Or: Theorems are the test cases of semantics

Good PL papers define their systems with enough greek symbols, funky operators, inference rules and the like (what Mitch calls "dirty pictures") to demonstrate depth and precision of intellectual content. Without proving subsequent theorems based on these systems, though, there's no evidence that these systems are actually consistent or even useful.

A programming language semantics is, as Mitch puts it, "a mathematical model for predicting the behavior of physical systems." One of its primary purposes is reasoning (at a particular level of abstraction) about programs written in that language. It's generally hard to prove that a semantics is "the right one" for the problem space, which is a fuzzy notion, but theorems raise its probability of being useful.

A semantics alone may at least convey information about a language, since math is often more concise and precise than English. But without being put into use, it's doubtful whether it is really a specification; it's far too easy to slip into the specifics of an algorithm. Or at the other extreme, it may leave out too many details to address the salient issues in the problem space. And who knows what subtle errors are lurking within the language definition? The best way to ensure that a semantics accurately defines a language at an appropriate level of abstraction is to prove theorems with it.

Monday, November 14, 2005

Values in Non-Strict Languages

In the literature on non-strict languages, the term "value" seems to be used in a different way. In call-by-value languages, we classify a set of normal forms that we define as the "values" of the language, and arguments to functions must evaluate to values before the functions can be applied. But in call-by-name or call-by-need languages, where you can pass arbitrary, unevaluated expressions to functions, the notion of a value is in a sense less critical. You still need to define the normal forms in order to determine when to stop evaluating the whole program (or when to stop evaluating arguments to primitives with strict positions). But interestingly, in The Implementation of Functional Programming Languages, Peyton Jones still refers to the expression that a function argument is bound to as the variable's "value" -- despite the fact that it may not be in weak head normal form. Apparently the term "value" is informally used to mean a variable's binding, whereas the final result of an evaluation is only referred to as a normal form.
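A tiny Haskell example of this usage (mine, not Peyton Jones's): the "value" bound to y below is an unevaluated expression that never reaches weak head normal form, because it is never forced.
constFirst :: a -> b -> a
constFirst x y = x   -- y's "value" is whatever expression was passed in

main :: IO ()
main = print (constFirst 42 (error "never forced"))  -- prints 42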

Tuesday, November 01, 2005

A packrat parser for Scheme

Through some random poking around the web, I just discovered this packrat parser in R5RS Scheme. I need to check that out.

Update: That's a parser in Scheme, not a parser for Scheme. (Thanks, Mitch.)

Update2: Not entirely random, Noel, thanks. Though discovering Tony's files by URL munging certainly was random.

Saturday, October 22, 2005

Languages that represent themselves

In studying macros, I've discovered a fundamental complexity in one axis of language design: making aspects of a language easily representable within itself hampers the plasticity of the language. That is, as soon as you make it easy to observe the internals of the language, any changes you make to the language become by definition observable. As a result, any time you add, remove, or change a language feature, you must consider that feature's representation within the language and how it affects the semantics.

This is well known to designers of languages with reflection, and causes all sorts of ulcers for compiler writers. But it's also a pain for designers of macros. For example, every time you add a new type of expression to the language, macros must be able to consume and produce that type of expression. Scheme makes this easier by generally having just one kind of expression--but in reality, that's a lie! Definitions are different from expressions, and cause lots of subtle complexities.
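A small PLT Scheme example of the kind of subtlety I mean: a macro that expands into a definition can only be used where definitions are allowed, so macro writers have to care about a distinction the expression syntax mostly hides.
(define-syntax defconst
  (syntax-rules ()
    [(_ name val) (define name val)]))

(defconst x 3)          ; fine: the top level is a definition context
;; (+ (defconst y 4) 1) ; error: a define can't appear in an expression context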

Tuesday, October 18, 2005

Contracts fall right out

Jacob Matthews came to town this past weekend for several events, and on Monday gave a fabulous talk at Northeastern on multi-language systems. The experience made me proud to be a member of both the Northeastern PRL and the PLT research family. Jacob's work looks like it could be very useful, and I hope to apply it to several of my research projects. And the presentation was fantastic: he was entertaining and charismatic, and our group was a lively and interactive audience.

He's been talking for some time about how when programs interact between typed and untyped languages, "contracts fall right out." I was looking forward to finding out how, and he kept telling me that once I saw it it would be obvious. It had sounded rather magical to me, but sure enough, about halfway through his talk it became totally obvious! If a typed language is to use an expression from an untyped language in any useful way, it'll have to have a dynamic check to make sure the expression really was of the right type. But of course, you can't in general dynamically check the type of a procedure, so contracts are a natural solution!
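Here's a minimal Scheme sketch of the insight (my own illustration, not Jacob's formal system): a flat type like int can be checked immediately when a value crosses the language boundary, but a function type can only be checked by wrapping the procedure and deferring the checks to its uses--which is exactly a higher-order contract.
;; assert-type : value × type → value
;; type ::= 'int | (cons dom rng), meaning dom → rng
(define (assert-type v type)
  (cond
    [(eq? type 'int)
     (if (integer? v) v (error 'assert-type "expected an int"))]
    [(pair? type)
     (if (procedure? v)
         ;; we can't decide "v : dom → rng" now, so check each use instead
         (lambda (x)
           (assert-type (v (assert-type x (car type))) (cdr type)))
         (error 'assert-type "expected a procedure"))]))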

Saturday, October 01, 2005

Don't evaluate under lambda

In the past, I've been unsure what Alan Kay meant whenever he talked about "late binding" being the key insight into the development of Smalltalk, an insight which he attributes to his reading of the Lisp 1.5 manual. I think what he's talking about is the idea that if you have some part of your program that contains a piece of data, but now you want the construction of that data to vary based on some particular condition of the dynamic execution of the program, wrap it in a procedure. Now you can dynamically generate the data instead of fixing it once and for all.

This is of course possible regardless of whether you're in an OO or FP framework; in the context of OO, he's referring to the idea that instead of a record with fixed fields, you use methods that dynamically return the values of the fields. In functional languages you can do the analogous thing by placing functions in the slots of records, or in general you can just replace any datatype α with (→ α).
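A tiny Scheme sketch of the α-to-(→ α) move (current-mouse-x and current-mouse-y are hypothetical dynamic sources):
;; early-bound: the fields are computed once, at construction time
(define-struct point (x y))

;; late-bound: each "field" is a thunk, recomputed at every access
(define-struct lazy-point (x y))

(define p (make-lazy-point (lambda () (current-mouse-x))
                           (lambda () (current-mouse-y))))
((lazy-point-x p)) ; reflects the state of the world right now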

You can also take this in arguably the wrong direction and base the dynamic generation of this data on the mutable state of the program (whether that's the mutable fields of the object the method belongs to, or the mutable state of variables in the closure of a procedure). So you can see how late binding could be seen as a gateway drug to overuse of mutation.

But more and more I appreciate the power and simplicity of this notion that the natural path a program follows as it gets more complex is to take the pieces that are fixed and abstract them; to take the constants and make them variable; to take variables and fields and turn them into procedures and methods.

Post-Redirect-Get with the PLT web server

Here's a nice idiom easily expressed with the PLT servlet library. Typically in an interactive web application you'd like to respond to form submissions (via "POST") with a page that the user can safely refresh, i.e., a page delivered via "GET". As a result, most people recommend you use what's known as the "Post-Redirect-Get pattern," where the immediate response to a POST is a redirect to an ordinary GET page.

Look how beautifully this is expressed in PLT Scheme:
(define (redirect/get)
  (send/suspend
   (lambda (k-url)
     (redirect-to k-url))))
The send/suspend procedure reifies the current continuation as a web continuation URL. The redirect-to procedure generates a response that redirects the user's browser to the given URL. By combining the two, we simply redirect the user's browser right back to exactly the point in the computation where we are, the only difference being that we've now returned from a "GET" interaction instead of a "POST". Usage looks like:
(define (start initial-request)
  (let ([request (send/suspend
                  (lambda (k-url)
                    ... form ...))])
    (redirect/get)
    (let ([user (extract-binding/single
                 'user
                 (request-bindings request))])
      `(html (head (title "Hi"))
             (body (p "Hi, " ,user "! Hit refresh!"))))))
(Added to the Scheme Cookbook.)

One-step reduction semantics a historical mistake?

Wow, I just discovered this thread where Thorsten Altenkirch declares small-step operational semantics a "historical mistake." Sometimes I forget how specific the research culture I'm a part of is; given how often I use small-step semantics and abstract machines, I just assume that's what everyone does. But there are many people who consider big-step semantics to be more "natural." (Whatever that means.)

What confuses me about the popularity of big-step semantics in the typed functional language community is that Matthias's approach to proving type soundness via subject reduction in a small-step operational semantics is by now the de facto standard technique.

Anyway, I honestly have no idea how you'd reason about non-trivial language features like continuations with a big-step semantics.

Monday, September 19, 2005

How do we automate?

Read this excellent and reasonable post from Jacob Matthews on undergraduate curriculum in computer science.

Let me call attention to Jacob's characterization of CS: "computer science is fundamentally the study of how to best systematically solve problems." I think it's interesting to compare this to Knuth's definition of computer science (pp. 5-6) as the answer to Forsythe's question "what can be automated?" (With some quick googling I found a slight generalization of this formulation to the broader question "how do we automate?")

It's useful to place Jacob's definition in the context of our research culture. You could describe us as belonging to the academic diaspora of Matthias Felleisen, whose profound textbook How to Design Programs and corresponding pedagogy permeate our approach to computer science. (The book's title is an homage to Polya's How to Solve It, a landmark work in the history of mathematical pedagogy, which sought to elevate math education beyond the realm of black art by providing students with a systematic process for solving problems rigorously.) The HtDP approach to teaching computer science is to give students a repeatable process for solving problems--this is systematic both in the sense that students are given a dependable process that they can reliably use to solve new problems, and in the sense that expressing the solution in terms that a computer can execute and evaluate is more formal and reliable than a rote hand-calculation.

Nevertheless, I like Knuth's and Raatikainen's definitions. Computer science is not always about problems--both in research and industry, I think there's a place for exploration where the discovery of applications may not follow until later. The real common thread seems to be the ability to describe any process in a formal way so that it can be specified without unnecessary implementation detail and yet in a way that it can be repeated reliably.

Wednesday, September 14, 2005

The language in my head

Felix points out how bizarre the hypothetical language was that I used in my previous post. Very true! It combines Javascript object-literal notation, ML expression syntax, semantic brackets to represent compilation, Scheme's hygienic expansion, Java (et al) method invocation, and hey, let's give credit to the new guy, Fortress-style mathematical notation for set-union.

Well, I know what I like!

Tuesday, September 13, 2005

Compiling a language with multiple operations

I've now twice compiled languages with more than just an eval operation on expressions. For example, I wrote an LL(1) parser generator that generated for each expression in the grammar an object with a parse method (which you can think of as eval for grammars) as well as first, follow, and nullable? methods. I've also run into the same issue when working on the Topsl compiler.

But when you generate objects for expressions and you need to model binding in your language, you find an interesting little problem. Imagine compiling a simple functional language into a bunch of expression objects that have two methods: eval and analyze (for some hypothetical static analysis you might like to perform). Now how would you implement the rule for compiling let-expressions?
[[let x = e1 in e2]] =
  { analyze : λ().[[e1]].analyze() ∪ let x = ()
                                     in [[e2]].analyze(),
    eval : λ().let x = [[e1]].eval() in [[e2]].eval() }
By binding the variable x in both methods, it can be referred to in the compiled code for e2. In the static analysis, we can bind it to any dummy value, since the analysis presumably doesn't care about the dynamic result of evaluating the expression.

But this results in duplicating the expansion of e1 and e2, which is not just code bloat, it's exponential code bloat. A non-starter.

We could then lift out the generated let to surround the entire object creation, initialize x to a dummy value, and then mutate it within the body of eval. But this introduces brittle dependencies on the order in which the methods can be invoked.

Instead, we can generate one piece of code, specifically, a function from x-value to expression object, and then invoke this function differently from each method:
[[let x = e1 in e2]] =
  let f = λx.[[e2]]
  in { analyze : λ().[[e1]].analyze() ∪ (f ()).analyze(),
       eval : λ().(f [[e1]].eval()).eval() }
Now there are no fragile method invocation interdependencies, and each method can give a different value for x, and each expression is only compiled once.

Update: I wasn't clear that the semantic brackets function [[--]] represents Scheme-style macro-expansion. So I assume all hygiene issues are handled automatically by the expansion process.

Saturday, August 27, 2005

Meta-lambda

Here's something that used to bother me when reading Winskel and learning denotational semantics.

We start with a language like the lambda calculus, and everyone looks at λn.(n+1) and knows that this represents the function that takes an integer and increments it. But that's not enough for us: we have to build up this big, complicated mathematical machinery to determine precisely what this function means; we treat it as dumb syntax and give it a translation to some domain that models it.

But when we're defining the model, we allow ourselves to represent meta-functions simply using the lambda calculus, and suddenly λn.(n+1) just means the set-theoretic function that maps integers to their successors. Why do we have to work so hard to give a precise model for our object language, but when we work with the meta-language, we can just say "you know what I mean"?

The reason is simply that the object language is generally a complex language that's made to look like lambda calculus, but has other features and effects (e.g., recursion and non-termination, mutation, etc.) that can't be modelled in obvious ways and require a precise translation. By contrast, when we work in the model, if it's simply set-theoretic functions, then we can use a simple meta-language based on the lambda calculus to represent these functions. So this meta-lambda calculus is just a convenient shorthand for elements in the domains of the model. As long as there's an obvious translation of such representations into sets, terms in the meta-lambda calculus such as λn.(n+1) can be treated as nothing more than a convenient notation, no different from { (n, n+1) | n ∈ N }.

Wednesday, August 24, 2005

Pointed domains

In a denotational semantics for a language with non-termination, you use pointed domains, i.e., CPO's with a bottom element. But some of the usual constructions on compound domains such as pairs and sums end up combining multiple bottom elements into one domain. Because the existence of a bottom element is used for the proof of the existence of fixpoints for all functions, it's critical that every domain have a single bottom element. So that's what the "coalesced sum" and "smash product" constructions are for: they identify multiple bottom elements in multiple domains into a single bottom element in the combined domain. [Abramsky and Jung 94, 3.2]
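Concretely, the standard constructions look like this (writing − for set difference, and identifying all the bottoms as a single new ⊥):
D ⊕ E = ({1} × (D − {⊥})) ∪ ({2} × (E − {⊥})) ∪ {⊥}   (coalesced sum)
D ⊗ E = ((D − {⊥}) × (E − {⊥})) ∪ {⊥}                 (smash product)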

Trust me

I don't understand how you can get away with leaving out proofs in a publication. Given the constraints on the length of the paper, I could imagine providing external references to more complete proofs, say in a companion technical report. It's one thing to trust proofs that are actually given, even though they too are error-prone; but how can you believe a theorem stated without even an indication of a proof? That's kind of the point of math.

I'm reading a couple of papers on FreshML and FreshO'Caml, and they're great work. But the ICFP paper from 2003 has a number of unproved theorems, and then they posted an erratum stating that one of their theorems is flawed, but they haven't bothered fixing it because they have a new paper that takes a different approach. This kind of thing makes me generally nervous about the reliability of research and publications.

Tuesday, August 23, 2005

Reasonable Defaults

I went to add a mail filter rule to Apple Mail in Tiger, and the new rule came up with a default rule for filtering messages where "Any Recipient" matched the recipient of the current message I was looking at. Beautiful! It gets better -- I went to add another condition, which I changed to a "From" condition, and it automatically changed the initial value of the condition to match the sender of the current message. I never had to type in a single value, switch window panes, copy or paste.

This is something library and language designers could learn from HCI--making the common cases easy. (The Perl-mongers know it.) It's up there with progressive disclosure.

Friday, August 05, 2005

alphaCaml

I've written my first Cαml program: a little lambda-calculus term manipulation language and interpreter. Since it's a language for manipulating terms, there's an object language (the lambda calculus) and a meta-language (the term manipulation language). Expressions in the meta-language evaluate to lambda calculus terms; for example, cps e evaluates the expression e to get a term and then converts the term to continuation-passing style.

Here's the data definition and binding specification of the object language datatype:
sort var

type term =
| Var of atom var
| Lam of < lambda >
| App of term * term

type lambda binds var = atom var * inner term
The declaration sort var declares a disjoint universe of atoms that will be used as lambda calculus variables. The type lambda is a binding datatype: it binds atoms of sort var, which are then in scope in the right-hand conjunct of the product type (the keyword inner indicates that all the bindings in lambda are in scope in the term). This binding specification actually automatically generates a bunch of Ocaml datatype and library definitions, including:
type term =
| Var of var
| Lam of opaque_lambda
| App of term * term
and var = Var.Atom.t
and lambda = var * term
and opaque_lambda

val create_lambda : lambda → opaque_lambda
val open_lambda : opaque_lambda → lambda
By making the contents of the Lam constructor opaque, it forces programmers to use open_lambda whenever they need to get at its contents, which has the side effect of freshening all of its bound names. This relieves the programmer of having to avoid capture explicitly, since bound names are kept unique.

Now here's a term manipulation function written in Cαml. (In fact, this is really just an Ocaml program, since the only extension to Ocaml is the binding specification language. Beyond that, Cαml is just a library.)
let make_lambda : var → term → term = fun v body →
  Lam (create_lambda (v, body))

let rec beta : term → var → term → term = fun t x arg →
  match t with
  | Var x' when Var.Atom.equal x x' → arg
  | Var _ → t
  | App (l, r) → App (beta l x arg, beta r x arg)
  | Lam lam → let (x', body) = open_lambda lam in
              make_lambda x' (beta body x arg)

let rec reduce : term → term = function
  | App ((Lam lam), rand) when isValue rand →
      let (x, body) = open_lambda lam in
      beta body x rand
  | App(rator, rand) when (isValue rator) && (isValue rand) →
      raise No_redex
  | App(rator, rand) when (isValue rator) →
      App(rator, reduce rand)
  | App(rator, rand) →
      App(reduce rator, rand)
  | _ → raise No_redex
There's no need to implement capture-avoiding substitution, because every time an abstraction is opened, its bound variable is automatically freshened by open_lambda.

Tuesday, August 02, 2005

Stupid keywords

I spent like 20 minutes trying to figure out what I was doing wrong with the syntax of pattern matching in Ocaml, only to discover the problem wasn't with pattern matching at all: I was trying to bind the name val, which is a keyword in ML:
let (key, val) = List.find( ... )
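The fix, of course, is just to pick a name that isn't reserved:
let (key, v) = List.find( ... )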
Learning a new language is such a pain. I thought I'd grown out of stupid problems like this. You almost can't blame developers for resisting learning new technologies.

Subtleties of getElementsBySelector

I should point out that my implementation of getElementsBySelector corrects a couple of subtle bugs in the original version. For one thing, the original version searches for nodes of the form #foo by using document.getElementById; this results in finding the outermost node in the entire document whose ID matches, even if searching a subtree. Since ID's are not guaranteed to be unique, this can produce the wrong node.

An even more subtle bug in the original version is that multiple copies of the same node may appear in the result array. For example, the selector '.foo .bar .mumble', when applied to the following tree:
<div class="foo">
<div class="bar">
<div class="bar">
<div class="mumble">mumble</div>
</div>
</div>
</div>
...produces two copies of the mumble node, because it finds it by following two different paths. By contrast, my implementation visits each node in the entire document tree exactly once, so it is guaranteed to produce either 0 or 1 copy of each node in the result array.

Sunday, July 31, 2005

Element.prototype.getElementsBySelector

All right, y'all had plenty of time to implement the imperative tree traversal, so let's compare notes. Here's what my Javascript version looks like--built to work on actual DOM nodes, of course, using real CSS selectors.

The matching function takes an element and an array of selector paths and returns the new array of selector paths, with an extra boolean field elementMatched indicating whether or not the node matched one of the given selector paths. I've elided the data definition for selectors and paths but it should be pretty clear from context. These get constructed from a parser for CSS selector strings.
// matchElement : Element * Array<Path>
//              → Array<Path> & { elementMatched : boolean }
function matchElement(element, paths) {
  var childPaths = new Array;
  childPaths.elementMatched = false;
  for (var i = 0; i < paths.length; i++) {
    var path = paths[i];
    if (path.isSimple()) {
      if (path.selector().matches(element)) {
        childPaths.elementMatched = true;
      }
      childPaths.push(path);
    }
    else if (path.selector().matches(element)) {
      childPaths.push(path.tailPath());
      childPaths.push(path);
    }
    else {
      childPaths.push(path);
    }
  }
  return childPaths;
}
Then the main select function takes an element (which can be document.documentElement if you want to start at the root of the HTML document) and a single selector path and searches the entire tree for all matching nodes.
function select(element, path) {
  var continuation =
      new Array(new Task(element, new Array(path)));
  var result = new Array;

  while (continuation.length > 0) {
    var task = continuation.pop();
    var childPaths = matchElement(task.element, task.paths);

    // Add the element's child nodes to the work list.
    var children = task.element.childNodes;
    for (var i = children.length - 1; i >= 0; i--) {
      continuation.push(new Task(children[i], childPaths));
    }

    // If matched, save element in result array.
    if (childPaths.elementMatched) {
      result.push(task.element);
    }
  }

  return result;
}
I daresay my version is easier to understand than Simon Willison's getElementsBySelector implementation or Dean Edwards' cssQuery function. Mine's not quite as fully featured but the extra features--such as matching of pseudo-elements and arbitrary attributes--shouldn't actually affect the above code. It'll just require additions to the data definition of selector objects.

I've put the code up on my web site; a description page should appear at some point.

I'm back.

Tuesday, July 19, 2005

That's a vacation

Sunshine, cliffside villas, insanely attractive company, unending supply of cold beer, and Introduction to Lattices and Order.

Sunday, July 17, 2005

The Little Calculist goes to the Caribbean

I'm going out of town for my brother's wedding for the next couple of weeks. Cardelli, Wadler, and Milner couldn't decide amongst one another who'd get to be my guest blogger, so I suppose we'll just have to make do with nothing.

Thursday, July 14, 2005

Moderation in all things

Mike Spille blogs about computer folks' tendency to adhere to extremist positions on technologies and methodologies, as well as their tendency to jump to opposite extremes when they get burned. That tired "considered harmful" meme is one of the sounds you hear immediately preceding legions of lemming programmers making the leap.

Monday, July 11, 2005

Deriving an Efficient Tree Traversal, Part III

Previously:

Part I: Overview, specification, CPS
Part II: Contexts, data abstraction

A Small Refolding Simplification

We like our stack representation of contexts (after all, that's the well-known implementation technique we've been aiming for), so let's commit to that from now on. But there's a slight redundancy in the context implementation: notice that the select/c procedure takes a tree, a path, and a context, and the select*/c procedure takes a list of trees, a path, and a context. Those first two parameters for each procedure can also be encoded as context frames. Let's combine the two functions into a single function of just one parameter: the context.

(To distinguish this implementation from the previous version, and to emphasize our commitment to the initial algebra representation, we'll refer to contexts as "work lists" now, and context frames as "work items.")
;; item = (union tree (listof tree)) × path
(define-struct item (task path))

;; initial-work-list : tree × path → (listof item)
(define (initial-work-list tree path)
  (list (make-item tree path)))

;; select/w : (listof item) → (listof tree)
(define (select/w items)
  (if (null? items)
      null
      (let* ([item (car items)]
             [task (item-task item)]
             [path (item-path item)]
             [items (cdr items)])
        (cond
          [(tree? task)
           (cond
             [(null? path) (cons task (select/w items))]
             [(matches? task (car path))
              (select/w (cons (make-item task (cdr path))
                              (cons (make-item (children task) path)
                                    items)))]
             [else
              (select/w (cons (make-item (children task) path)
                              items))])]
          [(null? task) (select/w items)]
          [else (select/w (cons (make-item (car task) path)
                                (cons (make-item (cdr task) path)
                                      items)))]))))
Now that we've folded both tree and (listof tree) arguments into the work list argument, there's just a single function. By now, we've completely gotten rid of all of the appends, so the implementation is linear.

(Update: Ryan found the removal of the appends unclear. The idea is that the context is only being applied to results that are either null or a single-element list, so we can optimize this away to a cons in one case and a simple tail-recursion in the other. Also, I claimed that this implementation is linear, but I don't think that's true: nodes can be visited multiple times since they can reappear in the work list.)

The only problems left are 1) that non-tail recursive call: (cons task (select/w items)), and 2) the use of lists instead of vectors. The first problem is easily solved with an accumulator. The second isn't really a problem, but imperative languages tend to be optimized for array manipulation, so in preparation we'll use an abstract stack datatype:
;; stack : a ... → (stackof a)
;; push : a × (stackof a) → (stackof a)
;; pop : (stackof a) → a × (stackof a)
;; stack-empty? : (stackof a) → boolean
The obvious implementation of stacks is as lists:
(define stack list)

(define (push x s)
  (cons x s))

(define (pop s)
  (values (car s) (cdr s)))

(define stack-empty? null?)

Work Lists with an Accumulator

The accumulator-passing-style implementation converts the recursive calls to tail calls:
(define (initial-work-stack tree path)
  (stack (make-item tree path)))

;; select/a : (stackof item) → (stackof tree)
(define (select/a items)
  (let loop ([items items] [result (stack)])
    (if (stack-empty? items)
        result
        (let-values ([(item items) (pop items)])
          (let ([task (item-task item)]
                [path (item-path item)])
            (cond
              [(tree? task)
               (cond
                 [(null? path) (loop items (push task result))]
                 [(matches? task (car path))
                  (loop (push (make-item task (cdr path))
                              (push (make-item (children task)
                                               path)
                                    items))
                        result)]
                 [else
                  (loop (push (make-item (children task) path)
                              items)
                        result)])]
              [(null? task) (loop items result)]
              [else (loop (push (make-item (car task) path)
                                (push (make-item (cdr task) path)
                                      items))
                          result)]))))))
This version uses bounded control space, making only tail calls for any non-trivial control. It's easy to see how this would translate to a while-loop.

Imperative Implementation

At last, the final imperative implementation is easy to derive. Since each stack is used linearly in the previous version, we can replace it with a destructive update. So we change our stack representation to use a growable vector datatype (thanks to Jacob for the Scheme implementation): 1
(define stack gvector)

(define (pop v)
  (let ([len (gvector-length v)])
    (let ([x (gvector-ref v (- len 1))])
      (gvector-remove! v (- len 1))
      (values x v))))

(define (push x v)
  (gvector-insert! v x)
  v)

(define (stack-empty? v)
  (zero? (gvector-length v)))
We'll represent paths as indices into a fixed path vector, and the child lists of tree nodes will be vectors as well. So we extend work list items to be triples with a task (either a tree or a vector of trees), an optional index into the tree vector (hidden invariants--yuck!), and a path index.
;; tree = symbol × (vectorof tree)
;; path = (vectorof symbol)
;; task = (union (tree × #f × nat) ((vectorof tree) × nat × nat))
And here it is, in all its horrifying glory, our final imperative implementation:
(define-struct item (task index path) #f)

(define (select/! tree path)
  (let ([path-len (vector-length path)])
    (let loop ([items (stack (make-item tree #f 0))]
               [result (stack)])
      (if (stack-empty? items)
          result
          (let-values ([(item items) (pop items)])
            (let ([task (item-task item)]
                  [index (item-index item)]
                  [path-i (item-path item)])
              (cond
                [(tree? task)
                 (cond
                   [(>= path-i path-len)
                    (loop items (push task result))]
                   [(matches? task (vector-ref path path-i))
                    (loop (push (make-item task #f (+ path-i 1))
                                (push (make-item (children task)
                                                 0
                                                 path-i)
                                      items))
                          result)]
                   [else
                    (loop (push (make-item (children task)
                                           0
                                           path-i)
                                items)
                          result)])]
                [(>= index (vector-length task))
                 (loop items result)]
                [else
                 (loop (push (make-item (vector-ref task index)
                                        #f
                                        path-i)
                             (push (make-item task
                                              (+ index 1)
                                              path-i)
                                   items))
                       result)])))))))
Translation to your favorite imperative language is now trivial, and left as an exercise.
1 With the simple addition of a gvector-remove! operation.

Deriving an Efficient Tree Traversal, Part II

In the last episode, we tried representing control with continuations. But continuations are in fact just one implementation of an abstract datatype of contexts. To understand the kinds of contexts we encounter in this problem, let's look at all the ways continuations get constructed in the CPS version:
(define (select/k tree path k)
  ...
  [(matches? tree (car path))
   (select/k tree (cdr path)
             (lambda (ts1)
               (select*/k (children tree) path
                          (lambda (ts2)
                            (k (append ts1 ts2))))))]
  [else (select*/k (children tree) path k)]))

(define (select*/k trees path k)
  (cond
    ...
    [else (select/k (car trees) path
                    (lambda (ts1)
                      (select*/k (cdr trees) path
                                 (lambda (ts2)
                                   (k (append ts1 ts2))))))]))
In essence, there are two types of context frames, describing the work we have left to do: frames containing a tree and a path, describing a future application of select/k, and frames containing a list of trees and a path, describing a future application of select*/k. The trees and paths mentioned in these continuations are the only parts that rely on dynamic data; the rest is just boilerplate, which can be abstracted into the datatype of contexts.

Abstract Datatype: Contexts

The abstract datatype of traversal contexts includes (as always) the initial context, as well as compound contexts containing a base context and a context frame. Context frames in turn are either tree frames, containing a tree and a path, or tree list frames, containing a list of trees and a path. That is:
context ::= [] | context-frame ⋅ context
context-frame ::= tree × path | (listof tree) × path
We need two operations on contexts: one to build compound contexts, and one to apply an existing context.1
cons-context : context-frame × context → context
apply-context : context × (listof tree) → (listof tree)
Our latest version of the program can now operate on these datatypes independent of their representation:
;; select/c : tree × path × context → (listof tree)
(define (select/c tree path c)
  (cond
    [(null? path) (apply-context c (list tree))]
    [(matches? tree (car path))
     (select/c tree (cdr path)
               (cons-context (make-context-frame (children tree) path)
                             c))]
    [else (select*/c (children tree) path c)]))

;; select*/c : (listof tree) × path × context → (listof tree)
(define (select*/c trees path c)
  (cond
    [(null? trees) (apply-context c null)]
    [else (select/c (car trees) path
                    (cons-context (make-context-frame (cdr trees) path)
                                  c))]))

Final Algebra Representation

We can recover the CPS implementation by representing the constructors of the context datatype (cons-context, make-context-frame, initial-context) as functions, and the observer of the datatype (apply-context) as function application. This is the so-called final algebra representation of an abstract datatype: all the intelligence is in the constructors.
;; context-frame = (context → context)
;; context = (listof tree) → (listof tree)

;; make-context-frame : (union tree (listof tree)) × path → context-frame
(define (make-context-frame work path)
  (cond
    [(tree? work) (lambda (k)
                    (lambda (ts1)
                      (select/c work path
                                (lambda (ts2)
                                  (k (append ts1 ts2))))))]
    [else (lambda (k)
            (lambda (ts1)
              (select*/c work path
                         (lambda (ts2)
                           (k (append ts1 ts2))))))]))

;; cons-context : context-frame × context → context
(define (cons-context frame context)
  (frame context))

;; apply-context : context × (listof tree) → (listof tree)
(define (apply-context k trees)
  (k trees))

(define initial-context identity)

Initial Algebra Representation

Alternatively, we can represent the constructors of the abstract datatype using a simple disjoint union datatype, and put all the intelligence in the observer. This is the initial algebra representation. The new representations of contexts and frames are:
;; context = (listof context-frame)
;; context-frame = (union tree (listof tree)) × path

(define-struct context-frame (work path))

(define cons-context cons)

(define initial-context null)

;; apply-context : context × (listof tree) → (listof tree)
(define (apply-context ls trees)
  (cond
    [(null? ls) trees]
    [(tree? (context-frame-work (car ls)))
     (append trees
             (select/c (context-frame-work (car ls))
                       (context-frame-path (car ls))
                       (cdr ls)))]
    [else
     (append trees
             (select*/c (context-frame-work (car ls))
                        (context-frame-path (car ls))
                        (cdr ls)))]))
This is looking promising: now we can represent contexts with simple tuples and unions, and without the use of higher-order functions.

To be concluded...
1 In general, the return type of applying a context could be polymorphic, but for our purposes, all computations will return lists of tree nodes.

Deriving an Efficient Tree Traversal, Part I

In an attempt to live up to this blog's name, I'm going to spend the next few posts describing a series of transformations that derive an efficient implementation of a tree traversal suitable for imperative languages, starting from a natural, structural recursion in a language that isn't purposefully broken. This is mostly based on material from Mitch's Principles of Programming Languages course that all first-year Northeastern PhD students take--especially data abstraction, contexts, and continuations. This is great stuff, and it's worth learning.

The Problem

The problem we'll solve is to write a traversal of the DHTML DOM tree in a web page. The problem is useful and ubiquitous, but it's hard to solve in a language that doesn't like recursion. It's pretty well-known that you need to maintain an explicit stack to drive the traversal, but it's not obvious how to get it right. I could point out popular libraries that get it wrong, but I'll be polite.

For the sake of example, I'll simplify the problem somewhat: we'll represent DOM nodes as a simple structure:
;; tree = symbol × (listof tree)
(define-struct tree (tag children))
Leaf nodes are represented as trees with empty lists of children. We'll traverse the DOM in the style of CSS selectors, which are represented as an ancestry path of tags:
;; path = (listof selector)
;; selector = symbol
For example, the path '(foo bar) represents the set of all bar nodes that are descendants of foo nodes.

Natural Solution

The clearest implementation of the problem is a structural induction on the DOM tree and selector path (technically, the ordering is lexicographic over both inputs):
;; select : tree × path → (listof tree)
(define (select tree path)
  (cond
    [(null? path) (list tree)]
    [(matches? tree (car path)) ; does node's tag match first tag in path?
     (append (select tree (cdr path))
             (select* (children tree) path))]
    [else (select* (children tree) path)]))

;; select* : (listof tree) × path → (listof tree)
(define (select* trees path)
  (cond
    [(null? trees) null]
    [else (append (select (car trees) path)
                  (select* (cdr trees) path))]))
We have two distinct datatypes to recur on: tree and (listof tree), which leads to two separate functions. When we find a node that matches, we need to test it against the rest of the path as well; otherwise we just search its descendants. We only produce a match when we find a node that has matched the entire selector path.1

CPS

The recursive solution uses the implicit control stack of Scheme to drive the traversal, but we want an explicit implementation. The easiest way to represent control is to CPS a program, so let's try that:
;; select/k : tree × path × ((listof tree) → (listof tree)) → (listof tree)
(define (select/k tree path k)
  (cond
    [(null? path) (k (list tree))]
    [(matches? tree (car path))
     (select/k tree (cdr path)
               (lambda (ts1)
                 (select*/k (children tree) path
                            (lambda (ts2)
                              (k (append ts1 ts2))))))]
    [else (select*/k (children tree) path k)]))

;; select*/k : (listof tree) × path × ((listof tree) → (listof tree)) → (listof tree)
(define (select*/k trees path k)
  (cond
    [(null? trees) (k null)]
    [else (select/k (car trees) path
                    (lambda (ts1)
                      (select*/k (cdr trees) path
                                 (lambda (ts2)
                                   (k (append ts1 ts2))))))]))
This no longer relies on the control stack of Scheme, because every function call is in tail position. However, it isn't a satisfying solution for an iterative language--even one with higher-order functions--because without proper tail calls, it will still use unbounded control space.

To be continued...
1 For the sake of simplicity, I'm not going to worry about duplicate elements in the result list.

Thursday, July 07, 2005

How will AJAX change the landscape of browser platforms?

All this AJAX mania means we're going to be seeing a lot more long-lived single pages on the web, with users interacting indefinitely within a single web page. This means browser scripting engines are going to have to deal with more garbage collection, for one thing. I'm sure that to date, the interpreters bundled with browsers have been built with the implicit assumption that all scripts will be short-lived.

There are currently serious performance issues with web scripting. (This multi-browser experiment at quirksmode demonstrates a few of them. Joel Webber describes some more.) A single developer trying to make a robust web app in Javascript would be up a creek, but with huge industry pressure backed by the craze of an Internet fad, there's incentive for the platforms to change. I have no idea how they'll change, though.

Wednesday, July 06, 2005

Latest Javascript AOP round-up

I've updated a couple of my web pages on Javascript. Check out the latest version of the CEKS machine semantics for "ClassicJavascript," as well as my new Behaviors library. There's still more to come, but they're getting better.

I also just discovered another post on AOP in Javascript, but don't have time to read it at the moment.

Monday, July 04, 2005

For the love of all that's good and holy

This JavaScript stuff is just getting silly. Every object has an internal prototype link, which is a reference to its prototype that, according to the spec, is not accessible to the user program. (The object's prototype is initially available as obj.constructor.prototype, but changing these fields doesn't affect the real prototype link.) There's a very good reason for hiding this link: the dynamic dispatch semantics searches the prototype chain automatically on field dereference (including method calls), and it would be a shame if cycles showed up in the prototype graph. I'm gonna make a rash generalization and guess that the legions of novice scripters would have no clue what was going on if simply dereferencing foo.bar caused their entire web browser to freeze.

But many implementations do expose the internal prototype link, via a non-standard __proto__ property! And what do they do about cycles? My guess was that they'd check for cycles during the dereferencing operation, but it appears they disallow it on mutation of the __proto__ field:
Rhino 1.6 release 1 2004 11 30
js> function Thing(v) { this.value = v; }
js> Thing.prototype = { common : 'blue' };
[object Object]
js> var thing1 = new Thing(1);
js> var thing2 = new Thing(2);
js> thing1.value
1
js> thing2.value
2
js> thing1.common
blue
js> thing2.common
blue
js> thing1.__proto__ == Thing.prototype
true
js> thing2.__proto__ == Thing.prototype
true
js> thing1.__proto__ = thing2;
[object Object]
js> thing2.__proto__ = thing1;
js: "<stdin>", line 12: Cyclic __proto__ value not allowed.
Good gravy, does everything have to be mutable?

Saturday, July 02, 2005

AOP in JavaScript

It's really easy to implement various aspect-oriented programming idioms in JavaScript. This shouldn't come as a big surprise, perhaps because it's a reflective language, but probably even more so simply because it allows you to mutate everything.

Danne Lundqvist implemented a little library that introduces before, after, and around advice to methods. It's very simple: it just replaces the method slot of an object with a wrapper around the original method.
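The core of the trick fits in a few lines. Here's a sketch of the idiom (not Danne's actual API; widget and log are stand-ins):
// install "around" advice by replacing the method slot with a wrapper
function addAround(obj, name, advice) {
  var original = obj[name];
  obj[name] = function() {
    // the advice decides if and how to invoke the original method
    return advice.call(this, original, arguments);
  };
}

addAround(widget, "render", function(original, args) {
  log("before render");
  var result = original.apply(this, args);
  log("after render");
  return result;
});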

Another creative aspect-oriented library for JavaScript is Ben Nolan's Behaviours library. The "advice" in this case is simply the standard DHTML event handlers, but the interesting part is the join-point model: CSS selectors. Because a web page is represented roughly as a tree of DOM objects (a graph, actually, because of the back edges), there are all kinds of libraries for encoding the selection of sets of nodes. Because CSS offers a high-level syntax for representing these selectors, Ben's library allows you to specify join points using those selectors and automatically attach new behavior to all the selected nodes.

Noel noted the connection to AOP on the Untyping blog, and I've made some comments about the design of such an AOP facility on Lambda the Ultimate.

Update: Here's a prototype implementation of a variation on Ben's library. I implemented some of the suggestions I made on Lambda the Ultimate.

Friday, July 01, 2005

Haskell's hidden mutation

Here's a clearer version of the Haskell function I described earlier today:
minTree t = t'
  where (n, t') = replaceMin t n

replaceMin (Leaf n) m = (n, Leaf m)
replaceMin (Node t1 t2) m = (n, Node t1' t2')
  where (n1, t1') = replaceMin t1 m
        (n2, t2') = replaceMin t2 m
        n = min n1 n2
In a sense, the binding of a new variable n introduces an implicit box that will get filled in at some point. So because we have letrec binding semantics, we can use the same box as both an output and an input to the replaceMin function. But the box can only be filled once; if, in the process of computing the value to be placed in the box, we end up having to use the value in the box, we end up with a dependency cycle and then loop. This program works because the box gets set at the end of the computation, and doesn't get used until after that point.

We can explicitly encode the order of the computation using continuations. I'll use Scheme now just to show there's no hidden reliance on laziness anymore.
;; match-let comes from (lib "match.ss"); identity from (lib "etc.ss")
(define (min-tree t)
  (let ([m (box #f)])
    (match-let ([(n . t*) (replace/min t m)])
      (set-box! m (n identity))
      t*)))

(define (replace/min t m)
  (match t
    [($ Leaf n)
     (cons (lambda (k) (k n)) (make-Leaf m))]
    [($ Node t1 t2)
     (match-let ([(k1 . t1*) (replace/min t1 m)]
                 [(k2 . t2*) (replace/min t2 m)])
       (cons (lambda (k)
               (k1 (lambda (n1)
                     (k2 (lambda (n2)
                           (k (min n1 n2)))))))
             (make-Node t1* t2*)))]))
Now you can see exactly when we set the value of the box--after the computation finishes--and that we don't do anything with the returned "min-computations" except embed them in larger min-computations until after the traversal, when we run the final computation.
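For completeness, here's a sketch of the structure definitions the code above assumes (using PLT's define-struct) and a sample run; note that each leaf of the result holds the shared box:
(define-struct Leaf (n))
(define-struct Node (t1 t2))

(define t (make-Node (make-Leaf 3)
                     (make-Node (make-Leaf 1) (make-Leaf 2))))
(define t* (min-tree t))
(unbox (Leaf-n (Node-t1 t*)))  ; => 1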

Circular programming in Haskell

Haskell lets you write some strange-looking programs in a style known as "circular programming." If you remember how you felt the first time you saw a recursive program (usually something in between awe and fear), get ready for this... Imagine transforming a tree with integer leaves into a tree that replaces all the leaves with the minimum leaf value. Normally we'd do this in two passes: first find the minimum leaf, and then traverse the tree a second time, replacing the leaves with the minimum value. In a lazy language you can do this in one go:
data Tree = Leaf Int
          | Node Tree Tree

minTree :: Tree -> Tree
minTree t = t'
  where minTree' (Leaf n)     = (n, Leaf n')
        minTree' (Node t1 t2) = let (m1, t1') = minTree' t1
                                    (m2, t2') = minTree' t2
                                in (min m1 m2, Node t1' t2')
        (n', t') = minTree' t
What is so absolutely bizarre about this program is that it uses the result of the entire recursion n' in one of the recursive calls, before the result has been computed! But because it isn't actually observed until after the tree has been constructed, it's safe to do so. Another way to think of this is to imagine the value n' as a reference cell or "box" whose contents get set once at the end of the computation; we don't look inside it until after the computation is over, but during the computation we're allowed to pass the box around and put it inside other nodes.

I've seen some pretty weird Haskell programs that use this kind of trick, passing the result of a function in as one of its own arguments, for example. Of course, you can't just do this however you like. For example, we can't possibly imagine this program would give us a useful result:
data Thing = Leaf
           | Node Thing

f :: Bool -> Thing -> Int
f _ (Node v) = i'
  where i' = f (i' == 0) v
f isZero Leaf = if isZero then 1 else 0
The isZero argument is a flag that means something like "will my result be unequal to itself?" Since computing i' requires already knowing i', the dependency cycle can't be broken, and the program diverges.

Monday, June 27, 2005

Tail-applicative continuations

In a recent discussion on the plt-scheme mailing list, I learned that the application of a captured continuation in tail position--even an escape continuation--is not a tail call. This means, for example, that if you use let/ec to encode constructs from C-like languages such as return, break, and continue, applications of these continuations will not be tail calls.

Ryan and I came up with a trampoline-like implementation of what I'm calling let-tail-continuation (or let/tc for short), which captures the current continuation, binds it to a local variable, and implements applications of that variable as tail calls. Here's a simplified version:
(define-syntax let/tc
  (syntax-rules ()
    [(let/tc k body)
     (let ([thunk (let/cc return
                    (let-syntax ([k (syntax-rules ()
                                      [(k e)
                                       (return (lambda () e))])])
                      (lambda () body)))])
       (thunk))]))
The trampoline evaluates the body, which is required to evaluate to a thunk, and then applies the thunk in tail position. If the body invokes the captured continuation, it wraps the argument to the continuation in a thunk so that it gets evaluated by the trampoline, again in tail position.
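As a quick usage sketch (my example, not from the mailing-list thread), here's a C-style early return encoded with let/tc:
(define (first-even lst)
  (let/tc return
    (begin (for-each (lambda (x)
                       (when (even? x) (return x)))
                     lst)
           #f)))

(first-even '(1 3 4 5))  ; => 4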

We can observe that applications of the captured continuation are in tail position using the trick I described the other day:
(define (f n)
  (let/tc return
    (if (zero? n)
        (continuation-mark-set->list (current-continuation-marks)
                                     'test)
        (return (with-continuation-mark 'test n
                  (f (sub1 n)))))))
; => (1)
Whereas if we apply the recursive call in a non-tail position with respect to the captured continuation, we see the control stack grow:
(define (g n)
  (let/tc return
    (if (zero? n)
        (continuation-mark-set->list (current-continuation-marks)
                                     'test)
        (return (with-continuation-mark 'test n
                  (values (g (sub1 n))))))))
; => (1 2 3 4 5 6 7 8 9 10)
For the more robust version, I've allowed arbitrarily many body expressions and use an identifier macro to allow the continuation variable to be used in operand position as well.

Update: Thank you Felix for the correction--that's an important point. The salient issue is not whether the application of a continuation itself is in tail position but whether its subexpression is. The let/tc form allows applications of the continuation to evaluate their subexpression in tail position; another way to put this is that continuation applications are tail contexts (or tail context frames).

Quickest way to circumvent tail-recursion in Scheme

Replace exp with (values exp).
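For example (a sketch of the behavior, at least in the PLT Scheme of the time):
;; The recursive call now sits in the argument position of values,
;; not in tail position, so this no longer runs in constant space:
(define (countdown n)
  (if (zero? n)
      'done
      (values (countdown (sub1 n)))))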

Saturday, June 25, 2005

Strict Haskell

I just found these notes on improving inefficiencies in Haskell resulting from laziness. They're written by someone who appears to be a bioinformaticist--hence, someone working with very large datasets and computations. (It's always a breath of fresh air to see non-PL people using something other than Fortran, C, or VB.)

Friday, June 24, 2005

Welcome to the future

On the car ride up to NEPLS today, I watched in amazement as Eli used his cell phone's web interface to trigger a script that made the 2005 ICFP programming contest go live exactly at 10am (9am CT). We were lucky -- around Williamstown the reception started getting spotty, but we hit a good patch just as the clock struck 10:00.

I was one of three people to witness the activation of this international programming contest. Who'd have guessed it was happening in a beat-up old car bouncing down the back streets of western Mass?

Thursday, June 23, 2005

Literate programming

It's tempting to use literate programming to write a little document recording every last implementation decision you made in designing a library, and that kind of dedication to documentation is admirable, but it's ultimately misdirected.

Javadoc has the better idea: couple the documentation of the interface with the code. I'm beginning to believe that lower-level design decisions should be documented elsewhere, and should be written on a by-need basis only. Code is hard enough to maintain, but documenting every last aspect of your code just guarantees the documentation will get out of date. Often the design decisions are global anyway, in which case there isn't any one place in the code where they really belong.

1000th site visit

Got my 1000th reader today! My readership appears to be growing little by little. Thanks for reading!

Wednesday, June 22, 2005

Observing tail calls

Take a simple tail-recursive function:
(define (f n acc)
  (if (zero? n)
      acc
      (f (sub1 n) (+ n acc))))
How can we demonstrate the control behavior of such a procedure? With continuation marks! If we wrap every reference to f with code that saves the current arguments in a continuation mark:
(with-continuation-mark 'snapshot (list n acc)
  (f (sub1 n) (+ n acc)))
then as long as the reference to f is in tail position, this should overwrite the previous mark. Otherwise, it will build a stack of marks, each representing the current arguments at one of the invocations. We can save this control snapshot for posterity so that after the computation finishes, we can check its most recent value:
(define (f n acc)
  (set! snapshot (current-continuation-marks))
  ...)
Here's a macro that abstracts out this transformation:
(define-syntax (control-snapshot stx)
  (syntax-case stx ()
    [(_ f e ...)
     (identifier? #'f)
     (with-syntax ([(x ...) (generate-temporaries #'(e ...))])
       #'(let ([original f]
               [snapshot #f]
               [label (gensym)])
           (fluid-let ([f (lambda (x ...)
                            (set! snapshot (current-continuation-marks))
                            (with-continuation-mark label (list x ...)
                              (original x ...)))])
             (f e ...)
             (continuation-mark-set->list snapshot label))))]))
If we take a control snapshot of an invocation of the tail-recursive f, we get
> (control-snapshot f 10 0)
((1 54))
If we define a non-tail-recursive version like so:
(define (g n acc)
  (if (zero? n)
      acc
      (let ([result (g (sub1 n) (+ n acc))])
        result)))
we get a full stack history:
> (control-snapshot g 10 0)
((1 54) (2 52) (3 49) (4 45) (5 40) (6 34) (7 27) (8 19) (9 10) (10 0))
You can use continuation marks in this manner to write test cases that ensure that something you've written is using bounded control.
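For instance, here's a sketch of such a test (assert-bounded-control is my name, not a library function):
(define (assert-bounded-control who marks)
  (unless (<= (length marks) 1)
    (error who "control grew to ~s frames" (length marks))))

(assert-bounded-control 'f (control-snapshot f 10 0))  ; passes
(assert-bounded-control 'g (control-snapshot g 10 0))  ; raises an error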

Monday, June 20, 2005

We have met the enemy, and it is... hidden invariants

A couple of well-known encodings have the nasty property that they are not bijective. For example, a list of pairs can always be transformed into a pair of lists, and a pair of lists can be transformed into a list of pairs, but if the two lists are of different lengths, you either have to fail, drop some elements, or throw in some dummy elements. (The last may be impossible for a polymorphic function.)
unzip : [(t, u)] → ([t], [u])
zip : ([t], [u]) ⇀ [(t, u)]
That means that the "unzipped" representation, i.e., the pair of lists, contains a hidden invariant that's not expressed in the type: that the two lists have the same length. Even in a language with decidable dependent types like Dependent ML, this invariant can only be enforced so far; for example, if you run the filter function over the two lists, the resulting lists will have statically unknown length.
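Here's the pair in Scheme (a sketch; this zip makes the partiality explicit by failing on lists of unequal length, while unzip is total):
(define (unzip ps)
  (if (null? ps)
      (cons '() '())
      (let ([rest (unzip (cdr ps))])
        (cons (cons (caar ps) (car rest))
              (cons (cdar ps) (cdr rest))))))

(define (zip xs ys)
  (cond [(and (null? xs) (null? ys)) '()]
        [(or (null? xs) (null? ys))
         (error 'zip "lists of unequal length")]
        [else (cons (cons (car xs) (car ys))
                    (zip (cdr xs) (cdr ys)))]))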

Similarly, the option type constructor doesn't distribute nicely over tuples. If you have two fields that must both either be present or absent, it's better to represent them as a single field that is an optional tuple, rather than separate optional fields. This is taking advantage of the type system to help keep your datatypes straight. Even without a type system, it's still better because it groups related elements, and it's one less property to document. We're talking datatype abstraction here, folks. It's a Good Thing.
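In Scheme terms, the contrast looks something like this (a sketch with made-up field names):
;; Hidden invariant: home-street and home-city must be both #f or
;; both set, and nothing enforces it.
(define-struct person (name home-street home-city))

;; Better: one field that is either #f or a (street . city) pair,
;; so the invariant is structural.
(define-struct person* (name home))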

I've noticed that languages that have a kind of built-in option always available--I'm looking at you, Java--encourage the proliferation of this family of hidden invariants. There are many cases where an object has multiple fields that must all be either null or non-null. I realize they needed null in the language because when you create an object, the fields need to have some well-defined value before they're initialized. (Of course, imperative constructors are a whole 'nother world of pain...) But Java also discourages the creation of small datatypes, deterring people from abstracting their datatypes into separate classes, and the result is these extra invariants.

And we know what happens to hidden invariants: they hide away in the back of the programmer's brain until they get mentally garbage collected, lost to posterity, and consequently violated.

Thursday, June 16, 2005

Real world PL design

Let's say I have an imperative language, and I can do all kinds of icky things with it: send messages to a server in Norway, create and mutate reference cells, abort and resume captured continuations. But I find there are large subsets of my program that are pure--no mutation, no I/O, no side effects in general, just functions. I'd love to be able to use laziness for these portions of the program. For one thing, it adds a kind of whole-program optimization, avoiding unnecessary computation and fusing traversals of compound data structures. This relieves me of a whole bunch of manual optimization. It also allows me to operate on infinite data structures like streams.

Now, since the lazy evaluation strategy doesn't have a clear, sequential operational model, I don't want to use any of the side effects in the lazy portions of the program. I'd like the type system to help me make sure that any effects in the language are restricted to the imperative world, and the lazy world is only allowed to use the pure subset of the language. So we add a simple effect system to the language: an imperative expression of type α is given type Effect α. Then all the imperative language features and standard libraries have this type, and the lazy portions of the program, which cannot have type Effect α for any α, can't touch them.
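Here's an untyped Scheme sketch of the idea (the names are all mine): an effect is a first-class description of an action, and only the top-level "execution machine" ever runs one.
;; An Effect is just a tagged thunk; pure code can build and
;; combine effects but never runs them itself.
(define (make-effect thunk) (list 'effect thunk))
(define (run-effect! e) ((cadr e)))  ; only the top level calls this

(define (print-line s)
  (make-effect (lambda () (display s) (newline))))

;; Sequencing two effects yields another effect:
(define (effect-seq e1 e2)
  (make-effect (lambda () (run-effect! e1) (run-effect! e2))))

(run-effect! (effect-seq (print-line "hello") (print-line "world")))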

This is preamble, I suppose, to three points.

1. Monads should be introduced as a customizable effect system. My story above is how I think we should introduce monads. If nothing else, as Simon PJ knows, you're bound to scare people if you use terminology from category theory. For crying out loud, it's okay to have a good theoretical model, but you don't have to confuse the language with the theory! Even when you want people to be able to introduce their own effects into the language, you don't have to call them monads. Just call them effects! This also solves the "what exactly is a monad? Can I hold one in my hand? Is it a value? Is it a type?" problem: I bet people are already comfortable with the idea of an effect as an intangible thing, even though it is something you can "add" to a language by extending the core semantics.

2. The imperative part of a language is important. I think it's the wrong way around to treat the pure functional part of the program as the main part. The imperative part is just as fundamental. It's not the gutter where you sweep all the ugly bits. (The addition of forkIO must really be cognitive dissonance for the monks of pure FP...) Especially when you start having things like concurrency, you really want to put as much care and design into the impure part of the language as the pure part. I'd rather think about embedding pure, lazy languages into general-purpose languages than the other way around.

3. Combining effects is both hard and necessary. Monads are a great abstraction for designing effects. Type classes work fine for modelling individual effects on top of a pure core language, if you really only want to dabble with small doses of individual effects here and there. But monads are notoriously hard to combine, as are language features in general. Making the programmer invent a general purpose PL every time they want these effects is a waste of effort. Notice that the IO monad contains mutable state, concurrency, exceptions and non-termination in addition to actual I/O. That's because, as I said, the imperative fragment of a language is important, and all these effects have proven to be useful and important features of impure languages.

Now, if you rid Haskell entirely of monads and put all the effects into the top-level language, you wouldn't be able to experiment with new language features and effects. And I like the idea of a customizable effect system. So I guess I don't dislike Haskell's monads, but I think they're overblown. What I'm saying is, there's real-world PL design hiding in the IO monad. No pun intended.

Eich on Javascript's future

Via Slashdot, I see that Brendan Eich, the original designer of Javascript (and still actively involved in the development of Mozilla), is publicly discussing the future of the language. If he's really open to suggestions, I want to think about what I'd most like to see put in there.

My stock complaint about most languages is their lack of support for recursion. But for compiler output, the needs may be quite different. I really need to go to bed, but I'll think about this.

ClassicJavascript

I've written up a CEKS machine semantics for the essential core of Javascript, which just for fun I'm calling ClassicJavascript. It demonstrates the strange scoping and store semantics of the language, the confusing meaning of this, and the prototype-based inheritance model. It doesn't yet treat the primitive types as objects in quite the way the spec requires, but that wouldn't be too hard to add. Oh, I also don't have a rule for starting up the machine, which I ought to, because it would explain the "global" object and create the initial Object and Function objects. I'll do that at some point.

I imagine you could use this to reason about a compiler that generated Javascript, if you were pretty ambitious, but mostly I just did it to help myself understand the language. It would also make it really easy to write an interpreter. A lot easier than trying to write to the spec, which is written in a combination of English prose and BASIC-like pseudocode (ouch).

If I do say so myself, this turned out to be a whole lot more concise than the 75 or so pages of the spec specifying the core semantics of the language (that's the part without any description of the standard library).