At the recent DSL DevCon, Martin Fowler gave an excellent keynote speech discussing domain-specific languages, an important subject that seems certain to be the buzzword of the next year or two.
However, one slide of his got my goat, leading me to foam at the mouth, beat the table with my shoe, wag my finger angrily, and otherwise mix my metaphors. In that slide, he said something along the lines of "a DSL probably ought not to be Turing complete."
I think this is terrible advice. First, I think it's a pedagogical mistake. Second, I think it's incorrect.
Proof
Q.E.D.
Discussion
Is SQL a less powerful programming language than the game of Minesweeper?
To the extent that we want to broaden the number of people writing domain-specific languages, we can't put that question on the entrance test.
Even if the percentage of people who know the answer can be made large, the percentage who comprehend it is quite a bit smaller, and the number who can apply it is smaller still. In a DSL, it shrinks further, because you have to analyze your language in the context in which you've embedded it. You've climbed right up Bloom's taxonomy! That's why I think it's a mistake pedagogically.
Why I think the advice is flat-out wrong is that DSLs are necessarily embedded in the context of a general-purpose language. Say I want a DSL that converts the 256 numbers 0 through 255 into a set of CA rules following the obvious structure implied in:
A DSL with exactly one production ({byte} -> {CA rules}) is not Turing complete (duh). But the context in which this DSL is expressed is one that leads to a (possibly profound) discovery of Turing completeness. (In case it's not obvious from the context -- the illustrated 'Rule 110' CA is Turing complete.)
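A minimal sketch of what that single production might look like, in Python (the function names, the Wolfram-style bit numbering, and the zero-padded boundary are my assumptions, not anything from the talk):

```python
def rule_table(n):
    """The DSL's one production: map a byte (a Wolfram rule number,
    0-255) to a lookup table from 3-cell neighborhoods to next states.
    Bit 4a + 2b + c of n gives the successor of neighborhood (a, b, c)."""
    assert 0 <= n <= 255
    return {(a, b, c): (n >> (a * 4 + b * 2 + c)) & 1
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells, table):
    """Advance the CA one generation, padding the edges with zeros."""
    padded = [0] + cells + [0]
    return [table[tuple(padded[i:i + 3])] for i in range(len(cells))]

# Rule 110 -- the Turing-complete one -- run for a few generations.
table = rule_table(110)
row = [0] * 15 + [1]
for _ in range(5):
    row = step(row, table)
    print("".join(".#"[c] for c in row))
```

The DSL itself ({byte} -> {CA rules}) is trivially not Turing complete; it is `step`, supplied by the host language, that lets Rule 110's Turing completeness emerge.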
Fowler referred to this when he advised DSL writers to avoid "accidental" Turing completeness. That really set the hook deep in my mouth, because it has a whiff of this stuff. To raise the issue and then hand-wave it away is to leave yourself one heck of a big escape hatch. The mention of 'Turing completeness' or 'Turing equivalence' demands attention; they're either forbidden holy-words or they're used precisely and must be discussed with precision. (Thus, O'Brien's Corollary (1991) to Godwin's Law: The first person to mention 'Turing equivalence' in a debate of programming languages loses.)
Solution
It's only because Fowler generally gives such excellent advice that the "Turing complete" stuff provoked me so much. Sure enough, a moment after the slide I didn't like, he came up with what seems like the answer. I'm not sure of his exact words, but it was along the lines of:
Start with a general language that you think is readable and take stuff out. If you can't take out quite a bit, don't write an external DSL.
Great advice.