While reading Sipser’s book on the theory of computation, I noticed that it relies heavily on the concept of formal languages, and on machines that merely accept or reject an input.

But most useful programs we deal with do more than merely verify an input. They compute something. We may compute a solution to an equation instead of merely verifying one. We may sum a list of numbers, or evaluate some function on it.

Maybe “most” is an exaggeration, since I can’t prove it. But it still raises the question: why not deal with machines that do more than merely verify?
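For concreteness, here is a minimal sketch (in Python, with a made-up verifier `accepts` standing in for an arbitrary decider) of the standard way a yes/no machine simulates a function: a decider for the language “sum(xs) ≤ k” lets you recover the actual sum by binary search over k.

```python
def accepts(xs, k):
    """A pure verifier: accepts iff sum(xs) <= k.
    (Stands in for any black-box decider for this language.)"""
    return sum(xs) <= k

def compute_sum(xs, lo=0, hi=10**6):
    """Recover the value of sum(xs) using only yes/no queries.

    Assumes 0 <= sum(xs) <= hi (a known bound, as in the usual
    reduction from function problems to decision problems).
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if accepts(xs, mid):   # is sum(xs) <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo                  # least k accepted, i.e. the sum itself

print(compute_sum([3, 14, 15, 9, 2]))  # 43
```

So the restriction to accept/reject loses no power in principle; it just pushes the “computing” into repeated queries.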

  • Corbin
    8 months ago

    It’s because most of the hard questions and theorems can be phrased over the Booleans. Lawvere’s fixed-point theorem, which has Turing’s theorem and Rice’s theorem as special cases (see also Yanofsky 2003), applies to Boolean values just as well as to natural numbers.
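    For reference, the theorem’s usual statement (a sketch, as in Yanofsky’s presentation):

    ```latex
    % Lawvere's fixed-point theorem: in a cartesian closed category,
    % if \phi : A \to B^A is point-surjective, then every endomorphism
    % of B has a fixed point:
    \phi : A \to B^A \ \text{point-surjective}
      \implies \forall f : B \to B,\ \exists b : 1 \to B,\ f \circ b = b
    ```

    Taking B to be the Booleans (rather than the naturals) is exactly why the diagonal arguments behind Turing’s and Rice’s theorems already bite at the level of accept/reject.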

    That said, you’re right to feel like there should be more explanation. Far too many parser papers are written with only recognition in mind, and the actual execution of parsing rules can be a serious engineering challenge. It would be nice if parser formalisms were described in terms of AST constructors and not mere verifiers.
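    To illustrate that last point, here is a toy sketch (hypothetical grammar and names, not from any particular paper): a recognizer and a parser for `expr := NUM ('+' NUM)*`, where the parser runs the same rules but treats each one as an AST constructor instead of a Boolean test.

    ```python
    # Toy grammar:  expr := NUM ('+' NUM)*

    def recognize(tokens):
        """True iff tokens match the grammar -- the formal-language view."""
        if not tokens or not tokens[0].isdigit():
            return False
        i = 1
        while i < len(tokens):
            if tokens[i] != '+' or i + 1 >= len(tokens) or not tokens[i + 1].isdigit():
                return False
            i += 2
        return True

    def parse(tokens):
        """Same control flow, but each rule emits an AST node (nested tuples)."""
        if not recognize(tokens):
            raise SyntaxError(tokens)
        ast = ('num', int(tokens[0]))
        for i in range(2, len(tokens), 2):
            ast = ('add', ast, ('num', int(tokens[i])))
        return ast

    print(recognize(['1', '+', '2', '+', '3']))  # True
    print(parse(['1', '+', '2', '+', '3']))
    # ('add', ('add', ('num', 1), ('num', 2)), ('num', 3))
    ```

    The recognizer is the easy half; deciding what `parse` should build, and when, is where the engineering lives.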