robjsoftware.info

A blog about software – researching it, developing it, and contemplating its future.

Glorious Possibilities #2: The Ultimate Parser

with one comment

So for various reasons I have been learning a lot about parsers lately.  The #1 thing I have learned is that parsing is all about tradeoffs.  The #2 thing I have learned is that the ultimate parsing environment does not yet exist.

Long-time readers of this blog (assuming I didn’t lose them all in the slightly bobbled switch to WordPress!) know that I’m very interested in growable languages and in extensible programming environments.  Parsing is at the heart of programming language design, since it brings meaning to the text we programmers write.  When writing extensible programming environments, it is important that your language be closed under composition, so extending your language’s grammar still has some meaning (even if it introduces more ambiguity).  Parsers, in this sense, want to be powerful.

On the other end of the spectrum, there are parsers for relatively straightforward formats such as XML or CSS.  These parsers are often running in resource-constrained environments (a cell phone browser) or are dealing with large volumes under high load (an enterprise messaging bus).  For these parsers, high performance is absolutely critical, sometimes in both memory and CPU.  Parsers, in this sense, want to be fast.

There is a very tricky tradeoff between power and speed throughout the parsing landscape.  As just one example, I’ve been experimenting with ANTLR lately.  It is, on many fronts, the best parsing toolkit available.  Terence Parr has done a bang-up job over many years of addressing difficult technical issues while building a very usable tool.  It is just purely wonderful that ANTLR’s generated code has line-by-line comments that come directly from the grammar that generated it.  ANTLRWorks is a very enjoyable tool for language design.  Finally, Terence’s books are immensely useful.

But even ANTLR has some surprises.  I generated CSharp2 code from the example CSS grammar on the ANTLR site.  Then I ran it and looked at its memory footprint with a profiler.  I was not surprised to see an immense number of IToken objects being created — ANTLR’s generated lexer (as of version 3, anyway) allocates a new IToken object for each token parsed.  This alone is a real problem for high-load parsers, because allocation in the parser’s hot path is expensive in a managed language.
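To make the usual mitigation concrete, here is a minimal F# sketch (my own names, not ANTLR’s API): represent tokens as structs and have the lexer write them into a reusable buffer, so the hot path performs no per-token heap allocation.

    // A minimal sketch, not ANTLR's API: a struct token avoids one heap
    // allocation per token in the lexer's hot path.
    [<Struct>]
    type Token =
        { Kind   : int      // token type, e.g. an index assigned by the grammar
          Start  : int      // offset of the first character in the source text
          Length : int }    // number of characters covered by this token

    // The lexer writes tokens into a caller-supplied buffer instead of
    // allocating a fresh heap object per token.
    let lexInto (source: string) (buffer: ResizeArray<Token>) =
        let mutable pos = 0
        while pos < source.Length do
            // ... recognize one token starting at pos (details elided) ...
            let length = 1   // placeholder: a real lexer computes the match length
            buffer.Add({ Kind = 0; Start = pos; Length = length })
            pos <- pos + length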

There was a surprise, though, and not a good one.  There were close to 5 megabytes of int16[] arrays.  Where did those come from?  From the lexer DFAs, which are stored in static lexer fields and decompressed when the lexer is loaded.  The compressed char[] representation of the lexer DFAs is tiny — certainly not megabytes.  So there is a shockingly high compression ratio here.  I have to question whether that grammar is really complex enough to motivate a multi-megabyte lexer DFA.  It seems that I am not the only person encountering surprisingly large lexers with ANTLR v3.  It looks like this can be worked around with a hand-written lexer, but wouldn’t it be wonderful if ANTLR itself were able to tune its lexer generation to achieve the same effect (trading off some speed, perhaps, to reduce DFA explosion)?
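For illustration only (this is a generic sketch, not ANTLR’s actual encoding), the trick generally looks like this: the tables ship as compact run-length-encoded constants and are expanded into flat arrays at load time, which is exactly where the multi-megabyte footprint shows up.

    // Illustrative only, not ANTLR's actual code: a packed (count, value)
    // table expands to `count` copies of `value` per pair.
    let unpackRle (packed: (int * int16) list) : int16[] =
        packed
        |> List.collect (fun (count, value) -> List.replicate count value)
        |> List.toArray

    // A tiny packed table: 10,000 transitions to state 3, then 2 to state -1.
    // The packed form is two pairs; the unpacked form is 10,002 int16 cells.
    let table = unpackRle [ (10000, 3s); (2, -1s) ]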

So there is room for performance improvement even on the most mature tools in the industry.  And on the other end of the spectrum, there are grammatical constructs that are impossible to express in purely context-free parsing frameworks like YACC.  I have been corresponding a bit with Yitzhak Mandelbaum at AT&T, subsequent to his excellent paper (with Trevor Jim and David Walker) on data-dependent parsing.  This has a lot of similarities with ANTLR’s semantic predicates, but seems to be perhaps more powerful and general (being embedded in a framework that is inherently scannerless).  However, being based on Earley parsing, it is hard for this method to gain competitive performance.  The method does support embedded “black-box” parsers, however.
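To make “data-dependent” concrete, here is a toy F# example of my own (not the paper’s formalism): a count read from the input determines how much of the remaining input a rule consumes, something a purely context-free rule cannot pin down for arbitrary counts.

    // A toy data dependency: the first token is a count N, and the rule
    // consumes exactly N of the following items. An ANTLR semantic predicate
    // consulting parser state can express something similar.
    let parseCounted (tokens: string list) : string list option =
        match tokens with
        | countTok :: rest ->
            let n = int countTok                 // the data the rest of the parse depends on
            if List.length rest >= n
            then Some (List.take n rest)         // consume exactly n items
            else None                            // not enough items: parse error
        | [] -> None

    // parseCounted ["3"; "a"; "b"; "c"; "d"]  evaluates to  Some ["a"; "b"; "c"]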

The holy grail might be an environment with a powerful, compositional, potentially ambiguous top-level user language, and an implementation that could aggressively analyze sublanguages of the grammar and choose optimized implementations.  Grammar and language design is a balancing act between ambiguity and generality on the one hand, and efficiency on the other.  Different implementations have different tradeoffs — implementations supporting unambiguous grammars only will inevitably be faster than those that handle ambiguous parses; LR and LL algorithms both have their own idiosyncrasies with respect to the languages they can support; and so on.  So the ideal language environment would itself be extensible with different parsing algorithms that could themselves be composed.  That way, if all you have is a regular expression, you get tightly tuned inner loops and no backtracking; but if you have a highly ambiguous grammar, you get a backtracking and/or chart-parsing implementation, perhaps with lots of data dependencies.  The toolset should let you apply different algorithms to different parts of your language, and should support you in refactoring (for example, the left-corner transform) your grammar.
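Here is a deliberately naive F# sketch of the idea (every name here is hypothetical): if the grammar is a first-class value, a toolchain can walk it, classify each subgrammar, and assign an implementation strategy to each piece.

    // Grammar as data, so it can be analyzed and transformed, not just run.
    type Grammar =
        | Terminal of char
        | Seq      of Grammar * Grammar
        | Choice   of Grammar * Grammar
        | Star     of Grammar
        | NonTerm  of string

    type Strategy =
        | Dfa          // regular sublanguage: a tight, non-backtracking loop
        | RecursiveLL  // unambiguous sublanguage: recursive descent
        | Gll          // ambiguous remainder: a generalized/chart algorithm

    // A toy chooser; a real tool would compute regularity, LL(k)-ness,
    // and ambiguity properly instead of pattern-matching shallowly.
    let rec choose grammar =
        match grammar with
        | Terminal _                  -> Dfa
        | Star g                      -> choose g
        | NonTerm _                   -> RecursiveLL
        | Choice (a, b) when a = b    -> Gll                         // a trivially ambiguous choice
        | Seq (a, b) | Choice (a, b)  -> max (choose a) (choose b)   // the whole needs the strongest strategy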

Incidentally, this last point is one reason why Parsing Expression Grammars are potentially problematic.  PEGs are a beautiful embedding of a declarative grammar in a functional programming language.  But in some sense most PEG implementations are too embedded — the only representation of the grammar’s structure is in the code itself, and all you can do with your PEG is to run it, not refactor or otherwise transform it.  (And let’s leave aside the issue of ordered choice potentially breaking your compositional intention.)  There is at least one endeavor to make PEG representations transformable, which is extremely interesting and worth following.
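To show what “too embedded” means, here is a minimal PEG combinator toy in F# (my own sketch, not any particular library): the grammar exists only as composed functions, so the only thing you can do with the result is run it.

    // A parser is just a function from input and position to a new position.
    type Peg = string -> int -> int option

    let lit (c: char) : Peg =
        fun input pos ->
            if pos < input.Length && input.[pos] = c then Some (pos + 1) else None

    let andThen (p: Peg) (q: Peg) : Peg =
        fun input pos -> p input pos |> Option.bind (q input)

    // Ordered choice: if p succeeds, q is never consulted, which is exactly
    // the behavior that can silently defeat compositional intent.
    let orElse (p: Peg) (q: Peg) : Peg =
        fun input pos ->
            match p input pos with
            | Some next -> Some next
            | None      -> q input pos

    // Nothing about this value can be inspected or refactored; contrast a
    // reified grammar (like the Grammar type sketched above), which can be
    // transformed first and compiled to a function like this afterwards.
    let aThenBOrA = orElse (andThen (lit 'a') (lit 'b')) (lit 'a')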

There is much more that could be said here — error recovery, for instance, is extremely tricky in many cases, and has tight interactions with the algorithm you are using.  Yet you still would like a declarative way to express your grammar’s error behavior, if possible. 

And what about low-level performance?  Which is faster:  a virtual object dispatch in your parser’s inner loop, or a colossal switch statement?  If you are doing code generation, the only answer is probably “generate both versions and then measure it on your specific grammar!”  There are so many tradeoffs here that drawing general conclusions may be nigh impossible.
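For the curious, here are the two shapes in miniature (a hedged sketch, not generated code from any real tool); which wins depends on the JIT, the grammar, and the input.

    // Shape 1: virtual dispatch, one object per rule.
    type IRule =
        abstract member Parse : string * int -> int option

    let runVirtual (rule: IRule) (input: string) (pos: int) = rule.Parse(input, pos)

    // Shape 2: one big switch (here a match) over an integer rule tag.
    let runSwitch (tag: int) (input: string) (pos: int) : int option =
        match tag with
        | 0 -> if pos < input.Length then Some (pos + 1) else None   // toy rule bodies
        | 1 -> Some pos
        | _ -> None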

The basic point I want to make is that parsing is very far from being a properly solved problem.  Tools like ANTLR point the way, but there is more still to be done.  And this is the glorious possibility:  building a parsing environment that is itself composable, and that is fully transparent to the developer at all times, letting the developer choose their implementation tradeoffs at all levels.

As Terence knows better than anyone, this alone could consume an entire career.  Or multiple such careers.  Still, I could see myself having a lot of fun building an F# implementation of this kind of transparent, composable parsing toolkit.  What a useful thing it would be!

Edit: And you know I will be following Terence’s work on ANTLR 4.  Also, this presentation of his on parser implementation is a great look at the state of the art in managed-language parser code generation. 

Edit #2:  I must also mention that ANTLR has soundly convinced me of the benefits of LL style: it is top-down and recursive, which is a very natural way for most programmers to think.  Table-driven parsers, without good and scalable visualization, can be quite hard to debug and extend, whereas ANTLR’s generated code is a model of clarity.
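A tiny hand-written F# example of why this feels natural: each rule is simply a function that calls the functions for the rules it references.

    // parens ::= '(' parens ')' | <empty>, written as one recursive function.
    let rec parens (input: string) (pos: int) : int option =
        if pos < input.Length && input.[pos] = '(' then
            match parens input (pos + 1) with
            | Some next when next < input.Length && input.[next] = ')' -> Some (next + 1)
            | _ -> None
        else
            Some pos   // the empty alternative: match nothing, stay put

    // parens "(())" 0  evaluates to  Some 4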


Written by robjellinghaus

2010/05/04 at 22:51

Posted in Uncategorized

One Response


  1. Incidentally, one can get away without NFA -> DFA transformations entirely and use an approach I’ve called “Trace Based Parsing”. It is a generalization of Thompson-style regexp matching, which is top-down and linear in time and space, extended to full grammars, i.e. many interconnected production rules instead of just one. It is not without its own complexities when it comes to eliminating First/First conflicts in top-down parsing, but with appropriate parse table preparation one gets approximately an LL(1) parser with the capabilities of an LL(*) parser.

    I’ll take a look at the “data dependent grammars” paper, but I’m generally sceptical about extending the declarative scheme of grammars much beyond the context-free case. Grammar debugging is already hard in the CF case, and it will consume much of your time when building a language frontend anyway.

    Instead I suggest breaking the complete grammar into two different parts, one for the lexer and one for the parser. Many context-dependent aspects ( e.g. building a symbol table for C ) can then be handled in between:

    lexer: string -> token-stream
    postlexer: token-stream -> token-stream
    parser: token-stream -> parse-tree
    

    The postlexer is expressed as an arbitrary imperative or functional program. It can insert or substitute tokens within the token stream produced by the context-free lexer. It adds precision and simplifies the work of the parser.
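    To make the shape concrete, a minimal F# sketch of this staged pipeline (with hypothetical names and toy stage bodies):

    // The three stages as ordinary functions; the postlexer is plain code
    // sitting between the two context-free stages.
    type TokenStream = string list
    type ParseTree   = Node of string * ParseTree list

    let lexer     (source: string)      : TokenStream = source.Split(' ') |> Array.toList
    let postlexer (tokens: TokenStream) : TokenStream = tokens   // e.g. insert or substitute tokens here
    let parser    (tokens: TokenStream) : ParseTree   = Node ("root", [ for t in tokens -> Node (t, []) ])

    let frontend = lexer >> postlexer >> parser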

    For error recovery ( user level ) I tend to think we should stop being so demanding of the parser. If the parser is considered a device that creates a parse tree from a token stream and it fails to build one, the reason is an error in the token stream. This can be detected using simpler techniques which analyse the token stream alone, without building a tree. If the parser runs into an error, one starts a separate error recovery routine which traces the stream and figures out what went wrong. This technique can also be used for autocompletion ( token stream repair ) in cases where a missing token can be uniquely determined.

    Not for any money in the world would I drop parse tables. They are a low-level representation of a grammar and can be used to build all sorts of machines, not only parsers: token-tracers and parse-tree validators, for example, which can verify the correctness of all transformations applied to a parse tree. They are also helpful for analysis using handy entities like first-sets, follow-sets, end-sets, etc.

    There are other aspects of parsing which are also interesting, for example online incremental parsing on incomplete data. Locality is also a big one: change a single character in a text when working with an IDE and figure out how to re-tokenize and re-parse only a tiny portion. Then apply damage-repair on the existing data structures.

    Kay Schluehr

    2010/05/22 at 21:10

