LRSTAR: LR(*) parser generator for C++
LRSTAR vs ANTLR vs Bison
Note. Things change over time, so some of these items may differ from what is shown above. To make sure you have the latest information, visit the competitors' websites.
Canonical LR(1) vs Minimal LR(1)
September 2018. I just finished a 3-day study of Canonical LR(1) versus Minimal LR(1) parser tables. The "dragon" textbook by Alfred Aho and others says that canonical parser tables can be as large as 10,000 states, but I have never seen any published results on this. Since we no longer worry much about CPU memory usage, I thought it would be interesting to see just how large canonical parser tables get for large grammars. Answer: ridiculously large. See the DB2 grammar at the bottom of the chart below.
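The size difference comes from canonical LR(1) keeping a separate state for every distinct lookahead set, while a minimal LR(1) generator merges states that share the same core. As a rough illustration (a Python sketch, not LRSTAR's code), the following builds the canonical LR(1) item sets for the textbook grammar S -> C C, C -> c C | d, then counts how many states survive when same-core states are merged:

```python
# Canonical LR(1) construction for a toy grammar, then same-core merging.
GRAMMAR = {
    "S'": (("S",),),          # augmented start production
    "S":  (("C", "C"),),
    "C":  (("c", "C"), ("d",)),
}

def first(symbols, la):
    # FIRST of `symbols` followed by lookahead `la` (this grammar has no epsilon rules)
    if not symbols:
        return {la}
    if symbols[0] not in GRAMMAR:             # terminal
        return {symbols[0]}
    out = set()
    for rhs in GRAMMAR[symbols[0]]:
        out |= first(rhs, la)
    return out

def closure(items):
    # An item is (lhs, rhs, dot position, lookahead).
    items, work = set(items), list(items)
    while work:
        lhs, rhs, dot, la = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:
            for b in first(rhs[dot + 1:], la):
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0, b)
                    if item not in items:
                        items.add(item)
                        work.append(item)
    return frozenset(items)

def goto(items, X):
    moved = {(l, r, d + 1, a) for (l, r, d, a) in items if d < len(r) and r[d] == X}
    return closure(moved) if moved else None

start = closure({("S'", ("S",), 0, "$")})
states, work = [start], [start]
while work:
    I = work.pop()
    for X in {r[d] for (_, r, d, _) in I if d < len(r)}:
        J = goto(I, X)
        if J is not None and J not in states:
            states.append(J)
            work.append(J)

# Merge states whose cores (items minus lookaheads) are identical.
cores = {frozenset((l, r, d) for (l, r, d, _) in I) for I in states}
print(len(states), "canonical states,", len(cores), "after same-core merging")
```

Even on this tiny grammar the canonical construction produces 10 states against 7 after merging; on a grammar the size of DB2's SQL, that multiplicative blow-up is what makes the canonical tables so ridiculously large.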
Note. For large grammars, 99% of the time was spent compressing the parser tables from the original matrix down to 4 smaller ones, 3 of which can take advantage of graph coloring to reduce their size. The size shown in the chart is the reduced size, which includes all 4 tables.
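The graph-coloring idea behind that compression can be sketched in a few lines (the table data below is invented, and LRSTAR's actual algorithm and layout are more involved): two rows of a sparse table may share storage if they never disagree in a column where both have a real entry, and greedy coloring assigns each logical row to one merged physical row.

```python
# Greedy row merging for a sparse parser table: each "color" is one physical row.
EMPTY = 0
table = [
    [5, 0, 0, 7],
    [0, 3, 0, 7],    # fits over row 0: non-empty entries never clash
    [5, 3, 0, 0],    # also fits over the merged row
    [9, 0, 0, 0],    # clashes with the merged row in column 0 (9 vs 5)
]

def compatible(row, merged_row):
    return all(a == EMPTY or b == EMPTY or a == b
               for a, b in zip(row, merged_row))

colors, merged = [], []          # colors[i] = physical row index of logical row i
for row in table:
    for c, m in enumerate(merged):
        if compatible(row, m):
            colors.append(c)
            merged[c] = [a if a != EMPTY else b for a, b in zip(m, row)]
            break
    else:
        colors.append(len(merged))
        merged.append(list(row))

print(len(table), "logical rows stored in", len(merged), "physical rows")
```

Here 4 logical rows pack into 2 physical rows; on real grammars the tables are much sparser, which is why the reduction is worth 99% of the generation time.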
flex vs DFA vs re2c
DFA generates compressed-matrix, table-driven lexers, which are small, very fast at run time, and very fast to compile. I did a comparison of DFA to flex and re2c a few years ago; here are the results:
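To show what a compressed-matrix, table-driven lexer looks like, here is a minimal sketch (the token set, character classes, and tables are invented for illustration and are not DFA's output): the hot loop is just two table lookups per character, which is why this style of lexer is fast.

```python
# A tiny table-driven DFA lexer for identifiers and integers.
def char_class(ch):
    if ch.isalpha() or ch == "_":
        return 0                  # letter
    if ch.isdigit():
        return 1                  # digit
    return 2                      # anything else

# NEXT[state][class] = next state, or -1 when the current token ends.
NEXT = [
    [1, 2, -1],    # state 0: start
    [1, 1, -1],    # state 1: inside an identifier
    [-1, 2, -1],   # state 2: inside an integer
]
ACCEPT = {1: "IDENT", 2: "INT"}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i] in " \t\n":    # whitespace handled outside the DFA here
            i += 1
            continue
        state, start = 0, i
        while i < len(text) and NEXT[state][char_class(text[i])] >= 0:
            state = NEXT[state][char_class(text[i])]
            i += 1
        if i == start:            # no transition at all: emit the bad character
            tokens.append(("ERROR", text[i]))
            i += 1
        else:
            tokens.append((ACCEPT[state], text[start:i]))
    return tokens

print(tokenize("x1 42"))  # [('IDENT', 'x1'), ('INT', '42')]
```

A generated lexer would use many more states and character classes, and would compress the NEXT matrix with the same row-merging technique used for the parser tables, but the run-time loop has exactly this shape.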
(c) Copyright Paul B Mann 2020. All rights reserved.