need `memo` strategy modifier for `visit` to avoid exponential running times. #2076

Comments
@PaulKlint @tvdstorm @DavyLandman an alternative fix which is much, much simpler: let all the strategies of the current `visit` memoize the `amb` clusters they have already visited, for the duration of that `visit`.
This is implementable almost entirely in the run-time code for tree traversal; no complex changes to be made. WDYT? It would allow a number of the error recovery algorithms to be written easily in short Rascal code, instead of complex Java code with if-then-elses or the BottomUpTreeVisitor or whatever we have in Java. This is what Rascal was made for, after all. I don't want to introduce all this Java code now for error recovery heuristic algorithms, only because of the memoization aspect.
The location in the interpreter is the
This will be slightly different for every traversal strategy, because this is a top-down memo strategy that will also be applied to bottom-up, even though that seemingly contradicts the strategy. Bottom-up should not go down into an already visited cluster either.
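As a rough user-level sketch of that behavior (illustrative names only; plain Rascal code can key the cache only on value equality, whereas the runtime could key it on references): once an `amb` cluster has been handled, the traversal does not descend into it again.

```rascal
import ParseTree;

// Per-traversal record of amb clusters that have already been handled.
set[Tree] seen = {};

// Walk a parse tree, entering each amb cluster at most once.
void visitOnce(Tree t) {
  switch (t) {
    case amb(set[Tree] alts): {
      if (t in seen) {
        return;            // cluster handled before: do not go down again
      }
      seen += {t};
      for (a <- alts) {
        visitOnce(a);      // descend into the alternatives exactly once
      }
    }
    case appl(_, list[Tree] args): {
      for (a <- args) {
        visitOnce(a);
      }
    }
    default:
      return;              // char and cycle nodes have nothing below them
  }
}
```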
I like the idea; my only concern is whether it would surprise people who use visit actions with side-effects (some kind of counter, for example). I agree that for parse trees it would be harder to hit cases where the same tree is not the same thing, but how about different data types, like some kind of AST? Currently we also have configurations for memo, since we found that a user might want to tune the caching to reduce the memory pressure. Or do we only want to have memo in the scope of the current visit?
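For concreteness, the kind of side-effecting visit this concern is about could look like this (an illustrative sketch): today every occurrence of an `amb` cluster in the traversal is counted, while with per-visit memoization a shared cluster would be entered, and counted, only once.

```rascal
import ParseTree;

// Count amb clusters by side effect; the result depends on how often the
// traversal enters each (possibly shared) cluster.
int countAmbOccurrences(Tree t) {
  int n = 0;
  visit (t) {
    case amb(_): n += 1;
  }
  return n;
}
```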
And, although I originally added the
Yes, definitely only for the scope of the visit, if we are talking about visit strategies.
The point is that such algorithms would always be highly exponential if they do not memoize, so people won't get an answer for that count anyway within a day or so. So I claim those people do not exist, otherwise we would have had them in our inboxes.
For parse tree `amb` nodes we can use reference equality. If we want to memoize just like functions do on other data-types, all we have is hashCode and equals, otherwise it breaks the notion of value semantics of Rascal. I still believe that this would be a useful feature for
It's very important that we only memoize amb clusters and not all the appl nodes in between. The reason is that only amb clusters are reference-shared in reality, and almost never appl nodes. When appl nodes just come out of the parser they are unique by their .src origin, their non-terminal (production rule even) and their constituent children, by induction. After rewriting and matching/substitution it can of course become a feast of reference sharing inside of a tree by duplication, but trees can also become accidentally equal by reduction. In this case memoization makes less and less sense. For starters we do not have a fast enough implementation of equality modulo the
Ok, I was confused by reading the original proposal and then the subsequent one. I missed the important detail that the second proposal was limited to amb nodes. Sorry for the confusion. Most of my comments are caused by this confusion.
Ah right, sorry; I missed that too. It is a subtle detail indeed, especially if we also talk about reference equality rather than structural equality for the memo map.
Is your feature request related to a problem? Please describe.
If I write a recursive function over `Tree`, with case distinction over `appl`, `amb`, `char` and `cycle`, then I must use `@memo` to avoid exponential behavior in the presence of nested ambiguity. I.e. for `syntax E = E + E`, an input like `1 + 1 + 1 + 1 + 1` uses time with a factor of $2^3$ without `@memo`, and in general it is in $O(2^{n - 1})$ with $n$ the number of operators. With `@memo` we can remain in $O(n^3)$; a dramatic improvement.

However, using a `visit` saves more than half of the code for these kinds of functions. The recursion is very useful and very repetitive, exactly what `visit` is for. Since `visit` does not have an `@memo` modifier, it will behave exponentially.
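For illustration, such a function could look like the following sketch (`countDerivations` is only an example, not code from an existing project); it counts how many derivations a possibly ambiguous parse tree represents.

```rascal
import ParseTree;

// Sum over the alternatives of an amb cluster, product over the children of
// an appl node; char and cycle nodes contribute a single derivation.
@memo
int countDerivations(Tree t) {
  switch (t) {
    case amb(set[Tree] alts):
      return (0 | it + countDerivations(a) | a <- alts);
    case appl(_, list[Tree] args):
      return (1 | it * countDerivations(a) | a <- args);
    default:
      return 1;
  }
}
```

Without the `@memo`, each shared `amb` cluster is recomputed for every context in which it occurs, which is where the exponential factor comes from.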
Describe the solution you'd like

Extending the `Strategy` modifier for `visit` with an option to `memo` every currently existing strategy. For example:
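A sketch only; the exact keyword and its placement are open, and `collapseSingletonAmbs` is merely an illustrative function:

```rascal
import ParseTree;

// Hypothetical surface syntax: the proposed memo modifier goes in front of
// any of the existing strategies; its table lives only as long as this visit.
Tree collapseSingletonAmbs(Tree t) {
  return memo bottom-up visit (t) {
    case amb({Tree a}) => a
  };
}
```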
Describe alternatives you've considered

Could also use tags instead:
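Again only a sketch, tagging the visit itself rather than modifying its strategy:

```rascal
import ParseTree;

// Hypothetical alternative: the tag carries the memoization request.
Tree collapseSingletonAmbs(Tree t) {
  return @memo bottom-up visit (t) {
    case amb({Tree a}) => a
  };
}
```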
But that would be the first expression/statement kind that may have tags, and I don't see a general applicability there yet.
Additional context
After this, it becomes weird that functions are tagged with `@memo` while `visit` is modified with `memo`. Since `memo` is a builtin function modifier, we may choose to upgrade it from a tag to a modifier, just like `java` and `public`. `@memo` has configuration bells and whistles for cache eviction policies; if we consider adding `memo` to `visit` we have to consider mapping these policies as well. Perhaps it is unnecessary, but it's best to remain consistent and predictable.