The code in this directory is experimental and not yet ready for public consumption. This file briefly describes its design and intended use.
Wolf analysis is intended as a broader, more extensible version of flow analysis for use in the analyzer package and the linter. If it proves useful, we may expand it to be usable by clients of the analyzer such as code generators.
The reason for making a new type of analysis, rather than extending flow analysis, is that the design of flow analysis is heavily constrained in ways that make it expensive to modify, maintain, and extend:
It has to operate in parallel with type inference. This means that it can only make a single pass through the source code, and any time it “looks ahead”, the code it is looking at doesn't yet have types. This forces it to make some conservative assumptions when analyzing loops and closures. For example, on entry to a loop, flow analysis knows which variables are written inside the loop body, but it doesn't know the types of the values that are written to those variables, so it has to discard type promotions for all of them. If later analysis shows that the values that were written are compatible with the promoted type, it's too late to go back and update the analysis.
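For example, the following is ordinary Dart code (not part of this package) that illustrates the loop limitation: even though the only value assigned to `x` inside the loop is a non-nullable `int`, the promotion of `x` is discarded at the top of the loop, so a null check is required.

```dart
void f(int? x) {
  if (x == null) return;
  // `x` is promoted to non-nullable `int` here.
  for (var i = 0; i < 3; i++) {
    // On entry to the loop, the promotion was discarded because `x` is
    // assigned below, even though the assigned value is an `int`; so a
    // null check (`!`) is needed to treat `x` as an `int` again.
    print(x!.isEven);
    x = i;
  }
}

void main() => f(2);
```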
Flow analysis can't be improved without bumping the language version; otherwise, changes to its behavior would break existing programs.
All of its analyses must be 100% sound, because they are relied on by later compilation stages. This severely limits its ability to account for human intent, which we would sometimes like to do in lints.
It has to be shared between the analyzer and the common front end, which means that all of the data structures it acts on must be abstract.
Additionally, flow analysis doesn't understand special behavioral guarantees of methods and types declared outside the SDK, and it never looks outside of a single method body.
The design consists of multiple analysis stages:
AST-to-IR. The analyzer's AST is converted to a stack-based IR (intermediate representation). The proposed IR and the approach for converting to it are described in https://flutter.dev/go/dart-static-analysis-ir.
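To give a rough sense of what a stack-based IR looks like, here is a hypothetical lowering of the expression `a * b + c`. The `Opcode` and `Instruction` names below are invented for this sketch; the real instruction set is the one specified in the document linked above.

```dart
// Toy instruction set, invented for this sketch.
enum Opcode { readLocal, mul, add, ret }

class Instruction {
  final Opcode opcode;
  final int? operand; // e.g. a local variable index
  const Instruction(this.opcode, [this.operand]);
  @override
  String toString() =>
      operand == null ? opcode.name : '${opcode.name} $operand';
}

/// One possible lowering of `a * b + c`, where `a`, `b`, and `c` are locals
/// 0, 1, and 2. Each instruction pushes its result onto an operand stack;
/// `mul` and `add` pop two operands and push one result.
const aTimesBPlusC = [
  Instruction(Opcode.readLocal, 0), // push a
  Instruction(Opcode.readLocal, 1), // push b
  Instruction(Opcode.mul), //          push a * b
  Instruction(Opcode.readLocal, 2), // push c
  Instruction(Opcode.add), //          push (a * b) + c
  Instruction(Opcode.ret), //          return the top of the stack
];

void main() => aTimesBPlusC.forEach(print);
```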
Scope analysis. A pass is made through the IR, identifying all the state variables of interest. This includes local variables as well as any other pieces of state that are needed for whatever lints are enabled. This stage records the set of state variables that could potentially be changed inside each loop, block, closure, etc., for use in later analysis stages.
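Schematically, the per-scope information might be summarized by something like the following sketch (the names and shapes here are invented for illustration; the real data structures may differ):

```dart
/// Illustrative only: a summary of one lexical region (loop body, block,
/// closure, etc.) produced by scope analysis.
class ScopeSummary {
  /// The range of IR instructions covered by the scope.
  final int firstInstruction;
  final int lastInstruction;

  /// The state variables (locals plus any other tracked state) that might
  /// be changed somewhere inside the scope, identified by index.
  final Set<int> possiblyChangedVariables;

  ScopeSummary(this.firstInstruction, this.lastInstruction,
      this.possiblyChangedVariables);
}

void main() {
  // A loop body spanning instructions 4..17 that might write variables 0 and 2.
  final loopBody = ScopeSummary(4, 17, {0, 2});
  print(loopBody.possiblyChangedVariables); // {0, 2}
}
```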
SSA (static single assignment) analysis. A pass is made through the IR to assign SSA node ids to each state variable and stack value. Queries are generated at points that are of interest to whatever lints are enabled (e.g. a lint that verifies that certain kinds of values are never null will generate queries that ask “is the value represented by this SSA node null?”).
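Again schematically, with invented names: an SSA node stands for a single definition of a state variable or stack value, and a query ties a lint's question to one of those nodes.

```dart
/// Illustrative only: an SSA node represents one definition of a state
/// variable or stack value at a particular point in the IR.
class SsaNode {
  final int id;
  SsaNode(this.id);
  @override
  String toString() => 'ssa#$id';
}

/// Illustrative only: a query generated on behalf of a lint, asking whether
/// the value represented by an SSA node can be null.
class NullabilityQuery {
  final SsaNode node;

  /// The IR address at which the lint needs an answer.
  final int address;

  NullabilityQuery(this.node, this.address);
  @override
  String toString() => 'canBeNull($node) at $address?';
}

void main() {
  print(NullabilityQuery(SsaNode(7), 42)); // canBeNull(ssa#7) at 42?
}
```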
Query solver. Each query generated by SSA analysis is resolved by working backwards through the IR, rewriting the query in terms of previous SSA nodes, until a point in the code is reached where the query either can or can't be definitively answered. The mechanism for rewriting queries is inspired by Prolog. Depending on the result of the query, a lint output might be generated.
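The following toy sketch (invented names; the real solver works directly over the IR and handles much more than copies and literals) shows the general flavor of backwards rewriting: a nullability query about one SSA node is repeatedly restated in terms of the node it was derived from, until a definition is reached that settles the answer, or the solver has to give up.

```dart
/// Toy description of how an SSA node got its value.
enum DefinitionKind { copyOf, nullLiteral, nonNullLiteral, unknown }

class Definition {
  final DefinitionKind kind;
  final int? source; // for `copyOf`: the id of the SSA node that was copied
  const Definition(this.kind, [this.source]);
}

/// Answers "can the value of SSA node [id] be null?" by walking backwards
/// through the definitions, rewriting the query in terms of earlier nodes.
/// Returns `null` when the question can't be definitively answered.
bool? canBeNull(int id, Map<int, Definition> definitions) {
  while (true) {
    final definition = definitions[id];
    if (definition == null || definition.kind == DefinitionKind.unknown) {
      return null; // no definitive answer
    }
    if (definition.kind == DefinitionKind.nullLiteral) return true;
    if (definition.kind == DefinitionKind.nonNullLiteral) return false;
    // `copyOf`: rewrite the query in terms of the earlier SSA node and
    // keep working backwards.
    id = definition.source!;
  }
}

void main() {
  final definitions = {
    3: const Definition(DefinitionKind.copyOf, 2),
    2: const Definition(DefinitionKind.copyOf, 1),
    1: const Definition(DefinitionKind.nonNullLiteral),
  };
  print(canBeNull(3, definitions)); // false
}
```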
Other analysis stages might be added in the future. For example, if we find that some lints can't be easily expressed in terms of queries, but can still benefit from the SSA representation, we may add other analysis stages after SSA analysis, to support those lints.
In addition to the pieces above, there is a simple interpreter that executes the IR. The interpreter is not intended for production use; it is just sophisticated enough to serve as the basis for unit testing the other stages. For example, we use it in the unit tests of the AST-to-IR conversion stage, to make sure that the output of this stage properly reflects Dart semantics. (If, instead, our unit tests simply compared the output of AST-to-IR conversion to an expected sequence of instructions, the unit tests would be much more brittle, and there would be a much greater risk of human error in reviewing them.)
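As a sketch of this testing idea, re-using the toy instruction set from the AST-to-IR sketch above (the interpreter below is likewise invented for illustration): the test executes the generated instructions and checks that the result agrees with what the original Dart expression evaluates to, rather than comparing the instructions against a golden listing.

```dart
enum Opcode { readLocal, mul, add, ret }

class Instruction {
  final Opcode opcode;
  final int? operand;
  const Instruction(this.opcode, [this.operand]);
}

/// Toy interpreter for the toy instruction set above.
int interpret(List<Instruction> code, List<int> locals) {
  final stack = <int>[];
  for (final instruction in code) {
    switch (instruction.opcode) {
      case Opcode.readLocal:
        stack.add(locals[instruction.operand!]);
      case Opcode.mul:
        stack.add(stack.removeLast() * stack.removeLast());
      case Opcode.add:
        stack.add(stack.removeLast() + stack.removeLast());
      case Opcode.ret:
        return stack.removeLast();
    }
  }
  throw StateError('missing ret instruction');
}

void main() {
  // Instructions that a lowering of `a * b + c` might produce.
  const code = [
    Instruction(Opcode.readLocal, 0),
    Instruction(Opcode.readLocal, 1),
    Instruction(Opcode.mul),
    Instruction(Opcode.readLocal, 2),
    Instruction(Opcode.add),
    Instruction(Opcode.ret),
  ];
  const a = 2, b = 3, c = 4;
  // Check behavior against Dart semantics instead of a golden instruction
  // listing.
  print(interpret(code, [a, b, c]) == a * b + c); // true
}
```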