!!! WARNING This document is a work in progress. Please contact Vyacheslav Egorov (by mail or @mraleph) if you have any questions, suggestions or bug reports. Last update: January 29, 2020.
!!! sourcecode “Purpose of this document” This document is intended as a reference for new members of the Dart VM team, potential external contributors, or just anybody interested in VM internals. It starts with a high-level overview of the Dart VM and then proceeds to describe various components of the VM in more detail.
Dart VM is a collection of components for executing Dart code natively. Notably it includes the following:

- Runtime System
  - Object Model
  - Garbage Collection
  - Snapshots
- Core libraries native methods
- Development Experience components accessible via service protocol
  - Debugging
  - Profiling
  - Hot-reload
- Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation pipelines
- Interpreter
- ARM simulators
The name “Dart VM” is historical. Dart VM is a virtual machine in the sense that it provides an execution environment for a high-level programming language; however this does not imply that Dart is always interpreted or JIT-compiled when executing on the Dart VM. For example, Dart code can be compiled into machine code using the Dart VM AOT pipeline and then executed within a stripped version of the Dart VM, called *precompiled runtime*, which does not contain any compiler components and is incapable of loading Dart source code dynamically.
Dart VM has multiple ways to execute the code, for example:

- from source or Kernel binary using JIT;
- from snapshots:
  - from an AppJIT snapshot;
  - from an AppAOT snapshot.
However, the main difference between these lies in when and how the VM converts Dart source code to executable code. The runtime environment that facilitates the execution remains the same.
Any Dart code within the VM is running within some isolate, which can be best described as an isolated Dart universe with its own memory (heap) and usually with its own thread of control (mutator thread). There can be many isolates executing Dart code concurrently, but they cannot share any state directly and can only communicate by message passing through ports (not to be confused with network ports!).
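For illustration, here is a minimal sketch of two isolates communicating through ports using the `dart:isolate` API (the function names are illustrative):

```dart
import 'dart:isolate';

// Entry point for the child isolate: it shares no mutable state
// with the parent and can only communicate through ports.
void childMain(SendPort replyTo) {
  replyTo.send('Hello from another isolate!');
}

Future<void> main() async {
  final receivePort = ReceivePort();

  // Spawn a new isolate with its own heap (and usually its own
  // mutator thread) and hand it a SendPort to talk back through.
  await Isolate.spawn(childMain, receivePort.sendPort);

  // Messages are copied between isolate heaps and arrive
  // asynchronously through the ReceivePort.
  print(await receivePort.first); // => Hello from another isolate!
}
```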
The relationship between OS threads and isolates is a bit blurry and highly dependent on how the VM is embedded into an application. Only the following is guaranteed:

- an OS thread can enter only one isolate at a time. It has to leave the current isolate if it wants to enter another isolate;
- there can only be a single *mutator* thread associated with an isolate at a time. The mutator thread is the thread that executes Dart code and uses the VM's public C API.

However, the same OS thread can first enter one isolate, execute Dart code, then leave this isolate and enter another isolate. Alternatively, many different OS threads can enter an isolate and execute Dart code inside it, just not simultaneously.
In addition to a single mutator thread, an isolate can also be associated with multiple helper threads, for example:

- a background JIT compiler thread;
- GC sweeper threads;
- concurrent GC marker threads.
Internally the VM uses a thread pool (@{dart::ThreadPool}) to manage OS threads, and the code is structured around the @{dart::ThreadPool::Task} concept rather than around the concept of an OS thread. For example, instead of spawning a dedicated thread to perform background sweeping after a GC, the VM posts a @{dart::ConcurrentSweeperTask} to the global VM thread pool, and the thread pool implementation either selects an idling thread or spawns a new thread if no threads are available. Similarly, the default implementation of an event loop for isolate message processing does not actually spawn a dedicated event loop thread; instead it posts a @{dart::MessageHandlerTask} to the thread pool whenever a new message arrives.
!!! sourcecode “Source to read” Class @{dart::Isolate} represents an isolate, class @{dart::Heap} - the isolate's heap. Class @{dart::Thread} describes the state associated with a thread attached to an isolate. Note that the name `Thread` is somewhat confusing, because all OS threads attached to the same isolate as a mutator would reuse the same `Thread` instance. See @{Dart_RunLoop} and @{dart::MessageHandler} for the default implementation of an isolate's message handling.
This section covers what happens when you try to execute Dart from the command line:
```dart
// hello.dart
main() => print('Hello, World!');
```
```custom-shell-session
$ dart hello.dart
Hello, World!
```
Since Dart 2, the VM no longer has the ability to directly execute Dart from raw source; instead, the VM expects to be given Kernel binaries (also called dill files) which contain serialized Kernel ASTs. The task of translating Dart source into Kernel AST is handled by the common front-end (CFE), written in Dart and shared between different Dart tools (e.g. the VM, dart2js, Dart Dev Compiler).
To preserve the convenience of executing Dart directly from source, the standalone `dart` executable hosts a helper isolate called *kernel service*, which handles compilation of Dart source into Kernel. The VM then runs the resulting Kernel binary.
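For example, from the user's perspective both invocations below behave the same; the first goes through the kernel service, while the second feeds the VM a Kernel binary directly (a `hello.dill` produced ahead of time, e.g. as shown in the *Trying it* section below):

```custom-shell-session
# Dart source: compiled to Kernel by the kernel service isolate,
# then the resulting Kernel binary is executed by the VM.
$ dart hello.dart
Hello, World!

# Kernel binary: loaded and executed directly, no parsing of Dart.
$ dart hello.dill
Hello, World!
```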
However this setup is not the only way to arrange CFE and VM to run Dart code. For example, Flutter completely separates compilation to Kernel and execution from Kernel by putting them onto different devices: compilation happens on the developer machine (*host*) and execution is handled on the target mobile device, which receives Kernel binaries sent to it by the `flutter` tool.
Note that the `flutter` tool does not handle parsing of Dart itself. Instead it spawns another persistent process, `frontend_server`, which is essentially a thin wrapper around CFE and some Flutter specific Kernel-to-Kernel transformations. `frontend_server` compiles Dart source into Kernel files, which the `flutter` tool then sends to the device. Persistence of the `frontend_server` process comes into play when the developer requests hot reload: in this case `frontend_server` can reuse CFE state from the previous compilation and recompile just the libraries which actually changed.
Once a Kernel binary is loaded into the VM, it is parsed to create objects representing various program entities. However this is done lazily: at first only basic information about libraries and classes is loaded. Each entity originating from a Kernel binary keeps a pointer back to the binary, so that more information can be loaded later as needed.
Information about a class is fully deserialized only when the runtime later needs it (e.g. to look up a class member, to allocate an instance, etc.). At this stage class members are read from the Kernel binary; however full function bodies are not deserialized yet, only their signatures.
At this point enough information is loaded from the Kernel binary for the runtime to successfully resolve and invoke methods. For example, it could resolve and invoke the `main` function from a library.
!!! sourcecode “Source to read” @{package:kernel/ast.dart} defines classes describing the Kernel AST. @{package:front_end} handles parsing Dart source and building Kernel AST from it. @{dart::kernel::KernelLoader::LoadEntireProgram} is an entry point for deserialization of Kernel AST into corresponding VM objects. @{pkg/vm/bin/kernel_service.dart} implements the Kernel Service isolate, @{runtime/vm/kernel_isolate.cc} glues the Dart implementation to the rest of the VM. @{package:vm} hosts most of the Kernel based VM specific functionality, e.g. various Kernel-to-Kernel transformations. However some VM specific transformations still live in @{package:kernel} for historical reasons. A good example of a complicated transformation is @{package:kernel/transformations/continuation.dart}, which desugars `async`, `async*` and `sync*` functions.
!!! tryit “Trying it” If you are interested in the Kernel format and its VM specific usage, then you can use @{pkg/vm/bin/gen_kernel.dart} to produce a Kernel binary file from Dart source. The resulting binary can then be dumped using @{pkg/vm/bin/dump_kernel.dart}.
```custom-shell-session
# Take hello.dart and compile it to hello.dill Kernel binary using CFE.
$ dart pkg/vm/bin/gen_kernel.dart \
    --platform out/ReleaseX64/vm_platform_strong.dill \
    -o hello.dill \
    hello.dart

# Dump textual representation of Kernel AST.
$ dart pkg/vm/bin/dump_kernel.dart hello.dill hello.kernel.txt
```

When you try using `gen_kernel.dart` you will notice that it requires something called *platform*, a Kernel binary containing AST for all core libraries (`dart:core`, `dart:async`, etc). If you have a Dart SDK build configured, then you can just use the platform file from the `out` directory, e.g. `out/ReleaseX64/vm_platform_strong.dill`. Alternatively you can use @{pkg/front_end/tool/_fasta/compile_platform.dart} to generate the platform:

```custom-shell-session
# Produce outline and platform files using the given libraries list.
$ dart pkg/front_end/tool/_fasta/compile_platform.dart \
    dart:core \
    sdk/lib/libraries.json \
    vm_outline.dill vm_platform.dill vm_outline.dill
```
Initially all functions have a placeholder instead of actual executable code for their bodies: they point to the `LazyCompileStub`, which simply asks the runtime system to generate executable code for the current function and then tail-calls this newly generated code.
When a function is compiled for the first time, this is done by the *unoptimizing compiler*.
The unoptimizing compiler produces machine code in two passes:

1. The serialized AST for the function's body is walked to generate a control flow graph (CFG) of the function body. The CFG consists of basic blocks filled with intermediate language (IL) instructions. IL instructions used at this stage resemble instructions of a stack based virtual machine: they take operands from the stack, perform operations and then push results onto the same stack.
2. The resulting CFG is compiled directly to machine code using one-to-many lowering of IL instructions: each IL instruction expands to several machine language instructions.

There are no optimizations performed at this stage. The main goal of the unoptimizing compiler is to produce executable code quickly.
This also means that the unoptimizing compiler does not attempt to statically resolve any calls that were not resolved in the Kernel binary, so calls (`MethodInvocation` or `PropertyGet` AST nodes) are compiled as if they were completely dynamic. The VM currently does not use any form of virtual table or interface table based dispatch and instead implements dynamic calls using *inline caching*.
The core idea behind inline caching is to cache the results of method resolution in a call site specific cache. The inline caching mechanism used by the VM consists of:

- a call site specific cache (an `ICData` object) that maps a receiver's class to a method which should be invoked if the receiver is of a matching class. The cache also stores some auxiliary information, e.g. invocation frequency counters, which track how often the given class was seen at this call site;
- a shared lookup stub, which implements the fast path of method invocation. This stub searches through the given cache to check whether it contains an entry matching the receiver's class. If an entry is found, the stub increments its frequency counter and tail-calls the cached method. Otherwise the stub invokes a runtime system helper which implements the method resolution logic. If method resolution succeeds, the cache is updated, and subsequent invocations do not need to enter the runtime system.
The picture below illustrates the structure and the state of an inline cache associated with the `animal.toFace()` call site, which was executed twice with an instance of `Dog` and once with an instance of `Cat`.
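For illustration, here is a highly simplified Dart model of that cache state. In the real VM the cache is an `ICData` object and the lookup happens in a shared machine-code stub, not in Dart; the class ids and target names below are made up:

```dart
// Simplified model of the inline cache for the animal.toFace()
// call site after two calls with a Dog and one call with a Cat.
class ICEntry {
  final int classId; // id of the receiver's class
  final String target; // cached target method (just a name here)
  int count; // how often this class was seen at this call site
  ICEntry(this.classId, this.target, this.count);
}

final icData = [
  ICEntry(/* hypothetical cid of Dog */ 100, 'Dog.toFace', 2),
  ICEntry(/* hypothetical cid of Cat */ 101, 'Cat.toFace', 1),
];

String lookup(int receiverClassId) {
  for (final entry in icData) {
    if (entry.classId == receiverClassId) {
      entry.count++; // fast path: hit - tail-call the cached target
      return entry.target;
    }
  }
  // Slow path: the stub calls into the runtime, which resolves the
  // method and adds a new entry to the cache for future calls.
  return 'runtime resolution';
}

void main() {
  print(lookup(100)); // => Dog.toFace (counter becomes 3)
  print(lookup(999)); // => runtime resolution (cache miss)
}
```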
The unoptimizing compiler by itself is enough to execute any possible Dart code. However, the code it produces is rather slow, which is why the VM also implements an *adaptive optimizing* compilation pipeline. The idea behind adaptive optimization is to use the execution profile of a running program to drive optimization decisions.
As unoptimized code is running, it collects the following information:

- inline caches collect information about receiver types observed at call sites;
- execution counters associated with functions track how hot a given function is.
When an execution counter associated with a function reaches a certain threshold, the function is submitted to a background optimizing compiler for optimization.
Optimizing compilation starts in the same way as unoptimizing compilation does: by walking the serialized Kernel AST to build unoptimized IL for the function being optimized. However, instead of directly lowering that IL into machine code, the optimizing compiler proceeds to translate the unoptimized IL into static single assignment (SSA) form based optimized IL. The SSA based IL is then subjected to speculative specialization based on the collected type feedback and passed through a sequence of classical and Dart specific optimizations: e.g. inlining, range analysis, type propagation, representation selection, store-to-load and load-to-load forwarding, global value numbering, allocation sinking, etc. At the end, the optimized IL is lowered into machine code using a linear scan register allocator and a simple one-to-many lowering of IL instructions.
Once compilation is complete, the background compiler requests the mutator thread to enter a safepoint and attaches the optimized code to the function.
The next time this function is called, it will use the optimized code. Some functions contain very long running loops, and for those it makes sense to switch execution from unoptimized to optimized code while the function is still running. This process is called *on stack replacement* (OSR), owing its name to the fact that a stack frame for one version of the function is transparently replaced with a stack frame for another version of the same function.
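As an illustration, consider a sketch like the one below (the function and the constant are made up): a single call to `sumUpTo` spends all of its time inside one loop, so without OSR the optimized code would only benefit a subsequent call. Running the VM with the `--trace-osr` flag prints information about OSR attempts.

```dart
// A single invocation of this function runs for a long time in one
// loop. Waiting for the *next* invocation to pick up optimized code
// would waste the whole run, so the VM can optimize the loop and
// swap the running unoptimized frame for an optimized one via OSR.
int sumUpTo(int n) {
  var sum = 0;
  for (var i = 0; i < n; i++) {
    sum += i;
  }
  return sum;
}

void main() {
  // Hot loop executed within a single call - a candidate for OSR.
  print(sumUpTo(100000000));
}
```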
!!! sourcecode “Source to read” Compiler sources are in the @{runtime/vm/compiler} directory. Compilation pipeline entry point is @{dart::CompileParsedFunctionHelper::Compile}. IL is defined in @{runtime/vm/compiler/backend/il.h}. Kernel-to-IL translation starts in @{dart::kernel::StreamingFlowGraphBuilder::BuildGraph}, and this function also handles construction of IL for various artificial functions. @{dart::compiler::StubCodeCompiler::GenerateNArgsCheckInlineCacheStub} generates machine code for inline-cache stub, while @{InlineCacheMissHandler} handles IC misses. @{runtime/vm/compiler/compiler_pass.cc} defines optimizing compiler passes and their order. @{dart::JitCallSpecializer} does most of the type-feedback based specializations.
!!! tryit “Trying it” The VM also has flags which can be used to control the JIT and to make it dump the IL and generated machine code for functions being compiled by the JIT.
| Flag | Description |
| ---- | ---- |
| `--print-flow-graph[-optimized]` | Print IL for all (or only optimized) compilations |
| `--disassemble[-optimized]` | Disassemble all (or only optimized) compiled functions |
| `--print-flow-graph-filter=xyz,abc,...` | Restrict output triggered by previous flags only to the functions which contain one of the comma separated substrings in their names |
| `--compiler-passes=...` | Fine control over compiler passes: force IL to be printed before/after a certain pass. Disable passes by name. Pass `help` for more information |
| `--no-background-compilation` | Disable background compilation, and compile all hot functions on the main thread. Useful for experimentation, otherwise short running programs might finish before background compiler compiles hot function |

For example

```myshell
# Run test.dart and dump optimized IL and machine code for
# function(s) that contain(s) "myFunction" in its name.
# Disable background compilation for determinism.
$ dart --print-flow-graph-optimized \
       --disassemble-optimized \
       --print-flow-graph-filter=myFunction \
       --no-background-compilation \
       test.dart
```
It is important to highlight that the code generated by the optimizing compiler is specialized under speculative assumptions based on the execution profile of the application. For example, a dynamic call site that only observed instances of a single class `C` as a receiver will be converted into a direct call preceded by a check verifying that the receiver has the expected class `C`. However these assumptions might be violated later during execution of the program:
```dart
class Cat {}
class Dog {}

void printAnimal(obj) {
  print('Animal {');
  print('  ${obj.toString()}');
  print('}');
}

void main() {
  // Call printAnimal(...) a lot of times with an instance of Cat.
  // As a result printAnimal(...) will be optimized under the
  // assumption that obj is always a Cat.
  for (var i = 0; i < 50000; i++) printAnimal(Cat());

  // Now call printAnimal(...) with a Dog - optimized version
  // can not handle such an object, because it was
  // compiled under assumption that obj is always a Cat.
  // This leads to deoptimization.
  printAnimal(Dog());
}
```
Whenever optimized code makes optimistic assumptions which might be violated during execution, it needs to guard against such violations and be able to recover if they occur.

This process of recovery is known as *deoptimization*: whenever the optimized version hits a case it can't handle, it simply transfers execution into the matching point of the unoptimized version of the function and continues execution there. The unoptimized version of a function does not make any assumptions and can handle all possible inputs.

The VM usually discards the optimized version of the function after deoptimization and then reoptimizes it again later, using updated type feedback.
There are two ways the VM guards speculative assumptions made by the compiler:

- Inline checks (e.g. `CheckSmi`, `CheckClass` IL instructions) that verify whether an assumption holds at the use site where the compiler made this assumption. For example, when turning dynamic calls into direct calls the compiler adds these checks right before a direct call. Deoptimization that happens on such checks is called *eager deoptimization*, because it occurs eagerly as the check is reached.
- Global guards which instruct the runtime to discard optimized code when something it depends on changes. For example, the optimizing compiler might observe that a class `C` is never extended and use this information during the type propagation pass. However subsequent dynamic code loading or class finalization can introduce a subclass of `C`, which invalidates the assumption. At this point the runtime needs to find and discard all optimized code that was compiled under the assumption that `C` has no subclasses. It is possible that the runtime would find some of the now invalid optimized code on the execution stack, in which case affected frames are marked for deoptimization and will deoptimize when execution returns to them. This sort of deoptimization is called *lazy deoptimization*, because it is delayed until control returns back to the optimized code.

!!! sourcecode “Source to read” Deoptimizer machinery resides in @{runtime/vm/deopt_instructions.cc}. It is essentially a mini-interpreter for deoptimization instructions which describe how to reconstruct the needed state of the unoptimized code from the state of optimized code. Deoptimization instructions are generated by @{dart::CompilerDeoptInfo::CreateDeoptInfo} for every potential deoptimization location in optimized code during compilation.
!!! tryit “Trying it” The flag `--trace-deoptimization` makes the VM print information about the cause and location of every deoptimization that occurs, while `--trace-deoptimization-verbose` makes the VM print a line for every deoptimization instruction it executes during deoptimization.
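For example, running the `printAnimal` example from the previous section with tracing enabled (assuming it is saved as `test.dart`):

```custom-shell-session
# Prints the cause and location of the deoptimization triggered
# by the printAnimal(Dog()) call.
$ dart --trace-deoptimization test.dart
```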
The VM has the ability to serialize an isolate's heap, or more precisely the object graph residing in the heap, into a binary *snapshot*. A snapshot can then be used to recreate the same state when starting VM isolates.
The snapshot format is low level and optimized for fast startup: it is essentially a list of objects to create and instructions on how to connect them together. That was the original idea behind snapshots: instead of parsing Dart source and gradually creating internal VM data structures, the VM can just spin an isolate up with all the necessary data structures quickly unpacked from the snapshot.
Initially snapshots did not include machine code; this capability was later added when the AOT compiler was developed. The motivation for developing the AOT compiler and snapshots-with-code was to allow the VM to be used on platforms where JITing is impossible due to platform level restrictions.
Snapshots-with-code work almost the same way as normal snapshots, with a minor difference: they include a code section which, unlike the rest of the snapshot, does not require deserialization. This code section is laid out in a way that allows it to directly become part of the heap after it is mapped into memory.
!!! sourcecode “Source to read” @{runtime/vm/app_snapshot.cc} handles serialization and deserialization of snapshots. A family of API functions `Dart_CreateXyzSnapshot[AsAssembly]` are responsible for writing out snapshots of the heap (e.g. @{Dart_CreateAppJITSnapshotAsBlobs} and @{Dart_CreateAppAOTSnapshotAsAssembly}). On the other hand, @{Dart_CreateIsolateGroup} optionally takes snapshot data to start an isolate from.
AppJIT snapshots were introduced to reduce JIT warm-up time for large Dart applications like `dartanalyzer` or `dart2js`. When these tools are used on small projects, they spend as much time doing actual work as the VM spends JIT compiling them.
AppJIT snapshots address this problem: an application is run on the VM with some mock training data, and then all generated code and VM internal data structures are serialized into an AppJIT snapshot. This snapshot can then be distributed instead of distributing the application in source (or Kernel binary) form. A VM starting from this snapshot can still JIT, if it turns out that the execution profile on real data does not match the execution profile observed during training.
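A sketch of this workflow with the standalone VM (here `app.dart` is the application, and `train.data` and `real.data` are illustrative arguments; the `--snapshot-kind=app-jit` spelling is the one used by Dart 2 SDKs):

```custom-shell-session
# Training run: execute the application on representative inputs
# and serialize the warmed up code and VM data structures.
$ dart --snapshot-kind=app-jit --snapshot=app.jit app.dart train.data

# Deployment: start directly from the snapshot. Code generated
# during training is available immediately, but the VM can still
# JIT if the real workload behaves differently.
$ dart app.jit real.data
```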