[vm/compiler] Support materializing unboxed variables when entering catch.

Previously we tried to rely on the assumption that all variables would be
boxed, so the machinery for setting up the correct catch-entry state only
supported tagged values and constants. However, this both leads to worse code
and is not an entirely correct assumption.

This also:

- renames various confusingly named classes: we move away from talking
about "catch entry state" to "catch entry moves", because we only
record a subset of the moves that need to be performed, which does
not describe the whole state;
- refactors a bunch of associated code to be more readable and maintainable;
- adds documentation about the implementation of catch blocks in optimized
code to runtime/docs/compiler.

Fixes https://github.com/flutter/flutter/issues/21685.

Change-Id: I03ae361a1bb7710acbd9f661ae014e663a163c59
Reviewed-on: https://dart-review.googlesource.com/74860
Commit-Queue: Vyacheslav Egorov <vegorov@google.com>
Reviewed-by: Martin Kustermann <kustermann@google.com>
Reviewed-by: Alexander Markov <alexmarkov@google.com>
diff --git a/runtime/docs/compiler/exceptions.md b/runtime/docs/compiler/exceptions.md
new file mode 100644
index 0000000..222e5f8
--- /dev/null
+++ b/runtime/docs/compiler/exceptions.md
@@ -0,0 +1,92 @@
+# Exceptions Implementation
+
+This page describes how the throwing and catching of exceptions is implemented
+in the VM.
+
+## Intermediate Language
+
+Dart VM's IL **does not** explicitly represent exceptional control flow in its
+flow graph: there are **no** explicit exceptional edges connecting potentially
+throwing instructions (e.g. calls) with corresponding catch blocks. Instead this
+connection is defined at the block level: all exceptions that occur in any block
+with the given `try_index` will be caught by the `CatchBlockEntry` with the
+matching `catch_try_index`.
+
+![Catch Block Entry](images/catch-block-entry-0.png)
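+
+The following sketch is purely illustrative (the struct and function names are
+hypothetical, not the VM's classes); it only shows the idea of routing a
+throwing instruction to a handler by matching `try_index` against
+`catch_try_index`:
+
+```cpp
+#include <cstdint>
+#include <vector>
+
+struct CatchEntry {
+  intptr_t catch_try_index;  // try region covered by this handler
+  uintptr_t handler_pc;      // where execution resumes when catching
+};
+
+// Any throwing instruction inside a block marked with `try_index` is routed to
+// the catch entry whose catch_try_index equals that try_index.
+const CatchEntry* FindHandler(const std::vector<CatchEntry>& catch_entries,
+                              intptr_t try_index) {
+  for (const CatchEntry& entry : catch_entries) {
+    if (entry.catch_try_index == try_index) return &entry;
+  }
+  return nullptr;  // no handler in this frame - the exception propagates up
+}
+```
+
+In the VM this information lives in the exception handlers table attached to
+the `Code` object.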
+
+For optimized code this means that the data flow associated with exceptional
+control flow is also represented implicitly: due to the absence of explicit
+exceptional edges the data flow can't be expressed using explicit phi-functions.
+Instead, in optimized code each `CatchBlockEntry` is treated almost as if it
+were an independent entry into the function: for each variable `v` the
+`CatchBlockEntry` contains a `Parameter(...)` instruction restoring the
+variable's state at catch entry from a fixed location on the stack. When an
+exception is thrown the runtime system populates these stack slots with the
+right values - the current state of the corresponding local variables. There is
+a clear parallel between these `Parameter(...)` instructions and the `Phi(...)`
+instructions that would be used if exceptional control flow were explicit.
+
+![Catch Block Entry](images/catch-block-entry-1.png)
+
+How does the runtime system populate the stack slots corresponding to these
+`Parameter(...)` instructions? During compilation the necessary information is
+available in the _deoptimization environment_ attached to the instruction. This
+environment encodes the state of local variables in terms of SSA values, i.e.
+which SSA value should be stored into a given local variable if we need to
+reconstruct the unoptimized frame (see [Optimized
+IL](compiler-pipeline-overview.md#optimized-il) for
+an overview). However the way this information is used for exception handling
+is slightly different in JIT and AOT modes.
+
+### AOT mode
+
+AOT mode does not support deoptimization, so the AOT compiler does not associate
+any deoptimization metadata with the generated code. Instead deoptimization
+environments associated with instructions that can throw are converted into
+`CatchEntryMoves` metadata during code generation and the resulting metadata is
+stored in `RawCode::catch_entry_moves_maps_` in a compressed form.
+
+`CatchEntryMoves` is essentially a sequence of moves which the runtime needs to
+perform to create the state that the catch entry expects. There are three types
+of moves:
+
+* `*(FP + Dst) <- ObjectPool[PoolIndex]` - a move of a constant from an object
+pool;
+* `*(FP + Dst) <- *(FP + Src)` - a move of a tagged value;
+* `*(FP + Dst) <- Box<Rep>(*(FP + Src))` - a boxing operation for an untagged
+value.
+
+When an exception is caught the runtime decompresses the metadata associated
+with the call site that threw the exception and uses it to prepare the state of
+the stack for the catch block entry. See
+`ExceptionHandlerFinder::{ReadCompressedCatchEntryMoves, ExecuteCatchEntryMoves}`.
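+
+A minimal, self-contained sketch of executing such a sequence of moves is shown
+below. It is purely illustrative: the `Move` and `Tagged` types and the split
+between tagged slots and unboxed slots are simplifying assumptions, not the
+VM's actual data structures.
+
+```cpp
+#include <cstdint>
+#include <variant>
+#include <vector>
+
+// Hypothetical simplified move descriptor; the real CatchEntryMove packs the
+// source kind and destination slot into a single word and supports more kinds.
+struct Move {
+  enum class Kind { kConstant, kTaggedSlot, kDoubleSlot } kind;
+  intptr_t src;   // object-pool index or source frame slot
+  intptr_t dest;  // destination frame slot (always ends up tagged)
+};
+
+// A "tagged value" modelled as either a small integer or a boxed double.
+using Tagged = std::variant<intptr_t, double>;
+
+void ExecuteMoves(const std::vector<Move>& moves,
+                  const std::vector<Tagged>& object_pool,
+                  const std::vector<double>& unboxed_slots,
+                  std::vector<Tagged>* tagged_slots) {
+  for (const Move& move : moves) {
+    switch (move.kind) {
+      case Move::Kind::kConstant:    // load a constant from the object pool
+        (*tagged_slots)[move.dest] = object_pool[move.src];
+        break;
+      case Move::Kind::kTaggedSlot:  // plain slot-to-slot copy
+        (*tagged_slots)[move.dest] = (*tagged_slots)[move.src];
+        break;
+      case Move::Kind::kDoubleSlot:  // unboxed source must be boxed first
+        (*tagged_slots)[move.dest] = Tagged(unboxed_slots[move.src]);
+        break;
+    }
+  }
+}
+```
+
+In the VM the moves operate directly on the handler frame's memory and produce
+`RawObject*` values (see `ExceptionHandlerFinder::ExecuteCatchEntryMoves` in
+`runtime/vm/exceptions.cc`); the shape of the loop is the same.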
+
+NOTE: See [this
+design/motivation](https://docs.google.com/a/google.com/document/d/1_vX8VkvHVA1Om7jjONiWLA325k_JmSZuvVClet-x-xM/edit?usp=sharing)
+document for more background on the `CatchEntryMoves` metadata.
+
+### JIT mode
+
+JIT mode heavily relies on deoptimization and all call instructions have (lazy)
+deoptimization environments associated with them. These environments are
+converted to [deoptimization
+instructions](deoptimization.md#in-optimized-code)
+during code generation and stored on the `Code` object.
+
+When an exception is caught the runtime system converts the deoptimization
+environment associated with the call site that threw the exception into
+`CatchEntryMoves` and then uses it to prepare the state of the stack for the
+catch block entry. See `ExceptionHandlerFinder::{GetCatchEntryMovesFromDeopt, ExecuteCatchEntryMoves}`.
+
+Constructing `CatchEntryMoves` dynamically from deoptimization instructions
+avoids unnecessary duplication of metadata and saves memory, as the
+deoptimization environments already contain all the information necessary for
+constructing the correct stack state.
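+
+The overall flow on the JIT side can be sketched as follows. This is a
+simplified illustration: the function and type names below are assumptions
+standing in for `ExceptionHandlerFinder`, `CatchEntryMovesCache` and
+`DeoptContext::ToCatchEntryMoves`, not the VM's actual API.
+
+```cpp
+#include <cstdint>
+#include <map>
+#include <memory>
+
+struct CatchEntryMovesSketch { /* sequence of moves, as in the AOT section */ };
+
+// Hypothetical stand-ins for the real runtime machinery.
+std::shared_ptr<CatchEntryMovesSketch> BuildMovesFromDeoptInfo(uintptr_t pc);
+void ExecuteMoves(const CatchEntryMovesSketch& moves, uintptr_t handler_fp);
+
+// Cache keyed by the return address of the throwing call site, so repeated
+// throws through the same call site skip the conversion step.
+static std::map<uintptr_t, std::shared_ptr<CatchEntryMovesSketch>> moves_cache;
+
+void PrepareFrameForCatchEntry(uintptr_t throwing_pc, uintptr_t handler_fp) {
+  auto it = moves_cache.find(throwing_pc);
+  std::shared_ptr<CatchEntryMovesSketch> moves;
+  if (it != moves_cache.end()) {
+    moves = it->second;                            // cache hit
+  } else {
+    moves = BuildMovesFromDeoptInfo(throwing_pc);  // convert the deopt env
+    moves_cache.emplace(throwing_pc, moves);       // remember for next throw
+  }
+  ExecuteMoves(*moves, handler_fp);
+}
+```
+
+In the VM the cache is the isolate's `CatchEntryMovesCache` and the conversion
+is performed by `DeoptContext::ToCatchEntryMoves`.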
+
+IMPORTANT: There is a subtle difference between DBC and other architectures with
+respect to the catch block entry state. On normal architectures `Parameter(i)`
+at catch entry is associated with the same stack space that is used to store
+the variable with index `i`. On DBC, however, at catch entry `Parameter(i)` is
+allocated to a separate scratch space at the very top of the register space.
+See `FlowGraphAllocator::ProcessInitialDefinition` and
+`FlowGraphCompiler::CatchEntryRegForVariable`.
diff --git a/runtime/docs/compiler/images/catch-block-entry-0.png b/runtime/docs/compiler/images/catch-block-entry-0.png
new file mode 100644
index 0000000..b6013de
--- /dev/null
+++ b/runtime/docs/compiler/images/catch-block-entry-0.png
Binary files differ
diff --git a/runtime/docs/compiler/images/catch-block-entry-1.png b/runtime/docs/compiler/images/catch-block-entry-1.png
new file mode 100644
index 0000000..c41adb4
--- /dev/null
+++ b/runtime/docs/compiler/images/catch-block-entry-1.png
Binary files differ
diff --git a/runtime/tests/vm/dart/catch_entry_state_test.dart b/runtime/tests/vm/dart/catch_entry_state_test.dart
new file mode 100644
index 0000000..ed33f17
--- /dev/null
+++ b/runtime/tests/vm/dart/catch_entry_state_test.dart
@@ -0,0 +1,53 @@
+// Copyright (c) 2018, the Dart project authors.  Please see the AUTHORS file
+// for details. All rights reserved. Use of this source code is governed by a
+// BSD-style license that can be found in the LICENSE file.
+// VMOptions=--no-background-compilation --enable-inlining-annotations --optimization-counter-threshold=100
+
+// Verify that the runtime correctly materializes unboxed variables on catch
+// entry in optimized code.
+
+import 'dart:typed_data';
+
+import 'package:expect/expect.dart';
+
+const NeverInline = "NeverInline";
+
+@NeverInline
+void testThrow(bool shouldThrow) {
+  var dbl = 0.0;
+  var i32 = 0;
+  var i64 = 0;
+  var f32x4 = new Float32x4.zero();
+  var f64x2 = new Float64x2.zero();
+  var i32x4 = new Int32x4(0, 0, 0, 0);
+  try {
+    for (var i = 0; i < 100; i++) {
+      dbl += i;
+      i32 = i | 0x70000000;
+      i64 = i | 0x80000000;
+      final d = i.toDouble();
+      f32x4 += new Float32x4(d, -d, d, -d);
+      f64x2 += new Float64x2(d, -d);
+      i32x4 += new Int32x4(-i, i, -i, i);
+      if (shouldThrow && i == 50) {
+        throw "";
+      }
+    }
+  } catch (e) {}
+
+  if (shouldThrow) {
+    Expect.equals(1275.0, dbl);
+    Expect.equals(0x70000000 | 50, i32);
+    Expect.equals(0x80000000 | 50, i64);
+    Expect.listEquals([1275.0, -1275.0, 1275.0, -1275.0],
+        [f32x4.x, f32x4.y, f32x4.z, f32x4.w]);
+    Expect.listEquals([1275.0, -1275.0], [f64x2.x, f64x2.y]);
+    Expect.listEquals(
+        [-1275, 1275, -1275, 1275], [i32x4.x, i32x4.y, i32x4.z, i32x4.w]);
+  }
+}
+
+void main() {
+  for (var i = 0; i < 100; i++) testThrow(false);
+  testThrow(true);
+}
diff --git a/runtime/tests/vm/vm.status b/runtime/tests/vm/vm.status
index 1632825..df4f8e3 100644
--- a/runtime/tests/vm/vm.status
+++ b/runtime/tests/vm/vm.status
@@ -60,6 +60,9 @@
 dart/redirection_type_shuffling_test/00: RuntimeError, Pass
 dart/redirection_type_shuffling_test/none: RuntimeError
 
+[ $runtime != vm && $runtime != dart_precompiled ]
+dart/catch_entry_state_test: SkipByDesign
+
 [ $compiler != dartk && $compiler != dartkb ]
 cc/IsolateReload_KernelIncrementalCompile: SkipByDesign
 cc/IsolateReload_KernelIncrementalCompileAppAndLib: SkipByDesign
diff --git a/runtime/vm/clustered_snapshot.cc b/runtime/vm/clustered_snapshot.cc
index f799a8d..84bc363 100644
--- a/runtime/vm/clustered_snapshot.cc
+++ b/runtime/vm/clustered_snapshot.cc
@@ -1666,7 +1666,7 @@
     s->Push(code->ptr()->exception_handlers_);
     s->Push(code->ptr()->pc_descriptors_);
 #if defined(DART_PRECOMPILED_RUNTIME) || defined(DART_PRECOMPILER)
-    s->Push(code->ptr()->catch_entry_.catch_entry_state_maps_);
+    s->Push(code->ptr()->catch_entry_.catch_entry_moves_maps_);
 #else
     s->Push(code->ptr()->catch_entry_.variables_);
 #endif
@@ -1727,7 +1727,7 @@
       s->WriteRef(code->ptr()->exception_handlers_);
       s->WriteRef(code->ptr()->pc_descriptors_);
 #if defined(DART_PRECOMPILED_RUNTIME) || defined(DART_PRECOMPILER)
-      s->WriteRef(code->ptr()->catch_entry_.catch_entry_state_maps_);
+      s->WriteRef(code->ptr()->catch_entry_.catch_entry_moves_maps_);
 #else
       s->WriteRef(code->ptr()->catch_entry_.variables_);
 #endif
@@ -1810,7 +1810,7 @@
       code->ptr()->pc_descriptors_ =
           reinterpret_cast<RawPcDescriptors*>(d->ReadRef());
 #if defined(DART_PRECOMPILED_RUNTIME) || defined(DART_PRECOMPILER)
-      code->ptr()->catch_entry_.catch_entry_state_maps_ =
+      code->ptr()->catch_entry_.catch_entry_moves_maps_ =
           reinterpret_cast<RawTypedData*>(d->ReadRef());
 #else
       code->ptr()->catch_entry_.variables_ =
diff --git a/runtime/vm/code_descriptors.cc b/runtime/vm/code_descriptors.cc
index da77114..84d9882 100644
--- a/runtime/vm/code_descriptors.cc
+++ b/runtime/vm/code_descriptors.cc
@@ -112,18 +112,19 @@
   return handlers.raw();
 }
 
-static uint8_t* zone_allocator(uint8_t* ptr,
-                               intptr_t old_size,
-                               intptr_t new_size) {
+static uint8_t* ZoneAllocator(uint8_t* ptr,
+                              intptr_t old_size,
+                              intptr_t new_size) {
   Zone* zone = Thread::Current()->zone();
   return zone->Realloc<uint8_t>(ptr, old_size, new_size);
 }
 
-class CatchEntryStateMapBuilder::TrieNode : public ZoneAllocated {
+#if !defined(DART_PRECOMPILED_RUNTIME)
+class CatchEntryMovesMapBuilder::TrieNode : public ZoneAllocated {
  public:
-  TrieNode() : pair_(), entry_state_offset_(-1) {}
-  TrieNode(CatchEntryStatePair pair, intptr_t index)
-      : pair_(pair), entry_state_offset_(index) {}
+  TrieNode() : move_(), entry_state_offset_(-1) {}
+  TrieNode(CatchEntryMove move, intptr_t index)
+      : move_(move), entry_state_offset_(index) {}
 
   intptr_t Offset() { return entry_state_offset_; }
 
@@ -132,42 +133,36 @@
     return node;
   }
 
-  TrieNode* Follow(CatchEntryStatePair next) {
+  TrieNode* Follow(CatchEntryMove next) {
     for (intptr_t i = 0; i < children_.length(); i++) {
-      if (children_[i]->pair_ == next) return children_[i];
+      if (children_[i]->move_ == next) return children_[i];
     }
     return NULL;
   }
 
  private:
-  CatchEntryStatePair pair_;
+  CatchEntryMove move_;
   const intptr_t entry_state_offset_;
   GrowableArray<TrieNode*> children_;
 };
 
-CatchEntryStateMapBuilder::CatchEntryStateMapBuilder()
+CatchEntryMovesMapBuilder::CatchEntryMovesMapBuilder()
     : zone_(Thread::Current()->zone()),
       root_(new TrieNode()),
       current_pc_offset_(0),
       buffer_(NULL),
-      stream_(&buffer_, zone_allocator, 64) {}
+      stream_(&buffer_, ZoneAllocator, 64) {}
 
-void CatchEntryStateMapBuilder::AppendMove(intptr_t src_slot,
-                                           intptr_t dest_slot) {
-  moves_.Add(CatchEntryStatePair::FromMove(src_slot, dest_slot));
+void CatchEntryMovesMapBuilder::Append(const CatchEntryMove& move) {
+  moves_.Add(move);
 }
 
-void CatchEntryStateMapBuilder::AppendConstant(intptr_t pool_id,
-                                               intptr_t dest_slot) {
-  moves_.Add(CatchEntryStatePair::FromConstant(pool_id, dest_slot));
-}
-
-void CatchEntryStateMapBuilder::NewMapping(intptr_t pc_offset) {
+void CatchEntryMovesMapBuilder::NewMapping(intptr_t pc_offset) {
   moves_.Clear();
   current_pc_offset_ = pc_offset;
 }
 
-void CatchEntryStateMapBuilder::EndMapping() {
+void CatchEntryMovesMapBuilder::EndMapping() {
   intptr_t suffix_length = 0;
   TrieNode* suffix = root_;
   // Find the largest common suffix, get the last node of the path.
@@ -189,8 +184,7 @@
   // Write the unshared part, adding it to the trie.
   TrieNode* node = suffix;
   for (intptr_t i = length - 1; i >= 0; i--) {
-    Writer::Write(&stream_, moves_[i].src);
-    Writer::Write(&stream_, moves_[i].dest);
+    moves_[i].WriteTo(&stream_);
 
     TrieNode* child = new (zone_) TrieNode(moves_[i], current_offset);
     node->Insert(child);
@@ -198,7 +192,7 @@
   }
 }
 
-RawTypedData* CatchEntryStateMapBuilder::FinalizeCatchEntryStateMap() {
+RawTypedData* CatchEntryMovesMapBuilder::FinalizeCatchEntryMovesMap() {
   TypedData& td = TypedData::Handle(TypedData::New(
       kTypedDataInt8ArrayCid, stream_.bytes_written(), Heap::kOld));
   NoSafepointScope no_safepoint;
@@ -209,6 +203,7 @@
   }
   return td.raw();
 }
+#endif  // !defined(DART_PRECOMPILED_RUNTIME)
 
 const TokenPosition CodeSourceMapBuilder::kInitialPosition =
     TokenPosition(TokenPosition::kDartCodeProloguePos);
@@ -230,7 +225,7 @@
       inlined_functions_(
           GrowableObjectArray::Handle(GrowableObjectArray::New(Heap::kOld))),
       buffer_(NULL),
-      stream_(&buffer_, zone_allocator, 64),
+      stream_(&buffer_, ZoneAllocator, 64),
       stack_traces_only_(stack_traces_only) {
   buffered_inline_id_stack_.Add(0);
   buffered_token_pos_stack_.Add(kInitialPosition);
diff --git a/runtime/vm/code_descriptors.h b/runtime/vm/code_descriptors.h
index 6c0daab..dfec824 100644
--- a/runtime/vm/code_descriptors.h
+++ b/runtime/vm/code_descriptors.h
@@ -144,43 +144,16 @@
   DISALLOW_COPY_AND_ASSIGN(ExceptionHandlerList);
 };
 
-// An encoded move from stack/constant to stack performed
-struct CatchEntryStatePair {
-  enum { kCatchEntryStateIsMove = 1, kCatchEntryStateDestShift = 1 };
-
-  intptr_t src, dest;
-
-  static CatchEntryStatePair FromConstant(intptr_t pool_id,
-                                          intptr_t dest_slot) {
-    CatchEntryStatePair pair;
-    pair.src = pool_id;
-    pair.dest = (dest_slot << kCatchEntryStateDestShift);
-    return pair;
-  }
-
-  static CatchEntryStatePair FromMove(intptr_t src_slot, intptr_t dest_slot) {
-    CatchEntryStatePair pair;
-    pair.src = src_slot;
-    pair.dest =
-        (dest_slot << kCatchEntryStateDestShift) | kCatchEntryStateIsMove;
-    return pair;
-  }
-
-  bool operator==(const CatchEntryStatePair& rhs) {
-    return src == rhs.src && dest == rhs.dest;
-  }
-};
-
-// Used to construct CatchEntryState metadata for AoT mode of compilation.
-class CatchEntryStateMapBuilder : public ZoneAllocated {
+#if !defined(DART_PRECOMPILED_RUNTIME)
+// Used to construct CatchEntryMoves for the AOT mode of compilation.
+class CatchEntryMovesMapBuilder : public ZoneAllocated {
  public:
-  CatchEntryStateMapBuilder();
+  CatchEntryMovesMapBuilder();
 
   void NewMapping(intptr_t pc_offset);
-  void AppendMove(intptr_t src_slot, intptr_t dest_slot);
-  void AppendConstant(intptr_t pool_id, intptr_t dest_slot);
+  void Append(const CatchEntryMove& move);
   void EndMapping();
-  RawTypedData* FinalizeCatchEntryStateMap();
+  RawTypedData* FinalizeCatchEntryMovesMap();
 
  private:
   class TrieNode;
@@ -188,12 +161,13 @@
   Zone* zone_;
   TrieNode* root_;
   intptr_t current_pc_offset_;
-  GrowableArray<CatchEntryStatePair> moves_;
+  GrowableArray<CatchEntryMove> moves_;
   uint8_t* buffer_;
   WriteStream stream_;
 
-  DISALLOW_COPY_AND_ASSIGN(CatchEntryStateMapBuilder);
+  DISALLOW_COPY_AND_ASSIGN(CatchEntryMovesMapBuilder);
 };
+#endif  // !defined(DART_PRECOMPILED_RUNTIME)
 
 // A CodeSourceMap maps from pc offsets to a stack of inlined functions and
 // their positions. This is encoded as a little bytecode that pushes and pops
diff --git a/runtime/vm/compiler/aot/precompiler.cc b/runtime/vm/compiler/aot/precompiler.cc
index 3448fea..d2fdd60 100644
--- a/runtime/vm/compiler/aot/precompiler.cc
+++ b/runtime/vm/compiler/aot/precompiler.cc
@@ -2709,7 +2709,7 @@
   graph_compiler->FinalizeStackMaps(code);
   graph_compiler->FinalizeVarDescriptors(code);
   graph_compiler->FinalizeExceptionHandlers(code);
-  graph_compiler->FinalizeCatchEntryStateMap(code);
+  graph_compiler->FinalizeCatchEntryMovesMap(code);
   graph_compiler->FinalizeStaticCallTargetsTable(code);
   graph_compiler->FinalizeCodeSourceMap(code);
 
diff --git a/runtime/vm/compiler/backend/flow_graph.cc b/runtime/vm/compiler/backend/flow_graph.cc
index 3c9e524..161889f 100644
--- a/runtime/vm/compiler/backend/flow_graph.cc
+++ b/runtime/vm/compiler/backend/flow_graph.cc
@@ -1702,32 +1702,12 @@
   }
 }
 
-void FlowGraph::ConvertEnvironmentUse(Value* use, Representation from_rep) {
-  const Representation to_rep = kTagged;
-  if (from_rep == to_rep) {
-    return;
-  }
-  InsertConversion(from_rep, to_rep, use, /*is_environment_use=*/true);
-}
-
 void FlowGraph::InsertConversionsFor(Definition* def) {
   const Representation from_rep = def->representation();
 
   for (Value::Iterator it(def->input_use_list()); !it.Done(); it.Advance()) {
     ConvertUse(it.Current(), from_rep);
   }
-
-  if (!graph_entry()->catch_entries().is_empty()) {
-    for (Value::Iterator it(def->env_use_list()); !it.Done(); it.Advance()) {
-      Value* use = it.Current();
-      if (use->instruction()->MayThrow() &&
-          use->instruction()->GetBlock()->InsideTryBlock()) {
-        // Environment uses at calls inside try-blocks must be converted to
-        // tagged representation.
-        ConvertEnvironmentUse(it.Current(), from_rep);
-      }
-    }
-  }
 }
 
 static void UnboxPhi(PhiInstr* phi) {
diff --git a/runtime/vm/compiler/backend/flow_graph.h b/runtime/vm/compiler/backend/flow_graph.h
index a3b4ef0..ba3e046 100644
--- a/runtime/vm/compiler/backend/flow_graph.h
+++ b/runtime/vm/compiler/backend/flow_graph.h
@@ -438,7 +438,6 @@
 
   void InsertConversionsFor(Definition* def);
   void ConvertUse(Value* use, Representation from);
-  void ConvertEnvironmentUse(Value* use, Representation from);
   void InsertConversion(Representation from,
                         Representation to,
                         Value* use,
diff --git a/runtime/vm/compiler/backend/flow_graph_compiler.cc b/runtime/vm/compiler/backend/flow_graph_compiler.cc
index ad396ce..0dcc98fb 100644
--- a/runtime/vm/compiler/backend/flow_graph_compiler.cc
+++ b/runtime/vm/compiler/backend/flow_graph_compiler.cc
@@ -119,7 +119,7 @@
       pc_descriptors_list_(NULL),
       stackmap_table_builder_(NULL),
       code_source_map_builder_(NULL),
-      catch_entry_state_maps_builder_(NULL),
+      catch_entry_moves_maps_builder_(NULL),
       block_info_(block_order_.length()),
       deopt_infos_(),
       static_calls_target_table_(),
@@ -183,7 +183,9 @@
 void FlowGraphCompiler::InitCompiler() {
   pc_descriptors_list_ = new (zone()) DescriptorList(64);
   exception_handlers_list_ = new (zone()) ExceptionHandlerList();
-  catch_entry_state_maps_builder_ = new (zone()) CatchEntryStateMapBuilder();
+#if defined(DART_PRECOMPILER)
+  catch_entry_moves_maps_builder_ = new (zone()) CatchEntryMovesMapBuilder();
+#endif
   block_info_.Clear();
   // Initialize block info and search optimized (non-OSR) code for calls
   // indicating a non-leaf routine and calls without IC data indicating
@@ -340,80 +342,122 @@
   return 0;
 }
 
-void FlowGraphCompiler::EmitCatchEntryState(Environment* env,
-                                            intptr_t try_index) {
-#if defined(DART_PRECOMPILER) || defined(DART_PRECOMPILED_RUNTIME)
+#if defined(DART_PRECOMPILER)
+static intptr_t LocationToStackIndex(const Location& src) {
+  ASSERT(src.HasStackIndex());
+  return -compiler_frame_layout.VariableIndexForFrameSlot(src.stack_index());
+}
+
+static CatchEntryMove CatchEntryMoveFor(Assembler* assembler,
+                                        Representation src_rep,
+                                        const Location& src,
+                                        intptr_t dst_index) {
+  if (src.IsConstant()) {
+    // Skip dead locations.
+    if (src.constant().raw() == Symbols::OptimizedOut().raw()) {
+      return CatchEntryMove();
+    }
+    const intptr_t pool_index =
+        assembler->object_pool_wrapper().FindObject(src.constant());
+    return CatchEntryMove::FromSlot(CatchEntryMove::SourceKind::kConstant,
+                                    pool_index, dst_index);
+  }
+
+  if (src.IsPairLocation()) {
+    const auto lo_loc = src.AsPairLocation()->At(0);
+    const auto hi_loc = src.AsPairLocation()->At(1);
+    ASSERT(lo_loc.IsStackSlot() && hi_loc.IsStackSlot());
+    return CatchEntryMove::FromSlot(
+        CatchEntryMove::SourceKind::kInt64PairSlot,
+        CatchEntryMove::EncodePairSource(LocationToStackIndex(lo_loc),
+                                         LocationToStackIndex(hi_loc)),
+        dst_index);
+  }
+
+  CatchEntryMove::SourceKind src_kind;
+  switch (src_rep) {
+    case kTagged:
+      src_kind = CatchEntryMove::SourceKind::kTaggedSlot;
+      break;
+    case kUnboxedInt64:
+      src_kind = CatchEntryMove::SourceKind::kInt64Slot;
+      break;
+    case kUnboxedInt32:
+      src_kind = CatchEntryMove::SourceKind::kInt32Slot;
+      break;
+    case kUnboxedUint32:
+      src_kind = CatchEntryMove::SourceKind::kUint32Slot;
+      break;
+    case kUnboxedDouble:
+      src_kind = CatchEntryMove::SourceKind::kDoubleSlot;
+      break;
+    case kUnboxedFloat32x4:
+      src_kind = CatchEntryMove::SourceKind::kFloat32x4Slot;
+      break;
+    case kUnboxedFloat64x2:
+      src_kind = CatchEntryMove::SourceKind::kFloat64x2Slot;
+      break;
+    case kUnboxedInt32x4:
+      src_kind = CatchEntryMove::SourceKind::kInt32x4Slot;
+      break;
+    default:
+      UNREACHABLE();
+      break;
+  }
+
+  return CatchEntryMove::FromSlot(src_kind, LocationToStackIndex(src),
+                                  dst_index);
+}
+#endif
+
+void FlowGraphCompiler::RecordCatchEntryMoves(Environment* env,
+                                              intptr_t try_index) {
+#if defined(DART_PRECOMPILER)
   env = env ? env : pending_deoptimization_env_;
   try_index = try_index != CatchClauseNode::kInvalidTryIndex
                   ? try_index
                   : CurrentTryIndex();
-  if (is_optimizing() && env != NULL &&
+  if (is_optimizing() && env != nullptr &&
       (try_index != CatchClauseNode::kInvalidTryIndex)) {
     env = env->Outermost();
     CatchBlockEntryInstr* catch_block =
         flow_graph().graph_entry()->GetCatchEntry(try_index);
     const GrowableArray<Definition*>* idefs =
         catch_block->initial_definitions();
-    catch_entry_state_maps_builder_->NewMapping(assembler()->CodeSize());
-    // Parameters first.
-    intptr_t i = 0;
+    catch_entry_moves_maps_builder_->NewMapping(assembler()->CodeSize());
 
     const intptr_t num_direct_parameters = flow_graph().num_direct_parameters();
-    for (; i < num_direct_parameters; ++i) {
+    const intptr_t ex_idx =
+        catch_block->raw_exception_var() != nullptr
+            ? flow_graph().EnvIndex(catch_block->raw_exception_var())
+            : -1;
+    const intptr_t st_idx =
+        catch_block->raw_stacktrace_var() != nullptr
+            ? flow_graph().EnvIndex(catch_block->raw_stacktrace_var())
+            : -1;
+    for (intptr_t i = 0; i < flow_graph().variable_count(); ++i) {
       // Don't sync captured parameters. They are not in the environment.
       if (flow_graph().captured_parameters()->Contains(i)) continue;
-      if ((*idefs)[i]->IsConstant()) continue;  // Common constants.
+      // Don't sync exception or stack trace variables.
+      if (i == ex_idx || i == st_idx) continue;
+      // Don't sync values that have been replaced with constants.
+      if ((*idefs)[i]->IsConstant()) continue;
+
       Location src = env->LocationAt(i);
+      // Can only occur if AllocationSinking is enabled - and it is disabled
+      // in functions with try.
+      ASSERT(!src.IsInvalid());
+      const Representation src_rep =
+          env->ValueAt(i)->definition()->representation();
       intptr_t dest_index = i - num_direct_parameters;
-      if (!src.IsStackSlot()) {
-        ASSERT(src.IsConstant());
-        // Skip dead locations.
-        if (src.constant().raw() == Symbols::OptimizedOut().raw()) {
-          continue;
-        }
-        intptr_t id =
-            assembler()->object_pool_wrapper().FindObject(src.constant());
-        catch_entry_state_maps_builder_->AppendConstant(id, dest_index);
-        continue;
-      }
-      const intptr_t src_index =
-          -compiler_frame_layout.VariableIndexForFrameSlot(src.stack_index());
-      if (src_index != dest_index) {
-        catch_entry_state_maps_builder_->AppendMove(src_index, dest_index);
+      const auto move =
+          CatchEntryMoveFor(assembler(), src_rep, src, dest_index);
+      if (!move.IsRedundant()) {
+        catch_entry_moves_maps_builder_->Append(move);
       }
     }
 
-    // Process locals. Skip exception_var and stacktrace_var.
-    intptr_t local_base = num_direct_parameters;
-    intptr_t ex_idx = local_base - catch_block->exception_var().index().value();
-    intptr_t st_idx =
-        local_base - catch_block->stacktrace_var().index().value();
-    for (; i < flow_graph().variable_count(); ++i) {
-      // Don't sync captured parameters. They are not in the environment.
-      if (flow_graph().captured_parameters()->Contains(i)) continue;
-      if (i == ex_idx || i == st_idx) continue;
-      if ((*idefs)[i]->IsConstant()) continue;  // Common constants.
-      Location src = env->LocationAt(i);
-      if (src.IsInvalid()) continue;
-      intptr_t dest_index = i - num_direct_parameters;
-      if (!src.IsStackSlot()) {
-        ASSERT(src.IsConstant());
-        // Skip dead locations.
-        if (src.constant().raw() == Symbols::OptimizedOut().raw()) {
-          continue;
-        }
-        intptr_t id =
-            assembler()->object_pool_wrapper().FindObject(src.constant());
-        catch_entry_state_maps_builder_->AppendConstant(id, dest_index);
-        continue;
-      }
-      const intptr_t src_index =
-          -compiler_frame_layout.VariableIndexForFrameSlot(src.stack_index());
-      if (src_index != dest_index) {
-        catch_entry_state_maps_builder_->AppendMove(src_index, dest_index);
-      }
-    }
-    catch_entry_state_maps_builder_->EndMapping();
+    catch_entry_moves_maps_builder_->EndMapping();
   }
 #endif  // defined(DART_PRECOMPILER) || defined(DART_PRECOMPILED_RUNTIME)
 }
@@ -424,7 +468,7 @@
                                              LocationSummary* locs) {
   AddCurrentDescriptor(kind, deopt_id, token_pos);
   RecordSafepoint(locs);
-  EmitCatchEntryState();
+  RecordCatchEntryMoves();
   if (deopt_id != DeoptId::kNone) {
     // Marks either the continuation point in unoptimized code or the
     // deoptimization point in optimized code, after call.
@@ -1019,11 +1063,11 @@
   code.set_var_descriptors(var_descs);
 }
 
-void FlowGraphCompiler::FinalizeCatchEntryStateMap(const Code& code) {
-#if defined(DART_PRECOMPILED_RUNTIME) || defined(DART_PRECOMPILER)
+void FlowGraphCompiler::FinalizeCatchEntryMovesMap(const Code& code) {
+#if defined(DART_PRECOMPILER)
   TypedData& maps = TypedData::Handle(
-      catch_entry_state_maps_builder_->FinalizeCatchEntryStateMap());
-  code.set_catch_entry_state_maps(maps);
+      catch_entry_moves_maps_builder_->FinalizeCatchEntryMovesMap());
+  code.set_catch_entry_moves_maps(maps);
 #else
   code.set_variables(Smi::Handle(Smi::New(flow_graph().variable_count())));
 #endif
@@ -2223,7 +2267,7 @@
       (compiler->CurrentTryIndex() != CatchClauseNode::kInvalidTryIndex)) {
     Environment* env =
         compiler->SlowPathEnvironmentFor(instruction(), num_args_);
-    compiler->EmitCatchEntryState(env, try_index_);
+    compiler->RecordCatchEntryMoves(env, try_index_);
   }
   if (!use_shared_stub) {
     __ Breakpoint();
diff --git a/runtime/vm/compiler/backend/flow_graph_compiler.h b/runtime/vm/compiler/backend/flow_graph_compiler.h
index 7baed3c..1df8a26 100644
--- a/runtime/vm/compiler/backend/flow_graph_compiler.h
+++ b/runtime/vm/compiler/backend/flow_graph_compiler.h
@@ -15,6 +15,7 @@
 namespace dart {
 
 // Forward declarations.
+class CatchEntryMovesMapBuilder;
 class Code;
 class DeoptInfoBuilder;
 class FlowGraph;
@@ -592,7 +593,7 @@
 
   void EmitEdgeCounter(intptr_t edge_id);
 #endif  // !defined(TARGET_ARCH_DBC)
-  void EmitCatchEntryState(
+  void RecordCatchEntryMoves(
       Environment* env = NULL,
       intptr_t try_index = CatchClauseNode::kInvalidTryIndex);
 
@@ -665,7 +666,7 @@
   RawArray* CreateDeoptInfo(Assembler* assembler);
   void FinalizeStackMaps(const Code& code);
   void FinalizeVarDescriptors(const Code& code);
-  void FinalizeCatchEntryStateMap(const Code& code);
+  void FinalizeCatchEntryMovesMap(const Code& code);
   void FinalizeStaticCallTargetsTable(const Code& code);
   void FinalizeCodeSourceMap(const Code& code);
 
@@ -953,7 +954,7 @@
   DescriptorList* pc_descriptors_list_;
   StackMapTableBuilder* stackmap_table_builder_;
   CodeSourceMapBuilder* code_source_map_builder_;
-  CatchEntryStateMapBuilder* catch_entry_state_maps_builder_;
+  CatchEntryMovesMapBuilder* catch_entry_moves_maps_builder_;
   GrowableArray<BlockInfo*> block_info_;
   GrowableArray<CompilerDeoptInfo*> deopt_infos_;
   GrowableArray<SlowPathCode*> slow_path_code_;
diff --git a/runtime/vm/compiler/backend/flow_graph_compiler_arm.cc b/runtime/vm/compiler/backend/flow_graph_compiler_arm.cc
index a6ff28b..f53c438 100644
--- a/runtime/vm/compiler/backend/flow_graph_compiler_arm.cc
+++ b/runtime/vm/compiler/backend/flow_graph_compiler_arm.cc
@@ -1070,7 +1070,7 @@
     // arguments are removed.
     AddCurrentDescriptor(RawPcDescriptors::kDeopt, deopt_id_after, token_pos);
   }
-  EmitCatchEntryState(pending_deoptimization_env_, try_index);
+  RecordCatchEntryMoves(pending_deoptimization_env_, try_index);
   __ Drop(args_desc.CountWithTypeArgs());
 }
 
diff --git a/runtime/vm/compiler/backend/flow_graph_compiler_arm64.cc b/runtime/vm/compiler/backend/flow_graph_compiler_arm64.cc
index 2dc1a30..998842a 100644
--- a/runtime/vm/compiler/backend/flow_graph_compiler_arm64.cc
+++ b/runtime/vm/compiler/backend/flow_graph_compiler_arm64.cc
@@ -1039,7 +1039,7 @@
     // arguments are removed.
     AddCurrentDescriptor(RawPcDescriptors::kDeopt, deopt_id_after, token_pos);
   }
-  EmitCatchEntryState(pending_deoptimization_env_, try_index);
+  RecordCatchEntryMoves(pending_deoptimization_env_, try_index);
   __ Drop(args_desc.CountWithTypeArgs());
 }
 
diff --git a/runtime/vm/compiler/backend/flow_graph_compiler_ia32.cc b/runtime/vm/compiler/backend/flow_graph_compiler_ia32.cc
index 93dde40..bfc712a 100644
--- a/runtime/vm/compiler/backend/flow_graph_compiler_ia32.cc
+++ b/runtime/vm/compiler/backend/flow_graph_compiler_ia32.cc
@@ -979,7 +979,7 @@
     // arguments are removed.
     AddCurrentDescriptor(RawPcDescriptors::kDeopt, deopt_id_after, token_pos);
   }
-  EmitCatchEntryState(pending_deoptimization_env_, try_index);
+  RecordCatchEntryMoves(pending_deoptimization_env_, try_index);
   __ Drop(args_desc.CountWithTypeArgs());
 }
 
diff --git a/runtime/vm/compiler/backend/flow_graph_compiler_x64.cc b/runtime/vm/compiler/backend/flow_graph_compiler_x64.cc
index 5c1329a..ae17719 100644
--- a/runtime/vm/compiler/backend/flow_graph_compiler_x64.cc
+++ b/runtime/vm/compiler/backend/flow_graph_compiler_x64.cc
@@ -1062,7 +1062,7 @@
     // arguments are removed.
     AddCurrentDescriptor(RawPcDescriptors::kDeopt, deopt_id_after, token_pos);
   }
-  EmitCatchEntryState(pending_deoptimization_env_, try_index);
+  RecordCatchEntryMoves(pending_deoptimization_env_, try_index);
   __ Drop(args_desc.CountWithTypeArgs(), RCX);
 }
 
diff --git a/runtime/vm/compiler/backend/il.cc b/runtime/vm/compiler/backend/il.cc
index fb9dcdb..3e94025 100644
--- a/runtime/vm/compiler/backend/il.cc
+++ b/runtime/vm/compiler/backend/il.cc
@@ -2824,19 +2824,8 @@
   return NULL;
 }
 
-static bool HasTryBlockUse(Value* use_list) {
-  for (Value::Iterator it(use_list); !it.Done(); it.Advance()) {
-    Value* use = it.Current();
-    if (use->instruction()->MayThrow() &&
-        use->instruction()->GetBlock()->InsideTryBlock()) {
-      return true;
-    }
-  }
-  return false;
-}
-
 Definition* BoxInstr::Canonicalize(FlowGraph* flow_graph) {
-  if ((input_use_list() == NULL) && !HasTryBlockUse(env_use_list())) {
+  if (input_use_list() == nullptr) {
     // Environments can accommodate any representation. No need to box.
     return value()->definition();
   }
@@ -2859,7 +2848,7 @@
 }
 
 Definition* BoxIntegerInstr::Canonicalize(FlowGraph* flow_graph) {
-  if ((input_use_list() == NULL) && !HasTryBlockUse(env_use_list())) {
+  if (input_use_list() == nullptr) {
     // Environments can accommodate any representation. No need to box.
     return value()->definition();
   }
diff --git a/runtime/vm/compiler/backend/il_arm.cc b/runtime/vm/compiler/backend/il_arm.cc
index 6474df0..99b85ad 100644
--- a/runtime/vm/compiler/backend/il_arm.cc
+++ b/runtime/vm/compiler/backend/il_arm.cc
@@ -3068,7 +3068,7 @@
       __ ldr(LR, Address(THR, entry_point_offset));
       __ blx(LR);
       compiler->RecordSafepoint(instruction()->locs(), kNumSlowPathArgs);
-      compiler->EmitCatchEntryState();
+      compiler->RecordCatchEntryMoves();
       compiler->AddDescriptor(
           RawPcDescriptors::kOther, compiler->assembler()->CodeSize(),
           instruction()->deopt_id(), instruction()->token_pos(),
diff --git a/runtime/vm/compiler/backend/il_arm64.cc b/runtime/vm/compiler/backend/il_arm64.cc
index ed0f400..61b4766 100644
--- a/runtime/vm/compiler/backend/il_arm64.cc
+++ b/runtime/vm/compiler/backend/il_arm64.cc
@@ -2752,7 +2752,7 @@
       __ ldr(LR, Address(THR, entry_point_offset));
       __ blr(LR);
       compiler->RecordSafepoint(instruction()->locs(), kNumSlowPathArgs);
-      compiler->EmitCatchEntryState();
+      compiler->RecordCatchEntryMoves();
       compiler->AddDescriptor(
           RawPcDescriptors::kOther, compiler->assembler()->CodeSize(),
           instruction()->deopt_id(), instruction()->token_pos(),
diff --git a/runtime/vm/compiler/backend/il_x64.cc b/runtime/vm/compiler/backend/il_x64.cc
index a5e521b..f0cfb7b 100644
--- a/runtime/vm/compiler/backend/il_x64.cc
+++ b/runtime/vm/compiler/backend/il_x64.cc
@@ -2752,7 +2752,7 @@
                     stack_overflow_shared_without_fpu_regs_entry_point_offset();
       __ call(Address(THR, entry_point_offset));
       compiler->RecordSafepoint(instruction()->locs(), kNumSlowPathArgs);
-      compiler->EmitCatchEntryState();
+      compiler->RecordCatchEntryMoves();
       compiler->AddDescriptor(
           RawPcDescriptors::kOther, compiler->assembler()->CodeSize(),
           instruction()->deopt_id(), instruction()->token_pos(),
diff --git a/runtime/vm/compiler/jit/compiler.cc b/runtime/vm/compiler/jit/compiler.cc
index 14670ce..e84b28d 100644
--- a/runtime/vm/compiler/jit/compiler.cc
+++ b/runtime/vm/compiler/jit/compiler.cc
@@ -630,7 +630,7 @@
   graph_compiler->FinalizeStackMaps(code);
   graph_compiler->FinalizeVarDescriptors(code);
   graph_compiler->FinalizeExceptionHandlers(code);
-  graph_compiler->FinalizeCatchEntryStateMap(code);
+  graph_compiler->FinalizeCatchEntryMovesMap(code);
   graph_compiler->FinalizeStaticCallTargetsTable(code);
   graph_compiler->FinalizeCodeSourceMap(code);
 
diff --git a/runtime/vm/deopt_instructions.cc b/runtime/vm/deopt_instructions.cc
index ec6fa74..ec35ebc8 100644
--- a/runtime/vm/deopt_instructions.cc
+++ b/runtime/vm/deopt_instructions.cc
@@ -335,7 +335,7 @@
   }
 }
 
-intptr_t* DeoptContext::CatchEntryState(intptr_t num_vars) {
+const CatchEntryMoves* DeoptContext::ToCatchEntryMoves(intptr_t num_vars) {
   const Code& code = Code::Handle(code_);
   const TypedData& deopt_info = TypedData::Handle(deopt_info_);
   GrowableArray<DeoptInstr*> deopt_instructions;
@@ -343,8 +343,7 @@
   ASSERT(!deopt_table.IsNull());
   DeoptInfo::Unpack(deopt_table, deopt_info, &deopt_instructions);
 
-  intptr_t* state = new intptr_t[2 * num_vars + 1];
-  state[0] = num_vars;
+  CatchEntryMoves* moves = CatchEntryMoves::Allocate(num_vars);
 
   Function& function = Function::Handle(zone(), code.function());
   intptr_t params =
@@ -363,12 +362,10 @@
     DeoptInstr* instr = deopt_instructions[len - 1 - slot];
     intptr_t dest_index = i - params;
 #endif
-    CatchEntryStatePair p = instr->ToCatchEntryStatePair(this, dest_index);
-    state[1 + 2 * i] = p.src;
-    state[2 + 2 * i] = p.dest;
+    moves->At(i) = instr->ToCatchEntryMove(this, dest_index);
   }
 
-  return state;
+  return moves;
 }
 
 static void FillDeferredSlots(DeoptContext* deopt_context,
@@ -508,9 +505,9 @@
     *reinterpret_cast<RawObject**>(dest_addr) = obj.raw();
   }
 
-  CatchEntryStatePair ToCatchEntryStatePair(DeoptContext* deopt_context,
-                                            intptr_t dest_slot) {
-    return CatchEntryStatePair::FromConstant(object_table_index_, dest_slot);
+  CatchEntryMove ToCatchEntryMove(DeoptContext* deopt_context,
+                                  intptr_t dest_slot) {
+    return CatchEntryMove::FromConstant(object_table_index_, dest_slot);
   }
 
  private:
@@ -540,10 +537,11 @@
     *dest_addr = source_.Value<intptr_t>(deopt_context);
   }
 
-  CatchEntryStatePair ToCatchEntryStatePair(DeoptContext* deopt_context,
-                                            intptr_t dest_slot) {
-    return CatchEntryStatePair::FromMove(source_.StackSlot(deopt_context),
-                                         dest_slot);
+  CatchEntryMove ToCatchEntryMove(DeoptContext* deopt_context,
+                                  intptr_t dest_slot) {
+    return CatchEntryMove::FromSlot(CatchEntryMove::SourceKind::kTaggedSlot,
+                                    source_.StackSlot(deopt_context),
+                                    dest_slot);
   }
 
  private:
@@ -599,6 +597,15 @@
                                   hi_.Value<int32_t>(deopt_context));
   }
 
+  CatchEntryMove ToCatchEntryMove(DeoptContext* deopt_context,
+                                  intptr_t dest_slot) {
+    return CatchEntryMove::FromSlot(
+        CatchEntryMove::SourceKind::kInt64PairSlot,
+        CatchEntryMove::EncodePairSource(lo_.StackSlot(deopt_context),
+                                         hi_.StackSlot(deopt_context)),
+        dest_slot);
+  }
+
  private:
   static const intptr_t kFieldWidth = kBitsPerWord / 2;
   class LoRegister : public BitField<intptr_t, intptr_t, 0, kFieldWidth> {};
@@ -611,7 +618,7 @@
   DISALLOW_COPY_AND_ASSIGN(DeoptMintPairInstr);
 };
 
-template <DeoptInstr::Kind K, typename T>
+template <DeoptInstr::Kind K, CatchEntryMove::SourceKind slot_kind, typename T>
 class DeoptIntInstr : public DeoptIntegerInstrBase {
  public:
   explicit DeoptIntInstr(intptr_t source_index)
@@ -629,17 +636,35 @@
     return static_cast<int64_t>(source_.Value<T>(deopt_context));
   }
 
+  CatchEntryMove ToCatchEntryMove(DeoptContext* deopt_context,
+                                  intptr_t dest_slot) {
+    return CatchEntryMove::FromSlot(slot_kind, source_.StackSlot(deopt_context),
+                                    dest_slot);
+  }
+
  private:
   const CpuRegisterSource source_;
 
   DISALLOW_COPY_AND_ASSIGN(DeoptIntInstr);
 };
 
-typedef DeoptIntInstr<DeoptInstr::kUint32, uint32_t> DeoptUint32Instr;
-typedef DeoptIntInstr<DeoptInstr::kInt32, int32_t> DeoptInt32Instr;
-typedef DeoptIntInstr<DeoptInstr::kMint, int64_t> DeoptMintInstr;
+typedef DeoptIntInstr<DeoptInstr::kUint32,
+                      CatchEntryMove::SourceKind::kUint32Slot,
+                      uint32_t>
+    DeoptUint32Instr;
+typedef DeoptIntInstr<DeoptInstr::kInt32,
+                      CatchEntryMove::SourceKind::kInt32Slot,
+                      int32_t>
+    DeoptInt32Instr;
+typedef DeoptIntInstr<DeoptInstr::kMint,
+                      CatchEntryMove::SourceKind::kInt64Slot,
+                      int64_t>
+    DeoptMintInstr;
 
-template <DeoptInstr::Kind K, typename Type, typename RawObjectType>
+template <DeoptInstr::Kind K,
+          CatchEntryMove::SourceKind slot_kind,
+          typename Type,
+          typename RawObjectType>
 class DeoptFpuInstr : public DeoptInstr {
  public:
   explicit DeoptFpuInstr(intptr_t source_index) : source_(source_index) {}
@@ -658,22 +683,39 @@
         reinterpret_cast<RawObjectType**>(dest_addr));
   }
 
+  CatchEntryMove ToCatchEntryMove(DeoptContext* deopt_context,
+                                  intptr_t dest_slot) {
+    return CatchEntryMove::FromSlot(slot_kind, source_.StackSlot(deopt_context),
+                                    dest_slot);
+  }
+
  private:
   const FpuRegisterSource source_;
 
   DISALLOW_COPY_AND_ASSIGN(DeoptFpuInstr);
 };
 
-typedef DeoptFpuInstr<DeoptInstr::kDouble, double, RawDouble> DeoptDoubleInstr;
+typedef DeoptFpuInstr<DeoptInstr::kDouble,
+                      CatchEntryMove::SourceKind::kDoubleSlot,
+                      double,
+                      RawDouble>
+    DeoptDoubleInstr;
 
 // Simd128 types.
-typedef DeoptFpuInstr<DeoptInstr::kFloat32x4, simd128_value_t, RawFloat32x4>
+typedef DeoptFpuInstr<DeoptInstr::kFloat32x4,
+                      CatchEntryMove::SourceKind::kFloat32x4Slot,
+                      simd128_value_t,
+                      RawFloat32x4>
     DeoptFloat32x4Instr;
-typedef DeoptFpuInstr<DeoptInstr::kFloat32x4, simd128_value_t, RawFloat32x4>
-    DeoptFloat32x4Instr;
-typedef DeoptFpuInstr<DeoptInstr::kFloat64x2, simd128_value_t, RawFloat64x2>
+typedef DeoptFpuInstr<DeoptInstr::kFloat64x2,
+                      CatchEntryMove::SourceKind::kFloat64x2Slot,
+                      simd128_value_t,
+                      RawFloat64x2>
     DeoptFloat64x2Instr;
-typedef DeoptFpuInstr<DeoptInstr::kInt32x4, simd128_value_t, RawInt32x4>
+typedef DeoptFpuInstr<DeoptInstr::kInt32x4,
+                      CatchEntryMove::SourceKind::kInt32x4Slot,
+                      simd128_value_t,
+                      RawInt32x4>
     DeoptInt32x4Instr;
 
 // Deoptimization instruction creating a PC marker for the code of
diff --git a/runtime/vm/deopt_instructions.h b/runtime/vm/deopt_instructions.h
index 38d95fd..e2096b6 100644
--- a/runtime/vm/deopt_instructions.h
+++ b/runtime/vm/deopt_instructions.h
@@ -166,8 +166,10 @@
   // objects.
   void FillDestFrame();
 
-  // Allocate and prepare exceptions metadata for TrySync
-  intptr_t* CatchEntryState(intptr_t num_vars);
+  // Convert deoptimization instructions to a list of moves that need
+  // to be executed when entering the catch block from this deoptimization
+  // point.
+  const CatchEntryMoves* ToCatchEntryMoves(intptr_t num_vars);
 
   // Materializes all deferred objects.  Returns the total number of
   // artificial arguments used during deoptimization.
@@ -334,11 +336,10 @@
 
   virtual void Execute(DeoptContext* deopt_context, intptr_t* dest_addr) = 0;
 
-  // Convert DeoptInstr to TrySync metadata entry.
-  virtual CatchEntryStatePair ToCatchEntryStatePair(DeoptContext* deopt_context,
-                                                    intptr_t dest_slot) {
+  virtual CatchEntryMove ToCatchEntryMove(DeoptContext* deopt_context,
+                                          intptr_t dest_slot) {
     UNREACHABLE();
-    return CatchEntryStatePair();
+    return CatchEntryMove();
   }
 
   virtual DeoptInstr::Kind kind() const = 0;
diff --git a/runtime/vm/exceptions.cc b/runtime/vm/exceptions.cc
index 5437930..5248782 100644
--- a/runtime/vm/exceptions.cc
+++ b/runtime/vm/exceptions.cc
@@ -141,20 +141,10 @@
   }
 }
 
-static RawObject** VariableAt(uword fp, int stack_slot) {
-#if defined(TARGET_ARCH_DBC)
-  return reinterpret_cast<RawObject**>(fp + stack_slot * kWordSize);
-#else
-  const intptr_t frame_slot =
-      runtime_frame_layout.FrameSlotForVariableIndex(-stack_slot);
-  return reinterpret_cast<RawObject**>(fp + frame_slot * kWordSize);
-#endif
-}
-
 class ExceptionHandlerFinder : public StackResource {
  public:
   explicit ExceptionHandlerFinder(Thread* thread)
-      : StackResource(thread), thread_(thread), cache_(NULL), metadata_(NULL) {}
+      : StackResource(thread), thread_(thread) {}
 
   // Iterate through the stack frames and try to find a frame with an
   // exception handler. Once found, set the pc, sp and fp so that execution
@@ -172,7 +162,7 @@
     uword temp_handler_pc = kUwordMax;
     bool is_optimized = false;
     code_ = NULL;
-    cache_ = thread_->isolate()->catch_entry_state_cache();
+    catch_entry_moves_cache_ = thread_->isolate()->catch_entry_moves_cache();
 
     while (!frame->IsEntryFrame()) {
       if (frame->IsDartFrame()) {
@@ -187,13 +177,20 @@
             if (is_optimized) {
               pc_ = frame->pc();
               code_ = &Code::Handle(frame->LookupDartCode());
-              CatchEntryState* state = cache_->Lookup(pc_);
-              if (state != NULL) cached_ = *state;
+              CatchEntryMovesRefPtr* cached_catch_entry_moves =
+                  catch_entry_moves_cache_->Lookup(pc_);
+              if (cached_catch_entry_moves != NULL) {
+                cached_catch_entry_moves_ = *cached_catch_entry_moves;
+              }
 #if !defined(DART_PRECOMPILED_RUNTIME) && !defined(DART_PRECOMPILER)
               intptr_t num_vars = Smi::Value(code_->variables());
-              if (cached_.Empty()) GetMetaDataFromDeopt(num_vars, frame);
+              if (cached_catch_entry_moves_.IsEmpty()) {
+                GetCatchEntryMovesFromDeopt(num_vars, frame);
+              }
 #else
-              if (cached_.Empty()) ReadCompressedMetaData();
+              if (cached_catch_entry_moves_.IsEmpty()) {
+                ReadCompressedCatchEntryMoves();
+              }
 #endif  // !defined(DART_PRECOMPILED_RUNTIME) && !defined(DART_PRECOMPILER)
             }
           }
@@ -216,80 +213,121 @@
     return handler_pc_set_;
   }
 
-  void TrySync() {
-    if (code_ == NULL || !code_->is_optimized()) {
+  // When entering a catch block in optimized code we need to execute
+  // catch entry moves that morph the state of the frame into
+  // what the catch entry expects.
+  void PrepareFrameForCatchEntry() {
+    if (code_ == nullptr || !code_->is_optimized()) {
       return;
     }
-    if (!cached_.Empty()) {
-      // Cache hit.
-      TrySyncCached(&cached_);
+
+    if (cached_catch_entry_moves_.IsEmpty()) {
+      catch_entry_moves_cache_->Insert(
+          pc_, CatchEntryMovesRefPtr(catch_entry_moves_));
     } else {
-      // New cache entry.
-      CatchEntryState m(metadata_);
-      TrySyncCached(&m);
-      cache_->Insert(pc_, m);
+      catch_entry_moves_ = &cached_catch_entry_moves_.moves();
     }
+
+    ExecuteCatchEntryMoves(*catch_entry_moves_);
   }
 
-  void TrySyncCached(CatchEntryState* md) {
+  void ExecuteCatchEntryMoves(const CatchEntryMoves& moves) {
     uword fp = handler_fp;
-    ObjectPool* pool = NULL;
-    intptr_t pairs = md->Pairs();
-    for (int j = 0; j < pairs; j++) {
-      intptr_t src = md->Src(j);
-      intptr_t dest = md->Dest(j);
-      if (md->isMove(j)) {
-        *VariableAt(fp, dest) = *VariableAt(fp, src);
-      } else {
-        if (pool == NULL) {
-          pool = &ObjectPool::Handle(code_->object_pool());
-        }
-        RawObject* obj = pool->ObjectAt(src);
-        *VariableAt(fp, dest) = obj;
+    ObjectPool* pool = nullptr;
+    for (int j = 0; j < moves.count(); j++) {
+      const CatchEntryMove& move = moves.At(j);
+
+      RawObject* value;
+      switch (move.source_kind()) {
+        case CatchEntryMove::SourceKind::kConstant:
+          if (pool == nullptr) {
+            pool = &ObjectPool::Handle(code_->object_pool());
+          }
+          value = pool->ObjectAt(move.src_slot());
+          break;
+
+        case CatchEntryMove::SourceKind::kTaggedSlot:
+          value = *TaggedSlotAt(fp, move.src_slot());
+          break;
+
+        case CatchEntryMove::SourceKind::kDoubleSlot:
+          value = Double::New(*SlotAt<double>(fp, move.src_slot()));
+          break;
+
+        case CatchEntryMove::SourceKind::kFloat32x4Slot:
+          value = Float32x4::New(*SlotAt<simd128_value_t>(fp, move.src_slot()));
+          break;
+
+        case CatchEntryMove::SourceKind::kFloat64x2Slot:
+          value = Float64x2::New(*SlotAt<simd128_value_t>(fp, move.src_slot()));
+          break;
+
+        case CatchEntryMove::SourceKind::kInt32x4Slot:
+          value = Int32x4::New(*SlotAt<simd128_value_t>(fp, move.src_slot()));
+          break;
+
+        case CatchEntryMove::SourceKind::kInt64PairSlot:
+          value = Integer::New(
+              Utils::LowHighTo64Bits(*SlotAt<uint32_t>(fp, move.src_lo_slot()),
+                                     *SlotAt<int32_t>(fp, move.src_hi_slot())));
+          break;
+
+        case CatchEntryMove::SourceKind::kInt64Slot:
+          value = Integer::New(*SlotAt<int64_t>(fp, move.src_slot()));
+          break;
+
+        case CatchEntryMove::SourceKind::kInt32Slot:
+          value = Integer::New(*SlotAt<int32_t>(fp, move.src_slot()));
+          break;
+
+        case CatchEntryMove::SourceKind::kUint32Slot:
+          value = Integer::New(*SlotAt<uint32_t>(fp, move.src_slot()));
+          break;
       }
+
+      *TaggedSlotAt(fp, move.dest_slot()) = value;
     }
   }
 
 #if defined(DART_PRECOMPILED_RUNTIME) || defined(DART_PRECOMPILER)
-  void ReadCompressedMetaData() {
+  void ReadCompressedCatchEntryMoves() {
     intptr_t pc_offset = pc_ - code_->PayloadStart();
-    const TypedData& td = TypedData::Handle(code_->catch_entry_state_maps());
+    const TypedData& td = TypedData::Handle(code_->catch_entry_moves_maps());
     NoSafepointScope no_safepoint;
     ReadStream stream(static_cast<uint8_t*>(td.DataAddr(0)), td.Length());
 
-    bool found_metadata = false;
+    intptr_t prefix_length, suffix_length, suffix_offset;
     while (stream.PendingBytes() > 0) {
       intptr_t target_pc_offset = Reader::Read(&stream);
-      intptr_t variables = Reader::Read(&stream);
-      intptr_t suffix_length = Reader::Read(&stream);
-      intptr_t suffix_offset = Reader::Read(&stream);
+      prefix_length = Reader::Read(&stream);
+      suffix_length = Reader::Read(&stream);
+      suffix_offset = Reader::Read(&stream);
       if (pc_offset == target_pc_offset) {
-        metadata_ = new intptr_t[2 * (variables + suffix_length) + 1];
-        metadata_[0] = variables + suffix_length;
-        for (int j = 0; j < variables; j++) {
-          intptr_t src = Reader::Read(&stream);
-          intptr_t dest = Reader::Read(&stream);
-          metadata_[1 + 2 * j] = src;
-          metadata_[2 + 2 * j] = dest;
-        }
-        ReadCompressedSuffix(&stream, suffix_offset, suffix_length, metadata_,
-                             2 * variables + 1);
-        found_metadata = true;
         break;
-      } else {
-        for (intptr_t j = 0; j < 2 * variables; j++) {
-          Reader::Read(&stream);
-        }
+      }
+
+      // Skip the moves.
+      for (intptr_t j = 0; j < prefix_length; j++) {
+        CatchEntryMove::ReadFrom(&stream);
       }
     }
-    ASSERT(found_metadata);
+    ASSERT((stream.PendingBytes() > 0) || (prefix_length == 0));
+
+    CatchEntryMoves* moves =
+        CatchEntryMoves::Allocate(prefix_length + suffix_length);
+    for (int j = 0; j < prefix_length; j++) {
+      moves->At(j) = CatchEntryMove::ReadFrom(&stream);
+    }
+    ReadCompressedCatchEntryMovesSuffix(&stream, suffix_offset, suffix_length,
+                                        moves, prefix_length);
+    catch_entry_moves_ = moves;
   }
 
-  void ReadCompressedSuffix(ReadStream* stream,
-                            intptr_t offset,
-                            intptr_t length,
-                            intptr_t* target,
-                            intptr_t target_offset) {
+  void ReadCompressedCatchEntryMovesSuffix(ReadStream* stream,
+                                           intptr_t offset,
+                                           intptr_t length,
+                                           CatchEntryMoves* moves,
+                                           intptr_t moves_offset) {
     stream->SetPosition(offset);
     Reader::Read(stream);  // skip pc_offset
     Reader::Read(stream);  // skip variables
@@ -297,24 +335,23 @@
     intptr_t suffix_offset = Reader::Read(stream);
     intptr_t to_read = length - suffix_length;
     for (int j = 0; j < to_read; j++) {
-      target[target_offset + 2 * j] = Reader::Read(stream);
-      target[target_offset + 2 * j + 1] = Reader::Read(stream);
+      moves->At(moves_offset + j) = CatchEntryMove::ReadFrom(stream);
     }
     if (suffix_length > 0) {
-      ReadCompressedSuffix(stream, suffix_offset, suffix_length, target,
-                           target_offset + to_read * 2);
+      ReadCompressedCatchEntryMovesSuffix(stream, suffix_offset, suffix_length,
+                                          moves, moves_offset + to_read);
     }
   }
 
 #else
-  void GetMetaDataFromDeopt(intptr_t num_vars, StackFrame* frame) {
+  void GetCatchEntryMovesFromDeopt(intptr_t num_vars, StackFrame* frame) {
     Isolate* isolate = thread_->isolate();
     DeoptContext* deopt_context =
         new DeoptContext(frame, *code_, DeoptContext::kDestIsAllocated, NULL,
                          NULL, true, false /* deoptimizing_code */);
     isolate->set_deopt_context(deopt_context);
 
-    metadata_ = deopt_context->CatchEntryState(num_vars);
+    catch_entry_moves_ = deopt_context->ToCatchEntryMoves(num_vars);
 
     isolate->set_deopt_context(NULL);
     delete deopt_context;
@@ -327,16 +364,47 @@
   uword handler_fp;
 
  private:
+  template <typename T>
+  static T* SlotAt(uword fp, int stack_slot) {
+#if defined(TARGET_ARCH_DBC)
+    return reinterpret_cast<T*>(fp + stack_slot * kWordSize);
+#else
+    const intptr_t frame_slot =
+        runtime_frame_layout.FrameSlotForVariableIndex(-stack_slot);
+    return reinterpret_cast<T*>(fp + frame_slot * kWordSize);
+#endif
+  }
+
+  static RawObject** TaggedSlotAt(uword fp, int stack_slot) {
+    return SlotAt<RawObject*>(fp, stack_slot);
+  }
+
   typedef ReadStream::Raw<sizeof(intptr_t), intptr_t> Reader;
   Thread* thread_;
-  CatchEntryStateCache* cache_;
   Code* code_;
   bool handler_pc_set_;
-  intptr_t* metadata_;      // MetaData generated from deopt.
-  CatchEntryState cached_;  // Value of per PC MetaData cache.
   intptr_t pc_;             // Current pc in the handler frame.
+
+  const CatchEntryMoves* catch_entry_moves_ = nullptr;
+  CatchEntryMovesCache* catch_entry_moves_cache_ = nullptr;
+  CatchEntryMovesRefPtr cached_catch_entry_moves_;
 };
 
+CatchEntryMove CatchEntryMove::ReadFrom(ReadStream* stream) {
+  using Reader = ReadStream::Raw<sizeof(intptr_t), intptr_t>;
+  const intptr_t src = Reader::Read(stream);
+  const intptr_t dest_and_kind = Reader::Read(stream);
+  return CatchEntryMove(src, dest_and_kind);
+}
+
+#if !defined(DART_PRECOMPILED_RUNTIME)
+void CatchEntryMove::WriteTo(WriteStream* stream) {
+  using Writer = WriteStream::Raw<sizeof(intptr_t), intptr_t>;
+  Writer::Write(stream, src_);
+  Writer::Write(stream, dest_and_kind_);
+}
+#endif
+
 static void FindErrorHandler(uword* handler_pc,
                              uword* handler_sp,
                              uword* handler_fp) {
@@ -619,7 +687,7 @@
     THR_Print("%s\n", stacktrace.ToCString());
   }
   if (handler_exists) {
-    finder.TrySync();
+    finder.PrepareFrameForCatchEntry();
     // Found a dart handler for the exception, jump to it.
     JumpToExceptionHandler(thread, handler_pc, handler_sp, handler_fp,
                            exception, stacktrace);
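
The reading code above (`ReadCompressedCatchEntryMovesSuffix`) reconstructs a catch entry's move sequence from a compressed encoding in which several catch entries can share a common tail ("suffix") of moves: each record stores a prefix of inline moves plus the length and offset of a shared suffix stored elsewhere, and the reader recurses into that suffix. Below is a minimal standalone sketch of that decoding idea; it uses a plain `std::vector` in place of the VM's `ReadStream`, and the record layout and `Move` type are illustrative assumptions rather than the VM's actual encoding (part of which is elided in the hunk above).

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical encoding, for illustration only: each move is a
// (src, dest_and_kind) pair of words, and a record at `offset` looks like
//   [suffix_length, suffix_offset, inline moves ...]
// The record describes `length` moves in total; the last `suffix_length`
// of them are shared with the record that starts at `suffix_offset`.
using Word = intptr_t;
using Move = std::pair<Word, Word>;

void ReadMovesWithSharedSuffix(const std::vector<Word>& data,
                               size_t offset,
                               size_t length,
                               std::vector<Move>* out) {
  size_t pos = offset;
  const size_t suffix_length = static_cast<size_t>(data[pos++]);
  const size_t suffix_offset = static_cast<size_t>(data[pos++]);
  const size_t inline_count = length - suffix_length;
  for (size_t i = 0; i < inline_count; ++i) {
    const Word src = data[pos++];
    const Word dest_and_kind = data[pos++];
    out->push_back({src, dest_and_kind});
  }
  if (suffix_length > 0) {
    // The shared tail is decoded by recursing into the referenced record,
    // mirroring ReadCompressedCatchEntryMovesSuffix above.
    ReadMovesWithSharedSuffix(data, suffix_offset, suffix_length, out);
  }
}
```
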
diff --git a/runtime/vm/exceptions.h b/runtime/vm/exceptions.h
index d9b8c14..b681b0f 100644
--- a/runtime/vm/exceptions.h
+++ b/runtime/vm/exceptions.h
@@ -6,6 +6,7 @@
 #define RUNTIME_VM_EXCEPTIONS_H_
 
 #include "vm/allocation.h"
+#include "vm/bitfield.h"
 #include "vm/token_position.h"
 
 namespace dart {
@@ -22,6 +23,8 @@
 class RawObject;
 class RawScript;
 class RawStackTrace;
+class ReadStream;
+class WriteStream;
 class String;
 class Thread;
 
@@ -106,61 +109,193 @@
   int8_t is_generated;         // True if this is a generated handler.
 };
 
-class CatchEntryState {
+//
+// Support for try/catch in the optimized code.
+//
+// The optimizing compiler does not model exceptional control flow explicitly.
+// Instead we rely on the runtime system to create the correct state at the
+// entry into the catch block by reshuffling values in the frame into the
+// positions where the catch block expects them to be.
+//
+// See runtime/docs/compiler/exceptions.md for more details.
+//
+
+// A single move from a stack slot or an object pool entry into another
+// stack slot. The destination slot only ever holds tagged values, but the
+// source slot can contain an unboxed value (e.g. an unboxed double) - in
+// that case the value is boxed before the move is executed.
+class CatchEntryMove {
  public:
-  enum { kCatchEntryStateIsMove = 1, kCatchEntryStateDestShift = 1 };
+  CatchEntryMove()
+      : src_(0),
+        dest_and_kind_(static_cast<intptr_t>(SourceKind::kTaggedSlot)) {
+    ASSERT(IsRedundant());
+  }
 
-  CatchEntryState() : data_(NULL), ref_count_(NULL) {}
-  explicit CatchEntryState(intptr_t* data)
-      : data_(data), ref_count_(new intptr_t(1)) {}
+  enum class SourceKind {
+    kConstant,
+    kTaggedSlot,
+    kDoubleSlot,
+    kFloat32x4Slot,
+    kFloat64x2Slot,
+    kInt32x4Slot,
+    kInt64PairSlot,
+    kInt64Slot,
+    kInt32Slot,
+    kUint32Slot,
+  };
 
-  CatchEntryState(const CatchEntryState& state) { Copy(state); }
+  SourceKind source_kind() const {
+    return SourceKindField::decode(dest_and_kind_);
+  }
 
-  ~CatchEntryState() { Destroy(); }
+  intptr_t src_slot() const {
+    ASSERT(source_kind() != SourceKind::kInt64PairSlot);
+    return src_;
+  }
 
-  CatchEntryState& operator=(const CatchEntryState& state) {
+  intptr_t src_lo_slot() const {
+    ASSERT(source_kind() == SourceKind::kInt64PairSlot);
+    return LoSourceSlot::decode(src_);
+  }
+
+  intptr_t src_hi_slot() const {
+    ASSERT(source_kind() == SourceKind::kInt64PairSlot);
+    return HiSourceSlot::decode(src_);
+  }
+
+  intptr_t dest_slot() const {
+    return dest_and_kind_ >> SourceKindField::bitsize();
+  }
+
+  static CatchEntryMove FromConstant(intptr_t pool_id, intptr_t dest_slot) {
+    return FromSlot(SourceKind::kConstant, pool_id, dest_slot);
+  }
+
+  static CatchEntryMove FromSlot(SourceKind kind,
+                                 intptr_t src_slot,
+                                 intptr_t dest_slot) {
+    return CatchEntryMove(src_slot,
+                          SourceKindField::encode(kind) |
+                              (dest_slot << SourceKindField::bitsize()));
+  }
+
+  static intptr_t EncodePairSource(intptr_t src_lo_slot, intptr_t src_hi_slot) {
+    return LoSourceSlot::encode(src_lo_slot) |
+           HiSourceSlot::encode(src_hi_slot);
+  }
+
+  bool IsRedundant() const {
+    return (source_kind() == SourceKind::kTaggedSlot) &&
+           (dest_slot() == src_slot());
+  }
+
+  bool operator==(const CatchEntryMove& rhs) {
+    return src_ == rhs.src_ && dest_and_kind_ == rhs.dest_and_kind_;
+  }
+
+  static CatchEntryMove ReadFrom(ReadStream* stream);
+
+#if !defined(DART_PRECOMPILED_RUNTIME)
+  void WriteTo(WriteStream* stream);
+#endif
+
+ private:
+  CatchEntryMove(intptr_t src, intptr_t dest_and_kind)
+      : src_(src), dest_and_kind_(dest_and_kind) {}
+
+  // Note: the BitField helper does not work with signed values whose size
+  // does not match the destination size - thus we do not use BitField to
+  // declare a DestinationField and instead encode and decode it manually.
+  using SourceKindField = BitField<intptr_t, SourceKind, 0, 4>;
+
+  static constexpr intptr_t kHalfSourceBits = kBitsPerWord / 2;
+  using LoSourceSlot = BitField<intptr_t, intptr_t, 0, kHalfSourceBits>;
+  using HiSourceSlot =
+      BitField<intptr_t, intptr_t, kHalfSourceBits, kHalfSourceBits>;
+
+  intptr_t src_;
+  intptr_t dest_and_kind_;
+};
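
Two encodings in this class are compact enough to be easy to misread: `dest_and_kind_` keeps the `SourceKind` in its low bits and the (possibly negative) destination slot in the remaining bits via a plain shift, and for `kInt64PairSlot` the `src_` word is split into two half-word slot indices. The standalone sketch below reproduces these packings so the round trips can be checked; the 4-bit kind width and the half-word split follow the declarations above, while the enum values and slot numbers are arbitrary examples.

```cpp
#include <cassert>
#include <cstdint>

enum class SourceKind : intptr_t { kConstant, kTaggedSlot, kDoubleSlot };

constexpr int kSourceKindBits = 4;  // width of SourceKindField above
constexpr int kHalfSourceBits = 8 * static_cast<int>(sizeof(intptr_t)) / 2;

intptr_t EncodeDestAndKind(SourceKind kind, intptr_t dest_slot) {
  // Shift through uintptr_t so that negative destination slots are packed
  // with ordinary two's-complement wrapping.
  return static_cast<intptr_t>(static_cast<uintptr_t>(dest_slot)
                               << kSourceKindBits) |
         static_cast<intptr_t>(kind);
}

SourceKind DecodeKind(intptr_t dest_and_kind) {
  return static_cast<SourceKind>(dest_and_kind & ((1 << kSourceKindBits) - 1));
}

intptr_t DecodeDestSlot(intptr_t dest_and_kind) {
  // Arithmetic right shift restores negative slot indices; this is why the
  // destination is not expressed through the BitField helper.
  return dest_and_kind >> kSourceKindBits;
}

intptr_t EncodePairSource(intptr_t lo_slot, intptr_t hi_slot) {
  // Non-negative slot indices assumed for this illustration.
  return lo_slot | (hi_slot << kHalfSourceBits);
}

int main() {
  const intptr_t packed = EncodeDestAndKind(SourceKind::kDoubleSlot, -3);
  assert(DecodeKind(packed) == SourceKind::kDoubleSlot);
  assert(DecodeDestSlot(packed) == -3);

  const intptr_t pair = EncodePairSource(5, 6);
  assert((pair & ((intptr_t{1} << kHalfSourceBits) - 1)) == 5);
  assert((pair >> kHalfSourceBits) == 6);
  return 0;
}
```
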
+
+// A sequence of moves that needs to be executed to create the state expected
+// at the catch entry.
+// Note: this is a deserialized representation used by the runtime system
+// temporarily and for caching. That is why this object is allocated on the
+// malloc heap and not in the Dart heap.
+class CatchEntryMoves {
+ public:
+  static CatchEntryMoves* Allocate(intptr_t num_moves) {
+    auto result = reinterpret_cast<CatchEntryMoves*>(
+        malloc(sizeof(CatchEntryMoves) + sizeof(CatchEntryMove) * num_moves));
+    result->count_ = num_moves;
+    return result;
+  }
+
+  static void Free(const CatchEntryMoves* moves) {
+    free(const_cast<CatchEntryMoves*>(moves));
+  }
+
+  intptr_t count() const { return count_; }
+  CatchEntryMove& At(intptr_t i) { return Moves()[i]; }
+  const CatchEntryMove& At(intptr_t i) const { return Moves()[i]; }
+
+ private:
+  CatchEntryMove* Moves() {
+    return reinterpret_cast<CatchEntryMove*>(this + 1);
+  }
+
+  const CatchEntryMove* Moves() const {
+    return reinterpret_cast<const CatchEntryMove*>(this + 1);
+  }
+
+  intptr_t count_;
+  // Followed by CatchEntryMove[count_]
+};
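
`CatchEntryMoves` stores its moves inline, immediately after the header object, which is why it is created through `Allocate`/`Free` on the malloc heap rather than with `new`. The standalone sketch below shows the same "header followed by trailing array" technique with a hypothetical `MoveList` type, to make the `this + 1` arithmetic explicit; it illustrates the pattern rather than the VM class itself.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>

struct Move {
  intptr_t src;
  intptr_t dest_and_kind;
};

// Header followed by `count_` trailing Move entries in one malloc block,
// mirroring the CatchEntryMoves layout above.
class MoveList {
 public:
  static MoveList* Allocate(intptr_t num_moves) {
    auto result = static_cast<MoveList*>(
        malloc(sizeof(MoveList) + sizeof(Move) * num_moves));
    result->count_ = num_moves;
    return result;
  }

  static void Free(const MoveList* list) { free(const_cast<MoveList*>(list)); }

  intptr_t count() const { return count_; }
  Move& At(intptr_t i) { return MovesArray()[i]; }

 private:
  // The trailing array starts right behind the header object.
  Move* MovesArray() { return reinterpret_cast<Move*>(this + 1); }

  intptr_t count_;
};

int main() {
  MoveList* list = MoveList::Allocate(2);
  list->At(0) = {/*src=*/1, /*dest_and_kind=*/16};
  list->At(1) = {/*src=*/2, /*dest_and_kind=*/32};
  printf("%ld moves\n", static_cast<long>(list->count()));
  MoveList::Free(list);
  return 0;
}
```
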
+
+// A simple reference counting wrapper for CatchEntryMoves.
+//
+// TODO(vegorov) switch this to intrusive reference counting.
+class CatchEntryMovesRefPtr {
+ public:
+  CatchEntryMovesRefPtr() : moves_(nullptr), ref_count_(nullptr) {}
+  explicit CatchEntryMovesRefPtr(const CatchEntryMoves* moves)
+      : moves_(moves), ref_count_(new intptr_t(1)) {}
+
+  CatchEntryMovesRefPtr(const CatchEntryMovesRefPtr& state) { Copy(state); }
+
+  ~CatchEntryMovesRefPtr() { Destroy(); }
+
+  CatchEntryMovesRefPtr& operator=(const CatchEntryMovesRefPtr& state) {
     Destroy();
     Copy(state);
     return *this;
   }
 
-  bool Empty() { return ref_count_ == NULL; }
+  bool IsEmpty() { return ref_count_ == nullptr; }
 
-  intptr_t Pairs() { return data_[0]; }
-
-  intptr_t Src(intptr_t i) { return data_[1 + 2 * i]; }
-
-  intptr_t Dest(intptr_t i) {
-    return data_[2 + 2 * i] >> kCatchEntryStateDestShift;
-  }
-
-  bool isMove(intptr_t i) { return data_[2 + 2 * i] & kCatchEntryStateIsMove; }
+  const CatchEntryMoves& moves() { return *moves_; }
 
  private:
   void Destroy() {
-    if (ref_count_ != NULL) {
+    if (ref_count_ != nullptr) {
       (*ref_count_)--;
       if (*ref_count_ == 0) {
         delete ref_count_;
-        delete[] data_;
+        CatchEntryMoves::Free(moves_);
       }
     }
   }
 
-  void Copy(const CatchEntryState& state) {
-    data_ = state.data_;
+  void Copy(const CatchEntryMovesRefPtr& state) {
+    moves_ = state.moves_;
     ref_count_ = state.ref_count_;
-    if (ref_count_ != NULL) {
+    if (ref_count_ != nullptr) {
       (*ref_count_)++;
     }
   }
 
-  // data_ has the following format:
-  // 0 - number of pairs in this state
-  // 1-2 - 1st encoded src,dest pair
-  // 3-4 - 2nd pair
-  // ....
-  intptr_t* data_;
+  const CatchEntryMoves* moves_;
   intptr_t* ref_count_;
 };
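
The wrapper above implements non-atomic shared ownership by hand: `Copy` bumps a heap-allocated counter and `Destroy` frees the moves once the counter reaches zero, which is what lets the isolate's fixed-size cache hand out copies of cached entries safely. A minimal standalone sketch of the same copy/assign/destroy discipline is shown below, using a plain `int` payload standing in for `CatchEntryMoves`.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch of the manual reference counting used by
// CatchEntryMovesRefPtr, with an int payload instead of the moves.
class SharedInt {
 public:
  SharedInt() : value_(nullptr), ref_count_(nullptr) {}
  explicit SharedInt(int value)
      : value_(new int(value)), ref_count_(new intptr_t(1)) {}

  SharedInt(const SharedInt& other) { Copy(other); }
  // Like the VM wrapper, this sketch does not guard against self-assignment.
  SharedInt& operator=(const SharedInt& other) {
    Destroy();
    Copy(other);
    return *this;
  }
  ~SharedInt() { Destroy(); }

  bool IsEmpty() const { return ref_count_ == nullptr; }
  int value() const { return *value_; }
  intptr_t refs() const { return ref_count_ == nullptr ? 0 : *ref_count_; }

 private:
  void Copy(const SharedInt& other) {
    value_ = other.value_;
    ref_count_ = other.ref_count_;
    if (ref_count_ != nullptr) (*ref_count_)++;
  }

  void Destroy() {
    if (ref_count_ != nullptr && --(*ref_count_) == 0) {
      delete ref_count_;
      delete value_;
    }
  }

  int* value_;
  intptr_t* ref_count_;
};

int main() {
  SharedInt a(42);
  {
    SharedInt b = a;  // shares the payload, count becomes 2
    assert(a.refs() == 2 && b.value() == 42);
  }  // b destroyed, count back to 1
  assert(a.refs() == 1);
  return 0;
}
```
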
 
diff --git a/runtime/vm/heap/heap.cc b/runtime/vm/heap/heap.cc
index de9911f..2b319e2 100644
--- a/runtime/vm/heap/heap.cc
+++ b/runtime/vm/heap/heap.cc
@@ -431,7 +431,7 @@
     NOT_IN_PRODUCT(PrintStatsToTimeline(&tds, reason));
     // Some Code objects may have been collected so invalidate handler cache.
     thread->isolate()->handler_info_cache()->Clear();
-    thread->isolate()->catch_entry_state_cache()->Clear();
+    thread->isolate()->catch_entry_moves_cache()->Clear();
     EndOldSpaceGC();
   }
 }
diff --git a/runtime/vm/isolate.cc b/runtime/vm/isolate.cc
index af54df8..aa4ae11 100644
--- a/runtime/vm/isolate.cc
+++ b/runtime/vm/isolate.cc
@@ -941,7 +941,7 @@
       spawn_count_monitor_(new Monitor()),
       spawn_count_(0),
       handler_info_cache_(),
-      catch_entry_state_cache_(),
+      catch_entry_moves_cache_(),
       embedder_entry_points_(NULL),
       obfuscation_map_(NULL) {
   FlagsCopyFrom(api_flags);
diff --git a/runtime/vm/isolate.h b/runtime/vm/isolate.h
index b40651f..07e6197 100644
--- a/runtime/vm/isolate.h
+++ b/runtime/vm/isolate.h
@@ -130,7 +130,7 @@
 // Fixed cache for exception handler lookup.
 typedef FixedCache<intptr_t, ExceptionHandlerInfo, 16> HandlerInfoCache;
 // Fixed cache for catch entry state lookup.
-typedef FixedCache<intptr_t, CatchEntryState, 16> CatchEntryStateCache;
+typedef FixedCache<intptr_t, CatchEntryMovesRefPtr, 16> CatchEntryMovesCache;
 
 // List of Isolate flags with corresponding members of Dart_IsolateFlags and
 // corresponding global command line flags.
@@ -784,8 +784,8 @@
 
   HandlerInfoCache* handler_info_cache() { return &handler_info_cache_; }
 
-  CatchEntryStateCache* catch_entry_state_cache() {
-    return &catch_entry_state_cache_;
+  CatchEntryMovesCache* catch_entry_moves_cache() {
+    return &catch_entry_moves_cache_;
   }
 
   void MaybeIncreaseReloadEveryNStackOverflowChecks();
@@ -1017,7 +1017,7 @@
   intptr_t spawn_count_;
 
   HandlerInfoCache handler_info_cache_;
-  CatchEntryStateCache catch_entry_state_cache_;
+  CatchEntryMovesCache catch_entry_moves_cache_;
 
   Dart_QualifiedFunctionName* embedder_entry_points_;
   const char** obfuscation_map_;
diff --git a/runtime/vm/object.cc b/runtime/vm/object.cc
index c8149f6..434de7b 100644
--- a/runtime/vm/object.cc
+++ b/runtime/vm/object.cc
@@ -15398,8 +15398,8 @@
   StorePointer(&raw_ptr()->catch_entry_.variables_, smi.raw());
 }
 #else
-void Code::set_catch_entry_state_maps(const TypedData& maps) const {
-  StorePointer(&raw_ptr()->catch_entry_.catch_entry_state_maps_, maps.raw());
+void Code::set_catch_entry_moves_maps(const TypedData& maps) const {
+  StorePointer(&raw_ptr()->catch_entry_.catch_entry_moves_maps_, maps.raw());
 }
 #endif
 
diff --git a/runtime/vm/object.h b/runtime/vm/object.h
index 6613544..11ea398 100644
--- a/runtime/vm/object.h
+++ b/runtime/vm/object.h
@@ -5045,10 +5045,10 @@
   RawSmi* variables() const { return raw_ptr()->catch_entry_.variables_; }
   void set_variables(const Smi& smi) const;
 #else
-  RawTypedData* catch_entry_state_maps() const {
-    return raw_ptr()->catch_entry_.catch_entry_state_maps_;
+  RawTypedData* catch_entry_moves_maps() const {
+    return raw_ptr()->catch_entry_.catch_entry_moves_maps_;
   }
-  void set_catch_entry_state_maps(const TypedData& maps) const;
+  void set_catch_entry_moves_maps(const TypedData& maps) const;
 #endif
 
   RawArray* stackmaps() const { return raw_ptr()->stackmaps_; }
diff --git a/runtime/vm/program_visitor.cc b/runtime/vm/program_visitor.cc
index a6400e8..2fe8841 100644
--- a/runtime/vm/program_visitor.cc
+++ b/runtime/vm/program_visitor.cc
@@ -380,51 +380,51 @@
 #endif  // !defined(DART_PRECOMPILED_RUNTIME)
 
 #if defined(DART_PRECOMPILER)
-void ProgramVisitor::DedupCatchEntryStateMaps() {
+void ProgramVisitor::DedupCatchEntryMovesMaps() {
   if (!FLAG_precompiled_mode) {
     return;
   }
-  class DedupCatchEntryStateMapsVisitor : public FunctionVisitor {
+  class DedupCatchEntryMovesMapsVisitor : public FunctionVisitor {
    public:
-    explicit DedupCatchEntryStateMapsVisitor(Zone* zone)
+    explicit DedupCatchEntryMovesMapsVisitor(Zone* zone)
         : zone_(zone),
-          canonical_catch_entry_state_maps_(),
+          canonical_catch_entry_moves_maps_(),
           code_(Code::Handle(zone)),
-          catch_entry_state_maps_(TypedData::Handle(zone)) {}
+          catch_entry_moves_maps_(TypedData::Handle(zone)) {}
 
     void Visit(const Function& function) {
       if (!function.HasCode()) {
         return;
       }
       code_ = function.CurrentCode();
-      catch_entry_state_maps_ = code_.catch_entry_state_maps();
-      catch_entry_state_maps_ =
-          DedupCatchEntryStateMaps(catch_entry_state_maps_);
-      code_.set_catch_entry_state_maps(catch_entry_state_maps_);
+      catch_entry_moves_maps_ = code_.catch_entry_moves_maps();
+      catch_entry_moves_maps_ =
+          DedupCatchEntryMovesMaps(catch_entry_moves_maps_);
+      code_.set_catch_entry_moves_maps(catch_entry_moves_maps_);
     }
 
-    RawTypedData* DedupCatchEntryStateMaps(
-        const TypedData& catch_entry_state_maps) {
-      const TypedData* canonical_catch_entry_state_maps =
-          canonical_catch_entry_state_maps_.LookupValue(
-              &catch_entry_state_maps);
-      if (canonical_catch_entry_state_maps == NULL) {
-        canonical_catch_entry_state_maps_.Insert(
-            &TypedData::ZoneHandle(zone_, catch_entry_state_maps.raw()));
-        return catch_entry_state_maps.raw();
+    RawTypedData* DedupCatchEntryMovesMaps(
+        const TypedData& catch_entry_moves_maps) {
+      const TypedData* canonical_catch_entry_moves_maps =
+          canonical_catch_entry_moves_maps_.LookupValue(
+              &catch_entry_moves_maps);
+      if (canonical_catch_entry_moves_maps == NULL) {
+        canonical_catch_entry_moves_maps_.Insert(
+            &TypedData::ZoneHandle(zone_, catch_entry_moves_maps.raw()));
+        return catch_entry_moves_maps.raw();
       } else {
-        return canonical_catch_entry_state_maps->raw();
+        return canonical_catch_entry_moves_maps->raw();
       }
     }
 
    private:
     Zone* zone_;
-    TypedDataSet canonical_catch_entry_state_maps_;
+    TypedDataSet canonical_catch_entry_moves_maps_;
     Code& code_;
-    TypedData& catch_entry_state_maps_;
+    TypedData& catch_entry_moves_maps_;
   };
 
-  DedupCatchEntryStateMapsVisitor visitor(Thread::Current()->zone());
+  DedupCatchEntryMovesMapsVisitor visitor(Thread::Current()->zone());
   ProgramVisitor::VisitFunctions(&visitor);
 }
 #endif  // defined(DART_PRECOMPILER)
@@ -724,7 +724,7 @@
   DedupPcDescriptors();
   NOT_IN_PRECOMPILED(DedupDeoptEntries());
 #if defined(DART_PRECOMPILER)
-  DedupCatchEntryStateMaps();
+  DedupCatchEntryMovesMaps();
 #endif
   DedupCodeSourceMaps();
   DedupLists();
diff --git a/runtime/vm/program_visitor.h b/runtime/vm/program_visitor.h
index 4125f71..f2b255a 100644
--- a/runtime/vm/program_visitor.h
+++ b/runtime/vm/program_visitor.h
@@ -35,7 +35,7 @@
   static void DedupPcDescriptors();
   NOT_IN_PRECOMPILED(static void DedupDeoptEntries());
 #if defined(DART_PRECOMPILER)
-  static void DedupCatchEntryStateMaps();
+  static void DedupCatchEntryMovesMaps();
 #endif
   static void DedupCodeSourceMaps();
   static void DedupLists();
diff --git a/runtime/vm/raw_object.h b/runtime/vm/raw_object.h
index 57950fa..f8dacf3 100644
--- a/runtime/vm/raw_object.h
+++ b/runtime/vm/raw_object.h
@@ -1391,7 +1391,7 @@
   RawExceptionHandlers* exception_handlers_;
   RawPcDescriptors* pc_descriptors_;
   union {
-    RawTypedData* catch_entry_state_maps_;
+    RawTypedData* catch_entry_moves_maps_;
     RawSmi* variables_;
   } catch_entry_;
   RawArray* stackmaps_;