Auto merge of #45905 - alexcrichton:add-wasm-target, r=aturon

std: Add a new wasm32-unknown-unknown target

This commit adds a new target to the compiler: wasm32-unknown-unknown. This target is a reimagining of what it looks like to generate WebAssembly code from Rust. Instead of using Emscripten, which can bring a weighty runtime with it, this target uses only the LLVM backend for WebAssembly plus a "custom linker" for now, which will hopefully one day be direct calls to lld.

Notable features of this target include:

* There is zero runtime footprint. The target assumes nothing exists other than the wasm32 instruction set.
* There is zero toolchain footprint beyond adding the target. No external linker needs to be installed; rustc contains everything.
* Very small wasm modules can be generated directly from Rust code using this target.
* Most of the standard library is stubbed out to return an error, but anything related to allocation works (e.g. `HashMap`, `Vec`, etc.).
* Naturally, any `#[no_std]` crate should be 100% compatible with this new target.
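
The allocation point above can be sketched as follows (a hypothetical example: the `sum_to` function is illustrative, and `fn main` is included only so the snippet also runs natively; a real wasm module would typically be built as a library without it):

```rust
// Allocation-backed types like `Vec` work on wasm32-unknown-unknown,
// while unsupported std functionality (files, networking, etc.) is
// stubbed out to return errors instead.
#[no_mangle]
pub extern "C" fn sum_to(n: u32) -> u32 {
    // `Vec` allocates, which this target supports out of the box.
    let values: Vec<u32> = (1..=n).collect();
    values.iter().sum()
}

fn main() {
    println!("{}", sum_to(4)); // prints "10"
}
```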

This target is currently somewhat janky due to how linking works. The "linking" is currently unconditional whole-program LTO (i.e. LLVM is effectively being used as the linker). Naturally that means compiling programs is pretty slow! Eventually, though, this target should have a real linker.

This target is also intended to be quite experimental. I'm hoping that this can act as a catalyst for further experimentation in Rust with WebAssembly. Breaking changes are very likely to land for this target, so it's not recommended to rely on it in any critical capacity yet. We'll let you know when it's "production ready".

### Building yourself

First you'll need to configure the build to enable this target along with LLVM's experimental WebAssembly backend:

```
$ ./configure --target=wasm32-unknown-unknown --set llvm.experimental-targets=WebAssembly
```

Next you'll want to remove any previously compiled LLVM as it needs to be rebuilt with WebAssembly support. You can do that with:

```
$ rm -rf build
```

And then you're good to go! A `./x.py build` should give you a rustc with the appropriate libstd target.
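
With the freshly built compiler, a small module can then be compiled directly, e.g. `rustc --target=wasm32-unknown-unknown -O add.rs`. A minimal sketch of such a file (the `add` function and the filename are illustrative; `fn main` is included only so the snippet also compiles and runs natively):

```rust
// On the wasm target, this produces a wasm module exporting `add`
// with no runtime beyond the wasm32 instruction set itself.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    println!("{}", add(2, 3)); // prints "5"
}
```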

### Test support

Testing-wise this target is looking pretty good but isn't complete. I've got almost the entire `run-pass` test suite working with this target (lots of tests ignored, but many passing as well). The `core` test suite [still needs LLVM bugs fixed](https://reviews.llvm.org/D39866) before it works, which will take some time. Relatively simple programs all seem to work, though!

In general I've only tested this with a local fork that makes use of LLVM 5 rather than our current LLVM 4 on master. The LLVM 4 WebAssembly backend AFAIK isn't broken per se but is likely missing bug fixes available on LLVM 5. I'm hoping though that we can decouple the LLVM 5 upgrade and adding this wasm target!

### But the modules generated are huge!

It's worth noting that you may not immediately see the "smallest possible wasm module" for the input you feed to rustc. For various reasons it's very difficult to get rid of the final "bloat" in vanilla rustc (again, a real linker should fix all this). For now what you'll have to do is:

```
$ cargo install --git https://github.com/alexcrichton/wasm-gc
$ wasm-gc foo.wasm bar.wasm
```

And then `bar.wasm` should be the smallest we can get it!

---

In any case for now I'd love feedback on this, particularly on the various integration points if you've got better ideas of how to approach them!
diff --git a/.travis.yml b/.travis.yml
index 62336a7..5ff3a1c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -254,11 +254,11 @@
   # Random attempt at debugging currently. Just poking around in here to see if
   # anything shows up.
   - ls -lat $HOME/Library/Logs/DiagnosticReports/
-  - find $HOME/Library/Logs/DiagnosticReports/ ! \(
-      -name '*.stage2-*.crash'
-      -name 'com.apple.CoreSimulator.CoreSimulatorService-*.crash'
-    \)
-      -exec echo -e travis_fold":start:crashlog\n\033[31;1m" {} "\033[0m" \;
+  - find $HOME/Library/Logs/DiagnosticReports
+      -type f
+      -not -name '*.stage2-*.crash'
+      -not -name 'com.apple.CoreSimulator.CoreSimulatorService-*.crash'
+      -exec printf travis_fold":start:crashlog\n\033[31;1m%s\033[0m\n" {} \;
       -exec head -750 {} \;
       -exec echo travis_fold":"end:crashlog \;
 
diff --git a/RELEASES.md b/RELEASES.md
index 194745d..7a3b097 100644
--- a/RELEASES.md
+++ b/RELEASES.md
@@ -1,3 +1,91 @@
+Version 1.22.0 (2017-11-23)
+==========================
+
+Language
+--------
+- [`non_snake_case` lint now allows extern no-mangle functions][44966]
+- [Now accepts underscores in unicode escapes][43716]
+- [`#![feature(const_fn)]` is no longer required for
+  calling const functions.][43017] It's still required for creating
+  constant functions.
+- [`T op= &T` now works for numeric types.][44287] eg. `let mut x = 2; x += &8;`
+- [types that impl `Drop` are now allowed in `const` and `static` types][44456]
+
+Compiler
+--------
+- [rustc now defaults to having 16 codegen units at debug on supported platforms.][45064]
+- [rustc will no longer inline in codegen units when compiling for debug][45075]
+  This should decrease compile times for debug builds.
+- [strict memory alignment now enabled on ARMv6][45094]
+- [Remove support for the PNaCl target `le32-unknown-nacl`][45041]
+
+Libraries
+---------
+- [Allow atomic operations up to 32 bits
+  on `armv5te_unknown_linux_gnueabi`][44978]
+- [`Box<Error>` now impls `From<Cow<str>>`][44466]
+- [`std::mem::Discriminant` is now guaranteed to be `Send + Sync`][45095]
+- [`fs::copy` now returns the length of the main stream on NTFS.][44895]
+- [Properly detect overflow in `Instant += Duration`.][44220]
+- [impl `Hasher` for `{&mut Hasher, Box<Hasher>}`][44015]
+- [impl `fmt::Debug` for `SplitWhitespace`.][44303]
+- [`Option<T>` now impls `Try`][42526]. This allows for using `?` with `Option` types.
+
+Stabilized APIs
+---------------
+
+Cargo
+-----
+- [Cargo will now build multi file examples in subdirectories of the `examples`
+  folder that have a `main.rs` file.][cargo/4496]
+- [Changed `[root]` to `[package]` in `Cargo.lock`][cargo/4571] Packages with
+  the old format will continue to work and can be updated with `cargo update`.
+- [Now supports vendoring git repositories][cargo/3992]
+
+Misc
+----
+- [`libbacktrace` is now available on Apple platforms.][44251]
+- [Stabilised the `compile_fail` attribute for code fences.][43949] This now
+  lets you specify that a given code example will fail to compile.
+
+Compatibility Notes
+-------------------
+- [The minimum Android version that rustc can build for has been bumped
+  to `4.0` from `2.3`][45656]
+- [Allowing `T op= &T` for numeric types has broken some type
+  inference cases][45480]
+
+
+[42526]: https://github.com/rust-lang/rust/pull/42526
+[43017]: https://github.com/rust-lang/rust/pull/43017
+[43716]: https://github.com/rust-lang/rust/pull/43716
+[43949]: https://github.com/rust-lang/rust/pull/43949
+[44015]: https://github.com/rust-lang/rust/pull/44015
+[44220]: https://github.com/rust-lang/rust/pull/44220
+[44251]: https://github.com/rust-lang/rust/pull/44251
+[44287]: https://github.com/rust-lang/rust/pull/44287
+[44303]: https://github.com/rust-lang/rust/pull/44303
+[44456]: https://github.com/rust-lang/rust/pull/44456
+[44466]: https://github.com/rust-lang/rust/pull/44466
+[44895]: https://github.com/rust-lang/rust/pull/44895
+[44966]: https://github.com/rust-lang/rust/pull/44966
+[44978]: https://github.com/rust-lang/rust/pull/44978
+[45041]: https://github.com/rust-lang/rust/pull/45041
+[45064]: https://github.com/rust-lang/rust/pull/45064
+[45075]: https://github.com/rust-lang/rust/pull/45075
+[45094]: https://github.com/rust-lang/rust/pull/45094
+[45095]: https://github.com/rust-lang/rust/pull/45095
+[45480]: https://github.com/rust-lang/rust/issues/45480
+[45656]: https://github.com/rust-lang/rust/pull/45656
+[cargo/3992]: https://github.com/rust-lang/cargo/pull/3992
+[cargo/4496]: https://github.com/rust-lang/cargo/pull/4496
+[cargo/4571]: https://github.com/rust-lang/cargo/pull/4571
+
+
+
+
+
+
 Version 1.21.0 (2017-10-12)
 ==========================
 
diff --git a/src/Cargo.lock b/src/Cargo.lock
index 2769806..d0d6271 100644
--- a/src/Cargo.lock
+++ b/src/Cargo.lock
@@ -421,7 +421,7 @@
  "advapi32-sys 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
  "commoncrypto 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
  "hex 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
- "openssl 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)",
+ "openssl 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)",
  "winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
@@ -457,7 +457,7 @@
  "curl-sys 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
  "libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
  "openssl-probe 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
- "openssl-sys 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)",
+ "openssl-sys 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)",
  "socket2 0.2.4 (registry+https://github.com/rust-lang/crates.io-index)",
  "winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
@@ -470,7 +470,7 @@
  "cc 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
  "libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
  "libz-sys 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)",
- "openssl-sys 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)",
+ "openssl-sys 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)",
  "pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
  "vcpkg 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
  "winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -696,7 +696,7 @@
  "libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
  "libgit2-sys 0.6.16 (registry+https://github.com/rust-lang/crates.io-index)",
  "openssl-probe 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
- "openssl-sys 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)",
+ "openssl-sys 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)",
  "url 1.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
@@ -949,7 +949,7 @@
  "libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
  "libssh2-sys 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
  "libz-sys 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)",
- "openssl-sys 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)",
+ "openssl-sys 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)",
  "pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
@@ -961,7 +961,7 @@
  "cmake 0.1.26 (registry+https://github.com/rust-lang/crates.io-index)",
  "libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
  "libz-sys 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)",
- "openssl-sys 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)",
+ "openssl-sys 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)",
  "pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
@@ -1192,14 +1192,14 @@
 
 [[package]]
 name = "openssl"
-version = "0.9.20"
+version = "0.9.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
  "bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)",
  "foreign-types 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
  "lazy_static 0.2.9 (registry+https://github.com/rust-lang/crates.io-index)",
  "libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
- "openssl-sys 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)",
+ "openssl-sys 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
 [[package]]
@@ -1209,7 +1209,7 @@
 
 [[package]]
 name = "openssl-sys"
-version = "0.9.20"
+version = "0.9.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
  "cc 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -2707,9 +2707,9 @@
 "checksum num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "99843c856d68d8b4313b03a17e33c4bb42ae8f6610ea81b28abe076ac721b9b0"
 "checksum num_cpus 1.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "514f0d73e64be53ff320680ca671b64fe3fb91da01e1ae2ddc99eb51d453b20d"
 "checksum open 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "c281318d992e4432cfa799969467003d05921582a7489a8325e37f8a450d5113"
-"checksum openssl 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)" = "8bf434ff6117485dc16478d77a4f5c84eccc9c3645c4da8323b287ad6a15a638"
+"checksum openssl 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)" = "2225c305d8f57001a0d34263e046794aa251695f20773102fbbfeb1e7b189955"
 "checksum openssl-probe 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "d98df0270d404ccd3c050a41d579c52d1db15375168bb3471e04ec0f5f378daf"
-"checksum openssl-sys 0.9.20 (registry+https://github.com/rust-lang/crates.io-index)" = "0ad395f1cee51b64a8d07cc8063498dc7554db62d5f3ca87a67f4eed2791d0c8"
+"checksum openssl-sys 0.9.21 (registry+https://github.com/rust-lang/crates.io-index)" = "92867746af30eea7a89feade385f7f5366776f1c52ec6f0de81360373fa88363"
 "checksum os_pipe 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "998bfbb3042e715190fe2a41abfa047d7e8cb81374d2977d7f100eacd8619cb1"
 "checksum owning_ref 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "cdf84f41639e037b484f93433aa3897863b561ed65c6e59c7073d7c561710f37"
 "checksum percent-encoding 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "de154f638187706bde41d9b4738748933d64e6b37bdbffc0b47a97d16a6ae356"
diff --git a/src/bootstrap/builder.rs b/src/bootstrap/builder.rs
index 4e2898b..b202df7 100644
--- a/src/bootstrap/builder.rs
+++ b/src/bootstrap/builder.rs
@@ -264,7 +264,7 @@
                 dist::Rls, dist::Rustfmt, dist::Extended, dist::HashSign,
                 dist::DontDistWithMiriEnabled),
             Kind::Install => describe!(install::Docs, install::Std, install::Cargo, install::Rls,
-                install::Analysis, install::Src, install::Rustc),
+                install::Rustfmt, install::Analysis, install::Src, install::Rustc),
         }
     }
 
diff --git a/src/bootstrap/install.rs b/src/bootstrap/install.rs
index c150459..743f32e 100644
--- a/src/bootstrap/install.rs
+++ b/src/bootstrap/install.rs
@@ -39,6 +39,10 @@
     install_sh(builder, "rls", "rls", stage, Some(host));
 }
 
+pub fn install_rustfmt(builder: &Builder, stage: u32, host: Interned<String>) {
+    install_sh(builder, "rustfmt", "rustfmt", stage, Some(host));
+}
+
 pub fn install_analysis(builder: &Builder, stage: u32, host: Interned<String>) {
     install_sh(builder, "analysis", "rust-analysis", stage, Some(host));
 }
@@ -192,6 +196,13 @@
             println!("skipping Install RLS stage{} ({})", self.stage, self.target);
         }
     };
+    Rustfmt, "rustfmt", _config.extended, only_hosts: true, {
+        if builder.ensure(dist::Rustfmt { stage: self.stage, target: self.target }).is_some() {
+            install_rustfmt(builder, self.stage, self.target);
+        } else {
+            println!("skipping Install Rustfmt stage{} ({})", self.stage, self.target);
+        }
+    };
     Analysis, "analysis", _config.extended, only_hosts: false, {
         builder.ensure(dist::Analysis {
             compiler: builder.compiler(self.stage, self.host),
diff --git a/src/ci/docker/dist-i586-gnu-i686-musl/Dockerfile b/src/ci/docker/dist-i586-gnu-i686-musl/Dockerfile
index efde3ff..2fb1219 100644
--- a/src/ci/docker/dist-i586-gnu-i686-musl/Dockerfile
+++ b/src/ci/docker/dist-i586-gnu-i686-musl/Dockerfile
@@ -34,6 +34,7 @@
 #
 # See: https://github.com/rust-lang/rust/issues/34978
 ENV CFLAGS_i686_unknown_linux_musl=-Wa,-mrelax-relocations=no
+ENV CFLAGS_i586_unknown_linux_gnu=-Wa,-mrelax-relocations=no
 
 ENV SCRIPT \
       python2.7 ../x.py test \
diff --git a/src/doc/rustdoc/src/documentation-tests.md b/src/doc/rustdoc/src/documentation-tests.md
index eb3e6a9..9c6b86d 100644
--- a/src/doc/rustdoc/src/documentation-tests.md
+++ b/src/doc/rustdoc/src/documentation-tests.md
@@ -38,17 +38,19 @@
 adds friction. So `rustdoc` processes your examples slightly before
 running them. Here's the full algorithm rustdoc uses to preprocess examples:
 
-1. Any leading `#![foo]` attributes are left intact as crate attributes.
-2. Some common `allow` attributes are inserted, including
+1. Some common `allow` attributes are inserted, including
    `unused_variables`, `unused_assignments`, `unused_mut`,
    `unused_attributes`, and `dead_code`. Small examples often trigger
    these lints.
-3. If the example does not contain `extern crate`, then `extern crate
+2. Any attributes specified with `#![doc(test(attr(...)))]` are added.
+3. Any leading `#![foo]` attributes are left intact as crate attributes.
+4. If the example does not contain `extern crate`, and
+   `#![doc(test(no_crate_inject))]` was not specified, then `extern crate
    <mycrate>;` is inserted (note the lack of `#[macro_use]`).
-4. Finally, if the example does not contain `fn main`, the remainder of the
+5. Finally, if the example does not contain `fn main`, the remainder of the
    text is wrapped in `fn main() { your_code }`.
 
-For more about that caveat in rule 3, see "Documeting Macros" below.
+For more about that caveat in rule 4, see "Documenting Macros" below.
 
 ## Hiding portions of the example
 
@@ -261,4 +263,4 @@
 The `no_run` attribute will compile your code, but not run it. This is
 important for examples such as "Here's how to retrieve a web page,"
 which you would want to ensure compiles, but might be run in a test
-environment that has no network access.
\ No newline at end of file
+environment that has no network access.
diff --git a/src/doc/rustdoc/src/the-doc-attribute.md b/src/doc/rustdoc/src/the-doc-attribute.md
index 978d765..aadd72d 100644
--- a/src/doc/rustdoc/src/the-doc-attribute.md
+++ b/src/doc/rustdoc/src/the-doc-attribute.md
@@ -103,6 +103,26 @@
 
 it will not.
 
+### `test(no_crate_inject)`
+
+By default, `rustdoc` will automatically add a line with `extern crate my_crate;` into each doctest.
+But if you include this:
+
+```rust,ignore
+#![doc(test(no_crate_inject))]
+```
+
+it will not.
+
+### `test(attr(...))`
+
+This form of the `doc` attribute allows you to add arbitrary attributes to all your doctests. For
+example, if you want your doctests to fail if they produce any warnings, you could add this:
+
+```rust,ignore
+#![doc(test(attr(deny(warnings))))]
+```
+
 ## At the item level
 
 These forms of the `#[doc]` attribute are used on individual items, to control how
diff --git a/src/etc/indenter b/src/etc/indenter
index b3eed6a..21bfc44 100755
--- a/src/etc/indenter
+++ b/src/etc/indenter
@@ -13,7 +13,7 @@
     if more_re.match(line):
         indent += 1
 
-    print "%03d %s%s" % (indent, " " * indent, line.strip())
+    print("%03d %s%s" % (indent, " " * indent, line.strip()))
 
     if less_re.match(line):
         indent -= 1
diff --git a/src/etc/sugarise-doc-comments.py b/src/etc/sugarise-doc-comments.py
index 62870f3..ac2223f 100755
--- a/src/etc/sugarise-doc-comments.py
+++ b/src/etc/sugarise-doc-comments.py
@@ -50,11 +50,11 @@
         lns = lns[:-1]
 
     # remove leading horizontal whitespace
-    n = sys.maxint
+    n = sys.maxsize
     for ln in lns:
         if ln.strip():
             n = min(n, len(re.search('^\s*', ln).group()))
-    if n != sys.maxint:
+    if n != sys.maxsize:
         lns = [ln[n:] for ln in lns]
 
     # strip trailing whitespace
diff --git a/src/etc/test-float-parse/runtests.py b/src/etc/test-float-parse/runtests.py
index bc14187..75c92b9 100644
--- a/src/etc/test-float-parse/runtests.py
+++ b/src/etc/test-float-parse/runtests.py
@@ -97,11 +97,15 @@
 from subprocess import Popen, check_call, PIPE
 from glob import glob
 import multiprocessing
-import Queue
 import threading
 import ctypes
 import binascii
 
+try:  # Python 3
+    import queue as Queue
+except ImportError:  # Python 2
+    import Queue
+
 NUM_WORKERS = 2
 UPDATE_EVERY_N = 50000
 INF = namedtuple('INF', '')()
diff --git a/src/liballoc/borrow.rs b/src/liballoc/borrow.rs
index e8aff09..acae0da 100644
--- a/src/liballoc/borrow.rs
+++ b/src/liballoc/borrow.rs
@@ -232,7 +232,7 @@
     ///
     /// assert_eq!(
     ///   cow.into_owned(),
-    ///   Cow::Owned(String::from(s))
+    ///   String::from(s)
     /// );
     /// ```
     ///
@@ -246,7 +246,7 @@
     ///
     /// assert_eq!(
     ///   cow.into_owned(),
-    ///   Cow::Owned(String::from(s))
+    ///   String::from(s)
     /// );
     /// ```
     #[stable(feature = "rust1", since = "1.0.0")]
diff --git a/src/liballoc/boxed.rs b/src/liballoc/boxed.rs
index 79292d3..2226cee 100644
--- a/src/liballoc/boxed.rs
+++ b/src/liballoc/boxed.rs
@@ -151,7 +151,7 @@
 unsafe fn finalize<T>(b: IntermediateBox<T>) -> Box<T> {
     let p = b.ptr as *mut T;
     mem::forget(b);
-    mem::transmute(p)
+    Box::from_raw(p)
 }
 
 fn make_place<T>() -> IntermediateBox<T> {
@@ -300,7 +300,10 @@
                issue = "27730")]
     #[inline]
     pub unsafe fn from_unique(u: Unique<T>) -> Self {
-        mem::transmute(u)
+        #[cfg(stage0)]
+        return mem::transmute(u);
+        #[cfg(not(stage0))]
+        return Box(u);
     }
 
     /// Consumes the `Box`, returning the wrapped raw pointer.
@@ -362,7 +365,14 @@
                issue = "27730")]
     #[inline]
     pub fn into_unique(b: Box<T>) -> Unique<T> {
-        unsafe { mem::transmute(b) }
+        #[cfg(stage0)]
+        return unsafe { mem::transmute(b) };
+        #[cfg(not(stage0))]
+        return {
+            let unique = b.0;
+            mem::forget(b);
+            unique
+        };
     }
 }
 
@@ -627,7 +637,7 @@
     pub fn downcast<T: Any>(self) -> Result<Box<T>, Box<Any + Send>> {
         <Box<Any>>::downcast(self).map_err(|s| unsafe {
             // reapply the Send marker
-            mem::transmute::<Box<Any>, Box<Any + Send>>(s)
+            Box::from_raw(Box::into_raw(s) as *mut (Any + Send))
         })
     }
 }
diff --git a/src/libcore/benches/iter.rs b/src/libcore/benches/iter.rs
index 1f16f5b..b284d85 100644
--- a/src/libcore/benches/iter.rs
+++ b/src/libcore/benches/iter.rs
@@ -275,3 +275,9 @@
     bench_skip_while_chain_ref_sum,
     (0i64..1000000).chain(0..1000000).skip_while(|&x| x < 1000)
 }
+
+bench_sums! {
+    bench_take_while_chain_sum,
+    bench_take_while_chain_ref_sum,
+    (0i64..1000000).chain(1000000..).take_while(|&x| x < 1111111)
+}
diff --git a/src/libcore/cell.rs b/src/libcore/cell.rs
index d222bf6..d02576a 100644
--- a/src/libcore/cell.rs
+++ b/src/libcore/cell.rs
@@ -579,32 +579,62 @@
     ///
     /// This function corresponds to [`std::mem::replace`](../mem/fn.replace.html).
     ///
+    /// # Panics
+    ///
+    /// Panics if the value is currently borrowed.
+    ///
     /// # Examples
     ///
     /// ```
     /// #![feature(refcell_replace_swap)]
     /// use std::cell::RefCell;
-    /// let c = RefCell::new(5);
-    /// let u = c.replace(6);
-    /// assert_eq!(u, 5);
-    /// assert_eq!(c, RefCell::new(6));
+    /// let cell = RefCell::new(5);
+    /// let old_value = cell.replace(6);
+    /// assert_eq!(old_value, 5);
+    /// assert_eq!(cell, RefCell::new(6));
     /// ```
-    ///
-    /// # Panics
-    ///
-    /// This function will panic if the `RefCell` has any outstanding borrows,
-    /// whether or not they are full mutable borrows.
     #[inline]
     #[unstable(feature = "refcell_replace_swap", issue="43570")]
     pub fn replace(&self, t: T) -> T {
         mem::replace(&mut *self.borrow_mut(), t)
     }
 
+    /// Replaces the wrapped value with a new one computed from `f`, returning
+    /// the old value, without deinitializing either one.
+    ///
+    /// This function corresponds to [`std::mem::replace`](../mem/fn.replace.html).
+    ///
+    /// # Panics
+    ///
+    /// Panics if the value is currently borrowed.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// #![feature(refcell_replace_swap)]
+    /// use std::cell::RefCell;
+    /// let cell = RefCell::new(5);
+    /// let old_value = cell.replace_with(|&mut old| old + 1);
+    /// assert_eq!(old_value, 5);
+    /// assert_eq!(cell, RefCell::new(6));
+    /// ```
+    #[inline]
+    #[unstable(feature = "refcell_replace_swap", issue="43570")]
+    pub fn replace_with<F: FnOnce(&mut T) -> T>(&self, f: F) -> T {
+        let mut_borrow = &mut *self.borrow_mut();
+        let replacement = f(mut_borrow);
+        mem::replace(mut_borrow, replacement)
+    }
+
     /// Swaps the wrapped value of `self` with the wrapped value of `other`,
     /// without deinitializing either one.
     ///
     /// This function corresponds to [`std::mem::swap`](../mem/fn.swap.html).
     ///
+    /// # Panics
+    ///
+    /// Panics if the value in either `RefCell` is currently borrowed.
+    ///
     /// # Examples
     ///
     /// ```
@@ -616,11 +646,6 @@
     /// assert_eq!(c, RefCell::new(6));
     /// assert_eq!(d, RefCell::new(5));
     /// ```
-    ///
-    /// # Panics
-    ///
-    /// This function will panic if either `RefCell` has any outstanding borrows,
-    /// whether or not they are full mutable borrows.
     #[inline]
     #[unstable(feature = "refcell_replace_swap", issue="43570")]
     pub fn swap(&self, other: &Self) {
diff --git a/src/libcore/iter/iterator.rs b/src/libcore/iter/iterator.rs
index 79767b3..4029838 100644
--- a/src/libcore/iter/iterator.rs
+++ b/src/libcore/iter/iterator.rs
@@ -9,7 +9,9 @@
 // except according to those terms.
 
 use cmp::Ordering;
+use ops::Try;
 
+use super::{AlwaysOk, LoopState};
 use super::{Chain, Cycle, Cloned, Enumerate, Filter, FilterMap, FlatMap, Fuse};
 use super::{Inspect, Map, Peekable, Scan, Skip, SkipWhile, StepBy, Take, TakeWhile, Rev};
 use super::{Zip, Sum, Product};
@@ -1337,6 +1339,78 @@
         (left, right)
     }
 
+    /// An iterator method that applies a function as long as it returns
+    /// successfully, producing a single, final value.
+    ///
+    /// `try_fold()` takes two arguments: an initial value, and a closure with
+    /// two arguments: an 'accumulator', and an element. The closure either
+    /// returns successfully, with the value that the accumulator should have
+    /// for the next iteration, or it returns failure, with an error value that
+    /// is propagated back to the caller immediately (short-circuiting).
+    ///
+    /// The initial value is the value the accumulator will have on the first
+    /// call.  If applying the closure succeeded against every element of the
+    /// iterator, `try_fold()` returns the final accumulator as success.
+    ///
+    /// Folding is useful whenever you have a collection of something, and want
+    /// to produce a single value from it.
+    ///
+    /// # Note to Implementors
+    ///
+    /// Most of the other (forward) methods have default implementations in
+    /// terms of this one, so try to implement this explicitly if it can
+    /// do something better than the default `for` loop implementation.
+    ///
+    /// In particular, try to have this call `try_fold()` on the internal parts
+    /// from which this iterator is composed.  If multiple calls are needed,
+    /// the `?` operator may be convenient for chaining the accumulator value along,
+    /// but beware any invariants that need to be upheld before those early
+    /// returns.  This is a `&mut self` method, so iteration needs to be
+    /// resumable after hitting an error here.
+    ///
+    /// # Examples
+    ///
+    /// Basic usage:
+    ///
+    /// ```
+    /// #![feature(iterator_try_fold)]
+    /// let a = [1, 2, 3];
+    ///
+    /// // the checked sum of all of the elements of the array
+    /// let sum = a.iter()
+    ///            .try_fold(0i8, |acc, &x| acc.checked_add(x));
+    ///
+    /// assert_eq!(sum, Some(6));
+    /// ```
+    ///
+    /// Short-circuiting:
+    ///
+    /// ```
+    /// #![feature(iterator_try_fold)]
+    /// let a = [10, 20, 30, 100, 40, 50];
+    /// let mut it = a.iter();
+    ///
+    /// // This sum overflows when adding the 100 element
+    /// let sum = it.try_fold(0i8, |acc, &x| acc.checked_add(x));
+    /// assert_eq!(sum, None);
+    ///
+    /// // Because it short-circuited, the remaining elements are still
+    /// // available through the iterator.
+    /// assert_eq!(it.len(), 2);
+    /// assert_eq!(it.next(), Some(&40));
+    /// ```
+    #[inline]
+    #[unstable(feature = "iterator_try_fold", issue = "45594")]
+    fn try_fold<B, F, R>(&mut self, init: B, mut f: F) -> R where
+        Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
+    {
+        let mut accum = init;
+        while let Some(x) = self.next() {
+            accum = f(accum, x)?;
+        }
+        Try::from_ok(accum)
+    }
+
     /// An iterator method that applies a function, producing a single, final value.
     ///
     /// `fold()` takes two arguments: an initial value, and a closure with two
@@ -1361,7 +1435,7 @@
     /// ```
     /// let a = [1, 2, 3];
     ///
-    /// // the sum of all of the elements of a
+    /// // the sum of all of the elements of the array
     /// let sum = a.iter()
     ///            .fold(0, |acc, &x| acc + x);
     ///
@@ -1403,14 +1477,10 @@
     /// ```
     #[inline]
     #[stable(feature = "rust1", since = "1.0.0")]
-    fn fold<B, F>(self, init: B, mut f: F) -> B where
+    fn fold<B, F>(mut self, init: B, mut f: F) -> B where
         Self: Sized, F: FnMut(B, Self::Item) -> B,
     {
-        let mut accum = init;
-        for x in self {
-            accum = f(accum, x);
-        }
-        accum
+        self.try_fold(init, move |acc, x| AlwaysOk(f(acc, x))).0
     }
 
     /// Tests if every element of the iterator matches a predicate.
@@ -1455,12 +1525,10 @@
     fn all<F>(&mut self, mut f: F) -> bool where
         Self: Sized, F: FnMut(Self::Item) -> bool
     {
-        for x in self {
-            if !f(x) {
-                return false;
-            }
-        }
-        true
+        self.try_fold((), move |(), x| {
+            if f(x) { LoopState::Continue(()) }
+            else { LoopState::Break(()) }
+        }) == LoopState::Continue(())
     }
 
     /// Tests if any element of the iterator matches a predicate.
@@ -1506,12 +1574,10 @@
         Self: Sized,
         F: FnMut(Self::Item) -> bool
     {
-        for x in self {
-            if f(x) {
-                return true;
-            }
-        }
-        false
+        self.try_fold((), move |(), x| {
+            if f(x) { LoopState::Break(()) }
+            else { LoopState::Continue(()) }
+        }) == LoopState::Break(())
     }
 
     /// Searches for an element of an iterator that satisfies a predicate.
@@ -1562,10 +1628,10 @@
         Self: Sized,
         P: FnMut(&Self::Item) -> bool,
     {
-        for x in self {
-            if predicate(&x) { return Some(x) }
-        }
-        None
+        self.try_fold((), move |(), x| {
+            if predicate(&x) { LoopState::Break(x) }
+            else { LoopState::Continue(()) }
+        }).break_value()
     }
 
     /// Searches for an element in an iterator, returning its index.
@@ -1623,18 +1689,17 @@
     ///
     /// ```
     #[inline]
+    #[rustc_inherit_overflow_checks]
     #[stable(feature = "rust1", since = "1.0.0")]
     fn position<P>(&mut self, mut predicate: P) -> Option<usize> where
         Self: Sized,
         P: FnMut(Self::Item) -> bool,
     {
-        // `enumerate` might overflow.
-        for (i, x) in self.enumerate() {
-            if predicate(x) {
-                return Some(i);
-            }
-        }
-        None
+        // The addition might panic on overflow
+        self.try_fold(0, move |i, x| {
+            if predicate(x) { LoopState::Break(i) }
+            else { LoopState::Continue(i + 1) }
+        }).break_value()
     }
 
     /// Searches for an element in an iterator from the right, returning its
@@ -1681,17 +1746,14 @@
         P: FnMut(Self::Item) -> bool,
         Self: Sized + ExactSizeIterator + DoubleEndedIterator
     {
-        let mut i = self.len();
-
-        while let Some(v) = self.next_back() {
-            // No need for an overflow check here, because `ExactSizeIterator`
-            // implies that the number of elements fits into a `usize`.
-            i -= 1;
-            if predicate(v) {
-                return Some(i);
-            }
-        }
-        None
+        // No need for an overflow check here, because `ExactSizeIterator`
+        // implies that the number of elements fits into a `usize`.
+        let n = self.len();
+        self.try_rfold(n, move |i, x| {
+            let i = i - 1;
+            if predicate(x) { LoopState::Break(i) }
+            else { LoopState::Continue(i) }
+        }).break_value()
     }
 
     /// Returns the maximum element of an iterator.
@@ -1922,10 +1984,10 @@
         let mut ts: FromA = Default::default();
         let mut us: FromB = Default::default();
 
-        for (t, u) in self {
+        self.for_each(|(t, u)| {
             ts.extend(Some(t));
             us.extend(Some(u));
-        }
+        });
 
         (ts, us)
     }
@@ -2300,17 +2362,17 @@
     // start with the first element as our selection. This avoids
     // having to use `Option`s inside the loop, translating to a
     // sizeable performance gain (6x in one case).
-    it.next().map(|mut sel| {
-        let mut sel_p = f_proj(&sel);
+    it.next().map(|first| {
+        let first_p = f_proj(&first);
 
-        for x in it {
+        it.fold((first_p, first), |(sel_p, sel), x| {
             let x_p = f_proj(&x);
             if f_cmp(&sel_p, &sel, &x_p, &x) {
-                sel = x;
-                sel_p = x_p;
+                (x_p, x)
+            } else {
+                (sel_p, sel)
             }
-        }
-        (sel_p, sel)
+        })
     })
 }
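An aside on the pattern used throughout the hunks above: `all`, `any`, `find`, and `position` all become thin wrappers over `try_fold`, with `LoopState::Break` carrying the early-exit value. On stable Rust the same shape can be sketched with `Result` standing in for `LoopState` (the helper name `all_via_try_fold` is illustrative, not part of this patch):

```rust
// Stand-alone sketch: `all` expressed through `try_fold`, using `Result`
// for short-circuiting the way `LoopState` does inside libcore.
fn all_via_try_fold<I, F>(iter: I, mut f: F) -> bool
where
    I: IntoIterator,
    F: FnMut(I::Item) -> bool,
{
    // Err(()) plays the role of LoopState::Break(()): it stops the fold early,
    // so elements after the first failure are never inspected.
    iter.into_iter()
        .try_fold((), |(), x| if f(x) { Ok(()) } else { Err(()) })
        .is_ok()
}

fn main() {
    assert!(all_via_try_fold(1..=5, |n| n > 0));
    assert!(!all_via_try_fold(1..=5, |n| n < 3));
}
```

The payoff is that every adapter only has to override `try_fold` once to make the whole family of searching methods fast.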
 
diff --git a/src/libcore/iter/mod.rs b/src/libcore/iter/mod.rs
index 8d2521b..e173f43 100644
--- a/src/libcore/iter/mod.rs
+++ b/src/libcore/iter/mod.rs
@@ -305,6 +305,7 @@
 use cmp;
 use fmt;
 use iter_private::TrustedRandomAccess;
+use ops::Try;
 use usize;
 
 #[stable(feature = "rust1", since = "1.0.0")]
@@ -336,6 +337,71 @@
 mod sources;
 mod traits;
 
+/// Transparent newtype used to implement `foo` methods in terms of `try_foo`.
+/// Important until #43278 is fixed; might be better as `Result<T, !>` later.
+struct AlwaysOk<T>(pub T);
+
+impl<T> Try for AlwaysOk<T> {
+    type Ok = T;
+    type Error = !;
+    #[inline]
+    fn into_result(self) -> Result<Self::Ok, Self::Error> { Ok(self.0) }
+    #[inline]
+    fn from_error(v: Self::Error) -> Self { v }
+    #[inline]
+    fn from_ok(v: Self::Ok) -> Self { AlwaysOk(v) }
+}
+
+/// Used to make `try_fold` closures more like normal loops.
+#[derive(PartialEq)]
+enum LoopState<C, B> {
+    Continue(C),
+    Break(B),
+}
+
+impl<C, B> Try for LoopState<C, B> {
+    type Ok = C;
+    type Error = B;
+    #[inline]
+    fn into_result(self) -> Result<Self::Ok, Self::Error> {
+        match self {
+            LoopState::Continue(y) => Ok(y),
+            LoopState::Break(x) => Err(x),
+        }
+    }
+    #[inline]
+    fn from_error(v: Self::Error) -> Self { LoopState::Break(v) }
+    #[inline]
+    fn from_ok(v: Self::Ok) -> Self { LoopState::Continue(v) }
+}
+
+impl<C, B> LoopState<C, B> {
+    #[inline]
+    fn break_value(self) -> Option<B> {
+        match self {
+            LoopState::Continue(..) => None,
+            LoopState::Break(x) => Some(x),
+        }
+    }
+}
+
+impl<R: Try> LoopState<R::Ok, R> {
+    #[inline]
+    fn from_try(r: R) -> Self {
+        match Try::into_result(r) {
+            Ok(v) => LoopState::Continue(v),
+            Err(v) => LoopState::Break(Try::from_error(v)),
+        }
+    }
+    #[inline]
+    fn into_try(self) -> R {
+        match self {
+            LoopState::Continue(v) => Try::from_ok(v),
+            LoopState::Break(v) => v,
+        }
+    }
+}
+
 /// A double-ended iterator with the direction inverted.
 ///
 /// This `struct` is created by the [`rev`] method on [`Iterator`]. See its
@@ -359,6 +425,12 @@
     #[inline]
     fn size_hint(&self) -> (usize, Option<usize>) { self.iter.size_hint() }
 
+    fn try_fold<B, F, R>(&mut self, init: B, f: F) -> R where
+        Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
+    {
+        self.iter.try_rfold(init, f)
+    }
+
     fn fold<Acc, F>(self, init: Acc, f: F) -> Acc
         where F: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -385,6 +457,12 @@
     #[inline]
     fn next_back(&mut self) -> Option<<I as Iterator>::Item> { self.iter.next() }
 
+    fn try_rfold<B, F, R>(&mut self, init: B, f: F) -> R where
+        Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
+    {
+        self.iter.try_fold(init, f)
+    }
+
     fn rfold<Acc, F>(self, init: Acc, f: F) -> Acc
         where F: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -447,6 +525,12 @@
         self.it.size_hint()
     }
 
+    fn try_fold<B, F, R>(&mut self, init: B, mut f: F) -> R where
+        Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
+    {
+        self.it.try_fold(init, move |acc, elt| f(acc, elt.clone()))
+    }
+
     fn fold<Acc, F>(self, init: Acc, mut f: F) -> Acc
         where F: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -462,6 +546,12 @@
         self.it.next_back().cloned()
     }
 
+    fn try_rfold<B, F, R>(&mut self, init: B, mut f: F) -> R where
+        Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
+    {
+        self.it.try_rfold(init, move |acc, elt| f(acc, elt.clone()))
+    }
+
     fn rfold<Acc, F>(self, init: Acc, mut f: F) -> Acc
         where F: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -683,6 +773,25 @@
         }
     }
 
+    fn try_fold<Acc, F, R>(&mut self, init: Acc, mut f: F) -> R where
+        Self: Sized, F: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let mut accum = init;
+        match self.state {
+            ChainState::Both | ChainState::Front => {
+                accum = self.a.try_fold(accum, &mut f)?;
+                if let ChainState::Both = self.state {
+                    self.state = ChainState::Back;
+                }
+            }
+            _ => { }
+        }
+        if let ChainState::Back = self.state {
+            accum = self.b.try_fold(accum, &mut f)?;
+        }
+        Try::from_ok(accum)
+    }
+
     fn fold<Acc, F>(self, init: Acc, mut f: F) -> Acc
         where F: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -792,6 +901,25 @@
         }
     }
 
+    fn try_rfold<Acc, F, R>(&mut self, init: Acc, mut f: F) -> R where
+        Self: Sized, F: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let mut accum = init;
+        match self.state {
+            ChainState::Both | ChainState::Back => {
+                accum = self.b.try_rfold(accum, &mut f)?;
+                if let ChainState::Both = self.state {
+                    self.state = ChainState::Front;
+                }
+            }
+            _ => { }
+        }
+        if let ChainState::Front = self.state {
+            accum = self.a.try_rfold(accum, &mut f)?;
+        }
+        Try::from_ok(accum)
+    }
+
     fn rfold<Acc, F>(self, init: Acc, mut f: F) -> Acc
         where F: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1128,6 +1256,13 @@
         self.iter.size_hint()
     }
 
+    fn try_fold<Acc, G, R>(&mut self, init: Acc, mut g: G) -> R where
+        Self: Sized, G: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let f = &mut self.f;
+        self.iter.try_fold(init, move |acc, elt| g(acc, f(elt)))
+    }
+
     fn fold<Acc, G>(self, init: Acc, mut g: G) -> Acc
         where G: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1145,6 +1280,13 @@
         self.iter.next_back().map(&mut self.f)
     }
 
+    fn try_rfold<Acc, G, R>(&mut self, init: Acc, mut g: G) -> R where
+        Self: Sized, G: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let f = &mut self.f;
+        self.iter.try_rfold(init, move |acc, elt| g(acc, f(elt)))
+    }
+
     fn rfold<Acc, G>(self, init: Acc, mut g: G) -> Acc
         where G: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1252,6 +1394,18 @@
     }
 
     #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let predicate = &mut self.predicate;
+        self.iter.try_fold(init, move |acc, item| if predicate(&item) {
+            fold(acc, item)
+        } else {
+            Try::from_ok(acc)
+        })
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1279,6 +1433,18 @@
     }
 
     #[inline]
+    fn try_rfold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let predicate = &mut self.predicate;
+        self.iter.try_rfold(init, move |acc, item| if predicate(&item) {
+            fold(acc, item)
+        } else {
+            Try::from_ok(acc)
+        })
+    }
+
+    #[inline]
     fn rfold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1342,6 +1508,17 @@
     }
 
     #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let f = &mut self.f;
+        self.iter.try_fold(init, move |acc, item| match f(item) {
+            Some(x) => fold(acc, x),
+            None => Try::from_ok(acc),
+        })
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1368,6 +1545,17 @@
     }
 
     #[inline]
+    fn try_rfold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let f = &mut self.f;
+        self.iter.try_rfold(init, move |acc, item| match f(item) {
+            Some(x) => fold(acc, x),
+            None => Try::from_ok(acc),
+        })
+    }
+
+    #[inline]
     fn rfold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1444,6 +1632,19 @@
 
     #[inline]
     #[rustc_inherit_overflow_checks]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let count = &mut self.count;
+        self.iter.try_fold(init, move |acc, item| {
+            let acc = fold(acc, (*count, item));
+            *count += 1;
+            acc
+        })
+    }
+
+    #[inline]
+    #[rustc_inherit_overflow_checks]
     fn fold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1471,6 +1672,19 @@
     }
 
     #[inline]
+    fn try_rfold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        // Can safely add and subtract the count, as `ExactSizeIterator` promises
+        // that the number of elements fits into a `usize`.
+        let mut count = self.count + self.iter.len();
+        self.iter.try_rfold(init, move |acc, item| {
+            count -= 1;
+            fold(acc, (count, item))
+        })
+    }
+
+    #[inline]
     fn rfold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1595,6 +1809,18 @@
     }
 
     #[inline]
+    fn try_fold<B, F, R>(&mut self, init: B, mut f: F) -> R where
+        Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
+    {
+        let acc = match self.peeked.take() {
+            Some(None) => return Try::from_ok(init),
+            Some(Some(v)) => f(init, v)?,
+            None => init,
+        };
+        self.iter.try_fold(acc, f)
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1699,13 +1925,16 @@
 
     #[inline]
     fn next(&mut self) -> Option<I::Item> {
-        for x in self.iter.by_ref() {
-            if self.flag || !(self.predicate)(&x) {
-                self.flag = true;
-                return Some(x);
+        let flag = &mut self.flag;
+        let pred = &mut self.predicate;
+        self.iter.find(move |x| {
+            if *flag || !pred(x) {
+                *flag = true;
+                true
+            } else {
+                false
             }
-        }
-        None
+        })
     }
 
     #[inline]
@@ -1715,6 +1944,19 @@
     }
 
     #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, mut init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        if !self.flag {
+            match self.next() {
+                Some(v) => init = fold(init, v)?,
+                None => return Try::from_ok(init),
+            }
+        }
+        self.iter.try_fold(init, fold)
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(mut self, mut init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1785,6 +2027,26 @@
         let (_, upper) = self.iter.size_hint();
         (0, upper) // can't know a lower bound, due to the predicate
     }
+
+    #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        if self.flag {
+            Try::from_ok(init)
+        } else {
+            let flag = &mut self.flag;
+            let p = &mut self.predicate;
+            self.iter.try_fold(init, move |acc, x| {
+                if p(&x) {
+                    LoopState::from_try(fold(acc, x))
+                } else {
+                    *flag = true;
+                    LoopState::Break(Try::from_ok(acc))
+                }
+            }).into_try()
+        }
+    }
 }
 
 #[unstable(feature = "fused", issue = "35602")]
@@ -1868,6 +2130,21 @@
     }
 
     #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let n = self.n;
+        self.n = 0;
+        if n > 0 {
+            // `nth(k)` consumes k + 1 elements, so `nth(n - 1)` skips exactly n
+            if self.iter.nth(n - 1).is_none() {
+                return Try::from_ok(init);
+            }
+        }
+        self.iter.try_fold(init, fold)
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(mut self, init: Acc, fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -1893,6 +2170,22 @@
             None
         }
     }
+
+    fn try_rfold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let mut n = self.len();
+        if n == 0 {
+            Try::from_ok(init)
+        } else {
+            self.iter.try_rfold(init, move |acc, x| {
+                n -= 1;
+                let r = fold(acc, x);
+                if n == 0 { LoopState::Break(r) }
+                else { LoopState::from_try(r) }
+            }).into_try()
+        }
+    }
 }
 
 #[unstable(feature = "fused", issue = "35602")]
@@ -1954,6 +2247,23 @@
 
         (lower, upper)
     }
+
+    #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        if self.n == 0 {
+            Try::from_ok(init)
+        } else {
+            let n = &mut self.n;
+            self.iter.try_fold(init, move |acc, x| {
+                *n -= 1;
+                let r = fold(acc, x);
+                if *n == 0 { LoopState::Break(r) }
+                else { LoopState::from_try(r) }
+            }).into_try()
+        }
+    }
 }
 
 #[stable(feature = "rust1", since = "1.0.0")]
@@ -2005,6 +2315,20 @@
         let (_, upper) = self.iter.size_hint();
         (0, upper) // can't know a lower bound, due to the scan function
     }
+
+    #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let state = &mut self.state;
+        let f = &mut self.f;
+        self.iter.try_fold(init, move |acc, x| {
+            match f(state, x) {
+                None => LoopState::Break(Try::from_ok(acc)),
+                Some(x) => LoopState::from_try(fold(acc, x)),
+            }
+        }).into_try()
+    }
 }
 
 /// An iterator that maps each element to an iterator, and yields the elements
@@ -2071,6 +2395,35 @@
     }
 
     #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, mut init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        if let Some(ref mut front) = self.frontiter {
+            init = front.try_fold(init, &mut fold)?;
+        }
+        self.frontiter = None;
+
+        {
+            let f = &mut self.f;
+            let frontiter = &mut self.frontiter;
+            init = self.iter.try_fold(init, |acc, x| {
+                let mut mid = f(x).into_iter();
+                let r = mid.try_fold(acc, &mut fold);
+                *frontiter = Some(mid);
+                r
+            })?;
+        }
+        self.frontiter = None;
+
+        if let Some(ref mut back) = self.backiter {
+            init = back.try_fold(init, &mut fold)?;
+        }
+        self.backiter = None;
+
+        Try::from_ok(init)
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -2103,6 +2456,35 @@
     }
 
     #[inline]
+    fn try_rfold<Acc, Fold, R>(&mut self, mut init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        if let Some(ref mut back) = self.backiter {
+            init = back.try_rfold(init, &mut fold)?;
+        }
+        self.backiter = None;
+
+        {
+            let f = &mut self.f;
+            let backiter = &mut self.backiter;
+            init = self.iter.try_rfold(init, |acc, x| {
+                let mut mid = f(x).into_iter();
+                let r = mid.try_rfold(acc, &mut fold);
+                *backiter = Some(mid);
+                r
+            })?;
+        }
+        self.backiter = None;
+
+        if let Some(ref mut front) = self.frontiter {
+            init = front.try_rfold(init, &mut fold)?;
+        }
+        self.frontiter = None;
+
+        Try::from_ok(init)
+    }
+
+    #[inline]
     fn rfold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -2190,6 +2572,19 @@
     }
 
     #[inline]
+    default fn try_fold<Acc, Fold, R>(&mut self, init: Acc, fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        if self.done {
+            Try::from_ok(init)
+        } else {
+            let acc = self.iter.try_fold(init, fold)?;
+            self.done = true;
+            Try::from_ok(acc)
+        }
+    }
+
+    #[inline]
     default fn fold<Acc, Fold>(self, init: Acc, fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -2215,6 +2610,19 @@
     }
 
     #[inline]
+    default fn try_rfold<Acc, Fold, R>(&mut self, init: Acc, fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        if self.done {
+            Try::from_ok(init)
+        } else {
+            let acc = self.iter.try_rfold(init, fold)?;
+            self.done = true;
+            Try::from_ok(acc)
+        }
+    }
+
+    #[inline]
     default fn rfold<Acc, Fold>(self, init: Acc, fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -2266,6 +2674,13 @@
     }
 
     #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        self.iter.try_fold(init, fold)
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(self, init: Acc, fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -2283,6 +2698,13 @@
     }
 
     #[inline]
+    fn try_rfold<Acc, Fold, R>(&mut self, init: Acc, fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        self.iter.try_rfold(init, fold)
+    }
+
+    #[inline]
     fn rfold<Acc, Fold>(self, init: Acc, fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -2354,6 +2776,14 @@
     }
 
     #[inline]
+    fn try_fold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let f = &mut self.f;
+        self.iter.try_fold(init, move |acc, item| { f(&item); fold(acc, item) })
+    }
+
+    #[inline]
     fn fold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
@@ -2373,6 +2803,14 @@
     }
 
     #[inline]
+    fn try_rfold<Acc, Fold, R>(&mut self, init: Acc, mut fold: Fold) -> R where
+        Self: Sized, Fold: FnMut(Acc, Self::Item) -> R, R: Try<Ok=Acc>
+    {
+        let f = &mut self.f;
+        self.iter.try_rfold(init, move |acc, item| { f(&item); fold(acc, item) })
+    }
+
+    #[inline]
     fn rfold<Acc, Fold>(self, init: Acc, mut fold: Fold) -> Acc
         where Fold: FnMut(Acc, Self::Item) -> Acc,
     {
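Worth calling out from the adapter hunks above: stateful adapters like `Chain` cannot simply forward `try_fold`, because a short-circuit in the first half must leave the second half untouched for later resumption. The observable behavior can be checked on stable Rust (this is a demonstration of the contract, not the libcore code itself):

```rust
// Sketch of why Chain::try_fold tracks its state: when the closure
// short-circuits inside the first half, the rest of the chain must
// remain available through the same iterator.
fn main() {
    let a = [1, 2, 3];
    let b = [4, 5, 6];
    let mut it = a.iter().chain(b.iter());

    // Stop as soon as we see an even number.
    let res: Result<i32, i32> = it.try_fold(0, |acc, &x| {
        if x % 2 == 0 { Err(x) } else { Ok(acc + x) }
    });
    assert_eq!(res, Err(2));

    // The untouched elements are still there after the early exit.
    assert_eq!(it.next(), Some(&3));
}
```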
diff --git a/src/libcore/iter/traits.rs b/src/libcore/iter/traits.rs
index 28236d1..11e668d 100644
--- a/src/libcore/iter/traits.rs
+++ b/src/libcore/iter/traits.rs
@@ -7,9 +7,11 @@
 // <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use ops::{Mul, Add};
+use ops::{Mul, Add, Try};
 use num::Wrapping;
 
+use super::{AlwaysOk, LoopState};
+
 /// Conversion from an `Iterator`.
 ///
 /// By implementing `FromIterator` for a type, you define how it will be
@@ -415,6 +417,52 @@
     #[stable(feature = "rust1", since = "1.0.0")]
     fn next_back(&mut self) -> Option<Self::Item>;
 
+    /// This is the reverse version of [`try_fold()`]: it takes elements
+    /// starting from the back of the iterator.
+    ///
+    /// [`try_fold()`]: trait.Iterator.html#method.try_fold
+    ///
+    /// # Examples
+    ///
+    /// Basic usage:
+    ///
+    /// ```
+    /// #![feature(iterator_try_fold)]
+    /// let a = ["1", "2", "3"];
+    /// let sum = a.iter()
+    ///     .map(|&s| s.parse::<i32>())
+    ///     .try_rfold(0, |acc, x| x.and_then(|y| Ok(acc + y)));
+    /// assert_eq!(sum, Ok(6));
+    /// ```
+    ///
+    /// Short-circuiting:
+    ///
+    /// ```
+    /// #![feature(iterator_try_fold)]
+    /// let a = ["1", "rust", "3"];
+    /// let mut it = a.iter();
+    /// let sum = it
+    ///     .by_ref()
+    ///     .map(|&s| s.parse::<i32>())
+    ///     .try_rfold(0, |acc, x| x.and_then(|y| Ok(acc + y)));
+    /// assert!(sum.is_err());
+    ///
+    /// // Because it short-circuited, the remaining elements are still
+    /// // available through the iterator.
+    /// assert_eq!(it.next_back(), Some(&"1"));
+    /// ```
+    #[inline]
+    #[unstable(feature = "iterator_try_fold", issue = "45594")]
+    fn try_rfold<B, F, R>(&mut self, init: B, mut f: F) -> R where
+        Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
+    {
+        let mut accum = init;
+        while let Some(x) = self.next_back() {
+            accum = f(accum, x)?;
+        }
+        Try::from_ok(accum)
+    }
+
     /// An iterator method that reduces the iterator's elements to a single,
     /// final value, starting from the back.
     ///
@@ -470,13 +518,10 @@
     /// ```
     #[inline]
     #[unstable(feature = "iter_rfold", issue = "44705")]
-    fn rfold<B, F>(mut self, mut accum: B, mut f: F) -> B where
+    fn rfold<B, F>(mut self, accum: B, mut f: F) -> B where
         Self: Sized, F: FnMut(B, Self::Item) -> B,
     {
-        while let Some(x) = self.next_back() {
-            accum = f(accum, x);
-        }
-        accum
+        self.try_rfold(accum, move |acc, x| AlwaysOk(f(acc, x))).0
     }
 
     /// Searches for an element of an iterator from the right that satisfies a predicate.
@@ -531,10 +576,10 @@
         Self: Sized,
         P: FnMut(&Self::Item) -> bool
     {
-        while let Some(x) = self.next_back() {
-            if predicate(&x) { return Some(x) }
-        }
-        None
+        self.try_rfold((), move |(), x| {
+            if predicate(&x) { LoopState::Break(x) }
+            else { LoopState::Continue(()) }
+        }).break_value()
     }
 }
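The new `try_rfold` doc examples above gate on `#![feature(iterator_try_fold)]`; the semantics they describe can be exercised as an ordinary program once the feature lands (shown here with `Result`, mirroring the doc example):

```rust
// Sketch of DoubleEndedIterator::try_rfold semantics: fold from the back,
// short-circuiting on the first Err.
fn main() {
    let a = ["1", "2", "3"];
    let sum = a.iter()
        .map(|s| s.parse::<i32>())
        .try_rfold(0, |acc, x| x.map(|y| acc + y));
    assert_eq!(sum, Ok(6));

    // A non-numeric element stops the fold with its parse error.
    let b = ["1", "rust", "3"];
    let bad = b.iter()
        .map(|s| s.parse::<i32>())
        .try_rfold(0, |acc, x| x.map(|y| acc + y));
    assert!(bad.is_err());
}
```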
 
diff --git a/src/libcore/slice/mod.rs b/src/libcore/slice/mod.rs
index 74182f3..49c51f4 100644
--- a/src/libcore/slice/mod.rs
+++ b/src/libcore/slice/mod.rs
@@ -40,7 +40,7 @@
 use fmt;
 use intrinsics::assume;
 use iter::*;
-use ops::{FnMut, self};
+use ops::{FnMut, Try, self};
 use option::Option;
 use option::Option::{None, Some};
 use result::Result;
@@ -1165,62 +1165,37 @@
                 self.next_back()
             }
 
-            fn all<F>(&mut self, mut predicate: F) -> bool
-                where F: FnMut(Self::Item) -> bool,
+            #[inline]
+            fn try_fold<B, F, R>(&mut self, init: B, mut f: F) -> R where
+                Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
             {
-                self.search_while(true, move |elt| {
-                    if predicate(elt) {
-                        SearchWhile::Continue
-                    } else {
-                        SearchWhile::Done(false)
+                // manual unrolling is needed when there are conditional exits from the loop
+                let mut accum = init;
+                unsafe {
+                    while ptrdistance(self.ptr, self.end) >= 4 {
+                        accum = f(accum, $mkref!(self.ptr.post_inc()))?;
+                        accum = f(accum, $mkref!(self.ptr.post_inc()))?;
+                        accum = f(accum, $mkref!(self.ptr.post_inc()))?;
+                        accum = f(accum, $mkref!(self.ptr.post_inc()))?;
                     }
-                })
+                    while self.ptr != self.end {
+                        accum = f(accum, $mkref!(self.ptr.post_inc()))?;
+                    }
+                }
+                Try::from_ok(accum)
             }
 
-            fn any<F>(&mut self, mut predicate: F) -> bool
-                where F: FnMut(Self::Item) -> bool,
+            #[inline]
+            fn fold<Acc, Fold>(mut self, init: Acc, mut f: Fold) -> Acc
+                where Fold: FnMut(Acc, Self::Item) -> Acc,
             {
-                !self.all(move |elt| !predicate(elt))
-            }
-
-            fn find<F>(&mut self, mut predicate: F) -> Option<Self::Item>
-                where F: FnMut(&Self::Item) -> bool,
-            {
-                self.search_while(None, move |elt| {
-                    if predicate(&elt) {
-                        SearchWhile::Done(Some(elt))
-                    } else {
-                        SearchWhile::Continue
-                    }
-                })
-            }
-
-            fn position<F>(&mut self, mut predicate: F) -> Option<usize>
-                where F: FnMut(Self::Item) -> bool,
-            {
-                let mut index = 0;
-                self.search_while(None, move |elt| {
-                    if predicate(elt) {
-                        SearchWhile::Done(Some(index))
-                    } else {
-                        index += 1;
-                        SearchWhile::Continue
-                    }
-                })
-            }
-
-            fn rposition<F>(&mut self, mut predicate: F) -> Option<usize>
-                where F: FnMut(Self::Item) -> bool,
-            {
-                let mut index = self.len();
-                self.rsearch_while(None, move |elt| {
-                    index -= 1;
-                    if predicate(elt) {
-                        SearchWhile::Done(Some(index))
-                    } else {
-                        SearchWhile::Continue
-                    }
-                })
+                // Let LLVM unroll this, rather than using the default
+                // impl that would force the manual unrolling above
+                let mut accum = init;
+                while let Some(x) = self.next() {
+                    accum = f(accum, x);
+                }
+                accum
             }
         }
 
@@ -1242,59 +1217,37 @@
                 }
             }
 
-            fn rfind<F>(&mut self, mut predicate: F) -> Option<Self::Item>
-                where F: FnMut(&Self::Item) -> bool,
-            {
-                self.rsearch_while(None, move |elt| {
-                    if predicate(&elt) {
-                        SearchWhile::Done(Some(elt))
-                    } else {
-                        SearchWhile::Continue
-                    }
-                })
-            }
-
-        }
-
-        // search_while is a generalization of the internal iteration methods.
-        impl<'a, T> $name<'a, T> {
-            // search through the iterator's element using the closure `g`.
-            // if no element was found, return `default`.
-            fn search_while<Acc, G>(&mut self, default: Acc, mut g: G) -> Acc
-                where Self: Sized,
-                      G: FnMut($elem) -> SearchWhile<Acc>
+            #[inline]
+            fn try_rfold<B, F, R>(&mut self, init: B, mut f: F) -> R where
+                Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Ok=B>
             {
                 // manual unrolling is needed when there are conditional exits from the loop
+                let mut accum = init;
                 unsafe {
                     while ptrdistance(self.ptr, self.end) >= 4 {
-                        search_while!(g($mkref!(self.ptr.post_inc())));
-                        search_while!(g($mkref!(self.ptr.post_inc())));
-                        search_while!(g($mkref!(self.ptr.post_inc())));
-                        search_while!(g($mkref!(self.ptr.post_inc())));
+                        accum = f(accum, $mkref!(self.end.pre_dec()))?;
+                        accum = f(accum, $mkref!(self.end.pre_dec()))?;
+                        accum = f(accum, $mkref!(self.end.pre_dec()))?;
+                        accum = f(accum, $mkref!(self.end.pre_dec()))?;
                     }
                     while self.ptr != self.end {
-                        search_while!(g($mkref!(self.ptr.post_inc())));
+                        accum = f(accum, $mkref!(self.end.pre_dec()))?;
                     }
                 }
-                default
+                Try::from_ok(accum)
             }
 
-            fn rsearch_while<Acc, G>(&mut self, default: Acc, mut g: G) -> Acc
-                where Self: Sized,
-                      G: FnMut($elem) -> SearchWhile<Acc>
+            #[inline]
+            fn rfold<Acc, Fold>(mut self, init: Acc, mut f: Fold) -> Acc
+                where Fold: FnMut(Acc, Self::Item) -> Acc,
             {
-                unsafe {
-                    while ptrdistance(self.ptr, self.end) >= 4 {
-                        search_while!(g($mkref!(self.end.pre_dec())));
-                        search_while!(g($mkref!(self.end.pre_dec())));
-                        search_while!(g($mkref!(self.end.pre_dec())));
-                        search_while!(g($mkref!(self.end.pre_dec())));
-                    }
-                    while self.ptr != self.end {
-                        search_while!(g($mkref!(self.end.pre_dec())));
-                    }
+                // Let LLVM unroll this, rather than using the default
+                // impl that would force the manual unrolling above
+                let mut accum = init;
+                while let Some(x) = self.next_back() {
+                    accum = f(accum, x);
                 }
-                default
+                accum
             }
         }
     }
@@ -1328,24 +1281,6 @@
     }}
 }
 
-// An enum used for controlling the execution of `.search_while()`.
-enum SearchWhile<T> {
-    // Continue searching
-    Continue,
-    // Fold is complete and will return this value
-    Done(T),
-}
-
-// helper macro for search while's control flow
-macro_rules! search_while {
-    ($e:expr) => {
-        match $e {
-            SearchWhile::Continue => { }
-            SearchWhile::Done(done) => return done,
-        }
-    }
-}
-
 /// Immutable slice iterator
 ///
 /// This struct is created by the [`iter`] method on [slices].
diff --git a/src/libcore/tests/iter.rs b/src/libcore/tests/iter.rs
index f8c6fc5..5cac5b2 100644
--- a/src/libcore/tests/iter.rs
+++ b/src/libcore/tests/iter.rs
@@ -664,6 +664,7 @@
 fn test_iterator_skip_fold() {
     let xs = [0, 1, 2, 3, 5, 13, 15, 16, 17, 19, 20, 30];
     let ys = [13, 15, 16, 17, 19, 20, 30];
+
     let it = xs.iter().skip(5);
     let i = it.fold(0, |i, &x| {
         assert_eq!(x, ys[i]);
@@ -678,6 +679,24 @@
         i + 1
     });
     assert_eq!(i, ys.len());
+
+    let it = xs.iter().skip(5);
+    let i = it.rfold(ys.len(), |i, &x| {
+        let i = i - 1;
+        assert_eq!(x, ys[i]);
+        i
+    });
+    assert_eq!(i, 0);
+
+    let mut it = xs.iter().skip(5);
+    assert_eq!(it.next(), Some(&ys[0])); // process skips before folding
+    let i = it.rfold(ys.len(), |i, &x| {
+        let i = i - 1;
+        assert_eq!(x, ys[i]);
+        i
+    });
+    assert_eq!(i, 1);
 }
 
 #[test]
@@ -1478,3 +1497,207 @@
     assert_eq!(x, 1);
     assert_eq!(y, 5);
 }
+
+#[test]
+fn test_rev_try_folds() {
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((1..10).rev().try_fold(7, f), (1..10).try_rfold(7, f));
+    assert_eq!((1..10).rev().try_rfold(7, f), (1..10).try_fold(7, f));
+
+    let a = [10, 20, 30, 40, 100, 60, 70, 80, 90];
+    let mut iter = a.iter().rev();
+    assert_eq!(iter.try_fold(0_i8, |acc, &x| acc.checked_add(x)), None);
+    assert_eq!(iter.next(), Some(&70));
+    let mut iter = a.iter().rev();
+    assert_eq!(iter.try_rfold(0_i8, |acc, &x| acc.checked_add(x)), None);
+    assert_eq!(iter.next_back(), Some(&60));
+}
+
+#[test]
+fn test_cloned_try_folds() {
+    let a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    let f_ref = &|acc, &x| i32::checked_add(2*acc, x);
+    assert_eq!(a.iter().cloned().try_fold(7, f), a.iter().try_fold(7, f_ref));
+    assert_eq!(a.iter().cloned().try_rfold(7, f), a.iter().try_rfold(7, f_ref));
+
+    let a = [10, 20, 30, 40, 100, 60, 70, 80, 90];
+    let mut iter = a.iter().cloned();
+    assert_eq!(iter.try_fold(0_i8, |acc, x| acc.checked_add(x)), None);
+    assert_eq!(iter.next(), Some(60));
+    let mut iter = a.iter().cloned();
+    assert_eq!(iter.try_rfold(0_i8, |acc, x| acc.checked_add(x)), None);
+    assert_eq!(iter.next_back(), Some(70));
+}
+
+#[test]
+fn test_chain_try_folds() {
+    let c = || (0..10).chain(10..20);
+
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!(c().try_fold(7, f), (0..20).try_fold(7, f));
+    assert_eq!(c().try_rfold(7, f), (0..20).rev().try_fold(7, f));
+
+    let mut iter = c();
+    assert_eq!(iter.position(|x| x == 5), Some(5));
+    assert_eq!(iter.next(), Some(6), "stopped in front, state Both");
+    assert_eq!(iter.position(|x| x == 13), Some(6));
+    assert_eq!(iter.next(), Some(14), "stopped in back, state Back");
+    assert_eq!(iter.try_fold(0, |acc, x| Some(acc+x)), Some((15..20).sum()));
+
+    let mut iter = c().rev(); // use rev to access try_rfold
+    assert_eq!(iter.position(|x| x == 15), Some(4));
+    assert_eq!(iter.next(), Some(14), "stopped in back, state Both");
+    assert_eq!(iter.position(|x| x == 5), Some(8));
+    assert_eq!(iter.next(), Some(4), "stopped in front, state Front");
+    assert_eq!(iter.try_fold(0, |acc, x| Some(acc+x)), Some((0..4).sum()));
+
+    let mut iter = c();
+    iter.by_ref().rev().nth(14); // skip the last 15, ending in state Front
+    assert_eq!(iter.try_fold(7, f), (0..5).try_fold(7, f));
+
+    let mut iter = c();
+    iter.nth(14); // skip the first 15, ending in state Back
+    assert_eq!(iter.try_rfold(7, f), (15..20).try_rfold(7, f));
+}
+
+#[test]
+fn test_map_try_folds() {
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((0..10).map(|x| x+3).try_fold(7, f), (3..13).try_fold(7, f));
+    assert_eq!((0..10).map(|x| x+3).try_rfold(7, f), (3..13).try_rfold(7, f));
+
+    let mut iter = (0..40).map(|x| x+10);
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(20));
+    assert_eq!(iter.try_rfold(0, i8::checked_add), None);
+    assert_eq!(iter.next_back(), Some(46));
+}
+
+#[test]
+fn test_filter_try_folds() {
+    fn p(&x: &i32) -> bool { 0 <= x && x < 10 }
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((-10..20).filter(p).try_fold(7, f), (0..10).try_fold(7, f));
+    assert_eq!((-10..20).filter(p).try_rfold(7, f), (0..10).try_rfold(7, f));
+
+    let mut iter = (0..40).filter(|&x| x % 2 == 1);
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(25));
+    assert_eq!(iter.try_rfold(0, i8::checked_add), None);
+    assert_eq!(iter.next_back(), Some(31));
+}
+
+#[test]
+fn test_filter_map_try_folds() {
+    let mp = &|x| if 0 <= x && x < 10 { Some(x*2) } else { None };
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((-9..20).filter_map(mp).try_fold(7, f), (0..10).map(|x| 2*x).try_fold(7, f));
+    assert_eq!((-9..20).filter_map(mp).try_rfold(7, f), (0..10).map(|x| 2*x).try_rfold(7, f));
+
+    let mut iter = (0..40).filter_map(|x| if x%2 == 1 { None } else { Some(x*2 + 10) });
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(38));
+    assert_eq!(iter.try_rfold(0, i8::checked_add), None);
+    assert_eq!(iter.next_back(), Some(78));
+}
+
+#[test]
+fn test_enumerate_try_folds() {
+    let f = &|acc, (i, x)| usize::checked_add(2*acc, x/(i+1) + i);
+    assert_eq!((9..18).enumerate().try_fold(7, f), (0..9).map(|i| (i, i+9)).try_fold(7, f));
+    assert_eq!((9..18).enumerate().try_rfold(7, f), (0..9).map(|i| (i, i+9)).try_rfold(7, f));
+
+    let mut iter = (100..200).enumerate();
+    let f = &|acc, (i, x)| u8::checked_add(acc, u8::checked_div(x, i as u8 + 1)?);
+    assert_eq!(iter.try_fold(0, f), None);
+    assert_eq!(iter.next(), Some((7, 107)));
+    assert_eq!(iter.try_rfold(0, f), None);
+    assert_eq!(iter.next_back(), Some((11, 111)));
+}
+
+#[test]
+fn test_peek_try_fold() {
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((1..20).peekable().try_fold(7, f), (1..20).try_fold(7, f));
+    let mut iter = (1..20).peekable();
+    assert_eq!(iter.peek(), Some(&1));
+    assert_eq!(iter.try_fold(7, f), (1..20).try_fold(7, f));
+
+    let mut iter = [100, 20, 30, 40, 50, 60, 70].iter().cloned().peekable();
+    assert_eq!(iter.peek(), Some(&100));
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.peek(), Some(&40));
+}
+
+#[test]
+fn test_skip_while_try_fold() {
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    fn p(&x: &i32) -> bool { (x % 10) <= 5 }
+    assert_eq!((1..20).skip_while(p).try_fold(7, f), (6..20).try_fold(7, f));
+    let mut iter = (1..20).skip_while(p);
+    assert_eq!(iter.nth(5), Some(11));
+    assert_eq!(iter.try_fold(7, f), (12..20).try_fold(7, f));
+
+    let mut iter = (0..50).skip_while(|&x| (x % 20) < 15);
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(23));
+}
+
+#[test]
+fn test_take_while_folds() {
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((1..20).take_while(|&x| x != 10).try_fold(7, f), (1..10).try_fold(7, f));
+    let mut iter = (1..20).take_while(|&x| x != 10);
+    assert_eq!(iter.try_fold(0, |x, y| Some(x+y)), Some((1..10).sum()));
+    assert_eq!(iter.next(), None, "flag should be set");
+    let iter = (1..20).take_while(|&x| x != 10);
+    assert_eq!(iter.fold(0, |x, y| x+y), (1..10).sum());
+
+    let mut iter = (10..50).take_while(|&x| x != 40);
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(20));
+}
+
+#[test]
+fn test_skip_try_folds() {
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((1..20).skip(9).try_fold(7, f), (10..20).try_fold(7, f));
+    assert_eq!((1..20).skip(9).try_rfold(7, f), (10..20).try_rfold(7, f));
+
+    let mut iter = (0..30).skip(10);
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(20));
+    assert_eq!(iter.try_rfold(0, i8::checked_add), None);
+    assert_eq!(iter.next_back(), Some(24));
+}
+
+#[test]
+fn test_take_try_folds() {
+    let f = &|acc, x| i32::checked_add(2*acc, x);
+    assert_eq!((10..30).take(10).try_fold(7, f), (10..20).try_fold(7, f));
+    //assert_eq!((10..30).take(10).try_rfold(7, f), (10..20).try_rfold(7, f));
+
+    let mut iter = (10..30).take(20);
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(20));
+    //assert_eq!(iter.try_rfold(0, i8::checked_add), None);
+    //assert_eq!(iter.next_back(), Some(24));
+}
+
+#[test]
+fn test_flat_map_try_folds() {
+    let f = &|acc, x| i32::checked_add(acc*2/3, x);
+    let mr = &|x| (5*x)..(5*x + 5);
+    assert_eq!((0..10).flat_map(mr).try_fold(7, f), (0..50).try_fold(7, f));
+    assert_eq!((0..10).flat_map(mr).try_rfold(7, f), (0..50).try_rfold(7, f));
+    let mut iter = (0..10).flat_map(mr);
+    iter.next(); iter.next_back(); // have front and back iters in progress
+    assert_eq!(iter.try_rfold(7, f), (1..49).try_rfold(7, f));
+
+    let mut iter = (0..10).flat_map(|x| (4*x)..(4*x + 4));
+    assert_eq!(iter.try_fold(0, i8::checked_add), None);
+    assert_eq!(iter.next(), Some(17));
+    assert_eq!(iter.try_rfold(0, i8::checked_add), None);
+    assert_eq!(iter.next_back(), Some(35));
+}
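Several of the tests above rely on `try_fold` leaving the iterator resumable after a short-circuit: elements are consumed up to and including the one that caused the failure, and iteration can continue from there. A small self-contained check of that contract (stable Rust, values chosen so `i8` overflows mid-fold):

```rust
fn main() {
    let a = [10i8, 20, 30, 40, 100, 60];
    let mut it = a.iter();
    // 10 + 20 + 30 + 40 = 100, then +100 overflows i8, so the fold
    // returns None; the failing element (100) has been consumed.
    assert_eq!(it.try_fold(0i8, |acc, &x| acc.checked_add(x)), None);
    // Iteration resumes just past the element that failed.
    assert_eq!(it.next(), Some(&60));
}
```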
diff --git a/src/libcore/tests/lib.rs b/src/libcore/tests/lib.rs
index edf7f44..e2283a7 100644
--- a/src/libcore/tests/lib.rs
+++ b/src/libcore/tests/lib.rs
@@ -24,6 +24,7 @@
 #![feature(i128_type)]
 #![feature(inclusive_range)]
 #![feature(inclusive_range_syntax)]
+#![feature(iterator_try_fold)]
 #![feature(iter_rfind)]
 #![feature(iter_rfold)]
 #![feature(nonzero)]
diff --git a/src/libcore/tests/slice.rs b/src/libcore/tests/slice.rs
index 60e4e6d..fa4c2e9 100644
--- a/src/libcore/tests/slice.rs
+++ b/src/libcore/tests/slice.rs
@@ -273,6 +273,23 @@
 }
 
 #[test]
+fn test_iter_folds() {
+    let a = [1, 2, 3, 4, 5]; // len>4 so the unroll is used
+    assert_eq!(a.iter().fold(0, |acc, &x| 2*acc + x), 57);
+    assert_eq!(a.iter().rfold(0, |acc, &x| 2*acc + x), 129);
+    let fold = |acc: i32, &x| acc.checked_mul(2)?.checked_add(x);
+    assert_eq!(a.iter().try_fold(0, &fold), Some(57));
+    assert_eq!(a.iter().try_rfold(0, &fold), Some(129));
+
+    // short-circuiting try_fold, through other methods
+    let a = [0, 1, 2, 3, 5, 5, 5, 7, 8, 9];
+    let mut iter = a.iter();
+    assert_eq!(iter.position(|&x| x == 3), Some(3));
+    assert_eq!(iter.rfind(|&&x| x == 5), Some(&5));
+    assert_eq!(iter.len(), 2);
+}
+
+#[test]
 fn test_rotate() {
     const N: usize = 600;
     let a: &mut [_] = &mut [0; N];
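The expected constants in `test_iter_folds` follow from folding `2*acc + x` over the five elements in each direction; `rfold` agrees with `rev().fold`, which is a quick way to sanity-check them:

```rust
fn main() {
    let a = [1, 2, 3, 4, 5];
    // Front-to-back: 0 -> 1 -> 4 -> 11 -> 26 -> 57.
    assert_eq!(a.iter().fold(0, |acc, &x| 2 * acc + x), 57);
    // Back-to-front: 0 -> 5 -> 14 -> 31 -> 64 -> 129,
    // and rfold is equivalent to rev().fold.
    let r1 = a.iter().rfold(0, |acc, &x| 2 * acc + x);
    let r2 = a.iter().rev().fold(0, |acc, &x| 2 * acc + x);
    assert_eq!(r1, 129);
    assert_eq!(r1, r2);
}
```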
diff --git a/src/liblibc b/src/liblibc
index c1068cd..a72a79b 160000
--- a/src/liblibc
+++ b/src/liblibc
@@ -1 +1 @@
-Subproject commit c1068cd82ae55907e8fd4457b98278a3aaa9162b
+Subproject commit a72a79b34def38e9004baa459a3c204eb674b14c
diff --git a/src/libproc_macro/lib.rs b/src/libproc_macro/lib.rs
index 8a400f3..22f788e 100644
--- a/src/libproc_macro/lib.rs
+++ b/src/libproc_macro/lib.rs
@@ -177,9 +177,10 @@
 #[derive(Copy, Clone, Debug, PartialEq, Eq)]
 pub struct Span(syntax_pos::Span);
 
-#[unstable(feature = "proc_macro", issue = "38356")]
-impl Default for Span {
-    fn default() -> Span {
+impl Span {
+    /// A span that resolves at the macro definition site.
+    #[unstable(feature = "proc_macro", issue = "38356")]
+    pub fn def_site() -> Span {
         ::__internal::with_sess(|(_, mark)| {
             let call_site = mark.expn_info().unwrap().call_site;
             Span(call_site.with_ctxt(SyntaxContext::empty().apply_mark(mark)))
@@ -351,7 +352,7 @@
 #[unstable(feature = "proc_macro", issue = "38356")]
 impl From<TokenNode> for TokenTree {
     fn from(kind: TokenNode) -> TokenTree {
-        TokenTree { span: Span::default(), kind: kind }
+        TokenTree { span: Span::def_site(), kind: kind }
     }
 }
 
diff --git a/src/libproc_macro/quote.rs b/src/libproc_macro/quote.rs
index 26f88ad..8b5add1 100644
--- a/src/libproc_macro/quote.rs
+++ b/src/libproc_macro/quote.rs
@@ -168,7 +168,7 @@
 
 impl Quote for Span {
     fn quote(self) -> TokenStream {
-        quote!(::Span::default())
+        quote!(::Span::def_site())
     }
 }
 
diff --git a/src/librustc/dep_graph/dep_node.rs b/src/librustc/dep_graph/dep_node.rs
index b391b35..523a244 100644
--- a/src/librustc/dep_graph/dep_node.rs
+++ b/src/librustc/dep_graph/dep_node.rs
@@ -618,7 +618,7 @@
 
     [input] Freevars(DefId),
     [input] MaybeUnusedTraitImport(DefId),
-    [] MaybeUnusedExternCrates,
+    [input] MaybeUnusedExternCrates,
     [] StabilityIndex,
     [input] AllCrateNums,
     [] ExportedSymbols(CrateNum),
diff --git a/src/librustc/dep_graph/graph.rs b/src/librustc/dep_graph/graph.rs
index 97ac1b2..015cdd1 100644
--- a/src/librustc/dep_graph/graph.rs
+++ b/src/librustc/dep_graph/graph.rs
@@ -327,6 +327,7 @@
         }
     }
 
+    #[inline]
     pub fn fingerprint_of(&self, dep_node: &DepNode) -> Fingerprint {
         match self.fingerprints.borrow().get(dep_node) {
             Some(&fingerprint) => fingerprint,
@@ -340,6 +341,11 @@
         self.data.as_ref().unwrap().previous.fingerprint_of(dep_node)
     }
 
+    #[inline]
+    pub fn prev_dep_node_index_of(&self, dep_node: &DepNode) -> SerializedDepNodeIndex {
+        self.data.as_ref().unwrap().previous.node_to_index(dep_node)
+    }
+
     /// Indicates that a previous work product exists for `v`. This is
     /// invoked during initial start-up based on what nodes are clean
     /// (and what files exist in the incr. directory).
@@ -407,6 +413,12 @@
         self.data.as_ref().and_then(|t| t.dep_node_debug.borrow().get(&dep_node).cloned())
     }
 
+    pub fn edge_deduplication_data(&self) -> (u64, u64) {
+        let current_dep_graph = self.data.as_ref().unwrap().current.borrow();
+
+        (current_dep_graph.total_read_count, current_dep_graph.total_duplicate_read_count)
+    }
+
     pub fn serialize(&self) -> SerializedDepGraph {
         let fingerprints = self.fingerprints.borrow();
         let current_dep_graph = self.data.as_ref().unwrap().current.borrow();
@@ -731,6 +743,9 @@
     // each anon node. The session-key is just a random number generated when
     // the DepGraph is created.
     anon_id_seed: Fingerprint,
+
+    total_read_count: u64,
+    total_duplicate_read_count: u64,
 }
 
 impl CurrentDepGraph {
@@ -764,6 +779,8 @@
             anon_id_seed: stable_hasher.finish(),
             task_stack: Vec::new(),
             forbidden_edge,
+            total_read_count: 0,
+            total_duplicate_read_count: 0,
         }
     }
 
@@ -894,6 +911,7 @@
                 ref mut read_set,
                 node: ref target,
             }) => {
+                self.total_read_count += 1;
                 if read_set.insert(source) {
                     reads.push(source);
 
@@ -907,6 +925,8 @@
                             }
                         }
                     }
+                } else {
+                    self.total_duplicate_read_count += 1;
                 }
             }
             Some(&mut OpenTask::Anon {
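The counters added to `CurrentDepGraph` above distinguish total reads from duplicate reads of a node already recorded for the current task. A simplified, self-contained sketch of that bookkeeping (the `ReadTracker` type and `u32` node ids are stand-ins for the real dep-graph structures):

```rust
use std::collections::HashSet;

// Count every read; reads of nodes already in the per-task read set
// are deduplicated but still tallied separately.
struct ReadTracker {
    read_set: HashSet<u32>,
    reads: Vec<u32>,
    total_read_count: u64,
    total_duplicate_read_count: u64,
}

impl ReadTracker {
    fn new() -> ReadTracker {
        ReadTracker {
            read_set: HashSet::new(),
            reads: Vec::new(),
            total_read_count: 0,
            total_duplicate_read_count: 0,
        }
    }

    fn read(&mut self, source: u32) {
        self.total_read_count += 1;
        if self.read_set.insert(source) {
            self.reads.push(source);
        } else {
            self.total_duplicate_read_count += 1;
        }
    }
}

fn main() {
    let mut t = ReadTracker::new();
    for &s in [1, 2, 1, 3, 2, 1].iter() {
        t.read(s);
    }
    assert_eq!(t.total_read_count, 6);
    assert_eq!(t.total_duplicate_read_count, 3);
    assert_eq!(t.reads, vec![1, 2, 3]);
}
```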
diff --git a/src/librustc/dep_graph/prev.rs b/src/librustc/dep_graph/prev.rs
index 17001bb..6c43b5c 100644
--- a/src/librustc/dep_graph/prev.rs
+++ b/src/librustc/dep_graph/prev.rs
@@ -45,6 +45,11 @@
     }
 
     #[inline]
+    pub fn node_to_index(&self, dep_node: &DepNode) -> SerializedDepNodeIndex {
+        self.index[dep_node]
+    }
+
+    #[inline]
     pub fn fingerprint_of(&self, dep_node: &DepNode) -> Option<Fingerprint> {
         self.index
             .get(dep_node)
diff --git a/src/librustc/hir/check_attr.rs b/src/librustc/hir/check_attr.rs
index 946cbb7..05c3711 100644
--- a/src/librustc/hir/check_attr.rs
+++ b/src/librustc/hir/check_attr.rs
@@ -47,27 +47,27 @@
 
 impl<'a> CheckAttrVisitor<'a> {
     /// Check any attribute.
-    fn check_attribute(&self, attr: &ast::Attribute, target: Target) {
+    fn check_attribute(&self, attr: &ast::Attribute, item: &ast::Item, target: Target) {
         if let Some(name) = attr.name() {
             match &*name.as_str() {
-                "inline" => self.check_inline(attr, target),
-                "repr" => self.check_repr(attr, target),
+                "inline" => self.check_inline(attr, item, target),
+                "repr" => self.check_repr(attr, item, target),
                 _ => (),
             }
         }
     }
 
     /// Check if an `#[inline]` is applied to a function.
-    fn check_inline(&self, attr: &ast::Attribute, target: Target) {
+    fn check_inline(&self, attr: &ast::Attribute, item: &ast::Item, target: Target) {
         if target != Target::Fn {
             struct_span_err!(self.sess, attr.span, E0518, "attribute should be applied to function")
-                .span_label(attr.span, "requires a function")
+                .span_label(item.span, "not a function")
                 .emit();
         }
     }
 
     /// Check if an `#[repr]` attr is valid.
-    fn check_repr(&self, attr: &ast::Attribute, target: Target) {
+    fn check_repr(&self, attr: &ast::Attribute, item: &ast::Item, target: Target) {
         let words = match attr.meta_item_list() {
             Some(words) => words,
             None => {
@@ -139,7 +139,7 @@
                 _ => continue,
             };
             struct_span_err!(self.sess, attr.span, E0517, "{}", message)
-                .span_label(attr.span, format!("requires {}", label))
+                .span_label(item.span, format!("not {}", label))
                 .emit();
         }
         if conflicting_reprs > 1 {
@@ -153,7 +153,7 @@
     fn visit_item(&mut self, item: &'a ast::Item) {
         let target = Target::from_item(item);
         for attr in &item.attrs {
-            self.check_attribute(attr, target);
+            self.check_attribute(attr, item, target);
         }
         visit::walk_item(self, item);
     }
diff --git a/src/librustc/hir/def_id.rs b/src/librustc/hir/def_id.rs
index 428f154..f6fcff3 100644
--- a/src/librustc/hir/def_id.rs
+++ b/src/librustc/hir/def_id.rs
@@ -11,8 +11,7 @@
 use ty;
 
 use rustc_data_structures::indexed_vec::Idx;
-use serialize::{self, Encoder, Decoder};
-
+use serialize;
 use std::fmt;
 use std::u32;
 
@@ -32,6 +31,10 @@
 
         /// A CrateNum value that indicates that something is wrong.
         const INVALID_CRATE = u32::MAX - 1,
+
+        /// A special CrateNum that we use for the tcx.rcache when decoding from
+        /// the incr. comp. cache.
+        const RESERVED_FOR_INCR_COMP_CACHE = u32::MAX - 2,
     });
 
 impl CrateNum {
@@ -61,17 +64,8 @@
     }
 }
 
-impl serialize::UseSpecializedEncodable for CrateNum {
-    fn default_encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
-        s.emit_u32(self.0)
-    }
-}
-
-impl serialize::UseSpecializedDecodable for CrateNum {
-    fn default_decode<D: Decoder>(d: &mut D) -> Result<CrateNum, D::Error> {
-        d.read_u32().map(CrateNum)
-    }
-}
+impl serialize::UseSpecializedEncodable for CrateNum {}
+impl serialize::UseSpecializedDecodable for CrateNum {}
 
 /// A DefIndex is an index into the hir-map for a crate, identifying a
 /// particular definition. It should really be considered an interned
@@ -88,6 +82,7 @@
 /// don't have to care about these ranges.
 newtype_index!(DefIndex
     {
+        ENCODABLE = custom
         DEBUG_FORMAT = custom,
 
         /// The start of the "high" range of DefIndexes.
@@ -146,6 +141,9 @@
     }
 }
 
+impl serialize::UseSpecializedEncodable for DefIndex {}
+impl serialize::UseSpecializedDecodable for DefIndex {}
+
 #[derive(Copy, Clone, Eq, PartialEq, Hash)]
 pub enum DefIndexAddressSpace {
     Low = 0,
@@ -166,7 +164,7 @@
 
 /// A DefId identifies a particular *definition*, by combining a crate
 /// index and a def index.
-#[derive(Clone, Eq, Ord, PartialOrd, PartialEq, RustcEncodable, RustcDecodable, Hash, Copy)]
+#[derive(Clone, Eq, Ord, PartialOrd, PartialEq, Hash, Copy)]
 pub struct DefId {
     pub krate: CrateNum,
     pub index: DefIndex,
@@ -188,14 +186,58 @@
     }
 }
 
-
 impl DefId {
     /// Make a local `DefId` with the given index.
+    #[inline]
     pub fn local(index: DefIndex) -> DefId {
         DefId { krate: LOCAL_CRATE, index: index }
     }
 
-    pub fn is_local(&self) -> bool {
+    #[inline]
+    pub fn is_local(self) -> bool {
         self.krate == LOCAL_CRATE
     }
+
+    #[inline]
+    pub fn to_local(self) -> LocalDefId {
+        LocalDefId::from_def_id(self)
+    }
 }
+
+impl serialize::UseSpecializedEncodable for DefId {}
+impl serialize::UseSpecializedDecodable for DefId {}
+
+/// A LocalDefId is equivalent to a DefId with `krate == LOCAL_CRATE`. Since
+/// we encode this information in the type, we can ensure at compile time that
+/// no DefIds from upstream crates get thrown into the mix. There are quite a
+/// few cases where we know that only DefIds from the local crate are expected
+/// and a DefId from a different crate would signify a bug somewhere. This
+/// is when LocalDefId comes in handy.
+#[derive(Clone, Copy, Eq, PartialEq, Ord, PartialOrd, Hash)]
+pub struct LocalDefId(DefIndex);
+
+impl LocalDefId {
+
+    #[inline]
+    pub fn from_def_id(def_id: DefId) -> LocalDefId {
+        assert!(def_id.is_local());
+        LocalDefId(def_id.index)
+    }
+
+    #[inline]
+    pub fn to_def_id(self) -> DefId {
+        DefId {
+            krate: LOCAL_CRATE,
+            index: self.0
+        }
+    }
+}
+
+impl fmt::Debug for LocalDefId {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        self.to_def_id().fmt(f)
+    }
+}
+
+impl serialize::UseSpecializedEncodable for LocalDefId {}
+impl serialize::UseSpecializedDecodable for LocalDefId {}
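The `LocalDefId` added above is the classic newtype pattern: the invariant "this id belongs to the local crate" is checked once at the conversion boundary and thereafter guaranteed by the type. A self-contained sketch with simplified stand-in types (the real `CrateNum`/`DefIndex` are richer than plain `u32`s):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct CrateNum(u32);

const LOCAL_CRATE: CrateNum = CrateNum(0);

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct DefId { krate: CrateNum, index: u32 }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct LocalDefId(u32);

impl LocalDefId {
    // The crate check happens exactly once, here.
    fn from_def_id(def_id: DefId) -> LocalDefId {
        assert!(def_id.krate == LOCAL_CRATE);
        LocalDefId(def_id.index)
    }

    // Reconstructing the DefId needs no check: the invariant
    // is carried by the type.
    fn to_def_id(self) -> DefId {
        DefId { krate: LOCAL_CRATE, index: self.0 }
    }
}

fn main() {
    let d = DefId { krate: LOCAL_CRATE, index: 7 };
    let l = LocalDefId::from_def_id(d);
    assert_eq!(l.to_def_id(), d);
}
```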
diff --git a/src/librustc/hir/map/mod.rs b/src/librustc/hir/map/mod.rs
index 1b590c2..8b00280 100644
--- a/src/librustc/hir/map/mod.rs
+++ b/src/librustc/hir/map/mod.rs
@@ -17,7 +17,7 @@
 
 use dep_graph::{DepGraph, DepNode, DepKind, DepNodeIndex};
 
-use hir::def_id::{CRATE_DEF_INDEX, DefId, DefIndexAddressSpace};
+use hir::def_id::{CRATE_DEF_INDEX, DefId, LocalDefId, DefIndexAddressSpace};
 
 use syntax::abi::Abi;
 use syntax::ast::{self, Name, NodeId, CRATE_NODE_ID};
@@ -359,6 +359,16 @@
         self.definitions.as_local_node_id(DefId::local(def_index)).unwrap()
     }
 
+    #[inline]
+    pub fn local_def_id_to_hir_id(&self, def_id: LocalDefId) -> HirId {
+        self.definitions.def_index_to_hir_id(def_id.to_def_id().index)
+    }
+
+    #[inline]
+    pub fn local_def_id_to_node_id(&self, def_id: LocalDefId) -> NodeId {
+        self.definitions.as_local_node_id(def_id.to_def_id()).unwrap()
+    }
+
     fn entry_count(&self) -> usize {
         self.map.len()
     }
diff --git a/src/librustc/hir/mod.rs b/src/librustc/hir/mod.rs
index ee83000..1346685 100644
--- a/src/librustc/hir/mod.rs
+++ b/src/librustc/hir/mod.rs
@@ -45,6 +45,7 @@
 
 use rustc_data_structures::indexed_vec;
 
+use serialize::{self, Encoder, Encodable, Decoder, Decodable};
 use std::collections::BTreeMap;
 use std::fmt;
 
@@ -85,13 +86,37 @@
 /// the local_id part of the HirId changing, which is a very useful property in
 /// incremental compilation where we have to persist things through changes to
 /// the code base.
-#[derive(Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord, Debug,
-         RustcEncodable, RustcDecodable)]
+#[derive(Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord, Debug)]
 pub struct HirId {
     pub owner: DefIndex,
     pub local_id: ItemLocalId,
 }
 
+impl serialize::UseSpecializedEncodable for HirId {
+    fn default_encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
+        let HirId {
+            owner,
+            local_id,
+        } = *self;
+
+        owner.encode(s)?;
+        local_id.encode(s)
+    }
+}
+
+impl serialize::UseSpecializedDecodable for HirId {
+    fn default_decode<D: Decoder>(d: &mut D) -> Result<HirId, D::Error> {
+        let owner = DefIndex::decode(d)?;
+        let local_id = ItemLocalId::decode(d)?;
+
+        Ok(HirId {
+            owner,
+            local_id
+        })
+    }
+}
+
+
 /// An `ItemLocalId` uniquely identifies something within a given "item-like",
 /// that is within a hir::Item, hir::TraitItem, or hir::ImplItem. There is no
 /// guarantee that the numerical value of a given `ItemLocalId` corresponds to
diff --git a/src/librustc/ich/impls_hir.rs b/src/librustc/ich/impls_hir.rs
index a04683e..2b5390e 100644
--- a/src/librustc/ich/impls_hir.rs
+++ b/src/librustc/ich/impls_hir.rs
@@ -13,7 +13,7 @@
 
 use hir;
 use hir::map::DefPathHash;
-use hir::def_id::{DefId, CrateNum, CRATE_DEF_INDEX};
+use hir::def_id::{DefId, LocalDefId, CrateNum, CRATE_DEF_INDEX};
 use ich::{StableHashingContext, NodeIdHashingMode};
 use rustc_data_structures::stable_hasher::{HashStable, ToStableHashKey,
                                            StableHasher, StableHasherResult};
@@ -38,6 +38,24 @@
     }
 }
 
+impl<'gcx> HashStable<StableHashingContext<'gcx>> for LocalDefId {
+    #[inline]
+    fn hash_stable<W: StableHasherResult>(&self,
+                                          hcx: &mut StableHashingContext<'gcx>,
+                                          hasher: &mut StableHasher<W>) {
+        hcx.def_path_hash(self.to_def_id()).hash_stable(hcx, hasher);
+    }
+}
+
+impl<'gcx> ToStableHashKey<StableHashingContext<'gcx>> for LocalDefId {
+    type KeyType = DefPathHash;
+
+    #[inline]
+    fn to_stable_hash_key(&self, hcx: &StableHashingContext<'gcx>) -> DefPathHash {
+        hcx.def_path_hash(self.to_def_id())
+    }
+}
+
 impl<'gcx> HashStable<StableHashingContext<'gcx>> for CrateNum {
     #[inline]
     fn hash_stable<W: StableHasherResult>(&self,
diff --git a/src/librustc/infer/README.md b/src/librustc/infer/README.md
index 6c14785..e7daff3 100644
--- a/src/librustc/infer/README.md
+++ b/src/librustc/infer/README.md
@@ -1,239 +1,227 @@
 # Type inference engine
 
-This is loosely based on standard HM-type inference, but with an
-extension to try and accommodate subtyping.  There is nothing
-principled about this extension; it's sound---I hope!---but it's a
-heuristic, ultimately, and does not guarantee that it finds a valid
-typing even if one exists (in fact, there are known scenarios where it
-fails, some of which may eventually become problematic).
+The type inference is based on standard HM-type inference, but

+extended in various ways to accommodate subtyping, region inference,
+and higher-ranked types.
 
-## Key idea
+## A note on terminology
 
-The main change is that each type variable T is associated with a
-lower-bound L and an upper-bound U.  L and U begin as bottom and top,
-respectively, but gradually narrow in response to new constraints
-being introduced.  When a variable is finally resolved to a concrete
-type, it can (theoretically) select any type that is a supertype of L
-and a subtype of U.
+We use the notation `?T` to refer to inference variables, also called
+existential variables.
 
-There are several critical invariants which we maintain:
+We use the terms "region" and "lifetime" interchangeably. Both refer to
+the `'a` in `&'a T`.
 
-- the upper-bound of a variable only becomes lower and the lower-bound
-  only becomes higher over time;
-- the lower-bound L is always a subtype of the upper bound U;
-- the lower-bound L and upper-bound U never refer to other type variables,
-  but only to types (though those types may contain type variables).
+The term "bound region" refers to regions bound in a function
+signature, such as the `'a` in `for<'a> fn(&'a u32)`. A region is
+"free" if it is not bound.
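The bound/free distinction described above can be made concrete with a small example (a standalone sketch; the function names are illustrative only):

```rust
// `'a` is *bound* here: the `for<'a>` binder quantifies it inside
// the function-pointer type itself.
fn takes_any_ref(f: for<'a> fn(&'a u32) -> u32) -> u32 {
    let x = 10;
    f(&x)
}

// `'b` is *free* within this function's body: it is a parameter of
// the item, not bound inside a type.
fn first<'b>(xs: &'b [u32]) -> &'b u32 {
    &xs[0]
}

fn main() {
    fn deref(r: &u32) -> u32 { *r }
    assert_eq!(takes_any_ref(deref), 10);
    assert_eq!(*first(&[7, 8]), 7);
}
```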
 
-> An aside: if the terms upper- and lower-bound confuse you, think of
-> "supertype" and "subtype".  The upper-bound is a "supertype"
-> (super=upper in Latin, or something like that anyway) and the lower-bound
-> is a "subtype" (sub=lower in Latin).  I find it helps to visualize
-> a simple class hierarchy, like Java minus interfaces and
-> primitive types.  The class Object is at the root (top) and other
-> types lie in between.  The bottom type is then the Null type.
-> So the tree looks like:
->
-> ```text
->         Object
->         /    \
->     String   Other
->         \    /
->         (null)
-> ```
->
-> So the upper bound type is the "supertype" and the lower bound is the
-> "subtype" (also, super and sub mean upper and lower in Latin, or something
-> like that anyway).
+## Creating an inference context
 
-## Satisfying constraints
-
-At a primitive level, there is only one form of constraint that the
-inference understands: a subtype relation.  So the outside world can
-say "make type A a subtype of type B".  If there are variables
-involved, the inferencer will adjust their upper- and lower-bounds as
-needed to ensure that this relation is satisfied. (We also allow "make
-type A equal to type B", but this is translated into "A <: B" and "B
-<: A")
-
-As stated above, we always maintain the invariant that type bounds
-never refer to other variables.  This keeps the inference relatively
-simple, avoiding the scenario of having a kind of graph where we have
-to pump constraints along and reach a fixed point, but it does impose
-some heuristics in the case where the user is relating two type
-variables A <: B.
-
-Combining two variables such that variable A will forever be a subtype
-of variable B is the trickiest part of the algorithm because there is
-often no right choice---that is, the right choice will depend on
-future constraints which we do not yet know. The problem comes about
-because both A and B have bounds that can be adjusted in the future.
-Let's look at some of the cases that can come up.
-
-Imagine, to start, the best case, where both A and B have an upper and
-lower bound (that is, the bounds are not top nor bot respectively). In
-that case, if we're lucky, A.ub <: B.lb, and so we know that whatever
-A and B should become, they will forever have the desired subtyping
-relation.  We can just leave things as they are.
-
-### Option 1: Unify
-
-However, suppose that A.ub is *not* a subtype of B.lb.  In
-that case, we must make a decision.  One option is to unify A
-and B so that they are one variable whose bounds are:
-
-    UB = GLB(A.ub, B.ub)
-    LB = LUB(A.lb, B.lb)
-
-(Note that we will have to verify that LB <: UB; if it does not, the
-types are not intersecting and there is an error) In that case, A <: B
-holds trivially because A==B.  However, we have now lost some
-flexibility, because perhaps the user intended for A and B to end up
-as different types and not the same type.
-
-Pictorially, what this does is to take two distinct variables with
-(hopefully not completely) distinct type ranges and produce one with
-the intersection.
-
-```text
-                  B.ub                  B.ub
-                   /\                    /
-           A.ub   /  \           A.ub   /
-           /   \ /    \              \ /
-          /     X      \              UB
-         /     / \      \            / \
-        /     /   /      \          /   /
-        \     \  /       /          \  /
-         \      X       /             LB
-          \    / \     /             / \
-           \  /   \   /             /   \
-           A.lb    B.lb          A.lb    B.lb
-```
-
-
-### Option 2: Relate UB/LB
-
-Another option is to keep A and B as distinct variables but set their
-bounds in such a way that, whatever happens, we know that A <: B will hold.
-This can be achieved by ensuring that A.ub <: B.lb.  In practice there
-are two ways to do that, depicted pictorially here:
-
-```text
-    Before                Option #1            Option #2
-
-             B.ub                B.ub                B.ub
-              /\                 /  \                /  \
-      A.ub   /  \        A.ub   /(B')\       A.ub   /(B')\
-      /   \ /    \           \ /     /           \ /     /
-     /     X      \         __UB____/             UB    /
-    /     / \      \       /  |                   |    /
-   /     /   /      \     /   |                   |   /
-   \     \  /       /    /(A')|                   |  /
-    \      X       /    /     LB            ______LB/
-     \    / \     /    /     / \           / (A')/ \
-      \  /   \   /     \    /   \          \    /   \
-      A.lb    B.lb       A.lb    B.lb        A.lb    B.lb
-```
-
-In these diagrams, UB and LB are defined as before.  As you can see,
-the new ranges `A'` and `B'` are quite different from the range that
-would be produced by unifying the variables.
-
-### What we do now
-
-Our current technique is to *try* (transactionally) to relate the
-existing bounds of A and B, if there are any (i.e., if `UB(A) != top
-&& LB(B) != bot`).  If that succeeds, we're done.  If it fails, then
-we merge A and B into same variable.
-
-This is not clearly the correct course.  For example, if `UB(A) !=
-top` but `LB(B) == bot`, we could conceivably set `LB(B)` to `UB(A)`
-and leave the variables unmerged.  This is sometimes the better
-course, it depends on the program.
-
-The main case which fails today that I would like to support is:
+You create and "enter" an inference context by doing something like
+the following:
 
 ```rust
-fn foo<T>(x: T, y: T) { ... }
-
-fn bar() {
-    let x: @mut int = @mut 3;
-    let y: @int = @3;
-    foo(x, y);
-}
+tcx.infer_ctxt().enter(|infcx| {
+    // use the inference context `infcx` in here
+})
 ```
 
-In principle, the inferencer ought to find that the parameter `T` to
-`foo(x, y)` is `@const int`.  Today, however, it does not; this is
-because the type variable `T` is merged with the type variable for
-`X`, and thus inherits its UB/LB of `@mut int`.  This leaves no
-flexibility for `T` to later adjust to accommodate `@int`.
+Each inference context creates a short-lived type arena to store the
+fresh types and things that it will create, as described in
+[the README in the ty module][ty-readme]. This arena is created by the `enter`
+function and disposed after it returns.
 
-Note: `@` and `@mut` are replaced with `Rc<T>` and `Rc<RefCell<T>>` in current Rust.
+[ty-readme]: src/librustc/ty/README.md
 
-### What to do when not all bounds are present
+Within the closure, the infcx will have the type `InferCtxt<'cx, 'gcx,
+'tcx>` for some fresh `'cx` and `'tcx` -- the latter corresponds to
+the lifetime of this temporary arena, and the `'cx` is the lifetime of
+the `InferCtxt` itself. (Again, see [that ty README][ty-readme] for
+more details on this setup.)
 
-In the prior discussion we assumed that A.ub was not top and B.lb was
-not bot.  Unfortunately this is rarely the case.  Often type variables
-have "lopsided" bounds.  For example, if a variable in the program has
-been initialized but has not been used, then its corresponding type
-variable will have a lower bound but no upper bound.  When that
-variable is then used, we would like to know its upper bound---but we
-don't have one!  In this case we'll do different things depending on
-how the variable is being used.
+The `tcx.infer_ctxt` method actually returns a builder, which means
+there are some kinds of configuration you can do before the `infcx` is
+created. See `InferCtxtBuilder` for more information.
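
The `enter` scoping described above can be sketched in ordinary Rust. This is a toy with hypothetical names, not rustc's real types: the real `InferCtxt` is arena-backed and far richer, but the shape is the same, in that the context's storage is created inside `enter` and dropped when the closure returns.

```rust
struct InferCtxt {
    // Stand-in for the short-lived arena: storage that exists only
    // for the duration of `enter`.
    interned: Vec<String>,
}

impl InferCtxt {
    fn intern(&mut self, s: &str) -> usize {
        self.interned.push(s.to_string());
        self.interned.len() - 1
    }
}

// `enter` creates the context, hands it to the closure, and drops it
// afterwards, so nothing allocated inside can escape the closure.
fn enter<R>(f: impl FnOnce(&mut InferCtxt) -> R) -> R {
    let mut infcx = InferCtxt { interned: Vec::new() };
    let result = f(&mut infcx);
    // `infcx` (and everything interned in it) is dropped here.
    result
}
```

The first string interned gets index `0`, and that index is the only thing that survives the call, because the storage itself does not.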
 
-## Transactional support
+## Inference variables
 
-Whenever we adjust merge variables or adjust their bounds, we always
-keep a record of the old value.  This allows the changes to be undone.
+The main purpose of the inference context is to house a bunch of
+**inference variables** -- these represent types or regions whose precise
+value is not yet known, but will be uncovered as we perform type-checking.
 
-## Regions
+If you're familiar with the basic ideas of unification from H-M type
+systems, or logic languages like Prolog, this is the same concept. If
+you're not, you might want to read a tutorial on how H-M type
+inference works, or perhaps this blog post on
+[unification in the Chalk project].
 
-I've only talked about type variables here, but region variables
-follow the same principle.  They have upper- and lower-bounds.  A
-region A is a subregion of a region B if A being valid implies that B
-is valid.  This basically corresponds to the block nesting structure:
-the regions for outer block scopes are superregions of those for inner
-block scopes.
+[Unification in the Chalk project]: http://smallcultfollowing.com/babysteps/blog/2017/03/25/unification-in-chalk-part-1/
 
-## Integral and floating-point type variables
+All told, the inference context stores four kinds of inference variables as of this
+writing:
 
-There is a third variety of type variable that we use only for
-inferring the types of unsuffixed integer literals.  Integral type
-variables differ from general-purpose type variables in that there's
-no subtyping relationship among the various integral types, so instead
-of associating each variable with an upper and lower bound, we just
-use simple unification.  Each integer variable is associated with at
-most one integer type.  Floating point types are handled similarly to
-integral types.
+- Type variables, which come in three varieties:
+  - General type variables (the most common). These can be unified with any type.
+  - Integral type variables, which can only be unified with an integral type, and
+    arise from an integer literal expression like `22`.
+  - Float type variables, which can only be unified with a float type, and
+    arise from a float literal expression like `22.0`.
+- Region variables, which represent lifetimes, and arise all over the dang place.
 
-## GLB/LUB
+All the type variables work in much the same way: you can create a new
+type variable, and what you get is `Ty<'tcx>` representing an
+unresolved type `?T`. Then later you can apply the various operations
+that the inferencer supports, such as equality or subtyping, and it
+will possibly **instantiate** (or **bind**) that `?T` to a specific
+value as a result.
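
As a toy model of this (hand-rolled types, not rustc's actual data structures), a type variable can be pictured as a slot that starts unbound and gets instantiated by an equality operation:

```rust
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Bool,
    Var(usize), // an unresolved inference variable `?T`
}

#[derive(Default)]
struct InferenceTable {
    bindings: Vec<Option<Ty>>, // one slot per variable
}

impl InferenceTable {
    fn next_ty_var(&mut self) -> Ty {
        self.bindings.push(None);
        Ty::Var(self.bindings.len() - 1)
    }

    // Follow bindings until we reach a concrete type or an unbound
    // variable.
    fn resolve(&self, t: &Ty) -> Ty {
        match t {
            Ty::Var(i) => {
                if let Some(bound) = self.bindings[*i].clone() {
                    self.resolve(&bound)
                } else {
                    t.clone()
                }
            }
            _ => t.clone(),
        }
    }

    fn equate(&mut self, a: &Ty, b: &Ty) -> Result<(), String> {
        let (a, b) = (self.resolve(a), self.resolve(b));
        if a == b {
            return Ok(());
        }
        match (a, b) {
            // An unbound variable unifies with anything: instantiate it.
            (Ty::Var(i), other) | (other, Ty::Var(i)) => {
                self.bindings[i] = Some(other);
                Ok(())
            }
            (a, b) => Err(format!("cannot equate {:?} and {:?}", a, b)),
        }
    }
}
```

Once `?T` has been equated with `Int`, resolving it yields `Int`, and a later attempt to equate it with `Bool` fails, which is exactly the "instantiate once, then enforce" behavior described above.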
 
-Computing the greatest-lower-bound and least-upper-bound of two
-types/regions is generally straightforward except when type variables
-are involved. In that case, we follow a similar "try to use the bounds
-when possible but otherwise merge the variables" strategy.  In other
-words, `GLB(A, B)` where `A` and `B` are variables will often result
-in `A` and `B` being merged and the result being `A`.
+The region variables work somewhat differently, and are described
+below in a separate section.
 
-## Type coercion
+## Enforcing equality / subtyping
 
-We have a notion of assignability which differs somewhat from
-subtyping; in particular it may cause region borrowing to occur.  See
-the big comment later in this file on Type Coercion for specifics.
+The most basic operation you can perform in the type inferencer is
+**equality**, which forces two types `T` and `U` to be the same. The
+recommended way to add an equality constraint is using the `at`
+method, roughly like so:
 
-### In conclusion
+```
+infcx.at(...).eq(t, u);
+```
 
-I showed you three ways to relate `A` and `B`.  There are also more,
-of course, though I'm not sure if there are any more sensible options.
-The main point is that there are various options, each of which
-produce a distinct range of types for `A` and `B`.  Depending on what
-the correct values for A and B are, one of these options will be the
-right choice: but of course we don't know the right values for A and B
-yet, that's what we're trying to find!  In our code, we opt to unify
-(Option #1).
+The first `at()` call provides a bit of context, i.e., why you are
+doing this unification, and in what environment, and the `eq` method
+performs the actual equality constraint.
 
-# Implementation details
+When you equate things, you force them to be precisely equal. Equating
+returns an `InferResult` -- if it returns `Err(err)`, then equating
+failed, and the enclosed `TypeError` will tell you what went wrong.
 
-We make use of a trait-like implementation strategy to consolidate
-duplicated code between subtypes, GLB, and LUB computations.  See the
-section on "Type Combining" in combine.rs for more details.
+The success case is perhaps more interesting. The "primary" return
+type of `eq` is `()` -- that is, when it succeeds, it doesn't return a
+value of any particular interest. Rather, it is executed for its
+side-effects of constraining type variables and so forth. However, the
+actual return type is not `()`, but rather `InferOk<()>`. The
+`InferOk` type is used to carry extra trait obligations -- your job is
+to ensure that these are fulfilled (typically by enrolling them in a
+fulfillment context). See the [trait README] for more background here.
+
+[trait README]: ../traits/README.md
+
+You can also enforce subtyping through `infcx.at(..).sub(..)`. The same
+basic concepts apply as above.
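
The `InferOk` envelope can be sketched with plain Rust types. These are illustrative stand-ins only: string errors and string obligations in place of rustc's real `TypeError` and obligation types.

```rust
type Obligation = String;

struct InferOk<T> {
    value: T,
    obligations: Vec<Obligation>,
}

type InferResult<T> = Result<InferOk<T>, String>;

// A stub "equate" in this style: success returns `InferOk<()>`, whose
// obligations the caller must enroll in a fulfillment context.
fn eq_stub(a: &str, b: &str) -> InferResult<()> {
    let is_var = |t: &str| t.starts_with('?');
    if a == b || is_var(a) || is_var(b) {
        Ok(InferOk {
            value: (),
            obligations: vec![format!("{} == {}", a, b)],
        })
    } else {
        Err(format!("expected `{}`, found `{}`", a, b))
    }
}
```

The point of the shape is that a successful result is not done: the caller still owns the `obligations` and must see that they are discharged.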
+
+## "Trying" equality
+
+Sometimes you would like to know if it is *possible* to equate two
+types without error.  You can test that with `infcx.can_eq` (or
+`infcx.can_sub` for subtyping). If this returns `Ok`, then equality
+is possible -- but in all cases, any side-effects are reversed.
+
+Be aware though that the success or failure of these methods is always
+**modulo regions**. That is, two types `&'a u32` and `&'b u32` will
+return `Ok` for `can_eq`, even if `'a != 'b`.  This falls out from the
+"two-phase" nature of how we solve region constraints.
+
+## Snapshots
+
+As described in the previous section on `can_eq`, often it is useful
+to be able to do a series of operations and then roll back their
+side-effects. This is done for various reasons: one of them is to be
+able to backtrack, trying out multiple possibilities before settling
+on which path to take. Another is in order to ensure that a series of
+smaller changes take place atomically or not at all.
+
+To allow for this, the inference context supports a `snapshot` method.
+When you call it, it will start recording changes that occur from the
+operations you perform. When you are done, you can either invoke
+`rollback_to`, which will undo those changes, or else `confirm`, which
+will make them permanent. Snapshots can be nested as long as you follow
+a stack-like discipline.
+
+Rather than use snapshots directly, it is often helpful to use the
+methods like `commit_if_ok` or `probe` that encapsulate higher-level
+patterns.
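
The undo-log idea behind snapshots can be sketched with a toy table. This is illustrative only; rustc's real implementation lives in its unification and region-constraint structures, but the stack discipline is the same: every write is recorded, a snapshot is just a length marker, and rolling back replays the log in reverse.

```rust
use std::collections::HashMap;

struct Table {
    values: HashMap<&'static str, i32>,
    undo_log: Vec<(&'static str, Option<i32>)>,
}

struct Snapshot {
    len: usize,
}

impl Table {
    fn new() -> Self {
        Table { values: HashMap::new(), undo_log: Vec::new() }
    }

    fn set(&mut self, key: &'static str, val: i32) {
        // Record the previous value so this write can be undone.
        let previous = self.values.insert(key, val);
        self.undo_log.push((key, previous));
    }

    fn start_snapshot(&self) -> Snapshot {
        Snapshot { len: self.undo_log.len() }
    }

    fn rollback_to(&mut self, snapshot: Snapshot) {
        // Undo every write made since the snapshot, newest first.
        while self.undo_log.len() > snapshot.len {
            let (key, previous) = self.undo_log.pop().unwrap();
            match previous {
                Some(v) => { self.values.insert(key, v); }
                None => { self.values.remove(key); }
            }
        }
    }

    fn commit(&mut self, snapshot: Snapshot) {
        // Changes stay; just discard the marker. (A real implementation
        // also truncates the log when the outermost snapshot commits.)
        let _ = snapshot;
    }
}
```

Because a snapshot is only a log position, nesting works for free as long as inner snapshots are rolled back or committed before outer ones.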
+
+## Subtyping obligations
+
+One thing worth discussing is subtyping obligations. When you require
+one type to be a subtype of another, like `?T <: i32`, we can often
+convert that into an equality constraint. This follows from Rust's
+rather limited notion of subtyping: in the above case, `?T <: i32` is
+equivalent to `?T = i32`.
+
+However, in some cases we have to be more careful; for example, when
+regions are involved. If you have `?T <: &'a i32`, what we would do
+is to first "generalize" `&'a i32` into a type with a region variable:
+`&'?b i32`, and then unify `?T` with that (`?T = &'?b i32`). We then
+relate this new variable with the original bound:
+
+    &'?b i32 <: &'a i32
+
+This will result in a region constraint (see below) of `'?b: 'a`.
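
A toy rendering of that generalization step (hypothetical `Ty`/`Region` enums, not rustc's): walk the bound type, replace each region with a fresh region variable, and record one outlives constraint per replacement.

```rust
#[derive(Clone, Debug, PartialEq)]
enum Region {
    Named(&'static str), // e.g. 'a
    Var(usize),          // e.g. '?b
}

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    I32,
    Ref(Region, Box<Ty>), // &'r T
}

fn generalize(
    t: &Ty,
    next_region_var: &mut usize,
    constraints: &mut Vec<(Region, Region)>, // ('sup, 'sub) meaning 'sup: 'sub
) -> Ty {
    match t {
        Ty::I32 => Ty::I32,
        Ty::Ref(r, inner) => {
            let fresh = Region::Var(*next_region_var);
            *next_region_var += 1;
            // The fresh variable must outlive the region it replaced.
            constraints.push((fresh.clone(), r.clone()));
            Ty::Ref(fresh, Box::new(generalize(inner, next_region_var, constraints)))
        }
    }
}
```

Generalizing `&'a i32` this way yields `&'?0 i32` plus the single constraint `'?0: 'a`, matching the `'?b: 'a` example above.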
+
+One final interesting case is relating two unbound type variables,
+like `?T <: ?U`.  In that case, we can't make progress, so we enqueue
+an obligation `Subtype(?T, ?U)` and return it via the `InferOk`
+mechanism. You'll have to try again when more details about `?T` or
+`?U` are known.
+
+## Region constraints
+
+Regions are inferred somewhat differently from types. Rather than
+eagerly unifying things, we simply collect constraints as we go, but
+make (almost) no attempt to solve regions. These constraints have the
+form of an outlives constraint:
+
+    'a: 'b
+
+Actually, the code tends to view them as subregion relations, but it's
+the same idea:
+
+    'b <= 'a
+
+(There are various other kinds of constraints, such as "verifys"; see
+the `region_constraints` module for details.)
+
+There is one case where we do some amount of eager unification. If you
+have an equality constraint between two regions
+
+    'a = 'b
+
+we will record that fact in a unification table. You can then use
+`opportunistic_resolve_var` to convert `'b` to `'a` (or vice
+versa). This is sometimes needed to ensure termination of fixed-point
+algorithms.
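
A minimal sketch of such a unification table, as a textbook union-find (names are hypothetical, not rustc's API): equated region variables end up sharing one representative, which is what an opportunistic resolve hands back.

```rust
struct RegionTable {
    parent: Vec<usize>, // parent[r] == r means r is a representative
}

impl RegionTable {
    fn new(num_vars: usize) -> Self {
        RegionTable { parent: (0..num_vars).collect() }
    }

    // Find the representative for `r`, compressing paths as we go.
    fn find(&mut self, r: usize) -> usize {
        let p = self.parent[r];
        if p != r {
            let root = self.find(p);
            self.parent[r] = root;
            root
        } else {
            r
        }
    }

    // Record `'a = 'b` by merging the two equivalence classes.
    fn make_eq(&mut self, a: usize, b: usize) {
        let (ra, rb) = (self.find(a), self.find(b));
        self.parent[rb] = ra;
    }
}
```

After `make_eq(0, 1)`, resolving either variable yields the same representative, which is the property fixed-point algorithms rely on for termination.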
+
+## Extracting region constraints
+
+Ultimately, region constraints are only solved at the very end of
+type-checking, once all other constraints are known. There are two
+ways to solve region constraints right now: lexical and
+non-lexical. Eventually there will only be one.
+
+To solve **lexical** region constraints, you invoke
+`resolve_regions_and_report_errors`.  This will "close" the region
+constraint process and invoke the `lexical_region_resolve` code. Once
+this is done, any further attempt to equate or create a subtyping
+relationship will yield an ICE.
+
+Non-lexical region constraints are not handled within the inference
+context. Instead, the NLL solver (actually, the MIR type-checker)
+invokes `take_and_reset_region_constraints` periodically. This
+extracts all of the outlives constraints from the region solver, but
+leaves the set of variables intact. This is used to get *just* the
+region constraints that resulted from some particular point in the
+program, since the NLL solver needs to know not just *what* regions
+were subregions but *where*. Finally, the NLL solver invokes
+`take_region_var_origins`, which "closes" the region constraint
+process in the same way as normal solving.
+
+## Lexical region resolution
+
+Lexical region resolution is done by initially assigning each region
+variable to an empty value. We then process each outlives constraint
+repeatedly, growing region variables until a fixed-point is reached.
+Region variables can be grown using a least-upper-bound relation on
+the region lattice in a fairly straight-forward fashion.
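
Under stated assumptions (a toy lattice where a region variable's value is a set of program points and the least-upper-bound is set union), the fixed-point loop can be sketched as:

```rust
use std::collections::BTreeSet;

// `initial` seeds region variables with points they must contain;
// each `(sup, sub)` pair is an outlives constraint `'sup: 'sub`.
fn resolve_regions(
    num_vars: usize,
    initial: &[(usize, u32)],
    outlives: &[(usize, usize)],
) -> Vec<BTreeSet<u32>> {
    // Every region variable starts at the empty value.
    let mut values: Vec<BTreeSet<u32>> = vec![BTreeSet::new(); num_vars];
    for &(var, point) in initial {
        values[var].insert(point);
    }
    // Process constraints repeatedly, growing variables via LUB
    // (set union here) until nothing changes.
    let mut changed = true;
    while changed {
        changed = false;
        for &(sup, sub) in outlives {
            let needed: Vec<u32> = values[sub].iter().copied().collect();
            for p in needed {
                if values[sup].insert(p) {
                    changed = true;
                }
            }
        }
    }
    values
}
```

With `'1: '0` and `'2: '1`, any points seeded into `'0` propagate transitively into `'1` and `'2` over the iterations, and the loop stops once a full pass makes no change.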
diff --git a/src/librustc/infer/equate.rs b/src/librustc/infer/equate.rs
index f9ffaee..2ae8f8a 100644
--- a/src/librustc/infer/equate.rs
+++ b/src/librustc/infer/equate.rs
@@ -104,7 +104,8 @@
                a,
                b);
         let origin = Subtype(self.fields.trace.clone());
-        self.fields.infcx.region_vars.make_eqregion(origin, a, b);
+        self.fields.infcx.borrow_region_constraints()
+                         .make_eqregion(origin, a, b);
         Ok(a)
     }
 
diff --git a/src/librustc/infer/error_reporting/different_lifetimes.rs b/src/librustc/infer/error_reporting/different_lifetimes.rs
index 36370e2..c64bd61 100644
--- a/src/librustc/infer/error_reporting/different_lifetimes.rs
+++ b/src/librustc/infer/error_reporting/different_lifetimes.rs
@@ -13,8 +13,8 @@
 use hir;
 use infer::InferCtxt;
 use ty::{self, Region};
-use infer::region_inference::RegionResolutionError::*;
-use infer::region_inference::RegionResolutionError;
+use infer::lexical_region_resolve::RegionResolutionError::*;
+use infer::lexical_region_resolve::RegionResolutionError;
 use hir::map as hir_map;
 use middle::resolve_lifetime as rl;
 use hir::intravisit::{self, Visitor, NestedVisitorMap};
diff --git a/src/librustc/infer/error_reporting/mod.rs b/src/librustc/infer/error_reporting/mod.rs
index e9916bd..4f36193 100644
--- a/src/librustc/infer/error_reporting/mod.rs
+++ b/src/librustc/infer/error_reporting/mod.rs
@@ -57,8 +57,8 @@
 
 use infer;
 use super::{InferCtxt, TypeTrace, SubregionOrigin, RegionVariableOrigin, ValuePairs};
-use super::region_inference::{RegionResolutionError, ConcreteFailure, SubSupConflict,
-                              GenericBoundFailure, GenericKind};
+use super::region_constraints::GenericKind;
+use super::lexical_region_resolve::RegionResolutionError;
 
 use std::fmt;
 use hir;
@@ -177,13 +177,7 @@
 
             ty::ReEarlyBound(_) |
             ty::ReFree(_) => {
-                let scope = match *region {
-                    ty::ReEarlyBound(ref br) => {
-                        self.parent_def_id(br.def_id).unwrap()
-                    }
-                    ty::ReFree(ref fr) => fr.scope,
-                    _ => bug!()
-                };
+                let scope = region.free_region_binding_scope(self);
                 let prefix = match *region {
                     ty::ReEarlyBound(ref br) => {
                         format!("the lifetime {} as defined on", br.name)
@@ -293,33 +287,37 @@
             debug!("report_region_errors: error = {:?}", error);
 
             if !self.try_report_named_anon_conflict(&error) &&
-               !self.try_report_anon_anon_conflict(&error) {
+                !self.try_report_anon_anon_conflict(&error)
+            {
+                match error.clone() {
+                    // These errors could indicate all manner of different
+                    // problems with many different solutions. Rather
+                    // than generate a "one size fits all" error, what we
+                    // attempt to do is go through a number of specific
+                    // scenarios and try to find the best way to present
+                    // the error. If all of these fails, we fall back to a rather
+                    // general bit of code that displays the error information
+                    RegionResolutionError::ConcreteFailure(origin, sub, sup) => {
+                        self.report_concrete_failure(region_scope_tree, origin, sub, sup).emit();
+                    }
 
-               match error.clone() {
-                  // These errors could indicate all manner of different
-                  // problems with many different solutions. Rather
-                  // than generate a "one size fits all" error, what we
-                  // attempt to do is go through a number of specific
-                  // scenarios and try to find the best way to present
-                  // the error. If all of these fails, we fall back to a rather
-                  // general bit of code that displays the error information
-                  ConcreteFailure(origin, sub, sup) => {
-                      self.report_concrete_failure(region_scope_tree, origin, sub, sup).emit();
-                  }
+                    RegionResolutionError::GenericBoundFailure(kind, param_ty, sub) => {
+                        self.report_generic_bound_failure(region_scope_tree, kind, param_ty, sub);
+                    }
 
-                  GenericBoundFailure(kind, param_ty, sub) => {
-                      self.report_generic_bound_failure(region_scope_tree, kind, param_ty, sub);
-                  }
-
-                  SubSupConflict(var_origin, sub_origin, sub_r, sup_origin, sup_r) => {
+                    RegionResolutionError::SubSupConflict(var_origin,
+                                                          sub_origin,
+                                                          sub_r,
+                                                          sup_origin,
+                                                          sup_r) => {
                         self.report_sub_sup_conflict(region_scope_tree,
                                                      var_origin,
                                                      sub_origin,
                                                      sub_r,
                                                      sup_origin,
                                                      sup_r);
-                  }
-               }
+                    }
+                }
             }
         }
     }
@@ -351,9 +349,9 @@
         // the only thing in the list.
 
         let is_bound_failure = |e: &RegionResolutionError<'tcx>| match *e {
-            ConcreteFailure(..) => false,
-            SubSupConflict(..) => false,
-            GenericBoundFailure(..) => true,
+            RegionResolutionError::GenericBoundFailure(..) => true,
+            RegionResolutionError::ConcreteFailure(..) |
+            RegionResolutionError::SubSupConflict(..) => false,
         };
 
 
@@ -365,9 +363,9 @@
 
         // sort the errors by span, for better error message stability.
         errors.sort_by_key(|u| match *u {
-            ConcreteFailure(ref sro, _, _) => sro.span(),
-            GenericBoundFailure(ref sro, _, _) => sro.span(),
-            SubSupConflict(ref rvo, _, _, _, _) => rvo.span(),
+            RegionResolutionError::ConcreteFailure(ref sro, _, _) => sro.span(),
+            RegionResolutionError::GenericBoundFailure(ref sro, _, _) => sro.span(),
+            RegionResolutionError::SubSupConflict(ref rvo, _, _, _, _) => rvo.span(),
         });
         errors
     }
@@ -764,9 +762,12 @@
             }
         }
 
-        self.note_error_origin(diag, &cause);
         self.check_and_note_conflicting_crates(diag, terr, span);
         self.tcx.note_and_explain_type_err(diag, terr, span);
+
+        // It reads better to have the error origin as the final
+        // thing.
+        self.note_error_origin(diag, &cause);
     }
 
     pub fn report_and_explain_type_error(&self,
@@ -774,6 +775,10 @@
                                          terr: &TypeError<'tcx>)
                                          -> DiagnosticBuilder<'tcx>
     {
+        debug!("report_and_explain_type_error(trace={:?}, terr={:?})",
+               trace,
+               terr);
+
         let span = trace.cause.span;
         let failure_str = trace.cause.as_failure_str();
         let mut diag = match trace.cause.code {
@@ -880,14 +885,13 @@
         };
 
         if let SubregionOrigin::CompareImplMethodObligation {
-            span, item_name, impl_item_def_id, trait_item_def_id, lint_id
+            span, item_name, impl_item_def_id, trait_item_def_id,
         } = origin {
             self.report_extra_impl_obligation(span,
                                               item_name,
                                               impl_item_def_id,
                                               trait_item_def_id,
-                                              &format!("`{}: {}`", bound_kind, sub),
-                                              lint_id)
+                                              &format!("`{}: {}`", bound_kind, sub))
                 .emit();
             return;
         }
@@ -1026,6 +1030,7 @@
                 let var_name = self.tcx.hir.name(var_node_id);
                 format!(" for capture of `{}` by closure", var_name)
             }
+            infer::NLL(..) => bug!("NLL variable found in lexical phase"),
         };
 
         struct_span_err!(self.tcx.sess, var_origin.span(), E0495,
diff --git a/src/librustc/infer/error_reporting/named_anon_conflict.rs b/src/librustc/infer/error_reporting/named_anon_conflict.rs
index e0b8a19..6af7415 100644
--- a/src/librustc/infer/error_reporting/named_anon_conflict.rs
+++ b/src/librustc/infer/error_reporting/named_anon_conflict.rs
@@ -11,8 +11,8 @@
 //! Error Reporting for Anonymous Region Lifetime Errors
 //! where one region is named and the other is anonymous.
 use infer::InferCtxt;
-use infer::region_inference::RegionResolutionError::*;
-use infer::region_inference::RegionResolutionError;
+use infer::lexical_region_resolve::RegionResolutionError::*;
+use infer::lexical_region_resolve::RegionResolutionError;
 use ty;
 
 impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
diff --git a/src/librustc/infer/error_reporting/note.rs b/src/librustc/infer/error_reporting/note.rs
index 1f0fd7b..e46613b 100644
--- a/src/librustc/infer/error_reporting/note.rs
+++ b/src/librustc/infer/error_reporting/note.rs
@@ -445,14 +445,12 @@
             infer::CompareImplMethodObligation { span,
                                                  item_name,
                                                  impl_item_def_id,
-                                                 trait_item_def_id,
-                                                 lint_id } => {
+                                                 trait_item_def_id } => {
                 self.report_extra_impl_obligation(span,
                                                   item_name,
                                                   impl_item_def_id,
                                                   trait_item_def_id,
-                                                  &format!("`{}: {}`", sup, sub),
-                                                  lint_id)
+                                                  &format!("`{}: {}`", sup, sub))
             }
         }
     }
diff --git a/src/librustc/infer/fudge.rs b/src/librustc/infer/fudge.rs
index 9cad6ce..756a694 100644
--- a/src/librustc/infer/fudge.rs
+++ b/src/librustc/infer/fudge.rs
@@ -78,8 +78,8 @@
                         self.type_variables.borrow_mut().types_created_since_snapshot(
                             &snapshot.type_snapshot);
                     let region_vars =
-                        self.region_vars.vars_created_since_snapshot(
-                            &snapshot.region_vars_snapshot);
+                        self.borrow_region_constraints().vars_created_since_snapshot(
+                            &snapshot.region_constraints_snapshot);
 
                     Ok((type_variables, region_vars, value))
                 }
diff --git a/src/librustc/infer/glb.rs b/src/librustc/infer/glb.rs
index d7afeba..fd14e0e 100644
--- a/src/librustc/infer/glb.rs
+++ b/src/librustc/infer/glb.rs
@@ -15,6 +15,7 @@
 
 use traits::ObligationCause;
 use ty::{self, Ty, TyCtxt};
+use ty::error::TypeError;
 use ty::relate::{Relate, RelateResult, TypeRelation};
 
 /// "Greatest lower bound" (common subtype)
@@ -67,14 +68,39 @@
                b);
 
         let origin = Subtype(self.fields.trace.clone());
-        Ok(self.fields.infcx.region_vars.glb_regions(origin, a, b))
+        Ok(self.fields.infcx.borrow_region_constraints().glb_regions(self.tcx(), origin, a, b))
     }
 
     fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
                   -> RelateResult<'tcx, ty::Binder<T>>
         where T: Relate<'tcx>
     {
-        self.fields.higher_ranked_glb(a, b, self.a_is_expected)
+        debug!("binders(a={:?}, b={:?})", a, b);
+        let was_error = self.infcx().probe(|_snapshot| {
+            // Subtle: use a fresh combine-fields here because we recover
+            // from Err. Doing otherwise could propagate obligations out
+            // through our `self.obligations` field.
+            self.infcx()
+                .combine_fields(self.fields.trace.clone(), self.fields.param_env)
+                .higher_ranked_glb(a, b, self.a_is_expected)
+                .is_err()
+        });
+        debug!("binders: was_error={:?}", was_error);
+
+        // When higher-ranked types are involved, computing the GLB is
+        // very challenging, so we switch to invariance. This is obviously
+        // overly conservative but works ok in practice.
+        match self.relate_with_variance(ty::Variance::Invariant, a, b) {
+            Ok(_) => Ok(a.clone()),
+            Err(err) => {
+                debug!("binders: error occurred, was_error={:?}", was_error);
+                if !was_error {
+                    Err(TypeError::OldStyleLUB(Box::new(err)))
+                } else {
+                    Err(err)
+                }
+            }
+        }
     }
 }
 
diff --git a/src/librustc/infer/higher_ranked/mod.rs b/src/librustc/infer/higher_ranked/mod.rs
index 6736751..57e237fb 100644
--- a/src/librustc/infer/higher_ranked/mod.rs
+++ b/src/librustc/infer/higher_ranked/mod.rs
@@ -17,8 +17,9 @@
             SubregionOrigin,
             SkolemizationMap};
 use super::combine::CombineFields;
-use super::region_inference::{TaintDirections};
+use super::region_constraints::{TaintDirections};
 
+use std::collections::BTreeMap;
 use ty::{self, TyCtxt, Binder, TypeFoldable};
 use ty::error::TypeError;
 use ty::relate::{Relate, RelateResult, TypeRelation};
@@ -176,9 +177,10 @@
                                      .filter(|&r| r != representative)
                 {
                     let origin = SubregionOrigin::Subtype(self.trace.clone());
-                    self.infcx.region_vars.make_eqregion(origin,
-                                                         *representative,
-                                                         *region);
+                    self.infcx.borrow_region_constraints()
+                              .make_eqregion(origin,
+                                             *representative,
+                                             *region);
                 }
             }
 
@@ -245,7 +247,7 @@
                                              snapshot: &CombinedSnapshot,
                                              debruijn: ty::DebruijnIndex,
                                              new_vars: &[ty::RegionVid],
-                                             a_map: &FxHashMap<ty::BoundRegion, ty::Region<'tcx>>,
+                                             a_map: &BTreeMap<ty::BoundRegion, ty::Region<'tcx>>,
                                              r0: ty::Region<'tcx>)
                                              -> ty::Region<'tcx> {
             // Regions that pre-dated the LUB computation stay as they are.
@@ -341,7 +343,7 @@
                                              snapshot: &CombinedSnapshot,
                                              debruijn: ty::DebruijnIndex,
                                              new_vars: &[ty::RegionVid],
-                                             a_map: &FxHashMap<ty::BoundRegion, ty::Region<'tcx>>,
+                                             a_map: &BTreeMap<ty::BoundRegion, ty::Region<'tcx>>,
                                              a_vars: &[ty::RegionVid],
                                              b_vars: &[ty::RegionVid],
                                              r0: ty::Region<'tcx>)
@@ -410,7 +412,7 @@
 
         fn rev_lookup<'a, 'gcx, 'tcx>(infcx: &InferCtxt<'a, 'gcx, 'tcx>,
                                       span: Span,
-                                      a_map: &FxHashMap<ty::BoundRegion, ty::Region<'tcx>>,
+                                      a_map: &BTreeMap<ty::BoundRegion, ty::Region<'tcx>>,
                                       r: ty::Region<'tcx>) -> ty::Region<'tcx>
         {
             for (a_br, a_r) in a_map {
@@ -427,13 +429,13 @@
         fn fresh_bound_variable<'a, 'gcx, 'tcx>(infcx: &InferCtxt<'a, 'gcx, 'tcx>,
                                                 debruijn: ty::DebruijnIndex)
                                                 -> ty::Region<'tcx> {
-            infcx.region_vars.new_bound(debruijn)
+            infcx.borrow_region_constraints().new_bound(infcx.tcx, debruijn)
         }
     }
 }
 
 fn var_ids<'a, 'gcx, 'tcx>(fields: &CombineFields<'a, 'gcx, 'tcx>,
-                           map: &FxHashMap<ty::BoundRegion, ty::Region<'tcx>>)
+                           map: &BTreeMap<ty::BoundRegion, ty::Region<'tcx>>)
                            -> Vec<ty::RegionVid> {
     map.iter()
        .map(|(_, &r)| match *r {
@@ -481,7 +483,11 @@
                        r: ty::Region<'tcx>,
                        directions: TaintDirections)
                        -> FxHashSet<ty::Region<'tcx>> {
-        self.region_vars.tainted(&snapshot.region_vars_snapshot, r, directions)
+        self.borrow_region_constraints().tainted(
+            self.tcx,
+            &snapshot.region_constraints_snapshot,
+            r,
+            directions)
     }
 
     fn region_vars_confined_to_snapshot(&self,
@@ -539,7 +545,8 @@
          */
 
         let mut region_vars =
-            self.region_vars.vars_created_since_snapshot(&snapshot.region_vars_snapshot);
+            self.borrow_region_constraints().vars_created_since_snapshot(
+                &snapshot.region_constraints_snapshot);
 
         let escaping_types =
             self.type_variables.borrow_mut().types_escaping_snapshot(&snapshot.type_snapshot);
@@ -581,7 +588,8 @@
         where T : TypeFoldable<'tcx>
     {
         let (result, map) = self.tcx.replace_late_bound_regions(binder, |br| {
-            self.region_vars.push_skolemized(br, &snapshot.region_vars_snapshot)
+            self.borrow_region_constraints()
+                .push_skolemized(self.tcx, br, &snapshot.region_constraints_snapshot)
         });
 
         debug!("skolemize_bound_regions(binder={:?}, result={:?}, map={:?})",
@@ -766,7 +774,8 @@
     {
         debug!("pop_skolemized({:?})", skol_map);
         let skol_regions: FxHashSet<_> = skol_map.values().cloned().collect();
-        self.region_vars.pop_skolemized(&skol_regions, &snapshot.region_vars_snapshot);
+        self.borrow_region_constraints()
+            .pop_skolemized(self.tcx, &skol_regions, &snapshot.region_constraints_snapshot);
         if !skol_map.is_empty() {
             self.projection_cache.borrow_mut().rollback_skolemized(
                 &snapshot.projection_cache_snapshot);
diff --git a/src/librustc/infer/region_inference/README.md b/src/librustc/infer/lexical_region_resolve/README.md
similarity index 77%
rename from src/librustc/infer/region_inference/README.md
rename to src/librustc/infer/lexical_region_resolve/README.md
index b564faf..a902308 100644
--- a/src/librustc/infer/region_inference/README.md
+++ b/src/librustc/infer/lexical_region_resolve/README.md
@@ -1,10 +1,13 @@
-Region inference
+# Region inference
 
-# Terminology
+## Terminology
 
 Note that we use the terms region and lifetime interchangeably.
 
-# Introduction
+## Introduction
+
+See the [general inference README](../README.md) for an overview of
+how lexical-region-solving fits into the bigger picture.
 
 Region inference uses a somewhat more involved algorithm than type
 inference. It is not the most efficient thing ever written though it
@@ -16,63 +19,6 @@
 regions are a simpler case than types: they don't have aggregate
 structure, for example.
 
-Unlike normal type inference, which is similar in spirit to H-M and thus
-works progressively, the region type inference works by accumulating
-constraints over the course of a function.  Finally, at the end of
-processing a function, we process and solve the constraints all at
-once.
-
-The constraints are always of one of three possible forms:
-
-- `ConstrainVarSubVar(Ri, Rj)` states that region variable Ri must be
-  a subregion of Rj
-- `ConstrainRegSubVar(R, Ri)` states that the concrete region R (which
-  must not be a variable) must be a subregion of the variable Ri
-- `ConstrainVarSubReg(Ri, R)` states the variable Ri shoudl be less
-  than the concrete region R. This is kind of deprecated and ought to
-  be replaced with a verify (they essentially play the same role).
-
-In addition to constraints, we also gather up a set of "verifys"
-(what, you don't think Verify is a noun? Get used to it my
-friend!). These represent relations that must hold but which don't
-influence inference proper. These take the form of:
-
-- `VerifyRegSubReg(Ri, Rj)` indicates that Ri <= Rj must hold,
-  where Rj is not an inference variable (and Ri may or may not contain
-  one). This doesn't influence inference because we will already have
-  inferred Ri to be as small as possible, so then we just test whether
-  that result was less than Rj or not.
-- `VerifyGenericBound(R, Vb)` is a more complex expression which tests
-  that the region R must satisfy the bound `Vb`. The bounds themselves
-  may have structure like "must outlive one of the following regions"
-  or "must outlive ALL of the following regions. These bounds arise
-  from constraints like `T: 'a` -- if we know that `T: 'b` and `T: 'c`
-  (say, from where clauses), then we can conclude that `T: 'a` if `'b:
-  'a` *or* `'c: 'a`.
-
-# Building up the constraints
-
-Variables and constraints are created using the following methods:
-
-- `new_region_var()` creates a new, unconstrained region variable;
-- `make_subregion(Ri, Rj)` states that Ri is a subregion of Rj
-- `lub_regions(Ri, Rj) -> Rk` returns a region Rk which is
-  the smallest region that is greater than both Ri and Rj
-- `glb_regions(Ri, Rj) -> Rk` returns a region Rk which is
-  the greatest region that is smaller than both Ri and Rj
-
-The actual region resolution algorithm is not entirely
-obvious, though it is also not overly complex.
-
-## Snapshotting
-
-It is also permitted to try (and rollback) changes to the graph.  This
-is done by invoking `start_snapshot()`, which returns a value.  Then
-later you can call `rollback_to()` which undoes the work.
-Alternatively, you can call `commit()` which ends all snapshots.
-Snapshots can be recursive---so you can start a snapshot when another
-is in progress, but only the root snapshot can "commit".
-
 ## The problem
 
 Basically our input is a directed graph where nodes can be divided
@@ -109,9 +55,9 @@
 satisfied. These bounds represent the "maximal" values that a region
 variable can take on, basically.
 
-# The Region Hierarchy
+## The Region Hierarchy
 
-## Without closures
+### Without closures
 
 Let's first consider the region hierarchy without thinking about
 closures, because they add a lot of complications. The region
@@ -141,7 +87,7 @@
 also the expression `x + y`. The expression itself has sublifetimes
 for evaluating `x` and `y`.
 
-## Function calls
+### Function calls
 
 Function calls are a bit tricky. I will describe how we handle them
 *now* and then a bit about how we can improve them (Issue #6268).
@@ -259,7 +205,7 @@
 the borrow expression, we must issue sufficient restrictions to ensure
 that the pointee remains valid.
 
-## Modeling closures
+### Modeling closures
 
 Integrating closures properly into the model is a bit of
 work-in-progress. In an ideal world, we would model closures as
@@ -314,8 +260,3 @@
 type-checking accepting incorrect code (though it sometimes rejects
 what might be considered correct code; see rust-lang/rust#22557), but
 it still doesn't feel like the right approach.
-
-### Skolemization
-
-For a discussion on skolemization and higher-ranked subtyping, please
-see the module `middle::infer::higher_ranked::doc`.
diff --git a/src/librustc/infer/region_inference/graphviz.rs b/src/librustc/infer/lexical_region_resolve/graphviz.rs
similarity index 88%
rename from src/librustc/infer/region_inference/graphviz.rs
rename to src/librustc/infer/lexical_region_resolve/graphviz.rs
index efe3641..4120948 100644
--- a/src/librustc/infer/region_inference/graphviz.rs
+++ b/src/librustc/infer/lexical_region_resolve/graphviz.rs
@@ -9,7 +9,7 @@
 // except according to those terms.
 
 //! This module provides linkage between libgraphviz traits and
-//! `rustc::middle::typeck::infer::region_inference`, generating a
+//! `rustc::middle::typeck::infer::region_constraints`, generating a
 //! rendering of the graph represented by the list of `Constraint`
 //! instances (which make up the edges of the graph), as well as the
 //! origin for each constraint (which are attached to the labels on
@@ -25,7 +25,7 @@
 use middle::region;
 use super::Constraint;
 use infer::SubregionOrigin;
-use infer::region_inference::RegionVarBindings;
+use infer::region_constraints::RegionConstraintData;
 use util::nodemap::{FxHashMap, FxHashSet};
 
 use std::borrow::Cow;
@@ -57,12 +57,13 @@
 }
 
 pub fn maybe_print_constraints_for<'a, 'gcx, 'tcx>(
-    region_vars: &RegionVarBindings<'a, 'gcx, 'tcx>,
+    region_data: &RegionConstraintData<'tcx>,
     region_rels: &RegionRelations<'a, 'gcx, 'tcx>)
 {
+    let tcx = region_rels.tcx;
     let context = region_rels.context;
 
-    if !region_vars.tcx.sess.opts.debugging_opts.print_region_graph {
+    if !tcx.sess.opts.debugging_opts.print_region_graph {
         return;
     }
 
@@ -112,12 +113,11 @@
         }
     };
 
-    let constraints = &*region_vars.constraints.borrow();
-    match dump_region_constraints_to(region_rels, constraints, &output_path) {
+    match dump_region_data_to(region_rels, &region_data.constraints, &output_path) {
         Ok(()) => {}
         Err(e) => {
             let msg = format!("io error dumping region constraints: {}", e);
-            region_vars.tcx.sess.err(&msg)
+            tcx.sess.err(&msg)
         }
     }
 }
@@ -212,13 +212,13 @@
 
 fn constraint_to_nodes(c: &Constraint) -> (Node, Node) {
     match *c {
-        Constraint::ConstrainVarSubVar(rv_1, rv_2) =>
+        Constraint::VarSubVar(rv_1, rv_2) =>
             (Node::RegionVid(rv_1), Node::RegionVid(rv_2)),
-        Constraint::ConstrainRegSubVar(r_1, rv_2) =>
+        Constraint::RegSubVar(r_1, rv_2) =>
             (Node::Region(*r_1), Node::RegionVid(rv_2)),
-        Constraint::ConstrainVarSubReg(rv_1, r_2) =>
+        Constraint::VarSubReg(rv_1, r_2) =>
             (Node::RegionVid(rv_1), Node::Region(*r_2)),
-        Constraint::ConstrainRegSubReg(r_1, r_2) =>
+        Constraint::RegSubReg(r_1, r_2) =>
             (Node::Region(*r_1), Node::Region(*r_2)),
     }
 }
@@ -267,15 +267,15 @@
 
 pub type ConstraintMap<'tcx> = BTreeMap<Constraint<'tcx>, SubregionOrigin<'tcx>>;
 
-fn dump_region_constraints_to<'a, 'gcx, 'tcx>(region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                                              map: &ConstraintMap<'tcx>,
-                                              path: &str)
-                                              -> io::Result<()> {
-    debug!("dump_region_constraints map (len: {}) path: {}",
+fn dump_region_data_to<'a, 'gcx, 'tcx>(region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
+                                       map: &ConstraintMap<'tcx>,
+                                       path: &str)
+                                       -> io::Result<()> {
+    debug!("dump_region_data map (len: {}) path: {}",
            map.len(),
            path);
-    let g = ConstraintGraph::new(format!("region_constraints"), region_rels, map);
-    debug!("dump_region_constraints calling render");
+    let g = ConstraintGraph::new(format!("region_data"), region_rels, map);
+    debug!("dump_region_data calling render");
     let mut v = Vec::new();
     dot::render(&g, &mut v).unwrap();
     File::create(path).and_then(|mut f| f.write_all(&v))
diff --git a/src/librustc/infer/lexical_region_resolve/mod.rs b/src/librustc/infer/lexical_region_resolve/mod.rs
new file mode 100644
index 0000000..0692d28
--- /dev/null
+++ b/src/librustc/infer/lexical_region_resolve/mod.rs
@@ -0,0 +1,766 @@
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! The code to do lexical region resolution.
+
+use infer::SubregionOrigin;
+use infer::RegionVariableOrigin;
+use infer::region_constraints::Constraint;
+use infer::region_constraints::GenericKind;
+use infer::region_constraints::RegionConstraintData;
+use infer::region_constraints::VarOrigins;
+use infer::region_constraints::VerifyBound;
+use middle::free_region::RegionRelations;
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
+use rustc_data_structures::fx::FxHashSet;
+use rustc_data_structures::graph::{self, Direction, NodeIndex, OUTGOING};
+use std::fmt;
+use std::u32;
+use ty::{self, TyCtxt};
+use ty::{Region, RegionVid};
+use ty::{ReEarlyBound, ReEmpty, ReErased, ReFree, ReStatic};
+use ty::{ReLateBound, ReScope, ReSkolemized, ReVar};
+
+mod graphviz;
+
+/// This function performs lexical region resolution given a complete
+/// set of constraints and variable origins. It performs a fixed-point
+/// iteration to find region values which satisfy all constraints,
+/// assuming such values can be found. It returns the final values of
+/// all the variables as well as a set of errors that must be reported.
+pub fn resolve<'tcx>(
+    region_rels: &RegionRelations<'_, '_, 'tcx>,
+    var_origins: VarOrigins,
+    data: RegionConstraintData<'tcx>,
+) -> (
+    LexicalRegionResolutions<'tcx>,
+    Vec<RegionResolutionError<'tcx>>,
+) {
+    debug!("RegionConstraintData: resolve_regions()");
+    let mut errors = vec![];
+    let mut resolver = LexicalResolver {
+        region_rels,
+        var_origins,
+        data,
+    };
+    let values = resolver.infer_variable_values(&mut errors);
+    (values, errors)
+}
+
+/// Contains the result of lexical region resolution. Offers methods
+/// to look up the final value of a region variable.
+pub struct LexicalRegionResolutions<'tcx> {
+    values: IndexVec<RegionVid, VarValue<'tcx>>,
+    error_region: ty::Region<'tcx>,
+}
+
+#[derive(Copy, Clone, Debug)]
+enum VarValue<'tcx> {
+    Value(Region<'tcx>),
+    ErrorValue,
+}
+
+#[derive(Clone, Debug)]
+pub enum RegionResolutionError<'tcx> {
+    /// `ConcreteFailure(o, a, b)`:
+    ///
+    /// `o` requires that `a <= b`, but this does not hold
+    ConcreteFailure(SubregionOrigin<'tcx>, Region<'tcx>, Region<'tcx>),
+
+    /// `GenericBoundFailure(p, s, a)`:
+    ///
+    /// The parameter/associated-type `p` must be known to outlive the lifetime
+    /// `a` (but none of the known bounds are sufficient).
+    GenericBoundFailure(SubregionOrigin<'tcx>, GenericKind<'tcx>, Region<'tcx>),
+
+    /// `SubSupConflict(v, sub_origin, sub_r, sup_origin, sup_r)`:
+    ///
+    /// Could not infer a value for `v` because `sub_r <= v` (due to
+    /// `sub_origin`) but `v <= sup_r` (due to `sup_origin`) and
+    /// `sub_r <= sup_r` does not hold.
+    SubSupConflict(
+        RegionVariableOrigin,
+        SubregionOrigin<'tcx>,
+        Region<'tcx>,
+        SubregionOrigin<'tcx>,
+        Region<'tcx>,
+    ),
+}
+
+struct RegionAndOrigin<'tcx> {
+    region: Region<'tcx>,
+    origin: SubregionOrigin<'tcx>,
+}
+
+type RegionGraph<'tcx> = graph::Graph<(), Constraint<'tcx>>;
+
+struct LexicalResolver<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
+    region_rels: &'cx RegionRelations<'cx, 'gcx, 'tcx>,
+    var_origins: VarOrigins,
+    data: RegionConstraintData<'tcx>,
+}
+
+impl<'cx, 'gcx, 'tcx> LexicalResolver<'cx, 'gcx, 'tcx> {
+    fn infer_variable_values(
+        &mut self,
+        errors: &mut Vec<RegionResolutionError<'tcx>>,
+    ) -> LexicalRegionResolutions<'tcx> {
+        let mut var_data = self.construct_var_data(self.region_rels.tcx);
+
+        // Dorky hack to cause `dump_constraints` to only get called
+        // if debug mode is enabled:
+        debug!(
+            "----() End constraint listing (context={:?}) {:?}---",
+            self.region_rels.context,
+            self.dump_constraints(self.region_rels)
+        );
+        graphviz::maybe_print_constraints_for(&self.data, self.region_rels);
+
+        let graph = self.construct_graph();
+        self.expand_givens(&graph);
+        self.expansion(&mut var_data);
+        self.collect_errors(&mut var_data, errors);
+        self.collect_var_errors(&var_data, &graph, errors);
+        var_data
+    }
+
+    fn num_vars(&self) -> usize {
+        self.var_origins.len()
+    }
+
+    /// Initially, the value for all variables is set to `'empty`, the
+    /// empty region. The `expansion` phase will grow this larger.
+    fn construct_var_data(&self, tcx: TyCtxt<'_, '_, 'tcx>) -> LexicalRegionResolutions<'tcx> {
+        LexicalRegionResolutions {
+            error_region: tcx.types.re_static,
+            values: (0..self.num_vars())
+                .map(|_| VarValue::Value(tcx.types.re_empty))
+                .collect(),
+        }
+    }
+
+    fn dump_constraints(&self, free_regions: &RegionRelations<'_, '_, 'tcx>) {
+        debug!(
+            "----() Start constraint listing (context={:?}) ()----",
+            free_regions.context
+        );
+        for (idx, (constraint, _)) in self.data.constraints.iter().enumerate() {
+            debug!("Constraint {} => {:?}", idx, constraint);
+        }
+    }
+
+    fn expand_givens(&mut self, graph: &RegionGraph) {
+        // Givens are a kind of horrible hack to account for
+        // constraints like 'c <= '0 that are known to hold due to
+        // closure signatures (see the comment above on the `givens`
+        // field). They should go away. But until they do, the role
+        // of this fn is to account for the transitive nature:
+        //
+        //     Given 'c <= '0
+        //     and   '0 <= '1
+        //     then  'c <= '1
+
+        let seeds: Vec<_> = self.data.givens.iter().cloned().collect();
+        for (r, vid) in seeds {
+            // While all things transitively reachable in the graph
+            // from the variable (`'0` in the example above).
+            let seed_index = NodeIndex(vid.index as usize);
+            for succ_index in graph.depth_traverse(seed_index, OUTGOING) {
+                let succ_index = succ_index.0;
+
+                // The first N nodes correspond to the region
+                // variables. Other nodes correspond to constant
+                // regions.
+                if succ_index < self.num_vars() {
+                    let succ_vid = RegionVid::new(succ_index);
+
+                    // Add `'c <= '1`.
+                    self.data.givens.insert((r, succ_vid));
+                }
+            }
+        }
+    }
+
+    fn expansion(&self, var_values: &mut LexicalRegionResolutions<'tcx>) {
+        self.iterate_until_fixed_point("Expansion", |constraint, origin| {
+            debug!("expansion: constraint={:?} origin={:?}", constraint, origin);
+            match *constraint {
+                Constraint::RegSubVar(a_region, b_vid) => {
+                    let b_data = var_values.value_mut(b_vid);
+                    self.expand_node(a_region, b_vid, b_data)
+                }
+                Constraint::VarSubVar(a_vid, b_vid) => match *var_values.value(a_vid) {
+                    VarValue::ErrorValue => false,
+                    VarValue::Value(a_region) => {
+                        let b_node = var_values.value_mut(b_vid);
+                        self.expand_node(a_region, b_vid, b_node)
+                    }
+                },
+                Constraint::RegSubReg(..) | Constraint::VarSubReg(..) => {
+                    // These constraints are checked after expansion
+                    // is done, in `collect_errors`.
+                    false
+                }
+            }
+        })
+    }
+
+    fn expand_node(
+        &self,
+        a_region: Region<'tcx>,
+        b_vid: RegionVid,
+        b_data: &mut VarValue<'tcx>,
+    ) -> bool {
+        debug!("expand_node({:?}, {:?} == {:?})", a_region, b_vid, b_data);
+
+        // Check if this relationship is implied by a given.
+        match *a_region {
+            ty::ReEarlyBound(_) | ty::ReFree(_) => if self.data.givens.contains(&(a_region, b_vid))
+            {
+                debug!("given");
+                return false;
+            },
+            _ => {}
+        }
+
+        match *b_data {
+            VarValue::Value(cur_region) => {
+                let lub = self.lub_concrete_regions(a_region, cur_region);
+                if lub == cur_region {
+                    return false;
+                }
+
+                debug!(
+                    "Expanding value of {:?} from {:?} to {:?}",
+                    b_vid,
+                    cur_region,
+                    lub
+                );
+
+                *b_data = VarValue::Value(lub);
+                return true;
+            }
+
+            VarValue::ErrorValue => {
+                return false;
+            }
+        }
+    }
+
+
+    fn lub_concrete_regions(&self, a: Region<'tcx>, b: Region<'tcx>) -> Region<'tcx> {
+        let tcx = self.region_rels.tcx;
+        match (a, b) {
+            (&ReLateBound(..), _) | (_, &ReLateBound(..)) | (&ReErased, _) | (_, &ReErased) => {
+                bug!("cannot relate region: LUB({:?}, {:?})", a, b);
+            }
+
+            (r @ &ReStatic, _) | (_, r @ &ReStatic) => {
+                r // nothing lives longer than static
+            }
+
+            (&ReEmpty, r) | (r, &ReEmpty) => {
+                r // everything lives longer than empty
+            }
+
+            (&ReVar(v_id), _) | (_, &ReVar(v_id)) => {
+                span_bug!(
+                    self.var_origins[v_id].span(),
+                    "lub_concrete_regions invoked with non-concrete \
+                     regions: {:?}, {:?}",
+                    a,
+                    b
+                );
+            }
+
+            (&ReEarlyBound(_), &ReScope(s_id)) |
+            (&ReScope(s_id), &ReEarlyBound(_)) |
+            (&ReFree(_), &ReScope(s_id)) |
+            (&ReScope(s_id), &ReFree(_)) => {
+                // A "free" region can be interpreted as "some region
+                // at least as big as fr.scope".  So, we can
+                // reasonably compare free regions and scopes:
+                let fr_scope = match (a, b) {
+                    (&ReEarlyBound(ref br), _) | (_, &ReEarlyBound(ref br)) => self.region_rels
+                        .region_scope_tree
+                        .early_free_scope(self.region_rels.tcx, br),
+                    (&ReFree(ref fr), _) | (_, &ReFree(ref fr)) => self.region_rels
+                        .region_scope_tree
+                        .free_scope(self.region_rels.tcx, fr),
+                    _ => bug!(),
+                };
+                let r_id = self.region_rels
+                    .region_scope_tree
+                    .nearest_common_ancestor(fr_scope, s_id);
+                if r_id == fr_scope {
+                    // if the free region's scope `fr.scope` is bigger than
+                    // the scope region `s_id`, then the LUB is the free
+                    // region itself:
+                    match (a, b) {
+                        (_, &ReScope(_)) => return a,
+                        (&ReScope(_), _) => return b,
+                        _ => bug!(),
+                    }
+                }
+
+                // otherwise, we don't know what the free region is,
+                // so we must conservatively say the LUB is static:
+                tcx.types.re_static
+            }
+
+            (&ReScope(a_id), &ReScope(b_id)) => {
+                // The region corresponding to an outer block is a
+                // subtype of the region corresponding to an inner
+                // block.
+                let lub = self.region_rels
+                    .region_scope_tree
+                    .nearest_common_ancestor(a_id, b_id);
+                tcx.mk_region(ReScope(lub))
+            }
+
+            (&ReEarlyBound(_), &ReEarlyBound(_)) |
+            (&ReFree(_), &ReEarlyBound(_)) |
+            (&ReEarlyBound(_), &ReFree(_)) |
+            (&ReFree(_), &ReFree(_)) => self.region_rels.lub_free_regions(a, b),
+
+            // For these types, we cannot define any additional
+            // relationship:
+            (&ReSkolemized(..), _) | (_, &ReSkolemized(..)) => if a == b {
+                a
+            } else {
+                tcx.types.re_static
+            },
+        }
+    }
+
+    /// After expansion is complete, go and check upper bounds (i.e.,
+    /// cases where the region cannot grow larger than a fixed point)
+    /// and check that they are satisfied.
+    fn collect_errors(
+        &self,
+        var_data: &mut LexicalRegionResolutions<'tcx>,
+        errors: &mut Vec<RegionResolutionError<'tcx>>,
+    ) {
+        for (constraint, origin) in &self.data.constraints {
+            debug!(
+                "collect_errors: constraint={:?} origin={:?}",
+                constraint,
+                origin
+            );
+            match *constraint {
+                Constraint::RegSubVar(..) | Constraint::VarSubVar(..) => {
+                    // Expansion will ensure that these constraints hold. Ignore.
+                }
+
+                Constraint::RegSubReg(sub, sup) => {
+                    if self.region_rels.is_subregion_of(sub, sup) {
+                        continue;
+                    }
+
+                    debug!(
+                        "collect_errors: region error at {:?}: \
+                         cannot verify that {:?} <= {:?}",
+                        origin,
+                        sub,
+                        sup
+                    );
+
+                    errors.push(RegionResolutionError::ConcreteFailure(
+                        (*origin).clone(),
+                        sub,
+                        sup,
+                    ));
+                }
+
+                Constraint::VarSubReg(a_vid, b_region) => {
+                    let a_data = var_data.value_mut(a_vid);
+                    debug!("contraction: {:?} == {:?}, {:?}", a_vid, a_data, b_region);
+
+                    let a_region = match *a_data {
+                        VarValue::ErrorValue => continue,
+                        VarValue::Value(a_region) => a_region,
+                    };
+
+                    // Do not report these errors immediately:
+                    // instead, set the variable value to error and
+                    // collect them later.
+                    if !self.region_rels.is_subregion_of(a_region, b_region) {
+                        debug!(
+                            "collect_errors: region error at {:?}: \
+                             cannot verify that {:?}={:?} <= {:?}",
+                            origin,
+                            a_vid,
+                            a_region,
+                            b_region
+                        );
+                        *a_data = VarValue::ErrorValue;
+                    }
+                }
+            }
+        }
+
+        for verify in &self.data.verifys {
+            debug!("collect_errors: verify={:?}", verify);
+            let sub = var_data.normalize(verify.region);
+
+            // This was an inference variable which didn't get
+            // constrained, therefore it can be assumed to hold.
+            if let ty::ReEmpty = *sub {
+                continue;
+            }
+
+            if self.bound_is_met(&verify.bound, var_data, sub) {
+                continue;
+            }
+
+            debug!(
+                "collect_errors: region error at {:?}: \
+                 cannot verify that {:?} <= {:?}",
+                verify.origin,
+                verify.region,
+                verify.bound
+            );
+
+            errors.push(RegionResolutionError::GenericBoundFailure(
+                verify.origin.clone(),
+                verify.kind.clone(),
+                sub,
+            ));
+        }
+    }
+
+    /// Go over the variables that were declared to be error variables
+    /// and create a `RegionResolutionError` for each of them.
+    fn collect_var_errors(
+        &self,
+        var_data: &LexicalRegionResolutions<'tcx>,
+        graph: &RegionGraph<'tcx>,
+        errors: &mut Vec<RegionResolutionError<'tcx>>,
+    ) {
+        debug!("collect_var_errors");
+
+        // This is the best way that I have found to suppress
+        // duplicate and related errors. Basically we keep a set of
+        // flags for every node. Whenever an error occurs, we will
+        // walk some portion of the graph looking to find pairs of
+        // conflicting regions to report to the user. As we walk, we
+        // trip the flags from false to true, and if we find that
+        // we've already reported an error involving any particular
+        // node we just stop and don't report the current error.  The
+        // idea is to report errors that derive from independent
+        // regions of the graph, but not those that derive from
+        // overlapping locations.
+        let mut dup_vec = vec![u32::MAX; self.num_vars()];
+
+        for (node_vid, value) in var_data.values.iter_enumerated() {
+            match *value {
+                VarValue::Value(_) => { /* Inference successful */ }
+                VarValue::ErrorValue => {
+                    /* Inference impossible, this value contains
+                       inconsistent constraints.
+
+                       I think that in this case we should report an
+                       error now---unlike the case above, we can't
+                       wait to see whether the user needs the result
+                       of this variable.  The reason is that the mere
+                       existence of this variable implies that the
+                       region graph is inconsistent, whether or not it
+                       is used.
+
+                       For example, we may have created a region
+                       variable that is the GLB of two other regions
+                       which do not have a GLB.  Even if that variable
+                       is not used, it implies that those two regions
+                       *should* have a GLB.
+
+                       At least I think this is true. It may be that
+                       the mere existence of a conflict in a region variable
+                       that is not used is not a problem, so if this rule
+                       starts to create problems we'll have to revisit
+                       this portion of the code and think hard about it. =) */
+                    self.collect_error_for_expanding_node(graph, &mut dup_vec, node_vid, errors);
+                }
+            }
+        }
+    }
+
+    fn construct_graph(&self) -> RegionGraph<'tcx> {
+        let num_vars = self.num_vars();
+
+        let mut graph = graph::Graph::new();
+
+        for _ in 0..num_vars {
+            graph.add_node(());
+        }
+
+        // Issue #30438: two distinct dummy nodes, one for incoming
+        // edges (dummy_source) and another for outgoing edges
+        // (dummy_sink). In `dummy -> a -> b -> dummy`, using one
+        // dummy node leads one to think (erroneously) there exists a
+        // path from `b` to `a`. Two dummy nodes sidesteps the issue.
+        let dummy_source = graph.add_node(());
+        let dummy_sink = graph.add_node(());
+
+        for (constraint, _) in &self.data.constraints {
+            match *constraint {
+                Constraint::VarSubVar(a_id, b_id) => {
+                    graph.add_edge(
+                        NodeIndex(a_id.index as usize),
+                        NodeIndex(b_id.index as usize),
+                        *constraint,
+                    );
+                }
+                Constraint::RegSubVar(_, b_id) => {
+                    graph.add_edge(dummy_source, NodeIndex(b_id.index as usize), *constraint);
+                }
+                Constraint::VarSubReg(a_id, _) => {
+                    graph.add_edge(NodeIndex(a_id.index as usize), dummy_sink, *constraint);
+                }
+                Constraint::RegSubReg(..) => {
+                    // this would be an edge from `dummy_source` to
+                    // `dummy_sink`; just ignore it.
+                }
+            }
+        }
+
+        return graph;
+    }
+
+    fn collect_error_for_expanding_node(
+        &self,
+        graph: &RegionGraph<'tcx>,
+        dup_vec: &mut [u32],
+        node_idx: RegionVid,
+        errors: &mut Vec<RegionResolutionError<'tcx>>,
+    ) {
+        // Errors in expanding nodes result from a lower-bound that is
+        // not contained by an upper-bound.
+        let (mut lower_bounds, lower_dup) =
+            self.collect_concrete_regions(graph, node_idx, graph::INCOMING, dup_vec);
+        let (mut upper_bounds, upper_dup) =
+            self.collect_concrete_regions(graph, node_idx, graph::OUTGOING, dup_vec);
+
+        if lower_dup || upper_dup {
+            return;
+        }
+
+        // We place free regions first because we are special-casing
+        // SubSupConflict(ReFree, ReFree) when reporting errors, and so
+        // the user is more likely to get a specific suggestion.
+        fn region_order_key(x: &RegionAndOrigin) -> u8 {
+            match *x.region {
+                ReEarlyBound(_) => 0,
+                ReFree(_) => 1,
+                _ => 2,
+            }
+        }
+        lower_bounds.sort_by_key(region_order_key);
+        upper_bounds.sort_by_key(region_order_key);
+
+        for lower_bound in &lower_bounds {
+            for upper_bound in &upper_bounds {
+                if !self.region_rels
+                    .is_subregion_of(lower_bound.region, upper_bound.region)
+                {
+                    let origin = self.var_origins[node_idx].clone();
+                    debug!(
+                        "region inference error at {:?} for {:?}: SubSupConflict sub: {:?} \
+                         sup: {:?}",
+                        origin,
+                        node_idx,
+                        lower_bound.region,
+                        upper_bound.region
+                    );
+                    errors.push(RegionResolutionError::SubSupConflict(
+                        origin,
+                        lower_bound.origin.clone(),
+                        lower_bound.region,
+                        upper_bound.origin.clone(),
+                        upper_bound.region,
+                    ));
+                    return;
+                }
+            }
+        }
+
+        span_bug!(
+            self.var_origins[node_idx].span(),
+            "collect_error_for_expanding_node() could not find \
+             error for var {:?}, lower_bounds={:?}, \
+             upper_bounds={:?}",
+            node_idx,
+            lower_bounds,
+            upper_bounds
+        );
+    }
+
+    fn collect_concrete_regions(
+        &self,
+        graph: &RegionGraph<'tcx>,
+        orig_node_idx: RegionVid,
+        dir: Direction,
+        dup_vec: &mut [u32],
+    ) -> (Vec<RegionAndOrigin<'tcx>>, bool) {
+        struct WalkState<'tcx> {
+            set: FxHashSet<RegionVid>,
+            stack: Vec<RegionVid>,
+            result: Vec<RegionAndOrigin<'tcx>>,
+            dup_found: bool,
+        }
+        let mut state = WalkState {
+            set: FxHashSet(),
+            stack: vec![orig_node_idx],
+            result: Vec::new(),
+            dup_found: false,
+        };
+        state.set.insert(orig_node_idx);
+
+        // to start off the process, walk the source node in the
+        // direction specified
+        process_edges(&self.data, &mut state, graph, orig_node_idx, dir);
+
+        while !state.stack.is_empty() {
+            let node_idx = state.stack.pop().unwrap();
+
+            // check whether we've visited this node on some previous walk
+            if dup_vec[node_idx.index as usize] == u32::MAX {
+                dup_vec[node_idx.index as usize] = orig_node_idx.index;
+            } else if dup_vec[node_idx.index as usize] != orig_node_idx.index {
+                state.dup_found = true;
+            }
+
+            debug!(
+                "collect_concrete_regions(orig_node_idx={:?}, node_idx={:?})",
+                orig_node_idx,
+                node_idx
+            );
+
+            process_edges(&self.data, &mut state, graph, node_idx, dir);
+        }
+
+        let WalkState {
+            result, dup_found, ..
+        } = state;
+        return (result, dup_found);
+
+        fn process_edges<'tcx>(
+            this: &RegionConstraintData<'tcx>,
+            state: &mut WalkState<'tcx>,
+            graph: &RegionGraph<'tcx>,
+            source_vid: RegionVid,
+            dir: Direction,
+        ) {
+            debug!("process_edges(source_vid={:?}, dir={:?})", source_vid, dir);
+
+            let source_node_index = NodeIndex(source_vid.index as usize);
+            for (_, edge) in graph.adjacent_edges(source_node_index, dir) {
+                match edge.data {
+                    Constraint::VarSubVar(from_vid, to_vid) => {
+                        let opp_vid = if from_vid == source_vid {
+                            to_vid
+                        } else {
+                            from_vid
+                        };
+                        if state.set.insert(opp_vid) {
+                            state.stack.push(opp_vid);
+                        }
+                    }
+
+                    Constraint::RegSubVar(region, _) | Constraint::VarSubReg(_, region) => {
+                        state.result.push(RegionAndOrigin {
+                            region,
+                            origin: this.constraints.get(&edge.data).unwrap().clone(),
+                        });
+                    }
+
+                    Constraint::RegSubReg(..) => panic!(
+                        "cannot reach reg-sub-reg edge in region inference \
+                         post-processing"
+                    ),
+                }
+            }
+        }
+    }
+
+    fn iterate_until_fixed_point<F>(&self, tag: &str, mut body: F)
+    where
+        F: FnMut(&Constraint<'tcx>, &SubregionOrigin<'tcx>) -> bool,
+    {
+        let mut iteration = 0;
+        let mut changed = true;
+        while changed {
+            changed = false;
+            iteration += 1;
+            debug!("---- {} Iteration #{}", tag, iteration);
+            for (constraint, origin) in &self.data.constraints {
+                let edge_changed = body(constraint, origin);
+                if edge_changed {
+                    debug!("Updated due to constraint {:?}", constraint);
+                    changed = true;
+                }
+            }
+        }
+        debug!("---- {} Complete after {} iteration(s)", tag, iteration);
+    }
+
+    fn bound_is_met(
+        &self,
+        bound: &VerifyBound<'tcx>,
+        var_values: &LexicalRegionResolutions<'tcx>,
+        min: ty::Region<'tcx>,
+    ) -> bool {
+        match bound {
+            VerifyBound::AnyRegion(rs) => rs.iter()
+                .map(|&r| var_values.normalize(r))
+                .any(|r| self.region_rels.is_subregion_of(min, r)),
+
+            VerifyBound::AllRegions(rs) => rs.iter()
+                .map(|&r| var_values.normalize(r))
+                .all(|r| self.region_rels.is_subregion_of(min, r)),
+
+            VerifyBound::AnyBound(bs) => bs.iter().any(|b| self.bound_is_met(b, var_values, min)),
+
+            VerifyBound::AllBounds(bs) => bs.iter().all(|b| self.bound_is_met(b, var_values, min)),
+        }
+    }
+}
+
+impl<'tcx> fmt::Debug for RegionAndOrigin<'tcx> {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        write!(f, "RegionAndOrigin({:?},{:?})", self.region, self.origin)
+    }
+}
+
+
+impl<'tcx> LexicalRegionResolutions<'tcx> {
+    fn normalize(&self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
+        match *r {
+            ty::ReVar(rid) => self.resolve_var(rid),
+            _ => r,
+        }
+    }
+
+    fn value(&self, rid: RegionVid) -> &VarValue<'tcx> {
+        &self.values[rid]
+    }
+
+    fn value_mut(&mut self, rid: RegionVid) -> &mut VarValue<'tcx> {
+        &mut self.values[rid]
+    }
+
+    pub fn resolve_var(&self, rid: RegionVid) -> ty::Region<'tcx> {
+        let result = match self.values[rid] {
+            VarValue::Value(r) => r,
+            VarValue::ErrorValue => self.error_region,
+        };
+        debug!("resolve_var({:?}) = {:?}", rid, result);
+        result
+    }
+}
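The expansion loop above (`iterate_until_fixed_point`) repeatedly applies constraints until nothing changes. As a rough illustration only (a toy model, not rustc's actual types or API), the same fixed-point idea can be sketched with regions modeled as integers, where a larger value stands for a longer-lived region:

```rust
// Toy sketch of lexical region resolution: each variable starts at the
// "empty" region, is seeded with its concrete lower bounds (RegSubVar),
// and then VarSubVar constraints `a <= b` are propagated until a fixed
// point is reached. All names here are hypothetical.
fn resolve(
    num_vars: usize,
    var_sub_var: &[(usize, usize)], // (a, b) means var a <= var b
    reg_sub_var: &[(usize, usize)], // (r, v) means concrete region r <= var v
) -> Vec<usize> {
    let mut values = vec![0usize; num_vars];

    // Seed each variable with its concrete lower bounds.
    for &(region, var) in reg_sub_var {
        values[var] = values[var].max(region);
    }

    // Propagate until no constraint changes any value.
    let mut changed = true;
    while changed {
        changed = false;
        for &(a, b) in var_sub_var {
            if values[b] < values[a] {
                values[b] = values[a];
                changed = true;
            }
        }
    }
    values
}

fn main() {
    // v0 must outlive region 3, and v0 <= v1 <= v2, so all expand to 3.
    let values = resolve(3, &[(0, 1), (1, 2)], &[(3, 0)]);
    assert_eq!(values, vec![3, 3, 3]);
}
```

The real resolver works over a lattice of `ty::Region` values with `lub` instead of `max`, but the termination argument is the same: values only grow, and the lattice has finite height for any given constraint set.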
diff --git a/src/librustc/infer/lub.rs b/src/librustc/infer/lub.rs
index 04b470b..55c7eef 100644
--- a/src/librustc/infer/lub.rs
+++ b/src/librustc/infer/lub.rs
@@ -15,6 +15,7 @@
 
 use traits::ObligationCause;
 use ty::{self, Ty, TyCtxt};
+use ty::error::TypeError;
 use ty::relate::{Relate, RelateResult, TypeRelation};
 
 /// "Least upper bound" (common supertype)
@@ -67,14 +68,39 @@
                b);
 
         let origin = Subtype(self.fields.trace.clone());
-        Ok(self.fields.infcx.region_vars.lub_regions(origin, a, b))
+        Ok(self.fields.infcx.borrow_region_constraints().lub_regions(self.tcx(), origin, a, b))
     }
 
     fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
                   -> RelateResult<'tcx, ty::Binder<T>>
         where T: Relate<'tcx>
     {
-        self.fields.higher_ranked_lub(a, b, self.a_is_expected)
+        debug!("binders(a={:?}, b={:?})", a, b);
+        let was_error = self.infcx().probe(|_snapshot| {
+            // Subtle: use a fresh combine-fields here because we recover
+            // from Err. Doing otherwise could propagate obligations out
+            // through our `self.obligations` field.
+            self.infcx()
+                .combine_fields(self.fields.trace.clone(), self.fields.param_env)
+                .higher_ranked_lub(a, b, self.a_is_expected)
+                .is_err()
+        });
+        debug!("binders: was_error={:?}", was_error);
+
+        // When higher-ranked types are involved, computing the LUB is
+        // very challenging, so we switch to invariance. This is obviously
+        // overly conservative but works ok in practice.
+        match self.relate_with_variance(ty::Variance::Invariant, a, b) {
+            Ok(_) => Ok(a.clone()),
+            Err(err) => {
+                debug!("binders: error occurred, was_error={:?}", was_error);
+                if !was_error {
+                    Err(TypeError::OldStyleLUB(Box::new(err)))
+                } else {
+                    Err(err)
+                }
+            }
+        }
     }
 }
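The `binders` change above falls back from computing a least upper bound to demanding invariance when higher-ranked types are involved. A minimal sketch of that strategy, using a hypothetical three-element region lattice rather than rustc's real types:

```rust
// Toy model (not rustc's API): `lub` over a tiny lattice where
// `Static` outlives everything, plus the patch's fallback for
// higher-ranked values: when no LUB can be computed, succeed only
// if the two sides are already equal (invariance).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Region { A, B, Static }

// `outlives(x, y)`: x is valid for at least as long as y.
fn outlives(x: Region, y: Region) -> bool {
    x == y || x == Region::Static
}

// Least upper bound in this lattice: the smaller region that
// outlives both, or `Static` for incomparable regions.
fn lub(a: Region, b: Region) -> Region {
    if outlives(a, b) {
        a
    } else if outlives(b, a) {
        b
    } else {
        Region::Static
    }
}

// Higher-ranked values, modeled as opaque strings: no tractable LUB,
// so require equality, mirroring `relate_with_variance(Invariant, ..)`.
fn lub_binders<'s>(a: &'s str, b: &'s str) -> Result<&'s str, String> {
    if a == b {
        Ok(a)
    } else {
        Err(format!("no LUB of {} and {}", a, b))
    }
}

fn main() {
    assert_eq!(lub(Region::A, Region::B), Region::Static);
    assert_eq!(lub(Region::A, Region::A), Region::A);
    assert!(lub_binders("for<'x> fn(&'x u8)", "for<'x> fn(&'x u8)").is_ok());
    assert!(lub_binders("for<'x> fn(&'x u8)", "fn(&'static u8)").is_err());
}
```

The `OldStyleLUB` wrapping in the patch exists so that errors introduced by this stricter invariance check can be distinguished from errors the old LUB computation would also have reported.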
 
diff --git a/src/librustc/infer/mod.rs b/src/librustc/infer/mod.rs
index 79eeebf..4f923f0 100644
--- a/src/librustc/infer/mod.rs
+++ b/src/librustc/infer/mod.rs
@@ -16,7 +16,6 @@
 pub use self::ValuePairs::*;
 pub use ty::IntVarValue;
 pub use self::freshen::TypeFreshener;
-pub use self::region_inference::{GenericKind, VerifyBound};
 
 use hir::def_id::DefId;
 use middle::free_region::{FreeRegionMap, RegionRelations};
@@ -31,7 +30,8 @@
 use ty::relate::RelateResult;
 use traits::{self, ObligationCause, PredicateObligations, Reveal};
 use rustc_data_structures::unify::{self, UnificationTable};
-use std::cell::{Cell, RefCell, Ref};
+use std::cell::{Cell, RefCell, Ref, RefMut};
+use std::collections::BTreeMap;
 use std::fmt;
 use syntax::ast;
 use errors::DiagnosticBuilder;
@@ -41,7 +41,9 @@
 
 use self::combine::CombineFields;
 use self::higher_ranked::HrMatchResult;
-use self::region_inference::{RegionVarBindings, RegionSnapshot};
+use self::region_constraints::{RegionConstraintCollector, RegionSnapshot};
+use self::region_constraints::{GenericKind, VerifyBound, RegionConstraintData, VarOrigins};
+use self::lexical_region_resolve::LexicalRegionResolutions;
 use self::type_variable::TypeVariableOrigin;
 use self::unify_key::ToType;
 
@@ -54,13 +56,17 @@
 mod higher_ranked;
 pub mod lattice;
 mod lub;
-pub mod region_inference;
+pub mod region_constraints;
+mod lexical_region_resolve;
+mod outlives;
 pub mod resolve;
 mod freshen;
 mod sub;
 pub mod type_variable;
 pub mod unify_key;
 
+pub use self::outlives::env::OutlivesEnvironment;
+
 #[must_use]
 pub struct InferOk<'tcx, T> {
     pub value: T,
@@ -98,8 +104,15 @@
     // Map from floating variable to the kind of float it represents
     float_unification_table: RefCell<UnificationTable<ty::FloatVid>>,
 
-    // For region variables.
-    region_vars: RegionVarBindings<'a, 'gcx, 'tcx>,
+    // Tracks the set of region variables and the constraints between
+    // them.  This is initially `Some(_)` but when
+    // `resolve_regions_and_report_errors` is invoked, this gets set
+    // to `None` -- further attempts to perform unification etc may
+    // fail if new region constraints would've been added.
+    region_constraints: RefCell<Option<RegionConstraintCollector<'tcx>>>,
+
+    // Once region inference is done, the values for each variable.
+    lexical_region_resolutions: RefCell<Option<LexicalRegionResolutions<'tcx>>>,
 
     /// Caches the results of trait selection. This cache is used
     /// for things that have to do with the parameters in scope.
@@ -135,11 +148,44 @@
 
     // This flag is true while there is an active snapshot.
     in_snapshot: Cell<bool>,
+
+    // A set of constraints that regionck must validate. Each
+    // constraint has the form `T:'a`, meaning "some type `T` must
+    // outlive the lifetime 'a". These constraints derive from
+    // instantiated type parameters. So if you had a struct defined
+    // like
+    //
+    //     struct Foo<T:'static> { ... }
+    //
+    // then in some expression `let x = Foo { ... }` it will
+    // instantiate the type parameter `T` with a fresh type `$0`. At
+    // the same time, it will record a region obligation of
+    // `$0:'static`. This will get checked later by regionck. (We
+    // can't generally check these things right away because we have
+    // to wait until types are resolved.)
+    //
+    // These are stored in a map keyed to the id of the innermost
+    // enclosing fn body / static initializer expression. This is
+    // because the location where the obligation was incurred can be
+    // relevant with respect to which sublifetime assumptions are in
+    // place. The reason that we store under the fn-id, and not
+    // something more fine-grained, is so that it is easier for
+    // regionck to be sure that it has found *all* the region
+    // obligations (otherwise, it's easy to fail to walk to a
+    // particular node-id).
+    //
+    // Before running `resolve_regions_and_report_errors`, the creator
+    // of the inference context is expected to invoke
+    // `process_region_obligations` (defined in `self::region_obligations`)
+    // for each body-id in this map, which will process the
+    // obligations within. This is expected to be done 'late enough'
+    // that all type inference variables have been bound and so forth.
+    region_obligations: RefCell<Vec<(ast::NodeId, RegionObligation<'tcx>)>>,
 }
 
 /// A map returned by `skolemize_late_bound_regions()` indicating the skolemized
 /// region that each late-bound region was replaced with.
-pub type SkolemizationMap<'tcx> = FxHashMap<ty::BoundRegion, ty::Region<'tcx>>;
+pub type SkolemizationMap<'tcx> = BTreeMap<ty::BoundRegion, ty::Region<'tcx>>;
 
 /// See `error_reporting` module for more details
 #[derive(Clone, Debug)]
@@ -248,10 +294,6 @@
         item_name: ast::Name,
         impl_item_def_id: DefId,
         trait_item_def_id: DefId,
-
-        // this is `Some(_)` if this error arises from the bug fix for
-        // #18937. This is a temporary measure.
-        lint_id: Option<ast::NodeId>,
     },
 }
 
@@ -280,7 +322,7 @@
 /// Reasons to create a region inference variable
 ///
 /// See `error_reporting` module for more details
-#[derive(Clone, Debug)]
+#[derive(Copy, Clone, Debug)]
 pub enum RegionVariableOrigin {
     // Region variables created for ill-categorized reasons,
     // mostly indicates places in need of refactoring
@@ -308,6 +350,20 @@
     UpvarRegion(ty::UpvarId, Span),
 
     BoundRegionInCoherence(ast::Name),
+
+    // This origin is used for the inference variables that we create
+    // during NLL region processing.
+    NLL(NLLRegionVariableOrigin),
+}
+
+#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
+pub enum NLLRegionVariableOrigin {
+    // During NLL region processing, we create variables for free
+    // regions that we encounter in the function signature and
+    // elsewhere. This origin indicates that we've got one of those.
+    FreeRegion,
+
+    Inferred(::mir::visit::TyContext),
 }
 
 #[derive(Copy, Clone, Debug)]
@@ -317,6 +373,14 @@
     UnresolvedTy(TyVid)
 }
 
+/// See the `region_obligations` field for more information.
+#[derive(Clone)]
+pub struct RegionObligation<'tcx> {
+    pub sub_region: ty::Region<'tcx>,
+    pub sup_type: Ty<'tcx>,
+    pub cause: ObligationCause<'tcx>,
+}
+
 impl fmt::Display for FixupError {
     fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
         use self::FixupError::*;
@@ -379,13 +443,15 @@
             type_variables: RefCell::new(type_variable::TypeVariableTable::new()),
             int_unification_table: RefCell::new(UnificationTable::new()),
             float_unification_table: RefCell::new(UnificationTable::new()),
-            region_vars: RegionVarBindings::new(tcx),
+            region_constraints: RefCell::new(Some(RegionConstraintCollector::new())),
+            lexical_region_resolutions: RefCell::new(None),
             selection_cache: traits::SelectionCache::new(),
             evaluation_cache: traits::EvaluationCache::new(),
             reported_trait_errors: RefCell::new(FxHashMap()),
             tainted_by_errors_flag: Cell::new(false),
             err_count_on_creation: tcx.sess.err_count(),
             in_snapshot: Cell::new(false),
+            region_obligations: RefCell::new(vec![]),
         }))
     }
 }
@@ -412,7 +478,8 @@
     type_snapshot: type_variable::Snapshot,
     int_snapshot: unify::Snapshot<ty::IntVid>,
     float_snapshot: unify::Snapshot<ty::FloatVid>,
-    region_vars_snapshot: RegionSnapshot,
+    region_constraints_snapshot: RegionSnapshot,
+    region_obligations_snapshot: usize,
     was_in_snapshot: bool,
     _in_progress_tables: Option<Ref<'a, ty::TypeckTables<'tcx>>>,
 }
@@ -720,7 +787,8 @@
             type_snapshot: self.type_variables.borrow_mut().snapshot(),
             int_snapshot: self.int_unification_table.borrow_mut().snapshot(),
             float_snapshot: self.float_unification_table.borrow_mut().snapshot(),
-            region_vars_snapshot: self.region_vars.start_snapshot(),
+            region_constraints_snapshot: self.borrow_region_constraints().start_snapshot(),
+            region_obligations_snapshot: self.region_obligations.borrow().len(),
             was_in_snapshot: in_snapshot,
             // Borrow tables "in progress" (i.e. during typeck)
             // to ban writes from within a snapshot to them.
@@ -736,7 +804,8 @@
                                type_snapshot,
                                int_snapshot,
                                float_snapshot,
-                               region_vars_snapshot,
+                               region_constraints_snapshot,
+                               region_obligations_snapshot,
                                was_in_snapshot,
                                _in_progress_tables } = snapshot;
 
@@ -754,8 +823,11 @@
         self.float_unification_table
             .borrow_mut()
             .rollback_to(float_snapshot);
-        self.region_vars
-            .rollback_to(region_vars_snapshot);
+        self.region_obligations
+            .borrow_mut()
+            .truncate(region_obligations_snapshot);
+        self.borrow_region_constraints()
+            .rollback_to(region_constraints_snapshot);
     }
 
     fn commit_from(&self, snapshot: CombinedSnapshot) {
@@ -764,7 +836,8 @@
                                type_snapshot,
                                int_snapshot,
                                float_snapshot,
-                               region_vars_snapshot,
+                               region_constraints_snapshot,
+                               region_obligations_snapshot: _,
                                was_in_snapshot,
                                _in_progress_tables } = snapshot;
 
@@ -782,8 +855,8 @@
         self.float_unification_table
             .borrow_mut()
             .commit(float_snapshot);
-        self.region_vars
-            .commit(region_vars_snapshot);
+        self.borrow_region_constraints()
+            .commit(region_constraints_snapshot);
     }
 
     /// Execute `f` and commit the bindings
@@ -838,7 +911,7 @@
                      sub: ty::Region<'tcx>,
                      sup: ty::RegionVid)
     {
-        self.region_vars.add_given(sub, sup);
+        self.borrow_region_constraints().add_given(sub, sup);
     }
 
     pub fn can_sub<T>(&self,
@@ -878,7 +951,7 @@
                        a: ty::Region<'tcx>,
                        b: ty::Region<'tcx>) {
         debug!("sub_regions({:?} <: {:?})", a, b);
-        self.region_vars.make_subregion(origin, a, b);
+        self.borrow_region_constraints().make_subregion(origin, a, b);
     }
 
     pub fn equality_predicate(&self,
@@ -979,9 +1052,21 @@
             .new_key(None)
     }
 
+    /// Create a fresh region variable with the next available index.
+    ///
+    /// # Parameters
+    ///
+    /// - `origin`: information about why we created this variable, for use
+    ///   during diagnostics / error-reporting.
     pub fn next_region_var(&self, origin: RegionVariableOrigin)
                            -> ty::Region<'tcx> {
-        self.tcx.mk_region(ty::ReVar(self.region_vars.new_region_var(origin)))
+        self.tcx.mk_region(ty::ReVar(self.borrow_region_constraints().new_region_var(origin)))
+    }
+
+    /// Just a convenient wrapper of `next_region_var` for using during NLL.
+    pub fn next_nll_region_var(&self, origin: NLLRegionVariableOrigin)
+                               -> ty::Region<'tcx> {
+        self.next_region_var(RegionVariableOrigin::NLL(origin))
     }
 
     /// Create a region inference variable for the given
@@ -1040,10 +1125,6 @@
         })
     }
 
-    pub fn fresh_bound_region(&self, debruijn: ty::DebruijnIndex) -> ty::Region<'tcx> {
-        self.region_vars.new_bound(debruijn)
-    }
-
     /// True if errors have been reported since this infcx was
     /// created.  This is sometimes used as a heuristic to skip
     /// reporting errors that often occur as a result of earlier
@@ -1069,15 +1150,31 @@
         self.tainted_by_errors_flag.set(true)
     }
 
+    /// Process the region constraints and report any errors that
+    /// result. After this, no more unification operations should be
+    /// done -- or the compiler will panic -- but it is legal to use
+    /// `resolve_type_vars_if_possible` as well as `fully_resolve`.
     pub fn resolve_regions_and_report_errors(&self,
                                              region_context: DefId,
                                              region_map: &region::ScopeTree,
                                              free_regions: &FreeRegionMap<'tcx>) {
-        let region_rels = RegionRelations::new(self.tcx,
-                                               region_context,
-                                               region_map,
-                                               free_regions);
-        let errors = self.region_vars.resolve_regions(&region_rels);
+        assert!(self.is_tainted_by_errors() || self.region_obligations.borrow().is_empty(),
+                "region_obligations not empty: {:#?}",
+                self.region_obligations.borrow());
+
+        let region_rels = &RegionRelations::new(self.tcx,
+                                                region_context,
+                                                region_map,
+                                                free_regions);
+        let (var_origins, data) = self.region_constraints.borrow_mut()
+                                                         .take()
+                                                         .expect("regions already resolved")
+                                                         .into_origins_and_data();
+        let (lexical_region_resolutions, errors) =
+            lexical_region_resolve::resolve(region_rels, var_origins, data);
+
+        let old_value = self.lexical_region_resolutions.replace(Some(lexical_region_resolutions));
+        assert!(old_value.is_none());
 
         if !self.is_tainted_by_errors() {
             // As a heuristic, just skip reporting region errors
@@ -1089,6 +1186,34 @@
         }
     }
 
+    /// Obtains (and clears) the current set of region
+    /// constraints. The inference context is still usable: further
+    /// unifications will simply add new constraints.
+    ///
+    /// This method is not meant to be used with normal lexical region
+    /// resolution. Rather, it is used in the NLL mode as a kind of
+    /// interim hack: basically we run normal type-check and generate
+    /// region constraints as normal, but then we take them and
+    /// translate them into the form that the NLL solver
+    /// understands. See the NLL module for more details.
+    pub fn take_and_reset_region_constraints(&self) -> RegionConstraintData<'tcx> {
+        self.borrow_region_constraints().take_and_reset_data()
+    }
+
+    /// Takes ownership of the list of variable regions. This implies
+    /// that all the region constraints have already been taken, and
+    /// hence that `resolve_regions_and_report_errors` can never be
+    /// called. This is used only during NLL processing to "hand off" ownership
+    /// of the set of region variables into the NLL region context.
+    pub fn take_region_var_origins(&self) -> VarOrigins {
+        let (var_origins, data) = self.region_constraints.borrow_mut()
+                                                         .take()
+                                                         .expect("regions already resolved")
+                                                         .into_origins_and_data();
+        assert!(data.is_empty());
+        var_origins
+    }
+
     pub fn ty_to_string(&self, t: Ty<'tcx>) -> String {
         self.resolve_type_vars_if_possible(&t).to_string()
     }
@@ -1260,7 +1385,7 @@
         span: Span,
         lbrct: LateBoundRegionConversionTime,
         value: &ty::Binder<T>)
-        -> (T, FxHashMap<ty::BoundRegion, ty::Region<'tcx>>)
+        -> (T, BTreeMap<ty::BoundRegion, ty::Region<'tcx>>)
         where T : TypeFoldable<'tcx>
     {
         self.tcx.replace_late_bound_regions(
@@ -1301,7 +1426,7 @@
         Ok(InferOk { value: result, obligations: combine.obligations })
     }
 
-    /// See `verify_generic_bound` method in `region_inference`
+    /// See `verify_generic_bound` method in `region_constraints`
     pub fn verify_generic_bound(&self,
                                 origin: SubregionOrigin<'tcx>,
                                 kind: GenericKind<'tcx>,
@@ -1312,7 +1437,7 @@
                a,
                bound);
 
-        self.region_vars.verify_generic_bound(origin, kind, a, bound);
+        self.borrow_region_constraints().verify_generic_bound(origin, kind, a, bound);
     }
 
     pub fn type_moves_by_default(&self,
@@ -1389,6 +1514,33 @@
 
         self.tcx.generator_sig(def_id)
     }
+
+    /// Normalizes associated types in `value`, potentially returning
+    /// new obligations that must further be processed.
+    pub fn partially_normalize_associated_types_in<T>(&self,
+                                                      span: Span,
+                                                      body_id: ast::NodeId,
+                                                      param_env: ty::ParamEnv<'tcx>,
+                                                      value: &T)
+                                                      -> InferOk<'tcx, T>
+        where T : TypeFoldable<'tcx>
+    {
+        debug!("partially_normalize_associated_types_in(value={:?})", value);
+        let mut selcx = traits::SelectionContext::new(self);
+        let cause = ObligationCause::misc(span, body_id);
+        let traits::Normalized { value, obligations } =
+            traits::normalize(&mut selcx, param_env, cause, value);
+        debug!("partially_normalize_associated_types_in: result={:?} predicates={:?}",
+            value,
+            obligations);
+        InferOk { value, obligations }
+    }
+
+    fn borrow_region_constraints(&self) -> RefMut<'_, RegionConstraintCollector<'tcx>> {
+        RefMut::map(
+            self.region_constraints.borrow_mut(),
+            |c| c.as_mut().expect("region constraints already solved"))
+    }
 }
 
 impl<'a, 'gcx, 'tcx> TypeTrace<'tcx> {
@@ -1466,14 +1618,12 @@
 
             traits::ObligationCauseCode::CompareImplMethodObligation { item_name,
                                                                        impl_item_def_id,
-                                                                       trait_item_def_id,
-                                                                       lint_id } =>
+                                                                       trait_item_def_id } =>
                 SubregionOrigin::CompareImplMethodObligation {
                     span: cause.span,
                     item_name,
                     impl_item_def_id,
                     trait_item_def_id,
-                    lint_id,
                 },
 
             _ => default(),
@@ -1492,7 +1642,8 @@
             EarlyBoundRegion(a, ..) => a,
             LateBoundRegion(a, ..) => a,
             BoundRegionInCoherence(_) => syntax_pos::DUMMY_SP,
-            UpvarRegion(_, a) => a
+            UpvarRegion(_, a) => a,
+            NLL(..) => bug!("NLL variable used with `span`"),
         }
     }
 }
@@ -1533,3 +1684,12 @@
         self.cause.visit_with(visitor) || self.values.visit_with(visitor)
     }
 }
+
+impl<'tcx> fmt::Debug for RegionObligation<'tcx> {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        write!(f, "RegionObligation(sub_region={:?}, sup_type={:?})",
+               self.sub_region,
+               self.sup_type)
+    }
+}
+
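The `region_obligations` machinery added above tracks `T: 'a` obligations that arise when instantiating type parameters. In ordinary (non-compiler) Rust, those are exactly the bounds a user would write; a small illustration, with all names hypothetical:

```rust
// Ordinary Rust, not compiler internals: the `T: 'a` bound below is
// the kind of outlives obligation that regionck must discharge for
// each instantiation of `Foo`.
struct Foo<'a, T: 'a> {
    value: &'a T,
}

fn make<'a, T: 'a>(value: &'a T) -> Foo<'a, T> {
    Foo { value }
}

fn main() {
    let n = 10;
    // Instantiates T = i32; the obligation `i32: '_` is trivially met
    // because i32 owns no borrowed data.
    let f = make(&n);
    assert_eq!(*f.value, 10);
}
```

As the comment in the patch explains, such obligations cannot generally be checked at the point where they are incurred, because the types involved may still contain unresolved inference variables; they are therefore queued per enclosing body and processed late, once inference has settled.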
diff --git a/src/librustc/infer/outlives/env.rs b/src/librustc/infer/outlives/env.rs
new file mode 100644
index 0000000..2099e92
--- /dev/null
+++ b/src/librustc/infer/outlives/env.rs
@@ -0,0 +1,355 @@
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use middle::free_region::FreeRegionMap;
+use infer::{InferCtxt, GenericKind};
+use traits::FulfillmentContext;
+use ty::{self, Ty, TypeFoldable};
+use ty::outlives::Component;
+use ty::wf;
+
+use syntax::ast;
+use syntax_pos::Span;
+
+/// The `OutlivesEnvironment` collects information about what outlives
+/// what in a given type-checking setting. For example, if we have a
+/// where-clause like `where T: 'a` in scope, then the
+/// `OutlivesEnvironment` would record that (in its
+/// `region_bound_pairs` field). Similarly, it contains methods for
+/// processing and adding implied bounds into the outlives
+/// environment.
+///
+/// Other code at present does not typically take a
+/// `&OutlivesEnvironment`, but rather takes some of its fields (e.g.,
+/// `process_registered_region_obligations` wants the
+/// region-bound-pairs). There is no mistaking it: the current setup
+/// of tracking region information is quite scattered! The
+/// `OutlivesEnvironment`, for example, needs to sometimes be combined
+/// with the `middle::RegionRelations`, to yield a full picture of how
+/// (lexical) lifetimes interact. However, I'm reluctant to do more
+/// refactoring here, since the setup with NLL is quite different.
+/// For example, NLL has no need of `RegionRelations`, and is solely
+/// interested in the `OutlivesEnvironment`. -nmatsakis
+#[derive(Clone)]
+pub struct OutlivesEnvironment<'tcx> {
+    param_env: ty::ParamEnv<'tcx>,
+    free_region_map: FreeRegionMap<'tcx>,
+    region_bound_pairs: Vec<(ty::Region<'tcx>, GenericKind<'tcx>)>,
+}
+
+/// Implied bounds are region relationships that we deduce
+/// automatically.  The idea is that (e.g.) a caller must check that a
+/// function's argument types are well-formed immediately before
+/// calling that fn, and hence the *callee* can assume that its
+/// argument types are well-formed. This may imply certain relationships
+/// between generic parameters. For example:
+///
+///     fn foo<'a,T>(x: &'a T)
+///
+/// can only be called with a `'a` and `T` such that `&'a T` is WF.
+/// For `&'a T` to be WF, `T: 'a` must hold. So we can assume `T: 'a`.
+#[derive(Debug)]
+enum ImpliedBound<'tcx> {
+    RegionSubRegion(ty::Region<'tcx>, ty::Region<'tcx>),
+    RegionSubParam(ty::Region<'tcx>, ty::ParamTy),
+    RegionSubProjection(ty::Region<'tcx>, ty::ProjectionTy<'tcx>),
+}
+
+impl<'a, 'gcx: 'tcx, 'tcx: 'a> OutlivesEnvironment<'tcx> {
+    pub fn new(param_env: ty::ParamEnv<'tcx>) -> Self {
+        let mut free_region_map = FreeRegionMap::new();
+        free_region_map.relate_free_regions_from_predicates(&param_env.caller_bounds);
+
+        OutlivesEnvironment {
+            param_env,
+            free_region_map,
+            region_bound_pairs: vec![],
+        }
+    }
+
+    /// Borrows current value of the `free_region_map`.
+    pub fn free_region_map(&self) -> &FreeRegionMap<'tcx> {
+        &self.free_region_map
+    }
+
+    /// Borrows current value of the `region_bound_pairs`.
+    pub fn region_bound_pairs(&self) -> &[(ty::Region<'tcx>, GenericKind<'tcx>)] {
+        &self.region_bound_pairs
+    }
+
+    /// Returns ownership of the `free_region_map`.
+    pub fn into_free_region_map(self) -> FreeRegionMap<'tcx> {
+        self.free_region_map
+    }
+
+    /// This is a hack to support the old-skool regionck, which
+    /// processes region constraints from the main function and the
+    /// closure together. In that context, when we enter a closure, we
+    /// want to be able to "save" the state of the surrounding
+    /// function. We can then add implied bounds and the like from the
+    /// closure arguments into the environment -- these should only
+    /// apply in the closure body, so once we exit, we invoke
+    /// `pop_snapshot_post_closure` to remove them.
+    ///
+    /// Example:
+    ///
+    /// ```
+    /// fn foo<T>() {
+    ///    callback(for<'a> |x: &'a T| {
+    ///         // ^^^^^^^ not legal syntax, but probably should be
+    ///         // within this closure body, `T: 'a` holds
+    ///    })
+    /// }
+    /// ```
+    ///
+    /// This "containment" of closure's effects only works so well. In
+    /// particular, we (intentionally) leak relationships between free
+    /// regions that are created by the closure's bounds. The case
+    /// where this is useful is when you have (e.g.) a closure with a
+    /// signature like `for<'a, 'b> fn(x: &'a &'b u32)` -- in this
+    /// case, we want to keep the relationship `'b: 'a` in the
+    /// free-region-map, so that later if we have to take `LUB('b,
+    /// 'a)` we can get the result `'b`.
+    ///
+    /// I have opted to keep **all modifications** to the
+    /// free-region-map, however, and not just those that concern free
+    /// variables bound in the closure. The latter seems more correct,
+    /// but it is not the existing behavior, and I could not find a
+    /// case where the existing behavior went wrong. In any case, it
+    /// seems like it'd be readily fixed if we wanted. There are
+    /// similar leaks around givens that seem equally suspicious, to
+    /// be honest. --nmatsakis
+    pub fn push_snapshot_pre_closure(&self) -> usize {
+        self.region_bound_pairs.len()
+    }
+
+    /// See `push_snapshot_pre_closure`.
+    pub fn pop_snapshot_post_closure(&mut self, len: usize) {
+        self.region_bound_pairs.truncate(len);
+    }
+
+    /// This method adds "implied bounds" into the outlives environment.
+    /// Implied bounds are outlives relationships that we can deduce
+    /// on the basis that certain types must be well-formed -- these are
+    /// either the types that appear in the function signature or else
+    /// the input types to an impl. For example, if you have a function
+    /// like
+    ///
+    /// ```
+    /// fn foo<'a, 'b, T>(x: &'a &'b [T]) { }
+    /// ```
+    ///
+    /// we can assume in the caller's body that `'b: 'a` and that `T:
+    /// 'b` (and hence, transitively, that `T: 'a`). This method would
+    /// add those assumptions into the outlives-environment.
+    ///
+    /// Tests: `src/test/compile-fail/regions-free-region-ordering-*.rs`
+    pub fn add_implied_bounds(
+        &mut self,
+        infcx: &InferCtxt<'a, 'gcx, 'tcx>,
+        fn_sig_tys: &[Ty<'tcx>],
+        body_id: ast::NodeId,
+        span: Span,
+    ) {
+        debug!("add_implied_bounds()");
+
+        for &ty in fn_sig_tys {
+            let ty = infcx.resolve_type_vars_if_possible(&ty);
+            debug!("add_implied_bounds: ty = {}", ty);
+            let implied_bounds = self.implied_bounds(infcx, body_id, ty, span);
+
+            // But also record other relationships, such as `T:'x`,
+            // that don't go into the free-region-map but which we use
+            // here.
+            for implication in implied_bounds {
+                debug!("add_implied_bounds: implication={:?}", implication);
+                match implication {
+                    ImpliedBound::RegionSubRegion(
+                        r_a @ &ty::ReEarlyBound(_),
+                        &ty::ReVar(vid_b),
+                    ) |
+                    ImpliedBound::RegionSubRegion(r_a @ &ty::ReFree(_), &ty::ReVar(vid_b)) => {
+                        infcx.add_given(r_a, vid_b);
+                    }
+                    ImpliedBound::RegionSubParam(r_a, param_b) => {
+                        self.region_bound_pairs
+                            .push((r_a, GenericKind::Param(param_b)));
+                    }
+                    ImpliedBound::RegionSubProjection(r_a, projection_b) => {
+                        self.region_bound_pairs
+                            .push((r_a, GenericKind::Projection(projection_b)));
+                    }
+                    ImpliedBound::RegionSubRegion(r_a, r_b) => {
+                        // In principle, we could record (and take
+                        // advantage of) every relationship here, but
+                        // we are also free not to -- it simply means
+                        // there is strictly less that we can successfully
+                        // type check. Right now we only look for
+                        // relationships between free regions. (It may
+                        // also be that we should revise our inference
+                        // system to be more general and to make use
+                        // of *every* relationship that arises here,
+                        // but presently we do not.)
+                        self.free_region_map.relate_regions(r_a, r_b);
+                    }
+                }
+            }
+        }
+    }
+
+    /// Compute the implied bounds that a callee/impl can assume based on
+    /// the fact that caller/projector has ensured that `ty` is WF.  See
+    /// the `ImpliedBound` type for more details.
+    fn implied_bounds(
+        &mut self,
+        infcx: &InferCtxt<'a, 'gcx, 'tcx>,
+        body_id: ast::NodeId,
+        ty: Ty<'tcx>,
+        span: Span,
+    ) -> Vec<ImpliedBound<'tcx>> {
+        let tcx = infcx.tcx;
+
+        // Sometimes when we ask what it takes for T: WF, we get back that
+        // U: WF is required; in that case, we push U onto this stack and
+        // process it next. Currently (at least) these resulting
+        // predicates are always guaranteed to be a subset of the original
+        // type, so we need not fear non-termination.
+        let mut wf_types = vec![ty];
+
+        let mut implied_bounds = vec![];
+
+        let mut fulfill_cx = FulfillmentContext::new();
+
+        while let Some(ty) = wf_types.pop() {
+            // Compute the obligations for `ty` to be well-formed. If `ty` is
+            // an unresolved inference variable, just substitute an empty set
+            // -- because the return type here is going to be things we *add*
+            // to the environment, it's always ok for this set to be smaller
+            // than the ultimate set. (Note: normally there won't be
+            // unresolved inference variables here anyway, but there might be
+            // during typeck under some circumstances.)
+            let obligations =
+                wf::obligations(infcx, self.param_env, body_id, ty, span).unwrap_or(vec![]);
+
+            // NB: All of these predicates *ought* to be easily proven
+            // true. In fact, their correctness is (mostly) implied by
+            // other parts of the program. However, in #42552, we had
+            // an annoying scenario where:
+            //
+            // - Some `T::Foo` gets normalized, resulting in a
+            //   variable `_1` and a `T: Trait<Foo=_1>` constraint
+            //   (not sure why it couldn't immediately get
+            //   solved). This result of `_1` got cached.
+            // - These obligations were dropped on the floor here,
+            //   rather than being registered.
+            // - Then later we would get a request to normalize
+            //   `T::Foo` which would result in `_1` being used from
+            //   the cache, but hence without the `T: Trait<Foo=_1>`
+            //   constraint. As a result, `_1` never gets resolved,
+            //   and we get an ICE (in dropck).
+            //
+            // Therefore, we register any predicates involving
+            // inference variables. We restrict ourselves to those
+            // involving inference variables both for efficiency and
+            // to avoid duplicate errors that otherwise show up.
+            fulfill_cx.register_predicate_obligations(
+                infcx,
+                obligations
+                    .iter()
+                    .filter(|o| o.predicate.has_infer_types())
+                    .cloned());
+
+            // From the full set of obligations, just filter down to the
+            // region relationships.
+            implied_bounds.extend(obligations.into_iter().flat_map(|obligation| {
+                assert!(!obligation.has_escaping_regions());
+                match obligation.predicate {
+                    ty::Predicate::Trait(..) |
+                    ty::Predicate::Equate(..) |
+                    ty::Predicate::Subtype(..) |
+                    ty::Predicate::Projection(..) |
+                    ty::Predicate::ClosureKind(..) |
+                    ty::Predicate::ObjectSafe(..) |
+                    ty::Predicate::ConstEvaluatable(..) => vec![],
+
+                    ty::Predicate::WellFormed(subty) => {
+                        wf_types.push(subty);
+                        vec![]
+                    }
+
+                    ty::Predicate::RegionOutlives(ref data) => {
+                        match tcx.no_late_bound_regions(data) {
+                            None => vec![],
+                            Some(ty::OutlivesPredicate(r_a, r_b)) => {
+                                vec![ImpliedBound::RegionSubRegion(r_b, r_a)]
+                            }
+                        }
+                    }
+
+                    ty::Predicate::TypeOutlives(ref data) => {
+                        match tcx.no_late_bound_regions(data) {
+                            None => vec![],
+                            Some(ty::OutlivesPredicate(ty_a, r_b)) => {
+                                let ty_a = infcx.resolve_type_vars_if_possible(&ty_a);
+                                let components = tcx.outlives_components(ty_a);
+                                self.implied_bounds_from_components(r_b, components)
+                            }
+                        }
+                    }
+                }
+            }));
+        }
+
+        // Ensure that those obligations that we had to solve
+        // get solved *here*.
+        match fulfill_cx.select_all_or_error(infcx) {
+            Ok(()) => (),
+            Err(errors) => infcx.report_fulfillment_errors(&errors, None),
+        }
+
+        implied_bounds
+    }
+
+    /// When we have an implied bound that `T: 'a`, we can further break
+    /// this down to determine what relationships would have to hold for
+    /// `T: 'a` to hold. We get to assume that the caller has validated
+    /// those relationships.
+    fn implied_bounds_from_components(
+        &self,
+        sub_region: ty::Region<'tcx>,
+        sup_components: Vec<Component<'tcx>>,
+    ) -> Vec<ImpliedBound<'tcx>> {
+        sup_components
+            .into_iter()
+            .flat_map(|component| {
+                match component {
+                    Component::Region(r) =>
+                        vec![ImpliedBound::RegionSubRegion(sub_region, r)],
+                    Component::Param(p) =>
+                        vec![ImpliedBound::RegionSubParam(sub_region, p)],
+                    Component::Projection(p) =>
+                        vec![ImpliedBound::RegionSubProjection(sub_region, p)],
+                    Component::EscapingProjection(_) =>
+                    // If the projection has escaping regions, don't
+                    // try to infer any implied bounds even for its
+                    // free components. This is conservative, because
+                    // the caller will still have to prove that those
+                    // free components outlive `sub_region`. But the
+                    // idea is that the WAY that the caller proves
+                    // that may change in the future and we want to
+                    // give ourselves room to get smarter here.
+                        vec![],
+                    Component::UnresolvedInferenceVariable(..) =>
+                        vec![],
+                }
+            })
+            .collect()
+    }
+}
diff --git a/src/test/run-pass/issue-30276.rs b/src/librustc/infer/outlives/mod.rs
similarity index 74%
rename from src/test/run-pass/issue-30276.rs
rename to src/librustc/infer/outlives/mod.rs
index 5dd0cd8..0976c5f 100644
--- a/src/test/run-pass/issue-30276.rs
+++ b/src/librustc/infer/outlives/mod.rs
@@ -1,4 +1,4 @@
-// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
 // file at the top-level directory of this distribution and at
 // http://rust-lang.org/COPYRIGHT.
 //
@@ -8,7 +8,5 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-struct Test([i32]);
-fn main() {
-    let _x: fn(_) -> Test = Test;
-}
+pub mod env;
+mod obligations;
diff --git a/src/librustc/infer/outlives/obligations.rs b/src/librustc/infer/outlives/obligations.rs
new file mode 100644
index 0000000..c7081e5
--- /dev/null
+++ b/src/librustc/infer/outlives/obligations.rs
@@ -0,0 +1,623 @@
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! Code that handles "type-outlives" constraints like `T: 'a`. This
+//! is based on the `outlives_components` function defined on the tcx,
+//! but it adds a bit of heuristics on top, in particular to deal with
+//! associated types and projections.
+//!
+//! When we process a given `T: 'a` obligation, we may produce two
+//! kinds of constraints for the region inferencer:
+//!
+//! - Relationships between inference variables and other regions.
+//!   For example, if we have `&'?0 u32: 'a`, then we would produce
+//!   a constraint that `'a <= '?0`.
+//! - "Verifys" that must be checked after inferencing is done.
+//!   For example, if we know that, for some type parameter `T`,
+//!   `T: 'a + 'b`, and we have a requirement that `T: '?1`,
+//!   then we add a "verify" that checks that `'?1 <= 'a || '?1 <= 'b`.
+//!   - Note the difference with the previous case: here, the region
+//!     variable must be less than something else, so this doesn't
+//!     affect how inference works (it finds the smallest region that
+//!     will do); it's just a post-condition that we have to check.
+//!
+//! **The key point is that once this function is done, we have
+//! reduced all of our "type-region outlives" obligations into relationships
+//! between individual regions.**
+//!
+//! One key input to this function is the set of "region-bound pairs".
+//! These are basically the relationships between type parameters and
+//! regions that are in scope at the point where the outlives
+//! obligation was incurred. **When type-checking a function,
+//! particularly in the face of closures, this is not known until
+//! regionck runs!** This is because some of those bounds come
+//! from things we have yet to infer.
+//!
+//! Consider:
+//!
+//! ```
+//! fn bar<T>(a: T, b: impl for<'a> Fn(&'a T));
+//! fn foo<T>(x: T) {
+//!     bar(x, |y| { ... })
+//!          // ^ closure arg
+//! }
+//! ```
+//!
+//! Here, the type of `y` may involve inference variables and the
+//! like, and it may also contain implied bounds that are needed to
+//! type-check the closure body (e.g., here it informs us that `T`
+//! outlives the late-bound region `'a`).
+//!
+//! Note that by delaying the gathering of implied bounds until all
+//! inference information is known, we may find relationships between
+//! bound regions and other regions in the environment. For example,
+//! when we first check a closure like the one expected as argument
+//! to `foo`:
+//!
+//! ```
+//! fn foo<U, F: for<'a> FnMut(&'a U)>(_f: F) {}
+//! ```
+//!
+//! the type of the closure's first argument would be `&'a ?U`.  We
+//! might later infer `?U` to something like `&'b u32`, which would
+//! imply that `'b: 'a`.
+
+use hir::def_id::DefId;
+use infer::{self, GenericKind, InferCtxt, RegionObligation, SubregionOrigin, VerifyBound};
+use traits;
+use ty::{self, Ty, TyCtxt, TypeFoldable};
+use ty::subst::{Subst, Substs};
+use ty::outlives::Component;
+use syntax::ast;
+
+impl<'cx, 'gcx, 'tcx> InferCtxt<'cx, 'gcx, 'tcx> {
+    /// Registers that the given region obligation must be resolved
+    /// from within the scope of `body_id`. These regions are enqueued
+    /// and later processed by regionck, when full type information is
+    /// available (see `region_obligations` field for more
+    /// information).
+    pub fn register_region_obligation(
+        &self,
+        body_id: ast::NodeId,
+        obligation: RegionObligation<'tcx>,
+    ) {
+        self.region_obligations
+            .borrow_mut()
+            .push((body_id, obligation));
+    }
+
+    /// Process the region obligations that must be proven (during
+    /// `regionck`) for the given `body_id`, given information about
+    /// the region bounds in scope and so forth. This function must be
+    /// invoked for all relevant body-ids before region inference is
+    /// done (or else an assert will fire).
+    ///
+    /// See the `region_obligations` field of `InferCtxt` for some
+    /// comments about how this function fits into the overall expected
+    /// flow of the inferencer. The key point is that it is
+    /// invoked after all type-inference variables have been bound --
+    /// towards the end of regionck. This also ensures that the
+    /// region-bound-pairs are available (see comments above regarding
+    /// closures).
+    ///
+    /// # Parameters
+    ///
+    /// - `region_bound_pairs`: the set of region bounds implied by
+    ///   the parameters and where-clauses. In particular, each pair
+    ///   `('a, K)` in this list tells us that the bounds in scope
+    ///   indicate that `K: 'a`, where `K` is either a generic
+    ///   parameter like `T` or a projection like `T::Item`.
+    /// - `implicit_region_bound`: if some, this is a region bound
+    ///   that is considered to hold for all type parameters (the
+    ///   function body).
+    /// - `param_env` is the parameter environment for the enclosing function.
+    /// - `body_id` is the body-id whose region obligations are being
+    ///   processed.
+    ///
+    /// # Returns
+    ///
+    /// This function may have to perform normalizations, and hence it
+    /// returns an `InferOk` with subobligations that must be
+    /// processed.
+    pub fn process_registered_region_obligations(
+        &self,
+        region_bound_pairs: &[(ty::Region<'tcx>, GenericKind<'tcx>)],
+        implicit_region_bound: Option<ty::Region<'tcx>>,
+        param_env: ty::ParamEnv<'tcx>,
+        body_id: ast::NodeId,
+    ) {
+        assert!(
+            !self.in_snapshot.get(),
+            "cannot process registered region obligations in a snapshot"
+        );
+
+        // pull out the region obligations with the given `body_id` (leaving the rest)
+        let mut my_region_obligations = Vec::with_capacity(self.region_obligations.borrow().len());
+        {
+            let mut r_o = self.region_obligations.borrow_mut();
+            for (_, obligation) in r_o.drain_filter(|(ro_body_id, _)| *ro_body_id == body_id) {
+                my_region_obligations.push(obligation);
+            }
+        }
+
+        let outlives =
+            TypeOutlives::new(self, region_bound_pairs, implicit_region_bound, param_env);
+
+        for RegionObligation {
+            sup_type,
+            sub_region,
+            cause,
+        } in my_region_obligations
+        {
+            let origin = SubregionOrigin::from_obligation_cause(
+                &cause,
+                || infer::RelateParamBound(cause.span, sup_type),
+            );
+
+            outlives.type_must_outlive(origin, sup_type, sub_region);
+        }
+    }
+
+    /// Processes a single ad-hoc region obligation that was not
+    /// registered in advance.
+    pub fn type_must_outlive(
+        &self,
+        region_bound_pairs: &[(ty::Region<'tcx>, GenericKind<'tcx>)],
+        implicit_region_bound: Option<ty::Region<'tcx>>,
+        param_env: ty::ParamEnv<'tcx>,
+        origin: infer::SubregionOrigin<'tcx>,
+        ty: Ty<'tcx>,
+        region: ty::Region<'tcx>,
+    ) {
+        let outlives =
+            TypeOutlives::new(self, region_bound_pairs, implicit_region_bound, param_env);
+        outlives.type_must_outlive(origin, ty, region);
+    }
+
+    /// Ignore the region obligations, not bothering to prove
+    /// them. This function should not really exist; it is used to
+    /// accommodate some older code for the time being.
+    pub fn ignore_region_obligations(&self) {
+        assert!(
+            !self.in_snapshot.get(),
+            "cannot ignore registered region obligations in a snapshot"
+        );
+
+        self.region_obligations.borrow_mut().clear();
+    }
+}
+
+#[must_use] // you ought to invoke `into_accrued_obligations` when you are done =)
+struct TypeOutlives<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
+    // See the comments on `process_registered_region_obligations` for the meaning
+    // of these fields.
+    infcx: &'cx InferCtxt<'cx, 'gcx, 'tcx>,
+    region_bound_pairs: &'cx [(ty::Region<'tcx>, GenericKind<'tcx>)],
+    implicit_region_bound: Option<ty::Region<'tcx>>,
+    param_env: ty::ParamEnv<'tcx>,
+}
+
+impl<'cx, 'gcx, 'tcx> TypeOutlives<'cx, 'gcx, 'tcx> {
+    fn new(
+        infcx: &'cx InferCtxt<'cx, 'gcx, 'tcx>,
+        region_bound_pairs: &'cx [(ty::Region<'tcx>, GenericKind<'tcx>)],
+        implicit_region_bound: Option<ty::Region<'tcx>>,
+        param_env: ty::ParamEnv<'tcx>,
+    ) -> Self {
+        Self {
+            infcx,
+            region_bound_pairs,
+            implicit_region_bound,
+            param_env,
+        }
+    }
+
+    /// Adds constraints to inference such that `T: 'a` holds (or
+    /// reports an error if it cannot).
+    ///
+    /// # Parameters
+    ///
+    /// - `origin`, the reason we need this constraint
+    /// - `ty`, the type `T`
+    /// - `region`, the region `'a`
+    fn type_must_outlive(
+        &self,
+        origin: infer::SubregionOrigin<'tcx>,
+        ty: Ty<'tcx>,
+        region: ty::Region<'tcx>,
+    ) {
+        let ty = self.infcx.resolve_type_vars_if_possible(&ty);
+
+        debug!(
+            "type_must_outlive(ty={:?}, region={:?}, origin={:?})",
+            ty,
+            region,
+            origin
+        );
+
+        assert!(!ty.has_escaping_regions());
+
+        let components = self.tcx().outlives_components(ty);
+        self.components_must_outlive(origin, components, region);
+    }
+
+    fn tcx(&self) -> TyCtxt<'cx, 'gcx, 'tcx> {
+        self.infcx.tcx
+    }
+
+    fn components_must_outlive(
+        &self,
+        origin: infer::SubregionOrigin<'tcx>,
+        components: Vec<Component<'tcx>>,
+        region: ty::Region<'tcx>,
+    ) {
+        for component in components {
+            let origin = origin.clone();
+            match component {
+                Component::Region(region1) => {
+                    self.infcx.sub_regions(origin, region, region1);
+                }
+                Component::Param(param_ty) => {
+                    self.param_ty_must_outlive(origin, region, param_ty);
+                }
+                Component::Projection(projection_ty) => {
+                    self.projection_must_outlive(origin, region, projection_ty);
+                }
+                Component::EscapingProjection(subcomponents) => {
+                    self.components_must_outlive(origin, subcomponents, region);
+                }
+                Component::UnresolvedInferenceVariable(v) => {
+                    // ignore this, we presume it will yield an error
+                    // later, since if a type variable is not resolved by
+                    // this point it never will be
+                    self.infcx.tcx.sess.delay_span_bug(
+                        origin.span(),
+                        &format!("unresolved inference variable in outlives: {:?}", v),
+                    );
+                }
+            }
+        }
+    }
+
+    fn param_ty_must_outlive(
+        &self,
+        origin: infer::SubregionOrigin<'tcx>,
+        region: ty::Region<'tcx>,
+        param_ty: ty::ParamTy,
+    ) {
+        debug!(
+            "param_ty_must_outlive(region={:?}, param_ty={:?}, origin={:?})",
+            region,
+            param_ty,
+            origin
+        );
+
+        let verify_bound = self.param_bound(param_ty);
+        let generic = GenericKind::Param(param_ty);
+        self.infcx
+            .verify_generic_bound(origin, generic, region, verify_bound);
+    }
+
+    fn projection_must_outlive(
+        &self,
+        origin: infer::SubregionOrigin<'tcx>,
+        region: ty::Region<'tcx>,
+        projection_ty: ty::ProjectionTy<'tcx>,
+    ) {
+        debug!(
+            "projection_must_outlive(region={:?}, projection_ty={:?}, origin={:?})",
+            region,
+            projection_ty,
+            origin
+        );
+
+        // This case is thorny for inference. The fundamental problem is
+        // that there are many cases where we have choice, and inference
+        // doesn't like choice (the current region inference in
+        // particular). :) First off, we have to choose between using the
+        // OutlivesProjectionEnv, OutlivesProjectionTraitDef, and
+        // OutlivesProjectionComponent rules, any one of which is
+        // sufficient.  If there are no inference variables involved, it's
+        // not hard to pick the right rule, but if there are, we're in a
+        // bit of a catch-22: if we picked which rule we were going to
+        // use, we could add constraints to the region inference graph
+        // that make it apply, but if we don't add those constraints, the
+        // rule might not apply (but another rule might). For now, we err
+        // on the side of adding too few edges into the graph.
+
+        // Compute the bounds we can derive from the environment or trait
+        // definition.  We know that the projection outlives all the
+        // regions in this list.
+        let env_bounds = self.projection_declared_bounds(projection_ty);
+
+        debug!("projection_must_outlive: env_bounds={:?}", env_bounds);
+
+        // If we know that the projection outlives 'static, then we're
+        // done here.
+        if env_bounds.contains(&&ty::ReStatic) {
+            debug!("projection_must_outlive: 'static as declared bound");
+            return;
+        }
+
+        // If declared bounds list is empty, the only applicable rule is
+        // OutlivesProjectionComponent. If there are inference variables,
+        // then, we can break down the outlives into more primitive
+        // components without adding unnecessary edges.
+        //
+        // If there are *no* inference variables, however, we COULD do
+        // this, but we choose not to, because the error messages are less
+        // good. For example, a requirement like `T::Item: 'r` would be
+        // translated to a requirement that `T: 'r`; when this is reported
+        // to the user, it will thus say "T: 'r must hold so that T::Item:
+        // 'r holds". But that makes it sound like the only way to fix
+        // the problem is to add `T: 'r`, which isn't true. So, if there are no
+        // inference variables, we use a verify constraint instead of adding
+        // edges, which winds up enforcing the same condition.
+        let needs_infer = projection_ty.needs_infer();
+        if env_bounds.is_empty() && needs_infer {
+            debug!("projection_must_outlive: no declared bounds");
+
+            for component_ty in projection_ty.substs.types() {
+                self.type_must_outlive(origin.clone(), component_ty, region);
+            }
+
+            for r in projection_ty.substs.regions() {
+                self.infcx.sub_regions(origin.clone(), region, r);
+            }
+
+            return;
+        }
+
+        // If we find that there is a unique declared bound `'b`, and this bound
+        // appears in the trait reference, then the best action is to require that `'b:'r`,
+        // so do that. This is best no matter what rule we use:
+        //
+        // - OutlivesProjectionEnv or OutlivesProjectionTraitDef: these would translate to
+        // the requirement that `'b:'r`
+        // - OutlivesProjectionComponent: this would require `'b:'r` in addition to
+        // other conditions
+        if !env_bounds.is_empty() && env_bounds[1..].iter().all(|b| *b == env_bounds[0]) {
+            let unique_bound = env_bounds[0];
+            debug!(
+                "projection_must_outlive: unique declared bound = {:?}",
+                unique_bound
+            );
+            if projection_ty
+                .substs
+                .regions()
+                .any(|r| env_bounds.contains(&r))
+            {
+                debug!("projection_must_outlive: unique declared bound appears in trait ref");
+                self.infcx.sub_regions(origin.clone(), region, unique_bound);
+                return;
+            }
+        }
+
+        // Fallback to verifying after the fact that there exists a
+        // declared bound, or that all the components appearing in the
+        // projection outlive; in some cases, this may add insufficient
+        // edges into the inference graph, leading to inference failures
+        // even though a satisfactory solution exists.
+        let verify_bound = self.projection_bound(env_bounds, projection_ty);
+        let generic = GenericKind::Projection(projection_ty);
+        self.infcx
+            .verify_generic_bound(origin, generic.clone(), region, verify_bound);
+    }
+
+    fn type_bound(&self, ty: Ty<'tcx>) -> VerifyBound<'tcx> {
+        match ty.sty {
+            ty::TyParam(p) => self.param_bound(p),
+            ty::TyProjection(data) => {
+                let declared_bounds = self.projection_declared_bounds(data);
+                self.projection_bound(declared_bounds, data)
+            }
+            _ => self.recursive_type_bound(ty),
+        }
+    }
+
+    fn param_bound(&self, param_ty: ty::ParamTy) -> VerifyBound<'tcx> {
+        debug!("param_bound(param_ty={:?})", param_ty);
+
+        let mut param_bounds = self.declared_generic_bounds_from_env(GenericKind::Param(param_ty));
+
+        // Add in the default bound of fn body that applies to all in
+        // scope type parameters:
+        param_bounds.extend(self.implicit_region_bound);
+
+        VerifyBound::AnyRegion(param_bounds)
+    }
+
+    fn projection_declared_bounds(
+        &self,
+        projection_ty: ty::ProjectionTy<'tcx>,
+    ) -> Vec<ty::Region<'tcx>> {
+        // First assemble bounds from where clauses and traits.
+
+        let mut declared_bounds =
+            self.declared_generic_bounds_from_env(GenericKind::Projection(projection_ty));
+
+        declared_bounds
+            .extend_from_slice(&self.declared_projection_bounds_from_trait(projection_ty));
+
+        declared_bounds
+    }
+
+    fn projection_bound(
+        &self,
+        declared_bounds: Vec<ty::Region<'tcx>>,
+        projection_ty: ty::ProjectionTy<'tcx>,
+    ) -> VerifyBound<'tcx> {
+        debug!(
+            "projection_bound(declared_bounds={:?}, projection_ty={:?})",
+            declared_bounds,
+            projection_ty
+        );
+
+        // see the extensive comment in projection_must_outlive
+        let ty = self.infcx
+            .tcx
+            .mk_projection(projection_ty.item_def_id, projection_ty.substs);
+        let recursive_bound = self.recursive_type_bound(ty);
+
+        VerifyBound::AnyRegion(declared_bounds).or(recursive_bound)
+    }
+
+    fn recursive_type_bound(&self, ty: Ty<'tcx>) -> VerifyBound<'tcx> {
+        let mut bounds = vec![];
+
+        for subty in ty.walk_shallow() {
+            bounds.push(self.type_bound(subty));
+        }
+
+        let mut regions = ty.regions();
+        regions.retain(|r| !r.is_late_bound()); // ignore late-bound regions
+        bounds.push(VerifyBound::AllRegions(regions));
+
+        // remove bounds that must hold, since they are not interesting
+        bounds.retain(|b| !b.must_hold());
+
+        if bounds.len() == 1 {
+            bounds.pop().unwrap()
+        } else {
+            VerifyBound::AllBounds(bounds)
+        }
+    }
+
+    fn declared_generic_bounds_from_env(
+        &self,
+        generic: GenericKind<'tcx>,
+    ) -> Vec<ty::Region<'tcx>> {
+        let tcx = self.tcx();
+
+        // To start, collect bounds from user environment. Note that
+        // parameter environments are already elaborated, so we don't
+        // have to worry about that. Comparing using `==` is a bit
+        // dubious for projections, but it will work for simple cases
+        // like `T` and `T::Item`. It may not work as well for things
+        // like `<T as Foo<'a>>::Item`.
+        let generic_ty = generic.to_ty(tcx);
+        let c_b = self.param_env.caller_bounds;
+        let mut param_bounds = self.collect_outlives_from_predicate_list(generic_ty, c_b);
+
+        // Next, collect regions we scraped from the well-formedness
+        // constraints in the fn signature. To do that, we walk the list
+        // of known relations from the fn ctxt.
+        //
+        // This is crucial because otherwise code like this fails:
+        //
+        //     fn foo<'a, A>(x: &'a A) { x.bar() }
+        //
+        // The problem is that the type of `x` is `&'a A`. To be
+        // well-formed, then, `A` must outlive `'a`, but we don't know
+        // that this holds from first principles.
+        for &(r, p) in self.region_bound_pairs {
+            debug!("generic={:?} p={:?}", generic, p);
+            if generic == p {
+                param_bounds.push(r);
+            }
+        }
+
+        param_bounds
+    }
+
+    /// Given a projection like `<T as Foo<'x>>::Bar`, returns any bounds
+    /// declared in the trait definition. For example, if the trait were
+    ///
+    /// ```rust
+    /// trait Foo<'a> {
+    ///     type Bar: 'a;
+    /// }
+    /// ```
+    ///
+    /// then this function would return `'x`. This is subject to the
+    /// limitations around higher-ranked bounds described in
+    /// `region_bounds_declared_on_associated_item`.
+    fn declared_projection_bounds_from_trait(
+        &self,
+        projection_ty: ty::ProjectionTy<'tcx>,
+    ) -> Vec<ty::Region<'tcx>> {
+        debug!("projection_bounds(projection_ty={:?})", projection_ty);
+        let mut bounds = self.region_bounds_declared_on_associated_item(projection_ty.item_def_id);
+        for r in &mut bounds {
+            *r = r.subst(self.tcx(), projection_ty.substs);
+        }
+        bounds
+    }
+
+    /// Given the def-id of an associated item, returns any region
+    /// bounds attached to that associated item from the trait definition.
+    ///
+    /// For example:
+    ///
+    /// ```rust
+    /// trait Foo<'a> {
+    ///     type Bar: 'a;
+    /// }
+    /// ```
+    ///
+    /// If we were given the def-id of `Foo::Bar`, we would return
+    /// `'a`. You could then apply the substitutions from the
+    /// projection to convert this into your namespace. This also
+    /// works if the user writes `where <Self as Foo<'a>>::Bar: 'a` on
+    /// the trait. In fact, it works by searching for just such a
+    /// where-clause.
+    ///
+    /// It will not, however, work for higher-ranked bounds like:
+    ///
+    /// ```rust
+    /// trait Foo<'a, 'b>
+    /// where for<'x> <Self as Foo<'x, 'b>>::Bar: 'x
+    /// {
+    ///     type Bar;
+    /// }
+    /// ```
+    ///
+    /// This is for simplicity, and because we are not really smart
+    /// enough to cope with such bounds anywhere.
+    fn region_bounds_declared_on_associated_item(
+        &self,
+        assoc_item_def_id: DefId,
+    ) -> Vec<ty::Region<'tcx>> {
+        let tcx = self.tcx();
+        let assoc_item = tcx.associated_item(assoc_item_def_id);
+        let trait_def_id = assoc_item.container.assert_trait();
+        let trait_predicates = tcx.predicates_of(trait_def_id);
+        let identity_substs = Substs::identity_for_item(tcx, assoc_item_def_id);
+        let identity_proj = tcx.mk_projection(assoc_item_def_id, identity_substs);
+        self.collect_outlives_from_predicate_list(
+            identity_proj,
+            traits::elaborate_predicates(tcx, trait_predicates.predicates),
+        )
+    }
+
+    /// Searches through a predicate list for a predicate `T: 'a`.
+    ///
+    /// Careful: does not elaborate predicates, and just uses `==`
+    /// when comparing `ty` for equality, so `ty` must be something
+    /// that does not involve inference variables and where you
+    /// otherwise want a precise match.
+    fn collect_outlives_from_predicate_list<I, P>(
+        &self,
+        ty: Ty<'tcx>,
+        predicates: I,
+    ) -> Vec<ty::Region<'tcx>>
+    where
+        I: IntoIterator<Item = P>,
+        P: AsRef<ty::Predicate<'tcx>>,
+    {
+        predicates
+            .into_iter()
+            .filter_map(|p| p.as_ref().to_opt_type_outlives())
+            .filter_map(|p| self.tcx().no_late_bound_regions(&p))
+            .filter(|p| p.0 == ty)
+            .map(|p| p.1)
+            .collect()
+    }
+}
diff --git a/src/librustc/infer/region_constraints/README.md b/src/librustc/infer/region_constraints/README.md
new file mode 100644
index 0000000..67ad08c
--- /dev/null
+++ b/src/librustc/infer/region_constraints/README.md
@@ -0,0 +1,70 @@
+# Region constraint collection
+
+## Terminology
+
+Note that we use the terms region and lifetime interchangeably.
+
+## Introduction
+
+As described in the [inference README](../README.md), region
+inference works differently from normal type inference, which is
+similar in spirit to H-M and thus works progressively. Instead,
+region inference accumulates constraints over the course of a
+function and then, once the function has been fully processed,
+solves all of the constraints at once.
+
+The constraints are always of one of three possible forms:
+
+- `ConstrainVarSubVar(Ri, Rj)` states that region variable Ri must be
+  a subregion of Rj
+- `ConstrainRegSubVar(R, Ri)` states that the concrete region R (which
+  must not be a variable) must be a subregion of the variable Ri
+- `ConstrainVarSubReg(Ri, R)` states that the variable Ri should be less
+  than the concrete region R. This is kind of deprecated and ought to
+  be replaced with a verify (they essentially play the same role).
+
+In addition to constraints, we also gather up a set of "verifys"
+(what, you don't think Verify is a noun? Get used to it, my
+friend!). These represent relations that must hold but which don't
+influence inference proper. They take the form of:
+
+- `VerifyRegSubReg(Ri, Rj)` indicates that Ri <= Rj must hold,
+  where Rj is not an inference variable (and Ri may or may not contain
+  one). This doesn't influence inference because we will already have
+  inferred Ri to be as small as possible, so then we just test whether
+  that result was less than Rj or not.
+- `VerifyGenericBound(R, Vb)` is a more complex expression which tests
+  that the region R must satisfy the bound `Vb`. The bounds themselves
+  may have structure like "must outlive one of the following regions"
+  or "must outlive ALL of the following regions". These bounds arise
+  from constraints like `T: 'a` -- if we know that `T: 'b` and `T: 'c`
+  (say, from where clauses), then we can conclude that `T: 'a` if `'b:
+  'a` *or* `'c: 'a`.
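The `T: 'a` rule described above shows up in ordinary Rust code. Here is a minimal illustration (the function name is invented for this sketch): the where-clause supplies `T: 'b`, and `'b: 'a` lets the checker conclude `T: 'a`, which `&'a T` requires to be well-formed.

```rust
// `T: 'a` is needed for `&'a T` to be well-formed; it follows from
// the declared bound `T: 'b` together with `'b: 'a`.
fn hold<'a, 'b: 'a, T: 'b>(x: &'a T) -> &'a T {
    x
}

fn main() {
    let s = String::from("region");
    // The borrow of `s` picks some region 'a; `String: 'b` and
    // `'b: 'a` satisfy the bound.
    assert_eq!(hold(&s).as_str(), "region");
}
```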
+
+## Building up the constraints
+
+Variables and constraints are created using the following methods:
+
+- `new_region_var()` creates a new, unconstrained region variable;
+- `make_subregion(Ri, Rj)` states that Ri is a subregion of Rj
+- `lub_regions(Ri, Rj) -> Rk` returns a region Rk which is
+  the smallest region that is greater than both Ri and Rj
+- `glb_regions(Ri, Rj) -> Rk` returns a region Rk which is
+  the greatest region that is smaller than both Ri and Rj
+
+The actual region resolution algorithm is not entirely
+obvious, though it is also not overly complex.
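The accumulate-then-solve shape described above can be sketched as follows. This is not the real algorithm — regions here are modeled as plain integers (larger means "lives at least as long"), and the names are invented for illustration.

```rust
// A toy accumulate-then-solve region checker. The real solver walks a
// constraint graph and computes minimal values for region variables;
// this sketch only captures the two-phase structure.
struct Collector {
    // (sub, sup) pairs meaning `sub <= sup` must hold.
    constraints: Vec<(u32, u32)>,
}

impl Collector {
    fn new() -> Self {
        Collector { constraints: Vec::new() }
    }

    // Phase 1: accumulate a constraint; nothing is checked yet.
    fn make_subregion(&mut self, sub: u32, sup: u32) {
        self.constraints.push((sub, sup));
    }

    // Phase 2: solve everything at the end, returning violated pairs.
    fn solve(&self) -> Vec<(u32, u32)> {
        self.constraints
            .iter()
            .cloned()
            .filter(|&(sub, sup)| sub > sup)
            .collect()
    }
}

fn main() {
    let mut c = Collector::new();
    c.make_subregion(1, 2); // ok: region 1 ends no later than region 2
    c.make_subregion(3, 1); // violated: region 3 outlives region 1
    assert_eq!(c.solve(), vec![(3, 1)]);
}
```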
+
+## Snapshotting
+
+It is also permitted to try (and roll back) changes to the graph. This
+is done by invoking `start_snapshot()`, which returns a value.  Then
+later you can call `rollback_to()` which undoes the work.
+Alternatively, you can call `commit()` which ends all snapshots.
+Snapshots can be recursive---so you can start a snapshot when another
+is in progress, but only the root snapshot can "commit".
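The undo-log mechanism behind `start_snapshot()`/`rollback_to()` can be sketched in simplified form. The types and names below are invented for illustration; the real collector records many kinds of actions, not just additions.

```rust
// A simplified undo log in the style of `start_snapshot` /
// `rollback_to`. Actions recorded while a snapshot is open can be
// undone; actions taken outside any snapshot can never be rolled back.
enum Entry {
    OpenSnapshot,
    Add(u32),
}

struct Log {
    undo_log: Vec<Entry>,
    values: Vec<u32>, // the state being tracked
}

struct Snapshot {
    length: usize,
}

impl Log {
    fn new() -> Self {
        Log { undo_log: Vec::new(), values: Vec::new() }
    }

    fn add(&mut self, v: u32) {
        self.values.push(v);
        // Only record the action if we are actively snapshotting.
        if !self.undo_log.is_empty() {
            self.undo_log.push(Entry::Add(v));
        }
    }

    fn start_snapshot(&mut self) -> Snapshot {
        let length = self.undo_log.len();
        self.undo_log.push(Entry::OpenSnapshot);
        Snapshot { length }
    }

    // Undo every action recorded since the snapshot was opened.
    fn rollback_to(&mut self, snapshot: Snapshot) {
        while self.undo_log.len() > snapshot.length {
            match self.undo_log.pop().unwrap() {
                Entry::Add(_) => {
                    self.values.pop();
                }
                Entry::OpenSnapshot => {}
            }
        }
    }
}

fn main() {
    let mut log = Log::new();
    log.add(1); // outside any snapshot: permanent
    let snap = log.start_snapshot();
    log.add(2);
    log.add(3);
    log.rollback_to(snap);
    assert_eq!(log.values, vec![1]);
}
```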
+
+## Skolemization
+
+For a discussion on skolemization and higher-ranked subtyping, please
+see the module `middle::infer::higher_ranked::doc`.
diff --git a/src/librustc/infer/region_constraints/mod.rs b/src/librustc/infer/region_constraints/mod.rs
new file mode 100644
index 0000000..096037e
--- /dev/null
+++ b/src/librustc/infer/region_constraints/mod.rs
@@ -0,0 +1,956 @@
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! See README.md
+
+use self::UndoLogEntry::*;
+use self::CombineMapType::*;
+
+use super::{MiscVariable, RegionVariableOrigin, SubregionOrigin};
+use super::unify_key;
+
+use rustc_data_structures::indexed_vec::IndexVec;
+use rustc_data_structures::fx::{FxHashMap, FxHashSet};
+use rustc_data_structures::unify::{self, UnificationTable};
+use ty::{self, Ty, TyCtxt};
+use ty::{Region, RegionVid};
+use ty::ReStatic;
+use ty::{BrFresh, ReLateBound, ReSkolemized, ReVar};
+
+use std::collections::BTreeMap;
+use std::fmt;
+use std::mem;
+use std::u32;
+
+mod taint;
+
+pub struct RegionConstraintCollector<'tcx> {
+    /// For each `RegionVid`, the corresponding `RegionVariableOrigin`.
+    var_origins: IndexVec<RegionVid, RegionVariableOrigin>,
+
+    data: RegionConstraintData<'tcx>,
+
+    /// For a given pair of regions (R1, R2), maps to a region R3 that
+    /// is designated as their LUB (edges R1 <= R3 and R2 <= R3
+    /// exist). This prevents us from making many such regions.
+    lubs: CombineMap<'tcx>,
+
+    /// For a given pair of regions (R1, R2), maps to a region R3 that
+    /// is designated as their GLB (edges R3 <= R1 and R3 <= R2
+    /// exist). This prevents us from making many such regions.
+    glbs: CombineMap<'tcx>,
+
+    /// Number of skolemized variables currently active.
+    skolemization_count: u32,
+
+    /// Global counter used during the GLB algorithm to create unique
+    /// names for fresh bound regions
+    bound_count: u32,
+
+    /// The undo log records actions that might later be undone.
+    ///
+    /// Note: when the undo_log is empty, we are not actively
+    /// snapshotting. When the `start_snapshot()` method is called, we
+    /// push an OpenSnapshot entry onto the list to indicate that we
+    /// are now actively snapshotting. The reason for this is that
+    /// otherwise we end up adding entries for things like the lower
+    /// bound on a variable and so forth, which can never be rolled
+    /// back.
+    undo_log: Vec<UndoLogEntry<'tcx>>,
+
+    /// When we add an R1 == R2 constraint, we currently add (a) edges
+    /// R1 <= R2 and R2 <= R1 and (b) we unify the two regions in this
+    /// table. You can then call `opportunistic_resolve_var` early
+    /// which will map R1 and R2 to some common region (i.e., either
+    /// R1 or R2). This is important when dropck and other such code
+    /// is iterating to a fixed point, because otherwise we sometimes
+    /// would wind up with a fresh stream of region variables that
+    /// have been equated but appear distinct.
+    unification_table: UnificationTable<ty::RegionVid>,
+}
+
+pub type VarOrigins = IndexVec<RegionVid, RegionVariableOrigin>;
+
+/// The full set of region constraints gathered up by the collector.
+/// Describes constraints between the region variables and other
+/// regions, as well as other conditions that must be verified, or
+/// assumptions that can be made.
+#[derive(Default)]
+pub struct RegionConstraintData<'tcx> {
+    /// Constraints of the form `A <= B`, where either `A` or `B` can
+    /// be a region variable (or neither, as it happens).
+    pub constraints: BTreeMap<Constraint<'tcx>, SubregionOrigin<'tcx>>,
+
+    /// A "verify" is something that we need to verify after inference
+    /// is done, but which does not directly affect inference in any
+    /// way.
+    ///
+    /// An example is a `A <= B` where neither `A` nor `B` are
+    /// inference variables.
+    pub verifys: Vec<Verify<'tcx>>,
+
+    /// A "given" is a relationship that is known to hold. In
+    /// particular, we often know from closure fn signatures that a
+    /// particular free region must be a subregion of a region
+    /// variable:
+    ///
+    ///    foo.iter().filter(<'a> |x: &'a &'b T| ...)
+    ///
+    /// In situations like this, `'b` is in fact a region variable
+    /// introduced by the call to `iter()`, and `'a` is a bound region
+    /// on the closure (as indicated by the `<'a>` prefix). If we are
+    /// naive, we wind up inferring that `'b` must be `'static`,
+    /// because we require that it be greater than `'a` and we do not
+    /// know what `'a` is precisely.
+    ///
+    /// This hashmap is used to avoid that naive scenario. Basically
+    /// we record the fact that `'a <= 'b` is implied by the fn
+    /// signature, and then ignore the constraint when solving
+    /// equations. This is a bit of a hack but seems to work.
+    pub givens: FxHashSet<(Region<'tcx>, ty::RegionVid)>,
+}
+
+/// A constraint that influences the inference process.
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, PartialOrd, Ord)]
+pub enum Constraint<'tcx> {
+    /// One region variable is subregion of another
+    VarSubVar(RegionVid, RegionVid),
+
+    /// Concrete region is subregion of region variable
+    RegSubVar(Region<'tcx>, RegionVid),
+
+    /// Region variable is subregion of concrete region. This does not
+    /// directly affect inference, but instead is checked after
+    /// inference is complete.
+    VarSubReg(RegionVid, Region<'tcx>),
+
+    /// A constraint where neither side is a variable. This does not
+    /// directly affect inference, but instead is checked after
+    /// inference is complete.
+    RegSubReg(Region<'tcx>, Region<'tcx>),
+}
+
+/// VerifyGenericBound(T, _, R, RS): The parameter type `T` (or
+/// associated type) must outlive the region `R`. `T` is known to
+/// outlive `RS`. Therefore verify that `R <= RS[i]` for some
+/// `i`. Inference variables may be involved (but this verification
+/// step doesn't influence inference).
+#[derive(Debug)]
+pub struct Verify<'tcx> {
+    pub kind: GenericKind<'tcx>,
+    pub origin: SubregionOrigin<'tcx>,
+    pub region: Region<'tcx>,
+    pub bound: VerifyBound<'tcx>,
+}
+
+#[derive(Copy, Clone, PartialEq, Eq)]
+pub enum GenericKind<'tcx> {
+    Param(ty::ParamTy),
+    Projection(ty::ProjectionTy<'tcx>),
+}
+
+/// When we introduce a verification step, we wish to test that a
+/// particular region (let's call it `'min`) meets some bound.
+/// The bound is described by the following grammar:
+#[derive(Debug)]
+pub enum VerifyBound<'tcx> {
+    /// B = exists {R} --> some 'r in {R} must outlive 'min
+    ///
+    /// Put another way, the subject value is known to outlive all
+    /// regions in {R}, so if any of those outlives 'min, then the
+    /// bound is met.
+    AnyRegion(Vec<Region<'tcx>>),
+
+    /// B = forall {R} --> all 'r in {R} must outlive 'min
+    ///
+    /// Put another way, the subject value is known to outlive some
+    /// region in {R}, so if all of those outlive 'min, then the bound
+    /// is met.
+    AllRegions(Vec<Region<'tcx>>),
+
+    /// B = exists {B} --> 'min must meet some bound b in {B}
+    AnyBound(Vec<VerifyBound<'tcx>>),
+
+    /// B = forall {B} --> 'min must meet all bounds b in {B}
+    AllBounds(Vec<VerifyBound<'tcx>>),
+}
+
+#[derive(Copy, Clone, PartialEq, Eq, Hash)]
+struct TwoRegions<'tcx> {
+    a: Region<'tcx>,
+    b: Region<'tcx>,
+}
+
+#[derive(Copy, Clone, PartialEq)]
+enum UndoLogEntry<'tcx> {
+    /// Pushed when we start a snapshot.
+    OpenSnapshot,
+
+    /// Replaces an `OpenSnapshot` when a snapshot is committed, but
+    /// that snapshot is not the root. If the root snapshot is
+    /// unrolled, all nested snapshots must be committed.
+    CommitedSnapshot,
+
+    /// We added `RegionVid`
+    AddVar(RegionVid),
+
+    /// We added the given `constraint`
+    AddConstraint(Constraint<'tcx>),
+
+    /// We added the given `verify`
+    AddVerify(usize),
+
+    /// We added the given `given`
+    AddGiven(Region<'tcx>, ty::RegionVid),
+
+    /// We added a GLB/LUB "combination variable"
+    AddCombination(CombineMapType, TwoRegions<'tcx>),
+
+    /// During skolemization, we sometimes purge entries from the undo
+    /// log in a kind of minisnapshot (unlike other snapshots, this
+    /// purging actually takes place *on success*). In that case, we
+    /// replace the corresponding entry with `Noop` so as to avoid the
+    /// need to do a bunch of swapping. (We can't use `swap_remove` as
+    /// the order of the vector is important.)
+    Purged,
+}
+
+#[derive(Copy, Clone, PartialEq)]
+enum CombineMapType {
+    Lub,
+    Glb,
+}
+
+type CombineMap<'tcx> = FxHashMap<TwoRegions<'tcx>, RegionVid>;
+
+pub struct RegionSnapshot {
+    length: usize,
+    region_snapshot: unify::Snapshot<ty::RegionVid>,
+    skolemization_count: u32,
+}
+
+/// When working with skolemized regions, we often wish to find all of
+/// the regions that are either reachable from a skolemized region, or
+/// which can reach a skolemized region, or both. We call such regions
+/// *tainted* regions. This struct allows you to decide what set of
+/// tainted regions you want.
+#[derive(Debug)]
+pub struct TaintDirections {
+    incoming: bool,
+    outgoing: bool,
+}
+
+impl TaintDirections {
+    pub fn incoming() -> Self {
+        TaintDirections {
+            incoming: true,
+            outgoing: false,
+        }
+    }
+
+    pub fn outgoing() -> Self {
+        TaintDirections {
+            incoming: false,
+            outgoing: true,
+        }
+    }
+
+    pub fn both() -> Self {
+        TaintDirections {
+            incoming: true,
+            outgoing: true,
+        }
+    }
+}
+
+impl<'tcx> RegionConstraintCollector<'tcx> {
+    pub fn new() -> RegionConstraintCollector<'tcx> {
+        RegionConstraintCollector {
+            var_origins: VarOrigins::default(),
+            data: RegionConstraintData::default(),
+            lubs: FxHashMap(),
+            glbs: FxHashMap(),
+            skolemization_count: 0,
+            bound_count: 0,
+            undo_log: Vec::new(),
+            unification_table: UnificationTable::new(),
+        }
+    }
+
+    pub fn var_origins(&self) -> &VarOrigins {
+        &self.var_origins
+    }
+
+    /// Once all the constraints have been gathered, extract out the final data.
+    ///
+    /// Not legal during a snapshot.
+    pub fn into_origins_and_data(self) -> (VarOrigins, RegionConstraintData<'tcx>) {
+        assert!(!self.in_snapshot());
+        (self.var_origins, self.data)
+    }
+
+    /// Takes (and clears) the current set of constraints. Note that
+    /// the set of variables remains intact, but all relationships
+    /// between them are reset.  This is used during NLL checking to
+    /// grab the set of constraints that arose from a particular
+    /// operation.
+    ///
+    /// We don't want to leak relationships between variables between
+    /// points because just because (say) `r1 == r2` was true at some
+    /// point P in the graph doesn't imply that it will be true at
+    /// some other point Q, in NLL.
+    ///
+    /// Not legal during a snapshot.
+    pub fn take_and_reset_data(&mut self) -> RegionConstraintData<'tcx> {
+        assert!(!self.in_snapshot());
+
+        // If you add a new field to `RegionConstraintCollector`, you
+        // should think carefully about whether it needs to be cleared
+        // or updated in some way.
+        let RegionConstraintCollector {
+            var_origins,
+            data,
+            lubs,
+            glbs,
+            skolemization_count,
+            bound_count: _,
+            undo_log: _,
+            unification_table,
+        } = self;
+
+        assert_eq!(*skolemization_count, 0);
+
+        // Clear the tables of (lubs, glbs), so that we will create
+        // fresh regions if we do a LUB operation. As it happens,
+        // LUB/GLB are not performed by the MIR type-checker, which is
+        // the one that uses this method, but it's good to be correct.
+        lubs.clear();
+        glbs.clear();
+
+        // Clear all unifications and recreate the variables in a "now
+        // un-unified" state. Note that when we unify `a` and `b`, we
+        // also insert `a <= b` and `b <= a` edges, so the
+        // `RegionConstraintData` contains the relationship here.
+        *unification_table = UnificationTable::new();
+        for vid in var_origins.indices() {
+            unification_table.new_key(unify_key::RegionVidKey { min_vid: vid });
+        }
+
+        mem::replace(data, RegionConstraintData::default())
+    }
+
+    fn in_snapshot(&self) -> bool {
+        !self.undo_log.is_empty()
+    }
+
+    pub fn start_snapshot(&mut self) -> RegionSnapshot {
+        let length = self.undo_log.len();
+        debug!("RegionConstraintCollector: start_snapshot({})", length);
+        self.undo_log.push(OpenSnapshot);
+        RegionSnapshot {
+            length,
+            region_snapshot: self.unification_table.snapshot(),
+            skolemization_count: self.skolemization_count,
+        }
+    }
+
+    pub fn commit(&mut self, snapshot: RegionSnapshot) {
+        debug!("RegionConstraintCollector: commit({})", snapshot.length);
+        assert!(self.undo_log.len() > snapshot.length);
+        assert!(self.undo_log[snapshot.length] == OpenSnapshot);
+        assert!(
+            self.skolemization_count == snapshot.skolemization_count,
+            "failed to pop skolemized regions: {} now vs {} at start",
+            self.skolemization_count,
+            snapshot.skolemization_count
+        );
+
+        if snapshot.length == 0 {
+            self.undo_log.truncate(0);
+        } else {
+            (*self.undo_log)[snapshot.length] = CommitedSnapshot;
+        }
+        self.unification_table.commit(snapshot.region_snapshot);
+    }
+
+    pub fn rollback_to(&mut self, snapshot: RegionSnapshot) {
+        debug!("RegionConstraintCollector: rollback_to({:?})", snapshot);
+        assert!(self.undo_log.len() > snapshot.length);
+        assert!(self.undo_log[snapshot.length] == OpenSnapshot);
+        while self.undo_log.len() > snapshot.length + 1 {
+            let undo_entry = self.undo_log.pop().unwrap();
+            self.rollback_undo_entry(undo_entry);
+        }
+        let c = self.undo_log.pop().unwrap();
+        assert!(c == OpenSnapshot);
+        self.skolemization_count = snapshot.skolemization_count;
+        self.unification_table.rollback_to(snapshot.region_snapshot);
+    }
+
+    fn rollback_undo_entry(&mut self, undo_entry: UndoLogEntry<'tcx>) {
+        match undo_entry {
+            OpenSnapshot => {
+                panic!("Failure to observe stack discipline");
+            }
+            Purged | CommitedSnapshot => {
+                // nothing to do here
+            }
+            AddVar(vid) => {
+                self.var_origins.pop().unwrap();
+                assert_eq!(self.var_origins.len(), vid.index as usize);
+            }
+            AddConstraint(ref constraint) => {
+                self.data.constraints.remove(constraint);
+            }
+            AddVerify(index) => {
+                self.data.verifys.pop();
+                assert_eq!(self.data.verifys.len(), index);
+            }
+            AddGiven(sub, sup) => {
+                self.data.givens.remove(&(sub, sup));
+            }
+            AddCombination(Glb, ref regions) => {
+                self.glbs.remove(regions);
+            }
+            AddCombination(Lub, ref regions) => {
+                self.lubs.remove(regions);
+            }
+        }
+    }
+
+    pub fn new_region_var(&mut self, origin: RegionVariableOrigin) -> RegionVid {
+        let vid = self.var_origins.push(origin.clone());
+
+        let u_vid = self.unification_table
+            .new_key(unify_key::RegionVidKey { min_vid: vid });
+        assert_eq!(vid, u_vid);
+        if self.in_snapshot() {
+            self.undo_log.push(AddVar(vid));
+        }
+        debug!(
+            "created new region variable {:?} with origin {:?}",
+            vid,
+            origin
+        );
+        return vid;
+    }
+
+    /// Returns the origin for the given variable.
+    pub fn var_origin(&self, vid: RegionVid) -> RegionVariableOrigin {
+        self.var_origins[vid].clone()
+    }
+
+    /// Creates a new skolemized region. Skolemized regions are fresh
+    /// regions used when performing higher-ranked computations. They
+    /// must be used in a very particular way and are never supposed
+    /// to "escape" out into error messages or the code at large.
+    ///
+    /// The idea is to always create a snapshot. Skolemized regions
+    /// can be created in the context of this snapshot, but before the
+    /// snapshot is committed or rolled back, they must be popped
+    /// (using `pop_skolemized_regions`), so that their numbers can be
+    /// recycled. Normally you don't have to think about this: you use
+    /// the APIs in `higher_ranked/mod.rs`, such as
+    /// `skolemize_late_bound_regions` and `plug_leaks`, which will
+    /// guide you on this path (ensure that the `SkolemizationMap` is
+    /// consumed and you are good).  There are also somewhat extensive
+    /// comments in `higher_ranked/README.md`.
+    ///
+    /// The `snapshot` argument to this function is not really used;
+    /// it's just there to make it explicit which snapshot bounds the
+    /// skolemized region that results. It should always be the top-most snapshot.
+    pub fn push_skolemized(
+        &mut self,
+        tcx: TyCtxt<'_, '_, 'tcx>,
+        br: ty::BoundRegion,
+        snapshot: &RegionSnapshot,
+    ) -> Region<'tcx> {
+        assert!(self.in_snapshot());
+        assert!(self.undo_log[snapshot.length] == OpenSnapshot);
+
+        let sc = self.skolemization_count;
+        self.skolemization_count = sc + 1;
+        tcx.mk_region(ReSkolemized(ty::SkolemizedRegionVid { index: sc }, br))
+    }
+
+    /// Removes all the edges to/from the skolemized regions that are
+    /// in `skols`. This is used after a higher-ranked operation
+    /// completes to remove all trace of the skolemized regions
+    /// created in that time.
+    pub fn pop_skolemized(
+        &mut self,
+        _tcx: TyCtxt<'_, '_, 'tcx>,
+        skols: &FxHashSet<ty::Region<'tcx>>,
+        snapshot: &RegionSnapshot,
+    ) {
+        debug!("pop_skolemized_regions(skols={:?})", skols);
+
+        assert!(self.in_snapshot());
+        assert!(self.undo_log[snapshot.length] == OpenSnapshot);
+        assert!(
+            self.skolemization_count as usize >= skols.len(),
+            "popping more skolemized variables than actually exist, \
+             sc now = {}, skols.len = {}",
+            self.skolemization_count,
+            skols.len()
+        );
+
+        let last_to_pop = self.skolemization_count;
+        let first_to_pop = last_to_pop - (skols.len() as u32);
+
+        assert!(
+            first_to_pop >= snapshot.skolemization_count,
+            "popping more regions than snapshot contains, \
+             sc now = {}, sc then = {}, skols.len = {}",
+            self.skolemization_count,
+            snapshot.skolemization_count,
+            skols.len()
+        );
+        debug_assert! {
+            skols.iter()
+                 .all(|&k| match *k {
+                     ty::ReSkolemized(index, _) =>
+                         index.index >= first_to_pop &&
+                         index.index < last_to_pop,
+                     _ =>
+                         false
+                 }),
+            "invalid skolemization keys or keys out of range ({}..{}): {:?}",
+            snapshot.skolemization_count,
+            self.skolemization_count,
+            skols
+        }
+
+        let constraints_to_kill: Vec<usize> = self.undo_log
+            .iter()
+            .enumerate()
+            .rev()
+            .filter(|&(_, undo_entry)| kill_constraint(skols, undo_entry))
+            .map(|(index, _)| index)
+            .collect();
+
+        for index in constraints_to_kill {
+            let undo_entry = mem::replace(&mut self.undo_log[index], Purged);
+            self.rollback_undo_entry(undo_entry);
+        }
+
+        self.skolemization_count = snapshot.skolemization_count;
+        return;
+
+        fn kill_constraint<'tcx>(
+            skols: &FxHashSet<ty::Region<'tcx>>,
+            undo_entry: &UndoLogEntry<'tcx>,
+        ) -> bool {
+            match undo_entry {
+                &AddConstraint(Constraint::VarSubVar(..)) => false,
+                &AddConstraint(Constraint::RegSubVar(a, _)) => skols.contains(&a),
+                &AddConstraint(Constraint::VarSubReg(_, b)) => skols.contains(&b),
+                &AddConstraint(Constraint::RegSubReg(a, b)) => {
+                    skols.contains(&a) || skols.contains(&b)
+                }
+                &AddGiven(..) => false,
+                &AddVerify(_) => false,
+                &AddCombination(_, ref two_regions) => {
+                    skols.contains(&two_regions.a) || skols.contains(&two_regions.b)
+                }
+                &AddVar(..) | &OpenSnapshot | &Purged | &CommitedSnapshot => false,
+            }
+        }
+    }
+
+    pub fn new_bound(
+        &mut self,
+        tcx: TyCtxt<'_, '_, 'tcx>,
+        debruijn: ty::DebruijnIndex,
+    ) -> Region<'tcx> {
+        // Creates a fresh bound variable for use in GLB computations.
+        // See discussion of GLB computation in the large comment at
+        // the top of this file for more details.
+        //
+        // This computation is potentially wrong in the face of
+        // rollover.  It's conceivable, if unlikely, that one might
+        // wind up with accidental capture for nested functions in
+        // that case, if the outer function had bound regions created
+        // a very long time before and the inner function somehow
+        // wound up rolling over such that supposedly fresh
+        // identifiers were in fact shadowed. For now, we just assert
+        // that there is no rollover -- eventually we should try to be
+        // robust against this possibility, either by checking the set
+        // of bound identifiers that appear in a given expression and
+        // ensure that we generate one that is distinct, or by
+        // changing the representation of bound regions in a fn
+        // declaration
+
+        let sc = self.bound_count;
+        self.bound_count = sc + 1;
+
+        if sc >= self.bound_count {
+            bug!("rollover in RegionInference new_bound()");
+        }
+
+        tcx.mk_region(ReLateBound(debruijn, BrFresh(sc)))
+    }
+
+    fn add_constraint(&mut self, constraint: Constraint<'tcx>, origin: SubregionOrigin<'tcx>) {
+        // cannot add constraints once regions are resolved
+        debug!(
+            "RegionConstraintCollector: add_constraint({:?})",
+            constraint
+        );
+
+        // never overwrite an existing (constraint, origin) - only insert one if it isn't
+        // present in the map yet. This prevents origins from outside the snapshot being
+        // replaced with "less informative" origins e.g. during calls to `can_eq`
+        let in_snapshot = self.in_snapshot();
+        let undo_log = &mut self.undo_log;
+        self.data.constraints.entry(constraint).or_insert_with(|| {
+            if in_snapshot {
+                undo_log.push(AddConstraint(constraint));
+            }
+            origin
+        });
+    }
+
+    fn add_verify(&mut self, verify: Verify<'tcx>) {
+        // cannot add verifys once regions are resolved
+        debug!("RegionConstraintCollector: add_verify({:?})", verify);
+
+        // skip no-op cases known to be satisfied
+        match verify.bound {
+            VerifyBound::AllBounds(ref bs) if bs.is_empty() => {
+                return;
+            }
+            _ => {}
+        }
+
+        let index = self.data.verifys.len();
+        self.data.verifys.push(verify);
+        if self.in_snapshot() {
+            self.undo_log.push(AddVerify(index));
+        }
+    }
+
+    pub fn add_given(&mut self, sub: Region<'tcx>, sup: ty::RegionVid) {
+        // cannot add givens once regions are resolved
+        if self.data.givens.insert((sub, sup)) {
+            debug!("add_given({:?} <= {:?})", sub, sup);
+
+            if self.in_snapshot() {
+                self.undo_log.push(AddGiven(sub, sup));
+            }
+        }
+    }
+
+    pub fn make_eqregion(
+        &mut self,
+        origin: SubregionOrigin<'tcx>,
+        sub: Region<'tcx>,
+        sup: Region<'tcx>,
+    ) {
+        if sub != sup {
+            // Eventually, it would be nice to add direct support for
+            // equating regions.
+            self.make_subregion(origin.clone(), sub, sup);
+            self.make_subregion(origin, sup, sub);
+
+            if let (ty::ReVar(sub), ty::ReVar(sup)) = (*sub, *sup) {
+                self.unification_table.union(sub, sup);
+            }
+        }
+    }
+
+    pub fn make_subregion(
+        &mut self,
+        origin: SubregionOrigin<'tcx>,
+        sub: Region<'tcx>,
+        sup: Region<'tcx>,
+    ) {
+        // cannot add constraints once regions are resolved
+        debug!(
+            "RegionConstraintCollector: make_subregion({:?}, {:?}) due to {:?}",
+            sub,
+            sup,
+            origin
+        );
+
+        match (sub, sup) {
+            (&ReLateBound(..), _) | (_, &ReLateBound(..)) => {
+                span_bug!(
+                    origin.span(),
+                    "cannot relate bound region: {:?} <= {:?}",
+                    sub,
+                    sup
+                );
+            }
+            (_, &ReStatic) => {
+                // all regions are subregions of static, so we can ignore this
+            }
+            (&ReVar(sub_id), &ReVar(sup_id)) => {
+                self.add_constraint(Constraint::VarSubVar(sub_id, sup_id), origin);
+            }
+            (_, &ReVar(sup_id)) => {
+                self.add_constraint(Constraint::RegSubVar(sub, sup_id), origin);
+            }
+            (&ReVar(sub_id), _) => {
+                self.add_constraint(Constraint::VarSubReg(sub_id, sup), origin);
+            }
+            _ => {
+                self.add_constraint(Constraint::RegSubReg(sub, sup), origin);
+            }
+        }
+    }
+
+    /// See `Verify::VerifyGenericBound`
+    pub fn verify_generic_bound(
+        &mut self,
+        origin: SubregionOrigin<'tcx>,
+        kind: GenericKind<'tcx>,
+        sub: Region<'tcx>,
+        bound: VerifyBound<'tcx>,
+    ) {
+        self.add_verify(Verify {
+            kind,
+            origin,
+            region: sub,
+            bound,
+        });
+    }
+
+    pub fn lub_regions(
+        &mut self,
+        tcx: TyCtxt<'_, '_, 'tcx>,
+        origin: SubregionOrigin<'tcx>,
+        a: Region<'tcx>,
+        b: Region<'tcx>,
+    ) -> Region<'tcx> {
+        // cannot add constraints once regions are resolved
+        debug!("RegionConstraintCollector: lub_regions({:?}, {:?})", a, b);
+        match (a, b) {
+            (r @ &ReStatic, _) | (_, r @ &ReStatic) => {
+                r // nothing lives longer than static
+            }
+
+            _ if a == b => {
+                a // LUB(a,a) = a
+            }
+
+            _ => self.combine_vars(tcx, Lub, a, b, origin.clone()),
+        }
+    }
+
+    pub fn glb_regions(
+        &mut self,
+        tcx: TyCtxt<'_, '_, 'tcx>,
+        origin: SubregionOrigin<'tcx>,
+        a: Region<'tcx>,
+        b: Region<'tcx>,
+    ) -> Region<'tcx> {
+        // cannot add constraints once regions are resolved
+        debug!("RegionConstraintCollector: glb_regions({:?}, {:?})", a, b);
+        match (a, b) {
+            (&ReStatic, r) | (r, &ReStatic) => {
+                r // static lives longer than everything else
+            }
+
+            _ if a == b => {
+                a // GLB(a,a) = a
+            }
+
+            _ => self.combine_vars(tcx, Glb, a, b, origin.clone()),
+        }
+    }
+
+    pub fn opportunistic_resolve_var(
+        &mut self,
+        tcx: TyCtxt<'_, '_, 'tcx>,
+        rid: RegionVid,
+    ) -> ty::Region<'tcx> {
+        let vid = self.unification_table.find_value(rid).min_vid;
+        tcx.mk_region(ty::ReVar(vid))
+    }
+
+    fn combine_map(&mut self, t: CombineMapType) -> &mut CombineMap<'tcx> {
+        match t {
+            Glb => &mut self.glbs,
+            Lub => &mut self.lubs,
+        }
+    }
+
+    fn combine_vars(
+        &mut self,
+        tcx: TyCtxt<'_, '_, 'tcx>,
+        t: CombineMapType,
+        a: Region<'tcx>,
+        b: Region<'tcx>,
+        origin: SubregionOrigin<'tcx>,
+    ) -> Region<'tcx> {
+        let vars = TwoRegions { a, b };
+        if let Some(&c) = self.combine_map(t).get(&vars) {
+            return tcx.mk_region(ReVar(c));
+        }
+        let c = self.new_region_var(MiscVariable(origin.span()));
+        self.combine_map(t).insert(vars, c);
+        if self.in_snapshot() {
+            self.undo_log.push(AddCombination(t, vars));
+        }
+        let new_r = tcx.mk_region(ReVar(c));
+        for &old_r in &[a, b] {
+            match t {
+                Glb => self.make_subregion(origin.clone(), new_r, old_r),
+                Lub => self.make_subregion(origin.clone(), old_r, new_r),
+            }
+        }
+        debug!("combine_vars() c={:?}", c);
+        new_r
+    }
+
+    pub fn vars_created_since_snapshot(&self, mark: &RegionSnapshot) -> Vec<RegionVid> {
+        self.undo_log[mark.length..]
+            .iter()
+            .filter_map(|&elt| match elt {
+                AddVar(vid) => Some(vid),
+                _ => None,
+            })
+            .collect()
+    }
+
+    /// Computes all regions that have been related to `r0` since the
+    /// mark `mark` was made---`r0` itself will be the first
+    /// entry. The `directions` parameter controls what kind of
+    /// relations are considered. For example, one can say that only
+    /// "incoming" edges to `r0` are desired, in which case one will
+    /// get the set of regions `{r|r <= r0}`. This is used when
+    /// checking whether skolemized regions are being improperly
+    /// related to other regions.
+    pub fn tainted(
+        &self,
+        tcx: TyCtxt<'_, '_, 'tcx>,
+        mark: &RegionSnapshot,
+        r0: Region<'tcx>,
+        directions: TaintDirections,
+    ) -> FxHashSet<ty::Region<'tcx>> {
+        debug!(
+            "tainted(mark={:?}, r0={:?}, directions={:?})",
+            mark,
+            r0,
+            directions
+        );
+
+        // `result_set` acts as a worklist: we explore all outgoing
+        // edges and add any new regions we find to result_set.  This
+        // is not a terribly efficient implementation.
+        let mut taint_set = taint::TaintSet::new(directions, r0);
+        taint_set.fixed_point(tcx, &self.undo_log[mark.length..], &self.data.verifys);
+        debug!("tainted: result={:?}", taint_set);
+        return taint_set.into_set();
+    }
+}
+
+impl fmt::Debug for RegionSnapshot {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        write!(
+            f,
+            "RegionSnapshot(length={},skolemization={})",
+            self.length,
+            self.skolemization_count
+        )
+    }
+}
+
+impl<'tcx> fmt::Debug for GenericKind<'tcx> {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        match *self {
+            GenericKind::Param(ref p) => write!(f, "{:?}", p),
+            GenericKind::Projection(ref p) => write!(f, "{:?}", p),
+        }
+    }
+}
+
+impl<'tcx> fmt::Display for GenericKind<'tcx> {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        match *self {
+            GenericKind::Param(ref p) => write!(f, "{}", p),
+            GenericKind::Projection(ref p) => write!(f, "{}", p),
+        }
+    }
+}
+
+impl<'a, 'gcx, 'tcx> GenericKind<'tcx> {
+    pub fn to_ty(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> Ty<'tcx> {
+        match *self {
+            GenericKind::Param(ref p) => p.to_ty(tcx),
+            GenericKind::Projection(ref p) => tcx.mk_projection(p.item_def_id, p.substs),
+        }
+    }
+}
+
+impl<'a, 'gcx, 'tcx> VerifyBound<'tcx> {
+    fn for_each_region(&self, f: &mut FnMut(ty::Region<'tcx>)) {
+        match self {
+            &VerifyBound::AnyRegion(ref rs) | &VerifyBound::AllRegions(ref rs) => for &r in rs {
+                f(r);
+            },
+
+            &VerifyBound::AnyBound(ref bs) | &VerifyBound::AllBounds(ref bs) => for b in bs {
+                b.for_each_region(f);
+            },
+        }
+    }
+
+    pub fn must_hold(&self) -> bool {
+        match self {
+            &VerifyBound::AnyRegion(ref bs) => bs.contains(&&ty::ReStatic),
+            &VerifyBound::AllRegions(ref bs) => bs.is_empty(),
+            &VerifyBound::AnyBound(ref bs) => bs.iter().any(|b| b.must_hold()),
+            &VerifyBound::AllBounds(ref bs) => bs.iter().all(|b| b.must_hold()),
+        }
+    }
+
+    pub fn cannot_hold(&self) -> bool {
+        match self {
+            &VerifyBound::AnyRegion(ref bs) => bs.is_empty(),
+            &VerifyBound::AllRegions(ref bs) => bs.contains(&&ty::ReEmpty),
+            &VerifyBound::AnyBound(ref bs) => bs.iter().all(|b| b.cannot_hold()),
+            &VerifyBound::AllBounds(ref bs) => bs.iter().any(|b| b.cannot_hold()),
+        }
+    }
+
+    pub fn or(self, vb: VerifyBound<'tcx>) -> VerifyBound<'tcx> {
+        if self.must_hold() || vb.cannot_hold() {
+            self
+        } else if self.cannot_hold() || vb.must_hold() {
+            vb
+        } else {
+            VerifyBound::AnyBound(vec![self, vb])
+        }
+    }
+
+    pub fn and(self, vb: VerifyBound<'tcx>) -> VerifyBound<'tcx> {
+        if self.must_hold() && vb.must_hold() {
+            self
+        } else if self.cannot_hold() && vb.cannot_hold() {
+            self
+        } else {
+            VerifyBound::AllBounds(vec![self, vb])
+        }
+    }
+}
+
+impl<'tcx> RegionConstraintData<'tcx> {
+    /// True if this region constraint data contains no constraints.
+    pub fn is_empty(&self) -> bool {
+        let RegionConstraintData {
+            constraints,
+            verifys,
+            givens,
+        } = self;
+        constraints.is_empty() && verifys.is_empty() && givens.is_empty()
+    }
+}
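The `VerifyBound` combinators above (`must_hold`, `cannot_hold`, `or`, `and`) implement a small constant-folding scheme: a bound that trivially holds or trivially fails absorbs its partner instead of allocating a nested `AnyBound`/`AllBounds` node. The following standalone sketch models that logic with a hypothetical simplified `Bound` enum over plain region names (no `tcx`, no interning) — the names `Bound` and the `"'static"`/`"'empty"` strings are illustrative stand-ins, not the compiler's types:

```rust
// Hypothetical simplified model of VerifyBound's short-circuiting combinators.
#[derive(Clone, Debug)]
enum Bound {
    AnyRegion(Vec<&'static str>),  // some region in the set must outlive 'min
    AllRegions(Vec<&'static str>), // all regions in the set must outlive 'min
    AnyBound(Vec<Bound>),          // disjunction of sub-bounds
    AllBounds(Vec<Bound>),         // conjunction of sub-bounds
}

impl Bound {
    // Trivially satisfied: 'static outlives everything; an empty "forall" is vacuous.
    fn must_hold(&self) -> bool {
        match self {
            Bound::AnyRegion(rs) => rs.contains(&"'static"),
            Bound::AllRegions(rs) => rs.is_empty(),
            Bound::AnyBound(bs) => bs.iter().any(|b| b.must_hold()),
            Bound::AllBounds(bs) => bs.iter().all(|b| b.must_hold()),
        }
    }

    // Trivially unsatisfiable: an empty "exists" has no witness; 'empty outlives nothing.
    fn cannot_hold(&self) -> bool {
        match self {
            Bound::AnyRegion(rs) => rs.is_empty(),
            Bound::AllRegions(rs) => rs.contains(&"'empty"),
            Bound::AnyBound(bs) => bs.iter().all(|b| b.cannot_hold()),
            Bound::AllBounds(bs) => bs.iter().any(|b| b.cannot_hold()),
        }
    }

    // Mirrors VerifyBound::or - absorb a trivial operand instead of nesting.
    fn or(self, vb: Bound) -> Bound {
        if self.must_hold() || vb.cannot_hold() {
            self
        } else if self.cannot_hold() || vb.must_hold() {
            vb
        } else {
            Bound::AnyBound(vec![self, vb])
        }
    }
}

fn main() {
    let trivially_true = Bound::AnyRegion(vec!["'static"]);
    let trivially_false = Bound::AnyRegion(vec![]);
    // `or` returns the already-holding side unchanged rather than
    // wrapping both operands in an AnyBound node.
    assert!(trivially_true.clone().or(trivially_false.clone()).must_hold());
    assert!(trivially_false.or(trivially_true).must_hold());
    println!("ok");
}
```

The payoff of the absorption rules is that deeply composed bounds collapse toward a flat, cheap-to-check form before any region graph work happens.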
diff --git a/src/librustc/infer/region_constraints/taint.rs b/src/librustc/infer/region_constraints/taint.rs
new file mode 100644
index 0000000..ee45f7b
--- /dev/null
+++ b/src/librustc/infer/region_constraints/taint.rs
@@ -0,0 +1,96 @@
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use super::*;
+
+#[derive(Debug)]
+pub(super) struct TaintSet<'tcx> {
+    directions: TaintDirections,
+    regions: FxHashSet<ty::Region<'tcx>>
+}
+
+impl<'tcx> TaintSet<'tcx> {
+    pub(super) fn new(directions: TaintDirections,
+                      initial_region: ty::Region<'tcx>)
+                      -> Self {
+        let mut regions = FxHashSet();
+        regions.insert(initial_region);
+        TaintSet { directions, regions }
+    }
+
+    pub(super) fn fixed_point(&mut self,
+                              tcx: TyCtxt<'_, '_, 'tcx>,
+                              undo_log: &[UndoLogEntry<'tcx>],
+                              verifys: &[Verify<'tcx>]) {
+        let mut prev_len = 0;
+        while prev_len < self.len() {
+            debug!("tainted: prev_len = {:?} new_len = {:?}",
+                   prev_len, self.len());
+
+            prev_len = self.len();
+
+            for undo_entry in undo_log {
+                match undo_entry {
+                    &AddConstraint(Constraint::VarSubVar(a, b)) => {
+                        self.add_edge(tcx.mk_region(ReVar(a)),
+                                      tcx.mk_region(ReVar(b)));
+                    }
+                    &AddConstraint(Constraint::RegSubVar(a, b)) => {
+                        self.add_edge(a, tcx.mk_region(ReVar(b)));
+                    }
+                    &AddConstraint(Constraint::VarSubReg(a, b)) => {
+                        self.add_edge(tcx.mk_region(ReVar(a)), b);
+                    }
+                    &AddConstraint(Constraint::RegSubReg(a, b)) => {
+                        self.add_edge(a, b);
+                    }
+                    &AddGiven(a, b) => {
+                        self.add_edge(a, tcx.mk_region(ReVar(b)));
+                    }
+                    &AddVerify(i) => {
+                        verifys[i].bound.for_each_region(&mut |b| {
+                            self.add_edge(verifys[i].region, b);
+                        });
+                    }
+                    &Purged |
+                    &AddCombination(..) |
+                    &AddVar(..) |
+                    &OpenSnapshot |
+                    &CommitedSnapshot => {}
+                }
+            }
+        }
+    }
+
+    pub(super) fn into_set(self) -> FxHashSet<ty::Region<'tcx>> {
+        self.regions
+    }
+
+    fn len(&self) -> usize {
+        self.regions.len()
+    }
+
+    fn add_edge(&mut self,
+                source: ty::Region<'tcx>,
+                target: ty::Region<'tcx>) {
+        if self.directions.incoming {
+            if self.regions.contains(&target) {
+                self.regions.insert(source);
+            }
+        }
+
+        if self.directions.outgoing {
+            if self.regions.contains(&source) {
+                self.regions.insert(target);
+            }
+        }
+    }
+}
+
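The `fixed_point` loop above is a classic worklist-free fixed-point computation: it rescans the whole undo log until one full pass adds no new region to the set. A minimal standalone sketch of the "outgoing" direction, using plain string region names and `(source, target)` edge pairs in place of undo-log entries (all names here are illustrative, not the compiler's API):

```rust
use std::collections::HashSet;

// Sketch of TaintSet's fixed point for the "outgoing" direction: starting
// from r0, repeatedly add the target of every edge whose source is already
// tainted, until the set stops growing.
fn outgoing_taint(
    r0: &'static str,
    edges: &[(&'static str, &'static str)],
) -> HashSet<&'static str> {
    let mut tainted = HashSet::new();
    tainted.insert(r0);
    let mut prev_len = 0;
    while prev_len < tainted.len() {
        prev_len = tainted.len();
        // One pass over all recorded constraints, mirroring the scan over
        // the undo log in `fixed_point`. Not terribly efficient, as the
        // comment in `tainted` itself notes, but guaranteed to terminate:
        // the set only grows and is bounded by the number of regions.
        for &(src, dst) in edges {
            if tainted.contains(src) {
                tainted.insert(dst);
            }
        }
    }
    tainted
}

fn main() {
    // 'a <= 'b, 'b <= 'c, plus an unrelated 'x <= 'y edge.
    let edges = [("'a", "'b"), ("'b", "'c"), ("'x", "'y")];
    let t = outgoing_taint("'a", &edges);
    assert!(t.contains("'a") && t.contains("'b") && t.contains("'c"));
    assert!(!t.contains("'x") && !t.contains("'y"));
    println!("tainted = {:?}", t);
}
```

The real implementation handles both directions at once (an `incoming` edge taints its source when the target is tainted) and derives its edges from `AddConstraint`, `AddGiven`, and `AddVerify` entries rather than a plain edge list.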
diff --git a/src/librustc/infer/region_inference/mod.rs b/src/librustc/infer/region_inference/mod.rs
deleted file mode 100644
index f5327fa..0000000
--- a/src/librustc/infer/region_inference/mod.rs
+++ /dev/null
@@ -1,1648 +0,0 @@
-// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! See README.md
-
-pub use self::Constraint::*;
-pub use self::UndoLogEntry::*;
-pub use self::CombineMapType::*;
-pub use self::RegionResolutionError::*;
-pub use self::VarValue::*;
-
-use super::{RegionVariableOrigin, SubregionOrigin, MiscVariable};
-use super::unify_key;
-
-use rustc_data_structures::fx::{FxHashMap, FxHashSet};
-use rustc_data_structures::graph::{self, Direction, NodeIndex, OUTGOING};
-use rustc_data_structures::unify::{self, UnificationTable};
-use middle::free_region::RegionRelations;
-use ty::{self, Ty, TyCtxt};
-use ty::{Region, RegionVid};
-use ty::{ReEmpty, ReStatic, ReFree, ReEarlyBound, ReErased};
-use ty::{ReLateBound, ReScope, ReVar, ReSkolemized, BrFresh};
-
-use std::collections::BTreeMap;
-use std::cell::{Cell, RefCell};
-use std::fmt;
-use std::mem;
-use std::u32;
-
-mod graphviz;
-
-/// A constraint that influences the inference process.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, PartialOrd, Ord)]
-pub enum Constraint<'tcx> {
-    /// One region variable is subregion of another
-    ConstrainVarSubVar(RegionVid, RegionVid),
-
-    /// Concrete region is subregion of region variable
-    ConstrainRegSubVar(Region<'tcx>, RegionVid),
-
-    /// Region variable is subregion of concrete region. This does not
-    /// directly affect inference, but instead is checked after
-    /// inference is complete.
-    ConstrainVarSubReg(RegionVid, Region<'tcx>),
-
-    /// A constraint where neither side is a variable. This does not
-    /// directly affect inference, but instead is checked after
-    /// inference is complete.
-    ConstrainRegSubReg(Region<'tcx>, Region<'tcx>),
-}
-
-/// VerifyGenericBound(T, _, R, RS): The parameter type `T` (or
-/// associated type) must outlive the region `R`. `T` is known to
-/// outlive `RS`. Therefore verify that `R <= RS[i]` for some
-/// `i`. Inference variables may be involved (but this verification
-/// step doesn't influence inference).
-#[derive(Debug)]
-pub struct Verify<'tcx> {
-    kind: GenericKind<'tcx>,
-    origin: SubregionOrigin<'tcx>,
-    region: Region<'tcx>,
-    bound: VerifyBound<'tcx>,
-}
-
-#[derive(Copy, Clone, PartialEq, Eq)]
-pub enum GenericKind<'tcx> {
-    Param(ty::ParamTy),
-    Projection(ty::ProjectionTy<'tcx>),
-}
-
-/// When we introduce a verification step, we wish to test that a
-/// particular region (let's call it `'min`) meets some bound.
-/// The bound is described the by the following grammar:
-#[derive(Debug)]
-pub enum VerifyBound<'tcx> {
-    /// B = exists {R} --> some 'r in {R} must outlive 'min
-    ///
-    /// Put another way, the subject value is known to outlive all
-    /// regions in {R}, so if any of those outlives 'min, then the
-    /// bound is met.
-    AnyRegion(Vec<Region<'tcx>>),
-
-    /// B = forall {R} --> all 'r in {R} must outlive 'min
-    ///
-    /// Put another way, the subject value is known to outlive some
-    /// region in {R}, so if all of those outlives 'min, then the bound
-    /// is met.
-    AllRegions(Vec<Region<'tcx>>),
-
-    /// B = exists {B} --> 'min must meet some bound b in {B}
-    AnyBound(Vec<VerifyBound<'tcx>>),
-
-    /// B = forall {B} --> 'min must meet all bounds b in {B}
-    AllBounds(Vec<VerifyBound<'tcx>>),
-}
-
-#[derive(Copy, Clone, PartialEq, Eq, Hash)]
-pub struct TwoRegions<'tcx> {
-    a: Region<'tcx>,
-    b: Region<'tcx>,
-}
-
-#[derive(Copy, Clone, PartialEq)]
-pub enum UndoLogEntry<'tcx> {
-    /// Pushed when we start a snapshot.
-    OpenSnapshot,
-
-    /// Replaces an `OpenSnapshot` when a snapshot is committed, but
-    /// that snapshot is not the root. If the root snapshot is
-    /// unrolled, all nested snapshots must be committed.
-    CommitedSnapshot,
-
-    /// We added `RegionVid`
-    AddVar(RegionVid),
-
-    /// We added the given `constraint`
-    AddConstraint(Constraint<'tcx>),
-
-    /// We added the given `verify`
-    AddVerify(usize),
-
-    /// We added the given `given`
-    AddGiven(Region<'tcx>, ty::RegionVid),
-
-    /// We added a GLB/LUB "combination variable"
-    AddCombination(CombineMapType, TwoRegions<'tcx>),
-
-    /// During skolemization, we sometimes purge entries from the undo
-    /// log in a kind of minisnapshot (unlike other snapshots, this
-    /// purging actually takes place *on success*). In that case, we
-    /// replace the corresponding entry with `Noop` so as to avoid the
-    /// need to do a bunch of swapping. (We can't use `swap_remove` as
-    /// the order of the vector is important.)
-    Purged,
-}
-
-#[derive(Copy, Clone, PartialEq)]
-pub enum CombineMapType {
-    Lub,
-    Glb,
-}
-
-#[derive(Clone, Debug)]
-pub enum RegionResolutionError<'tcx> {
-    /// `ConcreteFailure(o, a, b)`:
-    ///
-    /// `o` requires that `a <= b`, but this does not hold
-    ConcreteFailure(SubregionOrigin<'tcx>, Region<'tcx>, Region<'tcx>),
-
-    /// `GenericBoundFailure(p, s, a)
-    ///
-    /// The parameter/associated-type `p` must be known to outlive the lifetime
-    /// `a` (but none of the known bounds are sufficient).
-    GenericBoundFailure(SubregionOrigin<'tcx>, GenericKind<'tcx>, Region<'tcx>),
-
-    /// `SubSupConflict(v, sub_origin, sub_r, sup_origin, sup_r)`:
-    ///
-    /// Could not infer a value for `v` because `sub_r <= v` (due to
-    /// `sub_origin`) but `v <= sup_r` (due to `sup_origin`) and
-    /// `sub_r <= sup_r` does not hold.
-    SubSupConflict(RegionVariableOrigin,
-                   SubregionOrigin<'tcx>,
-                   Region<'tcx>,
-                   SubregionOrigin<'tcx>,
-                   Region<'tcx>),
-}
-
-#[derive(Clone, Debug)]
-pub enum ProcessedErrorOrigin<'tcx> {
-    ConcreteFailure(SubregionOrigin<'tcx>, Region<'tcx>, Region<'tcx>),
-    VariableFailure(RegionVariableOrigin),
-}
-
-pub type CombineMap<'tcx> = FxHashMap<TwoRegions<'tcx>, RegionVid>;
-
-pub struct RegionVarBindings<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
-    tcx: TyCtxt<'a, 'gcx, 'tcx>,
-    var_origins: RefCell<Vec<RegionVariableOrigin>>,
-
-    /// Constraints of the form `A <= B` introduced by the region
-    /// checker.  Here at least one of `A` and `B` must be a region
-    /// variable.
-    ///
-    /// Using `BTreeMap` because the order in which we iterate over
-    /// these constraints can affect the way we build the region graph,
-    /// which in turn affects the way that region errors are reported,
-    /// leading to small variations in error output across runs and
-    /// platforms.
-    constraints: RefCell<BTreeMap<Constraint<'tcx>, SubregionOrigin<'tcx>>>,
-
-    /// A "verify" is something that we need to verify after inference is
-    /// done, but which does not directly affect inference in any way.
-    ///
-    /// An example is a `A <= B` where neither `A` nor `B` are
-    /// inference variables.
-    verifys: RefCell<Vec<Verify<'tcx>>>,
-
-    /// A "given" is a relationship that is known to hold. In particular,
-    /// we often know from closure fn signatures that a particular free
-    /// region must be a subregion of a region variable:
-    ///
-    ///    foo.iter().filter(<'a> |x: &'a &'b T| ...)
-    ///
-    /// In situations like this, `'b` is in fact a region variable
-    /// introduced by the call to `iter()`, and `'a` is a bound region
-    /// on the closure (as indicated by the `<'a>` prefix). If we are
-    /// naive, we wind up inferring that `'b` must be `'static`,
-    /// because we require that it be greater than `'a` and we do not
-    /// know what `'a` is precisely.
-    ///
-    /// This hashmap is used to avoid that naive scenario. Basically we
-    /// record the fact that `'a <= 'b` is implied by the fn signature,
-    /// and then ignore the constraint when solving equations. This is
-    /// a bit of a hack but seems to work.
-    givens: RefCell<FxHashSet<(Region<'tcx>, ty::RegionVid)>>,
-
-    lubs: RefCell<CombineMap<'tcx>>,
-    glbs: RefCell<CombineMap<'tcx>>,
-    skolemization_count: Cell<u32>,
-    bound_count: Cell<u32>,
-
-    /// The undo log records actions that might later be undone.
-    ///
-    /// Note: when the undo_log is empty, we are not actively
-    /// snapshotting. When the `start_snapshot()` method is called, we
-    /// push an OpenSnapshot entry onto the list to indicate that we
-    /// are now actively snapshotting. The reason for this is that
-    /// otherwise we end up adding entries for things like the lower
-    /// bound on a variable and so forth, which can never be rolled
-    /// back.
-    undo_log: RefCell<Vec<UndoLogEntry<'tcx>>>,
-
-    unification_table: RefCell<UnificationTable<ty::RegionVid>>,
-
-    /// This contains the results of inference.  It begins as an empty
-    /// option and only acquires a value after inference is complete.
-    values: RefCell<Option<Vec<VarValue<'tcx>>>>,
-}
-
-pub struct RegionSnapshot {
-    length: usize,
-    region_snapshot: unify::Snapshot<ty::RegionVid>,
-    skolemization_count: u32,
-}
-
-/// When working with skolemized regions, we often wish to find all of
-/// the regions that are either reachable from a skolemized region, or
-/// which can reach a skolemized region, or both. We call such regions
-/// *tained* regions.  This struct allows you to decide what set of
-/// tainted regions you want.
-#[derive(Debug)]
-pub struct TaintDirections {
-    incoming: bool,
-    outgoing: bool,
-}
-
-impl TaintDirections {
-    pub fn incoming() -> Self {
-        TaintDirections { incoming: true, outgoing: false }
-    }
-
-    pub fn outgoing() -> Self {
-        TaintDirections { incoming: false, outgoing: true }
-    }
-
-    pub fn both() -> Self {
-        TaintDirections { incoming: true, outgoing: true }
-    }
-}
-
-struct TaintSet<'tcx> {
-    directions: TaintDirections,
-    regions: FxHashSet<ty::Region<'tcx>>
-}
-
-impl<'a, 'gcx, 'tcx> TaintSet<'tcx> {
-    fn new(directions: TaintDirections,
-           initial_region: ty::Region<'tcx>)
-           -> Self {
-        let mut regions = FxHashSet();
-        regions.insert(initial_region);
-        TaintSet { directions: directions, regions: regions }
-    }
-
-    fn fixed_point(&mut self,
-                   tcx: TyCtxt<'a, 'gcx, 'tcx>,
-                   undo_log: &[UndoLogEntry<'tcx>],
-                   verifys: &[Verify<'tcx>]) {
-        let mut prev_len = 0;
-        while prev_len < self.len() {
-            debug!("tainted: prev_len = {:?} new_len = {:?}",
-                   prev_len, self.len());
-
-            prev_len = self.len();
-
-            for undo_entry in undo_log {
-                match undo_entry {
-                    &AddConstraint(ConstrainVarSubVar(a, b)) => {
-                        self.add_edge(tcx.mk_region(ReVar(a)),
-                                      tcx.mk_region(ReVar(b)));
-                    }
-                    &AddConstraint(ConstrainRegSubVar(a, b)) => {
-                        self.add_edge(a, tcx.mk_region(ReVar(b)));
-                    }
-                    &AddConstraint(ConstrainVarSubReg(a, b)) => {
-                        self.add_edge(tcx.mk_region(ReVar(a)), b);
-                    }
-                    &AddConstraint(ConstrainRegSubReg(a, b)) => {
-                        self.add_edge(a, b);
-                    }
-                    &AddGiven(a, b) => {
-                        self.add_edge(a, tcx.mk_region(ReVar(b)));
-                    }
-                    &AddVerify(i) => {
-                        verifys[i].bound.for_each_region(&mut |b| {
-                            self.add_edge(verifys[i].region, b);
-                        });
-                    }
-                    &Purged |
-                    &AddCombination(..) |
-                    &AddVar(..) |
-                    &OpenSnapshot |
-                    &CommitedSnapshot => {}
-                }
-            }
-        }
-    }
-
-    fn into_set(self) -> FxHashSet<ty::Region<'tcx>> {
-        self.regions
-    }
-
-    fn len(&self) -> usize {
-        self.regions.len()
-    }
-
-    fn add_edge(&mut self,
-                source: ty::Region<'tcx>,
-                target: ty::Region<'tcx>) {
-        if self.directions.incoming {
-            if self.regions.contains(&target) {
-                self.regions.insert(source);
-            }
-        }
-
-        if self.directions.outgoing {
-            if self.regions.contains(&source) {
-                self.regions.insert(target);
-            }
-        }
-    }
-}
-
-impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
-    pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>) -> RegionVarBindings<'a, 'gcx, 'tcx> {
-        RegionVarBindings {
-            tcx,
-            var_origins: RefCell::new(Vec::new()),
-            values: RefCell::new(None),
-            constraints: RefCell::new(BTreeMap::new()),
-            verifys: RefCell::new(Vec::new()),
-            givens: RefCell::new(FxHashSet()),
-            lubs: RefCell::new(FxHashMap()),
-            glbs: RefCell::new(FxHashMap()),
-            skolemization_count: Cell::new(0),
-            bound_count: Cell::new(0),
-            undo_log: RefCell::new(Vec::new()),
-            unification_table: RefCell::new(UnificationTable::new()),
-        }
-    }
-
-    fn in_snapshot(&self) -> bool {
-        !self.undo_log.borrow().is_empty()
-    }
-
-    pub fn start_snapshot(&self) -> RegionSnapshot {
-        let length = self.undo_log.borrow().len();
-        debug!("RegionVarBindings: start_snapshot({})", length);
-        self.undo_log.borrow_mut().push(OpenSnapshot);
-        RegionSnapshot {
-            length,
-            region_snapshot: self.unification_table.borrow_mut().snapshot(),
-            skolemization_count: self.skolemization_count.get(),
-        }
-    }
-
-    pub fn commit(&self, snapshot: RegionSnapshot) {
-        debug!("RegionVarBindings: commit({})", snapshot.length);
-        assert!(self.undo_log.borrow().len() > snapshot.length);
-        assert!((*self.undo_log.borrow())[snapshot.length] == OpenSnapshot);
-        assert!(self.skolemization_count.get() == snapshot.skolemization_count,
-                "failed to pop skolemized regions: {} now vs {} at start",
-                self.skolemization_count.get(),
-                snapshot.skolemization_count);
-
-        let mut undo_log = self.undo_log.borrow_mut();
-        if snapshot.length == 0 {
-            undo_log.truncate(0);
-        } else {
-            (*undo_log)[snapshot.length] = CommitedSnapshot;
-        }
-        self.unification_table.borrow_mut().commit(snapshot.region_snapshot);
-    }
-
-    pub fn rollback_to(&self, snapshot: RegionSnapshot) {
-        debug!("RegionVarBindings: rollback_to({:?})", snapshot);
-        let mut undo_log = self.undo_log.borrow_mut();
-        assert!(undo_log.len() > snapshot.length);
-        assert!((*undo_log)[snapshot.length] == OpenSnapshot);
-        while undo_log.len() > snapshot.length + 1 {
-            self.rollback_undo_entry(undo_log.pop().unwrap());
-        }
-        let c = undo_log.pop().unwrap();
-        assert!(c == OpenSnapshot);
-        self.skolemization_count.set(snapshot.skolemization_count);
-        self.unification_table.borrow_mut()
-            .rollback_to(snapshot.region_snapshot);
-    }
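The `start_snapshot`/`commit`/`rollback_to` trio above is a classic undo-log discipline: a snapshot records the log length and pushes an `OpenSnapshot` marker, each reversible action appends an entry, rollback replays entries in reverse, and an inner commit merely neutralizes its marker so an outer rollback can still undo everything. A minimal standalone sketch of that pattern (all types and names here are hypothetical illustrations, not compiler APIs):

```rust
// Sketch of the undo-log snapshot discipline; hypothetical types.
#[derive(Debug, PartialEq)]
enum Undo {
    Open,        // start-of-snapshot marker (cf. OpenSnapshot)
    Committed,   // neutralized inner snapshot (cf. CommitedSnapshot)
    Pushed(i32), // a reversible action: a value was pushed
}

struct Log {
    values: Vec<i32>,
    undo: Vec<Undo>,
}

impl Log {
    fn start_snapshot(&mut self) -> usize {
        let mark = self.undo.len();
        self.undo.push(Undo::Open);
        mark
    }

    fn push(&mut self, v: i32) {
        self.values.push(v);
        self.undo.push(Undo::Pushed(v));
    }

    fn rollback_to(&mut self, mark: usize) {
        assert_eq!(self.undo[mark], Undo::Open);
        // replay entries in reverse down to (and including) the marker
        while self.undo.len() > mark {
            match self.undo.pop().unwrap() {
                Undo::Pushed(_) => { self.values.pop(); }
                Undo::Open | Undo::Committed => {}
            }
        }
    }

    fn commit(&mut self, mark: usize) {
        // outermost commit drops the whole log; an inner commit only
        // neutralizes its Open marker, as `commit` does above
        if mark == 0 {
            self.undo.truncate(0);
        } else {
            self.undo[mark] = Undo::Committed;
        }
    }
}

fn main() {
    let mut log = Log { values: vec![], undo: vec![] };
    let outer = log.start_snapshot();
    log.push(1);
    let inner = log.start_snapshot();
    log.push(2);
    log.rollback_to(inner); // undoes only the inner work
    assert_eq!(log.values, vec![1]);
    log.commit(outer); // drops the log entirely
    assert!(log.undo.is_empty());
}
```

The key invariant, enforced by the asserts in `commit` and `rollback_to` above, is strict stack discipline: snapshots must be committed or rolled back in LIFO order.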
-
-    pub fn rollback_undo_entry(&self, undo_entry: UndoLogEntry<'tcx>) {
-        match undo_entry {
-            OpenSnapshot => {
-                panic!("Failure to observe stack discipline");
-            }
-            Purged | CommitedSnapshot => {
-                // nothing to do here
-            }
-            AddVar(vid) => {
-                let mut var_origins = self.var_origins.borrow_mut();
-                var_origins.pop().unwrap();
-                assert_eq!(var_origins.len(), vid.index as usize);
-            }
-            AddConstraint(ref constraint) => {
-                self.constraints.borrow_mut().remove(constraint);
-            }
-            AddVerify(index) => {
-                self.verifys.borrow_mut().pop();
-                assert_eq!(self.verifys.borrow().len(), index);
-            }
-            AddGiven(sub, sup) => {
-                self.givens.borrow_mut().remove(&(sub, sup));
-            }
-            AddCombination(Glb, ref regions) => {
-                self.glbs.borrow_mut().remove(regions);
-            }
-            AddCombination(Lub, ref regions) => {
-                self.lubs.borrow_mut().remove(regions);
-            }
-        }
-    }
-
-    pub fn num_vars(&self) -> u32 {
-        let len = self.var_origins.borrow().len();
-        // enforce no overflow
-        assert!(len as u32 as usize == len);
-        len as u32
-    }
-
-    pub fn new_region_var(&self, origin: RegionVariableOrigin) -> RegionVid {
-        let vid = RegionVid { index: self.num_vars() };
-        self.var_origins.borrow_mut().push(origin.clone());
-
-        let u_vid = self.unification_table.borrow_mut().new_key(
-            unify_key::RegionVidKey { min_vid: vid });
-        assert_eq!(vid, u_vid);
-        if self.in_snapshot() {
-            self.undo_log.borrow_mut().push(AddVar(vid));
-        }
-        debug!("created new region variable {:?} with origin {:?}",
-               vid,
-               origin);
-        return vid;
-    }
-
-    pub fn var_origin(&self, vid: RegionVid) -> RegionVariableOrigin {
-        self.var_origins.borrow()[vid.index as usize].clone()
-    }
-
-    /// Creates a new skolemized region. Skolemized regions are fresh
-    /// regions used when performing higher-ranked computations. They
-    /// must be used in a very particular way and are never supposed
-    /// to "escape" out into error messages or the code at large.
-    ///
-    /// The idea is to always create a snapshot. Skolemized regions
-    /// can be created in the context of this snapshot, but before the
-    /// snapshot is committed or rolled back, they must be popped
-    /// (using `pop_skolemized_regions`), so that their numbers can be
-    /// recycled. Normally you don't have to think about this: you use
-    /// the APIs in `higher_ranked/mod.rs`, such as
-    /// `skolemize_late_bound_regions` and `plug_leaks`, which will
-    /// guide you on this path (ensure that the `SkolemizationMap` is
-    /// consumed and you are good).  There are also somewhat extensive
-    /// comments in `higher_ranked/README.md`.
-    ///
-    /// The `snapshot` argument to this function is not really used;
-    /// it's just there to make it explicit which snapshot bounds the
-    /// skolemized region that results. It should always be the top-most snapshot.
-    pub fn push_skolemized(&self, br: ty::BoundRegion, snapshot: &RegionSnapshot)
-                           -> Region<'tcx> {
-        assert!(self.in_snapshot());
-        assert!(self.undo_log.borrow()[snapshot.length] == OpenSnapshot);
-
-        let sc = self.skolemization_count.get();
-        self.skolemization_count.set(sc + 1);
-        self.tcx.mk_region(ReSkolemized(ty::SkolemizedRegionVid { index: sc }, br))
-    }
-
-    /// Removes all the edges to/from the skolemized regions that are
-    /// in `skols`. This is used after a higher-ranked operation
-    /// completes to remove all trace of the skolemized regions
-    /// created in that time.
-    pub fn pop_skolemized(&self,
-                          skols: &FxHashSet<ty::Region<'tcx>>,
-                          snapshot: &RegionSnapshot) {
-        debug!("pop_skolemized_regions(skols={:?})", skols);
-
-        assert!(self.in_snapshot());
-        assert!(self.undo_log.borrow()[snapshot.length] == OpenSnapshot);
-        assert!(self.skolemization_count.get() as usize >= skols.len(),
-                "popping more skolemized variables than actually exist, \
-                 sc now = {}, skols.len = {}",
-                self.skolemization_count.get(),
-                skols.len());
-
-        let last_to_pop = self.skolemization_count.get();
-        let first_to_pop = last_to_pop - (skols.len() as u32);
-
-        assert!(first_to_pop >= snapshot.skolemization_count,
-                "popping more regions than snapshot contains, \
-                 sc now = {}, sc then = {}, skols.len = {}",
-                self.skolemization_count.get(),
-                snapshot.skolemization_count,
-                skols.len());
-        debug_assert! {
-            skols.iter()
-                 .all(|&k| match *k {
-                     ty::ReSkolemized(index, _) =>
-                         index.index >= first_to_pop &&
-                         index.index < last_to_pop,
-                     _ =>
-                         false
-                 }),
-            "invalid skolemization keys or keys out of range ({}..{}): {:?}",
-            snapshot.skolemization_count,
-            self.skolemization_count.get(),
-            skols
-        }
-
-        let mut undo_log = self.undo_log.borrow_mut();
-
-        let constraints_to_kill: Vec<usize> =
-            undo_log.iter()
-                    .enumerate()
-                    .rev()
-                    .filter(|&(_, undo_entry)| kill_constraint(skols, undo_entry))
-                    .map(|(index, _)| index)
-                    .collect();
-
-        for index in constraints_to_kill {
-            let undo_entry = mem::replace(&mut undo_log[index], Purged);
-            self.rollback_undo_entry(undo_entry);
-        }
-
-        self.skolemization_count.set(snapshot.skolemization_count);
-        return;
-
-        fn kill_constraint<'tcx>(skols: &FxHashSet<ty::Region<'tcx>>,
-                                 undo_entry: &UndoLogEntry<'tcx>)
-                                 -> bool {
-            match undo_entry {
-                &AddConstraint(ConstrainVarSubVar(..)) =>
-                    false,
-                &AddConstraint(ConstrainRegSubVar(a, _)) =>
-                    skols.contains(&a),
-                &AddConstraint(ConstrainVarSubReg(_, b)) =>
-                    skols.contains(&b),
-                &AddConstraint(ConstrainRegSubReg(a, b)) =>
-                    skols.contains(&a) || skols.contains(&b),
-                &AddGiven(..) =>
-                    false,
-                &AddVerify(_) =>
-                    false,
-                &AddCombination(_, ref two_regions) =>
-                    skols.contains(&two_regions.a) ||
-                    skols.contains(&two_regions.b),
-                &AddVar(..) |
-                &OpenSnapshot |
-                &Purged |
-                &CommitedSnapshot =>
-                    false,
-            }
-        }
-
-    }
-
-    pub fn new_bound(&self, debruijn: ty::DebruijnIndex) -> Region<'tcx> {
-        // Creates a fresh bound variable for use in GLB computations.
-        // See discussion of GLB computation in the large comment at
-        // the top of this file for more details.
-        //
-        // This computation is potentially wrong in the face of
-        // rollover.  It's conceivable, if unlikely, that one might
-        // wind up with accidental capture for nested functions in
-        // that case, if the outer function had bound regions created
-        // a very long time before and the inner function somehow
-        // wound up rolling over such that supposedly fresh
-        // identifiers were in fact shadowed. For now, we just assert
-        // that there is no rollover -- eventually we should try to be
-        // robust against this possibility, either by checking the set
-        // of bound identifiers that appear in a given expression,
-        // ensuring that we generate one that is distinct, or by
-        // changing the representation of bound regions in a fn
-        // declaration.
-
-        let sc = self.bound_count.get();
-        self.bound_count.set(sc + 1);
-
-        if sc >= self.bound_count.get() {
-            bug!("rollover in RegionInference new_bound()");
-        }
-
-        self.tcx.mk_region(ReLateBound(debruijn, BrFresh(sc)))
-    }
-
-    fn values_are_none(&self) -> bool {
-        self.values.borrow().is_none()
-    }
-
-    fn add_constraint(&self, constraint: Constraint<'tcx>, origin: SubregionOrigin<'tcx>) {
-        // cannot add constraints once regions are resolved
-        assert!(self.values_are_none());
-
-        debug!("RegionVarBindings: add_constraint({:?})", constraint);
-
-        // never overwrite an existing (constraint, origin) - only insert one if it isn't
-        // present in the map yet. This prevents origins from outside the snapshot being
-        // replaced with "less informative" origins e.g. during calls to `can_eq`
-        self.constraints.borrow_mut().entry(constraint).or_insert_with(|| {
-            if self.in_snapshot() {
-                self.undo_log.borrow_mut().push(AddConstraint(constraint));
-            }
-            origin
-        });
-    }
-
-    fn add_verify(&self, verify: Verify<'tcx>) {
-        // cannot add verifys once regions are resolved
-        assert!(self.values_are_none());
-
-        debug!("RegionVarBindings: add_verify({:?})", verify);
-
-        // skip no-op cases known to be satisfied
-        match verify.bound {
-            VerifyBound::AllBounds(ref bs) if bs.len() == 0 => { return; }
-            _ => { }
-        }
-
-        let mut verifys = self.verifys.borrow_mut();
-        let index = verifys.len();
-        verifys.push(verify);
-        if self.in_snapshot() {
-            self.undo_log.borrow_mut().push(AddVerify(index));
-        }
-    }
-
-    pub fn add_given(&self, sub: Region<'tcx>, sup: ty::RegionVid) {
-        // cannot add givens once regions are resolved
-        assert!(self.values_are_none());
-
-        let mut givens = self.givens.borrow_mut();
-        if givens.insert((sub, sup)) {
-            debug!("add_given({:?} <= {:?})", sub, sup);
-
-            self.undo_log.borrow_mut().push(AddGiven(sub, sup));
-        }
-    }
-
-    pub fn make_eqregion(&self,
-                         origin: SubregionOrigin<'tcx>,
-                         sub: Region<'tcx>,
-                         sup: Region<'tcx>) {
-        if sub != sup {
-            // Eventually, it would be nice to add direct support for
-            // equating regions.
-            self.make_subregion(origin.clone(), sub, sup);
-            self.make_subregion(origin, sup, sub);
-
-            if let (ty::ReVar(sub), ty::ReVar(sup)) = (*sub, *sup) {
-                self.unification_table.borrow_mut().union(sub, sup);
-            }
-        }
-    }
-
-    pub fn make_subregion(&self,
-                          origin: SubregionOrigin<'tcx>,
-                          sub: Region<'tcx>,
-                          sup: Region<'tcx>) {
-        // cannot add constraints once regions are resolved
-        assert!(self.values_are_none());
-
-        debug!("RegionVarBindings: make_subregion({:?}, {:?}) due to {:?}",
-               sub,
-               sup,
-               origin);
-
-        match (sub, sup) {
-            (&ReLateBound(..), _) |
-            (_, &ReLateBound(..)) => {
-                span_bug!(origin.span(),
-                          "cannot relate bound region: {:?} <= {:?}",
-                          sub,
-                          sup);
-            }
-            (_, &ReStatic) => {
-                // all regions are subregions of static, so we can ignore this
-            }
-            (&ReVar(sub_id), &ReVar(sup_id)) => {
-                self.add_constraint(ConstrainVarSubVar(sub_id, sup_id), origin);
-            }
-            (_, &ReVar(sup_id)) => {
-                self.add_constraint(ConstrainRegSubVar(sub, sup_id), origin);
-            }
-            (&ReVar(sub_id), _) => {
-                self.add_constraint(ConstrainVarSubReg(sub_id, sup), origin);
-            }
-            _ => {
-                self.add_constraint(ConstrainRegSubReg(sub, sup), origin);
-            }
-        }
-    }
-
-    /// See `Verify::VerifyGenericBound`
-    pub fn verify_generic_bound(&self,
-                                origin: SubregionOrigin<'tcx>,
-                                kind: GenericKind<'tcx>,
-                                sub: Region<'tcx>,
-                                bound: VerifyBound<'tcx>) {
-        self.add_verify(Verify {
-            kind,
-            origin,
-            region: sub,
-            bound,
-        });
-    }
-
-    pub fn lub_regions(&self,
-                       origin: SubregionOrigin<'tcx>,
-                       a: Region<'tcx>,
-                       b: Region<'tcx>)
-                       -> Region<'tcx> {
-        // cannot add constraints once regions are resolved
-        assert!(self.values_are_none());
-
-        debug!("RegionVarBindings: lub_regions({:?}, {:?})", a, b);
-        match (a, b) {
-            (r @ &ReStatic, _) | (_, r @ &ReStatic) => {
-                r // nothing lives longer than static
-            }
-
-            _ if a == b => {
-                a // LUB(a,a) = a
-            }
-
-            _ => {
-                self.combine_vars(Lub, a, b, origin.clone(), |this, old_r, new_r| {
-                    this.make_subregion(origin.clone(), old_r, new_r)
-                })
-            }
-        }
-    }
-
-    pub fn glb_regions(&self,
-                       origin: SubregionOrigin<'tcx>,
-                       a: Region<'tcx>,
-                       b: Region<'tcx>)
-                       -> Region<'tcx> {
-        // cannot add constraints once regions are resolved
-        assert!(self.values_are_none());
-
-        debug!("RegionVarBindings: glb_regions({:?}, {:?})", a, b);
-        match (a, b) {
-            (&ReStatic, r) | (r, &ReStatic) => {
-                r // static lives longer than everything else
-            }
-
-            _ if a == b => {
-                a // GLB(a,a) = a
-            }
-
-            _ => {
-                self.combine_vars(Glb, a, b, origin.clone(), |this, old_r, new_r| {
-                    this.make_subregion(origin.clone(), new_r, old_r)
-                })
-            }
-        }
-    }
-
-    pub fn resolve_var(&self, rid: RegionVid) -> ty::Region<'tcx> {
-        match *self.values.borrow() {
-            None => {
-                span_bug!((*self.var_origins.borrow())[rid.index as usize].span(),
-                          "attempt to resolve region variable before values have \
-                           been computed!")
-            }
-            Some(ref values) => {
-                let r = lookup(self.tcx, values, rid);
-                debug!("resolve_var({:?}) = {:?}", rid, r);
-                r
-            }
-        }
-    }
-
-    pub fn opportunistic_resolve_var(&self, rid: RegionVid) -> ty::Region<'tcx> {
-        let vid = self.unification_table.borrow_mut().find_value(rid).min_vid;
-        self.tcx.mk_region(ty::ReVar(vid))
-    }
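`opportunistic_resolve_var` consults the unification table that `make_eqregion` feeds via `union`, and reads back the smallest variable id in the equivalence class (`min_vid`). A toy union-find that tracks the minimum id per class captures the idea (hypothetical, much simpler than `unify_key::RegionVidKey`):

```rust
// Tiny union-find tracking the minimum id per class, mirroring how
// `make_eqregion` unions two `ReVar`s and `opportunistic_resolve_var`
// reads back the class's `min_vid`. Hypothetical, simplified types.
struct MinUnionFind {
    parent: Vec<usize>,
    min_vid: Vec<usize>,
}

impl MinUnionFind {
    fn new(n: usize) -> Self {
        MinUnionFind { parent: (0..n).collect(), min_vid: (0..n).collect() }
    }

    fn find(&mut self, v: usize) -> usize {
        if self.parent[v] != v {
            let root = self.find(self.parent[v]);
            self.parent[v] = root; // path compression
        }
        self.parent[v]
    }

    fn union(&mut self, a: usize, b: usize) {
        let (ra, rb) = (self.find(a), self.find(b));
        if ra != rb {
            let m = self.min_vid[ra].min(self.min_vid[rb]);
            self.parent[rb] = ra;
            self.min_vid[ra] = m;
        }
    }

    // analogue of `opportunistic_resolve_var`
    fn resolve(&mut self, v: usize) -> usize {
        let root = self.find(v);
        self.min_vid[root]
    }
}

fn main() {
    let mut uf = MinUnionFind::new(4);
    uf.union(2, 3); // like make_eqregion on '2 and '3
    uf.union(1, 2);
    assert_eq!(uf.resolve(3), 1); // class {1, 2, 3} resolves to its min vid
    assert_eq!(uf.resolve(0), 0); // untouched variable resolves to itself
}
```

Resolving to the minimum id gives a canonical representative for the class without waiting for full region resolution, which is exactly why it is "opportunistic".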
-
-    fn combine_map(&self, t: CombineMapType) -> &RefCell<CombineMap<'tcx>> {
-        match t {
-            Glb => &self.glbs,
-            Lub => &self.lubs,
-        }
-    }
-
-    pub fn combine_vars<F>(&self,
-                           t: CombineMapType,
-                           a: Region<'tcx>,
-                           b: Region<'tcx>,
-                           origin: SubregionOrigin<'tcx>,
-                           mut relate: F)
-                           -> Region<'tcx>
-        where F: FnMut(&RegionVarBindings<'a, 'gcx, 'tcx>, Region<'tcx>, Region<'tcx>)
-    {
-        let vars = TwoRegions { a, b };
-        if let Some(&c) = self.combine_map(t).borrow().get(&vars) {
-            return self.tcx.mk_region(ReVar(c));
-        }
-        let c = self.new_region_var(MiscVariable(origin.span()));
-        self.combine_map(t).borrow_mut().insert(vars, c);
-        if self.in_snapshot() {
-            self.undo_log.borrow_mut().push(AddCombination(t, vars));
-        }
-        relate(self, a, self.tcx.mk_region(ReVar(c)));
-        relate(self, b, self.tcx.mk_region(ReVar(c)));
-        debug!("combine_vars() c={:?}", c);
-        self.tcx.mk_region(ReVar(c))
-    }
-
-    pub fn vars_created_since_snapshot(&self, mark: &RegionSnapshot) -> Vec<RegionVid> {
-        self.undo_log.borrow()[mark.length..]
-            .iter()
-            .filter_map(|&elt| {
-                match elt {
-                    AddVar(vid) => Some(vid),
-                    _ => None,
-                }
-            })
-            .collect()
-    }
-
-    /// Computes all regions that have been related to `r0` since the
-    /// mark `mark` was made---`r0` itself will be the first
-    /// entry. The `directions` parameter controls what kind of
-    /// relations are considered. For example, one can say that only
-    /// "incoming" edges to `r0` are desired, in which case one will
-    /// get the set of regions `{r|r <= r0}`. This is used when
-    /// checking whether skolemized regions are being improperly
-    /// related to other regions.
-    pub fn tainted(&self,
-                   mark: &RegionSnapshot,
-                   r0: Region<'tcx>,
-                   directions: TaintDirections)
-                   -> FxHashSet<ty::Region<'tcx>> {
-        debug!("tainted(mark={:?}, r0={:?}, directions={:?})",
-               mark, r0, directions);
-
-        // `taint_set` acts as a worklist: we explore all outgoing
-        // edges and add any new regions we find to `taint_set`. This
-        // is not a terribly efficient implementation.
-        let mut taint_set = TaintSet::new(directions, r0);
-        taint_set.fixed_point(self.tcx,
-                              &self.undo_log.borrow()[mark.length..],
-                              &self.verifys.borrow());
-        debug!("tainted: result={:?}", taint_set.regions);
-        return taint_set.into_set();
-    }
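`tainted` together with `TaintSet::fixed_point` amounts to a worklist closure: keep sweeping the recorded edges, absorbing any region connected in the requested direction, until the set stops growing. A simplified sketch, under the assumption that constraints are plain `(source, target)` pairs meaning `source <= target` (all names hypothetical):

```rust
use std::collections::HashSet;

// Sketch of the TaintSet fixed point: starting from `r0`, repeatedly
// absorb regions connected by an edge in the requested direction
// until the set stops growing. An edge (src, dst) means "src <= dst".
fn tainted(edges: &[(u32, u32)], r0: u32, incoming: bool, outgoing: bool) -> HashSet<u32> {
    let mut set = HashSet::new();
    set.insert(r0);
    let mut prev_len = 0;
    while set.len() > prev_len {
        prev_len = set.len();
        for &(src, dst) in edges {
            // mirrors `add_edge`: "incoming" pulls in the sources of
            // edges pointing at the set; "outgoing" pushes out to the
            // targets of edges leaving it
            if incoming && set.contains(&dst) {
                set.insert(src);
            }
            if outgoing && set.contains(&src) {
                set.insert(dst);
            }
        }
    }
    set
}

fn main() {
    // edges: 1 <= 2, 2 <= 3, 4 <= 1
    let edges = [(1, 2), (2, 3), (4, 1)];
    // incoming direction from 2 yields {r | r <= 2} = {2, 1, 4}
    let expected: HashSet<u32> = vec![2, 1, 4].into_iter().collect();
    assert_eq!(tainted(&edges, 2, true, false), expected);
    // outgoing direction from 2 yields {r | 2 <= r} = {2, 3}
    let expected: HashSet<u32> = vec![2, 3].into_iter().collect();
    assert_eq!(tainted(&edges, 2, false, true), expected);
}
```

As the comment above notes, this is quadratic in the worst case, but the undo-log slice it runs over is bounded by the snapshot, so it stays small in practice.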
-
-    /// This function performs the actual region resolution.  It must be
-    /// called after all constraints have been added.  It performs a
-    /// fixed-point iteration to find region values which satisfy all
-    /// constraints, assuming such values can be found; if they cannot,
-    /// errors are reported.
-    pub fn resolve_regions(&self,
-                           region_rels: &RegionRelations<'a, 'gcx, 'tcx>)
-                           -> Vec<RegionResolutionError<'tcx>> {
-        debug!("RegionVarBindings: resolve_regions()");
-        let mut errors = vec![];
-        let v = self.infer_variable_values(region_rels, &mut errors);
-        *self.values.borrow_mut() = Some(v);
-        errors
-    }
-
-    fn lub_concrete_regions(&self,
-                            region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                            a: Region<'tcx>,
-                            b: Region<'tcx>)
-                            -> Region<'tcx> {
-        match (a, b) {
-            (&ReLateBound(..), _) |
-            (_, &ReLateBound(..)) |
-            (&ReErased, _) |
-            (_, &ReErased) => {
-                bug!("cannot relate region: LUB({:?}, {:?})", a, b);
-            }
-
-            (r @ &ReStatic, _) | (_, r @ &ReStatic) => {
-                r // nothing lives longer than static
-            }
-
-            (&ReEmpty, r) | (r, &ReEmpty) => {
-                r // everything lives longer than empty
-            }
-
-            (&ReVar(v_id), _) | (_, &ReVar(v_id)) => {
-                span_bug!((*self.var_origins.borrow())[v_id.index as usize].span(),
-                          "lub_concrete_regions invoked with non-concrete \
-                           regions: {:?}, {:?}",
-                          a,
-                          b);
-            }
-
-            (&ReEarlyBound(_), &ReScope(s_id)) |
-            (&ReScope(s_id), &ReEarlyBound(_)) |
-            (&ReFree(_), &ReScope(s_id)) |
-            (&ReScope(s_id), &ReFree(_)) => {
-                // A "free" region can be interpreted as "some region
-                // at least as big as fr.scope".  So, we can
-                // reasonably compare free regions and scopes:
-                let fr_scope = match (a, b) {
-                    (&ReEarlyBound(ref br), _) | (_, &ReEarlyBound(ref br)) => {
-                        region_rels.region_scope_tree.early_free_scope(self.tcx, br)
-                    }
-                    (&ReFree(ref fr), _) | (_, &ReFree(ref fr)) => {
-                        region_rels.region_scope_tree.free_scope(self.tcx, fr)
-                    }
-                    _ => bug!()
-                };
-                let r_id = region_rels.region_scope_tree.nearest_common_ancestor(fr_scope, s_id);
-                if r_id == fr_scope {
-                    // if the free region's scope `fr.scope` is bigger than
-                    // the scope region `s_id`, then the LUB is the free
-                    // region itself:
-                    match (a, b) {
-                        (_, &ReScope(_)) => return a,
-                        (&ReScope(_), _) => return b,
-                        _ => bug!()
-                    }
-                }
-
-                // otherwise, we don't know what the free region is,
-                // so we must conservatively say the LUB is static:
-                self.tcx.types.re_static
-            }
-
-            (&ReScope(a_id), &ReScope(b_id)) => {
-                // The region corresponding to an outer block is a
-                // subtype of the region corresponding to an inner
-                // block.
-                let lub = region_rels.region_scope_tree.nearest_common_ancestor(a_id, b_id);
-                self.tcx.mk_region(ReScope(lub))
-            }
-
-            (&ReEarlyBound(_), &ReEarlyBound(_)) |
-            (&ReFree(_), &ReEarlyBound(_)) |
-            (&ReEarlyBound(_), &ReFree(_)) |
-            (&ReFree(_), &ReFree(_)) => {
-                region_rels.lub_free_regions(a, b)
-            }
-
-            // For these types, we cannot define any additional
-            // relationship:
-            (&ReSkolemized(..), _) |
-            (_, &ReSkolemized(..)) => {
-                if a == b {
-                    a
-                } else {
-                    self.tcx.types.re_static
-                }
-            }
-        }
-    }
-}
-
-// ______________________________________________________________________
-
-#[derive(Copy, Clone, Debug)]
-pub enum VarValue<'tcx> {
-    Value(Region<'tcx>),
-    ErrorValue,
-}
-
-struct RegionAndOrigin<'tcx> {
-    region: Region<'tcx>,
-    origin: SubregionOrigin<'tcx>,
-}
-
-type RegionGraph<'tcx> = graph::Graph<(), Constraint<'tcx>>;
-
-impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
-    fn infer_variable_values(&self,
-                             region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                             errors: &mut Vec<RegionResolutionError<'tcx>>)
-                             -> Vec<VarValue<'tcx>> {
-        let mut var_data = self.construct_var_data();
-
-        // Dorky hack to cause `dump_constraints` to only get called
-        // if debug mode is enabled:
-        debug!("----() End constraint listing (context={:?}) {:?}---",
-               region_rels.context,
-               self.dump_constraints(region_rels));
-        graphviz::maybe_print_constraints_for(self, region_rels);
-
-        let graph = self.construct_graph();
-        self.expand_givens(&graph);
-        self.expansion(region_rels, &mut var_data);
-        self.collect_errors(region_rels, &mut var_data, errors);
-        self.collect_var_errors(region_rels, &var_data, &graph, errors);
-        var_data
-    }
-
-    fn construct_var_data(&self) -> Vec<VarValue<'tcx>> {
-        (0..self.num_vars() as usize)
-            .map(|_| Value(self.tcx.types.re_empty))
-            .collect()
-    }
-
-    fn dump_constraints(&self, free_regions: &RegionRelations<'a, 'gcx, 'tcx>) {
-        debug!("----() Start constraint listing (context={:?}) ()----",
-               free_regions.context);
-        for (idx, (constraint, _)) in self.constraints.borrow().iter().enumerate() {
-            debug!("Constraint {} => {:?}", idx, constraint);
-        }
-    }
-
-    fn expand_givens(&self, graph: &RegionGraph) {
-        // Givens are a kind of horrible hack to account for
-        // constraints like 'c <= '0 that are known to hold due to
-        // closure signatures (see the comment above on the `givens`
-        // field). They should go away. But until they do, the role
-        // of this fn is to account for the transitive nature:
-        //
-        //     Given 'c <= '0
-        //     and   '0 <= '1
-        //     then  'c <= '1
-
-        let mut givens = self.givens.borrow_mut();
-        let seeds: Vec<_> = givens.iter().cloned().collect();
-        for (r, vid) in seeds {
-            let seed_index = NodeIndex(vid.index as usize);
-            for succ_index in graph.depth_traverse(seed_index, OUTGOING) {
-                let succ_index = succ_index.0 as u32;
-                if succ_index < self.num_vars() {
-                    let succ_vid = RegionVid { index: succ_index };
-                    givens.insert((r, succ_vid));
-                }
-            }
-        }
-    }
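The transitive step described in the comment can be sketched as one depth-first traversal per seed given, recording `'c <= succ` for every variable reachable from the given's variable (illustrative types only; `succs[v]` stands in for the constraint graph's outgoing edges):

```rust
use std::collections::HashSet;

// Sketch of `expand_givens`: for each given `'c <= 'v`, walk every
// variable reachable from 'v through the constraint graph and record
// `'c <= succ` as a given too. Hypothetical, simplified types.
fn expand_givens(succs: &[Vec<usize>], givens: &mut HashSet<(char, usize)>) {
    let seeds: Vec<_> = givens.iter().cloned().collect();
    for (r, vid) in seeds {
        // depth-first traversal from the seed variable
        let mut stack = vec![vid];
        let mut seen = HashSet::new();
        while let Some(v) = stack.pop() {
            if seen.insert(v) {
                givens.insert((r, v));
                stack.extend(&succs[v]);
            }
        }
    }
}

fn main() {
    // constraint graph: '0 <= '1 <= '2
    let succs = vec![vec![1], vec![2], vec![]];
    let mut givens: HashSet<(char, usize)> = HashSet::new();
    givens.insert(('c', 0)); // given: 'c <= '0
    expand_givens(&succs, &mut givens);
    // transitivity yields 'c <= '1 and 'c <= '2 as well
    assert!(givens.contains(&('c', 1)));
    assert!(givens.contains(&('c', 2)));
}
```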
-
-    fn expansion(&self,
-                 region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                 var_values: &mut [VarValue<'tcx>]) {
-        self.iterate_until_fixed_point("Expansion", |constraint, origin| {
-            debug!("expansion: constraint={:?} origin={:?}",
-                   constraint, origin);
-            match *constraint {
-                ConstrainRegSubVar(a_region, b_vid) => {
-                    let b_data = &mut var_values[b_vid.index as usize];
-                    self.expand_node(region_rels, a_region, b_vid, b_data)
-                }
-                ConstrainVarSubVar(a_vid, b_vid) => {
-                    match var_values[a_vid.index as usize] {
-                        ErrorValue => false,
-                        Value(a_region) => {
-                            let b_node = &mut var_values[b_vid.index as usize];
-                            self.expand_node(region_rels, a_region, b_vid, b_node)
-                        }
-                    }
-                }
-                ConstrainRegSubReg(..) |
-                ConstrainVarSubReg(..) => {
-                    // These constraints are checked after expansion
-                    // is done, in `collect_errors`.
-                    false
-                }
-            }
-        })
-    }
-
-    fn expand_node(&self,
-                   region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                   a_region: Region<'tcx>,
-                   b_vid: RegionVid,
-                   b_data: &mut VarValue<'tcx>)
-                   -> bool {
-        debug!("expand_node({:?}, {:?} == {:?})",
-               a_region,
-               b_vid,
-               b_data);
-
-        // Check if this relationship is implied by a given.
-        match *a_region {
-            ty::ReEarlyBound(_) |
-            ty::ReFree(_) => {
-                if self.givens.borrow().contains(&(a_region, b_vid)) {
-                    debug!("given");
-                    return false;
-                }
-            }
-            _ => {}
-        }
-
-        match *b_data {
-            Value(cur_region) => {
-                let lub = self.lub_concrete_regions(region_rels, a_region, cur_region);
-                if lub == cur_region {
-                    return false;
-                }
-
-                debug!("Expanding value of {:?} from {:?} to {:?}",
-                       b_vid,
-                       cur_region,
-                       lub);
-
-                *b_data = Value(lub);
-                return true;
-            }
-
-            ErrorValue => {
-                return false;
-            }
-        }
-    }
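Together, `expansion` and `expand_node` are a monotone fixed-point computation: every variable starts at the bottom of the region lattice (`ReEmpty`) and only ever grows via LUB, so iteration must terminate. A toy sketch in which a total order on `u32` stands in for the region lattice and `max` plays the role of `lub_concrete_regions` (all names hypothetical):

```rust
// Sketch of the expansion phase: raise each variable to the LUB of
// itself and its lower bounds until nothing changes. A total order on
// u32 stands in for the region lattice; 0 is bottom (cf. ReEmpty).
#[derive(Clone, Copy)]
enum Constraint {
    RegSubVar(u32, usize),   // concrete region <= variable
    VarSubVar(usize, usize), // variable <= variable
}

fn expansion(constraints: &[Constraint], values: &mut [u32]) {
    loop {
        let mut changed = false;
        for &c in constraints {
            let (lower, vid) = match c {
                Constraint::RegSubVar(r, v) => (r, v),
                Constraint::VarSubVar(a, v) => (values[a], v),
            };
            // `max` plays the role of `lub_concrete_regions` here
            let lub = values[vid].max(lower);
            if lub != values[vid] {
                values[vid] = lub; // cf. expand_node updating b_data
                changed = true;
            }
        }
        if !changed {
            break; // fixed point, as in iterate_until_fixed_point
        }
    }
}

fn main() {
    use Constraint::*;
    // 3 <= '0, '0 <= '1, '1 <= '2; all variables start at bottom (0)
    let cs = [RegSubVar(3, 0), VarSubVar(0, 1), VarSubVar(1, 2)];
    let mut values = [0u32; 3];
    expansion(&cs, &mut values);
    assert_eq!(values, [3, 3, 3]); // the bound propagated transitively
}
```

Note that, exactly as in the match arms above, upper-bound constraints (`ConstrainVarSubReg`, `ConstrainRegSubReg`) play no part in expansion; they are only checked afterwards in `collect_errors`.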
-
-    /// After expansion is complete, go and check upper bounds (i.e.,
-    /// cases where the region cannot grow larger than a fixed point)
-    /// and check that they are satisfied.
-    fn collect_errors(&self,
-                      region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                      var_data: &mut Vec<VarValue<'tcx>>,
-                      errors: &mut Vec<RegionResolutionError<'tcx>>) {
-        let constraints = self.constraints.borrow();
-        for (constraint, origin) in constraints.iter() {
-            debug!("collect_errors: constraint={:?} origin={:?}",
-                   constraint, origin);
-            match *constraint {
-                ConstrainRegSubVar(..) |
-                ConstrainVarSubVar(..) => {
-                    // Expansion will ensure that these constraints hold. Ignore.
-                }
-
-                ConstrainRegSubReg(sub, sup) => {
-                    if region_rels.is_subregion_of(sub, sup) {
-                        continue;
-                    }
-
-                    debug!("collect_errors: region error at {:?}: \
-                            cannot verify that {:?} <= {:?}",
-                           origin,
-                           sub,
-                           sup);
-
-                    errors.push(ConcreteFailure((*origin).clone(), sub, sup));
-                }
-
-                ConstrainVarSubReg(a_vid, b_region) => {
-                    let a_data = &mut var_data[a_vid.index as usize];
-                    debug!("contraction: {:?} == {:?}, {:?}",
-                           a_vid,
-                           a_data,
-                           b_region);
-
-                    let a_region = match *a_data {
-                        ErrorValue => continue,
-                        Value(a_region) => a_region,
-                    };
-
-                    // Do not report these errors immediately:
-                    // instead, set the variable value to error and
-                    // collect them later.
-                    if !region_rels.is_subregion_of(a_region, b_region) {
-                        debug!("collect_errors: region error at {:?}: \
-                                cannot verify that {:?}={:?} <= {:?}",
-                               origin,
-                               a_vid,
-                               a_region,
-                               b_region);
-                        *a_data = ErrorValue;
-                    }
-                }
-            }
-        }
-
-        for verify in self.verifys.borrow().iter() {
-            debug!("collect_errors: verify={:?}", verify);
-            let sub = normalize(self.tcx, var_data, verify.region);
-
-            // This was an inference variable which didn't get
-            // constrained, therefore it can be assume to hold.
-            if let ty::ReEmpty = *sub {
-                continue;
-            }
-
-            if verify.bound.is_met(region_rels, var_data, sub) {
-                continue;
-            }
-
-            debug!("collect_errors: region error at {:?}: \
-                    cannot verify that {:?} <= {:?}",
-                   verify.origin,
-                   verify.region,
-                   verify.bound);
-
-            errors.push(GenericBoundFailure(verify.origin.clone(),
-                                            verify.kind.clone(),
-                                            sub));
-        }
-    }
-
-    /// Go over the variables that were declared to be error variables
-    /// and create a `RegionResolutionError` for each of them.
-    fn collect_var_errors(&self,
-                          region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                          var_data: &[VarValue<'tcx>],
-                          graph: &RegionGraph<'tcx>,
-                          errors: &mut Vec<RegionResolutionError<'tcx>>) {
-        debug!("collect_var_errors");
-
-        // This is the best way that I have found to suppress
-        // duplicate and related errors. Basically we keep a set of
-        // flags for every node. Whenever an error occurs, we will
-        // walk some portion of the graph looking to find pairs of
-        // conflicting regions to report to the user. As we walk, we
-        // trip the flags from false to true, and if we find that
-        // we've already reported an error involving any particular
-        // node we just stop and don't report the current error.  The
-        // idea is to report errors that derive from independent
-        // regions of the graph, but not those that derive from
-        // overlapping locations.
-        let mut dup_vec = vec![u32::MAX; self.num_vars() as usize];
-
-        for idx in 0..self.num_vars() as usize {
-            match var_data[idx] {
-                Value(_) => {
-                    /* Inference successful */
-                }
-                ErrorValue => {
-                    /* Inference impossible, this value contains
-                       inconsistent constraints.
-
-                       I think that in this case we should report an
-                       error now---unlike the case above, we can't
-                       wait to see whether the user needs the result
-                       of this variable.  The reason is that the mere
-                       existence of this variable implies that the
-                       region graph is inconsistent, whether or not it
-                       is used.
-
-                       For example, we may have created a region
-                       variable that is the GLB of two other regions
-                       which do not have a GLB.  Even if that variable
-                       is not used, it implies that those two regions
-                       *should* have a GLB.
-
-                       At least I think this is true. It may be that
-                       the mere existence of a conflict in a region variable
-                       that is not used is not a problem, so if this rule
-                       starts to create problems we'll have to revisit
-                       this portion of the code and think hard about it. =) */
-
-                    let node_vid = RegionVid { index: idx as u32 };
-                    self.collect_error_for_expanding_node(region_rels,
-                                                          graph,
-                                                          &mut dup_vec,
-                                                          node_vid,
-                                                          errors);
-                }
-            }
-        }
-    }
-
-    fn construct_graph(&self) -> RegionGraph<'tcx> {
-        let num_vars = self.num_vars();
-
-        let constraints = self.constraints.borrow();
-
-        let mut graph = graph::Graph::new();
-
-        for _ in 0..num_vars {
-            graph.add_node(());
-        }
-
-        // Issue #30438: two distinct dummy nodes, one for incoming
-        // edges (dummy_source) and another for outgoing edges
-        // (dummy_sink). In `dummy -> a -> b -> dummy`, using one
-        // dummy node leads one to think (erroneously) there exists a
-        // path from `b` to `a`. Two dummy nodes sidesteps the issue.
-        let dummy_source = graph.add_node(());
-        let dummy_sink = graph.add_node(());
-
-        for (constraint, _) in constraints.iter() {
-            match *constraint {
-                ConstrainVarSubVar(a_id, b_id) => {
-                    graph.add_edge(NodeIndex(a_id.index as usize),
-                                   NodeIndex(b_id.index as usize),
-                                   *constraint);
-                }
-                ConstrainRegSubVar(_, b_id) => {
-                    graph.add_edge(dummy_source, NodeIndex(b_id.index as usize), *constraint);
-                }
-                ConstrainVarSubReg(a_id, _) => {
-                    graph.add_edge(NodeIndex(a_id.index as usize), dummy_sink, *constraint);
-                }
-                ConstrainRegSubReg(..) => {
-                    // this would be an edge from `dummy_source` to
-                    // `dummy_sink`; just ignore it.
-                }
-            }
-        }
-
-        return graph;
-    }
-
-    fn collect_error_for_expanding_node(&self,
-                                        region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-                                        graph: &RegionGraph<'tcx>,
-                                        dup_vec: &mut [u32],
-                                        node_idx: RegionVid,
-                                        errors: &mut Vec<RegionResolutionError<'tcx>>) {
-        // Errors in expanding nodes result from a lower-bound that is
-        // not contained by an upper-bound.
-        let (mut lower_bounds, lower_dup) = self.collect_concrete_regions(graph,
-                                                                          node_idx,
-                                                                          graph::INCOMING,
-                                                                          dup_vec);
-        let (mut upper_bounds, upper_dup) = self.collect_concrete_regions(graph,
-                                                                          node_idx,
-                                                                          graph::OUTGOING,
-                                                                          dup_vec);
-
-        if lower_dup || upper_dup {
-            return;
-        }
-
-        // We place free regions first because we are special casing
-        // SubSupConflict(ReFree, ReFree) when reporting error, and so
-        // the user will more likely get a specific suggestion.
-        fn region_order_key(x: &RegionAndOrigin) -> u8 {
-            match *x.region {
-                ReEarlyBound(_) => 0,
-                ReFree(_) => 1,
-                _ => 2
-            }
-        }
-        lower_bounds.sort_by_key(region_order_key);
-        upper_bounds.sort_by_key(region_order_key);
-
-        for lower_bound in &lower_bounds {
-            for upper_bound in &upper_bounds {
-                if !region_rels.is_subregion_of(lower_bound.region, upper_bound.region) {
-                    let origin = (*self.var_origins.borrow())[node_idx.index as usize].clone();
-                    debug!("region inference error at {:?} for {:?}: SubSupConflict sub: {:?} \
-                            sup: {:?}",
-                           origin,
-                           node_idx,
-                           lower_bound.region,
-                           upper_bound.region);
-                    errors.push(SubSupConflict(origin,
-                                               lower_bound.origin.clone(),
-                                               lower_bound.region,
-                                               upper_bound.origin.clone(),
-                                               upper_bound.region));
-                    return;
-                }
-            }
-        }
-
-        span_bug!((*self.var_origins.borrow())[node_idx.index as usize].span(),
-                  "collect_error_for_expanding_node() could not find \
-                   error for var {:?}, lower_bounds={:?}, \
-                   upper_bounds={:?}",
-                  node_idx,
-                  lower_bounds,
-                  upper_bounds);
-    }
-
-    fn collect_concrete_regions(&self,
-                                graph: &RegionGraph<'tcx>,
-                                orig_node_idx: RegionVid,
-                                dir: Direction,
-                                dup_vec: &mut [u32])
-                                -> (Vec<RegionAndOrigin<'tcx>>, bool) {
-        struct WalkState<'tcx> {
-            set: FxHashSet<RegionVid>,
-            stack: Vec<RegionVid>,
-            result: Vec<RegionAndOrigin<'tcx>>,
-            dup_found: bool,
-        }
-        let mut state = WalkState {
-            set: FxHashSet(),
-            stack: vec![orig_node_idx],
-            result: Vec::new(),
-            dup_found: false,
-        };
-        state.set.insert(orig_node_idx);
-
-        // to start off the process, walk the source node in the
-        // direction specified
-        process_edges(self, &mut state, graph, orig_node_idx, dir);
-
-        while !state.stack.is_empty() {
-            let node_idx = state.stack.pop().unwrap();
-
-            // check whether we've visited this node on some previous walk
-            if dup_vec[node_idx.index as usize] == u32::MAX {
-                dup_vec[node_idx.index as usize] = orig_node_idx.index;
-            } else if dup_vec[node_idx.index as usize] != orig_node_idx.index {
-                state.dup_found = true;
-            }
-
-            debug!("collect_concrete_regions(orig_node_idx={:?}, node_idx={:?})",
-                   orig_node_idx,
-                   node_idx);
-
-            process_edges(self, &mut state, graph, node_idx, dir);
-        }
-
-        let WalkState {result, dup_found, ..} = state;
-        return (result, dup_found);
-
-        fn process_edges<'a, 'gcx, 'tcx>(this: &RegionVarBindings<'a, 'gcx, 'tcx>,
-                                         state: &mut WalkState<'tcx>,
-                                         graph: &RegionGraph<'tcx>,
-                                         source_vid: RegionVid,
-                                         dir: Direction) {
-            debug!("process_edges(source_vid={:?}, dir={:?})", source_vid, dir);
-
-            let source_node_index = NodeIndex(source_vid.index as usize);
-            for (_, edge) in graph.adjacent_edges(source_node_index, dir) {
-                match edge.data {
-                    ConstrainVarSubVar(from_vid, to_vid) => {
-                        let opp_vid = if from_vid == source_vid {
-                            to_vid
-                        } else {
-                            from_vid
-                        };
-                        if state.set.insert(opp_vid) {
-                            state.stack.push(opp_vid);
-                        }
-                    }
-
-                    ConstrainRegSubVar(region, _) |
-                    ConstrainVarSubReg(_, region) => {
-                        state.result.push(RegionAndOrigin {
-                            region,
-                            origin: this.constraints.borrow().get(&edge.data).unwrap().clone(),
-                        });
-                    }
-
-                    ConstrainRegSubReg(..) => {
-                        panic!("cannot reach reg-sub-reg edge in region inference \
-                                post-processing")
-                    }
-                }
-            }
-        }
-    }
-
-    fn iterate_until_fixed_point<F>(&self, tag: &str, mut body: F)
-        where F: FnMut(&Constraint<'tcx>, &SubregionOrigin<'tcx>) -> bool
-    {
-        let mut iteration = 0;
-        let mut changed = true;
-        while changed {
-            changed = false;
-            iteration += 1;
-            debug!("---- {} Iteration {}{}", "#", tag, iteration);
-            for (constraint, origin) in self.constraints.borrow().iter() {
-                let edge_changed = body(constraint, origin);
-                if edge_changed {
-                    debug!("Updated due to constraint {:?}", constraint);
-                    changed = true;
-                }
-            }
-        }
-        debug!("---- {} Complete after {} iteration(s)", tag, iteration);
-    }
-
-}
-
-fn normalize<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
-                             values: &Vec<VarValue<'tcx>>,
-                             r: ty::Region<'tcx>)
-                             -> ty::Region<'tcx> {
-    match *r {
-        ty::ReVar(rid) => lookup(tcx, values, rid),
-        _ => r,
-    }
-}
-
-fn lookup<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
-                          values: &Vec<VarValue<'tcx>>,
-                          rid: ty::RegionVid)
-                          -> ty::Region<'tcx> {
-    match values[rid.index as usize] {
-        Value(r) => r,
-        ErrorValue => tcx.types.re_static, // Previously reported error.
-    }
-}
-
-impl<'tcx> fmt::Debug for RegionAndOrigin<'tcx> {
-    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        write!(f, "RegionAndOrigin({:?},{:?})", self.region, self.origin)
-    }
-}
-
-impl fmt::Debug for RegionSnapshot {
-    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        write!(f, "RegionSnapshot(length={},skolemization={})",
-               self.length, self.skolemization_count)
-    }
-}
-
-impl<'tcx> fmt::Debug for GenericKind<'tcx> {
-    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        match *self {
-            GenericKind::Param(ref p) => write!(f, "{:?}", p),
-            GenericKind::Projection(ref p) => write!(f, "{:?}", p),
-        }
-    }
-}
-
-impl<'tcx> fmt::Display for GenericKind<'tcx> {
-    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        match *self {
-            GenericKind::Param(ref p) => write!(f, "{}", p),
-            GenericKind::Projection(ref p) => write!(f, "{}", p),
-        }
-    }
-}
-
-impl<'a, 'gcx, 'tcx> GenericKind<'tcx> {
-    pub fn to_ty(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> Ty<'tcx> {
-        match *self {
-            GenericKind::Param(ref p) => p.to_ty(tcx),
-            GenericKind::Projection(ref p) => tcx.mk_projection(p.item_def_id, p.substs),
-        }
-    }
-}
-
-impl<'a, 'gcx, 'tcx> VerifyBound<'tcx> {
-    fn for_each_region(&self, f: &mut FnMut(ty::Region<'tcx>)) {
-        match self {
-            &VerifyBound::AnyRegion(ref rs) |
-            &VerifyBound::AllRegions(ref rs) => for &r in rs {
-                f(r);
-            },
-
-            &VerifyBound::AnyBound(ref bs) |
-            &VerifyBound::AllBounds(ref bs) => for b in bs {
-                b.for_each_region(f);
-            },
-        }
-    }
-
-    pub fn must_hold(&self) -> bool {
-        match self {
-            &VerifyBound::AnyRegion(ref bs) => bs.contains(&&ty::ReStatic),
-            &VerifyBound::AllRegions(ref bs) => bs.is_empty(),
-            &VerifyBound::AnyBound(ref bs) => bs.iter().any(|b| b.must_hold()),
-            &VerifyBound::AllBounds(ref bs) => bs.iter().all(|b| b.must_hold()),
-        }
-    }
-
-    pub fn cannot_hold(&self) -> bool {
-        match self {
-            &VerifyBound::AnyRegion(ref bs) => bs.is_empty(),
-            &VerifyBound::AllRegions(ref bs) => bs.contains(&&ty::ReEmpty),
-            &VerifyBound::AnyBound(ref bs) => bs.iter().all(|b| b.cannot_hold()),
-            &VerifyBound::AllBounds(ref bs) => bs.iter().any(|b| b.cannot_hold()),
-        }
-    }
-
-    pub fn or(self, vb: VerifyBound<'tcx>) -> VerifyBound<'tcx> {
-        if self.must_hold() || vb.cannot_hold() {
-            self
-        } else if self.cannot_hold() || vb.must_hold() {
-            vb
-        } else {
-            VerifyBound::AnyBound(vec![self, vb])
-        }
-    }
-
-    pub fn and(self, vb: VerifyBound<'tcx>) -> VerifyBound<'tcx> {
-        if self.must_hold() && vb.must_hold() {
-            self
-        } else if self.cannot_hold() && vb.cannot_hold() {
-            self
-        } else {
-            VerifyBound::AllBounds(vec![self, vb])
-        }
-    }
-
-    fn is_met(&self,
-              region_rels: &RegionRelations<'a, 'gcx, 'tcx>,
-              var_values: &Vec<VarValue<'tcx>>,
-              min: ty::Region<'tcx>)
-              -> bool {
-        let tcx = region_rels.tcx;
-        match self {
-            &VerifyBound::AnyRegion(ref rs) =>
-                rs.iter()
-                  .map(|&r| normalize(tcx, var_values, r))
-                  .any(|r| region_rels.is_subregion_of(min, r)),
-
-            &VerifyBound::AllRegions(ref rs) =>
-                rs.iter()
-                  .map(|&r| normalize(tcx, var_values, r))
-                  .all(|r| region_rels.is_subregion_of(min, r)),
-
-            &VerifyBound::AnyBound(ref bs) =>
-                bs.iter()
-                  .any(|b| b.is_met(region_rels, var_values, min)),
-
-            &VerifyBound::AllBounds(ref bs) =>
-                bs.iter()
-                  .all(|b| b.is_met(region_rels, var_values, min)),
-        }
-    }
-}
diff --git a/src/librustc/infer/resolve.rs b/src/librustc/infer/resolve.rs
index 10899e4..5e70c0c 100644
--- a/src/librustc/infer/resolve.rs
+++ b/src/librustc/infer/resolve.rs
@@ -74,8 +74,11 @@
 
     fn fold_region(&mut self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
         match *r {
-            ty::ReVar(rid) => self.infcx.region_vars.opportunistic_resolve_var(rid),
-            _ => r,
+            ty::ReVar(rid) =>
+                self.infcx.borrow_region_constraints()
+                          .opportunistic_resolve_var(self.tcx(), rid),
+            _ =>
+                r,
         }
     }
 }
@@ -185,7 +188,11 @@
 
     fn fold_region(&mut self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
         match *r {
-            ty::ReVar(rid) => self.infcx.region_vars.resolve_var(rid),
+            ty::ReVar(rid) => self.infcx.lexical_region_resolutions
+                                        .borrow()
+                                        .as_ref()
+                                        .expect("region resolution not performed")
+                                        .resolve_var(rid),
             _ => r,
         }
     }
diff --git a/src/librustc/infer/sub.rs b/src/librustc/infer/sub.rs
index 4056999..f891f69 100644
--- a/src/librustc/infer/sub.rs
+++ b/src/librustc/infer/sub.rs
@@ -137,7 +137,8 @@
         // from the "cause" field, we could perhaps give more tailored
         // error messages.
         let origin = SubregionOrigin::Subtype(self.fields.trace.clone());
-        self.fields.infcx.region_vars.make_subregion(origin, a, b);
+        self.fields.infcx.borrow_region_constraints()
+                         .make_subregion(origin, a, b);
 
         Ok(a)
     }
diff --git a/src/librustc/lib.rs b/src/librustc/lib.rs
index 498e1aa..b59f748 100644
--- a/src/librustc/lib.rs
+++ b/src/librustc/lib.rs
@@ -45,17 +45,23 @@
 #![feature(conservative_impl_trait)]
 #![feature(const_fn)]
 #![feature(core_intrinsics)]
+#![feature(drain_filter)]
+#![feature(i128)]
 #![feature(i128_type)]
+#![feature(inclusive_range)]
 #![feature(inclusive_range_syntax)]
 #![cfg_attr(windows, feature(libc))]
 #![feature(macro_vis_matcher)]
+#![feature(match_default_bindings)]
 #![feature(never_type)]
 #![feature(nonzero)]
 #![feature(quote)]
+#![feature(refcell_replace_swap)]
 #![feature(rustc_diagnostic_macros)]
 #![feature(slice_patterns)]
 #![feature(specialization)]
 #![feature(unboxed_closures)]
+#![feature(underscore_lifetimes)]
 #![feature(trace_macros)]
 #![feature(test)]
 #![feature(const_atomic_bool_new)]
diff --git a/src/librustc/lint/builtin.rs b/src/librustc/lint/builtin.rs
index 855cc06..7544658 100644
--- a/src/librustc/lint/builtin.rs
+++ b/src/librustc/lint/builtin.rs
@@ -162,12 +162,6 @@
 }
 
 declare_lint! {
-    pub EXTRA_REQUIREMENT_IN_IMPL,
-    Deny,
-    "detects extra requirements in impls that were erroneously allowed"
-}
-
-declare_lint! {
     pub LEGACY_DIRECTORY_OWNERSHIP,
     Deny,
     "non-inline, non-`#[path]` modules (e.g. `mod foo;`) were erroneously allowed in some files \
@@ -254,7 +248,6 @@
             RESOLVE_TRAIT_ON_DEFAULTED_UNIT,
             SAFE_EXTERN_STATICS,
             PATTERNS_IN_FNS_WITHOUT_BODY,
-            EXTRA_REQUIREMENT_IN_IMPL,
             LEGACY_DIRECTORY_OWNERSHIP,
             LEGACY_IMPORTS,
             LEGACY_CONSTRUCTOR_VISIBILITY,
diff --git a/src/librustc/lint/context.rs b/src/librustc/lint/context.rs
index 601e031..4496e07 100644
--- a/src/librustc/lint/context.rs
+++ b/src/librustc/lint/context.rs
@@ -34,7 +34,8 @@
 use rustc_serialize::{Decoder, Decodable, Encoder, Encodable};
 use session::{config, early_error, Session};
 use traits::Reveal;
-use ty::{self, TyCtxt};
+use ty::{self, TyCtxt, Ty};
+use ty::layout::{LayoutError, LayoutOf, TyLayout};
 use util::nodemap::FxHashMap;
 
 use std::default::Default as StdDefault;
@@ -626,6 +627,14 @@
     }
 }
 
+impl<'a, 'tcx> LayoutOf<Ty<'tcx>> for &'a LateContext<'a, 'tcx> {
+    type TyLayout = Result<TyLayout<'tcx>, LayoutError<'tcx>>;
+
+    fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
+        (self.tcx, self.param_env.reveal_all()).layout_of(ty)
+    }
+}
+
 impl<'a, 'tcx> hir_visit::Visitor<'tcx> for LateContext<'a, 'tcx> {
     /// Because lints are scoped lexically, we want to walk nested
     /// items in the context of the outer item, so enable
diff --git a/src/librustc/middle/cstore.rs b/src/librustc/middle/cstore.rs
index 628538b..5d71419 100644
--- a/src/librustc/middle/cstore.rs
+++ b/src/librustc/middle/cstore.rs
@@ -24,7 +24,7 @@
 
 use hir;
 use hir::def;
-use hir::def_id::{CrateNum, DefId, DefIndex, LOCAL_CRATE};
+use hir::def_id::{CrateNum, DefId, LOCAL_CRATE};
 use hir::map as hir_map;
 use hir::map::definitions::{Definitions, DefKey, DefPathTable};
 use hir::svh::Svh;
@@ -180,7 +180,7 @@
 /// upstream crate.
 #[derive(Debug, RustcEncodable, RustcDecodable, Copy, Clone)]
 pub struct EncodedMetadataHash {
-    pub def_index: DefIndex,
+    pub def_index: u32,
     pub hash: ich::Fingerprint,
 }
 
diff --git a/src/librustc/middle/expr_use_visitor.rs b/src/librustc/middle/expr_use_visitor.rs
index 0383d5c..9018b9f 100644
--- a/src/librustc/middle/expr_use_visitor.rs
+++ b/src/librustc/middle/expr_use_visitor.rs
@@ -20,7 +20,7 @@
 use self::OverloadedCallType::*;
 
 use hir::def::Def;
-use hir::def_id::{DefId};
+use hir::def_id::DefId;
 use infer::InferCtxt;
 use middle::mem_categorization as mc;
 use middle::region;
@@ -915,7 +915,7 @@
                 let closure_def_id = self.tcx().hir.local_def_id(closure_expr.id);
                 let upvar_id = ty::UpvarId {
                     var_id: var_hir_id,
-                    closure_expr_id: closure_def_id.index
+                    closure_expr_id: closure_def_id.to_local(),
                 };
                 let upvar_capture = self.mc.tables.upvar_capture(upvar_id);
                 let cmt_var = return_if_err!(self.cat_captured_var(closure_expr.id,
diff --git a/src/librustc/middle/free_region.rs b/src/librustc/middle/free_region.rs
index 3bcdc4f..da505f1 100644
--- a/src/librustc/middle/free_region.rs
+++ b/src/librustc/middle/free_region.rs
@@ -192,7 +192,7 @@
     ///
     /// if `r_a` represents `'a`, this function would return `{'b, 'c}`.
     pub fn regions_that_outlive<'a, 'gcx>(&self, r_a: Region<'tcx>) -> Vec<&Region<'tcx>> {
-        assert!(is_free(r_a));
+        assert!(is_free(r_a) || *r_a == ty::ReStatic);
         self.relation.greater_than(&r_a)
     }
 }
diff --git a/src/librustc/middle/mem_categorization.rs b/src/librustc/middle/mem_categorization.rs
index 4071f81e..c89d67d 100644
--- a/src/librustc/middle/mem_categorization.rs
+++ b/src/librustc/middle/mem_categorization.rs
@@ -70,7 +70,7 @@
 use self::Aliasability::*;
 
 use middle::region;
-use hir::def_id::{DefId, DefIndex};
+use hir::def_id::{DefId, LocalDefId};
 use hir::map as hir_map;
 use infer::InferCtxt;
 use hir::def::{Def, CtorKind};
@@ -191,7 +191,7 @@
 
 pub enum ImmutabilityBlame<'tcx> {
     ImmLocal(ast::NodeId),
-    ClosureEnv(DefIndex),
+    ClosureEnv(LocalDefId),
     LocalDeref(ast::NodeId),
     AdtFieldDeref(&'tcx ty::AdtDef, &'tcx ty::FieldDef)
 }
@@ -210,7 +210,7 @@
                 adt_def.variant_with_id(variant_did)
             }
             _ => {
-                assert!(adt_def.is_univariant());
+                assert_eq!(adt_def.variants.len(), 1);
                 &adt_def.variants[0]
             }
         };
@@ -759,11 +759,11 @@
             ref t => span_bug!(span, "unexpected type for fn in mem_categorization: {:?}", t),
         };
 
-        let closure_expr_def_index = self.tcx.hir.local_def_id(fn_node_id).index;
+        let closure_expr_def_id = self.tcx.hir.local_def_id(fn_node_id);
         let var_hir_id = self.tcx.hir.node_to_hir_id(var_id);
         let upvar_id = ty::UpvarId {
             var_id: var_hir_id,
-            closure_expr_id: closure_expr_def_index
+            closure_expr_id: closure_expr_def_id.to_local(),
         };
 
         let var_ty = self.node_ty(var_hir_id)?;
@@ -838,7 +838,7 @@
             // The environment of a closure is guaranteed to
             // outlive any bindings introduced in the body of the
             // closure itself.
-            scope: DefId::local(upvar_id.closure_expr_id),
+            scope: upvar_id.closure_expr_id.to_def_id(),
             bound_region: ty::BrEnv
         }));
 
@@ -1096,7 +1096,7 @@
                                               -> cmt<'tcx> {
         // univariant enums do not need downcasts
         let base_did = self.tcx.parent_def_id(variant_did).unwrap();
-        if !self.tcx.adt_def(base_did).is_univariant() {
+        if self.tcx.adt_def(base_did).variants.len() != 1 {
             let base_ty = base_cmt.ty;
             let ret = Rc::new(cmt_ {
                 id: node.id(),
diff --git a/src/librustc/middle/region.rs b/src/librustc/middle/region.rs
index a788299..d3aa80e 100644
--- a/src/librustc/middle/region.rs
+++ b/src/librustc/middle/region.rs
@@ -12,7 +12,7 @@
 //! the parent links in the region hierarchy.
 //!
 //! Most of the documentation on regions can be found in
-//! `middle/infer/region_inference/README.md`
+//! `middle/infer/region_constraints/README.md`
 
 use ich::{StableHashingContext, NodeIdHashingMode};
 use util::nodemap::{FxHashMap, FxHashSet};
@@ -320,7 +320,7 @@
     /// hierarchy based on their lexical mapping. This is used to
     /// handle the relationships between regions in a fn and in a
     /// closure defined by that fn. See the "Modeling closures"
-    /// section of the README in infer::region_inference for
+    /// section of the README in infer::region_constraints for
     /// more details.
     closure_tree: FxHashMap<hir::ItemLocalId, hir::ItemLocalId>,
 
@@ -407,7 +407,7 @@
     /// of the innermost fn body. Each fn forms its own disjoint tree
     /// in the region hierarchy. These fn bodies are themselves
     /// arranged into a tree. See the "Modeling closures" section of
-    /// the README in infer::region_inference for more
+    /// the README in infer::region_constraints for more
     /// details.
     root_id: Option<hir::ItemLocalId>,
 
@@ -646,7 +646,7 @@
             // different functions.  Compare those fn for lexical
             // nesting. The reasoning behind this is subtle.  See the
             // "Modeling closures" section of the README in
-            // infer::region_inference for more details.
+            // infer::region_constraints for more details.
             let a_root_scope = a_ancestors[a_index];
             let b_root_scope = a_ancestors[a_index];
             return match (a_root_scope.data(), b_root_scope.data()) {
diff --git a/src/librustc/mir/mod.rs b/src/librustc/mir/mod.rs
index 18c2650..355fb57 100644
--- a/src/librustc/mir/mod.rs
+++ b/src/librustc/mir/mod.rs
@@ -82,9 +82,6 @@
     /// in scope, but a separate set of locals.
     pub promoted: IndexVec<Promoted, Mir<'tcx>>,
 
-    /// Return type of the function.
-    pub return_ty: Ty<'tcx>,
-
     /// Yield type of the function, if it is a generator.
     pub yield_ty: Option<Ty<'tcx>>,
 
@@ -135,7 +132,6 @@
                visibility_scope_info: ClearOnDecode<IndexVec<VisibilityScope,
                                                              VisibilityScopeInfo>>,
                promoted: IndexVec<Promoted, Mir<'tcx>>,
-               return_ty: Ty<'tcx>,
                yield_ty: Option<Ty<'tcx>>,
                local_decls: IndexVec<Local, LocalDecl<'tcx>>,
                arg_count: usize,
@@ -145,14 +141,12 @@
         // We need `arg_count` locals, and one for the return pointer
         assert!(local_decls.len() >= arg_count + 1,
             "expected at least {} locals, got {}", arg_count + 1, local_decls.len());
-        assert_eq!(local_decls[RETURN_POINTER].ty, return_ty);
 
         Mir {
             basic_blocks,
             visibility_scopes,
             visibility_scope_info,
             promoted,
-            return_ty,
             yield_ty,
             generator_drop: None,
             generator_layout: None,
@@ -273,6 +267,11 @@
             &block.terminator().source_info
         }
     }
+
+    /// Returns the return type; it is always the first element of the `local_decls` array.
+    pub fn return_ty(&self) -> Ty<'tcx> {
+        self.local_decls[RETURN_POINTER].ty
+    }
 }
 
 #[derive(Clone, Debug)]
@@ -299,7 +298,6 @@
     visibility_scopes,
     visibility_scope_info,
     promoted,
-    return_ty,
     yield_ty,
     generator_drop,
     generator_layout,
@@ -555,6 +553,15 @@
 
 newtype_index!(BasicBlock { DEBUG_FORMAT = "bb{}" });
 
+impl BasicBlock {
+    pub fn start_location(self) -> Location {
+        Location {
+            block: self,
+            statement_index: 0,
+        }
+    }
+}
+
 ///////////////////////////////////////////////////////////////////////////
 // BasicBlockData and Terminator
 
@@ -638,7 +645,32 @@
         unwind: Option<BasicBlock>
     },
 
-    /// Drop the Lvalue and assign the new value over it
+    /// Drop the Lvalue and assign the new value over it. This ensures
+    /// that the assignment to LV occurs *even if* the destructor for
+    /// lvalue unwinds. Its semantics are best explained by the
+    /// elaboration:
+    ///
+    /// ```
+    /// BB0 {
+    ///   DropAndReplace(LV <- RV, goto BB1, unwind BB2)
+    /// }
+    /// ```
+    ///
+    /// becomes
+    ///
+    /// ```
+    /// BB0 {
+    ///   Drop(LV, goto BB1, unwind BB2)
+    /// }
+    /// BB1 {
+    ///   // LV is now uninitialized
+    ///   LV <- RV
+    /// }
+    /// BB2 {
+    ///   // LV is now uninitialized -- its dtor panicked
+    ///   LV <- RV
+    /// }
+    /// ```
     DropAndReplace {
         location: Lvalue<'tcx>,
         value: Operand<'tcx>,
@@ -650,9 +682,10 @@
     Call {
         /// The function that’s being called
         func: Operand<'tcx>,
-        /// Arguments the function is called with. These are owned by the callee, which is free to
-        /// modify them. This is important as "by-value" arguments might be passed by-reference at
-        /// the ABI level.
+        /// Arguments the function is called with.
+        /// These are owned by the callee, which is free to modify them.
+        /// This allows the memory occupied by "by-value" arguments to be
+        /// reused across function calls without duplicating the contents.
         args: Vec<Operand<'tcx>>,
         /// Destination for the return value. If some, the call is converging.
         destination: Option<(Lvalue<'tcx>, BasicBlock)>,
@@ -1709,7 +1742,6 @@
             visibility_scopes: self.visibility_scopes.clone(),
             visibility_scope_info: self.visibility_scope_info.clone(),
             promoted: self.promoted.fold_with(folder),
-            return_ty: self.return_ty.fold_with(folder),
             yield_ty: self.yield_ty.fold_with(folder),
             generator_drop: self.generator_drop.fold_with(folder),
             generator_layout: self.generator_layout.fold_with(folder),
@@ -1728,7 +1760,6 @@
         self.generator_layout.visit_with(visitor) ||
         self.yield_ty.visit_with(visitor) ||
         self.promoted.visit_with(visitor)     ||
-        self.return_ty.visit_with(visitor)    ||
         self.local_decls.visit_with(visitor)
     }
 }
diff --git a/src/librustc/mir/visit.rs b/src/librustc/mir/visit.rs
index 00863ab..5f2f5b7 100644
--- a/src/librustc/mir/visit.rs
+++ b/src/librustc/mir/visit.rs
@@ -292,11 +292,10 @@
                     self.visit_visibility_scope_data(scope);
                 }
 
-                let lookup = TyContext::SourceInfo(SourceInfo {
+                self.visit_ty(&$($mutability)* mir.return_ty(), TyContext::ReturnTy(SourceInfo {
                     span: mir.span,
                     scope: ARGUMENT_VISIBILITY_SCOPE,
-                });
-                self.visit_ty(&$($mutability)* mir.return_ty, lookup);
+                }));
 
                 for local in mir.local_decls.indices() {
                     self.visit_local_decl(local, & $($mutability)* mir.local_decls[local]);
@@ -811,7 +810,7 @@
 
 /// Extra information passed to `visit_ty` and friends to give context
 /// about where the type etc appears.
-#[derive(Copy, Clone, Debug)]
+#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
 pub enum TyContext {
     LocalDecl {
         /// The index of the local variable we are visiting.
@@ -821,9 +820,11 @@
         source_info: SourceInfo,
     },
 
-    Location(Location),
+    /// The return type of the function.
+    ReturnTy(SourceInfo),
 
-    SourceInfo(SourceInfo),
+    /// A type found at some location.
+    Location(Location),
 }
 
 #[derive(Copy, Clone, Debug, PartialEq, Eq)]
diff --git a/src/librustc/session/config.rs b/src/librustc/session/config.rs
index 3f53c89..57fae22 100644
--- a/src/librustc/session/config.rs
+++ b/src/librustc/session/config.rs
@@ -1042,6 +1042,8 @@
           "enable incremental compilation (experimental)"),
     incremental_cc: bool = (false, parse_bool, [UNTRACKED],
           "enable cross-crate incremental compilation (even more experimental)"),
+    incremental_queries: bool = (true, parse_bool, [UNTRACKED],
+          "enable incremental compilation support for queries (experimental)"),
     incremental_info: bool = (false, parse_bool, [UNTRACKED],
         "print high-level information about incremental reuse (or the lack thereof)"),
     incremental_dump_hash: bool = (false, parse_bool, [UNTRACKED],
@@ -1471,8 +1473,15 @@
             Some("human") => ErrorOutputType::HumanReadable(color),
             Some("json")  => ErrorOutputType::Json(false),
             Some("pretty-json") => ErrorOutputType::Json(true),
-            Some("short") => ErrorOutputType::Short(color),
-
+            Some("short") => {
+                if nightly_options::is_unstable_enabled(matches) {
+                    ErrorOutputType::Short(color)
+                } else {
+                    early_error(ErrorOutputType::default(),
+                                &format!("the `-Z unstable-options` flag must also be passed to \
+                                          enable the short error message option"));
+                }
+            }
             None => ErrorOutputType::HumanReadable(color),
 
             Some(arg) => {
diff --git a/src/librustc/traits/error_reporting.rs b/src/librustc/traits/error_reporting.rs
index 106b1b0..7c38cf7 100644
--- a/src/librustc/traits/error_reporting.rs
+++ b/src/librustc/traits/error_reporting.rs
@@ -33,7 +33,6 @@
 use infer::{self, InferCtxt};
 use infer::type_variable::TypeVariableOrigin;
 use middle::const_val;
-use rustc::lint::builtin::EXTRA_REQUIREMENT_IN_IMPL;
 use std::fmt;
 use syntax::ast;
 use session::DiagnosticMessageId;
@@ -481,30 +480,14 @@
                                         item_name: ast::Name,
                                         _impl_item_def_id: DefId,
                                         trait_item_def_id: DefId,
-                                        requirement: &fmt::Display,
-                                        lint_id: Option<ast::NodeId>) // (*)
+                                        requirement: &fmt::Display)
                                         -> DiagnosticBuilder<'tcx>
     {
-        // (*) This parameter is temporary and used only for phasing
-        // in the bug fix to #18937. If it is `Some`, it has a kind of
-        // weird effect -- the diagnostic is reported as a lint, and
-        // the builder which is returned is marked as canceled.
-
         let msg = "impl has stricter requirements than trait";
-        let mut err = match lint_id {
-            Some(node_id) => {
-                self.tcx.struct_span_lint_node(EXTRA_REQUIREMENT_IN_IMPL,
-                                               node_id,
-                                               error_span,
-                                               msg)
-            }
-            None => {
-                struct_span_err!(self.tcx.sess,
-                                 error_span,
-                                 E0276,
-                                 "{}", msg)
-            }
-        };
+        let mut err = struct_span_err!(self.tcx.sess,
+                                       error_span,
+                                       E0276,
+                                       "{}", msg);
 
         if let Some(trait_item_span) = self.tcx.hir.span_if_local(trait_item_def_id) {
             let span = self.tcx.sess.codemap().def_span(trait_item_span);
@@ -543,15 +526,14 @@
         let mut err = match *error {
             SelectionError::Unimplemented => {
                 if let ObligationCauseCode::CompareImplMethodObligation {
-                    item_name, impl_item_def_id, trait_item_def_id, lint_id
+                    item_name, impl_item_def_id, trait_item_def_id,
                 } = obligation.cause.code {
                     self.report_extra_impl_obligation(
                         span,
                         item_name,
                         impl_item_def_id,
                         trait_item_def_id,
-                        &format!("`{}`", obligation.predicate),
-                        lint_id)
+                        &format!("`{}`", obligation.predicate))
                         .emit();
                     return;
                 }
diff --git a/src/librustc/traits/fulfill.rs b/src/librustc/traits/fulfill.rs
index cc2506d..297feea 100644
--- a/src/librustc/traits/fulfill.rs
+++ b/src/librustc/traits/fulfill.rs
@@ -8,14 +8,12 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use infer::{InferCtxt, InferOk};
+use infer::{RegionObligation, InferCtxt, InferOk};
 use ty::{self, Ty, TypeFoldable, ToPolyTraitRef, ToPredicate};
 use ty::error::ExpectedFound;
 use rustc_data_structures::obligation_forest::{ObligationForest, Error};
 use rustc_data_structures::obligation_forest::{ForestObligation, ObligationProcessor};
 use std::marker::PhantomData;
-use syntax::ast;
-use util::nodemap::NodeMap;
 use hir::def_id::DefId;
 
 use super::CodeAmbiguity;
@@ -48,39 +46,6 @@
     // A list of all obligations that have been registered with this
     // fulfillment context.
     predicates: ObligationForest<PendingPredicateObligation<'tcx>>,
-
-    // A set of constraints that regionck must validate. Each
-    // constraint has the form `T:'a`, meaning "some type `T` must
-    // outlive the lifetime 'a". These constraints derive from
-    // instantiated type parameters. So if you had a struct defined
-    // like
-    //
-    //     struct Foo<T:'static> { ... }
-    //
-    // then in some expression `let x = Foo { ... }` it will
-    // instantiate the type parameter `T` with a fresh type `$0`. At
-    // the same time, it will record a region obligation of
-    // `$0:'static`. This will get checked later by regionck. (We
-    // can't generally check these things right away because we have
-    // to wait until types are resolved.)
-    //
-    // These are stored in a map keyed to the id of the innermost
-    // enclosing fn body / static initializer expression. This is
-    // because the location where the obligation was incurred can be
-    // relevant with respect to which sublifetime assumptions are in
-    // place. The reason that we store under the fn-id, and not
-    // something more fine-grained, is so that it is easier for
-    // regionck to be sure that it has found *all* the region
-    // obligations (otherwise, it's easy to fail to walk to a
-    // particular node-id).
-    region_obligations: NodeMap<Vec<RegionObligation<'tcx>>>,
-}
-
-#[derive(Clone)]
-pub struct RegionObligation<'tcx> {
-    pub sub_region: ty::Region<'tcx>,
-    pub sup_type: Ty<'tcx>,
-    pub cause: ObligationCause<'tcx>,
 }
 
 #[derive(Clone, Debug)]
@@ -94,7 +59,6 @@
     pub fn new() -> FulfillmentContext<'tcx> {
         FulfillmentContext {
             predicates: ObligationForest::new(),
-            region_obligations: NodeMap(),
         }
     }
 
@@ -157,14 +121,6 @@
         });
     }
 
-    pub fn register_region_obligation(&mut self,
-                                      t_a: Ty<'tcx>,
-                                      r_b: ty::Region<'tcx>,
-                                      cause: ObligationCause<'tcx>)
-    {
-        register_region_obligation(t_a, r_b, cause, &mut self.region_obligations);
-    }
-
     pub fn register_predicate_obligation(&mut self,
                                          infcx: &InferCtxt<'a, 'gcx, 'tcx>,
                                          obligation: PredicateObligation<'tcx>)
@@ -183,26 +139,16 @@
         });
     }
 
-    pub fn register_predicate_obligations(&mut self,
-                                          infcx: &InferCtxt<'a, 'gcx, 'tcx>,
-                                          obligations: Vec<PredicateObligation<'tcx>>)
+    pub fn register_predicate_obligations<I>(&mut self,
+                                             infcx: &InferCtxt<'a, 'gcx, 'tcx>,
+                                             obligations: I)
+        where I: IntoIterator<Item = PredicateObligation<'tcx>>
     {
         for obligation in obligations {
             self.register_predicate_obligation(infcx, obligation);
         }
     }
 
-
-    pub fn region_obligations(&self,
-                              body_id: ast::NodeId)
-                              -> &[RegionObligation<'tcx>]
-    {
-        match self.region_obligations.get(&body_id) {
-            None => Default::default(),
-            Some(vec) => vec,
-        }
-    }
-
     pub fn select_all_or_error(&mut self,
                                infcx: &InferCtxt<'a, 'gcx, 'tcx>)
                                -> Result<(),Vec<FulfillmentError<'tcx>>>
@@ -245,10 +191,7 @@
             debug!("select: starting another iteration");
 
             // Process pending obligations.
-            let outcome = self.predicates.process_obligations(&mut FulfillProcessor {
-                selcx,
-                region_obligations: &mut self.region_obligations,
-            });
+            let outcome = self.predicates.process_obligations(&mut FulfillProcessor { selcx });
             debug!("select: outcome={:?}", outcome);
 
             // FIXME: if we kept the original cache key, we could mark projection
@@ -277,7 +220,6 @@
 
 struct FulfillProcessor<'a, 'b: 'a, 'gcx: 'tcx, 'tcx: 'b> {
     selcx: &'a mut SelectionContext<'b, 'gcx, 'tcx>,
-    region_obligations: &'a mut NodeMap<Vec<RegionObligation<'tcx>>>,
 }
 
 impl<'a, 'b, 'gcx, 'tcx> ObligationProcessor for FulfillProcessor<'a, 'b, 'gcx, 'tcx> {
@@ -288,9 +230,7 @@
                           obligation: &mut Self::Obligation)
                           -> Result<Option<Vec<Self::Obligation>>, Self::Error>
     {
-        process_predicate(self.selcx,
-                          obligation,
-                          self.region_obligations)
+        process_predicate(self.selcx, obligation)
             .map(|os| os.map(|os| os.into_iter().map(|o| PendingPredicateObligation {
                 obligation: o,
                 stalled_on: vec![]
@@ -329,8 +269,7 @@
 /// - `Err` if the predicate does not hold
 fn process_predicate<'a, 'gcx, 'tcx>(
     selcx: &mut SelectionContext<'a, 'gcx, 'tcx>,
-    pending_obligation: &mut PendingPredicateObligation<'tcx>,
-    region_obligations: &mut NodeMap<Vec<RegionObligation<'tcx>>>)
+    pending_obligation: &mut PendingPredicateObligation<'tcx>)
     -> Result<Option<Vec<PredicateObligation<'tcx>>>,
               FulfillmentErrorCode<'tcx>>
 {
@@ -452,18 +391,26 @@
                         // `for<'a> T: 'a where 'a not in T`, which we can treat as `T: 'static`.
                         Some(t_a) => {
                             let r_static = selcx.tcx().types.re_static;
-                            register_region_obligation(t_a, r_static,
-                                                       obligation.cause.clone(),
-                                                       region_obligations);
+                            selcx.infcx().register_region_obligation(
+                                obligation.cause.body_id,
+                                RegionObligation {
+                                    sup_type: t_a,
+                                    sub_region: r_static,
+                                    cause: obligation.cause.clone(),
+                                });
                             Ok(Some(vec![]))
                         }
                     }
                 }
                 // If there aren't, register the obligation.
                 Some(ty::OutlivesPredicate(t_a, r_b)) => {
-                    register_region_obligation(t_a, r_b,
-                                               obligation.cause.clone(),
-                                               region_obligations);
+                    selcx.infcx().register_region_obligation(
+                        obligation.cause.body_id,
+                        RegionObligation {
+                            sup_type: t_a,
+                            sub_region: r_b,
+                            cause: obligation.cause.clone()
+                        });
                     Ok(Some(vec![]))
                 }
             }
@@ -566,25 +513,6 @@
     }
 }
 
-
-fn register_region_obligation<'tcx>(t_a: Ty<'tcx>,
-                                    r_b: ty::Region<'tcx>,
-                                    cause: ObligationCause<'tcx>,
-                                    region_obligations: &mut NodeMap<Vec<RegionObligation<'tcx>>>)
-{
-    let region_obligation = RegionObligation { sup_type: t_a,
-                                               sub_region: r_b,
-                                               cause: cause };
-
-    debug!("register_region_obligation({:?}, cause={:?})",
-           region_obligation, region_obligation.cause);
-
-    region_obligations.entry(region_obligation.cause.body_id)
-                      .or_insert(vec![])
-                      .push(region_obligation);
-
-}
-
 fn to_fulfillment_error<'tcx>(
     error: Error<PendingPredicateObligation<'tcx>, FulfillmentErrorCode<'tcx>>)
     -> FulfillmentError<'tcx>
diff --git a/src/librustc/traits/mod.rs b/src/librustc/traits/mod.rs
index 62d2fe7..55b1a91 100644
--- a/src/librustc/traits/mod.rs
+++ b/src/librustc/traits/mod.rs
@@ -30,7 +30,7 @@
 use syntax_pos::{Span, DUMMY_SP};
 
 pub use self::coherence::{orphan_check, overlapping_impls, OrphanCheckErr, OverlapResult};
-pub use self::fulfill::{FulfillmentContext, RegionObligation};
+pub use self::fulfill::FulfillmentContext;
 pub use self::project::MismatchedProjectionTypes;
 pub use self::project::{normalize, normalize_projection_type, Normalized};
 pub use self::project::{ProjectionCache, ProjectionCacheSnapshot, Reveal};
@@ -152,7 +152,6 @@
         item_name: ast::Name,
         impl_item_def_id: DefId,
         trait_item_def_id: DefId,
-        lint_id: Option<ast::NodeId>,
     },
 
     /// Checking that this expression can be assigned where it needs to be
@@ -537,6 +536,17 @@
 
         let region_scope_tree = region::ScopeTree::default();
         let free_regions = FreeRegionMap::new();
+
+        // FIXME. We should really... do something with these region
+        // obligations. But this call just continues the older
+        // behavior (i.e., doesn't cause any new bugs), and it would
+        // take some further refactoring to actually solve them. In
+        // particular, we would have to handle implied bounds
+        // properly, and that code is currently largely confined to
+        // regionck (though I made some efforts to extract it
+        // out). -nmatsakis
+        let _ = infcx.ignore_region_obligations();
+
         infcx.resolve_regions_and_report_errors(region_context, &region_scope_tree, &free_regions);
         let predicates = match infcx.fully_resolve(&predicates) {
             Ok(predicates) => predicates,
diff --git a/src/librustc/traits/structural_impls.rs b/src/librustc/traits/structural_impls.rs
index fd93aa1..9231995 100644
--- a/src/librustc/traits/structural_impls.rs
+++ b/src/librustc/traits/structural_impls.rs
@@ -26,13 +26,6 @@
     }
 }
 
-impl<'tcx> fmt::Debug for traits::RegionObligation<'tcx> {
-    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        write!(f, "RegionObligation(sub_region={:?}, sup_type={:?})",
-               self.sub_region,
-               self.sup_type)
-    }
-}
 impl<'tcx, O: fmt::Debug> fmt::Debug for traits::Obligation<'tcx, O> {
     fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
         write!(f, "Obligation(predicate={:?},depth={})",
@@ -221,13 +214,11 @@
             }
             super::CompareImplMethodObligation { item_name,
                                                  impl_item_def_id,
-                                                 trait_item_def_id,
-                                                 lint_id } => {
+                                                 trait_item_def_id } => {
                 Some(super::CompareImplMethodObligation {
                     item_name,
                     impl_item_def_id,
                     trait_item_def_id,
-                    lint_id,
                 })
             }
             super::ExprAssignable => Some(super::ExprAssignable),
diff --git a/src/librustc/ty/codec.rs b/src/librustc/ty/codec.rs
index 1c79392..fbb14f3 100644
--- a/src/librustc/ty/codec.rs
+++ b/src/librustc/ty/codec.rs
@@ -19,7 +19,7 @@
 use hir::def_id::{DefId, CrateNum};
 use middle::const_val::ByteArray;
 use rustc_data_structures::fx::FxHashMap;
-use rustc_serialize::{Decodable, Decoder, Encoder, Encodable};
+use rustc_serialize::{Decodable, Decoder, Encoder, Encodable, opaque};
 use std::hash::Hash;
 use std::intrinsics;
 use ty::{self, Ty, TyCtxt};
@@ -53,6 +53,13 @@
     fn position(&self) -> usize;
 }
 
+impl<'buf> TyEncoder for opaque::Encoder<'buf> {
+    #[inline]
+    fn position(&self) -> usize {
+        self.position()
+    }
+}
+
 /// Encode the given value or a previously cached shorthand.
 pub fn encode_with_shorthand<E, T, M>(encoder: &mut E,
                                       value: &T,
@@ -113,6 +120,8 @@
 
     fn peek_byte(&self) -> u8;
 
+    fn position(&self) -> usize;
+
     fn cached_ty_for_shorthand<F>(&mut self,
                                   shorthand: usize,
                                   or_insert_with: F)
@@ -129,6 +138,7 @@
     }
 }
 
+#[inline]
 pub fn decode_cnum<'a, 'tcx, D>(decoder: &mut D) -> Result<CrateNum, D::Error>
     where D: TyDecoder<'a, 'tcx>,
           'tcx: 'a,
@@ -137,12 +147,12 @@
     Ok(decoder.map_encoded_cnum_to_current(cnum))
 }
 
+#[inline]
 pub fn decode_ty<'a, 'tcx, D>(decoder: &mut D) -> Result<Ty<'tcx>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
           'tcx: 'a,
 {
     // Handle shorthands first, if we have a usize > 0x80.
-    // if self.opaque.data[self.opaque.position()] & 0x80 != 0 {
     if decoder.positioned_at_shorthand() {
         let pos = decoder.read_usize()?;
         assert!(pos >= SHORTHAND_OFFSET);
@@ -157,6 +167,7 @@
     }
 }
 
+#[inline]
 pub fn decode_predicates<'a, 'tcx, D>(decoder: &mut D)
                                       -> Result<ty::GenericPredicates<'tcx>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
@@ -180,6 +191,7 @@
     })
 }
 
+#[inline]
 pub fn decode_substs<'a, 'tcx, D>(decoder: &mut D) -> Result<&'tcx Substs<'tcx>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
           'tcx: 'a,
@@ -189,6 +201,7 @@
     Ok(tcx.mk_substs((0..len).map(|_| Decodable::decode(decoder)))?)
 }
 
+#[inline]
 pub fn decode_region<'a, 'tcx, D>(decoder: &mut D) -> Result<ty::Region<'tcx>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
           'tcx: 'a,
@@ -196,6 +209,7 @@
     Ok(decoder.tcx().mk_region(Decodable::decode(decoder)?))
 }
 
+#[inline]
 pub fn decode_ty_slice<'a, 'tcx, D>(decoder: &mut D)
                                     -> Result<&'tcx ty::Slice<Ty<'tcx>>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
@@ -205,6 +219,7 @@
     Ok(decoder.tcx().mk_type_list((0..len).map(|_| Decodable::decode(decoder)))?)
 }
 
+#[inline]
 pub fn decode_adt_def<'a, 'tcx, D>(decoder: &mut D)
                                    -> Result<&'tcx ty::AdtDef, D::Error>
     where D: TyDecoder<'a, 'tcx>,
@@ -214,6 +229,7 @@
     Ok(decoder.tcx().adt_def(def_id))
 }
 
+#[inline]
 pub fn decode_existential_predicate_slice<'a, 'tcx, D>(decoder: &mut D)
     -> Result<&'tcx ty::Slice<ty::ExistentialPredicate<'tcx>>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
@@ -224,6 +240,7 @@
               .mk_existential_predicates((0..len).map(|_| Decodable::decode(decoder)))?)
 }
 
+#[inline]
 pub fn decode_byte_array<'a, 'tcx, D>(decoder: &mut D)
                                       -> Result<ByteArray<'tcx>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
@@ -234,6 +251,7 @@
     })
 }
 
+#[inline]
 pub fn decode_const<'a, 'tcx, D>(decoder: &mut D)
                                  -> Result<&'tcx ty::Const<'tcx>, D::Error>
     where D: TyDecoder<'a, 'tcx>,
@@ -241,3 +259,138 @@
 {
     Ok(decoder.tcx().mk_const(Decodable::decode(decoder)?))
 }
+
+#[macro_export]
+macro_rules! __impl_decoder_methods {
+    ($($name:ident -> $ty:ty;)*) => {
+        $(fn $name(&mut self) -> Result<$ty, Self::Error> {
+            self.opaque.$name()
+        })*
+    }
+}
+
+#[macro_export]
+macro_rules! implement_ty_decoder {
+    ($DecoderName:ident <$($typaram:tt),*>) => {
+        mod __ty_decoder_impl {
+            use super::$DecoderName;
+            use $crate::ty;
+            use $crate::ty::codec::*;
+            use $crate::ty::subst::Substs;
+            use $crate::hir::def_id::{CrateNum};
+            use $crate::middle::const_val::ByteArray;
+            use rustc_serialize::{Decoder, SpecializedDecoder};
+            use std::borrow::Cow;
+
+            impl<$($typaram ),*> Decoder for $DecoderName<$($typaram),*> {
+                type Error = String;
+
+                __impl_decoder_methods! {
+                    read_nil -> ();
+
+                    read_u128 -> u128;
+                    read_u64 -> u64;
+                    read_u32 -> u32;
+                    read_u16 -> u16;
+                    read_u8 -> u8;
+                    read_usize -> usize;
+
+                    read_i128 -> i128;
+                    read_i64 -> i64;
+                    read_i32 -> i32;
+                    read_i16 -> i16;
+                    read_i8 -> i8;
+                    read_isize -> isize;
+
+                    read_bool -> bool;
+                    read_f64 -> f64;
+                    read_f32 -> f32;
+                    read_char -> char;
+                    read_str -> Cow<str>;
+                }
+
+                fn error(&mut self, err: &str) -> Self::Error {
+                    self.opaque.error(err)
+                }
+            }
+
+            // FIXME(#36588) These impls are horribly unsound as they allow
+            // the caller to pick any lifetime for 'tcx, including 'static,
+            // by using the unspecialized proxies to them.
+
+            impl<$($typaram),*> SpecializedDecoder<CrateNum>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self) -> Result<CrateNum, Self::Error> {
+                    decode_cnum(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<ty::Ty<'tcx>>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self) -> Result<ty::Ty<'tcx>, Self::Error> {
+                    decode_ty(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<ty::GenericPredicates<'tcx>>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self)
+                                      -> Result<ty::GenericPredicates<'tcx>, Self::Error> {
+                    decode_predicates(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<&'tcx Substs<'tcx>>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self) -> Result<&'tcx Substs<'tcx>, Self::Error> {
+                    decode_substs(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<ty::Region<'tcx>>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self) -> Result<ty::Region<'tcx>, Self::Error> {
+                    decode_region(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<&'tcx ty::Slice<ty::Ty<'tcx>>>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self)
+                                      -> Result<&'tcx ty::Slice<ty::Ty<'tcx>>, Self::Error> {
+                    decode_ty_slice(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<&'tcx ty::AdtDef>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self) -> Result<&'tcx ty::AdtDef, Self::Error> {
+                    decode_adt_def(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<&'tcx ty::Slice<ty::ExistentialPredicate<'tcx>>>
+                for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self)
+                    -> Result<&'tcx ty::Slice<ty::ExistentialPredicate<'tcx>>, Self::Error> {
+                    decode_existential_predicate_slice(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<ByteArray<'tcx>>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self) -> Result<ByteArray<'tcx>, Self::Error> {
+                    decode_byte_array(self)
+                }
+            }
+
+            impl<$($typaram),*> SpecializedDecoder<&'tcx $crate::ty::Const<'tcx>>
+            for $DecoderName<$($typaram),*> {
+                fn specialized_decode(&mut self) -> Result<&'tcx ty::Const<'tcx>, Self::Error> {
+                    decode_const(self)
+                }
+            }
+        }
+    }
+}
+
diff --git a/src/librustc/ty/context.rs b/src/librustc/ty/context.rs
index 193c367..904f9a0 100644
--- a/src/librustc/ty/context.rs
+++ b/src/librustc/ty/context.rs
@@ -41,7 +41,7 @@
 use ty::RegionKind;
 use ty::{TyVar, TyVid, IntVar, IntVid, FloatVar, FloatVid};
 use ty::TypeVariants::*;
-use ty::layout::{Layout, TargetDataLayout};
+use ty::layout::{LayoutDetails, TargetDataLayout};
 use ty::maps;
 use ty::steal::Steal;
 use ty::BindingMode;
@@ -78,7 +78,7 @@
 /// Internal storage
 pub struct GlobalArenas<'tcx> {
     // internings
-    layout: TypedArena<Layout>,
+    layout: TypedArena<LayoutDetails>,
 
     // references
     generics: TypedArena<ty::Generics>,
@@ -768,7 +768,7 @@
                 };
                 let closure_def_id = DefId {
                     krate: local_id_root.krate,
-                    index: closure_expr_id,
+                    index: closure_expr_id.to_def_id().index,
                 };
                 (hcx.def_path_hash(var_owner_def_id),
                  var_id.local_id,
@@ -918,7 +918,7 @@
 
     stability_interner: RefCell<FxHashSet<&'tcx attr::Stability>>,
 
-    layout_interner: RefCell<FxHashSet<&'tcx Layout>>,
+    layout_interner: RefCell<FxHashSet<&'tcx LayoutDetails>>,
 
     /// A vector of every trait accessible in the whole crate
     /// (i.e. including those from subcrates). This is used only for
@@ -1016,7 +1016,7 @@
         interned
     }
 
-    pub fn intern_layout(self, layout: Layout) -> &'gcx Layout {
+    pub fn intern_layout(self, layout: LayoutDetails) -> &'gcx LayoutDetails {
         if let Some(layout) = self.layout_interner.borrow().get(&layout) {
             return layout;
         }
@@ -1306,9 +1306,9 @@
     pub fn serialize_query_result_cache<E>(self,
                                            encoder: &mut E)
                                            -> Result<(), E::Error>
-        where E: ::rustc_serialize::Encoder
+        where E: ty::codec::TyEncoder
     {
-        self.on_disk_query_result_cache.serialize(encoder)
+        self.on_disk_query_result_cache.serialize(self.global_tcx(), self.cstore, encoder)
     }
 
 }
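
The `intern_layout` change above keeps rustc's standard interning pattern: look the value up in a hash set of previously interned values, and only on a miss allocate it (in a `TypedArena` in rustc) and record the reference. A minimal self-contained sketch of that pattern, with hypothetical names and `Box::leak` standing in for the arena:

```rust
use std::collections::HashSet;

// Hypothetical string interner mirroring the intern_layout pattern:
// look up first, allocate and record only on a miss.
struct Interner {
    set: HashSet<&'static str>,
}

impl Interner {
    fn new() -> Self {
        Interner { set: HashSet::new() }
    }

    fn intern(&mut self, value: &str) -> &'static str {
        if let Some(&interned) = self.set.get(value) {
            return interned; // already interned: reuse the old allocation
        }
        // Miss: allocate. rustc uses a TypedArena; Box::leak stands in here.
        let interned: &'static str = Box::leak(value.to_owned().into_boxed_str());
        self.set.insert(interned);
        interned
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("layout");
    let b = interner.intern("layout");
    // Both calls return the same allocation, so equality is pointer equality.
    assert!(std::ptr::eq(a, b));
    assert_eq!(interner.set.len(), 1);
}
```

Interning is what makes `&'tcx LayoutDetails` comparisons cheap: equal layouts are guaranteed to be the same pointer.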
diff --git a/src/librustc/ty/error.rs b/src/librustc/ty/error.rs
index 5cfa72c..228ca76 100644
--- a/src/librustc/ty/error.rs
+++ b/src/librustc/ty/error.rs
@@ -54,6 +54,8 @@
     ProjectionBoundsLength(ExpectedFound<usize>),
     TyParamDefaultMismatch(ExpectedFound<type_variable::Default<'tcx>>),
     ExistentialMismatch(ExpectedFound<&'tcx ty::Slice<ty::ExistentialPredicate<'tcx>>>),
+
+    OldStyleLUB(Box<TypeError<'tcx>>),
 }
 
 #[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Eq, Hash, Debug, Copy)]
@@ -170,6 +172,9 @@
                 report_maybe_different(f, format!("trait `{}`", values.expected),
                                        format!("trait `{}`", values.found))
             }
+            OldStyleLUB(ref err) => {
+                write!(f, "{}", err)
+            }
         }
     }
 }
@@ -293,6 +298,12 @@
                 db.span_note(found.origin_span,
                              "...that also applies to the same type variable here");
             }
+            OldStyleLUB(err) => {
+                db.note("this was previously accepted by the compiler but has been phased out");
+                db.note("for more information, see https://github.com/rust-lang/rust/issues/45852");
+
+                self.note_and_explain_type_err(db, &err, sp);
+            }
             _ => {}
         }
     }
diff --git a/src/librustc/ty/fold.rs b/src/librustc/ty/fold.rs
index 149999e..bee1199 100644
--- a/src/librustc/ty/fold.rs
+++ b/src/librustc/ty/fold.rs
@@ -43,7 +43,8 @@
 use ty::{self, Binder, Ty, TyCtxt, TypeFlags};
 
 use std::fmt;
-use util::nodemap::{FxHashMap, FxHashSet};
+use std::collections::BTreeMap;
+use util::nodemap::FxHashSet;
 
 /// The TypeFoldable trait is implemented for every type that can be folded.
 /// Basically, every type that has a corresponding method in TypeFolder.
@@ -324,14 +325,14 @@
     tcx: TyCtxt<'a, 'gcx, 'tcx>,
     current_depth: u32,
     fld_r: &'a mut (FnMut(ty::BoundRegion) -> ty::Region<'tcx> + 'a),
-    map: FxHashMap<ty::BoundRegion, ty::Region<'tcx>>
+    map: BTreeMap<ty::BoundRegion, ty::Region<'tcx>>
 }
 
 impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
     pub fn replace_late_bound_regions<T,F>(self,
         value: &Binder<T>,
         mut f: F)
-        -> (T, FxHashMap<ty::BoundRegion, ty::Region<'tcx>>)
+        -> (T, BTreeMap<ty::BoundRegion, ty::Region<'tcx>>)
         where F : FnMut(ty::BoundRegion) -> ty::Region<'tcx>,
               T : TypeFoldable<'tcx>,
     {
@@ -438,7 +439,7 @@
             tcx,
             current_depth: 1,
             fld_r,
-            map: FxHashMap()
+            map: BTreeMap::default()
         }
     }
 }
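
The fold.rs hunks swap `FxHashMap` for `BTreeMap` so that iterating over the collected bound regions is deterministic across runs. The behavioral difference is easy to demonstrate with the std type (a generic sketch, unrelated to rustc's actual `BoundRegion` keys):

```rust
use std::collections::BTreeMap;

fn main() {
    // BTreeMap iterates in key order regardless of insertion order,
    // so any code that walks the map behaves reproducibly.
    let mut map = BTreeMap::new();
    for k in [3, 1, 2] {
        map.insert(k, ());
    }
    let keys: Vec<_> = map.keys().copied().collect();
    assert_eq!(keys, vec![1, 2, 3]);
}
```

A hash map with a randomized or address-dependent hasher gives no such guarantee, which can leak into error-message and codegen ordering.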
diff --git a/src/librustc/ty/layout.rs b/src/librustc/ty/layout.rs
index 491fa2a..71bf333 100644
--- a/src/librustc/ty/layout.rs
+++ b/src/librustc/ty/layout.rs
@@ -9,7 +9,6 @@
 // except according to those terms.
 
 pub use self::Integer::*;
-pub use self::Layout::*;
 pub use self::Primitive::*;
 
 use session::{self, DataTypeKind, Session};
@@ -21,10 +20,10 @@
 
 use std::cmp;
 use std::fmt;
-use std::i64;
+use std::i128;
 use std::iter;
 use std::mem;
-use std::ops::Deref;
+use std::ops::{Add, Sub, Mul, AddAssign, Deref, RangeInclusive};
 
 use ich::StableHashingContext;
 use rustc_data_structures::stable_hasher::{HashStable, StableHasher,
@@ -203,6 +202,18 @@
             bits => bug!("ptr_sized_integer: unknown pointer bit size {}", bits)
         }
     }
+
+    pub fn vector_align(&self, vec_size: Size) -> Align {
+        for &(size, align) in &self.vector_align {
+            if size == vec_size {
+                return align;
+            }
+        }
+        // Default to natural alignment, which is what LLVM does.
+        // That is, use the size, rounded up to a power of 2.
+        let align = vec_size.bytes().next_power_of_two();
+        Align::from_bytes(align, align).unwrap()
+    }
 }
 
 pub trait HasDataLayout: Copy {
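
The new `vector_align` falls back to "natural" alignment when the target's `vector_align` table has no entry for the requested size: the vector's size in bytes, rounded up to a power of two. A sketch of just that fallback, on plain `u64` byte counts:

```rust
// Natural-alignment fallback, as in vector_align above:
// round the vector size up to the next power of two.
fn natural_vector_align(size_bytes: u64) -> u64 {
    size_bytes.next_power_of_two()
}

fn main() {
    assert_eq!(natural_vector_align(16), 16); // a 128-bit vector is already aligned
    assert_eq!(natural_vector_align(12), 16); // e.g. 3 x f32 rounds up
    assert_eq!(natural_vector_align(6), 8);
}
```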
@@ -215,12 +226,6 @@
     }
 }
 
-impl<'a, 'tcx> HasDataLayout for TyCtxt<'a, 'tcx, 'tcx> {
-    fn data_layout(&self) -> &TargetDataLayout {
-        &self.data_layout
-    }
-}
-
 /// Endianness of the target, which must match cfg(target-endian).
 #[derive(Copy, Clone)]
 pub enum Endian {
@@ -236,7 +241,8 @@
 
 impl Size {
     pub fn from_bits(bits: u64) -> Size {
-        Size::from_bytes((bits + 7) / 8)
+        // Avoid potential overflow from `bits + 7`.
+        Size::from_bytes(bits / 8 + ((bits % 8) + 7) / 8)
     }
 
     pub fn from_bytes(bytes: u64) -> Size {
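
The `from_bits` change computes ceil(bits / 8) without forming the `bits + 7` intermediate, which would overflow for `bits` near `u64::MAX`. The two expressions agree everywhere the naive one is defined:

```rust
// Overflow-safe ceiling division by 8, as in the patched Size::from_bits.
// `(bits % 8 + 7) / 8` is 1 exactly when bits is not a multiple of 8.
fn bytes_from_bits(bits: u64) -> u64 {
    bits / 8 + ((bits % 8) + 7) / 8
}

fn main() {
    assert_eq!(bytes_from_bits(0), 0);
    assert_eq!(bytes_from_bits(1), 1);
    assert_eq!(bytes_from_bits(8), 1);
    assert_eq!(bytes_from_bits(9), 2);
    // The naive (bits + 7) / 8 would overflow here:
    assert_eq!(bytes_from_bits(u64::MAX), u64::MAX / 8 + 1);
}
```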
@@ -261,6 +267,11 @@
         Size::from_bytes((self.bytes() + mask) & !mask)
     }
 
+    pub fn is_abi_aligned(self, align: Align) -> bool {
+        let mask = align.abi() - 1;
+        self.bytes() & mask == 0
+    }
+
     pub fn checked_add<C: HasDataLayout>(self, offset: Size, cx: C) -> Option<Size> {
         let dl = cx.data_layout();
 
@@ -278,8 +289,6 @@
     pub fn checked_mul<C: HasDataLayout>(self, count: u64, cx: C) -> Option<Size> {
         let dl = cx.data_layout();
 
-        // Each Size is less than dl.obj_size_bound(), so the sum is
-        // also less than 1 << 62 (and therefore can't overflow).
         match self.bytes().checked_mul(count) {
             Some(bytes) if bytes < dl.obj_size_bound() => {
                 Some(Size::from_bytes(bytes))
@@ -289,6 +298,46 @@
     }
 }
 
+// Panicking addition, subtraction and multiplication for convenience.
+// Avoid during layout computation, return `LayoutError` instead.
+
+impl Add for Size {
+    type Output = Size;
+    fn add(self, other: Size) -> Size {
+        // Each Size is less than 1 << 61, so the sum is
+        // less than 1 << 62 (and therefore can't overflow).
+        Size::from_bytes(self.bytes() + other.bytes())
+    }
+}
+
+impl Sub for Size {
+    type Output = Size;
+    fn sub(self, other: Size) -> Size {
+        // Each Size is less than 1 << 61, so an underflow
+        // would result in a value larger than 1 << 61,
+        // which Size::from_bytes will catch for us.
+        Size::from_bytes(self.bytes() - other.bytes())
+    }
+}
+
+impl Mul<u64> for Size {
+    type Output = Size;
+    fn mul(self, count: u64) -> Size {
+        match self.bytes().checked_mul(count) {
+            Some(bytes) => Size::from_bytes(bytes),
+            None => {
+                bug!("Size::mul: {} * {} doesn't fit in u64", self.bytes(), count)
+            }
+        }
+    }
+}
+
+impl AddAssign for Size {
+    fn add_assign(&mut self, other: Size) {
+        *self = *self + other;
+    }
+}
+
 /// Alignment of a type in bytes, both ABI-mandated and preferred.
 /// Each field is a power of two, giving the alignment a maximum
 /// value of 2^(2^8 - 1), which is limited by LLVM to a i32, with
@@ -301,7 +350,8 @@
 
 impl Align {
     pub fn from_bits(abi: u64, pref: u64) -> Result<Align, String> {
-        Align::from_bytes((abi + 7) / 8, (pref + 7) / 8)
+        Align::from_bytes(Size::from_bits(abi).bytes(),
+                          Size::from_bits(pref).bytes())
     }
 
     pub fn from_bytes(abi: u64, pref: u64) -> Result<Align, String> {
@@ -340,6 +390,14 @@
         1 << self.pref
     }
 
+    pub fn abi_bits(self) -> u64 {
+        self.abi() * 8
+    }
+
+    pub fn pref_bits(self) -> u64 {
+        self.pref() * 8
+    }
+
     pub fn min(self, other: Align) -> Align {
         Align {
             abi: cmp::min(self.abi, other.abi),
@@ -358,7 +416,6 @@
 /// Integers, also used for enum discriminants.
 #[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Debug)]
 pub enum Integer {
-    I1,
     I8,
     I16,
     I32,
@@ -366,10 +423,9 @@
     I128,
 }
 
-impl Integer {
+impl<'a, 'tcx> Integer {
     pub fn size(&self) -> Size {
         match *self {
-            I1 => Size::from_bits(1),
             I8 => Size::from_bytes(1),
             I16 => Size::from_bytes(2),
             I32 => Size::from_bytes(4),
@@ -382,7 +438,6 @@
         let dl = cx.data_layout();
 
         match *self {
-            I1 => dl.i1_align,
             I8 => dl.i8_align,
             I16 => dl.i16_align,
             I32 => dl.i32_align,
@@ -391,16 +446,13 @@
         }
     }
 
-    pub fn to_ty<'a, 'tcx>(&self, tcx: &TyCtxt<'a, 'tcx, 'tcx>,
-                           signed: bool) -> Ty<'tcx> {
+    pub fn to_ty(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, signed: bool) -> Ty<'tcx> {
         match (*self, signed) {
-            (I1, false) => tcx.types.u8,
             (I8, false) => tcx.types.u8,
             (I16, false) => tcx.types.u16,
             (I32, false) => tcx.types.u32,
             (I64, false) => tcx.types.u64,
             (I128, false) => tcx.types.u128,
-            (I1, true) => tcx.types.i8,
             (I8, true) => tcx.types.i8,
             (I16, true) => tcx.types.i16,
             (I32, true) => tcx.types.i32,
@@ -410,9 +462,8 @@
     }
 
     /// Find the smallest Integer type which can represent the signed value.
-    pub fn fit_signed(x: i64) -> Integer {
+    pub fn fit_signed(x: i128) -> Integer {
         match x {
-            -0x0000_0000_0000_0001...0x0000_0000_0000_0000 => I1,
             -0x0000_0000_0000_0080...0x0000_0000_0000_007f => I8,
             -0x0000_0000_0000_8000...0x0000_0000_0000_7fff => I16,
             -0x0000_0000_8000_0000...0x0000_0000_7fff_ffff => I32,
@@ -422,9 +473,8 @@
     }
 
     /// Find the smallest Integer type which can represent the unsigned value.
-    pub fn fit_unsigned(x: u64) -> Integer {
+    pub fn fit_unsigned(x: u128) -> Integer {
         match x {
-            0...0x0000_0000_0000_0001 => I1,
             0...0x0000_0000_0000_00ff => I8,
             0...0x0000_0000_0000_ffff => I16,
             0...0x0000_0000_ffff_ffff => I32,
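
With `I1` gone, `fit_unsigned` picks the smallest of the 8/16/32/64/128-bit widths that can hold the value. The same semantics can be expressed with bit arithmetic instead of a match (a hypothetical equivalent returning a bit width, not rustc's `Integer` enum):

```rust
// Smallest power-of-two bit width, at least 8, that can represent `x`
// unsigned; mirrors the semantics of the fit_unsigned match above.
fn fit_unsigned_bits(x: u128) -> u32 {
    let needed = 128 - x.leading_zeros();
    needed.next_power_of_two().max(8)
}

fn main() {
    assert_eq!(fit_unsigned_bits(0), 8);
    assert_eq!(fit_unsigned_bits(0xff), 8);            // I8
    assert_eq!(fit_unsigned_bits(0x100), 16);          // I16
    assert_eq!(fit_unsigned_bits(u64::MAX as u128), 64);     // I64
    assert_eq!(fit_unsigned_bits(u64::MAX as u128 + 1), 128); // I128
}
```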
@@ -438,8 +488,8 @@
         let dl = cx.data_layout();
 
         let wanted = align.abi();
-        for &candidate in &[I8, I16, I32, I64] {
-            let ty = Int(candidate);
+        for &candidate in &[I8, I16, I32, I64, I128] {
+            let ty = Int(candidate, false);
             if wanted == ty.align(dl).abi() && wanted == ty.size(dl).bytes() {
                 return Some(candidate);
             }
@@ -465,19 +515,19 @@
 
     /// Find the appropriate Integer type and signedness for the given
     /// signed discriminant range and #[repr] attribute.
-    /// N.B.: u64 values above i64::MAX will be treated as signed, but
+    /// N.B.: u128 values above i128::MAX will be treated as signed, but
     /// that shouldn't affect anything, other than maybe debuginfo.
-    fn repr_discr<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                            ty: Ty<'tcx>,
-                            repr: &ReprOptions,
-                            min: i64,
-                            max: i64)
-                            -> (Integer, bool) {
+    fn repr_discr(tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                  ty: Ty<'tcx>,
+                  repr: &ReprOptions,
+                  min: i128,
+                  max: i128)
+                  -> (Integer, bool) {
         // Theoretically, negative values could be larger in unsigned representation
         // than the unsigned representation of the signed minimum. However, if there
-        // are any negative values, the only valid unsigned representation is u64
-        // which can fit all i64 values, so the result remains unaffected.
-        let unsigned_fit = Integer::fit_unsigned(cmp::max(min as u64, max as u64));
+        // are any negative values, the only valid unsigned representation is u128
+        // which can fit all i128 values, so the result remains unaffected.
+        let unsigned_fit = Integer::fit_unsigned(cmp::max(min as u128, max as u128));
         let signed_fit = cmp::max(Integer::fit_signed(min), Integer::fit_signed(max));
 
         let mut min_from_extern = None;
@@ -518,22 +568,27 @@
 /// Fundamental unit of memory access and layout.
 #[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
 pub enum Primitive {
-    Int(Integer),
+    /// The `bool` is the signedness of the `Integer` type.
+    ///
+    /// One would think we would not care about such details this low down,
+    /// but some ABIs are described in terms of C types and ISAs where the
+    /// integer arithmetic is done on {sign,zero}-extended registers, e.g.
+    /// a negative integer passed by zero-extension will appear positive in
+    /// the callee, and most operations on it will produce the wrong values.
+    Int(Integer, bool),
     F32,
     F64,
     Pointer
 }
 
-impl Primitive {
+impl<'a, 'tcx> Primitive {
     pub fn size<C: HasDataLayout>(self, cx: C) -> Size {
         let dl = cx.data_layout();
 
         match self {
-            Int(I1) | Int(I8) => Size::from_bits(8),
-            Int(I16) => Size::from_bits(16),
-            Int(I32) | F32 => Size::from_bits(32),
-            Int(I64) | F64 => Size::from_bits(64),
-            Int(I128) => Size::from_bits(128),
+            Int(i, _) => i.size(),
+            F32 => Size::from_bits(32),
+            F64 => Size::from_bits(64),
             Pointer => dl.pointer_size
         }
     }
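
The new signedness flag on `Primitive::Int` matters because ABIs that pass small integers in wider registers distinguish sign-extension from zero-extension. The effect described in the comment above, a negative integer appearing positive to the callee when zero-extended, can be reproduced with plain casts:

```rust
fn main() {
    let x: i8 = -1; // bit pattern 0xff
    // Sign-extension (what a signed ABI slot requires): value preserved.
    assert_eq!(x as i32, -1);
    // Zero-extension (what an unsigned slot does): -1i8 appears as 255.
    assert_eq!(x as u8 as u32, 255);
}
```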
@@ -542,453 +597,42 @@
         let dl = cx.data_layout();
 
         match self {
-            Int(I1) => dl.i1_align,
-            Int(I8) => dl.i8_align,
-            Int(I16) => dl.i16_align,
-            Int(I32) => dl.i32_align,
-            Int(I64) => dl.i64_align,
-            Int(I128) => dl.i128_align,
+            Int(i, _) => i.align(dl),
             F32 => dl.f32_align,
             F64 => dl.f64_align,
             Pointer => dl.pointer_align
         }
     }
-}
 
-/// Path through fields of nested structures.
-// FIXME(eddyb) use small vector optimization for the common case.
-pub type FieldPath = Vec<u32>;
-
-/// A structure, a product type in ADT terms.
-#[derive(PartialEq, Eq, Hash, Debug)]
-pub struct Struct {
-    /// Maximum alignment of fields and repr alignment.
-    pub align: Align,
-
-    /// Primitive alignment of fields without repr alignment.
-    pub primitive_align: Align,
-
-    /// If true, no alignment padding is used.
-    pub packed: bool,
-
-    /// If true, the size is exact, otherwise it's only a lower bound.
-    pub sized: bool,
-
-    /// Offsets for the first byte of each field, ordered to match the source definition order.
-    /// This vector does not go in increasing order.
-    /// FIXME(eddyb) use small vector optimization for the common case.
-    pub offsets: Vec<Size>,
-
-    /// Maps source order field indices to memory order indices, depending how fields were permuted.
-    /// FIXME (camlorn) also consider small vector  optimization here.
-    pub memory_index: Vec<u32>,
-
-    pub min_size: Size,
-}
-
-/// Info required to optimize struct layout.
-#[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Debug)]
-enum StructKind {
-    /// A tuple, closure, or univariant which cannot be coerced to unsized.
-    AlwaysSizedUnivariant,
-    /// A univariant, the last field of which may be coerced to unsized.
-    MaybeUnsizedUnivariant,
-    /// A univariant, but part of an enum.
-    EnumVariant,
-}
-
-impl<'a, 'tcx> Struct {
-    fn new(dl: &TargetDataLayout,
-           fields: &Vec<&'a Layout>,
-           repr: &ReprOptions,
-           kind: StructKind,
-           scapegoat: Ty<'tcx>)
-           -> Result<Struct, LayoutError<'tcx>> {
-        if repr.packed() && repr.align > 0 {
-            bug!("Struct cannot be packed and aligned");
-        }
-
-        let align = if repr.packed() {
-            dl.i8_align
-        } else {
-            dl.aggregate_align
-        };
-
-        let mut ret = Struct {
-            align,
-            primitive_align: align,
-            packed: repr.packed(),
-            sized: true,
-            offsets: vec![],
-            memory_index: vec![],
-            min_size: Size::from_bytes(0),
-        };
-
-        // Anything with repr(C) or repr(packed) doesn't optimize.
-        // Neither do  1-member and 2-member structs.
-        // In addition, code in trans assume that 2-element structs can become pairs.
-        // It's easier to just short-circuit here.
-        let can_optimize = (fields.len() > 2 || StructKind::EnumVariant == kind)
-            && (repr.flags & ReprFlags::IS_UNOPTIMISABLE).is_empty();
-
-        let (optimize, sort_ascending) = match kind {
-            StructKind::AlwaysSizedUnivariant => (can_optimize, false),
-            StructKind::MaybeUnsizedUnivariant => (can_optimize, false),
-            StructKind::EnumVariant => {
-                assert!(fields.len() >= 1, "Enum variants must have discriminants.");
-                (can_optimize && fields[0].size(dl).bytes() == 1, true)
-            }
-        };
-
-        ret.offsets = vec![Size::from_bytes(0); fields.len()];
-        let mut inverse_memory_index: Vec<u32> = (0..fields.len() as u32).collect();
-
-        if optimize {
-            let start = if let StructKind::EnumVariant = kind { 1 } else { 0 };
-            let end = if let StructKind::MaybeUnsizedUnivariant = kind {
-                fields.len() - 1
-            } else {
-                fields.len()
-            };
-            if end > start {
-                let optimizing  = &mut inverse_memory_index[start..end];
-                if sort_ascending {
-                    optimizing.sort_by_key(|&x| fields[x as usize].align(dl).abi());
-                } else {
-                    optimizing.sort_by(| &a, &b | {
-                        let a = fields[a as usize].align(dl).abi();
-                        let b = fields[b as usize].align(dl).abi();
-                        b.cmp(&a)
-                    });
-                }
-            }
-        }
-
-        // inverse_memory_index holds field indices by increasing memory offset.
-        // That is, if field 5 has offset 0, the first element of inverse_memory_index is 5.
-        // We now write field offsets to the corresponding offset slot;
-        // field 5 with offset 0 puts 0 in offsets[5].
-        // At the bottom of this function, we use inverse_memory_index to produce memory_index.
-
-        if let StructKind::EnumVariant = kind {
-            assert_eq!(inverse_memory_index[0], 0,
-              "Enum variant discriminants must have the lowest offset.");
-        }
-
-        let mut offset = Size::from_bytes(0);
-
-        for i in inverse_memory_index.iter() {
-            let field = fields[*i as usize];
-            if !ret.sized {
-                bug!("Struct::new: field #{} of `{}` comes after unsized field",
-                     ret.offsets.len(), scapegoat);
-            }
-
-            if field.is_unsized() {
-                ret.sized = false;
-            }
-
-            // Invariant: offset < dl.obj_size_bound() <= 1<<61
-            if !ret.packed {
-                let align = field.align(dl);
-                let primitive_align = field.primitive_align(dl);
-                ret.align = ret.align.max(align);
-                ret.primitive_align = ret.primitive_align.max(primitive_align);
-                offset = offset.abi_align(align);
-            }
-
-            debug!("Struct::new offset: {:?} field: {:?} {:?}", offset, field, field.size(dl));
-            ret.offsets[*i as usize] = offset;
-
-            offset = offset.checked_add(field.size(dl), dl)
-                           .map_or(Err(LayoutError::SizeOverflow(scapegoat)), Ok)?;
-        }
-
-        if repr.align > 0 {
-            let repr_align = repr.align as u64;
-            ret.align = ret.align.max(Align::from_bytes(repr_align, repr_align).unwrap());
-            debug!("Struct::new repr_align: {:?}", repr_align);
-        }
-
-        debug!("Struct::new min_size: {:?}", offset);
-        ret.min_size = offset;
-
-        // As stated above, inverse_memory_index holds field indices by increasing offset.
-        // This makes it an already-sorted view of the offsets vec.
-        // To invert it, consider:
-        // If field 5 has offset 0, offsets[0] is 5, and memory_index[5] should be 0.
-        // Field 5 would be the first element, so memory_index is i:
-        // Note: if we didn't optimize, it's already right.
-
-        if optimize {
-            ret.memory_index = vec![0; inverse_memory_index.len()];
-
-            for i in 0..inverse_memory_index.len() {
-                ret.memory_index[inverse_memory_index[i] as usize]  = i as u32;
-            }
-        } else {
-            ret.memory_index = inverse_memory_index;
-        }
-
-        Ok(ret)
-    }
-
-    /// Get the size with trailing alignment padding.
-    pub fn stride(&self) -> Size {
-        self.min_size.abi_align(self.align)
-    }
-
-    /// Determine whether a structure would be zero-sized, given its fields.
-    fn would_be_zero_sized<I>(dl: &TargetDataLayout, fields: I)
-                              -> Result<bool, LayoutError<'tcx>>
-    where I: Iterator<Item=Result<&'a Layout, LayoutError<'tcx>>> {
-        for field in fields {
-            let field = field?;
-            if field.is_unsized() || field.size(dl).bytes() > 0 {
-                return Ok(false);
-            }
-        }
-        Ok(true)
-    }
-
-    /// Get indices of the tys that made this struct by increasing offset.
-    #[inline]
-    pub fn field_index_by_increasing_offset<'b>(&'b self) -> impl iter::Iterator<Item=usize>+'b {
-        let mut inverse_small = [0u8; 64];
-        let mut inverse_big = vec![];
-        let use_small = self.memory_index.len() <= inverse_small.len();
-
-        // We have to write this logic twice in order to keep the array small.
-        if use_small {
-            for i in 0..self.memory_index.len() {
-                inverse_small[self.memory_index[i] as usize] = i as u8;
-            }
-        } else {
-            inverse_big = vec![0; self.memory_index.len()];
-            for i in 0..self.memory_index.len() {
-                inverse_big[self.memory_index[i] as usize] = i as u32;
-            }
-        }
-
-        (0..self.memory_index.len()).map(move |i| {
-            if use_small { inverse_small[i] as usize }
-            else { inverse_big[i] as usize }
-        })
-    }
-
-    /// Find the path leading to a non-zero leaf field, starting from
-    /// the given type and recursing through aggregates.
-    /// The tuple is `(path, source_path)`,
-    /// where `path` is in memory order and `source_path` in source order.
-    // FIXME(eddyb) track value ranges and traverse already optimized enums.
-    fn non_zero_field_in_type(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                              param_env: ty::ParamEnv<'tcx>,
-                              ty: Ty<'tcx>)
-                              -> Result<Option<(FieldPath, FieldPath)>, LayoutError<'tcx>> {
-        match (ty.layout(tcx, param_env)?, &ty.sty) {
-            (&Scalar { non_zero: true, .. }, _) |
-            (&CEnum { non_zero: true, .. }, _) => Ok(Some((vec![], vec![]))),
-            (&FatPointer { non_zero: true, .. }, _) => {
-                Ok(Some((vec![FAT_PTR_ADDR as u32], vec![FAT_PTR_ADDR as u32])))
-            }
-
-            // Is this the NonZero lang item wrapping a pointer or integer type?
-            (&Univariant { non_zero: true, .. }, &ty::TyAdt(def, substs)) => {
-                let fields = &def.struct_variant().fields;
-                assert_eq!(fields.len(), 1);
-                match *fields[0].ty(tcx, substs).layout(tcx, param_env)? {
-                    // FIXME(eddyb) also allow floating-point types here.
-                    Scalar { value: Int(_), non_zero: false } |
-                    Scalar { value: Pointer, non_zero: false } => {
-                        Ok(Some((vec![0], vec![0])))
-                    }
-                    FatPointer { non_zero: false, .. } => {
-                        let tmp = vec![FAT_PTR_ADDR as u32, 0];
-                        Ok(Some((tmp.clone(), tmp)))
-                    }
-                    _ => Ok(None)
-                }
-            }
-
-            // Perhaps one of the fields of this struct is non-zero
-            // let's recurse and find out
-            (&Univariant { ref variant, .. }, &ty::TyAdt(def, substs)) if def.is_struct() => {
-                Struct::non_zero_field_paths(
-                    tcx,
-                    param_env,
-                    def.struct_variant().fields.iter().map(|field| {
-                        field.ty(tcx, substs)
-                    }),
-                    Some(&variant.memory_index[..]))
-            }
-
-            // Perhaps one of the upvars of this closure is non-zero
-            (&Univariant { ref variant, .. }, &ty::TyClosure(def, substs)) => {
-                let upvar_tys = substs.upvar_tys(def, tcx);
-                Struct::non_zero_field_paths(
-                    tcx,
-                    param_env,
-                    upvar_tys,
-                    Some(&variant.memory_index[..]))
-            }
-            // Can we use one of the fields in this tuple?
-            (&Univariant { ref variant, .. }, &ty::TyTuple(tys, _)) => {
-                Struct::non_zero_field_paths(
-                    tcx,
-                    param_env,
-                    tys.iter().cloned(),
-                    Some(&variant.memory_index[..]))
-            }
-
-            // Is this a fixed-size array of something non-zero
-            // with at least one element?
-            (_, &ty::TyArray(ety, mut count)) => {
-                if count.has_projections() {
-                    count = tcx.normalize_associated_type_in_env(&count, param_env);
-                    if count.has_projections() {
-                        return Err(LayoutError::Unknown(ty));
-                    }
-                }
-                if count.val.to_const_int().unwrap().to_u64().unwrap() != 0 {
-                    Struct::non_zero_field_paths(
-                        tcx,
-                        param_env,
-                        Some(ety).into_iter(),
-                        None)
-                } else {
-                    Ok(None)
-                }
-            }
-
-            (_, &ty::TyProjection(_)) | (_, &ty::TyAnon(..)) => {
-                let normalized = tcx.normalize_associated_type_in_env(&ty, param_env);
-                if ty == normalized {
-                    return Ok(None);
-                }
-                return Struct::non_zero_field_in_type(tcx, param_env, normalized);
-            }
-
-            // Anything else is not a non-zero type.
-            _ => Ok(None)
-        }
-    }
-
-    /// Find the path leading to a non-zero leaf field, starting from
-    /// the given set of fields and recursing through aggregates.
-    /// Returns Some((path, source_path)) on success.
-    /// `path` is translated to memory order. `source_path` is not.
-    fn non_zero_field_paths<I>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                               param_env: ty::ParamEnv<'tcx>,
-                               fields: I,
-                               permutation: Option<&[u32]>)
-                               -> Result<Option<(FieldPath, FieldPath)>, LayoutError<'tcx>>
-    where I: Iterator<Item=Ty<'tcx>> {
-        for (i, ty) in fields.enumerate() {
-            let r = Struct::non_zero_field_in_type(tcx, param_env, ty)?;
-            if let Some((mut path, mut source_path)) = r {
-                source_path.push(i as u32);
-                let index = if let Some(p) = permutation {
-                    p[i] as usize
-                } else {
-                    i
-                };
-                path.push(index as u32);
-                return Ok(Some((path, source_path)));
-            }
-        }
-        Ok(None)
-    }
-
-    pub fn over_align(&self) -> Option<u32> {
-        let align = self.align.abi();
-        let primitive_align = self.primitive_align.abi();
-        if align > primitive_align {
-            Some(align as u32)
-        } else {
-            None
+    pub fn to_ty(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>) -> Ty<'tcx> {
+        match *self {
+            Int(i, signed) => i.to_ty(tcx, signed),
+            F32 => tcx.types.f32,
+            F64 => tcx.types.f64,
+            Pointer => tcx.mk_mut_ptr(tcx.mk_nil()),
         }
     }
 }
 
-/// An untagged union.
-#[derive(PartialEq, Eq, Hash, Debug)]
-pub struct Union {
-    pub align: Align,
-    pub primitive_align: Align,
+/// Information about one scalar component of a Rust type.
+#[derive(Clone, PartialEq, Eq, Hash, Debug)]
+pub struct Scalar {
+    pub value: Primitive,
 
-    pub min_size: Size,
-
-    /// If true, no alignment padding is used.
-    pub packed: bool,
+    /// Inclusive wrap-around range of valid values, that is, if
+    /// min > max, it represents min..=u128::MAX followed by 0..=max.
+    // FIXME(eddyb) always use the shortest range, e.g. by finding
+    // the largest space between two consecutive valid values and
+    // taking everything else as the (shortest) valid range.
+    pub valid_range: RangeInclusive<u128>,
 }
 
-impl<'a, 'tcx> Union {
-    fn new(dl: &TargetDataLayout, repr: &ReprOptions) -> Union {
-        if repr.packed() && repr.align > 0 {
-            bug!("Union cannot be packed and aligned");
-        }
-
-        let primitive_align = if repr.packed() {
-            dl.i8_align
+impl Scalar {
+    pub fn is_bool(&self) -> bool {
+        if let Int(I8, _) = self.value {
+            self.valid_range == (0..=1)
         } else {
-            dl.aggregate_align
-        };
-
-        let align = if repr.align > 0 {
-            let repr_align = repr.align as u64;
-            debug!("Union::new repr_align: {:?}", repr_align);
-            primitive_align.max(Align::from_bytes(repr_align, repr_align).unwrap())
-        } else {
-            primitive_align
-        };
-
-        Union {
-            align,
-            primitive_align,
-            min_size: Size::from_bytes(0),
-            packed: repr.packed(),
-        }
-    }
-
-    /// Extend the Struct with more fields.
-    fn extend<I>(&mut self, dl: &TargetDataLayout,
-                 fields: I,
-                 scapegoat: Ty<'tcx>)
-                 -> Result<(), LayoutError<'tcx>>
-    where I: Iterator<Item=Result<&'a Layout, LayoutError<'tcx>>> {
-        for (index, field) in fields.enumerate() {
-            let field = field?;
-            if field.is_unsized() {
-                bug!("Union::extend: field #{} of `{}` is unsized",
-                     index, scapegoat);
-            }
-
-            debug!("Union::extend field: {:?} {:?}", field, field.size(dl));
-
-            if !self.packed {
-                self.align = self.align.max(field.align(dl));
-                self.primitive_align = self.primitive_align.max(field.primitive_align(dl));
-            }
-            self.min_size = cmp::max(self.min_size, field.size(dl));
-        }
-
-        debug!("Union::extend min-size: {:?}", self.min_size);
-
-        Ok(())
-    }
-
-    /// Get the size with trailing alignment padding.
-    pub fn stride(&self) -> Size {
-        self.min_size.abi_align(self.align)
-    }
-
-    pub fn over_align(&self) -> Option<u32> {
-        let align = self.align.abi();
-        let primitive_align = self.primitive_align.abi();
-        if align > primitive_align {
-            Some(align as u32)
-        } else {
-            None
+            false
         }
     }
 }
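
`valid_range` is inclusive and may wrap around: when `start > end`, it covers `start..=u128::MAX` followed by `0..=end`. A hypothetical membership check following that convention (not rustc code):

```rust
// Membership test for a possibly wrap-around inclusive range,
// matching the valid_range convention described on Scalar.
fn contains(start: u128, end: u128, x: u128) -> bool {
    if start <= end {
        start <= x && x <= end
    } else {
        // Wrapped: start..=u128::MAX followed by 0..=end.
        x >= start || x <= end
    }
}

fn main() {
    // A bool-like scalar: only 0 and 1 are valid (cf. is_bool above).
    assert!(contains(0, 1, 1));
    assert!(!contains(0, 1, 2));
    // A wrap-around range that excludes only 101..=199.
    assert!(contains(200, 100, 250));
    assert!(contains(200, 100, 50));
    assert!(!contains(200, 100, 150));
}
```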
@@ -1003,106 +647,178 @@
 /// - For a slice, this is the length.
 pub const FAT_PTR_EXTRA: usize = 1;
 
-/// Type layout, from which size and alignment can be cheaply computed.
-/// For ADTs, it also includes field placement and enum optimizations.
-/// NOTE: Because Layout is interned, redundant information should be
-/// kept to a minimum, e.g. it includes no sub-component Ty or Layout.
-#[derive(Debug, PartialEq, Eq, Hash)]
-pub enum Layout {
-    /// TyBool, TyChar, TyInt, TyUint, TyFloat, TyRawPtr, TyRef or TyFnPtr.
-    Scalar {
-        value: Primitive,
-        // If true, the value cannot represent a bit pattern of all zeroes.
-        non_zero: bool
-    },
+/// Describes how the fields of a type are located in memory.
+#[derive(PartialEq, Eq, Hash, Debug)]
+pub enum FieldPlacement {
+    /// All fields start at offset 0. The `usize` is the field count.
+    Union(usize),
 
-    /// SIMD vectors, from structs marked with #[repr(simd)].
-    Vector {
-        element: Primitive,
+    /// Array/vector-like placement, with all fields of identical types.
+    Array {
+        stride: Size,
         count: u64
     },
 
-    /// TyArray, TySlice or TyStr.
-    Array {
+    /// Struct-like placement, with precomputed offsets.
+    ///
+    /// Fields are guaranteed to not overlap, but note that gaps
+    /// before, between and after all the fields are NOT always
+    /// padding, and as such their contents may not be discarded.
+    /// For example, enum variants leave a gap at the start,
+    /// where the discriminant field in the enum layout goes.
+    Arbitrary {
+        /// Offsets for the first byte of each field,
+        /// ordered to match the source definition order.
+        /// Note that this vector is not necessarily in increasing order.
+        // FIXME(eddyb) use small vector optimization for the common case.
+        offsets: Vec<Size>,
+
+        /// Maps source order field indices to memory order indices,
+        /// depending on how fields were permuted.
+        // FIXME(camlorn) also consider small vector optimization here.
+        memory_index: Vec<u32>
+    }
+}
+
+impl FieldPlacement {
+    pub fn count(&self) -> usize {
+        match *self {
+            FieldPlacement::Union(count) => count,
+            FieldPlacement::Array { count, .. } => {
+                let usize_count = count as usize;
+                assert_eq!(usize_count as u64, count);
+                usize_count
+            }
+            FieldPlacement::Arbitrary { ref offsets, .. } => offsets.len()
+        }
+    }
+
+    pub fn offset(&self, i: usize) -> Size {
+        match *self {
+            FieldPlacement::Union(_) => Size::from_bytes(0),
+            FieldPlacement::Array { stride, count } => {
+                let i = i as u64;
+                assert!(i < count);
+                stride * i
+            }
+            FieldPlacement::Arbitrary { ref offsets, .. } => offsets[i]
+        }
+    }
+
+    pub fn memory_index(&self, i: usize) -> usize {
+        match *self {
+            FieldPlacement::Union(_) |
+            FieldPlacement::Array { .. } => i,
+            FieldPlacement::Arbitrary { ref memory_index, .. } => {
+                let r = memory_index[i];
+                assert_eq!(r as usize as u32, r);
+                r as usize
+            }
+        }
+    }
+
+    /// Get source indices of the fields by increasing offsets.
+    #[inline]
+    pub fn index_by_increasing_offset<'a>(&'a self) -> impl iter::Iterator<Item=usize>+'a {
+        let mut inverse_small = [0u8; 64];
+        let mut inverse_big = vec![];
+        let use_small = self.count() <= inverse_small.len();
+
+        // We have to write this logic twice in order to keep the array small.
+        if let FieldPlacement::Arbitrary { ref memory_index, .. } = *self {
+            if use_small {
+                for i in 0..self.count() {
+                    inverse_small[memory_index[i] as usize] = i as u8;
+                }
+            } else {
+                inverse_big = vec![0; self.count()];
+                for i in 0..self.count() {
+                    inverse_big[memory_index[i] as usize] = i as u32;
+                }
+            }
+        }
+
+        (0..self.count()).map(move |i| {
+            match *self {
+                FieldPlacement::Union(_) |
+                FieldPlacement::Array { .. } => i,
+                FieldPlacement::Arbitrary { .. } => {
+                    if use_small { inverse_small[i] as usize }
+                    else { inverse_big[i] as usize }
+                }
+            }
+        })
+    }
+}
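The `memory_index` bookkeeping above (and the inversion done in `index_by_increasing_offset` and at the bottom of the univariant code) is just permutation inversion. A minimal standalone sketch, using a hypothetical helper rather than the compiler's types:

```rust
/// Invert a field permutation: if `inverse_memory_index[mem_pos] == src_field`
/// (fields listed by increasing memory offset), then the inverse satisfies
/// `memory_index[src_field] == mem_pos` (source index -> memory position).
fn invert(inverse_memory_index: &[u32]) -> Vec<u32> {
    let mut memory_index = vec![0u32; inverse_memory_index.len()];
    for (mem_pos, &src_field) in inverse_memory_index.iter().enumerate() {
        memory_index[src_field as usize] = mem_pos as u32;
    }
    memory_index
}
```

For example, if field 2 is laid out first, then field 0, then field 1 (`inverse_memory_index = [2, 0, 1]`), inverting yields `memory_index = [1, 2, 0]`; inverting twice returns the original, which is why the non-optimized path can reuse the identity vector directly.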
+
+/// Describes how values of the type are passed by target ABIs,
+/// in terms of categories of C types there are ABI rules for.
+#[derive(Clone, PartialEq, Eq, Hash, Debug)]
+pub enum Abi {
+    Uninhabited,
+    Scalar(Scalar),
+    ScalarPair(Scalar, Scalar),
+    Vector,
+    Aggregate {
         /// If true, the size is exact, otherwise it's only a lower bound.
         sized: bool,
-        align: Align,
-        primitive_align: Align,
-        element_size: Size,
-        count: u64
+        packed: bool
+    }
+}
+
+impl Abi {
+    /// Returns true if the layout corresponds to an unsized type.
+    pub fn is_unsized(&self) -> bool {
+        match *self {
+            Abi::Uninhabited |
+            Abi::Scalar(_) |
+            Abi::ScalarPair(..) |
+            Abi::Vector => false,
+            Abi::Aggregate { sized, .. } => !sized
+        }
+    }
+
+    /// Returns true if the fields of the layout are packed.
+    pub fn is_packed(&self) -> bool {
+        match *self {
+            Abi::Uninhabited |
+            Abi::Scalar(_) |
+            Abi::ScalarPair(..) |
+            Abi::Vector => false,
+            Abi::Aggregate { packed, .. } => packed
+        }
+    }
+}
+
+#[derive(PartialEq, Eq, Hash, Debug)]
+pub enum Variants {
+    /// Single enum variants, structs/tuples, unions, and all non-ADTs.
+    Single {
+        index: usize
     },
 
-    /// TyRawPtr or TyRef with a !Sized pointee.
-    FatPointer {
-        metadata: Primitive,
-        /// If true, the pointer cannot be null.
-        non_zero: bool
+    /// General-case enums: for each case there is a struct, and they all have
+    /// space reserved for the discriminant, and their first field starts
+    /// at a non-0 offset, after where the discriminant would go.
+    Tagged {
+        discr: Scalar,
+        variants: Vec<LayoutDetails>,
     },
 
-    // Remaining variants are all ADTs such as structs, enums or tuples.
-
-    /// C-like enums; basically an integer.
-    CEnum {
-        discr: Integer,
-        signed: bool,
-        non_zero: bool,
-        /// Inclusive discriminant range.
-        /// If min > max, it represents min...u64::MAX followed by 0...max.
-        // FIXME(eddyb) always use the shortest range, e.g. by finding
-        // the largest space between two consecutive discriminants and
-        // taking everything else as the (shortest) discriminant range.
-        min: u64,
-        max: u64
-    },
-
-    /// Single-case enums, and structs/tuples.
-    Univariant {
-        variant: Struct,
-        /// If true, the structure is NonZero.
-        // FIXME(eddyb) use a newtype Layout kind for this.
-        non_zero: bool
-    },
-
-    /// Untagged unions.
-    UntaggedUnion {
-        variants: Union,
-    },
-
-    /// General-case enums: for each case there is a struct, and they
-    /// all start with a field for the discriminant.
-    General {
-        discr: Integer,
-        variants: Vec<Struct>,
-        size: Size,
-        align: Align,
-        primitive_align: Align,
-    },
-
-    /// Two cases distinguished by a nullable pointer: the case with discriminant
-    /// `nndiscr` must have single field which is known to be nonnull due to its type.
-    /// The other case is known to be zero sized. Hence we represent the enum
-    /// as simply a nullable pointer: if not null it indicates the `nndiscr` variant,
-    /// otherwise it indicates the other case.
+    /// Multiple cases distinguished by a niche (values invalid for a type):
+    /// the variant `dataful_variant` contains a niche at an arbitrary
+    /// offset (field 0 of the enum), which for a variant with discriminant
+    /// `d` is set to `(d - niche_variants.start).wrapping_add(niche_start)`.
     ///
-    /// For example, `std::option::Option` instantiated at a safe pointer type
-    /// is represented such that `None` is a null pointer and `Some` is the
-    /// identity function.
-    RawNullablePointer {
-        nndiscr: u64,
-        value: Primitive
-    },
-
-    /// Two cases distinguished by a nullable pointer: the case with discriminant
-    /// `nndiscr` is represented by the struct `nonnull`, where the `discrfield`th
-    /// field is known to be nonnull due to its type; if that field is null, then
-    /// it represents the other case, which is known to be zero sized.
-    StructWrappedNullablePointer {
-        nndiscr: u64,
-        nonnull: Struct,
-        /// N.B. There is a 0 at the start, for LLVM GEP through a pointer.
-        discrfield: FieldPath,
-        /// Like discrfield, but in source order. For debuginfo.
-        discrfield_source: FieldPath
+    /// For example, `Option<(usize, &T)>` is represented such that
+    /// `None` has a null pointer for the second tuple field, and
+    /// `Some` is the identity function (with a non-null reference).
+    NicheFilling {
+        dataful_variant: usize,
+        niche_variants: RangeInclusive<usize>,
+        niche: Scalar,
+        niche_start: u128,
+        variants: Vec<LayoutDetails>,
     }
 }
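The encoding in the `NicheFilling` doc comment is a single wrapping shift of the discriminant into the niche's value space. A sketch of just that formula (the function and the example parameters are illustrative, not compiler code):

```rust
/// Encode discriminant `d` into the niche, per the doc comment:
/// `(d - niche_variants.start).wrapping_add(niche_start)`.
fn encode_niche(d: u128, niche_variants_start: u128, niche_start: u128) -> u128 {
    d.wrapping_sub(niche_variants_start).wrapping_add(niche_start)
}
```

For `Option<&T>`, the niche is the pointer's invalid value 0, `None` is the only non-dataful variant (`niche_variants = 0..=0`, `niche_start = 0`), so `None` encodes to the null pointer. For a `bool`-shaped niche with `valid_range` `0..=1` and `niche_start = 2`, variant 1 would encode to the otherwise-invalid bit pattern 2.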
 
@@ -1125,68 +841,383 @@
     }
 }
 
-impl<'a, 'tcx> Layout {
-    pub fn compute_uncached(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                            param_env: ty::ParamEnv<'tcx>,
-                            ty: Ty<'tcx>)
-                            -> Result<&'tcx Layout, LayoutError<'tcx>> {
-        let success = |layout| Ok(tcx.intern_layout(layout));
-        let dl = &tcx.data_layout;
-        assert!(!ty.has_infer_types());
+#[derive(PartialEq, Eq, Hash, Debug)]
+pub struct LayoutDetails {
+    pub variants: Variants,
+    pub fields: FieldPlacement,
+    pub abi: Abi,
+    pub align: Align,
+    pub size: Size
+}
 
-        let ptr_layout = |pointee: Ty<'tcx>| {
-            let non_zero = !ty.is_unsafe_ptr();
-            let pointee = tcx.normalize_associated_type_in_env(&pointee, param_env);
-            if pointee.is_sized(tcx, param_env, DUMMY_SP) {
-                Ok(Scalar { value: Pointer, non_zero: non_zero })
-            } else {
-                let unsized_part = tcx.struct_tail(pointee);
-                match unsized_part.sty {
-                    ty::TySlice(_) | ty::TyStr => Ok(FatPointer {
-                        metadata: Int(dl.ptr_sized_integer()),
-                        non_zero: non_zero
-                    }),
-                    ty::TyDynamic(..) => Ok(FatPointer { metadata: Pointer, non_zero: non_zero }),
-                    ty::TyForeign(..) => Ok(Scalar { value: Pointer, non_zero: non_zero }),
-                    _ => Err(LayoutError::Unknown(unsized_part)),
-                }
+impl LayoutDetails {
+    fn scalar<C: HasDataLayout>(cx: C, scalar: Scalar) -> Self {
+        let size = scalar.value.size(cx);
+        let align = scalar.value.align(cx);
+        LayoutDetails {
+            variants: Variants::Single { index: 0 },
+            fields: FieldPlacement::Union(0),
+            abi: Abi::Scalar(scalar),
+            size,
+            align,
+        }
+    }
+
+    fn uninhabited(field_count: usize) -> Self {
+        let align = Align::from_bytes(1, 1).unwrap();
+        LayoutDetails {
+            variants: Variants::Single { index: 0 },
+            fields: FieldPlacement::Union(field_count),
+            abi: Abi::Uninhabited,
+            align,
+            size: Size::from_bytes(0)
+        }
+    }
+}
+
+fn layout_raw<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                        query: ty::ParamEnvAnd<'tcx, Ty<'tcx>>)
+                        -> Result<&'tcx LayoutDetails, LayoutError<'tcx>>
+{
+    let (param_env, ty) = query.into_parts();
+
+    let rec_limit = tcx.sess.recursion_limit.get();
+    let depth = tcx.layout_depth.get();
+    if depth > rec_limit {
+        tcx.sess.fatal(
+            &format!("overflow representing the type `{}`", ty));
+    }
+
+    tcx.layout_depth.set(depth+1);
+    let layout = LayoutDetails::compute_uncached(tcx, param_env, ty);
+    tcx.layout_depth.set(depth);
+
+    layout
+}
+
+pub fn provide(providers: &mut ty::maps::Providers) {
+    *providers = ty::maps::Providers {
+        layout_raw,
+        ..*providers
+    };
+}
+
+impl<'a, 'tcx> LayoutDetails {
+    fn compute_uncached(tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                        param_env: ty::ParamEnv<'tcx>,
+                        ty: Ty<'tcx>)
+                        -> Result<&'tcx Self, LayoutError<'tcx>> {
+        let cx = (tcx, param_env);
+        let dl = cx.data_layout();
+        let scalar_unit = |value: Primitive| {
+            let bits = value.size(dl).bits();
+            assert!(bits <= 128);
+            Scalar {
+                value,
+                valid_range: 0..=(!0 >> (128 - bits))
+            }
+        };
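The `valid_range` produced by `scalar_unit` spans every bit pattern of the primitive: `0..=(!0 >> (128 - bits))`. A standalone sketch of that mask computation (hypothetical helper name):

```rust
/// Upper bound of the full valid range for a primitive of `bits` width,
/// i.e. a u128 with the low `bits` bits set.
fn full_range_max(bits: u32) -> u128 {
    assert!(bits >= 1 && bits <= 128);
    !0u128 >> (128 - bits)
}
```

An 8-bit primitive gets `0..=255`, a 1-bit one `0..=1`, and a 128-bit one the whole `u128` range; restricting this range afterwards (as `TyBool`, `TyChar`, and non-null pointers do below) is what creates niches.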
+        let scalar = |value: Primitive| {
+            tcx.intern_layout(LayoutDetails::scalar(cx, scalar_unit(value)))
+        };
+        let scalar_pair = |a: Scalar, b: Scalar| {
+            let align = a.value.align(dl).max(b.value.align(dl)).max(dl.aggregate_align);
+            let b_offset = a.value.size(dl).abi_align(b.value.align(dl));
+            let size = (b_offset + b.value.size(dl)).abi_align(align);
+            LayoutDetails {
+                variants: Variants::Single { index: 0 },
+                fields: FieldPlacement::Arbitrary {
+                    offsets: vec![Size::from_bytes(0), b_offset],
+                    memory_index: vec![0, 1]
+                },
+                abi: Abi::ScalarPair(a, b),
+                align,
+                size
             }
         };
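The `scalar_pair` arithmetic above can be sketched in isolation: `abi_align` rounds an offset up to a multiple of an alignment, the second scalar is placed at the first aligned offset past the first, and the total is rounded up to the pair's alignment. This simplified version works on raw byte counts and ignores `aggregate_align` (so the names and signature are illustrative only):

```rust
/// Round `offset` up to the next multiple of `align` (assumes align >= 1).
fn align_up(offset: u64, align: u64) -> u64 {
    (offset + align - 1) / align * align
}

/// Returns (b_offset, total size) for a pair of scalars, mirroring the
/// b_offset/size computation in `scalar_pair`, minus `aggregate_align`.
fn scalar_pair_size(a_size: u64, a_align: u64, b_size: u64, b_align: u64) -> (u64, u64) {
    let b_offset = align_up(a_size, b_align);
    let align = a_align.max(b_align);
    let size = align_up(b_offset + b_size, align);
    (b_offset, size)
}
```

For a `(u8, u32)`-shaped pair this gives `b_offset = 4` and `size = 8`; for a fat pointer on a 64-bit target (two 8-byte scalars), `b_offset = 8` and `size = 16`, matching the "(ptr, meta) tuple" layout used for unsized pointees below.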
 
-        let layout = match ty.sty {
-            // Basic scalars.
-            ty::TyBool => Scalar { value: Int(I1), non_zero: false },
-            ty::TyChar => Scalar { value: Int(I32), non_zero: false },
-            ty::TyInt(ity) => {
-                Scalar {
-                    value: Int(Integer::from_attr(dl, attr::SignedInt(ity))),
-                    non_zero: false
+        #[derive(Copy, Clone, Debug)]
+        enum StructKind {
+            /// A tuple, closure, or univariant which cannot be coerced to unsized.
+            AlwaysSized,
+            /// A univariant, the last field of which may be coerced to unsized.
+            MaybeUnsized,
+            /// A univariant, but part of an enum.
+            EnumVariant(Integer),
+        }
+        let univariant_uninterned = |fields: &[TyLayout], repr: &ReprOptions, kind| {
+            let packed = repr.packed();
+            if packed && repr.align > 0 {
+                bug!("struct cannot be packed and aligned");
+            }
+
+            let mut align = if packed {
+                dl.i8_align
+            } else {
+                dl.aggregate_align
+            };
+
+            let mut sized = true;
+            let mut offsets = vec![Size::from_bytes(0); fields.len()];
+            let mut inverse_memory_index: Vec<u32> = (0..fields.len() as u32).collect();
+
+            // Anything with repr(C) or repr(packed) doesn't optimize.
+            let optimize = match kind {
+                StructKind::AlwaysSized |
+                StructKind::MaybeUnsized |
+                StructKind::EnumVariant(I8) => {
+                    (repr.flags & ReprFlags::IS_UNOPTIMISABLE).is_empty()
                 }
+                StructKind::EnumVariant(_) => false
+            };
+            if optimize {
+                let end = if let StructKind::MaybeUnsized = kind {
+                    fields.len() - 1
+                } else {
+                    fields.len()
+                };
+                let optimizing = &mut inverse_memory_index[..end];
+                match kind {
+                    StructKind::AlwaysSized |
+                    StructKind::MaybeUnsized => {
+                        optimizing.sort_by_key(|&x| {
+                            // Place ZSTs first to avoid "interesting offsets",
+                            // especially with only one or two non-ZST fields.
+                            let f = &fields[x as usize];
+                            (!f.is_zst(), cmp::Reverse(f.align.abi()))
+                        })
+                    }
+                    StructKind::EnumVariant(_) => {
+                        optimizing.sort_by_key(|&x| fields[x as usize].align.abi());
+                    }
+                }
+            }
+
+            // inverse_memory_index holds field indices by increasing memory offset.
+            // That is, if field 5 has offset 0, the first element of inverse_memory_index is 5.
+            // We now write field offsets to the corresponding offset slot;
+            // field 5 with offset 0 puts 0 in offsets[5].
+            // At the bottom of this function, we use inverse_memory_index to produce memory_index.
+
+            let mut offset = Size::from_bytes(0);
+
+            if let StructKind::EnumVariant(discr) = kind {
+                offset = discr.size();
+                if !packed {
+                    let discr_align = discr.align(dl);
+                    align = align.max(discr_align);
+                }
+            }
+
+            for &i in &inverse_memory_index {
+                let field = fields[i as usize];
+                if !sized {
+                    bug!("univariant: field #{} of `{}` comes after unsized field",
+                        offsets.len(), ty);
+                }
+
+                if field.abi == Abi::Uninhabited {
+                    return Ok(LayoutDetails::uninhabited(fields.len()));
+                }
+
+                if field.is_unsized() {
+                    sized = false;
+                }
+
+                // Invariant: offset < dl.obj_size_bound() <= 1<<61
+                if !packed {
+                    offset = offset.abi_align(field.align);
+                    align = align.max(field.align);
+                }
+
+                debug!("univariant offset: {:?} field: {:#?}", offset, field);
+                offsets[i as usize] = offset;
+
+                offset = offset.checked_add(field.size, dl)
+                    .ok_or(LayoutError::SizeOverflow(ty))?;
+            }
+
+            if repr.align > 0 {
+                let repr_align = repr.align as u64;
+                align = align.max(Align::from_bytes(repr_align, repr_align).unwrap());
+                debug!("univariant repr_align: {:?}", repr_align);
+            }
+
+            debug!("univariant min_size: {:?}", offset);
+            let min_size = offset;
+
+            // As stated above, inverse_memory_index holds field indices by increasing offset.
+            // This makes it an already-sorted view of the offsets vec.
+            // To invert it, consider:
+            // If field 5 has offset 0, inverse_memory_index[0] is 5, and memory_index[5] should be 0.
+            // Field 5 would be the first element, so memory_index is i:
+            // Note: if we didn't optimize, it's already right.
+
+            let mut memory_index;
+            if optimize {
+                memory_index = vec![0; inverse_memory_index.len()];
+
+                for i in 0..inverse_memory_index.len() {
+                    memory_index[inverse_memory_index[i] as usize] = i as u32;
+                }
+            } else {
+                memory_index = inverse_memory_index;
+            }
+
+            let size = min_size.abi_align(align);
+            let mut abi = Abi::Aggregate {
+                sized,
+                packed
+            };
+
+            // Unpack newtype ABIs and find scalar pairs.
+            if sized && size.bytes() > 0 {
+                // All other fields must be ZSTs, and we need them to all start at 0.
+                let mut zst_offsets =
+                    offsets.iter().enumerate().filter(|&(i, _)| fields[i].is_zst());
+                if zst_offsets.all(|(_, o)| o.bytes() == 0) {
+                    let mut non_zst_fields =
+                        fields.iter().enumerate().filter(|&(_, f)| !f.is_zst());
+
+                    match (non_zst_fields.next(), non_zst_fields.next(), non_zst_fields.next()) {
+                        // We have exactly one non-ZST field.
+                        (Some((i, field)), None, None) => {
+                            // Field fills the struct and it has a scalar or scalar pair ABI.
+                            if offsets[i].bytes() == 0 && size == field.size {
+                                match field.abi {
+                                    // For plain scalars we can't unpack newtypes
+                                    // for `#[repr(C)]`, as that affects C ABIs.
+                                    Abi::Scalar(_) if optimize => {
+                                        abi = field.abi.clone();
+                                    }
+                                    // But scalar pairs are Rust-specific and get
+                                    // treated as aggregates by C ABIs anyway.
+                                    Abi::ScalarPair(..) => {
+                                        abi = field.abi.clone();
+                                    }
+                                    _ => {}
+                                }
+                            }
+                        }
+
+                        // Two non-ZST fields, and they're both scalars.
+                        (Some((i, &TyLayout {
+                            details: &LayoutDetails { abi: Abi::Scalar(ref a), .. }, ..
+                        })), Some((j, &TyLayout {
+                            details: &LayoutDetails { abi: Abi::Scalar(ref b), .. }, ..
+                        })), None) => {
+                            // Order by the memory placement, not source order.
+                            let ((i, a), (j, b)) = if offsets[i] < offsets[j] {
+                                ((i, a), (j, b))
+                            } else {
+                                ((j, b), (i, a))
+                            };
+                            let pair = scalar_pair(a.clone(), b.clone());
+                            let pair_offsets = match pair.fields {
+                                FieldPlacement::Arbitrary {
+                                    ref offsets,
+                                    ref memory_index
+                                } => {
+                                    assert_eq!(memory_index, &[0, 1]);
+                                    offsets
+                                }
+                                _ => bug!()
+                            };
+                            if offsets[i] == pair_offsets[0] &&
+                               offsets[j] == pair_offsets[1] &&
+                               align == pair.align &&
+                               size == pair.size {
+                                // We can use `ScalarPair` only when it matches our
+                                // already computed layout (including `#[repr(C)]`).
+                                abi = pair.abi;
+                            }
+                        }
+
+                        _ => {}
+                    }
+                }
+            }
+
+            Ok(LayoutDetails {
+                variants: Variants::Single { index: 0 },
+                fields: FieldPlacement::Arbitrary {
+                    offsets,
+                    memory_index
+                },
+                abi,
+                align,
+                size
+            })
+        };
+        let univariant = |fields: &[TyLayout], repr: &ReprOptions, kind| {
+            Ok(tcx.intern_layout(univariant_uninterned(fields, repr, kind)?))
+        };
+        assert!(!ty.has_infer_types());
+
+        Ok(match ty.sty {
+            // Basic scalars.
+            ty::TyBool => {
+                tcx.intern_layout(LayoutDetails::scalar(cx, Scalar {
+                    value: Int(I8, false),
+                    valid_range: 0..=1
+                }))
+            }
+            ty::TyChar => {
+                tcx.intern_layout(LayoutDetails::scalar(cx, Scalar {
+                    value: Int(I32, false),
+                    valid_range: 0..=0x10FFFF
+                }))
+            }
+            ty::TyInt(ity) => {
+                scalar(Int(Integer::from_attr(dl, attr::SignedInt(ity)), true))
             }
             ty::TyUint(ity) => {
-                Scalar {
-                    value: Int(Integer::from_attr(dl, attr::UnsignedInt(ity))),
-                    non_zero: false
-                }
+                scalar(Int(Integer::from_attr(dl, attr::UnsignedInt(ity)), false))
             }
-            ty::TyFloat(FloatTy::F32) => Scalar { value: F32, non_zero: false },
-            ty::TyFloat(FloatTy::F64) => Scalar { value: F64, non_zero: false },
-            ty::TyFnPtr(_) => Scalar { value: Pointer, non_zero: true },
+            ty::TyFloat(FloatTy::F32) => scalar(F32),
+            ty::TyFloat(FloatTy::F64) => scalar(F64),
+            ty::TyFnPtr(_) => {
+                let mut ptr = scalar_unit(Pointer);
+                ptr.valid_range.start = 1;
+                tcx.intern_layout(LayoutDetails::scalar(cx, ptr))
+            }
 
             // The never type.
-            ty::TyNever => Univariant {
-                variant: Struct::new(dl, &vec![], &ReprOptions::default(),
-                  StructKind::AlwaysSizedUnivariant, ty)?,
-                non_zero: false
-            },
+            ty::TyNever => {
+                tcx.intern_layout(LayoutDetails::uninhabited(0))
+            }
 
             // Potentially-fat pointers.
             ty::TyRef(_, ty::TypeAndMut { ty: pointee, .. }) |
             ty::TyRawPtr(ty::TypeAndMut { ty: pointee, .. }) => {
-                ptr_layout(pointee)?
-            }
-            ty::TyAdt(def, _) if def.is_box() => {
-                ptr_layout(ty.boxed_ty())?
+                let mut data_ptr = scalar_unit(Pointer);
+                if !ty.is_unsafe_ptr() {
+                    data_ptr.valid_range.start = 1;
+                }
+
+                let pointee = tcx.normalize_associated_type_in_env(&pointee, param_env);
+                if pointee.is_sized(tcx, param_env, DUMMY_SP) {
+                    return Ok(tcx.intern_layout(LayoutDetails::scalar(cx, data_ptr)));
+                }
+
+                let unsized_part = tcx.struct_tail(pointee);
+                let metadata = match unsized_part.sty {
+                    ty::TyForeign(..) => {
+                        return Ok(tcx.intern_layout(LayoutDetails::scalar(cx, data_ptr)));
+                    }
+                    ty::TySlice(_) | ty::TyStr => {
+                        scalar_unit(Int(dl.ptr_sized_integer(), false))
+                    }
+                    ty::TyDynamic(..) => {
+                        let mut vtable = scalar_unit(Pointer);
+                        vtable.valid_range.start = 1;
+                        vtable
+                    }
+                    _ => return Err(LayoutError::Unknown(unsized_part))
+                };
+
+                // Effectively a (ptr, meta) tuple.
+                tcx.intern_layout(scalar_pair(data_ptr, metadata))
             }
 
             // Arrays and slices.
@@ -1198,284 +1229,350 @@
                     }
                 }
 
-                let element = element.layout(tcx, param_env)?;
-                let element_size = element.size(dl);
+                let element = cx.layout_of(element)?;
                 let count = count.val.to_const_int().unwrap().to_u64().unwrap();
-                if element_size.checked_mul(count, dl).is_none() {
-                    return Err(LayoutError::SizeOverflow(ty));
-                }
-                Array {
-                    sized: true,
-                    align: element.align(dl),
-                    primitive_align: element.primitive_align(dl),
-                    element_size,
-                    count,
-                }
+                let size = element.size.checked_mul(count, dl)
+                    .ok_or(LayoutError::SizeOverflow(ty))?;
+
+                tcx.intern_layout(LayoutDetails {
+                    variants: Variants::Single { index: 0 },
+                    fields: FieldPlacement::Array {
+                        stride: element.size,
+                        count
+                    },
+                    abi: Abi::Aggregate {
+                        sized: true,
+                        packed: false
+                    },
+                    align: element.align,
+                    size
+                })
             }
             ty::TySlice(element) => {
-                let element = element.layout(tcx, param_env)?;
-                Array {
-                    sized: false,
-                    align: element.align(dl),
-                    primitive_align: element.primitive_align(dl),
-                    element_size: element.size(dl),
-                    count: 0
-                }
+                let element = cx.layout_of(element)?;
+                tcx.intern_layout(LayoutDetails {
+                    variants: Variants::Single { index: 0 },
+                    fields: FieldPlacement::Array {
+                        stride: element.size,
+                        count: 0
+                    },
+                    abi: Abi::Aggregate {
+                        sized: false,
+                        packed: false
+                    },
+                    align: element.align,
+                    size: Size::from_bytes(0)
+                })
             }
             ty::TyStr => {
-                Array {
-                    sized: false,
+                tcx.intern_layout(LayoutDetails {
+                    variants: Variants::Single { index: 0 },
+                    fields: FieldPlacement::Array {
+                        stride: Size::from_bytes(1),
+                        count: 0
+                    },
+                    abi: Abi::Aggregate {
+                        sized: false,
+                        packed: false
+                    },
                     align: dl.i8_align,
-                    primitive_align: dl.i8_align,
-                    element_size: Size::from_bytes(1),
-                    count: 0
-                }
+                    size: Size::from_bytes(0)
+                })
             }
 
             // Odd unit types.
             ty::TyFnDef(..) => {
-                Univariant {
-                    variant: Struct::new(dl, &vec![],
-                      &ReprOptions::default(), StructKind::AlwaysSizedUnivariant, ty)?,
-                    non_zero: false
-                }
+                univariant(&[], &ReprOptions::default(), StructKind::AlwaysSized)?
             }
             ty::TyDynamic(..) | ty::TyForeign(..) => {
-                let mut unit = Struct::new(dl, &vec![], &ReprOptions::default(),
-                  StructKind::AlwaysSizedUnivariant, ty)?;
-                unit.sized = false;
-                Univariant { variant: unit, non_zero: false }
+                let mut unit = univariant_uninterned(&[], &ReprOptions::default(),
+                  StructKind::AlwaysSized)?;
+                match unit.abi {
+                    Abi::Aggregate { ref mut sized, .. } => *sized = false,
+                    _ => bug!()
+                }
+                tcx.intern_layout(unit)
             }
 
             // Tuples, generators and closures.
             ty::TyGenerator(def_id, ref substs, _) => {
                 let tys = substs.field_tys(def_id, tcx);
-                let st = Struct::new(dl,
-                    &tys.map(|ty| ty.layout(tcx, param_env))
-                      .collect::<Result<Vec<_>, _>>()?,
+                univariant(&tys.map(|ty| cx.layout_of(ty)).collect::<Result<Vec<_>, _>>()?,
                     &ReprOptions::default(),
-                    StructKind::AlwaysSizedUnivariant, ty)?;
-                Univariant { variant: st, non_zero: false }
+                    StructKind::AlwaysSized)?
             }
 
             ty::TyClosure(def_id, ref substs) => {
                 let tys = substs.upvar_tys(def_id, tcx);
-                let st = Struct::new(dl,
-                    &tys.map(|ty| ty.layout(tcx, param_env))
-                      .collect::<Result<Vec<_>, _>>()?,
+                univariant(&tys.map(|ty| cx.layout_of(ty)).collect::<Result<Vec<_>, _>>()?,
                     &ReprOptions::default(),
-                    StructKind::AlwaysSizedUnivariant, ty)?;
-                Univariant { variant: st, non_zero: false }
+                    StructKind::AlwaysSized)?
             }
 
             ty::TyTuple(tys, _) => {
                 let kind = if tys.len() == 0 {
-                    StructKind::AlwaysSizedUnivariant
+                    StructKind::AlwaysSized
                 } else {
-                    StructKind::MaybeUnsizedUnivariant
+                    StructKind::MaybeUnsized
                 };
 
-                let st = Struct::new(dl,
-                    &tys.iter().map(|ty| ty.layout(tcx, param_env))
-                      .collect::<Result<Vec<_>, _>>()?,
-                    &ReprOptions::default(), kind, ty)?;
-                Univariant { variant: st, non_zero: false }
+                univariant(&tys.iter().map(|ty| cx.layout_of(ty)).collect::<Result<Vec<_>, _>>()?,
+                    &ReprOptions::default(), kind)?
             }
 
             // SIMD vector types.
             ty::TyAdt(def, ..) if def.repr.simd() => {
-                let element = ty.simd_type(tcx);
-                match *element.layout(tcx, param_env)? {
-                    Scalar { value, .. } => {
-                        return success(Vector {
-                            element: value,
-                            count: ty.simd_size(tcx) as u64
-                        });
-                    }
+                let count = ty.simd_size(tcx) as u64;
+                let element = cx.layout_of(ty.simd_type(tcx))?;
+                match element.abi {
+                    Abi::Scalar(_) => {}
                     _ => {
                         tcx.sess.fatal(&format!("monomorphising SIMD type `{}` with \
                                                 a non-machine element type `{}`",
-                                                ty, element));
+                                                ty, element.ty));
                     }
                 }
+                let size = element.size.checked_mul(count, dl)
+                    .ok_or(LayoutError::SizeOverflow(ty))?;
+                let align = dl.vector_align(size);
+                let size = size.abi_align(align);
+
+                tcx.intern_layout(LayoutDetails {
+                    variants: Variants::Single { index: 0 },
+                    fields: FieldPlacement::Array {
+                        stride: element.size,
+                        count
+                    },
+                    abi: Abi::Vector,
+                    size,
+                    align,
+                })
             }
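The SIMD arm above computes the vector's size as `element_size * count` (failing on overflow), then rounds the size up to the vector alignment. A standalone sketch of that arithmetic, under the simplifying assumption that vector alignment is the size rounded up to a power of two (the real `TargetDataLayout::vector_align` consults a per-target table first and only falls back to this rule):

```rust
// Illustrative sketch, not the compiler's API: size/align of a SIMD vector
// as computed in the hunk above.
fn vector_align(size_bytes: u64) -> u64 {
    // Fallback rule: natural alignment, i.e. size rounded up to a power of 2.
    size_bytes.next_power_of_two()
}

fn simd_layout(element_size: u64, count: u64) -> (u64, u64) {
    let size = element_size.checked_mul(count).expect("SizeOverflow");
    let align = vector_align(size);
    // Equivalent of `size.abi_align(align)`: round size up to the alignment.
    let size = (size + align - 1) / align * align;
    (size, align)
}

fn main() {
    // A 4 x f32 vector: 16 bytes, 16-byte aligned.
    assert_eq!(simd_layout(4, 4), (16, 16));
    // A 3 x f32 vector: 12 bytes rounds up to the 16-byte vector alignment.
    assert_eq!(simd_layout(4, 3), (16, 16));
}
```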
 
             // ADTs.
             ty::TyAdt(def, substs) => {
-                if def.variants.is_empty() {
-                    // Uninhabitable; represent as unit
-                    // (Typechecking will reject discriminant-sizing attrs.)
+                // Cache the field layouts.
+                let variants = def.variants.iter().map(|v| {
+                    v.fields.iter().map(|field| {
+                        cx.layout_of(field.ty(tcx, substs))
+                    }).collect::<Result<Vec<_>, _>>()
+                }).collect::<Result<Vec<_>, _>>()?;
 
-                    return success(Univariant {
-                        variant: Struct::new(dl, &vec![],
-                          &def.repr, StructKind::AlwaysSizedUnivariant, ty)?,
-                        non_zero: false
+                let (inh_first, inh_second) = {
+                    let mut inh_variants = (0..variants.len()).filter(|&v| {
+                        variants[v].iter().all(|f| f.abi != Abi::Uninhabited)
                     });
+                    (inh_variants.next(), inh_variants.next())
+                };
+                if inh_first.is_none() {
+                    // Uninhabited because it has no variants, or only uninhabited ones.
+                    return Ok(tcx.intern_layout(LayoutDetails::uninhabited(0)));
                 }
 
-                if def.is_enum() && def.variants.iter().all(|v| v.fields.is_empty()) {
-                    // All bodies empty -> intlike
-                    let (mut min, mut max, mut non_zero) = (i64::max_value(),
-                                                            i64::min_value(),
-                                                            true);
-                    for discr in def.discriminants(tcx) {
-                        let x = discr.to_u128_unchecked() as i64;
-                        if x == 0 { non_zero = false; }
-                        if x < min { min = x; }
-                        if x > max { max = x; }
+                if def.is_union() {
+                    let packed = def.repr.packed();
+                    if packed && def.repr.align > 0 {
+                        bug!("Union cannot be packed and aligned");
                     }
 
-                    // FIXME: should handle i128? signed-value based impl is weird and hard to
-                    // grok.
-                    let (discr, signed) = Integer::repr_discr(tcx, ty, &def.repr, min, max);
-                    return success(CEnum {
-                        discr,
-                        signed,
-                        non_zero,
-                        // FIXME: should be u128?
-                        min: min as u64,
-                        max: max as u64
-                    });
+                    let mut align = if def.repr.packed() {
+                        dl.i8_align
+                    } else {
+                        dl.aggregate_align
+                    };
+
+                    if def.repr.align > 0 {
+                        let repr_align = def.repr.align as u64;
+                        align = align.max(
+                            Align::from_bytes(repr_align, repr_align).unwrap());
+                    }
+
+                    let mut size = Size::from_bytes(0);
+                    for field in &variants[0] {
+                        assert!(!field.is_unsized());
+
+                        if !packed {
+                            align = align.max(field.align);
+                        }
+                        size = cmp::max(size, field.size);
+                    }
+
+                    return Ok(tcx.intern_layout(LayoutDetails {
+                        variants: Variants::Single { index: 0 },
+                        fields: FieldPlacement::Union(variants[0].len()),
+                        abi: Abi::Aggregate {
+                            sized: true,
+                            packed
+                        },
+                        align,
+                        size: size.abi_align(align)
+                    }));
                 }
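The union path above reduces to a simple rule: size is the maximum field size rounded up to the maximum field alignment, and `#[repr(packed)]` caps the alignment at 1. A minimal sketch of that rule (hypothetical helper, not the compiler's types), cross-checked against a real union:

```rust
// Sketch of the union layout rule from the hunk above.
// Each field is (size, align) in bytes.
fn union_layout(fields: &[(u64, u64)], packed: bool) -> (u64, u64) {
    let mut align = 1u64;
    let mut size = 0u64;
    for &(field_size, field_align) in fields {
        if !packed {
            align = align.max(field_align);
        }
        // Fields all start at offset 0, so the union is as big as its
        // biggest field.
        size = size.max(field_size);
    }
    // Equivalent of `size.abi_align(align)`.
    ((size + align - 1) / align * align, align)
}

fn main() {
    // union { a: u8, b: u32 } -> size 4, align 4.
    assert_eq!(union_layout(&[(1, 1), (4, 4)], false), (4, 4));
    // The same union with #[repr(packed)] -> size 4, align 1.
    assert_eq!(union_layout(&[(1, 1), (4, 4)], true), (4, 1));

    // The real compiler agrees on the unpacked case:
    #[allow(dead_code)]
    union U { a: u8, b: u32 }
    assert_eq!(std::mem::size_of::<U>(), 4);
}
```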
 
-                if !def.is_enum() || (def.variants.len() == 1 &&
-                                      !def.repr.inhibit_enum_layout_opt()) {
-                    // Struct, or union, or univariant enum equivalent to a struct.
+                let is_struct = !def.is_enum() ||
+                    // Only one variant is inhabited.
+                    (inh_second.is_none() &&
+                    // Representation optimizations are allowed.
+                     !def.repr.inhibit_enum_layout_opt() &&
+                    // Inhabited variant either has data ...
+                     (!variants[inh_first.unwrap()].is_empty() ||
+                    // ... or there are other, uninhabited, variants.
+                      variants.len() > 1));
+                if is_struct {
+                    // Struct, or univariant enum equivalent to a struct.
                     // (Typechecking will reject discriminant-sizing attrs.)
 
-                    let kind = if def.is_enum() || def.variants[0].fields.len() == 0{
-                        StructKind::AlwaysSizedUnivariant
+                    let v = inh_first.unwrap();
+                    let kind = if def.is_enum() || variants[v].len() == 0 {
+                        StructKind::AlwaysSized
                     } else {
                         let param_env = tcx.param_env(def.did);
-                        let fields = &def.variants[0].fields;
-                        let last_field = &fields[fields.len()-1];
+                        let last_field = def.variants[v].fields.last().unwrap();
                         let always_sized = tcx.type_of(last_field.did)
                           .is_sized(tcx, param_env, DUMMY_SP);
-                        if !always_sized { StructKind::MaybeUnsizedUnivariant }
-                        else { StructKind::AlwaysSizedUnivariant }
+                        if !always_sized { StructKind::MaybeUnsized }
+                        else { StructKind::AlwaysSized }
                     };
 
-                    let fields = def.variants[0].fields.iter().map(|field| {
-                        field.ty(tcx, substs).layout(tcx, param_env)
-                    }).collect::<Result<Vec<_>, _>>()?;
-                    let layout = if def.is_union() {
-                        let mut un = Union::new(dl, &def.repr);
-                        un.extend(dl, fields.iter().map(|&f| Ok(f)), ty)?;
-                        UntaggedUnion { variants: un }
-                    } else {
-                        let st = Struct::new(dl, &fields, &def.repr,
-                          kind, ty)?;
-                        let non_zero = Some(def.did) == tcx.lang_items().non_zero();
-                        Univariant { variant: st, non_zero: non_zero }
-                    };
-                    return success(layout);
-                }
-
-                // Since there's at least one
-                // non-empty body, explicit discriminants should have
-                // been rejected by a checker before this point.
-                for (i, v) in def.variants.iter().enumerate() {
-                    if v.discr != ty::VariantDiscr::Relative(i) {
-                        bug!("non-C-like enum {} with specified discriminants",
-                            tcx.item_path_str(def.did));
-                    }
-                }
-
-                // Cache the substituted and normalized variant field types.
-                let variants = def.variants.iter().map(|v| {
-                    v.fields.iter().map(|field| field.ty(tcx, substs)).collect::<Vec<_>>()
-                }).collect::<Vec<_>>();
-
-                if variants.len() == 2 && !def.repr.inhibit_enum_layout_opt() {
-                    // Nullable pointer optimization
-                    for discr in 0..2 {
-                        let other_fields = variants[1 - discr].iter().map(|ty| {
-                            ty.layout(tcx, param_env)
-                        });
-                        if !Struct::would_be_zero_sized(dl, other_fields)? {
-                            continue;
+                    let mut st = univariant_uninterned(&variants[v], &def.repr, kind)?;
+                    st.variants = Variants::Single { index: v };
+                    // Exclude 0 from the range of a newtype ABI NonZero<T>.
+                    if Some(def.did) == cx.tcx().lang_items().non_zero() {
+                        match st.abi {
+                            Abi::Scalar(ref mut scalar) |
+                            Abi::ScalarPair(ref mut scalar, _) => {
+                                if scalar.valid_range.start == 0 {
+                                    scalar.valid_range.start = 1;
+                                }
+                            }
+                            _ => {}
                         }
-                        let paths = Struct::non_zero_field_paths(tcx,
-                                                                 param_env,
-                                                                 variants[discr].iter().cloned(),
-                                                                 None)?;
-                        let (mut path, mut path_source) = if let Some(p) = paths { p }
-                          else { continue };
+                    }
+                    return Ok(tcx.intern_layout(st));
+                }
 
-                        // FIXME(eddyb) should take advantage of a newtype.
-                        if path == &[0] && variants[discr].len() == 1 {
-                            let value = match *variants[discr][0].layout(tcx, param_env)? {
-                                Scalar { value, .. } => value,
-                                CEnum { discr, .. } => Int(discr),
-                                _ => bug!("Layout::compute: `{}`'s non-zero \
-                                           `{}` field not scalar?!",
-                                           ty, variants[discr][0])
+                let no_explicit_discriminants = def.variants.iter().enumerate()
+                    .all(|(i, v)| v.discr == ty::VariantDiscr::Relative(i));
+
+                // Niche-filling enum optimization.
+                if !def.repr.inhibit_enum_layout_opt() && no_explicit_discriminants {
+                    let mut dataful_variant = None;
+                    let mut niche_variants = usize::max_value()..=0;
+
+                    // Find one non-ZST variant.
+                    'variants: for (v, fields) in variants.iter().enumerate() {
+                        for f in fields {
+                            if f.abi == Abi::Uninhabited {
+                                continue 'variants;
+                            }
+                            if !f.is_zst() {
+                                if dataful_variant.is_none() {
+                                    dataful_variant = Some(v);
+                                    continue 'variants;
+                                } else {
+                                    dataful_variant = None;
+                                    break 'variants;
+                                }
+                            }
+                        }
+                        if niche_variants.start > v {
+                            niche_variants.start = v;
+                        }
+                        niche_variants.end = v;
+                    }
+
+                    if niche_variants.start > niche_variants.end {
+                        dataful_variant = None;
+                    }
+
+                    if let Some(i) = dataful_variant {
+                        let count = (niche_variants.end - niche_variants.start + 1) as u128;
+                        for (field_index, field) in variants[i].iter().enumerate() {
+                            let (offset, niche, niche_start) =
+                                match field.find_niche(cx, count)? {
+                                    Some(niche) => niche,
+                                    None => continue
+                                };
+                            let st = variants.iter().enumerate().map(|(j, v)| {
+                                let mut st = univariant_uninterned(v,
+                                    &def.repr, StructKind::AlwaysSized)?;
+                                st.variants = Variants::Single { index: j };
+                                Ok(st)
+                            }).collect::<Result<Vec<_>, _>>()?;
+
+                            let offset = st[i].fields.offset(field_index) + offset;
+                            let LayoutDetails { size, mut align, .. } = st[i];
+
+                            let mut niche_align = niche.value.align(dl);
+                            let abi = if offset.bytes() == 0 && niche.value.size(dl) == size {
+                                Abi::Scalar(niche.clone())
+                            } else {
+                                let mut packed = st[i].abi.is_packed();
+                                if offset.abi_align(niche_align) != offset {
+                                    packed = true;
+                                    niche_align = dl.i8_align;
+                                }
+                                Abi::Aggregate {
+                                    sized: true,
+                                    packed
+                                }
                             };
-                            return success(RawNullablePointer {
-                                nndiscr: discr as u64,
-                                value,
-                            });
+                            align = align.max(niche_align);
+
+                            return Ok(tcx.intern_layout(LayoutDetails {
+                                variants: Variants::NicheFilling {
+                                    dataful_variant: i,
+                                    niche_variants,
+                                    niche,
+                                    niche_start,
+                                    variants: st,
+                                },
+                                fields: FieldPlacement::Arbitrary {
+                                    offsets: vec![offset],
+                                    memory_index: vec![0]
+                                },
+                                abi,
+                                size,
+                                align,
+                            }));
                         }
-
-                        let st = Struct::new(dl,
-                            &variants[discr].iter().map(|ty| ty.layout(tcx, param_env))
-                              .collect::<Result<Vec<_>, _>>()?,
-                            &def.repr, StructKind::AlwaysSizedUnivariant, ty)?;
-
-                        // We have to fix the last element of path here.
-                        let mut i = *path.last().unwrap();
-                        i = st.memory_index[i as usize];
-                        *path.last_mut().unwrap() = i;
-                        path.push(0); // For GEP through a pointer.
-                        path.reverse();
-                        path_source.push(0);
-                        path_source.reverse();
-
-                        return success(StructWrappedNullablePointer {
-                            nndiscr: discr as u64,
-                            nonnull: st,
-                            discrfield: path,
-                            discrfield_source: path_source
-                        });
                     }
                 }
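The niche-filling path above looks for exactly one dataful variant, then encodes the remaining (unit) variants as spare bit patterns of one of its fields instead of adding a tag. A quick check of what that buys, using `bool` (valid range 0..=1, leaving 254 spare values) as the niche field; sizes below are what compilers with this optimization produce:

```rust
use std::mem::size_of;

#[allow(dead_code)]
enum E {
    A(bool), // the single dataful variant
    B,       // encoded as a niche value of the bool byte
    C,       // likewise
}

fn main() {
    // No separate discriminant byte: the whole enum fits in the bool's byte.
    assert_eq!(size_of::<E>(), 1);
    // Option<bool> is the two-variant special case of the same optimization.
    assert_eq!(size_of::<Option<bool>>(), 1);
}
```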
 
-                // The general case.
-                let discr_max = (variants.len() - 1) as i64;
-                assert!(discr_max >= 0);
-                let (min_ity, _) = Integer::repr_discr(tcx, ty, &def.repr, 0, discr_max);
+                let (mut min, mut max) = (i128::max_value(), i128::min_value());
+                for (i, discr) in def.discriminants(tcx).enumerate() {
+                    if variants[i].iter().any(|f| f.abi == Abi::Uninhabited) {
+                        continue;
+                    }
+                    let x = discr.to_u128_unchecked() as i128;
+                    if x < min { min = x; }
+                    if x > max { max = x; }
+                }
+                assert!(min <= max, "discriminant range is {}...{}", min, max);
+                let (min_ity, signed) = Integer::repr_discr(tcx, ty, &def.repr, min, max);
+
                 let mut align = dl.aggregate_align;
-                let mut primitive_align = dl.aggregate_align;
                 let mut size = Size::from_bytes(0);
 
                 // We're interested in the smallest alignment, so start large.
                 let mut start_align = Align::from_bytes(256, 256).unwrap();
+                assert_eq!(Integer::for_abi_align(dl, start_align), None);
 
-                // Create the set of structs that represent each variant
-                // Use the minimum integer type we figured out above
-                let discr = Scalar { value: Int(min_ity), non_zero: false };
-                let mut variants = variants.into_iter().map(|fields| {
-                    let mut fields = fields.into_iter().map(|field| {
-                        field.layout(tcx, param_env)
-                    }).collect::<Result<Vec<_>, _>>()?;
-                    fields.insert(0, &discr);
-                    let st = Struct::new(dl,
-                        &fields,
-                        &def.repr, StructKind::EnumVariant, ty)?;
+                // Create the set of structs that represent each variant.
+                let mut variants = variants.into_iter().enumerate().map(|(i, field_layouts)| {
+                    let mut st = univariant_uninterned(&field_layouts,
+                        &def.repr, StructKind::EnumVariant(min_ity))?;
+                    st.variants = Variants::Single { index: i };
                     // Find the first field we can't move later
                     // to make room for a larger discriminant.
-                    // It is important to skip the first field.
-                    for i in st.field_index_by_increasing_offset().skip(1) {
-                        let field = fields[i];
-                        let field_align = field.align(dl);
-                        if field.size(dl).bytes() != 0 || field_align.abi() != 1 {
-                            start_align = start_align.min(field_align);
+                    for field in st.fields.index_by_increasing_offset().map(|j| field_layouts[j]) {
+                        if !field.is_zst() || field.align.abi() != 1 {
+                            start_align = start_align.min(field.align);
                             break;
                         }
                     }
-                    size = cmp::max(size, st.min_size);
+                    size = cmp::max(size, st.size);
                     align = align.max(st.align);
-                    primitive_align = primitive_align.max(st.primitive_align);
                     Ok(st)
                 }).collect::<Result<Vec<_>, _>>()?;
 
@@ -1521,30 +1618,55 @@
                     ity = min_ity;
                 } else {
                     // Patch up the variants' first few fields.
-                    let old_ity_size = Int(min_ity).size(dl);
-                    let new_ity_size = Int(ity).size(dl);
+                    let old_ity_size = min_ity.size();
+                    let new_ity_size = ity.size();
                     for variant in &mut variants {
-                        for i in variant.offsets.iter_mut() {
-                            // The first field is the discrimminant, at offset 0.
-                            // These aren't in order, and we need to skip it.
-                            if *i <= old_ity_size && *i > Size::from_bytes(0) {
-                                *i = new_ity_size;
-                            }
+                        if variant.abi == Abi::Uninhabited {
+                            continue;
                         }
-                        // We might be making the struct larger.
-                        if variant.min_size <= old_ity_size {
-                            variant.min_size = new_ity_size;
+                        match variant.fields {
+                            FieldPlacement::Arbitrary { ref mut offsets, .. } => {
+                                for i in offsets {
+                                    if *i <= old_ity_size {
+                                        assert_eq!(*i, old_ity_size);
+                                        *i = new_ity_size;
+                                    }
+                                }
+                                // We might be making the struct larger.
+                                if variant.size <= old_ity_size {
+                                    variant.size = new_ity_size;
+                                }
+                            }
+                            _ => bug!()
                         }
                     }
                 }
 
-                General {
-                    discr: ity,
-                    variants,
-                    size,
+                let discr = Scalar {
+                    value: Int(ity, signed),
+                    valid_range: (min as u128)..=(max as u128)
+                };
+                let abi = if discr.value.size(dl) == size {
+                    Abi::Scalar(discr.clone())
+                } else {
+                    Abi::Aggregate {
+                        sized: true,
+                        packed: false
+                    }
+                };
+                tcx.intern_layout(LayoutDetails {
+                    variants: Variants::Tagged {
+                        discr,
+                        variants
+                    },
+                    // FIXME(eddyb): using `FieldPlacement::Arbitrary` here results
+                    // in lost optimizations, specifically around allocations, see
+                    // `test/codegen/{alloc-optimisation,vec-optimizes-away}.rs`.
+                    fields: FieldPlacement::Union(1),
+                    abi,
                     align,
-                    primitive_align,
-                }
+                    size
+                })
             }
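When no niche is found, the general tagged path above sizes the tag from the observed discriminant range via `Integer::repr_discr(tcx, ty, &def.repr, min, max)`. That choice is observable from outside the compiler: a C-like enum whose discriminants fit in a `u8` gets a one-byte tag, and crossing 255 forces at least a two-byte tag (sizes as produced by rustc's default repr):

```rust
use std::mem::size_of;

#[allow(dead_code)]
enum Small { A = 0, B = 255 }   // range 0..=255 fits in u8

#[allow(dead_code)]
enum Large { A = 0, B = 256 }   // range 0..=256 needs at least u16

fn main() {
    assert_eq!(size_of::<Small>(), 1);
    assert_eq!(size_of::<Large>(), 2);
}
```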
 
             // Types with no meaningful known layout.
@@ -1553,204 +1675,24 @@
                 if ty == normalized {
                     return Err(LayoutError::Unknown(ty));
                 }
-                return normalized.layout(tcx, param_env);
+                tcx.layout_raw(param_env.and(normalized))?
             }
             ty::TyParam(_) => {
                 return Err(LayoutError::Unknown(ty));
             }
             ty::TyInfer(_) | ty::TyError => {
-                bug!("Layout::compute: unexpected type `{}`", ty)
+                bug!("LayoutDetails::compute: unexpected type `{}`", ty)
             }
-        };
-
-        success(layout)
-    }
-
-    /// Returns true if the layout corresponds to an unsized type.
-    pub fn is_unsized(&self) -> bool {
-        match *self {
-            Scalar {..} | Vector {..} | FatPointer {..} |
-            CEnum {..} | UntaggedUnion {..} | General {..} |
-            RawNullablePointer {..} |
-            StructWrappedNullablePointer {..} => false,
-
-            Array { sized, .. } |
-            Univariant { variant: Struct { sized, .. }, .. } => !sized
-        }
-    }
-
-    pub fn size<C: HasDataLayout>(&self, cx: C) -> Size {
-        let dl = cx.data_layout();
-
-        match *self {
-            Scalar { value, .. } | RawNullablePointer { value, .. } => {
-                value.size(dl)
-            }
-
-            Vector { element, count } => {
-                let element_size = element.size(dl);
-                let vec_size = match element_size.checked_mul(count, dl) {
-                    Some(size) => size,
-                    None => bug!("Layout::size({:?}): {} * {} overflowed",
-                                 self, element_size.bytes(), count)
-                };
-                vec_size.abi_align(self.align(dl))
-            }
-
-            Array { element_size, count, .. } => {
-                match element_size.checked_mul(count, dl) {
-                    Some(size) => size,
-                    None => bug!("Layout::size({:?}): {} * {} overflowed",
-                                 self, element_size.bytes(), count)
-                }
-            }
-
-            FatPointer { metadata, .. } => {
-                // Effectively a (ptr, meta) tuple.
-                Pointer.size(dl).abi_align(metadata.align(dl))
-                       .checked_add(metadata.size(dl), dl).unwrap()
-                       .abi_align(self.align(dl))
-            }
-
-            CEnum { discr, .. } => Int(discr).size(dl),
-            General { size, .. } => size,
-            UntaggedUnion { ref variants } => variants.stride(),
-
-            Univariant { ref variant, .. } |
-            StructWrappedNullablePointer { nonnull: ref variant, .. } => {
-                variant.stride()
-            }
-        }
-    }
-
-    pub fn align<C: HasDataLayout>(&self, cx: C) -> Align {
-        let dl = cx.data_layout();
-
-        match *self {
-            Scalar { value, .. } | RawNullablePointer { value, .. } => {
-                value.align(dl)
-            }
-
-            Vector { element, count } => {
-                let elem_size = element.size(dl);
-                let vec_size = match elem_size.checked_mul(count, dl) {
-                    Some(size) => size,
-                    None => bug!("Layout::align({:?}): {} * {} overflowed",
-                                 self, elem_size.bytes(), count)
-                };
-                for &(size, align) in &dl.vector_align {
-                    if size == vec_size {
-                        return align;
-                    }
-                }
-                // Default to natural alignment, which is what LLVM does.
-                // That is, use the size, rounded up to a power of 2.
-                let align = vec_size.bytes().next_power_of_two();
-                Align::from_bytes(align, align).unwrap()
-            }
-
-            FatPointer { metadata, .. } => {
-                // Effectively a (ptr, meta) tuple.
-                Pointer.align(dl).max(metadata.align(dl))
-            }
-
-            CEnum { discr, .. } => Int(discr).align(dl),
-            Array { align, .. } | General { align, .. } => align,
-            UntaggedUnion { ref variants } => variants.align,
-
-            Univariant { ref variant, .. } |
-            StructWrappedNullablePointer { nonnull: ref variant, .. } => {
-                variant.align
-            }
-        }
-    }
-
-    /// Returns alignment before repr alignment is applied
-    pub fn primitive_align(&self, dl: &TargetDataLayout) -> Align {
-        match *self {
-            Array { primitive_align, .. } | General { primitive_align, .. } => primitive_align,
-            Univariant { ref variant, .. } |
-            StructWrappedNullablePointer { nonnull: ref variant, .. } => {
-                variant.primitive_align
-            },
-
-            _ => self.align(dl)
-        }
-    }
-
-    /// Returns repr alignment if it is greater than the primitive alignment.
-    pub fn over_align(&self, dl: &TargetDataLayout) -> Option<u32> {
-        let align = self.align(dl);
-        let primitive_align = self.primitive_align(dl);
-        if align.abi() > primitive_align.abi() {
-            Some(align.abi() as u32)
-        } else {
-            None
-        }
-    }
-
-    pub fn field_offset<C: HasDataLayout>(&self,
-                                          cx: C,
-                                          i: usize,
-                                          variant_index: Option<usize>)
-                                          -> Size {
-        let dl = cx.data_layout();
-
-        match *self {
-            Scalar { .. } |
-            CEnum { .. } |
-            UntaggedUnion { .. } |
-            RawNullablePointer { .. } => {
-                Size::from_bytes(0)
-            }
-
-            Vector { element, count } => {
-                let element_size = element.size(dl);
-                let i = i as u64;
-                assert!(i < count);
-                Size::from_bytes(element_size.bytes() * count)
-            }
-
-            Array { element_size, count, .. } => {
-                let i = i as u64;
-                assert!(i < count);
-                Size::from_bytes(element_size.bytes() * count)
-            }
-
-            FatPointer { metadata, .. } => {
-                // Effectively a (ptr, meta) tuple.
-                assert!(i < 2);
-                if i == 0 {
-                    Size::from_bytes(0)
-                } else {
-                    Pointer.size(dl).abi_align(metadata.align(dl))
-                }
-            }
-
-            Univariant { ref variant, .. } => variant.offsets[i],
-
-            General { ref variants, .. } => {
-                let v = variant_index.expect("variant index required");
-                variants[v].offsets[i + 1]
-            }
-
-            StructWrappedNullablePointer { nndiscr, ref nonnull, .. } => {
-                if Some(nndiscr as usize) == variant_index {
-                    nonnull.offsets[i]
-                } else {
-                    Size::from_bytes(0)
-                }
-            }
-        }
+        })
     }
 
     /// This is invoked by the `layout_raw` query to record the final
     /// layout of each type.
     #[inline]
-    pub fn record_layout_for_printing(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                                      ty: Ty<'tcx>,
-                                      param_env: ty::ParamEnv<'tcx>,
-                                      layout: &Layout) {
+    fn record_layout_for_printing(tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                                  ty: Ty<'tcx>,
+                                  param_env: ty::ParamEnv<'tcx>,
+                                  layout: TyLayout<'tcx>) {
         // If we are running with `-Zprint-type-sizes`, record layouts for
         // dumping later. Ignore layouts that are done with non-empty
         // environments or non-monomorphic layouts, as the user only wants
@@ -1770,24 +1712,23 @@
     fn record_layout_for_printing_outlined(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                            ty: Ty<'tcx>,
                                            param_env: ty::ParamEnv<'tcx>,
-                                           layout: &Layout) {
+                                           layout: TyLayout<'tcx>) {
+        let cx = (tcx, param_env);
         // (delay format until we actually need it)
         let record = |kind, opt_discr_size, variants| {
             let type_desc = format!("{:?}", ty);
-            let overall_size = layout.size(tcx);
-            let align = layout.align(tcx);
             tcx.sess.code_stats.borrow_mut().record_type_size(kind,
                                                               type_desc,
-                                                              align,
-                                                              overall_size,
+                                                              layout.align,
+                                                              layout.size,
                                                               opt_discr_size,
                                                               variants);
         };
 
-        let (adt_def, substs) = match ty.sty {
-            ty::TyAdt(ref adt_def, substs) => {
+        let adt_def = match ty.sty {
+            ty::TyAdt(ref adt_def, _) => {
                 debug!("print-type-size t: `{:?}` process adt", ty);
-                (adt_def, substs)
+                adt_def
             }
 
             ty::TyClosure(..) => {
@@ -1804,106 +1745,61 @@
 
         let adt_kind = adt_def.adt_kind();
 
-        let build_field_info = |(field_name, field_ty): (ast::Name, Ty<'tcx>), offset: &Size| {
-            let layout = field_ty.layout(tcx, param_env);
-            match layout {
-                Err(_) => bug!("no layout found for field {} type: `{:?}`", field_name, field_ty),
-                Ok(field_layout) => {
-                    session::FieldInfo {
-                        name: field_name.to_string(),
-                        offset: offset.bytes(),
-                        size: field_layout.size(tcx).bytes(),
-                        align: field_layout.align(tcx).abi(),
+        let build_variant_info = |n: Option<ast::Name>,
+                                  flds: &[ast::Name],
+                                  layout: TyLayout<'tcx>| {
+            let mut min_size = Size::from_bytes(0);
+            let field_info: Vec<_> = flds.iter().enumerate().map(|(i, &name)| {
+                match layout.field(cx, i) {
+                    Err(err) => {
+                        bug!("no layout found for field {}: `{:?}`", name, err);
+                    }
+                    Ok(field_layout) => {
+                        let offset = layout.fields.offset(i);
+                        let field_end = offset + field_layout.size;
+                        if min_size < field_end {
+                            min_size = field_end;
+                        }
+                        session::FieldInfo {
+                            name: name.to_string(),
+                            offset: offset.bytes(),
+                            size: field_layout.size.bytes(),
+                            align: field_layout.align.abi(),
+                        }
                     }
                 }
-            }
-        };
-
-        let build_primitive_info = |name: ast::Name, value: &Primitive| {
-            session::VariantInfo {
-                name: Some(name.to_string()),
-                kind: session::SizeKind::Exact,
-                align: value.align(tcx).abi(),
-                size: value.size(tcx).bytes(),
-                fields: vec![],
-            }
-        };
-
-        enum Fields<'a> {
-            WithDiscrim(&'a Struct),
-            NoDiscrim(&'a Struct),
-        }
-
-        let build_variant_info = |n: Option<ast::Name>,
-                                  flds: &[(ast::Name, Ty<'tcx>)],
-                                  layout: Fields| {
-            let (s, field_offsets) = match layout {
-                Fields::WithDiscrim(s) => (s, &s.offsets[1..]),
-                Fields::NoDiscrim(s) => (s, &s.offsets[0..]),
-            };
-            let field_info: Vec<_> =
-                flds.iter()
-                    .zip(field_offsets.iter())
-                    .map(|(&field_name_ty, offset)| build_field_info(field_name_ty, offset))
-                    .collect();
+            }).collect();
 
             session::VariantInfo {
                 name: n.map(|n|n.to_string()),
-                kind: if s.sized {
-                    session::SizeKind::Exact
-                } else {
+                kind: if layout.is_unsized() {
                     session::SizeKind::Min
+                } else {
+                    session::SizeKind::Exact
                 },
-                align: s.align.abi(),
-                size: s.min_size.bytes(),
+                align: layout.align.abi(),
+                size: if min_size.bytes() == 0 {
+                    layout.size.bytes()
+                } else {
+                    min_size.bytes()
+                },
                 fields: field_info,
             }
         };
 
-        match *layout {
-            Layout::StructWrappedNullablePointer { nonnull: ref variant_layout,
-                                                   nndiscr,
-                                                   discrfield: _,
-                                                   discrfield_source: _ } => {
-                debug!("print-type-size t: `{:?}` adt struct-wrapped nullable nndiscr {} is {:?}",
-                       ty, nndiscr, variant_layout);
-                let variant_def = &adt_def.variants[nndiscr as usize];
-                let fields: Vec<_> =
-                    variant_def.fields.iter()
-                                      .map(|field_def| (field_def.name, field_def.ty(tcx, substs)))
-                                      .collect();
-                record(adt_kind.into(),
-                       None,
-                       vec![build_variant_info(Some(variant_def.name),
-                                               &fields,
-                                               Fields::NoDiscrim(variant_layout))]);
-            }
-            Layout::RawNullablePointer { nndiscr, value } => {
-                debug!("print-type-size t: `{:?}` adt raw nullable nndiscr {} is {:?}",
-                       ty, nndiscr, value);
-                let variant_def = &adt_def.variants[nndiscr as usize];
-                record(adt_kind.into(), None,
-                       vec![build_primitive_info(variant_def.name, &value)]);
-            }
-            Layout::Univariant { variant: ref variant_layout, non_zero: _ } => {
-                let variant_names = || {
-                    adt_def.variants.iter().map(|v|format!("{}", v.name)).collect::<Vec<_>>()
-                };
-                debug!("print-type-size t: `{:?}` adt univariant {:?} variants: {:?}",
-                       ty, variant_layout, variant_names());
-                assert!(adt_def.variants.len() <= 1,
-                        "univariant with variants {:?}", variant_names());
-                if adt_def.variants.len() == 1 {
-                    let variant_def = &adt_def.variants[0];
+        match layout.variants {
+            Variants::Single { index } => {
+                debug!("print-type-size `{:#?}` variant {}",
+                       layout, adt_def.variants[index].name);
+                if !adt_def.variants.is_empty() {
+                    let variant_def = &adt_def.variants[index];
                     let fields: Vec<_> =
-                        variant_def.fields.iter()
-                                          .map(|f| (f.name, f.ty(tcx, substs)))
-                                          .collect();
+                        variant_def.fields.iter().map(|f| f.name).collect();
                     record(adt_kind.into(),
                            None,
                            vec![build_variant_info(Some(variant_def.name),
                                                    &fields,
-                                                   Fields::NoDiscrim(variant_layout))]);
+                                                   layout)]);
                 } else {
                     // (This case arises for *empty* enums; so give it
                     // zero variants.)
@@ -1911,54 +1807,23 @@
                 }
             }
 
-            Layout::General { ref variants, discr, .. } => {
-                debug!("print-type-size t: `{:?}` adt general variants def {} layouts {} {:?}",
-                       ty, adt_def.variants.len(), variants.len(), variants);
+            Variants::NicheFilling { .. } |
+            Variants::Tagged { .. } => {
+                debug!("print-type-size `{:#?}` adt general variants def {}",
+                       ty, adt_def.variants.len());
                 let variant_infos: Vec<_> =
-                    adt_def.variants.iter()
-                                    .zip(variants.iter())
-                                    .map(|(variant_def, variant_layout)| {
-                                        let fields: Vec<_> =
-                                            variant_def.fields
-                                                       .iter()
-                                                       .map(|f| (f.name, f.ty(tcx, substs)))
-                                                       .collect();
-                                        build_variant_info(Some(variant_def.name),
-                                                           &fields,
-                                                           Fields::WithDiscrim(variant_layout))
-                                    })
-                                    .collect();
-                record(adt_kind.into(), Some(discr.size()), variant_infos);
-            }
-
-            Layout::UntaggedUnion { ref variants } => {
-                debug!("print-type-size t: `{:?}` adt union variants {:?}",
-                       ty, variants);
-                // layout does not currently store info about each
-                // variant...
-                record(adt_kind.into(), None, Vec::new());
-            }
-
-            Layout::CEnum { discr, .. } => {
-                debug!("print-type-size t: `{:?}` adt c-like enum", ty);
-                let variant_infos: Vec<_> =
-                    adt_def.variants.iter()
-                                    .map(|variant_def| {
-                                        build_primitive_info(variant_def.name,
-                                                             &Primitive::Int(discr))
-                                    })
-                                    .collect();
-                record(adt_kind.into(), Some(discr.size()), variant_infos);
-            }
-
-            // other cases provide little interesting (i.e. adjustable
-            // via representation tweaks) size info beyond total size.
-            Layout::Scalar { .. } |
-            Layout::Vector { .. } |
-            Layout::Array { .. } |
-            Layout::FatPointer { .. } => {
-                debug!("print-type-size t: `{:?}` adt other", ty);
-                record(adt_kind.into(), None, Vec::new())
+                    adt_def.variants.iter().enumerate().map(|(i, variant_def)| {
+                        let fields: Vec<_> =
+                            variant_def.fields.iter().map(|f| f.name).collect();
+                        build_variant_info(Some(variant_def.name),
+                                            &fields,
+                                            layout.for_variant(cx, i))
+                    })
+                    .collect();
+                record(adt_kind.into(), match layout.variants {
+                    Variants::Tagged { ref discr, .. } => Some(discr.value.size(tcx)),
+                    _ => None
+                }, variant_infos);
             }
         }
     }
@@ -1992,39 +1857,32 @@
         assert!(!ty.has_infer_types());
 
         // First try computing a static layout.
-        let err = match ty.layout(tcx, param_env) {
+        let err = match (tcx, param_env).layout_of(ty) {
             Ok(layout) => {
-                return Ok(SizeSkeleton::Known(layout.size(tcx)));
+                return Ok(SizeSkeleton::Known(layout.size));
             }
             Err(err) => err
         };
 
-        let ptr_skeleton = |pointee: Ty<'tcx>| {
-            let non_zero = !ty.is_unsafe_ptr();
-            let tail = tcx.struct_tail(pointee);
-            match tail.sty {
-                ty::TyParam(_) | ty::TyProjection(_) => {
-                    assert!(tail.has_param_types() || tail.has_self_ty());
-                    Ok(SizeSkeleton::Pointer {
-                        non_zero,
-                        tail: tcx.erase_regions(&tail)
-                    })
-                }
-                _ => {
-                    bug!("SizeSkeleton::compute({}): layout errored ({}), yet \
-                            tail `{}` is not a type parameter or a projection",
-                            ty, err, tail)
-                }
-            }
-        };
-
         match ty.sty {
             ty::TyRef(_, ty::TypeAndMut { ty: pointee, .. }) |
             ty::TyRawPtr(ty::TypeAndMut { ty: pointee, .. }) => {
-                ptr_skeleton(pointee)
-            }
-            ty::TyAdt(def, _) if def.is_box() => {
-                ptr_skeleton(ty.boxed_ty())
+                let non_zero = !ty.is_unsafe_ptr();
+                let tail = tcx.struct_tail(pointee);
+                match tail.sty {
+                    ty::TyParam(_) | ty::TyProjection(_) => {
+                        assert!(tail.has_param_types() || tail.has_self_ty());
+                        Ok(SizeSkeleton::Pointer {
+                            non_zero,
+                            tail: tcx.erase_regions(&tail)
+                        })
+                    }
+                    _ => {
+                        bug!("SizeSkeleton::compute({}): layout errored ({}), yet \
+                              tail `{}` is not a type parameter or a projection",
+                             ty, err, tail)
+                    }
+                }
             }
 
             ty::TyAdt(def, substs) => {
@@ -2109,142 +1967,184 @@
     }
 }
 
-/// A pair of a type and its layout. Implements various
-/// type traversal APIs (e.g. recursing into fields).
+/// The details of the layout of a type, alongside the type itself.
+/// Provides various type traversal APIs (e.g. recursing into fields).
+///
+/// Note that the details are NOT guaranteed to always be identical
+/// to those obtained from `layout_of(ty)`, as we need to produce
+/// layouts for which Rust types do not exist, such as enum variants
+/// or synthetic fields of enums (i.e. discriminants) and fat pointers.
 #[derive(Copy, Clone, Debug)]
 pub struct TyLayout<'tcx> {
     pub ty: Ty<'tcx>,
-    pub layout: &'tcx Layout,
-    pub variant_index: Option<usize>,
+    details: &'tcx LayoutDetails
 }
 
 impl<'tcx> Deref for TyLayout<'tcx> {
-    type Target = Layout;
-    fn deref(&self) -> &Layout {
-        self.layout
+    type Target = &'tcx LayoutDetails;
+    fn deref(&self) -> &&'tcx LayoutDetails {
+        &self.details
     }
 }
 
-pub trait LayoutTyper<'tcx>: HasDataLayout {
+pub trait HasTyCtxt<'tcx>: HasDataLayout {
+    fn tcx<'a>(&'a self) -> TyCtxt<'a, 'tcx, 'tcx>;
+}
+
+impl<'a, 'gcx, 'tcx> HasDataLayout for TyCtxt<'a, 'gcx, 'tcx> {
+    fn data_layout(&self) -> &TargetDataLayout {
+        &self.data_layout
+    }
+}
+
+impl<'a, 'gcx, 'tcx> HasTyCtxt<'gcx> for TyCtxt<'a, 'gcx, 'tcx> {
+    fn tcx<'b>(&'b self) -> TyCtxt<'b, 'gcx, 'gcx> {
+        self.global_tcx()
+    }
+}
+
+impl<'a, 'gcx, 'tcx, T: Copy> HasDataLayout for (TyCtxt<'a, 'gcx, 'tcx>, T) {
+    fn data_layout(&self) -> &TargetDataLayout {
+        self.0.data_layout()
+    }
+}
+
+impl<'a, 'gcx, 'tcx, T: Copy> HasTyCtxt<'gcx> for (TyCtxt<'a, 'gcx, 'tcx>, T) {
+    fn tcx<'b>(&'b self) -> TyCtxt<'b, 'gcx, 'gcx> {
+        self.0.tcx()
+    }
+}
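The `(TyCtxt, T)` impls above forward `HasDataLayout` and `HasTyCtxt` through a tuple, so a context like `(tcx, param_env)` carries extra state without defining a new struct. A minimal standalone sketch of that forwarding pattern (names here are illustrative, not the compiler's):

```rust
// Sketch of the tuple-context pattern: a capability trait implemented
// for a base context is forwarded through (Base, Extra) tuples, so
// callers can thread extra state alongside the base context.
trait HasConfig {
    fn config(&self) -> &str;
}

struct Ctx {
    config: String,
}

impl HasConfig for Ctx {
    fn config(&self) -> &str {
        &self.config
    }
}

// Forward the capability through any tuple whose first element has it.
impl<C: HasConfig, T> HasConfig for (C, T) {
    fn config(&self) -> &str {
        self.0.config()
    }
}

fn describe<C: HasConfig>(cx: &C) -> String {
    format!("config = {}", cx.config())
}

fn main() {
    let base = Ctx { config: "fast".to_string() };
    // The tuple (Ctx, u32) still satisfies HasConfig.
    println!("{}", describe(&(base, 42u32)));
}
```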
+
+pub trait MaybeResult<T> {
+    fn from_ok(x: T) -> Self;
+    fn map_same<F: FnOnce(T) -> T>(self, f: F) -> Self;
+}
+
+impl<T> MaybeResult<T> for T {
+    fn from_ok(x: T) -> Self {
+        x
+    }
+    fn map_same<F: FnOnce(T) -> T>(self, f: F) -> Self {
+        f(self)
+    }
+}
+
+impl<T, E> MaybeResult<T> for Result<T, E> {
+    fn from_ok(x: T) -> Self {
+        Ok(x)
+    }
+    fn map_same<F: FnOnce(T) -> T>(self, f: F) -> Self {
+        self.map(f)
+    }
+}
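The `MaybeResult` trait added above lets the same traversal code serve both infallible contexts (returning `T` directly) and fallible ones (returning `Result<T, E>`). A self-contained sketch of how the two impls coexist, with an illustrative generic helper:

```rust
// Minimal sketch of the MaybeResult pattern: one trait unifies a bare
// value and a Result, so generic code can post-process either shape.
pub trait MaybeResult<T> {
    fn from_ok(x: T) -> Self;
    fn map_same<F: FnOnce(T) -> T>(self, f: F) -> Self;
}

// A bare value is trivially "ok".
impl<T> MaybeResult<T> for T {
    fn from_ok(x: T) -> Self { x }
    fn map_same<F: FnOnce(T) -> T>(self, f: F) -> Self { f(self) }
}

// A Result maps only its Ok variant, passing errors through untouched.
impl<T, E> MaybeResult<T> for Result<T, E> {
    fn from_ok(x: T) -> Self { Ok(x) }
    fn map_same<F: FnOnce(T) -> T>(self, f: F) -> Self { self.map(f) }
}

// An illustrative generic helper that works for both return shapes.
fn double<R: MaybeResult<u64>>(r: R) -> R {
    r.map_same(|n| n * 2)
}

fn main() {
    let plain: u64 = double(21);
    let fallible: Result<u64, ()> = double(Ok(21));
    println!("{} {:?}", plain, fallible);
}
```

The two impls do not overlap: the blanket impl would only give `Result<T, E>: MaybeResult<Result<T, E>>`, which is a different trait instantiation than `MaybeResult<T>`.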
+
+pub trait LayoutOf<T> {
     type TyLayout;
 
-    fn tcx<'a>(&'a self) -> TyCtxt<'a, 'tcx, 'tcx>;
-    fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout;
-    fn normalize_projections(self, ty: Ty<'tcx>) -> Ty<'tcx>;
+    fn layout_of(self, ty: T) -> Self::TyLayout;
 }
 
-/// Combines a tcx with the parameter environment so that you can
-/// compute layout operations.
-#[derive(Copy, Clone)]
-pub struct LayoutCx<'a, 'tcx: 'a> {
-    tcx: TyCtxt<'a, 'tcx, 'tcx>,
-    param_env: ty::ParamEnv<'tcx>,
-}
-
-impl<'a, 'tcx> LayoutCx<'a, 'tcx> {
-    pub fn new(tcx: TyCtxt<'a, 'tcx, 'tcx>, param_env: ty::ParamEnv<'tcx>) -> Self {
-        LayoutCx { tcx, param_env }
-    }
-}
-
-impl<'a, 'tcx> HasDataLayout for LayoutCx<'a, 'tcx> {
-    fn data_layout(&self) -> &TargetDataLayout {
-        &self.tcx.data_layout
-    }
-}
-
-impl<'a, 'tcx> LayoutTyper<'tcx> for LayoutCx<'a, 'tcx> {
+impl<'a, 'tcx> LayoutOf<Ty<'tcx>> for (TyCtxt<'a, 'tcx, 'tcx>, ty::ParamEnv<'tcx>) {
     type TyLayout = Result<TyLayout<'tcx>, LayoutError<'tcx>>;
 
-    fn tcx<'b>(&'b self) -> TyCtxt<'b, 'tcx, 'tcx> {
-        self.tcx
-    }
-
+    /// Computes the layout of a type. Note that this implicitly
+    /// executes in "reveal all" mode.
+    #[inline]
     fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
-        let ty = self.normalize_projections(ty);
+        let (tcx, param_env) = self;
 
-        Ok(TyLayout {
+        let ty = tcx.normalize_associated_type_in_env(&ty, param_env.reveal_all());
+        let details = tcx.layout_raw(param_env.reveal_all().and(ty))?;
+        let layout = TyLayout {
             ty,
-            layout: ty.layout(self.tcx, self.param_env)?,
-            variant_index: None
-        })
-    }
+            details
+        };
 
-    fn normalize_projections(self, ty: Ty<'tcx>) -> Ty<'tcx> {
-        self.tcx.normalize_associated_type_in_env(&ty, self.param_env)
+        // NB: This recording is normally disabled; when enabled, it
+        // can however trigger recursive invocations of `layout_of`.
+        // Therefore, we execute it *after* the main query has
+        // completed, to avoid problems around recursive structures
+        // and the like. (Admittedly, I wasn't able to reproduce a problem
+        // here, but it seems like the right thing to do. -nmatsakis)
+        LayoutDetails::record_layout_for_printing(tcx, ty, param_env, layout);
+
+        Ok(layout)
+    }
+}
+
+impl<'a, 'tcx> LayoutOf<Ty<'tcx>> for (ty::maps::TyCtxtAt<'a, 'tcx, 'tcx>,
+                                       ty::ParamEnv<'tcx>) {
+    type TyLayout = Result<TyLayout<'tcx>, LayoutError<'tcx>>;
+
+    /// Computes the layout of a type. Note that this implicitly
+    /// executes in "reveal all" mode.
+    #[inline]
+    fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
+        let (tcx_at, param_env) = self;
+
+        let ty = tcx_at.tcx.normalize_associated_type_in_env(&ty, param_env.reveal_all());
+        let details = tcx_at.layout_raw(param_env.reveal_all().and(ty))?;
+        let layout = TyLayout {
+            ty,
+            details
+        };
+
+        // NB: This recording is normally disabled; when enabled, it
+        // can however trigger recursive invocations of `layout_of`.
+        // Therefore, we execute it *after* the main query has
+        // completed, to avoid problems around recursive structures
+        // and the like. (Admittedly, I wasn't able to reproduce a problem
+        // here, but it seems like the right thing to do. -nmatsakis)
+        LayoutDetails::record_layout_for_printing(tcx_at.tcx, ty, param_env, layout);
+
+        Ok(layout)
     }
 }
 
 impl<'a, 'tcx> TyLayout<'tcx> {
-    pub fn for_variant(&self, variant_index: usize) -> Self {
-        TyLayout {
-            variant_index: Some(variant_index),
-            ..*self
-        }
-    }
+    pub fn for_variant<C>(&self, cx: C, variant_index: usize) -> Self
+        where C: LayoutOf<Ty<'tcx>> + HasTyCtxt<'tcx>,
+              C::TyLayout: MaybeResult<TyLayout<'tcx>>
+    {
+        let details = match self.variants {
+            Variants::Single { index } if index == variant_index => self.details,
 
-    pub fn field_offset<C: HasDataLayout>(&self, cx: C, i: usize) -> Size {
-        self.layout.field_offset(cx, i, self.variant_index)
-    }
+            Variants::Single { index } => {
+                // Deny calling for_variant more than once for non-Single enums.
+                cx.layout_of(self.ty).map_same(|layout| {
+                    assert_eq!(layout.variants, Variants::Single { index });
+                    layout
+                });
 
-    pub fn field_count(&self) -> usize {
-        // Handle enum/union through the type rather than Layout.
-        if let ty::TyAdt(def, _) = self.ty.sty {
-            let v = self.variant_index.unwrap_or(0);
-            if def.variants.is_empty() {
-                assert_eq!(v, 0);
-                return 0;
-            } else {
-                return def.variants[v].fields.len();
-            }
-        }
-
-        match *self.layout {
-            Scalar { .. } => {
-                bug!("TyLayout::field_count({:?}): not applicable", self)
+                let fields = match self.ty.sty {
+                    ty::TyAdt(def, _) => def.variants[variant_index].fields.len(),
+                    _ => bug!()
+                };
+                let mut details = LayoutDetails::uninhabited(fields);
+                details.variants = Variants::Single { index: variant_index };
+                cx.tcx().intern_layout(details)
             }
 
-            // Handled above (the TyAdt case).
-            CEnum { .. } |
-            General { .. } |
-            UntaggedUnion { .. } |
-            RawNullablePointer { .. } |
-            StructWrappedNullablePointer { .. } => bug!(),
-
-            FatPointer { .. } => 2,
-
-            Vector { count, .. } |
-            Array { count, .. } => {
-                let usize_count = count as usize;
-                assert_eq!(usize_count as u64, count);
-                usize_count
-            }
-
-            Univariant { ref variant, .. } => variant.offsets.len(),
-        }
-    }
-
-    pub fn field_type<C: LayoutTyper<'tcx>>(&self, cx: C, i: usize) -> Ty<'tcx> {
-        let tcx = cx.tcx();
-
-        let ptr_field_type = |pointee: Ty<'tcx>| {
-            assert!(i < 2);
-            let slice = |element: Ty<'tcx>| {
-                if i == 0 {
-                    tcx.mk_mut_ptr(element)
-                } else {
-                    tcx.types.usize
-                }
-            };
-            match tcx.struct_tail(pointee).sty {
-                ty::TySlice(element) => slice(element),
-                ty::TyStr => slice(tcx.types.u8),
-                ty::TyDynamic(..) => tcx.mk_mut_ptr(tcx.mk_nil()),
-                _ => bug!("TyLayout::field_type({:?}): not applicable", self)
+            Variants::NicheFilling { ref variants, .. } |
+            Variants::Tagged { ref variants, .. } => {
+                &variants[variant_index]
             }
         };
 
-        match self.ty.sty {
+        assert_eq!(details.variants, Variants::Single { index: variant_index });
+
+        TyLayout {
+            ty: self.ty,
+            details
+        }
+    }
+
+    pub fn field<C>(&self, cx: C, i: usize) -> C::TyLayout
+        where C: LayoutOf<Ty<'tcx>> + HasTyCtxt<'tcx>,
+              C::TyLayout: MaybeResult<TyLayout<'tcx>>
+    {
+        let tcx = cx.tcx();
+        cx.layout_of(match self.ty.sty {
             ty::TyBool |
             ty::TyChar |
             ty::TyInt(_) |
@@ -2261,10 +2161,35 @@
             // Potentially-fat pointers.
             ty::TyRef(_, ty::TypeAndMut { ty: pointee, .. }) |
             ty::TyRawPtr(ty::TypeAndMut { ty: pointee, .. }) => {
-                ptr_field_type(pointee)
-            }
-            ty::TyAdt(def, _) if def.is_box() => {
-                ptr_field_type(self.ty.boxed_ty())
+                assert!(i < 2);
+
+                // Reuse the fat *T type as its own thin pointer data field.
+                // This provides information about e.g. DST struct pointees
+                // (which may have no non-DST form), and will work as long
+                // as the `Abi` or `FieldPlacement` is checked by users.
+                if i == 0 {
+                    let nil = tcx.mk_nil();
+                    let ptr_ty = if self.ty.is_unsafe_ptr() {
+                        tcx.mk_mut_ptr(nil)
+                    } else {
+                        tcx.mk_mut_ref(tcx.types.re_static, nil)
+                    };
+                    return cx.layout_of(ptr_ty).map_same(|mut ptr_layout| {
+                        ptr_layout.ty = self.ty;
+                        ptr_layout
+                    });
+                }
+
+                match tcx.struct_tail(pointee).sty {
+                    ty::TySlice(_) |
+                    ty::TyStr => tcx.types.usize,
+                    ty::TyDynamic(..) => {
+                        // FIXME(eddyb) use a usize/fn() array with
+                        // the correct number of vtables slots.
+                        tcx.mk_imm_ref(tcx.types.re_static, tcx.mk_nil())
+                    }
+                    _ => bug!("TyLayout::field_type({:?}): not applicable", self)
+                }
             }
 
             // Arrays and slices.
@@ -2290,94 +2215,232 @@
 
             // ADTs.
             ty::TyAdt(def, substs) => {
-                def.variants[self.variant_index.unwrap_or(0)].fields[i].ty(tcx, substs)
+                match self.variants {
+                    Variants::Single { index } => {
+                        def.variants[index].fields[i].ty(tcx, substs)
+                    }
+
+                    // Discriminant field for enums (where applicable).
+                    Variants::Tagged { ref discr, .. } |
+                    Variants::NicheFilling { niche: ref discr, .. } => {
+                        assert_eq!(i, 0);
+                        let layout = LayoutDetails::scalar(tcx, discr.clone());
+                        return MaybeResult::from_ok(TyLayout {
+                            details: tcx.intern_layout(layout),
+                            ty: discr.value.to_ty(tcx)
+                        });
+                    }
+                }
             }
 
             ty::TyProjection(_) | ty::TyAnon(..) | ty::TyParam(_) |
             ty::TyInfer(_) | ty::TyError => {
                 bug!("TyLayout::field_type: unexpected type `{}`", self.ty)
             }
+        })
+    }
+
+    /// Returns true if the layout corresponds to an unsized type.
+    pub fn is_unsized(&self) -> bool {
+        self.abi.is_unsized()
+    }
+
+    /// Returns true if the fields of the layout are packed.
+    pub fn is_packed(&self) -> bool {
+        self.abi.is_packed()
+    }
+
+    /// Returns true if the type is a ZST and not unsized.
+    pub fn is_zst(&self) -> bool {
+        match self.abi {
+            Abi::Uninhabited => true,
+            Abi::Scalar(_) | Abi::ScalarPair(..) => false,
+            Abi::Vector => self.size.bytes() == 0,
+            Abi::Aggregate { sized, .. } => sized && self.size.bytes() == 0
         }
     }
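The `is_zst` predicate above classifies a layout as zero-sized only when it is uninhabited, or a sized aggregate/vector of size zero. That decision table can be checked in isolation against a simplified stand-in for the `Abi` enum (the real enum carries more data):

```rust
// Simplified stand-in for the Abi enum, carrying only what the
// zero-sized-type check needs.
enum Abi {
    Uninhabited,
    Scalar,
    ScalarPair,
    Vector { size_bytes: u64 },
    Aggregate { sized: bool, size_bytes: u64 },
}

// Mirror of the is_zst logic: scalars are never ZSTs, unsized
// aggregates are never ZSTs even at size zero.
fn is_zst(abi: &Abi) -> bool {
    match *abi {
        Abi::Uninhabited => true,
        Abi::Scalar | Abi::ScalarPair => false,
        Abi::Vector { size_bytes } => size_bytes == 0,
        Abi::Aggregate { sized, size_bytes } => sized && size_bytes == 0,
    }
}

fn main() {
    // A unit struct: sized, size 0.
    println!("{}", is_zst(&Abi::Aggregate { sized: true, size_bytes: 0 }));
    // An unsized tail of size 0 is still not a ZST.
    println!("{}", is_zst(&Abi::Aggregate { sized: false, size_bytes: 0 }));
}
```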
 
-    pub fn field<C: LayoutTyper<'tcx>>(&self,
-                                       cx: C,
-                                       i: usize)
-                                       -> C::TyLayout {
-        cx.layout_of(cx.normalize_projections(self.field_type(cx, i)))
+    pub fn size_and_align(&self) -> (Size, Align) {
+        (self.size, self.align)
+    }
+
+    /// Find the offset of a niche leaf field, starting from
+    /// the given type and recursing through aggregates, which
+    /// has at least `count` consecutive invalid values.
+    /// The tuple is `(offset, scalar, niche_value)`.
+    // FIXME(eddyb) traverse already optimized enums.
+    fn find_niche<C>(&self, cx: C, count: u128)
+        -> Result<Option<(Size, Scalar, u128)>, LayoutError<'tcx>>
+        where C: LayoutOf<Ty<'tcx>, TyLayout = Result<Self, LayoutError<'tcx>>> +
+                 HasTyCtxt<'tcx>
+    {
+        let scalar_component = |scalar: &Scalar, offset| {
+            let Scalar { value, valid_range: ref v } = *scalar;
+
+            let bits = value.size(cx).bits();
+            assert!(bits <= 128);
+            let max_value = !0u128 >> (128 - bits);
+
+            // Find out how many values are outside the valid range.
+            let niches = if v.start <= v.end {
+                v.start + (max_value - v.end)
+            } else {
+                v.start - v.end - 1
+            };
+
+            // Give up if we can't fit `count` consecutive niches.
+            if count > niches {
+                return None;
+            }
+
+            let niche_start = v.end.wrapping_add(1) & max_value;
+            let niche_end = v.end.wrapping_add(count) & max_value;
+            Some((offset, Scalar {
+                value,
+                valid_range: v.start..=niche_end
+            }, niche_start))
+        };
+
+        match self.abi {
+            Abi::Scalar(ref scalar) => {
+                return Ok(scalar_component(scalar, Size::from_bytes(0)));
+            }
+            Abi::ScalarPair(ref a, ref b) => {
+                return Ok(scalar_component(a, Size::from_bytes(0)).or_else(|| {
+                    scalar_component(b, a.value.size(cx).abi_align(b.value.align(cx)))
+                }));
+            }
+            _ => {}
+        }
+
+        // Perhaps one of the fields is non-zero; let's recurse and find out.
+        if let FieldPlacement::Union(_) = self.fields {
+            // Only Rust enums have safe-to-inspect fields
+            // (a discriminant), other unions are unsafe.
+            if let Variants::Single { .. } = self.variants {
+                return Ok(None);
+            }
+        }
+        if let FieldPlacement::Array { .. } = self.fields {
+            if self.fields.count() > 0 {
+                return self.field(cx, 0)?.find_niche(cx, count);
+            }
+        }
+        for i in 0..self.fields.count() {
+            let r = self.field(cx, i)?.find_niche(cx, count)?;
+            if let Some((offset, scalar, niche_value)) = r {
+                let offset = self.fields.offset(i) + offset;
+                return Ok(Some((offset, scalar, niche_value)));
+            }
+        }
+        Ok(None)
     }
 }
 
-impl<'gcx> HashStable<StableHashingContext<'gcx>> for Layout
-{
+impl<'gcx> HashStable<StableHashingContext<'gcx>> for Variants {
     fn hash_stable<W: StableHasherResult>(&self,
                                           hcx: &mut StableHashingContext<'gcx>,
                                           hasher: &mut StableHasher<W>) {
-        use ty::layout::Layout::*;
+        use ty::layout::Variants::*;
         mem::discriminant(self).hash_stable(hcx, hasher);
 
         match *self {
-            Scalar { value, non_zero } => {
-                value.hash_stable(hcx, hasher);
-                non_zero.hash_stable(hcx, hasher);
+            Single { index } => {
+                index.hash_stable(hcx, hasher);
             }
-            Vector { element, count } => {
-                element.hash_stable(hcx, hasher);
-                count.hash_stable(hcx, hasher);
-            }
-            Array { sized, align, primitive_align, element_size, count } => {
-                sized.hash_stable(hcx, hasher);
-                align.hash_stable(hcx, hasher);
-                primitive_align.hash_stable(hcx, hasher);
-                element_size.hash_stable(hcx, hasher);
-                count.hash_stable(hcx, hasher);
-            }
-            FatPointer { ref metadata, non_zero } => {
-                metadata.hash_stable(hcx, hasher);
-                non_zero.hash_stable(hcx, hasher);
-            }
-            CEnum { discr, signed, non_zero, min, max } => {
-                discr.hash_stable(hcx, hasher);
-                signed.hash_stable(hcx, hasher);
-                non_zero.hash_stable(hcx, hasher);
-                min.hash_stable(hcx, hasher);
-                max.hash_stable(hcx, hasher);
-            }
-            Univariant { ref variant, non_zero } => {
-                variant.hash_stable(hcx, hasher);
-                non_zero.hash_stable(hcx, hasher);
-            }
-            UntaggedUnion { ref variants } => {
-                variants.hash_stable(hcx, hasher);
-            }
-            General { discr, ref variants, size, align, primitive_align } => {
-                discr.hash_stable(hcx, hasher);
-                variants.hash_stable(hcx, hasher);
-                size.hash_stable(hcx, hasher);
-                align.hash_stable(hcx, hasher);
-                primitive_align.hash_stable(hcx, hasher);
-            }
-            RawNullablePointer { nndiscr, ref value } => {
-                nndiscr.hash_stable(hcx, hasher);
-                value.hash_stable(hcx, hasher);
-            }
-            StructWrappedNullablePointer {
-                nndiscr,
-                ref nonnull,
-                ref discrfield,
-                ref discrfield_source
+            Tagged {
+                ref discr,
+                ref variants,
             } => {
-                nndiscr.hash_stable(hcx, hasher);
-                nonnull.hash_stable(hcx, hasher);
-                discrfield.hash_stable(hcx, hasher);
-                discrfield_source.hash_stable(hcx, hasher);
+                discr.hash_stable(hcx, hasher);
+                variants.hash_stable(hcx, hasher);
+            }
+            NicheFilling {
+                dataful_variant,
+                niche_variants: RangeInclusive { start, end },
+                ref niche,
+                niche_start,
+                ref variants,
+            } => {
+                dataful_variant.hash_stable(hcx, hasher);
+                start.hash_stable(hcx, hasher);
+                end.hash_stable(hcx, hasher);
+                niche.hash_stable(hcx, hasher);
+                niche_start.hash_stable(hcx, hasher);
+                variants.hash_stable(hcx, hasher);
             }
         }
     }
 }
 
+impl<'gcx> HashStable<StableHashingContext<'gcx>> for FieldPlacement {
+    fn hash_stable<W: StableHasherResult>(&self,
+                                          hcx: &mut StableHashingContext<'gcx>,
+                                          hasher: &mut StableHasher<W>) {
+        use ty::layout::FieldPlacement::*;
+        mem::discriminant(self).hash_stable(hcx, hasher);
+
+        match *self {
+            Union(count) => {
+                count.hash_stable(hcx, hasher);
+            }
+            Array { count, stride } => {
+                count.hash_stable(hcx, hasher);
+                stride.hash_stable(hcx, hasher);
+            }
+            Arbitrary { ref offsets, ref memory_index } => {
+                offsets.hash_stable(hcx, hasher);
+                memory_index.hash_stable(hcx, hasher);
+            }
+        }
+    }
+}
+
+impl<'gcx> HashStable<StableHashingContext<'gcx>> for Abi {
+    fn hash_stable<W: StableHasherResult>(&self,
+                                          hcx: &mut StableHashingContext<'gcx>,
+                                          hasher: &mut StableHasher<W>) {
+        use ty::layout::Abi::*;
+        mem::discriminant(self).hash_stable(hcx, hasher);
+
+        match *self {
+            Uninhabited => {}
+            Scalar(ref value) => {
+                value.hash_stable(hcx, hasher);
+            }
+            ScalarPair(ref a, ref b) => {
+                a.hash_stable(hcx, hasher);
+                b.hash_stable(hcx, hasher);
+            }
+            Vector => {}
+            Aggregate { packed, sized } => {
+                packed.hash_stable(hcx, hasher);
+                sized.hash_stable(hcx, hasher);
+            }
+        }
+    }
+}
+
+impl<'gcx> HashStable<StableHashingContext<'gcx>> for Scalar {
+    fn hash_stable<W: StableHasherResult>(&self,
+                                          hcx: &mut StableHashingContext<'gcx>,
+                                          hasher: &mut StableHasher<W>) {
+        let Scalar { value, valid_range: RangeInclusive { start, end } } = *self;
+        value.hash_stable(hcx, hasher);
+        start.hash_stable(hcx, hasher);
+        end.hash_stable(hcx, hasher);
+    }
+}
+
+impl_stable_hash_for!(struct ::ty::layout::LayoutDetails {
+    variants,
+    fields,
+    abi,
+    size,
+    align
+});
+
 impl_stable_hash_for!(enum ::ty::layout::Integer {
-    I1,
     I8,
     I16,
     I32,
@@ -2386,7 +2449,7 @@
 });
 
 impl_stable_hash_for!(enum ::ty::layout::Primitive {
-    Int(integer),
+    Int(integer, signed),
     F32,
     F64,
     Pointer
@@ -2415,20 +2478,3 @@
         }
     }
 }
-
-impl_stable_hash_for!(struct ::ty::layout::Struct {
-    align,
-    primitive_align,
-    packed,
-    sized,
-    offsets,
-    memory_index,
-    min_size
-});
-
-impl_stable_hash_for!(struct ::ty::layout::Union {
-    align,
-    primitive_align,
-    min_size,
-    packed
-});
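The `find_niche` logic in the hunk above is what enables niche-filling enum layouts: when a field's valid range leaves at least `count` invalid bit patterns, the enum discriminant can be folded into that field instead of taking extra space. The effect is observable from ordinary stable Rust (this is an editorial illustration, not rustc internals) via `std::mem::size_of`:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // A reference is never null, so its valid range leaves value 0 as a
    // "niche" that Option can use to encode None at zero cost.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());

    // NonZeroU32 likewise reserves 0 as its niche value.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());

    // With no niche available, Option must store a separate discriminant.
    assert!(size_of::<Option<u32>>() > size_of::<u32>());
}
```

The recursion through `FieldPlacement` in the patch is what extends this beyond scalars: a niche found anywhere inside an aggregate field (at offset `self.fields.offset(i) + offset`) can carry the discriminant for the whole enum.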
diff --git a/src/librustc/ty/maps/config.rs b/src/librustc/ty/maps/config.rs
index deaafd1..066b80c 100644
--- a/src/librustc/ty/maps/config.rs
+++ b/src/librustc/ty/maps/config.rs
@@ -8,6 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
+use dep_graph::SerializedDepNodeIndex;
 use hir::def_id::{CrateNum, DefId, DefIndex};
 use ty::{self, Ty, TyCtxt};
 use ty::maps::queries;
@@ -23,11 +24,21 @@
     type Value;
 }
 
-pub(super) trait QueryDescription: QueryConfig {
+pub(super) trait QueryDescription<'tcx>: QueryConfig {
     fn describe(tcx: TyCtxt, key: Self::Key) -> String;
+
+    fn cache_on_disk(_: Self::Key) -> bool {
+        false
+    }
+
+    fn load_from_disk<'a>(_: TyCtxt<'a, 'tcx, 'tcx>,
+                          _: SerializedDepNodeIndex)
+                          -> Self::Value {
+        bug!("QueryDescription::load_from_disk() called for unsupported query.")
+    }
 }
 
-impl<M: QueryConfig<Key=DefId>> QueryDescription for M {
+impl<'tcx, M: QueryConfig<Key=DefId>> QueryDescription<'tcx> for M {
     default fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         if !tcx.sess.verbose() {
             format!("processing `{}`", tcx.item_path_str(def_id))
@@ -38,50 +49,50 @@
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_copy_raw<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_copy_raw<'tcx> {
     fn describe(_tcx: TyCtxt, env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> String {
         format!("computing whether `{}` is `Copy`", env.value)
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_sized_raw<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_sized_raw<'tcx> {
     fn describe(_tcx: TyCtxt, env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> String {
         format!("computing whether `{}` is `Sized`", env.value)
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_freeze_raw<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_freeze_raw<'tcx> {
     fn describe(_tcx: TyCtxt, env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> String {
         format!("computing whether `{}` is freeze", env.value)
     }
 }
 
-impl<'tcx> QueryDescription for queries::needs_drop_raw<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::needs_drop_raw<'tcx> {
     fn describe(_tcx: TyCtxt, env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> String {
         format!("computing whether `{}` needs drop", env.value)
     }
 }
 
-impl<'tcx> QueryDescription for queries::layout_raw<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::layout_raw<'tcx> {
     fn describe(_tcx: TyCtxt, env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> String {
         format!("computing layout of `{}`", env.value)
     }
 }
 
-impl<'tcx> QueryDescription for queries::super_predicates_of<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::super_predicates_of<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("computing the supertraits of `{}`",
                 tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::erase_regions_ty<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::erase_regions_ty<'tcx> {
     fn describe(_tcx: TyCtxt, ty: Ty<'tcx>) -> String {
         format!("erasing regions from `{:?}`", ty)
     }
 }
 
-impl<'tcx> QueryDescription for queries::type_param_predicates<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::type_param_predicates<'tcx> {
     fn describe(tcx: TyCtxt, (_, def_id): (DefId, DefId)) -> String {
         let id = tcx.hir.as_local_node_id(def_id).unwrap();
         format!("computing the bounds for type parameter `{}`",
@@ -89,452 +100,468 @@
     }
 }
 
-impl<'tcx> QueryDescription for queries::coherent_trait<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::coherent_trait<'tcx> {
     fn describe(tcx: TyCtxt, (_, def_id): (CrateNum, DefId)) -> String {
         format!("coherence checking all impls of trait `{}`",
                 tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::crate_inherent_impls<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::crate_inherent_impls<'tcx> {
     fn describe(_: TyCtxt, k: CrateNum) -> String {
         format!("all inherent impls defined in crate `{:?}`", k)
     }
 }
 
-impl<'tcx> QueryDescription for queries::crate_inherent_impls_overlap_check<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::crate_inherent_impls_overlap_check<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         format!("check for overlap between inherent impls defined in this crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::crate_variances<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::crate_variances<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("computing the variances for items in this crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::mir_shims<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::mir_shims<'tcx> {
     fn describe(tcx: TyCtxt, def: ty::InstanceDef<'tcx>) -> String {
         format!("generating MIR shim for `{}`",
                 tcx.item_path_str(def.def_id()))
     }
 }
 
-impl<'tcx> QueryDescription for queries::privacy_access_levels<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::privacy_access_levels<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         format!("privacy access levels")
     }
 }
 
-impl<'tcx> QueryDescription for queries::typeck_item_bodies<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::typeck_item_bodies<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         format!("type-checking all item bodies")
     }
 }
 
-impl<'tcx> QueryDescription for queries::reachable_set<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::reachable_set<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         format!("reachability")
     }
 }
 
-impl<'tcx> QueryDescription for queries::const_eval<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::const_eval<'tcx> {
     fn describe(tcx: TyCtxt, key: ty::ParamEnvAnd<'tcx, (DefId, &'tcx Substs<'tcx>)>) -> String {
         format!("const-evaluating `{}`", tcx.item_path_str(key.value.0))
     }
 }
 
-impl<'tcx> QueryDescription for queries::mir_keys<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::mir_keys<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         format!("getting a list of all mir_keys")
     }
 }
 
-impl<'tcx> QueryDescription for queries::symbol_name<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::symbol_name<'tcx> {
     fn describe(_tcx: TyCtxt, instance: ty::Instance<'tcx>) -> String {
         format!("computing the symbol for `{}`", instance)
     }
 }
 
-impl<'tcx> QueryDescription for queries::describe_def<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::describe_def<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("describe_def")
     }
 }
 
-impl<'tcx> QueryDescription for queries::def_span<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::def_span<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("def_span")
     }
 }
 
 
-impl<'tcx> QueryDescription for queries::lookup_stability<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::lookup_stability<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("stability")
     }
 }
 
-impl<'tcx> QueryDescription for queries::lookup_deprecation_entry<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::lookup_deprecation_entry<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("deprecation")
     }
 }
 
-impl<'tcx> QueryDescription for queries::item_attrs<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::item_attrs<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("item_attrs")
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_exported_symbol<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_exported_symbol<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("is_exported_symbol")
     }
 }
 
-impl<'tcx> QueryDescription for queries::fn_arg_names<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::fn_arg_names<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("fn_arg_names")
     }
 }
 
-impl<'tcx> QueryDescription for queries::impl_parent<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::impl_parent<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("impl_parent")
     }
 }
 
-impl<'tcx> QueryDescription for queries::trait_of_item<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::trait_of_item<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         bug!("trait_of_item")
     }
 }
 
-impl<'tcx> QueryDescription for queries::item_body_nested_bodies<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::item_body_nested_bodies<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("nested item bodies of `{}`", tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::const_is_rvalue_promotable_to_static<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::const_is_rvalue_promotable_to_static<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("const checking if rvalue is promotable to static `{}`",
             tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::rvalue_promotable_map<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::rvalue_promotable_map<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("checking which parts of `{}` are promotable to static",
                 tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_mir_available<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_mir_available<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("checking if item is mir available: `{}`",
             tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::trans_fulfill_obligation<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::trans_fulfill_obligation<'tcx> {
     fn describe(tcx: TyCtxt, key: (ty::ParamEnv<'tcx>, ty::PolyTraitRef<'tcx>)) -> String {
         format!("checking if `{}` fulfills its obligations", tcx.item_path_str(key.1.def_id()))
     }
 }
 
-impl<'tcx> QueryDescription for queries::trait_impls_of<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::trait_impls_of<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("trait impls of `{}`", tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_object_safe<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_object_safe<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("determine object safety of trait `{}`", tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_const_fn<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_const_fn<'tcx> {
     fn describe(tcx: TyCtxt, def_id: DefId) -> String {
         format!("checking if item is const fn: `{}`", tcx.item_path_str(def_id))
     }
 }
 
-impl<'tcx> QueryDescription for queries::dylib_dependency_formats<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::dylib_dependency_formats<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         "dylib dependency formats of crate".to_string()
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_panic_runtime<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_panic_runtime<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         "checking if the crate is_panic_runtime".to_string()
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_compiler_builtins<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_compiler_builtins<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         "checking if the crate is_compiler_builtins".to_string()
     }
 }
 
-impl<'tcx> QueryDescription for queries::has_global_allocator<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::has_global_allocator<'tcx> {
     fn describe(_: TyCtxt, _: CrateNum) -> String {
         "checking if the crate has_global_allocator".to_string()
     }
 }
 
-impl<'tcx> QueryDescription for queries::extern_crate<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::extern_crate<'tcx> {
     fn describe(_: TyCtxt, _: DefId) -> String {
         "getting crate's ExternCrateData".to_string()
     }
 }
 
-impl<'tcx> QueryDescription for queries::lint_levels<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::lint_levels<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("computing the lint levels for items in this crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::specializes<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::specializes<'tcx> {
     fn describe(_tcx: TyCtxt, _: (DefId, DefId)) -> String {
         format!("computing whether impls specialize one another")
     }
 }
 
-impl<'tcx> QueryDescription for queries::in_scope_traits_map<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::in_scope_traits_map<'tcx> {
     fn describe(_tcx: TyCtxt, _: DefIndex) -> String {
         format!("traits in scope at a block")
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_no_builtins<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_no_builtins<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("test whether a crate has #![no_builtins]")
     }
 }
 
-impl<'tcx> QueryDescription for queries::panic_strategy<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::panic_strategy<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("query a crate's configured panic strategy")
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_profiler_runtime<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_profiler_runtime<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("query a crate is #![profiler_runtime]")
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_sanitizer_runtime<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_sanitizer_runtime<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("query a crate is #![sanitizer_runtime]")
     }
 }
 
-impl<'tcx> QueryDescription for queries::exported_symbol_ids<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::exported_symbol_ids<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up the exported symbols of a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::native_libraries<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::native_libraries<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up the native libraries of a linked crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::plugin_registrar_fn<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::plugin_registrar_fn<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up the plugin registrar for a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::derive_registrar_fn<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::derive_registrar_fn<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up the derive registrar for a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::crate_disambiguator<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::crate_disambiguator<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up the disambiguator a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::crate_hash<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::crate_hash<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up the hash a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::original_crate_name<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::original_crate_name<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up the original name a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::implementations_of_trait<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::implementations_of_trait<'tcx> {
     fn describe(_tcx: TyCtxt, _: (CrateNum, DefId)) -> String {
         format!("looking up implementations of a trait in a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::all_trait_implementations<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::all_trait_implementations<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up all (?) trait implementations")
     }
 }
 
-impl<'tcx> QueryDescription for queries::link_args<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::link_args<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up link arguments for a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::named_region_map<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::named_region_map<'tcx> {
     fn describe(_tcx: TyCtxt, _: DefIndex) -> String {
         format!("looking up a named region")
     }
 }
 
-impl<'tcx> QueryDescription for queries::is_late_bound_map<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::is_late_bound_map<'tcx> {
     fn describe(_tcx: TyCtxt, _: DefIndex) -> String {
         format!("testing if a region is late bound")
     }
 }
 
-impl<'tcx> QueryDescription for queries::object_lifetime_defaults_map<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::object_lifetime_defaults_map<'tcx> {
     fn describe(_tcx: TyCtxt, _: DefIndex) -> String {
         format!("looking up lifetime defaults for a region")
     }
 }
 
-impl<'tcx> QueryDescription for queries::dep_kind<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::dep_kind<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("fetching what a dependency looks like")
     }
 }
 
-impl<'tcx> QueryDescription for queries::crate_name<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::crate_name<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("fetching what a crate is named")
     }
 }
 
-impl<'tcx> QueryDescription for queries::get_lang_items<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::get_lang_items<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("calculating the lang items map")
     }
 }
 
-impl<'tcx> QueryDescription for queries::defined_lang_items<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::defined_lang_items<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("calculating the lang items defined in a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::missing_lang_items<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::missing_lang_items<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("calculating the missing lang items in a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::visible_parent_map<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::visible_parent_map<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("calculating the visible parent map")
     }
 }
 
-impl<'tcx> QueryDescription for queries::missing_extern_crate_item<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::missing_extern_crate_item<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("seeing if we're missing an `extern crate` item for this crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::used_crate_source<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::used_crate_source<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking at the source for a crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::postorder_cnums<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::postorder_cnums<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("generating a postorder list of CrateNums")
     }
 }
 
-impl<'tcx> QueryDescription for queries::maybe_unused_extern_crates<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::maybe_unused_extern_crates<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("looking up all possibly unused extern crates")
     }
 }
 
-impl<'tcx> QueryDescription for queries::stability_index<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::stability_index<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("calculating the stability index for the local crate")
     }
 }
 
-impl<'tcx> QueryDescription for queries::all_crate_nums<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::all_crate_nums<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("fetching all foreign CrateNum instances")
     }
 }
 
-impl<'tcx> QueryDescription for queries::exported_symbols<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::exported_symbols<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("exported_symbols")
     }
 }
 
-impl<'tcx> QueryDescription for queries::collect_and_partition_translation_items<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::collect_and_partition_translation_items<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("collect_and_partition_translation_items")
     }
 }
 
-impl<'tcx> QueryDescription for queries::codegen_unit<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::codegen_unit<'tcx> {
     fn describe(_tcx: TyCtxt, _: InternedString) -> String {
         format!("codegen_unit")
     }
 }
 
-impl<'tcx> QueryDescription for queries::compile_codegen_unit<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::compile_codegen_unit<'tcx> {
     fn describe(_tcx: TyCtxt, _: InternedString) -> String {
         format!("compile_codegen_unit")
     }
 }
 
-impl<'tcx> QueryDescription for queries::output_filenames<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::output_filenames<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("output_filenames")
     }
 }
 
-impl<'tcx> QueryDescription for queries::has_clone_closures<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::has_clone_closures<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("seeing if the crate has enabled `Clone` closures")
     }
 }
 
-impl<'tcx> QueryDescription for queries::vtable_methods<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::vtable_methods<'tcx> {
     fn describe(tcx: TyCtxt, key: ty::PolyTraitRef<'tcx> ) -> String {
         format!("finding all methods for trait {}", tcx.item_path_str(key.def_id()))
     }
 }
 
-impl<'tcx> QueryDescription for queries::has_copy_closures<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::has_copy_closures<'tcx> {
     fn describe(_tcx: TyCtxt, _: CrateNum) -> String {
         format!("seeing if the crate has enabled `Copy` closures")
     }
 }
 
-impl<'tcx> QueryDescription for queries::fully_normalize_monormophic_ty<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::fully_normalize_monormophic_ty<'tcx> {
     fn describe(_tcx: TyCtxt, _: Ty) -> String {
         format!("normalizing types")
     }
 }
+
+impl<'tcx> QueryDescription<'tcx> for queries::typeck_tables_of<'tcx> {
+    #[inline]
+    fn cache_on_disk(def_id: Self::Key) -> bool {
+        def_id.is_local()
+    }
+
+    fn load_from_disk<'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                          id: SerializedDepNodeIndex)
+                          -> Self::Value {
+        let typeck_tables: ty::TypeckTables<'tcx> = tcx.on_disk_query_result_cache
+                                                       .load_query_result(tcx, id);
+        tcx.alloc_tables(typeck_tables)
+    }
+}
+
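The hunk above shows the opt-in pattern for disk caching: a query description overrides `cache_on_disk` (which presumably defaults to `false` elsewhere) to say which keys are worth caching, and `load_from_disk` to rehydrate a cached value. The following is a hypothetical, simplified sketch of that shape, not rustc's actual trait; `TypeckTablesOf`, the `u32` key, and the `String` value are stand-ins for `queries::typeck_tables_of`, `DefId`, and `TypeckTables`.

```rust
// Illustrative sketch only: each query description can override
// `cache_on_disk` (default: don't cache) and, for queries that do cache,
// supply a way to reload the value from the serialized cache.
trait QueryDesc {
    type Key;
    type Value;

    // By default, a query's results are not cached on disk.
    fn cache_on_disk(_key: &Self::Key) -> bool {
        false
    }

    fn load_from_disk(key: &Self::Key) -> Self::Value;
}

struct TypeckTablesOf;

impl QueryDesc for TypeckTablesOf {
    type Key = u32; // stand-in for a DefId
    type Value = String; // stand-in for TypeckTables

    // Stand-in for the `def_id.is_local()` check in the patch:
    // only cache results for "local" keys.
    fn cache_on_disk(key: &u32) -> bool {
        *key < 1000
    }

    fn load_from_disk(key: &u32) -> String {
        format!("tables for {}", key)
    }
}

fn main() {
    assert!(TypeckTablesOf::cache_on_disk(&42));
    assert!(!TypeckTablesOf::cache_on_disk(&5000));
    println!("{}", TypeckTablesOf::load_from_disk(&42));
}
```

In the real patch, only queries whose `cache_on_disk` returns `true` are written out by `encode_query_results`, and a green dep-node can then be satisfied by `load_from_disk` instead of recomputation.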
diff --git a/src/librustc/ty/maps/mod.rs b/src/librustc/ty/maps/mod.rs
index 320f651..2f648e8 100644
--- a/src/librustc/ty/maps/mod.rs
+++ b/src/librustc/ty/maps/mod.rs
@@ -34,7 +34,6 @@
 use traits::Vtable;
 use traits::specialization_graph;
 use ty::{self, CrateInherentImpls, Ty, TyCtxt};
-use ty::layout::{Layout, LayoutError};
 use ty::steal::Steal;
 use ty::subst::Substs;
 use util::nodemap::{DefIdSet, DefIdMap, ItemLocalSet};
@@ -265,7 +264,8 @@
     [] fn is_freeze_raw: is_freeze_dep_node(ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> bool,
     [] fn needs_drop_raw: needs_drop_dep_node(ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> bool,
     [] fn layout_raw: layout_dep_node(ty::ParamEnvAnd<'tcx, Ty<'tcx>>)
-                                  -> Result<&'tcx Layout, LayoutError<'tcx>>,
+                                  -> Result<&'tcx ty::layout::LayoutDetails,
+                                            ty::layout::LayoutError<'tcx>>,
 
     [] fn dylib_dependency_formats: DylibDepFormats(CrateNum)
                                     -> Rc<Vec<(CrateNum, LinkagePreference)>>,
diff --git a/src/librustc/ty/maps/on_disk_cache.rs b/src/librustc/ty/maps/on_disk_cache.rs
index 24ce8fb..53ca9b3 100644
--- a/src/librustc/ty/maps/on_disk_cache.rs
+++ b/src/librustc/ty/maps/on_disk_cache.rs
@@ -9,24 +9,42 @@
 // except according to those terms.
 
 use dep_graph::{DepNodeIndex, SerializedDepNodeIndex};
-use rustc_data_structures::fx::FxHashMap;
-use rustc_data_structures::indexed_vec::Idx;
 use errors::Diagnostic;
+use hir;
+use hir::def_id::{CrateNum, DefIndex, DefId, LocalDefId,
+                  RESERVED_FOR_INCR_COMP_CACHE, LOCAL_CRATE};
+use hir::map::definitions::DefPathHash;
+use middle::cstore::CrateStore;
+use rustc_data_structures::fx::FxHashMap;
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
 use rustc_serialize::{Decodable, Decoder, Encodable, Encoder, opaque,
-                      SpecializedDecoder};
-use session::Session;
-use std::borrow::Cow;
+                      SpecializedDecoder, SpecializedEncoder,
+                      UseSpecializedDecodable, UseSpecializedEncodable};
+use session::{CrateDisambiguator, Session};
 use std::cell::RefCell;
 use std::collections::BTreeMap;
 use std::mem;
+use syntax::ast::NodeId;
 use syntax::codemap::{CodeMap, StableFilemapId};
 use syntax_pos::{BytePos, Span, NO_EXPANSION, DUMMY_SP};
+use ty;
+use ty::codec::{self as ty_codec, TyDecoder, TyEncoder};
+use ty::context::TyCtxt;
+
+// Some magic values used for verifying that encoding and decoding
+// round-tripped correctly. These are basically random numbers.
+const PREV_DIAGNOSTICS_TAG: u64 = 0x1234_5678_A1A1_A1A1;
+const QUERY_RESULT_INDEX_TAG: u64 = 0x1234_5678_C3C3_C3C3;
 
 /// `OnDiskCache` provides an interface to incr. comp. data cached from the
 /// previous compilation session. This data will eventually include the results
 /// of a few selected queries (like `typeck_tables_of` and `mir_optimized`) and
 /// any diagnostics that have been emitted during a query.
 pub struct OnDiskCache<'sess> {
+
+    // The complete cache data in serialized form.
+    serialized_data: Vec<u8>,
+
     // The diagnostics emitted during the previous compilation session.
     prev_diagnostics: FxHashMap<SerializedDepNodeIndex, Vec<Diagnostic>>,
 
@@ -34,68 +52,120 @@
     // compilation session.
     current_diagnostics: RefCell<FxHashMap<DepNodeIndex, Vec<Diagnostic>>>,
 
-    // This will eventually be needed for creating Decoders that can rebase
-    // spans.
-    _prev_filemap_starts: BTreeMap<BytePos, StableFilemapId>,
+    prev_cnums: Vec<(u32, String, CrateDisambiguator)>,
+    cnum_map: RefCell<Option<IndexVec<CrateNum, Option<CrateNum>>>>,
+
+    prev_filemap_starts: BTreeMap<BytePos, StableFilemapId>,
     codemap: &'sess CodeMap,
+
+    // A map from dep-node to the position of the cached query result in
+    // `serialized_data`.
+    query_result_index: FxHashMap<SerializedDepNodeIndex, usize>,
 }
 
 // This type is used only for (de-)serialization.
 #[derive(RustcEncodable, RustcDecodable)]
 struct Header {
     prev_filemap_starts: BTreeMap<BytePos, StableFilemapId>,
+    prev_cnums: Vec<(u32, String, CrateDisambiguator)>,
 }
 
-// This type is used only for (de-)serialization.
-#[derive(RustcEncodable, RustcDecodable)]
-struct Body {
-    diagnostics: Vec<(SerializedDepNodeIndex, Vec<Diagnostic>)>,
-}
+type EncodedPrevDiagnostics = Vec<(SerializedDepNodeIndex, Vec<Diagnostic>)>;
+type EncodedQueryResultIndex = Vec<(SerializedDepNodeIndex, usize)>;
 
 impl<'sess> OnDiskCache<'sess> {
     /// Create a new OnDiskCache instance from the serialized data in `data`.
-    /// Note that the current implementation (which only deals with diagnostics
-    /// so far) will eagerly deserialize the complete cache. Once we are
-    /// dealing with larger amounts of data (i.e. cached query results),
-    /// deserialization will need to happen lazily.
-    pub fn new(sess: &'sess Session, data: &[u8]) -> OnDiskCache<'sess> {
+    pub fn new(sess: &'sess Session, data: Vec<u8>, start_pos: usize) -> OnDiskCache<'sess> {
         debug_assert!(sess.opts.incremental.is_some());
 
-        let mut decoder = opaque::Decoder::new(&data[..], 0);
-        let header = Header::decode(&mut decoder).unwrap();
+        // Decode the header
+        let (header, post_header_pos) = {
+            let mut decoder = opaque::Decoder::new(&data[..], start_pos);
+            let header = Header::decode(&mut decoder)
+                .expect("Error while trying to decode incr. comp. cache header.");
+            (header, decoder.position())
+        };
 
-        let prev_diagnostics: FxHashMap<_, _> = {
+        let (prev_diagnostics, query_result_index) = {
             let mut decoder = CacheDecoder {
-                opaque: decoder,
+                tcx: None,
+                opaque: opaque::Decoder::new(&data[..], post_header_pos),
                 codemap: sess.codemap(),
                 prev_filemap_starts: &header.prev_filemap_starts,
+                cnum_map: &IndexVec::new(),
             };
-            let body = Body::decode(&mut decoder).unwrap();
-            body.diagnostics.into_iter().collect()
+
+            // Decode Diagnostics
+            let prev_diagnostics: FxHashMap<_, _> = {
+                let diagnostics: EncodedPrevDiagnostics =
+                    decode_tagged(&mut decoder, PREV_DIAGNOSTICS_TAG)
+                        .expect("Error while trying to decode previous session \
+                                 diagnostics from incr. comp. cache.");
+                diagnostics.into_iter().collect()
+            };
+
+            // Decode the *position* of the query result index
+            let query_result_index_pos = {
+                let pos_pos = data.len() - IntEncodedWithFixedSize::ENCODED_SIZE;
+                decoder.with_position(pos_pos, |decoder| {
+                    IntEncodedWithFixedSize::decode(decoder)
+                }).expect("Error while trying to decode query result index position.")
+                .0 as usize
+            };
+
+            // Decode the query result index itself
+            let query_result_index: EncodedQueryResultIndex =
+                decoder.with_position(query_result_index_pos, |decoder| {
+                    decode_tagged(decoder, QUERY_RESULT_INDEX_TAG)
+                }).expect("Error while trying to decode query result index.");
+
+            (prev_diagnostics, query_result_index)
         };
 
         OnDiskCache {
+            serialized_data: data,
             prev_diagnostics,
-            _prev_filemap_starts: header.prev_filemap_starts,
+            prev_filemap_starts: header.prev_filemap_starts,
+            prev_cnums: header.prev_cnums,
+            cnum_map: RefCell::new(None),
             codemap: sess.codemap(),
             current_diagnostics: RefCell::new(FxHashMap()),
+            query_result_index: query_result_index.into_iter().collect(),
         }
     }
 
     pub fn new_empty(codemap: &'sess CodeMap) -> OnDiskCache<'sess> {
         OnDiskCache {
+            serialized_data: Vec::new(),
             prev_diagnostics: FxHashMap(),
-            _prev_filemap_starts: BTreeMap::new(),
+            prev_filemap_starts: BTreeMap::new(),
+            prev_cnums: vec![],
+            cnum_map: RefCell::new(None),
             codemap,
             current_diagnostics: RefCell::new(FxHashMap()),
+            query_result_index: FxHashMap(),
         }
     }
 
     pub fn serialize<'a, 'tcx, E>(&self,
+                                  tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                                  cstore: &CrateStore,
                                   encoder: &mut E)
                                   -> Result<(), E::Error>
-        where E: Encoder
-    {
+        where E: ty_codec::TyEncoder
+    {
+        // Serializing the DepGraph should not modify it:
+        let _in_ignore = tcx.dep_graph.in_ignore();
+
+        let mut encoder = CacheEncoder {
+            tcx,
+            encoder,
+            type_shorthands: FxHashMap(),
+            predicate_shorthands: FxHashMap(),
+        };
+
+
+        // Encode the file header
         let prev_filemap_starts: BTreeMap<_, _> = self
             .codemap
             .files()
@@ -103,18 +173,61 @@
             .map(|fm| (fm.start_pos, StableFilemapId::new(fm)))
             .collect();
 
-        Header { prev_filemap_starts }.encode(encoder)?;
+        let sorted_cnums = sorted_cnums_including_local_crate(cstore);
 
-        let diagnostics: Vec<(SerializedDepNodeIndex, Vec<Diagnostic>)> =
+        let prev_cnums: Vec<_> = sorted_cnums.iter().map(|&cnum| {
+            let crate_name = tcx.original_crate_name(cnum).as_str().to_string();
+            let crate_disambiguator = tcx.crate_disambiguator(cnum);
+            (cnum.as_u32(), crate_name, crate_disambiguator)
+        }).collect();
+
+        Header {
+            prev_filemap_starts,
+            prev_cnums,
+        }.encode(&mut encoder)?;
+
+
+        // Encode Diagnostics
+        let diagnostics: EncodedPrevDiagnostics =
             self.current_diagnostics
                 .borrow()
                 .iter()
                 .map(|(k, v)| (SerializedDepNodeIndex::new(k.index()), v.clone()))
                 .collect();
 
-        Body { diagnostics }.encode(encoder)?;
+        encoder.encode_tagged(PREV_DIAGNOSTICS_TAG, &diagnostics)?;
 
-        Ok(())
+
+        // Encode query results
+        let mut query_result_index = EncodedQueryResultIndex::new();
+
+        {
+            use ty::maps::queries::*;
+            let enc = &mut encoder;
+            let qri = &mut query_result_index;
+
+            // Encode TypeckTables
+            encode_query_results::<typeck_tables_of, _>(tcx, enc, qri)?;
+        }
+
+        // Encode query result index
+        let query_result_index_pos = encoder.position() as u64;
+        encoder.encode_tagged(QUERY_RESULT_INDEX_TAG, &query_result_index)?;
+
+        // Encode the position of the query result index as the last 8 bytes of
+        // the file so we know where to look for it.
+        IntEncodedWithFixedSize(query_result_index_pos).encode(&mut encoder)?;
+
+        return Ok(());
+
+        fn sorted_cnums_including_local_crate(cstore: &CrateStore) -> Vec<CrateNum> {
+            let mut cnums = vec![LOCAL_CRATE];
+            cnums.extend_from_slice(&cstore.crates_untracked()[..]);
+            cnums.sort_unstable();
+            // Just to be sure...
+            cnums.dedup();
+            cnums
+        }
     }
 
     /// Load a diagnostic emitted during the previous compilation session.
@@ -135,6 +248,37 @@
         debug_assert!(prev.is_none());
     }
 
+    pub fn load_query_result<'a, 'tcx, T>(&self,
+                                          tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                                          dep_node_index: SerializedDepNodeIndex)
+                                          -> T
+        where T: Decodable
+    {
+        let pos = self.query_result_index[&dep_node_index];
+
+        let mut cnum_map = self.cnum_map.borrow_mut();
+        if cnum_map.is_none() {
+            *cnum_map = Some(Self::compute_cnum_map(tcx, &self.prev_cnums[..]));
+        }
+
+        let mut decoder = CacheDecoder {
+            tcx: Some(tcx),
+            opaque: opaque::Decoder::new(&self.serialized_data[..], pos),
+            codemap: self.codemap,
+            prev_filemap_starts: &self.prev_filemap_starts,
+            cnum_map: cnum_map.as_ref().unwrap(),
+        };
+
+        match decode_tagged(&mut decoder, dep_node_index) {
+            Ok(value) => {
+                value
+            }
+            Err(e) => {
+                bug!("Could not decode cached query result: {}", e)
+            }
+        }
+    }
+
     /// Store a diagnostic emitted during computation of an anonymous query.
     /// Since many anonymous queries can share the same `DepNode`, we aggregate
     /// them -- as opposed to regular queries where we assume that there is a
@@ -150,18 +294,57 @@
 
         x.extend(diagnostics.into_iter());
     }
+
+    // This function builds a mapping from previous-session-CrateNum to
+    // current-session-CrateNum. There might be CrateNums from the previous
+    // Session that don't occur in the current one. For these, the mapping
+    // maps to None.
+    fn compute_cnum_map(tcx: TyCtxt,
+                        prev_cnums: &[(u32, String, CrateDisambiguator)])
+                        -> IndexVec<CrateNum, Option<CrateNum>>
+    {
+        let _in_ignore = tcx.dep_graph.in_ignore();
+
+        let current_cnums = tcx.all_crate_nums(LOCAL_CRATE).iter().map(|&cnum| {
+            let crate_name = tcx.original_crate_name(cnum)
+                                .as_str()
+                                .to_string();
+            let crate_disambiguator = tcx.crate_disambiguator(cnum);
+            ((crate_name, crate_disambiguator), cnum)
+        }).collect::<FxHashMap<_,_>>();
+
+        let map_size = prev_cnums.iter()
+                                 .map(|&(cnum, ..)| cnum)
+                                 .max()
+                                 .unwrap_or(0) + 1;
+        let mut map = IndexVec::new();
+        map.resize(map_size as usize, None);
+
+        for &(prev_cnum, ref crate_name, crate_disambiguator) in prev_cnums {
+            let key = (crate_name.clone(), crate_disambiguator);
+            map[CrateNum::from_u32(prev_cnum)] = current_cnums.get(&key).cloned();
+        }
+
+        map[LOCAL_CRATE] = Some(LOCAL_CRATE);
+        map
+    }
 }
 
+
+//- DECODING -------------------------------------------------------------------
+
 /// A decoder that can read the incr. comp. cache. It is similar to the one
 /// we use for crate metadata decoding in that it can rebase spans and
 /// eventually will also handle things that contain `Ty` instances.
-struct CacheDecoder<'a> {
-    opaque: opaque::Decoder<'a>,
-    codemap: &'a CodeMap,
-    prev_filemap_starts: &'a BTreeMap<BytePos, StableFilemapId>,
+struct CacheDecoder<'a, 'tcx: 'a, 'x> {
+    tcx: Option<TyCtxt<'a, 'tcx, 'tcx>>,
+    opaque: opaque::Decoder<'x>,
+    codemap: &'x CodeMap,
+    prev_filemap_starts: &'x BTreeMap<BytePos, StableFilemapId>,
+    cnum_map: &'x IndexVec<CrateNum, Option<CrateNum>>,
 }
 
-impl<'a> CacheDecoder<'a> {
+impl<'a, 'tcx, 'x> CacheDecoder<'a, 'tcx, 'x> {
     fn find_filemap_prev_bytepos(&self,
                                  prev_bytepos: BytePos)
                                  -> Option<(BytePos, StableFilemapId)> {
@@ -173,47 +356,91 @@
     }
 }
 
-macro_rules! decoder_methods {
-    ($($name:ident -> $ty:ty;)*) => {
-        $(fn $name(&mut self) -> Result<$ty, Self::Error> {
-            self.opaque.$name()
-        })*
+// Decode something that was encoded with encode_tagged() and verify that the
+// tag matches and the correct number of bytes was read.
+fn decode_tagged<'a, 'tcx, D, T, V>(decoder: &mut D,
+                                    expected_tag: T)
+                                    -> Result<V, D::Error>
+    where T: Decodable + Eq + ::std::fmt::Debug,
+          V: Decodable,
+          D: Decoder + ty_codec::TyDecoder<'a, 'tcx>,
+          'tcx: 'a,
+{
+    let start_pos = decoder.position();
+
+    let actual_tag = T::decode(decoder)?;
+    assert_eq!(actual_tag, expected_tag);
+    let value = V::decode(decoder)?;
+    let end_pos = decoder.position();
+
+    let expected_len: u64 = Decodable::decode(decoder)?;
+    assert_eq!((end_pos - start_pos) as u64, expected_len);
+
+    Ok(value)
+}
+
+
+impl<'a, 'tcx: 'a, 'x> ty_codec::TyDecoder<'a, 'tcx> for CacheDecoder<'a, 'tcx, 'x> {
+
+    #[inline]
+    fn tcx(&self) -> TyCtxt<'a, 'tcx, 'tcx> {
+        self.tcx.expect("missing TyCtxt in CacheDecoder")
+    }
+
+    #[inline]
+    fn position(&self) -> usize {
+        self.opaque.position()
+    }
+
+    #[inline]
+    fn peek_byte(&self) -> u8 {
+        self.opaque.data[self.opaque.position()]
+    }
+
+    fn cached_ty_for_shorthand<F>(&mut self,
+                                  shorthand: usize,
+                                  or_insert_with: F)
+                                  -> Result<ty::Ty<'tcx>, Self::Error>
+        where F: FnOnce(&mut Self) -> Result<ty::Ty<'tcx>, Self::Error>
+    {
+        let tcx = self.tcx();
+
+        let cache_key = ty::CReaderCacheKey {
+            cnum: RESERVED_FOR_INCR_COMP_CACHE,
+            pos: shorthand,
+        };
+
+        if let Some(&ty) = tcx.rcache.borrow().get(&cache_key) {
+            return Ok(ty);
+        }
+
+        let ty = or_insert_with(self)?;
+        tcx.rcache.borrow_mut().insert(cache_key, ty);
+        Ok(ty)
+    }
+
+    fn with_position<F, R>(&mut self, pos: usize, f: F) -> R
+        where F: FnOnce(&mut Self) -> R
+    {
+        debug_assert!(pos < self.opaque.data.len());
+
+        let new_opaque = opaque::Decoder::new(self.opaque.data, pos);
+        let old_opaque = mem::replace(&mut self.opaque, new_opaque);
+        let r = f(self);
+        self.opaque = old_opaque;
+        r
+    }
+
+    fn map_encoded_cnum_to_current(&self, cnum: CrateNum) -> CrateNum {
+        self.cnum_map[cnum].unwrap_or_else(|| {
+            bug!("Could not find new CrateNum for {:?}", cnum)
+        })
     }
 }
 
-impl<'sess> Decoder for CacheDecoder<'sess> {
-    type Error = String;
+implement_ty_decoder!( CacheDecoder<'a, 'tcx, 'x> );
 
-    decoder_methods! {
-        read_nil -> ();
-
-        read_u128 -> u128;
-        read_u64 -> u64;
-        read_u32 -> u32;
-        read_u16 -> u16;
-        read_u8 -> u8;
-        read_usize -> usize;
-
-        read_i128 -> i128;
-        read_i64 -> i64;
-        read_i32 -> i32;
-        read_i16 -> i16;
-        read_i8 -> i8;
-        read_isize -> isize;
-
-        read_bool -> bool;
-        read_f64 -> f64;
-        read_f32 -> f32;
-        read_char -> char;
-        read_str -> Cow<str>;
-    }
-
-    fn error(&mut self, err: &str) -> Self::Error {
-        self.opaque.error(err)
-    }
-}
-
-impl<'a> SpecializedDecoder<Span> for CacheDecoder<'a> {
+impl<'a, 'tcx, 'x> SpecializedDecoder<Span> for CacheDecoder<'a, 'tcx, 'x> {
     fn specialized_decode(&mut self) -> Result<Span, Self::Error> {
         let lo = BytePos::decode(self)?;
         let hi = BytePos::decode(self)?;
@@ -229,3 +456,307 @@
         Ok(DUMMY_SP)
     }
 }
+
+// This impl makes sure that we get a runtime error when we try to decode a
+// DefIndex that is not contained in a DefId. Such a case would be problematic
+// because we would not know how to transform the DefIndex to the current
+// context.
+impl<'a, 'tcx, 'x> SpecializedDecoder<DefIndex> for CacheDecoder<'a, 'tcx, 'x> {
+    fn specialized_decode(&mut self) -> Result<DefIndex, Self::Error> {
+        bug!("Trying to decode DefIndex outside the context of a DefId")
+    }
+}
+
+// Both the CrateNum and the DefIndex of a DefId can change in between two
+// compilation sessions. We use the DefPathHash, which is stable across
+// sessions, to map the old DefId to the new one.
+impl<'a, 'tcx, 'x> SpecializedDecoder<DefId> for CacheDecoder<'a, 'tcx, 'x> {
+    fn specialized_decode(&mut self) -> Result<DefId, Self::Error> {
+        // Load the DefPathHash which is what we encoded the DefId as.
+        let def_path_hash = DefPathHash::decode(self)?;
+
+        // Using the DefPathHash, we can look up the new DefId
+        Ok(self.tcx().def_path_hash_to_def_id.as_ref().unwrap()[&def_path_hash])
+    }
+}
+
+impl<'a, 'tcx, 'x> SpecializedDecoder<LocalDefId> for CacheDecoder<'a, 'tcx, 'x> {
+    fn specialized_decode(&mut self) -> Result<LocalDefId, Self::Error> {
+        Ok(LocalDefId::from_def_id(DefId::decode(self)?))
+    }
+}
+
+impl<'a, 'tcx, 'x> SpecializedDecoder<hir::HirId> for CacheDecoder<'a, 'tcx, 'x> {
+    fn specialized_decode(&mut self) -> Result<hir::HirId, Self::Error> {
+        // Load the DefPathHash which is what we encoded the DefIndex as.
+        let def_path_hash = DefPathHash::decode(self)?;
+
+        // Use the DefPathHash to map to the current DefId.
+        let def_id = self.tcx()
+                         .def_path_hash_to_def_id
+                         .as_ref()
+                         .unwrap()[&def_path_hash];
+
+        debug_assert!(def_id.is_local());
+
+        // The ItemLocalId needs no remapping.
+        let local_id = hir::ItemLocalId::decode(self)?;
+
+        // Reconstruct the HirId in the context of the current session.
+        Ok(hir::HirId {
+            owner: def_id.index,
+            local_id
+        })
+    }
+}
+
+// NodeIds are not stable across compilation sessions, so we store them in their
+// HirId representation. This allows us to map them to the current NodeId.
+impl<'a, 'tcx, 'x> SpecializedDecoder<NodeId> for CacheDecoder<'a, 'tcx, 'x> {
+    fn specialized_decode(&mut self) -> Result<NodeId, Self::Error> {
+        let hir_id = hir::HirId::decode(self)?;
+        Ok(self.tcx().hir.hir_to_node_id(hir_id))
+    }
+}
+
+//- ENCODING -------------------------------------------------------------------
+
+struct CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder,
+          'tcx: 'a,
+{
+    tcx: TyCtxt<'a, 'tcx, 'tcx>,
+    encoder: &'enc mut E,
+    type_shorthands: FxHashMap<ty::Ty<'tcx>, usize>,
+    predicate_shorthands: FxHashMap<ty::Predicate<'tcx>, usize>,
+}
+
+impl<'enc, 'a, 'tcx, E> CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    /// Encode something with additional information that allows us to do some
+    /// sanity checks when decoding the data again. This method will first
+    /// encode the specified tag, then the given value, then the number of
+    /// bytes taken up by tag and value. On decoding, we can then verify that
+    /// we get the expected tag and read the expected number of bytes.
+    fn encode_tagged<T: Encodable, V: Encodable>(&mut self,
+                                                 tag: T,
+                                                 value: &V)
+                                                 -> Result<(), E::Error>
+    {
+        use ty::codec::TyEncoder;
+        let start_pos = self.position();
+
+        tag.encode(self)?;
+        value.encode(self)?;
+
+        let end_pos = self.position();
+        ((end_pos - start_pos) as u64).encode(self)
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> ty_codec::TyEncoder for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn position(&self) -> usize {
+        self.encoder.position()
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<CrateNum> for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn specialized_encode(&mut self, cnum: &CrateNum) -> Result<(), Self::Error> {
+        self.emit_u32(cnum.as_u32())
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<ty::Ty<'tcx>> for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn specialized_encode(&mut self, ty: &ty::Ty<'tcx>) -> Result<(), Self::Error> {
+        ty_codec::encode_with_shorthand(self, ty,
+            |encoder| &mut encoder.type_shorthands)
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<ty::GenericPredicates<'tcx>>
+    for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn specialized_encode(&mut self,
+                          predicates: &ty::GenericPredicates<'tcx>)
+                          -> Result<(), Self::Error> {
+        ty_codec::encode_predicates(self, predicates,
+            |encoder| &mut encoder.predicate_shorthands)
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<hir::HirId> for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn specialized_encode(&mut self, id: &hir::HirId) -> Result<(), Self::Error> {
+        let hir::HirId {
+            owner,
+            local_id,
+        } = *id;
+
+        let def_path_hash = self.tcx.hir.definitions().def_path_hash(owner);
+
+        def_path_hash.encode(self)?;
+        local_id.encode(self)
+    }
+}
+
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<DefId> for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn specialized_encode(&mut self, id: &DefId) -> Result<(), Self::Error> {
+        let def_path_hash = self.tcx.def_path_hash(*id);
+        def_path_hash.encode(self)
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<LocalDefId> for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn specialized_encode(&mut self, id: &LocalDefId) -> Result<(), Self::Error> {
+        id.to_def_id().encode(self)
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<DefIndex> for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    fn specialized_encode(&mut self, _: &DefIndex) -> Result<(), Self::Error> {
+        bug!("Encoding DefIndex without context.")
+    }
+}
+
+// NodeIds are not stable across compilation sessions, so we store them in their
+// HirId representation. This allows us to map them to the current NodeId.
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<NodeId> for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    #[inline]
+    fn specialized_encode(&mut self, node_id: &NodeId) -> Result<(), Self::Error> {
+        let hir_id = self.tcx.hir.node_to_hir_id(*node_id);
+        hir_id.encode(self)
+    }
+}
+
+macro_rules! encoder_methods {
+    ($($name:ident($ty:ty);)*) => {
+        $(fn $name(&mut self, value: $ty) -> Result<(), Self::Error> {
+            self.encoder.$name(value)
+        })*
+    }
+}
+
+impl<'enc, 'a, 'tcx, E> Encoder for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    type Error = E::Error;
+
+    fn emit_nil(&mut self) -> Result<(), Self::Error> {
+        Ok(())
+    }
+
+    encoder_methods! {
+        emit_usize(usize);
+        emit_u128(u128);
+        emit_u64(u64);
+        emit_u32(u32);
+        emit_u16(u16);
+        emit_u8(u8);
+
+        emit_isize(isize);
+        emit_i128(i128);
+        emit_i64(i64);
+        emit_i32(i32);
+        emit_i16(i16);
+        emit_i8(i8);
+
+        emit_bool(bool);
+        emit_f64(f64);
+        emit_f32(f32);
+        emit_char(char);
+        emit_str(&str);
+    }
+}
+
+// An integer that will always encode to 8 bytes.
+struct IntEncodedWithFixedSize(u64);
+
+impl IntEncodedWithFixedSize {
+    pub const ENCODED_SIZE: usize = 8;
+}
+
+impl UseSpecializedEncodable for IntEncodedWithFixedSize {}
+impl UseSpecializedDecodable for IntEncodedWithFixedSize {}
+
+impl<'enc, 'a, 'tcx, E> SpecializedEncoder<IntEncodedWithFixedSize>
+for CacheEncoder<'enc, 'a, 'tcx, E>
+    where E: 'enc + ty_codec::TyEncoder
+{
+    fn specialized_encode(&mut self, x: &IntEncodedWithFixedSize) -> Result<(), Self::Error> {
+        let start_pos = self.position();
+        for i in 0 .. IntEncodedWithFixedSize::ENCODED_SIZE {
+            ((x.0 >> (i * 8)) as u8).encode(self)?;
+        }
+        let end_pos = self.position();
+        assert_eq!((end_pos - start_pos), IntEncodedWithFixedSize::ENCODED_SIZE);
+        Ok(())
+    }
+}
+
+impl<'a, 'tcx, 'x> SpecializedDecoder<IntEncodedWithFixedSize>
+for CacheDecoder<'a, 'tcx, 'x> {
+    fn specialized_decode(&mut self) -> Result<IntEncodedWithFixedSize, Self::Error> {
+        let mut value: u64 = 0;
+        let start_pos = self.position();
+
+        for i in 0 .. IntEncodedWithFixedSize::ENCODED_SIZE {
+            let byte: u8 = Decodable::decode(self)?;
+            value |= (byte as u64) << (i * 8);
+        }
+
+        let end_pos = self.position();
+        assert_eq!((end_pos - start_pos), IntEncodedWithFixedSize::ENCODED_SIZE);
+
+        Ok(IntEncodedWithFixedSize(value))
+    }
+}
+
+fn encode_query_results<'enc, 'a, 'tcx, Q, E>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
+                                              encoder: &mut CacheEncoder<'enc, 'a, 'tcx, E>,
+                                              query_result_index: &mut EncodedQueryResultIndex)
+                                              -> Result<(), E::Error>
+    where Q: super::plumbing::GetCacheInternal<'tcx>,
+          E: 'enc + TyEncoder,
+          Q::Value: Encodable,
+{
+    for (key, entry) in Q::get_cache_internal(tcx).map.iter() {
+        if Q::cache_on_disk(key.clone()) {
+            let dep_node = SerializedDepNodeIndex::new(entry.index.index());
+
+            // Record position of the cache entry
+            query_result_index.push((dep_node, encoder.position()));
+
+            // Encode the query result with the SerializedDepNodeIndex
+            // as its tag.
+            encoder.encode_tagged(dep_node, &entry.value)?;
+        }
+    }
+
+    Ok(())
+}
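The `encode_tagged`/`decode_tagged` pair above implements a self-checking framing scheme: write a tag, then the value, then the byte length of tag plus value, so the decoder can assert both the tag and the exact number of bytes consumed. A minimal standalone sketch of that idea, using plain little-endian `u64`s in place of rustc's opaque encoder (all names here are illustrative):

```rust
// Sketch of the tag + value + length-trailer framing used by the cache.
fn encode_u64(buf: &mut Vec<u8>, x: u64) {
    buf.extend_from_slice(&x.to_le_bytes());
}

fn decode_u64(buf: &[u8], pos: &mut usize) -> u64 {
    let mut bytes = [0u8; 8];
    bytes.copy_from_slice(&buf[*pos..*pos + 8]);
    *pos += 8;
    u64::from_le_bytes(bytes)
}

fn encode_tagged(buf: &mut Vec<u8>, tag: u64, value: u64) {
    let start = buf.len();
    encode_u64(buf, tag);
    encode_u64(buf, value);
    // Length trailer covering tag + value, checked on decode.
    let len = (buf.len() - start) as u64;
    encode_u64(buf, len);
}

fn decode_tagged(buf: &[u8], pos: &mut usize, expected_tag: u64) -> u64 {
    let start = *pos;
    let tag = decode_u64(buf, pos);
    assert_eq!(tag, expected_tag, "tag mismatch");
    let value = decode_u64(buf, pos);
    let end = *pos;
    // The trailer itself is read *after* taking the end position,
    // mirroring decode_tagged() in the patch.
    let expected_len = decode_u64(buf, pos);
    assert_eq!((end - start) as u64, expected_len, "length mismatch");
    value
}

fn main() {
    let mut buf = Vec::new();
    encode_tagged(&mut buf, 0x1234_5678_C3C3_C3C3, 42);
    let mut pos = 0;
    let v = decode_tagged(&buf, &mut pos, 0x1234_5678_C3C3_C3C3);
    assert_eq!(v, 42);
}
```

The same "fixed, known size" trick motivates `IntEncodedWithFixedSize`: because the query-result-index position is always exactly 8 bytes, the decoder can find it unconditionally at `data.len() - ENCODED_SIZE` before it knows anything else about the file.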
diff --git a/src/librustc/ty/maps/plumbing.rs b/src/librustc/ty/maps/plumbing.rs
index f5e1f38..1ca8fc6 100644
--- a/src/librustc/ty/maps/plumbing.rs
+++ b/src/librustc/ty/maps/plumbing.rs
@@ -20,13 +20,13 @@
 use ty::item_path;
 
 use rustc_data_structures::fx::{FxHashMap};
-use std::cell::RefMut;
+use std::cell::{Ref, RefMut};
 use std::marker::PhantomData;
 use std::mem;
 use syntax_pos::Span;
 
-pub(super) struct QueryMap<D: QueryDescription> {
-    phantom: PhantomData<D>,
+pub(super) struct QueryMap<'tcx, D: QueryDescription<'tcx>> {
+    phantom: PhantomData<(D, &'tcx ())>,
     pub(super) map: FxHashMap<D::Key, QueryValue<D::Value>>,
 }
 
@@ -46,8 +46,8 @@
     }
 }
 
-impl<M: QueryDescription> QueryMap<M> {
-    pub(super) fn new() -> QueryMap<M> {
+impl<'tcx, M: QueryDescription<'tcx>> QueryMap<'tcx, M> {
+    pub(super) fn new() -> QueryMap<'tcx, M> {
         QueryMap {
             phantom: PhantomData,
             map: FxHashMap(),
@@ -55,6 +55,11 @@
     }
 }
 
+pub(super) trait GetCacheInternal<'tcx>: QueryDescription<'tcx> + Sized {
+    fn get_cache_internal<'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>)
+                              -> Ref<'a, QueryMap<'tcx, Self>>;
+}
+
 pub(super) struct CycleError<'a, 'tcx: 'a> {
     span: Span,
     cycle: RefMut<'a, [(Span, Query<'tcx>)]>,
@@ -242,6 +247,13 @@
             type Value = $V;
         }
 
+        impl<$tcx> GetCacheInternal<$tcx> for queries::$name<$tcx> {
+            fn get_cache_internal<'a>(tcx: TyCtxt<'a, $tcx, $tcx>)
+                                      -> ::std::cell::Ref<'a, QueryMap<$tcx, Self>> {
+                tcx.maps.$name.borrow()
+            }
+        }
+
         impl<'a, $tcx, 'lcx> queries::$name<$tcx> {
 
             #[allow(unused)]
@@ -379,18 +391,26 @@
             {
                 debug_assert!(tcx.dep_graph.is_green(dep_node_index));
 
-                // We don't do any caching yet, so recompute.
-                // The diagnostics for this query have already been promoted to
-                // the current session during try_mark_green(), so we can ignore
-                // them here.
-                let (result, _) = tcx.cycle_check(span, Query::$name(key), || {
-                    tcx.sess.diagnostic().track_diagnostics(|| {
-                        // The dep-graph for this computation is already in place
-                        tcx.dep_graph.with_ignore(|| {
-                            Self::compute_result(tcx, key)
+                let result = if tcx.sess.opts.debugging_opts.incremental_queries &&
+                                Self::cache_on_disk(key) {
+                    let prev_dep_node_index =
+                        tcx.dep_graph.prev_dep_node_index_of(dep_node);
+                    Self::load_from_disk(tcx.global_tcx(), prev_dep_node_index)
+                } else {
+                    let (result, _) = tcx.cycle_check(span, Query::$name(key), || {
+                        // The diagnostics for this query have already been
+                        // promoted to the current session during
+                        // try_mark_green(), so we can ignore them here.
+                        tcx.sess.diagnostic().track_diagnostics(|| {
+                            // The dep-graph for this computation is already in
+                            // place
+                            tcx.dep_graph.with_ignore(|| {
+                                Self::compute_result(tcx, key)
+                            })
                         })
-                    })
-                })?;
+                    })?;
+                    result
+                };
 
                 // If -Zincremental-verify-ich is specified, re-hash results from
                 // the cache and make sure that they have the expected fingerprint.
@@ -547,7 +567,7 @@
         pub struct Maps<$tcx> {
             providers: IndexVec<CrateNum, Providers<$tcx>>,
             query_stack: RefCell<Vec<(Span, Query<$tcx>)>>,
-            $($(#[$attr])*  $name: RefCell<QueryMap<queries::$name<$tcx>>>,)*
+            $($(#[$attr])*  $name: RefCell<QueryMap<$tcx, queries::$name<$tcx>>>,)*
         }
     };
 }
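The `QueryMap` change above threads a `'tcx` lifetime through the cache type, using `PhantomData<(D, &'tcx ())>` so the struct "uses" the lifetime without storing anything at runtime. The following is a minimal, self-contained sketch of that pattern; `Description`, `QueryMap`, and `TypeOfQuery` here are simplified stand-ins for rustc's `QueryDescription` machinery, not the actual compiler API.

```rust
use std::collections::HashMap;
use std::marker::PhantomData;

// Stand-in for `QueryDescription<'tcx>`: a query description whose key and
// value types may themselves borrow from a `'tcx` arena.
trait Description<'tcx> {
    type Key: std::hash::Hash + Eq;
    type Value;
}

// Parameterizing the cache over `'tcx` requires the struct to mention the
// lifetime somewhere; `PhantomData<(D, &'tcx ())>` records both the
// description type and the lifetime with zero runtime cost.
struct QueryMap<'tcx, D: Description<'tcx>> {
    phantom: PhantomData<(D, &'tcx ())>,
    map: HashMap<D::Key, D::Value>,
}

impl<'tcx, D: Description<'tcx>> QueryMap<'tcx, D> {
    fn new() -> Self {
        QueryMap { phantom: PhantomData, map: HashMap::new() }
    }
}

// Hypothetical query: "type of item N", where the value borrows arena data.
struct TypeOfQuery;
impl<'tcx> Description<'tcx> for TypeOfQuery {
    type Key = u32;          // e.g. a DefIndex
    type Value = &'tcx str;  // e.g. a Ty<'tcx> borrowed from the arena
}

fn demo() -> String {
    let arena_string = String::from("i32"); // plays the role of arena data
    let mut cache: QueryMap<TypeOfQuery> = QueryMap::new();
    cache.map.insert(0, &arena_string);
    cache.map[&0].to_string()
}

fn main() {
    assert_eq!(demo(), "i32");
    println!("ok");
}
```

The tuple in `PhantomData` is one idiomatic way to mark several otherwise-unused parameters at once; without it, the compiler would reject the unused `'tcx` and `D` parameters.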
diff --git a/src/librustc/ty/mod.rs b/src/librustc/ty/mod.rs
index bf1cc68..48ec92a 100644
--- a/src/librustc/ty/mod.rs
+++ b/src/librustc/ty/mod.rs
@@ -17,7 +17,7 @@
 
 use hir::{map as hir_map, FreevarMap, TraitMap};
 use hir::def::{Def, CtorKind, ExportMap};
-use hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX, LOCAL_CRATE};
+use hir::def_id::{CrateNum, DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX, LOCAL_CRATE};
 use hir::map::DefPathData;
 use ich::StableHashingContext;
 use middle::const_val::ConstVal;
@@ -89,6 +89,7 @@
 pub mod adjustment;
 pub mod binding;
 pub mod cast;
+#[macro_use]
 pub mod codec;
 pub mod error;
 mod erase_regions;
@@ -144,6 +145,15 @@
 }
 
 impl AssociatedItemContainer {
+    /// Asserts that this is the def-id of an associated item declared
+    /// in a trait, and returns the trait def-id.
+    pub fn assert_trait(&self) -> DefId {
+        match *self {
+            TraitContainer(id) => id,
+            _ => bug!("associated item has wrong container type: {:?}", self)
+        }
+    }
+
     pub fn id(&self) -> DefId {
         match *self {
             TraitContainer(id) => id,
@@ -573,7 +583,7 @@
 #[derive(Clone, Copy, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable)]
 pub struct UpvarId {
     pub var_id: hir::HirId,
-    pub closure_expr_id: DefIndex,
+    pub closure_expr_id: LocalDefId,
 }
 
 #[derive(Clone, PartialEq, Eq, Hash, Debug, RustcEncodable, RustcDecodable, Copy)]
@@ -895,6 +905,12 @@
     ConstEvaluatable(DefId, &'tcx Substs<'tcx>),
 }
 
+impl<'tcx> AsRef<Predicate<'tcx>> for Predicate<'tcx> {
+    fn as_ref(&self) -> &Predicate<'tcx> {
+        self
+    }
+}
+
 impl<'a, 'gcx, 'tcx> Predicate<'tcx> {
     /// Performs a substitution suitable for going from a
     /// poly-trait-ref to supertraits that must hold if that
@@ -1200,6 +1216,25 @@
             }
         }
     }
+
+    pub fn to_opt_type_outlives(&self) -> Option<PolyTypeOutlivesPredicate<'tcx>> {
+        match *self {
+            Predicate::TypeOutlives(data) => {
+                Some(data)
+            }
+            Predicate::Trait(..) |
+            Predicate::Projection(..) |
+            Predicate::Equate(..) |
+            Predicate::Subtype(..) |
+            Predicate::RegionOutlives(..) |
+            Predicate::WellFormed(..) |
+            Predicate::ObjectSafe(..) |
+            Predicate::ClosureKind(..) |
+            Predicate::ConstEvaluatable(..) => {
+                None
+            }
+        }
+    }
 }
 
 /// Represents the bounds declared on a particular set of type
@@ -1639,11 +1674,6 @@
         self.variants.iter().flat_map(|v| v.fields.iter())
     }
 
-    #[inline]
-    pub fn is_univariant(&self) -> bool {
-        self.variants.len() == 1
-    }
-
     pub fn is_payloadfree(&self) -> bool {
         !self.variants.is_empty() &&
             self.variants.iter().all(|v| v.fields.is_empty())
@@ -2587,9 +2617,10 @@
 }
 
 pub fn provide(providers: &mut ty::maps::Providers) {
-    util::provide(providers);
     context::provide(providers);
     erase_regions::provide(providers);
+    layout::provide(providers);
+    util::provide(providers);
     *providers = ty::maps::Providers {
         associated_item,
         associated_item_def_ids,
diff --git a/src/librustc/ty/structural_impls.rs b/src/librustc/ty/structural_impls.rs
index 5f1448c..e5c24b4 100644
--- a/src/librustc/ty/structural_impls.rs
+++ b/src/librustc/ty/structural_impls.rs
@@ -428,7 +428,8 @@
             TyParamDefaultMismatch(ref x) => {
                 return tcx.lift(x).map(TyParamDefaultMismatch)
             }
-            ExistentialMismatch(ref x) => return tcx.lift(x).map(ExistentialMismatch)
+            ExistentialMismatch(ref x) => return tcx.lift(x).map(ExistentialMismatch),
+            OldStyleLUB(ref x) => return tcx.lift(x).map(OldStyleLUB),
         })
     }
 }
@@ -1174,6 +1175,7 @@
             Sorts(x) => Sorts(x.fold_with(folder)),
             TyParamDefaultMismatch(ref x) => TyParamDefaultMismatch(x.fold_with(folder)),
             ExistentialMismatch(x) => ExistentialMismatch(x.fold_with(folder)),
+            OldStyleLUB(ref x) => OldStyleLUB(x.fold_with(folder)),
         }
     }
 
@@ -1191,6 +1193,7 @@
                 b.visit_with(visitor)
             },
             Sorts(x) => x.visit_with(visitor),
+            OldStyleLUB(ref x) => x.visit_with(visitor),
             TyParamDefaultMismatch(ref x) => x.visit_with(visitor),
             ExistentialMismatch(x) => x.visit_with(visitor),
             Mismatch |
diff --git a/src/librustc/ty/sty.rs b/src/librustc/ty/sty.rs
index a60cad0..65406c3 100644
--- a/src/librustc/ty/sty.rs
+++ b/src/librustc/ty/sty.rs
@@ -14,6 +14,7 @@
 
 use middle::const_val::ConstVal;
 use middle::region;
+use rustc_data_structures::indexed_vec::Idx;
 use ty::subst::{Substs, Subst};
 use ty::{self, AdtDef, TypeFlags, Ty, TyCtxt, TypeFoldable};
 use ty::{Slice, TyS};
@@ -898,6 +899,18 @@
     pub index: u32,
 }
 
+// FIXME: We could convert this to use `newtype_index!`
+impl Idx for RegionVid {
+    fn new(value: usize) -> Self {
+        assert!(value < ::std::u32::MAX as usize);
+        RegionVid { index: value as u32 }
+    }
+
+    fn index(self) -> usize {
+        self.index as usize
+    }
+}
+
 #[derive(Clone, Copy, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable, PartialOrd, Ord)]
 pub struct SkolemizedRegionVid {
     pub index: u32,
@@ -1037,6 +1050,35 @@
 
         flags
     }
+
+    /// Given an early-bound or free region, returns the def-id where it was bound.
+    /// For example, consider the regions in this snippet of code:
+    ///
+    /// ```
+    /// impl<'a> Foo {
+    ///      ^^ -- early bound, declared on an impl
+    ///
+    ///     fn bar<'b, 'c>(x: &self, y: &'b u32, z: &'c u64) where 'static: 'c
+    ///            ^^  ^^     ^ anonymous, late-bound
+    ///            |   early-bound, appears in where-clauses
+    ///            late-bound, appears only in fn args
+    ///     {..}
+    /// }
+    /// ```
+    ///
+    /// Here, `free_region_binding_scope('a)` would return the def-id
+    /// of the impl, and for all the other highlighted regions, it
+    /// would return the def-id of the function. In other cases (not shown), this
+    /// function might return the def-id of a closure.
+    pub fn free_region_binding_scope(&self, tcx: TyCtxt<'_, '_, '_>) -> DefId {
+        match self {
+            ty::ReEarlyBound(br) => {
+                tcx.parent_def_id(br.def_id).unwrap()
+            }
+            ty::ReFree(fr) => fr.scope,
+            _ => bug!("free_region_binding_scope invoked on inappropriate region: {:?}", self),
+        }
+    }
 }
 
 /// Type utilities
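The `impl Idx for RegionVid` above (flagged with a FIXME to use `newtype_index!`) follows rustc's newtype-index pattern: a `u32` wrapper that converts to and from `usize` so it can index typed vectors. A minimal local sketch, with `Idx` and `RegionVid` reimplemented here rather than taken from the compiler crates:

```rust
// Minimal sketch of the `Idx` newtype-index pattern: a u32 wrapper usable
// as a vector index, with a checked conversion from usize.
trait Idx: Copy {
    fn new(value: usize) -> Self;
    fn index(self) -> usize;
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct RegionVid { index: u32 }

impl Idx for RegionVid {
    fn new(value: usize) -> Self {
        // Guard against silently truncating indices that don't fit in u32.
        assert!(value < u32::MAX as usize);
        RegionVid { index: value as u32 }
    }
    fn index(self) -> usize {
        self.index as usize
    }
}

fn main() {
    let regions = vec!["'static", "'a", "'b"];
    let vid = RegionVid::new(2);
    assert_eq!(regions[vid.index()], "'b");
    println!("region {:?} -> {}", vid, regions[vid.index()]);
}
```

Using a distinct index type per table means a `RegionVid` cannot accidentally index a vector of, say, basic blocks; the type system catches the mix-up at compile time while the runtime representation stays a bare `u32`.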
diff --git a/src/librustc/ty/util.rs b/src/librustc/ty/util.rs
index a0219f2..23dd3f1 100644
--- a/src/librustc/ty/util.rs
+++ b/src/librustc/ty/util.rs
@@ -19,7 +19,6 @@
 use traits::{self, Reveal};
 use ty::{self, Ty, TyCtxt, TypeFoldable};
 use ty::fold::TypeVisitor;
-use ty::layout::{Layout, LayoutError};
 use ty::subst::{Subst, Kind};
 use ty::TypeVariants::*;
 use util::common::ErrorReported;
@@ -852,30 +851,6 @@
         tcx.needs_drop_raw(param_env.and(self))
     }
 
-    /// Computes the layout of a type. Note that this implicitly
-    /// executes in "reveal all" mode.
-    #[inline]
-    pub fn layout<'lcx>(&'tcx self,
-                        tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                        param_env: ty::ParamEnv<'tcx>)
-                        -> Result<&'tcx Layout, LayoutError<'tcx>> {
-        let ty = tcx.erase_regions(&self);
-        let layout = tcx.layout_raw(param_env.reveal_all().and(ty));
-
-        // NB: This recording is normally disabled; when enabled, it
-        // can however trigger recursive invocations of `layout()`.
-        // Therefore, we execute it *after* the main query has
-        // completed, to avoid problems around recursive structures
-        // and the like. (Admitedly, I wasn't able to reproduce a problem
-        // here, but it seems like the right thing to do. -nmatsakis)
-        if let Ok(l) = layout {
-            Layout::record_layout_for_printing(tcx, ty, param_env, l);
-        }
-
-        layout
-    }
-
-
     /// Check whether a type is representable. This means it cannot contain unboxed
     /// structural recursion. This check is needed for structs and enums.
     pub fn is_representable(&'tcx self,
@@ -1184,26 +1159,6 @@
     }
 }
 
-fn layout_raw<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                        query: ty::ParamEnvAnd<'tcx, Ty<'tcx>>)
-                        -> Result<&'tcx Layout, LayoutError<'tcx>>
-{
-    let (param_env, ty) = query.into_parts();
-
-    let rec_limit = tcx.sess.recursion_limit.get();
-    let depth = tcx.layout_depth.get();
-    if depth > rec_limit {
-        tcx.sess.fatal(
-            &format!("overflow representing the type `{}`", ty));
-    }
-
-    tcx.layout_depth.set(depth+1);
-    let layout = Layout::compute_uncached(tcx, param_env, ty);
-    tcx.layout_depth.set(depth);
-
-    layout
-}
-
 pub enum ExplicitSelf<'tcx> {
     ByValue,
     ByReference(ty::Region<'tcx>, hir::Mutability),
@@ -1262,7 +1217,6 @@
         is_sized_raw,
         is_freeze_raw,
         needs_drop_raw,
-        layout_raw,
         ..*providers
     };
 }
diff --git a/src/librustc_borrowck/borrowck/mod.rs b/src/librustc_borrowck/borrowck/mod.rs
index 6be0787..7b09e45 100644
--- a/src/librustc_borrowck/borrowck/mod.rs
+++ b/src/librustc_borrowck/borrowck/mod.rs
@@ -29,7 +29,7 @@
 use rustc::middle::dataflow::DataFlowOperator;
 use rustc::middle::dataflow::KillFrom;
 use rustc::middle::borrowck::BorrowCheckResult;
-use rustc::hir::def_id::{DefId, DefIndex};
+use rustc::hir::def_id::{DefId, LocalDefId};
 use rustc::middle::expr_use_visitor as euv;
 use rustc::middle::mem_categorization as mc;
 use rustc::middle::mem_categorization::Categorization;
@@ -376,9 +376,9 @@
     LpInterior(Option<DefId>, InteriorKind),
 }
 
-fn closure_to_block(closure_id: DefIndex,
+fn closure_to_block(closure_id: LocalDefId,
                     tcx: TyCtxt) -> ast::NodeId {
-    let closure_id = tcx.hir.def_index_to_node_id(closure_id);
+    let closure_id = tcx.hir.local_def_id_to_node_id(closure_id);
     match tcx.hir.get(closure_id) {
         hir_map::NodeExpr(expr) => match expr.node {
             hir::ExprClosure(.., body_id, _, _) => {
@@ -1101,7 +1101,7 @@
                 } else {
                     "consider changing this closure to take self by mutable reference"
                 };
-                let node_id = self.tcx.hir.def_index_to_node_id(id);
+                let node_id = self.tcx.hir.local_def_id_to_node_id(id);
                 let help_span = self.tcx.hir.span(node_id);
                 self.cannot_act_on_capture_in_sharable_fn(span,
                                                           prefix,
@@ -1297,7 +1297,7 @@
                 };
                 if kind == ty::ClosureKind::Fn {
                     let closure_node_id =
-                        self.tcx.hir.def_index_to_node_id(upvar_id.closure_expr_id);
+                        self.tcx.hir.local_def_id_to_node_id(upvar_id.closure_expr_id);
                     db.span_help(self.tcx.hir.span(closure_node_id),
                                  "consider changing this closure to take \
                                   self by mutable reference");
diff --git a/src/librustc_const_eval/_match.rs b/src/librustc_const_eval/_match.rs
index 6ebe3c6..33d9bfa 100644
--- a/src/librustc_const_eval/_match.rs
+++ b/src/librustc_const_eval/_match.rs
@@ -255,7 +255,7 @@
         match self {
             &Variant(vid) => adt.variant_index_with_id(vid),
             &Single => {
-                assert_eq!(adt.variants.len(), 1);
+                assert!(!adt.is_enum());
                 0
             }
             _ => bug!("bad constructor {:?} for adt {:?}", self, adt)
@@ -356,7 +356,7 @@
                     }).collect();
 
                     if let ty::TyAdt(adt, substs) = ty.sty {
-                        if adt.variants.len() > 1 {
+                        if adt.is_enum() {
                             PatternKind::Variant {
                                 adt_def: adt,
                                 substs,
@@ -444,7 +444,7 @@
                 (0..pcx.max_slice_length+1).map(|length| Slice(length)).collect()
             }
         }
-        ty::TyAdt(def, substs) if def.is_enum() && def.variants.len() != 1 => {
+        ty::TyAdt(def, substs) if def.is_enum() => {
             def.variants.iter()
                 .filter(|v| !cx.is_variant_uninhabited(v, substs))
                 .map(|v| Variant(v.did))
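The hunks above replace `adt.variants.len() > 1` / `is_univariant()` checks with `adt.is_enum()`. The distinction matters because a one-variant *enum* is still an enum for pattern-matching purposes. A toy illustration, with `AdtDef` and `AdtKind` as simplified local stand-ins for the rustc types:

```rust
// Sketch of why `is_enum()` is the better predicate: structs and unions
// always have exactly one variant, so "more than one variant" conflates
// "is an enum" with "is a multi-variant enum".
#[derive(Debug, PartialEq)]
enum AdtKind { Struct, Union, Enum }

struct AdtDef { kind: AdtKind, variants: Vec<&'static str> }

impl AdtDef {
    fn is_enum(&self) -> bool { self.kind == AdtKind::Enum }
}

fn main() {
    let s = AdtDef { kind: AdtKind::Struct, variants: vec!["Point"] };
    let e = AdtDef { kind: AdtKind::Enum, variants: vec!["Some", "None"] };
    assert!(!s.is_enum());
    assert!(e.is_enum());

    // A one-variant enum: `variants.len() > 1` would misclassify it as
    // non-enum, but its patterns still need a `Variant` constructor.
    let one = AdtDef { kind: AdtKind::Enum, variants: vec!["Only"] };
    assert!(one.is_enum());
    println!("ok: {} variants", one.variants.len());
}
```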
diff --git a/src/librustc_const_eval/eval.rs b/src/librustc_const_eval/eval.rs
index 6571569..a548c1d 100644
--- a/src/librustc_const_eval/eval.rs
+++ b/src/librustc_const_eval/eval.rs
@@ -17,6 +17,7 @@
 use rustc::hir::def::{Def, CtorKind};
 use rustc::hir::def_id::DefId;
 use rustc::ty::{self, Ty, TyCtxt};
+use rustc::ty::layout::LayoutOf;
 use rustc::ty::maps::Providers;
 use rustc::ty::util::IntTypeExt;
 use rustc::ty::subst::{Substs, Subst};
@@ -313,18 +314,18 @@
           if tcx.fn_sig(def_id).abi() == Abi::RustIntrinsic {
             let layout_of = |ty: Ty<'tcx>| {
                 let ty = tcx.erase_regions(&ty);
-                tcx.at(e.span).layout_raw(cx.param_env.reveal_all().and(ty)).map_err(|err| {
+                (tcx.at(e.span), cx.param_env).layout_of(ty).map_err(|err| {
                     ConstEvalErr { span: e.span, kind: LayoutError(err) }
                 })
             };
             match &tcx.item_name(def_id)[..] {
                 "size_of" => {
-                    let size = layout_of(substs.type_at(0))?.size(tcx).bytes();
+                    let size = layout_of(substs.type_at(0))?.size.bytes();
                     return Ok(mk_const(Integral(Usize(ConstUsize::new(size,
                         tcx.sess.target.usize_ty).unwrap()))));
                 }
                 "min_align_of" => {
-                    let align = layout_of(substs.type_at(0))?.align(tcx).abi();
+                    let align = layout_of(substs.type_at(0))?.align.abi();
                     return Ok(mk_const(Integral(Usize(ConstUsize::new(align,
                         tcx.sess.target.usize_ty).unwrap()))));
                 }
diff --git a/src/librustc_const_eval/pattern.rs b/src/librustc_const_eval/pattern.rs
index d7a16e9..cfbb962 100644
--- a/src/librustc_const_eval/pattern.rs
+++ b/src/librustc_const_eval/pattern.rs
@@ -150,7 +150,7 @@
                         Some(&adt_def.variants[variant_index])
                     }
                     _ => if let ty::TyAdt(adt, _) = self.ty.sty {
-                        if adt.is_univariant() {
+                        if !adt.is_enum() {
                             Some(&adt.variants[0])
                         } else {
                             None
@@ -598,7 +598,7 @@
             Def::Variant(variant_id) | Def::VariantCtor(variant_id, ..) => {
                 let enum_id = self.tcx.parent_def_id(variant_id).unwrap();
                 let adt_def = self.tcx.adt_def(enum_id);
-                if adt_def.variants.len() > 1 {
+                if adt_def.is_enum() {
                     let substs = match ty.sty {
                         ty::TyAdt(_, substs) |
                         ty::TyFnDef(_, substs) => substs,
diff --git a/src/librustc_data_structures/indexed_vec.rs b/src/librustc_data_structures/indexed_vec.rs
index a733e9d..e2f50c8 100644
--- a/src/librustc_data_structures/indexed_vec.rs
+++ b/src/librustc_data_structures/indexed_vec.rs
@@ -385,6 +385,11 @@
     }
 
     #[inline]
+    pub fn pop(&mut self) -> Option<T> {
+        self.raw.pop()
+    }
+
+    #[inline]
     pub fn len(&self) -> usize {
         self.raw.len()
     }
@@ -411,7 +416,7 @@
     }
 
     #[inline]
-    pub fn iter_enumerated(&self) -> Enumerated<I, slice::Iter<T>>
+    pub fn iter_enumerated(&self) -> Enumerated<I, slice::Iter<'_, T>>
     {
         self.raw.iter().enumerate().map(IntoIdx { _marker: PhantomData })
     }
@@ -427,7 +432,7 @@
     }
 
     #[inline]
-    pub fn iter_enumerated_mut(&mut self) -> Enumerated<I, slice::IterMut<T>>
+    pub fn iter_enumerated_mut(&mut self) -> Enumerated<I, slice::IterMut<'_, T>>
     {
         self.raw.iter_mut().enumerate().map(IntoIdx { _marker: PhantomData })
     }
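The `indexed_vec.rs` hunks above add `pop` and give `iter_enumerated` explicit `'_` lifetimes. The structure being extended is a `Vec` wrapper indexed by a typed index rather than a raw `usize`; the following is a hedged local reimplementation of that idea (names mirror rustc's `IndexVec`, but this is not the compiler's actual code):

```rust
use std::marker::PhantomData;

trait Idx: Copy {
    fn new(v: usize) -> Self;
    fn index(self) -> usize;
}

// A Vec addressed by a typed index `I`, with the `pop` and
// `iter_enumerated` operations the diff touches.
struct IndexVec<I: Idx, T> {
    raw: Vec<T>,
    _marker: PhantomData<fn(I)>,
}

impl<I: Idx, T> IndexVec<I, T> {
    fn new() -> Self {
        IndexVec { raw: Vec::new(), _marker: PhantomData }
    }
    // Returns the typed index of the pushed element.
    fn push(&mut self, t: T) -> I {
        self.raw.push(t);
        I::new(self.raw.len() - 1)
    }
    fn pop(&mut self) -> Option<T> {
        self.raw.pop()
    }
    // Yields (typed index, &element) pairs, like `iter_enumerated`.
    fn iter_enumerated(&self) -> impl Iterator<Item = (I, &T)> {
        self.raw.iter().enumerate().map(|(i, t)| (I::new(i), t))
    }
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct BlockId(u32);
impl Idx for BlockId {
    fn new(v: usize) -> Self { BlockId(v as u32) }
    fn index(self) -> usize { self.0 as usize }
}

fn main() {
    let mut blocks: IndexVec<BlockId, &str> = IndexVec::new();
    let b0 = blocks.push("entry");
    blocks.push("exit");
    assert_eq!(b0, BlockId(0));
    assert_eq!(blocks.pop(), Some("exit"));
    for (id, name) in blocks.iter_enumerated() {
        println!("{:?}: {}", id, name);
    }
}
```

`PhantomData<fn(I)>` keeps the index type as part of the container's type without affecting variance or auto traits through an owned `I`.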
diff --git a/src/librustc_data_structures/lib.rs b/src/librustc_data_structures/lib.rs
index 3a20343..8862ba3 100644
--- a/src/librustc_data_structures/lib.rs
+++ b/src/librustc_data_structures/lib.rs
@@ -31,6 +31,7 @@
 #![feature(i128)]
 #![feature(conservative_impl_trait)]
 #![feature(specialization)]
+#![feature(underscore_lifetimes)]
 
 #![cfg_attr(unix, feature(libc))]
 #![cfg_attr(test, feature(test))]
diff --git a/src/librustc_driver/test.rs b/src/librustc_driver/test.rs
index 9e02065..78ce959 100644
--- a/src/librustc_driver/test.rs
+++ b/src/librustc_driver/test.rs
@@ -353,28 +353,10 @@
         self.infcx.tcx.mk_imm_ref(r, self.tcx().types.isize)
     }
 
-    pub fn t_rptr_static(&self) -> Ty<'tcx> {
-        self.infcx.tcx.mk_imm_ref(self.infcx.tcx.types.re_static,
-                                  self.tcx().types.isize)
-    }
-
-    pub fn t_rptr_empty(&self) -> Ty<'tcx> {
-        self.infcx.tcx.mk_imm_ref(self.infcx.tcx.types.re_empty,
-                                  self.tcx().types.isize)
-    }
-
     pub fn sub(&self, t1: Ty<'tcx>, t2: Ty<'tcx>) -> InferResult<'tcx, ()> {
         self.infcx.at(&ObligationCause::dummy(), self.param_env).sub(t1, t2)
     }
 
-    pub fn lub(&self, t1: Ty<'tcx>, t2: Ty<'tcx>) -> InferResult<'tcx, Ty<'tcx>> {
-        self.infcx.at(&ObligationCause::dummy(), self.param_env).lub(t1, t2)
-    }
-
-    pub fn glb(&self, t1: Ty<'tcx>, t2: Ty<'tcx>) -> InferResult<'tcx, Ty<'tcx>> {
-        self.infcx.at(&ObligationCause::dummy(), self.param_env).glb(t1, t2)
-    }
-
     /// Checks that `t1 <: t2` is true (this may register additional
     /// region checks).
     pub fn check_sub(&self, t1: Ty<'tcx>, t2: Ty<'tcx>) {
@@ -399,37 +381,6 @@
             }
         }
     }
-
-    /// Checks that `LUB(t1,t2) == t_lub`
-    pub fn check_lub(&self, t1: Ty<'tcx>, t2: Ty<'tcx>, t_lub: Ty<'tcx>) {
-        match self.lub(t1, t2) {
-            Ok(InferOk { obligations, value: t }) => {
-                // None of these tests should require nested obligations:
-                assert!(obligations.is_empty());
-
-                self.assert_eq(t, t_lub);
-            }
-            Err(ref e) => panic!("unexpected error in LUB: {}", e),
-        }
-    }
-
-    /// Checks that `GLB(t1,t2) == t_glb`
-    pub fn check_glb(&self, t1: Ty<'tcx>, t2: Ty<'tcx>, t_glb: Ty<'tcx>) {
-        debug!("check_glb(t1={}, t2={}, t_glb={})", t1, t2, t_glb);
-        match self.glb(t1, t2) {
-            Err(e) => panic!("unexpected error computing LUB: {:?}", e),
-            Ok(InferOk { obligations, value: t }) => {
-                // None of these tests should require nested obligations:
-                assert!(obligations.is_empty());
-
-                self.assert_eq(t, t_glb);
-
-                // sanity check for good measure:
-                self.assert_subtype(t, t1);
-                self.assert_subtype(t, t2);
-            }
-        }
-    }
 }
 
 #[test]
@@ -508,169 +459,6 @@
     })
 }
 
-#[test]
-fn lub_free_bound_infer() {
-    //! Test result of:
-    //!
-    //!     LUB(fn(_#1), for<'b> fn(&'b isize))
-    //!
-    //! This should yield `fn(&'_ isize)`. We check
-    //! that it yields `fn(&'x isize)` for some free `'x`,
-    //! anyhow.
-
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |mut env| {
-        env.create_simple_region_hierarchy();
-        let t_infer1 = env.infcx.next_ty_var(TypeVariableOrigin::MiscVariable(DUMMY_SP));
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_free1 = env.t_rptr_free(1);
-        env.check_lub(env.t_fn(&[t_infer1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_free1], env.tcx().types.isize));
-    });
-}
-
-#[test]
-fn lub_bound_bound() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |env| {
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_bound2 = env.t_rptr_late_bound(2);
-        env.check_lub(env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound2], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound1], env.tcx().types.isize));
-    })
-}
-
-#[test]
-fn lub_bound_free() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |mut env| {
-        env.create_simple_region_hierarchy();
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_free1 = env.t_rptr_free(1);
-        env.check_lub(env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_free1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_free1], env.tcx().types.isize));
-    })
-}
-
-#[test]
-fn lub_bound_static() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |env| {
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_static = env.t_rptr_static();
-        env.check_lub(env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_static], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_static], env.tcx().types.isize));
-    })
-}
-
-#[test]
-fn lub_bound_bound_inverse_order() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |env| {
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_bound2 = env.t_rptr_late_bound(2);
-        env.check_lub(env.t_fn(&[t_rptr_bound1, t_rptr_bound2], t_rptr_bound1),
-                      env.t_fn(&[t_rptr_bound2, t_rptr_bound1], t_rptr_bound1),
-                      env.t_fn(&[t_rptr_bound1, t_rptr_bound1], t_rptr_bound1));
-    })
-}
-
-#[test]
-fn lub_free_free() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |mut env| {
-        env.create_simple_region_hierarchy();
-        let t_rptr_free1 = env.t_rptr_free(1);
-        let t_rptr_free2 = env.t_rptr_free(2);
-        let t_rptr_static = env.t_rptr_static();
-        env.check_lub(env.t_fn(&[t_rptr_free1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_free2], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_static], env.tcx().types.isize));
-    })
-}
-
-#[test]
-fn lub_returning_scope() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |mut env| {
-        env.create_simple_region_hierarchy();
-        let t_rptr_scope10 = env.t_rptr_scope(10);
-        let t_rptr_scope11 = env.t_rptr_scope(11);
-        let t_rptr_empty = env.t_rptr_empty();
-        env.check_lub(env.t_fn(&[t_rptr_scope10], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_scope11], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_empty], env.tcx().types.isize));
-    });
-}
-
-#[test]
-fn glb_free_free_with_common_scope() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |mut env| {
-        env.create_simple_region_hierarchy();
-        let t_rptr_free1 = env.t_rptr_free(1);
-        let t_rptr_free2 = env.t_rptr_free(2);
-        let t_rptr_scope = env.t_rptr_scope(1);
-        env.check_glb(env.t_fn(&[t_rptr_free1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_free2], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_scope], env.tcx().types.isize));
-    })
-}
-
-#[test]
-fn glb_bound_bound() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |env| {
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_bound2 = env.t_rptr_late_bound(2);
-        env.check_glb(env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound2], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound1], env.tcx().types.isize));
-    })
-}
-
-#[test]
-fn glb_bound_free() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |mut env| {
-        env.create_simple_region_hierarchy();
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_free1 = env.t_rptr_free(1);
-        env.check_glb(env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_free1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound1], env.tcx().types.isize));
-    })
-}
-
-#[test]
-fn glb_bound_free_infer() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |env| {
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_infer1 = env.infcx.next_ty_var(TypeVariableOrigin::MiscVariable(DUMMY_SP));
-
-        // compute GLB(fn(_) -> isize, for<'b> fn(&'b isize) -> isize),
-        // which should yield for<'b> fn(&'b isize) -> isize
-        env.check_glb(env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_infer1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound1], env.tcx().types.isize));
-
-        // as a side-effect, computing GLB should unify `_` with
-        // `&'_ isize`
-        let t_resolve1 = env.infcx.shallow_resolve(t_infer1);
-        match t_resolve1.sty {
-            ty::TyRef(..) => {}
-            _ => {
-                panic!("t_resolve1={:?}", t_resolve1);
-            }
-        }
-    })
-}
-
-#[test]
-fn glb_bound_static() {
-    test_env(EMPTY_SOURCE_STR, errors(&[]), |env| {
-        let t_rptr_bound1 = env.t_rptr_late_bound(1);
-        let t_rptr_static = env.t_rptr_static();
-        env.check_glb(env.t_fn(&[t_rptr_bound1], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_static], env.tcx().types.isize),
-                      env.t_fn(&[t_rptr_bound1], env.tcx().types.isize));
-    })
-}
-
 /// Test substituting a bound region into a function, which introduces another level of binding.
 /// This requires adjusting the Debruijn index.
 #[test]
diff --git a/src/librustc_errors/diagnostic.rs b/src/librustc_errors/diagnostic.rs
index c7e9c82..221c751 100644
--- a/src/librustc_errors/diagnostic.rs
+++ b/src/librustc_errors/diagnostic.rs
@@ -12,7 +12,6 @@
 use SubstitutionPart;
 use Substitution;
 use Level;
-use RenderSpan;
 use std::fmt;
 use syntax_pos::{MultiSpan, Span};
 use snippet::Style;
@@ -40,7 +39,7 @@
     pub level: Level,
     pub message: Vec<(String, Style)>,
     pub span: MultiSpan,
-    pub render_span: Option<RenderSpan>,
+    pub render_span: Option<MultiSpan>,
 }
 
 #[derive(PartialEq, Eq)]
@@ -307,7 +306,7 @@
            level: Level,
            message: &str,
            span: MultiSpan,
-           render_span: Option<RenderSpan>) {
+           render_span: Option<MultiSpan>) {
         let sub = SubDiagnostic {
             level,
             message: vec![(message.to_owned(), Style::NoStyle)],
@@ -323,7 +322,7 @@
                            level: Level,
                            message: Vec<(String, Style)>,
                            span: MultiSpan,
-                           render_span: Option<RenderSpan>) {
+                           render_span: Option<MultiSpan>) {
         let sub = SubDiagnostic {
             level,
             message,
diff --git a/src/librustc_errors/emitter.rs b/src/librustc_errors/emitter.rs
index 0e39f29..57523d2 100644
--- a/src/librustc_errors/emitter.rs
+++ b/src/librustc_errors/emitter.rs
@@ -13,7 +13,6 @@
 use syntax_pos::{DUMMY_SP, FileMap, Span, MultiSpan};
 
 use {Level, CodeSuggestion, DiagnosticBuilder, SubDiagnostic, CodeMapper, DiagnosticId};
-use RenderSpan::*;
 use snippet::{Annotation, AnnotationType, Line, MultilineAnnotation, StyledString, Style};
 use styled_buffer::StyledBuffer;
 
@@ -35,6 +34,7 @@
     fn emit(&mut self, db: &DiagnosticBuilder) {
         let mut primary_span = db.span.clone();
         let mut children = db.children.clone();
+        let mut suggestions: &[_] = &[];
 
         if let Some((sugg, rest)) = db.suggestions.split_first() {
             if rest.is_empty() &&
@@ -60,14 +60,7 @@
                 // to be consistent. We could try to figure out if we can
                 // make one (or the first one) inline, but that would give
                 // undue importance to a semi-random suggestion
-                for sugg in &db.suggestions {
-                    children.push(SubDiagnostic {
-                        level: Level::Help,
-                        message: Vec::new(),
-                        span: MultiSpan::new(),
-                        render_span: Some(Suggestion(sugg.clone())),
-                    });
-                }
+                suggestions = &db.suggestions;
             }
         }
 
@@ -76,7 +69,8 @@
                                    &db.styled_message(),
                                    &db.code,
                                    &primary_span,
-                                   &children);
+                                   &children,
+                                   &suggestions);
     }
 }
 
@@ -1179,7 +1173,8 @@
                              message: &Vec<(String, Style)>,
                              code: &Option<DiagnosticId>,
                              span: &MultiSpan,
-                             children: &Vec<SubDiagnostic>) {
+                             children: &Vec<SubDiagnostic>,
+                             suggestions: &[CodeSuggestion]) {
         let max_line_num = self.get_max_line_num(span, children);
         let max_line_num_len = max_line_num.to_string().len();
 
@@ -1198,37 +1193,23 @@
                 }
                 if !self.short_message {
                     for child in children {
-                        match child.render_span {
-                            Some(FullSpan(ref msp)) => {
-                                match self.emit_message_default(msp,
-                                                                &child.styled_message(),
-                                                                &None,
-                                                                &child.level,
-                                                                max_line_num_len,
-                                                                true) {
-                                    Err(e) => panic!("failed to emit error: {}", e),
-                                    _ => ()
-                                }
-                            }
-                            Some(Suggestion(ref cs)) => {
-                                match self.emit_suggestion_default(cs,
-                                                                   &child.level,
-                                                                   max_line_num_len) {
-                                    Err(e) => panic!("failed to emit error: {}", e),
-                                    _ => ()
-                                }
-                            }
-                            None => {
-                                match self.emit_message_default(&child.span,
-                                                                &child.styled_message(),
-                                                                &None,
-                                                                &child.level,
-                                                                max_line_num_len,
-                                                                true) {
-                                    Err(e) => panic!("failed to emit error: {}", e),
-                                    _ => (),
-                                }
-                            }
+                        let span = child.render_span.as_ref().unwrap_or(&child.span);
+                        match self.emit_message_default(&span,
+                                                        &child.styled_message(),
+                                                        &None,
+                                                        &child.level,
+                                                        max_line_num_len,
+                                                        true) {
+                            Err(e) => panic!("failed to emit error: {}", e),
+                            _ => ()
+                        }
+                    }
+                    for sugg in suggestions {
+                        match self.emit_suggestion_default(sugg,
+                                                           &Level::Help,
+                                                           max_line_num_len) {
+                            Err(e) => panic!("failed to emit error: {}", e),
+                            _ => ()
                         }
                     }
                 }
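The emitter change above collapses a three-arm match on `render_span` into a single fallback plus a separate suggestions loop. A minimal, self-contained sketch of that fallback (hypothetical simplified `Span`/`SubDiagnostic` types; the real code uses `MultiSpan` and passes the result to `emit_message_default`):

```rust
// Each sub-diagnostic may carry an alternate render span; if it does not,
// we fall back to its own span -- replacing the old match over
// Some(FullSpan) / Some(Suggestion) / None.
#[derive(Debug, PartialEq)]
struct Span(&'static str);

struct SubDiagnostic {
    span: Span,
    render_span: Option<Span>,
}

fn span_to_emit(child: &SubDiagnostic) -> &Span {
    child.render_span.as_ref().unwrap_or(&child.span)
}

fn main() {
    let plain = SubDiagnostic { span: Span("own"), render_span: None };
    let full = SubDiagnostic { span: Span("own"), render_span: Some(Span("render")) };
    println!("{:?} {:?}", span_to_emit(&plain), span_to_emit(&full));
}
```

Suggestions no longer ride along as synthetic `SubDiagnostic`s; they travel in their own `&[CodeSuggestion]` slice and are emitted at `Level::Help` in a dedicated loop.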
diff --git a/src/librustc_errors/lib.rs b/src/librustc_errors/lib.rs
index 7bf64d2..e83ac88 100644
--- a/src/librustc_errors/lib.rs
+++ b/src/librustc_errors/lib.rs
@@ -53,20 +53,6 @@
 use syntax_pos::{BytePos, Loc, FileLinesResult, FileMap, FileName, MultiSpan, Span, NO_EXPANSION};
 
 #[derive(Clone, Debug, PartialEq, Hash, RustcEncodable, RustcDecodable)]
-pub enum RenderSpan {
-    /// A FullSpan renders with both with an initial line for the
-    /// message, prefixed by file:linenum, followed by a summary of
-    /// the source code covered by the span.
-    FullSpan(MultiSpan),
-
-    /// A suggestion renders with both with an initial line for the
-    /// message, prefixed by file:linenum, followed by a summary
-    /// of hypothetical source code, where each `String` is spliced
-    /// into the lines in place of the code covered by each span.
-    Suggestion(CodeSuggestion),
-}
-
-#[derive(Clone, Debug, PartialEq, Hash, RustcEncodable, RustcDecodable)]
 pub struct CodeSuggestion {
     /// Each substitute can have multiple variants due to multiple
     /// applicable suggestions
diff --git a/src/librustc_incremental/persist/data.rs b/src/librustc_incremental/persist/data.rs
index fc41785..08f9dba 100644
--- a/src/librustc_incremental/persist/data.rs
+++ b/src/librustc_incremental/persist/data.rs
@@ -11,7 +11,6 @@
 //! The data that we will serialize and deserialize.
 
 use rustc::dep_graph::{WorkProduct, WorkProductId};
-use rustc::hir::def_id::DefIndex;
 use rustc::hir::map::DefPathHash;
 use rustc::middle::cstore::EncodedMetadataHash;
 use rustc_data_structures::fx::FxHashMap;
@@ -58,5 +57,5 @@
     /// is only populated if -Z query-dep-graph is specified. It will be
     /// empty otherwise. Importing crates are perfectly happy with just having
     /// the DefIndex.
-    pub index_map: FxHashMap<DefIndex, DefPathHash>
+    pub index_map: FxHashMap<u32, DefPathHash>
 }
diff --git a/src/librustc_incremental/persist/file_format.rs b/src/librustc_incremental/persist/file_format.rs
index 7d1400b..7d27b84 100644
--- a/src/librustc_incremental/persist/file_format.rs
+++ b/src/librustc_incremental/persist/file_format.rs
@@ -53,19 +53,25 @@
 
 /// Reads the contents of a file with a file header as defined in this module.
 ///
-/// - Returns `Ok(Some(data))` if the file existed and was generated by a
+/// - Returns `Ok(Some((data, pos)))` if the file existed and was generated by a
 ///   compatible compiler version. `data` is the entire contents of the file
-///   *after* the header.
+///   and `pos` points to the first byte after the header.
 /// - Returns `Ok(None)` if the file did not exist or was generated by an
 ///   incompatible version of the compiler.
 /// - Returns `Err(..)` if some kind of IO error occurred while reading the
 ///   file.
-pub fn read_file(sess: &Session, path: &Path) -> io::Result<Option<Vec<u8>>> {
+pub fn read_file(sess: &Session, path: &Path) -> io::Result<Option<(Vec<u8>, usize)>> {
     if !path.exists() {
         return Ok(None);
     }
 
     let mut file = File::open(path)?;
+    let file_size = file.metadata()?.len() as usize;
+
+    let mut data = Vec::with_capacity(file_size);
+    file.read_to_end(&mut data)?;
+
+    let mut file = io::Cursor::new(data);
 
     // Check FILE_MAGIC
     {
@@ -107,10 +113,8 @@
         }
     }
 
-    let mut data = vec![];
-    file.read_to_end(&mut data)?;
-
-    Ok(Some(data))
+    let post_header_start_pos = file.position() as usize;
+    Ok(Some((file.into_inner(), post_header_start_pos)))
 }
 
 fn report_format_mismatch(sess: &Session, file: &Path, message: &str) {
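The `read_file` change reads the whole file into memory once, validates the header through a `Cursor`, and returns the full buffer together with the offset of the first post-header byte (so later decoding can borrow one buffer instead of re-slicing). A self-contained sketch of that shape, with a hypothetical single-magic header check standing in for the real FILE_MAGIC/version checks:

```rust
use std::io::{self, Read};

// Read a header-prefixed buffer: validate the magic bytes via a Cursor,
// then hand back the entire buffer plus the position just past the header.
fn read_with_header(bytes: Vec<u8>, magic: &[u8]) -> io::Result<Option<(Vec<u8>, usize)>> {
    let mut cursor = io::Cursor::new(bytes);
    let mut buf = vec![0u8; magic.len()];
    cursor.read_exact(&mut buf)?;
    if buf != magic {
        // Incompatible file, analogous to a FILE_MAGIC mismatch.
        return Ok(None);
    }
    let pos = cursor.position() as usize;
    Ok(Some((cursor.into_inner(), pos)))
}

fn main() -> io::Result<()> {
    let file = b"MAGCpayload".to_vec();
    if let Some((data, pos)) = read_with_header(file, b"MAGC")? {
        println!("pos={} payload={}", pos, String::from_utf8_lossy(&data[pos..]));
    }
    Ok(())
}
```

Callers in `load.rs` then pass that `pos` straight to `Decoder::new(&data, start_pos)` instead of hard-coding `0`.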
diff --git a/src/librustc_incremental/persist/load.rs b/src/librustc_incremental/persist/load.rs
index 158e9f2..e4bc6b7 100644
--- a/src/librustc_incremental/persist/load.rs
+++ b/src/librustc_incremental/persist/load.rs
@@ -42,9 +42,9 @@
     }
 
     let work_products_path = work_products_path(tcx.sess);
-    if let Some(work_products_data) = load_data(tcx.sess, &work_products_path) {
+    if let Some((work_products_data, start_pos)) = load_data(tcx.sess, &work_products_path) {
         // Decode the list of work_products
-        let mut work_product_decoder = Decoder::new(&work_products_data[..], 0);
+        let mut work_product_decoder = Decoder::new(&work_products_data[..], start_pos);
         let work_products: Vec<SerializedWorkProduct> =
             RustcDecodable::decode(&mut work_product_decoder).unwrap_or_else(|e| {
                 let msg = format!("Error decoding `work-products` from incremental \
@@ -77,9 +77,9 @@
     }
 }
 
-fn load_data(sess: &Session, path: &Path) -> Option<Vec<u8>> {
+fn load_data(sess: &Session, path: &Path) -> Option<(Vec<u8>, usize)> {
     match file_format::read_file(sess, path) {
-        Ok(Some(data)) => return Some(data),
+        Ok(Some(data_and_pos)) => return Some(data_and_pos),
         Ok(None) => {
             // The file either didn't exist or was produced by an incompatible
             // compiler version. Neither is an error.
@@ -126,8 +126,8 @@
 
     debug!("load_prev_metadata_hashes() - File: {}", file_path.display());
 
-    let data = match file_format::read_file(tcx.sess, &file_path) {
-        Ok(Some(data)) => data,
+    let (data, start_pos) = match file_format::read_file(tcx.sess, &file_path) {
+        Ok(Some(data_and_pos)) => data_and_pos,
         Ok(None) => {
             debug!("load_prev_metadata_hashes() - File produced by incompatible \
                     compiler version: {}", file_path.display());
@@ -141,7 +141,7 @@
     };
 
     debug!("load_prev_metadata_hashes() - Decoding hashes");
-    let mut decoder = Decoder::new(&data, 0);
+    let mut decoder = Decoder::new(&data, start_pos);
     let _ = Svh::decode(&mut decoder).unwrap();
     let serialized_hashes = SerializedMetadataHashes::decode(&mut decoder).unwrap();
 
@@ -171,8 +171,8 @@
         return empty
     }
 
-    if let Some(bytes) = load_data(sess, &dep_graph_path(sess)) {
-        let mut decoder = Decoder::new(&bytes, 0);
+    if let Some((bytes, start_pos)) = load_data(sess, &dep_graph_path(sess)) {
+        let mut decoder = Decoder::new(&bytes, start_pos);
         let prev_commandline_args_hash = u64::decode(&mut decoder)
             .expect("Error reading commandline arg hash from cached dep-graph");
 
@@ -184,6 +184,10 @@
             // We can't reuse the cache, purge it.
             debug!("load_dep_graph_new: differing commandline arg hashes");
 
+            delete_all_session_dir_contents(sess)
+                .expect("Failed to delete invalidated incr. comp. session \
+                         directory contents.");
+
             // No need to do any further work
             return empty
         }
@@ -198,12 +202,13 @@
 }
 
 pub fn load_query_result_cache<'sess>(sess: &'sess Session) -> OnDiskCache<'sess> {
-    if sess.opts.incremental.is_none() {
+    if sess.opts.incremental.is_none() ||
+       !sess.opts.debugging_opts.incremental_queries {
         return OnDiskCache::new_empty(sess.codemap());
     }
 
-    if let Some(bytes) = load_data(sess, &query_cache_path(sess)) {
-        OnDiskCache::new(sess, &bytes[..])
+    if let Some((bytes, start_pos)) = load_data(sess, &query_cache_path(sess)) {
+        OnDiskCache::new(sess, bytes, start_pos)
     } else {
         OnDiskCache::new_empty(sess.codemap())
     }
diff --git a/src/librustc_incremental/persist/save.rs b/src/librustc_incremental/persist/save.rs
index 711550c..a438ac4 100644
--- a/src/librustc_incremental/persist/save.rs
+++ b/src/librustc_incremental/persist/save.rs
@@ -9,7 +9,7 @@
 // except according to those terms.
 
 use rustc::dep_graph::{DepGraph, DepKind};
-use rustc::hir::def_id::DefId;
+use rustc::hir::def_id::{DefId, DefIndex};
 use rustc::hir::svh::Svh;
 use rustc::ich::Fingerprint;
 use rustc::middle::cstore::EncodedMetadataHashes;
@@ -69,11 +69,13 @@
                 |e| encode_query_cache(tcx, e));
     });
 
-    time(sess.time_passes(), "persist dep-graph", || {
-        save_in(sess,
-                dep_graph_path(sess),
-                |e| encode_dep_graph(tcx, e));
-    });
+    if tcx.sess.opts.debugging_opts.incremental_queries {
+        time(sess.time_passes(), "persist dep-graph", || {
+            save_in(sess,
+                    dep_graph_path(sess),
+                    |e| encode_dep_graph(tcx, e));
+        });
+    }
 
     dirty_clean::check_dirty_clean_annotations(tcx);
     dirty_clean::check_dirty_clean_metadata(tcx,
@@ -187,6 +189,8 @@
 
         let total_node_count = serialized_graph.nodes.len();
         let total_edge_count = serialized_graph.edge_list_data.len();
+        let (total_edge_reads, total_duplicate_edge_reads) =
+            tcx.dep_graph.edge_deduplication_data();
 
         let mut counts: FxHashMap<_, Stat> = FxHashMap();
 
@@ -224,6 +228,8 @@
         println!("[incremental]");
         println!("[incremental] Total Node Count: {}", total_node_count);
         println!("[incremental] Total Edge Count: {}", total_edge_count);
+        println!("[incremental] Total Edge Reads: {}", total_edge_reads);
+        println!("[incremental] Total Duplicate Edge Reads: {}", total_duplicate_edge_reads);
         println!("[incremental]");
         println!("[incremental]  {:<36}| {:<17}| {:<12}| {:<17}|",
                  "Node Kind",
@@ -268,11 +274,11 @@
 
     if tcx.sess.opts.debugging_opts.query_dep_graph {
         for serialized_hash in &serialized_hashes.entry_hashes {
-            let def_id = DefId::local(serialized_hash.def_index);
+            let def_id = DefId::local(DefIndex::from_u32(serialized_hash.def_index));
 
             // Store entry in the index_map
             let def_path_hash = tcx.def_path_hash(def_id);
-            serialized_hashes.index_map.insert(def_id.index, def_path_hash);
+            serialized_hashes.index_map.insert(def_id.index.as_u32(), def_path_hash);
 
             // Record hash in current_metadata_hashes
             current_metadata_hashes.insert(def_id, serialized_hash.hash);
diff --git a/src/librustc_lint/lib.rs b/src/librustc_lint/lib.rs
index 1a8ad97..97c34a1 100644
--- a/src/librustc_lint/lib.rs
+++ b/src/librustc_lint/lib.rs
@@ -208,10 +208,6 @@
             reference: "issue #36887 <https://github.com/rust-lang/rust/issues/36887>",
         },
         FutureIncompatibleInfo {
-            id: LintId::of(EXTRA_REQUIREMENT_IN_IMPL),
-            reference: "issue #37166 <https://github.com/rust-lang/rust/issues/37166>",
-        },
-        FutureIncompatibleInfo {
             id: LintId::of(LEGACY_DIRECTORY_OWNERSHIP),
             reference: "issue #37872 <https://github.com/rust-lang/rust/issues/37872>",
         },
@@ -276,4 +272,6 @@
         "converted into hard error, see https://github.com/rust-lang/rust/issues/36891");
     store.register_removed("lifetime_underscore",
         "converted into hard error, see https://github.com/rust-lang/rust/issues/36892");
+    store.register_removed("extra_requirement_in_impl",
+        "converted into hard error, see https://github.com/rust-lang/rust/issues/37166");
 }
diff --git a/src/librustc_lint/types.rs b/src/librustc_lint/types.rs
index 8f08987..1356574 100644
--- a/src/librustc_lint/types.rs
+++ b/src/librustc_lint/types.rs
@@ -13,7 +13,7 @@
 use rustc::hir::def_id::DefId;
 use rustc::ty::subst::Substs;
 use rustc::ty::{self, AdtKind, Ty, TyCtxt};
-use rustc::ty::layout::{Layout, Primitive};
+use rustc::ty::layout::{self, LayoutOf};
 use middle::const_val::ConstVal;
 use rustc_const_eval::ConstContext;
 use util::nodemap::FxHashSet;
@@ -748,25 +748,23 @@
                 // sizes only make sense for non-generic types
                 let item_def_id = cx.tcx.hir.local_def_id(it.id);
                 let t = cx.tcx.type_of(item_def_id);
-                let param_env = cx.param_env.reveal_all();
                 let ty = cx.tcx.erase_regions(&t);
-                let layout = ty.layout(cx.tcx, param_env).unwrap_or_else(|e| {
+                let layout = cx.layout_of(ty).unwrap_or_else(|e| {
                     bug!("failed to get layout for `{}`: {}", t, e)
                 });
 
-                if let Layout::General { ref variants, ref size, discr, .. } = *layout {
-                    let discr_size = Primitive::Int(discr).size(cx.tcx).bytes();
+                if let layout::Variants::Tagged { ref variants, ref discr, .. } = layout.variants {
+                    let discr_size = discr.value.size(cx.tcx).bytes();
 
                     debug!("enum `{}` is {} bytes large with layout:\n{:#?}",
-                      t, size.bytes(), layout);
+                      t, layout.size.bytes(), layout);
 
                     let (largest, slargest, largest_index) = enum_definition.variants
                         .iter()
                         .zip(variants)
                         .map(|(variant, variant_layout)| {
                             // Subtract the size of the enum discriminant
-                            let bytes = variant_layout.min_size
-                                .bytes()
+                            let bytes = variant_layout.size.bytes()
                                 .saturating_sub(discr_size);
 
                             debug!("- variant `{}` is {} bytes large", variant.node.name, bytes);
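The variant-size pass above subtracts the discriminant from each variant's total size and tracks the largest and second-largest payloads plus the largest variant's index. A standalone fold over plain `u64` sizes sketching that computation (hypothetical `largest_two` helper; the real code folds over zipped `(variant, variant_layout)` pairs):

```rust
// For each variant, payload bytes = total size minus the discriminant
// (saturating, since a dataless variant may be exactly discriminant-sized).
// Track (largest, second-largest, index-of-largest).
fn largest_two(variant_sizes: &[u64], discr_size: u64) -> (u64, u64, usize) {
    variant_sizes
        .iter()
        .map(|s| s.saturating_sub(discr_size))
        .enumerate()
        .fold((0, 0, 0), |(lg, sl, li), (i, bytes)| {
            if bytes > lg {
                (bytes, lg, i)
            } else if bytes > sl {
                (lg, bytes, li)
            } else {
                (lg, sl, li)
            }
        })
}

fn main() {
    // Variants of 16, 8, and 24 bytes with an 8-byte discriminant.
    println!("{:?}", largest_two(&[16, 8, 24], 8)); // (16, 8, 2)
}
```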
diff --git a/src/librustc_llvm/ffi.rs b/src/librustc_llvm/ffi.rs
index cff584c..aab6139 100644
--- a/src/librustc_llvm/ffi.rs
+++ b/src/librustc_llvm/ffi.rs
@@ -575,8 +575,6 @@
                                    ElementCount: c_uint,
                                    Packed: Bool)
                                    -> TypeRef;
-    pub fn LLVMCountStructElementTypes(StructTy: TypeRef) -> c_uint;
-    pub fn LLVMGetStructElementTypes(StructTy: TypeRef, Dest: *mut TypeRef);
     pub fn LLVMIsPackedStruct(StructTy: TypeRef) -> Bool;
 
     // Operations on array, pointer, and vector types (sequence types)
@@ -585,7 +583,6 @@
     pub fn LLVMVectorType(ElementType: TypeRef, ElementCount: c_uint) -> TypeRef;
 
     pub fn LLVMGetElementType(Ty: TypeRef) -> TypeRef;
-    pub fn LLVMGetArrayLength(ArrayTy: TypeRef) -> c_uint;
     pub fn LLVMGetVectorSize(VectorTy: TypeRef) -> c_uint;
 
     // Operations on other types
@@ -611,10 +608,7 @@
     pub fn LLVMConstNull(Ty: TypeRef) -> ValueRef;
     pub fn LLVMConstICmp(Pred: IntPredicate, V1: ValueRef, V2: ValueRef) -> ValueRef;
     pub fn LLVMConstFCmp(Pred: RealPredicate, V1: ValueRef, V2: ValueRef) -> ValueRef;
-    // only for isize/vector
     pub fn LLVMGetUndef(Ty: TypeRef) -> ValueRef;
-    pub fn LLVMIsNull(Val: ValueRef) -> Bool;
-    pub fn LLVMIsUndef(Val: ValueRef) -> Bool;
 
     // Operations on metadata
     pub fn LLVMMDStringInContext(C: ContextRef, Str: *const c_char, SLen: c_uint) -> ValueRef;
@@ -736,7 +730,9 @@
                                        FunctionTy: TypeRef)
                                        -> ValueRef;
     pub fn LLVMSetFunctionCallConv(Fn: ValueRef, CC: c_uint);
+    pub fn LLVMRustAddAlignmentAttr(Fn: ValueRef, index: c_uint, bytes: u32);
     pub fn LLVMRustAddDereferenceableAttr(Fn: ValueRef, index: c_uint, bytes: u64);
+    pub fn LLVMRustAddDereferenceableOrNullAttr(Fn: ValueRef, index: c_uint, bytes: u64);
     pub fn LLVMRustAddFunctionAttribute(Fn: ValueRef, index: c_uint, attr: Attribute);
     pub fn LLVMRustAddFunctionAttrStringValue(Fn: ValueRef,
                                               index: c_uint,
@@ -766,7 +762,11 @@
     // Operations on call sites
     pub fn LLVMSetInstructionCallConv(Instr: ValueRef, CC: c_uint);
     pub fn LLVMRustAddCallSiteAttribute(Instr: ValueRef, index: c_uint, attr: Attribute);
+    pub fn LLVMRustAddAlignmentCallSiteAttr(Instr: ValueRef, index: c_uint, bytes: u32);
     pub fn LLVMRustAddDereferenceableCallSiteAttr(Instr: ValueRef, index: c_uint, bytes: u64);
+    pub fn LLVMRustAddDereferenceableOrNullCallSiteAttr(Instr: ValueRef,
+                                                        index: c_uint,
+                                                        bytes: u64);
 
     // Operations on load/store instructions (only)
     pub fn LLVMSetVolatile(MemoryAccessInst: ValueRef, volatile: Bool);
@@ -1205,15 +1205,13 @@
     pub fn LLVMRustBuildAtomicLoad(B: BuilderRef,
                                    PointerVal: ValueRef,
                                    Name: *const c_char,
-                                   Order: AtomicOrdering,
-                                   Alignment: c_uint)
+                                   Order: AtomicOrdering)
                                    -> ValueRef;
 
     pub fn LLVMRustBuildAtomicStore(B: BuilderRef,
                                     Val: ValueRef,
                                     Ptr: ValueRef,
-                                    Order: AtomicOrdering,
-                                    Alignment: c_uint)
+                                    Order: AtomicOrdering)
                                     -> ValueRef;
 
     pub fn LLVMRustBuildAtomicCmpXchg(B: BuilderRef,
@@ -1247,23 +1245,6 @@
 
     /// Creates target data from a target layout string.
     pub fn LLVMCreateTargetData(StringRep: *const c_char) -> TargetDataRef;
-    /// Number of bytes clobbered when doing a Store to *T.
-    pub fn LLVMSizeOfTypeInBits(TD: TargetDataRef, Ty: TypeRef) -> c_ulonglong;
-
-    /// Distance between successive elements in an array of T. Includes ABI padding.
-    pub fn LLVMABISizeOfType(TD: TargetDataRef, Ty: TypeRef) -> c_ulonglong;
-
-    /// Returns the preferred alignment of a type.
-    pub fn LLVMPreferredAlignmentOfType(TD: TargetDataRef, Ty: TypeRef) -> c_uint;
-    /// Returns the minimum alignment of a type.
-    pub fn LLVMABIAlignmentOfType(TD: TargetDataRef, Ty: TypeRef) -> c_uint;
-
-    /// Computes the byte offset of the indexed struct element for a
-    /// target.
-    pub fn LLVMOffsetOfElement(TD: TargetDataRef,
-                               StructTy: TypeRef,
-                               Element: c_uint)
-                               -> c_ulonglong;
 
     /// Disposes target data.
     pub fn LLVMDisposeTargetData(TD: TargetDataRef);
@@ -1341,11 +1322,6 @@
                              ElementCount: c_uint,
                              Packed: Bool);
 
-    pub fn LLVMConstNamedStruct(S: TypeRef,
-                                ConstantVals: *const ValueRef,
-                                Count: c_uint)
-                                -> ValueRef;
-
     /// Enables LLVM debug output.
     pub fn LLVMRustSetDebug(Enabled: c_int);
 
diff --git a/src/librustc_llvm/lib.rs b/src/librustc_llvm/lib.rs
index 5ccce8d..592bd62 100644
--- a/src/librustc_llvm/lib.rs
+++ b/src/librustc_llvm/lib.rs
@@ -74,22 +74,19 @@
     }
 }
 
-#[repr(C)]
 #[derive(Copy, Clone)]
 pub enum AttributePlace {
+    ReturnValue,
     Argument(u32),
     Function,
 }
 
 impl AttributePlace {
-    pub fn ReturnValue() -> Self {
-        AttributePlace::Argument(0)
-    }
-
     pub fn as_uint(self) -> c_uint {
         match self {
+            AttributePlace::ReturnValue => 0,
+            AttributePlace::Argument(i) => 1 + i,
             AttributePlace::Function => !0,
-            AttributePlace::Argument(i) => i,
         }
     }
 }
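The `AttributePlace` change replaces the old `Argument(0)`-means-return-value convention with an explicit `ReturnValue` variant, shifting argument indices by one to match LLVM's attribute-index scheme (0 = return value, 1..=N = arguments, ~0 = the function itself). The mapping, extracted into a runnable form (`u32` in place of `c_uint`):

```rust
#[derive(Copy, Clone)]
enum AttributePlace {
    ReturnValue,
    Argument(u32),
    Function,
}

impl AttributePlace {
    fn as_uint(self) -> u32 {
        match self {
            // LLVM attribute indices: 0 is the return value, arguments
            // start at 1, and !0 (all ones) denotes the function itself.
            AttributePlace::ReturnValue => 0,
            AttributePlace::Argument(i) => 1 + i,
            AttributePlace::Function => !0,
        }
    }
}

fn main() {
    println!(
        "{} {} {}",
        AttributePlace::ReturnValue.as_uint(),
        AttributePlace::Argument(0).as_uint(),
        AttributePlace::Function.as_uint()
    );
}
```

Dropping `#[repr(C)]` is consistent with this: the enum is no longer passed across FFI by layout, only via `as_uint()`.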
diff --git a/src/librustc_metadata/decoder.rs b/src/librustc_metadata/decoder.rs
index e63037f..0dd1b9e 100644
--- a/src/librustc_metadata/decoder.rs
+++ b/src/librustc_metadata/decoder.rs
@@ -15,8 +15,6 @@
 
 use rustc::hir::map::{DefKey, DefPath, DefPathData, DefPathHash};
 use rustc::hir;
-
-use rustc::middle::const_val::ByteArray;
 use rustc::middle::cstore::{LinkagePreference, ExternConstBody,
                             ExternBodyNestedBodies};
 use rustc::hir::def::{self, Def, CtorKind};
@@ -25,19 +23,15 @@
 use rustc::middle::lang_items;
 use rustc::session::Session;
 use rustc::ty::{self, Ty, TyCtxt};
-use rustc::ty::codec::{self as ty_codec, TyDecoder};
-use rustc::ty::subst::Substs;
+use rustc::ty::codec::TyDecoder;
 use rustc::util::nodemap::DefIdSet;
-
 use rustc::mir::Mir;
 
-use std::borrow::Cow;
 use std::cell::Ref;
 use std::collections::BTreeMap;
 use std::io;
 use std::mem;
 use std::rc::Rc;
-use std::str;
 use std::u32;
 
 use rustc_serialize::{Decodable, Decoder, SpecializedDecoder, opaque};
@@ -174,57 +168,23 @@
     }
 }
 
-macro_rules! decoder_methods {
-    ($($name:ident -> $ty:ty;)*) => {
-        $(fn $name(&mut self) -> Result<$ty, Self::Error> {
-            self.opaque.$name()
-        })*
-    }
-}
-
-impl<'doc, 'tcx> Decoder for DecodeContext<'doc, 'tcx> {
-    type Error = <opaque::Decoder<'doc> as Decoder>::Error;
-
-    decoder_methods! {
-        read_nil -> ();
-
-        read_u128 -> u128;
-        read_u64 -> u64;
-        read_u32 -> u32;
-        read_u16 -> u16;
-        read_u8 -> u8;
-        read_usize -> usize;
-
-        read_i128 -> i128;
-        read_i64 -> i64;
-        read_i32 -> i32;
-        read_i16 -> i16;
-        read_i8 -> i8;
-        read_isize -> isize;
-
-        read_bool -> bool;
-        read_f64 -> f64;
-        read_f32 -> f32;
-        read_char -> char;
-        read_str -> Cow<str>;
-    }
-
-    fn error(&mut self, err: &str) -> Self::Error {
-        self.opaque.error(err)
-    }
-}
-
-
 impl<'a, 'tcx: 'a> TyDecoder<'a, 'tcx> for DecodeContext<'a, 'tcx> {
 
+    #[inline]
     fn tcx(&self) -> TyCtxt<'a, 'tcx, 'tcx> {
         self.tcx.expect("missing TyCtxt in DecodeContext")
     }
 
+    #[inline]
     fn peek_byte(&self) -> u8 {
         self.opaque.data[self.opaque.position()]
     }
 
+    #[inline]
+    fn position(&self) -> usize {
+        self.opaque.position()
+    }
+
     fn cached_ty_for_shorthand<F>(&mut self,
                                   shorthand: usize,
                                   or_insert_with: F)
@@ -286,14 +246,24 @@
     }
 }
 
-impl<'a, 'tcx> SpecializedDecoder<CrateNum> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<CrateNum, Self::Error> {
-        let cnum = CrateNum::from_u32(u32::decode(self)?);
-        if cnum == LOCAL_CRATE {
-            Ok(self.cdata().cnum)
-        } else {
-            Ok(self.cdata().cnum_map.borrow()[cnum])
-        }
+
+impl<'a, 'tcx> SpecializedDecoder<DefId> for DecodeContext<'a, 'tcx> {
+    #[inline]
+    fn specialized_decode(&mut self) -> Result<DefId, Self::Error> {
+        let krate = CrateNum::decode(self)?;
+        let index = DefIndex::decode(self)?;
+
+        Ok(DefId {
+            krate,
+            index,
+        })
+    }
+}
+
+impl<'a, 'tcx> SpecializedDecoder<DefIndex> for DecodeContext<'a, 'tcx> {
+    #[inline]
+    fn specialized_decode(&mut self) -> Result<DefIndex, Self::Error> {
+        Ok(DefIndex::from_u32(self.read_u32()?))
     }
 }
 
@@ -357,65 +327,7 @@
     }
 }
 
-// FIXME(#36588) These impls are horribly unsound as they allow
-// the caller to pick any lifetime for 'tcx, including 'static,
-// by using the unspecialized proxies to them.
-
-impl<'a, 'tcx> SpecializedDecoder<Ty<'tcx>> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<Ty<'tcx>, Self::Error> {
-        ty_codec::decode_ty(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<ty::GenericPredicates<'tcx>> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<ty::GenericPredicates<'tcx>, Self::Error> {
-        ty_codec::decode_predicates(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<&'tcx Substs<'tcx>> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<&'tcx Substs<'tcx>, Self::Error> {
-        ty_codec::decode_substs(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<ty::Region<'tcx>> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<ty::Region<'tcx>, Self::Error> {
-        ty_codec::decode_region(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<&'tcx ty::Slice<Ty<'tcx>>> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<&'tcx ty::Slice<Ty<'tcx>>, Self::Error> {
-        ty_codec::decode_ty_slice(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<&'tcx ty::AdtDef> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<&'tcx ty::AdtDef, Self::Error> {
-        ty_codec::decode_adt_def(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<&'tcx ty::Slice<ty::ExistentialPredicate<'tcx>>>
-    for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self)
-        -> Result<&'tcx ty::Slice<ty::ExistentialPredicate<'tcx>>, Self::Error> {
-        ty_codec::decode_existential_predicate_slice(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<ByteArray<'tcx>> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<ByteArray<'tcx>, Self::Error> {
-        ty_codec::decode_byte_array(self)
-    }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<&'tcx ty::Const<'tcx>> for DecodeContext<'a, 'tcx> {
-    fn specialized_decode(&mut self) -> Result<&'tcx ty::Const<'tcx>, Self::Error> {
-        ty_codec::decode_const(self)
-    }
-}
+implement_ty_decoder!( DecodeContext<'a, 'tcx> );
 
 impl<'a, 'tcx> MetadataBlob {
     pub fn is_compatible(&self) -> bool {
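The deleted `decoder_methods!` block was a forwarding macro: every primitive `read_*` method on `DecodeContext` simply delegated to the inner opaque decoder, and `implement_ty_decoder!` now generates the same thing (plus the type-decoding impls) in one place. A self-contained sketch of that delegation pattern, with a toy `Opaque` decoder standing in for `opaque::Decoder`:

```rust
struct Opaque {
    data: Vec<u8>,
    pos: usize,
}

impl Opaque {
    fn read_u8(&mut self) -> u8 {
        let b = self.data[self.pos];
        self.pos += 1;
        b
    }
    fn read_u32(&mut self) -> u32 {
        // Little-endian assembly from four consecutive bytes.
        u32::from(self.read_u8())
            | u32::from(self.read_u8()) << 8
            | u32::from(self.read_u8()) << 16
            | u32::from(self.read_u8()) << 24
    }
}

struct DecodeContext {
    opaque: Opaque,
}

// The forwarding pattern: generate one delegating method per `name -> type` pair.
macro_rules! forward_methods {
    ($($name:ident -> $ty:ty;)*) => {
        impl DecodeContext {
            $(fn $name(&mut self) -> $ty { self.opaque.$name() })*
        }
    }
}

forward_methods! { read_u8 -> u8; read_u32 -> u32; }

fn main() {
    let mut cx = DecodeContext { opaque: Opaque { data: vec![7, 1, 0, 0, 0], pos: 0 } };
    println!("{} {}", cx.read_u8(), cx.read_u32());
}
```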
diff --git a/src/librustc_metadata/encoder.rs b/src/librustc_metadata/encoder.rs
index 8c40f30..23e86b2 100644
--- a/src/librustc_metadata/encoder.rs
+++ b/src/librustc_metadata/encoder.rs
@@ -116,6 +116,33 @@
     }
 }
 
+impl<'a, 'tcx> SpecializedEncoder<CrateNum> for EncodeContext<'a, 'tcx> {
+    #[inline]
+    fn specialized_encode(&mut self, cnum: &CrateNum) -> Result<(), Self::Error> {
+        self.emit_u32(cnum.as_u32())
+    }
+}
+
+impl<'a, 'tcx> SpecializedEncoder<DefId> for EncodeContext<'a, 'tcx> {
+    #[inline]
+    fn specialized_encode(&mut self, def_id: &DefId) -> Result<(), Self::Error> {
+        let DefId {
+            krate,
+            index,
+        } = *def_id;
+
+        krate.encode(self)?;
+        index.encode(self)
+    }
+}
+
+impl<'a, 'tcx> SpecializedEncoder<DefIndex> for EncodeContext<'a, 'tcx> {
+    #[inline]
+    fn specialized_encode(&mut self, def_index: &DefIndex) -> Result<(), Self::Error> {
+        self.emit_u32(def_index.as_u32())
+    }
+}
+
 impl<'a, 'tcx> SpecializedEncoder<Ty<'tcx>> for EncodeContext<'a, 'tcx> {
     fn specialized_encode(&mut self, ty: &Ty<'tcx>) -> Result<(), Self::Error> {
         ty_codec::encode_with_shorthand(self, ty, |ecx| &mut ecx.type_shorthands)
@@ -213,7 +240,7 @@
 
         if let Some(fingerprint) = fingerprint {
             this.metadata_hashes.hashes.push(EncodedMetadataHash {
-                def_index,
+                def_index: def_index.as_u32(),
                 hash: fingerprint,
             })
         }
@@ -395,7 +422,7 @@
         let total_bytes = self.position();
 
         self.metadata_hashes.hashes.push(EncodedMetadataHash {
-            def_index: global_metadata_def_index(GlobalMetaDataKind::Krate),
+            def_index: global_metadata_def_index(GlobalMetaDataKind::Krate).as_u32(),
             hash: Fingerprint::from_smaller_hash(link_meta.crate_hash.as_u64())
         });
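The new `SpecializedEncoder` impls make the metadata encoding of `DefId` explicitly a pair of `u32`s (crate number, then def index), mirroring the `SpecializedDecoder` impls added in `decoder.rs`. A self-contained round-trip sketch with hypothetical plain-`u32` stand-ins for `CrateNum`/`DefIndex`:

```rust
use std::convert::TryInto;

// Simplified stand-in: a DefId serialized as two little-endian u32s,
// crate number first, def index second -- the decoder reads them back
// in the same order.
#[derive(Debug, PartialEq, Copy, Clone)]
struct DefId {
    krate: u32,
    index: u32,
}

fn encode(def_id: DefId, out: &mut Vec<u8>) {
    out.extend_from_slice(&def_id.krate.to_le_bytes());
    out.extend_from_slice(&def_id.index.to_le_bytes());
}

fn decode(buf: &[u8]) -> DefId {
    let krate = u32::from_le_bytes(buf[0..4].try_into().unwrap());
    let index = u32::from_le_bytes(buf[4..8].try_into().unwrap());
    DefId { krate, index }
}

fn main() {
    let id = DefId { krate: 3, index: 41 };
    let mut buf = Vec::new();
    encode(id, &mut buf);
    assert_eq!(decode(&buf), id);
    println!("round trip ok: {:?}", decode(&buf));
}
```

Keeping encode and decode side by side like this is why the rest of this diff switches `EncodedMetadataHash::def_index` and `index_map` to raw `u32` via `as_u32()`/`from_u32`.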
 
diff --git a/src/librustc_metadata/index_builder.rs b/src/librustc_metadata/index_builder.rs
index 1d2b6cc..46706bb 100644
--- a/src/librustc_metadata/index_builder.rs
+++ b/src/librustc_metadata/index_builder.rs
@@ -136,7 +136,7 @@
         let (fingerprint, ecx) = entry_builder.finish();
         if let Some(hash) = fingerprint {
             ecx.metadata_hashes.hashes.push(EncodedMetadataHash {
-                def_index: id.index,
+                def_index: id.index.as_u32(),
                 hash,
             });
         }
diff --git a/src/librustc_mir/borrow_check.rs b/src/librustc_mir/borrow_check.rs
index 2a7a62c..cdac72b 100644
--- a/src/librustc_mir/borrow_check.rs
+++ b/src/librustc_mir/borrow_check.rs
@@ -20,6 +20,7 @@
 use rustc::mir::{Statement, StatementKind, Terminator, TerminatorKind};
 use transform::nll;
 
+use rustc_data_structures::fx::FxHashSet;
 use rustc_data_structures::indexed_set::{self, IdxSetBuf};
 use rustc_data_structures::indexed_vec::{Idx};
 
@@ -112,7 +113,7 @@
     let opt_regioncx = if !tcx.sess.opts.debugging_opts.nll {
         None
     } else {
-        Some(nll::compute_regions(infcx, def_id, mir))
+        Some(nll::compute_regions(infcx, def_id, param_env, mir))
     };
 
     let mdpe = MoveDataParamEnv { move_data: move_data, param_env: param_env };
@@ -136,7 +137,7 @@
         node_id: id,
         move_data: &mdpe.move_data,
         param_env: param_env,
-        fake_infer_ctxt: &infcx,
+        storage_drop_or_dead_error_reported: FxHashSet(),
     };
 
     let mut state = InProgress::new(flow_borrows,
@@ -148,13 +149,16 @@
 }
 
 #[allow(dead_code)]
-pub struct MirBorrowckCtxt<'c, 'b, 'a: 'b+'c, 'gcx: 'a+'tcx, 'tcx: 'a> {
-    tcx: TyCtxt<'a, 'gcx, 'tcx>,
-    mir: &'b Mir<'tcx>,
+pub struct MirBorrowckCtxt<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
+    tcx: TyCtxt<'cx, 'gcx, 'tcx>,
+    mir: &'cx Mir<'tcx>,
     node_id: ast::NodeId,
-    move_data: &'b MoveData<'tcx>,
-    param_env: ParamEnv<'tcx>,
-    fake_infer_ctxt: &'c InferCtxt<'c, 'gcx, 'tcx>,
+    move_data: &'cx MoveData<'tcx>,
+    param_env: ParamEnv<'gcx>,
+    /// This field keeps track of when storage drop or dead errors are reported
+    /// in order to stop duplicate error reporting and identify the conditions required
+    /// for a "temporary value dropped here while still borrowed" error. See #45360.
+    storage_drop_or_dead_error_reported: FxHashSet<Local>,
 }
 
 // (forced to be `pub` due to its use as an associated type below.)
@@ -177,12 +181,10 @@
 // 2. loans made in overlapping scopes do not conflict
 // 3. assignments do not affect things loaned out as immutable
 // 4. moves do not affect things loaned out in any way
-impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> DataflowResultsConsumer<'b, 'tcx>
-    for MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx>
-{
-    type FlowState = InProgress<'b, 'gcx, 'tcx>;
+impl<'cx, 'gcx, 'tcx> DataflowResultsConsumer<'cx, 'tcx> for MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
+    type FlowState = InProgress<'cx, 'gcx, 'tcx>;
 
-    fn mir(&self) -> &'b Mir<'tcx> { self.mir }
+    fn mir(&self) -> &'cx Mir<'tcx> { self.mir }
 
     fn reset_to_entry_of(&mut self, bb: BasicBlock, flow_state: &mut Self::FlowState) {
         flow_state.each_flow(|b| b.reset_to_entry_of(bb),
@@ -285,10 +287,15 @@
             }
 
             StatementKind::StorageDead(local) => {
-                self.access_lvalue(ContextKind::StorageDead.new(location),
-                                   (&Lvalue::Local(local), span),
-                                   (Shallow(None), Write(WriteKind::StorageDead)),
-                                   flow_state);
+                if !self.storage_drop_or_dead_error_reported.contains(&local) {
+                    let error_reported = self.access_lvalue(ContextKind::StorageDead.new(location),
+                        (&Lvalue::Local(local), span),
+                        (Shallow(None), Write(WriteKind::StorageDeadOrDrop)), flow_state);
+
+                    if error_reported {
+                        self.storage_drop_or_dead_error_reported.insert(local);
+                    }
+                }
             }
         }
     }
@@ -431,24 +438,30 @@
 
 #[derive(Copy, Clone, PartialEq, Eq, Debug)]
 enum WriteKind {
-    StorageDead,
+    StorageDeadOrDrop,
     MutableBorrow(BorrowKind),
     Mutate,
     Move,
 }
 
-impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx> {
+impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
+    /// Checks an access to the given lvalue to see if it is allowed. Examines the set of borrows
+    /// that are in scope, as well as which paths have been initialized, to ensure that (a) the
+    /// lvalue is initialized and (b) it is not borrowed in some way that would prevent this
+    /// access.
+    ///
+    /// Returns true if an error is reported, false otherwise.
     fn access_lvalue(&mut self,
                      context: Context,
                      lvalue_span: (&Lvalue<'tcx>, Span),
                      kind: (ShallowOrDeep, ReadOrWrite),
-                     flow_state: &InProgress<'b, 'gcx, 'tcx>) {
-
+                     flow_state: &InProgress<'cx, 'gcx, 'tcx>) -> bool {
         let (sd, rw) = kind;
 
         // Check permissions
         self.check_access_permissions(lvalue_span, rw);
 
+        let mut error_reported = false;
         self.each_borrow_involving_path(
             context, (sd, lvalue_span.0), flow_state, |this, _index, borrow, common_prefix| {
                 match (rw, borrow.kind) {
@@ -458,13 +471,16 @@
                     (Read(kind), BorrowKind::Unique) |
                     (Read(kind), BorrowKind::Mut) => {
                         match kind {
-                            ReadKind::Copy =>
+                            ReadKind::Copy => {
+                                error_reported = true;
                                 this.report_use_while_mutably_borrowed(
-                                    context, lvalue_span, borrow),
+                                    context, lvalue_span, borrow)
+                            },
                             ReadKind::Borrow(bk) => {
                                 let end_issued_loan_span =
                                     flow_state.borrows.base_results.operator().opt_region_end_span(
                                         &borrow.region);
+                                error_reported = true;
                                 this.report_conflicting_borrow(
                                     context, common_prefix, lvalue_span, bk,
                                     &borrow, end_issued_loan_span)
@@ -478,22 +494,35 @@
                                 let end_issued_loan_span =
                                     flow_state.borrows.base_results.operator().opt_region_end_span(
                                         &borrow.region);
+                                error_reported = true;
                                 this.report_conflicting_borrow(
                                     context, common_prefix, lvalue_span, bk,
                                     &borrow, end_issued_loan_span)
                             }
-                            WriteKind::StorageDead |
-                            WriteKind::Mutate =>
+                            WriteKind::StorageDeadOrDrop => {
+                                let end_span =
+                                    flow_state.borrows.base_results.operator().opt_region_end_span(
+                                        &borrow.region);
+                                error_reported = true;
+                                this.report_borrowed_value_does_not_live_long_enough(
+                                    context, lvalue_span, end_span)
+                            },
+                            WriteKind::Mutate => {
+                                error_reported = true;
                                 this.report_illegal_mutation_of_borrowed(
-                                    context, lvalue_span, borrow),
-                            WriteKind::Move =>
+                                    context, lvalue_span, borrow)
+                            },
+                            WriteKind::Move => {
+                                error_reported = true;
                                 this.report_move_out_while_borrowed(
-                                    context, lvalue_span, &borrow),
+                                    context, lvalue_span, &borrow)
+                            },
                         }
                         Control::Break
                     }
                 }
             });
+        error_reported
     }
 
     fn mutate_lvalue(&mut self,
@@ -501,7 +530,7 @@
                      lvalue_span: (&Lvalue<'tcx>, Span),
                      kind: ShallowOrDeep,
                      mode: MutateMode,
-                     flow_state: &InProgress<'b, 'gcx, 'tcx>) {
+                     flow_state: &InProgress<'cx, 'gcx, 'tcx>) {
         // Write of P[i] or *P, or WriteAndRead of any P, requires P init'd.
         match mode {
             MutateMode::WriteAndRead => {
@@ -522,7 +551,7 @@
                       context: Context,
                       (rvalue, span): (&Rvalue<'tcx>, Span),
                       _location: Location,
-                      flow_state: &InProgress<'b, 'gcx, 'tcx>) {
+                      flow_state: &InProgress<'cx, 'gcx, 'tcx>) {
         match *rvalue {
             Rvalue::Ref(_/*rgn*/, bk, ref lvalue) => {
                 let access_kind = match bk {
@@ -579,7 +608,7 @@
                        context: Context,
                        consume_via_drop: ConsumeKind,
                        (operand, span): (&Operand<'tcx>, Span),
-                       flow_state: &InProgress<'b, 'gcx, 'tcx>) {
+                       flow_state: &InProgress<'cx, 'gcx, 'tcx>) {
         match *operand {
             Operand::Consume(ref lvalue) => {
                 self.consume_lvalue(context, consume_via_drop, (lvalue, span), flow_state)
@@ -592,17 +621,55 @@
                       context: Context,
                       consume_via_drop: ConsumeKind,
                       lvalue_span: (&Lvalue<'tcx>, Span),
-                      flow_state: &InProgress<'b, 'gcx, 'tcx>) {
+                      flow_state: &InProgress<'cx, 'gcx, 'tcx>) {
         let lvalue = lvalue_span.0;
+
         let ty = lvalue.ty(self.mir, self.tcx).to_ty(self.tcx);
-        let moves_by_default =
-            self.fake_infer_ctxt.type_moves_by_default(self.param_env, ty, DUMMY_SP);
-        if moves_by_default {
-            // move of lvalue: check if this is move of already borrowed path
-            self.access_lvalue(context, lvalue_span, (Deep, Write(WriteKind::Move)), flow_state);
-        } else {
-            // copy of lvalue: check if this is "copy of frozen path" (FIXME: see check_loans.rs)
-            self.access_lvalue(context, lvalue_span, (Deep, Read(ReadKind::Copy)), flow_state);
+
+        // Erase the regions in type before checking whether it moves by
+        // default. There are a few reasons to do this:
+        //
+        // - They should not affect the result.
+        // - It avoids adding new region constraints into the surrounding context,
+        //   which would trigger an ICE, since the infcx will have been "frozen" by
+        //   the NLL region context.
+        let gcx = self.tcx.global_tcx();
+        let erased_ty = gcx.lift(&self.tcx.erase_regions(&ty)).unwrap();
+        let moves_by_default = erased_ty.moves_by_default(gcx, self.param_env, DUMMY_SP);
+
+        // Check if error has already been reported to stop duplicate reporting.
+        let has_storage_drop_or_dead_error_reported = match *lvalue {
+            Lvalue::Local(local) => self.storage_drop_or_dead_error_reported.contains(&local),
+            _ => false,
+        };
+
+        // If the error has been reported already, then we don't need the access_lvalue call.
+        if !has_storage_drop_or_dead_error_reported || consume_via_drop != ConsumeKind::Drop {
+            let error_reported;
+
+            if moves_by_default {
+                let kind = match consume_via_drop {
+                    ConsumeKind::Drop => WriteKind::StorageDeadOrDrop,
+                    _ => WriteKind::Move,
+                };
+
+                // move of lvalue: check if this is move of already borrowed path
+                error_reported = self.access_lvalue(context, lvalue_span,
+                                                    (Deep, Write(kind)), flow_state);
+            } else {
+                // copy of lvalue: check if this is "copy of frozen path"
+                // (FIXME: see check_loans.rs)
+                error_reported = self.access_lvalue(context, lvalue_span,
+                                                    (Deep, Read(ReadKind::Copy)), flow_state);
+            }
+
+            // If there was an error, then we keep track of it so as to deduplicate it.
+            // We only do this on ConsumeKind::Drop.
+            if error_reported && consume_via_drop == ConsumeKind::Drop {
+                if let Lvalue::Local(local) = *lvalue {
+                    self.storage_drop_or_dead_error_reported.insert(local);
+                }
+            }
         }
 
         // Finally, check if path was already moved.
@@ -619,11 +686,11 @@
     }
 }
 
-impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx> {
+impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
     fn check_if_reassignment_to_immutable_state(&mut self,
                                                 context: Context,
                                                 (lvalue, span): (&Lvalue<'tcx>, Span),
-                                                flow_state: &InProgress<'b, 'gcx, 'tcx>) {
+                                                flow_state: &InProgress<'cx, 'gcx, 'tcx>) {
         let move_data = self.move_data;
 
         // determine if this path has a non-mut owner (and thus needs checking).
@@ -640,10 +707,12 @@
                         Mutability::Mut => return,
                     }
                 }
-                Lvalue::Static(_) => {
+                Lvalue::Static(ref static_) => {
                     // mutation of non-mut static is always illegal,
                     // independent of dataflow.
-                    self.report_assignment_to_static(context, (lvalue, span));
+                    if !self.tcx.is_static_mut(static_.def_id) {
+                        self.report_assignment_to_static(context, (lvalue, span));
+                    }
                     return;
                 }
             }
@@ -674,7 +743,7 @@
                               context: Context,
                               desired_action: &str,
                               lvalue_span: (&Lvalue<'tcx>, Span),
-                              flow_state: &InProgress<'b, 'gcx, 'tcx>) {
+                              flow_state: &InProgress<'cx, 'gcx, 'tcx>) {
         // FIXME: analogous code in check_loans first maps `lvalue` to
         // its base_path ... but is that what we want here?
         let lvalue = self.base_path(lvalue_span.0);
@@ -802,7 +871,7 @@
     fn check_if_assigned_path_is_moved(&mut self,
                                        context: Context,
                                        (lvalue, span): (&Lvalue<'tcx>, Span),
-                                       flow_state: &InProgress<'b, 'gcx, 'tcx>) {
+                                       flow_state: &InProgress<'cx, 'gcx, 'tcx>) {
         // recur down lvalue; dispatch to check_if_path_is_moved when necessary
         let mut lvalue = lvalue;
         loop {
@@ -1015,11 +1084,11 @@
     ReachedStatic,
 }
 
-impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx> {
+impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
     fn each_borrow_involving_path<F>(&mut self,
                                      _context: Context,
                                      access_lvalue: (ShallowOrDeep, &Lvalue<'tcx>),
-                                     flow_state: &InProgress<'b, 'gcx, 'tcx>,
+                                     flow_state: &InProgress<'cx, 'gcx, 'tcx>,
                                      mut op: F)
         where F: FnMut(&mut Self, BorrowIndex, &BorrowData<'tcx>, &Lvalue<'tcx>) -> Control
     {
@@ -1119,11 +1188,11 @@
     }
 
 
-    pub(super) struct Prefixes<'c, 'gcx: 'tcx, 'tcx: 'c> {
-        mir: &'c Mir<'tcx>,
-        tcx: TyCtxt<'c, 'gcx, 'tcx>,
+    pub(super) struct Prefixes<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
+        mir: &'cx Mir<'tcx>,
+        tcx: TyCtxt<'cx, 'gcx, 'tcx>,
         kind: PrefixSet,
-        next: Option<&'c Lvalue<'tcx>>,
+        next: Option<&'cx Lvalue<'tcx>>,
     }
 
     #[derive(Copy, Clone, PartialEq, Eq, Debug)]
@@ -1137,21 +1206,21 @@
         Supporting,
     }
 
-    impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx> {
+    impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
         /// Returns an iterator over the prefixes of `lvalue`
         /// (inclusive) from longest to smallest, potentially
         /// terminating the iteration early based on `kind`.
-        pub(super) fn prefixes<'d>(&self,
-                                   lvalue: &'d Lvalue<'tcx>,
-                                   kind: PrefixSet)
-                                   -> Prefixes<'d, 'gcx, 'tcx> where 'b: 'd
+        pub(super) fn prefixes(&self,
+                               lvalue: &'cx Lvalue<'tcx>,
+                               kind: PrefixSet)
+                               -> Prefixes<'cx, 'gcx, 'tcx>
         {
             Prefixes { next: Some(lvalue), kind, mir: self.mir, tcx: self.tcx }
         }
     }
 
-    impl<'c, 'gcx, 'tcx> Iterator for Prefixes<'c, 'gcx, 'tcx> {
-        type Item = &'c Lvalue<'tcx>;
+    impl<'cx, 'gcx, 'tcx> Iterator for Prefixes<'cx, 'gcx, 'tcx> {
+        type Item = &'cx Lvalue<'tcx>;
         fn next(&mut self) -> Option<Self::Item> {
             let mut cursor = match self.next {
                 None => return None,
@@ -1244,7 +1313,7 @@
     }
 }
 
-impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx> {
+impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
     fn report_use_of_moved_or_uninitialized(&mut self,
                            _context: Context,
                            desired_action: &str,
@@ -1451,6 +1520,27 @@
         err.emit();
     }
 
+    fn report_borrowed_value_does_not_live_long_enough(&mut self,
+                                                       _: Context,
+                                                       (lvalue, span): (&Lvalue, Span),
+                                                       end_span: Option<Span>) {
+        let proper_span = match *lvalue {
+            Lvalue::Local(local) => self.mir.local_decls[local].source_info.span,
+            _ => span
+        };
+
+        let mut err = self.tcx.path_does_not_live_long_enough(span, "borrowed value", Origin::Mir);
+        err.span_label(proper_span, "temporary value created here");
+        err.span_label(span, "temporary value dropped here while still borrowed");
+        err.note("consider using a `let` binding to increase its lifetime");
+
+        if let Some(end) = end_span {
+            err.span_label(end, "temporary value needs to live until here");
+        }
+
+        err.emit();
+    }
+
     fn report_illegal_mutation_of_borrowed(&mut self,
                                            _: Context,
                                            (lvalue, span): (&Lvalue<'tcx>, Span),
@@ -1483,7 +1573,7 @@
     }
 }
 
-impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx> {
+impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
     // End-user visible description of `lvalue`
     fn describe_lvalue(&self, lvalue: &Lvalue<'tcx>) -> String {
         let mut buf = String::new();
@@ -1641,7 +1731,7 @@
     }
 }
 
-impl<'c, 'b, 'a: 'b+'c, 'gcx, 'tcx: 'a> MirBorrowckCtxt<'c, 'b, 'a, 'gcx, 'tcx> {
+impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
     // FIXME (#16118): function intended to allow the borrow checker
     // to be less precise in its handling of Box while still allowing
     // moves out of a Box. They should be removed when/if we stop
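The borrow_check.rs changes above replace the fake inference context with a `FxHashSet<Local>` that records which locals have already produced a storage-drop-or-dead error, so repeated accesses to the same local stay silent. A reduced sketch of that report-once pattern, using std's `HashSet` in place of `FxHashSet` and heavily simplified types:

```rust
use std::collections::HashSet;

// Simplified stand-in for a MIR local index.
type Local = u32;

struct Checker {
    // Locals for which a storage-drop-or-dead error was already emitted.
    storage_drop_or_dead_error_reported: HashSet<Local>,
    errors: Vec<String>,
}

impl Checker {
    // Returns true if an error was reported, mirroring access_lvalue's new bool result.
    fn access(&mut self, local: Local, conflicts_with_borrow: bool) -> bool {
        if conflicts_with_borrow {
            self.errors
                .push(format!("value in _{} dropped while still borrowed", local));
            return true;
        }
        false
    }

    fn visit_storage_dead(&mut self, local: Local, conflicts_with_borrow: bool) {
        // Skip the check entirely if this local already produced an error.
        if !self.storage_drop_or_dead_error_reported.contains(&local) {
            if self.access(local, conflicts_with_borrow) {
                self.storage_drop_or_dead_error_reported.insert(local);
            }
        }
    }
}

fn main() {
    let mut c = Checker {
        storage_drop_or_dead_error_reported: HashSet::new(),
        errors: vec![],
    };
    c.visit_storage_dead(1, true);
    c.visit_storage_dead(1, true); // deduplicated: no second error
    assert_eq!(c.errors.len(), 1);
}
```

The same set also short-circuits `consume_lvalue` for `ConsumeKind::Drop`, which is what prevents duplicate "temporary value dropped here while still borrowed" diagnostics (#45360).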
diff --git a/src/librustc_mir/build/expr/into.rs b/src/librustc_mir/build/expr/into.rs
index 280e1c8..cdbcb43 100644
--- a/src/librustc_mir/build/expr/into.rs
+++ b/src/librustc_mir/build/expr/into.rs
@@ -247,13 +247,7 @@
                 } else {
                     let args: Vec<_> =
                         args.into_iter()
-                            .map(|arg| {
-                                let scope = this.local_scope();
-                                // Function arguments are owned by the callee, so we need as_temp()
-                                // instead of as_operand() to enforce copies
-                                let operand = unpack!(block = this.as_temp(block, scope, arg));
-                                Operand::Consume(Lvalue::Local(operand))
-                            })
+                            .map(|arg| unpack!(block = this.as_local_operand(block, arg)))
                             .collect();
 
                     let success = this.cfg.start_new_block();
diff --git a/src/librustc_mir/build/matches/simplify.rs b/src/librustc_mir/build/matches/simplify.rs
index 9b3f16f..a7599f1 100644
--- a/src/librustc_mir/build/matches/simplify.rs
+++ b/src/librustc_mir/build/matches/simplify.rs
@@ -98,19 +98,16 @@
             }
 
             PatternKind::Variant { adt_def, substs, variant_index, ref subpatterns } => {
-                if self.hir.tcx().sess.features.borrow().never_type {
-                    let irrefutable = adt_def.variants.iter().enumerate().all(|(i, v)| {
-                        i == variant_index || {
-                            self.hir.tcx().is_variant_uninhabited_from_all_modules(v, substs)
-                        }
-                    });
-                    if irrefutable {
-                        let lvalue = match_pair.lvalue.downcast(adt_def, variant_index);
-                        candidate.match_pairs.extend(self.field_match_pairs(lvalue, subpatterns));
-                        Ok(())
-                    } else {
-                        Err(match_pair)
+                let irrefutable = adt_def.variants.iter().enumerate().all(|(i, v)| {
+                    i == variant_index || {
+                        self.hir.tcx().sess.features.borrow().never_type &&
+                        self.hir.tcx().is_variant_uninhabited_from_all_modules(v, substs)
                     }
+                });
+                if irrefutable {
+                    let lvalue = match_pair.lvalue.downcast(adt_def, variant_index);
+                    candidate.match_pairs.extend(self.field_match_pairs(lvalue, subpatterns));
+                    Ok(())
                 } else {
                     Err(match_pair)
                 }
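The simplify.rs hunk moves the `never_type` feature gate inside the per-variant closure: without the feature no other variant is ever considered uninhabited, so the pattern falls through to `Err(match_pair)` as before. A reduced sketch of the irrefutability test, with uninhabitedness stubbed out as a plain predicate:

```rust
// Stand-ins for the compiler's data: a variant is identified by its index,
// and `is_uninhabited` plays the role of is_variant_uninhabited_from_all_modules.
fn variant_is_irrefutable(
    num_variants: usize,
    variant_index: usize,
    never_type_enabled: bool,
    is_uninhabited: impl Fn(usize) -> bool,
) -> bool {
    // The match is irrefutable if every *other* variant is uninhabited,
    // which can only be concluded when the never_type feature is on.
    (0..num_variants).all(|i| i == variant_index || (never_type_enabled && is_uninhabited(i)))
}

fn main() {
    // enum E { A, B } where B is uninhabited: matching A is irrefutable
    // with the feature enabled...
    assert!(variant_is_irrefutable(2, 0, true, |i| i == 1));
    // ...but not without it.
    assert!(!variant_is_irrefutable(2, 0, false, |i| i == 1));
}
```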
diff --git a/src/librustc_mir/build/matches/test.rs b/src/librustc_mir/build/matches/test.rs
index 1cf35af..02a7bc8 100644
--- a/src/librustc_mir/build/matches/test.rs
+++ b/src/librustc_mir/build/matches/test.rs
@@ -39,7 +39,7 @@
                     span: match_pair.pattern.span,
                     kind: TestKind::Switch {
                         adt_def: adt_def.clone(),
-                        variants: BitVector::new(self.hir.num_variants(adt_def)),
+                        variants: BitVector::new(adt_def.variants.len()),
                     },
                 }
             }
@@ -184,7 +184,7 @@
         match test.kind {
             TestKind::Switch { adt_def, ref variants } => {
                 // Variants is a BitVec of indexes into adt_def.variants.
-                let num_enum_variants = self.hir.num_variants(adt_def);
+                let num_enum_variants = adt_def.variants.len();
                 let used_variants = variants.count();
                 let mut otherwise_block = None;
                 let mut target_blocks = Vec::with_capacity(num_enum_variants);
diff --git a/src/librustc_mir/build/mod.rs b/src/librustc_mir/build/mod.rs
index 83326f7..287d108 100644
--- a/src/librustc_mir/build/mod.rs
+++ b/src/librustc_mir/build/mod.rs
@@ -13,7 +13,7 @@
 use hair::cx::Cx;
 use hair::LintLevel;
 use rustc::hir;
-use rustc::hir::def_id::DefId;
+use rustc::hir::def_id::{DefId, LocalDefId};
 use rustc::middle::region;
 use rustc::mir::*;
 use rustc::mir::visit::{MutVisitor, TyContext};
@@ -422,10 +422,10 @@
         freevars.iter().map(|fv| {
             let var_id = fv.var_id();
             let var_hir_id = tcx.hir.node_to_hir_id(var_id);
-            let closure_expr_id = tcx.hir.local_def_id(fn_id).index;
+            let closure_expr_id = tcx.hir.local_def_id(fn_id);
             let capture = hir.tables().upvar_capture(ty::UpvarId {
                 var_id: var_hir_id,
-                closure_expr_id,
+                closure_expr_id: LocalDefId::from_def_id(closure_expr_id),
             });
             let by_ref = match capture {
                 ty::UpvarCapture::ByValue => false,
@@ -444,7 +444,7 @@
         }).collect()
     });
 
-    let mut mir = builder.finish(upvar_decls, return_ty, yield_ty);
+    let mut mir = builder.finish(upvar_decls, yield_ty);
     mir.spread_arg = spread_arg;
     mir
 }
@@ -469,7 +469,7 @@
     // Constants can't `return` so a return block should not be created.
     assert_eq!(builder.cached_return_block, None);
 
-    builder.finish(vec![], ty, None)
+    builder.finish(vec![], None)
 }
 
 fn construct_error<'a, 'gcx, 'tcx>(hir: Cx<'a, 'gcx, 'tcx>,
@@ -481,7 +481,7 @@
     let mut builder = Builder::new(hir, span, 0, Safety::Safe, ty);
     let source_info = builder.source_info(span);
     builder.cfg.terminate(START_BLOCK, source_info, TerminatorKind::Unreachable);
-    builder.finish(vec![], ty, None)
+    builder.finish(vec![], None)
 }
 
 impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
@@ -524,7 +524,6 @@
 
     fn finish(self,
               upvar_decls: Vec<UpvarDecl>,
-              return_ty: Ty<'tcx>,
               yield_ty: Option<Ty<'tcx>>)
               -> Mir<'tcx> {
         for (index, block) in self.cfg.basic_blocks.iter().enumerate() {
@@ -537,7 +536,6 @@
                  self.visibility_scopes,
                  ClearOnDecode::Set(self.visibility_scope_info),
                  IndexVec::new(),
-                 return_ty,
                  yield_ty,
                  self.local_decls,
                  self.arg_count,
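`Builder::finish` drops its `return_ty` parameter above because `Mir` no longer stores the return type as a separate field; it can be derived from the declaration of local zero, the return pointer. A sketch of that derive-from-locals design (field and type names are illustrative, not rustc's actual API):

```rust
// Illustrative types: local 0 is the return pointer, as in MIR.
#[derive(Clone, PartialEq, Debug)]
enum Ty {
    Unit,
    I32,
}

struct LocalDecl {
    ty: Ty,
}

struct Mir {
    local_decls: Vec<LocalDecl>,
}

impl Mir {
    // Instead of a stored return_ty field, read it off the return pointer's decl.
    fn return_ty(&self) -> &Ty {
        &self.local_decls[0].ty
    }
}

fn main() {
    let mir = Mir {
        local_decls: vec![LocalDecl { ty: Ty::I32 }, LocalDecl { ty: Ty::Unit }],
    };
    assert_eq!(*mir.return_ty(), Ty::I32);
}
```

Storing the type once removes the possibility of the two copies disagreeing, which is why the shim.rs call sites below shrink by one argument as well.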
diff --git a/src/librustc_mir/dataflow/impls/borrows.rs b/src/librustc_mir/dataflow/impls/borrows.rs
index 928c07b..acfa195 100644
--- a/src/librustc_mir/dataflow/impls/borrows.rs
+++ b/src/librustc_mir/dataflow/impls/borrows.rs
@@ -10,7 +10,7 @@
 
 use rustc::mir::{self, Location, Mir};
 use rustc::mir::visit::Visitor;
-use rustc::ty::{Region, TyCtxt};
+use rustc::ty::{self, Region, TyCtxt};
 use rustc::ty::RegionKind;
 use rustc::ty::RegionKind::ReScope;
 use rustc::util::nodemap::{FxHashMap, FxHashSet};
@@ -22,7 +22,7 @@
 use dataflow::{BitDenotation, BlockSets, DataflowOperator};
 pub use dataflow::indexes::BorrowIndex;
 use transform::nll::region_infer::RegionInferenceContext;
-use transform::nll::ToRegionIndex;
+use transform::nll::ToRegionVid;
 
 use syntax_pos::Span;
 
@@ -71,10 +71,14 @@
                mir: &'a Mir<'tcx>,
                nonlexical_regioncx: Option<&'a RegionInferenceContext<'tcx>>)
                -> Self {
-        let mut visitor = GatherBorrows { idx_vec: IndexVec::new(),
-                                          location_map: FxHashMap(),
-                                          region_map: FxHashMap(),
-                                          region_span_map: FxHashMap()};
+        let mut visitor = GatherBorrows {
+            tcx,
+            mir,
+            idx_vec: IndexVec::new(),
+            location_map: FxHashMap(),
+            region_map: FxHashMap(),
+            region_span_map: FxHashMap()
+        };
         visitor.visit_mir(mir);
         return Borrows { tcx: tcx,
                          mir: mir,
@@ -84,17 +88,22 @@
                          region_span_map: visitor.region_span_map,
                          nonlexical_regioncx };
 
-        struct GatherBorrows<'tcx> {
+        struct GatherBorrows<'a, 'gcx: 'tcx, 'tcx: 'a> {
+            tcx: TyCtxt<'a, 'gcx, 'tcx>,
+            mir: &'a Mir<'tcx>,
             idx_vec: IndexVec<BorrowIndex, BorrowData<'tcx>>,
             location_map: FxHashMap<Location, BorrowIndex>,
             region_map: FxHashMap<Region<'tcx>, FxHashSet<BorrowIndex>>,
             region_span_map: FxHashMap<RegionKind, Span>,
         }
-        impl<'tcx> Visitor<'tcx> for GatherBorrows<'tcx> {
+
+        impl<'a, 'gcx, 'tcx> Visitor<'tcx> for GatherBorrows<'a, 'gcx, 'tcx> {
             fn visit_rvalue(&mut self,
                             rvalue: &mir::Rvalue<'tcx>,
                             location: mir::Location) {
                 if let mir::Rvalue::Ref(region, kind, ref lvalue) = *rvalue {
+                    if is_unsafe_lvalue(self.tcx, self.mir, lvalue) { return; }
+
                     let borrow = BorrowData {
                         location: location, kind: kind, region: region, lvalue: lvalue.clone(),
                     };
@@ -145,7 +154,7 @@
                                            location: Location) {
         if let Some(regioncx) = self.nonlexical_regioncx {
             for (borrow_index, borrow_data) in self.borrows.iter_enumerated() {
-                let borrow_region = borrow_data.region.to_region_index();
+                let borrow_region = borrow_data.region.to_region_vid();
                 if !regioncx.region_contains_point(borrow_region, location) {
                     // The region checker really considers the borrow
                     // to start at the point **after** the location of
@@ -197,7 +206,8 @@
             }
 
             mir::StatementKind::Assign(_, ref rhs) => {
-                if let mir::Rvalue::Ref(region, _, _) = *rhs {
+                if let mir::Rvalue::Ref(region, _, ref lvalue) = *rhs {
+                    if is_unsafe_lvalue(self.tcx, self.mir, lvalue) { return; }
                     let index = self.location_map.get(&location).unwrap_or_else(|| {
                         panic!("could not find BorrowIndex for location {:?}", location);
                     });
@@ -248,3 +258,35 @@
         false // bottom = no Rvalue::Refs are active by default
     }
 }
+
+fn is_unsafe_lvalue<'a, 'gcx: 'tcx, 'tcx: 'a>(
+    tcx: TyCtxt<'a, 'gcx, 'tcx>,
+    mir: &'a Mir<'tcx>,
+    lvalue: &mir::Lvalue<'tcx>
+) -> bool {
+    use self::mir::Lvalue::*;
+    use self::mir::ProjectionElem;
+
+    match *lvalue {
+        Local(_) => false,
+        Static(ref static_) => tcx.is_static_mut(static_.def_id),
+        Projection(ref proj) => {
+            match proj.elem {
+                ProjectionElem::Field(..) |
+                ProjectionElem::Downcast(..) |
+                ProjectionElem::Subslice { .. } |
+                ProjectionElem::ConstantIndex { .. } |
+                ProjectionElem::Index(_) => {
+                    is_unsafe_lvalue(tcx, mir, &proj.base)
+                }
+                ProjectionElem::Deref => {
+                    let ty = proj.base.ty(mir, tcx).to_ty(tcx);
+                    match ty.sty {
+                        ty::TyRawPtr(..) => true,
+                        _ => is_unsafe_lvalue(tcx, mir, &proj.base),
+                    }
+                }
+            }
+        }
+    }
+}
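The new `is_unsafe_lvalue` above walks an lvalue's projection chain down to its base, returning true for `static mut` bases and for any deref whose base has raw-pointer type, so borrows of such lvalues are skipped by the gatherer. A reduced sketch of that recursive walk (this enum is a simplification of MIR's `Lvalue`, collapsing the projection kinds into three cases):

```rust
// Simplified lvalue tree: locals, statics, and projections over a base.
#[allow(dead_code)]
enum Lvalue {
    Local,
    Static,
    StaticMut,
    Field(Box<Lvalue>),       // field/index/downcast-style projections
    DerefRef(Box<Lvalue>),    // deref through an ordinary reference
    DerefRawPtr(Box<Lvalue>), // deref whose base has raw-pointer type
}

fn is_unsafe_lvalue(lv: &Lvalue) -> bool {
    match *lv {
        Lvalue::Local | Lvalue::Static => false,
        Lvalue::StaticMut => true,
        // Non-raw projections just recurse into their base.
        Lvalue::Field(ref base) | Lvalue::DerefRef(ref base) => is_unsafe_lvalue(base),
        // Deref of a raw pointer is unsafe regardless of what lies beneath.
        Lvalue::DerefRawPtr(_) => true,
    }
}

fn main() {
    assert!(is_unsafe_lvalue(&Lvalue::DerefRawPtr(Box::new(Lvalue::Local))));
    assert!(is_unsafe_lvalue(&Lvalue::Field(Box::new(Lvalue::StaticMut))));
    assert!(!is_unsafe_lvalue(&Lvalue::DerefRef(Box::new(Lvalue::Local))));
}
```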
diff --git a/src/librustc_mir/hair/cx/expr.rs b/src/librustc_mir/hair/cx/expr.rs
index f5a53e2..798928e 100644
--- a/src/librustc_mir/hair/cx/expr.rs
+++ b/src/librustc_mir/hair/cx/expr.rs
@@ -20,6 +20,7 @@
 use rustc::ty::adjustment::{Adjustment, Adjust, AutoBorrow};
 use rustc::ty::cast::CastKind as TyCastKind;
 use rustc::hir;
+use rustc::hir::def_id::LocalDefId;
 
 impl<'tcx> Mirror<'tcx> for &'tcx hir::Expr {
     type Output = Expr<'tcx>;
@@ -783,7 +784,7 @@
             // point we need an implicit deref
             let upvar_id = ty::UpvarId {
                 var_id: var_hir_id,
-                closure_expr_id: closure_def_id.index,
+                closure_expr_id: LocalDefId::from_def_id(closure_def_id),
             };
             match cx.tables().upvar_capture(upvar_id) {
                 ty::UpvarCapture::ByValue => field_kind,
@@ -897,7 +898,7 @@
     let var_hir_id = cx.tcx.hir.node_to_hir_id(freevar.var_id());
     let upvar_id = ty::UpvarId {
         var_id: var_hir_id,
-        closure_expr_id: cx.tcx.hir.local_def_id(closure_expr.id).index,
+        closure_expr_id: cx.tcx.hir.local_def_id(closure_expr.id).to_local(),
     };
     let upvar_capture = cx.tables().upvar_capture(upvar_id);
     let temp_lifetime = cx.region_scope_tree.temporary_scope(closure_expr.hir_id.local_id);
diff --git a/src/librustc_mir/hair/cx/mod.rs b/src/librustc_mir/hair/cx/mod.rs
index 5026423..b1f4b84 100644
--- a/src/librustc_mir/hair/cx/mod.rs
+++ b/src/librustc_mir/hair/cx/mod.rs
@@ -213,10 +213,6 @@
         bug!("found no method `{}` in `{:?}`", method_name, trait_def_id);
     }
 
-    pub fn num_variants(&mut self, adt_def: &ty::AdtDef) -> usize {
-        adt_def.variants.len()
-    }
-
     pub fn all_fields(&mut self, adt_def: &ty::AdtDef, variant_index: usize) -> Vec<Field> {
         (0..adt_def.variants[variant_index].fields.len())
             .map(Field::new)
diff --git a/src/librustc_mir/lib.rs b/src/librustc_mir/lib.rs
index 5e65398..af30934 100644
--- a/src/librustc_mir/lib.rs
+++ b/src/librustc_mir/lib.rs
@@ -23,6 +23,7 @@
 #![feature(core_intrinsics)]
 #![feature(decl_macro)]
 #![feature(i128_type)]
+#![feature(match_default_bindings)]
 #![feature(rustc_diagnostic_macros)]
 #![feature(placement_in_syntax)]
 #![feature(collection_placement)]
diff --git a/src/librustc_mir/shim.rs b/src/librustc_mir/shim.rs
index e1f0e01..d31f381 100644
--- a/src/librustc_mir/shim.rs
+++ b/src/librustc_mir/shim.rs
@@ -197,7 +197,6 @@
         ),
         ClearOnDecode::Clear,
         IndexVec::new(),
-        sig.output(),
         None,
         local_decls_for_sig(&sig, span),
         sig.inputs().len(),
@@ -345,7 +344,6 @@
             ),
             ClearOnDecode::Clear,
             IndexVec::new(),
-            self.sig.output(),
             None,
             self.local_decls,
             self.sig.inputs().len(),
@@ -808,7 +806,6 @@
         ),
         ClearOnDecode::Clear,
         IndexVec::new(),
-        sig.output(),
         None,
         local_decls,
         sig.inputs().len(),
@@ -881,7 +878,6 @@
         ),
         ClearOnDecode::Clear,
         IndexVec::new(),
-        sig.output(),
         None,
         local_decls,
         sig.inputs().len(),
diff --git a/src/librustc_mir/transform/deaggregator.rs b/src/librustc_mir/transform/deaggregator.rs
index 61b4716..e2ecd48 100644
--- a/src/librustc_mir/transform/deaggregator.rs
+++ b/src/librustc_mir/transform/deaggregator.rs
@@ -67,7 +67,7 @@
                     let ty = variant_def.fields[i].ty(tcx, substs);
                     let rhs = Rvalue::Use(op.clone());
 
-                    let lhs_cast = if adt_def.variants.len() > 1 {
+                    let lhs_cast = if adt_def.is_enum() {
                         Lvalue::Projection(Box::new(LvalueProjection {
                             base: lhs.clone(),
                             elem: ProjectionElem::Downcast(adt_def, variant),
@@ -89,7 +89,7 @@
                 }
 
                 // if the aggregate was an enum, we need to set the discriminant
-                if adt_def.variants.len() > 1 {
+                if adt_def.is_enum() {
                     let set_discriminant = Statement {
                         kind: StatementKind::SetDiscriminant {
                             lvalue: lhs.clone(),
diff --git a/src/librustc_mir/transform/generator.rs b/src/librustc_mir/transform/generator.rs
index 7d12d50..f676372 100644
--- a/src/librustc_mir/transform/generator.rs
+++ b/src/librustc_mir/transform/generator.rs
@@ -557,7 +557,6 @@
     }
 
     // Replace the return variable
-    mir.return_ty = tcx.mk_nil();
     mir.local_decls[RETURN_POINTER] = LocalDecl {
         mutability: Mutability::Mut,
         ty: tcx.mk_nil(),
@@ -777,7 +776,7 @@
         let state_did = tcx.lang_items().gen_state().unwrap();
         let state_adt_ref = tcx.adt_def(state_did);
         let state_substs = tcx.mk_substs([Kind::from(yield_ty),
-            Kind::from(mir.return_ty)].iter());
+            Kind::from(mir.return_ty())].iter());
         let ret_ty = tcx.mk_adt(state_adt_ref, state_substs);
 
         // We rename RETURN_POINTER which has type mir.return_ty to new_ret_local
@@ -808,7 +807,6 @@
         transform.visit_mir(mir);
 
         // Update our MIR struct to reflect the changes we've made
-        mir.return_ty = ret_ty;
         mir.yield_ty = None;
         mir.arg_count = 1;
         mir.spread_arg = None;
diff --git a/src/librustc_mir/transform/inline.rs b/src/librustc_mir/transform/inline.rs
index 628a816..4b7856f 100644
--- a/src/librustc_mir/transform/inline.rs
+++ b/src/librustc_mir/transform/inline.rs
@@ -19,6 +19,7 @@
 use rustc::mir::*;
 use rustc::mir::visit::*;
 use rustc::ty::{self, Instance, Ty, TyCtxt, TypeFoldable};
+use rustc::ty::layout::LayoutOf;
 use rustc::ty::subst::{Subst,Substs};
 
 use std::collections::VecDeque;
@@ -625,9 +626,7 @@
 fn type_size_of<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                           param_env: ty::ParamEnv<'tcx>,
                           ty: Ty<'tcx>) -> Option<u64> {
-    ty.layout(tcx, param_env).ok().map(|layout| {
-        layout.size(&tcx.data_layout).bytes()
-    })
+    (tcx, param_env).layout_of(ty).ok().map(|layout| layout.size.bytes())
 }
 
 fn subst_and_normalize<'a, 'tcx: 'a>(
diff --git a/src/librustc_mir/transform/nll/constraint_generation.rs b/src/librustc_mir/transform/nll/constraint_generation.rs
index b095a19..1f905d3 100644
--- a/src/librustc_mir/transform/nll/constraint_generation.rs
+++ b/src/librustc_mir/transform/nll/constraint_generation.rs
@@ -9,7 +9,7 @@
 // except according to those terms.
 
 use rustc::hir;
-use rustc::mir::{BasicBlock, BorrowKind, Location, Lvalue, Mir, Rvalue, Statement, StatementKind};
+use rustc::mir::{Location, Lvalue, Mir, Rvalue};
 use rustc::mir::visit::Visitor;
 use rustc::mir::Lvalue::Projection;
 use rustc::mir::{LvalueProjection, ProjectionElem};
@@ -21,9 +21,8 @@
 use rustc_data_structures::fx::FxHashSet;
 use syntax::codemap::DUMMY_SP;
 
-use super::subtype;
 use super::LivenessResults;
-use super::ToRegionIndex;
+use super::ToRegionVid;
 use super::region_infer::RegionInferenceContext;
 
 pub(super) fn generate_constraints<'a, 'gcx, 'tcx>(
@@ -102,7 +101,7 @@
         self.infcx
             .tcx
             .for_each_free_region(&live_ty, |live_region| {
-                let vid = live_region.to_region_index();
+                let vid = live_region.to_region_vid();
                 self.regioncx.add_live_point(vid, location);
             });
     }
@@ -179,29 +178,6 @@
         self.visit_mir(self.mir);
     }
 
-    fn add_borrow_constraint(
-        &mut self,
-        location: Location,
-        destination_lv: &Lvalue<'tcx>,
-        borrow_region: ty::Region<'tcx>,
-        _borrow_kind: BorrowKind,
-        _borrowed_lv: &Lvalue<'tcx>,
-    ) {
-        let tcx = self.infcx.tcx;
-        let span = self.mir.source_info(location).span;
-        let destination_ty = destination_lv.ty(self.mir, tcx).to_ty(tcx);
-
-        let destination_region = match destination_ty.sty {
-            ty::TyRef(r, _) => r,
-            _ => bug!()
-        };
-
-        self.regioncx.add_outlives(span,
-                                   borrow_region.to_region_index(),
-                                   destination_region.to_region_index(),
-                                   location.successor_within_block());
-    }
-
     fn add_reborrow_constraint(
         &mut self,
         location: Location,
@@ -227,8 +203,8 @@
 
                     let span = self.mir.source_info(location).span;
                     self.regioncx.add_outlives(span,
-                                               base_region.to_region_index(),
-                                               borrow_region.to_region_index(),
+                                               base_region.to_region_vid(),
+                                               borrow_region.to_region_vid(),
                                                location.successor_within_block());
                 }
             }
@@ -237,35 +213,22 @@
 }
 
 impl<'cx, 'gcx, 'tcx> Visitor<'tcx> for ConstraintGeneration<'cx, 'gcx, 'tcx> {
-    fn visit_statement(&mut self,
-                       block: BasicBlock,
-                       statement: &Statement<'tcx>,
-                       location: Location) {
+    fn visit_rvalue(&mut self,
+                    rvalue: &Rvalue<'tcx>,
+                    location: Location) {
+        debug!("visit_rvalue(rvalue={:?}, location={:?})", rvalue, location);
 
-        debug!("visit_statement(statement={:?}, location={:?})", statement, location);
-
-        // Look for a statement like:
+        // Look for an rvalue like:
         //
-        //     D = & L
+        //     & L
         //
-        // where D is the path to which we are assigning, and
-        // L is the path that is borrowed.
-        if let StatementKind::Assign(ref destination_lv, ref rv) = statement.kind {
-            if let Rvalue::Ref(region, bk, ref borrowed_lv) = *rv {
-                self.add_borrow_constraint(location, destination_lv, region, bk, borrowed_lv);
-                self.add_reborrow_constraint(location, region, borrowed_lv);
-            }
-
-            let tcx = self.infcx.tcx;
-            let destination_ty = destination_lv.ty(self.mir, tcx).to_ty(tcx);
-            let rv_ty = rv.ty(self.mir, tcx);
-
-            let span = self.mir.source_info(location).span;
-            for (a, b) in subtype::outlives_pairs(tcx, rv_ty, destination_ty) {
-                self.regioncx.add_outlives(span, a, b, location.successor_within_block());
-            }
+        // where L is the path that is borrowed. In that case, we have
+        // to add the reborrow constraints (which don't fall out
+        // naturally from the type-checker).
+        if let Rvalue::Ref(region, _bk, ref borrowed_lv) = *rvalue {
+            self.add_reborrow_constraint(location, region, borrowed_lv);
         }
 
-        self.super_statement(block, statement, location);
+        self.super_rvalue(rvalue, location);
     }
 }
diff --git a/src/librustc_mir/transform/nll/free_regions.rs b/src/librustc_mir/transform/nll/free_regions.rs
index 554d212..92a8a71 100644
--- a/src/librustc_mir/transform/nll/free_regions.rs
+++ b/src/librustc_mir/transform/nll/free_regions.rs
@@ -25,17 +25,18 @@
 use rustc::hir::def_id::DefId;
 use rustc::infer::InferCtxt;
 use rustc::middle::free_region::FreeRegionMap;
-use rustc::ty;
+use rustc::ty::{self, RegionVid};
 use rustc::ty::subst::Substs;
 use rustc::util::nodemap::FxHashMap;
+use rustc_data_structures::indexed_vec::Idx;
 
 #[derive(Debug)]
 pub struct FreeRegions<'tcx> {
     /// Given a free region defined on this function (either early- or
-    /// late-bound), this maps it to its internal region index. The
-    /// corresponding variable will be "capped" so that it cannot
-    /// grow.
-    pub indices: FxHashMap<ty::Region<'tcx>, usize>,
+    /// late-bound), this maps it to its internal region index. When
+    /// the region context is created, the first N variables will be
+    /// created based on these indices.
+    pub indices: FxHashMap<ty::Region<'tcx>, RegionVid>,
 
     /// The map from the typeck tables telling us how to relate free regions.
     pub free_region_map: &'tcx FreeRegionMap<'tcx>,
@@ -49,6 +50,9 @@
 
     let mut indices = FxHashMap();
 
+    // `'static` is always free.
+    insert_free_region(&mut indices, infcx.tcx.types.re_static);
+
     // Extract the early regions.
     let item_substs = Substs::identity_for_item(infcx.tcx, item_def_id);
     for item_subst in item_substs {
@@ -78,9 +82,9 @@
 }
 
 fn insert_free_region<'tcx>(
-    free_regions: &mut FxHashMap<ty::Region<'tcx>, usize>,
+    free_regions: &mut FxHashMap<ty::Region<'tcx>, RegionVid>,
     region: ty::Region<'tcx>,
 ) {
-    let len = free_regions.len();
-    free_regions.entry(region).or_insert(len);
+    let next = RegionVid::new(free_regions.len());
+    free_regions.entry(region).or_insert(next);
 }
diff --git a/src/librustc_mir/transform/nll/mod.rs b/src/librustc_mir/transform/nll/mod.rs
index f27d0a8..147f061 100644
--- a/src/librustc_mir/transform/nll/mod.rs
+++ b/src/librustc_mir/transform/nll/mod.rs
@@ -11,19 +11,19 @@
 use rustc::hir::def_id::DefId;
 use rustc::mir::Mir;
 use rustc::infer::InferCtxt;
-use rustc::ty::{self, RegionKind};
+use rustc::ty::{self, RegionKind, RegionVid};
 use rustc::util::nodemap::FxHashMap;
-use rustc_data_structures::indexed_vec::Idx;
 use std::collections::BTreeSet;
 use transform::MirSource;
+use transform::type_check;
 use util::liveness::{self, LivenessMode, LivenessResult, LocalSet};
 
 use util as mir_util;
 use self::mir_util::PassWhere;
 
 mod constraint_generation;
+mod subtype_constraint_generation;
 mod free_regions;
-mod subtype;
 
 pub(crate) mod region_infer;
 use self::region_infer::RegionInferenceContext;
@@ -36,13 +36,24 @@
 pub fn compute_regions<'a, 'gcx, 'tcx>(
     infcx: &InferCtxt<'a, 'gcx, 'tcx>,
     def_id: DefId,
+    param_env: ty::ParamEnv<'gcx>,
     mir: &mut Mir<'tcx>,
 ) -> RegionInferenceContext<'tcx> {
     // Compute named region information.
     let free_regions = &free_regions::free_regions(infcx, def_id);
 
     // Replace all regions with fresh inference variables.
-    let num_region_variables = renumber::renumber_mir(infcx, free_regions, mir);
+    renumber::renumber_mir(infcx, free_regions, mir);
+
+    // Run the MIR type-checker.
+    let mir_node_id = infcx.tcx.hir.as_local_node_id(def_id).unwrap();
+    let constraint_sets = &type_check::type_check(infcx, mir_node_id, param_env, mir);
+
+    // Create the region inference context, taking ownership of the region inference
+    // data that was contained in `infcx`.
+    let var_origins = infcx.take_region_var_origins();
+    let mut regioncx = RegionInferenceContext::new(var_origins, free_regions, mir);
+    subtype_constraint_generation::generate(&mut regioncx, free_regions, mir, constraint_sets);
 
     // Compute what is live where.
     let liveness = &LivenessResults {
@@ -63,11 +74,10 @@
         ),
     };
 
-    // Create the region inference context, generate the constraints,
-    // and then solve them.
-    let mut regioncx = RegionInferenceContext::new(free_regions, num_region_variables, mir);
-    let param_env = infcx.tcx.param_env(def_id);
+    // Generate non-subtyping constraints.
     constraint_generation::generate_constraints(infcx, &mut regioncx, &mir, param_env, liveness);
+
+    // Solve the region constraints.
     regioncx.solve(infcx, &mir);
 
     // Dump MIR results into a file, if that is enabled. This lets us
@@ -123,12 +133,7 @@
         match pass_where {
             // Before the CFG, dump out the values for each region variable.
             PassWhere::BeforeCFG => for region in regioncx.regions() {
-                writeln!(
-                    out,
-                    "| {:?}: {:?}",
-                    region,
-                    regioncx.region_value(region)
-                )?;
+                writeln!(out, "| {:?}: {:?}", region, regioncx.region_value(region))?;
             },
 
             // Before each basic block, dump out the values
@@ -152,23 +157,19 @@
     });
 }
 
-newtype_index!(RegionIndex {
-    DEBUG_FORMAT = "'_#{}r",
-});
-
 /// Right now, we piggyback on the `ReVar` to store our NLL inference
-/// regions. These are indexed with `RegionIndex`. This method will
-/// assert that the region is a `ReVar` and convert the internal index
-/// into a `RegionIndex`. This is reasonable because in our MIR we
-/// replace all free regions with inference variables.
-pub trait ToRegionIndex {
-    fn to_region_index(&self) -> RegionIndex;
+/// regions. These are indexed with `RegionVid`. This method will
+/// assert that the region is a `ReVar` and extract its internal index.
+/// This is reasonable because in our MIR we replace all free regions
+/// with inference variables.
+pub trait ToRegionVid {
+    fn to_region_vid(&self) -> RegionVid;
 }
 
-impl ToRegionIndex for RegionKind {
-    fn to_region_index(&self) -> RegionIndex {
+impl ToRegionVid for RegionKind {
+    fn to_region_vid(&self) -> RegionVid {
         if let &ty::ReVar(vid) = self {
-            RegionIndex::new(vid.index as usize)
+            vid
         } else {
             bug!("region is not an ReVar: {:?}", self)
         }
diff --git a/src/librustc_mir/transform/nll/region_infer.rs b/src/librustc_mir/transform/nll/region_infer.rs
index 553d5ad..1609c12 100644
--- a/src/librustc_mir/transform/nll/region_infer.rs
+++ b/src/librustc_mir/transform/nll/region_infer.rs
@@ -8,12 +8,14 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use super::RegionIndex;
 use super::free_regions::FreeRegions;
 use rustc::infer::InferCtxt;
+use rustc::infer::RegionVariableOrigin;
+use rustc::infer::NLLRegionVariableOrigin;
+use rustc::infer::region_constraints::VarOrigins;
 use rustc::mir::{Location, Mir};
-use rustc::ty;
-use rustc_data_structures::indexed_vec::{Idx, IndexVec};
+use rustc::ty::{self, RegionVid};
+use rustc_data_structures::indexed_vec::IndexVec;
 use rustc_data_structures::fx::FxHashSet;
 use std::collections::BTreeSet;
 use std::fmt;
@@ -21,28 +23,22 @@
 
 pub struct RegionInferenceContext<'tcx> {
     /// Contains the definition for every region variable.  Region
-    /// variables are identified by their index (`RegionIndex`). The
+    /// variables are identified by their index (`RegionVid`). The
     /// definition contains information about where the region came
     /// from as well as its final inferred value.
-    definitions: IndexVec<RegionIndex, RegionDefinition<'tcx>>,
-
-    /// The indices of all "free regions" in scope. These are the
-    /// lifetime parameters (anonymous and named) declared in the
-    /// function signature:
-    ///
-    ///     fn foo<'a, 'b>(x: &Foo<'a, 'b>)
-    ///            ^^  ^^     ^
-    ///
-    /// These indices will be from 0..N, as it happens, but we collect
-    /// them into a vector for convenience.
-    free_regions: Vec<RegionIndex>,
+    definitions: IndexVec<RegionVid, RegionDefinition<'tcx>>,
 
     /// The constraints we have accumulated and used during solving.
     constraints: Vec<Constraint>,
 }
 
-#[derive(Default)]
 struct RegionDefinition<'tcx> {
+    /// Why we created this variable. Mostly these will be
+    /// `RegionVariableOrigin::NLL`, but some variables get created
+    /// elsewhere in the code with other causes (e.g., instantiating
+    /// late-bound regions).
+    origin: RegionVariableOrigin,
+
     /// If this is a free-region, then this is `Some(X)` where `X` is
     /// the name of the region.
     name: Option<ty::Region<'tcx>>,
@@ -66,7 +62,7 @@
 #[derive(Clone, Default, PartialEq, Eq)]
 struct Region {
     points: BTreeSet<Location>,
-    free_regions: BTreeSet<RegionIndex>,
+    free_regions: BTreeSet<RegionVid>,
 }
 
 impl fmt::Debug for Region {
@@ -84,7 +80,7 @@
         self.points.insert(point)
     }
 
-    fn add_free_region(&mut self, region: RegionIndex) -> bool {
+    fn add_free_region(&mut self, region: RegionVid) -> bool {
         self.free_regions.insert(region)
     }
 
@@ -93,19 +89,24 @@
     }
 }
 
-#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
+#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
 pub struct Constraint {
-    /// Where did this constraint arise?
-    span: Span,
+    // NB. The ordering here is not significant for correctness, but
+    // it is for convenience. Before we dump the constraints in the
+    // debugging logs, we sort them, and we'd like the "super region"
+    // to be first, etc. (In particular, span should remain last.)
 
     /// The region SUP must outlive SUB...
-    sup: RegionIndex,
+    sup: RegionVid,
 
     /// Region that must be outlived.
-    sub: RegionIndex,
+    sub: RegionVid,
 
     /// At this location.
     point: Location,
+
+    /// Where did this constraint arise?
+    span: Span,
 }
 
 impl<'a, 'gcx, 'tcx> RegionInferenceContext<'tcx> {
@@ -113,17 +114,16 @@
     /// `num_region_variables` valid inference variables; the first N
     /// of those will be constant regions representing the free
     /// regions defined in `free_regions`.
-    pub fn new(
-        free_regions: &FreeRegions<'tcx>,
-        num_region_variables: usize,
-        mir: &Mir<'tcx>,
-    ) -> Self {
+    pub fn new(var_origins: VarOrigins, free_regions: &FreeRegions<'tcx>, mir: &Mir<'tcx>) -> Self {
+        // Create a RegionDefinition for each inference variable.
+        let definitions = var_origins
+            .into_iter()
+            .map(|origin| RegionDefinition::new(origin))
+            .collect();
+
         let mut result = Self {
-            definitions: (0..num_region_variables)
-                .map(|_| RegionDefinition::default())
-                .collect(),
+            definitions: definitions,
             constraints: Vec::new(),
-            free_regions: Vec::new(),
         };
 
         result.init_free_regions(free_regions, mir);
@@ -151,16 +151,18 @@
     /// is just itself. R1 (`'b`) in contrast also outlives `'a` and
     /// hence contains R0 and R1.
     fn init_free_regions(&mut self, free_regions: &FreeRegions<'tcx>, mir: &Mir<'tcx>) {
-        let &FreeRegions {
-            ref indices,
-            ref free_region_map,
+        let FreeRegions {
+            indices,
+            free_region_map,
         } = free_regions;
 
         // For each free region X:
-        for (free_region, index) in indices {
-            let variable = RegionIndex::new(*index);
-
-            self.free_regions.push(variable);
+        for (free_region, &variable) in indices {
+            // These should be free-region variables.
+            assert!(match self.definitions[variable].origin {
+                RegionVariableOrigin::NLL(NLLRegionVariableOrigin::FreeRegion) => true,
+                _ => false,
+            });
 
             // Initialize the name and a few other details.
             self.definitions[variable].name = Some(free_region);
@@ -181,10 +183,19 @@
             // Add `end(X)` into the set for X.
             self.definitions[variable].value.add_free_region(variable);
 
+            // `'static` outlives all other free regions as well.
+            if let ty::ReStatic = free_region {
+                for &other_variable in indices.values() {
+                    self.definitions[variable]
+                        .value
+                        .add_free_region(other_variable);
+                }
+            }
+
             // Go through each region Y that outlives X (i.e., where
             // Y: X is true). Add `end(X)` into the set for `Y`.
             for superregion in free_region_map.regions_that_outlive(&free_region) {
-                let superregion_index = RegionIndex::new(indices[superregion]);
+                let superregion_index = indices[superregion];
                 self.definitions[superregion_index]
                     .value
                     .add_free_region(variable);
@@ -200,24 +211,24 @@
     }
 
     /// Returns an iterator over all the region indices.
-    pub fn regions(&self) -> impl Iterator<Item = RegionIndex> {
+    pub fn regions(&self) -> impl Iterator<Item = RegionVid> {
         self.definitions.indices()
     }
 
     /// Returns true if the region `r` contains the point `p`.
     ///
     /// Until `solve()` executes, this value is not particularly meaningful.
-    pub fn region_contains_point(&self, r: RegionIndex, p: Location) -> bool {
+    pub fn region_contains_point(&self, r: RegionVid, p: Location) -> bool {
         self.definitions[r].value.contains_point(p)
     }
 
     /// Returns access to the value of `r` for debugging purposes.
-    pub(super) fn region_value(&self, r: RegionIndex) -> &fmt::Debug {
+    pub(super) fn region_value(&self, r: RegionVid) -> &fmt::Debug {
         &self.definitions[r].value
     }
 
     /// Indicates that the region variable `v` is live at the point `point`.
-    pub(super) fn add_live_point(&mut self, v: RegionIndex, point: Location) {
+    pub(super) fn add_live_point(&mut self, v: RegionVid, point: Location) {
         debug!("add_live_point({:?}, {:?})", v, point);
         let definition = &mut self.definitions[v];
         if !definition.constant {
@@ -233,8 +244,8 @@
     pub(super) fn add_outlives(
         &mut self,
         span: Span,
-        sup: RegionIndex,
-        sub: RegionIndex,
+        sup: RegionVid,
+        sub: RegionVid,
         point: Location,
     ) {
         debug!("add_outlives({:?}: {:?} @ {:?}", sup, sub, point);
@@ -267,23 +278,28 @@
     /// for each region variable until all the constraints are
     /// satisfied. Note that some values may grow **too** large to be
     /// feasible, but we check this later.
-    fn propagate_constraints(
-        &mut self,
-        mir: &Mir<'tcx>,
-    ) -> Vec<(RegionIndex, Span, RegionIndex)> {
+    fn propagate_constraints(&mut self, mir: &Mir<'tcx>) -> Vec<(RegionVid, Span, RegionVid)> {
         let mut changed = true;
         let mut dfs = Dfs::new(mir);
         let mut error_regions = FxHashSet();
         let mut errors = vec![];
+
+        debug!("propagate_constraints()");
+        debug!("propagate_constraints: constraints={:#?}", {
+            let mut constraints: Vec<_> = self.constraints.iter().collect();
+            constraints.sort();
+            constraints
+        });
+
         while changed {
             changed = false;
             for constraint in &self.constraints {
-                debug!("constraint: {:?}", constraint);
+                debug!("propagate_constraints: constraint={:?}", constraint);
                 let sub = &self.definitions[constraint.sub].value.clone();
                 let sup_def = &mut self.definitions[constraint.sup];
 
-                debug!("    sub (before): {:?}", sub);
-                debug!("    sup (before): {:?}", sup_def.value);
+                debug!("propagate_constraints:    sub (before): {:?}", sub);
+                debug!("propagate_constraints:    sup (before): {:?}", sup_def.value);
 
                 if !sup_def.constant {
                     // If this is not a constant, then grow the value as needed to
@@ -293,8 +309,8 @@
                         changed = true;
                     }
 
-                    debug!("    sup (after) : {:?}", sup_def.value);
-                    debug!("    changed     : {:?}", changed);
+                    debug!("propagate_constraints:    sup (after) : {:?}", sup_def.value);
+                    debug!("propagate_constraints:    changed     : {:?}", changed);
                 } else {
                     // If this is a constant, check whether it *would
                     // have* to grow in order for the constraint to be
@@ -310,7 +326,7 @@
                             .difference(&sup_def.value.free_regions)
                             .next()
                             .unwrap();
-                        debug!("    new_region : {:?}", new_region);
+                        debug!("propagate_constraints:    new_region : {:?}", new_region);
                         if error_regions.insert(constraint.sup) {
                             errors.push((constraint.sup, constraint.span, new_region));
                         }
@@ -398,3 +414,30 @@
         changed
     }
 }
+
+impl<'tcx> RegionDefinition<'tcx> {
+    fn new(origin: RegionVariableOrigin) -> Self {
+        // Create a new region definition. Note that, for free
+        // regions, these fields get updated later in
+        // `init_free_regions`.
+        Self {
+            origin,
+            name: None,
+            constant: false,
+            value: Region::default(),
+        }
+    }
+}
+
+impl fmt::Debug for Constraint {
+    fn fmt(&self, formatter: &mut fmt::Formatter) -> Result<(), fmt::Error> {
+        write!(
+            formatter,
+            "({:?}: {:?} @ {:?}) due to {:?}",
+            self.sup,
+            self.sub,
+            self.point,
+            self.span
+        )
+    }
+}
diff --git a/src/librustc_mir/transform/nll/renumber.rs b/src/librustc_mir/transform/nll/renumber.rs
index a3ff7a0..1076b77 100644
--- a/src/librustc_mir/transform/nll/renumber.rs
+++ b/src/librustc_mir/transform/nll/renumber.rs
@@ -8,15 +8,14 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use rustc_data_structures::indexed_vec::Idx;
-use rustc::ty::subst::{Kind, Substs};
-use rustc::ty::{self, ClosureSubsts, RegionKind, RegionVid, Ty, TypeFoldable};
-use rustc::mir::{BasicBlock, Local, Location, Mir, Rvalue, Statement, StatementKind};
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
+use rustc::ty::subst::Substs;
+use rustc::ty::{self, ClosureSubsts, RegionVid, Ty, TypeFoldable};
+use rustc::mir::{BasicBlock, Local, Location, Mir, Statement, StatementKind};
 use rustc::mir::visit::{MutVisitor, TyContext};
-use rustc::infer::{self as rustc_infer, InferCtxt};
-use syntax_pos::DUMMY_SP;
-use std::collections::HashMap;
+use rustc::infer::{InferCtxt, NLLRegionVariableOrigin};
 
+use super::ToRegionVid;
 use super::free_regions::FreeRegions;
 
 /// Replaces all free regions appearing in the MIR with fresh
@@ -25,33 +24,35 @@
     infcx: &InferCtxt<'a, 'gcx, 'tcx>,
     free_regions: &FreeRegions<'tcx>,
     mir: &mut Mir<'tcx>,
-) -> usize {
+) {
     // Create inference variables for each of the free regions
     // declared on the function signature.
     let free_region_inference_vars = (0..free_regions.indices.len())
-        .map(|_| {
-            infcx.next_region_var(rustc_infer::MiscVariable(DUMMY_SP))
+        .map(RegionVid::new)
+        .map(|vid_expected| {
+            let r = infcx.next_nll_region_var(NLLRegionVariableOrigin::FreeRegion);
+            assert_eq!(vid_expected, r.to_region_vid());
+            r
         })
         .collect();
 
+    debug!("renumber_mir()");
+    debug!("renumber_mir: free_regions={:#?}", free_regions);
+    debug!("renumber_mir: mir.arg_count={:?}", mir.arg_count);
+
     let mut visitor = NLLVisitor {
         infcx,
-        lookup_map: HashMap::new(),
-        num_region_variables: free_regions.indices.len(),
         free_regions,
         free_region_inference_vars,
         arg_count: mir.arg_count,
     };
     visitor.visit_mir(mir);
-    visitor.num_region_variables
 }
 
 struct NLLVisitor<'a, 'gcx: 'a + 'tcx, 'tcx: 'a> {
-    lookup_map: HashMap<RegionVid, TyContext>,
-    num_region_variables: usize,
     infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
     free_regions: &'a FreeRegions<'tcx>,
-    free_region_inference_vars: Vec<ty::Region<'tcx>>,
+    free_region_inference_vars: IndexVec<RegionVid, ty::Region<'tcx>>,
     arg_count: usize,
 }
 
@@ -59,16 +60,17 @@
     /// Replaces all regions appearing in `value` with fresh inference
     /// variables. This is what we do for almost the entire MIR, with
     /// the exception of the declared types of our arguments.
-    fn renumber_regions<T>(&mut self, value: &T) -> T
+    fn renumber_regions<T>(&mut self, ty_context: TyContext, value: &T) -> T
     where
         T: TypeFoldable<'tcx>,
     {
+        debug!("renumber_regions(value={:?})", value);
+
         self.infcx
             .tcx
             .fold_regions(value, &mut false, |_region, _depth| {
-                self.num_region_variables += 1;
-                self.infcx
-                    .next_region_var(rustc_infer::MiscVariable(DUMMY_SP))
+                let origin = NLLRegionVariableOrigin::Inferred(ty_context);
+                self.infcx.next_nll_region_var(origin)
             })
     }
 
@@ -78,6 +80,8 @@
     where
         T: TypeFoldable<'tcx>,
     {
+        debug!("renumber_free_regions(value={:?})", value);
+
         self.infcx
             .tcx
             .fold_regions(value, &mut false, |region, _depth| {
@@ -86,26 +90,6 @@
             })
     }
 
-    fn store_region(&mut self, region: &RegionKind, lookup: TyContext) {
-        if let RegionKind::ReVar(rid) = *region {
-            self.lookup_map.entry(rid).or_insert(lookup);
-        }
-    }
-
-    fn store_ty_regions(&mut self, ty: &Ty<'tcx>, ty_context: TyContext) {
-        for region in ty.regions() {
-            self.store_region(region, ty_context);
-        }
-    }
-
-    fn store_kind_regions(&mut self, kind: &'tcx Kind, ty_context: TyContext) {
-        if let Some(ty) = kind.as_type() {
-            self.store_ty_regions(&ty, ty_context);
-        } else if let Some(region) = kind.as_region() {
-            self.store_region(region, ty_context);
-        }
-    }
-
     fn is_argument_or_return_slot(&self, local: Local) -> bool {
         // The first local is the return slot; the next N are the arguments.
         local.index() <= self.arg_count
@@ -116,56 +100,55 @@
     fn visit_ty(&mut self, ty: &mut Ty<'tcx>, ty_context: TyContext) {
         let is_arg = match ty_context {
             TyContext::LocalDecl { local, .. } => self.is_argument_or_return_slot(local),
-            _ => false,
+            TyContext::ReturnTy(..) => true,
+            TyContext::Location(..) => false,
         };
+        debug!(
+            "visit_ty(ty={:?}, is_arg={:?}, ty_context={:?})",
+            ty,
+            is_arg,
+            ty_context
+        );
 
         let old_ty = *ty;
         *ty = if is_arg {
             self.renumber_free_regions(&old_ty)
         } else {
-            self.renumber_regions(&old_ty)
+            self.renumber_regions(ty_context, &old_ty)
         };
-        self.store_ty_regions(ty, ty_context);
+        debug!("visit_ty: ty={:?}", ty);
     }
 
     fn visit_substs(&mut self, substs: &mut &'tcx Substs<'tcx>, location: Location) {
-        *substs = self.renumber_regions(&{ *substs });
+        debug!("visit_substs(substs={:?}, location={:?})", substs, location);
+
         let ty_context = TyContext::Location(location);
-        for kind in *substs {
-            self.store_kind_regions(kind, ty_context);
-        }
+        *substs = self.renumber_regions(ty_context, &{ *substs });
+
+        debug!("visit_substs: substs={:?}", substs);
     }
 
-    fn visit_rvalue(&mut self, rvalue: &mut Rvalue<'tcx>, location: Location) {
-        match *rvalue {
-            Rvalue::Ref(ref mut r, _, _) => {
-                let old_r = *r;
-                *r = self.renumber_regions(&old_r);
-                let ty_context = TyContext::Location(location);
-                self.store_region(r, ty_context);
-            }
-            Rvalue::Use(..) |
-            Rvalue::Repeat(..) |
-            Rvalue::Len(..) |
-            Rvalue::Cast(..) |
-            Rvalue::BinaryOp(..) |
-            Rvalue::CheckedBinaryOp(..) |
-            Rvalue::UnaryOp(..) |
-            Rvalue::Discriminant(..) |
-            Rvalue::NullaryOp(..) |
-            Rvalue::Aggregate(..) => {
-                // These variants don't contain regions.
-            }
-        }
-        self.super_rvalue(rvalue, location);
+    fn visit_region(&mut self, region: &mut ty::Region<'tcx>, location: Location) {
+        debug!("visit_region(region={:?}, location={:?})", region, location);
+
+        let old_region = *region;
+        let ty_context = TyContext::Location(location);
+        *region = self.renumber_regions(ty_context, &old_region);
+
+        debug!("visit_region: region={:?}", region);
     }
 
     fn visit_closure_substs(&mut self, substs: &mut ClosureSubsts<'tcx>, location: Location) {
-        *substs = self.renumber_regions(substs);
+        debug!(
+            "visit_closure_substs(substs={:?}, location={:?})",
+            substs,
+            location
+        );
+
         let ty_context = TyContext::Location(location);
-        for kind in substs.substs {
-            self.store_kind_regions(kind, ty_context);
-        }
+        *substs = self.renumber_regions(ty_context, substs);
+
+        debug!("visit_closure_substs: substs={:?}", substs);
     }
 
     fn visit_statement(
diff --git a/src/librustc_mir/transform/nll/subtype.rs b/src/librustc_mir/transform/nll/subtype.rs
deleted file mode 100644
index 953fc0e..0000000
--- a/src/librustc_mir/transform/nll/subtype.rs
+++ /dev/null
@@ -1,99 +0,0 @@
-// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-use super::RegionIndex;
-use transform::nll::ToRegionIndex;
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc::ty::relate::{self, Relate, RelateResult, TypeRelation};
-
-pub fn outlives_pairs<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
-                      a: Ty<'tcx>,
-                      b: Ty<'tcx>)
-                      -> Vec<(RegionIndex, RegionIndex)>
-{
-    let mut subtype = Subtype::new(tcx);
-    match subtype.relate(&a, &b) {
-        Ok(_) => subtype.outlives_pairs,
-
-        Err(_) => bug!("Fail to relate a = {:?} and b = {:?}", a, b)
-    }
-}
-
-struct Subtype<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
-    tcx: TyCtxt<'a, 'gcx, 'tcx>,
-    outlives_pairs: Vec<(RegionIndex, RegionIndex)>,
-    ambient_variance: ty::Variance,
-}
-
-impl<'a, 'gcx, 'tcx> Subtype<'a, 'gcx, 'tcx> {
-    pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>) -> Subtype<'a, 'gcx, 'tcx> {
-        Subtype {
-            tcx,
-            outlives_pairs: vec![],
-            ambient_variance: ty::Covariant,
-        }
-    }
-}
-
-impl<'a, 'gcx, 'tcx> TypeRelation<'a, 'gcx, 'tcx> for Subtype<'a, 'gcx, 'tcx> {
-    fn tag(&self) -> &'static str { "Subtype" }
-    fn tcx(&self) -> TyCtxt<'a, 'gcx, 'tcx> { self.tcx }
-    fn a_is_expected(&self) -> bool { true } // irrelevant
-
-    fn relate_with_variance<T: Relate<'tcx>>(&mut self,
-                                             variance: ty::Variance,
-                                             a: &T,
-                                             b: &T)
-                                             -> RelateResult<'tcx, T>
-    {
-        let old_ambient_variance = self.ambient_variance;
-        self.ambient_variance = self.ambient_variance.xform(variance);
-
-        let result = self.relate(a, b);
-        self.ambient_variance = old_ambient_variance;
-        result
-    }
-
-    fn tys(&mut self, t: Ty<'tcx>, t2: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
-        relate::super_relate_tys(self, t, t2)
-    }
-
-    fn regions(&mut self, r_a: ty::Region<'tcx>, r_b: ty::Region<'tcx>)
-               -> RelateResult<'tcx, ty::Region<'tcx>> {
-        let a = r_a.to_region_index();
-        let b = r_b.to_region_index();
-
-        match self.ambient_variance {
-            ty::Covariant => {
-                self.outlives_pairs.push((b, a));
-            },
-
-            ty::Invariant => {
-                self.outlives_pairs.push((a, b));
-                self.outlives_pairs.push((b, a));
-            },
-
-            ty::Contravariant => {
-                self.outlives_pairs.push((a, b));
-            },
-
-            ty::Bivariant => {},
-        }
-
-        Ok(r_a)
-    }
-
-    fn binders<T>(&mut self, _a: &ty::Binder<T>, _b: &ty::Binder<T>)
-                  -> RelateResult<'tcx, ty::Binder<T>>
-        where T: Relate<'tcx>
-    {
-        unimplemented!();
-    }
-}
diff --git a/src/librustc_mir/transform/nll/subtype_constraint_generation.rs b/src/librustc_mir/transform/nll/subtype_constraint_generation.rs
new file mode 100644
index 0000000..c1850c7
--- /dev/null
+++ b/src/librustc_mir/transform/nll/subtype_constraint_generation.rs
@@ -0,0 +1,112 @@
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use rustc::mir::Mir;
+use rustc::infer::region_constraints::Constraint;
+use rustc::infer::region_constraints::RegionConstraintData;
+use rustc::ty;
+use transform::type_check::MirTypeckRegionConstraints;
+use transform::type_check::OutlivesSet;
+
+use super::free_regions::FreeRegions;
+use super::region_infer::RegionInferenceContext;
+
+/// When the MIR type-checker executes, it validates all the types in
+/// the MIR, and in the process generates a set of constraints that
+/// must hold regarding the regions in the MIR, along with locations
+/// *where* they must hold. This code takes those constriants and adds
+/// them into the NLL `RegionInferenceContext`.
+pub(super) fn generate<'tcx>(
+    regioncx: &mut RegionInferenceContext<'tcx>,
+    free_regions: &FreeRegions<'tcx>,
+    mir: &Mir<'tcx>,
+    constraints: &MirTypeckRegionConstraints<'tcx>,
+) {
+    SubtypeConstraintGenerator {
+        regioncx,
+        free_regions,
+        mir,
+    }.generate(constraints);
+}
+
+struct SubtypeConstraintGenerator<'cx, 'tcx: 'cx> {
+    regioncx: &'cx mut RegionInferenceContext<'tcx>,
+    free_regions: &'cx FreeRegions<'tcx>,
+    mir: &'cx Mir<'tcx>,
+}
+
+impl<'cx, 'tcx> SubtypeConstraintGenerator<'cx, 'tcx> {
+    fn generate(&mut self, constraints: &MirTypeckRegionConstraints<'tcx>) {
+        let MirTypeckRegionConstraints {
+            liveness_set,
+            outlives_sets,
+        } = constraints;
+
+        debug!(
+            "generate(liveness_set={} items, outlives_sets={} items)",
+            liveness_set.len(),
+            outlives_sets.len()
+        );
+
+        for (region, location) in liveness_set {
+            debug!("generate: {:#?} is live at {:#?}", region, location);
+            let region_vid = self.to_region_vid(region);
+            self.regioncx.add_live_point(region_vid, *location);
+        }
+
+        for OutlivesSet { locations, data } in outlives_sets {
+            debug!("generate: constraints at: {:#?}", locations);
+            let RegionConstraintData {
+                constraints,
+                verifys,
+                givens,
+            } = data;
+
+            for constraint in constraints.keys() {
+                debug!("generate: constraint: {:?}", constraint);
+                let (a_vid, b_vid) = match constraint {
+                    Constraint::VarSubVar(a_vid, b_vid) => (*a_vid, *b_vid),
+                    Constraint::RegSubVar(a_r, b_vid) => (self.to_region_vid(a_r), *b_vid),
+                    Constraint::VarSubReg(a_vid, b_r) => (*a_vid, self.to_region_vid(b_r)),
+                    Constraint::RegSubReg(a_r, b_r) => {
+                        (self.to_region_vid(a_r), self.to_region_vid(b_r))
+                    }
+                };
+
+                // We have the constraint that `a_vid <= b_vid`. Add
+                // `b_vid: a_vid` to our region checker. Note that we
+                // reverse direction, because `regioncx` talks about
+                // "outlives" (`>=`) whereas the region constraints
+                // talk about `<=`.
+                let span = self.mir.source_info(locations.from_location).span;
+                self.regioncx
+                    .add_outlives(span, b_vid, a_vid, locations.at_location);
+            }
+
+            assert!(verifys.is_empty(), "verifys not yet implemented");
+            assert!(
+                givens.is_empty(),
+                "MIR type-checker does not use givens (thank goodness)"
+            );
+        }
+    }
+
+    fn to_region_vid(&self, r: ty::Region<'tcx>) -> ty::RegionVid {
+        // Every region that we see in the constraints came from the
+        // MIR or from the parameter environment. If the former, it
+        // will be a region variable.  If the latter, it will be in
+        // the set of free regions *somewhere*.
+        if let ty::ReVar(vid) = r {
+            *vid
+        } else {
+            self.free_regions.indices[&r]
+        }
+    }
+}
diff --git a/src/librustc_mir/transform/promote_consts.rs b/src/librustc_mir/transform/promote_consts.rs
index 339ea8a..70f0c63 100644
--- a/src/librustc_mir/transform/promote_consts.rs
+++ b/src/librustc_mir/transform/promote_consts.rs
@@ -287,7 +287,7 @@
         let span = self.promoted.span;
         let new_operand = Operand::Constant(box Constant {
             span,
-            ty: self.promoted.return_ty,
+            ty: self.promoted.return_ty(),
             literal: Literal::Promoted {
                 index: Promoted::new(self.source.promoted.len())
             }
@@ -385,7 +385,6 @@
                 mir.visibility_scopes.clone(),
                 mir.visibility_scope_info.clone(),
                 IndexVec::new(),
-                ty,
                 None,
                 initial_locals,
                 0,
diff --git a/src/librustc_mir/transform/qualify_consts.rs b/src/librustc_mir/transform/qualify_consts.rs
index ab29134..97e80de 100644
--- a/src/librustc_mir/transform/qualify_consts.rs
+++ b/src/librustc_mir/transform/qualify_consts.rs
@@ -380,7 +380,7 @@
         // conservative type qualification instead.
         if self.qualif.intersects(Qualif::CONST_ERROR) {
             self.qualif = Qualif::empty();
-            let return_ty = mir.return_ty;
+            let return_ty = mir.return_ty();
             self.add_type(return_ty);
         }
 
@@ -938,7 +938,7 @@
     // performing the steal.
     let mir = &tcx.mir_const(def_id).borrow();
 
-    if mir.return_ty.references_error() {
+    if mir.return_ty().references_error() {
         tcx.sess.delay_span_bug(mir.span, "mir_const_qualif: Mir had errors");
         return (Qualif::NOT_CONST.bits(), Rc::new(IdxSetBuf::new_empty(0)));
     }
@@ -956,7 +956,7 @@
                           src: MirSource,
                           mir: &mut Mir<'tcx>) {
         // There's not really any point in promoting errorful MIR.
-        if mir.return_ty.references_error() {
+        if mir.return_ty().references_error() {
             tcx.sess.delay_span_bug(mir.span, "QualifyAndPromoteConstants: Mir had errors");
             return;
         }
@@ -1045,7 +1045,7 @@
                     return;
                 }
             }
-            let ty = mir.return_ty;
+            let ty = mir.return_ty();
             tcx.infer_ctxt().enter(|infcx| {
                 let param_env = ty::ParamEnv::empty(Reveal::UserFacing);
                 let cause = traits::ObligationCause::new(mir.span, id, traits::SharedStatic);
diff --git a/src/librustc_mir/transform/type_check.rs b/src/librustc_mir/transform/type_check.rs
index dc462cd..cc6b702 100644
--- a/src/librustc_mir/transform/type_check.rs
+++ b/src/librustc_mir/transform/type_check.rs
@@ -11,8 +11,10 @@
 //! This pass type-checks the MIR to ensure it is not broken.
 #![allow(unreachable_code)]
 
-use rustc::infer::{self, InferCtxt, InferOk};
-use rustc::traits;
+use rustc::infer::{InferCtxt, InferOk, InferResult, LateBoundRegionConversionTime, UnitResult};
+use rustc::infer::region_constraints::RegionConstraintData;
+use rustc::traits::{self, FulfillmentContext};
+use rustc::ty::error::TypeError;
 use rustc::ty::fold::TypeFoldable;
 use rustc::ty::{self, Ty, TyCtxt, TypeVariants};
 use rustc::middle::const_val::ConstVal;
@@ -27,6 +29,34 @@
 use rustc_data_structures::fx::FxHashSet;
 use rustc_data_structures::indexed_vec::Idx;
 
+/// Type checks the given `mir` in the context of the inference
+/// context `infcx`. Returns any region constraints that have yet to
+/// be proven.
+///
+/// This phase of type-check ought to be infallible -- this is because
+/// the original, HIR-based type-check succeeded. So if any errors
+/// occur here, we will get a `bug!` reported.
+pub fn type_check<'a, 'gcx, 'tcx>(
+    infcx: &InferCtxt<'a, 'gcx, 'tcx>,
+    body_id: ast::NodeId,
+    param_env: ty::ParamEnv<'gcx>,
+    mir: &Mir<'tcx>,
+) -> MirTypeckRegionConstraints<'tcx> {
+    let mut checker = TypeChecker::new(infcx, body_id, param_env);
+    let errors_reported = {
+        let mut verifier = TypeVerifier::new(&mut checker, mir);
+        verifier.visit_mir(mir);
+        verifier.errors_reported
+    };
+
+    if !errors_reported {
+        // if verifier failed, don't do further checks to avoid ICEs
+        checker.typeck_mir(mir);
+    }
+
+    checker.constraints
+}
+
 fn mirbug(tcx: TyCtxt, span: Span, msg: &str) {
     tcx.sess.diagnostic().span_bug(span, msg);
 }
@@ -51,7 +81,7 @@
 }
 
 enum FieldAccessError {
-    OutOfRange { field_count: usize }
+    OutOfRange { field_count: usize },
 }
 
 /// Verifies that MIR types are sane to not crash further checks.
@@ -59,12 +89,12 @@
 /// The sanitize_XYZ methods here take an MIR object and compute its
 /// type, calling `span_mirbug` and returning an error type if there
 /// is a problem.
-struct TypeVerifier<'a, 'b: 'a, 'gcx: 'b+'tcx, 'tcx: 'b> {
+struct TypeVerifier<'a, 'b: 'a, 'gcx: 'b + 'tcx, 'tcx: 'b> {
     cx: &'a mut TypeChecker<'b, 'gcx, 'tcx>,
     mir: &'a Mir<'tcx>,
     last_span: Span,
     body_id: ast::NodeId,
-    errors_reported: bool
+    errors_reported: bool,
 }
 
 impl<'a, 'b, 'gcx, 'tcx> Visitor<'tcx> for TypeVerifier<'a, 'b, 'gcx, 'tcx> {
@@ -74,10 +104,12 @@
         }
     }
 
-    fn visit_lvalue(&mut self,
-                    lvalue: &Lvalue<'tcx>,
-                    _context: visit::LvalueContext,
-                    location: Location) {
+    fn visit_lvalue(
+        &mut self,
+        lvalue: &Lvalue<'tcx>,
+        _context: visit::LvalueContext,
+        location: Location,
+    ) {
         self.sanitize_lvalue(lvalue, location);
     }
 
@@ -98,7 +130,7 @@
     }
 
     fn visit_mir(&mut self, mir: &Mir<'tcx>) {
-        self.sanitize_type(&"return type", mir.return_ty);
+        self.sanitize_type(&"return type", mir.return_ty());
         for local_decl in &mir.local_decls {
             self.sanitize_type(local_decl, local_decl.ty);
         }
@@ -116,7 +148,7 @@
             body_id: cx.body_id,
             cx,
             last_span: mir.span,
-            errors_reported: false
+            errors_reported: false,
         }
     }
 
@@ -125,7 +157,7 @@
     }
 
     fn sanitize_type(&mut self, parent: &fmt::Debug, ty: Ty<'tcx>) -> Ty<'tcx> {
-        if ty.needs_infer() || ty.has_escaping_regions() || ty.references_error() {
+        if ty.has_escaping_regions() || ty.references_error() {
             span_mirbug_and_err!(self, parent, "bad type {:?}", ty)
         } else {
             ty
@@ -135,25 +167,35 @@
     fn sanitize_lvalue(&mut self, lvalue: &Lvalue<'tcx>, location: Location) -> LvalueTy<'tcx> {
         debug!("sanitize_lvalue: {:?}", lvalue);
         match *lvalue {
-            Lvalue::Local(index) => LvalueTy::Ty { ty: self.mir.local_decls[index].ty },
+            Lvalue::Local(index) => LvalueTy::Ty {
+                ty: self.mir.local_decls[index].ty,
+            },
             Lvalue::Static(box Static { def_id, ty: sty }) => {
                 let sty = self.sanitize_type(lvalue, sty);
                 let ty = self.tcx().type_of(def_id);
-                let ty = self.cx.normalize(&ty);
-                if let Err(terr) = self.cx.eq_types(self.last_span, ty, sty) {
+                let ty = self.cx.normalize(&ty, location);
+                if let Err(terr) = self.cx
+                    .eq_types(self.last_span, ty, sty, location.at_self())
+                {
                     span_mirbug!(
-                        self, lvalue, "bad static type ({:?}: {:?}): {:?}",
-                        ty, sty, terr);
+                        self,
+                        lvalue,
+                        "bad static type ({:?}: {:?}): {:?}",
+                        ty,
+                        sty,
+                        terr
+                    );
                 }
                 LvalueTy::Ty { ty: sty }
-
-            },
+            }
             Lvalue::Projection(ref proj) => {
                 let base_ty = self.sanitize_lvalue(&proj.base, location);
                 if let LvalueTy::Ty { ty } = base_ty {
                     if ty.references_error() {
                         assert!(self.errors_reported);
-                        return LvalueTy::Ty { ty: self.tcx().types.err };
+                        return LvalueTy::Ty {
+                            ty: self.tcx().types.err,
+                        };
                     }
                 }
                 self.sanitize_projection(base_ty, &proj.elem, lvalue, location)
@@ -161,12 +203,13 @@
         }
     }
 
-    fn sanitize_projection(&mut self,
-                           base: LvalueTy<'tcx>,
-                           pi: &LvalueElem<'tcx>,
-                           lvalue: &Lvalue<'tcx>,
-                           _: Location)
-                           -> LvalueTy<'tcx> {
+    fn sanitize_projection(
+        &mut self,
+        base: LvalueTy<'tcx>,
+        pi: &LvalueElem<'tcx>,
+        lvalue: &Lvalue<'tcx>,
+        location: Location,
+    ) -> LvalueTy<'tcx> {
         debug!("sanitize_projection: {:?} {:?} {:?}", base, pi, lvalue);
         let tcx = self.tcx();
         let base_ty = base.to_ty(tcx);
@@ -176,23 +219,21 @@
                 let deref_ty = base_ty.builtin_deref(true, ty::LvaluePreference::NoPreference);
                 LvalueTy::Ty {
                     ty: deref_ty.map(|t| t.ty).unwrap_or_else(|| {
-                        span_mirbug_and_err!(
-                            self, lvalue, "deref of non-pointer {:?}", base_ty)
-                    })
+                        span_mirbug_and_err!(self, lvalue, "deref of non-pointer {:?}", base_ty)
+                    }),
                 }
             }
             ProjectionElem::Index(i) => {
                 let index_ty = Lvalue::Local(i).ty(self.mir, tcx).to_ty(tcx);
                 if index_ty != tcx.types.usize {
                     LvalueTy::Ty {
-                        ty: span_mirbug_and_err!(self, i, "index by non-usize {:?}", i)
+                        ty: span_mirbug_and_err!(self, i, "index by non-usize {:?}", i),
                     }
                 } else {
                     LvalueTy::Ty {
                         ty: base_ty.builtin_index().unwrap_or_else(|| {
-                            span_mirbug_and_err!(
-                                self, lvalue, "index of non-array {:?}", base_ty)
-                        })
+                            span_mirbug_and_err!(self, lvalue, "index of non-array {:?}", base_ty)
+                        }),
                     }
                 }
             }
@@ -200,73 +241,82 @@
                 // consider verifying in-bounds
                 LvalueTy::Ty {
                     ty: base_ty.builtin_index().unwrap_or_else(|| {
-                        span_mirbug_and_err!(
-                            self, lvalue, "index of non-array {:?}", base_ty)
-                    })
+                        span_mirbug_and_err!(self, lvalue, "index of non-array {:?}", base_ty)
+                    }),
                 }
             }
-            ProjectionElem::Subslice { from, to } => {
-                LvalueTy::Ty {
-                    ty: match base_ty.sty {
-                        ty::TyArray(inner, size) => {
-                            let size = size.val.to_const_int().unwrap().to_u64().unwrap();
-                            let min_size = (from as u64) + (to as u64);
-                            if let Some(rest_size) = size.checked_sub(min_size) {
-                                tcx.mk_array(inner, rest_size)
-                            } else {
-                                span_mirbug_and_err!(
-                                    self, lvalue, "taking too-small slice of {:?}", base_ty)
-                            }
-                        }
-                        ty::TySlice(..) => base_ty,
-                        _ => {
-                            span_mirbug_and_err!(
-                                self, lvalue, "slice of non-array {:?}", base_ty)
-                        }
-                    }
-                }
-            }
-            ProjectionElem::Downcast(adt_def1, index) =>
-                match base_ty.sty {
-                    ty::TyAdt(adt_def, substs) if adt_def.is_enum() && adt_def == adt_def1 => {
-                        if index >= adt_def.variants.len() {
-                            LvalueTy::Ty {
-                                ty: span_mirbug_and_err!(
-                                    self,
-                                    lvalue,
-                                    "cast to variant #{:?} but enum only has {:?}",
-                                    index,
-                                    adt_def.variants.len())
-                            }
+            ProjectionElem::Subslice { from, to } => LvalueTy::Ty {
+                ty: match base_ty.sty {
+                    ty::TyArray(inner, size) => {
+                        let size = size.val.to_const_int().unwrap().to_u64().unwrap();
+                        let min_size = (from as u64) + (to as u64);
+                        if let Some(rest_size) = size.checked_sub(min_size) {
+                            tcx.mk_array(inner, rest_size)
                         } else {
-                            LvalueTy::Downcast {
-                                adt_def,
-                                substs,
-                                variant_index: index
-                            }
+                            span_mirbug_and_err!(
+                                self,
+                                lvalue,
+                                "taking too-small slice of {:?}",
+                                base_ty
+                            )
                         }
                     }
-                    _ => LvalueTy::Ty {
-                        ty: span_mirbug_and_err!(
-                            self, lvalue, "can't downcast {:?} as {:?}",
-                            base_ty, adt_def1)
-                    }
+                    ty::TySlice(..) => base_ty,
+                    _ => span_mirbug_and_err!(self, lvalue, "slice of non-array {:?}", base_ty),
                 },
+            },
+            ProjectionElem::Downcast(adt_def1, index) => match base_ty.sty {
+                ty::TyAdt(adt_def, substs) if adt_def.is_enum() && adt_def == adt_def1 => {
+                    if index >= adt_def.variants.len() {
+                        LvalueTy::Ty {
+                            ty: span_mirbug_and_err!(
+                                self,
+                                lvalue,
+                                "cast to variant #{:?} but enum only has {:?}",
+                                index,
+                                adt_def.variants.len()
+                            ),
+                        }
+                    } else {
+                        LvalueTy::Downcast {
+                            adt_def,
+                            substs,
+                            variant_index: index,
+                        }
+                    }
+                }
+                _ => LvalueTy::Ty {
+                    ty: span_mirbug_and_err!(
+                        self,
+                        lvalue,
+                        "can't downcast {:?} as {:?}",
+                        base_ty,
+                        adt_def1
+                    ),
+                },
+            },
             ProjectionElem::Field(field, fty) => {
                 let fty = self.sanitize_type(lvalue, fty);
-                match self.field_ty(lvalue, base, field) {
+                match self.field_ty(lvalue, base, field, location) {
                     Ok(ty) => {
-                        if let Err(terr) = self.cx.eq_types(span, ty, fty) {
+                        if let Err(terr) = self.cx.eq_types(span, ty, fty, location.at_self()) {
                             span_mirbug!(
-                                self, lvalue, "bad field access ({:?}: {:?}): {:?}",
-                                ty, fty, terr);
+                                self,
+                                lvalue,
+                                "bad field access ({:?}: {:?}): {:?}",
+                                ty,
+                                fty,
+                                terr
+                            );
                         }
                     }
-                    Err(FieldAccessError::OutOfRange { field_count }) => {
-                        span_mirbug!(
-                            self, lvalue, "accessed field #{} but variant only has {}",
-                            field.index(), field_count)
-                    }
+                    Err(FieldAccessError::OutOfRange { field_count }) => span_mirbug!(
+                        self,
+                        lvalue,
+                        "accessed field #{} but variant only has {}",
+                        field.index(),
+                        field_count
+                    ),
                 }
                 LvalueTy::Ty { ty: fty }
             }
@@ -278,28 +328,31 @@
         self.tcx().types.err
     }
 
-    fn field_ty(&mut self,
-                parent: &fmt::Debug,
-                base_ty: LvalueTy<'tcx>,
-                field: Field)
-                -> Result<Ty<'tcx>, FieldAccessError>
-    {
+    fn field_ty(
+        &mut self,
+        parent: &fmt::Debug,
+        base_ty: LvalueTy<'tcx>,
+        field: Field,
+        location: Location,
+    ) -> Result<Ty<'tcx>, FieldAccessError> {
         let tcx = self.tcx();
 
         let (variant, substs) = match base_ty {
-            LvalueTy::Downcast { adt_def, substs, variant_index } => {
-                (&adt_def.variants[variant_index], substs)
-            }
+            LvalueTy::Downcast {
+                adt_def,
+                substs,
+                variant_index,
+            } => (&adt_def.variants[variant_index], substs),
             LvalueTy::Ty { ty } => match ty.sty {
-                ty::TyAdt(adt_def, substs) if adt_def.is_univariant() => {
-                        (&adt_def.variants[0], substs)
-                    }
+                ty::TyAdt(adt_def, substs) if !adt_def.is_enum() => {
+                    (&adt_def.variants[0], substs)
+                }
                 ty::TyClosure(def_id, substs) => {
                     return match substs.upvar_tys(def_id, tcx).nth(field.index()) {
                         Some(ty) => Ok(ty),
                         None => Err(FieldAccessError::OutOfRange {
-                            field_count: substs.upvar_tys(def_id, tcx).count()
-                        })
+                            field_count: substs.upvar_tys(def_id, tcx).count(),
+                        }),
                     }
                 }
                 ty::TyGenerator(def_id, substs, _) => {
@@ -311,52 +364,109 @@
                     return match substs.field_tys(def_id, tcx).nth(field.index()) {
                         Some(ty) => Ok(ty),
                         None => Err(FieldAccessError::OutOfRange {
-                            field_count: substs.field_tys(def_id, tcx).count() + 1
-                        })
-                    }
+                            field_count: substs.field_tys(def_id, tcx).count() + 1,
+                        }),
+                    };
                 }
                 ty::TyTuple(tys, _) => {
                     return match tys.get(field.index()) {
                         Some(&ty) => Ok(ty),
                         None => Err(FieldAccessError::OutOfRange {
-                            field_count: tys.len()
-                        })
+                            field_count: tys.len(),
+                        }),
                     }
                 }
-                _ => return Ok(span_mirbug_and_err!(
-                    self, parent, "can't project out of {:?}", base_ty))
-            }
+                _ => {
+                    return Ok(span_mirbug_and_err!(
+                        self,
+                        parent,
+                        "can't project out of {:?}",
+                        base_ty
+                    ))
+                }
+            },
         };
 
         if let Some(field) = variant.fields.get(field.index()) {
-            Ok(self.cx.normalize(&field.ty(tcx, substs)))
+            Ok(self.cx.normalize(&field.ty(tcx, substs), location))
         } else {
-            Err(FieldAccessError::OutOfRange { field_count: variant.fields.len() })
+            Err(FieldAccessError::OutOfRange {
+                field_count: variant.fields.len(),
+            })
         }
     }
 }
 
-pub struct TypeChecker<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
+/// The MIR type checker. Visits the MIR and enforces all the
+/// constraints needed for it to be valid and well-typed. Along the
+/// way, it accrues region constraints -- these can later be used by
+/// NLL region checking.
+pub struct TypeChecker<'a, 'gcx: 'a + 'tcx, 'tcx: 'a> {
     infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
     param_env: ty::ParamEnv<'gcx>,
-    fulfillment_cx: traits::FulfillmentContext<'tcx>,
     last_span: Span,
     body_id: ast::NodeId,
     reported_errors: FxHashSet<(Ty<'tcx>, Span)>,
+    constraints: MirTypeckRegionConstraints<'tcx>,
+}
+
+/// A collection of region constraints that must be satisfied for the
+/// program to be considered well-typed.
+#[derive(Default)]
+pub struct MirTypeckRegionConstraints<'tcx> {
+    /// In general, the type-checker is not responsible for enforcing
+    /// liveness constraints; this job falls to the region inferencer,
+    /// which performs a liveness analysis. However, in some limited
+    /// cases, the MIR type-checker creates temporary regions that do
+    /// not otherwise appear in the MIR -- in particular, the
+    /// late-bound regions that it instantiates at call-sites -- and
+    /// hence it must report on their liveness constraints.
+    pub liveness_set: Vec<(ty::Region<'tcx>, Location)>,
+
+    /// During the course of type-checking, we will accumulate region
+    /// constraints due to performing subtyping operations or solving
+    /// traits. These are accumulated into this vector for later use.
+    pub outlives_sets: Vec<OutlivesSet<'tcx>>,
+}
+
+/// Outlives relationships between regions and types created at a
+/// particular point within the control-flow graph.
+pub struct OutlivesSet<'tcx> {
+    /// The locations associated with these constraints.
+    pub locations: Locations,
+
+    /// Constraints generated. In terms of the NLL RFC, when you have
+    /// a constraint `R1: R2 @ P`, the data in there specifies things
+    /// like `R1: R2`.
+    pub data: RegionConstraintData<'tcx>,
+}
+
+#[derive(Copy, Clone, Debug)]
+pub struct Locations {
+    /// The location in the MIR that generated these constraints.
+    /// This is intended for error reporting and diagnosis; the
+    /// constraints may *take effect* at a distinct spot.
+    pub from_location: Location,
+
+    /// The constraints must be met at this location. In terms of the
+    /// NLL RFC, when you have a constraint `R1: R2 @ P`, this field
+    /// is the `P` value.
+    pub at_location: Location,
 }
 
 impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> {
-    fn new(infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
-           body_id: ast::NodeId,
-           param_env: ty::ParamEnv<'gcx>)
-           -> Self {
+    fn new(
+        infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
+        body_id: ast::NodeId,
+        param_env: ty::ParamEnv<'gcx>,
+    ) -> Self {
         TypeChecker {
             infcx,
-            fulfillment_cx: traits::FulfillmentContext::new(),
             last_span: DUMMY_SP,
             body_id,
             param_env,
             reported_errors: FxHashSet(),
+            constraints: MirTypeckRegionConstraints::default(),
         }
     }
 
@@ -364,61 +474,105 @@
         traits::ObligationCause::misc(span, self.body_id)
     }
 
-    pub fn register_infer_ok_obligations<T>(&mut self, infer_ok: InferOk<'tcx, T>) -> T {
-        for obligation in infer_ok.obligations {
-            self.fulfillment_cx.register_predicate_obligation(self.infcx, obligation);
+    fn fully_perform_op<OP, R>(
+        &mut self,
+        locations: Locations,
+        op: OP,
+    ) -> Result<R, TypeError<'tcx>>
+    where
+        OP: FnOnce(&mut Self) -> InferResult<'tcx, R>,
+    {
+        let mut fulfill_cx = FulfillmentContext::new();
+        let InferOk { value, obligations } = self.infcx.commit_if_ok(|_| op(self))?;
+        fulfill_cx.register_predicate_obligations(self.infcx, obligations);
+        if let Err(e) = fulfill_cx.select_all_or_error(self.infcx) {
+            span_mirbug!(self, "", "errors selecting obligation: {:?}", e);
         }
-        infer_ok.value
+
+        let data = self.infcx.take_and_reset_region_constraints();
+        if !data.is_empty() {
+            self.constraints
+                .outlives_sets
+                .push(OutlivesSet { locations, data });
+        }
+
+        Ok(value)
     }
 
-    fn sub_types(&mut self, sub: Ty<'tcx>, sup: Ty<'tcx>)
-                 -> infer::UnitResult<'tcx>
-    {
-        self.infcx.at(&self.misc(self.last_span), self.param_env)
-                  .sup(sup, sub)
-                  .map(|ok| self.register_infer_ok_obligations(ok))
+    fn sub_types(
+        &mut self,
+        sub: Ty<'tcx>,
+        sup: Ty<'tcx>,
+        locations: Locations,
+    ) -> UnitResult<'tcx> {
+        self.fully_perform_op(locations, |this| {
+            this.infcx
+                .at(&this.misc(this.last_span), this.param_env)
+                .sup(sup, sub)
+        })
     }
 
-    fn eq_types(&mut self, span: Span, a: Ty<'tcx>, b: Ty<'tcx>)
-                -> infer::UnitResult<'tcx>
-    {
-        self.infcx.at(&self.misc(span), self.param_env)
-                  .eq(b, a)
-                  .map(|ok| self.register_infer_ok_obligations(ok))
+    fn eq_types(
+        &mut self,
+        _span: Span,
+        a: Ty<'tcx>,
+        b: Ty<'tcx>,
+        locations: Locations,
+    ) -> UnitResult<'tcx> {
+        self.fully_perform_op(locations, |this| {
+            this.infcx
+                .at(&this.misc(this.last_span), this.param_env)
+                .eq(b, a)
+        })
     }
 
     fn tcx(&self) -> TyCtxt<'a, 'gcx, 'tcx> {
         self.infcx.tcx
     }
 
-    fn check_stmt(&mut self, mir: &Mir<'tcx>, stmt: &Statement<'tcx>) {
+    fn check_stmt(&mut self, mir: &Mir<'tcx>, stmt: &Statement<'tcx>, location: Location) {
         debug!("check_stmt: {:?}", stmt);
         let tcx = self.tcx();
         match stmt.kind {
             StatementKind::Assign(ref lv, ref rv) => {
                 let lv_ty = lv.ty(mir, tcx).to_ty(tcx);
                 let rv_ty = rv.ty(mir, tcx);
-                if let Err(terr) = self.sub_types(rv_ty, lv_ty) {
-                    span_mirbug!(self, stmt, "bad assignment ({:?} = {:?}): {:?}",
-                                 lv_ty, rv_ty, terr);
+                if let Err(terr) =
+                    self.sub_types(rv_ty, lv_ty, location.at_successor_within_block())
+                {
+                    span_mirbug!(
+                        self,
+                        stmt,
+                        "bad assignment ({:?} = {:?}): {:?}",
+                        lv_ty,
+                        rv_ty,
+                        terr
+                    );
                 }
             }
-            StatementKind::SetDiscriminant{ ref lvalue, variant_index } => {
+            StatementKind::SetDiscriminant {
+                ref lvalue,
+                variant_index,
+            } => {
                 let lvalue_type = lvalue.ty(mir, tcx).to_ty(tcx);
                 let adt = match lvalue_type.sty {
                     TypeVariants::TyAdt(adt, _) if adt.is_enum() => adt,
                     _ => {
-                        span_bug!(stmt.source_info.span,
-                                  "bad set discriminant ({:?} = {:?}): lhs is not an enum",
-                                  lvalue,
-                                  variant_index);
+                        span_bug!(
+                            stmt.source_info.span,
+                            "bad set discriminant ({:?} = {:?}): lhs is not an enum",
+                            lvalue,
+                            variant_index
+                        );
                     }
                 };
                 if variant_index >= adt.variants.len() {
-                     span_bug!(stmt.source_info.span,
-                               "bad set discriminant ({:?} = {:?}): value of of range",
-                               lvalue,
-                               variant_index);
+                    span_bug!(
+                        stmt.source_info.span,
+                        "bad set discriminant ({:?} = {:?}): value out of range",
+                        lvalue,
+                        variant_index
+                    );
                 };
             }
             StatementKind::StorageLive(_) |
@@ -430,9 +584,12 @@
         }
     }
 
-    fn check_terminator(&mut self,
-                        mir: &Mir<'tcx>,
-                        term: &Terminator<'tcx>) {
+    fn check_terminator(
+        &mut self,
+        mir: &Mir<'tcx>,
+        term: &Terminator<'tcx>,
+        term_location: Location,
+    ) {
         debug!("check_terminator: {:?}", term);
         let tcx = self.tcx();
         match term.kind {
@@ -446,33 +603,77 @@
                 // no checks needed for these
             }
 
-
             TerminatorKind::DropAndReplace {
                 ref location,
                 ref value,
-                ..
+                target,
+                unwind,
             } => {
                 let lv_ty = location.ty(mir, tcx).to_ty(tcx);
                 let rv_ty = value.ty(mir, tcx);
-                if let Err(terr) = self.sub_types(rv_ty, lv_ty) {
-                    span_mirbug!(self, term, "bad DropAndReplace ({:?} = {:?}): {:?}",
-                                 lv_ty, rv_ty, terr);
+
+                let locations = Locations {
+                    from_location: term_location,
+                    at_location: target.start_location(),
+                };
+                if let Err(terr) = self.sub_types(rv_ty, lv_ty, locations) {
+                    span_mirbug!(
+                        self,
+                        term,
+                        "bad DropAndReplace ({:?} = {:?}): {:?}",
+                        lv_ty,
+                        rv_ty,
+                        terr
+                    );
+                }
+
+                // Subtle: this assignment occurs at the start of
+                // *both* blocks, so we need to ensure that it holds
+                // at both locations.
+                if let Some(unwind) = unwind {
+                    let locations = Locations {
+                        from_location: term_location,
+                        at_location: unwind.start_location(),
+                    };
+                    if let Err(terr) = self.sub_types(rv_ty, lv_ty, locations) {
+                        span_mirbug!(
+                            self,
+                            term,
+                            "bad DropAndReplace ({:?} = {:?}): {:?}",
+                            lv_ty,
+                            rv_ty,
+                            terr
+                        );
+                    }
                 }
             }
-            TerminatorKind::SwitchInt { ref discr, switch_ty, .. } => {
+            TerminatorKind::SwitchInt {
+                ref discr,
+                switch_ty,
+                ..
+            } => {
                 let discr_ty = discr.ty(mir, tcx);
-                if let Err(terr) = self.sub_types(discr_ty, switch_ty) {
-                    span_mirbug!(self, term, "bad SwitchInt ({:?} on {:?}): {:?}",
-                                 switch_ty, discr_ty, terr);
+                if let Err(terr) = self.sub_types(discr_ty, switch_ty, term_location.at_self()) {
+                    span_mirbug!(
+                        self,
+                        term,
+                        "bad SwitchInt ({:?} on {:?}): {:?}",
+                        switch_ty,
+                        discr_ty,
+                        terr
+                    );
                 }
-                if !switch_ty.is_integral() && !switch_ty.is_char() &&
-                    !switch_ty.is_bool()
-                {
-                    span_mirbug!(self, term, "bad SwitchInt discr ty {:?}",switch_ty);
+                if !switch_ty.is_integral() && !switch_ty.is_char() && !switch_ty.is_bool() {
+                    span_mirbug!(self, term, "bad SwitchInt discr ty {:?}", switch_ty);
                 }
                 // FIXME: check the values
             }
-            TerminatorKind::Call { ref func, ref args, ref destination, .. } => {
+            TerminatorKind::Call {
+                ref func,
+                ref args,
+                ref destination,
+                ..
+            } => {
                 let func_ty = func.ty(mir, tcx);
                 debug!("check_terminator: call, func_ty={:?}", func_ty);
                 let sig = match func_ty.sty {
@@ -482,17 +683,36 @@
                         return;
                     }
                 };
-                let sig = tcx.erase_late_bound_regions(&sig);
-                let sig = self.normalize(&sig);
-                self.check_call_dest(mir, term, &sig, destination);
+                let (sig, map) = self.infcx.replace_late_bound_regions_with_fresh_var(
+                    term.source_info.span,
+                    LateBoundRegionConversionTime::FnCall,
+                    &sig,
+                );
+                let sig = self.normalize(&sig, term_location);
+                self.check_call_dest(mir, term, &sig, destination, term_location);
+
+                // The ordinary liveness rules will ensure that all
+                // regions in the type of the callee are live here. We
+                // then further constrain the late-bound regions that
+                // were instantiated at the call site to be live as
+                // well. The result is that all the input (and
+                // output) types in the signature must be live, since
+                // all the inputs that fed into it were live.
+                for &late_bound_region in map.values() {
+                    self.constraints
+                        .liveness_set
+                        .push((late_bound_region, term_location));
+                }
 
                 if self.is_box_free(func) {
-                    self.check_box_free_inputs(mir, term, &sig, args);
+                    self.check_box_free_inputs(mir, term, &sig, args, term_location);
                 } else {
-                    self.check_call_inputs(mir, term, &sig, args);
+                    self.check_call_inputs(mir, term, &sig, args, term_location);
                 }
             }
-            TerminatorKind::Assert { ref cond, ref msg, .. } => {
+            TerminatorKind::Assert {
+                ref cond, ref msg, ..
+            } => {
                 let cond_ty = cond.ty(mir, tcx);
                 if cond_ty != tcx.types.bool {
                     span_mirbug!(self, term, "bad Assert ({:?}, not bool", cond_ty);
@@ -512,13 +732,15 @@
                 match mir.yield_ty {
                     None => span_mirbug!(self, term, "yield in non-generator"),
                     Some(ty) => {
-                        if let Err(terr) = self.sub_types(value_ty, ty) {
-                            span_mirbug!(self,
+                        if let Err(terr) = self.sub_types(value_ty, ty, term_location.at_self()) {
+                            span_mirbug!(
+                                self,
                                 term,
                                 "type of yield value is {:?}, but the yield type is {:?}: {:?}",
                                 value_ty,
                                 ty,
-                                terr);
+                                terr
+                            );
                         }
                     }
                 }
@@ -526,46 +748,66 @@
         }
     }
 
-    fn check_call_dest(&mut self,
-                       mir: &Mir<'tcx>,
-                       term: &Terminator<'tcx>,
-                       sig: &ty::FnSig<'tcx>,
-                       destination: &Option<(Lvalue<'tcx>, BasicBlock)>) {
+    fn check_call_dest(
+        &mut self,
+        mir: &Mir<'tcx>,
+        term: &Terminator<'tcx>,
+        sig: &ty::FnSig<'tcx>,
+        destination: &Option<(Lvalue<'tcx>, BasicBlock)>,
+        term_location: Location,
+    ) {
         let tcx = self.tcx();
         match *destination {
-            Some((ref dest, _)) => {
+            Some((ref dest, target_block)) => {
                 let dest_ty = dest.ty(mir, tcx).to_ty(tcx);
-                if let Err(terr) = self.sub_types(sig.output(), dest_ty) {
-                    span_mirbug!(self, term,
-                                 "call dest mismatch ({:?} <- {:?}): {:?}",
-                                 dest_ty, sig.output(), terr);
+                let locations = Locations {
+                    from_location: term_location,
+                    at_location: target_block.start_location(),
+                };
+                if let Err(terr) = self.sub_types(sig.output(), dest_ty, locations) {
+                    span_mirbug!(
+                        self,
+                        term,
+                        "call dest mismatch ({:?} <- {:?}): {:?}",
+                        dest_ty,
+                        sig.output(),
+                        terr
+                    );
                 }
-            },
+            }
             None => {
                 // FIXME(canndrew): This is_never should probably be an is_uninhabited
                 if !sig.output().is_never() {
                     span_mirbug!(self, term, "call to converging function {:?} w/o dest", sig);
                 }
-            },
+            }
         }
     }
 
-    fn check_call_inputs(&mut self,
-                         mir: &Mir<'tcx>,
-                         term: &Terminator<'tcx>,
-                         sig: &ty::FnSig<'tcx>,
-                         args: &[Operand<'tcx>])
-    {
+    fn check_call_inputs(
+        &mut self,
+        mir: &Mir<'tcx>,
+        term: &Terminator<'tcx>,
+        sig: &ty::FnSig<'tcx>,
+        args: &[Operand<'tcx>],
+        term_location: Location,
+    ) {
         debug!("check_call_inputs({:?}, {:?})", sig, args);
-        if args.len() < sig.inputs().len() ||
-           (args.len() > sig.inputs().len() && !sig.variadic) {
+        if args.len() < sig.inputs().len() || (args.len() > sig.inputs().len() && !sig.variadic) {
             span_mirbug!(self, term, "call to {:?} with wrong # of args", sig);
         }
         for (n, (fn_arg, op_arg)) in sig.inputs().iter().zip(args).enumerate() {
             let op_arg_ty = op_arg.ty(mir, self.tcx());
-            if let Err(terr) = self.sub_types(op_arg_ty, fn_arg) {
-                span_mirbug!(self, term, "bad arg #{:?} ({:?} <- {:?}): {:?}",
-                             n, fn_arg, op_arg_ty, terr);
+            if let Err(terr) = self.sub_types(op_arg_ty, fn_arg, term_location.at_self()) {
+                span_mirbug!(
+                    self,
+                    term,
+                    "bad arg #{:?} ({:?} <- {:?}): {:?}",
+                    n,
+                    fn_arg,
+                    op_arg_ty,
+                    terr
+                );
             }
         }
     }
@@ -573,22 +815,29 @@
     fn is_box_free(&self, operand: &Operand<'tcx>) -> bool {
         match operand {
             &Operand::Constant(box Constant {
-                literal: Literal::Value {
-                    value: &ty::Const { val: ConstVal::Function(def_id, _), .. }, ..
-                }, ..
-            }) => {
-                Some(def_id) == self.tcx().lang_items().box_free_fn()
-            }
+                literal:
+                    Literal::Value {
+                        value:
+                            &ty::Const {
+                                val: ConstVal::Function(def_id, _),
+                                ..
+                            },
+                        ..
+                    },
+                ..
+            }) => Some(def_id) == self.tcx().lang_items().box_free_fn(),
             _ => false,
         }
     }
 
-    fn check_box_free_inputs(&mut self,
-                             mir: &Mir<'tcx>,
-                             term: &Terminator<'tcx>,
-                             sig: &ty::FnSig<'tcx>,
-                             args: &[Operand<'tcx>])
-    {
+    fn check_box_free_inputs(
+        &mut self,
+        mir: &Mir<'tcx>,
+        term: &Terminator<'tcx>,
+        sig: &ty::FnSig<'tcx>,
+        args: &[Operand<'tcx>],
+        term_location: Location,
+    ) {
         debug!("check_box_free_inputs");
 
         // box_free takes a Box as a pointer. Allow for that.
@@ -621,93 +870,108 @@
             }
         };
 
-        if let Err(terr) = self.sub_types(arg_ty, pointee_ty) {
-            span_mirbug!(self, term, "bad box_free arg ({:?} <- {:?}): {:?}",
-                         pointee_ty, arg_ty, terr);
+        if let Err(terr) = self.sub_types(arg_ty, pointee_ty, term_location.at_self()) {
+            span_mirbug!(
+                self,
+                term,
+                "bad box_free arg ({:?} <- {:?}): {:?}",
+                pointee_ty,
+                arg_ty,
+                terr
+            );
         }
     }
 
-    fn check_iscleanup(&mut self, mir: &Mir<'tcx>, block: &BasicBlockData<'tcx>)
-    {
-        let is_cleanup = block.is_cleanup;
-        self.last_span = block.terminator().source_info.span;
-        match block.terminator().kind {
-            TerminatorKind::Goto { target } =>
-                self.assert_iscleanup(mir, block, target, is_cleanup),
-            TerminatorKind::SwitchInt { ref targets, .. } => {
-                for target in targets {
-                    self.assert_iscleanup(mir, block, *target, is_cleanup);
-                }
+    fn check_iscleanup(&mut self, mir: &Mir<'tcx>, block_data: &BasicBlockData<'tcx>) {
+        let is_cleanup = block_data.is_cleanup;
+        self.last_span = block_data.terminator().source_info.span;
+        match block_data.terminator().kind {
+            TerminatorKind::Goto { target } => {
+                self.assert_iscleanup(mir, block_data, target, is_cleanup)
             }
-            TerminatorKind::Resume => {
-                if !is_cleanup {
-                    span_mirbug!(self, block, "resume on non-cleanup block!")
-                }
-            }
-            TerminatorKind::Return => {
-                if is_cleanup {
-                    span_mirbug!(self, block, "return on cleanup block")
-                }
-            }
-            TerminatorKind::GeneratorDrop { .. } => {
-                if is_cleanup {
-                    span_mirbug!(self, block, "generator_drop in cleanup block")
-                }
-            }
+            TerminatorKind::SwitchInt { ref targets, .. } => for target in targets {
+                self.assert_iscleanup(mir, block_data, *target, is_cleanup);
+            },
+            TerminatorKind::Resume => if !is_cleanup {
+                span_mirbug!(self, block_data, "resume on non-cleanup block!")
+            },
+            TerminatorKind::Return => if is_cleanup {
+                span_mirbug!(self, block_data, "return on cleanup block")
+            },
+            TerminatorKind::GeneratorDrop { .. } => if is_cleanup {
+                span_mirbug!(self, block_data, "generator_drop in cleanup block")
+            },
             TerminatorKind::Yield { resume, drop, .. } => {
                 if is_cleanup {
-                    span_mirbug!(self, block, "yield in cleanup block")
+                    span_mirbug!(self, block_data, "yield in cleanup block")
                 }
-                self.assert_iscleanup(mir, block, resume, is_cleanup);
+                self.assert_iscleanup(mir, block_data, resume, is_cleanup);
                 if let Some(drop) = drop {
-                    self.assert_iscleanup(mir, block, drop, is_cleanup);
+                    self.assert_iscleanup(mir, block_data, drop, is_cleanup);
                 }
             }
             TerminatorKind::Unreachable => {}
             TerminatorKind::Drop { target, unwind, .. } |
             TerminatorKind::DropAndReplace { target, unwind, .. } |
-            TerminatorKind::Assert { target, cleanup: unwind, .. } => {
-                self.assert_iscleanup(mir, block, target, is_cleanup);
+            TerminatorKind::Assert {
+                target,
+                cleanup: unwind,
+                ..
+            } => {
+                self.assert_iscleanup(mir, block_data, target, is_cleanup);
                 if let Some(unwind) = unwind {
                     if is_cleanup {
-                        span_mirbug!(self, block, "unwind on cleanup block")
+                        span_mirbug!(self, block_data, "unwind on cleanup block")
                     }
-                    self.assert_iscleanup(mir, block, unwind, true);
+                    self.assert_iscleanup(mir, block_data, unwind, true);
                 }
             }
-            TerminatorKind::Call { ref destination, cleanup, .. } => {
+            TerminatorKind::Call {
+                ref destination,
+                cleanup,
+                ..
+            } => {
                 if let &Some((_, target)) = destination {
-                    self.assert_iscleanup(mir, block, target, is_cleanup);
+                    self.assert_iscleanup(mir, block_data, target, is_cleanup);
                 }
                 if let Some(cleanup) = cleanup {
                     if is_cleanup {
-                        span_mirbug!(self, block, "cleanup on cleanup block")
+                        span_mirbug!(self, block_data, "cleanup on cleanup block")
                     }
-                    self.assert_iscleanup(mir, block, cleanup, true);
+                    self.assert_iscleanup(mir, block_data, cleanup, true);
                 }
             }
-            TerminatorKind::FalseEdges { real_target, ref imaginary_targets } => {
-                self.assert_iscleanup(mir, block, real_target, is_cleanup);
+            TerminatorKind::FalseEdges {
+                real_target,
+                ref imaginary_targets,
+            } => {
+                self.assert_iscleanup(mir, block_data, real_target, is_cleanup);
                 for target in imaginary_targets {
-                    self.assert_iscleanup(mir, block, *target, is_cleanup);
+                    self.assert_iscleanup(mir, block_data, *target, is_cleanup);
                 }
             }
         }
     }
 
-    fn assert_iscleanup(&mut self,
-                        mir: &Mir<'tcx>,
-                        ctxt: &fmt::Debug,
-                        bb: BasicBlock,
-                        iscleanuppad: bool)
-    {
+    fn assert_iscleanup(
+        &mut self,
+        mir: &Mir<'tcx>,
+        ctxt: &fmt::Debug,
+        bb: BasicBlock,
+        iscleanuppad: bool,
+    ) {
         if mir[bb].is_cleanup != iscleanuppad {
-            span_mirbug!(self, ctxt, "cleanuppad mismatch: {:?} should be {:?}",
-                         bb, iscleanuppad);
+            span_mirbug!(
+                self,
+                ctxt,
+                "cleanuppad mismatch: {:?} should be {:?}",
+                bb,
+                iscleanuppad
+            );
         }
     }
 
-    fn check_local(&mut self, mir: &Mir<'gcx>, local: Local, local_decl: &LocalDecl<'gcx>) {
+    fn check_local(&mut self, mir: &Mir<'tcx>, local: Local, local_decl: &LocalDecl<'tcx>) {
         match mir.local_kind(local) {
             LocalKind::ReturnPointer | LocalKind::Arg => {
                 // return values of normal functions are required to be
@@ -716,27 +980,38 @@
                 //
                 // Unbound parts of arguments were never required to be Sized
                 // - maybe we should make that a warning.
-                return
+                return;
             }
             LocalKind::Var | LocalKind::Temp => {}
         }
 
         let span = local_decl.source_info.span;
         let ty = local_decl.ty;
-        if !ty.is_sized(self.tcx().global_tcx(), self.param_env, span) {
+
+        // Erase the regions from `ty` to get a global type.  The
+        // `Sized` bound in no way depends on precise regions, so this
+        // shouldn't affect `is_sized`.
+        let gcx = self.tcx().global_tcx();
+        let erased_ty = gcx.lift(&self.tcx().erase_regions(&ty)).unwrap();
+        if !erased_ty.is_sized(gcx, self.param_env, span) {
             // in current MIR construction, all non-control-flow rvalue
             // expressions evaluate through `as_temp` or `into` a return
             // slot or local, so to find all unsized rvalues it is enough
             // to check all temps, return slots and locals.
             if let None = self.reported_errors.replace((ty, span)) {
-                span_err!(self.tcx().sess, span, E0161,
-                          "cannot move a value of type {0}: the size of {0} \
-                           cannot be statically determined", ty);
+                span_err!(
+                    self.tcx().sess,
+                    span,
+                    E0161,
+                    "cannot move a value of type {0}: the size of {0} \
+                     cannot be statically determined",
+                    ty
+                );
             }
         }
     }
 
-    fn typeck_mir(&mut self, mir: &Mir<'gcx>) {
+    fn typeck_mir(&mut self, mir: &Mir<'tcx>) {
         self.last_span = mir.span;
         debug!("run_on_mir: {:?}", mir.span);
 
@@ -744,56 +1019,42 @@
             self.check_local(mir, local, local_decl);
         }
 
-        for block in mir.basic_blocks() {
-            for stmt in &block.statements {
+        for (block, block_data) in mir.basic_blocks().iter_enumerated() {
+            let mut location = Location {
+                block,
+                statement_index: 0,
+            };
+            for stmt in &block_data.statements {
                 if stmt.source_info.span != DUMMY_SP {
                     self.last_span = stmt.source_info.span;
                 }
-                self.check_stmt(mir, stmt);
+                self.check_stmt(mir, stmt, location);
+                location.statement_index += 1;
             }
 
-            self.check_terminator(mir, block.terminator());
-            self.check_iscleanup(mir, block);
+            self.check_terminator(mir, block_data.terminator(), location);
+            self.check_iscleanup(mir, block_data);
         }
     }
 
-
-    fn normalize<T>(&mut self, value: &T) -> T
-        where T: fmt::Debug + TypeFoldable<'tcx>
+    fn normalize<T>(&mut self, value: &T, location: Location) -> T
+    where
+        T: fmt::Debug + TypeFoldable<'tcx>,
     {
-        let mut selcx = traits::SelectionContext::new(self.infcx);
-        let cause = traits::ObligationCause::misc(self.last_span, ast::CRATE_NODE_ID);
-        let traits::Normalized { value, obligations } =
-            traits::normalize(&mut selcx, self.param_env, cause, value);
-
-        debug!("normalize: value={:?} obligations={:?}",
-               value,
-               obligations);
-
-        let fulfill_cx = &mut self.fulfillment_cx;
-        for obligation in obligations {
-            fulfill_cx.register_predicate_obligation(self.infcx, obligation);
-        }
-
-        value
-    }
-
-    fn verify_obligations(&mut self, mir: &Mir<'tcx>) {
-        self.last_span = mir.span;
-        if let Err(e) = self.fulfillment_cx.select_all_or_error(self.infcx) {
-            span_mirbug!(self, "", "errors selecting obligation: {:?}",
-                         e);
-        }
+        self.fully_perform_op(location.at_self(), |this| {
+            let mut selcx = traits::SelectionContext::new(this.infcx);
+            let cause = traits::ObligationCause::misc(this.last_span, ast::CRATE_NODE_ID);
+            let traits::Normalized { value, obligations } =
+                traits::normalize(&mut selcx, this.param_env, cause, value);
+            Ok(InferOk { value, obligations })
+        }).unwrap()
     }
 }
 
 pub struct TypeckMir;
 
 impl MirPass for TypeckMir {
-    fn run_pass<'a, 'tcx>(&self,
-                          tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                          src: MirSource,
-                          mir: &mut Mir<'tcx>) {
+    fn run_pass<'a, 'tcx>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, src: MirSource, mir: &mut Mir<'tcx>) {
         let def_id = src.def_id;
         let id = tcx.hir.as_local_node_id(def_id).unwrap();
         debug!("run_pass: {:?}", def_id);
@@ -805,17 +1066,44 @@
         }
         let param_env = tcx.param_env(def_id);
         tcx.infer_ctxt().enter(|infcx| {
-            let mut checker = TypeChecker::new(&infcx, id, param_env);
-            {
-                let mut verifier = TypeVerifier::new(&mut checker, mir);
-                verifier.visit_mir(mir);
-                if verifier.errors_reported {
-                    // don't do further checks to avoid ICEs
-                    return;
-                }
-            }
-            checker.typeck_mir(mir);
-            checker.verify_obligations(mir);
+            let _region_constraint_sets = type_check(&infcx, id, param_env, mir);
+
+            // For verification purposes, we just ignore the resulting
+            // region constraint sets. Not our problem. =)
         });
     }
 }
+
+trait AtLocation {
+    /// Creates a `Locations` where `self` is both the from-location
+    /// and the at-location. This means that any required region
+    /// relationships must hold upon entering the statement/terminator
+    /// indicated by `self`. This is typically used when processing
+    /// "inputs" to the given location.
+    fn at_self(self) -> Locations;
+
+    /// Creates a `Locations` where `self` is the from-location and
+    /// its successor within the block is the at-location. This means
+    /// that any required region relationships must hold only upon
+    /// **exiting** the statement/terminator indicated by `self`. This
+    /// is used, for example, for an `lv = rv` statement: it
+    /// indicates that `typeof(rv) <: typeof(lv)` must hold as of the
+    /// **next** statement.
+    fn at_successor_within_block(self) -> Locations;
+}
+
+impl AtLocation for Location {
+    fn at_self(self) -> Locations {
+        Locations {
+            from_location: self,
+            at_location: self,
+        }
+    }
+
+    fn at_successor_within_block(self) -> Locations {
+        Locations {
+            from_location: self,
+            at_location: self.successor_within_block(),
+        }
+    }
+}
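The `AtLocation` helper above can be modeled with simplified stand-in types. The field names mirror the diff, but `Location`/`Locations` here are illustrative substitutes for the real MIR types, not rustc's definitions:

```rust
// Simplified stand-ins for the MIR `Location`/`Locations` types.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Location {
    block: usize,           // basic-block index, simplified
    statement_index: usize, // position within the block
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Locations {
    from_location: Location,
    at_location: Location,
}

impl Location {
    // Mirrors `Location::successor_within_block` in rustc_mir.
    fn successor_within_block(self) -> Location {
        Location { block: self.block, statement_index: self.statement_index + 1 }
    }
}

trait AtLocation {
    fn at_self(self) -> Locations;
    fn at_successor_within_block(self) -> Locations;
}

impl AtLocation for Location {
    // Required region relationships hold on *entry* to this point.
    fn at_self(self) -> Locations {
        Locations { from_location: self, at_location: self }
    }
    // Required region relationships hold only on *exit*, i.e. at the
    // next statement within the same block.
    fn at_successor_within_block(self) -> Locations {
        Locations { from_location: self, at_location: self.successor_within_block() }
    }
}

fn main() {
    let loc = Location { block: 0, statement_index: 3 };
    assert_eq!(loc.at_self().at_location, loc);
    assert_eq!(loc.at_successor_within_block().at_location.statement_index, 4);
}
```

This matches how the type checker threads `location` through `check_stmt`: an assignment's subtyping obligation is registered at the successor, everything else at the statement itself.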
diff --git a/src/librustc_mir/util/elaborate_drops.rs b/src/librustc_mir/util/elaborate_drops.rs
index 3b97720..1852712 100644
--- a/src/librustc_mir/util/elaborate_drops.rs
+++ b/src/librustc_mir/util/elaborate_drops.rs
@@ -384,7 +384,7 @@
                                   substs: &'tcx Substs<'tcx>)
                                   -> (BasicBlock, Unwind) {
         let (succ, unwind) = self.drop_ladder_bottom();
-        if adt.variants.len() == 1 {
+        if !adt.is_enum() {
             let fields = self.move_paths_for_fields(
                 self.lvalue,
                 self.path,
diff --git a/src/librustc_mir/util/graphviz.rs b/src/librustc_mir/util/graphviz.rs
index b3c7b4b..ea4495b 100644
--- a/src/librustc_mir/util/graphviz.rs
+++ b/src/librustc_mir/util/graphviz.rs
@@ -150,7 +150,7 @@
         write!(w, "{:?}: {}", Lvalue::Local(arg), escape(&mir.local_decls[arg].ty))?;
     }
 
-    write!(w, ") -&gt; {}", escape(mir.return_ty))?;
+    write!(w, ") -&gt; {}", escape(mir.return_ty()))?;
     write!(w, r#"<br align="left"/>"#)?;
 
     for local in mir.vars_and_temps_iter() {
diff --git a/src/librustc_mir/util/pretty.rs b/src/librustc_mir/util/pretty.rs
index 5dc7a32..7d9cae6 100644
--- a/src/librustc_mir/util/pretty.rs
+++ b/src/librustc_mir/util/pretty.rs
@@ -348,7 +348,7 @@
     let indented_retptr = format!("{}let mut {:?}: {};",
                                   INDENT,
                                   RETURN_POINTER,
-                                  mir.return_ty);
+                                  mir.local_decls[RETURN_POINTER].ty);
     writeln!(w, "{0:1$} // return pointer",
              indented_retptr,
              ALIGN)?;
@@ -392,13 +392,13 @@
                 write!(w, "{:?}: {}", Lvalue::Local(arg), mir.local_decls[arg].ty)?;
             }
 
-            write!(w, ") -> {}", mir.return_ty)
+            write!(w, ") -> {}", mir.return_ty())
         }
         (hir::BodyOwnerKind::Const, _) |
         (hir::BodyOwnerKind::Static(_), _) |
         (_, Some(_)) => {
             assert_eq!(mir.arg_count, 0);
-            write!(w, ": {} =", mir.return_ty)
+            write!(w, ": {} =", mir.return_ty())
         }
     }
 }
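The hunks above replace reads of a `mir.return_ty` field with a `mir.return_ty()` method. A minimal sketch of that accessor pattern, with hypothetical simplified types (the real `Mir` and `LocalDecl` carry far more state): the return type is derived from local 0, the return pointer, so it can never disagree with `local_decls`:

```rust
// Hypothetical, simplified mirror of the accessor change in this diff.
struct LocalDecl { ty: &'static str }
struct Mir { local_decls: Vec<LocalDecl> }

// Local 0 is the return pointer, as in rustc's MIR.
const RETURN_POINTER: usize = 0;

impl Mir {
    // Derived, not stored: reads the declared type of the return local.
    fn return_ty(&self) -> &'static str {
        self.local_decls[RETURN_POINTER].ty
    }
}

fn main() {
    let mir = Mir { local_decls: vec![LocalDecl { ty: "i32" }] };
    assert_eq!(mir.return_ty(), "i32");
}
```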
diff --git a/src/librustc_trans/abi.rs b/src/librustc_trans/abi.rs
index 6df40c3..5482804 100644
--- a/src/librustc_trans/abi.rs
+++ b/src/librustc_trans/abi.rs
@@ -11,7 +11,7 @@
 use llvm::{self, ValueRef, AttributePlace};
 use base;
 use builder::Builder;
-use common::{instance_ty, ty_fn_sig, type_is_fat_ptr, C_usize};
+use common::{instance_ty, ty_fn_sig, C_usize};
 use context::CrateContext;
 use cabi_x86;
 use cabi_x86_64;
@@ -30,31 +30,34 @@
 use cabi_nvptx;
 use cabi_nvptx64;
 use cabi_hexagon;
-use machine::llalign_of_min;
+use mir::lvalue::{Alignment, LvalueRef};
+use mir::operand::OperandValue;
 use type_::Type;
-use type_of;
+use type_of::{LayoutLlvmExt, PointerKind};
 
-use rustc::hir;
 use rustc::ty::{self, Ty};
-use rustc::ty::layout::{self, Layout, LayoutTyper, TyLayout, Size};
-use rustc_back::PanicStrategy;
+use rustc::ty::layout::{self, Align, Size, TyLayout};
+use rustc::ty::layout::{HasDataLayout, LayoutOf};
 
 use libc::c_uint;
-use std::cmp;
-use std::iter;
+use std::{cmp, iter};
 
 pub use syntax::abi::Abi;
 pub use rustc::ty::layout::{FAT_PTR_ADDR, FAT_PTR_EXTRA};
 
-#[derive(Clone, Copy, PartialEq, Debug)]
-enum ArgKind {
-    /// Pass the argument directly using the normal converted
-    /// LLVM type or by coercing to another specified type
-    Direct,
-    /// Pass the argument indirectly via a hidden pointer
-    Indirect,
-    /// Ignore the argument (useful for empty struct)
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+pub enum PassMode {
+    /// Ignore the argument (useful for empty struct).
     Ignore,
+    /// Pass the argument directly.
+    Direct(ArgAttributes),
+    /// Pass a pair's elements directly in two arguments.
+    Pair(ArgAttributes, ArgAttributes),
+    /// Pass the argument after casting it, to either
+    /// a single uniform or a pair of registers.
+    Cast(CastTarget),
+    /// Pass the argument indirectly via a hidden pointer.
+    Indirect(ArgAttributes),
 }
 
 // Hack to disable non_upper_case_globals only for the bitflags! and not for the rest
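One way to read the new `PassMode` enum is by how many LLVM-level arguments each variant consumes. This is an illustrative standalone model (attribute payloads dropped), consistent with the `store_fn_arg` match later in this diff, not rustc's actual definition:

```rust
// Simplified mirror of `PassMode`, without the attribute/cast payloads.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PassMode {
    Ignore,   // e.g. zero-sized types: nothing is passed
    Direct,   // one value in a register
    Pair,     // two values, e.g. a fat pointer's data + vtable/len
    Cast,     // one value after coercion to a uniform/pair register type
    Indirect, // one hidden pointer to a stack copy
}

// How many formal LLVM parameters a Rust-level argument occupies.
fn llvm_arg_count(mode: PassMode) -> usize {
    match mode {
        PassMode::Ignore => 0,
        PassMode::Pair => 2,
        PassMode::Direct | PassMode::Cast | PassMode::Indirect => 1,
    }
}

fn main() {
    assert_eq!(llvm_arg_count(PassMode::Ignore), 0);
    assert_eq!(llvm_arg_count(PassMode::Pair), 2);
    assert_eq!(llvm_arg_count(PassMode::Indirect), 1);
}
```

The `Pair` variant is what lets fat pointers (`&str`, `&dyn Trait`) be split into two scalar arguments instead of being passed through memory.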
@@ -96,20 +99,24 @@
 
 /// A compact representation of LLVM attributes (at least those relevant for this module)
 /// that can be manipulated without interacting with LLVM's Attribute machinery.
-#[derive(Copy, Clone, Debug, Default)]
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
 pub struct ArgAttributes {
     regular: ArgAttribute,
-    dereferenceable_bytes: u64,
+    pointee_size: Size,
+    pointee_align: Option<Align>
 }
 
 impl ArgAttributes {
-    pub fn set(&mut self, attr: ArgAttribute) -> &mut Self {
-        self.regular = self.regular | attr;
-        self
+    fn new() -> Self {
+        ArgAttributes {
+            regular: ArgAttribute::default(),
+            pointee_size: Size::from_bytes(0),
+            pointee_align: None,
+        }
     }
 
-    pub fn set_dereferenceable(&mut self, bytes: u64) -> &mut Self {
-        self.dereferenceable_bytes = bytes;
+    pub fn set(&mut self, attr: ArgAttribute) -> &mut Self {
+        self.regular = self.regular | attr;
         self
     }
 
@@ -118,24 +125,52 @@
     }
 
     pub fn apply_llfn(&self, idx: AttributePlace, llfn: ValueRef) {
+        let mut regular = self.regular;
         unsafe {
-            self.regular.for_each_kind(|attr| attr.apply_llfn(idx, llfn));
-            if self.dereferenceable_bytes != 0 {
-                llvm::LLVMRustAddDereferenceableAttr(llfn,
-                                                     idx.as_uint(),
-                                                     self.dereferenceable_bytes);
+            let deref = self.pointee_size.bytes();
+            if deref != 0 {
+                if regular.contains(ArgAttribute::NonNull) {
+                    llvm::LLVMRustAddDereferenceableAttr(llfn,
+                                                         idx.as_uint(),
+                                                         deref);
+                } else {
+                    llvm::LLVMRustAddDereferenceableOrNullAttr(llfn,
+                                                               idx.as_uint(),
+                                                               deref);
+                }
+                regular -= ArgAttribute::NonNull;
             }
+            if let Some(align) = self.pointee_align {
+                llvm::LLVMRustAddAlignmentAttr(llfn,
+                                               idx.as_uint(),
+                                               align.abi() as u32);
+            }
+            regular.for_each_kind(|attr| attr.apply_llfn(idx, llfn));
         }
     }
 
     pub fn apply_callsite(&self, idx: AttributePlace, callsite: ValueRef) {
+        let mut regular = self.regular;
         unsafe {
-            self.regular.for_each_kind(|attr| attr.apply_callsite(idx, callsite));
-            if self.dereferenceable_bytes != 0 {
-                llvm::LLVMRustAddDereferenceableCallSiteAttr(callsite,
-                                                             idx.as_uint(),
-                                                             self.dereferenceable_bytes);
+            let deref = self.pointee_size.bytes();
+            if deref != 0 {
+                if regular.contains(ArgAttribute::NonNull) {
+                    llvm::LLVMRustAddDereferenceableCallSiteAttr(callsite,
+                                                                 idx.as_uint(),
+                                                                 deref);
+                } else {
+                    llvm::LLVMRustAddDereferenceableOrNullCallSiteAttr(callsite,
+                                                                       idx.as_uint(),
+                                                                       deref);
+                }
+                regular -= ArgAttribute::NonNull;
             }
+            if let Some(align) = self.pointee_align {
+                llvm::LLVMRustAddAlignmentCallSiteAttr(callsite,
+                                                       idx.as_uint(),
+                                                       align.abi() as u32);
+            }
+            regular.for_each_kind(|attr| attr.apply_callsite(idx, callsite));
         }
     }
 }
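The branching in `apply_llfn`/`apply_callsite` above encodes an LLVM rule: `dereferenceable(n)` already implies non-null, so it is only emitted when the pointer is known `NonNull` (and `NonNull` is then dropped as redundant); otherwise the weaker `dereferenceable_or_null(n)` is used. A small sketch of just that decision, with hypothetical names standing in for the LLVM attribute calls:

```rust
// Models only the attribute-selection logic, not the LLVM FFI calls.
#[derive(Debug, PartialEq, Eq)]
enum DerefAttr {
    Dereferenceable(u64),       // implies non-null in LLVM
    DereferenceableOrNull(u64), // size guarantee without non-nullness
    None,                       // pointee_size of 0 emits nothing
}

fn choose_deref_attr(pointee_size: u64, nonnull: bool) -> DerefAttr {
    if pointee_size == 0 {
        DerefAttr::None
    } else if nonnull {
        DerefAttr::Dereferenceable(pointee_size)
    } else {
        DerefAttr::DereferenceableOrNull(pointee_size)
    }
}

fn main() {
    assert_eq!(choose_deref_attr(8, true), DerefAttr::Dereferenceable(8));
    assert_eq!(choose_deref_attr(8, false), DerefAttr::DereferenceableOrNull(8));
    assert_eq!(choose_deref_attr(0, true), DerefAttr::None);
}
```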
@@ -174,7 +209,32 @@
 }
 
 impl Reg {
-    fn llvm_type(&self, ccx: &CrateContext) -> Type {
+    pub fn align(&self, ccx: &CrateContext) -> Align {
+        let dl = ccx.data_layout();
+        match self.kind {
+            RegKind::Integer => {
+                match self.size.bits() {
+                    1 => dl.i1_align,
+                    2...8 => dl.i8_align,
+                    9...16 => dl.i16_align,
+                    17...32 => dl.i32_align,
+                    33...64 => dl.i64_align,
+                    65...128 => dl.i128_align,
+                    _ => bug!("unsupported integer: {:?}", self)
+                }
+            }
+            RegKind::Float => {
+                match self.size.bits() {
+                    32 => dl.f32_align,
+                    64 => dl.f64_align,
+                    _ => bug!("unsupported float: {:?}", self)
+                }
+            }
+            RegKind::Vector => dl.vector_align(self.size)
+        }
+    }
+
+    pub fn llvm_type(&self, ccx: &CrateContext) -> Type {
         match self.kind {
             RegKind::Integer => Type::ix(ccx, self.size.bits()),
             RegKind::Float => {
@@ -193,7 +253,7 @@
 
 /// An argument passed entirely registers with the
 /// same kind (e.g. HFA / HVA on PPC64 and AArch64).
-#[derive(Copy, Clone)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
 pub struct Uniform {
     pub unit: Reg,
 
@@ -216,7 +276,11 @@
 }
 
 impl Uniform {
-    fn llvm_type(&self, ccx: &CrateContext) -> Type {
+    pub fn align(&self, ccx: &CrateContext) -> Align {
+        self.unit.align(ccx)
+    }
+
+    pub fn llvm_type(&self, ccx: &CrateContext) -> Type {
         let llunit = self.unit.llvm_type(ccx);
 
         if self.total <= self.unit.size {
@@ -248,66 +312,59 @@
 
 impl<'tcx> LayoutExt<'tcx> for TyLayout<'tcx> {
     fn is_aggregate(&self) -> bool {
-        match *self.layout {
-            Layout::Scalar { .. } |
-            Layout::RawNullablePointer { .. } |
-            Layout::CEnum { .. } |
-            Layout::Vector { .. } => false,
-
-            Layout::Array { .. } |
-            Layout::FatPointer { .. } |
-            Layout::Univariant { .. } |
-            Layout::UntaggedUnion { .. } |
-            Layout::General { .. } |
-            Layout::StructWrappedNullablePointer { .. } => true
+        match self.abi {
+            layout::Abi::Uninhabited |
+            layout::Abi::Scalar(_) |
+            layout::Abi::Vector => false,
+            layout::Abi::ScalarPair(..) |
+            layout::Abi::Aggregate { .. } => true
         }
     }
 
     fn homogeneous_aggregate<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> Option<Reg> {
-        match *self.layout {
-            // The primitives for this algorithm.
-            Layout::Scalar { value, .. } |
-            Layout::RawNullablePointer { value, .. } => {
-                let kind = match value {
-                    layout::Int(_) |
+        match self.abi {
+            layout::Abi::Uninhabited => None,
+
+            // The primitive for this algorithm.
+            layout::Abi::Scalar(ref scalar) => {
+                let kind = match scalar.value {
+                    layout::Int(..) |
                     layout::Pointer => RegKind::Integer,
                     layout::F32 |
                     layout::F64 => RegKind::Float
                 };
                 Some(Reg {
                     kind,
-                    size: self.size(ccx)
+                    size: self.size
                 })
             }
 
-            Layout::CEnum { .. } => {
-                Some(Reg {
-                    kind: RegKind::Integer,
-                    size: self.size(ccx)
-                })
-            }
-
-            Layout::Vector { .. } => {
+            layout::Abi::Vector => {
                 Some(Reg {
                     kind: RegKind::Vector,
-                    size: self.size(ccx)
+                    size: self.size
                 })
             }
 
-            Layout::Array { count, .. } => {
-                if count > 0 {
-                    self.field(ccx, 0).homogeneous_aggregate(ccx)
-                } else {
-                    None
-                }
-            }
-
-            Layout::Univariant { ref variant, .. } => {
-                let mut unaligned_offset = Size::from_bytes(0);
+            layout::Abi::ScalarPair(..) |
+            layout::Abi::Aggregate { .. } => {
+                let mut total = Size::from_bytes(0);
                 let mut result = None;
 
-                for i in 0..self.field_count() {
-                    if unaligned_offset != variant.offsets[i] {
+                let is_union = match self.fields {
+                    layout::FieldPlacement::Array { count, .. } => {
+                        if count > 0 {
+                            return self.field(ccx, 0).homogeneous_aggregate(ccx);
+                        } else {
+                            return None;
+                        }
+                    }
+                    layout::FieldPlacement::Union(_) => true,
+                    layout::FieldPlacement::Arbitrary { .. } => false
+                };
+
+                for i in 0..self.fields.count() {
+                    if !is_union && total != self.fields.offset(i) {
                         return None;
                     }
 
@@ -328,65 +385,26 @@
                     }
 
                     // Keep track of the offset (without padding).
-                    let size = field.size(ccx);
-                    match unaligned_offset.checked_add(size, ccx) {
-                        Some(offset) => unaligned_offset = offset,
-                        None => return None
+                    let size = field.size;
+                    if is_union {
+                        total = cmp::max(total, size);
+                    } else {
+                        total += size;
                     }
                 }
 
                 // There needs to be no padding.
-                if unaligned_offset != self.size(ccx) {
+                if total != self.size {
                     None
                 } else {
                     result
                 }
             }
-
-            Layout::UntaggedUnion { .. } => {
-                let mut max = Size::from_bytes(0);
-                let mut result = None;
-
-                for i in 0..self.field_count() {
-                    let field = self.field(ccx, i);
-                    match (result, field.homogeneous_aggregate(ccx)) {
-                        // The field itself must be a homogeneous aggregate.
-                        (_, None) => return None,
-                        // If this is the first field, record the unit.
-                        (None, Some(unit)) => {
-                            result = Some(unit);
-                        }
-                        // For all following fields, the unit must be the same.
-                        (Some(prev_unit), Some(unit)) => {
-                            if prev_unit != unit {
-                                return None;
-                            }
-                        }
-                    }
-
-                    // Keep track of the offset (without padding).
-                    let size = field.size(ccx);
-                    if size > max {
-                        max = size;
-                    }
-                }
-
-                // There needs to be no padding.
-                if max != self.size(ccx) {
-                    None
-                } else {
-                    result
-                }
-            }
-
-            // Rust-specific types, which we can ignore for C ABIs.
-            Layout::FatPointer { .. } |
-            Layout::General { .. } |
-            Layout::StructWrappedNullablePointer { .. } => None
         }
     }
 }
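The rewritten `homogeneous_aggregate` above folds the old struct and union cases into one walk. A simplified standalone model of its core invariants (it elides the real offset checks and recursion into nested fields): every field must yield the same register kind, struct fields must pack with no padding, and union fields overlap so the total is the max field size:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum RegKind { Integer, Float, Vector }

// (size_in_bytes, kind) per field, in offset order. Returns the shared
// unit kind if the aggregate is homogeneous and padding-free, else None.
fn homogeneous_unit(
    fields: &[(u64, RegKind)],
    total_size: u64,
    is_union: bool,
) -> Option<RegKind> {
    let mut total = 0u64;
    let mut result = None;
    for &(size, kind) in fields {
        match result {
            None => result = Some(kind),
            Some(prev) if prev != kind => return None, // mixed kinds
            _ => {}
        }
        if is_union {
            total = total.max(size); // fields overlap at offset 0
        } else {
            total += size;           // fields must be contiguous
        }
    }
    // Any padding disqualifies the aggregate.
    if total != total_size { None } else { result }
}

fn main() {
    use RegKind::*;
    // struct { f32, f32 }: homogeneous float aggregate (HFA candidate).
    assert_eq!(homogeneous_unit(&[(4, Float), (4, Float)], 8, false), Some(Float));
    // Trailing padding breaks it.
    assert_eq!(homogeneous_unit(&[(4, Float), (4, Float)], 12, false), None);
    // Mixed kinds break it.
    assert_eq!(homogeneous_unit(&[(4, Integer), (4, Float)], 8, false), None);
    // union { f64, f32 }: max-sized field covers the union.
    assert_eq!(homogeneous_unit(&[(8, Float), (4, Float)], 8, true), Some(Float));
}
```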
 
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
 pub enum CastTarget {
     Uniform(Uniform),
     Pair(Reg, Reg)
@@ -405,7 +423,28 @@
 }
 
 impl CastTarget {
-    fn llvm_type(&self, ccx: &CrateContext) -> Type {
+    pub fn size(&self, ccx: &CrateContext) -> Size {
+        match *self {
+            CastTarget::Uniform(u) => u.total,
+            CastTarget::Pair(a, b) => {
+                (a.size.abi_align(a.align(ccx)) + b.size)
+                    .abi_align(self.align(ccx))
+            }
+        }
+    }
+
+    pub fn align(&self, ccx: &CrateContext) -> Align {
+        match *self {
+            CastTarget::Uniform(u) => u.align(ccx),
+            CastTarget::Pair(a, b) => {
+                ccx.data_layout().aggregate_align
+                    .max(a.align(ccx))
+                    .max(b.align(ccx))
+            }
+        }
+    }
+
+    pub fn llvm_type(&self, ccx: &CrateContext) -> Type {
         match *self {
             CastTarget::Uniform(u) => u.llvm_type(ccx),
             CastTarget::Pair(a, b) => {
@@ -418,131 +457,118 @@
     }
 }
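The new `CastTarget::size` for the `Pair` case does two rounding steps: pad the first register to its own alignment, append the second, then round the total up to the pair's alignment. The arithmetic can be sketched with plain integers (hypothetical free functions standing in for `Size::abi_align`):

```rust
// Round `x` up to a multiple of `align` (align must be a power of two),
// mirroring what `Size::abi_align` computes.
fn abi_align(x: u64, align: u64) -> u64 {
    debug_assert!(align.is_power_of_two());
    (x + align - 1) & !(align - 1)
}

// Size of a `CastTarget::Pair(a, b)`: `a` padded to its own alignment,
// then `b`, with the total rounded up to the pair's overall alignment.
fn pair_size(a_size: u64, a_align: u64, b_size: u64, pair_align: u64) -> u64 {
    abi_align(abi_align(a_size, a_align) + b_size, pair_align)
}

fn main() {
    // e.g. (i32, i64) with 8-byte pair alignment: 4 -> pad stays 4,
    // +8 = 12, rounded up to 16.
    assert_eq!(pair_size(4, 4, 8, 8), 16);
    // Two same-sized registers need no padding: 8 + 8 = 16.
    assert_eq!(pair_size(8, 8, 8, 8), 16);
}
```

This is why the pair's scratch `alloca` in `ArgType::store` uses `cast.align(ccx)`: the memcpy out of it must cover the fully padded size.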
 
-/// Information about how a specific C type
-/// should be passed to or returned from a function
-///
-/// This is borrowed from clang's ABIInfo.h
-#[derive(Clone, Copy, Debug)]
+/// Information about how to pass an argument to,
+/// or return a value from, a function, under some ABI.
+#[derive(Debug)]
 pub struct ArgType<'tcx> {
-    kind: ArgKind,
     pub layout: TyLayout<'tcx>,
-    /// Coerced LLVM Type
-    pub cast: Option<Type>,
-    /// Dummy argument, which is emitted before the real argument
-    pub pad: Option<Type>,
-    /// LLVM attributes of argument
-    pub attrs: ArgAttributes
+
+    /// Dummy argument, which is emitted before the real argument.
+    pub pad: Option<Reg>,
+
+    pub mode: PassMode,
 }
 
 impl<'a, 'tcx> ArgType<'tcx> {
     fn new(layout: TyLayout<'tcx>) -> ArgType<'tcx> {
         ArgType {
-            kind: ArgKind::Direct,
             layout,
-            cast: None,
             pad: None,
-            attrs: ArgAttributes::default()
+            mode: PassMode::Direct(ArgAttributes::new()),
         }
     }
 
-    pub fn make_indirect(&mut self, ccx: &CrateContext<'a, 'tcx>) {
-        assert_eq!(self.kind, ArgKind::Direct);
+    pub fn make_indirect(&mut self) {
+        assert_eq!(self.mode, PassMode::Direct(ArgAttributes::new()));
 
-        // Wipe old attributes, likely not valid through indirection.
-        self.attrs = ArgAttributes::default();
-
-        let llarg_sz = self.layout.size(ccx).bytes();
+        // Start with fresh attributes for the pointer.
+        let mut attrs = ArgAttributes::new();
 
         // For non-immediate arguments the callee gets its own copy of
         // the value on the stack, so there are no aliases. It's also
         // program-invisible so can't possibly capture
-        self.attrs.set(ArgAttribute::NoAlias)
-                  .set(ArgAttribute::NoCapture)
-                  .set_dereferenceable(llarg_sz);
+        attrs.set(ArgAttribute::NoAlias)
+             .set(ArgAttribute::NoCapture)
+             .set(ArgAttribute::NonNull);
+        attrs.pointee_size = self.layout.size;
+        // FIXME(eddyb) We should be doing this, but at least on
+        // i686-pc-windows-msvc, it results in wrong stack offsets.
+        // attrs.pointee_align = Some(self.layout.align);
 
-        self.kind = ArgKind::Indirect;
+        self.mode = PassMode::Indirect(attrs);
     }
 
-    pub fn ignore(&mut self) {
-        assert_eq!(self.kind, ArgKind::Direct);
-        self.kind = ArgKind::Ignore;
+    pub fn make_indirect_byval(&mut self) {
+        self.make_indirect();
+        match self.mode {
+            PassMode::Indirect(ref mut attrs) => {
+                attrs.set(ArgAttribute::ByVal);
+            }
+            _ => bug!()
+        }
     }
 
     pub fn extend_integer_width_to(&mut self, bits: u64) {
         // Only integers have signedness
-        let (i, signed) = match *self.layout {
-            Layout::Scalar { value, .. } => {
-                match value {
-                    layout::Int(i) => {
-                        if self.layout.ty.is_integral() {
-                            (i, self.layout.ty.is_signed())
+        if let layout::Abi::Scalar(ref scalar) = self.layout.abi {
+            if let layout::Int(i, signed) = scalar.value {
+                if i.size().bits() < bits {
+                    if let PassMode::Direct(ref mut attrs) = self.mode {
+                        attrs.set(if signed {
+                            ArgAttribute::SExt
                         } else {
-                            return;
-                        }
+                            ArgAttribute::ZExt
+                        });
                     }
-                    _ => return
                 }
             }
-
-            // Rust enum types that map onto C enums also need to follow
-            // the target ABI zero-/sign-extension rules.
-            Layout::CEnum { discr, signed, .. } => (discr, signed),
-
-            _ => return
-        };
-
-        if i.size().bits() < bits {
-            self.attrs.set(if signed {
-                ArgAttribute::SExt
-            } else {
-                ArgAttribute::ZExt
-            });
         }
     }
 
-    pub fn cast_to<T: Into<CastTarget>>(&mut self, ccx: &CrateContext, target: T) {
-        self.cast = Some(target.into().llvm_type(ccx));
+    pub fn cast_to<T: Into<CastTarget>>(&mut self, target: T) {
+        assert_eq!(self.mode, PassMode::Direct(ArgAttributes::new()));
+        self.mode = PassMode::Cast(target.into());
     }
 
-    pub fn pad_with(&mut self, ccx: &CrateContext, reg: Reg) {
-        self.pad = Some(reg.llvm_type(ccx));
+    pub fn pad_with(&mut self, reg: Reg) {
+        self.pad = Some(reg);
     }
 
     pub fn is_indirect(&self) -> bool {
-        self.kind == ArgKind::Indirect
+        match self.mode {
+            PassMode::Indirect(_) => true,
+            _ => false
+        }
     }
 
     pub fn is_ignore(&self) -> bool {
-        self.kind == ArgKind::Ignore
+        self.mode == PassMode::Ignore
     }
 
     /// Get the LLVM type for an lvalue of the original Rust type of
     /// this argument/return, i.e. the result of `type_of::type_of`.
     pub fn memory_ty(&self, ccx: &CrateContext<'a, 'tcx>) -> Type {
-        type_of::type_of(ccx, self.layout.ty)
+        self.layout.llvm_type(ccx)
     }
 
     /// Store a direct/indirect value described by this ArgType into a
     /// lvalue for the original Rust type of this argument/return.
     /// Can be used for both storing formal arguments into Rust variables
     /// or results of call/invoke instructions into their destinations.
-    pub fn store(&self, bcx: &Builder<'a, 'tcx>, mut val: ValueRef, dst: ValueRef) {
+    pub fn store(&self, bcx: &Builder<'a, 'tcx>, val: ValueRef, dst: LvalueRef<'tcx>) {
         if self.is_ignore() {
             return;
         }
         let ccx = bcx.ccx;
         if self.is_indirect() {
-            let llsz = C_usize(ccx, self.layout.size(ccx).bytes());
-            let llalign = self.layout.align(ccx).abi();
-            base::call_memcpy(bcx, dst, val, llsz, llalign as u32);
-        } else if let Some(ty) = self.cast {
+            OperandValue::Ref(val, Alignment::AbiAligned).store(bcx, dst)
+        } else if let PassMode::Cast(cast) = self.mode {
             // FIXME(eddyb): Figure out when the simpler Store is safe, clang
             // uses it for i16 -> {i8, i8}, but not for i24 -> {i8, i8, i8}.
             let can_store_through_cast_ptr = false;
             if can_store_through_cast_ptr {
-                let cast_dst = bcx.pointercast(dst, ty.ptr_to());
-                let llalign = self.layout.align(ccx).abi();
-                bcx.store(val, cast_dst, Some(llalign as u32));
+                let cast_dst = bcx.pointercast(dst.llval, cast.llvm_type(ccx).ptr_to());
+                bcx.store(val, cast_dst, Some(self.layout.align));
             } else {
                 // The actual return type is a struct, but the ABI
                 // adaptation code has cast it into some scalar type.  The
@@ -559,40 +585,45 @@
                 //   bitcasting to the struct type yields invalid cast errors.
 
                 // We instead thus allocate some scratch space...
-                let llscratch = bcx.alloca(ty, "abi_cast", None);
-                base::Lifetime::Start.call(bcx, llscratch);
+                let llscratch = bcx.alloca(cast.llvm_type(ccx), "abi_cast", cast.align(ccx));
+                let scratch_size = cast.size(ccx);
+                bcx.lifetime_start(llscratch, scratch_size);
 
                 // ...where we first store the value...
                 bcx.store(val, llscratch, None);
 
                 // ...and then memcpy it to the intended destination.
                 base::call_memcpy(bcx,
-                                  bcx.pointercast(dst, Type::i8p(ccx)),
+                                  bcx.pointercast(dst.llval, Type::i8p(ccx)),
                                   bcx.pointercast(llscratch, Type::i8p(ccx)),
-                                  C_usize(ccx, self.layout.size(ccx).bytes()),
-                                  cmp::min(self.layout.align(ccx).abi() as u32,
-                                           llalign_of_min(ccx, ty)));
+                                  C_usize(ccx, self.layout.size.bytes()),
+                                  self.layout.align.min(cast.align(ccx)));
 
-                base::Lifetime::End.call(bcx, llscratch);
+                bcx.lifetime_end(llscratch, scratch_size);
             }
         } else {
-            if self.layout.ty == ccx.tcx().types.bool {
-                val = bcx.zext(val, Type::i8(ccx));
-            }
-            bcx.store(val, dst, None);
+            OperandValue::Immediate(val).store(bcx, dst);
         }
     }
 
-    pub fn store_fn_arg(&self, bcx: &Builder<'a, 'tcx>, idx: &mut usize, dst: ValueRef) {
+    pub fn store_fn_arg(&self, bcx: &Builder<'a, 'tcx>, idx: &mut usize, dst: LvalueRef<'tcx>) {
         if self.pad.is_some() {
             *idx += 1;
         }
-        if self.is_ignore() {
-            return;
+        let mut next = || {
+            let val = llvm::get_param(bcx.llfn(), *idx as c_uint);
+            *idx += 1;
+            val
+        };
+        match self.mode {
+            PassMode::Ignore => {},
+            PassMode::Pair(..) => {
+                OperandValue::Pair(next(), next()).store(bcx, dst);
+            }
+            PassMode::Direct(_) | PassMode::Indirect(_) | PassMode::Cast(_) => {
+                self.store(bcx, next(), dst);
+            }
         }
-        let val = llvm::get_param(bcx.llfn(), *idx as c_uint);
-        *idx += 1;
-        self.store(bcx, val, dst);
     }
 }
 
@@ -601,7 +632,7 @@
 ///
 /// I will do my best to describe this structure, but these
 /// comments are reverse-engineered and may be inaccurate. -NDM
-#[derive(Clone, Debug)]
+#[derive(Debug)]
 pub struct FnType<'tcx> {
     /// The LLVM types of each argument.
     pub args: Vec<ArgType<'tcx>>,
@@ -620,14 +651,14 @@
         let fn_ty = instance_ty(ccx.tcx(), &instance);
         let sig = ty_fn_sig(ccx, fn_ty);
         let sig = ccx.tcx().erase_late_bound_regions_and_normalize(&sig);
-        Self::new(ccx, sig, &[])
+        FnType::new(ccx, sig, &[])
     }
 
     pub fn new(ccx: &CrateContext<'a, 'tcx>,
                sig: ty::FnSig<'tcx>,
                extra_args: &[Ty<'tcx>]) -> FnType<'tcx> {
         let mut fn_ty = FnType::unadjusted(ccx, sig, extra_args);
-        fn_ty.adjust_for_abi(ccx, sig);
+        fn_ty.adjust_for_abi(ccx, sig.abi);
         fn_ty
     }
 
@@ -636,8 +667,23 @@
                       extra_args: &[Ty<'tcx>]) -> FnType<'tcx> {
         let mut fn_ty = FnType::unadjusted(ccx, sig, extra_args);
         // Don't pass the vtable, it's not an argument of the virtual fn.
-        fn_ty.args[1].ignore();
-        fn_ty.adjust_for_abi(ccx, sig);
+        {
+            let self_arg = &mut fn_ty.args[0];
+            match self_arg.mode {
+                PassMode::Pair(data_ptr, _) => {
+                    self_arg.mode = PassMode::Direct(data_ptr);
+                }
+                _ => bug!("FnType::new_vtable: non-pair self {:?}", self_arg)
+            }
+
+            let pointee = self_arg.layout.ty.builtin_deref(true, ty::NoPreference)
+                .unwrap_or_else(|| {
+                    bug!("FnType::new_vtable: non-pointer self {:?}", self_arg)
+                }).ty;
+            let fat_ptr_ty = ccx.tcx().mk_mut_ptr(pointee);
+            self_arg.layout = ccx.layout_of(fat_ptr_ty).field(ccx, 0);
+        }
+        fn_ty.adjust_for_abi(ccx, sig.abi);
         fn_ty
     }
 
@@ -702,120 +748,113 @@
             _ => false
         };
 
-        let arg_of = |ty: Ty<'tcx>, is_return: bool| {
-            let mut arg = ArgType::new(ccx.layout_of(ty));
-            if ty.is_bool() {
-                arg.attrs.set(ArgAttribute::ZExt);
-            } else {
-                if arg.layout.size(ccx).bytes() == 0 {
-                    // For some forsaken reason, x86_64-pc-windows-gnu
-                    // doesn't ignore zero-sized struct arguments.
-                    // The same is true for s390x-unknown-linux-gnu.
-                    if is_return || rust_abi ||
-                       (!win_x64_gnu && !linux_s390x) {
-                        arg.ignore();
+        // Handle safe Rust thin and fat pointers.
+        let adjust_for_rust_scalar = |attrs: &mut ArgAttributes,
+                                      scalar: &layout::Scalar,
+                                      layout: TyLayout<'tcx>,
+                                      offset: Size,
+                                      is_return: bool| {
+            // Booleans are always an i1 that needs to be zero-extended.
+            if scalar.is_bool() {
+                attrs.set(ArgAttribute::ZExt);
+                return;
+            }
+
+            // Only pointer types handled below.
+            if scalar.value != layout::Pointer {
+                return;
+            }
+
+            if scalar.valid_range.start < scalar.valid_range.end {
+                if scalar.valid_range.start > 0 {
+                    attrs.set(ArgAttribute::NonNull);
+                }
+            }
+
+            if let Some(pointee) = layout.pointee_info_at(ccx, offset) {
+                if let Some(kind) = pointee.safe {
+                    attrs.pointee_size = pointee.size;
+                    attrs.pointee_align = Some(pointee.align);
+
+                    // HACK(eddyb) LLVM inserts `llvm.assume` calls when inlining functions
+                    // with align attributes, and those calls later block optimizations.
+                    if !is_return {
+                        attrs.pointee_align = None;
+                    }
+
+                    // `Box` pointer parameters never alias because ownership is transferred
+                    // `&mut` pointer parameters never alias other parameters,
+                    // or mutable global data
+                    //
+                    // `&T` where `T` contains no `UnsafeCell<U>` is immutable,
+                    // and can be marked as both `readonly` and `noalias`, as
+                    // LLVM's definition of `noalias` is based solely on memory
+                    // dependencies rather than pointer equality
+                    let no_alias = match kind {
+                        PointerKind::Shared => false,
+                        PointerKind::Frozen | PointerKind::UniqueOwned => true,
+                        PointerKind::UniqueBorrowed => !is_return
+                    };
+                    if no_alias {
+                        attrs.set(ArgAttribute::NoAlias);
+                    }
+
+                    if kind == PointerKind::Frozen && !is_return {
+                        attrs.set(ArgAttribute::ReadOnly);
                     }
                 }
             }
+        };
+
+        let arg_of = |ty: Ty<'tcx>, is_return: bool| {
+            let mut arg = ArgType::new(ccx.layout_of(ty));
+            if arg.layout.is_zst() {
+                // For some forsaken reason, x86_64-pc-windows-gnu
+                // doesn't ignore zero-sized struct arguments.
+                // The same is true for s390x-unknown-linux-gnu.
+                if is_return || rust_abi || (!win_x64_gnu && !linux_s390x) {
+                    arg.mode = PassMode::Ignore;
+                }
+            }
+
+            // FIXME(eddyb) other ABIs don't have logic for scalar pairs.
+            if !is_return && rust_abi {
+                if let layout::Abi::ScalarPair(ref a, ref b) = arg.layout.abi {
+                    let mut a_attrs = ArgAttributes::new();
+                    let mut b_attrs = ArgAttributes::new();
+                    adjust_for_rust_scalar(&mut a_attrs,
+                                           a,
+                                           arg.layout,
+                                           Size::from_bytes(0),
+                                           false);
+                    adjust_for_rust_scalar(&mut b_attrs,
+                                           b,
+                                           arg.layout,
+                                           a.value.size(ccx).abi_align(b.value.align(ccx)),
+                                           false);
+                    arg.mode = PassMode::Pair(a_attrs, b_attrs);
+                    return arg;
+                }
+            }
+
+            if let layout::Abi::Scalar(ref scalar) = arg.layout.abi {
+                if let PassMode::Direct(ref mut attrs) = arg.mode {
+                    adjust_for_rust_scalar(attrs,
+                                           scalar,
+                                           arg.layout,
+                                           Size::from_bytes(0),
+                                           is_return);
+                }
+            }
+
             arg
         };
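In the scalar-pair branch above, the offset passed for the second element is `a.value.size(ccx).abi_align(b.value.align(ccx))`: the first element's size rounded up to the second element's alignment. A small sketch of that arithmetic (the function name `pair_b_offset` is illustrative):

```rust
// Offset of the second element of a scalar pair: round the first
// element's size up to the second element's alignment (cf. abi_align).
fn pair_b_offset(a_size: u64, b_align: u64) -> u64 {
    (a_size + b_align - 1) / b_align * b_align
}

fn main() {
    // e.g. a pair of (u8, u32): the u32 starts at offset 4.
    assert_eq!(pair_b_offset(1, 4), 4);
    // e.g. a pair of two 64-bit words: the second starts at offset 8.
    assert_eq!(pair_b_offset(8, 8), 8);
}
```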
 
-        let ret_ty = sig.output();
-        let mut ret = arg_of(ret_ty, true);
-
-        if !type_is_fat_ptr(ccx, ret_ty) {
-            // The `noalias` attribute on the return value is useful to a
-            // function ptr caller.
-            if ret_ty.is_box() {
-                // `Box` pointer return values never alias because ownership
-                // is transferred
-                ret.attrs.set(ArgAttribute::NoAlias);
-            }
-
-            // We can also mark the return value as `dereferenceable` in certain cases
-            match ret_ty.sty {
-                // These are not really pointers but pairs, (pointer, len)
-                ty::TyRef(_, ty::TypeAndMut { ty, .. }) => {
-                    ret.attrs.set_dereferenceable(ccx.size_of(ty));
-                }
-                ty::TyAdt(def, _) if def.is_box() => {
-                    ret.attrs.set_dereferenceable(ccx.size_of(ret_ty.boxed_ty()));
-                }
-                _ => {}
-            }
-        }
-
-        let mut args = Vec::with_capacity(inputs.len() + extra_args.len());
-
-        // Handle safe Rust thin and fat pointers.
-        let rust_ptr_attrs = |ty: Ty<'tcx>, arg: &mut ArgType| match ty.sty {
-            // `Box` pointer parameters never alias because ownership is transferred
-            ty::TyAdt(def, _) if def.is_box() => {
-                arg.attrs.set(ArgAttribute::NoAlias);
-                Some(ty.boxed_ty())
-            }
-
-            ty::TyRef(_, mt) => {
-                // `&mut` pointer parameters never alias other parameters, or mutable global data
-                //
-                // `&T` where `T` contains no `UnsafeCell<U>` is immutable, and can be marked as
-                // both `readonly` and `noalias`, as LLVM's definition of `noalias` is based solely
-                // on memory dependencies rather than pointer equality
-                let is_freeze = ccx.shared().type_is_freeze(mt.ty);
-
-                let no_alias_is_safe =
-                    if ccx.shared().tcx().sess.opts.debugging_opts.mutable_noalias ||
-                       ccx.shared().tcx().sess.panic_strategy() == PanicStrategy::Abort {
-                        // Mutable references or immutable shared references
-                        mt.mutbl == hir::MutMutable || is_freeze
-                    } else {
-                        // Only immutable shared references
-                        mt.mutbl != hir::MutMutable && is_freeze
-                    };
-
-                if no_alias_is_safe {
-                    arg.attrs.set(ArgAttribute::NoAlias);
-                }
-
-                if mt.mutbl == hir::MutImmutable && is_freeze {
-                    arg.attrs.set(ArgAttribute::ReadOnly);
-                }
-
-                Some(mt.ty)
-            }
-            _ => None
-        };
-
-        for ty in inputs.iter().chain(extra_args.iter()) {
-            let mut arg = arg_of(ty, false);
-
-            if let ty::layout::FatPointer { .. } = *arg.layout {
-                let mut data = ArgType::new(arg.layout.field(ccx, 0));
-                let mut info = ArgType::new(arg.layout.field(ccx, 1));
-
-                if let Some(inner) = rust_ptr_attrs(ty, &mut data) {
-                    data.attrs.set(ArgAttribute::NonNull);
-                    if ccx.tcx().struct_tail(inner).is_trait() {
-                        // vtables can be safely marked non-null, readonly
-                        // and noalias.
-                        info.attrs.set(ArgAttribute::NonNull);
-                        info.attrs.set(ArgAttribute::ReadOnly);
-                        info.attrs.set(ArgAttribute::NoAlias);
-                    }
-                }
-                args.push(data);
-                args.push(info);
-            } else {
-                if let Some(inner) = rust_ptr_attrs(ty, &mut arg) {
-                    arg.attrs.set_dereferenceable(ccx.size_of(inner));
-                }
-                args.push(arg);
-            }
-        }
-
         FnType {
-            args,
-            ret,
+            ret: arg_of(sig.output(), true),
+            args: inputs.iter().chain(extra_args.iter()).map(|ty| {
+                arg_of(ty, false)
+            }).collect(),
             variadic: sig.variadic,
             cconv,
         }
@@ -823,63 +862,38 @@
 
     fn adjust_for_abi(&mut self,
                       ccx: &CrateContext<'a, 'tcx>,
-                      sig: ty::FnSig<'tcx>) {
-        let abi = sig.abi;
+                      abi: Abi) {
         if abi == Abi::Unadjusted { return }
 
         if abi == Abi::Rust || abi == Abi::RustCall ||
            abi == Abi::RustIntrinsic || abi == Abi::PlatformIntrinsic {
             let fixup = |arg: &mut ArgType<'tcx>| {
-                if !arg.layout.is_aggregate() {
-                    return;
+                if arg.is_ignore() { return; }
+
+                match arg.layout.abi {
+                    layout::Abi::Aggregate { .. } => {}
+                    _ => return
                 }
 
-                let size = arg.layout.size(ccx);
-
-                if let Some(unit) = arg.layout.homogeneous_aggregate(ccx) {
-                    // Replace newtypes with their inner-most type.
-                    if unit.size == size {
-                        // Needs a cast as we've unpacked a newtype.
-                        arg.cast_to(ccx, unit);
-                        return;
-                    }
-
-                    // Pairs of floats.
-                    if unit.kind == RegKind::Float {
-                        if unit.size.checked_mul(2, ccx) == Some(size) {
-                            // FIXME(eddyb) This should be using Uniform instead of a pair,
-                            // but the resulting [2 x float/double] breaks emscripten.
-                            // See https://github.com/kripken/emscripten-fastcomp/issues/178.
-                            arg.cast_to(ccx, CastTarget::Pair(unit, unit));
-                            return;
-                        }
-                    }
-                }
-
+                let size = arg.layout.size;
                 if size > layout::Pointer.size(ccx) {
-                    arg.make_indirect(ccx);
+                    arg.make_indirect();
                 } else {
                     // We want to pass small aggregates as immediates, but using
                     // a LLVM aggregate type for this leads to bad optimizations,
                     // so we pick an appropriately sized integer type instead.
-                    arg.cast_to(ccx, Reg {
+                    arg.cast_to(Reg {
                         kind: RegKind::Integer,
                         size
                     });
                 }
             };
-            // Fat pointers are returned by-value.
-            if !self.ret.is_ignore() {
-                if !type_is_fat_ptr(ccx, sig.output()) {
-                    fixup(&mut self.ret);
-                }
-            }
+            fixup(&mut self.ret);
             for arg in &mut self.args {
-                if arg.is_ignore() { continue; }
                 fixup(arg);
             }
-            if self.ret.is_indirect() {
-                self.ret.attrs.set(ArgAttribute::StructRet);
+            if let PassMode::Indirect(ref mut attrs) = self.ret.mode {
+                attrs.set(ArgAttribute::StructRet);
             }
             return;
         }
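The `fixup` closure above encodes a simple rule for the Rust ABIs: an aggregate larger than a pointer is passed indirectly, while a smaller one is cast to a single integer of the same size to avoid LLVM's poor handling of small aggregate types. A simplified model of that decision, assuming a 64-bit target (the `Pass` enum and `fixup` function here are illustrative, not the compiler's types):

```rust
// Sketch of the small-aggregate fixup: bigger than a pointer goes
// indirect, otherwise pass as one appropriately sized integer.
#[derive(Debug, PartialEq)]
enum Pass {
    Indirect,
    IntReg { bits: u64 },
}

fn fixup(agg_size_bytes: u64, pointer_size_bytes: u64) -> Pass {
    if agg_size_bytes > pointer_size_bytes {
        Pass::Indirect
    } else {
        Pass::IntReg { bits: agg_size_bytes * 8 }
    }
}

fn main() {
    // e.g. (u16, u16) on a 64-bit target: passed as an i32.
    assert_eq!(fixup(4, 8), Pass::IntReg { bits: 32 });
    // e.g. (u64, u64): too big for a register, passed indirectly.
    assert_eq!(fixup(16, 8), Pass::Indirect);
}
```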
@@ -896,7 +910,7 @@
             "x86_64" => if abi == Abi::SysV64 {
                 cabi_x86_64::compute_abi_info(ccx, self);
             } else if abi == Abi::Win64 || ccx.sess().target.target.options.is_like_windows {
-                cabi_x86_win64::compute_abi_info(ccx, self);
+                cabi_x86_win64::compute_abi_info(self);
             } else {
                 cabi_x86_64::compute_abi_info(ccx, self);
             },
@@ -909,51 +923,52 @@
             "s390x" => cabi_s390x::compute_abi_info(ccx, self),
             "asmjs" => cabi_asmjs::compute_abi_info(ccx, self),
             "wasm32" => cabi_asmjs::compute_abi_info(ccx, self),
-            "msp430" => cabi_msp430::compute_abi_info(ccx, self),
+            "msp430" => cabi_msp430::compute_abi_info(self),
             "sparc" => cabi_sparc::compute_abi_info(ccx, self),
             "sparc64" => cabi_sparc64::compute_abi_info(ccx, self),
-            "nvptx" => cabi_nvptx::compute_abi_info(ccx, self),
-            "nvptx64" => cabi_nvptx64::compute_abi_info(ccx, self),
-            "hexagon" => cabi_hexagon::compute_abi_info(ccx, self),
+            "nvptx" => cabi_nvptx::compute_abi_info(self),
+            "nvptx64" => cabi_nvptx64::compute_abi_info(self),
+            "hexagon" => cabi_hexagon::compute_abi_info(self),
             a => ccx.sess().fatal(&format!("unrecognized arch \"{}\" in target specification", a))
         }
 
-        if self.ret.is_indirect() {
-            self.ret.attrs.set(ArgAttribute::StructRet);
+        if let PassMode::Indirect(ref mut attrs) = self.ret.mode {
+            attrs.set(ArgAttribute::StructRet);
         }
     }
 
     pub fn llvm_type(&self, ccx: &CrateContext<'a, 'tcx>) -> Type {
         let mut llargument_tys = Vec::new();
 
-        let llreturn_ty = if self.ret.is_ignore() {
-            Type::void(ccx)
-        } else if self.ret.is_indirect() {
-            llargument_tys.push(self.ret.memory_ty(ccx).ptr_to());
-            Type::void(ccx)
-        } else {
-            self.ret.cast.unwrap_or_else(|| {
-                type_of::immediate_type_of(ccx, self.ret.layout.ty)
-            })
+        let llreturn_ty = match self.ret.mode {
+            PassMode::Ignore => Type::void(ccx),
+            PassMode::Direct(_) | PassMode::Pair(..) => {
+                self.ret.layout.immediate_llvm_type(ccx)
+            }
+            PassMode::Cast(cast) => cast.llvm_type(ccx),
+            PassMode::Indirect(_) => {
+                llargument_tys.push(self.ret.memory_ty(ccx).ptr_to());
+                Type::void(ccx)
+            }
         };
 
         for arg in &self.args {
-            if arg.is_ignore() {
-                continue;
-            }
             // add padding
             if let Some(ty) = arg.pad {
-                llargument_tys.push(ty);
+                llargument_tys.push(ty.llvm_type(ccx));
             }
 
-            let llarg_ty = if arg.is_indirect() {
-                arg.memory_ty(ccx).ptr_to()
-            } else {
-                arg.cast.unwrap_or_else(|| {
-                    type_of::immediate_type_of(ccx, arg.layout.ty)
-                })
+            let llarg_ty = match arg.mode {
+                PassMode::Ignore => continue,
+                PassMode::Direct(_) => arg.layout.immediate_llvm_type(ccx),
+                PassMode::Pair(..) => {
+                    llargument_tys.push(arg.layout.scalar_pair_element_llvm_type(ccx, 0));
+                    llargument_tys.push(arg.layout.scalar_pair_element_llvm_type(ccx, 1));
+                    continue;
+                }
+                PassMode::Cast(cast) => cast.llvm_type(ccx),
+                PassMode::Indirect(_) => arg.memory_ty(ccx).ptr_to(),
             };
-
             llargument_tys.push(llarg_ty);
         }
 
@@ -965,31 +980,61 @@
     }
 
     pub fn apply_attrs_llfn(&self, llfn: ValueRef) {
-        let mut i = if self.ret.is_indirect() { 1 } else { 0 };
-        if !self.ret.is_ignore() {
-            self.ret.attrs.apply_llfn(llvm::AttributePlace::Argument(i), llfn);
+        let mut i = 0;
+        let mut apply = |attrs: &ArgAttributes| {
+            attrs.apply_llfn(llvm::AttributePlace::Argument(i), llfn);
+            i += 1;
+        };
+        match self.ret.mode {
+            PassMode::Direct(ref attrs) => {
+                attrs.apply_llfn(llvm::AttributePlace::ReturnValue, llfn);
+            }
+            PassMode::Indirect(ref attrs) => apply(attrs),
+            _ => {}
         }
-        i += 1;
         for arg in &self.args {
-            if !arg.is_ignore() {
-                if arg.pad.is_some() { i += 1; }
-                arg.attrs.apply_llfn(llvm::AttributePlace::Argument(i), llfn);
-                i += 1;
+            if arg.pad.is_some() {
+                apply(&ArgAttributes::new());
+            }
+            match arg.mode {
+                PassMode::Ignore => {}
+                PassMode::Direct(ref attrs) |
+                PassMode::Indirect(ref attrs) => apply(attrs),
+                PassMode::Pair(ref a, ref b) => {
+                    apply(a);
+                    apply(b);
+                }
+                PassMode::Cast(_) => apply(&ArgAttributes::new()),
             }
         }
     }
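The rewritten `apply_attrs_llfn` replaces manual index bookkeeping with an `apply` closure that advances the LLVM argument index once per applied attribute set; note that a `Pair` argument occupies two slots and an `Ignore` argument none, and an indirect return consumes slot 0. A simplified model of that index walk (the local `PassMode` and `count_llvm_args` here are illustrative stand-ins, not the compiler's definitions):

```rust
// Sketch of the argument-index walk: each applied attribute set
// advances the LLVM argument index by one.
#[allow(dead_code)]
enum PassMode {
    Ignore,
    Direct,
    Pair,
    Cast,
    Indirect,
}

fn count_llvm_args(ret_indirect: bool, args: &[PassMode]) -> u32 {
    let mut i = if ret_indirect { 1 } else { 0 }; // sret pointer, if any
    for arg in args {
        i += match arg {
            PassMode::Ignore => 0, // not materialized at the LLVM level
            PassMode::Pair => 2,   // two scalar arguments
            _ => 1,
        };
    }
    i
}

fn main() {
    use PassMode::*;
    // Indirect return + a fat-pointer pair + one direct arg: 4 slots.
    assert_eq!(count_llvm_args(true, &[Pair, Direct]), 4);
    // A ZST (Ignore) contributes nothing.
    assert_eq!(count_llvm_args(false, &[Ignore, Direct]), 1);
}
```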
 
     pub fn apply_attrs_callsite(&self, callsite: ValueRef) {
-        let mut i = if self.ret.is_indirect() { 1 } else { 0 };
-        if !self.ret.is_ignore() {
-            self.ret.attrs.apply_callsite(llvm::AttributePlace::Argument(i), callsite);
+        let mut i = 0;
+        let mut apply = |attrs: &ArgAttributes| {
+            attrs.apply_callsite(llvm::AttributePlace::Argument(i), callsite);
+            i += 1;
+        };
+        match self.ret.mode {
+            PassMode::Direct(ref attrs) => {
+                attrs.apply_callsite(llvm::AttributePlace::ReturnValue, callsite);
+            }
+            PassMode::Indirect(ref attrs) => apply(attrs),
+            _ => {}
         }
-        i += 1;
         for arg in &self.args {
-            if !arg.is_ignore() {
-                if arg.pad.is_some() { i += 1; }
-                arg.attrs.apply_callsite(llvm::AttributePlace::Argument(i), callsite);
-                i += 1;
+            if arg.pad.is_some() {
+                apply(&ArgAttributes::new());
+            }
+            match arg.mode {
+                PassMode::Ignore => {}
+                PassMode::Direct(ref attrs) |
+                PassMode::Indirect(ref attrs) => apply(attrs),
+                PassMode::Pair(ref a, ref b) => {
+                    apply(a);
+                    apply(b);
+                }
+                PassMode::Cast(_) => apply(&ArgAttributes::new()),
             }
         }
 
@@ -998,7 +1043,3 @@
         }
     }
 }
-
-pub fn align_up_to(off: u64, a: u64) -> u64 {
-    (off + a - 1) / a * a
-}
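The removed `align_up_to` helper rounded an offset up to the next multiple of an alignment `a` (assumed non-zero). For reference, the deleted function and its behavior:

```rust
// Round `off` up to the nearest multiple of `a` (must be non-zero).
pub fn align_up_to(off: u64, a: u64) -> u64 {
    (off + a - 1) / a * a
}

fn main() {
    assert_eq!(align_up_to(5, 4), 8);
    assert_eq!(align_up_to(8, 4), 8); // already aligned: unchanged
    assert_eq!(align_up_to(0, 8), 0);
}
```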
diff --git a/src/librustc_trans/adt.rs b/src/librustc_trans/adt.rs
deleted file mode 100644
index b06f8e4..0000000
--- a/src/librustc_trans/adt.rs
+++ /dev/null
@@ -1,497 +0,0 @@
-// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! # Representation of Algebraic Data Types
-//!
-//! This module determines how to represent enums, structs, and tuples
-//! based on their monomorphized types; it is responsible both for
-//! choosing a representation and translating basic operations on
-//! values of those types.  (Note: exporting the representations for
-//! debuggers is handled in debuginfo.rs, not here.)
-//!
-//! Note that the interface treats everything as a general case of an
-//! enum, so structs/tuples/etc. have one pseudo-variant with
-//! discriminant 0; i.e., as if they were a univariant enum.
-//!
-//! Having everything in one place will enable improvements to data
-//! structure representation; possibilities include:
-//!
-//! - User-specified alignment (e.g., cacheline-aligning parts of
-//!   concurrently accessed data structures); LLVM can't represent this
-//!   directly, so we'd have to insert padding fields in any structure
-//!   that might contain one and adjust GEP indices accordingly.  See
-//!   issue #4578.
-//!
-//! - Store nested enums' discriminants in the same word.  Rather, if
-//!   some variants start with enums, and those enums representations
-//!   have unused alignment padding between discriminant and body, the
-//!   outer enum's discriminant can be stored there and those variants
-//!   can start at offset 0.  Kind of fancy, and might need work to
-//!   make copies of the inner enum type cooperate, but it could help
-//!   with `Option` or `Result` wrapped around another enum.
-//!
-//! - Tagged pointers would be neat, but given that any type can be
-//!   used unboxed and any field can have pointers (including mutable)
-//!   taken to it, implementing them for Rust seems difficult.
-
-use std;
-
-use llvm::{ValueRef, True, IntEQ, IntNE};
-use rustc::ty::{self, Ty};
-use rustc::ty::layout::{self, LayoutTyper};
-use common::*;
-use builder::Builder;
-use base;
-use machine;
-use monomorphize;
-use type_::Type;
-use type_of;
-
-use mir::lvalue::Alignment;
-
-/// Given an enum, struct, closure, or tuple, extracts fields.
-/// Treats closures as a struct with one variant.
-/// `empty_if_no_variants` is a switch to deal with empty enums.
-/// If true, `variant_index` is disregarded and an empty Vec returned in this case.
-pub fn compute_fields<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>,
-                                variant_index: usize,
-                                empty_if_no_variants: bool) -> Vec<Ty<'tcx>> {
-    match t.sty {
-        ty::TyAdt(ref def, _) if def.variants.len() == 0 && empty_if_no_variants => {
-            Vec::default()
-        },
-        ty::TyAdt(ref def, ref substs) => {
-            def.variants[variant_index].fields.iter().map(|f| {
-                monomorphize::field_ty(cx.tcx(), substs, f)
-            }).collect::<Vec<_>>()
-        },
-        ty::TyTuple(fields, _) => fields.to_vec(),
-        ty::TyClosure(def_id, substs) => {
-            if variant_index > 0 { bug!("{} is a closure, which only has one variant", t);}
-            substs.upvar_tys(def_id, cx.tcx()).collect()
-        },
-        ty::TyGenerator(def_id, substs, _) => {
-            if variant_index > 0 { bug!("{} is a generator, which only has one variant", t);}
-            substs.field_tys(def_id, cx.tcx()).map(|t| {
-                cx.tcx().fully_normalize_associated_types_in(&t)
-            }).collect()
-        },
-        _ => bug!("{} is not a type that can have fields.", t)
-    }
-}
-
-/// LLVM-level types are a little complicated.
-///
-/// C-like enums need to be actual ints, not wrapped in a struct,
-/// because that changes the ABI on some platforms (see issue #10308).
-///
-/// For nominal types, in some cases, we need to use LLVM named structs
-/// and fill in the actual contents in a second pass to prevent
-/// unbounded recursion; see also the comments in `trans::type_of`.
-pub fn type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> Type {
-    generic_type_of(cx, t, None)
-}
-
-pub fn incomplete_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
-                                    t: Ty<'tcx>, name: &str) -> Type {
-    generic_type_of(cx, t, Some(name))
-}
-
-pub fn finish_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
-                                t: Ty<'tcx>, llty: &mut Type) {
-    let l = cx.layout_of(t);
-    debug!("finish_type_of: {} with layout {:#?}", t, l);
-    match *l {
-        layout::CEnum { .. } | layout::General { .. }
-        | layout::UntaggedUnion { .. } | layout::RawNullablePointer { .. } => { }
-        layout::Univariant { ..}
-        | layout::StructWrappedNullablePointer { .. } => {
-            let (nonnull_variant_index, nonnull_variant, packed) = match *l {
-                layout::Univariant { ref variant, .. } => (0, variant, variant.packed),
-                layout::StructWrappedNullablePointer { nndiscr, ref nonnull, .. } =>
-                    (nndiscr, nonnull, nonnull.packed),
-                _ => unreachable!()
-            };
-            let fields = compute_fields(cx, t, nonnull_variant_index as usize, true);
-            llty.set_struct_body(&struct_llfields(cx, &fields, nonnull_variant),
-                                 packed)
-        },
-        _ => bug!("This function cannot handle {} with layout {:#?}", t, l)
-    }
-}
-
-fn generic_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
-                             t: Ty<'tcx>,
-                             name: Option<&str>) -> Type {
-    let l = cx.layout_of(t);
-    debug!("adt::generic_type_of t: {:?} name: {:?}", t, name);
-    match *l {
-        layout::CEnum { discr, .. } => Type::from_integer(cx, discr),
-        layout::RawNullablePointer { nndiscr, .. } => {
-            let (def, substs) = match t.sty {
-                ty::TyAdt(d, s) => (d, s),
-                _ => bug!("{} is not an ADT", t)
-            };
-            let nnty = monomorphize::field_ty(cx.tcx(), substs,
-                &def.variants[nndiscr as usize].fields[0]);
-            if let layout::Scalar { value: layout::Pointer, .. } = *cx.layout_of(nnty) {
-                Type::i8p(cx)
-            } else {
-                type_of::type_of(cx, nnty)
-            }
-        }
-        layout::StructWrappedNullablePointer { nndiscr, ref nonnull, .. } => {
-            let fields = compute_fields(cx, t, nndiscr as usize, false);
-            match name {
-                None => {
-                    Type::struct_(cx, &struct_llfields(cx, &fields, nonnull),
-                                  nonnull.packed)
-                }
-                Some(name) => {
-                    Type::named_struct(cx, name)
-                }
-            }
-        }
-        layout::Univariant { ref variant, .. } => {
-            // Note that this case also handles empty enums.
-            // Thus the true as the final parameter here.
-            let fields = compute_fields(cx, t, 0, true);
-            match name {
-                None => {
-                    let fields = struct_llfields(cx, &fields, &variant);
-                    Type::struct_(cx, &fields, variant.packed)
-                }
-                Some(name) => {
-                    // Hypothesis: named_struct's can never need a
-                    // drop flag. (... needs validation.)
-                    Type::named_struct(cx, name)
-                }
-            }
-        }
-        layout::UntaggedUnion { ref variants, .. }=> {
-            // Use alignment-sized ints to fill all the union storage.
-            let size = variants.stride().bytes();
-            let align = variants.align.abi();
-            let fill = union_fill(cx, size, align);
-            match name {
-                None => {
-                    Type::struct_(cx, &[fill], variants.packed)
-                }
-                Some(name) => {
-                    let mut llty = Type::named_struct(cx, name);
-                    llty.set_struct_body(&[fill], variants.packed);
-                    llty
-                }
-            }
-        }
-        layout::General { discr, size, align, primitive_align, .. } => {
-            // We need a representation that has:
-            // * The alignment of the most-aligned field
-            // * The size of the largest variant (rounded up to that alignment)
-            // * No alignment padding anywhere any variant has actual data
-            //   (currently matters only for enums small enough to be immediate)
-            // * The discriminant in an obvious place.
-            //
-            // So we start with the discriminant, pad it up to the alignment with
-            // more of its own type, then use alignment-sized ints to get the rest
-            // of the size.
-            let size = size.bytes();
-            let align = align.abi();
-            let primitive_align = primitive_align.abi();
-            assert!(align <= std::u32::MAX as u64);
-            let discr_ty = Type::from_integer(cx, discr);
-            let discr_size = discr.size().bytes();
-            let padded_discr_size = roundup(discr_size, align as u32);
-            let variant_part_size = size-padded_discr_size;
-            let variant_fill = union_fill(cx, variant_part_size, primitive_align);
-
-            assert_eq!(machine::llalign_of_min(cx, variant_fill), primitive_align as u32);
-            assert_eq!(padded_discr_size % discr_size, 0); // Ensure discr_ty can fill pad evenly
-            let fields: Vec<Type> =
-                [discr_ty,
-                 Type::array(&discr_ty, (padded_discr_size - discr_size)/discr_size),
-                 variant_fill].iter().cloned().collect();
-            match name {
-                None => {
-                    Type::struct_(cx, &fields, false)
-                }
-                Some(name) => {
-                    let mut llty = Type::named_struct(cx, name);
-                    llty.set_struct_body(&fields, false);
-                    llty
-                }
-            }
-        }
-        _ => bug!("Unsupported type {} represented as {:#?}", t, l)
-    }
-}
-
-fn union_fill(cx: &CrateContext, size: u64, align: u64) -> Type {
-    assert_eq!(size%align, 0);
-    assert_eq!(align.count_ones(), 1, "Alignment must be a power of 2. Got {}", align);
-    let align_units = size/align;
-    let layout_align = layout::Align::from_bytes(align, align).unwrap();
-    if let Some(ity) = layout::Integer::for_abi_align(cx, layout_align) {
-        Type::array(&Type::from_integer(cx, ity), align_units)
-    } else {
-        Type::array(&Type::vector(&Type::i32(cx), align/4),
-                    align_units)
-    }
-}
-
-
-// Double index to account for padding (FieldPath already uses `Struct::memory_index`)
-fn struct_llfields_path(discrfield: &layout::FieldPath) -> Vec<usize> {
-    discrfield.iter().map(|&i| (i as usize) << 1).collect::<Vec<_>>()
-}
-
-
-// Lookup `Struct::memory_index` and double it to account for padding
-pub fn struct_llfields_index(variant: &layout::Struct, index: usize) -> usize {
-    (variant.memory_index[index] as usize) << 1
-}
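The doubling in the removed `struct_llfields_index` follows from how `struct_llfields` (below) emitted an `i8`-array padding slot alongside every field, so the LLVM field index of source field `i` was its memory-order position shifted left by one. A sketch of that mapping (the `llvm_field_index` name is illustrative):

```rust
// With one padding slot interleaved per field, the LLVM struct index
// of source field `i` is its memory-order position doubled.
fn llvm_field_index(memory_index: &[u32], i: usize) -> usize {
    (memory_index[i] as usize) << 1
}

fn main() {
    // Suppose layout reordered field 1 to come first in memory.
    let memory_index = [1u32, 0, 2];
    assert_eq!(llvm_field_index(&memory_index, 1), 0);
    assert_eq!(llvm_field_index(&memory_index, 0), 2);
    assert_eq!(llvm_field_index(&memory_index, 2), 4);
}
```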
-
-
-pub fn struct_llfields<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, field_tys: &Vec<Ty<'tcx>>,
-                             variant: &layout::Struct) -> Vec<Type> {
-    debug!("struct_llfields: variant: {:?}", variant);
-    let mut first_field = true;
-    let mut min_offset = 0;
-    let mut result: Vec<Type> = Vec::with_capacity(field_tys.len() * 2);
-    let field_iter = variant.field_index_by_increasing_offset().map(|i| {
-        (i, field_tys[i as usize], variant.offsets[i as usize].bytes()) });
-    for (index, ty, target_offset) in field_iter {
-        if first_field {
-            debug!("struct_llfields: {} ty: {} min_offset: {} target_offset: {}",
-                index, ty, min_offset, target_offset);
-            first_field = false;
-        } else {
-            assert!(target_offset >= min_offset);
-            let padding_bytes = if variant.packed { 0 } else { target_offset - min_offset };
-            result.push(Type::array(&Type::i8(cx), padding_bytes));
-            debug!("struct_llfields: {} ty: {} pad_bytes: {} min_offset: {} target_offset: {}",
-                index, ty, padding_bytes, min_offset, target_offset);
-        }
-        let llty = type_of::in_memory_type_of(cx, ty);
-        result.push(llty);
-        let layout = cx.layout_of(ty);
-        let target_size = layout.size(&cx.tcx().data_layout).bytes();
-        min_offset = target_offset + target_size;
-    }
-    if variant.sized && !field_tys.is_empty() {
-        if variant.stride().bytes() < min_offset {
-            bug!("variant: {:?} stride: {} min_offset: {}", variant, variant.stride().bytes(),
-            min_offset);
-        }
-        let padding_bytes = variant.stride().bytes() - min_offset;
-        debug!("struct_llfields: pad_bytes: {} min_offset: {} min_size: {} stride: {}\n",
-               padding_bytes, min_offset, variant.min_size.bytes(), variant.stride().bytes());
-        result.push(Type::array(&Type::i8(cx), padding_bytes));
-        assert!(result.len() == (field_tys.len() * 2));
-    } else {
-        debug!("struct_llfields: min_offset: {} min_size: {} stride: {}\n",
-               min_offset, variant.min_size.bytes(), variant.stride().bytes());
-    }
-
-    result
-}
-
-pub fn is_discr_signed<'tcx>(l: &layout::Layout) -> bool {
-    match *l {
-        layout::CEnum { signed, .. }=> signed,
-        _ => false,
-    }
-}
-
-/// Obtain the actual discriminant of a value.
-pub fn trans_get_discr<'a, 'tcx>(
-    bcx: &Builder<'a, 'tcx>,
-    t: Ty<'tcx>,
-    scrutinee: ValueRef,
-    alignment: Alignment,
-    cast_to: Option<Type>,
-    range_assert: bool
-) -> ValueRef {
-    debug!("trans_get_discr t: {:?}", t);
-    let l = bcx.ccx.layout_of(t);
-
-    let val = match *l {
-        layout::CEnum { discr, min, max, .. } => {
-            load_discr(bcx, discr, scrutinee, alignment, min, max, range_assert)
-        }
-        layout::General { discr, ref variants, .. } => {
-            let ptr = bcx.struct_gep(scrutinee, 0);
-            load_discr(bcx, discr, ptr, alignment,
-                       0, variants.len() as u64 - 1,
-                       range_assert)
-        }
-        layout::Univariant { .. } | layout::UntaggedUnion { .. } => C_u8(bcx.ccx, 0),
-        layout::RawNullablePointer { nndiscr, .. } => {
-            let cmp = if nndiscr == 0 { IntEQ } else { IntNE };
-            let discr = bcx.load(scrutinee, alignment.to_align());
-            bcx.icmp(cmp, discr, C_null(val_ty(discr)))
-        }
-        layout::StructWrappedNullablePointer { nndiscr, ref discrfield, .. } => {
-            struct_wrapped_nullable_bitdiscr(bcx, nndiscr, discrfield, scrutinee, alignment)
-        },
-        _ => bug!("{} is not an enum", t)
-    };
-    match cast_to {
-        None => val,
-        Some(llty) => bcx.intcast(val, llty, is_discr_signed(&l))
-    }
-}
-
-fn struct_wrapped_nullable_bitdiscr(
-    bcx: &Builder,
-    nndiscr: u64,
-    discrfield: &layout::FieldPath,
-    scrutinee: ValueRef,
-    alignment: Alignment,
-) -> ValueRef {
-    let path = struct_llfields_path(discrfield);
-    let llptrptr = bcx.gepi(scrutinee, &path);
-    let llptr = bcx.load(llptrptr, alignment.to_align());
-    let cmp = if nndiscr == 0 { IntEQ } else { IntNE };
-    bcx.icmp(cmp, llptr, C_null(val_ty(llptr)))
-}
-
-/// Helper for cases where the discriminant is simply loaded.
-fn load_discr(bcx: &Builder, ity: layout::Integer, ptr: ValueRef,
-              alignment: Alignment, min: u64, max: u64,
-              range_assert: bool)
-    -> ValueRef {
-    let llty = Type::from_integer(bcx.ccx, ity);
-    assert_eq!(val_ty(ptr), llty.ptr_to());
-    let bits = ity.size().bits();
-    assert!(bits <= 64);
-    let bits = bits as usize;
-    let mask = !0u64 >> (64 - bits);
-    // For a (max) discr of -1, max will be `-1 as usize`, which overflows.
-    // However, that is fine here (it would still represent the full range),
-    if max.wrapping_add(1) & mask == min & mask || !range_assert {
-        // i.e., if the range is everything.  The lo==hi case would be
-        // rejected by the LLVM verifier (it would mean either an
-        // empty set, which is impossible, or the entire range of the
-        // type, which is pointless).
-        bcx.load(ptr, alignment.to_align())
-    } else {
-        // llvm::ConstantRange can deal with ranges that wrap around,
-        // so an overflow on (max + 1) is fine.
-        bcx.load_range_assert(ptr, min, max.wrapping_add(1), /* signed: */ True,
-                              alignment.to_align())
-    }
-}
-
-/// Set the discriminant for a new value of the given case of the given
-/// representation.
-pub fn trans_set_discr<'a, 'tcx>(bcx: &Builder<'a, 'tcx>, t: Ty<'tcx>, val: ValueRef, to: u64) {
-    let l = bcx.ccx.layout_of(t);
-    match *l {
-        layout::CEnum{ discr, min, max, .. } => {
-            assert_discr_in_range(min, max, to);
-            bcx.store(C_int(Type::from_integer(bcx.ccx, discr), to as i64),
-                  val, None);
-        }
-        layout::General{ discr, .. } => {
-            bcx.store(C_int(Type::from_integer(bcx.ccx, discr), to as i64),
-                  bcx.struct_gep(val, 0), None);
-        }
-        layout::Univariant { .. }
-        | layout::UntaggedUnion { .. }
-        | layout::Vector { .. } => {
-            assert_eq!(to, 0);
-        }
-        layout::RawNullablePointer { nndiscr, .. } => {
-            if to != nndiscr {
-                let llptrty = val_ty(val).element_type();
-                bcx.store(C_null(llptrty), val, None);
-            }
-        }
-        layout::StructWrappedNullablePointer { nndiscr, ref discrfield, ref nonnull, .. } => {
-            if to != nndiscr {
-                if target_sets_discr_via_memset(bcx) {
-                    // Issue #34427: As workaround for LLVM bug on
-                    // ARM, use memset of 0 on whole struct rather
-                    // than storing null to single target field.
-                    let llptr = bcx.pointercast(val, Type::i8(bcx.ccx).ptr_to());
-                    let fill_byte = C_u8(bcx.ccx, 0);
-                    let size = C_usize(bcx.ccx, nonnull.stride().bytes());
-                    let align = C_i32(bcx.ccx, nonnull.align.abi() as i32);
-                    base::call_memset(bcx, llptr, fill_byte, size, align, false);
-                } else {
-                    let path = struct_llfields_path(discrfield);
-                    let llptrptr = bcx.gepi(val, &path);
-                    let llptrty = val_ty(llptrptr).element_type();
-                    bcx.store(C_null(llptrty), llptrptr, None);
-                }
-            }
-        }
-        _ => bug!("Cannot handle {} represented as {:#?}", t, l)
-    }
-}
-
-fn target_sets_discr_via_memset<'a, 'tcx>(bcx: &Builder<'a, 'tcx>) -> bool {
-    bcx.sess().target.target.arch == "arm" || bcx.sess().target.target.arch == "aarch64"
-}
-
-pub fn assert_discr_in_range<D: PartialOrd>(min: D, max: D, discr: D) {
-    if min <= max {
-        assert!(min <= discr && discr <= max)
-    } else {
-        assert!(min <= discr || discr <= max)
-    }
-}
-
-// FIXME this utility routine should be somewhere more general
-#[inline]
-fn roundup(x: u64, a: u32) -> u64 { let a = a as u64; ((x + (a - 1)) / a) * a }
-
-/// Extract a field of a constant value, as appropriate for its
-/// representation.
-///
-/// (Not to be confused with `common::const_get_elt`, which operates on
-/// raw LLVM-level structs and arrays.)
-pub fn const_get_field<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>,
-                       val: ValueRef,
-                       ix: usize) -> ValueRef {
-    let l = ccx.layout_of(t);
-    match *l {
-        layout::CEnum { .. } => bug!("element access in C-like enum const"),
-        layout::Univariant { ref variant, .. } => {
-            const_struct_field(val, variant.memory_index[ix] as usize)
-        }
-        layout::Vector { .. } => const_struct_field(val, ix),
-        layout::UntaggedUnion { .. } => const_struct_field(val, 0),
-        _ => bug!("{} does not have fields.", t)
-    }
-}
-
-/// Extract field of struct-like const, skipping our alignment padding.
-fn const_struct_field(val: ValueRef, ix: usize) -> ValueRef {
-    // Get the ix-th non-undef element of the struct.
-    let mut real_ix = 0; // actual position in the struct
-    let mut ix = ix; // logical index relative to real_ix
-    let mut field;
-    loop {
-        loop {
-            field = const_get_elt(val, &[real_ix]);
-            if !is_undef(field) {
-                break;
-            }
-            real_ix = real_ix + 1;
-        }
-        if ix == 0 {
-            return field;
-        }
-        ix = ix - 1;
-        real_ix = real_ix + 1;
-    }
-}
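The deleted `adt.rs` code above contains two small numeric tricks worth pulling out: `assert_discr_in_range` accepts discriminant ranges that wrap around the integer type (e.g. a signed discriminant stored as unsigned), and `load_discr` skips the range assert exactly when `(max + 1) & mask == min & mask`, i.e. the range covers every representable value. A standalone sketch with hypothetical helper names, not the rustc functions themselves:

```rust
// Mirrors the wrap-around range check from `assert_discr_in_range`:
// if min > max, the range wraps past the top of the type.
fn discr_in_range(min: u64, max: u64, discr: u64) -> bool {
    if min <= max {
        min <= discr && discr <= max
    } else {
        min <= discr || discr <= max
    }
}

// Mirrors the "range is everything" test in `load_discr`: with `bits`
// significant bits (1..=64), the range [min, max] is full exactly when
// max + 1 wraps around to min modulo 2^bits.
fn range_is_full(min: u64, max: u64, bits: u32) -> bool {
    let mask = !0u64 >> (64 - bits);
    max.wrapping_add(1) & mask == min & mask
}

fn main() {
    assert!(discr_in_range(0, 3, 2));
    assert!(!discr_in_range(0, 3, 4));
    // A wrapped range such as -1..=1 seen as u64.
    assert!(discr_in_range(u64::MAX, 1, 0));
    assert!(range_is_full(0, 255, 8));
    assert!(!range_is_full(0, 254, 8));
    println!("ok");
}
```

This is why `load_discr` can tolerate `max` being `-1 as u64`: the comparison is done after masking, so the overflow of `max + 1` is harmless.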
diff --git a/src/librustc_trans/asm.rs b/src/librustc_trans/asm.rs
index 92cbd00..1959fd1 100644
--- a/src/librustc_trans/asm.rs
+++ b/src/librustc_trans/asm.rs
@@ -11,16 +11,15 @@
 //! # Translation of inline assembly.
 
 use llvm::{self, ValueRef};
-use base;
 use common::*;
-use type_of;
 use type_::Type;
+use type_of::LayoutLlvmExt;
 use builder::Builder;
 
 use rustc::hir;
-use rustc::ty::Ty;
 
-use mir::lvalue::Alignment;
+use mir::lvalue::LvalueRef;
+use mir::operand::OperandValue;
 
 use std::ffi::CString;
 use syntax::ast::AsmDialect;
@@ -30,7 +29,7 @@
 pub fn trans_inline_asm<'a, 'tcx>(
     bcx: &Builder<'a, 'tcx>,
     ia: &hir::InlineAsm,
-    outputs: Vec<(ValueRef, Ty<'tcx>)>,
+    outputs: Vec<LvalueRef<'tcx>>,
     mut inputs: Vec<ValueRef>
 ) {
     let mut ext_constraints = vec![];
@@ -38,20 +37,15 @@
 
     // Prepare the output operands
     let mut indirect_outputs = vec![];
-    for (i, (out, &(val, ty))) in ia.outputs.iter().zip(&outputs).enumerate() {
-        let val = if out.is_rw || out.is_indirect {
-            Some(base::load_ty(bcx, val, Alignment::Packed, ty))
-        } else {
-            None
-        };
+    for (i, (out, lvalue)) in ia.outputs.iter().zip(&outputs).enumerate() {
         if out.is_rw {
-            inputs.push(val.unwrap());
+            inputs.push(lvalue.load(bcx).immediate());
             ext_constraints.push(i.to_string());
         }
         if out.is_indirect {
-            indirect_outputs.push(val.unwrap());
+            indirect_outputs.push(lvalue.load(bcx).immediate());
         } else {
-            output_types.push(type_of::type_of(bcx.ccx, ty));
+            output_types.push(lvalue.layout.llvm_type(bcx.ccx));
         }
     }
     if !indirect_outputs.is_empty() {
@@ -106,9 +100,9 @@
 
     // Again, based on how many outputs we have
     let outputs = ia.outputs.iter().zip(&outputs).filter(|&(ref o, _)| !o.is_indirect);
-    for (i, (_, &(val, _))) in outputs.enumerate() {
-        let v = if num_outputs == 1 { r } else { bcx.extract_value(r, i) };
-        bcx.store(v, val, None);
+    for (i, (_, &lvalue)) in outputs.enumerate() {
+        let v = if num_outputs == 1 { r } else { bcx.extract_value(r, i as u64) };
+        OperandValue::Immediate(v).store(bcx, lvalue);
     }
 
     // Store mark in a metadata node so we can map LLVM errors
diff --git a/src/librustc_trans/attributes.rs b/src/librustc_trans/attributes.rs
index b6ca146..745aa0d 100644
--- a/src/librustc_trans/attributes.rs
+++ b/src/librustc_trans/attributes.rs
@@ -116,7 +116,7 @@
             naked(llfn, true);
         } else if attr.check_name("allocator") {
             Attribute::NoAlias.apply_llfn(
-                llvm::AttributePlace::ReturnValue(), llfn);
+                llvm::AttributePlace::ReturnValue, llfn);
         } else if attr.check_name("unwind") {
             unwind(llfn, true);
         } else if attr.check_name("rustc_allocator_nounwind") {
diff --git a/src/librustc_trans/back/link.rs b/src/librustc_trans/back/link.rs
index 89f182d..e0eef1f 100644
--- a/src/librustc_trans/back/link.rs
+++ b/src/librustc_trans/back/link.rs
@@ -262,19 +262,31 @@
         check_file_is_writeable(obj, sess);
     }
 
-    let tmpdir = match TempDir::new("rustc") {
-        Ok(tmpdir) => tmpdir,
-        Err(err) => sess.fatal(&format!("couldn't create a temp dir: {}", err)),
-    };
-
     let mut out_filenames = vec![];
 
     if outputs.outputs.contains_key(&OutputType::Metadata) {
         let out_filename = filename_for_metadata(sess, crate_name, outputs);
-        emit_metadata(sess, trans, &out_filename);
+        // To avoid races with another rustc process scanning the output directory,
+        // we need to write the file somewhere else and atomically move it to its
+        // final destination, with a `fs::rename` call. In order for the rename to
+        // always succeed, the temporary file needs to be on the same filesystem,
+        // which is why we create it inside the output directory specifically.
+        let metadata_tmpdir = match TempDir::new_in(out_filename.parent().unwrap(), "rmeta") {
+            Ok(tmpdir) => tmpdir,
+            Err(err) => sess.fatal(&format!("couldn't create a temp dir: {}", err)),
+        };
+        let metadata = emit_metadata(sess, trans, &metadata_tmpdir);
+        if let Err(e) = fs::rename(metadata, &out_filename) {
+            sess.fatal(&format!("failed to write {}: {}", out_filename.display(), e));
+        }
         out_filenames.push(out_filename);
     }
 
+    let tmpdir = match TempDir::new("rustc") {
+        Ok(tmpdir) => tmpdir,
+        Err(err) => sess.fatal(&format!("couldn't create a temp dir: {}", err)),
+    };
+
     if outputs.outputs.should_trans() {
         let out_filename = out_filename(sess, crate_type, outputs, crate_name);
         match crate_type {
@@ -283,10 +295,10 @@
                           trans,
                           RlibFlavor::Normal,
                           &out_filename,
-                          tmpdir.path()).build();
+                          &tmpdir).build();
             }
             config::CrateTypeStaticlib => {
-                link_staticlib(sess, trans, &out_filename, tmpdir.path());
+                link_staticlib(sess, trans, &out_filename, &tmpdir);
             }
             _ => {
                 link_natively(sess, crate_type, &out_filename, trans, tmpdir.path());
@@ -321,14 +333,23 @@
     }
 }
 
-fn emit_metadata<'a>(sess: &'a Session, trans: &CrateTranslation, out_filename: &Path) {
-    let result = fs::File::create(out_filename).and_then(|mut f| {
+/// We use a temp directory here to avoid races between concurrent rustc processes,
+/// such as builds in the same directory using the same filename for metadata while
+/// building an `.rlib` (stomping over one another), or writing an `.rmeta` into a
+/// directory being searched for `extern crate` (observing an incomplete file).
+/// The returned path is the temporary file containing the complete metadata.
+fn emit_metadata<'a>(sess: &'a Session, trans: &CrateTranslation, tmpdir: &TempDir)
+                     -> PathBuf {
+    let out_filename = tmpdir.path().join(METADATA_FILENAME);
+    let result = fs::File::create(&out_filename).and_then(|mut f| {
         f.write_all(&trans.metadata.raw_data)
     });
 
     if let Err(e) = result {
         sess.fatal(&format!("failed to write {}: {}", out_filename.display(), e));
     }
+
+    out_filename
 }
 
 enum RlibFlavor {
@@ -346,7 +367,7 @@
                  trans: &CrateTranslation,
                  flavor: RlibFlavor,
                  out_filename: &Path,
-                 tmpdir: &Path) -> ArchiveBuilder<'a> {
+                 tmpdir: &TempDir) -> ArchiveBuilder<'a> {
     info!("preparing rlib to {:?}", out_filename);
     let mut ab = ArchiveBuilder::new(archive_config(sess, out_filename, None));
 
@@ -408,12 +429,8 @@
     match flavor {
         RlibFlavor::Normal => {
             // Instead of putting the metadata in an object file section, rlibs
-            // contain the metadata in a separate file. We use a temp directory
-            // here so concurrent builds in the same directory don't try to use
-            // the same filename for metadata (stomping over one another)
-            let metadata = tmpdir.join(METADATA_FILENAME);
-            emit_metadata(sess, trans, &metadata);
-            ab.add_file(&metadata);
+            // contain the metadata in a separate file.
+            ab.add_file(&emit_metadata(sess, trans, tmpdir));
 
             // For LTO purposes, the bytecode of this library is also inserted
             // into the archive.
@@ -457,7 +474,7 @@
 fn link_staticlib(sess: &Session,
                   trans: &CrateTranslation,
                   out_filename: &Path,
-                  tempdir: &Path) {
+                  tempdir: &TempDir) {
     let mut ab = link_rlib(sess,
                            trans,
                            RlibFlavor::StaticlibBase,
@@ -649,9 +666,9 @@
         let mut out = output.stderr.clone();
         out.extend(&output.stdout);
         let out = String::from_utf8_lossy(&out);
-        let msg = "clang: error: unable to execute command: \
-                   Segmentation fault: 11";
-        if !out.contains(msg) {
+        let msg_segv = "clang: error: unable to execute command: Segmentation fault: 11";
+        let msg_bus  = "clang: error: unable to execute command: Bus error: 10";
+        if !(out.contains(msg_segv) || out.contains(msg_bus)) {
             break
         }
 
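The `link.rs` change above avoids metadata races by writing into a temp directory on the same filesystem as the destination and then renaming into place. A minimal sketch of that write-then-rename pattern using only `std` (file names here are hypothetical, and a fixed temp name is used instead of the `TempDir` the real code creates, so this sketch is not itself race-free; it only shows why the temp file must share a directory with the target):

```rust
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};

// Write `data` to a temporary file in the destination's own directory,
// then atomically move it into place. Because source and destination are
// on the same filesystem, `fs::rename` cannot fail with a cross-device
// error and readers never observe a partially written file.
fn write_metadata_atomically(out_filename: &Path, data: &[u8]) -> std::io::Result<()> {
    let dir = out_filename.parent().expect("output file has a parent dir");
    let tmp: PathBuf = dir.join(".metadata.tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(data)?;
    } // file is closed before the rename
    fs::rename(&tmp, out_filename)
}

fn main() -> std::io::Result<()> {
    let out = std::env::temp_dir().join("crate.rmeta");
    write_metadata_atomically(&out, b"metadata bytes")?;
    assert_eq!(fs::read(&out)?, b"metadata bytes");
    println!("ok");
    Ok(())
}
```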
diff --git a/src/librustc_trans/base.rs b/src/librustc_trans/base.rs
index 69bcd0a..b740868 100644
--- a/src/librustc_trans/base.rs
+++ b/src/librustc_trans/base.rs
@@ -28,6 +28,7 @@
 use super::ModuleTranslation;
 use super::ModuleKind;
 
+use abi;
 use assert_module_sources;
 use back::link;
 use back::symbol_export;
@@ -40,6 +41,7 @@
 use rustc::middle::trans::{Linkage, Visibility, Stats};
 use rustc::middle::cstore::{EncodedMetadata, EncodedMetadataHashes};
 use rustc::ty::{self, Ty, TyCtxt};
+use rustc::ty::layout::{self, Align, TyLayout, LayoutOf};
 use rustc::ty::maps::Providers;
 use rustc::dep_graph::{DepNode, DepKind, DepConstructor};
 use rustc::middle::cstore::{self, LinkMeta, LinkagePreference};
@@ -47,7 +49,6 @@
 use rustc::session::config::{self, NoDebugInfo};
 use rustc::session::Session;
 use rustc_incremental;
-use abi;
 use allocator;
 use mir::lvalue::LvalueRef;
 use attributes;
@@ -55,25 +56,20 @@
 use callee;
 use common::{C_bool, C_bytes_in_context, C_i32, C_usize};
 use collector::{self, TransItemCollectionMode};
-use common::{C_struct_in_context, C_u64, C_undef, C_array};
-use common::CrateContext;
-use common::{type_is_zero_size, val_ty};
-use common;
+use common::{self, C_struct_in_context, C_array, CrateContext, val_ty};
 use consts;
 use context::{self, LocalCrateContext, SharedCrateContext};
 use debuginfo;
 use declare;
-use machine;
 use meth;
 use mir;
-use monomorphize::{self, Instance};
+use monomorphize::Instance;
 use partitioning::{self, PartitioningStrategy, CodegenUnit, CodegenUnitExt};
 use symbol_names_test;
 use time_graph;
 use trans_item::{TransItem, BaseTransItemExt, TransItemExt, DefPathBasedNames};
 use type_::Type;
-use type_of;
-use value::Value;
+use type_of::LayoutLlvmExt;
 use rustc::util::nodemap::{NodeSet, FxHashMap, FxHashSet, DefIdSet};
 use CrateInfo;
 
@@ -90,7 +86,7 @@
 use rustc::hir;
 use syntax::ast;
 
-use mir::lvalue::Alignment;
+use mir::operand::OperandValue;
 
 pub use rustc_trans_utils::{find_exported_symbols, check_for_rustc_errors_attr};
 pub use rustc_trans_utils::trans_item::linkage_by_name;
@@ -125,14 +121,6 @@
     }
 }
 
-pub fn get_meta(bcx: &Builder, fat_ptr: ValueRef) -> ValueRef {
-    bcx.struct_gep(fat_ptr, abi::FAT_PTR_EXTRA)
-}
-
-pub fn get_dataptr(bcx: &Builder, fat_ptr: ValueRef) -> ValueRef {
-    bcx.struct_gep(fat_ptr, abi::FAT_PTR_ADDR)
-}
-
 pub fn bin_op_to_icmp_predicate(op: hir::BinOp_,
                                 signed: bool)
                                 -> llvm::IntPredicate {
@@ -216,8 +204,10 @@
             old_info.expect("unsized_info: missing old info for trait upcast")
         }
         (_, &ty::TyDynamic(ref data, ..)) => {
+            let vtable_ptr = ccx.layout_of(ccx.tcx().mk_mut_ptr(target))
+                .field(ccx, abi::FAT_PTR_EXTRA);
             consts::ptrcast(meth::get_vtable(ccx, source, data.principal()),
-                            Type::vtable_ptr(ccx))
+                            vtable_ptr.llvm_type(ccx))
         }
         _ => bug!("unsized_info: invalid unsizing {:?} -> {:?}",
                                      source,
@@ -241,15 +231,40 @@
         (&ty::TyRawPtr(ty::TypeAndMut { ty: a, .. }),
          &ty::TyRawPtr(ty::TypeAndMut { ty: b, .. })) => {
             assert!(bcx.ccx.shared().type_is_sized(a));
-            let ptr_ty = type_of::in_memory_type_of(bcx.ccx, b).ptr_to();
+            let ptr_ty = bcx.ccx.layout_of(b).llvm_type(bcx.ccx).ptr_to();
             (bcx.pointercast(src, ptr_ty), unsized_info(bcx.ccx, a, b, None))
         }
         (&ty::TyAdt(def_a, _), &ty::TyAdt(def_b, _)) if def_a.is_box() && def_b.is_box() => {
             let (a, b) = (src_ty.boxed_ty(), dst_ty.boxed_ty());
             assert!(bcx.ccx.shared().type_is_sized(a));
-            let ptr_ty = type_of::in_memory_type_of(bcx.ccx, b).ptr_to();
+            let ptr_ty = bcx.ccx.layout_of(b).llvm_type(bcx.ccx).ptr_to();
             (bcx.pointercast(src, ptr_ty), unsized_info(bcx.ccx, a, b, None))
         }
+        (&ty::TyAdt(def_a, _), &ty::TyAdt(def_b, _)) => {
+            assert_eq!(def_a, def_b);
+
+            let src_layout = bcx.ccx.layout_of(src_ty);
+            let dst_layout = bcx.ccx.layout_of(dst_ty);
+            let mut result = None;
+            for i in 0..src_layout.fields.count() {
+                let src_f = src_layout.field(bcx.ccx, i);
+                assert_eq!(src_layout.fields.offset(i).bytes(), 0);
+                assert_eq!(dst_layout.fields.offset(i).bytes(), 0);
+                if src_f.is_zst() {
+                    continue;
+                }
+                assert_eq!(src_layout.size, src_f.size);
+
+                let dst_f = dst_layout.field(bcx.ccx, i);
+                assert_ne!(src_f.ty, dst_f.ty);
+                assert_eq!(result, None);
+                result = Some(unsize_thin_ptr(bcx, src, src_f.ty, dst_f.ty));
+            }
+            let (lldata, llextra) = result.unwrap();
+            // HACK(eddyb) have to bitcast pointers until LLVM removes pointee types.
+            (bcx.bitcast(lldata, dst_layout.scalar_pair_element_llvm_type(bcx.ccx, 0)),
+             bcx.bitcast(llextra, dst_layout.scalar_pair_element_llvm_type(bcx.ccx, 1)))
+        }
         _ => bug!("unsize_thin_ptr: called on bad types"),
     }
 }
@@ -257,25 +272,26 @@
 /// Coerce `src`, which is a reference to a value of type `src_ty`,
 /// to a value of type `dst_ty` and store the result in `dst`
 pub fn coerce_unsized_into<'a, 'tcx>(bcx: &Builder<'a, 'tcx>,
-                                     src: &LvalueRef<'tcx>,
-                                     dst: &LvalueRef<'tcx>) {
-    let src_ty = src.ty.to_ty(bcx.tcx());
-    let dst_ty = dst.ty.to_ty(bcx.tcx());
+                                     src: LvalueRef<'tcx>,
+                                     dst: LvalueRef<'tcx>) {
+    let src_ty = src.layout.ty;
+    let dst_ty = dst.layout.ty;
     let coerce_ptr = || {
-        let (base, info) = if common::type_is_fat_ptr(bcx.ccx, src_ty) {
-            // fat-ptr to fat-ptr unsize preserves the vtable
-            // i.e. &'a fmt::Debug+Send => &'a fmt::Debug
-            // So we need to pointercast the base to ensure
-            // the types match up.
-            let (base, info) = load_fat_ptr(bcx, src.llval, src.alignment, src_ty);
-            let llcast_ty = type_of::fat_ptr_base_ty(bcx.ccx, dst_ty);
-            let base = bcx.pointercast(base, llcast_ty);
-            (base, info)
-        } else {
-            let base = load_ty(bcx, src.llval, src.alignment, src_ty);
-            unsize_thin_ptr(bcx, base, src_ty, dst_ty)
+        let (base, info) = match src.load(bcx).val {
+            OperandValue::Pair(base, info) => {
+                // fat-ptr to fat-ptr unsize preserves the vtable
+                // i.e. &'a fmt::Debug+Send => &'a fmt::Debug
+                // So we need to pointercast the base to ensure
+                // the types match up.
+                let thin_ptr = dst.layout.field(bcx.ccx, abi::FAT_PTR_ADDR);
+                (bcx.pointercast(base, thin_ptr.llvm_type(bcx.ccx)), info)
+            }
+            OperandValue::Immediate(base) => {
+                unsize_thin_ptr(bcx, base, src_ty, dst_ty)
+            }
+            OperandValue::Ref(..) => bug!()
         };
-        store_fat_ptr(bcx, base, info, dst.llval, dst.alignment, dst_ty);
+        OperandValue::Pair(base, info).store(bcx, dst);
     };
     match (&src_ty.sty, &dst_ty.sty) {
         (&ty::TyRef(..), &ty::TyRef(..)) |
@@ -287,32 +303,22 @@
             coerce_ptr()
         }
 
-        (&ty::TyAdt(def_a, substs_a), &ty::TyAdt(def_b, substs_b)) => {
+        (&ty::TyAdt(def_a, _), &ty::TyAdt(def_b, _)) => {
             assert_eq!(def_a, def_b);
 
-            let src_fields = def_a.variants[0].fields.iter().map(|f| {
-                monomorphize::field_ty(bcx.tcx(), substs_a, f)
-            });
-            let dst_fields = def_b.variants[0].fields.iter().map(|f| {
-                monomorphize::field_ty(bcx.tcx(), substs_b, f)
-            });
+            for i in 0..def_a.variants[0].fields.len() {
+                let src_f = src.project_field(bcx, i);
+                let dst_f = dst.project_field(bcx, i);
 
-            let iter = src_fields.zip(dst_fields).enumerate();
-            for (i, (src_fty, dst_fty)) in iter {
-                if type_is_zero_size(bcx.ccx, dst_fty) {
+                if dst_f.layout.is_zst() {
                     continue;
                 }
 
-                let (src_f, src_f_align) = src.trans_field_ptr(bcx, i);
-                let (dst_f, dst_f_align) = dst.trans_field_ptr(bcx, i);
-                if src_fty == dst_fty {
-                    memcpy_ty(bcx, dst_f, src_f, src_fty, None);
+                if src_f.layout.ty == dst_f.layout.ty {
+                    memcpy_ty(bcx, dst_f.llval, src_f.llval, src_f.layout,
+                        (src_f.alignment | dst_f.alignment).non_abi());
                 } else {
-                    coerce_unsized_into(
-                        bcx,
-                        &LvalueRef::new_sized_ty(src_f, src_fty, src_f_align),
-                        &LvalueRef::new_sized_ty(dst_f, dst_fty, dst_f_align)
-                    );
+                    coerce_unsized_into(bcx, src_f, dst_f);
                 }
             }
         }
@@ -385,94 +391,6 @@
     b.call(assume_intrinsic, &[val], None);
 }
 
-/// Helper for loading values from memory. Does the necessary conversion if the in-memory type
-/// differs from the type used for SSA values. Also handles various special cases where the type
-/// gives us better information about what we are loading.
-pub fn load_ty<'a, 'tcx>(b: &Builder<'a, 'tcx>, ptr: ValueRef,
-                         alignment: Alignment, t: Ty<'tcx>) -> ValueRef {
-    let ccx = b.ccx;
-    if type_is_zero_size(ccx, t) {
-        return C_undef(type_of::type_of(ccx, t));
-    }
-
-    unsafe {
-        let global = llvm::LLVMIsAGlobalVariable(ptr);
-        if !global.is_null() && llvm::LLVMIsGlobalConstant(global) == llvm::True {
-            let val = llvm::LLVMGetInitializer(global);
-            if !val.is_null() {
-                if t.is_bool() {
-                    return llvm::LLVMConstTrunc(val, Type::i1(ccx).to_ref());
-                }
-                return val;
-            }
-        }
-    }
-
-    if t.is_bool() {
-        b.trunc(b.load_range_assert(ptr, 0, 2, llvm::False, alignment.to_align()),
-                Type::i1(ccx))
-    } else if t.is_char() {
-        // a char is a Unicode codepoint, and so takes values from 0
-        // to 0x10FFFF inclusive only.
-        b.load_range_assert(ptr, 0, 0x10FFFF + 1, llvm::False, alignment.to_align())
-    } else if (t.is_region_ptr() || t.is_box() || t.is_fn())
-        && !common::type_is_fat_ptr(ccx, t)
-    {
-        b.load_nonnull(ptr, alignment.to_align())
-    } else {
-        b.load(ptr, alignment.to_align())
-    }
-}
-
-/// Helper for storing values in memory. Does the necessary conversion if the in-memory type
-/// differs from the type used for SSA values.
-pub fn store_ty<'a, 'tcx>(cx: &Builder<'a, 'tcx>, v: ValueRef, dst: ValueRef,
-                          dst_align: Alignment, t: Ty<'tcx>) {
-    debug!("store_ty: {:?} : {:?} <- {:?}", Value(dst), t, Value(v));
-
-    if common::type_is_fat_ptr(cx.ccx, t) {
-        let lladdr = cx.extract_value(v, abi::FAT_PTR_ADDR);
-        let llextra = cx.extract_value(v, abi::FAT_PTR_EXTRA);
-        store_fat_ptr(cx, lladdr, llextra, dst, dst_align, t);
-    } else {
-        cx.store(from_immediate(cx, v), dst, dst_align.to_align());
-    }
-}
-
-pub fn store_fat_ptr<'a, 'tcx>(cx: &Builder<'a, 'tcx>,
-                               data: ValueRef,
-                               extra: ValueRef,
-                               dst: ValueRef,
-                               dst_align: Alignment,
-                               _ty: Ty<'tcx>) {
-    // FIXME: emit metadata
-    cx.store(data, get_dataptr(cx, dst), dst_align.to_align());
-    cx.store(extra, get_meta(cx, dst), dst_align.to_align());
-}
-
-pub fn load_fat_ptr<'a, 'tcx>(
-    b: &Builder<'a, 'tcx>, src: ValueRef, alignment: Alignment, t: Ty<'tcx>
-) -> (ValueRef, ValueRef) {
-    let ptr = get_dataptr(b, src);
-    let ptr = if t.is_region_ptr() || t.is_box() {
-        b.load_nonnull(ptr, alignment.to_align())
-    } else {
-        b.load(ptr, alignment.to_align())
-    };
-
-    let meta = get_meta(b, src);
-    let meta_ty = val_ty(meta);
-    // If the 'meta' field is a pointer, it's a vtable, so use load_nonnull
-    // instead
-    let meta = if meta_ty.element_type().kind() == llvm::TypeKind::Pointer {
-        b.load_nonnull(meta, None)
-    } else {
-        b.load(meta, None)
-    };
-
-    (ptr, meta)
-}
-
 pub fn from_immediate(bcx: &Builder, val: ValueRef) -> ValueRef {
     if val_ty(val) == Type::i1(bcx.ccx) {
         bcx.zext(val, Type::i8(bcx.ccx))
@@ -481,50 +399,20 @@
     }
 }
 
-pub fn to_immediate(bcx: &Builder, val: ValueRef, ty: Ty) -> ValueRef {
-    if ty.is_bool() {
-        bcx.trunc(val, Type::i1(bcx.ccx))
-    } else {
-        val
+pub fn to_immediate(bcx: &Builder, val: ValueRef, layout: layout::TyLayout) -> ValueRef {
+    if let layout::Abi::Scalar(ref scalar) = layout.abi {
+        if scalar.is_bool() {
+            return bcx.trunc(val, Type::i1(bcx.ccx));
+        }
     }
+    val
 }
 
-pub enum Lifetime { Start, End }
-
-impl Lifetime {
-    // If LLVM lifetime intrinsic support is enabled (i.e. optimizations
-    // on), and `ptr` is nonzero-sized, then extracts the size of `ptr`
-    // and the intrinsic for `lt` and passes them to `emit`, which is in
-    // charge of generating code to call the passed intrinsic on whatever
-    // block of generated code is targeted for the intrinsic.
-    //
-    // If LLVM lifetime intrinsic support is disabled (i.e.  optimizations
-    // off) or `ptr` is zero-sized, then no-op (does not call `emit`).
-    pub fn call(self, b: &Builder, ptr: ValueRef) {
-        if b.ccx.sess().opts.optimize == config::OptLevel::No {
-            return;
-        }
-
-        let size = machine::llsize_of_alloc(b.ccx, val_ty(ptr).element_type());
-        if size == 0 {
-            return;
-        }
-
-        let lifetime_intrinsic = b.ccx.get_intrinsic(match self {
-            Lifetime::Start => "llvm.lifetime.start",
-            Lifetime::End => "llvm.lifetime.end"
-        });
-
-        let ptr = b.pointercast(ptr, Type::i8p(b.ccx));
-        b.call(lifetime_intrinsic, &[C_u64(b.ccx, size), ptr], None);
-    }
-}
-
-pub fn call_memcpy<'a, 'tcx>(b: &Builder<'a, 'tcx>,
-                               dst: ValueRef,
-                               src: ValueRef,
-                               n_bytes: ValueRef,
-                               align: u32) {
+pub fn call_memcpy(b: &Builder,
+                   dst: ValueRef,
+                   src: ValueRef,
+                   n_bytes: ValueRef,
+                   align: Align) {
     let ccx = b.ccx;
     let ptr_width = &ccx.sess().target.target.target_pointer_width;
     let key = format!("llvm.memcpy.p0i8.p0i8.i{}", ptr_width);
@@ -532,7 +420,7 @@
     let src_ptr = b.pointercast(src, Type::i8p(ccx));
     let dst_ptr = b.pointercast(dst, Type::i8p(ccx));
     let size = b.intcast(n_bytes, ccx.isize_ty(), false);
-    let align = C_i32(ccx, align as i32);
+    let align = C_i32(ccx, align.abi() as i32);
     let volatile = C_bool(ccx, false);
     b.call(memcpy, &[dst_ptr, src_ptr, size, align, volatile], None);
 }
@@ -541,18 +429,16 @@
     bcx: &Builder<'a, 'tcx>,
     dst: ValueRef,
     src: ValueRef,
-    t: Ty<'tcx>,
-    align: Option<u32>,
+    layout: TyLayout<'tcx>,
+    align: Option<Align>,
 ) {
-    let ccx = bcx.ccx;
-
-    let size = ccx.size_of(t);
+    let size = layout.size.bytes();
     if size == 0 {
         return;
     }
 
-    let align = align.unwrap_or_else(|| ccx.align_of(t));
-    call_memcpy(bcx, dst, src, C_usize(ccx, size), align);
+    let align = align.unwrap_or(layout.align);
+    call_memcpy(bcx, dst, src, C_usize(bcx.ccx, size), align);
 }
 
 pub fn call_memset<'a, 'tcx>(b: &Builder<'a, 'tcx>,
diff --git a/src/librustc_trans/builder.rs b/src/librustc_trans/builder.rs
index b366d55..50e673b 100644
--- a/src/librustc_trans/builder.rs
+++ b/src/librustc_trans/builder.rs
@@ -15,15 +15,16 @@
 use llvm::{Opcode, IntPredicate, RealPredicate, False, OperandBundleDef};
 use llvm::{ValueRef, BasicBlockRef, BuilderRef, ModuleRef};
 use common::*;
-use machine::llalign_of_pref;
 use type_::Type;
 use value::Value;
 use libc::{c_uint, c_char};
 use rustc::ty::TyCtxt;
-use rustc::session::Session;
+use rustc::ty::layout::{Align, Size};
+use rustc::session::{config, Session};
 
 use std::borrow::Cow;
 use std::ffi::CString;
+use std::ops::Range;
 use std::ptr;
 use syntax_pos::Span;
 
@@ -487,7 +488,7 @@
         }
     }
 
-    pub fn alloca(&self, ty: Type, name: &str, align: Option<u32>) -> ValueRef {
+    pub fn alloca(&self, ty: Type, name: &str, align: Align) -> ValueRef {
         let builder = Builder::with_ccx(self.ccx);
         builder.position_at_start(unsafe {
             llvm::LLVMGetFirstBasicBlock(self.llfn())
@@ -495,7 +496,7 @@
         builder.dynamic_alloca(ty, name, align)
     }
 
-    pub fn dynamic_alloca(&self, ty: Type, name: &str, align: Option<u32>) -> ValueRef {
+    pub fn dynamic_alloca(&self, ty: Type, name: &str, align: Align) -> ValueRef {
         self.count_insn("alloca");
         unsafe {
             let alloca = if name.is_empty() {
@@ -505,9 +506,7 @@
                 llvm::LLVMBuildAlloca(self.llbuilder, ty.to_ref(),
                                       name.as_ptr())
             };
-            if let Some(align) = align {
-                llvm::LLVMSetAlignment(alloca, align as c_uint);
-            }
+            llvm::LLVMSetAlignment(alloca, align.abi() as c_uint);
             alloca
         }
     }
@@ -519,12 +518,12 @@
         }
     }
 
-    pub fn load(&self, ptr: ValueRef, align: Option<u32>) -> ValueRef {
+    pub fn load(&self, ptr: ValueRef, align: Option<Align>) -> ValueRef {
         self.count_insn("load");
         unsafe {
             let load = llvm::LLVMBuildLoad(self.llbuilder, ptr, noname());
             if let Some(align) = align {
-                llvm::LLVMSetAlignment(load, align as c_uint);
+                llvm::LLVMSetAlignment(load, align.abi() as c_uint);
             }
             load
         }
@@ -539,49 +538,42 @@
         }
     }
 
-    pub fn atomic_load(&self, ptr: ValueRef, order: AtomicOrdering) -> ValueRef {
+    pub fn atomic_load(&self, ptr: ValueRef, order: AtomicOrdering, align: Align) -> ValueRef {
         self.count_insn("load.atomic");
         unsafe {
-            let ty = Type::from_ref(llvm::LLVMTypeOf(ptr));
-            let align = llalign_of_pref(self.ccx, ty.element_type());
-            llvm::LLVMRustBuildAtomicLoad(self.llbuilder, ptr, noname(), order,
-                                          align as c_uint)
+            let load = llvm::LLVMRustBuildAtomicLoad(self.llbuilder, ptr, noname(), order);
+            // FIXME(eddyb) Isn't it UB to use `pref` instead of `abi` here?
+            // However, 64-bit atomic loads on `i686-apple-darwin` appear to
+            // require `___atomic_load` with ABI-alignment, so it's staying.
+            llvm::LLVMSetAlignment(load, align.pref() as c_uint);
+            load
         }
     }
 
 
-    pub fn load_range_assert(&self, ptr: ValueRef, lo: u64,
-                             hi: u64, signed: llvm::Bool,
-                             align: Option<u32>) -> ValueRef {
-        let value = self.load(ptr, align);
-
+    pub fn range_metadata(&self, load: ValueRef, range: Range<u128>) {
         unsafe {
-            let t = llvm::LLVMGetElementType(llvm::LLVMTypeOf(ptr));
-            let min = llvm::LLVMConstInt(t, lo, signed);
-            let max = llvm::LLVMConstInt(t, hi, signed);
+            let llty = val_ty(load);
+            let v = [
+                C_uint_big(llty, range.start),
+                C_uint_big(llty, range.end)
+            ];
 
-            let v = [min, max];
-
-            llvm::LLVMSetMetadata(value, llvm::MD_range as c_uint,
+            llvm::LLVMSetMetadata(load, llvm::MD_range as c_uint,
                                   llvm::LLVMMDNodeInContext(self.ccx.llcx(),
                                                             v.as_ptr(),
                                                             v.len() as c_uint));
         }
-
-        value
     }
 
-    pub fn load_nonnull(&self, ptr: ValueRef, align: Option<u32>) -> ValueRef {
-        let value = self.load(ptr, align);
+    pub fn nonnull_metadata(&self, load: ValueRef) {
         unsafe {
-            llvm::LLVMSetMetadata(value, llvm::MD_nonnull as c_uint,
+            llvm::LLVMSetMetadata(load, llvm::MD_nonnull as c_uint,
                                   llvm::LLVMMDNodeInContext(self.ccx.llcx(), ptr::null(), 0));
         }
-
-        value
     }
 
-    pub fn store(&self, val: ValueRef, ptr: ValueRef, align: Option<u32>) -> ValueRef {
+    pub fn store(&self, val: ValueRef, ptr: ValueRef, align: Option<Align>) -> ValueRef {
         debug!("Store {:?} -> {:?}", Value(val), Value(ptr));
         assert!(!self.llbuilder.is_null());
         self.count_insn("store");
@@ -589,7 +581,7 @@
         unsafe {
             let store = llvm::LLVMBuildStore(self.llbuilder, val, ptr);
             if let Some(align) = align {
-                llvm::LLVMSetAlignment(store, align as c_uint);
+                llvm::LLVMSetAlignment(store, align.abi() as c_uint);
             }
             store
         }
@@ -607,14 +599,16 @@
         }
     }
 
-    pub fn atomic_store(&self, val: ValueRef, ptr: ValueRef, order: AtomicOrdering) {
+    pub fn atomic_store(&self, val: ValueRef, ptr: ValueRef,
+                        order: AtomicOrdering, align: Align) {
         debug!("Store {:?} -> {:?}", Value(val), Value(ptr));
         self.count_insn("store.atomic");
         let ptr = self.check_store(val, ptr);
         unsafe {
-            let ty = Type::from_ref(llvm::LLVMTypeOf(ptr));
-            let align = llalign_of_pref(self.ccx, ty.element_type());
-            llvm::LLVMRustBuildAtomicStore(self.llbuilder, val, ptr, order, align as c_uint);
+            let store = llvm::LLVMRustBuildAtomicStore(self.llbuilder, val, ptr, order);
+            // FIXME(eddyb) Isn't it UB to use `pref` instead of `abi` here?
+            // Also see `atomic_load` for more context.
+            llvm::LLVMSetAlignment(store, align.pref() as c_uint);
         }
     }
 
@@ -626,25 +620,6 @@
         }
     }
 
-    // Simple wrapper around GEP that takes an array of ints and wraps them
-    // in C_i32()
-    #[inline]
-    pub fn gepi(&self, base: ValueRef, ixs: &[usize]) -> ValueRef {
-        // Small vector optimization. This should catch 100% of the cases that
-        // we care about.
-        if ixs.len() < 16 {
-            let mut small_vec = [ C_i32(self.ccx, 0); 16 ];
-            for (small_vec_e, &ix) in small_vec.iter_mut().zip(ixs) {
-                *small_vec_e = C_i32(self.ccx, ix as i32);
-            }
-            self.inbounds_gep(base, &small_vec[..ixs.len()])
-        } else {
-            let v = ixs.iter().map(|i| C_i32(self.ccx, *i as i32)).collect::<Vec<ValueRef>>();
-            self.count_insn("gepi");
-            self.inbounds_gep(base, &v)
-        }
-    }
-
     pub fn inbounds_gep(&self, ptr: ValueRef, indices: &[ValueRef]) -> ValueRef {
         self.count_insn("inboundsgep");
         unsafe {
@@ -653,8 +628,9 @@
         }
     }
 
-    pub fn struct_gep(&self, ptr: ValueRef, idx: usize) -> ValueRef {
+    pub fn struct_gep(&self, ptr: ValueRef, idx: u64) -> ValueRef {
         self.count_insn("structgep");
+        assert_eq!(idx as c_uint as u64, idx);
         unsafe {
             llvm::LLVMBuildStructGEP(self.llbuilder, ptr, idx as c_uint, noname())
         }
@@ -960,16 +936,18 @@
         }
     }
 
-    pub fn extract_value(&self, agg_val: ValueRef, idx: usize) -> ValueRef {
+    pub fn extract_value(&self, agg_val: ValueRef, idx: u64) -> ValueRef {
         self.count_insn("extractvalue");
+        assert_eq!(idx as c_uint as u64, idx);
         unsafe {
             llvm::LLVMBuildExtractValue(self.llbuilder, agg_val, idx as c_uint, noname())
         }
     }
 
     pub fn insert_value(&self, agg_val: ValueRef, elt: ValueRef,
-                       idx: usize) -> ValueRef {
+                       idx: u64) -> ValueRef {
         self.count_insn("insertvalue");
+        assert_eq!(idx as c_uint as u64, idx);
         unsafe {
             llvm::LLVMBuildInsertValue(self.llbuilder, agg_val, elt, idx as c_uint,
                                        noname())
@@ -1151,14 +1129,12 @@
 
     pub fn add_case(&self, s: ValueRef, on_val: ValueRef, dest: BasicBlockRef) {
         unsafe {
-            if llvm::LLVMIsUndef(s) == llvm::True { return; }
             llvm::LLVMAddCase(s, on_val, dest)
         }
     }
 
     pub fn add_incoming_to_phi(&self, phi: ValueRef, val: ValueRef, bb: BasicBlockRef) {
         unsafe {
-            if llvm::LLVMIsUndef(phi) == llvm::True { return; }
             llvm::LLVMAddIncoming(phi, &val, &bb, 1 as c_uint);
         }
     }
@@ -1233,4 +1209,36 @@
 
         return Cow::Owned(casted_args);
     }
+
+    pub fn lifetime_start(&self, ptr: ValueRef, size: Size) {
+        self.call_lifetime_intrinsic("llvm.lifetime.start", ptr, size);
+    }
+
+    pub fn lifetime_end(&self, ptr: ValueRef, size: Size) {
+        self.call_lifetime_intrinsic("llvm.lifetime.end", ptr, size);
+    }
+
+    /// If LLVM lifetime intrinsic support is enabled (i.e. optimizations
+    /// are on) and `ptr` is nonzero-sized, emits a call to the named
+    /// lifetime `intrinsic` ("llvm.lifetime.start" or "llvm.lifetime.end"),
+    /// passing it the size of `ptr` and the pointer cast to `i8*`, at the
+    /// builder's current insertion point.
+    ///
+    /// If LLVM lifetime intrinsic support is disabled (i.e. optimizations
+    /// are off) or `ptr` is zero-sized, this is a no-op.
+    fn call_lifetime_intrinsic(&self, intrinsic: &str, ptr: ValueRef, size: Size) {
+        if self.ccx.sess().opts.optimize == config::OptLevel::No {
+            return;
+        }
+
+        let size = size.bytes();
+        if size == 0 {
+            return;
+        }
+
+        let lifetime_intrinsic = self.ccx.get_intrinsic(intrinsic);
+
+        let ptr = self.pointercast(ptr, Type::i8p(self.ccx));
+        self.call(lifetime_intrinsic, &[C_u64(self.ccx, size), ptr], None);
+    }
 }
diff --git a/src/librustc_trans/cabi_aarch64.rs b/src/librustc_trans/cabi_aarch64.rs
index bf842e6..d5f341f 100644
--- a/src/librustc_trans/cabi_aarch64.rs
+++ b/src/librustc_trans/cabi_aarch64.rs
@@ -14,7 +14,7 @@
 fn is_homogeneous_aggregate<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>)
                                      -> Option<Uniform> {
     arg.layout.homogeneous_aggregate(ccx).and_then(|unit| {
-        let size = arg.layout.size(ccx);
+        let size = arg.layout.size;
 
         // Ensure we have at most four uniquely addressable members.
         if size > unit.size.checked_mul(4, ccx).unwrap() {
@@ -44,10 +44,10 @@
         return;
     }
     if let Some(uniform) = is_homogeneous_aggregate(ccx, ret) {
-        ret.cast_to(ccx, uniform);
+        ret.cast_to(uniform);
         return;
     }
-    let size = ret.layout.size(ccx);
+    let size = ret.layout.size;
     let bits = size.bits();
     if bits <= 128 {
         let unit = if bits <= 8 {
@@ -60,13 +60,13 @@
             Reg::i64()
         };
 
-        ret.cast_to(ccx, Uniform {
+        ret.cast_to(Uniform {
             unit,
             total: size
         });
         return;
     }
-    ret.make_indirect(ccx);
+    ret.make_indirect();
 }
 
 fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
@@ -75,10 +75,10 @@
         return;
     }
     if let Some(uniform) = is_homogeneous_aggregate(ccx, arg) {
-        arg.cast_to(ccx, uniform);
+        arg.cast_to(uniform);
         return;
     }
-    let size = arg.layout.size(ccx);
+    let size = arg.layout.size;
     let bits = size.bits();
     if bits <= 128 {
         let unit = if bits <= 8 {
@@ -91,13 +91,13 @@
             Reg::i64()
         };
 
-        arg.cast_to(ccx, Uniform {
+        arg.cast_to(Uniform {
             unit,
             total: size
         });
         return;
     }
-    arg.make_indirect(ccx);
+    arg.make_indirect();
 }
 
 pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
diff --git a/src/librustc_trans/cabi_arm.rs b/src/librustc_trans/cabi_arm.rs
index 635741b..438053d 100644
--- a/src/librustc_trans/cabi_arm.rs
+++ b/src/librustc_trans/cabi_arm.rs
@@ -15,7 +15,7 @@
 fn is_homogeneous_aggregate<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>)
                                      -> Option<Uniform> {
     arg.layout.homogeneous_aggregate(ccx).and_then(|unit| {
-        let size = arg.layout.size(ccx);
+        let size = arg.layout.size;
 
         // Ensure we have at most four uniquely addressable members.
         if size > unit.size.checked_mul(4, ccx).unwrap() {
@@ -47,12 +47,12 @@
 
     if vfp {
         if let Some(uniform) = is_homogeneous_aggregate(ccx, ret) {
-            ret.cast_to(ccx, uniform);
+            ret.cast_to(uniform);
             return;
         }
     }
 
-    let size = ret.layout.size(ccx);
+    let size = ret.layout.size;
     let bits = size.bits();
     if bits <= 32 {
         let unit = if bits <= 8 {
@@ -62,13 +62,13 @@
         } else {
             Reg::i32()
         };
-        ret.cast_to(ccx, Uniform {
+        ret.cast_to(Uniform {
             unit,
             total: size
         });
         return;
     }
-    ret.make_indirect(ccx);
+    ret.make_indirect();
 }
 
 fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>, vfp: bool) {
@@ -79,14 +79,14 @@
 
     if vfp {
         if let Some(uniform) = is_homogeneous_aggregate(ccx, arg) {
-            arg.cast_to(ccx, uniform);
+            arg.cast_to(uniform);
             return;
         }
     }
 
-    let align = arg.layout.align(ccx).abi();
-    let total = arg.layout.size(ccx);
-    arg.cast_to(ccx, Uniform {
+    let align = arg.layout.align.abi();
+    let total = arg.layout.size;
+    arg.cast_to(Uniform {
         unit: if align <= 4 { Reg::i32() } else { Reg::i64() },
         total
     });
diff --git a/src/librustc_trans/cabi_asmjs.rs b/src/librustc_trans/cabi_asmjs.rs
index 6fcd3ed..1664251 100644
--- a/src/librustc_trans/cabi_asmjs.rs
+++ b/src/librustc_trans/cabi_asmjs.rs
@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use abi::{FnType, ArgType, ArgAttribute, LayoutExt, Uniform};
+use abi::{FnType, ArgType, LayoutExt, Uniform};
 use context::CrateContext;
 
 // Data layout: e-p:32:32-i64:64-v128:32:128-n32-S128
@@ -19,9 +19,9 @@
 fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
     if ret.layout.is_aggregate() {
         if let Some(unit) = ret.layout.homogeneous_aggregate(ccx) {
-            let size = ret.layout.size(ccx);
+            let size = ret.layout.size;
             if unit.size == size {
-                ret.cast_to(ccx, Uniform {
+                ret.cast_to(Uniform {
                     unit,
                     total: size
                 });
@@ -29,14 +29,13 @@
             }
         }
 
-        ret.make_indirect(ccx);
+        ret.make_indirect();
     }
 }
 
-fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
+fn classify_arg_ty(arg: &mut ArgType) {
     if arg.layout.is_aggregate() {
-        arg.make_indirect(ccx);
-        arg.attrs.set(ArgAttribute::ByVal);
+        arg.make_indirect_byval();
     }
 }
 
@@ -47,6 +46,6 @@
 
     for arg in &mut fty.args {
         if arg.is_ignore() { continue; }
-        classify_arg_ty(ccx, arg);
+        classify_arg_ty(arg);
     }
 }
diff --git a/src/librustc_trans/cabi_hexagon.rs b/src/librustc_trans/cabi_hexagon.rs
index 1acda72..7e7e483 100644
--- a/src/librustc_trans/cabi_hexagon.rs
+++ b/src/librustc_trans/cabi_hexagon.rs
@@ -11,33 +11,32 @@
 #![allow(non_upper_case_globals)]
 
 use abi::{FnType, ArgType, LayoutExt};
-use context::CrateContext;
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
-    if ret.layout.is_aggregate() && ret.layout.size(ccx).bits() > 64 {
-        ret.make_indirect(ccx);
+fn classify_ret_ty(ret: &mut ArgType) {
+    if ret.layout.is_aggregate() && ret.layout.size.bits() > 64 {
+        ret.make_indirect();
     } else {
         ret.extend_integer_width_to(32);
     }
 }
 
-fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
-    if arg.layout.is_aggregate() && arg.layout.size(ccx).bits() > 64 {
-        arg.make_indirect(ccx);
+fn classify_arg_ty(arg: &mut ArgType) {
+    if arg.layout.is_aggregate() && arg.layout.size.bits() > 64 {
+        arg.make_indirect();
     } else {
         arg.extend_integer_width_to(32);
     }
 }
 
-pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+pub fn compute_abi_info(fty: &mut FnType) {
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(&mut fty.ret);
     }
 
     for arg in &mut fty.args {
         if arg.is_ignore() {
             continue;
         }
-        classify_arg_ty(ccx, arg);
+        classify_arg_ty(arg);
     }
 }
diff --git a/src/librustc_trans/cabi_mips.rs b/src/librustc_trans/cabi_mips.rs
index b7b6085..fe61670 100644
--- a/src/librustc_trans/cabi_mips.rs
+++ b/src/librustc_trans/cabi_mips.rs
@@ -8,45 +8,48 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use std::cmp;
-use abi::{align_up_to, ArgType, FnType, LayoutExt, Reg, Uniform};
+use abi::{ArgType, FnType, LayoutExt, Reg, Uniform};
 use context::CrateContext;
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
+use rustc::ty::layout::Size;
+
+fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
+                             ret: &mut ArgType<'tcx>,
+                             offset: &mut Size) {
     if !ret.layout.is_aggregate() {
         ret.extend_integer_width_to(32);
     } else {
-        ret.make_indirect(ccx);
+        ret.make_indirect();
+        *offset += ccx.tcx().data_layout.pointer_size;
     }
 }
 
-fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut u64) {
-    let size = arg.layout.size(ccx);
-    let mut align = arg.layout.align(ccx).abi();
-    align = cmp::min(cmp::max(align, 4), 8);
+fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut Size) {
+    let dl = &ccx.tcx().data_layout;
+    let size = arg.layout.size;
+    let align = arg.layout.align.max(dl.i32_align).min(dl.i64_align);
 
     if arg.layout.is_aggregate() {
-        arg.cast_to(ccx, Uniform {
+        arg.cast_to(Uniform {
             unit: Reg::i32(),
             total: size
         });
-        if ((align - 1) & *offset) > 0 {
-            arg.pad_with(ccx, Reg::i32());
+        if !offset.is_abi_aligned(align) {
+            arg.pad_with(Reg::i32());
         }
     } else {
         arg.extend_integer_width_to(32);
     }
 
-    *offset = align_up_to(*offset, align);
-    *offset += align_up_to(size.bytes(), align);
+    *offset = offset.abi_align(align) + size.abi_align(align);
 }
 
 pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+    let mut offset = Size::from_bytes(0);
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(ccx, &mut fty.ret, &mut offset);
     }
 
-    let mut offset = if fty.ret.is_indirect() { 4 } else { 0 };
     for arg in &mut fty.args {
         if arg.is_ignore() { continue; }
         classify_arg_ty(ccx, arg, &mut offset);
diff --git a/src/librustc_trans/cabi_mips64.rs b/src/librustc_trans/cabi_mips64.rs
index dff75e6..16d0cfe 100644
--- a/src/librustc_trans/cabi_mips64.rs
+++ b/src/librustc_trans/cabi_mips64.rs
@@ -8,45 +8,48 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use std::cmp;
-use abi::{align_up_to, ArgType, FnType, LayoutExt, Reg, Uniform};
+use abi::{ArgType, FnType, LayoutExt, Reg, Uniform};
 use context::CrateContext;
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
+use rustc::ty::layout::Size;
+
+fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
+                             ret: &mut ArgType<'tcx>,
+                             offset: &mut Size) {
     if !ret.layout.is_aggregate() {
         ret.extend_integer_width_to(64);
     } else {
-        ret.make_indirect(ccx);
+        ret.make_indirect();
+        *offset += ccx.tcx().data_layout.pointer_size;
     }
 }
 
-fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut u64) {
-    let size = arg.layout.size(ccx);
-    let mut align = arg.layout.align(ccx).abi();
-    align = cmp::min(cmp::max(align, 4), 8);
+fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut Size) {
+    let dl = &ccx.tcx().data_layout;
+    let size = arg.layout.size;
+    let align = arg.layout.align.max(dl.i32_align).min(dl.i64_align);
 
     if arg.layout.is_aggregate() {
-        arg.cast_to(ccx, Uniform {
+        arg.cast_to(Uniform {
             unit: Reg::i64(),
             total: size
         });
-        if ((align - 1) & *offset) > 0 {
-            arg.pad_with(ccx, Reg::i64());
+        if !offset.is_abi_aligned(align) {
+            arg.pad_with(Reg::i64());
         }
     } else {
         arg.extend_integer_width_to(64);
     }
 
-    *offset = align_up_to(*offset, align);
-    *offset += align_up_to(size.bytes(), align);
+    *offset = offset.abi_align(align) + size.abi_align(align);
 }
 
 pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+    let mut offset = Size::from_bytes(0);
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(ccx, &mut fty.ret, &mut offset);
     }
 
-    let mut offset = if fty.ret.is_indirect() { 8 } else { 0 };
     for arg in &mut fty.args {
         if arg.is_ignore() { continue; }
         classify_arg_ty(ccx, arg, &mut offset);
diff --git a/src/librustc_trans/cabi_msp430.rs b/src/librustc_trans/cabi_msp430.rs
index 546bb5a..d270886 100644
--- a/src/librustc_trans/cabi_msp430.rs
+++ b/src/librustc_trans/cabi_msp430.rs
@@ -12,7 +12,6 @@
 // http://www.ti.com/lit/an/slaa534/slaa534.pdf
 
 use abi::{ArgType, FnType, LayoutExt};
-use context::CrateContext;
 
 // 3.5 Structures or Unions Passed and Returned by Reference
 //
@@ -20,31 +19,31 @@
 // returned by reference. To pass a structure or union by reference, the caller
 // places its address in the appropriate location: either in a register or on
 // the stack, according to its position in the argument list. (..)"
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
-    if ret.layout.is_aggregate() && ret.layout.size(ccx).bits() > 32 {
-        ret.make_indirect(ccx);
+fn classify_ret_ty(ret: &mut ArgType) {
+    if ret.layout.is_aggregate() && ret.layout.size.bits() > 32 {
+        ret.make_indirect();
     } else {
         ret.extend_integer_width_to(16);
     }
 }
 
-fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
-    if arg.layout.is_aggregate() && arg.layout.size(ccx).bits() > 32 {
-        arg.make_indirect(ccx);
+fn classify_arg_ty(arg: &mut ArgType) {
+    if arg.layout.is_aggregate() && arg.layout.size.bits() > 32 {
+        arg.make_indirect();
     } else {
         arg.extend_integer_width_to(16);
     }
 }
 
-pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+pub fn compute_abi_info(fty: &mut FnType) {
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(&mut fty.ret);
     }
 
     for arg in &mut fty.args {
         if arg.is_ignore() {
             continue;
         }
-        classify_arg_ty(ccx, arg);
+        classify_arg_ty(arg);
     }
 }
diff --git a/src/librustc_trans/cabi_nvptx.rs b/src/librustc_trans/cabi_nvptx.rs
index 3873752..69cfc69 100644
--- a/src/librustc_trans/cabi_nvptx.rs
+++ b/src/librustc_trans/cabi_nvptx.rs
@@ -12,33 +12,32 @@
 // http://docs.nvidia.com/cuda/ptx-writers-guide-to-interoperability
 
 use abi::{ArgType, FnType, LayoutExt};
-use context::CrateContext;
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
-    if ret.layout.is_aggregate() && ret.layout.size(ccx).bits() > 32 {
-        ret.make_indirect(ccx);
+fn classify_ret_ty(ret: &mut ArgType) {
+    if ret.layout.is_aggregate() && ret.layout.size.bits() > 32 {
+        ret.make_indirect();
     } else {
         ret.extend_integer_width_to(32);
     }
 }
 
-fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
-    if arg.layout.is_aggregate() && arg.layout.size(ccx).bits() > 32 {
-        arg.make_indirect(ccx);
+fn classify_arg_ty(arg: &mut ArgType) {
+    if arg.layout.is_aggregate() && arg.layout.size.bits() > 32 {
+        arg.make_indirect();
     } else {
         arg.extend_integer_width_to(32);
     }
 }
 
-pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+pub fn compute_abi_info(fty: &mut FnType) {
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(&mut fty.ret);
     }
 
     for arg in &mut fty.args {
         if arg.is_ignore() {
             continue;
         }
-        classify_arg_ty(ccx, arg);
+        classify_arg_ty(arg);
     }
 }
diff --git a/src/librustc_trans/cabi_nvptx64.rs b/src/librustc_trans/cabi_nvptx64.rs
index 24bf492..4d76c15 100644
--- a/src/librustc_trans/cabi_nvptx64.rs
+++ b/src/librustc_trans/cabi_nvptx64.rs
@@ -12,33 +12,32 @@
 // http://docs.nvidia.com/cuda/ptx-writers-guide-to-interoperability
 
 use abi::{ArgType, FnType, LayoutExt};
-use context::CrateContext;
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
-    if ret.layout.is_aggregate() && ret.layout.size(ccx).bits() > 64 {
-        ret.make_indirect(ccx);
+fn classify_ret_ty(ret: &mut ArgType) {
+    if ret.layout.is_aggregate() && ret.layout.size.bits() > 64 {
+        ret.make_indirect();
     } else {
         ret.extend_integer_width_to(64);
     }
 }
 
-fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
-    if arg.layout.is_aggregate() && arg.layout.size(ccx).bits() > 64 {
-        arg.make_indirect(ccx);
+fn classify_arg_ty(arg: &mut ArgType) {
+    if arg.layout.is_aggregate() && arg.layout.size.bits() > 64 {
+        arg.make_indirect();
     } else {
         arg.extend_integer_width_to(64);
     }
 }
 
-pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+pub fn compute_abi_info(fty: &mut FnType) {
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(&mut fty.ret);
     }
 
     for arg in &mut fty.args {
         if arg.is_ignore() {
             continue;
         }
-        classify_arg_ty(ccx, arg);
+        classify_arg_ty(arg);
     }
 }
diff --git a/src/librustc_trans/cabi_powerpc.rs b/src/librustc_trans/cabi_powerpc.rs
index f951ac7..c3c8c74 100644
--- a/src/librustc_trans/cabi_powerpc.rs
+++ b/src/librustc_trans/cabi_powerpc.rs
@@ -8,46 +8,48 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use abi::{align_up_to, FnType, ArgType, LayoutExt, Reg, Uniform};
+use abi::{ArgType, FnType, LayoutExt, Reg, Uniform};
 use context::CrateContext;
 
-use std::cmp;
+use rustc::ty::layout::Size;
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
+fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
+                             ret: &mut ArgType<'tcx>,
+                             offset: &mut Size) {
     if !ret.layout.is_aggregate() {
         ret.extend_integer_width_to(32);
     } else {
-        ret.make_indirect(ccx);
+        ret.make_indirect();
+        *offset += ccx.tcx().data_layout.pointer_size;
     }
 }
 
-fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut u64) {
-    let size = arg.layout.size(ccx);
-    let mut align = arg.layout.align(ccx).abi();
-    align = cmp::min(cmp::max(align, 4), 8);
+fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut Size) {
+    let dl = &ccx.tcx().data_layout;
+    let size = arg.layout.size;
+    let align = arg.layout.align.max(dl.i32_align).min(dl.i64_align);
 
     if arg.layout.is_aggregate() {
-        arg.cast_to(ccx, Uniform {
+        arg.cast_to(Uniform {
             unit: Reg::i32(),
             total: size
         });
-        if ((align - 1) & *offset) > 0 {
-            arg.pad_with(ccx, Reg::i32());
+        if !offset.is_abi_aligned(align) {
+            arg.pad_with(Reg::i32());
         }
     } else {
         arg.extend_integer_width_to(32);
     }
 
-    *offset = align_up_to(*offset, align);
-    *offset += align_up_to(size.bytes(), align);
+    *offset = offset.abi_align(align) + size.abi_align(align);
 }
 
 pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+    let mut offset = Size::from_bytes(0);
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(ccx, &mut fty.ret, &mut offset);
     }
 
-    let mut offset = if fty.ret.is_indirect() { 4 } else { 0 };
     for arg in &mut fty.args {
         if arg.is_ignore() { continue; }
         classify_arg_ty(ccx, arg, &mut offset);
diff --git a/src/librustc_trans/cabi_powerpc64.rs b/src/librustc_trans/cabi_powerpc64.rs
index fb5472e..2206a4f 100644
--- a/src/librustc_trans/cabi_powerpc64.rs
+++ b/src/librustc_trans/cabi_powerpc64.rs
@@ -28,25 +28,23 @@
                                       abi: ABI)
                                      -> Option<Uniform> {
     arg.layout.homogeneous_aggregate(ccx).and_then(|unit| {
-        let size = arg.layout.size(ccx);
-
         // ELFv1 only passes one-member aggregates transparently.
         // ELFv2 passes up to eight uniquely addressable members.
-        if (abi == ELFv1 && size > unit.size)
-                || size > unit.size.checked_mul(8, ccx).unwrap() {
+        if (abi == ELFv1 && arg.layout.size > unit.size)
+                || arg.layout.size > unit.size.checked_mul(8, ccx).unwrap() {
             return None;
         }
 
         let valid_unit = match unit.kind {
             RegKind::Integer => false,
             RegKind::Float => true,
-            RegKind::Vector => size.bits() == 128
+            RegKind::Vector => arg.layout.size.bits() == 128
         };
 
         if valid_unit {
             Some(Uniform {
                 unit,
-                total: size
+                total: arg.layout.size
             })
         } else {
             None
@@ -62,16 +60,16 @@
 
     // The ELFv1 ABI doesn't return aggregates in registers
     if abi == ELFv1 {
-        ret.make_indirect(ccx);
+        ret.make_indirect();
         return;
     }
 
     if let Some(uniform) = is_homogeneous_aggregate(ccx, ret, abi) {
-        ret.cast_to(ccx, uniform);
+        ret.cast_to(uniform);
         return;
     }
 
-    let size = ret.layout.size(ccx);
+    let size = ret.layout.size;
     let bits = size.bits();
     if bits <= 128 {
         let unit = if bits <= 8 {
@@ -84,14 +82,14 @@
             Reg::i64()
         };
 
-        ret.cast_to(ccx, Uniform {
+        ret.cast_to(Uniform {
             unit,
             total: size
         });
         return;
     }
 
-    ret.make_indirect(ccx);
+    ret.make_indirect();
 }
 
 fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>, abi: ABI) {
@@ -101,11 +99,11 @@
     }
 
     if let Some(uniform) = is_homogeneous_aggregate(ccx, arg, abi) {
-        arg.cast_to(ccx, uniform);
+        arg.cast_to(uniform);
         return;
     }
 
-    let size = arg.layout.size(ccx);
+    let size = arg.layout.size;
     let (unit, total) = match abi {
         ELFv1 => {
             // In ELFv1, aggregates smaller than a doubleword should appear in
@@ -124,7 +122,7 @@
         },
     };
 
-    arg.cast_to(ccx, Uniform {
+    arg.cast_to(Uniform {
         unit,
         total
     });
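The homogeneous-aggregate check above encodes the two PowerPC64 ELF ABI variants: ELFv1 passes only one-member aggregates transparently, while ELFv2 allows up to eight uniquely addressable members. A hedged sketch of just that size rule (the enum and function names here are illustrative, not rustc's):

```rust
#[derive(PartialEq)]
enum Abi {
    ELFv1,
    ELFv2,
}

// Can an aggregate of `size` bytes, built from `unit_size`-byte members,
// be passed directly in registers under each ABI's member-count limit?
fn fits_register_limit(abi: Abi, size: u64, unit_size: u64) -> bool {
    match abi {
        // ELFv1 only passes one-member aggregates transparently.
        Abi::ELFv1 => size <= unit_size,
        // ELFv2 passes up to eight uniquely addressable members.
        Abi::ELFv2 => size <= unit_size * 8,
    }
}

fn main() {
    assert!(fits_register_limit(Abi::ELFv2, 32, 8)); // four f64s: ok on ELFv2
    assert!(!fits_register_limit(Abi::ELFv1, 16, 8)); // two members: ELFv1 says no
    assert!(!fits_register_limit(Abi::ELFv2, 72, 8)); // nine members: too many
}
```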
diff --git a/src/librustc_trans/cabi_s390x.rs b/src/librustc_trans/cabi_s390x.rs
index fedebea..9fb4600 100644
--- a/src/librustc_trans/cabi_s390x.rs
+++ b/src/librustc_trans/cabi_s390x.rs
@@ -14,23 +14,27 @@
 use abi::{FnType, ArgType, LayoutExt, Reg};
 use context::CrateContext;
 
-use rustc::ty::layout::{self, Layout, TyLayout};
+use rustc::ty::layout::{self, TyLayout};
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
-    if !ret.layout.is_aggregate() && ret.layout.size(ccx).bits() <= 64 {
+fn classify_ret_ty(ret: &mut ArgType) {
+    if !ret.layout.is_aggregate() && ret.layout.size.bits() <= 64 {
         ret.extend_integer_width_to(64);
     } else {
-        ret.make_indirect(ccx);
+        ret.make_indirect();
     }
 }
 
 fn is_single_fp_element<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
                                   layout: TyLayout<'tcx>) -> bool {
-    match *layout {
-        Layout::Scalar { value: layout::F32, .. } |
-        Layout::Scalar { value: layout::F64, .. } => true,
-        Layout::Univariant { .. } => {
-            if layout.field_count() == 1 {
+    match layout.abi {
+        layout::Abi::Scalar(ref scalar) => {
+            match scalar.value {
+                layout::F32 | layout::F64 => true,
+                _ => false
+            }
+        }
+        layout::Abi::Aggregate { .. } => {
+            if layout.fields.count() == 1 && layout.fields.offset(0).bytes() == 0 {
                 is_single_fp_element(ccx, layout.field(ccx, 0))
             } else {
                 false
@@ -41,32 +45,31 @@
 }
 
 fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
-    let size = arg.layout.size(ccx);
-    if !arg.layout.is_aggregate() && size.bits() <= 64 {
+    if !arg.layout.is_aggregate() && arg.layout.size.bits() <= 64 {
         arg.extend_integer_width_to(64);
         return;
     }
 
     if is_single_fp_element(ccx, arg.layout) {
-        match size.bytes() {
-            4 => arg.cast_to(ccx, Reg::f32()),
-            8 => arg.cast_to(ccx, Reg::f64()),
-            _ => arg.make_indirect(ccx)
+        match arg.layout.size.bytes() {
+            4 => arg.cast_to(Reg::f32()),
+            8 => arg.cast_to(Reg::f64()),
+            _ => arg.make_indirect()
         }
     } else {
-        match size.bytes() {
-            1 => arg.cast_to(ccx, Reg::i8()),
-            2 => arg.cast_to(ccx, Reg::i16()),
-            4 => arg.cast_to(ccx, Reg::i32()),
-            8 => arg.cast_to(ccx, Reg::i64()),
-            _ => arg.make_indirect(ccx)
+        match arg.layout.size.bytes() {
+            1 => arg.cast_to(Reg::i8()),
+            2 => arg.cast_to(Reg::i16()),
+            4 => arg.cast_to(Reg::i32()),
+            8 => arg.cast_to(Reg::i64()),
+            _ => arg.make_indirect()
         }
     }
 }
 
 pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(&mut fty.ret);
     }
 
     for arg in &mut fty.args {
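The rewritten `is_single_fp_element` recurses through single-field aggregates at offset 0, so a newtype wrapper around a float still classifies as "a single FP element". A toy model of that recursion, using a hypothetical `Layout` enum rather than rustc's real layout types:

```rust
// Simplified layout: a scalar float, or an aggregate of (offset, layout) fields.
enum Layout {
    F32,
    F64,
    Aggregate(Vec<(u64, Layout)>),
}

// True when the layout is an f32/f64, possibly wrapped in any number of
// single-field aggregates whose field sits at offset 0.
fn is_single_fp_element(layout: &Layout) -> bool {
    match layout {
        Layout::F32 | Layout::F64 => true,
        Layout::Aggregate(fields) => {
            fields.len() == 1 && fields[0].0 == 0 && is_single_fp_element(&fields[0].1)
        }
    }
}

fn main() {
    // struct Outer(Inner); struct Inner(f64); -- still a single FP element.
    let wrapped = Layout::Aggregate(vec![(0, Layout::Aggregate(vec![(0, Layout::F64)]))]);
    assert!(is_single_fp_element(&wrapped));
    // A two-field struct is not.
    let pair = Layout::Aggregate(vec![(0, Layout::F32), (4, Layout::F32)]);
    assert!(!is_single_fp_element(&pair));
}
```

The extra `layout.fields.offset(0).bytes() == 0` condition in the patch matters: without it, a field placed at a nonzero offset could be misreported as occupying the whole FP register.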
diff --git a/src/librustc_trans/cabi_sparc.rs b/src/librustc_trans/cabi_sparc.rs
index c17901e..fe61670 100644
--- a/src/librustc_trans/cabi_sparc.rs
+++ b/src/librustc_trans/cabi_sparc.rs
@@ -8,45 +8,48 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use std::cmp;
-use abi::{align_up_to, ArgType, FnType, LayoutExt, Reg, Uniform};
+use abi::{ArgType, FnType, LayoutExt, Reg, Uniform};
 use context::CrateContext;
 
-fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ret: &mut ArgType<'tcx>) {
+use rustc::ty::layout::Size;
+
+fn classify_ret_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
+                             ret: &mut ArgType<'tcx>,
+                             offset: &mut Size) {
     if !ret.layout.is_aggregate() {
         ret.extend_integer_width_to(32);
     } else {
-        ret.make_indirect(ccx);
+        ret.make_indirect();
+        *offset += ccx.tcx().data_layout.pointer_size;
     }
 }
 
-fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut u64) {
-    let size = arg.layout.size(ccx);
-    let mut align = arg.layout.align(ccx).abi();
-    align = cmp::min(cmp::max(align, 4), 8);
+fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType, offset: &mut Size) {
+    let dl = &ccx.tcx().data_layout;
+    let size = arg.layout.size;
+    let align = arg.layout.align.max(dl.i32_align).min(dl.i64_align);
 
     if arg.layout.is_aggregate() {
-        arg.cast_to(ccx, Uniform {
+        arg.cast_to(Uniform {
             unit: Reg::i32(),
             total: size
         });
-        if ((align - 1) & *offset) > 0 {
-            arg.pad_with(ccx, Reg::i32());
+        if !offset.is_abi_aligned(align) {
+            arg.pad_with(Reg::i32());
         }
     } else {
-        arg.extend_integer_width_to(32)
+        arg.extend_integer_width_to(32);
     }
 
-    *offset = align_up_to(*offset, align);
-    *offset += align_up_to(size.bytes(), align);
+    *offset = offset.abi_align(align) + size.abi_align(align);
 }
 
 pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
+    let mut offset = Size::from_bytes(0);
     if !fty.ret.is_ignore() {
-        classify_ret_ty(ccx, &mut fty.ret);
+        classify_ret_ty(ccx, &mut fty.ret, &mut offset);
     }
 
-    let mut offset = if fty.ret.is_indirect() { 4 } else { 0 };
     for arg in &mut fty.args {
         if arg.is_ignore() { continue; }
         classify_arg_ty(ccx, arg, &mut offset);
diff --git a/src/librustc_trans/cabi_sparc64.rs b/src/librustc_trans/cabi_sparc64.rs
index 8383007..7c52e27 100644
--- a/src/librustc_trans/cabi_sparc64.rs
+++ b/src/librustc_trans/cabi_sparc64.rs
@@ -16,23 +16,21 @@
 fn is_homogeneous_aggregate<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>)
                                      -> Option<Uniform> {
     arg.layout.homogeneous_aggregate(ccx).and_then(|unit| {
-        let size = arg.layout.size(ccx);
-
         // Ensure we have at most eight uniquely addressable members.
-        if size > unit.size.checked_mul(8, ccx).unwrap() {
+        if arg.layout.size > unit.size.checked_mul(8, ccx).unwrap() {
             return None;
         }
 
         let valid_unit = match unit.kind {
             RegKind::Integer => false,
             RegKind::Float => true,
-            RegKind::Vector => size.bits() == 128
+            RegKind::Vector => arg.layout.size.bits() == 128
         };
 
         if valid_unit {
             Some(Uniform {
                 unit,
-                total: size
+                total: arg.layout.size
             })
         } else {
             None
@@ -47,10 +45,10 @@
     }
 
     if let Some(uniform) = is_homogeneous_aggregate(ccx, ret) {
-        ret.cast_to(ccx, uniform);
+        ret.cast_to(uniform);
         return;
     }
-    let size = ret.layout.size(ccx);
+    let size = ret.layout.size;
     let bits = size.bits();
     if bits <= 128 {
         let unit = if bits <= 8 {
@@ -63,7 +61,7 @@
             Reg::i64()
         };
 
-        ret.cast_to(ccx, Uniform {
+        ret.cast_to(Uniform {
             unit,
             total: size
         });
@@ -71,7 +69,7 @@
     }
 
     // don't return aggregates in registers
-    ret.make_indirect(ccx);
+    ret.make_indirect();
 }
 
 fn classify_arg_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &mut ArgType<'tcx>) {
@@ -81,12 +79,12 @@
     }
 
     if let Some(uniform) = is_homogeneous_aggregate(ccx, arg) {
-        arg.cast_to(ccx, uniform);
+        arg.cast_to(uniform);
         return;
     }
 
-    let total = arg.layout.size(ccx);
-    arg.cast_to(ccx, Uniform {
+    let total = arg.layout.size;
+    arg.cast_to(Uniform {
         unit: Reg::i64(),
         total
     });
diff --git a/src/librustc_trans/cabi_x86.rs b/src/librustc_trans/cabi_x86.rs
index 49634d6..6fd0140 100644
--- a/src/librustc_trans/cabi_x86.rs
+++ b/src/librustc_trans/cabi_x86.rs
@@ -8,10 +8,10 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use abi::{ArgAttribute, FnType, LayoutExt, Reg, RegKind};
+use abi::{ArgAttribute, FnType, LayoutExt, PassMode, Reg, RegKind};
 use common::CrateContext;
 
-use rustc::ty::layout::{self, Layout, TyLayout};
+use rustc::ty::layout::{self, TyLayout};
 
 #[derive(PartialEq)]
 pub enum Flavor {
@@ -21,11 +21,15 @@
 
 fn is_single_fp_element<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
                                   layout: TyLayout<'tcx>) -> bool {
-    match *layout {
-        Layout::Scalar { value: layout::F32, .. } |
-        Layout::Scalar { value: layout::F64, .. } => true,
-        Layout::Univariant { .. } => {
-            if layout.field_count() == 1 {
+    match layout.abi {
+        layout::Abi::Scalar(ref scalar) => {
+            match scalar.value {
+                layout::F32 | layout::F64 => true,
+                _ => false
+            }
+        }
+        layout::Abi::Aggregate { .. } => {
+            if layout.fields.count() == 1 && layout.fields.offset(0).bytes() == 0 {
                 is_single_fp_element(ccx, layout.field(ccx, 0))
             } else {
                 false
@@ -50,27 +54,25 @@
             let t = &ccx.sess().target.target;
             if t.options.is_like_osx || t.options.is_like_windows
                 || t.options.is_like_openbsd {
-                let size = fty.ret.layout.size(ccx);
-
                 // According to Clang, everyone but MSVC returns single-element
                 // float aggregates directly in a floating-point register.
                 if !t.options.is_like_msvc && is_single_fp_element(ccx, fty.ret.layout) {
-                    match size.bytes() {
-                        4 => fty.ret.cast_to(ccx, Reg::f32()),
-                        8 => fty.ret.cast_to(ccx, Reg::f64()),
-                        _ => fty.ret.make_indirect(ccx)
+                    match fty.ret.layout.size.bytes() {
+                        4 => fty.ret.cast_to(Reg::f32()),
+                        8 => fty.ret.cast_to(Reg::f64()),
+                        _ => fty.ret.make_indirect()
                     }
                 } else {
-                    match size.bytes() {
-                        1 => fty.ret.cast_to(ccx, Reg::i8()),
-                        2 => fty.ret.cast_to(ccx, Reg::i16()),
-                        4 => fty.ret.cast_to(ccx, Reg::i32()),
-                        8 => fty.ret.cast_to(ccx, Reg::i64()),
-                        _ => fty.ret.make_indirect(ccx)
+                    match fty.ret.layout.size.bytes() {
+                        1 => fty.ret.cast_to(Reg::i8()),
+                        2 => fty.ret.cast_to(Reg::i16()),
+                        4 => fty.ret.cast_to(Reg::i32()),
+                        8 => fty.ret.cast_to(Reg::i64()),
+                        _ => fty.ret.make_indirect()
                     }
                 }
             } else {
-                fty.ret.make_indirect(ccx);
+                fty.ret.make_indirect();
             }
         } else {
             fty.ret.extend_integer_width_to(32);
@@ -80,8 +82,7 @@
     for arg in &mut fty.args {
         if arg.is_ignore() { continue; }
         if arg.layout.is_aggregate() {
-            arg.make_indirect(ccx);
-            arg.attrs.set(ArgAttribute::ByVal);
+            arg.make_indirect_byval();
         } else {
             arg.extend_integer_width_to(32);
         }
@@ -100,17 +101,24 @@
         let mut free_regs = 2;
 
         for arg in &mut fty.args {
-            if arg.is_ignore() || arg.is_indirect() { continue; }
+            let attrs = match arg.mode {
+                PassMode::Ignore |
+                PassMode::Indirect(_) => continue,
+                PassMode::Direct(ref mut attrs) => attrs,
+                PassMode::Pair(..) |
+                PassMode::Cast(_) => {
+                    bug!("x86 shouldn't be passing arguments by {:?}", arg.mode)
+                }
+            };
 
             // At this point we know this must be a primitive of sorts.
             let unit = arg.layout.homogeneous_aggregate(ccx).unwrap();
-            let size = arg.layout.size(ccx);
-            assert_eq!(unit.size, size);
+            assert_eq!(unit.size, arg.layout.size);
             if unit.kind == RegKind::Float {
                 continue;
             }
 
-            let size_in_regs = (size.bits() + 31) / 32;
+            let size_in_regs = (arg.layout.size.bits() + 31) / 32;
 
             if size_in_regs == 0 {
                 continue;
@@ -122,8 +130,8 @@
 
             free_regs -= size_in_regs;
 
-            if size.bits() <= 32 && unit.kind == RegKind::Integer {
-                arg.attrs.set(ArgAttribute::InReg);
+            if arg.layout.size.bits() <= 32 && unit.kind == RegKind::Integer {
+                attrs.set(ArgAttribute::InReg);
             }
 
             if free_regs == 0 {
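The fastcall loop above hands out two integer registers: each argument consumes `(bits + 31) / 32` register slots, and only integer arguments of at most 32 bits actually receive the `inreg` attribute. A minimal stand-alone model of that allocation (names and the exact break behavior are a simplifying assumption, not a transcription of rustc):

```rust
// Given each argument's size in bits, decide which ones get `inreg`.
// Two integer registers (ecx, edx) are available under fastcall.
fn allocate_inreg(arg_bits: &[u64]) -> Vec<bool> {
    let mut free_regs: u64 = 2;
    let mut inreg = vec![false; arg_bits.len()];
    for (slot, &bits) in inreg.iter_mut().zip(arg_bits) {
        let size_in_regs = (bits + 31) / 32; // 32-bit register slots needed
        if size_in_regs == 0 {
            continue; // zero-sized: consumes nothing
        }
        if size_in_regs > free_regs {
            break; // doesn't fit: this and later args stay on the stack
        }
        free_regs -= size_in_regs;
        if bits <= 32 {
            *slot = true; // small integers go in a register
        }
        if free_regs == 0 {
            break;
        }
    }
    inreg
}

fn main() {
    // Three i32 args: the first two take the registers, the third spills.
    assert_eq!(allocate_inreg(&[32, 32, 32]), vec![true, true, false]);
    // An i64 consumes both registers but, being >32 bits, gets no `inreg`.
    assert_eq!(allocate_inreg(&[64, 32]), vec![false, false]);
}
```

The patch's real change here is structural: instead of mutating `arg.attrs` behind an `is_indirect()` check, it matches on the new `PassMode` enum so only `Direct` arguments can be tagged, and impossible modes trip a `bug!`.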
diff --git a/src/librustc_trans/cabi_x86_64.rs b/src/librustc_trans/cabi_x86_64.rs
index a814f45..81eb362 100644
--- a/src/librustc_trans/cabi_x86_64.rs
+++ b/src/librustc_trans/cabi_x86_64.rs
@@ -11,10 +11,10 @@
 // The classification code for the x86_64 ABI is taken from the clay language
 // https://github.com/jckarter/clay/blob/master/compiler/src/externals.cpp
 
-use abi::{ArgType, ArgAttribute, CastTarget, FnType, LayoutExt, Reg, RegKind};
+use abi::{ArgType, CastTarget, FnType, LayoutExt, Reg, RegKind};
 use context::CrateContext;
 
-use rustc::ty::layout::{self, Layout, TyLayout, Size};
+use rustc::ty::layout::{self, TyLayout, Size};
 
 #[derive(Clone, Copy, PartialEq, Debug)]
 enum Class {
@@ -34,9 +34,9 @@
 fn classify_arg<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg: &ArgType<'tcx>)
                           -> Result<[Class; MAX_EIGHTBYTES], Memory> {
     fn unify(cls: &mut [Class],
-             off: u64,
+             off: Size,
              c: Class) {
-        let i = (off / 8) as usize;
+        let i = (off.bytes() / 8) as usize;
         let to_write = match (cls[i], c) {
             (Class::None, _) => c,
             (_, Class::None) => return,
@@ -55,20 +55,21 @@
     fn classify<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
                           layout: TyLayout<'tcx>,
                           cls: &mut [Class],
-                          off: u64)
+                          off: Size)
                           -> Result<(), Memory> {
-        if off % layout.align(ccx).abi() != 0 {
-            if layout.size(ccx).bytes() > 0 {
+        if !off.is_abi_aligned(layout.align) {
+            if !layout.is_zst() {
                 return Err(Memory);
             }
             return Ok(());
         }
 
-        match *layout {
-            Layout::Scalar { value, .. } |
-            Layout::RawNullablePointer { value, .. } => {
-                let reg = match value {
-                    layout::Int(_) |
+        match layout.abi {
+            layout::Abi::Uninhabited => {}
+
+            layout::Abi::Scalar(ref scalar) => {
+                let reg = match scalar.value {
+                    layout::Int(..) |
                     layout::Pointer => Class::Int,
                     layout::F32 |
                     layout::F64 => Class::Sse
@@ -76,59 +77,43 @@
                 unify(cls, off, reg);
             }
 
-            Layout::CEnum { .. } => {
-                unify(cls, off, Class::Int);
-            }
-
-            Layout::Vector { element, count } => {
+            layout::Abi::Vector => {
                 unify(cls, off, Class::Sse);
 
                 // everything after the first one is the upper
                 // half of a register.
-                let eltsz = element.size(ccx).bytes();
-                for i in 1..count {
-                    unify(cls, off + i * eltsz, Class::SseUp);
+                for i in 1..layout.fields.count() {
+                    let field_off = off + layout.fields.offset(i);
+                    unify(cls, field_off, Class::SseUp);
                 }
             }
 
-            Layout::Array { count, .. } => {
-                if count > 0 {
-                    let elt = layout.field(ccx, 0);
-                    let eltsz = elt.size(ccx).bytes();
-                    for i in 0..count {
-                        classify(ccx, elt, cls, off + i * eltsz)?;
+            layout::Abi::ScalarPair(..) |
+            layout::Abi::Aggregate { .. } => {
+                match layout.variants {
+                    layout::Variants::Single { .. } => {
+                        for i in 0..layout.fields.count() {
+                            let field_off = off + layout.fields.offset(i);
+                            classify(ccx, layout.field(ccx, i), cls, field_off)?;
+                        }
                     }
+                    layout::Variants::Tagged { .. } |
+                    layout::Variants::NicheFilling { .. } => return Err(Memory),
                 }
             }
 
-            Layout::Univariant { ref variant, .. } => {
-                for i in 0..layout.field_count() {
-                    let field_off = off + variant.offsets[i].bytes();
-                    classify(ccx, layout.field(ccx, i), cls, field_off)?;
-                }
-            }
-
-            Layout::UntaggedUnion { .. } => {
-                for i in 0..layout.field_count() {
-                    classify(ccx, layout.field(ccx, i), cls, off)?;
-                }
-            }
-
-            Layout::FatPointer { .. } |
-            Layout::General { .. } |
-            Layout::StructWrappedNullablePointer { .. } => return Err(Memory)
         }
 
         Ok(())
     }
 
-    let n = ((arg.layout.size(ccx).bytes() + 7) / 8) as usize;
+    let n = ((arg.layout.size.bytes() + 7) / 8) as usize;
     if n > MAX_EIGHTBYTES {
         return Err(Memory);
     }
 
     let mut cls = [Class::None; MAX_EIGHTBYTES];
-    classify(ccx, arg.layout, &mut cls, 0)?;
+    classify(ccx, arg.layout, &mut cls, Size::from_bytes(0))?;
     if n > 2 {
         if cls[0] != Class::Sse {
             return Err(Memory);
@@ -153,7 +138,7 @@
     Ok(cls)
 }
 
-fn reg_component(cls: &[Class], i: &mut usize, size: u64) -> Option<Reg> {
+fn reg_component(cls: &[Class], i: &mut usize, size: Size) -> Option<Reg> {
     if *i >= cls.len() {
         return None;
     }
@@ -162,7 +147,7 @@
         Class::None => None,
         Class::Int => {
             *i += 1;
-            Some(match size {
+            Some(match size.bytes() {
                 1 => Reg::i8(),
                 2 => Reg::i16(),
                 3 |
@@ -174,14 +159,14 @@
             let vec_len = 1 + cls[*i+1..].iter().take_while(|&&c| c == Class::SseUp).count();
             *i += vec_len;
             Some(if vec_len == 1 {
-                match size {
+                match size.bytes() {
                     4 => Reg::f32(),
                     _ => Reg::f64()
                 }
             } else {
                 Reg {
                     kind: RegKind::Vector,
-                    size: Size::from_bytes(vec_len as u64 * 8)
+                    size: Size::from_bytes(8) * (vec_len as u64)
                 }
             })
         }
@@ -189,17 +174,17 @@
     }
 }
 
-fn cast_target(cls: &[Class], size: u64) -> CastTarget {
+fn cast_target(cls: &[Class], size: Size) -> CastTarget {
     let mut i = 0;
     let lo = reg_component(cls, &mut i, size).unwrap();
-    let offset = i as u64 * 8;
+    let offset = Size::from_bytes(8) * (i as u64);
     let target = if size <= offset {
         CastTarget::from(lo)
     } else {
         let hi = reg_component(cls, &mut i, size - offset).unwrap();
         CastTarget::Pair(lo, hi)
     };
-    assert_eq!(reg_component(cls, &mut i, 0), None);
+    assert_eq!(reg_component(cls, &mut i, Size::from_bytes(0)), None);
     target
 }
 
@@ -229,11 +214,11 @@
         };
 
         if in_mem {
-            arg.make_indirect(ccx);
             if is_arg {
-                arg.attrs.set(ArgAttribute::ByVal);
+                arg.make_indirect_byval();
             } else {
                 // `sret` parameter thus one less integer register available
+                arg.make_indirect();
                 int_regs -= 1;
             }
         } else {
@@ -242,8 +227,8 @@
             sse_regs -= needed_sse;
 
             if arg.layout.is_aggregate() {
-                let size = arg.layout.size(ccx).bytes();
-                arg.cast_to(ccx, cast_target(cls.as_ref().unwrap(), size))
+                let size = arg.layout.size;
+                arg.cast_to(cast_target(cls.as_ref().unwrap(), size))
             } else {
                 arg.extend_integer_width_to(32);
             }
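The x86_64 classifier above splits an aggregate into 8-byte "eightbytes" and merges the class of every field that lands in each one, with integer classes dominating SSE. A toy version of `unify` and the merge rule (simplified: no `SseUp`/X87 classes, hypothetical names):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Class {
    None,
    Int,
    Sse,
}

// Merge class `c` into the eightbyte covering byte offset `off_bytes`.
// Int absorbs Sse, matching the System V AMD64 merge rules.
fn unify(cls: &mut [Class], off_bytes: u64, c: Class) {
    let i = (off_bytes / 8) as usize;
    cls[i] = match (cls[i], c) {
        (Class::None, c) => c,
        (c, Class::None) => c,
        (Class::Int, _) | (_, Class::Int) => Class::Int,
        _ => Class::Sse,
    };
}

fn main() {
    // struct { f64, i64 }: first eightbyte is Sse, second is Int,
    // so the struct travels as an (xmm, gpr) pair.
    let mut cls = [Class::None; 2];
    unify(&mut cls, 0, Class::Sse);
    unify(&mut cls, 8, Class::Int);
    assert_eq!(cls, [Class::Sse, Class::Int]);

    // struct { f32, i32 } packs both fields into one eightbyte: Int wins.
    let mut one = [Class::None; 1];
    unify(&mut one, 0, Class::Sse);
    unify(&mut one, 4, Class::Int);
    assert_eq!(one, [Class::Int]);
}
```

This is also why the patch threads `Size` through `unify`, `classify`, and `reg_component` instead of raw `u64`s: every offset in the classifier is a byte quantity, and the typed wrapper makes the `/ 8` eightbyte indexing explicit.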
diff --git a/src/librustc_trans/cabi_x86_win64.rs b/src/librustc_trans/cabi_x86_win64.rs
index 39e728d..473c001 100644
--- a/src/librustc_trans/cabi_x86_win64.rs
+++ b/src/librustc_trans/cabi_x86_win64.rs
@@ -8,32 +8,36 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use abi::{ArgType, FnType, LayoutExt, Reg};
-use common::CrateContext;
+use abi::{ArgType, FnType, Reg};
 
-use rustc::ty::layout::Layout;
+use rustc::ty::layout;
 
 // Win64 ABI: http://msdn.microsoft.com/en-us/library/zthk2dkh.aspx
 
-pub fn compute_abi_info<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fty: &mut FnType<'tcx>) {
-    let fixup = |a: &mut ArgType<'tcx>| {
-        let size = a.layout.size(ccx);
-        if a.layout.is_aggregate() {
-            match size.bits() {
-                8 => a.cast_to(ccx, Reg::i8()),
-                16 => a.cast_to(ccx, Reg::i16()),
-                32 => a.cast_to(ccx, Reg::i32()),
-                64 => a.cast_to(ccx, Reg::i64()),
-                _ => a.make_indirect(ccx)
-            };
-        } else {
-            if let Layout::Vector { .. } = *a.layout {
+pub fn compute_abi_info(fty: &mut FnType) {
+    let fixup = |a: &mut ArgType| {
+        match a.layout.abi {
+            layout::Abi::Uninhabited => {}
+            layout::Abi::ScalarPair(..) |
+            layout::Abi::Aggregate { .. } => {
+                match a.layout.size.bits() {
+                    8 => a.cast_to(Reg::i8()),
+                    16 => a.cast_to(Reg::i16()),
+                    32 => a.cast_to(Reg::i32()),
+                    64 => a.cast_to(Reg::i64()),
+                    _ => a.make_indirect()
+                }
+            }
+            layout::Abi::Vector => {
                 // FIXME(eddyb) there should be a size cap here
                 // (probably what clang calls "illegal vectors").
-            } else if size.bytes() > 8 {
-                a.make_indirect(ccx);
-            } else {
-                a.extend_integer_width_to(32);
+            }
+            layout::Abi::Scalar(_) => {
+                if a.layout.size.bytes() > 8 {
+                    a.make_indirect();
+                } else {
+                    a.extend_integer_width_to(32);
+                }
             }
         }
     };
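The Win64 fixup above passes an aggregate directly only when it is exactly 8, 16, 32, or 64 bits wide; every other aggregate goes indirectly through a pointer. That rule is small enough to state as a one-liner (an illustrative sketch, not rustc code):

```rust
// Win64: aggregates travel in a register only at these exact sizes.
fn aggregate_passed_directly(size_bits: u64) -> bool {
    matches!(size_bits, 8 | 16 | 32 | 64)
}

fn main() {
    assert!(aggregate_passed_directly(32)); // e.g. struct { u16, u16 }
    assert!(!aggregate_passed_directly(24)); // 3-byte struct: indirect
    assert!(!aggregate_passed_directly(128)); // too big: indirect
}
```

Note the structural improvement in the patch: the old code branched on `is_aggregate()` and then special-cased vectors with a nested `if let`; the new code matches exhaustively on `layout.abi`, so a future `Abi` variant would be a compile error here instead of silently falling into the scalar path.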
diff --git a/src/librustc_trans/callee.rs b/src/librustc_trans/callee.rs
index b515c94..4afeac2 100644
--- a/src/librustc_trans/callee.rs
+++ b/src/librustc_trans/callee.rs
@@ -20,12 +20,14 @@
 use declare;
 use llvm::{self, ValueRef};
 use monomorphize::Instance;
+use type_of::LayoutLlvmExt;
+
 use rustc::hir::def_id::DefId;
 use rustc::ty::{self, TypeFoldable};
+use rustc::ty::layout::LayoutOf;
 use rustc::traits;
 use rustc::ty::subst::Substs;
 use rustc_back::PanicStrategy;
-use type_of;
 
 /// Translates a reference to a fn/method item, monomorphizing and
 /// inlining as it goes.
@@ -56,7 +58,7 @@
 
     // Create a fn pointer with the substituted signature.
     let fn_ptr_ty = tcx.mk_fn_ptr(common::ty_fn_sig(ccx, fn_ty));
-    let llptrty = type_of::type_of(ccx, fn_ptr_ty);
+    let llptrty = ccx.layout_of(fn_ptr_ty).llvm_type(ccx);
 
     let llfn = if let Some(llfn) = declare::get_declared_value(ccx, &sym) {
         // This is subtle and surprising, but sometimes we have to bitcast
diff --git a/src/librustc_trans/common.rs b/src/librustc_trans/common.rs
index e3856ca..7bd8a0c 100644
--- a/src/librustc_trans/common.rs
+++ b/src/librustc_trans/common.rs
@@ -18,17 +18,17 @@
 use rustc::hir::def_id::DefId;
 use rustc::hir::map::DefPathData;
 use rustc::middle::lang_items::LangItem;
+use abi;
 use base;
 use builder::Builder;
 use consts;
 use declare;
-use machine;
-use monomorphize;
 use type_::Type;
+use type_of::LayoutLlvmExt;
 use value::Value;
 use rustc::traits;
 use rustc::ty::{self, Ty, TyCtxt};
-use rustc::ty::layout::{Layout, LayoutTyper};
+use rustc::ty::layout::{HasDataLayout, LayoutOf};
 use rustc::ty::subst::{Kind, Subst, Substs};
 use rustc::hir;
 
@@ -41,105 +41,6 @@
 
 pub use context::{CrateContext, SharedCrateContext};
 
-pub fn type_is_fat_ptr<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> bool {
-    if let Layout::FatPointer { .. } = *ccx.layout_of(ty) {
-        true
-    } else {
-        false
-    }
-}
-
-pub fn type_is_immediate<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> bool {
-    let layout = ccx.layout_of(ty);
-    match *layout {
-        Layout::CEnum { .. } |
-        Layout::Scalar { .. } |
-        Layout::Vector { .. } => true,
-
-        Layout::FatPointer { .. } => false,
-
-        Layout::Array { .. } |
-        Layout::Univariant { .. } |
-        Layout::General { .. } |
-        Layout::UntaggedUnion { .. } |
-        Layout::RawNullablePointer { .. } |
-        Layout::StructWrappedNullablePointer { .. } => {
-            !layout.is_unsized() && layout.size(ccx).bytes() == 0
-        }
-    }
-}
-
-/// Returns Some([a, b]) if the type has a pair of fields with types a and b.
-pub fn type_pair_fields<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>)
-                                  -> Option<[Ty<'tcx>; 2]> {
-    match ty.sty {
-        ty::TyAdt(adt, substs) => {
-            assert_eq!(adt.variants.len(), 1);
-            let fields = &adt.variants[0].fields;
-            if fields.len() != 2 {
-                return None;
-            }
-            Some([monomorphize::field_ty(ccx.tcx(), substs, &fields[0]),
-                  monomorphize::field_ty(ccx.tcx(), substs, &fields[1])])
-        }
-        ty::TyClosure(def_id, substs) => {
-            let mut tys = substs.upvar_tys(def_id, ccx.tcx());
-            tys.next().and_then(|first_ty| tys.next().and_then(|second_ty| {
-                if tys.next().is_some() {
-                    None
-                } else {
-                    Some([first_ty, second_ty])
-                }
-            }))
-        }
-        ty::TyGenerator(def_id, substs, _) => {
-            let mut tys = substs.field_tys(def_id, ccx.tcx());
-            tys.next().and_then(|first_ty| tys.next().and_then(|second_ty| {
-                if tys.next().is_some() {
-                    None
-                } else {
-                    Some([first_ty, second_ty])
-                }
-            }))
-        }
-        ty::TyTuple(tys, _) => {
-            if tys.len() != 2 {
-                return None;
-            }
-            Some([tys[0], tys[1]])
-        }
-        _ => None
-    }
-}
-
-/// Returns true if the type is represented as a pair of immediates.
-pub fn type_is_imm_pair<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>)
-                                  -> bool {
-    match *ccx.layout_of(ty) {
-        Layout::FatPointer { .. } => true,
-        Layout::Univariant { ref variant, .. } => {
-            // There must be only 2 fields.
-            if variant.offsets.len() != 2 {
-                return false;
-            }
-
-            match type_pair_fields(ccx, ty) {
-                Some([a, b]) => {
-                    type_is_immediate(ccx, a) && type_is_immediate(ccx, b)
-                }
-                None => false
-            }
-        }
-        _ => false
-    }
-}
-
-/// Identify types which have size zero at runtime.
-pub fn type_is_zero_size<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> bool {
-    let layout = ccx.layout_of(ty);
-    !layout.is_unsized() && layout.size(ccx).bytes() == 0
-}
-
 pub fn type_needs_drop<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, ty: Ty<'tcx>) -> bool {
     ty.needs_drop(tcx, ty::ParamEnv::empty(traits::Reveal::All))
 }
@@ -245,17 +146,13 @@
     }
 }
 
-pub fn C_big_integral(t: Type, u: u128) -> ValueRef {
+pub fn C_uint_big(t: Type, u: u128) -> ValueRef {
     unsafe {
-        let words = [u as u64, u.wrapping_shr(64) as u64];
+        let words = [u as u64, (u >> 64) as u64];
         llvm::LLVMConstIntOfArbitraryPrecision(t.to_ref(), 2, words.as_ptr())
     }
 }
 
-pub fn C_nil(ccx: &CrateContext) -> ValueRef {
-    C_struct(ccx, &[], false)
-}
-
 pub fn C_bool(ccx: &CrateContext, val: bool) -> ValueRef {
     C_uint(Type::i1(ccx), val as u64)
 }
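The `C_uint_big` change above splits a `u128` into two little-endian 64-bit words for LLVM's arbitrary-precision constant constructor. The switch from `wrapping_shr(64)` to a plain `>> 64` is safe because shifting a `u128` by 64 is well within its width. The word split in isolation:

```rust
// Decompose a u128 into [low_word, high_word], the layout
// LLVMConstIntOfArbitraryPrecision expects.
fn to_words(u: u128) -> [u64; 2] {
    [u as u64, (u >> 64) as u64]
}

fn main() {
    assert_eq!(to_words(1), [1, 0]);
    assert_eq!(to_words(1u128 << 64), [0, 1]);
    assert_eq!(to_words(u128::MAX), [u64::MAX, u64::MAX]);
}
```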
@@ -273,8 +170,7 @@
 }
 
 pub fn C_usize(ccx: &CrateContext, i: u64) -> ValueRef {
-    let bit_size = machine::llbitsize_of_real(ccx, ccx.isize_ty());
-
+    let bit_size = ccx.data_layout().pointer_size.bits();
     if bit_size < 64 {
         // make sure it doesn't overflow
         assert!(i < (1<<bit_size));
@@ -317,8 +213,15 @@
 // you will be kicked off fast isel. See issue #4352 for an example of this.
 pub fn C_str_slice(cx: &CrateContext, s: InternedString) -> ValueRef {
     let len = s.len();
-    let cs = consts::ptrcast(C_cstr(cx, s, false), Type::i8p(cx));
-    C_named_struct(cx.str_slice_type(), &[cs, C_usize(cx, len as u64)])
+    let cs = consts::ptrcast(C_cstr(cx, s, false),
+        cx.layout_of(cx.tcx().mk_str()).llvm_type(cx).ptr_to());
+    C_fat_ptr(cx, cs, C_usize(cx, len as u64))
+}
+
+pub fn C_fat_ptr(cx: &CrateContext, ptr: ValueRef, meta: ValueRef) -> ValueRef {
+    assert_eq!(abi::FAT_PTR_ADDR, 0);
+    assert_eq!(abi::FAT_PTR_EXTRA, 1);
+    C_struct(cx, &[ptr, meta], false)
 }
 
 pub fn C_struct(cx: &CrateContext, elts: &[ValueRef], packed: bool) -> ValueRef {
@@ -333,12 +236,6 @@
     }
 }
 
-pub fn C_named_struct(t: Type, elts: &[ValueRef]) -> ValueRef {
-    unsafe {
-        llvm::LLVMConstNamedStruct(t.to_ref(), elts.as_ptr(), elts.len() as c_uint)
-    }
-}
-
 pub fn C_array(ty: Type, elts: &[ValueRef]) -> ValueRef {
     unsafe {
         return llvm::LLVMConstArray(ty.to_ref(), elts.as_ptr(), elts.len() as c_uint);
@@ -362,13 +259,14 @@
     }
 }
 
-pub fn const_get_elt(v: ValueRef, us: &[c_uint])
-              -> ValueRef {
+pub fn const_get_elt(v: ValueRef, idx: u64) -> ValueRef {
     unsafe {
+        assert_eq!(idx as c_uint as u64, idx);
+        let us = &[idx as c_uint];
         let r = llvm::LLVMConstExtractValue(v, us.as_ptr(), us.len() as c_uint);
 
-        debug!("const_get_elt(v={:?}, us={:?}, r={:?})",
-               Value(v), us, Value(r));
+        debug!("const_get_elt(v={:?}, idx={}, r={:?})",
+               Value(v), idx, Value(r));
 
         r
     }
@@ -408,19 +306,6 @@
     }
 }
 
-pub fn is_undef(val: ValueRef) -> bool {
-    unsafe {
-        llvm::LLVMIsUndef(val) != False
-    }
-}
-
-#[allow(dead_code)] // potentially useful
-pub fn is_null(val: ValueRef) -> bool {
-    unsafe {
-        llvm::LLVMIsNull(val) != False
-    }
-}
-
 pub fn langcall(tcx: TyCtxt,
                 span: Option<Span>,
                 msg: &str,
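A note on the `C_big_integral` → `C_uint_big` hunk above: LLVM's `LLVMConstIntOfArbitraryPrecision` takes the value as little-endian 64-bit words, so the `u128` is split into `[low, high]`. A plain `u >> 64` is well-defined for `u128` (the shift amount is less than the bit width), so the old `wrapping_shr(64)` added nothing. A standalone sketch of that word split, with `split_u128` as a hypothetical helper name for illustration:

```rust
// Sketch of the word-splitting done in `C_uint_big`: produce the
// little-endian 64-bit words [low, high] that LLVM's
// LLVMConstIntOfArbitraryPrecision expects for a 128-bit constant.
fn split_u128(u: u128) -> [u64; 2] {
    // `u >> 64` is well-defined here: the shift amount (64) is strictly
    // less than the width of u128 (128), so no wrapping variant is needed.
    [u as u64, (u >> 64) as u64]
}

fn main() {
    assert_eq!(split_u128(1), [1, 0]);
    assert_eq!(split_u128(1u128 << 64), [0, 1]);
    assert_eq!(split_u128(u128::MAX), [u64::MAX, u64::MAX]);
    println!("ok");
}
```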
diff --git a/src/librustc_trans/consts.rs b/src/librustc_trans/consts.rs
index 4ae289c..cfca3b5 100644
--- a/src/librustc_trans/consts.rs
+++ b/src/librustc_trans/consts.rs
@@ -14,19 +14,19 @@
 use rustc::hir::def_id::DefId;
 use rustc::hir::map as hir_map;
 use rustc::middle::const_val::ConstEvalErr;
-use {debuginfo, machine};
+use debuginfo;
 use base;
 use trans_item::{TransItem, TransItemExt};
 use common::{self, CrateContext, val_ty};
 use declare;
 use monomorphize::Instance;
 use type_::Type;
-use type_of;
+use type_of::LayoutLlvmExt;
 use rustc::ty;
+use rustc::ty::layout::{Align, LayoutOf};
 
 use rustc::hir;
 
-use std::cmp;
 use std::ffi::{CStr, CString};
 use syntax::ast;
 use syntax::attr;
@@ -45,26 +45,26 @@
 
 fn set_global_alignment(ccx: &CrateContext,
                         gv: ValueRef,
-                        mut align: machine::llalign) {
+                        mut align: Align) {
     // The target may require greater alignment for globals than the type does.
     // Note: GCC and Clang also allow `__attribute__((aligned))` on variables,
     // which can force it to be smaller.  Rust doesn't support this yet.
     if let Some(min) = ccx.sess().target.target.options.min_global_align {
         match ty::layout::Align::from_bits(min, min) {
-            Ok(min) => align = cmp::max(align, min.abi() as machine::llalign),
+            Ok(min) => align = align.max(min),
             Err(err) => {
                 ccx.sess().err(&format!("invalid minimum global alignment: {}", err));
             }
         }
     }
     unsafe {
-        llvm::LLVMSetAlignment(gv, align);
+        llvm::LLVMSetAlignment(gv, align.abi() as u32);
     }
 }
 
 pub fn addr_of_mut(ccx: &CrateContext,
                    cv: ValueRef,
-                   align: machine::llalign,
+                   align: Align,
                    kind: &str)
                     -> ValueRef {
     unsafe {
@@ -82,15 +82,16 @@
 
 pub fn addr_of(ccx: &CrateContext,
                cv: ValueRef,
-               align: machine::llalign,
+               align: Align,
                kind: &str)
                -> ValueRef {
     if let Some(&gv) = ccx.const_globals().borrow().get(&cv) {
         unsafe {
             // Upgrade the alignment in cases where the same constant is used with different
             // alignment requirements
-            if align > llvm::LLVMGetAlignment(gv) {
-                llvm::LLVMSetAlignment(gv, align);
+            let llalign = align.abi() as u32;
+            if llalign > llvm::LLVMGetAlignment(gv) {
+                llvm::LLVMSetAlignment(gv, llalign);
             }
         }
         return gv;
@@ -112,7 +113,7 @@
     let ty = common::instance_ty(ccx.tcx(), &instance);
     let g = if let Some(id) = ccx.tcx().hir.as_local_node_id(def_id) {
 
-        let llty = type_of::type_of(ccx, ty);
+        let llty = ccx.layout_of(ty).llvm_type(ccx);
         let (g, attrs) = match ccx.tcx().hir.get(id) {
             hir_map::NodeItem(&hir::Item {
                 ref attrs, span, node: hir::ItemStatic(..), ..
@@ -157,7 +158,7 @@
                         }
                     };
                     let llty2 = match ty.sty {
-                        ty::TyRawPtr(ref mt) => type_of::type_of(ccx, mt.ty),
+                        ty::TyRawPtr(ref mt) => ccx.layout_of(mt.ty).llvm_type(ccx),
                         _ => {
                             ccx.sess().span_fatal(span, "must have type `*const T` or `*mut T`");
                         }
@@ -206,7 +207,7 @@
 
         // FIXME(nagisa): perhaps the map of externs could be offloaded to llvm somehow?
         // FIXME(nagisa): investigate whether it can be changed into define_global
-        let g = declare::declare_global(ccx, &sym, type_of::type_of(ccx, ty));
+        let g = declare::declare_global(ccx, &sym, ccx.layout_of(ty).llvm_type(ccx));
         // Thread-local statics in some other crate need to *always* be linked
         // against in a thread-local fashion, so we need to be sure to apply the
         // thread-local attribute locally if it was present remotely. If we
@@ -266,7 +267,7 @@
 
         let instance = Instance::mono(ccx.tcx(), def_id);
         let ty = common::instance_ty(ccx.tcx(), &instance);
-        let llty = type_of::type_of(ccx, ty);
+        let llty = ccx.layout_of(ty).llvm_type(ccx);
         let g = if val_llty == llty {
             g
         } else {
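The consts.rs hunks above replace the raw `machine::llalign` integer with the typed `ty::layout::Align`, so the "respect the target's minimum global alignment" rule becomes an ordinary `max`. A minimal sketch of that rule, reduced to byte counts; `effective_align` is a hypothetical name, not rustc's API:

```rust
// Sketch of the clamp performed in `set_global_alignment`: the global's
// alignment is the type's alignment, raised to the target's
// min_global_align when one is configured (e.g. on s390x).
fn effective_align(type_align: u64, min_global_align: Option<u64>) -> u64 {
    match min_global_align {
        Some(min) => type_align.max(min),
        None => type_align,
    }
}

fn main() {
    assert_eq!(effective_align(8, Some(16)), 16); // raised to the target minimum
    assert_eq!(effective_align(32, Some(16)), 32); // already large enough
    assert_eq!(effective_align(8, None), 8); // no target minimum configured
}
```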
diff --git a/src/librustc_trans/context.rs b/src/librustc_trans/context.rs
index cb71ef1..b2bb605 100644
--- a/src/librustc_trans/context.rs
+++ b/src/librustc_trans/context.rs
@@ -24,12 +24,14 @@
 
 use partitioning::CodegenUnit;
 use type_::Type;
+use type_of::PointeeInfo;
+
 use rustc_data_structures::base_n;
 use rustc::middle::trans::Stats;
 use rustc_data_structures::stable_hasher::StableHashingContextProvider;
 use rustc::session::config::{self, NoDebugInfo};
 use rustc::session::Session;
-use rustc::ty::layout::{LayoutCx, LayoutError, LayoutTyper, TyLayout};
+use rustc::ty::layout::{LayoutError, LayoutOf, Size, TyLayout};
 use rustc::ty::{self, Ty, TyCtxt};
 use rustc::util::nodemap::FxHashMap;
 use rustc_trans_utils;
@@ -99,10 +101,10 @@
     /// See http://llvm.org/docs/LangRef.html#the-llvm-used-global-variable for details
     used_statics: RefCell<Vec<ValueRef>>,
 
-    lltypes: RefCell<FxHashMap<Ty<'tcx>, Type>>,
+    lltypes: RefCell<FxHashMap<(Ty<'tcx>, Option<usize>), Type>>,
+    scalar_lltypes: RefCell<FxHashMap<Ty<'tcx>, Type>>,
+    pointee_infos: RefCell<FxHashMap<(Ty<'tcx>, Size), Option<PointeeInfo>>>,
     isize_ty: Type,
-    opaque_vec_type: Type,
-    str_slice_type: Type,
 
     dbg_cx: Option<debuginfo::CrateDebugContext<'tcx>>,
 
@@ -377,9 +379,9 @@
                 statics_to_rauw: RefCell::new(Vec::new()),
                 used_statics: RefCell::new(Vec::new()),
                 lltypes: RefCell::new(FxHashMap()),
+                scalar_lltypes: RefCell::new(FxHashMap()),
+                pointee_infos: RefCell::new(FxHashMap()),
                 isize_ty: Type::from_ref(ptr::null_mut()),
-                opaque_vec_type: Type::from_ref(ptr::null_mut()),
-                str_slice_type: Type::from_ref(ptr::null_mut()),
                 dbg_cx,
                 eh_personality: Cell::new(None),
                 eh_unwind_resume: Cell::new(None),
@@ -389,25 +391,19 @@
                 placeholder: PhantomData,
             };
 
-            let (isize_ty, opaque_vec_type, str_slice_ty, mut local_ccx) = {
+            let (isize_ty, mut local_ccx) = {
                 // Do a little dance to create a dummy CrateContext, so we can
                 // create some things in the LLVM module of this codegen unit
                 let mut local_ccxs = vec![local_ccx];
-                let (isize_ty, opaque_vec_type, str_slice_ty) = {
+                let isize_ty = {
                     let dummy_ccx = LocalCrateContext::dummy_ccx(shared,
                                                                  local_ccxs.as_mut_slice());
-                    let mut str_slice_ty = Type::named_struct(&dummy_ccx, "str_slice");
-                    str_slice_ty.set_struct_body(&[Type::i8p(&dummy_ccx),
-                                                   Type::isize(&dummy_ccx)],
-                                                 false);
-                    (Type::isize(&dummy_ccx), Type::opaque_vec(&dummy_ccx), str_slice_ty)
+                    Type::isize(&dummy_ccx)
                 };
-                (isize_ty, opaque_vec_type, str_slice_ty, local_ccxs.pop().unwrap())
+                (isize_ty, local_ccxs.pop().unwrap())
             };
 
             local_ccx.isize_ty = isize_ty;
-            local_ccx.opaque_vec_type = opaque_vec_type;
-            local_ccx.str_slice_type = str_slice_ty;
 
             local_ccx
         }
@@ -512,10 +508,19 @@
         &self.local().used_statics
     }
 
-    pub fn lltypes<'a>(&'a self) -> &'a RefCell<FxHashMap<Ty<'tcx>, Type>> {
+    pub fn lltypes<'a>(&'a self) -> &'a RefCell<FxHashMap<(Ty<'tcx>, Option<usize>), Type>> {
         &self.local().lltypes
     }
 
+    pub fn scalar_lltypes<'a>(&'a self) -> &'a RefCell<FxHashMap<Ty<'tcx>, Type>> {
+        &self.local().scalar_lltypes
+    }
+
+    pub fn pointee_infos<'a>(&'a self)
+                             -> &'a RefCell<FxHashMap<(Ty<'tcx>, Size), Option<PointeeInfo>>> {
+        &self.local().pointee_infos
+    }
+
     pub fn stats<'a>(&'a self) -> &'a RefCell<Stats> {
         &self.local().stats
     }
@@ -524,10 +529,6 @@
         self.local().isize_ty
     }
 
-    pub fn str_slice_type(&self) -> Type {
-        self.local().str_slice_type
-    }
-
     pub fn dbg_cx<'a>(&'a self) -> &'a Option<debuginfo::CrateDebugContext<'tcx>> {
         &self.local().dbg_cx
     }
@@ -647,48 +648,44 @@
     }
 }
 
+impl<'a, 'tcx> ty::layout::HasTyCtxt<'tcx> for &'a SharedCrateContext<'a, 'tcx> {
+    fn tcx<'b>(&'b self) -> TyCtxt<'b, 'tcx, 'tcx> {
+        self.tcx
+    }
+}
+
 impl<'a, 'tcx> ty::layout::HasDataLayout for &'a CrateContext<'a, 'tcx> {
     fn data_layout(&self) -> &ty::layout::TargetDataLayout {
         &self.shared.tcx.data_layout
     }
 }
 
-impl<'a, 'tcx> LayoutTyper<'tcx> for &'a SharedCrateContext<'a, 'tcx> {
+impl<'a, 'tcx> ty::layout::HasTyCtxt<'tcx> for &'a CrateContext<'a, 'tcx> {
+    fn tcx<'b>(&'b self) -> TyCtxt<'b, 'tcx, 'tcx> {
+        self.shared.tcx
+    }
+}
+
+impl<'a, 'tcx> LayoutOf<Ty<'tcx>> for &'a SharedCrateContext<'a, 'tcx> {
     type TyLayout = TyLayout<'tcx>;
 
-    fn tcx<'b>(&'b self) -> TyCtxt<'b, 'tcx, 'tcx> {
-        self.tcx
-    }
-
     fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
-        let param_env = ty::ParamEnv::empty(traits::Reveal::All);
-        LayoutCx::new(self.tcx, param_env)
+        (self.tcx, ty::ParamEnv::empty(traits::Reveal::All))
             .layout_of(ty)
             .unwrap_or_else(|e| match e {
                 LayoutError::SizeOverflow(_) => self.sess().fatal(&e.to_string()),
                 _ => bug!("failed to get layout for `{}`: {}", ty, e)
             })
     }
-
-    fn normalize_projections(self, ty: Ty<'tcx>) -> Ty<'tcx> {
-        self.tcx().fully_normalize_associated_types_in(&ty)
-    }
 }
 
-impl<'a, 'tcx> LayoutTyper<'tcx> for &'a CrateContext<'a, 'tcx> {
+impl<'a, 'tcx> LayoutOf<Ty<'tcx>> for &'a CrateContext<'a, 'tcx> {
     type TyLayout = TyLayout<'tcx>;
 
-    fn tcx<'b>(&'b self) -> TyCtxt<'b, 'tcx, 'tcx> {
-        self.shared.tcx
-    }
 
     fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
         self.shared.layout_of(ty)
     }
-
-    fn normalize_projections(self, ty: Ty<'tcx>) -> Ty<'tcx> {
-        self.shared.normalize_projections(ty)
-    }
 }
 
 /// Declare any llvm intrinsics that you might need
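The context.rs hunks above split the old all-in-one `LayoutTyper` trait into two capabilities: `HasTyCtxt` (access to the type context) and `LayoutOf` (computing a layout, with the layout type as an associated type). A toy, self-contained sketch of that trait split — the names mirror rustc's but the bodies are illustrative stand-ins, not the real definitions:

```rust
// Illustrative split: one trait for context access, one for layout
// computation, so a context type implements each independently.
trait HasCtxt {
    fn ctxt(&self) -> &'static str;
}

trait LayoutOf<T> {
    type TyLayout;
    fn layout_of(&self, ty: T) -> Self::TyLayout;
}

struct Ccx;

impl HasCtxt for Ccx {
    fn ctxt(&self) -> &'static str {
        "tcx"
    }
}

impl LayoutOf<&'static str> for Ccx {
    // Stand-in "layout": just the length of the type name.
    type TyLayout = usize;
    fn layout_of(&self, ty: &'static str) -> usize {
        ty.len()
    }
}

fn main() {
    let ccx = Ccx;
    assert_eq!(ccx.ctxt(), "tcx");
    assert_eq!(ccx.layout_of("u64"), 3);
}
```

The design point: in the old `LayoutTyper`, any implementer had to supply `tcx`, `layout_of`, and `normalize_projections` together; after the split, `CrateContext` simply forwards `layout_of` to `SharedCrateContext` while each context provides its own `tcx` accessor.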
diff --git a/src/librustc_trans/debuginfo/metadata.rs b/src/librustc_trans/debuginfo/metadata.rs
index 4f07af9..b2ad538 100644
--- a/src/librustc_trans/debuginfo/metadata.rs
+++ b/src/librustc_trans/debuginfo/metadata.rs
@@ -9,15 +9,15 @@
 // except according to those terms.
 
 use self::RecursiveTypeDescription::*;
-use self::MemberOffset::*;
 use self::MemberDescriptionFactory::*;
 use self::EnumDiscriminantInfo::*;
 
-use super::utils::{debug_context, DIB, span_start, bytes_to_bits, size_and_align_of,
+use super::utils::{debug_context, DIB, span_start,
                    get_namespace_for_item, create_DIArray, is_node_local_to_unit};
 use super::namespace::mangled_name_of_item;
 use super::type_names::compute_debuginfo_type_name;
 use super::{CrateDebugContext};
+use abi;
 use context::SharedCrateContext;
 
 use llvm::{self, ValueRef};
@@ -29,19 +29,17 @@
 use rustc::ty::fold::TypeVisitor;
 use rustc::ty::subst::Substs;
 use rustc::ty::util::TypeIdHasher;
-use rustc::hir;
 use rustc::ich::Fingerprint;
-use {type_of, machine, monomorphize};
 use common::{self, CrateContext};
-use type_::Type;
 use rustc::ty::{self, AdtKind, Ty};
-use rustc::ty::layout::{self, LayoutTyper};
+use rustc::ty::layout::{self, Align, LayoutOf, Size, TyLayout};
 use rustc::session::{Session, config};
 use rustc::util::nodemap::FxHashMap;
 use rustc::util::common::path2cstr;
 
 use libc::{c_uint, c_longlong};
 use std::ffi::CString;
+use std::fmt::Write;
 use std::ptr;
 use std::path::Path;
 use syntax::ast;
@@ -183,7 +181,6 @@
         unfinished_type: Ty<'tcx>,
         unique_type_id: UniqueTypeId,
         metadata_stub: DICompositeType,
-        llvm_type: Type,
         member_description_factory: MemberDescriptionFactory<'tcx>,
     },
     FinalMetadata(DICompositeType)
@@ -194,7 +191,6 @@
     unfinished_type: Ty<'tcx>,
     unique_type_id: UniqueTypeId,
     metadata_stub: DICompositeType,
-    llvm_type: Type,
     member_description_factory: MemberDescriptionFactory<'tcx>)
  -> RecursiveTypeDescription<'tcx> {
 
@@ -207,7 +203,6 @@
         unfinished_type,
         unique_type_id,
         metadata_stub,
-        llvm_type,
         member_description_factory,
     }
 }
@@ -223,9 +218,7 @@
                 unfinished_type,
                 unique_type_id,
                 metadata_stub,
-                llvm_type,
                 ref member_description_factory,
-                ..
             } => {
                 // Make sure that we have a forward declaration of the type in
                 // the TypeMap so that recursive references are possible. This
@@ -250,7 +243,6 @@
                 // ... and attach them to the stub to complete it.
                 set_members_of_composite_type(cx,
                                               metadata_stub,
-                                              llvm_type,
                                               &member_descriptions[..]);
                 return MetadataCreationResult::new(metadata_stub, true);
             }
@@ -273,20 +265,21 @@
 
 fn fixed_vec_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
                                 unique_type_id: UniqueTypeId,
+                                array_or_slice_type: Ty<'tcx>,
                                 element_type: Ty<'tcx>,
-                                len: Option<u64>,
                                 span: Span)
                                 -> MetadataCreationResult {
     let element_type_metadata = type_metadata(cx, element_type, span);
 
     return_if_metadata_created_in_meantime!(cx, unique_type_id);
 
-    let element_llvm_type = type_of::type_of(cx, element_type);
-    let (element_type_size, element_type_align) = size_and_align_of(cx, element_llvm_type);
+    let (size, align) = cx.size_and_align_of(array_or_slice_type);
 
-    let (array_size_in_bytes, upper_bound) = match len {
-        Some(len) => (element_type_size * len, len as c_longlong),
-        None => (0, -1)
+    let upper_bound = match array_or_slice_type.sty {
+        ty::TyArray(_, len) => {
+            len.val.to_const_int().unwrap().to_u64().unwrap() as c_longlong
+        }
+        _ => -1
     };
 
     let subrange = unsafe {
@@ -297,8 +290,8 @@
     let metadata = unsafe {
         llvm::LLVMRustDIBuilderCreateArrayType(
             DIB(cx),
-            bytes_to_bits(array_size_in_bytes),
-            bytes_to_bits(element_type_align),
+            size.bits(),
+            align.abi_bits() as u32,
             element_type_metadata,
             subscripts)
     };
@@ -307,66 +300,52 @@
 }
 
 fn vec_slice_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
-                                vec_type: Ty<'tcx>,
+                                slice_ptr_type: Ty<'tcx>,
                                 element_type: Ty<'tcx>,
                                 unique_type_id: UniqueTypeId,
                                 span: Span)
                                 -> MetadataCreationResult {
-    let data_ptr_type = cx.tcx().mk_ptr(ty::TypeAndMut {
-        ty: element_type,
-        mutbl: hir::MutImmutable
-    });
+    let data_ptr_type = cx.tcx().mk_imm_ptr(element_type);
 
-    let element_type_metadata = type_metadata(cx, data_ptr_type, span);
+    let data_ptr_metadata = type_metadata(cx, data_ptr_type, span);
 
     return_if_metadata_created_in_meantime!(cx, unique_type_id);
 
-    let slice_llvm_type = type_of::type_of(cx, vec_type);
-    let slice_type_name = compute_debuginfo_type_name(cx, vec_type, true);
+    let slice_type_name = compute_debuginfo_type_name(cx, slice_ptr_type, true);
 
-    let member_llvm_types = slice_llvm_type.field_types();
-    assert!(slice_layout_is_correct(cx,
-                                    &member_llvm_types[..],
-                                    element_type));
+    let (pointer_size, pointer_align) = cx.size_and_align_of(data_ptr_type);
+    let (usize_size, usize_align) = cx.size_and_align_of(cx.tcx().types.usize);
+
     let member_descriptions = [
         MemberDescription {
             name: "data_ptr".to_string(),
-            llvm_type: member_llvm_types[0],
-            type_metadata: element_type_metadata,
-            offset: ComputedMemberOffset,
+            type_metadata: data_ptr_metadata,
+            offset: Size::from_bytes(0),
+            size: pointer_size,
+            align: pointer_align,
             flags: DIFlags::FlagZero,
         },
         MemberDescription {
             name: "length".to_string(),
-            llvm_type: member_llvm_types[1],
             type_metadata: type_metadata(cx, cx.tcx().types.usize, span),
-            offset: ComputedMemberOffset,
+            offset: pointer_size,
+            size: usize_size,
+            align: usize_align,
             flags: DIFlags::FlagZero,
         },
     ];
 
-    assert!(member_descriptions.len() == member_llvm_types.len());
-
     let file_metadata = unknown_file_metadata(cx);
 
     let metadata = composite_type_metadata(cx,
-                                           slice_llvm_type,
+                                           slice_ptr_type,
                                            &slice_type_name[..],
                                            unique_type_id,
                                            &member_descriptions,
                                            NO_SCOPE_METADATA,
                                            file_metadata,
                                            span);
-    return MetadataCreationResult::new(metadata, false);
-
-    fn slice_layout_is_correct<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
-                                         member_llvm_types: &[Type],
-                                         element_type: Ty<'tcx>)
-                                         -> bool {
-        member_llvm_types.len() == 2 &&
-        member_llvm_types[0] == type_of::type_of(cx, element_type).ptr_to() &&
-        member_llvm_types[1] == cx.isize_ty()
-    }
+    MetadataCreationResult::new(metadata, false)
 }
 
 fn subroutine_type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
@@ -435,14 +414,41 @@
     let trait_type_name =
         compute_debuginfo_type_name(cx, trait_object_type, false);
 
-    let trait_llvm_type = type_of::type_of(cx, trait_object_type);
     let file_metadata = unknown_file_metadata(cx);
 
+    let layout = cx.layout_of(cx.tcx().mk_mut_ptr(trait_type));
+
+    assert_eq!(abi::FAT_PTR_ADDR, 0);
+    assert_eq!(abi::FAT_PTR_EXTRA, 1);
+
+    let data_ptr_field = layout.field(cx, 0);
+    let vtable_field = layout.field(cx, 1);
+    let member_descriptions = [
+        MemberDescription {
+            name: "pointer".to_string(),
+            type_metadata: type_metadata(cx,
+                cx.tcx().mk_mut_ptr(cx.tcx().types.u8),
+                syntax_pos::DUMMY_SP),
+            offset: layout.fields.offset(0),
+            size: data_ptr_field.size,
+            align: data_ptr_field.align,
+            flags: DIFlags::FlagArtificial,
+        },
+        MemberDescription {
+            name: "vtable".to_string(),
+            type_metadata: type_metadata(cx, vtable_field.ty, syntax_pos::DUMMY_SP),
+            offset: layout.fields.offset(1),
+            size: vtable_field.size,
+            align: vtable_field.align,
+            flags: DIFlags::FlagArtificial,
+        },
+    ];
+
     composite_type_metadata(cx,
-                            trait_llvm_type,
+                            trait_object_type,
                             &trait_type_name[..],
                             unique_type_id,
-                            &[],
+                            &member_descriptions,
                             containing_scope,
                             file_metadata,
                             syntax_pos::DUMMY_SP)
@@ -528,15 +534,12 @@
         ty::TyTuple(ref elements, _) if elements.is_empty() => {
             MetadataCreationResult::new(basic_type_metadata(cx, t), false)
         }
-        ty::TyArray(typ, len) => {
-            let len = len.val.to_const_int().unwrap().to_u64().unwrap();
-            fixed_vec_metadata(cx, unique_type_id, typ, Some(len), usage_site_span)
-        }
+        ty::TyArray(typ, _) |
         ty::TySlice(typ) => {
-            fixed_vec_metadata(cx, unique_type_id, typ, None, usage_site_span)
+            fixed_vec_metadata(cx, unique_type_id, t, typ, usage_site_span)
         }
         ty::TyStr => {
-            fixed_vec_metadata(cx, unique_type_id, cx.tcx().types.i8, None, usage_site_span)
+            fixed_vec_metadata(cx, unique_type_id, t, cx.tcx().types.i8, usage_site_span)
         }
         ty::TyDynamic(..) => {
             MetadataCreationResult::new(
@@ -742,15 +745,14 @@
         _ => bug!("debuginfo::basic_type_metadata - t is invalid type")
     };
 
-    let llvm_type = type_of::type_of(cx, t);
-    let (size, align) = size_and_align_of(cx, llvm_type);
+    let (size, align) = cx.size_and_align_of(t);
     let name = CString::new(name).unwrap();
     let ty_metadata = unsafe {
         llvm::LLVMRustDIBuilderCreateBasicType(
             DIB(cx),
             name.as_ptr(),
-            bytes_to_bits(size),
-            bytes_to_bits(align),
+            size.bits(),
+            align.abi_bits() as u32,
             encoding)
     };
 
@@ -762,29 +764,25 @@
                                    unique_type_id: UniqueTypeId) -> DIType {
     debug!("foreign_type_metadata: {:?}", t);
 
-    let llvm_type = type_of::type_of(cx, t);
-
     let name = compute_debuginfo_type_name(cx, t, false);
-    create_struct_stub(cx, llvm_type, &name, unique_type_id, NO_SCOPE_METADATA)
+    create_struct_stub(cx, t, &name, unique_type_id, NO_SCOPE_METADATA)
 }
 
 fn pointer_type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
                                    pointer_type: Ty<'tcx>,
                                    pointee_type_metadata: DIType)
                                    -> DIType {
-    let pointer_llvm_type = type_of::type_of(cx, pointer_type);
-    let (pointer_size, pointer_align) = size_and_align_of(cx, pointer_llvm_type);
+    let (pointer_size, pointer_align) = cx.size_and_align_of(pointer_type);
     let name = compute_debuginfo_type_name(cx, pointer_type, false);
     let name = CString::new(name).unwrap();
-    let ptr_metadata = unsafe {
+    unsafe {
         llvm::LLVMRustDIBuilderCreatePointerType(
             DIB(cx),
             pointee_type_metadata,
-            bytes_to_bits(pointer_size),
-            bytes_to_bits(pointer_align),
+            pointer_size.bits(),
+            pointer_align.abi_bits() as u32,
             name.as_ptr())
-    };
-    return ptr_metadata;
+    }
 }
 
 pub fn compile_unit_metadata(scc: &SharedCrateContext,
@@ -879,21 +877,15 @@
     }
 }
 
-#[derive(Debug)]
-enum MemberOffset {
-    FixedMemberOffset { bytes: usize },
-    // For ComputedMemberOffset, the offset is read from the llvm type definition.
-    ComputedMemberOffset
-}
-
 // Description of a type member, which can either be a regular field (as in
 // structs or tuples) or an enum variant.
 #[derive(Debug)]
 struct MemberDescription {
     name: String,
-    llvm_type: Type,
     type_metadata: DIType,
-    offset: MemberOffset,
+    offset: Size,
+    size: Size,
+    align: Align,
     flags: DIFlags,
 }
 
@@ -940,7 +932,6 @@
 struct StructMemberDescriptionFactory<'tcx> {
     ty: Ty<'tcx>,
     variant: &'tcx ty::VariantDef,
-    substs: &'tcx Substs<'tcx>,
     span: Span,
 }
 
@@ -948,35 +939,20 @@
     fn create_member_descriptions<'a>(&self, cx: &CrateContext<'a, 'tcx>)
                                       -> Vec<MemberDescription> {
         let layout = cx.layout_of(self.ty);
-
-        let tmp;
-        let offsets = match *layout {
-            layout::Univariant { ref variant, .. } => &variant.offsets,
-            layout::Vector { element, count } => {
-                let element_size = element.size(cx).bytes();
-                tmp = (0..count).
-                  map(|i| layout::Size::from_bytes(i*element_size))
-                  .collect::<Vec<layout::Size>>();
-                &tmp
-            }
-            _ => bug!("{} is not a struct", self.ty)
-        };
-
         self.variant.fields.iter().enumerate().map(|(i, f)| {
             let name = if self.variant.ctor_kind == CtorKind::Fn {
                 format!("__{}", i)
             } else {
                 f.name.to_string()
             };
-            let fty = monomorphize::field_ty(cx.tcx(), self.substs, f);
-
-            let offset = FixedMemberOffset { bytes: offsets[i].bytes() as usize};
-
+            let field = layout.field(cx, i);
+            let (size, align) = field.size_and_align();
             MemberDescription {
                 name,
-                llvm_type: type_of::in_memory_type_of(cx, fty),
-                type_metadata: type_metadata(cx, fty, self.span),
-                offset,
+                type_metadata: type_metadata(cx, field.ty, self.span),
+                offset: layout.fields.offset(i),
+                size,
+                align,
                 flags: DIFlags::FlagZero,
             }
         }).collect()
@@ -990,17 +966,16 @@
                                      span: Span)
                                      -> RecursiveTypeDescription<'tcx> {
     let struct_name = compute_debuginfo_type_name(cx, struct_type, false);
-    let struct_llvm_type = type_of::in_memory_type_of(cx, struct_type);
 
-    let (struct_def_id, variant, substs) = match struct_type.sty {
-        ty::TyAdt(def, substs) => (def.did, def.struct_variant(), substs),
+    let (struct_def_id, variant) = match struct_type.sty {
+        ty::TyAdt(def, _) => (def.did, def.struct_variant()),
         _ => bug!("prepare_struct_metadata on a non-ADT")
     };
 
     let containing_scope = get_namespace_for_item(cx, struct_def_id);
 
     let struct_metadata_stub = create_struct_stub(cx,
-                                                  struct_llvm_type,
+                                                  struct_type,
                                                   &struct_name,
                                                   unique_type_id,
                                                   containing_scope);
@@ -1010,11 +985,9 @@
         struct_type,
         unique_type_id,
         struct_metadata_stub,
-        struct_llvm_type,
         StructMDF(StructMemberDescriptionFactory {
             ty: struct_type,
             variant,
-            substs,
             span,
         })
     )
@@ -1035,21 +1008,14 @@
     fn create_member_descriptions<'a>(&self, cx: &CrateContext<'a, 'tcx>)
                                       -> Vec<MemberDescription> {
         let layout = cx.layout_of(self.ty);
-        let offsets = if let layout::Univariant { ref variant, .. } = *layout {
-            &variant.offsets
-        } else {
-            bug!("{} is not a tuple", self.ty);
-        };
-
-        self.component_types
-            .iter()
-            .enumerate()
-            .map(|(i, &component_type)| {
+        self.component_types.iter().enumerate().map(|(i, &component_type)| {
+            let (size, align) = cx.size_and_align_of(component_type);
             MemberDescription {
                 name: format!("__{}", i),
-                llvm_type: type_of::type_of(cx, component_type),
                 type_metadata: type_metadata(cx, component_type, self.span),
-                offset: FixedMemberOffset { bytes: offsets[i].bytes() as usize },
+                offset: layout.fields.offset(i),
+                size,
+                align,
                 flags: DIFlags::FlagZero,
             }
         }).collect()
@@ -1063,18 +1029,16 @@
                                     span: Span)
                                     -> RecursiveTypeDescription<'tcx> {
     let tuple_name = compute_debuginfo_type_name(cx, tuple_type, false);
-    let tuple_llvm_type = type_of::type_of(cx, tuple_type);
 
     create_and_register_recursive_type_forward_declaration(
         cx,
         tuple_type,
         unique_type_id,
         create_struct_stub(cx,
-                           tuple_llvm_type,
+                           tuple_type,
                            &tuple_name[..],
                            unique_type_id,
                            NO_SCOPE_METADATA),
-        tuple_llvm_type,
         TupleMDF(TupleMemberDescriptionFactory {
             ty: tuple_type,
             component_types: component_types.to_vec(),
@@ -1088,21 +1052,23 @@
 //=-----------------------------------------------------------------------------
 
 struct UnionMemberDescriptionFactory<'tcx> {
+    layout: TyLayout<'tcx>,
     variant: &'tcx ty::VariantDef,
-    substs: &'tcx Substs<'tcx>,
     span: Span,
 }
 
 impl<'tcx> UnionMemberDescriptionFactory<'tcx> {
     fn create_member_descriptions<'a>(&self, cx: &CrateContext<'a, 'tcx>)
                                       -> Vec<MemberDescription> {
-        self.variant.fields.iter().map(|field| {
-            let fty = monomorphize::field_ty(cx.tcx(), self.substs, field);
+        self.variant.fields.iter().enumerate().map(|(i, f)| {
+            let field = self.layout.field(cx, i);
+            let (size, align) = field.size_and_align();
             MemberDescription {
-                name: field.name.to_string(),
-                llvm_type: type_of::type_of(cx, fty),
-                type_metadata: type_metadata(cx, fty, self.span),
-                offset: FixedMemberOffset { bytes: 0 },
+                name: f.name.to_string(),
+                type_metadata: type_metadata(cx, field.ty, self.span),
+                offset: Size::from_bytes(0),
+                size,
+                align,
                 flags: DIFlags::FlagZero,
             }
         }).collect()
@@ -1115,17 +1081,16 @@
                                     span: Span)
                                     -> RecursiveTypeDescription<'tcx> {
     let union_name = compute_debuginfo_type_name(cx, union_type, false);
-    let union_llvm_type = type_of::in_memory_type_of(cx, union_type);
 
-    let (union_def_id, variant, substs) = match union_type.sty {
-        ty::TyAdt(def, substs) => (def.did, def.struct_variant(), substs),
+    let (union_def_id, variant) = match union_type.sty {
+        ty::TyAdt(def, _) => (def.did, def.struct_variant()),
         _ => bug!("prepare_union_metadata on a non-ADT")
     };
 
     let containing_scope = get_namespace_for_item(cx, union_def_id);
 
     let union_metadata_stub = create_union_stub(cx,
-                                                union_llvm_type,
+                                                union_type,
                                                 &union_name,
                                                 unique_type_id,
                                                 containing_scope);
@@ -1135,10 +1100,9 @@
         union_type,
         unique_type_id,
         union_metadata_stub,
-        union_llvm_type,
         UnionMDF(UnionMemberDescriptionFactory {
+            layout: cx.layout_of(union_type),
             variant,
-            substs,
             span,
         })
     )
@@ -1155,10 +1119,9 @@
 // offset of zero bytes).
 struct EnumMemberDescriptionFactory<'tcx> {
     enum_type: Ty<'tcx>,
-    type_rep: &'tcx layout::Layout,
+    layout: TyLayout<'tcx>,
     discriminant_type_metadata: Option<DIType>,
     containing_scope: DIScope,
-    file_metadata: DIFile,
     span: Span,
 }
 
@@ -1166,162 +1129,70 @@
     fn create_member_descriptions<'a>(&self, cx: &CrateContext<'a, 'tcx>)
                                       -> Vec<MemberDescription> {
         let adt = &self.enum_type.ty_adt_def().unwrap();
-        let substs = match self.enum_type.sty {
-            ty::TyAdt(def, ref s) if def.adt_kind() == AdtKind::Enum => s,
-            _ => bug!("{} is not an enum", self.enum_type)
-        };
-        match *self.type_rep {
-            layout::General { ref variants, .. } => {
-                let discriminant_info = RegularDiscriminant(self.discriminant_type_metadata
-                    .expect(""));
-                variants
-                    .iter()
-                    .enumerate()
-                    .map(|(i, struct_def)| {
-                        let (variant_type_metadata,
-                             variant_llvm_type,
-                             member_desc_factory) =
-                            describe_enum_variant(cx,
-                                                  self.enum_type,
-                                                  struct_def,
-                                                  &adt.variants[i],
-                                                  discriminant_info,
-                                                  self.containing_scope,
-                                                  self.span);
+        match self.layout.variants {
+            layout::Variants::Single { .. } if adt.variants.is_empty() => vec![],
+            layout::Variants::Single { index } => {
+                let (variant_type_metadata, member_description_factory) =
+                    describe_enum_variant(cx,
+                                          self.layout,
+                                          &adt.variants[index],
+                                          NoDiscriminant,
+                                          self.containing_scope,
+                                          self.span);
 
-                        let member_descriptions = member_desc_factory
-                            .create_member_descriptions(cx);
+                let member_descriptions =
+                    member_description_factory.create_member_descriptions(cx);
 
-                        set_members_of_composite_type(cx,
-                                                      variant_type_metadata,
-                                                      variant_llvm_type,
-                                                      &member_descriptions);
-                        MemberDescription {
-                            name: "".to_string(),
-                            llvm_type: variant_llvm_type,
-                            type_metadata: variant_type_metadata,
-                            offset: FixedMemberOffset { bytes: 0 },
-                            flags: DIFlags::FlagZero
-                        }
-                    }).collect()
-            },
-            layout::Univariant{ ref variant, .. } => {
-                assert!(adt.variants.len() <= 1);
-
-                if adt.variants.is_empty() {
-                    vec![]
-                } else {
-                    let (variant_type_metadata,
-                         variant_llvm_type,
-                         member_description_factory) =
-                        describe_enum_variant(cx,
-                                              self.enum_type,
-                                              variant,
-                                              &adt.variants[0],
-                                              NoDiscriminant,
-                                              self.containing_scope,
-                                              self.span);
-
-                    let member_descriptions =
-                        member_description_factory.create_member_descriptions(cx);
-
-                    set_members_of_composite_type(cx,
-                                                  variant_type_metadata,
-                                                  variant_llvm_type,
-                                                  &member_descriptions[..]);
-                    vec![
-                        MemberDescription {
-                            name: "".to_string(),
-                            llvm_type: variant_llvm_type,
-                            type_metadata: variant_type_metadata,
-                            offset: FixedMemberOffset { bytes: 0 },
-                            flags: DIFlags::FlagZero
-                        }
-                    ]
-                }
-            }
-            layout::RawNullablePointer { nndiscr: non_null_variant_index, .. } => {
-                // As far as debuginfo is concerned, the pointer this enum
-                // represents is still wrapped in a struct. This is to make the
-                // DWARF representation of enums uniform.
-
-                // First create a description of the artificial wrapper struct:
-                let non_null_variant = &adt.variants[non_null_variant_index as usize];
-                let non_null_variant_name = non_null_variant.name.as_str();
-
-                // The llvm type and metadata of the pointer
-                let nnty = monomorphize::field_ty(cx.tcx(), &substs, &non_null_variant.fields[0] );
-                let non_null_llvm_type = type_of::type_of(cx, nnty);
-                let non_null_type_metadata = type_metadata(cx, nnty, self.span);
-
-                // The type of the artificial struct wrapping the pointer
-                let artificial_struct_llvm_type = Type::struct_(cx,
-                                                                &[non_null_llvm_type],
-                                                                false);
-
-                // For the metadata of the wrapper struct, we need to create a
-                // MemberDescription of the struct's single field.
-                let sole_struct_member_description = MemberDescription {
-                    name: match non_null_variant.ctor_kind {
-                        CtorKind::Fn => "__0".to_string(),
-                        CtorKind::Fictive => {
-                            non_null_variant.fields[0].name.to_string()
-                        }
-                        CtorKind::Const => bug!()
-                    },
-                    llvm_type: non_null_llvm_type,
-                    type_metadata: non_null_type_metadata,
-                    offset: FixedMemberOffset { bytes: 0 },
-                    flags: DIFlags::FlagZero
-                };
-
-                let unique_type_id = debug_context(cx).type_map
-                                                      .borrow_mut()
-                                                      .get_unique_type_id_of_enum_variant(
-                                                          cx,
-                                                          self.enum_type,
-                                                          &non_null_variant_name);
-
-                // Now we can create the metadata of the artificial struct
-                let artificial_struct_metadata =
-                    composite_type_metadata(cx,
-                                            artificial_struct_llvm_type,
-                                            &non_null_variant_name,
-                                            unique_type_id,
-                                            &[sole_struct_member_description],
-                                            self.containing_scope,
-                                            self.file_metadata,
-                                            syntax_pos::DUMMY_SP);
-
-                // Encode the information about the null variant in the union
-                // member's name.
-                let null_variant_index = (1 - non_null_variant_index) as usize;
-                let null_variant_name = adt.variants[null_variant_index].name;
-                let union_member_name = format!("RUST$ENCODED$ENUM${}${}",
-                                                0,
-                                                null_variant_name);
-
-                // Finally create the (singleton) list of descriptions of union
-                // members.
+                set_members_of_composite_type(cx,
+                                              variant_type_metadata,
+                                              &member_descriptions[..]);
                 vec![
                     MemberDescription {
-                        name: union_member_name,
-                        llvm_type: artificial_struct_llvm_type,
-                        type_metadata: artificial_struct_metadata,
-                        offset: FixedMemberOffset { bytes: 0 },
+                        name: "".to_string(),
+                        type_metadata: variant_type_metadata,
+                        offset: Size::from_bytes(0),
+                        size: self.layout.size,
+                        align: self.layout.align,
                         flags: DIFlags::FlagZero
                     }
                 ]
-            },
-            layout::StructWrappedNullablePointer { nonnull: ref struct_def,
-                                                nndiscr,
-                                                ref discrfield_source, ..} => {
+            }
+            layout::Variants::Tagged { ref variants, .. } => {
+                let discriminant_info = RegularDiscriminant(self.discriminant_type_metadata
+                    .expect(""));
+                (0..variants.len()).map(|i| {
+                    let variant = self.layout.for_variant(cx, i);
+                    let (variant_type_metadata, member_desc_factory) =
+                        describe_enum_variant(cx,
+                                              variant,
+                                              &adt.variants[i],
+                                              discriminant_info,
+                                              self.containing_scope,
+                                              self.span);
+
+                    let member_descriptions = member_desc_factory
+                        .create_member_descriptions(cx);
+
+                    set_members_of_composite_type(cx,
+                                                  variant_type_metadata,
+                                                  &member_descriptions);
+                    MemberDescription {
+                        name: "".to_string(),
+                        type_metadata: variant_type_metadata,
+                        offset: Size::from_bytes(0),
+                        size: variant.size,
+                        align: variant.align,
+                        flags: DIFlags::FlagZero
+                    }
+                }).collect()
+            }
+            layout::Variants::NicheFilling { dataful_variant, ref niche_variants, .. } => {
+                let variant = self.layout.for_variant(cx, dataful_variant);
                 // Create a description of the non-null variant
-                let (variant_type_metadata, variant_llvm_type, member_description_factory) =
+                let (variant_type_metadata, member_description_factory) =
                     describe_enum_variant(cx,
-                                          self.enum_type,
-                                          struct_def,
-                                          &adt.variants[nndiscr as usize],
+                                          variant,
+                                          &adt.variants[dataful_variant],
                                           OptimizedDiscriminant,
                                           self.containing_scope,
                                           self.span);
@@ -1331,34 +1202,51 @@
 
                 set_members_of_composite_type(cx,
                                               variant_type_metadata,
-                                              variant_llvm_type,
                                               &variant_member_descriptions[..]);
 
                 // Encode the information about the null variant in the union
                 // member's name.
-                let null_variant_index = (1 - nndiscr) as usize;
-                let null_variant_name = adt.variants[null_variant_index].name;
-                let discrfield_source = discrfield_source.iter()
-                                           .skip(1)
-                                           .map(|x| x.to_string())
-                                           .collect::<Vec<_>>().join("$");
-                let union_member_name = format!("RUST$ENCODED$ENUM${}${}",
-                                                discrfield_source,
-                                                null_variant_name);
+                let mut name = String::from("RUST$ENCODED$ENUM$");
+                // HACK(eddyb) the debuggers should just handle offset+size
+                // of discriminant instead of us having to recover its path.
+                // Right now it's not even going to work for `niche_start > 0`,
+                // and for multiple niche variants it only supports the first.
+                fn compute_field_path<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
+                                                name: &mut String,
+                                                layout: TyLayout<'tcx>,
+                                                offset: Size,
+                                                size: Size) {
+                    for i in 0..layout.fields.count() {
+                        let field_offset = layout.fields.offset(i);
+                        if field_offset > offset {
+                            continue;
+                        }
+                        let inner_offset = offset - field_offset;
+                        let field = layout.field(ccx, i);
+                        if inner_offset + size <= field.size {
+                            write!(name, "{}$", i).unwrap();
+                            compute_field_path(ccx, name, field, inner_offset, size);
+                        }
+                    }
+                }
+                compute_field_path(cx, &mut name,
+                                   self.layout,
+                                   self.layout.fields.offset(0),
+                                   self.layout.field(cx, 0).size);
+                name.push_str(&adt.variants[niche_variants.start].name.as_str());
 
                 // Create the (singleton) list of descriptions of union members.
                 vec![
                     MemberDescription {
-                        name: union_member_name,
-                        llvm_type: variant_llvm_type,
+                        name,
                         type_metadata: variant_type_metadata,
-                        offset: FixedMemberOffset { bytes: 0 },
+                        offset: Size::from_bytes(0),
+                        size: variant.size,
+                        align: variant.align,
                         flags: DIFlags::FlagZero
                     }
                 ]
-            },
-            layout::CEnum { .. } => span_bug!(self.span, "This should be unreachable."),
-            ref l @ _ => bug!("Not an enum layout: {:#?}", l)
+            }
         }
     }
 }
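The `compute_field_path` helper introduced in the `NicheFilling` arm above recursively walks a layout's fields to recover a `$`-separated index path to the niche, producing names like `RUST$ENCODED$ENUM$1$0$None` for the debugger. A self-contained sketch of the same recursion over a toy layout tree (the `Layout` enum and field shapes here are invented stand-ins, not rustc API):

```rust
use std::fmt::Write;

// Toy stand-in for rustc's TyLayout: a struct is a list of (offset, field),
// a leaf carries only its size. Purely illustrative.
enum Layout {
    Leaf { size: u64 },
    Struct { size: u64, fields: Vec<(u64, Layout)> },
}

impl Layout {
    fn size(&self) -> u64 {
        match self {
            Layout::Leaf { size } | Layout::Struct { size, .. } => *size,
        }
    }
}

// Mirrors the patch's compute_field_path: append the index of each field
// whose storage contains the byte range [offset, offset + size), then recurse.
fn compute_field_path(name: &mut String, layout: &Layout, offset: u64, size: u64) {
    if let Layout::Struct { fields, .. } = layout {
        for (i, (field_offset, field)) in fields.iter().enumerate() {
            if *field_offset > offset {
                continue;
            }
            let inner_offset = offset - field_offset;
            if inner_offset + size <= field.size() {
                write!(name, "{}$", i).unwrap();
                compute_field_path(name, field, inner_offset, size);
            }
        }
    }
}

fn main() {
    // Hypothetical shape: Outer { a: u32 @ 0, b: Inner @ 8 }, where the
    // 8-byte niche (a pointer) is Inner's field 0.
    let outer = Layout::Struct {
        size: 16,
        fields: vec![
            (0, Layout::Leaf { size: 4 }),
            (8, Layout::Struct { size: 8, fields: vec![(0, Layout::Leaf { size: 8 })] }),
        ],
    };
    let mut name = String::from("RUST$ENCODED$ENUM$");
    compute_field_path(&mut name, &outer, 8, 8);
    name.push_str("None"); // the niche variant's name, as in the patch
    assert_eq!(name, "RUST$ENCODED$ENUM$1$0$None");
}
```

As the patch's own HACK comment notes, this path encoding is a stopgap: it does not handle `niche_start > 0` and only supports the first niche variant.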
@@ -1366,7 +1254,7 @@
 // Creates MemberDescriptions for the fields of a single enum variant.
 struct VariantMemberDescriptionFactory<'tcx> {
     // Cloned from the layout::Struct describing the variant.
-    offsets: &'tcx [layout::Size],
+    offsets: Vec<layout::Size>,
     args: Vec<(String, Ty<'tcx>)>,
     discriminant_type_metadata: Option<DIType>,
     span: Span,
@@ -1376,14 +1264,16 @@
     fn create_member_descriptions<'a>(&self, cx: &CrateContext<'a, 'tcx>)
                                       -> Vec<MemberDescription> {
         self.args.iter().enumerate().map(|(i, &(ref name, ty))| {
+            let (size, align) = cx.size_and_align_of(ty);
             MemberDescription {
                 name: name.to_string(),
-                llvm_type: type_of::type_of(cx, ty),
                 type_metadata: match self.discriminant_type_metadata {
                     Some(metadata) if i == 0 => metadata,
                     _ => type_metadata(cx, ty, self.span)
                 },
-                offset: FixedMemberOffset { bytes: self.offsets[i].bytes() as usize },
+                offset: self.offsets[i],
+                size,
+                align,
                 flags: DIFlags::FlagZero
             }
         }).collect()
@@ -1402,92 +1292,52 @@
 // descriptions of the fields of the variant. This is a rudimentary version of a
 // full RecursiveTypeDescription.
 fn describe_enum_variant<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
-                                   enum_type: Ty<'tcx>,
-                                   struct_def: &'tcx layout::Struct,
+                                   layout: layout::TyLayout<'tcx>,
                                    variant: &'tcx ty::VariantDef,
                                    discriminant_info: EnumDiscriminantInfo,
                                    containing_scope: DIScope,
                                    span: Span)
-                                   -> (DICompositeType, Type, MemberDescriptionFactory<'tcx>) {
-    let substs = match enum_type.sty {
-        ty::TyAdt(def, s) if def.adt_kind() == AdtKind::Enum => s,
-        ref t @ _ => bug!("{:#?} is not an enum", t)
-    };
-
-    let maybe_discr_and_signed: Option<(layout::Integer, bool)> = match *cx.layout_of(enum_type) {
-        layout::CEnum {discr, ..} => Some((discr, true)),
-        layout::General{discr, ..} => Some((discr, false)),
-        layout::Univariant { .. }
-        | layout::RawNullablePointer { .. }
-        | layout::StructWrappedNullablePointer { .. } => None,
-        ref l @ _ => bug!("This should be unreachable. Type is {:#?} layout is {:#?}", enum_type, l)
-    };
-
-    let mut field_tys = variant.fields.iter().map(|f| {
-        monomorphize::field_ty(cx.tcx(), &substs, f)
-    }).collect::<Vec<_>>();
-
-    if let Some((discr, signed)) = maybe_discr_and_signed {
-        field_tys.insert(0, discr.to_ty(&cx.tcx(), signed));
-    }
-
-
-    let variant_llvm_type =
-        Type::struct_(cx, &field_tys
-                                    .iter()
-                                    .map(|t| type_of::type_of(cx, t))
-                                    .collect::<Vec<_>>()
-                                    ,
-                      struct_def.packed);
-    // Could do some consistency checks here: size, align, field count, discr type
-
+                                   -> (DICompositeType, MemberDescriptionFactory<'tcx>) {
     let variant_name = variant.name.as_str();
     let unique_type_id = debug_context(cx).type_map
                                           .borrow_mut()
                                           .get_unique_type_id_of_enum_variant(
                                               cx,
-                                              enum_type,
+                                              layout.ty,
                                               &variant_name);
 
     let metadata_stub = create_struct_stub(cx,
-                                           variant_llvm_type,
+                                           layout.ty,
                                            &variant_name,
                                            unique_type_id,
                                            containing_scope);
 
-    // Get the argument names from the enum variant info
-    let mut arg_names: Vec<_> = match variant.ctor_kind {
-        CtorKind::Const => vec![],
-        CtorKind::Fn => {
-            variant.fields
-                   .iter()
-                   .enumerate()
-                   .map(|(i, _)| format!("__{}", i))
-                   .collect()
-        }
-        CtorKind::Fictive => {
-            variant.fields
-                   .iter()
-                   .map(|f| f.name.to_string())
-                   .collect()
-        }
-    };
-
     // If this is not a univariant enum, there is also the discriminant field.
-    match discriminant_info {
-        RegularDiscriminant(_) => arg_names.insert(0, "RUST$ENUM$DISR".to_string()),
-        _ => { /* do nothing */ }
+    let (discr_offset, discr_arg) = match discriminant_info {
+        RegularDiscriminant(_) => {
+            let enum_layout = cx.layout_of(layout.ty);
+            (Some(enum_layout.fields.offset(0)),
+             Some(("RUST$ENUM$DISR".to_string(), enum_layout.field(cx, 0).ty)))
+        }
+        _ => (None, None),
     };
+    let offsets = discr_offset.into_iter().chain((0..layout.fields.count()).map(|i| {
+        layout.fields.offset(i)
+    })).collect();
 
     // Build an array of (field name, field type) pairs to be captured in the factory closure.
-    let args: Vec<(String, Ty)> = arg_names.iter()
-        .zip(field_tys.iter())
-        .map(|(s, &t)| (s.to_string(), t))
-        .collect();
+    let args = discr_arg.into_iter().chain((0..layout.fields.count()).map(|i| {
+        let name = if variant.ctor_kind == CtorKind::Fn {
+            format!("__{}", i)
+        } else {
+            variant.fields[i].name.to_string()
+        };
+        (name, layout.field(cx, i).ty)
+    })).collect();
 
     let member_description_factory =
         VariantMDF(VariantMemberDescriptionFactory {
-            offsets: &struct_def.offsets[..],
+            offsets,
             args,
             discriminant_type_metadata: match discriminant_info {
                 RegularDiscriminant(discriminant_type_metadata) => {
@@ -1498,7 +1348,7 @@
             span,
         });
 
-    (metadata_stub, variant_llvm_type, member_description_factory)
+    (metadata_stub, member_description_factory)
 }
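The rewritten `describe_enum_variant` above replaces the `arg_names` vector plus `insert(0, ...)` dance with `Option::into_iter().chain(...)`: an optional `RUST$ENUM$DISR` discriminant entry followed by one entry per payload field. A minimal sketch of that chaining pattern (field names are illustrative):

```rust
fn main() {
    // For a tagged enum the discriminant entry is Some(...), for a
    // univariant or niche-filled enum it is None and the chain is a no-op.
    let discr: Option<&str> = Some("RUST$ENUM$DISR");
    let fields = ["__0", "__1"]; // tuple-variant style names (CtorKind::Fn)

    let names: Vec<&str> = discr.into_iter().chain(fields.iter().copied()).collect();
    assert_eq!(names, ["RUST$ENUM$DISR", "__0", "__1"]);

    let no_discr: Option<&str> = None;
    let names: Vec<&str> = no_discr.into_iter().chain(fields.iter().copied()).collect();
    assert_eq!(names, ["__0", "__1"]);
}
```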
 
 fn prepare_enum_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
@@ -1534,21 +1384,18 @@
         })
         .collect();
 
-    let discriminant_type_metadata = |inttype: layout::Integer, signed: bool| {
-        let disr_type_key = (enum_def_id, inttype);
+    let discriminant_type_metadata = |discr: layout::Primitive| {
+        let disr_type_key = (enum_def_id, discr);
         let cached_discriminant_type_metadata = debug_context(cx).created_enum_disr_types
                                                                  .borrow()
                                                                  .get(&disr_type_key).cloned();
         match cached_discriminant_type_metadata {
             Some(discriminant_type_metadata) => discriminant_type_metadata,
             None => {
-                let discriminant_llvm_type = Type::from_integer(cx, inttype);
                 let (discriminant_size, discriminant_align) =
-                    size_and_align_of(cx, discriminant_llvm_type);
+                    (discr.size(cx), discr.align(cx));
                 let discriminant_base_type_metadata =
-                    type_metadata(cx,
-                                  inttype.to_ty(&cx.tcx(), signed),
-                                  syntax_pos::DUMMY_SP);
+                    type_metadata(cx, discr.to_ty(cx.tcx()), syntax_pos::DUMMY_SP);
                 let discriminant_name = get_enum_discriminant_name(cx, enum_def_id);
 
                 let name = CString::new(discriminant_name.as_bytes()).unwrap();
@@ -1559,8 +1406,8 @@
                         name.as_ptr(),
                         file_metadata,
                         UNKNOWN_LINE_NUMBER,
-                        bytes_to_bits(discriminant_size),
-                        bytes_to_bits(discriminant_align),
+                        discriminant_size.bits(),
+                        discriminant_align.abi_bits() as u32,
                         create_DIArray(DIB(cx), &enumerators_metadata),
                         discriminant_base_type_metadata)
                 };
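The hunk above swaps `bytes_to_bits(size_and_align_of(cx, llvm_type))` for `discr.size(cx).bits()` and `discr.align(cx).abi_bits()`. A sketch of what those conversions compute, using toy stand-ins for rustc's `layout::Size` and `layout::Align` (the real types differ in detail; `Align` here stores its ABI alignment as a power of two):

```rust
// Illustrative stand-ins, not the real rustc types.
#[derive(Copy, Clone)]
struct Size { raw_bytes: u64 }

#[derive(Copy, Clone)]
struct Align { abi_pow2: u8 } // alignment stored as log2(bytes)

impl Size {
    fn from_bytes(b: u64) -> Size { Size { raw_bytes: b } }
    fn bits(self) -> u64 { self.raw_bytes * 8 } // what bytes_to_bits() did
}

impl Align {
    fn abi(self) -> u64 { 1 << self.abi_pow2 } // alignment in bytes
    fn abi_bits(self) -> u64 { self.abi() * 8 } // alignment in bits, for DIBuilder
}

fn main() {
    // A u32 discriminant: 4 bytes, 4-byte aligned.
    let size = Size::from_bytes(4);
    let align = Align { abi_pow2: 2 };
    assert_eq!(size.bits(), 32);
    assert_eq!(align.abi_bits(), 32);
}
```

DIBuilder's enumeration and member-type APIs take sizes and alignments in bits, which is why every call site in this patch converts at the boundary.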
@@ -1574,21 +1421,22 @@
         }
     };
 
-    let type_rep = cx.layout_of(enum_type);
+    let layout = cx.layout_of(enum_type);
 
-    let discriminant_type_metadata = match *type_rep {
-        layout::CEnum { discr, signed, .. } => {
-            return FinalMetadata(discriminant_type_metadata(discr, signed))
-        },
-        layout::RawNullablePointer { .. }           |
-        layout::StructWrappedNullablePointer { .. } |
-        layout::Univariant { .. }                      => None,
-        layout::General { discr, .. } => Some(discriminant_type_metadata(discr, false)),
-        ref l @ _ => bug!("Not an enum layout: {:#?}", l)
+    let discriminant_type_metadata = match layout.variants {
+        layout::Variants::Single { .. } |
+        layout::Variants::NicheFilling { .. } => None,
+        layout::Variants::Tagged { ref discr, .. } => {
+            Some(discriminant_type_metadata(discr.value))
+        }
     };
 
-    let enum_llvm_type = type_of::type_of(cx, enum_type);
-    let (enum_type_size, enum_type_align) = size_and_align_of(cx, enum_llvm_type);
+    match (&layout.abi, discriminant_type_metadata) {
+        (&layout::Abi::Scalar(_), Some(discr)) => return FinalMetadata(discr),
+        _ => {}
+    }
+
+    let (enum_type_size, enum_type_align) = layout.size_and_align();
 
     let enum_name = CString::new(enum_name).unwrap();
     let unique_type_id_str = CString::new(
@@ -1601,8 +1449,8 @@
         enum_name.as_ptr(),
         file_metadata,
         UNKNOWN_LINE_NUMBER,
-        bytes_to_bits(enum_type_size),
-        bytes_to_bits(enum_type_align),
+        enum_type_size.bits(),
+        enum_type_align.abi_bits() as u32,
         DIFlags::FlagZero,
         ptr::null_mut(),
         0, // RuntimeLang
@@ -1614,13 +1462,11 @@
         enum_type,
         unique_type_id,
         enum_metadata,
-        enum_llvm_type,
         EnumMDF(EnumMemberDescriptionFactory {
             enum_type,
-            type_rep: type_rep.layout,
+            layout,
             discriminant_type_metadata,
             containing_scope,
-            file_metadata,
             span,
         }),
     );
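The `layout.variants` match in `prepare_enum_metadata` above distinguishes the three enum layouts the new `TyLayout` API exposes: `Single` (univariant), `Tagged` (explicit discriminant), and `NicheFilling` (discriminant folded into a payload's forbidden values). Their observable effects on size can be checked from plain Rust, independent of this patch:

```rust
use std::mem::size_of;

enum CLike { X, Y, Z }            // only a discriminant, no payload
enum Tagged { A(u64), B(u64) }    // Variants::Tagged: tag + payload

fn main() {
    // C-like enum: nothing but the discriminant, so the enum's debuginfo
    // can be just the discriminant's metadata (the Abi::Scalar early
    // return in the hunk above).
    assert_eq!(size_of::<CLike>(), 1);

    // Niche-filling: Option<&u8> reuses the pointer's forbidden null value
    // as the None discriminant, so no separate tag is stored and no
    // discriminant metadata is emitted for it.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());

    // Tagged: an explicit discriminant is stored alongside the payload,
    // growing the enum beyond its largest variant's payload.
    assert!(size_of::<Tagged>() > size_of::<u64>());
}
```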
@@ -1636,28 +1482,27 @@
 /// results in a LLVM struct.
 ///
 /// Examples of Rust types to use this are: structs, tuples, boxes, vecs, and enums.
-fn composite_type_metadata(cx: &CrateContext,
-                           composite_llvm_type: Type,
-                           composite_type_name: &str,
-                           composite_type_unique_id: UniqueTypeId,
-                           member_descriptions: &[MemberDescription],
-                           containing_scope: DIScope,
+fn composite_type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
+                                     composite_type: Ty<'tcx>,
+                                     composite_type_name: &str,
+                                     composite_type_unique_id: UniqueTypeId,
+                                     member_descriptions: &[MemberDescription],
+                                     containing_scope: DIScope,
 
-                           // Ignore source location information as long as it
-                           // can't be reconstructed for non-local crates.
-                           _file_metadata: DIFile,
-                           _definition_span: Span)
-                           -> DICompositeType {
+                                     // Ignore source location information as long as it
+                                     // can't be reconstructed for non-local crates.
+                                     _file_metadata: DIFile,
+                                     _definition_span: Span)
+                                     -> DICompositeType {
     // Create the (empty) struct metadata node ...
     let composite_type_metadata = create_struct_stub(cx,
-                                                     composite_llvm_type,
+                                                     composite_type,
                                                      composite_type_name,
                                                      composite_type_unique_id,
                                                      containing_scope);
     // ... and immediately create and add the member descriptions.
     set_members_of_composite_type(cx,
                                   composite_type_metadata,
-                                  composite_llvm_type,
                                   member_descriptions);
 
     return composite_type_metadata;
@@ -1665,7 +1510,6 @@
 
 fn set_members_of_composite_type(cx: &CrateContext,
                                  composite_type_metadata: DICompositeType,
-                                 composite_llvm_type: Type,
                                  member_descriptions: &[MemberDescription]) {
     // In some rare cases LLVM metadata uniquing would lead to an existing type
     // description being used instead of a new one created in
@@ -1686,14 +1530,7 @@
 
     let member_metadata: Vec<DIDescriptor> = member_descriptions
         .iter()
-        .enumerate()
-        .map(|(i, member_description)| {
-            let (member_size, member_align) = size_and_align_of(cx, member_description.llvm_type);
-            let member_offset = match member_description.offset {
-                FixedMemberOffset { bytes } => bytes as u64,
-                ComputedMemberOffset => machine::llelement_offset(cx, composite_llvm_type, i)
-            };
-
+        .map(|member_description| {
             let member_name = member_description.name.as_bytes();
             let member_name = CString::new(member_name).unwrap();
             unsafe {
@@ -1703,9 +1540,9 @@
                     member_name.as_ptr(),
                     unknown_file_metadata(cx),
                     UNKNOWN_LINE_NUMBER,
-                    bytes_to_bits(member_size),
-                    bytes_to_bits(member_align),
-                    bytes_to_bits(member_offset),
+                    member_description.size.bits(),
+                    member_description.align.abi_bits() as u32,
+                    member_description.offset.bits(),
                     member_description.flags,
                     member_description.type_metadata)
             }
@@ -1722,13 +1559,13 @@
 // A convenience wrapper around LLVMRustDIBuilderCreateStructType(). Does not do
 // any caching, does not add any fields to the struct. This can be done later
 // with set_members_of_composite_type().
-fn create_struct_stub(cx: &CrateContext,
-                      struct_llvm_type: Type,
-                      struct_type_name: &str,
-                      unique_type_id: UniqueTypeId,
-                      containing_scope: DIScope)
-                   -> DICompositeType {
-    let (struct_size, struct_align) = size_and_align_of(cx, struct_llvm_type);
+fn create_struct_stub<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
+                                struct_type: Ty<'tcx>,
+                                struct_type_name: &str,
+                                unique_type_id: UniqueTypeId,
+                                containing_scope: DIScope)
+                                -> DICompositeType {
+    let (struct_size, struct_align) = cx.size_and_align_of(struct_type);
 
     let name = CString::new(struct_type_name).unwrap();
     let unique_type_id = CString::new(
@@ -1746,8 +1583,8 @@
             name.as_ptr(),
             unknown_file_metadata(cx),
             UNKNOWN_LINE_NUMBER,
-            bytes_to_bits(struct_size),
-            bytes_to_bits(struct_align),
+            struct_size.bits(),
+            struct_align.abi_bits() as u32,
             DIFlags::FlagZero,
             ptr::null_mut(),
             empty_array,
@@ -1759,13 +1596,13 @@
     return metadata_stub;
 }
 
-fn create_union_stub(cx: &CrateContext,
-                     union_llvm_type: Type,
-                     union_type_name: &str,
-                     unique_type_id: UniqueTypeId,
-                     containing_scope: DIScope)
-                   -> DICompositeType {
-    let (union_size, union_align) = size_and_align_of(cx, union_llvm_type);
+fn create_union_stub<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
+                               union_type: Ty<'tcx>,
+                               union_type_name: &str,
+                               unique_type_id: UniqueTypeId,
+                               containing_scope: DIScope)
+                               -> DICompositeType {
+    let (union_size, union_align) = cx.size_and_align_of(union_type);
 
     let name = CString::new(union_type_name).unwrap();
     let unique_type_id = CString::new(
@@ -1783,8 +1620,8 @@
             name.as_ptr(),
             unknown_file_metadata(cx),
             UNKNOWN_LINE_NUMBER,
-            bytes_to_bits(union_size),
-            bytes_to_bits(union_align),
+            union_size.bits(),
+            union_align.abi_bits() as u32,
             DIFlags::FlagZero,
             empty_array,
             0, // RuntimeLang
@@ -1839,7 +1676,7 @@
                                                     is_local_to_unit,
                                                     global,
                                                     ptr::null_mut(),
-                                                    global_align,
+                                                    global_align.abi() as u32,
         );
     }
 }
@@ -1858,3 +1695,63 @@
             file_metadata)
     }
 }
+
+/// Creates debug information for the given vtable, which is for the
+/// given type.
+///
+/// Adds the created metadata nodes directly to the crate's IR.
+pub fn create_vtable_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
+                                        ty: ty::Ty<'tcx>,
+                                        vtable: ValueRef) {
+    if cx.dbg_cx().is_none() {
+        return;
+    }
+
+    let type_metadata = type_metadata(cx, ty, syntax_pos::DUMMY_SP);
+
+    unsafe {
+        // LLVMRustDIBuilderCreateStructType() wants an empty array. A null
+        // pointer will lead to hard to trace and debug LLVM assertions
+        // later on in llvm/lib/IR/Value.cpp.
+        let empty_array = create_DIArray(DIB(cx), &[]);
+
+        let name = CString::new("vtable").unwrap();
+
+        // Create a new one each time.  We don't want metadata caching
+        // here, because each vtable will refer to a unique containing
+        // type.
+        let vtable_type = llvm::LLVMRustDIBuilderCreateStructType(
+            DIB(cx),
+            NO_SCOPE_METADATA,
+            name.as_ptr(),
+            unknown_file_metadata(cx),
+            UNKNOWN_LINE_NUMBER,
+            Size::from_bytes(0).bits(),
+            cx.tcx().data_layout.pointer_align.abi_bits() as u32,
+            DIFlags::FlagArtificial,
+            ptr::null_mut(),
+            empty_array,
+            0,
+            type_metadata,
+            name.as_ptr()
+        );
+
+        llvm::LLVMRustDIBuilderCreateStaticVariable(DIB(cx),
+                                                    NO_SCOPE_METADATA,
+                                                    name.as_ptr(),
+                                                    // LLVM 3.9 doesn't accept
+                                                    // null here, so pass the
+                                                    // name as the linkage name.
+                                                    name.as_ptr(),
+                                                    unknown_file_metadata(cx),
+                                                    UNKNOWN_LINE_NUMBER,
+                                                    vtable_type,
+                                                    true,
+                                                    vtable,
+                                                    ptr::null_mut(),
+                                                    0);
+    }
+}
diff --git a/src/librustc_trans/debuginfo/mod.rs b/src/librustc_trans/debuginfo/mod.rs
index 1a28429..c0df252 100644
--- a/src/librustc_trans/debuginfo/mod.rs
+++ b/src/librustc_trans/debuginfo/mod.rs
@@ -43,7 +43,7 @@
 use syntax_pos::{self, Span, Pos};
 use syntax::ast;
 use syntax::symbol::Symbol;
-use rustc::ty::layout::{self, LayoutTyper};
+use rustc::ty::layout::{self, LayoutOf};
 
 pub mod gdb;
 mod utils;
@@ -56,6 +56,7 @@
 pub use self::create_scope_map::{create_mir_scopes, MirDebugScope};
 pub use self::source_loc::start_emitting_source_locations;
 pub use self::metadata::create_global_var_metadata;
+pub use self::metadata::create_vtable_metadata;
 pub use self::metadata::extend_scope_to_file;
 pub use self::source_loc::set_source_location;
 
@@ -70,7 +71,7 @@
     llmod: ModuleRef,
     builder: DIBuilderRef,
     created_files: RefCell<FxHashMap<(Symbol, Symbol), DIFile>>,
-    created_enum_disr_types: RefCell<FxHashMap<(DefId, layout::Integer), DIType>>,
+    created_enum_disr_types: RefCell<FxHashMap<(DefId, layout::Primitive), DIType>>,
 
     type_map: RefCell<TypeMap<'tcx>>,
     namespace_map: RefCell<DefIdMap<DIScope>>,
@@ -334,8 +335,7 @@
             signature.extend(inputs.iter().map(|&t| {
                 let t = match t.sty {
                     ty::TyArray(ct, _)
-                        if (ct == cx.tcx().types.u8) ||
-                           (cx.layout_of(ct).size(cx).bytes() == 0) => {
+                        if (ct == cx.tcx().types.u8) || cx.layout_of(ct).is_zst() => {
                         cx.tcx().mk_imm_ptr(ct)
                     }
                     _ => t
@@ -498,7 +498,7 @@
                     cx.sess().opts.optimize != config::OptLevel::No,
                     DIFlags::FlagZero,
                     argument_index,
-                    align,
+                    align.abi() as u32,
                 )
             };
             source_loc::set_debug_location(bcx,
diff --git a/src/librustc_trans/debuginfo/utils.rs b/src/librustc_trans/debuginfo/utils.rs
index ad4fdfc..95427d9 100644
--- a/src/librustc_trans/debuginfo/utils.rs
+++ b/src/librustc_trans/debuginfo/utils.rs
@@ -18,15 +18,11 @@
 
 use llvm;
 use llvm::debuginfo::{DIScope, DIBuilderRef, DIDescriptor, DIArray};
-use machine;
 use common::{CrateContext};
-use type_::Type;
 
 use syntax_pos::{self, Span};
 use syntax::ast;
 
-use std::ops;
-
 pub fn is_node_local_to_unit(cx: &CrateContext, node_id: ast::NodeId) -> bool
 {
     // The is_local_to_unit flag indicates whether a function is local to the
@@ -53,15 +49,6 @@
     cx.sess().codemap().lookup_char_pos(span.lo())
 }
 
-pub fn size_and_align_of(cx: &CrateContext, llvm_type: Type) -> (u64, u32) {
-    (machine::llsize_of_alloc(cx, llvm_type), machine::llalign_of_min(cx, llvm_type))
-}
-
-pub fn bytes_to_bits<T>(bytes: T) -> T
-    where T: ops::Mul<Output=T> + From<u8> {
-    bytes * 8u8.into()
-}
-
 #[inline]
 pub fn debug_context<'a, 'tcx>(cx: &'a CrateContext<'a, 'tcx>)
                            -> &'a CrateDebugContext<'tcx> {
diff --git a/src/librustc_trans/glue.rs b/src/librustc_trans/glue.rs
index 453b98a..6c7d770 100644
--- a/src/librustc_trans/glue.rs
+++ b/src/librustc_trans/glue.rs
@@ -19,8 +19,7 @@
 use llvm::{ValueRef};
 use llvm;
 use meth;
-use monomorphize;
-use rustc::ty::layout::LayoutTyper;
+use rustc::ty::layout::LayoutOf;
 use rustc::ty::{self, Ty};
 use value::Value;
 
@@ -29,17 +28,28 @@
     debug!("calculate size of DST: {}; with lost info: {:?}",
            t, Value(info));
     if bcx.ccx.shared().type_is_sized(t) {
-        let size = bcx.ccx.size_of(t);
-        let align = bcx.ccx.align_of(t);
-        debug!("size_and_align_of_dst t={} info={:?} size: {} align: {}",
+        let (size, align) = bcx.ccx.size_and_align_of(t);
+        debug!("size_and_align_of_dst t={} info={:?} size: {:?} align: {:?}",
                t, Value(info), size, align);
-        let size = C_usize(bcx.ccx, size);
-        let align = C_usize(bcx.ccx, align as u64);
+        let size = C_usize(bcx.ccx, size.bytes());
+        let align = C_usize(bcx.ccx, align.abi());
         return (size, align);
     }
     assert!(!info.is_null());
     match t.sty {
-        ty::TyAdt(..) | ty::TyTuple(..) => {
+        ty::TyDynamic(..) => {
+            // load size/align from vtable
+            (meth::SIZE.get_usize(bcx, info), meth::ALIGN.get_usize(bcx, info))
+        }
+        ty::TySlice(_) | ty::TyStr => {
+            let unit = t.sequence_element_type(bcx.tcx());
+            // The info in this case is the length of the str, so the size is that
+            // times the unit size.
+            let (size, align) = bcx.ccx.size_and_align_of(unit);
+            (bcx.mul(info, C_usize(bcx.ccx, size.bytes())),
+             C_usize(bcx.ccx, align.abi()))
+        }
+        _ => {
             let ccx = bcx.ccx;
             // First get the size of all statically known fields.
             // Don't use size_of because it also rounds up to alignment, which we
@@ -48,15 +58,9 @@
             let layout = ccx.layout_of(t);
             debug!("DST {} layout: {:?}", t, layout);
 
-            let (sized_size, sized_align) = match *layout {
-                ty::layout::Layout::Univariant { ref variant, .. } => {
-                    (variant.offsets.last().map_or(0, |o| o.bytes()), variant.align.abi())
-                }
-                _ => {
-                    bug!("size_and_align_of_dst: expcted Univariant for `{}`, found {:#?}",
-                         t, layout);
-                }
-            };
+            let i = layout.fields.count() - 1;
+            let sized_size = layout.fields.offset(i).bytes();
+            let sized_align = layout.align.abi();
             debug!("DST {} statically sized prefix size: {} align: {}",
                    t, sized_size, sized_align);
             let sized_size = C_usize(ccx, sized_size);
@@ -64,14 +68,7 @@
 
             // Recurse to get the size of the dynamically sized field (must be
             // the last field).
-            let field_ty = match t.sty {
-                ty::TyAdt(def, substs) => {
-                    let last_field = def.struct_variant().fields.last().unwrap();
-                    monomorphize::field_ty(bcx.tcx(), substs, last_field)
-                },
-                ty::TyTuple(tys, _) => tys.last().unwrap(),
-                _ => unreachable!(),
-            };
+            let field_ty = layout.field(ccx, i).ty;
             let (unsized_size, unsized_align) = size_and_align_of_dst(bcx, field_ty, info);
 
             // FIXME (#26403, #27023): We should be adding padding
@@ -114,17 +111,5 @@
 
             (size, align)
         }
-        ty::TyDynamic(..) => {
-            // load size/align from vtable
-            (meth::SIZE.get_usize(bcx, info), meth::ALIGN.get_usize(bcx, info))
-        }
-        ty::TySlice(_) | ty::TyStr => {
-            let unit = t.sequence_element_type(bcx.tcx());
-            // The info in this case is the length of the str, so the size is that
-            // times the unit size.
-            (bcx.mul(info, C_usize(bcx.ccx, bcx.ccx.size_of(unit))),
-             C_usize(bcx.ccx, bcx.ccx.align_of(unit) as u64))
-        }
-        _ => bug!("Unexpected unsized type, found {}", t)
     }
 }
diff --git a/src/librustc_trans/intrinsic.rs b/src/librustc_trans/intrinsic.rs
index 2f1a950..adbb45f 100644
--- a/src/librustc_trans/intrinsic.rs
+++ b/src/librustc_trans/intrinsic.rs
@@ -11,20 +11,19 @@
 #![allow(non_upper_case_globals)]
 
 use intrinsics::{self, Intrinsic};
-use libc;
 use llvm;
 use llvm::{ValueRef};
-use abi::{Abi, FnType};
-use adt;
+use abi::{Abi, FnType, PassMode};
 use mir::lvalue::{LvalueRef, Alignment};
+use mir::operand::{OperandRef, OperandValue};
 use base::*;
 use common::*;
 use declare;
 use glue;
-use type_of;
-use machine;
 use type_::Type;
+use type_of::LayoutLlvmExt;
 use rustc::ty::{self, Ty};
+use rustc::ty::layout::{HasDataLayout, LayoutOf};
 use rustc::hir;
 use syntax::ast;
 use syntax::symbol::Symbol;
@@ -88,8 +87,8 @@
 /// add them to librustc_trans/trans/context.rs
 pub fn trans_intrinsic_call<'a, 'tcx>(bcx: &Builder<'a, 'tcx>,
                                       callee_ty: Ty<'tcx>,
-                                      fn_ty: &FnType,
-                                      llargs: &[ValueRef],
+                                      fn_ty: &FnType<'tcx>,
+                                      args: &[OperandRef<'tcx>],
                                       llresult: ValueRef,
                                       span: Span) {
     let ccx = bcx.ccx;
@@ -106,27 +105,34 @@
     let ret_ty = sig.output();
     let name = &*tcx.item_name(def_id);
 
-    let llret_ty = type_of::type_of(ccx, ret_ty);
+    let llret_ty = ccx.layout_of(ret_ty).llvm_type(ccx);
+    let result = LvalueRef::new_sized(llresult, fn_ty.ret.layout, Alignment::AbiAligned);
 
     let simple = get_simple_intrinsic(ccx, name);
     let llval = match name {
         _ if simple.is_some() => {
-            bcx.call(simple.unwrap(), &llargs, None)
+            bcx.call(simple.unwrap(),
+                     &args.iter().map(|arg| arg.immediate()).collect::<Vec<_>>(),
+                     None)
         }
         "unreachable" => {
             return;
         },
         "likely" => {
             let expect = ccx.get_intrinsic(&("llvm.expect.i1"));
-            bcx.call(expect, &[llargs[0], C_bool(ccx, true)], None)
+            bcx.call(expect, &[args[0].immediate(), C_bool(ccx, true)], None)
         }
         "unlikely" => {
             let expect = ccx.get_intrinsic(&("llvm.expect.i1"));
-            bcx.call(expect, &[llargs[0], C_bool(ccx, false)], None)
+            bcx.call(expect, &[args[0].immediate(), C_bool(ccx, false)], None)
         }
         "try" => {
-            try_intrinsic(bcx, ccx, llargs[0], llargs[1], llargs[2], llresult);
-            C_nil(ccx)
+            try_intrinsic(bcx, ccx,
+                          args[0].immediate(),
+                          args[1].immediate(),
+                          args[2].immediate(),
+                          llresult);
+            return;
         }
         "breakpoint" => {
             let llfn = ccx.get_intrinsic(&("llvm.debugtrap"));
@@ -134,42 +140,35 @@
         }
         "size_of" => {
             let tp_ty = substs.type_at(0);
-            let lltp_ty = type_of::type_of(ccx, tp_ty);
-            C_usize(ccx, machine::llsize_of_alloc(ccx, lltp_ty))
+            C_usize(ccx, ccx.size_of(tp_ty).bytes())
         }
         "size_of_val" => {
             let tp_ty = substs.type_at(0);
-            if bcx.ccx.shared().type_is_sized(tp_ty) {
-                let lltp_ty = type_of::type_of(ccx, tp_ty);
-                C_usize(ccx, machine::llsize_of_alloc(ccx, lltp_ty))
-            } else if bcx.ccx.shared().type_has_metadata(tp_ty) {
+            if let OperandValue::Pair(_, meta) = args[0].val {
                 let (llsize, _) =
-                    glue::size_and_align_of_dst(bcx, tp_ty, llargs[1]);
+                    glue::size_and_align_of_dst(bcx, tp_ty, meta);
                 llsize
             } else {
-                C_usize(ccx, 0u64)
+                C_usize(ccx, ccx.size_of(tp_ty).bytes())
             }
         }
         "min_align_of" => {
             let tp_ty = substs.type_at(0);
-            C_usize(ccx, ccx.align_of(tp_ty) as u64)
+            C_usize(ccx, ccx.align_of(tp_ty).abi())
         }
         "min_align_of_val" => {
             let tp_ty = substs.type_at(0);
-            if bcx.ccx.shared().type_is_sized(tp_ty) {
-                C_usize(ccx, ccx.align_of(tp_ty) as u64)
-            } else if bcx.ccx.shared().type_has_metadata(tp_ty) {
+            if let OperandValue::Pair(_, meta) = args[0].val {
                 let (_, llalign) =
-                    glue::size_and_align_of_dst(bcx, tp_ty, llargs[1]);
+                    glue::size_and_align_of_dst(bcx, tp_ty, meta);
                 llalign
             } else {
-                C_usize(ccx, 1u64)
+                C_usize(ccx, ccx.align_of(tp_ty).abi())
             }
         }
         "pref_align_of" => {
             let tp_ty = substs.type_at(0);
-            let lltp_ty = type_of::type_of(ccx, tp_ty);
-            C_usize(ccx, machine::llalign_of_pref(ccx, lltp_ty) as u64)
+            C_usize(ccx, ccx.align_of(tp_ty).pref())
         }
         "type_name" => {
             let tp_ty = substs.type_at(0);
@@ -181,18 +180,18 @@
         }
         "init" => {
             let ty = substs.type_at(0);
-            if !type_is_zero_size(ccx, ty) {
+            if !ccx.layout_of(ty).is_zst() {
                 // Just zero out the stack slot.
                 // If we store a zero constant, LLVM will drown in vreg allocation for large data
                 // structures, and the generated code will be awful. (A telltale sign of this is
                 // large quantities of `mov [byte ptr foo],0` in the generated code.)
                 memset_intrinsic(bcx, false, ty, llresult, C_u8(ccx, 0), C_usize(ccx, 1));
             }
-            C_nil(ccx)
+            return;
         }
         // Effectively no-ops
         "uninit" => {
-            C_nil(ccx)
+            return;
         }
         "needs_drop" => {
             let tp_ty = substs.type_at(0);
@@ -200,69 +199,75 @@
             C_bool(ccx, bcx.ccx.shared().type_needs_drop(tp_ty))
         }
         "offset" => {
-            let ptr = llargs[0];
-            let offset = llargs[1];
+            let ptr = args[0].immediate();
+            let offset = args[1].immediate();
             bcx.inbounds_gep(ptr, &[offset])
         }
         "arith_offset" => {
-            let ptr = llargs[0];
-            let offset = llargs[1];
+            let ptr = args[0].immediate();
+            let offset = args[1].immediate();
             bcx.gep(ptr, &[offset])
         }
 
         "copy_nonoverlapping" => {
-            copy_intrinsic(bcx, false, false, substs.type_at(0), llargs[1], llargs[0], llargs[2])
+            copy_intrinsic(bcx, false, false, substs.type_at(0),
+                           args[1].immediate(), args[0].immediate(), args[2].immediate())
         }
         "copy" => {
-            copy_intrinsic(bcx, true, false, substs.type_at(0), llargs[1], llargs[0], llargs[2])
+            copy_intrinsic(bcx, true, false, substs.type_at(0),
+                           args[1].immediate(), args[0].immediate(), args[2].immediate())
         }
         "write_bytes" => {
-            memset_intrinsic(bcx, false, substs.type_at(0), llargs[0], llargs[1], llargs[2])
+            memset_intrinsic(bcx, false, substs.type_at(0),
+                             args[0].immediate(), args[1].immediate(), args[2].immediate())
         }
 
         "volatile_copy_nonoverlapping_memory" => {
-            copy_intrinsic(bcx, false, true, substs.type_at(0), llargs[0], llargs[1], llargs[2])
+            copy_intrinsic(bcx, false, true, substs.type_at(0),
+                           args[0].immediate(), args[1].immediate(), args[2].immediate())
         }
         "volatile_copy_memory" => {
-            copy_intrinsic(bcx, true, true, substs.type_at(0), llargs[0], llargs[1], llargs[2])
+            copy_intrinsic(bcx, true, true, substs.type_at(0),
+                           args[0].immediate(), args[1].immediate(), args[2].immediate())
         }
         "volatile_set_memory" => {
-            memset_intrinsic(bcx, true, substs.type_at(0), llargs[0], llargs[1], llargs[2])
+            memset_intrinsic(bcx, true, substs.type_at(0),
+                             args[0].immediate(), args[1].immediate(), args[2].immediate())
         }
         "volatile_load" => {
             let tp_ty = substs.type_at(0);
-            let mut ptr = llargs[0];
-            if let Some(ty) = fn_ty.ret.cast {
-                ptr = bcx.pointercast(ptr, ty.ptr_to());
+            let mut ptr = args[0].immediate();
+            if let PassMode::Cast(ty) = fn_ty.ret.mode {
+                ptr = bcx.pointercast(ptr, ty.llvm_type(ccx).ptr_to());
             }
             let load = bcx.volatile_load(ptr);
             unsafe {
-                llvm::LLVMSetAlignment(load, ccx.align_of(tp_ty));
+                llvm::LLVMSetAlignment(load, ccx.align_of(tp_ty).abi() as u32);
             }
-            to_immediate(bcx, load, tp_ty)
+            to_immediate(bcx, load, ccx.layout_of(tp_ty))
         },
         "volatile_store" => {
             let tp_ty = substs.type_at(0);
-            if type_is_fat_ptr(bcx.ccx, tp_ty) {
-                bcx.volatile_store(llargs[1], get_dataptr(bcx, llargs[0]));
-                bcx.volatile_store(llargs[2], get_meta(bcx, llargs[0]));
+            let dst = args[0].deref(bcx.ccx);
+            if let OperandValue::Pair(a, b) = args[1].val {
+                bcx.volatile_store(a, dst.project_field(bcx, 0).llval);
+                bcx.volatile_store(b, dst.project_field(bcx, 1).llval);
             } else {
-                let val = if fn_ty.args[1].is_indirect() {
-                    bcx.load(llargs[1], None)
+                let val = if let OperandValue::Ref(ptr, align) = args[1].val {
+                    bcx.load(ptr, align.non_abi())
                 } else {
-                    if !type_is_zero_size(ccx, tp_ty) {
-                        from_immediate(bcx, llargs[1])
-                    } else {
-                        C_nil(ccx)
+                    if dst.layout.is_zst() {
+                        return;
                     }
+                    from_immediate(bcx, args[1].immediate())
                 };
-                let ptr = bcx.pointercast(llargs[0], val_ty(val).ptr_to());
+                let ptr = bcx.pointercast(dst.llval, val_ty(val).ptr_to());
                 let store = bcx.volatile_store(val, ptr);
                 unsafe {
-                    llvm::LLVMSetAlignment(store, ccx.align_of(tp_ty));
+                    llvm::LLVMSetAlignment(store, ccx.align_of(tp_ty).abi() as u32);
                 }
             }
-            C_nil(ccx)
+            return;
         },
         "prefetch_read_data" | "prefetch_write_data" |
         "prefetch_read_instruction" | "prefetch_write_instruction" => {
@@ -274,35 +279,40 @@
                 "prefetch_write_instruction" => (1, 0),
                 _ => bug!()
             };
-            bcx.call(expect, &[llargs[0], C_i32(ccx, rw), llargs[1], C_i32(ccx, cache_type)], None)
+            bcx.call(expect, &[
+                args[0].immediate(),
+                C_i32(ccx, rw),
+                args[1].immediate(),
+                C_i32(ccx, cache_type)
+            ], None)
         },
         "ctlz" | "ctlz_nonzero" | "cttz" | "cttz_nonzero" | "ctpop" | "bswap" |
         "add_with_overflow" | "sub_with_overflow" | "mul_with_overflow" |
         "overflowing_add" | "overflowing_sub" | "overflowing_mul" |
         "unchecked_div" | "unchecked_rem" | "unchecked_shl" | "unchecked_shr" => {
-            let sty = &arg_tys[0].sty;
-            match int_type_width_signed(sty, ccx) {
+            let ty = arg_tys[0];
+            match int_type_width_signed(ty, ccx) {
                 Some((width, signed)) =>
                     match name {
                         "ctlz" | "cttz" => {
                             let y = C_bool(bcx.ccx, false);
                             let llfn = ccx.get_intrinsic(&format!("llvm.{}.i{}", name, width));
-                            bcx.call(llfn, &[llargs[0], y], None)
+                            bcx.call(llfn, &[args[0].immediate(), y], None)
                         }
                         "ctlz_nonzero" | "cttz_nonzero" => {
                             let y = C_bool(bcx.ccx, true);
                             let llvm_name = &format!("llvm.{}.i{}", &name[..4], width);
                             let llfn = ccx.get_intrinsic(llvm_name);
-                            bcx.call(llfn, &[llargs[0], y], None)
+                            bcx.call(llfn, &[args[0].immediate(), y], None)
                         }
                         "ctpop" => bcx.call(ccx.get_intrinsic(&format!("llvm.ctpop.i{}", width)),
-                                        &llargs, None),
+                                        &[args[0].immediate()], None),
                         "bswap" => {
                             if width == 8 {
-                                llargs[0] // byte swap a u8/i8 is just a no-op
+                                args[0].immediate() // byte swap a u8/i8 is just a no-op
                             } else {
                                 bcx.call(ccx.get_intrinsic(&format!("llvm.bswap.i{}", width)),
-                                        &llargs, None)
+                                        &[args[0].immediate()], None)
                             }
                         }
                         "add_with_overflow" | "sub_with_overflow" | "mul_with_overflow" => {
@@ -312,35 +322,41 @@
                             let llfn = bcx.ccx.get_intrinsic(&intrinsic);
 
                             // Convert `i1` to a `bool`, and write it to the out parameter
-                            let val = bcx.call(llfn, &[llargs[0], llargs[1]], None);
-                            let result = bcx.extract_value(val, 0);
-                            let overflow = bcx.zext(bcx.extract_value(val, 1), Type::bool(ccx));
-                            bcx.store(result, bcx.struct_gep(llresult, 0), None);
-                            bcx.store(overflow, bcx.struct_gep(llresult, 1), None);
+                            let pair = bcx.call(llfn, &[
+                                args[0].immediate(),
+                                args[1].immediate()
+                            ], None);
+                            let val = bcx.extract_value(pair, 0);
+                            let overflow = bcx.zext(bcx.extract_value(pair, 1), Type::bool(ccx));
 
-                            C_nil(bcx.ccx)
+                            let dest = result.project_field(bcx, 0);
+                            bcx.store(val, dest.llval, dest.alignment.non_abi());
+                            let dest = result.project_field(bcx, 1);
+                            bcx.store(overflow, dest.llval, dest.alignment.non_abi());
+
+                            return;
                         },
-                        "overflowing_add" => bcx.add(llargs[0], llargs[1]),
-                        "overflowing_sub" => bcx.sub(llargs[0], llargs[1]),
-                        "overflowing_mul" => bcx.mul(llargs[0], llargs[1]),
+                        "overflowing_add" => bcx.add(args[0].immediate(), args[1].immediate()),
+                        "overflowing_sub" => bcx.sub(args[0].immediate(), args[1].immediate()),
+                        "overflowing_mul" => bcx.mul(args[0].immediate(), args[1].immediate()),
                         "unchecked_div" =>
                             if signed {
-                                bcx.sdiv(llargs[0], llargs[1])
+                                bcx.sdiv(args[0].immediate(), args[1].immediate())
                             } else {
-                                bcx.udiv(llargs[0], llargs[1])
+                                bcx.udiv(args[0].immediate(), args[1].immediate())
                             },
                         "unchecked_rem" =>
                             if signed {
-                                bcx.srem(llargs[0], llargs[1])
+                                bcx.srem(args[0].immediate(), args[1].immediate())
                             } else {
-                                bcx.urem(llargs[0], llargs[1])
+                                bcx.urem(args[0].immediate(), args[1].immediate())
                             },
-                        "unchecked_shl" => bcx.shl(llargs[0], llargs[1]),
+                        "unchecked_shl" => bcx.shl(args[0].immediate(), args[1].immediate()),
                         "unchecked_shr" =>
                             if signed {
-                                bcx.ashr(llargs[0], llargs[1])
+                                bcx.ashr(args[0].immediate(), args[1].immediate())
                             } else {
-                                bcx.lshr(llargs[0], llargs[1])
+                                bcx.lshr(args[0].immediate(), args[1].immediate())
                             },
                         _ => bug!(),
                     },
@@ -348,8 +364,8 @@
                     span_invalid_monomorphization_error(
                         tcx.sess, span,
                         &format!("invalid monomorphization of `{}` intrinsic: \
-                                  expected basic integer type, found `{}`", name, sty));
-                        C_nil(ccx)
+                                  expected basic integer type, found `{}`", name, ty));
+                    return;
                 }
             }
 
@@ -359,11 +375,11 @@
             match float_type_width(sty) {
                 Some(_width) =>
                     match name {
-                        "fadd_fast" => bcx.fadd_fast(llargs[0], llargs[1]),
-                        "fsub_fast" => bcx.fsub_fast(llargs[0], llargs[1]),
-                        "fmul_fast" => bcx.fmul_fast(llargs[0], llargs[1]),
-                        "fdiv_fast" => bcx.fdiv_fast(llargs[0], llargs[1]),
-                        "frem_fast" => bcx.frem_fast(llargs[0], llargs[1]),
+                        "fadd_fast" => bcx.fadd_fast(args[0].immediate(), args[1].immediate()),
+                        "fsub_fast" => bcx.fsub_fast(args[0].immediate(), args[1].immediate()),
+                        "fmul_fast" => bcx.fmul_fast(args[0].immediate(), args[1].immediate()),
+                        "fdiv_fast" => bcx.fdiv_fast(args[0].immediate(), args[1].immediate()),
+                        "frem_fast" => bcx.frem_fast(args[0].immediate(), args[1].immediate()),
                         _ => bug!(),
                     },
                 None => {
@@ -371,40 +387,37 @@
                         tcx.sess, span,
                         &format!("invalid monomorphization of `{}` intrinsic: \
                                   expected basic float type, found `{}`", name, sty));
-                        C_nil(ccx)
+                    return;
                 }
             }
 
         },
 
         "discriminant_value" => {
-            let val_ty = substs.type_at(0);
-            match val_ty.sty {
-                ty::TyAdt(adt, ..) if adt.is_enum() => {
-                    adt::trans_get_discr(bcx, val_ty, llargs[0], Alignment::AbiAligned,
-                                         Some(llret_ty), true)
-                }
-                _ => C_null(llret_ty)
-            }
+            args[0].deref(bcx.ccx).trans_get_discr(bcx, ret_ty)
         }
 
         "align_offset" => {
             // `ptr as usize`
-            let ptr_val = bcx.ptrtoint(llargs[0], bcx.ccx.isize_ty());
+            let ptr_val = bcx.ptrtoint(args[0].immediate(), bcx.ccx.isize_ty());
             // `ptr_val % align`
-            let offset = bcx.urem(ptr_val, llargs[1]);
+            let align = args[1].immediate();
+            let offset = bcx.urem(ptr_val, align);
             let zero = C_null(bcx.ccx.isize_ty());
             // `offset == 0`
             let is_zero = bcx.icmp(llvm::IntPredicate::IntEQ, offset, zero);
             // `if offset == 0 { 0 } else { offset - align }`
-            bcx.select(is_zero, zero, bcx.sub(offset, llargs[1]))
+            bcx.select(is_zero, zero, bcx.sub(offset, align))
         }
         name if name.starts_with("simd_") => {
-            generic_simd_intrinsic(bcx, name,
-                                   callee_ty,
-                                   &llargs,
-                                   ret_ty, llret_ty,
-                                   span)
+            match generic_simd_intrinsic(bcx, name,
+                                         callee_ty,
+                                         args,
+                                         ret_ty, llret_ty,
+                                         span) {
+                Ok(llval) => llval,
+                Err(()) => return
+            }
         }
         // This requires that atomic intrinsics follow a specific naming pattern:
         // "atomic_<operation>[_<ordering>]", and no ordering means SeqCst
@@ -438,57 +451,66 @@
                 _ => ccx.sess().fatal("Atomic intrinsic not in correct format"),
             };
 
-            let invalid_monomorphization = |sty| {
+            let invalid_monomorphization = |ty| {
                 span_invalid_monomorphization_error(tcx.sess, span,
                     &format!("invalid monomorphization of `{}` intrinsic: \
-                              expected basic integer type, found `{}`", name, sty));
+                              expected basic integer type, found `{}`", name, ty));
             };
 
             match split[1] {
                 "cxchg" | "cxchgweak" => {
-                    let sty = &substs.type_at(0).sty;
-                    if int_type_width_signed(sty, ccx).is_some() {
+                    let ty = substs.type_at(0);
+                    if int_type_width_signed(ty, ccx).is_some() {
                         let weak = if split[1] == "cxchgweak" { llvm::True } else { llvm::False };
-                        let val = bcx.atomic_cmpxchg(llargs[0], llargs[1], llargs[2], order,
-                            failorder, weak);
-                        let result = bcx.extract_value(val, 0);
-                        let success = bcx.zext(bcx.extract_value(val, 1), Type::bool(bcx.ccx));
-                        bcx.store(result, bcx.struct_gep(llresult, 0), None);
-                        bcx.store(success, bcx.struct_gep(llresult, 1), None);
+                        let pair = bcx.atomic_cmpxchg(
+                            args[0].immediate(),
+                            args[1].immediate(),
+                            args[2].immediate(),
+                            order,
+                            failorder,
+                            weak);
+                        let val = bcx.extract_value(pair, 0);
+                        let success = bcx.zext(bcx.extract_value(pair, 1), Type::bool(bcx.ccx));
+
+                        let dest = result.project_field(bcx, 0);
+                        bcx.store(val, dest.llval, dest.alignment.non_abi());
+                        let dest = result.project_field(bcx, 1);
+                        bcx.store(success, dest.llval, dest.alignment.non_abi());
+                        return;
                     } else {
-                        invalid_monomorphization(sty);
+                        return invalid_monomorphization(ty);
                     }
-                    C_nil(ccx)
                 }
 
                 "load" => {
-                    let sty = &substs.type_at(0).sty;
-                    if int_type_width_signed(sty, ccx).is_some() {
-                        bcx.atomic_load(llargs[0], order)
+                    let ty = substs.type_at(0);
+                    if int_type_width_signed(ty, ccx).is_some() {
+                        let align = ccx.align_of(ty);
+                        bcx.atomic_load(args[0].immediate(), order, align)
                     } else {
-                        invalid_monomorphization(sty);
-                        C_nil(ccx)
+                        return invalid_monomorphization(ty);
                     }
                 }
 
                 "store" => {
-                    let sty = &substs.type_at(0).sty;
-                    if int_type_width_signed(sty, ccx).is_some() {
-                        bcx.atomic_store(llargs[1], llargs[0], order);
+                    let ty = substs.type_at(0);
+                    if int_type_width_signed(ty, ccx).is_some() {
+                        let align = ccx.align_of(ty);
+                        bcx.atomic_store(args[1].immediate(), args[0].immediate(), order, align);
+                        return;
                     } else {
-                        invalid_monomorphization(sty);
+                        return invalid_monomorphization(ty);
                     }
-                    C_nil(ccx)
                 }
 
                 "fence" => {
                     bcx.atomic_fence(order, llvm::SynchronizationScope::CrossThread);
-                    C_nil(ccx)
+                    return;
                 }
 
                 "singlethreadfence" => {
                     bcx.atomic_fence(order, llvm::SynchronizationScope::SingleThread);
-                    C_nil(ccx)
+                    return;
                 }
 
                 // These are all AtomicRMW ops
@@ -508,12 +530,11 @@
                         _ => ccx.sess().fatal("unknown atomic operation")
                     };
 
-                    let sty = &substs.type_at(0).sty;
-                    if int_type_width_signed(sty, ccx).is_some() {
-                        bcx.atomic_rmw(atom_op, llargs[0], llargs[1], order)
+                    let ty = substs.type_at(0);
+                    if int_type_width_signed(ty, ccx).is_some() {
+                        bcx.atomic_rmw(atom_op, args[0].immediate(), args[1].immediate(), order)
                     } else {
-                        invalid_monomorphization(sty);
-                        C_nil(ccx)
+                        return invalid_monomorphization(ty);
                     }
                 }
             }
@@ -528,13 +549,11 @@
                 assert_eq!(x.len(), 1);
                 x.into_iter().next().unwrap()
             }
-            fn ty_to_type(ccx: &CrateContext, t: &intrinsics::Type,
-                          any_changes_needed: &mut bool) -> Vec<Type> {
+            fn ty_to_type(ccx: &CrateContext, t: &intrinsics::Type) -> Vec<Type> {
                 use intrinsics::Type::*;
                 match *t {
                     Void => vec![Type::void(ccx)],
-                    Integer(_signed, width, llvm_width) => {
-                        *any_changes_needed |= width != llvm_width;
+                    Integer(_signed, _width, llvm_width) => {
                         vec![Type::ix(ccx, llvm_width as u64)]
                     }
                     Float(x) => {
@@ -545,29 +564,24 @@
                         }
                     }
                     Pointer(ref t, ref llvm_elem, _const) => {
-                        *any_changes_needed |= llvm_elem.is_some();
-
                         let t = llvm_elem.as_ref().unwrap_or(t);
-                        let elem = one(ty_to_type(ccx, t, any_changes_needed));
+                        let elem = one(ty_to_type(ccx, t));
                         vec![elem.ptr_to()]
                     }
                     Vector(ref t, ref llvm_elem, length) => {
-                        *any_changes_needed |= llvm_elem.is_some();
-
                         let t = llvm_elem.as_ref().unwrap_or(t);
-                        let elem = one(ty_to_type(ccx, t, any_changes_needed));
+                        let elem = one(ty_to_type(ccx, t));
                         vec![Type::vector(&elem, length as u64)]
                     }
                     Aggregate(false, ref contents) => {
                         let elems = contents.iter()
-                                            .map(|t| one(ty_to_type(ccx, t, any_changes_needed)))
+                                            .map(|t| one(ty_to_type(ccx, t)))
                                             .collect::<Vec<_>>();
                         vec![Type::struct_(ccx, &elems, false)]
                     }
                     Aggregate(true, ref contents) => {
-                        *any_changes_needed = true;
                         contents.iter()
-                                .flat_map(|t| ty_to_type(ccx, t, any_changes_needed))
+                                .flat_map(|t| ty_to_type(ccx, t))
                                 .collect()
                     }
                 }
@@ -579,8 +593,7 @@
             // cast.
             fn modify_as_needed<'a, 'tcx>(bcx: &Builder<'a, 'tcx>,
                                           t: &intrinsics::Type,
-                                          arg_type: Ty<'tcx>,
-                                          llarg: ValueRef)
+                                          arg: &OperandRef<'tcx>)
                                           -> Vec<ValueRef>
             {
                 match *t {
@@ -591,55 +604,44 @@
                         // This assumes the type is "simple", i.e. no
                         // destructors, and the contents are SIMD
                         // etc.
-                        assert!(!bcx.ccx.shared().type_needs_drop(arg_type));
-                        let arg = LvalueRef::new_sized_ty(llarg, arg_type, Alignment::AbiAligned);
+                        assert!(!bcx.ccx.shared().type_needs_drop(arg.layout.ty));
+                        let (ptr, align) = match arg.val {
+                            OperandValue::Ref(ptr, align) => (ptr, align),
+                            _ => bug!()
+                        };
+                        let arg = LvalueRef::new_sized(ptr, arg.layout, align);
                         (0..contents.len()).map(|i| {
-                            let (ptr, align) = arg.trans_field_ptr(bcx, i);
-                            bcx.load(ptr, align.to_align())
+                            arg.project_field(bcx, i).load(bcx).immediate()
                         }).collect()
                     }
                     intrinsics::Type::Pointer(_, Some(ref llvm_elem), _) => {
-                        let llvm_elem = one(ty_to_type(bcx.ccx, llvm_elem, &mut false));
-                        vec![bcx.pointercast(llarg, llvm_elem.ptr_to())]
+                        let llvm_elem = one(ty_to_type(bcx.ccx, llvm_elem));
+                        vec![bcx.pointercast(arg.immediate(), llvm_elem.ptr_to())]
                     }
                     intrinsics::Type::Vector(_, Some(ref llvm_elem), length) => {
-                        let llvm_elem = one(ty_to_type(bcx.ccx, llvm_elem, &mut false));
-                        vec![bcx.bitcast(llarg, Type::vector(&llvm_elem, length as u64))]
+                        let llvm_elem = one(ty_to_type(bcx.ccx, llvm_elem));
+                        vec![bcx.bitcast(arg.immediate(), Type::vector(&llvm_elem, length as u64))]
                     }
                     intrinsics::Type::Integer(_, width, llvm_width) if width != llvm_width => {
                         // the LLVM intrinsic uses a smaller integer
                         // size than the C intrinsic's signature, so
                         // we have to trim it down here.
-                        vec![bcx.trunc(llarg, Type::ix(bcx.ccx, llvm_width as u64))]
+                        vec![bcx.trunc(arg.immediate(), Type::ix(bcx.ccx, llvm_width as u64))]
                     }
-                    _ => vec![llarg],
+                    _ => vec![arg.immediate()],
                 }
             }
 
 
-            let mut any_changes_needed = false;
             let inputs = intr.inputs.iter()
-                                    .flat_map(|t| ty_to_type(ccx, t, &mut any_changes_needed))
+                                    .flat_map(|t| ty_to_type(ccx, t))
                                     .collect::<Vec<_>>();
 
-            let mut out_changes = false;
-            let outputs = one(ty_to_type(ccx, &intr.output, &mut out_changes));
-            // outputting a flattened aggregate is nonsense
-            assert!(!out_changes);
+            let outputs = one(ty_to_type(ccx, &intr.output));
 
-            let llargs = if !any_changes_needed {
-                // no aggregates to flatten, so no change needed
-                llargs.to_vec()
-            } else {
-                // there are some aggregates that need to be flattened
-                // in the LLVM call, so we need to run over the types
-                // again to find them and extract the arguments
-                intr.inputs.iter()
-                           .zip(llargs)
-                           .zip(arg_tys)
-                           .flat_map(|((t, llarg), ty)| modify_as_needed(bcx, t, ty, *llarg))
-                           .collect()
-            };
+            let llargs: Vec<_> = intr.inputs.iter().zip(args).flat_map(|(t, arg)| {
+                modify_as_needed(bcx, t, arg)
+            }).collect();
             assert_eq!(inputs.len(), llargs.len());
 
             let val = match intr.definition {
@@ -657,25 +659,24 @@
                     assert!(!flatten);
 
                     for i in 0..elems.len() {
-                        let val = bcx.extract_value(val, i);
-                        let lval = LvalueRef::new_sized_ty(llresult, ret_ty,
-                                                           Alignment::AbiAligned);
-                        let (dest, align) = lval.trans_field_ptr(bcx, i);
-                        bcx.store(val, dest, align.to_align());
+                        let dest = result.project_field(bcx, i);
+                        let val = bcx.extract_value(val, i as u64);
+                        bcx.store(val, dest.llval, dest.alignment.non_abi());
                     }
-                    C_nil(ccx)
+                    return;
                 }
                 _ => val,
             }
         }
     };
 
-    if val_ty(llval) != Type::void(ccx) && machine::llsize_of_alloc(ccx, val_ty(llval)) != 0 {
-        if let Some(ty) = fn_ty.ret.cast {
-            let ptr = bcx.pointercast(llresult, ty.ptr_to());
+    if !fn_ty.ret.is_ignore() {
+        if let PassMode::Cast(ty) = fn_ty.ret.mode {
+            let ptr = bcx.pointercast(llresult, ty.llvm_type(ccx).ptr_to());
             bcx.store(llval, ptr, Some(ccx.align_of(ret_ty)));
         } else {
-            store_ty(bcx, llval, llresult, Alignment::AbiAligned, ret_ty);
+            OperandRef::from_immediate_or_packed_pair(bcx, llval, result.layout)
+                .val.store(bcx, result);
         }
     }
 }
@@ -683,16 +684,15 @@
 fn copy_intrinsic<'a, 'tcx>(bcx: &Builder<'a, 'tcx>,
                             allow_overlap: bool,
                             volatile: bool,
-                            tp_ty: Ty<'tcx>,
+                            ty: Ty<'tcx>,
                             dst: ValueRef,
                             src: ValueRef,
                             count: ValueRef)
                             -> ValueRef {
     let ccx = bcx.ccx;
-    let lltp_ty = type_of::type_of(ccx, tp_ty);
-    let align = C_i32(ccx, ccx.align_of(tp_ty) as i32);
-    let size = machine::llsize_of(ccx, lltp_ty);
-    let int_size = machine::llbitsize_of_real(ccx, ccx.isize_ty());
+    let (size, align) = ccx.size_and_align_of(ty);
+    let size = C_usize(ccx, size.bytes());
+    let align = C_i32(ccx, align.abi() as i32);
 
     let operation = if allow_overlap {
         "memmove"
@@ -700,7 +700,8 @@
         "memcpy"
     };
 
-    let name = format!("llvm.{}.p0i8.p0i8.i{}", operation, int_size);
+    let name = format!("llvm.{}.p0i8.p0i8.i{}", operation,
+                       ccx.data_layout().pointer_size.bits());
 
     let dst_ptr = bcx.pointercast(dst, Type::i8p(ccx));
     let src_ptr = bcx.pointercast(src, Type::i8p(ccx));
@@ -724,9 +725,9 @@
     count: ValueRef
 ) -> ValueRef {
     let ccx = bcx.ccx;
-    let align = C_i32(ccx, ccx.align_of(ty) as i32);
-    let lltp_ty = type_of::type_of(ccx, ty);
-    let size = machine::llsize_of(ccx, lltp_ty);
+    let (size, align) = ccx.size_and_align_of(ty);
+    let size = C_usize(ccx, size.bytes());
+    let align = C_i32(ccx, align.abi() as i32);
     let dst = bcx.pointercast(dst, Type::i8p(ccx));
     call_memset(bcx, dst, val, bcx.mul(size, count), align, volatile)
 }
@@ -816,7 +817,7 @@
         //
         // More information can be found in libstd's seh.rs implementation.
         let i64p = Type::i64(ccx).ptr_to();
-        let slot = bcx.alloca(i64p, "slot", None);
+        let slot = bcx.alloca(i64p, "slot", ccx.data_layout().pointer_align);
         bcx.invoke(func, &[data], normal.llbb(), catchswitch.llbb(),
             None);
 
@@ -972,11 +973,11 @@
     bcx: &Builder<'a, 'tcx>,
     name: &str,
     callee_ty: Ty<'tcx>,
-    llargs: &[ValueRef],
+    args: &[OperandRef<'tcx>],
     ret_ty: Ty<'tcx>,
     llret_ty: Type,
     span: Span
-) -> ValueRef {
+) -> Result<ValueRef, ()> {
     // macros for error handling:
     macro_rules! emit_error {
         ($msg: tt) => {
@@ -994,7 +995,7 @@
         ($cond: expr, $($fmt: tt)*) => {
             if !$cond {
                 emit_error!($($fmt)*);
-                return C_nil(bcx.ccx)
+                return Err(());
             }
         }
     }
@@ -1040,12 +1041,12 @@
                  ret_ty,
                  ret_ty.simd_type(tcx));
 
-        return compare_simd_types(bcx,
-                                  llargs[0],
-                                  llargs[1],
-                                  in_elem,
-                                  llret_ty,
-                                  cmp_op)
+        return Ok(compare_simd_types(bcx,
+                                     args[0].immediate(),
+                                     args[1].immediate(),
+                                     in_elem,
+                                     llret_ty,
+                                     cmp_op))
     }
 
     if name.starts_with("simd_shuffle") {
@@ -1069,12 +1070,12 @@
 
         let total_len = in_len as u128 * 2;
 
-        let vector = llargs[2];
+        let vector = args[2].immediate();
 
         let indices: Option<Vec<_>> = (0..n)
             .map(|i| {
                 let arg_idx = i;
-                let val = const_get_elt(vector, &[i as libc::c_uint]);
+                let val = const_get_elt(vector, i as u64);
                 match const_to_opt_u128(val, true) {
                     None => {
                         emit_error!("shuffle index #{} is not a constant", arg_idx);
@@ -1091,23 +1092,27 @@
             .collect();
         let indices = match indices {
             Some(i) => i,
-            None => return C_null(llret_ty)
+            None => return Ok(C_null(llret_ty))
         };
 
-        return bcx.shuffle_vector(llargs[0], llargs[1], C_vector(&indices))
+        return Ok(bcx.shuffle_vector(args[0].immediate(),
+                                     args[1].immediate(),
+                                     C_vector(&indices)))
     }
 
     if name == "simd_insert" {
         require!(in_elem == arg_tys[2],
                  "expected inserted type `{}` (element of input `{}`), found `{}`",
                  in_elem, in_ty, arg_tys[2]);
-        return bcx.insert_element(llargs[0], llargs[2], llargs[1])
+        return Ok(bcx.insert_element(args[0].immediate(),
+                                     args[2].immediate(),
+                                     args[1].immediate()))
     }
     if name == "simd_extract" {
         require!(ret_ty == in_elem,
                  "expected return type `{}` (element of input `{}`), found `{}`",
                  in_elem, in_ty, ret_ty);
-        return bcx.extract_element(llargs[0], llargs[1])
+        return Ok(bcx.extract_element(args[0].immediate(), args[1].immediate()))
     }
 
     if name == "simd_cast" {
@@ -1121,7 +1126,7 @@
         // casting cares about nominal type, not just structural type
         let out_elem = ret_ty.simd_type(tcx);
 
-        if in_elem == out_elem { return llargs[0]; }
+        if in_elem == out_elem { return Ok(args[0].immediate()); }
 
         enum Style { Float, Int(/* is signed? */ bool), Unsupported }
 
@@ -1142,36 +1147,36 @@
 
         match (in_style, out_style) {
             (Style::Int(in_is_signed), Style::Int(_)) => {
-                return match in_width.cmp(&out_width) {
-                    Ordering::Greater => bcx.trunc(llargs[0], llret_ty),
-                    Ordering::Equal => llargs[0],
+                return Ok(match in_width.cmp(&out_width) {
+                    Ordering::Greater => bcx.trunc(args[0].immediate(), llret_ty),
+                    Ordering::Equal => args[0].immediate(),
                     Ordering::Less => if in_is_signed {
-                        bcx.sext(llargs[0], llret_ty)
+                        bcx.sext(args[0].immediate(), llret_ty)
                     } else {
-                        bcx.zext(llargs[0], llret_ty)
+                        bcx.zext(args[0].immediate(), llret_ty)
                     }
-                }
+                })
             }
             (Style::Int(in_is_signed), Style::Float) => {
-                return if in_is_signed {
-                    bcx.sitofp(llargs[0], llret_ty)
+                return Ok(if in_is_signed {
+                    bcx.sitofp(args[0].immediate(), llret_ty)
                 } else {
-                    bcx.uitofp(llargs[0], llret_ty)
-                }
+                    bcx.uitofp(args[0].immediate(), llret_ty)
+                })
             }
             (Style::Float, Style::Int(out_is_signed)) => {
-                return if out_is_signed {
-                    bcx.fptosi(llargs[0], llret_ty)
+                return Ok(if out_is_signed {
+                    bcx.fptosi(args[0].immediate(), llret_ty)
                 } else {
-                    bcx.fptoui(llargs[0], llret_ty)
-                }
+                    bcx.fptoui(args[0].immediate(), llret_ty)
+                })
             }
             (Style::Float, Style::Float) => {
-                return match in_width.cmp(&out_width) {
-                    Ordering::Greater => bcx.fptrunc(llargs[0], llret_ty),
-                    Ordering::Equal => llargs[0],
-                    Ordering::Less => bcx.fpext(llargs[0], llret_ty)
-                }
+                return Ok(match in_width.cmp(&out_width) {
+                    Ordering::Greater => bcx.fptrunc(args[0].immediate(), llret_ty),
+                    Ordering::Equal => args[0].immediate(),
+                    Ordering::Less => bcx.fpext(args[0].immediate(), llret_ty)
+                })
             }
             _ => {/* Unsupported. Fallthrough. */}
         }
@@ -1182,21 +1187,18 @@
     }
     macro_rules! arith {
         ($($name: ident: $($($p: ident),* => $call: ident),*;)*) => {
-            $(
-                if name == stringify!($name) {
-                    match in_elem.sty {
-                        $(
-                            $(ty::$p(_))|* => {
-                                return bcx.$call(llargs[0], llargs[1])
-                            }
-                            )*
-                        _ => {},
-                    }
-                    require!(false,
-                             "unsupported operation on `{}` with element `{}`",
-                             in_ty,
-                             in_elem)
-                })*
+            $(if name == stringify!($name) {
+                match in_elem.sty {
+                    $($(ty::$p(_))|* => {
+                        return Ok(bcx.$call(args[0].immediate(), args[1].immediate()))
+                    })*
+                    _ => {},
+                }
+                require!(false,
+                            "unsupported operation on `{}` with element `{}`",
+                            in_ty,
+                            in_elem)
+            })*
         }
     }
     arith! {
@@ -1214,15 +1216,13 @@
     span_bug!(span, "unknown SIMD intrinsic");
 }
 
-// Returns the width of an int TypeVariant, and if it's signed or not
+// Returns the width of an int Ty, and if it's signed or not
 // Returns None if the type is not an integer
 // FIXME: there are multiple versions of this function; investigate reusing one of
 // the existing ones.
-fn int_type_width_signed<'tcx>(sty: &ty::TypeVariants<'tcx>, ccx: &CrateContext)
-        -> Option<(u64, bool)> {
-    use rustc::ty::{TyInt, TyUint};
-    match *sty {
-        TyInt(t) => Some((match t {
+fn int_type_width_signed(ty: Ty, ccx: &CrateContext) -> Option<(u64, bool)> {
+    match ty.sty {
+        ty::TyInt(t) => Some((match t {
             ast::IntTy::Is => {
                 match &ccx.tcx().sess.target.target.target_pointer_width[..] {
                     "16" => 16,
@@ -1237,7 +1237,7 @@
             ast::IntTy::I64 => 64,
             ast::IntTy::I128 => 128,
         }, true)),
-        TyUint(t) => Some((match t {
+        ty::TyUint(t) => Some((match t {
             ast::UintTy::Us => {
                 match &ccx.tcx().sess.target.target.target_pointer_width[..] {
                     "16" => 16,
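The `align_offset` lowering in the hunk above emits a `ptrtoint`, a `urem`, a compare against zero, and a `select`. As a sanity check, here is that same arithmetic written as plain Rust over integers; `align_offset_lowering` is an illustrative name, not rustc API, and the pointer is modeled as an already-converted `usize`:

```rust
// Mirrors the IR sequence: ptr_val = ptrtoint(ptr); offset = urem(ptr_val, align);
// select(offset == 0, 0, sub(offset, align)). LLVM's `sub` wraps, so the
// non-zero branch uses wrapping_sub here.
fn align_offset_lowering(ptr_val: usize, align: usize) -> usize {
    let offset = ptr_val % align; // bcx.urem(ptr_val, align)
    if offset == 0 {
        0 // bcx.select(is_zero, zero, ...)
    } else {
        offset.wrapping_sub(align) // bcx.sub(offset, align)
    }
}

fn main() {
    // An aligned pointer yields 0; a misaligned one yields the wrapped difference.
    assert_eq!(align_offset_lowering(16, 8), 0);
    assert_eq!(align_offset_lowering(17, 8), 1usize.wrapping_sub(8));
    println!("ok");
}
```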
diff --git a/src/librustc_trans/lib.rs b/src/librustc_trans/lib.rs
index ae25e7d..923d935 100644
--- a/src/librustc_trans/lib.rs
+++ b/src/librustc_trans/lib.rs
@@ -25,6 +25,8 @@
 #![allow(unused_attributes)]
 #![feature(i128_type)]
 #![feature(i128)]
+#![feature(inclusive_range)]
+#![feature(inclusive_range_syntax)]
 #![feature(libc)]
 #![feature(quote)]
 #![feature(rustc_diagnostic_macros)]
@@ -104,7 +106,6 @@
 }
 
 mod abi;
-mod adt;
 mod allocator;
 mod asm;
 mod assert_module_sources;
@@ -137,7 +138,6 @@
 mod glue;
 mod intrinsic;
 mod llvm_util;
-mod machine;
 mod metadata;
 mod meth;
 mod mir;
@@ -145,7 +145,6 @@
 mod symbol_names_test;
 mod time_graph;
 mod trans_item;
-mod tvec;
 mod type_;
 mod type_of;
 mod value;
diff --git a/src/librustc_trans/machine.rs b/src/librustc_trans/machine.rs
deleted file mode 100644
index bc383ab..0000000
--- a/src/librustc_trans/machine.rs
+++ /dev/null
@@ -1,79 +0,0 @@
-// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// Information concerning the machine representation of various types.
-
-#![allow(non_camel_case_types)]
-
-use llvm::{self, ValueRef};
-use common::*;
-
-use type_::Type;
-
-pub type llbits = u64;
-pub type llsize = u64;
-pub type llalign = u32;
-
-// ______________________________________________________________________
-// compute sizeof / alignof
-
-// Returns the number of bytes between successive elements of type T in an
-// array of T. This is the "ABI" size. It includes any ABI-mandated padding.
-pub fn llsize_of_alloc(cx: &CrateContext, ty: Type) -> llsize {
-    unsafe {
-        return llvm::LLVMABISizeOfType(cx.td(), ty.to_ref());
-    }
-}
-
-/// Returns the "real" size of the type in bits.
-pub fn llbitsize_of_real(cx: &CrateContext, ty: Type) -> llbits {
-    unsafe {
-        llvm::LLVMSizeOfTypeInBits(cx.td(), ty.to_ref())
-    }
-}
-
-/// Returns the size of the type as an LLVM constant integer value.
-pub fn llsize_of(cx: &CrateContext, ty: Type) -> ValueRef {
-    // Once upon a time, this called LLVMSizeOf, which does a
-    // getelementptr(1) on a null pointer and casts to an int, in
-    // order to obtain the type size as a value without requiring the
-    // target data layout.  But we have the target data layout, so
-    // there's no need for that contrivance.  The instruction
-    // selection DAG generator would flatten that GEP(1) node into a
-    // constant of the type's alloc size, so let's save it some work.
-    return C_usize(cx, llsize_of_alloc(cx, ty));
-}
-
-// Returns the preferred alignment of the given type for the current target.
-// The preferred alignment may be larger than the alignment used when
-// packing the type into structs. This will be used for things like
-// allocations inside a stack frame, which LLVM has a free hand in.
-pub fn llalign_of_pref(cx: &CrateContext, ty: Type) -> llalign {
-    unsafe {
-        return llvm::LLVMPreferredAlignmentOfType(cx.td(), ty.to_ref());
-    }
-}
-
-// Returns the minimum alignment of a type required by the platform.
-// This is the alignment that will be used for struct fields, arrays,
-// and similar ABI-mandated things.
-pub fn llalign_of_min(cx: &CrateContext, ty: Type) -> llalign {
-    unsafe {
-        return llvm::LLVMABIAlignmentOfType(cx.td(), ty.to_ref());
-    }
-}
-
-pub fn llelement_offset(cx: &CrateContext, struct_ty: Type, element: usize) -> u64 {
-    unsafe {
-        return llvm::LLVMOffsetOfElement(cx.td(),
-                                         struct_ty.to_ref(),
-                                         element as u32);
-    }
-}
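With `machine.rs` deleted, `copy_intrinsic` no longer derives the integer suffix of the LLVM memory intrinsic from `llbitsize_of_real`; it now reads the pointer size from the target data layout. A minimal sketch of the resulting name construction, where `pointer_size_bits` stands in for `ccx.data_layout().pointer_size.bits()`:

```rust
// Builds the name of the LLVM memory intrinsic the new copy_intrinsic calls,
// e.g. "llvm.memcpy.p0i8.p0i8.i64" on a 64-bit target.
fn memcpy_intrinsic_name(allow_overlap: bool, pointer_size_bits: u64) -> String {
    // memmove tolerates overlapping src/dst; memcpy does not.
    let operation = if allow_overlap { "memmove" } else { "memcpy" };
    format!("llvm.{}.p0i8.p0i8.i{}", operation, pointer_size_bits)
}

fn main() {
    assert_eq!(memcpy_intrinsic_name(false, 64), "llvm.memcpy.p0i8.p0i8.i64");
    assert_eq!(memcpy_intrinsic_name(true, 32), "llvm.memmove.p0i8.p0i8.i32");
    println!("ok");
}
```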
diff --git a/src/librustc_trans/meth.rs b/src/librustc_trans/meth.rs
index 3253a03..a7d467f 100644
--- a/src/librustc_trans/meth.rs
+++ b/src/librustc_trans/meth.rs
@@ -9,18 +9,20 @@
 // except according to those terms.
 
 use llvm::ValueRef;
+use abi::FnType;
 use callee;
 use common::*;
 use builder::Builder;
 use consts;
-use machine;
 use monomorphize;
 use type_::Type;
 use value::Value;
 use rustc::ty::{self, Ty};
+use rustc::ty::layout::HasDataLayout;
+use debuginfo;
 
 #[derive(Copy, Clone, Debug)]
-pub struct VirtualIndex(usize);
+pub struct VirtualIndex(u64);
 
 pub const DESTRUCTOR: VirtualIndex = VirtualIndex(0);
 pub const SIZE: VirtualIndex = VirtualIndex(1);
@@ -28,14 +30,18 @@
 
 impl<'a, 'tcx> VirtualIndex {
     pub fn from_index(index: usize) -> Self {
-        VirtualIndex(index + 3)
+        VirtualIndex(index as u64 + 3)
     }
 
-    pub fn get_fn(self, bcx: &Builder<'a, 'tcx>, llvtable: ValueRef) -> ValueRef {
+    pub fn get_fn(self, bcx: &Builder<'a, 'tcx>,
+                  llvtable: ValueRef,
+                  fn_ty: &FnType<'tcx>) -> ValueRef {
         // Load the data pointer from the object.
         debug!("get_fn({:?}, {:?})", Value(llvtable), self);
 
-        let ptr = bcx.load_nonnull(bcx.gepi(llvtable, &[self.0]), None);
+        let llvtable = bcx.pointercast(llvtable, fn_ty.llvm_type(bcx.ccx).ptr_to().ptr_to());
+        let ptr = bcx.load(bcx.inbounds_gep(llvtable, &[C_usize(bcx.ccx, self.0)]), None);
+        bcx.nonnull_metadata(ptr);
         // Vtable loads are invariant
         bcx.set_invariant_load(ptr);
         ptr
@@ -46,7 +52,7 @@
         debug!("get_int({:?}, {:?})", Value(llvtable), self);
 
         let llvtable = bcx.pointercast(llvtable, Type::isize(bcx.ccx).ptr_to());
-        let ptr = bcx.load(bcx.gepi(llvtable, &[self.0]), None);
+        let ptr = bcx.load(bcx.inbounds_gep(llvtable, &[C_usize(bcx.ccx, self.0)]), None);
         // Vtable loads are invariant
         bcx.set_invariant_load(ptr);
         ptr
@@ -76,12 +82,13 @@
     }
 
     // Not in the cache. Build it.
-    let nullptr = C_null(Type::nil(ccx).ptr_to());
+    let nullptr = C_null(Type::i8p(ccx));
 
+    let (size, align) = ccx.size_and_align_of(ty);
     let mut components: Vec<_> = [
         callee::get_fn(ccx, monomorphize::resolve_drop_in_place(ccx.tcx(), ty)),
-        C_usize(ccx, ccx.size_of(ty)),
-        C_usize(ccx, ccx.align_of(ty) as u64)
+        C_usize(ccx, size.bytes()),
+        C_usize(ccx, align.abi())
     ].iter().cloned().collect();
 
     if let Some(trait_ref) = trait_ref {
@@ -96,9 +103,11 @@
     }
 
     let vtable_const = C_struct(ccx, &components, false);
-    let align = machine::llalign_of_pref(ccx, val_ty(vtable_const));
+    let align = ccx.data_layout().pointer_align;
     let vtable = consts::addr_of(ccx, vtable_const, align, "vtable");
 
+    debuginfo::create_vtable_metadata(ccx, ty, vtable);
+
     ccx.vtables().borrow_mut().insert((ty, trait_ref), vtable);
     vtable
 }
diff --git a/src/librustc_trans/mir/analyze.rs b/src/librustc_trans/mir/analyze.rs
index 73f60ff..2233795 100644
--- a/src/librustc_trans/mir/analyze.rs
+++ b/src/librustc_trans/mir/analyze.rs
@@ -18,7 +18,8 @@
 use rustc::mir::visit::{Visitor, LvalueContext};
 use rustc::mir::traversal;
 use rustc::ty;
-use common;
+use rustc::ty::layout::LayoutOf;
+use type_of::LayoutLlvmExt;
 use super::MirContext;
 
 pub fn lvalue_locals<'a, 'tcx>(mircx: &MirContext<'a, 'tcx>) -> BitVector {
@@ -30,21 +31,15 @@
     for (index, ty) in mir.local_decls.iter().map(|l| l.ty).enumerate() {
         let ty = mircx.monomorphize(&ty);
         debug!("local {} has type {:?}", index, ty);
-        if ty.is_scalar() ||
-            ty.is_box() ||
-            ty.is_region_ptr() ||
-            ty.is_simd() ||
-            common::type_is_zero_size(mircx.ccx, ty)
-        {
+        let layout = mircx.ccx.layout_of(ty);
+        if layout.is_llvm_immediate() {
             // These sorts of types are immediates that we can store
             // in an ValueRef without an alloca.
-            assert!(common::type_is_immediate(mircx.ccx, ty) ||
-                    common::type_is_fat_ptr(mircx.ccx, ty));
-        } else if common::type_is_imm_pair(mircx.ccx, ty) {
+        } else if layout.is_llvm_scalar_pair() {
             // We allow pairs and uses of any of their 2 fields.
         } else {
             // These sorts of types require an alloca. Note that
-            // type_is_immediate() may *still* be true, particularly
+            // is_llvm_immediate() may *still* be true, particularly
             // for newtypes, but we currently force some types
             // (e.g. structs) into an alloca unconditionally, just so
             // that we don't have to deal with having two pathways
@@ -141,18 +136,29 @@
                     context: LvalueContext<'tcx>,
                     location: Location) {
         debug!("visit_lvalue(lvalue={:?}, context={:?})", lvalue, context);
+        let ccx = self.cx.ccx;
 
         if let mir::Lvalue::Projection(ref proj) = *lvalue {
-            // Allow uses of projections of immediate pair fields.
+            // Allow uses of projections that are ZSTs or from scalar fields.
             if let LvalueContext::Consume = context {
-                if let mir::Lvalue::Local(_) = proj.base {
-                    if let mir::ProjectionElem::Field(..) = proj.elem {
-                        let ty = proj.base.ty(self.cx.mir, self.cx.ccx.tcx());
+                let base_ty = proj.base.ty(self.cx.mir, ccx.tcx());
+                let base_ty = self.cx.monomorphize(&base_ty);
 
-                        let ty = self.cx.monomorphize(&ty.to_ty(self.cx.ccx.tcx()));
-                        if common::type_is_imm_pair(self.cx.ccx, ty) {
-                            return;
-                        }
+                // ZSTs don't require any actual memory access.
+                let elem_ty = base_ty.projection_ty(ccx.tcx(), &proj.elem).to_ty(ccx.tcx());
+                let elem_ty = self.cx.monomorphize(&elem_ty);
+                if ccx.layout_of(elem_ty).is_zst() {
+                    return;
+                }
+
+                if let mir::ProjectionElem::Field(..) = proj.elem {
+                    let layout = ccx.layout_of(base_ty.to_ty(ccx.tcx()));
+                    if layout.is_llvm_immediate() || layout.is_llvm_scalar_pair() {
+                        // Recurse as a `Consume` instead of `Projection`,
+                        // potentially stopping at non-operand projections,
+                        // which would trigger `mark_as_lvalue` on locals.
+                        self.visit_lvalue(&proj.base, LvalueContext::Consume, location);
+                        return;
                     }
                 }
             }
@@ -178,9 +184,9 @@
             LvalueContext::StorageLive |
             LvalueContext::StorageDead |
             LvalueContext::Validate |
-            LvalueContext::Inspect |
             LvalueContext::Consume => {}
 
+            LvalueContext::Inspect |
             LvalueContext::Store |
             LvalueContext::Borrow { .. } |
             LvalueContext::Projection(..) => {
diff --git a/src/librustc_trans/mir/block.rs b/src/librustc_trans/mir/block.rs
index 11d992b..f43eba3 100644
--- a/src/librustc_trans/mir/block.rs
+++ b/src/librustc_trans/mir/block.rs
@@ -11,28 +11,24 @@
 use llvm::{self, ValueRef, BasicBlockRef};
 use rustc::middle::lang_items;
 use rustc::middle::const_val::{ConstEvalErr, ConstInt, ErrKind};
-use rustc::ty::{self, Ty, TypeFoldable};
-use rustc::ty::layout::{self, LayoutTyper};
+use rustc::ty::{self, TypeFoldable};
+use rustc::ty::layout::{self, LayoutOf};
 use rustc::traits;
 use rustc::mir;
-use abi::{Abi, FnType, ArgType};
-use adt;
-use base::{self, Lifetime};
+use abi::{Abi, FnType, ArgType, PassMode};
+use base;
 use callee;
 use builder::Builder;
 use common::{self, C_bool, C_str_slice, C_struct, C_u32, C_undef};
 use consts;
-use machine::llalign_of_min;
 use meth;
 use monomorphize;
-use type_of;
+use type_of::LayoutLlvmExt;
 use type_::Type;
 
 use syntax::symbol::Symbol;
 use syntax_pos::Pos;
 
-use std::cmp;
-
 use super::{MirContext, LocalRef};
 use super::constant::Const;
 use super::lvalue::{Alignment, LvalueRef};
@@ -120,11 +116,11 @@
             fn_ty: FnType<'tcx>,
             fn_ptr: ValueRef,
             llargs: &[ValueRef],
-            destination: Option<(ReturnDest, Ty<'tcx>, mir::BasicBlock)>,
+            destination: Option<(ReturnDest<'tcx>, mir::BasicBlock)>,
             cleanup: Option<mir::BasicBlock>
         | {
             if let Some(cleanup) = cleanup {
-                let ret_bcx = if let Some((_, _, target)) = destination {
+                let ret_bcx = if let Some((_, target)) = destination {
                     this.blocks[target]
                 } else {
                     this.unreachable_block()
@@ -136,14 +132,10 @@
                                            cleanup_bundle);
                 fn_ty.apply_attrs_callsite(invokeret);
 
-                if let Some((ret_dest, ret_ty, target)) = destination {
+                if let Some((ret_dest, target)) = destination {
                     let ret_bcx = this.get_builder(target);
                     this.set_debug_loc(&ret_bcx, terminator.source_info);
-                    let op = OperandRef {
-                        val: Immediate(invokeret),
-                        ty: ret_ty,
-                    };
-                    this.store_return(&ret_bcx, ret_dest, &fn_ty.ret, op);
+                    this.store_return(&ret_bcx, ret_dest, &fn_ty.ret, invokeret);
                 }
             } else {
                 let llret = bcx.call(fn_ptr, &llargs, cleanup_bundle);
@@ -156,12 +148,8 @@
                     llvm::Attribute::NoInline.apply_callsite(llvm::AttributePlace::Function, llret);
                 }
 
-                if let Some((ret_dest, ret_ty, target)) = destination {
-                    let op = OperandRef {
-                        val: Immediate(llret),
-                        ty: ret_ty,
-                    };
-                    this.store_return(&bcx, ret_dest, &fn_ty.ret, op);
+                if let Some((ret_dest, target)) = destination {
+                    this.store_return(&bcx, ret_dest, &fn_ty.ret, llret);
                     funclet_br(this, bcx, target);
                 } else {
                     bcx.unreachable();
@@ -175,14 +163,18 @@
                 if let Some(cleanup_pad) = cleanup_pad {
                     bcx.cleanup_ret(cleanup_pad, None);
                 } else {
-                    let ps = self.get_personality_slot(&bcx);
-                    let lp = bcx.load(ps, None);
-                    Lifetime::End.call(&bcx, ps);
+                    let slot = self.get_personality_slot(&bcx);
+                    let lp0 = slot.project_field(&bcx, 0).load(&bcx).immediate();
+                    let lp1 = slot.project_field(&bcx, 1).load(&bcx).immediate();
+                    slot.storage_dead(&bcx);
+
                     if !bcx.sess().target.target.options.custom_unwind_resume {
+                        let mut lp = C_undef(self.landing_pad_type());
+                        lp = bcx.insert_value(lp, lp0, 0);
+                        lp = bcx.insert_value(lp, lp1, 1);
                         bcx.resume(lp);
                     } else {
-                        let exc_ptr = bcx.extract_value(lp, 0);
-                        bcx.call(bcx.ccx.eh_unwind_resume(), &[exc_ptr], cleanup_bundle);
+                        bcx.call(bcx.ccx.eh_unwind_resume(), &[lp0], cleanup_bundle);
                         bcx.unreachable();
                     }
                 }
@@ -215,45 +207,47 @@
             }
 
             mir::TerminatorKind::Return => {
-                let ret = self.fn_ty.ret;
-                if ret.is_ignore() || ret.is_indirect() {
-                    bcx.ret_void();
-                    return;
-                }
+                let llval = match self.fn_ty.ret.mode {
+                    PassMode::Ignore | PassMode::Indirect(_) => {
+                        bcx.ret_void();
+                        return;
+                    }
 
-                let llval = if let Some(cast_ty) = ret.cast {
-                    let op = match self.locals[mir::RETURN_POINTER] {
-                        LocalRef::Operand(Some(op)) => op,
-                        LocalRef::Operand(None) => bug!("use of return before def"),
-                        LocalRef::Lvalue(tr_lvalue) => {
-                            OperandRef {
-                                val: Ref(tr_lvalue.llval, tr_lvalue.alignment),
-                                ty: tr_lvalue.ty.to_ty(bcx.tcx())
+                    PassMode::Direct(_) | PassMode::Pair(..) => {
+                        let op = self.trans_consume(&bcx, &mir::Lvalue::Local(mir::RETURN_POINTER));
+                        if let Ref(llval, align) = op.val {
+                            bcx.load(llval, align.non_abi())
+                        } else {
+                            op.immediate_or_packed_pair(&bcx)
+                        }
+                    }
+
+                    PassMode::Cast(cast_ty) => {
+                        let op = match self.locals[mir::RETURN_POINTER] {
+                            LocalRef::Operand(Some(op)) => op,
+                            LocalRef::Operand(None) => bug!("use of return before def"),
+                            LocalRef::Lvalue(tr_lvalue) => {
+                                OperandRef {
+                                    val: Ref(tr_lvalue.llval, tr_lvalue.alignment),
+                                    layout: tr_lvalue.layout
+                                }
                             }
-                        }
-                    };
-                    let llslot = match op.val {
-                        Immediate(_) | Pair(..) => {
-                            let llscratch = bcx.alloca(ret.memory_ty(bcx.ccx), "ret", None);
-                            self.store_operand(&bcx, llscratch, None, op);
-                            llscratch
-                        }
-                        Ref(llval, align) => {
-                            assert_eq!(align, Alignment::AbiAligned,
-                                       "return pointer is unaligned!");
-                            llval
-                        }
-                    };
-                    let load = bcx.load(
-                        bcx.pointercast(llslot, cast_ty.ptr_to()),
-                        Some(ret.layout.align(bcx.ccx).abi() as u32));
-                    load
-                } else {
-                    let op = self.trans_consume(&bcx, &mir::Lvalue::Local(mir::RETURN_POINTER));
-                    if let Ref(llval, align) = op.val {
-                        base::load_ty(&bcx, llval, align, op.ty)
-                    } else {
-                        op.pack_if_pair(&bcx).immediate()
+                        };
+                        let llslot = match op.val {
+                            Immediate(_) | Pair(..) => {
+                                let scratch = LvalueRef::alloca(&bcx, self.fn_ty.ret.layout, "ret");
+                                op.val.store(&bcx, scratch);
+                                scratch.llval
+                            }
+                            Ref(llval, align) => {
+                                assert_eq!(align, Alignment::AbiAligned,
+                                           "return pointer is unaligned!");
+                                llval
+                            }
+                        };
+                        bcx.load(
+                            bcx.pointercast(llslot, cast_ty.llvm_type(bcx.ccx).ptr_to()),
+                            Some(self.fn_ty.ret.layout.align))
                     }
                 };
                 bcx.ret(llval);
@@ -275,15 +269,24 @@
                 }
 
                 let lvalue = self.trans_lvalue(&bcx, location);
-                let fn_ty = FnType::of_instance(bcx.ccx, &drop_fn);
-                let (drop_fn, need_extra) = match ty.sty {
-                    ty::TyDynamic(..) => (meth::DESTRUCTOR.get_fn(&bcx, lvalue.llextra),
-                                          false),
-                    _ => (callee::get_fn(bcx.ccx, drop_fn), lvalue.has_extra())
+                let mut args: &[_] = &[lvalue.llval, lvalue.llextra];
+                args = &args[..1 + lvalue.has_extra() as usize];
+                let (drop_fn, fn_ty) = match ty.sty {
+                    ty::TyDynamic(..) => {
+                        let fn_ty = common::instance_ty(bcx.ccx.tcx(), &drop_fn);
+                        let sig = common::ty_fn_sig(bcx.ccx, fn_ty);
+                        let sig = bcx.tcx().erase_late_bound_regions_and_normalize(&sig);
+                        let fn_ty = FnType::new_vtable(bcx.ccx, sig, &[]);
+                        args = &args[..1];
+                        (meth::DESTRUCTOR.get_fn(&bcx, lvalue.llextra, &fn_ty), fn_ty)
+                    }
+                    _ => {
+                        (callee::get_fn(bcx.ccx, drop_fn),
+                         FnType::of_instance(bcx.ccx, &drop_fn))
+                    }
                 };
-                let args = &[lvalue.llval, lvalue.llextra][..1 + need_extra as usize];
                 do_call(self, bcx, fn_ty, drop_fn, args,
-                        Some((ReturnDest::Nothing, tcx.mk_nil(), target)),
+                        Some((ReturnDest::Nothing, target)),
                         unwind);
             }
 
@@ -336,6 +339,9 @@
                 let filename = C_str_slice(bcx.ccx, filename);
                 let line = C_u32(bcx.ccx, loc.line as u32);
                 let col = C_u32(bcx.ccx, loc.col.to_usize() as u32 + 1);
+                let align = tcx.data_layout.aggregate_align
+                    .max(tcx.data_layout.i32_align)
+                    .max(tcx.data_layout.pointer_align);
 
                 // Put together the arguments to the panic entry point.
                 let (lang_item, args, const_err) = match *msg {
@@ -351,7 +357,6 @@
                                 }));
 
                         let file_line_col = C_struct(bcx.ccx, &[filename, line, col], false);
-                        let align = llalign_of_min(bcx.ccx, common::val_ty(file_line_col));
                         let file_line_col = consts::addr_of(bcx.ccx,
                                                             file_line_col,
                                                             align,
@@ -366,7 +371,6 @@
                         let msg_file_line_col = C_struct(bcx.ccx,
                                                      &[msg_str, filename, line, col],
                                                      false);
-                        let align = llalign_of_min(bcx.ccx, common::val_ty(msg_file_line_col));
                         let msg_file_line_col = consts::addr_of(bcx.ccx,
                                                                 msg_file_line_col,
                                                                 align,
@@ -387,7 +391,6 @@
                         let msg_file_line_col = C_struct(bcx.ccx,
                                                      &[msg_str, filename, line, col],
                                                      false);
-                        let align = llalign_of_min(bcx.ccx, common::val_ty(msg_file_line_col));
                         let msg_file_line_col = consts::addr_of(bcx.ccx,
                                                                 msg_file_line_col,
                                                                 align,
@@ -428,7 +431,7 @@
                 // Create the callee. This is a fn ptr or zero-sized and hence a kind of scalar.
                 let callee = self.trans_operand(&bcx, func);
 
-                let (instance, mut llfn) = match callee.ty.sty {
+                let (instance, mut llfn) = match callee.layout.ty.sty {
                     ty::TyFnDef(def_id, substs) => {
                         (Some(ty::Instance::resolve(bcx.ccx.tcx(),
                                                     ty::ParamEnv::empty(traits::Reveal::All),
@@ -439,10 +442,10 @@
                     ty::TyFnPtr(_) => {
                         (None, Some(callee.immediate()))
                     }
-                    _ => bug!("{} is not callable", callee.ty)
+                    _ => bug!("{} is not callable", callee.layout.ty)
                 };
                 let def = instance.map(|i| i.def);
-                let sig = callee.ty.fn_sig(bcx.tcx());
+                let sig = callee.layout.ty.fn_sig(bcx.tcx());
                 let sig = bcx.tcx().erase_late_bound_regions_and_normalize(&sig);
                 let abi = sig.abi;
 
@@ -493,74 +496,51 @@
                     ReturnDest::Nothing
                 };
 
-                // Split the rust-call tupled arguments off.
-                let (first_args, untuple) = if abi == Abi::RustCall && !args.is_empty() {
-                    let (tup, args) = args.split_last().unwrap();
-                    (args, Some(tup))
-                } else {
-                    (&args[..], None)
-                };
-
-                let is_shuffle = intrinsic.map_or(false, |name| {
-                    name.starts_with("simd_shuffle")
-                });
-                let mut idx = 0;
-                for arg in first_args {
-                    // The indices passed to simd_shuffle* in the
-                    // third argument must be constant. This is
-                    // checked by const-qualification, which also
-                    // promotes any complex rvalues to constants.
-                    if is_shuffle && idx == 2 {
-                        match *arg {
-                            mir::Operand::Consume(_) => {
-                                span_bug!(span, "shuffle indices must be constant");
-                            }
-                            mir::Operand::Constant(ref constant) => {
-                                let val = self.trans_constant(&bcx, constant);
-                                llargs.push(val.llval);
-                                idx += 1;
-                                continue;
-                            }
-                        }
-                    }
-
-                    let op = self.trans_operand(&bcx, arg);
-                    self.trans_argument(&bcx, op, &mut llargs, &fn_ty,
-                                        &mut idx, &mut llfn, &def);
-                }
-                if let Some(tup) = untuple {
-                    self.trans_arguments_untupled(&bcx, tup, &mut llargs, &fn_ty,
-                                                  &mut idx, &mut llfn, &def)
-                }
-
                 if intrinsic.is_some() && intrinsic != Some("drop_in_place") {
                     use intrinsic::trans_intrinsic_call;
 
-                    let (dest, llargs) = match ret_dest {
-                        _ if fn_ty.ret.is_indirect() => {
-                            (llargs[0], &llargs[1..])
-                        }
+                    let dest = match ret_dest {
+                        _ if fn_ty.ret.is_indirect() => llargs[0],
                         ReturnDest::Nothing => {
-                            (C_undef(fn_ty.ret.memory_ty(bcx.ccx).ptr_to()), &llargs[..])
+                            C_undef(fn_ty.ret.memory_ty(bcx.ccx).ptr_to())
                         }
                         ReturnDest::IndirectOperand(dst, _) |
-                        ReturnDest::Store(dst) => (dst, &llargs[..]),
+                        ReturnDest::Store(dst) => dst.llval,
                         ReturnDest::DirectOperand(_) =>
                             bug!("Cannot use direct operand with an intrinsic call")
                     };
 
+                    let args: Vec<_> = args.iter().enumerate().map(|(i, arg)| {
+                        // The indices passed to simd_shuffle* in the
+                        // third argument must be constant. This is
+                        // checked by const-qualification, which also
+                        // promotes any complex rvalues to constants.
+                        if i == 2 && intrinsic.unwrap().starts_with("simd_shuffle") {
+                            match *arg {
+                                mir::Operand::Consume(_) => {
+                                    span_bug!(span, "shuffle indices must be constant");
+                                }
+                                mir::Operand::Constant(ref constant) => {
+                                    let val = self.trans_constant(&bcx, constant);
+                                    return OperandRef {
+                                        val: Immediate(val.llval),
+                                        layout: bcx.ccx.layout_of(val.ty)
+                                    };
+                                }
+                            }
+                        }
+
+                        self.trans_operand(&bcx, arg)
+                    }).collect();
+
+
                     let callee_ty = common::instance_ty(
                         bcx.ccx.tcx(), instance.as_ref().unwrap());
-                    trans_intrinsic_call(&bcx, callee_ty, &fn_ty, &llargs, dest,
+                    trans_intrinsic_call(&bcx, callee_ty, &fn_ty, &args, dest,
                                          terminator.source_info.span);
 
                     if let ReturnDest::IndirectOperand(dst, _) = ret_dest {
-                        // Make a fake operand for store_return
-                        let op = OperandRef {
-                            val: Ref(dst, Alignment::AbiAligned),
-                            ty: sig.output(),
-                        };
-                        self.store_return(&bcx, ret_dest, &fn_ty.ret, op);
+                        self.store_return(&bcx, ret_dest, &fn_ty.ret, dst.llval);
                     }
 
                     if let Some((_, target)) = *destination {
@@ -572,6 +552,40 @@
                     return;
                 }
 
+                // Split the rust-call tupled arguments off.
+                let (first_args, untuple) = if abi == Abi::RustCall && !args.is_empty() {
+                    let (tup, args) = args.split_last().unwrap();
+                    (args, Some(tup))
+                } else {
+                    (&args[..], None)
+                };
+
+                for (i, arg) in first_args.iter().enumerate() {
+                    let mut op = self.trans_operand(&bcx, arg);
+                    if let (0, Some(ty::InstanceDef::Virtual(_, idx))) = (i, def) {
+                        if let Pair(data_ptr, meta) = op.val {
+                            llfn = Some(meth::VirtualIndex::from_index(idx)
+                                .get_fn(&bcx, meta, &fn_ty));
+                            llargs.push(data_ptr);
+                            continue;
+                        }
+                    }
+
+                    // The callee needs to own the argument memory if we pass it
+                    // by-ref, so make a local copy of non-immediate constants.
+                    if let (&mir::Operand::Constant(_), Ref(..)) = (arg, op.val) {
+                        let tmp = LvalueRef::alloca(&bcx, op.layout, "const");
+                        op.val.store(&bcx, tmp);
+                        op.val = Ref(tmp.llval, tmp.alignment);
+                    }
+
+                    self.trans_argument(&bcx, op, &mut llargs, &fn_ty.args[i]);
+                }
+                if let Some(tup) = untuple {
+                    self.trans_arguments_untupled(&bcx, tup, &mut llargs,
+                        &fn_ty.args[first_args.len()..])
+                }
+
                 let fn_ptr = match (llfn, instance) {
                     (Some(llfn), _) => llfn,
                     (None, Some(instance)) => callee::get_fn(bcx.ccx, instance),
@@ -579,7 +593,7 @@
                 };
 
                 do_call(self, bcx, fn_ty, fn_ptr, &llargs,
-                        destination.as_ref().map(|&(_, target)| (ret_dest, sig.output(), target)),
+                        destination.as_ref().map(|&(_, target)| (ret_dest, target)),
                         cleanup);
             }
             mir::TerminatorKind::GeneratorDrop |
@@ -592,79 +606,73 @@
                       bcx: &Builder<'a, 'tcx>,
                       op: OperandRef<'tcx>,
                       llargs: &mut Vec<ValueRef>,
-                      fn_ty: &FnType<'tcx>,
-                      next_idx: &mut usize,
-                      llfn: &mut Option<ValueRef>,
-                      def: &Option<ty::InstanceDef<'tcx>>) {
-        if let Pair(a, b) = op.val {
-            // Treat the values in a fat pointer separately.
-            if common::type_is_fat_ptr(bcx.ccx, op.ty) {
-                let (ptr, meta) = (a, b);
-                if *next_idx == 0 {
-                    if let Some(ty::InstanceDef::Virtual(_, idx)) = *def {
-                        let llmeth = meth::VirtualIndex::from_index(idx).get_fn(bcx, meta);
-                        let llty = fn_ty.llvm_type(bcx.ccx).ptr_to();
-                        *llfn = Some(bcx.pointercast(llmeth, llty));
-                    }
-                }
-
-                let imm_op = |x| OperandRef {
-                    val: Immediate(x),
-                    // We won't be checking the type again.
-                    ty: bcx.tcx().types.err
-                };
-                self.trans_argument(bcx, imm_op(ptr), llargs, fn_ty, next_idx, llfn, def);
-                self.trans_argument(bcx, imm_op(meta), llargs, fn_ty, next_idx, llfn, def);
-                return;
-            }
-        }
-
-        let arg = &fn_ty.args[*next_idx];
-        *next_idx += 1;
-
+                      arg: &ArgType<'tcx>) {
         // Fill padding with undef value, where applicable.
         if let Some(ty) = arg.pad {
-            llargs.push(C_undef(ty));
+            llargs.push(C_undef(ty.llvm_type(bcx.ccx)));
         }
 
         if arg.is_ignore() {
             return;
         }
 
+        if let PassMode::Pair(..) = arg.mode {
+            match op.val {
+                Pair(a, b) => {
+                    llargs.push(a);
+                    llargs.push(b);
+                    return;
+                }
+                _ => bug!("trans_argument: {:?} invalid for pair argument", op)
+            }
+        }
+
         // Force by-ref if we have to load through a cast pointer.
         let (mut llval, align, by_ref) = match op.val {
             Immediate(_) | Pair(..) => {
-                if arg.is_indirect() || arg.cast.is_some() {
-                    let llscratch = bcx.alloca(arg.memory_ty(bcx.ccx), "arg", None);
-                    self.store_operand(bcx, llscratch, None, op);
-                    (llscratch, Alignment::AbiAligned, true)
-                } else {
-                    (op.pack_if_pair(bcx).immediate(), Alignment::AbiAligned, false)
+                match arg.mode {
+                    PassMode::Indirect(_) | PassMode::Cast(_) => {
+                        let scratch = LvalueRef::alloca(bcx, arg.layout, "arg");
+                        op.val.store(bcx, scratch);
+                        (scratch.llval, Alignment::AbiAligned, true)
+                    }
+                    _ => {
+                        (op.immediate_or_packed_pair(bcx), Alignment::AbiAligned, false)
+                    }
                 }
             }
-            Ref(llval, Alignment::Packed) if arg.is_indirect() => {
+            Ref(llval, align @ Alignment::Packed(_)) if arg.is_indirect() => {
                 // `foo(packed.large_field)`. We can't pass the (unaligned) field directly. I
                 // think that ATM (Rust 1.16) we only pass temporaries, but we shouldn't
                 // have scary latent bugs around.
 
-                let llscratch = bcx.alloca(arg.memory_ty(bcx.ccx), "arg", None);
-                base::memcpy_ty(bcx, llscratch, llval, op.ty, Some(1));
-                (llscratch, Alignment::AbiAligned, true)
+                let scratch = LvalueRef::alloca(bcx, arg.layout, "arg");
+                base::memcpy_ty(bcx, scratch.llval, llval, op.layout, align.non_abi());
+                (scratch.llval, Alignment::AbiAligned, true)
             }
             Ref(llval, align) => (llval, align, true)
         };
 
         if by_ref && !arg.is_indirect() {
             // Have to load the argument, maybe while casting it.
-            if arg.layout.ty == bcx.tcx().types.bool {
-                // We store bools as i8 so we need to truncate to i1.
-                llval = bcx.load_range_assert(llval, 0, 2, llvm::False, None);
-                llval = bcx.trunc(llval, Type::i1(bcx.ccx));
-            } else if let Some(ty) = arg.cast {
-                llval = bcx.load(bcx.pointercast(llval, ty.ptr_to()),
-                                 align.min_with(arg.layout.align(bcx.ccx).abi() as u32));
+            if let PassMode::Cast(ty) = arg.mode {
+                llval = bcx.load(bcx.pointercast(llval, ty.llvm_type(bcx.ccx).ptr_to()),
+                                 (align | Alignment::Packed(arg.layout.align))
+                                    .non_abi());
             } else {
-                llval = bcx.load(llval, align.to_align());
+                // We can't use `LvalueRef::load` here because the argument
+                // may have a type we don't treat as immediate, but the ABI
+                // used for this call is passing it by-value. In that case,
+                // the load would just produce `OperandValue::Ref` instead
+                // of the `OperandValue::Immediate` we need for the call.
+                llval = bcx.load(llval, align.non_abi());
+                if let layout::Abi::Scalar(ref scalar) = arg.layout.abi {
+                    if scalar.is_bool() {
+                        bcx.range_metadata(llval, 0..2);
+                    }
+                }
+                // We store bools as i8 so we need to truncate to i1.
+                llval = base::to_immediate(bcx, llval, arg.layout);
             }
         }
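The by-ref load path above has a subtlety worth spelling out: booleans occupy an `i8` in memory but are passed as `i1` immediates, so the load attaches range metadata promising the byte is 0 or 1 and then truncates. A minimal sketch of that narrowing, with `Imm` and `to_immediate` as illustrative stand-ins for the real LLVM values and `base::to_immediate`:

```rust
#[derive(Debug, PartialEq)]
enum Imm {
    I1(bool),
    I8(u8),
}

// Narrows a loaded value to its immediate form, the way the real
// `base::to_immediate` truncates a bool's i8 memory form back to i1.
fn to_immediate(loaded: Imm) -> Imm {
    match loaded {
        // Range metadata on the load guarantees the byte is 0 or 1.
        Imm::I8(b) => {
            assert!(b < 2, "bool byte must be 0 or 1");
            Imm::I1(b != 0)
        }
        other => other,
    }
}

fn main() {
    assert_eq!(to_immediate(Imm::I8(1)), Imm::I1(true));
    assert_eq!(to_immediate(Imm::I1(false)), Imm::I1(false));
}
```

The same asymmetry (wider storage type, narrower immediate type) is why the code can't simply reuse `LvalueRef::load` here.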
 
@@ -675,89 +683,36 @@
                                 bcx: &Builder<'a, 'tcx>,
                                 operand: &mir::Operand<'tcx>,
                                 llargs: &mut Vec<ValueRef>,
-                                fn_ty: &FnType<'tcx>,
-                                next_idx: &mut usize,
-                                llfn: &mut Option<ValueRef>,
-                                def: &Option<ty::InstanceDef<'tcx>>) {
+                                args: &[ArgType<'tcx>]) {
         let tuple = self.trans_operand(bcx, operand);
 
-        let arg_types = match tuple.ty.sty {
-            ty::TyTuple(ref tys, _) => tys,
-            _ => span_bug!(self.mir.span,
-                           "bad final argument to \"rust-call\" fn {:?}", tuple.ty)
-        };
-
         // Handle both by-ref and immediate tuples.
-        match tuple.val {
-            Ref(llval, align) => {
-                for (n, &ty) in arg_types.iter().enumerate() {
-                    let ptr = LvalueRef::new_sized_ty(llval, tuple.ty, align);
-                    let (ptr, align) = ptr.trans_field_ptr(bcx, n);
-                    let val = if common::type_is_fat_ptr(bcx.ccx, ty) {
-                        let (lldata, llextra) = base::load_fat_ptr(bcx, ptr, align, ty);
-                        Pair(lldata, llextra)
-                    } else {
-                        // trans_argument will load this if it needs to
-                        Ref(ptr, align)
-                    };
-                    let op = OperandRef {
-                        val,
-                        ty,
-                    };
-                    self.trans_argument(bcx, op, llargs, fn_ty, next_idx, llfn, def);
-                }
-
+        if let Ref(llval, align) = tuple.val {
+            let tuple_ptr = LvalueRef::new_sized(llval, tuple.layout, align);
+            for i in 0..tuple.layout.fields.count() {
+                let field_ptr = tuple_ptr.project_field(bcx, i);
+                self.trans_argument(bcx, field_ptr.load(bcx), llargs, &args[i]);
             }
-            Immediate(llval) => {
-                let l = bcx.ccx.layout_of(tuple.ty);
-                let v = if let layout::Univariant { ref variant, .. } = *l {
-                    variant
-                } else {
-                    bug!("Not a tuple.");
-                };
-                for (n, &ty) in arg_types.iter().enumerate() {
-                    let mut elem = bcx.extract_value(
-                        llval, adt::struct_llfields_index(v, n));
-                    // Truncate bools to i1, if needed
-                    if ty.is_bool() && common::val_ty(elem) != Type::i1(bcx.ccx) {
-                        elem = bcx.trunc(elem, Type::i1(bcx.ccx));
-                    }
-                    // If the tuple is immediate, the elements are as well
-                    let op = OperandRef {
-                        val: Immediate(elem),
-                        ty,
-                    };
-                    self.trans_argument(bcx, op, llargs, fn_ty, next_idx, llfn, def);
-                }
-            }
-            Pair(a, b) => {
-                let elems = [a, b];
-                for (n, &ty) in arg_types.iter().enumerate() {
-                    let mut elem = elems[n];
-                    // Truncate bools to i1, if needed
-                    if ty.is_bool() && common::val_ty(elem) != Type::i1(bcx.ccx) {
-                        elem = bcx.trunc(elem, Type::i1(bcx.ccx));
-                    }
-                    // Pair is always made up of immediates
-                    let op = OperandRef {
-                        val: Immediate(elem),
-                        ty,
-                    };
-                    self.trans_argument(bcx, op, llargs, fn_ty, next_idx, llfn, def);
-                }
+        } else {
+            // If the tuple is immediate, the elements are as well.
+            for i in 0..tuple.layout.fields.count() {
+                let op = tuple.extract_field(bcx, i);
+                self.trans_argument(bcx, op, llargs, &args[i]);
             }
         }
-
     }
 
-    fn get_personality_slot(&mut self, bcx: &Builder<'a, 'tcx>) -> ValueRef {
+    fn get_personality_slot(&mut self, bcx: &Builder<'a, 'tcx>) -> LvalueRef<'tcx> {
         let ccx = bcx.ccx;
-        if let Some(slot) = self.llpersonalityslot {
+        if let Some(slot) = self.personality_slot {
             slot
         } else {
-            let llretty = Type::struct_(ccx, &[Type::i8p(ccx), Type::i32(ccx)], false);
-            let slot = bcx.alloca(llretty, "personalityslot", None);
-            self.llpersonalityslot = Some(slot);
+            let layout = ccx.layout_of(ccx.tcx().intern_tup(&[
+                ccx.tcx().mk_mut_ptr(ccx.tcx().types.u8),
+                ccx.tcx().types.i32
+            ], false));
+            let slot = LvalueRef::alloca(bcx, layout, "personalityslot");
+            self.personality_slot = Some(slot);
             slot
         }
     }
@@ -783,18 +738,24 @@
 
         let bcx = self.new_block("cleanup");
 
-        let ccx = bcx.ccx;
         let llpersonality = self.ccx.eh_personality();
-        let llretty = Type::struct_(ccx, &[Type::i8p(ccx), Type::i32(ccx)], false);
-        let llretval = bcx.landing_pad(llretty, llpersonality, 1, self.llfn);
-        bcx.set_cleanup(llretval);
+        let llretty = self.landing_pad_type();
+        let lp = bcx.landing_pad(llretty, llpersonality, 1, self.llfn);
+        bcx.set_cleanup(lp);
+
         let slot = self.get_personality_slot(&bcx);
-        Lifetime::Start.call(&bcx, slot);
-        bcx.store(llretval, slot, None);
+        slot.storage_live(&bcx);
+        Pair(bcx.extract_value(lp, 0), bcx.extract_value(lp, 1)).store(&bcx, slot);
+
         bcx.br(target_bb);
         bcx.llbb()
     }
 
+    fn landing_pad_type(&self) -> Type {
+        let ccx = self.ccx;
+        Type::struct_(ccx, &[Type::i8p(ccx), Type::i32(ccx)], false)
+    }
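`get_personality_slot` above lazily creates one alloca, typed as the `(*mut u8, i32)` pair a landing pad produces, and caches it so every cleanup block reuses the same slot. A simplified model of that caching, where `usize` stands in for the LLVM pointer value and the struct is a stand-in for the real `FunctionCx`:

```rust
// Lazily-created, cached personality slot: (data pointer, selector).
struct FunctionCx {
    personality_slot: Option<(usize, i32)>,
}

impl FunctionCx {
    fn get_personality_slot(&mut self) -> &mut (usize, i32) {
        if self.personality_slot.is_none() {
            // First use: allocate the slot; later calls reuse it.
            self.personality_slot = Some((0, 0));
        }
        self.personality_slot.as_mut().unwrap()
    }
}

fn main() {
    let mut fx = FunctionCx { personality_slot: None };
    *fx.get_personality_slot() = (0xdead, 1);
    // The second call returns the same cached slot, not a fresh one.
    assert_eq!(*fx.get_personality_slot(), (0xdead, 1));
}
```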
+
     fn unreachable_block(&mut self) -> BasicBlockRef {
         self.unreachable_block.unwrap_or_else(|| {
             let bl = self.new_block("unreachable");
@@ -815,31 +776,33 @@
     }
 
     fn make_return_dest(&mut self, bcx: &Builder<'a, 'tcx>,
-                        dest: &mir::Lvalue<'tcx>, fn_ret_ty: &ArgType,
-                        llargs: &mut Vec<ValueRef>, is_intrinsic: bool) -> ReturnDest {
+                        dest: &mir::Lvalue<'tcx>, fn_ret: &ArgType<'tcx>,
+                        llargs: &mut Vec<ValueRef>, is_intrinsic: bool)
+                        -> ReturnDest<'tcx> {
         // If the return is ignored, we can just return a do-nothing ReturnDest
-        if fn_ret_ty.is_ignore() {
+        if fn_ret.is_ignore() {
             return ReturnDest::Nothing;
         }
         let dest = if let mir::Lvalue::Local(index) = *dest {
-            let ret_ty = self.monomorphized_lvalue_ty(dest);
             match self.locals[index] {
                 LocalRef::Lvalue(dest) => dest,
                 LocalRef::Operand(None) => {
                     // Handle temporary lvalues, specifically Operand ones, as
                     // they don't have allocas
-                    return if fn_ret_ty.is_indirect() {
+                    return if fn_ret.is_indirect() {
                         // Odd, but possible, case, we have an operand temporary,
                         // but the calling convention has an indirect return.
-                        let tmp = LvalueRef::alloca(bcx, ret_ty, "tmp_ret");
+                        let tmp = LvalueRef::alloca(bcx, fn_ret.layout, "tmp_ret");
+                        tmp.storage_live(bcx);
                         llargs.push(tmp.llval);
-                        ReturnDest::IndirectOperand(tmp.llval, index)
+                        ReturnDest::IndirectOperand(tmp, index)
                     } else if is_intrinsic {
                         // Currently, intrinsics always need a location to store
                     // the result, so we create a temporary alloca for the
                         // result
-                        let tmp = LvalueRef::alloca(bcx, ret_ty, "tmp_ret");
-                        ReturnDest::IndirectOperand(tmp.llval, index)
+                        let tmp = LvalueRef::alloca(bcx, fn_ret.layout, "tmp_ret");
+                        tmp.storage_live(bcx);
+                        ReturnDest::IndirectOperand(tmp, index)
                     } else {
                         ReturnDest::DirectOperand(index)
                     };
@@ -851,13 +814,13 @@
         } else {
             self.trans_lvalue(bcx, dest)
         };
-        if fn_ret_ty.is_indirect() {
+        if fn_ret.is_indirect() {
             match dest.alignment {
                 Alignment::AbiAligned => {
                     llargs.push(dest.llval);
                     ReturnDest::Nothing
                 },
-                Alignment::Packed => {
+                Alignment::Packed(_) => {
                     // Currently, MIR code generation does not create calls
                     // that store directly to fields of packed structs (in
                     // fact, the calls it creates write only to temps),
@@ -868,7 +831,7 @@
                 }
             }
         } else {
-            ReturnDest::Store(dest.llval)
+            ReturnDest::Store(dest)
         }
     }
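`make_return_dest` above distinguishes direct returns from indirect ones, where the ABI returns through caller-provided memory and the callee receives a hidden out-pointer as an extra argument. A minimal illustration of the indirect convention (the function names are made up for this sketch):

```rust
// Indirect return: the caller allocates the return slot and passes a
// pointer to it; the callee writes through that pointer instead of
// returning a value in registers.
fn callee_indirect(out: &mut [u8; 16]) {
    *out = [1; 16];
}

fn main() {
    let mut ret = [0u8; 16];   // the caller-side "tmp_ret" slot
    callee_indirect(&mut ret); // hidden pointer argument
    assert_eq!(ret, [1u8; 16]);
}
```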
 
@@ -877,63 +840,67 @@
                        dst: &mir::Lvalue<'tcx>) {
         if let mir::Lvalue::Local(index) = *dst {
             match self.locals[index] {
-                LocalRef::Lvalue(lvalue) => self.trans_transmute_into(bcx, src, &lvalue),
+                LocalRef::Lvalue(lvalue) => self.trans_transmute_into(bcx, src, lvalue),
                 LocalRef::Operand(None) => {
-                    let lvalue_ty = self.monomorphized_lvalue_ty(dst);
-                    assert!(!lvalue_ty.has_erasable_regions());
-                    let lvalue = LvalueRef::alloca(bcx, lvalue_ty, "transmute_temp");
-                    self.trans_transmute_into(bcx, src, &lvalue);
-                    let op = self.trans_load(bcx, lvalue.llval, lvalue.alignment, lvalue_ty);
+                    let dst_layout = bcx.ccx.layout_of(self.monomorphized_lvalue_ty(dst));
+                    assert!(!dst_layout.ty.has_erasable_regions());
+                    let lvalue = LvalueRef::alloca(bcx, dst_layout, "transmute_temp");
+                    lvalue.storage_live(bcx);
+                    self.trans_transmute_into(bcx, src, lvalue);
+                    let op = lvalue.load(bcx);
+                    lvalue.storage_dead(bcx);
                     self.locals[index] = LocalRef::Operand(Some(op));
                 }
-                LocalRef::Operand(Some(_)) => {
-                    let ty = self.monomorphized_lvalue_ty(dst);
-                    assert!(common::type_is_zero_size(bcx.ccx, ty),
+                LocalRef::Operand(Some(op)) => {
+                    assert!(op.layout.is_zst(),
                             "assigning to initialized SSAtemp");
                 }
             }
         } else {
             let dst = self.trans_lvalue(bcx, dst);
-            self.trans_transmute_into(bcx, src, &dst);
+            self.trans_transmute_into(bcx, src, dst);
         }
     }
 
     fn trans_transmute_into(&mut self, bcx: &Builder<'a, 'tcx>,
                             src: &mir::Operand<'tcx>,
-                            dst: &LvalueRef<'tcx>) {
-        let val = self.trans_operand(bcx, src);
-        let llty = type_of::type_of(bcx.ccx, val.ty);
+                            dst: LvalueRef<'tcx>) {
+        let src = self.trans_operand(bcx, src);
+        let llty = src.layout.llvm_type(bcx.ccx);
         let cast_ptr = bcx.pointercast(dst.llval, llty.ptr_to());
-        let in_type = val.ty;
-        let out_type = dst.ty.to_ty(bcx.tcx());
-        let llalign = cmp::min(bcx.ccx.align_of(in_type), bcx.ccx.align_of(out_type));
-        self.store_operand(bcx, cast_ptr, Some(llalign), val);
+        let align = src.layout.align.min(dst.layout.align);
+        src.val.store(bcx,
+            LvalueRef::new_sized(cast_ptr, src.layout, Alignment::Packed(align)));
     }
 
 
     // Stores the return value of a function call into its final location.
     fn store_return(&mut self,
                     bcx: &Builder<'a, 'tcx>,
-                    dest: ReturnDest,
+                    dest: ReturnDest<'tcx>,
                     ret_ty: &ArgType<'tcx>,
-                    op: OperandRef<'tcx>) {
+                    llval: ValueRef) {
         use self::ReturnDest::*;
 
         match dest {
             Nothing => (),
-            Store(dst) => ret_ty.store(bcx, op.immediate(), dst),
+            Store(dst) => ret_ty.store(bcx, llval, dst),
             IndirectOperand(tmp, index) => {
-                let op = self.trans_load(bcx, tmp, Alignment::AbiAligned, op.ty);
+                let op = tmp.load(bcx);
+                tmp.storage_dead(bcx);
                 self.locals[index] = LocalRef::Operand(Some(op));
             }
             DirectOperand(index) => {
                 // If there is a cast, we have to store and reload.
-                let op = if ret_ty.cast.is_some() {
-                    let tmp = LvalueRef::alloca(bcx, op.ty, "tmp_ret");
-                    ret_ty.store(bcx, op.immediate(), tmp.llval);
-                    self.trans_load(bcx, tmp.llval, tmp.alignment, op.ty)
+                let op = if let PassMode::Cast(_) = ret_ty.mode {
+                    let tmp = LvalueRef::alloca(bcx, ret_ty.layout, "tmp_ret");
+                    tmp.storage_live(bcx);
+                    ret_ty.store(bcx, llval, tmp);
+                    let op = tmp.load(bcx);
+                    tmp.storage_dead(bcx);
+                    op
                 } else {
-                    op.unpack_if_pair(bcx)
+                    OperandRef::from_immediate_or_packed_pair(bcx, llval, ret_ty.layout)
                 };
                 self.locals[index] = LocalRef::Operand(Some(op));
             }
@@ -941,13 +908,13 @@
     }
 }
 
-enum ReturnDest {
+enum ReturnDest<'tcx> {
     // Do nothing, the return value is indirect or ignored
     Nothing,
     // Store the return value to the pointer
-    Store(ValueRef),
+    Store(LvalueRef<'tcx>),
     // Stores an indirect return value to an operand local lvalue
-    IndirectOperand(ValueRef, mir::Local),
+    IndirectOperand(LvalueRef<'tcx>, mir::Local),
     // Stores a direct return value to an operand local lvalue
     DirectOperand(mir::Local)
 }
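The `ReturnDest` enum above drives the dispatch in `store_return`: a call's result either goes nowhere, into a memory destination, through a temporary that is reloaded into a local, or straight into an operand local. A simplified model of that dispatch, where plain slices stand in for locals and allocas and the indices are illustrative:

```rust
enum ReturnDest {
    Nothing,
    Store(usize),                  // write into a memory destination
    IndirectOperand(usize, usize), // reload from a temporary into a local
    DirectOperand(usize),          // write the value straight into a local
}

fn store_return(locals: &mut [Option<i64>], memory: &mut [i64],
                dest: ReturnDest, val: i64) {
    match dest {
        ReturnDest::Nothing => {}
        ReturnDest::Store(addr) => memory[addr] = val,
        ReturnDest::IndirectOperand(tmp, index) => {
            // The callee already wrote through the hidden pointer `tmp`,
            // so the value to reload lives in memory, not in `val`.
            locals[index] = Some(memory[tmp]);
        }
        ReturnDest::DirectOperand(index) => locals[index] = Some(val),
    }
}

fn main() {
    let mut locals = [None, None];
    let mut memory = [0i64, 99];
    store_return(&mut locals, &mut memory, ReturnDest::Store(0), 5);
    assert_eq!(memory[0], 5);
    store_return(&mut locals, &mut memory, ReturnDest::IndirectOperand(1, 0), 0);
    assert_eq!(locals[0], Some(99));
    store_return(&mut locals, &mut memory, ReturnDest::DirectOperand(1), 7);
    assert_eq!(locals[1], Some(7));
}
```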
diff --git a/src/librustc_trans/mir/constant.rs b/src/librustc_trans/mir/constant.rs
index 6573e50..8c01333 100644
--- a/src/librustc_trans/mir/constant.rs
+++ b/src/librustc_trans/mir/constant.rs
@@ -18,21 +18,21 @@
 use rustc::mir;
 use rustc::mir::tcx::LvalueTy;
 use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
-use rustc::ty::layout::{self, LayoutTyper};
+use rustc::ty::layout::{self, LayoutOf, Size};
 use rustc::ty::cast::{CastTy, IntTy};
 use rustc::ty::subst::{Kind, Substs, Subst};
 use rustc_apfloat::{ieee, Float, Status};
 use rustc_data_structures::indexed_vec::{Idx, IndexVec};
-use {adt, base, machine};
+use base;
 use abi::{self, Abi};
 use callee;
 use builder::Builder;
 use common::{self, CrateContext, const_get_elt, val_ty};
-use common::{C_array, C_bool, C_bytes, C_int, C_uint, C_big_integral, C_u32, C_u64};
-use common::{C_null, C_struct, C_str_slice, C_undef, C_usize, C_vector, is_undef};
+use common::{C_array, C_bool, C_bytes, C_int, C_uint, C_uint_big, C_u32, C_u64};
+use common::{C_null, C_struct, C_str_slice, C_undef, C_usize, C_vector, C_fat_ptr};
 use common::const_to_opt_u128;
 use consts;
-use type_of;
+use type_of::LayoutLlvmExt;
 use type_::Type;
 use value::Value;
 
@@ -55,7 +55,7 @@
     pub ty: Ty<'tcx>
 }
 
-impl<'tcx> Const<'tcx> {
+impl<'a, 'tcx> Const<'tcx> {
     pub fn new(llval: ValueRef, ty: Ty<'tcx>) -> Const<'tcx> {
         Const {
             llval,
@@ -63,32 +63,31 @@
         }
     }
 
-    pub fn from_constint<'a>(ccx: &CrateContext<'a, 'tcx>, ci: &ConstInt)
-    -> Const<'tcx> {
+    pub fn from_constint(ccx: &CrateContext<'a, 'tcx>, ci: &ConstInt) -> Const<'tcx> {
         let tcx = ccx.tcx();
         let (llval, ty) = match *ci {
             I8(v) => (C_int(Type::i8(ccx), v as i64), tcx.types.i8),
             I16(v) => (C_int(Type::i16(ccx), v as i64), tcx.types.i16),
             I32(v) => (C_int(Type::i32(ccx), v as i64), tcx.types.i32),
             I64(v) => (C_int(Type::i64(ccx), v as i64), tcx.types.i64),
-            I128(v) => (C_big_integral(Type::i128(ccx), v as u128), tcx.types.i128),
+            I128(v) => (C_uint_big(Type::i128(ccx), v as u128), tcx.types.i128),
             Isize(v) => (C_int(Type::isize(ccx), v.as_i64()), tcx.types.isize),
             U8(v) => (C_uint(Type::i8(ccx), v as u64), tcx.types.u8),
             U16(v) => (C_uint(Type::i16(ccx), v as u64), tcx.types.u16),
             U32(v) => (C_uint(Type::i32(ccx), v as u64), tcx.types.u32),
             U64(v) => (C_uint(Type::i64(ccx), v), tcx.types.u64),
-            U128(v) => (C_big_integral(Type::i128(ccx), v), tcx.types.u128),
+            U128(v) => (C_uint_big(Type::i128(ccx), v), tcx.types.u128),
             Usize(v) => (C_uint(Type::isize(ccx), v.as_u64()), tcx.types.usize),
         };
         Const { llval: llval, ty: ty }
     }
 
     /// Translate ConstVal into a LLVM constant value.
-    pub fn from_constval<'a>(ccx: &CrateContext<'a, 'tcx>,
-                             cv: &ConstVal,
-                             ty: Ty<'tcx>)
-                             -> Const<'tcx> {
-        let llty = type_of::type_of(ccx, ty);
+    pub fn from_constval(ccx: &CrateContext<'a, 'tcx>,
+                         cv: &ConstVal,
+                         ty: Ty<'tcx>)
+                         -> Const<'tcx> {
+        let llty = ccx.layout_of(ty).llvm_type(ccx);
         let val = match *cv {
             ConstVal::Float(v) => {
                 let bits = match v.ty {
@@ -100,9 +99,11 @@
             ConstVal::Bool(v) => C_bool(ccx, v),
             ConstVal::Integral(ref i) => return Const::from_constint(ccx, i),
             ConstVal::Str(ref v) => C_str_slice(ccx, v.clone()),
-            ConstVal::ByteStr(v) => consts::addr_of(ccx, C_bytes(ccx, v.data), 1, "byte_str"),
+            ConstVal::ByteStr(v) => {
+                consts::addr_of(ccx, C_bytes(ccx, v.data), ccx.align_of(ty), "byte_str")
+            }
             ConstVal::Char(c) => C_uint(Type::char(ccx), c as u64),
-            ConstVal::Function(..) => C_null(type_of::type_of(ccx, ty)),
+            ConstVal::Function(..) => C_undef(llty),
             ConstVal::Variant(_) |
             ConstVal::Aggregate(..) |
             ConstVal::Unevaluated(..) => {
@@ -115,15 +116,44 @@
         Const::new(val, ty)
     }
 
-    fn get_pair(&self) -> (ValueRef, ValueRef) {
-        (const_get_elt(self.llval, &[0]),
-         const_get_elt(self.llval, &[1]))
+    fn get_field(&self, ccx: &CrateContext<'a, 'tcx>, i: usize) -> ValueRef {
+        let layout = ccx.layout_of(self.ty);
+        let field = layout.field(ccx, i);
+        if field.is_zst() {
+            return C_undef(field.immediate_llvm_type(ccx));
+        }
+        match layout.abi {
+            layout::Abi::Scalar(_) => self.llval,
+            layout::Abi::ScalarPair(ref a, ref b) => {
+                let offset = layout.fields.offset(i);
+                if offset.bytes() == 0 {
+                    if field.size == layout.size {
+                        self.llval
+                    } else {
+                        assert_eq!(field.size, a.value.size(ccx));
+                        const_get_elt(self.llval, 0)
+                    }
+                } else {
+                    assert_eq!(offset, a.value.size(ccx)
+                        .abi_align(b.value.align(ccx)));
+                    assert_eq!(field.size, b.value.size(ccx));
+                    const_get_elt(self.llval, 1)
+                }
+            }
+            _ => {
+                const_get_elt(self.llval, layout.llvm_field_index(i))
+            }
+        }
     }
 
-    fn get_fat_ptr(&self) -> (ValueRef, ValueRef) {
+    fn get_pair(&self, ccx: &CrateContext<'a, 'tcx>) -> (ValueRef, ValueRef) {
+        (self.get_field(ccx, 0), self.get_field(ccx, 1))
+    }
+
+    fn get_fat_ptr(&self, ccx: &CrateContext<'a, 'tcx>) -> (ValueRef, ValueRef) {
         assert_eq!(abi::FAT_PTR_ADDR, 0);
         assert_eq!(abi::FAT_PTR_EXTRA, 1);
-        self.get_pair()
+        self.get_pair(ccx)
     }
 
     fn as_lvalue(&self) -> ConstLvalue<'tcx> {
@@ -134,14 +164,16 @@
         }
     }
 
-    pub fn to_operand<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> OperandRef<'tcx> {
-        let llty = type_of::immediate_type_of(ccx, self.ty);
+    pub fn to_operand(&self, ccx: &CrateContext<'a, 'tcx>) -> OperandRef<'tcx> {
+        let layout = ccx.layout_of(self.ty);
+        let llty = layout.immediate_llvm_type(ccx);
         let llvalty = val_ty(self.llval);
 
-        let val = if llty == llvalty && common::type_is_imm_pair(ccx, self.ty) {
-            let (a, b) = self.get_pair();
-            OperandValue::Pair(a, b)
-        } else if llty == llvalty && common::type_is_immediate(ccx, self.ty) {
+        let val = if llty == llvalty && layout.is_llvm_scalar_pair() {
+            OperandValue::Pair(
+                const_get_elt(self.llval, 0),
+                const_get_elt(self.llval, 1))
+        } else if llty == llvalty && layout.is_llvm_immediate() {
             // If the types match, we can use the value directly.
             OperandValue::Immediate(self.llval)
         } else {
@@ -149,12 +181,13 @@
             // a constant LLVM global and cast its address if necessary.
             let align = ccx.align_of(self.ty);
             let ptr = consts::addr_of(ccx, self.llval, align, "const");
-            OperandValue::Ref(consts::ptrcast(ptr, llty.ptr_to()), Alignment::AbiAligned)
+            OperandValue::Ref(consts::ptrcast(ptr, layout.llvm_type(ccx).ptr_to()),
+                              Alignment::AbiAligned)
         };
 
         OperandRef {
             val,
-            ty: self.ty
+            layout: ccx.layout_of(self.ty)
         }
     }
 }
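`Const::get_field` above relies on a property of `ScalarPair` layouts: there are exactly two components, a field at offset 0 maps to component 0, and any other field must start where the second component does (the first component's size rounded up to the second's alignment). A hedged sketch of that index choice, using plain byte counts instead of real layout data:

```rust
// Round `size` up to a multiple of `align`, like `Size::abi_align`.
fn abi_align(size: u64, align: u64) -> u64 {
    (size + align - 1) / align * align
}

// Which ScalarPair component (0 or 1) holds the field at `offset`.
// Hypothetical simplification of the checks in `Const::get_field`.
fn pair_component(offset: u64, a_size: u64, b_align: u64) -> usize {
    if offset == 0 {
        0
    } else {
        // Mirrors the assert: the second field starts at a's size,
        // aligned up to b's alignment.
        assert_eq!(offset, abi_align(a_size, b_align));
        1
    }
}

fn main() {
    // e.g. (*mut u8, i32) on a 64-bit target: a is 8 bytes, b is 4-aligned.
    assert_eq!(pair_component(0, 8, 4), 0);
    assert_eq!(pair_component(8, 8, 4), 1);
}
```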
@@ -368,12 +401,12 @@
                             match &tcx.item_name(def_id)[..] {
                                 "size_of" => {
                                     let llval = C_usize(self.ccx,
-                                        self.ccx.size_of(substs.type_at(0)));
+                                        self.ccx.size_of(substs.type_at(0)).bytes());
                                     Ok(Const::new(llval, tcx.types.usize))
                                 }
                                 "min_align_of" => {
                                     let llval = C_usize(self.ccx,
-                                        self.ccx.align_of(substs.type_at(0)) as u64);
+                                        self.ccx.align_of(substs.type_at(0)).abi());
                                     Ok(Const::new(llval, tcx.types.usize))
                                 }
                                 _ => span_bug!(span, "{:?} in constant", terminator.kind)
@@ -436,7 +469,7 @@
                         let (base, extra) = if !has_metadata {
                             (base.llval, ptr::null_mut())
                         } else {
-                            base.get_fat_ptr()
+                            base.get_fat_ptr(self.ccx)
                         };
                         if self.ccx.statics().borrow().contains_key(&base) {
                             (Base::Static(base), extra)
@@ -450,9 +483,10 @@
                                 span_bug!(span, "dereference of non-constant pointer `{:?}`",
                                           Value(base));
                             }
-                            if projected_ty.is_bool() {
+                            let layout = self.ccx.layout_of(projected_ty);
+                            if let layout::Abi::Scalar(ref scalar) = layout.abi {
                                 let i1_type = Type::i1(self.ccx);
-                                if val_ty(val) != i1_type {
+                                if scalar.is_bool() && val_ty(val) != i1_type {
                                     unsafe {
                                         val = llvm::LLVMConstTrunc(val, i1_type.to_ref());
                                     }
@@ -462,8 +496,7 @@
                         }
                     }
                     mir::ProjectionElem::Field(ref field, _) => {
-                        let llprojected = adt::const_get_field(self.ccx, tr_base.ty, base.llval,
-                                                               field.index());
+                        let llprojected = base.get_field(self.ccx, field.index());
                         let llextra = if !has_metadata {
                             ptr::null_mut()
                         } else {
@@ -484,9 +517,9 @@
                         // Produce an undef instead of a LLVM assertion on OOB.
                         let len = common::const_to_uint(tr_base.len(self.ccx));
                         let llelem = if iv < len as u128 {
-                            const_get_elt(base.llval, &[iv as u32])
+                            const_get_elt(base.llval, iv as u64)
                         } else {
-                            C_undef(type_of::type_of(self.ccx, projected_ty))
+                            C_undef(self.ccx.layout_of(projected_ty).llvm_type(self.ccx))
                         };
 
                         (Base::Value(llelem), ptr::null_mut())
@@ -540,7 +573,7 @@
         let elem_ty = array_ty.builtin_index().unwrap_or_else(|| {
             bug!("bad array type {:?}", array_ty)
         });
-        let llunitty = type_of::type_of(self.ccx, elem_ty);
+        let llunitty = self.ccx.layout_of(elem_ty).llvm_type(self.ccx);
         // If the array contains enums, an LLVM array won't work.
         let val = if fields.iter().all(|&f| val_ty(f) == llunitty) {
             C_array(llunitty, fields)
@@ -566,7 +599,7 @@
                 self.const_array(dest_ty, &fields)
             }
 
-            mir::Rvalue::Aggregate(ref kind, ref operands) => {
+            mir::Rvalue::Aggregate(box mir::AggregateKind::Array(_), ref operands) => {
                 // Make sure to evaluate all operands to
                 // report as many errors as we possibly can.
                 let mut fields = Vec::with_capacity(operands.len());
@@ -579,17 +612,23 @@
                 }
                 failure?;
 
-                match **kind {
-                    mir::AggregateKind::Array(_) => {
-                        self.const_array(dest_ty, &fields)
-                    }
-                    mir::AggregateKind::Adt(..) |
-                    mir::AggregateKind::Closure(..) |
-                    mir::AggregateKind::Generator(..) |
-                    mir::AggregateKind::Tuple => {
-                        Const::new(trans_const(self.ccx, dest_ty, kind, &fields), dest_ty)
+                self.const_array(dest_ty, &fields)
+            }
+
+            mir::Rvalue::Aggregate(ref kind, ref operands) => {
+                // Make sure to evaluate all operands to
+                // report as many errors as we possibly can.
+                let mut fields = Vec::with_capacity(operands.len());
+                let mut failure = Ok(());
+                for operand in operands {
+                    match self.const_operand(operand, span) {
+                        Ok(val) => fields.push(val),
+                        Err(err) => if failure.is_ok() { failure = Err(err); }
                     }
                 }
+                failure?;
+
+                trans_const_adt(self.ccx, dest_ty, kind, &fields)
             }
 
             mir::Rvalue::Cast(ref kind, ref source, cast_ty) => {
@@ -635,10 +674,6 @@
                         operand.llval
                     }
                     mir::CastKind::Unsize => {
-                        // unsize targets other than to a fat pointer currently
-                        // can't be in constants.
-                        assert!(common::type_is_fat_ptr(self.ccx, cast_ty));
-
                         let pointee_ty = operand.ty.builtin_deref(true, ty::NoPreference)
                             .expect("consts: unsizing got non-pointer type").ty;
                         let (base, old_info) = if !self.ccx.shared().type_is_sized(pointee_ty) {
@@ -648,7 +683,7 @@
                             // to use a different vtable. In that case, we want to
                             // load out the original data pointer so we can repackage
                             // it.
-                            let (base, extra) = operand.get_fat_ptr();
+                            let (base, extra) = operand.get_fat_ptr(self.ccx);
                             (base, Some(extra))
                         } else {
                             (operand.llval, None)
@@ -656,7 +691,7 @@
 
                         let unsized_ty = cast_ty.builtin_deref(true, ty::NoPreference)
                             .expect("consts: unsizing got non-pointer target type").ty;
-                        let ptr_ty = type_of::in_memory_type_of(self.ccx, unsized_ty).ptr_to();
+                        let ptr_ty = self.ccx.layout_of(unsized_ty).llvm_type(self.ccx).ptr_to();
                         let base = consts::ptrcast(base, ptr_ty);
                         let info = base::unsized_info(self.ccx, pointee_ty,
                                                       unsized_ty, old_info);
@@ -666,22 +701,23 @@
                                                      .insert(base, operand.llval);
                             assert!(prev_const.is_none() || prev_const == Some(operand.llval));
                         }
-                        assert_eq!(abi::FAT_PTR_ADDR, 0);
-                        assert_eq!(abi::FAT_PTR_EXTRA, 1);
-                        C_struct(self.ccx, &[base, info], false)
+                        C_fat_ptr(self.ccx, base, info)
                     }
-                    mir::CastKind::Misc if common::type_is_immediate(self.ccx, operand.ty) => {
-                        debug_assert!(common::type_is_immediate(self.ccx, cast_ty));
+                    mir::CastKind::Misc if self.ccx.layout_of(operand.ty).is_llvm_immediate() => {
                         let r_t_in = CastTy::from_ty(operand.ty).expect("bad input type for cast");
                         let r_t_out = CastTy::from_ty(cast_ty).expect("bad output type for cast");
-                        let ll_t_out = type_of::immediate_type_of(self.ccx, cast_ty);
+                        let cast_layout = self.ccx.layout_of(cast_ty);
+                        assert!(cast_layout.is_llvm_immediate());
+                        let ll_t_out = cast_layout.immediate_llvm_type(self.ccx);
                         let llval = operand.llval;
-                        let signed = if let CastTy::Int(IntTy::CEnum) = r_t_in {
-                            let l = self.ccx.layout_of(operand.ty);
-                            adt::is_discr_signed(&l)
-                        } else {
-                            operand.ty.is_signed()
-                        };
+
+                        let mut signed = false;
+                        let l = self.ccx.layout_of(operand.ty);
+                        if let layout::Abi::Scalar(ref scalar) = l.abi {
+                            if let layout::Int(_, true) = scalar.value {
+                                signed = true;
+                            }
+                        }
 
                         unsafe {
                             match (r_t_in, r_t_out) {
@@ -720,20 +756,19 @@
                         }
                     }
                     mir::CastKind::Misc => { // Casts from a fat-ptr.
-                        let ll_cast_ty = type_of::immediate_type_of(self.ccx, cast_ty);
-                        let ll_from_ty = type_of::immediate_type_of(self.ccx, operand.ty);
-                        if common::type_is_fat_ptr(self.ccx, operand.ty) {
-                            let (data_ptr, meta_ptr) = operand.get_fat_ptr();
-                            if common::type_is_fat_ptr(self.ccx, cast_ty) {
-                                let ll_cft = ll_cast_ty.field_types();
-                                let ll_fft = ll_from_ty.field_types();
-                                let data_cast = consts::ptrcast(data_ptr, ll_cft[0]);
-                                assert_eq!(ll_cft[1].kind(), ll_fft[1].kind());
-                                C_struct(self.ccx, &[data_cast, meta_ptr], false)
+                        let l = self.ccx.layout_of(operand.ty);
+                        let cast = self.ccx.layout_of(cast_ty);
+                        if l.is_llvm_scalar_pair() {
+                            let (data_ptr, meta) = operand.get_fat_ptr(self.ccx);
+                            if cast.is_llvm_scalar_pair() {
+                                let data_cast = consts::ptrcast(data_ptr,
+                                    cast.scalar_pair_element_llvm_type(self.ccx, 0));
+                                C_fat_ptr(self.ccx, data_cast, meta)
                             } else { // cast to thin-ptr
                                 // Cast of fat-ptr to thin-ptr is an extraction of data-ptr and
                                 // pointer-cast of that pointer to desired pointer type.
-                                consts::ptrcast(data_ptr, ll_cast_ty)
+                                let llcast_ty = cast.immediate_llvm_type(self.ccx);
+                                consts::ptrcast(data_ptr, llcast_ty)
                             }
                         } else {
                             bug!("Unexpected non-fat-pointer operand")
@@ -756,7 +791,7 @@
                         let align = if self.ccx.shared().type_is_sized(ty) {
                             self.ccx.align_of(ty)
                         } else {
-                            self.ccx.tcx().data_layout.pointer_align.abi() as machine::llalign
+                            self.ccx.tcx().data_layout.pointer_align
                         };
                         if bk == mir::BorrowKind::Mut {
                             consts::addr_of_mut(self.ccx, llval, align, "ref_mut")
@@ -771,7 +806,7 @@
                 let ptr = if self.ccx.shared().type_is_sized(ty) {
                     base
                 } else {
-                    C_struct(self.ccx, &[base, tr_lvalue.llextra], false)
+                    C_fat_ptr(self.ccx, base, tr_lvalue.llextra)
                 };
                 Const::new(ptr, ref_ty)
             }
@@ -801,8 +836,10 @@
 
                 match const_scalar_checked_binop(tcx, op, lhs, rhs, ty) {
                     Some((llval, of)) => {
-                        let llof = C_bool(self.ccx, of);
-                        Const::new(C_struct(self.ccx, &[llval, llof], false), binop_ty)
+                        trans_const_adt(self.ccx, binop_ty, &mir::AggregateKind::Tuple, &[
+                            Const::new(llval, val_ty),
+                            Const::new(C_bool(self.ccx, of), tcx.types.bool)
+                        ])
                     }
                     None => {
                         span_bug!(span, "{:?} got non-integer operands: {:?} and {:?}",
@@ -836,7 +873,7 @@
 
             mir::Rvalue::NullaryOp(mir::NullOp::SizeOf, ty) => {
                 assert!(self.ccx.shared().type_is_sized(ty));
-                let llval = C_usize(self.ccx, self.ccx.size_of(ty));
+                let llval = C_usize(self.ccx, self.ccx.size_of(ty).bytes());
                 Const::new(llval, tcx.types.usize)
             }
 
@@ -986,7 +1023,7 @@
         let err = ConstEvalErr { span: span, kind: ErrKind::CannotCast };
         err.report(ccx.tcx(), span, "expression");
     }
-    C_big_integral(int_ty, cast_result.value)
+    C_uint_big(int_ty, cast_result.value)
 }
 
 unsafe fn cast_const_int_to_float(ccx: &CrateContext,
@@ -1037,7 +1074,7 @@
 
         let result = result.unwrap_or_else(|_| {
             // We've errored, so we don't have to produce working code.
-            let llty = type_of::type_of(bcx.ccx, ty);
+            let llty = bcx.ccx.layout_of(ty).llvm_type(bcx.ccx);
             Const::new(C_undef(llty), ty)
         });
 
@@ -1075,19 +1112,41 @@
 /// Currently the returned value has the same size as the type, but
 /// this could be changed in the future to avoid allocating unnecessary
 /// space after values of shorter-than-maximum cases.
-fn trans_const<'a, 'tcx>(
+fn trans_const_adt<'a, 'tcx>(
     ccx: &CrateContext<'a, 'tcx>,
     t: Ty<'tcx>,
     kind: &mir::AggregateKind,
-    vals: &[ValueRef]
-) -> ValueRef {
+    vals: &[Const<'tcx>]
+) -> Const<'tcx> {
     let l = ccx.layout_of(t);
     let variant_index = match *kind {
         mir::AggregateKind::Adt(_, index, _, _) => index,
         _ => 0,
     };
-    match *l {
-        layout::CEnum { discr: d, min, max, .. } => {
+
+    if let layout::Abi::Uninhabited = l.abi {
+        return Const::new(C_undef(l.llvm_type(ccx)), t);
+    }
+
+    match l.variants {
+        layout::Variants::Single { index } => {
+            assert_eq!(variant_index, index);
+            if let layout::Abi::Vector = l.abi {
+                Const::new(C_vector(&vals.iter().map(|x| x.llval).collect::<Vec<_>>()), t)
+            } else if let layout::FieldPlacement::Union(_) = l.fields {
+                assert_eq!(variant_index, 0);
+                assert_eq!(vals.len(), 1);
+                let contents = [
+                    vals[0].llval,
+                    padding(ccx, l.size - ccx.size_of(vals[0].ty))
+                ];
+
+                Const::new(C_struct(ccx, &contents, l.is_packed()), t)
+            } else {
+                build_const_struct(ccx, l, vals, None)
+            }
+        }
+        layout::Variants::Tagged { .. } => {
             let discr = match *kind {
                 mir::AggregateKind::Adt(adt_def, _, _, _) => {
                     adt_def.discriminant_for_variant(ccx.tcx(), variant_index)
@@ -1095,114 +1154,103 @@
                 },
                 _ => 0,
             };
-            assert_eq!(vals.len(), 0);
-            adt::assert_discr_in_range(min, max, discr);
-            C_int(Type::from_integer(ccx, d), discr as i64)
-        }
-        layout::General { discr: d, ref variants, .. } => {
-            let variant = &variants[variant_index];
-            let lldiscr = C_int(Type::from_integer(ccx, d), variant_index as i64);
-            let mut vals_with_discr = vec![lldiscr];
-            vals_with_discr.extend_from_slice(vals);
-            let mut contents = build_const_struct(ccx, &variant, &vals_with_discr[..]);
-            let needed_padding = l.size(ccx).bytes() - variant.stride().bytes();
-            if needed_padding > 0 {
-                contents.push(padding(ccx, needed_padding));
-            }
-            C_struct(ccx, &contents[..], false)
-        }
-        layout::UntaggedUnion { ref variants, .. }=> {
-            assert_eq!(variant_index, 0);
-            let contents = build_const_union(ccx, variants, vals[0]);
-            C_struct(ccx, &contents, variants.packed)
-        }
-        layout::Univariant { ref variant, .. } => {
-            assert_eq!(variant_index, 0);
-            let contents = build_const_struct(ccx, &variant, vals);
-            C_struct(ccx, &contents[..], variant.packed)
-        }
-        layout::Vector { .. } => {
-            C_vector(vals)
-        }
-        layout::RawNullablePointer { nndiscr, .. } => {
-            if variant_index as u64 == nndiscr {
-                assert_eq!(vals.len(), 1);
-                vals[0]
+            let discr_field = l.field(ccx, 0);
+            let discr = C_int(discr_field.llvm_type(ccx), discr as i64);
+            if let layout::Abi::Scalar(_) = l.abi {
+                Const::new(discr, t)
             } else {
-                C_null(type_of::type_of(ccx, t))
+                let discr = Const::new(discr, discr_field.ty);
+                build_const_struct(ccx, l.for_variant(ccx, variant_index), vals, Some(discr))
             }
         }
-        layout::StructWrappedNullablePointer { ref nonnull, nndiscr, .. } => {
-            if variant_index as u64 == nndiscr {
-                C_struct(ccx, &build_const_struct(ccx, &nonnull, vals), false)
+        layout::Variants::NicheFilling {
+            dataful_variant,
+            ref niche_variants,
+            niche_start,
+            ..
+        } => {
+            if variant_index == dataful_variant {
+                build_const_struct(ccx, l.for_variant(ccx, dataful_variant), vals, None)
             } else {
-                // Always use null even if it's not the `discrfield`th
-                // field; see #8506.
-                C_null(type_of::type_of(ccx, t))
+                let niche = l.field(ccx, 0);
+                let niche_llty = niche.llvm_type(ccx);
+                let niche_value = ((variant_index - niche_variants.start) as u128)
+                    .wrapping_add(niche_start);
+                // FIXME(eddyb) Check the actual primitive type here.
+                let niche_llval = if niche_value == 0 {
+                    // HACK(eddyb) Using `C_null` as it works on all types.
+                    C_null(niche_llty)
+                } else {
+                    C_uint_big(niche_llty, niche_value)
+                };
+                build_const_struct(ccx, l, &[Const::new(niche_llval, niche.ty)], None)
             }
         }
-        _ => bug!("trans_const: cannot handle type {} repreented as {:#?}", t, l)
     }
 }
 
 /// Building structs is a little complicated, because we might need to
 /// insert padding if a field's value is less aligned than its type.
 ///
-/// Continuing the example from `trans_const`, a value of type `(u32,
+/// Continuing the example from `trans_const_adt`, a value of type `(u32,
 /// E)` should have the `E` at offset 8, but if that field's
 /// initializer is 4-byte aligned then simply translating the tuple as
 /// a two-element struct will locate it at offset 4, and accesses to it
 /// will read the wrong memory.
 fn build_const_struct<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
-                                st: &layout::Struct,
-                                vals: &[ValueRef])
-                                -> Vec<ValueRef> {
-    assert_eq!(vals.len(), st.offsets.len());
+                                layout: layout::TyLayout<'tcx>,
+                                vals: &[Const<'tcx>],
+                                discr: Option<Const<'tcx>>)
+                                -> Const<'tcx> {
+    assert_eq!(vals.len(), layout.fields.count());
 
-    if vals.len() == 0 {
-        return Vec::new();
+    match layout.abi {
+        layout::Abi::Scalar(_) |
+        layout::Abi::ScalarPair(..) if discr.is_none() => {
+            let mut non_zst_fields = vals.iter().enumerate().map(|(i, f)| {
+                (f, layout.fields.offset(i))
+            }).filter(|&(f, _)| !ccx.layout_of(f.ty).is_zst());
+            match (non_zst_fields.next(), non_zst_fields.next()) {
+                (Some((x, offset)), None) if offset.bytes() == 0 => {
+                    return Const::new(x.llval, layout.ty);
+                }
+                (Some((a, a_offset)), Some((b, _))) if a_offset.bytes() == 0 => {
+                    return Const::new(C_struct(ccx, &[a.llval, b.llval], false), layout.ty);
+                }
+                (Some((a, _)), Some((b, b_offset))) if b_offset.bytes() == 0 => {
+                    return Const::new(C_struct(ccx, &[b.llval, a.llval], false), layout.ty);
+                }
+                _ => {}
+            }
+        }
+        _ => {}
     }
 
     // offset of current value
-    let mut offset = 0;
+    let mut offset = Size::from_bytes(0);
     let mut cfields = Vec::new();
-    cfields.reserve(st.offsets.len()*2);
+    cfields.reserve(discr.is_some() as usize + 1 + layout.fields.count() * 2);
 
-    let parts = st.field_index_by_increasing_offset().map(|i| {
-        (&vals[i], st.offsets[i].bytes())
+    if let Some(discr) = discr {
+        cfields.push(discr.llval);
+        offset = ccx.size_of(discr.ty);
+    }
+
+    let parts = layout.fields.index_by_increasing_offset().map(|i| {
+        (vals[i], layout.fields.offset(i))
     });
-    for (&val, target_offset) in parts {
-        if offset < target_offset {
-            cfields.push(padding(ccx, target_offset - offset));
-            offset = target_offset;
-        }
-        assert!(!is_undef(val));
-        cfields.push(val);
-        offset += machine::llsize_of_alloc(ccx, val_ty(val));
+    for (val, target_offset) in parts {
+        cfields.push(padding(ccx, target_offset - offset));
+        cfields.push(val.llval);
+        offset = target_offset + ccx.size_of(val.ty);
     }
 
-    if offset < st.stride().bytes() {
-        cfields.push(padding(ccx, st.stride().bytes() - offset));
-    }
+    // Pad to the size of the whole type, not e.g. the variant.
+    cfields.push(padding(ccx, ccx.size_of(layout.ty) - offset));
 
-    cfields
+    Const::new(C_struct(ccx, &cfields, layout.is_packed()), layout.ty)
 }
 
-fn build_const_union<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
-                               un: &layout::Union,
-                               field_val: ValueRef)
-                               -> Vec<ValueRef> {
-    let mut cfields = vec![field_val];
-
-    let offset = machine::llsize_of_alloc(ccx, val_ty(field_val));
-    let size = un.stride().bytes();
-    if offset != size {
-        cfields.push(padding(ccx, size - offset));
-    }
-
-    cfields
-}
-
-fn padding(ccx: &CrateContext, size: u64) -> ValueRef {
-    C_undef(Type::array(&Type::i8(ccx), size))
+fn padding(ccx: &CrateContext, size: Size) -> ValueRef {
+    C_undef(Type::array(&Type::i8(ccx), size.bytes()))
 }
diff --git a/src/librustc_trans/mir/lvalue.rs b/src/librustc_trans/mir/lvalue.rs
index d939aca..891d520 100644
--- a/src/librustc_trans/mir/lvalue.rs
+++ b/src/librustc_trans/mir/lvalue.rs
@@ -8,18 +8,17 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use llvm::ValueRef;
-use rustc::ty::{self, Ty, TypeFoldable};
-use rustc::ty::layout::{self, LayoutTyper};
+use llvm::{self, ValueRef};
+use rustc::ty::{self, Ty};
+use rustc::ty::layout::{self, Align, TyLayout, LayoutOf};
 use rustc::mir;
 use rustc::mir::tcx::LvalueTy;
 use rustc_data_structures::indexed_vec::Idx;
-use adt;
+use base;
 use builder::Builder;
-use common::{self, CrateContext, C_usize};
+use common::{CrateContext, C_usize, C_u8, C_u32, C_uint, C_int, C_null, C_uint_big};
 use consts;
-use machine;
-use type_of;
+use type_of::LayoutLlvmExt;
 use type_::Type;
 use value::Value;
 use glue;
@@ -28,10 +27,11 @@
 use std::ops;
 
 use super::{MirContext, LocalRef};
+use super::operand::{OperandRef, OperandValue};
 
 #[derive(Copy, Clone, Debug, PartialEq, Eq)]
 pub enum Alignment {
-    Packed,
+    Packed(Align),
     AbiAligned,
 }
 
@@ -40,34 +40,36 @@
 
     fn bitor(self, rhs: Self) -> Self {
         match (self, rhs) {
-            (Alignment::Packed, _) => Alignment::Packed,
-            (Alignment::AbiAligned, a) => a,
+            (Alignment::Packed(a), Alignment::Packed(b)) => {
+                Alignment::Packed(a.min(b))
+            }
+            (Alignment::Packed(x), _) | (_, Alignment::Packed(x)) => {
+                Alignment::Packed(x)
+            }
+            (Alignment::AbiAligned, Alignment::AbiAligned) => {
+                Alignment::AbiAligned
+            }
+        }
+    }
+}
+
+impl<'a> From<TyLayout<'a>> for Alignment {
+    fn from(layout: TyLayout) -> Self {
+        if layout.is_packed() {
+            Alignment::Packed(layout.align)
+        } else {
+            Alignment::AbiAligned
         }
     }
 }
 
 impl Alignment {
-    pub fn from_packed(packed: bool) -> Self {
-        if packed {
-            Alignment::Packed
-        } else {
-            Alignment::AbiAligned
-        }
-    }
-
-    pub fn to_align(self) -> Option<u32> {
+    pub fn non_abi(self) -> Option<Align> {
         match self {
-            Alignment::Packed => Some(1),
+            Alignment::Packed(x) => Some(x),
             Alignment::AbiAligned => None,
         }
     }
-
-    pub fn min_with(self, align: u32) -> Option<u32> {
-        match self {
-            Alignment::Packed => Some(1),
-            Alignment::AbiAligned => Some(align),
-        }
-    }
 }
 
 #[derive(Copy, Clone, Debug)]
@@ -79,41 +81,43 @@
     pub llextra: ValueRef,
 
     /// Monomorphized type of this lvalue, including variant information
-    pub ty: LvalueTy<'tcx>,
+    pub layout: TyLayout<'tcx>,
 
     /// Whether this lvalue is known to be aligned according to its layout
     pub alignment: Alignment,
 }
 
 impl<'a, 'tcx> LvalueRef<'tcx> {
-    pub fn new_sized(llval: ValueRef, lvalue_ty: LvalueTy<'tcx>,
-                     alignment: Alignment) -> LvalueRef<'tcx> {
-        LvalueRef { llval: llval, llextra: ptr::null_mut(), ty: lvalue_ty, alignment: alignment }
+    pub fn new_sized(llval: ValueRef,
+                     layout: TyLayout<'tcx>,
+                     alignment: Alignment)
+                     -> LvalueRef<'tcx> {
+        LvalueRef {
+            llval,
+            llextra: ptr::null_mut(),
+            layout,
+            alignment
+        }
     }
 
-    pub fn new_sized_ty(llval: ValueRef, ty: Ty<'tcx>, alignment: Alignment) -> LvalueRef<'tcx> {
-        LvalueRef::new_sized(llval, LvalueTy::from_ty(ty), alignment)
-    }
-
-    pub fn alloca(bcx: &Builder<'a, 'tcx>, ty: Ty<'tcx>, name: &str) -> LvalueRef<'tcx> {
-        debug!("alloca({:?}: {:?})", name, ty);
-        let tmp = bcx.alloca(
-            type_of::type_of(bcx.ccx, ty), name, bcx.ccx.over_align_of(ty));
-        assert!(!ty.has_param_types());
-        Self::new_sized_ty(tmp, ty, Alignment::AbiAligned)
+    pub fn alloca(bcx: &Builder<'a, 'tcx>, layout: TyLayout<'tcx>, name: &str)
+                  -> LvalueRef<'tcx> {
+        debug!("alloca({:?}: {:?})", name, layout);
+        let tmp = bcx.alloca(layout.llvm_type(bcx.ccx), name, layout.align);
+        Self::new_sized(tmp, layout, Alignment::AbiAligned)
     }
 
     pub fn len(&self, ccx: &CrateContext<'a, 'tcx>) -> ValueRef {
-        let ty = self.ty.to_ty(ccx.tcx());
-        match ty.sty {
-            ty::TyArray(_, n) => {
-                common::C_usize(ccx, n.val.to_const_int().unwrap().to_u64().unwrap())
-            }
-            ty::TySlice(_) | ty::TyStr => {
-                assert!(self.llextra != ptr::null_mut());
+        if let layout::FieldPlacement::Array { count, .. } = self.layout.fields {
+            if self.layout.is_unsized() {
+                assert!(self.has_extra());
+                assert_eq!(count, 0);
                 self.llextra
+            } else {
+                C_usize(ccx, count)
             }
-            _ => bug!("unexpected type `{}` in LvalueRef::len", ty)
+        } else {
+            bug!("unexpected layout `{:#?}` in LvalueRef::len", self.layout)
         }
     }
 
@@ -121,53 +125,132 @@
         !self.llextra.is_null()
     }
 
-    fn struct_field_ptr(
-        self,
-        bcx: &Builder<'a, 'tcx>,
-        st: &layout::Struct,
-        fields: &Vec<Ty<'tcx>>,
-        ix: usize,
-        needs_cast: bool
-    ) -> (ValueRef, Alignment) {
-        let fty = fields[ix];
-        let ccx = bcx.ccx;
+    pub fn load(&self, bcx: &Builder<'a, 'tcx>) -> OperandRef<'tcx> {
+        debug!("LvalueRef::load: {:?}", self);
 
-        let alignment = self.alignment | Alignment::from_packed(st.packed);
+        assert!(!self.has_extra());
 
-        let llfields = adt::struct_llfields(ccx, fields, st);
-        let ptr_val = if needs_cast {
-            let real_ty = Type::struct_(ccx, &llfields[..], st.packed);
-            bcx.pointercast(self.llval, real_ty.ptr_to())
+        if self.layout.is_zst() {
+            return OperandRef::new_zst(bcx.ccx, self.layout);
+        }
+
+        let scalar_load_metadata = |load, scalar: &layout::Scalar| {
+            let (min, max) = (scalar.valid_range.start, scalar.valid_range.end);
+            let max_next = max.wrapping_add(1);
+            let bits = scalar.value.size(bcx.ccx).bits();
+            assert!(bits <= 128);
+            let mask = !0u128 >> (128 - bits);
+            // For a (max) value of -1, max will be `-1 as usize`, which overflows.
+            // However, that is fine here (it would still represent the full range),
+            // i.e., if the range is everything.  The lo==hi case would be
+            // rejected by the LLVM verifier (it would mean either an
+            // empty set, which is impossible, or the entire range of the
+            // type, which is pointless).
+            match scalar.value {
+                layout::Int(..) if max_next & mask != min & mask => {
+                    // llvm::ConstantRange can deal with ranges that wrap around,
+                    // so an overflow on (max + 1) is fine.
+                    bcx.range_metadata(load, min..max_next);
+                }
+                layout::Pointer if 0 < min && min < max => {
+                    bcx.nonnull_metadata(load);
+                }
+                _ => {}
+            }
+        };
+
+        let val = if self.layout.is_llvm_immediate() {
+            let mut const_llval = ptr::null_mut();
+            unsafe {
+                let global = llvm::LLVMIsAGlobalVariable(self.llval);
+                if !global.is_null() && llvm::LLVMIsGlobalConstant(global) == llvm::True {
+                    const_llval = llvm::LLVMGetInitializer(global);
+                }
+            }
+
+            let llval = if !const_llval.is_null() {
+                const_llval
+            } else {
+                let load = bcx.load(self.llval, self.alignment.non_abi());
+                if let layout::Abi::Scalar(ref scalar) = self.layout.abi {
+                    scalar_load_metadata(load, scalar);
+                }
+                load
+            };
+            OperandValue::Immediate(base::to_immediate(bcx, llval, self.layout))
+        } else if let layout::Abi::ScalarPair(ref a, ref b) = self.layout.abi {
+            let load = |i, scalar: &layout::Scalar| {
+                let mut llptr = bcx.struct_gep(self.llval, i as u64);
+                // Make sure to always load i1 as i8.
+                if scalar.is_bool() {
+                    llptr = bcx.pointercast(llptr, Type::i8p(bcx.ccx));
+                }
+                let load = bcx.load(llptr, self.alignment.non_abi());
+                scalar_load_metadata(load, scalar);
+                if scalar.is_bool() {
+                    bcx.trunc(load, Type::i1(bcx.ccx))
+                } else {
+                    load
+                }
+            };
+            OperandValue::Pair(load(0, a), load(1, b))
         } else {
-            self.llval
+            OperandValue::Ref(self.llval, self.alignment)
+        };
+
+        OperandRef { val, layout: self.layout }
+    }
+
+    /// Access a field, at a point when the value's case is known.
+    pub fn project_field(self, bcx: &Builder<'a, 'tcx>, ix: usize) -> LvalueRef<'tcx> {
+        let ccx = bcx.ccx;
+        let field = self.layout.field(ccx, ix);
+        let offset = self.layout.fields.offset(ix);
+        let alignment = self.alignment | Alignment::from(self.layout);
+
+        let simple = || {
+            // Unions and newtypes only use an offset of 0.
+            let llval = if offset.bytes() == 0 {
+                self.llval
+            } else if let layout::Abi::ScalarPair(ref a, ref b) = self.layout.abi {
+                // Offsets have to match either first or second field.
+                assert_eq!(offset, a.value.size(ccx).abi_align(b.value.align(ccx)));
+                bcx.struct_gep(self.llval, 1)
+            } else {
+                bcx.struct_gep(self.llval, self.layout.llvm_field_index(ix))
+            };
+            LvalueRef {
+                // HACK(eddyb) have to bitcast pointers until LLVM removes pointee types.
+                llval: bcx.pointercast(llval, field.llvm_type(ccx).ptr_to()),
+                llextra: if ccx.shared().type_has_metadata(field.ty) {
+                    self.llextra
+                } else {
+                    ptr::null_mut()
+                },
+                layout: field,
+                alignment,
+            }
         };
 
         // Simple case - we can just GEP the field
-        //   * First field - Always aligned properly
         //   * Packed struct - There is no alignment padding
         //   * Field is sized - pointer is properly aligned already
-        if st.offsets[ix] == layout::Size::from_bytes(0) || st.packed ||
-            bcx.ccx.shared().type_is_sized(fty)
-        {
-            return (bcx.struct_gep(
-                    ptr_val, adt::struct_llfields_index(st, ix)), alignment);
+        if self.layout.is_packed() || !field.is_unsized() {
+            return simple();
         }
 
         // If the type of the last field is [T], str or a foreign type, then we don't need to do
        // any adjustments
-        match fty.sty {
-            ty::TySlice(..) | ty::TyStr | ty::TyForeign(..) => {
-                return (bcx.struct_gep(
-                        ptr_val, adt::struct_llfields_index(st, ix)), alignment);
-            }
+        match field.ty.sty {
+            ty::TySlice(..) | ty::TyStr | ty::TyForeign(..) => return simple(),
             _ => ()
         }
 
         // There's no metadata available, log the case and just do the GEP.
         if !self.has_extra() {
             debug!("Unsized field `{}`, of `{:?}` has no metadata for adjustment",
-                ix, Value(ptr_val));
-            return (bcx.struct_gep(ptr_val, adt::struct_llfields_index(st, ix)), alignment);
+                ix, Value(self.llval));
+            return simple();
         }
 
         // We need to get the pointer manually now.
@@ -187,12 +270,10 @@
 
         let meta = self.llextra;
 
-
-        let offset = st.offsets[ix].bytes();
-        let unaligned_offset = C_usize(bcx.ccx, offset);
+        let unaligned_offset = C_usize(ccx, offset.bytes());
 
         // Get the alignment of the field
-        let (_, align) = glue::size_and_align_of_dst(bcx, fty, meta);
+        let (_, align) = glue::size_and_align_of_dst(bcx, field.ty, meta);
 
         // Bump the unaligned offset up to the appropriate alignment using the
         // following expression:
@@ -200,88 +281,165 @@
         //   (unaligned offset + (align - 1)) & -align
 
         // Calculate offset
-        let align_sub_1 = bcx.sub(align, C_usize(bcx.ccx, 1));
+        let align_sub_1 = bcx.sub(align, C_usize(ccx, 1u64));
         let offset = bcx.and(bcx.add(unaligned_offset, align_sub_1),
         bcx.neg(align));
 
         debug!("struct_field_ptr: DST field offset: {:?}", Value(offset));
 
         // Cast and adjust pointer
-        let byte_ptr = bcx.pointercast(ptr_val, Type::i8p(bcx.ccx));
+        let byte_ptr = bcx.pointercast(self.llval, Type::i8p(ccx));
         let byte_ptr = bcx.gep(byte_ptr, &[offset]);
 
         // Finally, cast back to the type expected
-        let ll_fty = type_of::in_memory_type_of(bcx.ccx, fty);
+        let ll_fty = field.llvm_type(ccx);
         debug!("struct_field_ptr: Field type is {:?}", ll_fty);
-        (bcx.pointercast(byte_ptr, ll_fty.ptr_to()), alignment)
-    }
 
-    /// Access a field, at a point when the value's case is known.
-    pub fn trans_field_ptr(self, bcx: &Builder<'a, 'tcx>, ix: usize) -> (ValueRef, Alignment) {
-        let discr = match self.ty {
-            LvalueTy::Ty { .. } => 0,
-            LvalueTy::Downcast { variant_index, .. } => variant_index,
-        };
-        let t = self.ty.to_ty(bcx.tcx());
-        let l = bcx.ccx.layout_of(t);
-        // Note: if this ever needs to generate conditionals (e.g., if we
-        // decide to do some kind of cdr-coding-like non-unique repr
-        // someday), it will need to return a possibly-new bcx as well.
-        match *l {
-            layout::Univariant { ref variant, .. } => {
-                assert_eq!(discr, 0);
-                self.struct_field_ptr(bcx, &variant,
-                    &adt::compute_fields(bcx.ccx, t, 0, false), ix, false)
-            }
-            layout::Vector { count, .. } => {
-                assert_eq!(discr, 0);
-                assert!((ix as u64) < count);
-                (bcx.struct_gep(self.llval, ix), self.alignment)
-            }
-            layout::General { discr: d, ref variants, .. } => {
-                let mut fields = adt::compute_fields(bcx.ccx, t, discr, false);
-                fields.insert(0, d.to_ty(&bcx.tcx(), false));
-                self.struct_field_ptr(bcx, &variants[discr], &fields, ix + 1, true)
-            }
-            layout::UntaggedUnion { ref variants } => {
-                let fields = adt::compute_fields(bcx.ccx, t, 0, false);
-                let ty = type_of::in_memory_type_of(bcx.ccx, fields[ix]);
-                (bcx.pointercast(self.llval, ty.ptr_to()),
-                 self.alignment | Alignment::from_packed(variants.packed))
-            }
-            layout::RawNullablePointer { nndiscr, .. } |
-            layout::StructWrappedNullablePointer { nndiscr,  .. } if discr as u64 != nndiscr => {
-                let nullfields = adt::compute_fields(bcx.ccx, t, (1-nndiscr) as usize, false);
-                // The unit-like case might have a nonzero number of unit-like fields.
-                // (e.d., Result of Either with (), as one side.)
-                let ty = type_of::type_of(bcx.ccx, nullfields[ix]);
-                assert_eq!(machine::llsize_of_alloc(bcx.ccx, ty), 0);
-                (bcx.pointercast(self.llval, ty.ptr_to()), Alignment::Packed)
-            }
-            layout::RawNullablePointer { nndiscr, .. } => {
-                let nnty = adt::compute_fields(bcx.ccx, t, nndiscr as usize, false)[0];
-                assert_eq!(ix, 0);
-                assert_eq!(discr as u64, nndiscr);
-                let ty = type_of::type_of(bcx.ccx, nnty);
-                (bcx.pointercast(self.llval, ty.ptr_to()), self.alignment)
-            }
-            layout::StructWrappedNullablePointer { ref nonnull, nndiscr, .. } => {
-                assert_eq!(discr as u64, nndiscr);
-                self.struct_field_ptr(bcx, &nonnull,
-                     &adt::compute_fields(bcx.ccx, t, discr, false), ix, false)
-            }
-            _ => bug!("element access in type without elements: {} represented as {:#?}", t, l)
+        LvalueRef {
+            llval: bcx.pointercast(byte_ptr, ll_fty.ptr_to()),
+            llextra: self.llextra,
+            layout: field,
+            alignment,
         }
     }
 
-    pub fn project_index(&self, bcx: &Builder<'a, 'tcx>, llindex: ValueRef) -> ValueRef {
-        if let ty::TySlice(_) = self.ty.to_ty(bcx.tcx()).sty {
-            // Slices already point to the array element type.
-            bcx.inbounds_gep(self.llval, &[llindex])
-        } else {
-            let zero = common::C_usize(bcx.ccx, 0);
-            bcx.inbounds_gep(self.llval, &[zero, llindex])
+    /// Obtain the actual discriminant of a value.
+    pub fn trans_get_discr(self, bcx: &Builder<'a, 'tcx>, cast_to: Ty<'tcx>) -> ValueRef {
+        let cast_to = bcx.ccx.layout_of(cast_to).immediate_llvm_type(bcx.ccx);
+        match self.layout.variants {
+            layout::Variants::Single { index } => {
+                return C_uint(cast_to, index as u64);
+            }
+            layout::Variants::Tagged { .. } |
+            layout::Variants::NicheFilling { .. } => {},
         }
+
+        let discr = self.project_field(bcx, 0);
+        let lldiscr = discr.load(bcx).immediate();
+        match self.layout.variants {
+            layout::Variants::Single { .. } => bug!(),
+            layout::Variants::Tagged { ref discr, .. } => {
+                let signed = match discr.value {
+                    layout::Int(_, signed) => signed,
+                    _ => false
+                };
+                bcx.intcast(lldiscr, cast_to, signed)
+            }
+            layout::Variants::NicheFilling {
+                dataful_variant,
+                ref niche_variants,
+                niche_start,
+                ..
+            } => {
+                let niche_llty = discr.layout.immediate_llvm_type(bcx.ccx);
+                if niche_variants.start == niche_variants.end {
+                    // FIXME(eddyb) Check the actual primitive type here.
+                    let niche_llval = if niche_start == 0 {
+                        // HACK(eddyb) Using `C_null` as it works on all types.
+                        C_null(niche_llty)
+                    } else {
+                        C_uint_big(niche_llty, niche_start)
+                    };
+                    bcx.select(bcx.icmp(llvm::IntEQ, lldiscr, niche_llval),
+                        C_uint(cast_to, niche_variants.start as u64),
+                        C_uint(cast_to, dataful_variant as u64))
+                } else {
+                    // Rebase from niche values to discriminant values.
+                    let delta = niche_start.wrapping_sub(niche_variants.start as u128);
+                    let lldiscr = bcx.sub(lldiscr, C_uint_big(niche_llty, delta));
+                    let lldiscr_max = C_uint(niche_llty, niche_variants.end as u64);
+                    bcx.select(bcx.icmp(llvm::IntULE, lldiscr, lldiscr_max),
+                        bcx.intcast(lldiscr, cast_to, false),
+                        C_uint(cast_to, dataful_variant as u64))
+                }
+            }
+        }
+    }
+
+    /// Set the discriminant for a new value of the given case of the given
+    /// representation.
+    pub fn trans_set_discr(&self, bcx: &Builder<'a, 'tcx>, variant_index: usize) {
+        match self.layout.variants {
+            layout::Variants::Single { index } => {
+                if index != variant_index {
+                    // If the layout of an enum is `Single`, all
+                    // other variants are necessarily uninhabited.
+                    assert_eq!(self.layout.for_variant(bcx.ccx, variant_index).abi,
+                               layout::Abi::Uninhabited);
+                }
+            }
+            layout::Variants::Tagged { .. } => {
+                let ptr = self.project_field(bcx, 0);
+                let to = self.layout.ty.ty_adt_def().unwrap()
+                    .discriminant_for_variant(bcx.tcx(), variant_index)
+                    .to_u128_unchecked() as u64;
+                bcx.store(C_int(ptr.layout.llvm_type(bcx.ccx), to as i64),
+                    ptr.llval, ptr.alignment.non_abi());
+            }
+            layout::Variants::NicheFilling {
+                dataful_variant,
+                ref niche_variants,
+                niche_start,
+                ..
+            } => {
+                if variant_index != dataful_variant {
+                    if bcx.sess().target.target.arch == "arm" ||
+                       bcx.sess().target.target.arch == "aarch64" {
+                        // Issue #34427: As a workaround for an LLVM bug on ARM,
+                        // use a memset of 0 before assigning the niche value.
+                        let llptr = bcx.pointercast(self.llval, Type::i8(bcx.ccx).ptr_to());
+                        let fill_byte = C_u8(bcx.ccx, 0);
+                        let (size, align) = self.layout.size_and_align();
+                        let size = C_usize(bcx.ccx, size.bytes());
+                        let align = C_u32(bcx.ccx, align.abi() as u32);
+                        base::call_memset(bcx, llptr, fill_byte, size, align, false);
+                    }
+
+                    let niche = self.project_field(bcx, 0);
+                    let niche_llty = niche.layout.immediate_llvm_type(bcx.ccx);
+                    let niche_value = ((variant_index - niche_variants.start) as u128)
+                        .wrapping_add(niche_start);
+                    // FIXME(eddyb) Check the actual primitive type here.
+                    let niche_llval = if niche_value == 0 {
+                        // HACK(eddyb) Using `C_null` as it works on all types.
+                        C_null(niche_llty)
+                    } else {
+                        C_uint_big(niche_llty, niche_value)
+                    };
+                    OperandValue::Immediate(niche_llval).store(bcx, niche);
+                }
+            }
+        }
+    }
+
+    pub fn project_index(&self, bcx: &Builder<'a, 'tcx>, llindex: ValueRef)
+                         -> LvalueRef<'tcx> {
+        LvalueRef {
+            llval: bcx.inbounds_gep(self.llval, &[C_usize(bcx.ccx, 0), llindex]),
+            llextra: ptr::null_mut(),
+            layout: self.layout.field(bcx.ccx, 0),
+            alignment: self.alignment
+        }
+    }
+
+    pub fn project_downcast(&self, bcx: &Builder<'a, 'tcx>, variant_index: usize)
+                            -> LvalueRef<'tcx> {
+        let mut downcast = *self;
+        downcast.layout = self.layout.for_variant(bcx.ccx, variant_index);
+
+        // Cast to the appropriate variant struct type.
+        let variant_ty = downcast.layout.llvm_type(bcx.ccx);
+        downcast.llval = bcx.pointercast(downcast.llval, variant_ty.ptr_to());
+
+        downcast
+    }
+
+    pub fn storage_live(&self, bcx: &Builder<'a, 'tcx>) {
+        bcx.lifetime_start(self.llval, self.layout.size);
+    }
+
+    pub fn storage_dead(&self, bcx: &Builder<'a, 'tcx>) {
+        bcx.lifetime_end(self.llval, self.layout.size);
     }
 }
 
@@ -310,7 +468,7 @@
             mir::Lvalue::Local(_) => bug!(), // handled above
             mir::Lvalue::Static(box mir::Static { def_id, ty }) => {
                 LvalueRef::new_sized(consts::get_static(ccx, def_id),
-                                     LvalueTy::from_ty(self.monomorphize(&ty)),
+                                     ccx.layout_of(self.monomorphize(&ty)),
                                      Alignment::AbiAligned)
             },
             mir::Lvalue::Projection(box mir::Projection {
@@ -318,37 +476,27 @@
                 elem: mir::ProjectionElem::Deref
             }) => {
                 // Load the pointer from its location.
-                self.trans_consume(bcx, base).deref()
+                self.trans_consume(bcx, base).deref(bcx.ccx)
             }
             mir::Lvalue::Projection(ref projection) => {
                 let tr_base = self.trans_lvalue(bcx, &projection.base);
-                let projected_ty = tr_base.ty.projection_ty(tcx, &projection.elem);
-                let projected_ty = self.monomorphize(&projected_ty);
-                let align = tr_base.alignment;
 
-                let ((llprojected, align), llextra) = match projection.elem {
+                match projection.elem {
                     mir::ProjectionElem::Deref => bug!(),
                     mir::ProjectionElem::Field(ref field, _) => {
-                        let has_metadata = self.ccx.shared()
-                            .type_has_metadata(projected_ty.to_ty(tcx));
-                        let llextra = if !has_metadata {
-                            ptr::null_mut()
-                        } else {
-                            tr_base.llextra
-                        };
-                        (tr_base.trans_field_ptr(bcx, field.index()), llextra)
+                        tr_base.project_field(bcx, field.index())
                     }
                     mir::ProjectionElem::Index(index) => {
                         let index = &mir::Operand::Consume(mir::Lvalue::Local(index));
                         let index = self.trans_operand(bcx, index);
-                        let llindex = self.prepare_index(bcx, index.immediate());
-                        ((tr_base.project_index(bcx, llindex), align), ptr::null_mut())
+                        let llindex = index.immediate();
+                        tr_base.project_index(bcx, llindex)
                     }
                     mir::ProjectionElem::ConstantIndex { offset,
                                                          from_end: false,
                                                          min_length: _ } => {
                         let lloffset = C_usize(bcx.ccx, offset as u64);
-                        ((tr_base.project_index(bcx, lloffset), align), ptr::null_mut())
+                        tr_base.project_index(bcx, lloffset)
                     }
                     mir::ProjectionElem::ConstantIndex { offset,
                                                          from_end: true,
@@ -356,39 +504,31 @@
                         let lloffset = C_usize(bcx.ccx, offset as u64);
                         let lllen = tr_base.len(bcx.ccx);
                         let llindex = bcx.sub(lllen, lloffset);
-                        ((tr_base.project_index(bcx, llindex), align), ptr::null_mut())
+                        tr_base.project_index(bcx, llindex)
                     }
                     mir::ProjectionElem::Subslice { from, to } => {
-                        let llbase = tr_base.project_index(bcx, C_usize(bcx.ccx, from as u64));
+                        let mut subslice = tr_base.project_index(bcx,
+                            C_usize(bcx.ccx, from as u64));
+                        let projected_ty = LvalueTy::Ty { ty: tr_base.layout.ty }
+                            .projection_ty(tcx, &projection.elem).to_ty(bcx.tcx());
+                        subslice.layout = bcx.ccx.layout_of(self.monomorphize(&projected_ty));
 
-                        let base_ty = tr_base.ty.to_ty(bcx.tcx());
-                        match base_ty.sty {
-                            ty::TyArray(..) => {
-                                // must cast the lvalue pointer type to the new
-                                // array type (*[%_; new_len]).
-                                let base_ty = self.monomorphized_lvalue_ty(lvalue);
-                                let llbasety = type_of::type_of(bcx.ccx, base_ty).ptr_to();
-                                let llbase = bcx.pointercast(llbase, llbasety);
-                                ((llbase, align), ptr::null_mut())
-                            }
-                            ty::TySlice(..) => {
-                                assert!(tr_base.llextra != ptr::null_mut());
-                                let lllen = bcx.sub(tr_base.llextra,
-                                                    C_usize(bcx.ccx, (from as u64)+(to as u64)));
-                                ((llbase, align), lllen)
-                            }
-                            _ => bug!("unexpected type {:?} in Subslice", base_ty)
+                        if subslice.layout.is_unsized() {
+                            assert!(tr_base.has_extra());
+                            subslice.llextra = bcx.sub(tr_base.llextra,
+                                C_usize(bcx.ccx, (from as u64) + (to as u64)));
                         }
+
+                        // Cast the lvalue pointer type to the new
+                        // array or slice type (*[%_; new_len]).
+                        subslice.llval = bcx.pointercast(subslice.llval,
+                            subslice.layout.llvm_type(bcx.ccx).ptr_to());
+
+                        subslice
                     }
-                    mir::ProjectionElem::Downcast(..) => {
-                        ((tr_base.llval, align), tr_base.llextra)
+                    mir::ProjectionElem::Downcast(_, v) => {
+                        tr_base.project_downcast(bcx, v)
                     }
-                };
-                LvalueRef {
-                    llval: llprojected,
-                    llextra,
-                    ty: projected_ty,
-                    alignment: align,
                 }
             }
         };
@@ -396,22 +536,6 @@
         result
     }
 
-    /// Adjust the bitwidth of an index since LLVM is less forgiving
-    /// than we are.
-    ///
-    /// nmatsakis: is this still necessary? Not sure.
-    fn prepare_index(&mut self, bcx: &Builder<'a, 'tcx>, llindex: ValueRef) -> ValueRef {
-        let index_size = machine::llbitsize_of_real(bcx.ccx, common::val_ty(llindex));
-        let int_size = machine::llbitsize_of_real(bcx.ccx, bcx.ccx.isize_ty());
-        if index_size < int_size {
-            bcx.zext(llindex, bcx.ccx.isize_ty())
-        } else if index_size > int_size {
-            bcx.trunc(llindex, bcx.ccx.isize_ty())
-        } else {
-            llindex
-        }
-    }
-
     pub fn monomorphized_lvalue_ty(&self, lvalue: &mir::Lvalue<'tcx>) -> Ty<'tcx> {
         let tcx = self.ccx.tcx();
         let lvalue_ty = lvalue.ty(self.mir, tcx);
diff --git a/src/librustc_trans/mir/mod.rs b/src/librustc_trans/mir/mod.rs
index 59da800..7f3a430 100644
--- a/src/librustc_trans/mir/mod.rs
+++ b/src/librustc_trans/mir/mod.rs
@@ -11,20 +11,18 @@
 use libc::c_uint;
 use llvm::{self, ValueRef, BasicBlockRef};
 use llvm::debuginfo::DIScope;
-use rustc::ty::{self, Ty, TypeFoldable};
-use rustc::ty::layout::{self, LayoutTyper};
+use rustc::ty::{self, TypeFoldable};
+use rustc::ty::layout::{LayoutOf, TyLayout};
 use rustc::mir::{self, Mir};
-use rustc::mir::tcx::LvalueTy;
 use rustc::ty::subst::Substs;
 use rustc::infer::TransNormalize;
 use rustc::session::config::FullDebugInfo;
 use base;
 use builder::Builder;
-use common::{self, CrateContext, Funclet};
+use common::{CrateContext, Funclet};
 use debuginfo::{self, declare_local, VariableAccess, VariableKind, FunctionDebugContext};
 use monomorphize::Instance;
-use abi::{ArgAttribute, FnType};
-use type_of;
+use abi::{ArgAttribute, FnType, PassMode};
 
 use syntax_pos::{DUMMY_SP, NO_EXPANSION, BytePos, Span};
 use syntax::symbol::keywords;
@@ -61,7 +59,7 @@
     /// don't really care about it very much. Anyway, this value
     /// contains an alloca into which the personality is stored and
     /// then later loaded when generating the DIVERGE_BLOCK.
-    llpersonalityslot: Option<ValueRef>,
+    personality_slot: Option<LvalueRef<'tcx>>,
 
     /// A `Block` for each MIR `BasicBlock`
     blocks: IndexVec<mir::BasicBlock, BasicBlockRef>,
@@ -86,7 +84,7 @@
     /// directly using an `OperandRef`, which makes for tighter LLVM
     /// IR. The conditions for using an `OperandRef` are as follows:
     ///
-    /// - the type of the local must be judged "immediate" by `type_is_immediate`
+    /// - the type of the local must be judged "immediate" by `is_llvm_immediate`
     /// - the operand must never be referenced indirectly
     ///     - we should not take its address using the `&` operator
     ///     - nor should it appear in an lvalue path like `tmp.a`
@@ -177,14 +175,13 @@
     Operand(Option<OperandRef<'tcx>>),
 }
 
-impl<'tcx> LocalRef<'tcx> {
-    fn new_operand<'a>(ccx: &CrateContext<'a, 'tcx>,
-                       ty: Ty<'tcx>) -> LocalRef<'tcx> {
-        if common::type_is_zero_size(ccx, ty) {
+impl<'a, 'tcx> LocalRef<'tcx> {
+    fn new_operand(ccx: &CrateContext<'a, 'tcx>, layout: TyLayout<'tcx>) -> LocalRef<'tcx> {
+        if layout.is_zst() {
             // Zero-size temporaries aren't always initialized, which
             // doesn't matter because they don't contain data, but
             // we need something in the operand.
-            LocalRef::Operand(Some(OperandRef::new_zst(ccx, ty)))
+            LocalRef::Operand(Some(OperandRef::new_zst(ccx, layout)))
         } else {
             LocalRef::Operand(None)
         }
@@ -232,7 +229,7 @@
         llfn,
         fn_ty,
         ccx,
-        llpersonalityslot: None,
+        personality_slot: None,
         blocks: block_bcxs,
         unreachable_block: None,
         cleanup_kinds,
@@ -255,7 +252,8 @@
 
         let mut allocate_local = |local| {
             let decl = &mir.local_decls[local];
-            let ty = mircx.monomorphize(&decl.ty);
+            let layout = bcx.ccx.layout_of(mircx.monomorphize(&decl.ty));
+            assert!(!layout.ty.has_erasable_regions());
 
             if let Some(name) = decl.name {
                 // User variable
@@ -264,15 +262,14 @@
 
                 if !lvalue_locals.contains(local.index()) && !dbg {
                     debug!("alloc: {:?} ({}) -> operand", local, name);
-                    return LocalRef::new_operand(bcx.ccx, ty);
+                    return LocalRef::new_operand(bcx.ccx, layout);
                 }
 
                 debug!("alloc: {:?} ({}) -> lvalue", local, name);
-                assert!(!ty.has_erasable_regions());
-                let lvalue = LvalueRef::alloca(&bcx, ty, &name.as_str());
+                let lvalue = LvalueRef::alloca(&bcx, layout, &name.as_str());
                 if dbg {
                     let (scope, span) = mircx.debug_loc(decl.source_info);
-                    declare_local(&bcx, &mircx.debug_context, name, ty, scope,
+                    declare_local(&bcx, &mircx.debug_context, name, layout.ty, scope,
                         VariableAccess::DirectVariable { alloca: lvalue.llval },
                         VariableKind::LocalVariable, span);
                 }
@@ -282,18 +279,18 @@
                 if local == mir::RETURN_POINTER && mircx.fn_ty.ret.is_indirect() {
                     debug!("alloc: {:?} (return pointer) -> lvalue", local);
                     let llretptr = llvm::get_param(llfn, 0);
-                    LocalRef::Lvalue(LvalueRef::new_sized(llretptr, LvalueTy::from_ty(ty),
+                    LocalRef::Lvalue(LvalueRef::new_sized(llretptr,
+                                                          layout,
                                                           Alignment::AbiAligned))
                 } else if lvalue_locals.contains(local.index()) {
                     debug!("alloc: {:?} -> lvalue", local);
-                    assert!(!ty.has_erasable_regions());
-                    LocalRef::Lvalue(LvalueRef::alloca(&bcx, ty,  &format!("{:?}", local)))
+                    LocalRef::Lvalue(LvalueRef::alloca(&bcx, layout, &format!("{:?}", local)))
                 } else {
                     // If this is an immediate local, we do not create an
                     // alloca in advance. Instead we wait until we see the
                     // definition and update the operand there.
                     debug!("alloc: {:?} -> operand", local);
-                    LocalRef::new_operand(bcx.ccx, ty)
+                    LocalRef::new_operand(bcx.ccx, layout)
                 }
             }
         };
@@ -384,7 +381,6 @@
 
     mir.args_iter().enumerate().map(|(arg_index, local)| {
         let arg_decl = &mir.local_decls[local];
-        let arg_ty = mircx.monomorphize(&arg_decl.ty);
 
         let name = if let Some(name) = arg_decl.name {
             name.as_str().to_string()
@@ -398,26 +394,17 @@
             // to reconstruct it into a tuple local variable, from multiple
             // individual LLVM function arguments.
 
+            let arg_ty = mircx.monomorphize(&arg_decl.ty);
             let tupled_arg_tys = match arg_ty.sty {
                 ty::TyTuple(ref tys, _) => tys,
                 _ => bug!("spread argument isn't a tuple?!")
             };
 
-            let lvalue = LvalueRef::alloca(bcx, arg_ty, &name);
-            for (i, &tupled_arg_ty) in tupled_arg_tys.iter().enumerate() {
-                let (dst, _) = lvalue.trans_field_ptr(bcx, i);
+            let lvalue = LvalueRef::alloca(bcx, bcx.ccx.layout_of(arg_ty), &name);
+            for i in 0..tupled_arg_tys.len() {
                 let arg = &mircx.fn_ty.args[idx];
                 idx += 1;
-                if common::type_is_fat_ptr(bcx.ccx, tupled_arg_ty) {
-                    // We pass fat pointers as two words, but inside the tuple
-                    // they are the two sub-fields of a single aggregate field.
-                    let meta = &mircx.fn_ty.args[idx];
-                    idx += 1;
-                    arg.store_fn_arg(bcx, &mut llarg_idx, base::get_dataptr(bcx, dst));
-                    meta.store_fn_arg(bcx, &mut llarg_idx, base::get_meta(bcx, dst));
-                } else {
-                    arg.store_fn_arg(bcx, &mut llarg_idx, dst);
-                }
+                arg.store_fn_arg(bcx, &mut llarg_idx, lvalue.project_field(bcx, i));
             }
 
             // Now that we have one alloca that contains the aggregate value,
@@ -442,82 +429,56 @@
 
         let arg = &mircx.fn_ty.args[idx];
         idx += 1;
-        let llval = if arg.is_indirect() {
-            // Don't copy an indirect argument to an alloca, the caller
-            // already put it in a temporary alloca and gave it up
-            // FIXME: lifetimes
-            if arg.pad.is_some() {
-                llarg_idx += 1;
-            }
-            let llarg = llvm::get_param(bcx.llfn(), llarg_idx as c_uint);
-            bcx.set_value_name(llarg, &name);
+        if arg.pad.is_some() {
             llarg_idx += 1;
-            llarg
-        } else if !lvalue_locals.contains(local.index()) &&
-                  arg.cast.is_none() && arg_scope.is_none() {
-            if arg.is_ignore() {
-                return LocalRef::new_operand(bcx.ccx, arg_ty);
-            }
+        }
 
+        if arg_scope.is_none() && !lvalue_locals.contains(local.index()) {
             // We don't have to cast or keep the argument in the alloca.
             // FIXME(eddyb): We should figure out how to use llvm.dbg.value instead
             // of putting everything in allocas just so we can use llvm.dbg.declare.
-            if arg.pad.is_some() {
-                llarg_idx += 1;
+            let local = |op| LocalRef::Operand(Some(op));
+            match arg.mode {
+                PassMode::Ignore => {
+                    return local(OperandRef::new_zst(bcx.ccx, arg.layout));
+                }
+                PassMode::Direct(_) => {
+                    let llarg = llvm::get_param(bcx.llfn(), llarg_idx as c_uint);
+                    bcx.set_value_name(llarg, &name);
+                    llarg_idx += 1;
+                    return local(
+                        OperandRef::from_immediate_or_packed_pair(bcx, llarg, arg.layout));
+                }
+                PassMode::Pair(..) => {
+                    let a = llvm::get_param(bcx.llfn(), llarg_idx as c_uint);
+                    bcx.set_value_name(a, &(name.clone() + ".0"));
+                    llarg_idx += 1;
+
+                    let b = llvm::get_param(bcx.llfn(), llarg_idx as c_uint);
+                    bcx.set_value_name(b, &(name + ".1"));
+                    llarg_idx += 1;
+
+                    return local(OperandRef {
+                        val: OperandValue::Pair(a, b),
+                        layout: arg.layout
+                    });
+                }
+                _ => {}
             }
+        }
+
+        let lvalue = if arg.is_indirect() {
+            // Don't copy an indirect argument to an alloca, the caller
+            // already put it in a temporary alloca and gave it up.
+            // FIXME: lifetimes
             let llarg = llvm::get_param(bcx.llfn(), llarg_idx as c_uint);
+            bcx.set_value_name(llarg, &name);
             llarg_idx += 1;
-            let val = if common::type_is_fat_ptr(bcx.ccx, arg_ty) {
-                let meta = &mircx.fn_ty.args[idx];
-                idx += 1;
-                assert_eq!((meta.cast, meta.pad), (None, None));
-                let llmeta = llvm::get_param(bcx.llfn(), llarg_idx as c_uint);
-                llarg_idx += 1;
-
-                // FIXME(eddyb) As we can't perfectly represent the data and/or
-                // vtable pointer in a fat pointers in Rust's typesystem, and
-                // because we split fat pointers into two ArgType's, they're
-                // not the right type so we have to cast them for now.
-                let pointee = match arg_ty.sty {
-                    ty::TyRef(_, ty::TypeAndMut{ty, ..}) |
-                    ty::TyRawPtr(ty::TypeAndMut{ty, ..}) => ty,
-                    ty::TyAdt(def, _) if def.is_box() => arg_ty.boxed_ty(),
-                    _ => bug!()
-                };
-                let data_llty = type_of::in_memory_type_of(bcx.ccx, pointee);
-                let meta_llty = type_of::unsized_info_ty(bcx.ccx, pointee);
-
-                let llarg = bcx.pointercast(llarg, data_llty.ptr_to());
-                bcx.set_value_name(llarg, &(name.clone() + ".ptr"));
-                let llmeta = bcx.pointercast(llmeta, meta_llty);
-                bcx.set_value_name(llmeta, &(name + ".meta"));
-
-                OperandValue::Pair(llarg, llmeta)
-            } else {
-                bcx.set_value_name(llarg, &name);
-                OperandValue::Immediate(llarg)
-            };
-            let operand = OperandRef {
-                val,
-                ty: arg_ty
-            };
-            return LocalRef::Operand(Some(operand.unpack_if_pair(bcx)));
+            LvalueRef::new_sized(llarg, arg.layout, Alignment::AbiAligned)
         } else {
-            let lltemp = LvalueRef::alloca(bcx, arg_ty, &name);
-            if common::type_is_fat_ptr(bcx.ccx, arg_ty) {
-                // we pass fat pointers as two words, but we want to
-                // represent them internally as a pointer to two words,
-                // so make an alloca to store them in.
-                let meta = &mircx.fn_ty.args[idx];
-                idx += 1;
-                arg.store_fn_arg(bcx, &mut llarg_idx, base::get_dataptr(bcx, lltemp.llval));
-                meta.store_fn_arg(bcx, &mut llarg_idx, base::get_meta(bcx, lltemp.llval));
-            } else  {
-                // otherwise, arg is passed by value, so make a
-                // temporary and store it there
-                arg.store_fn_arg(bcx, &mut llarg_idx, lltemp.llval);
-            }
-            lltemp.llval
+            let tmp = LvalueRef::alloca(bcx, arg.layout, &name);
+            arg.store_fn_arg(bcx, &mut llarg_idx, tmp);
+            tmp
         };
         arg_scope.map(|scope| {
             // Is this a regular argument?
@@ -525,21 +486,24 @@
                 // The Rust ABI passes indirect variables using a pointer and a manual copy, so we
                 // need to insert a deref here, but the C ABI uses a pointer and a copy using the
                 // byval attribute, for which LLVM does the deref itself, so we must not add it.
-                let variable_access = if arg.is_indirect() &&
-                    !arg.attrs.contains(ArgAttribute::ByVal) {
-                    VariableAccess::IndirectVariable {
-                        alloca: llval,
-                        address_operations: &deref_op,
-                    }
-                } else {
-                    VariableAccess::DirectVariable { alloca: llval }
+                let mut variable_access = VariableAccess::DirectVariable {
+                    alloca: lvalue.llval
                 };
 
+                if let PassMode::Indirect(ref attrs) = arg.mode {
+                    if !attrs.contains(ArgAttribute::ByVal) {
+                        variable_access = VariableAccess::IndirectVariable {
+                            alloca: lvalue.llval,
+                            address_operations: &deref_op,
+                        };
+                    }
+                }
+
                 declare_local(
                     bcx,
                     &mircx.debug_context,
                     arg_decl.name.unwrap_or(keywords::Invalid.name()),
-                    arg_ty,
+                    arg.layout.ty,
                     scope,
                     variable_access,
                     VariableKind::ArgumentVariable(arg_index + 1),
@@ -549,15 +513,15 @@
             }
 
             // Or is it the closure environment?
-            let (closure_ty, env_ref) = match arg_ty.sty {
-                ty::TyRef(_, mt) | ty::TyRawPtr(mt) => (mt.ty, true),
-                _ => (arg_ty, false)
+            let (closure_layout, env_ref) = match arg.layout.ty.sty {
+                ty::TyRef(_, mt) | ty::TyRawPtr(mt) => (bcx.ccx.layout_of(mt.ty), true),
+                _ => (arg.layout, false)
             };
 
-            let upvar_tys = match closure_ty.sty {
+            let upvar_tys = match closure_layout.ty.sty {
                 ty::TyClosure(def_id, substs) |
                 ty::TyGenerator(def_id, substs, _) => substs.upvar_tys(def_id, tcx),
-                _ => bug!("upvar_decls with non-closure arg0 type `{}`", closure_ty)
+                _ => bug!("upvar_decls with non-closure arg0 type `{}`", closure_layout.ty)
             };
 
             // Store the pointer to closure data in an alloca for debuginfo
@@ -568,21 +532,17 @@
             // doesn't actually strip the offset when splitting the closure
             // environment into its components so it ends up out of bounds.
             let env_ptr = if !env_ref {
-                let alloc = bcx.alloca(common::val_ty(llval), "__debuginfo_env_ptr", None);
-                bcx.store(llval, alloc, None);
-                alloc
+                let alloc = LvalueRef::alloca(bcx,
+                    bcx.ccx.layout_of(tcx.mk_mut_ptr(arg.layout.ty)),
+                    "__debuginfo_env_ptr");
+                bcx.store(lvalue.llval, alloc.llval, None);
+                alloc.llval
             } else {
-                llval
-            };
-
-            let layout = bcx.ccx.layout_of(closure_ty);
-            let offsets = match *layout {
-                layout::Univariant { ref variant, .. } => &variant.offsets[..],
-                _ => bug!("Closures are only supposed to be Univariant")
+                lvalue.llval
             };
 
             for (i, (decl, ty)) in mir.upvar_decls.iter().zip(upvar_tys).enumerate() {
-                let byte_offset_of_var_in_env = offsets[i].bytes();
+                let byte_offset_of_var_in_env = closure_layout.fields.offset(i).bytes();
 
                 let ops = unsafe {
                     [llvm::LLVMRustDIBuilderCreateOpDeref(),
@@ -620,8 +580,7 @@
                 );
             }
         });
-        LocalRef::Lvalue(LvalueRef::new_sized(llval, LvalueTy::from_ty(arg_ty),
-                                              Alignment::AbiAligned))
+        LocalRef::Lvalue(lvalue)
     }).collect()
 }
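The hunks above replace pattern-matching on `Layout::Univariant { variant, .. }` to read `variant.offsets[i]` with a uniform `closure_layout.fields.offset(i)` query. A minimal standalone sketch of what such a field-offset query computes (illustrative names, not rustc's actual API): fields are laid out in order, each padded up to its alignment.

```rust
/// Toy model of a field-placement table: byte offsets precomputed at layout time.
#[derive(Debug)]
struct FieldPlacement {
    offsets: Vec<u64>,
}

impl FieldPlacement {
    /// Lay out fields in declaration order, rounding each offset up to the
    /// field's alignment (C-like struct layout).
    fn new(fields: &[(u64, u64)]) -> Self {
        // each entry is a (size, align) pair in bytes
        let mut off = 0u64;
        let offsets = fields
            .iter()
            .map(|&(size, align)| {
                off = (off + align - 1) / align * align; // round up to alignment
                let this = off;
                off += size;
                this
            })
            .collect();
        FieldPlacement { offsets }
    }

    /// The query the diff switches to: byte offset of field `i`.
    fn offset(&self, i: usize) -> u64 {
        self.offsets[i]
    }
}

fn main() {
    // e.g. a closure environment shaped like { a: u8, b: u32, c: u16 }
    let fp = FieldPlacement::new(&[(1, 1), (4, 4), (2, 2)]);
    assert_eq!(fp.offset(0), 0);
    assert_eq!(fp.offset(1), 4); // padded past the u8
    assert_eq!(fp.offset(2), 8);
    println!("{:?}", fp.offsets);
}
```

The upside visible in the diff is that callers no longer need to know the layout variant: `fields.offset(i)` works for any layout kind, not just univariant structs.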
 
@@ -629,6 +588,6 @@
 mod block;
 mod constant;
 pub mod lvalue;
-mod operand;
+pub mod operand;
 mod rvalue;
 mod statement;
diff --git a/src/librustc_trans/mir/operand.rs b/src/librustc_trans/mir/operand.rs
index 9ce1749..8c43bde 100644
--- a/src/librustc_trans/mir/operand.rs
+++ b/src/librustc_trans/mir/operand.rs
@@ -9,18 +9,16 @@
 // except according to those terms.
 
 use llvm::ValueRef;
-use rustc::ty::{self, Ty};
-use rustc::ty::layout::{Layout, LayoutTyper};
+use rustc::ty;
+use rustc::ty::layout::{self, LayoutOf, TyLayout};
 use rustc::mir;
-use rustc::mir::tcx::LvalueTy;
 use rustc_data_structures::indexed_vec::Idx;
 
-use adt;
 use base;
-use common::{self, CrateContext, C_null};
+use common::{self, CrateContext, C_undef, C_usize};
 use builder::Builder;
 use value::Value;
-use type_of;
+use type_of::LayoutLlvmExt;
 use type_::Type;
 
 use std::fmt;
@@ -43,63 +41,52 @@
     Pair(ValueRef, ValueRef)
 }
 
+impl fmt::Debug for OperandValue {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        match *self {
+            OperandValue::Ref(r, align) => {
+                write!(f, "Ref({:?}, {:?})", Value(r), align)
+            }
+            OperandValue::Immediate(i) => {
+                write!(f, "Immediate({:?})", Value(i))
+            }
+            OperandValue::Pair(a, b) => {
+                write!(f, "Pair({:?}, {:?})", Value(a), Value(b))
+            }
+        }
+    }
+}
+
 /// An `OperandRef` is an "SSA" reference to a Rust value, along with
 /// its type.
 ///
 /// NOTE: unless you know a value's type exactly, you should not
 /// generate LLVM opcodes acting on it and instead act via methods,
-/// to avoid nasty edge cases. In particular, using `Builder.store`
-/// directly is sure to cause problems -- use `MirContext.store_operand`
+/// to avoid nasty edge cases. In particular, using `Builder::store`
+/// directly is sure to cause problems -- use `OperandRef::store`
 /// instead.
 #[derive(Copy, Clone)]
 pub struct OperandRef<'tcx> {
     // The value.
     pub val: OperandValue,
 
-    // The type of value being returned.
-    pub ty: Ty<'tcx>
+    // The layout of value, based on its Rust type.
+    pub layout: TyLayout<'tcx>,
 }
 
 impl<'tcx> fmt::Debug for OperandRef<'tcx> {
     fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        match self.val {
-            OperandValue::Ref(r, align) => {
-                write!(f, "OperandRef(Ref({:?}, {:?}) @ {:?})",
-                       Value(r), align, self.ty)
-            }
-            OperandValue::Immediate(i) => {
-                write!(f, "OperandRef(Immediate({:?}) @ {:?})",
-                       Value(i), self.ty)
-            }
-            OperandValue::Pair(a, b) => {
-                write!(f, "OperandRef(Pair({:?}, {:?}) @ {:?})",
-                       Value(a), Value(b), self.ty)
-            }
-        }
+        write!(f, "OperandRef({:?} @ {:?})", self.val, self.layout)
     }
 }
 
 impl<'a, 'tcx> OperandRef<'tcx> {
     pub fn new_zst(ccx: &CrateContext<'a, 'tcx>,
-                   ty: Ty<'tcx>) -> OperandRef<'tcx> {
-        assert!(common::type_is_zero_size(ccx, ty));
-        let llty = type_of::type_of(ccx, ty);
-        let val = if common::type_is_imm_pair(ccx, ty) {
-            let layout = ccx.layout_of(ty);
-            let (ix0, ix1) = if let Layout::Univariant { ref variant, .. } = *layout {
-                (adt::struct_llfields_index(variant, 0),
-                adt::struct_llfields_index(variant, 1))
-            } else {
-                (0, 1)
-            };
-            let fields = llty.field_types();
-            OperandValue::Pair(C_null(fields[ix0]), C_null(fields[ix1]))
-        } else {
-            OperandValue::Immediate(C_null(llty))
-        };
+                   layout: TyLayout<'tcx>) -> OperandRef<'tcx> {
+        assert!(layout.is_zst());
         OperandRef {
-            val,
-            ty,
+            val: OperandValue::Immediate(C_undef(layout.immediate_llvm_type(ccx))),
+            layout
         }
     }
 
@@ -112,8 +99,8 @@
         }
     }
 
-    pub fn deref(self) -> LvalueRef<'tcx> {
-        let projected_ty = self.ty.builtin_deref(true, ty::NoPreference)
+    pub fn deref(self, ccx: &CrateContext<'a, 'tcx>) -> LvalueRef<'tcx> {
+        let projected_ty = self.layout.ty.builtin_deref(true, ty::NoPreference)
             .unwrap_or_else(|| bug!("deref of non-pointer {:?}", self)).ty;
         let (llptr, llextra) = match self.val {
             OperandValue::Immediate(llptr) => (llptr, ptr::null_mut()),
@@ -123,126 +110,150 @@
         LvalueRef {
             llval: llptr,
             llextra,
-            ty: LvalueTy::from_ty(projected_ty),
+            layout: ccx.layout_of(projected_ty),
             alignment: Alignment::AbiAligned,
         }
     }
 
-    /// If this operand is a Pair, we return an
-    /// Immediate aggregate with the two values.
-    pub fn pack_if_pair(mut self, bcx: &Builder<'a, 'tcx>) -> OperandRef<'tcx> {
+    /// If this operand is a `Pair`, we return an aggregate with the two values.
+    /// For other cases, see `immediate`.
+    pub fn immediate_or_packed_pair(self, bcx: &Builder<'a, 'tcx>) -> ValueRef {
         if let OperandValue::Pair(a, b) = self.val {
+            let llty = self.layout.llvm_type(bcx.ccx);
+            debug!("Operand::immediate_or_packed_pair: packing {:?} into {:?}",
+                   self, llty);
             // Reconstruct the immediate aggregate.
-            let llty = type_of::type_of(bcx.ccx, self.ty);
-            let mut llpair = common::C_undef(llty);
-            let elems = [a, b];
-            for i in 0..2 {
-                let mut elem = elems[i];
-                // Extend boolean i1's to i8.
-                if common::val_ty(elem) == Type::i1(bcx.ccx) {
-                    elem = bcx.zext(elem, Type::i8(bcx.ccx));
-                }
-                let layout = bcx.ccx.layout_of(self.ty);
-                let i = if let Layout::Univariant { ref variant, .. } = *layout {
-                    adt::struct_llfields_index(variant, i)
-                } else {
-                    i
-                };
-                llpair = bcx.insert_value(llpair, elem, i);
-            }
-            self.val = OperandValue::Immediate(llpair);
+            let mut llpair = C_undef(llty);
+            llpair = bcx.insert_value(llpair, a, 0);
+            llpair = bcx.insert_value(llpair, b, 1);
+            llpair
+        } else {
+            self.immediate()
         }
-        self
     }
 
-    /// If this operand is a pair in an Immediate,
-    /// we return a Pair with the two halves.
-    pub fn unpack_if_pair(mut self, bcx: &Builder<'a, 'tcx>) -> OperandRef<'tcx> {
-        if let OperandValue::Immediate(llval) = self.val {
+    /// If the type is a pair, we return a `Pair`, otherwise, an `Immediate`.
+    pub fn from_immediate_or_packed_pair(bcx: &Builder<'a, 'tcx>,
+                                         llval: ValueRef,
+                                         layout: TyLayout<'tcx>)
+                                         -> OperandRef<'tcx> {
+        let val = if layout.is_llvm_scalar_pair() {
+            debug!("Operand::from_immediate_or_packed_pair: unpacking {:?} @ {:?}",
+                    llval, layout);
+
             // Deconstruct the immediate aggregate.
-            if common::type_is_imm_pair(bcx.ccx, self.ty) {
-                debug!("Operand::unpack_if_pair: unpacking {:?}", self);
+            OperandValue::Pair(bcx.extract_value(llval, 0),
+                               bcx.extract_value(llval, 1))
+        } else {
+            OperandValue::Immediate(llval)
+        };
+        OperandRef { val, layout }
+    }
 
-                let layout = bcx.ccx.layout_of(self.ty);
-                let (ix0, ix1) = if let Layout::Univariant { ref variant, .. } = *layout {
-                    (adt::struct_llfields_index(variant, 0),
-                    adt::struct_llfields_index(variant, 1))
-                } else {
-                    (0, 1)
+    pub fn extract_field(&self, bcx: &Builder<'a, 'tcx>, i: usize) -> OperandRef<'tcx> {
+        let field = self.layout.field(bcx.ccx, i);
+        let offset = self.layout.fields.offset(i);
+
+        let mut val = match (self.val, &self.layout.abi) {
+            // If we're uninhabited, or the field is ZST, it has no data.
+            _ if self.layout.abi == layout::Abi::Uninhabited || field.is_zst() => {
+                return OperandRef {
+                    val: OperandValue::Immediate(C_undef(field.immediate_llvm_type(bcx.ccx))),
+                    layout: field
                 };
+            }
 
-                let mut a = bcx.extract_value(llval, ix0);
-                let mut b = bcx.extract_value(llval, ix1);
+            // Newtype of a scalar or scalar pair.
+            (OperandValue::Immediate(_), _) |
+            (OperandValue::Pair(..), _) if field.size == self.layout.size => {
+                assert_eq!(offset.bytes(), 0);
+                self.val
+            }
 
-                let pair_fields = common::type_pair_fields(bcx.ccx, self.ty);
-                if let Some([a_ty, b_ty]) = pair_fields {
-                    if a_ty.is_bool() {
-                        a = bcx.trunc(a, Type::i1(bcx.ccx));
-                    }
-                    if b_ty.is_bool() {
-                        b = bcx.trunc(b, Type::i1(bcx.ccx));
-                    }
+            // Extract a scalar component from a pair.
+            (OperandValue::Pair(a_llval, b_llval), &layout::Abi::ScalarPair(ref a, ref b)) => {
+                if offset.bytes() == 0 {
+                    assert_eq!(field.size, a.value.size(bcx.ccx));
+                    OperandValue::Immediate(a_llval)
+                } else {
+                    assert_eq!(offset, a.value.size(bcx.ccx)
+                        .abi_align(b.value.align(bcx.ccx)));
+                    assert_eq!(field.size, b.value.size(bcx.ccx));
+                    OperandValue::Immediate(b_llval)
                 }
+            }
 
-                self.val = OperandValue::Pair(a, b);
+            // `#[repr(simd)]` types are also immediate.
+            (OperandValue::Immediate(llval), &layout::Abi::Vector) => {
+                OperandValue::Immediate(
+                    bcx.extract_element(llval, C_usize(bcx.ccx, i as u64)))
+            }
+
+            _ => bug!("OperandRef::extract_field({:?}): not applicable", self)
+        };
+
+        // HACK(eddyb) have to bitcast pointers until LLVM removes pointee types.
+        match val {
+            OperandValue::Immediate(ref mut llval) => {
+                *llval = bcx.bitcast(*llval, field.immediate_llvm_type(bcx.ccx));
+            }
+            OperandValue::Pair(ref mut a, ref mut b) => {
+                *a = bcx.bitcast(*a, field.scalar_pair_element_llvm_type(bcx.ccx, 0));
+                *b = bcx.bitcast(*b, field.scalar_pair_element_llvm_type(bcx.ccx, 1));
+            }
+            OperandValue::Ref(..) => bug!()
+        }
+
+        OperandRef {
+            val,
+            layout: field
+        }
+    }
+}
+
+impl<'a, 'tcx> OperandValue {
+    pub fn store(self, bcx: &Builder<'a, 'tcx>, dest: LvalueRef<'tcx>) {
+        debug!("OperandRef::store: operand={:?}, dest={:?}", self, dest);
+        // Avoid generating stores of zero-sized values, because the only way to have a zero-sized
+        // value is through `undef`, and store itself is useless.
+        if dest.layout.is_zst() {
+            return;
+        }
+        match self {
+            OperandValue::Ref(r, source_align) =>
+                base::memcpy_ty(bcx, dest.llval, r, dest.layout,
+                                (source_align | dest.alignment).non_abi()),
+            OperandValue::Immediate(s) => {
+                bcx.store(base::from_immediate(bcx, s), dest.llval, dest.alignment.non_abi());
+            }
+            OperandValue::Pair(a, b) => {
+                for (i, &x) in [a, b].iter().enumerate() {
+                    let mut llptr = bcx.struct_gep(dest.llval, i as u64);
+                    // Make sure to always store i1 as i8.
+                    if common::val_ty(x) == Type::i1(bcx.ccx) {
+                        llptr = bcx.pointercast(llptr, Type::i8p(bcx.ccx));
+                    }
+                    bcx.store(base::from_immediate(bcx, x), llptr, dest.alignment.non_abi());
+                }
             }
         }
-        self
     }
 }
 
 impl<'a, 'tcx> MirContext<'a, 'tcx> {
-    pub fn trans_load(&mut self,
-                      bcx: &Builder<'a, 'tcx>,
-                      llval: ValueRef,
-                      align: Alignment,
-                      ty: Ty<'tcx>)
-                      -> OperandRef<'tcx>
+    fn maybe_trans_consume_direct(&mut self,
+                                  bcx: &Builder<'a, 'tcx>,
+                                  lvalue: &mir::Lvalue<'tcx>)
+                                   -> Option<OperandRef<'tcx>>
     {
-        debug!("trans_load: {:?} @ {:?}", Value(llval), ty);
-
-        let val = if common::type_is_fat_ptr(bcx.ccx, ty) {
-            let (lldata, llextra) = base::load_fat_ptr(bcx, llval, align, ty);
-            OperandValue::Pair(lldata, llextra)
-        } else if common::type_is_imm_pair(bcx.ccx, ty) {
-            let (ix0, ix1, f_align) = match *bcx.ccx.layout_of(ty) {
-                Layout::Univariant { ref variant, .. } => {
-                    (adt::struct_llfields_index(variant, 0),
-                    adt::struct_llfields_index(variant, 1),
-                    Alignment::from_packed(variant.packed) | align)
-                },
-                _ => (0, 1, align)
-            };
-            let [a_ty, b_ty] = common::type_pair_fields(bcx.ccx, ty).unwrap();
-            let a_ptr = bcx.struct_gep(llval, ix0);
-            let b_ptr = bcx.struct_gep(llval, ix1);
-
-            OperandValue::Pair(
-                base::load_ty(bcx, a_ptr, f_align, a_ty),
-                base::load_ty(bcx, b_ptr, f_align, b_ty)
-            )
-        } else if common::type_is_immediate(bcx.ccx, ty) {
-            OperandValue::Immediate(base::load_ty(bcx, llval, align, ty))
-        } else {
-            OperandValue::Ref(llval, align)
-        };
-
-        OperandRef { val: val, ty: ty }
-    }
-
-    pub fn trans_consume(&mut self,
-                         bcx: &Builder<'a, 'tcx>,
-                         lvalue: &mir::Lvalue<'tcx>)
-                         -> OperandRef<'tcx>
-    {
-        debug!("trans_consume(lvalue={:?})", lvalue);
+        debug!("maybe_trans_consume_direct(lvalue={:?})", lvalue);
 
         // watch out for locals that do not have an
         // alloca; they are handled somewhat differently
         if let mir::Lvalue::Local(index) = *lvalue {
             match self.locals[index] {
                 LocalRef::Operand(Some(o)) => {
-                    return o;
+                    return Some(o);
                 }
                 LocalRef::Operand(None) => {
                     bug!("use of {:?} before def", lvalue);
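The `extract_field` hunk above selects a pair half by byte offset: offset 0 means the first scalar, and the second scalar sits at the first's size rounded up to the second's alignment (the `a.value.size(...).abi_align(b.value.align(...))` assertion). A self-contained model of that offset check, with illustrative names rather than rustc's API:

```rust
/// Round `size` up to a multiple of `align` (what `abi_align` asserts above).
fn abi_align(size: u64, align: u64) -> u64 {
    (size + align - 1) / align * align
}

/// Which half of a scalar pair lives at byte `offset`?
/// Panics (like the diff's assert) if the offset matches neither half.
fn pair_half(a_size: u64, b_align: u64, offset: u64) -> usize {
    if offset == 0 {
        0
    } else {
        assert_eq!(offset, abi_align(a_size, b_align));
        1
    }
}

fn main() {
    // e.g. a pair shaped like (u8, u32): the second half starts at
    // align-up(1, 4) = 4, not at byte 1.
    assert_eq!(pair_half(1, 4, 0), 0);
    assert_eq!(pair_half(1, 4, 4), 1);
    println!("second half offset = {}", abi_align(1, 4));
}
```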
@@ -253,33 +264,40 @@
             }
         }
 
-        // Moves out of pair fields are trivial.
+        // Moves out of scalar and scalar pair fields are trivial.
         if let &mir::Lvalue::Projection(ref proj) = lvalue {
-            if let mir::Lvalue::Local(index) = proj.base {
-                if let LocalRef::Operand(Some(o)) = self.locals[index] {
-                    match (o.val, &proj.elem) {
-                        (OperandValue::Pair(a, b),
-                         &mir::ProjectionElem::Field(ref f, ty)) => {
-                            let llval = [a, b][f.index()];
-                            let op = OperandRef {
-                                val: OperandValue::Immediate(llval),
-                                ty: self.monomorphize(&ty)
-                            };
-
-                            // Handle nested pairs.
-                            return op.unpack_if_pair(bcx);
-                        }
-                        _ => {}
-                    }
+            if let mir::ProjectionElem::Field(ref f, _) = proj.elem {
+                if let Some(o) = self.maybe_trans_consume_direct(bcx, &proj.base) {
+                    return Some(o.extract_field(bcx, f.index()));
                 }
             }
         }
 
+        None
+    }
+
+    pub fn trans_consume(&mut self,
+                         bcx: &Builder<'a, 'tcx>,
+                         lvalue: &mir::Lvalue<'tcx>)
+                         -> OperandRef<'tcx>
+    {
+        debug!("trans_consume(lvalue={:?})", lvalue);
+
+        let ty = self.monomorphized_lvalue_ty(lvalue);
+        let layout = bcx.ccx.layout_of(ty);
+
+        // ZSTs don't require any actual memory access.
+        if layout.is_zst() {
+            return OperandRef::new_zst(bcx.ccx, layout);
+        }
+
+        if let Some(o) = self.maybe_trans_consume_direct(bcx, lvalue) {
+            return o;
+        }
+
         // for most lvalues, to consume them we just load them
         // out from their home
-        let tr_lvalue = self.trans_lvalue(bcx, lvalue);
-        let ty = tr_lvalue.ty.to_ty(bcx.tcx());
-        self.trans_load(bcx, tr_lvalue.llval, tr_lvalue.alignment, ty)
+        self.trans_lvalue(bcx, lvalue).load(bcx)
     }
 
     pub fn trans_operand(&mut self,
@@ -299,60 +317,11 @@
                 let operand = val.to_operand(bcx.ccx);
                 if let OperandValue::Ref(ptr, align) = operand.val {
                     // If this is a OperandValue::Ref to an immediate constant, load it.
-                    self.trans_load(bcx, ptr, align, operand.ty)
+                    LvalueRef::new_sized(ptr, operand.layout, align).load(bcx)
                 } else {
                     operand
                 }
             }
         }
     }
-
-    pub fn store_operand(&mut self,
-                         bcx: &Builder<'a, 'tcx>,
-                         lldest: ValueRef,
-                         align: Option<u32>,
-                         operand: OperandRef<'tcx>) {
-        debug!("store_operand: operand={:?}, align={:?}", operand, align);
-        // Avoid generating stores of zero-sized values, because the only way to have a zero-sized
-        // value is through `undef`, and store itself is useless.
-        if common::type_is_zero_size(bcx.ccx, operand.ty) {
-            return;
-        }
-        match operand.val {
-            OperandValue::Ref(r, Alignment::Packed) =>
-                base::memcpy_ty(bcx, lldest, r, operand.ty, Some(1)),
-            OperandValue::Ref(r, Alignment::AbiAligned) =>
-                base::memcpy_ty(bcx, lldest, r, operand.ty, align),
-            OperandValue::Immediate(s) => {
-                bcx.store(base::from_immediate(bcx, s), lldest, align);
-            }
-            OperandValue::Pair(a, b) => {
-                let (ix0, ix1, f_align) = match *bcx.ccx.layout_of(operand.ty) {
-                    Layout::Univariant { ref variant, .. } => {
-                        (adt::struct_llfields_index(variant, 0),
-                        adt::struct_llfields_index(variant, 1),
-                        if variant.packed { Some(1) } else { None })
-                    }
-                    _ => (0, 1, align)
-                };
-
-                let a = base::from_immediate(bcx, a);
-                let b = base::from_immediate(bcx, b);
-
-                // See comment above about zero-sized values.
-                let (a_zst, b_zst) = common::type_pair_fields(bcx.ccx, operand.ty)
-                    .map_or((false, false), |[a_ty, b_ty]| {
-                        (common::type_is_zero_size(bcx.ccx, a_ty),
-                         common::type_is_zero_size(bcx.ccx, b_ty))
-                    });
-
-                if !a_zst {
-                    bcx.store(a, bcx.struct_gep(lldest, ix0), f_align);
-                }
-                if !b_zst {
-                    bcx.store(b, bcx.struct_gep(lldest, ix1), f_align);
-                }
-            }
-        }
-    }
 }
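The operand.rs changes above replace the old `pack_if_pair`/`unpack_if_pair` methods with a symmetric pair: `immediate_or_packed_pair` packs a `Pair` into one aggregate value, and `from_immediate_or_packed_pair` splits it back when the layout says "scalar pair". A toy round-trip of that symmetry, using plain integers in place of LLVM values (names and the bit-packing are purely illustrative):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum OperandValue {
    Immediate(u64),
    Pair(u64, u64),
}

/// Pack a `Pair` into one "aggregate" word; pass immediates through.
fn immediate_or_packed_pair(v: OperandValue) -> u64 {
    match v {
        // toy aggregate: two u32 halves in one u64
        OperandValue::Pair(a, b) => (b << 32) | a,
        OperandValue::Immediate(i) => i,
    }
}

/// Inverse: split back into a `Pair` when the layout is a scalar pair.
fn from_immediate_or_packed_pair(llval: u64, is_scalar_pair: bool) -> OperandValue {
    if is_scalar_pair {
        OperandValue::Pair(llval & 0xffff_ffff, llval >> 32)
    } else {
        OperandValue::Immediate(llval)
    }
}

fn main() {
    let v = OperandValue::Pair(7, 9);
    let packed = immediate_or_packed_pair(v);
    assert_eq!(from_immediate_or_packed_pair(packed, true), v);
    println!("round-trip ok: {:?}", v);
}
```

The design point the diff makes is that the *layout* (here the `is_scalar_pair` flag, `layout.is_llvm_scalar_pair()` in the real code) decides the representation, so callers never guess from the value itself.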
diff --git a/src/librustc_trans/mir/rvalue.rs b/src/librustc_trans/mir/rvalue.rs
index 7e187a8..4781425 100644
--- a/src/librustc_trans/mir/rvalue.rs
+++ b/src/librustc_trans/mir/rvalue.rs
@@ -11,8 +11,7 @@
 use llvm::{self, ValueRef};
 use rustc::ty::{self, Ty};
 use rustc::ty::cast::{CastTy, IntTy};
-use rustc::ty::layout::{Layout, LayoutTyper};
-use rustc::mir::tcx::LvalueTy;
+use rustc::ty::layout::{self, LayoutOf};
 use rustc::mir;
 use rustc::middle::lang_items::ExchangeMallocFnLangItem;
 use rustc_apfloat::{ieee, Float, Status, Round};
@@ -22,14 +21,12 @@
 use base;
 use builder::Builder;
 use callee;
-use common::{self, val_ty, C_bool, C_i32, C_u32, C_u64, C_null, C_usize, C_uint, C_big_integral};
+use common::{self, val_ty};
+use common::{C_bool, C_u8, C_i32, C_u32, C_u64, C_null, C_usize, C_uint, C_uint_big};
 use consts;
-use adt;
-use machine;
 use monomorphize;
 use type_::Type;
-use type_of;
-use tvec;
+use type_of::LayoutLlvmExt;
 use value::Value;
 
 use super::{MirContext, LocalRef};
@@ -52,18 +49,18 @@
                let tr_operand = self.trans_operand(&bcx, operand);
                // FIXME: consider not copying constants through stack. (fixable by translating
                // constants into OperandValue::Ref, why don’t we do that yet if we don’t?)
-               self.store_operand(&bcx, dest.llval, dest.alignment.to_align(), tr_operand);
+               tr_operand.val.store(&bcx, dest);
                bcx
            }
 
-            mir::Rvalue::Cast(mir::CastKind::Unsize, ref source, cast_ty) => {
-                let cast_ty = self.monomorphize(&cast_ty);
-
-                if common::type_is_fat_ptr(bcx.ccx, cast_ty) {
+            mir::Rvalue::Cast(mir::CastKind::Unsize, ref source, _) => {
+                // The destination necessarily contains a fat pointer, so if
+                // it's a scalar pair, it's a fat pointer or newtype thereof.
+                if dest.layout.is_llvm_scalar_pair() {
                     // into-coerce of a thin pointer to a fat pointer - just
                     // use the operand path.
                     let (bcx, temp) = self.trans_rvalue_operand(bcx, rvalue);
-                    self.store_operand(&bcx, dest.llval, dest.alignment.to_align(), temp);
+                    temp.val.store(&bcx, dest);
                     return bcx;
                 }
 
@@ -72,10 +69,9 @@
                 // `CoerceUnsized` can be passed by a where-clause,
                 // so the (generic) MIR may not be able to expand it.
                 let operand = self.trans_operand(&bcx, source);
-                let operand = operand.pack_if_pair(&bcx);
-                let llref = match operand.val {
-                    OperandValue::Pair(..) => bug!(),
-                    OperandValue::Immediate(llval) => {
+                match operand.val {
+                    OperandValue::Pair(..) |
+                    OperandValue::Immediate(_) => {
                         // unsize from an immediate structure. We don't
                         // really need a temporary alloca here, but
                         // avoiding it would require us to have
@@ -83,106 +79,93 @@
                         // index into the struct, and this case isn't
                         // important enough for it.
                         debug!("trans_rvalue: creating ugly alloca");
-                        let scratch = LvalueRef::alloca(&bcx, operand.ty, "__unsize_temp");
-                        base::store_ty(&bcx, llval, scratch.llval, scratch.alignment, operand.ty);
-                        scratch
+                        let scratch = LvalueRef::alloca(&bcx, operand.layout, "__unsize_temp");
+                        scratch.storage_live(&bcx);
+                        operand.val.store(&bcx, scratch);
+                        base::coerce_unsized_into(&bcx, scratch, dest);
+                        scratch.storage_dead(&bcx);
                     }
                     OperandValue::Ref(llref, align) => {
-                        LvalueRef::new_sized_ty(llref, operand.ty, align)
+                        let source = LvalueRef::new_sized(llref, operand.layout, align);
+                        base::coerce_unsized_into(&bcx, source, dest);
                     }
-                };
-                base::coerce_unsized_into(&bcx, &llref, &dest);
+                }
                 bcx
             }
 
             mir::Rvalue::Repeat(ref elem, count) => {
-                let dest_ty = dest.ty.to_ty(bcx.tcx());
+                let tr_elem = self.trans_operand(&bcx, elem);
 
-                // No need to inizialize memory of a zero-sized slice
-                if common::type_is_zero_size(bcx.ccx, dest_ty) {
+                // Do not generate the loop for zero-sized elements or empty arrays.
+                if dest.layout.is_zst() {
                     return bcx;
                 }
 
-                let tr_elem = self.trans_operand(&bcx, elem);
-                let size = count.as_u64();
-                let size = C_usize(bcx.ccx, size);
-                let base = base::get_dataptr(&bcx, dest.llval);
-                let align = dest.alignment.to_align();
+                let start = dest.project_index(&bcx, C_usize(bcx.ccx, 0)).llval;
 
                 if let OperandValue::Immediate(v) = tr_elem.val {
+                    let align = dest.alignment.non_abi()
+                        .unwrap_or(tr_elem.layout.align);
+                    let align = C_i32(bcx.ccx, align.abi() as i32);
+                    let size = C_usize(bcx.ccx, dest.layout.size.bytes());
+
                     // Use llvm.memset.p0i8.* to initialize all zero arrays
                     if common::is_const_integral(v) && common::const_to_uint(v) == 0 {
-                        let align = align.unwrap_or_else(|| bcx.ccx.align_of(tr_elem.ty));
-                        let align = C_i32(bcx.ccx, align as i32);
-                        let ty = type_of::type_of(bcx.ccx, dest_ty);
-                        let size = machine::llsize_of(bcx.ccx, ty);
-                        let fill = C_uint(Type::i8(bcx.ccx), 0);
-                        base::call_memset(&bcx, base, fill, size, align, false);
+                        let fill = C_u8(bcx.ccx, 0);
+                        base::call_memset(&bcx, start, fill, size, align, false);
                         return bcx;
                     }
 
                     // Use llvm.memset.p0i8.* to initialize byte arrays
+                    let v = base::from_immediate(&bcx, v);
                     if common::val_ty(v) == Type::i8(bcx.ccx) {
-                        let align = align.unwrap_or_else(|| bcx.ccx.align_of(tr_elem.ty));
-                        let align = C_i32(bcx.ccx, align as i32);
-                        base::call_memset(&bcx, base, v, size, align, false);
+                        base::call_memset(&bcx, start, v, size, align, false);
                         return bcx;
                     }
                 }
 
-                tvec::slice_for_each(&bcx, base, tr_elem.ty, size, |bcx, llslot, loop_bb| {
-                    self.store_operand(bcx, llslot, align, tr_elem);
-                    bcx.br(loop_bb);
-                })
+                let count = count.as_u64();
+                let count = C_usize(bcx.ccx, count);
+                let end = dest.project_index(&bcx, count).llval;
+
+                let header_bcx = bcx.build_sibling_block("repeat_loop_header");
+                let body_bcx = bcx.build_sibling_block("repeat_loop_body");
+                let next_bcx = bcx.build_sibling_block("repeat_loop_next");
+
+                bcx.br(header_bcx.llbb());
+                let current = header_bcx.phi(common::val_ty(start), &[start], &[bcx.llbb()]);
+
+                let keep_going = header_bcx.icmp(llvm::IntNE, current, end);
+                header_bcx.cond_br(keep_going, body_bcx.llbb(), next_bcx.llbb());
+
+                tr_elem.val.store(&body_bcx,
+                    LvalueRef::new_sized(current, tr_elem.layout, dest.alignment));
+
+                let next = body_bcx.inbounds_gep(current, &[C_usize(bcx.ccx, 1)]);
+                body_bcx.br(header_bcx.llbb());
+                header_bcx.add_incoming_to_phi(current, next, body_bcx.llbb());
+
+                next_bcx
             }
 
             mir::Rvalue::Aggregate(ref kind, ref operands) => {
-                match **kind {
-                    mir::AggregateKind::Adt(adt_def, variant_index, substs, active_field_index) => {
-                        let discr = adt_def.discriminant_for_variant(bcx.tcx(), variant_index)
-                           .to_u128_unchecked() as u64;
-                        let dest_ty = dest.ty.to_ty(bcx.tcx());
-                        adt::trans_set_discr(&bcx, dest_ty, dest.llval, discr);
-                        for (i, operand) in operands.iter().enumerate() {
-                            let op = self.trans_operand(&bcx, operand);
-                            // Do not generate stores and GEPis for zero-sized fields.
-                            if !common::type_is_zero_size(bcx.ccx, op.ty) {
-                                let mut val = LvalueRef::new_sized(
-                                    dest.llval, dest.ty, dest.alignment);
-                                let field_index = active_field_index.unwrap_or(i);
-                                val.ty = LvalueTy::Downcast {
-                                    adt_def,
-                                    substs: self.monomorphize(&substs),
-                                    variant_index,
-                                };
-                                let (lldest_i, align) = val.trans_field_ptr(&bcx, field_index);
-                                self.store_operand(&bcx, lldest_i, align.to_align(), op);
-                            }
+                let (dest, active_field_index) = match **kind {
+                    mir::AggregateKind::Adt(adt_def, variant_index, _, active_field_index) => {
+                        dest.trans_set_discr(&bcx, variant_index);
+                        if adt_def.is_enum() {
+                            (dest.project_downcast(&bcx, variant_index), active_field_index)
+                        } else {
+                            (dest, active_field_index)
                         }
-                    },
-                    _ => {
-                        // If this is a tuple or closure, we need to translate GEP indices.
-                        let layout = bcx.ccx.layout_of(dest.ty.to_ty(bcx.tcx()));
-                        let get_memory_index = |i| {
-                            if let Layout::Univariant { ref variant, .. } = *layout {
-                                adt::struct_llfields_index(variant, i)
-                            } else {
-                                i
-                            }
-                        };
-                        let alignment = dest.alignment;
-                        for (i, operand) in operands.iter().enumerate() {
-                            let op = self.trans_operand(&bcx, operand);
-                            // Do not generate stores and GEPis for zero-sized fields.
-                            if !common::type_is_zero_size(bcx.ccx, op.ty) {
-                                // Note: perhaps this should be StructGep, but
-                                // note that in some cases the values here will
-                                // not be structs but arrays.
-                                let i = get_memory_index(i);
-                                let dest = bcx.gepi(dest.llval, &[0, i]);
-                                self.store_operand(&bcx, dest, alignment.to_align(), op);
-                            }
-                        }
+                    }
+                    _ => (dest, None)
+                };
+                for (i, operand) in operands.iter().enumerate() {
+                    let op = self.trans_operand(&bcx, operand);
+                    // Do not generate stores and GEPis for zero-sized fields.
+                    if !op.layout.is_zst() {
+                        let field_index = active_field_index.unwrap_or(i);
+                        op.val.store(&bcx, dest.project_field(&bcx, field_index));
                     }
                 }
                 bcx
@@ -191,7 +174,7 @@
             _ => {
                 assert!(self.rvalue_creates_operand(rvalue));
                 let (bcx, temp) = self.trans_rvalue_operand(bcx, rvalue);
-                self.store_operand(&bcx, dest.llval, dest.alignment.to_align(), temp);
+                temp.val.store(&bcx, dest);
                 bcx
             }
         }
@@ -205,32 +188,32 @@
         assert!(self.rvalue_creates_operand(rvalue), "cannot trans {:?} to operand", rvalue);
 
         match *rvalue {
-            mir::Rvalue::Cast(ref kind, ref source, cast_ty) => {
+            mir::Rvalue::Cast(ref kind, ref source, mir_cast_ty) => {
                 let operand = self.trans_operand(&bcx, source);
                 debug!("cast operand is {:?}", operand);
-                let cast_ty = self.monomorphize(&cast_ty);
+                let cast = bcx.ccx.layout_of(self.monomorphize(&mir_cast_ty));
 
                 let val = match *kind {
                     mir::CastKind::ReifyFnPointer => {
-                        match operand.ty.sty {
+                        match operand.layout.ty.sty {
                             ty::TyFnDef(def_id, substs) => {
                                 OperandValue::Immediate(
                                     callee::resolve_and_get_fn(bcx.ccx, def_id, substs))
                             }
                             _ => {
-                                bug!("{} cannot be reified to a fn ptr", operand.ty)
+                                bug!("{} cannot be reified to a fn ptr", operand.layout.ty)
                             }
                         }
                     }
                     mir::CastKind::ClosureFnPointer => {
-                        match operand.ty.sty {
+                        match operand.layout.ty.sty {
                             ty::TyClosure(def_id, substs) => {
                                 let instance = monomorphize::resolve_closure(
                                     bcx.ccx.tcx(), def_id, substs, ty::ClosureKind::FnOnce);
                                 OperandValue::Immediate(callee::get_fn(bcx.ccx, instance))
                             }
                             _ => {
-                                bug!("{} cannot be cast to a fn ptr", operand.ty)
+                                bug!("{} cannot be cast to a fn ptr", operand.layout.ty)
                             }
                         }
                     }
@@ -239,26 +222,24 @@
                         operand.val
                     }
                     mir::CastKind::Unsize => {
-                        // unsize targets other than to a fat pointer currently
-                        // can't be operands.
-                        assert!(common::type_is_fat_ptr(bcx.ccx, cast_ty));
-
+                        assert!(cast.is_llvm_scalar_pair());
                         match operand.val {
                             OperandValue::Pair(lldata, llextra) => {
                                 // unsize from a fat pointer - this is a
                                 // "trait-object-to-supertrait" coercion, for
                                 // example,
                                 //   &'a fmt::Debug+Send => &'a fmt::Debug,
-                                // So we need to pointercast the base to ensure
-                                // the types match up.
-                                let llcast_ty = type_of::fat_ptr_base_ty(bcx.ccx, cast_ty);
-                                let lldata = bcx.pointercast(lldata, llcast_ty);
+
+                                // HACK(eddyb) have to bitcast pointers
+                                // until LLVM removes pointee types.
+                                let lldata = bcx.pointercast(lldata,
+                                    cast.scalar_pair_element_llvm_type(bcx.ccx, 0));
                                 OperandValue::Pair(lldata, llextra)
                             }
                             OperandValue::Immediate(lldata) => {
                                 // "standard" unsize
                                 let (lldata, llextra) = base::unsize_thin_ptr(&bcx, lldata,
-                                    operand.ty, cast_ty);
+                                    operand.layout.ty, cast.ty);
                                 OperandValue::Pair(lldata, llextra)
                             }
                             OperandValue::Ref(..) => {
@@ -267,20 +248,17 @@
                             }
                         }
                     }
-                    mir::CastKind::Misc if common::type_is_fat_ptr(bcx.ccx, operand.ty) => {
-                        let ll_cast_ty = type_of::immediate_type_of(bcx.ccx, cast_ty);
-                        let ll_from_ty = type_of::immediate_type_of(bcx.ccx, operand.ty);
-                        if let OperandValue::Pair(data_ptr, meta_ptr) = operand.val {
-                            if common::type_is_fat_ptr(bcx.ccx, cast_ty) {
-                                let ll_cft = ll_cast_ty.field_types();
-                                let ll_fft = ll_from_ty.field_types();
-                                let data_cast = bcx.pointercast(data_ptr, ll_cft[0]);
-                                assert_eq!(ll_cft[1].kind(), ll_fft[1].kind());
-                                OperandValue::Pair(data_cast, meta_ptr)
+                    mir::CastKind::Misc if operand.layout.is_llvm_scalar_pair() => {
+                        if let OperandValue::Pair(data_ptr, meta) = operand.val {
+                            if cast.is_llvm_scalar_pair() {
+                                let data_cast = bcx.pointercast(data_ptr,
+                                    cast.scalar_pair_element_llvm_type(bcx.ccx, 0));
+                                OperandValue::Pair(data_cast, meta)
                             } else { // cast to thin-ptr
                                 // Cast of fat-ptr to thin-ptr is an extraction of data-ptr and
                                 // pointer-cast of that pointer to desired pointer type.
-                                let llval = bcx.pointercast(data_ptr, ll_cast_ty);
+                                let llcast_ty = cast.immediate_llvm_type(bcx.ccx);
+                                let llval = bcx.pointercast(data_ptr, llcast_ty);
                                 OperandValue::Immediate(llval)
                             }
                         } else {
@@ -288,30 +266,32 @@
                         }
                     }
                     mir::CastKind::Misc => {
-                        debug_assert!(common::type_is_immediate(bcx.ccx, cast_ty));
-                        let r_t_in = CastTy::from_ty(operand.ty).expect("bad input type for cast");
-                        let r_t_out = CastTy::from_ty(cast_ty).expect("bad output type for cast");
-                        let ll_t_in = type_of::immediate_type_of(bcx.ccx, operand.ty);
-                        let ll_t_out = type_of::immediate_type_of(bcx.ccx, cast_ty);
+                        assert!(cast.is_llvm_immediate());
+                        let r_t_in = CastTy::from_ty(operand.layout.ty)
+                            .expect("bad input type for cast");
+                        let r_t_out = CastTy::from_ty(cast.ty).expect("bad output type for cast");
+                        let ll_t_in = operand.layout.immediate_llvm_type(bcx.ccx);
+                        let ll_t_out = cast.immediate_llvm_type(bcx.ccx);
                         let llval = operand.immediate();
-                        let l = bcx.ccx.layout_of(operand.ty);
-                        let signed = if let Layout::CEnum { signed, min, max, .. } = *l {
-                            if max > min {
-                                // We want `table[e as usize]` to not
-                                // have bound checks, and this is the most
-                                // convenient place to put the `assume`.
 
-                                base::call_assume(&bcx, bcx.icmp(
-                                    llvm::IntULE,
-                                    llval,
-                                    C_uint(common::val_ty(llval), max)
-                                ));
+                        let mut signed = false;
+                        if let layout::Abi::Scalar(ref scalar) = operand.layout.abi {
+                            if let layout::Int(_, s) = scalar.value {
+                                signed = s;
+
+                                if scalar.valid_range.end > scalar.valid_range.start {
+                                    // We want `table[e as usize]` to not
+                                    // have bound checks, and this is the most
+                                    // convenient place to put the `assume`.
+
+                                    base::call_assume(&bcx, bcx.icmp(
+                                        llvm::IntULE,
+                                        llval,
+                                        C_uint_big(ll_t_in, scalar.valid_range.end)
+                                    ));
+                                }
                             }
-
-                            signed
-                        } else {
-                            operand.ty.is_signed()
-                        };
+                        }
 
                         let newval = match (r_t_in, r_t_out) {
                             (CastTy::Int(_), CastTy::Int(_)) => {
@@ -343,49 +323,43 @@
                                 cast_float_to_int(&bcx, true, llval, ll_t_in, ll_t_out),
                             (CastTy::Float, CastTy::Int(_)) =>
                                 cast_float_to_int(&bcx, false, llval, ll_t_in, ll_t_out),
-                            _ => bug!("unsupported cast: {:?} to {:?}", operand.ty, cast_ty)
+                            _ => bug!("unsupported cast: {:?} to {:?}", operand.layout.ty, cast.ty)
                         };
                         OperandValue::Immediate(newval)
                     }
                 };
-                let operand = OperandRef {
+                (bcx, OperandRef {
                     val,
-                    ty: cast_ty
-                };
-                (bcx, operand)
+                    layout: cast
+                })
             }
 
             mir::Rvalue::Ref(_, bk, ref lvalue) => {
                 let tr_lvalue = self.trans_lvalue(&bcx, lvalue);
 
-                let ty = tr_lvalue.ty.to_ty(bcx.tcx());
-                let ref_ty = bcx.tcx().mk_ref(
-                    bcx.tcx().types.re_erased,
-                    ty::TypeAndMut { ty: ty, mutbl: bk.to_mutbl_lossy() }
-                );
+                let ty = tr_lvalue.layout.ty;
 
                 // Note: lvalues are indirect, so storing the `llval` into the
                 // destination effectively creates a reference.
-                let operand = if !bcx.ccx.shared().type_has_metadata(ty) {
-                    OperandRef {
-                        val: OperandValue::Immediate(tr_lvalue.llval),
-                        ty: ref_ty,
-                    }
+                let val = if !bcx.ccx.shared().type_has_metadata(ty) {
+                    OperandValue::Immediate(tr_lvalue.llval)
                 } else {
-                    OperandRef {
-                        val: OperandValue::Pair(tr_lvalue.llval,
-                                                tr_lvalue.llextra),
-                        ty: ref_ty,
-                    }
+                    OperandValue::Pair(tr_lvalue.llval, tr_lvalue.llextra)
                 };
-                (bcx, operand)
+                (bcx, OperandRef {
+                    val,
+                    layout: self.ccx.layout_of(self.ccx.tcx().mk_ref(
+                        self.ccx.tcx().types.re_erased,
+                        ty::TypeAndMut { ty, mutbl: bk.to_mutbl_lossy() }
+                    )),
+                })
             }
 
             mir::Rvalue::Len(ref lvalue) => {
                 let size = self.evaluate_array_len(&bcx, lvalue);
                 let operand = OperandRef {
                     val: OperandValue::Immediate(size),
-                    ty: bcx.tcx().types.usize,
+                    layout: bcx.ccx.layout_of(bcx.tcx().types.usize),
                 };
                 (bcx, operand)
             }
@@ -393,26 +367,26 @@
             mir::Rvalue::BinaryOp(op, ref lhs, ref rhs) => {
                 let lhs = self.trans_operand(&bcx, lhs);
                 let rhs = self.trans_operand(&bcx, rhs);
-                let llresult = if common::type_is_fat_ptr(bcx.ccx, lhs.ty) {
-                    match (lhs.val, rhs.val) {
-                        (OperandValue::Pair(lhs_addr, lhs_extra),
-                         OperandValue::Pair(rhs_addr, rhs_extra)) => {
-                            self.trans_fat_ptr_binop(&bcx, op,
-                                                     lhs_addr, lhs_extra,
-                                                     rhs_addr, rhs_extra,
-                                                     lhs.ty)
-                        }
-                        _ => bug!()
+                let llresult = match (lhs.val, rhs.val) {
+                    (OperandValue::Pair(lhs_addr, lhs_extra),
+                     OperandValue::Pair(rhs_addr, rhs_extra)) => {
+                        self.trans_fat_ptr_binop(&bcx, op,
+                                                 lhs_addr, lhs_extra,
+                                                 rhs_addr, rhs_extra,
+                                                 lhs.layout.ty)
                     }
 
-                } else {
-                    self.trans_scalar_binop(&bcx, op,
-                                            lhs.immediate(), rhs.immediate(),
-                                            lhs.ty)
+                    (OperandValue::Immediate(lhs_val),
+                     OperandValue::Immediate(rhs_val)) => {
+                        self.trans_scalar_binop(&bcx, op, lhs_val, rhs_val, lhs.layout.ty)
+                    }
+
+                    _ => bug!()
                 };
                 let operand = OperandRef {
                     val: OperandValue::Immediate(llresult),
-                    ty: op.ty(bcx.tcx(), lhs.ty, rhs.ty),
+                    layout: bcx.ccx.layout_of(
+                        op.ty(bcx.tcx(), lhs.layout.ty, rhs.layout.ty)),
                 };
                 (bcx, operand)
             }
@@ -421,12 +395,12 @@
                 let rhs = self.trans_operand(&bcx, rhs);
                 let result = self.trans_scalar_checked_binop(&bcx, op,
                                                              lhs.immediate(), rhs.immediate(),
-                                                             lhs.ty);
-                let val_ty = op.ty(bcx.tcx(), lhs.ty, rhs.ty);
+                                                             lhs.layout.ty);
+                let val_ty = op.ty(bcx.tcx(), lhs.layout.ty, rhs.layout.ty);
                 let operand_ty = bcx.tcx().intern_tup(&[val_ty, bcx.tcx().types.bool], false);
                 let operand = OperandRef {
                     val: result,
-                    ty: operand_ty
+                    layout: bcx.ccx.layout_of(operand_ty)
                 };
 
                 (bcx, operand)
@@ -435,7 +409,7 @@
             mir::Rvalue::UnaryOp(op, ref operand) => {
                 let operand = self.trans_operand(&bcx, operand);
                 let lloperand = operand.immediate();
-                let is_float = operand.ty.is_fp();
+                let is_float = operand.layout.ty.is_fp();
                 let llval = match op {
                     mir::UnOp::Not => bcx.not(lloperand),
                     mir::UnOp::Neg => if is_float {
@@ -446,47 +420,43 @@
                 };
                 (bcx, OperandRef {
                     val: OperandValue::Immediate(llval),
-                    ty: operand.ty,
+                    layout: operand.layout,
                 })
             }
 
             mir::Rvalue::Discriminant(ref lvalue) => {
-                let discr_lvalue = self.trans_lvalue(&bcx, lvalue);
-                let enum_ty = discr_lvalue.ty.to_ty(bcx.tcx());
                 let discr_ty = rvalue.ty(&*self.mir, bcx.tcx());
-                let discr_type = type_of::immediate_type_of(bcx.ccx, discr_ty);
-                let discr = adt::trans_get_discr(&bcx, enum_ty, discr_lvalue.llval,
-                                                  discr_lvalue.alignment, Some(discr_type), true);
+                let discr = self.trans_lvalue(&bcx, lvalue)
+                    .trans_get_discr(&bcx, discr_ty);
                 (bcx, OperandRef {
                     val: OperandValue::Immediate(discr),
-                    ty: discr_ty
+                    layout: self.ccx.layout_of(discr_ty)
                 })
             }
 
             mir::Rvalue::NullaryOp(mir::NullOp::SizeOf, ty) => {
                 assert!(bcx.ccx.shared().type_is_sized(ty));
-                let val = C_usize(bcx.ccx, bcx.ccx.size_of(ty));
+                let val = C_usize(bcx.ccx, bcx.ccx.size_of(ty).bytes());
                 let tcx = bcx.tcx();
                 (bcx, OperandRef {
                     val: OperandValue::Immediate(val),
-                    ty: tcx.types.usize,
+                    layout: self.ccx.layout_of(tcx.types.usize),
                 })
             }
 
             mir::Rvalue::NullaryOp(mir::NullOp::Box, content_ty) => {
                 let content_ty: Ty<'tcx> = self.monomorphize(&content_ty);
-                let llty = type_of::type_of(bcx.ccx, content_ty);
-                let llsize = machine::llsize_of(bcx.ccx, llty);
-                let align = bcx.ccx.align_of(content_ty);
-                let llalign = C_usize(bcx.ccx, align as u64);
-                let llty_ptr = llty.ptr_to();
-                let box_ty = bcx.tcx().mk_box(content_ty);
+                let (size, align) = bcx.ccx.size_and_align_of(content_ty);
+                let llsize = C_usize(bcx.ccx, size.bytes());
+                let llalign = C_usize(bcx.ccx, align.abi());
+                let box_layout = bcx.ccx.layout_of(bcx.tcx().mk_box(content_ty));
+                let llty_ptr = box_layout.llvm_type(bcx.ccx);
 
                 // Allocate space:
                 let def_id = match bcx.tcx().lang_items().require(ExchangeMallocFnLangItem) {
                     Ok(id) => id,
                     Err(s) => {
-                        bcx.sess().fatal(&format!("allocation of `{}` {}", box_ty, s));
+                        bcx.sess().fatal(&format!("allocation of `{}` {}", box_layout.ty, s));
                     }
                 };
                 let instance = ty::Instance::mono(bcx.tcx(), def_id);
@@ -495,7 +465,7 @@
 
                 let operand = OperandRef {
                     val: OperandValue::Immediate(val),
-                    ty: box_ty,
+                    layout: box_layout,
                 };
                 (bcx, operand)
             }
@@ -508,7 +478,8 @@
                 // According to `rvalue_creates_operand`, only ZST
                 // aggregate rvalues are allowed to be operands.
                 let ty = rvalue.ty(self.mir, self.ccx.tcx());
-                (bcx, OperandRef::new_zst(self.ccx, self.monomorphize(&ty)))
+                (bcx, OperandRef::new_zst(self.ccx,
+                    self.ccx.layout_of(self.monomorphize(&ty))))
             }
         }
     }
@@ -521,11 +492,9 @@
         // because trans_lvalue() panics if Local is operand.
         if let mir::Lvalue::Local(index) = *lvalue {
             if let LocalRef::Operand(Some(op)) = self.locals[index] {
-                if common::type_is_zero_size(bcx.ccx, op.ty) {
-                    if let ty::TyArray(_, n) = op.ty.sty {
-                        let n = n.val.to_const_int().unwrap().to_u64().unwrap();
-                        return common::C_usize(bcx.ccx, n);
-                    }
+                if let ty::TyArray(_, n) = op.layout.ty.sty {
+                    let n = n.val.to_const_int().unwrap().to_u64().unwrap();
+                    return common::C_usize(bcx.ccx, n);
                 }
             }
         }
@@ -730,7 +699,7 @@
             mir::Rvalue::Aggregate(..) => {
                 let ty = rvalue.ty(self.mir, self.ccx.tcx());
                 let ty = self.monomorphize(&ty);
-                common::type_is_zero_size(self.ccx, ty)
+                self.ccx.layout_of(ty).is_zst()
             }
         }
 
@@ -830,7 +799,7 @@
     if is_u128_to_f32 {
         // All inputs greater or equal to (f32::MAX + 0.5 ULP) are rounded to infinity,
         // and for everything else LLVM's uitofp works just fine.
-        let max = C_big_integral(int_ty, MAX_F32_PLUS_HALF_ULP);
+        let max = C_uint_big(int_ty, MAX_F32_PLUS_HALF_ULP);
         let overflow = bcx.icmp(llvm::IntUGE, x, max);
         let infinity_bits = C_u32(bcx.ccx, ieee::Single::INFINITY.to_bits() as u32);
         let infinity = consts::bitcast(infinity_bits, float_ty);
@@ -957,8 +926,8 @@
     // performed is ultimately up to the backend, but at least x86 does perform them.
     let less_or_nan = bcx.fcmp(llvm::RealULT, x, f_min);
     let greater = bcx.fcmp(llvm::RealOGT, x, f_max);
-    let int_max = C_big_integral(int_ty, int_max(signed, int_ty));
-    let int_min = C_big_integral(int_ty, int_min(signed, int_ty) as u128);
+    let int_max = C_uint_big(int_ty, int_max(signed, int_ty));
+    let int_min = C_uint_big(int_ty, int_min(signed, int_ty) as u128);
     let s0 = bcx.select(less_or_nan, int_min, fptosui_result);
     let s1 = bcx.select(greater, int_max, s0);
 
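The clamping sequence above selects `int_min` when the input is less than the smallest representable bound (or unordered, i.e. NaN) and `int_max` when it exceeds the largest. A hypothetical standalone Rust sketch of the same idea for `f32 -> i32` (the bound constants and the NaN-to-zero choice are assumptions for illustration; real codegen fixes NaN up in a separate step after this hunk):

```rust
// Saturating float-to-int conversion mirroring the two `select`s above:
// below-range (or NaN, via the unordered compare) picks int_min,
// above-range picks int_max, in-range values truncate normally.
fn clamped_f32_to_i32(x: f32) -> i32 {
    let f_min = i32::MIN as f32;     // -2^31, exactly representable in f32
    let f_max = 2_147_483_520.0f32;  // largest f32 <= i32::MAX
    if x.is_nan() {
        // Here we just pick 0, matching Rust's `as` semantics.
        0
    } else if x < f_min {
        i32::MIN
    } else if x > f_max {
        i32::MAX
    } else {
        x as i32 // in-range: plain truncating cast
    }
}
```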
diff --git a/src/librustc_trans/mir/statement.rs b/src/librustc_trans/mir/statement.rs
index bbf661a..607ecd8 100644
--- a/src/librustc_trans/mir/statement.rs
+++ b/src/librustc_trans/mir/statement.rs
@@ -10,14 +10,11 @@
 
 use rustc::mir;
 
-use base;
 use asm;
-use common;
 use builder::Builder;
 
 use super::MirContext;
 use super::LocalRef;
-use super::super::adt;
 
 impl<'a, 'tcx> MirContext<'a, 'tcx> {
     pub fn trans_statement(&mut self,
@@ -39,18 +36,16 @@
                             self.locals[index] = LocalRef::Operand(Some(operand));
                             bcx
                         }
-                        LocalRef::Operand(Some(_)) => {
-                            let ty = self.monomorphized_lvalue_ty(lvalue);
-
-                            if !common::type_is_zero_size(bcx.ccx, ty) {
+                        LocalRef::Operand(Some(op)) => {
+                            if !op.layout.is_zst() {
                                 span_bug!(statement.source_info.span,
                                           "operand {:?} already assigned",
                                           rvalue);
-                            } else {
-                                // If the type is zero-sized, it's already been set here,
-                                // but we still need to make sure we translate the operand
-                                self.trans_rvalue_operand(bcx, rvalue).0
                             }
+
+                            // If the type is zero-sized, it's already been set here,
+                            // but we still need to make sure we translate the operand
+                            self.trans_rvalue_operand(bcx, rvalue).0
                         }
                     }
                 } else {
@@ -59,24 +54,25 @@
                 }
             }
             mir::StatementKind::SetDiscriminant{ref lvalue, variant_index} => {
-                let ty = self.monomorphized_lvalue_ty(lvalue);
-                let lvalue_transed = self.trans_lvalue(&bcx, lvalue);
-                adt::trans_set_discr(&bcx,
-                    ty,
-                    lvalue_transed.llval,
-                    variant_index as u64);
+                self.trans_lvalue(&bcx, lvalue)
+                    .trans_set_discr(&bcx, variant_index);
                 bcx
             }
             mir::StatementKind::StorageLive(local) => {
-                self.trans_storage_liveness(bcx, local, base::Lifetime::Start)
+                if let LocalRef::Lvalue(tr_lval) = self.locals[local] {
+                    tr_lval.storage_live(&bcx);
+                }
+                bcx
             }
             mir::StatementKind::StorageDead(local) => {
-                self.trans_storage_liveness(bcx, local, base::Lifetime::End)
+                if let LocalRef::Lvalue(tr_lval) = self.locals[local] {
+                    tr_lval.storage_dead(&bcx);
+                }
+                bcx
             }
             mir::StatementKind::InlineAsm { ref asm, ref outputs, ref inputs } => {
                 let outputs = outputs.iter().map(|output| {
-                    let lvalue = self.trans_lvalue(&bcx, output);
-                    (lvalue.llval, lvalue.ty.to_ty(bcx.tcx()))
+                    self.trans_lvalue(&bcx, output)
                 }).collect();
 
                 let input_vals = inputs.iter().map(|input| {
@@ -91,15 +87,4 @@
             mir::StatementKind::Nop => bcx,
         }
     }
-
-    fn trans_storage_liveness(&self,
-                              bcx: Builder<'a, 'tcx>,
-                              index: mir::Local,
-                              intrinsic: base::Lifetime)
-                              -> Builder<'a, 'tcx> {
-        if let LocalRef::Lvalue(tr_lval) = self.locals[index] {
-            intrinsic.call(&bcx, tr_lval.llval);
-        }
-        bcx
-    }
 }
diff --git a/src/librustc_trans/trans_item.rs b/src/librustc_trans/trans_item.rs
index fb68be2..991f99e 100644
--- a/src/librustc_trans/trans_item.rs
+++ b/src/librustc_trans/trans_item.rs
@@ -23,14 +23,15 @@
 use declare;
 use llvm;
 use monomorphize::Instance;
+use type_of::LayoutLlvmExt;
 use rustc::hir;
 use rustc::middle::trans::{Linkage, Visibility};
 use rustc::ty::{self, TyCtxt, TypeFoldable};
+use rustc::ty::layout::LayoutOf;
 use syntax::ast;
 use syntax::attr;
 use syntax_pos::Span;
 use syntax_pos::symbol::Symbol;
-use type_of;
 use std::fmt;
 
 pub use rustc::middle::trans::TransItem;
@@ -173,7 +174,7 @@
     let def_id = ccx.tcx().hir.local_def_id(node_id);
     let instance = Instance::mono(ccx.tcx(), def_id);
     let ty = common::instance_ty(ccx.tcx(), &instance);
-    let llty = type_of::type_of(ccx, ty);
+    let llty = ccx.layout_of(ty).llvm_type(ccx);
 
     let g = declare::define_global(ccx, symbol_name, llty).unwrap_or_else(|| {
         ccx.sess().span_fatal(ccx.tcx().hir.span(node_id),
diff --git a/src/librustc_trans/tvec.rs b/src/librustc_trans/tvec.rs
deleted file mode 100644
index da4a4e5..0000000
--- a/src/librustc_trans/tvec.rs
+++ /dev/null
@@ -1,53 +0,0 @@
-// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-use llvm;
-use builder::Builder;
-use llvm::{BasicBlockRef, ValueRef};
-use common::*;
-use rustc::ty::Ty;
-
-pub fn slice_for_each<'a, 'tcx, F>(
-    bcx: &Builder<'a, 'tcx>,
-    data_ptr: ValueRef,
-    unit_ty: Ty<'tcx>,
-    len: ValueRef,
-    f: F
-) -> Builder<'a, 'tcx> where F: FnOnce(&Builder<'a, 'tcx>, ValueRef, BasicBlockRef) {
-    // Special-case vectors with elements of size 0 so they don't go out of bounds (#9890)
-    let zst = type_is_zero_size(bcx.ccx, unit_ty);
-    let add = |bcx: &Builder, a, b| if zst {
-        bcx.add(a, b)
-    } else {
-        bcx.inbounds_gep(a, &[b])
-    };
-
-    let body_bcx = bcx.build_sibling_block("slice_loop_body");
-    let header_bcx = bcx.build_sibling_block("slice_loop_header");
-    let next_bcx = bcx.build_sibling_block("slice_loop_next");
-
-    let start = if zst {
-        C_usize(bcx.ccx, 1)
-    } else {
-        data_ptr
-    };
-    let end = add(&bcx, start, len);
-
-    bcx.br(header_bcx.llbb());
-    let current = header_bcx.phi(val_ty(start), &[start], &[bcx.llbb()]);
-
-    let keep_going = header_bcx.icmp(llvm::IntNE, current, end);
-    header_bcx.cond_br(keep_going, body_bcx.llbb(), next_bcx.llbb());
-
-    let next = add(&body_bcx, current, C_usize(bcx.ccx, 1));
-    f(&body_bcx, if zst { data_ptr } else { current }, header_bcx.llbb());
-    header_bcx.add_incoming_to_phi(current, next, body_bcx.llbb());
-    next_bcx
-}
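The deleted `slice_for_each` helper special-cased zero-sized element types: instead of walking a data pointer with `inbounds_gep` (meaningless when every element has size 0), it counted iterations with plain integer adds. A safe-Rust analogue of that distinction, as an illustrative sketch rather than the compiler's actual lowering:

```rust
// For zero-sized types, all elements of a slice alias one address, so the
// loop advances an index rather than an address; for sized types an
// ordinary element walk is fine.
fn slice_for_each_zst_aware<T>(slice: &[T], mut f: impl FnMut(&T)) {
    if std::mem::size_of::<T>() == 0 {
        // Counter-based loop: run `f` once per element without doing
        // any pointer arithmetic on the (zero-sized) element type.
        for i in 0..slice.len() {
            f(&slice[i]);
        }
    } else {
        for item in slice {
            f(item);
        }
    }
}
```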
diff --git a/src/librustc_trans/type_.rs b/src/librustc_trans/type_.rs
index ffb3036..0222485 100644
--- a/src/librustc_trans/type_.rs
+++ b/src/librustc_trans/type_.rs
@@ -17,7 +17,7 @@
 use context::CrateContext;
 
 use syntax::ast;
-use rustc::ty::layout;
+use rustc::ty::layout::{self, Align};
 
 use std::ffi::CString;
 use std::fmt;
@@ -66,10 +66,6 @@
         ty!(llvm::LLVMVoidTypeInContext(ccx.llcx()))
     }
 
-    pub fn nil(ccx: &CrateContext) -> Type {
-        Type::empty_struct(ccx)
-    }
-
     pub fn metadata(ccx: &CrateContext) -> Type {
         ty!(llvm::LLVMRustMetadataTypeInContext(ccx.llcx()))
     }
@@ -202,9 +198,6 @@
         ty!(llvm::LLVMStructCreateNamed(ccx.llcx(), name.as_ptr()))
     }
 
-    pub fn empty_struct(ccx: &CrateContext) -> Type {
-        Type::struct_(ccx, &[], false)
-    }
 
     pub fn array(ty: &Type, len: u64) -> Type {
         ty!(llvm::LLVMRustArrayType(ty.to_ref(), len))
@@ -214,20 +207,6 @@
         ty!(llvm::LLVMVectorType(ty.to_ref(), len as c_uint))
     }
 
-    pub fn vec(ccx: &CrateContext, ty: &Type) -> Type {
-        Type::struct_(ccx,
-            &[Type::array(ty, 0), Type::isize(ccx)],
-        false)
-    }
-
-    pub fn opaque_vec(ccx: &CrateContext) -> Type {
-        Type::vec(ccx, &Type::i8(ccx))
-    }
-
-    pub fn vtable_ptr(ccx: &CrateContext) -> Type {
-        Type::func(&[Type::i8p(ccx)], &Type::void(ccx)).ptr_to().ptr_to()
-    }
-
     pub fn kind(&self) -> TypeKind {
         unsafe {
             llvm::LLVMRustGetTypeKind(self.to_ref())
@@ -259,19 +238,6 @@
         }
     }
 
-    pub fn field_types(&self) -> Vec<Type> {
-        unsafe {
-            let n_elts = llvm::LLVMCountStructElementTypes(self.to_ref()) as usize;
-            if n_elts == 0 {
-                return Vec::new();
-            }
-            let mut elts = vec![Type { rf: ptr::null_mut() }; n_elts];
-            llvm::LLVMGetStructElementTypes(self.to_ref(),
-                                            elts.as_mut_ptr() as *mut TypeRef);
-            elts
-        }
-    }
-
     pub fn func_params(&self) -> Vec<Type> {
         unsafe {
             let n_args = llvm::LLVMCountParamTypes(self.to_ref()) as usize;
@@ -302,7 +268,6 @@
     pub fn from_integer(cx: &CrateContext, i: layout::Integer) -> Type {
         use rustc::ty::layout::Integer::*;
         match i {
-            I1 => Type::i1(cx),
             I8 => Type::i8(cx),
             I16 => Type::i16(cx),
             I32 => Type::i32(cx),
@@ -310,4 +275,15 @@
             I128 => Type::i128(cx),
         }
     }
+
+    /// Return an LLVM type that has at most the required alignment,
+    /// as a conservative approximation for unknown pointee types.
+    pub fn pointee_for_abi_align(ccx: &CrateContext, align: Align) -> Type {
+        if let Some(ity) = layout::Integer::for_abi_align(ccx, align) {
+            Type::from_integer(ccx, ity)
+        } else {
+            // FIXME(eddyb) We could find a better approximation here.
+            Type::i8(ccx)
+        }
+    }
 }
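The new `pointee_for_abi_align` helper added above approximates an unknown pointee by an integer type matching the required ABI alignment, falling back to `i8`. A minimal standalone sketch of that selection logic (the function name and string return are illustrative only; the real code goes through `layout::Integer::for_abi_align` and `CrateContext`):

```rust
// Hypothetical sketch of the idea behind `Type::pointee_for_abi_align`:
// pick the integer type whose ABI alignment matches the requested
// alignment, or fall back to the conservative `i8` approximation
// noted in the FIXME.
fn pointee_type_for_align(align_bytes: u64) -> &'static str {
    match align_bytes {
        1 => "i8",
        2 => "i16",
        4 => "i32",
        8 => "i64",
        16 => "i128",
        // No integer type with this exact ABI alignment.
        _ => "i8",
    }
}
```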
diff --git a/src/librustc_trans/type_of.rs b/src/librustc_trans/type_of.rs
index cac09a8..9b32c82 100644
--- a/src/librustc_trans/type_of.rs
+++ b/src/librustc_trans/type_of.rs
@@ -9,231 +9,484 @@
 // except according to those terms.
 
 use abi::FnType;
-use adt;
 use common::*;
-use machine;
+use rustc::hir;
 use rustc::ty::{self, Ty, TypeFoldable};
-use rustc::ty::layout::LayoutTyper;
+use rustc::ty::layout::{self, Align, LayoutOf, Size, TyLayout};
+use rustc_back::PanicStrategy;
 use trans_item::DefPathBasedNames;
 use type_::Type;
 
-use syntax::ast;
+use std::fmt::Write;
 
-pub fn fat_ptr_base_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> Type {
-    match ty.sty {
-        ty::TyRef(_, ty::TypeAndMut { ty: t, .. }) |
-        ty::TyRawPtr(ty::TypeAndMut { ty: t, .. }) if ccx.shared().type_has_metadata(t) => {
-            in_memory_type_of(ccx, t).ptr_to()
+fn uncached_llvm_type<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
+                                layout: TyLayout<'tcx>,
+                                defer: &mut Option<(Type, TyLayout<'tcx>)>)
+                                -> Type {
+    match layout.abi {
+        layout::Abi::Scalar(_) => bug!("handled elsewhere"),
+        layout::Abi::Vector => {
+            return Type::vector(&layout.field(ccx, 0).llvm_type(ccx),
+                                layout.fields.count() as u64);
         }
-        ty::TyAdt(def, _) if def.is_box() => {
-            in_memory_type_of(ccx, ty.boxed_ty()).ptr_to()
+        layout::Abi::ScalarPair(..) => {
+            return Type::struct_(ccx, &[
+                layout.scalar_pair_element_llvm_type(ccx, 0),
+                layout.scalar_pair_element_llvm_type(ccx, 1),
+            ], false);
         }
-        _ => bug!("expected fat ptr ty but got {:?}", ty)
-    }
-}
-
-pub fn unsized_info_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> Type {
-    let unsized_part = ccx.tcx().struct_tail(ty);
-    match unsized_part.sty {
-        ty::TyStr | ty::TyArray(..) | ty::TySlice(_) => {
-            Type::uint_from_ty(ccx, ast::UintTy::Us)
-        }
-        ty::TyDynamic(..) => Type::vtable_ptr(ccx),
-        _ => bug!("Unexpected tail in unsized_info_ty: {:?} for ty={:?}",
-                          unsized_part, ty)
-    }
-}
-
-pub fn immediate_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> Type {
-    if t.is_bool() {
-        Type::i1(cx)
-    } else {
-        type_of(cx, t)
-    }
-}
-
-/// Get the LLVM type corresponding to a Rust type, i.e. `rustc::ty::Ty`.
-/// This is the right LLVM type for an alloca containing a value of that type,
-/// and the pointee of an Lvalue Datum (which is always a LLVM pointer).
-/// For unsized types, the returned type is a fat pointer, thus the resulting
-/// LLVM type for a `Trait` Lvalue is `{ i8*, void(i8*)** }*`, which is a double
-/// indirection to the actual data, unlike a `i8` Lvalue, which is just `i8*`.
-/// This is needed due to the treatment of immediate values, as a fat pointer
-/// is too large for it to be placed in SSA value (by our rules).
-/// For the raw type without far pointer indirection, see `in_memory_type_of`.
-pub fn type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> Type {
-    let ty = if cx.shared().type_has_metadata(ty) {
-        cx.tcx().mk_imm_ptr(ty)
-    } else {
-        ty
-    };
-    in_memory_type_of(cx, ty)
-}
-
-/// Get the LLVM type corresponding to a Rust type, i.e. `rustc::ty::Ty`.
-/// This is the right LLVM type for a field/array element of that type,
-/// and is the same as `type_of` for all Sized types.
-/// Unsized types, however, are represented by a "minimal unit", e.g.
-/// `[T]` becomes `T`, while `str` and `Trait` turn into `i8` - this
-/// is useful for indexing slices, as `&[T]`'s data pointer is `T*`.
-/// If the type is an unsized struct, the regular layout is generated,
-/// with the inner-most trailing unsized field using the "minimal unit"
-/// of that field's type - this is useful for taking the address of
-/// that field and ensuring the struct has the right alignment.
-/// For the LLVM type of a value as a whole, see `type_of`.
-pub fn in_memory_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> Type {
-    // Check the cache.
-    if let Some(&llty) = cx.lltypes().borrow().get(&t) {
-        return llty;
+        layout::Abi::Uninhabited |
+        layout::Abi::Aggregate { .. } => {}
     }
 
-    debug!("type_of {:?}", t);
-
-    assert!(!t.has_escaping_regions(), "{:?} has escaping regions", t);
-
-    // Replace any typedef'd types with their equivalent non-typedef
-    // type. This ensures that all LLVM nominal types that contain
-    // Rust types are defined as the same LLVM types.  If we don't do
-    // this then, e.g. `Option<{myfield: bool}>` would be a different
-    // type than `Option<myrec>`.
-    let t_norm = cx.tcx().erase_regions(&t);
-
-    if t != t_norm {
-        let llty = in_memory_type_of(cx, t_norm);
-        debug!("--> normalized {:?} to {:?} llty={:?}", t, t_norm, llty);
-        cx.lltypes().borrow_mut().insert(t, llty);
-        return llty;
-    }
-
-    let ptr_ty = |ty: Ty<'tcx>| {
-        if cx.shared().type_has_metadata(ty) {
-            if let ty::TyStr = ty.sty {
-                // This means we get a nicer name in the output (str is always
-                // unsized).
-                cx.str_slice_type()
-            } else {
-                let ptr_ty = in_memory_type_of(cx, ty).ptr_to();
-                let info_ty = unsized_info_ty(cx, ty);
-                Type::struct_(cx, &[ptr_ty, info_ty], false)
+    let name = match layout.ty.sty {
+        ty::TyClosure(..) |
+        ty::TyGenerator(..) |
+        ty::TyAdt(..) |
+        ty::TyDynamic(..) |
+        ty::TyForeign(..) |
+        ty::TyStr => {
+            let mut name = String::with_capacity(32);
+            let printer = DefPathBasedNames::new(ccx.tcx(), true, true);
+            printer.push_type_name(layout.ty, &mut name);
+            match (&layout.ty.sty, &layout.variants) {
+                (&ty::TyAdt(def, _), &layout::Variants::Single { index }) => {
+                    if def.is_enum() && !def.variants.is_empty() {
+                        write!(&mut name, "::{}", def.variants[index].name).unwrap();
+                    }
+                }
+                _ => {}
             }
+            Some(name)
+        }
+        _ => None
+    };
+
+    match layout.fields {
+        layout::FieldPlacement::Union(_) => {
+            let size = layout.size.bytes();
+            let fill = Type::array(&Type::i8(ccx), size);
+            match name {
+                None => {
+                    Type::struct_(ccx, &[fill], layout.is_packed())
+                }
+                Some(ref name) => {
+                    let mut llty = Type::named_struct(ccx, name);
+                    llty.set_struct_body(&[fill], layout.is_packed());
+                    llty
+                }
+            }
+        }
+        layout::FieldPlacement::Array { count, .. } => {
+            Type::array(&layout.field(ccx, 0).llvm_type(ccx), count)
+        }
+        layout::FieldPlacement::Arbitrary { .. } => {
+            match name {
+                None => {
+                    Type::struct_(ccx, &struct_llfields(ccx, layout), layout.is_packed())
+                }
+                Some(ref name) => {
+                    let llty = Type::named_struct(ccx, name);
+                    *defer = Some((llty, layout));
+                    llty
+                }
+            }
+        }
+    }
+}
+
+fn struct_llfields<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
+                             layout: TyLayout<'tcx>) -> Vec<Type> {
+    debug!("struct_llfields: {:#?}", layout);
+    let field_count = layout.fields.count();
+
+    let mut offset = Size::from_bytes(0);
+    let mut result: Vec<Type> = Vec::with_capacity(1 + field_count * 2);
+    for i in layout.fields.index_by_increasing_offset() {
+        let field = layout.field(ccx, i);
+        let target_offset = layout.fields.offset(i as usize);
+        debug!("struct_llfields: {}: {:?} offset: {:?} target_offset: {:?}",
+            i, field, offset, target_offset);
+        assert!(target_offset >= offset);
+        let padding = target_offset - offset;
+        result.push(Type::array(&Type::i8(ccx), padding.bytes()));
+        debug!("    padding before: {:?}", padding);
+
+        result.push(field.llvm_type(ccx));
+
+        if layout.is_packed() {
+            assert_eq!(padding.bytes(), 0);
         } else {
-            in_memory_type_of(cx, ty).ptr_to()
+            assert!(field.align.abi() <= layout.align.abi(),
+                    "non-packed type has field with larger align ({}): {:#?}",
+                    field.align.abi(), layout);
         }
-    };
 
-    let mut llty = match t.sty {
-      ty::TyBool => Type::bool(cx),
-      ty::TyChar => Type::char(cx),
-      ty::TyInt(t) => Type::int_from_ty(cx, t),
-      ty::TyUint(t) => Type::uint_from_ty(cx, t),
-      ty::TyFloat(t) => Type::float_from_ty(cx, t),
-      ty::TyNever => Type::nil(cx),
-      ty::TyClosure(..) => {
-          // Only create the named struct, but don't fill it in. We
-          // fill it in *after* placing it into the type cache.
-          adt::incomplete_type_of(cx, t, "closure")
-      }
-      ty::TyGenerator(..) => {
-          // Only create the named struct, but don't fill it in. We
-          // fill it in *after* placing it into the type cache.
-          adt::incomplete_type_of(cx, t, "generator")
-      }
-
-      ty::TyRef(_, ty::TypeAndMut{ty, ..}) |
-      ty::TyRawPtr(ty::TypeAndMut{ty, ..}) => {
-          ptr_ty(ty)
-      }
-      ty::TyAdt(def, _) if def.is_box() => {
-          ptr_ty(t.boxed_ty())
-      }
-
-      ty::TyArray(ty, size) => {
-          let llty = in_memory_type_of(cx, ty);
-          let size = size.val.to_const_int().unwrap().to_u64().unwrap();
-          Type::array(&llty, size)
-      }
-
-      // Unsized slice types (and str) have the type of their element, and
-      // traits have the type of u8. This is so that the data pointer inside
-      // fat pointers is of the right type (e.g. for array accesses), even
-      // when taking the address of an unsized field in a struct.
-      ty::TySlice(ty) => in_memory_type_of(cx, ty),
-      ty::TyStr | ty::TyDynamic(..) | ty::TyForeign(..) => Type::i8(cx),
-
-      ty::TyFnDef(..) => Type::nil(cx),
-      ty::TyFnPtr(sig) => {
-        let sig = cx.tcx().erase_late_bound_regions_and_normalize(&sig);
-        FnType::new(cx, sig, &[]).llvm_type(cx).ptr_to()
-      }
-      ty::TyTuple(ref tys, _) if tys.is_empty() => Type::nil(cx),
-      ty::TyTuple(..) => {
-          adt::type_of(cx, t)
-      }
-      ty::TyAdt(..) if t.is_simd() => {
-          let e = t.simd_type(cx.tcx());
-          if !e.is_machine() {
-              cx.sess().fatal(&format!("monomorphising SIMD type `{}` with \
-                                        a non-machine element type `{}`",
-                                       t, e))
-          }
-          let llet = in_memory_type_of(cx, e);
-          let n = t.simd_size(cx.tcx()) as u64;
-          Type::vector(&llet, n)
-      }
-      ty::TyAdt(..) => {
-          // Only create the named struct, but don't fill it in. We
-          // fill it in *after* placing it into the type cache. This
-          // avoids creating more than one copy of the enum when one
-          // of the enum's variants refers to the enum itself.
-          let name = llvm_type_name(cx, t);
-          adt::incomplete_type_of(cx, t, &name[..])
-      }
-
-      ty::TyInfer(..) |
-      ty::TyProjection(..) |
-      ty::TyParam(..) |
-      ty::TyAnon(..) |
-      ty::TyError => bug!("type_of with {:?}", t),
-    };
-
-    debug!("--> mapped t={:?} to llty={:?}", t, llty);
-
-    cx.lltypes().borrow_mut().insert(t, llty);
-
-    // If this was an enum or struct, fill in the type now.
-    match t.sty {
-        ty::TyAdt(..) | ty::TyClosure(..) | ty::TyGenerator(..) if !t.is_simd() && !t.is_box() => {
-            adt::finish_type_of(cx, t, &mut llty);
+        offset = target_offset + field.size;
+    }
+    if !layout.is_unsized() && field_count > 0 {
+        if offset > layout.size {
+            bug!("layout: {:#?} stride: {:?} offset: {:?}",
+                 layout, layout.size, offset);
         }
-        _ => ()
+        let padding = layout.size - offset;
+        debug!("struct_llfields: pad_bytes: {:?} offset: {:?} stride: {:?}",
+               padding, offset, layout.size);
+        result.push(Type::array(&Type::i8(ccx), padding.bytes()));
+        assert!(result.len() == 1 + field_count * 2);
+    } else {
+        debug!("struct_llfields: offset: {:?} stride: {:?}",
+               offset, layout.size);
     }
 
-    llty
+    result
 }
 
 impl<'a, 'tcx> CrateContext<'a, 'tcx> {
-    pub fn align_of(&self, ty: Ty<'tcx>) -> machine::llalign {
-        self.layout_of(ty).align(self).abi() as machine::llalign
+    pub fn align_of(&self, ty: Ty<'tcx>) -> Align {
+        self.layout_of(ty).align
     }
 
-    pub fn size_of(&self, ty: Ty<'tcx>) -> machine::llsize {
-        self.layout_of(ty).size(self).bytes() as machine::llsize
+    pub fn size_of(&self, ty: Ty<'tcx>) -> Size {
+        self.layout_of(ty).size
     }
 
-    pub fn over_align_of(&self, t: Ty<'tcx>)
-                              -> Option<machine::llalign> {
-        let layout = self.layout_of(t);
-        if let Some(align) = layout.over_align(&self.tcx().data_layout) {
-            Some(align as machine::llalign)
-        } else {
-            None
-        }
+    pub fn size_and_align_of(&self, ty: Ty<'tcx>) -> (Size, Align) {
+        self.layout_of(ty).size_and_align()
     }
 }
 
-fn llvm_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> String {
-    let mut name = String::with_capacity(32);
-    let printer = DefPathBasedNames::new(cx.tcx(), true, true);
-    printer.push_type_name(ty, &mut name);
-    name
+#[derive(Copy, Clone, PartialEq, Eq)]
+pub enum PointerKind {
+    /// Most general case: we know of no restrictions to tell LLVM about.
+    Shared,
+
+    /// `&T` where `T` contains no `UnsafeCell`; it is `noalias` and `readonly`.
+    Frozen,
+
+    /// `&mut T`, when we know `noalias` is safe for LLVM.
+    UniqueBorrowed,
+
+    /// `Box<T>`; unlike `UniqueBorrowed`, it also has `noalias` on returns.
+    UniqueOwned
+}
+
+#[derive(Copy, Clone)]
+pub struct PointeeInfo {
+    pub size: Size,
+    pub align: Align,
+    pub safe: Option<PointerKind>,
+}
+
+pub trait LayoutLlvmExt<'tcx> {
+    fn is_llvm_immediate(&self) -> bool;
+    fn is_llvm_scalar_pair<'a>(&self) -> bool;
+    fn llvm_type<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> Type;
+    fn immediate_llvm_type<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> Type;
+    fn scalar_pair_element_llvm_type<'a>(&self, ccx: &CrateContext<'a, 'tcx>,
+                                         index: usize) -> Type;
+    fn llvm_field_index(&self, index: usize) -> u64;
+    fn pointee_info_at<'a>(&self, ccx: &CrateContext<'a, 'tcx>, offset: Size)
+                           -> Option<PointeeInfo>;
+}
+
+impl<'tcx> LayoutLlvmExt<'tcx> for TyLayout<'tcx> {
+    fn is_llvm_immediate(&self) -> bool {
+        match self.abi {
+            layout::Abi::Uninhabited |
+            layout::Abi::Scalar(_) |
+            layout::Abi::Vector => true,
+            layout::Abi::ScalarPair(..) => false,
+            layout::Abi::Aggregate { .. } => self.is_zst()
+        }
+    }
+
+    fn is_llvm_scalar_pair<'a>(&self) -> bool {
+        match self.abi {
+            layout::Abi::ScalarPair(..) => true,
+            layout::Abi::Uninhabited |
+            layout::Abi::Scalar(_) |
+            layout::Abi::Vector |
+            layout::Abi::Aggregate { .. } => false
+        }
+    }
+
+    /// Get the LLVM type corresponding to a Rust type, i.e. `rustc::ty::Ty`.
+    /// The pointee type of the pointer in `LvalueRef` is always this type.
+    /// For sized types, it is also the right LLVM type for an `alloca`
+    /// containing a value of that type, and most immediates (except `bool`).
+    /// Unsized types, however, are represented by a "minimal unit", e.g.
+    /// `[T]` becomes `T`, while `str` and `Trait` turn into `i8` - this
+    /// is useful for indexing slices, as `&[T]`'s data pointer is `T*`.
+    /// If the type is an unsized struct, the regular layout is generated,
+    /// with the inner-most trailing unsized field using the "minimal unit"
+    /// of that field's type - this is useful for taking the address of
+    /// that field and ensuring the struct has the right alignment.
+    fn llvm_type<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> Type {
+        if let layout::Abi::Scalar(ref scalar) = self.abi {
+            // Use a different cache for scalars because pointers to DSTs
+            // can be either fat or thin (data pointers of fat pointers).
+            if let Some(&llty) = ccx.scalar_lltypes().borrow().get(&self.ty) {
+                return llty;
+            }
+            let llty = match scalar.value {
+                layout::Int(i, _) => Type::from_integer(ccx, i),
+                layout::F32 => Type::f32(ccx),
+                layout::F64 => Type::f64(ccx),
+                layout::Pointer => {
+                    let pointee = match self.ty.sty {
+                        ty::TyRef(_, ty::TypeAndMut { ty, .. }) |
+                        ty::TyRawPtr(ty::TypeAndMut { ty, .. }) => {
+                            ccx.layout_of(ty).llvm_type(ccx)
+                        }
+                        ty::TyAdt(def, _) if def.is_box() => {
+                            ccx.layout_of(self.ty.boxed_ty()).llvm_type(ccx)
+                        }
+                        ty::TyFnPtr(sig) => {
+                            let sig = ccx.tcx().erase_late_bound_regions_and_normalize(&sig);
+                            FnType::new(ccx, sig, &[]).llvm_type(ccx)
+                        }
+                        _ => {
+                            // If we know the alignment, pick something better than i8.
+                            if let Some(pointee) = self.pointee_info_at(ccx, Size::from_bytes(0)) {
+                                Type::pointee_for_abi_align(ccx, pointee.align)
+                            } else {
+                                Type::i8(ccx)
+                            }
+                        }
+                    };
+                    pointee.ptr_to()
+                }
+            };
+            ccx.scalar_lltypes().borrow_mut().insert(self.ty, llty);
+            return llty;
+        }
+
+        // Check the cache.
+        let variant_index = match self.variants {
+            layout::Variants::Single { index } => Some(index),
+            _ => None
+        };
+        if let Some(&llty) = ccx.lltypes().borrow().get(&(self.ty, variant_index)) {
+            return llty;
+        }
+
+        debug!("llvm_type({:#?})", self);
+
+        assert!(!self.ty.has_escaping_regions(), "{:?} has escaping regions", self.ty);
+
+        // Make sure lifetimes are erased, to avoid generating distinct LLVM
+        // types for Rust types that only differ in the choice of lifetimes.
+        let normal_ty = ccx.tcx().erase_regions(&self.ty);
+
+        let mut defer = None;
+        let llty = if self.ty != normal_ty {
+            let mut layout = ccx.layout_of(normal_ty);
+            if let Some(v) = variant_index {
+                layout = layout.for_variant(ccx, v);
+            }
+            layout.llvm_type(ccx)
+        } else {
+            uncached_llvm_type(ccx, *self, &mut defer)
+        };
+        debug!("--> mapped {:#?} to llty={:?}", self, llty);
+
+        ccx.lltypes().borrow_mut().insert((self.ty, variant_index), llty);
+
+        if let Some((mut llty, layout)) = defer {
+            llty.set_struct_body(&struct_llfields(ccx, layout), layout.is_packed())
+        }
+
+        llty
+    }
+
+    fn immediate_llvm_type<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> Type {
+        if let layout::Abi::Scalar(ref scalar) = self.abi {
+            if scalar.is_bool() {
+                return Type::i1(ccx);
+            }
+        }
+        self.llvm_type(ccx)
+    }
+
+    fn scalar_pair_element_llvm_type<'a>(&self, ccx: &CrateContext<'a, 'tcx>,
+                                         index: usize) -> Type {
+        // HACK(eddyb) special-case fat pointers until LLVM removes
+        // pointee types, to avoid bitcasting every `OperandRef::deref`.
+        match self.ty.sty {
+            ty::TyRef(..) |
+            ty::TyRawPtr(_) => {
+                return self.field(ccx, index).llvm_type(ccx);
+            }
+            ty::TyAdt(def, _) if def.is_box() => {
+                let ptr_ty = ccx.tcx().mk_mut_ptr(self.ty.boxed_ty());
+                return ccx.layout_of(ptr_ty).scalar_pair_element_llvm_type(ccx, index);
+            }
+            _ => {}
+        }
+
+        let (a, b) = match self.abi {
+            layout::Abi::ScalarPair(ref a, ref b) => (a, b),
+            _ => bug!("TyLayout::scalar_pair_element_llty({:?}): not applicable", self)
+        };
+        let scalar = [a, b][index];
+
+        // Make sure to return the same type `immediate_llvm_type` would,
+        // to avoid dealing with two types and the associated conversions.
+        // This means that `(bool, bool)` is represented as `{i1, i1}`,
+        // both in memory and as an immediate, while `bool` is typically
+        // `i8` in memory and only `i1` when immediate. While we need to
+        // load/store `bool` as `i8` to avoid crippling LLVM optimizations,
+        // `i1` in an LLVM aggregate is valid and mostly equivalent to `i8`.
+        if scalar.is_bool() {
+            return Type::i1(ccx);
+        }
+
+        match scalar.value {
+            layout::Int(i, _) => Type::from_integer(ccx, i),
+            layout::F32 => Type::f32(ccx),
+            layout::F64 => Type::f64(ccx),
+            layout::Pointer => {
+                // If we know the alignment, pick something better than i8.
+                let offset = if index == 0 {
+                    Size::from_bytes(0)
+                } else {
+                    a.value.size(ccx).abi_align(b.value.align(ccx))
+                };
+                let pointee = if let Some(pointee) = self.pointee_info_at(ccx, offset) {
+                    Type::pointee_for_abi_align(ccx, pointee.align)
+                } else {
+                    Type::i8(ccx)
+                };
+                pointee.ptr_to()
+            }
+        }
+    }
+
+    fn llvm_field_index(&self, index: usize) -> u64 {
+        match self.abi {
+            layout::Abi::Scalar(_) |
+            layout::Abi::ScalarPair(..) => {
+                bug!("TyLayout::llvm_field_index({:?}): not applicable", self)
+            }
+            _ => {}
+        }
+        match self.fields {
+            layout::FieldPlacement::Union(_) => {
+                bug!("TyLayout::llvm_field_index({:?}): not applicable", self)
+            }
+
+            layout::FieldPlacement::Array { .. } => {
+                index as u64
+            }
+
+            layout::FieldPlacement::Arbitrary { .. } => {
+                1 + (self.fields.memory_index(index) as u64) * 2
+            }
+        }
+    }
+
+    fn pointee_info_at<'a>(&self, ccx: &CrateContext<'a, 'tcx>, offset: Size)
+                           -> Option<PointeeInfo> {
+        if let Some(&pointee) = ccx.pointee_infos().borrow().get(&(self.ty, offset)) {
+            return pointee;
+        }
+
+        let mut result = None;
+        match self.ty.sty {
+            ty::TyRawPtr(mt) if offset.bytes() == 0 => {
+                let (size, align) = ccx.size_and_align_of(mt.ty);
+                result = Some(PointeeInfo {
+                    size,
+                    align,
+                    safe: None
+                });
+            }
+
+            ty::TyRef(_, mt) if offset.bytes() == 0 => {
+                let (size, align) = ccx.size_and_align_of(mt.ty);
+
+                let kind = match mt.mutbl {
+                    hir::MutImmutable => if ccx.shared().type_is_freeze(mt.ty) {
+                        PointerKind::Frozen
+                    } else {
+                        PointerKind::Shared
+                    },
+                    hir::MutMutable => {
+                        if ccx.shared().tcx().sess.opts.debugging_opts.mutable_noalias ||
+                           ccx.shared().tcx().sess.panic_strategy() == PanicStrategy::Abort {
+                            PointerKind::UniqueBorrowed
+                        } else {
+                            PointerKind::Shared
+                        }
+                    }
+                };
+
+                result = Some(PointeeInfo {
+                    size,
+                    align,
+                    safe: Some(kind)
+                });
+            }
+
+            _ => {
+                let mut data_variant = match self.variants {
+                    layout::Variants::NicheFilling { dataful_variant, .. } => {
+                        // Only the niche itself is always initialized,
+                        // so only check for a pointer at its offset.
+                        //
+                        // If the niche is a pointer, it's either valid
+                        // (according to its type), or null (which the
+                        // niche field's scalar validity range encodes).
+                        // This allows using `dereferenceable_or_null`
+                        // for e.g. `Option<&T>`, and this will continue
+                        // to work as long as we don't start using more
+                        // niches than just null (e.g. the first page
+                        // of the address space, or unaligned pointers).
+                        if self.fields.offset(0) == offset {
+                            Some(self.for_variant(ccx, dataful_variant))
+                        } else {
+                            None
+                        }
+                    }
+                    _ => Some(*self)
+                };
+
+                if let Some(variant) = data_variant {
+                    // We're not interested in any unions.
+                    if let layout::FieldPlacement::Union(_) = variant.fields {
+                        data_variant = None;
+                    }
+                }
+
+                if let Some(variant) = data_variant {
+                    let ptr_end = offset + layout::Pointer.size(ccx);
+                    for i in 0..variant.fields.count() {
+                        let field_start = variant.fields.offset(i);
+                        if field_start <= offset {
+                            let field = variant.field(ccx, i);
+                            if ptr_end <= field_start + field.size {
+                                // We found the right field, look inside it.
+                                result = field.pointee_info_at(ccx, offset - field_start);
+                                break;
+                            }
+                        }
+                    }
+                }
+
+                // FIXME(eddyb) This should be for `ptr::Unique<T>`, not `Box<T>`.
+                if let Some(ref mut pointee) = result {
+                    if let ty::TyAdt(def, _) = self.ty.sty {
+                        if def.is_box() && offset.bytes() == 0 {
+                            pointee.safe = Some(PointerKind::UniqueOwned);
+                        }
+                    }
+                }
+            }
+        }
+
+        ccx.pointee_infos().borrow_mut().insert((self.ty, offset), result);
+        result
+    }
 }
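The `struct_llfields` scheme above interleaves an explicit `[N x i8]` padding entry before every field (plus one trailing pad for sized types), which is what makes the `1 + memory_index * 2` formula in `llvm_field_index` work. A simplified sketch of that bookkeeping, with byte counts standing in for LLVM types (the function name and signature are invented for illustration):

```rust
// Sketch of the padding layout built by `struct_llfields`: for fields
// given as (target_offset, size) pairs in increasing-offset order,
// emit [pad][field] pairs plus one trailing pad, so field `i` lands
// at LLVM struct index `1 + i * 2`.
fn llfields(offsets_and_sizes: &[(u64, u64)], total_size: u64) -> Vec<u64> {
    let mut result = Vec::with_capacity(1 + offsets_and_sizes.len() * 2);
    let mut offset = 0u64;
    for &(target, size) in offsets_and_sizes {
        assert!(target >= offset, "fields must be in increasing-offset order");
        result.push(target - offset); // explicit padding before the field
        result.push(size);            // the field itself
        offset = target + size;
    }
    result.push(total_size - offset); // trailing padding up to the stride
    result
}
```

For example, a struct with a `u8` at offset 0 and a `u32` at offset 4 (total size 8) yields entries `[0, 1, 3, 4, 0]`: the `u32` field sits at LLVM index `1 + 1 * 2 = 3`.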
diff --git a/src/librustc_trans_utils/monomorphize.rs b/src/librustc_trans_utils/monomorphize.rs
index ab61dac..eee5c1d 100644
--- a/src/librustc_trans_utils/monomorphize.rs
+++ b/src/librustc_trans_utils/monomorphize.rs
@@ -12,7 +12,7 @@
 use rustc::middle::lang_items::DropInPlaceFnLangItem;
 use rustc::traits;
 use rustc::ty::adjustment::CustomCoerceUnsized;
-use rustc::ty::subst::{Kind, Subst, Substs};
+use rustc::ty::subst::{Kind, Subst};
 use rustc::ty::{self, Ty, TyCtxt};
 
 pub use rustc::ty::Instance;
@@ -125,12 +125,3 @@
     }
 }
 
-/// Returns the normalized type of a struct field
-pub fn field_ty<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                          param_substs: &Substs<'tcx>,
-                          f: &'tcx ty::FieldDef)
-                          -> Ty<'tcx>
-{
-    tcx.fully_normalize_associated_types_in(&f.ty(tcx, param_substs))
-}
-
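The `PointerKind` classification introduced in type_of.rs above exists so that later ABI code can attach LLVM parameter attributes based on what the pointee guarantees. A sketch of that mapping (the function name is invented; rustc applies these attributes in its `FnType`/ABI code, and `UniqueOwned` additionally gets `noalias` on returns, which a parameter-attribute list cannot express):

```rust
// Hypothetical mapping from the `PointerKind` classification to the
// LLVM parameter attributes it justifies.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
enum PointerKind {
    Shared,
    Frozen,
    UniqueBorrowed,
    UniqueOwned,
}

fn param_attrs(kind: PointerKind) -> &'static [&'static str] {
    match kind {
        // No restrictions we can promise LLVM.
        PointerKind::Shared => &[],
        // `&T` with no `UnsafeCell`: never written through this pointer.
        PointerKind::Frozen => &["noalias", "readonly"],
        // `&mut T` when asserting uniqueness is known to be safe.
        PointerKind::UniqueBorrowed => &["noalias"],
        // `Box<T>`: also `noalias` on returns (not representable here).
        PointerKind::UniqueOwned => &["noalias"],
    }
}
```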
diff --git a/src/librustc_typeck/check/_match.rs b/src/librustc_typeck/check/_match.rs
index 272f13b..ea0fa94 100644
--- a/src/librustc_typeck/check/_match.rs
+++ b/src/librustc_typeck/check/_match.rs
@@ -471,7 +471,7 @@
         //
         // 2. Things go horribly wrong if we use subtype. The reason for
         // THIS is a fairly subtle case involving bound regions. See the
-        // `givens` field in `region_inference`, as well as the test
+        // `givens` field in `region_constraints`, as well as the test
         // `regions-relate-bound-regions-on-closures-to-inference-variables.rs`,
         // for details. Short version is that we must sometimes detect
         // relationships between specific region variables and regions
diff --git a/src/librustc_typeck/check/compare_method.rs b/src/librustc_typeck/check/compare_method.rs
index 139449e..24efb79 100644
--- a/src/librustc_typeck/check/compare_method.rs
+++ b/src/librustc_typeck/check/compare_method.rs
@@ -10,8 +10,6 @@
 
 use rustc::hir::{self, ImplItemKind, TraitItemKind};
 use rustc::infer::{self, InferOk};
-use rustc::middle::free_region::FreeRegionMap;
-use rustc::middle::region;
 use rustc::ty::{self, TyCtxt};
 use rustc::ty::util::ExplicitSelf;
 use rustc::traits::{self, ObligationCause, ObligationCauseCode, Reveal};
@@ -38,8 +36,7 @@
                                      impl_m_span: Span,
                                      trait_m: &ty::AssociatedItem,
                                      impl_trait_ref: ty::TraitRef<'tcx>,
-                                     trait_item_span: Option<Span>,
-                                     old_broken_mode: bool) {
+                                     trait_item_span: Option<Span>) {
     debug!("compare_impl_method(impl_trait_ref={:?})",
            impl_trait_ref);
 
@@ -79,8 +76,7 @@
                                                              impl_m,
                                                              impl_m_span,
                                                              trait_m,
-                                                             impl_trait_ref,
-                                                             old_broken_mode) {
+                                                             impl_trait_ref) {
         return;
     }
 }
@@ -89,8 +85,7 @@
                                           impl_m: &ty::AssociatedItem,
                                           impl_m_span: Span,
                                           trait_m: &ty::AssociatedItem,
-                                          impl_trait_ref: ty::TraitRef<'tcx>,
-                                          old_broken_mode: bool)
+                                          impl_trait_ref: ty::TraitRef<'tcx>)
                                           -> Result<(), ErrorReported> {
     let trait_to_impl_substs = impl_trait_ref.substs;
 
@@ -106,7 +101,6 @@
             item_name: impl_m.name,
             impl_item_def_id: impl_m.def_id,
             trait_item_def_id: trait_m.def_id,
-            lint_id: if !old_broken_mode { Some(impl_m_node_id) } else { None },
         },
     };
 
@@ -342,22 +336,8 @@
 
         // Finally, resolve all regions. This catches wily misuses of
         // lifetime parameters.
-        if old_broken_mode {
-            // FIXME(#18937) -- this is how the code used to
-            // work. This is buggy because the fulfillment cx creates
-            // region obligations that get overlooked.  The right
-            // thing to do is the code below. But we keep this old
-            // pass around temporarily.
-            let region_scope_tree = region::ScopeTree::default();
-            let mut free_regions = FreeRegionMap::new();
-            free_regions.relate_free_regions_from_predicates(&param_env.caller_bounds);
-            infcx.resolve_regions_and_report_errors(impl_m.def_id,
-                                                    &region_scope_tree,
-                                                    &free_regions);
-        } else {
-            let fcx = FnCtxt::new(&inh, param_env, impl_m_node_id);
-            fcx.regionck_item(impl_m_node_id, impl_m_span, &[]);
-        }
+        let fcx = FnCtxt::new(&inh, param_env, impl_m_node_id);
+        fcx.regionck_item(impl_m_node_id, impl_m_span, &[]);
 
         Ok(())
     })
diff --git a/src/librustc_typeck/check/mod.rs b/src/librustc_typeck/check/mod.rs
index 4cc1e83..b3a0702 100644
--- a/src/librustc_typeck/check/mod.rs
+++ b/src/librustc_typeck/check/mod.rs
@@ -117,6 +117,7 @@
 use std::ops::{self, Deref};
 use syntax::abi::Abi;
 use syntax::ast;
+use syntax::attr;
 use syntax::codemap::{self, original_sp, Spanned};
 use syntax::feature_gate::{GateIssue, emit_feature_err};
 use syntax::ptr::P;
@@ -136,7 +137,7 @@
 pub mod dropck;
 pub mod _match;
 pub mod writeback;
-pub mod regionck;
+mod regionck;
 pub mod coercion;
 pub mod demand;
 pub mod method;
@@ -657,29 +658,10 @@
                                         value: &T) -> T
         where T : TypeFoldable<'tcx>
     {
-        let ok = self.normalize_associated_types_in_as_infer_ok(span, body_id, param_env, value);
+        let ok = self.partially_normalize_associated_types_in(span, body_id, param_env, value);
         self.register_infer_ok_obligations(ok)
     }
 
-    fn normalize_associated_types_in_as_infer_ok<T>(&self,
-                                                    span: Span,
-                                                    body_id: ast::NodeId,
-                                                    param_env: ty::ParamEnv<'tcx>,
-                                                    value: &T)
-                                                    -> InferOk<'tcx, T>
-        where T : TypeFoldable<'tcx>
-    {
-        debug!("normalize_associated_types_in(value={:?})", value);
-        let mut selcx = traits::SelectionContext::new(self);
-        let cause = ObligationCause::misc(span, body_id);
-        let traits::Normalized { value, obligations } =
-            traits::normalize(&mut selcx, param_env, cause, value);
-        debug!("normalize_associated_types_in: result={:?} predicates={:?}",
-            value,
-            obligations);
-        InferOk { value, obligations }
-    }
-
     /// Replace any late-bound regions bound in `value` with
     /// free variants attached to `all_outlive_scope`.
     fn liberate_late_bound_regions<T>(&self,
@@ -1339,24 +1321,12 @@
                 hir::ImplItemKind::Method(..) => {
                     let trait_span = tcx.hir.span_if_local(ty_trait_item.def_id);
                     if ty_trait_item.kind == ty::AssociatedKind::Method {
-                        let err_count = tcx.sess.err_count();
                         compare_impl_method(tcx,
                                             &ty_impl_item,
                                             impl_item.span,
                                             &ty_trait_item,
                                             impl_trait_ref,
-                                            trait_span,
-                                            true); // start with old-broken-mode
-                        if err_count == tcx.sess.err_count() {
-                            // old broken mode did not report an error. Try with the new mode.
-                            compare_impl_method(tcx,
-                                                &ty_impl_item,
-                                                impl_item.span,
-                                                &ty_trait_item,
-                                                impl_trait_ref,
-                                                trait_span,
-                                                false); // use the new mode
-                        }
+                                            trait_span);
                     } else {
                         let mut err = struct_span_err!(tcx.sess, impl_item.span, E0324,
                                   "item `{}` is an associated method, \
@@ -1561,12 +1531,15 @@
     let def = tcx.adt_def(def_id);
     def.destructor(tcx); // force the destructor to be evaluated
 
-    if vs.is_empty() && tcx.has_attr(def_id, "repr") {
-        struct_span_err!(
-            tcx.sess, sp, E0084,
-            "unsupported representation for zero-variant enum")
-            .span_label(sp, "unsupported enum representation")
-            .emit();
+    if vs.is_empty() {
+        let attributes = tcx.get_attrs(def_id);
+        if let Some(attr) = attr::find_by_name(&attributes, "repr") {
+            struct_span_err!(
+                tcx.sess, attr.span, E0084,
+                "unsupported representation for zero-variant enum")
+                .span_label(sp, "zero-variant enum")
+                .emit();
+        }
     }
 
     let repr_type_ty = def.repr.discr_type().to_ty(tcx);
@@ -1982,10 +1955,10 @@
                                                     -> InferOk<'tcx, T>
         where T : TypeFoldable<'tcx>
     {
-        self.inh.normalize_associated_types_in_as_infer_ok(span,
-                                                           self.body_id,
-                                                           self.param_env,
-                                                           value)
+        self.inh.partially_normalize_associated_types_in(span,
+                                                         self.body_id,
+                                                         self.param_env,
+                                                         value)
     }
 
     pub fn require_type_meets(&self,
diff --git a/src/librustc_typeck/check/regionck.rs b/src/librustc_typeck/check/regionck.rs
index ad79784..a17133d 100644
--- a/src/librustc_typeck/check/regionck.rs
+++ b/src/librustc_typeck/check/regionck.rs
@@ -84,18 +84,14 @@
 
 use check::dropck;
 use check::FnCtxt;
-use middle::free_region::FreeRegionMap;
 use middle::mem_categorization as mc;
 use middle::mem_categorization::Categorization;
 use middle::region;
 use rustc::hir::def_id::DefId;
 use rustc::ty::subst::Substs;
-use rustc::traits;
-use rustc::ty::{self, Ty, TypeFoldable};
-use rustc::infer::{self, GenericKind, SubregionOrigin, VerifyBound};
+use rustc::ty::{self, Ty};
+use rustc::infer::{self, OutlivesEnvironment};
 use rustc::ty::adjustment;
-use rustc::ty::outlives::Component;
-use rustc::ty::wf;
 
 use std::mem;
 use std::ops::Deref;
@@ -117,7 +113,11 @@
     pub fn regionck_expr(&self, body: &'gcx hir::Body) {
         let subject = self.tcx.hir.body_owner_def_id(body.id());
         let id = body.value.id;
-        let mut rcx = RegionCtxt::new(self, RepeatingScope(id), id, Subject(subject));
+        let mut rcx = RegionCtxt::new(self,
+                                      RepeatingScope(id),
+                                      id,
+                                      Subject(subject),
+                                      self.param_env);
         if self.err_count_since_creation() == 0 {
             // regionck assumes typeck succeeded
             rcx.visit_body(body);
@@ -126,7 +126,7 @@
         rcx.resolve_regions_and_report_errors();
 
         assert!(self.tables.borrow().free_region_map.is_empty());
-        self.tables.borrow_mut().free_region_map = rcx.free_region_map;
+        self.tables.borrow_mut().free_region_map = rcx.outlives_environment.into_free_region_map();
     }
 
     /// Region checking during the WF phase for items. `wf_tys` are the
@@ -137,37 +137,48 @@
                          wf_tys: &[Ty<'tcx>]) {
         debug!("regionck_item(item.id={:?}, wf_tys={:?}", item_id, wf_tys);
         let subject = self.tcx.hir.local_def_id(item_id);
-        let mut rcx = RegionCtxt::new(self, RepeatingScope(item_id), item_id, Subject(subject));
-        rcx.free_region_map.relate_free_regions_from_predicates(
-            &self.param_env.caller_bounds);
-        rcx.relate_free_regions(wf_tys, item_id, span);
+        let mut rcx = RegionCtxt::new(self,
+                                      RepeatingScope(item_id),
+                                      item_id,
+                                      Subject(subject),
+                                      self.param_env);
+        rcx.outlives_environment.add_implied_bounds(self, wf_tys, item_id, span);
         rcx.visit_region_obligations(item_id);
         rcx.resolve_regions_and_report_errors();
     }
 
+    /// Region-checks a function body. Not invoked on closures, but
+    /// only on the "root" fn item (in which closures may be
+    /// embedded). Walks the function body and adds various additional
+    /// constraints that are needed for region inference. This is
+    /// separated both to isolate "pure" region constraints from the
+    /// rest of type check and because sometimes we need type
+    /// inference to have completed before we can determine which
+    /// constraints to add.
     pub fn regionck_fn(&self,
                        fn_id: ast::NodeId,
                        body: &'gcx hir::Body) {
         debug!("regionck_fn(id={})", fn_id);
         let subject = self.tcx.hir.body_owner_def_id(body.id());
         let node_id = body.value.id;
-        let mut rcx = RegionCtxt::new(self, RepeatingScope(node_id), node_id, Subject(subject));
+        let mut rcx = RegionCtxt::new(self,
+                                      RepeatingScope(node_id),
+                                      node_id,
+                                      Subject(subject),
+                                      self.param_env);
 
         if self.err_count_since_creation() == 0 {
             // regionck assumes typeck succeeded
             rcx.visit_fn_body(fn_id, body, self.tcx.hir.span(fn_id));
         }
 
-        rcx.free_region_map.relate_free_regions_from_predicates(
-            &self.param_env.caller_bounds);
-
         rcx.resolve_regions_and_report_errors();
 
         // In this mode, we also copy the free-region-map into the
         // tables of the enclosing fcx. In the other regionck modes
         // (e.g., `regionck_item`), we don't have an enclosing tables.
         assert!(self.tables.borrow().free_region_map.is_empty());
-        self.tables.borrow_mut().free_region_map = rcx.free_region_map;
+        self.tables.borrow_mut().free_region_map = rcx.outlives_environment.into_free_region_map();
     }
 }
 
@@ -177,11 +188,9 @@
 pub struct RegionCtxt<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
     pub fcx: &'a FnCtxt<'a, 'gcx, 'tcx>,
 
-    region_bound_pairs: Vec<(ty::Region<'tcx>, GenericKind<'tcx>)>,
-
     pub region_scope_tree: Rc<region::ScopeTree>,
 
-    free_region_map: FreeRegionMap<'tcx>,
+    outlives_environment: OutlivesEnvironment<'tcx>,
 
     // id of innermost fn body id
     body_id: ast::NodeId,
@@ -197,24 +206,6 @@
 
 }
 
-/// Implied bounds are region relationships that we deduce
-/// automatically.  The idea is that (e.g.) a caller must check that a
-/// function's argument types are well-formed immediately before
-/// calling that fn, and hence the *callee* can assume that its
-/// argument types are well-formed. This may imply certain relationships
-/// between generic parameters. For example:
-///
-///     fn foo<'a,T>(x: &'a T)
-///
-/// can only be called with a `'a` and `T` such that `&'a T` is WF.
-/// For `&'a T` to be WF, `T: 'a` must hold. So we can assume `T: 'a`.
-#[derive(Debug)]
-enum ImpliedBound<'tcx> {
-    RegionSubRegion(ty::Region<'tcx>, ty::Region<'tcx>),
-    RegionSubParam(ty::Region<'tcx>, ty::ParamTy),
-    RegionSubProjection(ty::Region<'tcx>, ty::ProjectionTy<'tcx>),
-}
-
 impl<'a, 'gcx, 'tcx> Deref for RegionCtxt<'a, 'gcx, 'tcx> {
     type Target = FnCtxt<'a, 'gcx, 'tcx>;
     fn deref(&self) -> &Self::Target {
@@ -229,8 +220,11 @@
     pub fn new(fcx: &'a FnCtxt<'a, 'gcx, 'tcx>,
                RepeatingScope(initial_repeating_scope): RepeatingScope,
                initial_body_id: ast::NodeId,
-               Subject(subject): Subject) -> RegionCtxt<'a, 'gcx, 'tcx> {
+               Subject(subject): Subject,
+               param_env: ty::ParamEnv<'tcx>)
+               -> RegionCtxt<'a, 'gcx, 'tcx> {
         let region_scope_tree = fcx.tcx.region_scope_tree(subject);
+        let outlives_environment = OutlivesEnvironment::new(param_env);
         RegionCtxt {
             fcx,
             region_scope_tree,
@@ -238,20 +232,10 @@
             body_id: initial_body_id,
             call_site_scope: None,
             subject_def_id: subject,
-            region_bound_pairs: Vec::new(),
-            free_region_map: FreeRegionMap::new(),
+            outlives_environment,
         }
     }
 
-    fn set_call_site_scope(&mut self, call_site_scope: Option<region::Scope>)
-                           -> Option<region::Scope> {
-        mem::replace(&mut self.call_site_scope, call_site_scope)
-    }
-
-    fn set_body_id(&mut self, body_id: ast::NodeId) -> ast::NodeId {
-        mem::replace(&mut self.body_id, body_id)
-    }
-
     fn set_repeating_scope(&mut self, scope: ast::NodeId) -> ast::NodeId {
         mem::replace(&mut self.repeating_scope, scope)
     }
@@ -295,6 +279,18 @@
         self.resolve_type(ty)
     }
 
+    /// This is the "main" function when region-checking a function item or a closure
+    /// within a function item. It begins by updating various fields (e.g., `call_site_scope`
+    /// and `outlives_environment`) to be appropriate to the function and then adds constraints
+    /// derived from the function body.
+    ///
+    /// Note that it does **not** restore the state of the fields that
+    /// it updates! This is intentional, since -- for the main
+    /// function -- we wish to be able to read the final
+    /// `outlives_environment` and other fields from the caller. For
+    /// closures, however, we save and restore any "scoped state"
+    /// before we invoke this function. (See `visit_fn` in the
+    /// `intravisit::Visitor` impl below.)
     fn visit_fn_body(&mut self,
                      id: ast::NodeId, // the id of the fn itself
                      body: &'gcx hir::Body,
@@ -304,9 +300,10 @@
         debug!("visit_fn_body(id={})", id);
 
         let body_id = body.id();
+        self.body_id = body_id.node_id;
 
         let call_site = region::Scope::CallSite(body.value.hir_id.local_id);
-        let old_call_site_scope = self.set_call_site_scope(Some(call_site));
+        self.call_site_scope = Some(call_site);
 
         let fn_sig = {
             let fn_hir_id = self.tcx.hir.node_to_hir_id(id);
@@ -318,8 +315,6 @@
             }
         };
 
-        let old_region_bounds_pairs_len = self.region_bound_pairs.len();
-
         // Collect the types from which we create inferred bounds.
         // For the return type, if diverging, substitute `bool` just
         // because it will have no effect.
@@ -328,8 +323,11 @@
         let fn_sig_tys: Vec<_> =
             fn_sig.inputs().iter().cloned().chain(Some(fn_sig.output())).collect();
 
-        let old_body_id = self.set_body_id(body_id.node_id);
-        self.relate_free_regions(&fn_sig_tys[..], body_id.node_id, span);
+        self.outlives_environment.add_implied_bounds(
+            self.fcx,
+            &fn_sig_tys[..],
+            body_id.node_id,
+            span);
         self.link_fn_args(region::Scope::Node(body.value.hir_id.local_id), &body.arguments);
         self.visit_body(body);
         self.visit_region_obligations(body_id.node_id);
@@ -342,11 +340,6 @@
         self.type_of_node_must_outlive(infer::CallReturn(span),
                                        body_hir_id,
                                        call_site_region);
-
-        self.region_bound_pairs.truncate(old_region_bounds_pairs_len);
-
-        self.set_body_id(old_body_id);
-        self.set_call_site_scope(old_call_site_scope);
     }
 
     fn visit_region_obligations(&mut self, node_id: ast::NodeId)
@@ -358,231 +351,17 @@
         // obligations. So make sure we process those.
         self.select_all_obligations_or_error();
 
-        // Make a copy of the region obligations vec because we'll need
-        // to be able to borrow the fulfillment-cx below when projecting.
-        let region_obligations =
-            self.fulfillment_cx
-                .borrow()
-                .region_obligations(node_id)
-                .to_vec();
-
-        for r_o in &region_obligations {
-            debug!("visit_region_obligations: r_o={:?} cause={:?}",
-                   r_o, r_o.cause);
-            let sup_type = self.resolve_type(r_o.sup_type);
-            let origin = self.code_to_origin(&r_o.cause, sup_type);
-            self.type_must_outlive(origin, sup_type, r_o.sub_region);
-        }
-
-        // Processing the region obligations should not cause the list to grow further:
-        assert_eq!(region_obligations.len(),
-                   self.fulfillment_cx.borrow().region_obligations(node_id).len());
-    }
-
-    fn code_to_origin(&self,
-                      cause: &traits::ObligationCause<'tcx>,
-                      sup_type: Ty<'tcx>)
-                      -> SubregionOrigin<'tcx> {
-        SubregionOrigin::from_obligation_cause(cause,
-                                               || infer::RelateParamBound(cause.span, sup_type))
-    }
-
-    /// This method populates the region map's `free_region_map`. It walks over the transformed
-    /// argument and return types for each function just before we check the body of that function,
-    /// looking for types where you have a borrowed pointer to other borrowed data (e.g., `&'a &'b
-    /// [usize]`.  We do not allow references to outlive the things they point at, so we can assume
-    /// that `'a <= 'b`. This holds for both the argument and return types, basically because, on
-    /// the caller side, the caller is responsible for checking that the type of every expression
-    /// (including the actual values for the arguments, as well as the return type of the fn call)
-    /// is well-formed.
-    ///
-    /// Tests: `src/test/compile-fail/regions-free-region-ordering-*.rs`
-    fn relate_free_regions(&mut self,
-                           fn_sig_tys: &[Ty<'tcx>],
-                           body_id: ast::NodeId,
-                           span: Span) {
-        debug!("relate_free_regions >>");
-
-        for &ty in fn_sig_tys {
-            let ty = self.resolve_type(ty);
-            debug!("relate_free_regions(t={:?})", ty);
-            let implied_bounds = self.implied_bounds(body_id, ty, span);
-
-            // But also record other relationships, such as `T:'x`,
-            // that don't go into the free-region-map but which we use
-            // here.
-            for implication in implied_bounds {
-                debug!("implication: {:?}", implication);
-                match implication {
-                    ImpliedBound::RegionSubRegion(r_a @ &ty::ReEarlyBound(_),
-                                                  &ty::ReVar(vid_b)) |
-                    ImpliedBound::RegionSubRegion(r_a @ &ty::ReFree(_),
-                                                  &ty::ReVar(vid_b)) => {
-                        self.add_given(r_a, vid_b);
-                    }
-                    ImpliedBound::RegionSubParam(r_a, param_b) => {
-                        self.region_bound_pairs.push((r_a, GenericKind::Param(param_b)));
-                    }
-                    ImpliedBound::RegionSubProjection(r_a, projection_b) => {
-                        self.region_bound_pairs.push((r_a, GenericKind::Projection(projection_b)));
-                    }
-                    ImpliedBound::RegionSubRegion(r_a, r_b) => {
-                        // In principle, we could record (and take
-                        // advantage of) every relationship here, but
-                        // we are also free not to -- it simply means
-                        // strictly less that we can successfully type
-                        // check. Right now we only look for things
-                        // relationships between free regions. (It may
-                        // also be that we should revise our inference
-                        // system to be more general and to make use
-                        // of *every* relationship that arises here,
-                        // but presently we do not.)
-                        self.free_region_map.relate_regions(r_a, r_b);
-                    }
-                }
-            }
-        }
-
-        debug!("<< relate_free_regions");
-    }
-
-    /// Compute the implied bounds that a callee/impl can assume based on
-    /// the fact that caller/projector has ensured that `ty` is WF.  See
-    /// the `ImpliedBound` type for more details.
-    fn implied_bounds(&mut self, body_id: ast::NodeId, ty: Ty<'tcx>, span: Span)
-                      -> Vec<ImpliedBound<'tcx>> {
-        // Sometimes when we ask what it takes for T: WF, we get back that
-        // U: WF is required; in that case, we push U onto this stack and
-        // process it next. Currently (at least) these resulting
-        // predicates are always guaranteed to be a subset of the original
-        // type, so we need not fear non-termination.
-        let mut wf_types = vec![ty];
-
-        let mut implied_bounds = vec![];
-
-        while let Some(ty) = wf_types.pop() {
-            // Compute the obligations for `ty` to be well-formed. If `ty` is
-            // an unresolved inference variable, just substituted an empty set
-            // -- because the return type here is going to be things we *add*
-            // to the environment, it's always ok for this set to be smaller
-            // than the ultimate set. (Note: normally there won't be
-            // unresolved inference variables here anyway, but there might be
-            // during typeck under some circumstances.)
-            let obligations =
-                wf::obligations(self, self.fcx.param_env, body_id, ty, span)
-                .unwrap_or(vec![]);
-
-            // NB: All of these predicates *ought* to be easily proven
-            // true. In fact, their correctness is (mostly) implied by
-            // other parts of the program. However, in #42552, we had
-            // an annoying scenario where:
-            //
-            // - Some `T::Foo` gets normalized, resulting in a
-            //   variable `_1` and a `T: Trait<Foo=_1>` constraint
-            //   (not sure why it couldn't immediately get
-            //   solved). This result of `_1` got cached.
-            // - These obligations were dropped on the floor here,
-            //   rather than being registered.
-            // - Then later we would get a request to normalize
-            //   `T::Foo` which would result in `_1` being used from
-            //   the cache, but hence without the `T: Trait<Foo=_1>`
-            //   constraint. As a result, `_1` never gets resolved,
-            //   and we get an ICE (in dropck).
-            //
-            // Therefore, we register any predicates involving
-            // inference variables. We restrict ourselves to those
-            // involving inference variables both for efficiency and
-            // to avoids duplicate errors that otherwise show up.
-            self.fcx.register_predicates(
-                obligations.iter()
-                           .filter(|o| o.predicate.has_infer_types())
-                           .cloned());
-
-            // From the full set of obligations, just filter down to the
-            // region relationships.
-            implied_bounds.extend(
-                obligations
-                    .into_iter()
-                    .flat_map(|obligation| {
-                        assert!(!obligation.has_escaping_regions());
-                        match obligation.predicate {
-                            ty::Predicate::Trait(..) |
-                            ty::Predicate::Equate(..) |
-                            ty::Predicate::Subtype(..) |
-                            ty::Predicate::Projection(..) |
-                            ty::Predicate::ClosureKind(..) |
-                            ty::Predicate::ObjectSafe(..) |
-                            ty::Predicate::ConstEvaluatable(..) =>
-                                vec![],
-
-                            ty::Predicate::WellFormed(subty) => {
-                                wf_types.push(subty);
-                                vec![]
-                            }
-
-                            ty::Predicate::RegionOutlives(ref data) =>
-                                match self.tcx.no_late_bound_regions(data) {
-                                    None =>
-                                        vec![],
-                                    Some(ty::OutlivesPredicate(r_a, r_b)) =>
-                                        vec![ImpliedBound::RegionSubRegion(r_b, r_a)],
-                                },
-
-                            ty::Predicate::TypeOutlives(ref data) =>
-                                match self.tcx.no_late_bound_regions(data) {
-                                    None => vec![],
-                                    Some(ty::OutlivesPredicate(ty_a, r_b)) => {
-                                        let ty_a = self.resolve_type_vars_if_possible(&ty_a);
-                                        let components = self.tcx.outlives_components(ty_a);
-                                        self.implied_bounds_from_components(r_b, components)
-                                    }
-                                },
-                        }}));
-        }
-
-        implied_bounds
-    }
-
-    /// When we have an implied bound that `T: 'a`, we can further break
-    /// this down to determine what relationships would have to hold for
-    /// `T: 'a` to hold. We get to assume that the caller has validated
-    /// those relationships.
-    fn implied_bounds_from_components(&self,
-                                      sub_region: ty::Region<'tcx>,
-                                      sup_components: Vec<Component<'tcx>>)
-                                      -> Vec<ImpliedBound<'tcx>>
-    {
-        sup_components
-            .into_iter()
-            .flat_map(|component| {
-                match component {
-                    Component::Region(r) =>
-                        vec![ImpliedBound::RegionSubRegion(sub_region, r)],
-                    Component::Param(p) =>
-                        vec![ImpliedBound::RegionSubParam(sub_region, p)],
-                    Component::Projection(p) =>
-                        vec![ImpliedBound::RegionSubProjection(sub_region, p)],
-                    Component::EscapingProjection(_) =>
-                    // If the projection has escaping regions, don't
-                    // try to infer any implied bounds even for its
-                    // free components. This is conservative, because
-                    // the caller will still have to prove that those
-                    // free components outlive `sub_region`. But the
-                    // idea is that the WAY that the caller proves
-                    // that may change in the future and we want to
-                    // give ourselves room to get smarter here.
-                        vec![],
-                    Component::UnresolvedInferenceVariable(..) =>
-                        vec![],
-                }
-            })
-            .collect()
+        self.infcx.process_registered_region_obligations(
+            self.outlives_environment.region_bound_pairs(),
+            self.implicit_region_bound,
+            self.param_env,
+            self.body_id);
     }
 
     fn resolve_regions_and_report_errors(&self) {
         self.fcx.resolve_regions_and_report_errors(self.subject_def_id,
                                                    &self.region_scope_tree,
-                                                   &self.free_region_map);
+                                                   self.outlives_environment.free_region_map());
     }
 
     fn constrain_bindings_in_pat(&mut self, pat: &hir::Pat) {
@@ -638,10 +417,28 @@
         NestedVisitorMap::None
     }
 
-    fn visit_fn(&mut self, _fk: intravisit::FnKind<'gcx>, _: &'gcx hir::FnDecl,
-                b: hir::BodyId, span: Span, id: ast::NodeId) {
-        let body = self.tcx.hir.body(b);
-        self.visit_fn_body(id, body, span)
+    fn visit_fn(&mut self,
+                fk: intravisit::FnKind<'gcx>,
+                _: &'gcx hir::FnDecl,
+                body_id: hir::BodyId,
+                span: Span,
+                id: ast::NodeId) {
+        assert!(match fk { intravisit::FnKind::Closure(..) => true, _ => false },
+                "visit_fn invoked for something other than a closure");
+
+        // Save the state of the current function before invoking
+        // `visit_fn_body`. We will restore it afterwards.
+        let old_body_id = self.body_id;
+        let old_call_site_scope = self.call_site_scope;
+        let env_snapshot = self.outlives_environment.push_snapshot_pre_closure();
+
+        let body = self.tcx.hir.body(body_id);
+        self.visit_fn_body(id, body, span);
+
+        // Restore state from previous function.
+        self.outlives_environment.pop_snapshot_post_closure(env_snapshot);
+        self.call_site_scope = old_call_site_scope;
+        self.body_id = old_body_id;
     }
 
     //visit_pat: visit_pat, // (..) see above
@@ -1137,6 +934,27 @@
         self.type_must_outlive(origin, ty, minimum_lifetime);
     }
 
+    /// Adds constraints to inference such that `T: 'a` holds (or
+    /// reports an error if it cannot).
+    ///
+    /// # Parameters
+    ///
+    /// - `origin`, the reason we need this constraint
+    /// - `ty`, the type `T`
+    /// - `region`, the region `'a`
+    pub fn type_must_outlive(&self,
+                             origin: infer::SubregionOrigin<'tcx>,
+                             ty: Ty<'tcx>,
+                             region: ty::Region<'tcx>)
+    {
+        self.infcx.type_must_outlive(self.outlives_environment.region_bound_pairs(),
+                                     self.implicit_region_bound,
+                                     self.param_env,
+                                     origin,
+                                     ty,
+                                     region);
+    }
+
     /// Computes the guarantor for an expression `&base` and then ensures that the lifetime of the
     /// resulting pointer is linked to the lifetime of its guarantor (if any).
     fn link_addr_of(&mut self, expr: &hir::Expr,
@@ -1492,345 +1310,4 @@
             self.type_must_outlive(origin.clone(), ty, expr_region);
         }
     }
-
-    /// Ensures that type is well-formed in `region`, which implies (among
-    /// other things) that all borrowed data reachable via `ty` outlives
-    /// `region`.
-    pub fn type_must_outlive(&self,
-                             origin: infer::SubregionOrigin<'tcx>,
-                             ty: Ty<'tcx>,
-                             region: ty::Region<'tcx>)
-    {
-        let ty = self.resolve_type(ty);
-
-        debug!("type_must_outlive(ty={:?}, region={:?}, origin={:?})",
-               ty,
-               region,
-               origin);
-
-        assert!(!ty.has_escaping_regions());
-
-        let components = self.tcx.outlives_components(ty);
-        self.components_must_outlive(origin, components, region);
-    }
-
-    fn components_must_outlive(&self,
-                               origin: infer::SubregionOrigin<'tcx>,
-                               components: Vec<Component<'tcx>>,
-                               region: ty::Region<'tcx>)
-    {
-        for component in components {
-            let origin = origin.clone();
-            match component {
-                Component::Region(region1) => {
-                    self.sub_regions(origin, region, region1);
-                }
-                Component::Param(param_ty) => {
-                    self.param_ty_must_outlive(origin, region, param_ty);
-                }
-                Component::Projection(projection_ty) => {
-                    self.projection_must_outlive(origin, region, projection_ty);
-                }
-                Component::EscapingProjection(subcomponents) => {
-                    self.components_must_outlive(origin, subcomponents, region);
-                }
-                Component::UnresolvedInferenceVariable(v) => {
-                    // ignore this, we presume it will yield an error
-                    // later, since if a type variable is not resolved by
-                    // this point it never will be
-                    self.tcx.sess.delay_span_bug(
-                        origin.span(),
-                        &format!("unresolved inference variable in outlives: {:?}", v));
-                }
-            }
-        }
-    }
-
-    fn param_ty_must_outlive(&self,
-                             origin: infer::SubregionOrigin<'tcx>,
-                             region: ty::Region<'tcx>,
-                             param_ty: ty::ParamTy) {
-        debug!("param_ty_must_outlive(region={:?}, param_ty={:?}, origin={:?})",
-               region, param_ty, origin);
-
-        let verify_bound = self.param_bound(param_ty);
-        let generic = GenericKind::Param(param_ty);
-        self.verify_generic_bound(origin, generic, region, verify_bound);
-    }
-
-    fn projection_must_outlive(&self,
-                               origin: infer::SubregionOrigin<'tcx>,
-                               region: ty::Region<'tcx>,
-                               projection_ty: ty::ProjectionTy<'tcx>)
-    {
-        debug!("projection_must_outlive(region={:?}, projection_ty={:?}, origin={:?})",
-               region, projection_ty, origin);
-
-        // This case is thorny for inference. The fundamental problem is
-        // that there are many cases where we have choice, and inference
-        // doesn't like choice (the current region inference in
-        // particular). :) First off, we have to choose between using the
-        // OutlivesProjectionEnv, OutlivesProjectionTraitDef, and
-        // OutlivesProjectionComponent rules, any one of which is
-        // sufficient.  If there are no inference variables involved, it's
-        // not hard to pick the right rule, but if there are, we're in a
-        // bit of a catch 22: if we picked which rule we were going to
-        // use, we could add constraints to the region inference graph
-        // that make it apply, but if we don't add those constraints, the
-        // rule might not apply (but another rule might). For now, we err
-        // on the side of adding too few edges into the graph.
-
-        // Compute the bounds we can derive from the environment or trait
-        // definition.  We know that the projection outlives all the
-        // regions in this list.
-        let env_bounds = self.projection_declared_bounds(origin.span(), projection_ty);
-
-        debug!("projection_must_outlive: env_bounds={:?}",
-               env_bounds);
-
-        // If we know that the projection outlives 'static, then we're
-        // done here.
-        if env_bounds.contains(&&ty::ReStatic) {
-            debug!("projection_must_outlive: 'static as declared bound");
-            return;
-        }
-
-        // If declared bounds list is empty, the only applicable rule is
-        // OutlivesProjectionComponent. If there are inference variables,
-        // then, we can break down the outlives into more primitive
-        // components without adding unnecessary edges.
-        //
-        // If there are *no* inference variables, however, we COULD do
-        // this, but we choose not to, because the error messages are less
-        // good. For example, a requirement like `T::Item: 'r` would be
-        // translated to a requirement that `T: 'r`; when this is reported
-        // to the user, it will thus say "T: 'r must hold so that T::Item:
-        // 'r holds". But that makes it sound like the only way to fix
-        // the problem is to add `T: 'r`, which isn't true. So, if there are no
-        // inference variables, we use a verify constraint instead of adding
-        // edges, which winds up enforcing the same condition.
-        let needs_infer = projection_ty.needs_infer();
-        if env_bounds.is_empty() && needs_infer {
-            debug!("projection_must_outlive: no declared bounds");
-
-            for component_ty in projection_ty.substs.types() {
-                self.type_must_outlive(origin.clone(), component_ty, region);
-            }
-
-            for r in projection_ty.substs.regions() {
-                self.sub_regions(origin.clone(), region, r);
-            }
-
-            return;
-        }
-
-        // If we find that there is a unique declared bound `'b`, and this bound
-        // appears in the trait reference, then the best action is to require that `'b:'r`,
-        // so do that. This is best no matter what rule we use:
-        //
-        // - OutlivesProjectionEnv or OutlivesProjectionTraitDef: these would translate to
-        // the requirement that `'b:'r`
-        // - OutlivesProjectionComponent: this would require `'b:'r` in addition to
-        // other conditions
-        if !env_bounds.is_empty() && env_bounds[1..].iter().all(|b| *b == env_bounds[0]) {
-            let unique_bound = env_bounds[0];
-            debug!("projection_must_outlive: unique declared bound = {:?}", unique_bound);
-            if projection_ty.substs.regions().any(|r| env_bounds.contains(&r)) {
-                debug!("projection_must_outlive: unique declared bound appears in trait ref");
-                self.sub_regions(origin.clone(), region, unique_bound);
-                return;
-            }
-        }
-
-        // Fallback to verifying after the fact that there exists a
-        // declared bound, or that all the components appearing in the
-        // projection outlive; in some cases, this may add insufficient
-        // edges into the inference graph, leading to inference failures
-        // even though a satisfactory solution exists.
-        let verify_bound = self.projection_bound(origin.span(), env_bounds, projection_ty);
-        let generic = GenericKind::Projection(projection_ty);
-        self.verify_generic_bound(origin, generic.clone(), region, verify_bound);
-    }
-
-    fn type_bound(&self, span: Span, ty: Ty<'tcx>) -> VerifyBound<'tcx> {
-        match ty.sty {
-            ty::TyParam(p) => {
-                self.param_bound(p)
-            }
-            ty::TyProjection(data) => {
-                let declared_bounds = self.projection_declared_bounds(span, data);
-                self.projection_bound(span, declared_bounds, data)
-            }
-            _ => {
-                self.recursive_type_bound(span, ty)
-            }
-        }
-    }
-
-    fn param_bound(&self, param_ty: ty::ParamTy) -> VerifyBound<'tcx> {
-        debug!("param_bound(param_ty={:?})",
-               param_ty);
-
-        let mut param_bounds = self.declared_generic_bounds_from_env(GenericKind::Param(param_ty));
-
-        // Add in the default bound of fn body that applies to all in
-        // scope type parameters:
-        param_bounds.extend(self.implicit_region_bound);
-
-        VerifyBound::AnyRegion(param_bounds)
-    }
-
-    fn projection_declared_bounds(&self,
-                                  span: Span,
-                                  projection_ty: ty::ProjectionTy<'tcx>)
-                                  -> Vec<ty::Region<'tcx>>
-    {
-        // First assemble bounds from where clauses and traits.
-
-        let mut declared_bounds =
-            self.declared_generic_bounds_from_env(GenericKind::Projection(projection_ty));
-
-        declared_bounds.extend_from_slice(
-            &self.declared_projection_bounds_from_trait(span, projection_ty));
-
-        declared_bounds
-    }
-
-    fn projection_bound(&self,
-                        span: Span,
-                        declared_bounds: Vec<ty::Region<'tcx>>,
-                        projection_ty: ty::ProjectionTy<'tcx>)
-                        -> VerifyBound<'tcx> {
-        debug!("projection_bound(declared_bounds={:?}, projection_ty={:?})",
-               declared_bounds, projection_ty);
-
-        // see the extensive comment in projection_must_outlive
-        let ty = self.tcx.mk_projection(projection_ty.item_def_id, projection_ty.substs);
-        let recursive_bound = self.recursive_type_bound(span, ty);
-
-        VerifyBound::AnyRegion(declared_bounds).or(recursive_bound)
-    }
-
-    fn recursive_type_bound(&self, span: Span, ty: Ty<'tcx>) -> VerifyBound<'tcx> {
-        let mut bounds = vec![];
-
-        for subty in ty.walk_shallow() {
-            bounds.push(self.type_bound(span, subty));
-        }
-
-        let mut regions = ty.regions();
-        regions.retain(|r| !r.is_late_bound()); // ignore late-bound regions
-        bounds.push(VerifyBound::AllRegions(regions));
-
-        // remove bounds that must hold, since they are not interesting
-        bounds.retain(|b| !b.must_hold());
-
-        if bounds.len() == 1 {
-            bounds.pop().unwrap()
-        } else {
-            VerifyBound::AllBounds(bounds)
-        }
-    }
-
-    fn declared_generic_bounds_from_env(&self, generic: GenericKind<'tcx>)
-                                        -> Vec<ty::Region<'tcx>>
-    {
-        let param_env = &self.param_env;
-
-        // To start, collect bounds from user:
-        let mut param_bounds = self.tcx.required_region_bounds(generic.to_ty(self.tcx),
-                                                               param_env.caller_bounds.to_vec());
-
-        // Next, collect regions we scraped from the well-formedness
-        // constraints in the fn signature. To do that, we walk the list
-        // of known relations from the fn ctxt.
-        //
-        // This is crucial because otherwise code like this fails:
-        //
-        //     fn foo<'a, A>(x: &'a A) { x.bar() }
-        //
-        // The problem is that the type of `x` is `&'a A`. To be
-        // well-formed, then, A must be lower-generic by `'a`, but we
-        // don't know that this holds from first principles.
-        for &(r, p) in &self.region_bound_pairs {
-            debug!("generic={:?} p={:?}",
-                   generic,
-                   p);
-            if generic == p {
-                param_bounds.push(r);
-            }
-        }
-
-        param_bounds
-    }
-
-    fn declared_projection_bounds_from_trait(&self,
-                                             span: Span,
-                                             projection_ty: ty::ProjectionTy<'tcx>)
-                                             -> Vec<ty::Region<'tcx>>
-    {
-        debug!("projection_bounds(projection_ty={:?})",
-               projection_ty);
-        let ty = self.tcx.mk_projection(projection_ty.item_def_id, projection_ty.substs);
-
-        // Say we have a projection `<T as SomeTrait<'a>>::SomeType`. We are interested
-        // in looking for a trait definition like:
-        //
-        // ```
-        // trait SomeTrait<'a> {
-        //     type SomeType : 'a;
-        // }
-        // ```
-        //
-        // we can thus deduce that `<T as SomeTrait<'a>>::SomeType : 'a`.
-        let trait_predicates = self.tcx.predicates_of(projection_ty.trait_ref(self.tcx).def_id);
-        assert_eq!(trait_predicates.parent, None);
-        let predicates = trait_predicates.predicates.as_slice().to_vec();
-        traits::elaborate_predicates(self.tcx, predicates)
-            .filter_map(|predicate| {
-                // we're only interested in `T : 'a` style predicates:
-                let outlives = match predicate {
-                    ty::Predicate::TypeOutlives(data) => data,
-                    _ => { return None; }
-                };
-
-                debug!("projection_bounds: outlives={:?} (1)",
-                       outlives);
-
-                // apply the substitutions (and normalize any projected types)
-                let outlives = self.instantiate_type_scheme(span,
-                                                            projection_ty.substs,
-                                                            &outlives);
-
-                debug!("projection_bounds: outlives={:?} (2)",
-                       outlives);
-
-                let region_result = self.commit_if_ok(|_| {
-                    let (outlives, _) =
-                        self.replace_late_bound_regions_with_fresh_var(
-                            span,
-                            infer::AssocTypeProjection(projection_ty.item_def_id),
-                            &outlives);
-
-                    debug!("projection_bounds: outlives={:?} (3)",
-                           outlives);
-
-                    // check whether this predicate applies to our current projection
-                    let cause = self.fcx.misc(span);
-                    match self.at(&cause, self.fcx.param_env).eq(outlives.0, ty) {
-                        Ok(ok) => Ok((ok, outlives.1)),
-                        Err(_) => Err(())
-                    }
-                }).map(|(ok, result)| {
-                    self.register_infer_ok_obligations(ok);
-                    result
-                });
-
-                debug!("projection_bounds: region_result={:?}",
-                       region_result);
-
-                region_result.ok()
-            })
-            .collect()
-    }
 }
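
The removed `projection_must_outlive` fast path above hinges on one idiom: detecting whether the declared environment bounds contain exactly one *unique* region (`env_bounds[1..].iter().all(|b| *b == env_bounds[0])`). A minimal, standalone sketch of that check, using plain strings as stand-ins for rustc's `ty::Region` (an assumption for illustration only):

```rust
// Simplified stand-in: regions are represented as &str instead of ty::Region.
// Mirrors the check `!env_bounds.is_empty() &&
// env_bounds[1..].iter().all(|b| *b == env_bounds[0])` from the diff:
// a unique bound exists when the list is non-empty and every element
// equals the first.
fn unique_bound<'a>(env_bounds: &[&'a str]) -> Option<&'a str> {
    match env_bounds.split_first() {
        Some((first, rest)) if rest.iter().all(|b| b == first) => Some(*first),
        _ => None,
    }
}

fn main() {
    assert_eq!(unique_bound(&["'a", "'a"]), Some("'a"));
    assert_eq!(unique_bound(&["'a", "'b"]), None);
    assert_eq!(unique_bound(&[]), None);
    println!("ok");
}
```

When such a unique bound exists and also appears in the trait reference, the code can emit a single `'b: 'r` edge instead of falling back to a verify constraint.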
diff --git a/src/librustc_typeck/check/upvar.rs b/src/librustc_typeck/check/upvar.rs
index a6a8148..9a70a86 100644
--- a/src/librustc_typeck/check/upvar.rs
+++ b/src/librustc_typeck/check/upvar.rs
@@ -50,7 +50,7 @@
 use syntax::ast;
 use syntax_pos::Span;
 use rustc::hir;
-use rustc::hir::def_id::DefIndex;
+use rustc::hir::def_id::LocalDefId;
 use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap};
 use rustc::util::nodemap::FxHashMap;
 
@@ -128,7 +128,7 @@
             for freevar in freevars {
                 let upvar_id = ty::UpvarId {
                     var_id: self.tcx.hir.node_to_hir_id(freevar.var_id()),
-                    closure_expr_id: closure_def_id.index,
+                    closure_expr_id: LocalDefId::from_def_id(closure_def_id),
                 };
                 debug!("seed upvar_id {:?}", upvar_id);
 
@@ -167,7 +167,7 @@
             // Write the adjusted values back into the main tables.
             if infer_kind {
                 if let Some(kind) = delegate.adjust_closure_kinds
-                                            .remove(&closure_def_id.index) {
+                                            .remove(&closure_def_id.to_local()) {
                     self.tables
                         .borrow_mut()
                         .closure_kinds_mut()
@@ -231,7 +231,7 @@
         // This may change if abstract return types of some sort are
         // implemented.
         let tcx = self.tcx;
-        let closure_def_index = tcx.hir.local_def_id(closure_id).index;
+        let closure_def_index = tcx.hir.local_def_id(closure_id);
 
         tcx.with_freevars(closure_id, |freevars| {
             freevars.iter().map(|freevar| {
@@ -240,7 +240,7 @@
                 let freevar_ty = self.node_ty(var_hir_id);
                 let upvar_id = ty::UpvarId {
                     var_id: var_hir_id,
-                    closure_expr_id: closure_def_index,
+                    closure_expr_id: LocalDefId::from_def_id(closure_def_index),
                 };
                 let capture = self.tables.borrow().upvar_capture(upvar_id);
 
@@ -263,7 +263,7 @@
 
 struct InferBorrowKind<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
     fcx: &'a FnCtxt<'a, 'gcx, 'tcx>,
-    adjust_closure_kinds: FxHashMap<DefIndex, (ty::ClosureKind, Option<(Span, ast::Name)>)>,
+    adjust_closure_kinds: FxHashMap<LocalDefId, (ty::ClosureKind, Option<(Span, ast::Name)>)>,
     adjust_upvar_captures: ty::UpvarCaptureMap<'tcx>,
 }
 
@@ -485,7 +485,7 @@
     }
 
     fn adjust_closure_kind(&mut self,
-                           closure_id: DefIndex,
+                           closure_id: LocalDefId,
                            new_kind: ty::ClosureKind,
                            upvar_span: Span,
                            var_name: ast::Name) {
@@ -494,7 +494,7 @@
 
         let closure_kind = self.adjust_closure_kinds.get(&closure_id).cloned()
             .or_else(|| {
-                let closure_id = self.fcx.tcx.hir.def_index_to_hir_id(closure_id);
+                let closure_id = self.fcx.tcx.hir.local_def_id_to_hir_id(closure_id);
                 self.fcx.tables.borrow().closure_kinds().get(closure_id).cloned()
             });
 
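
The `upvar.rs` hunks above replace raw `DefIndex` keys with the `LocalDefId` wrapper throughout `InferBorrowKind`. The payoff of such a newtype migration is that the compiler rejects accidental mixing of key types. A hedged sketch with simplified stand-ins for rustc's real `DefIndex`/`LocalDefId` (the fields and constructor here are illustrative, not rustc's actual definitions):

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-ins for rustc's DefIndex / LocalDefId.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct DefIndex(u32);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct LocalDefId(DefIndex);

impl LocalDefId {
    // Analogous in spirit to `LocalDefId::from_def_id` in the diff.
    fn from_def_index(index: DefIndex) -> LocalDefId {
        LocalDefId(index)
    }
}

fn main() {
    // Like `adjust_closure_kinds`, but keyed by the newtype.
    let mut kinds: HashMap<LocalDefId, &str> = HashMap::new();
    let id = LocalDefId::from_def_index(DefIndex(7));
    kinds.insert(id, "FnOnce");
    assert_eq!(kinds.get(&id), Some(&"FnOnce"));
    // kinds.get(&DefIndex(7)); // would not compile: distinct key type
    println!("ok");
}
```

With a bare `DefIndex`, a lookup keyed by an index from the wrong crate would type-check silently; the wrapper turns that mistake into a compile error.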
diff --git a/src/librustc_typeck/lib.rs b/src/librustc_typeck/lib.rs
index 5227955..014b8b1 100644
--- a/src/librustc_typeck/lib.rs
+++ b/src/librustc_typeck/lib.rs
@@ -75,6 +75,7 @@
 #![feature(advanced_slice_patterns)]
 #![feature(box_patterns)]
 #![feature(box_syntax)]
+#![feature(crate_visibility_modifier)]
 #![feature(conservative_impl_trait)]
 #![feature(match_default_bindings)]
 #![feature(never_type)]
diff --git a/src/librustdoc/clean/inline.rs b/src/librustdoc/clean/inline.rs
index 9fb9437..4c51816 100644
--- a/src/librustdoc/clean/inline.rs
+++ b/src/librustdoc/clean/inline.rs
@@ -77,6 +77,11 @@
             ret.extend(build_impls(cx, did));
             clean::EnumItem(build_enum(cx, did))
         }
+        Def::TyForeign(did) => {
+            record_extern_fqn(cx, did, clean::TypeKind::Foreign);
+            ret.extend(build_impls(cx, did));
+            clean::ForeignTypeItem
+        }
         // Never inline enum variants but leave them shown as reexports.
         Def::Variant(..) => return None,
         // Assume that enum variants and struct types are reexported next to
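
The `inline.rs` hunk adds a `Def::TyForeign` arm so cross-crate foreign types are inlined like structs and enums, while enum variants keep returning `None` (shown as reexports). A hypothetical, heavily simplified model of that dispatch — the enum and item names below are illustrative, not rustdoc's real types:

```rust
// Hypothetical, simplified model of rustdoc's inlining decision.
#[derive(Debug)]
enum Def {
    Struct,
    Enum,
    TyForeign,
    Variant,
}

fn inline_kind(def: &Def) -> Option<&'static str> {
    match def {
        Def::Struct => Some("StructItem"),
        Def::Enum => Some("EnumItem"),
        // New arm mirroring the diff: foreign types are now inlined too.
        Def::TyForeign => Some("ForeignTypeItem"),
        // Enum variants are never inlined; they stay as reexports.
        Def::Variant => None,
    }
}

fn main() {
    assert_eq!(inline_kind(&Def::TyForeign), Some("ForeignTypeItem"));
    assert_eq!(inline_kind(&Def::Variant), None);
    println!("ok");
}
```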
diff --git a/src/librustdoc/html/render.rs b/src/librustdoc/html/render.rs
index 69eaf24..7760561 100644
--- a/src/librustdoc/html/render.rs
+++ b/src/librustdoc/html/render.rs
@@ -1257,7 +1257,7 @@
             clean::FunctionItem(..) | clean::ModuleItem(..) |
             clean::ForeignFunctionItem(..) | clean::ForeignStaticItem(..) |
             clean::ConstantItem(..) | clean::StaticItem(..) |
-            clean::UnionItem(..)
+            clean::UnionItem(..) | clean::ForeignTypeItem
             if !self.stripped_mod => {
                 // Reexported items mean that the same id can show up twice
                 // in the rustdoc ast that we're looking at. We know,
@@ -1292,7 +1292,7 @@
         // Maintain the parent stack
         let orig_parent_is_trait_impl = self.parent_is_trait_impl;
         let parent_pushed = match item.inner {
-            clean::TraitItem(..) | clean::EnumItem(..) |
+            clean::TraitItem(..) | clean::EnumItem(..) | clean::ForeignTypeItem |
             clean::StructItem(..) | clean::UnionItem(..) => {
                 self.parent_stack.push(item.def_id);
                 self.parent_is_trait_impl = false;
@@ -1683,7 +1683,7 @@
             format!("{}-{}", self.item.source.loline, self.item.source.hiline)
         };
         Some(format!("{root}src/{krate}/{path}#{lines}",
-                     root = root,
+                     root = Escape(&root),
                      krate = krate,
                      path = path,
                      lines = lines))
@@ -1711,6 +1711,7 @@
             clean::PrimitiveItem(..) => write!(fmt, "Primitive Type ")?,
             clean::StaticItem(..) | clean::ForeignStaticItem(..) => write!(fmt, "Static ")?,
             clean::ConstantItem(..) => write!(fmt, "Constant ")?,
+            clean::ForeignTypeItem => write!(fmt, "Foreign Type ")?,
             _ => {
                 // We don't generate pages for any other type.
                 unreachable!();
@@ -1775,6 +1776,7 @@
             clean::StaticItem(ref i) | clean::ForeignStaticItem(ref i) =>
                 item_static(fmt, self.cx, self.item, i),
             clean::ConstantItem(ref c) => item_constant(fmt, self.cx, self.item, c),
+            clean::ForeignTypeItem => item_foreign_type(fmt, self.cx, self.item),
             _ => {
                 // We don't generate pages for any other type.
                 unreachable!();
@@ -3429,6 +3431,21 @@
     render_assoc_items(w, cx, it, it.def_id, AssocItemRender::All)
 }
 
+fn item_foreign_type(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item) -> fmt::Result {
+    writeln!(w, "<pre class='rust foreigntype'>extern {{")?;
+    render_attributes(w, it)?;
+    write!(
+        w,
+        "    {}type {};\n}}</pre>",
+        VisSpace(&it.visibility),
+        it.name.as_ref().unwrap(),
+    )?;
+
+    document(w, cx, it)?;
+
+    render_assoc_items(w, cx, it, it.def_id, AssocItemRender::All)
+}
+
 impl<'a> fmt::Display for Sidebar<'a> {
     fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
         let cx = self.cx;
@@ -3446,6 +3463,7 @@
                 clean::UnionItem(..) => write!(fmt, "Union ")?,
                 clean::EnumItem(..) => write!(fmt, "Enum ")?,
                 clean::TypedefItem(..) => write!(fmt, "Type Definition ")?,
+                clean::ForeignTypeItem => write!(fmt, "Foreign Type ")?,
                 clean::ModuleItem(..) => if it.is_crate() {
                     write!(fmt, "Crate ")?;
                 } else {
@@ -3474,6 +3492,7 @@
                 clean::EnumItem(ref e) => sidebar_enum(fmt, it, e)?,
                 clean::TypedefItem(ref t, _) => sidebar_typedef(fmt, it, t)?,
                 clean::ModuleItem(ref m) => sidebar_module(fmt, it, &m.items)?,
+                clean::ForeignTypeItem => sidebar_foreign_type(fmt, it)?,
                 _ => (),
             }
         }
@@ -3897,6 +3916,14 @@
     Ok(())
 }
 
+fn sidebar_foreign_type(fmt: &mut fmt::Formatter, it: &clean::Item) -> fmt::Result {
+    let sidebar = sidebar_assoc_items(it);
+    if !sidebar.is_empty() {
+        write!(fmt, "<div class=\"block items\">{}</div>", sidebar)?;
+    }
+    Ok(())
+}
+
 impl<'a> fmt::Display for Source<'a> {
     fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
         let Source(s) = *self;
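
One small but security-relevant change in the `render.rs` hunks is `root = Escape(&root)`: the user-supplied source root is now HTML-escaped before being interpolated into the `src` link. A minimal sketch of what an escaping wrapper like rustdoc's `Escape` does, assuming only the five standard HTML metacharacters are handled (rustdoc's real implementation may differ in detail):

```rust
// Minimal sketch of HTML escaping in the spirit of rustdoc's `Escape`.
// Assumption: only the five standard metacharacters need replacing.
fn escape(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#39;"),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    assert_eq!(
        escape("https://example.com/?q=<b>"),
        "https://example.com/?q=&lt;b&gt;"
    );
    assert_eq!(escape("plain/path"), "plain/path");
    println!("ok");
}
```

Without the escape, a root containing `"` or `<` could break out of the generated `href` attribute.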
diff --git a/src/librustdoc/html/static/main.js b/src/librustdoc/html/static/main.js
index 8d0faf2..a3957cc 100644
--- a/src/librustdoc/html/static/main.js
+++ b/src/librustdoc/html/static/main.js
@@ -37,7 +37,8 @@
                      "associatedtype",
                      "constant",
                      "associatedconstant",
-                     "union"];
+                     "union",
+                     "foreigntype"];
 
     // On the search screen, so you remain on the last tab you opened.
     //
@@ -381,13 +382,6 @@
                 }
             }
 
-            function min(a, b) {
-                if (a < b) {
-                    return a;
-                }
-                return b;
-            }
-
             function extractGenerics(val) {
                 val = val.toLowerCase();
                 if (val.indexOf('<') !== -1) {
@@ -425,7 +419,7 @@
                             }
                             if (lev.pos !== -1) {
                                 elems.splice(lev.pos, 1);
-                                lev_distance = min(lev.lev, lev_distance);
+                                lev_distance = Math.min(lev.lev, lev_distance);
                             } else {
                                 return MAX_LEV_DISTANCE + 1;
                             }
@@ -488,11 +482,12 @@
                 var new_lev = levenshtein(obj.name, val.name);
                 if (new_lev < lev_distance) {
                     if ((lev = checkGenerics(obj, val)) <= MAX_LEV_DISTANCE) {
-                        lev_distance = min(min(new_lev, lev), lev_distance);
+                        lev_distance = Math.min(Math.min(new_lev, lev), lev_distance);
                     }
                 } else if (obj.generics && obj.generics.length > 0) {
                     for (var x = 0; x < obj.generics.length; ++x) {
-                        lev_distance = min(levenshtein(obj.generics[x], val.name), lev_distance);
+                        lev_distance = Math.min(levenshtein(obj.generics[x], val.name),
+                                                lev_distance);
                     }
                 }
                 // Now whatever happens, the returned distance is "less good" so we should mark it
@@ -509,7 +504,7 @@
                         if (literalSearch === true && tmp === true) {
                             return true;
                         }
-                        lev_distance = min(tmp, lev_distance);
+                        lev_distance = Math.min(tmp, lev_distance);
                         if (lev_distance === 0) {
                             return 0;
                         }
@@ -526,7 +521,7 @@
                     if (literalSearch === true && tmp === true) {
                         return true;
                     }
-                    lev_distance = min(tmp, lev_distance);
+                    lev_distance = Math.min(tmp, lev_distance);
                     if (lev_distance === 0) {
                         return 0;
                     }
@@ -567,18 +562,20 @@
                     var in_args = findArg(searchIndex[i], val, true);
                     var returned = checkReturned(searchIndex[i], val, true);
                     var ty = searchIndex[i];
+                    var fullId = itemTypes[ty.ty] + ty.path + ty.name;
+
                     if (searchWords[i] === val.name) {
                         // filter type: ... queries
                         if (typePassesFilter(typeFilter, searchIndex[i].ty) &&
-                            results[ty.path + ty.name] === undefined)
+                            results[fullId] === undefined)
                         {
-                            results[ty.path + ty.name] = {id: i, index: -1};
+                            results[fullId] = {id: i, index: -1};
                             results_length += 1;
                         }
                     } else if ((in_args === true || returned === true) &&
                                typePassesFilter(typeFilter, searchIndex[i].ty)) {
-                        if (results[ty.path + ty.name] === undefined) {
-                            results[ty.path + ty.name] = {
+                        if (results[fullId] === undefined) {
+                            results[fullId] = {
                                 id: i,
                                 index: -1,
                                 dontValidate: true,
@@ -588,10 +585,10 @@
                             results_length += 1;
                         } else {
                             if (in_args === true) {
-                                results[ty.path + ty.name].in_args = true;
+                                results[fullId].in_args = true;
                             }
                             if (returned === true) {
-                                results[ty.path + ty.name].returned = true;
+                                results[fullId].returned = true;
                             }
                         }
                     }
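
The recurring `main.js` change above replaces the map key `ty.path + ty.name` with `fullId = itemTypes[ty.ty] + ty.path + ty.name`, so two items that share a path and name but differ in kind (say, a struct and a macro) no longer overwrite each other in the results map. A sketch of the same keying fix in Rust, with hypothetical item names for illustration:

```rust
use std::collections::HashMap;

// Sketch of the `fullId` fix: include the item's kind in the key so
// same-named items of different kinds don't collide. Names are hypothetical.
fn full_id(item_type: &str, path: &str, name: &str) -> String {
    format!("{}{}{}", item_type, path, name)
}

fn main() {
    let mut results: HashMap<String, u32> = HashMap::new();
    // A struct and a macro sharing path + name:
    results.insert(full_id("struct", "std::vec::", "Vec"), 0);
    results.insert(full_id("macro", "std::vec::", "Vec"), 1);
    // With the kind prefix, both entries survive.
    assert_eq!(results.len(), 2);

    // The old `path + name` key would have deduplicated them:
    let mut old: HashMap<String, u32> = HashMap::new();
    old.insert(format!("{}{}", "std::vec::", "Vec"), 0);
    old.insert(format!("{}{}", "std::vec::", "Vec"), 1);
    assert_eq!(old.len(), 1);
    println!("ok");
}
```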
@@ -620,6 +617,7 @@
                     if (!type) {
                         continue;
                     }
+                    var fullId = itemTypes[ty.ty] + ty.path + ty.name;
 
                     // allow searching for void (no output) functions as well
                     var typeOutput = type.output ? type.output.name : "";
@@ -638,15 +636,15 @@
                             in_args = allFound;
                         }
                         if (in_args === true || returned === true || module === true) {
-                            if (results[ty.path + ty.name] !== undefined) {
+                            if (results[fullId] !== undefined) {
                                 if (returned === true) {
-                                    results[ty.path + ty.name].returned = true;
+                                    results[fullId].returned = true;
                                 }
                                 if (in_args === true) {
-                                    results[ty.path + ty.name].in_args = true;
+                                    results[fullId].in_args = true;
                                 }
                             } else {
-                                results[ty.path + ty.name] = {
+                                results[fullId] = {
                                     id: i,
                                     index: -1,
                                     dontValidate: true,
@@ -681,48 +679,49 @@
                         var index = -1;
                         // we want lev results to go lower than others
                         var lev = MAX_LEV_DISTANCE;
+                        var fullId = itemTypes[ty.ty] + ty.path + ty.name;
 
                         if (searchWords[j].indexOf(split[i]) > -1 ||
                             searchWords[j].indexOf(val) > -1 ||
                             searchWords[j].replace(/_/g, "").indexOf(val) > -1)
                         {
                             // filter type: ... queries
-                            if (typePassesFilter(typeFilter, searchIndex[j].ty) &&
-                                results[ty.path + ty.name] === undefined) {
+                            if (typePassesFilter(typeFilter, ty) &&
+                                results[fullId] === undefined) {
                                 index = searchWords[j].replace(/_/g, "").indexOf(val);
                             }
                         }
                         if ((lev_distance = levenshtein(searchWords[j], val)) <= MAX_LEV_DISTANCE) {
-                            if (typePassesFilter(typeFilter, searchIndex[j].ty) &&
-                                (results[ty.path + ty.name] === undefined ||
-                                 results[ty.path + ty.name].lev > lev_distance)) {
-                                lev = min(lev, lev_distance);
-                                index = 0;
+                            if (typePassesFilter(typeFilter, ty) &&
+                                (results[fullId] === undefined ||
+                                 results[fullId].lev > lev_distance)) {
+                                lev = Math.min(lev, lev_distance);
+                                index = Math.max(0, index);
                             }
                         }
                         if ((lev_distance = findArg(searchIndex[j], valGenerics))
                             <= MAX_LEV_DISTANCE) {
-                            if (typePassesFilter(typeFilter, searchIndex[j].ty) &&
-                                (results[ty.path + ty.name] === undefined ||
-                                 results[ty.path + ty.name].lev > lev_distance)) {
+                            if (typePassesFilter(typeFilter, ty) &&
+                                (results[fullId] === undefined ||
+                                 results[fullId].lev > lev_distance)) {
                                 in_args = true;
-                                lev = min(lev_distance, lev);
-                                index = 0;
+                                lev = Math.min(lev_distance, lev);
+                                index = Math.max(0, index);
                             }
                         }
                         if ((lev_distance = checkReturned(searchIndex[j], valGenerics)) <=
                             MAX_LEV_DISTANCE) {
-                            if (typePassesFilter(typeFilter, searchIndex[j].ty) &&
-                                (results[ty.path + ty.name] === undefined ||
-                                 results[ty.path + ty.name].lev > lev_distance)) {
+                            if (typePassesFilter(typeFilter, ty) &&
+                                (results[fullId] === undefined ||
+                                 results[fullId].lev > lev_distance)) {
                                 returned = true;
-                                lev = min(lev_distance, lev);
-                                index = 0;
+                                lev = Math.min(lev_distance, lev);
+                                index = Math.max(0, index);
                             }
                         }
                         if (index !== -1) {
-                            if (results[ty.path + ty.name] === undefined) {
-                                results[ty.path + ty.name] = {
+                            if (results[fullId] === undefined) {
+                                results[fullId] = {
                                     id: j,
                                     index: index,
                                     lev: lev,
@@ -731,14 +730,14 @@
                                 };
                                 results_length += 1;
                             } else {
-                                if (results[ty.path + ty.name].lev > lev) {
-                                    results[ty.path + ty.name].lev = lev;
+                                if (results[fullId].lev > lev) {
+                                    results[fullId].lev = lev;
                                 }
                                 if (in_args === true) {
-                                    results[ty.path + ty.name].in_args = true;
+                                    results[fullId].in_args = true;
                                 }
                                 if (returned === true) {
-                                    results[ty.path + ty.name].returned = true;
+                                    results[fullId].returned = true;
                                 }
                             }
                         }
@@ -1445,6 +1444,7 @@
         block("trait", "Traits");
         block("fn", "Functions");
         block("type", "Type Definitions");
+        block("foreigntype", "Foreign Types");
     }
 
     window.initSidebarItems = initSidebarItems;
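The `main.js` hunks above rework the search loop to key `results` by a type-qualified `fullId` (`itemTypes[ty.ty] + ty.path + ty.name`) so same-named items of different kinds no longer collide, and they rank candidates by Levenshtein distance against a `MAX_LEV_DISTANCE` cutoff. As a rough illustration of that metric only — a hypothetical Python sketch, not the rustdoc implementation:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions that
    # transform string a into string b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

A cutoff like `MAX_LEV_DISTANCE` then simply discards candidates whose distance exceeds the bound, while closer matches sort first.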
diff --git a/src/librustdoc/html/static/styles/main.css b/src/librustdoc/html/static/styles/main.css
index 4a4ca15..cb19034 100644
--- a/src/librustdoc/html/static/styles/main.css
+++ b/src/librustdoc/html/static/styles/main.css
@@ -104,6 +104,7 @@
 .content .highlighted.method,
 .content .highlighted.tymethod { background-color: #c6afb3; }
 .content .highlighted.type { background-color: #ffc891; }
+.content .highlighted.foreigntype { background-color: #f5c4ff; }
 .content .highlighted.macro { background-color: #8ce488; }
 .content .highlighted.constant,
 .content .highlighted.static { background-color: #c3e0ff; }
@@ -112,6 +113,7 @@
 .content span.enum, .content a.enum, .block a.current.enum { color: #508157; }
 .content span.struct, .content a.struct, .block a.current.struct { color: #df3600; }
 .content span.type, .content a.type, .block a.current.type { color: #ba5d00; }
+.content span.foreigntype, .content a.foreigntype, .block a.current.foreigntype { color: #cd00e2; }
 .content span.macro, .content a.macro, .block a.current.macro { color: #068000; }
 .content span.union, .content a.union, .block a.current.union { color: #767b27; }
 .content span.constant, .content a.constant, .block a.current.constant,
diff --git a/src/libstd_unicode/unicode.py b/src/libstd_unicode/unicode.py
index 1fac859..df79760 100755
--- a/src/libstd_unicode/unicode.py
+++ b/src/libstd_unicode/unicode.py
@@ -89,7 +89,7 @@
         if is_surrogate(cp):
             continue
         if range_start >= 0:
-            for i in xrange(range_start, cp):
+            for i in range(range_start, cp):
                 udict[i] = data
             range_start = -1
         if data[1].endswith(", First>"):
@@ -382,7 +382,7 @@
     root = []
     childmap = {}
     child_data = []
-    for i in range(len(rawdata) / chunksize):
+    for i in range(len(rawdata) // chunksize):
         data = rawdata[i * chunksize: (i + 1) * chunksize]
         child = '|'.join(map(str, data))
         if child not in childmap:
@@ -400,7 +400,7 @@
 
     # convert to bitmap chunks of 64 bits each
     chunks = []
-    for i in range(0x110000 / CHUNK):
+    for i in range(0x110000 // CHUNK):
         chunk = 0
         for j in range(64):
             if rawdata[i * 64 + j]:
@@ -412,12 +412,12 @@
         pub_string = "pub "
     f.write("    %sconst %s: &'static super::BoolTrie = &super::BoolTrie {\n" % (pub_string, name))
     f.write("        r1: [\n")
-    data = ','.join('0x%016x' % chunk for chunk in chunks[0:0x800 / CHUNK])
+    data = ','.join('0x%016x' % chunk for chunk in chunks[0:0x800 // CHUNK])
     format_table_content(f, data, 12)
     f.write("\n        ],\n")
 
     # 0x800..0x10000 trie
-    (r2, r3) = compute_trie(chunks[0x800 / CHUNK : 0x10000 / CHUNK], 64 / CHUNK)
+    (r2, r3) = compute_trie(chunks[0x800 // CHUNK : 0x10000 // CHUNK], 64 // CHUNK)
     f.write("        r2: [\n")
     data = ','.join(str(node) for node in r2)
     format_table_content(f, data, 12)
@@ -428,7 +428,7 @@
     f.write("\n        ],\n")
 
     # 0x10000..0x110000 trie
-    (mid, r6) = compute_trie(chunks[0x10000 / CHUNK : 0x110000 / CHUNK], 64 / CHUNK)
+    (mid, r6) = compute_trie(chunks[0x10000 // CHUNK : 0x110000 // CHUNK], 64 // CHUNK)
     (r4, r5) = compute_trie(mid, 64)
     f.write("        r4: [\n")
     data = ','.join(str(node) for node in r4)
@@ -446,14 +446,14 @@
     f.write("    };\n\n")
 
 def emit_small_bool_trie(f, name, t_data, is_pub=True):
-    last_chunk = max(int(hi / 64) for (lo, hi) in t_data)
+    last_chunk = max(hi // 64 for (lo, hi) in t_data)
     n_chunks = last_chunk + 1
     chunks = [0] * n_chunks
     for (lo, hi) in t_data:
         for cp in range(lo, hi + 1):
-            if int(cp / 64) >= len(chunks):
-                print(cp, int(cp / 64), len(chunks), lo, hi)
-            chunks[int(cp / 64)] |= 1 << (cp & 63)
+            if cp // 64 >= len(chunks):
+                print(cp, cp // 64, len(chunks), lo, hi)
+            chunks[cp // 64] |= 1 << (cp & 63)
 
     pub_string = ""
     if is_pub:
@@ -519,32 +519,29 @@
     pfun = lambda x: "(%s,[%s,%s,%s])" % (
         escape_char(x[0]), escape_char(x[1][0]), escape_char(x[1][1]), escape_char(x[1][2]))
     emit_table(f, "to_lowercase_table",
-        sorted(to_lower.iteritems(), key=operator.itemgetter(0)),
+        sorted(to_lower.items(), key=operator.itemgetter(0)),
         is_pub=False, t_type = t_type, pfun=pfun)
     emit_table(f, "to_uppercase_table",
-        sorted(to_upper.iteritems(), key=operator.itemgetter(0)),
+        sorted(to_upper.items(), key=operator.itemgetter(0)),
         is_pub=False, t_type = t_type, pfun=pfun)
     f.write("}\n\n")
 
 def emit_norm_module(f, canon, compat, combine, norm_props):
-    canon_keys = canon.keys()
-    canon_keys.sort()
+    canon_keys = sorted(canon.keys())
 
-    compat_keys = compat.keys()
-    compat_keys.sort()
+    compat_keys = sorted(compat.keys())
 
     canon_comp = {}
     comp_exclusions = norm_props["Full_Composition_Exclusion"]
     for char in canon_keys:
-        if True in map(lambda (lo, hi): lo <= char <= hi, comp_exclusions):
+        if any(lo <= char <= hi for lo, hi in comp_exclusions):
             continue
         decomp = canon[char]
         if len(decomp) == 2:
-            if not canon_comp.has_key(decomp[0]):
+            if decomp[0] not in canon_comp:
                 canon_comp[decomp[0]] = []
             canon_comp[decomp[0]].append( (decomp[1], char) )
-    canon_comp_keys = canon_comp.keys()
-    canon_comp_keys.sort()
+    canon_comp_keys = sorted(canon_comp.keys())
 
 if __name__ == "__main__":
     r = "tables.rs"
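The `unicode.py` hunks above are a Python 2→3 migration: `xrange`, `dict.iteritems()`, and `dict.has_key()` no longer exist, and `/` on integers now yields a float, which breaks `range()` bounds and slice indices unless floor division `//` is used. A small generic Python 3 illustration of the semantics that motivated these edits (not tied to the script itself):

```python
CHUNK = 64

# Python 3 true division returns a float even for an exact integer
# quotient, so range(0x110000 / CHUNK) would raise TypeError;
# floor division keeps the result an int.
assert 0x110000 / CHUNK == 17408.0
assert isinstance(0x110000 / CHUNK, float)
assert 0x110000 // CHUNK == 17408

# dict iteration/membership use items() and `in` instead of
# iteritems() and has_key(), and sorted(d.keys()) replaces the
# two-step keys()-then-sort() idiom.
d = {2: "b", 1: "a"}
assert sorted(d.keys()) == [1, 2]
assert 1 in d
assert sorted(d.items()) == [(1, "a"), (2, "b")]
```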
diff --git a/src/libsyntax/ext/expand.rs b/src/libsyntax/ext/expand.rs
index 614c4a1..491dbed 100644
--- a/src/libsyntax/ext/expand.rs
+++ b/src/libsyntax/ext/expand.rs
@@ -29,6 +29,7 @@
 use symbol::Symbol;
 use symbol::keywords;
 use syntax_pos::{Span, DUMMY_SP};
+use syntax_pos::hygiene::ExpnFormat;
 use tokenstream::{TokenStream, TokenTree};
 use util::small_vector::SmallVector;
 use visit::Visitor;
@@ -151,6 +152,26 @@
     }
 }
 
+fn macro_bang_format(path: &ast::Path) -> ExpnFormat {
+    // We don't want to format a path using pretty-printing,
+    // `format!("{}", path)`, because that tries to insert
+    // line-breaks and is slow.
+    let mut path_str = String::with_capacity(64);
+    for (i, segment) in path.segments.iter().enumerate() {
+        if i != 0 {
+            path_str.push_str("::");
+        }
+
+        if segment.identifier.name != keywords::CrateRoot.name() &&
+            segment.identifier.name != keywords::DollarCrate.name()
+        {
+            path_str.push_str(&segment.identifier.name.as_str())
+        }
+    }
+
+    MacroBang(Symbol::intern(&path_str))
+}
+
 pub struct Invocation {
     pub kind: InvocationKind,
     expansion_kind: ExpansionKind,
@@ -517,7 +538,7 @@
             mark.set_expn_info(ExpnInfo {
                 call_site: span,
                 callee: NameAndSpan {
-                    format: MacroBang(Symbol::intern(&format!("{}", path))),
+                    format: macro_bang_format(path),
                     span: def_site_span,
                     allow_internal_unstable,
                     allow_internal_unsafe,
@@ -564,7 +585,7 @@
                 invoc.expansion_data.mark.set_expn_info(ExpnInfo {
                     call_site: span,
                     callee: NameAndSpan {
-                        format: MacroBang(Symbol::intern(&format!("{}", path))),
+                        format: macro_bang_format(path),
                         span: tt_span,
                         allow_internal_unstable,
                         allow_internal_unsafe: false,
@@ -600,7 +621,7 @@
                 invoc.expansion_data.mark.set_expn_info(ExpnInfo {
                     call_site: span,
                     callee: NameAndSpan {
-                        format: MacroBang(Symbol::intern(&format!("{}", path))),
+                        format: macro_bang_format(path),
                         // FIXME procedural macros do not have proper span info
                         // yet, when they do, we should use it here.
                         span: None,
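The new `macro_bang_format` above replaces `format!("{}", path)` because the pretty-printer tries to insert line breaks and is slow; instead it joins the path segments with `::`, emitting nothing for the invisible `CrateRoot`/`DollarCrate` segments. A rough Python model of that joining logic (the `"{{root}}"`/`"$crate"` segment names here are hypothetical stand-ins for the keyword segments):

```python
def macro_bang_name(segments, special=("{{root}}", "$crate")):
    # Join path segments with "::". Mirroring the Rust code, the
    # separator is pushed for every segment after the first, before
    # the special-segment check, so a leading root segment produces
    # a leading "::" rather than being dropped entirely.
    out = []
    for i, seg in enumerate(segments):
        if i != 0:
            out.append("::")
        if seg not in special:
            out.append(seg)
    return "".join(out)
```

For example, a plain `["std", "vec"]` path yields `"std::vec"`, while a path rooted at the crate root keeps its leading `"::"`.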
diff --git a/src/libsyntax/json.rs b/src/libsyntax/json.rs
index 6564046..e739c6d 100644
--- a/src/libsyntax/json.rs
+++ b/src/libsyntax/json.rs
@@ -22,7 +22,7 @@
 use codemap::{CodeMap, FilePathMapping};
 use syntax_pos::{self, MacroBacktrace, Span, SpanLabel, MultiSpan};
 use errors::registry::Registry;
-use errors::{DiagnosticBuilder, SubDiagnostic, RenderSpan, CodeSuggestion, CodeMapper};
+use errors::{DiagnosticBuilder, SubDiagnostic, CodeSuggestion, CodeMapper};
 use errors::DiagnosticId;
 use errors::emitter::Emitter;
 
@@ -188,7 +188,7 @@
             code: None,
             level: db.level.to_str(),
             spans: db.render_span.as_ref()
-                     .map(|sp| DiagnosticSpan::from_render_span(sp, je))
+                     .map(|sp| DiagnosticSpan::from_multispan(sp, je))
                      .unwrap_or_else(|| DiagnosticSpan::from_multispan(&db.span, je)),
             children: vec![],
             rendered: None,
@@ -300,16 +300,6 @@
                       })
                       .collect()
     }
-
-    fn from_render_span(rsp: &RenderSpan, je: &JsonEmitter) -> Vec<DiagnosticSpan> {
-        match *rsp {
-            RenderSpan::FullSpan(ref msp) =>
-                DiagnosticSpan::from_multispan(msp, je),
-            // regular diagnostics don't produce this anymore
-            // FIXME(oli_obk): remove it entirely
-            RenderSpan::Suggestion(_) => unreachable!(),
-        }
-    }
 }
 
 impl DiagnosticSpanLine {
diff --git a/src/llvm b/src/llvm
index b48f77c..51f104b 160000
--- a/src/llvm
+++ b/src/llvm
@@ -1 +1 @@
-Subproject commit b48f77c5ed570001957408f4adeec88ae010c4d9
+Subproject commit 51f104bf1cc6c3a588a11c90a3b4a4a18ee080ac
diff --git a/src/rustllvm/RustWrapper.cpp b/src/rustllvm/RustWrapper.cpp
index 20ea8d7..9aa1725 100644
--- a/src/rustllvm/RustWrapper.cpp
+++ b/src/rustllvm/RustWrapper.cpp
@@ -178,6 +178,22 @@
 #endif
 }
 
+extern "C" void LLVMRustAddAlignmentCallSiteAttr(LLVMValueRef Instr,
+                                                 unsigned Index,
+                                                 uint32_t Bytes) {
+  CallSite Call = CallSite(unwrap<Instruction>(Instr));
+  AttrBuilder B;
+  B.addAlignmentAttr(Bytes);
+#if LLVM_VERSION_GE(5, 0)
+  Call.setAttributes(Call.getAttributes().addAttributes(
+      Call->getContext(), Index, B));
+#else
+  Call.setAttributes(Call.getAttributes().addAttributes(
+      Call->getContext(), Index,
+      AttributeSet::get(Call->getContext(), Index, B)));
+#endif
+}
+
 extern "C" void LLVMRustAddDereferenceableCallSiteAttr(LLVMValueRef Instr,
                                                        unsigned Index,
                                                        uint64_t Bytes) {
@@ -194,6 +210,22 @@
 #endif
 }
 
+extern "C" void LLVMRustAddDereferenceableOrNullCallSiteAttr(LLVMValueRef Instr,
+                                                             unsigned Index,
+                                                             uint64_t Bytes) {
+  CallSite Call = CallSite(unwrap<Instruction>(Instr));
+  AttrBuilder B;
+  B.addDereferenceableOrNullAttr(Bytes);
+#if LLVM_VERSION_GE(5, 0)
+  Call.setAttributes(Call.getAttributes().addAttributes(
+      Call->getContext(), Index, B));
+#else
+  Call.setAttributes(Call.getAttributes().addAttributes(
+      Call->getContext(), Index,
+      AttributeSet::get(Call->getContext(), Index, B)));
+#endif
+}
+
 extern "C" void LLVMRustAddFunctionAttribute(LLVMValueRef Fn, unsigned Index,
                                              LLVMRustAttribute RustAttr) {
   Function *A = unwrap<Function>(Fn);
@@ -206,6 +238,19 @@
 #endif
 }
 
+extern "C" void LLVMRustAddAlignmentAttr(LLVMValueRef Fn,
+                                         unsigned Index,
+                                         uint32_t Bytes) {
+  Function *A = unwrap<Function>(Fn);
+  AttrBuilder B;
+  B.addAlignmentAttr(Bytes);
+#if LLVM_VERSION_GE(5, 0)
+  A->addAttributes(Index, B);
+#else
+  A->addAttributes(Index, AttributeSet::get(A->getContext(), Index, B));
+#endif
+}
+
 extern "C" void LLVMRustAddDereferenceableAttr(LLVMValueRef Fn, unsigned Index,
                                                uint64_t Bytes) {
   Function *A = unwrap<Function>(Fn);
@@ -218,6 +263,19 @@
 #endif
 }
 
+extern "C" void LLVMRustAddDereferenceableOrNullAttr(LLVMValueRef Fn,
+                                                     unsigned Index,
+                                                     uint64_t Bytes) {
+  Function *A = unwrap<Function>(Fn);
+  AttrBuilder B;
+  B.addDereferenceableOrNullAttr(Bytes);
+#if LLVM_VERSION_GE(5, 0)
+  A->addAttributes(Index, B);
+#else
+  A->addAttributes(Index, AttributeSet::get(A->getContext(), Index, B));
+#endif
+}
+
 extern "C" void LLVMRustAddFunctionAttrStringValue(LLVMValueRef Fn,
                                                    unsigned Index,
                                                    const char *Name,
@@ -257,21 +315,18 @@
 
 extern "C" LLVMValueRef
 LLVMRustBuildAtomicLoad(LLVMBuilderRef B, LLVMValueRef Source, const char *Name,
-                        LLVMAtomicOrdering Order, unsigned Alignment) {
+                        LLVMAtomicOrdering Order) {
   LoadInst *LI = new LoadInst(unwrap(Source), 0);
   LI->setAtomic(fromRust(Order));
-  LI->setAlignment(Alignment);
   return wrap(unwrap(B)->Insert(LI, Name));
 }
 
 extern "C" LLVMValueRef LLVMRustBuildAtomicStore(LLVMBuilderRef B,
                                                  LLVMValueRef V,
                                                  LLVMValueRef Target,
-                                                 LLVMAtomicOrdering Order,
-                                                 unsigned Alignment) {
+                                                 LLVMAtomicOrdering Order) {
   StoreInst *SI = new StoreInst(unwrap(V), unwrap(Target));
   SI->setAtomic(fromRust(Order));
-  SI->setAlignment(Alignment);
   return wrap(unwrap(B)->Insert(SI));
 }
 
diff --git a/src/test/codegen/adjustments.rs b/src/test/codegen/adjustments.rs
index 342a4f0..2b35d45 100644
--- a/src/test/codegen/adjustments.rs
+++ b/src/test/codegen/adjustments.rs
@@ -9,6 +9,7 @@
 // except according to those terms.
 
 // compile-flags: -C no-prepopulate-passes
+// ignore-tidy-linelength
 
 #![crate_type = "lib"]
 
@@ -23,9 +24,9 @@
 pub fn no_op_slice_adjustment(x: &[u8]) -> &[u8] {
     // We used to generate an extra alloca and memcpy for the block's trailing expression value, so
     // check that we copy directly to the return value slot
-// CHECK: %0 = insertvalue { i8*, [[USIZE]] } undef, i8* %x.ptr, 0
-// CHECK: %1 = insertvalue { i8*, [[USIZE]] } %0, [[USIZE]] %x.meta, 1
-// CHECK: ret { i8*, [[USIZE]] } %1
+// CHECK: %0 = insertvalue { [0 x i8]*, [[USIZE]] } undef, [0 x i8]* %x.0, 0
+// CHECK: %1 = insertvalue { [0 x i8]*, [[USIZE]] } %0, [[USIZE]] %x.1, 1
+// CHECK: ret { [0 x i8]*, [[USIZE]] } %1
     { x }
 }
 
diff --git a/src/test/codegen/consts.rs b/src/test/codegen/consts.rs
index 33b4221..a75b8f3 100644
--- a/src/test/codegen/consts.rs
+++ b/src/test/codegen/consts.rs
@@ -54,7 +54,7 @@
 #[no_mangle]
 pub fn low_align_const() -> E<i16, [i16; 3]> {
 // Check that low_align_const and high_align_const use the same constant
-// CHECK: load {{.*}} bitcast ({ i16, i16, [4 x i8] }** [[LOW_HIGH_REF]]
+// CHECK: load {{.*}} bitcast ({ i16, [0 x i8], i16, [4 x i8] }** [[LOW_HIGH_REF]]
     *&E::A(0)
 }
 
@@ -62,6 +62,6 @@
 #[no_mangle]
 pub fn high_align_const() -> E<i16, i32> {
 // Check that low_align_const and high_align_const use the same constant
-// CHECK: load {{.*}} bitcast ({ i16, i16, [4 x i8] }** [[LOW_HIGH_REF]]
+// CHECK: load {{.*}} bitcast ({ i16, [0 x i8], i16, [4 x i8] }** [[LOW_HIGH_REF]]
     *&E::A(0)
 }
diff --git a/src/test/codegen/function-arguments.rs b/src/test/codegen/function-arguments.rs
index 29e2840..f8945a6 100644
--- a/src/test/codegen/function-arguments.rs
+++ b/src/test/codegen/function-arguments.rs
@@ -9,12 +9,13 @@
 // except according to those terms.
 
 // compile-flags: -C no-prepopulate-passes
+// ignore-tidy-linelength
 
 #![crate_type = "lib"]
 #![feature(custom_attribute)]
 
 pub struct S {
-  _field: [i64; 4],
+  _field: [i32; 8],
 }
 
 pub struct UnsafeInner {
@@ -45,13 +46,13 @@
 pub fn named_borrow<'r>(_: &'r i32) {
 }
 
-// CHECK: @unsafe_borrow(%UnsafeInner* dereferenceable(2) %arg0)
+// CHECK: @unsafe_borrow(i16* dereferenceable(2) %arg0)
 // unsafe interior means this isn't actually readonly and there may be aliases ...
 #[no_mangle]
 pub fn unsafe_borrow(_: &UnsafeInner) {
 }
 
-// CHECK: @mutable_unsafe_borrow(%UnsafeInner* dereferenceable(2) %arg0)
+// CHECK: @mutable_unsafe_borrow(i16* dereferenceable(2) %arg0)
 // ... unless this is a mutable borrow, those never alias
 // ... except that there's this LLVM bug that forces us to not use noalias, see #29485
 #[no_mangle]
@@ -76,7 +77,7 @@
 pub fn borrowed_struct(_: &S) {
 }
 
-// CHECK: noalias dereferenceable(4) i32* @_box(i32* noalias dereferenceable(4) %x)
+// CHECK: noalias align 4 dereferenceable(4) i32* @_box(i32* noalias dereferenceable(4) %x)
 #[no_mangle]
 pub fn _box(x: Box<i32>) -> Box<i32> {
   x
@@ -86,7 +87,7 @@
 #[no_mangle]
 pub fn struct_return() -> S {
   S {
-    _field: [0, 0, 0, 0]
+    _field: [0, 0, 0, 0, 0, 0, 0, 0]
   }
 }
 
@@ -96,43 +97,43 @@
 pub fn helper(_: usize) {
 }
 
-// CHECK: @slice(i8* noalias nonnull readonly %arg0.ptr, [[USIZE]] %arg0.meta)
+// CHECK: @slice([0 x i8]* noalias nonnull readonly %arg0.0, [[USIZE]] %arg0.1)
 // FIXME #25759 This should also have `nocapture`
 #[no_mangle]
 pub fn slice(_: &[u8]) {
 }
 
-// CHECK: @mutable_slice(i8* nonnull %arg0.ptr, [[USIZE]] %arg0.meta)
+// CHECK: @mutable_slice([0 x i8]* nonnull %arg0.0, [[USIZE]] %arg0.1)
 // FIXME #25759 This should also have `nocapture`
 // ... there's this LLVM bug that forces us to not use noalias, see #29485
 #[no_mangle]
 pub fn mutable_slice(_: &mut [u8]) {
 }
 
-// CHECK: @unsafe_slice(%UnsafeInner* nonnull %arg0.ptr, [[USIZE]] %arg0.meta)
+// CHECK: @unsafe_slice([0 x i16]* nonnull %arg0.0, [[USIZE]] %arg0.1)
 // unsafe interior means this isn't actually readonly and there may be aliases ...
 #[no_mangle]
 pub fn unsafe_slice(_: &[UnsafeInner]) {
 }
 
-// CHECK: @str(i8* noalias nonnull readonly %arg0.ptr, [[USIZE]] %arg0.meta)
+// CHECK: @str([0 x i8]* noalias nonnull readonly %arg0.0, [[USIZE]] %arg0.1)
 // FIXME #25759 This should also have `nocapture`
 #[no_mangle]
 pub fn str(_: &[u8]) {
 }
 
-// CHECK: @trait_borrow({}* nonnull, {}* noalias nonnull readonly)
+// CHECK: @trait_borrow(%"core::ops::drop::Drop"* nonnull %arg0.0, {}* noalias nonnull readonly %arg0.1)
 // FIXME #25759 This should also have `nocapture`
 #[no_mangle]
 pub fn trait_borrow(_: &Drop) {
 }
 
-// CHECK: @trait_box({}* noalias nonnull, {}* noalias nonnull readonly)
+// CHECK: @trait_box(%"core::ops::drop::Drop"* noalias nonnull, {}* noalias nonnull readonly)
 #[no_mangle]
 pub fn trait_box(_: Box<Drop>) {
 }
 
-// CHECK: { i16*, [[USIZE]] } @return_slice(i16* noalias nonnull readonly %x.ptr, [[USIZE]] %x.meta)
+// CHECK: { [0 x i16]*, [[USIZE]] } @return_slice([0 x i16]* noalias nonnull readonly %x.0, [[USIZE]] %x.1)
 #[no_mangle]
 pub fn return_slice(x: &[u16]) -> &[u16] {
   x
diff --git a/src/test/codegen/issue-32031.rs b/src/test/codegen/issue-32031.rs
index 5d3ccbf..e5ec173 100644
--- a/src/test/codegen/issue-32031.rs
+++ b/src/test/codegen/issue-32031.rs
@@ -15,7 +15,7 @@
 #[no_mangle]
 pub struct F32(f32);
 
-// CHECK: define float @add_newtype_f32(float, float)
+// CHECK: define float @add_newtype_f32(float %a, float %b)
 #[inline(never)]
 #[no_mangle]
 pub fn add_newtype_f32(a: F32, b: F32) -> F32 {
@@ -25,7 +25,7 @@
 #[no_mangle]
 pub struct F64(f64);
 
-// CHECK: define double @add_newtype_f64(double, double)
+// CHECK: define double @add_newtype_f64(double %a, double %b)
 #[inline(never)]
 #[no_mangle]
 pub fn add_newtype_f64(a: F64, b: F64) -> F64 {
diff --git a/src/test/codegen/link_section.rs b/src/test/codegen/link_section.rs
index 98214dc..1879002 100644
--- a/src/test/codegen/link_section.rs
+++ b/src/test/codegen/link_section.rs
@@ -22,12 +22,12 @@
     B(f32)
 }
 
-// CHECK: @VAR2 = constant {{.*}} { i32 0, i32 666 }, section ".test_two"
+// CHECK: @VAR2 = constant {{.*}}, section ".test_two"
 #[no_mangle]
 #[link_section = ".test_two"]
 pub static VAR2: E = E::A(666);
 
-// CHECK: @VAR3 = constant {{.*}} { i32 1, float 1.000000e+00 }, section ".test_three"
+// CHECK: @VAR3 = constant {{.*}}, section ".test_three"
 #[no_mangle]
 #[link_section = ".test_three"]
 pub static VAR3: E = E::B(1.);
diff --git a/src/test/codegen/match-optimizes-away.rs b/src/test/codegen/match-optimizes-away.rs
index c0f2f64..d7b7793 100644
--- a/src/test/codegen/match-optimizes-away.rs
+++ b/src/test/codegen/match-optimizes-away.rs
@@ -12,11 +12,9 @@
 // compile-flags: -O
 #![crate_type="lib"]
 
-pub enum Three { First, Second, Third }
-use Three::*;
+pub enum Three { A, B, C }
 
-pub enum Four { First, Second, Third, Fourth }
-use Four::*;
+pub enum Four { A, B, C, D }
 
 #[no_mangle]
 pub fn three_valued(x: Three) -> Three {
@@ -24,9 +22,9 @@
     // CHECK-NEXT: {{^.*:$}}
     // CHECK-NEXT: ret i8 %0
     match x {
-        First => First,
-        Second => Second,
-        Third => Third,
+        Three::A => Three::A,
+        Three::B => Three::B,
+        Three::C => Three::C,
     }
 }
 
@@ -36,9 +34,9 @@
     // CHECK-NEXT: {{^.*:$}}
     // CHECK-NEXT: ret i8 %0
     match x {
-        First => First,
-        Second => Second,
-        Third => Third,
-        Fourth => Fourth,
+        Four::A => Four::A,
+        Four::B => Four::B,
+        Four::C => Four::C,
+        Four::D => Four::D,
     }
 }
diff --git a/src/test/codegen/packed.rs b/src/test/codegen/packed.rs
index 99e6e38..dd530cf 100644
--- a/src/test/codegen/packed.rs
+++ b/src/test/codegen/packed.rs
@@ -54,9 +54,6 @@
 // CHECK-LABEL: @pkd_pair
 #[no_mangle]
 pub fn pkd_pair(pair1: &mut PackedPair, pair2: &mut PackedPair) {
-    // CHECK: [[V1:%[a-z0-9]+]] = load i8, i8* %{{.*}}, align 1
-    // CHECK: [[V2:%[a-z0-9]+]] = load i32, i32* %{{.*}}, align 1
-    // CHECK: store i8 [[V1]], i8* {{.*}}, align 1
-    // CHECK: store i32 [[V2]], i32* {{.*}}, align 1
+// CHECK: call void @llvm.memcpy.{{.*}}(i8* %{{.*}}, i8* %{{.*}}, i{{[0-9]+}} 5, i32 1, i1 false)
     *pair2 = *pair1;
 }
diff --git a/src/test/codegen/refs.rs b/src/test/codegen/refs.rs
index 4b713e2..6c00ffa 100644
--- a/src/test/codegen/refs.rs
+++ b/src/test/codegen/refs.rs
@@ -9,6 +9,7 @@
 // except according to those terms.
 
 // compile-flags: -C no-prepopulate-passes
+// ignore-tidy-linelength
 
 #![crate_type = "lib"]
 
@@ -23,10 +24,10 @@
 pub fn ref_dst(s: &[u8]) {
     // We used to generate an extra alloca and memcpy to ref the dst, so check that we copy
     // directly to the alloca for "x"
-// CHECK: [[X0:%[0-9]+]] = getelementptr {{.*}} { i8*, [[USIZE]] }* %x, i32 0, i32 0
-// CHECK: store i8* %s.ptr, i8** [[X0]]
-// CHECK: [[X1:%[0-9]+]] = getelementptr {{.*}} { i8*, [[USIZE]] }* %x, i32 0, i32 1
-// CHECK: store [[USIZE]] %s.meta, [[USIZE]]* [[X1]]
+// CHECK: [[X0:%[0-9]+]] = getelementptr {{.*}} { [0 x i8]*, [[USIZE]] }* %x, i32 0, i32 0
+// CHECK: store [0 x i8]* %s.0, [0 x i8]** [[X0]]
+// CHECK: [[X1:%[0-9]+]] = getelementptr {{.*}} { [0 x i8]*, [[USIZE]] }* %x, i32 0, i32 1
+// CHECK: store [[USIZE]] %s.1, [[USIZE]]* [[X1]]
 
     let x = &*s;
     &x; // keep variable in an alloca
diff --git a/src/test/codegen/slice-init.rs b/src/test/codegen/slice-init.rs
index 569d937..915db49 100644
--- a/src/test/codegen/slice-init.rs
+++ b/src/test/codegen/slice-init.rs
@@ -15,7 +15,7 @@
 // CHECK-LABEL: @zero_sized_elem
 #[no_mangle]
 pub fn zero_sized_elem() {
-    // CHECK-NOT: br label %slice_loop_header{{.*}}
+    // CHECK-NOT: br label %repeat_loop_header{{.*}}
     // CHECK-NOT: call void @llvm.memset.p0i8
     let x = [(); 4];
     drop(&x);
@@ -24,7 +24,7 @@
 // CHECK-LABEL: @zero_len_array
 #[no_mangle]
 pub fn zero_len_array() {
-    // CHECK-NOT: br label %slice_loop_header{{.*}}
+    // CHECK-NOT: br label %repeat_loop_header{{.*}}
     // CHECK-NOT: call void @llvm.memset.p0i8
     let x = [4; 0];
     drop(&x);
@@ -34,7 +34,7 @@
 #[no_mangle]
 pub fn byte_array() {
     // CHECK: call void @llvm.memset.p0i8.i[[WIDTH:[0-9]+]](i8* {{.*}}, i8 7, i[[WIDTH]] 4
-    // CHECK-NOT: br label %slice_loop_header{{.*}}
+    // CHECK-NOT: br label %repeat_loop_header{{.*}}
     let x = [7u8; 4];
     drop(&x);
 }
@@ -50,7 +50,7 @@
 #[no_mangle]
 pub fn byte_enum_array() {
     // CHECK: call void @llvm.memset.p0i8.i[[WIDTH:[0-9]+]](i8* {{.*}}, i8 {{.*}}, i[[WIDTH]] 4
-    // CHECK-NOT: br label %slice_loop_header{{.*}}
+    // CHECK-NOT: br label %repeat_loop_header{{.*}}
     let x = [Init::Memset; 4];
     drop(&x);
 }
@@ -59,7 +59,7 @@
 #[no_mangle]
 pub fn zeroed_integer_array() {
     // CHECK: call void @llvm.memset.p0i8.i[[WIDTH:[0-9]+]](i8* {{.*}}, i8 0, i[[WIDTH]] 16
-    // CHECK-NOT: br label %slice_loop_header{{.*}}
+    // CHECK-NOT: br label %repeat_loop_header{{.*}}
     let x = [0u32; 4];
     drop(&x);
 }
@@ -67,7 +67,7 @@
 // CHECK-LABEL: @nonzero_integer_array
 #[no_mangle]
 pub fn nonzero_integer_array() {
-    // CHECK: br label %slice_loop_header{{.*}}
+    // CHECK: br label %repeat_loop_header{{.*}}
     // CHECK-NOT: call void @llvm.memset.p0i8
     let x = [0x1a_2b_3c_4d_u32; 4];
     drop(&x);
diff --git a/src/test/codegen/vtabletype.rs b/src/test/codegen/vtabletype.rs
new file mode 100644
index 0000000..b646646
--- /dev/null
+++ b/src/test/codegen/vtabletype.rs
@@ -0,0 +1,33 @@
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// This test depends on a patch that was committed to upstream LLVM
+// after 5.0, then backported to the Rust LLVM fork.
+
+// ignore-tidy-linelength
+// ignore-windows
+// ignore-macos
+// min-system-llvm-version 5.1
+
+// compile-flags: -g -C no-prepopulate-passes
+
+// CHECK-LABEL: @main
+// CHECK: {{.*}}DICompositeType{{.*}}name: "vtable",{{.*}}vtableHolder:{{.*}}
+
+pub trait T {
+}
+
+impl T for f64 {
+}
+
+pub fn main() {
+    let d = 23.0f64;
+    let td = &d as &T;
+}
diff --git a/src/test/compile-fail/E0084.rs b/src/test/compile-fail/E0084.rs
index c7c5662..d19eed7 100644
--- a/src/test/compile-fail/E0084.rs
+++ b/src/test/compile-fail/E0084.rs
@@ -8,10 +8,8 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-#[repr(i32)]
-enum Foo {}
-//~^ ERROR E0084
-//~| unsupported enum representation
+#[repr(i32)] //~ ERROR: E0084
+enum Foo {} //~ NOTE: zero-variant enum
 
 fn main() {
 }
diff --git a/src/test/compile-fail/E0517.rs b/src/test/compile-fail/E0517.rs
index b79cb2c..7feda67 100644
--- a/src/test/compile-fail/E0517.rs
+++ b/src/test/compile-fail/E0517.rs
@@ -8,21 +8,17 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-#[repr(C)] //~ ERROR E0517
-           //~| requires a struct, enum or union
-type Foo = u8;
+#[repr(C)] //~ ERROR: E0517
+type Foo = u8; //~ NOTE: not a struct, enum or union
 
-#[repr(packed)] //~ ERROR E0517
-                //~| requires a struct
-enum Foo2 {Bar, Baz}
+#[repr(packed)] //~ ERROR: E0517
+enum Foo2 {Bar, Baz} //~ NOTE: not a struct
 
-#[repr(u8)] //~ ERROR E0517
-            //~| requires an enum
-struct Foo3 {bar: bool, baz: bool}
+#[repr(u8)] //~ ERROR: E0517
+struct Foo3 {bar: bool, baz: bool} //~ NOTE: not an enum
 
-#[repr(C)] //~ ERROR E0517
-           //~| requires a struct, enum or union
-impl Foo3 {
+#[repr(C)] //~ ERROR: E0517
+impl Foo3 { //~ NOTE: not a struct, enum or union
 }
 
 fn main() {
diff --git a/src/test/compile-fail/E0518.rs b/src/test/compile-fail/E0518.rs
index f9494e0..63d40db 100644
--- a/src/test/compile-fail/E0518.rs
+++ b/src/test/compile-fail/E0518.rs
@@ -8,13 +8,11 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-#[inline(always)] //~ ERROR E0518
-                  //~| requires a function
-struct Foo;
+#[inline(always)] //~ ERROR: E0518
+struct Foo;       //~ NOTE: not a function
 
-#[inline(never)] //~ ERROR E0518
-                 //~| requires a function
-impl Foo {
+#[inline(never)] //~ ERROR: E0518
+impl Foo {       //~ NOTE: not a function
 }
 
 fn main() {
diff --git a/src/test/compile-fail/borrowck/borrowck-describe-lvalue.rs b/src/test/compile-fail/borrowck/borrowck-describe-lvalue.rs
index 06d6124..d1cf08a 100644
--- a/src/test/compile-fail/borrowck/borrowck-describe-lvalue.rs
+++ b/src/test/compile-fail/borrowck/borrowck-describe-lvalue.rs
@@ -46,12 +46,6 @@
     }
 }
 
-static mut sfoo : Foo = Foo{x: 23 };
-static mut sbar : Bar = Bar(23);
-static mut stuple : (i32, i32) = (24, 25);
-static mut senum : Baz = Baz::X(26);
-static mut sunion : U = U { a: 0 };
-
 fn main() {
     // Local and field from struct
     {
@@ -96,34 +90,6 @@
              //[mir]~^ ERROR cannot use `u.a` because it was mutably borrowed (Ast)
              //[mir]~| ERROR cannot use `u.a` because it was mutably borrowed (Mir)
     }
-    // Static and field from struct
-    unsafe {
-        let _x = sfoo.x();
-        sfoo.x; //[mir]~ ERROR cannot use `sfoo.x` because it was mutably borrowed (Mir)
-    }
-    // Static and field from tuple-struct
-    unsafe {
-        let _0 = sbar.x();
-        sbar.0; //[mir]~ ERROR cannot use `sbar.0` because it was mutably borrowed (Mir)
-    }
-    // Static and field from tuple
-    unsafe {
-        let _0 = &mut stuple.0;
-        stuple.0; //[mir]~ ERROR cannot use `stuple.0` because it was mutably borrowed (Mir)
-    }
-    // Static and field from enum
-    unsafe {
-        let _e0 = senum.x();
-        match senum {
-            Baz::X(value) => value
-            //[mir]~^ ERROR cannot use `senum.0` because it was mutably borrowed (Mir)
-        };
-    }
-    // Static and field from union
-    unsafe {
-        let _ra = &mut sunion.a;
-        sunion.a; //[mir]~ ERROR cannot use `sunion.a` because it was mutably borrowed (Mir)
-    }
     // Deref and field from struct
     {
         let mut f = Box::new(Foo { x: 22 });
diff --git a/src/test/compile-fail/issue-18937.rs b/src/test/compile-fail/issue-18937.rs
index 5996c8e..f7f84e6 100644
--- a/src/test/compile-fail/issue-18937.rs
+++ b/src/test/compile-fail/issue-18937.rs
@@ -27,7 +27,6 @@
 
 impl<'a> A<'a> for B {
     fn foo<F>(&mut self, f: F) //~ ERROR impl has stricter
-        //~^ WARNING future release
         where F: fmt::Debug + 'static,
     {
         self.list.push(Box::new(f));
diff --git a/src/test/ui/issue-26548.rs b/src/test/compile-fail/issue-26548.rs
similarity index 70%
rename from src/test/ui/issue-26548.rs
rename to src/test/compile-fail/issue-26548.rs
index 2591d7b..39c6e97 100644
--- a/src/test/ui/issue-26548.rs
+++ b/src/test/compile-fail/issue-26548.rs
@@ -8,7 +8,10 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-// error-pattern: overflow representing the type
+// error-pattern: unsupported cyclic reference between types/traits detected
+// note-pattern: the cycle begins when computing layout of
+// note-pattern: ...which then requires computing layout of
+// note-pattern: ...which then again requires computing layout of
 
 
 trait Mirror { type It: ?Sized; }
diff --git a/src/test/compile-fail/issue-36082.rs b/src/test/compile-fail/issue-36082.rs
index b46756b..1596d9c 100644
--- a/src/test/compile-fail/issue-36082.rs
+++ b/src/test/compile-fail/issue-36082.rs
@@ -8,6 +8,9 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
+// revisions: ast mir
+//[mir]compile-flags: -Z emit-end-regions -Z borrowck-mir
+
 use std::cell::RefCell;
 
 fn main() {
@@ -16,10 +19,20 @@
     let x = RefCell::new((&mut r,s));
 
     let val: &_ = x.borrow().0;
-    //~^ ERROR borrowed value does not live long enough
-    //~| temporary value dropped here while still borrowed
-    //~| temporary value created here
-    //~| consider using a `let` binding to increase its lifetime
+    //[ast]~^ ERROR borrowed value does not live long enough [E0597]
+    //[ast]~| NOTE temporary value dropped here while still borrowed
+    //[ast]~| NOTE temporary value created here
+    //[ast]~| NOTE consider using a `let` binding to increase its lifetime
+    //[mir]~^^^^^ ERROR borrowed value does not live long enough (Ast) [E0597]
+    //[mir]~| NOTE temporary value dropped here while still borrowed
+    //[mir]~| NOTE temporary value created here
+    //[mir]~| NOTE consider using a `let` binding to increase its lifetime
+    //[mir]~| ERROR borrowed value does not live long enough (Mir) [E0597]
+    //[mir]~| NOTE temporary value dropped here while still borrowed
+    //[mir]~| NOTE temporary value created here
+    //[mir]~| NOTE consider using a `let` binding to increase its lifetime
     println!("{}", val);
 }
-//~^ temporary value needs to live until here
+//[ast]~^ NOTE temporary value needs to live until here
+//[mir]~^^ NOTE temporary value needs to live until here
+//[mir]~| NOTE temporary value needs to live until here
diff --git a/src/test/incremental/hashes/exported_vs_not.rs b/src/test/incremental/hashes/exported_vs_not.rs
index 082bada..d7aba56 100644
--- a/src/test/incremental/hashes/exported_vs_not.rs
+++ b/src/test/incremental/hashes/exported_vs_not.rs
@@ -26,10 +26,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn body_not_exported_to_metadata() -> u32 {
@@ -49,10 +47,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_dirty(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 #[inline]
@@ -73,10 +69,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_dirty(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 #[inline]
diff --git a/src/test/incremental/hashes/if_expressions.rs b/src/test/incremental/hashes/if_expressions.rs
index c39eeab..3cd8308 100644
--- a/src/test/incremental/hashes/if_expressions.rs
+++ b/src/test/incremental/hashes/if_expressions.rs
@@ -36,10 +36,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized,TypeckTables")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn change_condition(x: bool) -> u32 {
@@ -61,10 +59,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn change_then_branch(x: bool) -> u32 {
@@ -88,10 +84,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn change_else_branch(x: bool) -> u32 {
@@ -110,24 +104,22 @@
     let mut ret = 1;
 
     if x {
-        ret += 1;
+        ret = 2;
     }
 
     ret
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,TypeckTables")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn add_else_branch(x: bool) -> u32 {
     let mut ret = 1;
 
     if x {
-        ret += 1;
+        ret = 2;
     } else {
     }
 
@@ -147,10 +139,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized,TypeckTables")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn change_condition_if_let(x: Option<u32>) -> u32 {
@@ -174,10 +164,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized,TypeckTables")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn change_then_branch_if_let(x: Option<u32>) -> u32 {
@@ -201,10 +189,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn change_else_branch_if_let(x: Option<u32>) -> u32 {
@@ -223,24 +209,22 @@
     let mut ret = 1;
 
     if let Some(x) = x {
-        ret += x;
+        ret = x;
     }
 
     ret
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,TypeckTables")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn add_else_branch_if_let(x: Option<u32>) -> u32 {
     let mut ret = 1;
 
     if let Some(x) = x {
-        ret += x;
+        ret = x;
     } else {
     }
 
diff --git a/src/test/incremental/hashes/indexing_expressions.rs b/src/test/incremental/hashes/indexing_expressions.rs
index a12624d..5a81d3a 100644
--- a/src/test/incremental/hashes/indexing_expressions.rs
+++ b/src/test/incremental/hashes/indexing_expressions.rs
@@ -10,7 +10,7 @@
 
 
 // This test case tests the incremental compilation hash (ICH) implementation
-// for closure expression.
+// for indexing expressions.
 
 // The general pattern followed here is: Change one thing between rev1 and rev2
 // and make sure that the hash has changed, then change nothing between rev2 and
diff --git a/src/test/incremental/hashes/panic_exprs.rs b/src/test/incremental/hashes/panic_exprs.rs
index 4a3e4bc..cddd4ae 100644
--- a/src/test/incremental/hashes/panic_exprs.rs
+++ b/src/test/incremental/hashes/panic_exprs.rs
@@ -34,10 +34,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn indexing(slice: &[u8]) -> u8 {
@@ -52,10 +50,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_plus(val: i32) -> i32 {
@@ -70,10 +66,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_minus(val: i32) -> i32 {
@@ -88,10 +82,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_mult(val: i32) -> i32 {
@@ -106,10 +98,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_negation(val: i32) -> i32 {
@@ -124,10 +114,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn division_by_zero(val: i32) -> i32 {
@@ -141,10 +129,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn mod_by_zero(val: i32) -> i32 {
@@ -159,10 +145,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn shift_left(val: i32, shift: usize) -> i32 {
@@ -177,10 +161,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn shift_right(val: i32, shift: usize) -> i32 {
@@ -197,10 +179,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn bitwise(val: i32) -> i32 {
@@ -215,10 +195,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn logical(val1: bool, val2: bool, val3: bool) -> bool {
diff --git a/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs b/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs
index b3fc8e2..01fb9e9 100644
--- a/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs
+++ b/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs
@@ -41,10 +41,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn indexing(slice: &[u8]) -> u8 {
@@ -60,10 +58,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 #[rustc_inherit_overflow_checks]
@@ -80,10 +76,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 #[rustc_inherit_overflow_checks]
@@ -100,10 +94,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 #[rustc_inherit_overflow_checks]
@@ -120,10 +112,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 #[rustc_inherit_overflow_checks]
@@ -139,10 +129,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn division_by_zero(val: i32) -> i32 {
@@ -156,10 +144,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2", except="HirBody,MirValidated,MirOptimized")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn mod_by_zero(val: i32) -> i32 {
@@ -177,10 +163,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn bitwise(val: i32) -> i32 {
@@ -195,10 +179,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn logical(val1: bool, val2: bool, val3: bool) -> bool {
@@ -212,10 +194,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_plus(val: i32) -> i32 {
@@ -230,10 +210,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_minus(val: i32) -> i32 {
@@ -248,10 +226,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_mult(val: i32) -> i32 {
@@ -266,10 +242,8 @@
 }
 
 #[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(cfg="cfail2")]
+#[rustc_clean(cfg="cfail3")]
 #[rustc_metadata_clean(cfg="cfail2")]
 #[rustc_metadata_clean(cfg="cfail3")]
 pub fn arithmetic_overflow_negation(val: i32) -> i32 {
diff --git a/src/test/mir-opt/nll/liveness-call-subtlety.rs b/src/test/mir-opt/nll/liveness-call-subtlety.rs
index 2de3e7d..59a1d48 100644
--- a/src/test/mir-opt/nll/liveness-call-subtlety.rs
+++ b/src/test/mir-opt/nll/liveness-call-subtlety.rs
@@ -31,21 +31,15 @@
 //            | Live variables at bb0[0]: []
 //        StorageLive(_1);
 //            | Live variables at bb0[1]: []
-//        StorageLive(_2);
-//            | Live variables at bb0[2]: []
-//        _2 = const 22usize;
-//            | Live variables at bb0[3]: [_2]
-//        _1 = const <std::boxed::Box<T>>::new(_2) -> bb1;
+//        _1 = const <std::boxed::Box<T>>::new(const 22usize) -> bb1;
 //    }
 // END rustc.main.nll.0.mir
 // START rustc.main.nll.0.mir
 //    | Live variables on entry to bb1: [_1 (drop)]
 //    bb1: {
 //            | Live variables at bb1[0]: [_1 (drop)]
-//        StorageDead(_2);
+//        StorageLive(_2);
 //            | Live variables at bb1[1]: [_1 (drop)]
-//        StorageLive(_3);
-//            | Live variables at bb1[2]: [_1 (drop)]
-//        _3 = const can_panic() -> [return: bb2, unwind: bb4];
+//        _2 = const can_panic() -> [return: bb2, unwind: bb4];
 //    }
 // END rustc.main.nll.0.mir
diff --git a/src/test/mir-opt/nll/named-lifetimes-basic.rs b/src/test/mir-opt/nll/named-lifetimes-basic.rs
index e3f67d8..34d0482 100644
--- a/src/test/mir-opt/nll/named-lifetimes-basic.rs
+++ b/src/test/mir-opt/nll/named-lifetimes-basic.rs
@@ -26,9 +26,9 @@
 
 // END RUST SOURCE
 // START rustc.use_x.nll.0.mir
-// | '_#0r: {bb0[0], bb0[1], '_#0r}
-// | '_#1r: {bb0[0], bb0[1], '_#0r, '_#1r}
-// | '_#2r: {bb0[0], bb0[1], '_#2r}
-// ...
-// fn use_x(_1: &'_#0r mut i32, _2: &'_#1r u32, _3: &'_#0r u32, _4: &'_#2r u32) -> bool {
+// | '_#0r: {bb0[0], bb0[1], '_#0r, '_#1r, '_#2r, '_#3r}
+// | '_#1r: {bb0[0], bb0[1], '_#1r}
+// | '_#2r: {bb0[0], bb0[1], '_#1r, '_#2r}
+// | '_#3r: {bb0[0], bb0[1], '_#3r}
+// fn use_x(_1: &'_#1r mut i32, _2: &'_#2r u32, _3: &'_#1r u32, _4: &'_#3r u32) -> bool {
 // END rustc.use_x.nll.0.mir
diff --git a/src/test/mir-opt/nll/reborrow-basic.rs b/src/test/mir-opt/nll/reborrow-basic.rs
index c3df0c8..f51e839 100644
--- a/src/test/mir-opt/nll/reborrow-basic.rs
+++ b/src/test/mir-opt/nll/reborrow-basic.rs
@@ -28,12 +28,12 @@
 
 // END RUST SOURCE
 // START rustc.main.nll.0.mir
-// | '_#5r: {bb0[6], bb0[7], bb0[8], bb0[9], bb0[10], bb0[11], bb0[12], bb0[13], bb0[14]}
+// | '_#6r: {bb0[6], bb0[7], bb0[8], bb0[9], bb0[10], bb0[11], bb0[12], bb0[13], bb0[14]}
 // ...
-// | '_#7r: {bb0[11], bb0[12], bb0[13], bb0[14]}
+// | '_#8r: {bb0[11], bb0[12], bb0[13], bb0[14]}
 // END rustc.main.nll.0.mir
 // START rustc.main.nll.0.mir
-// let _2: &'_#5r mut i32;
+// let _2: &'_#6r mut i32;
 // ...
-// let _4: &'_#7r mut i32;
+// let _4: &'_#8r mut i32;
 // END rustc.main.nll.0.mir
diff --git a/src/test/mir-opt/nll/region-liveness-basic.rs b/src/test/mir-opt/nll/region-liveness-basic.rs
index f7276cb..ae059fe 100644
--- a/src/test/mir-opt/nll/region-liveness-basic.rs
+++ b/src/test/mir-opt/nll/region-liveness-basic.rs
@@ -31,15 +31,15 @@
 
 // END RUST SOURCE
 // START rustc.main.nll.0.mir
-// | '_#0r: {bb1[1], bb2[0], bb2[1]}
 // | '_#1r: {bb1[1], bb2[0], bb2[1]}
+// | '_#2r: {bb1[1], bb2[0], bb2[1]}
 // ...
-//             let _2: &'_#1r usize;
+//             let _2: &'_#2r usize;
 // END rustc.main.nll.0.mir
 // START rustc.main.nll.0.mir
 //    bb1: {
 //            | Live variables at bb1[0]: [_1, _3]
-//        _2 = &'_#0r _1[_3];
+//        _2 = &'_#1r _1[_3];
 //            | Live variables at bb1[1]: [_2]
 //        switchInt(const true) -> [0u8: bb3, otherwise: bb2];
 //    }
diff --git a/src/test/mir-opt/nll/region-liveness-drop-may-dangle.rs b/src/test/mir-opt/nll/region-liveness-drop-may-dangle.rs
index 6527df2..6d7aa0a 100644
--- a/src/test/mir-opt/nll/region-liveness-drop-may-dangle.rs
+++ b/src/test/mir-opt/nll/region-liveness-drop-may-dangle.rs
@@ -44,5 +44,5 @@
 
 // END RUST SOURCE
 // START rustc.main.nll.0.mir
-// | '_#4r: {bb1[3], bb1[4], bb1[5], bb2[0], bb2[1]}
+// | '_#5r: {bb1[3], bb1[4], bb1[5], bb2[0], bb2[1]}
 // END rustc.main.nll.0.mir
diff --git a/src/test/mir-opt/nll/region-liveness-drop-no-may-dangle.rs b/src/test/mir-opt/nll/region-liveness-drop-no-may-dangle.rs
index aedb3f5..aaeebe7 100644
--- a/src/test/mir-opt/nll/region-liveness-drop-no-may-dangle.rs
+++ b/src/test/mir-opt/nll/region-liveness-drop-no-may-dangle.rs
@@ -46,5 +46,5 @@
 
 // END RUST SOURCE
 // START rustc.main.nll.0.mir
-// | '_#4r: {bb1[3], bb1[4], bb1[5], bb2[0], bb2[1], bb2[2], bb3[0], bb3[1], bb3[2], bb4[0], bb4[1], bb4[2], bb6[0], bb7[0], bb7[1], bb7[2], bb8[0]}
+// | '_#5r: {bb1[3], bb1[4], bb1[5], bb2[0], bb2[1], bb2[2], bb3[0], bb4[0], bb4[1], bb4[2], bb6[0], bb7[0], bb7[1], bb8[0]}
 // END rustc.main.nll.0.mir
diff --git a/src/test/mir-opt/nll/region-liveness-two-disjoint-uses.rs b/src/test/mir-opt/nll/region-liveness-two-disjoint-uses.rs
index 23809d1..5c28746 100644
--- a/src/test/mir-opt/nll/region-liveness-two-disjoint-uses.rs
+++ b/src/test/mir-opt/nll/region-liveness-two-disjoint-uses.rs
@@ -36,14 +36,14 @@
 
 // END RUST SOURCE
 // START rustc.main.nll.0.mir
-// | '_#0r: {bb1[1], bb2[0], bb2[1]}
+// | '_#1r: {bb1[1], bb2[0], bb2[1]}
 // ...
-// | '_#2r: {bb7[2], bb7[3], bb7[4]}
-// | '_#3r: {bb1[1], bb2[0], bb2[1], bb7[2], bb7[3], bb7[4]}
+// | '_#3r: {bb7[2], bb7[3], bb7[4]}
+// | '_#4r: {bb1[1], bb2[0], bb2[1], bb7[2], bb7[3], bb7[4]}
 // ...
-// let mut _2: &'_#3r usize;
+// let mut _2: &'_#4r usize;
 // ...
-// _2 = &'_#0r _1[_3];
+// _2 = &'_#1r _1[_3];
 // ...
-// _2 = &'_#2r (*_11);
+// _2 = &'_#3r (*_10);
 // END rustc.main.nll.0.mir
diff --git a/src/test/mir-opt/nll/region-subtyping-basic.rs b/src/test/mir-opt/nll/region-subtyping-basic.rs
index cada9c7..fb178b4 100644
--- a/src/test/mir-opt/nll/region-subtyping-basic.rs
+++ b/src/test/mir-opt/nll/region-subtyping-basic.rs
@@ -32,16 +32,16 @@
 
 // END RUST SOURCE
 // START rustc.main.nll.0.mir
-// | '_#0r: {bb1[1], bb1[2], bb1[3], bb1[4], bb1[5], bb1[6], bb2[0], bb2[1]}
 // | '_#1r: {bb1[1], bb1[2], bb1[3], bb1[4], bb1[5], bb1[6], bb2[0], bb2[1]}
-// | '_#2r: {bb1[5], bb1[6], bb2[0], bb2[1]}
+// | '_#2r: {bb1[1], bb1[2], bb1[3], bb1[4], bb1[5], bb1[6], bb2[0], bb2[1]}
+// | '_#3r: {bb1[5], bb1[6], bb2[0], bb2[1]}
 // END rustc.main.nll.0.mir
 // START rustc.main.nll.0.mir
-// let _2: &'_#1r usize;
+// let _2: &'_#2r usize;
 // ...
-// let _6: &'_#2r usize;
+// let _6: &'_#3r usize;
 // ...
-// _2 = &'_#0r _1[_3];
+// _2 = &'_#1r _1[_3];
 // ...
 // _7 = _2;
 // ...
diff --git a/src/test/run-make/issue-25581/test.c b/src/test/run-make/issue-25581/test.c
index ab85d2b..5736b17 100644
--- a/src/test/run-make/issue-25581/test.c
+++ b/src/test/run-make/issue-25581/test.c
@@ -2,10 +2,15 @@
 #include <stddef.h>
 #include <stdint.h>
 
-size_t slice_len(uint8_t *data, size_t len) {
-    return len;
+struct ByteSlice {
+        uint8_t *data;
+        size_t len;
+};
+
+size_t slice_len(struct ByteSlice bs) {
+        return bs.len;
 }
 
-uint8_t slice_elem(uint8_t *data, size_t len, size_t idx) {
-    return data[idx];
+uint8_t slice_elem(struct ByteSlice bs, size_t idx) {
+        return bs.data[idx];
 }
diff --git a/src/test/run-pass/borrowck/borrowck-assignment-to-static-mut.rs b/src/test/run-pass/borrowck/borrowck-assignment-to-static-mut.rs
new file mode 100644
index 0000000..b241cb4
--- /dev/null
+++ b/src/test/run-pass/borrowck/borrowck-assignment-to-static-mut.rs
@@ -0,0 +1,23 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test taken from #45641 (https://github.com/rust-lang/rust/issues/45641)
+
+// ignore-tidy-linelength
+// revisions: ast mir
+//[mir]compile-flags: -Z emit-end-regions -Z borrowck-mir
+
+static mut Y: u32 = 0;
+
+unsafe fn should_ok() {
+    Y = 1;
+}
+
+fn main() {}
\ No newline at end of file
diff --git a/src/test/run-pass/borrowck/borrowck-unsafe-static-mutable-borrows.rs b/src/test/run-pass/borrowck/borrowck-unsafe-static-mutable-borrows.rs
new file mode 100644
index 0000000..a4dd7b9
--- /dev/null
+++ b/src/test/run-pass/borrowck/borrowck-unsafe-static-mutable-borrows.rs
@@ -0,0 +1,31 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// revisions: ast mir
+//[mir]compile-flags: -Z emit-end-regions -Z borrowck-mir
+
+// Test file taken from issue 45129 (https://github.com/rust-lang/rust/issues/45129)
+
+struct Foo { x: [usize; 2] }
+
+static mut SFOO: Foo = Foo { x: [23, 32] };
+
+impl Foo {
+    fn x(&mut self) -> &mut usize { &mut self.x[0] }
+}
+
+fn main() {
+    unsafe {
+        let sfoo: *mut Foo = &mut SFOO;
+        let x = (*sfoo).x();
+        (*sfoo).x[1] += 1;
+        *x += 1;
+    }
+}
diff --git a/src/test/run-pass/enum-discrim-manual-sizing.rs b/src/test/run-pass/enum-discrim-manual-sizing.rs
index 3bbc107..8557c06 100644
--- a/src/test/run-pass/enum-discrim-manual-sizing.rs
+++ b/src/test/run-pass/enum-discrim-manual-sizing.rs
@@ -108,6 +108,9 @@
     let array_expected_size = round_up(28, align_of::<Eu64NonCLike<[u32; 5]>>());
     assert_eq!(size_of::<Eu64NonCLike<[u32; 5]>>(), array_expected_size);
     assert_eq!(size_of::<Eu64NonCLike<[u32; 6]>>(), 32);
+
+    assert_eq!(align_of::<Eu32>(), align_of::<u32>());
+    assert_eq!(align_of::<Eu64NonCLike<u8>>(), align_of::<u64>());
 }
 
 // Rounds x up to the next multiple of a
diff --git a/src/test/run-pass/enum-univariant-repr.rs b/src/test/run-pass/enum-univariant-repr.rs
index ef4cc60..17d614b 100644
--- a/src/test/run-pass/enum-univariant-repr.rs
+++ b/src/test/run-pass/enum-univariant-repr.rs
@@ -22,6 +22,11 @@
     Y
 }
 
+#[repr(u8)]
+enum UnivariantWithData {
+    Z(u8),
+}
+
 pub fn main() {
     {
         assert_eq!(4, mem::size_of::<Univariant>());
@@ -44,4 +49,12 @@
         // check it has the same memory layout as u16
         assert_eq!(&[descr, descr, descr], ints);
     }
+
+    {
+        assert_eq!(2, mem::size_of::<UnivariantWithData>());
+
+        match UnivariantWithData::Z(4) {
+            UnivariantWithData::Z(x) => assert_eq!(x, 4),
+        }
+    }
 }
diff --git a/src/test/run-pass/implied-bounds-closure-arg-outlives.rs b/src/test/run-pass/implied-bounds-closure-arg-outlives.rs
new file mode 100644
index 0000000..0e5cc57
--- /dev/null
+++ b/src/test/run-pass/implied-bounds-closure-arg-outlives.rs
@@ -0,0 +1,44 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to handle the relationships between free
+// regions bound in a closure callback.
+
+#[derive(Copy, Clone)]
+struct MyCx<'short, 'long: 'short> {
+    short: &'short u32,
+    long: &'long u32,
+}
+
+impl<'short, 'long> MyCx<'short, 'long> {
+    fn short(self) -> &'short u32 { self.short }
+    fn long(self) -> &'long u32 { self.long }
+    fn set_short(&mut self, v: &'short u32) { self.short = v; }
+}
+
+fn with<F, R>(op: F) -> R
+where
+    F: for<'short, 'long> FnOnce(MyCx<'short, 'long>) -> R,
+{
+    op(MyCx {
+        short: &22,
+        long: &22,
+    })
+}
+
+fn main() {
+    with(|mut cx| {
+        // For this to type-check, we need to be able to deduce that
+        // the lifetime of `l` can be `'short`, even though it has
+        // input from `'long`.
+        let l = if true { cx.long() } else { cx.short() };
+        cx.set_short(l);
+    });
+}
diff --git a/src/test/run-pass/lub-glb-with-unbound-infer-var.rs b/src/test/run-pass/lub-glb-with-unbound-infer-var.rs
new file mode 100644
index 0000000..6b9bd67
--- /dev/null
+++ b/src/test/run-pass/lub-glb-with-unbound-infer-var.rs
@@ -0,0 +1,24 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test for a specific corner case: when we compute the LUB of two fn
+// types and their parameters have unbound variables. In that case, we
+// wind up relating those two variables. This was causing an ICE in an
+// in-progress PR.
+
+fn main() {
+    let a_f: fn(_) = |_| ();
+    let b_f: fn(_) = |_| ();
+    let c_f = match 22 {
+        0 => a_f,
+        _ => b_f,
+    };
+    c_f(4);
+}
diff --git a/src/test/run-pass/mir_trans_calls.rs b/src/test/run-pass/mir_trans_calls.rs
index d429c68..d02e328 100644
--- a/src/test/run-pass/mir_trans_calls.rs
+++ b/src/test/run-pass/mir_trans_calls.rs
@@ -8,7 +8,9 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-#![feature(fn_traits)]
+#![feature(fn_traits, test)]
+
+extern crate test;
 
 fn test1(a: isize, b: (i32, i32), c: &[i32]) -> (isize, (i32, i32), &[i32]) {
     // Test passing a number of arguments including a fat pointer.
@@ -156,6 +158,16 @@
     (z.0, z.1)
 }
 
+fn test_fn_const_arg_by_ref(mut a: [u64; 4]) -> u64 {
+    // Mutate the by-reference argument, which won't work with
+    // a non-immediate constant unless it's copied to the stack.
+    let a = test::black_box(&mut a);
+    a[0] += a[1];
+    a[0] += a[2];
+    a[0] += a[3];
+    a[0]
+}
+
 fn main() {
     assert_eq!(test1(1, (2, 3), &[4, 5, 6]), (1, (2, 3), &[4, 5, 6][..]));
     assert_eq!(test2(98), 98);
@@ -182,4 +194,7 @@
     assert_eq!(test_fn_ignored_pair_0(), ());
     assert_eq!(test_fn_ignored_pair_named(), (Foo, Foo));
     assert_eq!(test_fn_nested_pair(&((1.0, 2.0), 0)), (1.0, 2.0));
+
+    const ARRAY: [u64; 4] = [1, 2, 3, 4];
+    assert_eq!(test_fn_const_arg_by_ref(ARRAY), 1 + 2 + 3 + 4);
 }
diff --git a/src/test/run-pass/packed-struct-optimized-enum.rs b/src/test/run-pass/packed-struct-optimized-enum.rs
new file mode 100644
index 0000000..1179f16
--- /dev/null
+++ b/src/test/run-pass/packed-struct-optimized-enum.rs
@@ -0,0 +1,25 @@
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#[repr(packed)]
+#[derive(Copy, Clone)]
+struct Packed<T>(T);
+
+fn main() {
+    let one = (Some(Packed((&(), 0))), true);
+    let two = [one, one];
+    let stride = (&two[1] as *const _ as usize) - (&two[0] as *const _ as usize);
+
+    // This can fail if rustc and LLVM disagree on the size of a type.
+    // In this case, `Option<Packed<(&(), u32)>>` was erroneously not
+    // marked as packed despite needing alignment `1` and containing
+    // its `&()` discriminant, which has alignment larger than `1`.
+    assert_eq!(stride, std::mem::size_of_val(&one));
+}
diff --git a/src/test/run-pass/regions-relate-bound-regions-on-closures-to-inference-variables.rs b/src/test/run-pass/regions-relate-bound-regions-on-closures-to-inference-variables.rs
index ae4adbf..3162ef5 100644
--- a/src/test/run-pass/regions-relate-bound-regions-on-closures-to-inference-variables.rs
+++ b/src/test/run-pass/regions-relate-bound-regions-on-closures-to-inference-variables.rs
@@ -42,7 +42,7 @@
             // inferring `'_2` to be `'static` in this case, because
             // it is created outside the closure but then related to
             // regions bound by the closure itself. See the
-            // `region_inference.rs` file (and the `givens` field, in
+            // `region_constraints.rs` file (and the `givens` field, in
             // particular) for more details.
             this.foo()
         }))
diff --git a/src/test/rustdoc/foreigntype.rs b/src/test/rustdoc/foreigntype.rs
new file mode 100644
index 0000000..06447ff
--- /dev/null
+++ b/src/test/rustdoc/foreigntype.rs
@@ -0,0 +1,28 @@
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(extern_types)]
+
+extern {
+    // @has foreigntype/foreigntype.ExtType.html
+    pub type ExtType;
+}
+
+impl ExtType {
+    // @has - '//a[@class="fnname"]' 'do_something'
+    pub fn do_something(&self) {}
+}
+
+pub trait Trait {}
+
+// @has foreigntype/trait.Trait.html '//a[@class="foreigntype"]' 'ExtType'
+impl Trait for ExtType {}
+
+// @has foreigntype/index.html '//a[@class="foreigntype"]' 'ExtType'
diff --git a/src/test/ui-fulldeps/proc-macro/auxiliary/three-equals.rs b/src/test/ui-fulldeps/proc-macro/auxiliary/three-equals.rs
index 6fca32f..2381c61 100644
--- a/src/test/ui-fulldeps/proc-macro/auxiliary/three-equals.rs
+++ b/src/test/ui-fulldeps/proc-macro/auxiliary/three-equals.rs
@@ -18,7 +18,7 @@
 
 fn parse(input: TokenStream) -> Result<(), Diagnostic> {
     let mut count = 0;
-    let mut last_span = Span::default();
+    let mut last_span = Span::def_site();
     for tree in input {
         let span = tree.span;
         if count >= 3 {
@@ -37,7 +37,7 @@
     }
 
     if count < 3 {
-        return Err(Span::default()
+        return Err(Span::def_site()
                        .error(format!("found {} equal signs, need exactly 3", count))
                        .help("input must be: `===`"))
     }
diff --git a/src/test/ui/compare-method/proj-outlives-region.stderr b/src/test/ui/compare-method/proj-outlives-region.stderr
index e58251c..f871f03 100644
--- a/src/test/ui/compare-method/proj-outlives-region.stderr
+++ b/src/test/ui/compare-method/proj-outlives-region.stderr
@@ -1,4 +1,4 @@
-error: impl has stricter requirements than trait
+error[E0276]: impl has stricter requirements than trait
   --> $DIR/proj-outlives-region.rs:19:5
    |
 14 |     fn foo() where T: 'a;
@@ -6,10 +6,6 @@
 ...
 19 |     fn foo() where U: 'a { } //~ ERROR E0276
    |     ^^^^^^^^^^^^^^^^^^^^^^^^ impl has extra requirement `U: 'a`
-   |
-   = note: #[deny(extra_requirement_in_impl)] on by default
-   = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
-   = note: for more information, see issue #37166 <https://github.com/rust-lang/rust/issues/37166>
 
 error: aborting due to previous error
 
diff --git a/src/test/ui/compare-method/region-unrelated.stderr b/src/test/ui/compare-method/region-unrelated.stderr
index 95db68f..1df83c7 100644
--- a/src/test/ui/compare-method/region-unrelated.stderr
+++ b/src/test/ui/compare-method/region-unrelated.stderr
@@ -1,4 +1,4 @@
-error: impl has stricter requirements than trait
+error[E0276]: impl has stricter requirements than trait
   --> $DIR/region-unrelated.rs:19:5
    |
 14 |     fn foo() where T: 'a;
@@ -6,10 +6,6 @@
 ...
 19 |     fn foo() where V: 'a { }
    |     ^^^^^^^^^^^^^^^^^^^^^^^^ impl has extra requirement `V: 'a`
-   |
-   = note: #[deny(extra_requirement_in_impl)] on by default
-   = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
-   = note: for more information, see issue #37166 <https://github.com/rust-lang/rust/issues/37166>
 
 error: aborting due to previous error
 
diff --git a/src/test/ui/impl-trait/universal_wrong_bounds.stderr b/src/test/ui/impl-trait/universal_wrong_bounds.stderr
index 5e77888..600064c 100644
--- a/src/test/ui/impl-trait/universal_wrong_bounds.stderr
+++ b/src/test/ui/impl-trait/universal_wrong_bounds.stderr
@@ -9,7 +9,6 @@
    |
 21 | fn wants_debug(g: impl Debug) { }
    |                        ^^^^^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 13 | use std::fmt::Debug;
@@ -20,7 +19,6 @@
    |
 22 | fn wants_display(g: impl Debug) { }
    |                          ^^^^^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 13 | use std::fmt::Debug;
diff --git a/src/test/ui/issue-22644.stderr b/src/test/ui/issue-22644.stderr
index f4967c4..5777c24 100644
--- a/src/test/ui/issue-22644.stderr
+++ b/src/test/ui/issue-22644.stderr
@@ -50,7 +50,6 @@
    |                    ^ not interpreted as comparison
 27 |                    4);
    |                    - interpreted as generic arguments
-   |
 help: try comparing the casted value
    |
 23 |     println!("{}", (a
@@ -65,7 +64,6 @@
    |                    ^ not interpreted as comparison
 36 |                    5);
    |                    - interpreted as generic arguments
-   |
 help: try comparing the casted value
    |
 28 |     println!("{}", (a
diff --git a/src/test/ui/issue-26548.stderr b/src/test/ui/issue-26548.stderr
deleted file mode 100644
index 8bfe4ac..0000000
--- a/src/test/ui/issue-26548.stderr
+++ /dev/null
@@ -1,9 +0,0 @@
-error[E0391]: unsupported cyclic reference between types/traits detected
-  |
-note: the cycle begins when computing layout of `S`...
-note: ...which then requires computing layout of `std::option::Option<<S as Mirror>::It>`...
-note: ...which then requires computing layout of `<S as Mirror>::It`...
-  = note: ...which then again requires computing layout of `S`, completing the cycle.
-
-error: aborting due to previous error
-
diff --git a/src/test/ui/issue-35675.stderr b/src/test/ui/issue-35675.stderr
index ed330f4..e125d74 100644
--- a/src/test/ui/issue-35675.stderr
+++ b/src/test/ui/issue-35675.stderr
@@ -12,7 +12,6 @@
    |
 23 |     Apple(5)
    |     ^^^^^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 12 | use Fruit::Apple;
@@ -32,7 +31,6 @@
    |
 31 |     Apple(5)
    |     ^^^^^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 12 | use Fruit::Apple;
diff --git a/src/test/ui/lub-glb/old-lub-glb-hr.rs b/src/test/ui/lub-glb/old-lub-glb-hr.rs
new file mode 100644
index 0000000..85c90bb
--- /dev/null
+++ b/src/test/ui/lub-glb/old-lub-glb-hr.rs
@@ -0,0 +1,36 @@
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we give a note when the old LUB/GLB algorithm would have
+// succeeded but the new code (which is stricter) gives an error.
+
+fn foo(
+    x: fn(&u8, &u8),
+    y: for<'a> fn(&'a u8, &'a u8),
+) {
+    let z = match 22 {
+        0 => x,
+        _ => y,
+    };
+}
+
+fn bar(
+    x: fn(&u8, &u8),
+    y: for<'a> fn(&'a u8, &'a u8),
+) {
+    let z = match 22 {
+        // No error with an explicit cast:
+        0 => x as for<'a> fn(&'a u8, &'a u8),
+        _ => y,
+    };
+}
+
+fn main() {
+}
diff --git a/src/test/ui/lub-glb/old-lub-glb-hr.stderr b/src/test/ui/lub-glb/old-lub-glb-hr.stderr
new file mode 100644
index 0000000..4a310a5
--- /dev/null
+++ b/src/test/ui/lub-glb/old-lub-glb-hr.stderr
@@ -0,0 +1,22 @@
+error[E0308]: match arms have incompatible types
+  --> $DIR/old-lub-glb-hr.rs:18:13
+   |
+18 |       let z = match 22 {
+   |  _____________^
+19 | |         0 => x,
+20 | |         _ => y,
+21 | |     };
+   | |_____^ expected bound lifetime parameter, found concrete lifetime
+   |
+   = note: expected type `for<'r, 's> fn(&'r u8, &'s u8)`
+              found type `for<'a> fn(&'a u8, &'a u8)`
+   = note: this was previously accepted by the compiler but has been phased out
+   = note: for more information, see https://github.com/rust-lang/rust/issues/45852
+note: match arm with an incompatible type
+  --> $DIR/old-lub-glb-hr.rs:20:14
+   |
+20 |         _ => y,
+   |              ^
+
+error: aborting due to previous error
+
diff --git a/src/test/ui/lub-glb/old-lub-glb-object.rs b/src/test/ui/lub-glb/old-lub-glb-object.rs
new file mode 100644
index 0000000..7cf89b6
--- /dev/null
+++ b/src/test/ui/lub-glb/old-lub-glb-object.rs
@@ -0,0 +1,38 @@
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we give a note when the old LUB/GLB algorithm would have
+// succeeded but the new code (which is stricter) gives an error.
+
+trait Foo<T, U> { }
+
+fn foo(
+    x: &for<'a, 'b> Foo<&'a u8, &'b u8>,
+    y: &for<'a> Foo<&'a u8, &'a u8>,
+) {
+    let z = match 22 {
+        0 => x,
+        _ => y,
+    };
+}
+
+fn bar(
+    x: &for<'a, 'b> Foo<&'a u8, &'b u8>,
+    y: &for<'a> Foo<&'a u8, &'a u8>,
+) {
+    // Accepted with an explicit cast:
+    let z = match 22 {
+        0 => x as &for<'a> Foo<&'a u8, &'a u8>,
+        _ => y,
+    };
+}
+
+fn main() {
+}
diff --git a/src/test/ui/lub-glb/old-lub-glb-object.stderr b/src/test/ui/lub-glb/old-lub-glb-object.stderr
new file mode 100644
index 0000000..a1077f4
--- /dev/null
+++ b/src/test/ui/lub-glb/old-lub-glb-object.stderr
@@ -0,0 +1,22 @@
+error[E0308]: match arms have incompatible types
+  --> $DIR/old-lub-glb-object.rs:20:13
+   |
+20 |       let z = match 22 {
+   |  _____________^
+21 | |         0 => x,
+22 | |         _ => y,
+23 | |     };
+   | |_____^ expected bound lifetime parameter 'a, found concrete lifetime
+   |
+   = note: expected type `&for<'a, 'b> Foo<&'a u8, &'b u8>`
+              found type `&for<'a> Foo<&'a u8, &'a u8>`
+   = note: this was previously accepted by the compiler but has been phased out
+   = note: for more information, see https://github.com/rust-lang/rust/issues/45852
+note: match arm with an incompatible type
+  --> $DIR/old-lub-glb-object.rs:22:14
+   |
+22 |         _ => y,
+   |              ^
+
+error: aborting due to previous error
+
diff --git a/src/test/ui/nll/get_default.rs b/src/test/ui/nll/get_default.rs
new file mode 100644
index 0000000..5605206
--- /dev/null
+++ b/src/test/ui/nll/get_default.rs
@@ -0,0 +1,53 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Basic test for free regions in the NLL code. This test ought to
+// report an error due to a reborrowing constraint. Right now, we get
+// a variety of errors from the older, AST-based machinery (notably
+// borrowck), and then we get the NLL error at the end.
+
+// compile-flags:-Znll -Zborrowck-mir
+
+struct Map {
+}
+
+impl Map {
+    fn get(&self) -> Option<&String> { None }
+    fn set(&mut self, v: String) { }
+}
+
+fn ok(map: &mut Map) -> &String {
+    loop {
+        match map.get() {
+            Some(v) => {
+                return v;
+            }
+            None => {
+                map.set(String::new()); // Just AST errors here
+            }
+        }
+    }
+}
+
+fn err(map: &mut Map) -> &String {
+    loop {
+        match map.get() {
+            Some(v) => {
+                map.set(String::new()); // Both AST and MIR error here
+                return v;
+            }
+            None => {
+                map.set(String::new()); // Just AST errors here
+            }
+        }
+    }
+}
+
+fn main() { }
diff --git a/src/test/ui/nll/get_default.stderr b/src/test/ui/nll/get_default.stderr
new file mode 100644
index 0000000..9586f42
--- /dev/null
+++ b/src/test/ui/nll/get_default.stderr
@@ -0,0 +1,47 @@
+error[E0502]: cannot borrow `*map` as mutable because it is also borrowed as immutable (Ast)
+  --> $DIR/get_default.rs:33:17
+   |
+28 |         match map.get() {
+   |               --- immutable borrow occurs here
+...
+33 |                 map.set(String::new()); // Just AST errors here
+   |                 ^^^ mutable borrow occurs here
+...
+37 | }
+   | - immutable borrow ends here
+
+error[E0502]: cannot borrow `*map` as mutable because it is also borrowed as immutable (Ast)
+  --> $DIR/get_default.rs:43:17
+   |
+41 |         match map.get() {
+   |               --- immutable borrow occurs here
+42 |             Some(v) => {
+43 |                 map.set(String::new()); // Both AST and MIR error here
+   |                 ^^^ mutable borrow occurs here
+...
+51 | }
+   | - immutable borrow ends here
+
+error[E0502]: cannot borrow `*map` as mutable because it is also borrowed as immutable (Ast)
+  --> $DIR/get_default.rs:47:17
+   |
+41 |         match map.get() {
+   |               --- immutable borrow occurs here
+...
+47 |                 map.set(String::new()); // Just AST errors here
+   |                 ^^^ mutable borrow occurs here
+...
+51 | }
+   | - immutable borrow ends here
+
+error[E0502]: cannot borrow `(*map)` as mutable because it is also borrowed as immutable (Mir)
+  --> $DIR/get_default.rs:43:17
+   |
+41 |         match map.get() {
+   |               --- immutable borrow occurs here
+42 |             Some(v) => {
+43 |                 map.set(String::new()); // Both AST and MIR error here
+   |                 ^^^ mutable borrow occurs here
+
+error: aborting due to 4 previous errors
+
diff --git a/src/test/ui/print_type_sizes/nullable.rs b/src/test/ui/print_type_sizes/niche-filling.rs
similarity index 75%
rename from src/test/ui/print_type_sizes/nullable.rs
rename to src/test/ui/print_type_sizes/niche-filling.rs
index 5052c59..f1c419d 100644
--- a/src/test/ui/print_type_sizes/nullable.rs
+++ b/src/test/ui/print_type_sizes/niche-filling.rs
@@ -10,8 +10,8 @@
 
 // compile-flags: -Z print-type-sizes
 
-// This file illustrates how enums with a non-null field are handled,
-// modelled after cases like `Option<&u32>` and such.
+// This file illustrates how niche-filling enums are handled,
+// modelled after cases like `Option<&u32>`, `Option<bool>` and such.
 //
 // It uses NonZero directly, rather than `&_` or `Unique<_>`, because
 // the test is not set up to deal with target-dependent pointer width.
@@ -68,8 +68,22 @@
     fn one() -> Self { 1 }
 }
 
+pub enum Enum4<A, B, C, D> {
+    One(A),
+    Two(B),
+    Three(C),
+    Four(D)
+}
+
 pub fn main() {
     let _x: MyOption<NonZero<u32>> = Default::default();
     let _y: EmbeddedDiscr = Default::default();
     let _z: MyOption<IndirectNonZero<u32>> = Default::default();
+    let _a: MyOption<bool> = Default::default();
+    let _b: MyOption<char> = Default::default();
+    let _c: MyOption<std::cmp::Ordering> = Default::default();
+    let _b: MyOption<MyOption<u8>> = Default::default();
+    let _e: Enum4<(), char, (), ()> = Enum4::One(());
+    let _f: Enum4<(), (), bool, ()> = Enum4::One(());
+    let _g: Enum4<(), (), (), MyOption<u8>> = Enum4::One(());
 }
diff --git a/src/test/ui/print_type_sizes/niche-filling.stdout b/src/test/ui/print_type_sizes/niche-filling.stdout
new file mode 100644
index 0000000..af3e89a
--- /dev/null
+++ b/src/test/ui/print_type_sizes/niche-filling.stdout
@@ -0,0 +1,80 @@
+print-type-size type: `IndirectNonZero<u32>`: 12 bytes, alignment: 4 bytes
+print-type-size     field `.nested`: 8 bytes
+print-type-size     field `.post`: 2 bytes
+print-type-size     field `.pre`: 1 bytes
+print-type-size     end padding: 1 bytes
+print-type-size type: `MyOption<IndirectNonZero<u32>>`: 12 bytes, alignment: 4 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Some`: 12 bytes
+print-type-size         field `.0`: 12 bytes
+print-type-size type: `EmbeddedDiscr`: 8 bytes, alignment: 4 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Record`: 7 bytes
+print-type-size         field `.val`: 4 bytes
+print-type-size         field `.post`: 2 bytes
+print-type-size         field `.pre`: 1 bytes
+print-type-size     end padding: 1 bytes
+print-type-size type: `NestedNonZero<u32>`: 8 bytes, alignment: 4 bytes
+print-type-size     field `.val`: 4 bytes
+print-type-size     field `.post`: 2 bytes
+print-type-size     field `.pre`: 1 bytes
+print-type-size     end padding: 1 bytes
+print-type-size type: `Enum4<(), char, (), ()>`: 4 bytes, alignment: 4 bytes
+print-type-size     variant `One`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size     variant `Two`: 4 bytes
+print-type-size         field `.0`: 4 bytes
+print-type-size     variant `Three`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size     variant `Four`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size type: `MyOption<char>`: 4 bytes, alignment: 4 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Some`: 4 bytes
+print-type-size         field `.0`: 4 bytes
+print-type-size type: `MyOption<core::nonzero::NonZero<u32>>`: 4 bytes, alignment: 4 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Some`: 4 bytes
+print-type-size         field `.0`: 4 bytes
+print-type-size type: `core::nonzero::NonZero<u32>`: 4 bytes, alignment: 4 bytes
+print-type-size     field `.0`: 4 bytes
+print-type-size type: `Enum4<(), (), (), MyOption<u8>>`: 2 bytes, alignment: 1 bytes
+print-type-size     variant `One`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size     variant `Two`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size     variant `Three`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size     variant `Four`: 2 bytes
+print-type-size         field `.0`: 2 bytes
+print-type-size type: `MyOption<MyOption<u8>>`: 2 bytes, alignment: 1 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Some`: 2 bytes
+print-type-size         field `.0`: 2 bytes
+print-type-size type: `MyOption<u8>`: 2 bytes, alignment: 1 bytes
+print-type-size     discriminant: 1 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Some`: 1 bytes
+print-type-size         field `.0`: 1 bytes
+print-type-size type: `Enum4<(), (), bool, ()>`: 1 bytes, alignment: 1 bytes
+print-type-size     variant `One`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size     variant `Two`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size     variant `Three`: 1 bytes
+print-type-size         field `.0`: 1 bytes
+print-type-size     variant `Four`: 0 bytes
+print-type-size         field `.0`: 0 bytes
+print-type-size type: `MyOption<bool>`: 1 bytes, alignment: 1 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Some`: 1 bytes
+print-type-size         field `.0`: 1 bytes
+print-type-size type: `MyOption<core::cmp::Ordering>`: 1 bytes, alignment: 1 bytes
+print-type-size     variant `None`: 0 bytes
+print-type-size     variant `Some`: 1 bytes
+print-type-size         field `.0`: 1 bytes
+print-type-size type: `core::cmp::Ordering`: 1 bytes, alignment: 1 bytes
+print-type-size     discriminant: 1 bytes
+print-type-size     variant `Less`: 0 bytes
+print-type-size     variant `Equal`: 0 bytes
+print-type-size     variant `Greater`: 0 bytes
diff --git a/src/test/ui/print_type_sizes/nullable.stdout b/src/test/ui/print_type_sizes/nullable.stdout
deleted file mode 100644
index 830678f..0000000
--- a/src/test/ui/print_type_sizes/nullable.stdout
+++ /dev/null
@@ -1,24 +0,0 @@
-print-type-size type: `IndirectNonZero<u32>`: 12 bytes, alignment: 4 bytes
-print-type-size     field `.nested`: 8 bytes
-print-type-size     field `.post`: 2 bytes
-print-type-size     field `.pre`: 1 bytes
-print-type-size     end padding: 1 bytes
-print-type-size type: `MyOption<IndirectNonZero<u32>>`: 12 bytes, alignment: 4 bytes
-print-type-size     variant `Some`: 12 bytes
-print-type-size         field `.0`: 12 bytes
-print-type-size type: `EmbeddedDiscr`: 8 bytes, alignment: 4 bytes
-print-type-size     variant `Record`: 7 bytes
-print-type-size         field `.val`: 4 bytes
-print-type-size         field `.post`: 2 bytes
-print-type-size         field `.pre`: 1 bytes
-print-type-size     end padding: 1 bytes
-print-type-size type: `NestedNonZero<u32>`: 8 bytes, alignment: 4 bytes
-print-type-size     field `.val`: 4 bytes
-print-type-size     field `.post`: 2 bytes
-print-type-size     field `.pre`: 1 bytes
-print-type-size     end padding: 1 bytes
-print-type-size type: `MyOption<core::nonzero::NonZero<u32>>`: 4 bytes, alignment: 4 bytes
-print-type-size     variant `Some`: 4 bytes
-print-type-size         field `.0`: 4 bytes
-print-type-size type: `core::nonzero::NonZero<u32>`: 4 bytes, alignment: 4 bytes
-print-type-size     field `.0`: 4 bytes
diff --git a/src/test/run-pass/issue-30276.rs b/src/test/ui/print_type_sizes/uninhabited.rs
similarity index 65%
copy from src/test/run-pass/issue-30276.rs
copy to src/test/ui/print_type_sizes/uninhabited.rs
index 5dd0cd8..69cc4c9 100644
--- a/src/test/run-pass/issue-30276.rs
+++ b/src/test/ui/print_type_sizes/uninhabited.rs
@@ -1,4 +1,4 @@
-// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
 // file at the top-level directory of this distribution and at
 // http://rust-lang.org/COPYRIGHT.
 //
@@ -8,7 +8,11 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-struct Test([i32]);
-fn main() {
-    let _x: fn(_) -> Test = Test;
+// compile-flags: -Z print-type-sizes
+
+#![feature(never_type)]
+
+pub fn main() {
+    let _x: Option<!> = None;
+    let _y: Result<u32, !> = Ok(42);
 }
diff --git a/src/test/ui/print_type_sizes/uninhabited.stdout b/src/test/ui/print_type_sizes/uninhabited.stdout
new file mode 100644
index 0000000..2a8706f
--- /dev/null
+++ b/src/test/ui/print_type_sizes/uninhabited.stdout
@@ -0,0 +1,5 @@
+print-type-size type: `std::result::Result<u32, !>`: 4 bytes, alignment: 4 bytes
+print-type-size     variant `Ok`: 4 bytes
+print-type-size         field `.0`: 4 bytes
+print-type-size type: `std::option::Option<!>`: 0 bytes, alignment: 1 bytes
+print-type-size     variant `None`: 0 bytes
diff --git a/src/test/ui/resolve/enums-are-namespaced-xc.stderr b/src/test/ui/resolve/enums-are-namespaced-xc.stderr
index a401861..52d798a 100644
--- a/src/test/ui/resolve/enums-are-namespaced-xc.stderr
+++ b/src/test/ui/resolve/enums-are-namespaced-xc.stderr
@@ -3,7 +3,6 @@
    |
 15 |     let _ = namespaced_enums::A;
    |                               ^ not found in `namespaced_enums`
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 14 | use namespaced_enums::Foo::A;
@@ -14,7 +13,6 @@
    |
 18 |     let _ = namespaced_enums::B(10);
    |                               ^ not found in `namespaced_enums`
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 14 | use namespaced_enums::Foo::B;
@@ -25,7 +23,6 @@
    |
 21 |     let _ = namespaced_enums::C { a: 10 };
    |                               ^ not found in `namespaced_enums`
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 14 | use namespaced_enums::Foo::C;
diff --git a/src/test/ui/resolve/issue-16058.stderr b/src/test/ui/resolve/issue-16058.stderr
index 6d7406f..322a1fe 100644
--- a/src/test/ui/resolve/issue-16058.stderr
+++ b/src/test/ui/resolve/issue-16058.stderr
@@ -3,7 +3,6 @@
    |
 19 |         Result {
    |         ^^^^^^ not a struct, variant or union type
-   |
 help: possible better candidates are found in other modules, you can import them into scope
    |
 12 | use std::fmt::Result;
diff --git a/src/test/ui/resolve/issue-17518.stderr b/src/test/ui/resolve/issue-17518.stderr
index 2f94dbd..bdc4fb0 100644
--- a/src/test/ui/resolve/issue-17518.stderr
+++ b/src/test/ui/resolve/issue-17518.stderr
@@ -3,7 +3,6 @@
    |
 16 |     E { name: "foobar" }; //~ ERROR unresolved struct, variant or union type `E`
    |     ^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 11 | use SomeEnum::E;
diff --git a/src/test/ui/resolve/issue-21221-1.stderr b/src/test/ui/resolve/issue-21221-1.stderr
index ddaee45..6038c68 100644
--- a/src/test/ui/resolve/issue-21221-1.stderr
+++ b/src/test/ui/resolve/issue-21221-1.stderr
@@ -3,7 +3,6 @@
    |
 53 | impl Mul for Foo {
    |      ^^^ not found in this scope
-   |
 help: possible candidates are found in other modules, you can import them into scope
    |
 11 | use mul1::Mul;
@@ -18,7 +17,6 @@
    |
 72 | fn getMul() -> Mul {
    |                ^^^ not found in this scope
-   |
 help: possible candidates are found in other modules, you can import them into scope
    |
 11 | use mul1::Mul;
@@ -42,7 +40,6 @@
    |
 88 | impl Div for Foo {
    |      ^^^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 11 | use std::ops::Div;
diff --git a/src/test/ui/resolve/issue-21221-2.stderr b/src/test/ui/resolve/issue-21221-2.stderr
index a759116..0ae8052 100644
--- a/src/test/ui/resolve/issue-21221-2.stderr
+++ b/src/test/ui/resolve/issue-21221-2.stderr
@@ -3,7 +3,6 @@
    |
 28 | impl T for Foo { }
    |      ^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 11 | use foo::bar::T;
diff --git a/src/test/ui/resolve/issue-21221-3.stderr b/src/test/ui/resolve/issue-21221-3.stderr
index da849ec..b26a8cd 100644
--- a/src/test/ui/resolve/issue-21221-3.stderr
+++ b/src/test/ui/resolve/issue-21221-3.stderr
@@ -3,7 +3,6 @@
    |
 25 | impl OuterTrait for Foo {}
    |      ^^^^^^^^^^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 18 | use issue_21221_3::outer::OuterTrait;
diff --git a/src/test/ui/resolve/issue-21221-4.stderr b/src/test/ui/resolve/issue-21221-4.stderr
index 78059ed..0a22d8e 100644
--- a/src/test/ui/resolve/issue-21221-4.stderr
+++ b/src/test/ui/resolve/issue-21221-4.stderr
@@ -3,7 +3,6 @@
    |
 20 | impl T for Foo {}
    |      ^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 18 | use issue_21221_4::T;
diff --git a/src/test/ui/resolve/issue-3907.stderr b/src/test/ui/resolve/issue-3907.stderr
index 7a4d0ca..26ff7e7 100644
--- a/src/test/ui/resolve/issue-3907.stderr
+++ b/src/test/ui/resolve/issue-3907.stderr
@@ -3,7 +3,6 @@
    |
 20 | impl Foo for S { //~ ERROR expected trait, found type alias `Foo`
    |      ^^^ type aliases cannot be used for traits
-   |
 help: possible better candidate is found in another module, you can import it into scope
    |
 14 | use issue_3907::Foo;
diff --git a/src/test/ui/resolve/privacy-struct-ctor.stderr b/src/test/ui/resolve/privacy-struct-ctor.stderr
index f7e5c60..cb459ae 100644
--- a/src/test/ui/resolve/privacy-struct-ctor.stderr
+++ b/src/test/ui/resolve/privacy-struct-ctor.stderr
@@ -7,7 +7,6 @@
    |         did you mean `S`?
    |         constructor is not visible here due to private fields
    |         did you mean `Z { /* fields */ }`?
-   |
 help: possible better candidate is found in another module, you can import it into scope
    |
 22 |     use m::n::Z;
@@ -21,7 +20,6 @@
    |     |
    |     constructor is not visible here due to private fields
    |     did you mean `S { /* fields */ }`?
-   |
 help: possible better candidate is found in another module, you can import it into scope
    |
 32 | use m::S;
@@ -35,7 +33,6 @@
    |     |
    |     constructor is not visible here due to private fields
    |     did you mean `xcrate::S { /* fields */ }`?
-   |
 help: possible better candidate is found in another module, you can import it into scope
    |
 32 | use m::S;
diff --git a/src/test/ui/resolve/use_suggestion_placement.stderr b/src/test/ui/resolve/use_suggestion_placement.stderr
index 08640b2..4018253 100644
--- a/src/test/ui/resolve/use_suggestion_placement.stderr
+++ b/src/test/ui/resolve/use_suggestion_placement.stderr
@@ -3,7 +3,6 @@
    |
 25 |     type Bar = Path;
    |                ^^^^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 21 |     use std::path::Path;
@@ -14,7 +13,6 @@
    |
 30 |     let _ = A;
    |             ^ not found in this scope
-   |
 help: possible candidate is found in another module, you can import it into scope
    |
 11 | use m::A;
@@ -25,7 +23,6 @@
    |
 35 |     type Dict<K, V> = HashMap<K, V>;
    |                       ^^^^^^^ not found in this scope
-   |
 help: possible candidates are found in other modules, you can import them into scope
    |
 11 | use std::collections::HashMap;
diff --git a/src/test/ui/short-error-format.rs b/src/test/ui/short-error-format.rs
index 3e6802c..ecce824 100644
--- a/src/test/ui/short-error-format.rs
+++ b/src/test/ui/short-error-format.rs
@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-// compile-flags: --error-format=short
+// compile-flags: --error-format=short -Zunstable-options
 
 fn foo(_: u32) {}
 
diff --git a/src/test/ui/span/issue-35987.stderr b/src/test/ui/span/issue-35987.stderr
index b57b58e..5e7a492 100644
--- a/src/test/ui/span/issue-35987.stderr
+++ b/src/test/ui/span/issue-35987.stderr
@@ -3,7 +3,6 @@
    |
 15 | impl<T: Clone, Add> Add for Foo<T> {
    |                     ^^^ not a trait
-   |
 help: possible better candidate is found in another module, you can import it into scope
    |
 13 | use std::ops::Add;
diff --git a/src/test/ui/span/issue-39018.stderr b/src/test/ui/span/issue-39018.stderr
index 2782753..e865b51 100644
--- a/src/test/ui/span/issue-39018.stderr
+++ b/src/test/ui/span/issue-39018.stderr
@@ -3,7 +3,6 @@
    |
 12 |     let x = "Hello " + "World!";
    |             ^^^^^^^^^^^^^^^^^^^ `+` can't be used to concatenate two `&str` strings
-   |
 help: `to_owned()` can be used to create an owned `String` from a string reference. String concatenation appends the string on the right to the string on the left and may require reallocation. This requires ownership of the string on the left
    |
 12 |     let x = "Hello ".to_owned() + "World!";
diff --git a/src/test/ui/span/missing-unit-argument.stderr b/src/test/ui/span/missing-unit-argument.stderr
index af558d0..672b071 100644
--- a/src/test/ui/span/missing-unit-argument.stderr
+++ b/src/test/ui/span/missing-unit-argument.stderr
@@ -3,7 +3,6 @@
    |
 21 |     let _: Result<(), String> = Ok();
    |                                 ^^^^
-   |
 help: expected the unit value `()`; create it with empty parentheses
    |
 21 |     let _: Result<(), String> = Ok(());
@@ -35,7 +34,6 @@
 ...
 24 |     bar();
    |     ^^^^^
-   |
 help: expected the unit value `()`; create it with empty parentheses
    |
 24 |     bar(());
@@ -49,7 +47,6 @@
 ...
 25 |     S.baz();
    |       ^^^
-   |
 help: expected the unit value `()`; create it with empty parentheses
    |
 25 |     S.baz(());
@@ -63,7 +60,6 @@
 ...
 26 |     S.generic::<()>();
    |       ^^^^^^^
-   |
 help: expected the unit value `()`; create it with empty parentheses
    |
 26 |     S.generic::<()>(());
diff --git a/src/tools/cargotest/main.rs b/src/tools/cargotest/main.rs
index a6c56a1..b1122f4 100644
--- a/src/tools/cargotest/main.rs
+++ b/src/tools/cargotest/main.rs
@@ -60,8 +60,8 @@
     },
     Test {
         name: "servo",
-        repo: "https://github.com/servo/servo",
-        sha: "38fe9533b93e985657f99a29772bf3d3c8694822",
+        repo: "https://github.com/eddyb/servo",
+        sha: "6031de9a397e2feba4ff98725991825f62b68518",
         lock: None,
         // Only test Stylo a.k.a. Quantum CSS, the parts of Servo going into Firefox.
         // This takes much less time to build than all of Servo and supports stable Rust.
diff --git a/src/tools/compiletest/src/header.rs b/src/tools/compiletest/src/header.rs
index 39f41f5..c853d53 100644
--- a/src/tools/compiletest/src/header.rs
+++ b/src/tools/compiletest/src/header.rs
@@ -150,6 +150,14 @@
                     // Ignore if actual version is smaller the minimum required
                     // version
                     &actual_version[..] < min_version
+                } else if line.starts_with("min-system-llvm-version") {
+                    let min_version = line.trim_right()
+                        .rsplit(' ')
+                        .next()
+                        .expect("Malformed llvm version directive");
+                    // Ignore if using system LLVM and actual version
+                    // is smaller than the minimum required version
+                    !(config.system_llvm && &actual_version[..] < min_version)
                 } else {
                     false
                 }
diff --git a/src/tools/compiletest/src/runtest.rs b/src/tools/compiletest/src/runtest.rs
index 3e3c56a..749d1f4 100644
--- a/src/tools/compiletest/src/runtest.rs
+++ b/src/tools/compiletest/src/runtest.rs
@@ -1389,6 +1389,7 @@
         if let Some(ref incremental_dir) = self.props.incremental_dir {
             rustc.args(&["-Z", &format!("incremental={}", incremental_dir.display())]);
             rustc.args(&["-Z", "incremental-verify-ich"]);
+            rustc.args(&["-Z", "incremental-queries"]);
         }
 
         match self.config.mode {
diff --git a/src/tools/toolstate.toml b/src/tools/toolstate.toml
index 744a0f9..f1684f4 100644
--- a/src/tools/toolstate.toml
+++ b/src/tools/toolstate.toml
@@ -26,7 +26,7 @@
 miri = "Broken"
 
 # ping @Manishearth @llogiq @mcarton @oli-obk
-clippy = "Testing"
+clippy = "Broken"
 
 # ping @nrc
 rls = "Testing"