[sil-bug-reducer] Initial version of bug_reducer.

Currently it supports random bug finding and opt pipeline reduction. At some
point in the future it will also support:

1. Reducing optimizer crasher test cases.
2. Reducing optimizer miscompile test cases.

But those are for another time; this is just a first step. Patches welcome = ).
Check out llvm's bugpoint for inspiration.

It also has a basic test of pass pipeline reduction that is triggered via the
validation testing in lit. One thing to note is that the tool currently
assumes Darwin. This can be fixed in the future.
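
For reference, the new validation test can also be run by hand with something
like the following (the build path and temp directory are placeholders):

    BUGREDUCE_TEST_SWIFT_OBJ_ROOT=/path/to/swift-build \
    BUGREDUCE_TEST_TMP_DIR=/tmp/bug_reducer_test \
    python -m unittest discover -s ./swift/utils/bug_reducer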
diff --git a/utils/bug_reducer/README.md b/utils/bug_reducer/README.md
new file mode 100644
index 0000000..a42a679
--- /dev/null
+++ b/utils/bug_reducer/README.md
@@ -0,0 +1,44 @@
+
+# SIL Bug Reducer
+
+This directory contains a script called bug_reducer. It is a reimplementation
+of llvm's bugpoint as a python script driving high-level IR utilities, which
+should make it easy to port the same approach to newer IRs.
+
+An example invocation would be:
+
+    ./swift/utils/bug_reducer/bug_reducer/bug_reducer.py \
+        opt \
+        --sdk=$(xcrun --sdk macosx --toolchain default --show-sdk-path) \
+        --module-name=${MODULE_NAME} \
+        --work-dir=${PWD}/bug_reducer \
+        --module-cache=${PWD}/bug_reducer/module-cache \
+        ${SWIFT_BUILD_DIR} \
+        ${SIB_FILE}
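+
+A sketch of an equivalent `random-search` invocation, which shuffles the perf
+pipeline looking for pass-ordering crashes (the ${...} values are
+placeholders, and --max-count defaults to 100):
+
+    ./swift/utils/bug_reducer/bug_reducer/bug_reducer.py \
+        random-search \
+        --sdk=$(xcrun --sdk macosx --toolchain default --show-sdk-path) \
+        --module-name=${MODULE_NAME} \
+        --work-dir=${PWD}/bug_reducer \
+        --module-cache=${PWD}/bug_reducer/module-cache \
+        --max-count=100 \
+        ${SWIFT_BUILD_DIR} \
+        ${SIB_FILE}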
+
+This is still very alpha and needs a lot of work, including more tests, so
+please do not expect it to work reliably yet. Please only use this if you are
+willing to tolerate bugs (and hopefully help fix them and add tests).
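+
+When the `opt` reduction succeeds, the reduced pass list is printed on a line
+of the following form (the pass names here are only illustrative):
+
+    Found miscompiling passes: -sil-combine -simplify-cfg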
+
+# Tasks
+
+1. A lot of this code was inspired by llvm's bisect tool, including a bit of
+   its code style; we should clean up and pythonify those parts of the tool.
+2. Once this is done, we should consider implementing function reduction. We
+   already have sil-func-extractor, which can extract functions from modules
+   just the way a bugpoint-style tool needs. We should wire up code to use
+   that tool to reduce functions. After that, our tool should be good enough
+   for reducing test cases. Later we can implement block/instruction
+   extraction, but this is a first step.
+3. Then we need to implement miscompile detection support. This implies
+   supporting code generation from this tool via swiftc, which in turn
+   requires splitting modules into optimized and unoptimized parts. Luckily,
+   sil-func-extractor can perform this task, modulo one issue*.
+
+* Specifically, we need to be able to create internal shims that call shared
+  functions so that if a shared function is partitioned on the opposite side of
+  any of its call sites, we can avoid having to create a shared function
+  declaration in the other partition. We avoid this since creating shared
+  function declarations is illegal in SIL.
diff --git a/utils/bug_reducer/bug_reducer/__init__.py b/utils/bug_reducer/bug_reducer/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/utils/bug_reducer/bug_reducer/__init__.py
diff --git a/utils/bug_reducer/bug_reducer/bug_reducer.py b/utils/bug_reducer/bug_reducer/bug_reducer.py
new file mode 100755
index 0000000..95d87d6
--- /dev/null
+++ b/utils/bug_reducer/bug_reducer/bug_reducer.py
@@ -0,0 +1,31 @@
+#!/usr/bin/env python
+
+import argparse
+
+import opt_bug_reducer
+
+import random_bug_finder
+
+
+def main():
+    parser = argparse.ArgumentParser(description="""\
+A program for reducing sib/sil crashers""")
+    subparsers = parser.add_subparsers()
+
+    opt_subparser = subparsers.add_parser("opt")
+    opt_subparser.add_argument('swift_build_dir',
+                               help='Path to the swift build directory '
+                               'containing tools to use')
+    opt_bug_reducer.add_parser_arguments(opt_subparser)
+
+    random_search_subparser = subparsers.add_parser("random-search")
+    random_search_subparser.add_argument('swift_build_dir',
+                                         help='Path to the swift build '
+                                         'directory containing tools to use')
+    random_bug_finder.add_parser_arguments(random_search_subparser)
+
+    args = parser.parse_args()
+    args.func(args)
+
+
+if __name__ == '__main__':
+    main()
diff --git a/utils/bug_reducer/bug_reducer/bug_reducer_utils.py b/utils/bug_reducer/bug_reducer/bug_reducer_utils.py
new file mode 100644
index 0000000..a23d6e6
--- /dev/null
+++ b/utils/bug_reducer/bug_reducer/bug_reducer_utils.py
@@ -0,0 +1,155 @@
+
+import json
+import os
+import subprocess
+
+DRY_RUN = False
+
+
+def br_call(args):
+    if DRY_RUN:
+        print('DRY RUN: ' + ' '.join(args))
+        return 0
+    # Swallow the subprocess's output; we only care about its exit status.
+    with open(os.devnull, 'w') as devnull:
+        return subprocess.call(args, stdout=devnull, stderr=devnull)
+
+
+class SwiftTools(object):
+    """A utility class that enables users to easily find sil-tools without needing
+to constantly reform paths to the build directory. Also provides safety by
+asserting if one of the tools does not exist at the specified path"""
+
+    def __init__(self, swift_build_dir):
+        self.swift_build_dir = swift_build_dir
+
+    def _get_tool(self, name):
+        path = os.path.join(self.swift_build_dir, 'bin', name)
+        if not os.access(path, os.F_OK):
+            error_msg = "Error! {} does not exist at: {}".format(name, path)
+            raise RuntimeError(error_msg)
+        return path
+
+    @property
+    def sil_nm(self):
+        """Return the path to sil-nm in the specified swift build directory. Throws a
+runtime error if the tool does not exist"""
+        return self._get_tool('sil-nm')
+
+    @property
+    def swiftc(self):
+        """Return the path to swiftc in the specified swift build directory. Throws a
+runtime error if the tool does not exist"""
+        return self._get_tool('swiftc')
+
+    @property
+    def sil_opt(self):
+        """Return the path to sil-opt in the specified swift build directory. Throws a
+runtime error if the tool does not exist"""
+        return self._get_tool('sil-opt')
+
+    @property
+    def sil_func_extractor(self):
+        """Return the path to sil-func-extractor in the specified swift build
+directory. Throws a runtime error if the tool does not exist."""
+        return self._get_tool('sil-func-extractor')
+
+    @property
+    def sil_passpipeline_dumper(self):
+        """Return the path to sil-passpipeline-dumper in the specified swift build
+directory. Throws a runtime error if the tool does not exist"""
+        return self._get_tool('sil-passpipeline-dumper')
+
+
+def maybe_abspath(x):
+    if x is None:
+        return x
+    return os.path.abspath(x)
+
+class SILOptInvoker(object):
+
+    def __init__(self, args, tools, early_passes, extra_args):
+        self.tools = tools
+        self.module_cache = args.module_cache
+        self.sdk = args.sdk
+        self.target = args.target
+        self.resource_dir = maybe_abspath(args.resource_dir)
+        self.work_dir = maybe_abspath(args.work_dir)
+        self.module_name = args.module_name
+        self.extra_args = extra_args
+
+        # Start by creating our workdir if necessary
+        subprocess.check_call(["mkdir", "-p", self.work_dir])
+
+        # Compute the stem of our input file. All intermediate files are
+        # written into the work dir with a .sib extension.
+        base_input_file = os.path.basename(args.input_file)
+        (base, _) = os.path.splitext(base_input_file)
+        self.base_input_file_stem = base
+        self.base_input_file_ext = ".sib"
+
+        # First emit an initial *.sib file. This ensures that whether we start
+        # from a *.swiftmodule, *.sil, or *.sib file, we are always working
+        # with *.sib from here on.
+        self._invoke(args.input_file, self.get_suffixed_filename('initial'),
+                     [])
+        self.input_file = self.get_suffixed_filename('initial+early')
+
+        # Finally, run the initial sil-opt invocation with the early passes.
+        # We will not run them again.
+        self._invoke(self.get_suffixed_filename('initial'),
+                     self.input_file,
+                     early_passes)
+
+    def get_suffixed_filename(self, suffix):
+        basename = self.base_input_file_stem + '_' + suffix
+        basename += self.base_input_file_ext
+        return os.path.join(self.work_dir, basename)
+
+    @property
+    def base_args(self):
+        x = [self.tools.sil_opt, "-emit-sib"]
+        if self.sdk is not None:
+            x.append("-sdk=%s" % self.sdk)
+        if self.target is not None:
+            x.append("-target=%s" % self.target)
+        if self.resource_dir is not None:
+            x.append("-resource-dir=%s" % self.resource_dir)
+        if self.module_cache is not None:
+            x.append("-module-cache-path=%s" % self.module_cache)
+        if self.module_name is not None:
+            x.append("-module-name=%s" % self.module_name)
+        return x
+
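+    # _invoke() combines base_args with the input/output files, any extra
+    # arguments, and the pass flags, so a full command line looks roughly
+    # like this (illustrative values):
+    #
+    #   sil-opt -emit-sib -sdk=<sdk> -module-name=<name> <input>.sib \
+    #       -o <output>.sib <extra args> <pass flags>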
+    def _invoke(self, input_file, output_file, passes):
+        base_args = self.base_args
+        base_args.extend([
+            input_file,
+            '-o', output_file])
+        base_args.extend(self.extra_args)
+        base_args.extend(passes)
+        return br_call(base_args)
+
+    def invoke_with_passlist(self, output_filename, passes):
+        return self._invoke(self.input_file, output_filename, passes)
+
+
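+# Note: this function (and the duplicated logic in opt_bug_reducer.py and
+# random_bug_finder.py) assumes `sil-passpipeline-dumper -Performance` emits
+# JSON shaped roughly as follows. This shape is inferred from how the output
+# is consumed below, not from the tool's documentation:
+#
+#   [["EarlyModulePasses", <execution kind>, [<id>, "pass-name"], ...],
+#    ["SomePipeline", <execution kind>, [<id>, "pass-name"], ...],
+#    ...]
+#
+# That is, entry[0] is the pipeline name, entry[1] its execution kind, and
+# entry[2:] its passes, where each pass's second element is the name that
+# sil-opt accepts as a '-<pass-name>' flag.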
+def get_silopt_invoker(args):
+    tools = SwiftTools(args.swift_build_dir)
+
+    # We always need the pass pipeline dump, since the early module passes are
+    # derived from it even when an explicit pass list is given.
+    json_data = json.loads(subprocess.check_output(
+        [tools.sil_passpipeline_dumper, '-Performance']))
+
+    if not args.pass_list:
+        passes = sum((p[2:] for p in json_data
+                      if p[0] != 'EarlyModulePasses'), [])
+        passes = ['-' + x[1] for x in passes]
+    else:
+        passes = ['-' + x for x in args.pass_list]
+
+    # We assume the early module pass pipeline runs until fixed point and that
+    # it is strictly not what is causing the issue.
+    #
+    # Everything else runs one iteration.
+    early_module_passes = [p[2:] for p in json_data
+                           if p[0] == 'EarlyModulePasses'][0]
+    early_module_passes = ['-' + x[1] for x in early_module_passes]
+
+    extra_args = getattr(args, 'extra_args', None) or []
+    return SILOptInvoker(args, tools, early_module_passes, extra_args)
diff --git a/utils/bug_reducer/bug_reducer/list_reducer.py b/utils/bug_reducer/bug_reducer/list_reducer.py
new file mode 100644
index 0000000..d14ece5
--- /dev/null
+++ b/utils/bug_reducer/bug_reducer/list_reducer.py
@@ -0,0 +1,186 @@
+
+import random
+
+TESTRESULT_NOFAILURE = "NoFailure"
+TESTRESULT_KEEPSUFFIX = "KeepSuffix"
+TESTRESULT_KEEPPREFIX = "KeepPrefix"
+TESTRESULTS = set([TESTRESULT_NOFAILURE, TESTRESULT_KEEPSUFFIX,
+                   TESTRESULT_KEEPPREFIX])
+
+
+class ListReducer(object):
+    """Reduce lists of objects. Inspired by llvm bugpoint"""
+
+    def __init__(self, l):
+        self.target_list = l
+        # Maximal number of allowed splitting iterations,
+        # before the elements are randomly shuffled.
+        self.max_iters_without_progress = 3
+
+        # Maximal number of allowed single-element trim iterations. We add a
+        # threshold here as single-element reductions may otherwise take a
+        # very long time to complete.
+        self.max_trim_iterations_without_back_jump = 3
+        self.shuffling_enabled = True
+
+        self.num_iters_without_progress = 0
+        self.mid_top = 0
+        self.max_iters = self.max_iters_without_progress
+
+    def _reset_progress(self):
+        self.max_iters = self.max_iters_without_progress
+        self.num_iters_without_progress = 0
+
+    def run_test(self, prefix, suffix):
+        raise RuntimeError("Abstract method")
+
+    def _should_continue(self, result):
+        if result == TESTRESULT_KEEPPREFIX:
+            # We are done, we have the base case and the base case fails.
+            if len(self.target_list) == 1:
+                return (False, True)
+            # There is an error, and we can narrow it down further.
+            return (True, None)
+
+        if result == TESTRESULT_NOFAILURE:
+            # The failure did not reproduce on the full list, so there is
+            # nothing left to reduce.
+            return (False, False)
+
+        if result == TESTRESULT_KEEPSUFFIX:
+            raise RuntimeError("ListReducer internal error: Selected empty "
+                               "set!")
+        raise RuntimeError('Unknown test result: %s' % result)
+
+    def _test_shuffle_slow_converging_list(self):
+        if not self.shuffling_enabled or \
+           self.num_iters_without_progress <= self.max_iters_without_progress:
+            return
+
+        print("*** Testing shuffled set...")
+        shuffled_list = list(self.target_list)
+        random.shuffle(shuffled_list)
+        # TODO: Is this correct? I guess we are always doing something.
+        self.num_iters_without_progress = 0
+
+        # Check that the random shuffle does not lose the bug.
+        (result, _, _) = self.run_test(shuffled_list, [])
+        if result != TESTRESULT_KEEPPREFIX:
+            # If we fail here, disable any further shuffling...
+            self.shuffling_enabled = False
+            print("*** Shuffling hides the bug...")
+            return
+
+        self.mid_top = len(shuffled_list)
+        self.max_iters = self.max_iters + 2
+        print("*** Shuffling does not hide the bug...")
+        self.target_list = shuffled_list
+
+    def _test_prefix_suffix(self, mid, prefix, suffix):
+        (result, prefix, suffix) = self.run_test(prefix, suffix)
+
+        if result == TESTRESULT_KEEPSUFFIX:
+            # The property still holds. We can just drop the prefix
+            # elements, and shorten the list to the "kept" elements.
+            self.target_list = suffix
+            self.mid_top = len(self.target_list)
+
+            # Reset the progress threshold
+            self._reset_progress()
+            return False
+
+        if result == TESTRESULT_KEEPPREFIX:
+            # The predicate still holds, shorten the list to the prefix
+            # elements.
+            self.target_list = prefix
+            self.mid_top = len(self.target_list)
+            self._reset_progress()
+            return False
+
+        assert(result == TESTRESULT_NOFAILURE)
+        # The property does not hold. Some of the elements we removed must
+        # be necessary to maintain the property.
+        self.mid_top = mid
+        self.num_iters_without_progress = \
+            self.num_iters_without_progress + 1
+        return False
+
+    def _trim_target_list(self):
+        self.mid_top = len(self.target_list)
+        self.max_iters = self.max_iters_without_progress
+
+        # Binary split reduction loop
+        while self.mid_top > 1:
+            # If the loop doesn't make satisfying progress, try shuffling.
+            # The purpose of shuffling is to avoid the heavy tails of the
+            # distribution (improving the speed of convergence).
+            self._test_shuffle_slow_converging_list()
+
+            # Split the list into a prefix, suffix list and then run test on
+            # those.
+            mid = self.mid_top // 2
+            if not self._test_prefix_suffix(mid, self.target_list[:mid],
+                                            self.target_list[mid:]):
+                # If we returned false, then we did some sort of work and there
+                # was not an error, so continue.
+                continue
+
+            # Otherwise, the test routine signaled an error, so return True to
+            # signal error.
+            return True
+        # If we reach this point, return False, we have no further work we can
+        # do.
+        return False
+
+    def _trim_try_backjump_and_trim_suffix(self):
+        backjump_probability = 10
+        if len(self.target_list) <= 2:
+            return False
+        changed = True
+        trim_iters = 0
+
+        # Trimming loop
+        while changed:
+            changed = False
+
+            # If the binary split reduction loop made an unfortunate sequence
+            # of splits, the trimming loop might be left off with a huge
+            # number of remaining elements (large search space). Backjumping
+            # out of that search space and attempting a different split can
+            # significantly improve the convergence speed.
+            if random.randint(0, 100) < backjump_probability:
+                return True
+
+            # Check interior elements, using an offset to make sure we do not
+            # skip elements when we trim.
+            offset = 0
+            for i in range(1, len(self.target_list)-1):
+                real_i = i + offset
+                test_list = self.target_list[real_i:]
+                (result, prefix, suffix) = self.run_test([], test_list)
+                if result == TESTRESULT_KEEPSUFFIX:
+                    # We can trim the list!
+                    self.target_list = test_list
+                    offset = offset - 1
+                    changed = True
+            if trim_iters >= self.max_trim_iterations_without_back_jump:
+                return False
+            trim_iters = trim_iters + 1
+
+        # No further trimming progress was made, so do not back jump.
+        return False
+
+    def reduce_list(self):
+        random.seed(0x6e5ea738)  # Seed the random number generator
+        (result, self.target_list, kept) = self.run_test(self.target_list, [])
+        assert(result in TESTRESULTS)
+        (should_continue, result) = self._should_continue(result)
+        if not should_continue:
+            return result
+
+        # Now try to trim the list.
+        should_backjump = True
+        while should_backjump:
+            # If self._trim_target_list returns True, then we failed to
+            # reduce. Bail!
+            if self._trim_target_list():
+                return False
+
+            # Finally decide if we should back_jump
+            should_backjump = self._trim_try_backjump_and_trim_suffix()
+
+        # There is some failure and we've narrowed it down.
+        return True
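+
+
+# The block below is purely illustrative and is not used by the tool: it shows
+# the run_test() contract a ListReducer subclass must satisfy. run_test()
+# receives a (prefix, suffix) split of target_list and reports which side
+# still exhibits the failure; here the "failure" is simply that the list
+# contains the element 42, so reduce_list() narrows target_list down to [42].
+if __name__ == '__main__':
+    class _FindBadElement(ListReducer):
+        def run_test(self, prefix, suffix):
+            if 42 in suffix:
+                return (TESTRESULT_KEEPSUFFIX, prefix, suffix)
+            if 42 in prefix:
+                return (TESTRESULT_KEEPPREFIX, prefix, suffix)
+            return (TESTRESULT_NOFAILURE, prefix, suffix)
+
+    reducer = _FindBadElement(list(range(100)))
+    print("Reduction succeeded: %s" % reducer.reduce_list())
+    print("Reduced list: %s" % reducer.target_list)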
diff --git a/utils/bug_reducer/bug_reducer/opt_bug_reducer.py b/utils/bug_reducer/bug_reducer/opt_bug_reducer.py
new file mode 100644
index 0000000..25fcd1b
--- /dev/null
+++ b/utils/bug_reducer/bug_reducer/opt_bug_reducer.py
@@ -0,0 +1,150 @@
+
+import hashlib
+import json
+import subprocess
+
+import bug_reducer_utils
+
+import list_reducer
+from list_reducer import TESTRESULT_KEEPPREFIX
+from list_reducer import TESTRESULT_KEEPSUFFIX
+from list_reducer import TESTRESULT_NOFAILURE
+
+
+class ReduceMiscompilingPasses(list_reducer.ListReducer):
+
+    def __init__(self, l, invoker):
+        list_reducer.ListReducer.__init__(self, l)
+        self.invoker = invoker
+
+    def run_test(self, prefix, suffix):
+        # First, run the program with just the Suffix passes.  If it is still
+        # broken with JUST the kept passes, discard the prefix passes.
+        suffix_joined = ' '.join(suffix)
+        suffix_hash = hashlib.md5(suffix_joined.encode('utf-8')).hexdigest()
+        print("Checking to see if '%s' compiles correctly" % suffix_joined)
+
+        result = self.invoker.invoke_with_passlist(
+            self.invoker.get_suffixed_filename(suffix_hash),
+            suffix)
+
+        # Found a miscompile! Keep the suffix
+        if result != 0:
+            print("Suffix maintains the predicate. Returning suffix")
+            return (TESTRESULT_KEEPSUFFIX, prefix, suffix)
+
+        if len(prefix) == 0:
+            print("Suffix passes and no prefix, returning nofailure")
+            return (TESTRESULT_NOFAILURE, prefix, suffix)
+
+        # Next see if the program is broken if we run the "prefix" passes
+        # first, then separately run the "kept" passes.
+        prefix_joined = ' '.join(prefix)
+        prefix_hash = hashlib.md5(prefix_joined.encode('utf-8')).hexdigest()
+        print("Checking to see if '%s' compiles correctly" % prefix_joined)
+
+        # If it is not broken with the kept passes, it's possible that the
+        # prefix passes must be run before the kept passes to break it.  If
+        # the program WORKS after the prefix passes, but then fails if running
+        # the prefix AND kept passes, we can update our sib file to include
+        # the result of the prefix passes and then discard the prefix passes.
+        prefix_path = self.invoker.get_suffixed_filename(prefix_hash)
+        result = self.invoker.invoke_with_passlist(
+            prefix_path,
+            prefix)
+        if result != 0:
+            print("Prefix maintains the predicate by itself. Returning keep "
+                  "prefix")
+            return (TESTRESULT_KEEPPREFIX, prefix, suffix)
+
+        # Ok, so now we know that the prefix passes work, first check if we
+        # actually have any suffix passes. If we don't, just return.
+        if len(suffix) == 0:
+            print("No suffix, and prefix passes, returning no failure")
+            return (TESTRESULT_NOFAILURE, prefix, suffix)
+
+        # Otherwise, treat the prefix as our new baseline and see if suffix on
+        # the new baseline finds the crash.
+        original_input_file = self.invoker.input_file
+        self.invoker.input_file = prefix_path
+        print("Checking to see if '%s' compiles correctly after the '%s' "
+              "passes" % (suffix_joined, prefix_joined))
+        result = self.invoker.invoke_with_passlist(
+            self.invoker.get_suffixed_filename(suffix_hash),
+            suffix)
+
+        # If we failed at this point, then the prefix is our new
+        # baseline. Return keep suffix.
+        if result != 0:
+            print("Suffix failed. Keeping prefix as new baseline")
+            return (TESTRESULT_KEEPSUFFIX, prefix, suffix)
+
+        # Otherwise, we must not be running the bad pass anymore. Restore the
+        # original input_file and return NoFailure.
+        self.invoker.input_file = original_input_file
+        return (TESTRESULT_NOFAILURE, prefix, suffix)
+
+
+def pass_bug_reducer(args):
+    """Given a path to a sib file with canonical sil, attempt to find a perturbed
+list of passes that the perf pipeline"""
+    tools = bug_reducer_utils.SwiftTools(args.swift_build_dir)
+
+    passes = []
+    early_module_passes = []
+    if args.pass_list is None:
+        json_data = json.loads(subprocess.check_output(
+            [tools.sil_passpipeline_dumper, '-Performance']))
+        passes = sum((p[2:] for p in json_data if p[0] != 'EarlyModulePasses'), [])
+        passes = ['-' + x[1] for x in passes]
+        # We assume the early module pass pipeline runs until fixed point and
+        # that it is strictly not what is causing the issue.
+        #
+        # Everything else runs one iteration.
+        early_module_passes = [p[2:] for p in json_data
+                               if p[0] == 'EarlyModulePasses'][0]
+        early_module_passes = ['-' + x[1] for x in early_module_passes]
+    else:
+        passes = ['-' + x for x in args.pass_list]
+
+    extra_args = []
+    if args.extra_args is not None:
+        extra_args = args.extra_args
+    sil_opt_invoker = bug_reducer_utils.SILOptInvoker(args, tools,
+                                                      early_module_passes,
+                                                      extra_args)
+
+    # Make sure that the base case /does/ crash.
+    filename = sil_opt_invoker.get_suffixed_filename('base_case')
+    result = sil_opt_invoker.invoke_with_passlist(filename, passes)
+    # If we succeed, there is no further work to do.
+    if result == 0:
+        print("Success with PassList: %s" % (' '.join(passes)))
+        return
+
+    # Otherwise, reduce the list of passes that causes the optimizer to crash.
+    r = ReduceMiscompilingPasses(passes, sil_opt_invoker)
+    if not r.reduce_list():
+        print("Failed to find miscompiling pass list!")
+        return
+    print("Found miscompiling passes: %s" % (' '.join(r.target_list)))
+
+
+def add_parser_arguments(parser):
+    """Add parser arguments for opt_bug_reducer"""
+    parser.set_defaults(func=pass_bug_reducer)
+    parser.add_argument('input_file', help='The input file to optimize')
+    parser.add_argument('--module-cache', help='The module cache to use')
+    parser.add_argument('--sdk', help='The sdk to pass to sil-opt')
+    parser.add_argument('--target', help='The target to pass to sil-opt')
+    parser.add_argument('--resource-dir',
+                        help='The resource-dir to pass to sil-opt')
+    parser.add_argument('--work-dir',
+                        help='Working directory to use for temp files',
+                        default='bug_reducer')
+    parser.add_argument('--module-name',
+                        help='The name of the module we are optimizing')
+    parser.add_argument('--pass', help='pass to test', dest='pass_list',
+                        action='append')
+    parser.add_argument('--extra-arg', help='extra argument to pass to sil-opt',
+                        dest='extra_args', action='append')
diff --git a/utils/bug_reducer/bug_reducer/random_bug_finder.py b/utils/bug_reducer/bug_reducer/random_bug_finder.py
new file mode 100644
index 0000000..514139d
--- /dev/null
+++ b/utils/bug_reducer/bug_reducer/random_bug_finder.py
@@ -0,0 +1,67 @@
+
+import json
+import random
+import subprocess
+import sys
+
+import bug_reducer_utils
+
+
+def construct_pipeline(p):
+    return (p[0], {'execution_kind': p[1], 'passes': p[2:]})
+
+
+def random_bug_finder(args):
+    """Given a path to a sib file with canonical sil, attempt to find a perturbed
+list of passes that the perf pipeline"""
+    tools = bug_reducer_utils.SwiftTools(args.swift_build_dir)
+
+    json_data = json.loads(subprocess.check_output(
+        [tools.sil_passpipeline_dumper, '-Performance']))
+    passes = sum((p[2:] for p in json_data if p[0] != 'EarlyModulePasses'), [])
+    passes = ['-' + x[1] for x in passes]
+
+    # We assume the early module pass pipeline runs until fixed point and that
+    # it is strictly not what is causing the issue.
+    #
+    # Everything else runs one iteration.
+    early_module_passes = [p[2:] for p in json_data
+                           if p[0] == 'EarlyModulePasses'][0]
+    early_module_passes = ['-' + x[1] for x in early_module_passes]
+
+    sil_opt_invoker = bug_reducer_utils.SILOptInvoker(args, tools,
+                                                      early_module_passes,
+                                                      [])
+
+    # Try up to max_count random orderings of the pass pipeline, stopping at
+    # the first ordering that makes sil-opt crash.
+    max_count = args.max_count
+    for count in range(max_count):
+        print("Running round %i/%i" % (count + 1, max_count))
+        random.shuffle(passes)
+        filename = sil_opt_invoker.get_suffixed_filename(str(count))
+        result = sil_opt_invoker.invoke_with_passlist(filename, passes)
+        if result == 0:
+            print("Success with PassList: %s" % (' '.join(passes)))
+        else:
+            print("Fail with PassList: %s" % (' '.join(passes)))
+            print("Output File: %s" % filename)
+            sys.exit(-1)
+
+
+def add_parser_arguments(parser):
+    """Add parser arguments for random_bug_reducer"""
+    parser.set_defaults(func=random_bug_finder)
+    parser.add_argument('input_file', help='The input file to optimize')
+    parser.add_argument('--module-cache', help='The module cache to use')
+    parser.add_argument('--sdk', help='The sdk to pass to sil-opt')
+    parser.add_argument('--target', help='The target to pass to sil-opt')
+    parser.add_argument('--resource-dir',
+                        help='The resource-dir to pass to sil-opt')
+    parser.add_argument('--work-dir',
+                        help='Working directory to use for temp files',
+                        default='bug_reducer')
+    parser.add_argument('--module-name',
+                        help='The name of the module we are optimizing')
+    parser.add_argument('--max-count',
+                        help='Maximum number of permutations to try before'
+                        ' exiting',
+                        type=int,
+                        default=100)
diff --git a/utils/bug_reducer/tests/__init__.py b/utils/bug_reducer/tests/__init__.py
new file mode 100644
index 0000000..3f0c19b
--- /dev/null
+++ b/utils/bug_reducer/tests/__init__.py
@@ -0,0 +1,16 @@
+# bug_reducer/tests/__init__.py ------------------- Test module -*- python -*-
+#
+# This source file is part of the Swift.org open source project
+#
+# Copyright (c) 2014 - 2016 Apple Inc. and the Swift project authors
+# Licensed under Apache License v2.0 with Runtime Library Exception
+#
+# See https://swift.org/LICENSE.txt for license information
+# See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
+#
+# ----------------------------------------------------------------------------
+#
+# This file needs to be here in order for Python to treat the
+# utils/bug_reducer/tests/ directory as a module.
+#
+# ----------------------------------------------------------------------------
diff --git a/utils/bug_reducer/tests/test_optbugreducer.py b/utils/bug_reducer/tests/test_optbugreducer.py
new file mode 100644
index 0000000..c1279ed
--- /dev/null
+++ b/utils/bug_reducer/tests/test_optbugreducer.py
@@ -0,0 +1,103 @@
+# ==--- test_optbugreducer.py --------------------------------------------===#
+#
+# This source file is part of the Swift.org open source project
+#
+# Copyright (c) 2014 - 2016 Apple Inc. and the Swift project authors
+# Licensed under Apache License v2.0 with Runtime Library Exception
+#
+# See https://swift.org/LICENSE.txt for license information
+# See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
+#
+# ==----------------------------------------------------------------------===#
+
+
+import json
+import os
+import platform
+import random
+import shutil
+import subprocess
+import unittest
+
+
+import bug_reducer.bug_reducer_utils as bug_reducer_utils
+
+
+@unittest.skipUnless(platform.system() == 'Darwin',
+                     'opt_bug_reducer is only available on Darwin for now')
+class OptBugReducerTestCase(unittest.TestCase):
+
+    def setUp(self):
+        self.file_dir = os.path.dirname(os.path.abspath(__file__))
+        self.reducer = os.path.join(os.path.dirname(self.file_dir),
+                                    'bug_reducer', 'bug_reducer.py')
+        self.build_dir = os.path.abspath(
+            os.environ['BUGREDUCE_TEST_SWIFT_OBJ_ROOT'])
+        self.tmp_dir = os.path.abspath(os.environ['BUGREDUCE_TEST_TMP_DIR'])
+
+        self.module_cache = os.path.join(self.tmp_dir, 'module_cache')
+        self.sdk = subprocess.check_output(['xcrun', '--sdk', 'macosx',
+                                            '--toolchain', 'Default',
+                                            '--show-sdk-path']).strip("\n")
+        self.tools = bug_reducer_utils.SwiftTools(self.build_dir)
+        json_data = json.loads(subprocess.check_output(
+            [self.tools.sil_passpipeline_dumper, '-Performance']))
+        self.passes = []
+        for y in (x[2:] for x in json_data):
+            for z in y:
+                self.passes.append('--pass=-' + z[1])
+        random.seed(0xf487c07f)
+        random.shuffle(self.passes)
+        self.passes.insert(random.randint(0, len(self.passes)),
+                           '--pass=-bug-reducer-tester')
+
+        if os.access(self.tmp_dir, os.F_OK):
+            shutil.rmtree(self.tmp_dir)
+        os.makedirs(self.tmp_dir)
+        os.makedirs(self.module_cache)
+
+    def _get_test_file_path(self, module_name):
+        (root, _) = os.path.splitext(os.path.abspath(__file__))
+        root_basename = ''.join(os.path.basename(root).split('_'))
+        return os.path.join(self.file_dir,
+                            '{}_{}.swift'.format(root_basename, module_name))
+
+    def _get_sib_file_path(self, filename):
+        (root, ext) = os.path.splitext(filename)
+        return os.path.join(self.tmp_dir, os.path.basename(root) + '.sib')
+
+    def run_swiftc_command(self, name):
+        input_file_path = self._get_test_file_path(name)
+        args = [self.tools.swiftc,
+                '-module-cache-path', self.module_cache,
+                '-sdk', self.sdk,
+                '-Onone', '-parse-as-library',
+                '-module-name', name,
+                '-emit-sib',
+                '-resource-dir', os.path.join(self.build_dir, 'lib', 'swift'),
+                '-o', self._get_sib_file_path(input_file_path),
+                input_file_path]
+        # Uncomment to debug the swiftc invocation:
+        # print(' '.join(args))
+        return subprocess.check_call(args)
+
+    def test_basic(self):
+        name = 'testbasic'
+        result_code = self.run_swiftc_command(name)
+        assert result_code == 0, "Failed initial compilation"
+        args = [
+            self.reducer,
+            'opt',
+            self.build_dir,
+            self._get_sib_file_path(self._get_test_file_path(name)),
+            '--sdk=%s' % self.sdk,
+            '--module-cache=%s' % self.module_cache,
+            '--module-name=%s' % name,
+            '--work-dir=%s' % self.tmp_dir,
+            '--extra-arg=-bug-reducer-tester-target-func=test_target'
+        ]
+        args.extend(self.passes)
+        output = subprocess.check_output(args).split("\n")
+        success_message = 'Found miscompiling passes: --bug-reducer-tester'
+        self.assertTrue(success_message in output)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/utils/bug_reducer/tests/testoptbugreducer_testbasic.swift b/utils/bug_reducer/tests/testoptbugreducer_testbasic.swift
new file mode 100644
index 0000000..651b75c
--- /dev/null
+++ b/utils/bug_reducer/tests/testoptbugreducer_testbasic.swift
@@ -0,0 +1,19 @@
+
+import Swift
+
+@_silgen_name("test_target")
+@inline(never)
+public func foo3() {
+}
+
+public func foo1() {
+  print("1")
+}
+
+public func foo2() {
+  print("3")
+  foo1()
+  foo3()
+}
+
+
diff --git a/validation-test/Python/bug-reducer.test-sh b/validation-test/Python/bug-reducer.test-sh
new file mode 100644
index 0000000..4042539
--- /dev/null
+++ b/validation-test/Python/bug-reducer.test-sh
@@ -0,0 +1,2 @@
+// RUN: BUGREDUCE_TEST_TMP_DIR=%t BUGREDUCE_TEST_SWIFT_OBJ_ROOT=%swift_obj_root %{python} -m unittest discover -s %utils/bug_reducer
+// REQUIRES: OS=macosx