Confirmed GPU support for S4TF. (#17230)

Made minor code changes to avoid unwrapping a nil value. For example, if the
NVIDIA driver is not properly installed, TFE_NewContext(opts, status) can
return an error from TF:

Fatal error: cudaGetDevice() failed. Status: CUDA driver version is insufficient
for CUDA runtime version: file
/usr/local/google/home/hongm/ssd_part/git/swift-source-201806012/swift/stdlib/public/TensorFlow/CompilerRuntime.swift,
line 244
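
The fix (see the diff below) follows the usual pattern of checking the
TF_Status before force-unwrapping the returned pointer. A minimal sketch of
that pattern, assuming only TF's C API (TF_GetCode, TF_Message, TF_OK) and a
checkOk helper along the lines of the one in Utilities.swift:

import CTensorFlow

// Aborts with the TF error message when `status` is not TF_OK.
// (Sketch of a helper like checkOk in Utilities.swift.)
func checkOk(_ status: OpaquePointer?) {
  guard TF_GetCode(status) == TF_OK else {
    fatalError(String(cString: TF_Message(status)))
  }
}

let status = TF_NewStatus()
guard let opts = TFE_NewContextOptions() else {
  fatalError("ContextOptions object can never be nil.")
}
// TFE_NewContext returns nil and sets `status` on failure, e.g. when the
// CUDA driver version is insufficient for the CUDA runtime version.
let ctx = TFE_NewContext(opts, status)
TFE_DeleteContextOptions(opts)
checkOk(status)      // surfaces the TF error instead of crashing on nil
let cContext = ctx!  // safe to unwrap only after the status check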

To build and run the tests with CUDA support, follow the steps below. They can
be documented in the README when there is user demand, and integrated into CI
once we acquire GPU machines:

1. Download and install the CUDA driver, CUDA Toolkit, and cuDNN, as described in
https://www.tensorflow.org/install/install_sources#optional_install_tensorflow_for_gpu_prerequisites.

2. Set the following environment variables before running build-script. They
   are required by the bazel-based TF build system. Customize the paths in the
   last three variables below.

export TF_NEED_CUDA="1"
export TF_CUDA_VERSION=9.0
export CUDA_TOOLKIT_PATH=/usr/local/nvidia/cuda-9.0
export CUDNN_INSTALL_PATH=/usr/lib/x86_64-linux-gnu/
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc-6

3. Run build-script with the extra option --tensorflow_bazel_options=--config=cuda, e.g.:

utils/build-script --enable-tensorflow --release-debuginfo --tensorflow_bazel_options=--config=cuda

4. When running S4TF runtime tests via lit, turn off thread-level parallelism
   with -j1 to avoid GPU OOM (out of memory) errors, e.g.:

llvm/utils/lit/lit.py -j1 -sv --param swift_test_mode=optimize --param swift_tensorflow_path=../build/bazel-bin/tensorflow build/Ninja-ReleaseAssert+stdlib-Release/swift-linux-x86_64/test-linux-x86_64/TensorFlowRuntime/

Another option, for future reference, is to set the TF session config proto
field config.gpu_options.allow_growth to
true (e.g. https://stackoverflow.com/questions/34199233/how-to-prevent-tensorflow-from-allocating-the-totality-of-a-gpu-memory).
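
If we go that route, a sketch of how it could be done from the Swift runtime,
using TFE_ContextOptionsSetConfig from TF's eager C API and the checkOk helper
sketched above. The 4-byte array is assumed to be the protobuf serialization of
a ConfigProto with only gpu_options.allow_growth = true (gpu_options as field 6,
allow_growth as field 4); verify against tensorflow/core/protobuf/config.proto
before relying on it:

import CTensorFlow

// Serialized ConfigProto { gpu_options { allow_growth: true } }.
// Field numbers are an assumption; verify against config.proto.
let configBytes: [UInt8] = [0x32, 0x02, 0x20, 0x01]

let status = TF_NewStatus()
let opts = TFE_NewContextOptions()
configBytes.withUnsafeBufferPointer { buf in
  // Attach the session config proto to the eager context options.
  TFE_ContextOptionsSetConfig(opts, buf.baseAddress, buf.count, status)
}
checkOk(status)
let ctx = TFE_NewContext(opts, status)
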
diff --git a/stdlib/public/TensorFlow/CompilerRuntime.swift b/stdlib/public/TensorFlow/CompilerRuntime.swift
index a02e787..94f27b5 100644
--- a/stdlib/public/TensorFlow/CompilerRuntime.swift
+++ b/stdlib/public/TensorFlow/CompilerRuntime.swift
@@ -232,8 +232,12 @@
     InitTensorFlowRuntime(_RuntimeConfig.printsDebugLog ? 1 : 0,
                           _RuntimeConfig.tensorflowVerboseLogLevel)
 
-    let opts = TFE_NewContextOptions()
-    cContext = TFE_NewContext(opts, status)
+    guard let opts = TFE_NewContextOptions() else {
+      fatalError("ContextOptions object can never be nil.")
+    }
+    let ctx = TFE_NewContext(opts, status)
+    checkOk(status)
+    self.cContext = ctx!
     TFE_DeleteContextOptions(opts)
     checkOk(status)
 
diff --git a/stdlib/public/TensorFlow/TensorHandle.swift b/stdlib/public/TensorFlow/TensorHandle.swift
index e1525b5..0634c2a 100644
--- a/stdlib/public/TensorFlow/TensorHandle.swift
+++ b/stdlib/public/TensorFlow/TensorHandle.swift
@@ -151,6 +151,8 @@
         cTensorHandle, ctx, context.cpuDeviceName, status
       )
       checkOk(status)
+      internalConsistencyCheck(ret != nil,
+                               "TFE_TensorHandleCopyToDevice() returned nil.")
       return ret!
     }
     defer { TFE_DeleteTensorHandle(hostHandle) }
diff --git a/stdlib/public/TensorFlow/Utilities.swift b/stdlib/public/TensorFlow/Utilities.swift
index 20e0c46..28c2dab 100644
--- a/stdlib/public/TensorFlow/Utilities.swift
+++ b/stdlib/public/TensorFlow/Utilities.swift
@@ -63,6 +63,9 @@
 // Type aliases
 //===----------------------------------------------------------------------===//
 
+// Before assigning a C pointer to one of the pointer type aliases below, the
+// caller should check that the pointer is not NULL.
+
 /// The `TF_Session *` type.
 typealias CTFSession = OpaquePointer