scanner: use infinitive verb after auxiliary word could

Could, like should, shall, must, may, can, and might, is an
auxiliary (modal) verb, and an auxiliary verb should be followed
by a bare infinitive: for example, "could not found" should read
"could not find".
diff --git a/CHANGES b/CHANGES
new file mode 100644
index 0000000..938dc46
--- /dev/null
+++ b/CHANGES
@@ -0,0 +1,147 @@
+
+For a complete Mercurial changelog, see
+'https://bitbucket.org/xi/pyyaml/commits'.
+
+3.11 (2014-03-26)
+-----------------
+
+* Source and binary distributions are rebuilt against the latest
+  versions of Cython and LibYAML.
+
+3.10 (2011-05-30)
+-----------------
+
+* Do not try to build LibYAML bindings on platforms other than CPython
+  (Thanks to olt(at)bogosoft(dot)com).
+* Clear cyclic references in the parser and the emitter
+  (Thanks to kristjan(at)ccpgames(dot)com).
+* Dropped support for Python 2.3 and 2.4.
+
+3.09 (2009-08-31)
+-----------------
+
+* Fixed an obscure scanner error that was not reported when there is
+  no line break at the end of the stream (Thanks to Ingy).
+* Fixed use of uninitialized memory when emitting anchors with
+  LibYAML bindings (Thanks to cegner(at)yahoo-inc(dot)com).
+* Fixed emitting incorrect BOM characters for UTF-16 (Thanks to
+  Valentin Nechayev).
+* Fixed the emitter for folded scalars not respecting the preferred
+  line width (Thanks to Ingy).
+* Fixed a subtle ordering issue with emitting '%TAG' directives
+  (Thanks to Andrey Somov).
+* Fixed performance regression with LibYAML bindings.
+
+
+3.08 (2008-12-31)
+-----------------
+
+* Python 3 support (Thanks to Erick Tryzelaar).
+* Use Cython instead of Pyrex to build LibYAML bindings.
+* Refactored support for unicode and byte input/output streams.
+
+
+3.07 (2008-12-29)
+-----------------
+
+* The emitter learned to use an optional indentation indicator
+  for block scalars; thus scalars with leading whitespace can now
+  be represented in a literal or folded style.
+* The test suite is now included in the source distribution.
+  To run the tests, type 'python setup.py test'.
+* Refactored the test suite: dropped unittest in favor of
+  a custom test appliance.
+* Fixed the path resolver in CDumper.
+* Forced an explicit document end indicator when there is
+  a possibility of parsing ambiguity.
+* More setup.py improvements: the package should be usable
+  when any combination of setuptools, Pyrex and LibYAML
+  is installed.
+* Windows binary packages are built against LibYAML-0.1.2.
+* Minor typos and corrections (Thanks to Ingy dot Net
+  and Andrey Somov).
+
+
+3.06 (2008-10-03)
+-----------------
+
+* setup.py checks whether LibYAML is installed and if so, builds
+  and installs LibYAML bindings.  To force or disable installation
+  of LibYAML bindings, use '--with-libyaml' or '--without-libyaml'
+  respectively.
+* The source distribution includes compiled Pyrex sources so
+  building LibYAML bindings no longer requires Pyrex installed.
+* 'yaml.load()' raises an exception if the input stream contains
+  more than one YAML document.
+* Fixed exceptions produced by LibYAML bindings.
+* Fixed a dot '.' character being recognized as !!float.
+* Fixed Python 2.3 compatibility issue in constructing !!timestamp values.
+* Windows binary packages are built against the LibYAML stable branch.
+* Added attributes 'yaml.__version__' and 'yaml.__with_libyaml__'.
+
+
+3.05 (2007-05-13)
+-----------------
+
+* Windows binary packages were built with LibYAML trunk.
+* Fixed a bug that prevented processing a live stream of YAML documents
+  in a timely manner (Thanks to edward(at)sweetbytes(dot)net).
+* Fixed a bug when the path in add_path_resolver contains boolean values
+  (Thanks to jstroud(at)mbi(dot)ucla(dot)edu).
+* Fixed loss of microsecond precision in timestamps
+  (Thanks to edemaine(at)mit(dot)edu).
+* Fixed loading an empty YAML stream.
+* Allowed immutable subclasses of YAMLObject.
+* Made the encoding of the unicode->str conversion explicit so that
+  the conversion does not depend on the default Python encoding.
+* Forced emitting float values in a YAML compatible form.
+
+
+3.04 (2006-08-20)
+-----------------
+
+* Include experimental LibYAML bindings.
+* Fully support recursive structures.
+* Sort dictionary keys.  Mapping node values are now represented
+  as lists of pairs instead of dictionaries.  No longer check
+  for duplicate mapping keys as it didn't work correctly anyway.
+* Fix invalid output of single-quoted scalars in cases when a single
+  quote is not escaped when preceded by whitespace or line breaks.
+* To make porting easier, rewrite Parser without using generators.
+* Fix handling of unexpected block mapping values.
+* Fix a bug in Representer.represent_object: copy_reg.dispatch_table
+  was not correctly handled.
+* Fix a bug when a block scalar is incorrectly emitted in the simple
+  key context.
+* Hold references to the objects being represented.
+* Make Representer not try to guess !!pairs when a list is represented.
+* Fix timestamp construction and representation.
+* Fix the 'N' plain scalar being incorrectly recognized as !!bool.
+
+
+3.03 (2006-06-19)
+-----------------
+
+* Fix Python 2.5 compatibility issues.
+* Fix numerous bugs in the float handling.
+* Fix scanning some ill-formed documents.
+* Other minor fixes.
+
+
+3.02 (2006-05-15)
+-----------------
+
+* Fix win32 installer.  Apparently bdist_wininst does not work well
+  under Linux.
+* Fix a bug in add_path_resolver.
+* Add the yaml-highlight example.  Try to run on a color terminal:
+  `python yaml_hl.py <any_document.yaml`.
+
+
+3.01 (2006-05-07)
+-----------------
+
+* Initial release.  The version number reflects the codename
+  of the project (PyYAML 3000) and differentiates it from
+  the abandoned PyYaml module.
+
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..050ced2
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,19 @@
+Copyright (c) 2006 Kirill Simonov
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/MANIFEST.in b/MANIFEST.in
new file mode 100644
index 0000000..185e780
--- /dev/null
+++ b/MANIFEST.in
@@ -0,0 +1,7 @@
+include README LICENSE CHANGES setup.py
+recursive-include lib/yaml *.py
+recursive-include lib3/yaml *.py
+recursive-include examples *.py *.cfg *.yaml
+recursive-include tests/data *
+recursive-include tests/lib *.py
+recursive-include tests/lib3 *.py
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..da249e3
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,42 @@
+
+.PHONY: default build buildext force forceext install installext test testext testall dist windist clean
+
+PYTHON=/usr/bin/python
+TEST=
+PARAMETERS=
+
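+# Typical invocations (illustrative):
+#   make build            - build the pure-Python package
+#   make testext          - build and test the LibYAML bindings
+#   make test TEST=<arg>  - pass an argument through to the test runner
+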
+build:
+	${PYTHON} setup.py build ${PARAMETERS}
+
+buildext:
+	${PYTHON} setup.py --with-libyaml build ${PARAMETERS}
+
+force:
+	${PYTHON} setup.py build -f ${PARAMETERS}
+
+forceext:
+	${PYTHON} setup.py --with-libyaml build -f ${PARAMETERS}
+
+install:
+	${PYTHON} setup.py install ${PARAMETERS}
+
+installext:
+	${PYTHON} setup.py --with-libyaml install ${PARAMETERS}
+
+test: build
+	${PYTHON} tests/lib/test_build.py ${TEST}
+
+testext: buildext
+	${PYTHON} tests/lib/test_build_ext.py ${TEST}
+
+testall:
+	${PYTHON} setup.py test
+
+dist:
+	${PYTHON} setup.py --with-libyaml sdist --formats=zip,gztar
+
+windist:
+	${PYTHON} setup.py --with-libyaml bdist_wininst
+
+clean:
+	${PYTHON} setup.py --with-libyaml clean -a
diff --git a/README b/README
new file mode 100644
index 0000000..c1edf13
--- /dev/null
+++ b/README
@@ -0,0 +1,35 @@
+PyYAML - The next generation YAML parser and emitter for Python.
+
+To install, type 'python setup.py install'.
+
+By default, the setup.py script checks whether LibYAML is installed
+and if so, builds and installs LibYAML bindings.  To skip the check
+and force installation of LibYAML bindings, use the option '--with-libyaml':
+'python setup.py --with-libyaml install'.  To disable the check and
+skip building and installing LibYAML bindings, use '--without-libyaml':
+'python setup.py --without-libyaml install'.
+
+When LibYAML bindings are installed, you may use the fast
+LibYAML-based parser and emitter as follows:
+
+    >>> yaml.load(stream, Loader=yaml.CLoader)
+    >>> yaml.dump(data, Dumper=yaml.CDumper)
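+
+If the bindings are not available, a common fallback (illustrative,
+not required) is:
+
+    >>> try:
+    ...     from yaml import CLoader as Loader, CDumper as Dumper
+    ... except ImportError:
+    ...     from yaml import Loader, Dumper
+    >>> yaml.load(stream, Loader=Loader)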
+
+PyYAML includes a comprehensive test suite.  To run the tests,
+type 'python setup.py test'.
+
+For more information, check the PyYAML homepage:
+'http://pyyaml.org/wiki/PyYAML'.
+
+For PyYAML tutorial and reference, see:
+'http://pyyaml.org/wiki/PyYAMLDocumentation'.
+
+Post your questions and opinions to the YAML-Core mailing list:
+'http://lists.sourceforge.net/lists/listinfo/yaml-core'.
+
+Submit bug reports and feature requests to the PyYAML bug tracker:
+'http://pyyaml.org/newticket?component=pyyaml'.
+
+PyYAML is written by Kirill Simonov <xi@resolvent.net>.  It is released
+under the MIT license. See the file LICENSE for more details.
+
diff --git a/announcement.msg b/announcement.msg
new file mode 100644
index 0000000..f0ff72e
--- /dev/null
+++ b/announcement.msg
@@ -0,0 +1,92 @@
+From: Kirill Simonov <xi@resolvent.net>
+To: python-list@python.org, python-announce@python.org, yaml-core@lists.sourceforge.net
+Subject: [ANN] PyYAML-3.10: YAML parser and emitter for Python
+
+========================
+ Announcing PyYAML-3.10
+========================
+
+A new bug fix release of PyYAML is now available:
+
+    http://pyyaml.org/wiki/PyYAML
+
+
+Changes
+=======
+
+* Do not try to build LibYAML bindings on platforms other than CPython;
+  this fixes installation under Jython (Thanks to olt(at)bogosoft(dot)com).
+* Clear cyclic references in the parser and the emitter
+  (Thanks to kristjan(at)ccpgames(dot)com).
+* LibYAML bindings are rebuilt with the latest version of Cython.
+* Dropped support for Python 2.3 and 2.4; currently supported versions
+  are 2.5 to 3.2.
+
+
+Resources
+=========
+
+PyYAML homepage: http://pyyaml.org/wiki/PyYAML
+PyYAML documentation: http://pyyaml.org/wiki/PyYAMLDocumentation
+
+TAR.GZ package: http://pyyaml.org/download/pyyaml/PyYAML-3.10.tar.gz
+ZIP package: http://pyyaml.org/download/pyyaml/PyYAML-3.10.zip
+Windows installers:
+    http://pyyaml.org/download/pyyaml/PyYAML-3.10.win32-py2.5.exe
+    http://pyyaml.org/download/pyyaml/PyYAML-3.10.win32-py2.6.exe
+    http://pyyaml.org/download/pyyaml/PyYAML-3.10.win32-py3.0.exe
+    http://pyyaml.org/download/pyyaml/PyYAML-3.10.win32-py3.1.exe
+    http://pyyaml.org/download/pyyaml/PyYAML-3.10.win32-py3.2.exe
+
+PyYAML SVN repository: http://svn.pyyaml.org/pyyaml
+Submit a bug report: http://pyyaml.org/newticket?component=pyyaml
+
+YAML homepage: http://yaml.org/
+YAML-core mailing list: http://lists.sourceforge.net/lists/listinfo/yaml-core
+
+
+About PyYAML
+============
+
+YAML is a data serialization format designed for human readability and
+interaction with scripting languages.  PyYAML is a YAML parser and
+emitter for Python.
+
+PyYAML features a complete YAML 1.1 parser, Unicode support, pickle
+support, a capable extension API, and sensible error messages.  PyYAML
+supports standard YAML tags and provides Python-specific tags that
+allow representing arbitrary Python objects.
+
+PyYAML is suitable for a broad range of tasks, from complex
+configuration files to object serialization and persistence.
+
+
+Example
+=======
+
+>>> import yaml
+
+>>> yaml.load("""
+... name: PyYAML
+... description: YAML parser and emitter for Python
+... homepage: http://pyyaml.org/wiki/PyYAML
+... keywords: [YAML, serialization, configuration, persistence, pickle]
+... """)
+{'keywords': ['YAML', 'serialization', 'configuration', 'persistence',
+'pickle'], 'homepage': 'http://pyyaml.org/wiki/PyYAML', 'description':
+'YAML parser and emitter for Python', 'name': 'PyYAML'}
+
+>>> print yaml.dump(_)
+name: PyYAML
+homepage: http://pyyaml.org/wiki/PyYAML
+description: YAML parser and emitter for Python
+keywords: [YAML, serialization, configuration, persistence, pickle]
+
+
+Copyright
+=========
+
+The PyYAML module is written by Kirill Simonov <xi@resolvent.net>.
+
+PyYAML is released under the MIT license.
+
diff --git a/examples/pygments-lexer/example.yaml b/examples/pygments-lexer/example.yaml
new file mode 100644
index 0000000..9c0ed9d
--- /dev/null
+++ b/examples/pygments-lexer/example.yaml
@@ -0,0 +1,302 @@
+
+#
+# Examples from the Preview section of the YAML specification
+# (http://yaml.org/spec/1.2/#Preview)
+#
+
+# Sequence of scalars
+---
+- Mark McGwire
+- Sammy Sosa
+- Ken Griffey
+
+# Mapping scalars to scalars
+---
+hr:  65    # Home runs
+avg: 0.278 # Batting average
+rbi: 147   # Runs Batted In
+
+# Mapping scalars to sequences
+---
+american:
+  - Boston Red Sox
+  - Detroit Tigers
+  - New York Yankees
+national:
+  - New York Mets
+  - Chicago Cubs
+  - Atlanta Braves
+
+# Sequence of mappings
+---
+-
+  name: Mark McGwire
+  hr:   65
+  avg:  0.278
+-
+  name: Sammy Sosa
+  hr:   63
+  avg:  0.288
+
+# Sequence of sequences
+---
+- [name        , hr, avg  ]
+- [Mark McGwire, 65, 0.278]
+- [Sammy Sosa  , 63, 0.288]
+
+# Mapping of mappings
+---
+Mark McGwire: {hr: 65, avg: 0.278}
+Sammy Sosa: {
+    hr: 63,
+    avg: 0.288
+  }
+
+# Two documents in a stream
+--- # Ranking of 1998 home runs
+- Mark McGwire
+- Sammy Sosa
+- Ken Griffey
+--- # Team ranking
+- Chicago Cubs
+- St Louis Cardinals
+
+# Documents with the end indicator
+---
+time: 20:03:20
+player: Sammy Sosa
+action: strike (miss)
+...
+---
+time: 20:03:47
+player: Sammy Sosa
+action: grand slam
+...
+
+# Comments
+---
+hr: # 1998 hr ranking
+  - Mark McGwire
+  - Sammy Sosa
+rbi:
+  # 1998 rbi ranking
+  - Sammy Sosa
+  - Ken Griffey
+
+# Anchors and aliases
+---
+hr:
+  - Mark McGwire
+  # Following node labeled SS
+  - &SS Sammy Sosa
+rbi:
+  - *SS # Subsequent occurrence
+  - Ken Griffey
+
+# Mapping between sequences
+---
+? - Detroit Tigers
+  - Chicago cubs
+:
+  - 2001-07-23
+? [ New York Yankees,
+    Atlanta Braves ]
+: [ 2001-07-02, 2001-08-12,
+    2001-08-14 ]
+
+# Inline nested mapping
+---
+# products purchased
+- item    : Super Hoop
+  quantity: 1
+- item    : Basketball
+  quantity: 4
+- item    : Big Shoes
+  quantity: 1
+
+# Literal scalars
+--- | # ASCII art
+  \//||\/||
+  // ||  ||__
+
+# Folded scalars
+--- >
+  Mark McGwire's
+  year was crippled
+  by a knee injury.
+
+# Preserved indented block in a folded scalar
+---
+>
+ Sammy Sosa completed another
+ fine season with great stats.
+
+   63 Home Runs
+   0.288 Batting Average
+
+ What a year!
+
+# Indentation determines scope
+---
+name: Mark McGwire
+accomplishment: >
+  Mark set a major league
+  home run record in 1998.
+stats: |
+  65 Home Runs
+  0.278 Batting Average
+
+# Quoted scalars
+---
+unicode: "Sosa did fine.\u263A"
+control: "\b1998\t1999\t2000\n"
+hex esc: "\x0d\x0a is \r\n"
+single: '"Howdy!" he cried.'
+quoted: ' # not a ''comment''.'
+tie-fighter: '|\-*-/|'
+
+# Multi-line flow scalars
+---
+plain:
+  This unquoted scalar
+  spans many lines.
+quoted: "So does this
+  quoted scalar.\n"
+
+# Integers
+---
+canonical: 12345
+decimal: +12_345
+sexagesimal: 3:25:45
+octal: 014
+hexadecimal: 0xC
+
+# Floating point
+---
+canonical: 1.23015e+3
+exponential: 12.3015e+02
+sexagesimal: 20:30.15
+fixed: 1_230.15
+negative infinity: -.inf
+not a number: .NaN
+
+# Miscellaneous
+---
+null: ~
+true: boolean
+false: boolean
+string: '12345'
+
+# Timestamps
+---
+canonical: 2001-12-15T02:59:43.1Z
+iso8601: 2001-12-14t21:59:43.10-05:00
+spaced: 2001-12-14 21:59:43.10 -5
+date: 2002-12-14
+
+# Various explicit tags
+---
+not-date: !!str 2002-04-28
+picture: !!binary |
+ R0lGODlhDAAMAIQAAP//9/X
+ 17unp5WZmZgAAAOfn515eXv
+ Pz7Y6OjuDg4J+fn5OTk6enp
+ 56enmleECcgggoBADs=
+application specific tag: !something |
+ The semantics of the tag
+ above may be different for
+ different documents.
+
+# Global tags
+%TAG ! tag:clarkevans.com,2002:
+--- !shape
+  # Use the ! handle for presenting
+  # tag:clarkevans.com,2002:circle
+- !circle
+  center: &ORIGIN {x: 73, y: 129}
+  radius: 7
+- !line
+  start: *ORIGIN
+  finish: { x: 89, y: 102 }
+- !label
+  start: *ORIGIN
+  color: 0xFFEEBB
+  text: Pretty vector drawing.
+
+# Unordered sets
+--- !!set
+# sets are represented as a
+# mapping where each key is
+# associated with the empty string
+? Mark McGwire
+? Sammy Sosa
+? Ken Griff
+
+# Ordered mappings
+--- !!omap
+# ordered maps are represented as
+# a sequence of mappings, with
+# each mapping having one key
+- Mark McGwire: 65
+- Sammy Sosa: 63
+- Ken Griffy: 58
+
+# Full length example
+--- !<tag:clarkevans.com,2002:invoice>
+invoice: 34843
+date   : 2001-01-23
+bill-to: &id001
+    given  : Chris
+    family : Dumars
+    address:
+        lines: |
+            458 Walkman Dr.
+            Suite #292
+        city    : Royal Oak
+        state   : MI
+        postal  : 48046
+ship-to: *id001
+product:
+    - sku         : BL394D
+      quantity    : 4
+      description : Basketball
+      price       : 450.00
+    - sku         : BL4438H
+      quantity    : 1
+      description : Super Hoop
+      price       : 2392.00
+tax  : 251.42
+total: 4443.52
+comments:
+    Late afternoon is best.
+    Backup contact is Nancy
+    Billsmer @ 338-4338.
+
+# Another full-length example
+---
+Time: 2001-11-23 15:01:42 -5
+User: ed
+Warning:
+  This is an error message
+  for the log file
+---
+Time: 2001-11-23 15:02:31 -5
+User: ed
+Warning:
+  A slightly different error
+  message.
+---
+Date: 2001-11-23 15:03:17 -5
+User: ed
+Fatal:
+  Unknown variable "bar"
+Stack:
+  - file: TopClass.py
+    line: 23
+    code: |
+      x = MoreObject("345\n")
+  - file: MoreClass.py
+    line: 58
+    code: |-
+      foo = bar
+
diff --git a/examples/pygments-lexer/yaml.py b/examples/pygments-lexer/yaml.py
new file mode 100644
index 0000000..1ce9dac
--- /dev/null
+++ b/examples/pygments-lexer/yaml.py
@@ -0,0 +1,431 @@
+
+"""
+yaml.py
+
+Lexer for YAML, a human-friendly data serialization language
+(http://yaml.org/).
+
+Written by Kirill Simonov <xi@resolvent.net>.
+
+License: Whatever suitable for inclusion into the Pygments package.
+"""
+
+from pygments.lexer import  \
+        ExtendedRegexLexer, LexerContext, include, bygroups
+from pygments.token import  \
+        Text, Comment, Punctuation, Name, Literal
+
+__all__ = ['YAMLLexer']
+
+
+class YAMLLexerContext(LexerContext):
+    """Indentation context for the YAML lexer."""
+
+    def __init__(self, *args, **kwds):
+        super(YAMLLexerContext, self).__init__(*args, **kwds)
+        self.indent_stack = []
+        self.indent = -1
+        self.next_indent = 0
+        self.block_scalar_indent = None
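+        # indent_stack holds the enclosing indentation levels;
+        # indent is the current level, next_indent the candidate for the
+        # next block; block_scalar_indent is the explicit indentation of
+        # the current block scalar, or None if it is detected automatically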
+
+
+def something(TokenClass):
+    """Do not produce empty tokens."""
+    def callback(lexer, match, context):
+        text = match.group()
+        if not text:
+            return
+        yield match.start(), TokenClass, text
+        context.pos = match.end()
+    return callback
+
+def reset_indent(TokenClass):
+    """Reset the indentation levels."""
+    def callback(lexer, match, context):
+        text = match.group()
+        context.indent_stack = []
+        context.indent = -1
+        context.next_indent = 0
+        context.block_scalar_indent = None
+        yield match.start(), TokenClass, text
+        context.pos = match.end()
+    return callback
+
+def save_indent(TokenClass, start=False):
+    """Save a possible indentation level."""
+    def callback(lexer, match, context):
+        text = match.group()
+        extra = ''
+        if start:
+            context.next_indent = len(text)
+            if context.next_indent < context.indent:
+                while context.next_indent < context.indent:
+                    context.indent = context.indent_stack.pop()
+                if context.next_indent > context.indent:
+                    extra = text[context.indent:]
+                    text = text[:context.indent]
+        else:
+            context.next_indent += len(text)
+        if text:
+            yield match.start(), TokenClass, text
+        if extra:
+            yield match.start()+len(text), TokenClass.Error, extra
+        context.pos = match.end()
+    return callback
+
+def set_indent(TokenClass, implicit=False):
+    """Set the previously saved indentation level."""
+    def callback(lexer, match, context):
+        text = match.group()
+        if context.indent < context.next_indent:
+            context.indent_stack.append(context.indent)
+            context.indent = context.next_indent
+        if not implicit:
+            context.next_indent += len(text)
+        yield match.start(), TokenClass, text
+        context.pos = match.end()
+    return callback
+
+def set_block_scalar_indent(TokenClass):
+    """Set an explicit indentation level for a block scalar."""
+    def callback(lexer, match, context):
+        text = match.group()
+        context.block_scalar_indent = None
+        if not text:
+            return
+        increment = match.group(1)
+        if increment:
+            current_indent = max(context.indent, 0)
+            increment = int(increment)
+            context.block_scalar_indent = current_indent + increment
+        if text:
+            yield match.start(), TokenClass, text
+            context.pos = match.end()
+    return callback
+
+def parse_block_scalar_empty_line(IndentTokenClass, ContentTokenClass):
+    """Process an empty line in a block scalar."""
+    def callback(lexer, match, context):
+        text = match.group()
+        if (context.block_scalar_indent is None or
+                len(text) <= context.block_scalar_indent):
+            if text:
+                yield match.start(), IndentTokenClass, text
+        else:
+            indentation = text[:context.block_scalar_indent]
+            content = text[context.block_scalar_indent:]
+            yield match.start(), IndentTokenClass, indentation
+            yield (match.start()+context.block_scalar_indent,
+                    ContentTokenClass, content)
+        context.pos = match.end()
+    return callback
+
+def parse_block_scalar_indent(TokenClass):
+    """Process indentation spaces in a block scalar."""
+    def callback(lexer, match, context):
+        text = match.group()
+        if context.block_scalar_indent is None:
+            if len(text) <= max(context.indent, 0):
+                context.stack.pop()
+                context.stack.pop()
+                return
+            context.block_scalar_indent = len(text)
+        else:
+            if len(text) < context.block_scalar_indent:
+                context.stack.pop()
+                context.stack.pop()
+                return
+        if text:
+            yield match.start(), TokenClass, text
+            context.pos = match.end()
+    return callback
+
+def parse_plain_scalar_indent(TokenClass):
+    """Process indentation spaces in a plain scalar."""
+    def callback(lexer, match, context):
+        text = match.group()
+        if len(text) <= context.indent:
+            context.stack.pop()
+            context.stack.pop()
+            return
+        if text:
+            yield match.start(), TokenClass, text
+            context.pos = match.end()
+    return callback
+
+
+class YAMLLexer(ExtendedRegexLexer):
+    """Lexer for the YAML language."""
+
+    name = 'YAML'
+    aliases = ['yaml']
+    filenames = ['*.yaml', '*.yml']
+    mimetypes = ['text/x-yaml']
+
+    tokens = {
+
+        # the root rules
+        'root': [
+            # ignored whitespaces
+            (r'[ ]+(?=#|$)', Text.Blank),
+            # line breaks
+            (r'\n+', Text.Break),
+            # a comment
+            (r'#[^\n]*', Comment.Single),
+            # the '%YAML' directive
+            (r'^%YAML(?=[ ]|$)', reset_indent(Name.Directive),
+                'yaml-directive'),
+            # the %TAG directive
+            (r'^%TAG(?=[ ]|$)', reset_indent(Name.Directive),
+                'tag-directive'),
+            # document start and document end indicators
+            (r'^(?:---|\.\.\.)(?=[ ]|$)',
+                reset_indent(Punctuation.Document), 'block-line'),
+            # indentation spaces
+            (r'[ ]*(?![ \t\n\r\f\v]|$)',
+                save_indent(Text.Indent, start=True),
+                ('block-line', 'indentation')),
+        ],
+
+        # trailing whitespaces after directives or a block scalar indicator
+        'ignored-line': [
+            # ignored whitespaces
+            (r'[ ]+(?=#|$)', Text.Blank),
+            # a comment
+            (r'#[^\n]*', Comment.Single),
+            # line break
+            (r'\n', Text.Break, '#pop:2'),
+        ],
+
+        # the %YAML directive
+        'yaml-directive': [
+            # the version number
+            (r'([ ]+)([0-9]+\.[0-9]+)',
+                bygroups(Text.Blank, Literal.Version), 'ignored-line'),
+        ],
+
+        # the %TAG directive
+        'tag-directive': [
+            # a tag handle and the corresponding prefix
+            (r'([ ]+)(!|![0-9A-Za-z_-]*!)'
+                r'([ ]+)(!|!?[0-9A-Za-z;/?:@&=+$,_.!~*\'()\[\]%-]+)',
+                bygroups(Text.Blank, Name.Type, Text.Blank, Name.Type),
+                'ignored-line'),
+        ],
+
+        # block scalar indicators and indentation spaces
+        'indentation': [
+            # trailing whitespaces are ignored
+            (r'[ ]*$', something(Text.Blank), '#pop:2'),
+            # whitespaces preceding block collection indicators
+            (r'[ ]+(?=[?:-](?:[ ]|$))', save_indent(Text.Indent)),
+            # block collection indicators
+            (r'[?:-](?=[ ]|$)', set_indent(Punctuation.Indicator)),
+            # the beginning of a block line
+            (r'[ ]*', save_indent(Text.Indent), '#pop'),
+        ],
+
+        # an indented line in the block context
+        'block-line': [
+            # the line end
+            (r'[ ]*(?=#|$)', something(Text.Blank), '#pop'),
+            # whitespaces separating tokens
+            (r'[ ]+', Text.Blank),
+            # tags, anchors and aliases
+            include('descriptors'),
+            # block collections and scalars
+            include('block-nodes'),
+            # flow collections and quoted scalars
+            include('flow-nodes'),
+            # a plain scalar
+            (r'(?=[^ \t\n\r\f\v?:,\[\]{}#&*!|>\'"%@`-]|[?:-][^ \t\n\r\f\v])',
+                something(Literal.Scalar.Plain),
+                'plain-scalar-in-block-context'),
+        ],
+
+        # tags, anchors, aliases
+        'descriptors': [
+            # a full-form tag
+            (r'!<[0-9A-Za-z;/?:@&=+$,_.!~*\'()\[\]%-]+>', Name.Type),
+            # a tag in the form '!', '!suffix' or '!handle!suffix'
+            (r'!(?:[0-9A-Za-z_-]+)?'
+                r'(?:![0-9A-Za-z;/?:@&=+$,_.!~*\'()\[\]%-]+)?', Name.Type),
+            # an anchor
+            (r'&[0-9A-Za-z_-]+', Name.Anchor),
+            # an alias
+            (r'\*[0-9A-Za-z_-]+', Name.Alias),
+        ],
+
+        # block collections and scalars
+        'block-nodes': [
+            # implicit key
+            (r':(?=[ ]|$)', set_indent(Punctuation.Indicator, implicit=True)),
+            # literal and folded scalars
+            (r'[|>]', Punctuation.Indicator,
+                ('block-scalar-content', 'block-scalar-header')),
+        ],
+
+        # flow collections and quoted scalars
+        'flow-nodes': [
+            # a flow sequence
+            (r'\[', Punctuation.Indicator, 'flow-sequence'),
+            # a flow mapping
+            (r'\{', Punctuation.Indicator, 'flow-mapping'),
+            # a single-quoted scalar
+            (r'\'', Literal.Scalar.Flow.Quote, 'single-quoted-scalar'),
+            # a double-quoted scalar
+            (r'\"', Literal.Scalar.Flow.Quote, 'double-quoted-scalar'),
+        ],
+
+        # the content of a flow collection
+        'flow-collection': [
+            # whitespaces
+            (r'[ ]+', Text.Blank),
+            # line breaks
+            (r'\n+', Text.Break),
+            # a comment
+            (r'#[^\n]*', Comment.Single),
+            # simple indicators
+            (r'[?:,]', Punctuation.Indicator),
+            # tags, anchors and aliases
+            include('descriptors'),
+            # nested collections and quoted scalars
+            include('flow-nodes'),
+            # a plain scalar
+            (r'(?=[^ \t\n\r\f\v?:,\[\]{}#&*!|>\'"%@`])',
+                something(Literal.Scalar.Plain),
+                'plain-scalar-in-flow-context'),
+        ],
+
+        # a flow sequence indicated by '[' and ']'
+        'flow-sequence': [
+            # include flow collection rules
+            include('flow-collection'),
+            # the closing indicator
+            (r'\]', Punctuation.Indicator, '#pop'),
+        ],
+
+        # a flow mapping indicated by '{' and '}'
+        'flow-mapping': [
+            # include flow collection rules
+            include('flow-collection'),
+            # the closing indicator
+            (r'\}', Punctuation.Indicator, '#pop'),
+        ],
+
+        # block scalar lines
+        'block-scalar-content': [
+            # line break
+            (r'\n', Text.Break),
+            # empty line
+            (r'^[ ]+$',
+                parse_block_scalar_empty_line(Text.Indent,
+                    Literal.Scalar.Block)),
+            # indentation spaces (we may leave the state here)
+            (r'^[ ]*', parse_block_scalar_indent(Text.Indent)),
+            # line content
+            (r'[^\n\r\f\v]+', Literal.Scalar.Block),
+        ],
+
+        # the header of a literal or folded scalar
+        'block-scalar-header': [
+            # indentation indicator followed by chomping flag
+            (r'([1-9])?[+-]?(?=[ ]|$)',
+                set_block_scalar_indent(Punctuation.Indicator),
+                'ignored-line'),
+            # chomping flag followed by indentation indicator
+            (r'[+-]?([1-9])?(?=[ ]|$)',
+                set_block_scalar_indent(Punctuation.Indicator),
+                'ignored-line'),
+        ],
+
+        # ignored and regular whitespaces in quoted scalars
+        'quoted-scalar-whitespaces': [
+            # leading and trailing whitespaces are ignored
+            (r'^[ ]+|[ ]+$', Text.Blank),
+            # line breaks are ignored
+            (r'\n+', Text.Break),
+            # other whitespaces are a part of the value
+            (r'[ ]+', Literal.Scalar.Flow),
+        ],
+
+        # single-quoted scalars
+        'single-quoted-scalar': [
+            # include whitespace and line break rules
+            include('quoted-scalar-whitespaces'),
+            # escaping of the quote character
+            (r'\'\'', Literal.Scalar.Flow.Escape),
+            # regular non-whitespace characters
+            (r'[^ \t\n\r\f\v\']+', Literal.Scalar.Flow),
+            # the closing quote
+            (r'\'', Literal.Scalar.Flow.Quote, '#pop'),
+        ],
+
+        # double-quoted scalars
+        'double-quoted-scalar': [
+            # include whitespace and line break rules
+            include('quoted-scalar-whitespaces'),
+            # escaping of special characters
+            (r'\\[0abt\tn\nvfre "\\N_LP]', Literal.Scalar.Flow.Escape),
+            # escape codes
+            (r'\\(?:x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})',
+                Literal.Scalar.Flow.Escape),
+            # regular non-whitespace characters
+            (r'[^ \t\n\r\f\v\"\\]+', Literal.Scalar.Flow),
+            # the closing quote
+            (r'"', Literal.Scalar.Flow.Quote, '#pop'),
+        ],
+
+        # the beginning of a new line while scanning a plain scalar
+        'plain-scalar-in-block-context-new-line': [
+            # empty lines
+            (r'^[ ]+$', Text.Blank),
+            # line breaks
+            (r'\n+', Text.Break),
+            # document start and document end indicators
+            (r'^(?=---|\.\.\.)', something(Punctuation.Document), '#pop:3'),
+            # indentation spaces (we may leave the block line state here)
+            (r'^[ ]*', parse_plain_scalar_indent(Text.Indent), '#pop'),
+        ],
+
+        # a plain scalar in the block context
+        'plain-scalar-in-block-context': [
+            # the scalar ends with the ':' indicator
+            (r'[ ]*(?=:[ ]|:$)', something(Text.Blank), '#pop'),
+            # the scalar ends with whitespaces followed by a comment
+            (r'[ ]+(?=#)', Text.Blank, '#pop'),
+            # trailing whitespaces are ignored
+            (r'[ ]+$', Text.Blank),
+            # line breaks are ignored
+            (r'\n+', Text.Break, 'plain-scalar-in-block-context-new-line'),
+            # other whitespaces are a part of the value
+            (r'[ ]+', Literal.Scalar.Plain),
+            # regular non-whitespace characters
+            (r'(?::(?![ \t\n\r\f\v])|[^ \t\n\r\f\v:])+',
+                Literal.Scalar.Plain),
+        ],
+
+        # a plain scalar in the flow context
+        'plain-scalar-in-flow-context': [
+            # the scalar ends with an indicator character
+            (r'[ ]*(?=[,:?\[\]{}])', something(Text.Blank), '#pop'),
+            # the scalar ends with a comment
+            (r'[ ]+(?=#)', Text.Blank, '#pop'),
+            # leading and trailing whitespaces are ignored
+            (r'^[ ]+|[ ]+$', Text.Blank),
+            # line breaks are ignored
+            (r'\n+', Text.Break),
+            # other whitespaces are a part of the value
+            (r'[ ]+', Literal.Scalar.Plain),
+            # regular non-whitespace characters
+            (r'[^ \t\n\r\f\v,:?\[\]{}]+', Literal.Scalar.Plain),
+        ],
+
+    }
+
+    def get_tokens_unprocessed(self, text=None, context=None):
+        if context is None:
+            context = YAMLLexerContext(text, 0)
+        return super(YAMLLexer, self).get_tokens_unprocessed(text, context)
+
+
diff --git a/examples/yaml-highlight/yaml_hl.cfg b/examples/yaml-highlight/yaml_hl.cfg
new file mode 100644
index 0000000..69bb847
--- /dev/null
+++ b/examples/yaml-highlight/yaml_hl.cfg
@@ -0,0 +1,115 @@
+%YAML 1.1
+---
+
+ascii:
+
+    header: "\e[0;1;30;40m"
+
+    footer: "\e[0m"
+
+    tokens:
+        stream-start:
+        stream-end:
+        directive:              { start: "\e[35m", end: "\e[0;1;30;40m" }
+        document-start:         { start: "\e[35m", end: "\e[0;1;30;40m" }
+        document-end:           { start: "\e[35m", end: "\e[0;1;30;40m" }
+        block-sequence-start:
+        block-mapping-start:
+        block-end:
+        flow-sequence-start:    { start: "\e[33m", end: "\e[0;1;30;40m" }
+        flow-mapping-start:     { start: "\e[33m", end: "\e[0;1;30;40m" }
+        flow-sequence-end:      { start: "\e[33m", end: "\e[0;1;30;40m" }
+        flow-mapping-end:       { start: "\e[33m", end: "\e[0;1;30;40m" }
+        key:                    { start: "\e[33m", end: "\e[0;1;30;40m" }
+        value:                  { start: "\e[33m", end: "\e[0;1;30;40m" }
+        block-entry:            { start: "\e[33m", end: "\e[0;1;30;40m" }
+        flow-entry:             { start: "\e[33m", end: "\e[0;1;30;40m" }
+        alias:                  { start: "\e[32m", end: "\e[0;1;30;40m" }
+        anchor:                 { start: "\e[32m", end: "\e[0;1;30;40m" }
+        tag:                    { start: "\e[32m", end: "\e[0;1;30;40m" }
+        scalar:                 { start: "\e[36m", end: "\e[0;1;30;40m" }
+
+    replaces:
+        - "\r\n":   "\n"
+        - "\r":     "\n"
+        - "\n":     "\n"
+        - "\x85":   "\n"
+        - "\u2028": "\n"
+        - "\u2029": "\n"
+
+html: &html
+
+    tokens:
+        stream-start:
+        stream-end:
+        directive:              { start: <code class="directive_token">, end: </code> }
+        document-start:         { start: <code class="document_start_token">, end: </code> }
+        document-end:           { start: <code class="document_end_token">, end: </code> }
+        block-sequence-start:
+        block-mapping-start:
+        block-end:
+        flow-sequence-start:    { start: <code class="delimiter_token">, end: </code> }
+        flow-mapping-start:     { start: <code class="delimiter_token">, end: </code> }
+        flow-sequence-end:      { start: <code class="delimiter_token">, end: </code> }
+        flow-mapping-end:       { start: <code class="delimiter_token">, end: </code> }
+        key:                    { start: <code class="delimiter_token">, end: </code> }
+        value:                  { start: <code class="delimiter_token">, end: </code> }
+        block-entry:            { start: <code class="delimiter_token">, end: </code> }
+        flow-entry:             { start: <code class="delimiter_token">, end: </code> }
+        alias:                  { start: <code class="anchor_token">, end: </code> }
+        anchor:                 { start: <code class="anchor_token">, end: </code> }
+        tag:                    { start: <code class="tag_token">, end: </code> }
+        scalar:                 { start: <code class="scalar_token">, end: </code> }
+
+    events:
+        stream-start:   { start: <pre class="yaml_stream"> }
+        stream-end:     { end: </pre> }
+        document-start: { start: <span class="document"> }
+        document-end:   { end: </span> }
+        sequence-start: { start: <span class="sequence"> }
+        sequence-end:   { end: </span> }
+        mapping-start:  { start: <span class="mapping"> }
+        mapping-end:    { end: </span> }
+        scalar:         { start: <span class="scalar">, end: </span> }
+
+    replaces:
+        - "\r\n":   "\n"
+        - "\r":     "\n"
+        - "\n":     "\n"
+        - "\x85":   "\n"
+        - "\u2028": "\n"
+        - "\u2029": "\n"
+        - "&":      "&amp;"
+        - "<":      "&lt;"
+        - ">":      "&gt;"
+
+html-page:
+
+    header: |
+        <html>
+        <head>
+        <title>A YAML stream</title>
+        <style type="text/css">
+            .document { background: #FFF }
+            .sequence { background: #EEF }
+            .mapping { background: #EFE }
+            .scalar { background: #FEE }
+            .directive_token { color: #C0C }
+            .document_start_token { color: #C0C; font-weight: bold }
+            .document_end_token { color: #C0C; font-weight: bold }
+            .delimiter_token { color: #600; font-weight: bold }
+            .anchor_token { color: #090 }
+            .tag_token { color: #090 }
+            .scalar_token { color: #000 }
+            .yaml_stream { color: #999 }
+        </style>
+        <body>
+
+    footer: |
+        </body>
+        </html>
+
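+    # merge in the token and replace tables from the 'html' style above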
+    <<: *html
+
+
+# vim: ft=yaml
diff --git a/examples/yaml-highlight/yaml_hl.py b/examples/yaml-highlight/yaml_hl.py
new file mode 100755
index 0000000..d6f7bf4
--- /dev/null
+++ b/examples/yaml-highlight/yaml_hl.py
@@ -0,0 +1,114 @@
+#!/usr/bin/python
+
+import yaml, codecs, sys, os.path, optparse
+
+class Style:
+
+    def __init__(self, header=None, footer=None,
+            tokens=None, events=None, replaces=None):
+        self.header = header
+        self.footer = footer
+        self.replaces = replaces
+        self.substitutions = {}
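+        # map config keys like 'document-start' to the corresponding
+        # PyYAML Token/Event classes and record their start/end markup,
+        # keyed by (class, -1) for starts and (class, +1) for ends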
+        for domain, Class in [(tokens, 'Token'), (events, 'Event')]:
+            if not domain:
+                continue
+            for key in domain:
+                name = ''.join([part.capitalize() for part in key.split('-')])
+                cls = getattr(yaml, '%s%s' % (name, Class))
+                value = domain[key]
+                if not value:
+                    continue
+                start = value.get('start')
+                end = value.get('end')
+                if start:
+                    self.substitutions[cls, -1] = start
+                if end:
+                    self.substitutions[cls, +1] = end
+
+    def __setstate__(self, state):
+        self.__init__(**state)
+
+yaml.add_path_resolver(u'tag:yaml.org,2002:python/object:__main__.Style',
+        [None], dict)
+yaml.add_path_resolver(u'tag:yaml.org,2002:pairs',
+        [None, u'replaces'], list)
+
+class YAMLHighlight:
+
+    def __init__(self, options):
+        config = yaml.load(file(options.config, 'rb').read())
+        self.style = config[options.style]
+        if options.input:
+            self.input = file(options.input, 'rb')
+        else:
+            self.input = sys.stdin
+        if options.output:
+            self.output = file(options.output, 'wb')
+        else:
+            self.output = sys.stdout
+
+    def highlight(self):
+        input = self.input.read()
+        if input.startswith(codecs.BOM_UTF16_LE):
+            input = unicode(input, 'utf-16-le')
+        elif input.startswith(codecs.BOM_UTF16_BE):
+            input = unicode(input, 'utf-16-be')
+        else:
+            input = unicode(input, 'utf-8')
+        substitutions = self.style.substitutions
+        tokens = yaml.scan(input)
+        events = yaml.parse(input)
+        markers = []
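+        # each marker is [position, weight, sequence number, markup];
+        # weights of +/-2 for tokens and +/-1 for events make markers at
+        # the same position sort so that, after sort() and reverse(),
+        # token markup ends up nested inside event markup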
+        number = 0
+        for token in tokens:
+            number += 1
+            if token.start_mark.index != token.end_mark.index:
+                cls = token.__class__
+                if (cls, -1) in substitutions:
+                    markers.append([token.start_mark.index, +2, number, substitutions[cls, -1]])
+                if (cls, +1) in substitutions:
+                    markers.append([token.end_mark.index, -2, number, substitutions[cls, +1]])
+        number = 0
+        for event in events:
+            number += 1
+            cls = event.__class__
+            if (cls, -1) in substitutions:
+                markers.append([event.start_mark.index, +1, number, substitutions[cls, -1]])
+            if (cls, +1) in substitutions:
+                markers.append([event.end_mark.index, -1, number, substitutions[cls, +1]])
+        markers.sort()
+        markers.reverse()
+        chunks = []
+        position = len(input)
+        for index, weight1, weight2, substitution in markers:
+            if index < position:
+                chunk = input[index:position]
+                for substring, replacement in self.style.replaces:
+                    chunk = chunk.replace(substring, replacement)
+                chunks.append(chunk)
+                position = index
+            chunks.append(substitution)
+        chunks.reverse()
+        result = u''.join(chunks)
+        if self.style.header:
+            self.output.write(self.style.header)
+        self.output.write(result.encode('utf-8'))
+        if self.style.footer:
+            self.output.write(self.style.footer)
+
+if __name__ == '__main__':
+    parser = optparse.OptionParser()
+    parser.add_option('-s', '--style', dest='style', default='ascii',
+            help="specify the highlighting style", metavar='STYLE')
+    parser.add_option('-c', '--config', dest='config',
+            default=os.path.join(os.path.dirname(sys.argv[0]), 'yaml_hl.cfg'),
+            help="set an alternative configuration file", metavar='CONFIG')
+    parser.add_option('-i', '--input', dest='input', default=None,
+            help="set the input file (default: stdin)", metavar='FILE')
+    parser.add_option('-o', '--output', dest='output', default=None,
+            help="set the output file (default: stdout)", metavar='FILE')
+    (options, args) = parser.parse_args()
+    hl = YAMLHighlight(options)
+    hl.highlight()
+
diff --git a/ext/_yaml.h b/ext/_yaml.h
new file mode 100644
index 0000000..21fd6a9
--- /dev/null
+++ b/ext/_yaml.h
@@ -0,0 +1,23 @@
+
+#include <yaml.h>
+
+#if PY_MAJOR_VERSION < 3
+
+#define PyUnicode_FromString(s) PyUnicode_DecodeUTF8((s), strlen(s), "strict")
+
+#else
+
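+/* On Python 3, map the old PyString_* names onto the bytes API. */
+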
+#define PyString_CheckExact PyBytes_CheckExact
+#define PyString_AS_STRING  PyBytes_AS_STRING
+#define PyString_GET_SIZE   PyBytes_GET_SIZE
+#define PyString_FromStringAndSize  PyBytes_FromStringAndSize
+
+#endif
+
+#ifdef _MSC_VER	/* MS Visual C++ 6.0 */
+#if _MSC_VER == 1200
+
+#define PyLong_FromUnsignedLongLong(z)	PyInt_FromLong(z)
+
+#endif
+#endif
diff --git a/ext/_yaml.pxd b/ext/_yaml.pxd
new file mode 100644
index 0000000..f47f459
--- /dev/null
+++ b/ext/_yaml.pxd
@@ -0,0 +1,251 @@
+
+cdef extern from "_yaml.h":
+
+    void malloc(int l)
+    void memcpy(char *d, char *s, int l)
+    int strlen(char *s)
+    int PyString_CheckExact(object o)
+    int PyUnicode_CheckExact(object o)
+    char *PyString_AS_STRING(object o)
+    int PyString_GET_SIZE(object o)
+    object PyString_FromStringAndSize(char *v, int l)
+    object PyUnicode_FromString(char *u)
+    object PyUnicode_DecodeUTF8(char *u, int s, char *e)
+    object PyUnicode_AsUTF8String(object o)
+    int PY_MAJOR_VERSION
+
+    ctypedef enum:
+        SIZEOF_VOID_P
+    ctypedef enum yaml_encoding_t:
+        YAML_ANY_ENCODING
+        YAML_UTF8_ENCODING
+        YAML_UTF16LE_ENCODING
+        YAML_UTF16BE_ENCODING
+    ctypedef enum yaml_break_t:
+        YAML_ANY_BREAK
+        YAML_CR_BREAK
+        YAML_LN_BREAK
+        YAML_CRLN_BREAK
+    ctypedef enum yaml_error_type_t:
+        YAML_NO_ERROR
+        YAML_MEMORY_ERROR
+        YAML_READER_ERROR
+        YAML_SCANNER_ERROR
+        YAML_PARSER_ERROR
+        YAML_WRITER_ERROR
+        YAML_EMITTER_ERROR
+    ctypedef enum yaml_scalar_style_t:
+        YAML_ANY_SCALAR_STYLE
+        YAML_PLAIN_SCALAR_STYLE
+        YAML_SINGLE_QUOTED_SCALAR_STYLE
+        YAML_DOUBLE_QUOTED_SCALAR_STYLE
+        YAML_LITERAL_SCALAR_STYLE
+        YAML_FOLDED_SCALAR_STYLE
+    ctypedef enum yaml_sequence_style_t:
+        YAML_ANY_SEQUENCE_STYLE
+        YAML_BLOCK_SEQUENCE_STYLE
+        YAML_FLOW_SEQUENCE_STYLE
+    ctypedef enum yaml_mapping_style_t:
+        YAML_ANY_MAPPING_STYLE
+        YAML_BLOCK_MAPPING_STYLE
+        YAML_FLOW_MAPPING_STYLE
+    ctypedef enum yaml_token_type_t:
+        YAML_NO_TOKEN
+        YAML_STREAM_START_TOKEN
+        YAML_STREAM_END_TOKEN
+        YAML_VERSION_DIRECTIVE_TOKEN
+        YAML_TAG_DIRECTIVE_TOKEN
+        YAML_DOCUMENT_START_TOKEN
+        YAML_DOCUMENT_END_TOKEN
+        YAML_BLOCK_SEQUENCE_START_TOKEN
+        YAML_BLOCK_MAPPING_START_TOKEN
+        YAML_BLOCK_END_TOKEN
+        YAML_FLOW_SEQUENCE_START_TOKEN
+        YAML_FLOW_SEQUENCE_END_TOKEN
+        YAML_FLOW_MAPPING_START_TOKEN
+        YAML_FLOW_MAPPING_END_TOKEN
+        YAML_BLOCK_ENTRY_TOKEN
+        YAML_FLOW_ENTRY_TOKEN
+        YAML_KEY_TOKEN
+        YAML_VALUE_TOKEN
+        YAML_ALIAS_TOKEN
+        YAML_ANCHOR_TOKEN
+        YAML_TAG_TOKEN
+        YAML_SCALAR_TOKEN
+    ctypedef enum yaml_event_type_t:
+        YAML_NO_EVENT
+        YAML_STREAM_START_EVENT
+        YAML_STREAM_END_EVENT
+        YAML_DOCUMENT_START_EVENT
+        YAML_DOCUMENT_END_EVENT
+        YAML_ALIAS_EVENT
+        YAML_SCALAR_EVENT
+        YAML_SEQUENCE_START_EVENT
+        YAML_SEQUENCE_END_EVENT
+        YAML_MAPPING_START_EVENT
+        YAML_MAPPING_END_EVENT
+
+    ctypedef int yaml_read_handler_t(void *data, char *buffer,
+            int size, int *size_read) except 0
+
+    ctypedef int yaml_write_handler_t(void *data, char *buffer,
+            int size) except 0
+
+    ctypedef struct yaml_mark_t:
+        int index
+        int line
+        int column
+    ctypedef struct yaml_version_directive_t:
+        int major
+        int minor
+    ctypedef struct yaml_tag_directive_t:
+        char *handle
+        char *prefix
+
+    ctypedef struct _yaml_token_stream_start_data_t:
+        yaml_encoding_t encoding
+    ctypedef struct _yaml_token_alias_data_t:
+        char *value
+    ctypedef struct _yaml_token_anchor_data_t:
+        char *value
+    ctypedef struct _yaml_token_tag_data_t:
+        char *handle
+        char *suffix
+    ctypedef struct _yaml_token_scalar_data_t:
+        char *value
+        int length
+        yaml_scalar_style_t style
+    ctypedef struct _yaml_token_version_directive_data_t:
+        int major
+        int minor
+    ctypedef struct _yaml_token_tag_directive_data_t:
+        char *handle
+        char *prefix
+    ctypedef union _yaml_token_data_t:
+        _yaml_token_stream_start_data_t stream_start
+        _yaml_token_alias_data_t alias
+        _yaml_token_anchor_data_t anchor
+        _yaml_token_tag_data_t tag
+        _yaml_token_scalar_data_t scalar
+        _yaml_token_version_directive_data_t version_directive
+        _yaml_token_tag_directive_data_t tag_directive
+    ctypedef struct yaml_token_t:
+        yaml_token_type_t type
+        _yaml_token_data_t data
+        yaml_mark_t start_mark
+        yaml_mark_t end_mark
+
+    ctypedef struct _yaml_event_stream_start_data_t:
+        yaml_encoding_t encoding
+    ctypedef struct _yaml_event_document_start_data_tag_directives_t:
+        yaml_tag_directive_t *start
+        yaml_tag_directive_t *end
+    ctypedef struct _yaml_event_document_start_data_t:
+        yaml_version_directive_t *version_directive
+        _yaml_event_document_start_data_tag_directives_t tag_directives
+        int implicit
+    ctypedef struct _yaml_event_document_end_data_t:
+        int implicit
+    ctypedef struct _yaml_event_alias_data_t:
+        char *anchor
+    ctypedef struct _yaml_event_scalar_data_t:
+        char *anchor
+        char *tag
+        char *value
+        int length
+        int plain_implicit
+        int quoted_implicit
+        yaml_scalar_style_t style
+    ctypedef struct _yaml_event_sequence_start_data_t:
+        char *anchor
+        char *tag
+        int implicit
+        yaml_sequence_style_t style
+    ctypedef struct _yaml_event_mapping_start_data_t:
+        char *anchor
+        char *tag
+        int implicit
+        yaml_mapping_style_t style
+    ctypedef union _yaml_event_data_t:
+        _yaml_event_stream_start_data_t stream_start
+        _yaml_event_document_start_data_t document_start
+        _yaml_event_document_end_data_t document_end
+        _yaml_event_alias_data_t alias
+        _yaml_event_scalar_data_t scalar
+        _yaml_event_sequence_start_data_t sequence_start
+        _yaml_event_mapping_start_data_t mapping_start
+    ctypedef struct yaml_event_t:
+        yaml_event_type_t type
+        _yaml_event_data_t data
+        yaml_mark_t start_mark
+        yaml_mark_t end_mark
+
+    ctypedef struct yaml_parser_t:
+        yaml_error_type_t error
+        char *problem
+        int problem_offset
+        int problem_value
+        yaml_mark_t problem_mark
+        char *context
+        yaml_mark_t context_mark
+
+    ctypedef struct yaml_emitter_t:
+        yaml_error_type_t error
+        char *problem
+
+    char *yaml_get_version_string()
+    void yaml_get_version(int *major, int *minor, int *patch)
+
+    void yaml_token_delete(yaml_token_t *token)
+
+    int yaml_stream_start_event_initialize(yaml_event_t *event,
+            yaml_encoding_t encoding)
+    int yaml_stream_end_event_initialize(yaml_event_t *event)
+    int yaml_document_start_event_initialize(yaml_event_t *event,
+            yaml_version_directive_t *version_directive,
+            yaml_tag_directive_t *tag_directives_start,
+            yaml_tag_directive_t *tag_directives_end,
+            int implicit)
+    int yaml_document_end_event_initialize(yaml_event_t *event,
+            int implicit)
+    int yaml_alias_event_initialize(yaml_event_t *event, char *anchor)
+    int yaml_scalar_event_initialize(yaml_event_t *event,
+            char *anchor, char *tag, char *value, int length,
+            int plain_implicit, int quoted_implicit,
+            yaml_scalar_style_t style)
+    int yaml_sequence_start_event_initialize(yaml_event_t *event,
+            char *anchor, char *tag, int implicit, yaml_sequence_style_t style)
+    int yaml_sequence_end_event_initialize(yaml_event_t *event)
+    int yaml_mapping_start_event_initialize(yaml_event_t *event,
+            char *anchor, char *tag, int implicit, yaml_mapping_style_t style)
+    int yaml_mapping_end_event_initialize(yaml_event_t *event)
+    void yaml_event_delete(yaml_event_t *event)
+
+    int yaml_parser_initialize(yaml_parser_t *parser)
+    void yaml_parser_delete(yaml_parser_t *parser)
+    void yaml_parser_set_input_string(yaml_parser_t *parser,
+            char *input, int size)
+    void yaml_parser_set_input(yaml_parser_t *parser,
+            yaml_read_handler_t *handler, void *data)
+    void yaml_parser_set_encoding(yaml_parser_t *parser,
+            yaml_encoding_t encoding)
+    int yaml_parser_scan(yaml_parser_t *parser, yaml_token_t *token) except *
+    int yaml_parser_parse(yaml_parser_t *parser, yaml_event_t *event) except *
+
+    int yaml_emitter_initialize(yaml_emitter_t *emitter)
+    void yaml_emitter_delete(yaml_emitter_t *emitter)
+    void yaml_emitter_set_output_string(yaml_emitter_t *emitter,
+            char *output, int size, int *size_written)
+    void yaml_emitter_set_output(yaml_emitter_t *emitter,
+            yaml_write_handler_t *handler, void *data)
+    void yaml_emitter_set_encoding(yaml_emitter_t *emitter,
+            yaml_encoding_t encoding)
+    void yaml_emitter_set_canonical(yaml_emitter_t *emitter, int canonical)
+    void yaml_emitter_set_indent(yaml_emitter_t *emitter, int indent)
+    void yaml_emitter_set_width(yaml_emitter_t *emitter, int width)
+    void yaml_emitter_set_unicode(yaml_emitter_t *emitter, int unicode)
+    void yaml_emitter_set_break(yaml_emitter_t *emitter,
+            yaml_break_t line_break)
+    int yaml_emitter_emit(yaml_emitter_t *emitter, yaml_event_t *event) except *
+    int yaml_emitter_flush(yaml_emitter_t *emitter)
+
diff --git a/ext/_yaml.pyx b/ext/_yaml.pyx
new file mode 100644
index 0000000..5158fb4
--- /dev/null
+++ b/ext/_yaml.pyx
@@ -0,0 +1,1527 @@
+
+import yaml
+
+def get_version_string():
+    cdef char *value
+    value = yaml_get_version_string()
+    if PY_MAJOR_VERSION < 3:
+        return value
+    else:
+        return PyUnicode_FromString(value)
+
+def get_version():
+    cdef int major, minor, patch
+    yaml_get_version(&major, &minor, &patch)
+    return (major, minor, patch)
+
+#Mark = yaml.error.Mark
+YAMLError = yaml.error.YAMLError
+ReaderError = yaml.reader.ReaderError
+ScannerError = yaml.scanner.ScannerError
+ParserError = yaml.parser.ParserError
+ComposerError = yaml.composer.ComposerError
+ConstructorError = yaml.constructor.ConstructorError
+EmitterError = yaml.emitter.EmitterError
+SerializerError = yaml.serializer.SerializerError
+RepresenterError = yaml.representer.RepresenterError
+
+StreamStartToken = yaml.tokens.StreamStartToken
+StreamEndToken = yaml.tokens.StreamEndToken
+DirectiveToken = yaml.tokens.DirectiveToken
+DocumentStartToken = yaml.tokens.DocumentStartToken
+DocumentEndToken = yaml.tokens.DocumentEndToken
+BlockSequenceStartToken = yaml.tokens.BlockSequenceStartToken
+BlockMappingStartToken = yaml.tokens.BlockMappingStartToken
+BlockEndToken = yaml.tokens.BlockEndToken
+FlowSequenceStartToken = yaml.tokens.FlowSequenceStartToken
+FlowMappingStartToken = yaml.tokens.FlowMappingStartToken
+FlowSequenceEndToken = yaml.tokens.FlowSequenceEndToken
+FlowMappingEndToken = yaml.tokens.FlowMappingEndToken
+KeyToken = yaml.tokens.KeyToken
+ValueToken = yaml.tokens.ValueToken
+BlockEntryToken = yaml.tokens.BlockEntryToken
+FlowEntryToken = yaml.tokens.FlowEntryToken
+AliasToken = yaml.tokens.AliasToken
+AnchorToken = yaml.tokens.AnchorToken
+TagToken = yaml.tokens.TagToken
+ScalarToken = yaml.tokens.ScalarToken
+
+StreamStartEvent = yaml.events.StreamStartEvent
+StreamEndEvent = yaml.events.StreamEndEvent
+DocumentStartEvent = yaml.events.DocumentStartEvent
+DocumentEndEvent = yaml.events.DocumentEndEvent
+AliasEvent = yaml.events.AliasEvent
+ScalarEvent = yaml.events.ScalarEvent
+SequenceStartEvent = yaml.events.SequenceStartEvent
+SequenceEndEvent = yaml.events.SequenceEndEvent
+MappingStartEvent = yaml.events.MappingStartEvent
+MappingEndEvent = yaml.events.MappingEndEvent
+
+ScalarNode = yaml.nodes.ScalarNode
+SequenceNode = yaml.nodes.SequenceNode
+MappingNode = yaml.nodes.MappingNode
+
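+# cdef replacement for yaml.error.Mark; the C parser keeps no reference to the
+# input buffer, so get_snippet() has nothing to show and simply returns None.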
+cdef class Mark:
+    cdef readonly object name
+    cdef readonly int index
+    cdef readonly int line
+    cdef readonly int column
+    cdef readonly buffer
+    cdef readonly pointer
+
+    def __init__(self, object name, int index, int line, int column,
+            object buffer, object pointer):
+        self.name = name
+        self.index = index
+        self.line = line
+        self.column = column
+        self.buffer = buffer
+        self.pointer = pointer
+
+    def get_snippet(self):
+        return None
+
+    def __str__(self):
+        where = "  in \"%s\", line %d, column %d"   \
+                % (self.name, self.line+1, self.column+1)
+        return where
+
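+# Commented-out cdef reimplementations of the error and token classes; the
+# pure-Python classes imported above are used instead.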
+#class YAMLError(Exception):
+#    pass
+#
+#class MarkedYAMLError(YAMLError):
+#
+#    def __init__(self, context=None, context_mark=None,
+#            problem=None, problem_mark=None, note=None):
+#        self.context = context
+#        self.context_mark = context_mark
+#        self.problem = problem
+#        self.problem_mark = problem_mark
+#        self.note = note
+#
+#    def __str__(self):
+#        lines = []
+#        if self.context is not None:
+#            lines.append(self.context)
+#        if self.context_mark is not None  \
+#            and (self.problem is None or self.problem_mark is None
+#                    or self.context_mark.name != self.problem_mark.name
+#                    or self.context_mark.line != self.problem_mark.line
+#                    or self.context_mark.column != self.problem_mark.column):
+#            lines.append(str(self.context_mark))
+#        if self.problem is not None:
+#            lines.append(self.problem)
+#        if self.problem_mark is not None:
+#            lines.append(str(self.problem_mark))
+#        if self.note is not None:
+#            lines.append(self.note)
+#        return '\n'.join(lines)
+#
+#class ReaderError(YAMLError):
+#
+#    def __init__(self, name, position, character, encoding, reason):
+#        self.name = name
+#        self.character = character
+#        self.position = position
+#        self.encoding = encoding
+#        self.reason = reason
+#
+#    def __str__(self):
+#        if isinstance(self.character, str):
+#            return "'%s' codec can't decode byte #x%02x: %s\n"  \
+#                    "  in \"%s\", position %d"    \
+#                    % (self.encoding, ord(self.character), self.reason,
+#                            self.name, self.position)
+#        else:
+#            return "unacceptable character #x%04x: %s\n"    \
+#                    "  in \"%s\", position %d"    \
+#                    % (ord(self.character), self.reason,
+#                            self.name, self.position)
+#
+#class ScannerError(MarkedYAMLError):
+#    pass
+#
+#class ParserError(MarkedYAMLError):
+#    pass
+#
+#class EmitterError(YAMLError):
+#    pass
+#
+#cdef class Token:
+#    cdef readonly Mark start_mark
+#    cdef readonly Mark end_mark
+#    def __init__(self, Mark start_mark, Mark end_mark):
+#        self.start_mark = start_mark
+#        self.end_mark = end_mark
+#
+#cdef class StreamStartToken(Token):
+#    cdef readonly object encoding
+#    def __init__(self, Mark start_mark, Mark end_mark, encoding):
+#        self.start_mark = start_mark
+#        self.end_mark = end_mark
+#        self.encoding = encoding
+#
+#cdef class StreamEndToken(Token):
+#    pass
+#
+#cdef class DirectiveToken(Token):
+#    cdef readonly object name
+#    cdef readonly object value
+#    def __init__(self, name, value, Mark start_mark, Mark end_mark):
+#        self.name = name
+#        self.value = value
+#        self.start_mark = start_mark
+#        self.end_mark = end_mark
+#
+#cdef class DocumentStartToken(Token):
+#    pass
+#
+#cdef class DocumentEndToken(Token):
+#    pass
+#
+#cdef class BlockSequenceStartToken(Token):
+#    pass
+#
+#cdef class BlockMappingStartToken(Token):
+#    pass
+#
+#cdef class BlockEndToken(Token):
+#    pass
+#
+#cdef class FlowSequenceStartToken(Token):
+#    pass
+#
+#cdef class FlowMappingStartToken(Token):
+#    pass
+#
+#cdef class FlowSequenceEndToken(Token):
+#    pass
+#
+#cdef class FlowMappingEndToken(Token):
+#    pass
+#
+#cdef class KeyToken(Token):
+#    pass
+#
+#cdef class ValueToken(Token):
+#    pass
+#
+#cdef class BlockEntryToken(Token):
+#    pass
+#
+#cdef class FlowEntryToken(Token):
+#    pass
+#
+#cdef class AliasToken(Token):
+#    cdef readonly object value
+#    def __init__(self, value, Mark start_mark, Mark end_mark):
+#        self.value = value
+#        self.start_mark = start_mark
+#        self.end_mark = end_mark
+#
+#cdef class AnchorToken(Token):
+#    cdef readonly object value
+#    def __init__(self, value, Mark start_mark, Mark end_mark):
+#        self.value = value
+#        self.start_mark = start_mark
+#        self.end_mark = end_mark
+#
+#cdef class TagToken(Token):
+#    cdef readonly object value
+#    def __init__(self, value, Mark start_mark, Mark end_mark):
+#        self.value = value
+#        self.start_mark = start_mark
+#        self.end_mark = end_mark
+#
+#cdef class ScalarToken(Token):
+#    cdef readonly object value
+#    cdef readonly object plain
+#    cdef readonly object style
+#    def __init__(self, value, plain, Mark start_mark, Mark end_mark, style=None):
+#        self.value = value
+#        self.plain = plain
+#        self.start_mark = start_mark
+#        self.end_mark = end_mark
+#        self.style = style
+
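+# Wraps yaml_parser_t and exposes token-, event- and node-level interfaces over
+# a byte string, a unicode string, or a readable stream object.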
+cdef class CParser:
+
+    cdef yaml_parser_t parser
+    cdef yaml_event_t parsed_event
+
+    cdef object stream
+    cdef object stream_name
+    cdef object current_token
+    cdef object current_event
+    cdef object anchors
+    cdef object stream_cache
+    cdef int stream_cache_len
+    cdef int stream_cache_pos
+    cdef int unicode_source
+
+    def __init__(self, stream):
+        cdef int is_readable
+        if yaml_parser_initialize(&self.parser) == 0:
+            raise MemoryError
+        self.parsed_event.type = YAML_NO_EVENT
+        is_readable = 1
+        try:
+            stream.read
+        except AttributeError:
+            is_readable = 0
+        self.unicode_source = 0
+        if is_readable:
+            self.stream = stream
+            try:
+                self.stream_name = stream.name
+            except AttributeError:
+                if PY_MAJOR_VERSION < 3:
+                    self.stream_name = '<file>'
+                else:
+                    self.stream_name = u'<file>'
+            self.stream_cache = None
+            self.stream_cache_len = 0
+            self.stream_cache_pos = 0
+            yaml_parser_set_input(&self.parser, input_handler, <void *>self)
+        else:
+            if PyUnicode_CheckExact(stream) != 0:
+                stream = PyUnicode_AsUTF8String(stream)
+                if PY_MAJOR_VERSION < 3:
+                    self.stream_name = '<unicode string>'
+                else:
+                    self.stream_name = u'<unicode string>'
+                self.unicode_source = 1
+            else:
+                if PY_MAJOR_VERSION < 3:
+                    self.stream_name = '<byte string>'
+                else:
+                    self.stream_name = u'<byte string>'
+            if PyString_CheckExact(stream) == 0:
+                if PY_MAJOR_VERSION < 3:
+                    raise TypeError("a string or stream input is required")
+                else:
+                    raise TypeError(u"a string or stream input is required")
+            self.stream = stream
+            yaml_parser_set_input_string(&self.parser, PyString_AS_STRING(stream), PyString_GET_SIZE(stream))
+        self.current_token = None
+        self.current_event = None
+        self.anchors = {}
+
+    def __dealloc__(self):
+        yaml_parser_delete(&self.parser)
+        yaml_event_delete(&self.parsed_event)
+
+    def dispose(self):
+        pass
+
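+    # Build (without raising) the exception that matches the parser's current
+    # error state; callers raise the returned object.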
+    cdef object _parser_error(self):
+        if self.parser.error == YAML_MEMORY_ERROR:
+            return MemoryError
+        elif self.parser.error == YAML_READER_ERROR:
+            if PY_MAJOR_VERSION < 3:
+                return ReaderError(self.stream_name, self.parser.problem_offset,
+                        self.parser.problem_value, '?', self.parser.problem)
+            else:
+                return ReaderError(self.stream_name, self.parser.problem_offset,
+                        self.parser.problem_value, u'?', PyUnicode_FromString(self.parser.problem))
+        elif self.parser.error == YAML_SCANNER_ERROR    \
+                or self.parser.error == YAML_PARSER_ERROR:
+            context_mark = None
+            problem_mark = None
+            if self.parser.context != NULL:
+                context_mark = Mark(self.stream_name,
+                        self.parser.context_mark.index,
+                        self.parser.context_mark.line,
+                        self.parser.context_mark.column, None, None)
+            if self.parser.problem != NULL:
+                problem_mark = Mark(self.stream_name,
+                        self.parser.problem_mark.index,
+                        self.parser.problem_mark.line,
+                        self.parser.problem_mark.column, None, None)
+            context = None
+            if self.parser.context != NULL:
+                if PY_MAJOR_VERSION < 3:
+                    context = self.parser.context
+                else:
+                    context = PyUnicode_FromString(self.parser.context)
+            if PY_MAJOR_VERSION < 3:
+                problem = self.parser.problem
+            else:
+                problem = PyUnicode_FromString(self.parser.problem)
+            if self.parser.error == YAML_SCANNER_ERROR:
+                return ScannerError(context, context_mark, problem, problem_mark)
+            else:
+                return ParserError(context, context_mark, problem, problem_mark)
+        if PY_MAJOR_VERSION < 3:
+            raise ValueError("no parser error")
+        else:
+            raise ValueError(u"no parser error")
+
+    def raw_scan(self):
+        cdef yaml_token_t token
+        cdef int done
+        cdef int count
+        count = 0
+        done = 0
+        while done == 0:
+            if yaml_parser_scan(&self.parser, &token) == 0:
+                error = self._parser_error()
+                raise error
+            if token.type == YAML_NO_TOKEN:
+                done = 1
+            else:
+                count = count+1
+            yaml_token_delete(&token)
+        return count
+
+    cdef object _scan(self):
+        cdef yaml_token_t token
+        if yaml_parser_scan(&self.parser, &token) == 0:
+            error = self._parser_error()
+            raise error
+        token_object = self._token_to_object(&token)
+        yaml_token_delete(&token)
+        return token_object
+
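+    # Convert a C yaml_token_t into the corresponding yaml.tokens object.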
+    cdef object _token_to_object(self, yaml_token_t *token):
+        start_mark = Mark(self.stream_name,
+                token.start_mark.index,
+                token.start_mark.line,
+                token.start_mark.column,
+                None, None)
+        end_mark = Mark(self.stream_name,
+                token.end_mark.index,
+                token.end_mark.line,
+                token.end_mark.column,
+                None, None)
+        if token.type == YAML_NO_TOKEN:
+            return None
+        elif token.type == YAML_STREAM_START_TOKEN:
+            encoding = None
+            if token.data.stream_start.encoding == YAML_UTF8_ENCODING:
+                if self.unicode_source == 0:
+                    encoding = u"utf-8"
+            elif token.data.stream_start.encoding == YAML_UTF16LE_ENCODING:
+                encoding = u"utf-16-le"
+            elif token.data.stream_start.encoding == YAML_UTF16BE_ENCODING:
+                encoding = u"utf-16-be"
+            return StreamStartToken(start_mark, end_mark, encoding)
+        elif token.type == YAML_STREAM_END_TOKEN:
+            return StreamEndToken(start_mark, end_mark)
+        elif token.type == YAML_VERSION_DIRECTIVE_TOKEN:
+            return DirectiveToken(u"YAML",
+                    (token.data.version_directive.major,
+                        token.data.version_directive.minor),
+                    start_mark, end_mark)
+        elif token.type == YAML_TAG_DIRECTIVE_TOKEN:
+            handle = PyUnicode_FromString(token.data.tag_directive.handle)
+            prefix = PyUnicode_FromString(token.data.tag_directive.prefix)
+            return DirectiveToken(u"TAG", (handle, prefix),
+                    start_mark, end_mark)
+        elif token.type == YAML_DOCUMENT_START_TOKEN:
+            return DocumentStartToken(start_mark, end_mark)
+        elif token.type == YAML_DOCUMENT_END_TOKEN:
+            return DocumentEndToken(start_mark, end_mark)
+        elif token.type == YAML_BLOCK_SEQUENCE_START_TOKEN:
+            return BlockSequenceStartToken(start_mark, end_mark)
+        elif token.type == YAML_BLOCK_MAPPING_START_TOKEN:
+            return BlockMappingStartToken(start_mark, end_mark)
+        elif token.type == YAML_BLOCK_END_TOKEN:
+            return BlockEndToken(start_mark, end_mark)
+        elif token.type == YAML_FLOW_SEQUENCE_START_TOKEN:
+            return FlowSequenceStartToken(start_mark, end_mark)
+        elif token.type == YAML_FLOW_SEQUENCE_END_TOKEN:
+            return FlowSequenceEndToken(start_mark, end_mark)
+        elif token.type == YAML_FLOW_MAPPING_START_TOKEN:
+            return FlowMappingStartToken(start_mark, end_mark)
+        elif token.type == YAML_FLOW_MAPPING_END_TOKEN:
+            return FlowMappingEndToken(start_mark, end_mark)
+        elif token.type == YAML_BLOCK_ENTRY_TOKEN:
+            return BlockEntryToken(start_mark, end_mark)
+        elif token.type == YAML_FLOW_ENTRY_TOKEN:
+            return FlowEntryToken(start_mark, end_mark)
+        elif token.type == YAML_KEY_TOKEN:
+            return KeyToken(start_mark, end_mark)
+        elif token.type == YAML_VALUE_TOKEN:
+            return ValueToken(start_mark, end_mark)
+        elif token.type == YAML_ALIAS_TOKEN:
+            value = PyUnicode_FromString(token.data.alias.value)
+            return AliasToken(value, start_mark, end_mark)
+        elif token.type == YAML_ANCHOR_TOKEN:
+            value = PyUnicode_FromString(token.data.anchor.value)
+            return AnchorToken(value, start_mark, end_mark)
+        elif token.type == YAML_TAG_TOKEN:
+            handle = PyUnicode_FromString(token.data.tag.handle)
+            suffix = PyUnicode_FromString(token.data.tag.suffix)
+            if not handle:
+                handle = None
+            return TagToken((handle, suffix), start_mark, end_mark)
+        elif token.type == YAML_SCALAR_TOKEN:
+            value = PyUnicode_DecodeUTF8(token.data.scalar.value,
+                    token.data.scalar.length, 'strict')
+            plain = False
+            style = None
+            if token.data.scalar.style == YAML_PLAIN_SCALAR_STYLE:
+                plain = True
+                style = u''
+            elif token.data.scalar.style == YAML_SINGLE_QUOTED_SCALAR_STYLE:
+                style = u'\''
+            elif token.data.scalar.style == YAML_DOUBLE_QUOTED_SCALAR_STYLE:
+                style = u'"'
+            elif token.data.scalar.style == YAML_LITERAL_SCALAR_STYLE:
+                style = u'|'
+            elif token.data.scalar.style == YAML_FOLDED_SCALAR_STYLE:
+                style = u'>'
+            return ScalarToken(value, plain,
+                    start_mark, end_mark, style)
+        else:
+            if PY_MAJOR_VERSION < 3:
+                raise ValueError("unknown token type")
+            else:
+                raise ValueError(u"unknown token type")
+
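+    # Scanner interface (get_token/peek_token/check_token), matching the
+    # pure-Python yaml.scanner.Scanner.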
+    def get_token(self):
+        if self.current_token is not None:
+            value = self.current_token
+            self.current_token = None
+        else:
+            value = self._scan()
+        return value
+
+    def peek_token(self):
+        if self.current_token is None:
+            self.current_token = self._scan()
+        return self.current_token
+
+    def check_token(self, *choices):
+        if self.current_token is None:
+            self.current_token = self._scan()
+        if self.current_token is None:
+            return False
+        if not choices:
+            return True
+        token_class = self.current_token.__class__
+        for choice in choices:
+            if token_class is choice:
+                return True
+        return False
+
+    def raw_parse(self):
+        cdef yaml_event_t event
+        cdef int done
+        cdef int count
+        count = 0
+        done = 0
+        while done == 0:
+            if yaml_parser_parse(&self.parser, &event) == 0:
+                error = self._parser_error()
+                raise error
+            if event.type == YAML_NO_EVENT:
+                done = 1
+            else:
+                count = count+1
+            yaml_event_delete(&event)
+        return count
+
+    cdef object _parse(self):
+        cdef yaml_event_t event
+        if yaml_parser_parse(&self.parser, &event) == 0:
+            error = self._parser_error()
+            raise error
+        event_object = self._event_to_object(&event)
+        yaml_event_delete(&event)
+        return event_object
+
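+    # Convert a C yaml_event_t into the corresponding yaml.events object.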
+    cdef object _event_to_object(self, yaml_event_t *event):
+        cdef yaml_tag_directive_t *tag_directive
+        start_mark = Mark(self.stream_name,
+                event.start_mark.index,
+                event.start_mark.line,
+                event.start_mark.column,
+                None, None)
+        end_mark = Mark(self.stream_name,
+                event.end_mark.index,
+                event.end_mark.line,
+                event.end_mark.column,
+                None, None)
+        if event.type == YAML_NO_EVENT:
+            return None
+        elif event.type == YAML_STREAM_START_EVENT:
+            encoding = None
+            if event.data.stream_start.encoding == YAML_UTF8_ENCODING:
+                if self.unicode_source == 0:
+                    encoding = u"utf-8"
+            elif event.data.stream_start.encoding == YAML_UTF16LE_ENCODING:
+                encoding = u"utf-16-le"
+            elif event.data.stream_start.encoding == YAML_UTF16BE_ENCODING:
+                encoding = u"utf-16-be"
+            return StreamStartEvent(start_mark, end_mark, encoding)
+        elif event.type == YAML_STREAM_END_EVENT:
+            return StreamEndEvent(start_mark, end_mark)
+        elif event.type == YAML_DOCUMENT_START_EVENT:
+            explicit = False
+            if event.data.document_start.implicit == 0:
+                explicit = True
+            version = None
+            if event.data.document_start.version_directive != NULL:
+                version = (event.data.document_start.version_directive.major,
+                        event.data.document_start.version_directive.minor)
+            tags = None
+            if event.data.document_start.tag_directives.start != NULL:
+                tags = {}
+                tag_directive = event.data.document_start.tag_directives.start
+                while tag_directive != event.data.document_start.tag_directives.end:
+                    handle = PyUnicode_FromString(tag_directive.handle)
+                    prefix = PyUnicode_FromString(tag_directive.prefix)
+                    tags[handle] = prefix
+                    tag_directive = tag_directive+1
+            return DocumentStartEvent(start_mark, end_mark,
+                    explicit, version, tags)
+        elif event.type == YAML_DOCUMENT_END_EVENT:
+            explicit = False
+            if event.data.document_end.implicit == 0:
+                explicit = True
+            return DocumentEndEvent(start_mark, end_mark, explicit)
+        elif event.type == YAML_ALIAS_EVENT:
+            anchor = PyUnicode_FromString(event.data.alias.anchor)
+            return AliasEvent(anchor, start_mark, end_mark)
+        elif event.type == YAML_SCALAR_EVENT:
+            anchor = None
+            if event.data.scalar.anchor != NULL:
+                anchor = PyUnicode_FromString(event.data.scalar.anchor)
+            tag = None
+            if event.data.scalar.tag != NULL:
+                tag = PyUnicode_FromString(event.data.scalar.tag)
+            value = PyUnicode_DecodeUTF8(event.data.scalar.value,
+                    event.data.scalar.length, 'strict')
+            plain_implicit = False
+            if event.data.scalar.plain_implicit == 1:
+                plain_implicit = True
+            quoted_implicit = False
+            if event.data.scalar.quoted_implicit == 1:
+                quoted_implicit = True
+            style = None
+            if event.data.scalar.style == YAML_PLAIN_SCALAR_STYLE:
+                style = u''
+            elif event.data.scalar.style == YAML_SINGLE_QUOTED_SCALAR_STYLE:
+                style = u'\''
+            elif event.data.scalar.style == YAML_DOUBLE_QUOTED_SCALAR_STYLE:
+                style = u'"'
+            elif event.data.scalar.style == YAML_LITERAL_SCALAR_STYLE:
+                style = u'|'
+            elif event.data.scalar.style == YAML_FOLDED_SCALAR_STYLE:
+                style = u'>'
+            return ScalarEvent(anchor, tag,
+                    (plain_implicit, quoted_implicit),
+                    value, start_mark, end_mark, style)
+        elif event.type == YAML_SEQUENCE_START_EVENT:
+            anchor = None
+            if event.data.sequence_start.anchor != NULL:
+                anchor = PyUnicode_FromString(event.data.sequence_start.anchor)
+            tag = None
+            if event.data.sequence_start.tag != NULL:
+                tag = PyUnicode_FromString(event.data.sequence_start.tag)
+            implicit = False
+            if event.data.sequence_start.implicit == 1:
+                implicit = True
+            flow_style = None
+            if event.data.sequence_start.style == YAML_FLOW_SEQUENCE_STYLE:
+                flow_style = True
+            elif event.data.sequence_start.style == YAML_BLOCK_SEQUENCE_STYLE:
+                flow_style = False
+            return SequenceStartEvent(anchor, tag, implicit,
+                    start_mark, end_mark, flow_style)
+        elif event.type == YAML_MAPPING_START_EVENT:
+            anchor = None
+            if event.data.mapping_start.anchor != NULL:
+                anchor = PyUnicode_FromString(event.data.mapping_start.anchor)
+            tag = None
+            if event.data.mapping_start.tag != NULL:
+                tag = PyUnicode_FromString(event.data.mapping_start.tag)
+            implicit = False
+            if event.data.mapping_start.implicit == 1:
+                implicit = True
+            flow_style = None
+            if event.data.mapping_start.style == YAML_FLOW_MAPPING_STYLE:
+                flow_style = True
+            elif event.data.mapping_start.style == YAML_BLOCK_MAPPING_STYLE:
+                flow_style = False
+            return MappingStartEvent(anchor, tag, implicit,
+                    start_mark, end_mark, flow_style)
+        elif event.type == YAML_SEQUENCE_END_EVENT:
+            return SequenceEndEvent(start_mark, end_mark)
+        elif event.type == YAML_MAPPING_END_EVENT:
+            return MappingEndEvent(start_mark, end_mark)
+        else:
+            if PY_MAJOR_VERSION < 3:
+                raise ValueError("unknown event type")
+            else:
+                raise ValueError(u"unknown event type")
+
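+    # Parser interface (get_event/peek_event/check_event), matching the
+    # pure-Python yaml.parser.Parser.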
+    def get_event(self):
+        if self.current_event is not None:
+            value = self.current_event
+            self.current_event = None
+        else:
+            value = self._parse()
+        return value
+
+    def peek_event(self):
+        if self.current_event is None:
+            self.current_event = self._parse()
+        return self.current_event
+
+    def check_event(self, *choices):
+        if self.current_event is None:
+            self.current_event = self._parse()
+        if self.current_event is None:
+            return False
+        if not choices:
+            return True
+        event_class = self.current_event.__class__
+        for choice in choices:
+            if event_class is choice:
+                return True
+        return False
+
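+    # Composer interface (check_node/get_node/get_single_node), matching the
+    # pure-Python yaml.composer.Composer.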
+    def check_node(self):
+        self._parse_next_event()
+        if self.parsed_event.type == YAML_STREAM_START_EVENT:
+            yaml_event_delete(&self.parsed_event)
+            self._parse_next_event()
+        if self.parsed_event.type != YAML_STREAM_END_EVENT:
+            return True
+        return False
+
+    def get_node(self):
+        self._parse_next_event()
+        if self.parsed_event.type != YAML_STREAM_END_EVENT:
+            return self._compose_document()
+
+    def get_single_node(self):
+        self._parse_next_event()
+        yaml_event_delete(&self.parsed_event)
+        self._parse_next_event()
+        document = None
+        if self.parsed_event.type != YAML_STREAM_END_EVENT:
+            document = self._compose_document()
+        self._parse_next_event()
+        if self.parsed_event.type != YAML_STREAM_END_EVENT:
+            mark = Mark(self.stream_name,
+                    self.parsed_event.start_mark.index,
+                    self.parsed_event.start_mark.line,
+                    self.parsed_event.start_mark.column,
+                    None, None)
+            if PY_MAJOR_VERSION < 3:
+                raise ComposerError("expected a single document in the stream",
+                        document.start_mark, "but found another document", mark)
+            else:
+                raise ComposerError(u"expected a single document in the stream",
+                        document.start_mark, u"but found another document", mark)
+        return document
+
+    cdef object _compose_document(self):
+        yaml_event_delete(&self.parsed_event)
+        node = self._compose_node(None, None)
+        self._parse_next_event()
+        yaml_event_delete(&self.parsed_event)
+        self.anchors = {}
+        return node
+
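+    # Compose a single node from the event stream: aliases are looked up in
+    # self.anchors, duplicate anchors are rejected, and scalar, sequence and
+    # mapping events dispatch to the helpers below.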
+    cdef object _compose_node(self, object parent, object index):
+        self._parse_next_event()
+        if self.parsed_event.type == YAML_ALIAS_EVENT:
+            anchor = PyUnicode_FromString(self.parsed_event.data.alias.anchor)
+            if anchor not in self.anchors:
+                mark = Mark(self.stream_name,
+                        self.parsed_event.start_mark.index,
+                        self.parsed_event.start_mark.line,
+                        self.parsed_event.start_mark.column,
+                        None, None)
+                if PY_MAJOR_VERSION < 3:
+                    raise ComposerError(None, None, "found undefined alias", mark)
+                else:
+                    raise ComposerError(None, None, u"found undefined alias", mark)
+            yaml_event_delete(&self.parsed_event)
+            return self.anchors[anchor]
+        anchor = None
+        if self.parsed_event.type == YAML_SCALAR_EVENT  \
+                and self.parsed_event.data.scalar.anchor != NULL:
+            anchor = PyUnicode_FromString(self.parsed_event.data.scalar.anchor)
+        elif self.parsed_event.type == YAML_SEQUENCE_START_EVENT    \
+                and self.parsed_event.data.sequence_start.anchor != NULL:
+            anchor = PyUnicode_FromString(self.parsed_event.data.sequence_start.anchor)
+        elif self.parsed_event.type == YAML_MAPPING_START_EVENT    \
+                and self.parsed_event.data.mapping_start.anchor != NULL:
+            anchor = PyUnicode_FromString(self.parsed_event.data.mapping_start.anchor)
+        if anchor is not None:
+            if anchor in self.anchors:
+                mark = Mark(self.stream_name,
+                        self.parsed_event.start_mark.index,
+                        self.parsed_event.start_mark.line,
+                        self.parsed_event.start_mark.column,
+                        None, None)
+                if PY_MAJOR_VERSION < 3:
+                    raise ComposerError("found duplicate anchor; first occurrence",
+                            self.anchors[anchor].start_mark, "second occurrence", mark)
+                else:
+                    raise ComposerError(u"found duplicate anchor; first occurrence",
+                            self.anchors[anchor].start_mark, u"second occurrence", mark)
+        self.descend_resolver(parent, index)
+        if self.parsed_event.type == YAML_SCALAR_EVENT:
+            node = self._compose_scalar_node(anchor)
+        elif self.parsed_event.type == YAML_SEQUENCE_START_EVENT:
+            node = self._compose_sequence_node(anchor)
+        elif self.parsed_event.type == YAML_MAPPING_START_EVENT:
+            node = self._compose_mapping_node(anchor)
+        self.ascend_resolver()
+        return node
+
+    cdef _compose_scalar_node(self, object anchor):
+        start_mark = Mark(self.stream_name,
+                self.parsed_event.start_mark.index,
+                self.parsed_event.start_mark.line,
+                self.parsed_event.start_mark.column,
+                None, None)
+        end_mark = Mark(self.stream_name,
+                self.parsed_event.end_mark.index,
+                self.parsed_event.end_mark.line,
+                self.parsed_event.end_mark.column,
+                None, None)
+        value = PyUnicode_DecodeUTF8(self.parsed_event.data.scalar.value,
+                self.parsed_event.data.scalar.length, 'strict')
+        plain_implicit = False
+        if self.parsed_event.data.scalar.plain_implicit == 1:
+            plain_implicit = True
+        quoted_implicit = False
+        if self.parsed_event.data.scalar.quoted_implicit == 1:
+            quoted_implicit = True
+        if self.parsed_event.data.scalar.tag == NULL    \
+                or (self.parsed_event.data.scalar.tag[0] == c'!'
+                        and self.parsed_event.data.scalar.tag[1] == c'\0'):
+            tag = self.resolve(ScalarNode, value, (plain_implicit, quoted_implicit))
+        else:
+            tag = PyUnicode_FromString(self.parsed_event.data.scalar.tag)
+        style = None
+        if self.parsed_event.data.scalar.style == YAML_PLAIN_SCALAR_STYLE:
+            style = u''
+        elif self.parsed_event.data.scalar.style == YAML_SINGLE_QUOTED_SCALAR_STYLE:
+            style = u'\''
+        elif self.parsed_event.data.scalar.style == YAML_DOUBLE_QUOTED_SCALAR_STYLE:
+            style = u'"'
+        elif self.parsed_event.data.scalar.style == YAML_LITERAL_SCALAR_STYLE:
+            style = u'|'
+        elif self.parsed_event.data.scalar.style == YAML_FOLDED_SCALAR_STYLE:
+            style = u'>'
+        node = ScalarNode(tag, value, start_mark, end_mark, style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        yaml_event_delete(&self.parsed_event)
+        return node
+
+    cdef _compose_sequence_node(self, object anchor):
+        cdef int index
+        start_mark = Mark(self.stream_name,
+                self.parsed_event.start_mark.index,
+                self.parsed_event.start_mark.line,
+                self.parsed_event.start_mark.column,
+                None, None)
+        implicit = False
+        if self.parsed_event.data.sequence_start.implicit == 1:
+            implicit = True
+        if self.parsed_event.data.sequence_start.tag == NULL    \
+                or (self.parsed_event.data.sequence_start.tag[0] == c'!'
+                        and self.parsed_event.data.sequence_start.tag[1] == c'\0'):
+            tag = self.resolve(SequenceNode, None, implicit)
+        else:
+            tag = PyUnicode_FromString(self.parsed_event.data.sequence_start.tag)
+        flow_style = None
+        if self.parsed_event.data.sequence_start.style == YAML_FLOW_SEQUENCE_STYLE:
+            flow_style = True
+        elif self.parsed_event.data.sequence_start.style == YAML_BLOCK_SEQUENCE_STYLE:
+            flow_style = False
+        value = []
+        node = SequenceNode(tag, value, start_mark, None, flow_style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        yaml_event_delete(&self.parsed_event)
+        index = 0
+        self._parse_next_event()
+        while self.parsed_event.type != YAML_SEQUENCE_END_EVENT:
+            value.append(self._compose_node(node, index))
+            index = index+1
+            self._parse_next_event()
+        node.end_mark = Mark(self.stream_name,
+                self.parsed_event.end_mark.index,
+                self.parsed_event.end_mark.line,
+                self.parsed_event.end_mark.column,
+                None, None)
+        yaml_event_delete(&self.parsed_event)
+        return node
+
+    cdef _compose_mapping_node(self, object anchor):
+        start_mark = Mark(self.stream_name,
+                self.parsed_event.start_mark.index,
+                self.parsed_event.start_mark.line,
+                self.parsed_event.start_mark.column,
+                None, None)
+        implicit = False
+        if self.parsed_event.data.mapping_start.implicit == 1:
+            implicit = True
+        if self.parsed_event.data.mapping_start.tag == NULL    \
+                or (self.parsed_event.data.mapping_start.tag[0] == c'!'
+                        and self.parsed_event.data.mapping_start.tag[1] == c'\0'):
+            tag = self.resolve(MappingNode, None, implicit)
+        else:
+            tag = PyUnicode_FromString(self.parsed_event.data.mapping_start.tag)
+        flow_style = None
+        if self.parsed_event.data.mapping_start.style == YAML_FLOW_MAPPING_STYLE:
+            flow_style = True
+        elif self.parsed_event.data.mapping_start.style == YAML_BLOCK_MAPPING_STYLE:
+            flow_style = False
+        value = []
+        node = MappingNode(tag, value, start_mark, None, flow_style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        yaml_event_delete(&self.parsed_event)
+        self._parse_next_event()
+        while self.parsed_event.type != YAML_MAPPING_END_EVENT:
+            item_key = self._compose_node(node, None)
+            item_value = self._compose_node(node, item_key)
+            value.append((item_key, item_value))
+            self._parse_next_event()
+        node.end_mark = Mark(self.stream_name,
+                self.parsed_event.end_mark.index,
+                self.parsed_event.end_mark.line,
+                self.parsed_event.end_mark.column,
+                None, None)
+        yaml_event_delete(&self.parsed_event)
+        return node
+
+    cdef int _parse_next_event(self) except 0:
+        if self.parsed_event.type == YAML_NO_EVENT:
+            if yaml_parser_parse(&self.parser, &self.parsed_event) == 0:
+                error = self._parser_error()
+                raise error
+        return 1
+
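+# LibYAML input callback: fills the parser buffer from the wrapped Python
+# stream, converting unicode input to UTF-8 and caching surplus bytes between calls.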
+cdef int input_handler(void *data, char *buffer, int size, int *read) except 0:
+    cdef CParser parser
+    parser = <CParser>data
+    if parser.stream_cache is None:
+        value = parser.stream.read(size)
+        if PyUnicode_CheckExact(value) != 0:
+            value = PyUnicode_AsUTF8String(value)
+            parser.unicode_source = 1
+        if PyString_CheckExact(value) == 0:
+            if PY_MAJOR_VERSION < 3:
+                raise TypeError("a string value is expected")
+            else:
+                raise TypeError(u"a string value is expected")
+        parser.stream_cache = value
+        parser.stream_cache_pos = 0
+        parser.stream_cache_len = PyString_GET_SIZE(value)
+    if (parser.stream_cache_len - parser.stream_cache_pos) < size:
+        size = parser.stream_cache_len - parser.stream_cache_pos
+    if size > 0:
+        memcpy(buffer, PyString_AS_STRING(parser.stream_cache)
+                            + parser.stream_cache_pos, size)
+    read[0] = size
+    parser.stream_cache_pos += size
+    if parser.stream_cache_pos == parser.stream_cache_len:
+        parser.stream_cache = None
+    return 1
+
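+# Wraps yaml_emitter_t; emit() consumes event objects directly, while
+# open()/serialize()/close() implement the node-level serializer.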
+cdef class CEmitter:
+
+    cdef yaml_emitter_t emitter
+
+    cdef object stream
+
+    cdef int document_start_implicit
+    cdef int document_end_implicit
+    cdef object use_version
+    cdef object use_tags
+
+    cdef object serialized_nodes
+    cdef object anchors
+    cdef int last_alias_id
+    cdef int closed
+    cdef int dump_unicode
+    cdef object use_encoding
+
+    def __init__(self, stream, canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None, encoding=None,
+            explicit_start=None, explicit_end=None, version=None, tags=None):
+        if yaml_emitter_initialize(&self.emitter) == 0:
+            raise MemoryError
+        self.stream = stream
+        self.dump_unicode = 0
+        if PY_MAJOR_VERSION < 3:
+            if getattr3(stream, 'encoding', None):
+                self.dump_unicode = 1
+        else:
+            if hasattr(stream, u'encoding'):
+                self.dump_unicode = 1
+        self.use_encoding = encoding
+        yaml_emitter_set_output(&self.emitter, output_handler, <void *>self)
+        if canonical:
+            yaml_emitter_set_canonical(&self.emitter, 1)
+        if indent is not None:
+            yaml_emitter_set_indent(&self.emitter, indent)
+        if width is not None:
+            yaml_emitter_set_width(&self.emitter, width)
+        if allow_unicode:
+            yaml_emitter_set_unicode(&self.emitter, 1)
+        if line_break is not None:
+            if line_break == '\r':
+                yaml_emitter_set_break(&self.emitter, YAML_CR_BREAK)
+            elif line_break == '\n':
+                yaml_emitter_set_break(&self.emitter, YAML_LN_BREAK)
+            elif line_break == '\r\n':
+                yaml_emitter_set_break(&self.emitter, YAML_CRLN_BREAK)
+        self.document_start_implicit = 1
+        if explicit_start:
+            self.document_start_implicit = 0
+        self.document_end_implicit = 1
+        if explicit_end:
+            self.document_end_implicit = 0
+        self.use_version = version
+        self.use_tags = tags
+        self.serialized_nodes = {}
+        self.anchors = {}
+        self.last_alias_id = 0
+        self.closed = -1
+
+    def __dealloc__(self):
+        yaml_emitter_delete(&self.emitter)
+
+    def dispose(self):
+        pass
+
+    cdef object _emitter_error(self):
+        if self.emitter.error == YAML_MEMORY_ERROR:
+            return MemoryError
+        elif self.emitter.error == YAML_EMITTER_ERROR:
+            if PY_MAJOR_VERSION < 3:
+                problem = self.emitter.problem
+            else:
+                problem = PyUnicode_FromString(self.emitter.problem)
+            return EmitterError(problem)
+        if PY_MAJOR_VERSION < 3:
+            raise ValueError("no emitter error")
+        else:
+            raise ValueError(u"no emitter error")
+
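+    # Translate a yaml.events object into a C yaml_event_t; anchors, tags and
+    # scalar values are first converted to UTF-8 byte strings.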
+    cdef int _object_to_event(self, object event_object, yaml_event_t *event) except 0:
+        cdef yaml_encoding_t encoding
+        cdef yaml_version_directive_t version_directive_value
+        cdef yaml_version_directive_t *version_directive
+        cdef yaml_tag_directive_t tag_directives_value[128]
+        cdef yaml_tag_directive_t *tag_directives_start
+        cdef yaml_tag_directive_t *tag_directives_end
+        cdef int implicit
+        cdef int plain_implicit
+        cdef int quoted_implicit
+        cdef char *anchor
+        cdef char *tag
+        cdef char *value
+        cdef int length
+        cdef yaml_scalar_style_t scalar_style
+        cdef yaml_sequence_style_t sequence_style
+        cdef yaml_mapping_style_t mapping_style
+        event_class = event_object.__class__
+        if event_class is StreamStartEvent:
+            encoding = YAML_UTF8_ENCODING
+            if event_object.encoding == u'utf-16-le' or event_object.encoding == 'utf-16-le':
+                encoding = YAML_UTF16LE_ENCODING
+            elif event_object.encoding == u'utf-16-be' or event_object.encoding == 'utf-16-be':
+                encoding = YAML_UTF16BE_ENCODING
+            if event_object.encoding is None:
+                self.dump_unicode = 1
+            if self.dump_unicode == 1:
+                encoding = YAML_UTF8_ENCODING
+            yaml_stream_start_event_initialize(event, encoding)
+        elif event_class is StreamEndEvent:
+            yaml_stream_end_event_initialize(event)
+        elif event_class is DocumentStartEvent:
+            version_directive = NULL
+            if event_object.version:
+                version_directive_value.major = event_object.version[0]
+                version_directive_value.minor = event_object.version[1]
+                version_directive = &version_directive_value
+            tag_directives_start = NULL
+            tag_directives_end = NULL
+            if event_object.tags:
+                if len(event_object.tags) > 128:
+                    if PY_MAJOR_VERSION < 3:
+                        raise ValueError("too many tags")
+                    else:
+                        raise ValueError(u"too many tags")
+                tag_directives_start = tag_directives_value
+                tag_directives_end = tag_directives_value
+                cache = []
+                for handle in event_object.tags:
+                    prefix = event_object.tags[handle]
+                    if PyUnicode_CheckExact(handle):
+                        handle = PyUnicode_AsUTF8String(handle)
+                        cache.append(handle)
+                    if not PyString_CheckExact(handle):
+                        if PY_MAJOR_VERSION < 3:
+                            raise TypeError("tag handle must be a string")
+                        else:
+                            raise TypeError(u"tag handle must be a string")
+                    tag_directives_end.handle = PyString_AS_STRING(handle)
+                    if PyUnicode_CheckExact(prefix):
+                        prefix = PyUnicode_AsUTF8String(prefix)
+                        cache.append(prefix)
+                    if not PyString_CheckExact(prefix):
+                        if PY_MAJOR_VERSION < 3:
+                            raise TypeError("tag prefix must be a string")
+                        else:
+                            raise TypeError(u"tag prefix must be a string")
+                    tag_directives_end.prefix = PyString_AS_STRING(prefix)
+                    tag_directives_end = tag_directives_end+1
+            implicit = 1
+            if event_object.explicit:
+                implicit = 0
+            if yaml_document_start_event_initialize(event, version_directive,
+                    tag_directives_start, tag_directives_end, implicit) == 0:
+                raise MemoryError
+        elif event_class is DocumentEndEvent:
+            implicit = 1
+            if event_object.explicit:
+                implicit = 0
+            yaml_document_end_event_initialize(event, implicit)
+        elif event_class is AliasEvent:
+            anchor = NULL
+            anchor_object = event_object.anchor
+            if PyUnicode_CheckExact(anchor_object):
+                anchor_object = PyUnicode_AsUTF8String(anchor_object)
+            if not PyString_CheckExact(anchor_object):
+                if PY_MAJOR_VERSION < 3:
+                    raise TypeError("anchor must be a string")
+                else:
+                    raise TypeError(u"anchor must be a string")
+            anchor = PyString_AS_STRING(anchor_object)
+            if yaml_alias_event_initialize(event, anchor) == 0:
+                raise MemoryError
+        elif event_class is ScalarEvent:
+            anchor = NULL
+            anchor_object = event_object.anchor
+            if anchor_object is not None:
+                if PyUnicode_CheckExact(anchor_object):
+                    anchor_object = PyUnicode_AsUTF8String(anchor_object)
+                if not PyString_CheckExact(anchor_object):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("anchor must be a string")
+                    else:
+                        raise TypeError(u"anchor must be a string")
+                anchor = PyString_AS_STRING(anchor_object)
+            tag = NULL
+            tag_object = event_object.tag
+            if tag_object is not None:
+                if PyUnicode_CheckExact(tag_object):
+                    tag_object = PyUnicode_AsUTF8String(tag_object)
+                if not PyString_CheckExact(tag_object):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("tag must be a string")
+                    else:
+                        raise TypeError(u"tag must be a string")
+                tag = PyString_AS_STRING(tag_object)
+            value_object = event_object.value
+            if PyUnicode_CheckExact(value_object):
+                value_object = PyUnicode_AsUTF8String(value_object)
+            if not PyString_CheckExact(value_object):
+                if PY_MAJOR_VERSION < 3:
+                    raise TypeError("value must be a string")
+                else:
+                    raise TypeError(u"value must be a string")
+            value = PyString_AS_STRING(value_object)
+            length = PyString_GET_SIZE(value_object)
+            plain_implicit = 0
+            quoted_implicit = 0
+            if event_object.implicit is not None:
+                plain_implicit = event_object.implicit[0]
+                quoted_implicit = event_object.implicit[1]
+            style_object = event_object.style
+            scalar_style = YAML_PLAIN_SCALAR_STYLE
+            if style_object == "'" or style_object == u"'":
+                scalar_style = YAML_SINGLE_QUOTED_SCALAR_STYLE
+            elif style_object == "\"" or style_object == u"\"":
+                scalar_style = YAML_DOUBLE_QUOTED_SCALAR_STYLE
+            elif style_object == "|" or style_object == u"|":
+                scalar_style = YAML_LITERAL_SCALAR_STYLE
+            elif style_object == ">" or style_object == u">":
+                scalar_style = YAML_FOLDED_SCALAR_STYLE
+            if yaml_scalar_event_initialize(event, anchor, tag, value, length,
+                    plain_implicit, quoted_implicit, scalar_style) == 0:
+                raise MemoryError
+        elif event_class is SequenceStartEvent:
+            anchor = NULL
+            anchor_object = event_object.anchor
+            if anchor_object is not None:
+                if PyUnicode_CheckExact(anchor_object):
+                    anchor_object = PyUnicode_AsUTF8String(anchor_object)
+                if not PyString_CheckExact(anchor_object):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("anchor must be a string")
+                    else:
+                        raise TypeError(u"anchor must be a string")
+                anchor = PyString_AS_STRING(anchor_object)
+            tag = NULL
+            tag_object = event_object.tag
+            if tag_object is not None:
+                if PyUnicode_CheckExact(tag_object):
+                    tag_object = PyUnicode_AsUTF8String(tag_object)
+                if not PyString_CheckExact(tag_object):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("tag must be a string")
+                    else:
+                        raise TypeError(u"tag must be a string")
+                tag = PyString_AS_STRING(tag_object)
+            implicit = 0
+            if event_object.implicit:
+                implicit = 1
+            sequence_style = YAML_BLOCK_SEQUENCE_STYLE
+            if event_object.flow_style:
+                sequence_style = YAML_FLOW_SEQUENCE_STYLE
+            if yaml_sequence_start_event_initialize(event, anchor, tag,
+                    implicit, sequence_style) == 0:
+                raise MemoryError
+        elif event_class is MappingStartEvent:
+            anchor = NULL
+            anchor_object = event_object.anchor
+            if anchor_object is not None:
+                if PyUnicode_CheckExact(anchor_object):
+                    anchor_object = PyUnicode_AsUTF8String(anchor_object)
+                if not PyString_CheckExact(anchor_object):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("anchor must be a string")
+                    else:
+                        raise TypeError(u"anchor must be a string")
+                anchor = PyString_AS_STRING(anchor_object)
+            tag = NULL
+            tag_object = event_object.tag
+            if tag_object is not None:
+                if PyUnicode_CheckExact(tag_object):
+                    tag_object = PyUnicode_AsUTF8String(tag_object)
+                if not PyString_CheckExact(tag_object):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("tag must be a string")
+                    else:
+                        raise TypeError(u"tag must be a string")
+                tag = PyString_AS_STRING(tag_object)
+            implicit = 0
+            if event_object.implicit:
+                implicit = 1
+            mapping_style = YAML_BLOCK_MAPPING_STYLE
+            if event_object.flow_style:
+                mapping_style = YAML_FLOW_MAPPING_STYLE
+            if yaml_mapping_start_event_initialize(event, anchor, tag,
+                    implicit, mapping_style) == 0:
+                raise MemoryError
+        elif event_class is SequenceEndEvent:
+            yaml_sequence_end_event_initialize(event)
+        elif event_class is MappingEndEvent:
+            yaml_mapping_end_event_initialize(event)
+        else:
+            if PY_MAJOR_VERSION < 3:
+                raise TypeError("invalid event %s" % event_object)
+            else:
+                raise TypeError(u"invalid event %s" % event_object)
+        return 1
+
+    def emit(self, event_object):
+        cdef yaml_event_t event
+        self._object_to_event(event_object, &event)
+        if yaml_emitter_emit(&self.emitter, &event) == 0:
+            error = self._emitter_error()
+            raise error
+
+    def open(self):
+        cdef yaml_event_t event
+        cdef yaml_encoding_t encoding
+        if self.closed == -1:
+            if self.use_encoding == u'utf-16-le' or self.use_encoding == 'utf-16-le':
+                encoding = YAML_UTF16LE_ENCODING
+            elif self.use_encoding == u'utf-16-be' or self.use_encoding == 'utf-16-be':
+                encoding = YAML_UTF16BE_ENCODING
+            else:
+                encoding = YAML_UTF8_ENCODING
+            if self.use_encoding is None:
+                self.dump_unicode = 1
+            if self.dump_unicode == 1:
+                encoding = YAML_UTF8_ENCODING
+            yaml_stream_start_event_initialize(&event, encoding)
+            if yaml_emitter_emit(&self.emitter, &event) == 0:
+                error = self._emitter_error()
+                raise error
+            self.closed = 0
+        elif self.closed == 1:
+            if PY_MAJOR_VERSION < 3:
+                raise SerializerError("serializer is closed")
+            else:
+                raise SerializerError(u"serializer is closed")
+        else:
+            if PY_MAJOR_VERSION < 3:
+                raise SerializerError("serializer is already opened")
+            else:
+                raise SerializerError(u"serializer is already opened")
+
+    def close(self):
+        cdef yaml_event_t event
+        if self.closed == -1:
+            if PY_MAJOR_VERSION < 3:
+                raise SerializerError("serializer is not opened")
+            else:
+                raise SerializerError(u"serializer is not opened")
+        elif self.closed == 0:
+            yaml_stream_end_event_initialize(&event)
+            if yaml_emitter_emit(&self.emitter, &event) == 0:
+                error = self._emitter_error()
+                raise error
+            self.closed = 1
+
+    def serialize(self, node):
+        cdef yaml_event_t event
+        cdef yaml_version_directive_t version_directive_value
+        cdef yaml_version_directive_t *version_directive
+        cdef yaml_tag_directive_t tag_directives_value[128]
+        cdef yaml_tag_directive_t *tag_directives_start
+        cdef yaml_tag_directive_t *tag_directives_end
+        if self.closed == -1:
+            if PY_MAJOR_VERSION < 3:
+                raise SerializerError("serializer is not opened")
+            else:
+                raise SerializerError(u"serializer is not opened")
+        elif self.closed == 1:
+            if PY_MAJOR_VERSION < 3:
+                raise SerializerError("serializer is closed")
+            else:
+                raise SerializerError(u"serializer is closed")
+        cache = []
+        version_directive = NULL
+        if self.use_version:
+            version_directive_value.major = self.use_version[0]
+            version_directive_value.minor = self.use_version[1]
+            version_directive = &version_directive_value
+        tag_directives_start = NULL
+        tag_directives_end = NULL
+        if self.use_tags:
+            if len(self.use_tags) > 128:
+                if PY_MAJOR_VERSION < 3:
+                    raise ValueError("too many tags")
+                else:
+                    raise ValueError(u"too many tags")
+            tag_directives_start = tag_directives_value
+            tag_directives_end = tag_directives_value
+            for handle in self.use_tags:
+                prefix = self.use_tags[handle]
+                if PyUnicode_CheckExact(handle):
+                    handle = PyUnicode_AsUTF8String(handle)
+                    cache.append(handle)
+                if not PyString_CheckExact(handle):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("tag handle must be a string")
+                    else:
+                        raise TypeError(u"tag handle must be a string")
+                tag_directives_end.handle = PyString_AS_STRING(handle)
+                if PyUnicode_CheckExact(prefix):
+                    prefix = PyUnicode_AsUTF8String(prefix)
+                    cache.append(prefix)
+                if not PyString_CheckExact(prefix):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("tag prefix must be a string")
+                    else:
+                        raise TypeError(u"tag prefix must be a string")
+                tag_directives_end.prefix = PyString_AS_STRING(prefix)
+                tag_directives_end = tag_directives_end+1
+        if yaml_document_start_event_initialize(&event, version_directive,
+                tag_directives_start, tag_directives_end,
+                self.document_start_implicit) == 0:
+            raise MemoryError
+        if yaml_emitter_emit(&self.emitter, &event) == 0:
+            error = self._emitter_error()
+            raise error
+        self._anchor_node(node)
+        self._serialize_node(node, None, None)
+        yaml_document_end_event_initialize(&event, self.document_end_implicit)
+        if yaml_emitter_emit(&self.emitter, &event) == 0:
+            error = self._emitter_error()
+            raise error
+        self.serialized_nodes = {}
+        self.anchors = {}
+        self.last_alias_id = 0
+
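+    # First serialization pass: allocate an "idNNN" anchor for every node that
+    # occurs more than once in the graph.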
+    cdef int _anchor_node(self, object node) except 0:
+        if node in self.anchors:
+            if self.anchors[node] is None:
+                self.last_alias_id = self.last_alias_id+1
+                self.anchors[node] = u"id%03d" % self.last_alias_id
+        else:
+            self.anchors[node] = None
+            node_class = node.__class__
+            if node_class is SequenceNode:
+                for item in node.value:
+                    self._anchor_node(item)
+            elif node_class is MappingNode:
+                for key, value in node.value:
+                    self._anchor_node(key)
+                    self._anchor_node(value)
+        return 1
+
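+    # Second serialization pass: emit an alias event for nodes that were already
+    # serialized; otherwise emit the node itself and recurse into its children.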
+    cdef int _serialize_node(self, object node, object parent, object index) except 0:
+        cdef yaml_event_t event
+        cdef int implicit
+        cdef int plain_implicit
+        cdef int quoted_implicit
+        cdef char *anchor
+        cdef char *tag
+        cdef char *value
+        cdef int length
+        cdef int item_index
+        cdef yaml_scalar_style_t scalar_style
+        cdef yaml_sequence_style_t sequence_style
+        cdef yaml_mapping_style_t mapping_style
+        anchor_object = self.anchors[node]
+        anchor = NULL
+        if anchor_object is not None:
+            if PyUnicode_CheckExact(anchor_object):
+                anchor_object = PyUnicode_AsUTF8String(anchor_object)
+            if not PyString_CheckExact(anchor_object):
+                if PY_MAJOR_VERSION < 3:
+                    raise TypeError("anchor must be a string")
+                else:
+                    raise TypeError(u"anchor must be a string")
+            anchor = PyString_AS_STRING(anchor_object)
+        if node in self.serialized_nodes:
+            if yaml_alias_event_initialize(&event, anchor) == 0:
+                raise MemoryError
+            if yaml_emitter_emit(&self.emitter, &event) == 0:
+                error = self._emitter_error()
+                raise error
+        else:
+            node_class = node.__class__
+            self.serialized_nodes[node] = True
+            self.descend_resolver(parent, index)
+            if node_class is ScalarNode:
+                plain_implicit = 0
+                quoted_implicit = 0
+                tag_object = node.tag
+                if self.resolve(ScalarNode, node.value, (True, False)) == tag_object:
+                    plain_implicit = 1
+                if self.resolve(ScalarNode, node.value, (False, True)) == tag_object:
+                    quoted_implicit = 1
+                tag = NULL
+                if tag_object is not None:
+                    if PyUnicode_CheckExact(tag_object):
+                        tag_object = PyUnicode_AsUTF8String(tag_object)
+                    if not PyString_CheckExact(tag_object):
+                        if PY_MAJOR_VERSION < 3:
+                            raise TypeError("tag must be a string")
+                        else:
+                            raise TypeError(u"tag must be a string")
+                    tag = PyString_AS_STRING(tag_object)
+                value_object = node.value
+                if PyUnicode_CheckExact(value_object):
+                    value_object = PyUnicode_AsUTF8String(value_object)
+                if not PyString_CheckExact(value_object):
+                    if PY_MAJOR_VERSION < 3:
+                        raise TypeError("value must be a string")
+                    else:
+                        raise TypeError(u"value must be a string")
+                value = PyString_AS_STRING(value_object)
+                length = PyString_GET_SIZE(value_object)
+                style_object = node.style
+                scalar_style = YAML_PLAIN_SCALAR_STYLE
+                if style_object == "'" or style_object == u"'":
+                    scalar_style = YAML_SINGLE_QUOTED_SCALAR_STYLE
+                elif style_object == "\"" or style_object == u"\"":
+                    scalar_style = YAML_DOUBLE_QUOTED_SCALAR_STYLE
+                elif style_object == "|" or style_object == u"|":
+                    scalar_style = YAML_LITERAL_SCALAR_STYLE
+                elif style_object == ">" or style_object == u">":
+                    scalar_style = YAML_FOLDED_SCALAR_STYLE
+                if yaml_scalar_event_initialize(&event, anchor, tag, value, length,
+                        plain_implicit, quoted_implicit, scalar_style) == 0:
+                    raise MemoryError
+                if yaml_emitter_emit(&self.emitter, &event) == 0:
+                    error = self._emitter_error()
+                    raise error
+            elif node_class is SequenceNode:
+                implicit = 0
+                tag_object = node.tag
+                if self.resolve(SequenceNode, node.value, True) == tag_object:
+                    implicit = 1
+                tag = NULL
+                if tag_object is not None:
+                    if PyUnicode_CheckExact(tag_object):
+                        tag_object = PyUnicode_AsUTF8String(tag_object)
+                    if not PyString_CheckExact(tag_object):
+                        if PY_MAJOR_VERSION < 3:
+                            raise TypeError("tag must be a string")
+                        else:
+                            raise TypeError(u"tag must be a string")
+                    tag = PyString_AS_STRING(tag_object)
+                sequence_style = YAML_BLOCK_SEQUENCE_STYLE
+                if node.flow_style:
+                    sequence_style = YAML_FLOW_SEQUENCE_STYLE
+                if yaml_sequence_start_event_initialize(&event, anchor, tag,
+                        implicit, sequence_style) == 0:
+                    raise MemoryError
+                if yaml_emitter_emit(&self.emitter, &event) == 0:
+                    error = self._emitter_error()
+                    raise error
+                item_index = 0
+                for item in node.value:
+                    self._serialize_node(item, node, item_index)
+                    item_index = item_index+1
+                yaml_sequence_end_event_initialize(&event)
+                if yaml_emitter_emit(&self.emitter, &event) == 0:
+                    error = self._emitter_error()
+                    raise error
+            elif node_class is MappingNode:
+                implicit = 0
+                tag_object = node.tag
+                if self.resolve(MappingNode, node.value, True) == tag_object:
+                    implicit = 1
+                tag = NULL
+                if tag_object is not None:
+                    if PyUnicode_CheckExact(tag_object):
+                        tag_object = PyUnicode_AsUTF8String(tag_object)
+                    if not PyString_CheckExact(tag_object):
+                        if PY_MAJOR_VERSION < 3:
+                            raise TypeError("tag must be a string")
+                        else:
+                            raise TypeError(u"tag must be a string")
+                    tag = PyString_AS_STRING(tag_object)
+                mapping_style = YAML_BLOCK_MAPPING_STYLE
+                if node.flow_style:
+                    mapping_style = YAML_FLOW_MAPPING_STYLE
+                if yaml_mapping_start_event_initialize(&event, anchor, tag,
+                        implicit, mapping_style) == 0:
+                    raise MemoryError
+                if yaml_emitter_emit(&self.emitter, &event) == 0:
+                    error = self._emitter_error()
+                    raise error
+                for item_key, item_value in node.value:
+                    self._serialize_node(item_key, node, None)
+                    self._serialize_node(item_value, node, item_key)
+                yaml_mapping_end_event_initialize(&event)
+                if yaml_emitter_emit(&self.emitter, &event) == 0:
+                    error = self._emitter_error()
+                    raise error
+            self.ascend_resolver()
+        return 1
+
+cdef int output_handler(void *data, char *buffer, int size) except 0:
+    cdef CEmitter emitter
+    emitter = <CEmitter>data
+    if emitter.dump_unicode == 0:
+        value = PyString_FromStringAndSize(buffer, size)
+    else:
+        value = PyUnicode_DecodeUTF8(buffer, size, 'strict')
+    emitter.stream.write(value)
+    return 1
+
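The serializer above names anchors "id%03d" in order of discovery and emits an alias event whenever a node is met a second time. A small sketch of the observable effect, assuming the LibYAML bindings were built so that yaml.CDumper is importable:

    import yaml

    shared = ['a', 'b']
    print yaml.dump({'first': shared, 'second': shared}, Dumper=yaml.CDumper)
    # Output along the lines of:
    #   first: &id001 [a, b]
    #   second: *id001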
diff --git a/lib/yaml/__init__.py b/lib/yaml/__init__.py
new file mode 100644
index 0000000..76e19e1
--- /dev/null
+++ b/lib/yaml/__init__.py
@@ -0,0 +1,315 @@
+
+from error import *
+
+from tokens import *
+from events import *
+from nodes import *
+
+from loader import *
+from dumper import *
+
+__version__ = '3.11'
+
+try:
+    from cyaml import *
+    __with_libyaml__ = True
+except ImportError:
+    __with_libyaml__ = False
+
+def scan(stream, Loader=Loader):
+    """
+    Scan a YAML stream and produce scanning tokens.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_token():
+            yield loader.get_token()
+    finally:
+        loader.dispose()
+
+def parse(stream, Loader=Loader):
+    """
+    Parse a YAML stream and produce parsing events.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_event():
+            yield loader.get_event()
+    finally:
+        loader.dispose()
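Both scan() and parse() are generators that hand results out lazily and dispose of the loader even if iteration is abandoned early. For example:

    import yaml

    for token in yaml.scan("a: 1"):
        print token.__class__.__name__   # StreamStartToken, BlockMappingStartToken, ...

    for event in yaml.parse("a: 1"):
        print event.__class__.__name__   # StreamStartEvent, DocumentStartEvent, ...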
+
+def compose(stream, Loader=Loader):
+    """
+    Parse the first YAML document in a stream
+    and produce the corresponding representation tree.
+    """
+    loader = Loader(stream)
+    try:
+        return loader.get_single_node()
+    finally:
+        loader.dispose()
+
+def compose_all(stream, Loader=Loader):
+    """
+    Parse all YAML documents in a stream
+    and produce corresponding representation trees.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_node():
+            yield loader.get_node()
+    finally:
+        loader.dispose()
+
+def load(stream, Loader=Loader):
+    """
+    Parse the first YAML document in a stream
+    and produce the corresponding Python object.
+    """
+    loader = Loader(stream)
+    try:
+        return loader.get_single_data()
+    finally:
+        loader.dispose()
+
+def load_all(stream, Loader=Loader):
+    """
+    Parse all YAML documents in a stream
+    and produce corresponding Python objects.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_data():
+            yield loader.get_data()
+    finally:
+        loader.dispose()
+
+def safe_load(stream):
+    """
+    Parse the first YAML document in a stream
+    and produce the corresponding Python object.
+    Resolve only basic YAML tags.
+    """
+    return load(stream, SafeLoader)
+
+def safe_load_all(stream):
+    """
+    Parse all YAML documents in a stream
+    and produce corresponding Python objects.
+    Resolve only basic YAML tags.
+    """
+    return load_all(stream, SafeLoader)
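load() with the default Loader resolves the full tag set, including the !!python/* tags registered below in lib/yaml/constructor.py, so it can construct arbitrary objects; safe_load() and safe_load_all() are the entry points for untrusted input. A quick comparison:

    import yaml

    print yaml.safe_load("a: 1")                      # {'a': 1}
    print list(yaml.safe_load_all("--- 1\n--- 2\n"))  # [1, 2]
    print yaml.load("!!python/tuple [1, 2]")          # (1, 2)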
+
+def emit(events, stream=None, Dumper=Dumper,
+        canonical=None, indent=None, width=None,
+        allow_unicode=None, line_break=None):
+    """
+    Emit YAML parsing events into a stream.
+    If stream is None, return the produced string instead.
+    """
+    getvalue = None
+    if stream is None:
+        from StringIO import StringIO
+        stream = StringIO()
+        getvalue = stream.getvalue
+    dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
+            allow_unicode=allow_unicode, line_break=line_break)
+    try:
+        for event in events:
+            dumper.emit(event)
+    finally:
+        dumper.dispose()
+    if getvalue:
+        return getvalue()
+
+def serialize_all(nodes, stream=None, Dumper=Dumper,
+        canonical=None, indent=None, width=None,
+        allow_unicode=None, line_break=None,
+        encoding='utf-8', explicit_start=None, explicit_end=None,
+        version=None, tags=None):
+    """
+    Serialize a sequence of representation trees into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    getvalue = None
+    if stream is None:
+        if encoding is None:
+            from StringIO import StringIO
+        else:
+            from cStringIO import StringIO
+        stream = StringIO()
+        getvalue = stream.getvalue
+    dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
+            allow_unicode=allow_unicode, line_break=line_break,
+            encoding=encoding, version=version, tags=tags,
+            explicit_start=explicit_start, explicit_end=explicit_end)
+    try:
+        dumper.open()
+        for node in nodes:
+            dumper.serialize(node)
+        dumper.close()
+    finally:
+        dumper.dispose()
+    if getvalue:
+        return getvalue()
+
+def serialize(node, stream=None, Dumper=Dumper, **kwds):
+    """
+    Serialize a representation tree into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    return serialize_all([node], stream, Dumper=Dumper, **kwds)
+
+def dump_all(documents, stream=None, Dumper=Dumper,
+        default_style=None, default_flow_style=None,
+        canonical=None, indent=None, width=None,
+        allow_unicode=None, line_break=None,
+        encoding='utf-8', explicit_start=None, explicit_end=None,
+        version=None, tags=None):
+    """
+    Serialize a sequence of Python objects into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    getvalue = None
+    if stream is None:
+        if encoding is None:
+            from StringIO import StringIO
+        else:
+            from cStringIO import StringIO
+        stream = StringIO()
+        getvalue = stream.getvalue
+    dumper = Dumper(stream, default_style=default_style,
+            default_flow_style=default_flow_style,
+            canonical=canonical, indent=indent, width=width,
+            allow_unicode=allow_unicode, line_break=line_break,
+            encoding=encoding, version=version, tags=tags,
+            explicit_start=explicit_start, explicit_end=explicit_end)
+    try:
+        dumper.open()
+        for data in documents:
+            dumper.represent(data)
+        dumper.close()
+    finally:
+        dumper.dispose()
+    if getvalue:
+        return getvalue()
+
+def dump(data, stream=None, Dumper=Dumper, **kwds):
+    """
+    Serialize a Python object into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    return dump_all([data], stream, Dumper=Dumper, **kwds)
+
+def safe_dump_all(documents, stream=None, **kwds):
+    """
+    Serialize a sequence of Python objects into a YAML stream.
+    Produce only basic YAML tags.
+    If stream is None, return the produced string instead.
+    """
+    return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
+
+def safe_dump(data, stream=None, **kwds):
+    """
+    Serialize a Python object into a YAML stream.
+    Produce only basic YAML tags.
+    If stream is None, return the produced string instead.
+    """
+    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
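All of the dumping helpers share one convention: write into the given stream, or, when stream is None, collect the output in a StringIO and return the string. Roughly:

    import yaml

    print yaml.dump({'a': 1, 'b': [2, 3]})
    # a: 1
    # b: [2, 3]

    print yaml.safe_dump_all([{'n': 1}, {'n': 2}], explicit_start=True)
    # --- {n: 1}
    # --- {n: 2}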
+
+def add_implicit_resolver(tag, regexp, first=None,
+        Loader=Loader, Dumper=Dumper):
+    """
+    Add an implicit scalar detector.
+    If an implicit scalar value matches the given regexp,
+    the corresponding tag is assigned to the scalar.
+    first is a sequence of possible initial characters or None.
+    """
+    Loader.add_implicit_resolver(tag, regexp, first)
+    Dumper.add_implicit_resolver(tag, regexp, first)
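A sketch of pairing an implicit resolver with a constructor; the '!range' tag and its format are made up for illustration:

    import re
    import yaml

    yaml.add_implicit_resolver(u'!range', re.compile(r'^\d+-\d+$'),
            first=list(u'0123456789'))

    def construct_range(loader, node):
        start, end = map(int, loader.construct_scalar(node).split('-'))
        return range(start, end + 1)

    yaml.add_constructor(u'!range', construct_range)

    print yaml.load("pages: 2-5")   # {'pages': [2, 3, 4, 5]}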
+
+def add_path_resolver(tag, path, kind=None, Loader=Loader, Dumper=Dumper):
+    """
+    Add a path based resolver for the given tag.
+    A path is a list of keys that forms a path
+    to a node in the representation tree.
+    Keys can be string values, integers, or None.
+    """
+    Loader.add_path_resolver(tag, path, kind)
+    Dumper.add_path_resolver(tag, path, kind)
+
+def add_constructor(tag, constructor, Loader=Loader):
+    """
+    Add a constructor for the given tag.
+    Constructor is a function that accepts a Loader instance
+    and a node object and produces the corresponding Python object.
+    """
+    Loader.add_constructor(tag, constructor)
+
+def add_multi_constructor(tag_prefix, multi_constructor, Loader=Loader):
+    """
+    Add a multi-constructor for the given tag prefix.
+    Multi-constructor is called for a node if its tag starts with tag_prefix.
+    Multi-constructor accepts a Loader instance, a tag suffix,
+    and a node object and produces the corresponding Python object.
+    """
+    Loader.add_multi_constructor(tag_prefix, multi_constructor)
+
+def add_representer(data_type, representer, Dumper=Dumper):
+    """
+    Add a representer for the given type.
+    Representer is a function accepting a Dumper instance
+    and an instance of the given data type
+    and producing the corresponding representation node.
+    """
+    Dumper.add_representer(data_type, representer)
+
+def add_multi_representer(data_type, multi_representer, Dumper=Dumper):
+    """
+    Add a multi-representer for the given type.
+    Multi-representer is a function accepting a Dumper instance
+    and an instance of the given data type or subtype
+    and producing the corresponding representation node.
+    """
+    Dumper.add_multi_representer(data_type, multi_representer)
+
+class YAMLObjectMetaclass(type):
+    """
+    The metaclass for YAMLObject.
+    """
+    def __init__(cls, name, bases, kwds):
+        super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
+        if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
+            cls.yaml_loader.add_constructor(cls.yaml_tag, cls.from_yaml)
+            cls.yaml_dumper.add_representer(cls, cls.to_yaml)
+
+class YAMLObject(object):
+    """
+    An object that can dump itself to a YAML stream
+    and load itself from a YAML stream.
+    """
+
+    __metaclass__ = YAMLObjectMetaclass
+    __slots__ = ()  # no direct instantiation, so allow immutable subclasses
+
+    yaml_loader = Loader
+    yaml_dumper = Dumper
+
+    yaml_tag = None
+    yaml_flow_style = None
+
+    def from_yaml(cls, loader, node):
+        """
+        Convert a representation node to a Python object.
+        """
+        return loader.construct_yaml_object(node, cls)
+    from_yaml = classmethod(from_yaml)
+
+    def to_yaml(cls, dumper, data):
+        """
+        Convert a Python object to a representation node.
+        """
+        return dumper.represent_yaml_object(cls.yaml_tag, data, cls,
+                flow_style=cls.yaml_flow_style)
+    to_yaml = classmethod(to_yaml)
+
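Because of the metaclass, setting yaml_tag on a subclass is all it takes to register both loading and dumping at class-creation time. A minimal sketch (the Monster class is illustrative):

    import yaml

    class Monster(yaml.YAMLObject):
        yaml_tag = u'!Monster'

        def __repr__(self):
            return "Monster(name=%r, hp=%r)" % (self.name, self.hp)

    monster = yaml.load("!Monster {name: Dragon, hp: 100}")
    print monster               # Monster(name='Dragon', hp=100)
    print yaml.dump(monster)    # !Monster {hp: 100, name: Dragon}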
diff --git a/lib/yaml/composer.py b/lib/yaml/composer.py
new file mode 100644
index 0000000..06e5ac7
--- /dev/null
+++ b/lib/yaml/composer.py
@@ -0,0 +1,139 @@
+
+__all__ = ['Composer', 'ComposerError']
+
+from error import MarkedYAMLError
+from events import *
+from nodes import *
+
+class ComposerError(MarkedYAMLError):
+    pass
+
+class Composer(object):
+
+    def __init__(self):
+        self.anchors = {}
+
+    def check_node(self):
+        # Drop the STREAM-START event.
+        if self.check_event(StreamStartEvent):
+            self.get_event()
+
+        # Are there more documents available?
+        return not self.check_event(StreamEndEvent)
+
+    def get_node(self):
+        # Get the root node of the next document.
+        if not self.check_event(StreamEndEvent):
+            return self.compose_document()
+
+    def get_single_node(self):
+        # Drop the STREAM-START event.
+        self.get_event()
+
+        # Compose a document if the stream is not empty.
+        document = None
+        if not self.check_event(StreamEndEvent):
+            document = self.compose_document()
+
+        # Ensure that the stream contains no more documents.
+        if not self.check_event(StreamEndEvent):
+            event = self.get_event()
+            raise ComposerError("expected a single document in the stream",
+                    document.start_mark, "but found another document",
+                    event.start_mark)
+
+        # Drop the STREAM-END event.
+        self.get_event()
+
+        return document
+
+    def compose_document(self):
+        # Drop the DOCUMENT-START event.
+        self.get_event()
+
+        # Compose the root node.
+        node = self.compose_node(None, None)
+
+        # Drop the DOCUMENT-END event.
+        self.get_event()
+
+        self.anchors = {}
+        return node
+
+    def compose_node(self, parent, index):
+        if self.check_event(AliasEvent):
+            event = self.get_event()
+            anchor = event.anchor
+            if anchor not in self.anchors:
+                raise ComposerError(None, None, "found undefined alias %r"
+                        % anchor.encode('utf-8'), event.start_mark)
+            return self.anchors[anchor]
+        event = self.peek_event()
+        anchor = event.anchor
+        if anchor is not None:
+            if anchor in self.anchors:
+                raise ComposerError("found duplicate anchor %r; first occurence"
+                        % anchor.encode('utf-8'), self.anchors[anchor].start_mark,
+                        "second occurence", event.start_mark)
+        self.descend_resolver(parent, index)
+        if self.check_event(ScalarEvent):
+            node = self.compose_scalar_node(anchor)
+        elif self.check_event(SequenceStartEvent):
+            node = self.compose_sequence_node(anchor)
+        elif self.check_event(MappingStartEvent):
+            node = self.compose_mapping_node(anchor)
+        self.ascend_resolver()
+        return node
+
+    def compose_scalar_node(self, anchor):
+        event = self.get_event()
+        tag = event.tag
+        if tag is None or tag == u'!':
+            tag = self.resolve(ScalarNode, event.value, event.implicit)
+        node = ScalarNode(tag, event.value,
+                event.start_mark, event.end_mark, style=event.style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        return node
+
+    def compose_sequence_node(self, anchor):
+        start_event = self.get_event()
+        tag = start_event.tag
+        if tag is None or tag == u'!':
+            tag = self.resolve(SequenceNode, None, start_event.implicit)
+        node = SequenceNode(tag, [],
+                start_event.start_mark, None,
+                flow_style=start_event.flow_style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        index = 0
+        while not self.check_event(SequenceEndEvent):
+            node.value.append(self.compose_node(node, index))
+            index += 1
+        end_event = self.get_event()
+        node.end_mark = end_event.end_mark
+        return node
+
+    def compose_mapping_node(self, anchor):
+        start_event = self.get_event()
+        tag = start_event.tag
+        if tag is None or tag == u'!':
+            tag = self.resolve(MappingNode, None, start_event.implicit)
+        node = MappingNode(tag, [],
+                start_event.start_mark, None,
+                flow_style=start_event.flow_style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        while not self.check_event(MappingEndEvent):
+            #key_event = self.peek_event()
+            item_key = self.compose_node(node, None)
+            #if item_key in node.value:
+            #    raise ComposerError("while composing a mapping", start_event.start_mark,
+            #            "found duplicate key", key_event.start_mark)
+            item_value = self.compose_node(node, item_key)
+            #node.value[item_key] = item_value
+            node.value.append((item_key, item_value))
+        end_event = self.get_event()
+        node.end_mark = end_event.end_mark
+        return node
+
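The composer output is a tree of ScalarNode, SequenceNode and MappingNode objects in which aliases have already been collapsed into shared node references. For example:

    import yaml

    node = yaml.compose("a: [1, 2]")
    print node.tag                 # tag:yaml.org,2002:map
    key, value = node.value[0]
    print key.tag, key.value       # tag:yaml.org,2002:str a
    print value.tag                # tag:yaml.org,2002:seq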
diff --git a/lib/yaml/constructor.py b/lib/yaml/constructor.py
new file mode 100644
index 0000000..635faac
--- /dev/null
+++ b/lib/yaml/constructor.py
@@ -0,0 +1,675 @@
+
+__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
+    'ConstructorError']
+
+from error import *
+from nodes import *
+
+import datetime
+
+import binascii, re, sys, types
+
+class ConstructorError(MarkedYAMLError):
+    pass
+
+class BaseConstructor(object):
+
+    yaml_constructors = {}
+    yaml_multi_constructors = {}
+
+    def __init__(self):
+        self.constructed_objects = {}
+        self.recursive_objects = {}
+        self.state_generators = []
+        self.deep_construct = False
+
+    def check_data(self):
+        # Are there more documents available?
+        return self.check_node()
+
+    def get_data(self):
+        # Construct and return the next document.
+        if self.check_node():
+            return self.construct_document(self.get_node())
+
+    def get_single_data(self):
+        # Ensure that the stream contains a single document and construct it.
+        node = self.get_single_node()
+        if node is not None:
+            return self.construct_document(node)
+        return None
+
+    def construct_document(self, node):
+        data = self.construct_object(node)
+        while self.state_generators:
+            state_generators = self.state_generators
+            self.state_generators = []
+            for generator in state_generators:
+                for dummy in generator:
+                    pass
+        self.constructed_objects = {}
+        self.recursive_objects = {}
+        self.deep_construct = False
+        return data
+
+    def construct_object(self, node, deep=False):
+        if node in self.constructed_objects:
+            return self.constructed_objects[node]
+        if deep:
+            old_deep = self.deep_construct
+            self.deep_construct = True
+        if node in self.recursive_objects:
+            raise ConstructorError(None, None,
+                    "found unconstructable recursive node", node.start_mark)
+        self.recursive_objects[node] = None
+        constructor = None
+        tag_suffix = None
+        if node.tag in self.yaml_constructors:
+            constructor = self.yaml_constructors[node.tag]
+        else:
+            for tag_prefix in self.yaml_multi_constructors:
+                if node.tag.startswith(tag_prefix):
+                    tag_suffix = node.tag[len(tag_prefix):]
+                    constructor = self.yaml_multi_constructors[tag_prefix]
+                    break
+            else:
+                if None in self.yaml_multi_constructors:
+                    tag_suffix = node.tag
+                    constructor = self.yaml_multi_constructors[None]
+                elif None in self.yaml_constructors:
+                    constructor = self.yaml_constructors[None]
+                elif isinstance(node, ScalarNode):
+                    constructor = self.__class__.construct_scalar
+                elif isinstance(node, SequenceNode):
+                    constructor = self.__class__.construct_sequence
+                elif isinstance(node, MappingNode):
+                    constructor = self.__class__.construct_mapping
+        if tag_suffix is None:
+            data = constructor(self, node)
+        else:
+            data = constructor(self, tag_suffix, node)
+        if isinstance(data, types.GeneratorType):
+            generator = data
+            data = generator.next()
+            if self.deep_construct:
+                for dummy in generator:
+                    pass
+            else:
+                self.state_generators.append(generator)
+        self.constructed_objects[node] = data
+        del self.recursive_objects[node]
+        if deep:
+            self.deep_construct = old_deep
+        return data
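The generator handling above is what makes recursive documents constructible: a generator-valued constructor first yields a bare object, which is cached in constructed_objects, and is only resumed to populate the object once every referenced node can be resolved. So a sequence that contains itself through an alias loads correctly:

    import yaml

    data = yaml.load("&a [ 1, *a ]")
    print data[0]            # 1
    print data[1] is data    # True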
+
+    def construct_scalar(self, node):
+        if not isinstance(node, ScalarNode):
+            raise ConstructorError(None, None,
+                    "expected a scalar node, but found %s" % node.id,
+                    node.start_mark)
+        return node.value
+
+    def construct_sequence(self, node, deep=False):
+        if not isinstance(node, SequenceNode):
+            raise ConstructorError(None, None,
+                    "expected a sequence node, but found %s" % node.id,
+                    node.start_mark)
+        return [self.construct_object(child, deep=deep)
+                for child in node.value]
+
+    def construct_mapping(self, node, deep=False):
+        if not isinstance(node, MappingNode):
+            raise ConstructorError(None, None,
+                    "expected a mapping node, but found %s" % node.id,
+                    node.start_mark)
+        mapping = {}
+        for key_node, value_node in node.value:
+            key = self.construct_object(key_node, deep=deep)
+            try:
+                hash(key)
+            except TypeError, exc:
+                raise ConstructorError("while constructing a mapping", node.start_mark,
+                        "found unacceptable key (%s)" % exc, key_node.start_mark)
+            value = self.construct_object(value_node, deep=deep)
+            mapping[key] = value
+        return mapping
+
+    def construct_pairs(self, node, deep=False):
+        if not isinstance(node, MappingNode):
+            raise ConstructorError(None, None,
+                    "expected a mapping node, but found %s" % node.id,
+                    node.start_mark)
+        pairs = []
+        for key_node, value_node in node.value:
+            key = self.construct_object(key_node, deep=deep)
+            value = self.construct_object(value_node, deep=deep)
+            pairs.append((key, value))
+        return pairs
+
+    def add_constructor(cls, tag, constructor):
+        if 'yaml_constructors' not in cls.__dict__:
+            cls.yaml_constructors = cls.yaml_constructors.copy()
+        cls.yaml_constructors[tag] = constructor
+    add_constructor = classmethod(add_constructor)
+
+    def add_multi_constructor(cls, tag_prefix, multi_constructor):
+        if 'yaml_multi_constructors' not in cls.__dict__:
+            cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
+        cls.yaml_multi_constructors[tag_prefix] = multi_constructor
+    add_multi_constructor = classmethod(add_multi_constructor)
+
+class SafeConstructor(BaseConstructor):
+
+    def construct_scalar(self, node):
+        if isinstance(node, MappingNode):
+            for key_node, value_node in node.value:
+                if key_node.tag == u'tag:yaml.org,2002:value':
+                    return self.construct_scalar(value_node)
+        return BaseConstructor.construct_scalar(self, node)
+
+    def flatten_mapping(self, node):
+        merge = []
+        index = 0
+        while index < len(node.value):
+            key_node, value_node = node.value[index]
+            if key_node.tag == u'tag:yaml.org,2002:merge':
+                del node.value[index]
+                if isinstance(value_node, MappingNode):
+                    self.flatten_mapping(value_node)
+                    merge.extend(value_node.value)
+                elif isinstance(value_node, SequenceNode):
+                    submerge = []
+                    for subnode in value_node.value:
+                        if not isinstance(subnode, MappingNode):
+                            raise ConstructorError("while constructing a mapping",
+                                    node.start_mark,
+                                    "expected a mapping for merging, but found %s"
+                                    % subnode.id, subnode.start_mark)
+                        self.flatten_mapping(subnode)
+                        submerge.append(subnode.value)
+                    submerge.reverse()
+                    for value in submerge:
+                        merge.extend(value)
+                else:
+                    raise ConstructorError("while constructing a mapping", node.start_mark,
+                            "expected a mapping or list of mappings for merging, but found %s"
+                            % value_node.id, value_node.start_mark)
+            elif key_node.tag == u'tag:yaml.org,2002:value':
+                key_node.tag = u'tag:yaml.org,2002:str'
+                index += 1
+            else:
+                index += 1
+        if merge:
+            node.value = merge + node.value
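This implements the YAML merge key '<<': merged pairs are prepended to node.value, so keys spelled out explicitly in the mapping overwrite the merged defaults during construction. For example:

    import yaml

    document = """
    defaults: &defaults
      adapter: postgres
      host: localhost
    development:
      <<: *defaults
      host: dev.example.com
    """
    print yaml.safe_load(document)['development']
    # {'adapter': 'postgres', 'host': 'dev.example.com'}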
+
+    def construct_mapping(self, node, deep=False):
+        if isinstance(node, MappingNode):
+            self.flatten_mapping(node)
+        return BaseConstructor.construct_mapping(self, node, deep=deep)
+
+    def construct_yaml_null(self, node):
+        self.construct_scalar(node)
+        return None
+
+    bool_values = {
+        u'yes':     True,
+        u'no':      False,
+        u'true':    True,
+        u'false':   False,
+        u'on':      True,
+        u'off':     False,
+    }
+
+    def construct_yaml_bool(self, node):
+        value = self.construct_scalar(node)
+        return self.bool_values[value.lower()]
+
+    def construct_yaml_int(self, node):
+        value = str(self.construct_scalar(node))
+        value = value.replace('_', '')
+        sign = +1
+        if value[0] == '-':
+            sign = -1
+        if value[0] in '+-':
+            value = value[1:]
+        if value == '0':
+            return 0
+        elif value.startswith('0b'):
+            return sign*int(value[2:], 2)
+        elif value.startswith('0x'):
+            return sign*int(value[2:], 16)
+        elif value[0] == '0':
+            return sign*int(value, 8)
+        elif ':' in value:
+            digits = [int(part) for part in value.split(':')]
+            digits.reverse()
+            base = 1
+            value = 0
+            for digit in digits:
+                value += digit*base
+                base *= 60
+            return sign*value
+        else:
+            return sign*int(value)
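Alongside the binary, octal and hexadecimal forms, YAML 1.1 allows underscores as digit separators and base-60 integers with ':' separators, which the last branch above handles:

    import yaml

    print yaml.safe_load("1_000")    # 1000
    print yaml.safe_load("0x1F")     # 31
    print yaml.safe_load("1:30:00")  # 5400, i.e. 1*3600 + 30*60 + 0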
+
+    inf_value = 1e300
+    while inf_value != inf_value*inf_value:
+        inf_value *= inf_value
+    nan_value = -inf_value/inf_value   # Trying to make a quiet NaN (like C99).
+
+    def construct_yaml_float(self, node):
+        value = str(self.construct_scalar(node))
+        value = value.replace('_', '').lower()
+        sign = +1
+        if value[0] == '-':
+            sign = -1
+        if value[0] in '+-':
+            value = value[1:]
+        if value == '.inf':
+            return sign*self.inf_value
+        elif value == '.nan':
+            return self.nan_value
+        elif ':' in value:
+            digits = [float(part) for part in value.split(':')]
+            digits.reverse()
+            base = 1
+            value = 0.0
+            for digit in digits:
+                value += digit*base
+                base *= 60
+            return sign*value
+        else:
+            return sign*float(value)
+
+    def construct_yaml_binary(self, node):
+        value = self.construct_scalar(node)
+        try:
+            return str(value).decode('base64')
+        except (binascii.Error, UnicodeEncodeError), exc:
+            raise ConstructorError(None, None,
+                    "failed to decode base64 data: %s" % exc, node.start_mark) 
+
+    timestamp_regexp = re.compile(
+            ur'''^(?P<year>[0-9][0-9][0-9][0-9])
+                -(?P<month>[0-9][0-9]?)
+                -(?P<day>[0-9][0-9]?)
+                (?:(?:[Tt]|[ \t]+)
+                (?P<hour>[0-9][0-9]?)
+                :(?P<minute>[0-9][0-9])
+                :(?P<second>[0-9][0-9])
+                (?:\.(?P<fraction>[0-9]*))?
+                (?:[ \t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
+                (?::(?P<tz_minute>[0-9][0-9]))?))?)?$''', re.X)
+
+    def construct_yaml_timestamp(self, node):
+        value = self.construct_scalar(node)
+        match = self.timestamp_regexp.match(value)
+        values = match.groupdict()
+        year = int(values['year'])
+        month = int(values['month'])
+        day = int(values['day'])
+        if not values['hour']:
+            return datetime.date(year, month, day)
+        hour = int(values['hour'])
+        minute = int(values['minute'])
+        second = int(values['second'])
+        fraction = 0
+        if values['fraction']:
+            fraction = values['fraction'][:6]
+            while len(fraction) < 6:
+                fraction += '0'
+            fraction = int(fraction)
+        delta = None
+        if values['tz_sign']:
+            tz_hour = int(values['tz_hour'])
+            tz_minute = int(values['tz_minute'] or 0)
+            delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
+            if values['tz_sign'] == '-':
+                delta = -delta
+        data = datetime.datetime(year, month, day, hour, minute, second, fraction)
+        if delta:
+            data -= delta
+        return data
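Dates without a time part come back as datetime.date; full timestamps become naive datetime.datetime values with any timezone offset folded in (the delta is subtracted, normalizing to UTC):

    import yaml

    print yaml.safe_load("2014-03-26")
    # 2014-03-26 (a datetime.date)
    print yaml.safe_load("2001-12-14 21:59:43.10 -5")
    # 2001-12-15 02:59:43.100000 (shifted to UTC)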
+
+    def construct_yaml_omap(self, node):
+        # Note: we do not check for duplicate keys, because it's too
+        # CPU-expensive.
+        omap = []
+        yield omap
+        if not isinstance(node, SequenceNode):
+            raise ConstructorError("while constructing an ordered map", node.start_mark,
+                    "expected a sequence, but found %s" % node.id, node.start_mark)
+        for subnode in node.value:
+            if not isinstance(subnode, MappingNode):
+                raise ConstructorError("while constructing an ordered map", node.start_mark,
+                        "expected a mapping of length 1, but found %s" % subnode.id,
+                        subnode.start_mark)
+            if len(subnode.value) != 1:
+                raise ConstructorError("while constructing an ordered map", node.start_mark,
+                        "expected a single mapping item, but found %d items" % len(subnode.value),
+                        subnode.start_mark)
+            key_node, value_node = subnode.value[0]
+            key = self.construct_object(key_node)
+            value = self.construct_object(value_node)
+            omap.append((key, value))
+
+    def construct_yaml_pairs(self, node):
+        # Note: the same code as `construct_yaml_omap`.
+        pairs = []
+        yield pairs
+        if not isinstance(node, SequenceNode):
+            raise ConstructorError("while constructing pairs", node.start_mark,
+                    "expected a sequence, but found %s" % node.id, node.start_mark)
+        for subnode in node.value:
+            if not isinstance(subnode, MappingNode):
+                raise ConstructorError("while constructing pairs", node.start_mark,
+                        "expected a mapping of length 1, but found %s" % subnode.id,
+                        subnode.start_mark)
+            if len(subnode.value) != 1:
+                raise ConstructorError("while constructing pairs", node.start_mark,
+                        "expected a single mapping item, but found %d items" % len(subnode.value),
+                        subnode.start_mark)
+            key_node, value_node = subnode.value[0]
+            key = self.construct_object(key_node)
+            value = self.construct_object(value_node)
+            pairs.append((key, value))
+
+    def construct_yaml_set(self, node):
+        data = set()
+        yield data
+        value = self.construct_mapping(node)
+        data.update(value)
+
+    def construct_yaml_str(self, node):
+        value = self.construct_scalar(node)
+        try:
+            return value.encode('ascii')
+        except UnicodeEncodeError:
+            return value
+
+    def construct_yaml_seq(self, node):
+        data = []
+        yield data
+        data.extend(self.construct_sequence(node))
+
+    def construct_yaml_map(self, node):
+        data = {}
+        yield data
+        value = self.construct_mapping(node)
+        data.update(value)
+
+    def construct_yaml_object(self, node, cls):
+        data = cls.__new__(cls)
+        yield data
+        if hasattr(data, '__setstate__'):
+            state = self.construct_mapping(node, deep=True)
+            data.__setstate__(state)
+        else:
+            state = self.construct_mapping(node)
+            data.__dict__.update(state)
+
+    def construct_undefined(self, node):
+        raise ConstructorError(None, None,
+                "could not determine a constructor for the tag %r" % node.tag.encode('utf-8'),
+                node.start_mark)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:null',
+        SafeConstructor.construct_yaml_null)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:bool',
+        SafeConstructor.construct_yaml_bool)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:int',
+        SafeConstructor.construct_yaml_int)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:float',
+        SafeConstructor.construct_yaml_float)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:binary',
+        SafeConstructor.construct_yaml_binary)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:timestamp',
+        SafeConstructor.construct_yaml_timestamp)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:omap',
+        SafeConstructor.construct_yaml_omap)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:pairs',
+        SafeConstructor.construct_yaml_pairs)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:set',
+        SafeConstructor.construct_yaml_set)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:str',
+        SafeConstructor.construct_yaml_str)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:seq',
+        SafeConstructor.construct_yaml_seq)
+
+SafeConstructor.add_constructor(
+        u'tag:yaml.org,2002:map',
+        SafeConstructor.construct_yaml_map)
+
+SafeConstructor.add_constructor(None,
+        SafeConstructor.construct_undefined)
+
+class Constructor(SafeConstructor):
+
+    def construct_python_str(self, node):
+        return self.construct_scalar(node).encode('utf-8')
+
+    def construct_python_unicode(self, node):
+        return self.construct_scalar(node)
+
+    def construct_python_long(self, node):
+        return long(self.construct_yaml_int(node))
+
+    def construct_python_complex(self, node):
+        return complex(self.construct_scalar(node))
+
+    def construct_python_tuple(self, node):
+        return tuple(self.construct_sequence(node))
+
+    def find_python_module(self, name, mark):
+        if not name:
+            raise ConstructorError("while constructing a Python module", mark,
+                    "expected non-empty name appended to the tag", mark)
+        try:
+            __import__(name)
+        except ImportError, exc:
+            raise ConstructorError("while constructing a Python module", mark,
+                    "cannot find module %r (%s)" % (name.encode('utf-8'), exc), mark)
+        return sys.modules[name]
+
+    def find_python_name(self, name, mark):
+        if not name:
+            raise ConstructorError("while constructing a Python object", mark,
+                    "expected non-empty name appended to the tag", mark)
+        if u'.' in name:
+            module_name, object_name = name.rsplit('.', 1)
+        else:
+            module_name = '__builtin__'
+            object_name = name
+        try:
+            __import__(module_name)
+        except ImportError, exc:
+            raise ConstructorError("while constructing a Python object", mark,
+                    "cannot find module %r (%s)" % (module_name.encode('utf-8'), exc), mark)
+        module = sys.modules[module_name]
+        if not hasattr(module, object_name):
+            raise ConstructorError("while constructing a Python object", mark,
+                    "cannot find %r in the module %r" % (object_name.encode('utf-8'),
+                        module.__name__), mark)
+        return getattr(module, object_name)
+
+    def construct_python_name(self, suffix, node):
+        value = self.construct_scalar(node)
+        if value:
+            raise ConstructorError("while constructing a Python name", node.start_mark,
+                    "expected the empty value, but found %r" % value.encode('utf-8'),
+                    node.start_mark)
+        return self.find_python_name(suffix, node.start_mark)
+
+    def construct_python_module(self, suffix, node):
+        value = self.construct_scalar(node)
+        if value:
+            raise ConstructorError("while constructing a Python module", node.start_mark,
+                    "expected the empty value, but found %r" % value.encode('utf-8'),
+                    node.start_mark)
+        return self.find_python_module(suffix, node.start_mark)
+
+    class classobj: pass
+
+    def make_python_instance(self, suffix, node,
+            args=None, kwds=None, newobj=False):
+        if not args:
+            args = []
+        if not kwds:
+            kwds = {}
+        cls = self.find_python_name(suffix, node.start_mark)
+        if newobj and isinstance(cls, type(self.classobj)) \
+                and not args and not kwds:
+            instance = self.classobj()
+            instance.__class__ = cls
+            return instance
+        elif newobj and isinstance(cls, type):
+            return cls.__new__(cls, *args, **kwds)
+        else:
+            return cls(*args, **kwds)
+
+    def set_python_instance_state(self, instance, state):
+        if hasattr(instance, '__setstate__'):
+            instance.__setstate__(state)
+        else:
+            slotstate = {}
+            if isinstance(state, tuple) and len(state) == 2:
+                state, slotstate = state
+            if hasattr(instance, '__dict__'):
+                instance.__dict__.update(state)
+            elif state:
+                slotstate.update(state)
+            for key, value in slotstate.items():
+                setattr(instance, key, value)
+
+    def construct_python_object(self, suffix, node):
+        # Format:
+        #   !!python/object:module.name { ... state ... }
+        instance = self.make_python_instance(suffix, node, newobj=True)
+        yield instance
+        deep = hasattr(instance, '__setstate__')
+        state = self.construct_mapping(node, deep=deep)
+        self.set_python_instance_state(instance, state)
+
+    def construct_python_object_apply(self, suffix, node, newobj=False):
+        # Format:
+        #   !!python/object/apply       # (or !!python/object/new)
+        #   args: [ ... arguments ... ]
+        #   kwds: { ... keywords ... }
+        #   state: ... state ...
+        #   listitems: [ ... listitems ... ]
+        #   dictitems: { ... dictitems ... }
+        # or short format:
+        #   !!python/object/apply [ ... arguments ... ]
+        # The difference between !!python/object/apply and !!python/object/new
+        # is how an object is created, check make_python_instance for details.
+        if isinstance(node, SequenceNode):
+            args = self.construct_sequence(node, deep=True)
+            kwds = {}
+            state = {}
+            listitems = []
+            dictitems = {}
+        else:
+            value = self.construct_mapping(node, deep=True)
+            args = value.get('args', [])
+            kwds = value.get('kwds', {})
+            state = value.get('state', {})
+            listitems = value.get('listitems', [])
+            dictitems = value.get('dictitems', {})
+        instance = self.make_python_instance(suffix, node, args, kwds, newobj)
+        if state:
+            self.set_python_instance_state(instance, state)
+        if listitems:
+            instance.extend(listitems)
+        if dictitems:
+            for key in dictitems:
+                instance[key] = dictitems[key]
+        return instance
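A short illustration of the apply form, which imports the named callable and invokes it with the collected arguments; like all !!python/object* tags it is only safe with trusted input:

    import yaml

    print yaml.load("!!python/object/apply:complex [1, 2]")
    # (1+2j)
    print yaml.load("!!python/object/apply:datetime.timedelta {kwds: {days: 2}}")
    # 2 days, 0:00:00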
+
+    def construct_python_object_new(self, suffix, node):
+        return self.construct_python_object_apply(suffix, node, newobj=True)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/none',
+    Constructor.construct_yaml_null)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/bool',
+    Constructor.construct_yaml_bool)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/str',
+    Constructor.construct_python_str)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/unicode',
+    Constructor.construct_python_unicode)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/int',
+    Constructor.construct_yaml_int)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/long',
+    Constructor.construct_python_long)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/float',
+    Constructor.construct_yaml_float)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/complex',
+    Constructor.construct_python_complex)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/list',
+    Constructor.construct_yaml_seq)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/tuple',
+    Constructor.construct_python_tuple)
+
+Constructor.add_constructor(
+    u'tag:yaml.org,2002:python/dict',
+    Constructor.construct_yaml_map)
+
+Constructor.add_multi_constructor(
+    u'tag:yaml.org,2002:python/name:',
+    Constructor.construct_python_name)
+
+Constructor.add_multi_constructor(
+    u'tag:yaml.org,2002:python/module:',
+    Constructor.construct_python_module)
+
+Constructor.add_multi_constructor(
+    u'tag:yaml.org,2002:python/object:',
+    Constructor.construct_python_object)
+
+Constructor.add_multi_constructor(
+    u'tag:yaml.org,2002:python/object/apply:',
+    Constructor.construct_python_object_apply)
+
+Constructor.add_multi_constructor(
+    u'tag:yaml.org,2002:python/object/new:',
+    Constructor.construct_python_object_new)
+
diff --git a/lib/yaml/cyaml.py b/lib/yaml/cyaml.py
new file mode 100644
index 0000000..68dcd75
--- /dev/null
+++ b/lib/yaml/cyaml.py
@@ -0,0 +1,85 @@
+
+__all__ = ['CBaseLoader', 'CSafeLoader', 'CLoader',
+        'CBaseDumper', 'CSafeDumper', 'CDumper']
+
+from _yaml import CParser, CEmitter
+
+from constructor import *
+
+from serializer import *
+from representer import *
+
+from resolver import *
+
+class CBaseLoader(CParser, BaseConstructor, BaseResolver):
+
+    def __init__(self, stream):
+        CParser.__init__(self, stream)
+        BaseConstructor.__init__(self)
+        BaseResolver.__init__(self)
+
+class CSafeLoader(CParser, SafeConstructor, Resolver):
+
+    def __init__(self, stream):
+        CParser.__init__(self, stream)
+        SafeConstructor.__init__(self)
+        Resolver.__init__(self)
+
+class CLoader(CParser, Constructor, Resolver):
+
+    def __init__(self, stream):
+        CParser.__init__(self, stream)
+        Constructor.__init__(self)
+        Resolver.__init__(self)
+
+class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        CEmitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width, encoding=encoding,
+                allow_unicode=allow_unicode, line_break=line_break,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class CSafeDumper(CEmitter, SafeRepresenter, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        CEmitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width, encoding=encoding,
+                allow_unicode=allow_unicode, line_break=line_break,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        SafeRepresenter.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class CDumper(CEmitter, Serializer, Representer, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        CEmitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width, encoding=encoding,
+                allow_unicode=allow_unicode, line_break=line_break,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
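Each class mirrors its pure-Python counterpart, swapping only the parser or emitter for the LibYAML-backed CParser/CEmitter, so they can be chosen per call. The usual guarded import pattern:

    import yaml

    try:
        from yaml import CLoader as FastLoader, CDumper as FastDumper
    except ImportError:
        from yaml import Loader as FastLoader, Dumper as FastDumper

    data = yaml.load("a: [1, 2]", Loader=FastLoader)
    text = yaml.dump(data, Dumper=FastDumper)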
diff --git a/lib/yaml/dumper.py b/lib/yaml/dumper.py
new file mode 100644
index 0000000..f811d2c
--- /dev/null
+++ b/lib/yaml/dumper.py
@@ -0,0 +1,62 @@
+
+__all__ = ['BaseDumper', 'SafeDumper', 'Dumper']
+
+from emitter import *
+from serializer import *
+from representer import *
+from resolver import *
+
+class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        Emitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width,
+                allow_unicode=allow_unicode, line_break=line_break)
+        Serializer.__init__(self, encoding=encoding,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        Emitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width,
+                allow_unicode=allow_unicode, line_break=line_break)
+        Serializer.__init__(self, encoding=encoding,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        SafeRepresenter.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class Dumper(Emitter, Serializer, Representer, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        Emitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width,
+                allow_unicode=allow_unicode, line_break=line_break)
+        Serializer.__init__(self, encoding=encoding,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
diff --git a/lib/yaml/emitter.py b/lib/yaml/emitter.py
new file mode 100644
index 0000000..e5bcdcc
--- /dev/null
+++ b/lib/yaml/emitter.py
@@ -0,0 +1,1140 @@
+
+# Emitter expects events obeying the following grammar:
+# stream ::= STREAM-START document* STREAM-END
+# document ::= DOCUMENT-START node DOCUMENT-END
+# node ::= SCALAR | sequence | mapping
+# sequence ::= SEQUENCE-START node* SEQUENCE-END
+# mapping ::= MAPPING-START (node node)* MAPPING-END
+
+__all__ = ['Emitter', 'EmitterError']
+
+from error import YAMLError
+from events import *
+
+class EmitterError(YAMLError):
+    pass
+
+class ScalarAnalysis(object):
+    def __init__(self, scalar, empty, multiline,
+            allow_flow_plain, allow_block_plain,
+            allow_single_quoted, allow_double_quoted,
+            allow_block):
+        self.scalar = scalar
+        self.empty = empty
+        self.multiline = multiline
+        self.allow_flow_plain = allow_flow_plain
+        self.allow_block_plain = allow_block_plain
+        self.allow_single_quoted = allow_single_quoted
+        self.allow_double_quoted = allow_double_quoted
+        self.allow_block = allow_block
+
+class Emitter(object):
+
+    DEFAULT_TAG_PREFIXES = {
+        u'!' : u'!',
+        u'tag:yaml.org,2002:' : u'!!',
+    }
+
+    def __init__(self, stream, canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None):
+
+        # The stream should have the methods `write` and possibly `flush`.
+        self.stream = stream
+
+        # Encoding can be overridden by STREAM-START.
+        self.encoding = None
+
+        # Emitter is a state machine with a stack of states to handle nested
+        # structures.
+        self.states = []
+        self.state = self.expect_stream_start
+
+        # Current event and the event queue.
+        self.events = []
+        self.event = None
+
+        # The current indentation level and the stack of previous indents.
+        self.indents = []
+        self.indent = None
+
+        # Flow level.
+        self.flow_level = 0
+
+        # Contexts.
+        self.root_context = False
+        self.sequence_context = False
+        self.mapping_context = False
+        self.simple_key_context = False
+
+        # Characteristics of the last emitted character:
+        #  - current position.
+        #  - is it a whitespace?
+        #  - is it an indention character
+        #    (indentation space, '-', '?', or ':')?
+        self.line = 0
+        self.column = 0
+        self.whitespace = True
+        self.indention = True
+
+        # Whether the document requires an explicit document end indicator.
+        self.open_ended = False
+
+        # Formatting details.
+        self.canonical = canonical
+        self.allow_unicode = allow_unicode
+        self.best_indent = 2
+        if indent and 1 < indent < 10:
+            self.best_indent = indent
+        self.best_width = 80
+        if width and width > self.best_indent*2:
+            self.best_width = width
+        self.best_line_break = u'\n'
+        if line_break in [u'\r', u'\n', u'\r\n']:
+            self.best_line_break = line_break
+
+        # Tag prefixes.
+        self.tag_prefixes = None
+
+        # Prepared anchor and tag.
+        self.prepared_anchor = None
+        self.prepared_tag = None
+
+        # Scalar analysis and style.
+        self.analysis = None
+        self.style = None
+
+    def dispose(self):
+        # Reset the state attributes (to clear self-references)
+        self.states = []
+        self.state = None
+
+    def emit(self, event):
+        self.events.append(event)
+        while not self.need_more_events():
+            self.event = self.events.pop(0)
+            self.state()
+            self.event = None
+
+    # In some cases, we wait for the next few events before emitting anything.
+
+    def need_more_events(self):
+        if not self.events:
+            return True
+        event = self.events[0]
+        if isinstance(event, DocumentStartEvent):
+            return self.need_events(1)
+        elif isinstance(event, SequenceStartEvent):
+            return self.need_events(2)
+        elif isinstance(event, MappingStartEvent):
+            return self.need_events(3)
+        else:
+            return False
+
+    def need_events(self, count):
+        level = 0
+        for event in self.events[1:]:
+            if isinstance(event, (DocumentStartEvent, CollectionStartEvent)):
+                level += 1
+            elif isinstance(event, (DocumentEndEvent, CollectionEndEvent)):
+                level -= 1
+            elif isinstance(event, StreamEndEvent):
+                level = -1
+            if level < 0:
+                return False
+        return (len(self.events) < count+1)
+
+    def increase_indent(self, flow=False, indentless=False):
+        self.indents.append(self.indent)
+        if self.indent is None:
+            if flow:
+                self.indent = self.best_indent
+            else:
+                self.indent = 0
+        elif not indentless:
+            self.indent += self.best_indent
+
+    # States.
+
+    # Stream handlers.
+
+    def expect_stream_start(self):
+        if isinstance(self.event, StreamStartEvent):
+            if self.event.encoding and not getattr(self.stream, 'encoding', None):
+                self.encoding = self.event.encoding
+            self.write_stream_start()
+            self.state = self.expect_first_document_start
+        else:
+            raise EmitterError("expected StreamStartEvent, but got %s"
+                    % self.event)
+
+    def expect_nothing(self):
+        raise EmitterError("expected nothing, but got %s" % self.event)
+
+    # Document handlers.
+
+    def expect_first_document_start(self):
+        return self.expect_document_start(first=True)
+
+    def expect_document_start(self, first=False):
+        if isinstance(self.event, DocumentStartEvent):
+            if (self.event.version or self.event.tags) and self.open_ended:
+                self.write_indicator(u'...', True)
+                self.write_indent()
+            if self.event.version:
+                version_text = self.prepare_version(self.event.version)
+                self.write_version_directive(version_text)
+            self.tag_prefixes = self.DEFAULT_TAG_PREFIXES.copy()
+            if self.event.tags:
+                handles = self.event.tags.keys()
+                handles.sort()
+                for handle in handles:
+                    prefix = self.event.tags[handle]
+                    self.tag_prefixes[prefix] = handle
+                    handle_text = self.prepare_tag_handle(handle)
+                    prefix_text = self.prepare_tag_prefix(prefix)
+                    self.write_tag_directive(handle_text, prefix_text)
+            implicit = (first and not self.event.explicit and not self.canonical
+                    and not self.event.version and not self.event.tags
+                    and not self.check_empty_document())
+            if not implicit:
+                self.write_indent()
+                self.write_indicator(u'---', True)
+                if self.canonical:
+                    self.write_indent()
+            self.state = self.expect_document_root
+        elif isinstance(self.event, StreamEndEvent):
+            if self.open_ended:
+                self.write_indicator(u'...', True)
+                self.write_indent()
+            self.write_stream_end()
+            self.state = self.expect_nothing
+        else:
+            raise EmitterError("expected DocumentStartEvent, but got %s"
+                    % self.event)
+
+    def expect_document_end(self):
+        if isinstance(self.event, DocumentEndEvent):
+            self.write_indent()
+            if self.event.explicit:
+                self.write_indicator(u'...', True)
+                self.write_indent()
+            self.flush_stream()
+            self.state = self.expect_document_start
+        else:
+            raise EmitterError("expected DocumentEndEvent, but got %s"
+                    % self.event)
+
+    def expect_document_root(self):
+        self.states.append(self.expect_document_end)
+        self.expect_node(root=True)
+
+    # Node handlers.
+
+    def expect_node(self, root=False, sequence=False, mapping=False,
+            simple_key=False):
+        self.root_context = root
+        self.sequence_context = sequence
+        self.mapping_context = mapping
+        self.simple_key_context = simple_key
+        if isinstance(self.event, AliasEvent):
+            self.expect_alias()
+        elif isinstance(self.event, (ScalarEvent, CollectionStartEvent)):
+            self.process_anchor(u'&')
+            self.process_tag()
+            if isinstance(self.event, ScalarEvent):
+                self.expect_scalar()
+            elif isinstance(self.event, SequenceStartEvent):
+                if self.flow_level or self.canonical or self.event.flow_style   \
+                        or self.check_empty_sequence():
+                    self.expect_flow_sequence()
+                else:
+                    self.expect_block_sequence()
+            elif isinstance(self.event, MappingStartEvent):
+                if self.flow_level or self.canonical or self.event.flow_style   \
+                        or self.check_empty_mapping():
+                    self.expect_flow_mapping()
+                else:
+                    self.expect_block_mapping()
+        else:
+            raise EmitterError("expected NodeEvent, but got %s" % self.event)
+
+    def expect_alias(self):
+        if self.event.anchor is None:
+            raise EmitterError("anchor is not specified for alias")
+        self.process_anchor(u'*')
+        self.state = self.states.pop()
+
+    def expect_scalar(self):
+        self.increase_indent(flow=True)
+        self.process_scalar()
+        self.indent = self.indents.pop()
+        self.state = self.states.pop()
+
+    # Flow sequence handlers.
+
+    def expect_flow_sequence(self):
+        self.write_indicator(u'[', True, whitespace=True)
+        self.flow_level += 1
+        self.increase_indent(flow=True)
+        self.state = self.expect_first_flow_sequence_item
+
+    def expect_first_flow_sequence_item(self):
+        if isinstance(self.event, SequenceEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            self.write_indicator(u']', False)
+            self.state = self.states.pop()
+        else:
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            self.states.append(self.expect_flow_sequence_item)
+            self.expect_node(sequence=True)
+
+    def expect_flow_sequence_item(self):
+        if isinstance(self.event, SequenceEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            if self.canonical:
+                self.write_indicator(u',', False)
+                self.write_indent()
+            self.write_indicator(u']', False)
+            self.state = self.states.pop()
+        else:
+            self.write_indicator(u',', False)
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            self.states.append(self.expect_flow_sequence_item)
+            self.expect_node(sequence=True)
+
+    # Flow mapping handlers.
+
+    def expect_flow_mapping(self):
+        self.write_indicator(u'{', True, whitespace=True)
+        self.flow_level += 1
+        self.increase_indent(flow=True)
+        self.state = self.expect_first_flow_mapping_key
+
+    def expect_first_flow_mapping_key(self):
+        if isinstance(self.event, MappingEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            self.write_indicator(u'}', False)
+            self.state = self.states.pop()
+        else:
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            if not self.canonical and self.check_simple_key():
+                self.states.append(self.expect_flow_mapping_simple_value)
+                self.expect_node(mapping=True, simple_key=True)
+            else:
+                self.write_indicator(u'?', True)
+                self.states.append(self.expect_flow_mapping_value)
+                self.expect_node(mapping=True)
+
+    def expect_flow_mapping_key(self):
+        if isinstance(self.event, MappingEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            if self.canonical:
+                self.write_indicator(u',', False)
+                self.write_indent()
+            self.write_indicator(u'}', False)
+            self.state = self.states.pop()
+        else:
+            self.write_indicator(u',', False)
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            if not self.canonical and self.check_simple_key():
+                self.states.append(self.expect_flow_mapping_simple_value)
+                self.expect_node(mapping=True, simple_key=True)
+            else:
+                self.write_indicator(u'?', True)
+                self.states.append(self.expect_flow_mapping_value)
+                self.expect_node(mapping=True)
+
+    def expect_flow_mapping_simple_value(self):
+        self.write_indicator(u':', False)
+        self.states.append(self.expect_flow_mapping_key)
+        self.expect_node(mapping=True)
+
+    def expect_flow_mapping_value(self):
+        if self.canonical or self.column > self.best_width:
+            self.write_indent()
+        self.write_indicator(u':', True)
+        self.states.append(self.expect_flow_mapping_key)
+        self.expect_node(mapping=True)
+
+    # Block sequence handlers.
+
+    def expect_block_sequence(self):
+        indentless = (self.mapping_context and not self.indention)
+        self.increase_indent(flow=False, indentless=indentless)
+        self.state = self.expect_first_block_sequence_item
+
+    def expect_first_block_sequence_item(self):
+        return self.expect_block_sequence_item(first=True)
+
+    def expect_block_sequence_item(self, first=False):
+        if not first and isinstance(self.event, SequenceEndEvent):
+            self.indent = self.indents.pop()
+            self.state = self.states.pop()
+        else:
+            self.write_indent()
+            self.write_indicator(u'-', True, indention=True)
+            self.states.append(self.expect_block_sequence_item)
+            self.expect_node(sequence=True)
+
+    # Block mapping handlers.
+
+    def expect_block_mapping(self):
+        self.increase_indent(flow=False)
+        self.state = self.expect_first_block_mapping_key
+
+    def expect_first_block_mapping_key(self):
+        return self.expect_block_mapping_key(first=True)
+
+    def expect_block_mapping_key(self, first=False):
+        if not first and isinstance(self.event, MappingEndEvent):
+            self.indent = self.indents.pop()
+            self.state = self.states.pop()
+        else:
+            self.write_indent()
+            if self.check_simple_key():
+                self.states.append(self.expect_block_mapping_simple_value)
+                self.expect_node(mapping=True, simple_key=True)
+            else:
+                self.write_indicator(u'?', True, indention=True)
+                self.states.append(self.expect_block_mapping_value)
+                self.expect_node(mapping=True)
+
+    def expect_block_mapping_simple_value(self):
+        self.write_indicator(u':', False)
+        self.states.append(self.expect_block_mapping_key)
+        self.expect_node(mapping=True)
+
+    def expect_block_mapping_value(self):
+        self.write_indent()
+        self.write_indicator(u':', True, indention=True)
+        self.states.append(self.expect_block_mapping_key)
+        self.expect_node(mapping=True)
+
+    # Checkers.
+
+    def check_empty_sequence(self):
+        return (isinstance(self.event, SequenceStartEvent) and self.events
+                and isinstance(self.events[0], SequenceEndEvent))
+
+    def check_empty_mapping(self):
+        return (isinstance(self.event, MappingStartEvent) and self.events
+                and isinstance(self.events[0], MappingEndEvent))
+
+    def check_empty_document(self):
+        if not isinstance(self.event, DocumentStartEvent) or not self.events:
+            return False
+        event = self.events[0]
+        return (isinstance(event, ScalarEvent) and event.anchor is None
+                and event.tag is None and event.implicit and event.value == u'')
+
+    def check_simple_key(self):
+        length = 0
+        if isinstance(self.event, NodeEvent) and self.event.anchor is not None:
+            if self.prepared_anchor is None:
+                self.prepared_anchor = self.prepare_anchor(self.event.anchor)
+            length += len(self.prepared_anchor)
+        if isinstance(self.event, (ScalarEvent, CollectionStartEvent))  \
+                and self.event.tag is not None:
+            if self.prepared_tag is None:
+                self.prepared_tag = self.prepare_tag(self.event.tag)
+            length += len(self.prepared_tag)
+        if isinstance(self.event, ScalarEvent):
+            if self.analysis is None:
+                self.analysis = self.analyze_scalar(self.event.value)
+            length += len(self.analysis.scalar)
+        return (length < 128 and (isinstance(self.event, AliasEvent)
+            or (isinstance(self.event, ScalarEvent)
+                    and not self.analysis.empty and not self.analysis.multiline)
+            or self.check_empty_sequence() or self.check_empty_mapping()))
+
+    # Anchor, Tag, and Scalar processors.
+
+    def process_anchor(self, indicator):
+        if self.event.anchor is None:
+            self.prepared_anchor = None
+            return
+        if self.prepared_anchor is None:
+            self.prepared_anchor = self.prepare_anchor(self.event.anchor)
+        if self.prepared_anchor:
+            self.write_indicator(indicator+self.prepared_anchor, True)
+        self.prepared_anchor = None
+
+    def process_tag(self):
+        tag = self.event.tag
+        if isinstance(self.event, ScalarEvent):
+            if self.style is None:
+                self.style = self.choose_scalar_style()
+            if ((not self.canonical or tag is None) and
+                ((self.style == '' and self.event.implicit[0])
+                        or (self.style != '' and self.event.implicit[1]))):
+                self.prepared_tag = None
+                return
+            if self.event.implicit[0] and tag is None:
+                tag = u'!'
+                self.prepared_tag = None
+        else:
+            if (not self.canonical or tag is None) and self.event.implicit:
+                self.prepared_tag = None
+                return
+        if tag is None:
+            raise EmitterError("tag is not specified")
+        if self.prepared_tag is None:
+            self.prepared_tag = self.prepare_tag(tag)
+        if self.prepared_tag:
+            self.write_indicator(self.prepared_tag, True)
+        self.prepared_tag = None
+
+    def choose_scalar_style(self):
+        if self.analysis is None:
+            self.analysis = self.analyze_scalar(self.event.value)
+        if self.event.style == '"' or self.canonical:
+            return '"'
+        if not self.event.style and self.event.implicit[0]:
+            if (not (self.simple_key_context and
+                    (self.analysis.empty or self.analysis.multiline))
+                and (self.flow_level and self.analysis.allow_flow_plain
+                    or (not self.flow_level and self.analysis.allow_block_plain))):
+                return ''
+        if self.event.style and self.event.style in '|>':
+            if (not self.flow_level and not self.simple_key_context
+                    and self.analysis.allow_block):
+                return self.event.style
+        if not self.event.style or self.event.style == '\'':
+            if (self.analysis.allow_single_quoted and
+                    not (self.simple_key_context and self.analysis.multiline)):
+                return '\''
+        return '"'
+
+    def process_scalar(self):
+        if self.analysis is None:
+            self.analysis = self.analyze_scalar(self.event.value)
+        if self.style is None:
+            self.style = self.choose_scalar_style()
+        split = (not self.simple_key_context)
+        #if self.analysis.multiline and split    \
+        #        and (not self.style or self.style in '\'\"'):
+        #    self.write_indent()
+        if self.style == '"':
+            self.write_double_quoted(self.analysis.scalar, split)
+        elif self.style == '\'':
+            self.write_single_quoted(self.analysis.scalar, split)
+        elif self.style == '>':
+            self.write_folded(self.analysis.scalar)
+        elif self.style == '|':
+            self.write_literal(self.analysis.scalar)
+        else:
+            self.write_plain(self.analysis.scalar, split)
+        self.analysis = None
+        self.style = None
+
+    # Analyzers.
+
+    def prepare_version(self, version):
+        major, minor = version
+        if major != 1:
+            raise EmitterError("unsupported YAML version: %d.%d" % (major, minor))
+        return u'%d.%d' % (major, minor)
+
+    def prepare_tag_handle(self, handle):
+        if not handle:
+            raise EmitterError("tag handle must not be empty")
+        if handle[0] != u'!' or handle[-1] != u'!':
+            raise EmitterError("tag handle must start and end with '!': %r"
+                    % (handle.encode('utf-8')))
+        for ch in handle[1:-1]:
+            if not (u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'  \
+                    or ch in u'-_'):
+                raise EmitterError("invalid character %r in the tag handle: %r"
+                        % (ch.encode('utf-8'), handle.encode('utf-8')))
+        return handle
+
+    def prepare_tag_prefix(self, prefix):
+        if not prefix:
+            raise EmitterError("tag prefix must not be empty")
+        chunks = []
+        start = end = 0
+        if prefix[0] == u'!':
+            end = 1
+        while end < len(prefix):
+            ch = prefix[end]
+            if u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'   \
+                    or ch in u'-;/?!:@&=+$,_.~*\'()[]':
+                end += 1
+            else:
+                if start < end:
+                    chunks.append(prefix[start:end])
+                start = end = end+1
+                data = ch.encode('utf-8')
+                for ch in data:
+                    chunks.append(u'%%%02X' % ord(ch))
+        if start < end:
+            chunks.append(prefix[start:end])
+        return u''.join(chunks)
+
+    def prepare_tag(self, tag):
+        if not tag:
+            raise EmitterError("tag must not be empty")
+        if tag == u'!':
+            return tag
+        handle = None
+        suffix = tag
+        prefixes = self.tag_prefixes.keys()
+        prefixes.sort()
+        for prefix in prefixes:
+            if tag.startswith(prefix)   \
+                    and (prefix == u'!' or len(prefix) < len(tag)):
+                handle = self.tag_prefixes[prefix]
+                suffix = tag[len(prefix):]
+        chunks = []
+        start = end = 0
+        while end < len(suffix):
+            ch = suffix[end]
+            if u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'   \
+                    or ch in u'-;/?:@&=+$,_.~*\'()[]'   \
+                    or (ch == u'!' and handle != u'!'):
+                end += 1
+            else:
+                if start < end:
+                    chunks.append(suffix[start:end])
+                start = end = end+1
+                data = ch.encode('utf-8')
+                for ch in data:
+                    chunks.append(u'%%%02X' % ord(ch))
+        if start < end:
+            chunks.append(suffix[start:end])
+        suffix_text = u''.join(chunks)
+        if handle:
+            return u'%s%s' % (handle, suffix_text)
+        else:
+            return u'!<%s>' % suffix_text
+
+    def prepare_anchor(self, anchor):
+        if not anchor:
+            raise EmitterError("anchor must not be empty")
+        for ch in anchor:
+            if not (u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'  \
+                    or ch in u'-_'):
+                raise EmitterError("invalid character %r in the anchor: %r"
+                        % (ch.encode('utf-8'), anchor.encode('utf-8')))
+        return anchor
+
+    def analyze_scalar(self, scalar):
+
+        # An empty scalar is a special case.
+        if not scalar:
+            return ScalarAnalysis(scalar=scalar, empty=True, multiline=False,
+                    allow_flow_plain=False, allow_block_plain=True,
+                    allow_single_quoted=True, allow_double_quoted=True,
+                    allow_block=False)
+
+        # Indicators and special characters.
+        block_indicators = False
+        flow_indicators = False
+        line_breaks = False
+        special_characters = False
+
+        # Important whitespace combinations.
+        leading_space = False
+        leading_break = False
+        trailing_space = False
+        trailing_break = False
+        break_space = False
+        space_break = False
+
+        # Check document indicators.
+        if scalar.startswith(u'---') or scalar.startswith(u'...'):
+            block_indicators = True
+            flow_indicators = True
+
+        # First character or preceded by a whitespace.
+        preceded_by_whitespace = True
+
+        # Last character or followed by a whitespace.
+        followed_by_whitespace = (len(scalar) == 1 or
+                scalar[1] in u'\0 \t\r\n\x85\u2028\u2029')
+
+        # The previous character is a space.
+        previous_space = False
+
+        # The previous character is a break.
+        previous_break = False
+
+        index = 0
+        while index < len(scalar):
+            ch = scalar[index]
+
+            # Check for indicators.
+            if index == 0:
+                # Leading indicators are special characters.
+                if ch in u'#,[]{}&*!|>\'\"%@`':
+                    flow_indicators = True
+                    block_indicators = True
+                if ch in u'?:':
+                    flow_indicators = True
+                    if followed_by_whitespace:
+                        block_indicators = True
+                if ch == u'-' and followed_by_whitespace:
+                    flow_indicators = True
+                    block_indicators = True
+            else:
+                # Some indicators cannot appear within a scalar either.
+                if ch in u',?[]{}':
+                    flow_indicators = True
+                if ch == u':':
+                    flow_indicators = True
+                    if followed_by_whitespace:
+                        block_indicators = True
+                if ch == u'#' and preceded_by_whitespace:
+                    flow_indicators = True
+                    block_indicators = True
+
+            # Check for line breaks, special characters, and unicode characters.
+            if ch in u'\n\x85\u2028\u2029':
+                line_breaks = True
+            if not (ch == u'\n' or u'\x20' <= ch <= u'\x7E'):
+                if (ch == u'\x85' or u'\xA0' <= ch <= u'\uD7FF'
+                        or u'\uE000' <= ch <= u'\uFFFD') and ch != u'\uFEFF':
+                    unicode_characters = True
+                    if not self.allow_unicode:
+                        special_characters = True
+                else:
+                    special_characters = True
+
+            # Detect important whitespace combinations.
+            if ch == u' ':
+                if index == 0:
+                    leading_space = True
+                if index == len(scalar)-1:
+                    trailing_space = True
+                if previous_break:
+                    break_space = True
+                previous_space = True
+                previous_break = False
+            elif ch in u'\n\x85\u2028\u2029':
+                if index == 0:
+                    leading_break = True
+                if index == len(scalar)-1:
+                    trailing_break = True
+                if previous_space:
+                    space_break = True
+                previous_space = False
+                previous_break = True
+            else:
+                previous_space = False
+                previous_break = False
+
+            # Prepare for the next character.
+            index += 1
+            preceded_by_whitespace = (ch in u'\0 \t\r\n\x85\u2028\u2029')
+            followed_by_whitespace = (index+1 >= len(scalar) or
+                    scalar[index+1] in u'\0 \t\r\n\x85\u2028\u2029')
+
+        # Let's decide what styles are allowed.
+        allow_flow_plain = True
+        allow_block_plain = True
+        allow_single_quoted = True
+        allow_double_quoted = True
+        allow_block = True
+
+        # Leading and trailing whitespace is bad for plain scalars.
+        if (leading_space or leading_break
+                or trailing_space or trailing_break):
+            allow_flow_plain = allow_block_plain = False
+
+        # We do not permit trailing spaces for block scalars.
+        if trailing_space:
+            allow_block = False
+
+        # Spaces at the beginning of a new line are only acceptable for block
+        # scalars.
+        if break_space:
+            allow_flow_plain = allow_block_plain = allow_single_quoted = False
+
+        # Spaces followed by breaks, as well as special characters, are
+        # only allowed for double-quoted scalars.
+        if space_break or special_characters:
+            allow_flow_plain = allow_block_plain =  \
+            allow_single_quoted = allow_block = False
+
+        # Although the plain scalar writer supports breaks, we never emit
+        # multiline plain scalars.
+        if line_breaks:
+            allow_flow_plain = allow_block_plain = False
+
+        # Flow indicators are forbidden for flow plain scalars.
+        if flow_indicators:
+            allow_flow_plain = False
+
+        # Block indicators are forbidden for block plain scalars.
+        if block_indicators:
+            allow_block_plain = False
+
+        return ScalarAnalysis(scalar=scalar,
+                empty=False, multiline=line_breaks,
+                allow_flow_plain=allow_flow_plain,
+                allow_block_plain=allow_block_plain,
+                allow_single_quoted=allow_single_quoted,
+                allow_double_quoted=allow_double_quoted,
+                allow_block=allow_block)
+
+    # Writers.
+
+    def flush_stream(self):
+        if hasattr(self.stream, 'flush'):
+            self.stream.flush()
+
+    def write_stream_start(self):
+        # Write BOM if needed.
+        if self.encoding and self.encoding.startswith('utf-16'):
+            self.stream.write(u'\uFEFF'.encode(self.encoding))
+
+    def write_stream_end(self):
+        self.flush_stream()
+
+    def write_indicator(self, indicator, need_whitespace,
+            whitespace=False, indention=False):
+        if self.whitespace or not need_whitespace:
+            data = indicator
+        else:
+            data = u' '+indicator
+        self.whitespace = whitespace
+        self.indention = self.indention and indention
+        self.column += len(data)
+        self.open_ended = False
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+
+    def write_indent(self):
+        indent = self.indent or 0
+        if not self.indention or self.column > indent   \
+                or (self.column == indent and not self.whitespace):
+            self.write_line_break()
+        if self.column < indent:
+            self.whitespace = True
+            data = u' '*(indent-self.column)
+            self.column = indent
+            if self.encoding:
+                data = data.encode(self.encoding)
+            self.stream.write(data)
+
+    def write_line_break(self, data=None):
+        if data is None:
+            data = self.best_line_break
+        self.whitespace = True
+        self.indention = True
+        self.line += 1
+        self.column = 0
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+
+    def write_version_directive(self, version_text):
+        data = u'%%YAML %s' % version_text
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+        self.write_line_break()
+
+    def write_tag_directive(self, handle_text, prefix_text):
+        data = u'%%TAG %s %s' % (handle_text, prefix_text)
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+        self.write_line_break()
+
+    # Scalar streams.
+
+    def write_single_quoted(self, text, split=True):
+        self.write_indicator(u'\'', True)
+        spaces = False
+        breaks = False
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if spaces:
+                if ch is None or ch != u' ':
+                    if start+1 == end and self.column > self.best_width and split   \
+                            and start != 0 and end != len(text):
+                        self.write_indent()
+                    else:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                    start = end
+            elif breaks:
+                if ch is None or ch not in u'\n\x85\u2028\u2029':
+                    if text[start] == u'\n':
+                        self.write_line_break()
+                    for br in text[start:end]:
+                        if br == u'\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    self.write_indent()
+                    start = end
+            else:
+                if ch is None or ch in u' \n\x85\u2028\u2029' or ch == u'\'':
+                    if start < end:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                        start = end
+            if ch == u'\'':
+                data = u'\'\''
+                self.column += 2
+                if self.encoding:
+                    data = data.encode(self.encoding)
+                self.stream.write(data)
+                start = end + 1
+            if ch is not None:
+                spaces = (ch == u' ')
+                breaks = (ch in u'\n\x85\u2028\u2029')
+            end += 1
+        self.write_indicator(u'\'', False)
+
+    ESCAPE_REPLACEMENTS = {
+        u'\0':      u'0',
+        u'\x07':    u'a',
+        u'\x08':    u'b',
+        u'\x09':    u't',
+        u'\x0A':    u'n',
+        u'\x0B':    u'v',
+        u'\x0C':    u'f',
+        u'\x0D':    u'r',
+        u'\x1B':    u'e',
+        u'\"':      u'\"',
+        u'\\':      u'\\',
+        u'\x85':    u'N',
+        u'\xA0':    u'_',
+        u'\u2028':  u'L',
+        u'\u2029':  u'P',
+    }
+
+    def write_double_quoted(self, text, split=True):
+        self.write_indicator(u'"', True)
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if ch is None or ch in u'"\\\x85\u2028\u2029\uFEFF' \
+                    or not (u'\x20' <= ch <= u'\x7E'
+                        or (self.allow_unicode
+                            and (u'\xA0' <= ch <= u'\uD7FF'
+                                or u'\uE000' <= ch <= u'\uFFFD'))):
+                if start < end:
+                    data = text[start:end]
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    start = end
+                if ch is not None:
+                    if ch in self.ESCAPE_REPLACEMENTS:
+                        data = u'\\'+self.ESCAPE_REPLACEMENTS[ch]
+                    elif ch <= u'\xFF':
+                        data = u'\\x%02X' % ord(ch)
+                    elif ch <= u'\uFFFF':
+                        data = u'\\u%04X' % ord(ch)
+                    else:
+                        data = u'\\U%08X' % ord(ch)
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    start = end+1
+            if 0 < end < len(text)-1 and (ch == u' ' or start >= end)   \
+                    and self.column+(end-start) > self.best_width and split:
+                data = text[start:end]+u'\\'
+                if start < end:
+                    start = end
+                self.column += len(data)
+                if self.encoding:
+                    data = data.encode(self.encoding)
+                self.stream.write(data)
+                self.write_indent()
+                self.whitespace = False
+                self.indention = False
+                if text[start] == u' ':
+                    data = u'\\'
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+            end += 1
+        self.write_indicator(u'"', False)
+
+    def determine_block_hints(self, text):
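+        # Compute the block scalar header: an explicit indentation
+        # indicator when the content starts with a space or a break, and
+        # a chomping indicator ('-' to strip or '+' to keep the trailing
+        # breaks); e.g. u' a' yields u'2-' with the default indent.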
+        hints = u''
+        if text:
+            if text[0] in u' \n\x85\u2028\u2029':
+                hints += unicode(self.best_indent)
+            if text[-1] not in u'\n\x85\u2028\u2029':
+                hints += u'-'
+            elif len(text) == 1 or text[-2] in u'\n\x85\u2028\u2029':
+                hints += u'+'
+        return hints
+
+    def write_folded(self, text):
+        hints = self.determine_block_hints(text)
+        self.write_indicator(u'>'+hints, True)
+        if hints[-1:] == u'+':
+            self.open_ended = True
+        self.write_line_break()
+        leading_space = True
+        spaces = False
+        breaks = True
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if breaks:
+                if ch is None or ch not in u'\n\x85\u2028\u2029':
+                    if not leading_space and ch is not None and ch != u' '  \
+                            and text[start] == u'\n':
+                        self.write_line_break()
+                    leading_space = (ch == u' ')
+                    for br in text[start:end]:
+                        if br == u'\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    if ch is not None:
+                        self.write_indent()
+                    start = end
+            elif spaces:
+                if ch != u' ':
+                    if start+1 == end and self.column > self.best_width:
+                        self.write_indent()
+                    else:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                    start = end
+            else:
+                if ch is None or ch in u' \n\x85\u2028\u2029':
+                    data = text[start:end]
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    if ch is None:
+                        self.write_line_break()
+                    start = end
+            if ch is not None:
+                breaks = (ch in u'\n\x85\u2028\u2029')
+                spaces = (ch == u' ')
+            end += 1
+
+    def write_literal(self, text):
+        hints = self.determine_block_hints(text)
+        self.write_indicator(u'|'+hints, True)
+        if hints[-1:] == u'+':
+            self.open_ended = True
+        self.write_line_break()
+        breaks = True
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if breaks:
+                if ch is None or ch not in u'\n\x85\u2028\u2029':
+                    for br in text[start:end]:
+                        if br == u'\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    if ch is not None:
+                        self.write_indent()
+                    start = end
+            else:
+                if ch is None or ch in u'\n\x85\u2028\u2029':
+                    data = text[start:end]
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    if ch is None:
+                        self.write_line_break()
+                    start = end
+            if ch is not None:
+                breaks = (ch in u'\n\x85\u2028\u2029')
+            end += 1
+
+    def write_plain(self, text, split=True):
+        if self.root_context:
+            self.open_ended = True
+        if not text:
+            return
+        if not self.whitespace:
+            data = u' '
+            self.column += len(data)
+            if self.encoding:
+                data = data.encode(self.encoding)
+            self.stream.write(data)
+        self.whitespace = False
+        self.indention = False
+        spaces = False
+        breaks = False
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if spaces:
+                if ch != u' ':
+                    if start+1 == end and self.column > self.best_width and split:
+                        self.write_indent()
+                        self.whitespace = False
+                        self.indention = False
+                    else:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                    start = end
+            elif breaks:
+                if ch is None or ch not in u'\n\x85\u2028\u2029':
+                    if text[start] == u'\n':
+                        self.write_line_break()
+                    for br in text[start:end]:
+                        if br == u'\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    self.write_indent()
+                    self.whitespace = False
+                    self.indention = False
+                    start = end
+            else:
+                if ch is None or ch in u' \n\x85\u2028\u2029':
+                    data = text[start:end]
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    start = end
+            if ch is not None:
+                spaces = (ch == u' ')
+                breaks = (ch in u'\n\x85\u2028\u2029')
+            end += 1
+
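
As a usage sketch (not part of the patch), the Emitter can be driven directly
with a hand-built event stream that follows the grammar above; the event
classes come from lib/yaml/events.py below, and a plain in-memory stream is
assumed (Python 2, matching the code in this patch):

    from StringIO import StringIO
    from yaml.emitter import Emitter
    from yaml.events import (StreamStartEvent, DocumentStartEvent,
            ScalarEvent, DocumentEndEvent, StreamEndEvent)

    stream = StringIO()
    emitter = Emitter(stream)
    for event in [StreamStartEvent(),
                  DocumentStartEvent(explicit=False),
                  ScalarEvent(anchor=None, tag=None, implicit=(True, True),
                          value=u'hello world'),
                  DocumentEndEvent(explicit=False),
                  StreamEndEvent()]:
        emitter.emit(event)
    print stream.getvalue()

The plain root scalar leaves the stream open-ended, so this prints
'hello world' followed by an explicit '...' end indicator on the next line.
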
diff --git a/lib/yaml/error.py b/lib/yaml/error.py
new file mode 100644
index 0000000..577686d
--- /dev/null
+++ b/lib/yaml/error.py
@@ -0,0 +1,75 @@
+
+__all__ = ['Mark', 'YAMLError', 'MarkedYAMLError']
+
+class Mark(object):
+
+    def __init__(self, name, index, line, column, buffer, pointer):
+        self.name = name
+        self.index = index
+        self.line = line
+        self.column = column
+        self.buffer = buffer
+        self.pointer = pointer
+
+    def get_snippet(self, indent=4, max_length=75):
+        if self.buffer is None:
+            return None
+        head = ''
+        start = self.pointer
+        while start > 0 and self.buffer[start-1] not in u'\0\r\n\x85\u2028\u2029':
+            start -= 1
+            if self.pointer-start > max_length/2-1:
+                head = ' ... '
+                start += 5
+                break
+        tail = ''
+        end = self.pointer
+        while end < len(self.buffer) and self.buffer[end] not in u'\0\r\n\x85\u2028\u2029':
+            end += 1
+            if end-self.pointer > max_length/2-1:
+                tail = ' ... '
+                end -= 5
+                break
+        snippet = self.buffer[start:end].encode('utf-8')
+        return ' '*indent + head + snippet + tail + '\n'  \
+                + ' '*(indent+self.pointer-start+len(head)) + '^'
+
+    def __str__(self):
+        snippet = self.get_snippet()
+        where = "  in \"%s\", line %d, column %d"   \
+                % (self.name, self.line+1, self.column+1)
+        if snippet is not None:
+            where += ":\n"+snippet
+        return where
+
+class YAMLError(Exception):
+    pass
+
+class MarkedYAMLError(YAMLError):
+
+    def __init__(self, context=None, context_mark=None,
+            problem=None, problem_mark=None, note=None):
+        self.context = context
+        self.context_mark = context_mark
+        self.problem = problem
+        self.problem_mark = problem_mark
+        self.note = note
+
+    def __str__(self):
+        lines = []
+        if self.context is not None:
+            lines.append(self.context)
+        if self.context_mark is not None  \
+            and (self.problem is None or self.problem_mark is None
+                    or self.context_mark.name != self.problem_mark.name
+                    or self.context_mark.line != self.problem_mark.line
+                    or self.context_mark.column != self.problem_mark.column):
+            lines.append(str(self.context_mark))
+        if self.problem is not None:
+            lines.append(self.problem)
+        if self.problem_mark is not None:
+            lines.append(str(self.problem_mark))
+        if self.note is not None:
+            lines.append(self.note)
+        return '\n'.join(lines)
+
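
A small sketch (not part of the patch) of how a Mark renders its context
snippet; the values are hypothetical:

    from yaml.error import Mark

    text = u'key: [1, 2\nother: 3'
    mark = Mark('<example>', 5, 0, 5, text, 5)   # points at the '['
    print mark
    #   in "<example>", line 1, column 6:
    #     key: [1, 2
    #          ^
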
diff --git a/lib/yaml/events.py b/lib/yaml/events.py
new file mode 100644
index 0000000..f79ad38
--- /dev/null
+++ b/lib/yaml/events.py
@@ -0,0 +1,86 @@
+
+# Abstract classes.
+
+class Event(object):
+    def __init__(self, start_mark=None, end_mark=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+    def __repr__(self):
+        attributes = [key for key in ['anchor', 'tag', 'implicit', 'value']
+                if hasattr(self, key)]
+        arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
+                for key in attributes])
+        return '%s(%s)' % (self.__class__.__name__, arguments)
+
+class NodeEvent(Event):
+    def __init__(self, anchor, start_mark=None, end_mark=None):
+        self.anchor = anchor
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class CollectionStartEvent(NodeEvent):
+    def __init__(self, anchor, tag, implicit, start_mark=None, end_mark=None,
+            flow_style=None):
+        self.anchor = anchor
+        self.tag = tag
+        self.implicit = implicit
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.flow_style = flow_style
+
+class CollectionEndEvent(Event):
+    pass
+
+# Implementations.
+
+class StreamStartEvent(Event):
+    def __init__(self, start_mark=None, end_mark=None, encoding=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.encoding = encoding
+
+class StreamEndEvent(Event):
+    pass
+
+class DocumentStartEvent(Event):
+    def __init__(self, start_mark=None, end_mark=None,
+            explicit=None, version=None, tags=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.explicit = explicit
+        self.version = version
+        self.tags = tags
+
+class DocumentEndEvent(Event):
+    def __init__(self, start_mark=None, end_mark=None,
+            explicit=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.explicit = explicit
+
+class AliasEvent(NodeEvent):
+    pass
+
+class ScalarEvent(NodeEvent):
+    def __init__(self, anchor, tag, implicit, value,
+            start_mark=None, end_mark=None, style=None):
+        self.anchor = anchor
+        self.tag = tag
+        self.implicit = implicit
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.style = style
+
+class SequenceStartEvent(CollectionStartEvent):
+    pass
+
+class SequenceEndEvent(CollectionEndEvent):
+    pass
+
+class MappingStartEvent(CollectionStartEvent):
+    pass
+
+class MappingEndEvent(CollectionEndEvent):
+    pass
+
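
The generic __repr__ above reports only the attributes an event actually
carries; a quick illustration (not part of the patch):

    from yaml.events import ScalarEvent, MappingStartEvent

    print ScalarEvent(None, None, (True, True), u'hi')
    # -> ScalarEvent(anchor=None, tag=None, implicit=(True, True), value=u'hi')
    print MappingStartEvent(None, None, True)
    # -> MappingStartEvent(anchor=None, tag=None, implicit=True)
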
diff --git a/lib/yaml/loader.py b/lib/yaml/loader.py
new file mode 100644
index 0000000..293ff46
--- /dev/null
+++ b/lib/yaml/loader.py
@@ -0,0 +1,40 @@
+
+__all__ = ['BaseLoader', 'SafeLoader', 'Loader']
+
+from reader import *
+from scanner import *
+from parser import *
+from composer import *
+from constructor import *
+from resolver import *
+
+class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, BaseResolver):
+
+    def __init__(self, stream):
+        Reader.__init__(self, stream)
+        Scanner.__init__(self)
+        Parser.__init__(self)
+        Composer.__init__(self)
+        BaseConstructor.__init__(self)
+        BaseResolver.__init__(self)
+
+class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):
+
+    def __init__(self, stream):
+        Reader.__init__(self, stream)
+        Scanner.__init__(self)
+        Parser.__init__(self)
+        Composer.__init__(self)
+        SafeConstructor.__init__(self)
+        Resolver.__init__(self)
+
+class Loader(Reader, Scanner, Parser, Composer, Constructor, Resolver):
+
+    def __init__(self, stream):
+        Reader.__init__(self, stream)
+        Scanner.__init__(self)
+        Parser.__init__(self)
+        Composer.__init__(self)
+        Constructor.__init__(self)
+        Resolver.__init__(self)
+
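
A usage sketch for the loader stack; get_single_data() is defined on
BaseConstructor in constructor.py, added elsewhere in this patch:

    from yaml.loader import SafeLoader

    loader = SafeLoader(u'- 1\n- name: value\n')
    try:
        print loader.get_single_data()   # -> [1, {'name': 'value'}]
    finally:
        loader.dispose()
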
diff --git a/lib/yaml/nodes.py b/lib/yaml/nodes.py
new file mode 100644
index 0000000..c4f070c
--- /dev/null
+++ b/lib/yaml/nodes.py
@@ -0,0 +1,49 @@
+
+class Node(object):
+    def __init__(self, tag, value, start_mark, end_mark):
+        self.tag = tag
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+    def __repr__(self):
+        value = self.value
+        #if isinstance(value, list):
+        #    if len(value) == 0:
+        #        value = '<empty>'
+        #    elif len(value) == 1:
+        #        value = '<1 item>'
+        #    else:
+        #        value = '<%d items>' % len(value)
+        #else:
+        #    if len(value) > 75:
+        #        value = repr(value[:70]+u' ... ')
+        #    else:
+        #        value = repr(value)
+        value = repr(value)
+        return '%s(tag=%r, value=%s)' % (self.__class__.__name__, self.tag, value)
+
+class ScalarNode(Node):
+    id = 'scalar'
+    def __init__(self, tag, value,
+            start_mark=None, end_mark=None, style=None):
+        self.tag = tag
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.style = style
+
+class CollectionNode(Node):
+    def __init__(self, tag, value,
+            start_mark=None, end_mark=None, flow_style=None):
+        self.tag = tag
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.flow_style = flow_style
+
+class SequenceNode(CollectionNode):
+    id = 'sequence'
+
+class MappingNode(CollectionNode):
+    id = 'mapping'
+
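
A sketch of the node graph composed for a small document; get_single_node()
comes from Composer in composer.py, added elsewhere in this patch:

    from yaml.loader import BaseLoader

    node = BaseLoader(u'a: [1, 2]').get_single_node()
    print node.tag          # -> tag:yaml.org,2002:map
    print node.value[0][0]
    # -> ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'a')
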
diff --git a/lib/yaml/parser.py b/lib/yaml/parser.py
new file mode 100644
index 0000000..f9e3057
--- /dev/null
+++ b/lib/yaml/parser.py
@@ -0,0 +1,589 @@
+
+# The following YAML grammar is LL(1) and is parsed by a recursive descent
+# parser.
+#
+# stream            ::= STREAM-START implicit_document? explicit_document* STREAM-END
+# implicit_document ::= block_node DOCUMENT-END*
+# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+# block_node_or_indentless_sequence ::=
+#                       ALIAS
+#                       | properties (block_content | indentless_block_sequence)?
+#                       | block_content
+#                       | indentless_block_sequence
+# block_node        ::= ALIAS
+#                       | properties block_content?
+#                       | block_content
+# flow_node         ::= ALIAS
+#                       | properties flow_content?
+#                       | flow_content
+# properties        ::= TAG ANCHOR? | ANCHOR TAG?
+# block_content     ::= block_collection | flow_collection | SCALAR
+# flow_content      ::= flow_collection | SCALAR
+# block_collection  ::= block_sequence | block_mapping
+# flow_collection   ::= flow_sequence | flow_mapping
+# block_sequence    ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
+# indentless_sequence   ::= (BLOCK-ENTRY block_node?)+
+# block_mapping     ::= BLOCK-MAPPING-START
+#                       ((KEY block_node_or_indentless_sequence?)?
+#                       (VALUE block_node_or_indentless_sequence?)?)*
+#                       BLOCK-END
+# flow_sequence     ::= FLOW-SEQUENCE-START
+#                       (flow_sequence_entry FLOW-ENTRY)*
+#                       flow_sequence_entry?
+#                       FLOW-SEQUENCE-END
+# flow_sequence_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+# flow_mapping      ::= FLOW-MAPPING-START
+#                       (flow_mapping_entry FLOW-ENTRY)*
+#                       flow_mapping_entry?
+#                       FLOW-MAPPING-END
+# flow_mapping_entry    ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+#
+# FIRST sets:
+#
+# stream: { STREAM-START }
+# explicit_document: { DIRECTIVE DOCUMENT-START }
+# implicit_document: FIRST(block_node)
+# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
+# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
+# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
+# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
+# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
+# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
+# block_sequence: { BLOCK-SEQUENCE-START }
+# block_mapping: { BLOCK-MAPPING-START }
+# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START BLOCK-ENTRY }
+# indentless_sequence: { BLOCK-ENTRY }
+# flow_sequence: { FLOW-SEQUENCE-START }
+# flow_mapping: { FLOW-MAPPING-START }
+# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
+# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
+
+__all__ = ['Parser', 'ParserError']
+
+from error import MarkedYAMLError
+from tokens import *
+from events import *
+from scanner import *
+
+class ParserError(MarkedYAMLError):
+    pass
+
+class Parser(object):
+    # Since writing a recursive descent parser is a straightforward task, we
+    # do not give many comments here.
+
+    DEFAULT_TAGS = {
+        u'!':   u'!',
+        u'!!':  u'tag:yaml.org,2002:',
+    }
+
+    def __init__(self):
+        self.current_event = None
+        self.yaml_version = None
+        self.tag_handles = {}
+        self.states = []
+        self.marks = []
+        self.state = self.parse_stream_start
+
+    def dispose(self):
+        # Reset the state attributes (to clear self-references)
+        self.states = []
+        self.state = None
+
+    def check_event(self, *choices):
+        # Check the type of the next event.
+        if self.current_event is None:
+            if self.state:
+                self.current_event = self.state()
+        if self.current_event is not None:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.current_event, choice):
+                    return True
+        return False
+
+    def peek_event(self):
+        # Get the next event.
+        if self.current_event is None:
+            if self.state:
+                self.current_event = self.state()
+        return self.current_event
+
+    def get_event(self):
+        # Get the next event and proceed further.
+        if self.current_event is None:
+            if self.state:
+                self.current_event = self.state()
+        value = self.current_event
+        self.current_event = None
+        return value
+
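+    # As a hypothetical usage sketch (not part of this module), a driver
+    # built on the three methods above would look like:
+    #
+    #     while not parser.check_event(StreamEndEvent):
+    #         event = parser.get_event()
+    #         ...handle the event...
+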
+    # stream    ::= STREAM-START implicit_document? explicit_document* STREAM-END
+    # implicit_document ::= block_node DOCUMENT-END*
+    # explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+
+    def parse_stream_start(self):
+
+        # Parse the stream start.
+        token = self.get_token()
+        event = StreamStartEvent(token.start_mark, token.end_mark,
+                encoding=token.encoding)
+
+        # Prepare the next state.
+        self.state = self.parse_implicit_document_start
+
+        return event
+
+    def parse_implicit_document_start(self):
+
+        # Parse an implicit document.
+        if not self.check_token(DirectiveToken, DocumentStartToken,
+                StreamEndToken):
+            self.tag_handles = self.DEFAULT_TAGS
+            token = self.peek_token()
+            start_mark = end_mark = token.start_mark
+            event = DocumentStartEvent(start_mark, end_mark,
+                    explicit=False)
+
+            # Prepare the next state.
+            self.states.append(self.parse_document_end)
+            self.state = self.parse_block_node
+
+            return event
+
+        else:
+            return self.parse_document_start()
+
+    def parse_document_start(self):
+
+        # Parse any extra document end indicators.
+        while self.check_token(DocumentEndToken):
+            self.get_token()
+
+        # Parse an explicit document.
+        if not self.check_token(StreamEndToken):
+            token = self.peek_token()
+            start_mark = token.start_mark
+            version, tags = self.process_directives()
+            if not self.check_token(DocumentStartToken):
+                raise ParserError(None, None,
+                        "expected '<document start>', but found %r"
+                        % self.peek_token().id,
+                        self.peek_token().start_mark)
+            token = self.get_token()
+            end_mark = token.end_mark
+            event = DocumentStartEvent(start_mark, end_mark,
+                    explicit=True, version=version, tags=tags)
+            self.states.append(self.parse_document_end)
+            self.state = self.parse_document_content
+        else:
+            # Parse the end of the stream.
+            token = self.get_token()
+            event = StreamEndEvent(token.start_mark, token.end_mark)
+            assert not self.states
+            assert not self.marks
+            self.state = None
+        return event
+
+    def parse_document_end(self):
+
+        # Parse the document end.
+        token = self.peek_token()
+        start_mark = end_mark = token.start_mark
+        explicit = False
+        if self.check_token(DocumentEndToken):
+            token = self.get_token()
+            end_mark = token.end_mark
+            explicit = True
+        event = DocumentEndEvent(start_mark, end_mark,
+                explicit=explicit)
+
+        # Prepare the next state.
+        self.state = self.parse_document_start
+
+        return event
+
+    def parse_document_content(self):
+        if self.check_token(DirectiveToken,
+                DocumentStartToken, DocumentEndToken, StreamEndToken):
+            event = self.process_empty_scalar(self.peek_token().start_mark)
+            self.state = self.states.pop()
+            return event
+        else:
+            return self.parse_block_node()
+
+    def process_directives(self):
+        self.yaml_version = None
+        self.tag_handles = {}
+        while self.check_token(DirectiveToken):
+            token = self.get_token()
+            if token.name == u'YAML':
+                if self.yaml_version is not None:
+                    raise ParserError(None, None,
+                            "found duplicate YAML directive", token.start_mark)
+                major, minor = token.value
+                if major != 1:
+                    raise ParserError(None, None,
+                            "found incompatible YAML document (version 1.* is required)",
+                            token.start_mark)
+                self.yaml_version = token.value
+            elif token.name == u'TAG':
+                handle, prefix = token.value
+                if handle in self.tag_handles:
+                    raise ParserError(None, None,
+                            "duplicate tag handle %r" % handle.encode('utf-8'),
+                            token.start_mark)
+                self.tag_handles[handle] = prefix
+        if self.tag_handles:
+            value = self.yaml_version, self.tag_handles.copy()
+        else:
+            value = self.yaml_version, None
+        for key in self.DEFAULT_TAGS:
+            if key not in self.tag_handles:
+                self.tag_handles[key] = self.DEFAULT_TAGS[key]
+        return value
+
+    # block_node_or_indentless_sequence ::= ALIAS
+    #               | properties (block_content | indentless_block_sequence)?
+    #               | block_content
+    #               | indentless_block_sequence
+    # block_node    ::= ALIAS
+    #                   | properties block_content?
+    #                   | block_content
+    # flow_node     ::= ALIAS
+    #                   | properties flow_content?
+    #                   | flow_content
+    # properties    ::= TAG ANCHOR? | ANCHOR TAG?
+    # block_content     ::= block_collection | flow_collection | SCALAR
+    # flow_content      ::= flow_collection | SCALAR
+    # block_collection  ::= block_sequence | block_mapping
+    # flow_collection   ::= flow_sequence | flow_mapping
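+    #
+    # For example, in the node '!!str &anchor value' the properties part
+    # contributes both a tag and an anchor to the scalar (illustrative).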
+
+    def parse_block_node(self):
+        return self.parse_node(block=True)
+
+    def parse_flow_node(self):
+        return self.parse_node()
+
+    def parse_block_node_or_indentless_sequence(self):
+        return self.parse_node(block=True, indentless_sequence=True)
+
+    def parse_node(self, block=False, indentless_sequence=False):
+        if self.check_token(AliasToken):
+            token = self.get_token()
+            event = AliasEvent(token.value, token.start_mark, token.end_mark)
+            self.state = self.states.pop()
+        else:
+            anchor = None
+            tag = None
+            start_mark = end_mark = tag_mark = None
+            if self.check_token(AnchorToken):
+                token = self.get_token()
+                start_mark = token.start_mark
+                end_mark = token.end_mark
+                anchor = token.value
+                if self.check_token(TagToken):
+                    token = self.get_token()
+                    tag_mark = token.start_mark
+                    end_mark = token.end_mark
+                    tag = token.value
+            elif self.check_token(TagToken):
+                token = self.get_token()
+                start_mark = tag_mark = token.start_mark
+                end_mark = token.end_mark
+                tag = token.value
+                if self.check_token(AnchorToken):
+                    token = self.get_token()
+                    end_mark = token.end_mark
+                    anchor = token.value
+            if tag is not None:
+                handle, suffix = tag
+                if handle is not None:
+                    if handle not in self.tag_handles:
+                        raise ParserError("while parsing a node", start_mark,
+                                "found undefined tag handle %r" % handle.encode('utf-8'),
+                                tag_mark)
+                    tag = self.tag_handles[handle]+suffix
+                else:
+                    tag = suffix
+            #if tag == u'!':
+            #    raise ParserError("while parsing a node", start_mark,
+            #            "found non-specific tag '!'", tag_mark,
+            #            "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag' and share your opinion.")
+            if start_mark is None:
+                start_mark = end_mark = self.peek_token().start_mark
+            event = None
+            implicit = (tag is None or tag == u'!')
+            if indentless_sequence and self.check_token(BlockEntryToken):
+                end_mark = self.peek_token().end_mark
+                event = SequenceStartEvent(anchor, tag, implicit,
+                        start_mark, end_mark)
+                self.state = self.parse_indentless_sequence_entry
+            else:
+                if self.check_token(ScalarToken):
+                    token = self.get_token()
+                    end_mark = token.end_mark
+                    if (token.plain and tag is None) or tag == u'!':
+                        implicit = (True, False)
+                    elif tag is None:
+                        implicit = (False, True)
+                    else:
+                        implicit = (False, False)
+                    event = ScalarEvent(anchor, tag, implicit, token.value,
+                            start_mark, end_mark, style=token.style)
+                    self.state = self.states.pop()
+                elif self.check_token(FlowSequenceStartToken):
+                    end_mark = self.peek_token().end_mark
+                    event = SequenceStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=True)
+                    self.state = self.parse_flow_sequence_first_entry
+                elif self.check_token(FlowMappingStartToken):
+                    end_mark = self.peek_token().end_mark
+                    event = MappingStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=True)
+                    self.state = self.parse_flow_mapping_first_key
+                elif block and self.check_token(BlockSequenceStartToken):
+                    end_mark = self.peek_token().start_mark
+                    event = SequenceStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=False)
+                    self.state = self.parse_block_sequence_first_entry
+                elif block and self.check_token(BlockMappingStartToken):
+                    end_mark = self.peek_token().start_mark
+                    event = MappingStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=False)
+                    self.state = self.parse_block_mapping_first_key
+                elif anchor is not None or tag is not None:
+                    # Empty scalars are allowed even if a tag or an anchor is
+                    # specified.
+                    event = ScalarEvent(anchor, tag, (implicit, False), u'',
+                            start_mark, end_mark)
+                    self.state = self.states.pop()
+                else:
+                    if block:
+                        node = 'block'
+                    else:
+                        node = 'flow'
+                    token = self.peek_token()
+                    raise ParserError("while parsing a %s node" % node, start_mark,
+                            "expected the node content, but found %r" % token.id,
+                            token.start_mark)
+        return event
+
+    # block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
+
+    def parse_block_sequence_first_entry(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_block_sequence_entry()
+
+    def parse_block_sequence_entry(self):
+        if self.check_token(BlockEntryToken):
+            token = self.get_token()
+            if not self.check_token(BlockEntryToken, BlockEndToken):
+                self.states.append(self.parse_block_sequence_entry)
+                return self.parse_block_node()
+            else:
+                self.state = self.parse_block_sequence_entry
+                return self.process_empty_scalar(token.end_mark)
+        if not self.check_token(BlockEndToken):
+            token = self.peek_token()
+            raise ParserError("while parsing a block collection", self.marks[-1],
+                    "expected <block end>, but found %r" % token.id, token.start_mark)
+        token = self.get_token()
+        event = SequenceEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    # indentless_sequence ::= (BLOCK-ENTRY block_node?)+
+
+    def parse_indentless_sequence_entry(self):
+        if self.check_token(BlockEntryToken):
+            token = self.get_token()
+            if not self.check_token(BlockEntryToken,
+                    KeyToken, ValueToken, BlockEndToken):
+                self.states.append(self.parse_indentless_sequence_entry)
+                return self.parse_block_node()
+            else:
+                self.state = self.parse_indentless_sequence_entry
+                return self.process_empty_scalar(token.end_mark)
+        token = self.peek_token()
+        event = SequenceEndEvent(token.start_mark, token.start_mark)
+        self.state = self.states.pop()
+        return event
+
+    # block_mapping     ::= BLOCK-MAPPING-START
+    #                       ((KEY block_node_or_indentless_sequence?)?
+    #                       (VALUE block_node_or_indentless_sequence?)?)*
+    #                       BLOCK-END
+
+    def parse_block_mapping_first_key(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_block_mapping_key()
+
+    def parse_block_mapping_key(self):
+        if self.check_token(KeyToken):
+            token = self.get_token()
+            if not self.check_token(KeyToken, ValueToken, BlockEndToken):
+                self.states.append(self.parse_block_mapping_value)
+                return self.parse_block_node_or_indentless_sequence()
+            else:
+                self.state = self.parse_block_mapping_value
+                return self.process_empty_scalar(token.end_mark)
+        if not self.check_token(BlockEndToken):
+            token = self.peek_token()
+            raise ParserError("while parsing a block mapping", self.marks[-1],
+                    "expected <block end>, but found %r" % token.id, token.start_mark)
+        token = self.get_token()
+        event = MappingEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    def parse_block_mapping_value(self):
+        if self.check_token(ValueToken):
+            token = self.get_token()
+            if not self.check_token(KeyToken, ValueToken, BlockEndToken):
+                self.states.append(self.parse_block_mapping_key)
+                return self.parse_block_node_or_indentless_sequence()
+            else:
+                self.state = self.parse_block_mapping_key
+                return self.process_empty_scalar(token.end_mark)
+        else:
+            self.state = self.parse_block_mapping_key
+            token = self.peek_token()
+            return self.process_empty_scalar(token.start_mark)
+
+    # flow_sequence     ::= FLOW-SEQUENCE-START
+    #                       (flow_sequence_entry FLOW-ENTRY)*
+    #                       flow_sequence_entry?
+    #                       FLOW-SEQUENCE-END
+    # flow_sequence_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+    #
+    # Note that although the production rules for flow_sequence_entry and
+    # flow_mapping_entry are identical, their interpretations differ.
+    # For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
+    # generates an inline mapping (set syntax), as in the example below.
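+    # For example, '[ one: two ]' is parsed as a flow sequence whose single
+    # entry is a one-pair mapping (an illustrative example).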
+
+    def parse_flow_sequence_first_entry(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_flow_sequence_entry(first=True)
+
+    def parse_flow_sequence_entry(self, first=False):
+        if not self.check_token(FlowSequenceEndToken):
+            if not first:
+                if self.check_token(FlowEntryToken):
+                    self.get_token()
+                else:
+                    token = self.peek_token()
+                    raise ParserError("while parsing a flow sequence", self.marks[-1],
+                            "expected ',' or ']', but got %r" % token.id, token.start_mark)
+
+            if self.check_token(KeyToken):
+                token = self.peek_token()
+                event = MappingStartEvent(None, None, True,
+                        token.start_mark, token.end_mark,
+                        flow_style=True)
+                self.state = self.parse_flow_sequence_entry_mapping_key
+                return event
+            elif not self.check_token(FlowSequenceEndToken):
+                self.states.append(self.parse_flow_sequence_entry)
+                return self.parse_flow_node()
+        token = self.get_token()
+        event = SequenceEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    def parse_flow_sequence_entry_mapping_key(self):
+        token = self.get_token()
+        if not self.check_token(ValueToken,
+                FlowEntryToken, FlowSequenceEndToken):
+            self.states.append(self.parse_flow_sequence_entry_mapping_value)
+            return self.parse_flow_node()
+        else:
+            self.state = self.parse_flow_sequence_entry_mapping_value
+            return self.process_empty_scalar(token.end_mark)
+
+    def parse_flow_sequence_entry_mapping_value(self):
+        if self.check_token(ValueToken):
+            token = self.get_token()
+            if not self.check_token(FlowEntryToken, FlowSequenceEndToken):
+                self.states.append(self.parse_flow_sequence_entry_mapping_end)
+                return self.parse_flow_node()
+            else:
+                self.state = self.parse_flow_sequence_entry_mapping_end
+                return self.process_empty_scalar(token.end_mark)
+        else:
+            self.state = self.parse_flow_sequence_entry_mapping_end
+            token = self.peek_token()
+            return self.process_empty_scalar(token.start_mark)
+
+    def parse_flow_sequence_entry_mapping_end(self):
+        self.state = self.parse_flow_sequence_entry
+        token = self.peek_token()
+        return MappingEndEvent(token.start_mark, token.start_mark)
+
+    # flow_mapping  ::= FLOW-MAPPING-START
+    #                   (flow_mapping_entry FLOW-ENTRY)*
+    #                   flow_mapping_entry?
+    #                   FLOW-MAPPING-END
+    # flow_mapping_entry    ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+
+    def parse_flow_mapping_first_key(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_flow_mapping_key(first=True)
+
+    def parse_flow_mapping_key(self, first=False):
+        if not self.check_token(FlowMappingEndToken):
+            if not first:
+                if self.check_token(FlowEntryToken):
+                    self.get_token()
+                else:
+                    token = self.peek_token()
+                    raise ParserError("while parsing a flow mapping", self.marks[-1],
+                            "expected ',' or '}', but got %r" % token.id, token.start_mark)
+            if self.check_token(KeyToken):
+                token = self.get_token()
+                if not self.check_token(ValueToken,
+                        FlowEntryToken, FlowMappingEndToken):
+                    self.states.append(self.parse_flow_mapping_value)
+                    return self.parse_flow_node()
+                else:
+                    self.state = self.parse_flow_mapping_value
+                    return self.process_empty_scalar(token.end_mark)
+            elif not self.check_token(FlowMappingEndToken):
+                self.states.append(self.parse_flow_mapping_empty_value)
+                return self.parse_flow_node()
+        token = self.get_token()
+        event = MappingEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    def parse_flow_mapping_value(self):
+        if self.check_token(ValueToken):
+            token = self.get_token()
+            if not self.check_token(FlowEntryToken, FlowMappingEndToken):
+                self.states.append(self.parse_flow_mapping_key)
+                return self.parse_flow_node()
+            else:
+                self.state = self.parse_flow_mapping_key
+                return self.process_empty_scalar(token.end_mark)
+        else:
+            self.state = self.parse_flow_mapping_key
+            token = self.peek_token()
+            return self.process_empty_scalar(token.start_mark)
+
+    def parse_flow_mapping_empty_value(self):
+        self.state = self.parse_flow_mapping_key
+        return self.process_empty_scalar(self.peek_token().start_mark)
+
+    def process_empty_scalar(self, mark):
+        return ScalarEvent(None, None, (True, False), u'', mark, mark)
+
diff --git a/lib/yaml/reader.py b/lib/yaml/reader.py
new file mode 100644
index 0000000..3249e6b
--- /dev/null
+++ b/lib/yaml/reader.py
@@ -0,0 +1,190 @@
+# This module contains abstractions for the input stream. You don't have to
+# look further, there is no pretty code here.
+#
+# We define two classes here.
+#
+#   Mark(source, line, column)
+# It's just a record and its only use is producing nice error messages.
+# Parser does not use it for any other purposes.
+#
+#   Reader(source, data)
+# Reader determines the encoding of `data` and converts it to unicode.
+# Reader provides the following methods and attributes:
+#   reader.peek(index=0) - return the character at `index` positions ahead
+#   reader.prefix(length=1) - return the next `length` characters
+#   reader.forward(length=1) - move the current position `length` characters ahead.
+#   reader.index - the number of the current character.
+#   reader.line, reader.column - the line and the column of the current character.
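+#
+# A minimal usage sketch (illustrative):
+#   reader = Reader(u'hello: world\n')
+#   reader.peek()       # -> u'h'
+#   reader.forward(5)
+#   reader.peek()       # -> u':'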
+
+__all__ = ['Reader', 'ReaderError']
+
+from error import YAMLError, Mark
+
+import codecs, re
+
+class ReaderError(YAMLError):
+
+    def __init__(self, name, position, character, encoding, reason):
+        self.name = name
+        self.character = character
+        self.position = position
+        self.encoding = encoding
+        self.reason = reason
+
+    def __str__(self):
+        if isinstance(self.character, str):
+            return "'%s' codec can't decode byte #x%02x: %s\n"  \
+                    "  in \"%s\", position %d"    \
+                    % (self.encoding, ord(self.character), self.reason,
+                            self.name, self.position)
+        else:
+            return "unacceptable character #x%04x: %s\n"    \
+                    "  in \"%s\", position %d"    \
+                    % (self.character, self.reason,
+                            self.name, self.position)
+
+class Reader(object):
+    # Reader:
+    # - determines the data encoding and converts it to unicode,
+    # - checks if characters are in allowed range,
+    # - adds '\0' to the end.
+
+    # Reader accepts
+    #  - a `str` object,
+    #  - a `unicode` object,
+    #  - a file-like object with its `read` method returning `str`,
+    #  - a file-like object with its `read` method returning `unicode`.
+
+    # Yeah, it's ugly and slow.
+
+    def __init__(self, stream):
+        self.name = None
+        self.stream = None
+        self.stream_pointer = 0
+        self.eof = True
+        self.buffer = u''
+        self.pointer = 0
+        self.raw_buffer = None
+        self.raw_decode = None
+        self.encoding = None
+        self.index = 0
+        self.line = 0
+        self.column = 0
+        if isinstance(stream, unicode):
+            self.name = "<unicode string>"
+            self.check_printable(stream)
+            self.buffer = stream+u'\0'
+        elif isinstance(stream, str):
+            self.name = "<string>"
+            self.raw_buffer = stream
+            self.determine_encoding()
+        else:
+            self.stream = stream
+            self.name = getattr(stream, 'name', "<file>")
+            self.eof = False
+            self.raw_buffer = ''
+            self.determine_encoding()
+
+    def peek(self, index=0):
+        try:
+            return self.buffer[self.pointer+index]
+        except IndexError:
+            self.update(index+1)
+            return self.buffer[self.pointer+index]
+
+    def prefix(self, length=1):
+        if self.pointer+length >= len(self.buffer):
+            self.update(length)
+        return self.buffer[self.pointer:self.pointer+length]
+
+    def forward(self, length=1):
+        if self.pointer+length+1 >= len(self.buffer):
+            self.update(length+1)
+        while length:
+            ch = self.buffer[self.pointer]
+            self.pointer += 1
+            self.index += 1
+            if ch in u'\n\x85\u2028\u2029'  \
+                    or (ch == u'\r' and self.buffer[self.pointer] != u'\n'):
+                self.line += 1
+                self.column = 0
+            elif ch != u'\uFEFF':
+                self.column += 1
+            length -= 1
+
+    def get_mark(self):
+        if self.stream is None:
+            return Mark(self.name, self.index, self.line, self.column,
+                    self.buffer, self.pointer)
+        else:
+            return Mark(self.name, self.index, self.line, self.column,
+                    None, None)
+
+    def determine_encoding(self):
+        while not self.eof and len(self.raw_buffer) < 2:
+            self.update_raw()
+        if not isinstance(self.raw_buffer, unicode):
+            if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
+                self.raw_decode = codecs.utf_16_le_decode
+                self.encoding = 'utf-16-le'
+            elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
+                self.raw_decode = codecs.utf_16_be_decode
+                self.encoding = 'utf-16-be'
+            else:
+                self.raw_decode = codecs.utf_8_decode
+                self.encoding = 'utf-8'
+        self.update(1)
+
+    NON_PRINTABLE = re.compile(u'[^\x09\x0A\x0D\x20-\x7E\x85\xA0-\uD7FF\uE000-\uFFFD]')
+    def check_printable(self, data):
+        match = self.NON_PRINTABLE.search(data)
+        if match:
+            character = match.group()
+            position = self.index+(len(self.buffer)-self.pointer)+match.start()
+            raise ReaderError(self.name, position, ord(character),
+                    'unicode', "special characters are not allowed")
+
+    def update(self, length):
+        if self.raw_buffer is None:
+            return
+        self.buffer = self.buffer[self.pointer:]
+        self.pointer = 0
+        while len(self.buffer) < length:
+            if not self.eof:
+                self.update_raw()
+            if self.raw_decode is not None:
+                try:
+                    data, converted = self.raw_decode(self.raw_buffer,
+                            'strict', self.eof)
+                except UnicodeDecodeError, exc:
+                    character = exc.object[exc.start]
+                    if self.stream is not None:
+                        position = self.stream_pointer-len(self.raw_buffer)+exc.start
+                    else:
+                        position = exc.start
+                    raise ReaderError(self.name, position, character,
+                            exc.encoding, exc.reason)
+            else:
+                data = self.raw_buffer
+                converted = len(data)
+            self.check_printable(data)
+            self.buffer += data
+            self.raw_buffer = self.raw_buffer[converted:]
+            if self.eof:
+                self.buffer += u'\0'
+                self.raw_buffer = None
+                break
+
+    def update_raw(self, size=1024):
+        data = self.stream.read(size)
+        if data:
+            self.raw_buffer += data
+            self.stream_pointer += len(data)
+        else:
+            self.eof = True
+
+#try:
+#    import psyco
+#    psyco.bind(Reader)
+#except ImportError:
+#    pass
+
diff --git a/lib/yaml/representer.py b/lib/yaml/representer.py
new file mode 100644
index 0000000..5f4fc70
--- /dev/null
+++ b/lib/yaml/representer.py
@@ -0,0 +1,484 @@
+
+__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
+    'RepresenterError']
+
+from error import *
+from nodes import *
+
+import datetime
+
+import sys, copy_reg, types
+
+class RepresenterError(YAMLError):
+    pass
+
+class BaseRepresenter(object):
+
+    yaml_representers = {}
+    yaml_multi_representers = {}
+
+    def __init__(self, default_style=None, default_flow_style=None):
+        self.default_style = default_style
+        self.default_flow_style = default_flow_style
+        self.represented_objects = {}
+        self.object_keeper = []
+        self.alias_key = None
+
+    def represent(self, data):
+        node = self.represent_data(data)
+        self.serialize(node)
+        self.represented_objects = {}
+        self.object_keeper = []
+        self.alias_key = None
+
+    def get_classobj_bases(self, cls):
+        bases = [cls]
+        for base in cls.__bases__:
+            bases.extend(self.get_classobj_bases(base))
+        return bases
+
+    def represent_data(self, data):
+        if self.ignore_aliases(data):
+            self.alias_key = None
+        else:
+            self.alias_key = id(data)
+        if self.alias_key is not None:
+            if self.alias_key in self.represented_objects:
+                node = self.represented_objects[self.alias_key]
+                #if node is None:
+                #    raise RepresenterError("recursive objects are not allowed: %r" % data)
+                return node
+            #self.represented_objects[alias_key] = None
+            self.object_keeper.append(data)
+        data_types = type(data).__mro__
+        if type(data) is types.InstanceType:
+            data_types = self.get_classobj_bases(data.__class__)+list(data_types)
+        if data_types[0] in self.yaml_representers:
+            node = self.yaml_representers[data_types[0]](self, data)
+        else:
+            for data_type in data_types:
+                if data_type in self.yaml_multi_representers:
+                    node = self.yaml_multi_representers[data_type](self, data)
+                    break
+            else:
+                if None in self.yaml_multi_representers:
+                    node = self.yaml_multi_representers[None](self, data)
+                elif None in self.yaml_representers:
+                    node = self.yaml_representers[None](self, data)
+                else:
+                    node = ScalarNode(None, unicode(data))
+        #if alias_key is not None:
+        #    self.represented_objects[alias_key] = node
+        return node
+
+    def add_representer(cls, data_type, representer):
+        if not 'yaml_representers' in cls.__dict__:
+            cls.yaml_representers = cls.yaml_representers.copy()
+        cls.yaml_representers[data_type] = representer
+    add_representer = classmethod(add_representer)
+
+    def add_multi_representer(cls, data_type, representer):
+        if not 'yaml_multi_representers' in cls.__dict__:
+            cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
+        cls.yaml_multi_representers[data_type] = representer
+    add_multi_representer = classmethod(add_multi_representer)
+
+    def represent_scalar(self, tag, value, style=None):
+        if style is None:
+            style = self.default_style
+        node = ScalarNode(tag, value, style=style)
+        if self.alias_key is not None:
+            self.represented_objects[self.alias_key] = node
+        return node
+
+    def represent_sequence(self, tag, sequence, flow_style=None):
+        value = []
+        node = SequenceNode(tag, value, flow_style=flow_style)
+        if self.alias_key is not None:
+            self.represented_objects[self.alias_key] = node
+        best_style = True
+        for item in sequence:
+            node_item = self.represent_data(item)
+            if not (isinstance(node_item, ScalarNode) and not node_item.style):
+                best_style = False
+            value.append(node_item)
+        if flow_style is None:
+            if self.default_flow_style is not None:
+                node.flow_style = self.default_flow_style
+            else:
+                node.flow_style = best_style
+        return node
+
+    def represent_mapping(self, tag, mapping, flow_style=None):
+        value = []
+        node = MappingNode(tag, value, flow_style=flow_style)
+        if self.alias_key is not None:
+            self.represented_objects[self.alias_key] = node
+        best_style = True
+        if hasattr(mapping, 'items'):
+            mapping = mapping.items()
+            mapping.sort()
+        for item_key, item_value in mapping:
+            node_key = self.represent_data(item_key)
+            node_value = self.represent_data(item_value)
+            if not (isinstance(node_key, ScalarNode) and not node_key.style):
+                best_style = False
+            if not (isinstance(node_value, ScalarNode) and not node_value.style):
+                best_style = False
+            value.append((node_key, node_value))
+        if flow_style is None:
+            if self.default_flow_style is not None:
+                node.flow_style = self.default_flow_style
+            else:
+                node.flow_style = best_style
+        return node
+
+    def ignore_aliases(self, data):
+        return False
+
+class SafeRepresenter(BaseRepresenter):
+
+    def ignore_aliases(self, data):
+        if data in [None, ()]:
+            return True
+        if isinstance(data, (str, unicode, bool, int, float)):
+            return True
+
+    def represent_none(self, data):
+        return self.represent_scalar(u'tag:yaml.org,2002:null',
+                u'null')
+
+    def represent_str(self, data):
+        tag = None
+        style = None
+        try:
+            data = unicode(data, 'ascii')
+            tag = u'tag:yaml.org,2002:str'
+        except UnicodeDecodeError:
+            try:
+                data = unicode(data, 'utf-8')
+                tag = u'tag:yaml.org,2002:str'
+            except UnicodeDecodeError:
+                data = data.encode('base64')
+                tag = u'tag:yaml.org,2002:binary'
+                style = '|'
+        return self.represent_scalar(tag, data, style=style)
+
+    def represent_unicode(self, data):
+        return self.represent_scalar(u'tag:yaml.org,2002:str', data)
+
+    def represent_bool(self, data):
+        if data:
+            value = u'true'
+        else:
+            value = u'false'
+        return self.represent_scalar(u'tag:yaml.org,2002:bool', value)
+
+    def represent_int(self, data):
+        return self.represent_scalar(u'tag:yaml.org,2002:int', unicode(data))
+
+    def represent_long(self, data):
+        return self.represent_scalar(u'tag:yaml.org,2002:int', unicode(data))
+
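+    # Compute the float infinity for this platform: keep squaring until
+    # repr() stops changing, i.e. the value has overflowed to 'inf'.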
+    inf_value = 1e300
+    while repr(inf_value) != repr(inf_value*inf_value):
+        inf_value *= inf_value
+
+    def represent_float(self, data):
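+        # 'data != data' holds only for NaN; the second test catches broken
+        # platforms where NaN compares equal to every number.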
+        if data != data or (data == 0.0 and data == 1.0):
+            value = u'.nan'
+        elif data == self.inf_value:
+            value = u'.inf'
+        elif data == -self.inf_value:
+            value = u'-.inf'
+        else:
+            value = unicode(repr(data)).lower()
+            # Note that in some cases `repr(data)` represents a float number
+            # without the decimal parts.  For instance:
+            #   >>> repr(1e17)
+            #   '1e17'
+            # Unfortunately, this is not a valid float representation according
+            # to the definition of the `!!float` tag.  We fix this by adding
+            # '.0' before the 'e' symbol.
+            if u'.' not in value and u'e' in value:
+                value = value.replace(u'e', u'.0e', 1)
+        return self.represent_scalar(u'tag:yaml.org,2002:float', value)
+
+    def represent_list(self, data):
+        #pairs = (len(data) > 0 and isinstance(data, list))
+        #if pairs:
+        #    for item in data:
+        #        if not isinstance(item, tuple) or len(item) != 2:
+        #            pairs = False
+        #            break
+        #if not pairs:
+            return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
+        #value = []
+        #for item_key, item_value in data:
+        #    value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
+        #        [(item_key, item_value)]))
+        #return SequenceNode(u'tag:yaml.org,2002:pairs', value)
+
+    def represent_dict(self, data):
+        return self.represent_mapping(u'tag:yaml.org,2002:map', data)
+
+    def represent_set(self, data):
+        value = {}
+        for key in data:
+            value[key] = None
+        return self.represent_mapping(u'tag:yaml.org,2002:set', value)
+
+    def represent_date(self, data):
+        value = unicode(data.isoformat())
+        return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
+
+    def represent_datetime(self, data):
+        value = unicode(data.isoformat(' '))
+        return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
+
+    def represent_yaml_object(self, tag, data, cls, flow_style=None):
+        if hasattr(data, '__getstate__'):
+            state = data.__getstate__()
+        else:
+            state = data.__dict__.copy()
+        return self.represent_mapping(tag, state, flow_style=flow_style)
+
+    def represent_undefined(self, data):
+        raise RepresenterError("cannot represent an object: %s" % data)
+
+SafeRepresenter.add_representer(type(None),
+        SafeRepresenter.represent_none)
+
+SafeRepresenter.add_representer(str,
+        SafeRepresenter.represent_str)
+
+SafeRepresenter.add_representer(unicode,
+        SafeRepresenter.represent_unicode)
+
+SafeRepresenter.add_representer(bool,
+        SafeRepresenter.represent_bool)
+
+SafeRepresenter.add_representer(int,
+        SafeRepresenter.represent_int)
+
+SafeRepresenter.add_representer(long,
+        SafeRepresenter.represent_long)
+
+SafeRepresenter.add_representer(float,
+        SafeRepresenter.represent_float)
+
+SafeRepresenter.add_representer(list,
+        SafeRepresenter.represent_list)
+
+SafeRepresenter.add_representer(tuple,
+        SafeRepresenter.represent_list)
+
+SafeRepresenter.add_representer(dict,
+        SafeRepresenter.represent_dict)
+
+SafeRepresenter.add_representer(set,
+        SafeRepresenter.represent_set)
+
+SafeRepresenter.add_representer(datetime.date,
+        SafeRepresenter.represent_date)
+
+SafeRepresenter.add_representer(datetime.datetime,
+        SafeRepresenter.represent_datetime)
+
+SafeRepresenter.add_representer(None,
+        SafeRepresenter.represent_undefined)
+
+class Representer(SafeRepresenter):
+
+    def represent_str(self, data):
+        tag = None
+        style = None
+        try:
+            data = unicode(data, 'ascii')
+            tag = u'tag:yaml.org,2002:str'
+        except UnicodeDecodeError:
+            try:
+                data = unicode(data, 'utf-8')
+                tag = u'tag:yaml.org,2002:python/str'
+            except UnicodeDecodeError:
+                data = data.encode('base64')
+                tag = u'tag:yaml.org,2002:binary'
+                style = '|'
+        return self.represent_scalar(tag, data, style=style)
+
+    def represent_unicode(self, data):
+        tag = None
+        try:
+            data.encode('ascii')
+            tag = u'tag:yaml.org,2002:python/unicode'
+        except UnicodeEncodeError:
+            tag = u'tag:yaml.org,2002:str'
+        return self.represent_scalar(tag, data)
+
+    def represent_long(self, data):
+        tag = u'tag:yaml.org,2002:int'
+        if int(data) is not data:
+            tag = u'tag:yaml.org,2002:python/long'
+        return self.represent_scalar(tag, unicode(data))
+
+    def represent_complex(self, data):
+        if data.imag == 0.0:
+            data = u'%r' % data.real
+        elif data.real == 0.0:
+            data = u'%rj' % data.imag
+        elif data.imag > 0:
+            data = u'%r+%rj' % (data.real, data.imag)
+        else:
+            data = u'%r%rj' % (data.real, data.imag)
+        return self.represent_scalar(u'tag:yaml.org,2002:python/complex', data)
+
+    def represent_tuple(self, data):
+        return self.represent_sequence(u'tag:yaml.org,2002:python/tuple', data)
+
+    def represent_name(self, data):
+        name = u'%s.%s' % (data.__module__, data.__name__)
+        return self.represent_scalar(u'tag:yaml.org,2002:python/name:'+name, u'')
+
+    def represent_module(self, data):
+        return self.represent_scalar(
+                u'tag:yaml.org,2002:python/module:'+data.__name__, u'')
+
+    def represent_instance(self, data):
+        # For instances of classic classes, we use __getinitargs__ and
+        # __getstate__ to serialize the data.
+
+        # If data.__getinitargs__ exists, the object must be reconstructed by
+        # calling cls(**args), where args is a tuple returned by
+        # __getinitargs__. Otherwise, the cls.__init__ method should never be
+        # called and the class instance is created by instantiating a trivial
+        # class and assigning to the instance's __class__ variable.
+
+        # If data.__getstate__ exists, it returns the state of the object.
+        # Otherwise, the state of the object is data.__dict__.
+
+        # We produce either a !!python/object or !!python/object/new node.
+        # If data.__getinitargs__ does not exist and state is a dictionary, we
+        # produce a !!python/object node.  Otherwise we produce a
+        # !!python/object/new node.
+
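+        # For instance, a classic class like (an illustrative sketch):
+        #   class Point:
+        #       def __init__(self, x, y):
+        #           self.x = x
+        #           self.y = y
+        # defines neither __getinitargs__ nor __getstate__, so an instance is
+        # represented as a '!!python/object:<module>.Point' mapping node.
+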
+        cls = data.__class__
+        class_name = u'%s.%s' % (cls.__module__, cls.__name__)
+        args = None
+        state = None
+        if hasattr(data, '__getinitargs__'):
+            args = list(data.__getinitargs__())
+        if hasattr(data, '__getstate__'):
+            state = data.__getstate__()
+        else:
+            state = data.__dict__
+        if args is None and isinstance(state, dict):
+            return self.represent_mapping(
+                    u'tag:yaml.org,2002:python/object:'+class_name, state)
+        if isinstance(state, dict) and not state:
+            return self.represent_sequence(
+                    u'tag:yaml.org,2002:python/object/new:'+class_name, args)
+        value = {}
+        if args:
+            value['args'] = args
+        value['state'] = state
+        return self.represent_mapping(
+                u'tag:yaml.org,2002:python/object/new:'+class_name, value)
+
+    def represent_object(self, data):
+        # We use __reduce__ API to save the data. data.__reduce__ returns
+        # a tuple of length 2-5:
+        #   (function, args, state, listitems, dictitems)
+
+        # For reconstructing, we call function(*args), then set its state,
+        # listitems, and dictitems if they are not None.
+
+        # A special case is when function.__name__ == '__newobj__'. In this
+        # case we create the object with args[0].__new__(*args).
+
+        # Another special case is when __reduce__ returns a string - we don't
+        # support it.
+
+        # We produce a !!python/object, !!python/object/new or
+        # !!python/object/apply node.
+
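+        # For instance, when __reduce_ex__ returns
+        # (copy_reg.__newobj__, (cls,), state) with a dictionary state, the
+        # result is a '!!python/object:<module>.<name>' mapping node (an
+        # illustrative example).
+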
+        cls = type(data)
+        if cls in copy_reg.dispatch_table:
+            reduce = copy_reg.dispatch_table[cls](data)
+        elif hasattr(data, '__reduce_ex__'):
+            reduce = data.__reduce_ex__(2)
+        elif hasattr(data, '__reduce__'):
+            reduce = data.__reduce__()
+        else:
+            raise RepresenterError("cannot represent object: %r" % data)
+        reduce = (list(reduce)+[None]*5)[:5]
+        function, args, state, listitems, dictitems = reduce
+        args = list(args)
+        if state is None:
+            state = {}
+        if listitems is not None:
+            listitems = list(listitems)
+        if dictitems is not None:
+            dictitems = dict(dictitems)
+        if function.__name__ == '__newobj__':
+            function = args[0]
+            args = args[1:]
+            tag = u'tag:yaml.org,2002:python/object/new:'
+            newobj = True
+        else:
+            tag = u'tag:yaml.org,2002:python/object/apply:'
+            newobj = False
+        function_name = u'%s.%s' % (function.__module__, function.__name__)
+        if not args and not listitems and not dictitems \
+                and isinstance(state, dict) and newobj:
+            return self.represent_mapping(
+                    u'tag:yaml.org,2002:python/object:'+function_name, state)
+        if not listitems and not dictitems  \
+                and isinstance(state, dict) and not state:
+            return self.represent_sequence(tag+function_name, args)
+        value = {}
+        if args:
+            value['args'] = args
+        if state or not isinstance(state, dict):
+            value['state'] = state
+        if listitems:
+            value['listitems'] = listitems
+        if dictitems:
+            value['dictitems'] = dictitems
+        return self.represent_mapping(tag+function_name, value)
+
+Representer.add_representer(str,
+        Representer.represent_str)
+
+Representer.add_representer(unicode,
+        Representer.represent_unicode)
+
+Representer.add_representer(long,
+        Representer.represent_long)
+
+Representer.add_representer(complex,
+        Representer.represent_complex)
+
+Representer.add_representer(tuple,
+        Representer.represent_tuple)
+
+Representer.add_representer(type,
+        Representer.represent_name)
+
+Representer.add_representer(types.ClassType,
+        Representer.represent_name)
+
+Representer.add_representer(types.FunctionType,
+        Representer.represent_name)
+
+Representer.add_representer(types.BuiltinFunctionType,
+        Representer.represent_name)
+
+Representer.add_representer(types.ModuleType,
+        Representer.represent_module)
+
+Representer.add_multi_representer(types.InstanceType,
+        Representer.represent_instance)
+
+Representer.add_multi_representer(object,
+        Representer.represent_object)
+
diff --git a/lib/yaml/resolver.py b/lib/yaml/resolver.py
new file mode 100644
index 0000000..6b5ab87
--- /dev/null
+++ b/lib/yaml/resolver.py
@@ -0,0 +1,224 @@
+
+__all__ = ['BaseResolver', 'Resolver']
+
+from error import *
+from nodes import *
+
+import re
+
+class ResolverError(YAMLError):
+    pass
+
+class BaseResolver(object):
+
+    DEFAULT_SCALAR_TAG = u'tag:yaml.org,2002:str'
+    DEFAULT_SEQUENCE_TAG = u'tag:yaml.org,2002:seq'
+    DEFAULT_MAPPING_TAG = u'tag:yaml.org,2002:map'
+
+    yaml_implicit_resolvers = {}
+    yaml_path_resolvers = {}
+
+    def __init__(self):
+        self.resolver_exact_paths = []
+        self.resolver_prefix_paths = []
+
+    def add_implicit_resolver(cls, tag, regexp, first):
+        if not 'yaml_implicit_resolvers' in cls.__dict__:
+            cls.yaml_implicit_resolvers = cls.yaml_implicit_resolvers.copy()
+        if first is None:
+            first = [None]
+        for ch in first:
+            cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
+    add_implicit_resolver = classmethod(add_implicit_resolver)
+
+    def add_path_resolver(cls, tag, path, kind=None):
+        # Note: `add_path_resolver` is experimental.  The API could be changed.
+        # `path` is a pattern that is matched against the path from the
+        # root to the node that is being considered.  `path` elements are
+        # tuples `(node_check, index_check)`.  `node_check` is a node class:
+        # `ScalarNode`, `SequenceNode`, `MappingNode` or `None`.  `None`
+        # matches any kind of a node.  `index_check` could be `None`, a boolean
+        # value, a string value, or a number.  `None` and `False` match against
+        # any _value_ of sequence and mapping nodes.  `True` matches against
+        # any _key_ of a mapping node.  A string `index_check` matches against
+        # a mapping value that corresponds to a scalar key whose content is
+        # equal to the `index_check` value.  An integer `index_check` matches
+        # against a sequence value with the index equal to `index_check`.
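+        #
+        # For example, a hypothetical call (sketch):
+        #   MyResolver.add_path_resolver(u'tag:yaml.org,2002:int',
+        #           [u'port'], str)
+        # forces the scalar value of a top-level 'port' key to be resolved
+        # as !!int.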
+        if not 'yaml_path_resolvers' in cls.__dict__:
+            cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
+        new_path = []
+        for element in path:
+            if isinstance(element, (list, tuple)):
+                if len(element) == 2:
+                    node_check, index_check = element
+                elif len(element) == 1:
+                    node_check = element[0]
+                    index_check = True
+                else:
+                    raise ResolverError("Invalid path element: %s" % element)
+            else:
+                node_check = None
+                index_check = element
+            if node_check is str:
+                node_check = ScalarNode
+            elif node_check is list:
+                node_check = SequenceNode
+            elif node_check is dict:
+                node_check = MappingNode
+            elif node_check not in [ScalarNode, SequenceNode, MappingNode]  \
+                    and not isinstance(node_check, basestring)  \
+                    and node_check is not None:
+                raise ResolverError("Invalid node checker: %s" % node_check)
+            if not isinstance(index_check, (basestring, int))   \
+                    and index_check is not None:
+                raise ResolverError("Invalid index checker: %s" % index_check)
+            new_path.append((node_check, index_check))
+        if kind is str:
+            kind = ScalarNode
+        elif kind is list:
+            kind = SequenceNode
+        elif kind is dict:
+            kind = MappingNode
+        elif kind not in [ScalarNode, SequenceNode, MappingNode]    \
+                and kind is not None:
+            raise ResolverError("Invalid node kind: %s" % kind)
+        cls.yaml_path_resolvers[tuple(new_path), kind] = tag
+    add_path_resolver = classmethod(add_path_resolver)
+
+    def descend_resolver(self, current_node, current_index):
+        if not self.yaml_path_resolvers:
+            return
+        exact_paths = {}
+        prefix_paths = []
+        if current_node:
+            depth = len(self.resolver_prefix_paths)
+            for path, kind in self.resolver_prefix_paths[-1]:
+                if self.check_resolver_prefix(depth, path, kind,
+                        current_node, current_index):
+                    if len(path) > depth:
+                        prefix_paths.append((path, kind))
+                    else:
+                        exact_paths[kind] = self.yaml_path_resolvers[path, kind]
+        else:
+            for path, kind in self.yaml_path_resolvers:
+                if not path:
+                    exact_paths[kind] = self.yaml_path_resolvers[path, kind]
+                else:
+                    prefix_paths.append((path, kind))
+        self.resolver_exact_paths.append(exact_paths)
+        self.resolver_prefix_paths.append(prefix_paths)
+
+    def ascend_resolver(self):
+        if not self.yaml_path_resolvers:
+            return
+        self.resolver_exact_paths.pop()
+        self.resolver_prefix_paths.pop()
+
+    def check_resolver_prefix(self, depth, path, kind,
+            current_node, current_index):
+        node_check, index_check = path[depth-1]
+        if isinstance(node_check, basestring):
+            if current_node.tag != node_check:
+                return
+        elif node_check is not None:
+            if not isinstance(current_node, node_check):
+                return
+        if index_check is True and current_index is not None:
+            return
+        if (index_check is False or index_check is None)    \
+                and current_index is None:
+            return
+        if isinstance(index_check, basestring):
+            if not (isinstance(current_index, ScalarNode)
+                    and index_check == current_index.value):
+                return
+        elif isinstance(index_check, int) and not isinstance(index_check, bool):
+            if index_check != current_index:
+                return
+        return True
+
+    def resolve(self, kind, value, implicit):
+        if kind is ScalarNode and implicit[0]:
+            if value == u'':
+                resolvers = self.yaml_implicit_resolvers.get(u'', [])
+            else:
+                resolvers = self.yaml_implicit_resolvers.get(value[0], [])
+            resolvers += self.yaml_implicit_resolvers.get(None, [])
+            for tag, regexp in resolvers:
+                if regexp.match(value):
+                    return tag
+            implicit = implicit[1]
+        if self.yaml_path_resolvers:
+            exact_paths = self.resolver_exact_paths[-1]
+            if kind in exact_paths:
+                return exact_paths[kind]
+            if None in exact_paths:
+                return exact_paths[None]
+        if kind is ScalarNode:
+            return self.DEFAULT_SCALAR_TAG
+        elif kind is SequenceNode:
+            return self.DEFAULT_SEQUENCE_TAG
+        elif kind is MappingNode:
+            return self.DEFAULT_MAPPING_TAG
+
+class Resolver(BaseResolver):
+    pass
+
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:bool',
+        re.compile(ur'''^(?:yes|Yes|YES|no|No|NO
+                    |true|True|TRUE|false|False|FALSE
+                    |on|On|ON|off|Off|OFF)$''', re.X),
+        list(u'yYnNtTfFoO'))
+
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:float',
+        re.compile(ur'''^(?:[-+]?(?:[0-9][0-9_]*)\.[0-9_]*(?:[eE][-+][0-9]+)?
+                    |\.[0-9_]+(?:[eE][-+][0-9]+)?
+                    |[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\.[0-9_]*
+                    |[-+]?\.(?:inf|Inf|INF)
+                    |\.(?:nan|NaN|NAN))$''', re.X),
+        list(u'-+0123456789.'))
+
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:int',
+        re.compile(ur'''^(?:[-+]?0b[0-1_]+
+                    |[-+]?0[0-7_]+
+                    |[-+]?(?:0|[1-9][0-9_]*)
+                    |[-+]?0x[0-9a-fA-F_]+
+                    |[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
+        list(u'-+0123456789'))
+
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:merge',
+        re.compile(ur'^(?:<<)$'),
+        [u'<'])
+
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:null',
+        re.compile(ur'''^(?: ~
+                    |null|Null|NULL
+                    | )$''', re.X),
+        [u'~', u'n', u'N', u''])
+
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:timestamp',
+        re.compile(ur'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
+                    |[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
+                     (?:[Tt]|[ \t]+)[0-9][0-9]?
+                     :[0-9][0-9] :[0-9][0-9] (?:\.[0-9]*)?
+                     (?:[ \t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
+        list(u'0123456789'))
+
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:value',
+        re.compile(ur'^(?:=)$'),
+        [u'='])
+
+# The following resolver is only for documentation purposes. It cannot work
+# because plain scalars cannot start with '!', '&', or '*'.
+Resolver.add_implicit_resolver(
+        u'tag:yaml.org,2002:yaml',
+        re.compile(ur'^(?:!|&|\*)$'),
+        list(u'!&*'))
+
diff --git a/lib/yaml/scanner.py b/lib/yaml/scanner.py
new file mode 100644
index 0000000..834f662
--- /dev/null
+++ b/lib/yaml/scanner.py
@@ -0,0 +1,1453 @@
+
+# Scanner produces tokens of the following types:
+# STREAM-START
+# STREAM-END
+# DIRECTIVE(name, value)
+# DOCUMENT-START
+# DOCUMENT-END
+# BLOCK-SEQUENCE-START
+# BLOCK-MAPPING-START
+# BLOCK-END
+# FLOW-SEQUENCE-START
+# FLOW-MAPPING-START
+# FLOW-SEQUENCE-END
+# FLOW-MAPPING-END
+# BLOCK-ENTRY
+# FLOW-ENTRY
+# KEY
+# VALUE
+# ALIAS(value)
+# ANCHOR(value)
+# TAG(value)
+# SCALAR(value, plain, style)
+#
+# Read comments in the Scanner code for more details.
+#
+
+__all__ = ['Scanner', 'ScannerError']
+
+from error import MarkedYAMLError
+from tokens import *
+
+class ScannerError(MarkedYAMLError):
+    pass
+
+class SimpleKey(object):
+    # See the simple keys treatment below.
+
+    def __init__(self, token_number, required, index, line, column, mark):
+        self.token_number = token_number
+        self.required = required
+        self.index = index
+        self.line = line
+        self.column = column
+        self.mark = mark
+
+class Scanner(object):
+
+    def __init__(self):
+        """Initialize the scanner."""
+        # It is assumed that Scanner and Reader will have a common descendant.
+        # Reader does the dirty work of checking for BOM and converting the
+        # input data to Unicode. It also adds NUL to the end.
+        #
+        # Reader supports the following methods
+        #   self.peek(i=0)       # peek the next i-th character
+        #   self.prefix(l=1)     # peek the next l characters
+        #   self.forward(l=1)    # read the next l characters and move the pointer.
+
+        # Have we reached the end of the stream?
+        self.done = False
+
+        # The number of unclosed '{' and '['. `flow_level == 0` means block
+        # context.
+        self.flow_level = 0
+
+        # List of processed tokens that are not yet emitted.
+        self.tokens = []
+
+        # Add the STREAM-START token.
+        self.fetch_stream_start()
+
+        # Number of tokens that were emitted through the `get_token` method.
+        self.tokens_taken = 0
+
+        # The current indentation level.
+        self.indent = -1
+
+        # Past indentation levels.
+        self.indents = []
+
+        # Variables related to simple keys treatment.
+
+        # A simple key is a key that is not denoted by the '?' indicator.
+        # Example of simple keys:
+        #   ---
+        #   block simple key: value
+        #   ? not a simple key:
+        #   : { flow simple key: value }
+        # We emit the KEY token before all keys, so when we find a potential
+        # simple key, we try to locate the corresponding ':' indicator.
+        # Simple keys should be limited to a single line and 1024 characters.
+
+        # Can a simple key start at the current position? A simple key may
+        # start:
+        # - at the beginning of the line, not counting indentation spaces
+        #       (in block context),
+        # - after '{', '[', ',' (in the flow context),
+        # - after '?', ':', '-' (in the block context).
+        # In the block context, this flag also signifies if a block collection
+        # may start at the current position.
+        self.allow_simple_key = True
+
+        # Keep track of possible simple keys. This is a dictionary. The key
+        # is `flow_level`; there can be no more than one possible simple key
+        # for each level. The value is a SimpleKey record:
+        #   (token_number, required, index, line, column, mark)
+        # A simple key may start with ALIAS, ANCHOR, TAG, SCALAR(flow),
+        # '[', or '{' tokens.
+        self.possible_simple_keys = {}
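+        # For example (a sketch): while scanning '{ a: 1 }', when the scanner
+        # reaches 'a' it records a possible simple key for flow level 1; the
+        # ':' that follows turns it into a real KEY token.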
+
+    # Public methods.
+
+    def check_token(self, *choices):
+        # Check if the next token is one of the given types.
+        while self.need_more_tokens():
+            self.fetch_more_tokens()
+        if self.tokens:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.tokens[0], choice):
+                    return True
+        return False
+
+    def peek_token(self):
+        # Return the next token, but do not delete it from the queue.
+        while self.need_more_tokens():
+            self.fetch_more_tokens()
+        if self.tokens:
+            return self.tokens[0]
+
+    def get_token(self):
+        # Return the next token.
+        while self.need_more_tokens():
+            self.fetch_more_tokens()
+        if self.tokens:
+            self.tokens_taken += 1
+            return self.tokens.pop(0)
+
+    # Private methods.
+
+    def need_more_tokens(self):
+        if self.done:
+            return False
+        if not self.tokens:
+            return True
+        # The current token may be a potential simple key, so we
+        # need to look further.
+        self.stale_possible_simple_keys()
+        if self.next_possible_simple_key() == self.tokens_taken:
+            return True
+
+    def fetch_more_tokens(self):
+
+        # Eat whitespaces and comments until we reach the next token.
+        self.scan_to_next_token()
+
+        # Remove obsolete possible simple keys.
+        self.stale_possible_simple_keys()
+
+        # Compare the current indentation and column. It may add some tokens
+        # and decrease the current indentation level.
+        self.unwind_indent(self.column)
+
+        # Peek the next character.
+        ch = self.peek()
+
+        # Is it the end of stream?
+        if ch == u'\0':
+            return self.fetch_stream_end()
+
+        # Is it a directive?
+        if ch == u'%' and self.check_directive():
+            return self.fetch_directive()
+
+        # Is it the document start?
+        if ch == u'-' and self.check_document_start():
+            return self.fetch_document_start()
+
+        # Is it the document end?
+        if ch == u'.' and self.check_document_end():
+            return self.fetch_document_end()
+
+        # TODO: support for BOM within a stream.
+        #if ch == u'\uFEFF':
+        #    return self.fetch_bom()    <-- issue BOMToken
+
+        # Note: the order of the following checks is NOT significant.
+
+        # Is it the flow sequence start indicator?
+        if ch == u'[':
+            return self.fetch_flow_sequence_start()
+
+        # Is it the flow mapping start indicator?
+        if ch == u'{':
+            return self.fetch_flow_mapping_start()
+
+        # Is it the flow sequence end indicator?
+        if ch == u']':
+            return self.fetch_flow_sequence_end()
+
+        # Is it the flow mapping end indicator?
+        if ch == u'}':
+            return self.fetch_flow_mapping_end()
+
+        # Is it the flow entry indicator?
+        if ch == u',':
+            return self.fetch_flow_entry()
+
+        # Is it the block entry indicator?
+        if ch == u'-' and self.check_block_entry():
+            return self.fetch_block_entry()
+
+        # Is it the key indicator?
+        if ch == u'?' and self.check_key():
+            return self.fetch_key()
+
+        # Is it the value indicator?
+        if ch == u':' and self.check_value():
+            return self.fetch_value()
+
+        # Is it an alias?
+        if ch == u'*':
+            return self.fetch_alias()
+
+        # Is it an anchor?
+        if ch == u'&':
+            return self.fetch_anchor()
+
+        # Is it a tag?
+        if ch == u'!':
+            return self.fetch_tag()
+
+        # Is it a literal scalar?
+        if ch == u'|' and not self.flow_level:
+            return self.fetch_literal()
+
+        # Is it a folded scalar?
+        if ch == u'>' and not self.flow_level:
+            return self.fetch_folded()
+
+        # Is it a single quoted scalar?
+        if ch == u'\'':
+            return self.fetch_single()
+
+        # Is it a double quoted scalar?
+        if ch == u'\"':
+            return self.fetch_double()
+
+        # It must be a plain scalar then.
+        if self.check_plain():
+            return self.fetch_plain()
+
+        # No? It's an error. Let's produce a nice error message.
+        raise ScannerError("while scanning for the next token", None,
+                "found character %r that cannot start any token"
+                % ch.encode('utf-8'), self.get_mark())
+
+    # Simple keys treatment.
+
+    def next_possible_simple_key(self):
+        # Return the number of the nearest possible simple key. Actually we
+        # don't need to loop through the whole dictionary. We may replace it
+        # with the following code:
+        #   if not self.possible_simple_keys:
+        #       return None
+        #   return self.possible_simple_keys[
+        #           min(self.possible_simple_keys.keys())].token_number
+        min_token_number = None
+        for level in self.possible_simple_keys:
+            key = self.possible_simple_keys[level]
+            if min_token_number is None or key.token_number < min_token_number:
+                min_token_number = key.token_number
+        return min_token_number
+
+    def stale_possible_simple_keys(self):
+        # Remove entries that are no longer possible simple keys. According to
+        # the YAML specification, simple keys
+        # - should be limited to a single line,
+        # - should be no longer than 1024 characters.
+        # Disabling this procedure will allow simple keys of any length and
+        # height (may cause problems if indentation is broken though).
+        for level in self.possible_simple_keys.keys():
+            key = self.possible_simple_keys[level]
+            if key.line != self.line  \
+                    or self.index-key.index > 1024:
+                if key.required:
+                    raise ScannerError("while scanning a simple key", key.mark,
+                            "could not find expected ':'", self.get_mark())
+                del self.possible_simple_keys[level]
+
+    def save_possible_simple_key(self):
+        # The next token may start a simple key. We check if it's possible
+        # and save its position. This function is called for
+        #   ALIAS, ANCHOR, TAG, SCALAR(flow), '[', and '{'.
+
+        # Check if a simple key is required at the current position.
+        required = not self.flow_level and self.indent == self.column
+
+        # The next token might be a simple key. Let's save its number and
+        # position.
+        if self.allow_simple_key:
+            self.remove_possible_simple_key()
+            token_number = self.tokens_taken+len(self.tokens)
+            key = SimpleKey(token_number, required,
+                    self.index, self.line, self.column, self.get_mark())
+            self.possible_simple_keys[self.flow_level] = key
+
+    def remove_possible_simple_key(self):
+        # Remove the saved possible key position at the current flow level.
+        if self.flow_level in self.possible_simple_keys:
+            key = self.possible_simple_keys[self.flow_level]
+            
+            if key.required:
+                raise ScannerError("while scanning a simple key", key.mark,
+                        "could not find expected ':'", self.get_mark())
+
+            del self.possible_simple_keys[self.flow_level]
+
+    # Indentation functions.
+
+    def unwind_indent(self, column):
+
+        ## In flow context, tokens should respect indentation.
+        ## Actually the condition should be `self.indent >= column` according to
+        ## the spec. But this condition will prohibit intuitively correct
+        ## constructions such as
+        ## key : {
+        ## }
+        #if self.flow_level and self.indent > column:
+        #    raise ScannerError(None, None,
+        #            "invalid intendation or unclosed '[' or '{'",
+        #            self.get_mark())
+
+        # In the flow context, indentation is ignored. We make the scanner less
+        # restrictive than the specification requires.
+        if self.flow_level:
+            return
+
+        # In block context, we may need to issue the BLOCK-END tokens.
+        while self.indent > column:
+            mark = self.get_mark()
+            self.indent = self.indents.pop()
+            self.tokens.append(BlockEndToken(mark, mark))
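+        # For example (a sketch): with self.indent == 4 and
+        # self.indents == [-1, 0, 2], unwind_indent(0) emits two BLOCK-END
+        # tokens and leaves self.indent == 0.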
+
+    def add_indent(self, column):
+        # Check if we need to increase indentation.
+        if self.indent < column:
+            self.indents.append(self.indent)
+            self.indent = column
+            return True
+        return False
+
+    # Fetchers.
+
+    def fetch_stream_start(self):
+        # We always add STREAM-START as the first token and STREAM-END as the
+        # last token.
+
+        # Read the token.
+        mark = self.get_mark()
+        
+        # Add STREAM-START.
+        self.tokens.append(StreamStartToken(mark, mark,
+            encoding=self.encoding))
+        
+
+    def fetch_stream_end(self):
+
+        # Set the current indentation to -1.
+        self.unwind_indent(-1)
+
+        # Reset simple keys.
+        self.remove_possible_simple_key()
+        self.allow_simple_key = False
+        self.possible_simple_keys = {}
+
+        # Read the token.
+        mark = self.get_mark()
+        
+        # Add STREAM-END.
+        self.tokens.append(StreamEndToken(mark, mark))
+
+        # The stream is finished.
+        self.done = True
+
+    def fetch_directive(self):
+        
+        # Set the current indentation to -1.
+        self.unwind_indent(-1)
+
+        # Reset simple keys.
+        self.remove_possible_simple_key()
+        self.allow_simple_key = False
+
+        # Scan and add DIRECTIVE.
+        self.tokens.append(self.scan_directive())
+
+    def fetch_document_start(self):
+        self.fetch_document_indicator(DocumentStartToken)
+
+    def fetch_document_end(self):
+        self.fetch_document_indicator(DocumentEndToken)
+
+    def fetch_document_indicator(self, TokenClass):
+
+        # Set the current indentation to -1.
+        self.unwind_indent(-1)
+
+        # Reset simple keys. Note that there cannot be a block collection
+        # after '---'.
+        self.remove_possible_simple_key()
+        self.allow_simple_key = False
+
+        # Add DOCUMENT-START or DOCUMENT-END.
+        start_mark = self.get_mark()
+        self.forward(3)
+        end_mark = self.get_mark()
+        self.tokens.append(TokenClass(start_mark, end_mark))
+
+    def fetch_flow_sequence_start(self):
+        self.fetch_flow_collection_start(FlowSequenceStartToken)
+
+    def fetch_flow_mapping_start(self):
+        self.fetch_flow_collection_start(FlowMappingStartToken)
+
+    def fetch_flow_collection_start(self, TokenClass):
+
+        # '[' and '{' may start a simple key.
+        self.save_possible_simple_key()
+
+        # Increase the flow level.
+        self.flow_level += 1
+
+        # Simple keys are allowed after '[' and '{'.
+        self.allow_simple_key = True
+
+        # Add FLOW-SEQUENCE-START or FLOW-MAPPING-START.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(TokenClass(start_mark, end_mark))
+
+    def fetch_flow_sequence_end(self):
+        self.fetch_flow_collection_end(FlowSequenceEndToken)
+
+    def fetch_flow_mapping_end(self):
+        self.fetch_flow_collection_end(FlowMappingEndToken)
+
+    def fetch_flow_collection_end(self, TokenClass):
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Decrease the flow level.
+        self.flow_level -= 1
+
+        # No simple keys after ']' or '}'.
+        self.allow_simple_key = False
+
+        # Add FLOW-SEQUENCE-END or FLOW-MAPPING-END.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(TokenClass(start_mark, end_mark))
+
+    def fetch_flow_entry(self):
+
+        # Simple keys are allowed after ','.
+        self.allow_simple_key = True
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Add FLOW-ENTRY.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(FlowEntryToken(start_mark, end_mark))
+
+    def fetch_block_entry(self):
+
+        # Block context needs additional checks.
+        if not self.flow_level:
+
+            # Are we allowed to start a new entry?
+            if not self.allow_simple_key:
+                raise ScannerError(None, None,
+                        "sequence entries are not allowed here",
+                        self.get_mark())
+
+            # We may need to add BLOCK-SEQUENCE-START.
+            if self.add_indent(self.column):
+                mark = self.get_mark()
+                self.tokens.append(BlockSequenceStartToken(mark, mark))
+
+        # It's an error for the block entry to occur in the flow context,
+        # but we let the parser detect this.
+        else:
+            pass
+
+        # Simple keys are allowed after '-'.
+        self.allow_simple_key = True
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Add BLOCK-ENTRY.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(BlockEntryToken(start_mark, end_mark))
+
+    def fetch_key(self):
+        
+        # Block context needs additional checks.
+        if not self.flow_level:
+
+            # Are we allowed to start a key (not necessarily a simple one)?
+            if not self.allow_simple_key:
+                raise ScannerError(None, None,
+                        "mapping keys are not allowed here",
+                        self.get_mark())
+
+            # We may need to add BLOCK-MAPPING-START.
+            if self.add_indent(self.column):
+                mark = self.get_mark()
+                self.tokens.append(BlockMappingStartToken(mark, mark))
+
+        # Simple keys are allowed after '?' in the block context.
+        self.allow_simple_key = not self.flow_level
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Add KEY.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(KeyToken(start_mark, end_mark))
+
+    def fetch_value(self):
+
+        # Do we have a saved possible simple key to turn into a real key?
+        if self.flow_level in self.possible_simple_keys:
+
+            # Add KEY.
+            key = self.possible_simple_keys[self.flow_level]
+            del self.possible_simple_keys[self.flow_level]
+            self.tokens.insert(key.token_number-self.tokens_taken,
+                    KeyToken(key.mark, key.mark))
+
+            # If this key starts a new block mapping, we need to add
+            # BLOCK-MAPPING-START.
+            if not self.flow_level:
+                if self.add_indent(key.column):
+                    self.tokens.insert(key.token_number-self.tokens_taken,
+                            BlockMappingStartToken(key.mark, key.mark))
+
+            # There cannot be two simple keys one after another.
+            self.allow_simple_key = False
+
+        # It must be a part of a complex key.
+        else:
+            
+            # Block context needs additional checks.
+            # (Do we really need them? They will be caught by the parser
+            # anyway.)
+            if not self.flow_level:
+
+                # We are allowed to start a complex value if and only if
+                # we can start a simple key.
+                if not self.allow_simple_key:
+                    raise ScannerError(None, None,
+                            "mapping values are not allowed here",
+                            self.get_mark())
+
+            # If this value starts a new block mapping, we need to add
+            # BLOCK-MAPPING-START.  It will be detected as an error later by
+            # the parser.
+            if not self.flow_level:
+                if self.add_indent(self.column):
+                    mark = self.get_mark()
+                    self.tokens.append(BlockMappingStartToken(mark, mark))
+
+            # Simple keys are allowed after ':' in the block context.
+            self.allow_simple_key = not self.flow_level
+
+            # Reset possible simple key on the current level.
+            self.remove_possible_simple_key()
+
+        # Add VALUE.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(ValueToken(start_mark, end_mark))
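+    # For example (a sketch): while scanning 'foo: bar', the scalar 'foo' is
+    # first queued as a possible simple key; when ':' is reached, fetch_value
+    # inserts KEY (and, at a new indentation level, BLOCK-MAPPING-START)
+    # *before* that scalar in the token queue.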
+
+    def fetch_alias(self):
+
+        # ALIAS could be a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after ALIAS.
+        self.allow_simple_key = False
+
+        # Scan and add ALIAS.
+        self.tokens.append(self.scan_anchor(AliasToken))
+
+    def fetch_anchor(self):
+
+        # ANCHOR could start a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after ANCHOR.
+        self.allow_simple_key = False
+
+        # Scan and add ANCHOR.
+        self.tokens.append(self.scan_anchor(AnchorToken))
+
+    def fetch_tag(self):
+
+        # TAG could start a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after TAG.
+        self.allow_simple_key = False
+
+        # Scan and add TAG.
+        self.tokens.append(self.scan_tag())
+
+    def fetch_literal(self):
+        self.fetch_block_scalar(style='|')
+
+    def fetch_folded(self):
+        self.fetch_block_scalar(style='>')
+
+    def fetch_block_scalar(self, style):
+
+        # A simple key may follow a block scalar.
+        self.allow_simple_key = True
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Scan and add SCALAR.
+        self.tokens.append(self.scan_block_scalar(style))
+
+    def fetch_single(self):
+        self.fetch_flow_scalar(style='\'')
+
+    def fetch_double(self):
+        self.fetch_flow_scalar(style='"')
+
+    def fetch_flow_scalar(self, style):
+
+        # A flow scalar could be a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after flow scalars.
+        self.allow_simple_key = False
+
+        # Scan and add SCALAR.
+        self.tokens.append(self.scan_flow_scalar(style))
+
+    def fetch_plain(self):
+
+        # A plain scalar could be a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after plain scalars. But note that `scan_plain` will
+        # change this flag if the scan is finished at the beginning of the
+        # line.
+        self.allow_simple_key = False
+
+        # Scan and add SCALAR. May change `allow_simple_key`.
+        self.tokens.append(self.scan_plain())
+
+    # Checkers.
+
+    def check_directive(self):
+
+        # DIRECTIVE:        ^ '%' ...
+        # The '%' indicator is already checked.
+        if self.column == 0:
+            return True
+
+    def check_document_start(self):
+
+        # DOCUMENT-START:   ^ '---' (' '|'\n')
+        if self.column == 0:
+            if self.prefix(3) == u'---'  \
+                    and self.peek(3) in u'\0 \t\r\n\x85\u2028\u2029':
+                return True
+
+    def check_document_end(self):
+
+        # DOCUMENT-END:     ^ '...' (' '|'\n')
+        if self.column == 0:
+            if self.prefix(3) == u'...'  \
+                    and self.peek(3) in u'\0 \t\r\n\x85\u2028\u2029':
+                return True
+
+    def check_block_entry(self):
+
+        # BLOCK-ENTRY:      '-' (' '|'\n')
+        return self.peek(1) in u'\0 \t\r\n\x85\u2028\u2029'
+
+    def check_key(self):
+
+        # KEY(flow context):    '?'
+        if self.flow_level:
+            return True
+
+        # KEY(block context):   '?' (' '|'\n')
+        else:
+            return self.peek(1) in u'\0 \t\r\n\x85\u2028\u2029'
+
+    def check_value(self):
+
+        # VALUE(flow context):  ':'
+        if self.flow_level:
+            return True
+
+        # VALUE(block context): ':' (' '|'\n')
+        else:
+            return self.peek(1) in u'\0 \t\r\n\x85\u2028\u2029'
+
+    def check_plain(self):
+
+        # A plain scalar may start with any non-space character except:
+        #   '-', '?', ':', ',', '[', ']', '{', '}',
+        #   '#', '&', '*', '!', '|', '>', '\'', '\"',
+        #   '%', '@', '`'.
+        #
+        # It may also start with
+        #   '-', '?', ':'
+        # if it is followed by a non-space character.
+        #
+        # Note that we limit the last rule to the block context (except the
+        # '-' character) because we want the flow context to be space
+        # independent.
+        ch = self.peek()
+        return ch not in u'\0 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>\'\"%@`'  \
+                or (self.peek(1) not in u'\0 \t\r\n\x85\u2028\u2029'
+                        and (ch == u'-' or (not self.flow_level and ch in u'?:')))
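+        # For example (a sketch): '-1' starts a plain scalar because the '-'
+        # is followed by a non-space, while '- 1' starts a BLOCK-ENTRY; in
+        # the block context ':x' is also a valid plain scalar start.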
+
+    # Scanners.
+
+    def scan_to_next_token(self):
+        # We ignore spaces, line breaks and comments.
+        # If we find a line break in the block context, we set the flag
+        # `allow_simple_key` on.
+        # The byte order mark is stripped if it's the first character in the
+        # stream. We do not yet support BOM inside the stream as the
+        # specification requires. Any such mark will be considered part
+        # of the document.
+        #
+        # TODO: We need to make tab handling rules more sane. A good rule is
+        #   Tabs cannot precede tokens
+        #   BLOCK-SEQUENCE-START, BLOCK-MAPPING-START, BLOCK-END,
+        #   KEY(block), VALUE(block), BLOCK-ENTRY
+        # So the checking code is
+        #   if <TAB>:
+        #       self.allow_simple_key = False
+        # We also need to add the check for `allow_simple_key == True` to
+        # `unwind_indent` before issuing BLOCK-END.
+        # Scanners for block, flow, and plain scalars need to be modified.
+
+        if self.index == 0 and self.peek() == u'\uFEFF':
+            self.forward()
+        found = False
+        while not found:
+            while self.peek() == u' ':
+                self.forward()
+            if self.peek() == u'#':
+                while self.peek() not in u'\0\r\n\x85\u2028\u2029':
+                    self.forward()
+            if self.scan_line_break():
+                if not self.flow_level:
+                    self.allow_simple_key = True
+            else:
+                found = True
+
+    def scan_directive(self):
+        # See the specification for details.
+        start_mark = self.get_mark()
+        self.forward()
+        name = self.scan_directive_name(start_mark)
+        value = None
+        if name == u'YAML':
+            value = self.scan_yaml_directive_value(start_mark)
+            end_mark = self.get_mark()
+        elif name == u'TAG':
+            value = self.scan_tag_directive_value(start_mark)
+            end_mark = self.get_mark()
+        else:
+            end_mark = self.get_mark()
+            while self.peek() not in u'\0\r\n\x85\u2028\u2029':
+                self.forward()
+        self.scan_directive_ignored_line(start_mark)
+        return DirectiveToken(name, value, start_mark, end_mark)
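+    # For example (a sketch): the line '%YAML 1.1' is scanned into
+    # DirectiveToken(u'YAML', (1, 1), start_mark, end_mark), and
+    # '%TAG !e! tag:example.com,2000:' into
+    # DirectiveToken(u'TAG', (u'!e!', u'tag:example.com,2000:'), ...).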
+
+    def scan_directive_name(self, start_mark):
+        # See the specification for details.
+        length = 0
+        ch = self.peek(length)
+        while u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'    \
+                or ch in u'-_':
+            length += 1
+            ch = self.peek(length)
+        if not length:
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch.encode('utf-8'), self.get_mark())
+        value = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch not in u'\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch.encode('utf-8'), self.get_mark())
+        return value
+
+    def scan_yaml_directive_value(self, start_mark):
+        # See the specification for details.
+        while self.peek() == u' ':
+            self.forward()
+        major = self.scan_yaml_directive_number(start_mark)
+        if self.peek() != '.':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a digit or '.', but found %r"
+                    % self.peek().encode('utf-8'),
+                    self.get_mark())
+        self.forward()
+        minor = self.scan_yaml_directive_number(start_mark)
+        if self.peek() not in u'\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a digit or ' ', but found %r"
+                    % self.peek().encode('utf-8'),
+                    self.get_mark())
+        return (major, minor)
+
+    def scan_yaml_directive_number(self, start_mark):
+        # See the specification for details.
+        ch = self.peek()
+        if not (u'0' <= ch <= u'9'):
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a digit, but found %r" % ch.encode('utf-8'),
+                    self.get_mark())
+        length = 0
+        while u'0' <= self.peek(length) <= u'9':
+            length += 1
+        value = int(self.prefix(length))
+        self.forward(length)
+        return value
+
+    def scan_tag_directive_value(self, start_mark):
+        # See the specification for details.
+        while self.peek() == u' ':
+            self.forward()
+        handle = self.scan_tag_directive_handle(start_mark)
+        while self.peek() == u' ':
+            self.forward()
+        prefix = self.scan_tag_directive_prefix(start_mark)
+        return (handle, prefix)
+
+    def scan_tag_directive_handle(self, start_mark):
+        # See the specification for details.
+        value = self.scan_tag_handle('directive', start_mark)
+        ch = self.peek()
+        if ch != u' ':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected ' ', but found %r" % ch.encode('utf-8'),
+                    self.get_mark())
+        return value
+
+    def scan_tag_directive_prefix(self, start_mark):
+        # See the specification for details.
+        value = self.scan_tag_uri('directive', start_mark)
+        ch = self.peek()
+        if ch not in u'\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected ' ', but found %r" % ch.encode('utf-8'),
+                    self.get_mark())
+        return value
+
+    def scan_directive_ignored_line(self, start_mark):
+        # See the specification for details.
+        while self.peek() == u' ':
+            self.forward()
+        if self.peek() == u'#':
+            while self.peek() not in u'\0\r\n\x85\u2028\u2029':
+                self.forward()
+        ch = self.peek()
+        if ch not in u'\0\r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a comment or a line break, but found %r"
+                        % ch.encode('utf-8'), self.get_mark())
+        self.scan_line_break()
+
+    def scan_anchor(self, TokenClass):
+        # The specification does not restrict characters for anchors and
+        # aliases. This may lead to problems, for instance, the document:
+        #   [ *alias, value ]
+        # can be interpreted in two ways, as
+        #   [ "value" ]
+        # and
+        #   [ *alias , "value" ]
+        # Therefore we restrict aliases to numbers and ASCII letters.
+        start_mark = self.get_mark()
+        indicator = self.peek()
+        if indicator == u'*':
+            name = 'alias'
+        else:
+            name = 'anchor'
+        self.forward()
+        length = 0
+        ch = self.peek(length)
+        while u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'    \
+                or ch in u'-_':
+            length += 1
+            ch = self.peek(length)
+        if not length:
+            raise ScannerError("while scanning an %s" % name, start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch.encode('utf-8'), self.get_mark())
+        value = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch not in u'\0 \t\r\n\x85\u2028\u2029?:,]}%@`':
+            raise ScannerError("while scanning an %s" % name, start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch.encode('utf-8'), self.get_mark())
+        end_mark = self.get_mark()
+        return TokenClass(value, start_mark, end_mark)
+
+    def scan_tag(self):
+        # See the specification for details.
+        start_mark = self.get_mark()
+        ch = self.peek(1)
+        if ch == u'<':
+            handle = None
+            self.forward(2)
+            suffix = self.scan_tag_uri('tag', start_mark)
+            if self.peek() != u'>':
+                raise ScannerError("while parsing a tag", start_mark,
+                        "expected '>', but found %r" % self.peek().encode('utf-8'),
+                        self.get_mark())
+            self.forward()
+        elif ch in u'\0 \t\r\n\x85\u2028\u2029':
+            handle = None
+            suffix = u'!'
+            self.forward()
+        else:
+            length = 1
+            use_handle = False
+            while ch not in u'\0 \r\n\x85\u2028\u2029':
+                if ch == u'!':
+                    use_handle = True
+                    break
+                length += 1
+                ch = self.peek(length)
+            if use_handle:
+                handle = self.scan_tag_handle('tag', start_mark)
+            else:
+                handle = u'!'
+                self.forward()
+                self.forward()
+            suffix = self.scan_tag_uri('tag', start_mark)
+        ch = self.peek()
+        if ch not in u'\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a tag", start_mark,
+                    "expected ' ', but found %r" % ch.encode('utf-8'),
+                    self.get_mark())
+        value = (handle, suffix)
+        end_mark = self.get_mark()
+        return TagToken(value, start_mark, end_mark)
+
+    def scan_block_scalar(self, style):
+        # See the specification for details.
+
+        if style == '>':
+            folded = True
+        else:
+            folded = False
+
+        chunks = []
+        start_mark = self.get_mark()
+
+        # Scan the header.
+        self.forward()
+        chomping, increment = self.scan_block_scalar_indicators(start_mark)
+        self.scan_block_scalar_ignored_line(start_mark)
+
+        # Determine the indentation level and go to the first non-empty line.
+        min_indent = self.indent+1
+        if min_indent < 1:
+            min_indent = 1
+        if increment is None:
+            breaks, max_indent, end_mark = self.scan_block_scalar_indentation()
+            indent = max(min_indent, max_indent)
+        else:
+            indent = min_indent+increment-1
+            breaks, end_mark = self.scan_block_scalar_breaks(indent)
+        line_break = u''
+
+        # Scan the inner part of the block scalar.
+        while self.column == indent and self.peek() != u'\0':
+            chunks.extend(breaks)
+            leading_non_space = self.peek() not in u' \t'
+            length = 0
+            while self.peek(length) not in u'\0\r\n\x85\u2028\u2029':
+                length += 1
+            chunks.append(self.prefix(length))
+            self.forward(length)
+            line_break = self.scan_line_break()
+            breaks, end_mark = self.scan_block_scalar_breaks(indent)
+            if self.column == indent and self.peek() != u'\0':
+
+                # Unfortunately, folding rules are ambiguous.
+                #
+                # This is the folding according to the specification:
+                
+                if folded and line_break == u'\n'   \
+                        and leading_non_space and self.peek() not in u' \t':
+                    if not breaks:
+                        chunks.append(u' ')
+                else:
+                    chunks.append(line_break)
+                
+                # This is Clark Evans's interpretation (also in the spec
+                # examples):
+                #
+                #if folded and line_break == u'\n':
+                #    if not breaks:
+                #        if self.peek() not in ' \t':
+                #            chunks.append(u' ')
+                #        else:
+                #            chunks.append(line_break)
+                #else:
+                #    chunks.append(line_break)
+            else:
+                break
+
+        # Chomp the tail.
+        if chomping is not False:
+            chunks.append(line_break)
+        if chomping is True:
+            chunks.extend(breaks)
+
+        # We are done.
+        return ScalarToken(u''.join(chunks), False, start_mark, end_mark,
+                style)
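+    # For example (a sketch), the literal block scalar
+    #   |
+    #     foo
+    # scans to u'foo\n': with no chomping indicator the final line break is
+    # kept (clipping), '|-' would strip it, and '|+' would also keep any
+    # trailing empty lines.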
+
+    def scan_block_scalar_indicators(self, start_mark):
+        # See the specification for details.
+        chomping = None
+        increment = None
+        ch = self.peek()
+        if ch in u'+-':
+            if ch == '+':
+                chomping = True
+            else:
+                chomping = False
+            self.forward()
+            ch = self.peek()
+            if ch in u'0123456789':
+                increment = int(ch)
+                if increment == 0:
+                    raise ScannerError("while scanning a block scalar", start_mark,
+                            "expected indentation indicator in the range 1-9, but found 0",
+                            self.get_mark())
+                self.forward()
+        elif ch in u'0123456789':
+            increment = int(ch)
+            if increment == 0:
+                raise ScannerError("while scanning a block scalar", start_mark,
+                        "expected indentation indicator in the range 1-9, but found 0",
+                        self.get_mark())
+            self.forward()
+            ch = self.peek()
+            if ch in u'+-':
+                if ch == '+':
+                    chomping = True
+                else:
+                    chomping = False
+                self.forward()
+        ch = self.peek()
+        if ch not in u'\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a block scalar", start_mark,
+                    "expected chomping or indentation indicators, but found %r"
+                        % ch.encode('utf-8'), self.get_mark())
+        return chomping, increment
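+    # For example (a sketch): after '|' the header '2-' yields
+    # (chomping=False, increment=2), '+' alone yields (True, None), and an
+    # empty header yields (None, None).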
+
+    def scan_block_scalar_ignored_line(self, start_mark):
+        # See the specification for details.
+        while self.peek() == u' ':
+            self.forward()
+        if self.peek() == u'#':
+            while self.peek() not in u'\0\r\n\x85\u2028\u2029':
+                self.forward()
+        ch = self.peek()
+        if ch not in u'\0\r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a block scalar", start_mark,
+                    "expected a comment or a line break, but found %r"
+                        % ch.encode('utf-8'), self.get_mark())
+        self.scan_line_break()
+
+    def scan_block_scalar_indentation(self):
+        # See the specification for details.
+        chunks = []
+        max_indent = 0
+        end_mark = self.get_mark()
+        while self.peek() in u' \r\n\x85\u2028\u2029':
+            if self.peek() != u' ':
+                chunks.append(self.scan_line_break())
+                end_mark = self.get_mark()
+            else:
+                self.forward()
+                if self.column > max_indent:
+                    max_indent = self.column
+        return chunks, max_indent, end_mark
+
+    def scan_block_scalar_breaks(self, indent):
+        # See the specification for details.
+        chunks = []
+        end_mark = self.get_mark()
+        while self.column < indent and self.peek() == u' ':
+            self.forward()
+        while self.peek() in u'\r\n\x85\u2028\u2029':
+            chunks.append(self.scan_line_break())
+            end_mark = self.get_mark()
+            while self.column < indent and self.peek() == u' ':
+                self.forward()
+        return chunks, end_mark
+
+    def scan_flow_scalar(self, style):
+        # See the specification for details.
+        # Note that we loosen the indentation rules for quoted scalars. Quoted
+        # scalars don't need to adhere to indentation because " and ' clearly
+        # mark their beginning and end. Therefore we are less restrictive
+        # than the specification requires. We only need to check that
+        # document separators are not included in scalars.
+        if style == '"':
+            double = True
+        else:
+            double = False
+        chunks = []
+        start_mark = self.get_mark()
+        quote = self.peek()
+        self.forward()
+        chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
+        while self.peek() != quote:
+            chunks.extend(self.scan_flow_scalar_spaces(double, start_mark))
+            chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
+        self.forward()
+        end_mark = self.get_mark()
+        return ScalarToken(u''.join(chunks), False, start_mark, end_mark,
+                style)
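+    # For example (a sketch): the single-quoted scalar 'it''s' scans to
+    # u"it's" (the doubled quote is the only escape in single-quoted style),
+    # while double-quoted scalars support the full escape tables below.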
+
+    ESCAPE_REPLACEMENTS = {
+        u'0':   u'\0',
+        u'a':   u'\x07',
+        u'b':   u'\x08',
+        u't':   u'\x09',
+        u'\t':  u'\x09',
+        u'n':   u'\x0A',
+        u'v':   u'\x0B',
+        u'f':   u'\x0C',
+        u'r':   u'\x0D',
+        u'e':   u'\x1B',
+        u' ':   u'\x20',
+        u'\"':  u'\"',
+        u'\\':  u'\\',
+        u'N':   u'\x85',
+        u'_':   u'\xA0',
+        u'L':   u'\u2028',
+        u'P':   u'\u2029',
+    }
+
+    ESCAPE_CODES = {
+        u'x':   2,
+        u'u':   4,
+        u'U':   8,
+    }
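+    # For example (a sketch): in a double-quoted scalar the escape '\x41'
+    # consumes two hexadecimal digits and yields u'A', and '\u2028' consumes
+    # four digits and yields the Unicode line separator (handled in
+    # scan_flow_scalar_non_spaces below).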
+
+    def scan_flow_scalar_non_spaces(self, double, start_mark):
+        # See the specification for details.
+        chunks = []
+        while True:
+            length = 0
+            while self.peek(length) not in u'\'\"\\\0 \t\r\n\x85\u2028\u2029':
+                length += 1
+            if length:
+                chunks.append(self.prefix(length))
+                self.forward(length)
+            ch = self.peek()
+            if not double and ch == u'\'' and self.peek(1) == u'\'':
+                chunks.append(u'\'')
+                self.forward(2)
+            elif (double and ch == u'\'') or (not double and ch in u'\"\\'):
+                chunks.append(ch)
+                self.forward()
+            elif double and ch == u'\\':
+                self.forward()
+                ch = self.peek()
+                if ch in self.ESCAPE_REPLACEMENTS:
+                    chunks.append(self.ESCAPE_REPLACEMENTS[ch])
+                    self.forward()
+                elif ch in self.ESCAPE_CODES:
+                    length = self.ESCAPE_CODES[ch]
+                    self.forward()
+                    for k in range(length):
+                        if self.peek(k) not in u'0123456789ABCDEFabcdef':
+                            raise ScannerError("while scanning a double-quoted scalar", start_mark,
+                                    "expected escape sequence of %d hexdecimal numbers, but found %r" %
+                                        (length, self.peek(k).encode('utf-8')), self.get_mark())
+                    code = int(self.prefix(length), 16)
+                    chunks.append(unichr(code))
+                    self.forward(length)
+                elif ch in u'\r\n\x85\u2028\u2029':
+                    self.scan_line_break()
+                    chunks.extend(self.scan_flow_scalar_breaks(double, start_mark))
+                else:
+                    raise ScannerError("while scanning a double-quoted scalar", start_mark,
+                            "found unknown escape character %r" % ch.encode('utf-8'), self.get_mark())
+            else:
+                return chunks
+
+    def scan_flow_scalar_spaces(self, double, start_mark):
+        # See the specification for details.
+        chunks = []
+        length = 0
+        while self.peek(length) in u' \t':
+            length += 1
+        whitespaces = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch == u'\0':
+            raise ScannerError("while scanning a quoted scalar", start_mark,
+                    "found unexpected end of stream", self.get_mark())
+        elif ch in u'\r\n\x85\u2028\u2029':
+            line_break = self.scan_line_break()
+            breaks = self.scan_flow_scalar_breaks(double, start_mark)
+            if line_break != u'\n':
+                chunks.append(line_break)
+            elif not breaks:
+                chunks.append(u' ')
+            chunks.extend(breaks)
+        else:
+            chunks.append(whitespaces)
+        return chunks
+
+    def scan_flow_scalar_breaks(self, double, start_mark):
+        # See the specification for details.
+        chunks = []
+        while True:
+            # Instead of checking indentation, we check for document
+            # separators.
+            prefix = self.prefix(3)
+            if (prefix == u'---' or prefix == u'...')   \
+                    and self.peek(3) in u'\0 \t\r\n\x85\u2028\u2029':
+                raise ScannerError("while scanning a quoted scalar", start_mark,
+                        "found unexpected document separator", self.get_mark())
+            while self.peek() in u' \t':
+                self.forward()
+            if self.peek() in u'\r\n\x85\u2028\u2029':
+                chunks.append(self.scan_line_break())
+            else:
+                return chunks
+
+    def scan_plain(self):
+        # See the specification for details.
+        # We add an additional restriction for the flow context:
+        #   plain scalars in the flow context cannot contain ',', ':' and '?'.
+        # We also keep track of the `allow_simple_key` flag here.
+        # Indentation rules are loosened for the flow context.
+        chunks = []
+        start_mark = self.get_mark()
+        end_mark = start_mark
+        indent = self.indent+1
+        # We allow zero indentation for scalars, but then we need to check for
+        # document separators at the beginning of the line.
+        #if indent == 0:
+        #    indent = 1
+        spaces = []
+        while True:
+            length = 0
+            if self.peek() == u'#':
+                break
+            while True:
+                ch = self.peek(length)
+                if ch in u'\0 \t\r\n\x85\u2028\u2029'   \
+                        or (not self.flow_level and ch == u':' and
+                                self.peek(length+1) in u'\0 \t\r\n\x85\u2028\u2029') \
+                        or (self.flow_level and ch in u',:?[]{}'):
+                    break
+                length += 1
+            # It's not clear what we should do with ':' in the flow context.
+            if (self.flow_level and ch == u':'
+                    and self.peek(length+1) not in u'\0 \t\r\n\x85\u2028\u2029,[]{}'):
+                self.forward(length)
+                raise ScannerError("while scanning a plain scalar", start_mark,
+                    "found unexpected ':'", self.get_mark(),
+                    "Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.")
+            if length == 0:
+                break
+            self.allow_simple_key = False
+            chunks.extend(spaces)
+            chunks.append(self.prefix(length))
+            self.forward(length)
+            end_mark = self.get_mark()
+            spaces = self.scan_plain_spaces(indent, start_mark)
+            if not spaces or self.peek() == u'#' \
+                    or (not self.flow_level and self.column < indent):
+                break
+        return ScalarToken(u''.join(chunks), True, start_mark, end_mark)
+
+    def scan_plain_spaces(self, indent, start_mark):
+        # See the specification for details.
+        # The specification is really confusing about tabs in plain scalars.
+        # We just forbid them completely. Do not use tabs in YAML!
+        chunks = []
+        length = 0
+        while self.peek(length) in u' ':
+            length += 1
+        whitespaces = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch in u'\r\n\x85\u2028\u2029':
+            line_break = self.scan_line_break()
+            self.allow_simple_key = True
+            prefix = self.prefix(3)
+            if (prefix == u'---' or prefix == u'...')   \
+                    and self.peek(3) in u'\0 \t\r\n\x85\u2028\u2029':
+                return
+            breaks = []
+            while self.peek() in u' \r\n\x85\u2028\u2029':
+                if self.peek() == ' ':
+                    self.forward()
+                else:
+                    breaks.append(self.scan_line_break())
+                    prefix = self.prefix(3)
+                    if (prefix == u'---' or prefix == u'...')   \
+                            and self.peek(3) in u'\0 \t\r\n\x85\u2028\u2029':
+                        return
+            if line_break != u'\n':
+                chunks.append(line_break)
+            elif not breaks:
+                chunks.append(u' ')
+            chunks.extend(breaks)
+        elif whitespaces:
+            chunks.append(whitespaces)
+        return chunks
+
+    def scan_tag_handle(self, name, start_mark):
+        # See the specification for details.
+        # For some strange reason, the specification does not allow '_' in
+        # tag handles. I have allowed it anyway.
+        ch = self.peek()
+        if ch != u'!':
+            raise ScannerError("while scanning a %s" % name, start_mark,
+                    "expected '!', but found %r" % ch.encode('utf-8'),
+                    self.get_mark())
+        length = 1
+        ch = self.peek(length)
+        if ch != u' ':
+            while u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'    \
+                    or ch in u'-_':
+                length += 1
+                ch = self.peek(length)
+            if ch != u'!':
+                self.forward(length)
+                raise ScannerError("while scanning a %s" % name, start_mark,
+                        "expected '!', but found %r" % ch.encode('utf-8'),
+                        self.get_mark())
+            length += 1
+        value = self.prefix(length)
+        self.forward(length)
+        return value
+
+    def scan_tag_uri(self, name, start_mark):
+        # See the specification for details.
+        # Note: we do not check if the URI is well-formed.
+        chunks = []
+        length = 0
+        ch = self.peek(length)
+        while u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z'    \
+                or ch in u'-;/?:@&=+$,_.!~*\'()[]%':
+            if ch == u'%':
+                chunks.append(self.prefix(length))
+                self.forward(length)
+                length = 0
+                chunks.append(self.scan_uri_escapes(name, start_mark))
+            else:
+                length += 1
+            ch = self.peek(length)
+        if length:
+            chunks.append(self.prefix(length))
+            self.forward(length)
+            length = 0
+        if not chunks:
+            raise ScannerError("while parsing a %s" % name, start_mark,
+                    "expected URI, but found %r" % ch.encode('utf-8'),
+                    self.get_mark())
+        return u''.join(chunks)
+
+    def scan_uri_escapes(self, name, start_mark):
+        # See the specification for details.
+        bytes = []
+        mark = self.get_mark()
+        while self.peek() == u'%':
+            self.forward()
+            for k in range(2):
+                if self.peek(k) not in u'0123456789ABCDEFabcdef':
+                    raise ScannerError("while scanning a %s" % name, start_mark,
+                            "expected URI escape sequence of 2 hexdecimal numbers, but found %r" %
+                                (self.peek(k).encode('utf-8')), self.get_mark())
+            bytes.append(chr(int(self.prefix(2), 16)))
+            self.forward(2)
+        try:
+            value = unicode(''.join(bytes), 'utf-8')
+        except UnicodeDecodeError, exc:
+            raise ScannerError("while scanning a %s" % name, start_mark, str(exc), mark)
+        return value
+
+    def scan_line_break(self):
+        # Transforms:
+        #   '\r\n'      :   '\n'
+        #   '\r'        :   '\n'
+        #   '\n'        :   '\n'
+        #   '\x85'      :   '\n'
+        #   '\u2028'    :   '\u2028'
+        #   '\u2029'    :   '\u2029'
+        #   default     :   ''
+        ch = self.peek()
+        if ch in u'\r\n\x85':
+            if self.prefix(2) == u'\r\n':
+                self.forward(2)
+            else:
+                self.forward()
+            return u'\n'
+        elif ch in u'\u2028\u2029':
+            self.forward()
+            return ch
+        return u''
+
+#try:
+#    import psyco
+#    psyco.bind(Scanner)
+#except ImportError:
+#    pass
+
diff --git a/lib/yaml/serializer.py b/lib/yaml/serializer.py
new file mode 100644
index 0000000..0bf1e96
--- /dev/null
+++ b/lib/yaml/serializer.py
@@ -0,0 +1,111 @@
+
+__all__ = ['Serializer', 'SerializerError']
+
+from error import YAMLError
+from events import *
+from nodes import *
+
+class SerializerError(YAMLError):
+    pass
+
+class Serializer(object):
+
+    ANCHOR_TEMPLATE = u'id%03d'
+
+    def __init__(self, encoding=None,
+            explicit_start=None, explicit_end=None, version=None, tags=None):
+        self.use_encoding = encoding
+        self.use_explicit_start = explicit_start
+        self.use_explicit_end = explicit_end
+        self.use_version = version
+        self.use_tags = tags
+        self.serialized_nodes = {}
+        self.anchors = {}
+        self.last_anchor_id = 0
+        self.closed = None
+
+    def open(self):
+        if self.closed is None:
+            self.emit(StreamStartEvent(encoding=self.use_encoding))
+            self.closed = False
+        elif self.closed:
+            raise SerializerError("serializer is closed")
+        else:
+            raise SerializerError("serializer is already opened")
+
+    def close(self):
+        if self.closed is None:
+            raise SerializerError("serializer is not opened")
+        elif not self.closed:
+            self.emit(StreamEndEvent())
+            self.closed = True
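+
+    # Typical usage (a sketch): open() emits STREAM-START once, each
+    # serialize(node) call emits the events for one document, and close()
+    # emits STREAM-END:
+    #   serializer.open()
+    #   serializer.serialize(node)
+    #   serializer.close()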
+
+    #def __del__(self):
+    #    self.close()
+
+    def serialize(self, node):
+        if self.closed is None:
+            raise SerializerError("serializer is not opened")
+        elif self.closed:
+            raise SerializerError("serializer is closed")
+        self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
+            version=self.use_version, tags=self.use_tags))
+        self.anchor_node(node)
+        self.serialize_node(node, None, None)
+        self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
+        self.serialized_nodes = {}
+        self.anchors = {}
+        self.last_anchor_id = 0
+
+    def anchor_node(self, node):
+        if node in self.anchors:
+            if self.anchors[node] is None:
+                self.anchors[node] = self.generate_anchor(node)
+        else:
+            self.anchors[node] = None
+            if isinstance(node, SequenceNode):
+                for item in node.value:
+                    self.anchor_node(item)
+            elif isinstance(node, MappingNode):
+                for key, value in node.value:
+                    self.anchor_node(key)
+                    self.anchor_node(value)
+
+    def generate_anchor(self, node):
+        self.last_anchor_id += 1
+        return self.ANCHOR_TEMPLATE % self.last_anchor_id
+
+    def serialize_node(self, node, parent, index):
+        alias = self.anchors[node]
+        if node in self.serialized_nodes:
+            self.emit(AliasEvent(alias))
+        else:
+            self.serialized_nodes[node] = True
+            self.descend_resolver(parent, index)
+            if isinstance(node, ScalarNode):
+                detected_tag = self.resolve(ScalarNode, node.value, (True, False))
+                default_tag = self.resolve(ScalarNode, node.value, (False, True))
+                implicit = (node.tag == detected_tag), (node.tag == default_tag)
+                self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
+                    style=node.style))
+            elif isinstance(node, SequenceNode):
+                implicit = (node.tag
+                            == self.resolve(SequenceNode, node.value, True))
+                self.emit(SequenceStartEvent(alias, node.tag, implicit,
+                    flow_style=node.flow_style))
+                index = 0
+                for item in node.value:
+                    self.serialize_node(item, node, index)
+                    index += 1
+                self.emit(SequenceEndEvent())
+            elif isinstance(node, MappingNode):
+                implicit = (node.tag
+                            == self.resolve(MappingNode, node.value, True))
+                self.emit(MappingStartEvent(alias, node.tag, implicit,
+                    flow_style=node.flow_style))
+                for key, value in node.value:
+                    self.serialize_node(key, node, None)
+                    self.serialize_node(value, node, key)
+                self.emit(MappingEndEvent())
+            self.ascend_resolver()
+
diff --git a/lib/yaml/tokens.py b/lib/yaml/tokens.py
new file mode 100644
index 0000000..4d0b48a
--- /dev/null
+++ b/lib/yaml/tokens.py
@@ -0,0 +1,104 @@
+
+class Token(object):
+    def __init__(self, start_mark, end_mark):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+    def __repr__(self):
+        attributes = [key for key in self.__dict__
+                if not key.endswith('_mark')]
+        attributes.sort()
+        arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
+                for key in attributes])
+        return '%s(%s)' % (self.__class__.__name__, arguments)
+
+#class BOMToken(Token):
+#    id = '<byte order mark>'
+
+class DirectiveToken(Token):
+    id = '<directive>'
+    def __init__(self, name, value, start_mark, end_mark):
+        self.name = name
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class DocumentStartToken(Token):
+    id = '<document start>'
+
+class DocumentEndToken(Token):
+    id = '<document end>'
+
+class StreamStartToken(Token):
+    id = '<stream start>'
+    def __init__(self, start_mark=None, end_mark=None,
+            encoding=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.encoding = encoding
+
+class StreamEndToken(Token):
+    id = '<stream end>'
+
+class BlockSequenceStartToken(Token):
+    id = '<block sequence start>'
+
+class BlockMappingStartToken(Token):
+    id = '<block mapping start>'
+
+class BlockEndToken(Token):
+    id = '<block end>'
+
+class FlowSequenceStartToken(Token):
+    id = '['
+
+class FlowMappingStartToken(Token):
+    id = '{'
+
+class FlowSequenceEndToken(Token):
+    id = ']'
+
+class FlowMappingEndToken(Token):
+    id = '}'
+
+class KeyToken(Token):
+    id = '?'
+
+class ValueToken(Token):
+    id = ':'
+
+class BlockEntryToken(Token):
+    id = '-'
+
+class FlowEntryToken(Token):
+    id = ','
+
+class AliasToken(Token):
+    id = '<alias>'
+    def __init__(self, value, start_mark, end_mark):
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class AnchorToken(Token):
+    id = '<anchor>'
+    def __init__(self, value, start_mark, end_mark):
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class TagToken(Token):
+    id = '<tag>'
+    def __init__(self, value, start_mark, end_mark):
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class ScalarToken(Token):
+    id = '<scalar>'
+    def __init__(self, value, plain, start_mark, end_mark, style=None):
+        self.value = value
+        self.plain = plain
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.style = style
+
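A short sketch of how these token classes surface through the scan()
helper; the reprs shown are those produced by Token.__repr__ above:

    import yaml

    for token in yaml.scan("- item"):
        print(token)
    # StreamStartToken(encoding=None)
    # BlockSequenceStartToken()
    # BlockEntryToken()
    # ScalarToken(plain=True, style=None, value='item')
    # BlockEndToken()
    # StreamEndToken()
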
diff --git a/lib3/yaml/__init__.py b/lib3/yaml/__init__.py
new file mode 100644
index 0000000..a5e20f9
--- /dev/null
+++ b/lib3/yaml/__init__.py
@@ -0,0 +1,312 @@
+
+from .error import *
+
+from .tokens import *
+from .events import *
+from .nodes import *
+
+from .loader import *
+from .dumper import *
+
+__version__ = '3.11'
+try:
+    from .cyaml import *
+    __with_libyaml__ = True
+except ImportError:
+    __with_libyaml__ = False
+
+import io
+
+def scan(stream, Loader=Loader):
+    """
+    Scan a YAML stream and produce scanning tokens.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_token():
+            yield loader.get_token()
+    finally:
+        loader.dispose()
+
+def parse(stream, Loader=Loader):
+    """
+    Parse a YAML stream and produce parsing events.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_event():
+            yield loader.get_event()
+    finally:
+        loader.dispose()
+
+def compose(stream, Loader=Loader):
+    """
+    Parse the first YAML document in a stream
+    and produce the corresponding representation tree.
+    """
+    loader = Loader(stream)
+    try:
+        return loader.get_single_node()
+    finally:
+        loader.dispose()
+
+def compose_all(stream, Loader=Loader):
+    """
+    Parse all YAML documents in a stream
+    and produce corresponding representation trees.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_node():
+            yield loader.get_node()
+    finally:
+        loader.dispose()
+
+def load(stream, Loader=Loader):
+    """
+    Parse the first YAML document in a stream
+    and produce the corresponding Python object.
+    """
+    loader = Loader(stream)
+    try:
+        return loader.get_single_data()
+    finally:
+        loader.dispose()
+
+def load_all(stream, Loader=Loader):
+    """
+    Parse all YAML documents in a stream
+    and produce corresponding Python objects.
+    """
+    loader = Loader(stream)
+    try:
+        while loader.check_data():
+            yield loader.get_data()
+    finally:
+        loader.dispose()
+
+def safe_load(stream):
+    """
+    Parse the first YAML document in a stream
+    and produce the corresponding Python object.
+    Resolve only basic YAML tags.
+    """
+    return load(stream, SafeLoader)
+
+def safe_load_all(stream):
+    """
+    Parse all YAML documents in a stream
+    and produce corresponding Python objects.
+    Resolve only basic YAML tags.
+    """
+    return load_all(stream, SafeLoader)
+
+def emit(events, stream=None, Dumper=Dumper,
+        canonical=None, indent=None, width=None,
+        allow_unicode=None, line_break=None):
+    """
+    Emit YAML parsing events into a stream.
+    If stream is None, return the produced string instead.
+    """
+    getvalue = None
+    if stream is None:
+        stream = io.StringIO()
+        getvalue = stream.getvalue
+    dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
+            allow_unicode=allow_unicode, line_break=line_break)
+    try:
+        for event in events:
+            dumper.emit(event)
+    finally:
+        dumper.dispose()
+    if getvalue:
+        return getvalue()
+
+def serialize_all(nodes, stream=None, Dumper=Dumper,
+        canonical=None, indent=None, width=None,
+        allow_unicode=None, line_break=None,
+        encoding=None, explicit_start=None, explicit_end=None,
+        version=None, tags=None):
+    """
+    Serialize a sequence of representation trees into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    getvalue = None
+    if stream is None:
+        if encoding is None:
+            stream = io.StringIO()
+        else:
+            stream = io.BytesIO()
+        getvalue = stream.getvalue
+    dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
+            allow_unicode=allow_unicode, line_break=line_break,
+            encoding=encoding, version=version, tags=tags,
+            explicit_start=explicit_start, explicit_end=explicit_end)
+    try:
+        dumper.open()
+        for node in nodes:
+            dumper.serialize(node)
+        dumper.close()
+    finally:
+        dumper.dispose()
+    if getvalue:
+        return getvalue()
+
+def serialize(node, stream=None, Dumper=Dumper, **kwds):
+    """
+    Serialize a representation tree into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    return serialize_all([node], stream, Dumper=Dumper, **kwds)
+
+def dump_all(documents, stream=None, Dumper=Dumper,
+        default_style=None, default_flow_style=None,
+        canonical=None, indent=None, width=None,
+        allow_unicode=None, line_break=None,
+        encoding=None, explicit_start=None, explicit_end=None,
+        version=None, tags=None):
+    """
+    Serialize a sequence of Python objects into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    getvalue = None
+    if stream is None:
+        if encoding is None:
+            stream = io.StringIO()
+        else:
+            stream = io.BytesIO()
+        getvalue = stream.getvalue
+    dumper = Dumper(stream, default_style=default_style,
+            default_flow_style=default_flow_style,
+            canonical=canonical, indent=indent, width=width,
+            allow_unicode=allow_unicode, line_break=line_break,
+            encoding=encoding, version=version, tags=tags,
+            explicit_start=explicit_start, explicit_end=explicit_end)
+    try:
+        dumper.open()
+        for data in documents:
+            dumper.represent(data)
+        dumper.close()
+    finally:
+        dumper.dispose()
+    if getvalue:
+        return getvalue()
+
+def dump(data, stream=None, Dumper=Dumper, **kwds):
+    """
+    Serialize a Python object into a YAML stream.
+    If stream is None, return the produced string instead.
+    """
+    return dump_all([data], stream, Dumper=Dumper, **kwds)
+
+def safe_dump_all(documents, stream=None, **kwds):
+    """
+    Serialize a sequence of Python objects into a YAML stream.
+    Produce only basic YAML tags.
+    If stream is None, return the produced string instead.
+    """
+    return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
+
+def safe_dump(data, stream=None, **kwds):
+    """
+    Serialize a Python object into a YAML stream.
+    Produce only basic YAML tags.
+    If stream is None, return the produced string instead.
+    """
+    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
+
+def add_implicit_resolver(tag, regexp, first=None,
+        Loader=Loader, Dumper=Dumper):
+    """
+    Add an implicit scalar detector.
+    If an implicit scalar value matches the given regexp,
+    the corresponding tag is assigned to the scalar.
+    first is a sequence of possible initial characters or None.
+    """
+    Loader.add_implicit_resolver(tag, regexp, first)
+    Dumper.add_implicit_resolver(tag, regexp, first)
+
+def add_path_resolver(tag, path, kind=None, Loader=Loader, Dumper=Dumper):
+    """
+    Add a path based resolver for the given tag.
+    A path is a list of keys that forms a path
+    to a node in the representation tree.
+    Keys can be string values, integers, or None.
+    """
+    Loader.add_path_resolver(tag, path, kind)
+    Dumper.add_path_resolver(tag, path, kind)
+
+def add_constructor(tag, constructor, Loader=Loader):
+    """
+    Add a constructor for the given tag.
+    Constructor is a function that accepts a Loader instance
+    and a node object and produces the corresponding Python object.
+    """
+    Loader.add_constructor(tag, constructor)
+
+def add_multi_constructor(tag_prefix, multi_constructor, Loader=Loader):
+    """
+    Add a multi-constructor for the given tag prefix.
+    Multi-constructor is called for a node if its tag starts with tag_prefix.
+    Multi-constructor accepts a Loader instance, a tag suffix,
+    and a node object and produces the corresponding Python object.
+    """
+    Loader.add_multi_constructor(tag_prefix, multi_constructor)
+
+def add_representer(data_type, representer, Dumper=Dumper):
+    """
+    Add a representer for the given type.
+    Representer is a function accepting a Dumper instance
+    and an instance of the given data type
+    and producing the corresponding representation node.
+    """
+    Dumper.add_representer(data_type, representer)
+
+def add_multi_representer(data_type, multi_representer, Dumper=Dumper):
+    """
+    Add a multi-representer for the given type.
+    Multi-representer is a function accepting a Dumper instance
+    and an instance of the given data type or subtype
+    and producing the corresponding representation node.
+    """
+    Dumper.add_multi_representer(data_type, multi_representer)
+
+class YAMLObjectMetaclass(type):
+    """
+    The metaclass for YAMLObject.
+    """
+    def __init__(cls, name, bases, kwds):
+        super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
+        if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
+            cls.yaml_loader.add_constructor(cls.yaml_tag, cls.from_yaml)
+            cls.yaml_dumper.add_representer(cls, cls.to_yaml)
+
+class YAMLObject(metaclass=YAMLObjectMetaclass):
+    """
+    An object that can dump itself to a YAML stream
+    and load itself from a YAML stream.
+    """
+
+    __slots__ = ()  # no direct instantiation, so allow immutable subclasses
+
+    yaml_loader = Loader
+    yaml_dumper = Dumper
+
+    yaml_tag = None
+    yaml_flow_style = None
+
+    @classmethod
+    def from_yaml(cls, loader, node):
+        """
+        Convert a representation node to a Python object.
+        """
+        return loader.construct_yaml_object(node, cls)
+
+    @classmethod
+    def to_yaml(cls, dumper, data):
+        """
+        Convert a Python object to a representation node.
+        """
+        return dumper.represent_yaml_object(cls.yaml_tag, data, cls,
+                flow_style=cls.yaml_flow_style)
+
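A usage sketch of the public API defined above; the Monster class and the
!Monster tag are illustrative only, not part of this patch:

    import yaml

    class Monster(yaml.YAMLObject):
        yaml_tag = '!Monster'   # YAMLObjectMetaclass registers the
                                # constructor and representer for this tag

    m = yaml.load("!Monster {name: Dragon}")
    print(m.name)               # 'Dragon'
    print(yaml.dump(m))         # roughly: "!Monster {name: Dragon}\n"
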
diff --git a/lib3/yaml/composer.py b/lib3/yaml/composer.py
new file mode 100644
index 0000000..d5c6a7a
--- /dev/null
+++ b/lib3/yaml/composer.py
@@ -0,0 +1,139 @@
+
+__all__ = ['Composer', 'ComposerError']
+
+from .error import MarkedYAMLError
+from .events import *
+from .nodes import *
+
+class ComposerError(MarkedYAMLError):
+    pass
+
+class Composer:
+
+    def __init__(self):
+        self.anchors = {}
+
+    def check_node(self):
+        # Drop the STREAM-START event.
+        if self.check_event(StreamStartEvent):
+            self.get_event()
+
+        # Are there more documents available?
+        return not self.check_event(StreamEndEvent)
+
+    def get_node(self):
+        # Get the root node of the next document.
+        if not self.check_event(StreamEndEvent):
+            return self.compose_document()
+
+    def get_single_node(self):
+        # Drop the STREAM-START event.
+        self.get_event()
+
+        # Compose a document if the stream is not empty.
+        document = None
+        if not self.check_event(StreamEndEvent):
+            document = self.compose_document()
+
+        # Ensure that the stream contains no more documents.
+        if not self.check_event(StreamEndEvent):
+            event = self.get_event()
+            raise ComposerError("expected a single document in the stream",
+                    document.start_mark, "but found another document",
+                    event.start_mark)
+
+        # Drop the STREAM-END event.
+        self.get_event()
+
+        return document
+
+    def compose_document(self):
+        # Drop the DOCUMENT-START event.
+        self.get_event()
+
+        # Compose the root node.
+        node = self.compose_node(None, None)
+
+        # Drop the DOCUMENT-END event.
+        self.get_event()
+
+        self.anchors = {}
+        return node
+
+    def compose_node(self, parent, index):
+        if self.check_event(AliasEvent):
+            event = self.get_event()
+            anchor = event.anchor
+            if anchor not in self.anchors:
+                raise ComposerError(None, None, "found undefined alias %r"
+                        % anchor, event.start_mark)
+            return self.anchors[anchor]
+        event = self.peek_event()
+        anchor = event.anchor
+        if anchor is not None:
+            if anchor in self.anchors:
+                raise ComposerError("found duplicate anchor %r; first occurence"
+                        % anchor, self.anchors[anchor].start_mark,
+                        "second occurence", event.start_mark)
+        self.descend_resolver(parent, index)
+        if self.check_event(ScalarEvent):
+            node = self.compose_scalar_node(anchor)
+        elif self.check_event(SequenceStartEvent):
+            node = self.compose_sequence_node(anchor)
+        elif self.check_event(MappingStartEvent):
+            node = self.compose_mapping_node(anchor)
+        self.ascend_resolver()
+        return node
+
+    def compose_scalar_node(self, anchor):
+        event = self.get_event()
+        tag = event.tag
+        if tag is None or tag == '!':
+            tag = self.resolve(ScalarNode, event.value, event.implicit)
+        node = ScalarNode(tag, event.value,
+                event.start_mark, event.end_mark, style=event.style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        return node
+
+    def compose_sequence_node(self, anchor):
+        start_event = self.get_event()
+        tag = start_event.tag
+        if tag is None or tag == '!':
+            tag = self.resolve(SequenceNode, None, start_event.implicit)
+        node = SequenceNode(tag, [],
+                start_event.start_mark, None,
+                flow_style=start_event.flow_style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        index = 0
+        while not self.check_event(SequenceEndEvent):
+            node.value.append(self.compose_node(node, index))
+            index += 1
+        end_event = self.get_event()
+        node.end_mark = end_event.end_mark
+        return node
+
+    def compose_mapping_node(self, anchor):
+        start_event = self.get_event()
+        tag = start_event.tag
+        if tag is None or tag == '!':
+            tag = self.resolve(MappingNode, None, start_event.implicit)
+        node = MappingNode(tag, [],
+                start_event.start_mark, None,
+                flow_style=start_event.flow_style)
+        if anchor is not None:
+            self.anchors[anchor] = node
+        while not self.check_event(MappingEndEvent):
+            #key_event = self.peek_event()
+            item_key = self.compose_node(node, None)
+            #if item_key in node.value:
+            #    raise ComposerError("while composing a mapping", start_event.start_mark,
+            #            "found duplicate key", key_event.start_mark)
+            item_value = self.compose_node(node, item_key)
+            #node.value[item_key] = item_value
+            node.value.append((item_key, item_value))
+        end_event = self.get_event()
+        node.end_mark = end_event.end_mark
+        return node
+
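The anchor bookkeeping above is what makes aliases compose to shared node
objects; a small sketch:

    import yaml

    node = yaml.compose("base: &b {x: 1}\nderived: *b")
    (k1, v1), (k2, v2) = node.value
    print(v1 is v2)     # True: *b resolves to the very same MappingNode
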
diff --git a/lib3/yaml/constructor.py b/lib3/yaml/constructor.py
new file mode 100644
index 0000000..981543a
--- /dev/null
+++ b/lib3/yaml/constructor.py
@@ -0,0 +1,686 @@
+
+__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
+    'ConstructorError']
+
+from .error import *
+from .nodes import *
+
+import collections.abc, datetime, base64, binascii, re, sys, types
+
+class ConstructorError(MarkedYAMLError):
+    pass
+
+class BaseConstructor:
+
+    yaml_constructors = {}
+    yaml_multi_constructors = {}
+
+    def __init__(self):
+        self.constructed_objects = {}
+        self.recursive_objects = {}
+        self.state_generators = []
+        self.deep_construct = False
+
+    def check_data(self):
+        # Are there more documents available?
+        return self.check_node()
+
+    def get_data(self):
+        # Construct and return the next document.
+        if self.check_node():
+            return self.construct_document(self.get_node())
+
+    def get_single_data(self):
+        # Ensure that the stream contains a single document and construct it.
+        node = self.get_single_node()
+        if node is not None:
+            return self.construct_document(node)
+        return None
+
+    def construct_document(self, node):
+        data = self.construct_object(node)
+        while self.state_generators:
+            state_generators = self.state_generators
+            self.state_generators = []
+            for generator in state_generators:
+                for dummy in generator:
+                    pass
+        self.constructed_objects = {}
+        self.recursive_objects = {}
+        self.deep_construct = False
+        return data
+
+    def construct_object(self, node, deep=False):
+        if node in self.constructed_objects:
+            return self.constructed_objects[node]
+        if deep:
+            old_deep = self.deep_construct
+            self.deep_construct = True
+        if node in self.recursive_objects:
+            raise ConstructorError(None, None,
+                    "found unconstructable recursive node", node.start_mark)
+        self.recursive_objects[node] = None
+        constructor = None
+        tag_suffix = None
+        if node.tag in self.yaml_constructors:
+            constructor = self.yaml_constructors[node.tag]
+        else:
+            for tag_prefix in self.yaml_multi_constructors:
+                if node.tag.startswith(tag_prefix):
+                    tag_suffix = node.tag[len(tag_prefix):]
+                    constructor = self.yaml_multi_constructors[tag_prefix]
+                    break
+            else:
+                if None in self.yaml_multi_constructors:
+                    tag_suffix = node.tag
+                    constructor = self.yaml_multi_constructors[None]
+                elif None in self.yaml_constructors:
+                    constructor = self.yaml_constructors[None]
+                elif isinstance(node, ScalarNode):
+                    constructor = self.__class__.construct_scalar
+                elif isinstance(node, SequenceNode):
+                    constructor = self.__class__.construct_sequence
+                elif isinstance(node, MappingNode):
+                    constructor = self.__class__.construct_mapping
+        if tag_suffix is None:
+            data = constructor(self, node)
+        else:
+            data = constructor(self, tag_suffix, node)
+        if isinstance(data, types.GeneratorType):
+            generator = data
+            data = next(generator)
+            if self.deep_construct:
+                for dummy in generator:
+                    pass
+            else:
+                self.state_generators.append(generator)
+        self.constructed_objects[node] = data
+        del self.recursive_objects[node]
+        if deep:
+            self.deep_construct = old_deep
+        return data
+
+    def construct_scalar(self, node):
+        if not isinstance(node, ScalarNode):
+            raise ConstructorError(None, None,
+                    "expected a scalar node, but found %s" % node.id,
+                    node.start_mark)
+        return node.value
+
+    def construct_sequence(self, node, deep=False):
+        if not isinstance(node, SequenceNode):
+            raise ConstructorError(None, None,
+                    "expected a sequence node, but found %s" % node.id,
+                    node.start_mark)
+        return [self.construct_object(child, deep=deep)
+                for child in node.value]
+
+    def construct_mapping(self, node, deep=False):
+        if not isinstance(node, MappingNode):
+            raise ConstructorError(None, None,
+                    "expected a mapping node, but found %s" % node.id,
+                    node.start_mark)
+        mapping = {}
+        for key_node, value_node in node.value:
+            key = self.construct_object(key_node, deep=deep)
+            if not isinstance(key, collections.abc.Hashable):
+                raise ConstructorError("while constructing a mapping", node.start_mark,
+                        "found unhashable key", key_node.start_mark)
+            value = self.construct_object(value_node, deep=deep)
+            mapping[key] = value
+        return mapping
+
+    def construct_pairs(self, node, deep=False):
+        if not isinstance(node, MappingNode):
+            raise ConstructorError(None, None,
+                    "expected a mapping node, but found %s" % node.id,
+                    node.start_mark)
+        pairs = []
+        for key_node, value_node in node.value:
+            key = self.construct_object(key_node, deep=deep)
+            value = self.construct_object(value_node, deep=deep)
+            pairs.append((key, value))
+        return pairs
+
+    @classmethod
+    def add_constructor(cls, tag, constructor):
+        if 'yaml_constructors' not in cls.__dict__:
+            cls.yaml_constructors = cls.yaml_constructors.copy()
+        cls.yaml_constructors[tag] = constructor
+
+    @classmethod
+    def add_multi_constructor(cls, tag_prefix, multi_constructor):
+        if 'yaml_multi_constructors' not in cls.__dict__:
+            cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
+        cls.yaml_multi_constructors[tag_prefix] = multi_constructor
+
+class SafeConstructor(BaseConstructor):
+
+    def construct_scalar(self, node):
+        if isinstance(node, MappingNode):
+            for key_node, value_node in node.value:
+                if key_node.tag == 'tag:yaml.org,2002:value':
+                    return self.construct_scalar(value_node)
+        return super().construct_scalar(node)
+
+    def flatten_mapping(self, node):
+        merge = []
+        index = 0
+        while index < len(node.value):
+            key_node, value_node = node.value[index]
+            if key_node.tag == 'tag:yaml.org,2002:merge':
+                del node.value[index]
+                if isinstance(value_node, MappingNode):
+                    self.flatten_mapping(value_node)
+                    merge.extend(value_node.value)
+                elif isinstance(value_node, SequenceNode):
+                    submerge = []
+                    for subnode in value_node.value:
+                        if not isinstance(subnode, MappingNode):
+                            raise ConstructorError("while constructing a mapping",
+                                    node.start_mark,
+                                    "expected a mapping for merging, but found %s"
+                                    % subnode.id, subnode.start_mark)
+                        self.flatten_mapping(subnode)
+                        submerge.append(subnode.value)
+                    submerge.reverse()
+                    for value in submerge:
+                        merge.extend(value)
+                else:
+                    raise ConstructorError("while constructing a mapping", node.start_mark,
+                            "expected a mapping or list of mappings for merging, but found %s"
+                            % value_node.id, value_node.start_mark)
+            elif key_node.tag == 'tag:yaml.org,2002:value':
+                key_node.tag = 'tag:yaml.org,2002:str'
+                index += 1
+            else:
+                index += 1
+        if merge:
+            node.value = merge + node.value
+
+    def construct_mapping(self, node, deep=False):
+        if isinstance(node, MappingNode):
+            self.flatten_mapping(node)
+        return super().construct_mapping(node, deep=deep)
+
+    def construct_yaml_null(self, node):
+        self.construct_scalar(node)
+        return None
+
+    bool_values = {
+        'yes':      True,
+        'no':       False,
+        'true':     True,
+        'false':    False,
+        'on':       True,
+        'off':      False,
+    }
+
+    def construct_yaml_bool(self, node):
+        value = self.construct_scalar(node)
+        return self.bool_values[value.lower()]
+
+    def construct_yaml_int(self, node):
+        value = self.construct_scalar(node)
+        value = value.replace('_', '')
+        sign = +1
+        if value[0] == '-':
+            sign = -1
+        if value[0] in '+-':
+            value = value[1:]
+        if value == '0':
+            return 0
+        elif value.startswith('0b'):
+            return sign*int(value[2:], 2)
+        elif value.startswith('0x'):
+            return sign*int(value[2:], 16)
+        elif value[0] == '0':
+            return sign*int(value, 8)
+        elif ':' in value:
+            digits = [int(part) for part in value.split(':')]
+            digits.reverse()
+            base = 1
+            value = 0
+            for digit in digits:
+                value += digit*base
+                base *= 60
+            return sign*value
+        else:
+            return sign*int(value)
+
+    inf_value = 1e300
+    while inf_value != inf_value*inf_value:
+        inf_value *= inf_value
+    nan_value = -inf_value/inf_value   # Trying to make a quiet NaN (like C99).
+
+    def construct_yaml_float(self, node):
+        value = self.construct_scalar(node)
+        value = value.replace('_', '').lower()
+        sign = +1
+        if value[0] == '-':
+            sign = -1
+        if value[0] in '+-':
+            value = value[1:]
+        if value == '.inf':
+            return sign*self.inf_value
+        elif value == '.nan':
+            return self.nan_value
+        elif ':' in value:
+            digits = [float(part) for part in value.split(':')]
+            digits.reverse()
+            base = 1
+            value = 0.0
+            for digit in digits:
+                value += digit*base
+                base *= 60
+            return sign*value
+        else:
+            return sign*float(value)
+
+    def construct_yaml_binary(self, node):
+        try:
+            value = self.construct_scalar(node).encode('ascii')
+        except UnicodeEncodeError as exc:
+            raise ConstructorError(None, None,
+                    "failed to convert base64 data into ascii: %s" % exc,
+                    node.start_mark)
+        try:
+            if hasattr(base64, 'decodebytes'):
+                return base64.decodebytes(value)
+            else:
+                return base64.decodestring(value)
+        except binascii.Error as exc:
+            raise ConstructorError(None, None,
+                    "failed to decode base64 data: %s" % exc, node.start_mark)
+
+    timestamp_regexp = re.compile(
+            r'''^(?P<year>[0-9][0-9][0-9][0-9])
+                -(?P<month>[0-9][0-9]?)
+                -(?P<day>[0-9][0-9]?)
+                (?:(?:[Tt]|[ \t]+)
+                (?P<hour>[0-9][0-9]?)
+                :(?P<minute>[0-9][0-9])
+                :(?P<second>[0-9][0-9])
+                (?:\.(?P<fraction>[0-9]*))?
+                (?:[ \t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
+                (?::(?P<tz_minute>[0-9][0-9]))?))?)?$''', re.X)
+
+    def construct_yaml_timestamp(self, node):
+        value = self.construct_scalar(node)
+        match = self.timestamp_regexp.match(value)
+        values = match.groupdict()
+        year = int(values['year'])
+        month = int(values['month'])
+        day = int(values['day'])
+        if not values['hour']:
+            return datetime.date(year, month, day)
+        hour = int(values['hour'])
+        minute = int(values['minute'])
+        second = int(values['second'])
+        fraction = 0
+        if values['fraction']:
+            fraction = values['fraction'][:6]
+            while len(fraction) < 6:
+                fraction += '0'
+            fraction = int(fraction)
+        delta = None
+        if values['tz_sign']:
+            tz_hour = int(values['tz_hour'])
+            tz_minute = int(values['tz_minute'] or 0)
+            delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
+            if values['tz_sign'] == '-':
+                delta = -delta
+        data = datetime.datetime(year, month, day, hour, minute, second, fraction)
+        if delta:
+            data -= delta
+        return data
+
+    def construct_yaml_omap(self, node):
+        # Note: we do not check for duplicate keys, because it's too
+        # CPU-expensive.
+        omap = []
+        yield omap
+        if not isinstance(node, SequenceNode):
+            raise ConstructorError("while constructing an ordered map", node.start_mark,
+                    "expected a sequence, but found %s" % node.id, node.start_mark)
+        for subnode in node.value:
+            if not isinstance(subnode, MappingNode):
+                raise ConstructorError("while constructing an ordered map", node.start_mark,
+                        "expected a mapping of length 1, but found %s" % subnode.id,
+                        subnode.start_mark)
+            if len(subnode.value) != 1:
+                raise ConstructorError("while constructing an ordered map", node.start_mark,
+                        "expected a single mapping item, but found %d items" % len(subnode.value),
+                        subnode.start_mark)
+            key_node, value_node = subnode.value[0]
+            key = self.construct_object(key_node)
+            value = self.construct_object(value_node)
+            omap.append((key, value))
+
+    def construct_yaml_pairs(self, node):
+        # Note: the same code as `construct_yaml_omap`.
+        pairs = []
+        yield pairs
+        if not isinstance(node, SequenceNode):
+            raise ConstructorError("while constructing pairs", node.start_mark,
+                    "expected a sequence, but found %s" % node.id, node.start_mark)
+        for subnode in node.value:
+            if not isinstance(subnode, MappingNode):
+                raise ConstructorError("while constructing pairs", node.start_mark,
+                        "expected a mapping of length 1, but found %s" % subnode.id,
+                        subnode.start_mark)
+            if len(subnode.value) != 1:
+                raise ConstructorError("while constructing pairs", node.start_mark,
+                        "expected a single mapping item, but found %d items" % len(subnode.value),
+                        subnode.start_mark)
+            key_node, value_node = subnode.value[0]
+            key = self.construct_object(key_node)
+            value = self.construct_object(value_node)
+            pairs.append((key, value))
+
+    def construct_yaml_set(self, node):
+        data = set()
+        yield data
+        value = self.construct_mapping(node)
+        data.update(value)
+
+    def construct_yaml_str(self, node):
+        return self.construct_scalar(node)
+
+    def construct_yaml_seq(self, node):
+        data = []
+        yield data
+        data.extend(self.construct_sequence(node))
+
+    def construct_yaml_map(self, node):
+        data = {}
+        yield data
+        value = self.construct_mapping(node)
+        data.update(value)
+
+    def construct_yaml_object(self, node, cls):
+        data = cls.__new__(cls)
+        yield data
+        if hasattr(data, '__setstate__'):
+            state = self.construct_mapping(node, deep=True)
+            data.__setstate__(state)
+        else:
+            state = self.construct_mapping(node)
+            data.__dict__.update(state)
+
+    def construct_undefined(self, node):
+        raise ConstructorError(None, None,
+                "could not determine a constructor for the tag %r" % node.tag,
+                node.start_mark)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:null',
+        SafeConstructor.construct_yaml_null)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:bool',
+        SafeConstructor.construct_yaml_bool)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:int',
+        SafeConstructor.construct_yaml_int)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:float',
+        SafeConstructor.construct_yaml_float)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:binary',
+        SafeConstructor.construct_yaml_binary)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:timestamp',
+        SafeConstructor.construct_yaml_timestamp)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:omap',
+        SafeConstructor.construct_yaml_omap)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:pairs',
+        SafeConstructor.construct_yaml_pairs)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:set',
+        SafeConstructor.construct_yaml_set)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:str',
+        SafeConstructor.construct_yaml_str)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:seq',
+        SafeConstructor.construct_yaml_seq)
+
+SafeConstructor.add_constructor(
+        'tag:yaml.org,2002:map',
+        SafeConstructor.construct_yaml_map)
+
+SafeConstructor.add_constructor(None,
+        SafeConstructor.construct_undefined)
+
+class Constructor(SafeConstructor):
+
+    def construct_python_str(self, node):
+        return self.construct_scalar(node)
+
+    def construct_python_unicode(self, node):
+        return self.construct_scalar(node)
+
+    def construct_python_bytes(self, node):
+        try:
+            value = self.construct_scalar(node).encode('ascii')
+        except UnicodeEncodeError as exc:
+            raise ConstructorError(None, None,
+                    "failed to convert base64 data into ascii: %s" % exc,
+                    node.start_mark)
+        try:
+            if hasattr(base64, 'decodebytes'):
+                return base64.decodebytes(value)
+            else:
+                return base64.decodestring(value)
+        except binascii.Error as exc:
+            raise ConstructorError(None, None,
+                    "failed to decode base64 data: %s" % exc, node.start_mark)
+
+    def construct_python_long(self, node):
+        return self.construct_yaml_int(node)
+
+    def construct_python_complex(self, node):
+        return complex(self.construct_scalar(node))
+
+    def construct_python_tuple(self, node):
+        return tuple(self.construct_sequence(node))
+
+    def find_python_module(self, name, mark):
+        if not name:
+            raise ConstructorError("while constructing a Python module", mark,
+                    "expected non-empty name appended to the tag", mark)
+        try:
+            __import__(name)
+        except ImportError as exc:
+            raise ConstructorError("while constructing a Python module", mark,
+                    "cannot find module %r (%s)" % (name, exc), mark)
+        return sys.modules[name]
+
+    def find_python_name(self, name, mark):
+        if not name:
+            raise ConstructorError("while constructing a Python object", mark,
+                    "expected non-empty name appended to the tag", mark)
+        if '.' in name:
+            module_name, object_name = name.rsplit('.', 1)
+        else:
+            module_name = 'builtins'
+            object_name = name
+        try:
+            __import__(module_name)
+        except ImportError as exc:
+            raise ConstructorError("while constructing a Python object", mark,
+                    "cannot find module %r (%s)" % (module_name, exc), mark)
+        module = sys.modules[module_name]
+        if not hasattr(module, object_name):
+            raise ConstructorError("while constructing a Python object", mark,
+                    "cannot find %r in the module %r"
+                    % (object_name, module.__name__), mark)
+        return getattr(module, object_name)
+
+    def construct_python_name(self, suffix, node):
+        value = self.construct_scalar(node)
+        if value:
+            raise ConstructorError("while constructing a Python name", node.start_mark,
+                    "expected the empty value, but found %r" % value, node.start_mark)
+        return self.find_python_name(suffix, node.start_mark)
+
+    def construct_python_module(self, suffix, node):
+        value = self.construct_scalar(node)
+        if value:
+            raise ConstructorError("while constructing a Python module", node.start_mark,
+                    "expected the empty value, but found %r" % value, node.start_mark)
+        return self.find_python_module(suffix, node.start_mark)
+
+    def make_python_instance(self, suffix, node,
+            args=None, kwds=None, newobj=False):
+        if not args:
+            args = []
+        if not kwds:
+            kwds = {}
+        cls = self.find_python_name(suffix, node.start_mark)
+        if newobj and isinstance(cls, type):
+            return cls.__new__(cls, *args, **kwds)
+        else:
+            return cls(*args, **kwds)
+
+    def set_python_instance_state(self, instance, state):
+        if hasattr(instance, '__setstate__'):
+            instance.__setstate__(state)
+        else:
+            slotstate = {}
+            if isinstance(state, tuple) and len(state) == 2:
+                state, slotstate = state
+            if hasattr(instance, '__dict__'):
+                instance.__dict__.update(state)
+            elif state:
+                slotstate.update(state)
+            for key, value in slotstate.items():
+                setattr(instance, key, value)
+
+    def construct_python_object(self, suffix, node):
+        # Format:
+        #   !!python/object:module.name { ... state ... }
+        instance = self.make_python_instance(suffix, node, newobj=True)
+        yield instance
+        deep = hasattr(instance, '__setstate__')
+        state = self.construct_mapping(node, deep=deep)
+        self.set_python_instance_state(instance, state)
+
+    def construct_python_object_apply(self, suffix, node, newobj=False):
+        # Format:
+        #   !!python/object/apply       # (or !!python/object/new)
+        #   args: [ ... arguments ... ]
+        #   kwds: { ... keywords ... }
+        #   state: ... state ...
+        #   listitems: [ ... listitems ... ]
+        #   dictitems: { ... dictitems ... }
+        # or short format:
+        #   !!python/object/apply [ ... arguments ... ]
+        # The difference between !!python/object/apply and !!python/object/new
+        # is how an object is created; check make_python_instance for details.
+        if isinstance(node, SequenceNode):
+            args = self.construct_sequence(node, deep=True)
+            kwds = {}
+            state = {}
+            listitems = []
+            dictitems = {}
+        else:
+            value = self.construct_mapping(node, deep=True)
+            args = value.get('args', [])
+            kwds = value.get('kwds', {})
+            state = value.get('state', {})
+            listitems = value.get('listitems', [])
+            dictitems = value.get('dictitems', {})
+        instance = self.make_python_instance(suffix, node, args, kwds, newobj)
+        if state:
+            self.set_python_instance_state(instance, state)
+        if listitems:
+            instance.extend(listitems)
+        if dictitems:
+            for key in dictitems:
+                instance[key] = dictitems[key]
+        return instance
+
+    def construct_python_object_new(self, suffix, node):
+        return self.construct_python_object_apply(suffix, node, newobj=True)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/none',
+    Constructor.construct_yaml_null)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/bool',
+    Constructor.construct_yaml_bool)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/str',
+    Constructor.construct_python_str)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/unicode',
+    Constructor.construct_python_unicode)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/bytes',
+    Constructor.construct_python_bytes)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/int',
+    Constructor.construct_yaml_int)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/long',
+    Constructor.construct_python_long)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/float',
+    Constructor.construct_yaml_float)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/complex',
+    Constructor.construct_python_complex)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/list',
+    Constructor.construct_yaml_seq)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/tuple',
+    Constructor.construct_python_tuple)
+
+Constructor.add_constructor(
+    'tag:yaml.org,2002:python/dict',
+    Constructor.construct_yaml_map)
+
+Constructor.add_multi_constructor(
+    'tag:yaml.org,2002:python/name:',
+    Constructor.construct_python_name)
+
+Constructor.add_multi_constructor(
+    'tag:yaml.org,2002:python/module:',
+    Constructor.construct_python_module)
+
+Constructor.add_multi_constructor(
+    'tag:yaml.org,2002:python/object:',
+    Constructor.construct_python_object)
+
+Constructor.add_multi_constructor(
+    'tag:yaml.org,2002:python/object/apply:',
+    Constructor.construct_python_object_apply)
+
+Constructor.add_multi_constructor(
+    'tag:yaml.org,2002:python/object/new:',
+    Constructor.construct_python_object_new)
+
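A sketch of extending the constructor tables above with a custom tag; the
!upper tag and construct_upper function are hypothetical:

    import yaml

    def construct_upper(loader, node):
        # a constructor receives the Loader and the node and
        # returns the corresponding Python object
        return loader.construct_scalar(node).upper()

    yaml.add_constructor('!upper', construct_upper)
    print(yaml.load("greeting: !upper hello"))   # {'greeting': 'HELLO'}
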
diff --git a/lib3/yaml/cyaml.py b/lib3/yaml/cyaml.py
new file mode 100644
index 0000000..d5cb87e
--- /dev/null
+++ b/lib3/yaml/cyaml.py
@@ -0,0 +1,85 @@
+
+__all__ = ['CBaseLoader', 'CSafeLoader', 'CLoader',
+        'CBaseDumper', 'CSafeDumper', 'CDumper']
+
+from _yaml import CParser, CEmitter
+
+from .constructor import *
+
+from .serializer import *
+from .representer import *
+
+from .resolver import *
+
+class CBaseLoader(CParser, BaseConstructor, BaseResolver):
+
+    def __init__(self, stream):
+        CParser.__init__(self, stream)
+        BaseConstructor.__init__(self)
+        BaseResolver.__init__(self)
+
+class CSafeLoader(CParser, SafeConstructor, Resolver):
+
+    def __init__(self, stream):
+        CParser.__init__(self, stream)
+        SafeConstructor.__init__(self)
+        Resolver.__init__(self)
+
+class CLoader(CParser, Constructor, Resolver):
+
+    def __init__(self, stream):
+        CParser.__init__(self, stream)
+        Constructor.__init__(self)
+        Resolver.__init__(self)
+
+class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        CEmitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width, encoding=encoding,
+                allow_unicode=allow_unicode, line_break=line_break,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class CSafeDumper(CEmitter, SafeRepresenter, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        CEmitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width, encoding=encoding,
+                allow_unicode=allow_unicode, line_break=line_break,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        SafeRepresenter.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class CDumper(CEmitter, Serializer, Representer, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        CEmitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width, encoding=encoding,
+                allow_unicode=allow_unicode, line_break=line_break,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
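The C classes above are drop-in replacements for the pure-Python ones; a
typical guarded usage sketch:

    import yaml

    # Prefer the LibYAML-backed loader when the bindings were built.
    Loader = yaml.CLoader if yaml.__with_libyaml__ else yaml.Loader
    data = yaml.load("a: 1", Loader=Loader)
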
diff --git a/lib3/yaml/dumper.py b/lib3/yaml/dumper.py
new file mode 100644
index 0000000..0b69128
--- /dev/null
+++ b/lib3/yaml/dumper.py
@@ -0,0 +1,62 @@
+
+__all__ = ['BaseDumper', 'SafeDumper', 'Dumper']
+
+from .emitter import *
+from .serializer import *
+from .representer import *
+from .resolver import *
+
+class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        Emitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width,
+                allow_unicode=allow_unicode, line_break=line_break)
+        Serializer.__init__(self, encoding=encoding,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        Emitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width,
+                allow_unicode=allow_unicode, line_break=line_break)
+        Serializer.__init__(self, encoding=encoding,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        SafeRepresenter.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
+class Dumper(Emitter, Serializer, Representer, Resolver):
+
+    def __init__(self, stream,
+            default_style=None, default_flow_style=None,
+            canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None,
+            encoding=None, explicit_start=None, explicit_end=None,
+            version=None, tags=None):
+        Emitter.__init__(self, stream, canonical=canonical,
+                indent=indent, width=width,
+                allow_unicode=allow_unicode, line_break=line_break)
+        Serializer.__init__(self, encoding=encoding,
+                explicit_start=explicit_start, explicit_end=explicit_end,
+                version=version, tags=tags)
+        Representer.__init__(self, default_style=default_style,
+                default_flow_style=default_flow_style)
+        Resolver.__init__(self)
+
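The keyword arguments threaded through these Dumper constructors correspond
one-to-one with the dump() options; a small sketch:

    import yaml

    print(yaml.dump({'a': {'b': 1}}, default_flow_style=False, indent=4))
    # a:
    #     b: 1
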
diff --git a/lib3/yaml/emitter.py b/lib3/yaml/emitter.py
new file mode 100644
index 0000000..34cb145
--- /dev/null
+++ b/lib3/yaml/emitter.py
@@ -0,0 +1,1137 @@
+
+# Emitter expects events obeying the following grammar:
+# stream ::= STREAM-START document* STREAM-END
+# document ::= DOCUMENT-START node DOCUMENT-END
+# node ::= SCALAR | sequence | mapping
+# sequence ::= SEQUENCE-START node* SEQUENCE-END
+# mapping ::= MAPPING-START (node node)* MAPPING-END
+
+__all__ = ['Emitter', 'EmitterError']
+
+from .error import YAMLError
+from .events import *
+
+class EmitterError(YAMLError):
+    pass
+
+class ScalarAnalysis:
+    def __init__(self, scalar, empty, multiline,
+            allow_flow_plain, allow_block_plain,
+            allow_single_quoted, allow_double_quoted,
+            allow_block):
+        self.scalar = scalar
+        self.empty = empty
+        self.multiline = multiline
+        self.allow_flow_plain = allow_flow_plain
+        self.allow_block_plain = allow_block_plain
+        self.allow_single_quoted = allow_single_quoted
+        self.allow_double_quoted = allow_double_quoted
+        self.allow_block = allow_block
+
+class Emitter:
+
+    DEFAULT_TAG_PREFIXES = {
+        '!' : '!',
+        'tag:yaml.org,2002:' : '!!',
+    }
+
+    def __init__(self, stream, canonical=None, indent=None, width=None,
+            allow_unicode=None, line_break=None):
+
+        # The stream should have the methods `write` and possibly `flush`.
+        self.stream = stream
+
+        # Encoding can be overridden by STREAM-START.
+        self.encoding = None
+
+        # Emitter is a state machine with a stack of states to handle nested
+        # structures.
+        self.states = []
+        self.state = self.expect_stream_start
+
+        # Current event and the event queue.
+        self.events = []
+        self.event = None
+
+        # The current indentation level and the stack of previous indents.
+        self.indents = []
+        self.indent = None
+
+        # Flow level.
+        self.flow_level = 0
+
+        # Contexts.
+        self.root_context = False
+        self.sequence_context = False
+        self.mapping_context = False
+        self.simple_key_context = False
+
+        # Characteristics of the last emitted character:
+        #  - current position.
+        #  - is it a whitespace?
+        #  - is it an indentation character
+        #    (indentation space, '-', '?', or ':')?
+        self.line = 0
+        self.column = 0
+        self.whitespace = True
+        self.indention = True
+
+        # Whether the document requires an explicit document indicator.
+        self.open_ended = False
+
+        # Formatting details.
+        self.canonical = canonical
+        self.allow_unicode = allow_unicode
+        self.best_indent = 2
+        if indent and 1 < indent < 10:
+            self.best_indent = indent
+        self.best_width = 80
+        if width and width > self.best_indent*2:
+            self.best_width = width
+        self.best_line_break = '\n'
+        if line_break in ['\r', '\n', '\r\n']:
+            self.best_line_break = line_break
+
+        # Tag prefixes.
+        self.tag_prefixes = None
+
+        # Prepared anchor and tag.
+        self.prepared_anchor = None
+        self.prepared_tag = None
+
+        # Scalar analysis and style.
+        self.analysis = None
+        self.style = None
+
+    def dispose(self):
+        # Reset the state attributes (to clear self-references)
+        self.states = []
+        self.state = None
+
+    def emit(self, event):
+        self.events.append(event)
+        while not self.need_more_events():
+            self.event = self.events.pop(0)
+            self.state()
+            self.event = None
+
+    # In some cases, we wait for the next few events before emitting.
+
+    def need_more_events(self):
+        if not self.events:
+            return True
+        event = self.events[0]
+        if isinstance(event, DocumentStartEvent):
+            return self.need_events(1)
+        elif isinstance(event, SequenceStartEvent):
+            return self.need_events(2)
+        elif isinstance(event, MappingStartEvent):
+            return self.need_events(3)
+        else:
+            return False
+
+    def need_events(self, count):
+        level = 0
+        for event in self.events[1:]:
+            if isinstance(event, (DocumentStartEvent, CollectionStartEvent)):
+                level += 1
+            elif isinstance(event, (DocumentEndEvent, CollectionEndEvent)):
+                level -= 1
+            elif isinstance(event, StreamEndEvent):
+                level = -1
+            if level < 0:
+                return False
+        return (len(self.events) < count+1)
+
+    def increase_indent(self, flow=False, indentless=False):
+        self.indents.append(self.indent)
+        if self.indent is None:
+            if flow:
+                self.indent = self.best_indent
+            else:
+                self.indent = 0
+        elif not indentless:
+            self.indent += self.best_indent
+
+    # States.
+
+    # Stream handlers.
+
+    def expect_stream_start(self):
+        if isinstance(self.event, StreamStartEvent):
+            if self.event.encoding and not hasattr(self.stream, 'encoding'):
+                self.encoding = self.event.encoding
+            self.write_stream_start()
+            self.state = self.expect_first_document_start
+        else:
+            raise EmitterError("expected StreamStartEvent, but got %s"
+                    % self.event)
+
+    def expect_nothing(self):
+        raise EmitterError("expected nothing, but got %s" % self.event)
+
+    # Document handlers.
+
+    def expect_first_document_start(self):
+        return self.expect_document_start(first=True)
+
+    def expect_document_start(self, first=False):
+        if isinstance(self.event, DocumentStartEvent):
+            if (self.event.version or self.event.tags) and self.open_ended:
+                self.write_indicator('...', True)
+                self.write_indent()
+            if self.event.version:
+                version_text = self.prepare_version(self.event.version)
+                self.write_version_directive(version_text)
+            self.tag_prefixes = self.DEFAULT_TAG_PREFIXES.copy()
+            if self.event.tags:
+                handles = sorted(self.event.tags.keys())
+                for handle in handles:
+                    prefix = self.event.tags[handle]
+                    self.tag_prefixes[prefix] = handle
+                    handle_text = self.prepare_tag_handle(handle)
+                    prefix_text = self.prepare_tag_prefix(prefix)
+                    self.write_tag_directive(handle_text, prefix_text)
+            implicit = (first and not self.event.explicit and not self.canonical
+                    and not self.event.version and not self.event.tags
+                    and not self.check_empty_document())
+            if not implicit:
+                self.write_indent()
+                self.write_indicator('---', True)
+                if self.canonical:
+                    self.write_indent()
+            self.state = self.expect_document_root
+        elif isinstance(self.event, StreamEndEvent):
+            if self.open_ended:
+                self.write_indicator('...', True)
+                self.write_indent()
+            self.write_stream_end()
+            self.state = self.expect_nothing
+        else:
+            raise EmitterError("expected DocumentStartEvent, but got %s"
+                    % self.event)
+
+    def expect_document_end(self):
+        if isinstance(self.event, DocumentEndEvent):
+            self.write_indent()
+            if self.event.explicit:
+                self.write_indicator('...', True)
+                self.write_indent()
+            self.flush_stream()
+            self.state = self.expect_document_start
+        else:
+            raise EmitterError("expected DocumentEndEvent, but got %s"
+                    % self.event)
+
+    def expect_document_root(self):
+        self.states.append(self.expect_document_end)
+        self.expect_node(root=True)
+
+    # Node handlers.
+
+    def expect_node(self, root=False, sequence=False, mapping=False,
+            simple_key=False):
+        self.root_context = root
+        self.sequence_context = sequence
+        self.mapping_context = mapping
+        self.simple_key_context = simple_key
+        if isinstance(self.event, AliasEvent):
+            self.expect_alias()
+        elif isinstance(self.event, (ScalarEvent, CollectionStartEvent)):
+            self.process_anchor('&')
+            self.process_tag()
+            if isinstance(self.event, ScalarEvent):
+                self.expect_scalar()
+            elif isinstance(self.event, SequenceStartEvent):
+                if self.flow_level or self.canonical or self.event.flow_style   \
+                        or self.check_empty_sequence():
+                    self.expect_flow_sequence()
+                else:
+                    self.expect_block_sequence()
+            elif isinstance(self.event, MappingStartEvent):
+                if self.flow_level or self.canonical or self.event.flow_style   \
+                        or self.check_empty_mapping():
+                    self.expect_flow_mapping()
+                else:
+                    self.expect_block_mapping()
+        else:
+            raise EmitterError("expected NodeEvent, but got %s" % self.event)
+
+    def expect_alias(self):
+        if self.event.anchor is None:
+            raise EmitterError("anchor is not specified for alias")
+        self.process_anchor('*')
+        self.state = self.states.pop()
+
+    def expect_scalar(self):
+        self.increase_indent(flow=True)
+        self.process_scalar()
+        self.indent = self.indents.pop()
+        self.state = self.states.pop()
+
+    # Flow sequence handlers.
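+    #
+    # These emit flow-style sequences such as '[one, two, three]'.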
+
+    def expect_flow_sequence(self):
+        self.write_indicator('[', True, whitespace=True)
+        self.flow_level += 1
+        self.increase_indent(flow=True)
+        self.state = self.expect_first_flow_sequence_item
+
+    def expect_first_flow_sequence_item(self):
+        if isinstance(self.event, SequenceEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            self.write_indicator(']', False)
+            self.state = self.states.pop()
+        else:
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            self.states.append(self.expect_flow_sequence_item)
+            self.expect_node(sequence=True)
+
+    def expect_flow_sequence_item(self):
+        if isinstance(self.event, SequenceEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            if self.canonical:
+                self.write_indicator(',', False)
+                self.write_indent()
+            self.write_indicator(']', False)
+            self.state = self.states.pop()
+        else:
+            self.write_indicator(',', False)
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            self.states.append(self.expect_flow_sequence_item)
+            self.expect_node(sequence=True)
+
+    # Flow mapping handlers.
+
+    def expect_flow_mapping(self):
+        self.write_indicator('{', True, whitespace=True)
+        self.flow_level += 1
+        self.increase_indent(flow=True)
+        self.state = self.expect_first_flow_mapping_key
+
+    def expect_first_flow_mapping_key(self):
+        if isinstance(self.event, MappingEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            self.write_indicator('}', False)
+            self.state = self.states.pop()
+        else:
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            if not self.canonical and self.check_simple_key():
+                self.states.append(self.expect_flow_mapping_simple_value)
+                self.expect_node(mapping=True, simple_key=True)
+            else:
+                self.write_indicator('?', True)
+                self.states.append(self.expect_flow_mapping_value)
+                self.expect_node(mapping=True)
+
+    def expect_flow_mapping_key(self):
+        if isinstance(self.event, MappingEndEvent):
+            self.indent = self.indents.pop()
+            self.flow_level -= 1
+            if self.canonical:
+                self.write_indicator(',', False)
+                self.write_indent()
+            self.write_indicator('}', False)
+            self.state = self.states.pop()
+        else:
+            self.write_indicator(',', False)
+            if self.canonical or self.column > self.best_width:
+                self.write_indent()
+            if not self.canonical and self.check_simple_key():
+                self.states.append(self.expect_flow_mapping_simple_value)
+                self.expect_node(mapping=True, simple_key=True)
+            else:
+                self.write_indicator('?', True)
+                self.states.append(self.expect_flow_mapping_value)
+                self.expect_node(mapping=True)
+
+    def expect_flow_mapping_simple_value(self):
+        self.write_indicator(':', False)
+        self.states.append(self.expect_flow_mapping_key)
+        self.expect_node(mapping=True)
+
+    def expect_flow_mapping_value(self):
+        if self.canonical or self.column > self.best_width:
+            self.write_indent()
+        self.write_indicator(':', True)
+        self.states.append(self.expect_flow_mapping_key)
+        self.expect_node(mapping=True)
+
+    # Block sequence handlers.
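+    #
+    # These emit block-style sequences with one '- item' entry per line.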
+
+    def expect_block_sequence(self):
+        indentless = (self.mapping_context and not self.indention)
+        self.increase_indent(flow=False, indentless=indentless)
+        self.state = self.expect_first_block_sequence_item
+
+    def expect_first_block_sequence_item(self):
+        return self.expect_block_sequence_item(first=True)
+
+    def expect_block_sequence_item(self, first=False):
+        if not first and isinstance(self.event, SequenceEndEvent):
+            self.indent = self.indents.pop()
+            self.state = self.states.pop()
+        else:
+            self.write_indent()
+            self.write_indicator('-', True, indention=True)
+            self.states.append(self.expect_block_sequence_item)
+            self.expect_node(sequence=True)
+
+    # Block mapping handlers.
+
+    def expect_block_mapping(self):
+        self.increase_indent(flow=False)
+        self.state = self.expect_first_block_mapping_key
+
+    def expect_first_block_mapping_key(self):
+        return self.expect_block_mapping_key(first=True)
+
+    def expect_block_mapping_key(self, first=False):
+        if not first and isinstance(self.event, MappingEndEvent):
+            self.indent = self.indents.pop()
+            self.state = self.states.pop()
+        else:
+            self.write_indent()
+            if self.check_simple_key():
+                self.states.append(self.expect_block_mapping_simple_value)
+                self.expect_node(mapping=True, simple_key=True)
+            else:
+                self.write_indicator('?', True, indention=True)
+                self.states.append(self.expect_block_mapping_value)
+                self.expect_node(mapping=True)
+
+    def expect_block_mapping_simple_value(self):
+        self.write_indicator(':', False)
+        self.states.append(self.expect_block_mapping_key)
+        self.expect_node(mapping=True)
+
+    def expect_block_mapping_value(self):
+        self.write_indent()
+        self.write_indicator(':', True, indention=True)
+        self.states.append(self.expect_block_mapping_key)
+        self.expect_node(mapping=True)
+
+    # Checkers.
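+    #
+    # The checkers inspect the current and buffered events without consuming
+    # them; the emitter uses them to pick more compact output forms.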
+
+    def check_empty_sequence(self):
+        return (isinstance(self.event, SequenceStartEvent) and self.events
+                and isinstance(self.events[0], SequenceEndEvent))
+
+    def check_empty_mapping(self):
+        return (isinstance(self.event, MappingStartEvent) and self.events
+                and isinstance(self.events[0], MappingEndEvent))
+
+    def check_empty_document(self):
+        if not isinstance(self.event, DocumentStartEvent) or not self.events:
+            return False
+        event = self.events[0]
+        return (isinstance(event, ScalarEvent) and event.anchor is None
+                and event.tag is None and event.implicit and event.value == '')
+
+    def check_simple_key(self):
+        length = 0
+        if isinstance(self.event, NodeEvent) and self.event.anchor is not None:
+            if self.prepared_anchor is None:
+                self.prepared_anchor = self.prepare_anchor(self.event.anchor)
+            length += len(self.prepared_anchor)
+        if isinstance(self.event, (ScalarEvent, CollectionStartEvent))  \
+                and self.event.tag is not None:
+            if self.prepared_tag is None:
+                self.prepared_tag = self.prepare_tag(self.event.tag)
+            length += len(self.prepared_tag)
+        if isinstance(self.event, ScalarEvent):
+            if self.analysis is None:
+                self.analysis = self.analyze_scalar(self.event.value)
+            length += len(self.analysis.scalar)
+        return (length < 128 and (isinstance(self.event, AliasEvent)
+            or (isinstance(self.event, ScalarEvent)
+                    and not self.analysis.empty and not self.analysis.multiline)
+            or self.check_empty_sequence() or self.check_empty_mapping()))
+
+    # Anchor, Tag, and Scalar processors.
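+    #
+    # The processors write the node properties ('&anchor', '!tag') before the
+    # node content; prepared_anchor and prepared_tag cache work that may
+    # already have been done by check_simple_key().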
+
+    def process_anchor(self, indicator):
+        if self.event.anchor is None:
+            self.prepared_anchor = None
+            return
+        if self.prepared_anchor is None:
+            self.prepared_anchor = self.prepare_anchor(self.event.anchor)
+        if self.prepared_anchor:
+            self.write_indicator(indicator+self.prepared_anchor, True)
+        self.prepared_anchor = None
+
+    def process_tag(self):
+        tag = self.event.tag
+        if isinstance(self.event, ScalarEvent):
+            if self.style is None:
+                self.style = self.choose_scalar_style()
+            if ((not self.canonical or tag is None) and
+                ((self.style == '' and self.event.implicit[0])
+                        or (self.style != '' and self.event.implicit[1]))):
+                self.prepared_tag = None
+                return
+            if self.event.implicit[0] and tag is None:
+                tag = '!'
+                self.prepared_tag = None
+        else:
+            if (not self.canonical or tag is None) and self.event.implicit:
+                self.prepared_tag = None
+                return
+        if tag is None:
+            raise EmitterError("tag is not specified")
+        if self.prepared_tag is None:
+            self.prepared_tag = self.prepare_tag(tag)
+        if self.prepared_tag:
+            self.write_indicator(self.prepared_tag, True)
+        self.prepared_tag = None
+
+    def choose_scalar_style(self):
+        if self.analysis is None:
+            self.analysis = self.analyze_scalar(self.event.value)
+        if self.event.style == '"' or self.canonical:
+            return '"'
+        if not self.event.style and self.event.implicit[0]:
+            if (not (self.simple_key_context and
+                    (self.analysis.empty or self.analysis.multiline))
+                and (self.flow_level and self.analysis.allow_flow_plain
+                    or (not self.flow_level and self.analysis.allow_block_plain))):
+                return ''
+        if self.event.style and self.event.style in '|>':
+            if (not self.flow_level and not self.simple_key_context
+                    and self.analysis.allow_block):
+                return self.event.style
+        if not self.event.style or self.event.style == '\'':
+            if (self.analysis.allow_single_quoted and
+                    not (self.simple_key_context and self.analysis.multiline)):
+                return '\''
+        return '"'
+
+    def process_scalar(self):
+        if self.analysis is None:
+            self.analysis = self.analyze_scalar(self.event.value)
+        if self.style is None:
+            self.style = self.choose_scalar_style()
+        split = (not self.simple_key_context)
+        #if self.analysis.multiline and split    \
+        #        and (not self.style or self.style in '\'\"'):
+        #    self.write_indent()
+        if self.style == '"':
+            self.write_double_quoted(self.analysis.scalar, split)
+        elif self.style == '\'':
+            self.write_single_quoted(self.analysis.scalar, split)
+        elif self.style == '>':
+            self.write_folded(self.analysis.scalar)
+        elif self.style == '|':
+            self.write_literal(self.analysis.scalar)
+        else:
+            self.write_plain(self.analysis.scalar, split)
+        self.analysis = None
+        self.style = None
+
+    # Analyzers.
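+    #
+    # The prepare_*() methods validate and encode directives, tag handles,
+    # tags and anchors; analyze_scalar() records which output styles can
+    # represent a scalar, and choose_scalar_style() picks one accordingly.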
+
+    def prepare_version(self, version):
+        major, minor = version
+        if major != 1:
+            raise EmitterError("unsupported YAML version: %d.%d" % (major, minor))
+        return '%d.%d' % (major, minor)
+
+    def prepare_tag_handle(self, handle):
+        if not handle:
+            raise EmitterError("tag handle must not be empty")
+        if handle[0] != '!' or handle[-1] != '!':
+            raise EmitterError("tag handle must start and end with '!': %r" % handle)
+        for ch in handle[1:-1]:
+            if not ('0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z'    \
+                    or ch in '-_'):
+                raise EmitterError("invalid character %r in the tag handle: %r"
+                        % (ch, handle))
+        return handle
+
+    def prepare_tag_prefix(self, prefix):
+        if not prefix:
+            raise EmitterError("tag prefix must not be empty")
+        chunks = []
+        start = end = 0
+        if prefix[0] == '!':
+            end = 1
+        while end < len(prefix):
+            ch = prefix[end]
+            if '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' \
+                    or ch in '-;/?!:@&=+$,_.~*\'()[]':
+                end += 1
+            else:
+                if start < end:
+                    chunks.append(prefix[start:end])
+                start = end = end+1
+                data = ch.encode('utf-8')
+                for ch in data:
+                    chunks.append('%%%02X' % ord(ch))
+        if start < end:
+            chunks.append(prefix[start:end])
+        return ''.join(chunks)
+
+    def prepare_tag(self, tag):
+        if not tag:
+            raise EmitterError("tag must not be empty")
+        if tag == '!':
+            return tag
+        handle = None
+        suffix = tag
+        prefixes = sorted(self.tag_prefixes.keys())
+        for prefix in prefixes:
+            if tag.startswith(prefix)   \
+                    and (prefix == '!' or len(prefix) < len(tag)):
+                handle = self.tag_prefixes[prefix]
+                suffix = tag[len(prefix):]
+        chunks = []
+        start = end = 0
+        while end < len(suffix):
+            ch = suffix[end]
+            if '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' \
+                    or ch in '-;/?:@&=+$,_.~*\'()[]'   \
+                    or (ch == '!' and handle != '!'):
+                end += 1
+            else:
+                if start < end:
+                    chunks.append(suffix[start:end])
+                start = end = end+1
+                data = ch.encode('utf-8')
+                for ch in data:
+                    chunks.append('%%%02X' % ord(ch))
+        if start < end:
+            chunks.append(suffix[start:end])
+        suffix_text = ''.join(chunks)
+        if handle:
+            return '%s%s' % (handle, suffix_text)
+        else:
+            return '!<%s>' % suffix_text
+
+    def prepare_anchor(self, anchor):
+        if not anchor:
+            raise EmitterError("anchor must not be empty")
+        for ch in anchor:
+            if not ('0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z'    \
+                    or ch in '-_'):
+                raise EmitterError("invalid character %r in the anchor: %r"
+                        % (ch, anchor))
+        return anchor
+
+    def analyze_scalar(self, scalar):
+
+        # An empty scalar is a special case.
+        if not scalar:
+            return ScalarAnalysis(scalar=scalar, empty=True, multiline=False,
+                    allow_flow_plain=False, allow_block_plain=True,
+                    allow_single_quoted=True, allow_double_quoted=True,
+                    allow_block=False)
+
+        # Indicators and special characters.
+        block_indicators = False
+        flow_indicators = False
+        line_breaks = False
+        special_characters = False
+
+        # Important whitespace combinations.
+        leading_space = False
+        leading_break = False
+        trailing_space = False
+        trailing_break = False
+        break_space = False
+        space_break = False
+
+        # Check document indicators.
+        if scalar.startswith('---') or scalar.startswith('...'):
+            block_indicators = True
+            flow_indicators = True
+
+        # First character or preceded by whitespace.
+        preceded_by_whitespace = True
+
+        # Last character or followed by whitespace.
+        followed_by_whitespace = (len(scalar) == 1 or
+                scalar[1] in '\0 \t\r\n\x85\u2028\u2029')
+
+        # The previous character is a space.
+        previous_space = False
+
+        # The previous character is a break.
+        previous_break = False
+
+        index = 0
+        while index < len(scalar):
+            ch = scalar[index]
+
+            # Check for indicators.
+            if index == 0:
+                # Leading indicators are special characters.
+                if ch in '#,[]{}&*!|>\'\"%@`':
+                    flow_indicators = True
+                    block_indicators = True
+                if ch in '?:':
+                    flow_indicators = True
+                    if followed_by_whitespace:
+                        block_indicators = True
+                if ch == '-' and followed_by_whitespace:
+                    flow_indicators = True
+                    block_indicators = True
+            else:
+                # Some indicators cannot appear within a scalar either.
+                if ch in ',?[]{}':
+                    flow_indicators = True
+                if ch == ':':
+                    flow_indicators = True
+                    if followed_by_whitespace:
+                        block_indicators = True
+                if ch == '#' and preceded_by_whitespace:
+                    flow_indicators = True
+                    block_indicators = True
+
+            # Check for line breaks, special characters, and unicode characters.
+            if ch in '\n\x85\u2028\u2029':
+                line_breaks = True
+            if not (ch == '\n' or '\x20' <= ch <= '\x7E'):
+                if (ch == '\x85' or '\xA0' <= ch <= '\uD7FF'
+                        or '\uE000' <= ch <= '\uFFFD') and ch != '\uFEFF':
+                    unicode_characters = True
+                    if not self.allow_unicode:
+                        special_characters = True
+                else:
+                    special_characters = True
+
+            # Detect important whitespace combinations.
+            if ch == ' ':
+                if index == 0:
+                    leading_space = True
+                if index == len(scalar)-1:
+                    trailing_space = True
+                if previous_break:
+                    break_space = True
+                previous_space = True
+                previous_break = False
+            elif ch in '\n\x85\u2028\u2029':
+                if index == 0:
+                    leading_break = True
+                if index == len(scalar)-1:
+                    trailing_break = True
+                if previous_space:
+                    space_break = True
+                previous_space = False
+                previous_break = True
+            else:
+                previous_space = False
+                previous_break = False
+
+            # Prepare for the next character.
+            index += 1
+            preceded_by_whitespace = (ch in '\0 \t\r\n\x85\u2028\u2029')
+            followed_by_whitespace = (index+1 >= len(scalar) or
+                    scalar[index+1] in '\0 \t\r\n\x85\u2028\u2029')
+
+        # Let's decide what styles are allowed.
+        allow_flow_plain = True
+        allow_block_plain = True
+        allow_single_quoted = True
+        allow_double_quoted = True
+        allow_block = True
+
+        # Leading and trailing whitespaces are bad for plain scalars.
+        if (leading_space or leading_break
+                or trailing_space or trailing_break):
+            allow_flow_plain = allow_block_plain = False
+
+        # We do not permit trailing spaces for block scalars.
+        if trailing_space:
+            allow_block = False
+
+        # Spaces at the beginning of a new line are only acceptable for block
+        # scalars.
+        if break_space:
+            allow_flow_plain = allow_block_plain = allow_single_quoted = False
+
+        # Spaces followed by breaks, as well as special characters, are only
+        # allowed for double-quoted scalars.
+        if space_break or special_characters:
+            allow_flow_plain = allow_block_plain =  \
+            allow_single_quoted = allow_block = False
+
+        # Although the plain scalar writer supports breaks, we never emit
+        # multiline plain scalars.
+        if line_breaks:
+            allow_flow_plain = allow_block_plain = False
+
+        # Flow indicators are forbidden for flow plain scalars.
+        if flow_indicators:
+            allow_flow_plain = False
+
+        # Block indicators are forbidden for block plain scalars.
+        if block_indicators:
+            allow_block_plain = False
+
+        return ScalarAnalysis(scalar=scalar,
+                empty=False, multiline=line_breaks,
+                allow_flow_plain=allow_flow_plain,
+                allow_block_plain=allow_block_plain,
+                allow_single_quoted=allow_single_quoted,
+                allow_double_quoted=allow_double_quoted,
+                allow_block=allow_block)
+
+    # Writers.
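+    #
+    # The low-level writers put characters on the stream while tracking the
+    # current line and column and encoding the data when an output encoding
+    # is set.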
+
+    def flush_stream(self):
+        if hasattr(self.stream, 'flush'):
+            self.stream.flush()
+
+    def write_stream_start(self):
+        # Write BOM if needed.
+        if self.encoding and self.encoding.startswith('utf-16'):
+            self.stream.write('\uFEFF'.encode(self.encoding))
+
+    def write_stream_end(self):
+        self.flush_stream()
+
+    def write_indicator(self, indicator, need_whitespace,
+            whitespace=False, indention=False):
+        if self.whitespace or not need_whitespace:
+            data = indicator
+        else:
+            data = ' '+indicator
+        self.whitespace = whitespace
+        self.indention = self.indention and indention
+        self.column += len(data)
+        self.open_ended = False
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+
+    def write_indent(self):
+        indent = self.indent or 0
+        if not self.indention or self.column > indent   \
+                or (self.column == indent and not self.whitespace):
+            self.write_line_break()
+        if self.column < indent:
+            self.whitespace = True
+            data = ' '*(indent-self.column)
+            self.column = indent
+            if self.encoding:
+                data = data.encode(self.encoding)
+            self.stream.write(data)
+
+    def write_line_break(self, data=None):
+        if data is None:
+            data = self.best_line_break
+        self.whitespace = True
+        self.indention = True
+        self.line += 1
+        self.column = 0
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+
+    def write_version_directive(self, version_text):
+        data = '%%YAML %s' % version_text
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+        self.write_line_break()
+
+    def write_tag_directive(self, handle_text, prefix_text):
+        data = '%%TAG %s %s' % (handle_text, prefix_text)
+        if self.encoding:
+            data = data.encode(self.encoding)
+        self.stream.write(data)
+        self.write_line_break()
+
+    # Scalar streams.
+
+    def write_single_quoted(self, text, split=True):
+        self.write_indicator('\'', True)
+        spaces = False
+        breaks = False
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if spaces:
+                if ch is None or ch != ' ':
+                    if start+1 == end and self.column > self.best_width and split   \
+                            and start != 0 and end != len(text):
+                        self.write_indent()
+                    else:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                    start = end
+            elif breaks:
+                if ch is None or ch not in '\n\x85\u2028\u2029':
+                    if text[start] == '\n':
+                        self.write_line_break()
+                    for br in text[start:end]:
+                        if br == '\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    self.write_indent()
+                    start = end
+            else:
+                if ch is None or ch in ' \n\x85\u2028\u2029' or ch == '\'':
+                    if start < end:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                        start = end
+            if ch == '\'':
+                data = '\'\''
+                self.column += 2
+                if self.encoding:
+                    data = data.encode(self.encoding)
+                self.stream.write(data)
+                start = end + 1
+            if ch is not None:
+                spaces = (ch == ' ')
+                breaks = (ch in '\n\x85\u2028\u2029')
+            end += 1
+        self.write_indicator('\'', False)
+
+    ESCAPE_REPLACEMENTS = {
+        '\0':       '0',
+        '\x07':     'a',
+        '\x08':     'b',
+        '\x09':     't',
+        '\x0A':     'n',
+        '\x0B':     'v',
+        '\x0C':     'f',
+        '\x0D':     'r',
+        '\x1B':     'e',
+        '\"':       '\"',
+        '\\':       '\\',
+        '\x85':     'N',
+        '\xA0':     '_',
+        '\u2028':   'L',
+        '\u2029':   'P',
+    }
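+    # For instance, a tab character in a double-quoted scalar is written as
+    # the two-character escape '\t', and U+2028 as '\L'.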
+
+    def write_double_quoted(self, text, split=True):
+        self.write_indicator('"', True)
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if ch is None or ch in '"\\\x85\u2028\u2029\uFEFF' \
+                    or not ('\x20' <= ch <= '\x7E'
+                        or (self.allow_unicode
+                            and ('\xA0' <= ch <= '\uD7FF'
+                                or '\uE000' <= ch <= '\uFFFD'))):
+                if start < end:
+                    data = text[start:end]
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    start = end
+                if ch is not None:
+                    if ch in self.ESCAPE_REPLACEMENTS:
+                        data = '\\'+self.ESCAPE_REPLACEMENTS[ch]
+                    elif ch <= '\xFF':
+                        data = '\\x%02X' % ord(ch)
+                    elif ch <= '\uFFFF':
+                        data = '\\u%04X' % ord(ch)
+                    else:
+                        data = '\\U%08X' % ord(ch)
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    start = end+1
+            if 0 < end < len(text)-1 and (ch == ' ' or start >= end)    \
+                    and self.column+(end-start) > self.best_width and split:
+                data = text[start:end]+'\\'
+                if start < end:
+                    start = end
+                self.column += len(data)
+                if self.encoding:
+                    data = data.encode(self.encoding)
+                self.stream.write(data)
+                self.write_indent()
+                self.whitespace = False
+                self.indention = False
+                if text[start] == ' ':
+                    data = '\\'
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+            end += 1
+        self.write_indicator('"', False)
+
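+    # Block scalar headers, roughly: an indentation digit when the text starts
+    # with a space or break, '-' (strip) when it lacks a final line break, and
+    # '+' (keep) when extra trailing breaks must be preserved, e.g. '|2' or '>+'.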
+    def determine_block_hints(self, text):
+        hints = ''
+        if text:
+            if text[0] in ' \n\x85\u2028\u2029':
+                hints += str(self.best_indent)
+            if text[-1] not in '\n\x85\u2028\u2029':
+                hints += '-'
+            elif len(text) == 1 or text[-2] in '\n\x85\u2028\u2029':
+                hints += '+'
+        return hints
+
+    def write_folded(self, text):
+        hints = self.determine_block_hints(text)
+        self.write_indicator('>'+hints, True)
+        if hints[-1:] == '+':
+            self.open_ended = True
+        self.write_line_break()
+        leading_space = True
+        spaces = False
+        breaks = True
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if breaks:
+                if ch is None or ch not in '\n\x85\u2028\u2029':
+                    if not leading_space and ch is not None and ch != ' '   \
+                            and text[start] == '\n':
+                        self.write_line_break()
+                    leading_space = (ch == ' ')
+                    for br in text[start:end]:
+                        if br == '\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    if ch is not None:
+                        self.write_indent()
+                    start = end
+            elif spaces:
+                if ch != ' ':
+                    if start+1 == end and self.column > self.best_width:
+                        self.write_indent()
+                    else:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                    start = end
+            else:
+                if ch is None or ch in ' \n\x85\u2028\u2029':
+                    data = text[start:end]
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    if ch is None:
+                        self.write_line_break()
+                    start = end
+            if ch is not None:
+                breaks = (ch in '\n\x85\u2028\u2029')
+                spaces = (ch == ' ')
+            end += 1
+
+    def write_literal(self, text):
+        hints = self.determine_block_hints(text)
+        self.write_indicator('|'+hints, True)
+        if hints[-1:] == '+':
+            self.open_ended = True
+        self.write_line_break()
+        breaks = True
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if breaks:
+                if ch is None or ch not in '\n\x85\u2028\u2029':
+                    for br in text[start:end]:
+                        if br == '\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    if ch is not None:
+                        self.write_indent()
+                    start = end
+            else:
+                if ch is None or ch in '\n\x85\u2028\u2029':
+                    data = text[start:end]
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    if ch is None:
+                        self.write_line_break()
+                    start = end
+            if ch is not None:
+                breaks = (ch in '\n\x85\u2028\u2029')
+            end += 1
+
+    def write_plain(self, text, split=True):
+        if self.root_context:
+            self.open_ended = True
+        if not text:
+            return
+        if not self.whitespace:
+            data = ' '
+            self.column += len(data)
+            if self.encoding:
+                data = data.encode(self.encoding)
+            self.stream.write(data)
+        self.whitespace = False
+        self.indention = False
+        spaces = False
+        breaks = False
+        start = end = 0
+        while end <= len(text):
+            ch = None
+            if end < len(text):
+                ch = text[end]
+            if spaces:
+                if ch != ' ':
+                    if start+1 == end and self.column > self.best_width and split:
+                        self.write_indent()
+                        self.whitespace = False
+                        self.indention = False
+                    else:
+                        data = text[start:end]
+                        self.column += len(data)
+                        if self.encoding:
+                            data = data.encode(self.encoding)
+                        self.stream.write(data)
+                    start = end
+            elif breaks:
+                if ch not in '\n\x85\u2028\u2029':
+                    if text[start] == '\n':
+                        self.write_line_break()
+                    for br in text[start:end]:
+                        if br == '\n':
+                            self.write_line_break()
+                        else:
+                            self.write_line_break(br)
+                    self.write_indent()
+                    self.whitespace = False
+                    self.indention = False
+                    start = end
+            else:
+                if ch is None or ch in ' \n\x85\u2028\u2029':
+                    data = text[start:end]
+                    self.column += len(data)
+                    if self.encoding:
+                        data = data.encode(self.encoding)
+                    self.stream.write(data)
+                    start = end
+            if ch is not None:
+                spaces = (ch == ' ')
+                breaks = (ch in '\n\x85\u2028\u2029')
+            end += 1
+
diff --git a/lib3/yaml/error.py b/lib3/yaml/error.py
new file mode 100644
index 0000000..b796b4d
--- /dev/null
+++ b/lib3/yaml/error.py
@@ -0,0 +1,75 @@
+
+__all__ = ['Mark', 'YAMLError', 'MarkedYAMLError']
+
+class Mark:
+
+    def __init__(self, name, index, line, column, buffer, pointer):
+        self.name = name
+        self.index = index
+        self.line = line
+        self.column = column
+        self.buffer = buffer
+        self.pointer = pointer
+
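+    # A mark renders like '  in "<file>", line 3, column 9', followed by a
+    # snippet of the offending line with a '^' pointer underneath.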
+    def get_snippet(self, indent=4, max_length=75):
+        if self.buffer is None:
+            return None
+        head = ''
+        start = self.pointer
+        while start > 0 and self.buffer[start-1] not in '\0\r\n\x85\u2028\u2029':
+            start -= 1
+            if self.pointer-start > max_length/2-1:
+                head = ' ... '
+                start += 5
+                break
+        tail = ''
+        end = self.pointer
+        while end < len(self.buffer) and self.buffer[end] not in '\0\r\n\x85\u2028\u2029':
+            end += 1
+            if end-self.pointer > max_length/2-1:
+                tail = ' ... '
+                end -= 5
+                break
+        snippet = self.buffer[start:end]
+        return ' '*indent + head + snippet + tail + '\n'  \
+                + ' '*(indent+self.pointer-start+len(head)) + '^'
+
+    def __str__(self):
+        snippet = self.get_snippet()
+        where = "  in \"%s\", line %d, column %d"   \
+                % (self.name, self.line+1, self.column+1)
+        if snippet is not None:
+            where += ":\n"+snippet
+        return where
+
+class YAMLError(Exception):
+    pass
+
+class MarkedYAMLError(YAMLError):
+
+    def __init__(self, context=None, context_mark=None,
+            problem=None, problem_mark=None, note=None):
+        self.context = context
+        self.context_mark = context_mark
+        self.problem = problem
+        self.problem_mark = problem_mark
+        self.note = note
+
+    def __str__(self):
+        lines = []
+        if self.context is not None:
+            lines.append(self.context)
+        if self.context_mark is not None  \
+            and (self.problem is None or self.problem_mark is None
+                    or self.context_mark.name != self.problem_mark.name
+                    or self.context_mark.line != self.problem_mark.line
+                    or self.context_mark.column != self.problem_mark.column):
+            lines.append(str(self.context_mark))
+        if self.problem is not None:
+            lines.append(self.problem)
+        if self.problem_mark is not None:
+            lines.append(str(self.problem_mark))
+        if self.note is not None:
+            lines.append(self.note)
+        return '\n'.join(lines)
+
diff --git a/lib3/yaml/events.py b/lib3/yaml/events.py
new file mode 100644
index 0000000..f79ad38
--- /dev/null
+++ b/lib3/yaml/events.py
@@ -0,0 +1,86 @@
+
+# Abstract classes.
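+#
+# Events are produced by the parser and consumed by the emitter; each
+# *StartEvent is eventually matched by the corresponding *EndEvent.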
+
+class Event(object):
+    def __init__(self, start_mark=None, end_mark=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+    def __repr__(self):
+        attributes = [key for key in ['anchor', 'tag', 'implicit', 'value']
+                if hasattr(self, key)]
+        arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
+                for key in attributes])
+        return '%s(%s)' % (self.__class__.__name__, arguments)
+
+class NodeEvent(Event):
+    def __init__(self, anchor, start_mark=None, end_mark=None):
+        self.anchor = anchor
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class CollectionStartEvent(NodeEvent):
+    def __init__(self, anchor, tag, implicit, start_mark=None, end_mark=None,
+            flow_style=None):
+        self.anchor = anchor
+        self.tag = tag
+        self.implicit = implicit
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.flow_style = flow_style
+
+class CollectionEndEvent(Event):
+    pass
+
+# Implementations.
+
+class StreamStartEvent(Event):
+    def __init__(self, start_mark=None, end_mark=None, encoding=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.encoding = encoding
+
+class StreamEndEvent(Event):
+    pass
+
+class DocumentStartEvent(Event):
+    def __init__(self, start_mark=None, end_mark=None,
+            explicit=None, version=None, tags=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.explicit = explicit
+        self.version = version
+        self.tags = tags
+
+class DocumentEndEvent(Event):
+    def __init__(self, start_mark=None, end_mark=None,
+            explicit=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.explicit = explicit
+
+class AliasEvent(NodeEvent):
+    pass
+
+class ScalarEvent(NodeEvent):
+    def __init__(self, anchor, tag, implicit, value,
+            start_mark=None, end_mark=None, style=None):
+        self.anchor = anchor
+        self.tag = tag
+        self.implicit = implicit
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.style = style
+
+class SequenceStartEvent(CollectionStartEvent):
+    pass
+
+class SequenceEndEvent(CollectionEndEvent):
+    pass
+
+class MappingStartEvent(CollectionStartEvent):
+    pass
+
+class MappingEndEvent(CollectionEndEvent):
+    pass
+
diff --git a/lib3/yaml/loader.py b/lib3/yaml/loader.py
new file mode 100644
index 0000000..08c8f01
--- /dev/null
+++ b/lib3/yaml/loader.py
@@ -0,0 +1,40 @@
+
+__all__ = ['BaseLoader', 'SafeLoader', 'Loader']
+
+from .reader import *
+from .scanner import *
+from .parser import *
+from .composer import *
+from .constructor import *
+from .resolver import *
+
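+# Each loader assembles the whole processing chain (reader -> scanner ->
+# parser -> composer -> constructor -> resolver) via multiple inheritance;
+# the classes differ only in the constructor and resolver they mix in.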
+class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, BaseResolver):
+
+    def __init__(self, stream):
+        Reader.__init__(self, stream)
+        Scanner.__init__(self)
+        Parser.__init__(self)
+        Composer.__init__(self)
+        BaseConstructor.__init__(self)
+        BaseResolver.__init__(self)
+
+class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):
+
+    def __init__(self, stream):
+        Reader.__init__(self, stream)
+        Scanner.__init__(self)
+        Parser.__init__(self)
+        Composer.__init__(self)
+        SafeConstructor.__init__(self)
+        Resolver.__init__(self)
+
+class Loader(Reader, Scanner, Parser, Composer, Constructor, Resolver):
+
+    def __init__(self, stream):
+        Reader.__init__(self, stream)
+        Scanner.__init__(self)
+        Parser.__init__(self)
+        Composer.__init__(self)
+        Constructor.__init__(self)
+        Resolver.__init__(self)
+
diff --git a/lib3/yaml/nodes.py b/lib3/yaml/nodes.py
new file mode 100644
index 0000000..c4f070c
--- /dev/null
+++ b/lib3/yaml/nodes.py
@@ -0,0 +1,49 @@
+
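+# Nodes form the representation graph produced by the composer: a ScalarNode
+# holds a string value, a SequenceNode a list of child nodes, and a
+# MappingNode a list of (key node, value node) pairs.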
+class Node(object):
+    def __init__(self, tag, value, start_mark, end_mark):
+        self.tag = tag
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+    def __repr__(self):
+        value = self.value
+        #if isinstance(value, list):
+        #    if len(value) == 0:
+        #        value = '<empty>'
+        #    elif len(value) == 1:
+        #        value = '<1 item>'
+        #    else:
+        #        value = '<%d items>' % len(value)
+        #else:
+        #    if len(value) > 75:
+        #        value = repr(value[:70]+u' ... ')
+        #    else:
+        #        value = repr(value)
+        value = repr(value)
+        return '%s(tag=%r, value=%s)' % (self.__class__.__name__, self.tag, value)
+
+class ScalarNode(Node):
+    id = 'scalar'
+    def __init__(self, tag, value,
+            start_mark=None, end_mark=None, style=None):
+        self.tag = tag
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.style = style
+
+class CollectionNode(Node):
+    def __init__(self, tag, value,
+            start_mark=None, end_mark=None, flow_style=None):
+        self.tag = tag
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.flow_style = flow_style
+
+class SequenceNode(CollectionNode):
+    id = 'sequence'
+
+class MappingNode(CollectionNode):
+    id = 'mapping'
+
diff --git a/lib3/yaml/parser.py b/lib3/yaml/parser.py
new file mode 100644
index 0000000..13a5995
--- /dev/null
+++ b/lib3/yaml/parser.py
@@ -0,0 +1,589 @@
+
+# The following YAML grammar is LL(1) and is parsed by a recursive descent
+# parser.
+#
+# stream            ::= STREAM-START implicit_document? explicit_document* STREAM-END
+# implicit_document ::= block_node DOCUMENT-END*
+# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+# block_node_or_indentless_sequence ::=
+#                       ALIAS
+#                       | properties (block_content | indentless_block_sequence)?
+#                       | block_content
+#                       | indentless_block_sequence
+# block_node        ::= ALIAS
+#                       | properties block_content?
+#                       | block_content
+# flow_node         ::= ALIAS
+#                       | properties flow_content?
+#                       | flow_content
+# properties        ::= TAG ANCHOR? | ANCHOR TAG?
+# block_content     ::= block_collection | flow_collection | SCALAR
+# flow_content      ::= flow_collection | SCALAR
+# block_collection  ::= block_sequence | block_mapping
+# flow_collection   ::= flow_sequence | flow_mapping
+# block_sequence    ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
+# indentless_sequence   ::= (BLOCK-ENTRY block_node?)+
+# block_mapping     ::= BLOCK-MAPPING-START
+#                       ((KEY block_node_or_indentless_sequence?)?
+#                       (VALUE block_node_or_indentless_sequence?)?)*
+#                       BLOCK-END
+# flow_sequence     ::= FLOW-SEQUENCE-START
+#                       (flow_sequence_entry FLOW-ENTRY)*
+#                       flow_sequence_entry?
+#                       FLOW-SEQUENCE-END
+# flow_sequence_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+# flow_mapping      ::= FLOW-MAPPING-START
+#                       (flow_mapping_entry FLOW-ENTRY)*
+#                       flow_mapping_entry?
+#                       FLOW-MAPPING-END
+# flow_mapping_entry    ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+#
+# FIRST sets:
+#
+# stream: { STREAM-START }
+# explicit_document: { DIRECTIVE DOCUMENT-START }
+# implicit_document: FIRST(block_node)
+# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
+# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
+# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
+# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
+# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
+# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
+# block_sequence: { BLOCK-SEQUENCE-START }
+# block_mapping: { BLOCK-MAPPING-START }
+# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START BLOCK-ENTRY }
+# indentless_sequence: { BLOCK-ENTRY }
+# flow_sequence: { FLOW-SEQUENCE-START }
+# flow_mapping: { FLOW-MAPPING-START }
+# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
+# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
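+#
+# For example, the document 'a: [b, c]' produces, in order, the events
+# StreamStart, DocumentStart, MappingStart, Scalar(a), SequenceStart,
+# Scalar(b), Scalar(c), SequenceEnd, MappingEnd, DocumentEnd, StreamEnd.
+# A rough usage sketch, via the package-level yaml.parse() helper:
+#
+#     for event in yaml.parse('a: [b, c]'):
+#         print(event)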
+
+__all__ = ['Parser', 'ParserError']
+
+from .error import MarkedYAMLError
+from .tokens import *
+from .events import *
+from .scanner import *
+
+class ParserError(MarkedYAMLError):
+    pass
+
+class Parser:
+    # Since writing a recursive descent parser is a straightforward task, we
+    # do not give many comments here.
+
+    DEFAULT_TAGS = {
+        '!':   '!',
+        '!!':  'tag:yaml.org,2002:',
+    }
+
+    def __init__(self):
+        self.current_event = None
+        self.yaml_version = None
+        self.tag_handles = {}
+        self.states = []
+        self.marks = []
+        self.state = self.parse_stream_start
+
+    def dispose(self):
+        # Reset the state attributes (to clear self-references)
+        self.states = []
+        self.state = None
+
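+    # The event interface mirrors the scanner's token interface: check_event()
+    # tests the type of the next event, peek_event() returns it without
+    # consuming it, and get_event() consumes it. Events are produced lazily
+    # by calling the current self.state.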
+    def check_event(self, *choices):
+        # Check the type of the next event.
+        if self.current_event is None:
+            if self.state:
+                self.current_event = self.state()
+        if self.current_event is not None:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.current_event, choice):
+                    return True
+        return False
+
+    def peek_event(self):
+        # Get the next event.
+        if self.current_event is None:
+            if self.state:
+                self.current_event = self.state()
+        return self.current_event
+
+    def get_event(self):
+        # Get the next event and proceed further.
+        if self.current_event is None:
+            if self.state:
+                self.current_event = self.state()
+        value = self.current_event
+        self.current_event = None
+        return value
+
+    # stream    ::= STREAM-START implicit_document? explicit_document* STREAM-END
+    # implicit_document ::= block_node DOCUMENT-END*
+    # explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+
+    def parse_stream_start(self):
+
+        # Parse the stream start.
+        token = self.get_token()
+        event = StreamStartEvent(token.start_mark, token.end_mark,
+                encoding=token.encoding)
+
+        # Prepare the next state.
+        self.state = self.parse_implicit_document_start
+
+        return event
+
+    def parse_implicit_document_start(self):
+
+        # Parse an implicit document.
+        if not self.check_token(DirectiveToken, DocumentStartToken,
+                StreamEndToken):
+            self.tag_handles = self.DEFAULT_TAGS
+            token = self.peek_token()
+            start_mark = end_mark = token.start_mark
+            event = DocumentStartEvent(start_mark, end_mark,
+                    explicit=False)
+
+            # Prepare the next state.
+            self.states.append(self.parse_document_end)
+            self.state = self.parse_block_node
+
+            return event
+
+        else:
+            return self.parse_document_start()
+
+    def parse_document_start(self):
+
+        # Parse any extra document end indicators.
+        while self.check_token(DocumentEndToken):
+            self.get_token()
+
+        # Parse an explicit document.
+        if not self.check_token(StreamEndToken):
+            token = self.peek_token()
+            start_mark = token.start_mark
+            version, tags = self.process_directives()
+            if not self.check_token(DocumentStartToken):
+                raise ParserError(None, None,
+                        "expected '<document start>', but found %r"
+                        % self.peek_token().id,
+                        self.peek_token().start_mark)
+            token = self.get_token()
+            end_mark = token.end_mark
+            event = DocumentStartEvent(start_mark, end_mark,
+                    explicit=True, version=version, tags=tags)
+            self.states.append(self.parse_document_end)
+            self.state = self.parse_document_content
+        else:
+            # Parse the end of the stream.
+            token = self.get_token()
+            event = StreamEndEvent(token.start_mark, token.end_mark)
+            assert not self.states
+            assert not self.marks
+            self.state = None
+        return event
+
+    def parse_document_end(self):
+
+        # Parse the document end.
+        token = self.peek_token()
+        start_mark = end_mark = token.start_mark
+        explicit = False
+        if self.check_token(DocumentEndToken):
+            token = self.get_token()
+            end_mark = token.end_mark
+            explicit = True
+        event = DocumentEndEvent(start_mark, end_mark,
+                explicit=explicit)
+
+        # Prepare the next state.
+        self.state = self.parse_document_start
+
+        return event
+
+    def parse_document_content(self):
+        if self.check_token(DirectiveToken,
+                DocumentStartToken, DocumentEndToken, StreamEndToken):
+            event = self.process_empty_scalar(self.peek_token().start_mark)
+            self.state = self.states.pop()
+            return event
+        else:
+            return self.parse_block_node()
+
+    def process_directives(self):
+        self.yaml_version = None
+        self.tag_handles = {}
+        while self.check_token(DirectiveToken):
+            token = self.get_token()
+            if token.name == 'YAML':
+                if self.yaml_version is not None:
+                    raise ParserError(None, None,
+                            "found duplicate YAML directive", token.start_mark)
+                major, minor = token.value
+                if major != 1:
+                    raise ParserError(None, None,
+                            "found incompatible YAML document (version 1.* is required)",
+                            token.start_mark)
+                self.yaml_version = token.value
+            elif token.name == 'TAG':
+                handle, prefix = token.value
+                if handle in self.tag_handles:
+                    raise ParserError(None, None,
+                            "duplicate tag handle %r" % handle,
+                            token.start_mark)
+                self.tag_handles[handle] = prefix
+        if self.tag_handles:
+            value = self.yaml_version, self.tag_handles.copy()
+        else:
+            value = self.yaml_version, None
+        for key in self.DEFAULT_TAGS:
+            if key not in self.tag_handles:
+                self.tag_handles[key] = self.DEFAULT_TAGS[key]
+        return value
+
+    # block_node_or_indentless_sequence ::= ALIAS
+    #               | properties (block_content | indentless_block_sequence)?
+    #               | block_content
+    #               | indentless_block_sequence
+    # block_node    ::= ALIAS
+    #                   | properties block_content?
+    #                   | block_content
+    # flow_node     ::= ALIAS
+    #                   | properties flow_content?
+    #                   | flow_content
+    # properties    ::= TAG ANCHOR? | ANCHOR TAG?
+    # block_content     ::= block_collection | flow_collection | SCALAR
+    # flow_content      ::= flow_collection | SCALAR
+    # block_collection  ::= block_sequence | block_mapping
+    # flow_collection   ::= flow_sequence | flow_mapping
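+    #
+    # For example (illustrative), the block node '!!python/tuple &a [1, 2]'
+    # consists of properties (TAG, ANCHOR) followed by a flow_collection.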
+
+    def parse_block_node(self):
+        return self.parse_node(block=True)
+
+    def parse_flow_node(self):
+        return self.parse_node()
+
+    def parse_block_node_or_indentless_sequence(self):
+        return self.parse_node(block=True, indentless_sequence=True)
+
+    def parse_node(self, block=False, indentless_sequence=False):
+        if self.check_token(AliasToken):
+            token = self.get_token()
+            event = AliasEvent(token.value, token.start_mark, token.end_mark)
+            self.state = self.states.pop()
+        else:
+            anchor = None
+            tag = None
+            start_mark = end_mark = tag_mark = None
+            if self.check_token(AnchorToken):
+                token = self.get_token()
+                start_mark = token.start_mark
+                end_mark = token.end_mark
+                anchor = token.value
+                if self.check_token(TagToken):
+                    token = self.get_token()
+                    tag_mark = token.start_mark
+                    end_mark = token.end_mark
+                    tag = token.value
+            elif self.check_token(TagToken):
+                token = self.get_token()
+                start_mark = tag_mark = token.start_mark
+                end_mark = token.end_mark
+                tag = token.value
+                if self.check_token(AnchorToken):
+                    token = self.get_token()
+                    end_mark = token.end_mark
+                    anchor = token.value
+            if tag is not None:
+                handle, suffix = tag
+                if handle is not None:
+                    if handle not in self.tag_handles:
+                        raise ParserError("while parsing a node", start_mark,
+                                "found undefined tag handle %r" % handle,
+                                tag_mark)
+                    tag = self.tag_handles[handle]+suffix
+                else:
+                    tag = suffix
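+            # For example (illustrative): '!!str' is scanned as the pair
+            # (handle='!!', suffix='str') and, with the default handles,
+            # resolves to 'tag:yaml.org,2002:str'.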
+            #if tag == '!':
+            #    raise ParserError("while parsing a node", start_mark,
+            #            "found non-specific tag '!'", tag_mark,
+            #            "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag' and share your opinion.")
+            if start_mark is None:
+                start_mark = end_mark = self.peek_token().start_mark
+            event = None
+            implicit = (tag is None or tag == '!')
+            if indentless_sequence and self.check_token(BlockEntryToken):
+                end_mark = self.peek_token().end_mark
+                event = SequenceStartEvent(anchor, tag, implicit,
+                        start_mark, end_mark)
+                self.state = self.parse_indentless_sequence_entry
+            else:
+                if self.check_token(ScalarToken):
+                    token = self.get_token()
+                    end_mark = token.end_mark
+                    if (token.plain and tag is None) or tag == '!':
+                        implicit = (True, False)
+                    elif tag is None:
+                        implicit = (False, True)
+                    else:
+                        implicit = (False, False)
+                    event = ScalarEvent(anchor, tag, implicit, token.value,
+                            start_mark, end_mark, style=token.style)
+                    self.state = self.states.pop()
+                elif self.check_token(FlowSequenceStartToken):
+                    end_mark = self.peek_token().end_mark
+                    event = SequenceStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=True)
+                    self.state = self.parse_flow_sequence_first_entry
+                elif self.check_token(FlowMappingStartToken):
+                    end_mark = self.peek_token().end_mark
+                    event = MappingStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=True)
+                    self.state = self.parse_flow_mapping_first_key
+                elif block and self.check_token(BlockSequenceStartToken):
+                    end_mark = self.peek_token().start_mark
+                    event = SequenceStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=False)
+                    self.state = self.parse_block_sequence_first_entry
+                elif block and self.check_token(BlockMappingStartToken):
+                    end_mark = self.peek_token().start_mark
+                    event = MappingStartEvent(anchor, tag, implicit,
+                            start_mark, end_mark, flow_style=False)
+                    self.state = self.parse_block_mapping_first_key
+                elif anchor is not None or tag is not None:
+                    # Empty scalars are allowed even if a tag or an anchor is
+                    # specified.
+                    event = ScalarEvent(anchor, tag, (implicit, False), '',
+                            start_mark, end_mark)
+                    self.state = self.states.pop()
+                else:
+                    if block:
+                        node = 'block'
+                    else:
+                        node = 'flow'
+                    token = self.peek_token()
+                    raise ParserError("while parsing a %s node" % node, start_mark,
+                            "expected the node content, but found %r" % token.id,
+                            token.start_mark)
+        return event
+
+    # block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
+
+    def parse_block_sequence_first_entry(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_block_sequence_entry()
+
+    def parse_block_sequence_entry(self):
+        if self.check_token(BlockEntryToken):
+            token = self.get_token()
+            if not self.check_token(BlockEntryToken, BlockEndToken):
+                self.states.append(self.parse_block_sequence_entry)
+                return self.parse_block_node()
+            else:
+                self.state = self.parse_block_sequence_entry
+                return self.process_empty_scalar(token.end_mark)
+        if not self.check_token(BlockEndToken):
+            token = self.peek_token()
+            raise ParserError("while parsing a block collection", self.marks[-1],
+                    "expected <block end>, but found %r" % token.id, token.start_mark)
+        token = self.get_token()
+        event = SequenceEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    # indentless_sequence ::= (BLOCK-ENTRY block_node?)+
+
+    def parse_indentless_sequence_entry(self):
+        if self.check_token(BlockEntryToken):
+            token = self.get_token()
+            if not self.check_token(BlockEntryToken,
+                    KeyToken, ValueToken, BlockEndToken):
+                self.states.append(self.parse_indentless_sequence_entry)
+                return self.parse_block_node()
+            else:
+                self.state = self.parse_indentless_sequence_entry
+                return self.process_empty_scalar(token.end_mark)
+        token = self.peek_token()
+        event = SequenceEndEvent(token.start_mark, token.start_mark)
+        self.state = self.states.pop()
+        return event
+
+    # block_mapping     ::= BLOCK-MAPPING-START
+    #                       ((KEY block_node_or_indentless_sequence?)?
+    #                       (VALUE block_node_or_indentless_sequence?)?)*
+    #                       BLOCK-END
+
+    def parse_block_mapping_first_key(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_block_mapping_key()
+
+    def parse_block_mapping_key(self):
+        if self.check_token(KeyToken):
+            token = self.get_token()
+            if not self.check_token(KeyToken, ValueToken, BlockEndToken):
+                self.states.append(self.parse_block_mapping_value)
+                return self.parse_block_node_or_indentless_sequence()
+            else:
+                self.state = self.parse_block_mapping_value
+                return self.process_empty_scalar(token.end_mark)
+        if not self.check_token(BlockEndToken):
+            token = self.peek_token()
+            raise ParserError("while parsing a block mapping", self.marks[-1],
+                    "expected <block end>, but found %r" % token.id, token.start_mark)
+        token = self.get_token()
+        event = MappingEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    def parse_block_mapping_value(self):
+        if self.check_token(ValueToken):
+            token = self.get_token()
+            if not self.check_token(KeyToken, ValueToken, BlockEndToken):
+                self.states.append(self.parse_block_mapping_key)
+                return self.parse_block_node_or_indentless_sequence()
+            else:
+                self.state = self.parse_block_mapping_key
+                return self.process_empty_scalar(token.end_mark)
+        else:
+            self.state = self.parse_block_mapping_key
+            token = self.peek_token()
+            return self.process_empty_scalar(token.start_mark)
+
+    # flow_sequence     ::= FLOW-SEQUENCE-START
+    #                       (flow_sequence_entry FLOW-ENTRY)*
+    #                       flow_sequence_entry?
+    #                       FLOW-SEQUENCE-END
+    # flow_sequence_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+    #
+    # Note that while the production rules for flow_sequence_entry and
+    # flow_mapping_entry are the same, their interpretations are different.
+    # For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
+    # generates an inline mapping (set syntax).
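+    # For example (illustrative), the flow sequence '[ a: 1, b ]' yields a
+    # sequence whose first entry is the single-pair mapping {a: 1}.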
+
+    def parse_flow_sequence_first_entry(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_flow_sequence_entry(first=True)
+
+    def parse_flow_sequence_entry(self, first=False):
+        if not self.check_token(FlowSequenceEndToken):
+            if not first:
+                if self.check_token(FlowEntryToken):
+                    self.get_token()
+                else:
+                    token = self.peek_token()
+                    raise ParserError("while parsing a flow sequence", self.marks[-1],
+                            "expected ',' or ']', but got %r" % token.id, token.start_mark)
+
+            if self.check_token(KeyToken):
+                token = self.peek_token()
+                event = MappingStartEvent(None, None, True,
+                        token.start_mark, token.end_mark,
+                        flow_style=True)
+                self.state = self.parse_flow_sequence_entry_mapping_key
+                return event
+            elif not self.check_token(FlowSequenceEndToken):
+                self.states.append(self.parse_flow_sequence_entry)
+                return self.parse_flow_node()
+        token = self.get_token()
+        event = SequenceEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    def parse_flow_sequence_entry_mapping_key(self):
+        token = self.get_token()
+        if not self.check_token(ValueToken,
+                FlowEntryToken, FlowSequenceEndToken):
+            self.states.append(self.parse_flow_sequence_entry_mapping_value)
+            return self.parse_flow_node()
+        else:
+            self.state = self.parse_flow_sequence_entry_mapping_value
+            return self.process_empty_scalar(token.end_mark)
+
+    def parse_flow_sequence_entry_mapping_value(self):
+        if self.check_token(ValueToken):
+            token = self.get_token()
+            if not self.check_token(FlowEntryToken, FlowSequenceEndToken):
+                self.states.append(self.parse_flow_sequence_entry_mapping_end)
+                return self.parse_flow_node()
+            else:
+                self.state = self.parse_flow_sequence_entry_mapping_end
+                return self.process_empty_scalar(token.end_mark)
+        else:
+            self.state = self.parse_flow_sequence_entry_mapping_end
+            token = self.peek_token()
+            return self.process_empty_scalar(token.start_mark)
+
+    def parse_flow_sequence_entry_mapping_end(self):
+        self.state = self.parse_flow_sequence_entry
+        token = self.peek_token()
+        return MappingEndEvent(token.start_mark, token.start_mark)
+
+    # flow_mapping  ::= FLOW-MAPPING-START
+    #                   (flow_mapping_entry FLOW-ENTRY)*
+    #                   flow_mapping_entry?
+    #                   FLOW-MAPPING-END
+    # flow_mapping_entry    ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+
+    def parse_flow_mapping_first_key(self):
+        token = self.get_token()
+        self.marks.append(token.start_mark)
+        return self.parse_flow_mapping_key(first=True)
+
+    def parse_flow_mapping_key(self, first=False):
+        if not self.check_token(FlowMappingEndToken):
+            if not first:
+                if self.check_token(FlowEntryToken):
+                    self.get_token()
+                else:
+                    token = self.peek_token()
+                    raise ParserError("while parsing a flow mapping", self.marks[-1],
+                            "expected ',' or '}', but got %r" % token.id, token.start_mark)
+            if self.check_token(KeyToken):
+                token = self.get_token()
+                if not self.check_token(ValueToken,
+                        FlowEntryToken, FlowMappingEndToken):
+                    self.states.append(self.parse_flow_mapping_value)
+                    return self.parse_flow_node()
+                else:
+                    self.state = self.parse_flow_mapping_value
+                    return self.process_empty_scalar(token.end_mark)
+            elif not self.check_token(FlowMappingEndToken):
+                self.states.append(self.parse_flow_mapping_empty_value)
+                return self.parse_flow_node()
+        token = self.get_token()
+        event = MappingEndEvent(token.start_mark, token.end_mark)
+        self.state = self.states.pop()
+        self.marks.pop()
+        return event
+
+    def parse_flow_mapping_value(self):
+        if self.check_token(ValueToken):
+            token = self.get_token()
+            if not self.check_token(FlowEntryToken, FlowMappingEndToken):
+                self.states.append(self.parse_flow_mapping_key)
+                return self.parse_flow_node()
+            else:
+                self.state = self.parse_flow_mapping_key
+                return self.process_empty_scalar(token.end_mark)
+        else:
+            self.state = self.parse_flow_mapping_key
+            token = self.peek_token()
+            return self.process_empty_scalar(token.start_mark)
+
+    def parse_flow_mapping_empty_value(self):
+        self.state = self.parse_flow_mapping_key
+        return self.process_empty_scalar(self.peek_token().start_mark)
+
+    def process_empty_scalar(self, mark):
+        return ScalarEvent(None, None, (True, False), '', mark, mark)
+
diff --git a/lib3/yaml/reader.py b/lib3/yaml/reader.py
new file mode 100644
index 0000000..f70e920
--- /dev/null
+++ b/lib3/yaml/reader.py
@@ -0,0 +1,192 @@
+# This module contains abstractions for the input stream. You don't have to
+# look further, there is no pretty code.
+#
+# We define two classes here.
+#
+#   Mark(name, index, line, column, buffer, pointer)
+# It's just a record and its only use is producing nice error messages.
+# The Parser does not use it for any other purpose.
+#
+#   Reader(source, data)
+# Reader determines the encoding of `data` and converts it to unicode.
+# Reader provides the following methods and attributes:
+#   reader.peek(index=0) - return the character `index` positions ahead.
+#   reader.prefix(length=1) - return the next `length` characters.
+#   reader.forward(length=1) - move the current position `length` characters forward.
+#   reader.index - the number of the current character.
+#   reader.line, reader.column - the line and the column of the current character.
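+#
+# A minimal usage sketch (illustrative only):
+#   reader = Reader('hello: world\n')
+#   reader.peek()      # -> 'h'
+#   reader.prefix(5)   # -> 'hello'
+#   reader.forward(6)  # skip past 'hello:'
+#   reader.line, reader.column   # -> (0, 6)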
+
+__all__ = ['Reader', 'ReaderError']
+
+from .error import YAMLError, Mark
+
+import codecs, re
+
+class ReaderError(YAMLError):
+
+    def __init__(self, name, position, character, encoding, reason):
+        self.name = name
+        self.character = character
+        self.position = position
+        self.encoding = encoding
+        self.reason = reason
+
+    def __str__(self):
+        if isinstance(self.character, bytes):
+            return "'%s' codec can't decode byte #x%02x: %s\n"  \
+                    "  in \"%s\", position %d"    \
+                    % (self.encoding, ord(self.character), self.reason,
+                            self.name, self.position)
+        else:
+            return "unacceptable character #x%04x: %s\n"    \
+                    "  in \"%s\", position %d"    \
+                    % (self.character, self.reason,
+                            self.name, self.position)
+
+class Reader(object):
+    # Reader:
+    # - determines the data encoding and converts it to a unicode string,
+    # - checks if characters are in allowed range,
+    # - adds '\0' to the end.
+
+    # Reader accepts
+    #  - a `bytes` object,
+    #  - a `str` object,
+    #  - a file-like object with its `read` method returning `str`,
+    #  - a file-like object with its `read` method returning `bytes`.
+
+    # Yeah, it's ugly and slow.
+
+    def __init__(self, stream):
+        self.name = None
+        self.stream = None
+        self.stream_pointer = 0
+        self.eof = True
+        self.buffer = ''
+        self.pointer = 0
+        self.raw_buffer = None
+        self.raw_decode = None
+        self.encoding = None
+        self.index = 0
+        self.line = 0
+        self.column = 0
+        if isinstance(stream, str):
+            self.name = "<unicode string>"
+            self.check_printable(stream)
+            self.buffer = stream+'\0'
+        elif isinstance(stream, bytes):
+            self.name = "<byte string>"
+            self.raw_buffer = stream
+            self.determine_encoding()
+        else:
+            self.stream = stream
+            self.name = getattr(stream, 'name', "<file>")
+            self.eof = False
+            self.raw_buffer = None
+            self.determine_encoding()
+
+    def peek(self, index=0):
+        try:
+            return self.buffer[self.pointer+index]
+        except IndexError:
+            self.update(index+1)
+            return self.buffer[self.pointer+index]
+
+    def prefix(self, length=1):
+        if self.pointer+length >= len(self.buffer):
+            self.update(length)
+        return self.buffer[self.pointer:self.pointer+length]
+
+    def forward(self, length=1):
+        if self.pointer+length+1 >= len(self.buffer):
+            self.update(length+1)
+        while length:
+            ch = self.buffer[self.pointer]
+            self.pointer += 1
+            self.index += 1
+            if ch in '\n\x85\u2028\u2029'  \
+                    or (ch == '\r' and self.buffer[self.pointer] != '\n'):
+                self.line += 1
+                self.column = 0
+            elif ch != '\uFEFF':
+                self.column += 1
+            length -= 1
+
+    def get_mark(self):
+        if self.stream is None:
+            return Mark(self.name, self.index, self.line, self.column,
+                    self.buffer, self.pointer)
+        else:
+            return Mark(self.name, self.index, self.line, self.column,
+                    None, None)
+
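+    # Note (illustrative): a byte stream starting with b'\xff\xfe'
+    # (codecs.BOM_UTF16_LE) is decoded as 'utf-16-le'; a byte stream
+    # without a UTF-16 BOM falls back to 'utf-8'.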
+    def determine_encoding(self):
+        while not self.eof and (self.raw_buffer is None or len(self.raw_buffer) < 2):
+            self.update_raw()
+        if isinstance(self.raw_buffer, bytes):
+            if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
+                self.raw_decode = codecs.utf_16_le_decode
+                self.encoding = 'utf-16-le'
+            elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
+                self.raw_decode = codecs.utf_16_be_decode
+                self.encoding = 'utf-16-be'
+            else:
+                self.raw_decode = codecs.utf_8_decode
+                self.encoding = 'utf-8'
+        self.update(1)
+
+    NON_PRINTABLE = re.compile('[^\x09\x0A\x0D\x20-\x7E\x85\xA0-\uD7FF\uE000-\uFFFD]')
+    def check_printable(self, data):
+        match = self.NON_PRINTABLE.search(data)
+        if match:
+            character = match.group()
+            position = self.index+(len(self.buffer)-self.pointer)+match.start()
+            raise ReaderError(self.name, position, ord(character),
+                    'unicode', "special characters are not allowed")
+
+    def update(self, length):
+        if self.raw_buffer is None:
+            return
+        self.buffer = self.buffer[self.pointer:]
+        self.pointer = 0
+        while len(self.buffer) < length:
+            if not self.eof:
+                self.update_raw()
+            if self.raw_decode is not None:
+                try:
+                    data, converted = self.raw_decode(self.raw_buffer,
+                            'strict', self.eof)
+                except UnicodeDecodeError as exc:
+                    character = self.raw_buffer[exc.start]
+                    if self.stream is not None:
+                        position = self.stream_pointer-len(self.raw_buffer)+exc.start
+                    else:
+                        position = exc.start
+                    raise ReaderError(self.name, position, character,
+                            exc.encoding, exc.reason)
+            else:
+                data = self.raw_buffer
+                converted = len(data)
+            self.check_printable(data)
+            self.buffer += data
+            self.raw_buffer = self.raw_buffer[converted:]
+            if self.eof:
+                self.buffer += '\0'
+                self.raw_buffer = None
+                break
+
+    def update_raw(self, size=4096):
+        data = self.stream.read(size)
+        if self.raw_buffer is None:
+            self.raw_buffer = data
+        else:
+            self.raw_buffer += data
+        self.stream_pointer += len(data)
+        if not data:
+            self.eof = True
+
+#try:
+#    import psyco
+#    psyco.bind(Reader)
+#except ImportError:
+#    pass
+
diff --git a/lib3/yaml/representer.py b/lib3/yaml/representer.py
new file mode 100644
index 0000000..67cd6fd
--- /dev/null
+++ b/lib3/yaml/representer.py
@@ -0,0 +1,374 @@
+
+__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
+    'RepresenterError']
+
+from .error import *
+from .nodes import *
+
+import datetime, sys, copyreg, types, base64
+
+class RepresenterError(YAMLError):
+    pass
+
+class BaseRepresenter:
+
+    yaml_representers = {}
+    yaml_multi_representers = {}
+
+    def __init__(self, default_style=None, default_flow_style=None):
+        self.default_style = default_style
+        self.default_flow_style = default_flow_style
+        self.represented_objects = {}
+        self.object_keeper = []
+        self.alias_key = None
+
+    def represent(self, data):
+        node = self.represent_data(data)
+        self.serialize(node)
+        self.represented_objects = {}
+        self.object_keeper = []
+        self.alias_key = None
+
+    def represent_data(self, data):
+        if self.ignore_aliases(data):
+            self.alias_key = None
+        else:
+            self.alias_key = id(data)
+        if self.alias_key is not None:
+            if self.alias_key in self.represented_objects:
+                node = self.represented_objects[self.alias_key]
+                #if node is None:
+                #    raise RepresenterError("recursive objects are not allowed: %r" % data)
+                return node
+            #self.represented_objects[alias_key] = None
+            self.object_keeper.append(data)
+        data_types = type(data).__mro__
+        if data_types[0] in self.yaml_representers:
+            node = self.yaml_representers[data_types[0]](self, data)
+        else:
+            for data_type in data_types:
+                if data_type in self.yaml_multi_representers:
+                    node = self.yaml_multi_representers[data_type](self, data)
+                    break
+            else:
+                if None in self.yaml_multi_representers:
+                    node = self.yaml_multi_representers[None](self, data)
+                elif None in self.yaml_representers:
+                    node = self.yaml_representers[None](self, data)
+                else:
+                    node = ScalarNode(None, str(data))
+        #if alias_key is not None:
+        #    self.represented_objects[alias_key] = node
+        return node
+
+    @classmethod
+    def add_representer(cls, data_type, representer):
+        if 'yaml_representers' not in cls.__dict__:
+            cls.yaml_representers = cls.yaml_representers.copy()
+        cls.yaml_representers[data_type] = representer
+
+    @classmethod
+    def add_multi_representer(cls, data_type, representer):
+        if 'yaml_multi_representers' not in cls.__dict__:
+            cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
+        cls.yaml_multi_representers[data_type] = representer
+
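+    # A typical registration (an illustrative sketch; `Point` and
+    # `represent_point` are hypothetical names):
+    #
+    #   def represent_point(dumper, data):
+    #       return dumper.represent_mapping('!point',
+    #               {'x': data.x, 'y': data.y})
+    #   Representer.add_representer(Point, represent_point)
+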
+    def represent_scalar(self, tag, value, style=None):
+        if style is None:
+            style = self.default_style
+        node = ScalarNode(tag, value, style=style)
+        if self.alias_key is not None:
+            self.represented_objects[self.alias_key] = node
+        return node
+
+    def represent_sequence(self, tag, sequence, flow_style=None):
+        value = []
+        node = SequenceNode(tag, value, flow_style=flow_style)
+        if self.alias_key is not None:
+            self.represented_objects[self.alias_key] = node
+        best_style = True
+        for item in sequence:
+            node_item = self.represent_data(item)
+            if not (isinstance(node_item, ScalarNode) and not node_item.style):
+                best_style = False
+            value.append(node_item)
+        if flow_style is None:
+            if self.default_flow_style is not None:
+                node.flow_style = self.default_flow_style
+            else:
+                node.flow_style = best_style
+        return node
+
+    def represent_mapping(self, tag, mapping, flow_style=None):
+        value = []
+        node = MappingNode(tag, value, flow_style=flow_style)
+        if self.alias_key is not None:
+            self.represented_objects[self.alias_key] = node
+        best_style = True
+        if hasattr(mapping, 'items'):
+            mapping = list(mapping.items())
+            try:
+                mapping = sorted(mapping)
+            except TypeError:
+                pass
+        for item_key, item_value in mapping:
+            node_key = self.represent_data(item_key)
+            node_value = self.represent_data(item_value)
+            if not (isinstance(node_key, ScalarNode) and not node_key.style):
+                best_style = False
+            if not (isinstance(node_value, ScalarNode) and not node_value.style):
+                best_style = False
+            value.append((node_key, node_value))
+        if flow_style is None:
+            if self.default_flow_style is not None:
+                node.flow_style = self.default_flow_style
+            else:
+                node.flow_style = best_style
+        return node
+
+    def ignore_aliases(self, data):
+        return False
+
+class SafeRepresenter(BaseRepresenter):
+
+    def ignore_aliases(self, data):
+        if data in [None, ()]:
+            return True
+        if isinstance(data, (str, bytes, bool, int, float)):
+            return True
+
+    def represent_none(self, data):
+        return self.represent_scalar('tag:yaml.org,2002:null', 'null')
+
+    def represent_str(self, data):
+        return self.represent_scalar('tag:yaml.org,2002:str', data)
+
+    def represent_binary(self, data):
+        if hasattr(base64, 'encodebytes'):
+            data = base64.encodebytes(data).decode('ascii')
+        else:
+            data = base64.encodestring(data).decode('ascii')
+        return self.represent_scalar('tag:yaml.org,2002:binary', data, style='|')
+
+    def represent_bool(self, data):
+        if data:
+            value = 'true'
+        else:
+            value = 'false'
+        return self.represent_scalar('tag:yaml.org,2002:bool', value)
+
+    def represent_int(self, data):
+        return self.represent_scalar('tag:yaml.org,2002:int', str(data))
+
+    inf_value = 1e300
+    while repr(inf_value) != repr(inf_value*inf_value):
+        inf_value *= inf_value
+
+    def represent_float(self, data):
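+        # `data != data` is true only for NaN; the second check guards
+        # against platforms where NaN compares equal to other numbers.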
+        if data != data or (data == 0.0 and data == 1.0):
+            value = '.nan'
+        elif data == self.inf_value:
+            value = '.inf'
+        elif data == -self.inf_value:
+            value = '-.inf'
+        else:
+            value = repr(data).lower()
+            # Note that in some cases `repr(data)` represents a float number
+            # without the decimal parts.  For instance:
+            #   >>> repr(1e17)
+            #   '1e17'
+            # Unfortunately, this is not a valid float representation according
+            # to the definition of the `!!float` tag.  We fix this by adding
+            # '.0' before the 'e' symbol.
+            if '.' not in value and 'e' in value:
+                value = value.replace('e', '.0e', 1)
+        return self.represent_scalar('tag:yaml.org,2002:float', value)
+
+    def represent_list(self, data):
+        #pairs = (len(data) > 0 and isinstance(data, list))
+        #if pairs:
+        #    for item in data:
+        #        if not isinstance(item, tuple) or len(item) != 2:
+        #            pairs = False
+        #            break
+        #if not pairs:
+            return self.represent_sequence('tag:yaml.org,2002:seq', data)
+        #value = []
+        #for item_key, item_value in data:
+        #    value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
+        #        [(item_key, item_value)]))
+        #return SequenceNode(u'tag:yaml.org,2002:pairs', value)
+
+    def represent_dict(self, data):
+        return self.represent_mapping('tag:yaml.org,2002:map', data)
+
+    def represent_set(self, data):
+        value = {}
+        for key in data:
+            value[key] = None
+        return self.represent_mapping('tag:yaml.org,2002:set', value)
+
+    def represent_date(self, data):
+        value = data.isoformat()
+        return self.represent_scalar('tag:yaml.org,2002:timestamp', value)
+
+    def represent_datetime(self, data):
+        value = data.isoformat(' ')
+        return self.represent_scalar('tag:yaml.org,2002:timestamp', value)
+
+    def represent_yaml_object(self, tag, data, cls, flow_style=None):
+        if hasattr(data, '__getstate__'):
+            state = data.__getstate__()
+        else:
+            state = data.__dict__.copy()
+        return self.represent_mapping(tag, state, flow_style=flow_style)
+
+    def represent_undefined(self, data):
+        raise RepresenterError("cannot represent an object: %s" % data)
+
+SafeRepresenter.add_representer(type(None),
+        SafeRepresenter.represent_none)
+
+SafeRepresenter.add_representer(str,
+        SafeRepresenter.represent_str)
+
+SafeRepresenter.add_representer(bytes,
+        SafeRepresenter.represent_binary)
+
+SafeRepresenter.add_representer(bool,
+        SafeRepresenter.represent_bool)
+
+SafeRepresenter.add_representer(int,
+        SafeRepresenter.represent_int)
+
+SafeRepresenter.add_representer(float,
+        SafeRepresenter.represent_float)
+
+SafeRepresenter.add_representer(list,
+        SafeRepresenter.represent_list)
+
+SafeRepresenter.add_representer(tuple,
+        SafeRepresenter.represent_list)
+
+SafeRepresenter.add_representer(dict,
+        SafeRepresenter.represent_dict)
+
+SafeRepresenter.add_representer(set,
+        SafeRepresenter.represent_set)
+
+SafeRepresenter.add_representer(datetime.date,
+        SafeRepresenter.represent_date)
+
+SafeRepresenter.add_representer(datetime.datetime,
+        SafeRepresenter.represent_datetime)
+
+SafeRepresenter.add_representer(None,
+        SafeRepresenter.represent_undefined)
+
+class Representer(SafeRepresenter):
+
+    def represent_complex(self, data):
+        if data.imag == 0.0:
+            data = '%r' % data.real
+        elif data.real == 0.0:
+            data = '%rj' % data.imag
+        elif data.imag > 0:
+            data = '%r+%rj' % (data.real, data.imag)
+        else:
+            data = '%r%rj' % (data.real, data.imag)
+        return self.represent_scalar('tag:yaml.org,2002:python/complex', data)
+
+    def represent_tuple(self, data):
+        return self.represent_sequence('tag:yaml.org,2002:python/tuple', data)
+
+    def represent_name(self, data):
+        name = '%s.%s' % (data.__module__, data.__name__)
+        return self.represent_scalar('tag:yaml.org,2002:python/name:'+name, '')
+
+    def represent_module(self, data):
+        return self.represent_scalar(
+                'tag:yaml.org,2002:python/module:'+data.__name__, '')
+
+    def represent_object(self, data):
+        # We use the __reduce__ API to save the data. data.__reduce__ returns
+        # a tuple of length 2-5:
+        #   (function, args, state, listitems, dictitems)
+
+        # To reconstruct the object, we call function(*args), then set its
+        # state, listitems, and dictitems if they are not None.
+
+        # A special case is when function.__name__ == '__newobj__'. In this
+        # case we create the object with args[0].__new__(*args).
+
+        # Another special case is when __reduce__ returns a string - we don't
+        # support it.
+
+        # We produce a !!python/object, !!python/object/new or
+        # !!python/object/apply node.
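+
+        # For example (illustrative): a hypothetical Point(1, 2) whose
+        # __reduce__ returns (Point, (1, 2)) would be represented as
+        #   !!python/object/apply:mymodule.Point [1, 2]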
+
+        cls = type(data)
+        if cls in copyreg.dispatch_table:
+            reduce = copyreg.dispatch_table[cls](data)
+        elif hasattr(data, '__reduce_ex__'):
+            reduce = data.__reduce_ex__(2)
+        elif hasattr(data, '__reduce__'):
+            reduce = data.__reduce__()
+        else:
+            raise RepresenterError("cannot represent object: %r" % data)
+        reduce = (list(reduce)+[None]*5)[:5]
+        function, args, state, listitems, dictitems = reduce
+        args = list(args)
+        if state is None:
+            state = {}
+        if listitems is not None:
+            listitems = list(listitems)
+        if dictitems is not None:
+            dictitems = dict(dictitems)
+        if function.__name__ == '__newobj__':
+            function = args[0]
+            args = args[1:]
+            tag = 'tag:yaml.org,2002:python/object/new:'
+            newobj = True
+        else:
+            tag = 'tag:yaml.org,2002:python/object/apply:'
+            newobj = False
+        function_name = '%s.%s' % (function.__module__, function.__name__)
+        if not args and not listitems and not dictitems \
+                and isinstance(state, dict) and newobj:
+            return self.represent_mapping(
+                    'tag:yaml.org,2002:python/object:'+function_name, state)
+        if not listitems and not dictitems  \
+                and isinstance(state, dict) and not state:
+            return self.represent_sequence(tag+function_name, args)
+        value = {}
+        if args:
+            value['args'] = args
+        if state or not isinstance(state, dict):
+            value['state'] = state
+        if listitems:
+            value['listitems'] = listitems
+        if dictitems:
+            value['dictitems'] = dictitems
+        return self.represent_mapping(tag+function_name, value)
+
+Representer.add_representer(complex,
+        Representer.represent_complex)
+
+Representer.add_representer(tuple,
+        Representer.represent_tuple)
+
+Representer.add_representer(type,
+        Representer.represent_name)
+
+Representer.add_representer(types.FunctionType,
+        Representer.represent_name)
+
+Representer.add_representer(types.BuiltinFunctionType,
+        Representer.represent_name)
+
+Representer.add_representer(types.ModuleType,
+        Representer.represent_module)
+
+Representer.add_multi_representer(object,
+        Representer.represent_object)
+
diff --git a/lib3/yaml/resolver.py b/lib3/yaml/resolver.py
new file mode 100644
index 0000000..0eece25
--- /dev/null
+++ b/lib3/yaml/resolver.py
@@ -0,0 +1,224 @@
+
+__all__ = ['BaseResolver', 'Resolver']
+
+from .error import *
+from .nodes import *
+
+import re
+
+class ResolverError(YAMLError):
+    pass
+
+class BaseResolver:
+
+    DEFAULT_SCALAR_TAG = 'tag:yaml.org,2002:str'
+    DEFAULT_SEQUENCE_TAG = 'tag:yaml.org,2002:seq'
+    DEFAULT_MAPPING_TAG = 'tag:yaml.org,2002:map'
+
+    yaml_implicit_resolvers = {}
+    yaml_path_resolvers = {}
+
+    def __init__(self):
+        self.resolver_exact_paths = []
+        self.resolver_prefix_paths = []
+
+    @classmethod
+    def add_implicit_resolver(cls, tag, regexp, first):
+        if 'yaml_implicit_resolvers' not in cls.__dict__:
+            cls.yaml_implicit_resolvers = cls.yaml_implicit_resolvers.copy()
+        if first is None:
+            first = [None]
+        for ch in first:
+            cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
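+
+    # For example (an illustrative sketch, not part of this module): plain
+    # scalars like '0xFF' can be routed to a custom '!hex' tag with
+    #   Resolver.add_implicit_resolver('!hex',
+    #           re.compile(r'^0x[0-9a-fA-F]+$'), list('0'))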
+
+    @classmethod
+    def add_path_resolver(cls, tag, path, kind=None):
+        # Note: `add_path_resolver` is experimental.  The API could change.
+        # `path` is a pattern that is matched against the path from the
+        # root to the node that is being considered.  `path` elements are
+        # tuples `(node_check, index_check)`.  `node_check` is a node class:
+        # `ScalarNode`, `SequenceNode`, `MappingNode` or `None`.  `None`
+        # matches any kind of node.  `index_check` could be `None`, a boolean
+        # value, a string value, or a number.  `None` and `False` match against
+        # any _value_ of sequence and mapping nodes.  `True` matches against
+        # any _key_ of a mapping node.  A string `index_check` matches against
+        # a mapping value that corresponds to a scalar key whose content is
+        # equal to the `index_check` value.  An integer `index_check` matches
+        # against a sequence value with the index equal to `index_check`.
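+        #
+        # For example (illustrative), the call
+        #   Resolver.add_path_resolver('!point', ['waypoints', None], dict)
+        # gives the tag '!point' to every mapping that occurs as a value
+        # inside the top-level 'waypoints' collection.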
+        if 'yaml_path_resolvers' not in cls.__dict__:
+            cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
+        new_path = []
+        for element in path:
+            if isinstance(element, (list, tuple)):
+                if len(element) == 2:
+                    node_check, index_check = element
+                elif len(element) == 1:
+                    node_check = element[0]
+                    index_check = True
+                else:
+                    raise ResolverError("Invalid path element: %s" % element)
+            else:
+                node_check = None
+                index_check = element
+            if node_check is str:
+                node_check = ScalarNode
+            elif node_check is list:
+                node_check = SequenceNode
+            elif node_check is dict:
+                node_check = MappingNode
+            elif node_check not in [ScalarNode, SequenceNode, MappingNode]  \
+                    and not isinstance(node_check, str) \
+                    and node_check is not None:
+                raise ResolverError("Invalid node checker: %s" % node_check)
+            if not isinstance(index_check, (str, int))  \
+                    and index_check is not None:
+                raise ResolverError("Invalid index checker: %s" % index_check)
+            new_path.append((node_check, index_check))
+        if kind is str:
+            kind = ScalarNode
+        elif kind is list:
+            kind = SequenceNode
+        elif kind is dict:
+            kind = MappingNode
+        elif kind not in [ScalarNode, SequenceNode, MappingNode]    \
+                and kind is not None:
+            raise ResolverError("Invalid node kind: %s" % kind)
+        cls.yaml_path_resolvers[tuple(new_path), kind] = tag
+
+    def descend_resolver(self, current_node, current_index):
+        if not self.yaml_path_resolvers:
+            return
+        exact_paths = {}
+        prefix_paths = []
+        if current_node:
+            depth = len(self.resolver_prefix_paths)
+            for path, kind in self.resolver_prefix_paths[-1]:
+                if self.check_resolver_prefix(depth, path, kind,
+                        current_node, current_index):
+                    if len(path) > depth:
+                        prefix_paths.append((path, kind))
+                    else:
+                        exact_paths[kind] = self.yaml_path_resolvers[path, kind]
+        else:
+            for path, kind in self.yaml_path_resolvers:
+                if not path:
+                    exact_paths[kind] = self.yaml_path_resolvers[path, kind]
+                else:
+                    prefix_paths.append((path, kind))
+        self.resolver_exact_paths.append(exact_paths)
+        self.resolver_prefix_paths.append(prefix_paths)
+
+    def ascend_resolver(self):
+        if not self.yaml_path_resolvers:
+            return
+        self.resolver_exact_paths.pop()
+        self.resolver_prefix_paths.pop()
+
+    def check_resolver_prefix(self, depth, path, kind,
+            current_node, current_index):
+        node_check, index_check = path[depth-1]
+        if isinstance(node_check, str):
+            if current_node.tag != node_check:
+                return
+        elif node_check is not None:
+            if not isinstance(current_node, node_check):
+                return
+        if index_check is True and current_index is not None:
+            return
+        if (index_check is False or index_check is None)    \
+                and current_index is None:
+            return
+        if isinstance(index_check, str):
+            if not (isinstance(current_index, ScalarNode)
+                    and index_check == current_index.value):
+                return
+        elif isinstance(index_check, int) and not isinstance(index_check, bool):
+            if index_check != current_index:
+                return
+        return True
+
+    def resolve(self, kind, value, implicit):
+        if kind is ScalarNode and implicit[0]:
+            if value == '':
+                resolvers = self.yaml_implicit_resolvers.get('', [])
+            else:
+                resolvers = self.yaml_implicit_resolvers.get(value[0], [])
+            resolvers += self.yaml_implicit_resolvers.get(None, [])
+            for tag, regexp in resolvers:
+                if regexp.match(value):
+                    return tag
+            implicit = implicit[1]
+        if self.yaml_path_resolvers:
+            exact_paths = self.resolver_exact_paths[-1]
+            if kind in exact_paths:
+                return exact_paths[kind]
+            if None in exact_paths:
+                return exact_paths[None]
+        if kind is ScalarNode:
+            return self.DEFAULT_SCALAR_TAG
+        elif kind is SequenceNode:
+            return self.DEFAULT_SEQUENCE_TAG
+        elif kind is MappingNode:
+            return self.DEFAULT_MAPPING_TAG
+
+class Resolver(BaseResolver):
+    pass
+
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:bool',
+        re.compile(r'''^(?:yes|Yes|YES|no|No|NO
+                    |true|True|TRUE|false|False|FALSE
+                    |on|On|ON|off|Off|OFF)$''', re.X),
+        list('yYnNtTfFoO'))
+
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:float',
+        re.compile(r'''^(?:[-+]?(?:[0-9][0-9_]*)\.[0-9_]*(?:[eE][-+][0-9]+)?
+                    |\.[0-9_]+(?:[eE][-+][0-9]+)?
+                    |[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\.[0-9_]*
+                    |[-+]?\.(?:inf|Inf|INF)
+                    |\.(?:nan|NaN|NAN))$''', re.X),
+        list('-+0123456789.'))
+
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:int',
+        re.compile(r'''^(?:[-+]?0b[0-1_]+
+                    |[-+]?0[0-7_]+
+                    |[-+]?(?:0|[1-9][0-9_]*)
+                    |[-+]?0x[0-9a-fA-F_]+
+                    |[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
+        list('-+0123456789'))
+
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:merge',
+        re.compile(r'^(?:<<)$'),
+        ['<'])
+
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:null',
+        re.compile(r'''^(?: ~
+                    |null|Null|NULL
+                    | )$''', re.X),
+        ['~', 'n', 'N', ''])
+
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:timestamp',
+        re.compile(r'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
+                    |[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
+                     (?:[Tt]|[ \t]+)[0-9][0-9]?
+                     :[0-9][0-9] :[0-9][0-9] (?:\.[0-9]*)?
+                     (?:[ \t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
+        list('0123456789'))
+
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:value',
+        re.compile(r'^(?:=)$'),
+        ['='])
+
+# The following resolver is only for documentation purposes. It cannot work
+# because plain scalars cannot start with '!', '&', or '*'.
+Resolver.add_implicit_resolver(
+        'tag:yaml.org,2002:yaml',
+        re.compile(r'^(?:!|&|\*)$'),
+        list('!&*'))
+
diff --git a/lib3/yaml/scanner.py b/lib3/yaml/scanner.py
new file mode 100644
index 0000000..c8d127b
--- /dev/null
+++ b/lib3/yaml/scanner.py
@@ -0,0 +1,1444 @@
+
+# Scanner produces tokens of the following types:
+# STREAM-START
+# STREAM-END
+# DIRECTIVE(name, value)
+# DOCUMENT-START
+# DOCUMENT-END
+# BLOCK-SEQUENCE-START
+# BLOCK-MAPPING-START
+# BLOCK-END
+# FLOW-SEQUENCE-START
+# FLOW-MAPPING-START
+# FLOW-SEQUENCE-END
+# FLOW-MAPPING-END
+# BLOCK-ENTRY
+# FLOW-ENTRY
+# KEY
+# VALUE
+# ALIAS(value)
+# ANCHOR(value)
+# TAG(value)
+# SCALAR(value, plain, style)
+#
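+# For example (illustrative), scanning the document 'a: 1' produces
+#   STREAM-START BLOCK-MAPPING-START KEY SCALAR('a') VALUE SCALAR('1')
+#   BLOCK-END STREAM-END
+#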
+# Read comments in the Scanner code for more details.
+#
+
+__all__ = ['Scanner', 'ScannerError']
+
+from .error import MarkedYAMLError
+from .tokens import *
+
+class ScannerError(MarkedYAMLError):
+    pass
+
+class SimpleKey:
+    # See the simple keys treatment below.
+
+    def __init__(self, token_number, required, index, line, column, mark):
+        self.token_number = token_number
+        self.required = required
+        self.index = index
+        self.line = line
+        self.column = column
+        self.mark = mark
+
+class Scanner:
+
+    def __init__(self):
+        """Initialize the scanner."""
+        # It is assumed that Scanner and Reader will have a common descendant.
+        # Reader does the dirty work of checking for BOM and converting the
+        # input data to Unicode. It also adds NUL to the end.
+        #
+        # Reader supports the following methods:
+        #   self.peek(i=0)       # peek the i-th character ahead
+        #   self.prefix(l=1)     # peek the next l characters
+        #   self.forward(l=1)    # read the next l characters and move the pointer
+
+        # Have we reached the end of the stream?
+        self.done = False
+
+        # The number of unclosed '{' and '['. `flow_level == 0` means block
+        # context.
+        self.flow_level = 0
+
+        # List of processed tokens that are not yet emitted.
+        self.tokens = []
+
+        # Add the STREAM-START token.
+        self.fetch_stream_start()
+
+        # Number of tokens that were emitted through the `get_token` method.
+        self.tokens_taken = 0
+
+        # The current indentation level.
+        self.indent = -1
+
+        # Past indentation levels.
+        self.indents = []
+
+        # Variables related to simple keys treatment.
+
+        # A simple key is a key that is not denoted by the '?' indicator.
+        # Example of simple keys:
+        #   ---
+        #   block simple key: value
+        #   ? not a simple key:
+        #   : { flow simple key: value }
+        # We emit the KEY token before all keys, so when we find a potential
+        # simple key, we try to locate the corresponding ':' indicator.
+        # Simple keys should be limited to a single line and 1024 characters.
+
+        # Can a simple key start at the current position? A simple key may
+        # start:
+        # - at the beginning of the line, not counting indentation spaces
+        #       (in block context),
+        # - after '{', '[', ',' (in the flow context),
+        # - after '?', ':', '-' (in the block context).
+        # In the block context, this flag also signifies if a block collection
+        # may start at the current position.
+        self.allow_simple_key = True
+
+        # Keep track of possible simple keys. This is a dictionary. The key
+        # is `flow_level`; there can be no more than one possible simple key
+        # for each level. The value is a SimpleKey record:
+        #   (token_number, required, index, line, column, mark)
+        # A simple key may start with ALIAS, ANCHOR, TAG, SCALAR(flow),
+        # '[', or '{' tokens.
+        self.possible_simple_keys = {}
+
+    # Public methods.
+
+    def check_token(self, *choices):
+        # Check if the next token is one of the given types.
+        while self.need_more_tokens():
+            self.fetch_more_tokens()
+        if self.tokens:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.tokens[0], choice):
+                    return True
+        return False
+
+    def peek_token(self):
+        # Return the next token, but do not delete it from the queue.
+        while self.need_more_tokens():
+            self.fetch_more_tokens()
+        if self.tokens:
+            return self.tokens[0]
+
+    def get_token(self):
+        # Return the next token.
+        while self.need_more_tokens():
+            self.fetch_more_tokens()
+        if self.tokens:
+            self.tokens_taken += 1
+            return self.tokens.pop(0)
+
+    # Private methods.
+
+    def need_more_tokens(self):
+        if self.done:
+            return False
+        if not self.tokens:
+            return True
+        # The current token may be a potential simple key, so we
+        # need to look further.
+        self.stale_possible_simple_keys()
+        if self.next_possible_simple_key() == self.tokens_taken:
+            return True
+
+    def fetch_more_tokens(self):
+
+        # Eat whitespace and comments until we reach the next token.
+        self.scan_to_next_token()
+
+        # Remove obsolete possible simple keys.
+        self.stale_possible_simple_keys()
+
+        # Compare the current indentation and column. It may add some tokens
+        # and decrease the current indentation level.
+        self.unwind_indent(self.column)
+
+        # Peek the next character.
+        ch = self.peek()
+
+        # Is it the end of stream?
+        if ch == '\0':
+            return self.fetch_stream_end()
+
+        # Is it a directive?
+        if ch == '%' and self.check_directive():
+            return self.fetch_directive()
+
+        # Is it the document start?
+        if ch == '-' and self.check_document_start():
+            return self.fetch_document_start()
+
+        # Is it the document end?
+        if ch == '.' and self.check_document_end():
+            return self.fetch_document_end()
+
+        # TODO: support for BOM within a stream.
+        #if ch == '\uFEFF':
+        #    return self.fetch_bom()    <-- issue BOMToken
+
+        # Note: the order of the following checks is NOT significant.
+
+        # Is it the flow sequence start indicator?
+        if ch == '[':
+            return self.fetch_flow_sequence_start()
+
+        # Is it the flow mapping start indicator?
+        if ch == '{':
+            return self.fetch_flow_mapping_start()
+
+        # Is it the flow sequence end indicator?
+        if ch == ']':
+            return self.fetch_flow_sequence_end()
+
+        # Is it the flow mapping end indicator?
+        if ch == '}':
+            return self.fetch_flow_mapping_end()
+
+        # Is it the flow entry indicator?
+        if ch == ',':
+            return self.fetch_flow_entry()
+
+        # Is it the block entry indicator?
+        if ch == '-' and self.check_block_entry():
+            return self.fetch_block_entry()
+
+        # Is it the key indicator?
+        if ch == '?' and self.check_key():
+            return self.fetch_key()
+
+        # Is it the value indicator?
+        if ch == ':' and self.check_value():
+            return self.fetch_value()
+
+        # Is it an alias?
+        if ch == '*':
+            return self.fetch_alias()
+
+        # Is it an anchor?
+        if ch == '&':
+            return self.fetch_anchor()
+
+        # Is it a tag?
+        if ch == '!':
+            return self.fetch_tag()
+
+        # Is it a literal scalar?
+        if ch == '|' and not self.flow_level:
+            return self.fetch_literal()
+
+        # Is it a folded scalar?
+        if ch == '>' and not self.flow_level:
+            return self.fetch_folded()
+
+        # Is it a single quoted scalar?
+        if ch == '\'':
+            return self.fetch_single()
+
+        # Is it a double quoted scalar?
+        if ch == '\"':
+            return self.fetch_double()
+
+        # It must be a plain scalar then.
+        if self.check_plain():
+            return self.fetch_plain()
+
+        # No? It's an error. Let's produce a nice error message.
+        raise ScannerError("while scanning for the next token", None,
+                "found character %r that cannot start any token" % ch,
+                self.get_mark())
+
+    # Simple keys treatment.
+
+    def next_possible_simple_key(self):
+        # Return the number of the nearest possible simple key. Actually we
+        # don't need to loop through the whole dictionary. We may replace it
+        # with the following code:
+        #   if not self.possible_simple_keys:
+        #       return None
+        #   return self.possible_simple_keys[
+        #           min(self.possible_simple_keys.keys())].token_number
+        min_token_number = None
+        for level in self.possible_simple_keys:
+            key = self.possible_simple_keys[level]
+            if min_token_number is None or key.token_number < min_token_number:
+                min_token_number = key.token_number
+        return min_token_number
+
+    def stale_possible_simple_keys(self):
+        # Remove entries that are no longer possible simple keys. According to
+        # the YAML specification, simple keys
+        # - should be limited to a single line,
+        # - should be no longer than 1024 characters.
+        # Disabling this procedure will allow simple keys of any length and
+        # height (may cause problems if indentation is broken though).
+        for level in list(self.possible_simple_keys):
+            key = self.possible_simple_keys[level]
+            if key.line != self.line  \
+                    or self.index-key.index > 1024:
+                if key.required:
+                    raise ScannerError("while scanning a simple key", key.mark,
+                            "could not find expected ':'", self.get_mark())
+                del self.possible_simple_keys[level]
+
+    def save_possible_simple_key(self):
+        # The next token may start a simple key. We check if it's possible
+        # and save its position. This function is called for
+        #   ALIAS, ANCHOR, TAG, SCALAR(flow), '[', and '{'.
+
+        # Check if a simple key is required at the current position.
+        required = not self.flow_level and self.indent == self.column
+
+        # The next token might be a simple key. Let's save its number and
+        # position.
+        if self.allow_simple_key:
+            self.remove_possible_simple_key()
+            token_number = self.tokens_taken+len(self.tokens)
+            key = SimpleKey(token_number, required,
+                    self.index, self.line, self.column, self.get_mark())
+            self.possible_simple_keys[self.flow_level] = key
+
+    def remove_possible_simple_key(self):
+        # Remove the saved possible key position at the current flow level.
+        if self.flow_level in self.possible_simple_keys:
+            key = self.possible_simple_keys[self.flow_level]
+
+            if key.required:
+                raise ScannerError("while scanning a simple key", key.mark,
+                        "could not find expected ':'", self.get_mark())
+
+            del self.possible_simple_keys[self.flow_level]
+
+    # Indentation functions.
+
+    def unwind_indent(self, column):
+
+        ## In flow context, tokens should respect indentation.
+        ## Actually the condition should be `self.indent >= column` according to
+        ## the spec. But this condition will prohibit intuitively correct
+        ## constructions such as
+        ## key : {
+        ## }
+        #if self.flow_level and self.indent > column:
+        #    raise ScannerError(None, None,
+        #            "invalid intendation or unclosed '[' or '{'",
+        #            self.get_mark())
+
+        # In the flow context, indentation is ignored. We make the scanner
+        # less restrictive than the specification requires.
+        if self.flow_level:
+            return
+
+        # In block context, we may need to issue the BLOCK-END tokens.
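+        # For illustration: given
+        #   a:
+        #     b: 1
+        # a following token at column 0 pops the indentation level opened
+        # for 'b' and emits one BLOCK-END token per popped level.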
+        while self.indent > column:
+            mark = self.get_mark()
+            self.indent = self.indents.pop()
+            self.tokens.append(BlockEndToken(mark, mark))
+
+    def add_indent(self, column):
+        # Check if we need to increase indentation.
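+        # For illustration: with self.indent == -1, add_indent(0) pushes -1
+        # onto the stack, sets the indent to 0 and returns True; calling
+        # add_indent(0) again is a no-op that returns False.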
+        if self.indent < column:
+            self.indents.append(self.indent)
+            self.indent = column
+            return True
+        return False
+
+    # Fetchers.
+
+    def fetch_stream_start(self):
+        # We always add STREAM-START as the first token and STREAM-END as the
+        # last token.
+
+        # Read the token.
+        mark = self.get_mark()
+
+        # Add STREAM-START.
+        self.tokens.append(StreamStartToken(mark, mark,
+            encoding=self.encoding))
+
+    def fetch_stream_end(self):
+
+        # Set the current indentation to -1.
+        self.unwind_indent(-1)
+
+        # Reset simple keys.
+        self.remove_possible_simple_key()
+        self.allow_simple_key = False
+        self.possible_simple_keys = {}
+
+        # Read the token.
+        mark = self.get_mark()
+
+        # Add STREAM-END.
+        self.tokens.append(StreamEndToken(mark, mark))
+
+        # The stream is finished.
+        self.done = True
+
+    def fetch_directive(self):
+
+        # Set the current indentation to -1.
+        self.unwind_indent(-1)
+
+        # Reset simple keys.
+        self.remove_possible_simple_key()
+        self.allow_simple_key = False
+
+        # Scan and add DIRECTIVE.
+        self.tokens.append(self.scan_directive())
+
+    def fetch_document_start(self):
+        self.fetch_document_indicator(DocumentStartToken)
+
+    def fetch_document_end(self):
+        self.fetch_document_indicator(DocumentEndToken)
+
+    def fetch_document_indicator(self, TokenClass):
+
+        # Set the current indentation to -1.
+        self.unwind_indent(-1)
+
+        # Reset simple keys. Note that there could not be a block collection
+        # after '---'.
+        self.remove_possible_simple_key()
+        self.allow_simple_key = False
+
+        # Add DOCUMENT-START or DOCUMENT-END.
+        start_mark = self.get_mark()
+        self.forward(3)
+        end_mark = self.get_mark()
+        self.tokens.append(TokenClass(start_mark, end_mark))
+
+    def fetch_flow_sequence_start(self):
+        self.fetch_flow_collection_start(FlowSequenceStartToken)
+
+    def fetch_flow_mapping_start(self):
+        self.fetch_flow_collection_start(FlowMappingStartToken)
+
+    def fetch_flow_collection_start(self, TokenClass):
+
+        # '[' and '{' may start a simple key.
+        self.save_possible_simple_key()
+
+        # Increase the flow level.
+        self.flow_level += 1
+
+        # Simple keys are allowed after '[' and '{'.
+        self.allow_simple_key = True
+
+        # Add FLOW-SEQUENCE-START or FLOW-MAPPING-START.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(TokenClass(start_mark, end_mark))
+
+    def fetch_flow_sequence_end(self):
+        self.fetch_flow_collection_end(FlowSequenceEndToken)
+
+    def fetch_flow_mapping_end(self):
+        self.fetch_flow_collection_end(FlowMappingEndToken)
+
+    def fetch_flow_collection_end(self, TokenClass):
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Decrease the flow level.
+        self.flow_level -= 1
+
+        # No simple keys after ']' or '}'.
+        self.allow_simple_key = False
+
+        # Add FLOW-SEQUENCE-END or FLOW-MAPPING-END.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(TokenClass(start_mark, end_mark))
+
+    def fetch_flow_entry(self):
+
+        # Simple keys are allowed after ','.
+        self.allow_simple_key = True
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Add FLOW-ENTRY.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(FlowEntryToken(start_mark, end_mark))
+
+    def fetch_block_entry(self):
+
+        # Block context needs additional checks.
+        if not self.flow_level:
+
+            # Are we allowed to start a new entry?
+            if not self.allow_simple_key:
+                raise ScannerError(None, None,
+                        "sequence entries are not allowed here",
+                        self.get_mark())
+
+            # We may need to add BLOCK-SEQUENCE-START.
+            if self.add_indent(self.column):
+                mark = self.get_mark()
+                self.tokens.append(BlockSequenceStartToken(mark, mark))
+
+        # It's an error for the block entry to occur in the flow context,
+        # but we let the parser detect this.
+        else:
+            pass
+
+        # Simple keys are allowed after '-'.
+        self.allow_simple_key = True
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Add BLOCK-ENTRY.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(BlockEntryToken(start_mark, end_mark))
+
+    def fetch_key(self):
+
+        # Block context needs additional checks.
+        if not self.flow_level:
+
+            # Are we allowed to start a key (not necessarily a simple one)?
+            if not self.allow_simple_key:
+                raise ScannerError(None, None,
+                        "mapping keys are not allowed here",
+                        self.get_mark())
+
+            # We may need to add BLOCK-MAPPING-START.
+            if self.add_indent(self.column):
+                mark = self.get_mark()
+                self.tokens.append(BlockMappingStartToken(mark, mark))
+
+        # Simple keys are allowed after '?' in the block context.
+        self.allow_simple_key = not self.flow_level
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Add KEY.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(KeyToken(start_mark, end_mark))
+
+    def fetch_value(self):
+
+        # Is a simple key pending at the current flow level?
+        if self.flow_level in self.possible_simple_keys:
+
+            # Add KEY.
+            key = self.possible_simple_keys[self.flow_level]
+            del self.possible_simple_keys[self.flow_level]
+            self.tokens.insert(key.token_number-self.tokens_taken,
+                    KeyToken(key.mark, key.mark))
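+            # Index arithmetic note: token_number counts tokens from the
+            # stream start, while self.tokens holds only the tokens not
+            # yet taken; subtracting tokens_taken converts the absolute
+            # number into an index into the pending queue.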
+
+            # If this key starts a new block mapping, we need to add
+            # BLOCK-MAPPING-START.
+            if not self.flow_level:
+                if self.add_indent(key.column):
+                    self.tokens.insert(key.token_number-self.tokens_taken,
+                            BlockMappingStartToken(key.mark, key.mark))
+
+            # There cannot be two simple keys one after another.
+            self.allow_simple_key = False
+
+        # It must be a part of a complex key.
+        else:
+
+            # Block context needs additional checks.
+            # (Do we really need them? They will be caught by the parser
+            # anyway.)
+            if not self.flow_level:
+
+                # We are allowed to start a complex value if and only if
+                # we can start a simple key.
+                if not self.allow_simple_key:
+                    raise ScannerError(None, None,
+                            "mapping values are not allowed here",
+                            self.get_mark())
+
+            # If this value starts a new block mapping, we need to add
+            # BLOCK-MAPPING-START.  It will be detected as an error later by
+            # the parser.
+            if not self.flow_level:
+                if self.add_indent(self.column):
+                    mark = self.get_mark()
+                    self.tokens.append(BlockMappingStartToken(mark, mark))
+
+            # Simple keys are allowed after ':' in the block context.
+            self.allow_simple_key = not self.flow_level
+
+            # Reset possible simple key on the current level.
+            self.remove_possible_simple_key()
+
+        # Add VALUE.
+        start_mark = self.get_mark()
+        self.forward()
+        end_mark = self.get_mark()
+        self.tokens.append(ValueToken(start_mark, end_mark))
+
+    def fetch_alias(self):
+
+        # ALIAS could be a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after ALIAS.
+        self.allow_simple_key = False
+
+        # Scan and add ALIAS.
+        self.tokens.append(self.scan_anchor(AliasToken))
+
+    def fetch_anchor(self):
+
+        # ANCHOR could start a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after ANCHOR.
+        self.allow_simple_key = False
+
+        # Scan and add ANCHOR.
+        self.tokens.append(self.scan_anchor(AnchorToken))
+
+    def fetch_tag(self):
+
+        # TAG could start a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after TAG.
+        self.allow_simple_key = False
+
+        # Scan and add TAG.
+        self.tokens.append(self.scan_tag())
+
+    def fetch_literal(self):
+        self.fetch_block_scalar(style='|')
+
+    def fetch_folded(self):
+        self.fetch_block_scalar(style='>')
+
+    def fetch_block_scalar(self, style):
+
+        # A simple key may follow a block scalar.
+        self.allow_simple_key = True
+
+        # Reset possible simple key on the current level.
+        self.remove_possible_simple_key()
+
+        # Scan and add SCALAR.
+        self.tokens.append(self.scan_block_scalar(style))
+
+    def fetch_single(self):
+        self.fetch_flow_scalar(style='\'')
+
+    def fetch_double(self):
+        self.fetch_flow_scalar(style='"')
+
+    def fetch_flow_scalar(self, style):
+
+        # A flow scalar could be a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after flow scalars.
+        self.allow_simple_key = False
+
+        # Scan and add SCALAR.
+        self.tokens.append(self.scan_flow_scalar(style))
+
+    def fetch_plain(self):
+
+        # A plain scalar could be a simple key.
+        self.save_possible_simple_key()
+
+        # No simple keys after plain scalars. But note that `scan_plain` will
+        # change this flag if the scan is finished at the beginning of the
+        # line.
+        self.allow_simple_key = False
+
+        # Scan and add SCALAR. May change `allow_simple_key`.
+        self.tokens.append(self.scan_plain())
+
+    # Checkers.
+
+    def check_directive(self):
+
+        # DIRECTIVE:        ^ '%' ...
+        # The '%' indicator is already checked.
+        if self.column == 0:
+            return True
+
+    def check_document_start(self):
+
+        # DOCUMENT-START:   ^ '---' (' '|'\n')
+        if self.column == 0:
+            if self.prefix(3) == '---'  \
+                    and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
+                return True
+
+    def check_document_end(self):
+
+        # DOCUMENT-END:     ^ '...' (' '|'\n')
+        if self.column == 0:
+            if self.prefix(3) == '...'  \
+                    and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
+                return True
+
+    def check_block_entry(self):
+
+        # BLOCK-ENTRY:      '-' (' '|'\n')
+        return self.peek(1) in '\0 \t\r\n\x85\u2028\u2029'
+
+    def check_key(self):
+
+        # KEY(flow context):    '?'
+        if self.flow_level:
+            return True
+
+        # KEY(block context):   '?' (' '|'\n')
+        else:
+            return self.peek(1) in '\0 \t\r\n\x85\u2028\u2029'
+
+    def check_value(self):
+
+        # VALUE(flow context):  ':'
+        if self.flow_level:
+            return True
+
+        # VALUE(block context): ':' (' '|'\n')
+        else:
+            return self.peek(1) in '\0 \t\r\n\x85\u2028\u2029'
+
+    def check_plain(self):
+
+        # A plain scalar may start with any non-space character except:
+        #   '-', '?', ':', ',', '[', ']', '{', '}',
+        #   '#', '&', '*', '!', '|', '>', '\'', '\"',
+        #   '%', '@', '`'.
+        #
+        # It may also start with
+        #   '-', '?', ':'
+        # if it is followed by a non-space character.
+        #
+        # Note that we limit the last rule to the block context (except the
+        # '-' character) because we want the flow context to be space
+        # independent.
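+        # For illustration: '-1', or '?key' in the block context, may
+        # start a plain scalar, while '- 1' starts a block entry instead.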
+        ch = self.peek()
+        return ch not in '\0 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>\'\"%@`'  \
+                or (self.peek(1) not in '\0 \t\r\n\x85\u2028\u2029'
+                        and (ch == '-' or (not self.flow_level and ch in '?:')))
+
+    # Scanners.
+
+    def scan_to_next_token(self):
+        # We ignore spaces, line breaks and comments.
+        # If we find a line break in the block context, we set the flag
+        # `allow_simple_key` on.
+        # The byte order mark is stripped if it's the first character in the
+        # stream. We do not yet support BOM inside the stream as the
+        # specification requires. Any such mark will be considered part
+        # of the document.
+        #
+        # TODO: We need to make tab handling rules more sane. A good rule
+        # is: tabs cannot precede the tokens
+        #   BLOCK-SEQUENCE-START, BLOCK-MAPPING-START, BLOCK-END,
+        #   KEY(block), VALUE(block), BLOCK-ENTRY.
+        # So the checking code is
+        #   if <TAB>:
+        #       self.allow_simple_keys = False
+        # We also need to add the check for `allow_simple_keys == True` to
+        # `unwind_indent` before issuing BLOCK-END.
+        # Scanners for block, flow, and plain scalars need to be modified.
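+        # Behavior sketch (illustrative): given '  # note\n  value', the
+        # loop below skips the spaces and the comment, consumes the line
+        # break (re-enabling simple keys in the block context) and stops
+        # at the 'v'.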
+
+        if self.index == 0 and self.peek() == '\uFEFF':
+            self.forward()
+        found = False
+        while not found:
+            while self.peek() == ' ':
+                self.forward()
+            if self.peek() == '#':
+                while self.peek() not in '\0\r\n\x85\u2028\u2029':
+                    self.forward()
+            if self.scan_line_break():
+                if not self.flow_level:
+                    self.allow_simple_key = True
+            else:
+                found = True
+
+    def scan_directive(self):
+        # See the specification for details.
+        start_mark = self.get_mark()
+        self.forward()
+        name = self.scan_directive_name(start_mark)
+        value = None
+        if name == 'YAML':
+            value = self.scan_yaml_directive_value(start_mark)
+            end_mark = self.get_mark()
+        elif name == 'TAG':
+            value = self.scan_tag_directive_value(start_mark)
+            end_mark = self.get_mark()
+        else:
+            end_mark = self.get_mark()
+            while self.peek() not in '\0\r\n\x85\u2028\u2029':
+                self.forward()
+        self.scan_directive_ignored_line(start_mark)
+        return DirectiveToken(name, value, start_mark, end_mark)
+
+    def scan_directive_name(self, start_mark):
+        # See the specification for details.
+        length = 0
+        ch = self.peek(length)
+        while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z'  \
+                or ch in '-_':
+            length += 1
+            ch = self.peek(length)
+        if not length:
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch, self.get_mark())
+        value = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch not in '\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch, self.get_mark())
+        return value
+
+    def scan_yaml_directive_value(self, start_mark):
+        # See the specification for details.
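+        # For illustration: for the directive '%YAML 1.1' this returns
+        # the tuple (1, 1); a non-digit after the '.' raises ScannerError.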
+        while self.peek() == ' ':
+            self.forward()
+        major = self.scan_yaml_directive_number(start_mark)
+        if self.peek() != '.':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a digit or '.', but found %r" % self.peek(),
+                    self.get_mark())
+        self.forward()
+        minor = self.scan_yaml_directive_number(start_mark)
+        if self.peek() not in '\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a digit or ' ', but found %r" % self.peek(),
+                    self.get_mark())
+        return (major, minor)
+
+    def scan_yaml_directive_number(self, start_mark):
+        # See the specification for details.
+        ch = self.peek()
+        if not ('0' <= ch <= '9'):
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a digit, but found %r" % ch, self.get_mark())
+        length = 0
+        while '0' <= self.peek(length) <= '9':
+            length += 1
+        value = int(self.prefix(length))
+        self.forward(length)
+        return value
+
+    def scan_tag_directive_value(self, start_mark):
+        # See the specification for details.
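+        # For illustration: '%TAG !e! tag:example.com,2000:' yields the
+        # pair ('!e!', 'tag:example.com,2000:').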
+        while self.peek() == ' ':
+            self.forward()
+        handle = self.scan_tag_directive_handle(start_mark)
+        while self.peek() == ' ':
+            self.forward()
+        prefix = self.scan_tag_directive_prefix(start_mark)
+        return (handle, prefix)
+
+    def scan_tag_directive_handle(self, start_mark):
+        # See the specification for details.
+        value = self.scan_tag_handle('directive', start_mark)
+        ch = self.peek()
+        if ch != ' ':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected ' ', but found %r" % ch, self.get_mark())
+        return value
+
+    def scan_tag_directive_prefix(self, start_mark):
+        # See the specification for details.
+        value = self.scan_tag_uri('directive', start_mark)
+        ch = self.peek()
+        if ch not in '\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected ' ', but found %r" % ch, self.get_mark())
+        return value
+
+    def scan_directive_ignored_line(self, start_mark):
+        # See the specification for details.
+        while self.peek() == ' ':
+            self.forward()
+        if self.peek() == '#':
+            while self.peek() not in '\0\r\n\x85\u2028\u2029':
+                self.forward()
+        ch = self.peek()
+        if ch not in '\0\r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a directive", start_mark,
+                    "expected a comment or a line break, but found %r"
+                        % ch, self.get_mark())
+        self.scan_line_break()
+
+    def scan_anchor(self, TokenClass):
+        # The specification does not restrict characters for anchors and
+        # aliases. This may lead to problems; for instance, the document:
+        #   [ *alias, value ]
+        # can be interpreted in two ways, as
+        #   [ "value" ]
+        # and
+        #   [ *alias , "value" ]
+        # Therefore we restrict aliases to numbers and ASCII letters.
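+        # For illustration: '&map1' yields AnchorToken('map1', ...), and
+        # a later '*map1' yields the matching AliasToken('map1', ...).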
+        start_mark = self.get_mark()
+        indicator = self.peek()
+        if indicator == '*':
+            name = 'alias'
+        else:
+            name = 'anchor'
+        self.forward()
+        length = 0
+        ch = self.peek(length)
+        while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z'  \
+                or ch in '-_':
+            length += 1
+            ch = self.peek(length)
+        if not length:
+            raise ScannerError("while scanning an %s" % name, start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch, self.get_mark())
+        value = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch not in '\0 \t\r\n\x85\u2028\u2029?:,]}%@`':
+            raise ScannerError("while scanning an %s" % name, start_mark,
+                    "expected alphabetic or numeric character, but found %r"
+                    % ch, self.get_mark())
+        end_mark = self.get_mark()
+        return TokenClass(value, start_mark, end_mark)
+
+    def scan_tag(self):
+        # See the specification for details.
+        start_mark = self.get_mark()
+        ch = self.peek(1)
+        if ch == '<':
+            handle = None
+            self.forward(2)
+            suffix = self.scan_tag_uri('tag', start_mark)
+            if self.peek() != '>':
+                raise ScannerError("while parsing a tag", start_mark,
+                        "expected '>', but found %r" % self.peek(),
+                        self.get_mark())
+            self.forward()
+        elif ch in '\0 \t\r\n\x85\u2028\u2029':
+            handle = None
+            suffix = '!'
+            self.forward()
+        else:
+            length = 1
+            use_handle = False
+            while ch not in '\0 \r\n\x85\u2028\u2029':
+                if ch == '!':
+                    use_handle = True
+                    break
+                length += 1
+                ch = self.peek(length)
+            if use_handle:
+                handle = self.scan_tag_handle('tag', start_mark)
+            else:
+                handle = '!'
+                self.forward()
+            suffix = self.scan_tag_uri('tag', start_mark)
+        ch = self.peek()
+        if ch not in '\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a tag", start_mark,
+                    "expected ' ', but found %r" % ch, self.get_mark())
+        value = (handle, suffix)
+        end_mark = self.get_mark()
+        return TagToken(value, start_mark, end_mark)
+
+    def scan_block_scalar(self, style):
+        # See the specification for details.
+
+        if style == '>':
+            folded = True
+        else:
+            folded = False
+
+        chunks = []
+        start_mark = self.get_mark()
+
+        # Scan the header.
+        self.forward()
+        chomping, increment = self.scan_block_scalar_indicators(start_mark)
+        self.scan_block_scalar_ignored_line(start_mark)
+
+        # Determine the indentation level and go to the first non-empty line.
+        min_indent = self.indent+1
+        if min_indent < 1:
+            min_indent = 1
+        if increment is None:
+            breaks, max_indent, end_mark = self.scan_block_scalar_indentation()
+            indent = max(min_indent, max_indent)
+        else:
+            indent = min_indent+increment-1
+            breaks, end_mark = self.scan_block_scalar_breaks(indent)
+        line_break = ''
+
+        # Scan the inner part of the block scalar.
+        while self.column == indent and self.peek() != '\0':
+            chunks.extend(breaks)
+            leading_non_space = self.peek() not in ' \t'
+            length = 0
+            while self.peek(length) not in '\0\r\n\x85\u2028\u2029':
+                length += 1
+            chunks.append(self.prefix(length))
+            self.forward(length)
+            line_break = self.scan_line_break()
+            breaks, end_mark = self.scan_block_scalar_breaks(indent)
+            if self.column == indent and self.peek() != '\0':
+
+                # Unfortunately, folding rules are ambiguous.
+                #
+                # This is the folding according to the specification:
+
+                if folded and line_break == '\n'    \
+                        and leading_non_space and self.peek() not in ' \t':
+                    if not breaks:
+                        chunks.append(' ')
+                else:
+                    chunks.append(line_break)
+
+                # This is Clark Evans's interpretation (also in the spec
+                # examples):
+                #
+                #if folded and line_break == '\n':
+                #    if not breaks:
+                #        if self.peek() not in ' \t':
+                #            chunks.append(' ')
+                #        else:
+                #            chunks.append(line_break)
+                #else:
+                #    chunks.append(line_break)
+            else:
+                break
+
+        # Chomp the tail.
+        if chomping is not False:
+            chunks.append(line_break)
+        if chomping is True:
+            chunks.extend(breaks)
+
+        # We are done.
+        return ScalarToken(''.join(chunks), False, start_mark, end_mark,
+                style)
+
+    def scan_block_scalar_indicators(self, start_mark):
+        # See the specification for details.
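+        # Header examples (illustrative): '|+2' gives chomping=True and
+        # increment=2; '>-' gives chomping=False and increment=None; a
+        # bare '|' leaves both indicators as None.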
+        chomping = None
+        increment = None
+        ch = self.peek()
+        if ch in '+-':
+            if ch == '+':
+                chomping = True
+            else:
+                chomping = False
+            self.forward()
+            ch = self.peek()
+            if ch in '0123456789':
+                increment = int(ch)
+                if increment == 0:
+                    raise ScannerError("while scanning a block scalar", start_mark,
+                            "expected indentation indicator in the range 1-9, but found 0",
+                            self.get_mark())
+                self.forward()
+        elif ch in '0123456789':
+            increment = int(ch)
+            if increment == 0:
+                raise ScannerError("while scanning a block scalar", start_mark,
+                        "expected indentation indicator in the range 1-9, but found 0",
+                        self.get_mark())
+            self.forward()
+            ch = self.peek()
+            if ch in '+-':
+                if ch == '+':
+                    chomping = True
+                else:
+                    chomping = False
+                self.forward()
+        ch = self.peek()
+        if ch not in '\0 \r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a block scalar", start_mark,
+                    "expected chomping or indentation indicators, but found %r"
+                    % ch, self.get_mark())
+        return chomping, increment
+
+    def scan_block_scalar_ignored_line(self, start_mark):
+        # See the specification for details.
+        while self.peek() == ' ':
+            self.forward()
+        if self.peek() == '#':
+            while self.peek() not in '\0\r\n\x85\u2028\u2029':
+                self.forward()
+        ch = self.peek()
+        if ch not in '\0\r\n\x85\u2028\u2029':
+            raise ScannerError("while scanning a block scalar", start_mark,
+                    "expected a comment or a line break, but found %r" % ch,
+                    self.get_mark())
+        self.scan_line_break()
+
+    def scan_block_scalar_indentation(self):
+        # See the specification for details.
+        chunks = []
+        max_indent = 0
+        end_mark = self.get_mark()
+        while self.peek() in ' \r\n\x85\u2028\u2029':
+            if self.peek() != ' ':
+                chunks.append(self.scan_line_break())
+                end_mark = self.get_mark()
+            else:
+                self.forward()
+                if self.column > max_indent:
+                    max_indent = self.column
+        return chunks, max_indent, end_mark
+
+    def scan_block_scalar_breaks(self, indent):
+        # See the specification for details.
+        chunks = []
+        end_mark = self.get_mark()
+        while self.column < indent and self.peek() == ' ':
+            self.forward()
+        while self.peek() in '\r\n\x85\u2028\u2029':
+            chunks.append(self.scan_line_break())
+            end_mark = self.get_mark()
+            while self.column < indent and self.peek() == ' ':
+                self.forward()
+        return chunks, end_mark
+
+    def scan_flow_scalar(self, style):
+        # See the specification for details.
+        # Note that we loosen the indentation rules for quoted scalars.
+        # Quoted scalars don't need to adhere to indentation because
+        # " and ' clearly mark their beginning and end. Therefore we are
+        # less restrictive than the specification requires. We only need
+        # to check that document separators are not included in scalars.
+        if style == '"':
+            double = True
+        else:
+            double = False
+        chunks = []
+        start_mark = self.get_mark()
+        quote = self.peek()
+        self.forward()
+        chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
+        while self.peek() != quote:
+            chunks.extend(self.scan_flow_scalar_spaces(double, start_mark))
+            chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
+        self.forward()
+        end_mark = self.get_mark()
+        return ScalarToken(''.join(chunks), False, start_mark, end_mark,
+                style)
+
+    ESCAPE_REPLACEMENTS = {
+        '0':    '\0',
+        'a':    '\x07',
+        'b':    '\x08',
+        't':    '\x09',
+        '\t':   '\x09',
+        'n':    '\x0A',
+        'v':    '\x0B',
+        'f':    '\x0C',
+        'r':    '\x0D',
+        'e':    '\x1B',
+        ' ':    '\x20',
+        '\"':   '\"',
+        '\\':   '\\',
+        'N':    '\x85',
+        '_':    '\xA0',
+        'L':    '\u2028',
+        'P':    '\u2029',
+    }
+
+    ESCAPE_CODES = {
+        'x':    2,
+        'u':    4,
+        'U':    8,
+    }
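+    # Decoding examples (illustrative): in a double-quoted scalar, '\x41'
+    # becomes 'A' (2 hex digits), '\u263A' becomes the character U+263A
+    # (4 digits), and '\N' maps to the next-line character '\x85' via
+    # ESCAPE_REPLACEMENTS above.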
+
+    def scan_flow_scalar_non_spaces(self, double, start_mark):
+        # See the specification for details.
+        chunks = []
+        while True:
+            length = 0
+            while self.peek(length) not in '\'\"\\\0 \t\r\n\x85\u2028\u2029':
+                length += 1
+            if length:
+                chunks.append(self.prefix(length))
+                self.forward(length)
+            ch = self.peek()
+            if not double and ch == '\'' and self.peek(1) == '\'':
+                chunks.append('\'')
+                self.forward(2)
+            elif (double and ch == '\'') or (not double and ch in '\"\\'):
+                chunks.append(ch)
+                self.forward()
+            elif double and ch == '\\':
+                self.forward()
+                ch = self.peek()
+                if ch in self.ESCAPE_REPLACEMENTS:
+                    chunks.append(self.ESCAPE_REPLACEMENTS[ch])
+                    self.forward()
+                elif ch in self.ESCAPE_CODES:
+                    length = self.ESCAPE_CODES[ch]
+                    self.forward()
+                    for k in range(length):
+                        if self.peek(k) not in '0123456789ABCDEFabcdef':
+                            raise ScannerError("while scanning a double-quoted scalar", start_mark,
+                                    "expected escape sequence of %d hexdecimal numbers, but found %r" %
+                                        (length, self.peek(k)), self.get_mark())
+                    code = int(self.prefix(length), 16)
+                    chunks.append(chr(code))
+                    self.forward(length)
+                elif ch in '\r\n\x85\u2028\u2029':
+                    self.scan_line_break()
+                    chunks.extend(self.scan_flow_scalar_breaks(double, start_mark))
+                else:
+                    raise ScannerError("while scanning a double-quoted scalar", start_mark,
+                            "found unknown escape character %r" % ch, self.get_mark())
+            else:
+                return chunks
+
+    def scan_flow_scalar_spaces(self, double, start_mark):
+        # See the specification for details.
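+        # Folding sketch (illustrative): in "one\n two" the single line
+        # break folds into one space, giving 'one two'; an additional
+        # empty line would contribute a '\n' instead.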
+        chunks = []
+        length = 0
+        while self.peek(length) in ' \t':
+            length += 1
+        whitespaces = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch == '\0':
+            raise ScannerError("while scanning a quoted scalar", start_mark,
+                    "found unexpected end of stream", self.get_mark())
+        elif ch in '\r\n\x85\u2028\u2029':
+            line_break = self.scan_line_break()
+            breaks = self.scan_flow_scalar_breaks(double, start_mark)
+            if line_break != '\n':
+                chunks.append(line_break)
+            elif not breaks:
+                chunks.append(' ')
+            chunks.extend(breaks)
+        else:
+            chunks.append(whitespaces)
+        return chunks
+
+    def scan_flow_scalar_breaks(self, double, start_mark):
+        # See the specification for details.
+        chunks = []
+        while True:
+            # Instead of checking indentation, we check for document
+            # separators.
+            prefix = self.prefix(3)
+            if (prefix == '---' or prefix == '...')   \
+                    and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
+                raise ScannerError("while scanning a quoted scalar", start_mark,
+                        "found unexpected document separator", self.get_mark())
+            while self.peek() in ' \t':
+                self.forward()
+            if self.peek() in '\r\n\x85\u2028\u2029':
+                chunks.append(self.scan_line_break())
+            else:
+                return chunks
+
+    def scan_plain(self):
+        # See the specification for details.
+        # We add an additional restriction for the flow context:
+        #   plain scalars in the flow context cannot contain ',', ':', '?',
+        #   '[', ']', '{' and '}'.
+        # We also keep track of the `allow_simple_key` flag here.
+        # Indentation rules are loosened for the flow context.
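+        # For illustration: in the block context 'foo bar: baz' stops
+        # before the ': ' and yields the plain scalar 'foo bar'; in the
+        # flow context, scanning 'a' inside '[a, b]' stops at the ','.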
+        chunks = []
+        start_mark = self.get_mark()
+        end_mark = start_mark
+        indent = self.indent+1
+        # We allow zero indentation for scalars, but then we need to check for
+        # document separators at the beginning of the line.
+        #if indent == 0:
+        #    indent = 1
+        spaces = []
+        while True:
+            length = 0
+            if self.peek() == '#':
+                break
+            while True:
+                ch = self.peek(length)
+                if ch in '\0 \t\r\n\x85\u2028\u2029'    \
+                        or (not self.flow_level and ch == ':' and
+                                self.peek(length+1) in '\0 \t\r\n\x85\u2028\u2029') \
+                        or (self.flow_level and ch in ',:?[]{}'):
+                    break
+                length += 1
+            # It's not clear what we should do with ':' in the flow context.
+            if (self.flow_level and ch == ':'
+                    and self.peek(length+1) not in '\0 \t\r\n\x85\u2028\u2029,[]{}'):
+                self.forward(length)
+                raise ScannerError("while scanning a plain scalar", start_mark,
+                    "found unexpected ':'", self.get_mark(),
+                    "Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.")
+            if length == 0:
+                break
+            self.allow_simple_key = False
+            chunks.extend(spaces)
+            chunks.append(self.prefix(length))
+            self.forward(length)
+            end_mark = self.get_mark()
+            spaces = self.scan_plain_spaces(indent, start_mark)
+            if not spaces or self.peek() == '#' \
+                    or (not self.flow_level and self.column < indent):
+                break
+        return ScalarToken(''.join(chunks), True, start_mark, end_mark)
+
+    def scan_plain_spaces(self, indent, start_mark):
+        # See the specification for details.
+        # The specification is really confusing about tabs in plain scalars.
+        # We just forbid them completely. Do not use tabs in YAML!
+        chunks = []
+        length = 0
+        while self.peek(length) in ' ':
+            length += 1
+        whitespaces = self.prefix(length)
+        self.forward(length)
+        ch = self.peek()
+        if ch in '\r\n\x85\u2028\u2029':
+            line_break = self.scan_line_break()
+            self.allow_simple_key = True
+            prefix = self.prefix(3)
+            if (prefix == '---' or prefix == '...')   \
+                    and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
+                return
+            breaks = []
+            while self.peek() in ' \r\n\x85\u2028\u2029':
+                if self.peek() == ' ':
+                    self.forward()
+                else:
+                    breaks.append(self.scan_line_break())
+                    prefix = self.prefix(3)
+                    if (prefix == '---' or prefix == '...')   \
+                            and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
+                        return
+            if line_break != '\n':
+                chunks.append(line_break)
+            elif not breaks:
+                chunks.append(' ')
+            chunks.extend(breaks)
+        elif whitespaces:
+            chunks.append(whitespaces)
+        return chunks
+
+    def scan_tag_handle(self, name, start_mark):
+        # See the specification for details.
+        # For some strange reason, the specification does not allow '_' in
+        # tag handles. I have allowed it anyway.
+        ch = self.peek()
+        if ch != '!':
+            raise ScannerError("while scanning a %s" % name, start_mark,
+                    "expected '!', but found %r" % ch, self.get_mark())
+        length = 1
+        ch = self.peek(length)
+        if ch != ' ':
+            while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z'  \
+                    or ch in '-_':
+                length += 1
+                ch = self.peek(length)
+            if ch != '!':
+                self.forward(length)
+                raise ScannerError("while scanning a %s" % name, start_mark,
+                        "expected '!', but found %r" % ch, self.get_mark())
+            length += 1
+        value = self.prefix(length)
+        self.forward(length)
+        return value
+
+    def scan_tag_uri(self, name, start_mark):
+        # See the specification for details.
+        # Note: we do not check if the URI is well-formed.
+        chunks = []
+        length = 0
+        ch = self.peek(length)
+        while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z'  \
+                or ch in '-;/?:@&=+$,_.!~*\'()[]%':
+            if ch == '%':
+                chunks.append(self.prefix(length))
+                self.forward(length)
+                length = 0
+                chunks.append(self.scan_uri_escapes(name, start_mark))
+            else:
+                length += 1
+            ch = self.peek(length)
+        if length:
+            chunks.append(self.prefix(length))
+            self.forward(length)
+            length = 0
+        if not chunks:
+            raise ScannerError("while parsing a %s" % name, start_mark,
+                    "expected URI, but found %r" % ch, self.get_mark())
+        return ''.join(chunks)
+
+    def scan_uri_escapes(self, name, start_mark):
+        # See the specification for details.
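+        # For illustration: '%21' decodes to '!', and the consecutive
+        # escapes '%C3%A9' decode together to a single character via
+        # UTF-8.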
+        codes = []
+        mark = self.get_mark()
+        while self.peek() == '%':
+            self.forward()
+            for k in range(2):
+                if self.peek(k) not in '0123456789ABCDEFabcdef':
+                    raise ScannerError("while scanning a %s" % name, start_mark,
+                            "expected URI escape sequence of 2 hexdecimal numbers, but found %r"
+                            % self.peek(k), self.get_mark())
+            codes.append(int(self.prefix(2), 16))
+            self.forward(2)
+        try:
+            value = bytes(codes).decode('utf-8')
+        except UnicodeDecodeError as exc:
+            raise ScannerError("while scanning a %s" % name, start_mark, str(exc), mark)
+        return value
+
+    def scan_line_break(self):
+        # Transforms:
+        #   '\r\n'      :   '\n'
+        #   '\r'        :   '\n'
+        #   '\n'        :   '\n'
+        #   '\x85'      :   '\n'
+        #   '\u2028'    :   '\u2028'
+        #   '\u2029'    :   '\u2029'
+        #   default     :   ''
+        ch = self.peek()
+        if ch in '\r\n\x85':
+            if self.prefix(2) == '\r\n':
+                self.forward(2)
+            else:
+                self.forward()
+            return '\n'
+        elif ch in '\u2028\u2029':
+            self.forward()
+            return ch
+        return ''
+
+#try:
+#    import psyco
+#    psyco.bind(Scanner)
+#except ImportError:
+#    pass
+
diff --git a/lib3/yaml/serializer.py b/lib3/yaml/serializer.py
new file mode 100644
index 0000000..fe911e6
--- /dev/null
+++ b/lib3/yaml/serializer.py
@@ -0,0 +1,111 @@
+
+__all__ = ['Serializer', 'SerializerError']
+
+from .error import YAMLError
+from .events import *
+from .nodes import *
+
+class SerializerError(YAMLError):
+    pass
+
+class Serializer:
+
+    ANCHOR_TEMPLATE = 'id%03d'
+
+    def __init__(self, encoding=None,
+            explicit_start=None, explicit_end=None, version=None, tags=None):
+        self.use_encoding = encoding
+        self.use_explicit_start = explicit_start
+        self.use_explicit_end = explicit_end
+        self.use_version = version
+        self.use_tags = tags
+        self.serialized_nodes = {}
+        self.anchors = {}
+        self.last_anchor_id = 0
+        self.closed = None
+
+    def open(self):
+        if self.closed is None:
+            self.emit(StreamStartEvent(encoding=self.use_encoding))
+            self.closed = False
+        elif self.closed:
+            raise SerializerError("serializer is closed")
+        else:
+            raise SerializerError("serializer is already opened")
+
+    def close(self):
+        if self.closed is None:
+            raise SerializerError("serializer is not opened")
+        elif not self.closed:
+            self.emit(StreamEndEvent())
+            self.closed = True
+
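+    # Typical call sequence (illustrative; assumes the class is mixed in
+    # with an emitter and a resolver providing the emit() and resolve
+    # methods): open(), then serialize(node) once per document, then
+    # close().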
+    #def __del__(self):
+    #    self.close()
+
+    def serialize(self, node):
+        if self.closed is None:
+            raise SerializerError("serializer is not opened")
+        elif self.closed:
+            raise SerializerError("serializer is closed")
+        self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
+            version=self.use_version, tags=self.use_tags))
+        self.anchor_node(node)
+        self.serialize_node(node, None, None)
+        self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
+        self.serialized_nodes = {}
+        self.anchors = {}
+        self.last_anchor_id = 0
+
+    def anchor_node(self, node):
+        if node in self.anchors:
+            if self.anchors[node] is None:
+                self.anchors[node] = self.generate_anchor(node)
+        else:
+            self.anchors[node] = None
+            if isinstance(node, SequenceNode):
+                for item in node.value:
+                    self.anchor_node(item)
+            elif isinstance(node, MappingNode):
+                for key, value in node.value:
+                    self.anchor_node(key)
+                    self.anchor_node(value)
+
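+    # Anchoring sketch (illustrative): anchor_node marks every node it
+    # reaches twice and names it via generate_anchor (e.g. 'id001'), so
+    # that serialize_node can emit an AliasEvent on the second encounter.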
+    def generate_anchor(self, node):
+        self.last_anchor_id += 1
+        return self.ANCHOR_TEMPLATE % self.last_anchor_id
+
+    def serialize_node(self, node, parent, index):
+        alias = self.anchors[node]
+        if node in self.serialized_nodes:
+            self.emit(AliasEvent(alias))
+        else:
+            self.serialized_nodes[node] = True
+            self.descend_resolver(parent, index)
+            if isinstance(node, ScalarNode):
+                detected_tag = self.resolve(ScalarNode, node.value, (True, False))
+                default_tag = self.resolve(ScalarNode, node.value, (False, True))
+                implicit = (node.tag == detected_tag), (node.tag == default_tag)
+                self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
+                    style=node.style))
+            elif isinstance(node, SequenceNode):
+                implicit = (node.tag
+                            == self.resolve(SequenceNode, node.value, True))
+                self.emit(SequenceStartEvent(alias, node.tag, implicit,
+                    flow_style=node.flow_style))
+                index = 0
+                for item in node.value:
+                    self.serialize_node(item, node, index)
+                    index += 1
+                self.emit(SequenceEndEvent())
+            elif isinstance(node, MappingNode):
+                implicit = (node.tag
+                            == self.resolve(MappingNode, node.value, True))
+                self.emit(MappingStartEvent(alias, node.tag, implicit,
+                    flow_style=node.flow_style))
+                for key, value in node.value:
+                    self.serialize_node(key, node, None)
+                    self.serialize_node(value, node, key)
+                self.emit(MappingEndEvent())
+            self.ascend_resolver()
+
diff --git a/lib3/yaml/tokens.py b/lib3/yaml/tokens.py
new file mode 100644
index 0000000..4d0b48a
--- /dev/null
+++ b/lib3/yaml/tokens.py
@@ -0,0 +1,104 @@
+
+class Token(object):
+    def __init__(self, start_mark, end_mark):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+    def __repr__(self):
+        attributes = [key for key in self.__dict__
+                if not key.endswith('_mark')]
+        attributes.sort()
+        arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
+                for key in attributes])
+        return '%s(%s)' % (self.__class__.__name__, arguments)
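+    # For illustration: ScalarToken('a', True, None, None) is shown as
+    # "ScalarToken(plain=True, style=None, value='a')"; the *_mark
+    # attributes are skipped and the remaining ones are sorted by name.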
+
+#class BOMToken(Token):
+#    id = '<byte order mark>'
+
+class DirectiveToken(Token):
+    id = '<directive>'
+    def __init__(self, name, value, start_mark, end_mark):
+        self.name = name
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class DocumentStartToken(Token):
+    id = '<document start>'
+
+class DocumentEndToken(Token):
+    id = '<document end>'
+
+class StreamStartToken(Token):
+    id = '<stream start>'
+    def __init__(self, start_mark=None, end_mark=None,
+            encoding=None):
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.encoding = encoding
+
+class StreamEndToken(Token):
+    id = '<stream end>'
+
+class BlockSequenceStartToken(Token):
+    id = '<block sequence start>'
+
+class BlockMappingStartToken(Token):
+    id = '<block mapping start>'
+
+class BlockEndToken(Token):
+    id = '<block end>'
+
+class FlowSequenceStartToken(Token):
+    id = '['
+
+class FlowMappingStartToken(Token):
+    id = '{'
+
+class FlowSequenceEndToken(Token):
+    id = ']'
+
+class FlowMappingEndToken(Token):
+    id = '}'
+
+class KeyToken(Token):
+    id = '?'
+
+class ValueToken(Token):
+    id = ':'
+
+class BlockEntryToken(Token):
+    id = '-'
+
+class FlowEntryToken(Token):
+    id = ','
+
+class AliasToken(Token):
+    id = '<alias>'
+    def __init__(self, value, start_mark, end_mark):
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class AnchorToken(Token):
+    id = '<anchor>'
+    def __init__(self, value, start_mark, end_mark):
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class TagToken(Token):
+    id = '<tag>'
+    def __init__(self, value, start_mark, end_mark):
+        self.value = value
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+
+class ScalarToken(Token):
+    id = '<scalar>'
+    def __init__(self, value, plain, start_mark, end_mark, style=None):
+        self.value = value
+        self.plain = plain
+        self.start_mark = start_mark
+        self.end_mark = end_mark
+        self.style = style
+
diff --git a/setup.cfg b/setup.cfg
new file mode 100644
index 0000000..d0239e4
--- /dev/null
+++ b/setup.cfg
@@ -0,0 +1,29 @@
+
+# The INCLUDE and LIB directories to build the '_yaml' extension.
+# You may also set them using the options '-I' and '-L'.
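+# For example (illustrative):
+#   python setup.py build_ext -I /usr/local/include -L /usr/local/lib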
+[build_ext]
+
+# List of directories to search for 'yaml.h' (separated by ':').
+#include_dirs=/usr/local/include:../../include
+
+# List of directories to search for 'libyaml.a' (separated by ':').
+#library_dirs=/usr/local/lib:../../lib
+
+# An alternative compiler to build the extension.
+#compiler=mingw32
+
+# Additional preprocessor definitions might be required.
+#define=YAML_DECLARE_STATIC
+
+# The following options are used to build PyYAML Windows installer
+# for Python 2.5 on my PC:
+#include_dirs=../../../libyaml/tags/0.1.4/include
+#library_dirs=../../../libyaml/tags/0.1.4/win32/vs2003/output/release/lib
+#define=YAML_DECLARE_STATIC
+
+# The following options are used to build PyYAML Windows installer
+# for Python 2.6, 2.7, 3.0, 3.1 and 3.2 on my PC:
+#include_dirs=../../../libyaml/tags/0.1.4/include
+#library_dirs=../../../libyaml/tags/0.1.4/win32/vs2008/output/release/lib
+#define=YAML_DECLARE_STATIC
+
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..727c3e0
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,345 @@
+
+NAME = 'PyYAML'
+VERSION = '3.11'
+DESCRIPTION = "YAML parser and emitter for Python"
+LONG_DESCRIPTION = """\
+YAML is a data serialization format designed for human readability
+and interaction with scripting languages.  PyYAML is a YAML parser
+and emitter for Python.
+
+PyYAML features a complete YAML 1.1 parser, Unicode support, pickle
+support, capable extension API, and sensible error messages.  PyYAML
+supports standard YAML tags and provides Python-specific tags that
+allow representing arbitrary Python objects.
+
+PyYAML is applicable for a broad range of tasks from complex
+configuration files to object serialization and persistence."""
+AUTHOR = "Kirill Simonov"
+AUTHOR_EMAIL = 'xi@resolvent.net'
+LICENSE = "MIT"
+PLATFORMS = "Any"
+URL = "http://pyyaml.org/wiki/PyYAML"
+DOWNLOAD_URL = "http://pyyaml.org/download/pyyaml/%s-%s.tar.gz" % (NAME, VERSION)
+CLASSIFIERS = [
+    "Development Status :: 5 - Production/Stable",
+    "Intended Audience :: Developers",
+    "License :: OSI Approved :: MIT License",
+    "Operating System :: OS Independent",
+    "Programming Language :: Python",
+    "Programming Language :: Python :: 2",
+    "Programming Language :: Python :: 2.5",
+    "Programming Language :: Python :: 2.6",
+    "Programming Language :: Python :: 2.7",
+    "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3.0",
+    "Programming Language :: Python :: 3.1",
+    "Programming Language :: Python :: 3.2",
+    "Topic :: Software Development :: Libraries :: Python Modules",
+    "Topic :: Text Processing :: Markup",
+]
+
+
+LIBYAML_CHECK = """
+#include <yaml.h>
+
+int main(void) {
+    yaml_parser_t parser;
+    yaml_emitter_t emitter;
+
+    yaml_parser_initialize(&parser);
+    yaml_parser_delete(&parser);
+
+    yaml_emitter_initialize(&emitter);
+    yaml_emitter_delete(&emitter);
+
+    return 0;
+}
+"""
+
+
+import sys, os.path
+
+from distutils import log
+from distutils.core import setup, Command
+from distutils.core import Distribution as _Distribution
+from distutils.core import Extension as _Extension
+from distutils.dir_util import mkpath
+from distutils.command.build_ext import build_ext as _build_ext
+from distutils.command.bdist_rpm import bdist_rpm as _bdist_rpm
+from distutils.errors import CompileError, LinkError, DistutilsPlatformError
+
+if 'setuptools.extension' in sys.modules:
+    _Extension = sys.modules['setuptools.extension']._Extension
+    sys.modules['distutils.core'].Extension = _Extension
+    sys.modules['distutils.extension'].Extension = _Extension
+    sys.modules['distutils.command.build_ext'].Extension = _Extension
+
+with_pyrex = None
+if sys.version_info[0] < 3:
+    try:
+        from Cython.Distutils.extension import Extension as _Extension
+        from Cython.Distutils import build_ext as _build_ext
+        with_pyrex = 'cython'
+    except ImportError:
+        try:
+            # Pyrex cannot build _yaml.c at the moment,
+            # but it may get fixed eventually.
+            from Pyrex.Distutils import Extension as _Extension
+            from Pyrex.Distutils import build_ext as _build_ext
+            with_pyrex = 'pyrex'
+        except ImportError:
+            pass
+
+
+class Distribution(_Distribution):
+
+    def __init__(self, attrs=None):
+        _Distribution.__init__(self, attrs)
+        if not self.ext_modules:
+            return
+        for idx in range(len(self.ext_modules)-1, -1, -1):
+            ext = self.ext_modules[idx]
+            if not isinstance(ext, Extension):
+                continue
+            setattr(self, ext.attr_name, None)
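+            # For illustration: for feature_name 'libyaml' the options
+            # added below are '--with-libyaml' and '--without-libyaml'.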
+            self.global_options = [
+                    (ext.option_name, None,
+                        "include %s (default if %s is available)"
+                        % (ext.feature_description, ext.feature_name)),
+                    (ext.neg_option_name, None,
+                        "exclude %s" % ext.feature_description),
+            ] + self.global_options
+            self.negative_opt = self.negative_opt.copy()
+            self.negative_opt[ext.neg_option_name] = ext.option_name
+
+    def has_ext_modules(self):
+        if not self.ext_modules:
+            return False
+        for ext in self.ext_modules:
+            with_ext = self.ext_status(ext)
+            if with_ext is None or with_ext:
+                return True
+        return False
+
+    def ext_status(self, ext):
+        if 'Java' in sys.version or 'IronPython' in sys.version or 'PyPy' in sys.version:
+            return False
+        if isinstance(ext, Extension):
+            with_ext = getattr(self, ext.attr_name)
+            return with_ext
+        else:
+            return True
+
+
+class Extension(_Extension):
+
+    def __init__(self, name, sources, feature_name, feature_description,
+            feature_check, **kwds):
+        if not with_pyrex:
+            for filename in sources[:]:
+                base, ext = os.path.splitext(filename)
+                if ext == '.pyx':
+                    sources.remove(filename)
+                    sources.append('%s.c' % base)
+        _Extension.__init__(self, name, sources, **kwds)
+        self.feature_name = feature_name
+        self.feature_description = feature_description
+        self.feature_check = feature_check
+        self.attr_name = 'with_' + feature_name.replace('-', '_')
+        self.option_name = 'with-' + feature_name
+        self.neg_option_name = 'without-' + feature_name
+
+
+class build_ext(_build_ext):
+
+    def run(self):
+        optional = True
+        disabled = True
+        for ext in self.extensions:
+            with_ext = self.distribution.ext_status(ext)
+            if with_ext is None:
+                disabled = False
+            elif with_ext:
+                optional = False
+                disabled = False
+                break
+        if disabled:
+            return
+        try:
+            _build_ext.run(self)
+        except DistutilsPlatformError:
+            exc = sys.exc_info()[1]
+            if optional:
+                log.warn(str(exc))
+                log.warn("skipping build_ext")
+            else:
+                raise
+
+    def get_source_files(self):
+        self.check_extensions_list(self.extensions)
+        filenames = []
+        for ext in self.extensions:
+            if with_pyrex == 'pyrex':
+                self.pyrex_sources(ext.sources, ext)
+            elif with_pyrex == 'cython':
+                self.cython_sources(ext.sources, ext)
+            for filename in ext.sources:
+                filenames.append(filename)
+                base = os.path.splitext(filename)[0]
+                for suffix in ['c', 'h', 'pyx', 'pxd']:
+                    candidate = '%s.%s' % (base, suffix)
+                    if candidate not in filenames and os.path.isfile(candidate):
+                        filenames.append(candidate)
+        return filenames
+
+    def get_outputs(self):
+        self.check_extensions_list(self.extensions)
+        outputs = []
+        for ext in self.extensions:
+            fullname = self.get_ext_fullname(ext.name)
+            filename = os.path.join(self.build_lib,
+                                    self.get_ext_filename(fullname))
+            if os.path.isfile(filename):
+                outputs.append(filename)
+        return outputs
+
+    def build_extensions(self):
+        self.check_extensions_list(self.extensions)
+        for ext in self.extensions:
+            with_ext = self.distribution.ext_status(ext)
+            if with_ext is None:
+                with_ext = self.check_extension_availability(ext)
+            if not with_ext:
+                continue
+            if with_pyrex == 'pyrex':
+                ext.sources = self.pyrex_sources(ext.sources, ext)
+            elif with_pyrex == 'cython':
+                ext.sources = self.cython_sources(ext.sources, ext)
+            self.build_extension(ext)
+
+    def check_extension_availability(self, ext):
+        cache = os.path.join(self.build_temp, 'check_%s.out' % ext.feature_name)
+        if not self.force and os.path.isfile(cache):
+            data = open(cache).read().strip()
+            if data == '1':
+                return True
+            elif data == '0':
+                return False
+        mkpath(self.build_temp)
+        src = os.path.join(self.build_temp, 'check_%s.c' % ext.feature_name)
+        open(src, 'w').write(ext.feature_check)
+        log.info("checking if %s is compilable" % ext.feature_name)
+        try:
+            [obj] = self.compiler.compile([src],
+                    # a 1-tuple in 'macros' tells distutils to undefine that macro
+                    macros=ext.define_macros+[(undef,) for undef in ext.undef_macros],
+                    include_dirs=ext.include_dirs,
+                    extra_postargs=(ext.extra_compile_args or []),
+                    depends=ext.depends)
+        except CompileError:
+            log.warn("")
+            log.warn("%s is not found or a compiler error: forcing --%s"
+                     % (ext.feature_name, ext.neg_option_name))
+            log.warn("(if %s is installed correctly, you may need to"
+                    % ext.feature_name)
+            log.warn(" specify the option --include-dirs or uncomment and")
+            log.warn(" modify the parameter include_dirs in setup.cfg)")
+            open(cache, 'w').write('0\n')
+            return False
+        prog = 'check_%s' % ext.feature_name
+        log.info("checking if %s is linkable" % ext.feature_name)
+        try:
+            self.compiler.link_executable([obj], prog,
+                    output_dir=self.build_temp,
+                    libraries=ext.libraries,
+                    library_dirs=ext.library_dirs,
+                    runtime_library_dirs=ext.runtime_library_dirs,
+                    extra_postargs=(ext.extra_link_args or []))
+        except LinkError:
+            log.warn("")
+            log.warn("%s is not found or a linker error: forcing --%s"
+                     % (ext.feature_name, ext.neg_option_name))
+            log.warn("(if %s is installed correctly, you may need to"
+                    % ext.feature_name)
+            log.warn(" specify the option --library-dirs or uncomment and")
+            log.warn(" modify the parameter library_dirs in setup.cfg)")
+            open(cache, 'w').write('0\n')
+            return False
+        open(cache, 'w').write('1\n')
+        return True
+
+
+class bdist_rpm(_bdist_rpm):
+
+    def _make_spec_file(self):
+        argv0 = sys.argv[0]
+        features = []
+        for ext in self.distribution.ext_modules:
+            if not isinstance(ext, Extension):
+                continue
+            with_ext = getattr(self.distribution, ext.attr_name)
+            if with_ext is None:
+                continue
+            if with_ext:
+                features.append('--'+ext.option_name)
+            else:
+                features.append('--'+ext.neg_option_name)
+        sys.argv[0] = ' '.join([argv0]+features)
+        spec_file = _bdist_rpm._make_spec_file(self)
+        sys.argv[0] = argv0
+        return spec_file
+
+
+class test(Command):
+
+    user_options = []
+
+    def initialize_options(self):
+        pass
+
+    def finalize_options(self):
+        pass
+
+    def run(self):
+        build_cmd = self.get_finalized_command('build')
+        build_cmd.run()
+        sys.path.insert(0, build_cmd.build_lib)
+        if sys.version_info[0] < 3:
+            sys.path.insert(0, 'tests/lib')
+        else:
+            sys.path.insert(0, 'tests/lib3')
+        import test_all
+        test_all.main([])
+
+
+if __name__ == '__main__':
+
+    setup(
+        name=NAME,
+        version=VERSION,
+        description=DESCRIPTION,
+        long_description=LONG_DESCRIPTION,
+        author=AUTHOR,
+        author_email=AUTHOR_EMAIL,
+        license=LICENSE,
+        platforms=PLATFORMS,
+        url=URL,
+        download_url=DOWNLOAD_URL,
+        classifiers=CLASSIFIERS,
+
+        package_dir={'': {2: 'lib', 3: 'lib3'}[sys.version_info[0]]},
+        packages=['yaml'],
+        ext_modules=[
+            Extension('_yaml', ['ext/_yaml.pyx'],
+                'libyaml', "LibYAML bindings", LIBYAML_CHECK,
+                libraries=['yaml']),
+        ],
+
+        distclass=Distribution,
+
+        cmdclass={
+            'build_ext': build_ext,
+            'bdist_rpm': bdist_rpm,
+            'test': test,
+        },
+    )
+
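
A note on the feature machinery above, for readers of this patch: each
Extension declares a feature name, from which three option names are
derived, and Distribution turns them into a tri-state command-line switch.
A minimal sketch of the derivation, assuming the 'libyaml' feature
declared in this setup.py:

    feature_name = 'libyaml'
    attr_name = 'with_' + feature_name.replace('-', '_')   # 'with_libyaml'
    option_name = 'with-' + feature_name                   # '--with-libyaml'
    neg_option_name = 'without-' + feature_name            # '--without-libyaml'

    # The resulting tri-state, as resolved by Distribution and build_ext:
    #   python setup.py --with-libyaml build     -> bindings are required
    #   python setup.py --without-libyaml build  -> bindings are skipped
    #   python setup.py build                    -> None: auto-detect via
    #                                               check_extension_availability
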
diff --git a/tests/data/a-nasty-libyaml-bug.loader-error b/tests/data/a-nasty-libyaml-bug.loader-error
new file mode 100644
index 0000000..f97d49f
--- /dev/null
+++ b/tests/data/a-nasty-libyaml-bug.loader-error
@@ -0,0 +1 @@
+[ [
\ No newline at end of file
diff --git a/tests/data/aliases-cdumper-bug.code b/tests/data/aliases-cdumper-bug.code
new file mode 100644
index 0000000..0168441
--- /dev/null
+++ b/tests/data/aliases-cdumper-bug.code
@@ -0,0 +1 @@
+[ today, today ]
diff --git a/tests/data/aliases.events b/tests/data/aliases.events
new file mode 100644
index 0000000..9139b51
--- /dev/null
+++ b/tests/data/aliases.events
@@ -0,0 +1,8 @@
+- !StreamStart
+- !DocumentStart
+- !SequenceStart
+- !Scalar { anchor: 'myanchor', tag: '!mytag', value: 'data' }
+- !Alias { anchor: 'myanchor' }
+- !SequenceEnd
+- !DocumentEnd
+- !StreamEnd
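
The .events fixtures spell out the event stream that is fed to the
emitter. A rough Python equivalent of aliases.events, using PyYAML's
public event classes (the exact output formatting is approximate):

    import yaml

    events = [
        yaml.StreamStartEvent(),
        yaml.DocumentStartEvent(),
        yaml.SequenceStartEvent(anchor=None, tag=None, implicit=True),
        yaml.ScalarEvent(anchor='myanchor', tag='!mytag',
                         implicit=(False, False), value='data'),
        yaml.AliasEvent(anchor='myanchor'),
        yaml.SequenceEndEvent(),
        yaml.DocumentEndEvent(),
        yaml.StreamEndEvent(),
    ]
    print(yaml.emit(events))   # roughly "- &myanchor !mytag data\n- *myanchor"
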
diff --git a/tests/data/bool.data b/tests/data/bool.data
new file mode 100644
index 0000000..0988b63
--- /dev/null
+++ b/tests/data/bool.data
@@ -0,0 +1,4 @@
+- yes
+- NO
+- True
+- on
diff --git a/tests/data/bool.detect b/tests/data/bool.detect
new file mode 100644
index 0000000..947ebbb
--- /dev/null
+++ b/tests/data/bool.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:bool
diff --git a/tests/data/colon-in-flow-context.loader-error b/tests/data/colon-in-flow-context.loader-error
new file mode 100644
index 0000000..13d5087
--- /dev/null
+++ b/tests/data/colon-in-flow-context.loader-error
@@ -0,0 +1 @@
+{ foo:bar }
diff --git a/tests/data/construct-binary-py2.code b/tests/data/construct-binary-py2.code
new file mode 100644
index 0000000..67ac0d5
--- /dev/null
+++ b/tests/data/construct-binary-py2.code
@@ -0,0 +1,7 @@
+{
+    "canonical":
+        "GIF89a\x0c\x00\x0c\x00\x84\x00\x00\xff\xff\xf7\xf5\xf5\xee\xe9\xe9\xe5fff\x00\x00\x00\xe7\xe7\xe7^^^\xf3\xf3\xed\x8e\x8e\x8e\xe0\xe0\xe0\x9f\x9f\x9f\x93\x93\x93\xa7\xa7\xa7\x9e\x9e\x9eiiiccc\xa3\xa3\xa3\x84\x84\x84\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9!\xfe\x0eMade with GIMP\x00,\x00\x00\x00\x00\x0c\x00\x0c\x00\x00\x05,  \x8e\x810\x9e\xe3@\x14\xe8i\x10\xc4\xd1\x8a\x08\x1c\xcf\x80M$z\xef\xff0\x85p\xb8\xb01f\r\x1b\xce\x01\xc3\x01\x1e\x10' \x82\n\x01\x00;",
+    "generic":
+        "GIF89a\x0c\x00\x0c\x00\x84\x00\x00\xff\xff\xf7\xf5\xf5\xee\xe9\xe9\xe5fff\x00\x00\x00\xe7\xe7\xe7^^^\xf3\xf3\xed\x8e\x8e\x8e\xe0\xe0\xe0\x9f\x9f\x9f\x93\x93\x93\xa7\xa7\xa7\x9e\x9e\x9eiiiccc\xa3\xa3\xa3\x84\x84\x84\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9!\xfe\x0eMade with GIMP\x00,\x00\x00\x00\x00\x0c\x00\x0c\x00\x00\x05,  \x8e\x810\x9e\xe3@\x14\xe8i\x10\xc4\xd1\x8a\x08\x1c\xcf\x80M$z\xef\xff0\x85p\xb8\xb01f\r\x1b\xce\x01\xc3\x01\x1e\x10' \x82\n\x01\x00;",
+    "description": "The binary value above is a tiny arrow encoded as a gif image.",
+}
diff --git a/tests/data/construct-binary-py2.data b/tests/data/construct-binary-py2.data
new file mode 100644
index 0000000..dcdb16f
--- /dev/null
+++ b/tests/data/construct-binary-py2.data
@@ -0,0 +1,12 @@
+canonical: !!binary "\
+ R0lGODlhDAAMAIQAAP//9/X17unp5WZmZgAAAOfn515eXvPz7Y6OjuDg4J+fn5\
+ OTk6enp56enmlpaWNjY6Ojo4SEhP/++f/++f/++f/++f/++f/++f/++f/++f/+\
+ +f/++f/++f/++f/++f/++SH+Dk1hZGUgd2l0aCBHSU1QACwAAAAADAAMAAAFLC\
+ AgjoEwnuNAFOhpEMTRiggcz4BNJHrv/zCFcLiwMWYNG84BwwEeECcgggoBADs="
+generic: !!binary |
+ R0lGODlhDAAMAIQAAP//9/X17unp5WZmZgAAAOfn515eXvPz7Y6OjuDg4J+fn5
+ OTk6enp56enmlpaWNjY6Ojo4SEhP/++f/++f/++f/++f/++f/++f/++f/++f/+
+ +f/++f/++f/++f/++f/++SH+Dk1hZGUgd2l0aCBHSU1QACwAAAAADAAMAAAFLC
+ AgjoEwnuNAFOhpEMTRiggcz4BNJHrv/zCFcLiwMWYNG84BwwEeECcgggoBADs=
+description:
+ The binary value above is a tiny arrow encoded as a gif image.
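
The .data/.code pairs are two halves of one assertion: loading the .data
document must produce the value written in the .code file. For !!binary
that means base64 decoding to a byte string (bytes on Python 3, str on
Python 2); a minimal sketch:

    import base64
    import yaml

    blob = yaml.safe_load('!!binary "R0lGODlh"')
    print(blob == base64.b64decode("R0lGODlh"))   # True ('GIF89a', the gif header)
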
diff --git a/tests/data/construct-binary-py3.code b/tests/data/construct-binary-py3.code
new file mode 100644
index 0000000..30bfc3f
--- /dev/null
+++ b/tests/data/construct-binary-py3.code
@@ -0,0 +1,7 @@
+{
+    "canonical":
+        b"GIF89a\x0c\x00\x0c\x00\x84\x00\x00\xff\xff\xf7\xf5\xf5\xee\xe9\xe9\xe5fff\x00\x00\x00\xe7\xe7\xe7^^^\xf3\xf3\xed\x8e\x8e\x8e\xe0\xe0\xe0\x9f\x9f\x9f\x93\x93\x93\xa7\xa7\xa7\x9e\x9e\x9eiiiccc\xa3\xa3\xa3\x84\x84\x84\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9!\xfe\x0eMade with GIMP\x00,\x00\x00\x00\x00\x0c\x00\x0c\x00\x00\x05,  \x8e\x810\x9e\xe3@\x14\xe8i\x10\xc4\xd1\x8a\x08\x1c\xcf\x80M$z\xef\xff0\x85p\xb8\xb01f\r\x1b\xce\x01\xc3\x01\x1e\x10' \x82\n\x01\x00;",
+    "generic":
+        b"GIF89a\x0c\x00\x0c\x00\x84\x00\x00\xff\xff\xf7\xf5\xf5\xee\xe9\xe9\xe5fff\x00\x00\x00\xe7\xe7\xe7^^^\xf3\xf3\xed\x8e\x8e\x8e\xe0\xe0\xe0\x9f\x9f\x9f\x93\x93\x93\xa7\xa7\xa7\x9e\x9e\x9eiiiccc\xa3\xa3\xa3\x84\x84\x84\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9\xff\xfe\xf9!\xfe\x0eMade with GIMP\x00,\x00\x00\x00\x00\x0c\x00\x0c\x00\x00\x05,  \x8e\x810\x9e\xe3@\x14\xe8i\x10\xc4\xd1\x8a\x08\x1c\xcf\x80M$z\xef\xff0\x85p\xb8\xb01f\r\x1b\xce\x01\xc3\x01\x1e\x10' \x82\n\x01\x00;",
+    "description": "The binary value above is a tiny arrow encoded as a gif image.",
+}
diff --git a/tests/data/construct-binary-py3.data b/tests/data/construct-binary-py3.data
new file mode 100644
index 0000000..dcdb16f
--- /dev/null
+++ b/tests/data/construct-binary-py3.data
@@ -0,0 +1,12 @@
+canonical: !!binary "\
+ R0lGODlhDAAMAIQAAP//9/X17unp5WZmZgAAAOfn515eXvPz7Y6OjuDg4J+fn5\
+ OTk6enp56enmlpaWNjY6Ojo4SEhP/++f/++f/++f/++f/++f/++f/++f/++f/+\
+ +f/++f/++f/++f/++f/++SH+Dk1hZGUgd2l0aCBHSU1QACwAAAAADAAMAAAFLC\
+ AgjoEwnuNAFOhpEMTRiggcz4BNJHrv/zCFcLiwMWYNG84BwwEeECcgggoBADs="
+generic: !!binary |
+ R0lGODlhDAAMAIQAAP//9/X17unp5WZmZgAAAOfn515eXvPz7Y6OjuDg4J+fn5
+ OTk6enp56enmlpaWNjY6Ojo4SEhP/++f/++f/++f/++f/++f/++f/++f/++f/+
+ +f/++f/++f/++f/++f/++SH+Dk1hZGUgd2l0aCBHSU1QACwAAAAADAAMAAAFLC
+ AgjoEwnuNAFOhpEMTRiggcz4BNJHrv/zCFcLiwMWYNG84BwwEeECcgggoBADs=
+description:
+ The binary value above is a tiny arrow encoded as a gif image.
diff --git a/tests/data/construct-bool.code b/tests/data/construct-bool.code
new file mode 100644
index 0000000..3d02580
--- /dev/null
+++ b/tests/data/construct-bool.code
@@ -0,0 +1,7 @@
+{
+    "canonical": True,
+    "answer": False,
+    "logical": True,
+    "option": True,
+    "but": { "y": "is a string", "n": "is a string" },
+}
diff --git a/tests/data/construct-bool.data b/tests/data/construct-bool.data
new file mode 100644
index 0000000..36d6519
--- /dev/null
+++ b/tests/data/construct-bool.data
@@ -0,0 +1,9 @@
+canonical: yes
+answer: NO
+logical: True
+option: on
+
+
+but:
+    y: is a string
+    n: is a string
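
The separate 'but' mapping exists because PyYAML resolves the YAML 1.1
boolean spellings (yes/no, true/false, on/off in any case) but leaves the
single letters 'y' and 'n' as plain strings, which is exactly what this
fixture asserts. A quick sketch:

    import yaml

    print(yaml.safe_load("[yes, NO, True, on]"))
    # -> [True, False, True, True]
    print(yaml.safe_load("{y: is a string, n: is a string}"))
    # -> {'y': 'is a string', 'n': 'is a string'}
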
diff --git a/tests/data/construct-custom.code b/tests/data/construct-custom.code
new file mode 100644
index 0000000..2d5f063
--- /dev/null
+++ b/tests/data/construct-custom.code
@@ -0,0 +1,10 @@
+[
+    MyTestClass1(x=1),
+    MyTestClass1(x=1, y=2, z=3),
+    MyTestClass2(x=10),
+    MyTestClass2(x=10, y=20, z=30),
+    MyTestClass3(x=1),
+    MyTestClass3(x=1, y=2, z=3),
+    MyTestClass3(x=1, y=2, z=3),
+    YAMLObject1(my_parameter='foo', my_another_parameter=[1,2,3])
+]
diff --git a/tests/data/construct-custom.data b/tests/data/construct-custom.data
new file mode 100644
index 0000000..9db0f64
--- /dev/null
+++ b/tests/data/construct-custom.data
@@ -0,0 +1,26 @@
+---
+- !tag1
+  x: 1
+- !tag1
+  x: 1
+  'y': 2
+  z: 3
+- !tag2
+  10
+- !tag2
+  =: 10
+  'y': 20
+  z: 30
+- !tag3
+  x: 1
+- !tag3
+  x: 1
+  'y': 2
+  z: 3
+- !tag3
+  =: 1
+  'y': 2
+  z: 3
+- !foo
+  my-parameter: foo
+  my-another-parameter: [1,2,3]
diff --git a/tests/data/construct-float.code b/tests/data/construct-float.code
new file mode 100644
index 0000000..8493bf2
--- /dev/null
+++ b/tests/data/construct-float.code
@@ -0,0 +1,8 @@
+{
+    "canonical": 685230.15,
+    "exponential": 685230.15,
+    "fixed": 685230.15,
+    "sexagesimal": 685230.15,
+    "negative infinity": -1e300000,
+    "not a number": 1e300000/1e300000,
+}
diff --git a/tests/data/construct-float.data b/tests/data/construct-float.data
new file mode 100644
index 0000000..b662c62
--- /dev/null
+++ b/tests/data/construct-float.data
@@ -0,0 +1,6 @@
+canonical: 6.8523015e+5
+exponential: 685.230_15e+03
+fixed: 685_230.15
+sexagesimal: 190:20:30.15
+negative infinity: -.inf
+not a number: .NaN
diff --git a/tests/data/construct-int.code b/tests/data/construct-int.code
new file mode 100644
index 0000000..1058f7b
--- /dev/null
+++ b/tests/data/construct-int.code
@@ -0,0 +1,8 @@
+{
+    "canonical": 685230,
+    "decimal": 685230,
+    "octal": 685230,
+    "hexadecimal": 685230,
+    "binary": 685230,
+    "sexagesimal": 685230,
+}
diff --git a/tests/data/construct-int.data b/tests/data/construct-int.data
new file mode 100644
index 0000000..852c314
--- /dev/null
+++ b/tests/data/construct-int.data
@@ -0,0 +1,6 @@
+canonical: 685230
+decimal: +685_230
+octal: 02472256
+hexadecimal: 0x_0A_74_AE
+binary: 0b1010_0111_0100_1010_1110
+sexagesimal: 190:20:30
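
All six spellings in construct-int.data denote the same integer, 685230;
the underscores, base prefixes, and the base-60 form are all part of the
YAML 1.1 int type. A compact check (assuming only that PyYAML is
importable):

    import yaml

    doc = """
    canonical: 685230
    decimal: +685_230
    octal: 02472256
    hexadecimal: 0x_0A_74_AE
    binary: 0b1010_0111_0100_1010_1110
    sexagesimal: 190:20:30
    """
    assert set(yaml.safe_load(doc).values()) == {685230}
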
diff --git a/tests/data/construct-map.code b/tests/data/construct-map.code
new file mode 100644
index 0000000..736ba48
--- /dev/null
+++ b/tests/data/construct-map.code
@@ -0,0 +1,6 @@
+{
+    "Block style":
+        { "Clark" : "Evans", "Brian" : "Ingerson", "Oren" : "Ben-Kiki" },
+    "Flow style":
+        { "Clark" : "Evans", "Brian" : "Ingerson", "Oren" : "Ben-Kiki" },
+}
diff --git a/tests/data/construct-map.data b/tests/data/construct-map.data
new file mode 100644
index 0000000..022446d
--- /dev/null
+++ b/tests/data/construct-map.data
@@ -0,0 +1,6 @@
+# Unordered set of key: value pairs.
+Block style: !!map
+  Clark : Evans
+  Brian : Ingerson
+  Oren  : Ben-Kiki
+Flow style: !!map { Clark: Evans, Brian: Ingerson, Oren: Ben-Kiki }
diff --git a/tests/data/construct-merge.code b/tests/data/construct-merge.code
new file mode 100644
index 0000000..6cd419d
--- /dev/null
+++ b/tests/data/construct-merge.code
@@ -0,0 +1,10 @@
+[
+    { "x": 1, "y": 2 },
+    { "x": 0, "y": 2 },
+    { "r": 10 },
+    { "r": 1 },
+    { "x": 1, "y": 2, "r": 10, "label": "center/big" },
+    { "x": 1, "y": 2, "r": 10, "label": "center/big" },
+    { "x": 1, "y": 2, "r": 10, "label": "center/big" },
+    { "x": 1, "y": 2, "r": 10, "label": "center/big" },
+]
diff --git a/tests/data/construct-merge.data b/tests/data/construct-merge.data
new file mode 100644
index 0000000..3fdb2e2
--- /dev/null
+++ b/tests/data/construct-merge.data
@@ -0,0 +1,27 @@
+---
+- &CENTER { x: 1, 'y': 2 }
+- &LEFT { x: 0, 'y': 2 }
+- &BIG { r: 10 }
+- &SMALL { r: 1 }
+
+# All the following maps are equal:
+
+- # Explicit keys
+  x: 1
+  'y': 2
+  r: 10
+  label: center/big
+
+- # Merge one map
+  << : *CENTER
+  r: 10
+  label: center/big
+
+- # Merge multiple maps
+  << : [ *CENTER, *BIG ]
+  label: center/big
+
+- # Override
+  << : [ *BIG, *LEFT, *SMALL ]
+  x: 1
+  label: center/big
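
These four entries pin down the merge-key contract: '<<' copies the
entries of the referenced mapping(s), earlier mappings in a merge list
take precedence over later ones, and keys spelled out in the host mapping
always win. A minimal sketch of the simple case:

    import yaml

    doc = """
    - &CENTER { x: 1, 'y': 2 }
    - << : *CENTER
      r: 10
      label: center/big
    """
    print(yaml.safe_load(doc))
    # -> [{'x': 1, 'y': 2}, {'x': 1, 'y': 2, 'r': 10, 'label': 'center/big'}]
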
diff --git a/tests/data/construct-null.code b/tests/data/construct-null.code
new file mode 100644
index 0000000..a895eaa
--- /dev/null
+++ b/tests/data/construct-null.code
@@ -0,0 +1,13 @@
+[
+    None,
+    { "empty": None, "canonical": None, "english": None, None: "null key" },
+    {
+        "sparse": [
+            None,
+            "2nd entry",
+            None,
+            "4th entry",
+            None,
+        ],
+    },
+]
diff --git a/tests/data/construct-null.data b/tests/data/construct-null.data
new file mode 100644
index 0000000..9ad0344
--- /dev/null
+++ b/tests/data/construct-null.data
@@ -0,0 +1,18 @@
+# A document may be null.
+---
+---
+# This mapping has four keys,
+# one has a value.
+empty:
+canonical: ~
+english: null
+~: null key
+---
+# This sequence has five
+# entries, two have values.
+sparse:
+  - ~
+  - 2nd entry
+  -
+  - 4th entry
+  - Null
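
For completeness: every null spelling exercised here (an empty value,
'~', 'null', and 'Null') constructs None, and a null is even usable as a
mapping key. A rough check:

    import yaml

    print(yaml.safe_load("empty:\ncanonical: ~\nenglish: null\ncaps: Null\n"))
    # -> {'empty': None, 'canonical': None, 'english': None, 'caps': None}
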
diff --git a/tests/data/construct-omap.code b/tests/data/construct-omap.code
new file mode 100644
index 0000000..f4cf1b8
--- /dev/null
+++ b/tests/data/construct-omap.code
@@ -0,0 +1,8 @@
+{
+    "Bestiary": [
+        ("aardvark", "African pig-like ant eater. Ugly."),
+        ("anteater", "South-American ant eater. Two species."),
+        ("anaconda", "South-American constrictor snake. Scaly."),
+    ],
+    "Numbers": [ ("one", 1), ("two", 2), ("three", 3) ],
+}
diff --git a/tests/data/construct-omap.data b/tests/data/construct-omap.data
new file mode 100644
index 0000000..4fa0f45
--- /dev/null
+++ b/tests/data/construct-omap.data
@@ -0,0 +1,8 @@
+# Explicitly typed ordered map (dictionary).
+Bestiary: !!omap
+  - aardvark: African pig-like ant eater. Ugly.
+  - anteater: South-American ant eater. Two species.
+  - anaconda: South-American constrictor snake. Scaly.
+  # Etc.
+# Flow style
+Numbers: !!omap [ one: 1, two: 2, three : 3 ]
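
!!omap is the order-preserving counterpart of !!map; PyYAML constructs it
as a list of (key, value) tuples rather than a dict, which is what
construct-omap.code spells out. Sketch:

    import yaml

    print(yaml.safe_load("!!omap [ one: 1, two: 2, three: 3 ]"))
    # -> [('one', 1), ('two', 2), ('three', 3)]
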
diff --git a/tests/data/construct-pairs.code b/tests/data/construct-pairs.code
new file mode 100644
index 0000000..64f86ee
--- /dev/null
+++ b/tests/data/construct-pairs.code
@@ -0,0 +1,9 @@
+{
+    "Block tasks": [
+        ("meeting", "with team."),
+        ("meeting", "with boss."),
+        ("break", "lunch."),
+        ("meeting", "with client."),
+    ],
+    "Flow tasks": [ ("meeting", "with team"), ("meeting", "with boss") ],
+}
diff --git a/tests/data/construct-pairs.data b/tests/data/construct-pairs.data
new file mode 100644
index 0000000..05f55b9
--- /dev/null
+++ b/tests/data/construct-pairs.data
@@ -0,0 +1,7 @@
+# Explicitly typed pairs.
+Block tasks: !!pairs
+  - meeting: with team.
+  - meeting: with boss.
+  - break: lunch.
+  - meeting: with client.
+Flow tasks: !!pairs [ meeting: with team, meeting: with boss ]
diff --git a/tests/data/construct-python-bool.code b/tests/data/construct-python-bool.code
new file mode 100644
index 0000000..170da01
--- /dev/null
+++ b/tests/data/construct-python-bool.code
@@ -0,0 +1 @@
+[ True, False ]
diff --git a/tests/data/construct-python-bool.data b/tests/data/construct-python-bool.data
new file mode 100644
index 0000000..0068869
--- /dev/null
+++ b/tests/data/construct-python-bool.data
@@ -0,0 +1 @@
+[ !!python/bool True, !!python/bool False ]
diff --git a/tests/data/construct-python-bytes-py3.code b/tests/data/construct-python-bytes-py3.code
new file mode 100644
index 0000000..b9051d8
--- /dev/null
+++ b/tests/data/construct-python-bytes-py3.code
@@ -0,0 +1 @@
+b'some binary data'
diff --git a/tests/data/construct-python-bytes-py3.data b/tests/data/construct-python-bytes-py3.data
new file mode 100644
index 0000000..9528725
--- /dev/null
+++ b/tests/data/construct-python-bytes-py3.data
@@ -0,0 +1 @@
+--- !!python/bytes 'c29tZSBiaW5hcnkgZGF0YQ=='
diff --git a/tests/data/construct-python-complex.code b/tests/data/construct-python-complex.code
new file mode 100644
index 0000000..e582dff
--- /dev/null
+++ b/tests/data/construct-python-complex.code
@@ -0,0 +1 @@
+[0.5+0j, 0.5+0.5j, 0.5j, -0.5+0.5j, -0.5+0j, -0.5-0.5j, -0.5j, 0.5-0.5j]
diff --git a/tests/data/construct-python-complex.data b/tests/data/construct-python-complex.data
new file mode 100644
index 0000000..17ebad4
--- /dev/null
+++ b/tests/data/construct-python-complex.data
@@ -0,0 +1,8 @@
+- !!python/complex 0.5+0j
+- !!python/complex 0.5+0.5j
+- !!python/complex 0.5j
+- !!python/complex -0.5+0.5j
+- !!python/complex -0.5+0j
+- !!python/complex -0.5-0.5j
+- !!python/complex -0.5j
+- !!python/complex 0.5-0.5j
diff --git a/tests/data/construct-python-float.code b/tests/data/construct-python-float.code
new file mode 100644
index 0000000..d5910a0
--- /dev/null
+++ b/tests/data/construct-python-float.code
@@ -0,0 +1 @@
+123.456
diff --git a/tests/data/construct-python-float.data b/tests/data/construct-python-float.data
new file mode 100644
index 0000000..b460eb8
--- /dev/null
+++ b/tests/data/construct-python-float.data
@@ -0,0 +1 @@
+!!python/float 123.456
diff --git a/tests/data/construct-python-int.code b/tests/data/construct-python-int.code
new file mode 100644
index 0000000..190a180
--- /dev/null
+++ b/tests/data/construct-python-int.code
@@ -0,0 +1 @@
+123
diff --git a/tests/data/construct-python-int.data b/tests/data/construct-python-int.data
new file mode 100644
index 0000000..741d669
--- /dev/null
+++ b/tests/data/construct-python-int.data
@@ -0,0 +1 @@
+!!python/int 123
diff --git a/tests/data/construct-python-long-short-py2.code b/tests/data/construct-python-long-short-py2.code
new file mode 100644
index 0000000..fafc3f1
--- /dev/null
+++ b/tests/data/construct-python-long-short-py2.code
@@ -0,0 +1 @@
+123L
diff --git a/tests/data/construct-python-long-short-py2.data b/tests/data/construct-python-long-short-py2.data
new file mode 100644
index 0000000..4bd5dc2
--- /dev/null
+++ b/tests/data/construct-python-long-short-py2.data
@@ -0,0 +1 @@
+!!python/long 123
diff --git a/tests/data/construct-python-long-short-py3.code b/tests/data/construct-python-long-short-py3.code
new file mode 100644
index 0000000..190a180
--- /dev/null
+++ b/tests/data/construct-python-long-short-py3.code
@@ -0,0 +1 @@
+123
diff --git a/tests/data/construct-python-long-short-py3.data b/tests/data/construct-python-long-short-py3.data
new file mode 100644
index 0000000..4bd5dc2
--- /dev/null
+++ b/tests/data/construct-python-long-short-py3.data
@@ -0,0 +1 @@
+!!python/long 123
diff --git a/tests/data/construct-python-name-module.code b/tests/data/construct-python-name-module.code
new file mode 100644
index 0000000..6f39148
--- /dev/null
+++ b/tests/data/construct-python-name-module.code
@@ -0,0 +1 @@
+[str, yaml.Loader, yaml.dump, abs, yaml.tokens]
diff --git a/tests/data/construct-python-name-module.data b/tests/data/construct-python-name-module.data
new file mode 100644
index 0000000..f0c9712
--- /dev/null
+++ b/tests/data/construct-python-name-module.data
@@ -0,0 +1,5 @@
+- !!python/name:str
+- !!python/name:yaml.Loader
+- !!python/name:yaml.dump
+- !!python/name:abs
+- !!python/module:yaml.tokens
diff --git a/tests/data/construct-python-none.code b/tests/data/construct-python-none.code
new file mode 100644
index 0000000..b0047fa
--- /dev/null
+++ b/tests/data/construct-python-none.code
@@ -0,0 +1 @@
+None
diff --git a/tests/data/construct-python-none.data b/tests/data/construct-python-none.data
new file mode 100644
index 0000000..7907ec3
--- /dev/null
+++ b/tests/data/construct-python-none.data
@@ -0,0 +1 @@
+!!python/none
diff --git a/tests/data/construct-python-object.code b/tests/data/construct-python-object.code
new file mode 100644
index 0000000..7f1edf1
--- /dev/null
+++ b/tests/data/construct-python-object.code
@@ -0,0 +1,23 @@
+[
+AnObject(1, 'two', [3,3,3]),
+AnInstance(1, 'two', [3,3,3]),
+
+AnObject(1, 'two', [3,3,3]),
+AnInstance(1, 'two', [3,3,3]),
+
+AState(1, 'two', [3,3,3]),
+ACustomState(1, 'two', [3,3,3]),
+
+InitArgs(1, 'two', [3,3,3]),
+InitArgsWithState(1, 'two', [3,3,3]),
+
+NewArgs(1, 'two', [3,3,3]),
+NewArgsWithState(1, 'two', [3,3,3]),
+
+Reduce(1, 'two', [3,3,3]),
+ReduceWithState(1, 'two', [3,3,3]),
+
+MyInt(3),
+MyList(3),
+MyDict(3),
+]
diff --git a/tests/data/construct-python-object.data b/tests/data/construct-python-object.data
new file mode 100644
index 0000000..bce8b2e
--- /dev/null
+++ b/tests/data/construct-python-object.data
@@ -0,0 +1,21 @@
+- !!python/object:test_constructor.AnObject { foo: 1, bar: two, baz: [3,3,3] }
+- !!python/object:test_constructor.AnInstance { foo: 1, bar: two, baz: [3,3,3] }
+
+- !!python/object/new:test_constructor.AnObject { args: [1, two], kwds: {baz: [3,3,3]} }
+- !!python/object/apply:test_constructor.AnInstance { args: [1, two], kwds: {baz: [3,3,3]} }
+
+- !!python/object:test_constructor.AState { _foo: 1, _bar: two, _baz: [3,3,3] }
+- !!python/object/new:test_constructor.ACustomState { state: !!python/tuple [1, two, [3,3,3]] }
+
+- !!python/object/new:test_constructor.InitArgs [1, two, [3,3,3]]
+- !!python/object/new:test_constructor.InitArgsWithState { args: [1, two], state: [3,3,3] }
+
+- !!python/object/new:test_constructor.NewArgs [1, two, [3,3,3]]
+- !!python/object/new:test_constructor.NewArgsWithState { args: [1, two], state: [3,3,3] }
+
+- !!python/object/apply:test_constructor.Reduce [1, two, [3,3,3]]
+- !!python/object/apply:test_constructor.ReduceWithState { args: [1, two], state: [3,3,3] }
+
+- !!python/object/new:test_constructor.MyInt [3]
+- !!python/object/new:test_constructor.MyList { listitems: [~, ~, ~] }
+- !!python/object/new:test_constructor.MyDict { dictitems: {0, 1, 2} }
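
Unlike the plain !! tags above, the !!python/object family can invoke
arbitrary constructors, so it is honored only by the full loader;
yaml.safe_load rejects these tags outright. A minimal sketch with a
stand-in class (AnObject here is hypothetical, playing the role of the
test_constructor classes):

    import yaml

    class AnObject(object):
        pass

    obj = yaml.load("!!python/object:__main__.AnObject {foo: 1, bar: two}",
                    Loader=yaml.Loader)
    print(obj.foo, obj.bar)   # -> 1 two
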
diff --git a/tests/data/construct-python-str-ascii.code b/tests/data/construct-python-str-ascii.code
new file mode 100644
index 0000000..d9d62f6
--- /dev/null
+++ b/tests/data/construct-python-str-ascii.code
@@ -0,0 +1 @@
+"ascii string"
diff --git a/tests/data/construct-python-str-ascii.data b/tests/data/construct-python-str-ascii.data
new file mode 100644
index 0000000..a83349e
--- /dev/null
+++ b/tests/data/construct-python-str-ascii.data
@@ -0,0 +1 @@
+--- !!python/str "ascii string"
diff --git a/tests/data/construct-python-str-utf8-py2.code b/tests/data/construct-python-str-utf8-py2.code
new file mode 100644
index 0000000..47b28ab
--- /dev/null
+++ b/tests/data/construct-python-str-utf8-py2.code
@@ -0,0 +1 @@
+u'\u042d\u0442\u043e \u0443\u043d\u0438\u043a\u043e\u0434\u043d\u0430\u044f \u0441\u0442\u0440\u043e\u043a\u0430'.encode('utf-8')
diff --git a/tests/data/construct-python-str-utf8-py2.data b/tests/data/construct-python-str-utf8-py2.data
new file mode 100644
index 0000000..9ef2c72
--- /dev/null
+++ b/tests/data/construct-python-str-utf8-py2.data
@@ -0,0 +1 @@
+--- !!python/str "Это уникодная строка"
diff --git a/tests/data/construct-python-str-utf8-py3.code b/tests/data/construct-python-str-utf8-py3.code
new file mode 100644
index 0000000..9f66032
--- /dev/null
+++ b/tests/data/construct-python-str-utf8-py3.code
@@ -0,0 +1 @@
+'\u042d\u0442\u043e \u0443\u043d\u0438\u043a\u043e\u0434\u043d\u0430\u044f \u0441\u0442\u0440\u043e\u043a\u0430'
diff --git a/tests/data/construct-python-str-utf8-py3.data b/tests/data/construct-python-str-utf8-py3.data
new file mode 100644
index 0000000..9ef2c72
--- /dev/null
+++ b/tests/data/construct-python-str-utf8-py3.data
@@ -0,0 +1 @@
+--- !!python/str "Это уникодная строка"
diff --git a/tests/data/construct-python-tuple-list-dict.code b/tests/data/construct-python-tuple-list-dict.code
new file mode 100644
index 0000000..20ced98
--- /dev/null
+++ b/tests/data/construct-python-tuple-list-dict.code
@@ -0,0 +1,6 @@
+[
+    [1, 2, 3, 4],
+    (1, 2, 3, 4),
+    {1: 2, 3: 4},
+    {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 0},
+]
diff --git a/tests/data/construct-python-tuple-list-dict.data b/tests/data/construct-python-tuple-list-dict.data
new file mode 100644
index 0000000..c56159b
--- /dev/null
+++ b/tests/data/construct-python-tuple-list-dict.data
@@ -0,0 +1,8 @@
+- !!python/list [1, 2, 3, 4]
+- !!python/tuple [1, 2, 3, 4]
+- !!python/dict {1: 2, 3: 4}
+- !!python/dict
+    !!python/tuple [0,0]: 0
+    !!python/tuple [0,1]: 1
+    !!python/tuple [1,0]: 1
+    !!python/tuple [1,1]: 0
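
Same caveat for the !!python/tuple, list, and dict tags: tuples (which,
unlike lists, can serve as dict keys, as the last fixture entry shows)
need the full loader, while untagged lists and mappings load the same
either way. Sketch:

    import yaml

    print(yaml.load("!!python/tuple [1, 2, 3]", Loader=yaml.Loader))
    # -> (1, 2, 3)
    print(yaml.safe_load("[1, 2, 3]"))   # plain lists need no special tag
    # -> [1, 2, 3]
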
diff --git a/tests/data/construct-python-unicode-ascii-py2.code b/tests/data/construct-python-unicode-ascii-py2.code
new file mode 100644
index 0000000..d4cd82c
--- /dev/null
+++ b/tests/data/construct-python-unicode-ascii-py2.code
@@ -0,0 +1 @@
+u"ascii string"
diff --git a/tests/data/construct-python-unicode-ascii-py2.data b/tests/data/construct-python-unicode-ascii-py2.data
new file mode 100644
index 0000000..3a0647b
--- /dev/null
+++ b/tests/data/construct-python-unicode-ascii-py2.data
@@ -0,0 +1 @@
+--- !!python/unicode "ascii string"
diff --git a/tests/data/construct-python-unicode-ascii-py3.code b/tests/data/construct-python-unicode-ascii-py3.code
new file mode 100644
index 0000000..d9d62f6
--- /dev/null
+++ b/tests/data/construct-python-unicode-ascii-py3.code
@@ -0,0 +1 @@
+"ascii string"
diff --git a/tests/data/construct-python-unicode-ascii-py3.data b/tests/data/construct-python-unicode-ascii-py3.data
new file mode 100644
index 0000000..3a0647b
--- /dev/null
+++ b/tests/data/construct-python-unicode-ascii-py3.data
@@ -0,0 +1 @@
+--- !!python/unicode "ascii string"
diff --git a/tests/data/construct-python-unicode-utf8-py2.code b/tests/data/construct-python-unicode-utf8-py2.code
new file mode 100644
index 0000000..2793ac7
--- /dev/null
+++ b/tests/data/construct-python-unicode-utf8-py2.code
@@ -0,0 +1 @@
+u'\u042d\u0442\u043e \u0443\u043d\u0438\u043a\u043e\u0434\u043d\u0430\u044f \u0441\u0442\u0440\u043e\u043a\u0430'
diff --git a/tests/data/construct-python-unicode-utf8-py2.data b/tests/data/construct-python-unicode-utf8-py2.data
new file mode 100644
index 0000000..5a980ea
--- /dev/null
+++ b/tests/data/construct-python-unicode-utf8-py2.data
@@ -0,0 +1 @@
+--- !!python/unicode "Это уникодная строка"
diff --git a/tests/data/construct-python-unicode-utf8-py3.code b/tests/data/construct-python-unicode-utf8-py3.code
new file mode 100644
index 0000000..9f66032
--- /dev/null
+++ b/tests/data/construct-python-unicode-utf8-py3.code
@@ -0,0 +1 @@
+'\u042d\u0442\u043e \u0443\u043d\u0438\u043a\u043e\u0434\u043d\u0430\u044f \u0441\u0442\u0440\u043e\u043a\u0430'
diff --git a/tests/data/construct-python-unicode-utf8-py3.data b/tests/data/construct-python-unicode-utf8-py3.data
new file mode 100644
index 0000000..5a980ea
--- /dev/null
+++ b/tests/data/construct-python-unicode-utf8-py3.data
@@ -0,0 +1 @@
+--- !!python/unicode "Это уникодная строка"
diff --git a/tests/data/construct-seq.code b/tests/data/construct-seq.code
new file mode 100644
index 0000000..0c90c05
--- /dev/null
+++ b/tests/data/construct-seq.code
@@ -0,0 +1,4 @@
+{
+    "Block style": ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune", "Pluto"],
+    "Flow style": ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune", "Pluto"],
+}
diff --git a/tests/data/construct-seq.data b/tests/data/construct-seq.data
new file mode 100644
index 0000000..bb92fd1
--- /dev/null
+++ b/tests/data/construct-seq.data
@@ -0,0 +1,15 @@
+# Ordered sequence of nodes
+Block style: !!seq
+- Mercury   # Rotates - no light/dark sides.
+- Venus     # Deadliest. Aptly named.
+- Earth     # Mostly dirt.
+- Mars      # Seems empty.
+- Jupiter   # The king.
+- Saturn    # Pretty.
+- Uranus    # Where the sun hardly shines.
+- Neptune   # Boring. No rings.
+- Pluto     # You call this a planet?
+Flow style: !!seq [ Mercury, Venus, Earth, Mars,      # Rocks
+                    Jupiter, Saturn, Uranus, Neptune, # Gas
+                    Pluto ]                           # Overrated
+
diff --git a/tests/data/construct-set.code b/tests/data/construct-set.code
new file mode 100644
index 0000000..aa090e8
--- /dev/null
+++ b/tests/data/construct-set.code
@@ -0,0 +1,4 @@
+{
+    "baseball players": set(["Mark McGwire", "Sammy Sosa", "Ken Griffey"]),
+    "baseball teams": set(["Boston Red Sox", "Detroit Tigers", "New York Yankees"]),
+}
diff --git a/tests/data/construct-set.data b/tests/data/construct-set.data
new file mode 100644
index 0000000..e05dc88
--- /dev/null
+++ b/tests/data/construct-set.data
@@ -0,0 +1,7 @@
+# Explicitly typed set.
+baseball players: !!set
+  ? Mark McGwire
+  ? Sammy Sosa
+  ? Ken Griffey
+# Flow style
+baseball teams: !!set { Boston Red Sox, Detroit Tigers, New York Yankees }
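
!!set maps onto a Python set; the '? key' block form and the value-less
flow mapping above are just two spellings of the same thing. Sketch:

    import yaml

    print(yaml.safe_load("!!set { Boston Red Sox, Detroit Tigers }"))
    # -> a set containing the two team names
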
diff --git a/tests/data/construct-str-ascii.code b/tests/data/construct-str-ascii.code
new file mode 100644
index 0000000..d9d62f6
--- /dev/null
+++ b/tests/data/construct-str-ascii.code
@@ -0,0 +1 @@
+"ascii string"
diff --git a/tests/data/construct-str-ascii.data b/tests/data/construct-str-ascii.data
new file mode 100644
index 0000000..0d93013
--- /dev/null
+++ b/tests/data/construct-str-ascii.data
@@ -0,0 +1 @@
+--- !!str "ascii string"
diff --git a/tests/data/construct-str-utf8-py2.code b/tests/data/construct-str-utf8-py2.code
new file mode 100644
index 0000000..2793ac7
--- /dev/null
+++ b/tests/data/construct-str-utf8-py2.code
@@ -0,0 +1 @@
+u'\u042d\u0442\u043e \u0443\u043d\u0438\u043a\u043e\u0434\u043d\u0430\u044f \u0441\u0442\u0440\u043e\u043a\u0430'
diff --git a/tests/data/construct-str-utf8-py2.data b/tests/data/construct-str-utf8-py2.data
new file mode 100644
index 0000000..e355f18
--- /dev/null
+++ b/tests/data/construct-str-utf8-py2.data
@@ -0,0 +1 @@
+--- !!str "Это уникодная строка"
diff --git a/tests/data/construct-str-utf8-py3.code b/tests/data/construct-str-utf8-py3.code
new file mode 100644
index 0000000..9f66032
--- /dev/null
+++ b/tests/data/construct-str-utf8-py3.code
@@ -0,0 +1 @@
+'\u042d\u0442\u043e \u0443\u043d\u0438\u043a\u043e\u0434\u043d\u0430\u044f \u0441\u0442\u0440\u043e\u043a\u0430'
diff --git a/tests/data/construct-str-utf8-py3.data b/tests/data/construct-str-utf8-py3.data
new file mode 100644
index 0000000..e355f18
--- /dev/null
+++ b/tests/data/construct-str-utf8-py3.data
@@ -0,0 +1 @@
+--- !!str "Это уникодная строка"
diff --git a/tests/data/construct-str.code b/tests/data/construct-str.code
new file mode 100644
index 0000000..8d57214
--- /dev/null
+++ b/tests/data/construct-str.code
@@ -0,0 +1 @@
+{ "string": "abcd" }
diff --git a/tests/data/construct-str.data b/tests/data/construct-str.data
new file mode 100644
index 0000000..606ac6b
--- /dev/null
+++ b/tests/data/construct-str.data
@@ -0,0 +1 @@
+string: abcd
diff --git a/tests/data/construct-timestamp.code b/tests/data/construct-timestamp.code
new file mode 100644
index 0000000..ffc3b2f
--- /dev/null
+++ b/tests/data/construct-timestamp.code
@@ -0,0 +1,7 @@
+{
+    "canonical": datetime.datetime(2001, 12, 15, 2, 59, 43, 100000),
+    "valid iso8601": datetime.datetime(2001, 12, 15, 2, 59, 43, 100000),
+    "space separated": datetime.datetime(2001, 12, 15, 2, 59, 43, 100000),
+    "no time zone (Z)": datetime.datetime(2001, 12, 15, 2, 59, 43, 100000),
+    "date (00:00:00Z)": datetime.date(2002, 12, 14),
+}
diff --git a/tests/data/construct-timestamp.data b/tests/data/construct-timestamp.data
new file mode 100644
index 0000000..c5f3840
--- /dev/null
+++ b/tests/data/construct-timestamp.data
@@ -0,0 +1,5 @@
+canonical:        2001-12-15T02:59:43.1Z
+valid iso8601:    2001-12-14t21:59:43.10-05:00
+space separated:  2001-12-14 21:59:43.10 -5
+no time zone (Z): 2001-12-15 2:59:43.10
+date (00:00:00Z): 2002-12-14
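
The first four spellings must all construct the identical datetime
(timezone offsets are folded in, fractions become microseconds), while
the date-only form yields a date. A rough check:

    import datetime
    import yaml

    assert (yaml.safe_load("t: 2001-12-14 21:59:43.10 -5")["t"]
            == datetime.datetime(2001, 12, 15, 2, 59, 43, 100000))
    assert yaml.safe_load("d: 2002-12-14")["d"] == datetime.date(2002, 12, 14)
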
diff --git a/tests/data/construct-value.code b/tests/data/construct-value.code
new file mode 100644
index 0000000..f1f015e
--- /dev/null
+++ b/tests/data/construct-value.code
@@ -0,0 +1,9 @@
+[
+    { "link with": [ "library1.dll", "library2.dll" ] },
+    {
+        "link with": [
+            { "=": "library1.dll", "version": 1.2 },
+            { "=": "library2.dll", "version": 2.3 },
+        ],
+    },
+]
diff --git a/tests/data/construct-value.data b/tests/data/construct-value.data
new file mode 100644
index 0000000..3eb7919
--- /dev/null
+++ b/tests/data/construct-value.data
@@ -0,0 +1,10 @@
+---     # Old schema
+link with:
+  - library1.dll
+  - library2.dll
+---     # New schema
+link with:
+  - = : library1.dll
+    version: 1.2
+  - = : library2.dll
+    version: 2.3
diff --git a/tests/data/document-separator-in-quoted-scalar.loader-error b/tests/data/document-separator-in-quoted-scalar.loader-error
new file mode 100644
index 0000000..9eeb0d6
--- /dev/null
+++ b/tests/data/document-separator-in-quoted-scalar.loader-error
@@ -0,0 +1,11 @@
+---
+"this --- is correct"
+---
+"this
+...is also
+correct"
+---
+"a quoted scalar
+cannot contain
+---
+document separators"
diff --git a/tests/data/documents.events b/tests/data/documents.events
new file mode 100644
index 0000000..775a51a
--- /dev/null
+++ b/tests/data/documents.events
@@ -0,0 +1,11 @@
+- !StreamStart
+- !DocumentStart { explicit: false }
+- !Scalar { implicit: [true,false], value: 'data' }
+- !DocumentEnd
+- !DocumentStart
+- !Scalar { implicit: [true,false] }
+- !DocumentEnd
+- !DocumentStart { version: [1,1], tags: { '!': '!foo', '!yaml!': 'tag:yaml.org,2002:', '!ugly!': '!!!!!!!' } }
+- !Scalar { implicit: [true,false] }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/duplicate-anchor-1.loader-error b/tests/data/duplicate-anchor-1.loader-error
new file mode 100644
index 0000000..906cf29
--- /dev/null
+++ b/tests/data/duplicate-anchor-1.loader-error
@@ -0,0 +1,3 @@
+- &foo bar
+- &bar bar
+- &foo bar
diff --git a/tests/data/duplicate-anchor-2.loader-error b/tests/data/duplicate-anchor-2.loader-error
new file mode 100644
index 0000000..62b4389
--- /dev/null
+++ b/tests/data/duplicate-anchor-2.loader-error
@@ -0,0 +1 @@
+&foo [1, 2, 3, &foo 4]
diff --git a/tests/data/duplicate-key.former-loader-error.code b/tests/data/duplicate-key.former-loader-error.code
new file mode 100644
index 0000000..cb73906
--- /dev/null
+++ b/tests/data/duplicate-key.former-loader-error.code
@@ -0,0 +1 @@
+{ 'foo': 'baz' }
diff --git a/tests/data/duplicate-key.former-loader-error.data b/tests/data/duplicate-key.former-loader-error.data
new file mode 100644
index 0000000..84deb8f
--- /dev/null
+++ b/tests/data/duplicate-key.former-loader-error.data
@@ -0,0 +1,3 @@
+---
+foo: bar
+foo: baz
diff --git a/tests/data/duplicate-mapping-key.former-loader-error.code b/tests/data/duplicate-mapping-key.former-loader-error.code
new file mode 100644
index 0000000..17a6285
--- /dev/null
+++ b/tests/data/duplicate-mapping-key.former-loader-error.code
@@ -0,0 +1 @@
+{ 'foo': { 'baz': 'bat', 'foo': 'duplicate key' } }
diff --git a/tests/data/duplicate-mapping-key.former-loader-error.data b/tests/data/duplicate-mapping-key.former-loader-error.data
new file mode 100644
index 0000000..7e7b4d1
--- /dev/null
+++ b/tests/data/duplicate-mapping-key.former-loader-error.data
@@ -0,0 +1,6 @@
+---
+&anchor foo:
+    foo: bar
+    *anchor: duplicate key
+    baz: bat
+    *anchor: duplicate key
diff --git a/tests/data/duplicate-merge-key.former-loader-error.code b/tests/data/duplicate-merge-key.former-loader-error.code
new file mode 100644
index 0000000..6a757f3
--- /dev/null
+++ b/tests/data/duplicate-merge-key.former-loader-error.code
@@ -0,0 +1 @@
+{ 'x': 1, 'y': 2, 'foo': 'bar', 'z': 3, 't': 4 }
diff --git a/tests/data/duplicate-merge-key.former-loader-error.data b/tests/data/duplicate-merge-key.former-loader-error.data
new file mode 100644
index 0000000..cebc3a1
--- /dev/null
+++ b/tests/data/duplicate-merge-key.former-loader-error.data
@@ -0,0 +1,4 @@
+---
+<<: {x: 1, y: 2}
+foo: bar
+<<: {z: 3, t: 4}
diff --git a/tests/data/duplicate-tag-directive.loader-error b/tests/data/duplicate-tag-directive.loader-error
new file mode 100644
index 0000000..50c81a0
--- /dev/null
+++ b/tests/data/duplicate-tag-directive.loader-error
@@ -0,0 +1,3 @@
+%TAG    !foo!   bar
+%TAG    !foo!   baz
+--- foo
diff --git a/tests/data/duplicate-value-key.former-loader-error.code b/tests/data/duplicate-value-key.former-loader-error.code
new file mode 100644
index 0000000..12f48c1
--- /dev/null
+++ b/tests/data/duplicate-value-key.former-loader-error.code
@@ -0,0 +1 @@
+{ 'foo': 'bar', '=': 2 }
diff --git a/tests/data/duplicate-value-key.former-loader-error.data b/tests/data/duplicate-value-key.former-loader-error.data
new file mode 100644
index 0000000..b34a1d6
--- /dev/null
+++ b/tests/data/duplicate-value-key.former-loader-error.data
@@ -0,0 +1,4 @@
+---
+=: 1
+foo: bar
+=: 2
diff --git a/tests/data/duplicate-yaml-directive.loader-error b/tests/data/duplicate-yaml-directive.loader-error
new file mode 100644
index 0000000..9b72390
--- /dev/null
+++ b/tests/data/duplicate-yaml-directive.loader-error
@@ -0,0 +1,3 @@
+%YAML   1.1
+%YAML   1.1
+--- foo
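
All of the *.loader-error fixtures share one contract: feeding the file
to the loader must raise a YAMLError. The appliance check boils down to
something like this sketch (not the actual tests/lib code):

    import yaml

    try:
        yaml.safe_load("%YAML 1.1\n%YAML 1.1\n--- foo\n")
    except yaml.YAMLError as exc:
        print("expected failure:", exc)   # duplicate %YAML directive
    else:
        raise AssertionError("loader-error fixture unexpectedly loaded")
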
diff --git a/tests/data/emit-block-scalar-in-simple-key-context-bug.canonical b/tests/data/emit-block-scalar-in-simple-key-context-bug.canonical
new file mode 100644
index 0000000..473bed5
--- /dev/null
+++ b/tests/data/emit-block-scalar-in-simple-key-context-bug.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+--- !!map
+{
+  ? !!str "foo"
+  : !!str "bar"
+}
diff --git a/tests/data/emit-block-scalar-in-simple-key-context-bug.data b/tests/data/emit-block-scalar-in-simple-key-context-bug.data
new file mode 100644
index 0000000..b6b42ba
--- /dev/null
+++ b/tests/data/emit-block-scalar-in-simple-key-context-bug.data
@@ -0,0 +1,4 @@
+? |-
+  foo
+: |-
+  bar
diff --git a/tests/data/emitting-unacceptable-unicode-character-bug-py2.code b/tests/data/emitting-unacceptable-unicode-character-bug-py2.code
new file mode 100644
index 0000000..4b92854
--- /dev/null
+++ b/tests/data/emitting-unacceptable-unicode-character-bug-py2.code
@@ -0,0 +1 @@
+u"\udd00"
diff --git a/tests/data/emitting-unacceptable-unicode-character-bug-py2.data b/tests/data/emitting-unacceptable-unicode-character-bug-py2.data
new file mode 100644
index 0000000..2a5df00
--- /dev/null
+++ b/tests/data/emitting-unacceptable-unicode-character-bug-py2.data
@@ -0,0 +1 @@
+"\udd00"
diff --git a/tests/data/emitting-unacceptable-unicode-character-bug-py2.skip-ext b/tests/data/emitting-unacceptable-unicode-character-bug-py2.skip-ext
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/emitting-unacceptable-unicode-character-bug-py2.skip-ext
diff --git a/tests/data/emitting-unacceptable-unicode-character-bug-py3.code b/tests/data/emitting-unacceptable-unicode-character-bug-py3.code
new file mode 100644
index 0000000..2a5df00
--- /dev/null
+++ b/tests/data/emitting-unacceptable-unicode-character-bug-py3.code
@@ -0,0 +1 @@
+"\udd00"
diff --git a/tests/data/emitting-unacceptable-unicode-character-bug-py3.data b/tests/data/emitting-unacceptable-unicode-character-bug-py3.data
new file mode 100644
index 0000000..2a5df00
--- /dev/null
+++ b/tests/data/emitting-unacceptable-unicode-character-bug-py3.data
@@ -0,0 +1 @@
+"\udd00"
diff --git a/tests/data/emitting-unacceptable-unicode-character-bug-py3.skip-ext b/tests/data/emitting-unacceptable-unicode-character-bug-py3.skip-ext
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/emitting-unacceptable-unicode-character-bug-py3.skip-ext
diff --git a/tests/data/empty-anchor.emitter-error b/tests/data/empty-anchor.emitter-error
new file mode 100644
index 0000000..ce663b6
--- /dev/null
+++ b/tests/data/empty-anchor.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart
+- !Scalar { anchor: '', value: 'foo' }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/empty-document-bug.canonical b/tests/data/empty-document-bug.canonical
new file mode 100644
index 0000000..28a6cf1
--- /dev/null
+++ b/tests/data/empty-document-bug.canonical
@@ -0,0 +1 @@
+# This YAML stream contains no YAML documents.
diff --git a/tests/data/empty-document-bug.data b/tests/data/empty-document-bug.data
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/empty-document-bug.data
diff --git a/tests/data/empty-document-bug.empty b/tests/data/empty-document-bug.empty
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/empty-document-bug.empty
diff --git a/tests/data/empty-documents.single-loader-error b/tests/data/empty-documents.single-loader-error
new file mode 100644
index 0000000..f8dba8d
--- /dev/null
+++ b/tests/data/empty-documents.single-loader-error
@@ -0,0 +1,2 @@
+--- # first document
+--- # second document
diff --git a/tests/data/empty-python-module.loader-error b/tests/data/empty-python-module.loader-error
new file mode 100644
index 0000000..83d3232
--- /dev/null
+++ b/tests/data/empty-python-module.loader-error
@@ -0,0 +1 @@
+--- !!python:module:
diff --git a/tests/data/empty-python-name.loader-error b/tests/data/empty-python-name.loader-error
new file mode 100644
index 0000000..6162957
--- /dev/null
+++ b/tests/data/empty-python-name.loader-error
@@ -0,0 +1 @@
+--- !!python/name: empty
diff --git a/tests/data/empty-tag-handle.emitter-error b/tests/data/empty-tag-handle.emitter-error
new file mode 100644
index 0000000..235c899
--- /dev/null
+++ b/tests/data/empty-tag-handle.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart { tags: { '': 'bar' } }
+- !Scalar { value: 'foo' }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/empty-tag-prefix.emitter-error b/tests/data/empty-tag-prefix.emitter-error
new file mode 100644
index 0000000..c6c0e95
--- /dev/null
+++ b/tests/data/empty-tag-prefix.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart { tags: { '!': '' } }
+- !Scalar { value: 'foo' }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/empty-tag.emitter-error b/tests/data/empty-tag.emitter-error
new file mode 100644
index 0000000..b7ca593
--- /dev/null
+++ b/tests/data/empty-tag.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart
+- !Scalar { tag: '', value: 'key', implicit: [false,false] }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/expected-document-end.emitter-error b/tests/data/expected-document-end.emitter-error
new file mode 100644
index 0000000..0cbab89
--- /dev/null
+++ b/tests/data/expected-document-end.emitter-error
@@ -0,0 +1,6 @@
+- !StreamStart
+- !DocumentStart
+- !Scalar { value: 'data 1' }
+- !Scalar { value: 'data 2' }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/expected-document-start.emitter-error b/tests/data/expected-document-start.emitter-error
new file mode 100644
index 0000000..8ce575e
--- /dev/null
+++ b/tests/data/expected-document-start.emitter-error
@@ -0,0 +1,4 @@
+- !StreamStart
+- !MappingStart
+- !MappingEnd
+- !StreamEnd
diff --git a/tests/data/expected-mapping.loader-error b/tests/data/expected-mapping.loader-error
new file mode 100644
index 0000000..82aed98
--- /dev/null
+++ b/tests/data/expected-mapping.loader-error
@@ -0,0 +1 @@
+--- !!map [not, a, map]
diff --git a/tests/data/expected-node-1.emitter-error b/tests/data/expected-node-1.emitter-error
new file mode 100644
index 0000000..36ceca3
--- /dev/null
+++ b/tests/data/expected-node-1.emitter-error
@@ -0,0 +1,4 @@
+- !StreamStart
+- !DocumentStart
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/expected-node-2.emitter-error b/tests/data/expected-node-2.emitter-error
new file mode 100644
index 0000000..891ee37
--- /dev/null
+++ b/tests/data/expected-node-2.emitter-error
@@ -0,0 +1,7 @@
+- !StreamStart
+- !DocumentStart
+- !MappingStart
+- !Scalar { value: 'key' }
+- !MappingEnd
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/expected-nothing.emitter-error b/tests/data/expected-nothing.emitter-error
new file mode 100644
index 0000000..62c54d3
--- /dev/null
+++ b/tests/data/expected-nothing.emitter-error
@@ -0,0 +1,4 @@
+- !StreamStart
+- !StreamEnd
+- !StreamStart
+- !StreamEnd
diff --git a/tests/data/expected-scalar.loader-error b/tests/data/expected-scalar.loader-error
new file mode 100644
index 0000000..7b3171e
--- /dev/null
+++ b/tests/data/expected-scalar.loader-error
@@ -0,0 +1 @@
+--- !!str [not a scalar]
diff --git a/tests/data/expected-sequence.loader-error b/tests/data/expected-sequence.loader-error
new file mode 100644
index 0000000..08074ea
--- /dev/null
+++ b/tests/data/expected-sequence.loader-error
@@ -0,0 +1 @@
+--- !!seq {foo, bar, baz}
diff --git a/tests/data/expected-stream-start.emitter-error b/tests/data/expected-stream-start.emitter-error
new file mode 100644
index 0000000..480dc2e
--- /dev/null
+++ b/tests/data/expected-stream-start.emitter-error
@@ -0,0 +1,2 @@
+- !DocumentStart
+- !DocumentEnd
diff --git a/tests/data/explicit-document.single-loader-error b/tests/data/explicit-document.single-loader-error
new file mode 100644
index 0000000..46c6f8b
--- /dev/null
+++ b/tests/data/explicit-document.single-loader-error
@@ -0,0 +1,4 @@
+---
+foo: bar
+---
+foo: bar
diff --git a/tests/data/fetch-complex-value-bug.loader-error b/tests/data/fetch-complex-value-bug.loader-error
new file mode 100644
index 0000000..25fac24
--- /dev/null
+++ b/tests/data/fetch-complex-value-bug.loader-error
@@ -0,0 +1,2 @@
+? "foo"
+ : "bar"
diff --git a/tests/data/float-representer-2.3-bug.code b/tests/data/float-representer-2.3-bug.code
new file mode 100644
index 0000000..d8db834
--- /dev/null
+++ b/tests/data/float-representer-2.3-bug.code
@@ -0,0 +1,7 @@
+{
+#    0.0: 0,
+    1.0: 1,
+    1e300000: +10,
+    -1e300000: -10,
+    1e300000/1e300000: 100,
+}
diff --git a/tests/data/float-representer-2.3-bug.data b/tests/data/float-representer-2.3-bug.data
new file mode 100644
index 0000000..efd1716
--- /dev/null
+++ b/tests/data/float-representer-2.3-bug.data
@@ -0,0 +1,5 @@
+#0.0:   # hash(0) == hash(nan) and 0 == nan in Python 2.3
+1.0: 1
++.inf: 10
+-.inf: -10
+.nan: 100
diff --git a/tests/data/float.data b/tests/data/float.data
new file mode 100644
index 0000000..524d5db
--- /dev/null
+++ b/tests/data/float.data
@@ -0,0 +1,6 @@
+- 6.8523015e+5
+- 685.230_15e+03
+- 685_230.15
+- 190:20:30.15
+- -.inf
+- .NaN
diff --git a/tests/data/float.detect b/tests/data/float.detect
new file mode 100644
index 0000000..1e12343
--- /dev/null
+++ b/tests/data/float.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:float
diff --git a/tests/data/forbidden-entry.loader-error b/tests/data/forbidden-entry.loader-error
new file mode 100644
index 0000000..f2e3079
--- /dev/null
+++ b/tests/data/forbidden-entry.loader-error
@@ -0,0 +1,2 @@
+test: - foo
+      - bar
diff --git a/tests/data/forbidden-key.loader-error b/tests/data/forbidden-key.loader-error
new file mode 100644
index 0000000..da9b471
--- /dev/null
+++ b/tests/data/forbidden-key.loader-error
@@ -0,0 +1,2 @@
+test: ? foo
+      : bar
diff --git a/tests/data/forbidden-value.loader-error b/tests/data/forbidden-value.loader-error
new file mode 100644
index 0000000..efd7ce5
--- /dev/null
+++ b/tests/data/forbidden-value.loader-error
@@ -0,0 +1 @@
+test: key: value
diff --git a/tests/data/implicit-document.single-loader-error b/tests/data/implicit-document.single-loader-error
new file mode 100644
index 0000000..f8c9a5c
--- /dev/null
+++ b/tests/data/implicit-document.single-loader-error
@@ -0,0 +1,3 @@
+foo: bar
+---
+foo: bar
diff --git a/tests/data/int.data b/tests/data/int.data
new file mode 100644
index 0000000..d44d376
--- /dev/null
+++ b/tests/data/int.data
@@ -0,0 +1,6 @@
+- 685230
+- +685_230
+- 02472256
+- 0x_0A_74_AE
+- 0b1010_0111_0100_1010_1110
+- 190:20:30
diff --git a/tests/data/int.detect b/tests/data/int.detect
new file mode 100644
index 0000000..575c9eb
--- /dev/null
+++ b/tests/data/int.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:int
diff --git a/tests/data/invalid-anchor-1.loader-error b/tests/data/invalid-anchor-1.loader-error
new file mode 100644
index 0000000..fcf7d0f
--- /dev/null
+++ b/tests/data/invalid-anchor-1.loader-error
@@ -0,0 +1 @@
+--- &?  foo # we allow only ascii and numeric characters in anchor names.
diff --git a/tests/data/invalid-anchor-2.loader-error b/tests/data/invalid-anchor-2.loader-error
new file mode 100644
index 0000000..bfc4ff0
--- /dev/null
+++ b/tests/data/invalid-anchor-2.loader-error
@@ -0,0 +1,8 @@
+---
+- [
+    &correct foo,
+    *correct,
+    *correct]   # still correct
+- *correct: still correct
+- &correct-or-not[foo, bar]
+
diff --git a/tests/data/invalid-anchor.emitter-error b/tests/data/invalid-anchor.emitter-error
new file mode 100644
index 0000000..3d2a814
--- /dev/null
+++ b/tests/data/invalid-anchor.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart
+- !Scalar { anchor: '5*5=25', value: 'foo' }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/invalid-base64-data-2.loader-error b/tests/data/invalid-base64-data-2.loader-error
new file mode 100644
index 0000000..2553a4f
--- /dev/null
+++ b/tests/data/invalid-base64-data-2.loader-error
@@ -0,0 +1,2 @@
+--- !!binary
+    двоичные данные в base64
diff --git a/tests/data/invalid-base64-data.loader-error b/tests/data/invalid-base64-data.loader-error
new file mode 100644
index 0000000..798abba
--- /dev/null
+++ b/tests/data/invalid-base64-data.loader-error
@@ -0,0 +1,2 @@
+--- !!binary
+    binary data encoded in base64 should be here.
diff --git a/tests/data/invalid-block-scalar-indicator.loader-error b/tests/data/invalid-block-scalar-indicator.loader-error
new file mode 100644
index 0000000..16a6db1
--- /dev/null
+++ b/tests/data/invalid-block-scalar-indicator.loader-error
@@ -0,0 +1,2 @@
+--- > what is this?  # a comment
+data
diff --git a/tests/data/invalid-character.loader-error b/tests/data/invalid-character.loader-error
new file mode 100644
index 0000000..03687b0
--- /dev/null
+++ b/tests/data/invalid-character.loader-error
Binary files differ
diff --git a/tests/data/invalid-character.stream-error b/tests/data/invalid-character.stream-error
new file mode 100644
index 0000000..171face
--- /dev/null
+++ b/tests/data/invalid-character.stream-error
Binary files differ
diff --git a/tests/data/invalid-directive-line.loader-error b/tests/data/invalid-directive-line.loader-error
new file mode 100644
index 0000000..0892eb6
--- /dev/null
+++ b/tests/data/invalid-directive-line.loader-error
@@ -0,0 +1,2 @@
+%YAML   1.1 ?   # extra symbol
+---
diff --git a/tests/data/invalid-directive-name-1.loader-error b/tests/data/invalid-directive-name-1.loader-error
new file mode 100644
index 0000000..153fd88
--- /dev/null
+++ b/tests/data/invalid-directive-name-1.loader-error
@@ -0,0 +1,2 @@
+%   # no name at all
+---
diff --git a/tests/data/invalid-directive-name-2.loader-error b/tests/data/invalid-directive-name-2.loader-error
new file mode 100644
index 0000000..3732a06
--- /dev/null
+++ b/tests/data/invalid-directive-name-2.loader-error
@@ -0,0 +1,2 @@
+%invalid-characters:in-directive name
+---
diff --git a/tests/data/invalid-escape-character.loader-error b/tests/data/invalid-escape-character.loader-error
new file mode 100644
index 0000000..a95ab76
--- /dev/null
+++ b/tests/data/invalid-escape-character.loader-error
@@ -0,0 +1 @@
+"some escape characters are \ncorrect, but this one \?\nis not\n"
diff --git a/tests/data/invalid-escape-numbers.loader-error b/tests/data/invalid-escape-numbers.loader-error
new file mode 100644
index 0000000..614ec9f
--- /dev/null
+++ b/tests/data/invalid-escape-numbers.loader-error
@@ -0,0 +1 @@
+"hm.... \u123?"
diff --git a/tests/data/invalid-indentation-indicator-1.loader-error b/tests/data/invalid-indentation-indicator-1.loader-error
new file mode 100644
index 0000000..a3cd12f
--- /dev/null
+++ b/tests/data/invalid-indentation-indicator-1.loader-error
@@ -0,0 +1,2 @@
+--- >0  # not valid
+data
diff --git a/tests/data/invalid-indentation-indicator-2.loader-error b/tests/data/invalid-indentation-indicator-2.loader-error
new file mode 100644
index 0000000..eefb6ec
--- /dev/null
+++ b/tests/data/invalid-indentation-indicator-2.loader-error
@@ -0,0 +1,2 @@
+--- >-0
+data
diff --git a/tests/data/invalid-item-without-trailing-break.loader-error b/tests/data/invalid-item-without-trailing-break.loader-error
new file mode 100644
index 0000000..fdcf6c6
--- /dev/null
+++ b/tests/data/invalid-item-without-trailing-break.loader-error
@@ -0,0 +1,2 @@
+-
+-0
\ No newline at end of file
diff --git a/tests/data/invalid-merge-1.loader-error b/tests/data/invalid-merge-1.loader-error
new file mode 100644
index 0000000..fc3c284
--- /dev/null
+++ b/tests/data/invalid-merge-1.loader-error
@@ -0,0 +1,2 @@
+foo: bar
+<<: baz
diff --git a/tests/data/invalid-merge-2.loader-error b/tests/data/invalid-merge-2.loader-error
new file mode 100644
index 0000000..8e88615
--- /dev/null
+++ b/tests/data/invalid-merge-2.loader-error
@@ -0,0 +1,2 @@
+foo: bar
+<<: [x: 1, y: 2, z, t: 4]
diff --git a/tests/data/invalid-omap-1.loader-error b/tests/data/invalid-omap-1.loader-error
new file mode 100644
index 0000000..2863392
--- /dev/null
+++ b/tests/data/invalid-omap-1.loader-error
@@ -0,0 +1,3 @@
+--- !!omap
+foo: bar
+baz: bat
diff --git a/tests/data/invalid-omap-2.loader-error b/tests/data/invalid-omap-2.loader-error
new file mode 100644
index 0000000..c377dfb
--- /dev/null
+++ b/tests/data/invalid-omap-2.loader-error
@@ -0,0 +1,3 @@
+--- !!omap
+- foo: bar
+- baz
diff --git a/tests/data/invalid-omap-3.loader-error b/tests/data/invalid-omap-3.loader-error
new file mode 100644
index 0000000..2a4f50d
--- /dev/null
+++ b/tests/data/invalid-omap-3.loader-error
@@ -0,0 +1,4 @@
+--- !!omap
+- foo: bar
+- baz: bar
+  bar: bar
diff --git a/tests/data/invalid-pairs-1.loader-error b/tests/data/invalid-pairs-1.loader-error
new file mode 100644
index 0000000..42d19ae
--- /dev/null
+++ b/tests/data/invalid-pairs-1.loader-error
@@ -0,0 +1,3 @@
+--- !!pairs
+foo: bar
+baz: bat
diff --git a/tests/data/invalid-pairs-2.loader-error b/tests/data/invalid-pairs-2.loader-error
new file mode 100644
index 0000000..31389ea
--- /dev/null
+++ b/tests/data/invalid-pairs-2.loader-error
@@ -0,0 +1,3 @@
+--- !!pairs
+- foo: bar
+- baz
diff --git a/tests/data/invalid-pairs-3.loader-error b/tests/data/invalid-pairs-3.loader-error
new file mode 100644
index 0000000..f8d7704
--- /dev/null
+++ b/tests/data/invalid-pairs-3.loader-error
@@ -0,0 +1,4 @@
+--- !!pairs
+- foo: bar
+- baz: bar
+  bar: bar
diff --git a/tests/data/invalid-python-bytes-2-py3.loader-error b/tests/data/invalid-python-bytes-2-py3.loader-error
new file mode 100644
index 0000000..f43af59
--- /dev/null
+++ b/tests/data/invalid-python-bytes-2-py3.loader-error
@@ -0,0 +1,2 @@
+--- !!python/bytes
+    двоичные данные в base64
diff --git a/tests/data/invalid-python-bytes-py3.loader-error b/tests/data/invalid-python-bytes-py3.loader-error
new file mode 100644
index 0000000..a19dfd0
--- /dev/null
+++ b/tests/data/invalid-python-bytes-py3.loader-error
@@ -0,0 +1,2 @@
+--- !!python/bytes
+    binary data encoded in base64 should be here.
diff --git a/tests/data/invalid-python-module-kind.loader-error b/tests/data/invalid-python-module-kind.loader-error
new file mode 100644
index 0000000..4f71cb5
--- /dev/null
+++ b/tests/data/invalid-python-module-kind.loader-error
@@ -0,0 +1 @@
+--- !!python/module:sys { must, be, scalar }
diff --git a/tests/data/invalid-python-module-value.loader-error b/tests/data/invalid-python-module-value.loader-error
new file mode 100644
index 0000000..f6797fc
--- /dev/null
+++ b/tests/data/invalid-python-module-value.loader-error
@@ -0,0 +1 @@
+--- !!python/module:sys "non-empty value"
diff --git a/tests/data/invalid-python-module.loader-error b/tests/data/invalid-python-module.loader-error
new file mode 100644
index 0000000..4e24072
--- /dev/null
+++ b/tests/data/invalid-python-module.loader-error
@@ -0,0 +1 @@
+--- !!python/module:no.such.module
diff --git a/tests/data/invalid-python-name-kind.loader-error b/tests/data/invalid-python-name-kind.loader-error
new file mode 100644
index 0000000..6ff8eb6
--- /dev/null
+++ b/tests/data/invalid-python-name-kind.loader-error
@@ -0,0 +1 @@
+--- !!python/name:sys.modules {}
diff --git a/tests/data/invalid-python-name-module-2.loader-error b/tests/data/invalid-python-name-module-2.loader-error
new file mode 100644
index 0000000..debc313
--- /dev/null
+++ b/tests/data/invalid-python-name-module-2.loader-error
@@ -0,0 +1 @@
+--- !!python/name:xml.parsers
diff --git a/tests/data/invalid-python-name-module.loader-error b/tests/data/invalid-python-name-module.loader-error
new file mode 100644
index 0000000..1966f6a
--- /dev/null
+++ b/tests/data/invalid-python-name-module.loader-error
@@ -0,0 +1 @@
+--- !!python/name:sys.modules.keys
diff --git a/tests/data/invalid-python-name-object.loader-error b/tests/data/invalid-python-name-object.loader-error
new file mode 100644
index 0000000..50f386f
--- /dev/null
+++ b/tests/data/invalid-python-name-object.loader-error
@@ -0,0 +1 @@
+--- !!python/name:os.path.rm_rf
diff --git a/tests/data/invalid-python-name-value.loader-error b/tests/data/invalid-python-name-value.loader-error
new file mode 100644
index 0000000..7be1401
--- /dev/null
+++ b/tests/data/invalid-python-name-value.loader-error
@@ -0,0 +1 @@
+--- !!python/name:sys.modules 5
diff --git a/tests/data/invalid-simple-key.loader-error b/tests/data/invalid-simple-key.loader-error
new file mode 100644
index 0000000..a58deec
--- /dev/null
+++ b/tests/data/invalid-simple-key.loader-error
@@ -0,0 +1,3 @@
+key: value
+invalid simple key
+next key: next value
diff --git a/tests/data/invalid-single-quote-bug.code b/tests/data/invalid-single-quote-bug.code
new file mode 100644
index 0000000..5558945
--- /dev/null
+++ b/tests/data/invalid-single-quote-bug.code
@@ -0,0 +1 @@
+["foo 'bar'", "foo\n'bar'"]
diff --git a/tests/data/invalid-single-quote-bug.data b/tests/data/invalid-single-quote-bug.data
new file mode 100644
index 0000000..76ef7ae
--- /dev/null
+++ b/tests/data/invalid-single-quote-bug.data
@@ -0,0 +1,2 @@
+- "foo 'bar'"
+- "foo\n'bar'"
diff --git a/tests/data/invalid-starting-character.loader-error b/tests/data/invalid-starting-character.loader-error
new file mode 100644
index 0000000..bb81c60
--- /dev/null
+++ b/tests/data/invalid-starting-character.loader-error
@@ -0,0 +1 @@
+@@@@@@@@@@@@@@@@@@@
diff --git a/tests/data/invalid-tag-1.loader-error b/tests/data/invalid-tag-1.loader-error
new file mode 100644
index 0000000..a68cd38
--- /dev/null
+++ b/tests/data/invalid-tag-1.loader-error
@@ -0,0 +1 @@
+- !<foo#bar> baz
diff --git a/tests/data/invalid-tag-2.loader-error b/tests/data/invalid-tag-2.loader-error
new file mode 100644
index 0000000..3a36700
--- /dev/null
+++ b/tests/data/invalid-tag-2.loader-error
@@ -0,0 +1 @@
+- !prefix!foo#bar baz
diff --git a/tests/data/invalid-tag-directive-handle.loader-error b/tests/data/invalid-tag-directive-handle.loader-error
new file mode 100644
index 0000000..42b5d7e
--- /dev/null
+++ b/tests/data/invalid-tag-directive-handle.loader-error
@@ -0,0 +1,2 @@
+%TAG !!! !!!
+---
diff --git a/tests/data/invalid-tag-directive-prefix.loader-error b/tests/data/invalid-tag-directive-prefix.loader-error
new file mode 100644
index 0000000..0cb482c
--- /dev/null
+++ b/tests/data/invalid-tag-directive-prefix.loader-error
@@ -0,0 +1,2 @@
+%TAG    !   tag:zz.com/foo#bar  # '#' is not allowed in URLs
+---
diff --git a/tests/data/invalid-tag-handle-1.emitter-error b/tests/data/invalid-tag-handle-1.emitter-error
new file mode 100644
index 0000000..d5df9a2
--- /dev/null
+++ b/tests/data/invalid-tag-handle-1.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart { tags: { '!foo': 'bar' } }
+- !Scalar { value: 'foo' }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/invalid-tag-handle-1.loader-error b/tests/data/invalid-tag-handle-1.loader-error
new file mode 100644
index 0000000..ef0d143
--- /dev/null
+++ b/tests/data/invalid-tag-handle-1.loader-error
@@ -0,0 +1,2 @@
+%TAG    foo bar
+---
diff --git a/tests/data/invalid-tag-handle-2.emitter-error b/tests/data/invalid-tag-handle-2.emitter-error
new file mode 100644
index 0000000..d1831d5
--- /dev/null
+++ b/tests/data/invalid-tag-handle-2.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart { tags: { '!!!': 'bar' } }
+- !Scalar { value: 'foo' }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/invalid-tag-handle-2.loader-error b/tests/data/invalid-tag-handle-2.loader-error
new file mode 100644
index 0000000..06c7f0e
--- /dev/null
+++ b/tests/data/invalid-tag-handle-2.loader-error
@@ -0,0 +1,2 @@
+%TAG    !foo    bar
+---
diff --git a/tests/data/invalid-uri-escapes-1.loader-error b/tests/data/invalid-uri-escapes-1.loader-error
new file mode 100644
index 0000000..a6ecb36
--- /dev/null
+++ b/tests/data/invalid-uri-escapes-1.loader-error
@@ -0,0 +1 @@
+--- !<tag:%x?y> foo
diff --git a/tests/data/invalid-uri-escapes-2.loader-error b/tests/data/invalid-uri-escapes-2.loader-error
new file mode 100644
index 0000000..b89e8f6
--- /dev/null
+++ b/tests/data/invalid-uri-escapes-2.loader-error
@@ -0,0 +1 @@
+--- !<%FF> foo
diff --git a/tests/data/invalid-uri-escapes-3.loader-error b/tests/data/invalid-uri-escapes-3.loader-error
new file mode 100644
index 0000000..f2e4cb8
--- /dev/null
+++ b/tests/data/invalid-uri-escapes-3.loader-error
@@ -0,0 +1 @@
+--- !<foo%d0%af%d0%af%d0bar> baz
diff --git a/tests/data/invalid-uri.loader-error b/tests/data/invalid-uri.loader-error
new file mode 100644
index 0000000..06307e0
--- /dev/null
+++ b/tests/data/invalid-uri.loader-error
@@ -0,0 +1 @@
+--- !foo!   bar
diff --git a/tests/data/invalid-utf8-byte.loader-error b/tests/data/invalid-utf8-byte.loader-error
new file mode 100644
index 0000000..0a58c70
--- /dev/null
+++ b/tests/data/invalid-utf8-byte.loader-error
@@ -0,0 +1,66 @@
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+Invalid byte ('\xFF'): ÿ <--
+###############################################################
diff --git a/tests/data/invalid-utf8-byte.stream-error b/tests/data/invalid-utf8-byte.stream-error
new file mode 100644
index 0000000..0a58c70
--- /dev/null
+++ b/tests/data/invalid-utf8-byte.stream-error
@@ -0,0 +1,66 @@
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+###############################################################
+Invalid byte ('\xFF'): ÿ <--
+###############################################################
diff --git a/tests/data/invalid-yaml-directive-version-1.loader-error b/tests/data/invalid-yaml-directive-version-1.loader-error
new file mode 100644
index 0000000..e9b4e3a
--- /dev/null
+++ b/tests/data/invalid-yaml-directive-version-1.loader-error
@@ -0,0 +1,3 @@
+# No version at all.
+%YAML
+---
diff --git a/tests/data/invalid-yaml-directive-version-2.loader-error b/tests/data/invalid-yaml-directive-version-2.loader-error
new file mode 100644
index 0000000..6aa7740
--- /dev/null
+++ b/tests/data/invalid-yaml-directive-version-2.loader-error
@@ -0,0 +1,2 @@
+%YAML   1e-5
+---
diff --git a/tests/data/invalid-yaml-directive-version-3.loader-error b/tests/data/invalid-yaml-directive-version-3.loader-error
new file mode 100644
index 0000000..345e784
--- /dev/null
+++ b/tests/data/invalid-yaml-directive-version-3.loader-error
@@ -0,0 +1,2 @@
+%YAML 1.
+---
diff --git a/tests/data/invalid-yaml-directive-version-4.loader-error b/tests/data/invalid-yaml-directive-version-4.loader-error
new file mode 100644
index 0000000..b35ca82
--- /dev/null
+++ b/tests/data/invalid-yaml-directive-version-4.loader-error
@@ -0,0 +1,2 @@
+%YAML 1.132.435
+---
diff --git a/tests/data/invalid-yaml-directive-version-5.loader-error b/tests/data/invalid-yaml-directive-version-5.loader-error
new file mode 100644
index 0000000..7c2b49f
--- /dev/null
+++ b/tests/data/invalid-yaml-directive-version-5.loader-error
@@ -0,0 +1,2 @@
+%YAML A.0
+---
diff --git a/tests/data/invalid-yaml-directive-version-6.loader-error b/tests/data/invalid-yaml-directive-version-6.loader-error
new file mode 100644
index 0000000..bae714f
--- /dev/null
+++ b/tests/data/invalid-yaml-directive-version-6.loader-error
@@ -0,0 +1,2 @@
+%YAML 123.C
+---
diff --git a/tests/data/invalid-yaml-version.loader-error b/tests/data/invalid-yaml-version.loader-error
new file mode 100644
index 0000000..dd01948
--- /dev/null
+++ b/tests/data/invalid-yaml-version.loader-error
@@ -0,0 +1,2 @@
+%YAML   2.0
+--- foo
diff --git a/tests/data/latin.unicode b/tests/data/latin.unicode
new file mode 100644
index 0000000..4fb799c
--- /dev/null
+++ b/tests/data/latin.unicode
@@ -0,0 +1,384 @@
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
+ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊ
+ËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿĀāĂ㥹ĆćĈĉĊċČčĎ
+ďĐđĒēĔĕĖėĘęĚěĜĝĞğĠġĢģĤĥĦħĨĩĪīĬĭĮįİıIJijĴĵĶķĸĹĺĻļĽľĿŀŁłŃńŅņŇňʼnŊŋŌōŎŏŐ
+őŒœŔŕŖŗŘřŚśŜŝŞşŠšŢţŤťŦŧŨũŪūŬŭŮůŰűŲųŴŵŶŷŸŹźŻżŽžſƀƁƂƃƄƅƆƇƈƉƊƋƌƍƎƏƐƑƒ
+ƓƔƕƖƗƘƙƚƛƜƝƞƟƠơƢƣƤƥƦƧƨƩƪƫƬƭƮƯưƱƲƳƴƵƶƷƸƹƺƼƽƾƿDŽdžLJljNJnjǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜ
+ǝǞǟǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZdzǴǵǶǷǸǹǺǻǼǽǾǿȀȁȂȃȄȅȆȇȈȉȊȋȌȍȎȏȐȑȒȓȔȕȖȗȘșȚțȜȝȞȟ
+ȠȡȢȣȤȥȦȧȨȩȪȫȬȭȮȯȰȱȲȳȴȵȶȷȸȹȺȻȼȽȾȿɀɁɐɑɒɓɔɕɖɗɘəɚɛɜɝɞɟɠɡɢɣɤɥɦɧɨɩɪɫɬɭɮɯ
+ɰɱɲɳɴɵɶɷɸɹɺɻɼɽɾɿʀʁʂʃʄʅʆʇʈʉʊʋʌʍʎʏʐʑʒʓʔʕʖʗʘʙʚʛʜʝʞʟʠʡʢʣʤʥʦʧʨʩʪʫʬʭʮʯΆΈ
+ΉΊΌΎΏΐΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫάέήίΰαβγδεζηθικλμνξοπρςστυφχψωϊϋόύ
+ώϐϑϒϓϔϕϖϗϘϙϚϛϜϝϞϟϠϡϢϣϤϥϦϧϨϩϪϫϬϭϮϯϰϱϲϳϴϵϷϸϹϺϻϼϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБ
+ВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяѐёђѓ
+єѕіїјљњћќѝўџѠѡѢѣѤѥѦѧѨѩѪѫѬѭѮѯѰѱѲѳѴѵѶѷѸѹѺѻѼѽѾѿҀҁҊҋҌҍҎҏҐґҒғҔҕҖҗҘҙҚқҜҝ
+ҞҟҠҡҢңҤҥҦҧҨҩҪҫҬҭҮүҰұҲҳҴҵҶҷҸҹҺһҼҽҾҿӀӁӂӃӄӅӆӇӈӉӊӋӌӍӎӐӑӒӓӔӕӖӗӘәӚӛӜӝӞӟӠ
+ӡӢӣӤӥӦӧӨөӪӫӬӭӮӯӰӱӲӳӴӵӶӷӸӹԀԁԂԃԄԅԆԇԈԉԊԋԌԍԎԏԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉ
+ՊՋՌՍՎՏՐՑՒՓՔՕՖաբգդեզէըթժիլխծկհձղճմյնշոչպջռսվտրցւփքօֆևႠႡႢႣႤႥႦႧႨႩႪႫႬႭ
+ႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅᴀᴁᴂᴃᴄᴅᴆᴇᴈᴉᴊᴋᴌᴍᴎᴏᴐᴑᴒᴓᴔᴕᴖᴗᴘᴙᴚᴛᴜᴝᴞᴟᴠᴡᴢᴣᴤᴥᴦᴧᴨᴩ
+ᴪᴫᵢᵣᵤᵥᵦᵧᵨᵩᵪᵫᵬᵭᵮᵯᵰᵱᵲᵳᵴᵵᵶᵷᵹᵺᵻᵼᵽᵾᵿᶀᶁᶂᶃᶄᶅᶆᶇᶈᶉᶊᶋᶌᶍᶎᶏᶐᶑᶒᶓᶔᶕᶖᶗᶘᶙᶚḀḁḂḃḄḅḆḇ
+ḈḉḊḋḌḍḎḏḐḑḒḓḔḕḖḗḘḙḚḛḜḝḞḟḠḡḢḣḤḥḦḧḨḩḪḫḬḭḮḯḰḱḲḳḴḵḶḷḸḹḺḻḼḽḾḿṀṁṂṃṄṅṆṇṈṉ
+ṊṋṌṍṎṏṐṑṒṓṔṕṖṗṘṙṚṛṜṝṞṟṠṡṢṣṤṥṦṧṨṩṪṫṬṭṮṯṰṱṲṳṴṵṶṷṸṹṺṻṼṽṾṿẀẁẂẃẄẅẆẇẈẉẊẋ
+ẌẍẎẏẐẑẒẓẔẕẖẗẘẙẚẛẠạẢảẤấẦầẨẩẪẫẬậẮắẰằẲẳẴẵẶặẸẹẺẻẼẽẾếỀềỂểỄễỆệỈỉỊịỌọỎỏỐố
+ỒồỔổỖỗỘộỚớỜờỞởỠỡỢợỤụỦủỨứỪừỬửỮữỰựỲỳỴỵỶỷỸỹἀἁἂἃἄἅἆἇἈἉἊἋἌἍἎἏἐἑἒἓἔἕἘἙἚἛ
+ἜἝἠἡἢἣἤἥἦἧἨἩἪἫἬἭἮἯἰἱἲἳἴἵἶἷἸἹἺἻἼἽἾἿὀὁὂὃὄὅὈὉὊὋὌὍὐὑὒὓὔὕὖὗὙὛὝὟὠὡὢὣὤὥὦὧ
+ὨὩὪὫὬὭὮὯὰάὲέὴήὶίὸόὺύὼώᾀᾁᾂᾃᾄᾅᾆᾇᾐᾑᾒᾓᾔᾕᾖᾗᾠᾡᾢᾣᾤᾥᾦᾧᾰᾱᾲᾳᾴᾶᾷᾸᾹᾺΆιῂῃῄῆῇῈΈῊ
+ΉῐῑῒΐῖῗῘῙῚΊῠῡῢΰῤῥῦῧῨῩῪΎῬῲῳῴῶῷῸΌῺΏⁱⁿℂℇℊℋℌℍℎℏℐℑℒℓℕℙℚℛℜℝℤΩℨKÅℬℭℯℰℱℳℴℹ
diff --git a/tests/data/mappings.events b/tests/data/mappings.events
new file mode 100644
index 0000000..3cb5579
--- /dev/null
+++ b/tests/data/mappings.events
@@ -0,0 +1,44 @@
+- !StreamStart
+
+- !DocumentStart
+- !MappingStart
+- !Scalar { implicit: [true,true], value: 'key' }
+- !Scalar { implicit: [true,true], value: 'value' }
+- !Scalar { implicit: [true,true], value: 'empty mapping' }
+- !MappingStart
+- !MappingEnd
+- !Scalar { implicit: [true,true], value: 'empty mapping with tag' }
+- !MappingStart { tag: '!mytag', implicit: false }
+- !MappingEnd
+- !Scalar { implicit: [true,true], value: 'block mapping' }
+- !MappingStart
+- !MappingStart
+- !Scalar { implicit: [true,true], value: 'complex' }
+- !Scalar { implicit: [true,true], value: 'key' }
+- !Scalar { implicit: [true,true], value: 'complex' }
+- !Scalar { implicit: [true,true], value: 'key' }
+- !MappingEnd
+- !MappingStart
+- !Scalar { implicit: [true,true], value: 'complex' }
+- !Scalar { implicit: [true,true], value: 'key' }
+- !MappingEnd
+- !MappingEnd
+- !Scalar { implicit: [true,true], value: 'flow mapping' }
+- !MappingStart { flow_style: true }
+- !Scalar { implicit: [true,true], value: 'key' }
+- !Scalar { implicit: [true,true], value: 'value' }
+- !MappingStart
+- !Scalar { implicit: [true,true], value: 'complex' }
+- !Scalar { implicit: [true,true], value: 'key' }
+- !Scalar { implicit: [true,true], value: 'complex' }
+- !Scalar { implicit: [true,true], value: 'key' }
+- !MappingEnd
+- !MappingStart
+- !Scalar { implicit: [true,true], value: 'complex' }
+- !Scalar { implicit: [true,true], value: 'key' }
+- !MappingEnd
+- !MappingEnd
+- !MappingEnd
+- !DocumentEnd
+
+- !StreamEnd
diff --git a/tests/data/merge.data b/tests/data/merge.data
new file mode 100644
index 0000000..e455bbc
--- /dev/null
+++ b/tests/data/merge.data
@@ -0,0 +1 @@
+- <<
diff --git a/tests/data/merge.detect b/tests/data/merge.detect
new file mode 100644
index 0000000..1672d0d
--- /dev/null
+++ b/tests/data/merge.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:merge
diff --git a/tests/data/more-floats.code b/tests/data/more-floats.code
new file mode 100644
index 0000000..e3e444e
--- /dev/null
+++ b/tests/data/more-floats.code
@@ -0,0 +1 @@
+[0.0, +1.0, -1.0, +1e300000, -1e300000, 1e300000/1e300000, -(1e300000/1e300000)] # the last two items are ind and qnan, respectively.
diff --git a/tests/data/more-floats.data b/tests/data/more-floats.data
new file mode 100644
index 0000000..399eb17
--- /dev/null
+++ b/tests/data/more-floats.data
@@ -0,0 +1 @@
+[0.0, +1.0, -1.0, +.inf, -.inf, .nan, .nan]
diff --git a/tests/data/negative-float-bug.code b/tests/data/negative-float-bug.code
new file mode 100644
index 0000000..18e16e3
--- /dev/null
+++ b/tests/data/negative-float-bug.code
@@ -0,0 +1 @@
+-1.0
diff --git a/tests/data/negative-float-bug.data b/tests/data/negative-float-bug.data
new file mode 100644
index 0000000..18e16e3
--- /dev/null
+++ b/tests/data/negative-float-bug.data
@@ -0,0 +1 @@
+-1.0
diff --git a/tests/data/no-alias-anchor.emitter-error b/tests/data/no-alias-anchor.emitter-error
new file mode 100644
index 0000000..5ff065c
--- /dev/null
+++ b/tests/data/no-alias-anchor.emitter-error
@@ -0,0 +1,8 @@
+- !StreamStart
+- !DocumentStart
+- !SequenceStart
+- !Scalar { anchor: A, value: data }
+- !Alias { }
+- !SequenceEnd
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/no-alias-anchor.skip-ext b/tests/data/no-alias-anchor.skip-ext
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/no-alias-anchor.skip-ext
diff --git a/tests/data/no-block-collection-end.loader-error b/tests/data/no-block-collection-end.loader-error
new file mode 100644
index 0000000..02d4d37
--- /dev/null
+++ b/tests/data/no-block-collection-end.loader-error
@@ -0,0 +1,3 @@
+- foo
+- bar
+baz: bar
diff --git a/tests/data/no-block-mapping-end-2.loader-error b/tests/data/no-block-mapping-end-2.loader-error
new file mode 100644
index 0000000..be63571
--- /dev/null
+++ b/tests/data/no-block-mapping-end-2.loader-error
@@ -0,0 +1,3 @@
+? foo
+: bar
+: baz
diff --git a/tests/data/no-block-mapping-end.loader-error b/tests/data/no-block-mapping-end.loader-error
new file mode 100644
index 0000000..1ea921c
--- /dev/null
+++ b/tests/data/no-block-mapping-end.loader-error
@@ -0,0 +1 @@
+foo: "bar" "baz"
diff --git a/tests/data/no-document-start.loader-error b/tests/data/no-document-start.loader-error
new file mode 100644
index 0000000..c725ec8
--- /dev/null
+++ b/tests/data/no-document-start.loader-error
@@ -0,0 +1,3 @@
+%YAML   1.1
+# no ---
+foo: bar
diff --git a/tests/data/no-flow-mapping-end.loader-error b/tests/data/no-flow-mapping-end.loader-error
new file mode 100644
index 0000000..8bd1403
--- /dev/null
+++ b/tests/data/no-flow-mapping-end.loader-error
@@ -0,0 +1 @@
+{ foo: bar ]
diff --git a/tests/data/no-flow-sequence-end.loader-error b/tests/data/no-flow-sequence-end.loader-error
new file mode 100644
index 0000000..750d973
--- /dev/null
+++ b/tests/data/no-flow-sequence-end.loader-error
@@ -0,0 +1 @@
+[foo, bar}
diff --git a/tests/data/no-node-1.loader-error b/tests/data/no-node-1.loader-error
new file mode 100644
index 0000000..07b1500
--- /dev/null
+++ b/tests/data/no-node-1.loader-error
@@ -0,0 +1 @@
+- !foo ]
diff --git a/tests/data/no-node-2.loader-error b/tests/data/no-node-2.loader-error
new file mode 100644
index 0000000..563e3b3
--- /dev/null
+++ b/tests/data/no-node-2.loader-error
@@ -0,0 +1 @@
+- [ !foo } ]
diff --git a/tests/data/no-tag.emitter-error b/tests/data/no-tag.emitter-error
new file mode 100644
index 0000000..384c62f
--- /dev/null
+++ b/tests/data/no-tag.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart
+- !Scalar { value: 'foo', implicit: [false,false] }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/null.data b/tests/data/null.data
new file mode 100644
index 0000000..ad12528
--- /dev/null
+++ b/tests/data/null.data
@@ -0,0 +1,3 @@
+-
+- ~
+- null
diff --git a/tests/data/null.detect b/tests/data/null.detect
new file mode 100644
index 0000000..19110c7
--- /dev/null
+++ b/tests/data/null.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:null
diff --git a/tests/data/odd-utf16.stream-error b/tests/data/odd-utf16.stream-error
new file mode 100644
index 0000000..b59e434
--- /dev/null
+++ b/tests/data/odd-utf16.stream-error
Binary files differ
diff --git a/tests/data/recursive-anchor.former-loader-error b/tests/data/recursive-anchor.former-loader-error
new file mode 100644
index 0000000..661166c
--- /dev/null
+++ b/tests/data/recursive-anchor.former-loader-error
@@ -0,0 +1,4 @@
+- &foo [1
+    2,
+    3,
+    *foo]
diff --git a/tests/data/recursive-dict.recursive b/tests/data/recursive-dict.recursive
new file mode 100644
index 0000000..8f326f5
--- /dev/null
+++ b/tests/data/recursive-dict.recursive
@@ -0,0 +1,3 @@
+value = {}
+instance = AnInstance(value, value)
+value[instance] = instance
diff --git a/tests/data/recursive-list.recursive b/tests/data/recursive-list.recursive
new file mode 100644
index 0000000..27a4ae5
--- /dev/null
+++ b/tests/data/recursive-list.recursive
@@ -0,0 +1,2 @@
+value = []
+value.append(value)
diff --git a/tests/data/recursive-set.recursive b/tests/data/recursive-set.recursive
new file mode 100644
index 0000000..457c50d
--- /dev/null
+++ b/tests/data/recursive-set.recursive
@@ -0,0 +1,7 @@
+try:
+    set
+except NameError:
+    from sets import Set as set
+value = set()
+value.add(AnInstance(foo=value, bar=value))
+value.add(AnInstance(foo=value, bar=value))
diff --git a/tests/data/recursive-state.recursive b/tests/data/recursive-state.recursive
new file mode 100644
index 0000000..bffe61e
--- /dev/null
+++ b/tests/data/recursive-state.recursive
@@ -0,0 +1,2 @@
+value = []
+value.append(AnInstanceWithState(value, value))
diff --git a/tests/data/recursive-tuple.recursive b/tests/data/recursive-tuple.recursive
new file mode 100644
index 0000000..dc08d02
--- /dev/null
+++ b/tests/data/recursive-tuple.recursive
@@ -0,0 +1,3 @@
+value = ([], [])
+value[0].append(value)
+value[1].append(value[0])
diff --git a/tests/data/recursive.former-dumper-error b/tests/data/recursive.former-dumper-error
new file mode 100644
index 0000000..3c7cc2f
--- /dev/null
+++ b/tests/data/recursive.former-dumper-error
@@ -0,0 +1,3 @@
+data = []
+data.append(data)
+dump(data)
diff --git a/tests/data/remove-possible-simple-key-bug.loader-error b/tests/data/remove-possible-simple-key-bug.loader-error
new file mode 100644
index 0000000..fe1bc6c
--- /dev/null
+++ b/tests/data/remove-possible-simple-key-bug.loader-error
@@ -0,0 +1,3 @@
+foo: &A bar
+*A ]    # The ']' indicator triggers remove_possible_simple_key,
+        # which should raise an error.
diff --git a/tests/data/resolver.data b/tests/data/resolver.data
new file mode 100644
index 0000000..a296404
--- /dev/null
+++ b/tests/data/resolver.data
@@ -0,0 +1,30 @@
+---
+"this scalar should be selected"
+---
+key11: !foo
+    key12:
+        is: [selected]
+    key22:
+        key13: [not, selected]
+        key23: [not, selected]
+    key32:
+        key31: [not, selected]
+        key32: [not, selected]
+        key33: {not: selected}
+key21: !bar
+    - not selected
+    - selected
+    - not selected
+key31: !baz
+    key12:
+        key13:
+            key14: {selected}
+        key23:
+            key14: [not, selected]
+        key33:
+            key14: {selected}
+            key24: {not: selected}
+    key22:
+        -   key14: {selected}
+            key24: {not: selected}
+        -   key14: {selected}
diff --git a/tests/data/resolver.path b/tests/data/resolver.path
new file mode 100644
index 0000000..ec677d2
--- /dev/null
+++ b/tests/data/resolver.path
@@ -0,0 +1,30 @@
+--- !root/scalar
+"this scalar should be selected"
+--- !root
+key11: !foo
+    key12: !root/key11/key12/*
+        is: [selected]
+    key22:
+        key13: [not, selected]
+        key23: [not, selected]
+    key32:
+        key31: [not, selected]
+        key32: [not, selected]
+        key33: {not: selected}
+key21: !bar
+    - not selected
+    - !root/key21/1/* selected
+    - not selected
+key31: !baz
+    key12:
+        key13:
+            key14: !root/key31/*/*/key14/map {selected}
+        key23:
+            key14: [not, selected]
+        key33:
+            key14: !root/key31/*/*/key14/map {selected}
+            key24: {not: selected}
+    key22:
+        -   key14: !root/key31/*/*/key14/map {selected}
+            key24: {not: selected}
+        -   key14: !root/key31/*/*/key14/map {selected}
diff --git a/tests/data/run-parser-crash-bug.data b/tests/data/run-parser-crash-bug.data
new file mode 100644
index 0000000..fe01734
--- /dev/null
+++ b/tests/data/run-parser-crash-bug.data
@@ -0,0 +1,8 @@
+---
+- Harry Potter and the Prisoner of Azkaban
+- Harry Potter and the Goblet of Fire
+- Harry Potter and the Order of the Phoenix
+---
+- Memoirs Found in a Bathtub
+- Snow Crash
+- Ghost World
diff --git a/tests/data/scalars.events b/tests/data/scalars.events
new file mode 100644
index 0000000..32c40f4
--- /dev/null
+++ b/tests/data/scalars.events
@@ -0,0 +1,28 @@
+- !StreamStart
+
+- !DocumentStart
+- !MappingStart
+- !Scalar { implicit: [true,true], value: 'empty scalar' }
+- !Scalar { implicit: [true,false], value: '' }
+- !Scalar { implicit: [true,true], value: 'implicit scalar' }
+- !Scalar { implicit: [true,true], value: 'data' }
+- !Scalar { implicit: [true,true], value: 'quoted scalar' }
+- !Scalar { value: 'data', style: '"' }
+- !Scalar { implicit: [true,true], value: 'block scalar' }
+- !Scalar { value: 'data', style: '|' }
+- !Scalar { implicit: [true,true], value: 'empty scalar with tag' }
+- !Scalar { implicit: [false,false], tag: '!mytag', value: '' }
+- !Scalar { implicit: [true,true], value: 'implicit scalar with tag' }
+- !Scalar { implicit: [false,false], tag: '!mytag', value: 'data' }
+- !Scalar { implicit: [true,true], value: 'quoted scalar with tag' }
+- !Scalar { value: 'data', style: '"', tag: '!mytag', implicit: [false,false] }
+- !Scalar { implicit: [true,true], value: 'block scalar with tag' }
+- !Scalar { value: 'data', style: '|', tag: '!mytag', implicit: [false,false] }
+- !Scalar { implicit: [true,true], value: 'single character' }
+- !Scalar { value: 'a', implicit: [true,true] }
+- !Scalar { implicit: [true,true], value: 'single digit' }
+- !Scalar { value: '1', implicit: [true,false] }
+- !MappingEnd
+- !DocumentEnd
+
+- !StreamEnd
diff --git a/tests/data/scan-document-end-bug.canonical b/tests/data/scan-document-end-bug.canonical
new file mode 100644
index 0000000..4a0e8a8
--- /dev/null
+++ b/tests/data/scan-document-end-bug.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!null ""
diff --git a/tests/data/scan-document-end-bug.data b/tests/data/scan-document-end-bug.data
new file mode 100644
index 0000000..3c70543
--- /dev/null
+++ b/tests/data/scan-document-end-bug.data
@@ -0,0 +1,3 @@
+# Ticket #4
+---
+...
\ No newline at end of file
diff --git a/tests/data/scan-line-break-bug.canonical b/tests/data/scan-line-break-bug.canonical
new file mode 100644
index 0000000..79f08b7
--- /dev/null
+++ b/tests/data/scan-line-break-bug.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!map { ? !!str "foo" : !!str "bar baz" }
diff --git a/tests/data/scan-line-break-bug.data b/tests/data/scan-line-break-bug.data
new file mode 100644
index 0000000..c974fab
--- /dev/null
+++ b/tests/data/scan-line-break-bug.data
@@ -0,0 +1,3 @@
+foo:…
+    bar…
+    baz…
diff --git a/tests/data/sequences.events b/tests/data/sequences.events
new file mode 100644
index 0000000..692a329
--- /dev/null
+++ b/tests/data/sequences.events
@@ -0,0 +1,81 @@
+- !StreamStart
+
+- !DocumentStart
+- !SequenceStart
+- !SequenceEnd
+- !DocumentEnd
+
+- !DocumentStart
+- !SequenceStart { tag: '!mytag', implicit: false }
+- !SequenceEnd
+- !DocumentEnd
+
+- !DocumentStart
+- !SequenceStart
+- !SequenceStart
+- !SequenceEnd
+- !SequenceStart { tag: '!mytag', implicit: false }
+- !SequenceEnd
+- !SequenceStart
+- !Scalar
+- !Scalar { value: 'data' }
+- !Scalar { tag: '!mytag', implicit: [false,false], value: 'data' }
+- !SequenceEnd
+- !SequenceStart
+- !SequenceStart
+- !SequenceStart
+- !Scalar
+- !SequenceEnd
+- !SequenceEnd
+- !SequenceEnd
+- !SequenceStart
+- !SequenceStart { tag: '!mytag', implicit: false }
+- !SequenceStart
+- !Scalar { value: 'data' }
+- !SequenceEnd
+- !SequenceEnd
+- !SequenceEnd
+- !SequenceEnd
+- !DocumentEnd
+
+- !DocumentStart
+- !SequenceStart
+- !MappingStart
+- !Scalar { value: 'key1' }
+- !SequenceStart
+- !Scalar { value: 'data1' }
+- !Scalar { value: 'data2' }
+- !SequenceEnd
+- !Scalar { value: 'key2' }
+- !SequenceStart { tag: '!mytag1', implicit: false }
+- !Scalar { value: 'data3' }
+- !SequenceStart
+- !Scalar { value: 'data4' }
+- !Scalar { value: 'data5' }
+- !SequenceEnd
+- !SequenceStart { tag: '!mytag2', implicit: false }
+- !Scalar { value: 'data6' }
+- !Scalar { value: 'data7' }
+- !SequenceEnd
+- !SequenceEnd
+- !MappingEnd
+- !SequenceEnd
+- !DocumentEnd
+
+- !DocumentStart
+- !SequenceStart
+- !SequenceStart { flow_style: true }
+- !SequenceStart
+- !SequenceEnd
+- !Scalar
+- !Scalar { value: 'data' }
+- !Scalar { tag: '!mytag', implicit: [false,false], value: 'data' }
+- !SequenceStart { tag: '!mytag', implicit: false }
+- !Scalar { value: 'data' }
+- !Scalar { value: 'data' }
+- !SequenceEnd
+- !SequenceEnd
+- !SequenceEnd
+- !DocumentEnd
+
+- !StreamEnd
diff --git a/tests/data/serializer-is-already-opened.dumper-error b/tests/data/serializer-is-already-opened.dumper-error
new file mode 100644
index 0000000..9a23525
--- /dev/null
+++ b/tests/data/serializer-is-already-opened.dumper-error
@@ -0,0 +1,3 @@
+dumper = yaml.Dumper(StringIO())
+dumper.open()
+dumper.open()
diff --git a/tests/data/serializer-is-closed-1.dumper-error b/tests/data/serializer-is-closed-1.dumper-error
new file mode 100644
index 0000000..8e7e600
--- /dev/null
+++ b/tests/data/serializer-is-closed-1.dumper-error
@@ -0,0 +1,4 @@
+dumper = yaml.Dumper(StringIO())
+dumper.open()
+dumper.close()
+dumper.open()
diff --git a/tests/data/serializer-is-closed-2.dumper-error b/tests/data/serializer-is-closed-2.dumper-error
new file mode 100644
index 0000000..89aef7e
--- /dev/null
+++ b/tests/data/serializer-is-closed-2.dumper-error
@@ -0,0 +1,4 @@
+dumper = yaml.Dumper(StringIO())
+dumper.open()
+dumper.close()
+dumper.serialize(yaml.ScalarNode(tag='!foo', value='bar'))
diff --git a/tests/data/serializer-is-not-opened-1.dumper-error b/tests/data/serializer-is-not-opened-1.dumper-error
new file mode 100644
index 0000000..8f22e73
--- /dev/null
+++ b/tests/data/serializer-is-not-opened-1.dumper-error
@@ -0,0 +1,2 @@
+dumper = yaml.Dumper(StringIO())
+dumper.close()
diff --git a/tests/data/serializer-is-not-opened-2.dumper-error b/tests/data/serializer-is-not-opened-2.dumper-error
new file mode 100644
index 0000000..ebd9df1
--- /dev/null
+++ b/tests/data/serializer-is-not-opened-2.dumper-error
@@ -0,0 +1,2 @@
+dumper = yaml.Dumper(StringIO())
+dumper.serialize(yaml.ScalarNode(tag='!foo', value='bar'))
diff --git a/tests/data/single-dot-is-not-float-bug.code b/tests/data/single-dot-is-not-float-bug.code
new file mode 100644
index 0000000..dcd0c2f
--- /dev/null
+++ b/tests/data/single-dot-is-not-float-bug.code
@@ -0,0 +1 @@
+'.'
diff --git a/tests/data/single-dot-is-not-float-bug.data b/tests/data/single-dot-is-not-float-bug.data
new file mode 100644
index 0000000..9c558e3
--- /dev/null
+++ b/tests/data/single-dot-is-not-float-bug.data
@@ -0,0 +1 @@
+.
diff --git a/tests/data/sloppy-indentation.canonical b/tests/data/sloppy-indentation.canonical
new file mode 100644
index 0000000..438bc04
--- /dev/null
+++ b/tests/data/sloppy-indentation.canonical
@@ -0,0 +1,18 @@
+%YAML 1.1
+---
+!!map { 
+    ? !!str "in the block context"
+    : !!map {
+        ? !!str "indentation should be kept"
+        : !!map {
+            ? !!str "but in the flow context"
+            : !!seq [ !!str "it may be violated" ]
+        }
+    }
+}
+--- !!str
+"the parser does not require scalars to be indented with at least one space"
+--- !!str
+"the parser does not require scalars to be indented with at least one space"
+--- !!map
+{ ? !!str "foo": { ? !!str "bar" : !!str "quoted scalars may not adhere indentation" } }
diff --git a/tests/data/sloppy-indentation.data b/tests/data/sloppy-indentation.data
new file mode 100644
index 0000000..2eb4f5a
--- /dev/null
+++ b/tests/data/sloppy-indentation.data
@@ -0,0 +1,17 @@
+---
+in the block context:
+    indentation should be kept: { 
+    but in the flow context: [
+it may be violated]
+}
+---
+the parser does not require scalars
+to be indented with at least one space
+...
+---
+"the parser does not require scalars
+to be indented with at least one space"
+---
+foo:
+    bar: 'quoted scalars
+may not adhere indentation'
diff --git a/tests/data/spec-02-01.data b/tests/data/spec-02-01.data
new file mode 100644
index 0000000..d12e671
--- /dev/null
+++ b/tests/data/spec-02-01.data
@@ -0,0 +1,3 @@
+- Mark McGwire
+- Sammy Sosa
+- Ken Griffey
diff --git a/tests/data/spec-02-01.structure b/tests/data/spec-02-01.structure
new file mode 100644
index 0000000..f532f4a
--- /dev/null
+++ b/tests/data/spec-02-01.structure
@@ -0,0 +1 @@
+[True, True, True]
diff --git a/tests/data/spec-02-01.tokens b/tests/data/spec-02-01.tokens
new file mode 100644
index 0000000..ce44cac
--- /dev/null
+++ b/tests/data/spec-02-01.tokens
@@ -0,0 +1 @@
+[[ , _ , _ , _ ]}
diff --git a/tests/data/spec-02-02.data b/tests/data/spec-02-02.data
new file mode 100644
index 0000000..7b7ec94
--- /dev/null
+++ b/tests/data/spec-02-02.data
@@ -0,0 +1,3 @@
+hr:  65    # Home runs
+avg: 0.278 # Batting average
+rbi: 147   # Runs Batted In
diff --git a/tests/data/spec-02-02.structure b/tests/data/spec-02-02.structure
new file mode 100644
index 0000000..aba1ced
--- /dev/null
+++ b/tests/data/spec-02-02.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-02.tokens b/tests/data/spec-02-02.tokens
new file mode 100644
index 0000000..e4e381b
--- /dev/null
+++ b/tests/data/spec-02-02.tokens
@@ -0,0 +1,5 @@
+{{
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-03.data b/tests/data/spec-02-03.data
new file mode 100644
index 0000000..656d628
--- /dev/null
+++ b/tests/data/spec-02-03.data
@@ -0,0 +1,8 @@
+american:
+  - Boston Red Sox
+  - Detroit Tigers
+  - New York Yankees
+national:
+  - New York Mets
+  - Chicago Cubs
+  - Atlanta Braves
diff --git a/tests/data/spec-02-03.structure b/tests/data/spec-02-03.structure
new file mode 100644
index 0000000..25de5d2
--- /dev/null
+++ b/tests/data/spec-02-03.structure
@@ -0,0 +1 @@
+[(True, [True, True, True]), (True, [True, True, True])]
diff --git a/tests/data/spec-02-03.tokens b/tests/data/spec-02-03.tokens
new file mode 100644
index 0000000..89815f2
--- /dev/null
+++ b/tests/data/spec-02-03.tokens
@@ -0,0 +1,4 @@
+{{
+? _ : [[ , _ , _ , _ ]}
+? _ : [[ , _ , _ , _ ]}
+]}
diff --git a/tests/data/spec-02-04.data b/tests/data/spec-02-04.data
new file mode 100644
index 0000000..430f6b3
--- /dev/null
+++ b/tests/data/spec-02-04.data
@@ -0,0 +1,8 @@
+-
+  name: Mark McGwire
+  hr:   65
+  avg:  0.278
+-
+  name: Sammy Sosa
+  hr:   63
+  avg:  0.288
diff --git a/tests/data/spec-02-04.structure b/tests/data/spec-02-04.structure
new file mode 100644
index 0000000..e7b526c
--- /dev/null
+++ b/tests/data/spec-02-04.structure
@@ -0,0 +1,4 @@
+[
+    [(True, True), (True, True), (True, True)],
+    [(True, True), (True, True), (True, True)],
+]
diff --git a/tests/data/spec-02-04.tokens b/tests/data/spec-02-04.tokens
new file mode 100644
index 0000000..9cb9815
--- /dev/null
+++ b/tests/data/spec-02-04.tokens
@@ -0,0 +1,4 @@
+[[
+, {{ ? _ : _ ? _ : _ ? _ : _ ]}
+, {{ ? _ : _ ? _ : _ ? _ : _ ]}
+]}
diff --git a/tests/data/spec-02-05.data b/tests/data/spec-02-05.data
new file mode 100644
index 0000000..cdd7770
--- /dev/null
+++ b/tests/data/spec-02-05.data
@@ -0,0 +1,3 @@
+- [name        , hr, avg  ]
+- [Mark McGwire, 65, 0.278]
+- [Sammy Sosa  , 63, 0.288]
diff --git a/tests/data/spec-02-05.structure b/tests/data/spec-02-05.structure
new file mode 100644
index 0000000..e06b75a
--- /dev/null
+++ b/tests/data/spec-02-05.structure
@@ -0,0 +1,5 @@
+[
+    [True, True, True],
+    [True, True, True],
+    [True, True, True],
+]
diff --git a/tests/data/spec-02-05.tokens b/tests/data/spec-02-05.tokens
new file mode 100644
index 0000000..3f6f1ab
--- /dev/null
+++ b/tests/data/spec-02-05.tokens
@@ -0,0 +1,5 @@
+[[
+, [ _ , _ , _ ]
+, [ _ , _ , _ ]
+, [ _ , _ , _ ]
+]}
diff --git a/tests/data/spec-02-06.data b/tests/data/spec-02-06.data
new file mode 100644
index 0000000..7a957b2
--- /dev/null
+++ b/tests/data/spec-02-06.data
@@ -0,0 +1,5 @@
+Mark McGwire: {hr: 65, avg: 0.278}
+Sammy Sosa: {
+    hr: 63,
+    avg: 0.288
+  }
diff --git a/tests/data/spec-02-06.structure b/tests/data/spec-02-06.structure
new file mode 100644
index 0000000..3ef0f4b
--- /dev/null
+++ b/tests/data/spec-02-06.structure
@@ -0,0 +1,4 @@
+[
+    (True, [(True, True), (True, True)]),
+    (True, [(True, True), (True, True)]),
+]
diff --git a/tests/data/spec-02-06.tokens b/tests/data/spec-02-06.tokens
new file mode 100644
index 0000000..a1a5eef
--- /dev/null
+++ b/tests/data/spec-02-06.tokens
@@ -0,0 +1,4 @@
+{{
+? _ : { ? _ : _ , ? _ : _ }
+? _ : { ? _ : _ , ? _ : _ }
+]}
diff --git a/tests/data/spec-02-07.data b/tests/data/spec-02-07.data
new file mode 100644
index 0000000..bc711d5
--- /dev/null
+++ b/tests/data/spec-02-07.data
@@ -0,0 +1,10 @@
+# Ranking of 1998 home runs
+---
+- Mark McGwire
+- Sammy Sosa
+- Ken Griffey
+
+# Team ranking
+---
+- Chicago Cubs
+- St Louis Cardinals
diff --git a/tests/data/spec-02-07.structure b/tests/data/spec-02-07.structure
new file mode 100644
index 0000000..c5d72a3
--- /dev/null
+++ b/tests/data/spec-02-07.structure
@@ -0,0 +1,4 @@
+[
+[True, True, True],
+[True, True],
+]
diff --git a/tests/data/spec-02-07.tokens b/tests/data/spec-02-07.tokens
new file mode 100644
index 0000000..ed48883
--- /dev/null
+++ b/tests/data/spec-02-07.tokens
@@ -0,0 +1,12 @@
+---
+[[
+, _
+, _
+, _
+]}
+
+---
+[[
+, _
+, _
+]}
diff --git a/tests/data/spec-02-08.data b/tests/data/spec-02-08.data
new file mode 100644
index 0000000..05e102d
--- /dev/null
+++ b/tests/data/spec-02-08.data
@@ -0,0 +1,10 @@
+---
+time: 20:03:20
+player: Sammy Sosa
+action: strike (miss)
+...
+---
+time: 20:03:47
+player: Sammy Sosa
+action: grand slam
+...
diff --git a/tests/data/spec-02-08.structure b/tests/data/spec-02-08.structure
new file mode 100644
index 0000000..24cff73
--- /dev/null
+++ b/tests/data/spec-02-08.structure
@@ -0,0 +1,4 @@
+[
+[(True, True), (True, True), (True, True)],
+[(True, True), (True, True), (True, True)],
+]
diff --git a/tests/data/spec-02-08.tokens b/tests/data/spec-02-08.tokens
new file mode 100644
index 0000000..7d2c03d
--- /dev/null
+++ b/tests/data/spec-02-08.tokens
@@ -0,0 +1,15 @@
+---
+{{
+? _ : _
+? _ : _
+? _ : _
+]}
+...
+
+---
+{{
+? _ : _
+? _ : _
+? _ : _
+]}
+...
diff --git a/tests/data/spec-02-09.data b/tests/data/spec-02-09.data
new file mode 100644
index 0000000..e264180
--- /dev/null
+++ b/tests/data/spec-02-09.data
@@ -0,0 +1,8 @@
+---
+hr: # 1998 hr ranking
+  - Mark McGwire
+  - Sammy Sosa
+rbi:
+  # 1998 rbi ranking
+  - Sammy Sosa
+  - Ken Griffey
diff --git a/tests/data/spec-02-09.structure b/tests/data/spec-02-09.structure
new file mode 100644
index 0000000..b4c9914
--- /dev/null
+++ b/tests/data/spec-02-09.structure
@@ -0,0 +1 @@
+[(True, [True, True]), (True, [True, True])]
diff --git a/tests/data/spec-02-09.tokens b/tests/data/spec-02-09.tokens
new file mode 100644
index 0000000..b2ec10e
--- /dev/null
+++ b/tests/data/spec-02-09.tokens
@@ -0,0 +1,5 @@
+---
+{{
+? _ : [[ , _ , _ ]}
+? _ : [[ , _ , _ ]}
+]}
diff --git a/tests/data/spec-02-10.data b/tests/data/spec-02-10.data
new file mode 100644
index 0000000..61808f6
--- /dev/null
+++ b/tests/data/spec-02-10.data
@@ -0,0 +1,8 @@
+---
+hr:
+  - Mark McGwire
+  # Following node labeled SS
+  - &SS Sammy Sosa
+rbi:
+  - *SS # Subsequent occurrence
+  - Ken Griffey
diff --git a/tests/data/spec-02-10.structure b/tests/data/spec-02-10.structure
new file mode 100644
index 0000000..ff8f4c3
--- /dev/null
+++ b/tests/data/spec-02-10.structure
@@ -0,0 +1 @@
+[(True, [True, True]), (True, ['*', True])]
diff --git a/tests/data/spec-02-10.tokens b/tests/data/spec-02-10.tokens
new file mode 100644
index 0000000..26caa2b
--- /dev/null
+++ b/tests/data/spec-02-10.tokens
@@ -0,0 +1,5 @@
+---
+{{
+? _ : [[ , _ , & _ ]}
+? _ : [[ , * , _ ]}
+]}
diff --git a/tests/data/spec-02-11.data b/tests/data/spec-02-11.data
new file mode 100644
index 0000000..9123ce2
--- /dev/null
+++ b/tests/data/spec-02-11.data
@@ -0,0 +1,9 @@
+? - Detroit Tigers
+  - Chicago cubs
+:
+  - 2001-07-23
+
+? [ New York Yankees,
+    Atlanta Braves ]
+: [ 2001-07-02, 2001-08-12,
+    2001-08-14 ]
diff --git a/tests/data/spec-02-11.structure b/tests/data/spec-02-11.structure
new file mode 100644
index 0000000..3d8f1ff
--- /dev/null
+++ b/tests/data/spec-02-11.structure
@@ -0,0 +1,4 @@
+[
+([True, True], [True]),
+([True, True], [True, True, True]),
+]
diff --git a/tests/data/spec-02-11.tokens b/tests/data/spec-02-11.tokens
new file mode 100644
index 0000000..fe24203
--- /dev/null
+++ b/tests/data/spec-02-11.tokens
@@ -0,0 +1,6 @@
+{{
+? [[ , _ , _ ]}
+: [[ , _ ]}
+? [ _ , _ ]
+: [ _ , _ , _ ]
+]}
diff --git a/tests/data/spec-02-12.data b/tests/data/spec-02-12.data
new file mode 100644
index 0000000..1fc33f9
--- /dev/null
+++ b/tests/data/spec-02-12.data
@@ -0,0 +1,8 @@
+---
+# products purchased
+- item    : Super Hoop
+  quantity: 1
+- item    : Basketball
+  quantity: 4
+- item    : Big Shoes
+  quantity: 1
diff --git a/tests/data/spec-02-12.structure b/tests/data/spec-02-12.structure
new file mode 100644
index 0000000..e9c5359
--- /dev/null
+++ b/tests/data/spec-02-12.structure
@@ -0,0 +1,5 @@
+[
+[(True, True), (True, True)],
+[(True, True), (True, True)],
+[(True, True), (True, True)],
+]
diff --git a/tests/data/spec-02-12.tokens b/tests/data/spec-02-12.tokens
new file mode 100644
index 0000000..ea21e50
--- /dev/null
+++ b/tests/data/spec-02-12.tokens
@@ -0,0 +1,6 @@
+---
+[[
+, {{ ? _ : _ ? _ : _ ]}
+, {{ ? _ : _ ? _ : _ ]}
+, {{ ? _ : _ ? _ : _ ]}
+]}
diff --git a/tests/data/spec-02-13.data b/tests/data/spec-02-13.data
new file mode 100644
index 0000000..13fb656
--- /dev/null
+++ b/tests/data/spec-02-13.data
@@ -0,0 +1,4 @@
+# ASCII Art
+--- |
+  \//||\/||
+  // ||  ||__
diff --git a/tests/data/spec-02-13.structure b/tests/data/spec-02-13.structure
new file mode 100644
index 0000000..0ca9514
--- /dev/null
+++ b/tests/data/spec-02-13.structure
@@ -0,0 +1 @@
+True
diff --git a/tests/data/spec-02-13.tokens b/tests/data/spec-02-13.tokens
new file mode 100644
index 0000000..7456c05
--- /dev/null
+++ b/tests/data/spec-02-13.tokens
@@ -0,0 +1 @@
+--- _
diff --git a/tests/data/spec-02-14.data b/tests/data/spec-02-14.data
new file mode 100644
index 0000000..59943de
--- /dev/null
+++ b/tests/data/spec-02-14.data
@@ -0,0 +1,4 @@
+---
+  Mark McGwire's
+  year was crippled
+  by a knee injury.
diff --git a/tests/data/spec-02-14.structure b/tests/data/spec-02-14.structure
new file mode 100644
index 0000000..0ca9514
--- /dev/null
+++ b/tests/data/spec-02-14.structure
@@ -0,0 +1 @@
+True
diff --git a/tests/data/spec-02-14.tokens b/tests/data/spec-02-14.tokens
new file mode 100644
index 0000000..7456c05
--- /dev/null
+++ b/tests/data/spec-02-14.tokens
@@ -0,0 +1 @@
+--- _
diff --git a/tests/data/spec-02-15.data b/tests/data/spec-02-15.data
new file mode 100644
index 0000000..80b89a6
--- /dev/null
+++ b/tests/data/spec-02-15.data
@@ -0,0 +1,8 @@
+>
+ Sammy Sosa completed another
+ fine season with great stats.
+
+   63 Home Runs
+   0.288 Batting Average
+
+ What a year!
diff --git a/tests/data/spec-02-15.structure b/tests/data/spec-02-15.structure
new file mode 100644
index 0000000..0ca9514
--- /dev/null
+++ b/tests/data/spec-02-15.structure
@@ -0,0 +1 @@
+True
diff --git a/tests/data/spec-02-15.tokens b/tests/data/spec-02-15.tokens
new file mode 100644
index 0000000..31354ec
--- /dev/null
+++ b/tests/data/spec-02-15.tokens
@@ -0,0 +1 @@
+_
diff --git a/tests/data/spec-02-16.data b/tests/data/spec-02-16.data
new file mode 100644
index 0000000..9f66d88
--- /dev/null
+++ b/tests/data/spec-02-16.data
@@ -0,0 +1,7 @@
+name: Mark McGwire
+accomplishment: >
+  Mark set a major league
+  home run record in 1998.
+stats: |
+  65 Home Runs
+  0.278 Batting Average
diff --git a/tests/data/spec-02-16.structure b/tests/data/spec-02-16.structure
new file mode 100644
index 0000000..aba1ced
--- /dev/null
+++ b/tests/data/spec-02-16.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-16.tokens b/tests/data/spec-02-16.tokens
new file mode 100644
index 0000000..e4e381b
--- /dev/null
+++ b/tests/data/spec-02-16.tokens
@@ -0,0 +1,5 @@
+{{
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-17.data b/tests/data/spec-02-17.data
new file mode 100644
index 0000000..b2870c5
--- /dev/null
+++ b/tests/data/spec-02-17.data
@@ -0,0 +1,7 @@
+unicode: "Sosa did fine.\u263A"
+control: "\b1998\t1999\t2000\n"
+hexesc:  "\x13\x10 is \r\n"
+
+single: '"Howdy!" he cried.'
+quoted: ' # not a ''comment''.'
+tie-fighter: '|\-*-/|'
diff --git a/tests/data/spec-02-17.structure b/tests/data/spec-02-17.structure
new file mode 100644
index 0000000..933646d
--- /dev/null
+++ b/tests/data/spec-02-17.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True), (True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-17.tokens b/tests/data/spec-02-17.tokens
new file mode 100644
index 0000000..db65540
--- /dev/null
+++ b/tests/data/spec-02-17.tokens
@@ -0,0 +1,8 @@
+{{
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-18.data b/tests/data/spec-02-18.data
new file mode 100644
index 0000000..e0a8bfa
--- /dev/null
+++ b/tests/data/spec-02-18.data
@@ -0,0 +1,6 @@
+plain:
+  This unquoted scalar
+  spans many lines.
+
+quoted: "So does this
+  quoted scalar.\n"
diff --git a/tests/data/spec-02-18.structure b/tests/data/spec-02-18.structure
new file mode 100644
index 0000000..0ca4991
--- /dev/null
+++ b/tests/data/spec-02-18.structure
@@ -0,0 +1 @@
+[(True, True), (True, True)]
diff --git a/tests/data/spec-02-18.tokens b/tests/data/spec-02-18.tokens
new file mode 100644
index 0000000..83b31dc
--- /dev/null
+++ b/tests/data/spec-02-18.tokens
@@ -0,0 +1,4 @@
+{{
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-19.data b/tests/data/spec-02-19.data
new file mode 100644
index 0000000..bf69de6
--- /dev/null
+++ b/tests/data/spec-02-19.data
@@ -0,0 +1,5 @@
+canonical: 12345
+decimal: +12,345
+sexagesimal: 3:25:45
+octal: 014
+hexadecimal: 0xC
diff --git a/tests/data/spec-02-19.structure b/tests/data/spec-02-19.structure
new file mode 100644
index 0000000..48ca99d
--- /dev/null
+++ b/tests/data/spec-02-19.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-19.tokens b/tests/data/spec-02-19.tokens
new file mode 100644
index 0000000..5bda68f
--- /dev/null
+++ b/tests/data/spec-02-19.tokens
@@ -0,0 +1,7 @@
+{{
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-20.data b/tests/data/spec-02-20.data
new file mode 100644
index 0000000..1d4897f
--- /dev/null
+++ b/tests/data/spec-02-20.data
@@ -0,0 +1,6 @@
+canonical: 1.23015e+3
+exponential: 12.3015e+02
+sexagesimal: 20:30.15
+fixed: 1,230.15
+negative infinity: -.inf
+not a number: .NaN
diff --git a/tests/data/spec-02-20.structure b/tests/data/spec-02-20.structure
new file mode 100644
index 0000000..933646d
--- /dev/null
+++ b/tests/data/spec-02-20.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True), (True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-20.tokens b/tests/data/spec-02-20.tokens
new file mode 100644
index 0000000..db65540
--- /dev/null
+++ b/tests/data/spec-02-20.tokens
@@ -0,0 +1,8 @@
+{{
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-21.data b/tests/data/spec-02-21.data
new file mode 100644
index 0000000..dec6a56
--- /dev/null
+++ b/tests/data/spec-02-21.data
@@ -0,0 +1,4 @@
+null: ~
+true: y
+false: n
+string: '12345'
diff --git a/tests/data/spec-02-21.structure b/tests/data/spec-02-21.structure
new file mode 100644
index 0000000..021635f
--- /dev/null
+++ b/tests/data/spec-02-21.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-21.tokens b/tests/data/spec-02-21.tokens
new file mode 100644
index 0000000..aeccbaf
--- /dev/null
+++ b/tests/data/spec-02-21.tokens
@@ -0,0 +1,6 @@
+{{
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-22.data b/tests/data/spec-02-22.data
new file mode 100644
index 0000000..aaac185
--- /dev/null
+++ b/tests/data/spec-02-22.data
@@ -0,0 +1,4 @@
+canonical: 2001-12-15T02:59:43.1Z
+iso8601: 2001-12-14t21:59:43.10-05:00
+spaced: 2001-12-14 21:59:43.10 -5
+date: 2002-12-14
diff --git a/tests/data/spec-02-22.structure b/tests/data/spec-02-22.structure
new file mode 100644
index 0000000..021635f
--- /dev/null
+++ b/tests/data/spec-02-22.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-22.tokens b/tests/data/spec-02-22.tokens
new file mode 100644
index 0000000..aeccbaf
--- /dev/null
+++ b/tests/data/spec-02-22.tokens
@@ -0,0 +1,6 @@
+{{
+? _ : _
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-23.data b/tests/data/spec-02-23.data
new file mode 100644
index 0000000..5dbd992
--- /dev/null
+++ b/tests/data/spec-02-23.data
@@ -0,0 +1,13 @@
+---
+not-date: !!str 2002-04-28
+
+picture: !!binary |
+ R0lGODlhDAAMAIQAAP//9/X
+ 17unp5WZmZgAAAOfn515eXv
+ Pz7Y6OjuDg4J+fn5OTk6enp
+ 56enmleECcgggoBADs=
+
+application specific tag: !something |
+ The semantics of the tag
+ above may be different for
+ different documents.
diff --git a/tests/data/spec-02-23.structure b/tests/data/spec-02-23.structure
new file mode 100644
index 0000000..aba1ced
--- /dev/null
+++ b/tests/data/spec-02-23.structure
@@ -0,0 +1 @@
+[(True, True), (True, True), (True, True)]
diff --git a/tests/data/spec-02-23.tokens b/tests/data/spec-02-23.tokens
new file mode 100644
index 0000000..9ac54aa
--- /dev/null
+++ b/tests/data/spec-02-23.tokens
@@ -0,0 +1,6 @@
+---
+{{
+? _ : ! _
+? _ : ! _
+? _ : ! _
+]}
diff --git a/tests/data/spec-02-24.data b/tests/data/spec-02-24.data
new file mode 100644
index 0000000..1180757
--- /dev/null
+++ b/tests/data/spec-02-24.data
@@ -0,0 +1,14 @@
+%TAG ! tag:clarkevans.com,2002:
+--- !shape
+  # Use the ! handle for presenting
+  # tag:clarkevans.com,2002:circle
+- !circle
+  center: &ORIGIN {x: 73, y: 129}
+  radius: 7
+- !line
+  start: *ORIGIN
+  finish: { x: 89, y: 102 }
+- !label
+  start: *ORIGIN
+  color: 0xFFEEBB
+  text: Pretty vector drawing.
diff --git a/tests/data/spec-02-24.structure b/tests/data/spec-02-24.structure
new file mode 100644
index 0000000..a800729
--- /dev/null
+++ b/tests/data/spec-02-24.structure
@@ -0,0 +1,5 @@
+[
+[(True, [(True, True), (True, True)]), (True, True)],
+[(True, '*'), (True, [(True, True), (True, True)])],
+[(True, '*'), (True, True), (True, True)],
+]
diff --git a/tests/data/spec-02-24.tokens b/tests/data/spec-02-24.tokens
new file mode 100644
index 0000000..039c385
--- /dev/null
+++ b/tests/data/spec-02-24.tokens
@@ -0,0 +1,20 @@
+%
+--- !
+[[
+, !
+    {{
+    ? _ : & { ? _ : _ , ? _ : _ }
+    ? _ : _
+    ]}
+, !
+    {{
+    ? _ : *
+    ? _ : { ? _ : _ , ? _ : _ }
+    ]}
+, !
+    {{
+    ? _ : *
+    ? _ : _
+    ? _ : _
+    ]}
+]}
diff --git a/tests/data/spec-02-25.data b/tests/data/spec-02-25.data
new file mode 100644
index 0000000..769ac31
--- /dev/null
+++ b/tests/data/spec-02-25.data
@@ -0,0 +1,7 @@
+# sets are represented as a
+# mapping where each key is
+# associated with the empty string
+--- !!set
+? Mark McGwire
+? Sammy Sosa
+? Ken Griff
diff --git a/tests/data/spec-02-25.structure b/tests/data/spec-02-25.structure
new file mode 100644
index 0000000..0b40e61
--- /dev/null
+++ b/tests/data/spec-02-25.structure
@@ -0,0 +1 @@
+[(True, None), (True, None), (True, None)]
diff --git a/tests/data/spec-02-25.tokens b/tests/data/spec-02-25.tokens
new file mode 100644
index 0000000..b700236
--- /dev/null
+++ b/tests/data/spec-02-25.tokens
@@ -0,0 +1,6 @@
+--- !
+{{
+? _
+? _
+? _
+]}
diff --git a/tests/data/spec-02-26.data b/tests/data/spec-02-26.data
new file mode 100644
index 0000000..3143763
--- /dev/null
+++ b/tests/data/spec-02-26.data
@@ -0,0 +1,7 @@
+# ordered maps are represented as
+# a sequence of mappings, with
+# each mapping having one key
+--- !!omap
+- Mark McGwire: 65
+- Sammy Sosa: 63
+- Ken Griffy: 58
diff --git a/tests/data/spec-02-26.structure b/tests/data/spec-02-26.structure
new file mode 100644
index 0000000..cf429b9
--- /dev/null
+++ b/tests/data/spec-02-26.structure
@@ -0,0 +1,5 @@
+[
+[(True, True)],
+[(True, True)],
+[(True, True)],
+]
diff --git a/tests/data/spec-02-26.tokens b/tests/data/spec-02-26.tokens
new file mode 100644
index 0000000..7bee492
--- /dev/null
+++ b/tests/data/spec-02-26.tokens
@@ -0,0 +1,6 @@
+--- !
+[[
+, {{ ? _ : _ ]}
+, {{ ? _ : _ ]}
+, {{ ? _ : _ ]}
+]}
diff --git a/tests/data/spec-02-27.data b/tests/data/spec-02-27.data
new file mode 100644
index 0000000..4625739
--- /dev/null
+++ b/tests/data/spec-02-27.data
@@ -0,0 +1,29 @@
+--- !<tag:clarkevans.com,2002:invoice>
+invoice: 34843
+date   : 2001-01-23
+bill-to: &id001
+    given  : Chris
+    family : Dumars
+    address:
+        lines: |
+            458 Walkman Dr.
+            Suite #292
+        city    : Royal Oak
+        state   : MI
+        postal  : 48046
+ship-to: *id001
+product:
+    - sku         : BL394D
+      quantity    : 4
+      description : Basketball
+      price       : 450.00
+    - sku         : BL4438H
+      quantity    : 1
+      description : Super Hoop
+      price       : 2392.00
+tax  : 251.42
+total: 4443.52
+comments:
+    Late afternoon is best.
+    Backup contact is Nancy
+    Billsmer @ 338-4338.
diff --git a/tests/data/spec-02-27.structure b/tests/data/spec-02-27.structure
new file mode 100644
index 0000000..a2113b9
--- /dev/null
+++ b/tests/data/spec-02-27.structure
@@ -0,0 +1,17 @@
+[
+(True, True),
+(True, True),
+(True, [
+    (True, True),
+    (True, True),
+    (True, [(True, True), (True, True), (True, True), (True, True)]),
+    ]),
+(True, '*'),
+(True, [
+        [(True, True), (True, True), (True, True), (True, True)],
+        [(True, True), (True, True), (True, True), (True, True)],
+    ]),
+(True, True),
+(True, True),
+(True, True),
+]
diff --git a/tests/data/spec-02-27.tokens b/tests/data/spec-02-27.tokens
new file mode 100644
index 0000000..2dc1c25
--- /dev/null
+++ b/tests/data/spec-02-27.tokens
@@ -0,0 +1,20 @@
+--- !
+{{
+? _ : _
+? _ : _
+? _ : &
+    {{
+    ? _ : _
+    ? _ : _
+    ? _ : {{ ? _ : _ ? _ : _ ? _ : _ ? _ : _ ]}
+    ]}
+? _ : *
+? _ :
+    [[
+    , {{ ? _ : _ ? _ : _ ? _ : _ ? _ : _ ]}
+    , {{ ? _ : _ ? _ : _ ? _ : _ ? _ : _ ]}
+    ]}
+? _ : _
+? _ : _
+? _ : _
+]}
diff --git a/tests/data/spec-02-28.data b/tests/data/spec-02-28.data
new file mode 100644
index 0000000..a5c8dc8
--- /dev/null
+++ b/tests/data/spec-02-28.data
@@ -0,0 +1,26 @@
+---
+Time: 2001-11-23 15:01:42 -5
+User: ed
+Warning:
+  This is an error message
+  for the log file
+---
+Time: 2001-11-23 15:02:31 -5
+User: ed
+Warning:
+  A slightly different error
+  message.
+---
+Date: 2001-11-23 15:03:17 -5
+User: ed
+Fatal:
+  Unknown variable "bar"
+Stack:
+  - file: TopClass.py
+    line: 23
+    code: |
+      x = MoreObject("345\n")
+  - file: MoreClass.py
+    line: 58
+    code: |-
+      foo = bar
diff --git a/tests/data/spec-02-28.structure b/tests/data/spec-02-28.structure
new file mode 100644
index 0000000..8ec0b56
--- /dev/null
+++ b/tests/data/spec-02-28.structure
@@ -0,0 +1,10 @@
+[
+[(True, True), (True, True), (True, True)],
+[(True, True), (True, True), (True, True)],
+[(True, True), (True, True), (True, True),
+(True, [
+    [(True, True), (True, True), (True, True)],
+    [(True, True), (True, True), (True, True)],
+    ]),
+]
+]
diff --git a/tests/data/spec-02-28.tokens b/tests/data/spec-02-28.tokens
new file mode 100644
index 0000000..8d5e1bc
--- /dev/null
+++ b/tests/data/spec-02-28.tokens
@@ -0,0 +1,23 @@
+---
+{{
+? _ : _
+? _ : _
+? _ : _
+]}
+---
+{{
+? _ : _
+? _ : _
+? _ : _
+]}
+---
+{{
+? _ : _
+? _ : _
+? _ : _
+? _ :
+    [[
+        , {{ ? _ : _ ? _ : _ ? _ : _ ]}
+        , {{ ? _ : _ ? _ : _ ? _ : _ ]}
+    ]}
+]}
diff --git a/tests/data/spec-05-01-utf16be.data b/tests/data/spec-05-01-utf16be.data
new file mode 100644
index 0000000..3525062
--- /dev/null
+++ b/tests/data/spec-05-01-utf16be.data
Binary files differ
diff --git a/tests/data/spec-05-01-utf16be.empty b/tests/data/spec-05-01-utf16be.empty
new file mode 100644
index 0000000..bfffa8b
--- /dev/null
+++ b/tests/data/spec-05-01-utf16be.empty
@@ -0,0 +1,2 @@
+# This stream contains no
+# documents, only comments.
diff --git a/tests/data/spec-05-01-utf16le.data b/tests/data/spec-05-01-utf16le.data
new file mode 100644
index 0000000..0823f74
--- /dev/null
+++ b/tests/data/spec-05-01-utf16le.data
Binary files differ
diff --git a/tests/data/spec-05-01-utf16le.empty b/tests/data/spec-05-01-utf16le.empty
new file mode 100644
index 0000000..bfffa8b
--- /dev/null
+++ b/tests/data/spec-05-01-utf16le.empty
@@ -0,0 +1,2 @@
+# This stream contains no
+# documents, only comments.
diff --git a/tests/data/spec-05-01-utf8.data b/tests/data/spec-05-01-utf8.data
new file mode 100644
index 0000000..780d25b
--- /dev/null
+++ b/tests/data/spec-05-01-utf8.data
@@ -0,0 +1 @@
+# Comment only.
diff --git a/tests/data/spec-05-01-utf8.empty b/tests/data/spec-05-01-utf8.empty
new file mode 100644
index 0000000..bfffa8b
--- /dev/null
+++ b/tests/data/spec-05-01-utf8.empty
@@ -0,0 +1,2 @@
+# This stream contains no
+# documents, only comments.
diff --git a/tests/data/spec-05-02-utf16be.data b/tests/data/spec-05-02-utf16be.data
new file mode 100644
index 0000000..5ebbb04
--- /dev/null
+++ b/tests/data/spec-05-02-utf16be.data
Binary files differ
diff --git a/tests/data/spec-05-02-utf16be.error b/tests/data/spec-05-02-utf16be.error
new file mode 100644
index 0000000..1df3616
--- /dev/null
+++ b/tests/data/spec-05-02-utf16be.error
@@ -0,0 +1,3 @@
+ERROR:
+ A BOM must not appear
+ inside a document.
diff --git a/tests/data/spec-05-02-utf16le.data b/tests/data/spec-05-02-utf16le.data
new file mode 100644
index 0000000..0cd90a2
--- /dev/null
+++ b/tests/data/spec-05-02-utf16le.data
Binary files differ
diff --git a/tests/data/spec-05-02-utf16le.error b/tests/data/spec-05-02-utf16le.error
new file mode 100644
index 0000000..1df3616
--- /dev/null
+++ b/tests/data/spec-05-02-utf16le.error
@@ -0,0 +1,3 @@
+ERROR:
+ A BOM must not appear
+ inside a document.
diff --git a/tests/data/spec-05-02-utf8.data b/tests/data/spec-05-02-utf8.data
new file mode 100644
index 0000000..fb74866
--- /dev/null
+++ b/tests/data/spec-05-02-utf8.data
@@ -0,0 +1,3 @@
+# Invalid use of BOM
+# inside a
+# document.
diff --git a/tests/data/spec-05-02-utf8.error b/tests/data/spec-05-02-utf8.error
new file mode 100644
index 0000000..1df3616
--- /dev/null
+++ b/tests/data/spec-05-02-utf8.error
@@ -0,0 +1,3 @@
+ERROR:
+ A BOM must not appear
+ inside a document.
diff --git a/tests/data/spec-05-03.canonical b/tests/data/spec-05-03.canonical
new file mode 100644
index 0000000..a143a73
--- /dev/null
+++ b/tests/data/spec-05-03.canonical
@@ -0,0 +1,14 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "sequence"
+  : !!seq [
+    !!str "one", !!str "two"
+  ],
+  ? !!str "mapping"
+  : !!map {
+    ? !!str "sky" : !!str "blue",
+#    ? !!str "sea" : !!str "green",
+    ? !!map { ? !!str "sea" : !!str "green" } : !!null "",
+  }
+}
diff --git a/tests/data/spec-05-03.data b/tests/data/spec-05-03.data
new file mode 100644
index 0000000..4661f33
--- /dev/null
+++ b/tests/data/spec-05-03.data
@@ -0,0 +1,7 @@
+sequence:
+- one
+- two
+mapping:
+  ? sky
+  : blue
+  ? sea : green
diff --git a/tests/data/spec-05-04.canonical b/tests/data/spec-05-04.canonical
new file mode 100644
index 0000000..00c9723
--- /dev/null
+++ b/tests/data/spec-05-04.canonical
@@ -0,0 +1,13 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "sequence"
+  : !!seq [
+    !!str "one", !!str "two"
+  ],
+  ? !!str "mapping"
+  : !!map {
+    ? !!str "sky" : !!str "blue",
+    ? !!str "sea" : !!str "green",
+  }
+}
diff --git a/tests/data/spec-05-04.data b/tests/data/spec-05-04.data
new file mode 100644
index 0000000..df33847
--- /dev/null
+++ b/tests/data/spec-05-04.data
@@ -0,0 +1,2 @@
+sequence: [ one, two, ]
+mapping: { sky: blue, sea: green }
diff --git a/tests/data/spec-05-05.data b/tests/data/spec-05-05.data
new file mode 100644
index 0000000..62524c0
--- /dev/null
+++ b/tests/data/spec-05-05.data
@@ -0,0 +1 @@
+# Comment only.
diff --git a/tests/data/spec-05-05.empty b/tests/data/spec-05-05.empty
new file mode 100644
index 0000000..bfffa8b
--- /dev/null
+++ b/tests/data/spec-05-05.empty
@@ -0,0 +1,2 @@
+# This stream contains no
+# documents, only comments.
diff --git a/tests/data/spec-05-06.canonical b/tests/data/spec-05-06.canonical
new file mode 100644
index 0000000..4f30c11
--- /dev/null
+++ b/tests/data/spec-05-06.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "anchored"
+  : &A1 !local "value",
+  ? !!str "alias"
+  : *A1,
+}
diff --git a/tests/data/spec-05-06.data b/tests/data/spec-05-06.data
new file mode 100644
index 0000000..7a1f9b3
--- /dev/null
+++ b/tests/data/spec-05-06.data
@@ -0,0 +1,2 @@
+anchored: !local &anchor value
+alias: *anchor
diff --git a/tests/data/spec-05-07.canonical b/tests/data/spec-05-07.canonical
new file mode 100644
index 0000000..dc3732a
--- /dev/null
+++ b/tests/data/spec-05-07.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "literal"
+  : !!str "text\n",
+  ? !!str "folded"
+  : !!str "text\n",
+}
diff --git a/tests/data/spec-05-07.data b/tests/data/spec-05-07.data
new file mode 100644
index 0000000..97eb3a3
--- /dev/null
+++ b/tests/data/spec-05-07.data
@@ -0,0 +1,4 @@
+literal: |
+  text
+folded: >
+  text
diff --git a/tests/data/spec-05-08.canonical b/tests/data/spec-05-08.canonical
new file mode 100644
index 0000000..610bd68
--- /dev/null
+++ b/tests/data/spec-05-08.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "single"
+  : !!str "text",
+  ? !!str "double"
+  : !!str "text",
+}
diff --git a/tests/data/spec-05-08.data b/tests/data/spec-05-08.data
new file mode 100644
index 0000000..04ebf69
--- /dev/null
+++ b/tests/data/spec-05-08.data
@@ -0,0 +1,2 @@
+single: 'text'
+double: "text"
diff --git a/tests/data/spec-05-09.canonical b/tests/data/spec-05-09.canonical
new file mode 100644
index 0000000..597e3de
--- /dev/null
+++ b/tests/data/spec-05-09.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "text"
diff --git a/tests/data/spec-05-09.data b/tests/data/spec-05-09.data
new file mode 100644
index 0000000..a43431b
--- /dev/null
+++ b/tests/data/spec-05-09.data
@@ -0,0 +1,2 @@
+%YAML 1.1
+--- text
diff --git a/tests/data/spec-05-10.data b/tests/data/spec-05-10.data
new file mode 100644
index 0000000..a4caf91
--- /dev/null
+++ b/tests/data/spec-05-10.data
@@ -0,0 +1,2 @@
+commercial-at: @text
+grave-accent: `text
diff --git a/tests/data/spec-05-10.error b/tests/data/spec-05-10.error
new file mode 100644
index 0000000..46f776e
--- /dev/null
+++ b/tests/data/spec-05-10.error
@@ -0,0 +1,3 @@
+ERROR:
+ Reserved indicators can't
+ start a plain scalar.
diff --git a/tests/data/spec-05-11.canonical b/tests/data/spec-05-11.canonical
new file mode 100644
index 0000000..fc25bef
--- /dev/null
+++ b/tests/data/spec-05-11.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+--- !!str
+"Generic line break (no glyph)\n\
+ Generic line break (glyphed)\n\
+ Line separator\u2028\
+ Paragraph separator\u2029"
diff --git a/tests/data/spec-05-11.data b/tests/data/spec-05-11.data
new file mode 100644
index 0000000..b448b75
--- /dev/null
+++ b/tests/data/spec-05-11.data
@@ -0,0 +1,3 @@
+|
+  Generic line break (no glyph)
+  Generic line break (glyphed)…  Line separator… Paragraph separator…
\ No newline at end of file
diff --git a/tests/data/spec-05-12.data b/tests/data/spec-05-12.data
new file mode 100644
index 0000000..7c3ad7f
--- /dev/null
+++ b/tests/data/spec-05-12.data
@@ -0,0 +1,9 @@
+# Tabs do's and don'ts:
+# comment: 	
+quoted: "Quoted		"
+block: |
+  void main() {
+  	printf("Hello, world!\n");
+  }
+elsewhere:	# separation
+	indentation, in	plain scalar
diff --git a/tests/data/spec-05-12.error b/tests/data/spec-05-12.error
new file mode 100644
index 0000000..8aad4c8
--- /dev/null
+++ b/tests/data/spec-05-12.error
@@ -0,0 +1,8 @@
+ERROR:
+ Tabs may appear inside
+ comments and quoted or
+ block scalar content.
+ Tabs must not appear
+ elsewhere, such as
+ in indentation and
+ separation spaces.
diff --git a/tests/data/spec-05-13.canonical b/tests/data/spec-05-13.canonical
new file mode 100644
index 0000000..90c1c5c
--- /dev/null
+++ b/tests/data/spec-05-13.canonical
@@ -0,0 +1,5 @@
+%YAML 1.1
+--- !!str
+"Text containing \
+ both space and \
+ tab	characters"
diff --git a/tests/data/spec-05-13.data b/tests/data/spec-05-13.data
new file mode 100644
index 0000000..fce7951
--- /dev/null
+++ b/tests/data/spec-05-13.data
@@ -0,0 +1,3 @@
+  "Text containing   
+  both space and	
+  	tab	characters"
diff --git a/tests/data/spec-05-14.canonical b/tests/data/spec-05-14.canonical
new file mode 100644
index 0000000..4bff01c
--- /dev/null
+++ b/tests/data/spec-05-14.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+"Fun with \x5C
+ \x22 \x07 \x08 \x1B \x0C
+ \x0A \x0D \x09 \x0B \x00
+ \x20 \xA0 \x85 \u2028 \u2029
+ A A A"
diff --git a/tests/data/spec-05-14.data b/tests/data/spec-05-14.data
new file mode 100644
index 0000000..d6e8ce4
--- /dev/null
+++ b/tests/data/spec-05-14.data
@@ -0,0 +1,2 @@
+"Fun with \\
+ \" \a \b \e \f \… \n \r \t \v \0 \
 \  \_ \N \L \P \
 \x41 \u0041 \U00000041"
diff --git a/tests/data/spec-05-15.data b/tests/data/spec-05-15.data
new file mode 100644
index 0000000..7bf12b6
--- /dev/null
+++ b/tests/data/spec-05-15.data
@@ -0,0 +1,3 @@
+Bad escapes:
+  "\c
+  \xq-"
diff --git a/tests/data/spec-05-15.error b/tests/data/spec-05-15.error
new file mode 100644
index 0000000..71ffbd9
--- /dev/null
+++ b/tests/data/spec-05-15.error
@@ -0,0 +1,3 @@
+ERROR:
+- c is an invalid escaped character.
+- q and - are invalid hex digits.
diff --git a/tests/data/spec-06-01.canonical b/tests/data/spec-06-01.canonical
new file mode 100644
index 0000000..f17ec92
--- /dev/null
+++ b/tests/data/spec-06-01.canonical
@@ -0,0 +1,15 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "Not indented"
+  : !!map {
+      ? !!str "By one space"
+      : !!str "By four\n  spaces\n",
+      ? !!str "Flow style"
+      : !!seq [
+          !!str "By two",
+          !!str "Also by two",
+          !!str "Still by two",
+        ]
+    }
+}
diff --git a/tests/data/spec-06-01.data b/tests/data/spec-06-01.data
new file mode 100644
index 0000000..6134ba1
--- /dev/null
+++ b/tests/data/spec-06-01.data
@@ -0,0 +1,14 @@
+  # Leading comment line spaces are
+   # neither content nor indentation.
+    
+Not indented:
+ By one space: |
+    By four
+      spaces
+ Flow style: [    # Leading spaces
+   By two,        # in flow style
+  Also by two,    # are neither
+# Tabs are not allowed:
+#  	Still by two   # content nor
+    Still by two   # content nor
+    ]             # indentation.
diff --git a/tests/data/spec-06-02.data b/tests/data/spec-06-02.data
new file mode 100644
index 0000000..ff741e5
--- /dev/null
+++ b/tests/data/spec-06-02.data
@@ -0,0 +1,3 @@
+  # Comment
+   
+
diff --git a/tests/data/spec-06-02.empty b/tests/data/spec-06-02.empty
new file mode 100644
index 0000000..bfffa8b
--- /dev/null
+++ b/tests/data/spec-06-02.empty
@@ -0,0 +1,2 @@
+# This stream contains no
+# documents, only comments.
diff --git a/tests/data/spec-06-03.canonical b/tests/data/spec-06-03.canonical
new file mode 100644
index 0000000..ec26902
--- /dev/null
+++ b/tests/data/spec-06-03.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "key"
+  : !!str "value"
+}
diff --git a/tests/data/spec-06-03.data b/tests/data/spec-06-03.data
new file mode 100644
index 0000000..9db0912
--- /dev/null
+++ b/tests/data/spec-06-03.data
@@ -0,0 +1,2 @@
+key:    # Comment
+  value
diff --git a/tests/data/spec-06-04.canonical b/tests/data/spec-06-04.canonical
new file mode 100644
index 0000000..ec26902
--- /dev/null
+++ b/tests/data/spec-06-04.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "key"
+  : !!str "value"
+}
diff --git a/tests/data/spec-06-04.data b/tests/data/spec-06-04.data
new file mode 100644
index 0000000..86308dd
--- /dev/null
+++ b/tests/data/spec-06-04.data
@@ -0,0 +1,4 @@
+key:    # Comment
+        # lines
+  value
+
diff --git a/tests/data/spec-06-05.canonical b/tests/data/spec-06-05.canonical
new file mode 100644
index 0000000..8da431d
--- /dev/null
+++ b/tests/data/spec-06-05.canonical
@@ -0,0 +1,16 @@
+%YAML 1.1
+---
+!!map {
+  ? !!map {
+    ? !!str "first"
+    : !!str "Sammy",
+    ? !!str "last"
+    : !!str "Sosa"
+  }
+  : !!map {
+    ? !!str "hr"
+    : !!int "65",
+    ? !!str "avg"
+    : !!float "0.278"
+  }
+}
diff --git a/tests/data/spec-06-05.data b/tests/data/spec-06-05.data
new file mode 100644
index 0000000..37613f5
--- /dev/null
+++ b/tests/data/spec-06-05.data
@@ -0,0 +1,6 @@
+{ first: Sammy, last: Sosa }:
+# Statistics:
+  hr:  # Home runs
+    65
+  avg: # Average
+    0.278
diff --git a/tests/data/spec-06-06.canonical b/tests/data/spec-06-06.canonical
new file mode 100644
index 0000000..513d07a
--- /dev/null
+++ b/tests/data/spec-06-06.canonical
@@ -0,0 +1,10 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "plain"
+  : !!str "text lines",
+  ? !!str "quoted"
+  : !!str "text lines",
+  ? !!str "block"
+  : !!str "text\n 	lines\n"
+}
diff --git a/tests/data/spec-06-06.data b/tests/data/spec-06-06.data
new file mode 100644
index 0000000..2f62d08
--- /dev/null
+++ b/tests/data/spec-06-06.data
@@ -0,0 +1,7 @@
+plain: text
+  lines
+quoted: "text
+  	lines"
+block: |
+  text
+   	lines
diff --git a/tests/data/spec-06-07.canonical b/tests/data/spec-06-07.canonical
new file mode 100644
index 0000000..11357e4
--- /dev/null
+++ b/tests/data/spec-06-07.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "foo\nbar",
+  !!str "foo\n\nbar"
+]
diff --git a/tests/data/spec-06-07.data b/tests/data/spec-06-07.data
new file mode 100644
index 0000000..130cfa7
--- /dev/null
+++ b/tests/data/spec-06-07.data
@@ -0,0 +1,8 @@
+- foo
+ 
+  bar
+- |-
+  foo
+ 
+  bar
+  
diff --git a/tests/data/spec-06-08.canonical b/tests/data/spec-06-08.canonical
new file mode 100644
index 0000000..cc72bc8
--- /dev/null
+++ b/tests/data/spec-06-08.canonical
@@ -0,0 +1,5 @@
+%YAML 1.1
+--- !!str
+"specific\L\
+ trimmed\n\n\n\
+ as space"
diff --git a/tests/data/spec-06-08.data b/tests/data/spec-06-08.data
new file mode 100644
index 0000000..f2896ed
--- /dev/null
+++ b/tests/data/spec-06-08.data
@@ -0,0 +1,2 @@
+>-
+  specific… trimmed…  … ……  as…  space
diff --git a/tests/data/spec-07-01.canonical b/tests/data/spec-07-01.canonical
new file mode 100644
index 0000000..8c8c48d
--- /dev/null
+++ b/tests/data/spec-07-01.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+--- !!str
+"foo"
diff --git a/tests/data/spec-07-01.data b/tests/data/spec-07-01.data
new file mode 100644
index 0000000..2113eb6
--- /dev/null
+++ b/tests/data/spec-07-01.data
@@ -0,0 +1,3 @@
+%FOO  bar baz # Should be ignored
+               # with a warning.
+--- "foo"
diff --git a/tests/data/spec-07-01.skip-ext b/tests/data/spec-07-01.skip-ext
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/spec-07-01.skip-ext
diff --git a/tests/data/spec-07-02.canonical b/tests/data/spec-07-02.canonical
new file mode 100644
index 0000000..cb7dd1c
--- /dev/null
+++ b/tests/data/spec-07-02.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "foo"
diff --git a/tests/data/spec-07-02.data b/tests/data/spec-07-02.data
new file mode 100644
index 0000000..c8b7322
--- /dev/null
+++ b/tests/data/spec-07-02.data
@@ -0,0 +1,4 @@
+%YAML 1.2 # Attempt parsing
+           # with a warning
+---
+"foo"
diff --git a/tests/data/spec-07-02.skip-ext b/tests/data/spec-07-02.skip-ext
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/spec-07-02.skip-ext
diff --git a/tests/data/spec-07-03.data b/tests/data/spec-07-03.data
new file mode 100644
index 0000000..4bfa07a
--- /dev/null
+++ b/tests/data/spec-07-03.data
@@ -0,0 +1,3 @@
+%YAML 1.1
+%YAML 1.1
+foo
diff --git a/tests/data/spec-07-03.error b/tests/data/spec-07-03.error
new file mode 100644
index 0000000..b0ac446
--- /dev/null
+++ b/tests/data/spec-07-03.error
@@ -0,0 +1,3 @@
+ERROR:
+The YAML directive must only be
+given at most once per document.
diff --git a/tests/data/spec-07-04.canonical b/tests/data/spec-07-04.canonical
new file mode 100644
index 0000000..cb7dd1c
--- /dev/null
+++ b/tests/data/spec-07-04.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "foo"
diff --git a/tests/data/spec-07-04.data b/tests/data/spec-07-04.data
new file mode 100644
index 0000000..50f5ab9
--- /dev/null
+++ b/tests/data/spec-07-04.data
@@ -0,0 +1,3 @@
+%TAG !yaml! tag:yaml.org,2002:
+---
+!yaml!str "foo"
diff --git a/tests/data/spec-07-05.data b/tests/data/spec-07-05.data
new file mode 100644
index 0000000..7276eae
--- /dev/null
+++ b/tests/data/spec-07-05.data
@@ -0,0 +1,3 @@
+%TAG ! !foo
+%TAG ! !foo
+bar
diff --git a/tests/data/spec-07-05.error b/tests/data/spec-07-05.error
new file mode 100644
index 0000000..5601b19
--- /dev/null
+++ b/tests/data/spec-07-05.error
@@ -0,0 +1,4 @@
+ERROR:
+The TAG directive must only
+be given at most once per
+handle in the same document.
diff --git a/tests/data/spec-07-06.canonical b/tests/data/spec-07-06.canonical
new file mode 100644
index 0000000..bddf616
--- /dev/null
+++ b/tests/data/spec-07-06.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!seq [
+  !<!foobar> "baz",
+  !<tag:yaml.org,2002:str> "string"
+]
diff --git a/tests/data/spec-07-06.data b/tests/data/spec-07-06.data
new file mode 100644
index 0000000..d9854cb
--- /dev/null
+++ b/tests/data/spec-07-06.data
@@ -0,0 +1,5 @@
+%TAG !      !foo
+%TAG !yaml! tag:yaml.org,2002:
+---
+- !bar "baz"
+- !yaml!str "string"
diff --git a/tests/data/spec-07-07a.canonical b/tests/data/spec-07-07a.canonical
new file mode 100644
index 0000000..fa086df
--- /dev/null
+++ b/tests/data/spec-07-07a.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!<!foo> "bar"
diff --git a/tests/data/spec-07-07a.data b/tests/data/spec-07-07a.data
new file mode 100644
index 0000000..9d42ec3
--- /dev/null
+++ b/tests/data/spec-07-07a.data
@@ -0,0 +1,2 @@
+# Private application:
+!foo "bar"
diff --git a/tests/data/spec-07-07b.canonical b/tests/data/spec-07-07b.canonical
new file mode 100644
index 0000000..fe917d8
--- /dev/null
+++ b/tests/data/spec-07-07b.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!<tag:ben-kiki.org,2000:app/foo> "bar"
diff --git a/tests/data/spec-07-07b.data b/tests/data/spec-07-07b.data
new file mode 100644
index 0000000..2d36d0e
--- /dev/null
+++ b/tests/data/spec-07-07b.data
@@ -0,0 +1,4 @@
+# Migrated to global:
+%TAG ! tag:ben-kiki.org,2000:app/
+---
+!foo "bar"
diff --git a/tests/data/spec-07-08.canonical b/tests/data/spec-07-08.canonical
new file mode 100644
index 0000000..703aa7b
--- /dev/null
+++ b/tests/data/spec-07-08.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!seq [
+  !<!foo> "bar",
+  !<tag:yaml.org,2002:str> "string",
+  !<tag:ben-kiki.org,2000:type> "baz"
+]
diff --git a/tests/data/spec-07-08.data b/tests/data/spec-07-08.data
new file mode 100644
index 0000000..e2c6d9e
--- /dev/null
+++ b/tests/data/spec-07-08.data
@@ -0,0 +1,9 @@
+# Explicitly specify default settings:
+%TAG !     !
+%TAG !!    tag:yaml.org,2002:
+# Named handles have no default:
+%TAG !o! tag:ben-kiki.org,2000:
+---
+- !foo "bar"
+- !!str "string"
+- !o!type "baz"
diff --git a/tests/data/spec-07-09.canonical b/tests/data/spec-07-09.canonical
new file mode 100644
index 0000000..32d9e94
--- /dev/null
+++ b/tests/data/spec-07-09.canonical
@@ -0,0 +1,9 @@
+%YAML 1.1
+---
+!!str "foo"
+%YAML 1.1
+---
+!!str "bar"
+%YAML 1.1
+---
+!!str "baz"
diff --git a/tests/data/spec-07-09.data b/tests/data/spec-07-09.data
new file mode 100644
index 0000000..1209d47
--- /dev/null
+++ b/tests/data/spec-07-09.data
@@ -0,0 +1,11 @@
+---
+foo
+...
+# Repeated end marker.
+...
+---
+bar
+# No end marker.
+---
+baz
+...
diff --git a/tests/data/spec-07-10.canonical b/tests/data/spec-07-10.canonical
new file mode 100644
index 0000000..1db650a
--- /dev/null
+++ b/tests/data/spec-07-10.canonical
@@ -0,0 +1,15 @@
+%YAML 1.1
+---
+!!str "Root flow scalar"
+%YAML 1.1
+---
+!!str "Root block scalar\n"
+%YAML 1.1
+---
+!!map {
+  ? !!str "foo"
+  : !!str "bar"
+}
+---
+#!!str ""
+!!null ""
diff --git a/tests/data/spec-07-10.data b/tests/data/spec-07-10.data
new file mode 100644
index 0000000..6939b39
--- /dev/null
+++ b/tests/data/spec-07-10.data
@@ -0,0 +1,11 @@
+"Root flow
+ scalar"
+--- !!str >
+ Root block
+ scalar
+---
+# Root collection:
+foo : bar
+... # Is optional.
+---
+# Explicit document may be empty.
diff --git a/tests/data/spec-07-11.data b/tests/data/spec-07-11.data
new file mode 100644
index 0000000..d11302d
--- /dev/null
+++ b/tests/data/spec-07-11.data
@@ -0,0 +1,2 @@
+# A stream may contain
+# no documents.
diff --git a/tests/data/spec-07-11.empty b/tests/data/spec-07-11.empty
new file mode 100644
index 0000000..bfffa8b
--- /dev/null
+++ b/tests/data/spec-07-11.empty
@@ -0,0 +1,2 @@
+# This stream contains no
+# documents, only comments.
diff --git a/tests/data/spec-07-12a.canonical b/tests/data/spec-07-12a.canonical
new file mode 100644
index 0000000..efc116f
--- /dev/null
+++ b/tests/data/spec-07-12a.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "foo"
+  : !!str "bar"
+}
diff --git a/tests/data/spec-07-12a.data b/tests/data/spec-07-12a.data
new file mode 100644
index 0000000..3807d57
--- /dev/null
+++ b/tests/data/spec-07-12a.data
@@ -0,0 +1,3 @@
+# Implicit document. Root
+# collection (mapping) node.
+foo : bar
diff --git a/tests/data/spec-07-12b.canonical b/tests/data/spec-07-12b.canonical
new file mode 100644
index 0000000..04bcffc
--- /dev/null
+++ b/tests/data/spec-07-12b.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "Text content\n"
diff --git a/tests/data/spec-07-12b.data b/tests/data/spec-07-12b.data
new file mode 100644
index 0000000..43250db
--- /dev/null
+++ b/tests/data/spec-07-12b.data
@@ -0,0 +1,4 @@
+# Explicit document. Root
+# scalar (literal) node.
+--- |
+ Text content
diff --git a/tests/data/spec-07-13.canonical b/tests/data/spec-07-13.canonical
new file mode 100644
index 0000000..5af71e9
--- /dev/null
+++ b/tests/data/spec-07-13.canonical
@@ -0,0 +1,9 @@
+%YAML 1.1
+---
+!!str "First document"
+---
+!<!foo> "No directives"
+---
+!<!foobar> "With directives"
+---
+!<!baz> "Reset settings"
diff --git a/tests/data/spec-07-13.data b/tests/data/spec-07-13.data
new file mode 100644
index 0000000..ba7ec63
--- /dev/null
+++ b/tests/data/spec-07-13.data
@@ -0,0 +1,9 @@
+! "First document"
+---
+!foo "No directives"
+%TAG ! !foo
+---
+!bar "With directives"
+%YAML 1.1
+---
+!baz "Reset settings"
diff --git a/tests/data/spec-08-01.canonical b/tests/data/spec-08-01.canonical
new file mode 100644
index 0000000..69e4161
--- /dev/null
+++ b/tests/data/spec-08-01.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!map {
+  ? &A1 !!str "foo"
+  : !!str "bar",
+  ? &A2 !!str "baz"
+  : *A1
+}
diff --git a/tests/data/spec-08-01.data b/tests/data/spec-08-01.data
new file mode 100644
index 0000000..48986ec
--- /dev/null
+++ b/tests/data/spec-08-01.data
@@ -0,0 +1,2 @@
+!!str &a1 "foo" : !!str bar
+&a2 baz : *a1
diff --git a/tests/data/spec-08-02.canonical b/tests/data/spec-08-02.canonical
new file mode 100644
index 0000000..dd6f76e
--- /dev/null
+++ b/tests/data/spec-08-02.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "First occurrence"
+  : &A !!str "Value",
+  ? !!str "Second occurrence"
+  : *A
+}
diff --git a/tests/data/spec-08-02.data b/tests/data/spec-08-02.data
new file mode 100644
index 0000000..600d179
--- /dev/null
+++ b/tests/data/spec-08-02.data
@@ -0,0 +1,2 @@
+First occurrence: &anchor Value
+Second occurrence: *anchor
diff --git a/tests/data/spec-08-03.canonical b/tests/data/spec-08-03.canonical
new file mode 100644
index 0000000..be7ea8f
--- /dev/null
+++ b/tests/data/spec-08-03.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!map {
+  ? !<tag:yaml.org,2002:str> "foo"
+  : !<!bar> "baz"
+}
diff --git a/tests/data/spec-08-03.data b/tests/data/spec-08-03.data
new file mode 100644
index 0000000..8e51f52
--- /dev/null
+++ b/tests/data/spec-08-03.data
@@ -0,0 +1,2 @@
+!<tag:yaml.org,2002:str> foo :
+  !<!bar> baz
diff --git a/tests/data/spec-08-04.data b/tests/data/spec-08-04.data
new file mode 100644
index 0000000..f7d1b01
--- /dev/null
+++ b/tests/data/spec-08-04.data
@@ -0,0 +1,2 @@
+- !<!> foo
+- !<$:?> bar
diff --git a/tests/data/spec-08-04.error b/tests/data/spec-08-04.error
new file mode 100644
index 0000000..6066375
--- /dev/null
+++ b/tests/data/spec-08-04.error
@@ -0,0 +1,6 @@
+ERROR:
+- Verbatim tags aren't resolved,
+  so ! is invalid.
+- The $:? tag is neither a global
+  URI tag nor a local tag starting
+  with '!'.
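A hedged aside on the rule this fixture documents: a verbatim tag must be
either a global URI tag or a local tag starting with '!'. A minimal sketch
using PyYAML's public API:

    import yaml

    print yaml.compose('!<tag:yaml.org,2002:str> ok').tag  # global URI tag
    print yaml.compose('!<!local> ok').tag                 # local tag
    # '!<!>' and '!<$:?>' are rejected, which is what this fixture checks.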
diff --git a/tests/data/spec-08-05.canonical b/tests/data/spec-08-05.canonical
new file mode 100644
index 0000000..a5c710a
--- /dev/null
+++ b/tests/data/spec-08-05.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!seq [
+  !<!local> "foo",
+  !<tag:yaml.org,2002:str> "bar",
+  !<tag:ben-kiki.org,2000:type> "baz",
+]
diff --git a/tests/data/spec-08-05.data b/tests/data/spec-08-05.data
new file mode 100644
index 0000000..93576ed
--- /dev/null
+++ b/tests/data/spec-08-05.data
@@ -0,0 +1,5 @@
+%TAG !o! tag:ben-kiki.org,2000:
+---
+- !local foo
+- !!str bar
+- !o!type baz
diff --git a/tests/data/spec-08-06.data b/tests/data/spec-08-06.data
new file mode 100644
index 0000000..8580010
--- /dev/null
+++ b/tests/data/spec-08-06.data
@@ -0,0 +1,5 @@
+%TAG !o! tag:ben-kiki.org,2000:
+---
+- !$a!b foo
+- !o! bar
+- !h!type baz
diff --git a/tests/data/spec-08-06.error b/tests/data/spec-08-06.error
new file mode 100644
index 0000000..fb76f42
--- /dev/null
+++ b/tests/data/spec-08-06.error
@@ -0,0 +1,4 @@
+ERROR:
+- The !$a! looks like a handle.
+- The !o! handle has no suffix.
+- The !h! handle wasn't declared.
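A hedged illustration of the handle rules above, reusing the %TAG
declaration from the matching .data fixture: a named handle must be
declared and followed by a non-empty suffix.

    import yaml

    node = yaml.compose('%TAG !o! tag:ben-kiki.org,2000:\n--- !o!type baz')
    print node.tag   # -> tag:ben-kiki.org,2000:type
    # '!o!' with no suffix, or the undeclared '!h!type', is rejected.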
diff --git a/tests/data/spec-08-07.canonical b/tests/data/spec-08-07.canonical
new file mode 100644
index 0000000..e2f43d9
--- /dev/null
+++ b/tests/data/spec-08-07.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!seq [
+  !<tag:yaml.org,2002:str> "12",
+  !<tag:yaml.org,2002:int> "12",
+#  !<tag:yaml.org,2002:str> "12",
+  !<tag:yaml.org,2002:int> "12",
+]
diff --git a/tests/data/spec-08-07.data b/tests/data/spec-08-07.data
new file mode 100644
index 0000000..98aa565
--- /dev/null
+++ b/tests/data/spec-08-07.data
@@ -0,0 +1,4 @@
+# Assuming conventional resolution:
+- "12"
+- 12
+- ! 12
diff --git a/tests/data/spec-08-08.canonical b/tests/data/spec-08-08.canonical
new file mode 100644
index 0000000..d3f8b1a
--- /dev/null
+++ b/tests/data/spec-08-08.canonical
@@ -0,0 +1,15 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "foo"
+  : !!str "bar baz"
+}
+%YAML 1.1
+---
+!!str "foo bar"
+%YAML 1.1
+---
+!!str "foo bar"
+%YAML 1.1
+---
+!!str "foo\n"
diff --git a/tests/data/spec-08-08.data b/tests/data/spec-08-08.data
new file mode 100644
index 0000000..757a93d
--- /dev/null
+++ b/tests/data/spec-08-08.data
@@ -0,0 +1,13 @@
+---
+foo:
+ "bar
+ baz"
+---
+"foo
+ bar"
+---
+foo
+ bar
+--- |
+ foo
+...
diff --git a/tests/data/spec-08-09.canonical b/tests/data/spec-08-09.canonical
new file mode 100644
index 0000000..3805daf
--- /dev/null
+++ b/tests/data/spec-08-09.canonical
@@ -0,0 +1,21 @@
+%YAML 1.1
+--- !!map {
+  ? !!str "scalars" : !!map {
+      ? !!str "plain"
+      : !!str "some text",
+      ? !!str "quoted"
+      : !!map {
+        ? !!str "single"
+        : !!str "some text",
+        ? !!str "double"
+        : !!str "some text"
+  } },
+  ? !!str "collections" : !!map {
+    ? !!str "sequence" : !!seq [
+      !!str "entry",
+      !!map {
+        ? !!str "key" : !!str "value"
+    } ],
+    ? !!str "mapping" : !!map {
+      ? !!str "key" : !!str "value"
+} } }
diff --git a/tests/data/spec-08-09.data b/tests/data/spec-08-09.data
new file mode 100644
index 0000000..69da042
--- /dev/null
+++ b/tests/data/spec-08-09.data
@@ -0,0 +1,11 @@
+---
+scalars:
+  plain: !!str some text
+  quoted:
+    single: 'some text'
+    double: "some text"
+collections:
+  sequence: !!seq [ !!str entry,
+    # Mapping entry:
+      key: value ]
+  mapping: { key: value }
diff --git a/tests/data/spec-08-10.canonical b/tests/data/spec-08-10.canonical
new file mode 100644
index 0000000..8281c5e
--- /dev/null
+++ b/tests/data/spec-08-10.canonical
@@ -0,0 +1,23 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "block styles" : !!map {
+    ? !!str "scalars" : !!map {
+      ? !!str "literal"
+      : !!str "#!/usr/bin/perl\n\
+          print \"Hello,
+          world!\\n\";\n",
+      ? !!str "folded"
+      : !!str "This sentence
+          is false.\n"
+    },
+    ? !!str "collections" : !!map {
+      ? !!str "sequence" : !!seq [
+        !!str "entry",
+        !!map {
+          ? !!str "key" : !!str "value"
+        }
+      ],
+      ? !!str "mapping" : !!map {
+        ? !!str "key" : !!str "value"
+} } } }
diff --git a/tests/data/spec-08-10.data b/tests/data/spec-08-10.data
new file mode 100644
index 0000000..72acc56
--- /dev/null
+++ b/tests/data/spec-08-10.data
@@ -0,0 +1,15 @@
+block styles:
+  scalars:
+    literal: !!str |
+      #!/usr/bin/perl
+      print "Hello, world!\n";
+    folded: >
+      This sentence
+      is false.
+  collections: !!map
+    sequence: !!seq # Entry:
+      - entry # Plain
+      # Mapping entry:
+      - key: value
+    mapping: 
+      key: value
diff --git a/tests/data/spec-08-11.canonical b/tests/data/spec-08-11.canonical
new file mode 100644
index 0000000..dd6f76e
--- /dev/null
+++ b/tests/data/spec-08-11.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "First occurrence"
+  : &A !!str "Value",
+  ? !!str "Second occurrence"
+  : *A
+}
diff --git a/tests/data/spec-08-11.data b/tests/data/spec-08-11.data
new file mode 100644
index 0000000..600d179
--- /dev/null
+++ b/tests/data/spec-08-11.data
@@ -0,0 +1,2 @@
+First occurrence: &anchor Value
+Second occurrence: *anchor
diff --git a/tests/data/spec-08-12.canonical b/tests/data/spec-08-12.canonical
new file mode 100644
index 0000000..93899f4
--- /dev/null
+++ b/tests/data/spec-08-12.canonical
@@ -0,0 +1,10 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "Without properties",
+  &A !!str "Anchored",
+  !!str "Tagged",
+  *A,
+  !!str "",
+  !!str "",
+]
diff --git a/tests/data/spec-08-12.data b/tests/data/spec-08-12.data
new file mode 100644
index 0000000..3d4c6b7
--- /dev/null
+++ b/tests/data/spec-08-12.data
@@ -0,0 +1,8 @@
+[
+  Without properties,
+  &anchor "Anchored",
+  !!str 'Tagged',
+  *anchor, # Alias node
+  !!str ,  # Empty plain scalar
+  '',   # Empty plain scalar
+]
diff --git a/tests/data/spec-08-13.canonical b/tests/data/spec-08-13.canonical
new file mode 100644
index 0000000..618bb7b
--- /dev/null
+++ b/tests/data/spec-08-13.canonical
@@ -0,0 +1,10 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "foo"
+#  : !!str "",
+#  ? !!str ""
+  : !!null "",
+  ? !!null ""
+  : !!str "bar",
+}
diff --git a/tests/data/spec-08-13.data b/tests/data/spec-08-13.data
new file mode 100644
index 0000000..ebe663a
--- /dev/null
+++ b/tests/data/spec-08-13.data
@@ -0,0 +1,4 @@
+{
+  ? foo :,
+  ? : bar,
+}
diff --git a/tests/data/spec-08-13.skip-ext b/tests/data/spec-08-13.skip-ext
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/spec-08-13.skip-ext
diff --git a/tests/data/spec-08-14.canonical b/tests/data/spec-08-14.canonical
new file mode 100644
index 0000000..11db439
--- /dev/null
+++ b/tests/data/spec-08-14.canonical
@@ -0,0 +1,10 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "flow in block",
+  !!str "Block scalar\n",
+  !!map {
+    ? !!str "foo"
+    : !!str "bar"
+  }
+]
diff --git a/tests/data/spec-08-14.data b/tests/data/spec-08-14.data
new file mode 100644
index 0000000..2fbb1f7
--- /dev/null
+++ b/tests/data/spec-08-14.data
@@ -0,0 +1,5 @@
+- "flow in block"
+- >
+ Block scalar
+- !!map # Block collection
+  foo : bar
diff --git a/tests/data/spec-08-15.canonical b/tests/data/spec-08-15.canonical
new file mode 100644
index 0000000..76f028e
--- /dev/null
+++ b/tests/data/spec-08-15.canonical
@@ -0,0 +1,11 @@
+%YAML 1.1
+---
+!!seq [
+  !!null "",
+  !!map {
+    ? !!str "foo"
+    : !!null "",
+    ? !!null ""
+    : !!str "bar",
+  }
+]
diff --git a/tests/data/spec-08-15.data b/tests/data/spec-08-15.data
new file mode 100644
index 0000000..7c86bcf
--- /dev/null
+++ b/tests/data/spec-08-15.data
@@ -0,0 +1,5 @@
+- # Empty plain scalar
+- ? foo
+  :
+  ?
+  : bar
diff --git a/tests/data/spec-09-01.canonical b/tests/data/spec-09-01.canonical
new file mode 100644
index 0000000..e71a548
--- /dev/null
+++ b/tests/data/spec-09-01.canonical
@@ -0,0 +1,11 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "simple key"
+  : !!map {
+    ? !!str "also simple"
+    : !!str "value",
+    ? !!str "not a simple key"
+    : !!str "any value"
+  }
+}
diff --git a/tests/data/spec-09-01.data b/tests/data/spec-09-01.data
new file mode 100644
index 0000000..9e83eaf
--- /dev/null
+++ b/tests/data/spec-09-01.data
@@ -0,0 +1,6 @@
+"simple key" : {
+  "also simple" : value,
+  ? "not a
+  simple key" : "any
+  value"
+}
diff --git a/tests/data/spec-09-02.canonical b/tests/data/spec-09-02.canonical
new file mode 100644
index 0000000..6f8f41a
--- /dev/null
+++ b/tests/data/spec-09-02.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!str "as space \
+  trimmed\n\
+  specific\L\n\
+  escaped\t\n\
+  none"
diff --git a/tests/data/spec-09-02.data b/tests/data/spec-09-02.data
new file mode 100644
index 0000000..d84883d
--- /dev/null
+++ b/tests/data/spec-09-02.data
@@ -0,0 +1,6 @@
+ "as space	
+ trimmed 
+
+ specific

+ escaped	\
 
+ none"
diff --git a/tests/data/spec-09-03.canonical b/tests/data/spec-09-03.canonical
new file mode 100644
index 0000000..658c6df
--- /dev/null
+++ b/tests/data/spec-09-03.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!seq [
+  !!str " last",
+  !!str " last",
+  !!str " \tfirst last",
+]
diff --git a/tests/data/spec-09-03.data b/tests/data/spec-09-03.data
new file mode 100644
index 0000000..e0b914d
--- /dev/null
+++ b/tests/data/spec-09-03.data
@@ -0,0 +1,6 @@
+- "
+  last"
+- " 	
+  last"
+- " 	first
+  last"
diff --git a/tests/data/spec-09-04.canonical b/tests/data/spec-09-04.canonical
new file mode 100644
index 0000000..fa46632
--- /dev/null
+++ b/tests/data/spec-09-04.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!str "first \
+  inner 1  \
+  inner 2 \
+  last"
diff --git a/tests/data/spec-09-04.data b/tests/data/spec-09-04.data
new file mode 100644
index 0000000..313a91b
--- /dev/null
+++ b/tests/data/spec-09-04.data
@@ -0,0 +1,4 @@
+ "first
+ 	inner 1	
+ \ inner 2 \
+ last"
diff --git a/tests/data/spec-09-05.canonical b/tests/data/spec-09-05.canonical
new file mode 100644
index 0000000..24d1052
--- /dev/null
+++ b/tests/data/spec-09-05.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "first ",
+  !!str "first\nlast",
+  !!str "first inner  \tlast",
+]
diff --git a/tests/data/spec-09-05.data b/tests/data/spec-09-05.data
new file mode 100644
index 0000000..624c30e
--- /dev/null
+++ b/tests/data/spec-09-05.data
@@ -0,0 +1,8 @@
+- "first
+  	"
+- "first
+
+  	last"
+- "first
+ inner
+ \ 	last"
diff --git a/tests/data/spec-09-06.canonical b/tests/data/spec-09-06.canonical
new file mode 100644
index 0000000..5028772
--- /dev/null
+++ b/tests/data/spec-09-06.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "here's to \"quotes\""
diff --git a/tests/data/spec-09-06.data b/tests/data/spec-09-06.data
new file mode 100644
index 0000000..b038078
--- /dev/null
+++ b/tests/data/spec-09-06.data
@@ -0,0 +1 @@
+ 'here''s to "quotes"'
diff --git a/tests/data/spec-09-07.canonical b/tests/data/spec-09-07.canonical
new file mode 100644
index 0000000..e71a548
--- /dev/null
+++ b/tests/data/spec-09-07.canonical
@@ -0,0 +1,11 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "simple key"
+  : !!map {
+    ? !!str "also simple"
+    : !!str "value",
+    ? !!str "not a simple key"
+    : !!str "any value"
+  }
+}
diff --git a/tests/data/spec-09-07.data b/tests/data/spec-09-07.data
new file mode 100644
index 0000000..755b54a
--- /dev/null
+++ b/tests/data/spec-09-07.data
@@ -0,0 +1,6 @@
+'simple key' : {
+  'also simple' : value,
+  ? 'not a
+  simple key' : 'any
+  value'
+}
diff --git a/tests/data/spec-09-08.canonical b/tests/data/spec-09-08.canonical
new file mode 100644
index 0000000..06abdb5
--- /dev/null
+++ b/tests/data/spec-09-08.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!str "as space \
+  trimmed\n\
+  specific\L\n\
+  none"
diff --git a/tests/data/spec-09-08.data b/tests/data/spec-09-08.data
new file mode 100644
index 0000000..aa4d458
--- /dev/null
+++ b/tests/data/spec-09-08.data
@@ -0,0 +1 @@
+ 'as space	… trimmed …… specific
… none'
diff --git a/tests/data/spec-09-09.canonical b/tests/data/spec-09-09.canonical
new file mode 100644
index 0000000..658c6df
--- /dev/null
+++ b/tests/data/spec-09-09.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!seq [
+  !!str " last",
+  !!str " last",
+  !!str " \tfirst last",
+]
diff --git a/tests/data/spec-09-09.data b/tests/data/spec-09-09.data
new file mode 100644
index 0000000..52171df
--- /dev/null
+++ b/tests/data/spec-09-09.data
@@ -0,0 +1,6 @@
+- '
+  last'
+- ' 	
+  last'
+- ' 	first
+  last'
diff --git a/tests/data/spec-09-10.canonical b/tests/data/spec-09-10.canonical
new file mode 100644
index 0000000..2028d04
--- /dev/null
+++ b/tests/data/spec-09-10.canonical
@@ -0,0 +1,5 @@
+%YAML 1.1
+---
+!!str "first \
+  inner \
+  last"
diff --git a/tests/data/spec-09-10.data b/tests/data/spec-09-10.data
new file mode 100644
index 0000000..0e41449
--- /dev/null
+++ b/tests/data/spec-09-10.data
@@ -0,0 +1,3 @@
+ 'first
+ 	inner	
+ last'
diff --git a/tests/data/spec-09-11.canonical b/tests/data/spec-09-11.canonical
new file mode 100644
index 0000000..4eb222c
--- /dev/null
+++ b/tests/data/spec-09-11.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "first ",
+  !!str "first\nlast",
+]
diff --git a/tests/data/spec-09-11.data b/tests/data/spec-09-11.data
new file mode 100644
index 0000000..5efa873
--- /dev/null
+++ b/tests/data/spec-09-11.data
@@ -0,0 +1,5 @@
+- 'first
+  	'
+- 'first
+
+  	last'
diff --git a/tests/data/spec-09-12.canonical b/tests/data/spec-09-12.canonical
new file mode 100644
index 0000000..d8e6dce
--- /dev/null
+++ b/tests/data/spec-09-12.canonical
@@ -0,0 +1,12 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "::std::vector",
+  !!str "Up, up, and away!",
+  !!int "-123",
+  !!seq [
+    !!str "::std::vector",
+    !!str "Up, up, and away!",
+    !!int "-123",
+  ]
+]
diff --git a/tests/data/spec-09-12.data b/tests/data/spec-09-12.data
new file mode 100644
index 0000000..b9a3ac5
--- /dev/null
+++ b/tests/data/spec-09-12.data
@@ -0,0 +1,8 @@
+# Outside flow collection:
+- ::std::vector
+- Up, up, and away!
+- -123
+# Inside flow collection:
+- [ '::std::vector',
+  "Up, up, and away!",
+  -123 ]
diff --git a/tests/data/spec-09-13.canonical b/tests/data/spec-09-13.canonical
new file mode 100644
index 0000000..e71a548
--- /dev/null
+++ b/tests/data/spec-09-13.canonical
@@ -0,0 +1,11 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "simple key"
+  : !!map {
+    ? !!str "also simple"
+    : !!str "value",
+    ? !!str "not a simple key"
+    : !!str "any value"
+  }
+}
diff --git a/tests/data/spec-09-13.data b/tests/data/spec-09-13.data
new file mode 100644
index 0000000..b156386
--- /dev/null
+++ b/tests/data/spec-09-13.data
@@ -0,0 +1,6 @@
+simple key : {
+  also simple : value,
+  ? not a
+  simple key : any
+  value
+}
diff --git a/tests/data/spec-09-14.data b/tests/data/spec-09-14.data
new file mode 100644
index 0000000..97f2316
--- /dev/null
+++ b/tests/data/spec-09-14.data
@@ -0,0 +1,14 @@
+---
+--- ||| : foo
+... >>>: bar
+---
+[
+---
+,
+... ,
+{
+--- :
+... # Nested
+}
+]
+...
diff --git a/tests/data/spec-09-14.error b/tests/data/spec-09-14.error
new file mode 100644
index 0000000..9f3db7b
--- /dev/null
+++ b/tests/data/spec-09-14.error
@@ -0,0 +1,6 @@
+ERROR:
+ The --- and ... document
+ start and end markers must
+ not be specified as the
+ first content line of a
+ non-indented plain scalar.
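A short sketch of the marker rule: '---' and '...' cannot open a top-level
plain scalar line, but the same text is fine once quoted (as spec-09-15
shows).

    import yaml

    # yaml.safe_load('--- ||| : foo')    # raises a scanner error
    print yaml.safe_load('"---" : foo')  # -> {'---': 'foo'}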
diff --git a/tests/data/spec-09-15.canonical b/tests/data/spec-09-15.canonical
new file mode 100644
index 0000000..df02040
--- /dev/null
+++ b/tests/data/spec-09-15.canonical
@@ -0,0 +1,18 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "---"
+  : !!str "foo",
+  ? !!str "..."
+  : !!str "bar"
+}
+%YAML 1.1
+---
+!!seq [
+  !!str "---",
+  !!str "...",
+  !!map {
+    ? !!str "---"
+    : !!str "..."
+  }
+]
diff --git a/tests/data/spec-09-15.data b/tests/data/spec-09-15.data
new file mode 100644
index 0000000..e6863b0
--- /dev/null
+++ b/tests/data/spec-09-15.data
@@ -0,0 +1,13 @@
+---
+"---" : foo
+...: bar
+---
+[
+---,
+...,
+{
+? ---
+: ...
+}
+]
+...
diff --git a/tests/data/spec-09-16.canonical b/tests/data/spec-09-16.canonical
new file mode 100644
index 0000000..06abdb5
--- /dev/null
+++ b/tests/data/spec-09-16.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!str "as space \
+  trimmed\n\
+  specific\L\n\
+  none"
diff --git a/tests/data/spec-09-16.data b/tests/data/spec-09-16.data
new file mode 100644
index 0000000..473beb9
--- /dev/null
+++ b/tests/data/spec-09-16.data
@@ -0,0 +1,3 @@
+# Tabs are confusing:
+# as space/trimmed/specific/none
+ as space … trimmed …… specific
… none
diff --git a/tests/data/spec-09-17.canonical b/tests/data/spec-09-17.canonical
new file mode 100644
index 0000000..68cb70d
--- /dev/null
+++ b/tests/data/spec-09-17.canonical
@@ -0,0 +1,4 @@
+%YAML 1.1
+---
+!!str "first line\n\
+      more line"
diff --git a/tests/data/spec-09-17.data b/tests/data/spec-09-17.data
new file mode 100644
index 0000000..97bc46c
--- /dev/null
+++ b/tests/data/spec-09-17.data
@@ -0,0 +1,3 @@
+ first line 
+   
+  more line
diff --git a/tests/data/spec-09-18.canonical b/tests/data/spec-09-18.canonical
new file mode 100644
index 0000000..f21428f
--- /dev/null
+++ b/tests/data/spec-09-18.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "literal\n",
+  !!str " folded\n",
+  !!str "keep\n\n",
+  !!str " strip",
+]
diff --git a/tests/data/spec-09-18.data b/tests/data/spec-09-18.data
new file mode 100644
index 0000000..68c5d7c
--- /dev/null
+++ b/tests/data/spec-09-18.data
@@ -0,0 +1,9 @@
+- | # Just the style
+ literal
+- >1 # Indentation indicator
+  folded
+- |+ # Chomping indicator
+ keep
+
+- >-1 # Both indicators
+  strip
diff --git a/tests/data/spec-09-19.canonical b/tests/data/spec-09-19.canonical
new file mode 100644
index 0000000..3e828d7
--- /dev/null
+++ b/tests/data/spec-09-19.canonical
@@ -0,0 +1,6 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "literal\n",
+  !!str "folded\n",
+]
diff --git a/tests/data/spec-09-19.data b/tests/data/spec-09-19.data
new file mode 100644
index 0000000..f0e589d
--- /dev/null
+++ b/tests/data/spec-09-19.data
@@ -0,0 +1,4 @@
+- |
+ literal
+- >
+ folded
diff --git a/tests/data/spec-09-20.canonical b/tests/data/spec-09-20.canonical
new file mode 100644
index 0000000..d03bef5
--- /dev/null
+++ b/tests/data/spec-09-20.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "detected\n",
+  !!str "\n\n# detected\n",
+  !!str " explicit\n",
+  !!str "\t\ndetected\n",
+]
diff --git a/tests/data/spec-09-20.data b/tests/data/spec-09-20.data
new file mode 100644
index 0000000..39bee04
--- /dev/null
+++ b/tests/data/spec-09-20.data
@@ -0,0 +1,11 @@
+- |
+ detected
+- >
+ 
+  
+  # detected
+- |1
+  explicit
+- >
+ 	
+ detected
diff --git a/tests/data/spec-09-20.skip-ext b/tests/data/spec-09-20.skip-ext
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/data/spec-09-20.skip-ext
diff --git a/tests/data/spec-09-21.data b/tests/data/spec-09-21.data
new file mode 100644
index 0000000..0fdd14f
--- /dev/null
+++ b/tests/data/spec-09-21.data
@@ -0,0 +1,8 @@
+- |
+  
+ text
+- >
+  text
+ text
+- |1
+ text
diff --git a/tests/data/spec-09-21.error b/tests/data/spec-09-21.error
new file mode 100644
index 0000000..1379ca5
--- /dev/null
+++ b/tests/data/spec-09-21.error
@@ -0,0 +1,7 @@
+ERROR:
+- A leading all-space line must
+  not have too many spaces.
+- A following text line must
+  not be less indented.
+- The text is less indented
+  than the indicated level.
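A hedged sketch: loading this fixture's content trips the scanner, one
failure mode per list item above.

    import yaml

    try:
        yaml.safe_load('- |\n  \n text\n- >\n  text\n text\n- |1\n text\n')
    except yaml.YAMLError as error:
        print error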
diff --git a/tests/data/spec-09-22.canonical b/tests/data/spec-09-22.canonical
new file mode 100644
index 0000000..c1bbcd2
--- /dev/null
+++ b/tests/data/spec-09-22.canonical
@@ -0,0 +1,10 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "strip"
+  : !!str "text",
+  ? !!str "clip"
+  : !!str "text\n",
+  ? !!str "keep"
+  : !!str "text\L",
+}
diff --git a/tests/data/spec-09-22.data b/tests/data/spec-09-22.data
new file mode 100644
index 0000000..0dd51eb
--- /dev/null
+++ b/tests/data/spec-09-22.data
@@ -0,0 +1,4 @@
+strip: |-
+  text
clip: |
+  text…keep: |+
+  text

\ No newline at end of file
diff --git a/tests/data/spec-09-23.canonical b/tests/data/spec-09-23.canonical
new file mode 100644
index 0000000..c4444ca
--- /dev/null
+++ b/tests/data/spec-09-23.canonical
@@ -0,0 +1,10 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "strip"
+  : !!str "# text",
+  ? !!str "clip"
+  : !!str "# text\n",
+  ? !!str "keep"
+  : !!str "# text\L\n",
+}
diff --git a/tests/data/spec-09-23.data b/tests/data/spec-09-23.data
new file mode 100644
index 0000000..8972d2b
--- /dev/null
+++ b/tests/data/spec-09-23.data
@@ -0,0 +1,11 @@
+ # Strip
+  # Comments:
+strip: |-
+  # text
  
 # Clip
+  # comments:
+…clip: |
+  # text… 
 # Keep
+  # comments:
+…keep: |+
+  # text
… # Trail
+  # comments.
diff --git a/tests/data/spec-09-24.canonical b/tests/data/spec-09-24.canonical
new file mode 100644
index 0000000..45a99b0
--- /dev/null
+++ b/tests/data/spec-09-24.canonical
@@ -0,0 +1,10 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "strip"
+  : !!str "",
+  ? !!str "clip"
+  : !!str "",
+  ? !!str "keep"
+  : !!str "\n",
+}
diff --git a/tests/data/spec-09-24.data b/tests/data/spec-09-24.data
new file mode 100644
index 0000000..de0b64b
--- /dev/null
+++ b/tests/data/spec-09-24.data
@@ -0,0 +1,6 @@
+strip: >-
+
+clip: >
+
+keep: |+
+
diff --git a/tests/data/spec-09-25.canonical b/tests/data/spec-09-25.canonical
new file mode 100644
index 0000000..9d2327b
--- /dev/null
+++ b/tests/data/spec-09-25.canonical
@@ -0,0 +1,4 @@
+%YAML 1.1
+---
+!!str "literal\n\
+      \ttext\n"
diff --git a/tests/data/spec-09-25.data b/tests/data/spec-09-25.data
new file mode 100644
index 0000000..f6303a1
--- /dev/null
+++ b/tests/data/spec-09-25.data
@@ -0,0 +1,3 @@
+| # Simple block scalar
+ literal
+ 	text
diff --git a/tests/data/spec-09-26.canonical b/tests/data/spec-09-26.canonical
new file mode 100644
index 0000000..3029a11
--- /dev/null
+++ b/tests/data/spec-09-26.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "\n\nliteral\n\ntext\n"
diff --git a/tests/data/spec-09-26.data b/tests/data/spec-09-26.data
new file mode 100644
index 0000000..f28555a
--- /dev/null
+++ b/tests/data/spec-09-26.data
@@ -0,0 +1,8 @@
+|
+ 
+  
+  literal
+ 
+  text
+
+ # Comment
diff --git a/tests/data/spec-09-27.canonical b/tests/data/spec-09-27.canonical
new file mode 100644
index 0000000..3029a11
--- /dev/null
+++ b/tests/data/spec-09-27.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "\n\nliteral\n\ntext\n"
diff --git a/tests/data/spec-09-27.data b/tests/data/spec-09-27.data
new file mode 100644
index 0000000..f28555a
--- /dev/null
+++ b/tests/data/spec-09-27.data
@@ -0,0 +1,8 @@
+|
+ 
+  
+  literal
+ 
+  text
+
+ # Comment
diff --git a/tests/data/spec-09-28.canonical b/tests/data/spec-09-28.canonical
new file mode 100644
index 0000000..3029a11
--- /dev/null
+++ b/tests/data/spec-09-28.canonical
@@ -0,0 +1,3 @@
+%YAML 1.1
+---
+!!str "\n\nliteral\n\ntext\n"
diff --git a/tests/data/spec-09-28.data b/tests/data/spec-09-28.data
new file mode 100644
index 0000000..f28555a
--- /dev/null
+++ b/tests/data/spec-09-28.data
@@ -0,0 +1,8 @@
+|
+ 
+  
+  literal
+ 
+  text
+
+ # Comment
diff --git a/tests/data/spec-09-29.canonical b/tests/data/spec-09-29.canonical
new file mode 100644
index 0000000..0980789
--- /dev/null
+++ b/tests/data/spec-09-29.canonical
@@ -0,0 +1,4 @@
+%YAML 1.1
+---
+!!str "folded text\n\
+      \tlines\n"
diff --git a/tests/data/spec-09-29.data b/tests/data/spec-09-29.data
new file mode 100644
index 0000000..82e611f
--- /dev/null
+++ b/tests/data/spec-09-29.data
@@ -0,0 +1,4 @@
+> # Simple folded scalar
+ folded
+ text
+ 	lines
diff --git a/tests/data/spec-09-30.canonical b/tests/data/spec-09-30.canonical
new file mode 100644
index 0000000..fc37db1
--- /dev/null
+++ b/tests/data/spec-09-30.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!str "folded line\n\
+      next line\n\n\
+      \  * bullet\n\
+      \  * list\n\n\
+      last line\n"
diff --git a/tests/data/spec-09-30.data b/tests/data/spec-09-30.data
new file mode 100644
index 0000000..a4d8c36
--- /dev/null
+++ b/tests/data/spec-09-30.data
@@ -0,0 +1,14 @@
+>
+ folded
+ line
+
+ next
+ line
+
+   * bullet
+   * list
+
+ last
+ line
+
+# Comment
diff --git a/tests/data/spec-09-31.canonical b/tests/data/spec-09-31.canonical
new file mode 100644
index 0000000..fc37db1
--- /dev/null
+++ b/tests/data/spec-09-31.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!str "folded line\n\
+      next line\n\n\
+      \  * bullet\n\
+      \  * list\n\n\
+      last line\n"
diff --git a/tests/data/spec-09-31.data b/tests/data/spec-09-31.data
new file mode 100644
index 0000000..a4d8c36
--- /dev/null
+++ b/tests/data/spec-09-31.data
@@ -0,0 +1,14 @@
+>
+ folded
+ line
+
+ next
+ line
+
+   * bullet
+   * list
+
+ last
+ line
+
+# Comment
diff --git a/tests/data/spec-09-32.canonical b/tests/data/spec-09-32.canonical
new file mode 100644
index 0000000..fc37db1
--- /dev/null
+++ b/tests/data/spec-09-32.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!str "folded line\n\
+      next line\n\n\
+      \  * bullet\n\
+      \  * list\n\n\
+      last line\n"
diff --git a/tests/data/spec-09-32.data b/tests/data/spec-09-32.data
new file mode 100644
index 0000000..a4d8c36
--- /dev/null
+++ b/tests/data/spec-09-32.data
@@ -0,0 +1,14 @@
+>
+ folded
+ line
+
+ next
+ line
+
+   * bullet
+   * list
+
+ last
+ line
+
+# Comment
diff --git a/tests/data/spec-09-33.canonical b/tests/data/spec-09-33.canonical
new file mode 100644
index 0000000..fc37db1
--- /dev/null
+++ b/tests/data/spec-09-33.canonical
@@ -0,0 +1,7 @@
+%YAML 1.1
+---
+!!str "folded line\n\
+      next line\n\n\
+      \  * bullet\n\
+      \  * list\n\n\
+      last line\n"
diff --git a/tests/data/spec-09-33.data b/tests/data/spec-09-33.data
new file mode 100644
index 0000000..a4d8c36
--- /dev/null
+++ b/tests/data/spec-09-33.data
@@ -0,0 +1,14 @@
+>
+ folded
+ line
+
+ next
+ line
+
+   * bullet
+   * list
+
+ last
+ line
+
+# Comment
diff --git a/tests/data/spec-10-01.canonical b/tests/data/spec-10-01.canonical
new file mode 100644
index 0000000..d08cdd4
--- /dev/null
+++ b/tests/data/spec-10-01.canonical
@@ -0,0 +1,12 @@
+%YAML 1.1
+---
+!!seq [
+  !!seq [
+    !!str "inner",
+    !!str "inner",
+  ],
+  !!seq [
+    !!str "inner",
+    !!str "last",
+  ],
+]
diff --git a/tests/data/spec-10-01.data b/tests/data/spec-10-01.data
new file mode 100644
index 0000000..e668d38
--- /dev/null
+++ b/tests/data/spec-10-01.data
@@ -0,0 +1,2 @@
+- [ inner, inner, ]
+- [inner,last]
diff --git a/tests/data/spec-10-02.canonical b/tests/data/spec-10-02.canonical
new file mode 100644
index 0000000..82fe0d9
--- /dev/null
+++ b/tests/data/spec-10-02.canonical
@@ -0,0 +1,14 @@
+%YAML 1.1
+---
+!!seq [
+  !!str "double quoted",
+  !!str "single quoted",
+  !!str "plain text",
+  !!seq [
+    !!str "nested",
+  ],
+  !!map {
+    ? !!str "single"
+    : !!str "pair"
+  }
+]
diff --git a/tests/data/spec-10-02.data b/tests/data/spec-10-02.data
new file mode 100644
index 0000000..3b23351
--- /dev/null
+++ b/tests/data/spec-10-02.data
@@ -0,0 +1,8 @@
+[
+"double
+ quoted", 'single
+           quoted',
+plain
+ text, [ nested ],
+single: pair ,
+]
diff --git a/tests/data/spec-10-03.canonical b/tests/data/spec-10-03.canonical
new file mode 100644
index 0000000..1443395
--- /dev/null
+++ b/tests/data/spec-10-03.canonical
@@ -0,0 +1,12 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "block"
+  : !!seq [
+    !!str "one",
+    !!map {
+      ? !!str "two"
+      : !!str "three"
+    }
+  ]
+}
diff --git a/tests/data/spec-10-03.data b/tests/data/spec-10-03.data
new file mode 100644
index 0000000..9e15f83
--- /dev/null
+++ b/tests/data/spec-10-03.data
@@ -0,0 +1,4 @@
+block: # Block
+       # sequence
+- one
+- two : three
diff --git a/tests/data/spec-10-04.canonical b/tests/data/spec-10-04.canonical
new file mode 100644
index 0000000..ae486a3
--- /dev/null
+++ b/tests/data/spec-10-04.canonical
@@ -0,0 +1,11 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "block"
+  : !!seq [
+    !!str "one",
+    !!seq [
+      !!str "two"
+    ]
+  ]
+}
diff --git a/tests/data/spec-10-04.data b/tests/data/spec-10-04.data
new file mode 100644
index 0000000..2905b0d
--- /dev/null
+++ b/tests/data/spec-10-04.data
@@ -0,0 +1,4 @@
+block:
+- one
+-
+ - two
diff --git a/tests/data/spec-10-05.canonical b/tests/data/spec-10-05.canonical
new file mode 100644
index 0000000..07cc0c9
--- /dev/null
+++ b/tests/data/spec-10-05.canonical
@@ -0,0 +1,14 @@
+%YAML 1.1
+---
+!!seq [
+  !!null "",
+  !!str "block node\n",
+  !!seq [
+    !!str "one",
+    !!str "two",
+  ],
+  !!map {
+    ? !!str "one"
+    : !!str "two",
+  }
+]
diff --git a/tests/data/spec-10-05.data b/tests/data/spec-10-05.data
new file mode 100644
index 0000000..f19a99e
--- /dev/null
+++ b/tests/data/spec-10-05.data
@@ -0,0 +1,7 @@
+- # Empty
+- |
+ block node
+- - one # in-line
+  - two # sequence
+- one: two # in-line
+           # mapping
diff --git a/tests/data/spec-10-06.canonical b/tests/data/spec-10-06.canonical
new file mode 100644
index 0000000..d9986c2
--- /dev/null
+++ b/tests/data/spec-10-06.canonical
@@ -0,0 +1,16 @@
+%YAML 1.1
+---
+!!seq [
+  !!map {
+    ? !!str "inner"
+    : !!str "entry",
+    ? !!str "also"
+    : !!str "inner"
+  },
+  !!map {
+    ? !!str "inner"
+    : !!str "entry",
+    ? !!str "last"
+    : !!str "entry"
+  }
+]
diff --git a/tests/data/spec-10-06.data b/tests/data/spec-10-06.data
new file mode 100644
index 0000000..860ba25
--- /dev/null
+++ b/tests/data/spec-10-06.data
@@ -0,0 +1,2 @@
+- { inner : entry , also: inner , }
+- {inner: entry,last : entry}
diff --git a/tests/data/spec-10-07.canonical b/tests/data/spec-10-07.canonical
new file mode 100644
index 0000000..ec74230
--- /dev/null
+++ b/tests/data/spec-10-07.canonical
@@ -0,0 +1,16 @@
+%YAML 1.1
+---
+!!map {
+  ? !!null ""
+  : !!str "value",
+  ? !!str "explicit key"
+  : !!str "value",
+  ? !!str "simple key"
+  : !!str "value",
+  ? !!seq [
+    !!str "collection",
+    !!str "simple",
+    !!str "key"
+  ]
+  : !!str "value"
+}
diff --git a/tests/data/spec-10-07.data b/tests/data/spec-10-07.data
new file mode 100644
index 0000000..ff943fb
--- /dev/null
+++ b/tests/data/spec-10-07.data
@@ -0,0 +1,7 @@
+{
+? : value, # Empty key
+? explicit
+ key: value,
+simple key : value,
+[ collection, simple, key ]: value
+}
diff --git a/tests/data/spec-10-08.data b/tests/data/spec-10-08.data
new file mode 100644
index 0000000..55bd788
--- /dev/null
+++ b/tests/data/spec-10-08.data
@@ -0,0 +1,5 @@
+{
+multi-line
+ simple key : value,
+very long ...................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................(>1KB)................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................... key: value
+}
diff --git a/tests/data/spec-10-08.error b/tests/data/spec-10-08.error
new file mode 100644
index 0000000..3979e1f
--- /dev/null
+++ b/tests/data/spec-10-08.error
@@ -0,0 +1,5 @@
+ERROR:
+- A simple key is restricted
+  to only one line.
+- A simple key must not be
+  longer than 1024 characters.
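A hedged sketch of the same restrictions from the loading side: keep a
simple key short and on a single line, or switch to an explicit '?' key
(cf. spec-10-13).

    import yaml

    print yaml.safe_load('{simple key: value}')
    # Long or multi-line keys still work in the explicit form:
    print yaml.safe_load('? |\n  multi-line\n  key\n: value')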
diff --git a/tests/data/spec-10-09.canonical b/tests/data/spec-10-09.canonical
new file mode 100644
index 0000000..4d9827b
--- /dev/null
+++ b/tests/data/spec-10-09.canonical
@@ -0,0 +1,8 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "key"
+  : !!str "value",
+  ? !!str "empty"
+  : !!null "",
+}
diff --git a/tests/data/spec-10-09.data b/tests/data/spec-10-09.data
new file mode 100644
index 0000000..4d55e21
--- /dev/null
+++ b/tests/data/spec-10-09.data
@@ -0,0 +1,4 @@
+{
+key : value,
+empty: # empty value↓
+}
diff --git a/tests/data/spec-10-10.canonical b/tests/data/spec-10-10.canonical
new file mode 100644
index 0000000..016fb64
--- /dev/null
+++ b/tests/data/spec-10-10.canonical
@@ -0,0 +1,16 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "explicit key1"
+  : !!str "explicit value",
+  ? !!str "explicit key2"
+  : !!null "",
+  ? !!str "explicit key3"
+  : !!null "",
+  ? !!str "simple key1"
+  : !!str "explicit value",
+  ? !!str "simple key2"
+  : !!null "",
+  ? !!str "simple key3"
+  : !!null "",
+}
diff --git a/tests/data/spec-10-10.data b/tests/data/spec-10-10.data
new file mode 100644
index 0000000..0888b05
--- /dev/null
+++ b/tests/data/spec-10-10.data
@@ -0,0 +1,8 @@
+{
+? explicit key1 : explicit value,
+? explicit key2 : , # Explicit empty
+? explicit key3,     # Empty value
+simple key1 : explicit value,
+simple key2 : ,     # Explicit empty
+simple key3,         # Empty value
+}
diff --git a/tests/data/spec-10-11.canonical b/tests/data/spec-10-11.canonical
new file mode 100644
index 0000000..7309544
--- /dev/null
+++ b/tests/data/spec-10-11.canonical
@@ -0,0 +1,24 @@
+%YAML 1.1
+---
+!!seq [
+  !!map {
+    ? !!str "explicit key1"
+    : !!str "explicit value",
+  },
+  !!map {
+    ? !!str "explicit key2"
+    : !!null "",
+  },
+  !!map {
+    ? !!str "explicit key3"
+    : !!null "",
+  },
+  !!map {
+    ? !!str "simple key1"
+    : !!str "explicit value",
+  },
+  !!map {
+    ? !!str "simple key2"
+    : !!null "",
+  },
+]
diff --git a/tests/data/spec-10-11.data b/tests/data/spec-10-11.data
new file mode 100644
index 0000000..9f05568
--- /dev/null
+++ b/tests/data/spec-10-11.data
@@ -0,0 +1,7 @@
+[
+? explicit key1 : explicit value,
+? explicit key2 : , # Explicit empty
+? explicit key3,     # Implicit empty
+simple key1 : explicit value,
+simple key2 : ,     # Explicit empty
+]
diff --git a/tests/data/spec-10-12.canonical b/tests/data/spec-10-12.canonical
new file mode 100644
index 0000000..a95dd40
--- /dev/null
+++ b/tests/data/spec-10-12.canonical
@@ -0,0 +1,9 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "block"
+  : !!map {
+    ? !!str "key"
+    : !!str "value"
+  }
+}
diff --git a/tests/data/spec-10-12.data b/tests/data/spec-10-12.data
new file mode 100644
index 0000000..5521443
--- /dev/null
+++ b/tests/data/spec-10-12.data
@@ -0,0 +1,3 @@
+block: # Block
+    # mapping
+ key: value
diff --git a/tests/data/spec-10-13.canonical b/tests/data/spec-10-13.canonical
new file mode 100644
index 0000000..e183c50
--- /dev/null
+++ b/tests/data/spec-10-13.canonical
@@ -0,0 +1,11 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "explicit key"
+  : !!null "",
+  ? !!str "block key\n"
+  : !!seq [
+    !!str "one",
+    !!str "two",
+  ]
+}
diff --git a/tests/data/spec-10-13.data b/tests/data/spec-10-13.data
new file mode 100644
index 0000000..b5b97db
--- /dev/null
+++ b/tests/data/spec-10-13.data
@@ -0,0 +1,5 @@
+? explicit key # implicit value
+? |
+  block key
+: - one # explicit in-line
+  - two # block value
diff --git a/tests/data/spec-10-14.canonical b/tests/data/spec-10-14.canonical
new file mode 100644
index 0000000..e87c880
--- /dev/null
+++ b/tests/data/spec-10-14.canonical
@@ -0,0 +1,11 @@
+%YAML 1.1
+---
+!!map {
+  ? !!str "plain key"
+  : !!null "",
+  ? !!str "quoted key"
+  : !!seq [
+    !!str "one",
+    !!str "two",
+  ]
+}
diff --git a/tests/data/spec-10-14.data b/tests/data/spec-10-14.data
new file mode 100644
index 0000000..7f5995c
--- /dev/null
+++ b/tests/data/spec-10-14.data
@@ -0,0 +1,4 @@
+plain key: # empty value
+"quoted key":
+- one # explicit next-line
+- two # block value
diff --git a/tests/data/spec-10-15.canonical b/tests/data/spec-10-15.canonical
new file mode 100644
index 0000000..85fbbd0
--- /dev/null
+++ b/tests/data/spec-10-15.canonical
@@ -0,0 +1,18 @@
+%YAML 1.1
+---
+!!seq [
+  !!map {
+    ? !!str "sun"
+    : !!str "yellow"
+  },
+  !!map {
+    ? !!map {
+      ? !!str "earth"
+      : !!str "blue"
+    }
+    : !!map {
+      ? !!str "moon"
+      : !!str "white"
+    }
+  }
+]
diff --git a/tests/data/spec-10-15.data b/tests/data/spec-10-15.data
new file mode 100644
index 0000000..d675cfd
--- /dev/null
+++ b/tests/data/spec-10-15.data
@@ -0,0 +1,3 @@
+- sun: yellow
+- ? earth: blue
+  : moon: white
diff --git a/tests/data/str.data b/tests/data/str.data
new file mode 100644
index 0000000..7cbdb7c
--- /dev/null
+++ b/tests/data/str.data
@@ -0,0 +1 @@
+- abcd
diff --git a/tests/data/str.detect b/tests/data/str.detect
new file mode 100644
index 0000000..7d5026f
--- /dev/null
+++ b/tests/data/str.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:str
diff --git a/tests/data/tags.events b/tests/data/tags.events
new file mode 100644
index 0000000..bb93dce
--- /dev/null
+++ b/tests/data/tags.events
@@ -0,0 +1,12 @@
+- !StreamStart
+- !DocumentStart
+- !SequenceStart
+- !Scalar { value: 'data' }
+#- !Scalar { tag: '!', value: 'data' }
+- !Scalar { tag: 'tag:yaml.org,2002:str', value: 'data' }
+- !Scalar { tag: '!myfunnytag', value: 'data' }
+- !Scalar { tag: '!my!ugly!tag', value: 'data' }
+- !Scalar { tag: 'tag:my.domain.org,2002:data!? #', value: 'data' }
+- !SequenceEnd
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/test_mark.marks b/tests/data/test_mark.marks
new file mode 100644
index 0000000..7b08ee4
--- /dev/null
+++ b/tests/data/test_mark.marks
@@ -0,0 +1,38 @@
+---
+*The first line.
+The last line.
+---
+The first*line.
+The last line.
+---
+The first line.*
+The last line.
+---
+The first line.
+*The last line.
+---
+The first line.
+The last*line.
+---
+The first line.
+The last line.*
+---
+The first line.
+*The selected line.
+The last line.
+---
+The first line.
+The selected*line.
+The last line.
+---
+The first line.
+The selected line.*
+The last line.
+---
+*The only line.
+---
+The only*line.
+---
+The only line.*
+---
+Loooooooooooooooooooooooooooooooooooooooooooooong*Liiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiine
diff --git a/tests/data/timestamp-bugs.code b/tests/data/timestamp-bugs.code
new file mode 100644
index 0000000..b1d6e9c
--- /dev/null
+++ b/tests/data/timestamp-bugs.code
@@ -0,0 +1,8 @@
+[
+    datetime.datetime(2001, 12, 15, 3, 29, 43, 100000),
+    datetime.datetime(2001, 12, 14, 16, 29, 43, 100000),
+    datetime.datetime(2001, 12, 14, 21, 59, 43, 1010),
+    datetime.datetime(2001, 12, 14, 21, 59, 43, 0, FixedOffset(60, "+1")),
+    datetime.datetime(2001, 12, 14, 21, 59, 43, 0, FixedOffset(-90, "-1:30")),
+    datetime.datetime(2005, 7, 8, 17, 35, 4, 517600),
+]
diff --git a/tests/data/timestamp-bugs.data b/tests/data/timestamp-bugs.data
new file mode 100644
index 0000000..721d290
--- /dev/null
+++ b/tests/data/timestamp-bugs.data
@@ -0,0 +1,6 @@
+- 2001-12-14 21:59:43.10 -5:30
+- 2001-12-14 21:59:43.10 +5:30
+- 2001-12-14 21:59:43.00101
+- 2001-12-14 21:59:43+1
+- 2001-12-14 21:59:43-1:30
+- 2005-07-08 17:35:04.517600
diff --git a/tests/data/timestamp.data b/tests/data/timestamp.data
new file mode 100644
index 0000000..7d214ce
--- /dev/null
+++ b/tests/data/timestamp.data
@@ -0,0 +1,5 @@
+- 2001-12-15T02:59:43.1Z
+- 2001-12-14t21:59:43.10-05:00
+- 2001-12-14 21:59:43.10 -5
+- 2001-12-15 2:59:43.10
+- 2002-12-14
diff --git a/tests/data/timestamp.detect b/tests/data/timestamp.detect
new file mode 100644
index 0000000..2013936
--- /dev/null
+++ b/tests/data/timestamp.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:timestamp
diff --git a/tests/data/unacceptable-key.loader-error b/tests/data/unacceptable-key.loader-error
new file mode 100644
index 0000000..d748e37
--- /dev/null
+++ b/tests/data/unacceptable-key.loader-error
@@ -0,0 +1,4 @@
+---
+? - foo
+  - bar
+: baz
diff --git a/tests/data/unclosed-bracket.loader-error b/tests/data/unclosed-bracket.loader-error
new file mode 100644
index 0000000..8c82077
--- /dev/null
+++ b/tests/data/unclosed-bracket.loader-error
@@ -0,0 +1,6 @@
+test:
+    - [ foo: bar
+# comment the rest of the stream to let the scanner detect the problem.
+#    - baz
+#"we could have detected the unclosed bracket on the above line, but this would forbid such syntax as": {
+#}
diff --git a/tests/data/unclosed-quoted-scalar.loader-error b/tests/data/unclosed-quoted-scalar.loader-error
new file mode 100644
index 0000000..8537429
--- /dev/null
+++ b/tests/data/unclosed-quoted-scalar.loader-error
@@ -0,0 +1,2 @@
+'foo
+ bar
diff --git a/tests/data/undefined-anchor.loader-error b/tests/data/undefined-anchor.loader-error
new file mode 100644
index 0000000..9469103
--- /dev/null
+++ b/tests/data/undefined-anchor.loader-error
@@ -0,0 +1,3 @@
+- foo
+- &bar baz
+- *bat
diff --git a/tests/data/undefined-constructor.loader-error b/tests/data/undefined-constructor.loader-error
new file mode 100644
index 0000000..9a37ccc
--- /dev/null
+++ b/tests/data/undefined-constructor.loader-error
@@ -0,0 +1 @@
+--- !foo bar
diff --git a/tests/data/undefined-tag-handle.loader-error b/tests/data/undefined-tag-handle.loader-error
new file mode 100644
index 0000000..82ba335
--- /dev/null
+++ b/tests/data/undefined-tag-handle.loader-error
@@ -0,0 +1 @@
+--- !foo!bar    baz
diff --git a/tests/data/unknown.dumper-error b/tests/data/unknown.dumper-error
new file mode 100644
index 0000000..83204d2
--- /dev/null
+++ b/tests/data/unknown.dumper-error
@@ -0,0 +1 @@
+yaml.safe_dump(object)
diff --git a/tests/data/unsupported-version.emitter-error b/tests/data/unsupported-version.emitter-error
new file mode 100644
index 0000000..f9c6197
--- /dev/null
+++ b/tests/data/unsupported-version.emitter-error
@@ -0,0 +1,5 @@
+- !StreamStart
+- !DocumentStart { version: [5,6] }
+- !Scalar { value: foo }
+- !DocumentEnd
+- !StreamEnd
diff --git a/tests/data/utf16be.code b/tests/data/utf16be.code
new file mode 100644
index 0000000..c45b371
--- /dev/null
+++ b/tests/data/utf16be.code
@@ -0,0 +1 @@
+"UTF-16-BE"
diff --git a/tests/data/utf16be.data b/tests/data/utf16be.data
new file mode 100644
index 0000000..50dcfae
--- /dev/null
+++ b/tests/data/utf16be.data
Binary files differ
diff --git a/tests/data/utf16le.code b/tests/data/utf16le.code
new file mode 100644
index 0000000..400530a
--- /dev/null
+++ b/tests/data/utf16le.code
@@ -0,0 +1 @@
+"UTF-16-LE"
diff --git a/tests/data/utf16le.data b/tests/data/utf16le.data
new file mode 100644
index 0000000..76f5e73
--- /dev/null
+++ b/tests/data/utf16le.data
Binary files differ
diff --git a/tests/data/utf8-implicit.code b/tests/data/utf8-implicit.code
new file mode 100644
index 0000000..29326db
--- /dev/null
+++ b/tests/data/utf8-implicit.code
@@ -0,0 +1 @@
+"implicit UTF-8"
diff --git a/tests/data/utf8-implicit.data b/tests/data/utf8-implicit.data
new file mode 100644
index 0000000..9d8081e
--- /dev/null
+++ b/tests/data/utf8-implicit.data
@@ -0,0 +1 @@
+--- implicit UTF-8
diff --git a/tests/data/utf8.code b/tests/data/utf8.code
new file mode 100644
index 0000000..dcf11cc
--- /dev/null
+++ b/tests/data/utf8.code
@@ -0,0 +1 @@
+"UTF-8"
diff --git a/tests/data/utf8.data b/tests/data/utf8.data
new file mode 100644
index 0000000..686f48a
--- /dev/null
+++ b/tests/data/utf8.data
@@ -0,0 +1 @@
+--- UTF-8
diff --git a/tests/data/value.data b/tests/data/value.data
new file mode 100644
index 0000000..c5b7680
--- /dev/null
+++ b/tests/data/value.data
@@ -0,0 +1 @@
+- =
diff --git a/tests/data/value.detect b/tests/data/value.detect
new file mode 100644
index 0000000..7c37d02
--- /dev/null
+++ b/tests/data/value.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:value
diff --git a/tests/data/yaml.data b/tests/data/yaml.data
new file mode 100644
index 0000000..a4bb3f8
--- /dev/null
+++ b/tests/data/yaml.data
@@ -0,0 +1,3 @@
+- !!yaml '!'
+- !!yaml '&'
+- !!yaml '*'
diff --git a/tests/data/yaml.detect b/tests/data/yaml.detect
new file mode 100644
index 0000000..e2cf189
--- /dev/null
+++ b/tests/data/yaml.detect
@@ -0,0 +1 @@
+tag:yaml.org,2002:yaml
diff --git a/tests/lib/canonical.py b/tests/lib/canonical.py
new file mode 100644
index 0000000..020e6db
--- /dev/null
+++ b/tests/lib/canonical.py
@@ -0,0 +1,360 @@
+
+import yaml, yaml.composer, yaml.constructor, yaml.resolver
+
+class CanonicalError(yaml.YAMLError):
+    pass
+
+class CanonicalScanner:
+
+    def __init__(self, data):
+        try:
+            self.data = unicode(data, 'utf-8')+u'\0'
+        except UnicodeDecodeError:
+            raise CanonicalError("utf-8 stream is expected")
+        self.index = 0
+        self.tokens = []
+        self.scanned = False
+
+    def check_token(self, *choices):
+        if not self.scanned:
+            self.scan()
+        if self.tokens:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.tokens[0], choice):
+                    return True
+        return False
+
+    def peek_token(self):
+        if not self.scanned:
+            self.scan()
+        if self.tokens:
+            return self.tokens[0]
+
+    def get_token(self, choice=None):
+        if not self.scanned:
+            self.scan()
+        token = self.tokens.pop(0)
+        if choice and not isinstance(token, choice):
+            raise CanonicalError("unexpected token "+repr(token))
+        return token
+
+    def get_token_value(self):
+        token = self.get_token()
+        return token.value
+
+    def scan(self):
+        self.tokens.append(yaml.StreamStartToken(None, None))
+        while True:
+            self.find_token()
+            ch = self.data[self.index]
+            if ch == u'\0':
+                self.tokens.append(yaml.StreamEndToken(None, None))
+                break
+            elif ch == u'%':
+                self.tokens.append(self.scan_directive())
+            elif ch == u'-' and self.data[self.index:self.index+3] == u'---':
+                self.index += 3
+                self.tokens.append(yaml.DocumentStartToken(None, None))
+            elif ch == u'[':
+                self.index += 1
+                self.tokens.append(yaml.FlowSequenceStartToken(None, None))
+            elif ch == u'{':
+                self.index += 1
+                self.tokens.append(yaml.FlowMappingStartToken(None, None))
+            elif ch == u']':
+                self.index += 1
+                self.tokens.append(yaml.FlowSequenceEndToken(None, None))
+            elif ch == u'}':
+                self.index += 1
+                self.tokens.append(yaml.FlowMappingEndToken(None, None))
+            elif ch == u'?':
+                self.index += 1
+                self.tokens.append(yaml.KeyToken(None, None))
+            elif ch == u':':
+                self.index += 1
+                self.tokens.append(yaml.ValueToken(None, None))
+            elif ch == u',':
+                self.index += 1
+                self.tokens.append(yaml.FlowEntryToken(None, None))
+            elif ch == u'*' or ch == u'&':
+                self.tokens.append(self.scan_alias())
+            elif ch == u'!':
+                self.tokens.append(self.scan_tag())
+            elif ch == u'"':
+                self.tokens.append(self.scan_scalar())
+            else:
+                raise CanonicalError("invalid token")
+        self.scanned = True
+
+    DIRECTIVE = u'%YAML 1.1'
+
+    def scan_directive(self):
+        if self.data[self.index:self.index+len(self.DIRECTIVE)] == self.DIRECTIVE and \
+                self.data[self.index+len(self.DIRECTIVE)] in u' \n\0':
+            self.index += len(self.DIRECTIVE)
+            return yaml.DirectiveToken('YAML', (1, 1), None, None)
+        else:
+            raise CanonicalError("invalid directive")
+
+    def scan_alias(self):
+        if self.data[self.index] == u'*':
+            TokenClass = yaml.AliasToken
+        else:
+            TokenClass = yaml.AnchorToken
+        self.index += 1
+        start = self.index
+        while self.data[self.index] not in u', \n\0':
+            self.index += 1
+        value = self.data[start:self.index]
+        return TokenClass(value, None, None)
+
+    def scan_tag(self):
+        self.index += 1
+        start = self.index
+        while self.data[self.index] not in u' \n\0':
+            self.index += 1
+        value = self.data[start:self.index]
+        if not value:
+            value = u'!'
+        elif value[0] == u'!':
+            value = 'tag:yaml.org,2002:'+value[1:]
+        elif value[0] == u'<' and value[-1] == u'>':
+            value = value[1:-1]
+        else:
+            value = u'!'+value
+        return yaml.TagToken(value, None, None)
+
+    QUOTE_CODES = {
+        'x': 2,
+        'u': 4,
+        'U': 8,
+    }
+
+    QUOTE_REPLACES = {
+        u'\\': u'\\',
+        u'\"': u'\"',
+        u' ': u' ',
+        u'a': u'\x07',
+        u'b': u'\x08',
+        u'e': u'\x1B',
+        u'f': u'\x0C',
+        u'n': u'\x0A',
+        u'r': u'\x0D',
+        u't': u'\x09',
+        u'v': u'\x0B',
+        u'N': u'\u0085',
+        u'L': u'\u2028',
+        u'P': u'\u2029',
+        u'_': u'_',
+        u'0': u'\x00',
+
+    }
+
+    def scan_scalar(self):
+        self.index += 1
+        chunks = []
+        start = self.index
+        ignore_spaces = False
+        while self.data[self.index] != u'"':
+            if self.data[self.index] == u'\\':
+                ignore_spaces = False
+                chunks.append(self.data[start:self.index])
+                self.index += 1
+                ch = self.data[self.index]
+                self.index += 1
+                if ch == u'\n':
+                    ignore_spaces = True
+                elif ch in self.QUOTE_CODES:
+                    length = self.QUOTE_CODES[ch]
+                    code = int(self.data[self.index:self.index+length], 16)
+                    chunks.append(unichr(code))
+                    self.index += length
+                else:
+                    if ch not in self.QUOTE_REPLACES:
+                        raise CanonicalError("invalid escape code")
+                    chunks.append(self.QUOTE_REPLACES[ch])
+                start = self.index
+            elif self.data[self.index] == u'\n':
+                chunks.append(self.data[start:self.index])
+                chunks.append(u' ')
+                self.index += 1
+                start = self.index
+                ignore_spaces = True
+            elif ignore_spaces and self.data[self.index] == u' ':
+                self.index += 1
+                start = self.index
+            else:
+                ignore_spaces = False
+                self.index += 1
+        chunks.append(self.data[start:self.index])
+        self.index += 1
+        return yaml.ScalarToken(u''.join(chunks), False, None, None)
+
+    def find_token(self):
+        found = False
+        while not found:
+            while self.data[self.index] in u' \t':
+                self.index += 1
+            if self.data[self.index] == u'#':
+                while self.data[self.index] != u'\n':
+                    self.index += 1
+            if self.data[self.index] == u'\n':
+                self.index += 1
+            else:
+                found = True
+
+class CanonicalParser:
+
+    def __init__(self):
+        self.events = []
+        self.parsed = False
+
+    def dispose(self):
+        pass
+
+    # stream: STREAM-START document* STREAM-END
+    def parse_stream(self):
+        self.get_token(yaml.StreamStartToken)
+        self.events.append(yaml.StreamStartEvent(None, None))
+        while not self.check_token(yaml.StreamEndToken):
+            if self.check_token(yaml.DirectiveToken, yaml.DocumentStartToken):
+                self.parse_document()
+            else:
+                raise CanonicalError("document is expected, got "+repr(self.tokens[0]))
+        self.get_token(yaml.StreamEndToken)
+        self.events.append(yaml.StreamEndEvent(None, None))
+
+    # document: DIRECTIVE? DOCUMENT-START node
+    def parse_document(self):
+        node = None
+        if self.check_token(yaml.DirectiveToken):
+            self.get_token(yaml.DirectiveToken)
+        self.get_token(yaml.DocumentStartToken)
+        self.events.append(yaml.DocumentStartEvent(None, None))
+        self.parse_node()
+        self.events.append(yaml.DocumentEndEvent(None, None))
+
+    # node: ALIAS | ANCHOR? TAG? (SCALAR|sequence|mapping)
+    def parse_node(self):
+        if self.check_token(yaml.AliasToken):
+            self.events.append(yaml.AliasEvent(self.get_token_value(), None, None))
+        else:
+            anchor = None
+            if self.check_token(yaml.AnchorToken):
+                anchor = self.get_token_value()
+            tag = None
+            if self.check_token(yaml.TagToken):
+                tag = self.get_token_value()
+            if self.check_token(yaml.ScalarToken):
+                self.events.append(yaml.ScalarEvent(anchor, tag, (False, False), self.get_token_value(), None, None))
+            elif self.check_token(yaml.FlowSequenceStartToken):
+                self.events.append(yaml.SequenceStartEvent(anchor, tag, None, None))
+                self.parse_sequence()
+            elif self.check_token(yaml.FlowMappingStartToken):
+                self.events.append(yaml.MappingStartEvent(anchor, tag, None, None))
+                self.parse_mapping()
+            else:
+                raise CanonicalError("SCALAR, '[', or '{' is expected, got "+repr(self.tokens[0]))
+
+    # sequence: SEQUENCE-START (node (ENTRY node)*)? ENTRY? SEQUENCE-END
+    def parse_sequence(self):
+        self.get_token(yaml.FlowSequenceStartToken)
+        if not self.check_token(yaml.FlowSequenceEndToken):
+            self.parse_node()
+            while not self.check_token(yaml.FlowSequenceEndToken):
+                self.get_token(yaml.FlowEntryToken)
+                if not self.check_token(yaml.FlowSequenceEndToken):
+                    self.parse_node()
+        self.get_token(yaml.FlowSequenceEndToken)
+        self.events.append(yaml.SequenceEndEvent(None, None))
+
+    # mapping: MAPPING-START (map_entry (ENTRY map_entry)*)? ENTRY? MAPPING-END
+    def parse_mapping(self):
+        self.get_token(yaml.FlowMappingStartToken)
+        if not self.check_token(yaml.FlowMappingEndToken):
+            self.parse_map_entry()
+            while not self.check_token(yaml.FlowMappingEndToken):
+                self.get_token(yaml.FlowEntryToken)
+                if not self.check_token(yaml.FlowMappingEndToken):
+                    self.parse_map_entry()
+        self.get_token(yaml.FlowMappingEndToken)
+        self.events.append(yaml.MappingEndEvent(None, None))
+
+    # map_entry: KEY node VALUE node
+    def parse_map_entry(self):
+        self.get_token(yaml.KeyToken)
+        self.parse_node()
+        self.get_token(yaml.ValueToken)
+        self.parse_node()
+
+    def parse(self):
+        self.parse_stream()
+        self.parsed = True
+
+    def get_event(self):
+        if not self.parsed:
+            self.parse()
+        return self.events.pop(0)
+
+    def check_event(self, *choices):
+        if not self.parsed:
+            self.parse()
+        if self.events:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.events[0], choice):
+                    return True
+        return False
+
+    def peek_event(self):
+        if not self.parsed:
+            self.parse()
+        return self.events[0]
+
+class CanonicalLoader(CanonicalScanner, CanonicalParser,
+        yaml.composer.Composer, yaml.constructor.Constructor, yaml.resolver.Resolver):
+
+    def __init__(self, stream):
+        if hasattr(stream, 'read'):
+            stream = stream.read()
+        CanonicalScanner.__init__(self, stream)
+        CanonicalParser.__init__(self)
+        yaml.composer.Composer.__init__(self)
+        yaml.constructor.Constructor.__init__(self)
+        yaml.resolver.Resolver.__init__(self)
+
+yaml.CanonicalLoader = CanonicalLoader
+
+def canonical_scan(stream):
+    return yaml.scan(stream, Loader=CanonicalLoader)
+
+yaml.canonical_scan = canonical_scan
+
+def canonical_parse(stream):
+    return yaml.parse(stream, Loader=CanonicalLoader)
+
+yaml.canonical_parse = canonical_parse
+
+def canonical_compose(stream):
+    return yaml.compose(stream, Loader=CanonicalLoader)
+
+yaml.canonical_compose = canonical_compose
+
+def canonical_compose_all(stream):
+    return yaml.compose_all(stream, Loader=CanonicalLoader)
+
+yaml.canonical_compose_all = canonical_compose_all
+
+def canonical_load(stream):
+    return yaml.load(stream, Loader=CanonicalLoader)
+
+yaml.canonical_load = canonical_load
+
+def canonical_load_all(stream):
+    return yaml.load_all(stream, Loader=CanonicalLoader)
+
+yaml.canonical_load_all = canonical_load_all
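+
+# A minimal usage sketch (the fixture name is hypothetical): each helper
+# above mirrors the corresponding yaml.* entry point, but drives it with
+# CanonicalLoader instead of the default Loader, e.g.
+#
+#     for event in yaml.canonical_parse(open('example.canonical', 'rb')):
+#         print event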
+
diff --git a/tests/lib/test_all.py b/tests/lib/test_all.py
new file mode 100644
index 0000000..fec4ae4
--- /dev/null
+++ b/tests/lib/test_all.py
@@ -0,0 +1,15 @@
+
+import sys, yaml, test_appliance
+
+def main(args=None):
+    collections = []
+    import test_yaml
+    collections.append(test_yaml)
+    if yaml.__with_libyaml__:
+        import test_yaml_ext
+        collections.append(test_yaml_ext)
+    test_appliance.run(collections, args)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/tests/lib/test_appliance.py b/tests/lib/test_appliance.py
new file mode 100644
index 0000000..d50d5a2
--- /dev/null
+++ b/tests/lib/test_appliance.py
@@ -0,0 +1,151 @@
+
+import sys, os, os.path, types, traceback, pprint
+
+DATA = 'tests/data'
+
+def find_test_functions(collections):
+    if not isinstance(collections, list):
+        collections = [collections]
+    functions = []
+    for collection in collections:
+        if not isinstance(collection, dict):
+            collection = vars(collection)
+        keys = collection.keys()
+        keys.sort()
+        for key in keys:
+            value = collection[key]
+            if isinstance(value, types.FunctionType) and hasattr(value, 'unittest'):
+                functions.append(value)
+    return functions
+
+def find_test_filenames(directory):
+    filenames = {}
+    for filename in os.listdir(directory):
+        if os.path.isfile(os.path.join(directory, filename)):
+            base, ext = os.path.splitext(filename)
+            if base.endswith('-py3'):
+                continue
+            filenames.setdefault(base, []).append(ext)
+    filenames = filenames.items()
+    filenames.sort()
+    return filenames
+
+def parse_arguments(args):
+    if args is None:
+        args = sys.argv[1:]
+    verbose = False
+    if '-v' in args:
+        verbose = True
+        args.remove('-v')
+    if '--verbose' in args:
+        verbose = True
+    if 'YAML_TEST_VERBOSE' in os.environ:
+        verbose = True
+    include_functions = []
+    if args:
+        include_functions.append(args.pop(0))
+    if 'YAML_TEST_FUNCTIONS' in os.environ:
+        include_functions.extend(os.environ['YAML_TEST_FUNCTIONS'].split())
+    include_filenames = []
+    include_filenames.extend(args)
+    if 'YAML_TEST_FILENAMES' in os.environ:
+        include_filenames.extend(os.environ['YAML_TEST_FILENAMES'].split())
+    return include_functions, include_filenames, verbose
+
+def execute(function, filenames, verbose):
+    if hasattr(function, 'unittest_name'):
+        name = function.unittest_name
+    else:
+        name = function.func_name
+    if verbose:
+        sys.stdout.write('='*75+'\n')
+        sys.stdout.write('%s(%s)...\n' % (name, ', '.join(filenames)))
+    try:
+        function(verbose=verbose, *filenames)
+    except Exception, exc:
+        info = sys.exc_info()
+        if isinstance(exc, AssertionError):
+            kind = 'FAILURE'
+        else:
+            kind = 'ERROR'
+        if verbose:
+            traceback.print_exc(limit=1, file=sys.stdout)
+        else:
+            sys.stdout.write(kind[0])
+            sys.stdout.flush()
+    else:
+        kind = 'SUCCESS'
+        info = None
+        if not verbose:
+            sys.stdout.write('.')
+    sys.stdout.flush()
+    return (name, filenames, kind, info)
+
+def display(results, verbose):
+    if results and not verbose:
+        sys.stdout.write('\n')
+    total = len(results)
+    failures = 0
+    errors = 0
+    for name, filenames, kind, info in results:
+        if kind == 'SUCCESS':
+            continue
+        if kind == 'FAILURE':
+            failures += 1
+        if kind == 'ERROR':
+            errors += 1
+        sys.stdout.write('='*75+'\n')
+        sys.stdout.write('%s(%s): %s\n' % (name, ', '.join(filenames), kind))
+        if kind == 'ERROR':
+            traceback.print_exception(file=sys.stdout, *info)
+        else:
+            sys.stdout.write('Traceback (most recent call last):\n')
+            traceback.print_tb(info[2], file=sys.stdout)
+            sys.stdout.write('%s: see below\n' % info[0].__name__)
+            sys.stdout.write('~'*75+'\n')
+            for arg in info[1].args:
+                pprint.pprint(arg, stream=sys.stdout)
+        for filename in filenames:
+            sys.stdout.write('-'*75+'\n')
+            sys.stdout.write('%s:\n' % filename)
+            data = open(filename, 'rb').read()
+            sys.stdout.write(data)
+            if data and data[-1] != '\n':
+                sys.stdout.write('\n')
+    sys.stdout.write('='*75+'\n')
+    sys.stdout.write('TESTS: %s\n' % total)
+    if failures:
+        sys.stdout.write('FAILURES: %s\n' % failures)
+    if errors:
+        sys.stdout.write('ERRORS: %s\n' % errors)
+
+def run(collections, args=None):
+    test_functions = find_test_functions(collections)
+    test_filenames = find_test_filenames(DATA)
+    include_functions, include_filenames, verbose = parse_arguments(args)
+    results = []
+    for function in test_functions:
+        if include_functions and function.func_name not in include_functions:
+            continue
+        if function.unittest:
+            for base, exts in test_filenames:
+                if include_filenames and base not in include_filenames:
+                    continue
+                filenames = []
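+                # A missing required extension breaks out of this loop; the
+                # for/else suites below run only when the loop completes,
+                # i.e. every extension in function.unittest is present and
+                # none of the extensions in function.skip is.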
+                for ext in function.unittest:
+                    if ext not in exts:
+                        break
+                    filenames.append(os.path.join(DATA, base+ext))
+                else:
+                    skip_exts = getattr(function, 'skip', [])
+                    for skip_ext in skip_exts:
+                        if skip_ext in exts:
+                            break
+                    else:
+                        result = execute(function, filenames, verbose)
+                        results.append(result)
+        else:
+            result = execute(function, [], verbose)
+            results.append(result)
+    display(results, verbose=verbose)
+
diff --git a/tests/lib/test_build.py b/tests/lib/test_build.py
new file mode 100644
index 0000000..901e8ed
--- /dev/null
+++ b/tests/lib/test_build.py
@@ -0,0 +1,10 @@
+
+if __name__ == '__main__':
+    import sys, os, distutils.util
+    build_lib = 'build/lib'
+    build_lib_ext = os.path.join('build', 'lib.%s-%s' % (distutils.util.get_platform(), sys.version[0:3]))
+    sys.path.insert(0, build_lib)
+    sys.path.insert(0, build_lib_ext)
+    import test_yaml, test_appliance
+    test_appliance.run(test_yaml)
+
diff --git a/tests/lib/test_build_ext.py b/tests/lib/test_build_ext.py
new file mode 100644
index 0000000..ff195d5
--- /dev/null
+++ b/tests/lib/test_build_ext.py
@@ -0,0 +1,11 @@
+
+
+if __name__ == '__main__':
+    import sys, os, distutils.util
+    build_lib = 'build/lib'
+    build_lib_ext = os.path.join('build', 'lib.%s-%s' % (distutils.util.get_platform(), sys.version[0:3]))
+    sys.path.insert(0, build_lib)
+    sys.path.insert(0, build_lib_ext)
+    import test_yaml_ext, test_appliance
+    test_appliance.run(test_yaml_ext)
+
diff --git a/tests/lib/test_canonical.py b/tests/lib/test_canonical.py
new file mode 100644
index 0000000..a851ef2
--- /dev/null
+++ b/tests/lib/test_canonical.py
@@ -0,0 +1,40 @@
+
+import yaml, canonical
+
+def test_canonical_scanner(canonical_filename, verbose=False):
+    data = open(canonical_filename, 'rb').read()
+    tokens = list(yaml.canonical_scan(data))
+    assert tokens, tokens
+    if verbose:
+        for token in tokens:
+            print token
+
+test_canonical_scanner.unittest = ['.canonical']
+
+def test_canonical_parser(canonical_filename, verbose=False):
+    data = open(canonical_filename, 'rb').read()
+    events = list(yaml.canonical_parse(data))
+    assert events, events
+    if verbose:
+        for event in events:
+            print event
+
+test_canonical_parser.unittest = ['.canonical']
+
+def test_canonical_error(data_filename, canonical_filename, verbose=False):
+    data = open(data_filename, 'rb').read()
+    try:
+        output = list(yaml.canonical_load_all(data))
+    except yaml.YAMLError, exc:
+        if verbose:
+            print exc
+    else:
+        raise AssertionError("expected an exception")
+
+test_canonical_error.unittest = ['.data', '.canonical']
+test_canonical_error.skip = ['.empty']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_constructor.py b/tests/lib/test_constructor.py
new file mode 100644
index 0000000..beee7b0
--- /dev/null
+++ b/tests/lib/test_constructor.py
@@ -0,0 +1,275 @@
+
+import yaml
+import pprint
+
+import datetime
+try:
+    set
+except NameError:
+    from sets import Set as set
+import yaml.tokens
+
+def execute(code):
+    exec code
+    return value
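+
+# Note: the bare 'exec' statement disables CPython 2's fast-locals
+# optimization for this function, so names bound by the executed code
+# (the fixtures bind 'value') are visible to the 'return' above.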
+
+def _make_objects():
+    global MyLoader, MyDumper, MyTestClass1, MyTestClass2, MyTestClass3, YAMLObject1, YAMLObject2,  \
+            AnObject, AnInstance, AState, ACustomState, InitArgs, InitArgsWithState,    \
+            NewArgs, NewArgsWithState, Reduce, ReduceWithState, MyInt, MyList, MyDict,  \
+            FixedOffset, today, execute
+
+    class MyLoader(yaml.Loader):
+        pass
+    class MyDumper(yaml.Dumper):
+        pass
+
+    class MyTestClass1:
+        def __init__(self, x, y=0, z=0):
+            self.x = x
+            self.y = y
+            self.z = z
+        def __eq__(self, other):
+            if isinstance(other, MyTestClass1):
+                return (self.__class__, self.__dict__) == (other.__class__, other.__dict__)
+            else:
+                return False
+
+    def construct1(constructor, node):
+        mapping = constructor.construct_mapping(node)
+        return MyTestClass1(**mapping)
+    def represent1(representer, native):
+        return representer.represent_mapping("!tag1", native.__dict__)
+
+    yaml.add_constructor("!tag1", construct1, Loader=MyLoader)
+    yaml.add_representer(MyTestClass1, represent1, Dumper=MyDumper)
+
+    class MyTestClass2(MyTestClass1, yaml.YAMLObject):
+        yaml_loader = MyLoader
+        yaml_dumper = MyDumper
+        yaml_tag = "!tag2"
+        def from_yaml(cls, constructor, node):
+            x = constructor.construct_yaml_int(node)
+            return cls(x=x)
+        from_yaml = classmethod(from_yaml)
+        def to_yaml(cls, representer, native):
+            return representer.represent_scalar(cls.yaml_tag, str(native.x))
+        to_yaml = classmethod(to_yaml)
+
+    class MyTestClass3(MyTestClass2):
+        yaml_tag = "!tag3"
+        def from_yaml(cls, constructor, node):
+            mapping = constructor.construct_mapping(node)
+            if '=' in mapping:
+                x = mapping['=']
+                del mapping['=']
+                mapping['x'] = x
+            return cls(**mapping)
+        from_yaml = classmethod(from_yaml)
+        def to_yaml(cls, representer, native):
+            return representer.represent_mapping(cls.yaml_tag, native.__dict__)
+        to_yaml = classmethod(to_yaml)
+
+    class YAMLObject1(yaml.YAMLObject):
+        yaml_loader = MyLoader
+        yaml_dumper = MyDumper
+        yaml_tag = '!foo'
+        def __init__(self, my_parameter=None, my_another_parameter=None):
+            self.my_parameter = my_parameter
+            self.my_another_parameter = my_another_parameter
+        def __eq__(self, other):
+            if isinstance(other, YAMLObject1):
+                return (self.__class__, self.__dict__) == (other.__class__, other.__dict__)
+            else:
+                return False
+
+    class YAMLObject2(yaml.YAMLObject):
+        yaml_loader = MyLoader
+        yaml_dumper = MyDumper
+        yaml_tag = '!bar'
+        def __init__(self, foo=1, bar=2, baz=3):
+            self.foo = foo
+            self.bar = bar
+            self.baz = baz
+        def __getstate__(self):
+            return {1: self.foo, 2: self.bar, 3: self.baz}
+        def __setstate__(self, state):
+            self.foo = state[1]
+            self.bar = state[2]
+            self.baz = state[3]
+        def __eq__(self, other):
+            if isinstance(other, YAMLObject2):
+                return (self.__class__, self.__dict__) == (other.__class__, other.__dict__)
+            else:
+                return False
+
+    class AnObject(object):
+        def __new__(cls, foo=None, bar=None, baz=None):
+            self = object.__new__(cls)
+            self.foo = foo
+            self.bar = bar
+            self.baz = baz
+            return self
+        def __cmp__(self, other):
+            return cmp((type(self), self.foo, self.bar, self.baz),
+                    (type(other), other.foo, other.bar, other.baz))
+        def __eq__(self, other):
+            return type(self) is type(other) and    \
+                    (self.foo, self.bar, self.baz) == (other.foo, other.bar, other.baz)
+
+    class AnInstance:
+        def __init__(self, foo=None, bar=None, baz=None):
+            self.foo = foo
+            self.bar = bar
+            self.baz = baz
+        def __cmp__(self, other):
+            return cmp((type(self), self.foo, self.bar, self.baz),
+                    (type(other), other.foo, other.bar, other.baz))
+        def __eq__(self, other):
+            return type(self) is type(other) and    \
+                    (self.foo, self.bar, self.baz) == (other.foo, other.bar, other.baz)
+
+    class AState(AnInstance):
+        def __getstate__(self):
+            return {
+                '_foo': self.foo,
+                '_bar': self.bar,
+                '_baz': self.baz,
+            }
+        def __setstate__(self, state):
+            self.foo = state['_foo']
+            self.bar = state['_bar']
+            self.baz = state['_baz']
+
+    class ACustomState(AnInstance):
+        def __getstate__(self):
+            return (self.foo, self.bar, self.baz)
+        def __setstate__(self, state):
+            self.foo, self.bar, self.baz = state
+
+    class InitArgs(AnInstance):
+        def __getinitargs__(self):
+            return (self.foo, self.bar, self.baz)
+        def __getstate__(self):
+            return {}
+
+    class InitArgsWithState(AnInstance):
+        def __getinitargs__(self):
+            return (self.foo, self.bar)
+        def __getstate__(self):
+            return self.baz
+        def __setstate__(self, state):
+            self.baz = state
+
+    class NewArgs(AnObject):
+        def __getnewargs__(self):
+            return (self.foo, self.bar, self.baz)
+        def __getstate__(self):
+            return {}
+
+    class NewArgsWithState(AnObject):
+        def __getnewargs__(self):
+            return (self.foo, self.bar)
+        def __getstate__(self):
+            return self.baz
+        def __setstate__(self, state):
+            self.baz = state
+
+    class Reduce(AnObject):
+        def __reduce__(self):
+            return self.__class__, (self.foo, self.bar, self.baz)
+
+    class ReduceWithState(AnObject):
+        def __reduce__(self):
+            return self.__class__, (self.foo, self.bar), self.baz
+        def __setstate__(self, state):
+            self.baz = state
+
+    class MyInt(int):
+        def __eq__(self, other):
+            return type(self) is type(other) and int(self) == int(other)
+
+    class MyList(list):
+        def __init__(self, n=1):
+            self.extend([None]*n)
+        def __eq__(self, other):
+            return type(self) is type(other) and list(self) == list(other)
+
+    class MyDict(dict):
+        def __init__(self, n=1):
+            for k in range(n):
+                self[k] = None
+        def __eq__(self, other):
+            return type(self) is type(other) and dict(self) == dict(other)
+
+    class FixedOffset(datetime.tzinfo):
+        def __init__(self, offset, name):
+            self.__offset = datetime.timedelta(minutes=offset)
+            self.__name = name
+        def utcoffset(self, dt):
+            return self.__offset
+        def tzname(self, dt):
+            return self.__name
+        def dst(self, dt):
+            return datetime.timedelta(0)
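+
+    # FixedOffset is the minimal concrete tzinfo needed to round-trip
+    # !!timestamp values that carry a zone; for example, FixedOffset(90,
+    # '+1:30') models a +01:30 offset.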
+
+    today = datetime.date.today()
+
+def _load_code(expression):
+    return eval(expression)
+
+def _serialize_value(data):
+    if isinstance(data, list):
+        return '[%s]' % ', '.join(map(_serialize_value, data))
+    elif isinstance(data, dict):
+        items = []
+        for key, value in data.items():
+            key = _serialize_value(key)
+            value = _serialize_value(value)
+            items.append("%s: %s" % (key, value))
+        items.sort()
+        return '{%s}' % ', '.join(items)
+    elif isinstance(data, datetime.datetime):
+        return repr(data.utctimetuple())
+    elif isinstance(data, unicode):
+        return data.encode('utf-8')
+    elif isinstance(data, float) and data != data:
+        return '?'
+    else:
+        return str(data)
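+
+# _serialize_value() renders values in a stable text form so that objects
+# with unreliable __eq__ can still be compared: dict items are sorted
+# ('{1: a, 2: b}'), datetimes compare by their UTC time tuple, and NaN
+# (the only float for which data != data) collapses to '?'.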
+
+def test_constructor_types(data_filename, code_filename, verbose=False):
+    _make_objects()
+    native1 = None
+    native2 = None
+    try:
+        native1 = list(yaml.load_all(open(data_filename, 'rb'), Loader=MyLoader))
+        if len(native1) == 1:
+            native1 = native1[0]
+        native2 = _load_code(open(code_filename, 'rb').read())
+        try:
+            if native1 == native2:
+                return
+        except TypeError:
+            pass
+        if verbose:
+            print "SERIALIZED NATIVE1:"
+            print _serialize_value(native1)
+            print "SERIALIZED NATIVE2:"
+            print _serialize_value(native2)
+        assert _serialize_value(native1) == _serialize_value(native2), (native1, native2)
+    finally:
+        if verbose:
+            print "NATIVE1:"
+            pprint.pprint(native1)
+            print "NATIVE2:"
+            pprint.pprint(native2)
+
+test_constructor_types.unittest = ['.data', '.code']
+
+if __name__ == '__main__':
+    import sys, test_constructor
+    sys.modules['test_constructor'] = sys.modules['__main__']
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_emitter.py b/tests/lib/test_emitter.py
new file mode 100644
index 0000000..61fd941
--- /dev/null
+++ b/tests/lib/test_emitter.py
@@ -0,0 +1,100 @@
+
+import yaml
+
+def _compare_events(events1, events2):
+    assert len(events1) == len(events2), (events1, events2)
+    for event1, event2 in zip(events1, events2):
+        assert event1.__class__ == event2.__class__, (event1, event2)
+        if isinstance(event1, yaml.NodeEvent):
+            assert event1.anchor == event2.anchor, (event1, event2)
+        if isinstance(event1, yaml.CollectionStartEvent):
+            assert event1.tag == event2.tag, (event1, event2)
+        if isinstance(event1, yaml.ScalarEvent):
+            if True not in event1.implicit+event2.implicit:
+                assert event1.tag == event2.tag, (event1, event2)
+            assert event1.value == event2.value, (event1, event2)
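+
+# Scalar tags are compared only when neither event carries an implicit
+# resolution flag, since an emit/parse round trip need not preserve a tag
+# the resolver can infer from the value.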
+
+def test_emitter_on_data(data_filename, canonical_filename, verbose=False):
+    events = list(yaml.parse(open(data_filename, 'rb')))
+    output = yaml.emit(events)
+    if verbose:
+        print "OUTPUT:"
+        print output
+    new_events = list(yaml.parse(output))
+    _compare_events(events, new_events)
+
+test_emitter_on_data.unittest = ['.data', '.canonical']
+
+def test_emitter_on_canonical(canonical_filename, verbose=False):
+    events = list(yaml.parse(open(canonical_filename, 'rb')))
+    for canonical in [False, True]:
+        output = yaml.emit(events, canonical=canonical)
+        if verbose:
+            print "OUTPUT (canonical=%s):" % canonical
+            print output
+        new_events = list(yaml.parse(output))
+        _compare_events(events, new_events)
+
+test_emitter_on_canonical.unittest = ['.canonical']
+
+def test_emitter_styles(data_filename, canonical_filename, verbose=False):
+    for filename in [data_filename, canonical_filename]:
+        events = list(yaml.parse(open(filename, 'rb')))
+        for flow_style in [False, True]:
+            for style in ['|', '>', '"', '\'', '']:
+                styled_events = []
+                for event in events:
+                    if isinstance(event, yaml.ScalarEvent):
+                        event = yaml.ScalarEvent(event.anchor, event.tag,
+                                event.implicit, event.value, style=style)
+                    elif isinstance(event, yaml.SequenceStartEvent):
+                        event = yaml.SequenceStartEvent(event.anchor, event.tag,
+                                event.implicit, flow_style=flow_style)
+                    elif isinstance(event, yaml.MappingStartEvent):
+                        event = yaml.MappingStartEvent(event.anchor, event.tag,
+                                event.implicit, flow_style=flow_style)
+                    styled_events.append(event)
+                output = yaml.emit(styled_events)
+                if verbose:
+                    print "OUTPUT (filename=%r, flow_style=%r, style=%r)" % (filename, flow_style, style)
+                    print output
+                new_events = list(yaml.parse(output))
+                _compare_events(events, new_events)
+
+test_emitter_styles.unittest = ['.data', '.canonical']
+
+class EventsLoader(yaml.Loader):
+
+    def construct_event(self, node):
+        if isinstance(node, yaml.ScalarNode):
+            mapping = {}
+        else:
+            mapping = self.construct_mapping(node)
+        class_name = str(node.tag[1:])+'Event'
+        if class_name in ['AliasEvent', 'ScalarEvent', 'SequenceStartEvent', 'MappingStartEvent']:
+            mapping.setdefault('anchor', None)
+        if class_name in ['ScalarEvent', 'SequenceStartEvent', 'MappingStartEvent']:
+            mapping.setdefault('tag', None)
+        if class_name in ['SequenceStartEvent', 'MappingStartEvent']:
+            mapping.setdefault('implicit', True)
+        if class_name == 'ScalarEvent':
+            mapping.setdefault('implicit', (False, True))
+            mapping.setdefault('value', '')
+        value = getattr(yaml, class_name)(**mapping)
+        return value
+
+EventsLoader.add_constructor(None, EventsLoader.construct_event)
+
+def test_emitter_events(events_filename, verbose=False):
+    events = list(yaml.load(open(events_filename, 'rb'), Loader=EventsLoader))
+    output = yaml.emit(events)
+    if verbose:
+        print "OUTPUT:"
+        print output
+    new_events = list(yaml.parse(output))
+    _compare_events(events, new_events)
+
+test_emitter_events.unittest = ['.events']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_errors.py b/tests/lib/test_errors.py
new file mode 100644
index 0000000..7dc9388
--- /dev/null
+++ b/tests/lib/test_errors.py
@@ -0,0 +1,67 @@
+
+import yaml, test_emitter
+
+def test_loader_error(error_filename, verbose=False):
+    try:
+        list(yaml.load_all(open(error_filename, 'rb')))
+    except yaml.YAMLError, exc:
+        if verbose:
+            print "%s:" % exc.__class__.__name__, exc
+    else:
+        raise AssertionError("expected an exception")
+
+test_loader_error.unittest = ['.loader-error']
+
+def test_loader_error_string(error_filename, verbose=False):
+    try:
+        list(yaml.load_all(open(error_filename, 'rb').read()))
+    except yaml.YAMLError, exc:
+        if verbose:
+            print "%s:" % exc.__class__.__name__, exc
+    else:
+        raise AssertionError("expected an exception")
+
+test_loader_error_string.unittest = ['.loader-error']
+
+def test_loader_error_single(error_filename, verbose=False):
+    try:
+        yaml.load(open(error_filename, 'rb').read())
+    except yaml.YAMLError, exc:
+        if verbose:
+            print "%s:" % exc.__class__.__name__, exc
+    else:
+        raise AssertionError("expected an exception")
+
+test_loader_error_single.unittest = ['.single-loader-error']
+
+def test_emitter_error(error_filename, verbose=False):
+    events = list(yaml.load(open(error_filename, 'rb'),
+                    Loader=test_emitter.EventsLoader))
+    try:
+        yaml.emit(events)
+    except yaml.YAMLError, exc:
+        if verbose:
+            print "%s:" % exc.__class__.__name__, exc
+    else:
+        raise AssertionError("expected an exception")
+
+test_emitter_error.unittest = ['.emitter-error']
+
+def test_dumper_error(error_filename, verbose=False):
+    code = open(error_filename, 'rb').read()
+    try:
+        import yaml
+        from StringIO import StringIO
+        exec code
+    except yaml.YAMLError, exc:
+        if verbose:
+            print "%s:" % exc.__class__.__name__, exc
+    else:
+        raise AssertionError("expected an exception")
+
+test_dumper_error.unittest = ['.dumper-error']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_input_output.py b/tests/lib/test_input_output.py
new file mode 100644
index 0000000..9ccc8fc
--- /dev/null
+++ b/tests/lib/test_input_output.py
@@ -0,0 +1,151 @@
+
+import yaml
+import codecs, StringIO, tempfile, os, os.path
+
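+# codecs.lookup() returned a plain 4-tuple before Python 2.5 and a
+# CodecInfo object afterwards; either way, _unicode_open() extracts the
+# stream reader/writer pair to wrap a byte stream.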
+def _unicode_open(file, encoding, errors='strict'):
+    info = codecs.lookup(encoding)
+    if isinstance(info, tuple):
+        reader = info[2]
+        writer = info[3]
+    else:
+        reader = info.streamreader
+        writer = info.streamwriter
+    srw = codecs.StreamReaderWriter(file, reader, writer, errors)
+    srw.encoding = encoding
+    return srw
+
+def test_unicode_input(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    value = ' '.join(data.split())
+    output = yaml.load(_unicode_open(StringIO.StringIO(data.encode('utf-8')), 'utf-8'))
+    assert output == value, (output, value)
+    for input in [data, data.encode('utf-8'),
+                    codecs.BOM_UTF8+data.encode('utf-8'),
+                    codecs.BOM_UTF16_BE+data.encode('utf-16-be'),
+                    codecs.BOM_UTF16_LE+data.encode('utf-16-le')]:
+        if verbose:
+            print "INPUT:", repr(input[:10]), "..."
+        output = yaml.load(input)
+        assert output == value, (output, value)
+        output = yaml.load(StringIO.StringIO(input))
+        assert output == value, (output, value)
+
+test_unicode_input.unittest = ['.unicode']
+
+def test_unicode_input_errors(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    for input in [data.encode('latin1', 'ignore'),
+                    data.encode('utf-16-be'), data.encode('utf-16-le'),
+                    codecs.BOM_UTF8+data.encode('utf-16-be'),
+                    codecs.BOM_UTF16_BE+data.encode('utf-16-le'),
+                    codecs.BOM_UTF16_LE+data.encode('utf-8')+'!']:
+        try:
+            yaml.load(input)
+        except yaml.YAMLError, exc:
+            if verbose:
+                print exc
+        else:
+            raise AssertionError("expected an exception")
+        try:
+            yaml.load(StringIO.StringIO(input))
+        except yaml.YAMLError, exc:
+            if verbose:
+                print exc
+        else:
+            raise AssertionError("expected an exception")
+
+test_unicode_input_errors.unittest = ['.unicode']
+
+def test_unicode_output(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    value = ' '.join(data.split())
+    for allow_unicode in [False, True]:
+        data1 = yaml.dump(value, allow_unicode=allow_unicode)
+        for encoding in [None, 'utf-8', 'utf-16-be', 'utf-16-le']:
+            stream = StringIO.StringIO()
+            yaml.dump(value, _unicode_open(stream, 'utf-8'), encoding=encoding, allow_unicode=allow_unicode)
+            data2 = stream.getvalue()
+            data3 = yaml.dump(value, encoding=encoding, allow_unicode=allow_unicode)
+            stream = StringIO.StringIO()
+            yaml.dump(value, stream, encoding=encoding, allow_unicode=allow_unicode)
+            data4 = stream.getvalue()
+            for copy in [data1, data2, data3, data4]:
+                if allow_unicode:
+                    try:
+                        copy[4:].encode('ascii')
+                    except (UnicodeDecodeError, UnicodeEncodeError), exc:
+                        if verbose:
+                            print exc
+                    else:
+                        raise AssertionError("expected an exception")
+                else:
+                    copy[4:].encode('ascii')
+            assert isinstance(data1, str), (type(data1), encoding)
+            data1.decode('utf-8')
+            assert isinstance(data2, str), (type(data2), encoding)
+            data2.decode('utf-8')
+            if encoding is None:
+                assert isinstance(data3, unicode), (type(data3), encoding)
+                assert isinstance(data4, unicode), (type(data4), encoding)
+            else:
+                assert isinstance(data3, str), (type(data3), encoding)
+                data3.decode(encoding)
+                assert isinstance(data4, str), (type(data4), encoding)
+                data4.decode(encoding)
+
+test_unicode_output.unittest = ['.unicode']
+
+def test_file_output(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    handle, filename = tempfile.mkstemp()
+    os.close(handle)
+    try:
+        stream = StringIO.StringIO()
+        yaml.dump(data, stream, allow_unicode=True)
+        data1 = stream.getvalue()
+        stream = open(filename, 'wb')
+        yaml.dump(data, stream, allow_unicode=True)
+        stream.close()
+        data2 = open(filename, 'rb').read()
+        stream = open(filename, 'wb')
+        yaml.dump(data, stream, encoding='utf-16-le', allow_unicode=True)
+        stream.close()
+        data3 = open(filename, 'rb').read().decode('utf-16-le')[1:].encode('utf-8')
+        stream = _unicode_open(open(filename, 'wb'), 'utf-8')
+        yaml.dump(data, stream, allow_unicode=True)
+        stream.close()
+        data4 = open(filename, 'rb').read()
+        assert data1 == data2, (data1, data2)
+        assert data1 == data3, (data1, data3)
+        assert data1 == data4, (data1, data4)
+    finally:
+        if os.path.exists(filename):
+            os.unlink(filename)
+
+test_file_output.unittest = ['.unicode']
+
+def test_unicode_transfer(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    for encoding in [None, 'utf-8', 'utf-16-be', 'utf-16-le']:
+        input = data
+        if encoding is not None:
+            input = (u'\ufeff'+input).encode(encoding)
+        output1 = yaml.emit(yaml.parse(input), allow_unicode=True)
+        stream = StringIO.StringIO()
+        yaml.emit(yaml.parse(input), _unicode_open(stream, 'utf-8'),
+                            allow_unicode=True)
+        output2 = stream.getvalue()
+        if encoding is None:
+            assert isinstance(output1, unicode), (type(output1), encoding)
+        else:
+            assert isinstance(output1, str), (type(output1), encoding)
+            output1.decode(encoding)
+        assert isinstance(output2, str), (type(output2), encoding)
+        output2.decode('utf-8')
+
+test_unicode_transfer.unittest = ['.unicode']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_mark.py b/tests/lib/test_mark.py
new file mode 100644
index 0000000..f30a121
--- /dev/null
+++ b/tests/lib/test_mark.py
@@ -0,0 +1,32 @@
+
+import yaml
+
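+# Each '.marks' fixture is a series of '---'-separated snippets; the '*'
+# character in each snippet marks the position that the constructed
+# yaml.Mark should point at.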
+def test_marks(marks_filename, verbose=False):
+    inputs = open(marks_filename, 'rb').read().split('---\n')[1:]
+    for input in inputs:
+        index = 0
+        line = 0
+        column = 0
+        while input[index] != '*':
+            if input[index] == '\n':
+                line += 1
+                column = 0
+            else:
+                column += 1
+            index += 1
+        mark = yaml.Mark(marks_filename, index, line, column, unicode(input), index)
+        snippet = mark.get_snippet(indent=2, max_length=79)
+        if verbose:
+            print snippet
+        assert isinstance(snippet, str), type(snippet)
+        assert snippet.count('\n') == 1, snippet.count('\n')
+        data, pointer = snippet.split('\n')
+        assert len(data) < 82, len(data)
+        assert data[len(pointer)-1] == '*', data[len(pointer)-1]
+
+test_marks.unittest = ['.marks']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_reader.py b/tests/lib/test_reader.py
new file mode 100644
index 0000000..3576ae6
--- /dev/null
+++ b/tests/lib/test_reader.py
@@ -0,0 +1,35 @@
+
+import yaml.reader
+import codecs
+
+def _run_reader(data, verbose):
+    try:
+        stream = yaml.reader.Reader(data)
+        while stream.peek() != u'\0':
+            stream.forward()
+    except yaml.reader.ReaderError, exc:
+        if verbose:
+            print exc
+    else:
+        raise AssertionError("expected an exception")
+
+def test_stream_error(error_filename, verbose=False):
+    _run_reader(open(error_filename, 'rb'), verbose)
+    _run_reader(open(error_filename, 'rb').read(), verbose)
+    for encoding in ['utf-8', 'utf-16-le', 'utf-16-be']:
+        try:
+            data = unicode(open(error_filename, 'rb').read(), encoding)
+            break
+        except UnicodeDecodeError:
+            pass
+    else:
+        return
+    _run_reader(data, verbose)
+    _run_reader(codecs.open(error_filename, encoding=encoding), verbose)
+
+test_stream_error.unittest = ['.stream-error']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_recursive.py b/tests/lib/test_recursive.py
new file mode 100644
index 0000000..6707fd4
--- /dev/null
+++ b/tests/lib/test_recursive.py
@@ -0,0 +1,50 @@
+
+import yaml
+
+class AnInstance:
+
+    def __init__(self, foo, bar):
+        self.foo = foo
+        self.bar = bar
+
+    def __repr__(self):
+        try:
+            return "%s(foo=%r, bar=%r)" % (self.__class__.__name__,
+                    self.foo, self.bar)
+        except RuntimeError:
+            return "%s(foo=..., bar=...)" % self.__class__.__name__
+
+class AnInstanceWithState(AnInstance):
+
+    def __getstate__(self):
+        return {'attributes': [self.foo, self.bar]}
+
+    def __setstate__(self, state):
+        self.foo, self.bar = state['attributes']
+
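+# Each '.recursive' fixture is Python code that builds a self-referencing
+# object and binds it to the name 'value'; the bare exec below makes that
+# name visible as a local.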
+def test_recursive(recursive_filename, verbose=False):
+    exec open(recursive_filename, 'rb').read()
+    value1 = value
+    output1 = None
+    value2 = None
+    output2 = None
+    try:
+        output1 = yaml.dump(value1)
+        value2 = yaml.load(output1)
+        output2 = yaml.dump(value2)
+        assert output1 == output2, (output1, output2)
+    finally:
+        if verbose:
+            #print "VALUE1:", value1
+            #print "VALUE2:", value2
+            print "OUTPUT1:"
+            print output1
+            print "OUTPUT2:"
+            print output2
+
+test_recursive.unittest = ['.recursive']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_representer.py b/tests/lib/test_representer.py
new file mode 100644
index 0000000..a82a32a
--- /dev/null
+++ b/tests/lib/test_representer.py
@@ -0,0 +1,43 @@
+
+import yaml
+import test_constructor
+import pprint
+
+def test_representer_types(code_filename, verbose=False):
+    test_constructor._make_objects()
+    for allow_unicode in [False, True]:
+        for encoding in ['utf-8', 'utf-16-be', 'utf-16-le']:
+            native1 = test_constructor._load_code(open(code_filename, 'rb').read())
+            native2 = None
+            try:
+                output = yaml.dump(native1, Dumper=test_constructor.MyDumper,
+                            allow_unicode=allow_unicode, encoding=encoding)
+                native2 = yaml.load(output, Loader=test_constructor.MyLoader)
+                try:
+                    if native1 == native2:
+                        continue
+                except TypeError:
+                    pass
+                value1 = test_constructor._serialize_value(native1)
+                value2 = test_constructor._serialize_value(native2)
+                if verbose:
+                    print "SERIALIZED NATIVE1:"
+                    print value1
+                    print "SERIALIZED NATIVE2:"
+                    print value2
+                assert value1 == value2, (native1, native2)
+            finally:
+                if verbose:
+                    print "NATIVE1:"
+                    pprint.pprint(native1)
+                    print "NATIVE2:"
+                    pprint.pprint(native2)
+                    print "OUTPUT:"
+                    print output
+
+test_representer_types.unittest = ['.code']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_resolver.py b/tests/lib/test_resolver.py
new file mode 100644
index 0000000..5566750
--- /dev/null
+++ b/tests/lib/test_resolver.py
@@ -0,0 +1,92 @@
+
+import yaml
+import pprint
+
+def test_implicit_resolver(data_filename, detect_filename, verbose=False):
+    correct_tag = None
+    node = None
+    try:
+        correct_tag = open(detect_filename, 'rb').read().strip()
+        node = yaml.compose(open(data_filename, 'rb'))
+        assert isinstance(node, yaml.SequenceNode), node
+        for scalar in node.value:
+            assert isinstance(scalar, yaml.ScalarNode), scalar
+            assert scalar.tag == correct_tag, (scalar.tag, correct_tag)
+    finally:
+        if verbose:
+            print "CORRECT TAG:", correct_tag
+            if hasattr(node, 'value'):
+                print "CHILDREN:"
+                pprint.pprint(node.value)
+
+test_implicit_resolver.unittest = ['.data', '.detect']
+
+def _make_path_loader_and_dumper():
+    global MyLoader, MyDumper
+
+    class MyLoader(yaml.Loader):
+        pass
+    class MyDumper(yaml.Dumper):
+        pass
+
+    yaml.add_path_resolver(u'!root', [],
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver(u'!root/scalar', [], str,
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver(u'!root/key11/key12/*', ['key11', 'key12'],
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver(u'!root/key21/1/*', ['key21', 1],
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver(u'!root/key31/*/*/key14/map', ['key31', None, None, 'key14'], dict,
+            Loader=MyLoader, Dumper=MyDumper)
+
+    return MyLoader, MyDumper
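+
+# Each registered path pins a tag to nodes at a fixed position in the
+# document (for instance, the empty path [] resolves the root node to
+# '!root'); the '.path' fixtures record the expected tags that the tests
+# below compare against.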
+
+def _convert_node(node):
+    if isinstance(node, yaml.ScalarNode):
+        return (node.tag, node.value)
+    elif isinstance(node, yaml.SequenceNode):
+        value = []
+        for item in node.value:
+            value.append(_convert_node(item))
+        return (node.tag, value)
+    elif isinstance(node, yaml.MappingNode):
+        value = []
+        for key, item in node.value:
+            value.append((_convert_node(key), _convert_node(item)))
+        return (node.tag, value)
+
+def test_path_resolver_loader(data_filename, path_filename, verbose=False):
+    _make_path_loader_and_dumper()
+    nodes1 = list(yaml.compose_all(open(data_filename, 'rb').read(), Loader=MyLoader))
+    nodes2 = list(yaml.compose_all(open(path_filename, 'rb').read()))
+    try:
+        for node1, node2 in zip(nodes1, nodes2):
+            data1 = _convert_node(node1)
+            data2 = _convert_node(node2)
+            assert data1 == data2, (data1, data2)
+    finally:
+        if verbose:
+            print yaml.serialize_all(nodes1)
+
+test_path_resolver_loader.unittest = ['.data', '.path']
+
+def test_path_resolver_dumper(data_filename, path_filename, verbose=False):
+    _make_path_loader_and_dumper()
+    for filename in [data_filename, path_filename]:
+        output = yaml.serialize_all(yaml.compose_all(open(filename, 'rb')), Dumper=MyDumper)
+        if verbose:
+            print output
+        nodes1 = yaml.compose_all(output)
+        nodes2 = yaml.compose_all(open(data_filename, 'rb'))
+        for node1, node2 in zip(nodes1, nodes2):
+            data1 = _convert_node(node1)
+            data2 = _convert_node(node2)
+            assert data1 == data2, (data1, data2)
+
+test_path_resolver_dumper.unittest = ['.data', '.path']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_structure.py b/tests/lib/test_structure.py
new file mode 100644
index 0000000..61bcb80
--- /dev/null
+++ b/tests/lib/test_structure.py
@@ -0,0 +1,187 @@
+
+import yaml, canonical
+import pprint
+
+def _convert_structure(loader):
+    if loader.check_event(yaml.ScalarEvent):
+        event = loader.get_event()
+        if event.tag or event.anchor or event.value:
+            return True
+        else:
+            return None
+    elif loader.check_event(yaml.SequenceStartEvent):
+        loader.get_event()
+        sequence = []
+        while not loader.check_event(yaml.SequenceEndEvent):
+            sequence.append(_convert_structure(loader))
+        loader.get_event()
+        return sequence
+    elif loader.check_event(yaml.MappingStartEvent):
+        loader.get_event()
+        mapping = []
+        while not loader.check_event(yaml.MappingEndEvent):
+            key = _convert_structure(loader)
+            value = _convert_structure(loader)
+            mapping.append((key, value))
+        loader.get_event()
+        return mapping
+    elif loader.check_event(yaml.AliasEvent):
+        loader.get_event()
+        return '*'
+    else:
+        loader.get_event()
+        return '?'
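+
+# For example (a sketch, not taken from the fixtures): the document
+# '[foo, {a: 1}]' reduces to [True, [(True, True)]], while a bare empty
+# scalar reduces to None.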
+
+def test_structure(data_filename, structure_filename, verbose=False):
+    nodes1 = []
+    nodes2 = eval(open(structure_filename, 'rb').read())
+    try:
+        loader = yaml.Loader(open(data_filename, 'rb'))
+        while loader.check_event():
+            if loader.check_event(yaml.StreamStartEvent, yaml.StreamEndEvent,
+                                yaml.DocumentStartEvent, yaml.DocumentEndEvent):
+                loader.get_event()
+                continue
+            nodes1.append(_convert_structure(loader))
+        if len(nodes1) == 1:
+            nodes1 = nodes1[0]
+        assert nodes1 == nodes2, (nodes1, nodes2)
+    finally:
+        if verbose:
+            print "NODES1:"
+            pprint.pprint(nodes1)
+            print "NODES2:"
+            pprint.pprint(nodes2)
+
+test_structure.unittest = ['.data', '.structure']
+
+def _compare_events(events1, events2, full=False):
+    assert len(events1) == len(events2), (len(events1), len(events2))
+    for event1, event2 in zip(events1, events2):
+        assert event1.__class__ == event2.__class__, (event1, event2)
+        if isinstance(event1, yaml.AliasEvent) and full:
+            assert event1.anchor == event2.anchor, (event1, event2)
+        if isinstance(event1, (yaml.ScalarEvent, yaml.CollectionStartEvent)):
+            if (event1.tag not in [None, u'!'] and event2.tag not in [None, u'!']) or full:
+                assert event1.tag == event2.tag, (event1, event2)
+        if isinstance(event1, yaml.ScalarEvent):
+            assert event1.value == event2.value, (event1, event2)
+
+def test_parser(data_filename, canonical_filename, verbose=False):
+    events1 = None
+    events2 = None
+    try:
+        events1 = list(yaml.parse(open(data_filename, 'rb')))
+        events2 = list(yaml.canonical_parse(open(canonical_filename, 'rb')))
+        _compare_events(events1, events2)
+    finally:
+        if verbose:
+            print "EVENTS1:"
+            pprint.pprint(events1)
+            print "EVENTS2:"
+            pprint.pprint(events2)
+
+test_parser.unittest = ['.data', '.canonical']
+
+def test_parser_on_canonical(canonical_filename, verbose=False):
+    events1 = None
+    events2 = None
+    try:
+        events1 = list(yaml.parse(open(canonical_filename, 'rb')))
+        events2 = list(yaml.canonical_parse(open(canonical_filename, 'rb')))
+        _compare_events(events1, events2, full=True)
+    finally:
+        if verbose:
+            print "EVENTS1:"
+            pprint.pprint(events1)
+            print "EVENTS2:"
+            pprint.pprint(events2)
+
+test_parser_on_canonical.unittest = ['.canonical']
+
+def _compare_nodes(node1, node2):
+    assert node1.__class__ == node2.__class__, (node1, node2)
+    assert node1.tag == node2.tag, (node1, node2)
+    if isinstance(node1, yaml.ScalarNode):
+        assert node1.value == node2.value, (node1, node2)
+    else:
+        assert len(node1.value) == len(node2.value), (node1, node2)
+        for item1, item2 in zip(node1.value, node2.value):
+            if not isinstance(item1, tuple):
+                item1 = (item1,)
+                item2 = (item2,)
+            for subnode1, subnode2 in zip(item1, item2):
+                _compare_nodes(subnode1, subnode2)
+
+def test_composer(data_filename, canonical_filename, verbose=False):
+    nodes1 = None
+    nodes2 = None
+    try:
+        nodes1 = list(yaml.compose_all(open(data_filename, 'rb')))
+        nodes2 = list(yaml.canonical_compose_all(open(canonical_filename, 'rb')))
+        assert len(nodes1) == len(nodes2), (len(nodes1), len(nodes2))
+        for node1, node2 in zip(nodes1, nodes2):
+            _compare_nodes(node1, node2)
+    finally:
+        if verbose:
+            print "NODES1:"
+            pprint.pprint(nodes1)
+            print "NODES2:"
+            pprint.pprint(nodes2)
+
+test_composer.unittest = ['.data', '.canonical']
+
+def _make_loader():
+    global MyLoader
+
+    class MyLoader(yaml.Loader):
+        def construct_sequence(self, node):
+            return tuple(yaml.Loader.construct_sequence(self, node))
+        def construct_mapping(self, node):
+            pairs = self.construct_pairs(node)
+            pairs.sort()
+            return pairs
+        def construct_undefined(self, node):
+            return self.construct_scalar(node)
+
+    MyLoader.add_constructor(u'tag:yaml.org,2002:map', MyLoader.construct_mapping)
+    MyLoader.add_constructor(None, MyLoader.construct_undefined)
+
+def _make_canonical_loader():
+    global MyCanonicalLoader
+
+    class MyCanonicalLoader(yaml.CanonicalLoader):
+        def construct_sequence(self, node):
+            return tuple(yaml.CanonicalLoader.construct_sequence(self, node))
+        def construct_mapping(self, node):
+            pairs = self.construct_pairs(node)
+            pairs.sort()
+            return pairs
+        def construct_undefined(self, node):
+            return self.construct_scalar(node)
+
+    MyCanonicalLoader.add_constructor(u'tag:yaml.org,2002:map', MyCanonicalLoader.construct_mapping)
+    MyCanonicalLoader.add_constructor(None, MyCanonicalLoader.construct_undefined)
+
+def test_constructor(data_filename, canonical_filename, verbose=False):
+    _make_loader()
+    _make_canonical_loader()
+    native1 = None
+    native2 = None
+    try:
+        native1 = list(yaml.load_all(open(data_filename, 'rb'), Loader=MyLoader))
+        native2 = list(yaml.load_all(open(canonical_filename, 'rb'), Loader=MyCanonicalLoader))
+        assert native1 == native2, (native1, native2)
+    finally:
+        if verbose:
+            print "NATIVE1:"
+            pprint.pprint(native1)
+            print "NATIVE2:"
+            pprint.pprint(native2)
+
+test_constructor.unittest = ['.data', '.canonical']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_tokens.py b/tests/lib/test_tokens.py
new file mode 100644
index 0000000..9613fa0
--- /dev/null
+++ b/tests/lib/test_tokens.py
@@ -0,0 +1,77 @@
+
+import yaml
+import pprint
+
+# Tokens mnemonic:
+# directive:            %
+# document_start:       ---
+# document_end:         ...
+# alias:                *
+# anchor:               &
+# tag:                  !
+# scalar:               _
+# block_sequence_start: [[
+# block_mapping_start:  {{
+# block_end:            ]}
+# flow_sequence_start:  [
+# flow_sequence_end:    ]
+# flow_mapping_start:   {
+# flow_mapping_end:     }
+# entry:                ,
+# key:                  ?
+# value:                :
+
+_replaces = {
+    yaml.DirectiveToken: '%',
+    yaml.DocumentStartToken: '---',
+    yaml.DocumentEndToken: '...',
+    yaml.AliasToken: '*',
+    yaml.AnchorToken: '&',
+    yaml.TagToken: '!',
+    yaml.ScalarToken: '_',
+    yaml.BlockSequenceStartToken: '[[',
+    yaml.BlockMappingStartToken: '{{',
+    yaml.BlockEndToken: ']}',
+    yaml.FlowSequenceStartToken: '[',
+    yaml.FlowSequenceEndToken: ']',
+    yaml.FlowMappingStartToken: '{',
+    yaml.FlowMappingEndToken: '}',
+    yaml.BlockEntryToken: ',',
+    yaml.FlowEntryToken: ',',
+    yaml.KeyToken: '?',
+    yaml.ValueToken: ':',
+}
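+
+# For example (a sketch, not a fixture): scanning the flow document
+# '{a: 1}' yields FlowMappingStart, Key, Scalar, Value, Scalar,
+# FlowMappingEnd, which the table above renders as '{ ? _ : _ }'.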
+
+def test_tokens(data_filename, tokens_filename, verbose=False):
+    tokens1 = []
+    tokens2 = open(tokens_filename, 'rb').read().split()
+    try:
+        for token in yaml.scan(open(data_filename, 'rb')):
+            if not isinstance(token, (yaml.StreamStartToken, yaml.StreamEndToken)):
+                tokens1.append(_replaces[token.__class__])
+    finally:
+        if verbose:
+            print "TOKENS1:", ' '.join(tokens1)
+            print "TOKENS2:", ' '.join(tokens2)
+    assert len(tokens1) == len(tokens2), (tokens1, tokens2)
+    for token1, token2 in zip(tokens1, tokens2):
+        assert token1 == token2, (token1, token2)
+
+test_tokens.unittest = ['.data', '.tokens']
+
+def test_scanner(data_filename, canonical_filename, verbose=False):
+    for filename in [data_filename, canonical_filename]:
+        tokens = []
+        try:
+            for token in yaml.scan(open(filename, 'rb')):
+                tokens.append(token.__class__.__name__)
+        finally:
+            if verbose:
+                pprint.pprint(tokens)
+
+test_scanner.unittest = ['.data', '.canonical']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_yaml.py b/tests/lib/test_yaml.py
new file mode 100644
index 0000000..0927368
--- /dev/null
+++ b/tests/lib/test_yaml.py
@@ -0,0 +1,18 @@
+
+from test_mark import *
+from test_reader import *
+from test_canonical import *
+from test_tokens import *
+from test_structure import *
+from test_errors import *
+from test_resolver import *
+from test_constructor import *
+from test_emitter import *
+from test_representer import *
+from test_recursive import *
+from test_input_output import *
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib/test_yaml_ext.py b/tests/lib/test_yaml_ext.py
new file mode 100644
index 0000000..bdfda3e
--- /dev/null
+++ b/tests/lib/test_yaml_ext.py
@@ -0,0 +1,277 @@
+
+import _yaml, yaml
+import types, pprint
+
+yaml.PyBaseLoader = yaml.BaseLoader
+yaml.PySafeLoader = yaml.SafeLoader
+yaml.PyLoader = yaml.Loader
+yaml.PyBaseDumper = yaml.BaseDumper
+yaml.PySafeDumper = yaml.SafeDumper
+yaml.PyDumper = yaml.Dumper
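+
+# Keep references to the pure-Python classes so that _tear_down() below
+# can restore them after _set_up() swaps in the LibYAML-backed C variants.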
+
+old_scan = yaml.scan
+def new_scan(stream, Loader=yaml.CLoader):
+    return old_scan(stream, Loader)
+
+old_parse = yaml.parse
+def new_parse(stream, Loader=yaml.CLoader):
+    return old_parse(stream, Loader)
+
+old_compose = yaml.compose
+def new_compose(stream, Loader=yaml.CLoader):
+    return old_compose(stream, Loader)
+
+old_compose_all = yaml.compose_all
+def new_compose_all(stream, Loader=yaml.CLoader):
+    return old_compose_all(stream, Loader)
+
+old_load = yaml.load
+def new_load(stream, Loader=yaml.CLoader):
+    return old_load(stream, Loader)
+
+old_load_all = yaml.load_all
+def new_load_all(stream, Loader=yaml.CLoader):
+    return old_load_all(stream, Loader)
+
+old_safe_load = yaml.safe_load
+def new_safe_load(stream):
+    return old_load(stream, yaml.CSafeLoader)
+
+old_safe_load_all = yaml.safe_load_all
+def new_safe_load_all(stream):
+    return old_load_all(stream, yaml.CSafeLoader)
+
+old_emit = yaml.emit
+def new_emit(events, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_emit(events, stream, Dumper, **kwds)
+
+old_serialize = yaml.serialize
+def new_serialize(node, stream, Dumper=yaml.CDumper, **kwds):
+    return old_serialize(node, stream, Dumper, **kwds)
+
+old_serialize_all = yaml.serialize_all
+def new_serialize_all(nodes, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_serialize_all(nodes, stream, Dumper, **kwds)
+
+old_dump = yaml.dump
+def new_dump(data, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_dump(data, stream, Dumper, **kwds)
+
+old_dump_all = yaml.dump_all
+def new_dump_all(documents, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_dump_all(documents, stream, Dumper, **kwds)
+
+old_safe_dump = yaml.safe_dump
+def new_safe_dump(data, stream=None, **kwds):
+    return old_dump(data, stream, yaml.CSafeDumper, **kwds)
+
+old_safe_dump_all = yaml.safe_dump_all
+def new_safe_dump_all(documents, stream=None, **kwds):
+    return old_dump_all(documents, stream, yaml.CSafeDumper, **kwds)
+
+def _set_up():
+    yaml.BaseLoader = yaml.CBaseLoader
+    yaml.SafeLoader = yaml.CSafeLoader
+    yaml.Loader = yaml.CLoader
+    yaml.BaseDumper = yaml.CBaseDumper
+    yaml.SafeDumper = yaml.CSafeDumper
+    yaml.Dumper = yaml.CDumper
+    yaml.scan = new_scan
+    yaml.parse = new_parse
+    yaml.compose = new_compose
+    yaml.compose_all = new_compose_all
+    yaml.load = new_load
+    yaml.load_all = new_load_all
+    yaml.safe_load = new_safe_load
+    yaml.safe_load_all = new_safe_load_all
+    yaml.emit = new_emit
+    yaml.serialize = new_serialize
+    yaml.serialize_all = new_serialize_all
+    yaml.dump = new_dump
+    yaml.dump_all = new_dump_all
+    yaml.safe_dump = new_safe_dump
+    yaml.safe_dump_all = new_safe_dump_all
+
+def _tear_down():
+    yaml.BaseLoader = yaml.PyBaseLoader
+    yaml.SafeLoader = yaml.PySafeLoader
+    yaml.Loader = yaml.PyLoader
+    yaml.BaseDumper = yaml.PyBaseDumper
+    yaml.SafeDumper = yaml.PySafeDumper
+    yaml.Dumper = yaml.PyDumper
+    yaml.scan = old_scan
+    yaml.parse = old_parse
+    yaml.compose = old_compose
+    yaml.compose_all = old_compose_all
+    yaml.load = old_load
+    yaml.load_all = old_load_all
+    yaml.safe_load = old_safe_load
+    yaml.safe_load_all = old_safe_load_all
+    yaml.emit = old_emit
+    yaml.serialize = old_serialize
+    yaml.serialize_all = old_serialize_all
+    yaml.dump = old_dump
+    yaml.dump_all = old_dump_all
+    yaml.safe_dump = old_safe_dump
+    yaml.safe_dump_all = old_safe_dump_all
+
+def test_c_version(verbose=False):
+    if verbose:
+        print _yaml.get_version()
+        print _yaml.get_version_string()
+    assert ("%s.%s.%s" % _yaml.get_version()) == _yaml.get_version_string(),    \
+            (_yaml.get_version(), _yaml.get_version_string())
+
+def _compare_scanners(py_data, c_data, verbose):
+    py_tokens = list(yaml.scan(py_data, Loader=yaml.PyLoader))
+    c_tokens = []
+    try:
+        for token in yaml.scan(c_data, Loader=yaml.CLoader):
+            c_tokens.append(token)
+        assert len(py_tokens) == len(c_tokens), (len(py_tokens), len(c_tokens))
+        for py_token, c_token in zip(py_tokens, c_tokens):
+            assert py_token.__class__ == c_token.__class__, (py_token, c_token)
+            if hasattr(py_token, 'value'):
+                assert py_token.value == c_token.value, (py_token, c_token)
+            if isinstance(py_token, yaml.StreamEndToken):
+                continue
+            py_start = (py_token.start_mark.index, py_token.start_mark.line, py_token.start_mark.column)
+            py_end = (py_token.end_mark.index, py_token.end_mark.line, py_token.end_mark.column)
+            c_start = (c_token.start_mark.index, c_token.start_mark.line, c_token.start_mark.column)
+            c_end = (c_token.end_mark.index, c_token.end_mark.line, c_token.end_mark.column)
+            assert py_start == c_start, (py_start, c_start)
+            assert py_end == c_end, (py_end, c_end)
+    finally:
+        if verbose:
+            print "PY_TOKENS:"
+            pprint.pprint(py_tokens)
+            print "C_TOKENS:"
+            pprint.pprint(c_tokens)
+
+def test_c_scanner(data_filename, canonical_filename, verbose=False):
+    _compare_scanners(open(data_filename, 'rb'),
+            open(data_filename, 'rb'), verbose)
+    _compare_scanners(open(data_filename, 'rb').read(),
+            open(data_filename, 'rb').read(), verbose)
+    _compare_scanners(open(canonical_filename, 'rb'),
+            open(canonical_filename, 'rb'), verbose)
+    _compare_scanners(open(canonical_filename, 'rb').read(),
+            open(canonical_filename, 'rb').read(), verbose)
+
+test_c_scanner.unittest = ['.data', '.canonical']
+test_c_scanner.skip = ['.skip-ext']
+
+def _compare_parsers(py_data, c_data, verbose):
+    py_events = list(yaml.parse(py_data, Loader=yaml.PyLoader))
+    c_events = []
+    try:
+        for event in yaml.parse(c_data, Loader=yaml.CLoader):
+            c_events.append(event)
+        assert len(py_events) == len(c_events), (len(py_events), len(c_events))
+        for py_event, c_event in zip(py_events, c_events):
+            for attribute in ['__class__', 'anchor', 'tag', 'implicit',
+                                'value', 'explicit', 'version', 'tags']:
+                py_value = getattr(py_event, attribute, None)
+                c_value = getattr(c_event, attribute, None)
+                assert py_value == c_value, (py_event, c_event, attribute)
+    finally:
+        if verbose:
+            print "PY_EVENTS:"
+            pprint.pprint(py_events)
+            print "C_EVENTS:"
+            pprint.pprint(c_events)
+
+def test_c_parser(data_filename, canonical_filename, verbose=False):
+    _compare_parsers(open(data_filename, 'rb'),
+            open(data_filename, 'rb'), verbose)
+    _compare_parsers(open(data_filename, 'rb').read(),
+            open(data_filename, 'rb').read(), verbose)
+    _compare_parsers(open(canonical_filename, 'rb'),
+            open(canonical_filename, 'rb'), verbose)
+    _compare_parsers(open(canonical_filename, 'rb').read(),
+            open(canonical_filename, 'rb').read(), verbose)
+
+test_c_parser.unittest = ['.data', '.canonical']
+test_c_parser.skip = ['.skip-ext']
+
+def _compare_emitters(data, verbose):
+    events = list(yaml.parse(data, Loader=yaml.PyLoader))
+    c_data = yaml.emit(events, Dumper=yaml.CDumper)
+    if verbose:
+        print c_data
+    py_events = list(yaml.parse(c_data, Loader=yaml.PyLoader))
+    c_events = list(yaml.parse(c_data, Loader=yaml.CLoader))
+    try:
+        assert len(events) == len(py_events), (len(events), len(py_events))
+        assert len(events) == len(c_events), (len(events), len(c_events))
+        for event, py_event, c_event in zip(events, py_events, c_events):
+            for attribute in ['__class__', 'anchor', 'tag', 'implicit',
+                                'value', 'explicit', 'version', 'tags']:
+                value = getattr(event, attribute, None)
+                py_value = getattr(py_event, attribute, None)
+                c_value = getattr(c_event, attribute, None)
+                if attribute == 'tag' and value in [None, u'!'] \
+                        and py_value in [None, u'!'] and c_value in [None, u'!']:
+                    continue
+                if attribute == 'explicit' and (py_value or c_value):
+                    continue
+                assert value == py_value, (event, py_event, attribute)
+                assert value == c_value, (event, c_event, attribute)
+    finally:
+        if verbose:
+            print "EVENTS:"
+            pprint.pprint(events)
+            print "PY_EVENTS:"
+            pprint.pprint(py_events)
+            print "C_EVENTS:"
+            pprint.pprint(c_events)
+
+def test_c_emitter(data_filename, canonical_filename, verbose=False):
+    _compare_emitters(open(data_filename, 'rb').read(), verbose)
+    _compare_emitters(open(canonical_filename, 'rb').read(), verbose)
+
+test_c_emitter.unittest = ['.data', '.canonical']
+test_c_emitter.skip = ['.skip-ext']
+
+def wrap_ext_function(function):
+    def wrapper(*args, **kwds):
+        _set_up()
+        try:
+            function(*args, **kwds)
+        finally:
+            _tear_down()
+    try:
+        wrapper.func_name = '%s_ext' % function.func_name
+    except TypeError:
+        pass
+    wrapper.unittest_name = '%s_ext' % function.func_name
+    wrapper.unittest = function.unittest
+    wrapper.skip = getattr(function, 'skip', [])+['.skip-ext']
+    return wrapper
+
+def wrap_ext(collections):
+    functions = []
+    if not isinstance(collections, list):
+        collections = [collections]
+    for collection in collections:
+        if not isinstance(collection, dict):
+            collection = vars(collection)
+        keys = collection.keys()
+        keys.sort()
+        for key in keys:
+            value = collection[key]
+            if isinstance(value, types.FunctionType) and hasattr(value, 'unittest'):
+                functions.append(wrap_ext_function(value))
+    for function in functions:
+        assert function.unittest_name not in globals()
+        globals()[function.unittest_name] = function
+
+import test_tokens, test_structure, test_errors, test_resolver, test_constructor,   \
+        test_emitter, test_representer, test_recursive, test_input_output
+wrap_ext([test_tokens, test_structure, test_errors, test_resolver, test_constructor,
+        test_emitter, test_representer, test_recursive, test_input_output])
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
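
The file above re-points every public yaml entry point at a LibYAML-backed
wrapper in _set_up() and restores the pure-Python originals in _tear_down(),
so the whole existing test suite can be replayed against the C extension
unchanged. A minimal sketch of the same save/patch/restore pattern, assuming
PyYAML was built with LibYAML (yaml.__with_libyaml__ is true):

    import yaml

    def run_with_libyaml(function, *args, **kwds):
        # Save the pure-Python entry point, patch in the C-backed class,
        # and restore the original whether or not the call succeeds.
        old_loader = yaml.Loader
        yaml.Loader = yaml.CLoader
        try:
            return function(*args, **kwds)
        finally:
            yaml.Loader = old_loader
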
diff --git a/tests/lib3/canonical.py b/tests/lib3/canonical.py
new file mode 100644
index 0000000..a8b4e3a
--- /dev/null
+++ b/tests/lib3/canonical.py
@@ -0,0 +1,361 @@
+
+import yaml, yaml.composer, yaml.constructor, yaml.resolver
+
+class CanonicalError(yaml.YAMLError):
+    pass
+
+class CanonicalScanner:
+
+    def __init__(self, data):
+        if isinstance(data, bytes):
+            try:
+                data = data.decode('utf-8')
+            except UnicodeDecodeError:
+                raise CanonicalError("utf-8 stream is expected")
+        self.data = data+'\0'
+        self.index = 0
+        self.tokens = []
+        self.scanned = False
+
+    def check_token(self, *choices):
+        if not self.scanned:
+            self.scan()
+        if self.tokens:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.tokens[0], choice):
+                    return True
+        return False
+
+    def peek_token(self):
+        if not self.scanned:
+            self.scan()
+        if self.tokens:
+            return self.tokens[0]
+
+    def get_token(self, choice=None):
+        if not self.scanned:
+            self.scan()
+        token = self.tokens.pop(0)
+        if choice and not isinstance(token, choice):
+            raise CanonicalError("unexpected token "+repr(token))
+        return token
+
+    def get_token_value(self):
+        token = self.get_token()
+        return token.value
+
+    def scan(self):
+        self.tokens.append(yaml.StreamStartToken(None, None))
+        while True:
+            self.find_token()
+            ch = self.data[self.index]
+            if ch == '\0':
+                self.tokens.append(yaml.StreamEndToken(None, None))
+                break
+            elif ch == '%':
+                self.tokens.append(self.scan_directive())
+            elif ch == '-' and self.data[self.index:self.index+3] == '---':
+                self.index += 3
+                self.tokens.append(yaml.DocumentStartToken(None, None))
+            elif ch == '[':
+                self.index += 1
+                self.tokens.append(yaml.FlowSequenceStartToken(None, None))
+            elif ch == '{':
+                self.index += 1
+                self.tokens.append(yaml.FlowMappingStartToken(None, None))
+            elif ch == ']':
+                self.index += 1
+                self.tokens.append(yaml.FlowSequenceEndToken(None, None))
+            elif ch == '}':
+                self.index += 1
+                self.tokens.append(yaml.FlowMappingEndToken(None, None))
+            elif ch == '?':
+                self.index += 1
+                self.tokens.append(yaml.KeyToken(None, None))
+            elif ch == ':':
+                self.index += 1
+                self.tokens.append(yaml.ValueToken(None, None))
+            elif ch == ',':
+                self.index += 1
+                self.tokens.append(yaml.FlowEntryToken(None, None))
+            elif ch == '*' or ch == '&':
+                self.tokens.append(self.scan_alias())
+            elif ch == '!':
+                self.tokens.append(self.scan_tag())
+            elif ch == '"':
+                self.tokens.append(self.scan_scalar())
+            else:
+                raise CanonicalError("invalid token")
+        self.scanned = True
+
+    DIRECTIVE = '%YAML 1.1'
+
+    def scan_directive(self):
+        if self.data[self.index:self.index+len(self.DIRECTIVE)] == self.DIRECTIVE and \
+                self.data[self.index+len(self.DIRECTIVE)] in ' \n\0':
+            self.index += len(self.DIRECTIVE)
+            return yaml.DirectiveToken('YAML', (1, 1), None, None)
+        else:
+            raise CanonicalError("invalid directive")
+
+    def scan_alias(self):
+        if self.data[self.index] == '*':
+            TokenClass = yaml.AliasToken
+        else:
+            TokenClass = yaml.AnchorToken
+        self.index += 1
+        start = self.index
+        while self.data[self.index] not in ', \n\0':
+            self.index += 1
+        value = self.data[start:self.index]
+        return TokenClass(value, None, None)
+
+    def scan_tag(self):
+        self.index += 1
+        start = self.index
+        while self.data[self.index] not in ' \n\0':
+            self.index += 1
+        value = self.data[start:self.index]
+        if not value:
+            value = '!'
+        elif value[0] == '!':
+            value = 'tag:yaml.org,2002:'+value[1:]
+        elif value[0] == '<' and value[-1] == '>':
+            value = value[1:-1]
+        else:
+            value = '!'+value
+        return yaml.TagToken(value, None, None)
+
+    QUOTE_CODES = {
+        'x': 2,
+        'u': 4,
+        'U': 8,
+    }
+
+    QUOTE_REPLACES = {
+        '\\': '\\',
+        '\"': '\"',
+        ' ': ' ',
+        'a': '\x07',
+        'b': '\x08',
+        'e': '\x1B',
+        'f': '\x0C',
+        'n': '\x0A',
+        'r': '\x0D',
+        't': '\x09',
+        'v': '\x0B',
+        'N': '\u0085',
+        'L': '\u2028',
+        'P': '\u2029',
+        '_': '_',
+        '0': '\x00',
+    }
+
+    def scan_scalar(self):
+        self.index += 1
+        chunks = []
+        start = self.index
+        ignore_spaces = False
+        while self.data[self.index] != '"':
+            if self.data[self.index] == '\\':
+                ignore_spaces = False
+                chunks.append(self.data[start:self.index])
+                self.index += 1
+                ch = self.data[self.index]
+                self.index += 1
+                if ch == '\n':
+                    ignore_spaces = True
+                elif ch in self.QUOTE_CODES:
+                    length = self.QUOTE_CODES[ch]
+                    code = int(self.data[self.index:self.index+length], 16)
+                    chunks.append(chr(code))
+                    self.index += length
+                else:
+                    if ch not in self.QUOTE_REPLACES:
+                        raise CanonicalError("invalid escape code")
+                    chunks.append(self.QUOTE_REPLACES[ch])
+                start = self.index
+            elif self.data[self.index] == '\n':
+                chunks.append(self.data[start:self.index])
+                chunks.append(' ')
+                self.index += 1
+                start = self.index
+                ignore_spaces = True
+            elif ignore_spaces and self.data[self.index] == ' ':
+                self.index += 1
+                start = self.index
+            else:
+                ignore_spaces = False
+                self.index += 1
+        chunks.append(self.data[start:self.index])
+        self.index += 1
+        return yaml.ScalarToken(''.join(chunks), False, None, None)
+
+    def find_token(self):
+        found = False
+        while not found:
+            while self.data[self.index] in ' \t':
+                self.index += 1
+            if self.data[self.index] == '#':
+                while self.data[self.index] != '\n':
+                    self.index += 1
+            if self.data[self.index] == '\n':
+                self.index += 1
+            else:
+                found = True
+
+class CanonicalParser:
+
+    def __init__(self):
+        self.events = []
+        self.parsed = False
+
+    def dispose(self):
+        pass
+
+    # stream: STREAM-START document* STREAM-END
+    def parse_stream(self):
+        self.get_token(yaml.StreamStartToken)
+        self.events.append(yaml.StreamStartEvent(None, None))
+        while not self.check_token(yaml.StreamEndToken):
+            if self.check_token(yaml.DirectiveToken, yaml.DocumentStartToken):
+                self.parse_document()
+            else:
+                raise CanonicalError("document is expected, got "+repr(self.tokens[0]))
+        self.get_token(yaml.StreamEndToken)
+        self.events.append(yaml.StreamEndEvent(None, None))
+
+    # document: DIRECTIVE? DOCUMENT-START node
+    def parse_document(self):
+        if self.check_token(yaml.DirectiveToken):
+            self.get_token(yaml.DirectiveToken)
+        self.get_token(yaml.DocumentStartToken)
+        self.events.append(yaml.DocumentStartEvent(None, None))
+        self.parse_node()
+        self.events.append(yaml.DocumentEndEvent(None, None))
+
+    # node: ALIAS | ANCHOR? TAG? (SCALAR|sequence|mapping)
+    def parse_node(self):
+        if self.check_token(yaml.AliasToken):
+            self.events.append(yaml.AliasEvent(self.get_token_value(), None, None))
+        else:
+            anchor = None
+            if self.check_token(yaml.AnchorToken):
+                anchor = self.get_token_value()
+            tag = None
+            if self.check_token(yaml.TagToken):
+                tag = self.get_token_value()
+            if self.check_token(yaml.ScalarToken):
+                self.events.append(yaml.ScalarEvent(anchor, tag, (False, False), self.get_token_value(), None, None))
+            elif self.check_token(yaml.FlowSequenceStartToken):
+                self.events.append(yaml.SequenceStartEvent(anchor, tag, None, None))
+                self.parse_sequence()
+            elif self.check_token(yaml.FlowMappingStartToken):
+                self.events.append(yaml.MappingStartEvent(anchor, tag, None, None))
+                self.parse_mapping()
+            else:
+                raise CanonicalError("SCALAR, '[', or '{' is expected, got "+repr(self.tokens[0]))
+
+    # sequence: SEQUENCE-START (node (ENTRY node)*)? ENTRY? SEQUENCE-END
+    def parse_sequence(self):
+        self.get_token(yaml.FlowSequenceStartToken)
+        if not self.check_token(yaml.FlowSequenceEndToken):
+            self.parse_node()
+            while not self.check_token(yaml.FlowSequenceEndToken):
+                self.get_token(yaml.FlowEntryToken)
+                if not self.check_token(yaml.FlowSequenceEndToken):
+                    self.parse_node()
+        self.get_token(yaml.FlowSequenceEndToken)
+        self.events.append(yaml.SequenceEndEvent(None, None))
+
+    # mapping: MAPPING-START (map_entry (ENTRY map_entry)*)? ENTRY? MAPPING-END
+    def parse_mapping(self):
+        self.get_token(yaml.FlowMappingStartToken)
+        if not self.check_token(yaml.FlowMappingEndToken):
+            self.parse_map_entry()
+            while not self.check_token(yaml.FlowMappingEndToken):
+                self.get_token(yaml.FlowEntryToken)
+                if not self.check_token(yaml.FlowMappingEndToken):
+                    self.parse_map_entry()
+        self.get_token(yaml.FlowMappingEndToken)
+        self.events.append(yaml.MappingEndEvent(None, None))
+
+    # map_entry: KEY node VALUE node
+    def parse_map_entry(self):
+        self.get_token(yaml.KeyToken)
+        self.parse_node()
+        self.get_token(yaml.ValueToken)
+        self.parse_node()
+
+    def parse(self):
+        self.parse_stream()
+        self.parsed = True
+
+    def get_event(self):
+        if not self.parsed:
+            self.parse()
+        return self.events.pop(0)
+
+    def check_event(self, *choices):
+        if not self.parsed:
+            self.parse()
+        if self.events:
+            if not choices:
+                return True
+            for choice in choices:
+                if isinstance(self.events[0], choice):
+                    return True
+        return False
+
+    def peek_event(self):
+        if not self.parsed:
+            self.parse()
+        return self.events[0]
+
+class CanonicalLoader(CanonicalScanner, CanonicalParser,
+        yaml.composer.Composer, yaml.constructor.Constructor, yaml.resolver.Resolver):
+
+    def __init__(self, stream):
+        if hasattr(stream, 'read'):
+            stream = stream.read()
+        CanonicalScanner.__init__(self, stream)
+        CanonicalParser.__init__(self)
+        yaml.composer.Composer.__init__(self)
+        yaml.constructor.Constructor.__init__(self)
+        yaml.resolver.Resolver.__init__(self)
+
+yaml.CanonicalLoader = CanonicalLoader
+
+def canonical_scan(stream):
+    return yaml.scan(stream, Loader=CanonicalLoader)
+
+yaml.canonical_scan = canonical_scan
+
+def canonical_parse(stream):
+    return yaml.parse(stream, Loader=CanonicalLoader)
+
+yaml.canonical_parse = canonical_parse
+
+def canonical_compose(stream):
+    return yaml.compose(stream, Loader=CanonicalLoader)
+
+yaml.canonical_compose = canonical_compose
+
+def canonical_compose_all(stream):
+    return yaml.compose_all(stream, Loader=CanonicalLoader)
+
+yaml.canonical_compose_all = canonical_compose_all
+
+def canonical_load(stream):
+    return yaml.load(stream, Loader=CanonicalLoader)
+
+yaml.canonical_load = canonical_load
+
+def canonical_load_all(stream):
+    return yaml.load_all(stream, Loader=CanonicalLoader)
+
+yaml.canonical_load_all = canonical_load_all
+
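
canonical.py implements a scanner and parser for the canonical YAML form only
(an explicit '%YAML 1.1' directive, flow collections, '?'/':' markers, and
double-quoted scalars) and attaches canonical_* helpers to the yaml package as
a side effect of being imported. A small usage sketch, assuming it runs from
the test directory so that 'canonical' is importable:

    import yaml, canonical   # importing canonical registers yaml.canonical_*

    document = b'%YAML 1.1\n--- !!map { ? !!str "a" : !!str "1" }\n'
    for token in yaml.canonical_scan(document):
        print(token)                       # StreamStartToken, DirectiveToken, ...
    print(yaml.canonical_load(document))   # {'a': '1'}
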
diff --git a/tests/lib3/test_all.py b/tests/lib3/test_all.py
new file mode 100644
index 0000000..fec4ae4
--- /dev/null
+++ b/tests/lib3/test_all.py
@@ -0,0 +1,15 @@
+
+import sys, yaml, test_appliance
+
+def main(args=None):
+    collections = []
+    import test_yaml
+    collections.append(test_yaml)
+    if yaml.__with_libyaml__:
+        import test_yaml_ext
+        collections.append(test_yaml_ext)
+    test_appliance.run(collections, args)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/tests/lib3/test_appliance.py b/tests/lib3/test_appliance.py
new file mode 100644
index 0000000..81ff00b
--- /dev/null
+++ b/tests/lib3/test_appliance.py
@@ -0,0 +1,145 @@
+
+import sys, os, os.path, types, traceback, pprint
+
+DATA = 'tests/data'
+
+def find_test_functions(collections):
+    if not isinstance(collections, list):
+        collections = [collections]
+    functions = []
+    for collection in collections:
+        if not isinstance(collection, dict):
+            collection = vars(collection)
+        for key in sorted(collection):
+            value = collection[key]
+            if isinstance(value, types.FunctionType) and hasattr(value, 'unittest'):
+                functions.append(value)
+    return functions
+
+def find_test_filenames(directory):
+    filenames = {}
+    for filename in os.listdir(directory):
+        if os.path.isfile(os.path.join(directory, filename)):
+            base, ext = os.path.splitext(filename)
+            if base.endswith('-py2'):
+                continue
+            filenames.setdefault(base, []).append(ext)
+    filenames = sorted(filenames.items())
+    return filenames
+
+def parse_arguments(args):
+    if args is None:
+        args = sys.argv[1:]
+    verbose = False
+    if '-v' in args:
+        verbose = True
+        args.remove('-v')
+    if '--verbose' in args:
+        verbose = True
+        args.remove('--verbose')
+    if 'YAML_TEST_VERBOSE' in os.environ:
+        verbose = True
+    include_functions = []
+    if args:
+        include_functions.append(args.pop(0))
+    if 'YAML_TEST_FUNCTIONS' in os.environ:
+        include_functions.extend(os.environ['YAML_TEST_FUNCTIONS'].split())
+    include_filenames = []
+    include_filenames.extend(args)
+    if 'YAML_TEST_FILENAMES' in os.environ:
+        include_filenames.extend(os.environ['YAML_TEST_FILENAMES'].split())
+    return include_functions, include_filenames, verbose
+
+def execute(function, filenames, verbose):
+    name = function.__name__
+    if verbose:
+        sys.stdout.write('='*75+'\n')
+        sys.stdout.write('%s(%s)...\n' % (name, ', '.join(filenames)))
+    try:
+        function(verbose=verbose, *filenames)
+    except Exception as exc:
+        info = sys.exc_info()
+        if isinstance(exc, AssertionError):
+            kind = 'FAILURE'
+        else:
+            kind = 'ERROR'
+        if verbose:
+            traceback.print_exc(limit=1, file=sys.stdout)
+        else:
+            sys.stdout.write(kind[0])
+            sys.stdout.flush()
+    else:
+        kind = 'SUCCESS'
+        info = None
+        if not verbose:
+            sys.stdout.write('.')
+    sys.stdout.flush()
+    return (name, filenames, kind, info)
+
+def display(results, verbose):
+    if results and not verbose:
+        sys.stdout.write('\n')
+    total = len(results)
+    failures = 0
+    errors = 0
+    for name, filenames, kind, info in results:
+        if kind == 'SUCCESS':
+            continue
+        if kind == 'FAILURE':
+            failures += 1
+        if kind == 'ERROR':
+            errors += 1
+        sys.stdout.write('='*75+'\n')
+        sys.stdout.write('%s(%s): %s\n' % (name, ', '.join(filenames), kind))
+        if kind == 'ERROR':
+            traceback.print_exception(file=sys.stdout, *info)
+        else:
+            sys.stdout.write('Traceback (most recent call last):\n')
+            traceback.print_tb(info[2], file=sys.stdout)
+            sys.stdout.write('%s: see below\n' % info[0].__name__)
+            sys.stdout.write('~'*75+'\n')
+            for arg in info[1].args:
+                pprint.pprint(arg, stream=sys.stdout)
+        for filename in filenames:
+            sys.stdout.write('-'*75+'\n')
+            sys.stdout.write('%s:\n' % filename)
+            data = open(filename, 'r', errors='replace').read()
+            sys.stdout.write(data)
+            if data and data[-1] != '\n':
+                sys.stdout.write('\n')
+    sys.stdout.write('='*75+'\n')
+    sys.stdout.write('TESTS: %s\n' % total)
+    if failures:
+        sys.stdout.write('FAILURES: %s\n' % failures)
+    if errors:
+        sys.stdout.write('ERRORS: %s\n' % errors)
+
+def run(collections, args=None):
+    test_functions = find_test_functions(collections)
+    test_filenames = find_test_filenames(DATA)
+    include_functions, include_filenames, verbose = parse_arguments(args)
+    results = []
+    for function in test_functions:
+        if include_functions and function.__name__ not in include_functions:
+            continue
+        if function.unittest:
+            for base, exts in test_filenames:
+                if include_filenames and base not in include_filenames:
+                    continue
+                filenames = []
+                for ext in function.unittest:
+                    if ext not in exts:
+                        break
+                    filenames.append(os.path.join(DATA, base+ext))
+                else:
+                    skip_exts = getattr(function, 'skip', [])
+                    for skip_ext in skip_exts:
+                        if skip_ext in exts:
+                            break
+                    else:
+                        result = execute(function, filenames, verbose)
+                        results.append(result)
+        else:
+            result = execute(function, [], verbose)
+            results.append(result)
+    display(results, verbose=verbose)
+
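
test_appliance.py discovers tests through a simple convention: a test function
carries a 'unittest' attribute listing the data-file extensions it consumes,
and run() invokes it once for every basename under tests/data that provides
all of them, unless one of the extensions in the optional 'skip' attribute is
also present. Registering a new test looks like this (names are illustrative):

    # Hypothetical test: runs once per tests/data basename that has both
    # a '.data' and a '.canonical' file.
    def test_example(data_filename, canonical_filename, verbose=False):
        assert open(data_filename, 'rb').read()

    test_example.unittest = ['.data', '.canonical']
    test_example.skip = ['.skip-ext']   # optional veto extensions
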
diff --git a/tests/lib3/test_build.py b/tests/lib3/test_build.py
new file mode 100644
index 0000000..901e8ed
--- /dev/null
+++ b/tests/lib3/test_build.py
@@ -0,0 +1,10 @@
+
+if __name__ == '__main__':
+    import sys, os, distutils.util
+    build_lib = 'build/lib'
+    build_lib_ext = os.path.join('build', 'lib.%s-%s' % (distutils.util.get_platform(), sys.version[0:3]))
+    sys.path.insert(0, build_lib)
+    sys.path.insert(0, build_lib_ext)
+    import test_yaml, test_appliance
+    test_appliance.run(test_yaml)
+
diff --git a/tests/lib3/test_build_ext.py b/tests/lib3/test_build_ext.py
new file mode 100644
index 0000000..ff195d5
--- /dev/null
+++ b/tests/lib3/test_build_ext.py
@@ -0,0 +1,11 @@
+
+
+if __name__ == '__main__':
+    import sys, os, distutils.util
+    build_lib = 'build/lib'
+    build_lib_ext = os.path.join('build', 'lib.%s-%s' % (distutils.util.get_platform(), sys.version[0:3]))
+    sys.path.insert(0, build_lib)
+    sys.path.insert(0, build_lib_ext)
+    import test_yaml_ext, test_appliance
+    test_appliance.run(test_yaml_ext)
+
diff --git a/tests/lib3/test_canonical.py b/tests/lib3/test_canonical.py
new file mode 100644
index 0000000..a3b1153
--- /dev/null
+++ b/tests/lib3/test_canonical.py
@@ -0,0 +1,40 @@
+
+import yaml, canonical
+
+def test_canonical_scanner(canonical_filename, verbose=False):
+    data = open(canonical_filename, 'rb').read()
+    tokens = list(yaml.canonical_scan(data))
+    assert tokens, tokens
+    if verbose:
+        for token in tokens:
+            print(token)
+
+test_canonical_scanner.unittest = ['.canonical']
+
+def test_canonical_parser(canonical_filename, verbose=False):
+    data = open(canonical_filename, 'rb').read()
+    events = list(yaml.canonical_parse(data))
+    assert events, events
+    if verbose:
+        for event in events:
+            print(event)
+
+test_canonical_parser.unittest = ['.canonical']
+
+def test_canonical_error(data_filename, canonical_filename, verbose=False):
+    data = open(data_filename, 'rb').read()
+    try:
+        output = list(yaml.canonical_load_all(data))
+    except yaml.YAMLError as exc:
+        if verbose:
+            print(exc)
+    else:
+        raise AssertionError("expected an exception")
+
+test_canonical_error.unittest = ['.data', '.canonical']
+test_canonical_error.skip = ['.empty']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib3/test_constructor.py b/tests/lib3/test_constructor.py
new file mode 100644
index 0000000..427f53c
--- /dev/null
+++ b/tests/lib3/test_constructor.py
@@ -0,0 +1,260 @@
+
+import yaml
+import pprint
+
+import datetime
+import yaml.tokens
+
+def execute(code):
+    global value
+    exec(code)
+    return value
+
+def _make_objects():
+    global MyLoader, MyDumper, MyTestClass1, MyTestClass2, MyTestClass3, YAMLObject1, YAMLObject2,  \
+            AnObject, AnInstance, AState, ACustomState, InitArgs, InitArgsWithState,    \
+            NewArgs, NewArgsWithState, Reduce, ReduceWithState, MyInt, MyList, MyDict,  \
+            FixedOffset, today, execute
+
+    class MyLoader(yaml.Loader):
+        pass
+    class MyDumper(yaml.Dumper):
+        pass
+
+    class MyTestClass1:
+        def __init__(self, x, y=0, z=0):
+            self.x = x
+            self.y = y
+            self.z = z
+        def __eq__(self, other):
+            if isinstance(other, MyTestClass1):
+                return (self.__class__, self.__dict__) == (other.__class__, other.__dict__)
+            else:
+                return False
+
+    def construct1(constructor, node):
+        mapping = constructor.construct_mapping(node)
+        return MyTestClass1(**mapping)
+    def represent1(representer, native):
+        return representer.represent_mapping("!tag1", native.__dict__)
+
+    yaml.add_constructor("!tag1", construct1, Loader=MyLoader)
+    yaml.add_representer(MyTestClass1, represent1, Dumper=MyDumper)
+
+    class MyTestClass2(MyTestClass1, yaml.YAMLObject):
+        yaml_loader = MyLoader
+        yaml_dumper = MyDumper
+        yaml_tag = "!tag2"
+        def from_yaml(cls, constructor, node):
+            x = constructor.construct_yaml_int(node)
+            return cls(x=x)
+        from_yaml = classmethod(from_yaml)
+        def to_yaml(cls, representer, native):
+            return representer.represent_scalar(cls.yaml_tag, str(native.x))
+        to_yaml = classmethod(to_yaml)
+
+    class MyTestClass3(MyTestClass2):
+        yaml_tag = "!tag3"
+        def from_yaml(cls, constructor, node):
+            mapping = constructor.construct_mapping(node)
+            if '=' in mapping:
+                x = mapping['=']
+                del mapping['=']
+                mapping['x'] = x
+            return cls(**mapping)
+        from_yaml = classmethod(from_yaml)
+        def to_yaml(cls, representer, native):
+            return representer.represent_mapping(cls.yaml_tag, native.__dict__)
+        to_yaml = classmethod(to_yaml)
+
+    class YAMLObject1(yaml.YAMLObject):
+        yaml_loader = MyLoader
+        yaml_dumper = MyDumper
+        yaml_tag = '!foo'
+        def __init__(self, my_parameter=None, my_another_parameter=None):
+            self.my_parameter = my_parameter
+            self.my_another_parameter = my_another_parameter
+        def __eq__(self, other):
+            if isinstance(other, YAMLObject1):
+                return (self.__class__, self.__dict__) == (other.__class__, other.__dict__)
+            else:
+                return False
+
+    class YAMLObject2(yaml.YAMLObject):
+        yaml_loader = MyLoader
+        yaml_dumper = MyDumper
+        yaml_tag = '!bar'
+        def __init__(self, foo=1, bar=2, baz=3):
+            self.foo = foo
+            self.bar = bar
+            self.baz = baz
+        def __getstate__(self):
+            return {1: self.foo, 2: self.bar, 3: self.baz}
+        def __setstate__(self, state):
+            self.foo = state[1]
+            self.bar = state[2]
+            self.baz = state[3]
+        def __eq__(self, other):
+            if isinstance(other, YAMLObject2):
+                return (self.__class__, self.__dict__) == (other.__class__, other.__dict__)
+            else:
+                return False
+
+    class AnObject:
+        def __new__(cls, foo=None, bar=None, baz=None):
+            self = object.__new__(cls)
+            self.foo = foo
+            self.bar = bar
+            self.baz = baz
+            return self
+        def __eq__(self, other):
+            return type(self) is type(other) and    \
+                    (self.foo, self.bar, self.baz) == (other.foo, other.bar, other.baz)
+
+    class AnInstance:
+        def __init__(self, foo=None, bar=None, baz=None):
+            self.foo = foo
+            self.bar = bar
+            self.baz = baz
+        def __eq__(self, other):
+            return type(self) is type(other) and    \
+                    (self.foo, self.bar, self.baz) == (other.foo, other.bar, other.baz)
+
+    class AState(AnInstance):
+        def __getstate__(self):
+            return {
+                '_foo': self.foo,
+                '_bar': self.bar,
+                '_baz': self.baz,
+            }
+        def __setstate__(self, state):
+            self.foo = state['_foo']
+            self.bar = state['_bar']
+            self.baz = state['_baz']
+
+    class ACustomState(AnInstance):
+        def __getstate__(self):
+            return (self.foo, self.bar, self.baz)
+        def __setstate__(self, state):
+            self.foo, self.bar, self.baz = state
+
+    class NewArgs(AnObject):
+        def __getnewargs__(self):
+            return (self.foo, self.bar, self.baz)
+        def __getstate__(self):
+            return {}
+
+    class NewArgsWithState(AnObject):
+        def __getnewargs__(self):
+            return (self.foo, self.bar)
+        def __getstate__(self):
+            return self.baz
+        def __setstate__(self, state):
+            self.baz = state
+
+    InitArgs = NewArgs
+
+    InitArgsWithState = NewArgsWithState
+
+    class Reduce(AnObject):
+        def __reduce__(self):
+            return self.__class__, (self.foo, self.bar, self.baz)
+
+    class ReduceWithState(AnObject):
+        def __reduce__(self):
+            return self.__class__, (self.foo, self.bar), self.baz
+        def __setstate__(self, state):
+            self.baz = state
+
+    class MyInt(int):
+        def __eq__(self, other):
+            return type(self) is type(other) and int(self) == int(other)
+
+    class MyList(list):
+        def __init__(self, n=1):
+            self.extend([None]*n)
+        def __eq__(self, other):
+            return type(self) is type(other) and list(self) == list(other)
+
+    class MyDict(dict):
+        def __init__(self, n=1):
+            for k in range(n):
+                self[k] = None
+        def __eq__(self, other):
+            return type(self) is type(other) and dict(self) == dict(other)
+
+    class FixedOffset(datetime.tzinfo):
+        def __init__(self, offset, name):
+            self.__offset = datetime.timedelta(minutes=offset)
+            self.__name = name
+        def utcoffset(self, dt):
+            return self.__offset
+        def tzname(self, dt):
+            return self.__name
+        def dst(self, dt):
+            return datetime.timedelta(0)
+
+    today = datetime.date.today()
+
+def _load_code(expression):
+    return eval(expression)
+
+def _serialize_value(data):
+    if isinstance(data, list):
+        return '[%s]' % ', '.join(map(_serialize_value, data))
+    elif isinstance(data, dict):
+        items = []
+        for key, value in data.items():
+            key = _serialize_value(key)
+            value = _serialize_value(value)
+            items.append("%s: %s" % (key, value))
+        items.sort()
+        return '{%s}' % ', '.join(items)
+    elif isinstance(data, datetime.datetime):
+        return repr(data.utctimetuple())
+    elif isinstance(data, float) and data != data:
+        return '?'
+    else:
+        return str(data)
+
+def test_constructor_types(data_filename, code_filename, verbose=False):
+    _make_objects()
+    native1 = None
+    native2 = None
+    try:
+        native1 = list(yaml.load_all(open(data_filename, 'rb'), Loader=MyLoader))
+        if len(native1) == 1:
+            native1 = native1[0]
+        native2 = _load_code(open(code_filename, 'rb').read())
+        try:
+            if native1 == native2:
+                return
+        except TypeError:
+            pass
+        if verbose:
+            print("SERIALIZED NATIVE1:")
+            print(_serialize_value(native1))
+            print("SERIALIZED NATIVE2:")
+            print(_serialize_value(native2))
+        assert _serialize_value(native1) == _serialize_value(native2), (native1, native2)
+    finally:
+        if verbose:
+            print("NATIVE1:")
+            pprint.pprint(native1)
+            print("NATIVE2:")
+            pprint.pprint(native2)
+
+test_constructor_types.unittest = ['.data', '.code']
+
+if __name__ == '__main__':
+    import sys, test_constructor
+    sys.modules['test_constructor'] = sys.modules['__main__']
+    import test_appliance
+    test_appliance.run(globals())
+
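
test_constructor.py exercises PyYAML's use of the pickle protocol: classes can
steer loading and dumping through __reduce__, __getnewargs__ and
__getstate__/__setstate__, and the default (unsafe) Loader/Dumper pair
round-trips arbitrary instances through !!python/object tags. A minimal
round-trip sketch (the class is illustrative):

    import yaml

    class Box:
        def __init__(self, items=None):
            self.items = items or []
        def __getstate__(self):
            return {'items': self.items}
        def __setstate__(self, state):
            self.items = state['items']

    text = yaml.dump(Box([1, 2]))   # !!python/object:__main__.Box plus the state
    box = yaml.load(text, Loader=yaml.Loader)
    assert box.items == [1, 2]
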
diff --git a/tests/lib3/test_emitter.py b/tests/lib3/test_emitter.py
new file mode 100644
index 0000000..90d1652
--- /dev/null
+++ b/tests/lib3/test_emitter.py
@@ -0,0 +1,100 @@
+
+import yaml
+
+def _compare_events(events1, events2):
+    assert len(events1) == len(events2), (events1, events2)
+    for event1, event2 in zip(events1, events2):
+        assert event1.__class__ == event2.__class__, (event1, event2)
+        if isinstance(event1, yaml.NodeEvent):
+            assert event1.anchor == event2.anchor, (event1, event2)
+        if isinstance(event1, yaml.CollectionStartEvent):
+            assert event1.tag == event2.tag, (event1, event2)
+        if isinstance(event1, yaml.ScalarEvent):
+            if True not in event1.implicit+event2.implicit:
+                assert event1.tag == event2.tag, (event1, event2)
+            assert event1.value == event2.value, (event1, event2)
+
+def test_emitter_on_data(data_filename, canonical_filename, verbose=False):
+    events = list(yaml.parse(open(data_filename, 'rb')))
+    output = yaml.emit(events)
+    if verbose:
+        print("OUTPUT:")
+        print(output)
+    new_events = list(yaml.parse(output))
+    _compare_events(events, new_events)
+
+test_emitter_on_data.unittest = ['.data', '.canonical']
+
+def test_emitter_on_canonical(canonical_filename, verbose=False):
+    events = list(yaml.parse(open(canonical_filename, 'rb')))
+    for canonical in [False, True]:
+        output = yaml.emit(events, canonical=canonical)
+        if verbose:
+            print("OUTPUT (canonical=%s):" % canonical)
+            print(output)
+        new_events = list(yaml.parse(output))
+        _compare_events(events, new_events)
+
+test_emitter_on_canonical.unittest = ['.canonical']
+
+def test_emitter_styles(data_filename, canonical_filename, verbose=False):
+    for filename in [data_filename, canonical_filename]:
+        events = list(yaml.parse(open(filename, 'rb')))
+        for flow_style in [False, True]:
+            for style in ['|', '>', '"', '\'', '']:
+                styled_events = []
+                for event in events:
+                    if isinstance(event, yaml.ScalarEvent):
+                        event = yaml.ScalarEvent(event.anchor, event.tag,
+                                event.implicit, event.value, style=style)
+                    elif isinstance(event, yaml.SequenceStartEvent):
+                        event = yaml.SequenceStartEvent(event.anchor, event.tag,
+                                event.implicit, flow_style=flow_style)
+                    elif isinstance(event, yaml.MappingStartEvent):
+                        event = yaml.MappingStartEvent(event.anchor, event.tag,
+                                event.implicit, flow_style=flow_style)
+                    styled_events.append(event)
+                output = yaml.emit(styled_events)
+                if verbose:
+                    print("OUTPUT (filename=%r, flow_style=%r, style=%r)" % (filename, flow_style, style))
+                    print(output)
+                new_events = list(yaml.parse(output))
+                _compare_events(events, new_events)
+
+test_emitter_styles.unittest = ['.data', '.canonical']
+
+class EventsLoader(yaml.Loader):
+
+    def construct_event(self, node):
+        if isinstance(node, yaml.ScalarNode):
+            mapping = {}
+        else:
+            mapping = self.construct_mapping(node)
+        class_name = str(node.tag[1:])+'Event'
+        if class_name in ['AliasEvent', 'ScalarEvent', 'SequenceStartEvent', 'MappingStartEvent']:
+            mapping.setdefault('anchor', None)
+        if class_name in ['ScalarEvent', 'SequenceStartEvent', 'MappingStartEvent']:
+            mapping.setdefault('tag', None)
+        if class_name in ['SequenceStartEvent', 'MappingStartEvent']:
+            mapping.setdefault('implicit', True)
+        if class_name == 'ScalarEvent':
+            mapping.setdefault('implicit', (False, True))
+            mapping.setdefault('value', '')
+        value = getattr(yaml, class_name)(**mapping)
+        return value
+
+EventsLoader.add_constructor(None, EventsLoader.construct_event)
+
+def test_emitter_events(events_filename, verbose=False):
+    events = list(yaml.load(open(events_filename, 'rb'), Loader=EventsLoader))
+    output = yaml.emit(events)
+    if verbose:
+        print("OUTPUT:")
+        print(output)
+    new_events = list(yaml.parse(output))
+    _compare_events(events, new_events)
+
+test_emitter_events.unittest = ['.events']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
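
Both these tests and EventsLoader work at the event level: yaml.emit() turns a
sequence of parsing events back into text, so a document can be synthesized
without ever building nodes or native objects. A sketch with hand-built
events:

    import yaml

    events = [
        yaml.StreamStartEvent(),
        yaml.DocumentStartEvent(),
        yaml.SequenceStartEvent(anchor=None, tag=None, implicit=True),
        yaml.ScalarEvent(anchor=None, tag=None, implicit=(True, True), value='one'),
        yaml.ScalarEvent(anchor=None, tag=None, implicit=(True, True), value='two'),
        yaml.SequenceEndEvent(),
        yaml.DocumentEndEvent(),
        yaml.StreamEndEvent(),
    ]
    print(yaml.emit(events))   # "- one\n- two\n"
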
diff --git a/tests/lib3/test_errors.py b/tests/lib3/test_errors.py
new file mode 100644
index 0000000..a3f86af
--- /dev/null
+++ b/tests/lib3/test_errors.py
@@ -0,0 +1,67 @@
+
+import yaml, test_emitter
+
+def test_loader_error(error_filename, verbose=False):
+    try:
+        list(yaml.load_all(open(error_filename, 'rb')))
+    except yaml.YAMLError as exc:
+        if verbose:
+            print("%s:" % exc.__class__.__name__, exc)
+    else:
+        raise AssertionError("expected an exception")
+
+test_loader_error.unittest = ['.loader-error']
+
+def test_loader_error_string(error_filename, verbose=False):
+    try:
+        list(yaml.load_all(open(error_filename, 'rb').read()))
+    except yaml.YAMLError as exc:
+        if verbose:
+            print("%s:" % exc.__class__.__name__, exc)
+    else:
+        raise AssertionError("expected an exception")
+
+test_loader_error_string.unittest = ['.loader-error']
+
+def test_loader_error_single(error_filename, verbose=False):
+    try:
+        yaml.load(open(error_filename, 'rb').read())
+    except yaml.YAMLError as exc:
+        if verbose:
+            print("%s:" % exc.__class__.__name__, exc)
+    else:
+        raise AssertionError("expected an exception")
+
+test_loader_error_single.unittest = ['.single-loader-error']
+
+def test_emitter_error(error_filename, verbose=False):
+    events = list(yaml.load(open(error_filename, 'rb'),
+                    Loader=test_emitter.EventsLoader))
+    try:
+        yaml.emit(events)
+    except yaml.YAMLError as exc:
+        if verbose:
+            print("%s:" % exc.__class__.__name__, exc)
+    else:
+        raise AssertionError("expected an exception")
+
+test_emitter_error.unittest = ['.emitter-error']
+
+def test_dumper_error(error_filename, verbose=False):
+    code = open(error_filename, 'rb').read()
+    try:
+        import yaml
+        from io import StringIO
+        exec(code)
+    except yaml.YAMLError as exc:
+        if verbose:
+            print("%s:" % exc.__class__.__name__, exc)
+    else:
+        raise AssertionError("expected an exception")
+
+test_dumper_error.unittest = ['.dumper-error']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
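
Every failure these tests provoke surfaces as a subclass of yaml.YAMLError,
and the marked variants carry the problem position that the verbose branches
print. For example:

    import yaml

    try:
        yaml.load("key: [unclosed", Loader=yaml.SafeLoader)
    except yaml.YAMLError as exc:
        print("%s:" % exc.__class__.__name__, exc)   # a ParserError with a mark
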
diff --git a/tests/lib3/test_input_output.py b/tests/lib3/test_input_output.py
new file mode 100644
index 0000000..70a945a
--- /dev/null
+++ b/tests/lib3/test_input_output.py
@@ -0,0 +1,150 @@
+
+import yaml
+import codecs, io, tempfile, os, os.path
+
+def test_unicode_input(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    value = ' '.join(data.split())
+    output = yaml.load(data)
+    assert output == value, (output, value)
+    output = yaml.load(io.StringIO(data))
+    assert output == value, (output, value)
+    for input in [data.encode('utf-8'),
+                    codecs.BOM_UTF8+data.encode('utf-8'),
+                    codecs.BOM_UTF16_BE+data.encode('utf-16-be'),
+                    codecs.BOM_UTF16_LE+data.encode('utf-16-le')]:
+        if verbose:
+            print("INPUT:", repr(input[:10]), "...")
+        output = yaml.load(input)
+        assert output == value, (output, value)
+        output = yaml.load(io.BytesIO(input))
+        assert output == value, (output, value)
+
+test_unicode_input.unittest = ['.unicode']
+
+def test_unicode_input_errors(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    for input in [data.encode('latin1', 'ignore'),
+                    data.encode('utf-16-be'), data.encode('utf-16-le'),
+                    codecs.BOM_UTF8+data.encode('utf-16-be'),
+                    codecs.BOM_UTF16_BE+data.encode('utf-16-le'),
+                    codecs.BOM_UTF16_LE+data.encode('utf-8')+b'!']:
+        try:
+            yaml.load(input)
+        except yaml.YAMLError as exc:
+            if verbose:
+                print(exc)
+        else:
+            raise AssertionError("expected an exception")
+        try:
+            yaml.load(io.BytesIO(input))
+        except yaml.YAMLError as exc:
+            if verbose:
+                print(exc)
+        else:
+            raise AssertionError("expected an exception")
+
+test_unicode_input_errors.unittest = ['.unicode']
+
+def test_unicode_output(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    value = ' '.join(data.split())
+    for allow_unicode in [False, True]:
+        data1 = yaml.dump(value, allow_unicode=allow_unicode)
+        for encoding in [None, 'utf-8', 'utf-16-be', 'utf-16-le']:
+            stream = io.StringIO()
+            yaml.dump(value, stream, encoding=encoding, allow_unicode=allow_unicode)
+            data2 = stream.getvalue()
+            data3 = yaml.dump(value, encoding=encoding, allow_unicode=allow_unicode)
+            if encoding is not None:
+                assert isinstance(data3, bytes)
+                data3 = data3.decode(encoding)
+            stream = io.BytesIO()
+            if encoding is None:
+                try:
+                    yaml.dump(value, stream, encoding=encoding, allow_unicode=allow_unicode)
+                except TypeError as exc:
+                    if verbose:
+                        print(exc)
+                    data4 = None
+                else:
+                    raise AssertionError("expected an exception")
+            else:
+                yaml.dump(value, stream, encoding=encoding, allow_unicode=allow_unicode)
+                data4 = stream.getvalue()
+                if verbose:
+                    print("BYTES:", data4[:50])
+                data4 = data4.decode(encoding)
+            for copy in [data1, data2, data3, data4]:
+                if copy is None:
+                    continue
+                assert isinstance(copy, str)
+                if allow_unicode:
+                    try:
+                        copy[4:].encode('ascii')
+                    except UnicodeEncodeError as exc:
+                        if verbose:
+                            print(exc)
+                    else:
+                        raise AssertionError("expected an exception")
+                else:
+                    copy[4:].encode('ascii')
+            assert isinstance(data1, str), (type(data1), encoding)
+            assert isinstance(data2, str), (type(data2), encoding)
+
+test_unicode_output.unittest = ['.unicode']
+
+def test_file_output(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    handle, filename = tempfile.mkstemp()
+    os.close(handle)
+    try:
+        stream = io.StringIO()
+        yaml.dump(data, stream, allow_unicode=True)
+        data1 = stream.getvalue()
+        stream = io.BytesIO()
+        yaml.dump(data, stream, encoding='utf-16-le', allow_unicode=True)
+        data2 = stream.getvalue().decode('utf-16-le')[1:]
+        stream = open(filename, 'w', encoding='utf-16-le')
+        yaml.dump(data, stream, allow_unicode=True)
+        stream.close()
+        data3 = open(filename, 'r', encoding='utf-16-le').read()
+        stream = open(filename, 'wb')
+        yaml.dump(data, stream, encoding='utf-8', allow_unicode=True)
+        stream.close()
+        data4 = open(filename, 'r', encoding='utf-8').read()
+        assert data1 == data2, (data1, data2)
+        assert data1 == data3, (data1, data3)
+        assert data1 == data4, (data1, data4)
+    finally:
+        if os.path.exists(filename):
+            os.unlink(filename)
+
+test_file_output.unittest = ['.unicode']
+
+def test_unicode_transfer(unicode_filename, verbose=False):
+    data = open(unicode_filename, 'rb').read().decode('utf-8')
+    for encoding in [None, 'utf-8', 'utf-16-be', 'utf-16-le']:
+        input = data
+        if encoding is not None:
+            input = ('\ufeff'+input).encode(encoding)
+        output1 = yaml.emit(yaml.parse(input), allow_unicode=True)
+        if encoding is None:
+            stream = io.StringIO()
+        else:
+            stream = io.BytesIO()
+        yaml.emit(yaml.parse(input), stream, allow_unicode=True)
+        output2 = stream.getvalue()
+        assert isinstance(output1, str), (type(output1), encoding)
+        if encoding is None:
+            assert isinstance(output2, str), (type(output1), encoding)
+        else:
+            assert isinstance(output2, bytes), (type(output1), encoding)
+            output2.decode(encoding)
+
+test_unicode_transfer.unittest = ['.unicode']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
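
The contract these tests pin down: yaml.dump() returns str when no encoding is
given and bytes when one is, a bytes stream requires an explicit encoding, and
on input yaml.load() accepts str as well as utf-8/utf-16 bytes with BOM
detection. In short:

    import yaml, codecs

    assert isinstance(yaml.dump('héllo', allow_unicode=True), str)
    assert isinstance(yaml.dump('héllo', encoding='utf-8', allow_unicode=True), bytes)

    data = codecs.BOM_UTF16_LE + 'héllo'.encode('utf-16-le')
    assert yaml.load(data, Loader=yaml.SafeLoader) == 'héllo'
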
diff --git a/tests/lib3/test_mark.py b/tests/lib3/test_mark.py
new file mode 100644
index 0000000..09eea2e
--- /dev/null
+++ b/tests/lib3/test_mark.py
@@ -0,0 +1,32 @@
+
+import yaml
+
+def test_marks(marks_filename, verbose=False):
+    inputs = open(marks_filename, 'r').read().split('---\n')[1:]
+    for input in inputs:
+        index = 0
+        line = 0
+        column = 0
+        while input[index] != '*':
+            if input[index] == '\n':
+                line += 1
+                column = 0
+            else:
+                column += 1
+            index += 1
+        mark = yaml.Mark(marks_filename, index, line, column, input, index)
+        snippet = mark.get_snippet(indent=2, max_length=79)
+        if verbose:
+            print(snippet)
+        assert isinstance(snippet, str), type(snippet)
+        assert snippet.count('\n') == 1, snippet.count('\n')
+        data, pointer = snippet.split('\n')
+        assert len(data) < 82, len(data)
+        assert data[len(pointer)-1] == '*', data[len(pointer)-1]
+
+test_marks.unittest = ['.marks']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
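
yaml.Mark records a position (name, index, line, column) within a buffer, and
get_snippet() renders the surrounding line plus a caret line pointing at the
position, which is exactly what the assertions above check. A quick
illustration:

    import yaml

    buffer = "- first\n- *second\n"
    index = buffer.index('*')
    mark = yaml.Mark('<example>', index, 1, 2, buffer, index)
    print(mark.get_snippet(indent=2, max_length=79))
    # prints:
    #   - *second
    #     ^
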
diff --git a/tests/lib3/test_reader.py b/tests/lib3/test_reader.py
new file mode 100644
index 0000000..c07b346
--- /dev/null
+++ b/tests/lib3/test_reader.py
@@ -0,0 +1,34 @@
+
+import yaml.reader
+
+def _run_reader(data, verbose):
+    try:
+        stream = yaml.reader.Reader(data)
+        while stream.peek() != '\0':
+            stream.forward()
+    except yaml.reader.ReaderError as exc:
+        if verbose:
+            print(exc)
+    else:
+        raise AssertionError("expected an exception")
+
+def test_stream_error(error_filename, verbose=False):
+    _run_reader(open(error_filename, 'rb'), verbose)
+    _run_reader(open(error_filename, 'rb').read(), verbose)
+    for encoding in ['utf-8', 'utf-16-le', 'utf-16-be']:
+        try:
+            data = open(error_filename, 'rb').read().decode(encoding)
+            break
+        except UnicodeDecodeError:
+            pass
+    else:
+        return
+    _run_reader(data, verbose)
+    _run_reader(open(error_filename, encoding=encoding), verbose)
+
+test_stream_error.unittest = ['.stream-error']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib3/test_recursive.py b/tests/lib3/test_recursive.py
new file mode 100644
index 0000000..321a75f
--- /dev/null
+++ b/tests/lib3/test_recursive.py
@@ -0,0 +1,51 @@
+
+import yaml
+
+class AnInstance:
+
+    def __init__(self, foo, bar):
+        self.foo = foo
+        self.bar = bar
+
+    def __repr__(self):
+        try:
+            return "%s(foo=%r, bar=%r)" % (self.__class__.__name__,
+                    self.foo, self.bar)
+        except RuntimeError:
+            return "%s(foo=..., bar=...)" % self.__class__.__name__
+
+class AnInstanceWithState(AnInstance):
+
+    def __getstate__(self):
+        return {'attributes': [self.foo, self.bar]}
+
+    def __setstate__(self, state):
+        self.foo, self.bar = state['attributes']
+
+def test_recursive(recursive_filename, verbose=False):
+    context = globals().copy()
+    exec(open(recursive_filename, 'rb').read(), context)
+    value1 = context['value']
+    output1 = None
+    value2 = None
+    output2 = None
+    try:
+        output1 = yaml.dump(value1)
+        value2 = yaml.load(output1)
+        output2 = yaml.dump(value2)
+        assert output1 == output2, (output1, output2)
+    finally:
+        if verbose:
+            print("VALUE1:", value1)
+            print("VALUE2:", value2)
+            print("OUTPUT1:")
+            print(output1)
+            print("OUTPUT2:")
+            print(output2)
+
+test_recursive.unittest = ['.recursive']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
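
test_recursive.py checks that self-referencing object graphs survive a
dump/load/dump cycle via anchors and aliases. The mechanism in miniature:

    import yaml

    data = yaml.load('&a [1, *a]', Loader=yaml.Loader)
    assert data[1] is data      # the alias resolves to the list itself
    print(yaml.dump(data))      # "&id001 [1, *id001]" or an equivalent block form
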
diff --git a/tests/lib3/test_representer.py b/tests/lib3/test_representer.py
new file mode 100644
index 0000000..10d4a8f
--- /dev/null
+++ b/tests/lib3/test_representer.py
@@ -0,0 +1,43 @@
+
+import yaml
+import test_constructor
+import pprint
+
+def test_representer_types(code_filename, verbose=False):
+    test_constructor._make_objects()
+    for allow_unicode in [False, True]:
+        for encoding in ['utf-8', 'utf-16-be', 'utf-16-le']:
+            native1 = test_constructor._load_code(open(code_filename, 'rb').read())
+            native2 = None
+            try:
+                output = yaml.dump(native1, Dumper=test_constructor.MyDumper,
+                            allow_unicode=allow_unicode, encoding=encoding)
+                native2 = yaml.load(output, Loader=test_constructor.MyLoader)
+                try:
+                    if native1 == native2:
+                        continue
+                except TypeError:
+                    pass
+                value1 = test_constructor._serialize_value(native1)
+                value2 = test_constructor._serialize_value(native2)
+                if verbose:
+                    print("SERIALIZED NATIVE1:")
+                    print(value1)
+                    print("SERIALIZED NATIVE2:")
+                    print(value2)
+                assert value1 == value2, (native1, native2)
+            finally:
+                if verbose:
+                    print("NATIVE1:")
+                    pprint.pprint(native1)
+                    print("NATIVE2:")
+                    pprint.pprint(native2)
+                    print("OUTPUT:")
+                    print(output)
+
+test_representer_types.unittest = ['.code']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib3/test_resolver.py b/tests/lib3/test_resolver.py
new file mode 100644
index 0000000..f059dab
--- /dev/null
+++ b/tests/lib3/test_resolver.py
@@ -0,0 +1,92 @@
+
+import yaml
+import pprint
+
+def test_implicit_resolver(data_filename, detect_filename, verbose=False):
+    correct_tag = None
+    node = None
+    try:
+        correct_tag = open(detect_filename, 'r').read().strip()
+        node = yaml.compose(open(data_filename, 'rb'))
+        assert isinstance(node, yaml.SequenceNode), node
+        for scalar in node.value:
+            assert isinstance(scalar, yaml.ScalarNode), scalar
+            assert scalar.tag == correct_tag, (scalar.tag, correct_tag)
+    finally:
+        if verbose:
+            print("CORRECT TAG:", correct_tag)
+            if hasattr(node, 'value'):
+                print("CHILDREN:")
+                pprint.pprint(node.value)
+
+test_implicit_resolver.unittest = ['.data', '.detect']
+
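+# Register path resolvers on private Loader/Dumper subclasses so the
+# global yaml classes stay untouched between tests.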
+def _make_path_loader_and_dumper():
+    global MyLoader, MyDumper
+
+    class MyLoader(yaml.Loader):
+        pass
+    class MyDumper(yaml.Dumper):
+        pass
+
+    yaml.add_path_resolver('!root', [],
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver('!root/scalar', [], str,
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver('!root/key11/key12/*', ['key11', 'key12'],
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver('!root/key21/1/*', ['key21', 1],
+            Loader=MyLoader, Dumper=MyDumper)
+    yaml.add_path_resolver('!root/key31/*/*/key14/map', ['key31', None, None, 'key14'], dict,
+            Loader=MyLoader, Dumper=MyDumper)
+
+    return MyLoader, MyDumper
+
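+# Reduce a node tree to nested (tag, value) tuples so two trees can be
+# compared with plain '=='.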
+def _convert_node(node):
+    if isinstance(node, yaml.ScalarNode):
+        return (node.tag, node.value)
+    elif isinstance(node, yaml.SequenceNode):
+        value = []
+        for item in node.value:
+            value.append(_convert_node(item))
+        return (node.tag, value)
+    elif isinstance(node, yaml.MappingNode):
+        value = []
+        for key, item in node.value:
+            value.append((_convert_node(key), _convert_node(item)))
+        return (node.tag, value)
+
+def test_path_resolver_loader(data_filename, path_filename, verbose=False):
+    _make_path_loader_and_dumper()
+    nodes1 = list(yaml.compose_all(open(data_filename, 'rb').read(), Loader=MyLoader))
+    nodes2 = list(yaml.compose_all(open(path_filename, 'rb').read()))
+    try:
+        for node1, node2 in zip(nodes1, nodes2):
+            data1 = _convert_node(node1)
+            data2 = _convert_node(node2)
+            assert data1 == data2, (data1, data2)
+    finally:
+        if verbose:
+            print(yaml.serialize_all(nodes1))
+
+test_path_resolver_loader.unittest = ['.data', '.path']
+
+def test_path_resolver_dumper(data_filename, path_filename, verbose=False):
+    _make_path_loader_and_dumper()
+    for filename in [data_filename, path_filename]:
+        output = yaml.serialize_all(yaml.compose_all(open(filename, 'rb')), Dumper=MyDumper)
+        if verbose:
+            print(output)
+        nodes1 = yaml.compose_all(output)
+        nodes2 = yaml.compose_all(open(data_filename, 'rb'))
+        for node1, node2 in zip(nodes1, nodes2):
+            data1 = _convert_node(node1)
+            data2 = _convert_node(node2)
+            assert data1 == data2, (data1, data2)
+
+test_path_resolver_dumper.unittest = ['.data', '.path']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib3/test_structure.py b/tests/lib3/test_structure.py
new file mode 100644
index 0000000..6d6f59d
--- /dev/null
+++ b/tests/lib3/test_structure.py
@@ -0,0 +1,187 @@
+
+import yaml, canonical
+import pprint
+
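+# Collapse the event stream into a simple structure: non-empty scalars
+# become True and empty ones None, sequences become lists, mappings
+# become key/value pair lists, aliases '*', anything else '?'.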
+def _convert_structure(loader):
+    if loader.check_event(yaml.ScalarEvent):
+        event = loader.get_event()
+        if event.tag or event.anchor or event.value:
+            return True
+        else:
+            return None
+    elif loader.check_event(yaml.SequenceStartEvent):
+        loader.get_event()
+        sequence = []
+        while not loader.check_event(yaml.SequenceEndEvent):
+            sequence.append(_convert_structure(loader))
+        loader.get_event()
+        return sequence
+    elif loader.check_event(yaml.MappingStartEvent):
+        loader.get_event()
+        mapping = []
+        while not loader.check_event(yaml.MappingEndEvent):
+            key = _convert_structure(loader)
+            value = _convert_structure(loader)
+            mapping.append((key, value))
+        loader.get_event()
+        return mapping
+    elif loader.check_event(yaml.AliasEvent):
+        loader.get_event()
+        return '*'
+    else:
+        loader.get_event()
+        return '?'
+
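+# Compare the structure recovered from the event stream against the
+# expected structure evaluated from the '.structure' file.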
+def test_structure(data_filename, structure_filename, verbose=False):
+    nodes1 = []
+    nodes2 = eval(open(structure_filename, 'r').read())
+    try:
+        loader = yaml.Loader(open(data_filename, 'rb'))
+        while loader.check_event():
+            if loader.check_event(yaml.StreamStartEvent, yaml.StreamEndEvent,
+                                yaml.DocumentStartEvent, yaml.DocumentEndEvent):
+                loader.get_event()
+                continue
+            nodes1.append(_convert_structure(loader))
+        if len(nodes1) == 1:
+            nodes1 = nodes1[0]
+        assert nodes1 == nodes2, (nodes1, nodes2)
+    finally:
+        if verbose:
+            print("NODES1:")
+            pprint.pprint(nodes1)
+            print("NODES2:")
+            pprint.pprint(nodes2)
+
+test_structure.unittest = ['.data', '.structure']
+
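+# Events must agree on class and scalar value; tags are only compared
+# when both sides carry an explicit tag, unless a 'full' comparison is
+# requested (as for canonical input).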
+def _compare_events(events1, events2, full=False):
+    assert len(events1) == len(events2), (len(events1), len(events2))
+    for event1, event2 in zip(events1, events2):
+        assert event1.__class__ == event2.__class__, (event1, event2)
+        if isinstance(event1, yaml.AliasEvent) and full:
+            assert event1.anchor == event2.anchor, (event1, event2)
+        if isinstance(event1, (yaml.ScalarEvent, yaml.CollectionStartEvent)):
+            if (event1.tag not in [None, '!'] and event2.tag not in [None, '!']) or full:
+                assert event1.tag == event2.tag, (event1, event2)
+        if isinstance(event1, yaml.ScalarEvent):
+            assert event1.value == event2.value, (event1, event2)
+
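+# Parsing a document and parsing its canonical form must yield
+# matching event streams.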
+def test_parser(data_filename, canonical_filename, verbose=False):
+    events1 = None
+    events2 = None
+    try:
+        events1 = list(yaml.parse(open(data_filename, 'rb')))
+        events2 = list(yaml.canonical_parse(open(canonical_filename, 'rb')))
+        _compare_events(events1, events2)
+    finally:
+        if verbose:
+            print("EVENTS1:")
+            pprint.pprint(events1)
+            print("EVENTS2:")
+            pprint.pprint(events2)
+
+test_parser.unittest = ['.data', '.canonical']
+
+def test_parser_on_canonical(canonical_filename, verbose=False):
+    events1 = None
+    events2 = None
+    try:
+        events1 = list(yaml.parse(open(canonical_filename, 'rb')))
+        events2 = list(yaml.canonical_parse(open(canonical_filename, 'rb')))
+        _compare_events(events1, events2, full=True)
+    finally:
+        if verbose:
+            print("EVENTS1:")
+            pprint.pprint(events1)
+            print("EVENTS2:")
+            pprint.pprint(events2)
+
+test_parser_on_canonical.unittest = ['.canonical']
+
+def _compare_nodes(node1, node2):
+    assert node1.__class__ == node2.__class__, (node1, node2)
+    assert node1.tag == node2.tag, (node1, node2)
+    if isinstance(node1, yaml.ScalarNode):
+        assert node1.value == node2.value, (node1, node2)
+    else:
+        assert len(node1.value) == len(node2.value), (node1, node2)
+        for item1, item2 in zip(node1.value, node2.value):
+            if not isinstance(item1, tuple):
+                item1 = (item1,)
+                item2 = (item2,)
+            for subnode1, subnode2 in zip(item1, item2):
+                _compare_nodes(subnode1, subnode2)
+
+def test_composer(data_filename, canonical_filename, verbose=False):
+    nodes1 = None
+    nodes2 = None
+    try:
+        nodes1 = list(yaml.compose_all(open(data_filename, 'rb')))
+        nodes2 = list(yaml.canonical_compose_all(open(canonical_filename, 'rb')))
+        assert len(nodes1) == len(nodes2), (len(nodes1), len(nodes2))
+        for node1, node2 in zip(nodes1, nodes2):
+            _compare_nodes(node1, node2)
+    finally:
+        if verbose:
+            print("NODES1:")
+            pprint.pprint(nodes1)
+            print("NODES2:")
+            pprint.pprint(nodes2)
+
+test_composer.unittest = ['.data', '.canonical']
+
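+# Build loaders that turn sequences into tuples and mappings into
+# sorted key/value pair lists, so documents from different sources
+# compare deterministically.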
+def _make_loader():
+    global MyLoader
+
+    class MyLoader(yaml.Loader):
+        def construct_sequence(self, node):
+            return tuple(yaml.Loader.construct_sequence(self, node))
+        def construct_mapping(self, node):
+            pairs = self.construct_pairs(node)
+            pairs.sort(key=(lambda i: str(i)))
+            return pairs
+        def construct_undefined(self, node):
+            return self.construct_scalar(node)
+
+    MyLoader.add_constructor('tag:yaml.org,2002:map', MyLoader.construct_mapping)
+    MyLoader.add_constructor(None, MyLoader.construct_undefined)
+
+def _make_canonical_loader():
+    global MyCanonicalLoader
+
+    class MyCanonicalLoader(yaml.CanonicalLoader):
+        def construct_sequence(self, node):
+            return tuple(yaml.CanonicalLoader.construct_sequence(self, node))
+        def construct_mapping(self, node):
+            pairs = self.construct_pairs(node)
+            pairs.sort(key=(lambda i: str(i)))
+            return pairs
+        def construct_undefined(self, node):
+            return self.construct_scalar(node)
+
+    MyCanonicalLoader.add_constructor('tag:yaml.org,2002:map', MyCanonicalLoader.construct_mapping)
+    MyCanonicalLoader.add_constructor(None, MyCanonicalLoader.construct_undefined)
+
+def test_constructor(data_filename, canonical_filename, verbose=False):
+    _make_loader()
+    _make_canonical_loader()
+    native1 = None
+    native2 = None
+    try:
+        native1 = list(yaml.load_all(open(data_filename, 'rb'), Loader=MyLoader))
+        native2 = list(yaml.load_all(open(canonical_filename, 'rb'), Loader=MyCanonicalLoader))
+        assert native1 == native2, (native1, native2)
+    finally:
+        if verbose:
+            print("NATIVE1:")
+            pprint.pprint(native1)
+            print("NATIVE2:")
+            pprint.pprint(native2)
+
+test_constructor.unittest = ['.data', '.canonical']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib3/test_tokens.py b/tests/lib3/test_tokens.py
new file mode 100644
index 0000000..828945a
--- /dev/null
+++ b/tests/lib3/test_tokens.py
@@ -0,0 +1,77 @@
+
+import yaml
+import pprint
+
+# Tokens mnemonic:
+# directive:            %
+# document_start:       ---
+# document_end:         ...
+# alias:                *
+# anchor:               &
+# tag:                  !
+# scalar:               _
+# block_sequence_start: [[
+# block_mapping_start:  {{
+# block_end:            ]}
+# flow_sequence_start:  [
+# flow_sequence_end:    ]
+# flow_mapping_start:   {
+# flow_mapping_end:     }
+# entry:                ,
+# key:                  ?
+# value:                :
+
+_replaces = {
+    yaml.DirectiveToken: '%',
+    yaml.DocumentStartToken: '---',
+    yaml.DocumentEndToken: '...',
+    yaml.AliasToken: '*',
+    yaml.AnchorToken: '&',
+    yaml.TagToken: '!',
+    yaml.ScalarToken: '_',
+    yaml.BlockSequenceStartToken: '[[',
+    yaml.BlockMappingStartToken: '{{',
+    yaml.BlockEndToken: ']}',
+    yaml.FlowSequenceStartToken: '[',
+    yaml.FlowSequenceEndToken: ']',
+    yaml.FlowMappingStartToken: '{',
+    yaml.FlowMappingEndToken: '}',
+    yaml.BlockEntryToken: ',',
+    yaml.FlowEntryToken: ',',
+    yaml.KeyToken: '?',
+    yaml.ValueToken: ':',
+}
+
+def test_tokens(data_filename, tokens_filename, verbose=False):
+    tokens1 = []
+    tokens2 = open(tokens_filename, 'r').read().split()
+    try:
+        for token in yaml.scan(open(data_filename, 'rb')):
+            if not isinstance(token, (yaml.StreamStartToken, yaml.StreamEndToken)):
+                tokens1.append(_replaces[token.__class__])
+    finally:
+        if verbose:
+            print("TOKENS1:", ' '.join(tokens1))
+            print("TOKENS2:", ' '.join(tokens2))
+    assert len(tokens1) == len(tokens2), (tokens1, tokens2)
+    for token1, token2 in zip(tokens1, tokens2):
+        assert token1 == token2, (token1, token2)
+
+test_tokens.unittest = ['.data', '.tokens']
+
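+# Smoke test: scanning must complete without errors on both the data
+# and canonical files.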
+def test_scanner(data_filename, canonical_filename, verbose=False):
+    for filename in [data_filename, canonical_filename]:
+        tokens = []
+        try:
+            for token in yaml.scan(open(filename, 'rb')):
+                tokens.append(token.__class__.__name__)
+        finally:
+            if verbose:
+                pprint.pprint(tokens)
+
+test_scanner.unittest = ['.data', '.canonical']
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib3/test_yaml.py b/tests/lib3/test_yaml.py
new file mode 100644
index 0000000..0927368
--- /dev/null
+++ b/tests/lib3/test_yaml.py
@@ -0,0 +1,18 @@
+
+from test_mark import *
+from test_reader import *
+from test_canonical import *
+from test_tokens import *
+from test_structure import *
+from test_errors import *
+from test_resolver import *
+from test_constructor import *
+from test_emitter import *
+from test_representer import *
+from test_recursive import *
+from test_input_output import *
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+
diff --git a/tests/lib3/test_yaml_ext.py b/tests/lib3/test_yaml_ext.py
new file mode 100644
index 0000000..93d397b
--- /dev/null
+++ b/tests/lib3/test_yaml_ext.py
@@ -0,0 +1,271 @@
+
+import _yaml, yaml
+import types, pprint
+
+yaml.PyBaseLoader = yaml.BaseLoader
+yaml.PySafeLoader = yaml.SafeLoader
+yaml.PyLoader = yaml.Loader
+yaml.PyBaseDumper = yaml.BaseDumper
+yaml.PySafeDumper = yaml.SafeDumper
+yaml.PyDumper = yaml.Dumper
+
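+# Keep references to the pure-Python entry points, then define
+# wrappers that default to the LibYAML-based loaders and dumpers.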
+old_scan = yaml.scan
+def new_scan(stream, Loader=yaml.CLoader):
+    return old_scan(stream, Loader)
+
+old_parse = yaml.parse
+def new_parse(stream, Loader=yaml.CLoader):
+    return old_parse(stream, Loader)
+
+old_compose = yaml.compose
+def new_compose(stream, Loader=yaml.CLoader):
+    return old_compose(stream, Loader)
+
+old_compose_all = yaml.compose_all
+def new_compose_all(stream, Loader=yaml.CLoader):
+    return old_compose_all(stream, Loader)
+
+old_load = yaml.load
+def new_load(stream, Loader=yaml.CLoader):
+    return old_load(stream, Loader)
+
+old_load_all = yaml.load_all
+def new_load_all(stream, Loader=yaml.CLoader):
+    return old_load_all(stream, Loader)
+
+old_safe_load = yaml.safe_load
+def new_safe_load(stream):
+    return old_load(stream, yaml.CSafeLoader)
+
+old_safe_load_all = yaml.safe_load_all
+def new_safe_load_all(stream):
+    return old_load_all(stream, yaml.CSafeLoader)
+
+old_emit = yaml.emit
+def new_emit(events, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_emit(events, stream, Dumper, **kwds)
+
+old_serialize = yaml.serialize
+def new_serialize(node, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_serialize(node, stream, Dumper, **kwds)
+
+old_serialize_all = yaml.serialize_all
+def new_serialize_all(nodes, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_serialize_all(nodes, stream, Dumper, **kwds)
+
+old_dump = yaml.dump
+def new_dump(data, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_dump(data, stream, Dumper, **kwds)
+
+old_dump_all = yaml.dump_all
+def new_dump_all(documents, stream=None, Dumper=yaml.CDumper, **kwds):
+    return old_dump_all(documents, stream, Dumper, **kwds)
+
+old_safe_dump = yaml.safe_dump
+def new_safe_dump(data, stream=None, **kwds):
+    return old_dump(data, stream, yaml.CSafeDumper, **kwds)
+
+old_safe_dump_all = yaml.safe_dump_all
+def new_safe_dump_all(documents, stream=None, **kwds):
+    return old_dump_all(documents, stream, yaml.CSafeDumper, **kwds)
+
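+# _set_up/_tear_down swap the module-level API between the C and
+# pure-Python implementations around each wrapped test.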
+def _set_up():
+    yaml.BaseLoader = yaml.CBaseLoader
+    yaml.SafeLoader = yaml.CSafeLoader
+    yaml.Loader = yaml.CLoader
+    yaml.BaseDumper = yaml.CBaseDumper
+    yaml.SafeDumper = yaml.CSafeDumper
+    yaml.Dumper = yaml.CDumper
+    yaml.scan = new_scan
+    yaml.parse = new_parse
+    yaml.compose = new_compose
+    yaml.compose_all = new_compose_all
+    yaml.load = new_load
+    yaml.load_all = new_load_all
+    yaml.safe_load = new_safe_load
+    yaml.safe_load_all = new_safe_load_all
+    yaml.emit = new_emit
+    yaml.serialize = new_serialize
+    yaml.serialize_all = new_serialize_all
+    yaml.dump = new_dump
+    yaml.dump_all = new_dump_all
+    yaml.safe_dump = new_safe_dump
+    yaml.safe_dump_all = new_safe_dump_all
+
+def _tear_down():
+    yaml.BaseLoader = yaml.PyBaseLoader
+    yaml.SafeLoader = yaml.PySafeLoader
+    yaml.Loader = yaml.PyLoader
+    yaml.BaseDumper = yaml.PyBaseDumper
+    yaml.SafeDumper = yaml.PySafeDumper
+    yaml.Dumper = yaml.PyDumper
+    yaml.scan = old_scan
+    yaml.parse = old_parse
+    yaml.compose = old_compose
+    yaml.compose_all = old_compose_all
+    yaml.load = old_load
+    yaml.load_all = old_load_all
+    yaml.safe_load = old_safe_load
+    yaml.safe_load_all = old_safe_load_all
+    yaml.emit = old_emit
+    yaml.serialize = old_serialize
+    yaml.serialize_all = old_serialize_all
+    yaml.dump = old_dump
+    yaml.dump_all = old_dump_all
+    yaml.safe_dump = old_safe_dump
+    yaml.safe_dump_all = old_safe_dump_all
+
+def test_c_version(verbose=False):
+    if verbose:
+        print(_yaml.get_version())
+        print(_yaml.get_version_string())
+    assert ("%s.%s.%s" % _yaml.get_version()) == _yaml.get_version_string(),    \
+            (_yaml.get_version(), _yaml.get_version_string())
+
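+# Scan the same input with both scanners and require identical token
+# classes, values, and start/end marks (marks are not compared for
+# StreamEndToken).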
+def _compare_scanners(py_data, c_data, verbose):
+    py_tokens = list(yaml.scan(py_data, Loader=yaml.PyLoader))
+    c_tokens = []
+    try:
+        for token in yaml.scan(c_data, Loader=yaml.CLoader):
+            c_tokens.append(token)
+        assert len(py_tokens) == len(c_tokens), (len(py_tokens), len(c_tokens))
+        for py_token, c_token in zip(py_tokens, c_tokens):
+            assert py_token.__class__ == c_token.__class__, (py_token, c_token)
+            if hasattr(py_token, 'value'):
+                assert py_token.value == c_token.value, (py_token, c_token)
+            if isinstance(py_token, yaml.StreamEndToken):
+                continue
+            py_start = (py_token.start_mark.index, py_token.start_mark.line, py_token.start_mark.column)
+            py_end = (py_token.end_mark.index, py_token.end_mark.line, py_token.end_mark.column)
+            c_start = (c_token.start_mark.index, c_token.start_mark.line, c_token.start_mark.column)
+            c_end = (c_token.end_mark.index, c_token.end_mark.line, c_token.end_mark.column)
+            assert py_start == c_start, (py_start, c_start)
+            assert py_end == c_end, (py_end, c_end)
+    finally:
+        if verbose:
+            print("PY_TOKENS:")
+            pprint.pprint(py_tokens)
+            print("C_TOKENS:")
+            pprint.pprint(c_tokens)
+
+def test_c_scanner(data_filename, canonical_filename, verbose=False):
+    _compare_scanners(open(data_filename, 'rb'),
+            open(data_filename, 'rb'), verbose)
+    _compare_scanners(open(data_filename, 'rb').read(),
+            open(data_filename, 'rb').read(), verbose)
+    _compare_scanners(open(canonical_filename, 'rb'),
+            open(canonical_filename, 'rb'), verbose)
+    _compare_scanners(open(canonical_filename, 'rb').read(),
+            open(canonical_filename, 'rb').read(), verbose)
+
+test_c_scanner.unittest = ['.data', '.canonical']
+test_c_scanner.skip = ['.skip-ext']
+
+def _compare_parsers(py_data, c_data, verbose):
+    py_events = list(yaml.parse(py_data, Loader=yaml.PyLoader))
+    c_events = []
+    try:
+        for event in yaml.parse(c_data, Loader=yaml.CLoader):
+            c_events.append(event)
+        assert len(py_events) == len(c_events), (len(py_events), len(c_events))
+        for py_event, c_event in zip(py_events, c_events):
+            for attribute in ['__class__', 'anchor', 'tag', 'implicit',
+                                'value', 'explicit', 'version', 'tags']:
+                py_value = getattr(py_event, attribute, None)
+                c_value = getattr(c_event, attribute, None)
+                assert py_value == c_value, (py_event, c_event, attribute)
+    finally:
+        if verbose:
+            print("PY_EVENTS:")
+            pprint.pprint(py_events)
+            print("C_EVENTS:")
+            pprint.pprint(c_events)
+
+def test_c_parser(data_filename, canonical_filename, verbose=False):
+    _compare_parsers(open(data_filename, 'rb'),
+            open(data_filename, 'rb'), verbose)
+    _compare_parsers(open(data_filename, 'rb').read(),
+            open(data_filename, 'rb').read(), verbose)
+    _compare_parsers(open(canonical_filename, 'rb'),
+            open(canonical_filename, 'rb'), verbose)
+    _compare_parsers(open(canonical_filename, 'rb').read(),
+            open(canonical_filename, 'rb').read(), verbose)
+
+test_c_parser.unittest = ['.data', '.canonical']
+test_c_parser.skip = ['.skip-ext']
+
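+# Emit the parsed events with the C emitter, then reparse the output
+# with both parsers and compare the three event streams attribute by
+# attribute.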
+def _compare_emitters(data, verbose):
+    events = list(yaml.parse(data, Loader=yaml.PyLoader))
+    c_data = yaml.emit(events, Dumper=yaml.CDumper)
+    if verbose:
+        print(c_data)
+    py_events = list(yaml.parse(c_data, Loader=yaml.PyLoader))
+    c_events = list(yaml.parse(c_data, Loader=yaml.CLoader))
+    try:
+        assert len(events) == len(py_events), (len(events), len(py_events))
+        assert len(events) == len(c_events), (len(events), len(c_events))
+        for event, py_event, c_event in zip(events, py_events, c_events):
+            for attribute in ['__class__', 'anchor', 'tag', 'implicit',
+                                'value', 'explicit', 'version', 'tags']:
+                value = getattr(event, attribute, None)
+                py_value = getattr(py_event, attribute, None)
+                c_value = getattr(c_event, attribute, None)
+                if attribute == 'tag' and value in [None, '!'] \
+                        and py_value in [None, '!'] and c_value in [None, '!']:
+                    continue
+                if attribute == 'explicit' and (py_value or c_value):
+                    continue
+                assert value == py_value, (event, py_event, attribute)
+                assert value == c_value, (event, c_event, attribute)
+    finally:
+        if verbose:
+            print("EVENTS:")
+            pprint.pprint(events)
+            print("PY_EVENTS:")
+            pprint.pprint(py_events)
+            print("C_EVENTS:")
+            pprint.pprint(c_events)
+
+def test_c_emitter(data_filename, canonical_filename, verbose=False):
+    _compare_emitters(open(data_filename, 'rb').read(), verbose)
+    _compare_emitters(open(canonical_filename, 'rb').read(), verbose)
+
+test_c_emitter.unittest = ['.data', '.canonical']
+test_c_emitter.skip = ['.skip-ext']
+
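+# Wrap a test function so it runs against the LibYAML bindings: switch
+# the API in, run the test, and always switch it back.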
+def wrap_ext_function(function):
+    def wrapper(*args, **kwds):
+        _set_up()
+        try:
+            function(*args, **kwds)
+        finally:
+            _tear_down()
+    wrapper.__name__ = '%s_ext' % function.__name__
+    wrapper.unittest = function.unittest
+    wrapper.skip = getattr(function, 'skip', [])+['.skip-ext']
+    return wrapper
+
+def wrap_ext(collections):
+    functions = []
+    if not isinstance(collections, list):
+        collections = [collections]
+    for collection in collections:
+        if not isinstance(collection, dict):
+            collection = vars(collection)
+        for key in sorted(collection):
+            value = collection[key]
+            if isinstance(value, types.FunctionType) and hasattr(value, 'unittest'):
+                functions.append(wrap_ext_function(value))
+    for function in functions:
+        assert function.__name__ not in globals()
+        globals()[function.__name__] = function
+
+import test_tokens, test_structure, test_errors, test_resolver, test_constructor,   \
+        test_emitter, test_representer, test_recursive, test_input_output
+wrap_ext([test_tokens, test_structure, test_errors, test_resolver, test_constructor,
+        test_emitter, test_representer, test_recursive, test_input_output])
+
+if __name__ == '__main__':
+    import test_appliance
+    test_appliance.run(globals())
+