poly1305: modify s390x assembly to implement MAC interface

The vector (vx) implementation has been updated to read in the
state and update it, rather than being a single-shot function.
This allows the new MAC interface to be implemented.
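
For illustration, the incremental interface this enables looks like
the following sketch (chunk1 and chunk2 are placeholder variables;
New, Write and Sum are the package's exported MAC API):

    var key [32]byte       // one-time key, unique per message
    h := poly1305.New(&key)
    h.Write(chunk1)
    h.Write(chunk2)        // the message may arrive in pieces
    tag := h.Sum(nil)      // Sum appends the 16-byte tag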

For performance reasons s390x uses a larger buffer than the generic
implementation. There is a relatively high fixed cost to read the
state, calculate the key coefficients and serialize the state, so
it makes sense to buffer more blocks between calls into the assembly.
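With the 16-block (256-byte) buffer used here, that fixed cost is
paid at most once every 16 blocks rather than on every block.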

For now I've had to remove the faster VMSL implementation. It is
too complex for me to update in time for Go 1.15. At some point
I'd like to revisit it but for now it looks like using the MAC
interface is more of a win than using VMSL.

The benchmarks show considerable improvements when using the MAC
interface. The Sum benchmarks show a slowdown due to a combination
of the removal of the VMSL implementation and the added overhead
from splitting the summation function into multiple parts.
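
Concretely, Sum is now a thin wrapper over the MAC interface (this
is the new body of Sum from the poly1305.go hunk below):

    func Sum(out *[16]byte, m []byte, key *[32]byte) {
        h := New(key)
        h.Write(m)
        h.Sum(out[:0])
    }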

poly1305:

name              old speed      new speed      delta
64                1.33GB/s ± 0%  0.80GB/s ± 1%   -39.51%  (p=0.000 n=16+20)
1K                4.04GB/s ± 0%  2.97GB/s ± 0%   -26.46%  (p=0.000 n=19+19)
2M                5.32GB/s ± 1%  3.63GB/s ± 0%   -31.76%  (p=0.000 n=20+19)
64Unaligned       1.33GB/s ± 0%  0.80GB/s ± 0%   -39.80%  (p=0.000 n=19+18)
1KUnaligned       4.09GB/s ± 1%  2.94GB/s ± 0%   -28.23%  (p=0.000 n=19+18)
2MUnaligned       5.33GB/s ± 1%  3.52GB/s ± 0%   -34.04%  (p=0.000 n=20+19)
Write64           1.03GB/s ± 1%  1.49GB/s ± 1%   +44.34%  (p=0.000 n=20+20)
Write1K           1.21GB/s ± 0%  3.24GB/s ± 0%  +169.02%  (p=0.000 n=20+17)
Write2M           1.24GB/s ± 1%  3.63GB/s ± 0%  +192.36%  (p=0.000 n=20+19)
Write64Unaligned  1.04GB/s ± 1%  1.50GB/s ± 0%   +44.16%  (p=0.000 n=19+14)
Write1KUnaligned  1.21GB/s ± 0%  3.20GB/s ± 0%  +164.55%  (p=0.000 n=20+16)
Write2MUnaligned  1.24GB/s ± 1%  3.51GB/s ± 0%  +183.96%  (p=0.000 n=20+19)

chacha20poly1305 (this vs. using generic MAC interface - post CL 206977):

name         old speed      new speed      delta
Open-64       147MB/s ± 2%   156MB/s ± 1%   +6.15%  (p=0.000 n=20+19)
Seal-64       151MB/s ± 0%   164MB/s ± 1%   +8.86%  (p=0.000 n=19+16)
Open-64-X     104MB/s ± 2%   111MB/s ± 1%   +6.24%  (p=0.000 n=20+20)
Seal-64-X     109MB/s ± 2%   111MB/s ± 1%   +2.11%  (p=0.000 n=20+19)
Open-1350     555MB/s ± 0%   751MB/s ± 1%  +35.19%  (p=0.000 n=20+20)
Seal-1350     557MB/s ± 0%   759MB/s ± 0%  +36.23%  (p=0.000 n=20+20)
Open-1350-X   517MB/s ± 1%   683MB/s ± 1%  +31.97%  (p=0.000 n=20+20)
Seal-1350-X   511MB/s ± 0%   683MB/s ± 0%  +33.77%  (p=0.000 n=18+19)
Open-8192     672MB/s ± 0%  1013MB/s ± 0%  +50.65%  (p=0.000 n=19+19)
Seal-8192     674MB/s ± 0%  1018MB/s ± 0%  +50.98%  (p=0.000 n=18+20)
Open-8192-X   663MB/s ± 0%   979MB/s ± 0%  +47.57%  (p=0.000 n=20+20)
Seal-8192-X   658MB/s ± 0%   985MB/s ± 0%  +49.62%  (p=0.000 n=18+20)

name         old allocs/op  new allocs/op  delta
Open-64          0.00           0.00          ~     (all equal)
Seal-64          0.00           0.00          ~     (all equal)
Open-64-X        0.00           0.00          ~     (all equal)
Seal-64-X        0.00           0.00          ~     (all equal)
Open-1350        0.00           0.00          ~     (all equal)
Seal-1350        0.00           0.00          ~     (all equal)
Open-1350-X      0.00           0.00          ~     (all equal)
Seal-1350-X      0.00           0.00          ~     (all equal)
Open-8192        0.00           0.00          ~     (all equal)
Seal-8192        0.00           0.00          ~     (all equal)
Open-8192-X      0.00           0.00          ~     (all equal)
Seal-8192-X      0.00           0.00          ~     (all equal)

chacha20poly1305 (this vs. using asm Sum interface - pre CL 206977):

name         old speed      new speed      delta
Open-64       144MB/s ± 0%   156MB/s ± 1%    +8.16%  (p=0.000 n=20+19)
Seal-64       150MB/s ± 0%   164MB/s ± 1%    +9.35%  (p=0.000 n=20+16)
Open-64-X     104MB/s ± 1%   111MB/s ± 1%    +6.15%  (p=0.000 n=19+20)
Seal-64-X     109MB/s ± 1%   111MB/s ± 1%    +1.43%  (p=0.000 n=19+19)
Open-1350     702MB/s ± 1%   751MB/s ± 1%    +6.98%  (p=0.000 n=20+20)
Seal-1350     715MB/s ± 0%   759MB/s ± 0%    +6.09%  (p=0.000 n=19+20)
Open-1350-X   642MB/s ± 0%   683MB/s ± 1%    +6.37%  (p=0.000 n=19+20)
Seal-1350-X   639MB/s ± 0%   683MB/s ± 0%    +6.98%  (p=0.000 n=20+19)
Open-8192     994MB/s ± 0%  1013MB/s ± 0%    +1.85%  (p=0.000 n=20+19)
Seal-8192    1.00GB/s ± 0%  1.02GB/s ± 0%    +1.90%  (p=0.000 n=20+20)
Open-8192-X   965MB/s ± 0%   979MB/s ± 0%    +1.43%  (p=0.000 n=19+20)
Seal-8192-X   962MB/s ± 0%   985MB/s ± 0%    +2.39%  (p=0.000 n=20+20)

name         old allocs/op  new allocs/op  delta
Open-64          1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Seal-64          1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Open-64-X        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Seal-64-X        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Open-1350        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Seal-1350        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Open-1350-X      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Seal-1350-X      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Open-8192        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Seal-8192        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Open-8192-X      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
Seal-8192-X      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)

Updates golang/go#25219.

Change-Id: Ib491e3a47b6b3ec8bbbe1f41f7bf42ad82f5c249
Reviewed-on: https://go-review.googlesource.com/c/crypto/+/219057
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Filippo Valsorda <filippo@golang.org>
diff --git a/poly1305/mac_noasm.go b/poly1305/mac_noasm.go
index 347c8b1..d118f30 100644
--- a/poly1305/mac_noasm.go
+++ b/poly1305/mac_noasm.go
@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
-// +build !amd64,!ppc64le gccgo purego
+// +build !amd64,!ppc64le,!s390x gccgo purego
 
 package poly1305
 
diff --git a/poly1305/poly1305.go b/poly1305/poly1305.go
index 3c75c2a..9d7a6af 100644
--- a/poly1305/poly1305.go
+++ b/poly1305/poly1305.go
@@ -26,7 +26,9 @@
 // 16-byte result into out. Authenticating two different messages with the same
 // key allows an attacker to forge messages at will.
 func Sum(out *[16]byte, m []byte, key *[32]byte) {
-	sum(out, m, key)
+	h := New(key)
+	h.Write(m)
+	h.Sum(out[:0])
 }
 
 // Verify returns true if mac is a valid authenticator for m with the given key.
diff --git a/poly1305/poly1305_test.go b/poly1305/poly1305_test.go
index 721a262..e7ec6d1 100644
--- a/poly1305/poly1305_test.go
+++ b/poly1305/poly1305_test.go
@@ -6,6 +6,7 @@
 
 import (
 	"crypto/rand"
+	"encoding/binary"
 	"encoding/hex"
 	"flag"
 	"testing"
@@ -15,9 +16,10 @@
 var stressFlag = flag.Bool("stress", false, "run slow stress tests")
 
 type test struct {
-	in  string
-	key string
-	tag string
+	in    string
+	key   string
+	tag   string
+	state string
 }
 
 func (t *test) Input() []byte {
@@ -48,9 +50,33 @@
 	return tag
 }
 
+func (t *test) InitialState() [3]uint64 {
+	// state is hex encoded in big-endian byte order
+	if t.state == "" {
+		return [3]uint64{0, 0, 0}
+	}
+	buf, err := hex.DecodeString(t.state)
+	if err != nil {
+		panic(err)
+	}
+	if len(buf) != 3*8 {
+		panic("incorrect state length")
+	}
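+	// The limbs of h are little endian: h[0] is the least
+	// significant 64 bits. The hex string is big endian, so the
+	// 64-bit words are reversed below.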
+	return [3]uint64{
+		binary.BigEndian.Uint64(buf[16:24]),
+		binary.BigEndian.Uint64(buf[8:16]),
+		binary.BigEndian.Uint64(buf[0:8]),
+	}
+}
+
 func testSum(t *testing.T, unaligned bool, sumImpl func(tag *[TagSize]byte, msg []byte, key *[32]byte)) {
 	var tag [16]byte
 	for i, v := range testData {
+		// cannot set initial state before calling sum, so skip those tests
+		if v.InitialState() != [3]uint64{0, 0, 0} {
+			continue
+		}
+
 		in := v.Input()
 		if unaligned {
 			in = unalignBytes(in)
@@ -140,6 +166,9 @@
 			input = unalignBytes(input)
 		}
 		h := newMACGeneric(&key)
+		if s := v.InitialState(); s != [3]uint64{0, 0, 0} {
+			h.macState.h = s
+		}
 		n, err := h.Write(input[:len(input)/3])
 		if err != nil || n != len(input[:len(input)/3]) {
 			t.Errorf("#%d: unexpected Write results: n = %d, err = %v", i, n, err)
@@ -165,6 +194,9 @@
 			input = unalignBytes(input)
 		}
 		h := New(&key)
+		if s := v.InitialState(); s != [3]uint64{0, 0, 0} {
+			h.macState.h = s
+		}
 		n, err := h.Write(input[:len(input)/3])
 		if err != nil || n != len(input[:len(input)/3]) {
 			t.Errorf("#%d: unexpected Write results: n = %d, err = %v", i, n, err)
diff --git a/poly1305/sum_generic.go b/poly1305/sum_generic.go
index c77ff17..c942a65 100644
--- a/poly1305/sum_generic.go
+++ b/poly1305/sum_generic.go
@@ -41,7 +41,8 @@
 // the value of [x0, x1, x2] is x[0] + x[1] * 2⁶⁴ + x[2] * 2¹²⁸.
 type macState struct {
 	// h is the main accumulator. It is to be interpreted modulo 2¹³⁰ - 5, but
-	// can grow larger during and after rounds.
+	// can grow larger during and after rounds. It must, however, remain below
+	// 2 * (2¹³⁰ - 5).
 	h [3]uint64
 	// r and s are the private key components.
 	r [2]uint64
diff --git a/poly1305/sum_noasm.go b/poly1305/sum_noasm.go
deleted file mode 100644
index 2b55a29..0000000
--- a/poly1305/sum_noasm.go
+++ /dev/null
@@ -1,18 +0,0 @@
-// Copyright 2018 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// At this point only s390x has an assembly implementation of sum. All other
-// platforms have assembly implementations of mac, and just define sum as using
-// that through New. Once s390x is ported, this file can be deleted and the body
-// of sum moved into Sum.
-
-// +build !go1.11 !s390x gccgo purego
-
-package poly1305
-
-func sum(out *[TagSize]byte, msg []byte, key *[32]byte) {
-	h := New(key)
-	h.Write(msg)
-	h.Sum(out[:0])
-}
diff --git a/poly1305/sum_s390x.go b/poly1305/sum_s390x.go
index 5f91ff8..958fedc 100644
--- a/poly1305/sum_s390x.go
+++ b/poly1305/sum_s390x.go
@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
-// +build go1.11,!gccgo,!purego
+// +build !gccgo,!purego
 
 package poly1305
 
@@ -10,30 +10,66 @@
 	"golang.org/x/sys/cpu"
 )
 
-// poly1305vx is an assembly implementation of Poly1305 that uses vector
+// updateVX is an assembly implementation of Poly1305 that uses vector
 // instructions. It must only be called if the vector facility (vx) is
 // available.
 //go:noescape
-func poly1305vx(out *[16]byte, m *byte, mlen uint64, key *[32]byte)
+func updateVX(state *macState, msg []byte)
 
-// poly1305vmsl is an assembly implementation of Poly1305 that uses vector
-// instructions, including VMSL. It must only be called if the vector facility (vx) is
-// available and if VMSL is supported.
-//go:noescape
-func poly1305vmsl(out *[16]byte, m *byte, mlen uint64, key *[32]byte)
+// mac is a replacement for macGeneric that uses a larger buffer and redirects
+// calls that would have gone to updateGeneric to updateVX if the vector
+// facility is installed.
+//
+// A larger buffer is required for good performance because the vector
+// implementation has a higher fixed cost per call than the generic
+// implementation.
+type mac struct {
+	macState
 
-func sum(out *[16]byte, m []byte, key *[32]byte) {
-	if cpu.S390X.HasVX {
-		var mPtr *byte
-		if len(m) > 0 {
-			mPtr = &m[0]
+	buffer [16 * TagSize]byte // size must be a multiple of block size (16)
+	offset int
+}
+
+func (h *mac) Write(p []byte) (int, error) {
+	nn := len(p)
+	if h.offset > 0 {
+		n := copy(h.buffer[h.offset:], p)
+		if h.offset+n < len(h.buffer) {
+			h.offset += n
+			return nn, nil
 		}
-		if cpu.S390X.HasVXE && len(m) > 256 {
-			poly1305vmsl(out, mPtr, uint64(len(m)), key)
+		p = p[n:]
+		h.offset = 0
+		if cpu.S390X.HasVX {
+			updateVX(&h.macState, h.buffer[:])
 		} else {
-			poly1305vx(out, mPtr, uint64(len(m)), key)
+			updateGeneric(&h.macState, h.buffer[:])
 		}
-	} else {
-		sumGeneric(out, m, key)
 	}
+
+	tail := len(p) % len(h.buffer) // number of bytes to copy into buffer
+	body := len(p) - tail          // number of bytes to process now
+	if body > 0 {
+		if cpu.S390X.HasVX {
+			updateVX(&h.macState, p[:body])
+		} else {
+			updateGeneric(&h.macState, p[:body])
+		}
+	}
+	h.offset = copy(h.buffer[:], p[body:]) // copy tail bytes - can be 0
+	return nn, nil
+}
+
+func (h *mac) Sum(out *[TagSize]byte) {
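+	// Work on a copy of the macState so that calling Sum does not
+	// modify the underlying state of h.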
+	state := h.macState
+	remainder := h.buffer[:h.offset]
+
+	// Use the generic implementation if we have 2 or fewer blocks left
+	// to sum. The vector implementation has a higher startup time.
+	if cpu.S390X.HasVX && len(remainder) > 2*TagSize {
+		updateVX(&state, remainder)
+	} else if len(remainder) > 0 {
+		updateGeneric(&state, remainder)
+	}
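+	// finalize is shared with the generic implementation. It fully
+	// reduces the accumulator h modulo 2¹³⁰ - 5, adds the key
+	// component s and writes out the 16-byte tag.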
+	finalize(out, &state.h, &state.s)
 }
diff --git a/poly1305/sum_s390x.s b/poly1305/sum_s390x.s
index 806d169..0fa9ee6 100644
--- a/poly1305/sum_s390x.s
+++ b/poly1305/sum_s390x.s
@@ -2,115 +2,187 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
-// +build go1.11,!gccgo,!purego
+// +build !gccgo,!purego
 
 #include "textflag.h"
 
-// Implementation of Poly1305 using the vector facility (vx).
+// This implementation of Poly1305 uses the vector facility (vx)
+// to process up to 2 blocks (32 bytes) per iteration using an
+// algorithm based on the one described in:
+//
+// NEON crypto, Daniel J. Bernstein & Peter Schwabe
+// https://cryptojedi.org/papers/neoncrypto-20120320.pdf
+//
+// This algorithm uses 5 26-bit limbs to represent a 130-bit
+// value. These limbs are, for the most part, zero extended and
+// placed into 64-bit vector register elements. Each vector
+// register is 128 bits wide and so holds 2 of these elements.
+// Using 26-bit limbs allows us plenty of headroom to accommodate
+// accumulations before and after multiplication without
+// overflowing either 32-bits (before multiplication) or 64-bits
+// (after multiplication).
+//
+// In order to parallelise the operations required to calculate
+// the sum we use two separate accumulators and then sum those
+// in an extra final step. For compatibility with the generic
+// implementation we perform this summation at the end of every
+// updateVX call.
+//
+// To use two accumulators we must multiply the message blocks
+// by r² rather than r. Only the final message block should be
+// multiplied by r.
+//
+// Example:
+//
+// We want to calculate the sum (h) for a 64 byte message (m):
+//
+//   h = m[0:16]r⁴ + m[16:32]r³ + m[32:48]r² + m[48:64]r
+//
+// To do this we split the calculation into the even indices
+// and odd indices of the message. These form our SIMD 'lanes':
+//
+//   h = m[ 0:16]r⁴ + m[32:48]r² +   <- lane 0
+//       m[16:32]r³ + m[48:64]r      <- lane 1
+//
+// To calculate this iteratively we refactor so that both lanes
+// are written in terms of r² and r:
+//
+//   h = (m[ 0:16]r² + m[32:48])r² + <- lane 0
+//       (m[16:32]r² + m[48:64])r    <- lane 1
+//                ^             ^
+//                |             coefficients for second iteration
+//                coefficients for first iteration
+//
+// So in this case we would have two iterations. In the first
+// both lanes are multiplied by r². In the second only the
+// first lane is multiplied by r² and the second lane is
+// instead multiplied by r. This gives us the odd and even
+// powers of r that we need from the original equation.
+//
+// Notation:
+//
+//   h - accumulator
+//   r - key
+//   m - message
+//
+//   [a, b]       - SIMD register holding two 64-bit values
+//   [a, b, c, d] - SIMD register holding four 32-bit values
+//   xᵢ[n]        - limb n of variable x with bit width i
+//
+// Limbs are expressed in little endian order, so for 26-bit
+// limbs x₂₆[4] will be the most significant limb and x₂₆[0]
+// will be the least significant limb.
 
-// constants
-#define MOD26 V0
-#define EX0   V1
-#define EX1   V2
-#define EX2   V3
+// masking constants
+#define MOD24 V0 // [0x0000000000ffffff, 0x0000000000ffffff] - mask low 24-bits
+#define MOD26 V1 // [0x0000000003ffffff, 0x0000000003ffffff] - mask low 26-bits
 
-// temporaries
-#define T_0 V4
-#define T_1 V5
-#define T_2 V6
-#define T_3 V7
-#define T_4 V8
+// expansion constants (see EXPAND macro)
+#define EX0 V2
+#define EX1 V3
+#define EX2 V4
 
-// key (r)
-#define R_0  V9
-#define R_1  V10
-#define R_2  V11
-#define R_3  V12
-#define R_4  V13
-#define R5_1 V14
-#define R5_2 V15
-#define R5_3 V16
-#define R5_4 V17
-#define RSAVE_0 R5
-#define RSAVE_1 R6
-#define RSAVE_2 R7
-#define RSAVE_3 R8
-#define RSAVE_4 R9
-#define R5SAVE_1 V28
-#define R5SAVE_2 V29
-#define R5SAVE_3 V30
-#define R5SAVE_4 V31
+// key (r², r or 1 depending on context)
+#define R_0 V5
+#define R_1 V6
+#define R_2 V7
+#define R_3 V8
+#define R_4 V9
 
-// message block
-#define F_0 V18
-#define F_1 V19
-#define F_2 V20
-#define F_3 V21
-#define F_4 V22
+// precalculated coefficients (5r², 5r or 0 depending on context)
+#define R5_1 V10
+#define R5_2 V11
+#define R5_3 V12
+#define R5_4 V13
 
-// accumulator
-#define H_0 V23
-#define H_1 V24
-#define H_2 V25
-#define H_3 V26
-#define H_4 V27
+// message block (m)
+#define M_0 V14
+#define M_1 V15
+#define M_2 V16
+#define M_3 V17
+#define M_4 V18
 
-GLOBL ·keyMask<>(SB), RODATA, $16
-DATA ·keyMask<>+0(SB)/8, $0xffffff0ffcffff0f
-DATA ·keyMask<>+8(SB)/8, $0xfcffff0ffcffff0f
+// accumulator (h)
+#define H_0 V19
+#define H_1 V20
+#define H_2 V21
+#define H_3 V22
+#define H_4 V23
 
-GLOBL ·bswapMask<>(SB), RODATA, $16
-DATA ·bswapMask<>+0(SB)/8, $0x0f0e0d0c0b0a0908
-DATA ·bswapMask<>+8(SB)/8, $0x0706050403020100
+// temporary registers (for short-lived values)
+#define T_0 V24
+#define T_1 V25
+#define T_2 V26
+#define T_3 V27
+#define T_4 V28
 
-GLOBL ·constants<>(SB), RODATA, $64
-// MOD26
-DATA ·constants<>+0(SB)/8, $0x3ffffff
-DATA ·constants<>+8(SB)/8, $0x3ffffff
+GLOBL ·constants<>(SB), RODATA, $0x30
 // EX0
-DATA ·constants<>+16(SB)/8, $0x0006050403020100
-DATA ·constants<>+24(SB)/8, $0x1016151413121110
+DATA ·constants<>+0x00(SB)/8, $0x0006050403020100
+DATA ·constants<>+0x08(SB)/8, $0x1016151413121110
 // EX1
-DATA ·constants<>+32(SB)/8, $0x060c0b0a09080706
-DATA ·constants<>+40(SB)/8, $0x161c1b1a19181716
+DATA ·constants<>+0x10(SB)/8, $0x060c0b0a09080706
+DATA ·constants<>+0x18(SB)/8, $0x161c1b1a19181716
 // EX2
-DATA ·constants<>+48(SB)/8, $0x0d0d0d0d0d0f0e0d
-DATA ·constants<>+56(SB)/8, $0x1d1d1d1d1d1f1e1d
+DATA ·constants<>+0x20(SB)/8, $0x0d0d0d0d0d0f0e0d
+DATA ·constants<>+0x28(SB)/8, $0x1d1d1d1d1d1f1e1d
 
-// h = (f*g) % (2**130-5) [partial reduction]
+// MULTIPLY multiplies each lane of f and g, partially reduced
+// modulo 2¹³⁰ - 5. The result, h, consists of partial products
+// in each lane that need to be reduced further to produce the
+// final result.
+//
+//   h₁₃₀ = (f₁₃₀g₁₃₀) % 2¹³⁰ + (5f₁₃₀g₁₃₀) / 2¹³⁰
+//
+// Note that the multiplication by 5 of the high bits is
+// achieved by precalculating the multiplication of four of the
+// g coefficients by 5. These are g51-g54.
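+//
+// Multiplying the high bits by 5 is possible because
+// 2¹³⁰ ≡ 5 (mod 2¹³⁰ - 5), so any bits at or above bit 130 can
+// be folded back into the low bits after being multiplied by 5.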
 #define MULTIPLY(f0, f1, f2, f3, f4, g0, g1, g2, g3, g4, g51, g52, g53, g54, h0, h1, h2, h3, h4) \
 	VMLOF  f0, g0, h0        \
-	VMLOF  f0, g1, h1        \
-	VMLOF  f0, g2, h2        \
 	VMLOF  f0, g3, h3        \
+	VMLOF  f0, g1, h1        \
 	VMLOF  f0, g4, h4        \
+	VMLOF  f0, g2, h2        \
 	VMLOF  f1, g54, T_0      \
-	VMLOF  f1, g0, T_1       \
-	VMLOF  f1, g1, T_2       \
 	VMLOF  f1, g2, T_3       \
+	VMLOF  f1, g0, T_1       \
 	VMLOF  f1, g3, T_4       \
+	VMLOF  f1, g1, T_2       \
 	VMALOF f2, g53, h0, h0   \
-	VMALOF f2, g54, h1, h1   \
-	VMALOF f2, g0, h2, h2    \
 	VMALOF f2, g1, h3, h3    \
+	VMALOF f2, g54, h1, h1   \
 	VMALOF f2, g2, h4, h4    \
+	VMALOF f2, g0, h2, h2    \
 	VMALOF f3, g52, T_0, T_0 \
-	VMALOF f3, g53, T_1, T_1 \
-	VMALOF f3, g54, T_2, T_2 \
 	VMALOF f3, g0, T_3, T_3  \
+	VMALOF f3, g53, T_1, T_1 \
 	VMALOF f3, g1, T_4, T_4  \
+	VMALOF f3, g54, T_2, T_2 \
 	VMALOF f4, g51, h0, h0   \
-	VMALOF f4, g52, h1, h1   \
-	VMALOF f4, g53, h2, h2   \
 	VMALOF f4, g54, h3, h3   \
+	VMALOF f4, g52, h1, h1   \
 	VMALOF f4, g0, h4, h4    \
+	VMALOF f4, g53, h2, h2   \
 	VAG    T_0, h0, h0       \
-	VAG    T_1, h1, h1       \
-	VAG    T_2, h2, h2       \
 	VAG    T_3, h3, h3       \
-	VAG    T_4, h4, h4
+	VAG    T_1, h1, h1       \
+	VAG    T_4, h4, h4       \
+	VAG    T_2, h2, h2
 
-// carry h0->h1 h3->h4, h1->h2 h4->h0, h0->h1 h2->h3, h3->h4
+// REDUCE performs the following carry operations in four
+// stages, as specified in Bernstein & Schwabe:
+//
+//   1: h₂₆[0]->h₂₆[1] h₂₆[3]->h₂₆[4]
+//   2: h₂₆[1]->h₂₆[2] h₂₆[4]->h₂₆[0]
+//   3: h₂₆[0]->h₂₆[1] h₂₆[2]->h₂₆[3]
+//   4: h₂₆[3]->h₂₆[4]
+//
+// The result is that all of the limbs are limited to 26 bits
+// except for h₂₆[1] and h₂₆[4], which are limited to 27 bits.
+//
+// Note that although each limb is aligned at 26-bit intervals
+// they may contain values that exceed 2²⁶ - 1, hence the need
+// to carry the excess bits in each limb.
 #define REDUCE(h0, h1, h2, h3, h4) \
 	VESRLG $26, h0, T_0  \
 	VESRLG $26, h3, T_1  \
@@ -136,144 +208,155 @@
 	VN     MOD26, h3, h3 \
 	VAG    T_2, h4, h4
 
-// expand in0 into d[0] and in1 into d[1]
+// EXPAND splits the 128-bit little-endian values in0 and in1
+// into 26-bit big-endian limbs and places the results into
+// the first and second lane of d₂₆[0:4] respectively.
+//
+// The EX0, EX1 and EX2 constants are arrays of byte indices
+// for permutation. The permutation both reverses the bytes
+// in the input and ensures the bytes are copied into the
+// destination limb ready to be shifted into their final
+// position.
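+//
+// Limb n of each output covers bits [26n, 26n+25] of its input
+// block. Limb 4 is masked to 24 bits (bits 104-127 of the input)
+// since each input block is only 128 bits long.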
 #define EXPAND(in0, in1, d0, d1, d2, d3, d4) \
-	VGBM   $0x0707, d1       \ // d1=tmp
-	VPERM  in0, in1, EX2, d4 \
 	VPERM  in0, in1, EX0, d0 \
 	VPERM  in0, in1, EX1, d2 \
-	VN     d1, d4, d4        \
+	VPERM  in0, in1, EX2, d4 \
 	VESRLG $26, d0, d1       \
 	VESRLG $30, d2, d3       \
 	VESRLG $4, d2, d2        \
-	VN     MOD26, d0, d0     \
-	VN     MOD26, d1, d1     \
-	VN     MOD26, d2, d2     \
-	VN     MOD26, d3, d3
+	VN     MOD26, d0, d0     \ // [in0₂₆[0], in1₂₆[0]]
+	VN     MOD26, d3, d3     \ // [in0₂₆[3], in1₂₆[3]]
+	VN     MOD26, d1, d1     \ // [in0₂₆[1], in1₂₆[1]]
+	VN     MOD24, d4, d4     \ // [in0₂₆[4], in1₂₆[4]]
+	VN     MOD26, d2, d2     // [in0₂₆[2], in1₂₆[2]]
 
-// pack h4:h0 into h1:h0 (no carry)
-#define PACK(h0, h1, h2, h3, h4) \
-	VESLG $26, h1, h1  \
-	VESLG $26, h3, h3  \
-	VO    h0, h1, h0   \
-	VO    h2, h3, h2   \
-	VESLG $4, h2, h2   \
-	VLEIB $7, $48, h1  \
-	VSLB  h1, h2, h2   \
-	VO    h0, h2, h0   \
-	VLEIB $7, $104, h1 \
-	VSLB  h1, h4, h3   \
-	VO    h3, h0, h0   \
-	VLEIB $7, $24, h1  \
-	VSRLB h1, h4, h1
+// func updateVX(state *macState, msg []byte)
+TEXT ·updateVX(SB), NOSPLIT, $0
+	MOVD state+0(FP), R1
+	LMG  msg+8(FP), R2, R3 // R2=msg_base, R3=msg_len
 
-// if h > 2**130-5 then h -= 2**130-5
-#define MOD(h0, h1, t0, t1, t2) \
-	VZERO t0          \
-	VLEIG $1, $5, t0  \
-	VACCQ h0, t0, t1  \
-	VAQ   h0, t0, t0  \
-	VONE  t2          \
-	VLEIG $1, $-4, t2 \
-	VAQ   t2, t1, t1  \
-	VACCQ h1, t1, t1  \
-	VONE  t2          \
-	VAQ   t2, t1, t1  \
-	VN    h0, t1, t2  \
-	VNC   t0, t1, t1  \
-	VO    t1, t2, h0
-
-// func poly1305vx(out *[16]byte, m *byte, mlen uint64, key *[32]key)
-TEXT ·poly1305vx(SB), $0-32
-	// This code processes up to 2 blocks (32 bytes) per iteration
-	// using the algorithm described in:
-	// NEON crypto, Daniel J. Bernstein & Peter Schwabe
-	// https://cryptojedi.org/papers/neoncrypto-20120320.pdf
-	LMG out+0(FP), R1, R4 // R1=out, R2=m, R3=mlen, R4=key
-
-	// load MOD26, EX0, EX1 and EX2
+	// load EX0, EX1 and EX2
 	MOVD $·constants<>(SB), R5
-	VLM  (R5), MOD26, EX2
+	VLM  (R5), EX0, EX2
 
-	// setup r
-	VL   (R4), T_0
-	MOVD $·keyMask<>(SB), R6
-	VL   (R6), T_1
-	VN   T_0, T_1, T_0
-	EXPAND(T_0, T_0, R_0, R_1, R_2, R_3, R_4)
+	// generate masks
+	VGMG $(64-24), $63, MOD24 // [0x00ffffff, 0x00ffffff]
+	VGMG $(64-26), $63, MOD26 // [0x03ffffff, 0x03ffffff]
 
-	// setup r*5
-	VLEIG $0, $5, T_0
-	VLEIG $1, $5, T_0
+	// load h (accumulator) and r (key) from state
+	VZERO T_1               // [0, 0]
+	VL    0(R1), T_0        // [h₆₄[0], h₆₄[1]]
+	VLEG  $0, 16(R1), T_1   // [h₆₄[2], 0]
+	VL    24(R1), T_2       // [r₆₄[0], r₆₄[1]]
+	VPDI  $0, T_0, T_2, T_3 // [h₆₄[0], r₆₄[0]]
+	VPDI  $5, T_0, T_2, T_4 // [h₆₄[1], r₆₄[1]]
 
-	// store r (for final block)
-	VMLOF T_0, R_1, R5SAVE_1
-	VMLOF T_0, R_2, R5SAVE_2
-	VMLOF T_0, R_3, R5SAVE_3
-	VMLOF T_0, R_4, R5SAVE_4
-	VLGVG $0, R_0, RSAVE_0
-	VLGVG $0, R_1, RSAVE_1
-	VLGVG $0, R_2, RSAVE_2
-	VLGVG $0, R_3, RSAVE_3
-	VLGVG $0, R_4, RSAVE_4
+	// unpack h and r into 26-bit limbs
+	// note: h₆₄[2] may have the low 3 bits set, so h₂₆[4] is a 27-bit value
+	VN     MOD26, T_3, H_0            // [h₂₆[0], r₂₆[0]]
+	VZERO  H_1                        // [0, 0]
+	VZERO  H_3                        // [0, 0]
+	VGMG   $(64-12-14), $(63-12), T_0 // [0x03fff000, 0x03fff000] - 26-bit mask with low 12 bits masked out
+	VESLG  $24, T_1, T_1              // [h₆₄[2]<<24, 0]
+	VERIMG $-26&63, T_3, MOD26, H_1   // [h₂₆[1], r₂₆[1]]
+	VESRLG $+52&63, T_3, H_2          // [h₂₆[2], r₂₆[2]] - low 12 bits only
+	VERIMG $-14&63, T_4, MOD26, H_3   // [h₂₆[3], r₂₆[3]]
+	VESRLG $40, T_4, H_4              // [h₂₆[4], r₂₆[4]] - low 24 bits only
+	VERIMG $+12&63, T_4, T_0, H_2     // [h₂₆[2], r₂₆[2]] - complete
+	VO     T_1, H_4, H_4              // [h₂₆[4], r₂₆[4]] - complete
 
-	// skip r**2 calculation
+	// replicate r across all 4 vector elements
+	VREPF $3, H_0, R_0 // [r₂₆[0], r₂₆[0], r₂₆[0], r₂₆[0]]
+	VREPF $3, H_1, R_1 // [r₂₆[1], r₂₆[1], r₂₆[1], r₂₆[1]]
+	VREPF $3, H_2, R_2 // [r₂₆[2], r₂₆[2], r₂₆[2], r₂₆[2]]
+	VREPF $3, H_3, R_3 // [r₂₆[3], r₂₆[3], r₂₆[3], r₂₆[3]]
+	VREPF $3, H_4, R_4 // [r₂₆[4], r₂₆[4], r₂₆[4], r₂₆[4]]
+
+	// zero out lane 1 of h
+	VLEIG $1, $0, H_0 // [h₂₆[0], 0]
+	VLEIG $1, $0, H_1 // [h₂₆[1], 0]
+	VLEIG $1, $0, H_2 // [h₂₆[2], 0]
+	VLEIG $1, $0, H_3 // [h₂₆[3], 0]
+	VLEIG $1, $0, H_4 // [h₂₆[4], 0]
+
+	// calculate 5r (ignore least significant limb)
+	VREPIF $5, T_0
+	VMLF   T_0, R_1, R5_1 // [5r₂₆[1], 5r₂₆[1], 5r₂₆[1], 5r₂₆[1]]
+	VMLF   T_0, R_2, R5_2 // [5r₂₆[2], 5r₂₆[2], 5r₂₆[2], 5r₂₆[2]]
+	VMLF   T_0, R_3, R5_3 // [5r₂₆[3], 5r₂₆[3], 5r₂₆[3], 5r₂₆[3]]
+	VMLF   T_0, R_4, R5_4 // [5r₂₆[4], 5r₂₆[4], 5r₂₆[4], 5r₂₆[4]]
+
+	// skip r² calculation if we are only calculating one block
 	CMPBLE R3, $16, skip
 
-	// calculate r**2
-	MULTIPLY(R_0, R_1, R_2, R_3, R_4, R_0, R_1, R_2, R_3, R_4, R5SAVE_1, R5SAVE_2, R5SAVE_3, R5SAVE_4, H_0, H_1, H_2, H_3, H_4)
-	REDUCE(H_0, H_1, H_2, H_3, H_4)
-	VLEIG $0, $5, T_0
-	VLEIG $1, $5, T_0
-	VMLOF T_0, H_1, R5_1
-	VMLOF T_0, H_2, R5_2
-	VMLOF T_0, H_3, R5_3
-	VMLOF T_0, H_4, R5_4
-	VLR   H_0, R_0
-	VLR   H_1, R_1
-	VLR   H_2, R_2
-	VLR   H_3, R_3
-	VLR   H_4, R_4
+	// calculate r²
+	MULTIPLY(R_0, R_1, R_2, R_3, R_4, R_0, R_1, R_2, R_3, R_4, R5_1, R5_2, R5_3, R5_4, M_0, M_1, M_2, M_3, M_4)
+	REDUCE(M_0, M_1, M_2, M_3, M_4)
+	VGBM   $0x0f0f, T_0
+	VERIMG $0, M_0, T_0, R_0 // [r₂₆[0], r²₂₆[0], r₂₆[0], r²₂₆[0]]
+	VERIMG $0, M_1, T_0, R_1 // [r₂₆[1], r²₂₆[1], r₂₆[1], r²₂₆[1]]
+	VERIMG $0, M_2, T_0, R_2 // [r₂₆[2], r²₂₆[2], r₂₆[2], r²₂₆[2]]
+	VERIMG $0, M_3, T_0, R_3 // [r₂₆[3], r²₂₆[3], r₂₆[3], r²₂₆[3]]
+	VERIMG $0, M_4, T_0, R_4 // [r₂₆[4], r²₂₆[4], r₂₆[4], r²₂₆[4]]
 
-	// initialize h
-	VZERO H_0
-	VZERO H_1
-	VZERO H_2
-	VZERO H_3
-	VZERO H_4
+	// calculate 5r² (ignore least significant limb)
+	VREPIF $5, T_0
+	VMLF   T_0, R_1, R5_1 // [5r₂₆[1], 5r²₂₆[1], 5r₂₆[1], 5r²₂₆[1]]
+	VMLF   T_0, R_2, R5_2 // [5r₂₆[2], 5r²₂₆[2], 5r₂₆[2], 5r²₂₆[2]]
+	VMLF   T_0, R_3, R5_3 // [5r₂₆[3], 5r²₂₆[3], 5r₂₆[3], 5r²₂₆[3]]
+	VMLF   T_0, R_4, R5_4 // [5r₂₆[4], 5r²₂₆[4], 5r₂₆[4], 5r²₂₆[4]]
 
 loop:
-	CMPBLE R3, $32, b2
-	VLM    (R2), T_0, T_1
-	SUB    $32, R3
-	MOVD   $32(R2), R2
-	EXPAND(T_0, T_1, F_0, F_1, F_2, F_3, F_4)
-	VLEIB  $4, $1, F_4
-	VLEIB  $12, $1, F_4
+	CMPBLE R3, $32, b2 // 2 or fewer blocks remaining, need to change key coefficients
+
+	// load next 2 blocks from message
+	VLM (R2), T_0, T_1
+
+	// update message slice
+	SUB  $32, R3
+	MOVD $32(R2), R2
+
+	// unpack message blocks into 26-bit big-endian limbs
+	EXPAND(T_0, T_1, M_0, M_1, M_2, M_3, M_4)
+
+	// add 2¹²⁸ to each message block value
+	VLEIB $4, $1, M_4
+	VLEIB $12, $1, M_4
 
 multiply:
-	VAG    H_0, F_0, F_0
-	VAG    H_1, F_1, F_1
-	VAG    H_2, F_2, F_2
-	VAG    H_3, F_3, F_3
-	VAG    H_4, F_4, F_4
-	MULTIPLY(F_0, F_1, F_2, F_3, F_4, R_0, R_1, R_2, R_3, R_4, R5_1, R5_2, R5_3, R5_4, H_0, H_1, H_2, H_3, H_4)
+	// accumulate the incoming message
+	VAG H_0, M_0, M_0
+	VAG H_3, M_3, M_3
+	VAG H_1, M_1, M_1
+	VAG H_4, M_4, M_4
+	VAG H_2, M_2, M_2
+
+	// multiply the accumulator by the key coefficient
+	MULTIPLY(M_0, M_1, M_2, M_3, M_4, R_0, R_1, R_2, R_3, R_4, R5_1, R5_2, R5_3, R5_4, H_0, H_1, H_2, H_3, H_4)
+
+	// carry and partially reduce the partial products
 	REDUCE(H_0, H_1, H_2, H_3, H_4)
+
 	CMPBNE R3, $0, loop
 
 finish:
-	// sum vectors
+	// sum lane 0 and lane 1 and put the result in lane 1
 	VZERO  T_0
 	VSUMQG H_0, T_0, H_0
-	VSUMQG H_1, T_0, H_1
-	VSUMQG H_2, T_0, H_2
 	VSUMQG H_3, T_0, H_3
+	VSUMQG H_1, T_0, H_1
 	VSUMQG H_4, T_0, H_4
+	VSUMQG H_2, T_0, H_2
 
-	// h may be >= 2*(2**130-5) so we need to reduce it again
+	// reduce again after summation
+	// TODO(mundaym): there might be a more efficient way to do this
+	// now that we only have 1 active lane. For example, we could
+	// simultaneously pack the values as we reduce them.
 	REDUCE(H_0, H_1, H_2, H_3, H_4)
 
-	// carry h1->h4
+	// carry h[1] through to h[4] so that only h[4] can exceed 2²⁶ - 1
+	// TODO(mundaym): in testing this final carry was unnecessary.
+	// Needs a proof before it can be removed though.
 	VESRLG $26, H_1, T_1
 	VN     MOD26, H_1, H_1
 	VAQ    T_1, H_2, H_2
@@ -284,95 +367,137 @@
 	VN     MOD26, H_3, H_3
 	VAQ    T_3, H_4, H_4
 
-	// h is now < 2*(2**130-5)
-	// pack h into h1 (hi) and h0 (lo)
-	PACK(H_0, H_1, H_2, H_3, H_4)
+	// h is now < 2(2¹³⁰ - 5)
+	// Pack each lane in h₂₆[0:4] into h₁₂₈[0:1].
+	VESLG $26, H_1, H_1
+	VESLG $26, H_3, H_3
+	VO    H_0, H_1, H_0
+	VO    H_2, H_3, H_2
+	VESLG $4, H_2, H_2
+	VLEIB $7, $48, H_1
+	VSLB  H_1, H_2, H_2
+	VO    H_0, H_2, H_0
+	VLEIB $7, $104, H_1
+	VSLB  H_1, H_4, H_3
+	VO    H_3, H_0, H_0
+	VLEIB $7, $24, H_1
+	VSRLB H_1, H_4, H_1
 
-	// if h > 2**130-5 then h -= 2**130-5
-	MOD(H_0, H_1, T_0, T_1, T_2)
-
-	// h += s
-	MOVD  $·bswapMask<>(SB), R5
-	VL    (R5), T_1
-	VL    16(R4), T_0
-	VPERM T_0, T_0, T_1, T_0    // reverse bytes (to big)
-	VAQ   T_0, H_0, H_0
-	VPERM H_0, H_0, T_1, H_0    // reverse bytes (to little)
-	VST   H_0, (R1)
-
+	// update state
+	VSTEG $1, H_0, 0(R1)
+	VSTEG $0, H_0, 8(R1)
+	VSTEG $1, H_1, 16(R1)
 	RET
 
-b2:
+b2:  // 2 or fewer blocks remaining
 	CMPBLE R3, $16, b1
 
-	// 2 blocks remaining
-	SUB    $17, R3
-	VL     (R2), T_0
-	VLL    R3, 16(R2), T_1
-	ADD    $1, R3
-	MOVBZ  $1, R0
-	CMPBEQ R3, $16, 2(PC)
-	VLVGB  R3, R0, T_1
-	EXPAND(T_0, T_1, F_0, F_1, F_2, F_3, F_4)
-	CMPBNE R3, $16, 2(PC)
-	VLEIB  $12, $1, F_4
-	VLEIB  $4, $1, F_4
+	// Load the 2 remaining blocks (17-32 bytes remaining).
+	MOVD $-17(R3), R0    // index of final byte to load modulo 16
+	VL   (R2), T_0       // load full 16 byte block
+	VLL  R0, 16(R2), T_1 // load final (possibly partial) block and pad with zeros to 16 bytes
 
-	// setup [r²,r]
-	VLVGG $1, RSAVE_0, R_0
-	VLVGG $1, RSAVE_1, R_1
-	VLVGG $1, RSAVE_2, R_2
-	VLVGG $1, RSAVE_3, R_3
-	VLVGG $1, RSAVE_4, R_4
-	VPDI  $0, R5_1, R5SAVE_1, R5_1
-	VPDI  $0, R5_2, R5SAVE_2, R5_2
-	VPDI  $0, R5_3, R5SAVE_3, R5_3
-	VPDI  $0, R5_4, R5SAVE_4, R5_4
+	// The Poly1305 algorithm requires that a 1 bit be appended to
+	// each message block. If the final block is less than 16 bytes
+	// long then it is easiest to insert the 1 before the message
+	// block is split into 26-bit limbs. If, on the other hand, the
+	// final message block is 16 bytes long then we append the 1 bit
+	// after expansion as normal.
+	MOVBZ  $1, R0
+	MOVD   $-16(R3), R3   // index of byte in last block to insert 1 at (could be 16)
+	CMPBEQ R3, $16, 2(PC) // skip the insertion if the final block is 16 bytes long
+	VLVGB  R3, R0, T_1    // insert 1 into the byte at index R3
+
+	// Split both blocks into 26-bit limbs in the appropriate lanes.
+	EXPAND(T_0, T_1, M_0, M_1, M_2, M_3, M_4)
+
+	// Append a 1 byte to the end of the second to last block.
+	VLEIB $4, $1, M_4
+
+	// Append a 1 byte to the end of the last block only if it is a
+	// full 16 byte block.
+	CMPBNE R3, $16, 2(PC)
+	VLEIB  $12, $1, M_4
+
+	// Finally, set up the coefficients for the final multiplication.
+	// We have previously saved r and 5r in the 32-bit even indexes
+	// of the R_[0-4] and R5_[1-4] coefficient registers.
+	//
+	// We want lane 0 to be multiplied by r², so it can be kept the
+	// same. We want lane 1 to be multiplied by r so we need to move
+	// the saved r value into the 32-bit odd index in lane 1 by
+	// rotating the 64-bit lane by 32.
+	VGBM   $0x00ff, T_0         // [0, 0xffffffffffffffff] - mask lane 1 only
+	VERIMG $32, R_0, T_0, R_0   // [_,  r²₂₆[0], _,  r₂₆[0]]
+	VERIMG $32, R_1, T_0, R_1   // [_,  r²₂₆[1], _,  r₂₆[1]]
+	VERIMG $32, R_2, T_0, R_2   // [_,  r²₂₆[2], _,  r₂₆[2]]
+	VERIMG $32, R_3, T_0, R_3   // [_,  r²₂₆[3], _,  r₂₆[3]]
+	VERIMG $32, R_4, T_0, R_4   // [_,  r²₂₆[4], _,  r₂₆[4]]
+	VERIMG $32, R5_1, T_0, R5_1 // [_, 5r²₂₆[1], _, 5r₂₆[1]]
+	VERIMG $32, R5_2, T_0, R5_2 // [_, 5r²₂₆[2], _, 5r₂₆[2]]
+	VERIMG $32, R5_3, T_0, R5_3 // [_, 5r²₂₆[3], _, 5r₂₆[3]]
+	VERIMG $32, R5_4, T_0, R5_4 // [_, 5r²₂₆[4], _, 5r₂₆[4]]
 
 	MOVD $0, R3
 	BR   multiply
 
 skip:
-	VZERO H_0
-	VZERO H_1
-	VZERO H_2
-	VZERO H_3
-	VZERO H_4
-
 	CMPBEQ R3, $0, finish
 
-b1:
-	// 1 block remaining
-	SUB    $1, R3
-	VLL    R3, (R2), T_0
-	ADD    $1, R3
+b1:  // 1 block remaining
+
+	// Load the final block (1-16 bytes). This will be placed into
+	// lane 0.
+	MOVD $-1(R3), R0
+	VLL  R0, (R2), T_0 // pad to 16 bytes with zeros
+
+	// The Poly1305 algorithm requires that a 1 bit be appended to
+	// each message block. If the final block is less than 16 bytes
+	// long then it is easiest to insert the 1 before the message
+	// block is split into 26-bit limbs. If, on the other hand, the
+	// final message block is 16 bytes long then we append the 1 bit
+	// after expansion as normal.
 	MOVBZ  $1, R0
 	CMPBEQ R3, $16, 2(PC)
 	VLVGB  R3, R0, T_0
-	VZERO  T_1
-	EXPAND(T_0, T_1, F_0, F_1, F_2, F_3, F_4)
-	CMPBNE R3, $16, 2(PC)
-	VLEIB  $4, $1, F_4
-	VLEIG  $1, $1, R_0
-	VZERO  R_1
-	VZERO  R_2
-	VZERO  R_3
-	VZERO  R_4
-	VZERO  R5_1
-	VZERO  R5_2
-	VZERO  R5_3
-	VZERO  R5_4
 
-	// setup [r, 1]
-	VLVGG $0, RSAVE_0, R_0
-	VLVGG $0, RSAVE_1, R_1
-	VLVGG $0, RSAVE_2, R_2
-	VLVGG $0, RSAVE_3, R_3
-	VLVGG $0, RSAVE_4, R_4
-	VPDI  $0, R5SAVE_1, R5_1, R5_1
-	VPDI  $0, R5SAVE_2, R5_2, R5_2
-	VPDI  $0, R5SAVE_3, R5_3, R5_3
-	VPDI  $0, R5SAVE_4, R5_4, R5_4
+	// Set the message block in lane 1 to the value 0 so that it
+	// can be accumulated without affecting the final result.
+	VZERO T_1
+
+	// Split the final message block into 26-bit limbs in lane 0.
+	// Lane 1 will contain 0.
+	EXPAND(T_0, T_1, M_0, M_1, M_2, M_3, M_4)
+
+	// Append a 1 byte to the end of the last block only if it is a
+	// full 16 byte block.
+	CMPBNE R3, $16, 2(PC)
+	VLEIB  $4, $1, M_4
+
+	// We have previously saved r and 5r in the 32-bit even indexes
+	// of the R_[0-4] and R5_[1-4] coefficient registers.
+	//
+	// We want lane 0 to be multiplied by r so we need to move the
+	// saved r value into the 32-bit odd index in lane 0. We want
+	// lane 1 to be set to the value 1. This makes multiplication
+	// a no-op. We do this by setting lane 1 in every register to 0
+	// and then just setting the 32-bit index 3 in R_0 to 1.
+	VZERO T_0
+	MOVD  $0, R0
+	MOVD  $0x10111213, R12
+	VLVGP R12, R0, T_1         // [_, 0x10111213, _, 0x00000000]
+	VPERM T_0, R_0, T_1, R_0   // [_,  r₂₆[0], _, 0]
+	VPERM T_0, R_1, T_1, R_1   // [_,  r₂₆[1], _, 0]
+	VPERM T_0, R_2, T_1, R_2   // [_,  r₂₆[2], _, 0]
+	VPERM T_0, R_3, T_1, R_3   // [_,  r₂₆[3], _, 0]
+	VPERM T_0, R_4, T_1, R_4   // [_,  r₂₆[4], _, 0]
+	VPERM T_0, R5_1, T_1, R5_1 // [_, 5r₂₆[1], _, 0]
+	VPERM T_0, R5_2, T_1, R5_2 // [_, 5r₂₆[2], _, 0]
+	VPERM T_0, R5_3, T_1, R5_3 // [_, 5r₂₆[3], _, 0]
+	VPERM T_0, R5_4, T_1, R5_4 // [_, 5r₂₆[4], _, 0]
+
+	// Set the value of lane 1 to be 1.
+	VLEIF $3, $1, R_0 // [_,  r₂₆[0], _, 1]
 
 	MOVD $0, R3
 	BR   multiply
diff --git a/poly1305/sum_vmsl_s390x.s b/poly1305/sum_vmsl_s390x.s
deleted file mode 100644
index b439af9..0000000
--- a/poly1305/sum_vmsl_s390x.s
+++ /dev/null
@@ -1,909 +0,0 @@
-// Copyright 2018 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// +build go1.11,!gccgo,!purego
-
-#include "textflag.h"
-
-// Implementation of Poly1305 using the vector facility (vx) and the VMSL instruction.
-
-// constants
-#define EX0   V1
-#define EX1   V2
-#define EX2   V3
-
-// temporaries
-#define T_0 V4
-#define T_1 V5
-#define T_2 V6
-#define T_3 V7
-#define T_4 V8
-#define T_5 V9
-#define T_6 V10
-#define T_7 V11
-#define T_8 V12
-#define T_9 V13
-#define T_10 V14
-
-// r**2 & r**4
-#define R_0  V15
-#define R_1  V16
-#define R_2  V17
-#define R5_1 V18
-#define R5_2 V19
-// key (r)
-#define RSAVE_0 R7
-#define RSAVE_1 R8
-#define RSAVE_2 R9
-#define R5SAVE_1 R10
-#define R5SAVE_2 R11
-
-// message block
-#define M0 V20
-#define M1 V21
-#define M2 V22
-#define M3 V23
-#define M4 V24
-#define M5 V25
-
-// accumulator
-#define H0_0 V26
-#define H1_0 V27
-#define H2_0 V28
-#define H0_1 V29
-#define H1_1 V30
-#define H2_1 V31
-
-GLOBL ·keyMask<>(SB), RODATA, $16
-DATA ·keyMask<>+0(SB)/8, $0xffffff0ffcffff0f
-DATA ·keyMask<>+8(SB)/8, $0xfcffff0ffcffff0f
-
-GLOBL ·bswapMask<>(SB), RODATA, $16
-DATA ·bswapMask<>+0(SB)/8, $0x0f0e0d0c0b0a0908
-DATA ·bswapMask<>+8(SB)/8, $0x0706050403020100
-
-GLOBL ·constants<>(SB), RODATA, $48
-// EX0
-DATA ·constants<>+0(SB)/8, $0x18191a1b1c1d1e1f
-DATA ·constants<>+8(SB)/8, $0x0000050403020100
-// EX1
-DATA ·constants<>+16(SB)/8, $0x18191a1b1c1d1e1f
-DATA ·constants<>+24(SB)/8, $0x00000a0908070605
-// EX2
-DATA ·constants<>+32(SB)/8, $0x18191a1b1c1d1e1f
-DATA ·constants<>+40(SB)/8, $0x0000000f0e0d0c0b
-
-GLOBL ·c<>(SB), RODATA, $48
-// EX0
-DATA ·c<>+0(SB)/8, $0x0000050403020100
-DATA ·c<>+8(SB)/8, $0x0000151413121110
-// EX1
-DATA ·c<>+16(SB)/8, $0x00000a0908070605
-DATA ·c<>+24(SB)/8, $0x00001a1918171615
-// EX2
-DATA ·c<>+32(SB)/8, $0x0000000f0e0d0c0b
-DATA ·c<>+40(SB)/8, $0x0000001f1e1d1c1b
-
-GLOBL ·reduce<>(SB), RODATA, $32
-// 44 bit
-DATA ·reduce<>+0(SB)/8, $0x0
-DATA ·reduce<>+8(SB)/8, $0xfffffffffff
-// 42 bit
-DATA ·reduce<>+16(SB)/8, $0x0
-DATA ·reduce<>+24(SB)/8, $0x3ffffffffff
-
-// h = (f*g) % (2**130-5) [partial reduction]
-// uses T_0...T_9 temporary registers
-// input: m02_0, m02_1, m02_2, m13_0, m13_1, m13_2, r_0, r_1, r_2, r5_1, r5_2, m4_0, m4_1, m4_2, m5_0, m5_1, m5_2
-// temp: t0, t1, t2, t3, t4, t5, t6, t7, t8, t9
-// output: m02_0, m02_1, m02_2, m13_0, m13_1, m13_2
-#define MULTIPLY(m02_0, m02_1, m02_2, m13_0, m13_1, m13_2, r_0, r_1, r_2, r5_1, r5_2, m4_0, m4_1, m4_2, m5_0, m5_1, m5_2, t0, t1, t2, t3, t4, t5, t6, t7, t8, t9) \
-	\ // Eliminate the dependency for the last 2 VMSLs
-	VMSLG m02_0, r_2, m4_2, m4_2                       \
-	VMSLG m13_0, r_2, m5_2, m5_2                       \ // 8 VMSLs pipelined
-	VMSLG m02_0, r_0, m4_0, m4_0                       \
-	VMSLG m02_1, r5_2, V0, T_0                         \
-	VMSLG m02_0, r_1, m4_1, m4_1                       \
-	VMSLG m02_1, r_0, V0, T_1                          \
-	VMSLG m02_1, r_1, V0, T_2                          \
-	VMSLG m02_2, r5_1, V0, T_3                         \
-	VMSLG m02_2, r5_2, V0, T_4                         \
-	VMSLG m13_0, r_0, m5_0, m5_0                       \
-	VMSLG m13_1, r5_2, V0, T_5                         \
-	VMSLG m13_0, r_1, m5_1, m5_1                       \
-	VMSLG m13_1, r_0, V0, T_6                          \
-	VMSLG m13_1, r_1, V0, T_7                          \
-	VMSLG m13_2, r5_1, V0, T_8                         \
-	VMSLG m13_2, r5_2, V0, T_9                         \
-	VMSLG m02_2, r_0, m4_2, m4_2                       \
-	VMSLG m13_2, r_0, m5_2, m5_2                       \
-	VAQ   m4_0, T_0, m02_0                             \
-	VAQ   m4_1, T_1, m02_1                             \
-	VAQ   m5_0, T_5, m13_0                             \
-	VAQ   m5_1, T_6, m13_1                             \
-	VAQ   m02_0, T_3, m02_0                            \
-	VAQ   m02_1, T_4, m02_1                            \
-	VAQ   m13_0, T_8, m13_0                            \
-	VAQ   m13_1, T_9, m13_1                            \
-	VAQ   m4_2, T_2, m02_2                             \
-	VAQ   m5_2, T_7, m13_2                             \
-
-// SQUARE uses three limbs of r and r_2*5 to output square of r
-// uses T_1, T_5 and T_7 temporary registers
-// input: r_0, r_1, r_2, r5_2
-// temp: TEMP0, TEMP1, TEMP2
-// output: p0, p1, p2
-#define SQUARE(r_0, r_1, r_2, r5_2, p0, p1, p2, TEMP0, TEMP1, TEMP2) \
-	VMSLG r_0, r_0, p0, p0     \
-	VMSLG r_1, r5_2, V0, TEMP0 \
-	VMSLG r_2, r5_2, p1, p1    \
-	VMSLG r_0, r_1, V0, TEMP1  \
-	VMSLG r_1, r_1, p2, p2     \
-	VMSLG r_0, r_2, V0, TEMP2  \
-	VAQ   TEMP0, p0, p0        \
-	VAQ   TEMP1, p1, p1        \
-	VAQ   TEMP2, p2, p2        \
-	VAQ   TEMP0, p0, p0        \
-	VAQ   TEMP1, p1, p1        \
-	VAQ   TEMP2, p2, p2        \
-
-// carry h0->h1->h2->h0 || h3->h4->h5->h3
-// uses T_2, T_4, T_5, T_7, T_8, T_9
-//       t6,  t7,  t8,  t9, t10, t11
-// input: h0, h1, h2, h3, h4, h5
-// temp: t0, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11
-// output: h0, h1, h2, h3, h4, h5
-#define REDUCE(h0, h1, h2, h3, h4, h5, t0, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11) \
-	VLM    (R12), t6, t7  \ // 44 and 42 bit clear mask
-	VLEIB  $7, $0x28, t10 \ // 5 byte shift mask
-	VREPIB $4, t8         \ // 4 bit shift mask
-	VREPIB $2, t11        \ // 2 bit shift mask
-	VSRLB  t10, h0, t0    \ // h0 byte shift
-	VSRLB  t10, h1, t1    \ // h1 byte shift
-	VSRLB  t10, h2, t2    \ // h2 byte shift
-	VSRLB  t10, h3, t3    \ // h3 byte shift
-	VSRLB  t10, h4, t4    \ // h4 byte shift
-	VSRLB  t10, h5, t5    \ // h5 byte shift
-	VSRL   t8, t0, t0     \ // h0 bit shift
-	VSRL   t8, t1, t1     \ // h2 bit shift
-	VSRL   t11, t2, t2    \ // h2 bit shift
-	VSRL   t8, t3, t3     \ // h3 bit shift
-	VSRL   t8, t4, t4     \ // h4 bit shift
-	VESLG  $2, t2, t9     \ // h2 carry x5
-	VSRL   t11, t5, t5    \ // h5 bit shift
-	VN     t6, h0, h0     \ // h0 clear carry
-	VAQ    t2, t9, t2     \ // h2 carry x5
-	VESLG  $2, t5, t9     \ // h5 carry x5
-	VN     t6, h1, h1     \ // h1 clear carry
-	VN     t7, h2, h2     \ // h2 clear carry
-	VAQ    t5, t9, t5     \ // h5 carry x5
-	VN     t6, h3, h3     \ // h3 clear carry
-	VN     t6, h4, h4     \ // h4 clear carry
-	VN     t7, h5, h5     \ // h5 clear carry
-	VAQ    t0, h1, h1     \ // h0->h1
-	VAQ    t3, h4, h4     \ // h3->h4
-	VAQ    t1, h2, h2     \ // h1->h2
-	VAQ    t4, h5, h5     \ // h4->h5
-	VAQ    t2, h0, h0     \ // h2->h0
-	VAQ    t5, h3, h3     \ // h5->h3
-	VREPG  $1, t6, t6     \ // 44 and 42 bit masks across both halves
-	VREPG  $1, t7, t7     \
-	VSLDB  $8, h0, h0, h0 \ // set up [h0/1/2, h3/4/5]
-	VSLDB  $8, h1, h1, h1 \
-	VSLDB  $8, h2, h2, h2 \
-	VO     h0, h3, h3     \
-	VO     h1, h4, h4     \
-	VO     h2, h5, h5     \
-	VESRLG $44, h3, t0    \ // 44 bit shift right
-	VESRLG $44, h4, t1    \
-	VESRLG $42, h5, t2    \
-	VN     t6, h3, h3     \ // clear carry bits
-	VN     t6, h4, h4     \
-	VN     t7, h5, h5     \
-	VESLG  $2, t2, t9     \ // multiply carry by 5
-	VAQ    t9, t2, t2     \
-	VAQ    t0, h4, h4     \
-	VAQ    t1, h5, h5     \
-	VAQ    t2, h3, h3     \
-
-// carry h0->h1->h2->h0
-// input: h0, h1, h2
-// temp: t0, t1, t2, t3, t4, t5, t6, t7, t8
-// output: h0, h1, h2
-#define REDUCE2(h0, h1, h2, t0, t1, t2, t3, t4, t5, t6, t7, t8) \
-	VLEIB  $7, $0x28, t3 \ // 5 byte shift mask
-	VREPIB $4, t4        \ // 4 bit shift mask
-	VREPIB $2, t7        \ // 2 bit shift mask
-	VGBM   $0x003F, t5   \ // mask to clear carry bits
-	VSRLB  t3, h0, t0    \
-	VSRLB  t3, h1, t1    \
-	VSRLB  t3, h2, t2    \
-	VESRLG $4, t5, t5    \ // 44 bit clear mask
-	VSRL   t4, t0, t0    \
-	VSRL   t4, t1, t1    \
-	VSRL   t7, t2, t2    \
-	VESRLG $2, t5, t6    \ // 42 bit clear mask
-	VESLG  $2, t2, t8    \
-	VAQ    t8, t2, t2    \
-	VN     t5, h0, h0    \
-	VN     t5, h1, h1    \
-	VN     t6, h2, h2    \
-	VAQ    t0, h1, h1    \
-	VAQ    t1, h2, h2    \
-	VAQ    t2, h0, h0    \
-	VSRLB  t3, h0, t0    \
-	VSRLB  t3, h1, t1    \
-	VSRLB  t3, h2, t2    \
-	VSRL   t4, t0, t0    \
-	VSRL   t4, t1, t1    \
-	VSRL   t7, t2, t2    \
-	VN     t5, h0, h0    \
-	VN     t5, h1, h1    \
-	VESLG  $2, t2, t8    \
-	VN     t6, h2, h2    \
-	VAQ    t0, h1, h1    \
-	VAQ    t8, t2, t2    \
-	VAQ    t1, h2, h2    \
-	VAQ    t2, h0, h0    \
-
-// expands two message blocks into the lower halfs of the d registers
-// moves the contents of the d registers into upper halfs
-// input: in1, in2, d0, d1, d2, d3, d4, d5
-// temp: TEMP0, TEMP1, TEMP2, TEMP3
-// output: d0, d1, d2, d3, d4, d5
-#define EXPACC(in1, in2, d0, d1, d2, d3, d4, d5, TEMP0, TEMP1, TEMP2, TEMP3) \
-	VGBM   $0xff3f, TEMP0      \
-	VGBM   $0xff1f, TEMP1      \
-	VESLG  $4, d1, TEMP2       \
-	VESLG  $4, d4, TEMP3       \
-	VESRLG $4, TEMP0, TEMP0    \
-	VPERM  in1, d0, EX0, d0    \
-	VPERM  in2, d3, EX0, d3    \
-	VPERM  in1, d2, EX2, d2    \
-	VPERM  in2, d5, EX2, d5    \
-	VPERM  in1, TEMP2, EX1, d1 \
-	VPERM  in2, TEMP3, EX1, d4 \
-	VN     TEMP0, d0, d0       \
-	VN     TEMP0, d3, d3       \
-	VESRLG $4, d1, d1          \
-	VESRLG $4, d4, d4          \
-	VN     TEMP1, d2, d2       \
-	VN     TEMP1, d5, d5       \
-	VN     TEMP0, d1, d1       \
-	VN     TEMP0, d4, d4       \
-
-// expands one message block into the lower halfs of the d registers
-// moves the contents of the d registers into upper halfs
-// input: in, d0, d1, d2
-// temp: TEMP0, TEMP1, TEMP2
-// output: d0, d1, d2
-#define EXPACC2(in, d0, d1, d2, TEMP0, TEMP1, TEMP2) \
-	VGBM   $0xff3f, TEMP0     \
-	VESLG  $4, d1, TEMP2      \
-	VGBM   $0xff1f, TEMP1     \
-	VPERM  in, d0, EX0, d0    \
-	VESRLG $4, TEMP0, TEMP0   \
-	VPERM  in, d2, EX2, d2    \
-	VPERM  in, TEMP2, EX1, d1 \
-	VN     TEMP0, d0, d0      \
-	VN     TEMP1, d2, d2      \
-	VESRLG $4, d1, d1         \
-	VN     TEMP0, d1, d1      \
-
-// pack h2:h0 into h1:h0 (no carry)
-// input: h0, h1, h2
-// output: h0, h1, h2
-#define PACK(h0, h1, h2) \
-	VMRLG  h1, h2, h2  \ // copy h1 to upper half h2
-	VESLG  $44, h1, h1 \ // shift limb 1 44 bits, leaving 20
-	VO     h0, h1, h0  \ // combine h0 with 20 bits from limb 1
-	VESRLG $20, h2, h1 \ // put top 24 bits of limb 1 into h1
-	VLEIG  $1, $0, h1  \ // clear h2 stuff from lower half of h1
-	VO     h0, h1, h0  \ // h0 now has 88 bits (limb 0 and 1)
-	VLEIG  $0, $0, h2  \ // clear upper half of h2
-	VESRLG $40, h2, h1 \ // h1 now has upper two bits of result
-	VLEIB  $7, $88, h1 \ // for byte shift (11 bytes)
-	VSLB   h1, h2, h2  \ // shift h2 11 bytes to the left
-	VO     h0, h2, h0  \ // combine h0 with 20 bits from limb 1
-	VLEIG  $0, $0, h1  \ // clear upper half of h1
-
-// if h > 2**130-5 then h -= 2**130-5
-// input: h0, h1
-// temp: t0, t1, t2
-// output: h0
-#define MOD(h0, h1, t0, t1, t2) \
-	VZERO t0          \
-	VLEIG $1, $5, t0  \
-	VACCQ h0, t0, t1  \
-	VAQ   h0, t0, t0  \
-	VONE  t2          \
-	VLEIG $1, $-4, t2 \
-	VAQ   t2, t1, t1  \
-	VACCQ h1, t1, t1  \
-	VONE  t2          \
-	VAQ   t2, t1, t1  \
-	VN    h0, t1, t2  \
-	VNC   t0, t1, t1  \
-	VO    t1, t2, h0  \
-
-// func poly1305vmsl(out *[16]byte, m *byte, mlen uint64, key *[32]key)
-TEXT ·poly1305vmsl(SB), $0-32
-	// This code processes 6 + up to 4 blocks (32 bytes) per iteration
-	// using the algorithm described in:
-	// NEON crypto, Daniel J. Bernstein & Peter Schwabe
-	// https://cryptojedi.org/papers/neoncrypto-20120320.pdf
-	// And as moddified for VMSL as described in
-	// Accelerating Poly1305 Cryptographic Message Authentication on the z14
-	// O'Farrell et al, CASCON 2017, p48-55
-	// https://ibm.ent.box.com/s/jf9gedj0e9d2vjctfyh186shaztavnht
-
-	LMG   out+0(FP), R1, R4 // R1=out, R2=m, R3=mlen, R4=key
-	VZERO V0                // c
-
-	// load EX0, EX1 and EX2
-	MOVD $·constants<>(SB), R5
-	VLM  (R5), EX0, EX2        // c
-
-	// setup r
-	VL    (R4), T_0
-	MOVD  $·keyMask<>(SB), R6
-	VL    (R6), T_1
-	VN    T_0, T_1, T_0
-	VZERO T_2                 // limbs for r
-	VZERO T_3
-	VZERO T_4
-	EXPACC2(T_0, T_2, T_3, T_4, T_1, T_5, T_7)
-
-	// T_2, T_3, T_4: [0, r]
-
-	// setup r*20
-	VLEIG $0, $0, T_0
-	VLEIG $1, $20, T_0       // T_0: [0, 20]
-	VZERO T_5
-	VZERO T_6
-	VMSLG T_0, T_3, T_5, T_5
-	VMSLG T_0, T_4, T_6, T_6
-
-	// store r for final block in GR
-	VLGVG $1, T_2, RSAVE_0  // c
-	VLGVG $1, T_3, RSAVE_1  // c
-	VLGVG $1, T_4, RSAVE_2  // c
-	VLGVG $1, T_5, R5SAVE_1 // c
-	VLGVG $1, T_6, R5SAVE_2 // c
-
-	// initialize h
-	VZERO H0_0
-	VZERO H1_0
-	VZERO H2_0
-	VZERO H0_1
-	VZERO H1_1
-	VZERO H2_1
-
-	// initialize pointer for reduce constants
-	MOVD $·reduce<>(SB), R12
-
-	// calculate r**2 and 20*(r**2)
-	VZERO R_0
-	VZERO R_1
-	VZERO R_2
-	SQUARE(T_2, T_3, T_4, T_6, R_0, R_1, R_2, T_1, T_5, T_7)
-	REDUCE2(R_0, R_1, R_2, M0, M1, M2, M3, M4, R5_1, R5_2, M5, T_1)
-	VZERO R5_1
-	VZERO R5_2
-	VMSLG T_0, R_1, R5_1, R5_1
-	VMSLG T_0, R_2, R5_2, R5_2
-
-	// skip r**4 calculation if 3 blocks or less
-	CMPBLE R3, $48, b4
-
-	// calculate r**4 and 20*(r**4)
-	VZERO T_8
-	VZERO T_9
-	VZERO T_10
-	SQUARE(R_0, R_1, R_2, R5_2, T_8, T_9, T_10, T_1, T_5, T_7)
-	REDUCE2(T_8, T_9, T_10, M0, M1, M2, M3, M4, T_2, T_3, M5, T_1)
-	VZERO T_2
-	VZERO T_3
-	VMSLG T_0, T_9, T_2, T_2
-	VMSLG T_0, T_10, T_3, T_3
-
-	// put r**2 to the right and r**4 to the left of R_0, R_1, R_2
-	VSLDB $8, T_8, T_8, T_8
-	VSLDB $8, T_9, T_9, T_9
-	VSLDB $8, T_10, T_10, T_10
-	VSLDB $8, T_2, T_2, T_2
-	VSLDB $8, T_3, T_3, T_3
-
-	VO T_8, R_0, R_0
-	VO T_9, R_1, R_1
-	VO T_10, R_2, R_2
-	VO T_2, R5_1, R5_1
-	VO T_3, R5_2, R5_2
-
-	CMPBLE R3, $80, load // less than or equal to 5 blocks in message
-
-	// 6(or 5+1) blocks
-	SUB    $81, R3
-	VLM    (R2), M0, M4
-	VLL    R3, 80(R2), M5
-	ADD    $1, R3
-	MOVBZ  $1, R0
-	CMPBGE R3, $16, 2(PC)
-	VLVGB  R3, R0, M5
-	MOVD   $96(R2), R2
-	EXPACC(M0, M1, H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_0, T_1, T_2, T_3)
-	EXPACC(M2, M3, H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_0, T_1, T_2, T_3)
-	VLEIB  $2, $1, H2_0
-	VLEIB  $2, $1, H2_1
-	VLEIB  $10, $1, H2_0
-	VLEIB  $10, $1, H2_1
-
-	VZERO  M0
-	VZERO  M1
-	VZERO  M2
-	VZERO  M3
-	VZERO  T_4
-	VZERO  T_10
-	EXPACC(M4, M5, M0, M1, M2, M3, T_4, T_10, T_0, T_1, T_2, T_3)
-	VLR    T_4, M4
-	VLEIB  $10, $1, M2
-	CMPBLT R3, $16, 2(PC)
-	VLEIB  $10, $1, T_10
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, T_10, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_10, M0, M1, M2, M3, M4, T_4, T_5, T_2, T_7, T_8, T_9)
-	VMRHG  V0, H0_1, H0_0
-	VMRHG  V0, H1_1, H1_0
-	VMRHG  V0, H2_1, H2_0
-	VMRLG  V0, H0_1, H0_1
-	VMRLG  V0, H1_1, H1_1
-	VMRLG  V0, H2_1, H2_1
-
-	SUB    $16, R3
-	CMPBLE R3, $0, square
-
-load:
-	// load EX0, EX1 and EX2
-	MOVD $·c<>(SB), R5
-	VLM  (R5), EX0, EX2
-
-loop:
-	CMPBLE R3, $64, add // b4	// last 4 or less blocks left
-
-	// next 4 full blocks
-	VLM  (R2), M2, M5
-	SUB  $64, R3
-	MOVD $64(R2), R2
-	REDUCE(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_10, M0, M1, T_0, T_1, T_3, T_4, T_5, T_2, T_7, T_8, T_9)
-
-	// expacc in-lined to create [m2, m3] limbs
-	VGBM   $0x3f3f, T_0     // 44 bit clear mask
-	VGBM   $0x1f1f, T_1     // 40 bit clear mask
-	VPERM  M2, M3, EX0, T_3
-	VESRLG $4, T_0, T_0     // 44 bit clear mask ready
-	VPERM  M2, M3, EX1, T_4
-	VPERM  M2, M3, EX2, T_5
-	VN     T_0, T_3, T_3
-	VESRLG $4, T_4, T_4
-	VN     T_1, T_5, T_5
-	VN     T_0, T_4, T_4
-	VMRHG  H0_1, T_3, H0_0
-	VMRHG  H1_1, T_4, H1_0
-	VMRHG  H2_1, T_5, H2_0
-	VMRLG  H0_1, T_3, H0_1
-	VMRLG  H1_1, T_4, H1_1
-	VMRLG  H2_1, T_5, H2_1
-	VLEIB  $10, $1, H2_0
-	VLEIB  $10, $1, H2_1
-	VPERM  M4, M5, EX0, T_3
-	VPERM  M4, M5, EX1, T_4
-	VPERM  M4, M5, EX2, T_5
-	VN     T_0, T_3, T_3
-	VESRLG $4, T_4, T_4
-	VN     T_1, T_5, T_5
-	VN     T_0, T_4, T_4
-	VMRHG  V0, T_3, M0
-	VMRHG  V0, T_4, M1
-	VMRHG  V0, T_5, M2
-	VMRLG  V0, T_3, M3
-	VMRLG  V0, T_4, M4
-	VMRLG  V0, T_5, M5
-	VLEIB  $10, $1, M2
-	VLEIB  $10, $1, M5
-
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, M5, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	CMPBNE R3, $0, loop
-	REDUCE(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_10, M0, M1, M3, M4, M5, T_4, T_5, T_2, T_7, T_8, T_9)
-	VMRHG  V0, H0_1, H0_0
-	VMRHG  V0, H1_1, H1_0
-	VMRHG  V0, H2_1, H2_0
-	VMRLG  V0, H0_1, H0_1
-	VMRLG  V0, H1_1, H1_1
-	VMRLG  V0, H2_1, H2_1
-
-	// load EX0, EX1, EX2
-	MOVD $·constants<>(SB), R5
-	VLM  (R5), EX0, EX2
-
-	// sum vectors
-	VAQ H0_0, H0_1, H0_0
-	VAQ H1_0, H1_1, H1_0
-	VAQ H2_0, H2_1, H2_0
-
-	// h may be >= 2*(2**130-5) so we need to reduce it again
-	// M0...M4 are used as temps here
-	REDUCE2(H0_0, H1_0, H2_0, M0, M1, M2, M3, M4, T_9, T_10, H0_1, M5)
-
-next:  // carry h1->h2
-	VLEIB  $7, $0x28, T_1
-	VREPIB $4, T_2
-	VGBM   $0x003F, T_3
-	VESRLG $4, T_3
-
-	// byte shift
-	VSRLB T_1, H1_0, T_4
-
-	// bit shift
-	VSRL T_2, T_4, T_4
-
-	// clear h1 carry bits
-	VN T_3, H1_0, H1_0
-
-	// add carry
-	VAQ T_4, H2_0, H2_0
-
-	// h is now < 2*(2**130-5)
-	// pack h into h1 (hi) and h0 (lo)
-	PACK(H0_0, H1_0, H2_0)
-
-	// if h > 2**130-5 then h -= 2**130-5
-	MOD(H0_0, H1_0, T_0, T_1, T_2)
-
-	// h += s
-	MOVD  $·bswapMask<>(SB), R5
-	VL    (R5), T_1
-	VL    16(R4), T_0
-	VPERM T_0, T_0, T_1, T_0    // reverse bytes (to big)
-	VAQ   T_0, H0_0, H0_0
-	VPERM H0_0, H0_0, T_1, H0_0 // reverse bytes (to little)
-	VST   H0_0, (R1)
-	RET
-
-add:
-	// load EX0, EX1, EX2
-	MOVD $·constants<>(SB), R5
-	VLM  (R5), EX0, EX2
-
-	REDUCE(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_10, M0, M1, M3, M4, M5, T_4, T_5, T_2, T_7, T_8, T_9)
-	VMRHG  V0, H0_1, H0_0
-	VMRHG  V0, H1_1, H1_0
-	VMRHG  V0, H2_1, H2_0
-	VMRLG  V0, H0_1, H0_1
-	VMRLG  V0, H1_1, H1_1
-	VMRLG  V0, H2_1, H2_1
-	CMPBLE R3, $64, b4
-
-b4:
-	CMPBLE R3, $48, b3 // 3 blocks or less
-
-	// 4(3+1) blocks remaining
-	SUB    $49, R3
-	VLM    (R2), M0, M2
-	VLL    R3, 48(R2), M3
-	ADD    $1, R3
-	MOVBZ  $1, R0
-	CMPBEQ R3, $16, 2(PC)
-	VLVGB  R3, R0, M3
-	MOVD   $64(R2), R2
-	EXPACC(M0, M1, H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_0, T_1, T_2, T_3)
-	VLEIB  $10, $1, H2_0
-	VLEIB  $10, $1, H2_1
-	VZERO  M0
-	VZERO  M1
-	VZERO  M4
-	VZERO  M5
-	VZERO  T_4
-	VZERO  T_10
-	EXPACC(M2, M3, M0, M1, M4, M5, T_4, T_10, T_0, T_1, T_2, T_3)
-	VLR    T_4, M2
-	VLEIB  $10, $1, M4
-	CMPBNE R3, $16, 2(PC)
-	VLEIB  $10, $1, T_10
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M4, M5, M2, T_10, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_10, M0, M1, M3, M4, M5, T_4, T_5, T_2, T_7, T_8, T_9)
-	VMRHG  V0, H0_1, H0_0
-	VMRHG  V0, H1_1, H1_0
-	VMRHG  V0, H2_1, H2_0
-	VMRLG  V0, H0_1, H0_1
-	VMRLG  V0, H1_1, H1_1
-	VMRLG  V0, H2_1, H2_1
-	SUB    $16, R3
-	CMPBLE R3, $0, square // this condition must always hold true!
-
-b3:
-	CMPBLE R3, $32, b2
-
-	// 3 blocks remaining
-
-	// setup [r²,r]
-	VSLDB $8, R_0, R_0, R_0
-	VSLDB $8, R_1, R_1, R_1
-	VSLDB $8, R_2, R_2, R_2
-	VSLDB $8, R5_1, R5_1, R5_1
-	VSLDB $8, R5_2, R5_2, R5_2
-
-	VLVGG $1, RSAVE_0, R_0
-	VLVGG $1, RSAVE_1, R_1
-	VLVGG $1, RSAVE_2, R_2
-	VLVGG $1, R5SAVE_1, R5_1
-	VLVGG $1, R5SAVE_2, R5_2
-
-	// setup [h0, h1]
-	VSLDB $8, H0_0, H0_0, H0_0
-	VSLDB $8, H1_0, H1_0, H1_0
-	VSLDB $8, H2_0, H2_0, H2_0
-	VO    H0_1, H0_0, H0_0
-	VO    H1_1, H1_0, H1_0
-	VO    H2_1, H2_0, H2_0
-	VZERO H0_1
-	VZERO H1_1
-	VZERO H2_1
-
-	VZERO M0
-	VZERO M1
-	VZERO M2
-	VZERO M3
-	VZERO M4
-	VZERO M5
-
-	// H*[r**2, r]
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, M5, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE2(H0_0, H1_0, H2_0, M0, M1, M2, M3, M4, H0_1, H1_1, T_10, M5)
-
-	SUB    $33, R3
-	VLM    (R2), M0, M1
-	VLL    R3, 32(R2), M2
-	ADD    $1, R3
-	MOVBZ  $1, R0
-	CMPBEQ R3, $16, 2(PC)
-	VLVGB  R3, R0, M2
-
-	// H += m0
-	VZERO T_1
-	VZERO T_2
-	VZERO T_3
-	EXPACC2(M0, T_1, T_2, T_3, T_4, T_5, T_6)
-	VLEIB $10, $1, T_3
-	VAG   H0_0, T_1, H0_0
-	VAG   H1_0, T_2, H1_0
-	VAG   H2_0, T_3, H2_0
-
-	VZERO M0
-	VZERO M3
-	VZERO M4
-	VZERO M5
-	VZERO T_10
-
-	// (H+m0)*r
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M3, M4, M5, V0, T_10, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE2(H0_0, H1_0, H2_0, M0, M3, M4, M5, T_10, H0_1, H1_1, H2_1, T_9)
-
-	// H += m1
-	VZERO V0
-	VZERO T_1
-	VZERO T_2
-	VZERO T_3
-	EXPACC2(M1, T_1, T_2, T_3, T_4, T_5, T_6)
-	VLEIB $10, $1, T_3
-	VAQ   H0_0, T_1, H0_0
-	VAQ   H1_0, T_2, H1_0
-	VAQ   H2_0, T_3, H2_0
-	REDUCE2(H0_0, H1_0, H2_0, M0, M3, M4, M5, T_9, H0_1, H1_1, H2_1, T_10)
-
-	// [H, m2] * [r**2, r]
-	EXPACC2(M2, H0_0, H1_0, H2_0, T_1, T_2, T_3)
-	CMPBNE R3, $16, 2(PC)
-	VLEIB  $10, $1, H2_0
-	VZERO  M0
-	VZERO  M1
-	VZERO  M2
-	VZERO  M3
-	VZERO  M4
-	VZERO  M5
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, M5, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE2(H0_0, H1_0, H2_0, M0, M1, M2, M3, M4, H0_1, H1_1, M5, T_10)
-	SUB    $16, R3
-	CMPBLE R3, $0, next   // this condition must always hold true!
-
-b2:
-	CMPBLE R3, $16, b1
-
-	// 2 blocks remaining
-
-	// setup [r²,r]
-	VSLDB $8, R_0, R_0, R_0
-	VSLDB $8, R_1, R_1, R_1
-	VSLDB $8, R_2, R_2, R_2
-	VSLDB $8, R5_1, R5_1, R5_1
-	VSLDB $8, R5_2, R5_2, R5_2
-
-	VLVGG $1, RSAVE_0, R_0
-	VLVGG $1, RSAVE_1, R_1
-	VLVGG $1, RSAVE_2, R_2
-	VLVGG $1, R5SAVE_1, R5_1
-	VLVGG $1, R5SAVE_2, R5_2
-
-	// setup [h0, h1]
-	VSLDB $8, H0_0, H0_0, H0_0
-	VSLDB $8, H1_0, H1_0, H1_0
-	VSLDB $8, H2_0, H2_0, H2_0
-	VO    H0_1, H0_0, H0_0
-	VO    H1_1, H1_0, H1_0
-	VO    H2_1, H2_0, H2_0
-	VZERO H0_1
-	VZERO H1_1
-	VZERO H2_1
-
-	VZERO M0
-	VZERO M1
-	VZERO M2
-	VZERO M3
-	VZERO M4
-	VZERO M5
-
-	// H*[r**2, r]
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, M5, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, T_10, M0, M1, M2, M3, M4, T_4, T_5, T_2, T_7, T_8, T_9)
-	VMRHG V0, H0_1, H0_0
-	VMRHG V0, H1_1, H1_0
-	VMRHG V0, H2_1, H2_0
-	VMRLG V0, H0_1, H0_1
-	VMRLG V0, H1_1, H1_1
-	VMRLG V0, H2_1, H2_1
-
-	// move h to the left and 0s at the right
-	VSLDB $8, H0_0, H0_0, H0_0
-	VSLDB $8, H1_0, H1_0, H1_0
-	VSLDB $8, H2_0, H2_0, H2_0
-
-	// get message blocks and append 1 to start
-	SUB    $17, R3
-	VL     (R2), M0
-	VLL    R3, 16(R2), M1
-	ADD    $1, R3
-	MOVBZ  $1, R0
-	CMPBEQ R3, $16, 2(PC)
-	VLVGB  R3, R0, M1
-	VZERO  T_6
-	VZERO  T_7
-	VZERO  T_8
-	EXPACC2(M0, T_6, T_7, T_8, T_1, T_2, T_3)
-	EXPACC2(M1, T_6, T_7, T_8, T_1, T_2, T_3)
-	VLEIB  $2, $1, T_8
-	CMPBNE R3, $16, 2(PC)
-	VLEIB  $10, $1, T_8
-
-	// add [m0, m1] to h
-	VAG H0_0, T_6, H0_0
-	VAG H1_0, T_7, H1_0
-	VAG H2_0, T_8, H2_0
-
-	VZERO M2
-	VZERO M3
-	VZERO M4
-	VZERO M5
-	VZERO T_10
-	VZERO M0
-
-	// at this point R_0 .. R5_2 look like [r**2, r]
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M2, M3, M4, M5, T_10, M0, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE2(H0_0, H1_0, H2_0, M2, M3, M4, M5, T_9, H0_1, H1_1, H2_1, T_10)
-	SUB    $16, R3, R3
-	CMPBLE R3, $0, next
-
-b1:
-	CMPBLE R3, $0, next
-
-	// 1 block remaining
-
-	// setup [r²,r]
-	VSLDB $8, R_0, R_0, R_0
-	VSLDB $8, R_1, R_1, R_1
-	VSLDB $8, R_2, R_2, R_2
-	VSLDB $8, R5_1, R5_1, R5_1
-	VSLDB $8, R5_2, R5_2, R5_2
-
-	VLVGG $1, RSAVE_0, R_0
-	VLVGG $1, RSAVE_1, R_1
-	VLVGG $1, RSAVE_2, R_2
-	VLVGG $1, R5SAVE_1, R5_1
-	VLVGG $1, R5SAVE_2, R5_2
-
-	// setup [h0, h1]
-	VSLDB $8, H0_0, H0_0, H0_0
-	VSLDB $8, H1_0, H1_0, H1_0
-	VSLDB $8, H2_0, H2_0, H2_0
-	VO    H0_1, H0_0, H0_0
-	VO    H1_1, H1_0, H1_0
-	VO    H2_1, H2_0, H2_0
-	VZERO H0_1
-	VZERO H1_1
-	VZERO H2_1
-
-	VZERO M0
-	VZERO M1
-	VZERO M2
-	VZERO M3
-	VZERO M4
-	VZERO M5
-
-	// H*[r**2, r]
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, M5, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE2(H0_0, H1_0, H2_0, M0, M1, M2, M3, M4, T_9, T_10, H0_1, M5)
-
-	// set up [0, m0] limbs
-	SUB    $1, R3
-	VLL    R3, (R2), M0
-	ADD    $1, R3
-	MOVBZ  $1, R0
-	CMPBEQ R3, $16, 2(PC)
-	VLVGB  R3, R0, M0
-	VZERO  T_1
-	VZERO  T_2
-	VZERO  T_3
-	EXPACC2(M0, T_1, T_2, T_3, T_4, T_5, T_6)// limbs: [0, m]
-	CMPBNE R3, $16, 2(PC)
-	VLEIB  $10, $1, T_3
-
-	// h+m0
-	VAQ H0_0, T_1, H0_0
-	VAQ H1_0, T_2, H1_0
-	VAQ H2_0, T_3, H2_0
-
-	VZERO M0
-	VZERO M1
-	VZERO M2
-	VZERO M3
-	VZERO M4
-	VZERO M5
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, M5, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE2(H0_0, H1_0, H2_0, M0, M1, M2, M3, M4, T_9, T_10, H0_1, M5)
-
-	BR next
-
-square:
-	// setup [r²,r]
-	VSLDB $8, R_0, R_0, R_0
-	VSLDB $8, R_1, R_1, R_1
-	VSLDB $8, R_2, R_2, R_2
-	VSLDB $8, R5_1, R5_1, R5_1
-	VSLDB $8, R5_2, R5_2, R5_2
-
-	VLVGG $1, RSAVE_0, R_0
-	VLVGG $1, RSAVE_1, R_1
-	VLVGG $1, RSAVE_2, R_2
-	VLVGG $1, R5SAVE_1, R5_1
-	VLVGG $1, R5SAVE_2, R5_2
-
-	// setup [h0, h1]
-	VSLDB $8, H0_0, H0_0, H0_0
-	VSLDB $8, H1_0, H1_0, H1_0
-	VSLDB $8, H2_0, H2_0, H2_0
-	VO    H0_1, H0_0, H0_0
-	VO    H1_1, H1_0, H1_0
-	VO    H2_1, H2_0, H2_0
-	VZERO H0_1
-	VZERO H1_1
-	VZERO H2_1
-
-	VZERO M0
-	VZERO M1
-	VZERO M2
-	VZERO M3
-	VZERO M4
-	VZERO M5
-
-	// (h0*r**2) + (h1*r)
-	MULTIPLY(H0_0, H1_0, H2_0, H0_1, H1_1, H2_1, R_0, R_1, R_2, R5_1, R5_2, M0, M1, M2, M3, M4, M5, T_0, T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9)
-	REDUCE2(H0_0, H1_0, H2_0, M0, M1, M2, M3, M4, T_9, T_10, H0_1, M5)
-	BR next
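
The "next:" path above is just the standard Poly1305 finalization in vector
form: the callers keep h below 2(2¹³⁰ - 5) (see the comment above), so a
single conditional subtraction of 2¹³⁰ - 5 fully reduces it, after which
tag = (h + s) mod 2¹²⁸ is stored little-endian. A scalar Go sketch of those
steps, with illustrative names rather than this package's actual helpers:

	package sketch

	import (
		"encoding/binary"
		"math/bits"
	)

	// finalize reduces h = h2·2¹²⁸ + h1·2⁶⁴ + h0 modulo 2¹³⁰-5 and writes
	// tag = (h + s) mod 2¹²⁸ little-endian. One conditional subtraction is
	// enough because the caller guarantees h < 2(2¹³⁰-5).
	func finalize(tag *[16]byte, h0, h1, h2, s0, s1 uint64) {
		// t = h - (2¹³⁰-5); the final borrow is 1 iff h < 2¹³⁰-5.
		t0, b := bits.Sub64(h0, 0xfffffffffffffffb, 0)
		t1, b := bits.Sub64(h1, 0xffffffffffffffff, b)
		_, b = bits.Sub64(h2, 3, b)

		// Constant-time select: keep h if h < 2¹³⁰-5, otherwise take t.
		m := b - 1 // 0 if the subtraction borrowed, all ones otherwise
		h0 = h0&^m | t0&m
		h1 = h1&^m | t1&m

		// tag = (h + s) mod 2¹²⁸, stored little-endian.
		h0, c := bits.Add64(h0, s0, 0)
		h1, _ = bits.Add64(h1, s1, c)
		binary.LittleEndian.PutUint64(tag[0:8], h0)
		binary.LittleEndian.PutUint64(tag[8:16], h1)
	}
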
diff --git a/poly1305/vectors_test.go b/poly1305/vectors_test.go
index 18d7ff8..4788950 100644
--- a/poly1305/vectors_test.go
+++ b/poly1305/vectors_test.go
@@ -2940,4 +2940,61 @@
 			"eeb979c28f03795f6f1e4b8410beab19a20febc91985b8a7c298534a6598" +
 			"f2c5b0dc5de9f5e55a97791507bc6373db26",
 	},
+
+	// Override the initial state to ensure that a large h (still subject to
+	// h < 2(2¹³⁰ - 5)) is deserialized from the state correctly.
+	{
+		key:   "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
+		state: "0000000000000007fffffffffffffffffffffffffffffff5", // 2(2¹³⁰ - 5) - 1
+		in:    "",
+		tag:   "f9ffffffffffffffffffffffffffffff",
+	},
+	{
+		key:   "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
+		state: "000000000000000700000000000000000000000000000000", // 2¹³⁰
+		in:    "",
+		tag:   "04000000000000000000000000000000",
+	},
+	{
+		key:   "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
+		state: "0000000000000007fffffffffffffffffffffffffffffff5", // 2(2¹³⁰ - 5) - 1
+		in: "ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff",
+		tag: "1b000e5e5dfe8f5c4da11dd17b7654e7",
+	},
+	{
+		key:   "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
+		state: "000000000000000700000000000000000000000000000001", // 2¹³⁰
+		in: "ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff" +
+			"ffffffffffffffffffffffffffffffff",
+		tag: "380859a4a5860b0e0967edfd711d37de",
+	},
 }
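
The state field in the new vectors is the accumulator h written as a 24-byte
big-endian hex value (so the first one decodes to 2(2¹³⁰ - 5) - 1). For the
empty-message cases the expected tags can be checked by hand: with no input
the tag is ((h mod 2¹³⁰-5) + s) mod 2¹²⁸ stored little-endian, where s is the
second half of the key (all 0xff here, so its endianness does not matter). A
throwaway check of the first vector, assuming that encoding (not part of the
patch):

	package main

	import (
		"encoding/hex"
		"fmt"
		"math/big"
	)

	func main() {
		p := new(big.Int).Lsh(big.NewInt(1), 130)
		p.Sub(p, big.NewInt(5)) // 2¹³⁰ - 5
		m128 := new(big.Int).Lsh(big.NewInt(1), 128)

		// h = 2(2¹³⁰-5) - 1 and s = 2¹²⁸ - 1, per the first new vector.
		h, _ := new(big.Int).SetString("0000000000000007fffffffffffffffffffffffffffffff5", 16)
		s, _ := new(big.Int).SetString("ffffffffffffffffffffffffffffffff", 16)

		tag := new(big.Int).Mod(h, p)
		tag.Add(tag, s)
		tag.Mod(tag, m128)

		// Serialize little-endian by reversing the big-endian bytes.
		be := tag.FillBytes(make([]byte, 16))
		le := make([]byte, 16)
		for i := range be {
			le[i] = be[15-i]
		}
		fmt.Println(hex.EncodeToString(le)) // f9ffffffffffffffffffffffffffffff
	}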