{
  "log": [
    {
      "commit": "7d797a27a7f949571f7a022d6d25f6f5f5a5e303",
      "tree": "c8e472c476d022d1a0e0298f50431a6e1f7ebade",
      "parents": [
        "09bab1e05025394fa1d7720f6abc3ef903594636"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Tue May 30 17:45:11 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 30 17:45:11 2023 -0500"
      },
      "message": "Update vhlo.md to include specific RUN lines for testing (#1560)\n\nWe have slightly different tests in\r\n`stablehlo_legalize_to_vhlo.0_X_0.mlir` and\r\n`stablehlo_legalize_to_vhlo.mlir`, so this provides extra clarity on\r\nwhat the run lines should be.\r\n\r\nThis was noticed by @ghpvnist."
    },
    {
      "commit": "09bab1e05025394fa1d7720f6abc3ef903594636",
      "tree": "27196788b1d7920de3971bdd39667039d334d385",
      "parents": [
        "e61d473f94c5e3c377e33c72e0c254fc663deed9"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 30 13:14:35 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 30 13:14:35 2023 -0700"
      },
      "message": "Remove an implementation-defined test for GatherOp (#1550)\n\nIn the removed test, one of the start_indices is 10 which is out of\r\nbounds for the operand, and that\u0027s currently specced as\r\nimplementation-defined. Per interpreter checklist, we should remove this\r\ntest."
    },
    {
      "commit": "e61d473f94c5e3c377e33c72e0c254fc663deed9",
      "tree": "650f3cbcc13141c583a6506691dc2fe9ff983cfe",
      "parents": [
        "6758a9c8fccafabd113b8e964843a298c51b1e83"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Tue May 30 11:38:02 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 30 11:38:02 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@f81f32adc9a8 (#1558)\n\n"
    },
    {
      "commit": "6758a9c8fccafabd113b8e964843a298c51b1e83",
      "tree": "72134ae4767d3a3690d9581f50b6899686bc6fbe",
      "parents": [
        "e6ef63c741378281d032715aee3a3fd2759a9f4b"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 30 11:12:24 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 30 11:12:24 2023 -0700"
      },
      "message": "Simplify evalWhileOp (#1553)\n\nThis PR drops checking that evaluating WhileOp::cond produces exactly\r\none result. This is guaranteed by the verifier, so this is not something\r\nthat we should be checking in the interpreter."
    },
    {
      "commit": "e6ef63c741378281d032715aee3a3fd2759a9f4b",
      "tree": "c6bc3388924ee6779327fb697037753994ea728f",
      "parents": [
        "4f5451533bbe5a87731fdd1cc866a81076377ecd"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 30 08:34:20 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 30 08:34:20 2023 -0700"
      },
      "message": "Use func.func and func.return in the spec (#1555)\n\nThe stablehlo.func and stablehlo.return syntax in the opening example of\r\nthe spec was aspirational, and at the time we expected that we\u0027ll soon\r\nadopt it.\r\n\r\nA lot of the time has passed, and we still haven\u0027t gotten around to\r\ndoing this, while we\u0027ve been getting regular questions about this. Let\u0027s\r\nreflect reality in the example for now - we can always change it back\r\nonce we address #425."
    },
    {
      "commit": "4f5451533bbe5a87731fdd1cc866a81076377ecd",
      "tree": "017dc0a038a3d2806ef7ee83cebc90432c3f14cc",
      "parents": [
        "b8c6b90b692d5552742a9167adb1a5bb6858f693"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 30 08:30:13 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 30 10:30:13 2023 -0500"
      },
      "message": "Bump patch version after integrate 0.12.0 -\u003e 0.12.1 (#1548)\n\n"
    },
    {
      "commit": "b8c6b90b692d5552742a9167adb1a5bb6858f693",
      "tree": "2173a2ebc04e1f0d317e7b7c0e6f81ba5c6dd62f",
      "parents": [
        "e169d26ccfce28ca45d5b8d5a39734eca81427e3"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 26 16:57:22 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 26 16:57:22 2023 -0700"
      },
      "message": "Compress the code a little bit (#1543)\n\nWhen reviewing the ScatterOp PR, I saw a few opportunities for making\r\nthe code a bit more compressed."
    },
    {
      "commit": "e169d26ccfce28ca45d5b8d5a39734eca81427e3",
      "tree": "89296c69ce4ec1d648f076d6b88301a1e58ff121",
      "parents": [
        "aac89de63ba020d26b7afad7af6b127e7184890c"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Fri May 26 16:39:18 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 26 16:39:18 2023 -0700"
      },
      "message": "Add interpreter for ScatterOp (#1488)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) `inputs` variadic number of tensors.\r\n(I2) `scatter_indices` tensor of integer type.\r\n(I3) `updates` variadic number of tensors.\r\n(I4) `update_window_dims` 1-dimensional tensor constant of type `si64`.\r\n(I5) `inserted_window_dims` 1-dimensional tensor constant of type `si64`.\r\n(I6) `scatter_dims_to_operand_dims` 1-dimensional tensor constant of type `si64`.\r\n(I7) `index_vector_dim` constant of type `si64`.\r\n(I8) `indices_are_sorted` constant of type `i1`.\r\n(I9) `unique_indices` constant of type `i1`.\r\n(I10) `update_computation` function.\r\n(C1) All `inputs` have the same shape.\r\n(C2) rank(`inputs`[0]) \u003d size(`update_window_dims`) +\r\n     size(`inserted_window_dims`).\r\n(C3) All `updates` have the same shape.\r\n(C4) `shape(updates[0])` \u003d\r\n      `combine(update_scatter_dim_sizes, update_window_dim_sizes)` where:\r\n* `update_scatter_dim_sizes` \u003d `shape(scatter_indices)` except that\r\n  the dimension size of `scatter_indices` corresponding to\r\n  `index_vector_dim` is not included.\r\n* `update_window_dim_sizes` \u003c\u003d `shape(inputs[0])` except that\r\n  the dimension sizes in `inputs[0]` corresponding to `inserted_window_dims`\r\n  are not included.\r\n* `combine` puts `update_scatter_dim_sizes` at axes corresponding to\r\n `update_scatter_dims` and `update_window_dim_sizes` at axes corresponding\r\n to `update_window_dims`.\r\n(C5) N \u003d size(`inputs`) \u003d size(`updates`) and N \u003e\u003d 1.\r\n(C6) `element_type(updates[k]) \u003d element_type(inputs[k])` for all k $\\in$\r\n     [0, N).\r\n(C7) All dimensions in `update_window_dims` are unique and sorted.\r\n(C8) For all i in [0, size(`update_window_dims`)), 0 \u003c\u003d\r\n`update_window_dims`[i] \u003c rank(`updates`[0]).\r\n(C9) All dimensions in `inserted_window_dims` are unique and sorted.\r\n(C10) For all i in [0, 
size(`inserted_window_dims`)), 0 \u003c\u003d\r\n`inserted_window_dims`[i] \u003c rank(`inputs`[0]).\r\n(C11) size(`scatter_dims_to_operand_dims`) \u003d\r\n     `index_vector_dim` \u003c rank(`scatter_indices`) ?\r\n     dim(`scatter_indices`, `index_vector_dim`) : 1.\r\n(C12) All dimensions in `scatter_dims_to_operand_dims` are unique.\r\n(C13) For all i in [0, size(`scatter_dims_to_operand_dims`)), 0 \u003c\u003d\r\n    `scatter_dims_to_operand_dims`[i] \u003c rank(`inputs`[0]).\r\n(C14) 0 \u003c\u003d `index_vector_dim` \u003c\u003d rank(`scatter_indices`).\r\n(C15) `update_computation` has type\r\n      `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)`\r\n      where `Ek \u003d element_type(inputs[k])` for all k in [0, N).\r\n(C16) `inputs[k]` and `result[k]` have the same type for all k in [0, N).\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) `inputs` variadic number of tensors. (Covered by ODS).\r\nI2: a) `scatter_indices` tensor of integer type. (Covered by ODS).\r\nI3: a) `updates` variadic number of tensors. (Covered by ODS).\r\nI4: a) `update_window_dims` 1-dimensional tensor constant of type `si64`. (Covered by ODS).\r\nI5: a) `inserted_window_dims` 1-dimensional tensor constant of type `si64`. (Covered by ODS).\r\nI6: a) `scatter_dims_to_operand_dims` 1-dimensional tensor constant of type `si64`. (Covered by ODS).\r\nI7: a) `index_vector_dim` constant of type `si64`. (Covered by ODS).\r\nI8: a) `indices_are_sorted` constant of type `i1`. (Covered by ODS).\r\nI9: a) `unique_indices` constant of type `i1`. (Covered by ODS).\r\nI10: a) `update_computation` function. 
(Covered by ODS).\r\nC1: a) Not all `inputs` have the same shape.\r\nC2: a) rank(`inputs`[0]) !\u003d size(`update_window_dims`) +\r\n       size(`inserted_window_dims`).\r\nC3: a) Not all `updates` have the same shape.\r\nC4: a) `shape(updates[0])` !\u003d\r\n      `combine(update_scatter_dim_sizes, update_window_dim_sizes)`.\r\n    b) `update_scatter_dim_sizes` !\u003d `shape(scatter_indices)` except that the\r\n       dimension size of `scatter_indices` corresponding to `index_vector_dim`\r\n       is not included.\r\n    c) `update_window_dim_sizes` \u003c\u003d `shape(inputs[0])` except that the dimension\r\n       sizes in `inputs[0]` corresponding to `inserted_window_dims` are not\r\n       included.\r\n    where `combine` puts `update_scatter_dim_sizes` at axes corresponding to\r\n    `update_scatter_dims` and `update_window_dim_sizes` at axes corresponding to\r\n    `update_window_dims`.\r\nC5: a) N !\u003d size(`inputs`). (Covered by ODS).\r\n    b) N !\u003d size(`updates`). (Covered by ODS).\r\n    c) N \u003c 1. (Covered by ODS).\r\nC6: a) `element_type(updates[k]) !\u003d element_type(inputs[k])` for any k in [0, N).\r\nC7: a) Dimensions in `update_window_dims` are not unique.\r\n    b) Dimensions in `update_window_dims` are not sorted.\r\nC8: a) For any i in [0, size(`update_window_dims`)), `update_window_dims`[i] \u003c 0.\r\n    b) For any i in [0, size(`update_window_dims`)), `update_window_dims`[i] \u003e\u003d rank(`updates`[0]). 
\r\nC9: a) Dimensions in `inserted_window_dims` are not unique.\r\n    b) Dimensions in `inserted_window_dims` are not sorted.\r\nC10: a) For any i in [0, size(`inserted_window_dims`)), `inserted_window_dims`[i] \u003c 0.\r\n     b) For any i in [0, size(`inserted_window_dims`)), \u003e\u003d rank(`inputs`[0]).\r\nC11: a) size(`scatter_dims_to_operand_dims`) !\u003d\r\n     `index_vector_dim` \u003c rank(`scatter_indices`) ?\r\n     dim(`scatter_indices`, `index_vector_dim`) : 1.\r\nC12: a) Dimensions in `scatter_dims_to_operand_dims` are not unique.\r\nC13: a) For any i in [0, size(`scatter_dims_to_operand_dims`)), `scatter_dims_to_operand_dims`[i] \u003c 0.\r\n     b) For any i in [0, size(`scatter_dims_to_operand_dims`)), `scatter_dims_to_operand_dims`[i] \u003e\u003d rank(`inputs`[0]).\r\nC14: a) `index_vector_dim` \u003c 0.\r\n     b) `index_vector_dim` \u003e rank(`scatter_indices`).\r\nC15: a) `update_computation` does not have type\r\n        `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)`\r\n        where `Ek \u003d element_type(inputs[k])` for any k $\\in$ [0, N).\r\nC16: a) type(`inputs[k]`) !\u003d type(`result[k]`) for any k $\\in$ [0, N).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nC1a: Not all `inputs` have the same shape.\r\nC2a: rank(`inputs`[0]) !\u003d size(`update_window_dims`) + size(`inserted_window_dims`).\r\nC3a: Not all `updates` have the same shape.\r\nC4a: `shape(updates[0])` !\u003d\r\n     `combine(update_scatter_dim_sizes, update_window_dim_sizes)`.\r\nC4b: `update_scatter_dim_sizes` !\u003d `shape(scatter_indices)` except that the\r\n     dimension size of `scatter_indices` corresponding to `index_vector_dim`\r\n     is not included.\r\nC4c: `update_window_dim_sizes` \u003c\u003d `shape(inputs[0])` except that the dimension\r\n     sizes in 
`inputs[0]` corresponding to `inserted_window_dims` are not\r\n     included.\r\n     where `combine` puts `update_scatter_dim_sizes` at axes corresponding to\r\n     `update_scatter_dims` and `update_window_dim_sizes` at axes corresponding to\r\n     `update_window_dims`.\r\nC6a: `element_type(updates[k]) !\u003d element_type(inputs[k])` for any k in [0, N).\r\nC7a: Dimensions in `update_window_dims` are not unique.\r\nC7b: Dimensions in `update_window_dims` are not sorted.\r\nC8a: For any i in [0, size(`update_window_dims`)), `update_window_dims`[i] \u003c 0.\r\nC8b: For any i in [0, size(`update_window_dims`)), `update_window_dims`[i] \u003e\u003d rank(`updates`[0]). \r\nC9a: Dimensions in `inserted_window_dims` are not unique.\r\nC9b: Dimensions in `inserted_window_dims` are not sorted.\r\nC10a: For any i in [0, size(`inserted_window_dims`)), `inserted_window_dims`[i] \u003c 0.\r\nC10b: For any i in [0, size(`inserted_window_dims`)), \u003e\u003d rank(`inputs`[0]).\r\nC11a: size(`scatter_dims_to_operand_dims`) !\u003d\r\n      `index_vector_dim` \u003c rank(`scatter_indices`) ?\r\n      dim(`scatter_indices`, `index_vector_dim`) : 1.\r\nC12a: Dimensions in `scatter_dims_to_operand_dims` are not unique.\r\nC13a: For any i in [0, size(`scatter_dims_to_operand_dims`)), `scatter_dims_to_operand_dims`[i] \u003c 0.\r\nC13b: For any i in [0, size(`scatter_dims_to_operand_dims`)), `scatter_dims_to_operand_dims`[i] \u003e\u003d rank(`inputs`[0]).\r\nC14a: `index_vector_dim` \u003c 0.\r\nC14b: `index_vector_dim` \u003e rank(`scatter_indices`).\r\nC15a: `update_computation` does not have type\r\n      `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)`\r\n      where `Ek \u003d element_type(inputs[k])` for any k $\\in$ [0, N).\r\nC16a: type(`inputs[k]`) !\u003d type(`result[k]`) for any k $\\in$ [0, N).\r\n```\r\n\r\nNotes:\r\n  * Some missing verifications 
were added.\r\n  * Updates typo in spec wording (i.e. For any k -\u003e For all k).\r\n  * Updates notation `do` -\u003e `di`, `ds` -\u003e `dj`\r\n\r\ncloses #987"
    },
    {
      "commit": "aac89de63ba020d26b7afad7af6b127e7184890c",
      "tree": "9b4be42018336a543e736dca77db886c2b4c39a7",
      "parents": [
        "bcf050a2e920702e4b63a481087b9f1980020226"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Thu May 25 18:37:53 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu May 25 18:37:53 2023 -0700"
      },
      "message": "Add interpreter for GatherOp (#1058)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) operand tensor.\r\n(I2) start_indices tensor of integer type.\r\n(I3) offset_dims 1-dimensional tensor constant of type `si64`.\r\n(I4) collapsed_slice_dims 1-dimensional tensor constant of type `si64`.\r\n(I5) start_index_map 1-dimensional tensor constant of type `si64`.\r\n(I6) index_vector_dim constant of type `si64`.\r\n(I7) slice_sizes 1-dimensional tensor constant of type `si64`.\r\n(I8) indices_are_sorted constant of type `i1`.\r\n(C1) rank(`operand`) $\u003d$ size(`offset_dims`) $+$\r\n     size(`collapsed_slice_dims`).\r\n(C2) $0 \\le$ `index_vector_dim` $\\le$ rank(`start_indices`).\r\n(C3) size(`start_index_map`) $\u003d$\r\n     `index_vector_dim` $\\lt$ rank(`start_indices`) ?\r\n     dim(`start_indices`, `index_vector_dim`) : 1.\r\n(C4) All dimensions in `offset_dims` are unique and sorted in ascending\r\n     order.\r\n(C5) $0 \\le$ `offset_dims`[i] $\\lt$ rank(`result`) $\\forall i$\r\n     such that $0 \\le$ i $\\lt$ size(`offset_dims`).\r\n(C6) All dimensions in `collapsed_slice_dims` are unique and sorted in\r\n     ascending order.\r\n(C7) $0 \\le$ `collapsed_slice_dims`[i] $\\lt$ size(`slice_sizes`)\r\n      $\\forall i$ such that $0 \\le$ i $\\lt$ size(`collapsed_slice_dims`).\r\n(C8) `slice_sizes`[i] $\\le$ 1 $\\forall i \\in$ `collapsed_slice_dims`.\r\n(C9) All dimensions in `start_index_map` are unique.\r\n(C10) $0 \\le$ `start_index_map`[i] $\\lt$ rank(`operand`) $\\forall i$\r\n     such that $0 \\le$ i $\\lt$ size(`start_index_map`).\r\n(C11) size(`slice_sizes`) $\u003d$ rank(`operand`).\r\n(C12) $0 \\le$ `slice_sizes`[i] $\\le$ dim(`operand`, i) $\\forall i$\r\n      such that $0 \\le$ i $\\lt$ size(`slice_sizes`).\r\n(C13) `shape(result)` $\u003d$ `combine(batch_dim_sizes, offset_dim_sizes)`\r\n      where:\r\n      * `batch_dim_sizes` \u003d `shape(start_indices)` except that the dimension size\r\n        of 
`start_indices` corresponding to `index_vector_dim` is not included.\r\n      * `offset_dim_sizes` \u003d `shape(slice_sizes)` except that the dimension sizes\r\n        in `slice_sizes` corresponding to `collapsed_slice_dims` are not included.\r\n      * `combine` puts `batch_dim_sizes` at axes corresponding to `batch_dims` and\r\n        `offset_dim_sizes` at axes corresponding to `offset_dims`.\r\n(C14) `operand` and `result` have the same element type.\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) operand is not a tensor. (Covered by ODS).\r\nI2: a) start_indices is not a tensor of integer type. (Covered by ODS).\r\nI3: a) offset_dims is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) element_type(offset_dims) !\u003d `si64`. (Covered by ODS).\r\nI4: a) collapsed_slice_dims is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) element_type(collapsed_slice_dims) !\u003d `si64`. (Covered by ODS).\r\nI5: a) start_index_map is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) element_type(start_index_map) !\u003d `si64`. (Covered by ODS).\r\nI6: a) type(index_vector_dim) !\u003d `si64`. (Covered by ODS).\r\nI7: a) slice_sizes is not a 1-dimensional tensor.\r\n    b) element_type(slice_sizes) !\u003d `si64`. (Covered by ODS).\r\nI8: a) element_type(indices_are_sorted) !\u003d `i1`. 
(Covered by ODS).\r\n(C1) a) rank(operand) !\u003d size(offset_dims) + size(collapsed_slice_dims).\r\n(C2) a) index_vector_dim \u003c 0.\r\n     b)  index_vector_dim \u003e rank(start_indices).\r\n(C3) a) size(start_index_map) !\u003d dim(start_indices, index_vector_dim) if index_vector_dim \u003c rank(start_indices).\r\n     b) size(start_index_map) !\u003d 1 if index_vector_dim \u003e\u003d rank(start_indices).\r\n(C4) a) offset_dims values are not unique.\r\n     b) offset_dims values are not sorted in ascending order.\r\n(C5) a) offset_dims[i] \u003c 0 for any i.\r\n     b) offset_dims[i] \u003e\u003d rank(result) for any i.\r\n(C6) a) collapsed_slice_dims values are not unique.\r\n     b) collapsed_slice_dims are not sorted in ascending order.\r\n(C7) a) collapsed_slice_dims[i] \u003c 0 for any i.\r\n     b) collapsed_slice_dims[i] \u003e\u003d size(slice_sizes) for any i.\r\n(C8) a) slice_sizes[i] \u003e 1 for any i in collapsed_slice_dims.\r\n(C9) a) start_index_map values are not unique.\r\n(C10) a) start_index_map[i] \u003c 0 for any i.\r\n      b) start_index_map[i] \u003e\u003d rank(operand) for any i.\r\n(C11) a) size(slice_sizes) !\u003d rank(operand).\r\n(C12) a) slice_sizes[i] \u003c 0 for any i.\r\n      b) slice_sizes[i] \u003e dim(operand, i) for any i.\r\n(C13) no negative test needed since it\u0027s just inferring the shape.\r\n(C14) element_type(operand) !\u003d  element_type(result). 
(Covered by ODS).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nI7a: slice_sizes is not a 1-dimensional tensor.\r\nC1a: rank(operand) !\u003d size(offset_dims) + size(collapsed_slice_dims).\r\nC2a: index_vector_dim \u003c 0.\r\nC2b: index_vector_dim \u003e rank(start_indices).\r\nC3a: size(start_index_map) !\u003d dim(start_indices, index_vector_dim) if index_vector_dim \u003c rank(start_indices).\r\nC3b: size(start_index_map) !\u003d 1 if index_vector_dim \u003e\u003d rank(start_indices).\r\nC4a: offset_dims values are not unique.\r\nC4b: offset_dims values are not sorted in ascending order.\r\nC5a: offset_dims[i] \u003c 0 for any i.\r\nC5b: offset_dims[i] \u003e\u003d rank(result) for any i.\r\nC6a: collapsed_slice_dims values are not unique.\r\nC6b: collapsed_slice_dims are not sorted in ascending order.\r\nC7a: collapsed_slice_dims[i] \u003c 0 for any i.\r\nC7b: collapsed_slice_dims[i] \u003e\u003d size(slice_sizes) for any i.\r\nC8a: slice_sizes[i] \u003e 1 for any i in collapsed_slice_dims.\r\nC9a: start_index_map values are not unique.\r\nC10a: start_index_map[i] \u003c 0 for any i.\r\nC10b: start_index_map[i] \u003e\u003d rank(operand) for any i.\r\nC11a: size(slice_sizes) !\u003d rank(operand).\r\nC12a: slice_sizes[i] \u003c 0 for any i.\r\nC12b: slice_sizes[i] \u003e dim(operand, i) for any i.\r\n```\r\n\r\nAlso fixed typo (C15) -\u003e (C14)\r\n\r\ncloses #976"
    },
    {
      "commit": "bcf050a2e920702e4b63a481087b9f1980020226",
      "tree": "802b4a0ecf774f686e0b4881753519efaa8dfcbc",
      "parents": [
        "aedc509bfbb9f17a02afb068cb413d2981215541"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Thu May 25 18:21:34 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu May 25 18:21:34 2023 -0700"
      },
      "message": "Add interpreter for BatchNormGradOp (#1394)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) `operand`: tensor of floating-point type.\r\n(I2) `scale`: 1-dimensional tensor of floating-point type.\r\n(I3) `mean`: 1-dimensional tensor of floating-point type.\r\n(I4) `variance`: 1-dimensional tensor of floating-point type.\r\n(I5) `grad_output`: tensor of floating-point type.\r\n(I6) `epsilon`: constant of type `f32`.\r\n(I7) `feature_index`: constant of type `si64`.\r\n(C1) 0 \u003c\u003d `feature_index` \u003c rank(`operand`).\r\n(C2) `operand`, `scale`, `mean`, `variance`, `grad_output`, `grad_operand`\r\n     `grad_scale` and `grad_offset` have the same element type.\r\n(C3) `operand`, `grad_output` and `grad_operand` have the same shape.\r\n(C4) `scale`, `mean`, `variance`, `grad_scale` and `grad_offset` have the\r\n     same shape.\r\n(C5) size(`scale`) \u003d `dim(operand, feature_index)`.\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) `operand` is not a tensor of floating-point type. (Covered by ODS).\r\nI2: a) `scale` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) `scale` is not a tensor of floating-point type. (Covered by ODS).\r\nI3: a) `mean` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) `mean` is not a tensor of floating-point type. (Covered by ODS).\r\nI4: a) `variance` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) `variance` is not a tensor of floating-point type. (Covered by ODS).\r\nI5: a) `grad_output` is not a tensor of floating-point type. (Covered by ODS).\r\nI6: a) `epsilon` is not a constant of type `f32`. (Covered by ODS).\r\nI7: a) `feature_index` is not a constant of type `si64`. (Covered by ODS).\r\nC1: a) `feature_index` \u003c 0.\r\n    b) `feature_index` \u003e\u003d rank(`operand`).\r\nC2: a) element_type(`operand`) !\u003d element_type(`scale`). 
(Covered by ODS).\r\n    b) element_type(`operand`) !\u003d element_type(`mean`). (Covered by ODS).\r\n    c) element_type(`operand`) !\u003d element_type(`variance`). (Covered by ODS).\r\n    d) element_type(`operand`) !\u003d element_type(`grad_output`). (Covered by ODS).\r\n    e) element_type(`operand`) !\u003d element_type(`grad_operand`). (Covered by ODS).\r\n    f) element_type(`operand`) !\u003d element_type(`grad_scale`). (Covered by ODS).\r\n    g) element_type(`operand`) !\u003d element_type(`grad_offset`). (Covered by ODS).\r\nC3: a) shape(`operand`) !\u003d shape(`grad_output`).\r\n    b) shape(`operand`) !\u003d shape(`grad_operand`).\r\nC4: a) shape(`scale`) !\u003d shape(`mean`).\r\n    b) shape(`scale`) !\u003d shape(`variance`).\r\n    c) shape(`scale`) !\u003d shape(`grad_scale`).\r\n    d) shape(`scale`) !\u003d shape(`grad_offset`).\r\nC5: a) size(`scale`) !\u003d dim(operand, feature_index).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nC1a: `feature_index` \u003c 0.\r\nC1b: `feature_index` \u003e\u003d rank(`operand`).\r\nC3a: shape(`operand`) !\u003d shape(`grad_output`).\r\nC3b: shape(`operand`) !\u003d shape(`grad_operand`).\r\nC4a: shape(`scale`) !\u003d shape(`mean`).\r\nC4b: shape(`scale`) !\u003d shape(`variance`).\r\nC4c: shape(`scale`) !\u003d shape(`grad_scale`).\r\nC4d: shape(`scale`) !\u003d shape(`grad_offset`).\r\nC5a: size(`scale`) !\u003d dim(operand, feature_index).\r\n```\r\n\r\nNotes:\r\n* Added `i6` in the spec to better align with the spec comments\r\nreferring to\r\n[batchnorm_expander.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/batchnorm_expander.cc#L521-L524)\r\nin XLA.\r\n* `size` -\u003e `num_elements` to be consistent with what\u0027s written in\r\n`compute_mean` function.\r\n\r\ncloses #1121"
    },
    {
      "commit": "aedc509bfbb9f17a02afb068cb413d2981215541",
      "tree": "5297400d4aba64de7943b479c71e094c92d13f3d",
      "parents": [
        "7d992e65c0cd67b79ac81c2eb39c4b6e7b146df6"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Thu May 25 14:26:42 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu May 25 16:26:42 2023 -0500"
      },
      "message": "Move chlo::TopKOp type inference into TypeInference.h (#1536)\n\nThis is needed to support the work on introducing mhlo::TopKOp. MLIR-HLO\r\ncommit:\r\nhttps://github.com/tensorflow/mlir-hlo/commit/4651ac2e8375b706643fdab809e0bc30a7ecd666"
    },
    {
      "commit": "7d992e65c0cd67b79ac81c2eb39c4b6e7b146df6",
      "tree": "0c0e524b30467404074ed95882e56bf2dc38720d",
      "parents": [
        "accf3fa9cff539e9c093c9ce0851741daf926fa8"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Thu May 25 14:25:23 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu May 25 16:25:23 2023 -0500"
      },
      "message": "Improve documentation for getMinimumVersion (#1537)\n\nDuring review of https://github.com/google/jax/pull/16081, we received\r\nfeedback that the name `get_minimum_version` is not very intuitive.\r\n\r\nWhile we haven\u0027t yet come up with a better name, improving documentation\r\nis the second best thing that we can do."
    },
    {
      "commit": "accf3fa9cff539e9c093c9ce0851741daf926fa8",
      "tree": "8e5baa8790bac65cf4d9cefa96c541ba7db62a78",
      "parents": [
        "468f7bd02bba1520530cad41c8d0124b01908bc9"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Thu May 25 15:47:43 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu May 25 15:47:43 2023 -0500"
      },
      "message": "Integrate LLVM at llvm/llvm-project@e837f4b7 (#1540)\n\nNeed to clean up `llvm_disable_optional_support_deps` after:\r\nhttps://reviews.llvm.org/D151006"
    },
    {
      "commit": "468f7bd02bba1520530cad41c8d0124b01908bc9",
      "tree": "d96b3358ffbd66508984300578f3ea35ba021a00",
      "parents": [
        "6a5ee09907ff69be3f80e6d43561591566a577f4"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Wed May 24 14:51:24 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 24 14:51:24 2023 -0700"
      },
      "message": "Add interpreter for BatchNormTrainingOp (#1393)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) `operand`: tensor of floating-point type.\r\n(I2) `scale`: 1-dimensional tensor of floating-point type.\r\n(I3) `offset`: 1-dimensional tensor of floating-point type.\r\n(I4) `epsilon`: constant of type `f32`.\r\n(I5) `feature_index`: constant of type `si64`.\r\n(C1) 0 \u003c\u003d `feature_index` \u003c rank(`operand`).\r\n(C2) `operand`, `scale`, `offset`, `result`, `batch_mean` and `batch_var`\r\n     have the same element type.\r\n(C3) size(`scale`) \u003d `dim(operand, feature_index)`.\r\n(C4) size(`offset`) \u003d `dim(operand, feature_index)`.\r\n(C5) size(`batch_mean`) \u003d `dim(operand, feature_index)`.\r\n(C6) size(`batch_var`) \u003d `dim(operand, feature_index)`.\r\n(C7) `operand` and `output` have the same type.\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) `operand` is not a tensor of floating-point type. (Covered by ODS).\r\nI2: a) `scale` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) `scale` is not a tensor of floating-point type. (Covered by ODS).\r\nI3: a) `offset` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) `offset` is not a tensor of floating-point type. (Covered by ODS).\r\nI4: a) `epsilon` is not a constant of type `f32`. (Covered by ODS).\r\nI5: a) `feature_index` is not a constant of type `si64`. (Covered by ODS).\r\nC1: a) `feature_index` \u003c 0.\r\n    b) `feature_index` \u003e\u003d rank(`operand`).\r\nC2: a) element_type(`operand`) !\u003d element_type(`scale`). (Covered by ODS).\r\n    b) element_type(`operand`) !\u003d element_type(`offset`). (Covered by ODS).\r\n    c) element_type(`operand`) !\u003d element_type(`result`). (Covered by ODS).\r\n    d) element_type(`operand`) !\u003d element_type(`batch_mean`). (Covered by ODS).\r\n    e) element_type(`operand`) !\u003d element_type(`batch_var`). 
(Covered by ODS).\r\nC3: a) size(`scale`) !\u003d dim(operand, feature_index).\r\nC4: a) size(`offset`) !\u003d dim(operand, feature_index).\r\nC5: a) size(`batch_mean`) !\u003d dim(operand, feature_index).\r\nC6: a) size(`batch_var`) !\u003d dim(operand, feature_index).\r\nC7: a) type(`operand`) !\u003d type(`output`).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nC1a: `feature_index` \u003c 0.\r\nC1b: `feature_index` \u003e\u003d rank(`operand`).\r\nC3a: size(`scale`) !\u003d dim(operand, feature_index).\r\nC4a: size(`offset`) !\u003d dim(operand, feature_index).\r\nC5a: size(`batch_mean`) !\u003d dim(operand, feature_index).\r\nC6a: size(`batch_var`) !\u003d dim(operand, feature_index).\r\nC7a: type(`operand`) !\u003d type(`output`).\r\n```\r\n\r\ncloses #1122"
    },
    {
      "commit": "6a5ee09907ff69be3f80e6d43561591566a577f4",
      "tree": "335f77570c5c9511a4bdb5f5af01e34115a8a0d3",
      "parents": [
        "40e6532da4fe633edc4d0b3127f2d7e2a981c280"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Wed May 24 10:48:06 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 24 10:48:06 2023 -0700"
      },
      "message": "Simplify evalConvertOp (#1532)\n\nThis PR introduces `Element convert(Type type, const Element \u0026e)` which\r\nconsiderably simplifies `evalConvertOp` by hiding type-based dispatch in\r\nElement.h, consistently with many other implementations of evalFooOp\r\nfunctions."
    },
    {
      "commit": "40e6532da4fe633edc4d0b3127f2d7e2a981c280",
      "tree": "ed0966872b69f62030c97a144f1d18307fc55034",
      "parents": [
        "53a76fa57d8045373e608bb64f5e2dee3c4183c2"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Wed May 24 11:37:16 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 24 11:37:16 2023 -0500"
      },
      "message": "Bump MLIR Bytecode Format Version (#1534)\n\nIncrement to use latest bytecode format per guidelines in: [vhlo.md \u003e\r\nMLIR Bytecode Format\r\nVersions](https://github.com/openxla/stablehlo/blob/main/docs/vhlo.md#mlir-bytecode-format-versions).\r\n\r\nIncludes version bump to v0.12.0 and test file generated using commands\r\nin: [vhlo.md \u003e Add Versioned Serialization\r\nTest](https://github.com/GleasonK/stablehlo/pull/new/bytecode-format-version-bump)."
    },
    {
      "commit": "53a76fa57d8045373e608bb64f5e2dee3c4183c2",
      "tree": "3c49ed60dfee8269717ee421c9f276132b57207f",
      "parents": [
        "509f3eb0d337cce5b63d7966f4dd6f26530eda32"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Wed May 24 10:17:12 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 24 10:17:12 2023 -0500"
      },
      "message": "Delete testdata bytecode tests (#1425)\n\nRecently we gathered some code coverage metrics for testdata tests, and\r\ncompared that to our unit tests.\r\n\r\nIt turns out that our testdata coverage was a strict subset of unit\r\ntests, including in generated files, which makes them not particularly\r\nuseful as serialization and deserialization tests, and minimally useful\r\nas versioned semantic compatibility tests, but very useful as testdata\r\nfor StableHLO users.\r\n\r\nFor semantic compatibility, this approach does not scale well due to the\r\nhigh redundancy in test cases. The overhead for cloning all these files\r\nat each StableHLO version bump is too high for its limited value (25MB\r\nof bytecode files per version, difficult to review due to PR size,\r\nconsistently breaks our tooling). We have created #1416 to rethink\r\nsemantic compatibility testing, and are planning to only version\r\n`stablehlo_legalize_to_vhlo.mlir` for serialization testing, which alone\r\nhas 95% coverage of compatibility machinery, covering all ops, types,\r\nand attributes.\r\n\r\n\r\nThis PR was created using the following commands (meaning it should be\r\nuniform):\r\n\r\n```\r\ncd stablehlo/testdata\r\nrm *.mlir.bc\r\nsed -i \u0027s/stablehlo-translate.*--interpret/stablehlo-opt -inline %s | stablehlo-translate --interpret/\u0027 *.mlir\r\nsed -i \"/0_9_0/d\" *.mlir\r\n```\r\n\r\n**EDIT 5/24: Alternatives considered and mitigations**\r\n\r\nA primary use case for testdata on top of reference interpreter testing,\r\nis to provide copy-pastable snippets for StableHLO users to test their\r\nbackend implementations against. As such, maintaining some form of\r\ntextual assembly format is a requirement. The primary alternative\r\nconsidered was to remove the MLIR files and only have bytecode files.\r\nThis is more stable, and there is an MLIR vscode extension that permits\r\nviewing and editing bytecode files in place. 
This solution was not\r\nchosen for two main reasons - 1. Serialized portable artifacts are VHLO,\r\nmeaning the decoded file would not be a copy-pastable snippet, as IR\r\nupgrades and conversion to StableHLO are still required. 2. This\r\nextension does not work on github, meaning files would need to be opened\r\nin VSCode or deserialized using an opt tool.\r\n\r\nThe downside of only preserving text files is that StableHLO portable\r\nartifacts have stability, meaning they will not break between releases.\r\nTextual assembly format may break. In the case of assembly format\r\nchanges, we may have a large amount of testdata that breaks and requires\r\nfixing. This overhead can be automated away with a script that checks\r\nout a known good release, generates bytecode files, and deserializes at\r\nHEAD to update the testdata. This script is tracked in #1533."
    },
    {
      "commit": "509f3eb0d337cce5b63d7966f4dd6f26530eda32",
      "tree": "001a0c34c55da296197fdc628114bb62638b0dc0",
      "parents": [
        "2ef30c7ee3057ff3a3951a53bdc2442200dc3114"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 23 21:59:14 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 23 21:59:14 2023 -0700"
      },
      "message": "Simplify verification of ReduceOp and ReduceWindowOp (#1467)\n\nWhile reviewing PRs that implement interpreters for ReduceOp and\r\nReduceWindowOp, I noticed that the verifiers can be somewhat simplified.\r\nThis PR does the necessary cleanup."
    },
    {
      "commit": "2ef30c7ee3057ff3a3951a53bdc2442200dc3114",
      "tree": "276e69862f6ac97185792359eae8af5531a5d134",
      "parents": [
        "ad9d815007ed27fe14bbdf1a44c2d4827711790f"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Tue May 23 11:27:34 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 23 11:27:34 2023 -0700"
      },
      "message": "Update CustomCallOp status to `yes` and update ODS (#1521)\n\nWe already have an implementation to handle custom_call ops by passing a\r\nfallback function to handle them. This PR updates the status to reflect\r\nthat. Other minor change include updating the example in td file to use\r\npretty-print format following the\r\n[guide](https://github.com/openxla/stablehlo/blob/main/docs/reference.md#testing-guidelines)."
    },
    {
      "commit": "ad9d815007ed27fe14bbdf1a44c2d4827711790f",
      "tree": "31ae05fae7bda2eeff94fd680b53824f54a0b28d",
      "parents": [
        "c576c2fb4211332a57bb395bfaa57bf414e15b4a"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Tue May 23 10:50:04 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 23 10:50:04 2023 -0700"
      },
      "message": "Tag issue for failing tests for supported ops (#1438)\n\nOne of the\r\n[checklist](https://github.com/openxla/stablehlo/blob/main/docs/reference_checklist.md)\r\nitems is to tag tests failing due to floating-point differences with an\r\nissue #1278. So far, this has not been done, so this PR applies them in\r\nbulk.\r\n\r\nThese tests are generated by:\r\n1. Get all .mlir tests from `testdata/`.\r\n2. Filter tests containing only supported ops.\r\n3. Remove `-DISABLED` from filtered tests.\r\n4. Run the test.\r\n5. Diff the list of failing tests with step 2.\r\n6. Replace `-DISABLED` with `-DISABLED(#1278)` if `(#1278)` not already\r\npresent from step 5."
    },
    {
      "commit": "c576c2fb4211332a57bb395bfaa57bf414e15b4a",
      "tree": "6be9e0f6a343fbe2c628a7277c0123c5e15df114",
      "parents": [
        "a8cb1c747f5a54db0abf0f668ad9eb3ed47c27a6"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 23 10:20:30 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 23 10:20:30 2023 -0700"
      },
      "message": "Bump patch version after integrate 0.11.7 -\u003e 0.11.8 (#1528)\n\n"
    },
    {
      "commit": "a8cb1c747f5a54db0abf0f668ad9eb3ed47c27a6",
      "tree": "6be9e0f6a343fbe2c628a7277c0123c5e15df114",
      "parents": [
        "fab5f24fe96b428dc746ce175dfef06c54285302"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 23 09:35:20 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 23 11:35:20 2023 -0500"
      },
      "message": "Fix issues identified during integrate (#1527)\n\n* BUILD.bazel: add missing dependencies to stablehlo-translate.\r\n  * CMakeLists.txt: same."
    },
    {
      "commit": "fab5f24fe96b428dc746ce175dfef06c54285302",
      "tree": "082b2b6183b8cc017d30e3619da493dbe1c974e1",
      "parents": [
        "f556890d68c2da9ed93218959449e54e4858e460"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 23 09:34:45 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 23 11:34:45 2023 -0500"
      },
      "message": "Bump patch version after integrate 0.11.7 -\u003e 0.11.8 (#1528)\n\n"
    },
    {
      "commit": "f556890d68c2da9ed93218959449e54e4858e460",
      "tree": "082b2b6183b8cc017d30e3619da493dbe1c974e1",
      "parents": [
        "9fa406730f46c5062b82776acf1db8e8bf007417"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 23 09:34:33 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 23 11:34:33 2023 -0500"
      },
      "message": "Bump patch version after integrate 0.11.7 -\u003e 0.11.8 (#1528)\n\n"
    },
    {
      "commit": "9fa406730f46c5062b82776acf1db8e8bf007417",
      "tree": "4b1f1bceb18c6098330e9bc7610be054c8a4ee42",
      "parents": [
        "a505ee5d96ee1860e975f69b26d2bcc1c51dd581"
      ],
      "author": {
        "name": "anakinxc",
        "email": "103552181+anakinxc@users.noreply.github.com",
        "time": "Tue May 23 13:04:37 2023 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 22:04:37 2023 -0700"
      },
      "message": "Fix build error on macOS (#1529)\n\nThis is the same as #1293, but on newly added code."
    },
    {
      "commit": "a505ee5d96ee1860e975f69b26d2bcc1c51dd581",
      "tree": "1a499a5cb20f37811c7107f517a6ecd462a52558",
      "parents": [
        "b357106f7cdc51da144ffe033459944b1965c876"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 22 18:46:42 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 18:46:42 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@27eadeee6b45 (#1526)\n\n"
    },
    {
      "commit": "b357106f7cdc51da144ffe033459944b1965c876",
      "tree": "a5b4204e485440bf1551189a626c04ee6c02d73c",
      "parents": [
        "5f89cf92e1ae3185ad064913814ce8e688435e1c"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 22 20:37:22 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 18:37:22 2023 -0700"
      },
      "message": "Add stop-gap static forward compatibility tests (#1525)\n\nUse https://github.com/openxla/stablehlo/pull/1524 as diffbase.\r\n\r\nThis is a stop-gap measure to improve the detection of forward\r\nincompatibilities in the StableHLO repo, and repos where StableHLO is\r\nexported like openxla/xla, while the Forward Compatibility Testing RFC\r\n(https://github.com/openxla/stablehlo/pull/1498) is reviewed. These\r\ntests will be reworked based on the outcome of the RFC review.\r\n\r\nThe forward compatibility test is a byte-wise comparison using:\r\n\r\n```bash\r\n# %s \u003d stablehlo/tests/stablehlo_legalize_to_vhlo.0_10_0.mlir\r\ndiff %s.bc \u003c(stablehlo-translate --serialize --target\u003d0.10.0 --strip-debuginfo %s)\r\n```"
    },
    {
      "commit": "5f89cf92e1ae3185ad064913814ce8e688435e1c",
      "tree": "2a65c608579deeacc8342cd39a7e717e970cc59b",
      "parents": [
        "9d2ac314f25ba886a20232ccb0415f25816e980c"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 22 20:34:07 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 18:34:07 2023 -0700"
      },
      "message": "Update bytecode docs to mention MLIR Bytecode Format and StableHLO releases (#1522)\n\nPlaced in VHLO markdown file since this is more implementation detail\r\nthan user-facing documentation.\r\n\r\nThe plan is to increment the minor release of StableHLO whenever MLIR\r\nBytecode Format updates, so we can keep StableHLO closely tied to the\r\nlatest MLIR Bytecode Format. This also allows us to provide more strict\r\nforward compatibility requirements. I.e. we have no dependency on old\r\nbytecode versions a month after the newer bytecode version was adopted."
    },
    {
      "commit": "9d2ac314f25ba886a20232ccb0415f25816e980c",
      "tree": "a9bff3edb461a1e88e2eca52decbb889a3e0fcad",
      "parents": [
        "4cd6f24257a364857add0e3c1dc11b2364669d50"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 22 20:20:52 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 18:20:52 2023 -0700"
      },
      "message": "Update bytecode artifacts to remove debuginfo and include StableHLO producer string (#1524)\n\nNote in the below examples, the suffix `-09`, `-010`, `-011` refers to\r\ntooling at the StableHLO@v0.X.0 release:\r\n\r\n```bash\r\n# for `-09`:\r\ngit checkout v0.9.0\r\n# Checkout LLVM at `./build_tools/llvm_version.txt` version\r\n# Build LLVM and StableHLO\r\n```\r\n\r\nCommands used to generate the files below. \r\n\r\n```\r\nstablehlo-opt-v09 stablehlo/tests/stablehlo_legalize_to_vhlo.0_9_0.mlir --strip-debuginfo --stablehlo-legalize-to-vhlo --vhlo-to-version\u003d\u0027target\u003d0.9.0\u0027 --emit-bytecode | sed \u0027s/MLIR17.0.0git/StableHLO_v0.9.0/\u0027 \u003e /tmp/stablehlo_legalize_to_vhlo.0_9_0.mlir.bc\r\nstablehlo-opt-v010 stablehlo/tests/stablehlo_legalize_to_vhlo.0_10_0.mlir --strip-debuginfo --emit-bytecode | stablehlo-translate-v010 --serialize --target\u003d0.10.0 | sed \u0027s/MLIR17.0.0git/StableHLO_v0.10.0/\u0027 \u003e /tmp/stablehlo_legalize_to_vhlo.0_10_0.mlir.bc\r\nstablehlo-opt-v011 stablehlo/tests/stablehlo_legalize_to_vhlo.0_11_0.mlir --strip-debuginfo --emit-bytecode | stablehlo-translate-v011 --serialize --target\u003d0.11.0 | sed \u0027s/MLIR17.0.0git/StableHLO_v0.11.0/\u0027 \u003e /tmp/stablehlo_legalize_to_vhlo.0_11_0.mlir.bc\r\n```\r\n\r\nThis is the equivalent way at the time of each release to generate what\r\nthe following command does today:\r\n\r\n```\r\nstablehlo-translate --serialize --target\u003d0.X.0 --strip-debuginfo\r\n```\r\n\r\nLastly, `--strip-debuginfo` is added to the diff check comparison since\r\nReduceOp prettyprinting currently depends on debug info, and the\r\nserialized artifacts do not have debuginfo:\r\nhttps://github.com/openxla/stablehlo/blob/4cd6f24257a364857add0e3c1dc11b2364669d50/stablehlo/dialect/StablehloOps.cpp#L1489-L1490\r\n\r\nThis is an issue that will be addressed separately:\r\nhttps://github.com/openxla/stablehlo/issues/1523"
    },
    {
      "commit": "4cd6f24257a364857add0e3c1dc11b2364669d50",
      "tree": "d853522a6101117279a0cd672d8c8b6e9e7d406d",
      "parents": [
        "f2472e6d2ab47c9e6594aff6508fa1c66e711ce6"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 22 16:43:21 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 18:43:21 2023 -0500"
      },
      "message": "Update the link to serialization APIs in compatibility.md (#1515)\n\nNow that we have these APIs documented right in this file, let\u0027s link\r\ndirectly to it."
    },
    {
      "commit": "f2472e6d2ab47c9e6594aff6508fa1c66e711ce6",
      "tree": "2932cfd36cb11f7f24fc7d2e82e2fbc40ce1fa21",
      "parents": [
        "428d88ddc5e94e246f3709c63399f32ffe30d5ab"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 22 16:42:32 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 18:42:32 2023 -0500"
      },
      "message": "Bump patch version after integrate 0.11.6 -\u003e 0.11.7 (#1513)\n\n"
    },
    {
      "commit": "428d88ddc5e94e246f3709c63399f32ffe30d5ab",
      "tree": "5fd0bdfcba696cf8856557511b2cfd704ddc9245",
      "parents": [
        "7c120e2edc8557aa96670067c976b37a755519ab"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 22 18:40:45 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 16:40:45 2023 -0700"
      },
      "message": "Add API to determine bytecode format version of StableHLO release (#1520)\n\nAdds a mapping of StableHLO version to bytecode format version. Map only\r\nindicates the releases where the format version changes:\r\n\r\n```c++\r\n\u003c0.10.0, 1\u003e // bytecode format incremented to v1 in 0.10.0\r\n\u003c0.9.0, 0\u003e // bytecode format started at v0 in 0.9.0\r\n```\r\n\r\nComparison algorithm validates supported version range, and walks the\r\nlist until it finds a version less or equal to the requested version:\r\n\r\n\r\n```c++\r\n// \u003c- 0.12.0 is above curr version, failure()\r\n// \u003c- 0.11.0 uses v1 \r\n// \u003c- 0.10.0 uses v1 \r\n\u003c0.10.0, 1\u003e\r\n// \u003c- 0.9.2 uses v0\r\n// \u003c- 0.9.0 uses v0\r\n\u003c0.9.0, 0\u003e\r\n// \u003c- 0.8.0 is below minimum, failure()\r\n```"
    },
    {
      "commit": "7c120e2edc8557aa96670067c976b37a755519ab",
      "tree": "1ea651cee81eeb57fdc5bccce81ffa789eb3ecce",
      "parents": [
        "38743b2fe0bfeaeac8301acf464b30e6002a9e57"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 22 16:26:35 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 16:26:35 2023 -0500"
      },
      "message": "Add option to stablehlo-translate to generate bytecode without debug info (#1519)\n\nExposes `--strip-debuginfo` as a pass from `stablehlo-translate`. This\r\nis currently available in `stablehlo-opt`.\r\n\r\nAlternatively, we could use:\r\n\r\n```\r\nstablehlo-opt file.mlir --strip-debuginfo --emit-bytecode | stablehlo-translate --serialize --target\u003dX.Y.Z\r\n```\r\n\r\nMust use bytecode across the bash pipe, otherwise debug info gets\r\npopulated with \u003cstdin\u003e values:\r\n\r\n```\r\n#loc972 \u003d loc(\"\u003cstdin\u003e\":853:5)\r\n#loc973 \u003d loc(\"\u003cstdin\u003e\":855:3)\r\n#loc975 \u003d loc(\"\u003cstdin\u003e\":856:10)\r\n#loc976 \u003d loc(\"\u003cstdin\u003e\":857:5)\r\n```"
    },
    {
      "commit": "38743b2fe0bfeaeac8301acf464b30e6002a9e57",
      "tree": "690bc70d526c6b68eb412628c06bffbd7fd712c2",
      "parents": [
        "27e1e53fd5c4d37c140b273e0b5d1c02dd7bc2b5"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 22 15:33:59 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 15:33:59 2023 -0500"
      },
      "message": "Consolidate VhloToVersion negative tests (#1517)\n\nConsolidate VhloToVersion negative tests. Use\r\nhttps://github.com/openxla/stablehlo/pull/1516 as diffbase."
    },
    {
      "commit": "27e1e53fd5c4d37c140b273e0b5d1c02dd7bc2b5",
      "tree": "9c58ff4806e02c08cb05ac2e83da673bae962530",
      "parents": [
        "05223052b7e2f387b3534496b0a0109094a6af2b"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 22 15:17:31 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 22 13:17:31 2023 -0700"
      },
      "message": "Add producer to StableHLO portable artifacts. (#1516)\n\nProducer reads `StableHLO_v\u003ctarget_version\u003e`, for example:\r\n\r\n```\r\n$ stablehlo-translate --serialize --target\u003d0.9.0 file.mlir\r\nML?RStableHLO_v0.9.0[...]\r\n```\r\n\r\nAlso moved some logic around and added `minimum` as supported target\r\nversion now that we have a getMinimumVersion API.\r\n\r\nCurrently there are no APIs that inspect the producer string, so this is\r\npurely debug info and does not impact forward or backward compatibility."
    },
    {
      "commit": "05223052b7e2f387b3534496b0a0109094a6af2b",
      "tree": "c039ba694bb1d7fcb409e2fed93fdfa2c7514ab7",
      "parents": [
        "b1cff89dbef1e2db09beafd26ec7c5eabd338aac"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Sun May 21 11:43:18 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 21 11:43:18 2023 -0700"
      },
      "message": "Actually use the BytecodeWriterConfig in writeBytecodeToFile (#1512)\n\n#1511 introduced BytecodeWriterConfig to our serialization logic,\r\nbut I forgot to actually use it. This PR fixes this oversight.\r\n\r\nThis mistake on my part happened because we don\u0027t have forward\r\ncompatibility tests yet. To make sure that I didn\u0027t mess up anything\r\nelse this time, I have manually verified the following:\r\n* At HEAD, build/bin/stablehlo-translate --serialize produces different\r\npayloads when: 1) local clone of llvm-project is pristine, 2) local\r\nclone of llvm-project has kVersion manually changed to 2. The only\r\ndifference is the bytecodeVersion field in the serialized payloads.\r\n* With this PR, build/bin/stablehlo-translate --serialize produces the\r\nsame payloads when: 1) local clone of llvm-project is pristine, 2) local\r\nclone of llvm-project has kVersion manually changed to 2. As expected,\r\nusing BytecodeWriterConfig overrides kVersion."
    },
    {
      "commit": "b1cff89dbef1e2db09beafd26ec7c5eabd338aac",
      "tree": "13fee4aa53e4084539bcc6c0f7af28fc0d5190e5",
      "parents": [
        "e86b9c56110538f48960cc03126e67a9ad03d10b"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Sun May 21 10:54:22 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 21 10:54:22 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@f7e2678bb706 (#1510)\n\n"
    },
    {
      "commit": "e86b9c56110538f48960cc03126e67a9ad03d10b",
      "tree": "02f52b0077c44b83beab616a00476b09a1267c65",
      "parents": [
        "54bf2f32e790fa8105e90d8cdffbf09ce20da86e"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Sun May 21 10:40:03 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 21 10:40:03 2023 -0700"
      },
      "message": "Use bytecodeVersion \u003d 1 when serializing MLIR bytecode (#1511)\n\nhttps://reviews.llvm.org/D149515 has just landed, so we must explicitly\r\nspecify bytecodeVersion when using BytecodeWriter to avoid breaking\r\nforward compatibility guarantees.\r\n\r\nMore specifically, we want to avoid a situation where:\r\n  1) A StableHLO producer using a post-D149515 version of LLVM\r\n     serializes a StableHLO program to bytecode. (This will use\r\n     bytecodeVersion \u003d 2 by default).\r\n  2) A StableHLO consumer using a pre-D149515 version of LLVM within\r\n     the 1 month StableHLO forward compatibility window cannot\r\n     deserialize the StableHLO program.\r\n\r\nWe don\u0027t have forward compatibility tests yet, so this PR doesn\u0027t have\r\ntests either. See #1498 for an RFC for forward compatibility testing."
    },
    {
      "commit": "54bf2f32e790fa8105e90d8cdffbf09ce20da86e",
      "tree": "4a5a42296ebdaf4440f80b00e237a8f26ad96d61",
      "parents": [
        "26b63a9398d68a16aef27a035e4d8520596a7dc0"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Sun May 21 10:26:05 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 21 10:26:05 2023 -0700"
      },
      "message": "Fix issues identified during integrate (#1505)\n\n* BUILD.bazel: remove import of unused build_test.\r\n  * BUILD.bazel: order dependencies alphabetically."
    },
    {
      "commit": "26b63a9398d68a16aef27a035e4d8520596a7dc0",
      "tree": "3de1ebfe1157bddb1745b8479a1b7c8b63e1bd7d",
      "parents": [
        "4fe990afe3a79ffaf6ca502fb6eebde14abadeb6"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 19 18:00:40 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 18:00:40 2023 -0700"
      },
      "message": "Bump patch version after integrate 0.11.5 -\u003e 0.11.6 (#1507)\n\nThis PR concludes this weeks integrates, which were a bit messed up\r\nbecause of my oversight. #1506 explains what happened and provides a\r\nfix, and this PR is the final step towards normalcy."
    },
    {
      "commit": "4fe990afe3a79ffaf6ca502fb6eebde14abadeb6",
      "tree": "f79ac667d5fd6ab3c240854200784514a178524b",
      "parents": [
        "6232753c7b74bb5d555293479b80902eed55ff2e"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 19 17:29:12 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 17:29:12 2023 -0700"
      },
      "message": "Belatedly bump patch version after integrate 0.11.4 -\u003e 0.11.5 (#1506)\n\nOur integrate process involves bumping Version.h on GitHub after a\r\nsuccessful landing of an downstream integrate. This is done to make sure\r\nthat the subsequent downstream integrate is guaranteed to receive a\r\ndifferent version, so that we can tell them apart down the line.\r\n\r\nUnfortunately, I forgot to do that after the previous downstream\r\nintegrate:\r\nhttps://github.com/tensorflow/mlir-hlo/commit/e46e2b655d70a9099acd47be940ca3c0973583a2\r\nwhich had its Version.h say 0.11.4.\r\n\r\nNow we have another downstream integrate that has just landed:\r\nhttps://github.com/tensorflow/mlir-hlo/commit/a1cd423b7ae9cf9f48cd21494756d85d04e97411,\r\nand it has the same version as its predecessor (its Version.h also says\r\n0.11.4). This is exactly what the integrate process was trying to avoid.\r\n\r\nTo untangle this, I propose that we:\r\n  1) Bump Version.h to 0.11.5 on GitHub (this PR).\r\n  2) Bump Version.h to 0.11.5 downstream (there\u0027ll be a separate CL).\r\n  3) After 1) and 2) are merged, we tag the GitHub HEAD as v0.11.5.\r\n  4) And then proceed as usual."
    },
    {
      "commit": "6232753c7b74bb5d555293479b80902eed55ff2e",
      "tree": "465cbb089e5d543c236cd072dc2be4cfaebfbbf9",
      "parents": [
        "9e36ae5e5032ec28ea436a3e1153ddb54adfdfcf"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 19 13:03:36 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 13:03:36 2023 -0700"
      },
      "message": "Rename tosa/ passes to follow pass name conventions (#1504)\n\nFor the legalization passes, we\u0027ve been following the\r\nfoo-legalize-to-bar convention. And for passes in general, we\u0027ve been\r\nadhering to dialectname-something-something. These conventions are\r\napplied to the recently introduced tosa/ passes in this PR.\r\n\r\nAlso, to further the consistency between existing passes and newly\r\nintroduced passes, this PR removes manual definitions of createFooPass\r\nfunctions. At some point, TableGen got the ability to automatically\r\ngenerate these definitions, so we\u0027re leveraging it here.\r\n\r\nThank you, @GleasonK for your feedback that led to the creation of this\r\npull request."
    },
    {
      "commit": "9e36ae5e5032ec28ea436a3e1153ddb54adfdfcf",
      "tree": "d2421811b5ae8e2da5117e01679639c1c13d9597",
      "parents": [
        "608d220172fbad41e9055b2f559026f2c7e3085d"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 19 12:19:38 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 12:19:38 2023 -0700"
      },
      "message": "Rename some files in tosa/ to follow filename conventions (#1503)\n\nIn other parts of the repo, we\u0027re using the LLVM-style convention of\r\nFooBar.h and FooBar.cpp vs the Google-style convention of foo_bar.h and\r\nfoo_bar.cc."
    },
    {
      "commit": "608d220172fbad41e9055b2f559026f2c7e3085d",
      "tree": "1c3577c176af739db19730894d3553fa27025be2",
      "parents": [
        "4e2ad864c806b8bf650eb55d88c6433155fa6472"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 19 11:54:53 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 11:54:53 2023 -0700"
      },
      "message": "Integrate tosa/ into the Bazel build (#1502)\n\nThis PR merges the BUILD.bazel file from\r\nstablehlo/conversions/tosa/transforms into the root BUILD.bazel file, to\r\nfollow the current convention.\r\n\r\nWe cargo-culted this convention from the MLIR-HLO repository, so maybe\r\nit\u0027s time to get rid of it, but I\u0027ll leave that to future work.\r\n\r\nFurthermore, I added BUILD.bazel in stablehlo/conversions/tosa/tests, so\r\nthat Bazel can actually run those tests.\r\n\r\nFinally, I noticed that a minor cleanup opportunity in the CMake build,\r\nand I figured it wouldn\u0027t hurt to pursue it in this PR."
    },
    {
      "commit": "4e2ad864c806b8bf650eb55d88c6433155fa6472",
      "tree": "8d6172044ea1f2c5716e5f8959c1eaa9df10a319",
      "parents": [
        "d9f723d046d62d0b2987ac01b359c8da0b8c57d3"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 19 11:05:46 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 11:05:46 2023 -0700"
      },
      "message": "Incorporate tosa/ tests into our test infrastructure (#1501)\n\nIn #1466, we ran into some issues when adding the newly introduced tosa/\r\nsuite to relevant test targets, so this PR revamps this system to\r\nproperly integrate tosa/ as well as simplify similar work in the future.\r\n\r\nInstead of the state of the art with one `check-stablehlo` custom target\r\nand a bunch of suites becoming its dependencies, we now have three\r\ncustom targets:\r\n  1) check-stablehlo-ci whose name makes it clear that this is what\r\n     runs in CI.\r\n  2) check-stablehlo-slow for slow-running tests like the testdata/\r\n     suite that we\u0027d like to separate from the other suites, so that\r\n     humans don\u0027t have to run them every time.\r\n  3) check-stablehlo-quick for everything else.\r\n\r\nEach suite becomes a dependency of either -slow or -quick, making its\r\nnature explicit and clearly documented.\r\n\r\nAlso, it looks like the tosa/ suite got broken by one of the LLVM bumps\r\nthat happened between when its PR got created and when it got merged. I\r\nfixed those breakages too."
    },
    {
      "commit": "d9f723d046d62d0b2987ac01b359c8da0b8c57d3",
      "tree": "c994de0eab1c7be59155d3ea5c79734253cbc526",
      "parents": [
        "cf2f2b84e564cbb5014275698720867322e443bb"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 19 09:45:30 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 09:45:30 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@e96123dfeabc (#1499)\n\n"
    },
    {
      "commit": "cf2f2b84e564cbb5014275698720867322e443bb",
      "tree": "37b4e2980d94c0f6c2bdd5dda09f26e7bd0e3d43",
      "parents": [
        "14691ce2e956f089d401b5bfee9fcb10e20d4755"
      ],
      "author": {
        "name": "Jacques Pienaar",
        "email": "jpienaar@google.com",
        "time": "Fri May 19 09:44:36 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 19 09:44:36 2023 -0700"
      },
      "message": "[tosa] Add path to TOSA backed backends. (#1466)\n\nConnect these two industry standards by way of dialect legalization from\r\nStableHLO to TOSA. This adds basic support and testing: along with usage\r\nof some of the equivalent canonicalization patterns from MHLO (not\r\nincluded in this PR) this has been sufficient for some full models\r\nstarting from ML framework to TOSA backed. Support is not complete and\r\npartly relies on some canonical StableHLO forms.\r\n\r\nThe legalizations are also written primarily using PDLL, but we have not\r\nyet adopted some of the newer support there for variadics. This work\r\nstarted by targeting MHLO in TensorFlow repo as StableHLO was still\r\nyoung, but given StableHLO development it makes more sense to instead\r\nstart there and provide a connection for community backends.\r\n\r\nNo new repository dependency is introduced. The cmake config enables\r\ndisabling building conversion,\r\n\r\n---------\r\n\r\nCo-authored-by: Eugene Burmako \u003cburmako@google.com\u003e"
    },
    {
      "commit": "14691ce2e956f089d401b5bfee9fcb10e20d4755",
      "tree": "fb3fa614f020140efa7bc597e62b2254ea437e4f",
      "parents": [
        "dc3938e491c6b56caf0f0f8bfca0ac361727b6c2"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Wed May 17 08:59:37 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 17 08:59:37 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@33da608ecc0f (#1495)\n\n"
    },
    {
      "commit": "dc3938e491c6b56caf0f0f8bfca0ac361727b6c2",
      "tree": "054e9b7a3a624a9b9753079e7c67f12e4ad56518",
      "parents": [
        "14b7a892da76c2574e274be7ba7de640bcab1d3d"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Wed May 17 08:12:50 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 17 08:12:50 2023 -0700"
      },
      "message": "Add limited support for multiple functions in shape refinement (#1484)\n\n--stablehlo-refine-shapes currently has a limitation of only supporting\r\none function to avoid dealing with complexities of potential loops in\r\nthe dataflow graphs.\r\n\r\nThis PR slightly relaxes this limitation by not erroring out on multiple\r\nfunctions and instead refining shapes in just the `main` function among\r\nthose (in case one doesn\u0027t exist, that would be an error).\r\n\r\nMLIR-HLO commit:\r\nhttps://github.com/tensorflow/mlir-hlo/commit/36225413eab276eda72f936065d3ab361cf53889."
    },
    {
      "commit": "14b7a892da76c2574e274be7ba7de640bcab1d3d",
      "tree": "fe6f1550b3a9641ae8bc95f7e41f0e68054657ee",
      "parents": [
        "f5a391162249925f820ef6839608023b7a0bd0be"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Wed May 17 08:08:04 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 17 08:08:04 2023 -0700"
      },
      "message": "Slightly reword compatibility.md (#1493)\n\n1\\) With the addition of new content, the \"Out of scope\" section seemed\r\na bit out of place, so I moved it to the bottom of the document.\r\n\r\n2\\) Shortened \"Creating portable artifacts\" to \"APIs\" and applied\r\nfurther shortenings to subordinate sections.\r\n\r\n3\\) Consistently referred to these APIs as \"compatibility APIs\" since we\r\nare in a document called compatibility.md."
    },
    {
      "commit": "f5a391162249925f820ef6839608023b7a0bd0be",
      "tree": "13141b0b4e62ac60b3bcc623168aca53e792c501",
      "parents": [
        "8bbb0ee9f6136d6028084a81b76b6dd9756278ec"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Mon May 15 22:49:15 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 15 22:49:15 2023 -0700"
      },
      "message": "Fix CholeskyOp breakage (#1487)\n\nThe recent PR has merged with breakage in its tests, so this PR fixes\r\nit."
    },
    {
      "commit": "8bbb0ee9f6136d6028084a81b76b6dd9756278ec",
      "tree": "69aa0d3e4a5a9e6bc1e3eb257d85f9ccd6054ab6",
      "parents": [
        "2358069918b19e49eabf9afc283d25b70d6dc4ac"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Mon May 15 22:20:29 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 15 22:20:29 2023 -0700"
      },
      "message": "Add interpreter for CholeskyOp (#1444)\n\nHere are the constraints for CholeskyOp:\r\n```\r\n(I1) `a` is a tensor of floating-point or complex type.\r\n(I2) `lower` is a tensor constant of `i1` type.\r\n(C1) `a` and `result` have the same type.\r\n(C2) rank(`a`) \u003e\u003d 2.\r\n(C3) dim(`a`, -2) \u003d dim(`a`, -1).\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) `a` is not a tensor of floating-point type or complex type. (Covered by ODS).\r\nI2: a) `lower` is not a tensor constant of `i1` type. (Covered by ODS).\r\nC1: a) type(a) !\u003d type(result).\r\nC2: a) rank(a) \u003c 2.\r\nC3: a) dim(`a`, -2) !\u003d dim(`a`, -1).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nC1a: type(a) !\u003d type(result).\r\nC2a: rank(a) \u003c 2.\r\nC3a: dim(`a`, -2) !\u003d dim(`a`, -1).\r\n```\r\n\r\nNotes:\r\n* Implementation inspired by the [Cholesky–Banachiewicz\r\nalgorithm](https://en.wikipedia.org/wiki/Cholesky_decomposition).\r\n\r\ncloses #1123"
    },
    {
      "commit": "2358069918b19e49eabf9afc283d25b70d6dc4ac",
      "tree": "166d14bd7b2f0615f0b4a68cc2105e694397b4da",
      "parents": [
        "c717cb992d6d3970a66fc00aba9569d2fc4c1605"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Mon May 15 15:10:59 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 15 13:10:59 2023 -0700"
      },
      "message": "Update serialization cookbook to include Python APIs (#1483)\n\nMove instructions on creating serialized artifacts to `compatibility.md`\r\nto have a centralized location to look for user compatibility\r\ndocumentation.\r\n\r\nAlso add Python APIs and augment with links to example code."
    },
    {
      "commit": "c717cb992d6d3970a66fc00aba9569d2fc4c1605",
      "tree": "ba18c64ffacab4c87df1d47867b44d6faf6f1882",
      "parents": [
        "4614c7fa8c451e44f085ce15466863edfbfc9c32"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 15 11:55:43 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 15 11:55:43 2023 -0700"
      },
      "message": "Fix issues identified during integrate (#1485)\n\n* BUILD.bazel \u0026 CMakeLists.txt: add a missing dependency.\r\n* stablehlo/dialect/TypeInference.cpp,\r\nstablehlo/dialect/TypeInference.h, stablehlo/reference/Ops.cpp,\r\nstablehlo/reference/Tensor.h: fix clang-tidy warnings.\r\n  * stablehlo/integrations/python/tests/stablehlo.py: fix formatting."
    },
    {
      "commit": "4614c7fa8c451e44f085ce15466863edfbfc9c32",
      "tree": "7bd4414513df8d00dc20d3b8adc9e78f0bffcfa5",
      "parents": [
        "19ca41caa46260b64581740b8f8f9a38bc86126b"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 15 11:55:29 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 15 11:55:29 2023 -0700"
      },
      "message": "Bump patch version after integrate 0.11.3 -\u003e 0.11.4 (#1486)\n\n"
    },
    {
      "commit": "19ca41caa46260b64581740b8f8f9a38bc86126b",
      "tree": "c6f2d86906098e1e606023dbe2740e46088426eb",
      "parents": [
        "89b9da3163855eb0721a8a3f487b651ecaae57fb"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Fri May 12 14:31:43 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 12 14:31:43 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@8faffa3cd3e1 (#1482)\n\n"
    },
    {
      "commit": "89b9da3163855eb0721a8a3f487b651ecaae57fb",
      "tree": "fcf6e13909d7706c80d87cb760c20501a3e6f6cd",
      "parents": [
        "d6f684691fe3e27751aec24c51a236a3379093b7"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Fri May 12 16:26:17 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 12 14:26:17 2023 -0700"
      },
      "message": "Add API to get minimum supported StableHLO version (#1481)\n\nThis is based on https://github.com/openxla/stablehlo/pull/1480"
    },
    {
      "commit": "d6f684691fe3e27751aec24c51a236a3379093b7",
      "tree": "66cb3c34dc96c53f12d64b4c24ceaf17e1b93f14",
      "parents": [
        "fb050189fec06a9b50fe08cbe95a7a21e7db0375"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Fri May 12 16:16:23 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 12 14:16:23 2023 -0700"
      },
      "message": "Split portable StableHLO APIs into separate file (#1480)\n\nPortable APIs are APIs with signatures that do not depend on MLIR. This\r\nprovides a way to access some StableHLO APIs without needing an MLIR\r\ndependency/visibility at the call site.\r\n\r\nThese APIs also can be safer in cases where shared objects are used, as\r\npassing MLIR Context across shared objects can cause problems."
    },
    {
      "commit": "fb050189fec06a9b50fe08cbe95a7a21e7db0375",
      "tree": "a72f0c3101825c79aa868cf782ed5b6612cb4ec6",
      "parents": [
        "8e7ec970fad3e80a725145fde213032e28e2c38f"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Wed May 10 17:09:04 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 10 17:09:04 2023 -0700"
      },
      "message": "Add interpreter for BatchNormInferenceOp (#1371)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) `operand`: tensor of floating-point type.\r\n(I2) `scale`: 1-dimensional tensor of floating-point type.\r\n(I3) `offset`: 1-dimensional tensor of floating-point type.\r\n(I4) `mean`: 1-dimensional tensor of floating-point type.\r\n(I5) `variance`: 1-dimensional tensor of floating-point type.\r\n(I6) `epsilon`: constant of type `f32`.\r\n(I7) `feature_index`: constant of type `si64`.\r\n(C1) 0 \u003c\u003d `feature_index` \u003c rank(`operand`).\r\n(C2) `operand`, `scale`, `offset`, `mean`, `variance` and `result` have the\r\nsame element type.\r\n(C3) size(`scale`) \u003d `dim(operand, feature_index)`.\r\n(C4) size(`offset`) \u003d `dim(operand, feature_index)`.\r\n(C5) size(`mean`) \u003d `dim(operand, feature_index)`.\r\n(C6) size(`variance`) \u003d `dim(operand, feature_index)`.\r\n(C7) `operand` and `result` have the same type.\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) `operand` is not a tensor of floating-point type. (Covered by ODS).\r\nI2: a) `scale` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) element_type(`scale`) !\u003d floating-point type. (Covered by ODS).\r\nI3: a) `offset` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) element_type(`offset`) !\u003d floating-point type. (Covered by ODS).\r\nI4: a) `mean` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) element_type(`mean`) !\u003d floating-point type. (Covered by ODS).\r\nI5: a) `variance` is not a 1-dimensional tensor. (Covered by ODS).\r\n    b) element_type(`variance`) !\u003d floating-point type. (Covered by ODS).\r\nI6: a) `epsilon` is not a constant of type `f32`. (Covered by ODS).\r\nI7: a) `feature_index` is not a constant of type `si64`. (Covered by ODS).\r\nC1: a) `feature_index` \u003c 0.\r\n    b) `feature_index` \u003e\u003d rank(`operand`).\r\nC2: a) element_type(`operand`) !\u003d element_type(`scale`). (Covered by ODS).\r\n    b) element_type(`operand`) !\u003d element_type(`offset`). (Covered by ODS).\r\n    c) element_type(`operand`) !\u003d element_type(`mean`). (Covered by ODS).\r\n    d) element_type(`operand`) !\u003d element_type(`variance`). (Covered by ODS).\r\n    e) element_type(`operand`) !\u003d element_type(`result`). (Covered by ODS).\r\nC3: a) size(`scale`) !\u003d `dim(operand, feature_index)`.\r\nC4: a) size(`offset`) !\u003d `dim(operand, feature_index)`.\r\nC5: a) size(`mean`) !\u003d `dim(operand, feature_index)`.\r\nC6: a) size(`variance`) !\u003d `dim(operand, feature_index)`.\r\nC7: a) type(`operand`) !\u003d type(`result`).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nC1a: `feature_index` \u003c 0.\r\nC1b: `feature_index` \u003e\u003d rank(`operand`).\r\nC3a: size(`scale`) !\u003d `dim(operand, feature_index)`.\r\nC4a: size(`offset`) !\u003d `dim(operand, feature_index)`.\r\nC5a: size(`mean`) !\u003d `dim(operand, feature_index)`.\r\nC6a: size(`variance`) !\u003d `dim(operand, feature_index)`.\r\nC7a: type(`operand`) !\u003d type(`result`).\r\n```\r\n\r\ncloses #963"
    },
    {
      "commit": "8e7ec970fad3e80a725145fde213032e28e2c38f",
      "tree": "ffa48034c80955b130d7274468ccb1909d26a7ec",
      "parents": [
        "cc08d0990acc322f91695488eb810560c05e8c33"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Wed May 10 14:20:57 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 10 14:20:57 2023 -0700"
      },
      "message": "Add interpreter for ConvertOp (#1349)\n\nHere are the following constraints:\r\n```\r\n(I1) operand is a tensor. (Covered by ODS).\r\n(C1) `operand` and `result` have the same shape. (Covered by ODS).\r\n```\r\n\r\nNotes:\r\n* Added one positive test apart from existing fp8 tests.\r\n* No additional constraint tests are needed as all constraints and shape\r\ninference is covered by ODS.\r\n* Left out handling special behaviors for floating-point/complex -\u003e\r\ninteger and vice versa to #180 (currently the behavior is implementation\r\ndefined).\r\n* Updated spec to clarify semantics for complex to boolean case.\r\n\r\ncloses #969\r\n\r\n---------\r\n\r\nCo-authored-by: Eugene Burmako \u003cburmako@google.com\u003e"
    },
    {
      "commit": "cc08d0990acc322f91695488eb810560c05e8c33",
      "tree": "32f800e8bbac440d16c6bd694e86fe784dc937fe",
      "parents": [
        "f396777811792145c4915df2c7f842185cc6b017"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Wed May 10 13:51:49 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 10 13:51:49 2023 -0700"
      },
      "message": "Add interpreter for ReduceWindowOp (#1336)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) `inputs`: variadic number of tensors.\r\n(I2) `init_values`: variadic number of 0-dimensional tensors.\r\n(I3) `window_dimensions`: 1-dimensional tensor constant of type `si64`.\r\n(I4) `window_strides`: 1-dimensional tensor constant of type `si64`.\r\n(I5) `base_dilations`: 1-dimensional tensor constant of type `si64`.\r\n(I6) `window_dilations`: 1-dimensional tensor constant of type `si64`.\r\n(I7) `padding`: 2-dimensional tensor constant of type `si64`.\r\n(I8) `body`: function.\r\n(C1) size(`inputs`) \u003d size(`init_values`) \u003d size(`results`) \u003d N and\r\n  N \u003e\u003d 1.\r\n(C2) All `inputs` have the same shape.\r\n(C3) `element_type(inputs[k]) \u003d element_type(init_values[k])` for all k\r\n  in [0, N).\r\n(C4) size(`window_dimensions`) \u003d rank(`inputs[0]`).\r\n(C5) `window_dimensions[i]` \u003e 0 for all i in [0, size(`window_dimensions`)).\r\n(C6) size(`window_strides`) \u003d rank(`inputs[0]`).\r\n(C7) `window_strides[i]` \u003e 0 for all i in [0, size(`window_strides`)).\r\n(C8) size(`base_dilations`) \u003d rank(`inputs[0]`).\r\n(C9) `base_dilations[i]` \u003e 0 for all i in [0, size(`base_dilations`)).\r\n(C10) size(`window_dilations`) \u003d rank(`inputs[0]`).\r\n(C11) `window_dilations[i]` \u003e 0 for all i in [0, size(`window_dilations`)).\r\n(C12) dim(`padding`, 0) \u003d rank(`inputs[0]`) and dim(`padding`, 1) \u003d 2.\r\n(C13) `body` has type `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)`\r\n  where `Ek \u003d element_type(inputs[0])`.\r\n(C14) All `results` have the same shape.\r\n(C15) `shape(results[0]) \u003d num_windows`\r\n  * `dilated_input_shape \u003d shape(inputs[0]) \u003d\u003d 0 ? 0 : (shape(inputs[0]) - 1) * base_dilations + 1`.\r\n  * `padded_input_shape \u003d padding[:, 0] + dilated_input_shape + padding[:, 1]`.\r\n  * `dilated_window_shape \u003d window_dimensions \u003d\u003d 0 ? 0 : (window_dimensions - 1) * window_dilations + 1`.\r\n  * `num_windows \u003d (padded_input_shape \u003d\u003d 0 || dilated_window_shape \u003e padded_input_shape) ? 0 : floor((padded_input_shape - dilated_window_shape) / window_strides) + 1`.\r\n(C16) `element_type(results[k]) \u003d element_type(init_values[k])` for all k\r\n  in [0, N).\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) `inputs` is not variadic number of tensors. (Covered by ODS).\r\nI2: a) `init_values` is not variadic number of tensors. (Covered by ODS).\r\n    b) `init_values` is not 0-dimensional tensors.\r\nI3: a) `window_dimensions` is not a 1-dimensional tensor constant.\r\n    b) element_type(`window_dimensions`) !\u003d `si64`. (Covered by ODS).\r\nI4: a) `window_strides` is not a 1-dimensional tensor constant.\r\n    b) element_type(`window_strides`) !\u003d `si64`. (Covered by ODS).\r\nI5: a) `base_dilations` is not a  1-dimensional tensor constant.\r\n    b) element_type(`base_dilations`) !\u003d `si64`. (Covered by ODS).\r\nI6: a) `window_dilations` is not a 1-dimensional tensor constant.\r\n    b) element_type(`window_dilations`) !\u003d `si64`. (Covered by ODS).\r\nI7: a) `padding` is not a 2-dimensional tensor constant.\r\n    b) element_type(`padding`) !\u003d `si64`. (Covered by ODS).\r\nI8: a) `body` is not a function. (Covered by ODS).\r\nC1: a) size(`inputs`) !\u003d size(`init_values`)\r\n    b) size(`inputs`) !\u003d size(`results`)\r\n    c) size(`inputs`) \u003c 1.\r\nC2: a) Any `inputs` does not have the same shape.\r\nC3: a) `element_type(inputs[k]) !\u003d element_type(init_values[k])` for any k\r\n  in [0, N).\r\nC4: a) size(`window_dimensions`) !\u003d rank(`inputs[0]`).\r\nC5: a) `window_dimensions[i]` \u003c\u003d 0 for any i in [0, size(`window_dimensions`)).\r\nC6: a) size(`window_strides`) !\u003d rank(`inputs[0]`).\r\nC7: a) `window_strides[i]` \u003c\u003d 0 for any i in [0, size(`window_strides`)).\r\nC8: a) size(`base_dilations`) !\u003d rank(`inputs[0]`).\r\nC9: a) `base_dilations[i]` \u003c\u003d 0 for any i in [0, size(`base_dilations`)).\r\nC10: a) size(`window_dilations`) !\u003d rank(`inputs[0]`).\r\nC11: a) `window_dilations[i]` \u003c\u003d 0 for any i in [0, size(`window_dilations`)).\r\nC12: a) dim(`padding`, 0) !\u003d rank(`inputs[0]`)\r\n     b) dim(`padding`, 1) !\u003d 2.\r\nC13: a) `body` does not have type `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)`\r\n  where `Ek \u003d element_type(inputs[0])`.\r\nC14: a) Any `results` do not have the same shape.\r\nC15: a) `shape(results[0]) !\u003d num_windows`\r\n  * `dilated_input_shape \u003d shape(inputs[0]) \u003d\u003d 0 ? 0 : (shape(inputs[0]) - 1) * base_dilations + 1`.\r\n  * `padded_input_shape \u003d padding[:, 0] + dilated_input_shape + padding[:, 1]`.\r\n  * `dilated_window_shape \u003d window_dimensions \u003d\u003d 0 ? 0 : (window_dimensions - 1) * window_dilations + 1`.\r\n  * `num_windows \u003d (padded_input_shape \u003d\u003d 0 || dilated_window_shape \u003e padded_input_shape) ? 0 : floor((padded_input_shape - dilated_window_shape) / window_strides) + 1`.\r\nC16: a) `element_type(results[k]) !\u003d element_type(init_values[k])` for any k\r\n  in [0, N).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nI2b: `init_values` is not 0-dimensional tensors.\r\nI3a: `window_dimensions` is not a 1-dimensional tensor constant.\r\nI4a: `window_strides` is not a 1-dimensional tensor constant.\r\nI5a: `base_dilations` is not a  1-dimensional tensor constant.\r\nI6a: `window_dilations` is not a 1-dimensional tensor constant.\r\nI7a: `padding` is not a 2-dimensional tensor constant.\r\nC1a: size(`inputs`) !\u003d size(`init_values`)\r\nC1b: size(`inputs`) !\u003d size(`results`)\r\nC1c: size(`inputs`) \u003c 1.\r\nC2a: Any `inputs` does not have the same shape.\r\nC3a: `element_type(inputs[k]) !\u003d element_type(init_values[k])` for any k in [0, N).\r\nC4a: size(`window_dimensions`) !\u003d rank(`inputs[0]`).\r\nC5a: `window_dimensions[i]` \u003c\u003d 0 for any i in [0, size(`window_dimensions`)).\r\nC6a: size(`window_strides`) !\u003d rank(`inputs[0]`).\r\nC7a: `window_strides[i]` \u003c\u003d 0 for any i in [0, size(`window_strides`)).\r\nC8a: size(`base_dilations`) !\u003d rank(`inputs[0]`).\r\nC9a: `base_dilations[i]` \u003c\u003d 0 for any i in [0, size(`base_dilations`)).\r\nC10a: size(`window_dilations`) !\u003d rank(`inputs[0]`).\r\nC11a: `window_dilations[i]` \u003c\u003d 0 for any i in [0, size(`window_dilations`)).\r\nC12a: dim(`padding`, 0) !\u003d rank(`inputs[0]`)\r\nC12b: dim(`padding`, 1) !\u003d 2.\r\nC13a: `body` does not have type `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)`\r\n  where `Ek \u003d element_type(inputs[0])`.\r\nC14a: Any `results` do not have the same shape.\r\nC15a: `shape(results[0]) !\u003d num_windows`\r\n  * `dilated_input_shape \u003d shape(inputs[0]) \u003d\u003d 0 ? 0 : (shape(inputs[0]) - 1) * base_dilations + 1`.\r\n  * `padded_input_shape \u003d padding[:, 0] + dilated_input_shape + padding[:, 1]`.\r\n  * `dilated_window_shape \u003d window_dimensions \u003d\u003d 0 ? 0 : (window_dimensions - 1) * window_dilations + 1`.\r\n  * `num_windows \u003d (padded_input_shape \u003d\u003d 0 || dilated_window_shape \u003e padded_input_shape) ? 0 : floor((padded_input_shape - dilated_window_shape) / window_strides) + 1`.\r\nC16a: `element_type(results[k]) !\u003d element_type(init_values[k])` for any k in [0, N).\r\n```\r\n\r\nNotes:\r\n* Minor wording change in the spec.\r\n* We cannot verify C1a: size(`inputs`) !\u003d size(`init_values`) as noted\r\nin #1334.\r\n* Removed some duplicate verification tests.\r\n\r\ncloses #983\r\n\r\n---------\r\n\r\nCo-authored-by: Eugene Burmako \u003cburmako@google.com\u003e"
    },
    {
      "commit": "f396777811792145c4915df2c7f842185cc6b017",
      "tree": "e1033223b01ba6599632f8e3f998db2a20175f74",
      "parents": [
        "05d050d187ab3c614a4b134bf81ea8be00b03f4e"
      ],
      "author": {
        "name": "Sandeep Dasgupta",
        "email": "sdasgup@google.com",
        "time": "Wed May 10 09:34:26 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 10 16:34:26 2023 +0000"
      },
      "message": "Specification for quantized DotGeneralOp (#1413)\n\n## Summary \r\nThe PR proposes the spec for quantized dot-general op along with the\r\nspecifications for a few other ops on which the dot-general depends on,\r\nfor example, `slice`, `transpose`, and `reshape`.\r\n\r\n## A few details\r\nGiven `fp \u003d tensor with floating-point type and q \u003d tensor with\r\nuniformed quantized type`, the PR covers the semantics of\r\n(1) Static range quantized `dot_general` op `dot_general(q, q)`, and \r\n~~(2) Hybrid quantized `dot_general` op `dot_general(fp, q)`: Currently,\r\nthis version of the op only supports dynamic range quantization, where\r\nthe on-the-fly quantization of `lhs` is fused in the op-semantics. IMO,\r\nonce we support https://github.com/openxla/stablehlo/issues/1407, the\r\nquantization logic can be un-fused and made explicit in the MLIR graph\r\n(cc @sngyhan).~~\r\n\r\n**update**: As per the\r\n[discussion](https://github.com/openxla/stablehlo/pull/1413#discussion_r1183127043),\r\nit is decided to have only (1) in the spec. It might be too early to\r\nintroduce (2), the \"dynamic range quantizated\" variant of the op, mainly\r\nbecause (a) only TFLite CPU implements it and (b) in the long, there are\r\nplans to implement dynamic range quantization expolicitly in the graph\r\nlevel.\r\n\r\n\r\n## What comes next\r\nThe plan forward is to propose a PR for convolution op in very near\r\nfuture. I realized that the spec for convolution depends on dot-general\r\nand a split might help the review process.\r\n\r\nPlease let me know your review feedback."
    },
    {
      "commit": "05d050d187ab3c614a4b134bf81ea8be00b03f4e",
      "tree": "05b29fe7c9acf0b2dc82b5288d3818eea7491e63",
      "parents": [
        "ea7153c5ccf8dd1435720d0c46397a63224ecc6e"
      ],
      "author": {
        "name": "Sandeep Dasgupta",
        "email": "sdasgup@google.com",
        "time": "Tue May 09 18:51:13 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 10 01:51:13 2023 +0000"
      },
      "message": "Specification for quantized  AddOp (#1446)\n\n## Summary \r\n\r\nThe PR proposes the specification for quantized add op.\r\n\r\n## A few details\r\n\r\nAt some point we\r\n[decided](https://github.com/openxla/stablehlo/pull/1352#discussion_r1166196224)\r\nto drop the introduction of the specification of this op mainly because\r\nwe were unsure about the fate of\r\nhttps://github.com/openxla/stablehlo/issues/1406.\r\n \r\nPlease have a look at my revised proposal on\r\nhttps://github.com/openxla/stablehlo/issues/1406 and let me know if I am\r\nmissing something. Otherwise, let us review this op and let me know your\r\nfeedback.\r\n\r\nSide note: For those who are already aware of the context of prior\r\nintroduction of this op, please note that the current proposal is almost\r\nsame as before except that it does not have any additional constraint\r\nimposed by the op\u0027s semantics on `storage_min` or `storage_max`."
    },
    {
      "commit": "ea7153c5ccf8dd1435720d0c46397a63224ecc6e",
      "tree": "3323c6b8d424f6f46b28ae5b440affb5cabd2d54",
      "parents": [
        "7f265c6e40364c061a574b2a4cb6faea94c9af78"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Tue May 09 17:23:38 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 09 17:23:38 2023 -0700"
      },
      "message": "Clarify the checklist wording and provide examples (#1474)\n\ncloses #1473"
    },
    {
      "commit": "7f265c6e40364c061a574b2a4cb6faea94c9af78",
      "tree": "e11f37c2afadface38e06a166ddef4e974e329da",
      "parents": [
        "17fda0c9f45798a0bd4ddd89c133ef8735f7bddd"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 09 11:33:39 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 09 11:33:39 2023 -0700"
      },
      "message": "Bump patch version after integrate 0.11.2 -\u003e 0.11.3 (#1471)\n\n"
    },
    {
      "commit": "17fda0c9f45798a0bd4ddd89c133ef8735f7bddd",
      "tree": "a485d19d9ac63e58eb11423d23a7a81bb6da8014",
      "parents": [
        "eaefd3143fb5d18636f5d7d3cdc5b5d3c9797326"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Mon May 08 18:24:53 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 08 18:24:53 2023 -0700"
      },
      "message": "Add interpreter for ReduceOp (#1280)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) inputs: variadic number of tensors\r\n(I2) init_values: variadic number of 0-dimensional tensors\r\n(I3) dimensions: 1-dimensional tensor constant of type `si64`\r\n(I4) body: function\r\n(C1) All `inputs` have the same shape.\r\n(C2) element_type(`inputs[k]`) \u003d element_type(`init_values[k]`) \u003d\r\nelement_type(`results[k]`) for all `k` $\\in$ [0, N).\r\n(C3) size(`inputs`) \u003d size(`init_values`) $\u003d$ size(`results`) $\u003d$ N where\r\nN \u003e\u003d 1.\r\n(C4) 0 \u003c\u003d `dimensions[d]` \u003c rank(`inputs[0][d]`) for all dimension `d`.\r\n(C5) All dimensions in `dimensions` are unique.\r\n(C6) `body` has type `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ...,`\r\n`tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)` where\r\n`Ek \u003d element_type(inputs[k])`.\r\n(C7) shape(`results[k]`) \u003d shape(`inputs[k]`) except that the dimension\r\nsizes of `inputs[k]` corresponding to `dimensions` are not included.\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) inputs is not a variadic tensor. (Covered by ODS).\r\nI2: a) init_values is not a variadic 0-dimensional tensors.\r\nI3: a) dimensions is not a 1-dimensional tensor.\r\n    b) element_type(dimensions) !\u003d si64. (Covered by ODS).\r\nI4: a) body is not a function. (Covered by ODS).\r\nC1: a) Not all `inputs` have the same shape.\r\nC2: a) element_type(`inputs[k]`) !\u003d element_type(`init_values[k]`) for any `k` $\\in$ [0, N).\r\n    b) element_type(`inputs[k]`) !\u003d element_type(`results[k]`) for any `k` $\\in$ [0, N).\r\nC3: a) size(`inputs`) !\u003d size(`init_values`). (Covered by ODS).\r\n    b) size(`inputs`) !\u003d size(`results`). (Covered by ODS).\r\n    c) size(`inputs`) \u003c 1. (Covered by ODS).\r\nC4: a) 0 \u003e `dimensions[d]` for any dimension `d`.\r\n    b) `dimensions[d]` \u003e\u003d rank(`inputs[0][d]`) for any dimension `d`.\r\nC5: a) Dimensions in `dimensions` are not unique.\r\nC6: a) `body` does not have type `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ...,`\r\n`tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)` where\r\n`Ek \u003d element_type(inputs[k])`.\r\nC7: shape(`results[k]`) !\u003d shape(`inputs[k]`) except that the dimension\r\nsizes of `inputs[k]` corresponding to `dimensions` are not included.\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nI2a: init_values is not a variadic 0-dimensional tensors.\r\nI3a: dimensions is not a 1-dimensional tensor.\r\nC1a: Not all `inputs` have the same shape.\r\nC2a: element_type(`inputs[k]`) !\u003d element_type(`results[k]`) for any `k` $\\in$ [0, N).\r\nC2b: element_type(`init_values[k]`) !\u003d element_type(`results[k]`) for any `k` $\\in$ [0, N).\r\nC4a: 0 \u003e `dimensions[d]` for any dimension `d`.\r\nC4b: `dimensions[d]` \u003e\u003d rank(`inputs[0][d]`) for any dimension `d`.\r\nC5a: Dimensions in `dimensions` are not unique.\r\nC6a: `body` does not have type `(tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cE0\u003e, ...,`\r\n`tensor\u003cEN-1\u003e) -\u003e (tensor\u003cE0\u003e, ..., tensor\u003cEN-1\u003e)` where\r\n`Ek \u003d element_type(inputs[k])`.\r\nC7a: shape(`results[k]`) !\u003d shape(`inputs[k]`) except that the dimension\r\nsizes of `inputs[k]` corresponding to `dimensions` are not included.\r\n```\r\n\r\nNotes:\r\n* Verification for I2 is not added because of #704.\r\n\r\ncloses #982"
    },
    {
      "commit": "eaefd3143fb5d18636f5d7d3cdc5b5d3c9797326",
      "tree": "d40e00280967cbf598c29d896ed6f36e2e2bc92f",
      "parents": [
        "09c82d2dbefeae322010548ee2542c97364f3d32"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Mon May 08 17:35:33 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 08 17:35:33 2023 -0700"
      },
      "message": "Add missing checks/tests for supported f8 types (#1470)\n\nMigrating changes made in\r\nhttps://github.com/tensorflow/mlir-hlo/commit/ed8f354a68855753128e60e40c88b54af1fef6f5.\r\ncloses #1454\r\n\r\nReintroducing PR from #1459 as it was closed prematurely."
    },
    {
      "commit": "09c82d2dbefeae322010548ee2542c97364f3d32",
      "tree": "6e430a34e9b31e76175a366f358ad6938d39b4dc",
      "parents": [
        "72dc462bd459bc937b9c213f6afd0182c36022df"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 08 15:23:04 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 08 15:23:04 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@14f0776550b5 (#1468)\n\n"
    },
    {
      "commit": "72dc462bd459bc937b9c213f6afd0182c36022df",
      "tree": "53e84a0fc67f9a3fbe3a57dafbaa07df47fe9079",
      "parents": [
        "579f865e350cbd3513df71ecec4f4a12f0acb5fb"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Fri May 05 18:10:47 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 05 18:10:47 2023 -0700"
      },
      "message": "Bump patch version 0.11.1 -\u003e 0.11.2 (#1465)\n\n"
    },
    {
      "commit": "579f865e350cbd3513df71ecec4f4a12f0acb5fb",
      "tree": "a4c13f92227d5e2506005c04f88aea6730a1d43b",
      "parents": [
        "9e2b072b2e79d656fb6b7b782f18b7b53e5bbdf7"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Thu May 04 11:31:26 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu May 04 09:31:26 2023 -0700"
      },
      "message": "Add note on version bumping to compatibility.md (#1462)\n\nCloses #1319"
    },
    {
      "commit": "9e2b072b2e79d656fb6b7b782f18b7b53e5bbdf7",
      "tree": "2954535937b0b2bab25a805ae77f60530d3e8366",
      "parents": [
        "7fcd2015c5c27931612b9c60ed3e1bee802e30d9"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Thu May 04 10:46:12 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu May 04 10:46:12 2023 -0500"
      },
      "message": "Add Python Serialization APIs that operate on strings (#1461)\n\nMore details on rationale in API comments of\r\n`stablehlo/dialect/Serialization.h`.\r\n\r\nAfter learning more about Python bindings, unless build is set up in a\r\nspecific way where all dialect extensions built together so type IDs are\r\naccurate, passing/returning strings is safer. These APIs give the option\r\nto do either.\r\n\r\nBackport of\r\nhttps://github.com/tensorflow/mlir-hlo/commit/6d62e3157aec86d9a6c023595c1c7f89ecf928da"
    },
    {
      "commit": "7fcd2015c5c27931612b9c60ed3e1bee802e30d9",
      "tree": "a248e552a2c394bc1a2e655406cab0ecffc80506",
      "parents": [
        "1a97b32fdf6c23ccb33b2a0b87a8764c71874868"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Wed May 03 15:10:14 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 03 15:10:14 2023 -0700"
      },
      "message": "Enable passing interpreter tests (#1418)\n\nTests can slip through if a test contains more than one op from the PR\r\nqueue and one PR is merged while the other PR does not check for\r\nenabling additional tests after rebase."
    },
    {
      "commit": "1a97b32fdf6c23ccb33b2a0b87a8764c71874868",
      "tree": "a1c2975460f1e942f8b7a77b10a7658d8aa0a575",
      "parents": [
        "4603fd199e8eb61a3ba28a787f46e640a9989b6a"
      ],
      "author": {
        "name": "Karthik Rangasai",
        "email": "39360170+karthikrangasai@users.noreply.github.com",
        "time": "Thu May 04 03:19:00 2023 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 03 14:49:00 2023 -0700"
      },
      "message": "Add interpreter for ComplexOp. (#1414)\n\nCloses #1101 .\r\n\r\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) lhs is a tensor of 32 bit floating-point or 64 bit floating-point.\r\n(I2) rhs is a tensor of 32 bit floating-point or 64 bit floating-point.\r\n(C1) lhs and rhs have the same type.\r\n(C2) result and lhs have the same shape.\r\n(C3) the return type should be a complex type of the element type of the lhs i.e. element_type(`result`) \u003d complex_type(element_type(`lhs`)).\r\n```\r\n\r\nThese constraints are covered by the following tests\r\n\r\n```\r\nI1: lhs is not a tensor of 32 bit floating-point or 64 bit floating-point. (ODS)\r\nI2: rhs is not a tensor of 32 bit floating-point or 64 bit floating-point. (ODS)\r\nC1: type(lhs) !\u003d type(rhs).\r\nC2: shape(result) !\u003d shape(lhs).\r\nC3: element_type(`result`) !\u003d complex_type(element_type(`lhs`)). (ODS)\r\n```"
    },
    {
      "commit": "4603fd199e8eb61a3ba28a787f46e640a9989b6a",
      "tree": "c8c6d3ee07722ea46c794d16da5256e69039122e",
      "parents": [
        "bc5bcae44e267ccd6b3532afacd24704fcffdd8a"
      ],
      "author": {
        "name": "Karthik Rangasai",
        "email": "39360170+karthikrangasai@users.noreply.github.com",
        "time": "Thu May 04 03:18:36 2023 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 03 14:48:36 2023 -0700"
      },
      "message": "Add interpreter for Expm1Op. (#1411)\n\ncloses #1102 .\r\n\r\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) operand is a tensor of floating-point or complex type.\r\n(C1) operand and result have the same type.\r\n```\r\n\r\nThese constraints are covered by the following tests\r\n\r\n```\r\nI1: a) operand is not a tensor of floating-point or complex type. (Covered by ODS).\r\nC1: a) type(operand) !\u003d type(result). (Covered by ODS).\r\n```"
    },
    {
      "commit": "bc5bcae44e267ccd6b3532afacd24704fcffdd8a",
      "tree": "961d39f3d39f6825a43d086cae3209b5f25c77c9",
      "parents": [
        "5fa939785c0f87d7e112e6ac2798dc418e0eff90"
      ],
      "author": {
        "name": "Karthik Rangasai",
        "email": "39360170+karthikrangasai@users.noreply.github.com",
        "time": "Thu May 04 03:10:16 2023 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 03 14:40:16 2023 -0700"
      },
      "message": "Add interpreter for Log1pOp. (#1402)\n\ncloses #1105 \r\n\r\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) operand is a tensor of floating-point or complex type.\r\n(C1) operand and result have the same type.\r\n```\r\n\r\nThese constraints are covered by the following tests\r\n\r\n```\r\nI1: a) operand is not a tensor of floating-point or complex type. (Covered by ODS).\r\nC1: a) type(operand) !\u003d type(result). (Covered by ODS).\r\n```"
    },
    {
      "commit": "5fa939785c0f87d7e112e6ac2798dc418e0eff90",
      "tree": "3da704b222f8d0e612e1b3f784ae0efca9bded15",
      "parents": [
        "d415650b81d1865aec0a72d0f0b9a3f27da21d9f"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Wed May 03 10:57:52 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 03 10:57:52 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@8f966cedea59 (#1458)\n\n"
    },
    {
      "commit": "d415650b81d1865aec0a72d0f0b9a3f27da21d9f",
      "tree": "e73584b285f06aab7e56284c27e6cb09ff042f6e",
      "parents": [
        "c8a634d34826730df4481aeb169136e3a8433f7e"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Wed May 03 10:37:12 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 03 10:37:12 2023 -0700"
      },
      "message": "Bump patch version 0.11.0 -\u003e 0.11.1 (#1457)\n\n"
    },
    {
      "commit": "c8a634d34826730df4481aeb169136e3a8433f7e",
      "tree": "533b21527225fe94f69ca3cdf88a7ef1cb26dde0",
      "parents": [
        "b90e52ef967e4a39a1844d91c8edd59349e463a3"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Wed May 03 09:14:01 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 03 09:14:01 2023 -0700"
      },
      "message": "Account for operand_layouts in CustomCallOp canonicalization (#1455)\n\nCustomCallOp canonicalization can delete operands called out in\r\nindices_of_shape_operands if certain conditions are met.\r\n\r\nWhat I missed when implementing this is that this requires fixing up the\r\noperand_layouts attribute.\r\n\r\nThis has been originally implemented in\r\nhttps://github.com/tensorflow/mlir-hlo/commit/b35b237e77984ff8dd75f3a5a9f29b174d0e40c9\r\nearlier today, and this PR backports that work."
    },
    {
      "commit": "b90e52ef967e4a39a1844d91c8edd59349e463a3",
      "tree": "cd0548207d63fdef05737002925b2ca4df9262b3",
      "parents": [
        "dd4deed0beec28cba3b91984b70ca50430cde8b3"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 02 15:54:34 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 02 15:54:34 2023 -0700"
      },
      "message": "Drop support for index element types in Eval patterns (#1453)\n\nThis PR is based on #1452.\r\n\r\nStableHLO ops don\u0027t actually support index element types, except in\r\nrare situations (see uses of HLO_DimensionTensor in the TableGen file)\r\nwhich don\u0027t apply to Eval patterns."
    },
    {
      "commit": "dd4deed0beec28cba3b91984b70ca50430cde8b3",
      "tree": "43961d40d8e969cb09a1ee8818646850590de0be",
      "parents": [
        "0f8c6e95626de6a215d2776787f58b146df0a1cf"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 02 15:35:51 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 02 15:35:51 2023 -0700"
      },
      "message": "Refactor EvalSliceOpPattern to use APSInt-based matchInts (#1452)\n\nErasing the bitness and the signedness of the underlying values to\r\nint64_t is reasonable when we\u0027re going to use these values in the\r\nint64_t context (e.g. take a shape of DynamicReshapeOp and then put it\r\ninto ShapedTypeComponents).\r\n\r\nHowever, in the case of data movement ops like BroadcastInDimOp,\r\nConcatenateOp and now SliceOp, this doesn\u0027t make much sense.\r\n\r\nThis even caused a crash when matchInts was called when trying to\r\nevaluate slices of unsigned tensors. When we were trying to call\r\nDenseIntElementsAttr::get with an unsigned tensor type and values of\r\ntype SmallVector\u003cint64_t\u003e, that led to a crash.\r\n\r\nNow that we have switched to SmallVector\u003cAPSInt\u003e, this crash is fixed.\r\nI\u0027ve also audited all occurrences of DenseIntElementsAttr::get to make\r\nsure that we no longer have any type mismatches that could lead to\r\nfurther crashes elsewhere."
    },
    {
      "commit": "0f8c6e95626de6a215d2776787f58b146df0a1cf",
      "tree": "9d008ff393654f1424d29b23188b07e23a17e7ba",
      "parents": [
        "3b60c2fd8c65d59ff0cd83a220d56e71192ff6b8"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 02 14:49:11 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 02 14:49:11 2023 -0700"
      },
      "message": "Refactor matchInts to use APSInt instead of APInt (#1451)\n\nAs I was cleaning up the uses of matchInts, I figured I\u0027d do the\r\nlong-standing refactoring to switch to using APSInt.\r\n\r\nThis had the benefit of simplifying partial evaluation logic in\r\n--stablehlo-refine-shapes because with APSInts we no longer need to\r\nbranch on whether the underlying APInts are signed or unsigned."
    },
    {
      "commit": "3b60c2fd8c65d59ff0cd83a220d56e71192ff6b8",
      "tree": "36547b9832dbeab6a5e558cff06a1b6a3d1feb93",
      "parents": [
        "5572baa1071a98ec1389d4c17fb851bfa7b14508"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Tue May 02 12:10:00 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 02 12:10:00 2023 -0700"
      },
      "message": "Add interpreter for CbrtOp (#1417)\n\nHere are the constraints for CbrtOp:\r\n```\r\n(I1) operand is a tensor of floating-point or complex type.\r\n(C1) `operand` and `result` have the same type.\r\n```\r\nI1 and C1 are covered by the ODS, so no additional tests are added.\r\n\r\nNotes:\r\n* The implementation is inspired from De Moivre\u0027s formula to calculate\r\nnth root of a complex number. k is assumed 0 for principal root. See:\r\nhttps://en.wikipedia.org/wiki/De_Moivre%27s_formula\r\n\r\ncloses #1099"
    },
    {
      "commit": "5572baa1071a98ec1389d4c17fb851bfa7b14508",
      "tree": "f7aa45cff216edb7567833ed2b5063c3853eddff",
      "parents": [
        "da30def7b19529899c354f5527a0cbe61a5e45b6"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Tue May 02 11:32:40 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 02 11:32:40 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@a91cb9ce39dc (#1450)\n\n"
    },
    {
      "commit": "da30def7b19529899c354f5527a0cbe61a5e45b6",
      "tree": "346baf75e7392efdc20f33914c1ac3d3c6db44d9",
      "parents": [
        "1d4198f2af9fad418afc4eaa20e126e65fef691a"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Tue May 02 08:52:12 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 02 08:52:12 2023 -0700"
      },
      "message": "Assorted refactorings of the interpreter (#1440)\n\nHere are some of the things that I noticed over the last few weeks but\r\ndidn\u0027t have the time to follow up on:\r\n\r\n1) Let\u0027s unify the boilerplate in the eval() function to: a)\r\nconsistently use auto, b) drop the \"runtime\" part of variable names to\r\nmake things easier to read, c) consistently declare temporary variables.\r\nAs an alternative to c), we could skip creating temporaries except for\r\nresult.\r\n\r\n2) Let\u0027s drop isSupportedFooType checks in Element.cpp when there\u0027s just\r\none supported category of types. These checks are redundant because\r\ngetFooValue will check that anyway.\r\n\r\n3) Same for the isSupportedComplexType + comparisonDirection check in\r\nevalCompareOp. It is redundant as well.\r\n\r\n4) There are few more minor cleanups in this pull request which I don\u0027t\r\nthink need special description / justification."
    },
    {
      "commit": "1d4198f2af9fad418afc4eaa20e126e65fef691a",
      "tree": "e16f57285b7a957e2055446bf8b54d4387a5b020",
      "parents": [
        "cfc35b2d4cd427f3c54464f1c944fd593fb12377"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 01 21:51:40 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 01 21:51:40 2023 -0700"
      },
      "message": "Improve logging in refineReturnTypes (#1388)\n\nWhile working on refineReturnTypes, I made some improvements to\r\nhow the application of ShapedTypeComponents to Type happens.\r\nThis is mostly an NFC that restructures the code to improve readability\r\nand logging."
    },
    {
      "commit": "cfc35b2d4cd427f3c54464f1c944fd593fb12377",
      "tree": "8713fdb2ed7fefb1a191f12d25a581d2e0a96af2",
      "parents": [
        "802bf1d1170575aa349d9821b924f29393b72721"
      ],
      "author": {
        "name": "Eugene Burmako",
        "email": "burmako@google.com",
        "time": "Mon May 01 20:45:50 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 01 20:45:50 2023 -0700"
      },
      "message": "Extend indices_of_shape_operands to work for tuple types (#1387)\n\nAt the moment, indices_of_shape_operands are in 1:1 correspondence with\r\nresult types, but that doesn\u0027t work when custom calls return tuples.\r\n\r\nThis pull requests addresses this problem, and changes\r\nindices_of_shape_operands to be in 1:1 correspondence with flattened\r\nresult types. Originally, I thought that I\u0027d need to redesign\r\nindices_of_shape_operands, e.g. carry arrays of 1-dimensional tensors or\r\narrays of strings, but this solution keeps the existing structure of\r\nthe attribute, i.e. a 1-dimensional tensor of i64.\r\n\r\nThe changes are surprisingly compact and cover the following three\r\nareas of the implementation:\r\n  1) Verification logic for the attribute that lives in\r\n     getShapeRefinements in Base.h.\r\n  2) Helper logic which applies refinements to operation types\r\n     that lives in refineReturnTypes in StablehloRefineShapes.cpp.\r\n  3) inferMostSpecificType logic which is used to merged unrefined\r\n     types and the corresponding refinements.\r\n\r\nThese changes also led to an unexpected improvement. Now that\r\n--stablehlo-refine-shapes started operating on tuple types,\r\ntensor::CastOp stopped working (it only applies to tensors), so I was\r\nforced to look for a better solution and remembered about\r\nUnrealizedConversionCastOp. This was an easy migration, and as a nice\r\nbonus the pass no longer depends on the Tensor dialect."
    },
    {
      "commit": "802bf1d1170575aa349d9821b924f29393b72721",
      "tree": "58e9def91ca808b4a2d4ac8e240a8881e8e9ddca",
      "parents": [
        "43e9dda16b47bd0f33b977bb1c3a951f7708ee99"
      ],
      "author": {
        "name": "David Majnemer",
        "email": "david.majnemer@gmail.com",
        "time": "Mon May 01 23:20:34 2023 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 01 20:20:34 2023 -0700"
      },
      "message": "Add Float8E4M3B11FNUZ type support. (#1448)\n\nAs proposed in [RFC: E4M3B11FNUZ in\r\nXLA](https://github.com/openxla/stablehlo/blob/main/rfcs/20230309-e4m3b11.md)\r\n(#1308), this change adds support for these types to StableHLO.\r\n\r\nThis includes the type definitions, vhlo, and interpreter support. The\r\ntesting approach mirrors the Float8E4M3FNUZ tests, since it is also a\r\n\"non-standard\" floating point type supported by StableHLO."
    },
    {
      "commit": "43e9dda16b47bd0f33b977bb1c3a951f7708ee99",
      "tree": "588dbb0d6861d5928d7a5fea60686b20100c8e43",
      "parents": [
        "77e060469b02b75f8501d4d6e3475be2053aa7a0"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Mon May 01 17:22:58 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 01 17:22:58 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@dc275fd03254 (#1449)\n\n"
    },
    {
      "commit": "77e060469b02b75f8501d4d6e3475be2053aa7a0",
      "tree": "a943a9fc3ad9908fadb6765810a08c9d9954a662",
      "parents": [
        "43d81c6883ade82052920bd367c61f9e52f09954"
      ],
      "author": {
        "name": "Sandeep Dasgupta",
        "email": "sdasgup@google.com",
        "time": "Mon May 01 15:48:29 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon May 01 15:48:29 2023 -0700"
      },
      "message": "Bump patch version 0.10.1 -\u003e 0.10.2 (#1447)\n\nAfter a StableHLO release is integrated into OpenXLA\r\n([openxla/xla](https://github.com/openxla/xla/tree/main/third_party/stablehlo)),\r\nwe bump the patch version so HEAD remains ahead of the latest release."
    },
    {
      "commit": "43d81c6883ade82052920bd367c61f9e52f09954",
      "tree": "553aa1dfadd6772af9d5c05492254ea7a098917d",
      "parents": [
        "e5f8e944d2b50060502cf51e32bf8b7332d4a46e"
      ],
      "author": {
        "name": "Sandeep Dasgupta",
        "email": "sdasgup@google.com",
        "time": "Thu Apr 27 13:52:03 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 27 13:52:03 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@ba38640b9901 (#1441)\n\n"
    },
    {
      "commit": "e5f8e944d2b50060502cf51e32bf8b7332d4a46e",
      "tree": "948ac721cabacdb6e84d9b72e9ae93d30c0a734c",
      "parents": [
        "1e5ef51f25f45a0a00ffb8881650b109cd5aeace"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Thu Apr 27 13:21:08 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 27 15:21:08 2023 -0500"
      },
      "message": "Bump patch to 0.10.1 after integrate (#1442)\n\nAfter a StableHLO release is integrated into OpenXLA\r\n([openxla/xla](https://github.com/openxla/xla/tree/main/third_party/stablehlo)),\r\nbump the patch version so HEAD remains ahead of the latest release."
    },
    {
      "commit": "1e5ef51f25f45a0a00ffb8881650b109cd5aeace",
      "tree": "7925cfe85cbc1ff917e55738e7a17e515efec497",
      "parents": [
        "76e76ce7ea7e865f8a2e124565d5a556d372490b"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Fri Apr 21 16:08:00 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 21 16:08:00 2023 -0700"
      },
      "message": "Add interpreter for SortOp (#1283)\n\nWe have the following constraints in the spec:\r\n\r\n```\r\n(I1) inputs: variadic number of tensors.\r\n(I2) dimension: constant of type `si64`.\r\n(I3) is_stable: constant of type `i1`.\r\n(I4) comparator: function.\r\n(C1) `inputs` have at least 1 tensor.\r\n(C2) For all `i`, `type(inputs[i])` \u003d `type(results[i])`.\r\n(C3) All tensors in `inputs` and `results` have the same shape.\r\n(C4) `-R` $\\le$ `dimension` $\\lt$ `R`, where `R` is rank of `inputs[0]`.\r\n(C5) `comparator` has type\r\n`(tensor\u003cE1\u003e, tensor\u003cE1\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cEN-1\u003e) -\u003e tensor\u003ci1\u003e`,\r\nwhere `Ei` is element type of `inputs[i]`.\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n\r\n```\r\nI1: a) inputs is not variadic tensor. (Covered by ODS).\r\nI2: a) element_type(dimension) !\u003d `si64`. (Covered by ODS).\r\nI3: a) is_stable is not a constant of type `i1`. (Covered by ODS).\r\nI4: a) comparator is not a function. (Covered by ODS).\r\nC1: a) size(inputs) \u003c 1. (Covered by ODS).\r\nC2: a) For any `i`, `type(inputs[i])` !\u003d `type(results[i])`.\r\nC3: a) Any tensors in `inputs` and `results` do not have the same shape. 
(Covered by ODS).\r\nC4: a) `dimension` \u003c `-R` where `R` is rank of `inputs[0]`.\r\n    b) `dimension` \u003e\u003d `R`, where `R` is rank of `inputs[0]`.\r\nC5: a) `comparator` does not have type\r\n`(tensor\u003cE1\u003e, tensor\u003cE1\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cEN-1\u003e) -\u003e tensor\u003ci1\u003e`,\r\nwhere `Ei` is element type of `inputs[i]`.\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n\r\n```\r\nC2a: For any `i`, `type(inputs[i])` !\u003d `type(results[i])`.\r\nC4a: `dimension` \u003c `-R` where `R` is rank of `inputs[0]`.\r\nC4b: `dimension` \u003e\u003d `R`, where `R` is rank of `inputs[0]`.\r\nC5a: `comparator` does not have type\r\n`(tensor\u003cE1\u003e, tensor\u003cE1\u003e, ..., tensor\u003cEN-1\u003e, tensor\u003cEN-1\u003e) -\u003e tensor\u003ci1\u003e`,\r\nwhere `Ei` is element type of `inputs[i]`.\r\n```\r\n\r\nNotes:\r\n* Simplified spec wording from \"a variadic number of tensors in `foo`\"\r\nto just \"`foo`\" for brevity.\r\n* Clarified the semantics to describe the op\u0027s behavior in more detail.\r\n\r\ncloses #991\r\n\r\n---------\r\n\r\nCo-authored-by: Eugene Burmako \u003cburmako@google.com\u003e"
    },
    {
      "commit": "76e76ce7ea7e865f8a2e124565d5a556d372490b",
      "tree": "777a3f6d50fa5442a8cc93dca90d4e3203c1639d",
      "parents": [
        "21bcf32ec64cc2dbdb75a2b98664ac23db4a3500"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Thu Apr 20 16:21:00 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 20 16:21:00 2023 -0700"
      },
      "message": "Add interpreter for ShiftRightArithmeticOp (#1431)\n\nHere are the constraints for the ShiftRightArithmeticOp:\r\n```\r\n(I1) lhs is a tensor of integer type.\r\n(I2) rhs is a tensor of integer type.\r\n(C1) `lhs`, `rhs`, and `result` have the same type.\r\n```\r\nI1, I2, and C1 are covered by the ODS, so no additional tests are added.\r\n\r\nNotes:\r\n* Corner cases (shift overflow) has not been accounted for: #1150\r\n\r\ncloses #1113"
    },
    {
      "commit": "21bcf32ec64cc2dbdb75a2b98664ac23db4a3500",
      "tree": "3ee6f5722f08e538434bc0f4d4f8194ba2e37e91",
      "parents": [
        "a67229858619361d55dcc6e54d2d1079d39c4f1a"
      ],
      "author": {
        "name": "Sandeep Dasgupta",
        "email": "sdasgup@google.com",
        "time": "Thu Apr 20 15:33:35 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 20 15:33:35 2023 -0700"
      },
      "message": "enable a few interpreter tests (#1437)\n\nA few interpreter tests which might have missed enabling."
    },
    {
      "commit": "a67229858619361d55dcc6e54d2d1079d39c4f1a",
      "tree": "87c6726ce6f2524a45d7c63a13b3178814829178",
      "parents": [
        "86b2fa6a3817966821be143f307b3457ea10bb61"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Thu Apr 20 15:28:46 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 20 15:28:46 2023 -0700"
      },
      "message": "Add interpreter for ShiftRightLogicalOp (#1429)\n\nHere are the constraints for the ShiftRightLogicalOp:\r\n```\r\n(I1) lhs is a tensor of integer type.\r\n(I2) rhs is a tensor of integer type.\r\n(C1) `lhs`, `rhs`, and `result` have the same type.\r\n```\r\nI1, I2, and C1 are covered by the ODS, so no additional tests are added.\r\n\r\nNotes:\r\n* Corner cases (shift overflow) has not been accounted for: #1150\r\n\r\ncloses #1114"
    },
    {
      "commit": "86b2fa6a3817966821be143f307b3457ea10bb61",
      "tree": "1bc76c0e734f60bcb4c06a36a22aed09d94eafdb",
      "parents": [
        "45a85ebd8afcc67429d7158c25af2381e80f74f9"
      ],
      "author": {
        "name": "Gunhyun Park",
        "email": "gunhyun@google.com",
        "time": "Thu Apr 20 15:15:47 2023 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 20 15:15:47 2023 -0700"
      },
      "message": "Add interpreter for GetDimensionSizeOp (#1436)\n\nHere are the constraints for the GetDimensionSizeOp:\r\n```\r\n(I1) operand is a tensor.\r\n(I2) dimension is a constant of type `si64`.\r\n(C1) 0 \u003c\u003d dimension \u003c rank(operand).\r\n```\r\n\r\nThese constraints will be comprehensively covered by the following\r\ntests:\r\n```\r\nI1: a) operand is not a tensor. (Covered by ODS).\r\nI2: a) dimension is not a constant of type `si64`. (Covered by ODS).\r\nC1: a) 0 \u003c dimension.\r\n    b) dimension \u003c rank(operand).\r\n```\r\n\r\nIf we drop the \"Covered by ODS\" pieces, this will leave us with the\r\nfollowing test cases:\r\n```\r\nC1a: 0 \u003c dimension.\r\nC1b: dimension \u003c rank(operand).\r\n```\r\n\r\ncloses #1432"
    },
    {
      "commit": "45a85ebd8afcc67429d7158c25af2381e80f74f9",
      "tree": "4a5de109fc0a593c2a6bfb352727cd9e1cdea36d",
      "parents": [
        "69a873702df5489911efef87f4674943730da630"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Thu Apr 20 13:24:46 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 20 13:24:46 2023 -0500"
      },
      "message": "Add missing dependency causing issues in google infra (#1435)\n\nNeeded since `StablehloRefineShapes.cpp` includes `Base.h`:\r\n\r\n\r\nhttps://github.com/openxla/stablehlo/blob/main/stablehlo/transforms/StablehloRefineShapes.cpp#L45"
    },
    {
      "commit": "69a873702df5489911efef87f4674943730da630",
      "tree": "00577fba40e87c68c1b21ae6d422979ce49e5648",
      "parents": [
        "095fa9ef6df2058f68819d1b45afa211b4d1aebd"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Thu Apr 20 13:14:49 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 20 13:14:49 2023 -0500"
      },
      "message": "Update `vhlo.md` with contribution guidelines. (#1433)\n\nAdd guidelines for implementing compatibility for new StableHLO features\r\nin VHLO. Will be useful to link to this doc in future reviews.\r\n\r\nThis does not impact anything about the compatibility process in\r\n`compatibility.md` or the Compatibility RFC, this is intended to aid\r\ndevelopers."
    },
    {
      "commit": "095fa9ef6df2058f68819d1b45afa211b4d1aebd",
      "tree": "d85b221e075a0b35ee205974007d77eed4b52a46",
      "parents": [
        "69b13c7fcdcfc61bd75d1315629289a7157e1f6c"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Thu Apr 20 10:54:53 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 20 08:54:53 2023 -0700"
      },
      "message": "Integrate LLVM at llvm/llvm-project@98f5a340975b (StableHLO 0.10.0) (#1434)\n\nOnce merged and CI passes, this commit can be tagged for release 0.10.0."
    },
    {
      "commit": "69b13c7fcdcfc61bd75d1315629289a7157e1f6c",
      "tree": "60391aadd2d0615277340a7bf5e26906810efd4b",
      "parents": [
        "a3164810bfcf72b5694034753bc3512e74b6b215"
      ],
      "author": {
        "name": "Kevin Gleason",
        "email": "gleasonk@google.com",
        "time": "Wed Apr 19 21:37:46 2023 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 19 21:37:46 2023 -0500"
      },
      "message": "Setup for StableHLO 0.10.0, serialization test and doc updates (#1430)\n\nThis release sets everything up for StableHLO 0.10.0 which includes two\r\nnew FP8 types.\r\n\r\nEdit: I will tag the LLVM Integrate PR which will follow this one.\r\n\r\nCloses #1409"
    }
  ],
  "next": "a3164810bfcf72b5694034753bc3512e74b6b215"
}
