We'd love to accept your patches and contributions to this project. There are just a few small guidelines you need to follow.
Contributions to this project must be accompanied by a Contributor License Agreement. You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project. Head over to https://cla.developers.google.com/ to see your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one (even if it was for a different project), you probably don't need to do it again.
All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.
All changes to the StableHLO opset, including new ops, types, or attributes, must be reviewed via an RFC. We aim for StableHLO opset changes to take ~2 weeks if feedback is actively addressed. This allows adequate time for the community to review all proposals.
An RFC should outline the proposed spec changes, the rationale, and, where relevant, the alternatives considered. It can be added as a markdown file in the rfcs/ directory and shared as a PR. For example, see the collective_broadcast RFC.
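The workflow above can be sketched as a shell session. This is a hypothetical example, not a prescribed process: the branch name, RFC file name, and section headings are placeholders, and in practice you would work in a clone of your own fork rather than a fresh repository.

```shell
# Hypothetical sketch: draft an RFC as a markdown file under rfcs/ and
# commit it on its own branch, ready to share as a PR.
# (All names below are placeholders; work in a clone of your own fork.)
set -e
repo=$(mktemp -d)            # stand-in for a clone of your fork
cd "$repo"
git init -q
git checkout -q -b rfc-my-feature

# RFC proposals live as markdown files in the rfcs/ directory.
mkdir -p rfcs
cat > rfcs/my_feature.md <<'EOF'
# RFC: My Feature

## Proposed spec changes

## Rationale

## Alternatives considered
EOF

git add rfcs/my_feature.md
git -c user.name=You -c user.email=you@example.com \
    commit -qm "Add RFC for my_feature"
# Then open a pull request from this branch, e.g. via the GitHub web UI
# or `gh pr create` if you use the GitHub CLI.
```

Keeping the RFC as a file in the repository means the proposal and its review comments stay discoverable alongside the code it affects.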
To signal boost your RFC, post on OpenXLA Discuss; this ensures the proper reviewers see it. While there is no formal process for posts, we recommend keeping RFC discussion on the PR itself so that feedback stays centralized in the repository.
For example, see the collective_broadcast post.
As noted in governance.md, while we work towards instating StableHLO module maintainers, the interim review process requires approval from Google project maintainers. A member of the StableHLO team will help drive final approval.
Once an RFC is approved, PRs which implement the approved proposal may be sent, reviewed, and merged.
A few things to consider when adding new features:

- A checklist for changes to spec.md, as well as related op implementation, can be found in spec_checklist.md.
- See type_inference.md for type inference guidance.
- See vhlo.md for versioning and compatibility guidance.
- See reference_checklist.md for reference interpreter guidance.

Some examples to help guide changes:

- collective_broadcast
- f8E4M3FNUZ and f8E5M2FNUZ
- ReduceOp