[roll] Roll fuchsia [kernel][runtime stats] Add an extra fence during Update.

Add an extra fence after the read of the clock register during
ThreadRuntimeStats::Update to make absolutely sure that the clock
observation cannot take place after the final update to the SeqLock
sequence number.

Prior to this, the update flow was something like:

```
  seq_lock.Acquire();
  const zx_ticks_t now =
      platform_current_ticks_synchronized<GetTicksSyncFlag::kAfterPreviousStores>();
  payload.last_update_time = now;
  // other payload updates
  seq_lock.Release();
```

It is super important that the read of the clock stays within the
update transaction.  The "after previous stores" fence ensures that
the clock read cannot move up before the initial store to the seq
lock's sequence number.

The assumption was that the store to the payload's last_update_time
would create a dependency.  The clock read has to finish before we can
actually write the value to the payload memory, and the sequence lock's
memory order directives should ensure that any reader who sees the
final sequence number also sees the updated last_update_time, because
we had to know that value in order to write it to RAM, right?

Perhaps not.  There has been at least one failure in CI/CQ which
implies that this logic may not be sound.  One way of thinking about
it: the memory order directives only guarantee that we see the last
update time which is paired with the final sequence number update;
they still do not prevent the clock read from floating out of the
update transaction.  IOW - the effective sequence might have actually
been:

1) Writer puts a clock read into the pipeline.
2) Writer puts a store of the clock read into the pipeline.
3) Writer puts a store of the final sequence number into the pipeline.
4) The writer's pipeline commits the final sequence number to RAM.
5) The reader starts a transaction and sees the final sequence number
   which made it to RAM in step #4.
6) Reader observes the clock and gets the value `T`.  This read has to
   happen after the initial seq no read, and finish before the
   subsequent payload read happens (because of the explicit fences).
7) Reader attempts to read the payload, but the store from #2 has not
   made it to RAM yet, and the memory order guarantees demand that we
   see the final value whose store was scheduled (in sequence order)
   before the store of the final sequence number.
8) So, the reader's pipeline stalls.
9) The clock read finally finishes and observes the value `T+1`.
10) Now the store in step #2 can complete, and does.
11) The reader continues and sees the value `T+1` in the payload.
12) The reader loads the sequence number and sees the same value that
    it saw at the start of the transaction, so it declares this to be
    a successful transaction.

Now, we have a reader with a successful transaction, but who sees that
the "last state change" happened after their transaction's timestamp,
which results in non-monotonic behavior of the current state
accounting.

Is this actually what happened?  I honestly don't know.  I have not
been able to repro this during stress testing, and this is the only
theory I have been able to come up with which could explain the
behavior.  So, instead of taking chances, I've just added a
kBeforeSubsequentStores fence to the clock read during update.  This
should guarantee that the clock read finishes before the store to any
member of the payload.
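With the extra fence, the update flow becomes something like the
following (same style as the snippet above; the exact spelling of the
combined sync flags is an assumption here):

```
  seq_lock.Acquire();
  const zx_ticks_t now = platform_current_ticks_synchronized<
      GetTicksSyncFlag::kAfterPreviousStores |
      GetTicksSyncFlag::kBeforeSubsequentStores>();
  payload.last_update_time = now;
  // other payload updates
  seq_lock.Release();
```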

In addition, I've reversed the order of payload vs. clock on the read
side of the transaction.  Instead of reading the clock and then
reading the payload, we now read the payload and then read the clock.
The `kAfterPreviousLoads` fence should guarantee that all prior loads
have become globally visible, which now includes the load of the last
update time.  Hopefully, this should (also) mean that the clock read
on the update side had to have finished and become visible before the
clock read on the reader side of things happens.
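Sketched out, the reordered read side looks roughly like this (the
sequence number accessors are illustrative, not the exact reader API):

```
  do {
    before = seq_lock.SeqNum();
    // Payload reads happen first...
    observed_last_update_time = payload.last_update_time;
    // ...then the clock, fenced so that it cannot start until the
    // payload loads above have completed.
    const zx_ticks_t now = platform_current_ticks_synchronized<
        GetTicksSyncFlag::kAfterPreviousLoads>();
    after = seq_lock.SeqNum();
  } while (before != after);
```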

Original-Bug: 42085921,359336350
Original-Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/1044145
Original-Revision: 1052bca52e7f6bfe1583ef84352a1ee4946ed34a
GitOrigin-RevId: 86ed5e6ec33f92dd9b928fdce22e37d1609a2d87
Change-Id: I5183b9229ecaaaca0623a81a3070b3a87cc06235