<link rel="stylesheet" href="../style.css" />
[TOC]
# fuchsia.media
<div class="fidl-version-div"><span class="fidl-attribute fidl-version">Added: 7</span></div>
## **PROTOCOLS**
## ActivityReporter {#ActivityReporter}
*Defined in [fuchsia.media/activity_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/activity_reporter.fidl;l=10)*
<p>A protocol for monitoring the usage activity of the AudioRenderers and AudioCapturers.</p>
### WatchCaptureActivity {#ActivityReporter.WatchCaptureActivity}
<p>Notifies the client whenever there is a change in the set of active AudioCaptureUsages.
It returns immediately the first time that it is called.</p>
#### Request {#ActivityReporter.WatchCaptureActivity_Request}
&lt;EMPTY&gt;
#### Response {#ActivityReporter.WatchCaptureActivity_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>active_usages</code></td>
<td>
<code>vector&lt;<a class='link' href='#AudioCaptureUsage'>AudioCaptureUsage</a>&gt;[4]</code>
</td>
</tr>
</table>
### WatchRenderActivity {#ActivityReporter.WatchRenderActivity}
<p>Notifies the client whenever there is a change in the set of active AudioRenderUsages.
It returns immediately the first time that it is called.</p>
#### Request {#ActivityReporter.WatchRenderActivity_Request}
&lt;EMPTY&gt;
#### Response {#ActivityReporter.WatchRenderActivity_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>active_usages</code></td>
<td>
<code>vector&lt;<a class='link' href='#AudioRenderUsage'>AudioRenderUsage</a>&gt;[5]</code>
</td>
</tr>
</table>
## Audio {#Audio}
*Defined in [fuchsia.media/audio.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio.fidl;l=7)*
### CreateAudioCapturer {#Audio.CreateAudioCapturer}
<p>Creates an AudioCapturer that either captures from the current default
audio input device or loops back from the current default audio output
device, depending on the value passed for the loopback flag.</p>
#### Request {#Audio.CreateAudioCapturer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>audio_capturer_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#AudioCapturer'>AudioCapturer</a>&gt;</code>
</td>
</tr>
<tr>
<td><code>loopback</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
### CreateAudioRenderer {#Audio.CreateAudioRenderer}
#### Request {#Audio.CreateAudioRenderer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>audio_renderer_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#AudioRenderer'>AudioRenderer</a>&gt;</code>
</td>
</tr>
</table>
### SetSystemGain {#Audio.SetSystemGain}
<p><b>DEPRECATED </b></p>
#### Request {#Audio.SetSystemGain_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### SetSystemMute {#Audio.SetSystemMute}
<p><b>DEPRECATED </b></p>
#### Request {#Audio.SetSystemMute_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>muted</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
### SystemGainMuteChanged {#Audio.SystemGainMuteChanged}
<p><b>DEPRECATED </b></p>
#### Response {#Audio.SystemGainMuteChanged_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
<tr>
<td><code>muted</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
## AudioCapturer {#AudioCapturer}
*Defined in [fuchsia.media/audio_capturer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_capturer.fidl;l=271)*
<p>AudioCapturer</p>
<p>An AudioCapturer is an interface returned from a fuchsia.media.Audio's
CreateAudioCapturer method, which clients may use to capture audio
from either the current default audio input device or the current default
audio output device, depending on the flags passed during creation.</p>
<p><strong>Format support</strong></p>
<p>See (Get|Set)StreamType below. By default, the captured stream type will be
initially determined by the currently configured stream type of the source
that the AudioCapturer was bound to at creation time. Users may either fetch
this type using GetStreamType, or they may choose to have the media
resampled or converted to a type of their choosing by calling SetStreamType.
Note: the stream type may only be set while the system is not running,
meaning that there are no pending capture regions (specified using CaptureAt)
and that the system is not currently running in 'async' capture mode.</p>
<p><strong>Buffers and memory management</strong></p>
<p>Audio data is captured into a shared memory buffer (a VMO) supplied by the
user to the AudioCapturer during the AddPayloadBuffer call. Please note the
following requirements related to the management of the payload buffer.</p>
<ul>
<li>The payload buffer must be supplied before any capture operation may
start. Any attempt to start capture (via either CaptureAt or
StartAsyncCapture) before a payload buffer has been established is an
error.</li>
<li>The payload buffer may not be changed while there are any capture
operations pending.</li>
<li>The stream type may not be changed after the payload buffer has been set.</li>
<li>The payload buffer must be an integral number of audio frame sizes (in
bytes)</li>
<li>When running in 'async' mode (see below), the payload buffer must be at
least as large as twice the frames_per_packet size specified during
StartAsyncCapture.</li>
<li>The handle to the payload buffer supplied by the user must be readable,
writable, mappable and transferable.</li>
<li>Users should always treat the payload buffer as read-only.</li>
</ul>
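The buffer-size rules above can be illustrated with a small sketch. This is purely illustrative Python, not part of the FIDL API; the function and parameter names are hypothetical:

```python
def validate_payload_buffer(buffer_bytes, bytes_per_frame, frames_per_packet=None):
    """Check the documented payload-buffer constraints (illustrative only)."""
    # The buffer must be an integral number of audio frame sizes, in bytes.
    if buffer_bytes % bytes_per_frame != 0:
        raise ValueError("payload buffer is not a whole number of frames")
    # In 'async' mode the buffer must hold at least twice frames_per_packet.
    if frames_per_packet is not None:
        buffer_frames = buffer_bytes // bytes_per_frame
        if buffer_frames < 2 * frames_per_packet:
            raise ValueError("buffer too small for two async capture packets")

# Example: 48 kHz stereo 16-bit audio has 4 bytes per frame, so a 192000-byte
# buffer holds 48000 frames -- comfortably more than 2 * 480.
validate_payload_buffer(buffer_bytes=192_000, bytes_per_frame=4,
                        frames_per_packet=480)
```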
<p><strong>Synchronous vs. Asynchronous capture mode</strong></p>
<p>The AudioCapturer interface can be used in one of two mutually exclusive
modes: Synchronous and Asynchronous. A description of each mode and their
tradeoffs is given below.</p>
<p><strong>Synchronous mode</strong></p>
<p>By default, AudioCapturer instances are running in 'sync' mode. They will
only capture data when a user supplies at least one region to capture into
using the CaptureAt method. Regions supplied in this way will be filled in
the order that they are received and returned to the client as StreamPackets
via the return value of the CaptureAt method. If an AudioCapturer instance
has data to capture, but no place to put it (because there are no more
pending regions to fill), the next payload generated will indicate that there
has been an overflow by setting the Discontinuity flag on the next produced
StreamPacket. Synchronous mode may not be used in conjunction with
Asynchronous mode. It is an error to attempt to call StartAsyncCapture while
the system still has regions supplied by CaptureAt waiting to be filled.</p>
<p>If a user has supplied regions to be filled by the AudioCapturer instance in
the past, but wishes to reclaim those regions, they may do so using the
DiscardAllPackets method. Calling the DiscardAllPackets method will cause
all pending regions to be returned, but with <code>NO_TIMESTAMP</code> as their
StreamPacket's PTS. See &quot;Timing and Overflows&quot;, below, for a discussion of
timestamps and discontinuity flags. After a DiscardAllPackets operation,
an OnEndOfStream event will be produced. While an AudioCapturer will never
overwrite any region of the payload buffer after a completed region is
returned, it may overwrite the unfilled portions of a partially filled
buffer which has been returned as a result of a DiscardAllPackets operation.</p>
<p><strong>Asynchronous mode</strong></p>
<p>While running in 'async' mode, clients do not need to explicitly supply
shared buffer regions to be filled by the AudioCapturer instance. Instead, a
client enters into 'async' mode by calling StartAsyncCapture and supplying a
callback interface and the number of frames to capture per-callback. Once
running in async mode, the AudioCapturer instance will identify which
payload buffer regions to capture into, capture the specified number of
frames, then deliver those frames as StreamPackets using the OnPacketCapture
FIDL event. Users may stop capturing and return the AudioCapturer instance to
'sync' mode using the StopAsyncCapture method.</p>
<p>It is considered an error to attempt any of the following operations.</p>
<ul>
<li>To attempt to enter 'async' capture mode when no payload buffer has been
established.</li>
<li>To specify a number of frames to capture per payload which does not permit
at least two contiguous capture payloads to exist in the established
shared payload buffer simultaneously.</li>
<li>To send a region to capture into using the CaptureAt method while the
AudioCapturer instance is running in 'async' mode.</li>
<li>To attempt to call DiscardAllPackets while the AudioCapturer instance is
running in 'async' mode.</li>
<li>To attempt to re-start 'async' mode capturing without having first
stopped.</li>
<li>To attempt any operation except for SetGain while in the process of
stopping.</li>
</ul>
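The mode rules above amount to a small state machine. A hypothetical sketch of the error conditions (illustrative only, not the real bindings):

```python
class CapturerModel:
    """Illustrative model of the documented sync/async mode rules."""

    def __init__(self):
        self.has_payload_buffer = False
        self.mode = "sync"        # 'sync', 'async', or 'stopping'
        self.pending_regions = 0  # regions supplied via CaptureAt

    def start_async_capture(self):
        if not self.has_payload_buffer:
            raise RuntimeError("no payload buffer established")
        if self.pending_regions:
            raise RuntimeError("CaptureAt regions still waiting to be filled")
        if self.mode != "sync":
            raise RuntimeError("already in async mode or stopping")
        self.mode = "async"

    def capture_at(self):
        if self.mode != "sync":
            raise RuntimeError("CaptureAt is an error outside sync mode")
        if not self.has_payload_buffer:
            raise RuntimeError("no payload buffer established")
        self.pending_regions += 1
```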
<p><strong>Synchronizing with a StopAsyncCapture operation</strong></p>
<p>Stopping asynchronous capture mode and returning to synchronous capture mode
is an operation which takes time. Aside from SetGain, users may not call any
other methods on the AudioCapturer interface after calling StopAsyncCapture
(including calling StopAsyncCapture again) until after the stop operation has
completed. Because of this, it is important for users to be able to
synchronize with the stop operation. Two mechanisms are provided for doing
so.</p>
<p>The first is to use StopAsyncCapture (not the NoReply variant). When the user's
callback has been called, they can be certain that the stop operation is complete
and that the AudioCapturer instance has returned to synchronous operation
mode.</p>
<p>The second way to determine that a stop operation has completed is to use the
flags on the packets which get delivered via the user-supplied
AudioCapturerCallback interface after calling StopAsyncCapture. When
asked to stop, any partially filled packet will be returned to the user, and
the final packet returned will always have the end-of-stream flag (kFlagsEos)
set on it to indicate that this is the final frame in the sequence. If
there is no partially filled packet to return, the AudioCapturer will
synthesize an empty packet with no timestamp, and offset/length set to zero,
in order to deliver a packet with the end-of-stream flag set on it. Once
users have seen the end-of-stream flag after calling stop, the AudioCapturer
has finished the stop operation and returned to synchronous operating mode.</p>
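Using the second mechanism, a client can simply drain packets until one carries the end-of-stream flag. A hypothetical sketch (the flag value and packet shape are illustrative, not the real constants):

```python
FLAG_EOS = 0x2  # hypothetical end-of-stream flag value, for illustration only

def drain_until_eos(packets):
    """Consume captured packets until one carries the end-of-stream flag."""
    drained = []
    for pkt in packets:
        drained.append(pkt)
        if pkt["flags"] & FLAG_EOS:
            # Stop has completed; the capturer is back in synchronous mode.
            break
    return drained

# The final packet may be empty (zero length, no timestamp) if nothing was
# partially filled when the stop was requested.
packets = [{"flags": 0, "length": 960},
           {"flags": FLAG_EOS, "length": 0}]
```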
<p><strong>Timing and Overflows</strong></p>
<p>All media packets produced by an AudioCapturer instance will have their PTS
field filled out with the capture time of the audio expressed as a timestamp
given by the reference clock timeline. Note: this timestamp is actually a
capture timestamp, not a presentation timestamp (it is more of a CTS than a
PTS) and is meant to represent the underlying system's best estimate of the
capture time of the first frame of audio, including all outboard and hardware
introduced buffering delay. As a result, all timestamps produced by an
AudioCapturer should be expected to be in the past relative to 'now' on the
stream's reference clock timeline.</p>
<p>The one exception to the &quot;everything has an explicit timestamp&quot; rule is when
discarding submitted regions while operating in synchronous mode. Discarded
packets have no data in them, but FIDL demands that all pending
method-return-value callbacks be executed. Because of this, the regions will
be returned to the user, but their timestamps will be set to
<code>NO_TIMESTAMP</code>, and their payload sizes will be set to zero. Any
partially filled payload will have a valid timestamp, but a payload size
smaller than originally requested. The final discarded payload (if there
were any to discard) will be followed by an OnEndOfStream event.</p>
<p>Two StreamPackets delivered by an AudioCapturer instance are 'continuous' if
the first frame of audio contained in the second packet was captured exactly
one nominal frame time after the final frame of audio in the first packet.
If this relationship does not hold, the second StreamPacket will have the
'kFlagDiscontinuous' flag set in its flags field.</p>
<p>Even though explicit timestamps are provided on every StreamPacket produced,
users who have very precise timing requirements are encouraged to always
reason about time by counting frames delivered since the last discontinuity,
rather than simply using the raw capture timestamps. This is because the
explicit timestamps written on continuous packets may have a small amount of
rounding error based on whether or not the units of the capture timeline
reference clock are divisible by the chosen audio frame rate.</p>
<p>Users should always expect the first StreamPacket produced by an
AudioCapturer to have the discontinuous flag set on it (as there is no
previous packet to be continuous with). Similarly, the first StreamPacket
after a DiscardAllPackets or a Stop/Start cycle will always be
discontinuous. After that, there are only two reasons that a StreamPacket
will ever be discontinuous:</p>
<ol>
<li>The user is operating in synchronous mode and does not supply regions to
be filled quickly enough. If the next continuous frame of data has not
been captured by the time it needs to be purged from the source buffers,
an overflow has occurred and the AudioCapturer will flag the next captured
region as discontinuous.</li>
<li>The user is operating in asynchronous mode and some internal error
prevents the AudioCapturer instance from capturing the next frame of audio
in a continuous fashion. This might be high system load or a hardware
error, but in general it is something which should never normally happen.
In practice, however, if it does, the next produced packet will be flagged
as being discontinuous.</li>
</ol>
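The continuity rule can be stated numerically: two packets are continuous when the second packet's PTS equals the first packet's PTS plus the duration of its frames. A sketch with hypothetical names (and recall from above that precise clients should count frames since the last discontinuity rather than trust raw timestamps):

```python
def is_continuous(prev_pts_ns, prev_frames, next_pts_ns, frame_rate_hz,
                  tolerance_ns=1):
    """True if the next packet starts one nominal frame time after the
    previous packet's last frame (within a small rounding tolerance)."""
    expected = prev_pts_ns + round(prev_frames * 1_000_000_000 / frame_rate_hz)
    # Tolerance covers the rounding error mentioned above, which occurs when
    # the reference clock's units are not divisible by the frame rate.
    return abs(next_pts_ns - expected) <= tolerance_ns

# 480 frames at 48 kHz last exactly 10 ms, so a packet at t=0 is continuous
# with a following packet at t=10_000_000 ns.
```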
<p><strong>Synchronous vs. Asynchronous Trade-offs</strong></p>
<p>The choice of operating in synchronous vs. asynchronous mode is up to the
user, and depending on the user's requirements, there are some advantages and
disadvantages to each choice.</p>
<p>Synchronous mode requires only a single Zircon channel under the hood and can
achieve some small savings because of this. In addition, the user has
complete control over the buffer management. Users specify exactly where
audio will be captured to and in what order. Because of this, if users do
not need to always be capturing, it is simple to stop and restart the capture
later (just by ceasing to supply packets, then resuming later on). Payloads
do not need to be uniform in size either, clients may specify payloads of
whatever granularity is appropriate.</p>
<p>The primary downside of operating in synchronous mode is that two messages
need to be sent for every packet to be captured: one to inform the
AudioCapturer of the region to capture into, and one to inform the user
that the packet has been captured. This may end up increasing overhead and
potentially complicating client designs.</p>
<p>Asynchronous mode has the advantage of requiring only half the messages.
However, when operating in 'async' mode, AudioCapturer instances have no way
of knowing if a user is processing the StreamPackets being sent in a timely
fashion, and no way of automatically detecting an overflow condition. Users
of 'async' mode should be careful to use a buffer large enough to ensure that
they will be able to process their data before an AudioCapturer will be
forced to overwrite it.</p>
### AddPayloadBuffer {#AudioCapturer.AddPayloadBuffer}
<p>Adds a payload buffer to the current buffer set associated with the
connection. A <code>StreamPacket</code> struct references a payload buffer in the
current set by ID using the <code>StreamPacket.payload_buffer_id</code> field.</p>
<p>A buffer with ID <code>id</code> must not be in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#AudioCapturer.AddPayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
<tr>
<td><code>payload_buffer</code></td>
<td>
<code>handle&lt;vmo&gt;</code>
</td>
</tr>
</table>
### BindGainControl {#AudioCapturer.BindGainControl}
<p>Binds to the gain control for this AudioCapturer.</p>
#### Request {#AudioCapturer.BindGainControl_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>gain_control_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='../fuchsia.media.audio/'>fuchsia.media.audio</a>/<a class='link' href='../fuchsia.media.audio/#GainControl'>GainControl</a>&gt;</code>
</td>
</tr>
</table>
### CaptureAt {#AudioCapturer.CaptureAt}
<p>Explicitly specifies a region of the shared payload buffer for the audio
input to capture into.</p>
#### Request {#AudioCapturer.CaptureAt_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>payload_buffer_id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
<tr>
<td><code>payload_offset</code></td>
<td>
<code>uint32</code>
</td>
</tr>
<tr>
<td><code>frames</code></td>
<td>
<code>uint32</code>
</td>
</tr>
</table>
#### Response {#AudioCapturer.CaptureAt_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>captured_packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
### DiscardAllPackets {#AudioCapturer.DiscardAllPackets}
#### Request {#AudioCapturer.DiscardAllPackets_Request}
&lt;EMPTY&gt;
#### Response {#AudioCapturer.DiscardAllPackets_Response}
&lt;EMPTY&gt;
### DiscardAllPacketsNoReply {#AudioCapturer.DiscardAllPacketsNoReply}
#### Request {#AudioCapturer.DiscardAllPacketsNoReply_Request}
&lt;EMPTY&gt;
### GetReferenceClock {#AudioCapturer.GetReferenceClock}
<p>Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE
and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.</p>
#### Request {#AudioCapturer.GetReferenceClock_Request}
&lt;EMPTY&gt;
#### Response {#AudioCapturer.GetReferenceClock_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_clock</code></td>
<td>
<code>handle&lt;clock&gt;</code>
</td>
</tr>
</table>
### GetStreamType {#AudioCapturer.GetStreamType}
<p>Gets the currently configured stream type. Note: for an AudioCapturer
which was just created and has not yet had its stream type explicitly
set, this will retrieve the stream type -- at the time the AudioCapturer
was created -- of the source (input or looped-back output) to which the
AudioCapturer is bound. Even if this matches the client's desired format,
<code>SetPcmStreamType</code> must still be called.</p>
#### Request {#AudioCapturer.GetStreamType_Request}
&lt;EMPTY&gt;
#### Response {#AudioCapturer.GetStreamType_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_type</code></td>
<td>
<code><a class='link' href='#StreamType'>StreamType</a></code>
</td>
</tr>
</table>
### OnEndOfStream {#AudioCapturer.OnEndOfStream}
<p>Indicates that the stream has ended.</p>
#### Response {#AudioCapturer.OnEndOfStream_Response}
&lt;EMPTY&gt;
### OnPacketProduced {#AudioCapturer.OnPacketProduced}
<p>Delivers a packet produced by the service. When the client is done with
the payload memory, the client must call <code>ReleasePacket</code> to release the
payload memory.</p>
#### Response {#AudioCapturer.OnPacketProduced_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
### ReleasePacket {#AudioCapturer.ReleasePacket}
<p>Releases payload memory associated with a packet previously delivered
via <code>OnPacketProduced</code>.</p>
#### Request {#AudioCapturer.ReleasePacket_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
### RemovePayloadBuffer {#AudioCapturer.RemovePayloadBuffer}
<p>Removes a payload buffer from the current buffer set associated with the
connection.</p>
<p>A buffer with ID <code>id</code> must exist in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#AudioCapturer.RemovePayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
</table>
### SetPcmStreamType {#AudioCapturer.SetPcmStreamType}
<p>Sets the stream type of the stream to be delivered. Causes the source
material to be reformatted/resampled if needed in order to produce the
requested stream type. Must be called before the payload buffer is
established.</p>
#### Request {#AudioCapturer.SetPcmStreamType_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_type</code></td>
<td>
<code><a class='link' href='#AudioStreamType'>AudioStreamType</a></code>
</td>
</tr>
</table>
### SetReferenceClock {#AudioCapturer.SetReferenceClock}
<p>Sets the reference clock that controls this capturer's capture rate. If the input
parameter is a valid zx::clock, it must have READ, DUPLICATE, TRANSFER rights and
refer to a clock that is both MONOTONIC and CONTINUOUS. If instead an invalid clock
is passed (such as the uninitialized <code>zx::clock()</code>), this indicates that the stream
will use a 'flexible' clock generated by AudioCore that tracks the audio device.</p>
<p><code>SetReferenceClock</code> cannot be called after the capturer payload buffer has been
added. It also cannot be called a second time (even before capture).
If the client wants a reference clock that is initially <code>CLOCK_MONOTONIC</code> but may
diverge at some later time, they should create a clone of the monotonic clock, set
this as the stream's reference clock, then rate-adjust it subsequently as needed.</p>
#### Request {#AudioCapturer.SetReferenceClock_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_clock</code></td>
<td>
<code>handle&lt;clock&gt;?</code>
</td>
</tr>
</table>
### SetUsage {#AudioCapturer.SetUsage}
<p>Sets the usage of the capture stream. This may be changed on the fly, but
packets in flight may be affected by the new usage. By default the
Capturer is created with the FOREGROUND usage.</p>
#### Request {#AudioCapturer.SetUsage_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#AudioCaptureUsage'>AudioCaptureUsage</a></code>
</td>
</tr>
</table>
### StartAsyncCapture {#AudioCapturer.StartAsyncCapture}
<p>Places the AudioCapturer into 'async' capture mode and begins to produce
packets of exactly <code>frames_per_packet</code> frames each. The
OnPacketProduced event (of StreamSink) will be used to inform the client
of produced packets.</p>
#### Request {#AudioCapturer.StartAsyncCapture_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>frames_per_packet</code></td>
<td>
<code>uint32</code>
</td>
</tr>
</table>
### StopAsyncCapture {#AudioCapturer.StopAsyncCapture}
<p>Stops capturing in 'async' capture mode and (optionally) delivers a callback
that may be used by the client if explicit synchronization is needed.</p>
#### Request {#AudioCapturer.StopAsyncCapture_Request}
&lt;EMPTY&gt;
#### Response {#AudioCapturer.StopAsyncCapture_Response}
&lt;EMPTY&gt;
### StopAsyncCaptureNoReply {#AudioCapturer.StopAsyncCaptureNoReply}
#### Request {#AudioCapturer.StopAsyncCaptureNoReply_Request}
&lt;EMPTY&gt;
## AudioConsumer {#AudioConsumer}
*Defined in [fuchsia.media/audio_consumer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_consumer.fidl;l=34)*
<p>Interface for playing and controlling audio.</p>
### BindVolumeControl {#AudioConsumer.BindVolumeControl}
<p>Binds to this <code>AudioConsumer</code> volume control for control and notifications.</p>
#### Request {#AudioConsumer.BindVolumeControl_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>volume_control_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='../fuchsia.media.audio/'>fuchsia.media.audio</a>/<a class='link' href='../fuchsia.media.audio/#VolumeControl'>VolumeControl</a>&gt;</code>
</td>
</tr>
</table>
### CreateStreamSink {#AudioConsumer.CreateStreamSink}
<p>Creates a <code>StreamSink</code> for the consumer with the indicated properties.</p>
<p>Multiple stream sinks may be acquired using this method, but they are intended to be used
sequentially rather than concurrently. The first stream sink that's created using this
method is used as the sole source of packets incoming to the logical consumer until that
stream sink is closed or the <code>EndOfStream</code> method is called on that sink. At that point,
the second stream sink is used, and so on.</p>
<p>If an unsupported compression type is supplied, the
<code>stream_sink_request</code> request will be closed with an epitaph value of
<code>ZX_ERR_INVALID_ARGS</code>.</p>
#### Request {#AudioConsumer.CreateStreamSink_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>buffers</code></td>
<td>
<code>vector&lt;vmo&gt;[16]</code>
</td>
</tr>
<tr>
<td><code>stream_type</code></td>
<td>
<code><a class='link' href='#AudioStreamType'>AudioStreamType</a></code>
</td>
</tr>
<tr>
<td><code>compression</code></td>
<td>
<code><a class='link' href='#Compression'>Compression</a>?</code>
</td>
</tr>
<tr>
<td><code>stream_sink_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#StreamSink'>StreamSink</a>&gt;</code>
</td>
</tr>
</table>
### OnEndOfStream {#AudioConsumer.OnEndOfStream}
<p>Indicates that the last packet prior to the end of the stream has been rendered.</p>
#### Response {#AudioConsumer.OnEndOfStream_Response}
&lt;EMPTY&gt;
### SetRate {#AudioConsumer.SetRate}
<p>Requests to change the playback rate of the renderer. 1.0 means normal
playback. Negative rates are not supported. The new rate will be
reflected in the updated status. The default rate of any newly created <code>StreamSink</code> is 1.0.</p>
#### Request {#AudioConsumer.SetRate_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>rate</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### Start {#AudioConsumer.Start}
<p>Starts rendering as indicated by <code>flags</code>.</p>
<p><code>media_time</code> indicates the packet timestamp that corresponds to <code>reference_time</code>.
Typically, this is the timestamp of the first packet that will be
rendered. If packets will be supplied with no timestamps, this value
should be <code>NO_TIMESTAMP</code>. Passing a <code>media_time</code> value of
<code>NO_TIMESTAMP</code> chooses the default media time, established as follows:
1. When starting for the first time, the default media time is the
timestamp on the first packet sent to the stream sink.
2. When resuming after stop, the default media time is the media
time at which the stream stopped.</p>
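The media-time defaulting rules above can be sketched as follows. The function and the sentinel value shown are illustrative (the real <code>NO_TIMESTAMP</code> constant is defined by the library):

```python
NO_TIMESTAMP = 0x7FFF_FFFF_FFFF_FFFF  # illustrative sentinel value

def resolve_media_time(media_time, first_packet_pts, stopped_media_time=None):
    """Apply the documented defaulting rules for AudioConsumer.Start."""
    if media_time != NO_TIMESTAMP:
        return media_time          # caller supplied an explicit media time
    if stopped_media_time is not None:
        return stopped_media_time  # resuming: where the stream stopped
    return first_packet_pts        # first start: first packet's timestamp
```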
<p><code>reference_time</code> is the monotonic system time at which rendering should
be started. For supply-driven sources, this must be the time at which
the first packet was (or will be) sent plus a lead time, which must be
in the range indicated in the <code>AudioConsumerStatus</code>. For demand-driven
sources, the client must ensure that the lead time requirement is met at
the start time. Passing the default value of 0 for <code>reference_time</code>
causes the consumer to choose a start time based on the availability of
packets, the lead time requirements, and whether <code>LOW_LATENCY</code> has been
specified.</p>
<p>The actual start time will be reflected in the updated status.</p>
#### Request {#AudioConsumer.Start_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>flags</code></td>
<td>
<code><a class='link' href='#AudioConsumerStartFlags'>AudioConsumerStartFlags</a></code>
</td>
</tr>
<tr>
<td><code>reference_time</code></td>
<td>
<code><a class='link' href='../zx/'>zx</a>/<a class='link' href='../zx/#time'>time</a></code>
</td>
</tr>
<tr>
<td><code>media_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
</table>
### Stop {#AudioConsumer.Stop}
<p>Stops rendering as soon as possible after this method is called. The actual stop time will
be reflected in the updated status.</p>
#### Request {#AudioConsumer.Stop_Request}
&lt;EMPTY&gt;
### WatchStatus {#AudioConsumer.WatchStatus}
<p>Gets the current status of the consumer using the long get pattern. The consumer responds
to this method when the status changes - initially with respect to the initial status value
and thereafter with respect to the previously-reported status value.</p>
#### Request {#AudioConsumer.WatchStatus_Request}
&lt;EMPTY&gt;
#### Response {#AudioConsumer.WatchStatus_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>status</code></td>
<td>
<code><a class='link' href='#AudioConsumerStatus'>AudioConsumerStatus</a></code>
</td>
</tr>
</table>
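The long get (hanging get) pattern works as described above: the server replies immediately to the first watch, then holds each subsequent watch until the status changes. A minimal server-side model (hypothetical, not the real bindings; returning <code>None</code> stands in for holding the reply):

```python
class StatusWatcher:
    """Server-side model of the hanging-get pattern used by WatchStatus."""

    def __init__(self, status):
        self.status = status
        self.last_reported = None  # nothing reported yet

    def watch(self):
        """Reply with the status if it changed since the last report;
        None models 'hold the reply until a change occurs'."""
        if self.status != self.last_reported:
            self.last_reported = self.status
            return self.status
        return None

w = StatusWatcher({"presentation_timeline": None})
first = w.watch()   # first watch replies immediately with the initial status
held = w.watch()    # unchanged: the reply is held (modeled as None)
```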
## AudioCore {#AudioCore}
*Defined in [fuchsia.media/audio_core.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=81)*
### BindUsageVolumeControl {#AudioCore.BindUsageVolumeControl}
<p>Binds to a volume control protocol for the given usage.</p>
#### Request {#AudioCore.BindUsageVolumeControl_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>volume_control</code></td>
<td>
<code>server_end&lt;<a class='link' href='../fuchsia.media.audio/'>fuchsia.media.audio</a>/<a class='link' href='../fuchsia.media.audio/#VolumeControl'>VolumeControl</a>&gt;</code>
</td>
</tr>
</table>
### CreateAudioCapturer {#AudioCore.CreateAudioCapturer}
<p>Creates an AudioCapturer which either captures from the current default
audio input device, or loops-back from the current default audio output
device based on value passed for the loopback flag.</p>
#### Request {#AudioCore.CreateAudioCapturer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>loopback</code></td>
<td>
<code>bool</code>
</td>
</tr>
<tr>
<td><code>audio_in_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#AudioCapturer'>AudioCapturer</a>&gt;</code>
</td>
</tr>
</table>
### CreateAudioCapturerWithConfiguration {#AudioCore.CreateAudioCapturerWithConfiguration}
<p>Creates an AudioCapturer according to the given requirements.</p>
<p><code>pcm_stream_type</code> sets the stream type of the stream to be delivered.
It causes the source material to be reformatted/resampled if needed
in order to produce the requested stream type.</p>
<p><code>usage</code> is used by Fuchsia to make decisions about user experience.
See <code>AudioCaptureUsage</code> for more details.</p>
<p><code>configuration</code> must be initialized to a variant, or no capturer
can be created.</p>
<p>TODO(fxbug.dev/45240): Implement</p>
#### Request {#AudioCore.CreateAudioCapturerWithConfiguration_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_type</code></td>
<td>
<code><a class='link' href='#AudioStreamType'>AudioStreamType</a></code>
</td>
</tr>
<tr>
<td><code>configuration</code></td>
<td>
<code><a class='link' href='#AudioCapturerConfiguration'>AudioCapturerConfiguration</a></code>
</td>
</tr>
<tr>
<td><code>audio_capturer_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#AudioCapturer'>AudioCapturer</a>&gt;</code>
</td>
</tr>
</table>
### CreateAudioRenderer {#AudioCore.CreateAudioRenderer}
<p>Creates an AudioRenderer which outputs audio to the default device.</p>
#### Request {#AudioCore.CreateAudioRenderer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>audio_out_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#AudioRenderer'>AudioRenderer</a>&gt;</code>
</td>
</tr>
</table>
### EnableDeviceSettings {#AudioCore.EnableDeviceSettings}
<p><b>DEPRECATED </b></p>
#### Request {#AudioCore.EnableDeviceSettings_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>enabled</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
### GetDbFromVolume {#AudioCore.GetDbFromVolume}
<p>Queries the decibel value that maps to a volume percentage [0, 1] for a particular <code>usage</code>.
This is the same mapping as used by the VolumeControl from <code>BindUsageVolumeControl</code>.</p>
#### Request {#AudioCore.GetDbFromVolume_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>volume</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
#### Response {#AudioCore.GetDbFromVolume_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### GetVolumeFromDb {#AudioCore.GetVolumeFromDb}
<p>Queries the volume percentage [0, 1] that maps to a <code>gain_db</code> value for a particular
<code>usage</code>. This is the same mapping as used by the VolumeControl from
<code>BindUsageVolumeControl</code>.</p>
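<p><code>GetDbFromVolume</code> and <code>GetVolumeFromDb</code> are inverses of each other over the platform-configured volume curve. The curve itself is product-specific and not defined by this protocol; the sketch below assumes a purely hypothetical linear-in-dB mapping with a -60 dB floor, only to illustrate the round-trip relationship:</p>

```python
# Hypothetical illustration only: the real volume curve is configured per
# product by audio policy. This assumes a linear interpolation in dB
# between an assumed -60 dB floor and 0 dB (unity).
MIN_GAIN_DB = -60.0  # assumed floor, for illustration

def db_from_volume(volume: float) -> float:
    """Map a volume percentage in [0, 1] to a gain in dB."""
    return MIN_GAIN_DB * (1.0 - volume)

def volume_from_db(gain_db: float) -> float:
    """Inverse mapping: gain in dB back to a volume percentage."""
    return 1.0 - gain_db / MIN_GAIN_DB

# Round trip: volume -> dB -> volume recovers the original value.
assert abs(volume_from_db(db_from_volume(0.25)) - 0.25) < 1e-9
```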
#### Request {#AudioCore.GetVolumeFromDb_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
#### Response {#AudioCore.GetVolumeFromDb_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>volume</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### LoadDefaults {#AudioCore.LoadDefaults}
<p>Re-loads the platform policy configuration. Falls back to a default config if the platform
does not provide a config.</p>
#### Request {#AudioCore.LoadDefaults_Request}
&lt;EMPTY&gt;
### ResetInteractions {#AudioCore.ResetInteractions}
<p>Re-initializes the set of rules that are currently governing the interaction of streams in
audio_core. The default behavior is 'NONE'.</p>
#### Request {#AudioCore.ResetInteractions_Request}
&lt;EMPTY&gt;
### SetCaptureUsageGain {#AudioCore.SetCaptureUsageGain}
<p>Sets the Usage gain applied to Capturers. By default, the gain for all
capture usages is set to unity (0 dB).</p>
#### Request {#AudioCore.SetCaptureUsageGain_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#AudioCaptureUsage'>AudioCaptureUsage</a></code>
</td>
</tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### SetInteraction {#AudioCore.SetInteraction}
<p>Sets how audio_core handles simultaneous interactions between multiple active streams. If
streams of Usage <code>active</code> and streams of Usage <code>affected</code> are both processing audio,
the specified Behavior is applied to the streams of Usage <code>affected</code>.</p>
#### Request {#AudioCore.SetInteraction_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>active</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>affected</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>behavior</code></td>
<td>
<code><a class='link' href='#Behavior'>Behavior</a></code>
</td>
</tr>
</table>
### SetRenderUsageGain {#AudioCore.SetRenderUsageGain}
<p>Sets the Usage gain applied to Renderers. By default, the gain for all
render usages is set to unity (0 dB).</p>
#### Request {#AudioCore.SetRenderUsageGain_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#AudioRenderUsage'>AudioRenderUsage</a></code>
</td>
</tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### SetSystemGain {#AudioCore.SetSystemGain}
<p>System Gain and Mute</p>
<p>Fuchsia clients control the volume of individual audio streams via the
fuchsia.media.audio.GainControl protocol. System Gain and Mute affect
all audio output, and are controlled with methods that use the same
concepts as GainControl, namely: independent gain and mute, with change
notifications. Setting System Mute to true leads to the same outcome as
setting System Gain to MUTED_GAIN_DB: all audio output across the system
is silenced.</p>
<p>Sets the systemwide gain in decibels. <code>gain_db</code> values are clamped to
the range -160 dB to 0 dB, inclusive. This setting is applied to all
audio output devices. Audio input devices are unaffected.
Does not affect System Mute.</p>
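<p>The clamping behavior described above can be modeled as follows (a sketch of the documented range check, not audio_core's implementation):</p>

```python
MUTED_GAIN_DB = -160.0  # documented systemwide gain floor
UNITY_GAIN_DB = 0.0

def clamp_system_gain(gain_db: float) -> float:
    """Clamp a requested system gain to the accepted [-160, 0] dB range,
    mirroring how SetSystemGain treats out-of-range values."""
    return max(MUTED_GAIN_DB, min(UNITY_GAIN_DB, gain_db))
```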
<p><b>DEPRECATED </b></p>
#### Request {#AudioCore.SetSystemGain_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### SetSystemMute {#AudioCore.SetSystemMute}
<p>Sets/clears the systemwide 'Mute' state for audio output devices.
Audio input devices are unaffected. Changes to the System Mute state do
not affect the value of System Gain.</p>
<p><b>DEPRECATED </b></p>
#### Request {#AudioCore.SetSystemMute_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>muted</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
### SystemGainMuteChanged {#AudioCore.SystemGainMuteChanged}
<p>Provides current values for systemwide Gain and Mute. When a client
connects to AudioCore, the system immediately sends that client a
SystemGainMuteChanged event with the current system Gain|Mute settings.
Subsequent events will be sent when these Gain|Mute values change.</p>
<p><b>DEPRECATED </b></p>
#### Response {#AudioCore.SystemGainMuteChanged_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
</tr>
<tr>
<td><code>muted</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
## AudioDeviceEnumerator {#AudioDeviceEnumerator}
*Defined in [fuchsia.media/audio_device_enumerator.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_device_enumerator.fidl;l=39)*
### AddDeviceByChannel {#AudioDeviceEnumerator.AddDeviceByChannel}
#### Request {#AudioDeviceEnumerator.AddDeviceByChannel_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_name</code></td>
<td>
<code>string[256]</code>
</td>
</tr>
<tr>
<td><code>is_input</code></td>
<td>
<code>bool</code>
</td>
</tr>
<tr>
<td><code>channel</code></td>
<td>
<code><a class='link' href='../fuchsia.hardware.audio/'>fuchsia.hardware.audio</a>/<a class='link' href='../fuchsia.hardware.audio/#StreamConfig'>StreamConfig</a></code>
</td>
</tr>
</table>
### GetDefaultInputDevice {#AudioDeviceEnumerator.GetDefaultInputDevice}
<p>Default Device</p>
<p>Fetch the device ID of the current default input or output device, or
<code>ZX_KOID_INVALID</code> if no such device exists.</p>
<p><b>DEPRECATED </b></p>
#### Request {#AudioDeviceEnumerator.GetDefaultInputDevice_Request}
&lt;EMPTY&gt;
#### Response {#AudioDeviceEnumerator.GetDefaultInputDevice_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
### GetDefaultOutputDevice {#AudioDeviceEnumerator.GetDefaultOutputDevice}
<p><b>DEPRECATED </b></p>
#### Request {#AudioDeviceEnumerator.GetDefaultOutputDevice_Request}
&lt;EMPTY&gt;
#### Response {#AudioDeviceEnumerator.GetDefaultOutputDevice_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
### GetDeviceGain {#AudioDeviceEnumerator.GetDeviceGain}
<p>Gain/Mute/AGC control</p>
<p>Note that each of these operations requires a device_token in order to
target the proper input/output.</p>
<p>The Get command returns the device_token of the device whose gain is
being reported, or <code>ZX_KOID_INVALID</code> in the case that the requested
device_token was invalid or the device had been removed from the system
before the Get command could be processed.</p>
<p>Set commands which are given an invalid device token are ignored and
have no effect on the system. In addition, users do not need to control
all of the gain settings for an audio device with each call. Only the
settings with a corresponding flag set in the set_flags parameter will
be affected. For example, passing SetAudioGainFlag_MuteValid will cause
a SetDeviceGain call to care only about the mute setting in the
gain_info structure, while passing (SetAudioGainFlag_GainValid |
SetAudioGainFlag_MuteValid) will cause both the mute and the gain
status to be changed simultaneously.</p>
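<p>The flag-gated partial update described above can be sketched like this; the flag bit values and struct layout are illustrative assumptions, not the actual FIDL definitions:</p>

```python
from dataclasses import dataclass, replace

GAIN_VALID = 1 << 0  # assumed bit layout, for illustration only
MUTE_VALID = 1 << 1

@dataclass(frozen=True)
class GainInfo:
    gain_db: float
    muted: bool

def apply_set_device_gain(current: GainInfo, requested: GainInfo,
                          valid_flags: int) -> GainInfo:
    """Only fields whose flag is set in valid_flags are copied from the
    request; the rest of the device's gain state is left untouched."""
    updated = current
    if valid_flags & GAIN_VALID:
        updated = replace(updated, gain_db=requested.gain_db)
    if valid_flags & MUTE_VALID:
        updated = replace(updated, muted=requested.muted)
    return updated
```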
#### Request {#AudioDeviceEnumerator.GetDeviceGain_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
#### Response {#AudioDeviceEnumerator.GetDeviceGain_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>gain_info</code></td>
<td>
<code><a class='link' href='#AudioGainInfo'>AudioGainInfo</a></code>
</td>
</tr>
</table>
### GetDevices {#AudioDeviceEnumerator.GetDevices}
<p>Obtain the list of currently active audio devices.</p>
#### Request {#AudioDeviceEnumerator.GetDevices_Request}
&lt;EMPTY&gt;
#### Response {#AudioDeviceEnumerator.GetDevices_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>devices</code></td>
<td>
<code>vector&lt;<a class='link' href='#AudioDeviceInfo'>AudioDeviceInfo</a>&gt;</code>
</td>
</tr>
</table>
### OnDefaultDeviceChanged {#AudioDeviceEnumerator.OnDefaultDeviceChanged}
<p><b>DEPRECATED </b></p>
#### Response {#AudioDeviceEnumerator.OnDefaultDeviceChanged_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>old_default_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>new_default_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
### OnDeviceAdded {#AudioDeviceEnumerator.OnDeviceAdded}
<p>Events sent when devices are added or removed, or when properties of a
device change.</p>
#### Response {#AudioDeviceEnumerator.OnDeviceAdded_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device</code></td>
<td>
<code><a class='link' href='#AudioDeviceInfo'>AudioDeviceInfo</a></code>
</td>
</tr>
</table>
### OnDeviceGainChanged {#AudioDeviceEnumerator.OnDeviceGainChanged}
#### Response {#AudioDeviceEnumerator.OnDeviceGainChanged_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>gain_info</code></td>
<td>
<code><a class='link' href='#AudioGainInfo'>AudioGainInfo</a></code>
</td>
</tr>
</table>
### OnDeviceRemoved {#AudioDeviceEnumerator.OnDeviceRemoved}
#### Response {#AudioDeviceEnumerator.OnDeviceRemoved_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
### SetDeviceGain {#AudioDeviceEnumerator.SetDeviceGain}
#### Request {#AudioDeviceEnumerator.SetDeviceGain_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_token</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>gain_info</code></td>
<td>
<code><a class='link' href='#AudioGainInfo'>AudioGainInfo</a></code>
</td>
</tr>
<tr>
<td><code>valid_flags</code></td>
<td>
<code><a class='link' href='#AudioGainValidFlags'>AudioGainValidFlags</a></code>
</td>
</tr>
</table>
## AudioRenderer {#AudioRenderer}
*Defined in [fuchsia.media/audio_renderer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_renderer.fidl;l=24)*
<p>AudioRenderers can be in one of two states at any time: <em>configurable</em> or <em>operational</em>. A
renderer is considered operational whenever it has packets queued to be rendered; otherwise it
is <em>configurable</em>. Once an AudioRenderer enters the operational state, calls to &quot;configuring&quot;
methods are disallowed and will cause the audio service to disconnect the client's connection.
The following are considered configuring methods: <code>AddPayloadBuffer</code>, <code>SetPcmStreamType</code>,
<code>SetStreamType</code>, <code>SetPtsUnits</code>, <code>SetPtsContinuityThreshold</code>.</p>
<p>If an AudioRenderer must be reconfigured, the client must ensure that no packets are still
enqueued when these &quot;configuring&quot; methods are called. Thus it is best practice to call
<code>DiscardAllPackets</code> on the AudioRenderer (and ideally <code>Stop</code> before <code>DiscardAllPackets</code>), prior
to reconfiguring the renderer.</p>
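<p>The configurable/operational rule can be modeled as a small state machine (an illustrative sketch, not the service's implementation):</p>

```python
class RendererStateModel:
    """Toy model of the rule above: 'configuring' calls are only legal
    while no packets are queued (i.e. while the renderer is configurable)."""
    CONFIGURING_METHODS = {"AddPayloadBuffer", "SetPcmStreamType",
                           "SetStreamType", "SetPtsUnits",
                           "SetPtsContinuityThreshold"}

    def __init__(self):
        self.queued_packets = 0

    @property
    def operational(self) -> bool:
        return self.queued_packets > 0

    def call(self, method: str) -> None:
        if method in self.CONFIGURING_METHODS and self.operational:
            # The real service closes the client's channel; modeled here
            # as an exception.
            raise RuntimeError("channel closed: configuring while operational")

    def send_packet(self) -> None:
        self.queued_packets += 1

    def discard_all_packets(self) -> None:
        self.queued_packets = 0
```

Following the best practice in the text, calling <code>discard_all_packets()</code> first returns the model to the configurable state, after which configuring calls succeed again.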
### AddPayloadBuffer {#AudioRenderer.AddPayloadBuffer}
<p>Adds a payload buffer to the current buffer set associated with the
connection. A <code>StreamPacket</code> struct references a payload buffer in the
current set by ID, using the <code>StreamPacket.payload_buffer_id</code> field.</p>
<p>A buffer with ID <code>id</code> must not be in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#AudioRenderer.AddPayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
<tr>
<td><code>payload_buffer</code></td>
<td>
<code>handle&lt;vmo&gt;</code>
</td>
</tr>
</table>
### BindGainControl {#AudioRenderer.BindGainControl}
<p>Binds to the gain control for this AudioRenderer.</p>
#### Request {#AudioRenderer.BindGainControl_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>gain_control_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='../fuchsia.media.audio/'>fuchsia.media.audio</a>/<a class='link' href='../fuchsia.media.audio/#GainControl'>GainControl</a>&gt;</code>
</td>
</tr>
</table>
### DiscardAllPackets {#AudioRenderer.DiscardAllPackets}
<p>Discards packets previously sent via <code>SendPacket</code> or <code>SendPacketNoReply</code>
and not yet released. The response is sent after all packets have been
released.</p>
#### Request {#AudioRenderer.DiscardAllPackets_Request}
&lt;EMPTY&gt;
#### Response {#AudioRenderer.DiscardAllPackets_Response}
&lt;EMPTY&gt;
### DiscardAllPacketsNoReply {#AudioRenderer.DiscardAllPacketsNoReply}
<p>Discards packets previously sent via <code>SendPacket</code> or <code>SendPacketNoReply</code>
and not yet released.</p>
#### Request {#AudioRenderer.DiscardAllPacketsNoReply_Request}
&lt;EMPTY&gt;
### EnableMinLeadTimeEvents {#AudioRenderer.EnableMinLeadTimeEvents}
<p>Enables or disables notifications about changes to the minimum clock lead
time (in nanoseconds) for this AudioRenderer. Calling this method with
'enabled' set to true will trigger an immediate <code>OnMinLeadTimeChanged</code>
event with the current minimum lead time for the AudioRenderer. If the
value changes, an <code>OnMinLeadTimeChanged</code> event will be raised with the
new value. This behavior will continue until the user calls
<code>EnableMinLeadTimeEvents(false)</code>.</p>
<p>The minimum clock lead time is the amount of time ahead of the reference
clock's understanding of &quot;now&quot; that packets need to arrive (relative to
the playback clock transformation) in order for the mixer to be able to
mix the packet. For example...</p>
<ul>
<li>Let the PTS of packet X be P(X)</li>
<li>Let the function which transforms PTS -&gt; RefClock be R(p) (this
function is determined by the call to Play(...))</li>
<li>Let the minimum lead time be MLT</li>
</ul>
<p>If R(P(X)) &lt; RefClock.Now() + MLT
Then the packet is late, and some (or all) of the packet's payload will
need to be skipped in order to present the packet at the scheduled time.</p>
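<p>The lateness rule above can be written out directly (a sketch; <code>ref_from_pts</code> stands for the R(p) transformation established by the Play call):</p>

```python
def packet_is_late(pts, ref_from_pts, now_ref_ns, min_lead_time_ns):
    """A packet must arrive at least min_lead_time ahead of its scheduled
    presentation on the reference timeline; otherwise some (or all) of its
    payload will be skipped."""
    return ref_from_pts(pts) < now_ref_ns + min_lead_time_ns

# Identity mapping: PTS already expressed on the reference timeline.
ident = lambda p: p
assert packet_is_late(1_000, ident, 500, 700)      # 1000 < 1200: late
assert not packet_is_late(1_500, ident, 500, 700)  # 1500 >= 1200: on time
```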
#### Request {#AudioRenderer.EnableMinLeadTimeEvents_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>enabled</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
### EndOfStream {#AudioRenderer.EndOfStream}
<p>Indicates the stream has ended. The precise semantics of this method are
determined by the inheriting interface.</p>
#### Request {#AudioRenderer.EndOfStream_Request}
&lt;EMPTY&gt;
### GetMinLeadTime {#AudioRenderer.GetMinLeadTime}
<p>While it is possible to call <code>GetMinLeadTime</code> before <code>SetPcmStreamType</code>,
there's little reason to do so. This is because lead time is a function
of format/rate, so lead time will be recalculated after <code>SetPcmStreamType</code>.
If min lead time events are enabled before <code>SetPcmStreamType</code> (with
<code>EnableMinLeadTimeEvents(true)</code>), then an event will be generated in
response to <code>SetPcmStreamType</code>.</p>
#### Request {#AudioRenderer.GetMinLeadTime_Request}
&lt;EMPTY&gt;
#### Response {#AudioRenderer.GetMinLeadTime_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>min_lead_time_nsec</code></td>
<td>
<code>int64</code>
</td>
</tr>
</table>
### GetReferenceClock {#AudioRenderer.GetReferenceClock}
<p>Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE
and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.</p>
#### Request {#AudioRenderer.GetReferenceClock_Request}
&lt;EMPTY&gt;
#### Response {#AudioRenderer.GetReferenceClock_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_clock</code></td>
<td>
<code>handle&lt;clock&gt;</code>
</td>
</tr>
</table>
### OnMinLeadTimeChanged {#AudioRenderer.OnMinLeadTimeChanged}
#### Response {#AudioRenderer.OnMinLeadTimeChanged_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>min_lead_time_nsec</code></td>
<td>
<code>int64</code>
</td>
</tr>
</table>
### Pause {#AudioRenderer.Pause}
<p>Immediately puts the AudioRenderer into the paused state, then reports
the relationship between the media and reference timelines which was
established (if requested).</p>
<p>If the AudioRenderer is already in the paused state when this is called,
the previously-established timeline values are returned (if requested).</p>
#### Request {#AudioRenderer.Pause_Request}
&lt;EMPTY&gt;
#### Response {#AudioRenderer.Pause_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
<tr>
<td><code>media_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
</table>
### PauseNoReply {#AudioRenderer.PauseNoReply}
#### Request {#AudioRenderer.PauseNoReply_Request}
&lt;EMPTY&gt;
### Play {#AudioRenderer.Play}
<p>Immediately puts the AudioRenderer into a playing state. Starts the advance
of the media timeline, using specific values provided by the caller (or
default values if not specified). In an optional callback, returns the
timestamp values ultimately used -- these set the ongoing relationship
between the media and reference timelines (i.e., how to translate between
the domain of presentation timestamps, and the realm of local system
time).</p>
<p>Local system time is specified in units of nanoseconds; media_time is
specified in the units defined by the user in the <code>SetPtsUnits</code> function,
or nanoseconds if <code>SetPtsUnits</code> is not called.</p>
<p>The act of placing an AudioRenderer into the playback state establishes a
relationship between 1) the user-defined media (or presentation) timeline
for this particular AudioRenderer, and 2) the real-world system reference
timeline. To communicate how to translate between timelines, the Play()
callback provides an equivalent timestamp in each time domain. The first
value ('reference_time') is given in terms of this renderer's reference
clock; the second value ('media_time') is what media instant exactly
corresponds to that local time. Restated, the frame at 'media_time' in
the audio stream should be presented at 'reference_time' according to
the reference clock.</p>
<p>Note: on calling this API, media_time immediately starts advancing. It is
possible (if uncommon) for a caller to specify a system time that is
far in the past, or far into the future. This, along with the specified
media time, is simply used to determine what media time corresponds to
'now', and THAT media time is then intersected with presentation
timestamps of packets already submitted, to determine which media frames
should be presented next.</p>
<p>With the corresponding reference_time and media_time values, a user can
translate arbitrary time values from one timeline into the other. After
calling <code>SetPtsUnits(pts_per_sec_numerator, pts_per_sec_denominator)</code> and
given the 'ref_start' and 'media_start' values from <code>Play</code>, then for
any 'ref_time':</p>
<p>media_time = ( (ref_time - ref_start) / 1e9
* (pts_per_sec_numerator / pts_per_sec_denominator) )
+ media_start</p>
<p>Conversely, for any presentation timestamp 'media_time':</p>
<p>ref_time = ( (media_time - media_start)
* (pts_per_sec_denominator / pts_per_sec_numerator)
* 1e9 )
+ ref_start</p>
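<p>The two conversion formulas above translate directly into code (a sketch using the names from the formulas):</p>

```python
def media_from_ref(ref_time_ns, ref_start_ns, media_start,
                   pts_num, pts_den):
    """media_time for a given reference-clock time, per the formula above."""
    return ((ref_time_ns - ref_start_ns) / 1e9) * (pts_num / pts_den) \
           + media_start

def ref_from_media(media_time, ref_start_ns, media_start,
                   pts_num, pts_den):
    """Inverse: reference-clock time for a given media_time."""
    return (media_time - media_start) * (pts_den / pts_num) * 1e9 \
           + ref_start_ns

# With millisecond PTS units (SetPtsUnits(1000, 1)), one second of
# reference-clock time corresponds to 1000 media ticks.
assert media_from_ref(2_000_000_000, 1_000_000_000, 0, 1000, 1) == 1000.0
```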
<p>Users, depending on their use case, may optionally choose not to specify
one or both of these timestamps. A timestamp may be omitted by supplying
the special value '<code>NO_TIMESTAMP</code>'. The AudioRenderer automatically deduces
any omitted timestamp value using the following rules:</p>
<p>Reference Time
If 'reference_time' is omitted, the AudioRenderer will select a &quot;safe&quot;
reference time to begin presentation, based on the minimum lead times for
the output devices that are currently bound to this AudioRenderer. For
example, if an AudioRenderer is bound to an internal audio output
requiring at least 3 mSec of lead time, and an HDMI output requiring at
least 75 mSec of lead time, the AudioRenderer might (if 'reference_time'
is omitted) select a reference time 80 mSec from now.</p>
<p>Media Time
If media_time is omitted, the AudioRenderer will select one of two
values.</p>
<ul>
<li>If the AudioRenderer is resuming from the paused state, and packets
have not been discarded since being paused, then the AudioRenderer will
use a media_time corresponding to the instant at which the presentation
became paused.</li>
<li>If the AudioRenderer is being placed into a playing state for the first
time following startup or a 'discard packets' operation, the initial
media_time will be set to the PTS of the first payload in the pending
packet queue. If the pending queue is empty, initial media_time will be
set to zero.</li>
</ul>
<p>Return Value
When requested, the AudioRenderer will return the 'reference_time' and
'media_time' which were selected and used (whether they were explicitly
specified or not) in the return value of the play call.</p>
<p>Examples</p>
<ol>
<li>
<p>A user has queued some audio using <code>SendPacket</code> and simply wishes them
to start playing as soon as possible. The user may call Play without
providing explicit timestamps -- <code>Play(NO_TIMESTAMP, NO_TIMESTAMP)</code>.</p>
</li>
<li>
<p>A user has queued some audio using <code>SendPacket</code>, and wishes to start
playback at a specified 'reference_time', in sync with some other media
stream, either initially or after discarding packets. The user would call
<code>Play(reference_time, NO_TIMESTAMP)</code>.</p>
</li>
<li>
<p>A user has queued some audio using <code>SendPacket</code>. The first of these
packets has a PTS of zero, and the user wishes playback to begin as soon
as possible, but wishes to skip all of the audio content between PTS 0
and PTS 'media_time'. The user would call
<code>Play(NO_TIMESTAMP, media_time)</code>.</p>
</li>
<li>
<p>A user has queued some audio using <code>SendPacket</code> and wants to present
this media in sync with another player on a different device. The
coordinator of the group of distributed players sends an explicit
message to each player telling them to begin presentation of audio at
PTS 'media_time', at the time (based on the group's shared reference
clock) 'reference_time'. Here the user would call
<code>Play(reference_time, media_time)</code>.</p>
</li>
</ol>
#### Request {#AudioRenderer.Play_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
<tr>
<td><code>media_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
</table>
#### Response {#AudioRenderer.Play_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
<tr>
<td><code>media_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
</table>
### PlayNoReply {#AudioRenderer.PlayNoReply}
#### Request {#AudioRenderer.PlayNoReply_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
<tr>
<td><code>media_time</code></td>
<td>
<code>int64</code>
</td>
</tr>
</table>
### RemovePayloadBuffer {#AudioRenderer.RemovePayloadBuffer}
<p>Removes a payload buffer from the current buffer set associated with the
connection.</p>
<p>A buffer with ID <code>id</code> must exist in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#AudioRenderer.RemovePayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
</table>
### SendPacket {#AudioRenderer.SendPacket}
<p>Sends a packet to the service. The response is sent when the service is
done with the associated payload memory.</p>
<p><code>packet</code> must be valid for the current buffer set, otherwise the service
will close the connection.</p>
#### Request {#AudioRenderer.SendPacket_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
#### Response {#AudioRenderer.SendPacket_Response}
&lt;EMPTY&gt;
### SendPacketNoReply {#AudioRenderer.SendPacketNoReply}
<p>Sends a packet to the service. This interface doesn't define how the
client knows when the sink is done with the associated payload memory.
The inheriting interface must define that.</p>
<p><code>packet</code> must be valid for the current buffer set, otherwise the service
will close the connection.</p>
#### Request {#AudioRenderer.SendPacketNoReply_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
### SetPcmStreamType {#AudioRenderer.SetPcmStreamType}
<p>Sets the type of the stream to be delivered by the client. Using this method implies
that the stream encoding is <code>AUDIO_ENCODING_LPCM</code>.</p>
<p>This must be called before <code>Play</code> or <code>PlayNoReply</code>. After calling <code>SetPcmStreamType</code>,
the client must send an <code>AddPayloadBuffer</code> request before calling the various <code>StreamSink</code>
methods such as <code>SendPacket</code>/<code>SendPacketNoReply</code>.</p>
#### Request {#AudioRenderer.SetPcmStreamType_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>type</code></td>
<td>
<code><a class='link' href='#AudioStreamType'>AudioStreamType</a></code>
</td>
</tr>
</table>
### SetPtsContinuityThreshold {#AudioRenderer.SetPtsContinuityThreshold}
<p>Sets the maximum threshold (in seconds) between explicit user-provided PTS
and expected PTS (determined using interpolation). Beyond this threshold,
a stream is no longer considered 'continuous' by the renderer.</p>
<p>Defaults to an interval of half a PTS 'tick', using the currently-defined PTS units.
Most users should not need to change this value from its default.</p>
<p>Example:
A user is playing back 48KHz audio from a container, which also contains
video and needs to be synchronized with the audio. The timestamps are
provided explicitly per packet by the container, and expressed in mSec
units. This means that a single tick of the media timeline (1 mSec)
represents exactly 48 frames of audio. The application in this scenario
delivers packets of audio to the AudioRenderer, each with exactly 470
frames of audio, and each with an explicit timestamp set to the best
possible representation of the presentation time (given this media
clock's resolution). So, starting from zero, the timestamps would be:</p>
<p>[ 0, 10, 20, 29, 39, 49, 59, 69, 78, 88, ... ]</p>
<p>In this example, attempting to use the presentation time to compute the
starting frame number of the audio in the packet would be wrong the
majority of the time. The first timestamp is correct (by definition), but
it will be 24 packets before the timestamps and frame numbers come back
into alignment (the 24th packet would start with the 11280th audio frame
and have a PTS of exactly 235).</p>
<p>One way to fix this situation is to set the PTS continuity threshold
(henceforth, CT) for the stream to be equal to 1/2 of the time taken by
the number of frames contained within a single tick of the media clock,
rounded up. In this scenario, that would be 24.0 frames of audio, or 500
uSec. Any packets whose expected PTS was within +/-CT frames of the
explicitly provided PTS would be considered to be a continuation of the
previous frame of audio. For this example, calling 'SetPtsContinuityThreshold(0.0005)'
would work well.</p>
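<p>The numbers in this example can be reproduced as follows (rounding each PTS to the nearest tick is an assumption about how the &quot;best possible representation&quot; is computed):</p>

```python
# 48 frames per millisecond tick; packets of exactly 470 frames, with each
# PTS rounded to the nearest representable tick, as in the example above.
FRAMES_PER_TICK = 48
FRAMES_PER_PACKET = 470

def packet_pts(k: int) -> int:
    """Rounded PTS (in ticks) of the k-th packet."""
    return round(k * FRAMES_PER_PACKET / FRAMES_PER_TICK)

pts = [packet_pts(k) for k in range(10)]
assert pts == [0, 10, 20, 29, 39, 49, 59, 69, 78, 88]

# At packet 24 the rounded PTS and the frame count realign exactly:
# 24 * 470 = 11280 frames, and 11280 / 48 = 235 ticks with no remainder.
assert packet_pts(24) == 235
```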
<p>Other possible uses:
Users who are scheduling audio explicitly, relative to a clock which has
not been configured as the reference clock, can use this value to control
the maximum acceptable synchronization error before a discontinuity is
introduced. E.g., if a user is scheduling audio based on a recovered
common media clock, and has not published that clock as the reference
clock, and they set the CT to 20mSec, then up to 20mSec of drift error
can accumulate before the AudioRenderer deliberately inserts a
presentation discontinuity to account for the error.</p>
<p>Users who need to deal with a container where their timestamps may be
even less correct than +/- 1/2 of a PTS tick may set this value to
something larger. This should be the maximum level of inaccuracy present
in the container timestamps, if known. Failing that, it could be set to
the maximum tolerable level of drift error before absolute timestamps are
explicitly obeyed. Finally, a user could set this number to a very large
value (86400.0 seconds, for example) to effectively cause <em>all</em>
timestamps to be ignored after the first, thus treating all audio as
continuous with previously delivered packets. Conversely, users who wish
to <em>always</em> explicitly schedule their audio packets exactly may specify
a CT of 0.</p>
<p>Note: explicitly specifying high-frequency PTS units reduces the default
continuity threshold accordingly. Internally, this threshold is stored as an
integer of 1/8192 subframes. The default threshold is computed as follows:
RoundUp((AudioFPS/PTSTicksPerSec) * 4096) / (AudioFPS * 8192)
For this reason, specifying PTS units with a frequency greater than 8192x
the frame rate (or not calling SetPtsUnits, which retains the default PTS
unit of 1 nanosecond) will result in a default continuity threshold of zero.</p>
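<p>The default-threshold formula can be evaluated numerically (a sketch of the documented computation). For the 48 kHz / millisecond-tick example above, it yields exactly half a tick, 500 microseconds:</p>

```python
import math

def default_ct_seconds(audio_fps: int, pts_ticks_per_sec: float) -> float:
    """Default continuity threshold per the formula above: half a PTS tick,
    quantized to 1/8192-frame subframes."""
    subframes = math.ceil((audio_fps / pts_ticks_per_sec) * 4096)
    return subframes / (audio_fps * 8192)

# 48 kHz audio with millisecond PTS ticks: half a tick is 500 microseconds.
assert default_ct_seconds(48000, 1000) == 0.0005

# With the default nanosecond PTS units the threshold collapses to a
# single subframe, i.e. effectively zero.
assert default_ct_seconds(48000, 1e9) < 1e-8
```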
#### Request {#AudioRenderer.SetPtsContinuityThreshold_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>threshold_seconds</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
### SetPtsUnits {#AudioRenderer.SetPtsUnits}
<p>Sets the units used by the presentation (media) timeline. By default, PTS units are
nanoseconds (as if this were called with numerator of 1e9 and denominator of 1).
This ratio must lie between 1/60 (1 tick per minute) and 1e9/1 (1ns per tick).</p>
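<p>A sketch of the documented bounds check on the tick rate (illustrative only; the service's actual validation may differ):</p>

```python
def pts_units_valid(numerator: int, denominator: int) -> bool:
    """The tick rate numerator/denominator must lie between 1/60
    (1 tick per minute) and 1e9 (1 tick per nanosecond), inclusive."""
    if numerator == 0 or denominator == 0:
        return False
    rate = numerator / denominator
    return 1 / 60 <= rate <= 1e9
```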
#### Request {#AudioRenderer.SetPtsUnits_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>tick_per_second_numerator</code></td>
<td>
<code>uint32</code>
</td>
</tr>
<tr>
<td><code>tick_per_second_denominator</code></td>
<td>
<code>uint32</code>
</td>
</tr>
</table>
### SetReferenceClock {#AudioRenderer.SetReferenceClock}
<p>Sets the reference clock that controls this renderer's playback rate. If the input
parameter is a valid zx::clock, it must have READ, DUPLICATE, TRANSFER rights and
refer to a clock that is both MONOTONIC and CONTINUOUS. If instead an invalid clock
is passed (such as the uninitialized <code>zx::clock()</code>), this indicates that the stream
will use a 'flexible' clock generated by AudioCore that tracks the audio device.</p>
<p><code>SetReferenceClock</code> cannot be called once <code>SetPcmStreamType</code> is called. It also
cannot be called a second time (even if the renderer format has not yet been set).
If a client wants a reference clock that is initially <code>CLOCK_MONOTONIC</code> but may
diverge at some later time, they should create a clone of the monotonic clock, set
this as the stream's reference clock, then rate-adjust it subsequently as needed.</p>
#### Request {#AudioRenderer.SetReferenceClock_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>reference_clock</code></td>
<td>
<code>handle&lt;clock&gt;?</code>
</td>
</tr>
</table>
### SetUsage {#AudioRenderer.SetUsage}
<p>Sets the usage of the render stream. This method may not be called after
<code>SetPcmStreamType</code> is called. The default usage is <code>MEDIA</code>.</p>
#### Request {#AudioRenderer.SetUsage_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#AudioRenderUsage'>AudioRenderUsage</a></code>
</td>
</tr>
</table>
## ProfileProvider {#ProfileProvider}
*Defined in [fuchsia.media/profile_provider.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/profile_provider.fidl;l=9)*
### RegisterHandlerWithCapacity {#ProfileProvider.RegisterHandlerWithCapacity}
<p>Register a thread as a media thread. This notifies the media subsystem that this thread
should have an elevated scheduling profile applied to it in order to meet audio or video
deadlines.</p>
<p><code>name</code> is the name of a system scheduling role to apply to the thread given by
<code>thread_handle</code> -- different products may customize the underlying scheduling strategy based
on the requested role. <code>period</code> is the suggested interval to be scheduled at. <code>period</code> may
be zero if the thread has no preferred scheduling interval. <code>capacity</code> is the proportion of
the scheduling interval the thread needs to be running to achieve good performance or to
meet the scheduling deadline defined by <code>period</code>. <code>capacity</code> may be zero if the workload has
no firm runtime requirements. Note that <code>capacity</code> should be a good faith estimate based on
the worst case runtime the thread requires each period. Excessive capacity requests may
be rejected or result in scaling back the performance of other threads to fit resource
limits.</p>
<p>Capacity, max runtime, and period have the following relationship:</p>
<p>capacity = max runtime / period</p>
<p>Where:</p>
<p>0 &lt;= max runtime &lt;= period and 0 &lt;= capacity &lt;= 1</p>
<p>For heterogeneous systems, the capacity should be planned / measured against the highest
performance processor(s) in the system. The system will automatically adjust the effective
capacity to account for slower processors and operating points and will avoid processors and
operating points that are too slow to meet the requested scheduling parameters (provided
they are reasonable).</p>
<p>Returns the period and capacity (actually maximum runtime) that were applied,
either of which may be zero to indicate not applicable.</p>
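The capacity relationship above can be sketched as a small helper (illustrative only; real clients pass these values over FIDL rather than computing them locally):

```python
def capacity_for(max_runtime_ns: int, period_ns: int) -> float:
    """capacity = max runtime / period, with the documented invariants
    0 <= max runtime <= period and 0 <= capacity <= 1."""
    if period_ns == 0:
        return 0.0  # no preferred scheduling interval
    if not 0 <= max_runtime_ns <= period_ns:
        raise ValueError("max runtime must lie in [0, period]")
    return max_runtime_ns / period_ns

# A mixer thread needing at most 2 ms of CPU out of every 10 ms period:
print(capacity_for(2_000_000, 10_000_000))  # 0.2
```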
#### Request {#ProfileProvider.RegisterHandlerWithCapacity_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>thread_handle</code></td>
<td>
<code>handle&lt;thread&gt;</code>
</td>
</tr>
<tr>
<td><code>name</code></td>
<td>
<code>string[64]</code>
</td>
</tr>
<tr>
<td><code>period</code></td>
<td>
<code><a class='link' href='../zx/'>zx</a>/<a class='link' href='../zx/#duration'>duration</a></code>
</td>
</tr>
<tr>
<td><code>capacity</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
#### Response {#ProfileProvider.RegisterHandlerWithCapacity_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>period</code></td>
<td>
<code><a class='link' href='../zx/'>zx</a>/<a class='link' href='../zx/#duration'>duration</a></code>
</td>
</tr>
<tr>
<td><code>capacity</code></td>
<td>
<code><a class='link' href='../zx/'>zx</a>/<a class='link' href='../zx/#duration'>duration</a></code>
</td>
</tr>
</table>
### UnregisterHandler {#ProfileProvider.UnregisterHandler}
<p>Reset a thread's scheduling profile to the default.</p>
#### Request {#ProfileProvider.UnregisterHandler_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>thread_handle</code></td>
<td>
<code>handle&lt;thread&gt;</code>
</td>
</tr>
<tr>
<td><code>name</code></td>
<td>
<code>string[64]</code>
</td>
</tr>
</table>
#### Response {#ProfileProvider.UnregisterHandler_Response}
&lt;EMPTY&gt;
## SessionAudioConsumerFactory {#SessionAudioConsumerFactory}
*Defined in [fuchsia.media/audio_consumer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_consumer.fidl;l=11)*
<p>Interface for creating audio consumers bound to a session.</p>
### CreateAudioConsumer {#SessionAudioConsumerFactory.CreateAudioConsumer}
<p>Creates an <code>AudioConsumer</code>, which is an interface for playing audio, bound
to a particular session. <code>session_id</code> is the identifier of the media session
for which audio is to be rendered.</p>
#### Request {#SessionAudioConsumerFactory.CreateAudioConsumer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>session_id</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>audio_consumer_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#AudioConsumer'>AudioConsumer</a>&gt;</code>
</td>
</tr>
</table>
## SimpleStreamSink {#SimpleStreamSink}
*Defined in [fuchsia.media/stream.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=108)*
<p>A StreamSink that uses StreamBufferSet for buffer management.</p>
### AddPayloadBuffer {#SimpleStreamSink.AddPayloadBuffer}
<p>Adds a payload buffer to the current buffer set associated with the
connection. A <code>StreamPacket</code> struct references a payload buffer in the
current set by ID using the <code>StreamPacket.payload_buffer_id</code> field.</p>
<p>A buffer with ID <code>id</code> must not be in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#SimpleStreamSink.AddPayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
<tr>
<td><code>payload_buffer</code></td>
<td>
<code>handle&lt;vmo&gt;</code>
</td>
</tr>
</table>
### DiscardAllPackets {#SimpleStreamSink.DiscardAllPackets}
<p>Discards packets previously sent via <code>SendPacket</code> or <code>SendPacketNoReply</code>
and not yet released. The response is sent after all packets have been
released.</p>
#### Request {#SimpleStreamSink.DiscardAllPackets_Request}
&lt;EMPTY&gt;
#### Response {#SimpleStreamSink.DiscardAllPackets_Response}
&lt;EMPTY&gt;
### DiscardAllPacketsNoReply {#SimpleStreamSink.DiscardAllPacketsNoReply}
<p>Discards packets previously sent via <code>SendPacket</code> or <code>SendPacketNoReply</code>
and not yet released.</p>
#### Request {#SimpleStreamSink.DiscardAllPacketsNoReply_Request}
&lt;EMPTY&gt;
### EndOfStream {#SimpleStreamSink.EndOfStream}
<p>Indicates the stream has ended. The precise semantics of this method are
determined by the inheriting interface.</p>
#### Request {#SimpleStreamSink.EndOfStream_Request}
&lt;EMPTY&gt;
### RemovePayloadBuffer {#SimpleStreamSink.RemovePayloadBuffer}
<p>Removes a payload buffer from the current buffer set associated with the
connection.</p>
<p>A buffer with ID <code>id</code> must exist in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#SimpleStreamSink.RemovePayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
</table>
### SendPacket {#SimpleStreamSink.SendPacket}
<p>Sends a packet to the service. The response is sent when the service is
done with the associated payload memory.</p>
<p><code>packet</code> must be valid for the current buffer set, otherwise the service
will close the connection.</p>
#### Request {#SimpleStreamSink.SendPacket_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
#### Response {#SimpleStreamSink.SendPacket_Response}
&lt;EMPTY&gt;
### SendPacketNoReply {#SimpleStreamSink.SendPacketNoReply}
<p>Sends a packet to the service. This interface doesn't define how the
client knows when the sink is done with the associated payload memory.
The inheriting interface must define that.</p>
<p><code>packet</code> must be valid for the current buffer set, otherwise the service
will close the connection.</p>
#### Request {#SimpleStreamSink.SendPacketNoReply_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
## StreamBufferSet {#StreamBufferSet}
*Defined in [fuchsia.media/stream.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=15)*
<p>Manages a set of payload buffers for a stream. This interface is typically
inherited along with <code>StreamSink</code> or <code>StreamSource</code> to enable the transport
of elementary streams between clients and services.</p>
### AddPayloadBuffer {#StreamBufferSet.AddPayloadBuffer}
<p>Adds a payload buffer to the current buffer set associated with the
connection. A <code>StreamPacket</code> struct references a payload buffer in the
current set by ID using the <code>StreamPacket.payload_buffer_id</code> field.</p>
<p>A buffer with ID <code>id</code> must not be in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#StreamBufferSet.AddPayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
<tr>
<td><code>payload_buffer</code></td>
<td>
<code>handle&lt;vmo&gt;</code>
</td>
</tr>
</table>
### RemovePayloadBuffer {#StreamBufferSet.RemovePayloadBuffer}
<p>Removes a payload buffer from the current buffer set associated with the
connection.</p>
<p>A buffer with ID <code>id</code> must exist in the current set when this method is
invoked, otherwise the service will close the connection.</p>
#### Request {#StreamBufferSet.RemovePayloadBuffer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>id</code></td>
<td>
<code>uint32</code>
</td>
</tr>
</table>
## StreamProcessor {#StreamProcessor}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=507)*
<p>Overview of operation:</p>
<ol>
<li>Create</li>
</ol>
<ul>
<li>create via CodecFactory - see CodecFactory</li>
<li>create via LicenseSession - see LicenseSession</li>
</ul>
<ol start="2">
<li>Get input constraints</li>
</ol>
<ul>
<li>OnInputConstraints() - sent unsolicited by stream processor shortly after
stream processor creation.</li>
</ul>
<ol start="3">
<li>Provide input buffers</li>
</ol>
<ul>
<li>SetInputBufferPartialSettings()</li>
</ul>
<ol start="4">
<li>Deliver input data</li>
</ol>
<ul>
<li>QueueInputPacket() + OnFreeInputPacket(), for as long as it takes,
possibly working through all input packets repeatedly before...</li>
</ul>
<ol start="5">
<li>Get output constraints and format</li>
</ol>
<ul>
<li>OnOutputConstraints()</li>
<li>This is not sent until after at least one QueueInput* message is sent by
the client, even if the underlying processor behind the StreamProcessor
doesn't fundamentally need any input data to determine its output
constraints. This server behavior prevents clients taking an incorrect
dependency on the output constraints showing up before input is
delivered.</li>
<li>A client must tolerate this arriving as late as after substantial input
data has been delivered, including lots of input packet recycling via
OnFreeInputPacket().</li>
<li>This message can arrive more than once before the first output data.</li>
</ul>
<ol start="6">
<li>Provide output buffers</li>
</ol>
<ul>
<li>SetOutputBufferPartialSettings() / CompleteOutputBufferPartialSettings()</li>
</ul>
<ol start="7">
<li>Data flows, with optional EndOfStream</li>
</ol>
<ul>
<li>OnOutputPacket() / RecycleOutputPacket() / QueueInputPacket() /
OnFreeInputPacket() / QueueInputEndOfStream() / OnOutputEndOfStream()</li>
</ul>
<p>Semi-trusted StreamProcessor server - SW decoders run in an isolate (with
very few capabilities) just in case the decoding SW has a vulnerability
which could be used to take over the StreamProcessor server. Clients of the
stream processor interface using decoders and processing streams of separate
security contexts, to a greater extent than some other interfaces, need to
protect themselves against invalid server behavior, such as double-free of a
packet_index and any other invalid server behavior. Having fed in
compressed data of one security context, don't place too much trust in a
single StreamProcessor instance to not mix data among any buffers that
StreamProcessor server has ever been told about. Instead, create separate
StreamProcessor instances for use by security-separate client-side contexts.
While the picture for HW-based decoders looks somewhat different and is out
of scope of this paragraph, the client should always use separate
StreamProcessor instances for security-separate client-side contexts.</p>
<p>Descriptions of actions taken by methods of this protocol and the states of
things are given as if the methods are synchronously executed by the stream
processor server, but in reality, as is typical of FIDL interfaces, the
message processing is async. The states described are to be read as the
state from the client's point of view unless otherwise stated. Events
coming back from the server are of course delivered async, and a client that
processes more than one stream per StreamProcessor instance needs to care
whether a given event is from the current stream vs. some older
soon-to-be-gone stream.</p>
<p>The Sync() method's main purpose is to enable the client to robustly prevent
having both old and new buffers allocated in the system at the same time,
since media buffers can be significantly large, depending on the use case. The
Sync() method achieves this by only delivering its response when all previous calls to
the StreamProcessor protocol have actually taken effect in the
StreamControl ordering domain. Sync() can also be used to wait for the
stream processor server to catch up if there's a possibility that a client
might otherwise get too far ahead of the StreamProcessor server, by for
example requesting creation of a large number of streams in a row. It can
also be used during debugging to ensure that a stream processor server
hasn't gotten stuck. Calling Sync() is entirely optional and never required
for correctness - only potentially required to de-overlap resource usage.</p>
<p>It's possible to re-use a StreamProcessor instance for another stream, and
doing so can sometimes skip over re-allocation of buffers. This can be a
useful thing to do for cases like seeking to a new location - at the
StreamProcessor interface that can look like switching to a new stream.</p>
### CloseCurrentStream {#StreamProcessor.CloseCurrentStream}
<p>This &quot;closes&quot; the current stream, leaving no current stream. In
addition, this message can optionally release input buffers or output
buffers.</p>
<p>If there has never been any active stream, the stream_lifetime_ordinal
must be zero or the server will close the channel. If there has been an
active stream, the stream_lifetime_ordinal must be the most recent
active stream whether that stream is still active or not. Else the
server will close the channel.</p>
<p>Sending this message multiple times without any new active stream in between
is not an error. This allows a client to use this message to
close the current stream to stop wasting processing power on a stream the
user no longer cares about, then later decide that buffers should be
released and send this message again with release_input_buffers and/or
release_output_buffers true to get the buffers released, if the client is
interested in trying to avoid overlap in resource usage between old
buffers and new buffers (not all clients are).</p>
<p>See also Sync().</p>
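The stream_lifetime_ordinal rule can be modeled with a toy server (illustrative only; the real server closes the FIDL channel, modeled here as raising an error):

```python
class ToyStreamProcessorServer:
    """Toy model of CloseCurrentStream() ordinal checking: a mismatched
    stream_lifetime_ordinal closes the channel (modeled as ValueError).
    Repeated closes with the correct ordinal are allowed."""
    def __init__(self):
        self.last_active_ordinal = 0  # 0 == no stream has ever been active

    def start_stream(self, ordinal: int):
        self.last_active_ordinal = ordinal

    def close_current_stream(self, ordinal: int,
                             release_input_buffers: bool = False,
                             release_output_buffers: bool = False):
        if ordinal != self.last_active_ordinal:
            raise ValueError("channel closed: wrong stream_lifetime_ordinal")

server = ToyStreamProcessorServer()
server.close_current_stream(0)  # never any active stream: 0 is required
server.start_stream(1)
server.close_current_stream(1)  # ok
server.close_current_stream(1, release_input_buffers=True)  # repeat is ok
```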
#### Request {#StreamProcessor.CloseCurrentStream_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>release_input_buffers</code></td>
<td>
<code>bool</code>
</td>
</tr>
<tr>
<td><code>release_output_buffers</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
### CompleteOutputBufferPartialSettings {#StreamProcessor.CompleteOutputBufferPartialSettings}
<p>After SetOutputBufferPartialSettings(), the server won't send
OnOutputConstraints(), OnOutputFormat(), OnOutputPacket(), or
OnOutputEndOfStream() until after the client sends
CompleteOutputBufferPartialSettings().</p>
<p>Some clients may be able to send
CompleteOutputBufferPartialSettings() immediately after
SetOutputBufferPartialSettings() - in that case the client needs to be
prepared to receive output without knowing the buffer count or packet
count yet - such clients may internally delay processing the received
output until the client has heard from sysmem (which is when the client
will learn the buffer count and packet count).</p>
<p>Other clients may first wait for sysmem to allocate, prepare to receive
output, and then send CompleteOutputBufferPartialSettings().</p>
#### Request {#StreamProcessor.CompleteOutputBufferPartialSettings_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>buffer_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
### EnableOnStreamFailed {#StreamProcessor.EnableOnStreamFailed}
<p>Permit the server to use OnStreamFailed() instead of the server just
closing the whole StreamProcessor channel on stream failure.</p>
<p>If the server hasn't seen this message by the time a stream fails, the
server will close the StreamProcessor channel instead of sending
OnStreamFailed().</p>
#### Request {#StreamProcessor.EnableOnStreamFailed_Request}
&lt;EMPTY&gt;
### FlushEndOfStreamAndCloseStream {#StreamProcessor.FlushEndOfStreamAndCloseStream}
<p>This message is optional.</p>
<p>This message is only valid after QueueInputEndOfStream() for this stream.
The stream_lifetime_ordinal input parameter must match the
stream_lifetime_ordinal of the QueueInputEndOfStream(), else the server
will close the channel.</p>
<p>A client can use this message to flush through (not discard) the last
input data of a stream so that the stream processor server generates
corresponding output data for all the input data before the server moves
on to the next stream, without forcing the client to wait for
OnOutputEndOfStream() before queueing data of another stream.</p>
<p>The difference between QueueInputEndOfStream() and
FlushEndOfStreamAndCloseStream(): QueueInputEndOfStream() is a promise
from the client that there will not be any more input data for the
stream (and this info is needed by some stream processors for the stream
processor to ever emit the very last output data). The
QueueInputEndOfStream() having been sent doesn't prevent the client from
later completely discarding the rest of the current stream by closing
the current stream (with or without a stream switch). In contrast,
FlushEndOfStreamAndCloseStream() is a request from the client that all
the previously-queued input data be processed including the logical
&quot;EndOfStream&quot; showing up as OnOutputEndOfStream() (in success case)
before moving on to any newer stream - this essentially changes the
close-stream handling from discard to flush-through for this stream
only.</p>
<p>A client using this message can start providing input data for a new
stream without that causing discard of old stream data. That's the
purpose of this message - to allow a client to flush through (not
discard) the old stream's last data (instead of the default when closing
or switching streams which is discard).</p>
<p>Because the old stream is not done processing yet and the old stream's
data is not being discarded, the client must be prepared to continue to
process OnOutputConstraints() messages until the stream_lifetime_ordinal
is done. The client will know the stream_lifetime_ordinal is done when
OnOutputEndOfStream(), OnStreamFailed(), or the StreamProcessor channel
closes.</p>
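The contrast between the default close (discard) and flush-through can be sketched with a toy model (illustrative only; the real processing is asynchronous):

```python
class ToyStream:
    """Toy contrast between closing a stream (pending input discarded)
    and FlushEndOfStreamAndCloseStream (pending input flushed through
    to output before the stream ends)."""
    def __init__(self):
        self.pending = []
        self.output = []

    def queue_input(self, data):
        self.pending.append(data)

    def close(self):            # default close/switch: discard
        self.pending.clear()

    def flush_and_close(self):  # flush-through: process everything first
        self.output.extend(self.pending)
        self.pending.clear()

a, b = ToyStream(), ToyStream()
for s in (a, b):
    s.queue_input("pkt0")
    s.queue_input("pkt1")
a.close()            # old data discarded, no output produced
b.flush_and_close()  # old data flushed through to output
print(a.output, b.output)  # [] ['pkt0', 'pkt1']
```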
#### Request {#StreamProcessor.FlushEndOfStreamAndCloseStream_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
### OnFreeInputPacket {#StreamProcessor.OnFreeInputPacket}
<p>The server sends this message when the stream processor is done
consuming the data in this packet (but not necessarily done processing
the data) and the packet can be re-filled by the client.</p>
<p>This is not sent for all packets when a new buffer_lifetime_ordinal
starts as in that case all the packets are initially free with the
client.</p>
<p>After receiving the freed input packet via this event, the stream
processor client can later call QueueInputPacket() with appropriate
offset and length set, with the same packet_index, to re-use the
packet_index.</p>
<p>OnFreeInputPacket() does <em>not</em> imply that the data in the input packet
has been processed successfully, only that the input data is no longer
needed by the StreamProcessor. If a client needs to know which input
data has generated corresponding output, using timestamp_ish values for
that is recommended.</p>
<p>Any reliance on the relative order of OnFreeInputPacket() and
OnStreamFailed() is discouraged and deprecated. Instead, use
timestamp_ish values to establish which input packets generated
corresponding output packets. Note that even using timestamp_ish values
doesn't necessarily imply that the processing of input data with a given
timestamp_ish value is fully complete, as in some StreamProcessor(s) the
data derived from an input packet can be kept for reference purposes for
a long time (in general indefinitely) after the input data has generated
its primary output data (the output data to which the timestamp_ish
value is attached). The StreamProcessor interface currently does not
provide any way to determine when all data derived from an input packet
has been discarded by the StreamProcessor, and if such a mechanism is
ever added to the StreamProcessor protocol, it would be an optional
StreamProcessor capability, since it would be infeasible to implement
for some StreamProcessor implementations that rely on external means to
process data, where the external means won't necessarily provide info
regarding when an input packet's derived data is fully discarded. An
input packet's derived data will never generate or contribute to any
output data for a different stream.</p>
<p>The order of OnFreeInputPacket() is not guaranteed to be the same as the
order of QueueInputPacket(). Any reliance on the order being the same
is strongly discouraged and deprecated. Clients are expected to work
properly even if the order of OnFreeInputPacket() messages is
intentionally scrambled with respect to each other (but not scrambled
across OnStreamFailed(), for now).</p>
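The recommendation above, correlating input and output via timestamp_ish values rather than via OnFreeInputPacket() ordering, can be sketched like this (illustrative only):

```python
def correlate(inputs, outputs):
    """Match outputs back to inputs by timestamp_ish value, ignoring
    the (possibly scrambled) order in which input packets were freed.
    inputs/outputs are lists of (timestamp_ish, payload) tuples."""
    by_ts = {ts: payload for ts, payload in inputs}
    return [(out_payload, by_ts.get(ts)) for ts, out_payload in outputs]

inputs = [(100, "in-A"), (200, "in-B"), (300, "in-C")]
# Output (and packet-free) order need not match input order:
outputs = [(200, "out-B"), (100, "out-A")]
print(correlate(inputs, outputs))
```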
#### Response {#StreamProcessor.OnFreeInputPacket_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>free_input_packet</code></td>
<td>
<code><a class='link' href='#PacketHeader'>PacketHeader</a></code>
</td>
</tr>
</table>
### OnInputConstraints {#StreamProcessor.OnInputConstraints}
<p>The server sends this shortly after StreamProcessor creation to indicate
input buffer constraints. The &quot;min&quot; and &quot;max&quot; input constraints don't
change for the life of the StreamProcessor.</p>
<p>The &quot;max&quot; values for buffer size and count are large enough to support
the most demanding format the server supports on input. The
&quot;recommended&quot; values should be workable for use with the input
FormatDetails conveyed during StreamProcessor creation. The
&quot;recommended&quot; values are not necessarily suitable if the client uses
QueueInputFormatDetails() to change the input format. In that case it's
up to the client to determine suitable values, either by creating a new
StreamProcessor instance instead, or knowing suitable values outside the
scope of this protocol.</p>
<p>See comments on StreamBufferConstraints.</p>
<p>This message is guaranteed to be sent unsolicited to the StreamProcessor
client during or shortly after StreamProcessor creation. Clients should
not depend on this being the very first message to arrive at the client.</p>
<p>The &quot;min&quot; and &quot;max&quot; input constraints are guaranteed not to change for a
given StreamProcessor instance. The &quot;recommended&quot; values may
effectively change when the server processes QueueInputFormatDetails().
There is not any way in the protocol short of creating a new
StreamProcessor instance for the client to get those new &quot;recommended&quot;
values.</p>
#### Response {#StreamProcessor.OnInputConstraints_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>input_constraints</code></td>
<td>
<code><a class='link' href='#StreamBufferConstraints'>StreamBufferConstraints</a></code>
</td>
</tr>
</table>
### OnOutputConstraints {#StreamProcessor.OnOutputConstraints}
<p>This event informs the client of new output constraints.</p>
<p>This message is ordered with respect to other output (such as output
packets, output format, output end-of-stream).</p>
<p>Before the first OnOutputPacket() of a stream, the server guarantees that
at least one OnOutputConstraints() and exactly one OnOutputFormat() will
be sent. The server need not set buffer_constraints_action_required true
in OnOutputConstraints() if the buffer config is already suitable for the
stream (buffer_constraints_action_required false means the buffer config
is already fine). The client must tolerate multiple
OnOutputConstraints() (and 1 OnOutputFormat() message) before the first
output packet. As long as the client hasn't moved to a new stream, the
server won't send another OnOutputConstraints() until after the client
has configured output buffers.</p>
<p>This message can be sent mid-stream by a server. If
buffer_constraints_action_required false, the message is safe to
ignore, but a client may choose to stash the new constraints for
later use the next time the client wants to unilaterally re-configure
buffers (when allowed). If later the server needs the output config to
change, the server may send a new OnOutputConstraints() with
buffer_constraints_action_required true.</p>
<p>On buffer_constraints_action_required true, a client that does not wish
to fully handle mid-stream output buffer config changes should either
give up completely on the processing, or at least re-config the output
as specified before starting a new stream (and possibly re-delivering
input data, if the client wants). This avoids useless retry with a new
stream starting from just before the output buffer config change which
would hit the same mid-stream output config change again.</p>
<p>Similarly, some servers may only partly support mid-stream format
changes, or only support a mid-stream format change if the buffers are
already large enough to handle both before and after the format change.
Such servers should still indicate buffer_constraints_action_required
true, but then send OnStreamFailed() after the client has re-configured
output buffers (seamlessly dealing with the mid-stream output config
change is even better of course, but is not always feasible depending on
format). When the client retries with a new stream starting from a
nearby location in the client's logical overall media timeline, the
output buffers will already be suitable for the larger size output, so
the new stream will not need any mid-stream output buffer re-config,
only a mid-stream OnOutputFormat(). This strategy avoids the problem
that would otherwise occur if a client were to retry with a new stream
starting just before the mid-stream output buffer config change (the
retry wouldn't be effective since the same need for an output buffer
config change would be hit again). Servers are discouraged from sending
OnStreamFailed() solely due to a mid-stream need for different output
buffer config without first sending OnOutputConstraints() with
buffer_constraints_action_required true and waiting for the client to
re-configure output buffers (to avoid the useless client retry with a
new stream from a logical location before the config change).</p>
<p>When buffer_constraints_action_required true, the server will not send
any OnOutputPacket() for this stream until after the client has
configured/re-configured output buffers.</p>
<p>A client that gives up on processing a stream on any mid-stream
OnOutputConstraints() or mid-stream OnOutputFormat() should completely
ignore any OnOutputConstraints() with buffer_constraints_action_required
false. Otherwise the client may needlessly fail processing, or server
implementations might not be able to use
buffer_constraints_action_required false for fear of simpler clients
just disconnecting.</p>
<p>All clients, even those which don't want to support any mid-stream
output buffer re-config or mid-stream OnOutputFormat() are required to
deal with 1..multiple OnOutputConstraints() messages before the first
output packet, and 1 OnOutputFormat() message before the first output
packet.</p>
<p>This message is ordered with respect to output packets, and with respect
to OnOutputFormat().</p>
#### Response {#StreamProcessor.OnOutputConstraints_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>output_config</code></td>
<td>
<code><a class='link' href='#StreamOutputConstraints'>StreamOutputConstraints</a></code>
</td>
</tr>
</table>
### OnOutputEndOfStream {#StreamProcessor.OnOutputEndOfStream}
<p>After QueueInputEndOfStream() is sent by the StreamProcessor client,
within a reasonable duration the corresponding OnOutputEndOfStream()
will be sent by the StreamProcessor server. Similar to
QueueInputEndOfStream(), OnOutputEndOfStream() is sent a maximum of once
per stream.</p>
<p>No more stream data for this stream will be sent after this message. All
input data for this stream was processed.</p>
<p>While a StreamProcessor client is not required to
QueueInputEndOfStream() (unless the client wants to use
FlushEndOfStreamAndCloseStream()), if a StreamProcessor server receives
QueueInputEndOfStream(), and the client hasn't closed the stream, the
StreamProcessor server must generate a corresponding
OnOutputEndOfStream() if nothing went wrong, or must send
OnStreamFailed(), or must close the server end of the StreamProcessor
channel. An ideal StreamProcessor server would handle and report stream
errors via the error_ flags and complete stream processing without
sending OnStreamFailed(), but in any case, the above-listed options are
the only ways that an OnOutputEndOfStream() won't happen after
QueueInputEndOfStream().</p>
<p>There will be no more OnOutputPacket() or OnOutputConstraints() messages
for this stream_lifetime_ordinal after this message - if a server doesn't
follow this rule, a client should close the StreamProcessor channel.</p>
<p>The error_detected_before bool has the same semantics as the
error_detected_before bool in OnOutputPacket().</p>
#### Response {#StreamProcessor.OnOutputEndOfStream_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>error_detected_before</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
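The end-of-stream guarantee above can be modeled as a tiny server-side sketch. This is plain Python, not real FIDL bindings; `StreamServerModel` and its return tuples are hypothetical representations of the messages described above.

```python
# Illustrative model of the server's end-of-stream obligation.
# Plain Python, not real FIDL bindings; names here are hypothetical.

class StreamServerModel:
    def __init__(self, stream_lifetime_ordinal):
        self.ordinal = stream_lifetime_ordinal
        self.input_eos = False
        self.output_eos_sent = False

    def queue_input_end_of_stream(self):
        self.input_eos = True

    def finish_stream(self, failed=False):
        # After input EOS (with the stream not otherwise closed), exactly one
        # terminal event reaches the client: OnOutputEndOfStream(),
        # OnStreamFailed(), or channel closure.
        assert self.input_eos and not self.output_eos_sent
        if failed:
            return ("OnStreamFailed", self.ordinal)
        self.output_eos_sent = True  # sent at most once per stream
        return ("OnOutputEndOfStream", self.ordinal, False)  # error_detected_before
```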
### OnOutputFormat {#StreamProcessor.OnOutputFormat}
<p>This message is sent by the server before the first output packet of any
stream, and potentially mid-stream between output packets of the stream,
ordered with respect to output packets, and ordered with respect to
OnOutputConstraints().</p>
<p>The server guarantees that the first packet of every stream will be
preceded by an OnOutputFormat().</p>
<p>The server guarantees that there will be an OnOutputFormat() between an
OnOutputConstraints() with buffer_constraints_action_required true and an
OnOutputPacket(). In other words, the client is essentially allowed to
forget what the output format is on any OnOutputConstraints() with
buffer_constraints_action_required true, because the server promises a
subsequent OnOutputFormat() before any OnOutputPacket().</p>
<p>If the server sets buffer_constraints_action_required true in
OnOutputConstraints(), the server won't send OnOutputFormat() (and
therefore also won't send OnOutputPacket()) until the client has
re-configured output buffers.</p>
<p>The server is allowed to send an OnOutputFormat() mid-stream between two
output packets.</p>
<p>A server won't send two adjacent OnOutputFormat() messages without any
output packet in between. However an OnOutputFormat() message doesn't
guarantee a subsequent packet, because for example the server could send
OnOutputEndOfStream() or OnStreamFailed() instead.</p>
<p>A client that does not wish to seamlessly handle mid-stream output format
changes should either ensure that no stream it processes ever has a
mid-stream format change, or ensure that any retry of processing starts
the new attempt at a point logically at or after the point where the old
format ends and the new format begins; otherwise the client may simply hit
the same mid-stream format change again.</p>
<p>An example of this message being sent mid-stream is mid-stream change
of dimensions of video frames output from a video decoder.</p>
<p>Not all servers will support seamless handling of format change. Those
that do support seamless handling of format change may require that the
format change not also require output buffer re-config, in order for the
handling to be seamless. See the comment block for OnOutputConstraints()
for more discussion of how servers and clients should behave - in
particular when they don't seamlessly handle output constraint change
and/or output format change.</p>
<p>If this message isn't being sent by the server when expected at the
start of a stream, the most common reason is that an OnOutputConstraints()
with buffer_constraints_action_required true hasn't been processed by the
client (by configuring output buffers using
SetOutputBufferPartialSettings() etc).</p>
#### Response {#StreamProcessor.OnOutputFormat_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>output_format</code></td>
<td>
<code><a class='link' href='#StreamOutputFormat'>StreamOutputFormat</a></code>
</td>
</tr>
</table>
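The ordering guarantees above (a format always precedes a stream's first packet, and may change mid-stream) can be sketched as a validation pass over the server's output event sequence. This is plain Python, not real FIDL bindings; `validate_output_events` and its tuple encoding are hypothetical.

```python
# Illustrative check of the format-before-first-packet guarantee.
# Plain Python, not real FIDL bindings; names here are hypothetical.

def validate_output_events(events):
    """events: ("format", fmt) / ("packet", data) tuples for one stream,
    in the order the server sent them. Returns the format in effect for
    each packet."""
    current_format = None
    packet_formats = []
    for kind, payload in events:
        if kind == "format":
            current_format = payload  # may also change mid-stream
        elif kind == "packet":
            if current_format is None:
                raise RuntimeError("output packet before any OnOutputFormat()")
            packet_formats.append(current_format)
    return packet_formats
```

A mid-stream format change (for example new video dimensions) shows up as a second `("format", ...)` entry between packets, and subsequent packets carry the new format.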
### OnOutputPacket {#StreamProcessor.OnOutputPacket}
<p>This is how the stream processor emits an output packet to the stream
processor client.</p>
<p>Order is significant.</p>
<p>The client should eventually call RecycleOutputPacket() (possibly after
switching streams multiple times), unless the buffer_lifetime_ordinal
has moved on. A stream change doesn't change which packets are busy
with the client vs. free with the server.</p>
<p>The relevant buffer is always the one specified in the packet's buffer_index field.</p>
<p>For low-level buffer types that support it, a StreamProcessor is free to
emit an output packet before the low-level buffer actually has any
usable data in the buffer, with the mechanism for signalling the
presence of data separate from the OnOutputPacket() message. For such
low-level buffer types, downstream consumers of data from the emitted
packet must participate in the low-level buffer signalling mechanism to
know when it's safe to consume the data. This is most likely to be
relevant when using a video decoder and gralloc-style buffers.</p>
<p>The error_ bool(s) allow (but do not require) a StreamProcessor server
to report errors that happen during an AU or between AUs.</p>
<p>The scope of error_detected_before starts at the end of the last
delivered output packet on this stream, or the start of stream if there
were no previous output packets on this stream. The scope ends at the
start of the output_packet.</p>
<p>The error_detected_before bool is separate so that discontinuities can be
indicated separately from whether the current packet is damaged.</p>
<p>The scope of error_detected_during is from the start to the end of this
output_packet.</p>
#### Response {#StreamProcessor.OnOutputPacket_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>output_packet</code></td>
<td>
<code><a class='link' href='#Packet'>Packet</a></code>
</td>
</tr>
<tr>
<td><code>error_detected_before</code></td>
<td>
<code>bool</code>
</td>
</tr>
<tr>
<td><code>error_detected_during</code></td>
<td>
<code>bool</code>
</td>
</tr>
</table>
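The two error scopes above can be made concrete with a short sketch that classifies a sequence of output packets. This is plain Python, not real FIDL bindings; `classify_packet_errors` is a hypothetical helper.

```python
# Illustrative mapping of the per-packet error flags to their scopes.
# Plain Python, not real FIDL bindings; names here are hypothetical.

def classify_packet_errors(packets):
    """packets: (name, error_detected_before, error_detected_during) tuples.
    error_detected_before covers the gap since the previous packet (or the
    start of stream); error_detected_during covers the packet itself."""
    findings = []
    for name, before, during in packets:
        if before:
            findings.append(f"discontinuity before {name}")
        if during:
            findings.append(f"{name} damaged")
    return findings
```

Keeping the two flags separate lets a client distinguish "data was lost before this packet" from "this packet itself is damaged", as described above.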
### OnStreamFailed {#StreamProcessor.OnStreamFailed}
<p>The stream has failed, but the StreamProcessor instance is still usable
for a new stream.</p>
<p>This message is only ever sent by the server if the client previously
sent EnableOnStreamFailed(). If the client didn't send
EnableOnStreamFailed() then the server closes the StreamProcessor
channel instead.</p>
<p>StreamProcessor server implementations are encouraged to handle stream
errors (and ideally to also report them via the error_ bools of
OnOutputPacket() and OnOutputEndOfStream()) without failing the whole
stream. If a stream processor server cannot do that but can still cleanly
contain the failure to the stream, it can (assuming EnableOnStreamFailed()
was called) use OnStreamFailed() to indicate the stream failure to the
client without closing the StreamProcessor channel.</p>
<p>An ideal StreamProcessor server handles problems with input data without
sending this message, but sending this message is preferred vs. closing
the server end of the StreamProcessor channel if the StreamProcessor
server can 100% reliably contain the stream failure to the stream,
without any adverse impact to any later stream.</p>
<p>No further messages will arrive from the server regarding the failed
stream. This includes any OnOutputEndOfStream() that the client would
have otherwise expected.</p>
#### Response {#StreamProcessor.OnStreamFailed_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>error</code></td>
<td>
<code><a class='link' href='#StreamError'>StreamError</a></code>
</td>
</tr>
</table>
### QueueInputEndOfStream {#StreamProcessor.QueueInputEndOfStream}
<p>Inform the server that all QueueInputPacket() messages for this stream
have been sent.</p>
<p>If the stream isn't closed first (by the client, or by OnStreamFailed(),
or StreamProcessor channel closing), there will later be a corresponding
OnOutputEndOfStream().</p>
<p>The corresponding OnOutputEndOfStream() message will be generated only if
the server finishes processing the stream before the server sees the
client close the stream (such as by starting a new stream). A way to
force the server to finish the stream before closing is to use
FlushEndOfStreamAndCloseStream() after QueueInputEndOfStream() before any
new stream. Another way to force the server to finish the stream before
closing is to wait for the OnOutputEndOfStream() before taking any action
that closes the stream.</p>
<p>In addition to serving as an &quot;EndOfStream&quot; marker to make it obvious
client-side when all input data has been processed, if a client never
sends QueueInputEndOfStream(), no amount of waiting will necessarily
result in all input data getting processed through to the output. Some
stream processors have some internally-delayed data which only gets
pushed through by additional input data <em>or</em> by this EndOfStream marker.
In that sense, this message can be viewed as a flush-through at
InputData domain level, but the flush-through only takes effect if the
stream processor even gets that far before the stream is just closed at
StreamControl domain level. This message is not alone sufficient to act
as an overall flush-through at StreamControl level. For that, send this
message first and then send FlushEndOfStreamAndCloseStream() (at which
point it becomes possible to queue input data for a new stream without
causing discard of this older stream's data), or wait for the
OnOutputEndOfStream() before closing the current stream.</p>
<p>If a client sends QueueInputPacket(), QueueInputFormatDetails(), or
QueueInputEndOfStream() for this stream after the first
QueueInputEndOfStream() for this stream, a server should close the
StreamProcessor channel.</p>
#### Request {#StreamProcessor.QueueInputEndOfStream_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
</tr>
</table>
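The at-most-once rule above (a server should close the channel on any QueueInput* for a stream after its first QueueInputEndOfStream()) can be sketched server-side. This is plain Python, not real FIDL bindings; `InputStreamModel` and `ChannelClosed` are hypothetical.

```python
# Illustrative server-side enforcement of the at-most-once
# QueueInputEndOfStream() rule. Plain Python, not real FIDL bindings.

class ChannelClosed(Exception):
    """Stands in for the server closing the StreamProcessor channel."""

class InputStreamModel:
    def __init__(self):
        self.eos_seen = False

    def _check_open(self):
        if self.eos_seen:
            # Any further QueueInput* for this stream closes the channel.
            raise ChannelClosed("QueueInput* after QueueInputEndOfStream()")

    def queue_input_packet(self, packet):
        self._check_open()

    def queue_input_format_details(self, details):
        self._check_open()

    def queue_input_end_of_stream(self):
        self._check_open()
        self.eos_seen = True
```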
### QueueInputFormatDetails {#StreamProcessor.QueueInputFormatDetails}
<p>If the input format details are still the same as specified during
StreamProcessor creation, this message is unnecessary and does not need
to be sent.</p>
<p>If the stream doesn't exist yet, this message creates the stream.</p>
<p>The server won't send OnOutputConstraints() until after the client has
sent at least one QueueInput* message.</p>
<p>All servers must permit QueueInputFormatDetails() at the start of a
stream without failing, as long as the new format is supported by the
StreamProcessor instance. Technically this allows for a server to only
support the exact input format set during StreamProcessor creation, and
that is by design. A client that tries to switch formats and gets a
StreamProcessor channel failure should try again one more time with a
fresh StreamProcessor instance created with CodecFactory using the new
input format during creation, before giving up.</p>
<p>These format details override the format details specified during stream
processor creation for this stream only. The next stream will default
back to the format details set during stream processor creation.</p>
<p>This message is permitted at the start of the first stream (just like at
the start of any stream). The format specified need not match what was
specified during stream processor creation, but if it doesn't match, the
StreamProcessor channel might close as described above.</p>
#### Request {#StreamProcessor.QueueInputFormatDetails_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
</tr>
<tr>
<td><code>format_details</code></td>
<td>
<code><a class='link' href='#FormatDetails'>FormatDetails</a></code>
</td>
</tr>
</table>
### QueueInputPacket {#StreamProcessor.QueueInputPacket}
<p>This message queues input data to the stream processor for processing.</p>
<p>If the stream doesn't exist yet, this message creates the new stream.</p>
<p>The server won't send OnOutputConstraints() until after the client has
sent at least one QueueInput* message.</p>
<p>The client must continue to deliver input data via this message even if
the stream processor has not yet generated the first OnOutputConstraints(),
and even if the StreamProcessor is generating OnFreeInputPacket() for
previously-queued input packets. The client must keep delivering input
data as long as there are free input packets, to be assured that the
server will ever generate the first OnOutputConstraints().</p>
#### Request {#StreamProcessor.QueueInputPacket_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#Packet'>Packet</a></code>
</td>
</tr>
</table>
### RecycleOutputPacket {#StreamProcessor.RecycleOutputPacket}
<p>After the client is done with an output packet, the client needs to tell
the stream processor that the output packet can be re-used for more
output, via this method.</p>
<p>It's not permitted to recycle an output packet that's already free with
the stream processor server. It's permitted but discouraged for a
client to recycle an output packet that has been deallocated by an
explicit or implicit output buffer de-configuration. See
buffer_lifetime_ordinal for more on that. A server must ignore any such
stale RecycleOutputPacket() calls.</p>
#### Request {#StreamProcessor.RecycleOutputPacket_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>available_output_packet</code></td>
<td>
<code><a class='link' href='#PacketHeader'>PacketHeader</a></code>
</td>
</tr>
</table>
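The busy/free packet bookkeeping and stale-recycle rule above can be modeled in a few lines. This is plain Python, not real FIDL bindings; `OutputPacketPool` and its fields are hypothetical stand-ins for server-side state keyed by buffer_lifetime_ordinal.

```python
# Illustrative tracking of busy/free output packets, including ignoring
# stale RecycleOutputPacket() calls from a prior buffer configuration.
# Plain Python, not real FIDL bindings; names here are hypothetical.

class OutputPacketPool:
    def __init__(self, buffer_lifetime_ordinal, packet_count):
        self.buffer_lifetime_ordinal = buffer_lifetime_ordinal
        self.packet_count = packet_count
        self.busy = set()  # packet indexes currently owned by the client

    def emit_packet(self, index):
        # The server may only emit packets that are currently free.
        assert index not in self.busy
        self.busy.add(index)

    def recycle(self, buffer_lifetime_ordinal, index):
        if buffer_lifetime_ordinal != self.buffer_lifetime_ordinal:
            return  # stale recycle from a de-configured buffer set: ignored
        # Recycling a packet that's already free would be a protocol error.
        assert index in self.busy
        self.busy.discard(index)
```

Note that switching streams does not reset `busy`: as stated above, a stream change doesn't change which packets are with the client vs. the server.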
### SetInputBufferPartialSettings {#StreamProcessor.SetInputBufferPartialSettings}
<p>This is the replacement for SetInputBufferSettings().</p>
<p>When the client is using sysmem to allocate buffers, this message is
used instead of SetInputBufferSettings()+AddInputBuffer(). Instead, a
single SetInputBufferPartialSettings() provides the StreamProcessor with
the client-specified input settings and a BufferCollectionToken which
the StreamProcessor will use to convey constraints to sysmem. Both the
client and the StreamProcessor will be informed of the allocated buffers
directly by sysmem via their BufferCollection channel (not via the
StreamProcessor channel).</p>
<p>The client must not QueueInput...() until after sysmem informs the client
that buffer allocation has completed and was successful.</p>
<p>The server should be prepared to see QueueInput...() before the server
has necessarily heard from sysmem that the buffers are allocated - the
server must tolerate either ordering, as the QueueInput...() and
notification of sysmem allocation completion arrive on different
channels, so the client having heard that allocation is complete doesn't
mean the server knows that allocation is complete yet. However, the
server can expect that allocation is in fact complete and can expect to
get the allocation information from sysmem immediately upon requesting
the information from sysmem.</p>
#### Request {#StreamProcessor.SetInputBufferPartialSettings_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>input_settings</code></td>
<td>
<code><a class='link' href='#StreamBufferPartialSettings'>StreamBufferPartialSettings</a></code>
</td>
</tr>
</table>
### SetOutputBufferPartialSettings {#StreamProcessor.SetOutputBufferPartialSettings}
<p>This is the replacement for SetOutputBufferSettings().</p>
<p>When the client is using sysmem to allocate buffers, this message is
used instead of SetOutputBufferSettings()+AddOutputBuffer(). Instead, a
single SetOutputBufferPartialSettings() provides the StreamProcessor
with the client-specified output settings and a BufferCollectionToken
which the StreamProcessor will use to convey constraints to sysmem.
Both the client and the StreamProcessor will be informed of the
allocated buffers directly by sysmem via their BufferCollection channel
(not via the StreamProcessor channel).</p>
<p>Configuring output buffers is <em>required</em> after OnOutputConstraints() is
received by the client with buffer_constraints_action_required true and
stream_lifetime_ordinal equal to the client's current
stream_lifetime_ordinal (even if there is an active stream), and is
<em>permitted</em> any time there is no current stream.</p>
<p>Closing the current stream occurs on the StreamControl ordering domain,
so after a CloseCurrentStream() or FlushEndOfStreamAndCloseStream(), a
subsequent Sync() completion must be received by the client before the
client knows that there's no longer a current stream.</p>
<p>See also CompleteOutputBufferPartialSettings().</p>
#### Request {#StreamProcessor.SetOutputBufferPartialSettings_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>output_settings</code></td>
<td>
<code><a class='link' href='#StreamBufferPartialSettings'>StreamBufferPartialSettings</a></code>
</td>
</tr>
</table>
### Sync {#StreamProcessor.Sync}
<p>On completion, all previous StreamProcessor calls have done what they're
going to do server-side, <em>except</em> for processing of data queued using
QueueInputPacket().</p>
<p>The main purpose of this call is to enable the client to wait until
CloseCurrentStream() with release_input_buffers and/or
release_output_buffers set to true to take effect, before the client
allocates new buffers and re-sets-up input and/or output buffers. This
de-overlapping of resource usage can be worthwhile for media buffers
which can consume resource types whose overall pools aren't necessarily
vast in comparison to the resources consumed, especially if a client is
reconfiguring buffers multiple times.</p>
<p>Note that Sync() prior to allocating new media buffers is not alone
sufficient to achieve non-overlap of media buffer resource usage system
wide, but it can be a useful part of achieving that.</p>
<p>The Sync() transits the Output ordering domain and the StreamControl
ordering domain, but not the InputData ordering domain.</p>
<p>This request can be used to avoid hitting kMaxInFlightStreams which is
presently 10. A client that stays at or below 8 in-flight streams will
comfortably stay under the limit of 10. While the protocol permits
repeated SetInputBufferSettings() and the like, a client that spams the
channel can expect that the channel will just close if the server or the
channel itself gets too far behind.</p>
#### Request {#StreamProcessor.Sync_Request}
&lt;EMPTY&gt;
#### Response {#StreamProcessor.Sync_Response}
&lt;EMPTY&gt;
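The in-flight-stream bounding described above can be sketched as a simple counter that waits on Sync() completion before opening more streams. This is plain Python, not real FIDL bindings; `StreamCounter`, `run_streams`, and the synchronous `sync_completed()` call are hypothetical simplifications (a real client would wait asynchronously for the Sync() response).

```python
# Illustrative use of Sync() to stay under kMaxInFlightStreams.
# Plain Python, not real FIDL bindings; names here are hypothetical.

K_MAX_IN_FLIGHT_STREAMS = 10  # current limit, per the text above

class StreamCounter:
    def __init__(self):
        self.in_flight = 0

    def start_stream(self):
        self.in_flight += 1
        assert self.in_flight <= K_MAX_IN_FLIGHT_STREAMS

    def sync_completed(self):
        # Sync() transits the StreamControl ordering domain, so on completion
        # all previously requested stream opens/closes have been processed
        # server-side; this model treats that as draining all of them.
        self.in_flight = 0

def run_streams(total, threshold=8):
    """Open `total` streams, pausing for Sync() at the 8-stream threshold."""
    counter = StreamCounter()
    peak = 0
    for _ in range(total):
        if counter.in_flight >= threshold:
            counter.sync_completed()  # wait for Sync() before opening more
        counter.start_stream()
        peak = max(peak, counter.in_flight)
    return peak
```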
## StreamSink {#StreamSink}
*Defined in [fuchsia.media/stream.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=40)*
<p>Consumes a stream of packets. This interface is typically inherited along
with <code>StreamBufferSet</code> to enable the transport of elementary streams from
clients to services.</p>
### DiscardAllPackets {#StreamSink.DiscardAllPackets}
<p>Discards packets previously sent via <code>SendPacket</code> or <code>SendPacketNoReply</code>
and not yet released. The response is sent after all packets have been
released.</p>
#### Request {#StreamSink.DiscardAllPackets_Request}
&lt;EMPTY&gt;
#### Response {#StreamSink.DiscardAllPackets_Response}
&lt;EMPTY&gt;
### DiscardAllPacketsNoReply {#StreamSink.DiscardAllPacketsNoReply}
<p>Discards packets previously sent via <code>SendPacket</code> or <code>SendPacketNoReply</code>
and not yet released.</p>
#### Request {#StreamSink.DiscardAllPacketsNoReply_Request}
&lt;EMPTY&gt;
### EndOfStream {#StreamSink.EndOfStream}
<p>Indicates the stream has ended. The precise semantics of this method are
determined by the inheriting interface.</p>
#### Request {#StreamSink.EndOfStream_Request}
&lt;EMPTY&gt;
### SendPacket {#StreamSink.SendPacket}
<p>Sends a packet to the service. The response is sent when the service is
done with the associated payload memory.</p>
<p><code>packet</code> must be valid for the current buffer set, otherwise the service
will close the connection.</p>
#### Request {#StreamSink.SendPacket_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
#### Response {#StreamSink.SendPacket_Response}
&lt;EMPTY&gt;
### SendPacketNoReply {#StreamSink.SendPacketNoReply}
<p>Sends a packet to the service. This interface doesn't define how the
client knows when the sink is done with the associated payload memory.
The inheriting interface must define that.</p>
<p><code>packet</code> must be valid for the current buffer set, otherwise the service
will close the connection.</p>
#### Request {#StreamSink.SendPacketNoReply_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
## StreamSource {#StreamSource}
*Defined in [fuchsia.media/stream.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=77)*
<p>Produces a stream of packets. This interface is typically inherited along
with <code>StreamBufferSet</code> to enable the transport of elementary streams from
services to clients.</p>
### DiscardAllPackets {#StreamSource.DiscardAllPackets}
#### Request {#StreamSource.DiscardAllPackets_Request}
&lt;EMPTY&gt;
#### Response {#StreamSource.DiscardAllPackets_Response}
&lt;EMPTY&gt;
### DiscardAllPacketsNoReply {#StreamSource.DiscardAllPacketsNoReply}
#### Request {#StreamSource.DiscardAllPacketsNoReply_Request}
&lt;EMPTY&gt;
### OnEndOfStream {#StreamSource.OnEndOfStream}
<p>Indicates that the stream has ended.</p>
#### Response {#StreamSource.OnEndOfStream_Response}
&lt;EMPTY&gt;
### OnPacketProduced {#StreamSource.OnPacketProduced}
<p>Delivers a packet produced by the service. When the client is done with
the payload memory, the client must call <code>ReleasePacket</code> to release the
payload memory.</p>
#### Response {#StreamSource.OnPacketProduced_Response}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
### ReleasePacket {#StreamSource.ReleasePacket}
<p>Releases payload memory associated with a packet previously delivered
via <code>OnPacketProduced</code>.</p>
#### Request {#StreamSource.ReleasePacket_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>packet</code></td>
<td>
<code><a class='link' href='#StreamPacket'>StreamPacket</a></code>
</td>
</tr>
</table>
## UsageAudioConsumerFactory {#UsageAudioConsumerFactory}
*Defined in [fuchsia.media/audio_consumer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_consumer.fidl;l=24)*
<p>Interface for creating audio consumers for local rendering.</p>
### CreateAudioConsumer {#UsageAudioConsumerFactory.CreateAudioConsumer}
<p>Creates an <code>AudioConsumer</code>, which is an interface for playing audio, given a usage value.
Audio submitted to such a consumer is always rendered locally.</p>
#### Request {#UsageAudioConsumerFactory.CreateAudioConsumer_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#AudioRenderUsage'>AudioRenderUsage</a></code>
</td>
</tr>
<tr>
<td><code>audio_consumer_request</code></td>
<td>
<code>server_end&lt;<a class='link' href='#AudioConsumer'>AudioConsumer</a>&gt;</code>
</td>
</tr>
</table>
## UsageGainListener {#UsageGainListener}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=81)*
<p>A protocol for watching changes to usage gain settings.</p>
<p>The channel will close when the device is not present.</p>
### OnGainMuteChanged {#UsageGainListener.OnGainMuteChanged}
<p>Called immediately on connection and afterward any time
the usage gain setting changes.</p>
<p>Clients must respond to acknowledge the event. Clients that do not acknowledge their
events will eventually be disconnected.</p>
<p>Note: This API does not have mute reporting implemented; <code>muted</code> is always false.</p>
#### Request {#UsageGainListener.OnGainMuteChanged_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>muted</code></td>
<td>
<code>bool</code>
</td>
</tr>
<tr>
<td><code>gain_dbfs</code></td>
<td>
<code>float32</code>
</td>
</tr>
</table>
#### Response {#UsageGainListener.OnGainMuteChanged_Response}
&lt;EMPTY&gt;
## UsageGainReporter {#UsageGainReporter}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=54)*
<p>A protocol for setting up watchers of usage gain.</p>
### RegisterListener {#UsageGainReporter.RegisterListener}
<p>Connects a listener to a stream of usage gain setting changes
for <code>usage</code> on the device identified by <code>device_token</code>. Usage
Gain is not set directly by any client; it is a translation of
the usage volume setting for each device, summed with active
muting/ducking gain adjustments.</p>
<p>Devices may map the same volume level to different dBFS values, so
a <code>device_unique_id</code> is needed to identify the device.</p>
<p><code>AudioDeviceEnumerator</code> provides programmatic access to devices
and their unique ids if it is necessary for a client to select
an id at runtime.</p>
#### Request {#UsageGainReporter.RegisterListener_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>device_unique_id</code></td>
<td>
<code>string[36]</code>
</td>
</tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>usage_gain_listener</code></td>
<td>
<code><a class='link' href='#UsageGainListener'>UsageGainListener</a></code>
</td>
</tr>
</table>
## UsageReporter {#UsageReporter}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=45)*
<p>A protocol for setting up watchers of audio usages.</p>
### Watch {#UsageReporter.Watch}
#### Request {#UsageReporter.Watch_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>usage_watcher</code></td>
<td>
<code><a class='link' href='#UsageWatcher'>UsageWatcher</a></code>
</td>
</tr>
</table>
## UsageWatcher {#UsageWatcher}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=30)*
<p>A protocol for listening to changes to the policy state of an audio usage.</p>
<p>User actions, such as lowering the volume or muting a stream, are not reflected in this
API.</p>
### OnStateChanged {#UsageWatcher.OnStateChanged}
<p>Called on first connection and whenever the watched usage changes. The provided
usage will always be the bound usage; it is provided so that an implementation of
this protocol may be bound to more than one usage.</p>
<p>Clients must respond to acknowledge the event. Clients that do not acknowledge their
events will eventually be disconnected.</p>
#### Request {#UsageWatcher.OnStateChanged_Request}
<table>
<tr><th>Name</th><th>Type</th></tr>
<tr>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#Usage'>Usage</a></code>
</td>
</tr>
<tr>
<td><code>state</code></td>
<td>
<code><a class='link' href='#UsageState'>UsageState</a></code>
</td>
</tr>
</table>
#### Response {#UsageWatcher.OnStateChanged_Response}
&lt;EMPTY&gt;
## **STRUCTS**
### AacConstantBitRate {#AacConstantBitRate data-text="AacConstantBitRate"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=539)*
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="AacConstantBitRate.bit_rate">
<td><code>bit_rate</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Bits per second</p>
</td>
<td>No default</td>
</tr>
</table>
### AacEncoderSettings {#AacEncoderSettings data-text="AacEncoderSettings"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=568)*
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="AacEncoderSettings.transport">
<td><code>transport</code></td>
<td>
<code><a class='link' href='#AacTransport'>AacTransport</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AacEncoderSettings.channel_mode">
<td><code>channel_mode</code></td>
<td>
<code><a class='link' href='#AacChannelMode'>AacChannelMode</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AacEncoderSettings.bit_rate">
<td><code>bit_rate</code></td>
<td>
<code><a class='link' href='#AacBitRate'>AacBitRate</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AacEncoderSettings.aot">
<td><code>aot</code></td>
<td>
<code><a class='link' href='#AacAudioObjectType'>AacAudioObjectType</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### AacTransportAdts {#AacTransportAdts data-text="AacTransportAdts"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=526)*
<p>AAC inside ADTS</p>
&lt;EMPTY&gt;
### AacTransportLatm {#AacTransportLatm data-text="AacTransportLatm"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=520)*
<p>AAC inside LATM</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="AacTransportLatm.mux_config_present">
<td><code>mux_config_present</code></td>
<td>
<code>bool</code>
</td>
<td><p>Whether MuxConfiguration stream element is present</p>
</td>
<td>No default</td>
</tr>
</table>
### AacTransportRaw {#AacTransportRaw data-text="AacTransportRaw"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=517)*
<p>Raw AAC access units.</p>
&lt;EMPTY&gt;
### AudioCompressedFormatAac {#AudioCompressedFormatAac data-text="AudioCompressedFormatAac"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=99)*
&lt;EMPTY&gt;
### AudioCompressedFormatSbc {#AudioCompressedFormatSbc data-text="AudioCompressedFormatSbc"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=101)*
&lt;EMPTY&gt;
### AudioDeviceInfo {#AudioDeviceInfo data-text="AudioDeviceInfo"}
*Defined in [fuchsia.media/audio_device_enumerator.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_device_enumerator.fidl;l=19)*
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="AudioDeviceInfo.name">
<td><code>name</code></td>
<td>
<code>string</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioDeviceInfo.unique_id">
<td><code>unique_id</code></td>
<td>
<code>string</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioDeviceInfo.token_id">
<td><code>token_id</code></td>
<td>
<code>uint64</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioDeviceInfo.is_input">
<td><code>is_input</code></td>
<td>
<code>bool</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioDeviceInfo.gain_info">
<td><code>gain_info</code></td>
<td>
<code><a class='link' href='#AudioGainInfo'>AudioGainInfo</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioDeviceInfo.is_default">
<td><code>is_default</code></td>
<td>
<code>bool</code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### AudioGainInfo {#AudioGainInfo data-text="AudioGainInfo"}
*Defined in [fuchsia.media/audio_device_enumerator.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_device_enumerator.fidl;l=14)*
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="AudioGainInfo.gain_db">
<td><code>gain_db</code></td>
<td>
<code>float32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioGainInfo.flags">
<td><code>flags</code></td>
<td>
<code><a class='link' href='#AudioGainInfoFlags'>AudioGainInfoFlags</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### AudioStreamType {#AudioStreamType data-text="AudioStreamType"}
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=82)*
<p>Describes the type of an audio elementary stream.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="AudioStreamType.sample_format">
<td><code>sample_format</code></td>
<td>
<code><a class='link' href='#AudioSampleFormat'>AudioSampleFormat</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioStreamType.channels">
<td><code>channels</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="AudioStreamType.frames_per_second">
<td><code>frames_per_second</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### Compression {#Compression data-text="Compression"}
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=65)*
<p>Describes the compression applied to a stream. This type can be used in conjunction with
<code>AudioStreamType</code> or <code>VideoStreamType</code> to represent a medium-specific compressed type.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="Compression.type">
<td><code>type</code></td>
<td>
<code><a class='link' href='#CompressionType'>CompressionType</a></code>
</td>
<td><p>The type of compression applied to the stream. This is generally one of the
<code>*_ENCODING_*</code> values, though <code>AUDIO_ENCODING_LPCM</code> and <code>VIDEO_ENCODING_UNCOMPRESSED</code> must not be used,
because those encodings are regarded as uncompressed.</p>
</td>
<td>No default</td>
</tr>
<tr id="Compression.parameters">
<td><code>parameters</code></td>
<td>
<code>vector&lt;uint8&gt;[8192]?</code>
</td>
<td><p>Type-specific, opaque 'out-of-band' parameters describing the compression of the stream.</p>
</td>
<td>No default</td>
</tr>
</table>
### EncryptionPattern {#EncryptionPattern data-text="EncryptionPattern"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=382)*
<p>EncryptionPattern</p>
<p>Pattern encryption utilizes a pattern of encrypted and clear 16-byte blocks
over the protected range of a subsample (the encrypted_bytes of a
<code>SubsampleEntry</code>). This structure specifies the number of encrypted data
blocks followed by the number of clear data blocks.</p>
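The pattern above can be sketched as a byte-range computation. This is a minimal Python illustration (the helper name and return shape are hypothetical, not part of the FIDL API): it splits a subsample's protected range into alternating encrypted and clear runs, starting with the encrypted blocks as described.

```python
BLOCK = 16  # pattern encryption operates on 16-byte blocks

def pattern_runs(encrypted_blocks, clear_blocks, protected_len):
    """Split a protected range into (offset, length, is_encrypted) runs."""
    if encrypted_blocks == 0 and clear_blocks == 0:
        return [(0, protected_len, False)]  # no pattern: treat the range as clear
    runs, pos = [], 0
    while pos < protected_len:
        # Encrypted blocks come first, followed by clear blocks.
        for count, encrypted in ((encrypted_blocks, True), (clear_blocks, False)):
            length = min(count * BLOCK, protected_len - pos)
            if length > 0:
                runs.append((pos, length, encrypted))
                pos += length
            if pos >= protected_len:
                break
    return runs
```

For example, a 1:9 pattern (one encrypted block, nine clear blocks) over a 176-byte protected range yields an encrypted run at offset 0, a clear run covering the next nine blocks, then a final encrypted run.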
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="EncryptionPattern.clear_blocks">
<td><code>clear_blocks</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="EncryptionPattern.encrypted_blocks">
<td><code>encrypted_blocks</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### Metadata {#Metadata data-text="Metadata"}
*Defined in [fuchsia.media/metadata.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=7)*
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="Metadata.properties">
<td><code>properties</code></td>
<td>
<code>vector&lt;<a class='link' href='#Property'>Property</a>&gt;</code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### Parameter {#Parameter data-text="Parameter"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=33)*
<p>Parameter</p>
<p>Generic parameter.</p>
<p>We want to minimize use of this generic &quot;Parameter&quot; structure by natively
defining as many stream-specific parameter semantics as we can.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="Parameter.scope">
<td><code>scope</code></td>
<td>
<code>string</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="Parameter.name">
<td><code>name</code></td>
<td>
<code>string</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="Parameter.value">
<td><code>value</code></td>
<td>
<code><a class='link' href='#Value'>Value</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### PcmFormat {#PcmFormat data-text="PcmFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=165)*
<p>PcmFormat</p>
<p>PCM audio format details.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="PcmFormat.pcm_mode">
<td><code>pcm_mode</code></td>
<td>
<code><a class='link' href='#AudioPcmMode'>AudioPcmMode</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="PcmFormat.bits_per_sample">
<td><code>bits_per_sample</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="PcmFormat.frames_per_second">
<td><code>frames_per_second</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="PcmFormat.channel_map">
<td><code>channel_map</code></td>
<td>
<code>vector&lt;<a class='link' href='#AudioChannelId'>AudioChannelId</a>&gt;[16]</code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### Property {#Property data-text="Property"}
*Defined in [fuchsia.media/metadata.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=11)*
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="Property.label">
<td><code>label</code></td>
<td>
<code>string</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="Property.value">
<td><code>value</code></td>
<td>
<code>string</code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### SbcEncoderSettings {#SbcEncoderSettings data-text="SbcEncoderSettings"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=504)*
<p>Settings for an SBC Encoder.</p>
<p>SBC encoders take signed, little-endian, 16-bit linear PCM samples and
return encoded SBC frames. The encoder consumes PCM data in batches of
<code>sub_bands * block_count</code> PCM frames. It accepts PCM data on
arbitrary frame boundaries, but the output flushed when EOS is queued may be
zero-padded to form a full batch for encoding.</p>
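The batch-size relationship can be made concrete with a small sketch (the helper name is hypothetical; the defaults mirror the documented <code>SUB_BANDS_8</code> and <code>BLOCK_COUNT_4</code> defaults below, giving a 32-frame batch):

```python
def sbc_eos_padding(pcm_frames, sub_bands=8, block_count=4):
    """PCM frames of zero-padding appended at EOS to complete the final batch."""
    batch = sub_bands * block_count  # PCM frames consumed per encoded SBC frame
    remainder = pcm_frames % batch
    return (batch - remainder) % batch
```

With the defaults, queueing EOS after 100 PCM frames would leave a partial batch of 4 frames, so up to 28 frames of zero-padding may be added.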
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="SbcEncoderSettings.sub_bands">
<td><code>sub_bands</code></td>
<td>
<code><a class='link' href='#SbcSubBands'>SbcSubBands</a></code>
</td>
<td></td>
<td><a class='link' href='#SbcSubBands.SUB_BANDS_8'>SbcSubBands.SUB_BANDS_8</a></td>
</tr>
<tr id="SbcEncoderSettings.allocation">
<td><code>allocation</code></td>
<td>
<code><a class='link' href='#SbcAllocation'>SbcAllocation</a></code>
</td>
<td></td>
<td><a class='link' href='#SbcAllocation.ALLOC_LOUDNESS'>SbcAllocation.ALLOC_LOUDNESS</a></td>
</tr>
<tr id="SbcEncoderSettings.block_count">
<td><code>block_count</code></td>
<td>
<code><a class='link' href='#SbcBlockCount'>SbcBlockCount</a></code>
</td>
<td></td>
<td><a class='link' href='#SbcBlockCount.BLOCK_COUNT_4'>SbcBlockCount.BLOCK_COUNT_4</a></td>
</tr>
<tr id="SbcEncoderSettings.channel_mode">
<td><code>channel_mode</code></td>
<td>
<code><a class='link' href='#SbcChannelMode'>SbcChannelMode</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="SbcEncoderSettings.bit_pool">
<td><code>bit_pool</code></td>
<td>
<code>uint64</code>
</td>
<td><p>SBC bit pool value.</p>
</td>
<td>No default</td>
</tr>
</table>
### StreamPacket {#StreamPacket data-text="StreamPacket"}
*Defined in [fuchsia.media/stream.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=114)*
<p>Describes a packet consumed by <code>StreamSink</code> or produced by <code>StreamSource</code>.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="StreamPacket.pts">
<td><code>pts</code></td>
<td>
<code>int64</code>
</td>
<td><p>Time at which the packet is to be presented, according to the
presentation clock.</p>
</td>
<td><a class='link' href='#NO_TIMESTAMP'>NO_TIMESTAMP</a></td>
</tr>
<tr id="StreamPacket.payload_buffer_id">
<td><code>payload_buffer_id</code></td>
<td>
<code>uint32</code>
</td>
<td><p>ID of the payload buffer used for this packet.</p>
<p>When this struct is used with <code>StreamBufferSet</code>, this field is the ID of
a payload buffer provided via <code>StreamBufferSet.AddPayloadBuffer</code>. In
that case, this value must identify a payload buffer in the current set.
Other interfaces may define different semantics for this field.</p>
</td>
<td>No default</td>
</tr>
<tr id="StreamPacket.payload_offset">
<td><code>payload_offset</code></td>
<td>
<code>uint64</code>
</td>
<td><p>Offset of the packet payload in the payload buffer.</p>
<p>This value plus the <code>payload_size</code> value must be less than or equal to
the size of the referenced payload buffer.</p>
</td>
<td>No default</td>
</tr>
<tr id="StreamPacket.payload_size">
<td><code>payload_size</code></td>
<td>
<code>uint64</code>
</td>
<td><p>Size in bytes of the payload.</p>
<p>This value plus the <code>payload_offset</code> value must be less than or equal to
the size of the referenced payload buffer.</p>
</td>
<td>No default</td>
</tr>
<tr id="StreamPacket.flags">
<td><code>flags</code></td>
<td>
<code>uint32</code>
</td>
<td><p>A bitwise-OR'ed set of flags (see constants below) describing
properties of this packet.</p>
</td>
<td>0</td>
</tr>
<tr id="StreamPacket.buffer_config">
<td><code>buffer_config</code></td>
<td>
<code>uint64</code>
</td>
<td><p>The buffer configuration associated with this packet. The semantics of
this field depend on the interface with which this struct is used.
In many contexts, this field is not used. This field is intended for
situations in which buffer configurations (i.e. sets of payload buffers)
are explicitly identified. In such cases, the <code>payload_buffer_id</code> refers
to a payload buffer in the buffer configuration identified by this
field.</p>
</td>
<td>0</td>
</tr>
<tr id="StreamPacket.stream_segment_id">
<td><code>stream_segment_id</code></td>
<td>
<code>uint64</code>
</td>
<td><p>The stream segment associated with this packet. The semantics of this
field depend on the interface with which this struct is used. In many
contexts, this field is not used. This field is intended to distinguish
contiguous segments of the stream where stream properties (e.g.
encoding) may differ from segment to segment.</p>
</td>
<td>0</td>
</tr>
</table>
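The <code>payload_offset</code> and <code>payload_size</code> constraints above amount to a single bounds check. A minimal sketch (hypothetical helper name, not part of the FIDL API):

```python
def packet_payload_valid(payload_offset, payload_size, payload_buffer_size):
    """StreamPacket invariant: offset + size must not exceed the buffer size."""
    return payload_offset + payload_size <= payload_buffer_size
```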
### StreamType {#StreamType data-text="StreamType"}
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=13)*
<p>Describes the type of an elementary stream.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="StreamType.medium_specific">
<td><code>medium_specific</code></td>
<td>
<code><a class='link' href='#MediumSpecificStreamType'>MediumSpecificStreamType</a></code>
</td>
<td><p>Medium-specific type information.</p>
</td>
<td>No default</td>
</tr>
<tr id="StreamType.encoding">
<td><code>encoding</code></td>
<td>
<code>string[255]</code>
</td>
<td><p>Encoding (see constants below). This value is represented as a string
so that new encodings can be introduced without modifying this file.</p>
</td>
<td>No default</td>
</tr>
<tr id="StreamType.encoding_parameters">
<td><code>encoding_parameters</code></td>
<td>
<code>vector&lt;uint8&gt;?</code>
</td>
<td><p>Encoding-specific parameters, sometimes referred to as 'out-of-band
data'. Typically, this data is associated with a compressed stream and
provides parameters required to decompress the stream. This data is
generally opaque to all parties except the producer and consumer of the
stream.</p>
</td>
<td>No default</td>
</tr>
</table>
### SubpictureStreamType {#SubpictureStreamType data-text="SubpictureStreamType"}
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=152)*
<p>Describes the type of a subpicture elementary stream.</p>
&lt;EMPTY&gt;
### SubsampleEntry {#SubsampleEntry data-text="SubsampleEntry"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=371)*
<p>SubsampleEntry</p>
<p>A subsample is a byte range within a sample consisting of a clear byte range
followed by an encrypted byte range. This structure specifies the size of
each range in the subsample.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="SubsampleEntry.clear_bytes">
<td><code>clear_bytes</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="SubsampleEntry.encrypted_bytes">
<td><code>encrypted_bytes</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
</table>
### TextStreamType {#TextStreamType data-text="TextStreamType"}
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=144)*
<p>Describes the type of a text elementary stream.</p>
&lt;EMPTY&gt;
### TimelineFunction {#TimelineFunction data-text="TimelineFunction"}
*Defined in [fuchsia.media/timeline_function.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/timeline_function.fidl;l=41)*
<p>A TimelineFunction represents a relationship between a subject timeline and a
reference timeline with a linear relation.</p>
<p>For example, consider a common use case in which reference time is the
monotonic clock of a system and subject time is intended presentation time
for some media such as a video.</p>
<p><code>reference_time</code> is the value of the monotonic clock at the beginning of
playback. <code>subject_time</code> is 0 assuming playback starts at the beginning of
the media. We then choose a <code>reference_delta</code> and <code>subject_delta</code> so that
<code>subject_delta</code> / <code>reference_delta</code> represents the desired playback rate,
e.g. 0/1 for paused and 1/1 for normal playback.</p>
<h2>Formulas</h2>
<p>With a function we can determine the subject timeline value <code>s</code> in terms of
reference timeline value <code>r</code> with this formula (where <code>reference_delta</code> &gt; 0):</p>
<p>s = (r - reference_time) * (subject_delta / reference_delta) + subject_time</p>
<p>And similarly we can find the reference timeline value <code>r</code> in terms of
subject timeline value <code>s</code> with this formula (where <code>subject_delta</code> &gt; 0):</p>
<p>r = (s - subject_time) * (reference_delta / subject_delta) + reference_time</p>
<h2>Choosing time values</h2>
<p>Time values can be arbitrary and our linear relation will of course be the
same, but we can use them to represent the bounds of pieces in a piecewise
linear relation.</p>
<p>For example, if a user performs skip-chapter, we might want to describe
this with a TimelineFunction whose <code>subject_time</code> is the time to skip to,
<code>reference_time</code> is now plus some epsilon, and delta ratio is 1/1 for normal
playback rate.</p>
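The forward formula above can be sketched directly in Python (hypothetical helper name; floor division is one reasonable choice for integer timestamps, and is an assumption here rather than something the FIDL definition mandates):

```python
def reference_to_subject(r, subject_time, reference_time,
                         subject_delta, reference_delta):
    """s = (r - reference_time) * (subject_delta / reference_delta) + subject_time"""
    assert reference_delta > 0  # the FIDL struct requires a nonzero reference_delta
    return (r - reference_time) * subject_delta // reference_delta + subject_time
```

For the playback example above: with <code>reference_time</code> at the monotonic time playback started, <code>subject_time</code> 0, and a 1/1 rate, a reference clock 500,000 ns past the start maps to a presentation time of 500,000 ns; with a 0/1 (paused) rate, every reference time maps to <code>subject_time</code>.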
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="TimelineFunction.subject_time">
<td><code>subject_time</code></td>
<td>
<code>int64</code>
</td>
<td><p>A value from the subject timeline that correlates to reference_time.</p>
</td>
<td>0</td>
</tr>
<tr id="TimelineFunction.reference_time">
<td><code>reference_time</code></td>
<td>
<code>int64</code>
</td>
<td><p>A value from the reference timeline that correlates to subject_time.</p>
</td>
<td>0</td>
</tr>
<tr id="TimelineFunction.subject_delta">
<td><code>subject_delta</code></td>
<td>
<code>uint32</code>
</td>
<td><p>The change in the subject timeline corresponding to reference_delta.</p>
</td>
<td>0</td>
</tr>
<tr id="TimelineFunction.reference_delta">
<td><code>reference_delta</code></td>
<td>
<code>uint32</code>
</td>
<td><p>The change in the reference timeline corresponding to subject_delta.
Cannot be zero.</p>
</td>
<td>1</td>
</tr>
</table>
### VideoStreamType {#VideoStreamType data-text="VideoStreamType"}
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=108)*
<p>Describes the type of a video elementary stream.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="VideoStreamType.pixel_format">
<td><code>pixel_format</code></td>
<td>
<code><a class='link' href='../fuchsia.images/'>fuchsia.images</a>/<a class='link' href='../fuchsia.images/#PixelFormat'>PixelFormat</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.color_space">
<td><code>color_space</code></td>
<td>
<code><a class='link' href='#ColorSpace'>ColorSpace</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.width">
<td><code>width</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Dimensions of the video frames as displayed in pixels.</p>
</td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.height">
<td><code>height</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.coded_width">
<td><code>coded_width</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Dimensions of the video frames as encoded in pixels. These values must
be equal to or greater than the respective width/height values.</p>
</td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.coded_height">
<td><code>coded_height</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.pixel_aspect_ratio_width">
<td><code>pixel_aspect_ratio_width</code></td>
<td>
<code>uint32</code>
</td>
<td><p>The aspect ratio of a single pixel as frames are intended to be
displayed.</p>
</td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.pixel_aspect_ratio_height">
<td><code>pixel_aspect_ratio_height</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoStreamType.stride">
<td><code>stride</code></td>
<td>
<code>uint32</code>
</td>
<td><p>The number of bytes per 'coded' row in the primary video plane.</p>
</td>
<td>No default</td>
</tr>
</table>
### VideoUncompressedFormat {#VideoUncompressedFormat data-text="VideoUncompressedFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=246)*
<p>VideoUncompressedFormat</p>
<p>Uncompressed video format details.</p>
<table>
<tr><th>Field</th><th>Type</th><th>Description</th><th>Default</th></tr>
<tr id="VideoUncompressedFormat.image_format">
<td><code>image_format</code></td>
<td>
<code><a class='link' href='../fuchsia.sysmem/'>fuchsia.sysmem</a>/<a class='link' href='../fuchsia.sysmem/#ImageFormat_2'>ImageFormat_2</a></code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.fourcc">
<td><code>fourcc</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.primary_width_pixels">
<td><code>primary_width_pixels</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.primary_height_pixels">
<td><code>primary_height_pixels</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.secondary_width_pixels">
<td><code>secondary_width_pixels</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.secondary_height_pixels">
<td><code>secondary_height_pixels</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.planar">
<td><code>planar</code></td>
<td>
<code>bool</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.swizzled">
<td><code>swizzled</code></td>
<td>
<code>bool</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.primary_line_stride_bytes">
<td><code>primary_line_stride_bytes</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.secondary_line_stride_bytes">
<td><code>secondary_line_stride_bytes</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.primary_start_offset">
<td><code>primary_start_offset</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.secondary_start_offset">
<td><code>secondary_start_offset</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.tertiary_start_offset">
<td><code>tertiary_start_offset</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.primary_pixel_stride">
<td><code>primary_pixel_stride</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.secondary_pixel_stride">
<td><code>secondary_pixel_stride</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.primary_display_width_pixels">
<td><code>primary_display_width_pixels</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.primary_display_height_pixels">
<td><code>primary_display_height_pixels</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>No default</td>
</tr>
<tr id="VideoUncompressedFormat.has_pixel_aspect_ratio">
<td><code>has_pixel_aspect_ratio</code></td>
<td>
<code>bool</code>
</td>
<td></td>
<td>false</td>
</tr>
<tr id="VideoUncompressedFormat.pixel_aspect_ratio_width">
<td><code>pixel_aspect_ratio_width</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>1</td>
</tr>
<tr id="VideoUncompressedFormat.pixel_aspect_ratio_height">
<td><code>pixel_aspect_ratio_height</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
<td>1</td>
</tr>
</table>
### Void {#Void data-text="Void"}
*Defined in [fuchsia.media/audio_consumer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_consumer.fidl;l=149)*
&lt;EMPTY&gt;
## **ENUMS**
### AacAudioObjectType [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AacAudioObjectType data-text="AacAudioObjectType"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=561)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AacAudioObjectType.MPEG2_AAC_LC">
<td><h3 id="AacAudioObjectType.MPEG2_AAC_LC" class="add-link hide-from-toc">MPEG2_AAC_LC</h3></td>
<td><code>0</code></td>
<td><p>MPEG-2 Low Complexity</p>
</td>
</tr>
<tr id="AacAudioObjectType.MPEG4_AAC_LC">
<td><h3 id="AacAudioObjectType.MPEG4_AAC_LC" class="add-link hide-from-toc">MPEG4_AAC_LC</h3></td>
<td><code>1</code></td>
<td><p>MPEG-4 Low Complexity</p>
</td>
</tr>
</table>
### AacChannelMode [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AacChannelMode data-text="AacChannelMode"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=534)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AacChannelMode.MONO">
<td><h3 id="AacChannelMode.MONO" class="add-link hide-from-toc">MONO</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="AacChannelMode.STEREO">
<td><h3 id="AacChannelMode.STEREO" class="add-link hide-from-toc">STEREO</h3></td>
<td><code>2</code></td>
<td></td>
</tr>
</table>
### AacVariableBitRate [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AacVariableBitRate data-text="AacVariableBitRate"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=548)*
<p>Variable bit rate modes. The actual resulting bitrate
varies based on input signal and other encoding settings.</p>
<p>See https://wiki.hydrogenaud.io/index.php?title=Fraunhofer_FDK_AAC#Bitrate_Modes</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AacVariableBitRate.V1">
<td><h3 id="AacVariableBitRate.V1" class="add-link hide-from-toc">V1</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
<tr id="AacVariableBitRate.V2">
<td><h3 id="AacVariableBitRate.V2" class="add-link hide-from-toc">V2</h3></td>
<td><code>2</code></td>
<td></td>
</tr>
<tr id="AacVariableBitRate.V3">
<td><h3 id="AacVariableBitRate.V3" class="add-link hide-from-toc">V3</h3></td>
<td><code>3</code></td>
<td></td>
</tr>
<tr id="AacVariableBitRate.V4">
<td><h3 id="AacVariableBitRate.V4" class="add-link hide-from-toc">V4</h3></td>
<td><code>4</code></td>
<td></td>
</tr>
<tr id="AacVariableBitRate.V5">
<td><h3 id="AacVariableBitRate.V5" class="add-link hide-from-toc">V5</h3></td>
<td><code>5</code></td>
<td></td>
</tr>
</table>
### AudioBitrateMode [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioBitrateMode data-text="AudioBitrateMode"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=89)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioBitrateMode.UNSPECIFIED">
<td><h3 id="AudioBitrateMode.UNSPECIFIED" class="add-link hide-from-toc">UNSPECIFIED</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="AudioBitrateMode.CBR">
<td><h3 id="AudioBitrateMode.CBR" class="add-link hide-from-toc">CBR</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
<tr id="AudioBitrateMode.VBR">
<td><h3 id="AudioBitrateMode.VBR" class="add-link hide-from-toc">VBR</h3></td>
<td><code>2</code></td>
<td></td>
</tr>
</table>
### AudioCaptureUsage [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioCaptureUsage data-text="AudioCaptureUsage"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/audio_core.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=39)*
<p>Usages annotating the purpose of the stream being used to capture audio. The
AudioCaptureUsage is used by audio policy to dictate how audio streams
interact with each other.</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioCaptureUsage.BACKGROUND">
<td><h3 id="AudioCaptureUsage.BACKGROUND" class="add-link hide-from-toc">BACKGROUND</h3></td>
<td><code>0</code></td>
<td><p>Stream is used to capture audio while in the background. These streams
may be active at any time and are considered privileged.
Example: Listening for Hotwords</p>
</td>
</tr>
<tr id="AudioCaptureUsage.FOREGROUND">
<td><h3 id="AudioCaptureUsage.FOREGROUND" class="add-link hide-from-toc">FOREGROUND</h3></td>
<td><code>1</code></td>
<td><p>Stream is intended to be used for normal capture functionality. Streams
that are used for audio capture while the stream creator is in the
foreground should use this.
Example: Voice Recorder</p>
</td>
</tr>
<tr id="AudioCaptureUsage.SYSTEM_AGENT">
<td><h3 id="AudioCaptureUsage.SYSTEM_AGENT" class="add-link hide-from-toc">SYSTEM_AGENT</h3></td>
<td><code>2</code></td>
<td><p>Stream is for interaction with a system agent. This should only be used
once a user has signalled their intent to have the interaction with an
interested party.
Examples: Assistant, Siri, Alexa</p>
</td>
</tr>
<tr id="AudioCaptureUsage.COMMUNICATION">
<td><h3 id="AudioCaptureUsage.COMMUNICATION" class="add-link hide-from-toc">COMMUNICATION</h3></td>
<td><code>3</code></td>
<td><p>Stream is intended to be used for some form of real-time user-to-user
communication. Voice/video chat should use this.</p>
</td>
</tr>
</table>
### AudioChannelId [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioChannelId data-text="AudioChannelId"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=137)*
<p>AudioChannelId</p>
<p>Used in specifying which audio channel is for which speaker location / type.</p>
<p>TODO(dustingreen): Do we need more channel IDs than this?</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioChannelId.SKIP">
<td><h3 id="AudioChannelId.SKIP" class="add-link hide-from-toc">SKIP</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.LF">
<td><h3 id="AudioChannelId.LF" class="add-link hide-from-toc">LF</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.RF">
<td><h3 id="AudioChannelId.RF" class="add-link hide-from-toc">RF</h3></td>
<td><code>2</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.CF">
<td><h3 id="AudioChannelId.CF" class="add-link hide-from-toc">CF</h3></td>
<td><code>3</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.LS">
<td><h3 id="AudioChannelId.LS" class="add-link hide-from-toc">LS</h3></td>
<td><code>4</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.RS">
<td><h3 id="AudioChannelId.RS" class="add-link hide-from-toc">RS</h3></td>
<td><code>5</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.LFE">
<td><h3 id="AudioChannelId.LFE" class="add-link hide-from-toc">LFE</h3></td>
<td><code>6</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.CS">
<td><h3 id="AudioChannelId.CS" class="add-link hide-from-toc">CS</h3></td>
<td><code>7</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.LR">
<td><h3 id="AudioChannelId.LR" class="add-link hide-from-toc">LR</h3></td>
<td><code>8</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.RR">
<td><h3 id="AudioChannelId.RR" class="add-link hide-from-toc">RR</h3></td>
<td><code>9</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.END_DEFINED">
<td><h3 id="AudioChannelId.END_DEFINED" class="add-link hide-from-toc">END_DEFINED</h3></td>
<td><code>10</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.EXTENDED_CHANNEL_ID_BASE">
<td><h3 id="AudioChannelId.EXTENDED_CHANNEL_ID_BASE" class="add-link hide-from-toc">EXTENDED_CHANNEL_ID_BASE</h3></td>
<td><code>1862270976</code></td>
<td></td>
</tr>
<tr id="AudioChannelId.MAX">
<td><h3 id="AudioChannelId.MAX" class="add-link hide-from-toc">MAX</h3></td>
<td><code>2147483647</code></td>
<td></td>
</tr>
</table>
### AudioOutputRoutingPolicy [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioOutputRoutingPolicy data-text="AudioOutputRoutingPolicy"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/audio_core.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=241)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioOutputRoutingPolicy.ALL_PLUGGED_OUTPUTS">
<td><h3 id="AudioOutputRoutingPolicy.ALL_PLUGGED_OUTPUTS" class="add-link hide-from-toc">ALL_PLUGGED_OUTPUTS</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="AudioOutputRoutingPolicy.LAST_PLUGGED_OUTPUT">
<td><h3 id="AudioOutputRoutingPolicy.LAST_PLUGGED_OUTPUT" class="add-link hide-from-toc">LAST_PLUGGED_OUTPUT</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
</table>
### AudioPcmMode [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioPcmMode data-text="AudioPcmMode"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=113)*
<p>AudioPcmMode</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioPcmMode.LINEAR">
<td><h3 id="AudioPcmMode.LINEAR" class="add-link hide-from-toc">LINEAR</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="AudioPcmMode.ALAW">
<td><h3 id="AudioPcmMode.ALAW" class="add-link hide-from-toc">ALAW</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
<tr id="AudioPcmMode.MULAW">
<td><h3 id="AudioPcmMode.MULAW" class="add-link hide-from-toc">MULAW</h3></td>
<td><code>2</code></td>
<td></td>
</tr>
</table>
### AudioRenderUsage [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioRenderUsage data-text="AudioRenderUsage"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/audio_core.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=12)*
<p>Usage annotating the purpose of the stream being used to render audio.
An AudioRenderer's usage cannot be changed after creation. The
AudioRenderUsage is used by audio policy to dictate how audio streams
interact with each other.</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioRenderUsage.BACKGROUND">
<td><h3 id="AudioRenderUsage.BACKGROUND" class="add-link hide-from-toc">BACKGROUND</h3></td>
<td><code>0</code></td>
<td><p>Stream is intended to be used for ambient or background sound. Streams
that can be interrupted without consequence should use this.</p>
</td>
</tr>
<tr id="AudioRenderUsage.MEDIA">
<td><h3 id="AudioRenderUsage.MEDIA" class="add-link hide-from-toc">MEDIA</h3></td>
<td><code>1</code></td>
<td><p>Stream is intended to be used for normal functionality. Streams that
are part of normal functionality should use this.</p>
</td>
</tr>
<tr id="AudioRenderUsage.INTERRUPTION">
<td><h3 id="AudioRenderUsage.INTERRUPTION" class="add-link hide-from-toc">INTERRUPTION</h3></td>
<td><code>2</code></td>
<td><p>Stream is intended to interrupt any ongoing function of the device.
Streams that are used for interruptions like notifications should use
this.</p>
</td>
</tr>
<tr id="AudioRenderUsage.SYSTEM_AGENT">
<td><h3 id="AudioRenderUsage.SYSTEM_AGENT" class="add-link hide-from-toc">SYSTEM_AGENT</h3></td>
<td><code>3</code></td>
<td><p>Stream is for interaction with a system agent. This should be used
in response to a user initiated trigger.</p>
</td>
</tr>
<tr id="AudioRenderUsage.COMMUNICATION">
<td><h3 id="AudioRenderUsage.COMMUNICATION" class="add-link hide-from-toc">COMMUNICATION</h3></td>
<td><code>4</code></td>
<td><p>Stream is intended to be used for some form of real time user to user
communication. Voice/Video chat should use this.</p>
</td>
</tr>
</table>
### AudioSampleFormat [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioSampleFormat data-text="AudioSampleFormat"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=90)*
<p>Enumerates the supported audio sample formats.</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioSampleFormat.UNSIGNED_8">
<td><h3 id="AudioSampleFormat.UNSIGNED_8" class="add-link hide-from-toc">UNSIGNED_8</h3></td>
<td><code>1</code></td>
<td><p>8-bit unsigned samples, sample size 1 byte.</p>
</td>
</tr>
<tr id="AudioSampleFormat.SIGNED_16">
<td><h3 id="AudioSampleFormat.SIGNED_16" class="add-link hide-from-toc">SIGNED_16</h3></td>
<td><code>2</code></td>
<td><p>16-bit signed samples, host-endian, sample size 2 bytes.</p>
</td>
</tr>
<tr id="AudioSampleFormat.SIGNED_24_IN_32">
<td><h3 id="AudioSampleFormat.SIGNED_24_IN_32" class="add-link hide-from-toc">SIGNED_24_IN_32</h3></td>
<td><code>3</code></td>
<td><p>24-bit signed samples in 32 bits, host-endian, sample size 4 bytes.</p>
</td>
</tr>
<tr id="AudioSampleFormat.FLOAT">
<td><h3 id="AudioSampleFormat.FLOAT" class="add-link hide-from-toc">FLOAT</h3></td>
<td><code>4</code></td>
<td><p>32-bit floating-point samples, sample size 4 bytes.</p>
</td>
</tr>
</table>
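As an illustration of the sample sizes listed above, here is a minimal sketch (hypothetical helper names, not part of the FIDL API) that computes the byte size of one audio frame:

```python
# Sample sizes in bytes, as documented for each AudioSampleFormat value.
SAMPLE_SIZE_BYTES = {
    "UNSIGNED_8": 1,       # 8-bit unsigned samples
    "SIGNED_16": 2,        # 16-bit signed, host-endian
    "SIGNED_24_IN_32": 4,  # 24-bit signed carried in 32 bits
    "FLOAT": 4,            # 32-bit floating point
}

def frame_size_bytes(sample_format, channel_count):
    """Bytes occupied by one frame (one sample per channel)."""
    return SAMPLE_SIZE_BYTES[sample_format] * channel_count
```

For example, stereo `SIGNED_16` audio uses 4 bytes per frame, while mono `SIGNED_24_IN_32` also uses 4, since each 24-bit sample is carried in a full 32-bit word.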
### Behavior [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#Behavior data-text="Behavior"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/audio_core.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=64)*
<p>The behaviors applied to streams when multiple are active.</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="Behavior.NONE">
<td><h3 id="Behavior.NONE" class="add-link hide-from-toc">NONE</h3></td>
<td><code>0</code></td>
<td><p>Mix the streams.</p>
</td>
</tr>
<tr id="Behavior.DUCK">
<td><h3 id="Behavior.DUCK" class="add-link hide-from-toc">DUCK</h3></td>
<td><code>1</code></td>
<td><p>Apply a gain to duck the volume of one of the streams. (-14.0 dB)</p>
</td>
</tr>
<tr id="Behavior.MUTE">
<td><h3 id="Behavior.MUTE" class="add-link hide-from-toc">MUTE</h3></td>
<td><code>2</code></td>
<td><p>Apply a gain to mute one of the streams. (-160.0 dB)</p>
</td>
</tr>
</table>
### ColorSpace [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#ColorSpace data-text="ColorSpace"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=132)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="ColorSpace.UNKNOWN">
<td><h3 id="ColorSpace.UNKNOWN" class="add-link hide-from-toc">UNKNOWN</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="ColorSpace.NOT_APPLICABLE">
<td><h3 id="ColorSpace.NOT_APPLICABLE" class="add-link hide-from-toc">NOT_APPLICABLE</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
<tr id="ColorSpace.JPEG">
<td><h3 id="ColorSpace.JPEG" class="add-link hide-from-toc">JPEG</h3></td>
<td><code>2</code></td>
<td></td>
</tr>
<tr id="ColorSpace.HD_REC709">
<td><h3 id="ColorSpace.HD_REC709" class="add-link hide-from-toc">HD_REC709</h3></td>
<td><code>3</code></td>
<td></td>
</tr>
<tr id="ColorSpace.SD_REC601">
<td><h3 id="ColorSpace.SD_REC601" class="add-link hide-from-toc">SD_REC601</h3></td>
<td><code>4</code></td>
<td></td>
</tr>
</table>
### SbcAllocation [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#SbcAllocation data-text="SbcAllocation"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=485)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="SbcAllocation.ALLOC_LOUDNESS">
<td><h3 id="SbcAllocation.ALLOC_LOUDNESS" class="add-link hide-from-toc">ALLOC_LOUDNESS</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="SbcAllocation.ALLOC_SNR">
<td><h3 id="SbcAllocation.ALLOC_SNR" class="add-link hide-from-toc">ALLOC_SNR</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
</table>
### SbcBlockCount [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#SbcBlockCount data-text="SbcBlockCount"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=478)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="SbcBlockCount.BLOCK_COUNT_4">
<td><h3 id="SbcBlockCount.BLOCK_COUNT_4" class="add-link hide-from-toc">BLOCK_COUNT_4</h3></td>
<td><code>4</code></td>
<td></td>
</tr>
<tr id="SbcBlockCount.BLOCK_COUNT_8">
<td><h3 id="SbcBlockCount.BLOCK_COUNT_8" class="add-link hide-from-toc">BLOCK_COUNT_8</h3></td>
<td><code>8</code></td>
<td></td>
</tr>
<tr id="SbcBlockCount.BLOCK_COUNT_12">
<td><h3 id="SbcBlockCount.BLOCK_COUNT_12" class="add-link hide-from-toc">BLOCK_COUNT_12</h3></td>
<td><code>12</code></td>
<td></td>
</tr>
<tr id="SbcBlockCount.BLOCK_COUNT_16">
<td><h3 id="SbcBlockCount.BLOCK_COUNT_16" class="add-link hide-from-toc">BLOCK_COUNT_16</h3></td>
<td><code>16</code></td>
<td></td>
</tr>
</table>
### SbcChannelMode [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#SbcChannelMode data-text="SbcChannelMode"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=490)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="SbcChannelMode.MONO">
<td><h3 id="SbcChannelMode.MONO" class="add-link hide-from-toc">MONO</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
<tr id="SbcChannelMode.DUAL">
<td><h3 id="SbcChannelMode.DUAL" class="add-link hide-from-toc">DUAL</h3></td>
<td><code>1</code></td>
<td></td>
</tr>
<tr id="SbcChannelMode.STEREO">
<td><h3 id="SbcChannelMode.STEREO" class="add-link hide-from-toc">STEREO</h3></td>
<td><code>2</code></td>
<td></td>
</tr>
<tr id="SbcChannelMode.JOINT_STEREO">
<td><h3 id="SbcChannelMode.JOINT_STEREO" class="add-link hide-from-toc">JOINT_STEREO</h3></td>
<td><code>3</code></td>
<td></td>
</tr>
</table>
### SbcSubBands [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#SbcSubBands data-text="SbcSubBands"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=473)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="SbcSubBands.SUB_BANDS_4">
<td><h3 id="SbcSubBands.SUB_BANDS_4" class="add-link hide-from-toc">SUB_BANDS_4</h3></td>
<td><code>4</code></td>
<td></td>
</tr>
<tr id="SbcSubBands.SUB_BANDS_8">
<td><h3 id="SbcSubBands.SUB_BANDS_8" class="add-link hide-from-toc">SUB_BANDS_8</h3></td>
<td><code>8</code></td>
<td></td>
</tr>
</table>
### StreamError [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#StreamError data-text="StreamError"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=47)*
<p>StreamError</p>
<p>This error code encapsulates various errors that might emanate from a
StreamProcessor server. It can be sent either as an OnStreamFailed event or
as an epitaph for the channel.</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="StreamError.UNKNOWN">
<td><h3 id="StreamError.UNKNOWN" class="add-link hide-from-toc">UNKNOWN</h3></td>
<td><code>1</code></td>
<td><p>An internal error with an unspecified reason.</p>
</td>
</tr>
<tr id="StreamError.INVALID_INPUT_FORMAT_DETAILS">
<td><h3 id="StreamError.INVALID_INPUT_FORMAT_DETAILS" class="add-link hide-from-toc">INVALID_INPUT_FORMAT_DETAILS</h3></td>
<td><code>2</code></td>
<td><p>The client provided invalid input format details.</p>
</td>
</tr>
<tr id="StreamError.INCOMPATIBLE_BUFFERS_PROVIDED">
<td><h3 id="StreamError.INCOMPATIBLE_BUFFERS_PROVIDED" class="add-link hide-from-toc">INCOMPATIBLE_BUFFERS_PROVIDED</h3></td>
<td><code>3</code></td>
<td><p>The server received buffers that are not suitable for the operation to
be performed. An example of this would be if a Decoder received output
buffers that are too small to decode a frame into.</p>
</td>
</tr>
<tr id="StreamError.EOS_PROCESSING">
<td><h3 id="StreamError.EOS_PROCESSING" class="add-link hide-from-toc">EOS_PROCESSING</h3></td>
<td><code>4</code></td>
<td><p>Processing of input EOS (end of stream) failed, so the stream failed.
Currently this can occur if a core codec watchdog fires while processing
EOS.</p>
</td>
</tr>
<tr id="StreamError.DECODER_UNKNOWN">
<td><h3 id="StreamError.DECODER_UNKNOWN" class="add-link hide-from-toc">DECODER_UNKNOWN</h3></td>
<td><code>16777217</code></td>
<td><p>An internal decoder error with an unspecified reason.</p>
</td>
</tr>
<tr id="StreamError.DECODER_DATA_PARSING">
<td><h3 id="StreamError.DECODER_DATA_PARSING" class="add-link hide-from-toc">DECODER_DATA_PARSING</h3></td>
<td><code>16777218</code></td>
<td><p>Input data that can't be parsed. Only some parsing problems are
reported this way; corrupt input data may be reported as another
StreamError, or may not cause a StreamError at all.</p>
</td>
</tr>
<tr id="StreamError.ENCODER_UNKNOWN">
<td><h3 id="StreamError.ENCODER_UNKNOWN" class="add-link hide-from-toc">ENCODER_UNKNOWN</h3></td>
<td><code>33554433</code></td>
<td><p>An internal encoder error with an unspecified reason.</p>
</td>
</tr>
<tr id="StreamError.DECRYPTOR_UNKNOWN">
<td><h3 id="StreamError.DECRYPTOR_UNKNOWN" class="add-link hide-from-toc">DECRYPTOR_UNKNOWN</h3></td>
<td><code>50331649</code></td>
<td><p>An internal decryptor error with an unspecified reason.</p>
</td>
</tr>
<tr id="StreamError.DECRYPTOR_NO_KEY">
<td><h3 id="StreamError.DECRYPTOR_NO_KEY" class="add-link hide-from-toc">DECRYPTOR_NO_KEY</h3></td>
<td><code>50331650</code></td>
<td><p>The requested KeyId is not available for use by the Decryptor. The
client may try again later if that key becomes available.</p>
</td>
</tr>
</table>
### VideoColorSpace [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#VideoColorSpace data-text="VideoColorSpace"}
Type: <code>uint32</code>
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=234)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="VideoColorSpace.INVALID">
<td><h3 id="VideoColorSpace.INVALID" class="add-link hide-from-toc">INVALID</h3></td>
<td><code>0</code></td>
<td></td>
</tr>
</table>
## **TABLES**
### AudioCompressedFormatCvsd {#AudioCompressedFormatCvsd data-text="AudioCompressedFormatCvsd"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=107)*
<p>AudioCompressedFormatCvsd contains no fields for now since we will be
using the parameter values recommended by Bluetooth Core Spec v5.3
section 9.2.</p>
<div class="fidl-version-div"><span class="fidl-attribute fidl-version">Added: HEAD</span></div>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
</table>
### AudioConsumerStatus {#AudioConsumerStatus data-text="AudioConsumerStatus"}
*Defined in [fuchsia.media/audio_consumer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_consumer.fidl;l=123)*
<p>Represents the status of the consumer. In the initial status, <code>error</code> and
<code>presentation_timeline</code> are absent. The lead time fields are always present.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="AudioConsumerStatus.error">
<td><h3 id="AudioConsumerStatus.error" class="add-link hide-from-toc">1</h3></td>
<td><code>error</code></td>
<td>
<code><a class='link' href='#AudioConsumerError'>AudioConsumerError</a></code>
</td>
<td><p>If present, indicates an error condition currently in effect. Absent if no error.</p>
</td>
</tr>
<tr id="AudioConsumerStatus.presentation_timeline">
<td><h3 id="AudioConsumerStatus.presentation_timeline" class="add-link hide-from-toc">2</h3></td>
<td><code>presentation_timeline</code></td>
<td>
<code><a class='link' href='#TimelineFunction'>TimelineFunction</a></code>
</td>
<td><p>If present, indicates the current relationship between the presentation timeline
and local monotonic clock, both in nanosecond units. If not present,
indicates there is no relationship. Absent initially.</p>
<p>'Presentation timeline' refers to the <code>pts</code> (presentation timestamp) values on the packets.
This timeline function can be used to determine the local monotonic clock time that a
packet will be presented based on that packet's <code>pts</code> value.</p>
</td>
</tr>
<tr id="AudioConsumerStatus.min_lead_time">
<td><h3 id="AudioConsumerStatus.min_lead_time" class="add-link hide-from-toc">3</h3></td>
<td><code>min_lead_time</code></td>
<td>
<code>uint64</code>
</td>
<td><p>Indicates the minimum lead time in nanoseconds supported by this
<code>AudioConsumer</code>; that is, the smallest allowed gap between the
<code>media_time</code> provided to <code>AudioConsumer.Start</code> and the pts on the first
packet. Values outside this range will be clipped.</p>
</td>
</tr>
<tr id="AudioConsumerStatus.max_lead_time">
<td><h3 id="AudioConsumerStatus.max_lead_time" class="add-link hide-from-toc">4</h3></td>
<td><code>max_lead_time</code></td>
<td>
<code>uint64</code>
</td>
<td><p>Indicates the maximum lead time in nanoseconds supported by this
<code>AudioConsumer</code>; that is, the largest allowed gap between the
<code>media_time</code> provided to <code>AudioConsumer.Start</code> and the pts on the first
packet. Values outside this range will be clipped.</p>
</td>
</tr>
</table>
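The `presentation_timeline` described above is a linear mapping between pts values and the local monotonic clock. A hedged sketch of applying such a mapping (hypothetical parameter and function names; consult the <code>TimelineFunction</code> definition for the actual field layout):

```python
def pts_to_monotonic_ns(pts, subject_time, reference_time,
                        subject_delta, reference_delta):
    """Map a packet's pts (a point on the presentation timeline, the
    'subject') to the monotonic clock time (the 'reference') at which
    it will be presented, assuming a linear timeline function."""
    return reference_time + (pts - subject_time) * reference_delta // subject_delta
```

At a 1:1 rate (`reference_delta == subject_delta`), a packet whose pts equals `subject_time` is presented exactly at `reference_time`.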
### CvsdEncoderSettings {#CvsdEncoderSettings data-text="CvsdEncoderSettings"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=640)*
<p>Settings for CVSD Encoders. It contains no fields for now since we will be
using the parameter values recommended by Bluetooth Core Spec v5.3
section 9.2.</p>
<div class="fidl-version-div"><span class="fidl-attribute fidl-version">Added: HEAD</span></div>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
</table>
### DecryptedFormat {#DecryptedFormat data-text="DecryptedFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=448)*
<p>DecryptedFormat</p>
<p>This describes the format of the decrypted content. It is required to be
sent by the StreamProcessor server prior to the delivery of output packets.
Currently, there are no additional format details for decrypted output.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="DecryptedFormat.ignore_this_field">
<td><h3 id="DecryptedFormat.ignore_this_field" class="add-link hide-from-toc">1</h3></td>
<td><code>ignore_this_field</code></td>
<td>
<code>bool</code>
</td>
<td></td>
</tr>
</table>
### EncryptedFormat {#EncryptedFormat data-text="EncryptedFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=393)*
<p>EncryptedFormat</p>
<p>The stream format details payload of a decrypting stream processor. This is
a sparsely populated table to specify parameters necessary for decryption
other than the data stream. It is only necessary to update fields if they
changed, but not an error if the same value is repeated.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="EncryptedFormat.">
<td><h3 id="EncryptedFormat." class="add-link hide-from-toc">1</h3></td>
<td><code>RESERVED</code></td>
<td>
<code></code>
</td>
<td></td>
</tr>
<tr id="EncryptedFormat.">
<td><h3 id="EncryptedFormat." class="add-link hide-from-toc">2</h3></td>
<td><code>RESERVED</code></td>
<td>
<code></code>
</td>
<td></td>
</tr>
<tr id="EncryptedFormat.init_vector">
<td><h3 id="EncryptedFormat.init_vector" class="add-link hide-from-toc">3</h3></td>
<td><code>init_vector</code></td>
<td>
<code><a class='link' href='#InitVector'>InitVector</a></code>
</td>
<td><p><code>init_vector</code> is used in combination with a key and a block of content
to create the first cipher block in a chain and derive subsequent cipher
blocks in a cipher block chain.
Usage:</p>
<ul>
<li>It is required to be set prior to the delivery of input packets to a
decryptor.</li>
<li>This may be changed multiple times during a data stream.</li>
</ul>
</td>
</tr>
<tr id="EncryptedFormat.subsamples">
<td><h3 id="EncryptedFormat.subsamples" class="add-link hide-from-toc">4</h3></td>
<td><code>subsamples</code></td>
<td>
<code>vector&lt;<a class='link' href='#SubsampleEntry'>SubsampleEntry</a>&gt;</code>
</td>
<td><p><code>subsamples</code> is used to identify the clear and encrypted portions of a
subsample.
Usage:</p>
<ul>
<li>For whole sample encryption, this parameter should not be sent.</li>
<li>This may be changed multiple times during a data stream.</li>
</ul>
</td>
</tr>
<tr id="EncryptedFormat.pattern">
<td><h3 id="EncryptedFormat.pattern" class="add-link hide-from-toc">5</h3></td>
<td><code>pattern</code></td>
<td>
<code><a class='link' href='#EncryptionPattern'>EncryptionPattern</a></code>
</td>
<td><p><code>pattern</code> is used to identify the clear and encrypted blocks for pattern
based encryption.
Usage:</p>
<ul>
<li>This is not allowed for CENC and CBC1 and required for CENS and CBCS.</li>
<li>If required, it must be set prior to the delivery of input packets to
a decryptor.</li>
<li>This may be changed multiple times during a data stream.</li>
</ul>
</td>
</tr>
<tr id="EncryptedFormat.scheme">
<td><h3 id="EncryptedFormat.scheme" class="add-link hide-from-toc">6</h3></td>
<td><code>scheme</code></td>
<td>
<code>string</code>
</td>
<td><p><code>scheme</code> specifies which encryption scheme to use, such as
<code>fuchsia.media.ENCRYPTION_SCHEME_CENC</code>.
Usage:</p>
<ul>
<li>It is required to be set prior to delivery of input packets.</li>
<li>Changing the scheme mid-stream is only permitted in some scenarios.
Once an encrypted scheme is selected for a stream, the scheme may
only be set to <code>fuchsia.media.ENCRYPTION_SCHEME_UNENCRYPTED</code> or that
same initial encrypted scheme. The scheme may be set to
<code>fuchsia.media.ENCRYPTION_SCHEME_UNENCRYPTED</code> at any point.</li>
</ul>
</td>
</tr>
<tr id="EncryptedFormat.">
<td><h3 id="EncryptedFormat." class="add-link hide-from-toc">7</h3></td>
<td><code>RESERVED</code></td>
<td>
<code></code>
</td>
<td></td>
</tr>
<tr id="EncryptedFormat.key_id">
<td><h3 id="EncryptedFormat.key_id" class="add-link hide-from-toc">8</h3></td>
<td><code>key_id</code></td>
<td>
<code><a class='link' href='#KeyId'>KeyId</a></code>
</td>
<td><p><code>key_id</code> identifies the key that should be used for decrypting
subsequent data.
Usage:</p>
<ul>
<li>It is required to be set prior to delivery of input packets to a
decryptor.</li>
<li>This may be changed multiple times during a data stream.</li>
</ul>
</td>
</tr>
</table>
### FormatDetails {#FormatDetails data-text="FormatDetails"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=678)*
<p>FormatDetails</p>
<p>This describes/details the format on input or output of a StreamProcessor
(separate instances for input vs. output).</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="FormatDetails.format_details_version_ordinal">
<td><h3 id="FormatDetails.format_details_version_ordinal" class="add-link hide-from-toc">1</h3></td>
<td><code>format_details_version_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td></td>
</tr>
<tr id="FormatDetails.mime_type">
<td><h3 id="FormatDetails.mime_type" class="add-link hide-from-toc">2</h3></td>
<td><code>mime_type</code></td>
<td>
<code>string</code>
</td>
<td></td>
</tr>
<tr id="FormatDetails.oob_bytes">
<td><h3 id="FormatDetails.oob_bytes" class="add-link hide-from-toc">3</h3></td>
<td><code>oob_bytes</code></td>
<td>
<code>vector&lt;uint8&gt;</code>
</td>
<td></td>
</tr>
<tr id="FormatDetails.domain">
<td><h3 id="FormatDetails.domain" class="add-link hide-from-toc">4</h3></td>
<td><code>domain</code></td>
<td>
<code><a class='link' href='#DomainFormat'>DomainFormat</a></code>
</td>
<td></td>
</tr>
<tr id="FormatDetails.pass_through_parameters">
<td><h3 id="FormatDetails.pass_through_parameters" class="add-link hide-from-toc">5</h3></td>
<td><code>pass_through_parameters</code></td>
<td>
<code>vector&lt;<a class='link' href='#Parameter'>Parameter</a>&gt;</code>
</td>
<td></td>
</tr>
<tr id="FormatDetails.encoder_settings">
<td><h3 id="FormatDetails.encoder_settings" class="add-link hide-from-toc">6</h3></td>
<td><code>encoder_settings</code></td>
<td>
<code><a class='link' href='#EncoderSettings'>EncoderSettings</a></code>
</td>
<td><p>Instructs an encoder on how to encode raw data.</p>
<p>Decoders may ignore this field but are entitled to reject requests with
this field set, since it doesn't apply to decoding.</p>
</td>
</tr>
<tr id="FormatDetails.timebase">
<td><h3 id="FormatDetails.timebase" class="add-link hide-from-toc">7</h3></td>
<td><code>timebase</code></td>
<td>
<code>uint64</code>
</td>
<td><p>The number of ticks of the timebase of input packet timestamp_ish values
per second.</p>
<p>The timebase is only used for optional extrapolation of timestamp_ish
values when an input timestamp which applies to byte 0 of the valid portion
of the input packet does not correspond directly to byte 0 of the valid
portion of any output packet.</p>
<p>Leave unset if timestamp extrapolation is not needed, either due to lack of
timestamps on input, or due to input being provided in increments of the
encoder's input chunk size (based on the encoder settings and calculated
independently by the client). Set if timestamp extrapolation is known to be
needed or known to be acceptable to the client.</p>
</td>
</tr>
</table>
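To illustrate the `timebase` field: a client can extrapolate a timestamp for a byte partway through a constant-rate stream, given ticks per second and bytes per second. This is a sketch with hypothetical names, not the actual extrapolation code:

```python
def extrapolate_timestamp(base_timestamp, byte_offset,
                          bytes_per_second, timebase):
    """Extrapolate a timestamp_ish value for the byte that is
    `byte_offset` bytes past the byte that `base_timestamp` applies to.
    `timebase` is the number of ticks per second, as in
    FormatDetails.timebase."""
    return base_timestamp + byte_offset * timebase // bytes_per_second
```

For example, with 48 kHz stereo 16-bit PCM (192,000 bytes/s) and a 48,000 tick/s timebase, one second's worth of bytes advances the timestamp by 48,000 ticks.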
### H264EncoderSettings {#H264EncoderSettings data-text="H264EncoderSettings"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=599)*
<p>Settings for H264 Encoders.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="H264EncoderSettings.bit_rate">
<td><h3 id="H264EncoderSettings.bit_rate" class="add-link hide-from-toc">1</h3></td>
<td><code>bit_rate</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Target bits per second for encoded stream.
If omitted, interpreted as 200,000.</p>
</td>
</tr>
<tr id="H264EncoderSettings.frame_rate">
<td><h3 id="H264EncoderSettings.frame_rate" class="add-link hide-from-toc">2</h3></td>
<td><code>frame_rate</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Target frames per second for encoded stream.
If omitted, interpreted as 30.</p>
</td>
</tr>
<tr id="H264EncoderSettings.gop_size">
<td><h3 id="H264EncoderSettings.gop_size" class="add-link hide-from-toc">3</h3></td>
<td><code>gop_size</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Number of pictures per keyframe. Setting this to 0 disables key frame
encoding, except when force_key_frame is set to true.
If omitted, interpreted as 8.</p>
</td>
</tr>
<tr id="H264EncoderSettings.variable_frame_rate">
<td><h3 id="H264EncoderSettings.variable_frame_rate" class="add-link hide-from-toc">4</h3></td>
<td><code>variable_frame_rate</code></td>
<td>
<code>bool</code>
</td>
<td><p>Whether to enable frame rate adjustments in order to meet target bitrate.
If omitted, interpreted as false.</p>
</td>
</tr>
<tr id="H264EncoderSettings.min_frame_rate">
<td><h3 id="H264EncoderSettings.min_frame_rate" class="add-link hide-from-toc">5</h3></td>
<td><code>min_frame_rate</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Lowest frame rate allowed if <code>variable_frame_rate</code> is enabled. If
omitted, interpreted as 10.</p>
</td>
</tr>
<tr id="H264EncoderSettings.force_key_frame">
<td><h3 id="H264EncoderSettings.force_key_frame" class="add-link hide-from-toc">6</h3></td>
<td><code>force_key_frame</code></td>
<td>
<code>bool</code>
</td>
<td><p>If true, next frame encoded will be a key frame. If omitted, interpreted
as false.</p>
</td>
</tr>
<tr id="H264EncoderSettings.quantization_params">
<td><h3 id="H264EncoderSettings.quantization_params" class="add-link hide-from-toc">7</h3></td>
<td><code>quantization_params</code></td>
<td>
<code><a class='link' href='#H264QuantizationParameters'>H264QuantizationParameters</a></code>
</td>
<td><p>Allow customization of quantization parameters for encoding. Each frame
submitted after setting this will use the new values. If omitted, no
change from encoder defaults is made.</p>
</td>
</tr>
</table>
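The per-field defaults documented above can be resolved into effective settings as in this sketch (the dict keys mirror the table's field names and documented defaults; the helper itself is hypothetical):

```python
# Defaults documented for each H264EncoderSettings field above.
H264_DEFAULTS = {
    "bit_rate": 200_000,
    "frame_rate": 30,
    "gop_size": 8,
    "variable_frame_rate": False,
    "min_frame_rate": 10,
    "force_key_frame": False,
}

def effective_h264_settings(settings):
    """Return settings with omitted fields filled from the documented defaults."""
    return {**H264_DEFAULTS, **settings}
```

A client that sets only `bit_rate` thus gets the documented 30 fps target and GOP size of 8 for the remaining fields.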
### H264QuantizationParameters {#H264QuantizationParameters data-text="H264QuantizationParameters"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=583)*
<p>Customization of h264 encoder parameters for macroblock quantization. Values
can range from 0 to 51, with lower numbers indicating higher
quality/bitrate. Encoders should support these fields if feasible, but some
may ignore them. It is fine to leave this table, or some of its fields,
unset, as encoders can determine their own defaults. If the targeted bitrate
can't be achieved with the specified values, the user should expect the
resulting encoded stream's bitrate to differ from the requested bitrate.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="H264QuantizationParameters.i_base">
<td><h3 id="H264QuantizationParameters.i_base" class="add-link hide-from-toc">1</h3></td>
<td><code>i_base</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Starting value for quantization of key frames.</p>
</td>
</tr>
<tr id="H264QuantizationParameters.i_min">
<td><h3 id="H264QuantizationParameters.i_min" class="add-link hide-from-toc">2</h3></td>
<td><code>i_min</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Smallest allowed value for quantization of key frames.</p>
</td>
</tr>
<tr id="H264QuantizationParameters.i_max">
<td><h3 id="H264QuantizationParameters.i_max" class="add-link hide-from-toc">3</h3></td>
<td><code>i_max</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Largest allowed value for quantization of key frames.</p>
</td>
</tr>
<tr id="H264QuantizationParameters.p_base">
<td><h3 id="H264QuantizationParameters.p_base" class="add-link hide-from-toc">4</h3></td>
<td><code>p_base</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Starting value for quantization of predicted frames.</p>
</td>
</tr>
<tr id="H264QuantizationParameters.p_min">
<td><h3 id="H264QuantizationParameters.p_min" class="add-link hide-from-toc">5</h3></td>
<td><code>p_min</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Smallest allowed value for quantization of predicted frames.</p>
</td>
</tr>
<tr id="H264QuantizationParameters.p_max">
<td><h3 id="H264QuantizationParameters.p_max" class="add-link hide-from-toc">6</h3></td>
<td><code>p_max</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Largest allowed value for quantization of predicted frames.</p>
</td>
</tr>
</table>
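A client-side sanity check for these parameters might look like the following sketch (hypothetical helper; the 0–51 range and the min &lt;= base &lt;= max ordering follow from the description above):

```python
def validate_qp(params):
    """Check H264QuantizationParameters-style values: every value must lie
    in [0, 51], and for each frame class (i = key frames, p = predicted
    frames) min <= base <= max must hold when all three are present."""
    for prefix in ("i", "p"):
        vals = {k: params.get(f"{prefix}_{k}") for k in ("min", "base", "max")}
        for v in vals.values():
            if v is not None and not 0 <= v <= 51:
                return False  # outside the allowed QP range
        if None not in vals.values() and not (vals["min"] <= vals["base"] <= vals["max"]):
            return False  # inconsistent ordering
    return True
```

Unset fields are simply skipped, matching the note that encoders supply their own defaults for omitted values.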
### HevcEncoderSettings {#HevcEncoderSettings data-text="HevcEncoderSettings"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=626)*
<p>Settings for HEVC/H265 Encoders.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="HevcEncoderSettings.bit_rate">
<td><h3 id="HevcEncoderSettings.bit_rate" class="add-link hide-from-toc">1</h3></td>
<td><code>bit_rate</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Target bits per second for encoded stream. Defaults to 200,000 if
omitted.</p>
</td>
</tr>
<tr id="HevcEncoderSettings.frame_rate">
<td><h3 id="HevcEncoderSettings.frame_rate" class="add-link hide-from-toc">2</h3></td>
<td><code>frame_rate</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Target frames per second for encoded stream. Defaults to 30 if omitted.</p>
</td>
</tr>
<tr id="HevcEncoderSettings.gop_size">
<td><h3 id="HevcEncoderSettings.gop_size" class="add-link hide-from-toc">3</h3></td>
<td><code>gop_size</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Number of pictures per keyframe. Defaults to 8 if omitted.</p>
</td>
</tr>
</table>
### InputAudioCapturerConfiguration {#InputAudioCapturerConfiguration data-text="InputAudioCapturerConfiguration"}
*Defined in [fuchsia.media/audio_capturer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_capturer.fidl;l=15)*
<p>Configuration for a capturer which will receive a stream from an
input device.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="InputAudioCapturerConfiguration.usage">
<td><h3 id="InputAudioCapturerConfiguration.usage" class="add-link hide-from-toc">1</h3></td>
<td><code>usage</code></td>
<td>
<code><a class='link' href='#AudioCaptureUsage'>AudioCaptureUsage</a></code>
</td>
<td></td>
</tr>
</table>
### LoopbackAudioCapturerConfiguration {#LoopbackAudioCapturerConfiguration data-text="LoopbackAudioCapturerConfiguration"}
*Defined in [fuchsia.media/audio_capturer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_capturer.fidl;l=11)*
<p>Configuration for a capturer which will receive a loopback stream
from a system output.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
</table>
### Packet {#Packet data-text="Packet"}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=301)*
<p>A Packet represents a chunk of input or output data to or from a stream
processor.</p>
<p>stream processor output:</p>
<p>While the Packet is outstanding with the client via OnOutputPacket(), the
stream processor will avoid modifying the referenced output data. After the
client calls RecycleOutputPacket(packet_index), the stream processor is
notified that the client is again ok with the referenced data changing.</p>
<p>stream processor input:</p>
<p>The client initially has all packet_index(es) available to fill, and later
gets packet_index(s) that are again ready to fill via OnFreeInputPacket().
The client must not modify the referenced data in between QueueInputPacket()
and OnFreeInputPacket().</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="Packet.header">
<td><h3 id="Packet.header" class="add-link hide-from-toc">1</h3></td>
<td><code>header</code></td>
<td>
<code><a class='link' href='#PacketHeader'>PacketHeader</a></code>
</td>
<td></td>
</tr>
<tr id="Packet.buffer_index">
<td><h3 id="Packet.buffer_index" class="add-link hide-from-toc">2</h3></td>
<td><code>buffer_index</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Which buffer this packet refers to. For single-buffer mode this will
always be 0, but for multi-buffer mode, a given in-flight interval of a
packet can refer to any buffer. The packet has an associated buffer only
while the packet is in-flight, not while the packet is free.</p>
<p>The default value makes accidental inappropriate use of index 0 less
likely (will tend to complain in an obvious way if not filled out
instead of a non-obvious data corruption when decoding buffer 0
repeatedly instead of the correct buffers).</p>
<p>TODO(dustingreen): Try to make FIDL table defaults have meaning, and not
complain about !has when accessing the field. For now the default
specified here does nothing.</p>
</td>
</tr>
<tr id="Packet.stream_lifetime_ordinal">
<td><h3 id="Packet.stream_lifetime_ordinal" class="add-link hide-from-toc">3</h3></td>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p>The value 1 is the lowest permitted value after stream processor
creation. Values sent by the client must be odd. Values must only
increase.</p>
<p>A stream_lifetime_ordinal represents the lifetime of a stream. All
messages that are specific to a stream have the stream_lifetime_ordinal
value and the value is the same for all messages relating to a given
stream.</p>
</td>
</tr>
<tr id="Packet.start_offset">
<td><h3 id="Packet.start_offset" class="add-link hide-from-toc">4</h3></td>
<td><code>start_offset</code></td>
<td>
<code>uint32</code>
</td>
<td><p>Which part of the relevant buffer this packet is using. These are valid
for input data that's in-flight to the stream processor, and are valid
for output data from the stream processor.</p>
<p>For compressed formats and uncompressed audio, the data in
[start_offset, start_offset + valid_length_bytes) is the contiguously
valid data referred to by this packet.</p>
<p>For uncompressed video frames, FormatDetails is the primary means of
determining which bytes are relevant. The offsets in FormatDetails
are relative to the start_offset here. The valid_length_bytes must be
large enough to include the full last line of pixel data, including the
full line stride of the last line (not just the width in pixels of the
last line).</p>
<p>Despite these fields being filled out, some uncompressed video buffers are
of types that are not readable by the CPU. The presence of these fields
doesn't imply there's any way for the CPU to read an uncompressed frame.</p>
</td>
</tr>
<tr id="Packet.valid_length_bytes">
<td><h3 id="Packet.valid_length_bytes" class="add-link hide-from-toc">5</h3></td>
<td><code>valid_length_bytes</code></td>
<td>
<code>uint32</code>
</td>
<td><p>This must be &gt; 0.</p>
<p>The semantics for valid data per packet vary depending on data type as
follows.</p>
<p>uncompressed video - A video frame can't be split across packets. Each
packet is one video frame.</p>
<p>uncompressed audio - Regardless of float or int, linear or uLaw, or
number of channels, a packet must contain a non-negative number of
complete audio frames, where a single audio frame consists of data for
all the channels for the same single point in time. Any
stream-processor-specific internal details regarding lower rate sampling
for the LFE channel or the like should be hidden by the StreamProcessor
server implementation.</p>
<p>compressed data input - A packet must contain at least one byte of data.
See also stream_input_bytes_min. Splitting AUs at arbitrary byte
boundaries is permitted, including at boundaries that are in AU headers.</p>
<p>compressed data output - The stream processor is not required to fully
fill each output packet's buffer.</p>
</td>
</tr>
<tr id="Packet.timestamp_ish">
<td><h3 id="Packet.timestamp_ish" class="add-link hide-from-toc">6</h3></td>
<td><code>timestamp_ish</code></td>
<td>
<code>uint64</code>
</td>
<td><p>This value is not strictly speaking a timestamp. It is an arbitrary
unsigned 64-bit number that, under some circumstances, will be passed by
a stream processor unmodified from an input packet to the
exactly-corresponding output packet.</p>
<p>For timestamp_ish values to be propagated from input to output the
following conditions must be true:</p>
<ul>
<li>promise_separate_access_units_on_input must be true</li>
<li>has_timestamp_ish must be true for a given input packet, to have that
timestamp_ish value (potentially) propagate through to an output</li>
<li>the StreamProcessor instance itself decides (async) that the input
packet generates an output packet - if a given input never generates
an output packet then the timestamp_ish value on the input will never
show up on any output packet - depending on the characteristics of the
input and output formats, and whether a decoder is willing to join
mid-stream, etc this can be more or less likely to occur, but clients
should be written to accommodate timestamp_ish values that are fed on
input but never show up on output, at least to a reasonable degree
(not crashing, not treating as an error).</li>
</ul>
</td>
</tr>
<tr id="Packet.start_access_unit">
<td><h3 id="Packet.start_access_unit" class="add-link hide-from-toc">7</h3></td>
<td><code>start_access_unit</code></td>
<td>
<code>bool</code>
</td>
<td><p>If promise_separate_access_units_on_input (TODO(dustingreen): or any
similar mode for output) is true, this bool must be set appropriately
depending on whether byte 0 <em>is</em> or <em>is not</em> the start of an access
unit. The client is required to know, and required to set this boolean
properly. The server is allowed to infer that when this boolean is
false, byte 0 is the first byte of a continuation of a
previously-started AU. (The byte at start_offset is &quot;byte 0&quot;.)</p>
<p>If promise_separate_access_units_on_input is false, this boolean is
ignored.</p>
</td>
</tr>
<tr id="Packet.known_end_access_unit">
<td><h3 id="Packet.known_end_access_unit" class="add-link hide-from-toc">8</h3></td>
<td><code>known_end_access_unit</code></td>
<td>
<code>bool</code>
</td>
<td><p>A client is never required to set this boolean to true.</p>
<p>If promise_separate_access_units_on_input is true, for input data, this
boolean must be false if the last byte of this packet is not the last
byte of an AU, and this boolean <em>may</em> be true if the last byte of this
packet is the last byte of an AU. A client delivering one AU at a time
that's interested in the lowest possible latency via the decoder should
set this boolean to true when it can be set to true.</p>
<p>If promise_separate_access_units_on_input is false, this boolean is
ignored.</p>
</td>
</tr>
<tr id="Packet.key_frame">
<td><h3 id="Packet.key_frame" class="add-link hide-from-toc">9</h3></td>
<td><code>key_frame</code></td>
<td>
<code>bool</code>
</td>
<td><p>Used for compressed video packets. If not present, the key frame status
should be assumed to be unknown. If false, indicates the packet is not part
of a key frame. If true, indicates the packet is part of a key frame.</p>
</td>
</tr>
</table>
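Pulling the fields above together, the Packet table looks roughly like the following FIDL sketch (reconstructed from this reference page, with the per-field rules condensed into comments; see stream_processor.fidl for the authoritative definition):

```fidl
// Sketch reconstructed from the reference table above.
type Packet = table {
    1: header PacketHeader;
    /// Which buffer this packet refers to (always 0 in single-buffer
    /// mode); only meaningful while the packet is in-flight.
    2: buffer_index uint32;
    /// Identifies the stream this packet belongs to. Client-sent
    /// values must be odd and must only increase.
    3: stream_lifetime_ordinal uint64;
    /// Byte offset of the valid data within the referenced buffer.
    4: start_offset uint32;
    /// Must be > 0; [start_offset, start_offset + valid_length_bytes)
    /// is the valid data.
    5: valid_length_bytes uint32;
    /// Arbitrary 64-bit value, optionally propagated from an input
    /// packet to the exactly-corresponding output packet.
    6: timestamp_ish uint64;
    /// Whether byte 0 starts an access unit. Only meaningful when
    /// promise_separate_access_units_on_input is true.
    7: start_access_unit bool;
    /// May be true only if the last byte ends an access unit; same
    /// precondition as start_access_unit.
    8: known_end_access_unit bool;
    /// For compressed video: whether this packet is part of a key
    /// frame. Unknown if not present.
    9: key_frame bool;
};
```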
### PacketHeader {#PacketHeader data-text="PacketHeader"}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=248)*
<p>PacketHeader</p>
<p>When referring to a free packet, we use PacketHeader alone instead of
Packet, since while a packet is free it doesn't really have meaningful
offset or length etc.</p>
<p>A populated Packet also has a PacketHeader.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="PacketHeader.buffer_lifetime_ordinal">
<td><h3 id="PacketHeader.buffer_lifetime_ordinal" class="add-link hide-from-toc">1</h3></td>
<td><code>buffer_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p>This indicates which buffer configuration lifetime this header refers to.</p>
<p>A packet_index is only really meaningful with respect to a particular
buffer_lifetime_ordinal.</p>
<p>See StreamBufferPartialSettings.buffer_lifetime_ordinal.</p>
<p>For QueueInputPacket(), a server receiving a buffer_lifetime_ordinal that
isn't the current input buffer_lifetime_ordinal will close the channel.</p>
<p>For OnFreeInputPacket() and RecycleOutputPacket(), the receiver (client
or server) must ignore a message with stale buffer_lifetime_ordinal.</p>
</td>
</tr>
<tr id="PacketHeader.packet_index">
<td><h3 id="PacketHeader.packet_index" class="add-link hide-from-toc">2</h3></td>
<td><code>packet_index</code></td>
<td>
<code>uint32</code>
</td>
<td><p>The overall set of packet_index values is densely packed from 0..count-1
for input and output separately. They can be queued in any order.</p>
<p>Both the client and server should validate the packet_index against the
known bound and disconnect if it's out of bounds.</p>
<p>When running in single-buffer mode, the buffer index is always 0.</p>
<p>The packet_index values don't imply anything about order of use of
packets. The client should not expect the ordering to remain the same
over time - the stream processor is free to hold on to an input or
output packet for a while during which other packet_index values may be
used multiple times.</p>
<p>For a given properly-functioning StreamProcessor instance, packet_index
values will be unique among concurrently-outstanding packets. Servers
should validate that a client isn't double-using a packet and clients
should validate as necessary to avoid undefined or unexpected client
behavior.</p>
</td>
</tr>
</table>
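The two fields above can be summarized as a FIDL sketch (reconstructed from this reference page, not copied from the source `.fidl` file):

```fidl
// Sketch reconstructed from the reference table above; see
// stream_processor.fidl for the authoritative definition.
type PacketHeader = table {
    /// Which buffer configuration lifetime this header refers to;
    /// packet_index is only meaningful relative to this ordinal.
    1: buffer_lifetime_ordinal uint64;
    /// Densely packed 0..count-1, separately for input and output;
    /// implies nothing about the order in which packets are used.
    2: packet_index uint32;
};
```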
### StreamBufferConstraints {#StreamBufferConstraints data-text="StreamBufferConstraints"}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=20)*
<p>This table conveys the buffer_constraints_version_ordinal.</p>
<p>Historically this table conveyed more fields than it currently does, but
those fields are all deprecated in favor of using sysmem instead.</p>
<p>There are separate instances of this struct for stream input and stream
output.</p>
<p>Notes about fields:</p>
<p>For uncompressed video, separate and complete frames in their
separate buffers (buffer-per-packet mode) are always a requirement.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="StreamBufferConstraints.buffer_constraints_version_ordinal">
<td><h3 id="StreamBufferConstraints.buffer_constraints_version_ordinal" class="add-link hide-from-toc">1</h3></td>
<td><code>buffer_constraints_version_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p>This is a version number the server sets on the constraints to allow the
server to determine when the client has caught up with the latest
constraints sent by the server. The server won't emit output data until
the client has configured output settings and buffers with a
buffer_constraints_version_ordinal &gt;= the latest
buffer_constraints_version_ordinal that had
buffer_constraints_action_required true. See
buffer_constraints_action_required comments for more.</p>
<p>A buffer_constraints_version_ordinal of 0 is not permitted, to simplify
initial state handling. Other than 0, both odd and even version ordinals
are allowed (in contrast to the stream_lifetime_ordinal, neither the
client nor server ever has a reason to consider the latest version to be
stale, so there would be no benefit to disallowing even values).</p>
</td>
</tr>
<tr id="StreamBufferConstraints.default_settings">
<td><h3 id="StreamBufferConstraints.default_settings" class="add-link hide-from-toc">2</h3></td>
<td><code>default_settings</code></td>
<td>
<code><a class='link' href='#StreamBufferSettings'>StreamBufferSettings</a></code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.per_packet_buffer_bytes_min">
<td><h3 id="StreamBufferConstraints.per_packet_buffer_bytes_min" class="add-link hide-from-toc">3</h3></td>
<td><code>per_packet_buffer_bytes_min</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.per_packet_buffer_bytes_recommended">
<td><h3 id="StreamBufferConstraints.per_packet_buffer_bytes_recommended" class="add-link hide-from-toc">4</h3></td>
<td><code>per_packet_buffer_bytes_recommended</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.per_packet_buffer_bytes_max">
<td><h3 id="StreamBufferConstraints.per_packet_buffer_bytes_max" class="add-link hide-from-toc">5</h3></td>
<td><code>per_packet_buffer_bytes_max</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.packet_count_for_server_min">
<td><h3 id="StreamBufferConstraints.packet_count_for_server_min" class="add-link hide-from-toc">6</h3></td>
<td><code>packet_count_for_server_min</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.packet_count_for_server_recommended">
<td><h3 id="StreamBufferConstraints.packet_count_for_server_recommended" class="add-link hide-from-toc">7</h3></td>
<td><code>packet_count_for_server_recommended</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.packet_count_for_server_recommended_max">
<td><h3 id="StreamBufferConstraints.packet_count_for_server_recommended_max" class="add-link hide-from-toc">8</h3></td>
<td><code>packet_count_for_server_recommended_max</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.packet_count_for_server_max">
<td><h3 id="StreamBufferConstraints.packet_count_for_server_max" class="add-link hide-from-toc">9</h3></td>
<td><code>packet_count_for_server_max</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.packet_count_for_client_min">
<td><h3 id="StreamBufferConstraints.packet_count_for_client_min" class="add-link hide-from-toc">10</h3></td>
<td><code>packet_count_for_client_min</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.packet_count_for_client_max">
<td><h3 id="StreamBufferConstraints.packet_count_for_client_max" class="add-link hide-from-toc">11</h3></td>
<td><code>packet_count_for_client_max</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferConstraints.single_buffer_mode_allowed">
<td><h3 id="StreamBufferConstraints.single_buffer_mode_allowed" class="add-link hide-from-toc">12</h3></td>
<td><code>single_buffer_mode_allowed</code></td>
<td>
<code>bool</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Obsolete.</p></td>
</tr>
<tr id="StreamBufferConstraints.is_physically_contiguous_required">
<td><h3 id="StreamBufferConstraints.is_physically_contiguous_required" class="add-link hide-from-toc">13</h3></td>
<td><code>is_physically_contiguous_required</code></td>
<td>
<code>bool</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
</table>
### StreamBufferPartialSettings [resource](/fuchsia-src/reference/fidl/language/language.md#value-vs-resource){:.fidl-attribute} {#StreamBufferPartialSettings data-text="StreamBufferPartialSettings"}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=195)*
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="StreamBufferPartialSettings.buffer_lifetime_ordinal">
<td><h3 id="StreamBufferPartialSettings.buffer_lifetime_ordinal" class="add-link hide-from-toc">1</h3></td>
<td><code>buffer_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p>The containing message starts a new buffer_lifetime_ordinal.</p>
<p>There is a separate buffer_lifetime_ordinal for input vs. output.</p>
<p>Re-use of the same value is not allowed. Values must be odd. Values
must only increase (increasing by more than 2 is permitted).</p>
<p>A buffer_lifetime_ordinal lifetime starts at SetInputBufferSettings() or
SetOutputBufferSettings(), and ends at the earlier of
CloseCurrentStream() with release_input_buffers/release_output_buffers
set or SetOutputBufferSettings() with new buffer_lifetime_ordinal in the
case of mid-stream output config change.</p>
</td>
</tr>
<tr id="StreamBufferPartialSettings.buffer_constraints_version_ordinal">
<td><h3 id="StreamBufferPartialSettings.buffer_constraints_version_ordinal" class="add-link hide-from-toc">2</h3></td>
<td><code>buffer_constraints_version_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p>This value indicates which version of constraints the client is/was aware
of so far.</p>
<p>For input, this must always be 0 because constraints don't change for
input (settings can change, but there's no settings vs current
constraints synchronization issue on input).</p>
<p>For output, this allows the server to know when the client is
sufficiently caught up before the server will generate any more output.</p>
<p>When there is no active stream, a client is permitted to re-configure
buffers again using the same buffer_constraints_version_ordinal.</p>
</td>
</tr>
<tr id="StreamBufferPartialSettings.single_buffer_mode">
<td><h3 id="StreamBufferPartialSettings.single_buffer_mode" class="add-link hide-from-toc">3</h3></td>
<td><code>single_buffer_mode</code></td>
<td>
<code>bool</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Obsolete.</p></td>
</tr>
<tr id="StreamBufferPartialSettings.packet_count_for_server">
<td><h3 id="StreamBufferPartialSettings.packet_count_for_server" class="add-link hide-from-toc">4</h3></td>
<td><code>packet_count_for_server</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferPartialSettings.packet_count_for_client">
<td><h3 id="StreamBufferPartialSettings.packet_count_for_client" class="add-link hide-from-toc">5</h3></td>
<td><code>packet_count_for_client</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() </p></td>
</tr>
<tr id="StreamBufferPartialSettings.sysmem_token">
<td><h3 id="StreamBufferPartialSettings.sysmem_token" class="add-link hide-from-toc">6</h3></td>
<td><code>sysmem_token</code></td>
<td>
<code><a class='link' href='../fuchsia.sysmem/'>fuchsia.sysmem</a>/<a class='link' href='../fuchsia.sysmem/#BufferCollectionToken'>BufferCollectionToken</a></code>
</td>
<td><p>The client end of a BufferCollectionToken channel, which the
StreamProcessor will use to deliver constraints to sysmem and learn of
buffers allocated by sysmem.</p>
<p>The client guarantees that the token is already known to sysmem (via
BufferCollectionToken.Sync(), BufferCollection.Sync(), or
BufferCollectionEvents.OnDuplicatedTokensKnownByServer()).</p>
</td>
</tr>
</table>
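A condensed FIDL sketch of the table above follows; this is reconstructed from this reference page, and the exact spelling of the `sysmem_token` type is an assumption (see stream_processor.fidl for the authoritative definition):

```fidl
// Sketch reconstructed from the reference table above.
type StreamBufferPartialSettings = resource table {
    /// Starts a new buffer lifetime; values must be odd and must only
    /// increase. Separate ordinals exist for input vs. output.
    1: buffer_lifetime_ordinal uint64;
    /// Latest constraints version the client is aware of; must be 0
    /// for input, since input constraints don't change.
    2: buffer_constraints_version_ordinal uint64;
    /// Deprecated; ignore.
    3: single_buffer_mode bool;
    /// Deprecated; ignore. Use
    /// fuchsia.sysmem.BufferCollection.SetConstraints().
    4: packet_count_for_server uint32;
    /// Deprecated; ignore. Use
    /// fuchsia.sysmem.BufferCollection.SetConstraints().
    5: packet_count_for_client uint32;
    /// Client end of a BufferCollectionToken channel that is already
    /// known to sysmem (e.g. via BufferCollectionToken.Sync()).
    6: sysmem_token client_end:fuchsia.sysmem.BufferCollectionToken;
};
```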
### StreamBufferSettings {#StreamBufferSettings data-text="StreamBufferSettings"}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=1117)*
<p>Deprecated. Use SetStreamBufferPartialSettings() and
StreamBufferPartialSettings instead.</p>
<p><b>DEPRECATED </b>- Ignore. Use SetStreamBufferPartialSettings instead.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="StreamBufferSettings.buffer_lifetime_ordinal">
<td><h3 id="StreamBufferSettings.buffer_lifetime_ordinal" class="add-link hide-from-toc">1</h3></td>
<td><code>buffer_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use SetStreamBufferPartialSettings instead.</p></td>
</tr>
<tr id="StreamBufferSettings.buffer_constraints_version_ordinal">
<td><h3 id="StreamBufferSettings.buffer_constraints_version_ordinal" class="add-link hide-from-toc">2</h3></td>
<td><code>buffer_constraints_version_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use SetStreamBufferPartialSettings instead.</p></td>
</tr>
<tr id="StreamBufferSettings.packet_count_for_server">
<td><h3 id="StreamBufferSettings.packet_count_for_server" class="add-link hide-from-toc">3</h3></td>
<td><code>packet_count_for_server</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use SetStreamBufferPartialSettings instead.</p></td>
</tr>
<tr id="StreamBufferSettings.packet_count_for_client">
<td><h3 id="StreamBufferSettings.packet_count_for_client" class="add-link hide-from-toc">4</h3></td>
<td><code>packet_count_for_client</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use SetStreamBufferPartialSettings instead.</p></td>
</tr>
<tr id="StreamBufferSettings.per_packet_buffer_bytes">
<td><h3 id="StreamBufferSettings.per_packet_buffer_bytes" class="add-link hide-from-toc">5</h3></td>
<td><code>per_packet_buffer_bytes</code></td>
<td>
<code>uint32</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use SetStreamBufferPartialSettings instead.</p></td>
</tr>
<tr id="StreamBufferSettings.single_buffer_mode">
<td><h3 id="StreamBufferSettings.single_buffer_mode" class="add-link hide-from-toc">6</h3></td>
<td><code>single_buffer_mode</code></td>
<td>
<code>bool</code>
</td>
<td><p><b>DEPRECATED </b>- Ignore. Use SetStreamBufferPartialSettings instead.</p></td>
</tr>
</table>
### StreamOutputConstraints {#StreamOutputConstraints data-text="StreamOutputConstraints"}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=71)*
<p>The stream-processor-controlled output configuration, including both
StreamBufferConstraints for the output and FormatDetails for the output.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="StreamOutputConstraints.stream_lifetime_ordinal">
<td><h3 id="StreamOutputConstraints.stream_lifetime_ordinal" class="add-link hide-from-toc">1</h3></td>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p>A client which always immediately re-configures output buffers on
receipt of OnOutputConstraints() with buffer_constraints_action_required
true can safely ignore this field.</p>
<p>A client is permitted to ignore an OnOutputConstraints() message even with
buffer_constraints_action_required true if the client knows the server
has already been told to discard the remainder of the stream with the
same stream_lifetime_ordinal or if this stream_lifetime_ordinal field is
set to 0. The server is required to re-send needed output config via
OnOutputConstraints() with new stream_lifetime_ordinal and
buffer_constraints_action_required true, if the most recent completed
server-side output config isn't what the server wants/needs yet for the
new stream.</p>
</td>
</tr>
<tr id="StreamOutputConstraints.buffer_constraints_action_required">
<td><h3 id="StreamOutputConstraints.buffer_constraints_action_required" class="add-link hide-from-toc">2</h3></td>
<td><code>buffer_constraints_action_required</code></td>
<td>
<code>bool</code>
</td>
<td><p>When the buffer constraints are delivered, they indicate whether action
is required. A false value here permits delivery of constraints which
are fresher without forcing a buffer reconfiguration. If this is false,
a client cannot assume that it's safe to immediately re-configure output
buffers. If this is true, the client can assume it's safe to
immediately configure output buffers once.</p>
<p>A client is permitted to ignore buffer constraint versions which have
buffer_constraints_action_required false. The server is not permitted
to change buffer_constraints_action_required from false to true for the
same buffer_constraints_version_ordinal.</p>
<p>For each configuration, a client must use new buffers, never buffers
that were previously used for anything else, and never buffers
previously used for any other StreamProcessor purposes. This rule
exists for multiple good reasons, relevant to both mid-stream changes,
and changes on stream boundaries. A client should just use new buffers
each time.</p>
<p>When this is true, the server has already de-refed as many low-level
output buffers as the server can while still performing efficient
transition to the new buffers and will de-ref the rest asap. A Sync()
is not necessary to achieve non-overlap of resource usage to the extent
efficiently permitted by the formats involved.</p>
<p>If buffer_constraints_action_required is true, the server <em>must</em> not
deliver more output data until after output buffers have been configured
(or re-configured) by the client.</p>
</td>
</tr>
<tr id="StreamOutputConstraints.buffer_constraints">
<td><h3 id="StreamOutputConstraints.buffer_constraints" class="add-link hide-from-toc">3</h3></td>
<td><code>buffer_constraints</code></td>
<td>
<code><a class='link' href='#StreamBufferConstraints'>StreamBufferConstraints</a></code>
</td>
<td></td>
</tr>
</table>
### StreamOutputFormat {#StreamOutputFormat data-text="StreamOutputFormat"}
*Defined in [fuchsia.media/stream_processor.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_processor.fidl;l=119)*
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
<tr id="StreamOutputFormat.stream_lifetime_ordinal">
<td><h3 id="StreamOutputFormat.stream_lifetime_ordinal" class="add-link hide-from-toc">1</h3></td>
<td><code>stream_lifetime_ordinal</code></td>
<td>
<code>uint64</code>
</td>
<td><p>A client is permitted to ignore an OnOutputFormat() message even with
buffer_constraints_action_required true if the client knows the server
has already been told to discard the remainder of the stream with the
same stream_lifetime_ordinal or if this stream_lifetime_ordinal field is
set to 0. The server is required to re-send needed output config via
OnOutputConstraints() with new stream_lifetime_ordinal and
buffer_constraints_action_required true, if the most recent completed
server-side output config isn't what the server wants/needs yet for the
new stream.</p>
<p>The server is required to send an OnOutputFormat() before the first
output packet of a stream.</p>
</td>
</tr>
<tr id="StreamOutputFormat.format_details">
<td><h3 id="StreamOutputFormat.format_details" class="add-link hide-from-toc">2</h3></td>
<td><code>format_details</code></td>
<td>
<code><a class='link' href='#FormatDetails'>FormatDetails</a></code>
</td>
<td><p>If format_details.format_details_version_ordinal changes, the client
should inspect the new format details and determine if it must adjust to
the new format. The server guarantees that if the format has changed, then
format_details.format_details_version_ordinal will change, but a change
to format_details.format_details_version_ordinal does not guarantee that
the format details actually changed. Servers are strongly encouraged to
not change format_details.format_details_version_ordinal other than
before the first output data of a stream unless there is a real
mid-stream format change in the stream. Unnecessary mid-stream format
changes can cause simpler clients that have no need to handle mid-stream
format changes to just close the channel. Format changes before the
first output data of a stream are not &quot;mid-stream&quot; in this context -
those can be useful for stream format detection / setup reasons.</p>
<p>Note that in case output buffers don't really need to be re-configured
despite a format change, a server is encouraged, but not required, to
set buffer_constraints_action_required false on the message that conveys
the new format details. Simpler servers may just treat the whole output
situation as one big thing and demand output buffer reconfiguration on
any change in the output situation.</p>
<p>A client may or may not actually handle a new buffer_constraints with
buffer_constraints_action_required false, but the client should always
track the latest format_details.</p>
<p>An updated format_details is ordered with respect to emitted output
packets, and applies to all subsequent packets until the next
format_details with larger version_ordinal. A simple client that does
not intend to handle mid-stream format changes should still keep track
of the most recently received format_details until the first output
packet arrives, then lock down the format details, handle those format
details, and verify that any
format_details.format_details_version_ordinal received from the server
is the same as the locked-down format_details, until the client is done
with the stream. Even such a simple client must tolerate
format_details.format_details_version_ordinal changing multiple times
before the start of data output from a stream (any stream - the first
stream or a subsequent stream). This allows a stream processor to
request that output buffers and output format be configured
speculatively, and for the output config to be optionally adjusted by
the server before the first data output from a stream after the server
knows everything it needs to know to fully establish the initial output
format details. This simplifies stream processor server implementation,
and allows a clever stream processor server to guess its output config
for lower latency before any input data, while still being able to fix
the output config (including format details) if the guess turns out to
be wrong.</p>
<p>Whether the format_details.format_details_version_ordinal will actually
change mid-stream is a per-stream-processor and per-stream detail that
is not specified in comments here, and in most cases also depends on
whether the format changes on the input to the stream processor.
It will likely be fairly common for a client to use a format that
technically supports mid-stream format change, while knowing that none of
the streams it intends to process will ever have one.</p>
</td>
</tr>
</table>
### UsageStateDucked {#UsageStateDucked data-text="UsageStateDucked"}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=13)*
<p>A state of audio usages in which a policy decision has been made to temporarily
lower the volume of all streams with this usage.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
</table>
### UsageStateMuted {#UsageStateMuted data-text="UsageStateMuted"}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=17)*
<p>A state of audio usages in which a policy decision has been made to temporarily
mute all streams with this usage.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
</table>
### UsageStateUnadjusted {#UsageStateUnadjusted data-text="UsageStateUnadjusted"}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=9)*
<p>A state of audio usages in which no policy actions are taken on any streams with the usage.</p>
<table>
<tr><th>Ordinal</th><th>Field</th><th>Type</th><th>Description</th></tr>
</table>
## **UNIONS**
### AacBitRate [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AacBitRate data-text="AacBitRate"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=556)*
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="AacBitRate.constant">
<td><h3 id="AacBitRate.constant" class="add-link hide-from-toc">1</h3></td>
<td><code>constant</code></td>
<td>
<code><a class='link' href='#AacConstantBitRate'>AacConstantBitRate</a></code>
</td>
<td></td>
</tr>
<tr id="AacBitRate.variable">
<td><h3 id="AacBitRate.variable" class="add-link hide-from-toc">2</h3></td>
<td><code>variable</code></td>
<td>
<code><a class='link' href='#AacVariableBitRate'>AacVariableBitRate</a></code>
</td>
<td></td>
</tr>
</table>
### AacTransport [flexible](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AacTransport data-text="AacTransport"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=528)*
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="AacTransport.raw">
<td><h3 id="AacTransport.raw" class="add-link hide-from-toc">1</h3></td>
<td><code>raw</code></td>
<td>
<code><a class='link' href='#AacTransportRaw'>AacTransportRaw</a></code>
</td>
<td></td>
</tr>
<tr id="AacTransport.latm">
<td><h3 id="AacTransport.latm" class="add-link hide-from-toc">2</h3></td>
<td><code>latm</code></td>
<td>
<code><a class='link' href='#AacTransportLatm'>AacTransportLatm</a></code>
</td>
<td></td>
</tr>
<tr id="AacTransport.adts">
<td><h3 id="AacTransport.adts" class="add-link hide-from-toc">3</h3></td>
<td><code>adts</code></td>
<td>
<code><a class='link' href='#AacTransportAdts'>AacTransportAdts</a></code>
</td>
<td></td>
</tr>
</table>
### AudioCapturerConfiguration [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioCapturerConfiguration data-text="AudioCapturerConfiguration"}
*Defined in [fuchsia.media/audio_capturer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_capturer.fidl;l=20)*
<p>Configuration for an AudioCapturer.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="AudioCapturerConfiguration.loopback">
<td><h3 id="AudioCapturerConfiguration.loopback" class="add-link hide-from-toc">1</h3></td>
<td><code>loopback</code></td>
<td>
<code><a class='link' href='#LoopbackAudioCapturerConfiguration'>LoopbackAudioCapturerConfiguration</a></code>
</td>
<td></td>
</tr>
<tr id="AudioCapturerConfiguration.input">
<td><h3 id="AudioCapturerConfiguration.input" class="add-link hide-from-toc">2</h3></td>
<td><code>input</code></td>
<td>
<code><a class='link' href='#InputAudioCapturerConfiguration'>InputAudioCapturerConfiguration</a></code>
</td>
<td></td>
</tr>
</table>
### AudioCompressedFormat [flexible](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioCompressedFormat data-text="AudioCompressedFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=82)*
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="AudioCompressedFormat.aac">
<td><h3 id="AudioCompressedFormat.aac" class="add-link hide-from-toc">1</h3></td>
<td><code>aac</code></td>
<td>
<code><a class='link' href='#AudioCompressedFormatAac'>AudioCompressedFormatAac</a></code>
</td>
<td></td>
</tr>
<tr id="AudioCompressedFormat.sbc">
<td><h3 id="AudioCompressedFormat.sbc" class="add-link hide-from-toc">2</h3></td>
<td><code>sbc</code></td>
<td>
<code><a class='link' href='#AudioCompressedFormatSbc'>AudioCompressedFormatSbc</a></code>
</td>
<td></td>
</tr>
<tr id="AudioCompressedFormat.cvsd">
<td><h3 id="AudioCompressedFormat.cvsd" class="add-link hide-from-toc">3</h3></td>
<td><code>cvsd</code></td>
<td>
<code><a class='link' href='#AudioCompressedFormatCvsd'>AudioCompressedFormatCvsd</a></code>
</td>
<td><div class="fidl-version-div"><span class="fidl-attribute fidl-version">Added: HEAD</span></div>
</td>
</tr>
</table>
### AudioConsumerError [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioConsumerError data-text="AudioConsumerError"}
*Defined in [fuchsia.media/audio_consumer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_consumer.fidl;l=152)*
<p>Represents an <code>AudioConsumer</code> error condition.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="AudioConsumerError.place_holder">
<td><h3 id="AudioConsumerError.place_holder" class="add-link hide-from-toc">1</h3></td>
<td><code>place_holder</code></td>
<td>
<code><a class='link' href='#Void'>Void</a></code>
</td>
<td></td>
</tr>
</table>
### AudioFormat [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioFormat data-text="AudioFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=213)*
<p>AudioFormat</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="AudioFormat.compressed">
<td><h3 id="AudioFormat.compressed" class="add-link hide-from-toc">1</h3></td>
<td><code>compressed</code></td>
<td>
<code><a class='link' href='#AudioCompressedFormat'>AudioCompressedFormat</a></code>
</td>
<td></td>
</tr>
<tr id="AudioFormat.uncompressed">
<td><h3 id="AudioFormat.uncompressed" class="add-link hide-from-toc">2</h3></td>
<td><code>uncompressed</code></td>
<td>
<code><a class='link' href='#AudioUncompressedFormat'>AudioUncompressedFormat</a></code>
</td>
<td></td>
</tr>
</table>
### AudioUncompressedFormat [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioUncompressedFormat data-text="AudioUncompressedFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=206)*
<p>AudioUncompressedFormat</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="AudioUncompressedFormat.pcm">
<td><h3 id="AudioUncompressedFormat.pcm" class="add-link hide-from-toc">1</h3></td>
<td><code>pcm</code></td>
<td>
<code><a class='link' href='#PcmFormat'>PcmFormat</a></code>
</td>
<td></td>
</tr>
</table>
### CryptoFormat [flexible](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#CryptoFormat data-text="CryptoFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=457)*
<p>CryptoFormat</p>
<p>Crypto (encrypted or decrypted) format details.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="CryptoFormat.encrypted">
<td><h3 id="CryptoFormat.encrypted" class="add-link hide-from-toc">1</h3></td>
<td><code>encrypted</code></td>
<td>
<code><a class='link' href='#EncryptedFormat'>EncryptedFormat</a></code>
</td>
<td></td>
</tr>
<tr id="CryptoFormat.decrypted">
<td><h3 id="CryptoFormat.decrypted" class="add-link hide-from-toc">2</h3></td>
<td><code>decrypted</code></td>
<td>
<code><a class='link' href='#DecryptedFormat'>DecryptedFormat</a></code>
</td>
<td></td>
</tr>
</table>
### DomainFormat [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#DomainFormat data-text="DomainFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=465)*
<p>DomainFormat</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="DomainFormat.audio">
<td><h3 id="DomainFormat.audio" class="add-link hide-from-toc">1</h3></td>
<td><code>audio</code></td>
<td>
<code><a class='link' href='#AudioFormat'>AudioFormat</a></code>
</td>
<td></td>
</tr>
<tr id="DomainFormat.video">
<td><h3 id="DomainFormat.video" class="add-link hide-from-toc">2</h3></td>
<td><code>video</code></td>
<td>
<code><a class='link' href='#VideoFormat'>VideoFormat</a></code>
</td>
<td></td>
</tr>
<tr id="DomainFormat.crypto">
<td><h3 id="DomainFormat.crypto" class="add-link hide-from-toc">3</h3></td>
<td><code>crypto</code></td>
<td>
<code><a class='link' href='#CryptoFormat'>CryptoFormat</a></code>
</td>
<td></td>
</tr>
</table>
### EncoderSettings [flexible](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#EncoderSettings data-text="EncoderSettings"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=644)*
<p>Settings for encoders that tell them how to encode raw
formats.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="EncoderSettings.sbc">
<td><h3 id="EncoderSettings.sbc" class="add-link hide-from-toc">1</h3></td>
<td><code>sbc</code></td>
<td>
<code><a class='link' href='#SbcEncoderSettings'>SbcEncoderSettings</a></code>
</td>
<td></td>
</tr>
<tr id="EncoderSettings.aac">
<td><h3 id="EncoderSettings.aac" class="add-link hide-from-toc">2</h3></td>
<td><code>aac</code></td>
<td>
<code><a class='link' href='#AacEncoderSettings'>AacEncoderSettings</a></code>
</td>
<td></td>
</tr>
<tr id="EncoderSettings.h264">
<td><h3 id="EncoderSettings.h264" class="add-link hide-from-toc">3</h3></td>
<td><code>h264</code></td>
<td>
<code><a class='link' href='#H264EncoderSettings'>H264EncoderSettings</a></code>
</td>
<td></td>
</tr>
<tr id="EncoderSettings.hevc">
<td><h3 id="EncoderSettings.hevc" class="add-link hide-from-toc">4</h3></td>
<td><code>hevc</code></td>
<td>
<code><a class='link' href='#HevcEncoderSettings'>HevcEncoderSettings</a></code>
</td>
<td></td>
</tr>
<tr id="EncoderSettings.cvsd">
<td><h3 id="EncoderSettings.cvsd" class="add-link hide-from-toc">5</h3></td>
<td><code>cvsd</code></td>
<td>
<code><a class='link' href='#CvsdEncoderSettings'>CvsdEncoderSettings</a></code>
</td>
<td><div class="fidl-version-div"><span class="fidl-attribute fidl-version">Added: HEAD</span></div>
</td>
</tr>
</table>
### MediumSpecificStreamType [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#MediumSpecificStreamType data-text="MediumSpecificStreamType"}
*Defined in [fuchsia.media/stream_type.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=30)*
<p>A union of all medium-specific stream type structs.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="MediumSpecificStreamType.audio">
<td><h3 id="MediumSpecificStreamType.audio" class="add-link hide-from-toc">1</h3></td>
<td><code>audio</code></td>
<td>
<code><a class='link' href='#AudioStreamType'>AudioStreamType</a></code>
</td>
<td></td>
</tr>
<tr id="MediumSpecificStreamType.video">
<td><h3 id="MediumSpecificStreamType.video" class="add-link hide-from-toc">2</h3></td>
<td><code>video</code></td>
<td>
<code><a class='link' href='#VideoStreamType'>VideoStreamType</a></code>
</td>
<td></td>
</tr>
<tr id="MediumSpecificStreamType.text">
<td><h3 id="MediumSpecificStreamType.text" class="add-link hide-from-toc">3</h3></td>
<td><code>text</code></td>
<td>
<code><a class='link' href='#TextStreamType'>TextStreamType</a></code>
</td>
<td></td>
</tr>
<tr id="MediumSpecificStreamType.subpicture">
<td><h3 id="MediumSpecificStreamType.subpicture" class="add-link hide-from-toc">4</h3></td>
<td><code>subpicture</code></td>
<td>
<code><a class='link' href='#SubpictureStreamType'>SubpictureStreamType</a></code>
</td>
<td></td>
</tr>
</table>
### Usage [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#Usage data-text="Usage"}
*Defined in [fuchsia.media/audio_core.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=75)*
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="Usage.render_usage">
<td><h3 id="Usage.render_usage" class="add-link hide-from-toc">1</h3></td>
<td><code>render_usage</code></td>
<td>
<code><a class='link' href='#AudioRenderUsage'>AudioRenderUsage</a></code>
</td>
<td></td>
</tr>
<tr id="Usage.capture_usage">
<td><h3 id="Usage.capture_usage" class="add-link hide-from-toc">2</h3></td>
<td><code>capture_usage</code></td>
<td>
<code><a class='link' href='#AudioCaptureUsage'>AudioCaptureUsage</a></code>
</td>
<td></td>
</tr>
</table>
### UsageState [flexible](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#UsageState data-text="UsageState"}
*Defined in [fuchsia.media/usage_reporter.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/usage_reporter.fidl;l=20)*
<p>The state of audio policy enforcement on a stream or set of streams.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="UsageState.unadjusted">
<td><h3 id="UsageState.unadjusted" class="add-link hide-from-toc">1</h3></td>
<td><code>unadjusted</code></td>
<td>
<code><a class='link' href='#UsageStateUnadjusted'>UsageStateUnadjusted</a></code>
</td>
<td></td>
</tr>
<tr id="UsageState.ducked">
<td><h3 id="UsageState.ducked" class="add-link hide-from-toc">2</h3></td>
<td><code>ducked</code></td>
<td>
<code><a class='link' href='#UsageStateDucked'>UsageStateDucked</a></code>
</td>
<td></td>
</tr>
<tr id="UsageState.muted">
<td><h3 id="UsageState.muted" class="add-link hide-from-toc">3</h3></td>
<td><code>muted</code></td>
<td>
<code><a class='link' href='#UsageStateMuted'>UsageStateMuted</a></code>
</td>
<td></td>
</tr>
</table>
### Value [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#Value data-text="Value"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=15)*
<p>Value</p>
<p>Generic &quot;value&quot; for use within generic &quot;Parameter&quot; struct.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="Value.bool_value">
<td><h3 id="Value.bool_value" class="add-link hide-from-toc">1</h3></td>
<td><code>bool_value</code></td>
<td>
<code>bool</code>
</td>
<td></td>
</tr>
<tr id="Value.uint64_value">
<td><h3 id="Value.uint64_value" class="add-link hide-from-toc">2</h3></td>
<td><code>uint64_value</code></td>
<td>
<code>uint64</code>
</td>
<td></td>
</tr>
<tr id="Value.int64_value">
<td><h3 id="Value.int64_value" class="add-link hide-from-toc">3</h3></td>
<td><code>int64_value</code></td>
<td>
<code>int64</code>
</td>
<td></td>
</tr>
<tr id="Value.string_value">
<td><h3 id="Value.string_value" class="add-link hide-from-toc">4</h3></td>
<td><code>string_value</code></td>
<td>
<code>string</code>
</td>
<td></td>
</tr>
<tr id="Value.bytes_value">
<td><h3 id="Value.bytes_value" class="add-link hide-from-toc">5</h3></td>
<td><code>bytes_value</code></td>
<td>
<code>vector&lt;uint8&gt;</code>
</td>
<td></td>
</tr>
</table>
### VideoCompressedFormat [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#VideoCompressedFormat data-text="VideoCompressedFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=225)*
<p>VideoCompressedFormat</p>
<p>Compressed video format details.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="VideoCompressedFormat.temp_field_todo_remove">
<td><h3 id="VideoCompressedFormat.temp_field_todo_remove" class="add-link hide-from-toc">1</h3></td>
<td><code>temp_field_todo_remove</code></td>
<td>
<code>uint32</code>
</td>
<td></td>
</tr>
</table>
### VideoFormat [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#VideoFormat data-text="VideoFormat"}
*Defined in [fuchsia.media/stream_common.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=347)*
<p>VideoFormat</p>
<p>Video (compressed or uncompressed) format details. In this context,
&quot;uncompressed&quot; can include block-based image compression formats that still
permit fairly fast random access to image data.</p>
<table>
<tr><th>Ordinal</th><th>Variant</th><th>Type</th><th>Description</th></tr>
<tr id="VideoFormat.compressed">
<td><h3 id="VideoFormat.compressed" class="add-link hide-from-toc">1</h3></td>
<td><code>compressed</code></td>
<td>
<code><a class='link' href='#VideoCompressedFormat'>VideoCompressedFormat</a></code>
</td>
<td></td>
</tr>
<tr id="VideoFormat.uncompressed">
<td><h3 id="VideoFormat.uncompressed" class="add-link hide-from-toc">2</h3></td>
<td><code>uncompressed</code></td>
<td>
<code><a class='link' href='#VideoUncompressedFormat'>VideoUncompressedFormat</a></code>
</td>
<td></td>
</tr>
</table>
## **BITS**
### AudioConsumerStartFlags [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioConsumerStartFlags}
Type: <code>uint32</code>
*Defined in [fuchsia.media/audio_consumer.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_consumer.fidl;l=110)*
<p>Flags passed to <code>AudioConsumer.Start</code>.</p>
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioConsumerStartFlags.LOW_LATENCY">
<td><h3 id="AudioConsumerStartFlags.LOW_LATENCY" class="add-link hide-from-toc">LOW_LATENCY</h3></td>
<td>1</td>
<td><p>Indicates that latency should be kept as low as possible.</p>
</td>
</tr>
<tr id="AudioConsumerStartFlags.SUPPLY_DRIVEN">
<td><h3 id="AudioConsumerStartFlags.SUPPLY_DRIVEN" class="add-link hide-from-toc">SUPPLY_DRIVEN</h3></td>
<td>2</td>
<td><p>Indicates that the timing of packet delivery is determined by an external process rather
than being demand-based. When this flag is set, the service should expect underflow or
overflow due to a mismatch between packet arrival rate and presentation rate. When this
flag is not set, packets arrive on demand.</p>
</td>
</tr>
</table>
### AudioGainInfoFlags [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioGainInfoFlags}
Type: <code>uint32</code>
*Defined in [fuchsia.media/audio_device_enumerator.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_device_enumerator.fidl;l=8)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioGainInfoFlags.MUTE">
<td><h3 id="AudioGainInfoFlags.MUTE" class="add-link hide-from-toc">MUTE</h3></td>
<td>1</td>
<td></td>
</tr>
<tr id="AudioGainInfoFlags.AGC_SUPPORTED">
<td><h3 id="AudioGainInfoFlags.AGC_SUPPORTED" class="add-link hide-from-toc">AGC_SUPPORTED</h3></td>
<td>2</td>
<td></td>
</tr>
<tr id="AudioGainInfoFlags.AGC_ENABLED">
<td><h3 id="AudioGainInfoFlags.AGC_ENABLED" class="add-link hide-from-toc">AGC_ENABLED</h3></td>
<td>4</td>
<td></td>
</tr>
</table>
### AudioGainValidFlags [strict](/fuchsia-src/reference/fidl/language/language.md#strict-vs-flexible){:.fidl-attribute} {#AudioGainValidFlags}
Type: <code>uint32</code>
*Defined in [fuchsia.media/audio_device_enumerator.fidl](https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_device_enumerator.fidl;l=32)*
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="AudioGainValidFlags.GAIN_VALID">
<td><h3 id="AudioGainValidFlags.GAIN_VALID" class="add-link hide-from-toc">GAIN_VALID</h3></td>
<td>1</td>
<td></td>
</tr>
<tr id="AudioGainValidFlags.MUTE_VALID">
<td><h3 id="AudioGainValidFlags.MUTE_VALID" class="add-link hide-from-toc">MUTE_VALID</h3></td>
<td>2</td>
<td></td>
</tr>
<tr id="AudioGainValidFlags.AGC_VALID">
<td><h3 id="AudioGainValidFlags.AGC_VALID" class="add-link hide-from-toc">AGC_VALID</h3></td>
<td>4</td>
<td></td>
</tr>
</table>
## **CONSTANTS**
<table>
<tr><th>Name</th><th>Value</th><th>Type</th><th>Description</th></tr>
<tr id="AUDIO_ENCODING_AAC">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=38">AUDIO_ENCODING_AAC</a></td>
<td><code>fuchsia.media.aac</code></td>
<td><code>String</code></td>
<td><p>Audio encodings.</p>
</td>
</tr>
<tr id="AUDIO_ENCODING_AACLATM">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=39">AUDIO_ENCODING_AACLATM</a></td>
<td><code>fuchsia.media.aaclatm</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_AMRNB">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=40">AUDIO_ENCODING_AMRNB</a></td>
<td><code>fuchsia.media.amrnb</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_AMRWB">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=41">AUDIO_ENCODING_AMRWB</a></td>
<td><code>fuchsia.media.amrwb</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_APTX">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=42">AUDIO_ENCODING_APTX</a></td>
<td><code>fuchsia.media.aptx</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_FLAC">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=43">AUDIO_ENCODING_FLAC</a></td>
<td><code>fuchsia.media.flac</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_GSMMS">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=44">AUDIO_ENCODING_GSMMS</a></td>
<td><code>fuchsia.media.gsmms</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_LPCM">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=45">AUDIO_ENCODING_LPCM</a></td>
<td><code>fuchsia.media.lpcm</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_MP3">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=46">AUDIO_ENCODING_MP3</a></td>
<td><code>fuchsia.media.mp3</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_OPUS">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=51">AUDIO_ENCODING_OPUS</a></td>
<td><code>fuchsia.media.opus</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_PCMALAW">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=47">AUDIO_ENCODING_PCMALAW</a></td>
<td><code>fuchsia.media.pcmalaw</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_PCMMULAW">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=48">AUDIO_ENCODING_PCMMULAW</a></td>
<td><code>fuchsia.media.pcmmulaw</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_SBC">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=49">AUDIO_ENCODING_SBC</a></td>
<td><code>fuchsia.media.sbc</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="AUDIO_ENCODING_VORBIS">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=50">AUDIO_ENCODING_VORBIS</a></td>
<td><code>fuchsia.media.vorbis</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="CAPTURE_USAGE_COUNT">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=61">CAPTURE_USAGE_COUNT</a></td>
<td>
<code>4</code>
</td>
<td><code>uint8</code></td>
<td></td>
</tr>
<tr id="ENCRYPTION_SCHEME_CBC1">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=359">ENCRYPTION_SCHEME_CBC1</a></td>
<td><code>cbc1</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="ENCRYPTION_SCHEME_CBCS">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=361">ENCRYPTION_SCHEME_CBCS</a></td>
<td><code>cbcs</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="ENCRYPTION_SCHEME_CENC">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=358">ENCRYPTION_SCHEME_CENC</a></td>
<td><code>cenc</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="ENCRYPTION_SCHEME_CENS">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=360">ENCRYPTION_SCHEME_CENS</a></td>
<td><code>cens</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="ENCRYPTION_SCHEME_UNENCRYPTED">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=357">ENCRYPTION_SCHEME_UNENCRYPTED</a></td>
<td><code>unencrypted</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="MAX_ENCRYPTION_SCHEME_SIZE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=10">MAX_ENCRYPTION_SCHEME_SIZE</a></td>
<td>
<code>100</code>
</td>
<td><code>uint32</code></td>
<td></td>
</tr>
<tr id="MAX_FRAMES_PER_RENDERER_PACKET">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_renderer.fidl;l=10">MAX_FRAMES_PER_RENDERER_PACKET</a></td>
<td>
<code>262143</code>
</td>
<td><code>int64</code></td>
<td><p>The maximum number of frames that may be contained within a single StreamPacket.</p>
</td>
</tr>
<tr id="MAX_INIT_VECTOR_SIZE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=9">MAX_INIT_VECTOR_SIZE</a></td>
<td>
<code>16</code>
</td>
<td><code>uint32</code></td>
<td></td>
</tr>
<tr id="MAX_KEY_ID_SIZE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=8">MAX_KEY_ID_SIZE</a></td>
<td>
<code>16</code>
</td>
<td><code>uint32</code></td>
<td></td>
</tr>
<tr id="MAX_PCM_CHANNEL_COUNT">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio.fidl;l=48">MAX_PCM_CHANNEL_COUNT</a></td>
<td>
<code>8</code>
</td>
<td><code>uint32</code></td>
<td></td>
</tr>
<tr id="MAX_PCM_FRAMES_PER_SECOND">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio.fidl;l=50">MAX_PCM_FRAMES_PER_SECOND</a></td>
<td>
<code>192000</code>
</td>
<td><code>uint32</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_ALBUM">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=18">METADATA_LABEL_ALBUM</a></td>
<td><code>fuchsia.media.album</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_ARTIST">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=17">METADATA_LABEL_ARTIST</a></td>
<td><code>fuchsia.media.artist</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_COMPOSER">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=22">METADATA_LABEL_COMPOSER</a></td>
<td><code>fuchsia.media.composer</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_EPISODE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=25">METADATA_LABEL_EPISODE</a></td>
<td><code>fuchsia.media.episode</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_GENRE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=21">METADATA_LABEL_GENRE</a></td>
<td><code>fuchsia.media.genre</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_PUBLISHER">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=20">METADATA_LABEL_PUBLISHER</a></td>
<td><code>fuchsia.media.publisher</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_RELEASE_DATE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=24">METADATA_LABEL_RELEASE_DATE</a></td>
<td><code>fuchsia.media.release_date</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_SEASON">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=26">METADATA_LABEL_SEASON</a></td>
<td><code>fuchsia.media.season</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_STUDIO">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=27">METADATA_LABEL_STUDIO</a></td>
<td><code>fuchsia.media.studio</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_SUBTITLE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=23">METADATA_LABEL_SUBTITLE</a></td>
<td><code>fuchsia.media.subtitle</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_TITLE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=16">METADATA_LABEL_TITLE</a></td>
<td><code>fuchsia.media.title</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_LABEL_TRACK_NUMBER">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=19">METADATA_LABEL_TRACK_NUMBER</a></td>
<td><code>fuchsia.media.track_number</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="METADATA_SOURCE_TITLE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/metadata.fidl;l=31">METADATA_SOURCE_TITLE</a></td>
<td><code>fuchsia.media.source_title</code></td>
<td><code>String</code></td>
<td><p>The title of the source of the media, e.g. a player, streaming service, or
website.</p>
</td>
</tr>
<tr id="MIN_PCM_CHANNEL_COUNT">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio.fidl;l=47">MIN_PCM_CHANNEL_COUNT</a></td>
<td>
<code>1</code>
</td>
<td><code>uint32</code></td>
<td><p>Lower bound of the permitted PCM ranges for AudioRenderer and AudioCapturer.</p>
</td>
</tr>
<tr id="MIN_PCM_FRAMES_PER_SECOND">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio.fidl;l=49">MIN_PCM_FRAMES_PER_SECOND</a></td>
<td>
<code>1000</code>
</td>
<td><code>uint32</code></td>
<td></td>
</tr>
<tr id="NO_TIMESTAMP">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=167">NO_TIMESTAMP</a></td>
<td>
<code>9223372036854775807</code>
</td>
<td><code>int64</code></td>
<td><p>When used as a <code>StreamPacket.pts</code> value, indicates that the packet has no
specific presentation timestamp. The effective presentation time of such a
packet depends on the context in which the <code>StreamPacket</code> is used.</p>
</td>
</tr>
<tr id="RENDER_USAGE_COUNT">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/audio_core.fidl;l=34">RENDER_USAGE_COUNT</a></td>
<td>
<code>5</code>
</td>
<td><code>uint8</code></td>
<td></td>
</tr>
<tr id="STREAM_PACKET_FLAG_DISCONTINUITY">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=185">STREAM_PACKET_FLAG_DISCONTINUITY</a></td>
<td>
<code>4</code>
</td>
<td><code>uint32</code></td>
<td><p>Indicates a discontinuity in an otherwise continuous-in-time sequence of
packets. The precise semantics of this flag depend on the context in which
the <code>StreamPacket</code> is used.</p>
</td>
</tr>
<tr id="STREAM_PACKET_FLAG_DROPPABLE">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=180">STREAM_PACKET_FLAG_DROPPABLE</a></td>
<td>
<code>2</code>
</td>
<td><code>uint32</code></td>
<td><p>Indicates that all other packets in the stream can be understood without
reference to this packet. This is typically used in compressed streams to
identify packets containing frames that may be discarded without affecting
other frames.</p>
</td>
</tr>
<tr id="STREAM_PACKET_FLAG_KEY_FRAME">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream.fidl;l=174">STREAM_PACKET_FLAG_KEY_FRAME</a></td>
<td>
<code>1</code>
</td>
<td><code>uint32</code></td>
<td><p>Indicates that the packet can be understood without reference to other
packets in the stream. This is typically used in compressed streams to
identify packets that contain key frames.</p>
</td>
</tr>
<tr id="VIDEO_ENCODING_H263">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=54">VIDEO_ENCODING_H263</a></td>
<td><code>fuchsia.media.h263</code></td>
<td><code>String</code></td>
<td><p>Video encodings.</p>
</td>
</tr>
<tr id="VIDEO_ENCODING_H264">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=55">VIDEO_ENCODING_H264</a></td>
<td><code>fuchsia.media.h264</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="VIDEO_ENCODING_MPEG4">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=56">VIDEO_ENCODING_MPEG4</a></td>
<td><code>fuchsia.media.mpeg4</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="VIDEO_ENCODING_THEORA">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=57">VIDEO_ENCODING_THEORA</a></td>
<td><code>fuchsia.media.theora</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="VIDEO_ENCODING_UNCOMPRESSED">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=58">VIDEO_ENCODING_UNCOMPRESSED</a></td>
<td><code>fuchsia.media.uncompressed_video</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="VIDEO_ENCODING_VP3">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=59">VIDEO_ENCODING_VP3</a></td>
<td><code>fuchsia.media.vp3</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="VIDEO_ENCODING_VP8">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=60">VIDEO_ENCODING_VP8</a></td>
<td><code>fuchsia.media.vp8</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="VIDEO_ENCODING_VP9">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=61">VIDEO_ENCODING_VP9</a></td>
<td><code>fuchsia.media.vp9</code></td>
<td><code>String</code></td>
<td></td>
</tr>
<tr id="kMaxOobBytesSize">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=471">kMaxOobBytesSize</a></td>
<td>
<code>8192</code>
</td>
<td><code>uint64</code></td>
<td></td>
</tr>
</table>
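
The `STREAM_PACKET_FLAG_*` constants above are single-bit values, so they can be combined bitwise in one `StreamPacket.flags` field, and `NO_TIMESTAMP` is the maximum `int64` value used as a sentinel in `StreamPacket.pts`. A minimal Rust sketch of how a client might test them (the constant values are copied from the table; the `StreamPacket` struct and helper functions here are illustrative stand-ins, not the generated FIDL bindings):

```rust
// Constant values copied from the fuchsia.media reference table above.
const NO_TIMESTAMP: i64 = 9223372036854775807; // i64::MAX
const STREAM_PACKET_FLAG_KEY_FRAME: u32 = 1;
const STREAM_PACKET_FLAG_DROPPABLE: u32 = 2;
const STREAM_PACKET_FLAG_DISCONTINUITY: u32 = 4;

// Hypothetical stand-in for the generated StreamPacket binding.
struct StreamPacket {
    pts: i64,
    flags: u32,
}

// True if the packet carries a specific presentation timestamp.
fn has_specific_pts(p: &StreamPacket) -> bool {
    p.pts != NO_TIMESTAMP
}

// True if the packet can be understood without reference to other packets.
fn is_key_frame(p: &StreamPacket) -> bool {
    p.flags & STREAM_PACKET_FLAG_KEY_FRAME != 0
}

fn main() {
    // A key frame that also begins after a discontinuity, with no timestamp.
    let p = StreamPacket {
        pts: NO_TIMESTAMP,
        flags: STREAM_PACKET_FLAG_KEY_FRAME | STREAM_PACKET_FLAG_DISCONTINUITY,
    };
    assert!(!has_specific_pts(&p));
    assert!(is_key_frame(&p));
    assert_eq!(NO_TIMESTAMP, i64::MAX);
}
```

Because each flag occupies a distinct bit, a packet can carry any combination of them; a consumer masks with `&` rather than comparing the whole field for equality.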
## **ALIASES**
<table>
<tr><th>Name</th><th>Value</th><th>Description</th></tr>
<tr id="CompressionType">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_type.fidl;l=76">CompressionType</a></td>
<td>
<code>string</code>[<code>256</code>]</td>
<td><p>An identifier for compression types.</p>
</td>
</tr>
<tr id="EncryptionScheme">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=356">EncryptionScheme</a></td>
<td>
<code>string</code>[<code><a class='link' href='#MAX_ENCRYPTION_SCHEME_SIZE'>MAX_ENCRYPTION_SCHEME_SIZE</a></code>]</td>
<td></td>
</tr>
<tr id="InitVector">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=364">InitVector</a></td>
<td>
<code>vector</code>[<code><a class='link' href='#MAX_INIT_VECTOR_SIZE'>MAX_INIT_VECTOR_SIZE</a></code>]</td>
<td></td>
</tr>
<tr id="KeyId">
<td><a href="https://cs.opensource.google/fuchsia/fuchsia/+/main:sdk/fidl/fuchsia.media/stream_common.fidl;l=363">KeyId</a></td>
<td>
<code>vector</code>[<code><a class='link' href='#MAX_KEY_ID_SIZE'>MAX_KEY_ID_SIZE</a></code>]</td>
<td></td>
</tr>
</table>
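
The bracketed sizes in the alias table are FIDL bounds: for a `string` the bound is a maximum length in bytes of UTF-8, and for a `vector` it is a maximum element count. A hedged sketch of client-side validation for `CompressionType` (the 256-byte bound comes from the table; the helper function is illustrative and not part of the generated bindings):

```rust
// Bound taken from the CompressionType alias above: string[256].
const MAX_COMPRESSION_TYPE_BYTES: usize = 256;

// Illustrative helper: FIDL string bounds count UTF-8 bytes, not characters,
// so check the byte length before sending the value over the wire.
fn fits_compression_type_bound(s: &str) -> bool {
    s.len() <= MAX_COMPRESSION_TYPE_BYTES
}

fn main() {
    // An encoding identifier from the constants table above.
    assert!(fits_compression_type_bound("fuchsia.media.h264"));
    // A 257-byte string exceeds the string[256] bound.
    assert!(!fits_compression_type_bound(&"x".repeat(257)));
}
```

The same byte-versus-element distinction applies to the vector-typed aliases such as `InitVector` and `KeyId`, whose bounds (`MAX_INIT_VECTOR_SIZE`, `MAX_KEY_ID_SIZE`) limit the number of elements.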