The RTCEncodedAudioFrame interface of the WebRTC API represents an encoded audio frame in the WebRTC receiver or sender pipeline, which may be modified using a WebRTC Encoded Transform. The interface provides methods and properties to get metadata about the frame, allowing its format and position in the sequence of frames to be determined. The data property gives access to the encoded frame data as a buffer, which might be encrypted or otherwise modified by a transform.
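For example, a transform might inspect a frame's metadata before deciding how to process it. The sketch below is a minimal illustration: the helper name `describeFrame` is hypothetical, and it assumes the `getMetadata()` fields defined for audio frames (`sequenceNumber`, `rtpTimestamp`, `synchronizationSource`) are available in the browser.

```javascript
// Hypothetical helper: summarize an RTCEncodedAudioFrame.
// The data property and getMetadata() are part of the interface;
// which metadata fields are populated depends on browser support.
function describeFrame(encodedFrame) {
  const metadata = encodedFrame.getMetadata();
  return {
    byteLength: encodedFrame.data.byteLength, // size of the encoded payload
    sequenceNumber: metadata.sequenceNumber, // position in the frame sequence
    rtpTimestamp: metadata.rtpTimestamp, // RTP timestamp of the frame
    ssrc: metadata.synchronizationSource, // source of the stream
  };
}
```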
This code snippet shows a handler for the rtctransform event in a Worker that implements a TransformStream, and pipes encoded frames through it from event.transformer.readable to event.transformer.writable (event.transformer is an RTCRtpScriptTransformer, the worker-side counterpart of RTCRtpScriptTransform).

If the transformer is inserted into an audio stream, the transform() method is called with an RTCEncodedAudioFrame whenever a new frame is enqueued on event.transformer.readable. The transform() method shows how the frame might be read, modified using a fictional encryption function, and then enqueued on the controller (this ultimately pipes it through to event.transformer.writable, and then back into the WebRTC pipeline).
addEventListener("rtctransform", (event) => {
  const transform = new TransformStream({
    async transform(encodedFrame, controller) {
      const view = new DataView(encodedFrame.data);
      // Create a new buffer of the same length to hold the modified data.
      const newData = new ArrayBuffer(encodedFrame.data.byteLength);
      const newView = new DataView(newData);

      // Negate each byte and pass it through a fictional encryption function.
      for (let i = 0; i < encodedFrame.data.byteLength; ++i) {
        const encryptedByte = encryptFunction(~view.getInt8(i));
        newView.setInt8(i, encryptedByte);
      }

      encodedFrame.data = newData;
      controller.enqueue(encodedFrame);
    },
  });
  event.transformer.readable
    .pipeThrough(transform)
    .pipeTo(event.transformer.writable);
});
Note that more complete examples are provided in Using WebRTC Encoded Transforms.