The `process()` method of an `AudioWorkletProcessor`-derived class implements the audio processing algorithm for the audio processor worklet. Although the method is not a part of the `AudioWorkletProcessor` interface, any implementation of `AudioWorkletProcessor` must provide a `process()` method.

The method is called synchronously from the audio rendering thread, once for each block of audio (also known as a rendering quantum) being directed through the processor's corresponding `AudioWorkletNode`. In other words, every time a new block of audio is ready for your processor to manipulate, your `process()` function is invoked to do so.
Note: Currently, audio data blocks are always 128 frames long—that is, they contain 128 32-bit floating-point samples for each of the inputs' channels. However, plans are already in place to revise the specification to allow the size of the audio blocks to be changed depending on circumstances (for example, if the audio hardware or CPU utilization is more efficient with larger block sizes). Therefore, you must always check the size of the sample array rather than assuming a particular size.
This size may even be allowed to change over time, so you mustn't look at just the first block and assume the sample buffers will always be the same size.
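For instance, a processor can size all of its loops from the channel arrays it is handed rather than hardcoding a frame count. The following pass-through processor is a minimal sketch, not part of the specification; the class name and structure are illustrative only:

```js
class PassthroughProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; channel++) {
      const inputChannel = input[channel];
      const outputChannel = output[channel];
      // outputChannel.length is the size of the current render quantum;
      // don't assume it is 128 or that it stays the same between calls.
      for (let i = 0; i < outputChannel.length; i++) {
        outputChannel[i] = inputChannel ? inputChannel[i] : 0;
      }
    }
    // Returning false lets the browser shut the node down once it has no
    // active inputs or references (see the discussion of return values below).
    return false;
  }
}
```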
`process(inputs, outputs, parameters)`

The method returns a Boolean value indicating whether or not to force the `AudioWorkletNode` to remain active even if the user agent's internal logic would otherwise decide that it's safe to shut down the node.
The returned value lets your processor have influence over the lifetime policy of the `AudioWorkletProcessor` and the node that owns it. If the combination of the return value and the state of the node causes the browser to decide to stop the node, `process()` will not be called again.

Returning `true` forces the Web Audio API to keep the node alive, while returning `false` allows the browser to terminate the node if it is neither generating new audio data nor receiving data through its inputs that it is processing.
The three most common types of audio node are:

- A source of output. An `AudioWorkletProcessor` implementing such a node should return `true` from the `process` method as long as it produces an output. The method should return `false` as soon as it's known that it will no longer produce an output. For example, take the `AudioBufferSourceNode`: the processor behind such a node should return `true` from the `process` method while the buffer is playing, and start returning `false` when the buffer playing has ended (there's no way to call `play` on the same `AudioBufferSourceNode` again). A minimal sketch of this pattern appears after this list.
- A node that transforms its input. A processor implementing such a node should return `false` from the `process` method to allow the presence of active input nodes and references to the node to determine whether it can be garbage-collected. An example of a node with this behavior is the `GainNode`. As soon as there are no inputs connected and no references retained, gain can no longer be applied to anything, so it can be safely garbage-collected.
- A node that transforms its input, but has a so-called tail-time, meaning that it will produce an output for some time even after its inputs are disconnected or are inactive (producing zero-channels). A processor implementing such a node should return `true` from the `process` method for the period of the tail-time, beginning as soon as inputs are found that contain zero-channels. An example of such a node is the `DelayNode`; it has a tail-time equal to its `delayTime` property.
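As a concrete illustration of the first pattern, here is a minimal sketch of a source processor that keeps itself alive only while it still has output to produce. The class name and the fixed one-second duration are arbitrary choices for this example, not part of the Web Audio API:

```js
class OneSecondNoiseProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // `sampleRate` is a global in the AudioWorkletGlobalScope, so this is
    // one second's worth of frames.
    this.framesRemaining = sampleRate;
  }

  process(inputs, outputs, parameters) {
    const output = outputs[0];
    const frames = output[0] ? output[0].length : 0;
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        // Emit noise while frames remain, silence afterwards.
        channel[i] = i < this.framesRemaining ? Math.random() * 2 - 1 : 0;
      }
    }
    this.framesRemaining = Math.max(0, this.framesRemaining - frames);
    // Stay alive while there is still output to produce, then allow the
    // browser to shut the node down.
    return this.framesRemaining > 0;
  }
}
```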
Note: The absence of a `return` statement means that the method returns `undefined`, and as this is a falsy value, it is like returning `false`. Omitting an explicit `return` statement may cause hard-to-detect problems for your nodes.
As the `process()` method is implemented by the user, it can throw anything. If an uncaught error is thrown, the node will emit a `processorerror` event and will output silence for the rest of its lifetime.
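You can watch for this on the main thread by listening for the event on the corresponding node. A minimal sketch, assuming `node` is an `AudioWorkletNode` you have already constructed:

```js
// `node` is assumed to be an existing AudioWorkletNode instance.
node.addEventListener("processorerror", (event) => {
  // The processor threw an uncaught error; the node now outputs silence.
  console.error("AudioWorkletProcessor error:", event);
});
```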
In this example we create an `AudioWorkletProcessor` that outputs white noise to its first output. The gain can be controlled by the `customGain` parameter.
class WhiteNoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    output.forEach((channel) => {
      for (let i = 0; i < channel.length; i++) {
        // Random noise in the range [-1, 1], scaled by customGain. The
        // parameter array holds either a single value for the whole block
        // or one value per frame, so check its length before indexing.
        channel[i] =
          (Math.random() * 2 - 1) *
          (parameters["customGain"].length > 1
            ? parameters["customGain"][i]
            : parameters["customGain"][0]);
      }
    });
    // A source node should return true for as long as it keeps producing
    // output, so the browser does not shut it down.
    return true;
  }

  static get parameterDescriptors() {
    return [
      {
        name: "customGain",
        defaultValue: 1,
        minValue: 0,
        maxValue: 1,
        automationRate: "a-rate",
      },
    ];
  }
}
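To actually use this processor, it needs to be registered in the worklet module and instantiated from the main thread. The following is a minimal sketch; the "white-noise-processor" registration name and the "white-noise-processor.js" module path are assumptions for this example, not fixed by the API:

```js
// In the worklet module, alongside the class definition above:
registerProcessor("white-noise-processor", WhiteNoiseProcessor);
```

```js
// On the main thread: load the module, create a node for the processor,
// and connect it to the audio output.
async function startWhiteNoise() {
  const audioContext = new AudioContext();
  await audioContext.audioWorklet.addModule("white-noise-processor.js");
  const whiteNoiseNode = new AudioWorkletNode(
    audioContext,
    "white-noise-processor",
  );
  whiteNoiseNode.connect(audioContext.destination);

  // customGain, declared in parameterDescriptors, is exposed through the
  // node's parameters map and can be automated like any AudioParam.
  const customGain = whiteNoiseNode.parameters.get("customGain");
  customGain.setValueAtTime(0.5, audioContext.currentTime);
}
```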