AudioProcessingEvent

Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; see the compatibility table at the bottom of this page to guide your decision. Be aware that this feature may cease to work at any time.

The AudioProcessingEvent interface of the Web Audio API represents events that occur when a ScriptProcessorNode input buffer is ready to be processed.

An audioprocess event with this interface is fired on a ScriptProcessorNode when audio processing is required. During audio processing, the input buffer is read and processed to produce output audio data, which is then written to the output buffer.
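
For orientation, here is a minimal sketch of such a handler (hypothetical; it assumes scriptNode is an existing ScriptProcessorNode created with equal input and output channel counts) that simply copies each input channel to the corresponding output channel:

js

scriptNode.addEventListener("audioprocess", (event) => {
  for (let channel = 0; channel < event.outputBuffer.numberOfChannels; channel++) {
    // Copy the input samples straight to the output channel
    event.outputBuffer
      .getChannelData(channel)
      .set(event.inputBuffer.getChannelData(channel));
  }
});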

Warning: This feature has been deprecated and should be replaced by an AudioWorklet.
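
As a hedged illustration of the migration path, the noise-adding processing from the example below could be expressed with an AudioWorkletProcessor instead. The file name noise-processor.js and the registered name "noise-processor" are hypothetical, and audioCtx and source are assumed to exist as in the example.

js

// noise-processor.js (hypothetical module) — runs on the audio rendering thread
class NoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0]; // array of Float32Array channels
    const output = outputs[0];
    for (let channel = 0; channel < output.length; channel++) {
      for (let i = 0; i < output[channel].length; i++) {
        // Copy the sample (0 if the input is silent) and add white noise
        const sample = input[channel] ? input[channel][i] : 0;
        output[channel][i] = sample + (Math.random() * 2 - 1) * 0.2;
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor("noise-processor", NoiseProcessor);

// Main thread (inside an async function):
await audioCtx.audioWorklet.addModule("noise-processor.js");
const noiseNode = new AudioWorkletNode(audioCtx, "noise-processor");
source.connect(noiseNode).connect(audioCtx.destination);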

Inheritance: Event → AudioProcessingEvent

Constructor

AudioProcessingEvent() Deprecated

Creates a new AudioProcessingEvent object.
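
Constructing one of these events by hand is rare, but a sketch of the call might look like the following, assuming the AudioProcessingEventInit members shown here (playbackTime, inputBuffer, and outputBuffer are all required):

js

const sampleRate = 44100;
// Two one-channel buffers to serve as input and output
const inputBuffer = new AudioBuffer({ length: 4096, numberOfChannels: 1, sampleRate });
const outputBuffer = new AudioBuffer({ length: 4096, numberOfChannels: 1, sampleRate });

const event = new AudioProcessingEvent("audioprocess", {
  playbackTime: 0,
  inputBuffer,
  outputBuffer,
});
console.log(event.inputBuffer.length); // 4096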

Instance properties

Also inherits the properties of its parent, Event.

playbackTime Read only Deprecated

A double representing the time at which the audio will be played, expressed on the same clock as AudioContext.currentTime.

inputBuffer Read only Deprecated

An AudioBuffer containing the input audio data to be processed. The number of channels is defined by the numberOfInputChannels parameter of the factory method AudioContext.createScriptProcessor(). Note that the returned AudioBuffer is only valid in the scope of the event handler.

outputBuffer Read only Deprecated

An AudioBuffer where the output audio data should be written. The number of channels is defined by the numberOfOutputChannels parameter of the factory method AudioContext.createScriptProcessor(). Note that the returned AudioBuffer is only valid in the scope of the event handler. A combined usage sketch follows these descriptions.
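
Taken together, a typical handler reads inputBuffer, writes outputBuffer, and can compare playbackTime with the context clock. A minimal sketch, assuming scriptNode and audioCtx exist as in the example below:

js

scriptNode.onaudioprocess = (event) => {
  // How far ahead of "now" this block will actually be heard
  const lookahead = event.playbackTime - audioCtx.currentTime;
  console.log(`Block plays ${lookahead.toFixed(3)}s from now`);

  const input = event.inputBuffer.getChannelData(0);
  const output = event.outputBuffer.getChannelData(0);
  output.set(input); // pass the audio through unchanged

  // The buffers are only valid inside this handler, so copy any
  // samples you want to keep for later analysis
  const kept = input.slice();
};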

Examples

The following example shows how to use a ScriptProcessorNode to take a track loaded via AudioContext.decodeAudioData(), process it by adding a bit of white noise to each audio sample of the input track (buffer), and play it through the AudioDestinationNode. For each channel and each sample frame, the scriptNode.onaudioprocess function takes the associated audioProcessingEvent and uses it to loop through each channel of the input buffer and each sample in each channel, adding a small amount of white noise before setting the result as the output sample in each case.

Note: For a full working example, see our script-processor-node GitHub repo. (You can also access the source code.)

js

const myScript = document.querySelector("script");
const myPre = document.querySelector("pre");
const playButton = document.querySelector("button");

// Create AudioContext and buffer source
const audioCtx = new AudioContext();
const source = audioCtx.createBufferSource();

// Create a ScriptProcessorNode with a bufferSize of 4096 and a single input and output channel
const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);
console.log(scriptNode.bufferSize);

// load in an audio track via XHR and decodeAudioData

function getData() {
  const request = new XMLHttpRequest();
  request.open("GET", "viper.ogg", true);
  request.responseType = "arraybuffer";
  request.onload = () => {
    const audioData = request.response;

    audioCtx.decodeAudioData(
      audioData,
      (buffer) => {
        source.buffer = buffer;
      },
      (e) => console.error(`Error with decoding audio data: ${e.message}`),
    );
  };
  request.send();
}

// Give the node a function to process audio events
scriptNode.onaudioprocess = (audioProcessingEvent) => {
  // The input buffer is the song we loaded earlier
  const inputBuffer = audioProcessingEvent.inputBuffer;

  // The output buffer contains the samples that will be modified and played
  const outputBuffer = audioProcessingEvent.outputBuffer;

  // Loop through the output channels (in this case there is only one)
  for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
    const inputData = inputBuffer.getChannelData(channel);
    const outputData = outputBuffer.getChannelData(channel);

    // Loop through the 4096 samples
    for (let sample = 0; sample < inputBuffer.length; sample++) {
      // make output equal to the same as the input
      outputData[sample] = inputData[sample];

      // add noise to each output sample
      outputData[sample] += (Math.random() * 2 - 1) * 0.2;
    }
  }
};

getData();

// Wire up the play button
playButton.onclick = () => {
  source.connect(scriptNode);
  scriptNode.connect(audioCtx.destination);
  source.start();
};

// When the buffer source stops playing, disconnect everything
source.onended = () => {
  source.disconnect(scriptNode);
  scriptNode.disconnect(audioCtx.destination);
};
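
As an aside, the XMLHttpRequest loading above predates fetch(); a modern equivalent of getData(), sketched here using the promise form of decodeAudioData(), would be:

js

async function getData() {
  const response = await fetch("viper.ogg");
  const arrayBuffer = await response.arrayBuffer();
  source.buffer = await audioCtx.decodeAudioData(arrayBuffer);
}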

Browser compatibility

Desktop Mobile
Chrome Edge Firefox Internet Explorer Opera Safari WebView Android Chrome Android Firefox for Android Opera Android Safari on iOS Samsung Internet
AudioProcessingEvent (interface) 14 12 25 No 15 6 4.4 18 25 14 6 1.0
AudioProcessingEvent() (constructor) 57 79 No No 44 14.1 57 57 No 43 14.5 7.0
inputBuffer 14 12 25 No 15 6 4.4 18 25 14 6 1.0
outputBuffer 14 12 25 No 15 6 4.4 18 25 14 6 1.0
playbackTime 14 12 25 No 15 6 ≤37 18 25 14 6 1.0

© 2005–2023 MDN contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://developer.mozilla.org/en-US/docs/Web/API/AudioProcessingEvent