The OfflineAudioContext interface is an AudioContext representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
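As a minimal sketch of the contrast described above, an OfflineAudioContext can be constructed either with positional arguments (channel count, length in sample-frames, sample rate) or with an options object; the 40-second duration here mirrors the example later on this page:

```javascript
// Sketch: constructing an OfflineAudioContext with an options object.
// Equivalent to new OfflineAudioContext(2, 44100 * 40, 44100).
function makeOfflineCtx() {
  return new OfflineAudioContext({
    numberOfChannels: 2,
    length: 44100 * 40, // 40 seconds at 44.1 kHz, in sample-frames
    sampleRate: 44100,
  });
}

// Browser-only: only run where the Web Audio API exists.
if (typeof OfflineAudioContext !== "undefined") {
  const ctx = makeOfflineCtx();
  console.log(ctx.length); // total length of the buffer in sample-frames
}
```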
The startRendering() method starts rendering the audio, taking into account the current connections and the currently scheduled changes. This page covers both the event-based version and the promise-based version.
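The example further down uses the promise-based version; the older event-based version can be sketched as follows, where the rendered AudioBuffer arrives on the "complete" event rather than through the promise (the helper name and callback are illustrative, not part of the API):

```javascript
// Sketch: event-based use of startRendering(). The "complete" event
// fires when rendering finishes, and the rendered AudioBuffer is
// available as event.renderedBuffer.
function renderWithEvent(offlineCtx, onDone) {
  offlineCtx.oncomplete = (event) => {
    onDone(event.renderedBuffer);
  };
  // The event-based form still calls startRendering(); older code
  // simply ignored the returned promise.
  offlineCtx.startRendering();
}

// Browser-only: guard so this does nothing outside the Web Audio API.
if (typeof OfflineAudioContext !== "undefined") {
  const ctx = new OfflineAudioContext(2, 44100, 44100);
  renderWithEvent(ctx, (buffer) => {
    console.log(`Rendered ${buffer.length} sample-frames`);
  });
}
```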
When the startRendering() promise resolves, rendering has completed, and the promise is fulfilled with the output AudioBuffer.
At this point we create a second audio context, create an AudioBufferSourceNode inside it, and set its buffer to the AudioBuffer the promise resolved with. This is then played as part of a simple standard audio graph.
// define online and offline audio context
const audioCtx = new AudioContext();
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

const source = offlineCtx.createBufferSource();

// use XHR to load an audio track, and
// decodeAudioData to decode it and OfflineAudioContext to render it
function getData() {
  const request = new XMLHttpRequest();
  request.open("GET", "viper.ogg", true);
  request.responseType = "arraybuffer";
  request.onload = () => {
    const audioData = request.response;
    audioCtx.decodeAudioData(audioData, (buffer) => {
      source.buffer = buffer;
      source.connect(offlineCtx.destination);
      source.start();
      //source.loop = true;
      offlineCtx
        .startRendering()
        .then((renderedBuffer) => {
          console.log("Rendering completed successfully");
          const song = audioCtx.createBufferSource();
          song.buffer = renderedBuffer;
          song.connect(audioCtx.destination);
          // "play" is a button element defined elsewhere on the page
          play.onclick = () => {
            song.start();
          };
        })
        .catch((err) => {
          console.error(`Rendering failed: ${err}`);
          // Note: The promise should reject when startRendering is
          // called a second time on an OfflineAudioContext
        });
    });
  };
  request.send();
}

// Run getData to start the process off
getData();
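The same flow can be written more compactly with fetch() and async/await, since decodeAudioData and startRendering both return promises; this is a sketch assuming the same "viper.ogg" track, with playback started directly rather than wired to a play button:

```javascript
// Sketch: fetch() + async/await rewrite of the XHR example above.
async function renderAndPlay() {
  const audioCtx = new AudioContext();
  const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

  // Fetch and decode the track (decodeAudioData's promise form).
  const response = await fetch("viper.ogg");
  const arrayBuffer = await response.arrayBuffer();
  const decoded = await audioCtx.decodeAudioData(arrayBuffer);

  // Build the offline graph and render it to an AudioBuffer.
  const source = offlineCtx.createBufferSource();
  source.buffer = decoded;
  source.connect(offlineCtx.destination);
  source.start();
  const renderedBuffer = await offlineCtx.startRendering();

  // Play the rendered buffer through the online context.
  const song = audioCtx.createBufferSource();
  song.buffer = renderedBuffer;
  song.connect(audioCtx.destination);
  song.start();
}

// Browser-only: requires the Web Audio API (and, in practice, a user
// gesture before audio will actually play).
if (typeof AudioContext !== "undefined") {
  renderAndPlay().catch((err) => console.error(`Rendering failed: ${err}`));
}
```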