The frequencyBinCount read-only property of the AnalyserNode interface contains the number of frequency data points available for visualization. It is always half the value of AnalyserNode.fftSize. The indices of the values returned by the two frequency-data methods described below have a linear relationship with the frequencies they represent, ranging from 0 up to the Nyquist frequency (half of the AudioContext sampleRate).
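For example, bin i represents the frequency i * sampleRate / fftSize. The following short sketch illustrates this relationship (the binWidthHz name is purely illustrative):

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

// frequencyBinCount is always half of fftSize — 1024 here
console.log(analyser.frequencyBinCount); // 1024

// Bin i represents the frequency i * sampleRate / fftSize, so the bins
// span 0 Hz up to the Nyquist frequency (sampleRate / 2)
const binWidthHz = audioCtx.sampleRate / analyser.fftSize;
console.log(`Each bin is ${binWidthHz} Hz wide`);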
An unsigned integer, equal to the number of values that AnalyserNode.getByteFrequencyData() and AnalyserNode.getFloatFrequencyData() copy into the provided TypedArray.

For technical reasons related to how the Fast Fourier transform is defined, it is always half the value of AnalyserNode.fftSize. Therefore, it will be one of 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, or 16384.
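In practice, frequencyBinCount gives the length the destination array needs to be. A minimal sketch:

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser(); // default fftSize is 2048

const byteData = new Uint8Array(analyser.frequencyBinCount); // length 1024
const floatData = new Float32Array(analyser.frequencyBinCount);

// Each call copies exactly frequencyBinCount values into its array
analyser.getByteFrequencyData(byteData); // unsigned values, 0–255
analyser.getFloatFrequencyData(floatData); // power values in decibels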
The following example shows basic usage of an AudioContext
to create an AnalyserNode
, then requestAnimationFrame
and <canvas>
to collect frequency data repeatedly and draw a "winamp bar graph"-style output of the current audio input. For more complete applied examples and information, check out our Voice-change-O-matic demo (see app.js lines 108-193 for the relevant code).
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const analyser = audioCtx.createAnalyser();
analyser.minDecibels = -90;
analyser.maxDecibels = -10;
analyser.fftSize = 256;

// frequencyBinCount is half of fftSize, so 128 values here
const bufferLength = analyser.frequencyBinCount;
console.log(bufferLength);
const dataArray = new Uint8Array(bufferLength);

// Canvas setup — the original snippet assumes these already exist
const canvas = document.querySelector("canvas");
const canvasCtx = canvas.getContext("2d");
const WIDTH = canvas.width;
const HEIGHT = canvas.height;
let drawVisual;
canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
function draw() {
  // Schedule the next frame, keeping the id so the loop can be cancelled
  drawVisual = requestAnimationFrame(draw);

  // Copy the current frequency data (one value per bin) into dataArray
  analyser.getByteFrequencyData(dataArray);

  canvasCtx.fillStyle = "rgb(0, 0, 0)";
  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);

  const barWidth = (WIDTH / bufferLength) * 2.5 - 1;
  let barHeight;
  let x = 0;

  for (let i = 0; i < bufferLength; i++) {
    barHeight = dataArray[i];

    canvasCtx.fillStyle = `rgb(${barHeight + 100}, 50, 50)`;
    canvasCtx.fillRect(x, HEIGHT - barHeight / 2, barWidth, barHeight / 2);

    x += barWidth + 1; // advance past the bar plus a 1px gap
  }
}

draw();
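Note that the snippet above assumes an audio source has already been connected to analyser. As a minimal sketch (not part of the original example), microphone input could be wired up as follows; getUserMedia requires user permission and a secure context:

// Hypothetical wiring of microphone input to the analyser above
navigator.mediaDevices
  .getUserMedia({ audio: true })
  .then((stream) => {
    audioCtx.createMediaStreamSource(stream).connect(analyser);
  })
  .catch((err) => console.error("Could not access the microphone:", err));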