2D real time visualization: frequencies

First typical example

Example at JSBin:

Screenshots of the example: an audio player with the frequency visualization drawn as red bars, first with fftSize = 512, then with fftSize = 64.

This time, instead of a waveform we want to visualize an animated bar chart. Each bar will correspond to a frequency range and ‘dance’ in concert with the music being played.

    • The frequency range depends upon the sample rate of the signal (the audio source) and on the FFT size. While the sound is being played, the values change and the bar chart is animated.
    • The number of bars is equal to the FFT size / 2 (left screenshot with size = 512, right screenshot with size = 64).
    • In the example above, the Nth bar (from left to right) covers a frequency range of width samplerate/fftSize and ends at N * (samplerate/fftSize). If the sample rate is 44100 Hz and the FFT size is 512, the first bar represents frequencies between 0 and 44100/512 ≈ 86.13 Hz, and so on (see the sketch just after this list). As the amount of data returned by the analyser node is half the FFT size, we can only plot frequencies up to half the sample rate. You will see that this is generally enough, as frequencies in the upper half of the spectrum are rarely relevant.
    • The height of each bar shows the strength of that specific frequency bucket. It’s just a representation of how much of each frequency is present in the signal (i.e. how “loud” the frequency is).
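
To make the arithmetic in the list above concrete, here is a minimal sketch (the variable names and values are only illustrative, they are not taken from the examples' source code) that prints the frequency range covered by each bar:

// Illustrative values only: not taken from the examples' source code
var sampleRate = 44100;              // sample rate of the audio source, in Hz
var fftSize = 512;                   // analyser.fftSize
var nbBars = fftSize / 2;            // analyser.frequencyBinCount
var binWidth = sampleRate / fftSize; // width of one bar, about 86.13 Hz here

for (var n = 1; n <= nbBars; n++) {
  // the Nth bar covers [(N-1) * binWidth, N * binWidth] Hz
  console.log('Bar ' + n + ': ' + ((n - 1) * binWidth).toFixed(2) +
              ' Hz to ' + (n * binWidth).toFixed(2) + ' Hz');
}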

You do not have to master the signal processing ‘plumbing’ summarised above – just plot the reported values!

Enough said! Let’s study some extracts from the source code. 

This code is very similar to Example 1 at the top of this page. We’ve set the FFT size to a lower value, and rewritten the animation loop to plot frequency bars instead of a waveform:

function buildAudioGraph() {
  …
  // Create an analyser node
  analyser = audioContext.createAnalyser();
  // Try lower values: 512, 256, 128, 64…
  // (lower values work well for frequency visualizations)
  analyser.fftSize = 256;
  …
}

This time, when building the audio graph, we use a smaller FFT size. Values between 64 and 512 are very common here; try them in the JSBin example! Apart from the line that sets analyser.fftSize, this function is exactly the same as in Example 1.
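
The dataArray and bufferLength variables used by the animation loop below hold the frequency data and its length; they are typically allocated right after the FFT size is set (this is also what the complete example at the end of this page does):

// frequencyBinCount is always equal to fftSize / 2
bufferLength = analyser.frequencyBinCount;  // 128 values for fftSize = 256
dataArray = new Uint8Array(bufferLength);   // unsigned bytes: values in [0, 255]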

The new visualization code:

  1. function visualize() {
  2.   // clear the canvas
  3.   canvasContext.clearRect(0, 0, width, height);
  4.   // Get the analyser data
  5.   analyser.getByteFrequencyData(dataArray);
  6.   var barWidth = width / bufferLength;
  7.   var barHeight;
  8.   var x = 0;
  9.   // values go from 0 to 255 and the canvas height is 100. Let's rescale
  10.  // before drawing. This is the scale factor
  11.  heightScale = height/128;
  12.  for(var i = 0; i < bufferLength; i++) {
  13.    // between 0 and 255
  14.    barHeight = dataArray[i];
  15.    // The color is red but lighter or darker depending on the value
  16.    canvasContext.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
  17.    // scale from [0, 255] to the canvas height [0, height] pixels
  18.    barHeight *= heightScale;
  19.    // draw the bar
  20.    canvasContext.fillRect(x, height - barHeight/2, barWidth, barHeight/2);
  21.    // 1 is the number of pixels between bars - you can change it
  22.    x += barWidth + 1;
  23.  }
  24.  // once again call the visualize function at 60 frames/s
  25.  requestAnimationFrame(visualize);
  26. }

Explanations: 

    • Line 5: this is what differs from the code that draws a waveform: we ask for byte frequency data with getByteFrequencyData (instead of getByteTimeDomainData), which fills the array with fftSize/2 values between 0 and 255.
    • Lines 12-23: we iterate over the values. The x position of each bar is incremented at each iteration (line 22), adding a small interval of 1 pixel between bars (you can try different values here). The width of each bar is computed at line 6.
    • Line 11: we compute a scale factor so that the values (ranging from 0 to 255) are displayed in proportion to the height of the canvas. This scale factor is used at line 18, where we compute the height of the bars we are going to draw. The canvasContext, width, height, dataArray and bufferLength variables used by the loop are set up before the animation starts (see the sketch just after this list).
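
For completeness, here is a minimal sketch of that setup, assuming the canvas id and variable names below (the JSBin example may use slightly different ones):

// Assumed setup for the loop above; the id 'myCanvas' and these variable
// names are illustrative, the JSBin example may differ
var canvas = document.querySelector('#myCanvas');
var canvasContext = canvas.getContext('2d');
var width = canvas.width;    // e.g. 300 pixels
var height = canvas.height;  // e.g. 100 pixels
var heightScale;             // computed in visualize()

buildAudioGraph();                 // creates analyser, dataArray and bufferLength
requestAnimationFrame(visualize);  // start the animation loop at 60 frames/s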

Another example: achieving a more impressive frequency visualization

Example at JSBin with a different look for the visualization. The example is given as is: read the source code and try to understand how the frequencies are drawn.

Same example as before but with symmetric and colored frequency visualisations
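
The exact source code is not reproduced here, but a symmetric, colored rendering can be obtained with a loop very similar to the previous one. The sketch below is only one possible approach, not the code of the JSBin example: each bar is centered vertically and its hue depends on its index.

// One possible approach (not the exact code of the JSBin example):
// bars are centered vertically and colored with a hue based on their index
function visualize() {
  canvasContext.clearRect(0, 0, width, height);
  analyser.getByteFrequencyData(dataArray);
  var barWidth = width / bufferLength;
  var x = 0;
  for (var i = 0; i < bufferLength; i++) {
    // scale the value (0-255) to the canvas height
    var barHeight = dataArray[i] * (height / 256);
    // hue goes from 0 (red) to 360 across the bars
    canvasContext.fillStyle = 'hsl(' + (i * 360 / bufferLength) + ', 100%, 50%)';
    // draw the bar centered on the middle of the canvas (symmetric look)
    canvasContext.fillRect(x, height/2 - barHeight/2, barWidth, barHeight);
    x += barWidth + 1;
  }
  requestAnimationFrame(visualize);
}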

The last example at JSBin adds a graphic equalizer, a master volume (gain node) and a stereo panner node just before the analyser node that feeds the visualization:

Previous example with a master volume (gain node) and the equalizer + a stereoPanner node

And here is the audio graph for this example:

Audio graph of the above example.

Source code of this example's buildAudioGraph function:

function buildAudioGraph() {
  var mediaElement = document.getElementById('player');
  var sourceNode = audioContext.createMediaElementSource(mediaElement);
  // Create an analyser node
  analyser = audioContext.createAnalyser();
  // Try lower values: 512, 256, 128, 64…
  analyser.fftSize = 1024;
  bufferLength = analyser.frequencyBinCount;
  dataArray = new Uint8Array(bufferLength);
  // Create the equalizer, which comprises a set of biquad filters:
  // one peaking filter per frequency band
  [60, 170, 350, 1000, 3500, 10000].forEach(function(freq) {
    var eq = audioContext.createBiquadFilter();
    eq.frequency.value = freq;
    eq.type = "peaking";
    eq.gain.value = 0;
    filters.push(eq);
  });
  // Connect the filters in sequence
  sourceNode.connect(filters[0]);
  for(var i = 0; i < filters.length - 1; i++) {
    filters[i].connect(filters[i+1]);
  }
  // Master volume is a gain node
  masterGain = audioContext.createGain();
  masterGain.gain.value = 1;
  // Connect the last filter to the master volume
  filters[filters.length - 1].connect(masterGain);
  // For stereo balancing, use a stereo panner node
  stereoPanner = audioContext.createStereoPanner();
  // Connect the master volume output to the stereo panner
  masterGain.connect(stereoPanner);
  // Connect the stereo panner to the analyser and the analyser to the destination
  stereoPanner.connect(analyser);
  analyser.connect(audioContext.destination);
}
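
The filters, masterGain and stereoPanner variables built above are meant to be driven by controls in the HTML page. The snippet below is only a sketch of how such listeners could look; the element ids are assumptions, not the ones used in the JSBin example.

// Sketch of slider listeners driving the graph above; the element ids
// (#eq60Hz, #volume, #balance) are assumptions, not from the JSBin example

// Graphic equalizer: one slider per biquad filter, gain expressed in dB
document.querySelector('#eq60Hz').oninput = function(evt) {
  filters[0].gain.value = parseFloat(evt.target.value);
};

// Master volume: a gain of 1 leaves the level unchanged
document.querySelector('#volume').oninput = function(evt) {
  masterGain.gain.value = parseFloat(evt.target.value);
};

// Stereo balance: pan goes from -1 (full left) to +1 (full right)
document.querySelector('#balance').oninput = function(evt) {
  stereoPanner.pan.value = parseFloat(evt.target.value);
};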
