How to play audio from Buffer

I’m building an integration with AWS Lex and the API returns an audio response in the form of a Buffer

I’ve been spinning my wheels failing to find a way to play the audio response. Anyone have any idea?
Here’s what I’ve got so far. It’s not throwing any errors, but it’s not playing any audio.

    var AudioContext = window.AudioContext // Default
      || window.webkitAudioContext // Safari and old versions of Chrome
      || false;

    let context = new AudioContext();
    let playSound = context.createBufferSource();

    // transform the AWS Buffer response to an ArrayBuffer
    let ab = new ArrayBuffer(awsres.audioStream.length);
    let view = new Uint8Array(ab);
    for (let i = 0; i < awsres.audioStream.length; ++i) {
      view[i] = awsres.audioStream[i];
    }

    context.decodeAudioData(ab, (buf) => {
      playSound.buffer = buf;
      playSound.connect(context.destination);
      playSound.start(0);
      console.log('playing sound');
    });

What devices do you need to support, and where are you testing this? I’m asking because the “Can I use?” section of the Web Audio API docs says there's no Android support.

Right now I’ve just been trying to get it working on iOS, but in the future Android would be required as well.

In that case, and if that MDN chart is to be believed, you might want to try something other than WebAudio. I realize it seems a bit clunky, but one option would be to write the contents of the Buffer to a file, at which point you could use either the Media or NativeAudio plugins to play it from the file.

Yeah that may be the way to go, any pointers on writing to file? This whole buffer business is new to me and somewhat confusing as I’ve found there’s a difference between Buffer, ArrayBuffer, and AudioBuffer.
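For what it’s worth, the three “buffer” types here are easy to mix up, so here’s a quick sketch of the distinction and the conversion the snippet above is doing. (The sample bytes below are made up purely for illustration.)

```javascript
// - Buffer      : Node.js-style byte container (what the AWS SDK hands back;
//                 in the browser SDK it behaves like a Uint8Array).
// - ArrayBuffer : a raw, untyped block of memory (what decodeAudioData wants).
// - AudioBuffer : decoded PCM audio (what the Web Audio API actually plays).

// Copy the byte values of a Buffer/Uint8Array into a fresh ArrayBuffer.
function toArrayBuffer(bytes) {
  const ab = new ArrayBuffer(bytes.length);
  const view = new Uint8Array(ab);
  for (let i = 0; i < bytes.length; ++i) {
    view[i] = bytes[i];
  }
  return ab;
}

// Example with fake audio bytes:
const fakeStream = new Uint8Array([82, 73, 70, 70]); // "RIFF"
const ab = toArrayBuffer(fakeStream);
console.log(ab.byteLength); // 4
```

So the loop in your snippet is just copying the Buffer’s bytes into an ArrayBuffer; `decodeAudioData` then turns that into the AudioBuffer you can play.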

To play an audio stream, as far as I tested, you could use:

  1. the cordova-plugin-media plugin
  2. the native HTML5 audio tag

but, on Android, in both cases, as far as I understood and tested, if the stream lasts a while after the phone goes idle, the stream itself may stop because of Android Doze (which protects battery use).

I haven’t found a solution for that yet, except running something in the background (like with the cordova background plugin).

I’m not expecting timeouts or device idling to be an issue, the audio from Lex is really short, just a few seconds in length really. It’s just like Siri saying "Are you sure you want to order xxxxxxx?"
Do either of those options work with a Buffer directly? If not, I need to figure out how to get the buffer into a (temporary) file format that can be used and discarded.

Coolio then. I guess so; with both of these options I was able to play a radio stream, so I guess they manage their own buffering internally.

Found this tutorial about HTML5 audio, if that could help:

…which actually isn’t that positive about processing the response :frowning:

But still, the HTML5 audio tag is pretty simple; you could maybe give it a quick try.

Dude, thanks for finding that!
It’s pointed me in the right direction for downsampling the submission audio (an issue I had been working on but hadn’t posted here about). The conversion of the response buffer to an inline URL was kind of glossed over, but it sounds like they had success, so it at least appears doable :slight_smile:
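In case it helps anyone else, the inline-URL step could look roughly like this. This is a sketch, not the tutorial author’s actual code: `playAudioBytes` and `bytesToBlob` are hypothetical names, and the `'audio/mpeg'` MIME type is an assumption — check what audio format your Lex request actually asks for.

```javascript
// Sketch: wrap the raw response bytes in a Blob, turn it into an object URL,
// and hand that to an HTML5 <audio> element. The MIME type is an assumption.

function bytesToBlob(bytes, mimeType) {
  // Blob accepts typed arrays directly; no manual ArrayBuffer copy needed.
  return new Blob([bytes], { type: mimeType });
}

function playAudioBytes(bytes, mimeType) {
  const blob = bytesToBlob(bytes, mimeType);
  const url = URL.createObjectURL(blob);
  const audio = new Audio(url);
  // Release the object URL once playback finishes.
  audio.addEventListener('ended', () => URL.revokeObjectURL(url));
  return audio.play(); // returns a Promise in modern browsers
}
```

Usage would be something like `playAudioBytes(awsres.audioStream, 'audio/mpeg')`. Note that iOS may require this to be triggered from a user gesture (a tap handler) before audio is allowed to play.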
I’ve emailed the author to see if he’s willing to share any more details, but I feel a little more optimistic about this now.


Awesome! I learned some stuff about Lex too thanks to your question, so thank you too, sir :wink:

Sounds like you’ve found a potentially better solution, but the native File plugin has the capability to write to files.
