Matrix Voice ESP32: problem with Build and Make


I have solved it (with my Prof.'s help) by writing a batch file and launching it directly from the /tmp directory of my Raspberry (instead of the PC).
This is the batch file content that helped me avoid the flash error:

voice_esptool --chip esp32 --port /dev/ttyS0 --baud 115200 --before default_reset --after hard_reset write_flash -u --flash_mode dio --flash_freq 40m --flash_size detect 0x1000 bootloader/bootloader.bin 0x10000 mic_energy.bin 0x8000 partitions_singleapp.bin

I have another big problem with my university project: it consists of a little system (C++ code) that acquires music and sounds from the environment and transforms them into colors. My hardware is a Raspberry Pi 3 + a DMX module for the Raspberry Pi 3 + an LED unit + a Matrix Voice ESP32. My Raspberry's GPIO is occupied by the DMX module, so for that reason I turned to the Matrix Voice ESP32. The problem is: how can my Matrix Voice ESP32 send an audio stream (environmental sounds or music) over Wi-Fi in real time to my Raspberry Pi? Do you have sample code in C++ for that? I got this link from Kevin; the main problem is that my background is in economics, I'm a newbie in coding, and this code was created for Arduino-ESP32 communication (and vice versa), whereas I need C++ code for Raspberry-ESP32 (and vice versa).
Any other suggestion or solution is welcome!


P.s. I’m going to post this code in


Nice that you found the problem and solved it.
What deploy does is actually tar some bin files and pipe them to your Pi, which is exactly what you did manually.

So, you have two Pi's, right?
Arduino is actually a layer on top of C/C++, so you should be able to use my code in a C++ program.
The AudioStreamer was actually C++, but I had some troubles with the MQTT.
The code is NOT written for Arduino-ESP32 communication; it publishes an audio stream to an MQTT broker in real time (amongst other things).

That link is actually my code (hence the name Romkabouter in it). I do not know who Kevin is, but thanks anyway :smiley:
It is written using the Arduino IDE, but if you install the Arduino IDE, you can follow the steps and see how far you get.

If you check the AudioStream task, you will see that it posts messages to an MQTT broker.
Snips uses small WAV files, but you could easily remove the header part so that you have a raw audio stream.
Your other Pi can consume that MQTT stream as input. If you need more explanation feel free to ask; since I wrote that code, I know pretty well what it does :smiley:


Hi Rom,
I don't have an Arduino board and I have no idea about the Arduino IDE. I see a zip file in your GitHub repository. As I told you before, I have a Raspberry Pi 3 + DMX module and a Matrix Voice ESP32. I'd like to use the Matrix Voice ESP32 as a microphone that hears music or sound and streams it to the Raspberry board. So I need the ESP32 to communicate with the Raspberry and the Raspberry to communicate with the ESP32.
What steps need to be taken?
Thanks for the support!


For my repo, you do not need an Arduino board.
I use the IDE only to compile and upload OTA (over the air); no Arduino board needed.
What zip file are you talking about? Are you familiar with cloning repos?

If you want to use my repo:

When you have followed these steps you have a Matrix Voice running standalone, publishing an audio stream to your MQTT broker (that broker can be your Pi: install Mosquitto on it and you're done).
I suggest setting MQTT_IP and MQTT_HOST to the IP address of your Pi.

Now, you say you want to stream raw audio.
My repo sends small WAV files, because Snips uses that. So you have to change that.
You can do this like so:

  • In MatrixVoiceAudioServer.ino, on line 342, change:
    audioServer.publish(audioFrameTopic.c_str(), (uint8_t *)payload, sizeof(payload));
    into:
    audioServer.publish(audioFrameTopic.c_str(), (uint8_t *)voicemapped, sizeof(voicemapped));

voicemapped is the raw buffer coming from the microphones, without the WAV header.
After that, upload via Cmd+U (Sketch -> Upload) in the Arduino IDE and the messages will be raw audio.

Now to your Pi:

  • Install Mosquitto
  • Create a python script to connect to your MQTT broker (use paho.mqtt.client for example) and subscribe to the audio stream topic. This will be hermes/audioServer/matrixvoice/audioFrame if you do not change anything.
  • Run the script as a service; my matrix_hotword might be a good starting point, although you will have to change it.
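A minimal sketch of such a subscriber, assuming Mosquitto runs on the Pi itself and the default topic from this thread (paho-mqtt 1.x style API; `handle_frame` and `frames` are just illustrative names, not anything from the repo):

```python
# pip install paho-mqtt   (1.x style shown; paho-mqtt 2.x also wants a CallbackAPIVersion argument)
TOPIC = "hermes/audioServer/matrixvoice/audioFrame"  # default topic if you changed nothing
BROKER = "localhost"  # the Pi itself, since Mosquitto runs there

frames = []  # collected audio payloads, only to illustrate

def handle_frame(payload: bytes) -> None:
    """Handle one audio message from the Matrix Voice; here we only store it."""
    frames.append(payload)

def main() -> None:
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        client.subscribe(TOPIC)

    def on_message(client, userdata, msg):
        handle_frame(msg.payload)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 1883, 60)
    client.loop_forever()  # blocks forever; run this script as a service

# Call main() to start listening; it is not called on import so the helper stays testable.
```

Replace handle_frame with whatever processing you need; each payload is one small WAV message (or raw audio, if you changed line 342 in the sketch).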

What you can have at the end of all this is:

  • The Matrix Voice ESP32 running standalone, publishing an audio stream to the MQTT broker on the Pi
  • A python script, running as a service, consuming that audio stream

That said, with you being a novice coder, you will really need to get your hands dirty and dig into this.
Otherwise you will not pull it off; it will not be a copy-and-paste exercise :smiley:

Good luck!


Thanks for the support! I'm going to try everything step by step!


This is what I understood:
My Matrix Voice Esp32 will become client and my Raspberry pi 3 will become the local broker/server

  1. First I install the Arduino IDE on my PC (can I do it from my Raspberry?)
  2. I clone your repository with git clone on my PC (can I do it from my Raspberry?)
  3. I follow the Get Started section of your repo
  4. I follow the OTA steps; here I use the IDE only to compile and upload OTA (over the air).



You understood correctly :slight_smile:

  1. Yes, on the PC
  2. Yes, on the Pi, because that is a python script you need to edit (it is just to show you how to use paho.mqtt)
  3. Yes, on the PC
  4. Yes, on the PC; the first flash is not OTA, so the Matrix Voice should be connected to the Pi for the first flash :slight_smile:


Ok thanks!
At the moment I'm in the Arduino IDE. I have opened MatrixVoiceAudioServer.ino but I don't know

  1. how to select ESP32 Dev Module as Board (checking Arduino IDE -> Tools -> Board, there isn't an ESP32 Dev Module), set the flash size to 4MB and the upload speed to 115200
  2. how to set MQTT_IP, MQTT_PORT, MQTT_HOST, SITEID, SSID and PASSWORD correctly to fit my needs. SSID and PASSWORD are in config.h; how do I determine them correctly?
    Thanks again

  2. MQTT_IP and MQTT_HOST: the IP address of your Pi. PORT and SITEID you can leave as they are :slight_smile:


I almost did everything you told me…
The LEDs are red.

  1. Open your Arduino IDE again, after a while the Matrix Voice should show up as a network port, select this port —> I opened MatrixVoiceAudioServer in the IDE; in Tools I selected ESP32 Wrover Module

  2. Make a change (or not) and do Sketch -> Upload. The LEDs will turn white.
    I tried 3 times but always get the same error:
    Sketch uses 985114 bytes (75%) of program storage space. Maximum is 1310720 bytes.
    Global variables use 62976 bytes (19%) of dynamic memory, leaving 264704 bytes for local variables. Maximum is 327680 bytes.
    Please select a Port before Upload

  3. In the meantime I also installed Mosquitto on my Raspberry, but I don't understand the next step —> Create a python script to connect to your MQTT broker (use paho.mqtt.client for example) and subscribe to the audio stream topic. This will be hermes/audioServer/matrixvoice/audioFrame if you do not change anything.

  1. No, the guide says: "Select ESP32 Dev Module as Board, set flash size to 4MB and Upload speed to 115200".
    I do not know if selecting something else causes issues.
  2. Well, the error message says: "Please select a Port before Upload". Did you?
    See the bullet above it: "Open your Arduino IDE again, after a while the Matrix Voice should show up as a network port, select this port"
  3. What is it that you do not understand? Did you check the code from matrix_hotword?


After opening the Arduino IDE…
Do I re-open MatrixVoiceAudioServer.ino in the Arduino IDE? (will it be uploaded?)
Where do I select Matrix Voice as the network port? I don't see it. Under Tools I only see "Port" greyed out; I can't select anything.


I see you also state the LEDs are RED. They should be BLUE (connected to your wifi).
Red means disconnected; did you update config.h to reflect your wifi settings?

When they are blue, the Voice should indeed be under network ports.
Update the passwordhash in the sketch as well :slight_smile:


Also, I use this program:
You can connect it to your broker, and it has a tab "subscribe". At the bottom right there is a button "scan", which scans your broker for topics.

When you have done everything correctly, a topic "hermes/audioServer/matrixvoice/audioFrame" should be listed there, and when you subscribe to it, a very fast and large number of messages should appear in the message buffer.
This is normal and is in fact the audio stream you want to work with.


I don’t update

my config.h content (edited in MatrixVoiceAudioServer directory) :
#define SSID “YourSSID”
#define PASSWORD “YourPassword”
SSID= Pi name?
PASSWORD= Pi pass?


No, SSID is your network ID (wifi name).
PASSWORD is your wifi password.

Are you really that new to coding?
If so, it might be a good idea to become more familiar with coding in general before trying something as complicated as consuming an audio stream and changing colors…


Sorry Rom, you are right; I'm bothering you with a lot of requests. Now I'll update config.h, compile again and deploy…


No worries about the questions, but since you have so many of them, I am just wondering if your project might be a bit too ambitious a goal :smiley:

I love the fact my software is being used, so no worries about that :slight_smile:


Thanks Rom for your precious support! Everything is going fine; the system now works in standalone mode! Now I have to study C++ code that pulls the buffer data down.
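Since the end goal is turning sound into colors, here is a hedged sketch of that step (in python rather than C++, not the repo's code; it assumes the frames are 16-bit little-endian mono PCM, and frame_energy / energy_to_rgb are made-up names): it computes the loudness of one audio frame and maps it to an RGB value you could push to the DMX/LED side.

```python
import math
import struct

def frame_energy(pcm: bytes) -> float:
    """RMS energy of a frame of 16-bit little-endian mono PCM samples."""
    n = len(pcm) // 2
    if n == 0:
        return 0.0
    samples = struct.unpack("<%dh" % n, pcm[: n * 2])
    return math.sqrt(sum(s * s for s in samples) / n)

def energy_to_rgb(energy: float, full_scale: float = 32768.0) -> tuple:
    """Map loudness onto a simple blue (quiet) to red (loud) gradient."""
    level = min(energy / full_scale, 1.0)
    return (int(255 * level), 0, int(255 * (1 - level)))
```

For example, a silent frame maps to pure blue, while louder frames shift toward red; any real mapping (hue, brightness, frequency bands) would replace energy_to_rgb.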


Great! Be aware that the messages are small WAV files.
You need to change the sketch a bit (see the earlier comment about line 342 in the sketch), or remove the WAV header in the consuming python script :slight_smile:
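If you go the python route, stripping the header could look roughly like this (a sketch assuming a plain single-chunk PCM WAV with the standard 44-byte header; check the actual messages before relying on it):

```python
WAV_HEADER_SIZE = 44  # standard header size of a simple PCM WAV file (assumption)

def strip_wav_header(message: bytes) -> bytes:
    """Return the raw PCM part of one small WAV message,
    or the message unchanged if it does not look like a WAV file."""
    if message[:4] == b"RIFF" and message[8:12] == b"WAVE":
        return message[WAV_HEADER_SIZE:]
    return message
```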

Good luck