Matrix Voice Mesh

Hi there,

I cannot wait to get my Matrix Voice.

I would like to create a mesh of several Matrix Voice boards in several rooms (the dining room, living room, and kitchen are open, so they form one big space) for home automation using PocketSphinx.
With several Matrix Voices placed around, it would be nice to read each one's audio input level (volume) to decide in which room a voice command was spoken. Does anyone know where to hook in, using the pocketsphinx demo or something else?
It would also be very nice to detect the direction of the speaker relative to a single Matrix Voice. How could I get that? Maybe I could combine parts of direction_of_arrival_demo.cpp with the pocketsphinx demo?
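In case it helps the discussion, here is a minimal sketch (Python, purely illustrative; the function names and sample values are my own assumptions, not Matrix or PocketSphinx APIs) of how the per-room decision could work once each device reports an RMS level for the same time window:

```python
import math

def rms(samples):
    """Root-mean-square level of one block of PCM samples."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def loudest_room(levels_by_room):
    """Pick the room whose device reported the highest RMS level.

    levels_by_room: dict mapping room name -> RMS level reported by
    the Matrix Voice in that room for the same utterance window.
    """
    return max(levels_by_room, key=levels_by_room.get)

# Hypothetical readings for one utterance:
levels = {"kitchen": 312.0, "livingroom": 870.5, "diningroom": 145.2}
print(loudest_room(levels))  # -> livingroom
```

The open question would then mainly be how to get synchronized audio blocks (or at least time-aligned level reports) out of each device.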

At the moment this is only an idea. I am in the concept phase, so it does not matter which programming language to use. I would like to recognize German locally (not in the cloud), send home-automation commands directly via HTTP (in the LAN) to my openHAB instance, and delegate other commands with the keyword “alexa” to acs or “google” to the Google service.
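For the openHAB part, the REST API accepts a command as a plain-text POST to /rest/items/&lt;item&gt;, so the LAN side should be simple. A small sketch (host name and item name are hypothetical):

```python
import urllib.request

def build_openhab_command(host, item, command):
    """Build an HTTP request that sends a command to an openHAB item.

    openHAB's REST API takes the command as a plain-text POST body
    on /rest/items/<item>. Host and item names here are made up.
    """
    return urllib.request.Request(
        url=f"http://{host}:8080/rest/items/{item}",
        data=command.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

# Hypothetical usage once a command has been recognized:
req = build_openhab_command("openhab.local", "Licht_Kueche", "ON")
# urllib.request.urlopen(req)  # would actually send it on the LAN
print(req.get_method(), req.full_url)
```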

I would also like to identify the individual speaker later on.

So, in summary, the plan is to determine the following:

  1. What was said? (home-automation command, or something to delegate)
  2. Where was it said? (which room, Matrix Voice A or Matrix Voice B)
  3. Who said it?
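Point 1 boils down to simple keyword routing on the recognized transcript. A sketch of what I have in mind (the routing rules are my own assumption of how this could work, not an existing API):

```python
def route_command(transcript):
    """Route a recognized utterance by its leading keyword.

    Utterances starting with "alexa" or "google" are handed off to
    the respective service; everything else is treated as a local
    home-automation command. Returns (target, remaining text).
    """
    words = transcript.strip().lower().split()
    if not words:
        return ("ignore", "")
    if words[0] in ("alexa", "google"):
        return (words[0], " ".join(words[1:]))
    return ("local", " ".join(words))

print(route_command("Alexa spiele Musik"))  # -> ('alexa', 'spiele musik')
print(route_command("Licht an"))            # -> ('local', 'licht an')
```

Points 2 and 3 (which room, which speaker) are where I would especially appreciate pointers.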

Any ideas on where to start the research, or which services to use?

Thanks for any input in advance.