Is it possible to share the Matrix Voice between two codebases? Currently I am running ODAS to find the DOA (direction of arrival), and a separate program for speech recognition. I was wondering whether the Matrix Voice can be shared between these two programs (perhaps by changing values within asound.conf)?
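For the asound.conf route, ALSA's `dsnoop` plugin is the usual way to let multiple capture clients read from one physical device at the same time. A minimal sketch might look like the following; the card/device numbers, channel count, and sample rate are assumptions you would need to match to how the Matrix Voice shows up on your system (check with `arecord -l`):

```
# /etc/asound.conf (sketch) -- shared capture via dsnoop.
# "hw:2,0" is a guess; replace with your Matrix Voice card/device.
pcm.mic_shared {
    type dsnoop
    ipc_key 2048          # any unique integer; must differ from other dsnoop defs
    slave {
        pcm "hw:2,0"      # physical capture device
        channels 8        # Matrix Voice exposes an 8-mic array
        rate 16000
        format S16_LE
    }
}
```

Both ODAS and the speech-recognition program would then open `mic_shared` instead of the raw `hw` device. Whether ODAS accepts an arbitrary ALSA device name depends on its config file, so treat this as a starting point rather than a verified setup.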
I’m after the same thing. For example, to be able to say to a robot, “Come over here”.
Unfortunately, I’m new to this and don’t have a solution for you, but I wanted to add my voice.
My guess is that the ODAS code would be the place to change things in order to pass the audio on to Rhasspy etc. for speech recognition and intent recognition.
I have not yet looked at it in detail, but I imagine that Rhasspy or Mycroft etc. would be flexible enough to pick up from a file, or have the audio piped from another program.
I am also thinking about edge-AI applications to recognise bird songs and the direction from which they came, in order to try to map out the territory of various birds with fewer installed units than simply having a grid of recording stations (although maybe the latter will turn out to be cheaper).
Given the quasi-abandonware status of Matrix hardware of late, can anyone suggest some alternatives for detecting the direction of arrival of sound?
I will persevere with my Matrix Voice board for now, since I’ve already bought it, and it has a lot of interesting possibilities that I’m sure some community members will be able to take further, even without active formal support.