IBM Watson, Amazon Transcribe, and every other speaker separation service out there fail with the Matrix Creator. Why is this the case?
If two people are speaking, I'd like to separate their voices. There are plenty of online services for this, such as IBM Watson Speech to Text. However, if I take the beamformed audio from the Matrix Creator and upload it to IBM Speech to Text, it labels everything as one speaker. With audio from an iPhone or other microphones, it works well. What is the recommended way to do speaker separation on the Matrix, and is there something I'm doing wrong? (I'm using micarrayrecorderdirect.)
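For what it's worth, here is roughly how I'm consuming the diarization output. Watson's documented `speaker_labels=True` option returns per-word speaker labels alongside the word timestamps, and I merge them into per-speaker utterances like this (a sketch with made-up sample data, not my exact code):

```python
def group_by_speaker(speaker_labels, timestamps):
    """Merge Watson-style speaker_labels (dicts with "from"/"to"/"speaker")
    with word timestamps ([word, start, end]) into per-speaker utterances.
    Both lists are assumed to be in time order."""
    # Index each word by its start time so labels can look it up.
    start_to_word = {round(start, 2): word for word, start, _ in timestamps}
    segments = []  # list of [speaker, [words]]
    for label in speaker_labels:
        word = start_to_word.get(round(label["from"], 2))
        if word is None:
            continue  # no matching word for this label
        spk = label["speaker"]
        if segments and segments[-1][0] == spk:
            # Same speaker as the previous word: extend the utterance.
            segments[-1][1].append(word)
        else:
            # Speaker changed: start a new utterance.
            segments.append([spk, [word]])
    return [(spk, " ".join(words)) for spk, words in segments]

# Hypothetical response fragments in Watson's documented shape:
labels = [
    {"from": 0.0, "to": 0.5, "speaker": 0},
    {"from": 0.6, "to": 1.0, "speaker": 0},
    {"from": 1.2, "to": 1.8, "speaker": 1},
]
stamps = [["hello", 0.0, 0.5], ["there", 0.6, 1.0], ["hi", 1.2, 1.8]]
print(group_by_speaker(labels, stamps))
```

With the Matrix audio, every label comes back as speaker 0, which is why I suspect the beamformed signal itself is the problem rather than this post-processing step.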