Many thanks for the reply!
My ultimate aim is to develop a sound localisation system based on the MUSIC algorithm, which was the basis of my dissertation. Ideally this would be performed entirely on the FPGA; however, given the limitations of the Spartan-6, I am hoping I may be able to daisy-chain another FPGA through the GPIOs.
Initially, I’d like to get the signal data from the microphones so I can verify that I am performing the demodulation correctly and everything is working properly there, before I develop the full localisation code. I’d prefer to save the signal data to a file and then analyse it in Matlab afterwards, rather than stream it directly. I’m okay-ish on the FPGA side of things and can develop the protocol to send the data; I’m just a bit unsure how to go about receiving it and then saving it as a file on the host PC. I know Python and C, but my C++ is lacking, so I think I will have to improve that!
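For the host-PC side, something like this minimal Python sketch is roughly what I have in mind: receive the raw bytes from the link (with pyserial that would be `ser.read()` in a loop — the UART bridge is an assumption about the link), append them to a binary file, and read it back in Matlab with `fread(fid, Inf, 'int16')`. Here the "received" bytes are just faked so the round trip can be checked:

```python
import os
import struct
import tempfile

# Pretend these 16-bit samples arrived from the FPGA link
# (with pyserial: payload = ser.read(n) inside a receive loop).
samples = [0, 1000, -1000, 32767, -32768]
payload = struct.pack('<%dh' % len(samples), *samples)  # little-endian int16

# Append the received bytes to a capture file on disk
path = os.path.join(tempfile.gettempdir(), 'capture.bin')
with open(path, 'wb') as f:
    f.write(payload)

# Verify the round trip; in Matlab: fread(fid, Inf, 'int16')
with open(path, 'rb') as f:
    raw = f.read()
decoded = list(struct.unpack('<%dh' % (len(raw) // 2), raw))
print(decoded)
```

For multiple channels you'd presumably interleave the samples and add a sync word so Matlab can de-interleave reliably, but the file-writing side stays this simple.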
Unless I’m mistaken, 8 channels of 48 kHz, 16-bit audio will require at least 6.144 Mbit/s of transfer bandwidth, so surely SPI and UART are unsuitable for this? I’m working on just a single stream at the moment, so it is not a major issue for me yet.
Obviously, as I intend to develop this solely on an FPGA, I would need the demodulation code anyway, and since the MUSIC algorithm works on correlation I really need to limit the harmonics as much as possible. Admittedly, I have only modelled up to a 2nd-order ‘standard’ sigma-delta process, as it becomes unstable beyond that; a 4th-order delta-sigma is possible (as with the mics), but the arrangement is modified and I’m struggling to find out which topology the manufacturers use! As I said, the demodulation process is basically the same in each case, so this is more about understanding why the harmonics are a lot higher than predicted.
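For reference, this is the kind of toy model I've been working from: a first-order 1-bit sigma-delta modulator followed by a single moving-average (boxcar) decimation stage. It's only a sketch — a real PDM decimator would cascade several CIC stages plus compensation filtering, and the mics' actual 4th-order loop topology is exactly the part I'm unsure about — but for a DC input the averaged output should recover the input level:

```python
def modulate(x):
    """First-order sigma-delta: integrate the error, 1-bit quantise."""
    acc, y, bits = 0.0, 1.0, []
    for s in x:
        acc += s - y                      # error vs. fed-back output
        y = 1.0 if acc >= 0 else -1.0     # 1-bit quantiser (+/-1)
        bits.append(y)
    return bits

def demodulate(bits, osr):
    """Average each block of `osr` bits, i.e. boxcar filter + decimate."""
    return [sum(bits[i:i + osr]) / osr for i in range(0, len(bits), osr)]

osr = 64                       # oversampling ratio (assumed, not the mics' spec)
dc = 0.5                       # DC test input
bits = modulate([dc] * (osr * 16))
pcm = demodulate(bits, osr)
print(sum(pcm) / len(pcm))     # converges towards 0.5
```

Swapping a sine in for the DC input and looking at the output spectrum is where the higher-than-predicted harmonics show up in my models.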
I’ll neaten up some of the Matlab / Simulink models that I’ve done and upload them later.