Great success! ... and a question about Kernel / HAL

So, as some of you know, I’ve been working on using the Creator in my home automation setup: Pi + Creator, Google Assistant, Domoticz, Controlicz, and a load of sensors and actuators (Z-Wave, 433 MHz, etc.).

I’ve been quiet here for a while because I had to tackle some Google Assistant issues in combination with Bluetooth speakers. Good news: I got it to work. I needed to switch to the C++ implementation of the Assistant SDK, so I’ve more or less completely ditched Python.

I now have a running, integrated setup of Google Assistant + Snowboy, so I can use custom wake words to activate Google Assistant, all working with a Bluetooth speaker and the MATRIX mics!
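In case anyone wants to build something similar: the wake-word loop looks roughly like the sketch below. This assumes Snowboy’s C++ interface (snowboy-detect.h); the resource/model paths and the StartAssistantConversation() hand-off are placeholders, not my actual code.

```cpp
#include <snowboy-detect.h>  // Snowboy C++ wake-word detector

#include <cstdint>
#include <vector>

// Placeholder: hand control to the Google Assistant SDK once triggered.
void StartAssistantConversation() { /* ... start an Assistant request ... */ }

int main() {
  // Replace with your own resource file and wake-word model.
  snowboy::SnowboyDetect detector("resources/common.res",
                                  "resources/my_wakeword.pmdl");
  detector.SetSensitivity("0.5");
  detector.SetAudioGain(1.0);

  std::vector<int16_t> buffer(1600);  // 100 ms of 16 kHz mono samples
  while (true) {
    // Fill `buffer` from the mics here (e.g. via ALSA, as mentioned below).
    int result = detector.RunDetection(buffer.data(),
                                       static_cast<int>(buffer.size()));
    if (result > 0) {  // > 0 is the index of the wake word that fired
      StartAssistantConversation();
    }
  }
}
```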

My question: I am using the kernel modules as per this topic: MATRIX Hal - Important Update
Am I correct to assume this means I am automatically using the HAL? Or do I need to install something more?
Does that also mean I can just look at all the demos in the HAL Git repo as a reference for how to use the Everloop, etc.?

Hi @IronClaw, that’s awesome!

As you are already doing, you can use the kernel modules and then build and run any of the HAL demos. HAL internally detects whether the kernel modules are up and either gets the information from the kernel or gets it directly from the Creator through SPI. So, I think you are good.

Yes. These demos serve as a starting point for C++ development. We also recently improved the documentation about this; please see here.
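For reference, a minimal program in the style of those demos could look like this. It is just a sketch of the Everloop example from the docs, built against the HAL library (link with -lmatrix_creator_hal):

```cpp
#include <matrix_hal/everloop.h>
#include <matrix_hal/everloop_image.h>
#include <matrix_hal/matrixio_bus.h>

int main() {
  matrix_hal::MatrixIOBus bus;
  // Init() detects whether the kernel modules are loaded and picks
  // the /dev path or direct SPI access accordingly.
  if (!bus.Init()) return 1;

  matrix_hal::Everloop everloop;
  everloop.Setup(&bus);

  // One entry per LED (35 on the Creator, 18 on the Voice).
  matrix_hal::EverloopImage image(bus.MatrixLeds());
  for (matrix_hal::LedValue &led : image.leds) {
    led.blue = 20;  // dim blue ring
  }
  everloop.Write(&image);
  return 0;
}
```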

-Yoel

Thanks for the answers and the useful link, Yoel! I’ll start working on the integrations.
Once I’ve made some further progress and have a noteworthy working system, I’ll write a topic on what I achieved and what steps I took.

One caveat with mixing the HAL and the kernel modules is that you should generally avoid programmatically interacting with the microphones in any way through their HAL classes. On the Pi, this frequently seems to cause a kernel oops after a short while (which basically means you have to hard-reboot your Pi, which is risky for the SD card, etc.).

In an ideal setup, we would use either the HAL or the kernel modules (which generally create nodes under /dev), but even MATRIX doesn’t do this in their own code for some reason. So while the Everloop HAL code seems to work just fine even if you have the kernel modules installed, we really “should” interact with it by writing to /dev/matrixio_everloop.
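For illustration, that “should” path would look something like the sketch below. I’m assuming the driver expects one R/G/B/W byte quadruple per LED in a single write(); check the kernel module source for the exact frame format before relying on this.

```cpp
#include <fcntl.h>
#include <unistd.h>

#include <cstdint>
#include <vector>

int main() {
  // Assumption: 35 LEDs (Creator) at 4 bytes each, ordered R, G, B, W.
  constexpr int kLeds = 35;
  std::vector<uint8_t> frame(kLeds * 4, 0);
  for (int i = 0; i < kLeds; ++i) {
    frame[i * 4 + 2] = 20;  // blue channel of LED i
  }

  int fd = open("/dev/matrixio_everloop", O_WRONLY);
  if (fd < 0) return 1;
  ssize_t written = write(fd, frame.data(), frame.size());
  close(fd);
  return written == static_cast<ssize_t>(frame.size()) ? 0 : 1;
}
```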

The only module I’ve found where this matters is the microphone array, but I haven’t looked into why. Just be aware that you might run into problems if you mix the two.

Hi @giblet37,

Interesting analysis. I just wanted to add a few ideas here.

When you use both HAL and the kernel modules, everything should run smoothly, even when working with the microphone data. There is a check in HAL that detects whether the kernel modules are already running and using the SPI bus to talk to the Creator/Voice board. When the modules are running, HAL gets its data from the kernel using the /dev/ devices you mentioned.

Another option for getting audio data in C++ while running the kernel modules is to use ALSA itself. A simple example is here: A Minimal Capture Program.
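Stripped down, such a capture loop looks roughly like this (a sketch using the plain ALSA API, linked with -lasound; the device name and audio format are assumptions, so adjust them for your setup):

```cpp
#include <alsa/asoundlib.h>

#include <cstdint>
#include <vector>

int main() {
  snd_pcm_t *pcm;
  // "default" could instead be the MATRIX capture device (e.g. "hw:2,0").
  if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0) return 1;

  // 16-bit signed little-endian, interleaved, mono, 16 kHz, 0.5 s latency.
  if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                         SND_PCM_ACCESS_RW_INTERLEAVED, 1, 16000, 1,
                         500000) < 0)
    return 1;

  std::vector<int16_t> buffer(1600);  // 100 ms of audio per read
  for (int i = 0; i < 100; ++i) {     // capture ~10 s, then stop
    snd_pcm_sframes_t frames =
        snd_pcm_readi(pcm, buffer.data(), buffer.size());
    if (frames < 0) frames = snd_pcm_recover(pcm, frames, 0);
    // ... hand `buffer` to Snowboy, the Assistant, a file, etc.
  }
  snd_pcm_close(pcm);
  return 0;
}
```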

If you have found issues with the microphones, we can look into that and try to see what could be happening. Please post any information here that could help us.

Thanks!

-Yoel

Oh, I know the HAL chooses which bus to use based on the presence of the kernel modules, but if you actually follow the code for any given module, there are a fair number of differences between the driver (/dev) access and the HAL’s use of the regmap. I know they are intended to do basically the same things, but the regmap ‘device’ doesn’t actually map like it implies.

As a simple example, HAL Everloop::Write() just calls (in our case) BusKernel::Write(), which does ioctl() on the regmap fd. The ioctl() for the regmap does not actually do any delegation to the different device drivers; it ultimately just does a direct SPI transfer, so there is never an access to /dev/matrixio_everloop. Reads (for other modules) appear to function the same way, ignoring the device nodes and reading directly on the bus.

For the Everloop, this seems to be fine, since that is all the driver itself actually does. The microphone driver is harder to trace since it’s ultimately using external system devices, and that is the main reason I didn’t really look into the problem. My guess is that once the kernel has registered a codec for a device, simultaneous accesses to that device from kernel-managed sources and from ‘direct’ sources (the HAL code, here) create a race condition, or potentially stomp on each other’s memory spaces. We are basically writing directly to a device that the kernel thinks it is managing.

After more than a dozen kernel oopses, I tested enough to determine that:

  • they frequently happen when using the microphone HAL code with the kernel modules installed
  • they do not happen using the microphone HAL code when the kernel modules are NOT installed
  • using just the kernel modules/codec (via ALSA since I wasn’t about to write my own audio input…) works fine

For me, they tended to happen sometime after running my code, usually not during execution but anywhere from maybe 1 to 30 minutes later. At least for some of that testing, I was using the mics with external applications (ALSA again), but never at exactly the same time as my HAL code was running.

Unfortunately I don’t really want to test this any further, as I’ve already permanently destroyed two SD cards due to the MATRIX (one to this problem, one to an older issue). Ultimately it was kind of moot for me, since the reason I wanted to use the mic code was for things that don’t work right now (DOA, beamforming, etc.).

@giblet37, I will ping the dev team with this.

Thanks also for the details about how it breaks and for taking the time to explore the code behind this. This will contribute to future improvements to the kernel modules and HAL design.

-Yoel

Interesting analysis. I only use the microphones directly through ALSA: the Google Assistant C++ code uses direct ALSA input/output, and I’ve integrated Snowboy into that. Since I don’t currently intend to use the microphones outside of that, I guess I should be fine.