>I felt there is no solution that I really felt comfortable with
I wish the author had elaborated at all on why they felt that way, even if it was just "existing solutions are too easy and I want to learn the hard way". They linked to a pretty big list of established microcontroller neural network frameworks. I still have my little SparkFun microcontroller that runs TensorFlow Lite neural networks powered by just a coin cell battery. They were free in the goodie bags at TensorFlow Summit 2019; "Edge Computing" on the "Internet of Things" was the hype that year.
Edit: Ah, I see they do have elaboration linked - "By simplifying the model architecture and using a full-custom implementation, I bypassed the usual complexities and memory overhead associated with Edge-ML inference engines." Nice work!
How would one generate these 16x16 images without needing a lot more compute power than the inference itself? Maybe by using the sensor from an optical mouse, which seems to have a similar resolution? [0] According to a quick web search, the CH32V003 supports SPI and I²C out of the box [1], which the mentioned sensor also supports.
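As a rough sketch of that idea, assuming a mouse-style sensor with a pixel-dump mode where each SPI read returns the next pixel in raster order (the `spi_transfer` routine below is a self-contained stand-in, not the CH32V003 HAL, and the register protocol of a real sensor would differ):

```c
#include <stdint.h>

#define IMG_W 16
#define IMG_H 16

/* Stand-in for a real SPI driver: on the CH32V003 this would clock one
 * byte over the hardware SPI peripheral. Here it returns a ramp so the
 * sketch is self-contained and compilable. */
static uint8_t spi_transfer(uint8_t out) {
    static uint8_t fake_pixel = 0;
    (void)out;
    return fake_pixel++;
}

/* Hypothetical frame grab: several optical-mouse sensors expose a
 * pixel-dump mode where consecutive reads return the frame in raster
 * order until all pixels have been clocked out. */
static void grab_frame(uint8_t frame[IMG_H][IMG_W]) {
    for (int y = 0; y < IMG_H; y++)
        for (int x = 0; x < IMG_W; x++)
            frame[y][x] = spi_transfer(0x00); /* dummy byte clocks a pixel out */
}
```

The appeal of this approach is that the frame grab itself is just a tight read loop with no framing or compression to decode, so it should fit comfortably next to the inference code on the same part.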
Image classification is a good demo/test case. However, image sensors still cost multiple dollars, so one would likely spend a bit more on the microcontroller in that case. An accelerometer or microphone, on the other hand, adds just 30 cents to the BOM and can be processed on a similarly cheap microcontroller. That is at least what I have found so far while trying to build a sub-1-dollar ML-powered system:
https://hackaday.io/project/194511-1-dollar-tinyml