Our latest API service for fitting your existing ML models onto an embedded target as small as a Cortex-M0+!
Qeexo AutoML offers end-to-end machine learning with no coding required. While this SaaS product presents a holistic user experience, we understand that machine learning (ML) practitioners working in the tinyML space may want to use preexisting models that they have already spent a lot of time and effort fine-tuning. For these folks working on tinyML applications, fitting the models onto embedded hardware with constrained resources is the final step before they can test their models on the embedded edge device. However, this step requires a specialized set of embedded knowledge that may be outside of a typical ML engineer’s repertoire.
Qeexo addresses this pain point by offering an API-based model converter service. At launch, the Qeexo Model Converter supports tree-based models (Random Forest, XGBoost, Gradient Boosting Machine) targeting Arm Cortex-M0+ through Cortex-M4 embedded platforms.
Let’s dive into more details!
Qeexo’s approach to tree-based model conversion
While there are dozens of different machine learning algorithms in use, both open-source and proprietary model conversion solutions largely focus on converting neural network (NN) models. In our experience, tree-based models often outperform NN models in tinyML applications because they require less training data, have lower latency, are smaller in size, and do not need a significant amount of RAM during inference. Our team at Qeexo first developed proprietary methods to convert tree-based models for embedded devices for internal use, since we were unable to find comparable solutions on the market.
Qeexo Model Converter contains patent-pending quantization technologies, as explained by Dr. Schradin, Principal ML Engineer at Qeexo, in this tinyML talk. Our model converter utilizes intelligent pruning and quantization technologies that enable these tree-based ensemble models to have a low memory footprint without compromising classification performance. (Note that the tree-based models can be pruned post-training, while NN models usually need to be re-trained after compression.)
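To give a feel for why quantization shrinks tree-based models, here is a minimal, illustrative sketch (not Qeexo’s patent-pending method) of mapping a tree’s floating-point split thresholds onto an 8-bit integer grid. Storing each threshold in one byte instead of four or eight is where the memory savings come from; the `quantize_thresholds` and `dequantize` helpers below are our own hypothetical names.

```python
# Illustrative only: affine quantization of a tree's float thresholds
# into signed 8-bit integers. This is a generic textbook technique,
# not Qeexo's proprietary quantization scheme.
def quantize_thresholds(thresholds, bits=8):
    """Map float thresholds onto a signed integer grid of the given width."""
    lo, hi = min(thresholds), max(thresholds)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi != lo else 1.0
    # Shift into [0, levels], round to the nearest level, then center on zero.
    quantized = [round((t - lo) / scale) - 2 ** (bits - 1) for t in thresholds]
    return quantized, scale, lo

def dequantize(quantized, scale, lo, bits=8):
    """Recover approximate float thresholds from the integer grid."""
    return [(q + 2 ** (bits - 1)) * scale + lo for q in quantized]

thresholds = [0.15, -1.2, 3.4, 0.0, 2.75]
q, scale, lo = quantize_thresholds(thresholds)
approx = dequantize(q, scale, lo)
# Every recovered threshold is within one quantization step of the original.
assert all(abs(a - t) <= scale for a, t in zip(approx, thresholds))
```

In a real converter, the choice of bit width trades model size against a small loss of threshold precision, which is why the quantization step must be validated against classification performance.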
This conversion process outputs optimized object code with metadata, which can easily be integrated into Arm Cortex-M0+ to Cortex-M4 embedded platforms.
API-based Qeexo Model Converter
Our model converter can be accessed through a RESTful API that can be called from a wide range of programming languages. We chose the ONNX format as the input to the Qeexo Model Converter, which enables us to support both scikit-learn and XGBoost tree models. We also feel that this open standard offers exceptional interoperability among different workflow architectures.
Perhaps the coolest and most useful feature of the Qeexo Model Converter is the ability to limit models to a given size: when provided with a maximum size, our converter will attempt to reduce the input model to the desired size by applying a tree-pruning technique. Quantization is a separate feature that works in addition to tree pruning; when enabled, the model’s data arrays are stored in smaller integer types, further reducing model size.
For more detailed instructions on how to use our API, please refer to the user guide for example code.
Come try it out!
We hope that you are ready to sign up for a Qeexo account (the same one that you use to log into Qeexo AutoML) and subscribe to the Qeexo Model Converter service (comes with a 30-day free trial)!
As for our future roadmap, we are considering extending support to other tree-based models as well as neural network models. Our goal is to provide model conversion as a service so that ML practitioners working in tinyML are free to try different algorithms for their embedded projects, much as Qeexo AutoML offers more than a dozen algorithms.
We would love to get your valuable feedback in order to further improve our model conversion service and build in additional features. Please email us at firstname.lastname@example.org.