Qeexo Adds Support for Arm’s Edge Processor

Datanami 12 October 2020

Qeexo, the “tinyML” specialist, said its AutoML platform now supports the smallest Cortex processors from Arm Ltd., making it the first vendor to automate machine learning on the Arm processor used for edge computing in sensors and microcontrollers.

Read the rest of the article here: https://www.datanami.com/2020/09/23/qeexo-adds-support-for-arms-edge-processor/


Sound Recognition with Qeexo AutoML

Zhongyu Ouyang and Dr. Geoffrey Newman 21 September 2020

Introduction

Sound recognition is a technology based on traditional pattern recognition theories and signal analysis methods, and it is widely used in speech recognition, music recognition, and other research areas such as acoustical oceanography [1]. In these fields, microphones are generally regarded as a sufficient sensing modality for machine learning: they capture the information necessary for a variety of classification tasks that can be performed on lightweight devices. With this type of sensor, Qeexo AutoML provides a diverse feature stack that takes advantage of the physical properties of microphone data to extract information relevant to such classification tasks. This blog will show you how to perform sound recognition with Qeexo AutoML and explain some of the basic concepts of our feature stack.

AutoML Tutorial

Qeexo AutoML offers a general-purpose, user-friendly interface for engineers who want to perform sound recognition, or any other classification task, on embedded devices. The processes discussed in this blog are not specific to sound recognition, but they apply to it directly. To get started, navigate to the training page and select (or upload) the labeled training data that you want to use to build models for your embedded device. On the Sensor Selection page, you can select the desired sensor types (in our sound recognition example we use the microphone sensor) to choose the collected data, as shown in Figure 1.

Figure 1: Sensor Selection Page

You are also given the option of automatic sensor and feature group selection if you want to use additional sensor modalities or experiment with feature subgroups. If this option is selected, Qeexo AutoML will automatically choose the sensor and feature groups that make the classes most distinct. On the Inference Settings page, you can manually set the instance length and the classification interval, or let Qeexo AutoML determine them by selecting Determine Automatically, as shown in Figure 2.

Figure 2: Inference Settings Page

On the Model Settings page, you can pick the algorithm(s), choose whether to generate a learning curve and/or perform hyperparameter tuning, and click the Start Training button to start. After the training is finished, a binary file is generated that can be flashed to the device by clicking the Push to Hardware button. Once that process is finished, you can perform live tests on the model that was built, as shown in Figure 3.

Figure 3: Model Details Page

While the process is by design very straightforward, the details of some of the choices may appear ambiguous. Other blog posts go into some detail on different aspects of the pipeline, but here we will focus on some of the feature choices applicable to sound recognition.

Sound Recognition Highlighted Features

Fast Fourier Transform (FFT)

Signals in the time domain are difficult for humans and computers alike to use to distinguish among similar sound sources. One of the most popular ways to transform raw sound data is the Fast Fourier Transform (FFT), an efficient frequency-decomposition technique that is well suited to the constraints of embedded devices. The process is described in Figure 4.

Figure 4: FFT process
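
To make this concrete, here is a minimal offline sketch in Python with NumPy (not Qeexo's embedded implementation). The 16 kHz sampling rate matches the microphone used later in this post; the 512-sample frame and the 440 Hz test tone are illustrative assumptions.

    import numpy as np

    # A minimal sketch of the FFT step, assuming a 16 kHz microphone and a
    # 512-sample analysis frame (both values are illustrative).
    SAMPLE_RATE = 16000
    FRAME_SIZE = 512

    def fft_magnitudes(frame):
        """Return the one-sided FFT magnitude spectrum of a single audio frame."""
        windowed = frame * np.hanning(len(frame))   # reduce spectral leakage
        return np.abs(np.fft.rfft(windowed))        # magnitude per frequency bin

    # Example: a 440 Hz tone shows up as a peak near 440 Hz.
    t = np.arange(FRAME_SIZE) / SAMPLE_RATE
    magnitudes = fft_magnitudes(np.sin(2 * np.pi * 440 * t))
    freqs = np.fft.rfftfreq(FRAME_SIZE, d=1.0 / SAMPLE_RATE)
    print(freqs[np.argmax(magnitudes)])             # 437.5 Hz, the nearest 31.25 Hz bin

Each bin is SAMPLE_RATE / FRAME_SIZE = 31.25 Hz wide with these assumed values, which is the frequency resolution referred to in the instance-length discussion later on.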

Signals from different classes differ in their magnitudes for a given frequency bin. For example, in Figure 5, sounds generated with different instruments have different distributions of magnitude across the frequencies 0-800 Hz, with differences present even up to 2000 Hz.

Figure 5: FFT Features for Different Classes

The Qeexo AutoML training methods take advantage of the increased class separability in this range during model training. Qeexo AutoML doesn't simply use all of the FFT coefficients as model inputs; it aggregates the coefficients to create more sophisticated features. The specific groupings can be hand-picked during the model selection process to accommodate implementation constraints. To select the feature groups, simply check the box(es) on the manual feature selection page, as shown in Figure 6.

Figure 6: Manual Feature Selection Page
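
As an illustration of this aggregation step, the sketch below groups FFT magnitudes into a handful of frequency bands and summarizes each band. The band edges and the mean/max/standard-deviation summaries are hypothetical choices, not Qeexo AutoML's actual feature definitions.

    import numpy as np

    # Hypothetical band edges; Qeexo AutoML's real groupings are chosen on the
    # feature-selection page and are not reproduced here.
    BAND_EDGES_HZ = [0, 200, 400, 800, 1600, 3200, 8000]

    def band_features(magnitudes, sample_rate=16000, frame_size=512):
        """Summarize FFT magnitudes per frequency band (mean, max, std per band)."""
        freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
        features = []
        for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:]):
            band = magnitudes[(freqs >= lo) & (freqs < hi)]
            features.extend([band.mean(), band.max(), band.std()])
        return np.array(features)   # 6 bands x 3 statistics = 18 features

A classifier trained on such band summaries sees far fewer inputs than the raw FFT, which matters on memory-constrained targets.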

Mel Frequency Cepstral Coefficients (MFCC)

Mel Frequency Cepstral Coefficients (MFCC) are another important technique for sound recognition. Humans react differently to distinct ranges of frequencies: we are much better at telling the difference between a 50 Hz and a 100 Hz signal than between a 10050 Hz and a 10100 Hz signal. In other words, we are poor at distinguishing high-pitched sounds. Therefore, in situations where you want to replicate a task performed by humans, such as voice separation, differences at low frequencies matter most, and the discriminative value of the signal decreases with increasing frequency. The Mel scale comes into play here by assigning more importance to low-frequency content and less to high-frequency content. The formula for converting from frequency to Mel score is:

    \begin{align*} M(f) = 1125 \ln\left(1 + \frac{f}{700}\right) \end{align*}
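
A two-line Python version of this conversion (and its inverse, which is needed when placing the triangular filters described next) makes the point about human hearing explicit:

    import numpy as np

    def hz_to_mel(f_hz):
        """Mel score for a frequency in Hz, using the formula above."""
        return 1125.0 * np.log(1.0 + f_hz / 700.0)

    def mel_to_hz(mel):
        """Inverse mapping, used when spacing triangular filters on the mel scale."""
        return 700.0 * (np.exp(mel / 1125.0) - 1.0)

    # The same 50 Hz gap shrinks dramatically on the mel scale at high frequencies:
    print(hz_to_mel(100) - hz_to_mel(50))         # about 72.6 mel
    print(hz_to_mel(10100) - hz_to_mel(10050))    # about 5.2 mel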

We build a filter bank containing many triangular filters and apply them to our FFT features to rescale the signals and convert them to the corresponding Mel scale. In the Mel spectrograms shown in Figure 7, we can see that the different classes' Mel spectrograms differ in many ways, making them good inputs for training a classifier.

Figure 7: Mel Spectrograms for Different Classes
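
For reference, a self-contained NumPy sketch of this filter-bank step is shown below. The 40 filters, the 512-sample frame, the 16 kHz rate, and the final DCT to 13 cepstral coefficients are common textbook choices, not Qeexo AutoML's internal settings.

    import numpy as np
    from scipy.fftpack import dct

    def hz_to_mel(f):
        return 1125.0 * np.log(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (np.exp(m / 1125.0) - 1.0)

    def mel_filterbank(n_filters=40, frame_size=512, sample_rate=16000):
        """Triangular filters spaced evenly on the mel scale (textbook construction)."""
        mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), n_filters + 2)
        bins = np.floor((frame_size + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
        bank = np.zeros((n_filters, frame_size // 2 + 1))
        for m in range(1, n_filters + 1):
            left, center, right = bins[m - 1], bins[m], bins[m + 1]
            bank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
            bank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
        return bank

    # One frame: FFT power spectrum -> log-mel energies -> MFCCs via a DCT.
    frame = np.random.randn(512)                          # placeholder audio frame
    power = np.abs(np.fft.rfft(frame * np.hanning(512))) ** 2
    log_mel = np.log(mel_filterbank() @ power + 1e-10)    # one column of a mel spectrogram
    mfcc = dct(log_mel, norm="ortho")[:13]                # first 13 cepstral coefficients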

Qeexo AutoML also provides features generated from the MFCC coefficients. These feature groups can likewise be selected on the manual selection page shown in Figure 6. If desired, you can visualize the selected features in a UMAP plot by clicking the Visualize button on the Sensor Selection and Feature Group Selection pages.

Based on this discussion, it should be apparent that MFCC features work well for tasks involving human speech. Depending on the task, it may be disadvantageous to include MFCC features if the task does not share similarities with human hearing. However, Qeexo AutoML performs automatic feature reduction when automatic selection is enabled, so this does not need to be an active concern when training models. If the MFCC features are not highly separable for the task, and sufficient data is provided, they will be dropped from the final model during this process.

Conclusion

Qeexo AutoML not only provides model-building functionality, but also presents the details of the trained models. We provide evaluation metrics such as the confusion matrix, by-fold cross validation, and ROC curve, and we even support downloading the trained model to test it elsewhere. As mentioned earlier, we support, but are not limited to, microphone sensor usage for sound recognition. You are free to select any of the other provided sensors, such as the accelerometer and gyroscope. If these additional sensors don't improve model performance, the automated sensor selection process will leave them out of the final device library.

Bibliography

[1] Wikipedia: Sound Recognition, https://en.wikipedia.org/wiki/Sound_recognition



Qeexo AutoML Enables Machine Learning on Arm Cortex-M0 and Cortex-M0+

Qeexo, Co. 09 September 2020

First company to build an automated ML platform for the Arm Cortex-M0 and Cortex-M0+ processors

MOUNTAIN VIEW, Calif. (PRWEB) September 09, 2020

Qeexo, developer of an automated machine learning (ML) platform that accelerates the deployment of tinyML at the edge, today announces that its Qeexo AutoML platform now supports machine learning on Arm® Cortex®-M0 and Cortex®-M0+ processors, which power devices including sensors and microcontrollers from companies such as Arduino, Renesas, STMicroelectronics, and Bosch Sensortec.

The Arm Cortex-M0 processor is the smallest Arm processor available, and the Cortex-M0+ processor builds on the Cortex-M0 while further reducing energy consumption and increasing performance. Qeexo is the first company to automate adding machine learning to a processor of this size. The Cortex-M0 and Cortex-M0+ processors are designed for smart and connected embedded applications and are ideal for simple, cost-sensitive devices because their low power consumption extends battery life in critical use cases such as activity trackers.

Machine learning models built with Qeexo AutoML are highly optimized and have an incredibly small memory footprint. Models are designed to run locally on embedded devices, ideal for ultra low-power, low-latency applications on MCUs and other highly constrained platforms.

“This integration delivers the advantages of data processing at the edge to even the smallest of devices,” said Sang Won Lee, co-founder and CEO of Qeexo. “Qeexo AutoML, combined with the accessibility of MCUs from companies such as Arduino, Renesas, STMicroelectronics, and Bosch Sensortec, greatly benefits application developers, who can now build smart hardware products with relative ease.”

The growing list of machine learning algorithms supported on Qeexo AutoML currently includes: GBM, XGBoost, Random Forest, Logistic Regression, Decision Tree, SVM, CNN, RNN, CRNN, ANN, Local Outlier Factor, and Isolation Forest. Several hardware platforms from Arduino, Renesas, and STMicroelectronics work with Qeexo AutoML out of the box.

Supporting Partner Quotes

Arm

“Today even the smallest devices can contain some layer of artificial intelligence and machine learning. The Cortex-M0 and Cortex-M0+ processors pack high performance with very low power consumption, and the added support of the Qeexo AutoML platform enables application developers to easily add intelligence to small devices such as wearables, making a world of one trillion intelligent devices a closer reality.”

— Steve Roddy, Vice President of Product Marketing, Machine Learning Group of Arm

Arduino

“Arduino is on a mission to make machine learning simple enough for anyone to use. We’re excited to partner with Qeexo AutoML to accelerate professional embedded ML development by guiding users to the optimal algorithms for their application. Combined with Arduino Nano 33 IoT, users can quickly create smart IoT sensors that can perform analytics at the edge, minimize communication, and maximize battery life.”

– Dominic Pajak, VP Business Development, Arduino

Bosch

“Bosch Sensortec and Qeexo are collaborating on machine learning solutions for smart sensors and sensor nodes. We are glad that Qeexo’s AutoML has added support for Cortex-M0 families, to which Bosch Sensortec’s smart sensors like BMF055 belongs. We are excited to see more applications made possible by combining the smart sensors from Bosch Sensortec and AutoML from Qeexo.”

– Marcellino Gemelli, Director of Global Business Development at Bosch Sensortec

Renesas

“Renesas and Qeexo collaborated on the design of a new RA-Ready sensor board: the RA6M3 ML Sensor Module. Equipped with various motion and environmental sensors and enhanced with Qeexo AutoML, this sensor module is the perfect reference platform for developing intelligent machine learning applications.”

– Kaushal Vora, Director of Strategic Partnerships & Global Ecosystem at Renesas

STMicroelectronics

“Qeexo AutoML recently added support for our STWIN industrial platform, which features embedded industrial-grade sensors and an ultra-low-power microcontroller for vibration analysis. By automating the development of ML solutions for advanced industrial IoT applications such as condition monitoring and predictive maintenance, Qeexo AutoML eases the usability of our products.”

– Pierrick Autret, Product Marketing Engineer at STMicroelectronics

###

About Qeexo
Qeexo is the first company to automate end-to-end machine learning for embedded edge devices (Cortex-M0-to-M4 class). Our one-click, fully-automated, Qeexo AutoML platform allows customers to leverage sensor data to rapidly build machine learning solutions for highly constrained environments with applications in industrial, IoT, wearables, mobile, automotive, and more.

Solutions built with Qeexo AutoML deliver high performance and are optimized for ultra-low latency, ultra-low power consumption, and an incredibly small memory footprint. As billions of sensors collect data on every device imaginable, Qeexo can equip them with machine learning to discover knowledge, make predictions, and generate actionable insights.

Spun out of Carnegie Mellon University, Qeexo is venture-backed and headquartered in Mountain View, CA, with offices in Pittsburgh, Shanghai, and Beijing. To learn more, visit https://qeexo.com.


Inference Settings: Instance Length and Classification Interval

Xun (Jared) Liu, Dr. Rajen Bhatt, and Dr. Geoffrey Newman

Qeexo AutoML enables machine learning application developers to customize inference settings based on their use case. These parameters are critical for achieving the best live performance of models on the embedded target. In this article, we will discuss the two parameters associated with the inference settings: instance length and classification interval.

Figure 1. Inference settings with microphone sensor (16000Hz) on Arduino

Instance Length

Instance length is the time period over which one prediction is made using raw sensor data, measured in milliseconds. Based on the selected sensors and their ODRs (output data rates), this time is converted to a number of raw sensor data samples. These samples are used for computing features during ML model training and during on-device inference. If only one sensor is used for the application, the instance length is converted from milliseconds to a number of samples using that sensor's ODR. If there are multiple sensors with different ODRs, the conversion is based on the sensor with the highest ODR; for the other sensors, the number of samples is determined proportionally. Below are some examples for the Arduino sensor board with an instance length of 500 milliseconds (0.5 seconds).

Setting 1: Microphone with an ODR of 16000 Hz.

    \begin{align*} 16000\,\text{Hz} \times 0.5\,\text{s} = 8000 \text{ samples} \end{align*}

Setting 2: Accelerometer and gyroscope with 952 Hz ODR and microphone with 16000 Hz ODR.

For the microphone,

    \begin{align*} 16000\,\text{Hz} \times 0.5\,\text{s} = 8000 \text{ samples} \end{align*}

For the accelerometer and gyroscope,

    \begin{align*} 952\,\text{Hz} \times 0.5\,\text{s} = 476 \text{ samples} \end{align*}
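
A small Python sketch of this conversion, using the example ODRs above (the per-sensor multiplication is a simplification consistent with the proportional rule described earlier):

    # Milliseconds-to-samples conversion for the 500 ms example above.
    # ODRs match the Arduino example: microphone 16000 Hz, accel/gyro 952 Hz.
    def samples_per_instance(instance_ms, odr_hz):
        return int(instance_ms * odr_hz / 1000)

    sensor_odrs = {"microphone": 16000, "accelerometer": 952, "gyroscope": 952}
    for sensor, odr in sensor_odrs.items():
        print(sensor, samples_per_instance(500, odr))
    # microphone 8000, accelerometer 476, gyroscope 476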

How to Determine the Instance Length

A long instance length corresponds to a larger number of samples for featurization. According to basic Fourier Transform principles, more data points yield finer frequency resolution, which captures more information from the signals and therefore produces a greater number of features for ML model training.

However, given a fixed total recording length, a long instance length reduces the training dataset size. For example, if a signal of length L seconds is divided into segments of T seconds each, we get more segments when T is smaller and fewer when T is larger. For on-device live testing, a larger T also means more data must be collected at once to form a single prediction. Due to the memory constraints of embedded devices, there are limits on the maximum instance length. Too small an instance length can result in numerical instability of the signal processing algorithms and may not capture sufficient discriminative information from the signals. For these reasons, AutoML restricts the minimum instance length to at least 64 samples.

Consider the following example for the microphone sensor (16000 Hz) on Arduino. The supported instance length is at minimum 64 samples and at most 12000 samples. In milliseconds, this represents a range from 4 milliseconds to 750 milliseconds, as calculated here:

    \begin{align*} \frac{64}{16000} \times 1000 = 4\,\text{ms}, \qquad \frac{12000}{16000} \times 1000 = 750\,\text{ms} \end{align*}

If multiple sensors (accelerometer & gyroscope; 952Hz ODR) are chosen, the range then becomes 4 to 1075 milliseconds.
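
The same arithmetic, run in reverse, gives the millisecond range from the sample limits (the 64- and 12000-sample bounds are the ones quoted above for the 16 kHz microphone):

    # Convert the sample limits back to milliseconds for the 16 kHz microphone.
    def samples_to_ms(n_samples, odr_hz):
        return n_samples / odr_hz * 1000

    print(samples_to_ms(64, 16000), samples_to_ms(12000, 16000))   # 4.0 ms, 750.0 ms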

Selecting the Best Instance Length

Qeexo AutoML supports automatically determining the instance length or setting it manually. The "Determine Automatically" option takes the minimum and maximum permissible values of the instance length and finds the optimal value within this range, with the optimization trying to maximize classification performance. Keep in mind that this optimization makes model training take longer than manual selection.

Manual selection is constrained to the minimum and maximum permissible values; any value within this range can be chosen for building the models. One way to estimate an instance length manually is to visualize the signal. As a general guideline, choose an instance length that is neither so short that it misses part of the signal, nor so long that it includes unnecessary noise spanning multiple events.

Instance length is a common parameter across all of the models, i.e., an instance length determined automatically or manually is applicable across all of the models.

Classification Interval (CI)

Classification interval refers to the time interval in milliseconds between any two classifications when live-streaming sensor signals, as illustrated in Figure 2. It is a user-defined parameter and accepts a value between 100 milliseconds (10 classifications per second) and 3600 seconds (1 classification per hour). The classification interval is not optimized even when the "Determine Automatically" option is selected.

Shorter intervals make predictions more frequent, but consume more power, while longer intervals save power, but can miss quick-burst live-streaming events when they occur between two consecutive classifications.

Figure 2. Instance length and classification interval

The detailed description of the Classification Interval is in this blog post.


Classification Interval for Qeexo AutoML Inference Settings

Sidharth Gulati and Dr. William Levine 12 August 2020

Inference settings contain two important parameters: instance length and classification interval. In this blog, we will explain the classification interval in conjunction with raw sensor signals, ODR, instance length, latency, and the performance of the model on the embedded target.

The classification interval is the step size for each on-device classification, i.e., live testing. This interval determines how often we perform on-device classification, as shown in the plot below. For example, if the classification interval is set to 200 milliseconds, Qeexo AutoML will produce a classifier that classifies incoming data at a rate of 5 Hz (5 times per second).

In the plot below, the instance length (in milliseconds) determines how many milliseconds of sensor data are taken into account for each classification. Depending on the maximum sensor ODR selected for the use case, the instance length in milliseconds is converted into a number of raw sensor data samples. For example, an instance length of 250 milliseconds corresponds to 238 raw sensor data samples if the sensor ODR is 952 Hz.

Please note that the true classification interval can never be less than the classification latency (the amount of time needed to calculate a single classification result). So, if the requested classification interval is less than the classifier latency, the true classification interval will necessarily be larger than the requested one, as the next classification will not begin until the current one is finished.
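
In other words, the effective interval is the larger of the requested interval and the classifier latency. A one-line sketch of that rule, with hypothetical numbers:

    # Effective classification interval in milliseconds. The latency values are hypothetical.
    def effective_interval_ms(requested_ms, latency_ms):
        return max(requested_ms, latency_ms)

    print(effective_interval_ms(100, 180))   # 180: latency dominates the requested 100 ms
    print(effective_interval_ms(500, 180))   # 500: the requested interval is honored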

There are three different relational cases between the classification interval and the instance length, which are described below.

Case 1: Classification Interval < Instance Length

The case below shows on-device classification with Classification Interval < Instance Length. This results in overlapping instances, i.e., some "overlap" of data between two classifications.

This choice of parameters may be appropriate for detecting short-lived transient events.

Case 2: Classification Interval = Instance Length

The case below shows on-device classification with Classification Interval = Instance Length. This results in no "overlap" of data between two classifications, but also no gap between them.

This reduces the rate of classification and, depending on the application use case (particularly when working with high-ODR sensors), may result in missing some transitional data.

Case 3: Classification Interval > Instance Length

The case below shows on-device classification with Classification Interval > Instance Length. This results in no "overlap" of data between two classifications, and there is a gap between them.

This gap manifests as even "slower" classifications (compared to Cases 1 and 2 above) and might result in missing some transitions or classes completely.

This choice of parameters may be appropriate for monitoring the state of long-running machinery, where an anomalous state is expected to persist for some time. The larger classification interval has the advantage of reducing power consumption.
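
The three cases reduce to a simple comparison between the two settings. The sketch below computes the resulting overlap or gap for a few hypothetical values, with the instance length fixed at 250 ms:

    # Overlap or gap between consecutive classification windows, given an
    # instance length (IL) and classification interval (CI), both in ms.
    def window_relation(instance_ms, interval_ms):
        if interval_ms < instance_ms:
            return f"Case 1: overlap of {instance_ms - interval_ms} ms"
        if interval_ms == instance_ms:
            return "Case 2: no overlap and no gap"
        return f"Case 3: gap of {interval_ms - instance_ms} ms"

    for ci in (100, 250, 1000):          # hypothetical classification intervals
        print(ci, "->", window_relation(250, ci))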


Anomaly Detection in Qeexo AutoML

Dr. Karanpreet Singh and Dr. Rajen Bhatt 15 July 2020

Qeexo AutoML supports three one-class classification algorithms widely used for anomaly/outlier detection: Isolation Forest, Local Outlier Factor, and One-class Support Vector Machine. These algorithms build models by learning from only one class of data. After learning, the anomaly detection algorithms determine whether a test instance belongs to the normal class or is an anomaly. Qeexo has taken a one-class approach to anomaly detection because it is much easier to collect data for the normal class (e.g., the normal operation of a machine) than to do multi-class data collection where each type of anomaly represents one class.

Isolation Forest (IF) [1]

Isolation Forest is an efficient algorithm for outlier detection that is also very effective on high-dimensional datasets. It builds an ensemble of decision trees in which each tree is trained randomly: at each node in a tree, it picks a feature at random, then picks a random threshold value (between the minimum and maximum values of that feature) for splitting the dataset. The trees are grown until all the instances are isolated from one another. Anomalies generally tend to be far away from normal instances, and the number of splits required to isolate a sample is equivalent to the path length from the root node to the terminating node in the tree. The path length, averaged over all the trees, is noticeably shorter for anomalies and comparatively longer for normal data.

The average path length over the collection of isolation trees, referred to as E(h(x)) in [1], is used to compute the anomaly score as:

    \begin{align*} s(x, n) = 2^{-\frac{E(h(x))}{c(n)}} \end{align*}

where n is the total number of instances in the training data and c(n) = 2H(n-1) - 2(n-1)/n. Here H(i) is the harmonic number, which can be estimated by ln(i) + 0.5772156649 (the Euler-Mascheroni constant).
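
As a rough offline illustration of the one-class workflow (train on normal data only, flag everything far from it), here is a scikit-learn sketch with a synthetic feature matrix; it is not Qeexo AutoML's embedded implementation.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Train only on "normal" feature vectors; anomalies are flagged at inference time.
    rng = np.random.RandomState(0)
    X_normal = rng.normal(loc=0.0, scale=1.0, size=(500, 8))     # synthetic features
    X_test = np.vstack([rng.normal(0, 1, (5, 8)),                # normal-looking
                        rng.normal(6, 1, (5, 8))])               # far from training data

    model = IsolationForest(n_estimators=100, random_state=0).fit(X_normal)
    print(model.predict(X_test))          # +1 = inlier, -1 = anomaly
    print(model.score_samples(X_test))    # lower = more anomalous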

Local Outlier Factor (LOF) [2]

The LOF algorithm compares the density of instances around a given instance with the density around its neighboring instances. The distances from the given instance to its k nearest neighbors are used to estimate its local density. LOF then compares the local density of the given instance to the local densities of its neighbors: instances that have substantially lower density than their neighbors are considered outliers.

If we consider some data points in a space, the reachability distance of a data point p with respect to data point o is defined as:

    \begin{align*} \text{reach-dist}_k(p, o) = \max\{k\text{-distance}(o),\ d(p, o)\} \end{align*}

where k is the number of neighbors considered in this calculation. The k-distance(o) is the distance from data point o to its kth nearest neighbor in the dataset, and d(p, o) is the distance between data points p and o.

The reachability distance is used to calculate the local reachability density (LRD). The LRD is the inverse of the average reachability distance over the k-neighbors of data point p. It can be written as:

    \begin{align*} \text{lrd}_k(p) = \left( \frac{\sum_{o \in N_k(p)} \text{reach-dist}_k(p, o)}{|N_k(p)|} \right)^{-1} \end{align*}

Finally, the LOF of a data point p is the average of the ratios of the LRDs of its k-neighbors to the LRD of p:

    \begin{align*} \text{LOF}_k(p) = \frac{\sum_{o \in N_k(p)} \frac{\text{lrd}_k(o)}{\text{lrd}_k(p)}}{|N_k(p)|} \end{align*}
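
A corresponding scikit-learn sketch for LOF is below; novelty=True fits the model on normal data so it can score previously unseen instances, which mirrors the one-class setup used here (again with synthetic features, not Qeexo's implementation).

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.RandomState(0)
    X_normal = rng.normal(size=(500, 8))                      # synthetic "normal" features
    X_test = np.vstack([rng.normal(0, 1, (3, 8)), rng.normal(6, 1, (3, 8))])

    lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_normal)
    print(lof.predict(X_test))            # +1 = inlier, -1 = outlier
    print(lof.score_samples(X_test))      # lower = more anomalous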

One-class SVM (OCSVM) [3]

OCSVM tries to separate the instances from the origin in a high-dimensional feature space. In the original space, this corresponds to finding a small region that encompasses all the instances; if a given instance does not lie in this region, it is considered an anomaly. OCSVM uses quadratic programming to solve the optimization problem that finds the coefficients corresponding to the support vectors.

The objective function of the model for separating the data from the origin is written as:

    \begin{align*} \min_{w,\ \xi,\ \rho}\ \frac{1}{2}\|w\|^2 + \frac{1}{\nu n}\sum_{i=1}^{n} \xi_i - \rho \quad \text{subject to} \quad (w \cdot \Phi(x_i)) \geq \rho - \xi_i,\ \ \xi_i \geq 0 \end{align*}

The ξ_i are slack variables that are penalized in the objective function. Thus, the decision function for an instance becomes f(x) = sgn((w · Φ(x)) − ρ), which will be positive for most of the training data points while keeping the regularization term ‖w‖ small. The variable ν controls the trade-off between these two goals.
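
And a matching scikit-learn sketch for OCSVM, where the nu parameter plays the role of ν above, bounding the fraction of training points treated as outliers (synthetic data again):

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.RandomState(0)
    X_normal = rng.normal(size=(500, 8))                      # synthetic "normal" features
    X_test = np.vstack([rng.normal(0, 1, (3, 8)), rng.normal(6, 1, (3, 8))])

    ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_normal)
    print(ocsvm.predict(X_test))              # +1 = inside the learned region, -1 = anomaly
    print(ocsvm.decision_function(X_test))    # signed distance to the decision boundary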

Mapping Anomaly Scores to Range of 0 to 1

Qeexo AutoML internally squashes the anomaly scores from different models into the range (0, 1]. This is done to have a consistent view of anomalies across all the algorithms, which in turn assists in better calibration of the anomaly threshold. An instance is called an anomaly if the output of the squashing function is larger than a threshold value. The default threshold value in AutoML is 0.5, and the user has the option to calibrate the threshold to bias predictions towards inliers or outliers.
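
Qeexo has not published the exact squashing function, but a logistic transform is one way to picture the idea: raw, algorithm-specific scores are mapped into a common 0-to-1 range so that a single default threshold of 0.5 applies to every algorithm. The function and parameters below are purely illustrative.

    import numpy as np

    # Hypothetical squashing: a logistic transform mapping raw scores into (0, 1).
    # Qeexo AutoML's actual function and parameters are not published.
    def squash(raw_score, center=0.0, scale=1.0):
        return 1.0 / (1.0 + np.exp(-(raw_score - center) / scale))

    def is_anomaly(raw_score, threshold=0.5):
        return squash(raw_score) > threshold

    print(is_anomaly(-2.0), is_anomaly(3.0))   # False True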

Advantages of Qeexo AutoML Anomaly detection:

  • Only normal-class data is required. It is extremely difficult, and sometimes impossible, to collect data for every kind of anomaly; Qeexo AutoML needs data from only one class.
  • Easy calibration of the anomaly detection threshold with live streaming of scores and live classification
  • Support for the multiple algorithms described in this blog, with quantization support for Isolation Forest
  • Can also be used for other one-class applications, such as detecting a unique air gesture made with a magic wand against all other gestures
  • Support for automatic and manual selection of features

Example Case

An application of anomaly detection for machine monitoring can be found here: https://qeexotdkcom.wpengine.com/detecting-anomalies-in-machine-data-with-qeexo-automl-2/

References:

[1] Liu, F. T., Ting, K. M., & Zhou, Z. H. (2008, December). Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining (pp. 413-422). IEEE.

[2] Breunig, M. M., Kriegel, H. P., Ng, R. T., & Sander, J. (2000, May). LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD international conference on Management of data (pp. 93-104).

[3] Schölkopf, B., Williamson, R. C., Smola, A. J., Shawe-Taylor, J., & Platt, J. C. (2000). Support vector method for novelty detection. In Advances in neural information processing systems (pp. 582-588).


Qeexo Takes Misery Out of EdgeML

Electronic Engineering Journal 14 July 2020

Startup Takes a Dose of its Own Medicine

Read the full article at: https://www.eejournal.com/article/qeexo-takes-misery-out-of-edgeml/


Qeexo AutoML Now Hosted on AWS, Adds Algorithm Support

Embedded Computing 16 June 2020

The latest release of Qeexo AutoML makes the automated TinyML model development and deployment platform available as a web application hosted on Amazon Web Services (AWS).

Read the full article at: https://www.embedded-computing.com/machine-learning/qeexo-automl-now-hosted-on-aws-adds-algorithm-support#


Qeexo Takes ‘TinyML’ to AWS Cloud

Enterprise AI

“Qeexo, the Carnegie Mellon University spinoff, is expanding public cloud access to its automated machine learning platform as it pushes its no-code “TinyML” approach to the network edge.”

Read the full article at: https://www.enterpriseai.news/2020/06/08/qeexo-takes-tinyml-to-aws-cloud/


AutoML Mentioned in insideBIGDATA Latest News 6/12

InsideBIGDATA 15 June 2020

Qeexo Announces General Availability of the Qeexo AutoML Platform to Enable TinyML for Edge Devices

Read the full article at: https://insidebigdata.com/2020/06/12/insidebigdata-latest-news-6-11-2020/