
LETU Machine Learning Contest Video

KLTV 12 November 2020

Click HERE to view the video

Full URL to video source: https://www.kltv.com/video/2020/11/06/machine-learning-contest/


Can a piece of drywall be smart? Bringing machine learning to everyday objects with TinyML

Diginomica 11 November 2020

So-called smart devices like Amazon Echo and Google Nest made early headway into our homes. But will devices as small as a vibration sensor soon outsmart an Echo? Here’s a look under the hood of “TinyML.”

Read the full article at: https://diginomica.com/can-piece-drywall-be-smart-bringing-machine-learning-everyday-objects-tinyml


LeTourneau University students design artificial intelligence projects for contest

Longview News-Journal 09 November 2020

Since mid-September, 10 teams of LeTourneau University engineering students have been working on projects involving artificial intelligence to enter into a contest. Winners of that contest were announced Friday in the lobby of the Glaske Engineering Center after demonstrations from students.

The teams were challenged by contest sponsors Qeexo, the maker of the machine learning platform AutoML, and Arduino, an open-source electronics hardware and software platform, to provide solutions for real-world problems using embedded machine learning. Students' projects range from a device that monitors hand movements so it is almost unbeatable at rock, paper, scissors to another device that helps fly fishermen perfect their cast.

Link to the full article: https://www.news-journal.com/news/local/letourneau-university-students-design-artificial-intelligence-projects-for-contest/article_c936dc60-2088-11eb-930c-bb89584d78c8.html


Live Classification Analysis

Sidharth Gulati, Dr. Rajen Bhatt 03 November 2020

Qeexo AutoML enables machine learning application developers to analyze different performance metrics for their use cases and, based on those real-time test-data metrics, make decisions about their ML models, such as tweaking training parameters or adding more data. In this article, we discuss the Live Classification Analysis module in detail.

Figure 1: Live Classification Analysis

Once the user clicks Live Classification Analysis for a particular model, they are directed to the Live Classification Analysis module, which resembles the screenshot below.

Figure 2: Live Classification Analysis

We will not cover sensitivity analysis in this module; for details on sensitivity analysis, please read this blog.

Figure 3: Confusion Matrix

For the purposes of this blog, we will use a use case that classifies a few musical air gestures: Drums, Violin, and Background. These datasets can be found here.

Live Data Collection

Qeexo AutoML provides a live data collection module that can be used to collect data for analysis. Data collection requires a data collection library to be pushed to the respective hardware; the user can push the library by clicking the "Push To Hardware" button shown below.

Figure 4: Push to Hardware Screen

Once the user clicks the button and the library has been flashed successfully, they can record data for the model's trained classes for analysis purposes. The user can select any number of seconds of data to analyze. For this particular use case, we have three classes: Drums, Background, and Violin, as shown below.

Figure 5: Data Recording Input Screen

Once the user clicks "Record", they are redirected to the Data Collection page shown below. This module is the same as the Data Collection module used to collect training data.

Figure 6: Recording Screen

As the user collects data for the respective classes, they can see the data in the tabular format shown below. They can view dataset information, delete data, and re-record as needed.

Figure 7: Dataset Collection

Once the user has collected the data, they can select the datasets they want to analyze using the checkboxes shown above. As soon as at least one dataset is selected, the Analyze button becomes active and, as we say, with Qeexo AutoML "a click is all you need to do Machine Learning": one click computes all of the performance metrics!

Figure 8: Analyze

Performance Metrics

Qeexo AutoML supports five different types of performance metrics, listed below (a sketch of how such metrics can be computed in general follows the list):

  1. Confusion Matrix: Represents true labels and predicted labels in a square matrix. Diagonal elements (upper left to lower right) indicate correctly classified instances; off-diagonal elements indicate misclassified instances. The instances in each row sum to the total number of instances for that class.
  2. F-1 Score: The harmonic mean of Precision and Recall, computed as 2 * (Precision * Recall) / (Precision + Recall). Precision measures how many of the samples detected as a given class are relevant; Recall measures how many of the relevant samples of a given class are detected.
  3. Matthews Correlation Coefficient: A measure of discriminative power for binary classifiers. In the multi-class case, it quantifies which combinations of classes are the least distinguished by the model. Values can range between -1 and 1, although in AutoML they will most often be between 0 and 1. A value of 0 means the model cannot distinguish between the given pair of classes at all, and a value of 1 means the model makes this distinction perfectly.
  4. ROC Curve: Plots the False Positive Rate (FPR, x-axis) vs. the True Positive Rate (TPR, y-axis) for each class in the classification problem. The dotted line indicates coin-flip performance, where the model has no ability to distinguish among the classes. The greater the area under the curve (AUC), the better the model.
  5. Kernel Density Estimation plots: Produces n plots, where n is the number of trained classes. Each plot shows the estimated probability density function for one class versus the rest of the classes.
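
As a rough illustration only (this is not Qeexo AutoML code), the sketch below computes the same kinds of metrics with scikit-learn on hypothetical, made-up predictions for the three classes in this blog; the label and score arrays are purely illustrative.

```python
# A minimal sketch (not Qeexo AutoML code): computing the same kinds of
# metrics with scikit-learn on hypothetical live-classification results.
import numpy as np
from sklearn.metrics import (confusion_matrix, f1_score,
                             matthews_corrcoef, roc_auc_score)

classes = ["Background", "Drums", "Violin"]

# Hypothetical true labels and model predictions (class indices).
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 0])

# Confusion matrix: rows are true labels, columns are predicted labels;
# each row sums to the number of true instances of that class.
print(confusion_matrix(y_true, y_pred))

# Per-class F-1 score: harmonic mean of precision and recall.
print(f1_score(y_true, y_pred, average=None))

# Overall multi-class Matthews Correlation Coefficient; the loop below
# roughly approximates a per-class-pair view by restricting to the
# samples of two classes at a time.
print(matthews_corrcoef(y_true, y_pred))
for i in range(len(classes)):
    for j in range(i + 1, len(classes)):
        mask = np.isin(y_true, [i, j])
        pair_mcc = matthews_corrcoef(y_true[mask], y_pred[mask])
        print(classes[i], "vs", classes[j], round(pair_mcc, 3))

# One-vs-rest ROC AUC needs per-class scores (e.g. probabilities);
# here they are faked from the hard predictions for illustration.
y_score = np.eye(len(classes))[y_pred] * 0.7 + 0.1  # rows sum to 1.0
print(roc_auc_score(y_true, y_score, multi_class="ovr"))
```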

For the use case in this blog, the respective metrics are shown below:

Confusion Matrix

Figure 9: Confusion Matrix

ROC Curve

Figure 10: ROC Curve

Matthews Correlation Coefficient

Figure 11: Matthews Correlation Coefficient

F-1 Score

Figure 12: F-1 Score

Kernel Density Estimation (KDE)

Figure 13: KDE for Background vs Rest of the labels
Figure 14: KDE for Drums vs Rest of the labels
Figure 15: KDE for Violin vs Rest of the labels
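
To illustrate what a one-vs-rest KDE plot like the ones above conveys, the sketch below estimates and plots densities with SciPy's gaussian_kde over hypothetical, made-up model scores for the Drums class; none of the numbers come from AutoML.

```python
# A minimal sketch (not Qeexo AutoML code): a one-vs-rest kernel density
# estimate over hypothetical, made-up model scores for the "Drums" class.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical classifier scores for the Drums output: samples that
# truly are Drums vs. samples of every other class.
scores_drums = rng.normal(0.8, 0.10, 200)  # target class
scores_rest = rng.normal(0.3, 0.15, 400)   # rest of the labels

xs = np.linspace(0.0, 1.0, 200)
plt.plot(xs, gaussian_kde(scores_drums)(xs), label="Drums")
plt.plot(xs, gaussian_kde(scores_rest)(xs), label="Rest of the labels")
plt.xlabel("Model score for class 'Drums'")
plt.ylabel("Estimated density")
plt.legend()
plt.show()
```

Well-separated density curves for a class and the rest of the labels indicate that the model's scores discriminate that class cleanly; heavy overlap suggests the classes are being confused.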

With these performance metrics, a user can determine how "well" the model performs on test data or in a live classification scenario. With the help of this module, a user can decide on different aspects of the ML pipeline, such as whether to retrain a model with different parameters, whether more data would improve performance, or whether different sensitivities should be used for different classes. In a nutshell, Live Classification Analysis gives the user more control over the ML model development cycle based on performance analysis of test data.