
From simulation-powered design to predictive digital twins

by Mike Nieburg in CAD/CAE, Democratization, News & Articles


Let’s imagine for a moment that you are involved in a project that requires training a machine learning (ML) algorithm to control an industrial robot arm to catch a tennis ball. The ball is repeatedly thrown towards the robot at varying speeds and trajectories. Your inputs are the feeds from several video cameras that can “see” the ball as it starts its journey, and the goal is to feed those inputs to the machine learning algorithm so that it generates outputs which position the arm correctly to make the catch.

Naturally, there’s going to be a learning curve (literally); there will be many failures at first. There will need to be a mechanism for labeling the failed and the successful attempts as such, but eventually the algorithm will “learn” how to catch the ball.



It’s not so very different from how a human child learns to catch. Initially there will be many failures, but with many attempts, and with appropriate labeling of the data (in this case probably the proud parents smiling and clapping when Junior makes a catch, and making commiserating sounds when she doesn’t), the child will learn to catch the ball.

But consider this – the machine learning algorithm has no clue about the laws of physics (and of course, neither does the child). Basic physics tells us that a thrown ball follows a more or less parabolic trajectory, determined by its initial position and velocity. If the algorithm “knew” that, imagine how much faster the learning process would be: there would be far fewer failures early on, and far less data would be required. (If you’re wondering about the importance of the quantity of data in machine learning and artificial intelligence, have a read of AI Superpowers by Kai-Fu Lee.)
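To see what that prior knowledge buys you, here is a minimal sketch in plain Python. The coordinate convention, function name, and numbers are all illustrative, and air resistance is ignored for simplicity, but the point stands: given an estimate of the ball’s initial position and velocity from the cameras, elementary projectile motion predicts where the ball will cross the robot’s catch plane, with no training data at all.

```python
# Minimal sketch of a physics prior for the ball-catching problem.
# Given the ball's initial position and velocity (e.g., estimated from
# the camera feeds), projectile motion predicts where the ball will
# cross the robot's catch plane. Air resistance is ignored, and all
# names and numbers here are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def predict_catch_point(x0, y0, z0, vx, vy, vz, catch_plane_x):
    """Predict (y, z) where the ball crosses the plane x = catch_plane_x.

    x points horizontally toward the robot, y is lateral, z is height.
    Returns None if the ball never reaches the plane above the ground.
    """
    if vx <= 0:
        return None                         # ball not moving toward the robot
    t = (catch_plane_x - x0) / vx           # time of flight to the plane
    y = y0 + vy * t                         # constant lateral velocity
    z = z0 + vz * t - 0.5 * G * t ** 2      # parabolic drop under gravity
    return (y, z) if z > 0 else None        # below ground at the plane -> no catch

# Example: ball released 5 m from the robot at 8 m/s toward it,
# with a slight leftward drift and an upward component.
print(predict_catch_point(x0=0.0, y0=0.5, z0=1.2,
                          vx=8.0, vy=-0.5, vz=3.0,
                          catch_plane_x=5.0))
```

A prediction like this doesn’t have to be perfect to be useful: even an approximate catch point narrows the arm’s search space enormously, which is exactly why far fewer labeled throws would be needed.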

This is the approach taken by our technology partners Front End Analytics in developing Predictive Analytics 3.0. It’s a fascinating proposition – and involves using conventional analytical tools to “inform” a machine learning model, thus dramatically reducing the amount of data required to train the ML algorithm. The whole thing is “democratized” – that is, deployed to end-users – with EASA, a model-agnostic deployment platform which enables companies to safely share and deploy all kinds of models, from financial models in spreadsheets to engineering models in MATLAB – and now, machine learning models created in TensorFlow and other frameworks.
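As a rough illustration of the general idea (a sketch only, not Front End Analytics’ actual implementation), one common way to “inform” a machine learning model with a conventional analytical tool is residual learning: the analytical model makes a baseline prediction, and the network is trained only on the discrepancy between that baseline and the observed data. The TensorFlow/Keras snippet below uses placeholder data and function names to show the pattern:

```python
# Illustrative sketch only -- not Front End Analytics' actual method.
# Pattern: an analytical/physics model supplies a baseline prediction,
# and the ML model learns only the residual (observed - baseline).
# The network then only has to capture what the physics misses,
# which typically requires far less training data.
import numpy as np
import tensorflow as tf

def physics_model(x):
    """Stand-in for a conventional analytical tool (placeholder)."""
    return 2.0 * x[:, :1]  # e.g., a simple closed-form estimate

# Placeholder training data: inputs and measured outcomes
x_train = np.random.rand(200, 3).astype("float32")
y_train = 2.0 * x_train[:, :1] + 0.1 * np.sin(5 * x_train[:, 1:2])

# Train the network on the residual the analytical model cannot capture
residual = y_train - physics_model(x_train)

net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
net.compile(optimizer="adam", loss="mse")
net.fit(x_train, residual, epochs=50, verbose=0)

def informed_prediction(x):
    """Analytical baseline plus the learned correction."""
    return physics_model(x) + net.predict(x, verbose=0)
```

Because the network only has to learn the correction rather than the full input-output relationship from scratch, the training set can be far smaller – which is the essence of the “informed” approach described above.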

You can read the paper here. The application cited in the paper is the prediction of failure of an automotive component, but this approach has applications not only in manufacturing, but also in areas such as drug design, financial analysis and risk management, healthcare, and many more.

Enjoy!
