It’s not so very different from how a human child learns to catch. Initially there will be many misses, but with repeated attempts and the appropriate labeling of the data (in this case probably the proud parents smiling and clapping when Junior makes a catch, and making commiserating sounds when she doesn’t), the child will learn to catch the ball.
But consider this – the machine learning algorithm has no clue about the laws of physics (and of course, neither does the child). Basic physics tells us that a thrown ball more or less follows a parabolic trajectory, determined by its initial speed and launch angle (neglecting air resistance). If the algorithm “knew” that, imagine how much faster the learning process would be. There would be far fewer failures early on, and far less data would be required (and if you’re wondering about the importance of the quantity of data to Machine Learning and Artificial Intelligence, have a read of AI Superpowers by Kai-Fu Lee).
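To make the data-efficiency point concrete, here is a minimal sketch in Python (not code from the paper – the function names and numbers are purely illustrative). If the model already “knows” the parabolic form y = v·t − ½gt², the only thing left to learn from data is a single parameter, the launch speed, so a handful of noisy observations is enough:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def simulate_throw(v0, times, noise_std=0.05, seed=0):
    """Noisy height measurements of a ball thrown straight up at speed v0."""
    rng = np.random.default_rng(seed)
    return v0 * times - 0.5 * G * times**2 + rng.normal(0.0, noise_std, times.size)

def fit_physics_informed(times, heights):
    """Least-squares estimate of v0 with the parabolic form baked in.

    Rearranging y = v*t - 0.5*g*t^2 gives (y + 0.5*g*t^2) = v*t,
    a linear fit with just ONE unknown parameter.
    """
    target = heights + 0.5 * G * times**2
    return float(np.dot(times, target) / np.dot(times, times))

# Only three noisy observations, yet the trajectory is pinned down,
# because physics has already supplied the shape of the curve.
times = np.array([0.1, 0.2, 0.3])
heights = simulate_throw(v0=8.0, times=times)
v_est = fit_physics_informed(times, heights)
```

A model with no built-in physics would have to discover the parabolic shape itself from examples, which is exactly where the large data requirement comes from.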
This is the approach taken by our technology partners Front End Analytics in developing Predictive Analytics 3.0. It’s a fascinating proposition – and involves using conventional analytical tools to “inform” a machine learning model, thus dramatically reducing the amount of data required to train the ML algorithm. The whole thing is “democratized” – that is, deployed to end-users – with EASA, a model-agnostic deployment platform which enables companies to safely share and deploy all kinds of models, from financial models in spreadsheets to engineering models in Matlab – and now, Machine Learning models created in TensorFlow and other frameworks.
You can read the paper here. The application cited in the paper is the prediction of failure of an automotive component, but this approach has applications not only in manufacturing, but also in areas such as drug design, financial analysis and risk management, healthcare, and many more.