A recipe for the most reliable and efficient method of autonomous production conceivable

Much has been said in recent times about artificial intelligence and machine learning. Dig deeper, however, and some applications are so modest in scope that you would hesitate to label them AI at all. Others have inner workings so difficult to comprehend that it would be far too risky to let them loose on complete production lines. How can we harness the might of machine learning in manufacturing, yet still stay on the safe side?

In an ideal world, we would be able to combine the myriad possibilities offered by self-learning algorithms with rule-based expert knowledge. The methods of machine learning are useful if you want to see round corners and predict numerical outcomes, but they struggle when it comes to responding to unprecedented disruptions. Things are different with rules. They enable immediate, validatable responses to disruptions, although even at their best they can only determine short-term developments. Bring the two approaches together and you would have the most reliable and efficient method of autonomous manufacturing currently imaginable.

GFT has invested years in shaping this concept of combining the two approaches and has developed software that we refer to as the Model in the Middle. It now constitutes the core component of sphinx open online, an IoT platform that enables both autonomous responses to spontaneous events and the application of machine learning methods. It achieves this by working continuously ‘in the loop’: at extremely rapid intervals it looks back at the past in order to compute forecasts of the future. An important aspect of this is the ability of the Model in the Middle to store past events without having to tap into separate applications. It passes historical values cyclically to an AI module, which typically draws on the cloud for computational power. This allows it to create forecasts, which in turn flow back into the storage system. And this makes it possible to peer into the future.
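The loop described above can be sketched in a few lines of Python. This is a minimal illustration only, not the actual sphinx open online implementation: the names (`ModelInTheMiddle`, `naive_forecaster`) are invented for this example, and a trivial trend extrapolation stands in for the cloud-based ML module.

```python
from collections import deque

class ModelInTheMiddle:
    """Sketch of the 'in the loop' cycle: the model keeps its own history,
    hands it to a forecasting module at regular intervals, and stores the
    resulting forecast alongside the live values."""

    def __init__(self, history_size=1000):
        self.history = deque(maxlen=history_size)  # past events, kept locally
        self.forecast = []                         # predicted future values

    def record(self, value):
        """Store an incoming machine or sensor value."""
        self.history.append(value)

    def run_forecast_cycle(self, forecaster):
        """Cyclically pass historical values to an AI module and keep
        the forecast it returns in storage."""
        self.forecast = forecaster(list(self.history))
        return self.forecast

def naive_forecaster(history, horizon=3):
    """Placeholder 'AI module': naive trend extrapolation standing in
    for a cloud-based ML model."""
    if len(history) < 2:
        return history[-1:] * horizon
    step = history[-1] - history[-2]
    return [history[-1] + step * (i + 1) for i in range(horizon)]

model = ModelInTheMiddle()
for v in [10, 12, 14, 16]:
    model.record(v)
print(model.run_forecast_cycle(naive_forecaster))  # → [18, 20, 22]
```

In a real deployment the forecaster would be a remote ML service; the essential point is that history and forecasts live in the same store, so no separate application has to be consulted.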

The beauty of this method is that forecasts created along these lines can be merged and matched with expert knowledge. If we know there will be a problem one hour from now, we can prepare for that exact situation before we get there and call on a tried-and-tested response in the form of a rule. It’s a bit like thinking ahead behind the wheel – predictive driving. Imagine you’re travelling down the road and you see taillights in the distance. What do you do? You ease off the accelerator. If you’re driving in fog, you can only see the lights of the car in front of you, so your foot spends more time hovering over the brake pedal. Basing behaviour on what’s happening in the here and now offers decisive advantages if you suddenly meet an obstacle.

Building on our analogy: in normal traffic you focus your thoughts on the horizon and drive predictively, yet you still brake and accelerate according to the taillights fifty metres ahead. If a child suddenly appears in front of you, it doesn’t matter how ‘predictive’ (or horizon-based) your driving is; you have to slam on the brakes. Translating this concept to software systems, braking is rule-based. Some disruptions are unpredictable; you simply can’t see them coming. This is where AI hits a glass ceiling, because it is always trained on models, and models can only be trained by running through situations time and again. That is impossible with incidents that happen out of the blue in production or control processes. These require deterministic responses.

As a result, many key decision-makers in technical areas approach the use of AI with a degree of caution, especially when it comes to autonomous production. Not without reason: if a model has not been trained properly, it could slip up and make erroneous decisions, or the machine learning algorithm could fail outright. Such situations are conceivable, although they’re relatively easy to prevent if you continuously monitor the plausibility of the results produced by AI running in the cloud. If something does appear erroneous, the application reacts accordingly and flags fog on the horizon. In other words: it’s time to revert to rules. This provides you with a reliable framework that protects the systems you need to control from damage.
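A plausibility check of this kind can be sketched as a simple guard function. The function name, the range-based check and the action strings are hypothetical illustrations, not GFT’s actual mechanism; the point is only that an implausible or missing AI result makes control fall back to the validated rule.

```python
def choose_action(ai_forecast, plausible_range, rule_based_action, ai_action):
    """Safety layer sketch: a result coming back from the cloud AI is
    checked for plausibility. Anything outside the expected range counts
    as 'fog on the horizon' and control reverts to the deterministic rule."""
    low, high = plausible_range
    if ai_forecast is None or not (low <= ai_forecast <= high):
        return rule_based_action      # fall back to the validated rule
    return ai_action(ai_forecast)     # AI result looks plausible: use it

# Plausible forecast: the AI's suggestion is followed.
print(choose_action(75, (0, 100), "hold_safe_setpoint",
                    lambda f: f"adjust_to_{f}"))   # → adjust_to_75

# Implausible forecast: the rule takes over.
print(choose_action(250, (0, 100), "hold_safe_setpoint",
                    lambda f: f"adjust_to_{f}"))   # → hold_safe_setpoint
```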

So how does this translate into front-line practice? How does knowing all this help with real applications? With most projects in this area, the standard procedure is to start by setting up a rule-based system. This is a safe bet, so for strategic and psychological reasons it’s not to be underestimated. The good thing about rules is that they can be validated: you can prod and tease a system until it malfunctions to see whether it reacts properly, much as you would test whether a car hits the brakes. This is comparable with the human brain stem, which operates certain mechanisms according to fundamental rules, such as breathing or the reflex that pulls your hand away from a hot stove. Once you’ve put such a system in place, you can rest assured that it’s safe to drive in fog.
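A rule layer of this kind, the ‘brain stem’ of the system, might look like the following sketch. The rules and thresholds are invented for illustration; the essential properties are that each response is deterministic and that a rule can be validated by feeding in exactly the fault it guards against.

```python
# Each rule pairs a condition on a reading with a deterministic response.
# (Thresholds and action names are illustrative, not from a real plant.)
rules = [
    (lambda reading: reading["temperature"] > 90, "shut_down"),
    (lambda reading: reading["pressure"] > 8.0, "open_relief_valve"),
]

def apply_rules(reading, default="continue"):
    """Return the response of the first rule whose condition fires,
    otherwise carry on as normal."""
    for condition, response in rules:
        if condition(reading):
            return response
    return default

# Validation: 'prod the system until it malfunctions' and check the reaction.
print(apply_rules({"temperature": 95, "pressure": 5.0}))  # → shut_down
print(apply_rules({"temperature": 20, "pressure": 9.5}))  # → open_relief_valve
print(apply_rules({"temperature": 20, "pressure": 5.0}))  # → continue
```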

You then add predictive driving, i.e. AI. The AI part of the system is given targets, such as ‘don’t exceed certain values’. To adhere to these targets, it gathers data from all of the machines, systems and other data sources it’s connected to and checks them every minute. Responsibility for coordinating data connections, evaluations, monitoring and forecasting lies with the solution mentioned at the beginning: the Model in the Middle. Its architecture allows it to link up digital twins of all of the data sources. These links are bi-directional: the data sources not only send data, they also receive and execute optimisation commands based on that data.
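The bi-directional twin arrangement might be sketched like this. The class and function names (`DigitalTwin`, `check_targets`), the load target and the command string are all hypothetical; the idea shown is simply that a twin mirrors values coming from a machine and can push optimisation commands back the other way whenever a periodic check finds a target being broken.

```python
class DigitalTwin:
    """Sketch of a bi-directional digital twin: it mirrors the machine's
    last reported value and can also send a command back to the machine."""

    def __init__(self, name):
        self.name = name
        self.last_value = None
        self.commands = []            # commands sent back to the real machine

    def receive(self, value):         # direction: machine -> twin
        self.last_value = value

    def send_command(self, command):  # direction: twin -> machine
        self.commands.append(command)

def check_targets(twins, max_total=100):
    """One per-minute cycle against a target such as 'don't exceed certain
    values': if the summed load breaks the target, push optimisation
    commands back through every twin."""
    total = sum(t.last_value or 0 for t in twins)
    if total > max_total:
        for t in twins:
            t.send_command("reduce_load")
    return total

press, oven = DigitalTwin("press_1"), DigitalTwin("oven_2")
press.receive(60)
oven.receive(55)
print(check_targets([press, oven], max_total=100))  # → 115, commands pushed
```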

This combined model can be applied to all sorts of scenarios, depending on the expert knowledge that underpins the rules and the targets given to the AI. There are hundreds of conceivable use cases that can be controlled with this approach. One example would be a machine shutting down automatically if it slips into fault mode or a hazard is imminent. Others include managing material supplies or entire shop floors, regulating the amount of energy needed in production, or facility management across a large company site. And the list goes on.
