Topic IA2IA – new column by Dr. Andreas Helget is online

11 May 2020

Autonomous Systems

Self-driving cars are well known. However, “autonomy” is also a major topic for the future in other areas of application, including the process industry. But what do we currently understand by “autonomous systems”, and where is this journey leading us?

We posed three questions in this regard to Dr Andreas Helget, Managing Director of Yokogawa Germany.

What is the difference between “highly automated” and “autonomous”?

Dr Andreas Helget: I’ll start with the technical term: to be autonomous means to be independent. In technology, autonomous means nothing more than the capability of controlling oneself. An autonomous system can act independently – this means without direct steering from humans and under conditions that have not been previously rehearsed.

So, an automated robot that operates in a controlled environment can mount a car’s windscreen at precisely the same location every time. An autonomous robot would perform these tasks as well, but this robot could also do them at locations unknown to it. This means: The area of application no longer needs to be uniquely defined in advance. In addition, the robot could select an alternative action or adapt its response if the windscreen were damaged, for example.

An autonomous system makes decisions independently

For an automated system in industry this means: Process changes are implemented by engineers or IT professionals. By contrast, an autonomous system would decide for itself when to implement which action in order to achieve a goal – without humans intervening in the decision.

There is a great deal of discussion between IT professionals and engineers as to whether autonomous systems are just a logical further development of automated systems. Automation follows prescribed processes in that it decides between uniquely specified, prescribed options. Even though this can be extremely complex, it is far from self-determined action. An autonomous system is in any event greater than the sum of its individual parts: It executes programmed processes, it responds to sensor impulses; but it is also able to tailor its behaviour through variable experiences, that is, to “learn”. This is the decisive step from automation to autonomous systems.
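The contrast Dr Helget draws can be sketched in a toy example. All names and thresholds below are purely illustrative, not drawn from any real control system: an automated controller only chooses among prescribed, fixed options, while an adaptive one also changes its own rule on the basis of experience.

```python
# Toy contrast between an automated and an adaptive ("learning") controller.
# Purely illustrative -- thresholds and actions are invented for this sketch.

class AutomatedController:
    """Chooses among fixed, prescribed options; nothing is ever learned."""

    def act(self, temperature: float) -> str:
        if temperature > 80.0:  # prescribed, hard-coded threshold
            return "open_valve"
        return "hold"


class AdaptiveController:
    """Also starts from a rule, but tunes its threshold from feedback."""

    def __init__(self) -> None:
        self.threshold = 80.0

    def act(self, temperature: float) -> str:
        return "open_valve" if temperature > self.threshold else "hold"

    def learn(self, overshoot: float) -> None:
        # Experience changes future behaviour: after an overshoot,
        # react a little earlier next time.
        self.threshold -= 0.1 * overshoot


automated = AutomatedController()
adaptive = AdaptiveController()
adaptive.learn(overshoot=5.0)   # threshold drops from 80.0 to 79.5
print(automated.act(79.8))      # "hold" -- the fixed rule never changes
print(adaptive.act(79.8))       # "open_valve" -- behaviour has adapted
```

The point of the sketch is only the structural difference: both controllers execute programmed rules, but only the second one modifies its own decision criterion, which is the step Dr Helget describes as decisive.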

How autonomous should an autonomous system be?

Dr Andreas Helget: Human fallibility is a major obstacle. To this extent, autonomous systems could be a blessing for us humans. Road traffic accidents remind us of this every day. But in the process industry as well, human error is still the most common cause of industrial accidents – despite the vast number of technological innovations.

If we take a look at the four most important areas of application, the answer is however different for every area. There is a completely different (lower) degree of acceptance for autonomous systems in road traffic and for smart homes when compared to industrial production. Acceptance in inhumane environments is particularly high: we need only think of incidents in explosive environments, of impassable terrain such as the desert through which pipelines are laid, or the crew of a Mars research shuttle.

Optimisation of quality, time and resources

Economic efficiency is key in industry – and will be the deciding criterion for the implementation and use of autonomous systems. The benefits that autonomy promises, namely optimisation of quality, time and resources, can be quantified in economic terms.

When it comes to considerations and trade-offs that need to be made – and there are quite a few besides acceptance, economic efficiency and safety – it is important to place humans and not technology at the centre. This would appear to be obvious, but it actually represents one of the biggest challenges. The capacity of a technical system to develop on the basis of its own performance leads to a situation where it determines and takes on ever more functions for itself. Does this mean that technical evolution will at some stage be superior to biological evolution, because a machine can learn faster, knows more and can adapt more easily?

Efficient AI must have access to context

Context plays a crucial role in the approximation of AI to human intelligence. Both humans and artificial intelligence rely heavily on context when making decisions. For example, if I ask a colleague to come to a meeting one floor lower, I obviously assume that he will get there by taking the stairs – and not by choosing the shorter route through the window. Efficient AI must also have access to context – in this case subconscious knowledge of the relationship between time, route and safety. That is, it must be provided with appropriate data.

Not a major problem in this straightforward example. But the data covering every conceivable option is effectively infinite, so it can never be captured in full. No matter how much linked information is available or how networked the data pool is that provides context and can be queried by algorithms in any conceivable direction, it will never be possible to achieve human context dimensions. Especially not while some skills, such as speech comprehension or facial recognition, cannot be entirely explained from a logical point of view.

Not least for this reason, AI requires limits defined by humans. This is where ethics comes in. Last year, EU experts sat together and developed guidelines for artificial intelligence – on which every autonomous system is based – to prevent undesirable developments. After all, a system’s degree of autonomy is not only subject to the technical limitations of AI. Legal framework conditions and requirements for data security also set relevant limits in this regard.

How safe is the autonomous system?

Dr Andreas Helget: Reasoning in this regard is as varied as this question is succinct, because “safe” has many dimensions. In my opinion, the German Ethics Council has already collated the most important considerations:

Who is responsible for the “actions” of autonomous machines if the user himself is not part of such decisions or is only marginally involved?

Based on which criteria should machines “make decisions” in case of conflict, and who stipulates these criteria?

How can we ensure appropriate handling of the large volumes of sensitive data that need to be collected and exchanged by autonomous systems so that they can function optimally?

How can we minimise the risk of abuse of such systems by others?

In terms of specific system-related application in industry, however, there is one clear-cut answer. This is because in process engineering we have always considered automation/availability separately from actual safety (and here we mean safety AND security). In process engineering, autonomous operation can be seamlessly integrated into industrial processes if the context is defined.

