Giving machines the will to survive will bring them closer to 'Strong AI' (and force us to adopt robotics laws)

Tuesday, November 12, 2019

In an article recently published in Nature Machine Intelligence, neuroscientists Kingson Man and António Damásio state what seems obvious: artificial intelligences lack feelings and, at best, can only aspire to simulate them artificially, because "they are not designed to represent the internal state of their processes in a way that allows them to experience that state in a mental space." So why should we give AIs feelings at all?

Man and Damásio are convinced, however, that there is a way to give robots feelings, at least indirectly: by 'implementing' a single drive, that of self-preservation. Their theory is that, from that starting point, an artificial intelligence would go on to develop those 'feelings' on its own, 'feelings' here being a name for the behaviours needed to ensure its own survival.

The idea is to simulate a biological property, homeostasis: the ability of organisms to keep themselves within the narrow range of conditions compatible with staying alive (a certain range of body temperatures, for example).

If we could teach machines which factors play that role in their own survival (connected cables, an adequate supply of electric current, and so on), we could give them self-preserving behaviour: that is, a sense of vulnerability that makes them feel 'fear' when those factors are threatened and 'comfort' when they are restored.
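To make that idea a bit more concrete, here is a minimal sketch in Python of what such a self-monitoring loop might look like. The variable names, acceptable ranges and 'distress'/'comfort' labels are invented for illustration; they are not taken from Man and Damásio's paper.

```python
# Illustrative sketch only: a toy "homeostasis monitor" for a hypothetical machine.
# All variable names, ranges and labels are assumptions made up for this example.

from dataclasses import dataclass

@dataclass
class VitalRange:
    low: float
    high: float

# Hypothetical internal variables the machine must keep within acceptable bounds,
# analogous to body temperature in a living organism.
VITALS = {
    "battery_charge": VitalRange(0.2, 1.0),    # fraction of full charge
    "core_temp_c":    VitalRange(10.0, 60.0),  # internal temperature in Celsius
    "link_quality":   VitalRange(0.5, 1.0),    # health of its network/cable connection
}

def assess_state(readings: dict) -> str:
    """Return a coarse 'affective' label based on how far the readings
    drift outside the acceptable homeostatic range."""
    threatened = [
        name for name, r in VITALS.items()
        if not (r.low <= readings.get(name, r.low) <= r.high)
    ]
    if threatened:
        # A threat to survival: restoring these variables becomes the priority.
        return f"distress: out-of-range {threatened}"
    return "comfort: all vitals within range"

if __name__ == "__main__":
    print(assess_state({"battery_charge": 0.1, "core_temp_c": 45.0, "link_quality": 0.9}))
    print(assess_state({"battery_charge": 0.8, "core_temp_c": 45.0, "link_quality": 0.9}))
```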

The road to intelligence through feeling

But even if we can all see the appeal of achieving this, why would these 'feelings' amount to a cognitive improvement capable of bringing artificial intelligence closer to general, human-level intelligence? Man and Damásio start from the premise that our own high-level cognition is itself a consequence of the human species adapting to solve the biological problem of homeostasis more efficiently.

The researchers are convinced that this is the ingredient needed to achieve an AI equivalent to a human one, in the sense that it would not be designed only for highly specialised tasks but could be deployed in all kinds of situations, including ones its programming never anticipated.

But perceiving threats to its own existence requires some understanding of its internal state, a certain form of self-awareness, and that can only be 'taught' by resorting to deep learning and artificial neural networks, which can detect and classify patterns in input data.

Thanks to these technologies, an AI could infer cause-and-effect relationships between its internal state and external conditions, just as today's systems already do between a particular movement of the lips and the sound produced when speaking. Knowledge of those relationships would be the basis of its 'feelings', leading it to behave creatively, seeking its own homeostasis (or that of those around it) rather than relying on pre-programmed responses for every eventuality.
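As a toy illustration of that kind of cause-and-effect learning, the following sketch (synthetic data, invented variables, plain gradient descent rather than a deep network; none of it taken from the paper) learns how an external condition, whether the charger is plugged in, affects an internal vital variable, and then uses the learned relationship to predict which situation keeps the machine within its comfortable range.

```python
# Illustrative sketch only: learning a cause-effect link between an external
# condition (charger plugged in) and an internal state variable (battery level).
# The data is synthetic and the whole setup is an assumption for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic experience: [current_battery, charger_plugged_in] -> battery one step later.
n = 1000
battery = rng.uniform(0.0, 1.0, size=n)
plugged = rng.integers(0, 2, size=n).astype(float)
next_battery = np.clip(
    battery + 0.1 * plugged - 0.05 * (1 - plugged) + rng.normal(0, 0.01, size=n),
    0.0, 1.0,
)

X = np.column_stack([battery, plugged, np.ones(n)])  # features plus a bias term
w = np.zeros(3)

# Plain gradient descent on mean squared error.
for _ in range(2000):
    pred = X @ w
    grad = 2 * X.T @ (pred - next_battery) / n
    w -= 0.1 * grad

# Having learned the relationship, the agent can ask which condition keeps its
# battery (and so its 'comfort') from draining further.
for plugged_in in (0.0, 1.0):
    forecast = np.array([0.25, plugged_in, 1.0]) @ w
    print(f"battery=0.25, plugged={int(plugged_in)} -> predicted next battery {forecast:.2f}")
```

A real system would use far richer sensors and a deep network, but the principle, learning the mapping from external conditions to internal state and acting on it, is the same.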


Legislating empathy

This takes us into territory reserved for Asimov's famous Laws, or for whatever norms we decide to adopt in the future for such cases, since we will obviously want to protect ourselves from certain unwanted consequences of AIs' will to self-preservation, especially in those with a physical presence, such as robots.

Recall that, in Asimov's fiction and that of so many authors after him, a robot's self-protection was subordinate to protecting human homeostasis and to obeying human orders.

Man and Damásio are convinced that we would not have to face any Skynet if we ensure that machines, "in addition to having access to their own feelings, can know the feelings of others, that is, are endowed with empathy." Thanks to that, the 'Man-Damásio Laws' for AIs (they do not call them that, of course) would boil down to two very specific, very brief orders: 1) feel good, and 2) feel empathy.
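One way to read those two orders, purely as an illustrative sketch and not as anything the authors formalise, is as a single objective that folds the well-being of others into the machine's own. The function name, the averaging and the empathy weight are all assumptions made for this example.

```python
# Illustrative sketch only: the two 'orders' read as one combined objective.
# The weighting scheme and well-being scores are invented for illustration.

def combined_wellbeing(own_wellbeing: float, others_wellbeing: list,
                       empathy_weight: float = 1.0) -> float:
    """'Feel good' is the agent's own homeostatic well-being; 'feel empathy'
    folds the average well-being of others into the same objective."""
    if not others_wellbeing:
        return own_wellbeing
    return own_wellbeing + empathy_weight * sum(others_wellbeing) / len(others_wellbeing)
```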

Among human beings, psychopaths are defined precisely by their inability to 'put themselves in another's place', that is, by the absence of something as basic to the rest of us as empathy. And yet we can find daily examples, by the thousand, of people who, despite feeling empathy, manage to harm their fellows, voluntarily or involuntarily, which means this 'legal' proposal inspires something less than enormous optimism.

The proposal to keep investigating the relationship between self-preservation and the development of general AI, however, seems more promising in light of their research.
