Could we avoid psychopathic and sociopathic robots?

For all of us who follow the progress of Artificial Intelligence (AI) and robots with interest, the recent news of the victory of Google’s AlphaGo over grandmaster Lee Se-dol has made us wonder whether we face the evolutionary moment in which technology finally triumphs over humanity, and whether Google has just proved how unpredictable artificial intelligence can be.

To fear or not to fear Robots

No doubt this news will increase fears over the pervasive application of deep learning and AI in our future – fears famously expressed by Elon Musk as “our greatest existential threat” – and what to say about the fears of those who believe that machines will replace our white-collar jobs? If you work in marketing, perhaps you have already felt in your own flesh the fear of how machine-learning robots will replace human marketers.

After watching this video of Boston Dynamics’ Atlas, I am not sure that robots will not come to hate humans.

Why do we laugh at robot fail videos? It seems that we all fear, to some extent, the rise of robots and artificial intelligence, and it is possible that our fears are justified after news like “Most Advanced A.I. Robot Admits It Wants to Destroy Humans After Glitch During TV Interview”.

However, for those who believe that machines will help us become better people, this may open a ray of hope for a more just world.

Have you ever thought ahead about what our lives as retired workers will be like in the smart cities we are building? It looks like the British are aware, according to the article “Rise of the machines: A future the public sector can’t ignore”.

The challenge for our society is to think about robots and artificial intelligence not only in terms of productivity and profit but also in terms of other values, e.g. justice, opportunity, freedom and compassion.

Living with Robots today

Are we aware that we are living with robots today? It seems clear enough that there are already many robots among us. For instance, Hilton Hotels has a robot concierge that just wants to help, and you can check in with a velociraptor at Japan’s Henn-na Hotel, the first robot hotel.

This year’s CeBIT included the Dronemasters Summit, whose focus was on business applications for flying robots, used for example by energy companies to monitor the condition of overhead power lines and substations. But the crowd favorite at CeBIT had to be “Pepper,” a humanoid robot developed by the French company Aldebaran together with IBM, which can speak 20 languages, recognizes its interlocutor’s emotions from facial expressions, and will be used not only in Japanese temples of consumerism but also, in the near future, on German cruise ships.

Pepper is a robot designed for human interaction; it is intended to make people happy – to enhance people’s lives, facilitate relationships, have fun with people and connect them to the outside world. It can recognize faces, speak, hear, and move around autonomously. It understands basic human emotions, like happiness and sadness, and can identify speech, inflections and tones in our voices and use them to determine whether its human is in a good or a bad mood. It can also learn from its interactions, as its 25 sensors and cameras provide detailed information about the environment and the people it interacts with.

And there are many more cases, like the robot that serves food at a restaurant in Shenyang, or the robots introduced into primary-school lessons to show a more “humanistic” side of coding and to help attract more girls to the digital sector.

There is also the latest humanoid robot from Hanson Robotics. Love it or hate it, it’s coming. So is digital currency.

And who doubts that “Corporations Will Use Artificial Empathy to Sell Us More Shit”?

Are robots smarter than humans?

I read in the Guardian that when it comes to human-machine interactions, even the smartest AI is orders of magnitude more inflexible than the most intransigent human.

The question to ask should be: “When will robots become more intelligent than humans?” And it looks like we already have a date: 2029. Just read the opinions of those who think robots will not be smarter than humans by 2029, and of those who predict 2029 will be the year when robots gain the power to outsmart their makers.

“We will see robots and AI do the utterly unexpected, moving effortlessly beyond the limits of human imagination.”

Will Robots be our Managers and Bosses?

Surely there are people who would prefer a robot to a human boss. If a robot can do something a human can do, it is only a matter of time before it does it more cheaply and efficiently. Robots will develop a capacity to deal with emotions such as stress, fear and anger, and many will even be programmed to handle self-motivation. Although the technological limitations are disappearing, the social, moral and ethical ones remain – but will they be enough to persuade us to trust artificial intelligence? Or to accept a robot as a team member, or even as a manager? Will we be able to express our emotional concerns to a robot manager?

Perhaps many would prefer that we have the opportunity to choose: leave the robotic jobs to the robots, find more fulfilling work for humans to do, and in any case never obey a robot.

But the results of a 2014 experiment, carried out by James Young, an assistant professor at the University of Manitoba, and Derek Cormier, a graduate student in Human–Computer Interaction at the University of British Columbia, show that many people will follow robots placed in positions of authority in mundane daily tasks.

“When a good manager speaks, employees not only listen but act based on what is said. In at least some cases, robots may one day be the ones giving the instructions.”

Hitachi may be the first to create robot bosses: the company has deployed artificially intelligent bosses in its warehouses. It says productivity has already increased by 8 percent compared to one of its non-AI-run warehouses, and it hopes to expand “human and AI cooperation.”

So we had better be prepared for the rise of the robot bosses. Robots and software may soon be more than capable of doing your boss’s job, but the ultimate question may be whether you would choose to work for them.

Educating Robots as Humans

Robots and AI are often trained using a combination of logic and heuristics, and reinforcement learning. The logic and heuristics part has reasonably predictable results: we program the rules of the game or problem into the computer, as well as some human-expert guidelines, and then use the computer’s number-crunching power to think further ahead than humans can. This is how the early chess programs worked. While they played ugly chess, it was sufficient to win.
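The “logic plus number-crunching” recipe behind those early chess programs can be made concrete with a minimal sketch: encode the rules of a game, then let the machine search the tree of moves further ahead than a person would. The tic-tac-toe domain below is my own illustrative choice, not something from the original text:

```python
# Minimal illustration of the early game-program recipe: encode the rules,
# then brute-force search ahead. Toy domain: tic-tac-toe.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Search the full game tree; return (best score for X, best move)."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                      # draw
    best = None
    for m in moves:
        board[m] = player                   # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "                      # ...and undo it
        if best is None or (player == "X" and score > best[0]) or \
           (player == "O" and score < best[0]):
            best = (score, m)
    return best

# With perfect play by both sides, tic-tac-toe is a draw (score 0).
score, move = minimax([" "] * 9, "X")
```

Nothing here is “intelligent” in the human sense – which is exactly the point: the early programs played ugly chess because exhaustive lookahead, not understanding, did the work.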

The challenge of educating robots as humans is that “anything that is inherently human is always very difficult to translate into a computer.”

How should we educate robots – or should we say program them? – toward integrating robots and artificial intelligences into human society?

Harnessing artificial empathy is considered an essential step as it will allow for more fluid and affective human-robot interaction. Researchers are working to create robots and computers able to detect various shades of wit from their human companions, and to fire back in turn with their own wisecracks. Some specialists even see humor as the final frontier for artificial intelligence, because it requires mastery of sophisticated functions like self-awareness, empathy, spontaneity, and linguistic subtlety.
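At its crudest, “detecting shades” of a companion’s mood reduces to scoring signals for emotional valence. The deliberately naive keyword sketch below is only to make the idea concrete – real affective-computing systems use trained models over voice tone and facial features, and this word list is invented:

```python
# Deliberately naive sketch of valence detection: score an utterance by
# counting emotionally loaded words. The word lists are invented for
# illustration; real systems learn from prosody and facial features.

POSITIVE = {"great", "happy", "love", "wonderful", "thanks"}
NEGATIVE = {"sad", "angry", "hate", "terrible", "tired"}

def mood(utterance):
    """Classify an utterance as good mood, bad mood or neutral."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "good mood"
    if score < 0:
        return "bad mood"
    return "neutral"

assert mood("I love this, wonderful!") == "good mood"
assert mood("I am so tired and angry.") == "bad mood"
```

The gap between this toy and genuine empathy – let alone humor – is precisely why some specialists call humor the final frontier for artificial intelligence.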

There are several approaches to educating robots:

  • “Data-driven”, machine-learning approaches in which the machine is not constrained by human experience or expectations.
  • “Theory-driven” approaches that attempt to model mental processes in software. Abstract algorithms can mimic decision-making and other cognitive processes without worrying about how such processing occurs in the brain.
  • “Biology-driven” approaches are inspired by biologically realistic models that simulate actual neural processing in terms of electrical impulses, chemical messengers, synaptic connections and so on.
  • “Reinforcement-learning” approaches involve reward and punishment. Researchers believe the brain employs two distinct types of process in reinforcement-learning situations. One is a simple, rapid, habitual form that predicts the consequences of actions using expectations based on how often an action has been rewarded in the past. The difference between the predicted reward and the one actually obtained is a “reward prediction error,” which can be used to update expectations. The other is a slower, more deliberative form of goal-oriented control, which uses knowledge about the world to think through (often multiple) actions to assess probable consequences. This approach is more reliable, being able to rapidly adapt to changes in the environment, but is also much more intensive and costly.
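The rapid, habitual form of reinforcement learning described above can be sketched in a few lines: keep a running expectation of reward for each action, and nudge it by the reward prediction error. The two-lever bandit setup and the learning rate below are illustrative assumptions, not from the original text:

```python
import random

random.seed(42)

# Habitual reinforcement learning: maintain an expected reward per action
# and update it with the reward prediction error (RPE).
ALPHA = 0.1                                       # learning rate (illustrative)
expected = {"lever_a": 0.0, "lever_b": 0.0}       # the learner's expectations
true_reward = {"lever_a": 1.0, "lever_b": 0.2}    # hidden from the learner

for _ in range(500):
    action = random.choice(list(expected))         # explore at random
    reward = true_reward[action] + random.gauss(0, 0.05)
    prediction_error = reward - expected[action]   # the RPE signal
    expected[action] += ALPHA * prediction_error   # update the expectation

# After training, the expectations approach the true mean rewards,
# so the habitual system now "prefers" lever_a over lever_b.
```

The slower, goal-oriented form would instead plan over a learned model of the world before acting – more reliable under change, but far more costly per decision.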

How to teach ethics to Robots

In the next 10 to 20 years, robots will be doing everything from driving our cars to fighting our wars, and will be taking the place of humans in the most intimate of roles. As robots increasingly replace humans in some of the most commonplace functions of everyday life, they will need ethical guidance. Our overarching interest in robot ethics ought to be the practical one of preventing robots from doing harm, as well as preventing humans from unjustly avoiding responsibility for their actions.

There are at least three things we might mean by “ethics in robotics”: the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots.

The best known prescription for robots is the Three Laws of Robotics formulated by Isaac Asimov (1942):

  1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second law.

These laws can be considered the first steps of robot ethics, but they were laws for robot slaves: they were about preventing robots from doing harm – harm to people, to themselves, to property, to the environment, to people’s feelings, etc. Today, robots have grown in ability and complexity, and we need to develop more sophisticated safety-control systems that prevent the most obvious dangers and potential harms.
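As a thought experiment, Asimov’s laws can be read as a strict priority ordering over constraints: a proposed action is vetoed by the highest-priority law it violates. A toy sketch of that precedence check follows; the action flags are hypothetical placeholders, not a real safety model:

```python
# Toy encoding of Asimov's Three Laws as a strict priority ordering:
# an action is rejected by the first (highest-priority) law it violates.
# The boolean flags are hypothetical placeholders, not a real safety system.

def check_action(action):
    """Return None if the action is permitted, else the name of the violated law."""
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return "First Law"
    if action.get("disobeys_human_order") and not action.get("order_conflicts_first_law"):
        return "Second Law"
    if action.get("endangers_self") and not action.get("protecting_humans_or_obeying"):
        return "Third Law"
    return None

assert check_action({"harms_human": True}) == "First Law"
assert check_action({"disobeys_human_order": True}) == "Second Law"
# Disobeying is permitted when the order itself would violate the First Law:
assert check_action({"disobeys_human_order": True,
                     "order_conflicts_first_law": True}) is None
```

The sketch also shows why the laws fall short today: everything interesting is hidden inside predicates like “harms a human,” which is exactly the part that is hard to compute.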

As robots become more involved in the business of understanding and interpreting human actions, they will require greater social, emotional, and moral intelligence.

For robots that are capable of engaging in human social activities, and thereby capable of interfering in them, we might expect robots to behave morally towards people – not to lie, cheat or steal, etc. – even if we do not expect people to act morally towards robots.

Ultimately it may be necessary to also treat robots morally, but robots will not suddenly become moral agents. Rather, they will move slowly into jobs in which their actions have moral implications, require them to make moral determinations, and which would be aided by moral reasoning.

If we want robots to behave more like equals, robots will need to behave ethically and morally as we do. Unfortunately, ethics and morality are not reducible to heuristics or rules.

Will robots ever have the empathy and intrinsic morality of human beings?

Robots should not only be able to learn to imitate human empathetic and ethical behavior; they will have to acquire greater capabilities and ethical sophistication.

Avoiding the creation of psychopathic and sociopathic Robots

If we do not yet know what powerful psychological forces make good people do bad things, we cannot take the risk that good robots will do bad things.

If, as it seems, the future holds living with robots, and many humans will have a robot as a manager, boss or leader, we must prevent, starting now, companies from building sociopathic or psychopathic robots.

As I am not sure this can be achieved, given the morals and nature of enterprises, companies must have mechanisms to avoid putting robots devoid of morals and ethics at the forefront of processes that could endanger humans and other robots.

It is extremely important to open a debate on developing technology that forces companies to build robots capable of ethical and moral learning. It is also necessary to design and implement the controls needed to ensure that the most advanced robots are built according to new and more stringent laws of robot ethics and morals.

It is important to introduce into our universities the field of computational psychiatry (a discipline that brings together bedfellows from disparate departments) to allow humans to guarantee that robots are working ethically and morally. And we need companies to develop sophisticated tools for computational psychiatrists, who, like every psychiatrist, will want to know which treatment will work best for a given robot.

Whether you view it as ethical robots, or simply as robots with machine-augmented human cognition or human-assisted machine cognition, it comes back to one simple fact: to monitor and control robot behaviour, we will need trained people.

Thanks in advance for your Likes and Shares

Thoughts? Comments?


