Friday, June 15, 2018

Ethical Machines

The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.
— Stephen Hawking



You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence.

Imagine a more intelligent smartphone embedded in your brain.

Hyper-intelligent software may not necessarily decide to support the continued existence of humanity, and it would be extremely difficult to stop. This is a real source of risk to civilization, to humans, and to planet Earth.

There are three philosophical questions related to AI:
1. Is artificial general intelligence possible? Can a machine solve any problem that a human being can solve using intelligence, or are there hard limits to what a machine can accomplish?
2. Are intelligent machines dangerous? How can we ensure that machines behave ethically and that they are used ethically?
3. Can a machine have a mind, consciousness and mental states in exactly the same sense that human beings do? Can a machine be sentient, and thus deserve certain rights? Can a machine intentionally cause harm?

The development of militarized artificial intelligence is a related concern. Over 50 countries, including the United States, China, Russia, and the United Kingdom, are currently researching battlefield robots. Many people concerned about the risk from superintelligent AI want to limit the use of artificial soldiers.

Wallach describes the ethics of artificial intelligence as being guided by two central questions:
1. Does humanity want computers making moral decisions?
2. Can robots really be moral?

At present, businesses are deploying AI for profit in drones, driverless cars, medical diagnosis, and financial apps. Why stop the momentum before the ethical concerns are adequately understood, articulated, and crystallized?
 
In conclusion, apply the three-box framework to your AI business. The Three-Box Framework, first introduced by Vijay Govindarajan, makes leading innovation easier (a small sketch of one way to organize this follows the list):
Box 1 (the present): manage the core business at peak profitability.
Box 2 (the past): abandon ideas, practices, and attitudes that could inhibit innovation.
Box 3 (the future): convert breakthrough ideas into new products and businesses.
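To make the framework concrete, here is a minimal Python sketch of a three-box review of an AI portfolio. The box labels come from Govindarajan's list above; the Initiative class and every portfolio entry are hypothetical names invented for illustration, not part of the framework itself.

from dataclasses import dataclass
from enum import Enum

class Box(Enum):
    PRESENT = 1  # Box 1: manage the core business at peak profitability
    PAST = 2     # Box 2: abandon practices that inhibit innovation
    FUTURE = 3   # Box 3: convert breakthrough ideas into new businesses

@dataclass
class Initiative:
    name: str
    box: Box

# Hypothetical AI portfolio; the entries are invented examples.
portfolio = [
    Initiative("Tune the deployed fraud-detection model", Box.PRESENT),
    Initiative("Retire the legacy rule-based scoring system", Box.PAST),
    Initiative("Prototype a drone-based inspection service", Box.FUTURE),
]

# A three-box review simply groups initiatives so each box gets
# its own agenda: run it, retire it, or invest in it.
for box in Box:
    items = [i.name for i in portfolio if i.box is box]
    print(f"Box {box.value} ({box.name.title()}): {items}")

Running the loop prints one line per box, which is the whole point of the exercise: the review forces you to say explicitly what you are running, what you are abandoning, and what you are building next.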

Humans Remain the Biggest Ethical Risk

See You at the Top
