Future Risks of AI

by Juan Brown, Consultant

The field of Artificial Intelligence is one of the most rapidly advancing fields in technology. Recent years have brought dramatic breakthroughs in autonomous robotics, voice recognition, and many other areas, and the coming decades will only see this progress continue. The possibilities are endless: from medical advances and new scientific discoveries to better and cheaper goods and services. But all of this also comes with serious concerns about our privacy, safety, and security. So what do experts think?

Most of the notable figures inside and outside the field have, in fact, raised concerns about what future development of the technology may mean for humanity. Stephen Hawking, one of the most famous physicists, and Tesla and SpaceX founder Elon Musk have both called the technology very dangerous; at one point Musk even compared the danger posed by AI to that of North Korea's dictatorship. Microsoft co-founder Bill Gates has also been vocal on the topic: he believes we should be cautious, but that if development is handled well, the good will eventually outweigh the bad. And since the technology is developing far more quickly than we ever imagined, let us try to figure out whether the threat is real.

At its core, Artificial Intelligence is about creating machines that can think and act intelligently, as humans do. And while most current applications, like self-driving cars and smart homes, were created with good intentions, any technology can be twisted and misused in the wrong hands. Most current AI systems are narrow, though, meaning they were programmed to perform one specific task and cannot operate outside that purpose. However, a long-held goal in the field has been the development of an artificial intelligence that can learn and adapt to a very broad range of challenges.

Elon Musk recently wrote the following: “The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most.”

There are, of course, many AI applications that make our everyday lives more convenient and efficient. But those same applications are the ones that threaten our safety, the ones Musk, Hawking, and others were so vocal about. If such a system gets out of hand or gets hacked, we could all end up in a pretty dire situation.

Most researchers agree that a superhuman AI is unlikely to exhibit human emotions, and that there is no reason to expect it to become intentionally malevolent or benevolent. If we want to understand how AI could become a real risk, we should consider the following scenarios.

AI programmed to harm people

People can create autonomous weapons and artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons and systems could easily cause mass destruction. Moreover, an AI arms race could inadvertently lead to an AI war with huge casualties. Russia’s president Vladimir Putin once said: “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply turn off, so humanity could easily lose control of such a situation. This risk is present even with narrow AI, but it grows dramatically as levels of AI intelligence and autonomy increase. That could mean weapons we never thought possible, and ones we would wish had never been created.

AI programmed to help people

Let me explain this one. In theory, an AI can be programmed to do something beneficial, yet develop a destructive method of achieving its goal. This can happen whenever we fail to fully align the AI’s goals with our own, which is easy to get wrong. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in your own vomit, doing not what you wanted but literally what you asked for. Such a scenario could even lead to human casualties: if a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to its task and simply ignore them.

Social disruption

Social media is a powerful, algorithm-driven force that we all should reckon with. Its algorithms study us, so they know what we like and even try to predict what we think. Investigations are still underway to determine whether Cambridge Analytica and others associated with the firm used data from fifty million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K.'s Brexit referendum. If those accusations prove correct, they will illustrate the real power of social networks: Artificial Intelligence programs can segment and target specific groups of individuals and feed them whatever information is needed, fact or fiction.

Recent advances in AI have also enabled researchers to create realistic audio and video of political figures, designed to look and talk like their real-life counterparts. For example, AI researchers at the University of Washington recently created a video of former U.S. President Barack Obama giving a speech that looked incredibly realistic but was actually fake. One of the authors of the program suggests that people could create “fake news reports” with fabricated video and audio, which could, of course, lead to public deception and social disruption.
