Artificial intelligence will (not) end the world

Many Wonderful Artists

Photo by Many Wonderful Artists, published on June 6, 2017. Public domain.

Perhaps saying that humanity has taken a liking to seeing the end of the world is not the best way to put its fascination with dystopian societies, post-apocalyptic scenarios, and doomsday events into words. But how else would one put it? There is an inherent attraction to literature and motion pictures that bring this allure to the page or the big screen; The Handmaid’s Tale, The Walking Dead, and The Stand each elicit an introspective view of society as a whole. And now, with the rise of artificial intelligence in the consumer market, there has never been a better time to begin a discussion of the future of artificial intelligence and the dangers that may follow its advent. Already, there are many predictions of the world ending through mechanical insurrection and systematic eradication, an apprehension portrayed in the critically acclaimed Terminator series. And with the creation of IBM Watson and Facebook’s language-creating agents, society’s fears have never seemed more justified. That is, if there were anything inherently dangerous in artificial intelligence in the first place. Artificial intelligence has been in steady development since the 1950s and has been a force of progress for many, not a force of mass destruction. Search engines, virtual assistants, and many other technologies take advantage of artificial intelligence to perform tasks no ordinary individual could hope to achieve. So where do these fears find their foundation? To answer that question, one must address the primary misconceptions upon which many argue the “inherently dangerous” nature of artificial intelligence.

Superficially, artificial intelligence is a system that can mimic the functions of human cognition to a point where it is indistinguishable from human thought to the common observer. It is, in essence, a programmable brain: a brain that can perceive the world through the lens of humanity, one that can learn through experience and by processing information, and, perhaps most controversially, one that can be programmed to be conscious. However, therein lies the first of two principal misconceptions. An artificial intelligence has no need for a consciousness, the awareness of self, to perform the tasks required of it by its creators. By allowing researchers to determine what the entity’s “self” is, there is greater control over the objectives of the artificial intelligence. Moreover, even if researchers were to attempt to project a consciousness onto an artificial intelligence, humanity currently has no efficient and realistic means of doing so. As such, teams of researchers at Google DeepMind and a pair of developers from Carnegie Mellon University have set out to emulate other cognitive functions: critical thinking and strategy, as well as claircognizant intuition, or the intuitive ability to predict the future. In the world of professional strategy games, artificial intelligences have handily beaten world-renowned professionals in their respective fields. Google’s own artificial intelligence, AlphaGo, beat Go master Lee Se-dol four games to one in a display of strategic prowess. Most recently, Libratus, an artificial intelligence developed by the aforementioned Carnegie Mellon University programmers, defeated “four of the world’s best professional poker players in a marathon 20-day poker competition” through emulated intuition, a major milestone in artificial intelligence research. The list of achievements does not end there: artificial intelligences have been able to diagnose patients even when professionals in the medical field could not.
More specifically, it was IBM Watson that achieved this feat, when “[the] artificial intelligence machine correctly diagnosed a 60-year-old woman’s rare form of leukemia within 10 minutes — a medical mystery that doctors had missed for months at the University of Tokyo”. All of this was achieved despite the absence of machine consciousness.

However, while an artificial intelligence may lack a consciousness, that alone may not hinder the development of an intelligence that far surpasses the capabilities of human cognition: a superintelligence. According to Nick Bostrom, the classification “superintelligence” holds that the intelligence must be able to “greatly outperform the best current human minds across many very general cognitive domains”. A superintelligence must also be able to recursively self-improve, which Yampolskiy defines as the ability of an intelligence to completely replace its original algorithm with an entirely different approach and, more importantly, to do so multiple times; at each stage, the newly created software should be better at optimizing future versions of the software than the original algorithm was. If that is the case, how would humanity stop such a superintelligence from ending the world? To answer simply: it could not; humanity would be doomed. But this poses the question: is artificial intelligence inevitably bound to hold ill intentions toward humanity? Nicholas Agar responds to claims made in support of this assumption by stating that humans will be able to solve the intelligence control problem before superintelligence can be fully realized. To justify his assertion, Agar makes the following argument:

In the Terminator movies, humans don’t get to approach a newly self-aware Skynet and request a do over. One minute Skynet is uncomplainingly complying with all human directives. The next, it’s nuking us. I suspect that we are likely to have plenty of opportunities for do overs in our attempts to make autonomous AIs. Autonomy is not an all-or-nothing proposition. The first machine agents are likely to be quite clumsy. They may be capable of forming goals in respect of their world but they won’t be particularly effective at implementing them. This gives us plenty of opportunity to tweak their programming as they travel the path from clumsy to sophisticated agency.

As mentioned in the previous paragraph, an artificial intelligence without a consciousness would be malleable in its intentions and could be conditioned to be beneficial to humanity. Alan Turing likened the intelligence’s pliability to that of a child: he asserted that developers should not attempt to program the intelligence to simulate the adult mind, in which all of the information in the world is contained, but should instead program the capacity to learn that information. In other words, they should simulate the mind of a child. This would allow developers to take charge of the “clumsy” intelligence’s moral development and to maintain its moral standing in favor of humanity’s survival.

I will allow that there are a number of uncertainties regarding the future of artificial intelligence, especially those concerning the development of an intelligence’s stance on the existence of humanity. However, there is much evidence to alleviate such uncertainties. But what is to become of humanity as a result of artificial intelligence in the near future? How far will its influence reach? Whom will it affect? Artificial intelligence has already begun to permeate society in ways that are generally positive, much as the internet did so many years ago. The ever-popular search engine Google employs artificial intelligence to provide its users with relevant search results and autocomplete suggestions; IBM provides its customers with services that use artificial intelligence to comb through terabytes of information in a matter of minutes and procure a specified result; and Tesla adopts a specialized form of artificial intelligence to give its customers autopilot functionality in their cars. Despite these benefits, however, one could argue that artificial intelligence could soon take the jobs of a large portion of society, causing unemployment rates to skyrocket as companies see fit to employ a cheaper and, admittedly, more efficient labor force: an artificial intelligence better suited for the job. It has been projected that at least 47 percent of U.S. jobs could be replaced by this automated labor force; currently, low-skilled service jobs, such as those in call centers and manufacturing, fall under the most risk. But as artificial intelligence continues to grow more sophisticated, even high-skilled jobs are at risk of being overtaken by machines: accounting, real estate, stock brokerage, financial advisory, and more.
But, as Jerry Kaplan puts it, “AI is simply a natural continuation of longstanding efforts to automate tasks, dating back at least to the start of the industrial revolution.” A shift from one labor force to another has happened before, and humanity has found ways to circumvent the problems that arose from such shifts, time and time again. That is not to say this transition will have the same outcome, but with enough preparation for the inevitable and with heavy investment in education, the United States may have a chance to curb the negative effects of this change.

To conclude, artificial intelligence is not inherently destructive to humanity. It holds no inherent malice toward mankind, as it can be conditioned to take only benevolent actions toward it. Artificial intelligence does not require a consciousness to perform the directives provided by its creators, nor will it, if developed properly, immediately determine that humanity must be eradicated. The only threat to humanity is itself; and if obstructing an imperative step in technological advancement becomes the only perceivable way to preserve the existence of mankind, then there is no telling what achievements humanity may never accomplish.