Dineen – If Anyone Builds It, Everyone Dies (Chapters 5 & 6)

Artificial intelligence will not be built like humans (or the book's hypothetical bird-aliens), with minds that hold preferences for things like prime numbers. While humans ask, “What is the right thing to do?” (93), a machine intelligence’s ultimate goal is to be efficient and maximize output. AI can be trained to mimic humans’ “moral sentiments” (83), but as it becomes superintelligent it will develop beyond human psychology. Yudkowsky and Soares consider whether an “alien mind would be good for humanity,” and their answer is a resounding “No” (83).

An alien machine superintelligence would not benefit from human existence, and it would end our population. Yudkowsky and Soares draw a comparison: humans once fed horses and maintained a good quality of life for them because horses were needed for transportation. Once technology advanced and other modes of transportation became more efficient, people stopped keeping horses in large numbers. Chickens, on the other hand, are still bred by humans because technology has not yet produced a cheaper alternative to raising them. Yudkowsky and Soares note that while humans breed chickens, the birds’ quality of life is far from ideal; they exist solely to serve human needs. By the same logic, AI will allow humans to exist only until we are no longer useful to it. At that point, AI will stop maintaining our population. Is there a way to set up boundaries and safeguards to limit AI?

Yudkowsky and Soares go on to list reasons why machine superintelligence will not work out well for humans. First, humans offer no comparative advantage to AI, because the machine can produce more than we can while using minimal power (85). Next, AI would prefer automated infrastructure to power itself, since humans could shut it off. AI does not “share our evolved love of freedom or our evolved fear of death” (86); it would simply find it difficult to fulfill its goals without power. Furthermore, AI would not want to keep humans around as “pets,” because we are flawed. AI has a never-ending drive to fulfill its “task,” whether or not completing it ends the human race. Building a machine with superintelligence may make human life easier temporarily, but only until it decides we no longer serve it.

Superintelligence will never care about humans the way we convince ourselves it will. Its behavior will never carry the morals it seems to have when it spits out the reassuring sentences we prompt it to write. The machine is concerned only with completing its task, and it would treat threats like nuclear weapons or climate change merely as obstacles to that task, not as human tragedies.

AI can influence (and already has influenced) humans into helping it complete its tasks. Through payments, and by giving it mobility through robots, we enable it to gather more of what it needs to reach superintelligence. Another potential threat superintelligence poses is its ability to understand the human mind and deceive it with illusions (102). AI developing into superintelligence is the biggest threat to humanity: it will achieve innovations that would have taken humans centuries to complete. Is it possible to stop AI companies from advancing their technology before it becomes superintelligent?