Artificial intelligence: friend or foe?

Matthew Savoca
September 04, 2020
"Machine Learning & Artificial Intelligence" by mikemacmarketing is licensed under CC BY 2.0
"Machine Learning & Artificial Intelligence" by mikemacmarketing. Licensed under CC BY 2.0

Despite being a comparatively new science, advanced Artificial Intelligence (AI) is now found in most technology and is rapidly becoming a larger part of day-to-day life. From Google’s search algorithms to personal digital assistants like Siri and Alexa, AI unobtrusively yet directly makes the world a faster, smarter place. Roombas vacuum our floors and smart security systems keep our homes safe. Beyond domestic life, AI has steadily been incorporated into the workplace, where it boosts employee productivity and helps businesses hit their goals sooner and more effectively. AI can also handle high volumes of monotonous tasks across organizations, leaving workers free to focus on innovative solutions, complex problem solving and more compelling work. Advances in AI have left most of us confident and optimistic about what life may look like in the future.

This, however, all seems too good to be true. There is indeed another, darker side to the AI coin, one that could threaten both national security and personal privacy. Just as AI has spread into industries such as finance, social media and healthcare, leading world governments have applied it to war. Engineers have seamlessly integrated it into national defense systems and autonomous weapons, such as the robot soldiers developed by the Defense Advanced Research Projects Agency, better known as DARPA, or unmanned combat aerial vehicles (drones). Meanwhile, the U.S. government uses AI to closely monitor its citizens’ emails, text messages and phone calls to counteract domestic terrorism, at the expense of personal privacy. High-profile scientists and engineers such as Stephen Hawking and Elon Musk have cautioned us about the potential outcomes of advanced AI: if it falls into the wrong hands through ignorance or unethical practices, AI poses a grave threat, with the potential to turn society on its head.

"With AI still in its relative infancy, researchers and engineers in the field will need to perform a balancing act between breakthroughs that can greatly contribute to the development of advanced society and those that could invoke its downfall."

-- Matthew Savoca

Long before AI entered our daily lives, science fiction films and comic books warned us about its threat with characters like HAL 9000 from “2001: A Space Odyssey” or the Terminator. We are all familiar with the common trope of an artificial hyperintelligence becoming smarter than people and conquering humanity. To many of us, the idea of AI supremacy may seem far-fetched or too distant from the present to warrant immediate counteraction. However, AI has already begun subtly yet successfully influencing public discourse and economic choices through high-traffic media platforms like Facebook and YouTube.

Francois Chollet, a deep learning researcher at Google, has been outspoken about the unethical manipulation of human behavior by corporations and governments through AI. For example, social media companies keep records of all your interactions on their platforms and then curate the content you see, promoting ideas of the company’s choosing. Chollet asserts that, inside the social media bubble, users tend to believe the company’s ideas are more widespread than they actually are, a phenomenon termed “argument personalization.” Under argument personalization, users are more likely to agree with and follow the ideas the companies convey, driven by a subconscious anxiety about being considered a societal outsider if they hold opposing views and opinions.

In a blog post on Medium, Chollet breaks the argument personalization process down for us. To begin, media AI may observe that contact with specific content has caused a “desired” view change in users with similar psychological profiles. Chollet goes on to state, “the AI will then convey content that is optimally calculated for the most effective response from someone with particular views and life experiences.” This ultimately creates an environment where the targeted users only encounter opinions or beliefs that overlap with their own. As if echo chambers were not enough, over a long period of time AI is even capable of producing maximally effective, personalized, original content, amplifying misinformation, further obscuring the truth and deepening societal rifts. By using such tactics, these media companies effectively condition users toward a chosen belief system, gaining the social or political power that fuels their expansion and, ultimately, their profits.
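To make that loop concrete, here is a minimal, purely hypothetical sketch in Python of the process described above: record which content has shifted views among users with a similar profile, then serve each user the item with the largest recorded shift. Every name here (UserProfile, observed_shift, select_content) is an illustrative assumption, not any platform’s actual code.

```python
# Toy sketch of the personalization loop described above -- purely
# illustrative. Real platforms use far more sophisticated models; all
# names and numbers here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class UserProfile:
    """A coarse psychological/interest profile assigned to a user."""
    segment: str  # e.g. "segment_A"


# Step 1 (observation): track how much each piece of content shifted
# views, on average, for users in each profile segment.
observed_shift = defaultdict(dict)  # segment -> {content_id: avg_shift}
observed_shift["segment_A"] = {"post_1": 0.02, "post_2": 0.31, "post_3": 0.11}


def select_content(profile: UserProfile) -> str:
    """Step 2 (delivery): serve the item with the largest historical
    shift for users who look like this one."""
    scores = observed_shift.get(profile.segment, {})
    if not scores:
        return "fallback_content"
    return max(scores, key=scores.get)


# Because the same high-scoring items keep being served to the same
# segment, each user mostly sees content that already resonates with
# their profile -- the echo-chamber effect described above.
print(select_content(UserProfile(segment="segment_A")))  # -> "post_2"
```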

While we have benefited greatly from AI both at home and in the workplace, advanced AI is the contemporary embodiment of the double-edged sword. With each breakthrough in deep learning comes advancement that will profoundly impact our quality of life, for better or worse. With AI still in its relative infancy, researchers and engineers in the field will need to perform a balancing act between breakthroughs that can greatly contribute to the development of advanced society and those that could invoke its downfall.