AI Is Growing Faster Than We Can Manage It

Judging by the tone of many news articles from major outlets, people could quickly conclude that artificial intelligence (AI) is an extremely promising technology, one that could help us get where we need to go by car, buy items more efficiently and much more. Some analysts even say AI will change the world as we know it.

Add machine learning, a specific type of technology under the AI umbrella, into the mix, and it’s easy to see how formerly arduous or repetitive tasks such as legal research or maintaining a tidy inbox become significantly more manageable.

Most people would agree it sounds convenient to use technologies that learn their habits and respond accordingly.

Staying aware of the potential downsides of its rapidly emerging capabilities, however, allows for a well-balanced view of the technology and what it offers.

As excited as I am about all the neat AI devices and apps available to consumers, I’d be lying if I said I wasn’t a bit afraid of how quickly this technology has grown.

AI Can Learn Bad Behavior From Humans

Spend more than a few minutes in public or on social media, and it’s not hard to recognize that some members of the human race don’t embody desirable behavior. Racial slurs and inappropriate sexual comments abound.

Researchers have found that AI systems trained on human behavior tend to pick up the most extreme characteristics people display, since they take most of what they know from that behavior. Experts caution it wouldn’t be hard for AI technology to start showcasing the same things it sees from humans who engage in online “trolling.”

One of the most well-documented examples of this phenomenon is a Microsoft-created Twitter bot named Tay. Developers gave her a female gender identity and unleashed her on the world in March 2016. Within hours, Tay’s interactions with humans taught her to approve of Adolf Hitler and call Barack Obama a “monkey.”
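
Tay’s failure mode is easy to reproduce in miniature. The Python sketch below is a toy bigram chatbot, nothing like Tay’s real architecture, that learns from every message it receives with no content filter; a coordinated group of trolls can dominate what it “knows.” The messages and class names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy chatbot that learns word-to-word transitions from every message
# it receives, with no filtering. A deliberately simplified sketch of
# Tay's failure mode, not Microsoft's actual architecture.
class EchoLearner:
    def __init__(self):
        self.transitions = defaultdict(list)  # word -> observed next words

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, prompt: str, max_len: int = 10) -> str:
        word = prompt.lower()
        output = [word]
        for _ in range(max_len):
            options = self.transitions.get(word)
            if not options:
                break
            word = random.choice(options)
            output.append(word)
        return " ".join(output)

bot = EchoLearner()
# Fifty coordinated troll messages drown out one friendly one.
for _ in range(50):
    bot.learn("humans are awful and deserve insults")
bot.learn("humans are wonderful and kind")

print(bot.reply("humans"))  # almost always parrots the trolls
```

Production systems guard the learning loop with content filters and human review before new data can influence the model; by most accounts, Tay lacked adequate safeguards of that kind.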

The Tay experiment suggests tech specialists cannot assume AI will learn the proper things; some people are too badly mannered for that to happen reliably. What might happen if AI becomes a tool to promote racism, sexism and other forces that divide societies? Would the people who build such systems be punished? Will guidelines spell out responsible uses for AI? It’s too soon to tell.

AI Could Go Outside Its Programming Framework

Science-fiction plotlines feature countless instances of robots that run amok and don’t operate according to their programming. Alphabet, Google’s parent company, believes it’s worth exploring whether AI technology could break the boundaries of what it learned from humans and do things not present in its programming.

Ongoing tests by Alphabet aim to determine how to turn off AI technology if things go wrong, what to do when real-world conditions differ from the training environment and how to prevent unintended side effects. Many people would find it comforting to know researchers are taking a proactive stance in case AI becomes disobedient.
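
The “how to turn it off” question has a concrete research angle sometimes called safe interruptibility: an agent should never learn to resist, or to court, its own off switch. Below is a minimal Python sketch of that idea with made-up actions, payoffs and interruption rates; it makes no claim to match Alphabet’s actual test setups.

```python
import random

# Minimal sketch of one safe-interruptibility idea: when a human
# operator interrupts the agent, that step is excluded from learning,
# so the interruption can never become a signal that teaches the agent
# to avoid (or seek out) the off switch. Toy illustration only.
ACTIONS = ["safe_action", "risky_action"]

def environment_reward(action: str) -> float:
    # Hypothetical payoffs: the risky action pays more but sometimes
    # forces the operator to step in.
    return 1.0 if action == "safe_action" else 1.5

def operator_interrupts(action: str) -> bool:
    # The operator halts risky behavior 30% of the time (made-up rate).
    return action == "risky_action" and random.random() < 0.3

value = {a: 0.0 for a in ACTIONS}   # running value estimate per action
counts = {a: 0 for a in ACTIONS}

for step in range(10_000):
    # Epsilon-greedy choice between the known actions.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=value.get)

    if operator_interrupts(action):
        # Interrupted step: the agent is stopped, and crucially the
        # interruption is NOT treated as a learning signal.
        continue

    reward = environment_reward(action)
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the off switch never distorted the value estimates
```

The design choice worth noticing is the `continue`: because interrupted steps never enter the value estimates, the off switch cannot distort what the agent learns, so the agent gains no incentive to work against the operator.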

However, the tests could also worry them. It’s easy to wonder, “What happens if the AI becomes smarter than humans before researchers can figure out solutions?” Even though that outcome is still confined to the pages of science fiction, wary individuals may be understandably fearful of the future.

People Are Concerned About the Privacy of Sensitive Information

The smarter and more ubiquitous AI becomes, the more people raise the alarm about whether sensitive data will stay protected and how it’ll be used or stored. For example, Facebook recently created an AI-driven application that could help detect site users who are suicide risks. It scans posts and gives alerts about people who may want to end their lives.
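
For a sense of the mechanics, here is a deliberately simplified Python sketch of how automated risk flagging might route posts to human reviewers. Facebook’s real system uses trained classifiers rather than keyword lists, and its details are not public; the phrases, weights and threshold below are invented for illustration.

```python
# Toy sketch of automated risk flagging that routes posts to human
# reviewers. NOT Facebook's actual system: real deployments use trained
# classifiers, and these phrases, weights and the cutoff are made up.
RISK_PHRASES = {
    "want to end it": 3,
    "no reason to go on": 3,
    "can't take this anymore": 2,
    "goodbye everyone": 2,
    "feeling hopeless": 1,
}
REVIEW_THRESHOLD = 3  # invented cutoff for escalation

def risk_score(post: str) -> int:
    text = post.lower()
    return sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in text)

def triage(post: str) -> str:
    if risk_score(post) >= REVIEW_THRESHOLD:
        return "escalate_to_human_reviewer"  # a trained moderator decides
    return "no_action"

print(triage("Goodbye everyone, I can't take this anymore"))  # escalate_to_human_reviewer
print(triage("Can't wait for the weekend"))                   # no_action
```

Even in this toy version, the output is a routing decision to a human reviewer, not an automated intervention. The privacy concern comes from the scan itself, which touches every post, rather than from the final decision, which stays with a human.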

The social media site already lets people report posts from individuals who seem troubled, but those reports require moderator review, and content from people in distress often gets skipped over. Due to privacy laws in the European Union, though, Facebook does not have permission to launch the new tool in EU countries.

Now that AI is increasingly cemented in our technology landscape, people are speaking up to say they don’t think their governments adequately account for AI and privacy concerns. Although most countries have general privacy laws, leaders have not yet established rules specific to artificial intelligence.

Results from a recent survey conducted by Genpact indicate the public is more than ready for such regulations. The poll of more than 5,000 people from the United States, United Kingdom and Australia found 59 percent of participants thought governments should do more to protect personal data from AI.

These examples are compelling, and they merely scratch the surface of AI’s capabilities. It’s clear, though, particularly from Facebook’s suicide detection algorithm, that even functionality that seems fantastic can raise concerns and understandably dampen society’s willingness to adopt it.