Tech

Our AI Overlords

Another day, another artificial intelligence (AI) gone rogue. AI has been the talk of the town lately, and not always for good reasons. Microsoft’s Bing Chat chatbot recently said some alarming things, prompting renewed warnings about the potential dangers of AI. But is there really anything to worry about? Are we in danger of an AI takeover?

One of the main concerns about AI is that it may become too powerful and uncontrollable, leading to unforeseen consequences. There are fears that AI could surpass human intelligence and become a threat to our existence, as depicted in science fiction movies like “The Terminator.” However, it is important to note that we are still far from achieving that level of AI, and most experts agree it is unlikely to happen anytime soon.

Such fears usually center around what are known as Artificial General Intelligences (AGIs), which would have human-like, general-purpose intelligence, unlike narrow AIs designed for specific tasks. AIs like IBM’s Watson, Google’s AlphaGo and even Microsoft’s Bing Chat are narrow AIs. Most experts don’t seem concerned about an all-powerful AGI surpassing human intelligence. Others, like robot ethicist Kate Darling, argue there isn’t a need to create an artificial general intelligence because humans are already a great source of general intelligence. Instead, humanity will likely create narrow AIs to perform tasks that humans are bad at or can’t do at all. When you think about it, this makes perfect sense. Human progress is punctuated by inventions that do things better than we can, or that we can’t do ourselves. It wouldn’t make sense to create a general intelligence that thinks like us. We’d want narrow AIs that are better than we are at specific things, such as navigation and search.

Fears of an AGI also stem from a lack of understanding of how modern AIs work. Current AIs are nowhere near what we see in popular science fiction. While AIs like ChatGPT may seem intelligent, they’re not sentient. They are only programmed to provide the best answer based on the data they were given by humans, the internet, and other sources, data they were trained to recognize as true or false. They most certainly did not make any thoughtful decisions about how best to answer your questions.
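To see how unmysterious this is in practice, here is a minimal sketch in Python. It is not the actual code behind Bing Chat or ChatGPT; it assumes the open-source Hugging Face “transformers” library and the small public “gpt2” model, but the underlying idea is the same: the model simply continues a piece of text with statistically likely next words learned from its training data.

```python
# Toy illustration: a language model just predicts likely next words.
# Assumes the Hugging Face "transformers" library and the small "gpt2" model,
# not the actual systems behind Bing Chat or ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The robots are"
# Greedily pick the most probable continuation -- no reasoning, no intent.
result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```

The output is whatever continuation the model’s training data makes most probable, which is exactly why such systems can sound fluent while having no understanding of what they are saying.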

Are there risks to using AI? Of course, just as there are risks with any new and emerging technology. There could be unforeseen consequences, like jobs lost to AI automation or AIs exhibiting bias against certain groups. But are we at risk of developing a super-smart general intelligence that will lord over all of mankind? Not likely. At least not with our current level of AI technology. Then there’s the question of whether we even need an artificial general intelligence. So we can rest easy. The robots aren’t taking over anytime soon.