Is AI Moving Too Fast?



It’s almost 2020 and technology is being dominated by artificial intelligence. However, is it moving too fast? Read the opinions of two students.

Artificial intelligence is now all around us every day: in recommended playlists and videos, facial recognition and self-driving cars. It’s growing faster each day, but shouldn’t we regulate the field of AI development before it becomes too late? Could AI be moving so fast that it could spell the end of our civilisation?

In computer science, AI is the intelligence demonstrated by a machine; it includes processes such as learning, reasoning and self-correction. The AI of today is known as narrow AI (or weak AI): it is designed and trained to handle one particular task.

However, the long-term goal of many researchers is to create a general AI (strong AI): something far more sophisticated that could cope with multiple tasks and would even surpass humans in many areas. The theoretical point at which machine intelligence overtakes our own is called the technological singularity.

AI is making the headlines because of how fast the field is moving. Merely five to ten years ago, we could never have predicted the point we are at now. Completely unexpected milestones have been reached, making many experts take seriously the possibility of creating a strong AI within our lifetime.

During the National Governors Association’s 2017 summer conference, the CEO of Tesla and SpaceX, Elon Musk, shared his concerns about artificial intelligence. He thinks that people are not as afraid of the potential of AI as they should be, because they don’t fully understand its capability.

If evolution has taught us one thing, it’s that a species will do whatever is in its power to survive and reproduce. There’s no reason why an AI would be any different. Our planet contains limited resources, and if an AI deemed the entire human race an unnecessary consumer of those precious resources, it could decide to terminate us to free up all those lovely resources and energy.

An AI could be so much more intelligent than us that the chance of it dehumanising us and seeing us more as a pest is very high. This would erase any moral guilt it might otherwise have about committing genocide.

Once it’s invented it can’t be “un-invented”; therefore, the field of AI development must be regulated, because its evolution could pose a danger to our civilisation.

BY JOSHUA EMPTOZ

We only have to look to the movie screens to see our main issue with AI. Take films such as Terminator or The Matrix to see where society fears a rapid development in AI may take us. That is all on the movie screens, yet some think it could happen, especially when we consider machines that are able to learn and adapt themselves. But AI isn’t all about robots taking over the world and bringing immense destruction to our planet. Instead, we should be looking at the here and now.

Developments in the sector mean that it is all changing faster than most of us thought. In November 2018, Elon Musk (CEO of Tesla Motors) himself spoke up about the increasing abilities of AI. After someone took to Twitter stating ‘we dead’ beneath a video from inside the Boston Dynamics workshop, Musk responded with ‘This is nothing. In a few years, that bot will move so fast you’ll need a strobe light to see it. Sweet dreams…’ Musk has good reason to be nervous, though. You only have to look at today’s AI.

The main example that could cause issues is drones. They may not be your first thought when it comes to artificial intelligence, but many are being built. One of the most dangerous may be technology like Shield AI’s. The company is based in San Diego and prides itself on drones that work together, using its ‘hivemind’ system to learn, adapt and respond in unison with one another in combat situations. The company aims to help organisations such as the military when they need to scout hazardous areas ahead and gain the necessary ground information.

I do not believe Shield AI intends to sell its product to opposition groups, but the rapid improvement of AI means that copies will be made as soon as similar products reach the everyday market. The upper hand our authorities may have now, and in the near future, will soon be gone, leaving us in a constant and dangerous battle.

In conclusion, the problem is not that AI itself is moving too fast; it is the increasingly easy access that others will have to it as it becomes the norm.

BY ELLA TALBOT