For all the mystery, intrigue, and fascination with AI and the future, there are equal amounts of skepticism, worry, and outright fear. These feelings are not entirely unjustified as we learn more and more about AI and its development. Questions and concerns arise from technical, ethical, and philosophical angles.
We touched a little bit on this in the AI Art blog post in January, but I'm going to dive a little deeper into some of the main issues around AI, or more specifically, around its advancement. After all, no one wants a Skynet scenario!
The more ethically and philosophically charged issues around AI tend to get the most attention, letting other equally, if not more, important concerns go unnoticed by the public at large. One of these is the black box problem. Essentially, in the most basic explanation: input goes in, output comes out, but we have no idea what the heck is happening in between to produce that output. And that is a bit concerning.
Neural networks engaged in deep learning are built from neurons. Not the biological kind, the math kind. A neuron in a neural network (remember, it's an attempt to mimic how the human brain learns) is a mathematical function that collects and classifies information. Engineers and developers of AI systems can watch an AI take in input and spit out output, but they cannot always accurately say how that output happened. They don't know how all the individual neurons worked together to get there. In fact, it isn't always clear what any specific neuron is even doing on its own.
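To make that a bit more concrete, here is a minimal sketch of a single artificial neuron in Python. The inputs, weights, and bias are made-up numbers for illustration, not anything pulled from a real model:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs,
    squashed through an activation function (a sigmoid here)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))

# Three made-up inputs and weights; in a trained network, the
# weights are just numbers the training process settled on.
print(neuron([0.5, 0.1, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1))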
This is a pretty watered-down version of the problem, but the fact that we cannot fully understand how the AIs we're using arrive at their output is a big issue that needs resolving.
Now we come to the stuff we hear a lot about. In a world with ever-progressing AI, what role does humanity have? The truth is, there is a reality where AIs can do jobs we once thought impossible. With the advent of AI art, there could come a time when a paid human artist is the exception. Why pay someone when an AI can make what you want?
ChatGPT passed the bar exam. There is absolutely a world where one could be defended in court by an AI. The more an AI does, and is corrected, the better it gets. So it is not unreasonable to imagine an AI with perfect recall, housing the entirety of state and federal law with an extremely low error rate, as your lawyer. The jobs most at risk right now are administrative, customer-facing, and mechanical ones. We have already seen a huge reduction in labor at many automotive factories, and self-checkouts at stores are more and more common. A future like this doesn't mean people won't exist in these roles, but there would be far, far fewer.
What time will tell is how much we as humans value the human element over profit. Because ultimately that is what this will all come down to.
The other part of the ethical problem is that AI is not neutral. Developers strive diligently to make it so, but the fact of the matter is that AIs learn everything they do from datasets. And for something like ChatGPT, that dataset is largely the internet, and we all know the internet is a fair, balanced, and unbiased place of harmony, right? .... Riiight??
Just look at the disaster that was Microsoft's AI chatbot Tay. Basically, at launch, Tay announced herself to Twitter with a "Hello world!" tweet. Within a day she was pulled for spouting some incredibly hateful and sexually charged tweets. It was bad, really, really bad, and an extreme case, but there are other, less intense examples.
For instance, in 2019 researchers found that an algorithm used in US hospitals to predict which patients would require additional medical care favored white patients over Black patients by a considerable margin. In another example, independent research at Carnegie Mellon University in Pittsburgh revealed that Google's online advertising system displayed high-paying positions to men much more often than to women.
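Mechanically, the failure mode behind examples like these is simple. Here's a deliberately silly toy sketch in Python, not any real system's code: a "model" that just learns the most common outcome per group from a skewed, made-up dataset, and faithfully reproduces the skew:

```python
from collections import Counter

# A made-up, deliberately skewed dataset: outcomes correlate with
# a group attribute, and with nothing else.
training_data = (
    [("group_a", "approved")] * 90 + [("group_a", "denied")] * 10
    + [("group_b", "approved")] * 40 + [("group_b", "denied")] * 60
)

def train(data):
    """'Learn' the most common outcome per group -- a crude stand-in
    for the patterns a real model absorbs from its training data."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

print(train(training_data))  # {'group_a': 'approved', 'group_b': 'denied'}
```

No one wrote a "be unfair" rule anywhere in that code; the skew in the data is the rule. Real models are vastly more sophisticated, but the failure mode is the same.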
The nasty truth is that many systems that exist right now, the man-made ones, have biases and -isms embedded within them. These are reflected in the datasets that AIs use. Before we can fully realize AGI without DISASTROUS effects, we need to solve the black box problem, which in turn can help solve the bias problem. Keyword there being "can help": there still needs to be a lot of work in making sure the datasets are neutral and in understanding how the AI makes sense of it all.