This is why we need to regulate artificial intelligence

Sebastian Storvik and I disagree about whether artificial intelligence is a threat to democracy. While I believe the technology can challenge democracy and invites regulation, Storvik believes artificial intelligence is a salvation that must be unleashed. Storvik bases his argument on the claim that Western values such as freedom of speech, private property rights, and free markets will ensure the greatest innovation and thereby promote democracy.

Many hold that innovation means developing something new that is useful and actually put to use. This leads me to believe that Storvik conflates two phases: AI development (the invention phase) and commercialization (the innovation phase).

In this light, I have no problem with Western values as a good framework for developing something new, such as artificial intelligence. Western researchers have been working on artificial intelligence since 1955, and we have recently seen major breakthroughs in areas such as deep learning, (deep) neural networks, generative language models, and machine learning.

According to the Information Technology and Innovation Foundation, American researchers have largely led the development of artificial intelligence. The European Union has lagged behind, and China has largely caught up. This challenges Storvik's argument.



Big changes since November

Since November 2022, OpenAI has taken the world by storm with ChatGPT and GPT-4. With the launch of GPT-4, we have taken a big step closer to what is called artificial general intelligence (AGI), sometimes described as god-like artificial intelligence: a level of intelligence on par with that of humans.

Thus, there are many, myself included, who believe the risks and consequences of uncontrolled AI development are too great to ignore. Suffice it to point to the societal harm (violence, killing, extremism, polarization) caused by the algorithms of Twitter, YouTube and Facebook, which are optimized to trigger emotional responses and increase reach and usage, something Max Fisher documents elegantly in his book The Chaos Machine.

Two good arguments

The potential risks of AI, including its ability to mimic human decision-making processes, which can lead to undesirable consequences, and the risk of AI being used for malicious purposes, are two good arguments for requiring the technology to go through a quality assurance process before it is released into society.


Once uncertified AI has been released to the market, it is difficult to pull it back; the damage is already done. This can lead to "fake news" and the destabilization of political, legal and economic institutions, the cornerstones of democracy. Therefore, we need to create a space where AI is tested and approved before it is launched. There is a lot at stake, for all the wonders of generative AI technology.


