Should We As Humanity Be Afraid of Artificial Intelligence?

In the last few years, new artificial intelligence tools have emerged that could threaten the survival of human civilization from an unexpected direction.

Artificial intelligence (AI) has caused fear and anxiety in people since the beginning of the computer age. Until now, these fears and concerns have generally been about the possibility that physical tools guided by artificial intelligence will destroy or enslave humanity. But in the last few years, new artificial intelligence tools have emerged that could threaten the survival of human civilization from an unexpected direction. AI has gained significant capabilities in processing and creating language, whether in words, sounds, or images.

Throughout history, stories have been the most effective tools for shaping people's feelings, thoughts, and beliefs toward a specific purpose. Although stories often contain abstract entities, surreal events, and exaggerated or distorted versions of reality, they leave an undeniable impact on the listener. Even today, larger stories called Islam, Christianity, or Judaism, which are essentially combinations of older mythological stories, continue to influence all of humanity.

So what happens when a non-human intelligence becomes better than the average human at things like telling stories, composing melodies, drawing pictures, and writing laws and scriptures? When people think of ChatGPT and other new artificial intelligence tools, the first things that come to mind are tasks such as preparing homework, articles, and applications. Their potential for producing fake and biased news, creating new stories, or even composing new sacred texts is often overlooked.

In the 17th century, René Descartes worried that a malevolent demon might be trapping him in a world of illusion, creating everything he saw and heard. In ancient Greece, Plato described the famous allegory of the cave, in which a group of people are chained facing a blank cave wall for their entire lives. The prisoners can see only shadows cast on the wall and take these illusions to be reality. In ancient India, Buddhist and Hindu sages taught that all humans are trapped in Maya, the world of illusions. The artificial intelligence revolution brings us face to face with Descartes' demon, Plato's cave, and Maya. If we are not careful, we as humanity may find ourselves stuck behind a curtain of illusion whose very existence we cannot perceive.

Of course, the new power of artificial intelligence can also be used for good: it can help us in countless ways, from finding new treatments for cancer to finding solutions to the ecological crisis. The real problem we face is how to ensure that new AI tools are used for good rather than evil. We have known since 1945 that nuclear technology can produce cheap energy for the benefit of humanity, but can also physically destroy human civilization. That is why we reshaped the entire international order to protect humanity and ensure that nuclear technology is used primarily for good. Now we may have to grapple with a new weapon of mass destruction, one that can destroy our mental and social world.

We can still shape how new AI tools are built and deployed, but we must move quickly. While nuclear weapons cannot invent more powerful nuclear bombs, AI can help create exponentially more powerful AIs. The first important step is to require stringent safety checks before powerful AI tools are released to the public. Just as a pharmaceutical company cannot launch a new drug without testing both its short- and long-term side effects, technology companies should not launch new AI tools until they are proven safe. We need an equivalent of the Food and Drug Administration for new technology. We must end the irresponsible deployment of AI tools in the public sphere and regulate AI before it regulates us.

By Mehmet Özkoç

Hi readers. I am from Turkey and one of the founders of this site. We launched the Adsenses site to provide information to our readers from all over the world. Stay tuned.
