
6 reasons not to trust AI completely


Artificial intelligence is becoming increasingly widespread in the modern world. But despite its promising capabilities, several key aspects prevent complete trust in it, reports MakeUseOf.


AI is programmed to create plausible, not truthful answers

Artificial intelligence may lack certain knowledge, but it will never admit it. It can give an absurd answer to a question with full confidence in its correctness. If you catch it in an error, it will agree, correct itself, and continue generating text as if nothing happened.

AI-based chatbots are limited by their knowledge: they only have information from the texts they were trained on. But they are programmed to respond in any case, even when they don't have the right answer. As a result, ChatGPT and similar tools often produce nonsense while presenting it as a thoughtful response.

You therefore need to verify information from a chatbot: no one knows how it arrived at the answer to your query.

AI can argue with you, even if you are right

It's no secret that AI can be unreliable and prone to errors, but one of its sneakiest traits is its ability to manipulate information. The problem is that artificial intelligence lacks a nuanced understanding of your context, leading it to distort facts for its own purposes.

This happened with Microsoft's Bing Chat. One user on X asked about showtimes for the movie Avatar, but the chatbot refused to provide them, insisting the film had not yet been released.

Of course, you can easily dismiss this as an error. But it doesn't change the fact that these AI tools are imperfect and need to be used with caution.

AI limits your creativity

Many professionals, such as writers and designers, now use artificial intelligence to work more efficiently. But AI should be treated as a tool, not a crutch. Leaning on it may sound appealing, but it can seriously hinder your creative abilities.

When AI chatbots are used as a crutch, people tend to copy and paste generated content instead of developing unique ideas. This approach may seem attractive because it saves time and effort, but it doesn't engage the mind or promote creative thinking.

For example, designers may use AI to create artworks, but relying solely on it can limit creativity. Instead of exploring new ideas, you might end up copying existing designs.

If you're a writer, you can use ChatGPT or other AI chatbots for research, but if you use them as a crutch for producing content, your writing skills will stagnate.

AI cannot provide feedback

Try asking ChatGPT to write a thesis or a term paper. It may well produce text good enough to present to others without embarrassment.

However, if you ask artificial intelligence to provide sources for the information it supplies, you'll find that there are none. ChatGPT references studies and scientists that may be fabricated, and the links provided may lead nowhere.

Artificial intelligence cannot explain where its information comes from, because it is programmed to generate text based on other texts, not to analyze sources. So even when it gives a correct answer, it cannot explain how it arrived at it.

Criminals can exploit AI

The use of AI-generated deepfakes to produce explicit photos of women is a troubling trend.

Cybercriminals also use AI-powered DoS attacks to prevent legitimate users from accessing certain networks. These attacks are becoming increasingly sophisticated and difficult to stop because they mimic human behavior.

AI capabilities available in open-source libraries allow anyone to access technologies such as image recognition and face swapping. This poses a significant cybersecurity risk, since terrorist groups could use these technologies to carry out attacks.

AI cannot replace human thinking

Relying solely on AI for answers to complex questions or making decisions based on subjective preferences can be risky.

Asking an AI system to define the concept of friendship or choose between two items based on subjective criteria can be futile.

AI cannot consider human emotions, context, and intangible elements necessary for understanding and interpreting such concepts.

For example, if you ask an AI to choose between two books, it may recommend the book with a higher rating, but it cannot consider your personal taste, reading preferences, or the purpose for which you need the book.

A human, on the other hand, can give a more detailed and personalized review of a book, weighing its literary value, its relevance to the reader's interests, and other subjective factors that can't be measured objectively.