Is AI as magical as people think?

On April 14–15, the European IQ Media Conference on “Innovation in the Media” was organized by the Group Nice-Matin and the National and Kapodistrian University of Athens. A workshop led by Mr Jonathan Soma focused on the role of AI and how people can use it effectively. Mr Soma also emphasized to the audience that AI makes mistakes, which he grouped into three main types.

💡 Many people believe that AI is magical because it can do whatever you ask of it. In reality, however, it is a form of software that works purely on statistical probabilities. For example, if you ask it to write a text, it draws on the statistical patterns it learned from its training data and, word by word, predicts which word is most likely to follow.
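To make this concrete, here is a minimal, hypothetical Python sketch of next-word prediction: a toy model that counts word pairs in a tiny invented sample text and then proposes the statistically most likely continuation. Real language models use neural networks trained on enormous corpora, but the underlying idea of predicting the next word from learned probabilities is the same.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the huge text corpora real models learn from.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = following[word]
    best, freq = counts.most_common(1)[0]
    return best, freq / sum(counts.values())

word, prob = predict_next("the")
print(f"After 'the', the most likely next word is '{word}' (p = {prob:.2f})")
# After 'the', the most likely next word is 'cat' (p = 0.33)
```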

🤔 Every conversation with AI is different, so each response it gives to a question like ‘Hi, how are you?’ can vary. The reason for this is a setting called temperature. The higher the temperature value, the less predictable the AI’s response will be.
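As an illustration, here is a small hypothetical Python sketch of how temperature works: the model’s raw scores for candidate words are divided by the temperature before being turned into probabilities. At a low temperature the top word dominates and replies are nearly deterministic; at a high temperature the probabilities flatten out, so sampled replies vary far more. The candidate words and scores below are invented for the example.

```python
import math
import random

# Invented scores (logits) a model might assign to possible next words.
candidates = {"fine": 2.0, "great": 1.0, "sleepy": 0.5, "purple": -1.0}

def sample_with_temperature(scores, temperature):
    """Softmax the scores after dividing by temperature, then sample one word."""
    scaled = {word: s / temperature for word, s in scores.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {word: math.exp(v) / total for word, v in scaled.items()}
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0], probs

for t in (0.2, 1.0, 2.0):
    sampled, probs = sample_with_temperature(candidates, t)
    top = max(probs, key=probs.get)
    print(f"T={t}: p('{top}') = {probs[top]:.2f}, sampled: '{sampled}'")
```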

The failure modes that characterize AI are hallucinations, bias, and the low-quality outputs it often produces.

AI cannot be perfect or avoid hallucinations entirely; making mistakes is inherent to how these tools work. Anyone who uses them should therefore come to terms with the fact that mistakes will sometimes happen.

🛑 These tasks therefore fall into three categories:

1. The first includes situations where low-quality output can simply be ignored. For example, AI might confuse a name and treat it as female instead of male. Or people who speak English as a second language might ask AI to write a polished text for them and then accept the result blindly, using it without reviewing it.

However, AI tools that analyze such texts find that about 30% of the content is nonsense. Still, as human beings, we simply ignore the information we don’t like.

2. Cases with a limited margin for feedback: Journalists should not be afraid that AI will replace them, for two reasons. First, the feedback loops differ: a mistake AI makes is usually easy and quick to detect, whereas in journalism correcting an error follows a whole process. Someone must notice it, inform the journalist, often after the article has already been published, and only then is the correction made. Second, AI tends to produce texts that are dull, predictable, and boring compared to a human’s, since it chooses each next word on the basis of statistical probability.

3. Situations where a high error rate is expected: LLMs are built on statistical probability, and statistical probability does not always align with reality. When AI judges that a certain word is likely to come next in a sentence, that word may have nothing to do with the truth; the model is simply matching patterns it learned from its training data and using probabilities to predict the next word. As a result, AI hallucinates precisely because it relies on statistics. Moreover, LLMs cannot truly understand things like humor, since their responses are generated solely from the data they were trained on. Still, some of these mistakes, such as AI giving you the wrong menu at a restaurant, are not serious, because they do not directly affect a person’s life. In journalism, however, you cannot trust AI, because a single error in a text can completely change its meaning. The journalist’s job is directly tied to accuracy and truth.

✍️ Panagiotis Anastopoulos (Student at the University of Athens)

📸 Franck Fernandes (Nice-Matin)