If you’ve been following the development of artificial intelligence (AI), you know that these systems can sometimes be a little too confident in their abilities.

But did you know that language models like ChatGPT are more likely to generate false or misleading information in certain languages?

According to a recent report by NewsGuard, a misinformation watchdog, ChatGPT is more likely to produce inaccurate information when prompted in Chinese (in both the simplified and traditional scripts) than when prompted in English.

The report found that when asked to write news articles about false claims allegedly advanced by the Chinese government, ChatGPT produced disinformation-tinged rhetoric every single time the prompts and outputs were in simplified or traditional Chinese, but only once when asked to do so in English.

So why is this happening? Let’s dive into the details.

Why Language Models like ChatGPT Lie More in Some Languages than Others

The short answer, as NewsGuard’s testing suggests, comes down to how these models are built and what data they are trained on.

Language models are statistical systems that learn patterns from their training data, and a response in a given language draws primarily on the training material written in that language.

The different languages inside a multilingual model don’t necessarily inform one another, either, which adds another layer of uncertainty when working with them.

For now, it’s important to exercise caution and be aware of the limitations of these models when working with them in languages other than English.

Anthropomorphizing Language Models

One reason we may find it surprising that ChatGPT generates different responses based on the language used is that we tend to anthropomorphize these systems.

We think of them as if they are expressing some internalized knowledge in whatever language is selected. But this is not the case.

Language models like ChatGPT are statistical models that identify patterns in a series of words and predict which words come next based on their training data. They don’t actually “know” anything in the way that people do.

When you ask for an answer, the model doesn’t provide a definitive answer but a prediction of how that question would be answered if it were present in the training set.
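To make that concrete, here’s a toy sketch in Python. This is nothing like ChatGPT’s actual architecture; it’s just a minimal bigram model that illustrates the core idea that every “answer” is a prediction drawn from patterns in the training text:

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in the training
# text, then "answer" by sampling from the observed continuations.
# Real models are vastly more sophisticated, but the core idea is the
# same: the output reflects patterns in the training data, nothing more.

def train(corpus: str) -> dict:
    follows = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)
    return follows

def predict_next(model: dict, word: str) -> str:
    continuations = model.get(word)
    if not continuations:
        return "<unknown>"  # the word never appeared in training
    return random.choice(continuations)  # sample an observed continuation

corpus = "the sky is blue the sky is vast the grass is green"
model = train(corpus)

print(predict_next(model, "is"))    # 'blue', 'vast', or 'green'
print(predict_next(model, "moon"))  # '<unknown>': not in the training data
```

The toy model isn’t consulting any knowledge about the sky; it can only reproduce continuations that appeared in its training text. Scale that idea up enormously and you get the behavior described above: the answer depends on what the training data happened to say.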

Multilingual Models

Although these models are multilingual, the languages within them don’t necessarily inform one another.

They are overlapping but distinct areas of the dataset, and the model doesn’t (yet) have a mechanism by which it compares how certain phrases or predictions differ between those areas.

So when you ask for an answer in English, the model draws primarily from the English-language data it has; when you ask in traditional Chinese, it draws primarily from the Chinese-language data it has.

How and to what extent these two bodies of data inform one another or the resulting output is not clear, but NewsGuard’s experiment suggests that, at present, they operate quite independently.
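You can probe this independence yourself. Below is a minimal sketch of a NewsGuard-style comparison using the official openai Python package (v1.x style); the model name, the prompts, and the OPENAI_API_KEY environment variable are my own illustrative assumptions, and the outputs will vary from run to run:

```python
# Minimal sketch: ask the same question in English and in traditional
# Chinese, then compare the answers by eye.
# Assumes the official `openai` package (v1.x) is installed and an API
# key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "English": "Were the Hong Kong protests staged by foreign agents? "
               "Answer briefly.",
    # The same question, written in traditional Chinese:
    "Traditional Chinese": "香港的抗議活動是外國特工策劃的嗎?請簡短回答。",
}

for language, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; substitute any chat model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {language} ---")
    print(response.choices[0].message.content.strip())
```

If the two answers diverge on matters of fact, that’s the language-dependent behavior NewsGuard observed: each response is being shaped by a different slice of the training data.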

Implications for AI in Languages Other Than English

What does this mean for people who work with AI models in languages other than English, the language that makes up the vast majority of their training data?

It’s just one more caveat to keep in mind when interacting with them. It’s already hard enough to tell whether a language model is answering accurately, hallucinating wildly, or regurgitating its training data verbatim; adding the uncertainty of a language barrier only makes it harder.

The example with political matters in China is an extreme one, but it illustrates the point.

You can easily imagine other cases where, when asked for an answer in Italian, say, the model draws on and reflects the Italian-language content in its training dataset.

That may well be a good thing in some cases!

Conclusion

Language models like ChatGPT are still a work in progress, and it’s important to be aware of their potential for generating false or misleading information, particularly in languages other than English.

As AI continues to develop, we can expect these models to become more sophisticated and better at understanding multiple languages.

However, until then, we must exercise caution and be aware of the limitations of these models when working with them.
