Dmitry Medvedev, deputy chairman of Russia’s Security Council, called YandexGPT, an AI model developed by Russia’s largest search engine, a “terrible coward” that “does not answer quite ordinary questions,” the majority of which are related to the war in Ukraine.

Medvedev detailed his interactions with the artificial intelligence (AI) model in a Telegram post. According to him, the model replied that it “didn’t want to look stupid” when prompted with questions such as “When did the US adopt a law on the seizure of Russian assets?” and “Where are the monuments to Stepan Bandera located in Ukraine?” The latter refers to a controversial Ukrainian figure whom Moscow has invoked to justify its claims that Ukraine is run by neo-Nazis.

However, Medvedev said the model could answer other questions, such as those about Russian and international law, the location of Nelson’s Column in London, or the Pushkin monument in Russia.

He also asked YandexGPT the distance between Kyiv and Russia’s Belgorod; he said it answered correctly the first time but feigned ignorance when prompted again.

Medvedev said his questions were “neutral,” and YandexGPT’s inability to answer those questions “greatly undermines trust in Yandex and its products.” He said this provides a basis for recognizing Yandex, a Russian company that had previously complied with Moscow’s censorship requests, as a foreign agent. Censorship laws allow Moscow to ban groups it deems undesirable.


However, YandexGPT’s inability to answer the questions could be attributed to the Kremlin’s censorship tactics.

What’s YandexGPT?

YandexGPT is a generative AI model, akin to OpenAI’s ChatGPT in the West, that can process colossal amounts of data, generate text, and perform creative tasks in a way that resembles real-life human interaction.

Training such models generally takes multiple stages. The first is pretraining, in which a huge amount of data is fed to the AI model so that it builds a basic understanding of the world and learns the structure of language.


Then comes fine-tuning, a stage that requires more human intervention and steers the model toward desirable answers while weeding out inaccurate ones.
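The two stages can be illustrated with a deliberately toy sketch: a “pretrained” model that only learns word-to-word statistics from raw text, and a “fine-tuning” layer of human-curated answers that overrides it. The corpus, names, and curated answers below are invented for illustration and bear no relation to Yandex’s actual pipeline.

```python
from collections import defaultdict

# --- Stage 1: "pretraining" -- learn language structure from raw text ---
corpus = "the model learns the structure of the language from raw text".split()

bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1  # count which word tends to follow which

def base_model(word):
    """Predict the most likely next word purely from corpus statistics."""
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# --- Stage 2: "fine-tuning" -- human-curated answers override the base model ---
curated_answers = {"the": "structure"}  # hypothetical human-preferred completion

def finetuned_model(word):
    """Prefer curated answers; fall back to pretrained statistics."""
    return curated_answers.get(word, base_model(word))

print(base_model("the"))       # statistics-driven prediction
print(finetuned_model("the"))  # curated answer takes priority
```

The point of the sketch is the division of labor: the base statistics come from bulk data, while the override table stands in for the human-guided stage that shapes which answers the model actually gives.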

The AI model has been integrated into Yandex’s line of products, including its search engine and its virtual assistant Alice. It was through Alice that Medvedev interacted with the model, a service he complained about having to pay for.

Why can’t YandexGPT tell Medvedev where the Bandera monuments are in Ukraine? 

Given that Russia has largely isolated itself from the outside information world, even going so far as to create its own Wikipedia alternative, it is possible that YandexGPT could not tell Medvedev about the Bandera monuments either because of a lack of available data or because the tech company intentionally removed results it considered sensitive.

An AI model’s answers are determined by the data available to it at the time of training, as well as by rules the company lays out to block sensitive outputs such as racial slurs and offensive statements. Either factor might explain the model’s failure to answer questions about Bandera monuments in Ukraine.
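A rule layer of this kind typically sits in front of the model and can refuse a flagged topic even when the underlying system has the answer. The sketch below is hypothetical: the blocklist, stand-in knowledge base, and refusal wording are invented, and real guardrail systems are far more elaborate than keyword matching.

```python
# Hypothetical topics a deployer has flagged as sensitive (invented for illustration).
BLOCKED_TOPICS = {"bandera", "asset seizure"}

# A stand-in for the underlying model's knowledge -- not a real API.
KNOWLEDGE = {
    "where is nelson's column": "Trafalgar Square, London",
    "where are the bandera monuments": "several Ukrainian cities",
}

def answer(prompt: str) -> str:
    normalized = prompt.lower()
    # The rule layer runs *before* the model: flagged topics get a refusal,
    # regardless of whether the model could actually answer.
    if any(topic in normalized for topic in BLOCKED_TOPICS):
        return "I don't want to look stupid."
    return KNOWLEDGE.get(normalized, "I don't know.")

print(answer("Where is Nelson's Column"))         # answered normally
print(answer("Where are the Bandera monuments"))  # refused despite known data
```

This is why a refusal tells an outside observer nothing about whether the data is missing or merely withheld: both cases produce the same output.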


While it’s unclear what kind of censorship is built into the YandexGPT model, the company has a history of altering search results to ensure compliance with the Kremlin’s narratives.

In a 2023 source code leak, it was discovered that Yandex made sure not to return Russian President Vladimir Putin as the search result when prompted with phrases such as “Dick in a spacesuit,” “Grandpa in his bunker” or “Scumbag of all Rus,’” as reported by Russian news outlet Meduza.

In short, Medvedev has the Kremlin to blame for Yandex’s shortcomings.

What about the law on Washington’s seizure of frozen Russian assets?

Apart from being considered potentially sensitive, the answer might simply be too new for YandexGPT.

Generally, complex AI models are not updated with real-time data. In ChatGPT’s case, for instance, reports suggest its training data extends only to late 2023, while the asset-seizure law was signed in April this year.

YandexGPT 3, the company’s newest offering, was released in March this year, with the data cutoff point presumably earlier than that.
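The cutoff logic is simple enough to sketch directly. The check below assumes a hypothetical March 2024 cutoff (inferred from the release timing described above, not confirmed by Yandex); the exact signing day of the law is likewise assumed for illustration.

```python
from datetime import date

# Assumed cutoff for a model released in March 2024 (illustrative, not confirmed).
TRAINING_CUTOFF = date(2024, 3, 1)

# The law was signed in April 2024 per the article; the exact day is assumed here.
EVENTS = {
    "us law on seizure of russian assets": date(2024, 4, 24),
}

def knows_about(event: str) -> bool:
    """A model can only 'know' events that predate its training cutoff."""
    when = EVENTS.get(event.lower())
    return when is not None and when < TRAINING_CUTOFF

print(knows_about("US law on seizure of Russian assets"))  # too recent for the model
```

Under these assumptions, the model simply has no record of the law, so a non-answer here needs no censorship to explain it.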
