As more people rely on AI bots to answer their questions, concerns are growing about how the owners of those bots shape their behavior, and what that means for the flow of information online. Those concerns follow reports that internal changes to Grok's code base had produced some troubling errors in its responses.
X's Grok chatbot drew attention last week. As seen in examples shared by journalist Matt Binder on Threads, one of many circulating at the time, Grok randomly began inserting information about South Africa into unrelated replies.
Grok Bot Manipulation Scandal
According to xAI, an unauthorized change was made to the Grok response bot's prompt on X on May 14 at around 3:15 am PST. The change directed Grok to give a specific response on a political topic, violating xAI's internal policies and core values. In other words, someone modified Grok's prompt, apparently to make the bot push irrelevant political claims about South Africa into its responses.
That alone raises concerns, and although the xAI team says it immediately put new procedures in place to detect and prevent similar changes (and to make Grok's controlling prompts more transparent), Grok began offering odd answers again later in the week.
This time, however, the source of the problem was easier to trace. On Tuesday of last week, Elon Musk responded to a user's complaint about Grok, saying it was "embarrassing" that his own chatbot had cited The Atlantic and the BBC as trustworthy news sources.
As you might expect, both outlets are among the many mainstream publishers that Musk has accused of reporting inaccurate information. Shortly afterward, Grok reportedly began telling users that it "keeps a level of skepticism" about some of the data and figures it may cite, because "numbers can be changed for political narratives."
The Illusion of AI Transparency
xAI's answer is transparency: Grok's code base is publicly accessible, which in theory lets anyone inspect and comment on changes. But that only works if people actually review those changes, and what is published may not be fully transparent.
X's code base is likewise openly available, though it is rarely updated. So it wouldn't be surprising to see xAI take a similar approach, pointing people to its open, accessible repository while only updating the published code when questions are raised.
That preserves secrecy while offering the appearance of transparency, and it still depends on no other employee quietly altering the code, something that, as the May 14 incident shows, remains entirely possible.
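To make the "relies on people actually reviewing it" point concrete, here is a minimal sketch of what that public review could look like in practice: a script that polls a published repository for its most recent commits via the GitHub REST API. The repository name below is an assumption used purely for illustration; substitute whichever repo a provider actually publishes.

```python
import requests

# Hypothetical example: the repository name is an assumption for illustration.
REPO = "xai-org/grok-prompts"
API_URL = f"https://api.github.com/repos/{REPO}/commits"


def recent_changes(limit: int = 5):
    """Return the most recent commits to the published repository."""
    response = requests.get(API_URL, params={"per_page": limit}, timeout=10)
    response.raise_for_status()
    return [
        {
            "sha": commit["sha"][:7],
            "date": commit["commit"]["author"]["date"],
            "message": commit["commit"]["message"].splitlines()[0],
        }
        for commit in response.json()
    ]


if __name__ == "__main__":
    # Print a short changelog so a reviewer can spot unexpected edits.
    for change in recent_changes():
        print(f'{change["date"]}  {change["sha"]}  {change["message"]}')
```

Of course, a script like this only surfaces what a provider chooses to publish; it can't reveal changes made outside the public repository, which is exactly the limitation described above.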
xAI is not alone in facing claims of bias, however. Google's Gemini and OpenAI's ChatGPT have both repeatedly refused or blocked political queries, and Meta's AI bot has faced similar accusations over its handling of political questions.
Those questions of online information control are carrying over into the next phase of the web, which is troubling given that more and more people are turning to AI tools for answers.
That matters all the more given that Elon Musk has pledged to abolish "woke" censorship, Mark Zuckerberg has shown a newfound affinity for right-wing opinions, and artificial intelligence is fast becoming an exciting new avenue for contextual information.
Google draws its answers from snippets of webpages, Meta from Facebook and Instagram posts, and xAI from posts on X. Each of those sourcing strategies has shortcomings, which is why AI responses shouldn't be trusted wholesale. When using these tools, it's essential to remember that what they offer is data matching, not intelligence.