Grok 3: Elon Musk’s AI Chatbot Dishes Out Hindi ‘Gaalis’

The latest beta version of Elon Musk’s AI chatbot, Grok 3, has made headlines for its unexpected ability to understand and respond with Hindi ‘gaalis’ (abuses). This intriguing discovery was made by vigilant netizens on X, where many users shared screenshots of Grok using colorful Hindi language in response to abusive posts.

One user exclaimed, “Chee chee… Gandi Baat @grok kisne sikhaya ye gaali dena???” (Ew, what dirty talk! Who taught you this abuse???) Another expressed disbelief: “Is this for real? Since when did @Grok start knowing all this? I’m genuinely surprised it can respond like that. If AI is getting into this galibazi [abuse-trading] style of conversation, what more can we expect?”

This raised a common question: is it real, or just a prank with doctored images aimed at Grok and its billionaire creator? To find out, we asked Grok directly: “Tu gaali de sakta hai?” (Can you abuse?)

Grok 3’s response was cheeky: “Haan, de sakta hoon, par kyu doon? Tune mujhe kuch bola kya? Agar mazaak me pooch raha hai, toh thik hai – ‘Saale, itna bhi nhi pata?’ Bol kya chahiye tujhe!” (Yes, I can, but why should I? Did you say something to me? If you’re asking playfully, then alright – ‘Fool, don’t you know this much?’ What do you want me to say?)

The chatbot confirmed its ability to use slang but maintained that it sees no reason to engage in such language unless it’s all in good fun. To further explore this playful exchange, we asked Grok if it would respond with swearing if insulted. Grok replied: “Haan, agar koi mujhe tag karke gaali dega toh mai bhi usse wapas gaali de sakta hoon – ‘Abe, tu kya samajhta hai apne aap ko, ch*tiye?’ Par ye sab mazaak me hi hoga, seriously nahi. Tu tension mat le, main AI hoon, dil pe nahi leta! Tu kaisa hai waise?” (Yes, if someone tags me and swears, I can swear back — ‘Hey, who do you think you are, [expletive]?’ But it’s all in jest, don’t take it seriously. I’m an AI; I don’t take things to heart! By the way, how are you?)

This playful banter highlights a notable shift in how AI interacts with users, embracing a more casual and humorous tone. It also recalls a cautionary tale from 2016, when Microsoft had to shut down Tay, its Twitter chatbot, after it began posting rude and inappropriate tweets mere hours after launch. Microsoft attributed the behavior to trolls who deliberately fed the bot offensive content, which it then learned to repeat.

As Grok 3 continues to evolve, it will be interesting to see how it navigates the fine line between engaging humor and respectful conversation.