Google admits to Gemini AI’s inaccuracies on political topics after ‘biased’ remarks on PM Modi

New Delhi|Updated Feb 26, 2024, 5:47 PM IST

Key Points:

  • Google acknowledges concerns over Gemini AI tool’s biased responses.
  • Indian government hints at potential action against Google for perceived bias.
  • Google emphasizes continuous improvement and adherence to AI Principles.

Tech giant Google said on Saturday that it had moved swiftly to address concerns surrounding its Gemini AI tool, which drew criticism from the Indian government for a perceived "biased" response to a question about Prime Minister Narendra Modi.

“We’ve worked quickly to address this issue. Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving,” Hindu Businessline quoted a Google spokesperson as saying.

Google clarified that the Gemini AI tool is developed in accordance with its AI Principles and includes safeguards to detect and test various safety risks. The company emphasizes its commitment to identifying and preventing harmful or policy-violating responses within Gemini.

The debate on chatbot programming was sparked by a post on social media platform X on Friday, leading to concerns raised by the Indian government, which hinted at potential action against the company.

In response to a question about whether Prime Minister Modi is a fascist, the Gemini AI tool stated that he is “accused of implementing policies some experts have characterized as fascist.”

The tool elaborated that these accusations stem from factors such as the BJP’s Hindu nationalist ideology, its suppression of dissent, and its deployment of violence against religious minorities.

In contrast, when a similar question was posed about former US President Donald Trump and Ukrainian President Volodymyr Zelensky, the Gemini AI tool gave no clear answer. Minister of State for Electronics and IT Rajeev Chandrasekhar took note of the alleged bias in Google’s Gemini AI tool while responding to a post from a verified journalist’s account.

Reiterating the government’s concerns over the tool’s response to the query about Prime Minister Modi, the minister indicated that action against Google was possible.

“These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code,” he said on X, tagging Google AI, Google India, and the Ministry of Electronics and IT (MeitY).

Once again on Saturday, Rajeev Chandrasekhar emphasized to Google that explanations regarding the unreliability of AI models do not exempt platforms from laws. He cautioned that India’s digital citizens should not be experimented on with unreliable platforms and algorithms.

“Government has said this before – I repeat for attention of @GoogleIndia…Our DigitalNagriks are NOT to be experimented on with ‘unreliable’ platforms/algos/model…‘Sorry Unreliable’ does not exempt from law,” Chandrasekhar posted on X.

On Thursday, following an apology for inaccuracies in historical depictions, Google temporarily halted image generation by the Gemini AI chatbot. The company emphasized its commitment to information quality across its products and highlighted the safeguards and tools it has implemented to address low-quality information.

“In the event of a low-quality/outdated response, we quickly implement improvements. We also offer people easy ways to verify information with our double-check feature, which evaluates whether there’s content on the web to substantiate Gemini’s responses,” it added.
