Why self-censorship of political speech could be the future normal for AI platforms
- March 16, 2024
- Posted by: OptimizeIAS Team
- Category: DPN Topics
Subject: Science and tech
Section: Awareness in IT and computer
Context:
- As India heads to Lok Sabha elections, Google has said it will restrict the types of election-related questions users can ask its artificial intelligence (AI) chatbot Gemini in the country.
More on news:
- Earlier, Krutrim, the chatbot developed by an Indian AI startup founded by Bhavish Aggarwal of Ola, had been found to self-censor on certain keywords.
- Ola had seemingly applied algorithmic filters to ensure that Krutrim beta did not produce results for queries that included keywords such as Narendra Modi, BJP, and Rahul Gandhi.
- After AI platforms such as India’s own Krutrim drew criticism for their answers to political questions, the government chose instead to advise the companies to fine-tune their systems.
What is “code-level censorship”?
- Essentially, these companies have written code so that whenever a user’s question contains certain keywords, the platform does not query the underlying foundation model, which holds the potential answer, but instead returns a predetermined response saying it is unable to answer that particular question.
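The filtering described above can be sketched in a few lines. This is a minimal illustration only, not Google’s or Ola’s actual code; the keyword list, function names, and canned message are all placeholders.

```python
# Hypothetical sketch of code-level keyword censorship:
# the blocklist check runs BEFORE the query reaches the foundation model.

BLOCKED_KEYWORDS = {"keyword_a", "keyword_b"}  # placeholder terms

CANNED_RESPONSE = "I'm not able to respond to that question."


def query_foundation_model(prompt: str) -> str:
    # Stand-in for the call to the underlying foundation model.
    return f"model answer for: {prompt}"


def handle_query(prompt: str) -> str:
    # If any blocked keyword appears in the query, skip the model
    # entirely and return the predetermined refusal.
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return CANNED_RESPONSE
    return query_foundation_model(prompt)
```

Because the check happens at the application layer, the foundation model itself is never consulted for blocked queries, which is why such filters are cheap to add but blunt in effect.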
What is the specific background of Google’s decision?
- Google’s AI platform has been under fire in recent weeks over the various responses it has generated.
- The company apologized for what it said were “inaccuracies in some historical image generation depictions” after Gemini depicted white figures (such as the founding fathers of the United States) or groups like Nazi-era German soldiers as people of color.