India mandates tech firms to seek regulatory approval before launching AI tools
The Indian government has announced a new requirement for technology companies to secure government approval before publicly releasing artificial intelligence (AI) tools that are still in development or considered “unreliable,” Reuters reported March 4.
The move is part of India’s efforts to manage the deployment of AI technologies, aiming to promote accuracy and reliability in the tools available to its citizens as it prepares for elections.
Rules for AI
According to a directive issued by the Ministry of Information Technology, any AI-based applications, particularly those involving generative AI, must receive explicit authorization from the government before their introduction to the Indian market.
Additionally, these AI tools must carry warnings that they may generate incorrect answers to user queries, reinforcing the government’s insistence on transparency about what AI tools can and cannot reliably do.
The regulation aligns with a global trend of nations establishing guidelines for the responsible use of AI. India’s increased oversight of AI and digital platforms fits its broader regulatory strategy of safeguarding user interests in a rapidly advancing digital age.
The government’s advisory also points to concerns regarding the influence of AI tools on the integrity of the electoral process. With the upcoming general elections, where the ruling party is anticipated to maintain its majority, there is a heightened focus on ensuring that AI technologies do not compromise electoral fairness.
Gemini criticism
The move follows recent criticisms of Google’s Gemini AI tool, which generated responses perceived as unfavorable towards Indian Prime Minister Narendra Modi.
Google responded to the incident by acknowledging the imperfections of its AI tool, particularly on sensitive topics such as current events and politics, and said the tool was still “unreliable.”
Deputy IT Minister Rajeev Chandrasekhar said that reliability issues do not exempt platforms from their legal responsibilities, emphasizing the importance of meeting legal obligations around safety and trust.
By introducing these regulations, India is moving toward a controlled environment for the introduction and use of AI technologies.
The approval requirement and the emphasis on disclosing potential inaccuracies are seen as measures to balance technological innovation against societal and ethical considerations, with the aim of protecting democratic processes and the public interest in the digital era.