Centre told social media platforms to take down misleading AI content
- November 8, 2023
- Posted by: OptimizeIAS Team
- Category: DPN Topics
Section: Legislation in news
Context: Centre issues advisory to social media platforms to take down misleading AI content.
More about the news:
- The Ministry of Electronics and IT (MeitY) has issued advisories to social media platforms, including Facebook, Instagram, and YouTube, instructing them to remove misleading content generated by artificial intelligence within 24 hours.
- This action follows the recent viral spread of a deepfake video of actor Rashmika Mandanna on social media.
- The advisory references existing legal provisions that platforms must adhere to as online intermediaries, including Section 66D of the Information Technology Act and Rule 3(2)(b) of the Information Technology Rules, which require the removal of impersonation content, including artificially manipulated images, within 24 hours of receiving a complaint.
- Deepfake technology poses a significant challenge, particularly for women, as it adds a new dimension to online harassment.
What are deepfakes:
- A deepfake is an artificially created image or video that convincingly portrays one person as another.
- It represents an advanced form of producing deceptive content, harnessing the power of Artificial Intelligence (AI).
- AI involves programming machines to emulate human intelligence, enabling them to think and act like humans.
- With AI, it becomes possible to generate entirely fictitious individuals and manipulate genuine individuals, causing them to appear as if they said or did things they never actually did.
- The term deepfake originated in 2017, when an anonymous Reddit user who called himself “Deepfakes” manipulated Google’s open-source deep-learning technology to create and post pornographic videos.
What are the Global Efforts to regulate Deepfake technology:
- European Union
- The EU has an updated Code of Practice to stop the spread of disinformation through deepfakes.
- The revised Code requires tech companies including Google, Meta, and Twitter to take measures in countering deepfakes and fake accounts on their platforms.
- They have six months to implement their measures once they have signed up to the Code.
- If found non-compliant, these companies can face fines of up to 6% of their annual global turnover.
- United States
- In July 2021, the US introduced the bipartisan Deepfake Task Force Act to assist the Department of Homeland Security (DHS) in countering deepfake technology.
- The measure directs the DHS to conduct an annual study of deepfakes: assess the technology used, track its use by foreign and domestic entities, and identify available countermeasures.
- China
- It is mandatory for deep synthesis service providers and users to ensure that any content doctored using the technology is explicitly labelled and can be traced back to its source.
- The regulation also requires anyone using the technology to edit a person’s image or voice to notify the person in question and obtain their consent.
- News reposted using the technology may be sourced only from the government-approved list of news outlets.
What is the Legal Framework Related to AI in India:
- In India, there are currently no specific legal regulations governing the use of deepfake technology. However, existing laws can be applied to address the misuse of this technology, covering aspects such as Copyright Violation, Defamation, and cybercrimes.
- For instance, the Indian Penal Code, which addresses defamation, and the Information Technology Act of 2000, which pertains to sexually explicit material, could potentially be used to combat malicious deepfake usage.
- The Representation of the People Act of 1951 contains provisions that prohibit the creation or dissemination of false or deceptive information about candidates or political parties during election periods.
- Additionally, the Election Commission of India has established regulations requiring registered political parties and candidates to obtain prior approval for all political advertisements on electronic media.
- Despite these measures, they may still be inadequate in fully addressing the multifaceted challenges arising from AI algorithms, including the potential risks associated with deepfake content.
What are the Recent Global Efforts to Regulate AI:
- The world’s inaugural AI Safety Summit, hosted at Bletchley Park in the UK, saw 28 major nations, including the US, China, Japan, the UK, France, India, and the European Union, unite in signing a declaration emphasizing the necessity for global action to address the potential perils of AI.
- The declaration underscores the recognition of significant risks stemming from potential deliberate misuse and unintended control challenges in advanced AI, particularly in domains such as cybersecurity, biotechnology, and the spread of disinformation.
- In response to these concerns, the US President issued an executive order aiming to fortify defenses against AI-related threats and exercise regulatory oversight over safety standards applied by companies in the assessment of generative AI systems like ChatGPT and Google Bard.
- During the G20 Leaders’ Summit held in New Delhi, the Indian Prime Minister advocated for the creation of a global framework governing the development of “ethical” AI tools.
- This shift in New Delhi’s stance signifies a transition from a position of non-interference in AI regulation to a proactive approach, involving the formulation of regulations grounded in a “risk-based, user-harm” perspective.
What are the Information Technology Rules, 2021:
- The IT Rules, 2021 were notified under Section 87 of the IT Act, 2000 for social media, digital media, and OTT platforms.
- They cover digitized content that can be transmitted over the internet or computer networks and include intermediaries such as Twitter, Facebook, and YouTube.
- They also cover publishers of news and current affairs content, as well as curators of such content on online papers, news portals, news agencies and news aggregators.
- However, e-papers are not covered because print media comes under the purview of the Press Council of India. Newspapers and TV news channels are governed under the Press Council of India Act, 1978 and the Cable Television Networks (Regulation) Act, 1995 respectively.
- Through the rules, digital media is brought under the ambit of Section 69A of the Information Technology Act, 2000, which gives takedown powers to the government.
- The section allows the Centre to block public access to an intermediary in the interest of the sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States, or public order, or for preventing incitement to the commission of any cognisable offence relating to the above.
- Non-compliant intermediaries can also be deprived of their “safe harbour” protections under Section 79 of the IT Act, 2000.
- Safe Harbour provisions protect the intermediaries from liability for the acts of third parties who use their infrastructure for their own purposes.
- Three-tier check structure: Part III of the rules imposes a three-tier complaints and adjudication structure on publishers.
- Level I: Self-regulation.
- Level II: Industry regulatory body headed by a former judge of the Supreme Court or a High Court, with additional members from a panel approved by the Ministry of Information and Broadcasting.
- Level III: Oversight mechanism comprising an inter-ministerial committee with the authority to block access to content, which can also take suo moto cognisance of an issue and of any grievance flagged by the Ministry.
- Social media companies are required to appoint content moderation officers responsible for complying with content moderation orders.
- The new rules make it mandatory for platforms such as WhatsApp to aid in identifying the originator of unlawful messages.
- The rules mandate the creation of a grievance redressal portal as the central repository for receiving and processing all grievances.
- Intermediaries are required to act on certain kinds of violations within 24 hours, and on all concerns of a complainant within 15 days.
- The rules also provide for disclosure of information: competent authorities may demand pertinent information for the purposes of prevention, detection, investigation, prosecution or punishment of crimes. However, the intermediary is not required to disclose the content of the personal message itself.