PMF IAS Current Affairs
  • Context (TH): The Ministry of Electronics and Information Technology (MeitY) issued an advisory to the Artificial Intelligence industry.
  • After facing backlash, the government clarified that its advisory on generative artificial intelligence (AI) services and elections was directed towards “significant” platforms and not start-ups.
  • The advisory instructed large tech platforms, including Google, to submit an action taken-cum-status report to the Ministry within 15 days.

Key Takeaways from the Advisory

Permission must be sought for AI models still in the testing stage

  • Explicit permission from the GoI is required before an under-tested or unreliable AI model/LLM/generative AI, software, or algorithm is made available to users on the Indian Internet.
  • They should be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated.
    • This part of the advisory was slammed by startup founders, who described it as a “bad move” and “demotivating”.

Large language model (LLM)

  • It is a type of artificial intelligence (AI) program that can recognise and generate text, among other tasks.
  • LLMs are trained on huge sets of data — hence the name “large.”
  • LLMs are built on machine learning, specifically on a type of neural network called a transformer model.

Machine Learning

  • Machine learning is a branch of artificial intelligence (AI) and computer science that focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy.
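
The "gradually improving its accuracy" idea can be made concrete with a toy sketch (not any particular library's API): a model with one parameter fits the rule y = 2x by gradient descent, and its error shrinks with each pass over the data. The function name and data below are illustrative assumptions.

```python
# A minimal sketch of "learning from data": fit y ≈ w * x by gradient
# descent, so the model's error shrinks with each pass over the data.

def fit_slope(xs, ys, lr=0.01, epochs=100):
    """Learn a slope w that minimises squared error, starting from w = 0."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # step against the gradient
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated by the true rule y = 2x
print(round(fit_slope(xs, ys), 3))
```

Each epoch moves w a little closer to 2, mirroring how larger models "gradually improve accuracy" over many passes through their training data.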

AI platforms can’t threaten the poll process, spread misinformation

  • The platforms have to ensure that AI models do not permit users to publish/host any unlawful content as defined under Rule 3(1) (b) of the IT rules.
  • Platforms must ensure that their “computer resources,” including AI model/LLM/generative AI/software or algorithm, do not permit any bias “or threaten the integrity of the electoral process”.

Generative Artificial Intelligence

  • Generative AI refers to deep-learning models that can take raw data and “learn” to generate statistically probable outputs when prompted.
  • Generative AI is powered by foundation models (large AI models) that can multi-task and perform out-of-the-box tasks, including summarisation, Q&A, classification, and more.
  • Foundation models can be adapted for targeted use cases with very little example data and minimal additional training.

How Does Generative AI Work?

  • Generative AI works by using a Machine Learning model to learn the patterns and relationships in a dataset of human-created content.
  • It then uses the learned patterns to generate new content.
  • The most common way to train a generative AI model is supervised learning – the model is given a set of human-created content and corresponding labels.
  • It then learns to generate content that is similar to the human-created content and labelled with the same labels.
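
The "learn patterns, then generate" loop above can be illustrated with a toy bigram model (a deliberate simplification of how real LLMs work, with made-up training sentences): it counts which word follows which in human-written text, then emits new text by repeatedly picking the most likely next word.

```python
from collections import defaultdict, Counter

def learn_bigrams(corpus):
    """Count which word follows which — the 'patterns' in the training text."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def generate(model, start, length=5):
    """Emit new text by repeatedly choosing the most likely next word."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = ["the cat sat on the mat", "the cat ate the fish"]
model = learn_bigrams(corpus)
print(generate(model, "the"))
```

Real generative models learn patterns over billions of examples with neural networks rather than word counts, but the two-phase structure — learn the statistics of human content, then sample statistically probable continuations — is the same.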

‘Permanent unique identifier’ for AI-generated content

  • For synthetic content that can be used to spread misinformation or deepfakes, “it is advised that such information is labelled or embedded with a permanent unique metadata or identifier.”
    • This metadata or identifier can be used to identify the “creator or first originator of such misinformation or deep fake”.
  • Platforms should use a “consent popup” mechanism to inform users about possible inaccuracies in AI-generated output.
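
One way such labelling could work in practice (a hypothetical sketch, not the advisory's prescribed mechanism — real deployments use standards such as C2PA content credentials or watermarking) is to attach provenance metadata and a content-derived identifier to every AI output:

```python
import hashlib

def label_synthetic(content: str, creator: str) -> dict:
    """Attach provenance metadata and a unique identifier to AI output.

    The identifier is a hash over the creator and the content, so the
    'first originator' can be traced if the content resurfaces later.
    """
    uid = hashlib.sha256(f"{creator}:{content}".encode()).hexdigest()
    return {
        "content": content,
        "metadata": {
            "ai_generated": True,   # explicit label shown to the user
            "creator": creator,     # first originator
            "identifier": uid,      # permanent unique identifier
        },
    }

record = label_synthetic("An AI-written caption.", "model-v1")
print(record["metadata"]["identifier"][:12])
```

Because the hash is deterministic, the same creator producing the same content always yields the same identifier, which is what makes later tracing possible; embedding it robustly (so it survives cropping or re-encoding) is the hard, unsolved part.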

Users ‘dealing’ with unlawful information can be punished

  • AI platforms must communicate to users that “dealing” with unlawful information can lead to suspension from the platform and may also incur punishment under applicable laws.

Non-compliance can lead to penal consequences

  • Non-compliance with the provisions of the IT Act and/or IT Rules would result in potential penal consequences for intermediaries, platforms, or users when identified.
    • The prosecution can be under the IT Act and several other criminal statutes.

An Analysis of the Government Advisory

Lack of Statutory Power

  • The unclear legal status of MEITY’s advisories raises concerns about the government’s regulatory authority and its implications for stakeholders.
  • Unlike regulatory bodies like the Securities and Exchange Board of India (SEBI), MEITY lacks clear statutory powers explicitly granting it the authority to issue binding directives or advisories.
    • This absence of a specific legislative framework leaves room for interpretation and raises questions about the enforceability of MEITY’s directives.

Lack of Clarity on MEITY’s Power to Issue Advisories

  • The IT Act 2000, India’s primary legislation for technology regulation, does not explicitly confer on MEITY the power to issue advisories to regulate emerging technologies like AI.
    • While the IT Act contains provisions for the regulation of electronic records, digital signatures, and cybersecurity, it does not delineate MEITY’s authority to issue directives on AI governance.

Lack of Clear Definitions and Citations

  • The term “advisory” itself lacks a precise definition under the IT Act or other relevant legislation.
    • This ambiguity allows MEITY to issue directives that carry the weight of official recommendations without clear legal backing.
    • Stakeholders, including technology companies, users, and legal experts, are uncertain about the legal implications of non-compliance with MEITY’s advisories.
  • MEITY’s advisories often lack explicit citations of legal authority or references to specific legislative provisions.
  • This absence of legal grounding contributes to the perception of MEITY’s regulatory actions as arbitrary and potentially overreaching.

Compliance Issue

  • In the absence of clear penalties or enforcement mechanisms, compliance becomes a matter of discretion rather than legal obligation.
  • This further raises concerns about accountability and due process in technology regulation.

Opportunistic Transparency and Rapid Policy Making

  • These advisories are triggered by media events and issued hastily, without thorough assessment or stakeholder consultation.
  • The lack of transparency, with only partial information released to the public, further undermines the legitimacy of MEITY’s regulatory actions.

Undefined Terms and Ministerial Clarifications

  • The advisory introduces vague terms such as “bias prevention” and proposes a licensing regime for AI models without clear definitions or legal framework.
  • This lack of clarity contributes to uncertainty among stakeholders and undermines the rule of law.

Ineffective Advisory Regulation and Decline in Administrative Standards

  • The advisory regulation represents a decline in administrative standards, bypassing formal legislative processes and stakeholder consultations.
  • The expansion of IT Rules 2021 to regulate various aspects of digital content further exemplifies regulatory overreach.
  • Moreover, the influence of social media metrics on policy decisions reflects a departure from deliberative governance processes.

Curtailing Freedom of Expression

  • MEITY’s regulatory actions, including advisories on AI governance and social media content moderation, risk curtailing freedom of expression online.
  • This can lead to self-censorship among individuals and organisations, fearing reprisals for expressing dissenting views or challenging government policies.
  • This stifling of free speech undermines democratic discourse and pluralism in the digital public sphere.

Expansion of Surveillance and Control

  • Digital authoritarianism is often characterised by the expansion of state surveillance and control over online activities.
    • MEITY’s proposals for AI governance may facilitate increased government surveillance and censorship of online content.

Threats to Innovation and Technological Development

  • MEITY’s advisory and the ambiguity of its legal status pose significant challenges to innovation and technological development.
  • The uncertainty surrounding compliance requirements and enforcement mechanisms discourages investment and innovation in emerging technologies like AI.
  • The burdensome regulatory requirements may stifle entrepreneurship and impede the growth of India’s technology sector.

Role of Gen AI on Elections in 2024

  • In 2024, elections will be held in over 50 countries, including India, the US, the UK, Indonesia, Russia, Taiwan, and South Africa.
  • Like in previous elections, one of the biggest challenges voters will face will be the prevalence of fake news, especially as AI technology makes fake news easier to create and disseminate.

How is AI Linked with the Electoral Landscape?

Campaign Strategy and Targeting

  • AI algorithms can analyse vast amounts of data about voters, including demographics, social media activity, and past voting behaviour, to tailor messages and more effectively target specific voter groups.
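
A stripped-down illustration of such targeting (with entirely made-up voter records and segment rules) is to group voters by shared attributes so each segment receives a tailored message; real campaigns do this at scale with machine-learnt clusters rather than hand-written rules.

```python
from collections import defaultdict

def segment(voters):
    """Group voters into segments so each can receive a tailored message.

    Each voter record is (name, age, top_issue); segments are keyed by
    a simple age band plus the voter's stated top issue.
    """
    groups = defaultdict(list)
    for name, age, issue in voters:
        band = "young" if age < 30 else "middle" if age < 60 else "senior"
        groups[(band, issue)].append(name)
    return dict(groups)

# Hypothetical voter records for illustration only.
voters = [
    ("A", 22, "jobs"), ("B", 24, "jobs"),
    ("C", 61, "pensions"), ("D", 35, "jobs"), ("E", 67, "pensions"),
]

for (band, issue), names in segment(voters).items():
    print(f"{band}/{issue}: tailored message on {issue} -> {names}")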

Predictive Analytics

  • AI-powered predictive analytics can forecast election outcomes by analysing various factors such as polling data, economic indicators, and sentiment analysis from social media.
  • This can help parties allocate resources strategically and focus on key battleground areas.
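
In its simplest form, such a forecast blends multiple signals into one estimate. The sketch below (illustrative party names, shares, and weighting — not any real forecasting model) combines polling averages with a social-media sentiment score:

```python
def forecast(poll_shares, sentiment_scores, w_poll=0.8):
    """Blend polling averages with a social-media sentiment signal.

    Both inputs map party -> score in [0, 1]; the output is their
    weighted combination, renormalised so the shares sum to 1.
    """
    blended = {
        p: w_poll * poll_shares[p] + (1 - w_poll) * sentiment_scores[p]
        for p in poll_shares
    }
    total = sum(blended.values())
    return {p: round(v / total, 3) for p, v in blended.items()}

# Hypothetical inputs: polls favour X, online sentiment favours Y.
polls = {"Party X": 0.52, "Party Y": 0.48}
sentiment = {"Party X": 0.40, "Party Y": 0.60}
print(forecast(polls, sentiment))
```

Production systems replace the fixed weights with learned ones and add many more features (economic indicators, turnout history), but the core idea — fusing several noisy signals into a single probabilistic estimate — is the same.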

Voter Engagement

  • AI chatbots can engage with voters on social media platforms, answering questions, providing information about candidates and policies, and even encouraging voter turnout.
  • This can enhance voter engagement and participation in the electoral process.

Security and Integrity

  • AI-powered tools can be employed to detect and prevent election fraud, including voter suppression, tampering with electronic voting systems, and the spread of disinformation.
  • AI algorithms can help ensure the integrity of the electoral process by analysing patterns and anomalies in data.

Regulation and Oversight

  • Governments and election authorities can use AI to monitor and regulate political advertising, identify violations of campaign finance laws, and ensure compliance with electoral regulations.
  • AI-powered tools can help enforce transparency and accountability in the electoral process.
    • The Bihar Election Commission (BEC) tied up with AI firm Staqu to use video analytics with optical character recognition (OCR) to analyse CCTV footage during the panchayat elections.
    • This enabled the BEC to achieve complete transparency and eliminate any chances of manipulation.

Concerns about Deploying AI for Electoral Purposes?

Manipulation of Electoral Behavior

  • AI models, particularly Generative AI and AGI, can be used to spread disinformation, enable “deepfake elections”, and inundate voters with highly personalised propaganda.
  • Deepfake videos of opponents can be created to tarnish their image.
    • The term “Deep Fake Elections” refers to the use of AI software to create convincing fake videos, audio, and other content that can deceive voters and influence their decisions.
    • One prominent example highlighting the potential dangers of such manipulation is the Cambridge Analytica scandal.
    • Cambridge Analytica exploited Facebook data to create targeted political advertisements and influence voter behaviour during the 2016 US presidential election.

Messaging and Propaganda

  • AI tools can be trained to translate campaign content into regional languages, which candidates can use for microtargeting in their campaigns.
  • Microtargeting is a marketing strategy that uses recent technological developments to reach specific segments of a larger audience based on detailed demographic, psychographic, behavioural, or other data.

Spreading Disinformation

  • The World Economic Forum’s 2024 Global Risks Report ranked AI-derived misinformation and its potential for societal polarisation among its top 10 risks over the next two years.
  • AI models would be far superior to the bots and automated social media accounts that are now baseline tools for spreading disinformation.
  • The risks are compounded by social media companies such as Facebook and Twitter significantly cutting their fact-checking and election integrity teams.

Inaccuracies and Unreliability

  • AI models, including AGI, are not infallible and can produce inaccuracies and inconsistencies.
  • There has been public wrath over Google’s AI models for portraying persons and personalities in a malefic manner, mistakenly or otherwise. Such incidents reflect the dangers of ‘runaway’ AI.
  • ‘Runaway’ AI refers to AI that has escaped human control.

Ethical Concerns

  • The use of AI in elections raises ethical questions about privacy, transparency, and fairness.
  • AI algorithms may inadvertently perpetuate biases present in training data, leading to unfair treatment or discrimination against certain groups of voters.
  • Moreover, the lack of transparency in AI decision-making processes can erode public trust and confidence in electoral outcomes.
  • Parties with better resources can utilise AI more effectively than small and regional parties with fewer resources, which may disrupt the level playing field in elections.

Regulatory Challenges

  • Regulating the use of AI in electoral campaigns presents significant challenges due to the rapid pace of technological advancements and the global nature of online platforms.
  • Governments and election authorities struggle to keep pace with evolving AI techniques and may lack the necessary expertise to regulate AI-driven electoral activities effectively.
  • The primary statutes that could potentially tackle fake news spread using deepfakes are the Indian Penal Code, 1860 (or the Bharatiya Nyaya Sanhita, 2023, in due course); the Information Technology Act, 2000; and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
  • However, a specific law doesn’t exist that addresses only AI and deepfake technology and targets the individual who creates it.