Conversation around AI

Monday, 05 August 2019 | Shohini Sengupta

Before attempting to craft laws and regulations, officials and the public need to understand the impact of deploying different kinds of AI on individuals and society at large

Earlier this month, Member of Parliament Rajeev Chandrasekhar alleged that social media companies used algorithms to suppress, deny and amplify certain conversations, and made a case for India to enact legislation to check algorithmic bias. Chandrasekhar's allegation is not new, in India or anywhere else. In fact, it is a symptom of a much bigger malaise: a rapidly evolving technology ecosystem with disparate and reactive attempts at accountability and regulation.

Governments and industries around the world have become so concerned about Artificial Intelligence (AI) because it presents vast economic and social opportunities. AI systems (ie, the branch of computer science concerned with developing intelligent machines that mimic certain features of the human brain) have improved mobility, fostered a greater culture of information-sharing and are fundamental to many functions and businesses today, including LinkedIn, social media firms, search engines and cab aggregators like Uber. However, with the increase in the use of AI, multiple issues of trust, privacy, bias, market dominance and security have also arisen.

On the issue of algorithmic bias itself, Chandrasekhar is right to the extent that various technology companies have skirted these issues in the past and have been criticised for being politically influenced, or for using technologies in ways that are beyond their control or driven by hidden motives, and often both. Further, because of the immediate benefits that flow to consumers, people tend to be "present biased": they are more than willing to trade off future benefits (say, privacy or stronger redressal mechanisms) for present gratification (like lower prices and convenience).

This common cognitive bias results in a tendency to overestimate immediate utility and discount future or invisible harms that may accrue to communities later, much like the fallouts of climate change, the impact of Facebook on the 2016 US elections, or the impact on civil liberties of predictive technologies used for policing and facial recognition. Notably, allegations of AI entrenching bias and discrimination are not limited to technology companies; they extend to Governments and State authorities as well. Take the example of the discriminatory outcomes of using AI in the US criminal justice system and for policing in Australia.

Thus, there are rising and compelling reasons to regulate technology firms, particularly those that rely on AI that produces discriminatory and biased outcomes. However, adopting the right governance framework is becoming increasingly difficult, both because AI technologies are becoming more complex and because they are being applied in a variety of contexts, from procedural vetting of documents to content moderation. They present twin capacities for enormous social development and harm, often together. Therefore, before rushing to draft a law to "regulate algorithmic bias" in India, the need of the hour is to consider and experiment with innovative models so as to ensure that the economic gains, social influence and security impact of AI are positive for all. Engaging all relevant stakeholders in the conversation, including academia, policy-makers and the wider community, will be crucial to assessing the challenges that lie ahead.

In this context, understandably, various Governments around the world, as well as international organisations, have begun to weigh in on the issues of regulating certain technology companies and their use of AI. In India, conversations around regulating AI and regulating technology companies have been conflated, and there is no clarity as to how either can be achieved. Further, discussions around technology regulation in general have been scattered across regulators (depending on the specific subject matter) and are bereft of guiding principles, unlike in several other countries.

For instance, in Australia, following the release of a white paper on the governance of AI by the Australian Human Rights Commission earlier this year, the federal agency responsible for scientific research (CSIRO) released a discussion paper on AI and ethics, proposing a toolkit to assist stakeholders in applying eight core principles. Similarly, in Europe, the EU High-Level Expert Group on AI, comprising 52 experts including representatives from academia, civil society and industry, recently published its Ethics Guidelines for Trustworthy AI, following stakeholder consultation on draft guidelines. The EU guidelines were not binding but nevertheless offered stakeholders a set of guiding principles to follow to indicate their commitment to achieving "Trustworthy AI". Most importantly, the guidelines identified core ethical imperatives such as prevention of harm, respect for human autonomy, fairness and explicability.

Therefore, in India, before attempting to craft laws and regulations, officials and the public alike need to better understand the effects of AI and the impact of its deployment on individuals and society at large, while establishing clear policy objectives and governing principles. More importantly, policy discussions need to clarify the difference, or lack thereof, between regulating technology companies that deploy AI unethically and regulating all kinds of AI indiscriminately.

Further, regulations in this space need to be grounded in core principles of data privacy, security, consumer safety, ethics and fairness. It is instructive to note that, at present, there is no cogent and comprehensive data privacy law in India. At the same time, it must also be noted that the potential impact of certain kinds of AI goes beyond privacy and other human rights: it extends to the future of work, innovation and decision-making, and has profound implications for our democratic processes and institutions.

In this context, it is crucial to remember that any meaningful policy discussion around the regulation of AI and algorithmic bias in India will first need to start by acknowledging that core socio-economic issues and biases cannot be cured by technological advancements. Acknowledging the inherent bias in our sociological understanding of the world, in our databases and in existing jurisprudence allows us to control and correct for such bias in the deployment of any technology, including AI, or, at the very least, alerts us to the risks of over-reliance on such AI.

Further, any regulation of AI will need to begin by establishing foundational concepts and building on existing frameworks of law and governance, before moving on to ethical and other challenges that go beyond the framework of the law.

Policy will then need to take into account the scope of deployment of AI and the impact of such deployment. In the clamour to regulate technology companies, it must not be forgotten that there are differences among the firms deploying AI and among its varied uses. For instance, there is a difference between AI used on an OTT platform to present viewers with certain kinds of movies over others and AI used to filter or moderate content on a large social media platform, where failures can lead to real-world violence.

A governance framework that does not engage with the nuances of technological uses and with the different kinds of biases that exist in the offline and online worlds, in our thinking and in the software we build, in power structures and differing incentives, in our Governments and businesses, can be both misleading and misguided. In correcting for discrimination and bias, unless we militate against absolute and simplistic understandings of technology and against drafting laws without consulting the wider community, a law regulating algorithmic bias may be more harmful to the innovation ecosystem in India than no law at all.

(The writer is a fellow at the Esya Centre)
