AI: A double-edged sword we must live with

Thursday, 16 November 2023 | Govind Bhattacharjee

AI is one of the biggest technological advances of recent times, but it can pose a grave danger to humanity by unleashing misinformation and deepfakes

Since its launch last year, ChatGPT, a generative AI tool with astoundingly diverse capabilities, has taken the world by storm. Despite its immense potential to improve our lives and solve humanity’s problems, it has left data scientists and governments deeply worried about its capacity to create a tsunami of misinformation and an atmosphere of distrust. More ominously, it may endanger democracy itself through its potential to manipulate elections with fake news and false propaganda, making it difficult to distinguish truth from falsehood; elections in the world’s two largest democracies are due next year.

In March 2023, a thousand technology leaders warned in an open letter that AI posed a profound, existential threat to humanity that needed immediate regulation. The letter called for a six-month moratorium on the development of powerful AI systems, urging that "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." The pause, it added, would provide time to introduce “shared safety protocols” for AI systems, and governments should step in to institute a moratorium if needed.

Swift government action seemed a slim possibility, as politicians have very little understanding of AI. This time, however, at least some governments have taken the threats seriously. On 1st and 2nd November 2023, delegates from 27 governments around the world, along with the heads of top artificial intelligence companies, gathered for the world’s first “AI Safety Summit” at Bletchley Park in the UK, a picturesque venue north of London whose symbolism could not be missed: it was where Alan Turing worked during World War II, and the machines devised there to break the Nazis’ Enigma code became the first blueprints for programmable computers. Among the attendees were delegates from the UN, the USA, China, the EU and India, and tech leaders from OpenAI, Anthropic, Google DeepMind, Microsoft, Meta and xAI.

In 2021, European Union policymakers proposed a law, yet to be passed, designed to regulate AI technologies that might cause harm, including facial recognition systems, and to require companies to conduct risk assessments of AI technologies to determine how their applications could affect health, safety and individual rights. In 2022 alone, 37 regulations mentioning AI were passed around the globe; Italy went so far as to ban ChatGPT, but little global coordination was otherwise visible. This time, the conference delegates from different countries agreed on a common approach to identifying risks and ways to mitigate them. The "Bletchley Declaration" issued on 1st November recognised both the short-term and longer-term risks of AI, affirmed the responsibility of the creators of powerful AI systems to ensure that they are safe, and committed the signatories to international collaboration on identifying and mitigating those risks.

The proceedings were held in camera, but the British PM said that on 2nd November the USA, the EU and other "like-minded" countries had reached a "landmark agreement" with select cutting-edge AI companies that models should be rigorously assessed both before and after deployment. China, which had signed the "Bletchley Declaration" the previous day, did not sign this agreement. The AI companies at the Summit agreed to give governments early access to their models to perform safety evaluations. Details are still sketchy; all we know is that both the UK and the USA will set up permanent bodies, called AI Safety Institutes, to carry out safety evaluations and develop risk-assessment guidelines for AI systems. The progress may seem limited, but global coordination at this level, seen for the first time, is a welcome sign that humanity can come together to counter the threats posed by the unbounded capabilities of AI.

At the end of the Summit, the UK, as chair, issued a statement summarising the discussions. The major objectives of the Summit included developing a shared understanding of the risks posed by frontier AI technologies and of the need for concerted action, triggering a process of international collaboration on the safety of frontier AI, and developing new standards to support AI governance.

Consensus emerged on developing common, measurable international standards for safety and on governments’ role in testing models, not just pre- and post-deployment but earlier in a model’s lifecycle, including during the training runs of Large Language Models. Delegates also shared the ambition to unlock the significant potential of frontier AI to transform economies and societies, especially in improving healthcare and education and in handling environmental problems inclusively.

Currently, the development of frontier AI technologies in different countries is marked by a fragmented and incomplete understanding of them. Inclusivity means equitable realisation and distribution of the benefits of AI for all countries and across all groups, including minorities and marginalised communities; failure here will create extreme inequality, with disastrous consequences for the future. Mitigating risks also requires shared principles and codes, and their standardisation. Some progress has already been made, such as the 2019 OECD Recommendation on AI and the G7-initiated Hiroshima Process under Japan’s presidency, especially its International Guiding Principles and International Code of Conduct for Organisations Developing Advanced AI Systems.

However, ensuring AI safety will require the convergence of multiple strands of activity, including skills, talent and physical infrastructure. These are presently monopolised by advanced countries and tech giants, and without shared benefits and inclusivity, the risks cannot be mitigated. There is a huge concentration of AI power within a handful of companies unwilling to loosen their stranglehold.

Profit-driven companies monopolising AI research are likely to produce bad outcomes, and there is a raging debate about open-sourcing code to address this problem. Views differ: some point to the dangers of open-source models in the hands of rogue state and non-state actors, while others argue that open-sourcing can accelerate safety research. Countries are also taking divergent, competing approaches to regulating AI. The US wants self-regulation by tech companies to promote innovation, while the EU prefers a risk-based approach; for China, socialist values, meaning political control, remain at the forefront of regulation. The EU’s General Data Protection Regulation (GDPR), effective since 2018, focuses on the privacy of personal data and on how that data can be used by AI systems. The Digital Markets Act, which has also been passed, focuses on competition and will target the largest cloud players, which are essential for AI systems. US regulation, meanwhile, is focused primarily on containing China.

Given the breakneck speed at which AI is evolving, there is also the question of what to regulate. Tech firms want only narrow regulation, limiting scrutiny to the most powerful frontier models.

Microsoft is calling for a licensing regime requiring firms to register models that exceed certain performance thresholds. Some advocate controlling the sale of the powerful chips used to train LLMs. The best option, as many experts suggest, is to create a new, neutral, non-profit international regulatory agency, akin to the Intergovernmental Panel on Climate Change (IPCC), with adequate authority to guide, coordinate and control the development of frontier AI technologies through mandatory safety standards.

Such a body must include all governments with equal voting power. That would mean involving all stakeholders, and the biggest obstacle to this would be the advanced countries and their tech companies, which can make unlimited profits only so long as others remain excluded from cutting-edge AI technologies.

(The author, a former Director General at the Office of the Comptroller & Auditor General of India, is currently a Professor at the Arun Jaitley National Institute of Financial Management; views are personal)
