Do machines make mistakes? How often, and at what cost? When can we truly trust machines? From Siri to self-driving cars, from Google’s search algorithms to autonomous weapons and drones, the past few decades have witnessed a meteoric rise in Artificial Intelligence (AI). Today, as nation-states make choices around data, AI is shaping the rhetoric of privacy rights and governance. Conversely, the data policies that nation-states adopt are also shaping the kind of AI that emerges.
With this backdrop, Vasant Dhar, Professor, Stern School of Business, New York University, discussed the ascent of Artificial Intelligence and its uses and implications for nation-states during a Development Seminar at Brookings India. Anna Roy, Industry Advisor at the NITI Aayog, offered her perspective on India’s Artificial Intelligence policies and presented a future-focused discussion on what India can do to harness the power of AI.
Most AI systems today are based on supervised and reinforcement learning, where the system learns by being exposed to tens of thousands of illustrative examples. This has brought us to a stage where the world is shifting from a time when people wrote programs to a time when machines learn to write code for themselves. Driverless cars offer the simplest illustration: the machine (the car) learns the task (driving) from examples instead of being explicitly programmed to do it.
Machines are now capable of taking inputs directly from the environment, and this has interesting implications for humans. Machines are also starting to learn the relationships between variables from data and are able to predict future behaviour. But there is a catch: machines make mistakes. More importantly, after making a mistake, a machine cannot explain the behaviour that caused it to fail. This raises the question: when should we trust machines?
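The idea of learning a relationship between variables from data, then predicting future behaviour, can be sketched in a few lines. The example below is purely illustrative (the data points are invented, and a real AI system would be far more complex): fitting a straight line to observed examples by least squares and using it to predict an unseen input.

```python
# Minimal sketch of "learning from data": fit y = slope*x + intercept
# by least squares, then predict unseen values. Data is invented.

def fit_line(xs, ys):
    """Learn slope and intercept from example (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Least-squares estimates: the machine infers the rule from examples.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training examples: the machine is never told the rule, only shown data.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]  # roughly y = 2x, with noise

slope, intercept = fit_line(xs, ys)
prediction = slope * 6 + intercept  # predict behaviour at an unseen input
```

Note that the fitted model offers no account of *why* a given prediction is wrong when it misses, which is the explainability gap described above.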
Using his model of AI and trust, Dhar discussed how problems can be evaluated along two dimensions: predictability and the cost of error. Autonomous vehicles, for example, have high predictability but also a high cost of error. As AI advances and times change, the dynamics of trust change as well: predictability may go up and the cost of error may come down, or vice versa.
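This predictability-versus-cost-of-error framing can be caricatured as a simple decision rule. The thresholds, labels and example scores below are invented for illustration, not values from Dhar’s model:

```python
# Illustrative sketch of a trust-vs-automation decision, inspired by the
# idea of weighing predictability against the cost of error.
# All thresholds and labels are assumptions made for this sketch.

def trust_decision(predictability, cost_of_error):
    """Return a rough verdict for a problem, given scores in [0, 1]."""
    if predictability >= 0.9 and cost_of_error <= 0.2:
        return "automate"                       # high trust in the machine
    if predictability >= 0.9:
        return "automate with human oversight"  # accurate, but errors are costly
    if cost_of_error <= 0.2:
        return "experiment"                     # cheap mistakes: safe to learn
    return "keep humans in the loop"            # low trust, costly errors

# As AI advances, the same problem can migrate between verdicts:
print(trust_decision(0.95, 0.1))  # e.g. spam filtering
print(trust_decision(0.95, 0.9))  # e.g. autonomous driving today
```

The point of the sketch is the migration: improving predictability or lowering the cost of error moves a problem toward the automation frontier.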
The goal for AI professionals, then, is to move most problems toward the automation frontier, where the risk appetite to trust machines exists alongside the need to innovate and invest in AI.
At the heart of every AI opportunity lies one word: data. Data is the source of all information, and managing such massive volumes of it remains a challenge for many countries. While algorithms are free, data is expensive and proprietary. Countries like India are therefore at a crucial stage in their histories, where a window of opportunity to harness large amounts of data exists and a policy push on effective, efficient data utilisation can make or break the future of AI. In this view, policy around data is set to shape future nation-states.
Interestingly, four emerging models of data use have evolved around the world. These include:
- Data as an asset: The focus in the U.S. has traditionally been on monetising data, as seen in internet giants such as Google, Amazon and Facebook. The flipside is that these same internet giants are beginning to resemble nation-states themselves. This trend complicates the policy and regulation of data as the threat of data being used to manipulate opinion rises.
- Data as a risk: The focus in the European Union and countries like Japan has been on how to pre-empt the misuse of data. The challenges around such a policy lie mainly in the enforcement of regulations.
- Data for state control: The focus of the Chinese state, for example, has been on how to link data and use it for surveillance. This line of argument leads us to question the assumptions we hold about state control itself, and whether there is a trade-off between privacy and the increasingly real benefits of such centralised data.
- Data for inclusion: The focus of a country like India has largely been on empowerment and inclusion through consent, with people at liberty to choose when to share or not share their data. This approach also goes by the term “data fiduciary”: the state sees immense value in leveraging AI to enhance state capacity for basic services, digital as well as public.
Roy focused her presentation on the issues that surround AI in India, from research and innovation to ethics and equity. Given how the use of data for policy purposes has driven narratives around privacy and rights, AI offers interesting solutions. Focusing on five identified sectors that could greatly benefit from AI, namely agriculture, health, education, urbanisation and mobility, she framed recommendations for India, including the importance of research in AI, skilling people for AI, accelerating the adoption of AI, and ensuring ethics and security in AI.
As machine learning and artificial intelligence deliver on their hype by creating innovative and valuable solutions for organisations and governments, there are enough success stories to encourage the development of AI systems. However, failures also abound, and these carry implications for policy and nation-states. While training and skills development are vital for any AI endeavour, governments will need to formulate policies that shape not just the future of data and AI but of governance and governing itself.