In our new series ‘Sit Down with StateUp’, we’ll interview leading experts to hear their stories and share their insights focused on innovation, government and public purpose. In this edition, StateUp Associate Riley Kaminer sat down with Dr. Paolo Turrini, our own Expert Affiliate specialising in AI for social good. Also check out the first article in the series, which features Dr. Rehema Msulwa, StateUp’s Expert Affiliate specialising in Infrastructure and the Built Environment.
Dr. Paolo Turrini, Expert Affiliate at StateUp, argues that artificial intelligence has the potential to change our society for the better, when accompanied by effective policymaking. “Every day, computers are getting faster, neural networks are becoming increasingly developed, and more investments into AI are made,” he says.
Turrini is an Associate Professor in Computer Science at the University of Warwick, but he sees himself as more of a social scientist: “I’m more interested in people than computers.” Ultimately, he argues, “interacting artificial agents display interesting social behaviour and affect that of people, as well. AI needs to be studied in interaction.”
In his work, Turrini investigates how to use game theory to design AI algorithms that promote desirable social behaviour at scale. “I focus on how to understand large societies, not just two people playing together.”
Turrini became interested in AI for social good while undertaking his master’s degree at the University of Siena. “I met Professor Rosaria Conte while taking a course on social psychology. She was using this mathematical language called game theory – which, at the time, I had no idea about. [Turrini now teaches game theory.] I thought it was cool to learn about how cooperation can emerge in society, and how computers can enter the picture to model complex human behaviour.”
Applications of artificial intelligence in the public sector
Turrini believes that public sector organisations and industry allies can leverage AI tools for social good. He calls the public sector “a natural platform” for socially-conscious technology because it is “easier to find common values and interests” there.
Turrini outlines two primary ways that public sector bodies interact with AI. The first is regulatory. He explains that in this instance, “policymakers survey what tech is out there and think, ‘is this tech good? Is it promoting unwanted behaviour?’”
The second approach views policymakers, in Turrini’s words, as “people designing societal interactions with an eye on improving social behaviour or policy.” In this case, technology is used as a tool to design these interactions to help policymakers achieve their goals. An example of this approach would be asking the question, “How can I as a policymaker devise systems that incentivise people to share their unbiased views, while also making it more difficult to spread fake news?”
Turrini notes that both these approaches have a proven track record of promoting social good. One example is social networking websites’ efforts to detect fake accounts, which he describes as a “very difficult” problem – and one with tangible societal ramifications. Using AI, Facebook was able to shut down 5.4 billion fake accounts in the first nine months of 2019. “This is an impressive number,” says Turrini. “The extent of manipulation online is huge and it is incredible how much AI can do.”
Another example is policymakers’ use of data to inform government responses to the pandemic. Policymakers overcame “difficult computational problems” to make life-saving, data-driven decisions around restrictions such as coordinated lockdowns, according to Turrini.
“Models can be devised to optimise coordinated lockdowns,” Turrini says. “They might not give exact or optimal solutions, because of the complexity of the problem. But they can certainly give better solutions than independent country-wide decisions.” This international cooperation is crucial: “If a country goes into lockdown but the neighbouring countries don’t, and travel between them is non-negligible, pandemic mitigation efforts will be ill spent.”
While the effectiveness of AI in combating Covid is contested, Turrini maintains that “politicians have disappointed more than AI.” This concerns him: “To use [these models], we need to have political coordination and common will. If countries don’t want to give up their independent decision making and start acting together, then AI can’t do much because it won’t be deployed to start with.”
AI can also play a role in fuelling the public sector’s post-pandemic recovery. For example, algorithms can help cut through the NHS’s backlog, supporting healthcare professionals as they “make ethical decisions about which surgeries to prioritise.”
“I’m not suggesting AI should be used to make decisions on who to accept in a hospital and who to send home,” Turrini clarifies. “But if such decisions are to be made – and we know that they have been made in the past, when for example beds were simply not enough for all – they’d better be made looking at good models that are based on a significant number of data points.”
Navigating ethical issues when dealing with artificial intelligence
Despite Turrini’s bullishness on public sector use cases of AI, he recognises the ethical dilemmas, and the public backlash that could ensue if they are not carefully navigated. This past April, the New York Police Department was criticised over its use of an AI-powered robot. The NYPD has said that it hopes the devices will help “save lives, protect people, and protect officers.” However, in a context where police interactions are increasingly scrutinised, critics are concerned that the machine will do more harm than good.
Turrini urges us to recall that artificial intelligence can be imbued with the biases of the humans who developed it. These biases might even be amplified when encoded in AI systems: “We shouldn’t forget that every model is based on assumptions, which can be affected by biases and oversimplifications.”
Another way Turrini suggests thinking about AI ethics is by interrogating the effects the technology has on real people. “Consider a case where one group of people uses a certain technology that makes their life easier, but their use of this technology makes other people’s lives worse. In that case, the people who are negatively affected should be paid some compensation.”
His approach here is a pragmatic one: “AI for social good sometimes is unattainable and technology will favour some groups at the expense of others. Compensation mechanisms need to be introduced by policymakers in this case.”
Creating incentives to connect academia with government and industry
Turrini thinks that it is important to bridge the gap between academic research and decision-making in industry and government: “It is valuable for researchers who work in academia to bring their theoretical background to solve difficult problems.”
A Fellow at the Alan Turing Institute, Turrini cites the speculative funding of Alan Turing’s mathematical research – without which we would not have computers as we know them – as an example of the benefits of curiosity-driven research.
Equally, Turrini advocates for more results-driven research funding. “Scientists need to have an eye on the practical side of things as well. They should get involved with the practical implications of their work and contribute in order to make a positive impact.”
“Setting the right incentives is the key issue,” says Turrini. “Incentive schemes do currently exist, but we would benefit from having more,” he notes, underscoring the key role UK Research and Innovation (UKRI) plays in enabling experts to come together to solve society’s thorniest problems.
Turrini argues that policymakers should take a close look at the structure of government projects. Taking UKRI as an example, he highlights unanswered questions including “How is the selection process of these funded projects conducted?” and “Are we sure that a voluntary system of scientists reviewing [proposals] in their free time is the right way to go?”
“These might sound like very specific issues, but they are actually at the heart of scientific progress,” he says. “There is a lot of research going on that we aren’t using.”