Government needs to quickly regulate artificial intelligence development to ensure it benefits communities without harming marginalized populations, say Quebec researchers advocating “socially beneficial” AI.
The profit-driven private sector cannot be relied on to provide AI applications that benefit society, they told Research Money. Instead, it is up to government to set policy, invest in and promote community-focused AI solutions, and prevent the racial and other discrimination already occurring with the technology.
“We know that the market itself won’t be sufficient to guide this transition to making publicly beneficial AI,” said Allison Cohen, applied AI projects lead, AI For Humanity, at Mila – Quebec AI Institute. “But the government has a series of tools at its disposal to better guide the market, whether that’s through incentives to provide that carrot that you need to do research in a certain direction, or sticks in the way of regulation [such as] penalizing problematic applications."
According to Karine Gentelet, PhD, a sociologist at the University of Quebec in Outaouais whose research focuses on how people use technology to address social justice issues, “It’s government’s role to ensure the technology is used for the social good. Government needs to understand what is behind AI and what could be the positive and negative impacts on society, and not only rely on what the industry is saying.”
In separate interviews, both researchers identified numerous problems with the way the private sector is currently developing AI. “The harms that these types of applications can cause people are significant,” Cohen said.
In March, Yoshua Bengio, a professor at the University of Montreal and scientific director of Mila, along with more than 1,000 key players in artificial intelligence and technology, signed an open letter urging all AI labs to agree to a six-month, “public and verifiable” pause on training AI systems more powerful than OpenAI’s GPT-4.
According to the letter, “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
Last week, Sam Altman, CEO of OpenAI, which created ChatGPT, told a U.S. Senate hearing that a U.S. or global agency needs to regulate AI because “if this technology goes wrong, it can go quite wrong.”
Earlier this month, Geoffrey Hinton, a pioneer of deep learning and chief scientific advisor for Toronto’s Vector Institute, quit Google to focus on concerns that AI will become much more intelligent than humans and be used for harmful purposes.
“There’s so much potential in AI to do so much good for people," Cohen told Research Money. "But it can easily be squandered, just because of the incentives that currently guide the way that this technology is being developed."
She added that the market will naturally favour widely applicable AI solutions, scaled as large as possible, because company profits grow with the amount of data and the number of users. That approach is antithetical to socially beneficial AI, which is designed for an impact the affected communities themselves define as desirable.
“A community is not going to be served with the types of AI applications that don’t look to connect that community to the solution that’s being built and don’t bring them into that process,” Cohen concluded.
How AI is being developed causes multiple harms
Gentelet is a member of the Laval University-based International Observatory on the Societal Impacts of AI and Digital Technology, which works to help communities, organizations and individuals maximize the positive outcomes of AI and digital technology.
Well-documented research and real-world incidents show that AI applications cause multiple harms, including racial, gender, colonialist and other types of discrimination, she said. “AI is deployed on the social fabric of society. So it cannot be deployed outside of any societal racism, discrimination issue, [or other biases inherent in society].”
AI can make marginalized and disadvantaged members of society even more so, depending on the data used to create algorithms, on who controls how those data are used, and on whether people can opt out of AI applications. For example, said Gentelet, people in remote communities without reliable internet access cannot provide or influence the data AI developers use. Similarly, because some marginalized groups do not have doctors, these people are “invisible” in the databases used to design new drugs or document health problems. “When you have this gap in the data, and the algorithms are trained with this data which is not representative of society, you have a tool that is not representative of the whole society.”
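To make that mechanism concrete, here is a minimal, hypothetical sketch, built with scikit-learn on entirely synthetic data, of how a model trained on data that under-represents one group can perform far worse for that group. The group sizes, features and labels are invented for illustration; nothing here comes from a real dataset.

```python
# Illustrative only: a model trained on data that under-represents one group
# performs worse for that group. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two groups share the same task but differ in feature distribution.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > shift * 5).astype(int)
    return X, y

# Group A is well represented in training; group B is nearly absent,
# e.g. a remote community contributing few records.
Xa_train, ya_train = make_group(5000, shift=0.0)
Xb_train, yb_train = make_group(50, shift=1.5)   # the "invisible" group

X_train = np.vstack([Xa_train, Xb_train])
y_train = np.concatenate([ya_train, yb_train])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out samples from each group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("accuracy, well-represented group:",
      accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy, under-represented group:",
      accuracy_score(yb_test, model.predict(Xb_test)))
```

On a typical run, the model scores well above 80 per cent for the well-represented group but barely above chance for the under-represented one: the “tool that is not representative of the whole society” that Gentelet describes.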
Given the real-world negative impacts of AI developed by the private sector, Gentelet insisted, governments should implement accountable and transparent mechanisms to ensure decisions about public services, such as social security or health, never rely on any kind of algorithm or AI model. Government and the general public also need to become better educated about what types of AI are deployed, how they are deployed, how the data are used, and what the consequences are.
She also advised AI developers and others in the industry to get more training in ethics, as well as social-science education about marginalized and racialized communities and groups. AI experts told Research Money that it is possible in Canada to graduate with a master’s degree in AI without ever having taken an ethics course.
AI industry lacks standards in ethics
The Mila AI institute teamed up with the United Nations Educational, Scientific, and Cultural Organization (UNESCO) to produce a comprehensive, freely available book, titled Missing Links in AI Governance, published earlier this year.
The book’s 18 chapters, selected from a global open call for proposals, include a study by Golnoosh Farnadi, PhD, assistant professor of machine learning at HEC Montréal, a Canada CIFAR AI Chair and a core faculty member at Mila. She and her research team interviewed seven international experts on AI to get their views on the global AI industry.
“The lack of broadly agreed-upon standards of ethics in AI has created a massive power vacuum in which industry players are both defining and enforcing their own ethical norms,” Farnadi and her co-authors say. They point to studies showing AI models perpetuate and cement society’s underlying forms of discrimination, disproportionately affecting historically disenfranchised demographic groups. Such discrimination can happen anywhere in the AI process: in the data, the model, and the outcome. “In order to identify actionable change, we need to unveil the inner workings of the AI industry and understand the challenge in producing ethical products,” their report concluded.
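As a rough illustration of how outcome-level discrimination can be quantified, the snippet below computes a demographic parity gap, one standard fairness metric that compares a model’s positive-decision rates across groups. The decisions and group labels are invented for the example; they are not drawn from Farnadi’s study.

```python
# Illustrative only: demographic parity compares positive-decision rates
# across groups. The data below are made up for the example.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Return per-group positive-prediction rates and the max-min gap."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions (1 = approved) with group labels.
preds  = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rates, gap = demographic_parity_gap(preds, groups)
print(rates)                            # e.g. A: ~0.83, B: ~0.17
print("demographic parity gap:", gap)   # ~0.67, a large disparity
```

A gap near zero suggests the groups are receiving comparable outcomes; auditors typically pair such outcome-level checks with separate tests aimed at the data and model stages.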
The study also highlighted a need for a fairness-aware workforce, with hiring decisions a critical step in the ethical AI pipeline. For example, less than two per cent of employees in technical roles at Facebook and Google are Black. At eight large tech companies evaluated by Bloomberg, only about one-fifth of the technical workforce at each firm are women. “Without having fairness-aware employees who can take ownership of the AI model from the ethical perspective, companies are forced to rely on outsourcing their ethics assessments to external players,” the study said.
Government needs to “reorient” AI industry
Cohen said developing socially beneficial AI involves considerable trust-building and interdisciplinary collaboration. Private sector developers, meanwhile, would rather maximize profit than invest the time, energy and expense that best practices demand. “The trick to making really important socially beneficial AI is valuing the perspectives of others in this domain.”
Mila, for example, hires linguists and gender-studies experts to label AI data. Cohen’s work at Mila includes a misogyny-detection tool that identifies offensive language in written text and helps AI developers understand why it is offensive and how to remove it.
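Mila’s actual tool and its expert-labelled data are not reproduced here; the following is a deliberately simplified, hypothetical sketch of how such a text classifier might be assembled with scikit-learn, with a toy handful of made-up training examples standing in for the expert labels.

```python
# Illustrative only: a tiny text-classification pipeline of the general kind
# used for misogyny detection. Not Mila's tool; the examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice, labels would come from linguists and gender-studies experts,
# as the article notes Mila does when labelling AI data.
texts  = ["women belong in the kitchen", "great analysis, thank you",
          "she only got the job because she's a woman", "see you at the meeting",
          "typical woman driver", "the quarterly report is attached"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = misogynistic, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Likely flagged, since its vocabulary overlaps the labelled examples.
print(clf.predict(["women can't do this job"]))
```

A production system would need expert-labelled data at scale, evaluation across contexts and dialects, and, as Cohen notes, explanations of why flagged text is offensive, none of which a toy classifier provides.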
Cohen co-authored a chapter of Missing Links in AI Governance, calling for the creation of “poles of excellence” in research and training. These centres or organizations would bring together talent working on meaningful or fundamental AI research, focusing on the need for public good and disseminating findings more broadly, including making research results open-source.
Open-source datasets make the process of developing socially beneficial AI much more financially attractive to the private sector, Cohen explained, because companies do not need to spend as much money at the outset developing datasets for their AI applications. A report by McKinsey found that open data can unlock global economic value on the order of $3 trillion annually, contributing to innovation in every sector.
“Governments should act to re-orient this industry: they should invest in AI literacy and education, set up a well-integrated, multi-stakeholder ecosystem, create sufficient incentives along the pipeline to engage and maintain talent, inspire AI for social good applications and promote data sharing,” the authors said.
Mila has received a three-year, $21-million grant as part of Quebec’s Research and Innovation Strategy, which will partly go toward further research and development of socially beneficial AI. Benjamin Prud’homme, executive director, AI for Humanity at Mila, said in an email that the funding will support activities under four pillars.
“All of this work on the [technology] development side should be coupled with work on the policy side and the regulatory side,” Cohen noted.
The federal government’s proposed Artificial Intelligence and Data Act, part of Bill C-27 introduced in June 2022, would set requirements for how AI could be developed and deployed, banning uses that could cause “serious harm.” But the legislation has yet to pass Parliament, amid criticism that it lacked sufficient public consultation and left too many regulatory details to be decided later.
However, some Canadian AI researchers and experts signed an open letter asking MPs to pass the legislation. Canada needs a national regulatory framework, they said, arguing that the legislation is “directionally sound,” balancing the protection of Canadians with the imperative for innovation. Cohen also emphasized the need for government regulation, given the uncertainties about AI development.
“That’s what’s so scary about this technology," she said. "We really don’t know where the beneficial components of this are felt and in what way they’re derived, and where the harms are."
R$