There are lessons to be learned from internet technologies introduced in the last 15 years if society wants artificial intelligence to benefit democracy rather than undermine it, says McGill University professor Taylor Owen.
“Social media is shaping our life, our economy, our democracy and our society. It’s going to be the same with AI,” he told Scale AI’s ALL IN conference in Montreal.
The problem with previous generations of technologies is that most of the decision-making power over how such technologies were built and embedded in society was given to private entities or governments, said Owen, professor and founding director of the Centre for Media, Technology and Democracy at McGill’s Max Bell School of Public Policy.
Everyday citizens must be meaningfully engaged from the start in developing and implementing AI systems for the technology to benefit democracy, he said during a panel session titled “AI Democracy, Leadership & Social Impact.”
That includes citizens having the power to decide not just how to govern AI systems, but whether they want certain AI systems built at all and, if so, how to implement them.
“We’ve learned the cost of not bringing citizens in a meaningful way into that conversation,” said Owen, a senior fellow at the Centre for International Governance Innovation and the host of the Big Tech podcast.
“We’ve left the discourse in our society, which is core to democratic societies – how we share information and how we adjudicate the viability and filter the viability of information – we’ve left that, in many ways, to technologies.”
As a result, citizens’ trust in democratic society, in once-respected institutions, and in governments, private tech actors and the media, is at an all-time low, Owen said.
Those institutions – the foundations of a democracy – have failed to devolve agency, responsibility and power to citizens to engage in meaningful discourse and make decisions about emerging and disruptive technologies, he said.
Instead of having that discourse, democratic governments are – ironically, he pointed out – simply imposing rules, safeguards and standards for new technologies on the private sector and citizens.
“I really think that’s the lesson we have to take from these past 15 years when we look at a new technology.”
Another important lesson from past generations of internet technologies is that self-regulation of these technologies fundamentally doesn’t work, Owen said.
“I think we trusted that for far too long, and we’ve now gone through this long, difficult process of figuring out regulatory regimes and safeguards – not just for mitigating the risks of these technologies, which are clearly there – but for holding individuals and non-government agencies, organizations and companies accountable for the products they’re building and the impacts they have on society and the economy.”
At the core of a new and different approach to developing and implementing AI systems is not treating them as unknowable systems that only those who build them understand, Owen said.
The new approach with AI is not about communicating information to people, he added. It’s about finding mechanisms to have a dialogue with citizens about how these technologies are already shaping their lives, for good and bad.
“That means not just communicating, but actual, deliberative processes in which citizens are given power in shaping these technologies.”
Marginalized communities are at greatest risk from AI
As with social media and other internet technologies, the risks of AI are much greater for marginalized communities in Canada and globally, said Lynnsey Chartrand, Indigenous projects manager at Mila – Quebec Artificial Intelligence Institute.
“They’re going to bear the brunt of the risks, of the discrimination, the biases, that come out of these systems,” she said.
The citizenship roles that government forced Indigenous people into – through residential schools and the Indian Act, for example – have left a profound legacy of mistrust in democratic systems in Canada, Chartrand said.
When it comes to AI specifically, “to say that Indigenous people aren’t in this space is an understatement. They’re not at the table, and they’re certainly not at the decision-making table.”
When Indspire looked at its network a few years ago, the organization found only four Indigenous people who were enrolled in post-secondary computer science programs, she noted.
Indspire is an Indigenous national charity that invests in the education of First Nations, Inuit and Métis people for the long-term benefit of these individuals, their families and communities, and Canada.
Indigenous people make up about five percent of Canada’s population, but less than two percent of the STEM workforce.
There are remote Indigenous communities in northern Ontario where all high school education is delivered online, over poor digital infrastructure, Chartrand said.
“For marginalized communities, it’s a bit of a luxury to think about preserving and upholding a democracy that still doesn’t sit well with everyone that’s in this democracy,” she said. “[Especially] when you’re worried about access to education, access to health care, access to clean water.”
To change that picture, Mila brought 12 talented individuals into the institute this summer to develop AI projects aimed at benefitting Indigenous communities, Chartrand said. The “Indigenous Pathfinders in AI” program, developed in partnership with Indspire, is a career pathway program designed to inspire Indigenous talent to learn, develop and lead the evolution of AI.
Through a series of workshops, activities and collaborative projects, participants delve deeper into AI through an experience that emphasizes Indigenous approaches, perspectives and communities.
Much more needs to be done to spark interest in AI and STEM among Indigenous youth, much earlier in their educational journeys, Chartrand said.
Those opportunities have to be tailored for Indigenous groups to remove barriers and provide access [to AI], include culturally relevant material and mentors, and ensure access to networking opportunities at a reasonable cost, she said.
“This [AI] space is inaccessible for a lot of people. Find a way marginalized communities can access these spaces.”
AI designers and standards can positively influence AI development
Adrien Morisot, a technical staff member at Toronto-based AI developer Cohere, said designers who create AI systems can have a lot of positive influence in the two major stages used to develop and train the systems.
In the first stage, these systems are trained by essentially reading information and knowledge from across the internet. In the second stage, they are fine-tuned to instill further properties, such as being a pleasant chatbot to interact with.
But no designer of a new AI system simply has it read the entire internet, “because the internet is full of nonsense, it’s full of garbage,” including hate speech and misinformation, Morisot said.
“A large part of our job when we’re creating these systems is filtering out the bad part of the internet and augmenting the good part,” he said. “There’s a lot of human judgment that goes into this, a lot of design decisions that are made.”
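To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of human-designed filtering Morisot describes. It is not Cohere’s actual pipeline; the heuristics, thresholds and blocklist terms are invented purely for illustration.

```python
# Illustrative sketch only, not Cohere's actual pipeline. It shows the kind of
# human judgment Morisot describes: simple heuristics that drop low-quality or
# unwanted web text before it reaches pretraining.

BLOCKLIST = {"example-spam-phrase", "example-slur"}  # hypothetical terms chosen by human reviewers

def keep_document(text: str) -> bool:
    """Return True if a scraped web document passes basic quality heuristics."""
    words = text.split()
    if len(words) < 50:                         # too short to carry useful signal
        return False
    if any(term in text.lower() for term in BLOCKLIST):
        return False                            # drop flagged content
    if len(set(words)) / len(words) < 0.3:      # heavily repetitive boilerplate
        return False
    return True

corpus = ["Some long, informative article text ...", "buy now buy now buy now ..."]
training_data = [doc for doc in corpus if keep_document(doc)]
```

Real filtering pipelines layer many more signals (language identification, deduplication, toxicity classifiers), but each of those is likewise a design decision made by people.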
Most of the information on the internet is in English, so for a long time, all AI systems just spoke English.
But Cohere, in an effort to make its AI systems more diversified and inclusive, developed an initiative called Aya Expanse, which proactively sought out people who speak marginalized languages around the world and talked with them to create a multilingual family of AI systems.
“We are very intentional about training on as many languages as possible and being proactive about this,” Morisot said.
During the second stage of instilling different properties into AI systems, Cohere is also extremely careful about how this is done, using many safety principles and a detailed style guide, he said.
One reason why people mistrust AI, he noted, is that Google’s search algorithm and Facebook’s ranking algorithm, for example, are completely secret “black boxes.” The data that AI systems are trained on are also typically kept secret.
In contrast, Cohere develops “open-weight” large language models with publicly available architecture and parameters, so people can examine the systems and know exactly what is going to happen with them.
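As an illustration of what “open weight” means in practice, the sketch below inspects a publicly released checkpoint with the Hugging Face transformers library. The repository ID is assumed to point at one of the published Aya Expanse models (access may require accepting the model’s license on Hugging Face); it is shown only as an example of how anyone can examine an open-weight model’s architecture and tokenizer.

```python
# Minimal sketch: inspecting an open-weight model whose architecture and
# parameters are publicly released. The repository ID below is an assumption;
# substitute any open-weight checkpoint you have access to.
from transformers import AutoConfig, AutoTokenizer

model_id = "CohereForAI/aya-expanse-8b"  # assumed Aya Expanse checkpoint ID

# Because the weights and configuration are public, the model's structure
# (layer count, hidden size, vocabulary) can be examined before it is ever run.
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config)                               # full architecture description
print(tokenizer("Bonjour, le monde!"))      # token IDs for a non-English input
```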
Pierre Bilodeau, vice-president, standardization services, at the Standards Council of Canada, said he believes there are international standard-setting organizations and emerging standards that can be applied to AI in specific democracies and with different tools.
“What’s important is what safeguards or guardrails we put as a society or as a democracy to try to manage the positive behaviour, try to incentivize this, and mitigate the negative impacts,” he said.
The International Organization for Standardization and the International Electrotechnical Commission are working on, and have already developed, some standards to guide the development and deployment of AI systems, Bilodeau said.
“It’s not a prescriptive approach,” he noted. “It [then] becomes up to the regulators in jurisdictions to decide how to apply the standards,” and whether to use regulations, voluntary measures, or a combination of both.
Bilodeau acknowledged, however, that standards aren’t infallible. Even when it comes to standards in the automobile manufacturing industry, “they save men’s lives, [but] they’re not so good to save women’s lives.”
For example, when a woman is involved in a car crash, she is 17 percent more likely than a man to die, 47 percent more likely to be seriously injured, and 71 percent more likely to be moderately injured – even when researchers control for factors such as height, weight, seatbelt usage and crash intensity. That gap stems from how cars are designed, typically using crash-test dummies based on standards for the “average” male.
However, Bilodeau said there’s a will among standards-setting organizations to change and bring communities and underrepresented populations into these organizations to improve standards development. “But we need their help. And it takes time.”
Owen from McGill University said developing guardrails and regulations for AI development and implementation will take a shift in mindset: the people building and regulating AI systems must give up some of their power to everyday citizens.
“It’s not a communicating act, it’s like a building-with [act], and that’s a really different mentality,” he said.
“I think we need to take a risk and give agency and trust to citizens to help make some of these decisions about how technologies are built, whether they’re built, and how they’re embedded in our society.”