“For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks of which natural intelligence alone was thought capable.”
So begins the preamble to the Montreal Declaration for a Responsible Development of Artificial Intelligence, one of several landmark efforts by Canadian organizations to grapple with the ethical questions raised by the emergence of powerful new autonomous systems. The Montreal Declaration was created at Université de Montréal and launched on December 4, 2018, in collaboration with civil society and academic partners.
The Canadian government is also seeking to position itself as an international leader in the development of ethical standards for artificial intelligence. Apart from its extensive investments in Canada’s AI ecosystem — $125 million for the Pan-Canadian AI Strategy and $290 million for SCALE.AI, the supercluster dedicated to AI-based supply chains — Ottawa has partnered with France to create an International Panel on Artificial Intelligence, modelled after the Intergovernmental Panel on Climate Change, with a mandate to support the responsible adoption of AI that is “grounded in human rights, inclusion, diversity, innovation and economic growth.”
Other recent instances of federal position-taking on ethical AI include the Declaration on Ethics and Data Protection in Artificial Intelligence, adopted by Privacy Commissioner of Canada Daniel Therrien and his global counterparts in November 2018, and the Directive on Automated Decision-Making, launched by Jane Philpott on March 4 in her final act before departing her position as president of the Treasury Board.
Threat assessment
The current conversation about AI ethics seeks to navigate public uncertainty about the threats and capabilities of AI. Dr. Pierre Levy, a University of Ottawa professor who researches the cultural and cognitive implications of digital technologies, believes that non-experts habitually overestimate the degree to which AI systems are autonomous. “Artificial intelligence is a marketing word,” says Levy. “The reality is that it’s mainly statistical algorithms.”
Levy is skeptical of the many efforts to define ethical principles for AI. “I think this whole story about the ethics of artificial intelligence is a smokescreen created by large enterprises who sell artificial intelligence. Now all the big cloud and web enterprises are based on AI, and they are investing in the area of AI ethics to reassure people,” he says.
Reassurance is in demand. Whatever threat AI poses to the public, it may not exceed the threat posed to AI research by wary citizens and concerned engineers, whose anxieties about privacy violations, algorithmic biases and killer robots hang like a dark cloud over AI companies and institutions. Prominent figures have stoked the flames of fear. Elon Musk called AI “potentially more lethal than nukes,” and Stephen Hawking warned that creating AI might be the last event in human history, “unless we learn how to avoid the risks.”
While most researchers agree that the threat of human-level autonomous AI is frequently overstated, other risks are in fact genuine and pressing. The Montreal Declaration states that “intelligent machines can restrict the choices of individuals and groups, lower living standards, disrupt the organization of labor and the job market, influence politics, clash with fundamental rights, exacerbate social and economic inequalities, and affect ecosystems, the climate and the environment.”
At the same time, the potential benefits of advances in AI are equally numerous. Dr. Yoshua Bengio enumerated many of them at a recent talk in London, England: improving the efficiency of agriculture; reducing the cost of healthcare; making education more accessible through AI tutorial systems; assisting people with disabilities; optimizing energy resources; and aiding conservation efforts, among many others.
“The current tools can create immense economic growth. But they can also be used in ways that are troubling,” said Bengio. Even if the capabilities are occasionally overstated in the public discourse, “we shouldn’t wait for human-level AI to be concerned,” he added.
From statements to action
Principled statements like the Montreal Declaration seek to orient AI research away from the risks and toward the opportunities. To that end, the Declaration outlines ten primary values, including well-being, respect for autonomy and solidarity. It also identifies several key objectives, such as creating an ethical framework for the development and deployment of AI, and opening a forum for discussion to achieve equitable, inclusive and sustainable AI development.
While the Montreal Declaration offers a laudable vision, it’s not clear how such well-intentioned assertions will translate into meaningful impact on AI research and development. Other Canadian organizations are filling the gap with more practical and detailed mandates.
The CIO Strategy Council (CIOSC) has developed a draft set of standards for AI development intended to go a step further than existing ethical frameworks by providing specific requirements that help realize the broader principles.
“We currently don’t have a set of rules or requirements that govern automated decision systems,” says Keith Jansa, executive director of the CIOSC. The lack of rules creates uncertainty in the marketplace and in society, he says. Jansa believes the reputational risk for AI-based companies and institutions is significant enough that they will invest in compliance processes to reassure potential customers and users. The CIOSC’s draft standard is meant to offer a proxy for full-blown regulation, while nonetheless providing a level of assurance that an AI product or service is ethically sound.
The practice at the heart of the proposed standard is the “ethical impact assessment.” This assessment would grade proposals for automated decision systems at one of four risk levels, from low to high. The CIOSC’s emphasis on assessment echoes the Treasury Board’s Directive on Automated Decision-Making, which requires that an Algorithmic Impact Assessment be completed prior to the production of any Automated Decision System. Whereas the CIOSC’s assessment must be executed by external experts, the Government of Canada’s algorithmic assessment tool is an interactive questionnaire that is meant to be completed internally by the departments themselves.
Putting principles to the test
On December 3, 2018, Université Laval launched the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technologies (OIISIAN), a broad coalition of research centres, non-governmental organizations, businesses, government institutions, and groups in Quebec, across Canada and around the world. OIISIAN, which has received $7.5 million in funding from the Fonds de recherche du Québec (FRQ) over five years, is led by Dr. Lyse Langlois, professor of industrial relations and director of the Institut d’éthique appliquée. Close to 160 researchers in Canada and elsewhere will contribute to the hub.
The intention behind OIISIAN is to test the principles outlined in the Montreal Declaration through experiments that engage both the public and private sectors. “Otherwise, it could remain just a facade,” says Langlois in conversation with RE$EARCH MONEY. “Let’s look at the limits and the possibilities and remain open to adjusting.”
Even as groups like OIISIAN and the CIOSC seek to lend force and credibility to the movement for ethical AI, work on the development of broad ethical principles for AI continues to be important. “The first step is to develop a better understanding of what is meant by ‘responsible AI’,” says Dr. François Laviolette, an AI researcher associated with OIISIAN. “There are many decisions to make that are neither white nor black.” We’re still in the process of turning over rocks, he says, and we haven’t reached the point of knowing what to do with everything we’ve found.
R$