Artificial intelligence is starting to permeate everyday life, from the chatbots some people confide in and the AI tools used in workplaces, to facial recognition systems and worries about superintelligent AI that could destroy humanity.
Luke Stark is an early-career researcher whose work focuses on the history and contemporary effects of AI systems designed to interact with humans, along with digital technologies more broadly.
“AI is a big topic of conversation these days. It’s having a big impact and conversations about it are also having a big impact,” Stark, assistant professor in the Faculty of Information & Media Studies at Western University and a CIFAR Azrieli Global Scholar, told Research Money.
“I think it’s important to ground those conversations in empirical reality, to ground them in what AI systems can do and what they can’t do,” he said.
“[We need] to talk about the impacts of those systems on everybody, not just on AI developers or startup founders or governments, but on people who are experiencing those systems in [their] everyday work and everyday lives.”
Stark is especially interested in the application of social and emotional AI systems in disciplines such as psychology, medicine and education, all areas where AI is deployed to reshape the lives of citizens in the name of societal improvement.
His work explores how the organizations developing these AI tools understand concepts like human emotion, intelligence and sentience; how these definitions are operationalized in AI applications; and how impacted communities resist and reject such technologies and the broader ideologies behind them.
Stark’s scholarship also looks at the conceptual and philosophical limits of key components of AI systems such as logical inference and interactivity, and the ways in which human values like equality, justice and privacy can be supported in the design of digital technologies.
Stark said he was really interested in science fiction as a youngster and was a voracious reader. Also, his parents were committed to and engaged in progressive political and social justice causes.
“Those two things have really shaped my career trajectory – that combination of values growing up and interest in the future and speculative fiction and the way the world might be,” he said.
Stark completed a Master of Arts in History degree at the University of Toronto. But he was increasingly drawn to interdisciplinary media studies and the philosophy of technology.
“I was thinking a lot about new technologies: how they’re being developed, how they’re being built, what kind of human values get built into these tools and technologies, and how does that impact the world at large?”
He did his PhD in Media, Culture and Communication at New York University with Helen Nissenbaum, a well-known philosopher of technology whom Stark cites as one of his mentors. Nissenbaum is now at Cornell University.
Sociologist Denise Anthony, Stark’s post-doctoral supervisor at Dartmouth College in New Hampshire, also has been a “wonderful mentor,” he said.
Another mentor is Fernando Diaz, a computer scientist at Microsoft Research in Montreal who Stark said helped him “navigate the vagaries of actually working in an AI research lab.”
Understanding AI through the lens of animation
Stark had been thinking about AI technologies of various kinds for a few years when OpenAI released its ChatGPT chatbot in November 2022.
There was a lot of uptake of ChatGPT and conversations around the perceived sentience of such technologies and “the singularity” (a hypothetical point in time when technological growth becomes uncontrollable and irreversible), he said.
There also was concern about artificial general intelligence, a theoretical stage of AI development where a machine can perform any cognitive task a human can.
But for Stark, who’d done some work early in his career on animation in the context of social media, what struck him about ChatGPT was that it is text-based animation – or textual animation.
“Animation is a kind of form of creative human expression, not just cartoons,” he noted. “[Animation occurs] any time that humans project liveliness or life onto inanimate objects or onto the external environment.” Puppets, for example, are one animated genre.
“We can in fact identify precisely what ChatGPT and other similar technologies are: animated characters, far closer to Mickey Mouse than a flesh-and-blood bird, let alone a human being,” Stark wrote in an article in Daily Nous, which provides news for and about the philosophy profession.
He developed this idea of understanding ChatGPT and other chatbots through the lens of animation into a paper published last year by the Association for Computing Machinery.
“ChatGPT and other LLMs [large language models] are evocative animations, but like all forms of animation, they present only the illusion of vitality. Claiming these technologies deserve recognition as persons makes as much sense as doing the same for a Disney film,” he wrote.
Chatbots and the large language models on which they’re based are very impressive technical achievements, Stark acknowledged.
“But I think that part of why they’ve been so popular and so successful in the last two or three years is because they have been set up to present themselves as conscious entities,” he said.
OpenAI designed ChatGPT like an animated character, Stark said: “specifically as a bot with a first-person singular interface with all these quirks of human personality and human conversation.”
“I think that’s also why a lot of folks, including very smart folks in computer science, have projected a lot onto them in terms of their capacities, their intelligence.”
AI pioneers Yoshua Bengio, Geoffrey Hinton and others have raised concerns about an “existential risk”: the possibility that AI could develop into artificial general intelligence and decide to destroy humanity.
The fact that many quite eminent computer scientists are concerned about this is a testament to how impressive “the sophistication of the linguistic outputs of these chatbots” is, Stark said.
Yet the chatbots built on OpenAI’s GPT language models (such as ChatGPT and the AI chat in Microsoft’s Bing search engine) work by predicting the likelihood that one word or phrase will follow another, he said. Those predictions are encoded in billions of parameters learned from, in essence, “umpteen” pages of digital text.
Other machine learning techniques are then used to “tune” the chatbot’s responses, training its outputs to be more in line with human language use.
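To make that prediction process concrete, here is a minimal, hypothetical Python sketch that picks the next word from simple word counts (a bigram model). It is only an illustration of the predict-the-next-word idea Stark describes, not how GPT models actually work; real LLMs use neural networks with billions of learned parameters rather than counts.

    # Toy next-word predictor built from word counts (a bigram model).
    # Illustration only: real LLMs use neural networks with billions of
    # parameters, not simple counts over a handful of sentences.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most likely next word and its estimated probability."""
        counts = following[word]
        best, n = counts.most_common(1)[0]
        return best, n / sum(counts.values())

    print(predict_next("sat"))  # ('on', 1.0): "sat" is always followed by "on"
    print(predict_next("the"))  # one of four equally likely followers, probability 0.25

Scaled up enormously and then tuned, the same principle of choosing a statistically likely continuation underlies the fluent sentences chatbots produce.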
“In a way you can think about the kind of collective meaning-making of hundreds of thousands of humans living or dead that is being mathematically reanimated, repurposed to produce these topical sentences that ChatGPT pops out,” Stark said.
“These technologies produce the illusion of meaning on the part of the chatbot: because ChatGPT is interactive, the illusion is compelling, but nonetheless an illusion,” he wrote in his Daily Nous article.
“Understanding ChatGPT and similar LLM-powered bots as animated characters clarifies the capacities, limitations and implications of these technologies.”
Chatbots lack a sense of self and other human qualities
Even though chatbots’ human qualities are an illusion, some people ask them for advice about relationships and other personal issues and use a chatbot as a “surrogate therapist.”
“There have been a number of research studies that suggest LLM-based texts are much more persuasive than the average human interlocutor in convincing, for instance, people to change their political beliefs,” Stark said.
In one study by the University of Toronto, third-party evaluators judged the compassion of written responses created by AI, by humans and by expert crisis responders. The AI responses were preferred and rated as more compassionate than those written by people.
The study’s results show “these AI-based tools are trained essentially to never say the wrong thing if they can help it, in a way that humans in our messy, imperfect, distracted selves can’t [manage],” Stark said.
However, even a rushed human therapist or doctor is still able to do things – such as making judgments, adapting on the fly, and having a holistic conscious understanding of a situation – that a chatbot is incapable of because it’s essentially a type of language model, he said.
“They’ve been designed and trained to articulate a sense of self but they don’t have a sense of self,” he added.
Chatbots and other AI systems can mimic many aspects of human conversation and human emotions, Stark said. But he thinks their practical use as “therapists” or teaching tools in education is limited and people need to be cautious about such applications.
“It’s not clear to me whether they would push people to reexamine their feelings and beliefs and concepts about the world,” he said.
It’s unlikely that chatbots can make people feel uncomfortable in a way that is actually valuable in the context of growth and development, or share actual feelings and memories (rather than simulate such sharing), or tell people when they no longer need the “therapy” – all things a well-trained human therapist does.
“Within the broader context of what therapy is for, I think they really fall down,” Stark said.
Another concern he pointed to is that such AI-based tools are often developed by large private corporations.
“AI, as a conjectural science [rather than being an empirical science], is about making the world that its promoters want. It is the ubiquity and invasiveness of AI-driven systems that their promoters hope will ensure predictability and profit,” Stark wrote in a 2023 article in the journal BJHS Themes, published by Cambridge University Press.
“At best, their interest is to have a kind of subscription model where you keep paying 30 bucks a month to engage with the system,” Stark said.
When it comes to potentially dangerous uses of AI, Stark has publicly warned about the risks of facial recognition technology.
“The downsides of facial recognition in having the ability of this identification at a mass scale, especially in a democratic society, are huge,” Stark said.
“I think there are very few appropriate uses for facial recognition technology. I think as a technology its benefits and gains can be replicated using things that are not so invasive.”
Yet facial recognition systems are increasingly being implemented in places where people have little choice about whether or not to engage with them, such as at airport boarding gates. Given airline travellers are already in a time and stress crunch, “almost nobody’s going to say ‘No’ to that, and that’s quite unfair from a social perspective,” Stark said.
There has been an ongoing conversation about data use versus data collection and putting safeguards on how the data is used, he said.
However, “the actions of the current American administration have really shown the profound danger of relying on regulating [data] use as opposed to regulating collection,” Stark said. “If you have an internal decision maker who just decides they can break the law and use the data how they want, they have less power if that data hasn’t been collected in the first place.”
Advice for early-career researchers
Stark was doing a postdoctoral fellowship at Dartmouth College in New Hampshire when Donald Trump was elected to his first term as president of the U.S.
Based on what he saw then, Stark prioritized coming back to Canada and he returned in 2018. He was completing his second postdoctoral fellowship in Montreal in 2020 and deciding whether to go to an American university or stay in Canada.
“One of the factors was uncertainty around whether Trump would be elected again [and] the status of immigration in the U.S.,” he said.
In the end, Stark stayed in Canada and took his current position at Western University.
Compared with the political situation in the U.S., “the relatively more stable support that the Canadian government provides to researchers has been appealing,” he said. “At the federal level, the government has done a pretty good job.”
Stark also credited the support he receives through his CIFAR Azrieli Global Scholar position. “CIFAR is really involved in a lot of the conversations around AI technologies in Canada. I feel well-supported by them and I’m grateful for that.”
He said he has many colleagues and friends who are academics and researchers in the U.S. “and they are having such a hard time.”
But many are persevering and working hard to build solidarity and push their universities to stand up to the Trump administration, he added. “I certainly haven’t given up on Americans or American research and science, but it is a really challenging time.”
Asked what advice he would give to early-career researchers in Canada, Stark urged them to learn what the Greek philosopher Aristotle called “phronesis,” or practical wisdom. “That’s so important, not just in research but in life and how we navigate the world.”
“Following your gut, but not to the extent that it overtakes your brain. How to get that balance right, how to listen to advice from multiple people and figure out what it means to you and how to triangulate it.”
He pointed to the Detroit-based National Center for Faculty Development & Diversity as a “great organization” that provides professional development and training resources for faculty, postdocs and graduate students, including in Canada.
Young researchers should not only look for mentors but cultivate and have conversations about looking for sponsors, Stark advised. “If you’re a sponsor for somebody, you’re going to bat for them in rooms where they’re not there. In academia you also need to have folks who are batting for you.”
R$