By Eli Fathi and Peter MacKinnon
Eli Fathi, C.M., is Chair of MindBridge Analytics in Ottawa. Peter K. MacKinnon is Senior Research Associate, Engineering at the University of Ottawa and a member of the IEEE-USA Artificial Intelligence Policy Committee in Washington, D.C. The opinions expressed here are the authors’ own.
The emergence of artificial intelligence (AI)-enabled applications has the potential to disrupt society, especially with the advent of generative AI (GAI) tools.
There is growing concern about the sophisticated use of GAI to create deepfake videos, images, audio and text that manipulate or fabricate content. Such use of GAI can spread misinformation, deceive individuals and organizations, and manipulate public opinion.
This online behaviour erodes trust and credibility in many contexts, such as news, politics, institutions and online interactions, even among friends, be they persons or nations, thus contributing to a fragmentation of society.
The fact that these applications are called “artificial intelligence” does not mean they are intelligent, nor are they sentient. We should not ascribe human qualities such as integrity and ethics to the applications themselves, but instead consider these qualities in the way the applications are developed and used.
There is a fundamental difference between the human qualities of integrity and ethics. Integrity is an internalization of beliefs such as honesty and fairness; it is absolute and resides at the core of the human psyche, or anima. Integrity manifests itself through ethical behaviour, a cultural set of rules and ideas that has evolved over time against a framework of moral principles. Ethics, by contrast, is a collection of values shaped by external forces over thousands of years to accommodate differences in religion, culture and socioeconomic status.
GAI has vast potential to create both positive and negative impacts, ranging from the individual to society as a whole.
On the positive side, experts anticipate a significant boon to the world economy, with projections of trillions of dollars added to world GDP from the primary and secondary use of GAI-enabled systems. Aside from financial gains, society will see many benefits as newly developed AI products and services improve the lives of individuals, the value of companies and the efficiency of governments, to name a few beneficiaries.
Risks and potential harms with AI development
The winners and losers in the context of AI and integrity are not fixed or predetermined. The impact can vary based on the actions and decisions taken by stakeholders across different sectors.
Individuals who lack awareness or understanding of AI risks and the importance of integrity in the development and use of AI may be at a disadvantage. They may unknowingly fall victim to biased or discriminatory AI systems or be affected by privacy breaches. Lack of knowledge or control over AI systems can limit their ability to protect their interests and challenge unfair or harmful AI practices.
The biggest winners from AI and GAI so far are the principal platform providers and the early business and individual adopters. Unfortunately, the beneficiaries also include a range of nefarious actors who use GAI with the intent of influencing others for a wide range of selfish and/or ideologically motivated goals, from fraud to espionage.
AI can disrupt jobs: not just manual jobs, many of which are being taken over by AI-enabled robotics from shop floors to warehousing and retail, but also many kinds of knowledge-worker jobs across virtually all professions. This is fuelling public concern about workforce displacement and job security.
AI algorithms can be intentionally biased to achieve specific outcomes or objectives. For instance, in social media platforms or online advertising, AI algorithms can be used to amplify certain content, manipulate user behaviour and reinforce echo chambers.
AI algorithms employed in decision-making processes, such as loan approvals, hiring decisions and criminal justice applications, can inadvertently perpetuate and amplify existing biases and inequalities.
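To illustrate how such bias can be surfaced in practice, here is a minimal sketch of the widely used “four-fifths” disparate-impact check; the hiring decisions and the 0.8 threshold below are illustrative assumptions, not data from any real system.

```python
# A minimal sketch of the "four-fifths" disparate-impact check, using
# made-up hiring decisions for two illustrative applicant groups; the
# data and the 0.8 threshold are assumptions for illustration only.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = approved, 0 = rejected
group_b = [1, 0, 0, 1, 0, 0, 0, 1]

rate_a = sum(group_a) / len(group_a)
rate_b = sum(group_b) / len(group_b)

# Ratio of the lower selection rate to the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}; ratio: {ratio:.2f}")

# Under the four-fifths heuristic, a ratio below 0.8 is commonly
# treated as evidence of possible adverse impact worth auditing.
if ratio < 0.8:
    print("potential disparate impact: audit the model and its data")
```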
Today, AI algorithms are widely used in stock markets for algorithmic trading. However, they also can be manipulated to gain an unfair advantage.
Individuals whose personal data are used by AI systems without proper safeguards can have their privacy compromised. A lack of data protection measures, unauthorized data sharing and insufficient transparency about data usage can erode privacy rights and diminish people’s control over their personal information.
Most AI systems rely on vast quantities of data for training. If the training data are intentionally biased, integrity has been compromised at the source. Adversaries can inject misleading or distorted data into the training process to influence the behaviour or outcomes of AI models, leading to incorrect decisions, biased results and disinformation.
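To make this data-poisoning risk concrete, the following minimal sketch trains a simple classifier on an entirely synthetic dataset, then flips an assumed 25 per cent of the training labels; the dataset, model and poisoning rate are illustrative assumptions only.

```python
# A minimal sketch of label-flipping data poisoning; the dataset,
# model and poisoning rate are all illustrative, not drawn from any
# real system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# An adversary who controls even a slice of the training data can
# silently flip labels; here 25% of the labels are inverted.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 4, replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```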
Malicious use extends beyond the training pipeline. AI-powered chatbots, for example, can be designed to impersonate humans, spread misinformation and/or manipulate emotions to exploit individuals’ vulnerabilities through data theft, financial fraud and social engineering attacks such as bullying.
Building trust in and reliability of AI systems
The link between integrity and AI involves ensuring that AI algorithms, the data they use and the associated applications are designed and operated with integrity and in an ethical manner that respects human rights, privacy and fairness. These qualities are essential to building trust in, and ensuring the reliability of, AI systems, and they help minimize malicious uses of AI.
If AI systems are not designed with integrity and ethical considerations built-in, marginalized communities can be disproportionately affected. Biased algorithms or discriminatory practices can reinforce existing disparities and exacerbate social inequalities. It is crucial to ensure that AI technologies prioritize fairness, inclusivity and equal opportunities for all segments of society.
The exploitative concerns with GAI arise from the malicious use or manipulation of the technology; they do not inherently stem from the technology itself. Responsible development, attention to ethical considerations and appropriate safeguards can help mitigate risks and ensure that integrity is at the core of building and using AI applications.
If AI systems are deployed without integrity as a foundation, public trust in institutions and organizations utilizing AI can erode. Instances of AI failures, breaches of privacy or unethical use of AI can undermine trust in those responsible for AI development and deployment.
Implementing and preserving integrity in AI systems will require internal and external guardrails, along with a universal regulatory framework and system of rules and penalties, which is needed soon.
Adhering to such practices reflects integrity by ensuring AI systems meet specific standards, respect legal requirements and operate within established ethical boundaries. By following these principles, AI can be developed and deployed in a responsible, transparent and beneficial manner.
Unless GAI systems are developed with built-in integrity, and ultimately within an international regulatory framework, society will become the biggest loser. The results will be unpredictable and could even lead to the demise of ethical principles that have been established over many centuries to enable world order.
By prioritizing integrity in AI development and use, society can harness the potential of AI while upholding moral and ethical standards and ensuring a fair and just future.