Yogi Schulz has more than 40 years of information technology experience in various industries, including the energy industry; his specialties include IT strategy, web strategy and systems project management. His new book, co-authored by Jocelyn Schulz Lapointe, is “A Project Sponsor’s Warp-Speed Guide: Improving Project Performance.”
Artificial intelligence is sweeping through most organizations. It's out of control, like the Wild West.
AI output is showing up in reports and presentations. The Apple App Store and Google Play offer many free AI apps of varying quality. Every AI software vendor provides access to their prompt website.
AI output is part of search results. AI capabilities are integrated into desktop software.
Board members see dramatic headlines in the media about AI fiascos. The articles describe disastrous outcomes that all boards want to avoid. These outcomes include:
On the other hand, the excitement around AI points to an incredible opportunity that no organization can afford to ignore. Improved organizational performance benefits include:
For example, SLB, the global oil and natural gas service company previously known as Schlumberger, announced a 20-percent increase in digital revenue, totalling US$2.44 billion.
SLB attributed the success to the company's strategic use of AI to optimize operations and drive efficiency for its producer customers, including Aramco.
The board-level challenge is to provide corporate governance guidance and accountability that builds trust and fosters innovation while managing the risks. Effective oversight mechanisms address risks such as:
The board's governance role suggests that future meetings should include a discussion of these specific topics:
Discussing these topics should lead to policies that form the basis for staff accountability. Let's explore these board AI topics that implement governance by design policies for AI.
Acceptable AI usage
Staff is experimenting with generative AI. They are oblivious to the risks. The board and the CEO should sponsor the development of an acceptable AI usage policy to encourage AI usage while reducing risk. Here's what makes AI appealing:
A corporate acceptable AI usage policy articulates guidelines and expectations of staff behaviour. The policy:
One example is IBM's internal implementation of its AI ethics policy. The company delivered in-depth training programs and created an internal platform to guide employees on the responsible use of AI, focusing on its benefits and potential risks.
This hands-on approach cultivated a workplace culture of responsibility and innovation in leveraging AI technologies.
Related reading: Why You Need a Generative AI Policy.
This article offers advice on how to develop clear organization guidelines for staff to follow.
AI risk management
Organizations and AI application projects are dealing with AI risk haphazardly. They are shooting from the hip. The board and the CEO should sponsor an AI risk management process to reduce AI risk.
The fastest and easiest way to implement an AI risk management process is to adopt one of the existing AI risk frameworks. For example, the MIT AI risk framework starts with these risk domains:
The MIT AI risk framework elaborates on these risk domains with multiple sub-domains.
Organizations can establish a policy requiring every AI application to perform risk management and implement necessary mitigations repeatedly during every project phase. In this way, risk management becomes an integral part of the innovation process rather than an afterthought.
Related reading: What are the risks of Artificial Intelligence?
This is a comprehensive living database at MIT of over 1000 AI risks categorized by their cause and risk domain.
AI hallucinations
In their rush to complete AI projects, teams often do not pay enough attention to AI hallucinations. AI hallucinations occur when AI applications produce erroneous, biased or misleading output.
To reduce the risk and frequency of AI hallucinations, the board and the CEO should sponsor the adoption of processes that reduce hallucinations in AI applications. Considerations to reduce AI hallucinations include:
The organization can adopt a policy whereby every AI application demonstrates that relevant processes that reduce hallucinations have been incorporated into the AI application's design and planned operations before the application can be promoted to production status.
Related reading: How can engineers reduce AI model hallucinations?
This article discusses best practices to help engineers significantly reduce model hallucinations.
AI project best practices
AI application development is brand new. Teams often do not have the necessary skills to be successful. Sometimes, teams are under budget or schedule pressure. These situations lead teams to cut corners and disregard AI project best practices.
To improve AI application development, the board and the CEO should sponsor the adoption of AI project best practices.
A widely accepted summary list of project best practices consists of the following:
Related reading: Download this Warp-speed project assessment for a more comprehensive list of project best practices that a board should consider.
The widely accepted project manager selection criteria consist of the following:
Related reading: Characteristics of a successful project manager.
Successful project managers exhibit specific characteristics that contribute to project success and reduce the risk of failure.
By embracing these AI project best practices and avoiding common pitfalls, organizations can significantly improve the likelihood of project success while minimizing risks.
Cybersecurity for AI applications
Because AI applications are new, project teams often fail to consider cybersecurity requirements in their designs. Adding cybersecurity defence features into AI applications later is less successful and more expensive.
AI applications face new cybersecurity attack surfaces, including:
To reduce AI cybersecurity risks in applications, the board and the CEO should sponsor a review process that ensures adequate cybersecurity defense features are included in AI applications. The project characteristics that create cybersecurity risks include:
The organization can adopt a policy that every AI application design must include relevant cybersecurity defense features.
Related reading: Top 10 causes of stalled AI/ML projects and some suggestions.
This article discusses the 10 most common causes of this unfortunate situation and what project managers can do to correct the problem.
AI for cybersecurity defences
Cyber attackers have noticed the explosion of AI capabilities and developed new attack surfaces, including those listed above, to challenge organizations' cybersecurity defences.
In response, cybersecurity software vendors have quickly jumped on the AI bandwagon. Unfortunately, some vendors have only added the word AI to their marketing materials and made few, if any, enhancements to their software. Other vendors have made more functionality enhancements and hyped those.
To further strengthen cybersecurity defences with AI, the board and the CEO should sponsor a review that ensures adequate cybersecurity defense features are included in the computing environment. The review should consist of ensuring that:
The organization can adopt a policy that includes AI features in the organization's cybersecurity defences.
Related reading: Successfully managing Cybersecurity projects in the Age of AI.
As this article points out, "Managing cybersecurity projects in the age of AI has become more demanding. The stakes are higher. The cost to recover from a successful cyberattack is typically millions of dollars. The damage to reputation is significant but difficult to estimate."
Conclusion
Every organization can develop AI governance policies at a modest cost through the collaboration of staff and external consultants. The operation and enforcement of the AI governance policies are typically assigned to human resources and information technology staff.
Every board of directors should sponsor the development and use of AI governance policies that clarify staff accountability and build AI trust while controlling AI risks.
These policies will help drive innovation and deliver measurable business results.
R$
Organizations: | Apple, Google, IBM, Massachusetts Institute of Technology, and SLB (Schlumberger) |
People: | Yogi Schulz |
Topics: | guidance for corporate boards on implementing AI |