Corporate boards need to provide guidance and accountability on AI to build trust, foster innovation and manage risk

Yogi Schulz
March 5, 2025

Yogi Schulz has more than 40 years of information technology experience in various industries, including the energy industry; his specialties include IT strategy, web strategy and systems project management. His new book, co-authored by Jocelyn Schulz Lapointe, is “A Project Sponsor’s Warp-Speed Guide: Improving Project Performance.”

Artificial intelligence is sweeping through most organizations. It's out of control, like the Wild West.

AI output is showing up in reports and presentations. The Apple App Store and Google Play offer many free AI apps of varying quality. Every AI software vendor provides access to their prompt website.

AI output is part of search results. AI capabilities are integrated into desktop software.

Board members see dramatic headlines in the media about AI fiascos. The articles describe disastrous outcomes that all boards want to avoid. These outcomes include:

  • High organization disruption and recovery costs.
  • Loss of reputation.
  • Distracting regulator investigations and fines.

On the other hand, the excitement around AI points to an incredible opportunity that no organization can afford to ignore. Improved organizational performance benefits include:

  • Accelerated product development.
  • Enhanced employee productivity and innovation.
  • Improved customer service with richer personalization.
  • Reduced capital and operating costs.
  • Optimized supply chain operations.

For example, SLB, the global oil and natural gas service company previously known as Schlumberger, announced a 20-percent increase in digital revenue, totalling US$2.44 billion.

SLB attributed the success to the company's strategic use of AI to optimize operations and drive efficiency for its producer customers, including Aramco.

The board-level challenge is to provide corporate governance guidance and accountability that builds trust and fosters innovation while managing the risks. Effective oversight mechanisms address risks such as:

  • Biased output that leads to discrimination.
  • Privacy infringement that leads to lawsuits.
  • Regulatory noncompliance that leads to investigations and fines.
  • Misuse by staff that leads to intellectual property loss and erroneous recommendations.
  • Poor data management that fails to exploit this valuable corporate asset.

The board's governance role suggests that future meetings should include a discussion of these specific topics:

  • Acceptable AI usage.
  • AI risk management.
  • AI hallucinations.
  • AI project best practices.
  • Cybersecurity for AI applications.
  • AI for cybersecurity defences.

Discussing these topics should lead to policies that form the basis for staff accountability. Let's explore these board AI topics that implement governance by design policies for AI.

Acceptable AI usage

Staff is experimenting with generative AI. They are oblivious to the risks. The board and the CEO should sponsor the development of an acceptable AI usage policy to encourage AI usage while reducing risk. Here's what makes AI appealing:

  • Access to AI is easy. No approval is required.
  • The cost at AI software vendor websites is free or incredibly low.
  • AI technology is new, fascinating and triggers Fear of Missing Out (FOMO).
  • AI promises to make us look smarter with less effort.
  • AI apps are readily available for every laptop and smartphone.

A corporate acceptable AI usage policy articulates guidelines and expectations of staff behaviour. The policy:

  • Educates staff about generative AI opportunities and risks.
  • Raises awareness of organization policies.
  • Avoids constraining innovation.
  • Ensures the responsible use of generative AI.
  • Reduces the risks associated with this technology.
  • Describes consequences of policy violations.

One example is IBM's internal implementation of its AI ethics policy. The company delivered in-depth training programs and created an internal platform to guide employees on the responsible use of AI, focusing on its benefits and potential risks.

This hands-on approach cultivated a workplace culture of responsibility and innovation in leveraging AI technologies.

Related reading: Why You Need a Generative AI Policy.

This article offers advice on how to develop clear organization guidelines for staff to follow.

AI risk management

Organizations and AI application projects are dealing with AI risk haphazardly. They are shooting from the hip. The board and the CEO should sponsor an AI risk management process to reduce AI risk.

The fastest and easiest way to implement an AI risk management process is to adopt one of the existing AI risk frameworks. For example, the MIT AI risk framework starts with these risk domains:

  • Discrimination and toxicity.
  • Privacy and security.
  • Misinformation.
  • Malicious actors and misuse.
  • Human-computer interaction.
  • Socioeconomic and environmental.
  • AI system safety, failures and limitations.

The MIT AI risk framework elaborates on these risk domains with multiple sub-domains.

Organizations can establish a policy requiring every AI application to perform risk management and implement necessary mitigations repeatedly during every project phase. In this way, risk management becomes an integral part of the innovation process rather than an afterthought.

Related reading: What are the risks of Artificial Intelligence?

This is a comprehensive living database at MIT of over 1000 AI risks categorized by their cause and risk domain.

AI hallucinations

In their rush to complete AI projects, teams often do not pay enough attention to AI hallucinations. AI hallucinations occur when AI applications produce erroneous, biased or misleading output.

To reduce the risk and frequency of AI hallucinations, the board and the CEO should sponsor the adoption of processes that reduce hallucinations in AI applications. Considerations to reduce AI hallucinations include:

  • Clear model goal.
  • Balanced training data.
  • Accurate training data.
  • Sufficient model tuning.
  • Precision prompts.
  • Fact-check outputs.
  • Limit the scope of responses.
  • Comprehensive model testing.
  • Adversarial fortification.
  • Ongoing human oversight.

The organization can adopt a policy whereby every AI application demonstrates that relevant processes that reduce hallucinations have been incorporated into the AI application's design and planned operations before the application can be promoted to production status.

Related reading: How can engineers reduce AI model hallucinations?

This article discusses best practices to help engineers significantly reduce model hallucinations.

AI project best practices

AI application development is brand new. Teams often do not have the necessary skills to be successful. Sometimes, teams are under budget or schedule pressure. These situations lead teams to cut corners and disregard AI project best practices.

To improve AI application development, the board and the CEO should sponsor the adoption of AI project best practices.

A widely accepted summary list of project best practices consists of the following:

  • A project goal aligned with the business plan.
  • A credible business case.
  • A senior project sponsor.
  • A suitably experienced project manager.
  • A project team with the required skills and experience.
  • A reasonable understanding of project risks.
  • A comprehensive project charter.
  • A reasonable project management plan.

Related reading: Download this Warp-speed project assessment for a more comprehensive list of project best practices that a board should consider.

The widely accepted project manager selection criteria consist of the following:

  • Project management expertise and experience.
  • Desired personal attributes.
  • Industry and business experience.
  • Technical knowledge in information technology.

Related reading: Characteristics of a successful project manager.

Successful project managers exhibit specific characteristics that contribute to project success and reduce the risk of failure.

By embracing these AI project best practices and avoiding common pitfalls, organizations can significantly improve the likelihood of project success while minimizing risks.

Cybersecurity for AI applications

Because AI applications are new, project teams often fail to consider cybersecurity requirements in their designs. Adding cybersecurity defence features into AI applications later is less successful and more expensive.

AI applications face new cybersecurity attack surfaces, including:

  • Prompt injection.
  • Training data attacks.
  • Model theft.
  • Model inversion attacks.

To reduce AI cybersecurity risks in applications, the board and the CEO should sponsor a review process that ensures adequate cybersecurity defense features are included in AI applications. The project characteristics that create cybersecurity risks include:

  • Ambitious project scope risks.

  • Project team skill risks arising from inexperience.
  • Model vendor and software risks due to product immaturity.
  • Software design gaps and instability risks.
  • Management expectations for an aggressive schedule create a risk of inadequate testing.

The organization can adopt a policy that every AI application design must include relevant cybersecurity defense features.

Related reading: Top 10 causes of stalled AI/ML projects and some suggestions.

This article discusses the 10 most common causes of this unfortunate situation and what project managers can do to correct the problem.

AI for cybersecurity defences

Cyber attackers have noticed the explosion of AI capabilities and developed new attack surfaces, including those listed above, to challenge organizations' cybersecurity defences.

In response, cybersecurity software vendors have quickly jumped on the AI bandwagon. Unfortunately, some vendors have only added the word AI to their marketing materials and made few, if any, enhancements to their software. Other vendors have made more functionality enhancements and hyped those.

To further strengthen cybersecurity defences with AI, the board and the CEO should sponsor a review that ensures adequate cybersecurity defense features are included in the computing environment. The review should consist of ensuring that:

  • AI features to better detect and respond to cyber-attacks have been implemented.
  • A cybersecurity platform integrates multiple tools, data and processes into a unified system that includes AI features. This integrated approach is preferable to numerous point solutions.
  • AI features are purpose-built for cybersecurity management. AI-based cybersecurity should not be based on domain-agnostic tools.
  • AI features enhance cybersecurity analysts' experience. New features should not simply add more automation.

The organization can adopt a policy that includes AI features in the organization's cybersecurity defences.

Related reading: Successfully managing Cybersecurity projects in the Age of AI.

As this article points out, "Managing cybersecurity projects in the age of AI has become more demanding. The stakes are higher. The cost to recover from a successful cyberattack is typically millions of dollars. The damage to reputation is significant but difficult to estimate."

Conclusion

Every organization can develop AI governance policies at a modest cost through the collaboration of staff and external consultants. The operation and enforcement of the AI governance policies are typically assigned to human resources and information technology staff.

Every board of directors should sponsor the development and use of AI governance policies that clarify staff accountability and build AI trust while controlling AI risks.

These policies will help drive innovation and deliver measurable business results.

R$

 

 


Other stories mentioning these organizations, people and topics
Organizations: Apple, Google, IBM, Massachusetts Institute of Technology, and SLB (Schlumberger)
People: Yogi Schulz
Topics: guidance for corporate boards on implementing AI

Other News






Events For Leaders in
Science, Tech, Innovation, and Policy


Discuss and learn from those in the know at our virtual and in-person events.



See Upcoming Events











Don't miss out - start your free trial today.

Start your FREE trial    Already a member? Log in






Top

By using this website, you agree to our use of cookies. We use cookies to provide you with a great experience and to help our website run effectively in accordance with our Privacy Policy and Terms of Service.