AI Without Guardrails: The Boardroom Reckoning Has Already Begun



There is a quiet but profound asymmetry emerging in boardrooms across Australia.

On one side, organisations are accelerating into artificial intelligence at pace. Employees are using it daily. Suppliers are embedding it into workflows. Customers are interacting with it, often without realising.

On the other side, many boards are still observing from a distance.

This gap is no longer theoretical. It is a governance risk.


“You cannot govern what you do not understand.”

That single truth sits at the heart of the AI challenge facing directors today.

Artificial intelligence is not arriving as a single, contained transformation. It is diffusing across every function, every system, every decision layer. Quietly. Rapidly. Often invisibly.

The rise of what is now commonly referred to as shadow AI is particularly concerning. Employees are pasting sensitive information into public AI tools. Drafting contracts. Analysing financial data. Generating strategy documents. Often without disclosure. Almost always without oversight.

Boards are now confronted with a reality where organisational data is being processed in ways that leadership cannot fully track.

And yet, many directors have never meaningfully used these tools themselves.


A Perfect Storm for Governance Failure

This moment represents a convergence of risks:

  • Mass adoption without coordination
  • Limited board-level fluency
  • Unclear regulatory boundaries
  • Rapid technological evolution
  • Data exposure at scale

According to a 2024 global survey by McKinsey, more than 70 percent of employees are already using generative AI tools in some capacity, often without formal approval.

Closer to home, the Australian Cyber Security Centre has repeatedly warned that emerging technologies are expanding the attack surface for organisations, particularly where governance frameworks lag adoption.

Meanwhile, high-profile incidents are already offering early warning signs.

  • In 2023, Samsung engineers inadvertently leaked confidential semiconductor data via ChatGPT prompts.
  • Financial institutions globally have restricted AI tool usage after concerns around data leakage and compliance breaches.
  • Deepfake technology has already been used in sophisticated fraud attempts, including a widely reported case in Hong Kong involving a fake executive video call that resulted in millions in losses.

These are not edge cases. They are signals.


“AI is one of the most profound technologies humanity is working on.”

— Sundar Pichai, CEO, Google

“Mitigating the risk of extinction from AI should be a global priority.”

— Statement signed by leading AI researchers and executives, 2023


The Board’s Dilemma

Boards are being asked to set policy, define acceptable use, and oversee risk in an area that is evolving faster than most regulatory frameworks can adapt.

This creates a paradox.

Directors are accountable for governance outcomes, yet many lack first-hand experience of the very tools reshaping their organisations.

Without that lived understanding, oversight becomes abstract. Policies become generic. Risks remain hidden.

And in governance, what you cannot see will ultimately surface at the worst possible time.


From Curiosity to Capability: A Minimum Standard for Directors

There is now a baseline expectation emerging for board members.

At a minimum, directors should:

  • Use AI tools personally to understand their capabilities and limitations
  • Observe how outputs are generated, including hallucinations and bias
  • Understand how data is processed and where it may be stored
  • Recognise the difference between public, private, and enterprise AI environments

This is not about becoming technical experts. It is about informed oversight.


What Good AI Governance Actually Looks Like

Strong AI governance is not a single policy document. It is a system of guardrails that evolves with the technology.

1. Clear AI Usage Policies

  • Define acceptable and prohibited uses of AI across the organisation
  • Establish rules for handling confidential and sensitive data
  • Require disclosure of AI usage in key business processes

2. Data Governance and Security Controls

  • Classify data and restrict what can be entered into AI tools
  • Implement secure, enterprise-grade AI solutions where appropriate
  • Monitor data flows and potential exposure points

3. Oversight and Reporting Structures

  • Assign executive accountability for AI governance
  • Require regular reporting to the board on AI usage, risks, and incidents
  • Integrate AI risk into existing risk and audit frameworks

4. Ethical Frameworks

  • Address bias, fairness, and transparency in AI outputs
  • Ensure alignment with organisational values and stakeholder expectations
  • Consider the societal impact of AI-driven decisions

5. Workforce Education

  • Train employees on responsible AI use
  • Build awareness of risks such as data leakage and misinformation
  • Encourage a culture of transparency rather than hidden usage

6. Scenario Planning and Risk Preparedness

  • Model potential AI-driven incidents, including cyber events
  • Test organisational response capabilities
  • Align with broader cybersecurity and crisis management strategies

“The pace of progress in artificial intelligence is incredibly fast.”

— Elon Musk


The Emerging Risk: AI-Driven Cyber Events

One of the most underappreciated risks is the intersection of AI and cybersecurity.

AI is already being used to:

  • Generate highly convincing phishing attacks
  • Automate vulnerability discovery
  • Create deepfake identities and communications

It is not difficult to imagine a near-term scenario in which an AI-driven cyber event causes significant operational, financial, and reputational damage.

Boards should not be asking if this will happen.

They should be asking when.


A Shift in Board Composition?

This moment may also accelerate changes in board composition.

Organisations may increasingly seek:

  • Directors with digital and AI literacy
  • Advisory boards focused on emerging technologies
  • External experts to support governance frameworks

The traditional governance skill set remains critical. But it is no longer sufficient on its own.


“We are at an inflection point.”

That phrase is often overused. In this context, it is accurate.

AI represents a structural shift in how businesses operate, compete, and create value.

Boards that engage early, build capability, and implement meaningful guardrails will be better positioned to harness AI's benefits while managing its risks.

Boards that delay may find themselves reacting to incidents rather than shaping outcomes.


The Question Every Board Should Be Asking

Not whether AI will impact your organisation.

Not whether your people are using it.

But this:

What guardrails do we have in place today, and are they fit for the speed at which this technology is evolving?

Because AI without guardrails is not just a technology risk.

It is a governance failure waiting to happen.

For board directors, the time to engage is not tomorrow. It is now.
