What “Doing Something About AI” Now Means Inside Enterprises
Accountability for AI adoption is shifting toward audit and risk committees, while L&D is expected to ensure readiness without clear authority.
Boards are already treating AI as a risk and governance issue. Many organizations have not decided who owns the consequences. That ambiguity is starting to surface inside learning and capability functions.
Over the last 18 to 24 months, large enterprises have resolved one question about AI while leaving another dangerously open.
Boards have decided that AI is a governance and risk issue.
Executives have not agreed on who actually owns the consequences.
Public disclosures now show AI oversight being routed through audit and risk committees, often using incident response logic originally designed for cybersecurity. Companies including Accenture, Visa, TD Bank, and John Wiley have disclosed regular AI risk briefings to boards or audit committees, escalation thresholds, and formal incident review pathways. In several cases, even immaterial AI incidents are reviewed for potential board escalation.
The signal is clear. At the top of the organization, AI is already treated as a control and accountability problem.
What has not been settled is how accountability flows downward.
In practice, AI ownership is fragmented. Some firms have appointed Chief AI Officers to create single-threaded accountability. Others leave ownership with CIOs or distribute it across business units using first-line-of-defense models. Risk, compliance, privacy, and cyber teams increasingly act as gatekeepers. None of these structures places Learning and Development in a primary ownership role.
Yet L&D is being asked to solve for readiness.
This is the gap. AI failures are escalating through governance channels, but workforce readiness is being implicitly delegated to L&D without a corresponding transfer of authority over risk thresholds, escalation rules, or decision rights.
Evidence of this mismatch is already visible.
Investor and broker research shows boards facing growing pressure to disclose AI oversight when risk is material, not when training is complete. At the same time, CEOs rank AI and security as protected budget lines, while middle managers report confusion over who is responsible for outcomes. External studies show leadership optimism about AI returns diverging sharply from operational skepticism, particularly among managers closest to execution.
When something goes wrong, the organization has three possible explanations. The model failed. Governance failed. Or people failed.
In the absence of clear ownership, those explanations blur. Training is delivered, but denial rates rise. Tools are deployed, but managers override outputs with increasing frequency. Audit trails grow thicker, while accountability becomes harder to trace.
L&D sits in the middle of this ambiguity.
The problem is not that L&D lacks tools, platforms, or content. The problem is that the organization has not decided whether AI is fundamentally a training issue, a capability issue, or a performance and risk management issue. Boards are already acting as if it is the last of these. Many executive teams are still speaking as if it is the first.
That inconsistency is now observable. It shows up in governance structures, escalation playbooks, and budget decisions. It is also beginning to shape how executive peers assess credibility.
This article is not an argument for more AI training, but rather an examination of what happens when accountability for AI outcomes becomes more stringent at the top of the organization, while responsibility for readiness remains diffuse below it.
The implications of that gap are where most L&D leaders are currently exposed.
AI Is Compressing Tasks and Expanding Judgment, and That Shift Is Already Visible
Across multiple sectors, AI deployment is producing a consistent pattern that senior executives recognize, even if it is rarely labeled directly.
Routine work is moving faster. Decisions are not becoming simpler.
Post-implementation evidence from healthcare, insurance, financial services, and software operations shows that AI reliably shortens transactional steps while increasing the volume of exceptions, overrides, and escalations that require human judgment. The net effect is not reduced managerial load but a redistribution of it.
Healthcare provides one of the clearest signals. Claims automation and revenue cycle tools have accelerated initial screening, yet denial rates continue to rise. Experian Health reports that more than 40 percent of providers now face denial rates above 10 percent, despite broad adoption of AI-assisted claims processes. Managers are spending less time on manual entry and more time reconciling AI-driven inconsistencies, managing appeals, and revising policies to withstand audits. Coding audits show AI systems upcoding diagnoses in ways that require human correction and justification, shifting effort from production to verification.
Insurance shows a similar pattern. At Tryg, machine learning models now close the majority of simple motor claims in minutes. At the same time, internal commentary highlights employee hesitation, distrust of outputs, and added manual review layers for cases that fall outside clean parameters. Regulators and auditors have responded by pushing insurers to formalize override rationales and expand audit trails, increasing supervisory workload even as straight-through processing improves.
Financial services offers a parallel signal. Faster underwriting and automated customer interactions have triggered regulatory interventions when models fail to escalate edge cases. Settlements tied to discriminatory underwriting models and chatbot failures have forced lenders to insert new human checkpoints and retrain managers on when automated decisions must be paused or reversed. Compliance teams now expect managers to explain not just what decision was made, but why an AI recommendation was accepted or rejected.
Operations-heavy environments show the same tension in a different form. GitLab’s 2025 AI Paradox survey found that while AI accelerates coding, managers now contend with fragmented toolchains, overlapping AI systems, and new compliance checks that consume nearly a full workday per team member each week. Time saved at the task level is reabsorbed downstream through coordination, arbitration, and risk management.
These are not edge cases. They are second-order effects.
AI reduces execution time while increasing ambiguity at the decision boundary. When outputs are partial, probabilistic, or context-dependent, managers are forced into more frequent judgment calls. Those calls carry operational, regulatory, and reputational consequences.
This shift has direct implications for Learning and Development.
Executives are not measuring whether employees have been exposed to AI tools. They are observing whether exceptions are handled consistently, whether overrides are defensible, and whether escalations reach the right level at the right time. In healthcare, insurance, and lending, failure to do so is already producing audit findings, appeals backlogs, and legal scrutiny.
Traditional learning responses do not map cleanly onto this environment. Content can explain how a tool works. It does not resolve when a manager should trust it, override it, or stop a process altogether.
The result is a quiet recalibration. Where AI-driven exceptions rise, leaders infer that the organization lacks judgment readiness, not technical literacy. Where override behavior is inconsistent, they infer gaps in manager enablement rather than frontline skills.
This is the pressure point L&D leaders are encountering, often without it being stated explicitly. The system is signaling that capability now lives at the level of judgment under uncertainty. Most learning functions were not designed for that mandate.
How L&D Leaders Are Preserving Influence by Narrowing Their Role
A small but growing group of L&D leaders has already drawn a conclusion from the patterns above.
If AI increases judgment load, expands exception handling, and pulls accountability upward toward boards and risk committees, then credibility does not come from doing more. It comes from being precise about what L&D owns, what it supports, and where it stops.
This shift is not theoretical. It is showing up in organizational design, budget decisions, and executive access.
First, these teams have deliberately stepped away from end-to-end ownership of AI execution.
At companies such as D2L and Docebo, routine learning operations, including reminders, content generation, and basic AI-enabled workflows, have been automated or delegated. The stated rationale is not efficiency alone. It is to remove L&D from low-value execution so it can operate upstream, where judgment and design decisions are made.
Second, value is being framed around manager decision quality rather than employee exposure to AI tools.
Across multiple disclosures and surveys, L&D leaders describe their mandate in terms of enabling managers to interpret AI outputs, handle edge cases, and escalate appropriately. D2L’s workforce enablement models emphasize calibration loops and governance awareness. Absorb’s 2026 research shows leadership and critical thinking now outrank content development as stated priorities. Berry Dunn’s client work positions L&D as curator of risk awareness and escalation discipline rather than builder of AI tools.
Third, boundaries are being made explicit and visible to executives.
Go1’s survey data show that clarity of AI ownership remains rare, but where L&D influence is strongest, it is because the function has named what it does not control. These teams co-lead AI capability discussions with IT, risk, or business leaders instead of claiming end-to-end responsibility. That constraint appears to increase trust rather than reduce relevance.
There is evidence that this boundary setting is changing how L&D is perceived.
Docebo reports L&D leaders being invited into executive-level planning conversations after operational responsibilities were narrowed. Coforge reallocated budget toward role-specific AI literacy tied to business performance reviews, placing L&D closer to revenue and risk discussions. In several cases, escalation forums now exist where L&D participates in deciding which AI-related capabilities matter and which do not.
The common thread is restraint.
These teams are not positioning themselves as AI strategists. They are not certifying readiness through completion data. They are not evangelizing tools. They are aligning their mandate with how accountability for AI already operates inside the enterprise.
For senior L&D leaders, the signal is straightforward. Influence is no longer correlated with scope. It is correlated with judgment about scope.
As AI adoption accelerates, executives will look for fewer surprises, fewer defensibility gaps, and more consistent escalation behavior. L&D leaders who can clearly articulate how their function contributes to those outcomes are retaining credibility. Those who cannot are finding that decisions about AI readiness are being made elsewhere.
🚩 Flag this for early January:
When governance rhythms resume in early January, appoint one trusted leader to track where AI first appears in board and audit conversations. Focus on which issues are escalated, not how they are solved.
This window closes quickly. By mid-Q1, those signals will already be interpreted. If you are seeing them for the first time in March, someone else has already drawn conclusions.
About The Intelligence Council
The Intelligence Council publishes sharp, judgment-forward intelligence for decision-makers in complex industries. Our weekly briefs, monthly deep dives, and quarterly sentiment indexes are built to help you grow your top line and bottom line, manage risk, and gain a competitive edge. No puff pieces. No b.s. Just the clearest signal in a noisy, complex world.


