Why California Employers Should Be Paying Attention to New York’s AI Law — Even If It Never Comes West
When New York passed the RAISE Act, most of the immediate commentary focused on a familiar question: Will California be next?
That’s not the question I’m asking.
From my perspective — and from years of working inside complex organizations — the more interesting point is what the RAISE Act reveals about where employment law enforcement is already headed, particularly in California. Not toward banning AI. Not toward prescribing ethics. But toward something much more practical and much harder to fake: governance.
I’ve watched this shift happen before. Long before “AI oversight” became a headline, I helped a global technology company build its first internal governance committee when automated tools were just beginning to influence hiring, performance evaluation, and compensation decisions. At the time, there was no statute demanding it. What there was — even then — was a recognition that systems were starting to make decisions faster than organizations could explain them.
That gap is what New York’s law exposes. And California employers should recognize it immediately, because California has been living with the consequences of that gap for years.
California already regulates outcomes — not intentions
One reason the RAISE Act feels novel is that it appears to regulate process rather than outcomes. It asks: Who owns the system? Who reviews it? Who intervenes when it drifts?
But from a California employment law perspective, that approach isn’t new at all.
California law has long been comfortable imposing liability based on results — even where no one intended harm. Pay equity is an obvious example. An employer can face exposure based on disparities alone, with the burden shifting to the employer to explain and justify them. More recently, SB 642 has extended that exposure window dramatically, turning pay decisions into long-tail risks that must be explainable years later.
Another familiar example is the interactive process. Employers are often surprised to learn that they can be liable for failing to engage in the process even if no reasonable accommodation ultimately existed. The harm, in the court’s eyes, is not just the outcome. It’s the breakdown of the process itself.
Seen through that lens, California’s approach looks very different from that of jurisdictions like New York, which regulate AI through standalone, tool-specific statutes. Rather than telling employers in advance how AI systems must be audited or governed, California relies on existing civil rights law to evaluate what actually happened: who made the decision, how it was made, and whether the process can be explained and defended.
That approach should sound familiar. It’s the same way California already evaluates pay equity, accommodations, and other employment decisions. The technology may be new, but the legal question is the same: can the employer explain the outcome and the process that produced it?
What the RAISE Act really surfaces: institutional ambiguity
Most organizations I work with don’t lack good intentions. What they lack is clarity.
When AI or algorithmic tools influence employment decisions, the ambiguity shows up quickly:
• Is the tool advisory or determinative?
• Who reviews its outputs?
• What happens when it conflicts with human judgment?
• Who owns it after procurement — Legal, HR, IT, Compliance?
In my experience, these questions often don’t have clean answers because systems evolved incrementally. Tools were adopted for efficiency. Responsibilities were shared. Oversight was informal. And for a while, that worked.
The RAISE Act doesn’t accuse companies of wrongdoing. It forces them to confront ambiguity. It asks organizations to formalize what has historically lived in the gray.
That’s why I don’t see this as a “New York problem.” I see it as a preview of how enforcement bodies are thinking about accountability in system-driven workplaces.
Governance doesn’t eliminate risk — it makes outcomes defensible
This is the point that often gets lost.
Governance does not guarantee perfect outcomes. It doesn’t eliminate disparities. It doesn’t prevent every bad decision. What it does is change how outcomes are evaluated when they are challenged.
I’ve seen this play out repeatedly.
Two employers can have the same compensation disparity. One struggles to defend it because decisions were decentralized, undocumented, and inconsistent. The other can walk through who made the decision, under what framework, using what data, and with what oversight. The numbers may look similar, but the legal posture is entirely different.
The same is true with AI-assisted hiring or performance tools. The risk isn’t that AI was used. The risk is that no one can explain how it fit into a governed decision process — or who was responsible for intervening when it mattered.
From an in-house perspective, this is where governance quietly becomes litigation strategy.
Why this matters now for California employers
California already imposes outcome-based liability. It already extends exposure periods. It already scrutinizes systems and patterns. What it does not yet do consistently is ask employers to show their governance structures upfront.
That doesn’t mean it won’t. For now, governance more often gets tested the hard way — in discovery, through deposition testimony, long after the people who designed the system have moved on.
The organizations that struggle the most are not the ones that used advanced tools. They’re the ones that can’t reconstruct how decisions were made, who exercised judgment, and what guardrails existed at the time.
That’s the risk the RAISE Act puts in sharp relief.
A quiet shift for in-house teams
From where I sit, this is less about anticipating a new statute and more about recognizing a structural shift.
Employment law is moving away from discrete decisions and toward system-level accountability. The question is no longer just what happened, but how the organization designed the machinery that produced it.
For in-house legal teams, that reframes the work. Governance is no longer a theoretical exercise or a compliance overlay. It becomes part of how organizations preserve institutional memory, allocate responsibility, and defend outcomes over time.
The companies that recognize this early don’t wait for laws to force the issue. They build governance incrementally, thoughtfully, and in a way that fits their business — not because they’re told to, but because they understand how scrutiny actually unfolds.
A final thought
New York’s AI law may never be adopted wholesale in California. But the pressure it reflects already exists here.
California employers are already judged by outcomes. Increasingly, they will be judged by whether their systems — human and automated — can be explained years later, under scrutiny, by people who didn’t design them.
That’s not a political reckoning. It’s an institutional one.
And it’s already underway.
Disclaimer:
This article is intended for general informational purposes and reflects an in-house perspective on emerging legal and operational issues. It does not constitute legal advice. Legal obligations and risk exposure depend on specific facts and circumstances, and readers should consult with experienced counsel before taking action.