“No Robo Bosses”: What California Employers Should Know About AI at Work

Technology is changing how employers make decisions at work. Many companies now use software, algorithms, or artificial intelligence (AI) to help with hiring, scheduling, performance reviews, and other employment decisions.

California lawmakers are paying close attention to this trend.

Even though a proposed law called the “No Robo Bosses Act” (SB 7) did not pass, it sends an important message: California is moving toward regulating how AI is used in the workplace. Employers should understand what this means and why it matters.

What is a “Robo Boss”?

A “Robo Boss” is not a robot walking around the office. It is a nickname for computer systems that help make decisions about employees.

For example, some systems help screen job applicants, score resumes, predict performance, create work schedules, or flag employees for discipline. These tools do not usually make final decisions on their own, but they can strongly influence what happens to workers.

Because these systems affect real people, lawmakers are concerned about fairness, accuracy, and transparency.

What was SB 7 trying to do?

SB 7 was a proposed California bill that would have created rules for using AI in employment decisions. The bill did not become law, so employers do not have new legal requirements from SB 7 itself.

However, the bill shows what lawmakers are thinking. SB 7 focused on making sure employers understand how AI tools work, keep humans involved in important decisions, and prevent unfair bias in automated systems.

In short, the goal was to make sure technology helps employers without harming workers.

Why does this matter if the bill did not pass?

Because this issue is not going away.

In California, major employment legislation often takes more than one legislative session to pass. A bill that fails one year frequently returns later in revised form. SB 7 is best understood as a preview of future regulation, not a dead idea.

More importantly, employers already have legal responsibilities under existing California discrimination laws. If an AI tool leads to unfair outcomes for certain groups, employers can still face legal risk — even without a special AI law.

Using software does not remove employer responsibility.

What is the risk for employers right now?

California law already requires employment decisions to be fair and non-discriminatory. That applies whether decisions are made by people, software, or a mix of both.

If an employer uses an AI tool it does not understand, or relies too heavily on automated recommendations, problems can arise. Employers are generally responsible for the tools they choose, including tools provided by outside vendors.

This means AI should be treated like any other high-impact management tool: carefully, thoughtfully, and with oversight.

Employment teams and privacy teams should work together

That risk is not limited to employment law.

Many AI systems collect, analyze, or store large amounts of personal data, including information about applicants, employees, productivity, behavior, or performance. As a result, these tools raise privacy and data protection issues, not just employment law concerns.

For this reason, in-house employment or HR teams should work closely with privacy, data security, or information governance teams when vetting any AI-driven decision-making software. This includes tools provided by outside vendors.

Employment teams tend to focus on fairness and compliance with labor and discrimination laws. Privacy teams focus on how data is collected, used, shared, stored, and protected. Both perspectives are necessary.

When these teams collaborate early, employers are better positioned to identify risks, ask the right questions of vendors, and avoid problems that can arise later — especially as California continues to expand both workplace regulation and privacy enforcement.

What should employers do now?

There is no need to panic or abandon technology. But employers should be thoughtful.

It is wise to know which systems affect hiring or employment decisions, understand their basic purpose, and ensure that humans remain involved in final decisions. Employers should also be cautious about assuming that vendor tools are automatically compliant with California law.

Preparing now puts employers in a much better position if and when new AI regulations are adopted.

The bottom line

SB 7 did not pass, but it clearly signals where California is heading. The use of AI at work will face more scrutiny, not less.

Employers who start paying attention now — and who take a coordinated approach across employment and privacy teams — will be better protected later.

At Habib Workplace Advisory, APC, our practice focuses on helping employers navigate emerging workplace issues before they become enforcement or litigation problems. If your organization uses technology to make employment decisions, now is a good time to take a closer look.

Disclaimer:
This article is intended for general informational purposes and reflects an in-house perspective on emerging legal and operational issues. It does not constitute legal advice. Legal obligations and risk exposure depend on specific facts, and readers should consult with experienced counsel before taking action.
