California AB 2013 Vendor-Risk Checklist for Los Angeles SMBs
Why AB 2013 Changes AI Buying in 2026
California AB 2013 takes effect on January 1, 2026 and raises the floor for AI transparency in vendor evaluations. For Los Angeles SMBs, that matters because AI adoption is moving fastest in legal, healthcare, media, logistics, and professional services. The law does not replace security due diligence, but it gives procurement teams better evidence to ask harder questions. Start with the bill text so legal and IT leads use the same baseline: California AB 2013.
What AB 2013 Actually Gives Your Team
The practical value is training-data transparency, not a blanket statement that a tool is safe. Expect vendors to publish summaries of datasets and sources used to train covered models. Use those disclosures to test fit for your risk profile, client confidentiality obligations, and sector rules. Treat missing or vague disclosures as a procurement risk signal, not a documentation delay.
Pre-Purchase Vendor-Risk Checklist
Before approving any AI add-on, require a written response to this checklist.
- Use-case scope: Which business workflows are allowed, and which are prohibited on day one?
- Data classes: Can users enter client PII, employee records, contracts, or financial data?
- Model provenance: Does the vendor provide AB 2013-aligned dataset summaries and update history?
- Human review: Which outputs require manager or specialist approval before external use?
- Risk owner: Who in your LA office owns decisions when output is wrong, biased, or leaked?
- Exit plan: How quickly can you disable the feature and export logs if risk increases?
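A checklist like this is easiest to enforce when responses are tracked as structured data rather than scattered emails. The sketch below is a minimal, hypothetical tracker — the item names and the sample vendor responses are illustrative, not drawn from any real vendor — that flags any checklist item without a written answer as an open risk signal:

```python
# Minimal sketch of a vendor-response tracker for the checklist above.
# Item names and vendor responses are hypothetical placeholders.

REQUIRED_ITEMS = [
    "use_case_scope",
    "data_classes",
    "model_provenance",
    "human_review",
    "risk_owner",
    "exit_plan",
]

def missing_responses(responses: dict) -> list:
    """Return checklist items with no written answer.

    Per the checklist policy, each missing or blank item is treated
    as a procurement risk signal, not a documentation delay.
    """
    return [item for item in REQUIRED_ITEMS
            if not responses.get(item, "").strip()]

# Example: a partially completed vendor response.
vendor = {
    "use_case_scope": "Drafting internal summaries only; no client-facing output.",
    "data_classes": "",  # vendor left this blank -> risk signal
    "model_provenance": "AB 2013 dataset summary published, updated quarterly.",
}
print(missing_responses(vendor))
```

Keeping the items as a fixed list means a vendor cannot "pass" by answering only the easy questions — every unanswered item surfaces by name.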
Data Governance Questions for Copilot and Workspace AI
Ask these questions in writing before pilot access is granted.
- Is tenant content used to train foundation models, service models, or any third-party models?
- What are prompt and response retention periods, and can admins shorten them?
- Where is data processed and stored for Southern California users, and can region be constrained?
- Can admins block high-risk connectors, shared drives, or mailbox scopes by policy?
- Are generated files and summaries covered by existing DLP, eDiscovery, and legal hold settings?
If a vendor's answer is broad marketing language, pause and request technical documentation.
Security Controls to Require Before Rollout
Require security settings that are enforceable by admin policy, not user preference.
- Identity: SSO, phishing-resistant MFA, conditional access, and device compliance checks.
- Logging: Centralized audit logs for prompts, plugin access, and admin policy changes.
- Access: Least privilege for connectors to SharePoint, Google Drive, CRM, and ticketing systems.
- Testing: Internal abuse-case testing mapped to OWASP LLM Top 10.
- Governance: Risk scoring aligned to NIST AI RMF and review by IT and legal.
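Because these controls must be enforced by admin policy rather than user preference, the rollout gate reduces to a set comparison: which required controls are not yet enforced. A minimal sketch, with illustrative control names (the real names depend on your identity provider and admin console):

```python
# Sketch: compare controls actually enforced by admin policy against the
# rollout requirements above. Control names are illustrative placeholders.

REQUIRED_CONTROLS = {
    "sso",
    "phishing_resistant_mfa",
    "conditional_access",
    "device_compliance",
    "centralized_audit_logging",
    "least_privilege_connectors",
}

def rollout_gaps(enforced: set) -> set:
    """Controls still missing from enforced policy; must be empty before rollout."""
    return REQUIRED_CONTROLS - enforced

# Example: a tenant that has identity basics but no device or MFA policy yet.
enforced_today = {"sso", "centralized_audit_logging", "least_privilege_connectors"}
print(sorted(rollout_gaps(enforced_today)))
```

The useful property is that the gate is binary and auditable: rollout proceeds only when the gap set is empty, and the gap list doubles as the remediation plan.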
Contract Clauses Los Angeles SMBs Should Not Skip
Your MSA, order form, and AI addendum should match your actual risk appetite. Require explicit language on data-use limitations, subcontractor controls, and breach notification timelines. Add a clause guaranteeing rapid vendor support during incidents that affect California client data or regulated records. Ask for indemnity language tied to IP and data claims arising from model output or training disputes. Reference operational preparedness guidance from CISA and deceptive-AI claim expectations from the FTC.
30-Day Pilot Framework for a Safe Go/No-Go
- Week 1: Define two low-risk use cases and one prohibited use case, then publish internal rules.
- Week 2: Configure identity, DLP, and logging controls, then validate with test accounts.
- Week 3: Run pilot users on non-sensitive datasets and track accuracy, time saved, and escalation rate.
- Week 4: Introduce limited real workflows with manager review and documented exception handling.
Set measurable thresholds before launch, such as error rate, policy violations, and time-to-correct. If thresholds fail, extend pilot or stop rollout rather than widening access.
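The go/no-go rule above can be encoded so the decision is mechanical rather than negotiable after launch. A minimal sketch — the threshold values here are illustrative assumptions; set your own before the pilot starts:

```python
# Sketch of the pilot go/no-go gate. Threshold values are illustrative
# placeholders; agree on real numbers before launch, per the text above.

THRESHOLDS = {
    "error_rate": 0.05,           # max acceptable fraction of incorrect outputs
    "policy_violations": 0,       # max policy violations during the pilot
    "time_to_correct_hours": 24,  # max hours to correct a flagged output
}

def pilot_decision(metrics: dict) -> str:
    """Return 'go' only if every metric is measured and within its threshold.

    A missing metric counts as a failure: if it was not measured,
    it cannot justify widening access.
    """
    failures = [name for name, limit in THRESHOLDS.items()
                if metrics.get(name, float("inf")) > limit]
    return "go" if not failures else "extend-or-stop"

# Example: a pilot that met every threshold.
print(pilot_decision({"error_rate": 0.03,
                      "policy_violations": 0,
                      "time_to_correct_hours": 12}))
```

Treating an unmeasured metric as a failure enforces the article's point: if thresholds fail (or were never tracked), extend the pilot or stop rather than widening access.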
Red Flags and Final Decision Criteria
Delay procurement if any of these conditions remain unresolved.
- No AB 2013-related disclosure or no change log for model/data updates.
- No tenant-level controls for retention, connector scope, or feature disablement.
- No audit trail that your security team can export to SIEM.
- No clear ownership for incidents spanning vendor support, your IT team, and business leadership.
- No practical user training plan for high-pressure teams like sales, operations, or client services.
The best decision is not the most feature-rich tool, but the one your team can control, monitor, and defend. Need a structured AI vendor-risk assessment before rollout? We Solve Problems helps Los Angeles businesses evaluate controls, contracts, and deployment readiness for Copilot, Workspace AI, and similar tools. Talk with our team at wesolve.tech/contact.