Shadow AI Control Playbook for Los Angeles SMBs
Why Shadow AI Now Looks Like Shadow IT
Employees are adopting AI tools faster than most SMB policies can keep up. That creates the same visibility gap that “shadow IT” created years ago, but with a new twist: employees can paste sensitive data into external models in seconds. For Los Angeles and Southern California businesses, this matters most in industries with tight confidentiality expectations, including legal, healthcare-adjacent services, real estate, and logistics. Insurers now ask sharper questions about data governance, MFA, endpoint controls, and vendor risk, and AI usage touches all of them. The practical response is not banning AI outright; it is building a 30-day control baseline you can prove.
Day 1-7: Inventory Real AI Usage Across the Business
Start with discovery, not punishment. Tell staff you are documenting AI usage to protect clients, not to block productivity. Use a short intake form for every department:
- Which AI tools are used today?
- What tasks are they used for?
- What data types are entered?
- Is any output sent to customers or regulators?
- Who approves tool adoption?
Cross-check self-reports with logs from your network, endpoint web-filtering history, SSO app dashboards, and browser extension inventories. Look for unsanctioned use of public chatbots, AI writing assistants, note-takers, and code copilots. Flag any workflow where client data, employee records, contracts, financials, or authentication details are entered. Use the NIST AI Risk Management Framework's functions (Govern, Map, Measure, Manage) to classify where AI is in use and where controls are missing. At the end of week one, produce a single inventory spreadsheet your leadership team can review in 20 minutes.
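To make that cross-check repeatable, a short script can merge intake-form responses with apps observed in SSO or network logs and flag anything discovered but never declared. The sketch below assumes two exports; the file names and column headers (intake.csv, sso_apps.csv, tool, department, data_types, app) are hypothetical placeholders for your own data.

```python
"""Merge self-reported AI usage with SSO-discovered apps into one inventory.

A minimal sketch: file names and column headers are hypothetical
placeholders for your own exports.
"""
import csv
from collections import defaultdict

inventory = defaultdict(lambda: {"departments": set(), "data_types": set(), "sources": set()})

# Self-reported intake form responses, one row per department answer.
with open("intake.csv", newline="") as f:
    for row in csv.DictReader(f):
        tool = row["tool"].strip().lower()
        inventory[tool]["departments"].add(row["department"])
        inventory[tool]["data_types"].add(row["data_types"])
        inventory[tool]["sources"].add("self-report")

# Apps observed in SSO or network logs.
with open("sso_apps.csv", newline="") as f:
    for row in csv.DictReader(f):
        inventory[row["app"].strip().lower()]["sources"].add("sso-log")

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tool", "departments", "data_types", "sources", "shadow"])
    for tool, info in sorted(inventory.items()):
        shadow = "self-report" not in info["sources"]  # discovered but never declared
        writer.writerow([
            tool,
            ";".join(sorted(info["departments"])),
            ";".join(sorted(info["data_types"])),
            ";".join(sorted(info["sources"])),
            shadow,
        ])
```

Anything marked shadow in the output is the starting point for your week-two risk conversations.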
Day 8-14: Classify Risk by Data and Business Impact
Not all AI usage is equal. Rank each use case with two scores: data sensitivity and business impact. Data sensitivity should map to your internal categories such as Public, Internal, Confidential, and Restricted. Business impact should measure legal, operational, and reputational consequences if data leaks or outputs are wrong.
Create three action buckets:
- Approved now (low sensitivity, low impact)
- Approved with controls (moderate risk, needs guardrails)
- Prohibited (high sensitivity or high consequence)
For regulated or contractual data, include references to applicable obligations. Use CISA guidance and your existing security framework so AI controls do not become a separate silo. If your team cannot explain why a tool is safe, treat it as unapproved until validated.
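Encoding the two-score model keeps bucket assignments consistent across reviewers. The sketch below maps the sensitivity categories and impact levels described above to the three action buckets; the numeric thresholds are illustrative assumptions, not a standard.

```python
"""Score each AI use case and assign it to an action bucket.

A minimal sketch of the two-score model; thresholds are illustrative.
"""
SENSITIVITY = {"Public": 1, "Internal": 2, "Confidential": 3, "Restricted": 4}
IMPACT = {"low": 1, "moderate": 2, "high": 3}

def bucket(data_class: str, business_impact: str) -> str:
    s, i = SENSITIVITY[data_class], IMPACT[business_impact]
    if s >= 3 or i == 3:   # Confidential/Restricted data or high consequence
        return "Prohibited"
    if s == 2 or i == 2:   # moderate risk needs guardrails
        return "Approved with controls"
    return "Approved now"

use_cases = [
    ("Marketing blog drafts", "Public", "low"),
    ("Summarizing client contracts", "Restricted", "high"),
    ("Internal meeting notes", "Internal", "moderate"),
]
for name, data_class, impact in use_cases:
    print(f"{name}: {bucket(data_class, impact)}")
```

Record the two inputs alongside each bucket decision so an insurer or auditor can see the reasoning, not just the label.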
Day 15-21: Enforce Microsoft 365 Controls
For Microsoft 365 tenants, the priority is preventing uncontrolled data exposure:
- Require phishing-resistant MFA wherever possible and disable legacy authentication.
- Review external sharing in SharePoint and OneDrive and tighten anonymous link settings.
- Enable sensitivity labels and data loss prevention policies for financial, legal, and HR data.
- Restrict third-party app consent so users cannot freely authorize risky AI add-ins.
- Enable Copilot and other AI features by role, not company-wide by default, and audit who has access.
- Use Conditional Access to block high-risk sign-ins and unmanaged device access.
- Confirm mailbox auditing and unified audit logging are enabled and retained.
- Map these controls to insurer questions so your evidence is ready at renewal.
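Conditional Access posture is one of the easiest items here to capture as evidence. The sketch below exports enabled policies through Microsoft Graph; it assumes an Entra app registration with the Policy.Read.All permission and a valid bearer token in a GRAPH_TOKEN environment variable, with token acquisition (for example via MSAL) elided.

```python
"""Export enabled Conditional Access policies via Microsoft Graph.

A sketch, not a turnkey audit: assumes a bearer token with
Policy.Read.All in the GRAPH_TOKEN environment variable.
"""
import os
import requests

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()

enabled = [p for p in resp.json()["value"] if p["state"] == "enabled"]
print(f"{len(enabled)} enabled Conditional Access policies")
for policy in enabled:
    controls = (policy.get("grantControls") or {}).get("builtInControls", [])
    print(f"- {policy['displayName']}: grant controls = {controls}")
# Save this output with a date stamp for your renewal evidence pack.
```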
Day 15-21: Enforce Google Workspace Controls
For Google Workspace, focus on OAuth app governance and data movement:
- Restrict Marketplace app installations and require admin approval for new AI-connected apps.
- Use context-aware access and strong 2-Step Verification policies.
- Set Drive sharing defaults to least privilege and limit public-link sprawl.
- Apply DLP rules for Gmail and Drive to detect sensitive data patterns leaving the tenant.
- Review Gemini and third-party AI integrations at the OU/group level before broad rollout.
- Make sure alerting is active for suspicious logins, mass downloads, and unusual sharing.
- Document who can approve exceptions and how long exceptions stay active.
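OAuth app governance is easier when new authorizations are visible week to week. The sketch below uses the Admin SDK Reports API to list recent third-party app authorization events; it assumes a service account with domain-wide delegation, the admin.reports.audit.readonly scope, a key file named sa.json, and an ADMIN_EMAIL variable naming a super-admin to impersonate, all of which are assumptions rather than defaults.

```python
"""Surface new third-party OAuth app authorizations in Google Workspace.

A sketch using the Admin SDK Reports API under the assumptions
described above.
"""
import os
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES
).with_subject(os.environ["ADMIN_EMAIL"])

reports = build("admin", "reports_v1", credentials=creds)
response = reports.activities().list(
    userKey="all", applicationName="token", eventName="authorize", maxResults=100
).execute()

for activity in response.get("items", []):
    for event in activity.get("events", []):
        params = {p["name"]: p.get("value") or p.get("multiValue")
                  for p in event.get("parameters", [])}
        # app_name and scope show which app gained what access, and for whom
        print(activity["actor"]["email"], params.get("app_name"), params.get("scope"))
```

Run this on a schedule and compare against your approved-app list; anything unfamiliar goes to the exception process.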
Day 22-26: Publish an AI Acceptable-Use Policy That Works
Your policy should be short, enforceable, and tied to daily behavior:
- Define approved tools, prohibited tools, and approved use cases by role.
- State clearly which data classes cannot be entered into external AI tools.
- Require human review for customer-facing content, legal language, and financial outputs.
- Prohibit pasting credentials, API keys, contracts under NDA, and regulated personal data.
- Set a rule that any new AI tool must go through security and legal review before use.
- Use plain language and examples from your own departments so the policy is usable.
- Align the policy with FTC business guidance on AI claims and practices.
- Have every manager review and acknowledge the policy with their team.
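The "never paste credentials" rule is more credible with a lightweight technical backstop. The sketch below is a minimal pre-submission screen using illustrative regex patterns; it is a training aid and demonstration, not a replacement for the DLP controls configured in weeks three and four.

```python
"""Block obvious secrets before text reaches an external AI tool.

A minimal pattern-matching sketch; the regexes are illustrative and
no substitute for a real DLP product.
"""
import re

BLOCK_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Bearer token": re.compile(r"\bBearer\s+[A-Za-z0-9\-_.=]{20,}", re.IGNORECASE),
}

def screen(text: str) -> list[str]:
    """Return the names of any blocked patterns found in the text."""
    return [name for name, pattern in BLOCK_PATTERNS.items() if pattern.search(text)]

hits = screen("Please summarize: AKIAIOSFODNN7EXAMPLE ...")
if hits:
    print("Blocked before submission:", ", ".join(hits))
```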
Day 27-30: Build an Insurance-Ready Evidence Pack
Before renewal, convert your work into auditable artifacts. Insurers care less about promises and more about proof. Prepare a lightweight evidence pack containing:
- AI tool inventory and risk ratings
- Microsoft 365 and Google Workspace control screenshots/exports
- Final AI acceptable-use policy and acknowledgment records
- Incident response updates for AI-related events
- Exception register with owner and expiration date
Validate that your incident reporting path covers cyber events reportable to law enforcement, including the FBI's Internet Crime Complaint Center (IC3). If your broker requests control mappings, align your evidence with recognized references such as the OWASP Top 10 for LLM Applications and the NIST AI Risk Management Framework. Keep this pack updated quarterly, not just at renewal time.
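Assembling the pack can itself be scripted so it stays current each quarter. In the sketch below, the artifact paths are hypothetical placeholders mirroring the checklist above, and missing items are surfaced as findings rather than silently skipped.

```python
"""Assemble the quarterly evidence pack into one dated, auditable ZIP.

A sketch: artifact paths are placeholders for wherever your team
actually stores these files.
"""
import datetime
import zipfile
from pathlib import Path

ARTIFACTS = [
    "ai_inventory.csv",              # tool inventory and risk ratings
    "m365_controls/",                # Microsoft 365 screenshots/exports
    "workspace_controls/",           # Google Workspace screenshots/exports
    "ai_acceptable_use_policy.pdf",  # final policy
    "policy_acknowledgments.csv",    # acknowledgment records
    "exception_register.csv",        # exceptions with owner and expiration
]

stamp = datetime.date.today().isoformat()
archive = f"evidence_pack_{stamp}.zip"
with zipfile.ZipFile(archive, "w") as pack:
    for item in ARTIFACTS:
        path = Path(item)
        if not path.exists():
            print(f"MISSING: {item}")  # a gap here is a finding, not a formality
            continue
        for f in (path.rglob("*") if path.is_dir() else [path]):
            if f.is_file():
                pack.write(f)
print(f"Wrote {archive}")
```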
Common Southern California SMB Mistakes to Avoid
- Treating AI as an HR policy only, without security enforcement.
- Allowing broad app consent in Microsoft 365 or Google Workspace.
- Assuming enterprise AI features are safe by default without tenant configuration.
- Writing policy language that is too vague to train or audit.
- Ignoring vendor due diligence for AI subprocessors and data retention terms.
- Waiting until 30 days before renewal to gather evidence.
A better pattern is monthly review with IT, security, legal, and operations. In fast-moving LA markets, speed matters, but controlled speed wins.
A Practical Operating Rhythm After Day 30
Run a monthly AI governance review meeting with clear owners:
- Track new tools, blocked attempts, policy exceptions, and control drift (see the sketch below).
- Retrain teams when new AI capabilities are enabled in M365 or Workspace.
- Schedule a quarterly tabletop exercise for an AI-related data exposure scenario.
- Update your risk register and insurer documentation as controls evolve.
This turns Shadow AI from an unknown risk into a managed business process.
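The exception register only curbs drift if expirations are actually enforced. The sketch below assumes a hypothetical exception_register.csv with tool, owner, and expires columns (ISO dates) and flags anything expired or expiring within two weeks, making it an easy standing item for the monthly review.

```python
"""Flag expired or soon-to-expire AI policy exceptions each month.

A sketch against a hypothetical exception_register.csv with
tool, owner, and expires (ISO date) columns.
"""
import csv
import datetime

today = datetime.date.today()
with open("exception_register.csv", newline="") as f:
    for row in csv.DictReader(f):
        days_left = (datetime.date.fromisoformat(row["expires"]) - today).days
        if days_left < 0:
            print(f"EXPIRED: {row['tool']} (owner: {row['owner']})")
        elif days_left <= 14:
            print(f"Expiring in {days_left} days: {row['tool']} (owner: {row['owner']})")
```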
Need help implementing this 30-day plan before your next renewal? We Solve Problems can help your Los Angeles team inventory AI use, enforce Microsoft 365 and Google Workspace controls, and operationalize policy governance. Contact us at We Solve Problems.