AI Governance
Companies building with artificial intelligence face a rapidly shifting legal landscape. From the EU AI Act to state-level legislation in Colorado, Illinois, and California, regulatory obligations are multiplying faster than most companies can track. We help businesses that build, deploy, or rely on AI systems understand their legal exposure and put governance frameworks in place before regulators, investors, or customers force the issue.
James A. Long
Managing Partner
FAQ
+ What is AI governance, and why does my company need it?
AI governance is the set of policies, processes, and legal frameworks a company uses to manage the risks of building or deploying artificial intelligence. It covers everything from how training data is sourced to how automated decisions are reviewed, documented, and explained.
If your company builds AI-powered products, uses AI tools internally, or integrates AI into services you sell to customers, you have legal exposure. Enterprise customers are increasingly requiring AI governance documentation before signing vendor contracts. Investors are asking about it during due diligence. And regulators at both the state and federal level are beginning to enforce compliance obligations.
Governance is not just a legal checkbox. It is the infrastructure that lets you ship AI products without creating liability you did not anticipate.
+ What laws apply to AI right now?
The regulatory landscape is fragmented but moving fast. The EU AI Act entered into force in 2024, with obligations phasing in through 2027, and applies to any company whose AI products reach EU users, regardless of where the company is based. In the U.S., Colorado's AI Act (effective 2026) imposes specific obligations on companies deploying "high-risk" AI systems, and Illinois's Artificial Intelligence Video Interview Act regulates the use of AI to analyze job applicants' recorded interviews. California, Texas, and several other states have bills in various stages of the legislative process.
At the federal level, the FTC has been actively enforcing against deceptive or unfair AI practices under its existing consumer protection authority (Section 5 of the FTC Act), and the NIST AI Risk Management Framework is increasingly referenced as a baseline standard.
We monitor these developments and advise clients on which obligations apply to their specific products and markets.
+ I am a startup building an AI product. When should I think about governance?
Now. The most expensive time to address AI governance is after a customer, investor, or regulator forces the conversation. The best time is during product development, when governance decisions can be built into your architecture and workflows rather than bolted on after the fact.
We work with startups from formation through Series A and beyond to build governance frameworks that scale with the company. This includes data sourcing and licensing practices, model documentation, automated decision review processes, and the contractual provisions that enterprise customers and investors expect to see.
+ What do enterprise customers want to see in AI vendor contracts?
Enterprise procurement teams are adding AI-specific provisions to vendor agreements at an accelerating pace. Common requirements include representations about training data provenance and licensing, indemnification for AI-generated outputs, transparency about how models make decisions, data handling and retention commitments, and the ability to audit or explain automated decisions that affect end users.
If you are selling an AI-powered product to enterprise customers, you will encounter these requirements. Having clear, documented governance practices and contract-ready language positions you to close deals faster and with less friction.
+ How does AI governance relate to data privacy and cybersecurity?
AI governance overlaps significantly with data privacy and cybersecurity, but it is not the same thing. Data privacy laws like the CCPA, GDPR, and the New York SHIELD Act govern how personal information is collected, stored, and shared. AI governance adds a layer on top of that: how data is used to train models, whether automated decisions are fair and explainable, and who is liable when an AI system produces a harmful or inaccurate output.
We advise on both. Our data privacy and cybersecurity practice handles the compliance infrastructure, and our AI governance practice addresses the additional obligations that come with building or deploying AI systems.
+ What is the EU AI Act, and does it apply to U.S. companies?
The EU AI Act is the world's first comprehensive AI regulation. It classifies AI systems by risk level and imposes obligations accordingly: minimal-risk systems face few or no requirements, limited-risk systems such as chatbots must meet transparency requirements, high-risk systems carry extensive documentation and oversight obligations, and certain practices (such as social scoring and real-time biometric surveillance in public spaces) are banned outright.
If your AI product is used by anyone in the EU, or if your AI system's outputs affect EU residents, the Act likely applies to you regardless of where your company is headquartered. Penalties for non-compliance can reach 35 million euros or 7% of global annual turnover, whichever is higher.
We help U.S. companies assess whether and how the EU AI Act applies to their products and build compliance programs that satisfy EU requirements without unnecessarily constraining their product roadmap.
+ Can I use AI-generated code or content in my products?
This is one of the most active areas of AI law. The copyright status of AI-generated outputs remains unsettled. The U.S. Copyright Office has issued guidance indicating that purely AI-generated works are not copyrightable, but works involving meaningful human creative input may be. Courts are still working through cases involving training data and the scope of fair use.
For companies building products, the practical questions are concrete: Do you own the code your engineers write with GitHub Copilot? Can you claim copyright in content generated by your AI features? And what is your exposure if the model behind your product was trained on copyrighted material?
We help clients navigate these questions with practical frameworks for IP ownership, licensing, and risk allocation in their development workflows and customer agreements.
Have questions? Let’s start the conversation.
Get in touch with us
We want you to call. We want you to ask questions. We want to build something with you for the long term.