Legal Considerations for Drafting and Negotiating AI Contracts in Higher Education
- Anne Wilder and Emma Bahner, XL Law & Consulting
- Apr 2
AI agreements pose many legal risks, including in the areas of data privacy, intellectual property, liability, and bias.
Counsel can mitigate many of these risks through careful contract drafting and review as well as strategic negotiation.
As artificial intelligence (AI) technologies become integral to the operations and activities of institutions of higher education (IHEs), colleges and universities must navigate complex legal considerations when entering into contracts with vendors. AI agreements pose particular legal risks and compliance challenges in a number of areas, including data privacy, intellectual property, liability, and ethics. This article provides an overview of those risks, along with practical tips for IHEs.
1. Types of AI Contracts
What do we mean by an AI agreement? While we will no doubt see AI deployed in new contexts as its capabilities develop, as of this writing, common AI contracts at IHEs include the following:
Generative AI, including large language models (LLMs) – using existing LLMs (e.g., ChatGPT Edu), IHEs can create their own customized models (e.g., custom GPTs) to use in a wide variety of ways (e.g., to generate administrative content such as document templates; to generate creative content such as test questions or marketing materials; to provide customer-service chatbots; etc.)
AI facial recognition technology – for conducting campus security surveillance, restricting building access, etc.
AI analytics software – for detecting plagiarism or AI-generated content, analyzing research data, reviewing employment or enrollment applications, etc.
Practice Tip: Look out for vendor updates to terms that add the use of AI. Ensure that the updated terms do not permit use of the IHE’s data to train the vendor’s AI. If they do, you will likely need to take affirmative steps to opt out of such use.
2. Dissecting Parts of AI Contracts
It is essential to understand which of the many documents governing an AI-related contract apply to the specific AI services your institution will be using, and to review all of the applicable components carefully. These might include, for example, terms of use/terms & conditions, service terms, product terms, sharing & publication policies, usage policies, end-user license agreements/customer agreements, privacy policies, and data processing agreements.
Practice Tip: Look out for provisions stating that the vendor’s terms are subject to change at any time and that continued use of the services constitutes agreement with the updated terms. Attempt to negotiate a provision requiring the vendor to notify you directly of any changes to the standard terms and conditions.
3. Ownership and Intellectual Property Rights
With respect to LLM agreements, ownership of inputs and AI-generated outputs tends to rest with the user to the extent allowable by law and subject to third-party terms. It can be extremely challenging, however, to determine (1) the applicable law, given that the law in this area is unsettled and rapidly evolving; and (2) whether third-party technology is being used in such a way that it affects ownership of inputs and outputs. For example:
Is content produced by generative AI protected by copyright law, given that it was not created by a human being?
Can AI-generated content be deemed a derivative work of the content it uses to produce the output in response to a user’s input?
Could the user be violating the copyright of third parties if generative AI is using copyrighted material to generate its outputs?
Does the service include any AI tool(s) powered by third-party foundation models, and if so, how do the third-party terms affect ownership of data the user inputs and generates?
Practice Tip: AI contracts should address ownership and responsibility pertaining to works protected by copyright, including derivative works. Make sure the AI contract provides indemnification of the LLM user if the AI model uses copyrighted material to generate the output.
4. Data Privacy Concerns
Compliance with U.S. Privacy and AI Laws
Typically, LLM contracts include clauses requiring the client to comply with all applicable privacy laws (both domestic and international). This includes the obligation to obtain consent where required. Approximately 20 states have comprehensive data privacy laws; while most do not apply to nonprofit IHEs, several do. There are numerous ways in which IHEs may run afoul of such laws in their use of generative AI:
[Table: examples of what your IHE’s AI contract with a vendor may permit, matched against the requirement in a relevant privacy law you may be at risk of violating.]
While not yet as common as state data privacy laws, state AI laws have also begun to emerge. According to the National Conference of State Legislatures, 45 states introduced AI legislation in the 2024 legislative session, and a few states (e.g., Colorado and Utah) now have comprehensive AI laws. Such laws typically impose obligations on both developers and deployers of AI. IHEs may be required to provide notice, maintain risk management policies and programs, perform data protection impact assessments, and monitor the outcomes of high-risk AI processing.
Compliance with International Privacy and AI Laws
For the most part, the steps IHEs take to comply with U.S. state data privacy laws will also aid in compliance with international data privacy laws. However, comprehensive data privacy laws like the EU’s General Data Protection Regulation (GDPR) and China’s Personal Information Protection Law (PIPL) also require, among other things, certain contractual terms with AI vendors if IHEs plan to use their AI tools to input personal information (PI). IHEs will increasingly need to ensure compliance with international AI laws such as the European Union’s AI Act, which took effect August 1, 2024 and whose provisions phase in gradually through August 2, 2026.
Practice Tip: Ensure that your IHE’s use of AI tools does not violate any data use restrictions, which are often included in contracts with international partners and with partners who provide proprietary or other confidential data.
5. Liability and Risk Allocation
Unsurprisingly, vendors of LLMs and other types of AI are unlikely to accept liability for inaccurate outputs, instead placing the burden on the user to verify the accuracy of the results. LLM vendors also typically require IHEs to broadly indemnify the vendor, while restricting the vendor’s own indemnification obligations to claims alleging that the vendor’s services infringe third-party intellectual property (IP) rights. Typically, such provisions will include IP rights to the data the vendor used to train the LLM.
Practice Tip #1: Consider including in your institution’s internal AI policy or guidelines a requirement to verify the accuracy of AI outputs.
Practice Tip #2: Attempt to negotiate the indemnification clause to require the vendor’s indemnification of the IHE for the vendor’s violation of applicable laws, breach of security obligations, and gross negligence or willful misconduct.
6. Use Restrictions
Typically, AI contracts permit any use that is not specifically prohibited, while restricting certain other uses. Common prohibitions and restrictions include use by individuals under a specified age, use of the services to develop other AI tools, sharing and publication of AI-generated content, and the inputting of personal information (unless the parties have entered into a data processing addendum) or HIPAA-protected data (unless the parties have entered into a HIPAA addendum).
Conclusion
Counsel must quickly get up to speed on best practices for drafting and negotiating AI contracts, ensuring that they protect their institution’s interests while fostering innovation. Both data privacy and AI are fast-moving areas of legislation, and counsel should continue to monitor legal developments relevant to their jurisdiction(s).