
XL INSIGHTS+
Legal Alerts and News Updates

Drafting AI Policies: Considerations for Higher Education Institutions

  • Generative AI presents both opportunities and risks for organizations.

  • Higher education institutions should strongly consider drafting AI policies to guide employees’ use of AI and protect against the numerous legal risks associated with AI use.


It’s now been well over a year since ChatGPT’s public launch on November 30, 2022. Generative artificial intelligence (AI) programs like ChatGPT—commonly defined as AI models capable of generating new text, images, and other content in response to user prompts, based on the data on which they were trained—are increasingly used by faculty and staff. Generative AI can help with a wide variety of tasks, from researching and drafting content to brainstorming ideas to performing rote tasks (thus freeing up time for more creative work).


A 2023 Conference Board study suggests a majority of workers are doing just that. According to the study of nearly 1,100 U.S. employees, 56% were using generative AI for work tasks. Of those who were using generative AI, 71% reported that their managers were aware of their usage. More problematic, however, is that only 26% of those surveyed reported that their organizations had an AI policy (compared to 34% who said their organization did not have an AI policy and 23% who said one was under development). Even in organizations without AI policies, 40% of the surveyed employees reported their managers were fully aware of their AI usage for work tasks.


Policy Considerations

For the majority of organizations that don’t yet have AI policies, there are numerous factors to consider in drafting a policy that both supports AI use in a way that benefits the organization’s mission and protects against potential risks. For organizations like colleges and universities, the benefit/risk calculation of using generative AI may vary greatly based on the relevant department. While it would be impossible to recommend a particular AI policy for use across all institutions of higher education, given institutions’ unique missions, cultures, and structures, some potential issues to consider when drafting an AI policy include the following:


  • Privacy and Confidentiality: If users provide any information that relates to an identified or identifiable person (Personal Information) as input to a generative AI platform, they could be violating various domestic and international data privacy laws that require privacy notices, consent, and/or data processing agreements. Processing Personal Information using generative AI could also violate privacy laws that provide data subjects with the right to opt out of automated decision-making (ADM) that produces legal or similarly significant effects (e.g., employment or enrollment decisions) or require additional measures prior to using ADM with individuals’ Personal Information, such as obtaining their informed consent.

  • In particular, lawyers must ensure that their use of generative AI does not violate professional responsibility obligations. Lawyers should follow relevant rules and guidance from their bar associations. For example, California’s Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law warns lawyers against “input[ting] any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections.” Lawyers should be particularly cautious of using programs with embedded AI solutions labeled “beta,” “preview,” or similar, as their terms and conditions typically provide that the use of such AI solutions is not subject to the security and confidentiality terms that are otherwise applicable when using the program. Furthermore, California’s guidance states that “[a] lawyer must anonymize client information and avoid entering details that can be used to identify the client.” A recent Florida Bar ethics opinion similarly addresses issues related to the use of generative AI, including confidentiality, and cautions lawyers against disclosing a client’s confidential information to a generative AI platform without the client’s informed consent.

  • Accuracy: Generative AI is not always accurate.  This is not a rare phenomenon; in fact, the word “hallucination” has been repurposed to describe an inaccurate or misleading response generated by an AI program. The FAQ document maintained by OpenAI, the company that developed ChatGPT, states that “outputs may be inaccurate, untruthful, and otherwise misleading at times” and acknowledges that ChatGPT “has limited knowledge of world and events after 2021.” Referring to its generative AI product, Google similarly states that “AI can & will make mistakes” and that the user should “[a]lways evaluate responses.” While generative AI continues to improve, users should not rely on the accuracy of its outputs, and it may be prudent for institutions to provide guidelines for when faculty and staff are expected to confirm the accuracy of the AI outputs. 

  • Bias: Because ChatGPT generates responses based on information from the internet, any bias in that content may be replicated in ChatGPT’s output. OpenAI acknowledges that “[ChatGPT] may also occasionally produce . . . biased content.” ChatGPT has been criticized for generating racial, political, and other types of biased responses.

  • Contractual Obligations: Sharing confidential information with generative AI may violate contractual provisions with customers and clients that limit the use of customer/client data to certain purposes. Before allowing faculty and staff to use any generative AI tool, legal counsel should carefully review each tool’s legal terms and be particularly cautious with AI tools that include terms allowing the use of input data to train AI models.

  • Intellectual Property: Generative AI use raises numerous IP-related concerns, including:

    1. whether content produced by generative AI is protected by copyright law since it was not created by a human being; and

    2.  whether AI-generated content may be deemed a derivative work of the content it uses to produce the output in response to a user’s input. In other words, by using the outputs produced by generative AI, a user may be violating the copyright of third parties due to the possibility that generative AI is using copyrighted material to generate its outputs. ChatGPT produces limited and/or inaccurate citations, which makes it difficult to know when a user may have committed copyright infringement.

  • Third Parties: An institution may be held accountable if a third party (e.g., a vendor or a vendor’s subcontractor) uses generative AI in violation of applicable laws (e.g., data privacy laws) or in a manner that causes harm to another individual or organization. In turn, the third party may be liable to the institution if its use of AI violates the contractual terms between the third party and the institution. For example, suppose a university enters into an agreement in which a service provider agrees to keep the university’s information confidential, and the service provider then uses a subcontractor to perform a service involving that information. If the subcontractor violates applicable laws or causes harm by feeding the university’s confidential information into a generative AI system, the university may be held accountable, and the service provider may, in turn, be liable to the university. Legal counsel should ensure that an institution’s contracts with third parties appropriately address confidentiality obligations with respect to the use of generative AI.

 

Compliance with State and Federal Law

 

Institutions will also want to ensure that their internal policies are consistent with applicable state and federal law, and they may also wish to consider whether any of their activities are subject to foreign AI laws, such as Europe’s new Artificial Intelligence Act. [Note: XL Law & Consulting will take a deeper dive into state and federal AI laws, in addition to the laws of a few select foreign jurisdictions, in a future article.]

 

Implications for Higher Education Institutions

 

Institutions of higher education (IHEs) will likely need to address institutional AI use based, at least in part, on individuals’ roles. For example, many IHEs have given wide latitude to faculty with respect to determining the circumstances and conditions in which students in their classes may use generative AI. Institutions will likely wish to exercise more control over AI use by administrators and other staff, however.

 

IHEs will also want to consider the needs and risks of different departments in crafting an AI policy (or policies). As noted above, lawyers in the general counsel’s office will have specific duties to maintain client confidentiality. Human resources departments may need their own AI policies, given AI’s potential on the one hand to significantly increase efficiency (e.g., analyzing resumes and predicting performance) and on the other to increase risks related to confidentiality, discrimination, bias, and automated decision-making that produces legal or similarly significant effects. For example, the EEOC released guidance in May 2022 that addressed how employer use of AI may, in certain circumstances, violate Title I of the ADA.


Institutions will also want to consider the risks based on the type of information at issue. For example, inputting information that is protected under the Family Educational Rights and Privacy Act (FERPA) or the Health Insurance Portability and Accountability Act (HIPAA) into generative AI—regardless of the role or department involved—would likely raise serious confidentiality and privacy issues.

 

In addition to the above, IHEs may wish to consider the following when drafting policies on generative AI use:

 

  • Cybersecurity – If the IHE will be using generative AI systems, IT professionals should be consulted to ensure that the systems follow security, confidentiality, data retention, and data breach notification protocols. IHEs should understand how generative AI companies may use and retain data (including whether the companies assert any proprietary rights to the inputs), and institutions should adjust contractual terms as necessary. In instances where confidential or sensitive information will be shared with a generative AI system, IHEs should avoid using AI solutions labeled with terms such as “beta” or “preview,” as such AI solutions are typically not subject to standard security and confidentiality terms. IHEs should also ensure that the data will not be used by the company for training its AI models or other internal purposes and that it will not be shared with third parties. The use of an in-house generative AI system, where data is hosted and stored locally, may be one way to address confidentiality concerns. Such programs still have the benefit of “self-learning,” since they are trained on inputs from the institution, but reduce the risk of disclosure of confidential or otherwise sensitive information.

  • Contractual Terms – IHEs may wish to address generative AI use in their contracts, particularly when the agreement involves the use of confidential or sensitive information.

  •  Training – Faculty, staff, and students (if applicable) need training to ensure policy compliance.

 

Key Elements of a College or University AI Policy

 

An effective college or university AI policy must establish clear guidelines and regulations to navigate the ethical and practical challenges posed by the deployment of various forms of AI throughout the academic enterprise.

 

First and foremost, the policy should preclude certain uses of AI that may contravene ethical standards or pose risks to individuals or the institution itself. This might include prohibiting the use of generative AI for purposes such as generating fraudulent academic work, facilitating academic dishonesty, or infringing upon individuals' confidentiality or privacy rights (e.g., when used in an HR context).

 

Identifying permitted uses of AI in a college or university policy is crucial for ensuring its responsible deployment. The policy should outline specific scenarios in which AI may be used, such as aiding research, supporting student learning, and automating administrative tasks, as well as circumstances in which protective measures or guardrails (contractual or otherwise) must be in place to ensure proper use. Moreover, it should delineate cases where prior approval is required, particularly in sensitive areas like conducting research involving human subjects or using AI for decision-making processes that may have significant consequences.

 

To ensure accountability and mitigate potential risks, the policy should also mandate internal tracking of AI usage and implement additional human review mechanisms for selected AI-generated content. This dual approach helps to identify any misuse or unintended consequences of AI while also promoting transparency and accountability in its deployment.

 

At the same time, addressing external disclosures of AI use and output is essential for maintaining trust and transparency with stakeholders. The policy should stipulate guidelines for communicating the use of AI in research, teaching, or other activities, ensuring that disclosures are accurate, comprehensible, and respectful of privacy concerns.

 

Regulating the uses of AI by external business partners and vendors is another critical aspect of an effective AI policy. Institutions must establish clear contractual agreements and oversight mechanisms to ensure that external entities adhere to the same ethical and operational standards governing AI use within the college or university and, whenever possible, that such standards are passed through to subcontractors and affiliates.

 

Finally, addressing the possibility of embedding AI applications on the institution's website requires careful consideration of privacy, accessibility, and security concerns. The policy should outline protocols for vetting and integrating AI applications, including risk assessments, data protection measures, and user consent mechanisms, to safeguard against potential harm or misuse. By incorporating these key elements into their AI policy, colleges and universities can foster a culture of responsible AI use that aligns with their academic mission and values.

 

Current predictions point towards continued growth in (and regulation of) generative AI use, and colleges and universities should give serious consideration to developing an AI policy if they do not already have one. Even if it needs further refinement as generative AI’s advantages and risks become better understood, a well-drafted policy can provide guidance to faculty and staff while protecting the institution from AI’s very serious risks.



