
XL INSIGHTS+
Legal Alerts and News Updates

EU AI Act Approach to General-Purpose AI Models Takes Shape: Takeaways for IHEs


  • On August 2, 2025, provisions of the EU AI Act addressing large “general-purpose AI” (GPAI) models took effect.

  • The provisions include transparency and copyright compliance obligations for all GPAI models and heightened safety and security obligations for GPAI models with “systemic risk.”

  • For institutions of higher education adopting and using AI tools powered by large GPAI models, these provisions will likely produce useful information for conducting due diligence and framing AI governance practices.


By late 2022, when OpenAI’s ChatGPT brought widespread attention to the potential power of generative AI, the European Union was already considering a complete draft of what would become the EU AI Act. That draft took a risk-based approach to regulation: rather than focusing on the technical specifications of AI systems, it regulated them according to the risks involved in their use, classifying them as posing unacceptable, high, limited, or minimal risk. The introduction of ChatGPT, however, heralded a breakthrough in AI capabilities, and the drafters rushed to develop technologically neutral provisions to address the potential risks associated with what the Act terms “general-purpose AI” (GPAI) models.


On August 2, 2025, the Act’s provisions establishing a regulatory framework for these GPAI models took effect. For new GPAI models placed on the market after August 2, 2025, the Act’s compliance obligations apply immediately; for models already on the market before that date, there is an enforcement grace period until August 2, 2027. At around the same time, the European Commission also released a voluntary General-Purpose AI Code of Practice (July 10, 2025) designed to help providers of GPAI models comply with the Act’s obligations, a set of Guidelines (July 18, 2025) clarifying the scope of the new provisions, and a Template for the Public Summary of Training Content for General-Purpose AI models (July 24, 2025) for mandatory disclosures by GPAI model developers.


Though institutions of higher education (IHEs) are unlikely to develop generative AI models large enough to be subject to the Act’s regulations, virtually every IHE already has AI tools in use on its campus that are powered by large GPAI models that will soon need to comply with these provisions. Accordingly, these provisions will have important implications for entities downstream in the AI value chain, including IHEs.


General-Purpose AI Models


The Act’s definition of GPAI models incorporates two important elements: the scale of the training that goes into these models and the generality of their resulting capabilities.


First, GPAI models are distinguished by the fact that they are “trained with a large amount of data using self-supervision at scale.” Rather than attempt to define a threshold for the amount of data required to bring a GPAI model under the Act’s regulation, the Act instead looks to the computational resources used in training (i.e., the model’s “training compute”). The Guidelines offer as an “indicative criterion” the threshold that the model’s “training compute is greater than 10²³ FLOP” (floating-point operations, i.e., distinct mathematical operations), which the Commission notes “corresponds to the approximate amount of compute typically used to train a model with one billion parameters on a large amount of data.” As described below, when a model reaches 10²⁵ FLOP in training compute, it is subject to heightened regulation because it is then deemed to have high-impact capabilities and thereby also to involve “systemic risk.”
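To put these thresholds in perspective, the sketch below applies the widely used rule of thumb (our assumption; the Act and Guidelines state only the FLOP figures themselves) that training a dense model consumes roughly 6 × N × D floating-point operations, where N is the parameter count and D is the number of training tokens. The parameter and token values are hypothetical, chosen to land near the Commission’s one-billion-parameter example.

    # Rough sanity check of the EU AI Act's indicative compute thresholds.
    # Assumption (ours, not the Act's): total training compute for a dense
    # model is approximately 6 * N * D FLOP, where N is the parameter count
    # and D is the number of training tokens.

    GPAI_INDICATIVE_FLOP = 1e23      # Guidelines' indicative GPAI criterion
    SYSTEMIC_RISK_FLOP = 1e25        # Article 51 presumption threshold

    def approx_training_flop(params: float, tokens: float) -> float:
        """Approximate total training FLOP under the 6 * N * D heuristic."""
        return 6.0 * params * tokens

    # A hypothetical 1-billion-parameter model trained on ~17 trillion tokens
    # lands right around the 10^23 FLOP indicative criterion:
    flop = approx_training_flop(params=1e9, tokens=1.7e13)
    print(f"{flop:.2e} FLOP")            # ~1.02e+23
    print(flop >= GPAI_INDICATIVE_FLOP)  # True: indicatively a GPAI model
    print(flop >= SYSTEMIC_RISK_FLOP)    # False: no systemic-risk presumption

Under this heuristic, the 10²⁵ FLOP systemic-risk presumption sits two orders of magnitude above the indicative GPAI criterion, which helps explain why so few models currently cross it.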


Second, the more important aspect of GPAI models for the Act’s purposes is the generality of their capabilities. The Act regulates GPAI models when they prove “capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.” If a model meets the indicative threshold of training compute but proves not to have general capabilities, it will not be considered a GPAI model for regulatory purposes, but the Guidelines emphasize that these situations will be exceptional.


To convey a sense of the enormous scale a model would need to reach in order to be subject to the highest level of regulation under the Act, one estimate found that as of June 2025 there were only around 30 such models in existence, noting also that it currently takes tens of millions of dollars to train a model at this scale. Accordingly, it is currently unlikely that IHEs will face direct regulation under the Act as providers of GPAI models—though they will still be subject to other provisions of the Act, as we noted in our earlier [article].


Compliance Obligations for All GPAI Models


Turning to the compliance obligations, Article 53 of the Act outlines several baseline obligations applicable to all providers of GPAI models.


  • Documentation for the EU AI Office. Providers must maintain and make available to the AI Office (and national competent authorities) up-to-date documentation about the model, including (1) detailed general information (such as the tasks it is intended to perform, applicable acceptable use policies, details of its distribution, its architecture and number of parameters, its modality and format of inputs and outputs, and license) and (2) detailed technical information about the model and its development (such as the means for integrating it into AI systems, specifications regarding the training methodology and process, information about the data used for its training, testing, and validation, the computational resources used for its training, and the model’s energy consumption).

  • Documentation for Downstream Providers. Providers must also maintain and make available similar documentation to downstream providers who incorporate the GPAI model into their AI systems.

  • Policy on Copyright and Related Rights. Providers must maintain a policy to comply with EU copyright and related intellectual property law.

  • Summary of Training Content. Providers must also make publicly available a detailed summary of the content used for training the GPAI model. The Template published by the Commission on July 24, 2025 specifies the required contents of this summary.


The two documentation obligations, however, do not apply to providers of AI models released under a free and open-source license whose parameters are made publicly available; the copyright policy and training content summary obligations apply regardless. Procedurally, all providers must also cooperate with the Commission and national competent authorities as they fulfill their responsibilities under the Act, and providers in third countries must appoint an authorized representative in the Union.


Finally, it is also worth noting that under Article 50, which will take effect on August 2, 2026, providers of AI systems, including general-purpose AI systems, must take steps to ensure the transparency of their systems’ outputs to users through such measures as marking those outputs in machine-readable formats to indicate that they were artificially generated or manipulated.
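By way of illustration only, the sketch below shows one deliberately simple way a system might attach a machine-readable “artificially generated” marker to model output. Article 50 does not prescribe a particular format, and real deployments rely on techniques such as embedded content provenance metadata or statistical watermarks; the record structure and field names here are hypothetical.

    # Minimal illustration (not a compliance mechanism) of marking AI output
    # in a machine-readable format. The record structure and field names are
    # hypothetical; Article 50 does not prescribe a specific format.
    import json
    from datetime import datetime, timezone

    def mark_as_ai_generated(text: str, model_name: str) -> str:
        """Wrap generated text in a machine-readable provenance record."""
        record = {
            "content": text,
            "provenance": {
                "ai_generated": True,
                "generator": model_name,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            },
        }
        return json.dumps(record)

    print(mark_as_ai_generated("Sample output.", "example-gpai-model"))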


Obligations of GPAI Models with Systemic Risk


When GPAI models reach a certain size, they acquire what the Act terms “high-impact capabilities” and pose what it terms “systemic risk.” The Act deems that GPAI models have “systemic risk” when they have “a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”


Article 51 provides that GPAI models are presumed to have high-impact capabilities when the amount of computation used in their training is 10²⁵ FLOP or greater. The Commission may also designate a model as having systemic risk ex officio or following a qualified alert.


Article 55 outlines the following additional compliance obligations for providers of GPAI models with systemic risk.


  • Model Evaluations. Providers must evaluate their models using state-of-the-art methods to identify and mitigate systemic risks.

  • Risk Assessment and Mitigation at the EU Level. Providers must assess and mitigate possible systemic risks at the Union level, including their sources, that may stem from the development, placing on the market, or use of GPAI models with systemic risk.

  • Serious Incident Reporting. Providers must track, document, and report to the AI Office or national competent authorities relevant information about serious incidents and possible corrective measures to address them.

  • Cybersecurity Protection. Providers must ensure adequate cybersecurity for both the GPAI models and their physical infrastructure.


Unlike the general exemption noted above for free and open-source GPAI models, there is no open-source exception for GPAI models that present systemic risk.


Code of Practice and Template for Summarizing Training Content


On July 10, 2025, the Commission published the Code of Practice to serve as a voluntary mechanism for providers of GPAI models to demonstrate compliance with the obligations of Articles 53 and 55. The Code is divided into three chapters. The first two chapters, which address “Transparency” and “Copyright,” apply to all GPAI models, while the third chapter, which addresses “Safety and Security,” applies specifically to GPAI models with systemic risk.


  • Transparency. This chapter explains the documentation signatories must draw up and keep up to date, and the information they must furnish to the AI Office, national competent authorities, and downstream providers incorporating the model into AI systems. The chapter also includes a Template for assembling the required documentation.

  • Copyright. This chapter outlines measures signatories must take with respect to implementing a copyright policy, extracting only lawfully accessible content through web crawling, identifying and complying with rights reservations when web crawling, mitigating the risk of copyright-infringing outputs, and facilitating the lodging of complaints.

  • Safety and Security. This lengthy chapter outlines certain commitments signatories that provide GPAI models with systemic risk must undertake to implement a safety and security framework. These include commitments to identify, analyze, and make acceptance determinations for systemic risk; implement safety and security mitigations; make safety and security model reports; allocate responsibility for systemic risk; report serious incidents; and undertake additional documentation and transparency.


The Commission’s website for the Code of Practice includes a list of the GPAI model providers that have committed to the framework as signatories. Among the signatories are Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI. xAI has signed onto the Safety and Security chapter but opted not to sign onto the Transparency and Copyright chapters. Meta has announced that it will not become a signatory.


Finally, on July 24, 2025, the Commission also published the mandatory Template that providers of GPAI models must use to summarize, for public disclosure, the content used to train their models. The summary must include information about the model itself, the sources of the data used to train it, and the processing undertaken to respect intellectual property rights and remove illegal content.


Insights for IHEs


Though these particular provisions of the EU AI Act are unlikely to apply directly to IHEs, they may provide useful resources for conducting due diligence and framing AI governance practices for the AI tools IHEs adopt for use on campus.


First, many IHEs are currently in the process of selecting or approving AI tools for use by their faculty and staff. When reviewing such AI tools, IHEs may wish to consider whether the providers have signed onto all or part of the Code of Practice. Similarly, while some campuses might prefer to adopt tools built on models released under free and open-source licenses, others might prefer tools based on models that are subject to these regulations.


Second, the Act requires that providers of GPAI models maintain and make available to providers incorporating the model into downstream AI systems certain information, including about what the model was intended to do, the kinds of AI systems into which it may be integrated, and an acceptable use policy. For those evaluating the adoption of new AI tools on campus, consulting this kind of documentation should become a standard part of the due diligence process to ensure that the components of these tools are not being used in ways that involve more risk than they were designed to handle.


Finally, as the providers of the GPAI models that currently dominate the market come into compliance with the Act over the next two years, we can expect to learn more about how they are identifying, analyzing, and mitigating the risks associated with the sheer size of the models they have created. Those involved in AI governance on campus should pay careful attention to the insights this process yields, as it may help inform risk-based determinations about (1) when and where IHEs might be willing or unwilling to deploy specific AI tools and (2) when IHEs might want to implement additional safeguards, such as adversarial testing or increased human oversight, to mitigate their risk.


