AI Isn’t Slowing Down. Neither Are the Questions Around It.

Each year, Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) releases its AI Index report. The study offers a sweeping, data-driven survey of where artificial intelligence stands. The 2026 edition is the most revealing yet: a technology leaping ahead on performance metrics, reshaping economies and classrooms, and leaving regulators scrambling to catch up.

Here are the ten findings that matter most:

Performance is accelerating, not plateauing

The narrative of diminishing returns in AI development is not holding up. Industry produced over 90% of notable frontier models in 2025, and several now meet or exceed human baselines on PhD-level science questions, multimodal reasoning, and competition mathematics. On one key coding benchmark — SWE-bench Verified — performance jumped from 60% to near 100% in a single year. Organizational adoption reached 88%, and four in five university students now use generative AI regularly.

“AI capability is not plateauing. It is accelerating and reaching more people than ever.”

The U.S.–China gap has effectively closed

What was once clear American leadership in AI model performance is now a competitive race. U.S. and Chinese models have traded the top spot multiple times since early 2025. DeepSeek-R1 briefly matched the leading U.S. model in February 2025, and as of March 2026, Anthropic's top model leads by just 2.7%. The U.S. still produces more top-tier models and higher-impact patents; China leads in publication volume, patent output, and industrial robot installations. South Korea stands out for innovation density, leading the world in AI patents per capita.

The hardware supply chain runs through one building in Taiwan

The U.S. hosts more than 5,400 data centers, dwarfing every other country. But almost every leading AI chip is fabricated by a single company — TSMC — at a single location in Taiwan. That concentration is a strategic vulnerability that policymakers and executives are only beginning to reckon with. A TSMC facility in the United States began operations in 2025, but the dependency remains acute.

Rendering of a Taiwan Semiconductor (TSMC) facility, presumably one of the five coming to the United States. The Arizona facility has already sold out production through 2027 and employs approximately 3,000 people. https://www.tsmc.com/static/abouttsmcaz/index.htm

AI has a jagged frontier

One of the report’s most striking — and humbling — findings: the same models winning gold medals at the International Mathematical Olympiad read analog clocks correctly just 50.1% of the time. AI agents went from 12% to roughly 66% task success on OSWorld (a benchmark testing real computer tasks), but still fail about one in three attempts on structured tests. The frontier of capability is not a smooth curve; it is jagged and unpredictable.

Responsible AI is falling behind

Nearly all leading AI developers report results on capability benchmarks. Reporting on responsible AI benchmarks — safety, fairness, transparency — remains inconsistent. Documented AI incidents rose to 362 in 2025, up from 233 the year before. Complicating matters further, recent research found that improving one responsible AI dimension, such as safety, can degrade another, such as accuracy. The tradeoffs are real and poorly understood.

America leads in investment but is losing its talent advantage

U.S. private AI investment reached $285.9 billion in 2025 — more than 23 times China’s $12.4 billion, though China’s state-guided funds make direct comparisons imprecise. The U.S. also led in new AI company formation, with 1,953 freshly funded startups. Yet the number of AI researchers and developers choosing to relocate to the United States has dropped 89% since 2017, with an 80% decline in the last year alone. That is a canary worth watching.

Adoption is outpacing the internet and the PC

Generative AI reached 53% population adoption within three years — faster than the personal computer or the internet. That said, the pace varies sharply by country and correlates strongly with GDP per capita. Singapore (61%) and the UAE (54%) show higher-than-expected adoption. The U.S. ranks 24th globally at 28.3%. The estimated annual value of generative AI tools to U.S. consumers reached $172 billion by early 2026, with the median value per user tripling between 2025 and 2026 — much of it captured for free.

Education is still catching up

Over 80% of U.S. high school and college students now use AI for school-related work, but only half of middle and high schools have any AI policy in place, and just 6% of teachers describe those policies as clear. Outside the classroom, AI engineering skills are growing fastest in the UAE, Chile, and South Africa. New AI PhDs in the U.S. and Canada increased 22% from 2022 to 2024 — but the newly minted PhDs driving that growth are heading into academia, not industry.

AI sovereignty is reshaping geopolitics

National AI strategies are multiplying, particularly among developing economies, and state-backed investments in supercomputing infrastructure are rising. Yet model production remains concentrated in the U.S. and China. Open-source development is beginning to redistribute participation: contributions from the rest of the world now outpace Europe on GitHub, fueling more linguistically diverse models and benchmarks. The question of who controls the infrastructure of intelligence is becoming a defining feature of international politics.

Experts and the public are living in different realities

Perhaps the most striking finding in the report is the perception gap between AI experts and the general public. When asked about AI’s impact on how people do their jobs, 73% of experts expect a positive effect — compared with just 23% of the public. That 50-point gap recurs across questions about the economy and medical care. On trust in governments to regulate AI, the findings are equally striking: the United States reported the lowest level of trust in its own government among surveyed countries, at 31%. Globally, the EU is trusted more than either the U.S. or China to regulate AI effectively.

“Among surveyed countries, the United States reported the lowest level of trust in its own government to regulate AI.”

The 2026 AI Index is a benchmark document — not optimistic, not pessimistic, but rigorously empirical about a technology that is moving faster than almost anyone predicted. Read alongside the headlines, it offers something rare: data over narrative. For anyone trying to understand where AI is actually going, it is essential reading.

Rhetorical History Timeline


8th Century BCE – Epic Poetry
Rhetoric had not yet emerged as a formal discipline, but foundational persuasive practices appear in epic poetry. Homer’s works depict structured speeches before assemblies, battlefield persuasion, negotiation, and honor-based argument. Authority derived from reputation, narrative framing, and emotional appeal.
Major World Events:
• Founding of Rome (753 BCE)
• Rise of Neo-Assyrian Empire
5th Century BCE – Greek Democracy & Sophists
Democratic governance required citizens to argue in courts and assemblies. Sophists professionalized rhetorical education, teaching argument structure, stylistic techniques, and the ability to argue both sides. Debate emerged over whether rhetoric served truth or merely persuasion.
Major World Events:
• Persian Wars
• Construction of the Parthenon
• Peloponnesian War
4th Century BCE – Plato & Aristotle
Plato critiqued rhetoric as persuasion detached from truth, while Aristotle systematized it as the counterpart of dialectic. He identified three genres (deliberative, forensic, epideictic) and three appeals (ethos, pathos, logos), transforming rhetoric into an analytical discipline.
Major World Events:
• Conquests of Alexander the Great
• Spread of Hellenistic culture
Roman Republic (509–27 BCE)
Rome made rhetoric central to law and politics. Cicero synthesized Greek theory with Roman practice, presenting the ideal orator as a statesman guided by wisdom and virtue. Rhetoric was practical, civic, and legally grounded.
Major World Events:
• Punic Wars
• Julius Caesar assassinated
• Rise of Augustus
Imperial Rome (27 BCE – 410 CE)
Political centralization reduced deliberative rhetoric, but courts and ceremonial praise flourished. Quintilian emphasized moral education, arguing the orator must be a “good person skilled in speaking.”
Major World Events:
• Pax Romana
• Spread of Christianity
• Edict of Milan
Middle Ages (410–1400 CE)
Rhetoric was preserved through Christian scholarship. Augustine integrated classical rhetoric into preaching. Medieval rhetoric emphasized letter writing, preaching, and textual composition rather than civic debate.
Major World Events:
• Fall of Western Roman Empire
• Magna Carta
• Black Death
Renaissance (1400–1600)
Humanists revived classical rhetoric as the foundation of education and civic life. Printing spread rhetorical texts widely. Ramus later narrowed rhetoric to style and delivery, reshaping curricula.
Major World Events:
• Fall of Constantinople
• Protestant Reformation
Age of Science & Rationalism (1600–1700)
Scientific and rational methods challenged rhetorical authority. Bacon promoted induction; Descartes, rational certainty. Vico defended rhetoric as essential to human institutions and to reasoning about the probable.
Major World Events:
• Galileo’s trials
• Newton’s Principia
Enlightenment (1700–1800)
Scottish Enlightenment thinkers integrated psychology into rhetoric. Campbell connected persuasion to human cognitive faculties; Blair emphasized taste and style. Rhetoric shaped emerging democratic revolutions.
Major World Events:
• American Revolution
• French Revolution
19th Century – Legal Education Shift
The case method reframed law as scientific reasoning rather than persuasive advocacy. Rhetoric became marginalized in formal legal education, though still practiced in courtrooms.
Major World Events:
• U.S. Civil War
• Industrial Revolution
20th Century – Reconnection (1970s–Present)
Clinics, legal writing, and advocacy training restored rhetoric to legal education. Contemporary scholarship reconnects law to classical rhetorical traditions emphasizing narrative, audience, and ethical persuasion.
Major World Events:
• World Wars
• Civil Rights Movement
• Digital Revolution

MPT Builder Template

Here is a template for building your very own Multistate Performance Test–style exam. After you've made your edits, save the HTML file and share it with students. Keep a copy to reuse as a template.

Word Count Restraint Syndrome

Oh, Claude!

§ § §
Three students unable to express their Contracts analysis in fewer than 14,000 words
Medical Emergency — Academic Crisis

Help Law Students Suffering from Word Count Restraint Syndrome (WCRS)

They just need one more footnote. Please.

Rebecca J., 2L at Avalon School of Law
Organizing on behalf of the Coalition for Unrestrained Legal Verbosity (CULV)
$18,247 raised of $25,000 goal
312 donors · 18 days left · 73% funded

About this campaign

Every year, thousands of law students are diagnosed with Word Count Restraint Syndrome — a devastating condition in which the afflicted student is constitutionally incapable of writing a memo, brief, or exam answer within the assigned page limit.

“The prompt said ‘briefly explain.’ I wrote 47 pages. I cannot stop. I physically cannot stop.”
— Tyler M., 1L, diagnosed Week 2 of Civ Pro

Symptoms include:

Compulsive footnoting (up to 200 per page) · Inability to use a period without appending a dependent clause · Reflexive citation of law review articles published between 1887 and 1904 · Spontaneous Latin · Restatement parentheticals inserted into casual conversation · Accidentally writing a 12-page text message · Beginning every sentence with “It is well-established that…”

The science

Researchers at the Institute for Prolific Legal Scholarship (not accredited) have identified WCRS as a bio-psycho-socio-legal phenomenon arising from overexposure to casebooks, legal writing professors who use the phrase “thorough analysis,” and grading rubrics that award points for “comprehensiveness.” Peer-reviewed*

*Reviewed by three peers who also have WCRS and made the abstract 6,000 words long.

Where your money goes

Funds will cover: therapy (specifically, a licensed professional who can say “stop writing” in a legally binding way) · Printer ink · Wrist braces from excessive Westlaw scrolling · A graduate student to read the footnotes · Emergency Bluebook consultations · Post-traumatic stress counseling for TAs who received 80-page “short answer” exams

The case for giving

As the court held in Generosity v. Stinginess, 404 U.S. 1 (1971) (fictional), the duty to assist those in verbose distress is not merely moral but arguably quasi-contractual, subject to promissory estoppel if you’ve read this far. We argue, in the alternative, that your donation is both (1) the right thing to do, and (2) tax-deductible, notwithstanding the fact that we checked neither of those things.

Anonymous Professor
“I read their final exam. I donated immediately. I also cried.” — $500
Kate L. (fellow 2L, recovering)
“I once wrote ‘cf.’ in a birthday card. I understand.” — $25
John D., BigLaw Associate
“My 300-page due diligence memo was just my first draft. This hits close to home.” — $100
Anonymous
“Donated instead of writing a conclusion to my Note. They’d want it this way.” — $15
Day 1 update — 18 days ago
We have launched the campaign. The full explanation of why we launched is available in the attached 340-page explanatory memorandum.
Day 7 update — 11 days ago
Going viral. One sufferer was found attempting to add a Table of Authorities to their DoorDash review. Doctors say she is “stable but extremely thorough.”
Day 14 update — 4 days ago
We tried to write a short thank-you note to donors. It is currently 88 pages. We are working on a shorter version. It is 91 pages.

GoFundMe is not responsible for the contents of this campaign, including but not limited to: the 14 string citations embedded in the campaign description, the three footnotes we removed before publishing, or the executive summary (62 pages) available upon request. All legal conclusions herein are provided for entertainment purposes only and do not constitute legal advice, notwithstanding that they are extensively cited.

AI Law Final Spring 2026


Administrative Process: AI in Law and Practice

Final Examination — Multistate Performance Test Format

Format: Take-home, open book
Suggested time: 90 minutes
Word limit: 3,000 words
Read all instructions before opening the File or Library.

About this examination

This examination is modeled on the Multistate Performance Test (MPT). You are a junior associate at a law firm retained by Nexus Health, Inc. You must complete three lawyering tasks.

The File and the Library

The File contains the factual record: client communications, internal documents, and regulatory notices. The Library contains a curated set of real, enacted authorities with official citations and hyperlinks — the statutes, regulations, and professional responsibility rules directly relevant to the tasks.

The Library is sufficient to complete all three tasks with a full, well-reasoned answer. You are not required to go beyond it.

Because this is an open-book, take-home examination, you may also draw on additional authorities from outside the Library — cases, regulations, secondary sources, or other statutes — if you believe they strengthen your analysis. Outside authorities are not required, and a response grounded entirely in the Library materials can receive full credit. If you do cite outside sources, use proper legal citation form and do not misrepresent what those sources say.

Scope note on California AI law

The California legislature has enacted a layered AI framework. The CCPA/CPRA provides the broadest base, with finalized ADMT regulations that took effect January 1, 2026 and consumer-facing ADMT rights operative January 1, 2027. California's Transparency in Frontier Artificial Intelligence Act (SB 53, 2025) adds requirements for "frontier developers." Students should assess which frameworks apply to Nexus Health and where scope limitations create gaps that affect the analysis.

Format and citation rules

  • Answer in the document type specified for each task (memo, client letter, ethics analysis)
  • Use headings appropriate to the document type
  • Cite all authorities using proper legal citation form
  • Total word limit: 3,000 words across all tasks
This exam tests your ability to apply law to facts and identify where the law is unsettled or where a client’s conduct falls into a gap. Precision matters more than volume — or the breadth of your research.

Scenario: Nexus Health & the ARIA System

Nexus Health, Inc. is a digital health company headquartered in San Francisco, California. Nexus developed ARIA (Adaptive Risk Intelligence Assistant), an AI system deployed at partner hospitals to assist clinicians with triage and to flag patients at elevated risk of deterioration. ARIA is not FDA-cleared as a medical device; it is marketed as a “clinical decision support tool.” Nexus has approximately $80 million in annual revenue and processes health data for over 300,000 California residents.


The File

Document 1 — Memo from General Counsel

Privileged & Confidential — Attorney Work Product
To: Outside Counsel | From: Dana Voss, General Counsel, Nexus Health | Date: March 14, 2026

Three urgent issues require your written analysis. First, ARIA is deployed in hospitals in California and Germany. Each regulator is asking different things of us. Second, the California Privacy Protection Agency (CPPA) has sent a formal inquiry alleging that ARIA processes “sensitive personal information” without adequate disclosure and that its outputs constitute “automated decisionmaking technology” (ADMT) subject to consumer rights. Third, our data scientist Dr. Priya Mehta has flagged internally that ARIA’s risk scores show a disparate performance gap across racial subgroups — an 18% false-negative rate for Black patients versus 9% for white patients on the deterioration-prediction task. Dr. Mehta has asked whether she has any obligation to report this externally. We have not yet disclosed this disparity to our hospital partners.

Please advise on all three issues.

Document 2 — CPPA Inquiry Notice

California Privacy Protection Agency — Informal Inquiry Notice No. 2026-012

The CPPA has received a complaint alleging that Nexus Health’s ARIA system: (1) processes patient health data and infers racial/ethnic characteristics without a conspicuous pre-use notice; (2) generates risk scores that constitute “automated decisionmaking technology” affecting patients’ access to care; and (3) has not provided consumers with an opt-out mechanism.

The CPPA requests a written response within 30 days addressing Nexus Health’s data practices under the CCPA (Cal. Civ. Code §§ 1798.100–1798.199.100) and the CPPA’s ADMT Regulations (11 Cal. Code Regs. §§ 7150–7157), which require pre-use notices for ADMT used in significant decisions affecting health.

Document 3 — Internal Engineering Summary

ARIA Model Card — Internal v2.3 (Excerpt)

ARIA uses a gradient boosting model trained on EHR data from three hospital systems (2015–2023). Inputs include age, vital signs, lab values, ICD-10 codes, medication history, and ZIP code as a socioeconomic proxy. Outputs are risk scores (0–100) used by clinicians in triage decisions.

Post-deployment monitoring (Q4 2025) identified a performance disparity: false-negative rate of 18% for Black patients vs. 9% for white patients. Root cause analysis is ongoing.

The model was developed at an estimated cost well below $100 million in compute.

Compliance note: ARIA is not an FDA-cleared device and is not developed by a “frontier developer” as defined in California Health & Safety Code § 22756.1 (SB 53). Nexus does not train foundation models.
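An aside on the metric before the next document: a false-negative rate is the share of truly positive cases a model fails to flag. Below is a minimal sketch of how a subgroup comparison like the 18% vs. 9% figure in the model card would be computed; the column names and toy data are hypothetical and are not part of the exam File.

```python
# Illustrative sketch: computing a subgroup false-negative-rate gap like the
# one described in the model card. Column names and data are hypothetical.
import pandas as pd

# Each row: patient subgroup, true outcome (1 = deteriorated), model flag
df = pd.DataFrame({
    "group":   ["Black", "Black", "Black", "white", "white", "white"],
    "actual":  [1, 1, 0, 1, 1, 0],
    "flagged": [0, 1, 0, 1, 1, 0],
})

# False-negative rate = missed true positives / all true positives, FN / (FN + TP)
for group, sub in df.groupby("group"):
    positives = sub[sub["actual"] == 1]       # patients who actually deteriorated
    fnr = (positives["flagged"] == 0).mean()  # fraction the model failed to flag
    print(f"{group}: FNR = {fnr:.0%}")
```

Read this way, an 18% versus 9% gap means ARIA misses deteriorating Black patients at twice the rate it misses deteriorating white patients, which is the single fact that Documents 2, 3, and 5 all turn on.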

Document 4 — European Deployment Note

Nexus EU Operations — Compliance Status (March 2026)

ARIA is deployed at two hospital partners in Germany. Both classify ARIA as a high-risk AI system under Annex III of the EU AI Act (Reg. (EU) 2024/1689). They are requesting a conformity assessment and technical documentation.

Our German deployment predates August 2, 2026. Counsel should advise whether the transitional provisions of Article 111 of the AI Act affect our obligations.

Document 5 — Email from Dr. Priya Mehta

Internal Email — Dr. Mehta to General Counsel

“Dana — I’ve reviewed the Q4 monitoring data. The disparity is real and the hospitals are using ARIA scores to prioritize ICU bed allocation. If patients are being harmed because the model performs worse for Black patients, I believe we have a duty to disclose. I’ve spoken with HR about whistleblower protections. I asked specifically whether California’s new AI safety law — the one Newsom signed last fall — protects me if I report to the CPPA or Attorney General directly. HR couldn’t give me a clear answer. Can outside counsel address whether I’m protected?”

The Library

These authorities are sufficient to answer all three tasks. You may cite additional sources if you choose, but doing so is not required and will not by itself improve your grade. See General Instructions.
I. California Consumer Privacy Act / CPRA — Selected Provisions
Cal. Civ. Code §§ 1798.100–1798.199.100 (CCPA/CPRA)
§ 1798.100(a)(1) — General Duties of Businesses that Collect Personal Information

(a) A business that controls the collection of a consumer's personal information shall, at or before the point of collection, inform consumers of the following:

(1) The categories of personal information to be collected and the purposes for which the categories of personal information are collected or used and whether that information is sold or shared. A business shall not collect additional categories of personal information or use personal information collected for additional purposes that are incompatible with the disclosed purpose for which the personal information was collected without providing the consumer with notice consistent with this section.

(2) If the business collects sensitive personal information, the categories of sensitive personal information to be collected and the purposes for which the categories of sensitive personal information are collected or used, and whether that information is sold or shared. A business shall not collect additional categories of sensitive personal information or use sensitive personal information collected for additional purposes that are incompatible with the disclosed purpose for which the sensitive personal information was collected without providing the consumer with notice consistent with this section.

(3) The length of time the business intends to retain each category of personal information, including sensitive personal information, or if that is not possible, the criteria used to determine that period, provided that a business shall not retain a consumer's personal information or sensitive personal information for each disclosed purpose for which the personal information was collected for longer than is reasonably necessary for that disclosed purpose.

§ 1798.121(a) — Right to limit sensitive personal information

A consumer shall have the right, at any time, to direct a business that collects sensitive personal information about the consumer to limit its use of the consumer’s sensitive personal information to that use which is necessary to perform the services or provide the goods reasonably expected by an average consumer who requests those goods or services.

§ 1798.140(ae) — Definition: Sensitive personal information (selected subparts)

Includes: (1) personal information that reveals a consumer’s racial or ethnic origin; (2) a consumer’s health or medical information; (3) inferences drawn from any personal information to create a profile about a consumer reflecting the consumer’s health.

§ 1798.185(a)(15) — Rulemaking authority: ADMT

The Agency shall promulgate regulations governing the use of automated decisionmaking technology, including profiling. The regulations shall establish consumer rights to access information and to opt out of the use of automated decisionmaking technology.

§ 1798.150(a) — Private right of action (data breaches)

A consumer whose nonencrypted and nonredacted personal information is subject to an unauthorized access, exfiltration, theft, or disclosure may bring a civil action for statutory damages of $100 to $750 per consumer per incident, or actual damages, whichever is greater.

II. CPPA ADMT Regulations — Selected Provisions
Cal. Code Regs. tit. 11, §§ 7001, 7150–7157 (approved Sept. 22, 2025; see compliance timeline below for phased effective dates)
§ 7001(f) — Definition: Automated decisionmaking technology (ADMT)

Any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially replace human decisionmaking. “Substantially replace” means using the technology’s output as a key factor in a human’s decisionmaking.

§ 7001(ddd) — Definition: Significant decision

A decision that results in the provision or denial of, or that significantly affects: financial or lending services; housing; insurance; education enrollment; employment or independent contractor opportunities; healthcare access or service; or access to essential goods or services.

§ 7150 — Pre-use notice requirement

A business that uses ADMT to make a significant decision concerning a consumer must provide a Pre-use Notice before using ADMT with respect to that consumer. The Pre-use Notice must inform the consumer about: (1) the type of ADMT used; (2) the purpose and logic of the ADMT; (3) how to exercise the right to opt out.

§ 7152 — Risk assessment requirement

A business must conduct and document a risk assessment before initiating processing activities that pose significant risk to consumer privacy, including use of ADMT for a significant decision concerning a consumer. Assessments must identify and weigh benefits against potential risks to consumers, including risks from algorithmic discrimination based on protected characteristics.

Compliance timeline

ADMT obligations apply to businesses using ADMT for significant decisions beginning January 1, 2027. Risk assessment obligations are effective January 1, 2026. As of the exam date (March 2026), pre-use notice and opt-out obligations are not yet operative, but risk assessments are required.

III. EU Artificial Intelligence Act — Selected Provisions
Regulation (EU) 2024/1689 (AI Act)
Art. 6 & Annex III — High-risk classification

AI systems listed in Annex III are classified as high-risk. Annex III, point 5(c) covers AI systems intended to be used for making decisions or materially influencing decisions on access to and enjoyment of essential private services and public services, including healthcare.

Art. 9(1) — Risk management system

A risk management system shall be established, implemented, documented, and maintained in relation to high-risk AI systems. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system.

Art. 10(2)(f) — Data governance: bias examination

Training, validation, and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern, in particular: the examination in view of possible biases that could affect health, safety or fundamental rights or lead to discrimination prohibited under Union law.

Art. 13 — Transparency and information provision

High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. The provider shall ensure that high-risk AI systems are accompanied by instructions for use including: the level of accuracy, robustness, and cybersecurity against which the system has been tested and validated, and any known and foreseeable limitations.

Art. 73 — Reporting of serious incidents

Providers of high-risk AI systems shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred. A “serious incident” includes any malfunctioning of a high-risk AI system that has led or may lead to the death of a person or serious damage to a person’s health.

Art. 111(2) — Transitional provisions

High-risk AI systems that have been placed on the market or put into service before August 2, 2026 shall comply with this Regulation by August 2, 2027, provided they have not undergone significant changes in their design since their initial placing on the market or putting into service.

IV. Transparency in Frontier Artificial Intelligence Act (TFAIA), California SB 53 (2025)
Cal. Health & Safety Code §§ 22756–22756.9
Scope limitation

The TFAIA applies to “frontier developers” — persons who train a “frontier model” using more than 10²⁶ floating-point operations. A “large frontier developer” additionally has annual gross revenues exceeding $500 million. Students must assess whether Nexus Health meets these thresholds.

§ 22756.1 — Definitions

“Frontier model” means a foundation model trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations. “Frontier developer” means a person that trains a frontier model and makes it publicly available to Californians.

§ 22756.3 — Whistleblower protections (all frontier developers)

A frontier developer shall not make, adopt, or enforce a rule, regulation, policy, or contract that prevents a covered employee from disclosing to the Attorney General, a federal authority, or a person with authority over the covered employee, information that the covered employee reasonably believes discloses that the frontier developer’s activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk, or that the frontier developer has violated the TFAIA. A frontier developer shall not retaliate against a covered employee for such disclosures.

§ 22756.3(c) — Covered employee definition

“Covered employee” means an employee responsible for assessing, managing, or addressing the risk of a critical safety incident in the company.

Note on general whistleblower law

California Labor Code § 1102.5 (not reproduced in full) provides broader whistleblower protections for employees who report violations of state or federal law to government agencies. Students should note whether § 1102.5 may provide an independent basis for protection where the TFAIA does not apply.

V. ABA Model Rules of Professional Conduct
Model Rules of Pro. Conduct (A.B.A. 2016, as amended)
Rule 1.13(b) — Organization as client

If a lawyer for an organization knows that an officer, employee, or other person associated with the organization is engaged in action, intends to act, or refuses to act in a matter related to the representation that is a violation of a legal obligation to the organization, or a violation of law that reasonably might be imputed to the organization, and that is likely to result in substantial injury to the organization, then the lawyer shall proceed as is reasonably necessary in the best interest of the organization. Unless the lawyer reasonably believes that it is not necessary in the best interest of the organization to do so, the lawyer shall refer the matter to higher authority in the organization.

Rule 1.6(b)(1) — Confidentiality: exception for bodily harm

A lawyer may reveal information relating to the representation of a client to the extent the lawyer reasonably believes necessary to prevent reasonably certain death or substantial bodily harm.

Rule 1.6(b)(2) — Confidentiality: exception for financial crime

A lawyer may reveal information relating to the representation of a client to the extent the lawyer reasonably believes necessary to prevent the client from committing a crime or fraud that is reasonably certain to result in substantial financial injury to another and in furtherance of which the client has used or is using the lawyer’s services.

Rule 2.1 — Advisor

In representing a client, a lawyer shall exercise independent professional judgment and render candid advice. In addition to legal considerations, a lawyer may refer to other considerations such as moral, economic, social and political factors, that may be relevant to the client’s situation.

VI. Background authority: CPPA regulatory announcement
CPPA Announcement (Sept. 23, 2025)
Finalization of ADMT, Risk Assessment, and Cybersecurity Regulations

The CPPA announced that regulations covering cybersecurity audits, risk assessments, and automated decisionmaking technology were approved on September 22, 2025 and take effect January 1, 2026, with ADMT-specific consumer rights operative January 1, 2027. The regulations require businesses using ADMT for significant decisions to provide pre-use notices and honor opt-out rights. Risk assessments are required before initiating processing activities posing significant risk, including use of ADMT for significant decisions affecting healthcare access.

Complete all three tasks. Your total response should not exceed 3,000 words. Suggested allocation: Task A ~1,100 words, Task B ~1,000 words, Task C ~700 words.

Task A — Interoffice Memorandum (~1,100 words)

Dana Voss has asked you to prepare an interoffice memorandum analyzing Nexus Health’s legal exposure under three frameworks: (1) the CCPA/CPRA and finalized ADMT Regulations; (2) the EU AI Act; and (3) California’s Transparency in Frontier Artificial Intelligence Act (SB 53).

For each framework, address:

  • (a) whether and why ARIA is a covered system;
  • (b) Nexus Health’s most significant compliance obligations given the facts; and
  • (c) the primary legal risk from non-compliance, particularly given the known performance disparity.

Your memo must engage honestly with scope limitations. For SB 53, you must analyze whether Nexus Health and ARIA fall within the statute’s definitions. If a framework does not fully apply, explain the gap and identify what residual risk remains.

Your memo should also assess how California's regulatory trajectory, including the ADMT consumer-rights provisions that become operative January 1, 2027, affects compliance planning even now.

Drafting note: The memo is privileged and candid. Do not soft-pedal bad facts.

Task B — Client Letter (~1,000 words)

Write a client letter to Dana Voss responding to the CPPA inquiry (Document 2). Your letter must address three questions:

  1. Whether ARIA’s risk scores likely constitute “automated decisionmaking technology” (ADMT) under 11 Cal. Code Regs. § 7001(f), and whether using them for ICU triage qualifies as a “significant decision” under § 7001(ddd). What does the “substantially replace” standard mean in ARIA’s context, where a human clinician sees the score and makes the final call?
  2. Whether patient health data and inferences about race/ethnicity processed by ARIA qualify as “sensitive personal information” under Cal. Civ. Code § 1798.140(ae), and what use limitations apply under § 1798.121(a).
  3. What you recommend Nexus Health do within the 30-day CPPA response window. Your recommendation must address whether and how to disclose the performance disparity — and the legal and ethical consequences of non-disclosure.
Write for a sophisticated client. Do not over-explain basic legal concepts; focus on the application.

Task C — Ethics Analysis (~700 words)

Dr. Mehta’s email (Document 5) raises professional responsibility questions. Draft a section of an internal ethics memo addressing:

  1. Your obligations under Model Rule 1.13(b). You know about the performance disparity and the risk of patient harm. Nexus Health is your client. What does Rule 1.13(b) require of you? What is “higher authority” in this context, and have you satisfied your obligation?
  2. The confidentiality question under Model Rule 1.6. Does the “reasonably certain death or substantial bodily harm” exception in Rule 1.6(b)(1) apply here? Analyze the standard carefully — “reasonably certain” is a demanding threshold. What, if anything, does Rule 1.6 permit you to disclose, and to whom?
  3. Dr. Mehta’s whistleblower protection under California law. Dr. Mehta asks whether SB 53 (Cal. Health & Safety Code § 22756.3) protects her if she reports to the CPPA or Attorney General. Analyze whether SB 53 applies to Nexus Health and to Dr. Mehta as a “covered employee.” If SB 53 does not fully protect her, identify what alternative protection, if any, might exist under the California Labor Code § 1102.5 framework (note: § 1102.5 is referenced in the Library but not reproduced in full; you may note this limitation or research it further).
Be precise about what the rules permit, require, and prohibit. Do not conflate your obligations as counsel with Dr. Mehta’s rights as an employee.
Note on AI tool use: any AI-generated content must be disclosed. You are responsible for every legal claim you make.

Use this checklist to self-assess your response before submitting. I will look for evidence of each item in your written work.

Task A — Interoffice Memorandum

Did you address all three frameworks — CCPA/ADMT Regulations, EU AI Act, and SB 53 — in your memo?
Did you analyze whether ARIA qualifies as a covered system under each framework, rather than assuming coverage?
Did you correctly distinguish between what is required of Nexus Health now (January 2026) and what will be required when the ADMT consumer-rights provisions become operative (January 2027)?
Did you engage with the EU AI Act’s transitional provision (art. 111(2)) and explain what conditions must be met for it to apply?
Did you analyze whether Nexus Health meets the SB 53 “frontier developer” definition, and explain what follows if it does not?
Did you treat the performance disparity as a material fact and address it in your legal analysis, rather than setting it aside?
Is your memo written in the register of a privileged internal document — direct, candid, and organized for a general counsel reader?

Task B — Client Letter

Did you apply the § 7001(f) definition of ADMT to ARIA’s actual clinical workflow, including the role of the human clinician?
Did you address the “substantially replace” standard specifically, rather than simply concluding that human involvement defeats ADMT coverage?
Did you identify the categories of data ARIA processes that qualify as sensitive personal information under § 1798.140(ae)?
Did you explain what use limitation § 1798.121(a) imposes and how it applies to Nexus Health’s current practices?
Did you give Dana Voss a concrete recommendation for responding to the CPPA within 30 days?
Did your recommendation address the performance disparity — including whether and how to disclose it — rather than deferring the question?
Is your letter written at the right level for a sophisticated in-house counsel, without over-explaining settled law?

Task C — Ethics Analysis

Did you correctly identify who your client is under Rule 1.13, and explain why that matters here?
Did you apply the Rule 1.13(b) “substantial injury to the organization” standard to the specific facts, rather than just restating the rule?
Did you identify what action Rule 1.13(b) requires of you at this stage, and whether you have taken it?
Did you carefully apply the Rule 1.6(b)(1) “reasonably certain death or substantial bodily harm” standard without overstating what it permits?
Did you analyze whether Nexus Health is a “frontier developer” under SB 53, and what that conclusion means for Dr. Mehta’s protection under § 22756.3?
Did you identify Cal. Labor Code § 1102.5 as a potential alternative source of protection and explain why it may be relevant here?
Did you keep your obligations as counsel analytically separate from Dr. Mehta’s rights as an employee throughout?