Akin Intelligence - April 2025
Welcome to the April edition of Akin Intelligence. Deepfakes continued to be a major area of legislation in April, with action at both the state and federal levels.
President Trump Signs TAKE IT DOWN Act into Law
On May 19, 2025, President Trump signed the bipartisan TAKE IT DOWN Act into law. The Act criminalizes the publication of nonconsensual intimate visual depictions of individuals, including AI-generated deepfakes. Threats to publish intimate visual depictions are also prohibited. Penalties include mandatory restitution, forfeiture of any proceeds of the violation, and criminal penalties, including imprisonment, a fine, or both.
Separately, the Act establishes a notice and removal process, mandating that online platforms remove such content within 48 hours of a victim’s request. Failure to reasonably comply with the notice and takedown obligations is treated as an unfair and deceptive act under the Federal Trade Commission Act.
The TAKE IT DOWN Act aims to protect individuals from online harassment and abuse and empowers institutions with resources to address digital exploitation effectively.
OMB Issues Memorandum on Driving Efficient Acquisition of Artificial Intelligence in Government
On April 3, 2025, the Office of Management and Budget (OMB) issued a Memorandum for the Heads of Executive Departments and Agencies (M-25-22) on Driving Efficient Acquisition of Artificial Intelligence in Government. The Memorandum was issued pursuant to Executive Order 14179, which President Trump signed on January 23, 2025, and which directed OMB to revise OMB Memorandum M-24-18 to make it consistent with the Order’s policy to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
The Memorandum has three grounding themes:
- Ensuring the Government and the Public Benefit from a Competitive American AI Marketplace
- Safeguarding Taxpayer Dollars by Tracking AI Performance and Managing Risk
- Promoting Effective AI Acquisition with Cross-Functional Engagement
It directs agencies to update agency policies; maximize use of American-made AI; protect privacy, IP rights, and use of government data; spotlight AI acquisition authorities, approaches, and vehicles; contribute to a shared repository of best practices; and determine necessary disclosures of AI use in the fulfillment of a government contract. The Memorandum also details requirements and recommendations for agencies as part of their AI acquisition practices, such as identification of requirements, market research and planning, solicitation development, selection of AI proposals, contract administration, and contract closeout.
OMB Issues Memorandum on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust
On April 3, 2025, the Office of Management and Budget (OMB) issued a Memorandum for the Heads of Executive Departments and Agencies (M-25-21) on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, along with an accompanying Fact Sheet: Eliminating Barriers for Federal Artificial Intelligence Use and Procurement. The Memorandum was issued pursuant to Executive Order 14179, which President Trump signed on January 23, 2025, and which directed the OMB Director to revise OMB Memorandum M-24-10 to make it consistent with the Order’s policy to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
The Memorandum directs the agencies to accelerate federal use of AI by focusing on three priorities: innovation, governance, and public trust. Consistent with these priorities, the Memorandum directs the agencies to undertake the requirements described in the Appendix of the Memorandum. These requirements include the following:
- Agencies must remove barriers to innovation and provide the best value for the taxpayer, e.g., developing agency AI strategies, sharing agency data and AI assets, leveraging American AI and innovation, promoting effective federal procurement of AI, and enabling an AI-ready federal workforce.
- Agencies must empower AI leaders to accelerate responsible AI adoption, e.g., establishing a Chief AI Officer and AI governance board, developing compliance plans and AI policies, and coordinating the development and use of AI across agencies by participating in the Chief AI Officer Council.
- Agencies must ensure their use of AI works for the American people, e.g., determining “high-impact” AI—i.e., AI whose “output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety”—and implementing minimum risk management practices for high-impact AI.
Meador Confirmed as FTC Commissioner
On April 10, 2025, the Senate voted 50-46 to confirm Mark Meador as the third Republican Commissioner on the Federal Trade Commission (FTC). The party-line vote came after the Senate Commerce Committee previously advanced Meador’s nomination on a 24-4 vote in early March. FTC Chair Andrew Ferguson stated, “I am thrilled to welcome Mark to the Commission. Mark is a brilliant antitrust lawyer who will be a great asset to the Trump-Vance FTC.” Meador was nominated on January 20, 2025, by President Trump to a term that will expire on September 25, 2031.
GAO Issues Report on Generative AI’s Environmental and Human Effects
On April 22, 2025, the US Government Accountability Office (GAO) announced the release of its Technology Assessment on Generative AI’s Environmental and Human Effects. The report discusses the significant resources used by generative AI, while recognizing that the environmental effects are uncertain and not well understood due to insufficient data and information. The report also discusses the potential substantial human effects of generative AI, focusing on five risks and challenges: unsafe systems, lack of data privacy, cybersecurity concerns, unintentional bias, and lack of accountability. The potential benefits and challenges of generative AI are also described for four application areas: public services, labor markets, education, and research and development.
To enhance the benefits and address the potential effects of generative AI, the report proposes policy options. To reduce environmental effects of generative AI, policy options include maintaining the status quo of current efforts in academia, industry, and government; expanding efforts to improve data collection and reporting; and encouraging innovation. Policy options for human effects of generative AI include maintaining the status quo of current policy efforts; encouraging the use of available AI frameworks to inform generative AI use and software development processes; and continuing to expand efforts to share best practices and establish standards.
President Trump Signs Executive Order on Advancing AI Education
On April 23, 2025, President Trump signed his Executive Order on Advancing Artificial Intelligence Education for American Youth. To ensure that the United States remains a global leader in artificial intelligence (AI), the Order makes it the policy of the United States to “promote AI literacy and proficiency among Americans by promoting the appropriate integration of AI into education, providing comprehensive AI training for educators, and fostering early exposure to AI concepts and technology to develop an AI-ready workforce and the next generation of American AI innovators.”
The Order advances this policy by:
- Establishing an AI Education Task Force chaired by the Director of the Office of Science and Technology Policy (OSTP);
- Establishing the Presidential AI Challenge to encourage and highlight student and educator achievements in AI;
- Improving Education Through AI by providing resources for K-12 AI education through public-private partnerships;
- Enhancing Training for Educators on AI by prioritizing the use of AI in discretionary grant programs for teacher training; and
- Promoting Registered Apprenticeships by tasking the Secretary of Labor with seeking to increase participation in AI-related Registered Apprenticeships.
House E&C Advances TAKE IT DOWN Act
On April 8, 2025, the House Energy and Commerce (E&C) Committee advanced the TAKE IT DOWN Act (H.R. 633) out of Committee by a vote of 49-1. Rep. Yvette Clarke (D-NY) cast the lone “no” vote against the bill. The bill now heads to the House floor. The Act would criminalize the publication of non-consensual intimate imagery (NCII) in interstate commerce.
Lawmakers Reintroduce NO FAKES Act
On April 9, 2025, Sens. Marsha Blackburn (R-TN), Chris Coons (D-DE), Thom Tillis (R-NC), and Amy Klobuchar (D-MN), along with Reps. Maria Salazar (R-FL) and Madeleine Dean (D-PA), reintroduced the NO FAKES Act (S. 1367/H.R. 2794), which aims to address the use of non-consensual digital replications in audiovisual works or sound recordings by (1) holding individuals or companies liable if they distribute an unauthorized digital replica of an individual’s voice or visual likeness; (2) holding platforms liable for hosting an unauthorized digital replica if the platform has knowledge of the fact that the replica was not authorized by the individual depicted; (3) excluding certain digital replicas from coverage based on recognized First Amendment protections; and (4) preempting future state laws regulating digital replicas.
House E&C Convenes Hearing on the Future of AI Technology
On April 9, 2025, the House Energy and Commerce (E&C) Committee held a Full Committee hearing titled “Converting Energy into Intelligence: the Future of AI Technology, Human Discovery, and American Global Competitiveness,” featuring testimony from Eric Schmidt of the Special Competitive Studies Project, Manish Bhatia of Micron Technology, Alexandr Wang of Scale AI, and former U.S. Deputy Secretary of Energy David Turk. During the hearing, Wang urged lawmakers to establish a national AI data reserve which includes all relevant government data “to serve as a centralized data hub for all of the government's AI programs to leverage,” further stating, “This would allow for the data to be easily shared between agencies and be leveraged for widespread AI adoption. The Department of Defense is currently working towards its own version of this, but if the United States wants to lead, this must be government-wide.”
CMS Declines to Finalize Rule for Equitable Use of AI by Medicare Advantage Plans
On April 4, 2025, the Centers for Medicare and Medicaid Services (CMS) issued a final rule regarding Medicare Advantage (MA) plans. As we previously reported here, CMS proposed to require MA organizations that use AI to ensure that such use is equitable. CMS declined to finalize this proposal, but stated in the final rule that it wants to “acknowledge the broad interest in regulation of AI and will continue to consider the extent to which it may be appropriate to engage in future rulemaking in this area.”
Joint Economic Committee Convenes Hearing on AI/Government Efficiency
On April 9, 2025, the Joint Economic Committee held a hearing titled, “Reducing Waste, Fraud, and Abuse Through Innovation: How AI and Data Can Improve Government Efficiency.” Topics discussed included reducing improper payments in Medicaid via automation of eligibility determination and redetermination; increasing administrative efficiency in Medicare; and improving prior authorization. The Council of the Inspectors General on Integrity and Efficiency (CIGIE) called on lawmakers to enact legislation establishing a permanent, scalable data analytics platform to aid IGs in detecting and preventing fraud and improper payments in federal spending. Witnesses also called for the federal government to update legacy IT systems, bolster data privacy and security protections, and address policy considerations such as audit standards.
New Jersey
New Jersey Enacts Law Against Deceptive AI Deepfakes
On April 2, 2025, New Jersey Governor Phil Murphy signed bipartisan legislation (Bill A3540) establishing civil and criminal penalties for the production and dissemination of deceptive audio or visual media, known as “deepfakes.” Under the law, an individual commits a crime if, without license or privilege to do so, the individual makes or distributes deceptive audio or visual media in furtherance of criminal activity; violators may be subject to imprisonment and a fine of up to $30,000. Such criminal activity includes, for example, advertising commercial sexual abuse of a minor, endangering the welfare of children, threats or improper influence in official and political matters, false public alarms, harassment, cyber-harassment, and hazing. The law also makes violators civilly liable to their victims, who may bring a civil action.
House Science Convenes Hearing on DeepSeek
On April 8, 2025, the House Science, Space, and Technology Committee’s Research and Technology Subcommittee held a hearing titled “DeepSeek: A Deep Dive,” featuring testimony from Adam Thierer of the R Street Institute, Gregory Allen of the Center for Strategic and International Studies (CSIS), Julia Stoyanovich of New York University’s Center for Responsible AI, and Tim Fist of the Institute for Progress. During the hearing, witnesses cautioned against dismissing China’s AI gains as mere imitation. While models like DeepSeek-R1 may be built on U.S. innovations, experts testified that Chinese firms are making genuine breakthroughs, launching competitive and cost-efficient models at a rapid pace. There was bipartisan agreement on the need to respond to China’s AI gains. Witnesses differed on solutions—such as export controls—but emphasized the importance of sustained U.S. investment in science and technology, including reversing recent budget cuts.
China’s Measures for Artificial Intelligence Meteorological Application Services
On April 29, 2025, the China Meteorological Administration and the Cyberspace Administration of China jointly issued the Measures for Artificial Intelligence Meteorological Application Services (Measures), which will take effect on June 1, 2025. This is the first departmental rule in China aimed at promoting and regulating the application of artificial intelligence in a specific sector. The Measures include specific policy support and promotion measures regarding data openness and algorithm model research and development, as well as rules for algorithm registration and security assessments, the labeling of AI-generated content, algorithm safety review, network security, data security, information dissemination review, and complaint reporting.
Visit our AI Law & Regulation Tracker for the latest in AI across regulatory developments, legal and policy issues, and industry news.

Questions?
If you have any questions, please contact:

Jingli Jiang
Partner / Registered Foreign Lawyer (HK)
Hong Kong

Lamar Smith
Senior Consultant and Former Member of Congress
Washington, D.C.

Joseph Hold
Cybersecurity & Data Privacy Advisor
Washington, D.C.

Evan Sarnor
Public Policy Specialist
Washington, D.C.
© 2025 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Receipt of this information does not create an attorney-client relationship. Do not act upon this information without seeking professional counsel. All content is presented by Akin and cannot be copied or rebroadcasted without express written consent. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin network operates worldwide, please see our Legal Notices page.