Trump's Lesson on Artificial Intelligence


Some see AI as a resource, while others fear it. Two opposing trends emerge from the regulations adopted in the United States and the EU. Brussels should take notes from the United States and adopt a more liberal approach.
Four weeks ago, all attention in Europe was focused on the final stages of the tariff negotiations with the United States, which concluded after arduous weeks with the informal meeting between Trump and Ursula von der Leyen on a golf course in Scotland. Until yesterday, those negotiations lacked essential details on the products and supply chains covered by the exemption list: only yesterday did the European Commission announce the final agreement on a joint statement outlining how the 15 percent US tariff will apply to the vast majority of European products, including pharmaceuticals, vehicles, and semiconductors. When the Scottish agreement was announced, many European observers and politicians called it an inglorious surrender, as if they had only just discovered the mountain of weaknesses Europe has accumulated vis-à-vis the US in technology, energy, and defense. A singular discovery: those weaknesses are today a club brandished with blackmailing harshness by Trumpian bullying, but in reality they accumulated over the last three decades while Europe stood by and watched.
These gaps now risk worsening further. A month ago, amid the widespread inattention caused by anxiety over tariffs, the United States adopted a series of profoundly innovative rules on a topic now essential to European competitiveness and productivity: artificial intelligence. The new American rules represent a radical departure from the Biden administration's approach, and the most interesting aspect is that they go in the exact opposite direction from the European approach embodied in the AI Act, which, being a European regulation rather than a directive, is already entering into force in all EU countries, phased in by type of entity and company size. It's time to ask a serious question: are we sure the European approach to AI is the right one for turning things around, or does the new package of American rules offer Europe valuable insights for some much-needed soul-searching? To answer, we need to be clear on three points: what the American shift consists of, how it differs from the EU's approach, and, finally, what we Europeans should do.

Trump's plan was released on July 23 and is aggressive, starting with its name: Winning the Race, America's AI Action Plan. It formally repeals Executive Order 14110 of October 2023, with which the Biden administration had outlined the purpose, responsibilities, and oversight of federal authorities over AI developments. The Democratic administration's approach was in some ways similar to the European one, aiming primarily at mitigating AI risks, protecting civil rights and fairness, and ensuring oversight of advanced AI models by authorities such as those responsible for enforcing the Defense Production Act.
In contrast, Trump's plan radically changes the approach: it prioritizes deregulation, infrastructure development, and global competitiveness, viewing US leadership in AI as a national strategic imperative, to be vigorously promoted and defended worldwide. The Action Plan consequently shifts away from an emphasis on precautionary regulation. The priority becomes facilitating private-sector innovation, streamlining authorization procedures, and establishing federal procurement standards centered on ideological neutrality rather than on politically dictated decisions about this or that limit on the next frontier of generative AI.
The Plan is structured around three key pillars. The first, "Accelerating AI Innovation," focuses on promoting open-source and open-weight models (those whose trained numerical parameters are publicly released, even when the training data and code are not), public-private research partnerships, the rapid adoption of AI in specific sectors such as healthcare, manufacturing, and agriculture, and investment in scientific infrastructure.
The second pillar, "Building America's AI Infrastructure," focuses on accelerating permitting for data centers, semiconductor factories, and supporting power systems; modernizing the electric grid; improving cyber and physical resilience; and scaling the skilled workforce.
The third pillar, "AI Leadership in Global Security and Geopolitics," aims to expand the global reach of American-made AI by promoting exports of verticalized AI solutions, strengthening export controls and the review of outbound investments, and engaging as actively as possible in standards-setting bodies while coordinating with US military allies on regulatory matters. Key measures include new procurement requirements mandating compliance with the "Unbiased AI Principles" for large language models used by federal agencies; increased funding, led by the Departments of Labor and the Treasury, for AI workforce development and reskilling; investments in public data resources and evaluation infrastructure; and the creation and facilitation of regulatory sandboxes to support private experimentation and innovation across all sectors. The plan's emphasis on infrastructure is expected to catalyze significant capital expenditure (potentially $90 billion in data center investments over the next few years), with projections that AI workloads could account for up to 9 percent of US electricity consumption by 2030. Industry-specific use cases are also being facilitated, including AI-based drug discovery and diagnostics in healthcare, predictive maintenance in manufacturing supported by power-grid modernization, and exportable tools for precision agriculture.
The Plan concludes with guiding principles meant to head off litigation and delays in AI implementation arising from potential conflicts over regulations, federal procurement, alignment with international standards in other countries, or oversight of AI imports and exports by the relevant federal agencies. The imperative is to act quickly, which is precisely what European and Italian rules fail to do: they virtually never include ex ante anti-litigation provisions, and instead multiply the occasions for disputes without addressing them in advance.
Three executive orders: rapid permits, ideology, exports galore

Trump's AI Plan was accompanied by three presidential executive orders to begin implementing key elements immediately. The first aims to accelerate federal permitting for data center infrastructure. Specifically, it fast-tracks the completion of private projects with a capital investment of at least $500 million, a committed electricity demand of at least 100 MW, or otherwise designated as "qualifying projects" by the Secretaries of Defense, the Interior, or Commerce. The use of federal lands will be approved, and a dedicated fund will be established at the Department of Commerce to provide financial support for all phases of data center construction. The multitude of environmental permits required under current legislation will be eliminated. Companies intending to build data centers will be spared the burden of chasing permits: the Departments of the Interior and Energy are instructed to consult directly with the Department of Commerce to obtain the appropriate site permits, in accordance with applicable federal laws.
The second executive order is the one generating the most controversy. In theory, it mandates "impartial AI": models free from ideological bias of any kind and not prone to supporting the deepfakes that dominate the internet and digital platforms. In reality, however, the presidential order, in addition to requiring federal authorities to bar any AI model not grounded in incontrovertible historical and scientific truth, explicitly targets woke culture, with an express ban on LLMs that give manipulative responses favoring DEI criteria, that is, the goals of diversity, equity, and inclusion. Seen in this light, the order is itself heavily ideological, consistent with the entire cultural and social framework of the Trump presidency but in flat contradiction with the premise of "non-ideological" AI. It is authoritarian regimes that brand only their political adversaries' ideology as "dangerous."
Finally, Trump's third executive order aims to ensure that the United States leads the development of AI technologies and that American AI technologies, standards, and governance models are widely adopted worldwide, strengthening relationships with US allies. It directs the Secretary of Commerce to establish and implement an American AI Export Program within 90 days, with an invitation to participate extended to all major US groups active in each AI-related sector and an immediate public call for proposals from industry-led consortia. Each proposal must specify the target countries for export and the federal incentives required. The Secretary of State is charged with coordinating US participation in multilateral initiatives and country-specific partnerships, while the federal Economic Diplomacy Action Group (EDAG) will coordinate the mobilization of federal financing tools to support AI export packages.
The EU's counter-model must be changed

One can hold the worst possible opinion of Trump; this writer, for one, holds a very negative one. But that judgment does not prevent me from believing that the decisions his administration has just taken on AI should be treated as an example for Europe to follow, with the obvious exception of the strong bias against social, ethnic, and gender inclusion. The European AI Act has long been accused, rightly, by the vast majority of businesses of adopting a flawed logic. It is heavily and ideologically (here the adverb seems apt) driven by a strategy of preventive regulation: it defines risk categories (unacceptable, high, limited, minimal) and imposes strict ex ante requirements on AI systems deemed "at risk" before they can be marketed or used. Suppliers must demonstrate a priori compliance with numerous obligations (risk assessment, transparency, human oversight, data quality, and so on) for any system falling within the many areas considered sensitive (not just health, transportation, education, and media).

Professor Carlo Alberto Carnevale Maffé, speaking to Bocconi students in a Master's program in entrepreneurship who intend to work on AI, showed a slide that says it all. If you want to develop an AI application in the EU, you must first: create a rigorous and comprehensive risk management system; assure the authorities that the system is trained on data with appropriate statistical properties; draft detailed technical documentation before any release; build automatic event logging throughout the system's life cycle; guarantee that supervisory authorities can fully interpret the system's output; set up installation, implementation, and maintenance of post-market monitoring; keep all of this operational for the next 10 years; appoint an authorized representative established in the EU; undergo a prior conformity assessment by the designated authority; obtain a fundamental rights impact assessment; draft an official EU declaration of conformity; and register in the relevant EU database. In case of errors or non-compliance, the penalty runs up to 15 million euros or 3 percent of total turnover.
Thus conceived, the European AI Act dampens innovation and investment, multiplies organizational and administrative burdens not only on supplier companies but on any company adopting AI models and software, and achieves the exact opposite of the acceleration and mass experimentation with AI in manufacturing and services that should be Europe's top priority, pursued at full speed. Meanwhile, China generates 40 times more AI patent applications than the EU (and five times more than the US); in 2023, 109 foundational generative AI models were developed in America against only seven in the EU; and the US has already built six times as many data centers as Germany, Italy, and France combined.
The time has come to say it: the approach that treats AI as a risk to be avoided, rather than as a powerful driver of productivity and employment, must be overturned. If Europe were truly led by liberals, it would dismantle the ex ante risk assessment that is killing AI in SMEs and replace it with an approach that reduces costs rather than raising them, promotes experimental sandboxes rather than impeding them, and shifts risk assessment from ex ante burdens on businesses to ex post oversight by vigilant regulators. In this, indeed, Trump has a lot to teach us.
ilmanifesto