I see that Pope Leo XIV has weighed in on AI. He happened to comment on AI in conjunction with nuclear weapons, but the broader topic is the regulation of AI for one reason or another. The recent widespread circulation of sexually explicit images on Elon Musk’s Grok AI has certainly caused a furor in the European Union. But let’s face it: regulation, and especially regulation of a world-changing technology like AI, is one of those questions on which thoughtful people genuinely disagree, and the debate often hinges on what kinds of regulation we’re talking about and what problems they’re meant to solve.
The case for regulation generally centers on preventing concrete harms. Ensuring AI systems don’t discriminate in consequential decisions like lending or hiring seems sensible. Protecting people’s data and privacy, and preventing misuse for fraud or misinformation, seem equally worthwhile reasons to police the use of AI. Proponents argue that without regulation we’re essentially running a massive uncontrolled experiment on society, and that waiting for harm to occur before acting is irresponsible. There is some validity to that line of thinking, but it’s hard to imagine how we can understand AI’s harm cases without the kind of clinical testing inherent to the development of other technologies, from pharmaceuticals to aviation, where regulation has successfully balanced innovation with safety.
There’s also a national security dimension to AI regulation. Some argue that AI capabilities, particularly in military applications or critical infrastructure, need government oversight to prevent catastrophic risks.
The case against regulation (or at least for minimal regulation) emphasizes innovation and competition. Critics worry that premature or heavy-handed rules could cement the position of existing tech giants, make it prohibitively expensive for startups to compete, and push development to countries with fewer restrictions. They argue that AI is evolving too rapidly for regulations to keep pace, and that we don’t yet understand the technology well enough to regulate it wisely. There’s also concern that overly restrictive rules could cause the US or Europe to fall behind China in AI development, with geopolitical consequences.

The practical middle ground often involves targeted regulation: focusing on high-risk applications (medical diagnosis, criminal justice, autonomous vehicles) while leaving lower-risk uses relatively unencumbered. The EU’s AI Act attempts this tiered approach.
These are difficult times in the development of AI. I do not know the exact percentage of media airtime occupied by AI, but any way you slice it, it’s a lot. If I were estimating, I would say that 20% or more of all the articles I read are either directly or indirectly about AI. It’s hard to parse all the issues that swirl around the topic, so let’s begin with the big-picture economic ones.

At the product-line level, the S&P 500’s weighting in AI is approximately 7.9%, which is considerably less than earlier estimates that looked only at primary business lines. This more precise figure accounts for the fact that even AI-heavy companies like NVIDIA have non-AI product lines (such as consumer graphics cards). The five biggest AI-focused names (NVIDIA, Microsoft, Apple, Alphabet, and Amazon) now represent nearly 30% of the entire S&P 500 index. This historic concentration means investors with S&P 500 exposure carry significantly more AI-related risk than the index’s diversification might suggest. Only 16 companies in the S&P 500 have product lines with reported AI revenue, and just six have a majority of revenue tied to AI software, but those counts mask where the exposure actually sits: NVIDIA and Alphabet dominate the landscape and are by far the largest sources of AI exposure in the index. AI hardware (semiconductors, processors) and AI software (search engines, cybersecurity, development tools) make up the bulk of the exposure.
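The gap between the ~7.9% figure and the ~30% figure comes down to what you count. Here’s a minimal Python sketch of the two measures: revenue-weighted exposure (index weight times the share of each company’s revenue tied to AI) versus the simple weight parked in AI-heavy mega caps. Every name, weight, and revenue share below is an invented placeholder, loosely tuned to echo the figures above; this is not actual S&P 500 data.

```python
# Hypothetical illustration: two measures of an index investor's AI exposure.
# Every figure below is a made-up placeholder, not real S&P 500 data.
holdings = {
    # name: (index_weight, share_of_revenue_tied_to_AI)
    "ChipCo":      (0.078, 0.55),
    "CloudCo":     (0.065, 0.15),
    "SearchCo":    (0.055, 0.25),
    "DeviceCo":    (0.060, 0.02),
    "RetailCo":    (0.040, 0.10),
    "RestOfIndex": (0.702, 0.01),
}

# Measure 1: revenue-weighted AI exposure (index weight x AI revenue share).
revenue_exposure = sum(w * s for w, s in holdings.values())

# Measure 2: total index weight sitting in the five AI-heavy mega caps,
# whose share prices move on AI sentiment regardless of non-AI revenue.
mega_cap_weight = sum(w for name, (w, _) in holdings.items()
                      if name != "RestOfIndex")

print(f"Revenue-weighted AI exposure: {revenue_exposure:.1%}")  # ~7.9%
print(f"Weight in AI-heavy mega caps: {mega_cap_weight:.1%}")   # ~29.8%
```

That gap is the point: the same portfolio looks like an 8% bet or a 30% bet depending on whether you count revenue or price sensitivity.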
The data on AI as a percentage of capital investment shows a dramatic concentration in a small number of companies. The AI share of overall S&P 500 capital expenditure (Capex) is roughly 30%. The collective Capex of the “Magnificent 7” has surged from roughly 10% to 30% of total S&P 500 Capex in the past six years, reflecting an immense bet on AI by only a handful of players. This is an extraordinary concentration. Big Tech Capex surpassed $405 billion in 2025, representing year-over-year growth of 62%. Bank of America sees global hyperscale spending rising another 31% in 2026, with total outlays climbing to $611 billion. Global AI Capex spending is forecast to reach $1.3 trillion by 2030, implying a 25% compound annual growth rate. Meta, Microsoft, and Alphabet are each set to spend between 21% and 35% of their revenue on Capex, more than both the average global utility today and AT&T at the height of the telecom bubble. Capex alone (excluding dividends and share repurchases) is reaching an extreme 94% of operating cash flows in 2025, up 18 percentage points from 2024. Still, as a share of global GDP, this remains significantly below historical infrastructure booms: railroads, automotive infrastructure, computers, and telecommunications each ranged from 1.5% to 4.5% of global GDP, suggesting there may be room for further AI investment growth, or that expectations are being tempered by practical constraints.
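That compound-growth figure is easy to sanity-check. Here’s a quick sketch, assuming the 25% CAGR runs from the roughly $405 billion 2025 base to the $1.3 trillion 2030 forecast; the base year is an assumption on my part, since the forecast doesn’t spell it out.

```python
# Sanity check on the capex growth arithmetic. Assumption: the 25% CAGR
# runs from the ~$405B 2025 base to the $1.3T 2030 forecast; the base
# year is not stated explicitly in the forecast.
start, end, years = 405e9, 1.3e12, 5

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~26.3%, close to the cited ~25%

# And the forward direction: a flat 25% CAGR from the same base.
projected = start * 1.25 ** years
print(f"$405B at 25%/yr for 5 years: ${projected / 1e12:.2f}T")  # ~$1.24T
```

Either way you run it, the numbers hang together within rounding.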
Interestingly, 72% of S&P 500 companies disclosed AI as a material risk in their 2025 10-K filings, up from just 12% in 2023, suggesting that while direct AI revenue may be concentrated, AI’s impact on business operations and risk profiles is much broader across the entire index. So, direct AI development represents roughly 8% of the S&P 500 by revenue, but the concentration in the top five companies means practical exposure for index investors is significantly higher than that figure suggests.

The data on AI’s current contribution to GDP is quite revealing, and there’s an important distinction between AI’s “share” of GDP and its “contribution to GDP growth.” AI’s share of GDP is currently about 4-5%, but its contribution to GDP growth in 2025 is much higher. This is where it gets interesting: AI is punching way above its weight in driving growth. Business investment in AI and data centers alone was responsible for 30% of GDP growth. Some economists put it even more dramatically. Harvard economist Jason Furman said that AI investments accounted for nearly 92% of U.S. GDP growth in the first half of 2025.
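The share-versus-contribution distinction is pure arithmetic: contribution to growth weights each sector by how fast it grows, not by how big it is, so a small, fast-growing sector can dominate the growth number. Here’s a toy illustration; the 4.5% share echoes the figure above, but the 15% and 1.5% growth rates are invented placeholders, not estimates.

```python
# Toy illustration of GDP share vs. contribution to GDP growth.
# The growth rates below are invented placeholders, not official statistics.
gdp = 28.0           # total GDP in $T (rough US-scale placeholder)
ai_share = 0.045     # AI's share of GDP (~4.5%, per the figure above)

ai_growth = 0.15     # hypothetical: AI-related output grows 15%/yr
rest_growth = 0.015  # hypothetical: everything else grows 1.5%/yr

ai_delta = gdp * ai_share * ai_growth
rest_delta = gdp * (1 - ai_share) * rest_growth
total_delta = ai_delta + rest_delta

print(f"Overall GDP growth: {total_delta / gdp:.2%}")                     # ~2.11%
print(f"AI's contribution to that growth: {ai_delta / total_delta:.0%}")  # ~32%
```

A sector holding 4.5% of the economy ends up supplying roughly a third of its growth; dial the AI growth rate up and Furman’s 92% figure stops looking implausible.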
This reality has resulted in a fragmented global landscape for regulating AI. Some governments stopped “watching the space” and started writing rules that touch real products: chatbots, hiring and credit tools, recommendation systems, deepfakes, and the data pipelines behind them. Different jurisdictions are now pursuing fundamentally different regulatory models, creating what’s being called a “compliance splinternet.”

The EU seems to be leading the way. The EU AI Act is the world’s first comprehensive AI-focused law, with prohibited AI activities banned as of February 2025 and high-risk AI systems facing stringent rules coming into full force in August 2026. Organizations that break the rules on banned AI practices face fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. However, in November 2025 the European Commission published “digital omnibus” legislative proposals seeking to ease AI-related compliance obligations, including deferring the high-risk AI systems rules.

Meanwhile, the US is engaged in a federal-versus-state battle, with President Trump signing an executive order on December 11, 2025, titled “Ensuring a National Policy Framework for Artificial Intelligence,” to establish a minimally burdensome national policy framework and to preempt state AI laws that conflict with that policy. You see, the US states passed 82 AI-related bills in 2024, creating the patchwork the Trump administration is attempting to override.

Then there’s China’s “control-first” approach. China’s Measures for Labeling AI-Generated Content, effective September 2025, mandate both visible watermarks and invisible encrypted metadata labels on synthetic content, creating a closed loop in which all AI content is trackable and non-anonymous. Amendments to China’s Cybersecurity Law taking effect in January 2026 remove the “warning shot” for violations, allowing immediate and severe fines for data leaks or infrastructure failures.
In 2026, the White House and the states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China. Since Congress has not yet passed a federal AI law that preempts state legislation, existing state AI laws will likely not be affected in the short term by the executive order, and the most prudent approach for companies is to keep complying with them until there is greater clarity. What this portends is that AI regulation is heading toward increased fragmentation and conflict rather than harmonization: the EU is proceeding with comprehensive rules (though potentially softening them), the US faces an internal federal-state regulatory war with the Trump administration pushing for minimal federal standards, and China continues tightening control-oriented requirements. For companies, this means navigating multiple conflicting frameworks with no convergence in sight for 2026.