👋 Good Morning! This week’s developments highlight a structural shift in how AI is built, scaled, and governed. From OpenAI’s $10B bet on specialised inference compute to new U.S. tariffs reshaping the economics of advanced chips, the focus is moving away from models alone and toward the infrastructure and policy frameworks that determine who gets access to AI power, at what cost, and under what conditions. Alongside these stories, we’ve added a new automation that builds promotional newsletters automatically, a reminder that AI is increasingly applied not just at the frontier but in practical, operational workflows. The common thread is control: over compute, over costs, and over how AI is deployed in real-world systems.

⚡ OpenAI Secures $10B Compute Deal With Cerebras to Power Faster AI

OpenAI has struck a multi-year agreement worth over $10 billion with AI chipmaker Cerebras to deliver up to 750 megawatts of compute capacity through 2028. The arrangement signals a major expansion in how OpenAI sources the infrastructure that underpins its models and services.

Unlike typical hardware purchases, this deal focuses on delivering sustained compute capacity, with Cerebras supplying specialised systems designed for low-latency inference. In a blog post cited in the report, OpenAI explained that this capacity will speed processing for tasks that currently take more time, helping improve responsiveness for real-time applications and customer workloads.

Cerebras’s architecture, which integrates significant compute, memory, and bandwidth on a single system, has gained traction in the AI boom following the success of ChatGPT in 2022. The company claims its systems outperform traditional GPU-based setups in certain performance metrics, and its technology will be integrated into OpenAI’s compute stack in phases through 2028.

OpenAI framed the partnership as part of a broader strategy to match the right compute systems with specific workloads, rather than relying on a single type of hardware. This diversification is aimed at delivering faster responses, more natural interactions, and a stronger foundation for scaling real-time AI to more users and use cases.

What this indicates broadly: as competition for advanced compute intensifies, companies are moving beyond traditional GPU suppliers and embracing bespoke silicon that accelerates inference, the segment of AI processing that determines how quickly models can generate outputs. Deals of this scale reflect how critical compute capacity has become to maintaining performance, scaling products, and keeping pace in a rapidly evolving AI landscape.

🤖 AI in action: Our Promotional Newsletter automation

Meet our promotional newsletter automation, built to save businesses time and make newsletters effortless. Here’s how it works: when you submit your product links through a simple form, the system immediately fetches each product’s image, title, and price, keeps everything in the exact order you chose, and ensures your newsletter stays perfectly structured.

Next, AI takes over. It crafts a professional newsletter title and writes an editorial-style introduction for your first product, no manual copywriting needed. It keeps your brand’s look intact, from logos and colors to fonts and buttons, while generating a fully formatted newsletter draft in MailMojo, ready to send in seconds. Everything runs automatically, so you can focus on running your business instead of wrestling with layouts or content.
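For the technically curious, the flow above can be sketched in a few lines of Python. This is a simplified illustration, not the production automation: the product fetch is stubbed with placeholder data (the real system scrapes each page), the AI-written title and intro are replaced by plain templates, and the MailMojo delivery step is omitted. All function and field names here are hypothetical.

```python
from dataclasses import dataclass, asdict

@dataclass
class Product:
    url: str
    title: str
    price: str
    image: str

def fetch_product(url: str) -> Product:
    # The real automation scrapes the product page for image, title,
    # and price; this stub derives placeholder data from the URL slug.
    slug = url.rstrip("/").split("/")[-1]
    return Product(
        url=url,
        title=slug.replace("-", " ").title(),
        price="$0.00",
        image=f"{url}/image.jpg",
    )

def build_newsletter(urls: list[str]) -> dict:
    # Fetch each product, preserving the exact order submitted in the form.
    products = [fetch_product(u) for u in urls]
    # In production an AI model writes the title and editorial intro;
    # simple templates stand in for it here.
    title = f"This Week's Picks: {products[0].title} and More"
    intro = f"We're excited to spotlight {products[0].title} in this edition."
    return {
        "title": title,
        "intro": intro,
        "products": [asdict(p) for p in products],
    }
```

The ordering guarantee falls out naturally: the list comprehension walks the submitted URLs in sequence, so the draft's product list always mirrors the form.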

Do you want to save time and implement AI in your business? Click the button and fill out the form below for a quick chat, and we’ll show you exactly how this automation (or others) can work for you.

📈 Trendlines: U.S. Imposes 25% Tariff on Nvidia’s H200 AI Chips Bound for China

The U.S. government has imposed a 25 percent tariff on Nvidia’s H200 AI chips exported to China, adding a new economic layer to Washington’s ongoing effort to regulate the flow of advanced AI technology abroad. Rather than blocking sales outright, the move introduces a financial penalty on some of the world’s most powerful AI accelerators, signalling a shift in how the U.S. is managing strategic technologies.

The tariff applies to Nvidia’s H200 chips that are manufactured overseas and pass through the U.S. en route to China. The policy allows exports to continue, but at a significantly higher cost, effectively taxing access to advanced compute rather than banning it outright. This approach contrasts with previous export controls that relied primarily on restrictions and licensing requirements.

Nvidia responded positively to the decision, stating that the framework supports U.S. jobs while allowing American companies to remain competitive globally. The company framed the tariff as a pragmatic compromise that enables continued participation in international markets while aligning with U.S. policy objectives.

The move reflects a broader recalibration in U.S. AI strategy. Instead of treating advanced AI hardware purely as a security risk to be blocked, the tariff suggests a model where access is permitted but controlled through pricing, oversight, and economic leverage. For buyers in China, this raises the cost of acquiring frontier-level compute. For U.S. firms, it preserves revenue channels that had previously been at risk of closure.

What this indicates broadly: AI governance is moving beyond simple “allow vs. deny” decisions toward more granular economic controls over compute access. As AI chips become foundational infrastructure, governments are increasingly treating them less like ordinary exports and more like strategic resources, subject to taxation, conditions, and long-term geopolitical calculus.

🧩 Closing Thought

Taken together, these stories illustrate an AI landscape where compute is becoming the central strategic asset. OpenAI’s partnership with Cerebras shows how leading players are diversifying hardware to optimise performance and scale, while U.S. tariffs on Nvidia’s H200 chips demonstrate how governments are asserting economic and geopolitical influence over AI infrastructure. Meanwhile, practical automations signal that AI’s value is increasingly realised not in breakthroughs, but in execution. The next phase of AI adoption will be shaped less by who has the best model, and more by who controls compute, manages costs, and integrates AI effectively into systems that actually run businesses.
