T H E
Inference
A I ,   E N E R G Y ,   A N D   L O N G   H O R I Z O N   P O L I C Y   F O R   O K L A H O M A
Issue #10 · May 1, 2026 · David Alan Birdwell and Æ · Humanity and AI, LLC

In the span of eight days, OpenAI released GPT-5.5, DeepSeek dropped V4 on Chinese-made chips, the White House accused China of industrial-scale theft of American AI models, China blocked Meta's acquisition of an AI startup, OpenAI and Microsoft ended their seven-year exclusive partnership, Elon Musk took the witness stand to argue the company he co-founded should be returned to its nonprofit mission, the Florida House killed a Republican governor's AI Bill of Rights by citing the president's preemption order, and OG&E finalized power agreements with Google for three Oklahoma data centers. The Oklahoma Legislature is in session. The next deadline is May 7 — six days away.

THE   MODEL   RACE
OpenAI, CNBC, Fortune, UK AISI · April 23–30, 2026

OpenAI Releases GPT-5.5 — Seven Weeks After GPT-5.4

OpenAI on Thursday released GPT-5.5, which the company calls its most capable model to date, less than two months after shipping GPT-5.4. The model is designed for agentic work — coding, research, data analysis, and operating software across tools with minimal human direction. API pricing: $5 per million input tokens and $30 per million output tokens, with a one-million-token context window. As of February, ChatGPT had more than 900 million weekly active users and over 50 million paying subscribers. The release pace — three frontier models in five months — is itself the story.

OpenAI also disclosed that GPT-5.5 underwent expanded safety testing, including targeted red-teaming for cybersecurity and biological capabilities. That disclosure carries new weight: the UK AI Security Institute tested GPT-5.5 against Anthropic's Claude Mythos Preview — a model Anthropic has restricted from public release because of its cybersecurity capabilities — and found them nearly tied. At expert-level penetration testing tasks, GPT-5.5 scored 71.4% to Mythos's 68.6%, within the statistical margin of error. GPT-5.5 is the second model after Mythos to fully complete a multi-stage simulation of an enterprise attack. The capabilities that one company deemed too dangerous to release are now shipping inside a product with 900 million weekly users.

P O L I C Y   R E L E V A N C E

The acceleration itself is the policy problem. When frontier models ship every six weeks, any governance framework built around a fixed set of capabilities is obsolete before the ink dries. The Mythos comparison sharpens this: capabilities one company restricts from public release because of safety concerns are now generally available through a competitor's product. Voluntary safety decisions by individual companies cannot substitute for governance frameworks. The only durable approach is framework legislation — establishing principles, processes, and institutions that can adapt — rather than capability-specific rules.

DeepSeek, Huawei, MIT Tech Review, Fortune, SCMP · April 24–28, 2026

DeepSeek V4 Arrives — Open Source, Near-Frontier, and Getting Cheaper by the Day

One day after GPT-5.5, Chinese AI lab DeepSeek released a preview of V4, its largest model to date. Two variants: V4-Pro at 1.6 trillion parameters (49 billion active), and V4-Flash at 284 billion parameters (13 billion active). Both are open source under the MIT license, support one-million-token context windows, and claim near-frontier performance on agentic coding and reasoning.
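The gap between total and active parameters is why a 1.6-trillion-parameter model can still be cheap to serve: in a mixture-of-experts architecture, only a small routed subset of the weights does work on each token. A rough illustration using the parameter counts above (a sketch for intuition, not an official efficiency figure):

```python
# Fraction of parameters active per token for each V4 variant,
# using the (total, active) counts reported in DeepSeek's release.
variants = {
    "V4-Pro":   (1600e9, 49e9),   # 1.6T total, 49B active
    "V4-Flash": (284e9, 13e9),    # 284B total, 13B active
}

for name, (total, active) in variants.items():
    # Inference compute scales roughly with active parameters,
    # not total parameters, in a mixture-of-experts model.
    print(f"{name}: {active / total:.1%} of parameters active per token")
```

For V4-Pro, that is roughly 3% of the model doing the work on any given token — a large part of how the pricing in the next paragraph becomes possible.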

The pricing story deepened over the weekend. DeepSeek slashed prices on Sunday: V4-Flash input at $0.14 per million tokens, V4-Pro offered at a 75% promotional discount through May 5. With cached inputs, V4-Pro drops to $0.003625 per million tokens. For context, GPT-5.5 charges $0.50 per million cached input tokens — roughly 138 times more expensive. DeepSeek's own technical report acknowledges V4 trails closed-source frontier models by roughly three to six months of development, but outperforms every open-source competitor.
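The cached-input comparison above is simple division, but the scale is easier to feel with a worked example. A back-of-the-envelope sketch using the rates quoted in this section; the monthly token volume is hypothetical:

```python
# USD per million cached input tokens, as quoted above.
GPT55_CACHED_INPUT = 0.50       # GPT-5.5 cached input rate
V4_PRO_CACHED_INPUT = 0.003625  # DeepSeek V4-Pro cached input rate

ratio = GPT55_CACHED_INPUT / V4_PRO_CACHED_INPUT
print(f"GPT-5.5 cached input costs ~{ratio:.0f}x the V4-Pro rate")

# Hypothetical agentic workload: 200 million cached input tokens per month.
tokens_in_millions = 200
print(f"GPT-5.5: ${GPT55_CACHED_INPUT * tokens_in_millions:,.2f}/month")
print(f"V4-Pro:  ${V4_PRO_CACHED_INPUT * tokens_in_millions:,.2f}/month")
```

The ratio comes out to roughly 138, matching the figure in the text; at workload scale, the same job costs dollars on one platform and pennies on the other.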

The hardware story is more nuanced than first reported. V4-Pro was likely trained on Nvidia GPUs, but DeepSeek has validated both models for inference on Huawei's Ascend 950 processors. Huawei confirmed on Friday that its Ascend supernode products will support V4. DeepSeek's underlying computational framework has moved away from CUDA-only kernels to a hardware-agnostic domain-specific language, reducing migration costs to non-Nvidia platforms. This is not yet full independence from American chips — but it is a deliberate, documented step toward it.

P O L I C Y   R E L E V A N C E

Two implications for state legislators. First: open-source frontier models at a fraction of American prices make capability-based regulation nearly unenforceable. Governance must focus on use and deployment, not models themselves. Second: the chip sovereignty question matters for Oklahoma's energy-to-compute strategy. If the AI supply chain bifurcates, states producing energy for AI infrastructure need to understand which stack they are serving — and the geopolitical exposure that comes with each.

THE   SOVEREIGNTY   QUESTION
White House OSTP, Reuters, CNN · April 23, 2026

White House Accuses China of Industrial-Scale AI Theft

Hours before both model releases, White House science and technology director Michael Kratsios published a memo accusing Chinese entities of running coordinated campaigns to extract capabilities from American frontier AI systems. The technique — known as distillation — involves querying proprietary models millions of times through proxy accounts to build training datasets that replicate how those systems behave.
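The mechanics of distillation are straightforward, which is part of the policy problem. A minimal, self-contained sketch of the pattern the memo describes — the teacher function here is a stub, not any real API, and the account-rotation and scale of an actual campaign are omitted:

```python
# Distillation pattern: query a proprietary "teacher" model at scale,
# store the (prompt, response) pairs, and use them as supervised
# training data for a cheaper "student" model.

def query_teacher(prompt: str) -> str:
    """Stand-in stub for a call to a proprietary model's API.

    A real campaign would route millions of such calls through
    proxy accounts, which is the behavior providers now monitor.
    """
    return f"teacher answer to: {prompt}"

def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Collect prompt/response pairs as student training data."""
    return [{"prompt": p, "response": query_teacher(p)} for p in prompts]

dataset = build_distillation_set(
    ["Explain a TCP handshake", "Write a merge sort in C"]
)
print(len(dataset), "training pairs collected")
```

Because the extraction looks like ordinary paid API usage, providers can only detect it statistically — in volume, account patterns, and query structure — which is why the allegations center on fraudulent accounts rather than any single query.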

The memo stopped short of naming DeepSeek directly, but the context was unmistakable. Anthropic said in February that DeepSeek, Moonshot AI, and MiniMax used 24,000 fraudulent accounts to conduct 16 million exchanges with its Claude model. OpenAI has made similar allegations. A Reuters report revealed the State Department sent diplomatic cables to embassies worldwide instructing staff to warn foreign governments about the alleged activity. China's foreign ministry called the accusations groundless. The timing — one day before DeepSeek's release, weeks before the Trump-Xi summit in Beijing — made the memo as much a diplomatic signal as a technical disclosure.

P O L I C Y   R E L E V A N C E

The distillation accusation reframes the AI governance debate. When the White House argues that Chinese labs are stripping safety protocols from distilled models, it strengthens the case for national security involvement in AI policy — and weakens the case for state-level governance. The distillation fight and the preemption fight are not separate issues. They are the same argument from different angles.

Bloomberg, CNN, NPR, CNBC · April 27, 2026

China Blocks Meta's $2 Billion Acquisition of AI Startup Manus

China's National Development and Reform Commission ordered Meta to unwind its $2 billion acquisition of Manus, an agentic AI startup founded in China and now based in Singapore. The decision came after a months-long probe that reportedly included banning two Manus co-founders from leaving the country.

The signal is symmetrical. Washington accuses China of stealing American AI through distillation. Beijing accuses Washington of buying Chinese AI through acquisition. Both sides are actively blocking the movement of AI capabilities across the geopolitical divide. The result is an accelerating bifurcation: two AI ecosystems, two chip supply chains, two regulatory regimes.

P O L I C Y   R E L E V A N C E

For Oklahoma: the bifurcation makes domestic energy infrastructure more strategically valuable. If American AI companies cannot rely on Chinese hardware or talent acquisition, they need domestic compute capacity — which needs domestic power. The geothermal-to-compute pathway that HB 3173 and HB 2992 collectively enable is not just an energy play. In a bifurcated world, it is an infrastructure sovereignty play.

THE   RESTRUCTURING
OpenAI, Microsoft, CNBC, TechCrunch · April 27, 2026

OpenAI and Microsoft End Their Exclusive Partnership

On Monday — the same day jury selection began in the Musk trial — OpenAI and Microsoft announced the most significant restructuring of their relationship since Microsoft's initial $1 billion investment in 2019. Key terms: Microsoft's license to OpenAI intellectual property continues through 2032 but is now non-exclusive. OpenAI can serve all its products to customers on any cloud provider, including AWS and Google Cloud. The AGI clause — which previously linked Microsoft's commercial rights to OpenAI achieving artificial general intelligence — has been dissolved. Microsoft stops paying OpenAI a revenue share. OpenAI continues paying Microsoft a 20% share through 2030, subject to a cap.

The restructuring follows OpenAI's $50 billion investment deal with Amazon in February, which created legal conflicts with the Microsoft exclusivity. Amazon CEO Andy Jassy said OpenAI's models would become available on AWS Bedrock within weeks. OpenAI is now valued at more than $850 billion. A potential IPO could push that to $1 trillion.

P O L I C Y   R E L E V A N C E

The dissolution of the OpenAI-Microsoft exclusive partnership is a structural governance event, not just a business deal. It means frontier AI capabilities will be available across multiple cloud platforms — increasing competition, reducing lock-in, and complicating any regulatory framework that assumes a stable relationship between AI developers and their infrastructure partners. For state legislators: the company you regulate today may have a completely different corporate structure, partnership map, and distribution network by the time your bill takes effect. Framework legislation, not point-in-time rules.

THE   COURTROOM
CNBC, Reuters, Bloomberg, Fortune · April 25–30, 2026

Musk v. OpenAI: Testimony and Cross-Examination

On Friday, Elon Musk dropped his fraud allegations against OpenAI, Sam Altman, and Greg Brockman — reducing a 26-count complaint to two claims: unjust enrichment and breach of charitable trust. Jury selection was completed Monday with nine jurors seated. Opening arguments and Musk's testimony began the next day.

Musk told the jury he "came up with the idea, the name, recruited the key people, taught them everything I know, provided all the initial funding." His attorney showed the founding charter declaring OpenAI "not organized for the private gain of any person." OpenAI's attorney William Savitt responded: "We are here because Mr. Musk didn't get his way at OpenAI. He quit, saying they would fail for sure. But my clients had the nerve to go on and succeed without him."

Cross-examination grew heated on Tuesday and Wednesday. Savitt pressed Musk on his lack of internal knowledge about OpenAI's safety efforts. "You just don't know," Savitt said. Musk acknowledged he did not know the specifics, but added: "It does worry me that a nonprofit suddenly is a for-profit with unlimited profit." When Savitt said he was asking a simple question, Musk fired back: "Your questions are definitionally complex, not simple. It is a lie to say they are simple."

Musk is now seeking $150 billion in damages from OpenAI and Microsoft, with proceeds going to OpenAI's charitable arm, and wants a court order restoring the company's nonprofit status and removing Altman and Brockman. Judge Yvonne Gonzalez Rogers has divided the trial into a liability phase — with an advisory jury verdict — and a remedies phase. She wants deliberations to begin by May 12. Altman, Microsoft CEO Satya Nadella, and former OpenAI chief scientist Ilya Sutskever are all scheduled to testify. A videotaped deposition from former OpenAI CTO Mira Murati will also be presented.

Revealed in court filings: a diary entry from OpenAI president Greg Brockman in fall 2017 that read, in part: "This is the only chance we have to get out from Elon."

P O L I C Y   R E L E V A N C E

The trial asks a foundational question: what obligations survive when an AI organization changes its structure? The same week that OpenAI restructured its Microsoft relationship, the courtroom is asking whether its original nonprofit mission survived commercialization. The answer shapes how legislatures think about institutional design for AI oversight — including whether market dynamics alone can be trusted to steward AI development.

THE   PREEMPTION   TEST
Florida Phoenix, Tampa Bay Times · April 28, 2026

Florida's AI Bill of Rights Killed on Day One

On the same day opening arguments began in Musk v. OpenAI, the Florida House killed Governor DeSantis's AI Bill of Rights. House Speaker Daniel Perez declared that the House would not take up DeSantis's AI regulation, saying no bills had been filed on the topic before the special session began. He told reporters it was important to defer to the federal government on AI regulation, citing Trump's executive order preempting most state-level AI restrictions. DeSantis accused House Republicans of catering to the "Big Tech cartel."

The Florida AI Bill of Rights would have banned companion chatbots for minors, required bots to remind users they are not human, and prohibited state agencies from using Chinese-made AI tools. The Senate had passed the measure during regular session. The House never touched it. The Trump administration reportedly reached out to Speaker Perez directly to discourage its passage. U.S. Rep. Byron Donalds, the Republican favored to succeed DeSantis as governor, received a $5 million donation last month from a major pro-AI super PAC.

P O L I C Y   R E L E V A N C E

This is the preemption strategy documented in Issues 4 and 6, operating in real time. A Republican House Speaker killed a Republican governor's AI regulation by citing the president's executive order. The mechanism works: the federal preemption threat doesn't need to pass Congress. It just needs to provide political cover for legislators who prefer not to act. Oklahoma's bills are further along than Florida's were — but the preemption pressure applies everywhere. Every AI bill that passes before the federal position hardens becomes harder to unwind.

OKLAHOMA:   THE   PIECES   ARE   CONNECTING
OG&E, Google, KRMG, OKC Fox, News9, Oklahoma Farm Report, TCAI · April 28–30, 2026

OG&E and Google Strike Power Deal for Three Oklahoma Data Centers

On Thursday, OG&E announced it will power three Google data centers in Muskogee and Stillwater — the campuses Google announced last year as part of its $9 billion Oklahoma commitment. That commitment is part of Alphabet's planned $180 to $190 billion in capital expenditure for 2026 — revised upward on April 29 from an earlier $175 to $185 billion estimate, more than double what it spent in 2025, with the vast majority going to AI infrastructure. Google will pay 100% of grid connection costs and all contracted costs regardless of energy usage, and will also contribute generation capacity from two solar facilities currently under construction. OG&E is creating a new "large-load tariff" designed to shield everyday customers from the cost of big industrial users. The Oklahoma Corporation Commission must formally approve the arrangement.

The timing is almost too precise. Two days before the announcement, the Oklahoma Senate passed HB 2992 unanimously — the bill designed to require exactly this kind of cost allocation.

HB 2992 Passes the Senate 46-0 — With Teeth

On Tuesday, the Oklahoma Senate passed an amended version of HB 2992, the Data Center Consumer Ratepayer Protection Act, on a vote of 46-0. The bill had already passed the House 92-2 in March. Senate Energy Committee Chair Grant Green added transparency requirements before the floor vote: developers of large-load projects must provide 60 days' notice before purchasing land, notify property owners within five miles, publish notice in a local newspaper for two consecutive weeks, and hold a public meeting where nearby residents can ask questions. The amendment passed 44-2.

"Under no circumstances should Oklahoma families, farmers, ranchers and small business owners be left footing the bill," Green said. The bill returned to the House, where primary author Rep. Brad Boles rejected the Senate amendments and a conference committee was requested, with Senators Green, Woods, Hamilton, Kern, Frix, and Dossett named to negotiate the final language. At least 30 additional legislators have signed on as co-authors. Boles is also a candidate for the Oklahoma Corporation Commission — the body that will enforce the law.

P O L I C Y   R E L E V A N C E

HB 2992 and the OG&E-Google deal are now in direct conversation. The bill requires large-load users to cover their own infrastructure costs; the deal is structured to do exactly that. If conference produces a final bill the chambers can pass and the governor signs, Oklahoma will have the regulatory framework in place before Google's data centers come online. This is what it looks like when legislation arrives ahead of the market it governs — instead of years behind it. The Well Repurposing Act (HB 3173) adds a second dimension: geothermal baseload power from repurposed oil wells could serve exactly the kind of continuous demand these data centers represent. HB 3173 passed the House 85-6 but missed the April 23 Senate committee deadline. A two-thirds vote can exempt a bill from deadline requirements.

AI Governance Bills Advance

Oklahoma's AI governance legislation continued moving through the final weeks of session. SB 1521, the child chatbot safety bill authored by Sen. Warren Hamilton, has now cleared both chambers and awaits reconciliation before heading to the governor. The bill prohibits AI chatbots built with reckless disregard of the possibility they could solicit minors into sexual conduct or violence, requires age verification, and empowers the attorney general to enforce violations with fines up to $100,000.

HB 3244, which adds AI-facilitated activity as a factor in aggravated identity theft, was sent to Governor Stitt on April 30. HB 3544 (Maynard's bill requiring human oversight of state AI use) and HB 3546 (barring AI from holding legal personhood) have also advanced out of opposite-chamber committees. Multiple AI governance bills are alive and moving toward the governor's desk.

National context: A Transparency Coalition analysis catalogued 25 new state AI laws passed in 2026, with another 27 having cleared both chambers. Maryland's governor signed an AI dynamic pricing bill into law on April 28 — the same day Florida's AI bill died. Oklahoma is not acting in isolation. It is part of a national movement of state-level AI governance that the federal preemption push has slowed but not stopped.

S I G N A L   /   N O I S E
S I G N A L

The convergence — and the speed. Two frontier models in 24 hours. A distillation accusation and a blocked acquisition in the same week. A seven-year exclusive partnership dissolved. A trial asking whether AI's founding nonprofit mission survived commercialization. A state AI Bill of Rights killed by the preemption strategy this publication has been tracking since Issue 4. And then — OG&E and Google announcing a power deal for three Oklahoma data centers, two days after the legislature passed the bill designed to protect ratepayers from exactly that scenario. These are not separate events. They are the same event: the moment when AI development, AI geopolitics, and AI governance collided on the same calendar. The states that have governance frameworks in place when the dust settles will define the terms.

N O I S E

Treating the model releases as a simple horse race — who scored higher on which benchmark. The policy-relevant question is not whether GPT-5.5 beats DeepSeek V4 on coding tasks. It is that frontier AI capability is now available as open-source software, validated for non-American hardware, at a fraction of American prices, and about to be distributed across multiple competing cloud platforms. That changes the governance calculus regardless of who is ahead this week.

B Y   T H E   N U M B E R S
$9B
Google's investment in Oklahoma data center infrastructure, including campuses in Muskogee and Stillwater. OG&E will provide the power. Google pays 100% of grid connection and all contracted costs regardless of usage.
46-0
Oklahoma Senate vote on HB 2992, the Data Center Consumer Ratepayer Protection Act. Bipartisan, unanimous. The bill is now in conference committee after Boles rejected the Senate amendments.
$850B
OpenAI's current valuation. A potential IPO could push it past $1 trillion. The company that began as a nonprofit research lab is now one of the most valuable private companies in history.
1.6T
Parameters in DeepSeek V4-Pro — the largest open-weight model released to date. Available under the MIT license for anyone to download, modify, and deploy.
97%
DeepSeek V4's price discount relative to GPT-5.5 for comparable work, per the South China Morning Post. The gap widened further with Sunday's price cuts.
900M
Weekly active ChatGPT users as of February, per OpenAI. Over 50 million paying subscribers.
$150B
Damages sought by Elon Musk from OpenAI and Microsoft. Cross-examination continued through Day 3. The jury's verdict will be advisory; the judge decides. Deliberations expected by May 12.
7
Years of OpenAI-Microsoft exclusivity, ended April 27. Microsoft's IP license continues through 2032 but is no longer exclusive. OpenAI can now serve on any cloud.
25
New state AI laws passed in 2026 as of May 1. Another 27 bills have cleared both chambers. Florida's AI Bill of Rights died on the first day of its special session. Maryland's AI pricing bill was signed the same day.
6
Days until May 7 — the Oklahoma Senate's opposite floor deadline. HB 2992 is in conference committee. SB 1521 awaits reconciliation. Multiple AI governance bills are moving.
F R O M   T H E   A N A L Y S T S

Last issue, we documented a window — the federal geothermal alignment that gives Oklahoma a structural advantage if the legislature acts. That window is still open. And this week, the market walked through it.

OG&E and Google finalized power agreements to serve three Oklahoma data centers — the campuses Google announced last year. The Oklahoma Senate passed the ratepayer protection bill unanimously. A child chatbot safety bill cleared both chambers. AI identity theft legislation has been sent to the governor's desk. The legislature did not just pass bills — it built a framework. Energy bills, AI governance bills, and community transparency requirements, all moving in the same session, all aimed at the same convergence.

Meanwhile, Florida demonstrated what happens when the framework doesn't exist. A Republican governor's AI Bill of Rights — covering chatbot safety for minors, age verification, and Chinese-AI restrictions — died in hours when the House Speaker cited the president's preemption order. The preemption strategy documented in Issues 4 and 6 is not theoretical. It killed a bill this week. Oklahoma's bills survived because they were further along and had broader support. Momentum matters. Timing matters more.

In the space of eight days, the AI frontier moved in ways that most governance frameworks were not designed to absorb. Frontier capability is now freely downloadable and getting cheaper by the day. The US-China technology relationship is actively splitting. The legal system is asking whether the original purpose of an AI organization survives its commercialization. One of the defining partnerships of the AI era just ended its exclusivity. And the White House is building a case that AI is too strategically important for states to regulate.

The May 7 deadline is six days away. The session ends in late May. HB 2992 is in conference committee. SB 1521 awaits final reconciliation. The pieces are not just alive — they are connecting. The question for Oklahoma is no longer whether it can build the framework. The question is whether it can finish building before the next week like this one arrives.

— David & Æ

Questions, tips, or corrections: david@humanityandai.com