The Fixer’s Dilemma: Chris Lehane and OpenAI’s Impossible Mission

 

Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows the drill. Now he’s two years into what may be his most impossible assignment yet: as OpenAI’s vice president of global policy, he has to convince the world that OpenAI actually cares about democratizing artificial intelligence, even as the company increasingly behaves like, well, every other tech giant that ever claimed to be different.

I spent 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eroding OpenAI’s carefully constructed image. It was neither easy nor entirely successful. Lehane is genuinely good at what he does. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worrying about whether any of this will actually benefit humanity.

But good intentions don’t mean much when your company is subpoenaing critics, draining water and electricity from economically depressed cities, and bringing dead celebrities back to life to assert its market dominance.

The company’s Sora problem sits at the root of everything else. The video generation tool launched last week with copyrighted material apparently baked into it – a bold move for a company already being sued by the New York Times, the Toronto Star, and half the publishing industry. From a commercial and marketing standpoint, it was also brilliant. The invite-only app rose to the top of the App Store as people created digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu and Cartman from “South Park”; and of dead celebrities like Tupac Shakur.

Asked what motivated OpenAI’s decision to launch this newest version of Sora with these characters, Lehane responded that Sora is a “general purpose technology” like the printing press, one that democratizes creativity for people without talent or resources. Even he – who calls himself a creative zero – can make videos now, he said on stage.

What he glossed over is that OpenAI initially “let” rights holders opt out of having their work used in Sora, which is not how copyright law normally works. Then, after OpenAI realized how much people liked using copyrighted images, it “evolved” toward an opt-in model. That’s not iteration. That’s testing how much you can get away with. (Notably, although the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have largely escaped consequences so far.)

Naturally, the situation calls to mind the frustration of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers being cut out of the economics, he invoked fair use, the American legal doctrine meant to balance creators’ rights against public access to knowledge. He called it the secret weapon of US technological dominance.


Perhaps. But I recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about the conversation instead of reading my TechCrunch article. “That’s ‘iterative,’” I said, “but it’s also a substitute.”

Lehane listened and dropped the script. “We’re all going to need to figure this out,” he said. “It’s very simplistic and easy to sit here on stage and say we need to discover new economic revenue models. But I think we will.” (We’re making it up as we go along, in other words.)

Then there is the infrastructure question that no one wants to answer honestly. OpenAI already operates a data center campus in Abilene, Texas, and recently opened a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane compared AI adoption to the advent of electricity – those who got it last, he said, are still playing catch-up – yet OpenAI’s Stargate project appears to be targeting some of those same economically challenged locations for facilities with enormous appetites for water and electricity.

Asked during our conversation whether these communities will benefit or simply foot the bill, Lehane talked about gigawatts and geopolitics. OpenAI needs about a gigawatt of power per week, he noted. China brought 450 gigawatts online last year, plus 33 nuclear facilities. If democracies want democratic AI, he said, they have to compete. “The optimist in me says this will modernize our energy systems,” he said, painting a picture of a reindustrialized America with transformed electrical grids.

It was inspiring, but it wasn’t an answer to whether people in Lordstown and Abilene will watch their utility bills climb as OpenAI generates videos of The Notorious B.I.G. – video generation being one of AI’s most energy-intensive uses.

There’s also a human cost, which became clearer the day before our interview, when Zelda Williams took to Instagram to beg strangers to stop sending her AI-generated videos of her late father, Robin Williams. “You are not making art,” she wrote. “You are turning the lives of human beings into disgusting, over-processed hot dogs.”

When I asked how the company reconciles this kind of intimate harm with its mission, Lehane responded by talking about process: responsible design, testing frameworks, government partnerships. “There’s no manual for these things, right?” he said.

Lehane showed vulnerability at times, saying he recognized the “huge responsibilities that come with” everything OpenAI does.

Whether or not those moments were staged for the audience, I believed him. Indeed, I left Toronto thinking I had watched a masterclass in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, as far as I can tell, he doesn’t even agree with. Then news broke that complicated an already complicated picture.

Nathan Calvin, a lawyer who works on AI policy at the nonprofit Encode AI, revealed that at the same time I was speaking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his house in Washington, D.C., during dinner to serve him a subpoena. The company wanted his private messages with California lawmakers, college students, and former OpenAI employees.

Calvin says the move was part of OpenAI’s intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company weaponized its ongoing legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. Calvin adds that he fought OpenAI’s opposition to the bill, an AI safety measure, and that when he saw the company claim it had “worked to improve” SB 53, he “literally laughed out loud.” In the ensuing social media kerfuffle, he went on to call Lehane, specifically, a “master of the dark political arts.”

In Washington, that might be a compliment. At a company like OpenAI, whose mission is to “build AI that benefits all humanity,” it sounds like an indictment.

But what matters even more is that even people inside OpenAI are conflicted about what the company is becoming.

As my colleague Max reported last week, several current and former employees took to social media after Sora 2’s release to voice their doubts. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically incredible, but it’s premature to congratulate ourselves for avoiding the pitfalls of other social media apps and deepfakes.”

On Friday, Josh Achiam – head of mission alignment at OpenAI – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my entire career,” Achiam went on to write about OpenAI: “We cannot do things that make us a frightening power rather than a virtuous one. We have a duty and a mission to all humanity. The bar for fulfilling that duty is remarkably high.”

It’s worth pausing on that. An OpenAI executive publicly questioning whether his company is becoming “a frightening power rather than a virtuous one” is not the same as a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now wrestling with a crisis of conscience despite the professional risk.

It’s a clarifying moment, and its contradictions will only intensify as OpenAI moves toward artificial general intelligence. It also left me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe it.

