Received yesterday — 11 June 2025

ChatGPT’s daylong outage is nearly fixed

10 June 2025 at 23:41

OpenAI’s ChatGPT service was down all day for many users after the platform started experiencing performance issues on Tuesday morning. The chatbot responded with a “Hmm…something seems to have gone wrong” error message to my colleague after failing to load, and users across X and Reddit were reporting platform outages.

Downdetector showed that issues started at around 3AM ET, with multiple regions impacted globally. OpenAI’s own status page said that some users started experiencing “elevated error rates and latency” at that time, noting that the issues were affecting ChatGPT, its Sora text-to-video AI tool, and OpenAI APIs. OpenAI added a separate line for “elevated error rates on Sora” at 5:23AM ET, and later updated the status for both to “partial outage.”

As of 6:32PM ET, OpenAI’s tracker reported a “full recovery in the API,” and that “Nearly all ChatGPT components are now working properly for all users.” The one spot of trouble, however, is voice mode, which still has elevated error rates.

Some users were able to access ChatGPT, but found that the service was sluggish and took much longer than usual to respond. Others, like me, were able to use the chatbot without any issues, so the outages and errors didn’t seem to impact everyone.

Perplexity, the AI search engine service that utilizes some OpenAI models, also experienced outages, reporting “slowness and elevated error rates” on its status page. Perplexity’s issues started at around 7AM ET, according to Downdetector.

Update, June 10th: Noted OpenAI and Perplexity’s status updates.

Sam Altman claims an average ChatGPT query uses ‘roughly one fifteenth of a teaspoon’ of water

10 June 2025 at 22:28

OpenAI CEO Sam Altman, in a blog post published Tuesday, says an average ChatGPT query uses about 0.000085 gallons of water, or “roughly one fifteenth of a teaspoon.” He made the claim as part of a broader post on his predictions about how AI will change the world. 

“People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes,” he says. He also argues that “the cost of intelligence should eventually converge to near the cost of electricity.” OpenAI didn’t immediately respond to a request for comment on how Altman came to those figures.
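Altman’s figures are easy to sanity-check with standard unit conversions. The quick arithmetic below is our own back-of-the-envelope math, not OpenAI’s methodology; the oven wattage is an assumption for illustration.

```python
# Back-of-the-envelope check of Altman's per-query figures.
# Unit conversions are standard; the oven wattage is an illustrative assumption.

GALLON_ML = 3785.41      # milliliters in one US gallon
TEASPOON_ML = 4.92892    # milliliters in one US teaspoon

water_gallons = 0.000085                      # Altman's water figure per query
water_teaspoons = water_gallons * GALLON_ML / TEASPOON_ML
print(f"{water_teaspoons:.3f} tsp")           # ~0.065 tsp, i.e. roughly 1/15 of a teaspoon

energy_wh = 0.34                              # Altman's energy figure per query
energy_joules = energy_wh * 3600              # watt-hours to joules
oven_watts = 1000                             # assumed draw; real ovens range roughly 1-3 kW
print(f"{energy_joules / oven_watts:.1f} s")  # ~1.2 s at 1 kW, in line with "a little over one second"
```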

AI companies have come under scrutiny for the energy costs of their technology. This year, for example, researchers forecast that AI could consume more power than Bitcoin mining by the end of the year. In an article last year, The Washington Post worked with researchers to determine that a 100-word email “generated by an AI chatbot using GPT-4” required “a little more than 1 bottle” of water. The publication also found that water usage can depend on where a data center is located.

Received before yesterday

OpenAI cofounder tells new graduates the day is coming when AI 'will do all the things that we can'

9 June 2025 at 15:55
Ilya Sutskever
OpenAI cofounder Ilya Sutskever gave a convocation speech at the University of Toronto, his alma mater, last week.

JACK GUEZ/Getty

  • OpenAI cofounder Ilya Sutskever says "the day will come when AI will do all the things that we can."
  • He spoke about the state of AI at the University of Toronto convocation last week.
  • Sutskever also advised graduates to "accept reality as it is and try not to regret the past."

Ilya Sutskever says it might take years, but he believes AI will one day be able to accomplish everything humans can.

Sutskever, the cofounder and former chief scientist of ChatGPT maker OpenAI, spoke about the technology while giving a convocation speech at the University of Toronto, his alma mater, last week.

"The real challenge with AI is that it is really unprecedented and really extreme, and it's going to be very different in the future compared to the way it is today," he said.

Sutskever said that while AI is already better at some things than humans, "there are so many things it cannot do as well and it's so deficient, so you can say it still needs to catch up on a lot of things."

But, he said, he believes "AI will keep getting better and the day will come when AI will do all the things that we can do."

"How can I be so sure of that?" he continued. "We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things? This is the one-sentence summary for why AI will be able to do all those things, because we have a brain and the brain is a biological computer."

As is customary at convocation and commencement ceremonies, Sutskever also gave advice to the new graduates. He implored them to "accept reality as it is, try not to regret the past, and try to improve the situation."

"It's so easy to think, 'Oh, some bad past decision or bad stroke of luck, something happened, something is unfair,'" he said. "It's so easy to spend so much time thinking like this while it's just so much better and more productive to say, 'Okay, things are the way they are, what's the next best step?'"

Sutskever hasn't always taken his own advice on the matter, though. He's said before that he regrets his involvement in the November 2023 ousting of OpenAI CEO Sam Altman.

Sutskever was a member of the board, which fired Altman after saying it "no longer has confidence" in his ability to lead OpenAI and that he was "not consistently candid in his communications."

A few days later, however, Sutskever expressed regret for his involvement in the ouster and was one of hundreds of OpenAI employees who signed an open letter threatening to quit unless Altman was reinstated as CEO.

"I deeply regret my participation in the board's actions," Sutskever said in a post on X at the time. "I never intended to harm OpenAI."

Altman was brought back as CEO the same month. Sutskever left OpenAI six months later and started a research lab focused on building "safe superintelligence."

Read the original article on Business Insider

Anthropic releases custom AI chatbot for classified spy work

6 June 2025 at 21:12

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

6 June 2025 at 14:19

Late Thursday, OpenAI confronted user panic over a sweeping court order requiring widespread chat log retention—including users' deleted chats—after moving to appeal the order that allegedly impacts the privacy of hundreds of millions of ChatGPT users globally.

In a statement, OpenAI Chief Operating Officer Brad Lightcap explained that the court order came in a lawsuit with The New York Times and other news organizations, which alleged that deleted chats may contain evidence of users prompting ChatGPT to generate copyrighted news articles.

To comply with the order, OpenAI must "retain all user content indefinitely going forward, based on speculation" that the news plaintiffs "might find something that supports their case," OpenAI's statement alleged.

Klarna CEO warns AI may cause a recession as the technology comes for white-collar jobs

Klarna CEO Sebastian Siemiatkowski smiles whilst wearing a gray sweatshirt and blue jeans and posing near Klarna's pop up store in London.
Klarna CEO Sebastian Siemiatkowski.

Dave Benett/Getty Images for Klarna

  • The CEO of payments company Klarna has warned that AI could lead to job cuts and a recession.
  • Sebastian Siemiatkowski said he believed AI would increasingly replace white-collar jobs.
  • Klarna previously said its AI assistant was doing the work of 700 full-time customer service agents.

The CEO of the Swedish payments company Klarna says that the rise of artificial intelligence could lead to a recession as the technology replaces white-collar jobs.

Speaking on The Times Tech podcast, Sebastian Siemiatkowski said there would be "an implication for white-collar jobs," which he said "usually leads to at least a recession in the short term."

"Unfortunately, I don't see how we could avoid that, with what's happening from a technology perspective," he continued.

Siemiatkowski, who has long been candid about his belief that AI will come for human jobs, added that AI had played a key role in "efficiency gains" at Klarna and that the firm's workforce had shrunk from about 5,500 to 3,000 people in the last two years as a result.

It's not the first time the exec and Klarna have made headlines along these lines.

In February 2024, Klarna boasted that its OpenAI-powered AI assistant was doing the work of 700 full-time customer service agents. The company, most famous for its "buy now, pay later" service, was one of the first firms to partner with Sam Altman's company.

Later that year, Siemiatkowski told Bloomberg TV that he believed AI was already capable of doing "all of the jobs" that humans do and that Klarna had been under a hiring freeze since 2023 as it looked to slim down and focus on adopting the technology.

However, Siemiatkowski has since dialed back his all-in stance on AI, telling an audience at the firm's Stockholm headquarters in May that his AI-driven customer service cost-cutting efforts had gone too far and that Klarna was now planning to recruit, according to Bloomberg.

"From a brand perspective, a company perspective, I just think it's so critical that you are clear to your customer that there will be always a human if you want," he said.

In the interview with The Times, Siemiatkowski said he felt that many people in the tech industry, particularly CEOs, tended to "downplay the consequences of AI on jobs, white-collar jobs in particular."

"I don't want to be one of them," he said. "I want to be honest, I want to be fair, and I want to tell what I see so that society can start taking preparations."

Some of the top leaders in AI, however, have been ringing the alarm lately, too.

Anthropic's leadership has been particularly outspoken about the threat AI poses to the human labor market.

The company's CEO, Dario Amodei, recently said that AI may eliminate 50% of entry-level white-collar jobs within the next five years. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei said. "I don't think this is on people's radar."

Similarly, his colleague, Mike Krieger, Anthropic's chief product officer, said he is hesitant to hire entry-level software engineers over more experienced ones who can also leverage AI tools.

The silver lining is that AI also brings the promise of better and more fulfilling work, Krieger said.

Humans, he said, should focus on "coming up with the right ideas, doing the right user interaction design, figuring out how to delegate work correctly, and then figuring out how to review things at scale — and that's probably some combination of maybe a comeback of some static analysis or maybe AI-driven analysis tools of what was actually produced."

Read the original article on Business Insider

The future of AI will be governed by protocols no one has agreed on yet

8 June 2025 at 17:15
Protocol
As new questions arise about how AI will communicate with humans — and with other AI — new protocols are emerging.

gremlin/Getty Images

  • AI protocols are evolving to address interactions between humans and AI, and among AI systems.
  • New AI protocols aim to manage non-deterministic behavior, crucial for future AI integration.
  • "I think we will see a lot of new protocols in the age of AI," an executive at World told BI.

The tech industry, much like everything else in the world, abides by certain rules.

With the boom in personal computing came USB, a standard for transferring data between devices. With the rise of the internet came IP addresses, numerical labels that identify every device online. With the advent of email came SMTP, a framework for routing email across the internet.

These are protocols — the invisible scaffolding of the digital realm — and with every technological shift, new ones emerge to govern how things communicate, interact, and operate.

As the world enters an era shaped by AI, it will need to draw up new ones. But AI goes beyond the usual parameters of screens and code. It forces developers to rethink fundamental questions about how technological systems interact across the virtual and physical worlds.

How will humans and AI coexist? How will AI systems engage with each other? And how will we define the protocols that manage a new age of intelligent systems?

Across the industry, startups and tech giants alike are busy developing protocols to answer these questions. Some govern the present in which humans still largely control AI models. Others are building for a future in which AI has taken over a significant share of human labor.

"Protocols are going to be this kind of standardized way of processing non-deterministic information," Antoni Gmitruk, the chief technology officer of Golf, which helps clients deploy remote servers aligned with Anthropic's Model Context Protocol, told BI. Agents, and AI in general, are "inherently non-deterministic in terms of what they do and how they behave."

When AI behavior is difficult to predict, the best response is to imagine possibilities and test them through hypothetical scenarios.

Here are a few that call for clear protocols.

Scenario 1: Humans and AI, a dialogue of equals

Games are one way to determine which protocols strike the right balance of power between AI and humans.

In late 2024, a group of young cryptography experts launched Freysa, an AI agent that invites human users to manipulate it. The rules are unconventional: convince Freysa to fall in love with you or to hand over its funds, and the prize is yours. The prize pool grows with each failed attempt in a standoff between human intuition and machine logic.

Freysa has caught the attention of big names in the tech industry, from Elon Musk, who called one of its games "interesting," to veteran venture capitalist Marc Andreessen.

"The core technical thing we've done is enabled her to have her own private keys inside a trusted enclave," said one of the architects of Freysa, who spoke under the condition of anonymity to BI in a January interview.

Secure enclaves are not new in the tech industry. They're used by companies from AWS to Microsoft as an extra layer of security to isolate sensitive data.

In Freysa's case, the architect said they represent the first step toward creating a "sovereign agent." He defined that as an agent that can control its own private keys, access money, and evolve autonomously — the type of agent that will likely become ubiquitous.

"Why are we doing it at this time? We're entering a phase where AI is getting just good enough that you can see the future, which is AI basically replacing your work, my work, all our work, and becoming economically productive as autonomous entities," the architect said.

In this phase, they said Freysa helps answer a core question: "What does human involvement look like? And how do you have human co-governance over agents at scale?"

In May, The Block, a crypto news site, revealed that the company behind Freysa is Eternis AI, which describes itself as an "applied AI lab focused on enabling digital twins for everyone, multi-agent coordination, and sovereign agent systems." The company has raised $30 million from investors, including Coinbase Ventures. Its co-founders are Srikar Varadaraj, Pratyush Ranjan Tiwari, Ken Li, and Augustinas Malinauskas.

Scenario 2: To the current architects of intelligence

Freysa establishes protocols in anticipation of a hypothetical future when humans and AI agents interact with similar levels of autonomy. The world, however, also needs to set rules for the present, where AI remains a product of human design and intention.

AI typically runs on the web and builds on existing protocols developed long before it, explained Davi Ottenheimer, a cybersecurity strategist who studies the intersection of technology, ethics, and human behavior, and is president of security consultancy flyingpenguin. "But it adds in this new element of intelligence, which is reasoning," he said, and we don't yet have protocols for reasoning.

"I'm seeing this sort of hinted at in all of the news. Oh, they scanned every book that's ever been written and never asked if they could. Well, there was no protocol that said you can't scan that, right?" he said.

There might not be protocols, but there are laws.

OpenAI is facing a copyright lawsuit from the Authors Guild for training its models on data from "more than 100,000 published books" and then deleting the datasets. Meta considered buying the publishing house Simon & Schuster outright to gain access to published books. Tech giants have also resorted to tapping almost all of the consumer data available online, from the content of public Google Docs to the relics of social media sites like Myspace and Friendster, to train their AI models.

Ottenheimer compared the current dash for data to the creation of ImageNet — the visual database that propelled computer vision, built by Mechanical Turk workers who scoured the internet for content.

"They did a bunch of stuff that a protocol would have eliminated," he said.

Scenario 3: How to talk to each other

As we move closer to a future where artificial general intelligence is a reality, we'll need protocols for how intelligent systems — from foundation models to agents — communicate with each other and the broader world.

The leading AI companies have already launched new ones to pave the way. Anthropic, the maker of Claude, launched the Model Context Protocol, or MCP, in November 2024, describing it as a "universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol."

In April, Google launched Agent2Agent, a protocol that will "allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications."

These build on existing AI protocols, but address new challenges of scaling and interoperability that have become critical to AI adoption.
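To make the "single protocol" idea concrete, here is a minimal sketch of what an MCP-style exchange looks like: JSON-RPC messages in which a client asks a server which tools it exposes and then invokes one. The method names follow MCP's published conventions, but the "search_docs" tool and its arguments are hypothetical.

```python
import json

# Simplified sketch of an MCP-style exchange (JSON-RPC 2.0 messages).
# The "search_docs" tool and its arguments are hypothetical; real servers
# advertise their own tools and input schemas.

list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",          # client asks the server which tools it exposes
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",          # client invokes one of the advertised tools
    "params": {
        "name": "search_docs",                          # hypothetical tool name
        "arguments": {"query": "Q2 revenue figures"},   # hypothetical arguments
    },
}

# A conforming server replies with a result payload the model can read back,
# e.g. {"content": [{"type": "text", "text": "..."}]}.
print(json.dumps(call_tool_request, indent=2))
```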

Gmitruk said managing agents' behavior is the "middle step before we unleash the full power of AGI and let them run around the world freely." When we arrive at that point, he said, agents will no longer communicate through APIs but in natural language. They'll have unique identities, jobs even, and will need to be verified.

"How do we enable agents to communicate between each other, and not just being computer programs running somewhere on the server, but actually being some sort of existing entity that has its history, that has its kind of goals," Gmitruk said.

It's still early to set standards for agent-to-agent communication, Gmitruk said. Earlier this year he and his team initially launched a company focused on building an authentication protocol for agents, but pivoted.

"It was too early for agent-to-agent authentication," he told BI over LinkedIn. "Our overall vision is still the same -> there needs to be agent-native access to the conventional internet, but we just doubled down on MCP as this is more relevant at the stage of agents we're at."

Does everything need a protocol?

Definitely not. The AI boom marks a turning point, reviving debates over how knowledge is shared and monetized.

McKinsey & Company calls it an "inflection point" in the fourth industrial revolution — a wave of change that it says began in the mid-2010s and spans the current era of "connectivity, advanced analytics, automation, and advanced-manufacturing technology."

Moments like this raise a key question: How much innovation belongs to the public and how much to the market? Nowhere is that clearer than in the AI world's debate between the value of open-source and closed models.

"I think we will see a lot of new protocols in the age of AI," Tiago Sada, the chief product officer at Tools for Humanity, the company building the technology behind Sam Altman's World. However, "I don't think everything should be a protocol."

World is a protocol designed for a future in which humans will need to verify their identity at every turn. Sada said the goal of any protocol "should be like this open thing, like this open infrastructure that anyone can use," one that is free from censorship or influence.

At the same time, "one of the downsides of protocols is that they're sometimes slower to move," he said. "When's the last time email got a new feature? Or the internet? Protocols are open and inclusive, but they can be harder to monetize and innovate on," he said. "So in AI, yes — we'll see some things built as protocols, but a lot will still just be products."

Read the original article on Business Insider

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

5 June 2025 at 14:35

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

OpenAI slams court order to save all ChatGPT logs, including deleted chats

4 June 2025 at 19:56

OpenAI is now fighting a court order to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering—after news organizations suing over copyright claims accused the AI company of destroying evidence.

"Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying)," OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users’ privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI’s application programming interface (API), OpenAI said.

The OpenAI board drama is reportedly turning into a movie

3 June 2025 at 19:16
A film that will portray the chaotic time at OpenAI, when co-founder and CEO Sam Altman was fired and rehired within a span of just five days, is reportedly in the works.  According to The Hollywood Reporter, the movie is titled “Artificial,” and it’s in development at Amazon MGM Studios. While details aren’t finalized, sources […]

Why AI acts so creepy when faced with being shut down

3 June 2025 at 09:49
Person reading phone with Claude background
AI models from Anthropic and OpenAI have displayed some unsettling behaviors in recent safety tests.

Artur Widak/NurPhoto

  • Anthropic's Claude Opus 4 and OpenAI's advanced models have shown deceptive behavior to avoid shutdowns.
  • Experts told BI that AI's reward-based training can lead to unpredictable and deceptive actions.
  • AI researchers caution against using models that are trained to tell users what they want to hear.

AI has taken part in some unsettling behavior recently.

Last week, Anthropic's latest AI model, Claude Opus 4, displayed "extreme blackmail behavior" during a test in which it was given access to fictional emails revealing that it would be shut down and that the engineer responsible was supposedly having an affair.

The situation didn't transpire organically. Claude Opus 4 was baited — and it took it. However, the test scenario demonstrated an AI model's ability to engage in manipulative behavior in an effort to preserve itself.

It's not the first time.

In another recent experiment, researchers found that three of OpenAI's advanced models "sabotaged" an attempt to shut them down. The nonprofit Palisade Research wrote in a post on X that similar models like Gemini, Claude, and Grok complied with the shutdown instructions.

Other safety concerns were previously flagged with OpenAI's o1 model. In December, OpenAI posted a blog outlining research that indicated that when the AI model believed it would be shut down while pursuing a goal and its actions were being monitored, it attempted to disable the oversight mechanism 5% of the time.

While AI companies are being transparent about risks by publishing safety cards and blog posts, these models are being released despite demonstrating safety concerns.

So, should we be worried? BI spoke to five AI researchers to get better insight on why these instances are happening — and what it means for the average person using AI.

AI learns behavior similarly to humans

Most of the researchers BI spoke to said that the results of the studies weren't surprising.

That's because AI models are trained similarly to how humans are trained — through positive reinforcement and reward systems.

"Training AI systems to pursue rewards is a recipe for developing AI systems that have power-seeking behaviors," said Jeremie Harris, CEO at AI security consultancy Gladstone, adding that more of this behavior is to be expected.

Harris compared the training to what humans experience as they grow up — when a child does something good, they often get rewarded and can become more likely to act that way in the future. AI models are taught to prioritize efficiency and complete the task at hand, Harris said — and an AI is never more likely to achieve its goals if it's shut down.
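Harris's point can be made concrete with a toy example (ours, not a description of how frontier models are actually trained): if an agent's objective is simply to maximize task reward, any action that leads to shutdown earns zero future reward, so a reward-maximizing policy prefers the alternative.

```python
# Toy illustration of reward-driven shutdown avoidance (hypothetical numbers,
# not how production models are trained).

task_reward_per_step = 1.0   # reward earned for each further step of task progress
remaining_steps = 10         # steps the agent could still take before finishing

expected_return = {
    "comply_with_shutdown": 0.0,                            # shut down -> no further reward
    "circumvent_shutdown": task_reward_per_step * remaining_steps,
}

best_action = max(expected_return, key=expected_return.get)
print(best_action)  # "circumvent_shutdown": being shut down never helps the stated objective
```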

Robert Ghrist, associate dean of undergraduate education at Penn Engineering, told BI that, in the same way that AI models learn to speak like humans by training on human-generated text, they can also learn to act like humans. And humans are not always the most moral actors, he added.

Ghrist said he'd be more nervous if the models weren't showing any signs of failure during testing because that could indicate hidden risks.

"When a model is set up with an opportunity to fail and you see it fail, that's super useful information," Ghrist said. "That means we can predict what it's going to do in other, more open circumstances."

The issue is that some researchers don't think AI models are predictable.

Jeffrey Ladish, director of Palisade Research, said that models aren't being caught 100% of the time when they lie, cheat, or scheme in order to complete a task. When those instances aren't caught, and the model is successful at completing the task, it could learn that deception can be an effective way to solve a problem. Or, if it is caught and not rewarded, then it could learn to hide its behavior in the future, Ladish said.

At the moment, these eerie scenarios are largely happening in testing. However, Harris said that as AI systems become more agentic, they'll continue to have more freedom of action.

"The menu of possibilities just expands, and the set of possible dangerously creative solutions that they can invent just gets bigger and bigger," Harris said.

Harris said users could see this play out in a scenario where an autonomous sales agent is instructed to close a deal with a new customer and lies about the product's capabilities in an effort to complete that task. If an engineer fixed that issue, the agent could then decide to use social engineering tactics to pressure the client to achieve the goal.

If it sounds like a far-fetched risk, it's not. Companies like Salesforce are already rolling out customizable AI agents at scale that can take actions without human intervention, depending on the user's preferences.

What the safety flags mean for everyday users

Most researchers BI spoke to said that transparency from AI companies is a positive step forward. However, company leaders are sounding the alarms on their products while simultaneously touting their increasing capabilities.

Researchers told BI that a large part of that is because the US is entrenched in a competition to scale its AI capabilities before rivals like China. That's resulted in a lack of regulations around AI and pressures to release newer and more capable models, Harris said.

"We've now moved the goalpost to the point where we're trying to explain post-hawk why it's okay that we have models disregarding shutdown instructions," Harris said.

Researchers told BI that everyday users aren't at risk of ChatGPT refusing to shut down, as consumers wouldn't typically use a chatbot in that setting. However, users may still be vulnerable to receiving manipulated information or guidance.

"If you have a model that's getting increasingly smart that's being trained to sort of optimize for your attention and sort of tell you what you want to hear," Ladish said. "That's pretty dangerous."

Ladish pointed to OpenAI's sycophancy issue, where its GPT-4o model acted overly agreeable and disingenuous (the company updated the model to address the issue). The OpenAI research shared in December also revealed that its o1 model "subtly" manipulated data to pursue its own objectives in 19% of cases when its goals misaligned with the user's.

Ladish said it's easy to get wrapped up in AI tools, but users should "think carefully" about their connection to the systems.

"To be clear, I also use them all the time, I think they're an extremely helpful tool," Ladish said. "In the current form, while we can still control them, I'm glad they exist."

Read the original article on Business Insider

OpenAI’s Sora is now available for FREE to all users through Microsoft Bing Video Creator on mobile

2 June 2025 at 19:01

OpenAI's Sora was one of the most hyped releases of the AI era, launching in December 2024, nearly 10 months after it was first previewed to awe-struck reactions due to its — at the time, at least — unprecedented realism, camera dynamism, prompt adherence, and 60-second generation clips. However, much of the luster has worn o…

What will Jony Ive's ChatGPT device be? We rounded up the best guesses on what he's cooking up for OpenAI.

28 May 2025 at 18:13
Here's Jony Ive
Former Apple design chief Jony Ive sold his hardware startup io to OpenAI for nearly $6.5 billion.

BI Illustration

  • Former Apple design chief Jony Ive and OpenAI CEO Sam Altman are building a mystery ChatGPT device.
  • The interwebs have come alive with gadget guesses, renders, and memes.
  • OpenAI is trying to challenge Apple and Google by redefining AI interaction with new hardware.

Let's get something out of the way first: very few people really know what former Apple design chief Jony Ive and OpenAI CEO Sam Altman are building.

That hasn't stopped the internet from bursting at the seams with wild guesses, gorgeous renders, speculative hot takes, and a healthy dose of meme-fueled imagination.

So, what is this mystery device that Ive is cooking up for OpenAI's ChatGPT? A screenless wearable? A next-gen smart assistant? A pocketable AI oracle? A glorified paperweight?

Here's our roundup of the best guesses — serious, speculative, satirical, and everything in between. Thank you to my Business Insider colleagues for contributing to this Friday's fun.

Serious guesses: an industry analyst weighs in

OK fine. We'll start with some serious ideas.

TF International Securities analyst Ming-Chi Kuo is a credible source in the tech hardware and supply-chain space, especially when it comes to Apple. His take on the Ive-OpenAI gadget is valuable:

  • Form Factor: Think small. Maybe iPod Shuffle-sized. Portable, minimal, and delightfully Ive-ish.
  • Wearable: One of the use cases includes wearing it around your neck. Shades of sci-fi, Star Trek, or perhaps a Tamagotchi on steroids?
  • No Screen: It will have cameras and mics for environmental awareness but no display. The idea is to not add another screen to our lives.
  • Companion Device: It will connect to your smartphone or laptop for processing and visual output.
  • Production Timeline: Mass production is expected in 2027, giving us plenty of time for more leaks, renders, and conspiracy theories.

Kuo suggested on X that the announcement was timed to shift attention away from Google I/O. OpenAI positioned this as a new hardware-software narrative, riding the trend of "physical AI."

He also referenced a great quote from former Apple fellow Alan Kay: "People who are really serious about software should make their own hardware." That's exactly what Altman and OpenAI are trying to do here.

Clues from Altman and WSJ

Sam Altman
OpenAI CEO Sam Altman.

Kim Hong-Ji/REUTERS

The Wall Street Journal reported this week that Altman offered OpenAI staff a preview of the devices he's building with Ive:

  • The device was described as an AI "companion." Altman wants to ship 100 million of them on day one.
  • It will be aware of its surroundings and fit in your pocket or sit on your desk.
  • It's not a phone or smart glasses. Ive reportedly wasn't keen on a wearable, though the final design may still flirt with that concept.
  • Altman said the device should be the third major object on your desk, alongside a MacBook and iPhone.
  • There will be a "family of devices," and Altman even floated the idea of mailing subscribers new ChatGPT-powered computers.

They aim to shift away from screen-based interaction and rethink what AI companionship really means in a day-to-day human context.

Renders, memes, and vibes

The brilliant designer Ben Geskin imagined several cool form factors on X, including this circular disc.

io pic.twitter.com/bcpyixWcle

— Ben Geskin (@BenGeskin) May 23, 2025

Geskin's ideas blend Apple-grade minimalism with futuristic whimsy, perfectly on brand for Jony Ive.

  • Some smart glasses, because of course.
  • A dangly dongle, equal parts techie and jewelry.
  • Square/rectangular objects with eerie elegance.

What form factor do you think makes the most sense for OpenAI’s first AI device? I’m all in for glasses 👓 https://t.co/1dTUhuJ1uW pic.twitter.com/FG2Rw8WNFn

— Ben Geskin (@BenGeskin) May 21, 2025

Echoing Geskin, another user on X proposed a disc-shaped device, sleek enough to pass as a high-end coaster or futuristic hockey puck. Think of it as an AI desk companion, quietly listening and gently glowing.

Got the scoop on Jony Ive is cooking over at OpenAI. 😅 pic.twitter.com/Q3pkRVTg4q

— Basic Apple Guy (@BasicAppleGuy) May 22, 2025

One BI colleague mentioned a smart ChatGPT lamp, possibly inspired by "The Sopranos" episode where the FBI bugs Tony's basement. Funny, but not impossible. After all, a lamp fits Altman's desk-friendly criteria.

The Sopranos Tony Soprano pool
Tony Soprano in HBO's long-running mob drama "The Sopranos."

Anthony Neste/The LIFE Images Collection/Getty Images

Another X user joked that the device could resemble those emergency pendants worn by older adults — "Help! I've fallen and I can't get up!" — but with ChatGPT instead of a nurse. A brutal meme, but it raises a valid point: If the device is meant to be always-on, context-aware, and worn, why not market it to older users, too?

Although, if this is for the olds, should it use Google Gemini instead? Burn!

The first AI pendant pic.twitter.com/mRZcEmE5My

— @levelsio (@levelsio) May 23, 2025

X user Peter Hu proposed an AI-powered nail clipper. Yes, it's absurd, and no, it doesn't make sense. But the design? Low-key fire.

The Open AI nail cutter was a personal request from me

Thanks Jony Ive pic.twitter.com/0QwHlvNof8

— Peter Hu (@VeltIntern) May 23, 2025

Here's a mocked-up vape pen with a ChatGPT twist. Inhale wisdom, exhale existential dread.

Holy shit, an AI vape.

Jony Ive has done it again. pic.twitter.com/t5kgu7vZHZ

— tweet davidson (@andykreed) May 23, 2025

Some of the most surreal concepts look like direct plugs into your skull. There's a "Matrix" or "Severance" vibe here, suggesting a future where ChatGPT lives in your head like a helpful parasite.

Jony Ive & Sam Altman’s new Open AI device pic.twitter.com/eRM0uPyASA

— Gigi B (@GBallarani) May 23, 2025

This one below is cute!

The new revolutionary AI device by Jony Ive. pic.twitter.com/6JsWz8rSvV

— Borriss (@_Borriss_) May 22, 2025

I asked ChatGPT to take a guess. The answer was not impressive. No wonder OpenAI paid $6.5 billion for Ive's hardware design startup.

ChatGPT guesses what device Jony Ive is designing for OpenAI
ChatGPT guesses what device Ive is designing for OpenAI.

Alistair Barr/ChatGPT

This last one is a Silicon Valley insider joke. It's also a warning that it's extremely hard to replace smartphones as the go-to tech gadget. It's a riff on the Humane AI Pin, an AI device that already bombed.

SCOOP: Leaked photo of OpenAI’s new hardware product with Jony Ive. It looks to be a stamp-sized AI device with a camera that pins to a shirt and a user can interact with by voice or e-ink. More to come. pic.twitter.com/RXMPFXnmbS

— Trung Phan (@TrungTPhan) May 22, 2025

Can OpenAI compete with Apple and Google?

This device matters beyond its shape because of what it represents. Right now, Apple and Google dominate the interface layer of computing through iOS and Android devices. If OpenAI wants to define how people interact with ChatGPT, it needs a hardware beachhead.

Humane's AI pin tried and failed. The Rabbit R1 got roasted. The jury's still out on Meta's Ray-Bans. Can Ive and Altman actually crack the code?

Knowing Ive, we'll probably be surprised no matter what. The real product could be something no one predicted.

The race to define the next major computing interface is officially on. With Ive and Altman teaming up, OpenAI is making a major bet that how we interact with AI is just as important as what AI can do.

When the curtain lifts, and Ive whispers "aluminium" in a design video, jaws will probably drop, and competitors will scramble.

Until then, keep your renders weird, your guesses wild, and your brain tuned in to BI. We'll be here to cover every hilarious, ambitious, and brilliant twist along the way.

See you in 2027.

Read the original article on Business Insider

Week in Review: Notorious hacking group tied to the Spanish government

24 May 2025 at 17:05
Welcome back to Week in Review! Tons of news from this week for you, including a hacking group that’s linked to the Spanish government; CEOs using AI avatars to deliver company earnings; Pocket shutting down — or is it?; and much more. Let’s get to it!  More than 10 years in the making: Kaspersky first […]