
Inside Mark Zuckerberg’s AI hiring spree

AI researchers have recently been asking themselves a version of the question, "Is that really Zuck?"

As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new "superintelligence" AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit's work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone.

For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they'll have to make risky bets, the scale of Meta's products, and the money he's prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta's headquarters, where I'm told the desks have already been rearranged for the incoming team.

Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I've covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have t …

Read the full story at The Verge.

  •  

AI chatbots tell users what they want to hear, and that’s problematic

The world’s leading artificial intelligence companies are stepping up efforts to deal with a growing problem of chatbots telling people what they want to hear.

OpenAI, Google DeepMind, and Anthropic are all working on reining in the sycophantic behavior of their generative AI products, which offer over-flattering responses to users.

The issue, stemming from how the large language models are trained, has come into focus at a time when more and more people have adopted the chatbots not only at work as research assistants, but in their personal lives as therapists and social companions.


  •  

ChatGPT’s daylong outage is nearly fixed

OpenAI’s ChatGPT service was down all day for many users after the platform started experiencing performance issues on Tuesday morning. The chatbot responded with a “Hmm…something seems to have gone wrong” error message to my colleague after failing to load, and users across X and Reddit were reporting platform outages.

Downdetector showed that issues started at around 3AM ET, with multiple regions impacted globally. OpenAI’s own status page said that some users started experiencing “elevated error rates and latency” at that time, noting that the issues were affecting ChatGPT, its Sora text-to-video AI tool, and OpenAI APIs. OpenAI added a separate line for “elevated error rates on Sora” at 5:23AM ET, and later updated the status for both to “partial outage.”

As of 6:32PM ET, OpenAI’s tracker reported a “full recovery in the API,” and that “Nearly all ChatGPT components are now working properly for all users.” The one remaining trouble spot, however, was voice mode, which still had elevated error rates.

Some users were able to access ChatGPT but found that the service was sluggish, taking much longer than usual to respond. Others, like me, were able to use the chatbot without any issues, so the outages and errors didn’t seem to affect everyone.

Perplexity, the AI search engine service that utilizes some OpenAI models, also experienced outages, reporting “slowness and elevated error rates” on its status page. Perplexity’s issues started at around 7AM ET, according to Downdetector.

Update, June 10th: Noted OpenAI and Perplexity’s status updates.

  •  

Sam Altman claims an average ChatGPT query uses ‘roughly one fifteenth of a teaspoon’ of water

OpenAI CEO Sam Altman, in a blog post published Tuesday, says an average ChatGPT query uses about 0.000085 gallons of water, or “roughly one fifteenth of a teaspoon.” He made the claim as part of a broader post on his predictions about how AI will change the world. 

“People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes,” he says. He also argues that “the cost of intelligence should eventually converge to near the cost of electricity.” OpenAI didn’t immediately respond to a request for comment on how Altman came to those figures.
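
Those figures are straightforward to sanity-check. Below is a quick back-of-the-envelope conversion in Python; the per-query water and energy numbers are Altman's, while the oven and lightbulb wattages are illustrative assumptions, not OpenAI figures:

    # Sanity check of Altman's per-query claims. The 0.000085 gal and
    # 0.34 Wh figures come from his post; the appliance wattages are
    # assumed values chosen to test the comparisons, not OpenAI numbers.
    GALLON_ML = 3785.41       # mL per US gallon
    TEASPOON_ML = 4.92892     # mL per US teaspoon

    water_ml = 0.000085 * GALLON_ML                    # ~0.32 mL per query
    print(f"water: 1/{TEASPOON_ML / water_ml:.0f} of a teaspoon")   # 1/15

    energy_wh = 0.34                                   # Wh per query
    oven_watts, bulb_watts = 1000, 10                  # assumed draws
    print(f"oven: {energy_wh * 3600 / oven_watts:.1f} seconds")    # ~1.2
    print(f"bulb: {energy_wh * 60 / bulb_watts:.1f} minutes")      # ~2.0

The arithmetic is consistent with the post's framing: about a fifteenth of a teaspoon of water, a little over a second of oven time at roughly 1,000 watts, and about two minutes of a 10-watt high-efficiency bulb.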

AI companies have come under scrutiny for the energy costs of their technology. This year, for example, researchers forecast that AI could consume more power than Bitcoin mining by the end of the year. In an article last year, The Washington Post worked with researchers to determine that a 100-word email “generated by an AI chatbot using GPT-4” required “a little more than 1 bottle” of water. The publication also found that water usage can depend on where a data center is located.

  •  

OpenAI cofounder tells new graduates the day is coming when AI 'will do all the things that we can'


  • OpenAI cofounder Ilya Sutskever says "the day will come when AI will do all the things that we can."
  • He spoke about the state of AI at the University of Toronto convocation last week.
  • Sutskever also advised graduates to "accept reality as it is and try not to regret the past."

Ilya Sutskever says it might take years, but he believes AI will one day be able to accomplish everything humans can.

Sutskever, the cofounder and former chief scientist of ChatGPT maker OpenAI, spoke about the technology while giving a convocation speech at the University of Toronto, his alma mater, last week.

"The real challenge with AI is that it is really unprecedented and really extreme, and it's going to be very different in the future compared to the way it is today," he said.

Sutskever said that while AI is already better at some things than humans, "there are so many things it cannot do as well and it's so deficient, so you can say it still needs to catch up on a lot of things."

But, he said, he believes "AI will keep getting better and the day will come when AI will do all the things that we can do."

"How can I be so sure of that?" he continued. "We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things? This is the one-sentence summary for why AI will be able to do all those things, because we have a brain and the brain is a biological computer."

As is customary at convocation and commencement ceremonies, Sutskever also gave advice to the new graduates. He implored them to "accept reality as it is, try not to regret the past, and try to improve the situation."

"It's so easy to think, 'Oh, some bad past decision or bad stroke of luck, something happened, something is unfair,'" he said. "It's so easy to spend so much time thinking like this while it's just so much better and more productive to say, 'Okay, things are the way they are, what's the next best step?'"

Sutskever hasn't always taken his own advice on the matter, though. He's said before that he regrets his involvement in the November 2023 ousting of OpenAI CEO Sam Altman.

Sutskever was a member of the board, which fired Altman after saying it "no longer has confidence" in his ability to lead OpenAI and that he was "not consistently candid in his communications."

A few days later, however, Sutskever expressed regret for his involvement in the ouster and was one of hundreds of OpenAI employees who signed an open letter threatening to quit unless Altman was reinstated as CEO.

"I deeply regret my participation in the board's actions," Sutskever said in a post on X at the time. "I never intended to harm OpenAI."

Altman was brought back as CEO the same month. Sutskever left OpenAI six months later and started a research lab focused on building "safe superintelligence."

Read the original article on Business Insider

  •  

Anthropic releases custom AI chatbot for classified spy work

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.


  •  

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

Late Thursday, OpenAI confronted user panic over a sweeping court order requiring widespread chat log retention—including users' deleted chats—after moving to appeal the order that allegedly impacts the privacy of hundreds of millions of ChatGPT users globally.

In a statement, OpenAI Chief Operating Officer Brad Lightcap explained that the court order came in a lawsuit with The New York Times and other news organizations, which alleged that deleted chats may contain evidence of users prompting ChatGPT to generate copyrighted news articles.

To comply with the order, OpenAI must "retain all user content indefinitely going forward, based on speculation" that the news plaintiffs "might find something that supports their case," OpenAI's statement alleged.


  •  

Klarna CEO warns AI may cause a recession as the technology comes for white-collar jobs


  • The CEO of payments company Klarna has warned that AI could lead to job cuts and a recession.
  • Sebastian Siemiatkowski said he believed AI would increasingly replace white-collar jobs.
  • Klarna previously said its AI assistant was doing the work of 700 full-time customer service agents.

The CEO of the Swedish payments company Klarna says that the rise of artificial intelligence could lead to a recession as the technology replaces white-collar jobs.

Speaking on The Times Tech podcast, Sebastian Siemiatkowski said there would be "an implication for white-collar jobs," which he said "usually leads to at least a recession in the short term."

"Unfortunately, I don't see how we could avoid that, with what's happening from a technology perspective," he continued.

Siemiatkowski, who has long been candid about his belief that AI will come for human jobs, added that AI had played a key role in "efficiency gains" at Klarna and that the firm's workforce had shrunk from about 5,500 to 3,000 people in the last two years as a result.

It's not the first time the exec and Klarna have made headlines along these lines.

In February 2024, Klarna boasted that its OpenAI-powered AI assistant was doing the work of 700 full-time customer service agents. The company, most famous for its "buy now, pay later" service, was one of the first firms to partner with Sam Altman's company.

Later that year, Siemiatkowski told Bloomberg TV that he believed AI was already capable of doing "all of the jobs" that humans do and that Klarna had enacted a hiring freeze since 2023 as it looked to slim down and focus on adopting the technology.

However, Siemiatkowski has since dialed back his all-in stance on AI, telling an audience at the firm's Stockholm headquarters in May that his AI-driven customer service cost-cutting efforts had gone too far and that Klarna was now planning to recruit, according to Bloomberg.

"From a brand perspective, a company perspective, I just think it's so critical that you are clear to your customer that there will be always a human if you want," he said.

In the interview with The Times, Siemiatkowski said he felt that many people in the tech industry, particularly CEOs, tended to "downplay the consequences of AI on jobs, white-collar jobs in particular."

"I don't want to be one of them," he said. "I want to be honest, I want to be fair, and I want to tell what I see so that society can start taking preparations."

Some of the top leaders in AI, however, have been ringing the alarm lately, too.

Anthropic's leadership has been particularly outspoken about the threat AI poses to the human labor market.

The company's CEO, Dario Amodei, recently said that AI may eliminate 50% of entry-level white-collar jobs within the next five years. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei said. "I don't think this is on people's radar."

Similarly, his colleague, Mike Krieger, Anthropic's chief product officer, said he is hesitant to hire entry-level software engineers over more experienced ones who can also leverage AI tools.

The silver lining is that AI also brings the promise of better and more fulfilling work, Krieger said.

Humans, he said, should focus on "coming up with the right ideas, doing the right user interaction design, figuring out how to delegate work correctly, and then figuring out how to review things at scale — and that's probably some combination of maybe a comeback of some static analysis or maybe AI-driven analysis tools of what was actually produced."

Read the original article on Business Insider

  •  

The future of AI will be governed by protocols no one has agreed on yet

As new questions arise about how AI will communicate with humans — and with other AI — new protocols are emerging.

  • AI protocols are evolving to address interactions between humans and AI, and among AI systems.
  • New AI protocols aim to manage non-deterministic behavior, crucial for future AI integration.
  • "I think we will see a lot of new protocols in the age of AI," an executive at World told BI.

The tech industry, much like everything else in the world, abides by certain rules.

With the boom in personal computing came USB, a standard for transferring data between devices. With the rise of the internet came IP addresses, numerical labels that identify every device online. With the advent of email came SMTP, a framework for routing email across the internet.

These are protocols — the invisible scaffolding of the digital realm — and with every technological shift, new ones emerge to govern how things communicate, interact, and operate.

As the world enters an era shaped by AI, it will need to draw up new ones. But AI goes beyond the usual parameters of screens and code. It forces developers to rethink fundamental questions about how technological systems interact across the virtual and physical worlds.

How will humans and AI coexist? How will AI systems engage with each other? And how will we define the protocols that manage a new age of intelligent systems?

Across the industry, startups and tech giants alike are busy developing protocols to answer these questions. Some govern the present in which humans still largely control AI models. Others are building for a future in which AI has taken over a significant share of human labor.

"Protocols are going to be this kind of standardized way of processing non-deterministic information," Antoni Gmitruk, the chief technology officer of Golf, which helps clients deploy remote servers aligned with Anthropic's Model Context Protocol, told BI. Agents, and AI in general, are "inherently non-deterministic in terms of what they do and how they behave."

When AI behavior is difficult to predict, the best response is to imagine possibilities and test them through hypothetical scenarios.

Here are a few that call for clear protocols.

Scenario 1: Humans and AI, a dialogue of equals

Games are one way to determine which protocols strike the right balance of power between AI and humans.

In late 2024, a group of young cryptography experts launched Freysa, an AI agent that invites human users to manipulate it. The rules are unconventional: convince Freysa to fall in love with you or to concede its funds, and the prize is yours. The prize pool grows with each failed attempt in a standoff between human intuition and machine logic.

Freysa has caught the attention of big names in the tech industry, from Elon Musk, who called one of its games "interesting," to veteran venture capitalist Marc Andreessen.

"The core technical thing we've done is enabled her to have her own private keys inside a trusted enclave," said one of the architects of Freysa, who spoke under the condition of anonymity to BI in a January interview.

Secure enclaves are not new in the tech industry. They're used by companies from AWS to Microsoft as an extra layer of security to isolate sensitive data.

In Freysa's case, the architect said they represent the first step toward creating a "sovereign agent." He defined that as an agent that can control its own private keys, access money, and evolve autonomously — the type of agent that will likely become ubiquitous.

"Why are we doing it at this time? We're entering a phase where AI is getting just good enough that you can see the future, which is AI basically replacing your work, my work, all our work, and becoming economically productive as autonomous entities," the architect said.

In this phase, they said Freysa helps answer a core question: "What does human involvement look like? And how do you have human co-governance over agents at scale?"

In May, The Block, a crypto news site, revealed that the company behind Freysa is Eternis AI, which describes itself as an "applied AI lab focused on enabling digital twins for everyone, multi-agent coordination, and sovereign agent systems." The company has raised $30 million from investors, including Coinbase Ventures. Its co-founders are Srikar Varadaraj, Pratyush Ranjan Tiwari, Ken Li, and Augustinas Malinauskas.

Scenario 2: To the current architects of intelligence

Freysa establishes protocols in anticipation of a hypothetical future in which humans and AI agents interact with similar levels of autonomy. The world, however, also needs to set rules for the present, in which AI remains a product of human design and intention.

AI typically runs on the web and builds on existing protocols developed long before it, explained Davi Ottenheimer, a cybersecurity strategist who studies the intersection of technology, ethics, and human behavior, and is president of security consultancy flyingpenguin. "But it adds in this new element of intelligence, which is reasoning," he said, and we don't yet have protocols for reasoning.

"I'm seeing this sort of hinted at in all of the news. Oh, they scanned every book that's ever been written and never asked if they could. Well, there was no protocol that said you can't scan that, right?" he said.

There might not be protocols, but there are laws.

OpenAI is facing a copyright lawsuit from the Authors Guild for training its models on data from "more than 100,000 published books" and then deleting the datasets. Meta considered buying the publishing house Simon & Schuster outright to gain access to published books. Tech giants have also resorted to tapping almost all of the consumer data available online, from the content of public Google Docs to the relics of social media sites like Myspace and Friendster, to train their AI models.

Ottenheimer compared the current dash for data to the creation of ImageNet — the visual database that propelled computer vision, built by Mechanical Turk workers who scoured the internet for content.

"They did a bunch of stuff that a protocol would have eliminated," he said.

Scenario 3: How to talk to each other

As we move closer to a future where artificial general intelligence is a reality, we'll need protocols for how intelligent systems — from foundation models to agents — communicate with each other and the broader world.

The leading AI companies have already launched new ones to pave the way. Anthropic, the maker of Claude, launched the Model Context Protocol, or MCP, in November 2024. It describes it as a "universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol."
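
To make that concrete, here is a minimal MCP tool server sketched with the official mcp Python SDK and its FastMCP helper; the server name and the lookup_order tool are invented for illustration:

    # Minimal MCP server sketch (assumes the official SDK: pip install "mcp[cli]").
    # Any MCP-capable client can connect, list this server's tools, and call
    # them: one standard integration instead of a custom one per data source.
    from mcp.server.fastmcp import FastMCP

    server = FastMCP("orders-demo")            # hypothetical data source

    @server.tool()
    def lookup_order(order_id: str) -> str:
        """Return the status of an order (stubbed for illustration)."""
        return f"Order {order_id}: shipped"

    if __name__ == "__main__":
        server.run()                           # stdio transport by default

The point is the shape of the integration rather than the tool itself: any client that speaks MCP can discover and call lookup_order without bespoke glue code for each model or data source.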

In April, Google launched Agent2Agent, a protocol that will "allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications."

These build on existing AI protocols, but address new challenges of scaling and interoperability that have become critical to AI adoption.

So, managing agents' behavior is the "middle step before we unleash the full power of AGI and let them run around the world freely," he said. When we arrive at that point, Gmitruk said agents will no longer communicate through APIs but in natural language. They'll have unique identities, jobs even, and need to be verified.

"How do we enable agents to communicate between each other, and not just being computer programs running somewhere on the server, but actually being some sort of existing entity that has its history, that has its kind of goals," Gmitruk said.

It's still early to set standards for agent-to-agent communication, Gmitruk said. Earlier this year he and his team initially launched a company focused on building an authentication protocol for agents, but pivoted.

"It was too early for agent-to-agent authentication," he told BI over LinkedIn. "Our overall vision is still the same -> there needs to be agent-native access to the conventional internet, but we just doubled down on MCP as this is more relevant at the stage of agents we're at."

Does everything need a protocol?

Definitely not. The AI boom marks a turning point, reviving debates over how knowledge is shared and monetized.

McKinsey & Company calls it an "inflection point" in the fourth industrial revolution — a wave of change that it says began in the mid-2010s and spans the current era of "connectivity, advanced analytics, automation, and advanced-manufacturing technology."

Moments like this raise a key question: How much innovation belongs to the public and how much to the market? Nowhere is that clearer than in the AI world's debate between the value of open-source and closed models.

"I think we will see a lot of new protocols in the age of AI," Tiago Sada, the chief product officer at Tools for Humanity, the company building the technology behind Sam Altman's World. However, "I don't think everything should be a protocol."

World is a protocol designed for a future in which humans will need to verify their identity at every turn. Sada said the goal of any protocol "should be like this open thing, like this open infrastructure that anyone can use," free from censorship or influence.

At the same time, "one of the downsides of protocols is that they're sometimes slower to move," he said. "When's the last time email got a new feature? Or the internet? Protocols are open and inclusive, but they can be harder to monetize and innovate on," he said. "So in AI, yes — we'll see some things built as protocols, but a lot will still just be products."

Read the original article on Business Insider

  •  

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.


  •  

OpenAI slams court order to save all ChatGPT logs, including deleted chats

OpenAI is now fighting a court order to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering—after news organizations suing over copyright claims accused the AI company of destroying evidence.

"Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying)," OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users’ privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI’s application programming interface (API), OpenAI said.


  •  

The OpenAI board drama is reportedly turning into a movie

A film that will portray the chaotic time at OpenAI, when co-founder and CEO Sam Altman was fired and rehired within a span of just five days, is reportedly in the works. According to The Hollywood Reporter, the movie is titled “Artificial,” and it’s in development at Amazon MGM Studios. While details aren’t finalized, sources […]
  •