AI researchers have recently been asking themselves a version of the question, "Is that really Zuck?"
As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new "superintelligence" AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit's work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone.
For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they'll have to make risky bets, the scale of Meta's products, and the money he's prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta's headquarters, where I'm told the desks have already been rearranged for the incoming team.
Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I've covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have t…
Meta's massive investment in Scale AI may be giving some of the startup's biggest customers pause. Reuters reports that Google had planned to pay Scale AI $200 million this year but is now planning to cut ties with the startup and is having conversations with its competitors.
Alphabet-owned robotaxi company Waymo is limiting service due to Saturday's scheduled nationwide "No Kings" protests against President Donald Trump and his policies.
NotebookLM is undoubtedly one of Google's best implementations of generative AI technology, giving you the ability to explore documents and notes with a Gemini AI model. Last year, Google added the ability to generate so-called "audio overviews" of your source material in NotebookLM. Now, Google has brought those fake AI podcasts to search results as a test. Instead of clicking links or reading the AI Overview, you can have two nonexistent people tell you what the results say.
This feature is not currently rolling out widely; it's available in Search Labs, which means you have to manually enable it. Anyone can opt in to the new Audio Overview search experience, though. If you join the test, you'll quickly see the embedded player in Google search results. However, it's not at the top with the usual block of AI-generated text. Instead, you'll see it after the first few search results, below the "People also ask" knowledge graph section.
Meta has invested $15 billion into data-labeling startup Scale AI and hired its co-founder, Alexandr Wang, as part of its bid to attract talent from rivals in a fiercely competitive market.
The deal values Scale at $29 billion, double its valuation last year. Scale said it would "substantially expand" its commercial relationship with Meta "to accelerate deployment of Scale's data solutions," without giving further details. Scale helps companies improve their artificial intelligence models by providing labeled training data.
Scale will distribute proceeds from Metaβs investment to shareholders, and Meta will own 49 percent of Scaleβs equity following the transaction.
As artificial intelligence has advanced, tools have emerged that make it easy to create digital replicas of lost loved ones, even without the knowledge or consent of the person who died.
Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.
Because of suspected harms and perhaps a general revulsion at the idea of it, not everybody wants to become an AI ghost.
When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google's AI search results are spreading misinformation claiming the incident involved an Airbus plane; it was actually a Boeing 787.
Travelers are more attuned to the airliner models these days after a spate of crashes involving Boeing's 737 lineup several years ago. Searches for airline disasters are sure to skyrocket in the coming days, with reports that more than 200 passengers and crew lost their lives in the Air India Flight 171 crash. The way generative AI operates means some people searching for details may get the wrong impression from Google's results page.
Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We've run a few similar searchesβsome of the AI results say Boeing, some say Airbus, and some include a strange mashup of both Airbus and Boeing. It's a mess.
The worldβs leading artificial intelligence companies are stepping up efforts to deal with a growing problem of chatbots telling people what they want to hear.
OpenAI, Google DeepMind, and Anthropic are all working to rein in sycophancy, the tendency of their generative AI products to offer over-flattering responses to users.
The issue, stemming from how the large language models are trained, has come into focus at a time when more and more people have adopted the chatbots not only at work as research assistants, but in their personal lives as therapists and social companions.
Fans saw clips of a man riding an alligator in a Kalshi ad.
An AI-generated ad for Kalshi, where you can bet on real-world events, aired during an NBA Finals game.
PJ Accetturo, a self-described AI filmmaker, described his process for creating the ad.
Here's how he used Google's Gemini chatbot and Veo 3 video generator to make the "most unhinged" ad.
A farmer floating in a pool of eggs. An alien chugging beer. An older man, draped in an American flag, screaming, "Indiana gonna win baby." The chaotic scenes are all part of a new AI-generated ad from sports betting marketplace Kalshi, which aired Wednesday during Game 3 of the NBA Finals.
"The world's gone mad, trade it," the commercial's tagline read, following the 30-second collection of surreal scenes.
In a recent thread on X, the ad's director explained how he made the clip for just $2,000.
"Kalshi hired me to make the most unhinged NBA Finals commercial possible," PJ Accetturo, a self-described AI filmmaker, wrote on Wednesday. "Network TV actually approved this GTA-style madness."
"Kalshi hired me to make the most unhinged NBA Finals commercial possible. Network TV actually approved this GTA-style madness 🤣 High-dopamine Veo 3 videos will be the ad trend of 2025."
Accetturo said he made the ad using Veo 3, Google's latest AI video generator. A Kalshi spokesperson confirmed to BI that the company hired Accetturo to make the ad and that it was generated entirely using Veo 3.
"Kalshi asked me to create a spot about people betting on various markets, including the NBA Finals," Accetturo wrote on X. "I said the best Veo 3 content is crazy people doing crazy things while showcasing your brand. They love GTA VI. I grew up in Florida. This idea wrote itself."
He said that he started by writing a rough script, turned to Gemini to generate a shot list and prompts, pasted it into Veo 3, and made the finishing touches in editing software.
To write the script, he said he asked Kalshi's team for pieces of dialogue they wanted to include, then thought up "10 wild characters in unhinged situations to say them." Accetturo said that he got help from Gemini and ChatGPT for coming up with ideas and working them into a script.
A screenshot he posted of this stage of his process showed dialogue like "Indiana gonna win baby" and "I'm all in on OKC" alongside characters like "rizzed out grandpa headed to the club" and "old lady in front of pickup truck that says 'fresh manatee' in a cooler behind her."
Accetturo said he then asked Gemini to turn every shot description into a Veo 3 prompt.
"I always tell it to return 5 prompts at a timeβany more than that and the quality starts to slip," he wrote on X. "Each prompt should fully describe the scene as if Veo 3 has no context of the shot before or after it. Re-describe the setting, the character, and the tone every time to maintain consistency."
Accetturo said it took 300 to 400 generations to get 15 usable clips.
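The batching rule Accetturo describes can be sketched in code. This is a hypothetical illustration of the workflow, not his actual tooling: `call_llm` is a stand-in for a real Gemini API call, and every function name here is invented for the example.

```python
# Sketch of the prompt-batching workflow described above: request Veo 3
# prompts from an LLM five at a time, and have every prompt re-describe
# the setting, character, and tone so each clip is self-contained.
from typing import Callable, List

BATCH_SIZE = 5  # "any more than that and the quality starts to slip"

def build_batch_request(shots: List[str], style_notes: str) -> str:
    """Bundle up to five shot descriptions into one LLM request,
    repeating the shared style notes so each prompt stands alone."""
    numbered = "\n".join(f"{i + 1}. {shot}" for i, shot in enumerate(shots))
    return (
        f"Return {len(shots)} Veo 3 prompts, one per shot below.\n"
        f"Each prompt must fully re-describe the setting, character, "
        f"and tone ({style_notes}) as if the model has no other context.\n"
        f"{numbered}"
    )

def batched(items: List[str], size: int) -> List[List[str]]:
    """Split the shot list into chunks of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def make_prompts(shots: List[str], style_notes: str,
                 call_llm: Callable[[str], List[str]]) -> List[str]:
    """Generate one Veo prompt per shot, five shots per LLM call."""
    prompts: List[str] = []
    for batch in batched(shots, BATCH_SIZE):
        prompts.extend(call_llm(build_batch_request(batch, style_notes)))
    return prompts
```

The point of the small batch size, per his thread, is quality control: each request stays short enough that the model keeps every prompt fully self-describing instead of leaning on earlier context.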
"We were not specifically looking for an AI video at first, but after getting quotes from production companies that were in the six or seven figure range with timelines that didn't fit our needs, we decided to experiment, and that's when we made the decision to go with AI and hire PJ," the Kalshi spokesperson told BI. "Given the success of this first ad, we are absolutely planning on doing more with AI."
The spokesperson said the video went from idea to live ad in three days, cost roughly $2,000 to make, and is on track to finish with 20 million impressions across mediums.
Accetturo told BI that he was "paid very well for the project" and now makes a "lot more as an AI director" than he did for live action contracts, which often involved weeks of work before and after the shoot compared to the few days the Kalshi ad required.
"The client got an insane ad for a great rate on a blistering timeline, and I got paid really well, while working in my underwear," he said.
This is the space where I usually try an AI tool. This week, though, I'm featuring an experience shared by a Tech Memo reader who got in touch after last week's installment about AI coding services such as Replit, Cursor, and Bolt.new.
This person worked at Google for more than two decades, so they know their software! They recently tried out Replit after Google CEO Sundar Pichai said he'd been messing around with the tool.
"Like Sundar, I've also tried Replit to test out a cat purring app I had (lol). I poked around on some other options, but I liked Replit because it took the query and really built an app for you (even on the free test version). So based on a query alone and answering some questions (e.g., do you want people to be able to log in and save their cat?), you had an app. And it would work! You could launch it if you were really interested and happy with it.
"The limitations came with fine-tuning the app from there, as it seemed to get confused (and use up your credits) if you asked it for changes, e.g., change how the cat looked. It also was a pretty rough product; ultimately, if you wanted more than a proof of concept, you'd probably want to delve into the software code and change things yourself versus relying on queries.
"Over time, I think they'll fine-tune these things and I love how it makes it easy to prototype ideas. It really lowers the upfront cost of testing ideas."
Thank you, dear reader, for getting in touch. I have also been messing around with an AI coding tool. I chose Bolt.new, partly because I recently met the cofounder of the startup behind this service, StackBlitz's Eric Simons (another Tech Memo reader, btw). Next week, I'll share some thoughts about Bolt. I've been building something with my daughter Tessa and we can't wait to show you!
WWDC was a bit of a bust. Apple's Liquid Glass design overhaul was criticized on social media because it makes some iPhone notifications hard to read. A few jokers on X even shared a screenshot of YouTube's play button obstructing the "Gl" in a thumbnail for an Apple Liquid Glass promo. Need I say more?
The more serious question hanging over this year's WWDC was not answered. When will Siri get the AI upgrade it desperately needs? Software chief Craig Federighi delivered the bad news: It's still not ready. That knocked roughly $75 billion off Apple's market value. The stock recovered a bit, but it's still badly lagging behind rivals this month.
Google, OpenAI, and other tech companies are launching powerful new AI models and products at a breakneck pace. Apple is running out of time to prove it's a real player in this important field. Analyst Dan Ives is usually bullish, but even he's concerned. "They have a tight window to figure this out," Ives wrote, after calling this year's WWDC a "yawner."
AI is complex, expensive, and takes a long time to get right. Apple was late to start building the needed foundational technology, such as data centers, training data pipelines, and homegrown AI chips. By contrast, Google began laying its AI groundwork more than a decade ago. It bought DeepMind in 2014, and this AI lab shapes Google's models in profound ways today.
When I was at Google I/O last month, one or two insiders whispered a phrase. They cautiously described an "intelligence gap" that could open up between the iPhone and other smartphones. Many Android phones already feature Google's Gemini chatbot, which is far more capable than Siri. If Apple's AI upgrade takes too long, this intelligence gap could widen so much that some iPhone users might consider switching.
At I/O, these insiders only whispered this idea. That's because it will take something pretty dramatic to get people to give up their iPhones. This device has become a utility that we can't live without, even for the few days (weeks?) it might take to get used to an Android replacement.
Still, if Apple doesn't get its AI house in order soon, this intelligence gap will keep growing, and things could get really siri-ous.
TensorWave, a leader in AMD-powered AI infrastructure solutions, today announced the deployment of AMD Instinct MI355X GPUs in its high-performance cloud platform.