Google can now generate a fake AI podcast of your search results

NotebookLM is undoubtedly one of Google's best implementations of generative AI technology, giving you the ability to explore documents and notes with a Gemini AI model. Last year, Google added the ability to generate so-called "audio overviews" of your source material in NotebookLM. Now, Google has brought those fake AI podcasts to search results as a test. Instead of clicking links or reading the AI Overview, you can have two nonexistent people tell you what the results say.

This feature is not currently rolling out widely—it's available in Search Labs, which means you have to manually enable it. Anyone can opt in to the new Audio Overview search experience, though. If you join the test, you'll quickly see the embedded player in Google search results. However, it's not at the top with the usual block of AI-generated text. Instead, you'll see it after the first few search results, below the "People also ask" knowledge graph section.

Read full article

  •  

Another one for the graveyard: Google to kill Instant Apps in December

Apps used to be the measure of a mobile platform's worth, with Apple and Google dueling over who could list the most items in their respective stores. Today, the numbers don't matter as much—there are enough apps, and Google's attempt to replace parts of the web with apps is going away. Instant Apps, a feature that debuted in 2017, will reportedly be scrapped in December 2025. In its place, you'll just have to use the Internet.

Developer Leon Omelan spotted this news buried in the latest Canary release of Android Studio (confirmed by Android Authority). The development client includes a warning that Instant Apps is headed for the Google graveyard. Here's the full notice, which is the only official confirmation from Google at this time.

Google's latest Android Studio build announces the end of Instant Apps. Credit: Android Authority

Instant Apps wasn't a bad idea—it was just too late. Early in the mobile era, browsers and websites were sluggish on phones, making apps a much better option. Installing them for every site that offered them could be a pain, though. Google's Instant Apps tried to smooth over the experience by delivering an app live, without installation. When developers implemented the feature, clicking a link to their website could open the Android app instead, in about the same time it took to load a webpage. Google later expanded the feature to games.

Read full article

  •  

AI Overviews hallucinates that Airbus, not Boeing, was involved in fatal Air India crash

When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google's AI search results are spreading misinformation claiming the incident involved an Airbus plane—it was actually a Boeing 787.

Travelers are more attuned to the airliner models these days after a spate of crashes involving Boeing's 737 lineup several years ago. Searches for airline disasters are sure to skyrocket in the coming days, with reports that more than 200 passengers and crew lost their lives in the Air India Flight 171 crash. The way generative AI operates means some people searching for details may get the wrong impression from Google's results page.

Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We've run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup of both Airbus and Boeing. It's a mess.

Read full article

  •  

Google left months-old dark mode bug in Android 16, fix planned for next Pixel Drop

Google's Pixel phones got a big update this week with the release of Android 16 and a batch of Pixel Drop features. Pixels now have enhanced security, new contact features, and improved button navigation. However, some of the most interesting features, like desktop windowing and Material 3 Expressive, are coming later. Another thing that's coming later, it seems, is a fix for an annoying bug Google introduced a few months back.

Google broke the system dark mode schedule in its March Pixel update and did not address it in time for Android 16. The company confirms a fix is coming, though.

The system-level dark theme arrived in Android 10 to offer a less eye-searing option, which is particularly handy in dark environments. It took a while for even Google's apps to fully adopt this feature, but support is solid five years later. Google even offers a scheduling feature to switch between light and dark mode at custom times or based on sunrise and sunset. However, the scheduling feature was busted in the March update.

Read full article

  •  

Android 16 is here, but its big redesign isn’t ready

Google rolled out a bunch of new features with Android 16 on Tuesday, but the company appears to be saving its big Material 3 Expressive redesign for a future update. The update doesn’t feature the design language’s revamped elements, and a source tells Android Authority’s Mishaal Rahman that Google is planning to launch the new look on September 3rd, 2025, instead.

With Android 16, Google is starting to roll out support for Live Updates with progress-centric notifications and enhanced settings for users with hearing aids. The updates are coming to Pixel devices first, but according to Google, Android users will have to wait for another update to see Live Updates “fully realized.”

Google officially took the wraps off Material 3 Expressive last month following a leak. The design language features updated icon shapes, type styles, and color palettes, with “more natural, springy animations” across the Android interface. You can still check out some Material 3 Expressive updates in the Android 16 QPR1 beta that’s available now, and Rahman notes that Google plans to launch more design updates in Android 16 QPR1 Beta 2.

Google is expected to include Android’s desktop mode in a September launch as well. The new mode, which builds on Samsung’s DeX platform, optimizes apps and content for large-screen devices. It will allow you to resize multiple app windows across your screens, as well as connect phones and tablets to external displays for a desktop-like experience. Users with a Pixel 8 and up can try out these features in the Android 16 beta, but the rest of us will likely have to wait a few more months.

  •  

Google is offering employee buyouts in Search and other orgs

Google is starting to offer buyouts to US-based employees in its sprawling Search organization, along with other divisions like marketing, research, and core engineering, according to multiple employees familiar with the matter.

The buyouts, which Google is referring to as a "voluntary exit program," are currently not being offered to employees in DeepMind, Google Cloud, YouTube, or Google's central ad sales organization. Employees in Google's platforms and services group, which includes Android and the Pixel line of devices, were offered buyouts earlier this year before the company enacted layoffs. It's unclear if more layoffs will follow this week's buyout announcement. Employees in some orgs are being offered a minimum of 14 weeks' pay with a July 1st enrollment deadline.

Other parts of Google, including YouTube, are also requiring US employees within a 50-mile radius of an office to return to work at least three days a week by September, or be laid off with severance.

In an internal memo I obtained, Nick Fox, the head of Google's wider "Knowledge and Information" group that includes Search, called the buyout program a "supportive exit path for those of you who don't feel a …

Read the full story at The Verge.

  •  

Sundar Pichai says AI is making Google engineers 10% more productive. Here's how it measures that.

Google has its own internal AI tools to help engineers be more productive. Credit: Getty Images

  • Google CEO Sundar Pichai said the company is tracking how AI makes its engineers more productive.
  • During the "Lex Fridman Podcast," Pichai estimated a 10% increase in engineering capacity.
  • Separately, Google and Microsoft have publicly shared how much of their code is being generated by AI.

Google is tracking how AI is making its engineers more productive — and has developed a specific way to measure it.

Speaking on an episode of the "Lex Fridman Podcast" that aired last week, Google CEO Sundar Pichai said that the company was looking closely at how artificial intelligence was boosting productivity among its software developers.

"The most important metric, and we carefully measure it, is how much has our engineering velocity increased as a company due to AI?" he said. The company estimates that it's so far seen a 10% boost, Pichai said.

A Google spokesperson clarified to Business Insider that the company tracks this by measuring the increase in engineering capacity created, in hours per week, from the use of AI-powered tools.

Put simply, it's a measurement of how much extra time engineers are getting back thanks to AI.
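
As a rough back-of-the-envelope sketch of that metric (a minimal illustration only; the 40-hour baseline is an assumption, not a figure Google has shared), a 10% capacity gain works out to a few reclaimed hours per engineer each week:

    # Hypothetical illustration of the "hours per week" capacity metric.
    # The 40-hour baseline is assumed; Google has not published its inputs.
    base_hours_per_week = 40
    capacity_gain = 0.10  # Pichai's estimated 10% boost in engineering capacity
    extra_hours = base_hours_per_week * capacity_gain
    print(f"Extra hours per engineer per week: {extra_hours:.1f}")  # -> 4.0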

Whether Google expects that 10% number to keep increasing, Pichai didn't say. However, he said he expects agentic capabilities — where AI can take actions and make decisions more autonomously — will unlock the "next big wave".

Google has its own internal tools to help engineers code. Last year, the company launched an internal coding copilot named "Goose," trained on 25 years of Google's technical history, Business Insider previously reported.

Even with those AI gains, Pichai said during the podcast that Google plans to hire more engineers next year. "The opportunity space of what we can do is expanding too," he said, adding that he hopes AI removes some of the grunt work and frees up time for more enjoyable aspects of engineering.

Separately, the company is tracking the amount of code that is being generated by AI within Google's walls — a number that is apparently increasing.

Pichai said during Alphabet's most recent earnings call that more than 30% of the company's new code is generated by AI, up from an estimated 25% in October.

Google isn't the only one. Speaking at London Tech Week on Monday, Microsoft UK CEO Darren Hardman said GitHub Copilot, the company's coding assistant, is now writing 40% of Microsoft's code, "enabling us to launch more products in the last 12 months than we did in the previous three years."

He added: "It isn't just about speed."

In April, Meta CEO Mark Zuckerberg predicted AI could handle half of Meta's developer work within a year.

Additional reporting by Effie Webb.


Read the original article on Business Insider

  •  

The future of AI will be governed by protocols no one has agreed on yet

As new questions arise about how AI will communicate with humans — and with other AI — new protocols are emerging. Credit: gremlin/Getty Images

  • AI protocols are evolving to address interactions between humans and AI, and among AI systems.
  • New AI protocols aim to manage non-deterministic behavior, crucial for future AI integration.
  • "I think we will see a lot of new protocols in the age of AI," an executive at World told BI.

The tech industry, much like everything else in the world, abides by certain rules.

With the boom in personal computing came USB, a standard for transferring data between devices. With the rise of the internet came IP addresses, numerical labels that identify every device online. With the advent of email came SMTP, a framework for routing email across the internet.

These are protocols — the invisible scaffolding of the digital realm — and with every technological shift, new ones emerge to govern how things communicate, interact, and operate.

As the world enters an era shaped by AI, it will need to draw up new ones. But AI goes beyond the usual parameters of screens and code. It forces developers to rethink fundamental questions about how technological systems interact across the virtual and physical worlds.

How will humans and AI coexist? How will AI systems engage with each other? And how will we define the protocols that manage a new age of intelligent systems?

Across the industry, startups and tech giants alike are busy developing protocols to answer these questions. Some govern the present in which humans still largely control AI models. Others are building for a future in which AI has taken over a significant share of human labor.

"Protocols are going to be this kind of standardized way of processing non-deterministic information," Antoni Gmitruk, the chief technology officer of Golf, which helps clients deploy remote servers aligned with Anthropic's Model Context Protocol, told BI. Agents, and AI in general, are "inherently non-deterministic in terms of what they do and how they behave."

When AI behavior is difficult to predict, the best response is to imagine possibilities and test them through hypothetical scenarios.

Here are a few that call for clear protocols.

Scenario 1: Humans and AI, a dialogue of equals

Games are one way to determine which protocols strike the right balance of power between AI and humans.

In late 2024, a group of young cryptography experts launched Freysa, an AI agent that invites human users to manipulate it. The rules are unconventional: persuade Freysa to fall in love with you or to concede its funds, and the prize is yours. The prize pool grows with each failed attempt in a standoff between human intuition and machine logic.

Freysa has caught the attention of big names in the tech industry, from Elon Musk, who called one of its games "interesting," to veteran venture capitalist Marc Andreessen.

"The core technical thing we've done is enabled her to have her own private keys inside a trusted enclave," said one of the architects of Freysa, who spoke under the condition of anonymity to BI in a January interview.

Secure enclaves are not new in the tech industry. They're used by companies from AWS to Microsoft as an extra layer of security to isolate sensitive data.

In Freysa's case, the architect said they represent the first step toward creating a "sovereign agent." He defined that as an agent that can control its own private keys, access money, and evolve autonomously — the type of agent that will likely become ubiquitous.

"Why are we doing it at this time? We're entering a phase where AI is getting just good enough that you can see the future, which is AI basically replacing your work, my work, all our work, and becoming economically productive as autonomous entities," the architect said.

In this phase, they said Freysa helps answer a core question: "What does human involvement look like? And how do you have human co-governance over agents at scale?"

In May, The Block, a crypto news site, revealed that the company behind Freysa is Eternis AI, which describes itself as an "applied AI lab focused on enabling digital twins for everyone, multi-agent coordination, and sovereign agent systems." The company has raised $30 million from investors, including Coinbase Ventures. Its co-founders are Srikar Varadaraj, Pratyush Ranjan Tiwari, Ken Li, and Augustinas Malinauskas.

Scenario 2: To the current architects of intelligence

Freysa establishes protocols in anticipation of a hypothetical future when humans and AI agents interact with similar levels of autonomy. The world, however, also needs to set rules for the present, where AI remains a product of human design and intention.

AI typically runs on the web and builds on existing protocols developed long before it, explained Davi Ottenheimer, a cybersecurity strategist who studies the intersection of technology, ethics, and human behavior, and is president of security consultancy flyingpenguin. "But it adds in this new element of intelligence, which is reasoning," he said, and we don't yet have protocols for reasoning.

"I'm seeing this sort of hinted at in all of the news. Oh, they scanned every book that's ever been written and never asked if they could. Well, there was no protocol that said you can't scan that, right?" he said.

There might not be protocols, but there are laws.

OpenAI is facing a copyright lawsuit from the Authors Guild for training its models on data from "more than 100,000 published books" and then deleting the datasets. Meta considered buying the publishing house Simon & Schuster outright to gain access to published books. Tech giants have also resorted to tapping almost all of the consumer data available online, from the content of public Google Docs to the relics of social media sites like Myspace and Friendster, to train their AI models.

Ottenheimer compared the current dash for data to the creation of ImageNet — the visual database that propelled computer vision, built by Mechanical Turk workers who scoured the internet for content.

"They did a bunch of stuff that a protocol would have eliminated," he said.

Scenario 3: How to talk to each other

As we move closer to a future where artificial general intelligence is a reality, we'll need protocols for how intelligent systems — from foundation models to agents — communicate with each other and the broader world.

The leading AI companies have already launched new ones to pave the way. Anthropic, the maker of Claude, launched the Model Context Protocol, or MCP, in November 2024. Anthropic describes it as a "universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol."

In April, Google launched Agent2Agent, a protocol that will "allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications."

These build on existing AI protocols, but address new challenges of scaling and interoperability that have become critical to AI adoption.
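
To make that concrete, here is a minimal sketch of what exposing a data source over MCP can look like, assuming Anthropic's official Python SDK and its FastMCP helper; the server name, tool, and inventory data below are hypothetical placeholders rather than any real deployment:

    # Minimal MCP server sketch (assumes the official "mcp" Python SDK).
    # It exposes one tool that an MCP-capable client, such as a Claude-based
    # agent, could discover and call over the protocol.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("inventory-demo")  # hypothetical server name

    @mcp.tool()
    def get_stock_level(sku: str) -> int:
        """Return the stock level for a product SKU (toy data for illustration)."""
        fake_inventory = {"A-100": 42, "B-200": 7}
        return fake_inventory.get(sku, 0)

    if __name__ == "__main__":
        mcp.run()  # serve over stdio so a local MCP client can connect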

Managing agents' behavior, Gmitruk said, is the "middle step before we unleash the full power of AGI and let them run around the world freely." When we arrive at that point, he said, agents will no longer communicate through APIs but in natural language. They'll have unique identities, jobs even, and will need to be verified.

"How do we enable agents to communicate between each other, and not just being computer programs running somewhere on the server, but actually being some sort of existing entity that has its history, that has its kind of goals," Gmitruk said.

It's still too early to set standards for agent-to-agent communication, Gmitruk said. Earlier this year, he and his team launched a company focused on building an authentication protocol for agents, but they have since pivoted.

"It was too early for agent-to-agent authentication," he told BI over LinkedIn. "Our overall vision is still the same -> there needs to be agent-native access to the conventional internet, but we just doubled down on MCP as this is more relevant at the stage of agents we're at."

Does everything need a protocol?

Definitely not. The AI boom marks a turning point, reviving debates over how knowledge is shared and monetized.

McKinsey & Company calls it an "inflection point" in the fourth industrial revolution — a wave of change that it says began in the mid-2010s and spans the current era of "connectivity, advanced analytics, automation, and advanced-manufacturing technology."

Moments like this raise a key question: How much innovation belongs to the public and how much to the market? Nowhere is that clearer than in the AI world's debate between the value of open-source and closed models.

"I think we will see a lot of new protocols in the age of AI," Tiago Sada, the chief product officer at Tools for Humanity, the company building the technology behind Sam Altman's World. However, "I don't think everything should be a protocol."

World is a protocol designed for a future in which humans will need to verify their identity at every turn. Sada said the goal of any protocol "should be like this open thing, like this open infrastructure that anyone can use," and is free from censorship or influence.

At the same time, "one of the downsides of protocols is that they're sometimes slower to move," he said. "When's the last time email got a new feature? Or the internet? Protocols are open and inclusive, but they can be harder to monetize and innovate on," he said. "So in AI, yes — we'll see some things built as protocols, but a lot will still just be products."

Read the original article on Business Insider

  •  

Google’s new Gemini 2.5 Pro release aims to fix past “regressions” in the model

It seems like hardly a day goes by anymore without a new version of Google's Gemini AI landing, and sure enough, Google is rolling out a major update to its most powerful 2.5 Pro model. This release is aimed at fixing some problems that cropped up in an earlier Gemini Pro update, and the word is, this version will become a stable release that comes to the Gemini app for everyone to use.

The previous Gemini 2.5 Pro release, known as the I/O Edition, or simply 05-06, was focused on coding upgrades. Google claims the new version is even better at generating code, with a new high score of 82.2 percent in the Aider Polyglot test. That beats the best from OpenAI, Anthropic, and DeepSeek by a comfortable margin.

While the general-purpose Gemini 2.5 Flash has left preview, the Pro version is lagging behind. In fact, the last several updates have attracted some valid criticism of 2.5 Pro's performance outside of coding tasks since the big 03-25 update. Google's Logan Kilpatrick says the team has taken that feedback to heart and that the new model "closes [the] gap on 03-25 regressions." For example, users will supposedly see more creativity with better formatting of responses.

Read full article

  •  

Google’s NotebookLM now lets you share your notebook — and AI podcasts — publicly

Google’s AI-powered notetaking app, NotebookLM, now lets you share your notebooks with classmates, coworkers, or students using a public link. Though viewers can’t edit what’s in your notebook, they can still use it to ask questions and interact with AI-generated content like audio overviews, briefings, and FAQs.

First launched as an experiment in 2023, NotebookLM has become a breakout hit for Google. The app is designed to help you understand material from a variety of sources, such as notes, documents, presentation slides, and even YouTube videos. It can provide AI-generated summaries of the content, generate AI podcast-style discussions, “chat” with you about the material, and more. Google launched a mobile NotebookLM app last month.

The steps for making your notebook publicly available are pretty similar to how you share something in Google Drive, Docs, Sheets, and Slides. You just select the Share button in the top-right corner of the notebook, then change the access to “Anyone with a link.” From there, hit the “Copy link” button and paste the notebook link into a text, an email, or even a social media post if you want more people to interact with the information.

Google also lets you share your notebooks with others by entering their email address. Unlike with public link-sharing, you can give individual users the ability to edit your notebook. You can share audio overviews from within the Gemini app as well.

  •  

Google settles shareholder lawsuit, will spend $500M on being less evil

It has become a common refrain during Google's antitrust saga: What happened to "don't be evil?" Google's unofficial motto has haunted it as it has grown ever larger, but a shareholder lawsuit sought to rein in some of the company's excesses. And it might be working. The plaintiffs in the case have reached a settlement with Google parent company Alphabet, which will spend a boatload of cash on "comprehensive" reforms. The goal is to steer Google away from the kind of anticompetitive practices that got it in hot water.

Under the terms of the settlement, obtained by Bloomberg Law, Alphabet will spend $500 million over the next 10 years on systematic reforms. The company will have to form a board-level committee devoted to overseeing the company's regulatory compliance and antitrust risk, a rarity for US firms. This group will report directly to CEO Sundar Pichai. There will also be reforms at other levels of the company that allow employees to identify potential legal pitfalls before they affect the company. Google has also agreed to preserve communications. Google's propensity to use auto-deleting chats drew condemnation from several judges overseeing its antitrust cases.

The agreement still needs approval from US District Judge Rita Lin in San Francisco, but that's mainly a formality at this point. Naturally, Alphabet does not admit to any wrongdoing under the terms of the settlement, but it may have to pay tens of millions in legal fees on top of the promised $500 million investment.

Read full article

  •  

Samsung could drop Google Gemini in favor of Perplexity for Galaxy S26

Every smartphone maker is racing to find a way to put AI in your pocket, but no one has cracked the code yet. Samsung was an early supporter of Google's Gemini AI, which has largely supplanted its little-used Bixby assistant. However, a new report claims Samsung is planning a big AI shakeup by partnering with Perplexity on the Galaxy S26.

Perplexity pitches itself as an AI-powered search service, running on the same generative AI technology behind ChatGPT, Gemini, and all the others. However, it cites its sources around the web more prominently than a pure chatbot. Perplexity made waves during the Google search antitrust trial when executive Dmitry Shevelenko testified that Google blocked Motorola from using Perplexity on its 2024 phones. The company got its wish this year, though, with Perplexity finding a place on 2025 Razr phones.

A report from Bloomberg says Samsung will be the next to leverage Perplexity's AI. The companies are apparently close to signing a deal that will make this AI model a core part of the Galaxy S26 lineup. Motorola uses Perplexity for search functionality inside its Moto AI system, but the Samsung deal would be more comprehensive.

Read full article
