Normal view

Received today — 26 April 2025

YouTube’s AI Overviews want to make search results smarter

25 April 2025 at 21:27
YouTube is experimenting with a new AI feature that could change how people find videos. Here’s the kicker: not everyone is going to love it. The platform has started rolling out AI-generated video summaries directly in search results, but only for a limited group of YouTube Premium subscribers in the U.S. For now, the AI […]

Google announces 1st and 2nd gen Nest Thermostats will lose support in October 2025

25 April 2025 at 18:58

Google's oldest smart thermostats have an expiration date. The company has announced that the first and second generation Nest Learning Thermostats will lose support in October 2025, disabling most of the connected features. Google is offering some compensation for anyone still using these devices, but there's no Google upgrade for European users. Google is also discontinuing its only European model, and it's not planning to release another.

Both affected North American thermostats predate Google's ownership of the company, which it acquired in 2014. Nest released the original Learning Thermostat to almost universal praise in 2011, with the sequel arriving a year later. Google's second-gen Euro unit launched in 2014. Since launch, all these devices have been getting regular software updates and have migrated across multiple app redesigns. However, all good things must come to an end.

As Google points out, these products have had a long life, and they're not being rendered totally inoperable. Come October 25, 2025, these devices will no longer receive software updates or connect to Google's cloud services. That means you won't be able to control them from the Google Home app or via Assistant (or more likely Gemini by that point). The devices will still work as a regular dumb thermostat to control temperature, and scheduling will remain accessible from the thermostat's screen.

Read full article

Comments

© Google

Received yesterday — 25 April 2025

Chromebooks could get a boost from Snapdragon X Plus chips soon

25 April 2025 at 17:58

Chromebooks on Arm processors are about to get a big boost as developers prepare new versions of ChromeOS with support for Qualcomm’s latest Snapdragon chips, reports Chrome Unboxed.

According to a new developer commit message posted in the Chromium project Gerrit code review, the SoCID for a Qualcomm X1P42100, aka the Snapdragon X Plus, is now being included in the Chromium repository, which likely means active development of Chromebooks with the chip is underway.

The Snapdragon X Plus isn’t Qualcomm’s flagship “Elite” processor used in some of the top Windows 11 Arm laptops, but it is capable of the same 45 TOPS of AI performance from its NPU.

Qualcomm’s previous Arm-powered Chromebooks haven’t exactly been powerhouses. The 2021 Acer Chromebook Spin 513 that we’ve tested has great battery life, but a very slow Snapdragon 7c chip powers it. And although the 7c Gen 2 version was faster in devices like the Lenovo Chromebook Duet 3, Qualcomm ended up not bringing the Gen 3 to Chromebooks. That left Chromebooks with chip options from MediaTek and Intel, the latter of which hasn’t been known for excellent battery life.

Google is killing software support for early Nest Thermostats

25 April 2025 at 17:00

Google has just announced that it’s ending software updates for the first-generation Nest Learning Thermostat, released in 2011, and the second-gen model that came a year later. This decision also affects the European Nest Learning Thermostat from 2014. “You will no longer be able to control them remotely from your phone or with
Google Assistant, but can still adjust the temperature and modify schedules directly on the thermostat,“ the company wrote in a Friday blog post.

The cutoff date for software updates and general support within the Google Home and Nest apps is October 25th.

No more controlling these “smart” thermostats from a phone.

In other significant news, Google is flatly stating that it has no plans to release additional Nest thermostats in Europe. “Heating systems in Europe are unique and have a variety of hardware and software requirements that make it challenging to build for the diverse set of homes,“ the company said. “The Nest Learning Thermostat (3rd gen, 2015) and Nest Thermostat E (2018) will continue to be sold in Europe while current supplies last.”

Losing the ability to control these smart thermostats from a phone will inevitably frustrate customers who’ve had Nest hardware in their home for many years now. Google’s not breaking their core functionality, but a lot of the appeal and convenience will disappear as software support winds down. The early Nest Learning Thermostats can at least be used locally without Wi-Fi, which isn’t true of newer models.

Still, this type of phase-out is a very real fear tied to smart home devices as companies put screens into more and more appliances. Is 14 years a reasonable lifespan for the these gadgets before their smarts fade away? There’s no indication that Google plans to open source the hardware.

In a clear attempt to ease customer anger, Google is offering a $130 discount on the fourth-gen Nest Learning Thermostat in the US, $160 off the same device in Canada, and 50 percent savings on the Tado Smart Thermostat X in Europe since the Nest lineup will soon be gone.

The original Nest thermostats were released while the company was an independent brand under the leadership of former Apple executive Tony Fadell. Google acquired Nest in 2014 for $3.2 billion.

Google has a 'You can't lick a badger twice' problem

25 April 2025 at 17:30
Magnifying glass with "meaning" highlighted in search bar

Getty Images; Alyssa Powell/BI

  • Google's AI answers will give you a definition of any made-up saying. I tried: "You can't lick a badger twice."
  • This is exactly the kind of thing AI should be really good at — explaining language use. But something's off.
  • Is it a hallucination, or AI just being too eager to please?

What does "You can't lick a badger twice" mean?

Like many English sayings — "A bird in the hand is worth two in the bush," "A watched pot never boils" — it isn't even true. Frankly, nothing stops you from licking a badger as often as you'd like, although I don't recommend it.

(I'm sure Business Insider's lawyers would like me to insist you exercise caution when encountering wildlife, and that we cannot be held liable for any rabies infections.)

If the phrase doesn't ring a bell to you, it's because, unlike "rings a bell," it is not actually a genuine saying — or idiom — in the English language.

But Google's AI Overview sure thinks it's real, and will happily give you a detailed answer of what the phrase means.

Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine

[image or embed]

— Greg Jenner (@gregjenner.bsky.social) April 23, 2025 at 6:15 AM

Greg Jenner, a British historian and podcaster, saw people talking about this phenomenon on Threads and wanted to try it himself with a made-up idiom. The badger phrase "just popped into my head," he told Business Insider. His Google search spit out an answer that seemed reasonable.

I wanted to try this myself, so I made up a few fake phrases — like "You can't fit a duck in a pencil" — and added "meaning" onto my search query.

Google took me seriously and explained:

you can't fit a fuck in a pencil google search
"You can't fit a duck in a pencil."

Business Insider

So I tried some others, like "The Road is full of salsa." (This one I'd like to see being used in real life, personally.)

A Google spokeswoman told me, basically, that its AI systems are trying their best to give you what you want — but that when people purposely try to play games, sometimes the AI can't exactly keep up.

"When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," spokeswoman Meghann Farnsworth said.

"This is true of Search overall — and in some cases, AI Overviews will also trigger in an effort to provide helpful context."

the road is full of salsa meaning
"The road is full of salsa."

Business Insider

Basically, AI Overviews aren't perfect (duh), and these fake idioms are "false premise" searches that are purposely intended to trip it up (fair enough).

Google does try to limit the AI Overviews from answering things that are "data voids," i.e., when there are no good web results to a question.

But clearly, it doesn't always work.

I have some ideas about what's going on here — some of it is good and useful, some of it isn't. As one might even say, it's a mixed bag.

But first, one more made-up phrase that Google tried hard to find meaning for: "Don't kiss the doorknob." Says Google's AI Overview:

don't kiss the doorknob google search
"Don't kiss the doorknob."

Business Insider

So what's going on here?

The Good:

English is full of idioms like "kick the bucket" or "piece of cake." These can be confusing if English isn't your first language (and frankly, they're often confusing for native speakers, too). My case in point is that the phrase is commonly misstated as "case and point."

So it makes lots of sense that people would often be Googling to understand the meaning of a phrase they came across that they don't understand. And in theory, this is a great use for the AI Overview answers: You want to see the simply-stated answer right away, not click on a link.

The Bad:

AI should be really good at this particular thing. LLMs are trained on vast amounts of the English written language — reams of books, websites, YouTube transcriptions, etc., so being able to recognize idioms is something they should be very good at doing.

The fact that it's making mistakes here is not ideal. What's going wrong that Google's AI Overview isn't giving the real answer, which is "That isn't a phrase, you idiot"? Is it just a classic AI hallucination?

The ugly:

Comparatively, ChatGPT gave a better answer when I asked it about the badger phrase. It told me that it was not a standard English idiom, even though it had the vaguely folksy sound of one. Then it offered, "If we treat it like a real idiom (for fun)," and gave a possible definition.

So this isn't a problem across all AI — it seems to be a Google problem.

badger
You can't lick a badger twice?

REUTERS/Russell Cheyne

This is somewhat different from last year's Google AI Overview answers fiasco where the results pulled in information from places like Reddit without considering sarcasm — remember when it suggested people should eat rocks for minerals or put glue in their pizza (someone on Reddit had once joked about glue in pizza, which seems to be where it drew from).

This is all very low-stakes and silly fun, making up fake phrases, but it speaks to the bigger, uglier problems with AI becoming more and more enmeshed in how we use the internet. It means Google searches are somehow worse, and since people start to rely on these more and more, that bad information is just getting out there into the world and taken as fact.

Sure, AI search will get better and more accurate, but what growing pains will we endure while we're in this middle phase of a kinda wonky, kinda garbage-y, slop-filled AI internet?

AI is here, it's already changing our lives. There's no going back, the horse has left the barn. Or as they say, you can't lick a badger twice.

Read the original article on Business Insider

Google’s AI search numbers are growing, and that’s by design

25 April 2025 at 14:34
Google started testing AI-summarized results in Google Search, AI Overviews, two years ago, and continues to expand the feature to new regions and languages. By the company’s estimation, it’s been a big success. AI Overviews is now used by more than 1.5 billion users monthly across over 100 countries. AI Overviews compiles results from around […]
Received before yesterday

Google reveals sky-high Gemini usage numbers in antitrust case

23 April 2025 at 18:05

You may not use Gemini or other AI products, but many people do, and their ranks are growing. During day three of Google's antitrust remedies trial, the company presented a slide showing that Gemini reached 350 million monthly active users as of March 2025. That's a massive increase from last year, showing that Google is beginning to gain traction among competing chatbots, but Google's estimation of ChatGPT's traffic shows it still has a long climb ahead of it.

The slide was presented during the testimony of Sissie Hsiao, who until recently was leading Google's Gemini efforts. She was replaced earlier this month by Josh Woodward, who also runs Google Labs. The slide listed Gemini's 350 million monthly users, along with daily traffic of 35 million users.

These numbers represent a huge increase for Gemini, which languished in the tens of millions of monthly users late last year. Gemini's daily user count at the time was a mere 9 million, according to Google. Since then, Google has released its Gemini 2.0 and 2.5 models, both of which have shown demonstrable improvements over the previous iterations. It has also begun adding Gemini features to more parts of the Google ecosystem, even though some of those integrations can be more frustrating than useful.

Read full article

Comments

© Ryan Whitwam

OpenAI wants to buy Chrome and make it an “AI-first” experience

22 April 2025 at 21:55

The remedy phase of Google's antitrust trial is underway, with the government angling to realign Google's business after the company was ruled a search monopolist. The Department of Justice is seeking a plethora of penalties, but perhaps none as severe as forcing Google to sell Chrome. But who would buy it? An OpenAI executive says his employer would be interested.

Among the DOJ's witnesses on the second day of the trial was Nick Turley, head of product for ChatGPT at OpenAI. He wasn't there to talk about Chrome exclusively—the government's proposed remedies also include forcing Google to share its search index with competitors.

OpenAI is in bed with Microsoft, but Bing's search data wasn't cutting it, Turley suggested (without naming Microsoft). "We believe having multiple partners, and in particular Google's API, would enable us to provide a better product to users," OpenAI told Google in an email revealed at trial. However, Google turned OpenAI down because it believed the deal would harm its lead in search. The companies have no ongoing partnership today, but Turley noted that forcing Google to license its search data would restore competition.

Read full article

Comments

© Getty Images | Vincent Feuray

Google won’t ditch third-party cookies in Chrome after all

22 April 2025 at 19:36

Google has made an unusual announcement about browser cookies, but it may not come as much of a surprise given recent events. After years spent tinkering with the Privacy Sandbox, Google has essentially called it quits. According to Anthony Chavez, VP of the company's Privacy Sandbox initiative, Google won't be rolling out a planned feature to help users disable third-party cookies. Instead, cookie support will remain in place as is, possibly forever.

Beginning in 2019, Google embarked on an effort under the Privacy Sandbox banner aimed at developing a new way to target ads that could preserve a modicum of user privacy. This approach included doing away with third-party cookies, small snippets of code that advertisers use to follow users around the web.

Google struggled to find a solution that pleased everyone. Its initial proposal for FLoC (Federated Learning of Cohorts) was widely derided as hardly any better than cookies. Google then moved on to the Topics API, but the company's plans to kill cookies have been delayed repeatedly since 2022.

Read full article

Comments

© Getty Images

OpenAI releases new simulated reasoning models with full tool access

16 April 2025 at 22:21

On Wednesday, OpenAI announced the release of two new models—o3 and o4-mini—that combine simulated reasoning capabilities with access to functions like web browsing and coding. These models mark the first time OpenAI's reasoning-focused models can use every ChatGPT tool simultaneously, including visual analysis and image generation.

OpenAI announced o3 in December, and until now, only less capable derivative models named "o3-mini" and "03-mini-high" have been available. However, the new models replace their predecessors—o1 and o3-mini.

OpenAI is rolling out access today for ChatGPT Plus, Pro, and Team users, with Enterprise and Edu customers gaining access next week. Free users can try o4-mini by selecting the "Think" option before submitting queries. OpenAI CEO Sam Altman tweeted that "we expect to release o3-pro to the pro tier in a few weeks."

Read full article

Comments

© Floriana via Getty Images

Google suspended 39.2 million malicious advertisers in 2024 thanks to AI

16 April 2025 at 16:58

Google may have finally found an application of large language models (LLMs) that even AI skeptics can get behind. The company just released its 2024 Ads Safety report, confirming that it used a collection of newly upgraded AI models to scan for bad ads. The result is a huge increase in suspended spammer and scammer accounts, with fewer malicious ads in front of your eyeballs.

While stressing that it was not asleep at the switch in past years, Google reports that it deployed more than 50 enhanced LLMs to help enforce its ad policy in 2024. Some 97 percent of Google's advertising enforcement involved these AI models, which reportedly require even less data to make a determination. Therefore, it's feasible to tackle rapidly evolving scam tactics.

Google says that its efforts in 2024 resulted in 39.2 million US ad accounts being suspended for fraudulent activities. That's over three times more than the number of suspended accounts in 2023 (12.7 million). The factors that trigger a suspension usually include ad network abuse, improper use of personalization data, false medical claims, trademark infringement, or a mix of violations.

Read full article

Comments

© Google

Researchers claim breakthrough in fight against AI’s frustrating security hole

16 April 2025 at 11:15

In the AI world, a vulnerability called a "prompt injection" has haunted developers since chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerability—the digital equivalent of whispering secret instructions to override a system's intended behavior—no one has found a reliable solution. Until now, perhaps.

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

The new paper grounds CaMeL's design in established software security principles like Control Flow Integrity (CFI), Access Control, and Information Flow Control (IFC), adapting decades of security engineering wisdom to the challenges of LLMs.

Read full article

Comments

© Aman Verma via Getty Images

Google adds Veo 2 video generation to Gemini app

15 April 2025 at 19:43

Google has announced that yet another AI model is coming to Gemini, but this time, it's more than a chatbot. The company's Veo 2 video generator is rolling out to the Gemini app and website, giving paying customers a chance to create short video clips with Google's allegedly state-of-the-art video model.

Veo 2 works like other video generators, including OpenAI's Sora—you input text describing the video you want, and a Google data center churns through tokens until it has an animation. Google claims that Veo 2 was designed to have a solid grasp of real-world physics, particularly the way humans move. Google's examples do look good, but presumably that's why they were chosen.

Prompt: Aerial shot of a grassy cliff onto a sandy beach where waves crash against the shore, a prominent sea stack rises from the ocean near the beach, bathed in the warm, golden light of either sunrise or sunset, capturing the serene beauty of the Pacific coastline.

Read full article

Comments

I'm a former recruiter for Google and Indeed. If I were labeled an underperformer, here's what I'd do next.

13 April 2025 at 09:07
headshot of a woman in a green top
Erica Rivera spent three years at Indeed as a recruiter and around two years at Google.

Sebastian Rivera

  • Erica Rivera is a career coach who formerly recruited for Google and Indeed.
  • Rivera said that she's coached many employees who've been labeled as underperformers.
  • She advises employees who've been labeled as underperformers to seek clarity as a first step.

This as-told-to essay is based on a conversation with Erica Rivera, a 37-year-old career coach now based in Barcelona. It has been edited for length and clarity.

Before becoming a career coach, I worked as a recruiter for Indeed for three years and Google for roughly another two.

I now work with people one-on-one to navigate career changes and transition into new roles. As a coach, I've helped those who have been labeled as underperformers, and it breaks my heart when I hear them talk about it.

First, there's the initial shock — Hey, I'm labeled as an underperformer? — and then I see how deeply they internalize that as their truth. The people I've talked with feel like they're broken, that there's something wrong with them because they're not seen as meeting expectations.

When this happens, I say, take a second to breathe. This label is just that, a label — and it doesn't mean that there's something wrong with you.

Many times, people who get that label are not underperforming; they're just caught up in unfortunate situations. Sometimes, new management comes in, or goals shift, and the employee isn't made aware.

I tell them, It doesn't define you. It doesn't define the rest of your career. Instead, it could be time to stop and evaluate your next steps. No matter the scenario or why you received this label, here are four steps to take if you're labeled an underperformer at work.

Seek clarity

Many times, if there isn't clarity about what is being asked of an employee or if they don't fully understand what their manager is looking for, it creates a gap — first in communication, then in performance.

Maybe there is a misunderstanding of what the goals are versus what the manager has been expecting. If that's the case, it's time to assess bridging the gap.

When having a conversation with your manager, ask: What am I being measured against? What does success look like in the next 60 to 90 days? Can you help me understand where I'm missing the mark? How does that align with the team's expectations and the greater organizational goals?

By getting clarity, you have something to measure your performance against.

Take action and document it

Once you have had that conversation with your manager and understand the expectations, it's time to take action and track your progress.

I tell people all the time, "You need to document, document, document," because you have to make sure that you're covering yourself in the work that you're doing.

This might include documenting any kudos you get, your metrics (which your manager should also be tracking), and any internal awards — anything that can show where you're delivering in your role and exceeding.

Check-in with your manager

You should be having weekly, or at least bi-weekly, one-on-ones with your manager. During these conversations, update your manager on your wins, your metrics, and key positive feedback you're receiving.

After your one-on-one, send them a follow-up email: Hey, just as a follow-up, here's my understanding from our conversation. Here are the wins, areas of opportunity, and what I'm focusing on this week.

Your manager might not read it, but at least you're documenting it, sending it out, and taking ownership.

In the end, be thorough in documenting and updating your manager to show that you are progressing toward the goals you have set.

Update your résumé and look elsewhere

Although you might be doing all that you can internally, make sure you're also updating your résumé and LinkedIn. Start tapping into your network and possibly just re-engaging with people that you haven't connected with in a while.

You could check in with your connections and say that you're interested in seeing what openings are available at their organizations.

When doing so, it's best not to bash your old company because, a lot of times, that can reflect poorly on you, and when looking for work, you'll want to keep it more neutral.

If you're asked why you're looking for new opportunities in an interview, you might say: There was a shift in the direction of the organization, and as a result, there was no longer alignment between the work that I was doing and the new priorities that were being implemented. I am looking for a long-term opportunity where I can grow with the next organization that I'm in.

Whatever you do, you don't want to call yourself out and label yourself as an underperformer.

If you're a recruiter with career tips you'd like to share, please contact this editor, Manseen Logan, at [email protected].

Read the original article on Business Insider

❌