
AI is helping blue-collar workers do more with less as labor shortages are projected to worsen 

11 June 2025 at 10:00

There are an estimated 180 million utility poles in operation in the U.S., and each of them needs to be inspected periodically. Historically, crews of specialized workers would go from pole to pole, climbing to the top and evaluating the integrity of the structure, regardless of whether the pole had a known problem. Today, with AI, sensors, and drones, teams can assess the state of this critical infrastructure without physically being there, sending a worker on site only when there’s an issue that needs to be addressed. What’s more, the data produced by these remote monitoring systems means workers arrive better informed and prepared when they are deployed to a pole.

“There’s a lot of diagnostic time to figure out what’s going on, but now imagine that you just show up on a site with the information. So you’re sending somebody to the right spot when there’s an actual issue, and then they’re much more likely to have the right part, or the right truck, or the right materials they need in that moment,” said Alex Hawkinson, CEO of BrightAI, a company using AI solutions to address worker challenges in the energy sector and other blue-collar industries including HVAC, water pipeline, construction, manufacturing, pest control, and field service. 
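Neither BrightAI nor its customers have published code, but the pattern Hawkinson describes — monitor remotely, dispatch only on a real issue, and send the diagnosis along — can be sketched in a few lines. Everything below (the sensor fields, the thresholds, the work-order format) is a hypothetical illustration, not BrightAI's actual system.

```python
from dataclasses import dataclass

# Hypothetical sensor reading for one pole; the field names are
# illustrative, not BrightAI's actual schema.
@dataclass
class PoleReading:
    pole_id: str
    tilt_degrees: float    # lean measured by an on-pole inclinometer
    moisture_pct: float    # internal wood moisture, a decay indicator

# Illustrative thresholds -- a real system would tune these from history.
MAX_TILT = 5.0
MAX_MOISTURE = 25.0

def triage(readings: list[PoleReading]) -> list[dict]:
    """Create a work order only for poles whose sensors flag a problem,
    bundling the diagnostic context a crew would otherwise gather on site."""
    orders = []
    for r in readings:
        issues = []
        if r.tilt_degrees > MAX_TILT:
            issues.append(f"excessive lean ({r.tilt_degrees:.1f} deg)")
        if r.moisture_pct > MAX_MOISTURE:
            issues.append(f"possible decay (moisture {r.moisture_pct:.0f}%)")
        if issues:  # dispatch a worker only when there's an actual issue
            orders.append({"pole": r.pole_id, "issues": issues})
    return orders

print(triage([PoleReading("P-001", 2.1, 12.0),
              PoleReading("P-002", 7.4, 31.0)]))
# Only P-002 generates a work order, with the diagnosis attached.
```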

It’s just one example of how AI-enabled technologies are increasingly helping workers in blue-collar industries do their jobs, saving them time and energy, and reducing their exposure to risky situations (like having to climb to the top of utility poles). The new wave of AI is also allowing workers across these fields to get more out of the technologies they’ve already been using and data they’ve been collecting. AI’s long-term impact on jobs is an increasingly important topic of debate, as analysts and economists look for clues by examining hiring practices at different companies. But in many of these blue-collar fields that are currently struggling with labor shortages, AI is a welcome helper.

Labor shortages drive blue-collar appetite for automation 

Blue-collar industries that require specialized trade skills are some of the most labor-squeezed parts of the workforce, particularly as aging workers who were trained for them years ago start to retire. Between 30% and 50% of water pipeline workers are expected to retire in the next decade, for example, and there aren’t enough younger workers entering the field to replace them. It’s a similar situation in farming: The average age of the U.S. farmer is 58.1 years old, and there are four times as many farmers who are 65 or older as there are farmers younger than 35, according to the 2022 U.S. Census of Agriculture. Farming also has to deal with the seasonality of its labor needs, which swing dramatically throughout the year.

“Another big misconception is that autonomy is about labor replacement,” said Willy Pell, CEO of John Deere subsidiary Blue River Technology, regarding AI in the farming industry. “In many cases, it just isn’t there to begin with. So it’s not replacing anything—it’s giving them labor.”

Whether it’s a utility worker inspecting a pole or a farmer harvesting crops, doing more with less time is paramount when there aren’t enough people to get the work done. 

“One of the biggest things is that farmers never have enough time. When we can give them their time back, it makes their lives meaningfully better. They get to spend more time with their family. They get to spend more time running the higher-leverage parts of their business, the higher-value parts of their business, and they have less stress,” said Pell. “There’s an incredible amount of anxiety that comes with not knowing if you can run your business because you’re relying on an extremely sparse, fragile labor force to help you do it. And autonomy helps farmers with this problem.”

Crucially, it’s not just industry leaders who are on board, but workers too. A study on workers’ openness to automation performed by Massachusetts Institute of Technology researchers (and backed by Amazon) found that those without college degrees, or “blue-collar” workers, are more open to automation than those with degrees. According to the study, 27.4% of workers without a college degree said they believe that AI will be beneficial for their job security, compared to 23.7% of workers with a college degree.

AI supercharges the data and technologies workers are already relying on 

For many blue-collar workers, the problems they face on the job are increasingly measurable. For example, Blue River Technology builds neural networks that integrate directly into field-spraying machines, detecting crops and weeds so that herbicide is sprayed only on the weeds. Technologies like sensors and drones have been around for years, but recent progress in AI is allowing workers to derive more benefit from these technologies and the data they produce.
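Blue River's models are proprietary, but the see-and-spray control loop described above — classify each patch of ground, open a nozzle only over weeds — can be sketched roughly as follows. The single-feature classifier is a stand-in for the real neural network; the decision thresholds are invented for illustration.

```python
def classify_patch(green_fraction: float) -> str:
    """Stand-in for the neural network: real systems run a vision model
    on each camera patch; here a single fake feature decides the label."""
    if green_fraction > 0.6:   # purely illustrative decision rule
        return "weed"
    elif green_fraction > 0.3:
        return "crop"
    return "soil"

def spray_pass(patches: list[float]) -> list[bool]:
    """Return, per patch, whether the nozzle above it should open.
    Herbicide is released only over weeds, sparing the crop."""
    return [classify_patch(p) == "weed" for p in patches]

print(spray_pass([0.1, 0.45, 0.8]))  # -> [False, False, True]
```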

“A lot of factories and other industrial environments have had data around for a long time and haven’t necessarily known what to do with it. Now there are new algorithms and new software that’s allowing these companies to be a lot more intelligent with using that data to make work better,” said Ben Armstrong, coauthor of the study on worker attitudes surrounding automation and an MIT researcher who focuses on the relationship between technology and work, especially in American manufacturing.

BrightAI’s Hawkinson echoes this, saying that “a simple sensor reading isn’t enough to give you the pattern that you care about” and that it’s the maturation of AI that’s made the difference. For example, the company has tapped large language models (LLMs) for voice interaction to allow workers to interact with sensor data via wearable devices, which is crucial for workers who need to have their hands free, as is common in the fields BrightAI operates in. Hawkinson said that companies working with BrightAI’s platform are seeing productivity lifts between 20% and 30% within three to six months of getting up and running.

Overall, many of the potential benefits hinge on using AI to better organize, and improve access to, the information that’s vital to getting these jobs done. Blue River Technology, for example, is tapping LLMs to turn dense equipment error codes into a more readable format with easy-to-understand troubleshooting tips.
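Neither company has published its prompts, but the basic pattern — hand an LLM the raw error code plus the relevant manual text and ask for plain-language steps — is easy to sketch. The ask_llm placeholder, the error code, and the manual excerpt below are all hypothetical.

```python
# A hedged sketch of using an LLM to turn a terse equipment fault code
# into readable troubleshooting steps. `ask_llm` is a placeholder for
# whatever chat-completion API a company actually uses; the code and
# manual excerpt are invented for illustration, not real John Deere data.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

ERROR_CODE = "HYD-2041"  # hypothetical fault code
MANUAL_EXCERPT = "HYD-2041: hydraulic pressure below commanded setpoint."

prompt = (
    "You are helping a field technician. Rewrite this equipment fault "
    "in plain language and list 2-3 ordered troubleshooting steps.\n"
    f"Fault code: {ERROR_CODE}\n"
    f"Manual text: {MANUAL_EXCERPT}"
)
# print(ask_llm(prompt))
```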

“In a lot of the companies we’re studying, there are these company-specific tools that workers can use to solve problems in their job by either doing a different kind of research or trying to organize information in a new way,” Armstrong said. “And I think for blue-collar workers who have a lot of knowledge about the particular processes and technologies that they work on, that can be really exciting.”

This story was originally featured on Fortune.com


Everyone’s using AI at work. Here’s how companies can keep data safe

11 June 2025 at 10:00

Companies across industries are encouraging their employees to use AI tools at work. Their workers, meanwhile, are often all too eager to make the most of generative AI chatbots like ChatGPT. So far, everyone is on the same page, right?

There’s just one hitch: How do companies protect sensitive company data from being hoovered up by the same tools that are supposed to boost productivity and ROI? After all, it’s all too tempting to upload financial information, client data, proprietary code, or internal documents into your favorite chatbot or AI coding tool, in order to get the quick results you want (or that your boss or colleague might be demanding). In fact, a new study from data security company Varonis found that shadow AI—unsanctioned generative AI applications—poses a significant threat to data security, with tools that can bypass corporate governance and IT oversight, leading to potential data leaks. The study found that nearly all companies have employees using unsanctioned apps, and nearly half have employees using AI applications considered high-risk. 

For information security leaders, one of the key challenges is educating workers about what the risks are and what the company requires. They must ensure that employees understand the types of data the organization handles—ranging from corporate data like internal documents, strategic plans, and financial records, to customer data such as names, email addresses, payment details, and usage patterns. It’s also critical to communicate how each type of data is classified—for example, whether it is public, internal-only, confidential, or highly restricted. Once this foundation is in place, clear policies and access boundaries must be established to protect that data accordingly.
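That classify-then-enforce flow maps naturally onto a simple allow/deny check. In the sketch below, the tier names follow the article, but the specific policy — sanctioned tools may see up to confidential data, unsanctioned ones only public data — is an assumption for illustration, not a rule from Varonis or the companies quoted.

```python
# Minimal sketch of classify-then-enforce: every document carries a
# sensitivity tier, and a gateway checks the tier before data can be
# sent to a generative AI tool. Tier names come from the article; the
# policy mapping below is an illustrative assumption.

TIERS = ["public", "internal-only", "confidential", "highly-restricted"]

# Assumed policy: sanctioned tools may see up to "confidential";
# unsanctioned ("shadow AI") tools may see only public data.
MAX_TIER_FOR = {"sanctioned": "confidential", "unsanctioned": "public"}

def may_upload(doc_tier: str, tool_status: str) -> bool:
    """Allow the upload only if the document's tier does not exceed
    what the tool's sanctioned/unsanctioned status permits."""
    return TIERS.index(doc_tier) <= TIERS.index(MAX_TIER_FOR[tool_status])

print(may_upload("internal-only", "sanctioned"))    # True
print(may_upload("internal-only", "unsanctioned"))  # False: shadow AI
```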

Striking a balance between encouraging AI use and building guardrails

“What we have is not a technology problem, but a user challenge,” said James Robinson, chief information security officer at data security company Netskope. The goal, he explained, is to ensure that employees use generative AI tools safely—without discouraging them from adopting approved technologies.

“We need to understand what the business is trying to achieve,” he added. Rather than simply telling employees they’re doing something wrong, security teams should work to understand how people are using the tools, to make sure the policies are the right fit—or whether they need to be adjusted to allow employees to share information appropriately.

Jacob DePriest, chief information security officer at password protection provider 1Password, agreed, saying that his company is trying to strike a balance with its policies—to both encourage AI usage and also educate so that the right guardrails are in place. 

Sometimes that means making adjustments. For example, the company released a policy on the acceptable use of AI last year as part of its annual security training. “Generally, it’s this theme of ‘Please use AI responsibly; please focus on approved tools; and here are some unacceptable areas of usage.’” But the way the policy was written caused many employees to be overly cautious, he said.

“It’s a good problem to have, but CISOs can’t just focus exclusively on security,” he said. “We have to understand business goals and then help the company achieve both business goals and security outcomes as well. I think AI technology in the last decade has highlighted the need for that balance. And so we’ve really tried to approach this hand in hand between security and enabling productivity.” 

Banning AI tools to avoid misuse does not work

But companies that think banning certain tools is a solution should think again. Brooke Johnson, SVP of HR and security at Ivanti, said her company found that among people who use generative AI at work, nearly a third keep their AI use completely hidden from management. “They’re sharing company data with systems nobody vetted, running requests through platforms with unclear data policies, and potentially exposing sensitive information,” she said in a message.

The instinct to ban certain tools is understandable but misguided, she said. “You don’t want employees to get better at hiding AI use; you want them to be transparent so it can be monitored and regulated,” she explained. That means accepting the reality that AI use is happening regardless of policy, and conducting a proper assessment of which AI platforms meet your security standards. 

“Educate teams about specific risks without vague warnings,” she said. Help them understand why certain guardrails exist, she suggested, while emphasizing that it is not punitive. “It’s about ensuring they can do their jobs efficiently, effectively, and safely.” 

Agentic AI will create new challenges for data security

Think securing data in the age of AI is complicated now? AI agents will up the ante, said DePriest. 

“To operate effectively, these agents need access to credentials, tokens, and identities, and they can act on behalf of an individual—maybe they have their own identity,” he said. “For instance, we don’t want to facilitate a situation where an employee might cede decision-making authority over to an AI agent, where it could impact a human.” Organizations want tools to help facilitate faster learning and synthesize data more quickly, but ultimately, humans need to be able to make the critical decisions, he explained. 

Whether it is the AI agents of the future or the generative AI tools of today, striking the right balance between enabling productivity gains and doing so in a secure, responsible way may be tricky. But experts say every company is facing the same challenge—and meeting it is going to be the best way to ride the AI wave. The risks are real, but with the right mix of education, transparency, and oversight, companies can harness AI’s power—without handing over the keys to their kingdom.

This story was originally featured on Fortune.com


ChatGPT’s daylong outage is nearly fixed

10 June 2025 at 23:41

OpenAI’s ChatGPT service was down all day for many users after the platform started experiencing performance issues on Tuesday morning. The chatbot responded with a “Hmm…something seems to have gone wrong” error message to my colleague after failing to load, and users across X and Reddit reported platform outages.

Downdetector showed that issues started at around 3AM ET, with multiple regions impacted globally. OpenAI’s own status page said that some users started experiencing “elevated error rates and latency” at that time, noting that the issues were affecting ChatGPT, its Sora text-to-video AI tool, and OpenAI APIs. OpenAI added a separate line for “elevated error rates on Sora” at 5:23AM ET, and later updated the status for both to “partial outage.”

As of 6:32PM ET, OpenAI’s tracker reported a “full recovery in the API,” and that “Nearly all ChatGPT components are now working properly for all users.” The one spot of trouble, however, is voice mode, which still has elevated error rates.

Some users were able to access ChatGPT, but found that the service was sluggish and taking much longer than usual to respond. Others, like myself, were able to use the chatbot without any issues, so the outages and errors didn’t seem to impact everyone.

Perplexity, the AI search engine that uses some OpenAI models, also experienced outages, reporting “slowness and elevated error rates” on its status page. Perplexity’s issues started at around 7AM ET, according to Downdetector.

Update, June 10th: Noted OpenAI and Perplexity’s status updates.

Apple punts on Siri updates as it struggles to keep up in the AI race

10 June 2025 at 23:23

Apple's WWDC 2025 had new software, Formula 1 references, and a piano man crooning the text of different app reviews. But one key feature got the short end of the stick: Siri.

Although the company continuously referenced Apple Intelligence and pushed new features like live translation for Messages, FaceTime, and phone calls, Apple's AI assistant was barely mentioned. In fact, the most attention Siri got was when Apple explained that some of its previously promised features were running behind schedule.

To address what many saw as the elephant in the room, Apple's keynote briefly mentioned that it had updated Siri to be "more natural and more helpful," but that personalization features were still on the horizon. Those features were first mentioned at last year's WWDC, with a rollout timeline "over the course of the next year."

"We're continuing our work to deliver the features that make Siri even more personal," Craig Federighi, Apple's SVP of software engineering, said during Monday's keynote. "This work needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year."

Apple's relative silence on Siri stands out

Apple has long b …

Read the full story at The Verge.

Sam Altman claims an average ChatGPT query uses ‘roughly one fifteenth of a teaspoon’ of water

10 June 2025 at 22:28

OpenAI CEO Sam Altman, in a blog post published Tuesday, says an average ChatGPT query uses about 0.000085 gallons of water, or “roughly one fifteenth of a teaspoon.” He made the claim as part of a broader post on his predictions about how AI will change the world. 

“People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes,” he says. He also argues that “the cost of intelligence should eventually converge to near the cost of electricity.” OpenAI didn’t immediately respond to a request for comment on how Altman came to those figures.
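Both figures are easy to sanity-check with unit conversions; in the back-of-envelope check below, the oven and lightbulb wattages are our assumptions, since Altman’s post doesn’t specify them.

```python
# Back-of-envelope check of Altman's figures. The appliance wattages
# are assumptions; only 0.000085 gal and 0.34 Wh come from his post.

GAL_TO_TSP = 768            # 1 US gallon = 128 fl oz x 6 tsp = 768 tsp
water_tsp = 0.000085 * GAL_TO_TSP
print(f"{water_tsp:.3f} tsp ~= 1/{1 / water_tsp:.0f} teaspoon")
# 0.065 tsp ~= 1/15 teaspoon, matching the claim

QUERY_WH = 0.34
OVEN_W = 1000               # assumed oven element draw
LED_W = 10                  # assumed high-efficiency bulb
print(f"oven: {QUERY_WH / OVEN_W * 3600:.1f} s")   # ~1.2 s
print(f"bulb: {QUERY_WH / LED_W * 60:.1f} min")    # ~2.0 min
```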

AI companies have come under scrutiny for the energy costs of their technology. This year, for example, researchers forecast that AI could consume more power than Bitcoin mining by the end of the year. In an article last year, The Washington Post worked with researchers to determine that a 100-word email “generated by an AI chatbot using GPT-4” required “a little more than 1 bottle” of water. The publication also found that water usage can depend on where a data center is located.

Report: Meta taps Scale AI’s Alexandr Wang to join new ‘superintelligence’ lab

10 June 2025 at 15:22

Mark Zuckerberg is hand-picking top researchers and engineers to join an upcoming AI research lab dedicated to “superintelligence,” and Scale AI’s Alexandr Wang is on the list.

Apple’s AI-driven Stem Splitter audio separation tech has hugely improved in a year

9 June 2025 at 12:15

Imagine that you have a song file—drums, guitar, bass, vocals, piano—and you want to rebalance it, bringing the voice down just a touch in the mix.

Or you want to turn a Lyle Lovett country-rock jam into a slamming club banger, and all that's standing between you and the booty-shaking masses is a clean copy of Lovett's voice without all those instruments mucking things up.

Or you recorded a once-in-a-lifetime, Stevie Nicks-meets-Ann Wilson vocal performance into your voice notes app... but your dog was baying in the background, and your guitar was out of tune. Can you extract the magic and discard the rest?

Read full article


Ex-FCC Chair Ajit Pai is now a wireless lobbyist—and enemy of cable companies

9 June 2025 at 11:00

Ajit Pai is back on the telecom policy scene as chief lobbyist for the mobile industry, and he has quickly managed to anger a coalition that includes both cable companies and consumer advocates.

Pai was the Federal Communications Commission chairman during President Trump's first term and then spent several years at private equity firm Searchlight Capital. He changed jobs in April, becoming the president and CEO of wireless industry lobby group CTIA. Shortly after, he visited the White House to discuss wireless industry priorities and had a meeting with Brendan Carr, the current FCC chairman who was part of Pai's Republican majority at the FCC from 2017 to 2021.

Pai's new job isn't surprising. He was once a lawyer for Verizon, and it's not uncommon for FCC chairs and commissioners to be lobbyists before or after terms in government.

Read full article


OpenAI cofounder tells new graduates the day is coming when AI 'will do all the things that we can'

9 June 2025 at 15:55
Photo: OpenAI cofounder Ilya Sutskever gave a convocation speech at the University of Toronto, his alma mater, last week. (Jack Guez/Getty)

  • OpenAI cofounder Ilya Sutskever says "the day will come when AI will do all the things that we can."
  • He spoke about the state of AI at the University of Toronto convocation last week.
  • Sutskever also advised graduates to "accept reality as it is and try not to regret the past."

Ilya Sutskever says it might take years, but he believes AI will one day be able to accomplish everything humans can.

Sutskever, the cofounder and former chief scientist of ChatGPT maker OpenAI, spoke about the technology while giving a convocation speech at the University of Toronto, his alma mater, last week.

"The real challenge with AI is that it is really unprecedented and really extreme, and it's going to be very different in the future compared to the way it is today," he said.

Sutskever said that while AI is already better at some things than humans, "there are so many things it cannot do as well and it's so deficient, so you can say it still needs to catch up on a lot of things."

But, he said, he believes "AI will keep getting better and the day will come when AI will do all the things that we can do."

"How can I be so sure of that?" he continued. "We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things? This is the one-sentence summary for why AI will be able to do all those things, because we have a brain and the brain is a biological computer."

As is customary at convocation and commencement ceremonies, Sutskever also gave advice to the new graduates. He implored them to "accept reality as it is, try not to regret the past, and try to improve the situation."

"It's so easy to think, 'Oh, some bad past decision or bad stroke of luck, something happened, something is unfair,'" he said. "It's so easy to spend so much time thinking like this while it's just so much better and more productive to say, 'Okay, things are the way they are, what's the next best step?'"

Sutskever hasn't always taken his own advice on the matter, though. He's said before that he regrets his involvement in the November 2023 ousting of OpenAI CEO Sam Altman.

Sutskever was a member of the board, which fired Altman after saying it "no longer has confidence" in his ability to lead OpenAI and that he was "not consistently candid in his communications."

A few days later, however, Sutskever expressed regret for his involvement in the ouster and was one of hundreds of OpenAI employees who signed an open letter threatening to quit unless Altman was reinstated as CEO.

"I deeply regret my participation in the board's actions," Sutskever said in a post on X at the time. "I never intended to harm OpenAI."

Altman was brought back as CEO the same month. Sutskever left OpenAI six months later and started a research lab focused on building "safe superintelligence."

Read the original article on Business Insider

Nvidia CEO Jensen Huang says programming AI is similar to how you 'program a person'

9 June 2025 at 14:59
Photo: AI is the "great equalizer," Nvidia CEO Jensen Huang said at London Tech Week. (Carl Court/Pool/AFP via Getty Images)

  • Jensen Huang said people programming AI is similar to the way "you program a person."
  • Speaking at London Tech Week, the Nvidia CEO said all anyone had to do to program AI was "just ask nicely."
  • He called AI "the great equalizer," allowing anyone to program computers using plain language.

Nvidia CEO Jensen Huang has said that programming AI is similar to "the way you program a person" — and that "human" is the new programming language.

"The thing that's really, really quite amazing is the way you program an AI is like the way you program a person," Huang told London Tech Week on Monday.

Huang shared an example, saying, "You say, 'You are an incredible poet. You are deeply steeped in Shakespeare, and I would like you to write a poem to describe today's keynote.' Without very much effort, this AI would help you generate such a wonderful poem.

"And when it answers, you could say, 'I feel like you could do even better.' And it will go off and think about it and it will come back and say, 'In fact, I can do better.' And it does do a better job."

Huang said that in the past, "technology was hard to use" and that to access computer science, "we had to learn programming languages, architect systems, and design very complicated computers.

"But now, all of a sudden, there's a new programming language. This new programming language is called human."

"Most people don't know C++, very few people know Python, and everybody, as you know, knows human."

Huang called AI "the great equalizer" for making technology accessible to everyone and called the shift "transformative.

"This way of interacting with computers, I think, is something that almost anybody can do," he said.

"The way you program a computer today is to ask the computer to do something for you, even write a program, generate images, write a poem — just ask it nicely," Huang added.

At the World Government Summit in Dubai last year, Huang suggested the tech sector should focus less on coding and more on using AI as a tool across fields like farming, biology, and education.

"It is our job to create computing technology such that nobody has to program. And that the programming language is human, everybody in the world is now a programmer. This is the miracle of artificial intelligence," Huang said at the time.

Read the original article on Business Insider

Sundar Pichai says AI is making Google engineers 10% more productive. Here's how it measures that.

9 June 2025 at 14:16
Photo: Google has its own internal AI tools to help engineers be more productive. (Getty Images)

  • Google CEO Sundar Pichai said the company is tracking how AI makes its engineers more productive.
  • During the "Lex Fridman Podcast," Pichai estimated a 10% increase in engineering capacity.
  • Separately, Google and Microsoft have publicly shared how much of their code is being generated by AI.

Google is tracking how AI is making its engineers more productive — and has developed a specific way to measure it.

Speaking on an episode of the "Lex Fridman Podcast" that aired last week, Google CEO Sundar Pichai said that the company was looking closely at how artificial intelligence was boosting productivity among its software developers.

"The most important metric, and we carefully measure it, is how much has our engineering velocity increased as a company due to AI?" he said. The company estimates that it's so far seen a 10% boost, Pichai said.

A Google spokesperson clarified to Business Insider that the company tracks this by measuring the increase in engineering capacity created, in hours per week, from the use of AI-powered tools.

Put simply, it's a measurement of how much extra time engineers are getting back thanks to AI.

Pichai didn't say whether Google expects that 10% number to keep increasing. However, he said he expects agentic capabilities — where AI can take actions and make decisions more autonomously — to unlock the "next big wave."

Google has its own internal tools to help engineers code. Last year, the company launched an internal coding copilot named "Goose," trained on 25 years of Google's technical history, Business Insider previously reported.

Even so, Pichai said during the podcast that Google plans to hire more engineers next year. "The opportunity space of what we can do is expanding too," he said, adding that he hopes AI removes some of the grunt work and frees up time for more enjoyable aspects of engineering.

Separately, the company is tracking the amount of code that is being generated by AI within Google's walls — a number that is apparently increasing.

Pichai said during Alphabet's most recent earnings call that more than 30% of the company's new code is generated by AI, up from an estimated 25% in October.

Google isn't the only one. Speaking at London Tech Week on Monday, Microsoft UK CEO Darren Hardman said its GitHub Copilot coding assistant is now writing 40% of code at the company, "enabling us to launch more products in the last 12 months than we did in the previous three years."

He added: "It isn't just about speed."

In April, Meta CEO Mark Zuckerberg predicted AI could handle half of Meta's developer work within a year.

Additional reporting by Effie Webb.

Have something to share? Contact this reporter via email at [email protected] or Signal at 628-228-1836. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

Meta reportedly in talks to invest billions of dollars in Scale AI

8 June 2025 at 19:59

Meta is discussing a multibillion-dollar investment in Scale AI, according to Bloomberg. The deal could reportedly exceed $10 billion, making it the largest external AI investment for the Facebook parent company and one of the largest funding events ever for a private company.