Reading view

Exclusive: Easy-to-deploy industrial robot startup emerges from stealth with $8.5 million in seed funding

Sunrise Robotics, a startup building modular industrial robotics and AI models that makes them simple to deploy in different environments, has emerged from stealth with $8.5 million in seed funding.

The investment round is being led by Plural, a London-based early stage venture capital firm formed by a group of prominent startup founders including Wise cofounder Taavet Hinrikus and SongKick cofounder Ian Hogarth. Venture capital firms Tapestry, Seedcamp, Tiny.vc and Prototype Capital also participated in the funding.

Sunrise, which is headquartered in Ljubljana, Slovenia, declined to comment on its valuation following the funding round.

The startup is trying to address an acute and worsening labor shortage in many European manufacturing firms, Tomaz Stolfa, its cofounder and CEO, said. These businesses currently represent 15% of Europe’s GDP and employ 32 million people. But close to a third of this existing European manufacturing workforce is set to retire in the coming decade and industrial companies are already saying they cannot find enough young workers to replace those who are leaving. Sunrise sees industrial robots being able to take over some of the manual cutting, welding, fastening, and bolting human workers currently perform on the production lines of these businesses.

Artists' rendering of Sunrise Robotics' industrial robot workstation,
The design for a Sunrise Robotics industrial robot workstation, or “cell.”
Photo courtesy of Sunrise Robotics



The company can have its two-armed robots up-and-running on a new industrial production line in less than 10 weeks, Stolfa said, while it can take as long as eight months to deploy traditional industrial robots that have to be programmed on site.

The startup accomplishes this by using cameras to gather detailed three-dimensional data on the workstation where the robot will be deployed and also recording the steps a human worker currently takes to accomplish a task at that workstation. Sunrise uses this camera data to build what is essentially a digital twin of that workstation and trains AI models in a simulator that can control its robots to complete the task. Then it transfers this control software to its real robots.

Sunrise Robotics is not the only robot startup trying to use modern AI techniques and modular designs to make the delivery of robots for factories and warehouses much faster and more affordable. Paris-based Inbolt is also targeting industrial robotic arms, while Physical Intelligence is building “foundation models” that will enable any robotic arm designed to pick up and maniuplate a wide variety of objects.

Stolfa says that Sunrise’s software uses a combination of small AI models and conventional computer coding to control its robots. He said that as its robots master new skills, the time it should take to deploy them to future environments that demand similar skills should shorten considerably.

He also says that Sunrise’s decision to build standardized “cells”—as it calls its robotic workstations—makes it easier to train the robots for new tasks. The robot workstations are Sunrise’s own design, but are composed of mostly off-the-shelf parts, which makes them cheaper to build and maintain. “What we’ve done is we’ve productized the hardware,” he said.

Stolfa said one reason traditional industrial robots were expensive and time-consuming to deploy is that they were often designed specifically for one particular assembly line. This meant that only the largest manufacturing companies could afford to use them.

Sunrise, Stolfa said, is not targeting these businesses, such as major automakers. Instead, he said the company is going after the 60% of European manufacturers that are “high mix, low volume,” meaning that they produce a lot of different parts, but a relatively small number of finished products. He said Sunrise’s sweet spot was probably companies producing less than 100,000 parts each year, but that it could also work for those producing up to about 400,000 parts.

So far the company said it has signed letters of intent with about 10 customers, including those in supercar development, high-performance batteries, and consumer electronics manufacturing. Andrew Buss, the managing director at Asteelflash, an electronics manufacturer based in Bedford, England, that is an early Sunrise customer, said in a statement that the startup has helped it “adopt cutting-edge innovation at remarkable speed. Just a few months after initial data collection, we had a fully-trained, operational-intelligent robot up and running within hours of delivery.”  

Two of Sunrise’s three cofounders are experienced entrepreneurs, and all three spent time working in tech in Silicon Valley. Stolfa co-founded a number of previous companies—including the voice-over-internet company vox.io and also the messaging app builder Layer. Cofounder Marko Thaler, the company’s chief technology officer, previously founded Airnamics, which built AI brains for robots and drones. Meanwhile, Joe Perrott, Sunrise’s third cofounder and its chief commercial officer, was head of global program management at PCH International, which helps businesses build supply chains, including finding contract manufacturing partners. Its clients have included Apple, Amazon, Google, and Square.

The company currently employs 25 people in Ljubljana and working in a dozen locations across Europe. Stolfa said it plans to use its new funding to expand its team and ramp up production of its robot workstations.

This story was originally featured on Fortune.com

© Courtesy of Sunrise Robotics

Sunrise Robotics cofounders from left: Joe Perrott, now its chief commercial officer; Tomaz Stalfo, its CEO; and Marko Thaler, its chief technology officer. The startup, which is building easy-to-deploy industrial robots, just announced $8.5 million in seed funding.
  •  

Sainsbury’s trials new concepts and technology in bid to boost customer experience

In a move to improve customer experience and make its stores easier to shop, Sainsbury’s is investing in multiple trials of exciting new formats across the U.K. They provide an insight into the grocer’s philosophy for the future of its stores. And nowhere else is this more evident than at its Kiln Lane store in Epsom, Surrey.

Kiln Lane is a 100,000 square foot so-called “Destination Plus” store, stocking the full range of products and brands, including Tu, Habitat, and Argos, in addition to general merchandise and groceries.

But what makes it stand out is not tech-driven; it is the layout and ease of navigation for the shopper.

The first thing on entering the store is a feeling of calm, and this is no accident; a lot of work by many teams across the business has gone into creating this environment.

Kiln Lane store stocks the full range of products and brands, including Tu, Habitat, and Argos, in addition to general merchandise and groceries.
Courtesy of Andrew Busby

Clear white signage using light boxes allows for easy navigation. Speaking to Fortune, Sainsbury’s Director of Future Stores and Customer Experience, Darren Sinclair, said, “The store has been designed based on the mission, the purpose of the store, the experience we want to create for customers”.

In the words of Sinclair, the signage is “really quick, really simple, really clear”. To achieve this, Sainsbury’s has stripped away a lot of it, especially pendant signs hung from the ceiling, as, using eye-tracking technology, they found that customers weren’t actually looking at them.

Another contribution to that feeling of calmness are the new shelving units, which look slicker and more modern but are easier to clean and don’t cost any more. Noticeably, this color scheme also links very well with the purple Nectar signage.

Good product availability was also in evidence, and again, this is no accident. According to Sinclair, a lot of effort has gone into ensuring the right space allocation for each product so that it can be traded throughout most of the day, with far less need for replenishing.

“The store has been designed based on the mission, the purpose of the store, the experience we want to create for customers”.

Darren Sinclair, Director of Future Stores and Customer Experience, Sainsbury’s

What underpins the Epsom store is the concept of what Sainsbury’s calls “mission-based” shopping. Sinclair said, “What we wanted to do is make it really easy when customers enter the store. If they want dairy or they want meat, fish, or poultry, they can easily see it”.

An example of this is the location of the wines, beers, and spirits. Rather than being tucked away at one end of the store as one might expect, it’s front and center. The reason, according to Sinclair, is that, for example, this makes it easy (and quicker) for someone on their evening meal mission shop. They can choose their core product(s), and then on the way to the checkout, easily pick up their drink.

The trials Sainsbury’s is undertaking also extend to technology. First introduced in 2016, SmartShop (a method for shoppers to scan their purchases as they shop via an app or handheld scanner) has become embedded in the Sainsbury’s shopping experience. However, in what Sinclair describes as a “technology trial” at their Richmond and Kempton stores, in conjunction with partners, Zebra Technologies, they have added a payment option.

Speaking to Fortune, Mark Thomson, Retail Industry Director at Zebra Technologies, said of the Zebra handheld devices, “It’s also a technology stack because it’s Sainsbury’s software on Zebra’s devices with location services provided by a third-party company using the magnetic field and then payment services provided by Worldline”.

According to Thomson, there’s more functionality which will enhance the customer experience waiting to be unlocked, “Where better place to get a promotion than at the moment of purchase when you’re, say, in the crisps aisle, and Walkers suddenly offer you a promotion”.

In another move to improve the customer experience, at their Witney store in Oxfordshire, Sainsbury’s is trialing electronic shelf-edge labels (ESLs) from VusionGroup, allowing the grocer to introduce dynamic pricing.

Designated a “Future Store” the trial at Witney is across the majority of categories. It is a move in common with most U.K. grocers, who are revisiting the technology as a way to drive efficiencies in the face of increased costs. However, at Witney, the grocer is also trialing computer vision technology, also from VusionGroup, in order to monitor the shelf life of perishable products in real time in order to improve availability and reduce wastage.

Better store experiences aren’t solely the preserve of deploying new technology; it is achieved through blending technology with what is right for the customer, and creating an inviting environment in which to shop. Sainsbury’s “mission-based” strategy is a back-to-basics, cutting-edge technology approach for the future.

This story was originally featured on Fortune.com

© Matt Cardy / Contributor via Getty

Sainsbury's is one of the largest market leaders of groceries in the U.K.
  •  

AMD says new chips can top Nvidia’s in booming AI chip field

Advanced Micro Devices Inc. Chief Executive Officer Lisa Su said her company’s latest AI processors can challenge Nvidia Corp. chips in a market she now expects to soar past $500 billion in the next three years. 

The latest installments in AMD’s MI350 chip series are faster than Nvidia counterparts and represent major gains over earlier versions, Su said at a company event Thursday in San Jose, California. New MI355 chips, which started shipping earlier this month, are 35 times faster than predecessors, she said.

Though AMD remains a distant second to Nvidia in AI accelerators — the chips that help develop and run artificial intelligence tools — it aims to start catching up with these new products. The stakes are higher than ever: Su previously predicted $500 billion in market revenue by 2028, but she now sees it topping that number. 

In February, AMD gave a forecast for its data center business that showed growth is coming at a slower pace than some analysts had predicted. AMD believes the new update to its MI range will restore momentum and prove it can go toe to toe with a much bigger rival.

AMD said that the MI355 outperforms Nvidia’s B200 and GB200 products when it comes to running AI software and equals or exceeds them when creating the code. Purchasers will pay significantly less than they would versus Nvidia, AMD said. 

Investors gave a tepid response to AMD’s latest presentation, with the shares falling as much as 1.9%. The stock was up less than 1% this year through Wednesday’s close.

Nvidia and AMD are the leading providers of advanced computer graphics chips, which became the basis of components for developing AI. Demand has consistently outstripped supply as some of the world’s largest companies have poured tens of billions of dollars into new infrastructure. That’s forced up the price of chips, which can cost multiple tens of thousands of dollars each.

For AMD, the accelerator business has helped it escape the shadow of Intel Corp., its longtime rival in personal computer processors. But Nvidia has eclipsed them both. While AMD is getting multiple billions of dollars from its AI accelerators, Nvidia is generating more than $100 billion a year. 

This story was originally featured on Fortune.com

© Getty Images—Nathan Laine/Bloomberg

AMD CEO Lisa Su previously predicted $500 billion in market revenue by 2028, but she now sees it topping that number.
  •  

Cyberattack on Whole Foods supplier that left store shelves bare is part of a boom in attacks on retailers

A string of recent cyberattacks and data breaches involving the systems of major retailers have started affecting shoppers.

United Natural Foods, a wholesale distributor that supplies Whole Foods and other grocers, said this week that a breach of its systems was disrupting its ability to fulfill orders — leaving many stores without certain items.

In the U.K., consumers could not order from the website of Marks & Spencer for more than six weeks — and found fewer in-store options after hackers targeted the British clothing, home goods and food retailer. A cyberattack on Co-op, a U.K. grocery chain, also led to empty shelves in some stores.

Cyberattacks have been on the rise across industries. But infiltrations of corporate technology carry their own set of implications when the target is a consumer-facing business.

Beyond potentially halting sales of physical goods, breaches can expose customers’ personal data to future phishing or fraud attempts.

Here’s what you need to know.

Cyberattacks are on the rise overall

Despite ongoing efforts from organizations to boost their cybersecurity defenses, experts note that cyberattacks continue to increase across the board.

In the past year, there’s also been an “uptick in the retail victims” of such attacks, said Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance, a U.S. nonprofit.

“Cyber criminals are moving a little quicker than we are in terms of securing our systems,” he said.

Ransomware attacks — in which hackers demand a hefty payment to restore hacked systems — account for a growing share of cyber crimes, experts note. And of course, retail isn’t the only affected sector. Tracking by NCC Group, a global cybersecurity and software escrow firm, showed that industrial businesses were most often targeted for ransomware attacks in April, followed by companies in the “consumer discretionary” sector.

Attackers know there’s a particular impact when going after well-known brands and products that shoppers buy or need every day, experts note.

“Creating that chaos and that panic with consumers puts pressure on the retailer,” Steinhauer said, especially if there’s a ransom demand involved.

Ade Clewlow, an associate director and senior adviser at the NCC Group, points specifically to food supply chain disruptions. Following the cyberattacks targeting M&S and Co-op, for example, supermarkets in remote areas of the U.K., where inventory already was strained, saw product shortages.

“People were literally going without the basics,” Clewlow said.

Personal data is also at risk

Along with impacting business operations, cyber breaches may compromise customer data. The information can range from names and email addresses, to more sensitive data like credit card numbers, depending on the scope of the breach. Consumers therefore need to stay alert, according to experts.

“If (consumers have) given their personal information to these retailers, then they just have to be on their guard. Not just immediately, but really going forward,” Clewlow said, noting that recipients of the data may try to commit fraud “downstream.”

Fraudsters might send look-alike emails asking a retailer’s account holders to change their passwords or promising fake promotions to get customers to click on a sketchy link. A good rule of thumb is to pause before opening anything and to visit the company’s recognized website or call an official customer service hotline to verify the email, experts say.

It’s also best not to reuse the same passwords across multiple websites — because if one platform is breached, that login information could be used to get into other accounts, through a tactic known as “credential stuffing.” Steinhauer adds that using multifactor authentication, when available, and freezing your credit are also useful for added lines of defense.

Which companies have reported recent cybersecurity incidents?

A range of consumer-facing companies have reported cybersecurity incidents recently — including breaches that have caused some businesses to halt operations.

United Natural Foods, a major distributor for Whole Foods and other grocers across North America, took some of its systems offline after discovering “unauthorized activity” on June 5.

In a securities filing, the company said the incident had impacted its “ability to fulfill and distribute customer orders.” United Natural Foods said in a Wednesday update that it was “working steadily” to gradually restore the services.

Still, that’s meant leaner supplies of certain items this week. A Whole Foods spokesperson told The Associated Press via email that it was working to restock shelves as soon as possible. The Amazon-owned grocer’s partnership with United Natural Foods currently runs through May 2032.

Meanwhile, a security breach detected by Victoria’s Secret last month led the popular lingerie seller to shut down its U.S. shopping site for nearly four days, as well as to halt some in-store services. Victoria’s Secret later disclosed that its corporate systems also were affected, too, causing the company to delay the release of its first quarter earnings.

Several British retailers — M&SHarrods and Co-op — have all pointed to impacts of recent cyberattacks. The attack targeting M&S, which was first reported around Easter weekend, stopped it from processing online orders and also emptied some store shelves.

The company estimated last month that the it would incur costs of 300 million pounds ($400 million) from the attack. But progress towards recovery was shared Tuesday, when M&S announced that some of its online order operations were back — with more set to be added in the coming weeks.

Other breaches exposed customer data, with brands like Adidas, The North Face and reportedly Cartier all disclosing that some contact information was compromised recently.

In a statement, The North Face said it discovered a “small-scale credential stuffing attack” on its website in April. The company reported that no credit card data was compromised and said the incident, which impacted 1,500 consumers, was “quickly contained.”

Meanwhile, Adidas disclosed last month that an “unauthorized external party” obtained some data, which was mostly contact information, through a third-party customer service provider.

Whether or not the incidents are connected is unknown. Experts like Steinhauer note that hackers sometimes target a piece of software used by many different companies and organizations. But the range of tactics used could indicate the involvement of different groups.

Companies’ language around cyberattacks and security breaches also varies — and may depend on what they know when. But many don’t immediately or publicly specify whether ransomware was involved.

Still, Steinhauer says the likelihood of ransomware attacks is “pretty high” in today’s cybersecurity landscape — and key indicators can include businesses taking their systems offline or delaying financial reporting.

Overall, experts say it’s important to build up “cyber hygiene” defenses and preparations across organizations.

“Cyber is a business risk, and it needs to be treated that way,” Clewlow said.

This story was originally featured on Fortune.com

© Wyatte Grantham-Phillips—AP

Shelves at a Whole Foods in New York City sit empty on June 10, 2025.
  •  

Uber to launch self-driving taxis in London despite COO labeling the city’s roads as some of the ‘world’s busiest and most complex’

Ride-hailing firm Uber will launch self-driving taxis in London next year when England trials new driverless services, the firm and the UK government said on Tuesday.

Under the Uber pilot scheme, services will initially have a human in the driver’s seat who can take control of the vehicle in an emergency, but the trials will eventually transition to being fully driverless.

The government announcement will see companies including Uber allowed to trial commercial driverless services without a human presence for the first time in the UK.

They will include taxis and “bus-like” services.

Uber COO Andrew Macdonald described London’s roads as “one of the world’s busiest and most complex urban environments”.

“Our vision is to make autonomy a safe and reliable option for riders everywhere, and this trial in London brings that future closer to reality,” he said.

Members of the public will be able to book the transport via an app from spring 2026, ahead of a potential wider rollout when new legislation — the Automated Vehicles Act — becomes law from the second half of 2027, the Department for Transport added.

The technology could create 38,000 jobs, add £42 billion ($57 billion) to the UK economy by 2025, and make roads safer, it said.

“The future of transport is arriving. Self-driving cars could bring jobs, investment, and the opportunity for the UK to be among the world-leaders in new technology,” Transport Secretary Heidi Alexander said.

“We can’t afford to take a back seat on AI…. That’s why we’re bringing timelines forward today,” added Technology Secretary Peter Kyle.

The wider rollout will also allow the sale and use of self-driving, private cars.

Driverless vehicle trials have been underway in the UK since January 2015, with British companies Wayve and Oxa “spearheading significant breakthroughs in the technology”, the ministry said.

“These early pilots will help build public trust and unlock new jobs, services, and markets,” said Wayve CEO Alex Kendall.

According to the government the forthcoming legislation will require self-driving vehicles to “achieve a level of safety at least as high as competent and careful human drivers”.

“By having faster reaction times than humans, and by being trained on large numbers of driving scenarios, including learning from real-world incidents, self-driving vehicles can help reduce deaths and injuries,” it said.

Driverless taxis with limited capacity are already on the roads in the United States and China, most notably in the central Chinese city of Wuhan where a fleet of over 500 can be hailed by app in designated areas.

This story was originally featured on Fortune.com

© Sam Barnes—Sportsfile for Collision via Getty Images

President and COO at Uber, Andrew Macdonald described London's roads as "one of the world's busiest and most complex urban environments".
  •  

Bluesky is backfiring. Mark Cuban says the ‘lack of diversity of thought’ is actually pushing users back to X

  • Billionaire Mark Cuban, who has been an active Bluesky user and supporter for several months now, said this week he thinks the social media platform has gotten “ruder and more hateful.” Its liberal-heavy user base makes it challenging to raise questions or have debates with other users, even if their political ideologies align, he said.

After Elon Musk bought Twitter (now X) in late 2022, the vibes on the platform started changing, and many accounts that were verified lost their status. But the mass exodus from X came in late 2024 following the U.S. presidential election and Musk throwing his support—and millions—behind Donald Trump. 

By December 2024, the platform had lost about 2.7 million active Apple and Android users in just two months, with competitor Bluesky absorbing nearly all of those users. Colloquially, it became somewhat of a safe haven for liberal users who wanted to drown out the noise of President Trump’s reelection. 

“It’s people wanting to just try something new. It’s people finding their community here,” Bluesky CEO Jay Graber told Vox Media’s Peter Kafta in a June 4 podcast. “I think in general it’s both people looking for something and people looking to get away from something.”

Between November 2024 and this May, Bluesky grew from about 10 million users to 30 million, according to a Pew Research Center analysis. Many news influencers—people who regularly post about current events on the platform—lean left politically, according to the analysis. 

One such figure was billionaire Mark Cuban, who supported former Vice President Kamala Harris during her presidential run in 2024, although he didn’t give her a penny for her campaign, he said. Cuban became a regular Bluesky user, having posted nearly 2,000 times since November 2024. When he first joined the platform, he famously posted: “Hello Less Hateful World.”

But Cuban has changed his tune. In a series of posts this week, Cuban argued Bluesky has become too much of an echo chamber, and is sending more users back to X. 

“Engagement went from great convos on many topics, to agree with me or you are a nazi fascist,” Cuban wrote. “We are forcing posts to X.”

The former Shark Tank star and Dallas Mavericks owner also said he thinks Bluesky users have “grown ruder and more hateful.” 

“Even if you agree with 95% of what a person is saying on a topic, if there is one point that you might call out as being more of a grey area, they will call you a fascist etc.,” said Cuban, whose current net worth is about $8.33 billion, according to Bloomberg.

Bluesky did not respond to Fortune’s request for comment. 

Cuban also reposted a Washington Post opinion article published Sunday titled “The Bluesky bubble hurts liberals and their causes.” Author Megan McArdle argued Bluesky’s left-leaning user base segregates it into a political silo. Cuban agreed. 

“The lack of diversity of thought here is really hurting usage,” Cuban wrote in a separate Bluesky update including the Washington Post article. “The moderation and block tools on here are so advanced, if you see someone you don’t want to see on here, just block them.  Don’t attack them.” 

“There used to be great give and take discussions on politics and news. Not so much any more,” Cuban added. “Doesn’t have to be this way.”

This story was originally featured on Fortune.com

© Getty Images—Julia Beverly/WireImage

Mark Cuban spoke out about behavior on Bluesky's platform.
  •  

Google is offering buyouts and tightening its RTO policy—only problem is, it’s already worried about losing top performers

  • Google is introducing a voluntary exit program for select U.S. teams and tightening its return-to-office policy for remote employees living near offices, aiming to streamline operations without losing top talent. While the company emphasizes it wants high performers to stay, it’s also offering a “supportive exit path” for those misaligned with its strategy amid growing pressure from the accelerating AI race.

Google is once again walking the tightrope that corporate America has been struggling with since the end of the pandemic: How do you invite some staff to leave, without your best employees walking out the door?

Indeed, how do you ask employees to return to the office without your best hires going in search of new pastures?

And with the AI race gathering pace with every quarter, losing valuable human resources to fierce competitors could have a tremendous impact.

These questions and concerns are clearly top of mind for the Big Tech giant, which sent out a memo to staff in the U.S. this week indicating buyouts were available to certain teams—notably within its knowledge and information and central engineering units, in addition to marketing, research, and communications.

Similar moves were already announced by Google’s Platforms and Devices team, as well as its People Operations team, earlier this year.

In addition to announcing the “voluntary exit program,” Alphabet-owned Google also said staff in some teams will also have to come to the office more often, though did not specify which departments would be affected when asked by Fortune.

This change of rules will impact remote staffers who live within 50 miles of the office, and will be asked to return to their in-person desks on a hybrid schedule. The policy change is not a company-wide alteration.

“Earlier this year, some of our teams introduced a voluntary exit program with severance for U.S.-based Googlers, and several more are now offering the program to support our important work ahead,” Google spokesperson Courtenay Mencini told Fortune. “A number of teams are also asking remote employees who live near an office to return to a hybrid work schedule in order to bring folks more together in-person.”

The severance packages are available to U.S.-based individuals regardless of their role or level, seeking to leave the Mag7 company whether for personal or professional reasons.

Finding the balance

The problem with opening up buyout conversations with staffers does mean that talented individuals may just take up their employer on the offer.

Indeed, a working paper published last year from Mark Ma, associate professor of business administration at the University of Pittsburgh, and colleagues found prominent technology and finance companies that implemented return-to-office (RTO) mandates lost their most skilled and senior employees. 

This seems to be a situation Google is aware of and is trying to navigate. Per CNBC reporting, which viewed the memo from Google executive Nick Fox announcing the changes, the tech leader wanted to be “very clear” he hopes high performers will stay.

“If you’re excited about your work, energized by the opportunity ahead, and performing well, I really (really!) hope you don’t take this! We have ambitious plans and tons to get done,” Fox wrote, per the memo reviewed by CNBC. “On the other hand, this [voluntary exit program] offers a supportive exit path for those of you who don’t feel aligned with our strategy, don’t feel energized by your work, or are having difficulty meeting the expectations of your role.”

On the RTO changes, Fox added (per the memo reported by The Verge) “you’ve heard me say that I believe we innovate better and make decisions faster when we’re working together in the office” and continued teams are working to ensure the sites were ready for an influx of new visitors.

The goal, he wrote, “is to ensure that everyone on our team is fully committed—it’s not to achieve a headcount target. In fact, we continue to hire where needed, and we expect to backfill many of the exited roles—which will also create new opportunities for internal mobility and growth.”

This story was originally featured on Fortune.com

© David Paul Morris/Bloomberg - Getty Images

Sundar Pichai, chief executive officer of Google owner, Alphabet Inc
  •  

Exclusive: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—‘I would be terrified’

Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, harbored a critical security flaw that, according to researchers, signals a broader risk of AI agents being hacked.

The flaw, revealed today by AI security startup Aim Security and shared exclusively in advance with Fortune, is the first known “zero-click” attack on an AI agent, an AI that acts autonomously to achieve specific goals. The nature of the vulnerability means that the user doesn’t need to click anything or interact with a message for an attacker to access sensitive information from apps and data sources connected to the AI agent. 

In the case of Microsoft 365 Copilot, the vulnerability lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed. Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself. 

Microsoft 365 Copilot acts based on user instructions inside Office apps to do things like access documents and produce suggestions. If infiltrated by hackers, it could be used to target sensitive internal information such as emails, spreadsheets, and chats. The attack bypasses Copilot’s built-in protections, which are designed to ensure that only users can access their own files—potentially exposing proprietary, confidential, or compliance-related data.

The researchers at Aim Security dubbed the flaw “EchoLeak.” Microsoft told Fortune that it has already fixed the issue in Microsoft 365 Copilot and that its customers were unaffected. 

“We appreciate Aim for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted,” a Microsoft spokesperson said in a statement. “We have already updated our products to mitigate this issue, and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture.”

The Aim researchers said that EchoLeak is not just a run-of-the-mill security bug. It has broader implications beyond Copilot because it stems from a fundamental design flaw in LLM-based AI agents that is similar to software vulnerabilities in the 1990s, when attackers began to be able to take control of devices like laptops and mobile phones. 

Adir Gruss, cofounder and CTO of Aim Security, told Fortune that he and his fellow researchers took about three months to reverse engineer Microsoft 365 Copilot, one of the most widely used generative AI assistants. They wanted to determine whether something like those earlier software vulnerabilities lurked under the hood and then develop guardrails to mitigate against them. 

“We found this chain of vulnerabilities that allowed us to do the equivalent of the ‘zero click’ for mobile phones, but for AI agents,” he said. First, the attacker sends an innocent-seeming email that contains hidden instructions meant for Copilot. Then, since Copilot scans the user’s emails in the background, Copilot reads the message and follows the prompt—digging into internal files and pulling out sensitive data. Finally, Copilot hides the source of the instructions, so the user can’t trace what happened. 

After discovering the flaw in January, Gruss explained that Aim contacted the Microsoft Security Response Center, which investigates all reports of security vulnerabilities affecting Microsoft products and services. “They want their customers to be secure,” he said. “They told us this was super groundbreaking for them.”

However, it took five months for Microsoft to address the issue, which, Gruss said, “is on the (very) high side of something like this.” One reason, he explained, is that the vulnerability is so new, and it took time to get the right Microsoft teams involved in the process and educate them about the vulnerability and mitigations.

Microsoft initially attempted a fix in April, Gruss said, but in May the company discovered additional security issues around the vulnerability. Aim decided to wait until Microsoft had fully fixed the flaw before publishing its research, in the hope that other vendors that might have similar vulnerabilities “will wake up.”

Gruss said the biggest concern is that EchoLeak could apply to other kinds of agents—from Anthropic’s MCP (Model Context Protocol), which connects AI assistants to other applications, to platforms like Salesforce’s Agentforce. 

If he led a company implementing AI agents right now, “I would be terrified,” Gruss said. “It’s a basic kind of problem that caused us 20, 30 years of suffering and vulnerability because of some design flaws that went into these systems, and it’s happening all over again now with AI.”

Organizations understand that, he explained, which may be why most have not yet widely adopted AI agents. “They’re just experimenting, and they’re super afraid,” he said. “They should be afraid, but on the other hand, as an industry we should have the proper systems and guardrails.”

Microsoft tried to prevent such a problem, known as an LLM scope violation vulnerability. It’s a class of security flaws in which the model is tricked into accessing or exposing data beyond what it’s authorized or intended to handle—essentially violating its “scope” of permissions. “They tried to block it in multiple paths across the chain, but they just failed to do so because AI is so unpredictable and the attack surface is so big,” Gruss said. 

While Aim is offering interim mitigations to clients adopting other AI agents that could be affected by the EchoLeak vulnerability, Gruss said the long-term fix will require a fundamental redesign of how AI agents are built. “The fact that agents use trusted and untrusted data in the same ‘thought process’ is the basic design flaw that makes them vulnerable,” he explained. “Imagine a person that does everything he reads—he would be very easy to manipulate. Fixing this problem would require either ad hoc controls, or a new design allowing for clearer separation between trusted instructions and untrusted data.” 

Such a redesign could be in the models themselves, Gruss said, citing active research into enabling the models to better distinguish between instructions and data. Or the applications the agents are built on top of could add mandatory guardrails for any agent. 

For now, “every Fortune 500 I know is terrified of getting agents to production,” he said, pointing out that Aim has previously done research on coding agents where the team was able to run malicious code on developers’ machines. “There are users experimenting, but these kind of vulnerabilities keep them up at night and prevent innovation.” 

This story was originally featured on Fortune.com

© FABRICE COFFRINI—AFP/Getty Images

Microsoft CEO Satya Nadella
  •  

AI is changing how employees train—and starting to reduce how much training they need

Proficiency with AI tools has quickly become a top skill, and companies are working to train their employees how to use it. At the same time, AI is also emerging as a useful training tool in its own right.

Across industries, AI is helping companies create training materials faster and more efficiently, as well as allowing them to design new, more interactive methods to train workers. Artificial intelligence technology is also enabling a shift toward on-the-job instruction that can guide employees in real time. The benefits can be wide-ranging, from massive cost savings for the companies to providing a safer place to simulate tasks in which the cost of an error could be severe. 

Creating training content just got easier

BSH Home Appliances, a subset of the multinational technology Bosch Group, has been using an AI-generated video platform called Synthesia to create material ranging from compliance trainings to technical trainings. The platform allows users to quickly generate videos from prompts or documents and include generic avatars in their videos or even AI avatars of themselves. The videos can range from two minutes to 45 minutes, and the company has been significantly scaling its use of the platform after seeing a 70% cost savings in external video production.

Previously, the company’s learning and development teams had to purchase training video content from a vendor or repeatedly host and record training sessions. Lindsey Bradley, learning and development partner at BSH, says the platform has reduced instruction hours for facilitators and made it possible for a wide variety of stakeholders across the company to create training videos and seamlessly update them as often as needed. The other major benefit has been the ability to instantly translate and localize training content, which is typically a costly yet necessary task for a multinational company with employees in several countries. 

“One of our training sessions that covers energy, environment, and health compliance was created with the platform. In the learning hub for employees, the training session is offered in more than 10 languages, and all the trainer has to do is switch the language in the system,” says Bradley. “The content and script can remain the same. No language experts were required, no actors, etc., because the platform offers a wide range of languages already available that our learning and development teams can choose from for our videos.”

While customers are increasingly using the platform to create videos for all different purposes, employee training and learning and development has been the most common use case so far, says Synthesia cofounder and CEO Victor Riparbelli. The company is continuing to take advantage of the advancements in AI to make videos even more engaging, moving beyond broadcast to interactive choose-your-own-adventure-style videos that provide training paths customized to individual needs.

“An interactive AI video in Synthesia might start off the same for everyone, but it might branch into a more detailed explanation for more advanced viewers, for example,” Riparbelli says. 

Welcome to the simulation

Sometimes, watching a video isn’t enough. That’s where simulation-style training comes into play. 

For example, researchers at New Jersey Institute of Technology, Robert Wood Johnson Medical School, and software company Robust AI have developed an AI-powered program to teach and simulate the basic tenets of laparoscopic surgery. Using the actual tools used in surgery, medical students complete exercises to transfer rings between pegs without dropping them and within short time constraints, mimicking the delicate movements surgeons need to complete with swift precision. 

The team used convolutional neural networks to train the model to recognize the different components. Another neural network, trained on the correct sequence of actions a user should follow, then detects when a user is out of sequence, enabling the program to give feedback to correct their action.

A recent study from this year showed the program is as good and even slightly better than faculty human evaluators when rating surgical skills. Currently, students are using the program informally, but it is headed to become an official part of the curriculum. Since surgical training involves significant oversight and input from senior surgeons who are typically already inundated with responsibilities, and since mistakes come with significant costs, improvements that allow students to do more realistic training in lower-pressure settings have enormous potential.  

“An app like ours helps to reduce medical errors. Students can practice as much as they like in the app before they enter the operating room,” says Usman Roshan, an associate professor of computer science at New Jersey Institute of Technology who’s been collaborating on the program.

Benefits of improved AI-enabled simulation-style training stretch beyond the operating room, however. Strivr, a company combining AI and virtual reality to create immersive training experiences, is serving customers across logistics, transportation, retail, and other industries, such as Walmart, Verizon, and Amazon. Strivr uses AI to create custom content for customers (to build out avatars for 3D environments, for example) and also to power user-facing capabilities that make up the training experience, such as AI-powered dynamic conversation abilities. Previous trainings included only scripted dialogue, but recent advancements in AI are making it possible for users to engage in more naturalistic, nonlinear conversations with the avatars in their training simulations. 

“AI allows for a more realistic, real-world applicable training experience,” says Strivr founder and CEO Derek Belch.

The pursuit of real-time training

Thanks to AI, Strivr is also making progress on its next frontier: augmented-reality-powered experiences that guide workers in real time and connect them to information they need while performing a job. The company is working with 10 design partners to build out early versions of its platform for real-time guidance, called WorkWise.

“The end result of all of this is going to be someone—let’s just say a warehouse worker—putting packages on a truck. They’re going to be wearing smart glasses, and the glasses are going to be telling them what to do in real time. This is kind of ironic, given what we’ve been doing for the last 10 years, but you’re probably not going to have to train people, or you’re going to significantly reduce the amount of training time required,” says Belch.

While smart glasses are still in their infancy and this vision is still a work in progress, AI is already powering real-time guidance experiences via smartphones and other wearable devices, and reducing the need for upfront training as a result. Alex Hawkinson, CEO of BrightAI, a company creating AI solutions for blue-collar industries, for example, worked with a manufacturer of custom pool covers to do just that. Traditionally, two workers would spend an entire afternoon manually measuring a pool and creating a CAD model to design the cover. The company developed an autonomous scanning system and accompanying copilot, or assistant tool, that gives real-time guidance to lead the process, build the models on the spot, and then creates the cost estimate for the job. 

The real-time guidance dramatically speeds up the manufacturing process and reduces measuring errors, Hawkinson says, but it also makes these jobs more available to lesser trained workers. Across the various fields BrightAI works in, like HVAC and energy infrastructure, he says real-time guidance makes it possible to decrease the training requirements and productively deploy new hires.

“It doesn’t have to be a highly trained person to go out and measure. It talks directly to the manufacturing system,” he says. “So while the person is sitting there with the customer at their house, it shows the quote and what the cover is going to look like. It helps them visualize that with a copilot that we built for that worker, and then the customer can say ‘yes’ and it can be there in three days.”

This story was originally featured on Fortune.com

© Illustration by Simon Landrein

  •  

Companies are overhauling their hiring processes to screen candidates for AI skills—and attitudes

As companies race to incorporate AI into their workflows, it’s not only models and tools they’re relying on for a competitive advantage but, increasingly, people. Across industries, 66% of business leaders said they would not hire someone without AI skills, according to the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn.

Company leaders and professionals in the hiring space say they’re now specifically considering candidates’ proficiency with AI tools and sometimes even prioritizing these skills over professional experience. They’re also reimagining their hiring processes, developing new ways to screen for candidates’ familiarity with and ability to use AI tools. Their approaches range from focusing interview conversations on AI—providing an opportunity to gauge a person’s familiarity with and attitude toward the technology—to having candidates complete tasks with AI tools and observing how they use them.

“Every organization is—no matter what the skill set might be—looking to see if they can find someone that potentially has some experience with AI, and specifically generative AI, and now you’ve got agentic AI on the horizon, so they’re definitely looking for people who have experience in those areas,” said Thomas Vick, senior regional director for technology at talent and consulting firm Robert Half.

Skills take center stage

Vick said he noticed the emphasis on AI skills in hiring emerge about a year ago and continue to accelerate ever since. The clear trend is that AI skills are now deemed as important as experience and education.

In the LinkedIn and Microsoft report, which included insights from a survey of 31,000 people from 31 countries, 71% said they would hire a less experienced candidate with AI skills over a more experienced candidate without them. PwC’s 2024 AI Jobs Barometer states that skills sought by employers are changing at a 25% higher rate in occupations most able to use AI, such as developers, statisticians, and judges. Additionally, a study on hiring trends in the U.K. found that candidates with AI skills are landing wages 23% higher compared to those without, making a greater difference than higher degrees up until the PhD level.

Alyssa Cook, a senior managing consultant at hiring and staffing firm Beacon Hill, has also observed that hiring teams are more willing to hire candidates with AI skills. What’s more, she said, skills with specific AI tools a company is using or wants to adopt can even take precedence over an overall greater depth of experience with AI. 

“Companies would rather hire a candidate who has hands-on experience with a particular tool they are implementing if they have the ability and interest to train up on other skills,” she said 
The newfound focus on AI skills in hiring is happening across the various departments of companies. Vick said he’s seen it across accounting, finance, creative roles, and especially technical roles. According to job listing data cited by the Wall Street Journal, one in four U.S. tech jobs posted so far this year are looking for people with AI skills.

The AI test

Automation firm Caddi is one company where this is playing out across the organization. CEO Alejandro Castellano said interviewers regularly ask candidates about their experience using AI tools; for technical candidates, the firm encourages individuals to use AI coding assistants like Cursor, Claude Code, or Copilot during code analysis and technical exercises.

“We want to see how they work in real conditions,” said Castellano.

The approach flips on its head the way companies have traditionally tested candidates for software engineering jobs. Typically, coding tests have been designed to isolate candidates from their real workflows in order to assess their fundamental knowledge. In a world where AI tools are increasingly used to help employees accomplish particular tasks, however, this old approach hardly makes sense. In their day-to-day duties, developers and engineers must be able to work effectively with these systems to enhance their own productivity—not delve into the realm of theory and concepts. 

“We’re moving toward exercises that reflect how engineers actually work, how they search, use AI suggestions, and debug. We care as much about how they solve a problem as we do about the end result,” Castellano said.  

Ehsan Mokhtari, CTO of ChargeLap, a company that creates software for electric-vehicle charging, said encouraging candidates to use AI tools has become a formal part of the firm’s hiring process. The effort started a year ago after it was noticed that candidates were avoiding using AI tools, assuming they would be penalized for it. So the company revamped its hiring process and its broader operations to embrace AI tools, starting with restructuring take-home challenges for technical candidates and then rolling out the effort for positions across the company.

“We started with engineering, but we’re now pushing it org-wide. Sales came next—they were surprisingly fast to adopt AI. Tools like ChatGPT are now common for them for research and outbound comms. We’ve made AI literacy part of departmental OKRs,” Mokhtari said. “That means every function—support, product, sales, engineering, operations—is expected to include it in their hiring considerations.”
In working with clients on their hiring, Robert Half’s Vick has seen a variety of approaches to screen candidates for AI skills. Some companies are turning to their contractors, Vick says, asking those with AI experience to help them evaluate candidates during the interview process.  One of the most popular techniques he’s seen is bringing job candidates into a “sandbox” environment and having them actually show how they would utilize AI within that environment to complete various tasks. It’s the same idea as the reimagined coding assessments, but applicable to any role in the organization.

Attitude goes a long way

While company leaders generally say they would hire a candidate who is proficient with AI over one who isn’t, they also stress that there’s more to it than skills: Attitude also plays a significant role. 

ChargeLab’s Mokhtari explained that he looks at AI proficiency in two layers: skill set and mindset. While skill set is highly desirable, it can also be easily taught. Mindset, however—being proactive in using AI, curious about where it can add value, and not being combative toward it—“is harder to coach and more important long-term,” he said.

Castellano echoes this idea. He’s found that understanding how someone thinks about and works with AI is one of the strongest signals the company has found to gauge that person’s ability to keep delivering value in a fast-changing environment.

“We’re not just looking for people who know the tools,” he said. “We’re looking for those who are curious, adaptable, and thoughtful about how they use AI. That mindset makes the biggest difference.”

This story was originally featured on Fortune.com

© Illustration by Simon Landrein

  •  

‘AI fatigue’ is settling in as companies’ proofs of concept increasingly fail. Here’s how to prevent it 

AI experimentation inside companies has been moving swiftly, but it’s not always going smoothly. The share of companies that scrapped the majority of their AI initiatives jumped from 17% in 2024 to 42% so far this year, according to analysis from S&P Global Market Intelligence based on a survey of over 1,000 respondents. Overall, the average company abandoned 46% of its AI proofs of concept rather than deploying them, according to the data. 

Against the backdrop of more than two years of rapid AI development and the pressure that has come with it, some company leaders facing repeated AI failures are starting to feel fatigued. Employees are feeling it, too: According to a study from Quantum Workplace, employees who consider themselves frequent AI users reported higher levels of burnout (45%) compared to those who infrequently (38%) or never (35%) use AI at work. 

Failure is of course a natural part of R&D and any technology adoption, but many leaders describe feeling a heightened sense of pressure surrounding AI compared to other technology shifts. At the same time, weighty conversations about AI are unfolding far beyond the workplace as AI takes center stage everywhere from schools to geopolitics. 

“Anytime [that] a market, and everyone around you, is beating you over the head with a message on a trending technology, it’s human nature—you just get sick of hearing about it,” said Erik Brown, the AI and emerging tech lead at consulting firm West Monroe.

Failure and pressure drive “AI fatigue”

In his work supporting clients as they explore implementing AI, Brown has observed a significant trend of clients feeling “AI fatigue” and becoming increasingly frustrated with AI proof of concept projects that fail to deliver tangible results. He attributes a lot of the failures to businesses exploring the wrong use cases or misunderstanding the various subsets of AI that are relevant for a job—for example, jumping on large language models (LLMs) to solve a problem because they’ve become popular, when machine learning or another approach would actually be a better fit. The field itself is also evolving so rapidly and is so complex that it creates an environment ripe for fatigue. 

In other cases, the pressure and even excitement about the possibilities can cause companies to take too-big swings without fully thinking them through. Brown describes how one of his clients, a massive global organization, corralled a dozen of its top data scientists into a new “innovation group” tasked with figuring out how to use AI to drive innovation in their products. They built a lot of really cool AI-driven technology, he said, but struggled to get it adopted because it didn’t really solve core business issues, causing a lot of frustration around wasted effort, time, and resources.

“I think it’s so easy with any new technology, especially one that’s getting the attention of AI, to just lead with the tech first,” said Brown. “That’s where I think a lot of this fatigue and initial failures are coming from.”

Eoin Hinchy, cofounder and CEO of workflow automation company Tines, said his team had 70 failures with an AI initiative they were working on over the course of a year before finally landing on a successful iteration. The main technical challenge was around ensuring the environment they were building for the company’s clients to deploy LLMs would be sufficiently secure and private, so they absolutely had to get it right.

“There were certainly moments when we felt like we’d cracked it and, yes, this is it. This is the feature that we need. This is going to be the big-step change—only for us to realize, actually, no, we need to go back to the drawing board,” he said.

Aside from the team that was actually working out the technical solutions, Hinchy said other parts of the organization were also fatigued by the ups and downs. The go-to-market team in particular was trying to do its job in a competitive sales environment where other vendors were releasing similar offerings, yet the pace of getting to the finalized product was out of their hands. Aligning the product and sales team turned out to be the biggest challenge from an organizational standpoint, said Hinchy. 

“There had to be a lot of pep talks, dialogue, and reassurance with the engineers, product team, and our sales folks saying all this blood, sweat, and tears up front in this unglamorous work will be worth it in the end,” he said.

Let functional teams take charge

At cybersecurity company Netskope, chief information security officer James Robinson has felt his fair share of disappointment, describing feeling underwhelmed by agents that failed to deliver on various technical tasks and other investments that didn’t deliver after he got his hopes up. But while he and his engineers have largely stayed motivated by their own inner desires to build and experiment, the company’s governance team is really feeling the fatigue. Their to-do lists often read like work that’s already been completed as they have to race to keep up with approving new efforts, the latest AI tool a team wants to adopt, and everything in between. 

In this case, the solution was all in the process. The company is removing some of the burden by asking specific business units to handle the initial governance steps and setting clear expectations for what needs to be done before approaching the AI governance committee. 

“One of the things that we’re really pushing on and exploring is ways we can put this into business units,” said Robinson. “For instance, with marketing or engineering productivity teams, let them actually do the first round of review. They’re more interested and more motivated for it, honestly, so let them take that review. And then once it gets to the governance team, they can just do some specific deep-dive questions and we can make sure the documentation is done.”

The approach mirrors what West Monroe’s Brown said ultimately helped his client recover from its failed “innovation lab” effort. His team suggested going back to the business units to identify some key challenges and then seeing which might be best suited for an AI solution. Then they broke into smaller teams that included input from the relevant business unit throughout the process, and they were able to experiment and build a prototype that proved AI could help solve one of those problems within a month. Another month and a half later, the first release of that solution was deployed.

Overall, his advice for preventing and overcoming AI fatigue is to start small. 

“There are two things you can do that are counterproductive: One is to just succumb to the fear and do nothing at all, and then eventually your competitors will overtake you. Or you can try to do too much at once or not be focused enough in how you experiment [with] embedding AI in various parts of your business, and that’s going to be overwhelming as well,” he said. “So take a step back, think through in what types of scenarios you can experiment with AI, break into smaller teams in those functional areas, and work in small chunks with some guidance.”

The point of AI, after all, is to help you work smarter, not harder.

This story was originally featured on Fortune.com

© Illustration by Simon Landrein

  •  

Ahead of Tesla robotaxi launch, residents in one Austin neighborhood say Model Ys—with drivers—are circling their blocks over and over

Christian Pfister, a 68-year-old retiree, walks his Great Pyrenees, Wally, each morning on the street in his quiet neighborhood—a compilation of old oak tree-lined streets for single-family homes, duplexes, and apartments in southeast Austin where he’s been living the last 26 years. It was about three weeks ago, on one of these morning strolls, that he spotted a white Tesla Y with a Texas manufacturer plate drive by, with a dark-colored Tesla closely trailing behind it.

He watched as the Tesla tandem conducted a left turn at a street up ahead of him, disappeared around the block for half a mile, then drove by him again—once, then twice, then again and again. 

“That’s all they did—around the same block over and over and over, all day long,” Pfister says in an interview.

Since Pfister’s spotting of the vehicles a few weeks ago, a handful of white Teslas (and some black and gray Teslas too) have frequented the streets of Pfister’s small neighborhood, driving the same routes and taking the same turns repeatedly—typically with drivers in the front seat, though two residents in the neighborhood that Fortune interviewed say they have seen some driverless vehicles with someone in the passenger seat. Another resident saw Teslas without anyone in them at all on multiple occasions.

Tesla is testing the vehicles in the neighborhood as it gears up for a long-anticipated launch of its self-driving taxi service in Austin by the end of this month. The EV company, which has been working on autonomous technology for more than a decade now, has said it is finally ready to go up against robotaxi competitors like Alphabet, whose subsidiary Waymo has already offered 10 million paid rides and is operating in four cities and planning to launch soon in several more. Elon Musk has assured investors that Tesla’s robotaxi service, which will initially start small with 10 to 20 vehicles, will expand to several other cities before the end of the year. But it all will start in Austin—and specifically in this small neighborhood—as Tesla proves its concept and irons out any kinks.

When the sightings of Tesla’s robotaxis began a few weeks ago, they raised alarm among some of the people who lived in the neighborhood. A couple of residents took to the community messaging platform Nextdoor to query their neighbors as to why white Teslas—with drivers—were parking in front of their houses for long stretches of time. “It’s freaking me out,” one woman posted.

Anastasia Maren, 24, who moved into the neighborhood last month, said she has seen Teslas drive by or park in front of her duplex repeatedly since she moved in, particularly when she is going on walks.

“They stare you down as if you’re in their way, or you’re the one who shouldn’t be here,” Maren says of the drivers. She says that while she has sometimes seen the vehicles driving around with only someone in the passenger seat—she often sees a person in the driver’s seat controlling the vehicles. “Sometimes I can see the person actually turning the wheel,” she says.

A 37-year-old Austin resident, Robert Yeats, who lives in an apartment complex further north in the neighborhood than Maren and Pfister, says he sees white Teslas line up in front of his apartment, parked and with their hazard lights on, often in groups of about four. In some cases, the Teslas were parked in the middle of the road with their hazard lights on, forcing other drivers to go around them. According to one resident, the tests have occurred as late as 10 p.m. None of the residents Fortune spoke to said they had received any notice or information from Tesla about the testing in their neighborhood.

Austin residents are used to seeing self-driving vehicles around town. Waymo’s cars started mapping the city in 2023 with safety drivers on board, and has since begun offering passenger service around the city without safety drivers in the vehicles. Pfister told Fortune he has seen Waymos parked overnight in front of empty lots in the same neighborhood. A few years ago, Cruise had released robotaxis on the streets of Austin, back before parent company General Motors stopped all rides, and later shut down the ride-hail service, after a high-profile accident in San Francisco.

But the Tesla sightings add to the questions that many industry observers have about the viability of the company’s technology and approach to autonomous driving. While other autonomous vehicle companies have needed to digitally map roads and neighborhoods before launch, Tesla claims that its camera-only system doesn’t require high-definition mapping, radar, or lidar technology. According to the company, its approach to autonomous driving is less expensive and more adaptable than the competition: instead of mapping an area for months, Tesla cars can figure out the terrain wherever they are.  But if that’s the case, why are Teslas driving around the same streets of one neighborhood over and over—and why do many of the vehicles have someone driving them?

“I thought, well, maybe they’re just in the driver’s seat, so that if something goes wrong, they can grab the steering wheel. But they are actually driving the car,” Pfister says, noting that he has seen the drivers with their hands on the steering wheel. “They are actually driving the car, so it’s not driverless. I don’t really understand.”

Tesla did not respond to a request for comment.

Tesla has also conducted testing in at least two other locations in Texas. There was a scheduled testing with emergency vehicles in a separate isolated street in Austin, as Fortune earlier reported. Tesla also did testing at a training facility in Florence, Tex., with the Texas Department of Public Safety’s crash reconstruction team. During that event, state agencies set up scenarios for Tesla’s robotaxis to operate, so that the company could collect information about how to respond to various encounters with emergency personnel and equipment, such as crash scenes or flashing lights and sirens, according to a spokesperson for the Texas Department of Public Safety.

But it’s along a few blocks of the neighborhood in southeast Austin where Tesla has been conducting its regular, real-world testing in the weeks before launch. There’s a Tesla Supercharger station just across a busy street—the only station for about two miles—as well as a Tesla collision center less than two miles down the road. The neighborhood itself features quiet streets, though Teslas will have to cross a busy road to get to the charging station. There aren’t sidewalks on the residential streets, so residents walk their dogs or push strollers on the street itself—giving the cars an opportunity to operate with obstacles in a controlled environment. The three residents tell Fortune that the cars appear to operate at speeds no greater than 25 miles per hour.

Tesla is nearing the end of the June deadline that Musk set for launch—with just three weeks until the end of the month. A Bloomberg report had suggested the company was aiming for a June 12 launch. But as of Tuesday, June 10, several important pre-launch checklist items appeared to be outstanding. Tesla had provided drafts, but not finalized emergency responder guides, nor had it conducted emergency responder trainings to the Austin Transportation and Works Department of the Austin Fire Department as of Tuesday, the agencies told Fortune. As Fortune earlier reported, the EV maker told city employees those items would be furnished before the company launches service. 

This story was originally featured on Fortune.com

Daniel Pier—Getty Images
  •  

Snap CEO Evan Spiegel promises new lightweight ‘Specs’ smart glasses next year, in race to beat Meta and Google to market

Snapchat, long-known as a featherweight in the league of Big Tech giants, is hoping to best opponents Meta, Google and Apple by releasing its new augmented reality AI-enabled smart glasses months, maybe even years, before the big guys. 

Speaking at a conference on Tuesday, Snap CEO Evan Spiegel said the company would release a new version of its camera-equipped glasses next year that will incorporate an interactive, AI-enhanced digital screen within the lens. The 2026 release date would be ahead of Meta, which plans to release its AR “Orion” glasses in 2027, while Google has not attached a date to its Android XR glasses

“The tiny smartphone limited our imagination,” Spiegel said in his keynote at the Augmented World Expo conference in Long Beach, Calif. “It’s clear that today’s devices and user interfaces are woefully inadequate to realize the full potential of AI.” 

The new “Snapchat Specs” will be lightweight and AI-enhanced, Snap said. They will allow users to look at objects in the real world and leverage AI to access information, such as translating ingredients on a label from foreign languages. The glasses will also allow users to interact with the objects on the lens, Snap said, citing examples like playing video games with their eyeballs.

The company did not share photos of the Specs frames or provide information on pricing. As part of the Specs announcement, Snapchat shared that operating system partnerships with OpenAI and Google Gemini will extend into experiences for the glasses. 

If Snap follows through on the promise of 2026 launch, it would be the first Big Tech company to market with augmented reality glasses for mainstream consumers, claiming an early lead in the race to create the successor to the smartphone—a competition involving everyone from Meta, Google, and Apple, to ChatGPT maker OpenAI, which recently announced a partnership with former Apple design boss Jony Ive.   

A pioneer in the glasses form factor, Snap made waves with the release of its “Spectacles” in 2016. The funky looking glasses were equipped with a camera that allowed users to post photos and short video clips directly to their Snapchat feed. But in recent years, Snap’s Spectacles have been eclipsed by Meta, which partnered with EssilorLuxottica to release Ray-Ban smart glasses. Though Meta hasn’t shared financials around its Ray-Ban glasses, EssilorLuxottica noted that the companies have sold over 2 billion glasses since their 2023 debut. Luxottica plans to increase products of the co-branded glasses to 10 million units by 2026, suggesting that the companies are pleased with the results and potential of the glasses. 

That said, Meta’s glasses do not have AR capabilities; rather, the glasses have audio-based AI features as well as photo and video capability. Meta has said it will release its Orion AR glasses in 2027, with technology that will allow users to scan their Threads feeds with eye tracking hardware.  

Other tech giants have glasses in their sights, too. At its IO developer’s conference in May, Google announced that it would join the smart glasses market by partnering with Warby Parker. And Apple, whose $3,500 VisionPro headset has failed to catch on with consumers, is reported to release smart glasses next year that mimic the current version of Meta’s Ray Bans, while working on more advanced AR glasses that are still years away, according to Bloomberg.

The Specs announcement follows a turbulent financial period for Snapchat. After years of worrisome financials, Snapchat seems to have stabilized and increased free cash flow in the most recent quarter. The glasses are partially a revenue diversification effort as the company is propagated by ads to its social network.

Still, Snapchat did not share what the glasses will cost consumers. Meta’s Ray-Ban glasses, which do not have AR capabilities, cost between $239 and $303 so it’s reasonable to assume the Specs’ prices will be steeper due to the hardware requirements. 

The style and comfort of the glasses are also likely to be critical, with consumers having repeatedly demonstrated an aversion to bulky- or geeky-looking smart glasses and headsets. With its 2026 launch date, Snap has thrust itself back into the conversation, but success will rest on whether it can produce a product consumers actually want to wear.

This story was originally featured on Fortune.com

© JOEL SAGET / AFP) (Photo by JOEL SAGET/AFP via Getty Images

Snap cofounder and CEO Evan Spiegel wears the the developer version of the Spectacle Augmented Reality glasses. Images of the new "lightweight" version of the Specs have yet to be released.
  •  

Exclusive: Ex-Meta AI leaders debut an agent that scours the web for you in a push to ultimately give users their own digital ‘chief of staff’

The trio is widely regarded as among the world’s most elite AI talent.  All three are veteran ex-Meta researchers who helped lead the company’s high-profile generative AI efforts—and before that, ran labs together at Georgia Tech. 

Devi Parikh led Meta’s multimodal AI research team. Dhruv Batra headed up embodied AI, building models that help robots navigate the physical world. And Abhishek Das was a research scientist at Meta’s Fundamental AI Research lab, or FAIR.

A year ago, Parikh and Das left Meta to launch Yutori, a startup named after the Japanese word for the mental spaciousness that comes from having room to think. Batra joined a couple of months later. 

Now, investors are betting big on the team’s vision for Yutori. Radical Ventures, Felicis, and a roster of top AI angels—including Elad Gil, Sarah Guo, Jeff Dean, and Fei-Fei Li—have backed Yutori’s $15 million seed round. The mission: to rethink how people interact with  AI agents—where the software, not the user, is the one doing the surfing to accomplish tasks like an AI ‘chief of staff.’

Taking daily digital chores off your plate

“The web is simultaneously one of humanity’s greatest inventions—and really, really clunky,” said Parikh. Yutori’s long-term dream, she explained, is to build AI personal assistants—in the form of web agents—that can take daily digital chores off your plate without you lifting a finger, leaving you with time to tackle whatever brings you joy. But to make agents people actually want to use, she said, the entire experience needs a redesign—from product and user interface to technical infrastructure. 

“That’s something that’s harder for larger entities to think through from scratch, since they are incentivized to think about their existing products,” said Parikh, adding that she saw a lot of that at Meta. “We have the luxury to be able to just think from scratch.” 

Parikh explained that Yutori’s focus is on improving  interaction with generative AI.  It should be dynamic and adapt to the task at hand, rather than using a rigid, pre-designed template like a chat box or a web page.

For example, if an AI agent is ordering food on DoorDash for you, it might need to show which restaurants it searched, what menu items it considered, and a few options you can quickly review and confirm. But if that same agent is monitoring the news and generating daily summaries, the format should be entirely different—perhaps organized like a briefing or timeline.

Ultimately, Parikh believes, a system should intelligently decide how to present information and how users can interact with the agent to refine or redirect the task. To get there, Yutori is building on top of existing models, including Meta’s Llama, with a singular focus on agents that can navigate the web and take actions on behalf of a user. 

Today, Yutori announced its first consumer product, Scouts, which the Yutori team explained is like having a team of agents that can monitor the web for anything you care about. Say you’re interested in buying a phone: You want to have a team of agents monitoring the web for whenever there is a discount on the Google Pixel 9. A Scout can notify you when that happens. Or if you are interested in a daily news update on an obscure topic, you can set up a Scout for that.

“Anything of this flavor where you want a team of agents to monitor the web and then notify you, either based on a condition or at a particular time, that’s  the use case we are going after,” said Das. His own very Scout, he explained, is one to reserve tennis courts in San Francisco. He asked his Scout to “Notify me whenever a tennis court in Buena Vista park becomes available for Mondays at 7:30am.” He gets timely notifications over email and he ends up booking the courts. Scouts is free to use, though there is a waitlist for access. 

Unlike traditional search tools – like Google Alerts, for example – Scouts work deeper behind the scenes, autonomously operating browsers and clicking through websites to gather details.  They can also monitor dozens of sites at the same time to find updates. While other companies like OpenAI may be going after the same kind of idea, Batra said that it’s “still early” in the AI agent space and that Yutori is not deterred: “I think we still have a shot.” 

A long-term consumer vision

While Yutori has launched its first product, both the founding team and its investors are clear: the initial $15 million investment is less about this specific release, and more about the team’s bona fides and its long-term consumer vision. For years, the three close friends—Parikh and Batra have also been married since 2010, while Batra advised Das’s PhD—had met weekly over brainstorming dinners and long discussed the possibility of starting a company focused on the future of AI agents. 

“In the early stages of a startup, the quality of the team is the single most important thing—more than the idea, more than the product, more than the market,” said Rob Toews, partner at Radical Ventures, which led Yutori’s seed round and has invested in AI startups including Cohere, Waabi, and Writer. He emphasized that only a “very, very small set of individuals” in the world have the technical depth and creative judgment to build cutting-edge AI systems.

“The Yutori founding team is very much in that upper echelon,” he said, referring to the three cofounders and their initial hires, totaling 15. “It’s just an incredibly dense talent team, top to bottom. Everyone they’ve hired so far is a highly coveted researcher or engineer from places like Meta, Google, and Tesla. Teams of this caliber just don’t come along very often.”

It’s a $15 million bet on what Batra called “an experiment” and “a hypothesis,” explaining that the goal of Scouts is to learn how people actually use autonomous agents in real life—and then iterate quickly. Take an agent that can monitor a gym’s scheduling page every 20 minutes: “No human wants to sit down and do that,” said Das.

For now, Yutori has no plans to charge for its products, but instead to keep experimenting to see what clicks with consumers. Ultimately, the founders say they aren’t selling AI for its own sake, but instead are focused on rethinking not just the tasks agents can take on, but the context in which they operate. Today’s digital assistants are mostly reactive—requiring users to reach for their phones, open an app, and manually explain what they need. Yutori’s vision is to remove that friction by building agents that understand what a user is doing in the moment and can proactively step in to help.

It’s a vision the founders have been working toward since their time at Meta, where they experimented with early versions of smart assistants in Meta’s smart glasses made in partnership with Ray-Ban. At Yutori, they’re continuing that work—testing different ways to deliver helpful support exactly when people need it.

This story was originally featured on Fortune.com

© Courtesy of Yutori

Yutori co-founders (left to right) Dhruv Batra, Devi Parikh and Abhishek Das.
  •  

Elon Musk’s Starlink was installed on the White House roof—Dems say it may ‘undermine national security’ by exposing sensitive data to hackers

  • Despite warnings from White House security and communications experts, Elon Musk’s DOGE team installed a Starlink satellite internet system at the White House, sparking a confrontation with the Secret Service. Lawmakers and security professionals are concerned that the setup could undermine national security and expose sensitive White House communications.

Elon Musk’s DOGE team reportedly installed a Starlink satellite internet system in the White House despite the objections of government security experts.

According to The Washington Post, White House communications experts reportedly raised concerns over the installation of the satellite internet system, citing national security concerns.

At the time, the installation also reportedly sparked a confrontation between DOGE employees and the Secret Service.

Staffers from Musk’s DOGE team set up the Starlink terminal on the roof of the Eisenhower Executive Office Building in February without informing White House communications teams.

This setup allowed internet access through Starlink without standard tracking or authentication safeguards, three people told The Washington Post, potentially exposing the White House to data leaks or hacking.

Unlike other government Wi-Fi systems, the “Starlink Guest” Wi-Fi required only a password rather than the usual username or two-factor authentication. Such a connection could allow devices to bypass security, evade monitoring, and transmit untracked data, according to the report.

It is unclear if the Starlink terminal is still installed at the White House following Musk’s exit and public rift with Donald Trump, but the satellite internet system has also reportedly been used at other government agencies.

Representatives for the White House did not immediately respond to a request for comment made by Fortune.

However, Secret Service spokesman Anthony Guglielmi told The Washington Post: “We were aware of DOGE’s intentions to improve internet access on the campus and did not consider this matter a security incident or security breach.”

Security Concerns

Starlink’s satellite connections are generally considered more difficult to hack than traditional U.S. telecommunications networks, which have been compromised by foreign adversaries in the past.

However, this added layer of security does not address the core issue: the inability to monitor or control data leaving the White House.

Sources told The Post that any added security from satellite connections does not solve the issue of monitoring restricted data leaving the premises. 

The lack of logging and authentication means that malicious software could enter the building undetected, posing an even greater risk than data leaks.

The controversy has drawn the attention of lawmakers.

Democrats on the House Oversight Committee have raised the alarm about the Trump administration’s use of Starlink at the White House and across government agencies.

“Brave whistleblowers have shared concerning and vital information with the Committee, and we are pursuing multiple investigations,” said Stephen F. Lynch, the committee’s acting top Democrat. “It could have the potential to undermine our national security by exposing sensitive data and information to hackers, our adversaries, or those wishing to do Americans harm.”

Democratic senators have previously criticized the potential conflict of interest between Musk’s role at SpaceX and in the government.

Last month, in a letter to President Trump, 13 Democratic senators accused the tech mogul of potentially leveraging his government role to secure lucrative private contracts for Starlink, his satellite internet venture, in foreign markets.

The senators urged Trump to launch an investigation into the deals and to make the findings public.

This story was originally featured on Fortune.com

© Photo by Win McNamee/Getty Images

Elon Musk's DOGE team reportedly installed a Starlink satellite internet system in the White House despite the wishes of government security experts.
  •  

Google CEO Sundar Pichai’s advice to young people is to work with those who outshine you

  • Google CEO Sundar Pichai, who turns 53 today, recently celebrated two decades with the tech giant. Reflecting on his career, he offered advice to younger workers who want to become leaders someday. He encouraged them to work with others who outshine them.

Today—Tuesday, June 10—one of the world’s most significant leaders in tech turns 53. During Sundar Pichai’s two-decade career with Google, he’s worked on many of the company’s major products including Google Chrome, Gmail, Google Maps, and Chromebook. In 2019, he became CEO of Alphabet and its subsidiary Google. His current net worth is estimated at about $1.1 billion.

As one of the most powerful leaders in tech, Pichai recently reflected on how he got to where he is in his career. On a recent podcast by Podium VC, he said it took “a lot of luck along the way,” but added “it’s important to listen to your heart and see whether you actually enjoy doing it.”

While Pichai sits at the helm of one of the largest tech companies in the world, his path to the top wasn’t a completely smooth ride. His advice to young people who aspire to be in leadership positions like him someday is to surround themselves with people who outshine them.

“At various points in my life, I worked with people who I felt were better than me,” Pichai said. “Get yourself in a position where you’re working with people who you feel are stretching your abilities. [It’s] what helps you grow. [Put] yourself in uncomfortable situations. I think often you’ll surprise yourself.”

How Sundar Pichai became CEO of Google 

Pichai was born and raised in Chennai, India, to a father who was an electrical engineer and a mother who worked as a stenographer. They were considered to be a middle-class family; Pichai told Yahoo Finance he was fortunate to have grown up in a household where education was valued. 

He said he had minimal access to computers growing up—and even recalled being on a waitlist for five years to get a rotary phone. He said experiencing technology for the first time changed his life. 

“It was a vivid moment for me as to how access to technology can make a difference,” Pichai told Yahoo Finance, adding that his limited exposure to computers during childhood is something he’s carried with him throughout his career, serving as inspiration for the rollout of Chromebooks to students in the U.S.

Pichai moved to the U.S. in 1993 to earn his master’s degree in materials science from Stanford University in the heart of Silicon Valley. He briefly worked for a semiconductor materials company after graduating, but then went back to school to earn his MBA from the Wharton School at the University of Pennsylvania. Pichai had a brief stint at McKinsey & Co. after earning his MBA and landing at Google in 2004. 

“I think it’s tough to find things you love doing, but I think listening to your heart a bit more than your mind [helps] in terms of figuring out what you want to do,” Pichai said during the podcast. 

Reflecting on 20 years at Google in April 2024, Pichai said a lot had changed about the company since he first joined, like the technology, the number of people who use Google products, and his hair. 

“What hasn’t changed—the thrill I get from working at this amazing company,” Pichai wrote in an Instagram post. “20 years in, I’m still feeling lucky.”

This story was originally featured on Fortune.com

© Getty Images—Bloomberg

Google CEO Sundar Pichai turns 53 on Tuesday.
  •  

At least 6 Waymo autonomous vehicles have been vandalized amid anti-ICE protests in Los Angeles

At least six Waymo self-driving cars have been damaged by the violence taking place in Los Angeles in recent days amid protests against federal immigration raids, according to a representative for the company. 

The autonomous vehicles, as well as some Lime electric scooters, have been vandalized, and in some cases set on fire and completely destroyed. Videos of people climbing the Waymo robotaxis and bashing in the windshields, as well as clips of Waymo cars engulfed in flames, were shared widely online, quickly becoming key imagery of the protests in downtown Los Angeles. 

The Los Angeles Police Department warned people on Sunday to steer clear of the area, due to the risk of toxin exposure from electric batteries catching on fire. All of the self-driving taxis deployed by Waymo, which is owned by Alphabet, are electric. “Burning lithium-ion batteries release toxic gases, including hydrogen fluoride, posing risks to responders and those nearby,” the department warned in a social media post.

No Waymo riders or employees were harmed during the incidents, and passengers had exited vehicles before they were vandalized, according to a Waymo spokesman. The company stopped service downtown on Monday as the protests continued, though Waymo continued to operate in the broader Los Angeles region.

“Out of an abundance of caution given the recent activity, we removed vehicles from Downtown Los Angeles and will not be serving that specific area of LA at the moment,” a Waymo spokesman said in a statement, noting that the company is working with the police department and other authorities to assess the situation.

Workers clean up debris after protestors violently burned autonomous Waymo vehicles near the City Hall in Los Angeles, California on June 9, 2025 amid protests over immigration raids.
Tayfun Coskun—Getty Images

It’s not clear whether protesters decided to specifically target the Waymo cars or the Lime scooters, and it’s possible that the vehicles and scooters were in an unfortunate place as the protests escalated.

The Los Angeles Department referred all questions to Waymo and said it did not know if any incident reports had been filed at this time. Waymo declined to comment on the total estimated damage, and Lime declined to comment. Analysts have estimated that the Waymo Jaguar I-Pace SUVs, which are equipped with radar and lidar equipment, cost between $150,000 to $200,000 each.

In San Francisco, where anti-ICE protests have also been ongoing, there was another isolated instance of a Waymo being vandalized, according to the company.

Caught in the crosshairs

Self-driving vehicles have periodically become the targets of vandals, with instances of tire slashing or people throwing fireworks into the vehicles. A couple years ago, a man with a hatchet chased several self-driving Cruise taxis around the streets of San Francisco—sometimes when there were passengers inside. 

President Donald Trump ordered the National Guard to intervene in Los Angeles on Saturday. Protesters clashed with police, dumpsters were vandalized, and the Los Angeles Police Department shared videos on social media of a store being looted. 

By Monday, the protests had calmed, though there was still a large group of protestors marching downtown.

Waymo, which operates in San Francisco, Phoenix, Los Angeles, and Austin, and is planning to launch in Austin and Miami, is currently the only robotaxi company in the U.S. offering commercial operations in several different markets. Tesla is preparing to launch a robotaxi service in Austin this month, although the rollout will be very limited with just 10 to 20 vehicles at first. Waymo said it had surpassed 10 million paid rides near the end of May.

This story was originally featured on Fortune.com

© Benjamin Hanson—Getty Images

A protester kicks a burning Waymo vehicle during an anti-ICE protest in downtown Los Angeles, California, on June 8, 2025.
  •  

Exclusive: Gusto launches $200 million–plus tender offer

Gusto, an HR tech startup valued at more than $9 billion, is conducting an over $200 million tender offer via a new deal led by the Ontario Teachers’ Pension Plan.

The tender offer, which begins Monday and runs through July 8, will allow employees in the company to cash out some of their shares while giving the Canadian fund its first stake in the company. 

“Given the momentum, we’ve had investors interested in owning Gusto stock for a long time,” Gusto cofounder and CEO Josh Reeves told Fortune via email. The offer will be open to both current and former employees with a minimum of two years of tenure. Gusto declined to disclose price per share and whether there is a maximum number of shares that employees can sell. 

The deal was done at Gusto’s last valuation, $9.3 billion, and is led by Teachers’ Venture Growth, which is part of Ontario Teachers’ Pension Plan. (OTPP, the largest single-profession pension plan in Canada serving over 340,000 current and retired teachers, is also an investor in Canva, Databricks, and SpaceX.) OTPP is the anchor for the deal, and is joined by new and existing Gusto investors. It’s a full-circle moment of sorts—Reeves’ parents are both teachers

The tender offer, the third that Gusto has arranged for employees since its founding in 2012, comes as the market for initial public offerings remains limited. Several tech companies, including Circle and Omada Health, have had IPOs in recent weeks, but the overall number of public listings remains well below historical norms.

Reeves declined to comment on Gusto’s IPO plans, telling Fortune: “Gusto has been a long-term focused, multi-decade company from day one … When we have more details to share on an IPO, we’ll share it.”

The company’s last employee tender offer was in 2021, done in addition to the startup’s $175 million Series E funding round. Gusto—founded in 2011 by Reeves, Tomer London, and Edward Kim—has been free cash flow positive since early 2023.  

As Fortune reported in May 2024, Gusto generated north of $500 million in revenue in its 2023 fiscal year. The company also said that it’s been growing over the past year, driven by the expansion of existing products like health benefits and 401(k) management. In 2024, Gusto’s 401(k) business grew its ARR, or annual recurring revenue, about 50% year over year, while the unicorn’s Gusto Money spending account product grew ARR over 140% year over year. 

HR tech has recently made headlines for the sprawling legal brawl between HR unicorns Rippling and Deel, but Reeves says that the space itself remains active and bright. In 2025, Reeves added, the company is set to add 150,000 new small businesses to its platform, and is actively hiring, with a particular focus on R&D.

“There is tremendous opportunity in the broader HR tech space,” said Reeves. “More businesses are being created while at the same time more rules and regulations are being introduced. Gusto can help. I have conviction that there will be multiple $100 billion–plus new companies built in this space, including Gusto. And as a reminder, Intuit is a $200 billion–plus company today; ADP is a $100 billion–plus company today; and Paychex is a $50 billion–plus company today.”

This story was originally featured on Fortune.com

© Courtesy of Gusto

Josh Reeves, cofounder and CEO of Gusto
  •  

JBL refreshes its 2025 soundbars with a serious power up

JBL is reintroducing its immersive soundbar lineup with even more power, but thankfully, the convenient detachable side speakers aren't going anywhere. JBL refreshed the entire Bar lineup, with the Bar 1000 MK2 leading the charge. The soundbar still features a 10-inch wireless subwoofer along with removable speakers on each end that let you continue playing your music or movies even if you wander away from the TV.

For the 2025 refresh, JBL kept the Bar 1000 MK2 on a 7.1.4 channel setup, but pumped up the max power output to 960W for an even louder and more immersive listening experience. The detachable speakers still have a max battery life of up to 10 hours, where you can reattach them to the soundbar to recharge. Like its predecessor, the Bar 1000 MK2 features true Dolby Atmos, thanks to four upfiring drivers, along with DTS:X 3D surround sound and MultiBeam 3.0. You won't have to constantly adjust the volume when bouncing between dialogue scenes and loud action since the updated soundbar has PureVoice 2.0 that automatically optimizes the dialogue volume based on the scene's ambient sound.

The Bar 1000 MK2 can still use Bluetooth or Wi-Fi to connect to a device, but is also compatible with AirPlay 2 and Spotify Connect. Besides the Bar 1000 MK2, JBL will debut the refreshed Bar 700 MK2 that comes with detachable speakers too, but can only virtualize Dolby Atmos. The updated Bar 500 MK2 and Bar 300 MK2 also don't offer true Dolby Atmos, nor JBL's Night Listening mode that automatically reduces loud noises. If you're in need of a soundbar with a subwoofer, it's worth noting that the new Bar 300 MK2 still doesn't have one.

The Bar 1000 MK2 is due to hit the shelves first at $1,199 later this month. The release of the $899 Bar 700 MK2, the $649 Bar 500 MK2 and the $449 Bar 300 MK2 will follow shortly after. The most expensive of JBL's Bar lineup, an 11.1.4-channel follow-up to the Bar 1300X, will release in the fall and start at $1,699. While it's much more expensive, it will come with detachable speakers that have standalone Bluetooth capabilities.

This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/jbl-refreshes-its-2025-soundbars-with-a-serious-power-up-120014177.html?src=rss

©

© JBL

The JBL Bar 1000 MK2 with its detachable satellite speakers resting on a TV stand.
  •  

US air traffic control still runs on Windows 95 and floppy disks

On Wednesday, acting FAA Administrator Chris Rocheleau told the House Appropriations Committee that the Federal Aviation Administration plans to replace its aging air traffic control systems, which still rely on floppy disks and Windows 95 computers, Tom's Hardware reports. The agency has issued a Request For Information to gather proposals from companies willing to tackle the massive infrastructure overhaul.

"The whole idea is to replace the system. No more floppy disks or paper strips," Rocheleau said during the committee hearing. Transportation Secretary Sean Duffy called the project "the most important infrastructure project that we've had in this country for decades," describing it as a bipartisan priority.

Most air traffic control towers and facilities across the US currently operate with technology that seems frozen in the 20th century, although that isn't necessarily a bad thing—when it works. Some controllers currently use paper strips to track aircraft movements and transfer data between systems using floppy disks, while their computers run Microsoft's Windows 95 operating system, which launched in 1995.

Read full article

Comments

© Getty Images

  •