At WWDC 2024, Apple failed its customers. When the company announced the new, more personal Siri last year, it showed a product that was nowhere near ready. You can point to many different places where Apple Intelligence failed to meet expectations, but with WWDC 2025 fast approaching, the company owes its users an explanation of how it intends to win back their trust.
If you didn't watch last year's conference, here's a recap. At the heart of Apple’s promise of a better digital assistant was App Intents, a feature that would give Siri the ability to understand all the personal information stored on your iPhone. During the presentation's most memorable moment, Apple demoed Siri responding to a request from Kelsey Peterson, the company’s director of machine learning and AI, for an update on her mom’s flight. The assistant not only understood the prompt, but provided real-time flight tracking information in the process.
In short, the demo promised – after years of neglect – that Siri would finally be useful.
It should have been obvious in hindsight that Apple was overselling its progress on Siri, and AI more broadly. At WWDC 2024, the company did not let press and other attendees try the new version of the assistant. There wasn't even an opportunity to watch the company's employees use Siri. In fact, according to reporting The Information later published, it’s probably more accurate to describe what Apple showed last June not as a demo but as an elaborate concept video.
If Apple had only faked the WWDC demo, that would have been bad enough, but the company did something worse. As you may recall, the tech giant began rolling out Apple Intelligence features piecemeal in September. Rather than issue a public statement explaining the lack of progress, the company only admitted it was delaying the upgrade to sometime "in the coming year" after Daring Fireball's John Gruber sought answers. That same day, as if the company had only just realized its error, Apple pulled a TV commercial that showed The Last of Us star Bella Ramsey using the new Siri in the way it had demoed at WWDC 2024.
It’s going to take a lot for Apple to fix Apple Intelligence, but the best place the company can start is by being honest with its customers. Corporations, especially ones as big as Apple, rarely show humility, but in this case, an acknowledgement from the company that it promised the moon and missed the mark would go a long way towards righting some of the sins of WWDC 2024.
This applies to other aspects of Apple Intelligence just as much as it does to Siri. Look at the damage notification summaries did to Apple’s reputation. Apple Intelligence was so bad at aggregating the news that the company ended up pausing the notifications in the iOS 18.3 beta. When it released 18.3 to the public a couple of weeks later, it began labeling the alerts to warn users they may include errors.
As for other Apple Intelligence features like Image Playground and Genmoji, they’re forgettable because they offer little utility and see Apple following trends rather than offering something that truly enhances the usefulness of its devices. There too the company can tell its users it missed the mark and it plans to do better.
There’s no reason Apple can’t make Apple Intelligence great, but any effort to do so has to start with the company being honest: about what its roadmap of features can actually do, and about owning up when its promises can't be fulfilled.
This article originally appeared on Engadget at https://www.engadget.com/ai/to-fix-apple-intelligence-apple-needs-to-be-honest-about-its-capabilities-130046256.html?src=rss
Discord co-founder and CTO Stanislav Vishnevskiy wants you to know he thinks a lot about enshittification. With reports of an upcoming IPO and the news of his co-founder, Jason Citron, recently stepping down to hand leadership of the company over to Humam Sakhnini, a former Activision Blizzard executive, many Discord users are rightfully worried the platform is about to become, well, shit.
"I understand the anxiety and concern," Vishnevskiy told Engadget in a recent call. "I think the things that people are afraid of are what separate a great, long-term focused company from just any other company." According to Vishnevskiy, the concern that Discord could fail to do right by its users or otherwise lose its way is a topic of regular discussion at the company.
"I'm definitely the one who's constantly bringing up enshittification," he said of Discord's internal meetings. "It's not a bad thing to build a strong business and to monetize a product. That's how we can reinvest and continue to make things better. But we have to be extremely thoughtful about how we do that."
The way Vishnevskiy tells it, Discord already had an identity crisis and came out of that moment with a stronger sense of what its product means to people. You may recall the company briefly operated a curated game store. Discord launched the storefront in 2018 only to shut it down less than a year later in 2019. Vishnevskiy describes that as a period of reckoning within Discord.
"We call it embracing the brutal facts internally," he said of the episode. When Vishnevskiy and Citron started Discord, they envisioned a platform that would not just be for chatting with friends, but one that would also serve as a game distribution hub. "We spent a year building that component of our business and then, quite frankly, we quickly knew it wasn't going well."
Out of that failure, Discord decided to focus on its Nitro subscription and embrace everyone who was using the app to organize communities outside of gaming. Since its introduction in 2017, the service has evolved to include a few different perks, but at its heart, Nitro has always been a way for Discord users to get more out of the app and support their favorite servers. For instance, the $3 per month Basic tier allows people to use custom emoji and stickers on any server, and upload files that are up to 50MB. The regular tier, which costs $10 per month, includes 4K streaming, 500MB uploads and more. They're all nice-to-haves, but the core functions remain free.
Marissa Leshnov for Discord
Vishnevskiy describes Nitro as a "phenomenal business," but the decision to look beyond gaming created a different set of problems. "It wasn't clear exactly who we were building for, because now Discord was a community product for everyone, and that drove a lot of distractions," he said.
That sense of mission drift was further exacerbated by the explosive growth Discord saw during the pandemic, as even more new users turned to the platform to stay in touch with friends during lockdown. "It covered up all the things that we didn't fully clarify about how we want to approach things," said Vishnevskiy. "We came out stronger. A lot of people were introduced to Discord, and it's their home now, but it's probably part of what made it take longer to realize some of the decisions we made at the time weren't right."
One of those was a brief flirtation with the Web3 craze of 2021. That November, Citron tweeted a screenshot of an unreleased Discord build with integrations for two crypto wallet apps. The post sparked an intense backlash, with users threatening to cancel their Nitro subscriptions if the company went forward with the release. Two days later, Citron issued a statement saying Discord would not ship the integration.
"We weren't trying to chase a technology. It was about allowing people to use Discord in a certain way, and that came with a lot of downsides. We were trying to do some integrations to limit some scams, and actually do right by users and make people safer," said Vishnevskiy. "But we really underestimated the sensitivity the general user base had to the topic of NFTs, and we did not do a really good job at explaining what we were trying to do."
According to reporting from that period, Discord's employees were partly responsible for the reversal. An internal server made up of workers and game studio representatives reportedly erupted over the proposed implementation.
Looking back, Vishnevskiy credits the company's employees, some of whom have been with Discord for a decade, for steering leadership in the right direction over the years. He says there have been situations where the company's employees have come to him and Citron to ask "why are we doing this?" He adds, "sometimes, they've pushed us to do things [Jason and I] didn't think we should be doing. I think that's an amazing asset to have. This product is built by people who love it and use it."
Coming out of the pandemic, Discord announced last year it would refocus on gaming. In the immediate future, that shift of strategy will see the company emphasize "simple things" like app performance and usability over "building new features." In March, users got a taste of that new approach, with the company releasing a redesign of its PC overlay that made it less likely to trigger anti-cheat systems like BattlEye. In turn, that made the overlay compatible with a greater number of the most-played games on Discord. In that same release, Discord added three new UI density options to give users more control over the look and feel of the app.
Moving forward, one area where the company wants to be particularly thoughtful is around AI. Discord has deployed the tech in a few areas – for example, it partnered with Krisp AI in 2019 to add noise cancellation to calls – but it also has wound down experiments that didn't work. "What we've found is that a lot of these things did not work well enough to be in the product," said Vishnevskiy, pointing to features like AutoMod.
The tool exists in Discord right now. Moderators can use it to filter for specific words and phrases. But when the company first pitched the feature, it envisioned an AI component that would help admins manage large, unruly servers, and even built a version of it that ran on a large language model. The company has yet to ship the feature because "it was making too many mistakes." Discord also experimented briefly with a built-in chatbot called Clyde that leveraged tech from OpenAI, but canned it less than a year later. At the time, the company didn't give a reason for the shutdown, but screenshots occasionally posted to the Discord subreddit showed Clyde could, often unprompted, say some questionable things.
"We're constantly retrying some of those ideas with modern models. No timeline on any of this because we will not ship until we think it's a good fit for the product," said Vishnevskiy, adding the last thing the company wants to do is "slap [AI] in because everyone else is doing it."
Looking to the future, Vishnevskiy says Discord is focused on helping game developers, especially as it relates to discovery. The majority of the most popular games on Discord are the same ones that were popular on the platform 10 years ago. That's where Vishnevskiy says the app's new Orbs currency comes in, which people can earn by watching interactive ads, playing a game, or streaming their gameplay to friends on Discord. Yes, it's a way for Discord to grow its revenue, but Vishnevskiy believes the system aligns player interests with developer interests by giving Discord users something in return for their time and attention.
At least that's the idea. I got to try the system after my interview with Vishnevskiy, and while it does feel friendly to users, I'd like to see how Discord plans to make it into something smaller game studios can leverage. Right now, many of the publishers the company has partnerships with are advertising releases that already have a lot of word of mouth going for them. I'm sure fans of Marvel Rivals will love the chance to earn an Ultron avatar decoration for their Discord, but a game with 147,000 concurrent players on Steam isn't exactly struggling.
Vishnevskiy wouldn't discuss the specifics of when and if the company plans to IPO, but did offer one last assurance for users. "Discord is something that is meant to be a durable company that has a meaningful impact on people's lives, not just now but in 10 years as well," he said. "That's the journey that Humam joined and signed up for too. We are long-term focused. Our investors are long-term focused."
While it may be true that Vishnevskiy and Discord's veteran employees have learned a lot over the company's sometimes turbulent history, it's not clear how a culture of experimentation and dissent might change with more shareholders to appease. The test will be whether Discord can stay true to itself and its many fans.
This article originally appeared on Engadget at https://www.engadget.com/gaming/discords-cto-is-just-as-worried-about-enshittification-as-you-are-173049834.html?src=rss
Nothing plans to launch the Phone 3, its first proper flagship, on July 1, the company announced today. We already knew the phone was coming this summer, thanks to a teaser Nothing shared during last month's The Android Show I/O Edition. During the segment, Nothing CEO Carl Pei said the new device will cost around £800 ($1,080), which would easily make it the most expensive handset the company has produced to date. Pei also said Nothing plans to go "all-in" on Phone 3 with "premium materials, major performance upgrades and software that really levels things up."
It will be interesting to see how Nothing builds on the Phone 3a and 3a Pro (pictured above), the two mid-range handsets the company released earlier this year. I reviewed both phones for Engadget, and felt they offered great value for the asking price of $379 and $459. I'm also curious to see if Nothing decides to make a proper push into the US market. While you can buy the company's phones stateside, they don't have robust carrier support. In any case, we'll find out more about the new Phone 3 next month.
This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/the-nothing-phone-3-arrives-in-july-134121908.html?src=rss
It was only earlier this year that Norway's Opera released a new browser, and now it's adding yet another offering to an already crowded field. Opera is billing Neon as a "fully agentic browser." It comes with an integrated AI that can chat with users and surf the web on their behalf. Compared to competing agents, the company says Neon is faster and more efficient at navigating the internet on its own because it parses webpages by analyzing their layout data.
Building on Opera's recent preview of Browser Operator, Neon can also complete tasks for you, like filling out a form or doing some online shopping. The more you use Neon to write, the more it will learn your personal style and adapt to it. All of this happens locally, in order to ensure user data remains private.
Additionally, Neon can make things for you, including websites, animations and even game prototypes, according to Opera. If you ask Neon to build something particularly complicated or time-consuming, it can continue the task even when you're offline. This part of the browser’s feature set depends on a connection to Opera's servers in Europe where privacy laws are more robust than in North America.
"Opera Neon is the first step towards fundamentally re-imagining what a browser can be in the age of intelligent agents," the company says.
If all of this sounds familiar, it's because other companies, including Google and OpenAI, have been working on similar products. In the case of Google, the search giant began previewing Project Mariner, an extension that adds a web-surfing agent to Chrome, last December. OpenAI, similarly, has been working on its own "Operator" mode since the start of the year.
Neon, therefore, sees Opera attempting to position itself as an innovator in hopes of claiming market share, but the company has a difficult task ahead. According to data from StatCounter, only about 2.09 percent of internet users use Opera to access the web. Chrome, by contrast, commands a dominant 66.45 percent of the market. That's a hard hill to climb when your competitors are working on similar features.
It's also worth asking if an agentic browser is something people really want. Opera suggests Neon is smart enough to book a trip for you. That sounds great in theory, but what if the agent makes an error and books the wrong connecting flight? A certain amount of friction ensures users pay attention and check things on their own.
If you want to try Neon for yourself, you can join the wait list.
This article originally appeared on Engadget at https://www.engadget.com/ai/operas-new-fully-agentic-browser-can-surf-the-web-for-you-145035874.html?src=rss
Games Workshop, maker of the popular Warhammer 40K tabletop game, held its annual Warhammer Skulls festival today, and announced a slew of new video games, remasters and DLC for its properties. I've collected some of the more exciting announcements below.
First, let's start with Space Marine 2. In March, Games Workshop announced a sequel was already in development. At the time, the company said the new game was "likely years away from release," but it also reiterated post-launch support for Space Marine 2 would continue. Today we got an update on Space Marine 2's long-awaited horde mode, called Siege Mode. It will arrive as part of a free update slated for release on June 26.
However, that's not all. On June 10, GW will release Space Marine — Master Crafted Edition. It's a remaster of the 2011 original developed by Relic Entertainment. The new version features updated textures and character models, with support for 4K resolutions, modernized controls and a refreshed user interface. If you've not played Space Marine, it's worth checking out if only to hear Mark Strong voice protagonist Captain Titus. Master Crafted Edition arrives on June 10 on Xbox Series X/S and PC through Steam and GOG.
Speaking of Relic, a remaster of the studio's excellent Dawn of War is also on the way. No word on an exact release date yet, but GW says Dawn of War – Definitive Edition will feature updated visuals, camera controls and a new HUD. The re-release will be compatible with mods for the existing game, and has a 64-bit code base to ensure it's playable on modern systems for years to come.
For fans of boomer shooters, there's a new Boltgun game on the way. It will arrive in 2026, and it's coming to Steam, Xbox Series X/S and PS5. The new game will pick up right where the first game ended, with a new non-linear single-player campaign that has new enemies for players to overcome. Of course, you'll also have access to new weapons with which to vanquish the Emperor's enemies in the most cartoonishly violent way possible.
In the meantime, today you can download Boltgun — Words of Vengeance, a free typing action game that will have you spelling words and phrases from Warhammer lore like "thin your paints" and "Ghazghkull." You bet I'm downloading it right now.
Last but not least, Owlcat, creator of the CRPG Pathfinder: Wrath of the Righteous, is working on a sequel to the criminally underrated Rogue Trader. The new game is an adaptation of GW's Dark Heresy RPG and casts the player as the leader of an Inquisition party. It looks like Owlcat has a bigger budget this time around, which is a great sign for the project. Rogue Trader had moments where it felt like, had Owlcat enjoyed the time and resources Larian devoted to Baldur's Gate 3, it would have been every bit as popular.
Again, those are just some of the announcements Games Workshop made today, so be sure to check out the Warhammer Community website to get the full story.
This article originally appeared on Engadget at https://www.engadget.com/gaming/boltgun--words-of-vengeance-is-warhammers-grimdark-answer-to-typing-of-the-dead-193515536.html?src=rss
Anthropic kicked off its first-ever Code with Claude conference today with the announcement of a new frontier AI system. The company is calling Claude Opus 4 the best coding model in the world. According to Anthropic, Opus 4 is dramatically better at tasks that require it to complete thousands of separate steps, giving it the ability to work continuously for several hours in one go. Additionally, the new model can use multiple software tools in parallel, and it follows instructions more precisely.
In combination, Anthropic says those capabilities make Opus 4 ideal for powering upcoming AI agents. For the unfamiliar, agentic systems are AIs that are designed to plan and carry out complicated tasks without human supervision. They represent an important step towards the promise of artificial general intelligence (AGI). In customer testing, Anthropic saw Opus 4 work on its own for seven hours, or nearly a full workday. That's an important milestone for the type of agentic systems the company wants to build.
Anthropic
Another reason Anthropic thinks Opus 4 is ready to enable the creation of better AI agents is because the model is 65 percent less likely to use a shortcut or loophole when completing tasks. The company says the system also demonstrates significantly better "memory capabilities," particularly when developers grant Claude local file access. To encourage devs to try Opus 4, Anthropic is making Claude Code, its AI coding agent, widely available. It has also added new integrations with Visual Studio Code and JetBrains.
Even if you're not a coder, Anthropic might have something for you. That's because alongside Opus 4, the company announced a new version of its Sonnet model. Like Claude 3.7 Sonnet before it and Opus 4, the new system is a hybrid reasoning model, meaning it can either execute prompts nearly instantaneously or engage in extended thinking. As a user, this gives you a best-of-both-worlds chatbot that's better equipped to tackle complex problems when needed. It also incorporates many of the same improvements found in Opus 4, including the ability to use tools in parallel and follow instructions more faithfully.
Sonnet 3.7 was so popular among users that Anthropic ended up introducing a Max plan in response, which starts at $100 per month. The good news is you won't need to pay anywhere near that much to use Sonnet 4, as Anthropic is making it available to free users.
Anthropic
For those who want to use Sonnet 4 for a project, API pricing is staying at $3 per one million input tokens and $15 for the same amount of output tokens. Notably, outside of all the usual places you'll find Anthropic's models, including Amazon Bedrock and Google Vertex AI, Microsoft is making Sonnet 4 the default model for the new coding agent it's offering through GitHub Copilot. Both Opus 4 and Sonnet 4 are available to use today.
Today's announcement comes during what's already been a busy week in the AI industry. On Tuesday, Google kicked off its I/O 2025 conference, announcing, among other things, that it was rolling out AI Mode to all Search users in the US. A day later, OpenAI said it was spending $6.5 billion to buy Jony Ive’s hardware startup.
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropics-claude-opus-4-model-can-work-autonomously-for-nearly-a-full-workday-164526696.html?src=rss
At I/O 2025, nothing Google showed off felt new. Instead, we got a retread of the company's familiar obsession with its own AI prowess. Google spent the better part of two hours playing up products like AI Mode, generative AI apps like Jules and Flow, and a bewildering new $250 per month AI Ultra plan.
During Tuesday's keynote, I thought a lot about my first visit to Mountain View in 2018. I/O 2018 was different. Between Digital Wellbeing for Android, an entirely redesigned Maps app and even Duplex, Google felt like a company that had its finger on the pulse of what people wanted from technology. In fact, later that same year, my co-worker Cherlynn Low penned a story titled How Google won software in 2018. "Companies don't often make features that are truly helpful, but in 2018, Google proved its software can change your life," she wrote at the time, referencing the Pixel 3's Call Screening and "magical" Night Sight features.
What announcement from Google I/O 2025 comes even close to Night Sight, Google Photos, or, if you're being more generous to the company, Call Screening or Duplex? The only one that comes to my mind is the fact that Google is bringing live language translation to Google Meet. That's a feature many will find useful, and Google spent all of a minute talking about it.
I'm sure there are people who are excited to use Jules to vibe code or Veo 3 to generate video clips, but are either of those products truly transformational? Some "AI filmmakers" may argue otherwise, but when's the last time you thought your life would be dramatically better if you could only get a computer to make you a silly, 30-second clip?
By contrast, consider the impact Night Sight has had. With one feature, Google revolutionized phones by showing that software, with the help of AI, could overcome the physical limits of minuscule camera hardware. More importantly, Night Sight was a response to a real problem people had in the real world. It spurred companies like Samsung and Apple to catch up, and now any smartphone worth buying has serious low light capabilities. Night Sight changed the industry, for the better.
The fact you have to pay $250 per month to use Veo 3 and Google's other frontier models as much as you want should tell you everything you need to know about who the company thinks these tools are for: they're not for you and me. I/O is primarily an event for developers, but the past several I/O conferences have felt like Google flexing its AI muscles rather than using those muscles to do something useful. In the past, the company had a knack for contextualizing what it was showing off in a way that would resonate with the broader public.
By 2018, machine learning was already at the forefront of nearly everything Google was doing, and, more so than any other big tech company at the time, Google was on the bleeding edge of that revolution. And yet the difference between now and then was that in 2018 it felt like much of Google's AI might was directed in the service of tools and features that would actually be useful to people. Since then, for Google, AI has gone from a means to an end to an end in and of itself, and we're all the worse for it.
Even less dubious features like AI Mode are of questionable usefulness. Google debuted the chatbot earlier this year, and has since been making it available to more and more people. The problem with AI Mode is that it's designed to solve a problem of the company's own making. We all know the quality of Google Search results has declined dramatically over the last few years. Rather than fixing what's broken and making its system harder to game by SEO farms, Google tells us AI Mode represents the future of its search engine.
The thing is, a chatbot is not a replacement for a proper search engine. I frequently use ChatGPT Search to research things I'm interested in. However, as great as it is to get a detailed and articulate response to a question, ChatGPT can and will often get things wrong. We're all familiar with the errors AI Overviews produced when Google first started rolling out the feature. AI Overviews might not be in the news anymore, but they're still prone to producing embarrassing mistakes. Just take a look at the screenshot my co-worker Kris Holt sent to me recently.
Kris Holt for Engadget
I don't think it's an accident I/O 2025 ended with a showcase of Android XR, a platform that sees the company revisiting a failed concept. Let's also not forget that Android, an operating system billions of people interact with every day, was relegated to a pre-taped livestream the week before. Right now, Google feels like a company eager to repeat the mistakes of Google Glass. Rather than meeting people where they are, Google is creating products few are actually asking for. I don't know about you, but that doesn't make me excited for the company's future.
This article originally appeared on Engadget at https://www.engadget.com/ai/googles-most-powerful-ai-tools-arent-for-us-134657007.html?src=rss
Google CEO Sundar Pichai speaks on stage during the annual Google I/O developers conference in Mountain View, California, May 8, 2018. REUTERS/Stephen Lam
OpenAI is buying Jony Ive's startup, io, for $6.5 billion, as first reported by The New York Times. The company confirmed the news in a blog post on its website headlined by the photo you see above, which is apparently real and not AI generated. As part of the deal, Ive and his design studio, LoveFrom, will continue to work independently of OpenAI. However, Scott Cannon, Evans Hankey and Tang Tan, who co-founded io with Ive, will become OpenAI employees, alongside about 50 other engineers, designers and researchers. In collaboration with OpenAI's existing teams, they'll work on hardware that allows people to interact with OpenAI's technologies.
OpenAI has not disclosed whether the deal would be paid for in cash or stock. Per the Wall Street Journal, it's an all-equity deal. OpenAI has yet to turn a profit. Moreover, according to reporting from The Information, OpenAI agreed to share 20 percent of its revenue with Microsoft until 2030 in return for the more than $13 billion the tech giant has invested into it. When asked about how it would finance the acquisition, Altman told The Times the press worries about OpenAI's funding and revenue more than the company itself. "We'll be fine," he said. "Thanks for the concern." The deal is still subject to regulatory approval.
In an interview with The Times, OpenAI CEO Sam Altman and Ive, best known for his design work on the iPhone, said the goal of the partnership is to create "amazing products that elevate humanity." Before today, Altman was an investor in Humane, the startup behind the failed Humane AI Pin. HP bought the company earlier this year for $116 million, far less than the $1 billion Humane had reportedly sought before the sale.
"The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco," OpenAI writes of the acquisition on its website. "As io merges with OpenAI, Jony and LoveFrom will assume deep design and creative responsibilities across OpenAI and io."
According to The Times, OpenAI already had a 23 percent stake in io following an agreement the two companies made at the end of 2024. OpenAI is now paying approximately $5 billion to take full control of the startup. Whether this points towards physical OpenAI devices on the horizon, and if so what form they take, remains unclear. The description for the YouTube video you see above says, "Building a family of AI products for everyone." Whatever comes out of the acquisition could take years to hit the market, and some of what Ive and his team do may never see the light of day.
This article originally appeared on Engadget at https://www.engadget.com/ai/openai-buys-jony-ives-design-startup-for-65-billion-173356962.html?src=rss
In February, Sigma announced the Sigma BF. It's a full-frame, interchangeable lens camera with just a shutter release, a dial and three buttons. That minimalism speaks to me, and I felt the BF was potentially transformative. Photography is one of my favorite hobbies, and I've always felt modern cameras are too complicated. When I received a unit of the Sigma BF to test, I wanted to love it. Unfortunately, it might be too simple.
It all starts with the design. The Sigma BF is one of the industry's few unibody cameras. It's carved from a single slab of aluminum, a process Sigma says takes seven hours to complete. The result is a camera unlike any I've used before, with build quality that surpasses either of my current Fujifilm models, the X-E3 and X-S20. Now, I know what you're thinking: The BF looks like an ergonomic nightmare. Surprisingly, it's not too bad, thanks to the inclusion of two beveled edges where your hands meet the bottom of the camera body.
Igor Bonifacic for Engadget
Still, it's missing a few features that would have made it more comfortable to use, likely due to the limitations of its unibody design. For one, a proper grip would have been nice, especially when you're using a heavy 50mm lens like the one Sigma sent me for testing. The BF is also missing a hot shoe mount, so third-party thumb grips are off the table. Most annoying of all, it only has a single strap eyelet, so if you don't want to use a neck strap, you'll need one that attaches to the camera's tripod mount. I don't own one of those, so I had to carry around the $2,000 BF in my hand the entire time I was using it. You can imagine how that felt.
The BF offers a very different shooting experience from your typical digital camera. As I mentioned, it has only a shutter release, a single dial and three buttons (one for powering the camera on and off, one for reviewing your photos and footage and one for accessing the overflow menu). There's also a touchscreen, but you wouldn't know it at first: beyond selecting a focus point and toggling the occasional option, you won't use it much while shooting.
The BF's one dial is the primary way to interact with the camera. To adjust your exposure, you first press left or right on the dial to cycle to a specific setting, and then spin it to tweak the levels as desired. A second smaller screen above the dial allows you to adjust those parameters without interacting with the main display.
Alternatively, you can press down on the center of the dial to open the BF's "dual layer" menu system. As the name suggests, Sigma has organized most of what you might need across two levels of menus. For example, say you want to switch the camera from matrix to spot metering. That involves pressing down on the dial, scrolling over to one of the exposure settings, tapping the center of the dial again, and then using your thumb to press the touchscreen and enable spot metering. Accessing most of the settings you'll need won't be as tedious, but this worst-case scenario demonstrates where the experience of shooting with the Sigma BF falls short.
The BF isn't great for capturing fleeting moments. In ditching most of the physical controls modern cameras are known for, the Sigma BF makes it difficult to change multiple settings simultaneously. I was most annoyed by the BF whenever I wanted to shoot a fast-moving scene.
On one of my photo walks with the Sigma BF, I saw a father riding a bike with his son in the seat behind him. With my X-E3 or nearly any other camera, capturing that moment would have been simple. I could have changed the drive mode, focus system and shutter speed independently of one another. On the BF, I had to adjust each setting consecutively. By the time I was done, the father and son were long gone.
Some of the BF's shortcomings could be addressed if Sigma at least allowed you to edit the quick settings screen to show fewer options. I don't need easy access to change things like the aspect ratio, for example. In 2025, every new camera ships with an overly bloated menu system, and in that regard the Sigma BF is a breath of fresh air. However, allowing the user to make their own tweaks would have made for a much better experience.
And that’s the thing: With the BF, Sigma breaks camera interface conventions that are conventions for good reason. Let me give you one of the more frustrating examples: The camera inexplicably doesn't offer an easy way to measure the exposure of a scene. There was no meter to indicate whether I was about to under- or overexpose a shot, and I couldn't add one to the main screen.
The only way I could see a histogram, my preferred method for nailing exposure, was to access the second layer of the interface from one of the capture settings. This is an especially confounding decision because you can half press the shutter to make quick exposure compensation adjustments with the control dial, but as soon as you do, the BF jumps out of whatever menu you were looking at. If digging through menus isn't your thing, there are two live view overlays you can enable to see if you've clipped your shadows or highlights. The first is your usual zebra pattern. The second, which Sigma calls False Color, turns most of the screen grayscale and uses warning colors. Neither felt as precise as a proper exposure meter or histogram.
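For readers unfamiliar with these exposure aids, the underlying idea is simple: compute each pixel's brightness, then either bin those values (a histogram) or flag the ones near clipping (a zebra overlay). Here's a rough sketch of both with NumPy; this is my own illustration of the general technique, not Sigma's implementation, and the threshold value is an assumption:

```python
import numpy as np

# Rec. 709 luminance weights for RGB, a common approximation.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def luminance_histogram(image, bins=64):
    """Bin per-pixel luminance. A pile-up at either end of the
    histogram signals crushed shadows or blown highlights."""
    luma = image @ LUMA_WEIGHTS
    counts, edges = np.histogram(luma, bins=bins, range=(0.0, 1.0))
    return counts, edges

def zebra_mask(image, threshold=0.95):
    """Zebra-style overlay: True for pixels whose luminance exceeds
    the clipping threshold (cameras often let you set 95-100%)."""
    luma = image @ LUMA_WEIGHTS
    return luma >= threshold

# A synthetic 2x2 "image" with one nearly blown-out pixel (values 0..1).
img = np.array([[[0.2, 0.2, 0.2], [0.5, 0.5, 0.5]],
                [[0.99, 0.99, 0.99], [0.7, 0.7, 0.7]]])
counts, _ = luminance_histogram(img)
print(zebra_mask(img).sum())  # 1 pixel flagged as clipped
```

Both tools read the same luminance data; the difference is that a histogram shows the full tonal distribution at a glance, while zebras only warn you about the extremes, which is why neither overlay on the BF felt like a full substitute.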
On paper, the BF is a decent camera for video, with support for 6K recording, HEVC encoding and L-Log. Unfortunately, the BF's minimalism is a weakness here too. To start, framing a shot is a challenge since the camera has a fixed screen. Getting usable footage is also tricky. The BF doesn't offer in-body image stabilization, and while there are a few L-mount lenses with built-in stabilization, most wouldn't be practical to use with the BF due to their size and weight.
If you've gotten this far, you're probably wondering if I have something positive to say about the BF. Well, the best thing about the camera is that it takes genuinely great photos, which makes its shortcomings all the more frustrating. The 24-megapixel, backside-illuminated sensor and Sigma's lenses capture and render detail beautifully without being clinical. The BF also has great subject detection autofocus that makes shooting portraits of people and pets easy.
The Sigma BF has some interesting ideas about what a camera can look like in 2025, but those ideas are often marred by poor execution. As a first stab at a minimalist camera, the BF has enough going for it, and with refinement, I could see future versions evolving into something special. For example, I’d love to see Sigma find a way to include a flip-out screen in the BF's unibody frame. Until then, $2,000 is a lot to ask for a camera that could be so much more.
This article originally appeared on Engadget at https://www.engadget.com/cameras/sigma-bf-hands-on-minimal-to-a-fault-144024445.html?src=rss
Today is Global Accessibility Awareness Day (GAAD), and, as in years past, many tech companies are marking the occasion with the announcement of new assistive features for their ecosystems. Apple got things rolling on Tuesday, and now Google is joining in on the parade. To start, the company has made TalkBack, Android's built-in screen reader, more useful. With the help of one of Google's Gemini models, TalkBack can now answer questions about images displayed on your phone, even if they don't have any alt text describing them.
"That means the next time a friend texts you a photo of their new guitar, you can get a description and ask follow-up questions about the make and color, or even what else is in the image," explains Google. The fact that Gemini can see and understand the image is thanks to the multi-modal capabilities Google built into the model. Additionally, the Q&A functionality works across the entire screen. Say you're doing some online shopping, for example: you can first ask your phone to describe the color of the piece of clothing you're interested in, and then ask if it's on sale.
Separately, Google is rolling out a new version of its Expressive Captions. First announced at the end of last year, the feature generates subtitles that attempt to capture the emotion of what’s being said. For instance, if you're video chatting with some friends and one of them groans after you make a lame joke, your phone will not only subtitle what they said but it will also include "[groaning]" in the transcription. With the new version of Expressive Captions, the resulting subtitles will reflect when someone drags out the sound of their words. That means the next time you're watching a live soccer match and the announcer yells "goallllllll," their excitement will be properly transcribed. Plus, there will be more labels now for sounds like when someone is clearing their throat.
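Google hasn't said how Expressive Captions detects elongated speech, but the text side of the idea is easy to illustrate. As a purely hypothetical sketch (not Google's method, which works on audio), a simple regex can flag a drawn-out word like "goallllllll" by looking for a letter repeated three or more times in a row:

```python
import re

# Three-plus repeats of the same letter is a decent heuristic for a
# deliberately stretched word rather than ordinary spelling, where
# English rarely repeats a letter more than twice.
STRETCHED = re.compile(r"([a-z])\1{2,}", re.IGNORECASE)

def is_stretched(word):
    """Return True if the word contains a run of 3+ identical letters."""
    return bool(STRETCHED.search(word))

print(is_stretched("goallllllll"))  # True
print(is_stretched("ball"))         # False ("ll" is only a double)
```

The real feature presumably works from the speech signal itself rather than transcribed text, but the heuristic captures the same intuition: duration carries meaning that plain transcription throws away.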
The new version of Expressive Captions is rolling out to English-speaking users in the US, UK, Canada and Australia running Android 15 and above on their phones.
This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/androids-screen-reader-can-now-answer-questions-about-images-160032185.html?src=rss
Travel is one of the best ways to learn something new about yourself, and as a parent, you probably want to ensure your child has a great time exploring the world. Visiting a new place can be intimidating, but a few gadgets can help make the experience easier and more enjoyable for your new grad. These are some of the best travel accessories that always earn a spot in our bag whenever we head out on an adventure.
This article originally appeared on Engadget at https://www.engadget.com/best-travel-tech-for-graduates-123028465.html?src=rss
Google just announced an upgrade to Chrome’s Enhanced Protection feature. On desktop, the browser now uses Gemini Nano to protect users against remote tech support scams. According to Google, the on-device large language model allows Chrome to protect people against scams the company hasn’t seen before.
“[Gemini Nano] is perfect for this use because of its ability to distill the varied, complex nature of websites, helping us adapt to new scam tactics more quickly,” Google says, adding that it hopes to bring the feature to Android devices soon. The company plans to use the same AI approach against a greater variety of scams in the future as well.
In the meantime, Android users can look forward to stronger protection against scams that use Chrome notifications as an attack vector. Google is once again turning to machine learning to offer this feature. “When Chrome’s on-device machine learning model flags a notification, you’ll receive a warning with the option to either unsubscribe or view the content that was blocked,” Google explains. “And if you decide the warning was shown incorrectly, you can choose to allow future notifications from that website.”
Fighting scams was a major focus for Google last year. In May, for instance, the company previewed a system for delivering real-time scam alerts during phone calls. More recently, the company introduced a suite of safety features for Messages. As a result of its efforts, Google says it’s preventing hundreds of millions of scam-related results from reaching its users.
This article originally appeared on Engadget at https://www.engadget.com/ai/chrome-will-now-use-gemini-nano-to-catch-scams-170057893.html?src=rss
In just under a week, Google's annual developer conference will kick off on May 20. The event is probably the most important on the company's calendar, offering a glimpse at everything it has been working on over the past year.
Judging from the rumors and the information Google itself has trickled out, I/O 2025 should be one of the more exciting tech keynotes this year. Plus, for the first time, Google spun out a dedicated Android showcase a whole week earlier. It took place yesterday (May 13), and you can check out everything that was announced at the Android Show or read our liveblog to get a feel for how things played out.
Now that the Android Show is over, it's time to look ahead to I/O, where the focus will almost certainly be on AI. We've gathered the most credible reports and leaks to put together this roundup of what to expect, and though most of the Android-related announcements have been made, it's still possible that Google shares more details about its mobile platform next week.
If you'd like to tune in from home and follow along as Google makes its announcements, check out our article on how to watch the Google I/O 2025 keynote. We'll also be liveblogging the event, so you can just come to Engadget for the breaking news.
Android 16
Some of my favorite I/O moments involved watching Dave Burke take to the Shoreline stage to talk about the latest updates to Android. But for the past couple of years, Android hasn't had much of a spotlight at Google's annual developer conference. That changed with this week's dedicated Android Show: I/O Edition.
On the subject of timing, Google has already confirmed the new operating system will arrive sometime before the second half of the year. Though the company did not release a stable build of Android 16 at the show, Android ecosystem president Sameer Samat shared that Android 16 (or at least part of it) is coming to Pixel devices next month. And though the company did cover some new features coming to Android XR, senior director for Android Product and UX Guemmy Kim said during the presentation that "we'll share more on Android XR at I/O next week."
It clearly seems like more is still to come, and not just for Android XR. We didn't get confirmation on the Android Authority report that Google could add a more robust photo picker, with support for cloud storage solutions. That doesn't mean it won't be in Android 16; it might just be something the company didn't get to in its 30-minute showcase. Plus, Google has lately been releasing new Android features on a quarterly cadence rather than waiting for an annual update window, so it's possible we'll see more added to Android 16 as the year progresses.
One of the best places to get an idea of what's to come in Android 16 is its beta version, which has already been available to developers and is currently in its fourth iteration. For example, we learned in March that Android 16 will bring Auracast support, which could make it easier to listen to and switch between multiple Bluetooth devices. This could also enable people to receive Bluetooth audio on hearing aids they have paired with their phones or tablets.
Android XR
Remember Google Glass? No? How about Daydream? Maybe Cardboard? After sending (at least) three XR projects to the graveyard, you would think even Google would say enough is enough. Instead, the company is preparing to release Android XR after previewing the platform at the end of last year. This time around, the company says the power of its Gemini AI models will make things different. We know Google is working with Samsung on a headset codenamed Project Moohan. Last fall, Samsung hinted that the device could arrive sometime this year.
Whether or not Google and Samsung demo Project Moohan at I/O, I imagine the search giant will have more to say about Android XR and the ecosystem partners it has recruited for the initiative. That would fall in line with what Kim said about more Android XR news coming at I/O.
AI, AI and more AI
Given that Google felt the need to split Android off into its own showcase, we're likely to get more AI-related announcements at I/O than ever before. The company hasn't provided many hints about what we can expect on that front, but if I had to guess, features like AI Overviews and AI Mode are likely to get substantive updates. I suspect Google will also have something to say about Project Mariner, the web-surfing agent it demoed at I/O 2024. Either way, Google is an AI company now, and every I/O moving forward will reflect that.
Project Astra
Speaking of AI, Project Astra was one of the more impressive demos Google showed off at I/O 2024. The technology made the most of the latest multi-modal capabilities of Google's Gemini models to offer something we hadn't seen before from the company: a voice assistant with advanced image recognition that can converse about the things it sees. Google envisions Project Astra one day serving as a truly useful AI assistant.
However, after seeing an in-person demo of Astra, the Engadget crew felt the tech needed a lot more work. Given the splash Project Astra made last year, there's a good chance we could get an update on it at I/O 2025.
A Pinterest competitor
According to a report from The Information, Google might be planning to unveil its own take on Pinterest next week. That characterization is the publication's; based on the features described in the article, Engadget team members found the product more reminiscent of Cosmos. Cosmos is a pared-down version of Pinterest that lets people save and curate anything they see on the internet, and share their saved pages with others.
Google's version, meanwhile, will reportedly show image results based on your queries, and you can save the pictures in different folders based on your own preferences. So say you're putting together a lookbook based on Jennie from Blackpink. You can search for her outfits and save your favorites in a folder you can title "Lewks," perhaps.
Whether this is simply built into Search or exists as a standalone product is unclear; we'll have to wait until I/O to see whether the report is accurate and what the feature is actually like.
Wear OS
Last year, Wear OS didn't get a mention during the company's main keynote, but Google did preview Wear OS 5 during the developer sessions that followed. The company only began rolling out Wear OS 5.1 to Pixel devices in March. This year, we've already learned at the Android Show that Wear OS 6 is coming, with Material 3 Expressive gracing its interface. Will we learn more at I/O next week? It's unclear, but it wouldn't be a shock if that's all the airtime Wear OS gets this year.
NotebookLM
Since 2023, Google has offered NotebookLM on desktop. The note-taking app uses machine learning for features like automated summaries. Based on App Store and Google Play listings, the company is getting ready to release a mobile version of the service on the first day of I/O 2025.
Everything else
Google has a terrible track record when it comes to preventing leaks within its internal ranks, so the likelihood the company could surprise us is low. Still, Google could announce something we don't expect. As always, your best bet is to visit Engadget on May 20 and 21. We'll have all the latest from Google then along with our liveblog and analysis.
Update, May 5 2025, 7:08PM ET: This story has been updated to include details on a leaked blog post discussing "Material 3 Expressive."
Update, May 6 2025, 5:29PM ET: This story has been updated to include details on the Android 16 beta, as well as Auracast support.
Update, May 8 2025, 3:20PM ET: This story has been updated to include details on how to watch the Android Show and the Google I/O keynote, as well as tweak the intro for freshness.
Update, May 13 2025, 3:22PM ET: This story has been updated to include all the announcements from the Android Show and a new report from The Information about a possible image search feature debuting at I/O. The intro was also edited to accurately reflect what has happened since the last time this article was updated.
Update, May 14 2025, 4:32PM ET: This story has been updated to include details about other events happening at the same time as Google I/O, including Microsoft Build 2025 and Computex 2025.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-io-2025-what-to-expect-including-gemini-ai-android-16-updates-android-xr-and-more-203044563.html?src=rss
OpenAI has abandoned its controversial restructuring plan. In a dramatic reversal, the company said Monday it would no longer try to separate control of its for-profit arm from the non-profit board that currently oversees operations. "We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California," said Bret Taylor, the chairman of OpenAI.
OpenAI had originally argued its existing structure would not allow its nonprofit to "easily do more than control the for-profit." It also said it needed more money, a mere two months after securing $6.6 billion in new investment. "We once again need to raise more capital than we'd imagined," the company wrote in December. "Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness."
OpenAI's previous plan called for the nonprofit to cede absolute control of the for-profit, in return for whatever degree of control came with the amount of stock it was granted through the reorganization process.
This was the controversial part of OpenAI's plan, with many, including former employees, labor and nonprofit groups and even Elon Musk, voicing opposition to the proposal. Now, the company says its nonprofit will retain control and become a "big shareholder in the PBC."
"How is the nonprofit going to maintain control? How will that purpose be advanced?" asks Jill Horwitz, a visiting professor of law at Northwestern University. "We know from the press that OpenAI plans to appoint all the board members of the operating entity. Will that happen forever? Who will they be? Will it be self-perpetuating? Will the for-profit investors have a say in who those board members are?"
Put another way, OpenAI hasn't specified the exact structure it intends to implement. According to Professor Michael Dorff, executive director of the Lowell Milken Institute for Business Law and Policy at UCLA, the company could adopt one of a few different options.
"If you had one class of stock, one vote per share, they would elect a board. You could just give the nonprofit the majority of the shares, and then they would then elect a majority of the board. They would therefore be in charge, at least for a while," he says.
"More stable governance arrangements could be done by having dual class shares, where the nonprofit would have a class of stock and they would be the only owners of that class of the stock that is either super voting shares, again, giving it a majority, or even better, you can define a class of stock and say it has the right to elect a majority of the board."
In short, the company hasn't said how it plans to ensure its nonprofit maintains control. The nonprofit may have a "big" stake to start, but there are a few different ways that stake could be diluted. Even if you set aside the idea of an IPO for now, the company could still issue new shares or carry out a stock split. In those scenarios, if OpenAI's non-profit doesn't own special shares, its control of the company would be weakened.
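The dilution risk is easy to see with a little arithmetic. The figures below are hypothetical, not OpenAI's actual cap table; they simply illustrate why a majority of ordinary shares can be diluted below control by a new issuance, while the super-voting class Dorff describes is not:

```python
def voting_share(shares_held, total_shares, votes_per_share=1):
    """Fraction of total votes one holder controls, assuming every
    other share carries one vote. Hypothetical figures throughout."""
    held_votes = shares_held * votes_per_share
    other_votes = total_shares - shares_held  # one vote each
    return held_votes / (held_votes + other_votes)

# Hypothetical: nonprofit starts with 51 of 100 ordinary shares.
print(voting_share(51, 100))  # 0.51 -> control

# Company issues 30 new ordinary shares to investors; stake is diluted.
print(voting_share(51, 130))  # ~0.39 -> control lost

# Same dilution, but the nonprofit's class carries 10 votes per share.
print(round(voting_share(51, 130, votes_per_share=10), 3))  # ~0.866 -> control kept
```

This is why the distinction between a "big" stake and a protected class of stock matters: only the latter survives future fundraising rounds by construction.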
According to Bloomberg, Microsoft has yet to sign off on OpenAI's proposal. The company has invested nearly $14 billion in OpenAI. Under the terms of its October funding round, OpenAI had two years to transform itself into a for-profit business; if it failed to do so, the $6.6 billion it secured would turn into debt. We don't know for sure, but the question of control is likely front and center in the negotiations between Microsoft and OpenAI, with the company's financial future at stake. Complicating matters, whatever arrangement the two come to needs to be rubber-stamped by the state attorneys general of California and Delaware.
"We look forward to advancing the details of this plan in continued conversation with [the state AGs], Microsoft, and our newly appointed nonprofit commissioners," Altman wrote in his letter.
Parts of OpenAI's previous plan remain unchanged. As before, the company will reorganize its for-profit subsidiary into a public benefit corporation. In doing so, OpenAI still plans to eliminate the current capped-profit structure that limits investor returns to 100x, with excess profits reserved for the nonprofit. OpenAI has yet to turn a profit; as of last year, it had racked up around $5 billion in losses.
"This is not a sale, but a change of structure to something simpler," wrote OpenAI CEO Sam Altman in a letter to employees shared by the company. "Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn't in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock."
This article originally appeared on Engadget at https://www.engadget.com/ai/openais-new-for-profit-plan-leaves-many-unanswered-questions-193942365.html?src=rss
Every video game is a miracle. Long hours, extraordinary technical and artistic requirements and cross-disciplinary collaboration: the very act of making games is difficult, and leaves room for catastrophic errors. It's a wonder any of them make it to release at all.
Fulcrum Defender, the new Playdate exclusive from Jay Ma, the co-founder of indie darling Subset Games, is one such miraculous game. It's the first new release from the studio since 2018's critically acclaimed Into the Breach. Ma began work on Fulcrum Defender following a life-changing Covid infection that has greatly diminished her quality of life and ability to do the thing she loves.
The story of Fulcrum Defender begins following a trip Ma made to Vancouver, Canada in August 2023 to see Subset co-founder Matt Davis and a few other members of the studio in-person. At the time, the team was working on more than one game. According to Ma, one of the larger, more promising projects was "struggling," but the trip led to a breakthrough. Then, she caught Covid-19. "It was pretty unfortunate timing," she said. "For the first time in a while, I was gung-ho about being able to figure this game out."
At first, Ma's latest bout with the coronavirus didn't seem all that different from her previous experiences. She returned to her home in Kyoto, Japan, quarantined and eventually recovered from the acute symptoms, but never bounced back completely. "I think it was the first day that I went out to be outside, bike, do normal things, and I just completely shut down," she said. "I couldn't get out of bed for like four days." She realized she was experiencing long Covid.
As we chatted over Google Meet, Ma frequently took long pauses to piece together her memories and find the right words to express her loss. "I'm a different person," she told me after one such break. "I walk around with a cane. I need to structure exactly how I do something outside. I need to know where all the chairs are. I walk at a grandma's pace, and I'm constantly forced to maintain awareness of my physical state, because if I do too much, it's already too late. It makes everything feel dangerous."
For the first four months of her illness, Ma couldn't work at all. "Even when I was more used to needing to pace myself, not only was it harder to do things that used to come naturally to me, but I would also get lost in my own head," she said. She worried she might never make games again.
Subset Games
Fulcrum Defender was a chance to prove to herself she could still do the thing she loved. Subset’s Mauro López provided additional programming, and composer Aaron Cherof, best known for his work on Minecraft’s Trails & Tales update, made the music for the game. Panic contacted Ma about the project after she showed the game to a few friends around the time of Kyoto's annual BitSummit indie game festival last summer.
"I would wake up in the morning and think about the game and make progress every day – even if it was only a couple of hours – that did something really important for my psychological state," she said.
In a preview released by Panic, Ma describes Fulcrum Defender as a game "that starts out slow and relaxing but gradually ramps up until it becomes frantic chaos." You can see the connective tissue between it and Ma's previous work. Players can earn upgrades to make their run easier. Success then depends on a combination of good aim (using the Playdate's signature crank), smart decision-making and a well-thought-out build. I expect it will have the same addictive "one more run" quality that Subset’s other games are known for.
"With Into the Breach, if I wanted to add one enemy, that one enemy would change how maps are designed, how character weapons are designed, and how scaling works," she said. "So a single new idea requires you to kind of keep everything in your head at once, and that specifically is just something that I struggle to do now." Fulcrum Defender taught Ma how much she had taken for granted the ease with which she once juggled those various dependencies in her mind.
Ma hasn't found a doctor in Japan who knows enough about the illness to offer her a conclusive diagnosis, and the state of research on long Covid in general is nascent. "They hate to make uncertain calls," she explained. The one thing she's found she can do is take frequent dementia tests to track the condition of her mind.
"I feel like I need to live with the possibility that it won't go away, so I just sort of operate with that mindset," she told me. "This illness has shrunk my world and perception of time considerably. My memory is way worse. I'll forget what happened like a week ago, and I don't really think about the future at all. And so I'm just in a constant present. It feels like I'm being forced to train to be a monk."
When I asked what her illness might mean for the future of Subset, Ma took a long time to consider her answer. "We set a rule that we will not announce anything unless we're absolutely certain it's coming out. We want to live with the freedom of being able to cancel rather than feeling we're stuck in having to release something we don't like," she said. "So Subset is doing fine, but my output has dropped to like 20 percent of what it used to be."
Davis, she adds, has been productive, but he too has had to adapt his work schedule, in his case due to two young kids. "If we want to make another Into the Breach-scale game, it feels like we might need more help in the long run. I need to come to terms with the fact I can't do all the art the way I used to."
Ma has been through so much, and yet Fulcrum Defender isn't a game about chronic health concerns, disability or memory loss. It seems to studiously avoid borrowing any biographical detail from Ma's life whatsoever. People will play and enjoy it knowing nothing of the challenging circumstances in which it was made. It turns out, that's the only way Ma would have it.
"I know of a lot of developers who put themselves into their game. You can see the author's intent, emotional state and the things they were processing in it. I've wondered what it would be like to make a game like that, but I have no idea how. Basically, the only thing that drives me is mechanics," she told me. "So I have no expectation, or really desire for people to see the author in the little arcade games I make. I would be very happy to never be perceived."
Fulcrum Defender — along with 11 other games — arrives as part of Playdate's second season of weekly games beginning on May 29. You can pre-order Season Two for $39 through the Catalog store.
This article originally appeared on Engadget at https://www.engadget.com/gaming/subset-games-co-founder-jay-ma-went-through-hell-to-make-fulcrum-defender-153028909.html?src=rss
You can sink a lot of money into your kitchen without even realizing it. There’s no doubt that some of the best kitchen gadgets are on the pricey side, but there are also plenty of budget-friendly tools that can make your time meal prepping, cooking for a party and reheating leftovers much easier. All the recommendations on this list are either products I use currently, or more affordable versions of something I decided to splurge on after years of food prep. You may not consider every single item an essential for your kitchen, but all of them can save you time when you need to get dinner on the table quickly.
Best cheap kitchen gadgets for 2025
This article originally appeared on Engadget at https://www.engadget.com/home/kitchen-tech/best-cheap-kitchen-gadgets-130049897.html?src=rss
A few years ago, it may have been fashionable to spend $1,000 on the latest flagship smartphone, but for most people, that’s neither practical nor necessary. You don't even have to spend $500 today to get a decent handset, whether it’s a refurbished iPhone or an affordable Android phone, as there are plenty of decent options as low as $160.
However, navigating the budget phone market can be tricky; options that look good on paper may not hold up in practice, and some devices will end up costing you more over time given how many come with restrictive storage. While we spend most of our time reviewing mid- to high-end handsets at Engadget, we've tested a number of the latest budget-friendly phones on the market to see which ones cut it as the best cheap phones you can get right now.
Best cheap phones
What to look for in a cheap phone
For this guide, our top picks cost between $100 and $300. Anything less and you might as well go buy a dumb phone instead. Since they’re meant to be more affordable than flagship phones and even midrange handsets, budget smartphones involve compromises; the cheaper a device, the lower your expectations around specs, performance and experience should be. For that reason, the best advice I can give is to spend as much as you can afford. In this price range, even $50 or $100 more can get you a dramatically better product.
Second, you should know what you want most from a phone. When buying a budget smartphone, you may need to sacrifice a decent main camera for long battery life, or trade a high-resolution display for a faster CPU. That’s just what comes with the territory, but knowing your priorities will make it easier to find the right phone.
It’s also worth noting some features can be hard to find on cheaper handsets. For instance, you won’t need to search far for a device with all-day battery life — but if you want a phone with excellent camera quality, you’re better off shelling out for one of the recommendations in our midrange smartphone guide, which all come in at $600 or less.
Wireless charging and waterproofing also aren't easy to find in this price range, and forget about the fastest chipsets. On the bright side, most of our recommendations come with headphone jacks, so you won't need to buy wireless headphones.
iOS is also off the table, since, following the discontinuation of the iPhone SE, the $599 iPhone 16e is now the most affordable offering from Apple. That leaves Android as the only option in the under-$300 price range. Thankfully, there's little to complain about with Google's operating system these days, and you may even prefer it to iOS.
Lastly, keep in mind most Android manufacturers typically offer far less robust software features and support for their budget devices. In some cases, your new phone may only receive one major software update and a year or two of security patches beyond that. That applies to the OnePlus and Motorola recommendations on our list.
If you’d like to keep your phone for as long as possible, Samsung has the best software policy of any Android manufacturer in the budget space, offering at least four years of security updates on all of its devices. Recently, it even began offering six years of support on the $200 A16 5G, which we recommend below. That said, if software support (or device longevity overall) is your main focus, consider spending a bit more on the $500 Google Pixel 9a, or even the previous-gen Pixel 8a, which has planned software updates through mid-2031.
This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/best-cheap-phones-130017793.html?src=rss
The Washington Post is partnering with OpenAI to bring its reporting to ChatGPT. The two organizations did not disclose the financial terms of the agreement, but the deal will see ChatGPT display summaries, quotes and links to articles from The Post when users prompt the chatbot to search the web.
"We're all in on meeting our audiences where they are," said Peter Elkins-Williams, head of global partnerships at The Post. "Ensuring ChatGPT users have our impactful reporting at their fingertips builds on our commitment to provide access where, how and when our audiences want it."
The Post is no stranger to generative AI. In November, the publisher began using the technology to offer article summaries. Since the start of February, ChatGPT Search has been available to everyone, with no account or sign-in necessary.
Later that same month, Jeff Bezos, the owner of The Washington Post, announced a "significant shift" in the publisher's editorial strategy. As part of the overhaul, the paper has been publishing daily opinion stories "in defense of two pillars," personal liberties and free markets. Given that focus and Amazon's own investments in artificial intelligence, it's not surprising to see The Washington Post and OpenAI sign a strategic partnership.
More broadly, today's announcement sees yet another publisher partnering with OpenAI, following an early but brief period of resistance from some players in the news media industry — most notably The New York Times. According to OpenAI, it has signed similar agreements with more than 20 news publishers globally.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-washington-post-partners-with-openai-to-bring-its-content-to-chatgpt-141215314.html?src=rss
A mere two days after announcing GPT-4.1, OpenAI is releasing not one but two new models. The company today announced the public availability of o3 and o4-mini. OpenAI says o3 is its most advanced reasoning model yet, showing "strong performance" in coding, math and science tasks. As for o4-mini, OpenAI is billing it as a lower-cost alternative that still delivers "impressive results" across those same fields.
More notably, both models offer novel capabilities not found in OpenAI's past systems. For the first time, the company's reasoning models can use and combine all of the tools available in ChatGPT, including web browsing and image generation. The company says this capability allows o3 and o4-mini to solve challenging, multi-step problems more effectively, and "take real steps toward acting independently."
At the same time, o3 and o4-mini can not only see images, but also interpret and "think" about them in a way that significantly extends their visual processing capabilities. For instance, you can upload images of whiteboards, diagrams or sketches — even poor quality ones — and the new models will understand them. They can also adjust the images as part of how they reason.
"The combined power of state-of-the-art reasoning with full tool access translates into significantly stronger performance across academic benchmarks and real-world tasks, setting a new standard in both intelligence and usefulness," says OpenAI.
Separately, OpenAI is releasing a new coding agent (à la Claude Code) named Codex CLI. It's designed to give developers a minimal interface they can use to link OpenAI's models with their local code. Out of the box, it works with o3 and o4-mini, with support for GPT-4.1 on the way.
Today's announcement comes after OpenAI CEO Sam Altman said the company was changing course on the roadmap he detailed in February. At the time, Altman indicated OpenAI would not release o3, which the company first previewed late last year, as a standalone product. However, at the start of April, he announced a "change of plans," noting OpenAI was moving forward with the release of o3 and o4-mini.
"There are a bunch of reasons for this, but the most exciting one is that we are going to be able to make GPT-5 much better than we originally thought," he wrote on X. "We also found it harder than we thought it was going to be to smoothly integrate everything. And we want to make sure we have enough capacity to support what we expect to be unprecedented demand."
That means the streamlining Altman promised in February will likely need to wait until at least the release of GPT-5, which he said would arrive sometime in the next "few months."
In the meantime, ChatGPT Plus, Pro and Team users can begin using o3 and o4-mini starting today. Sometime in the next few weeks, OpenAI will bring o3-pro online — an even more powerful version of its flagship reasoning model — and make it available to Pro subscribers. For the time being, those users can continue to use o1-pro.
This article originally appeared on Engadget at https://www.engadget.com/ai/openais-new-o3-and-o4-mini-models-are-all-about-thinking-with-images-170043465.html?src=rss
Ninja Gaiden: Ragebound is a labor of love. You can see it in every pixel, animation and cutscene of the new 2D action game. It might be a surprise then that it's not the work of Team Ninja, the studio most closely associated with the series, but rather franchise newcomer The Game Kitchen. The Spanish studio is best known for its work on Blasphemous, a series of Souls-like Metroidvanias influenced by Spain’s own Andalusian culture and history with Roman Catholicism.
"I'm an '80s kid," says David Jaumandreu, game director and producer on Ninja Gaiden: Ragebound. "I still have my copy of the first Ninja Gaiden in the basement." The way Jaumandreu tells it, Ragebound is a dream project for him and his coworkers. The Game Kitchen began working on the game about halfway through the production of Blasphemous 2. French publisher Dotemu, best known for releasing Teenage Mutant Ninja Turtles: Shredder's Revenge, approached the studio after seeing its work on Blasphemous.
There's some superficial visual overlap between Ragebound and Blasphemous 2, but when it comes to tone and gameplay, they could not be more dissimilar. Where Blasphemous 2 is dark and solemn, Ragebound leans into the franchise's origin as a product of the '80s. It's loud and frequently cheesy, but in an endearing way. It's also a lot faster paced, with levels that grade on how quickly you can complete them, often while taking as little damage as possible.
Early in the project, the team knew it wanted two protagonists, with one of them hailing from the Black Spider Clan. For the uninitiated, the Black Spider Clan has usually served as the antagonists of the Ninja Gaiden series. Ragebound is set during the events of the 1988 NES game. After series protagonist Ryu Hayabusa leaves for the US to avenge his father's death, demons descend on peaceful Hayabusa Village and it's up to newcomer Kenji Mozu to save his clan.
"We thought if we're taking the series back to its roots, wouldn't it be cool to control one of the Black Spider Clan?" says Jaumandreu. "It's like when you get to make a Star Wars game, and you fantasize about controlling an Empire character."
To the surprise of everyone at The Game Kitchen, both Dotemu and Koei Tecmo — Ninja Gaiden's original license holder — liked the idea. In the demo I played, I didn't see the exact circumstances of how Kenji and the Black Spider Clan's Kumori end up working together, but the gist of it is that they're forced to merge souls to survive a deadly encounter.
From what I can tell, outside of one mission that serves as an introduction to Kumori's skillset, you'll spend most of your time playing as Kenji in Ragebound. However, once the two of them join forces, Kenji's ability to engage enemies at range is greatly increased since he has access to Kumori's kunai.
Moreover, some of the platforming sections I ran into during the demo required that I play as Kumori to progress through the level. The tricky thing about these segments is that Kumori can only manifest for a short time, a gauge above her head indicating how much time I had left with her before I was back to Kenji and had to try the section again. It's possible to extend her gauge by taking out enemies along the way. At least in the demo, Kumori's segments weren't too difficult, but I could also see how the structure could really test players — maybe not to the level of Hollow Knight's Path of Pain, say, but something close.
One of the things that stood out about both characters was how nimble they felt. Kenji can pogo off enemies and projectiles to gain additional height over his foes. During her platforming segments, Kumori can use her kunai to teleport across gaps and complete jumps Kenji can't. Most levels also include ceilings the two can climb along, and I frequently had to fight my way through multiple enemies to get to the other end. Speaking of combat, it's frenetic in a way that has mostly gone out of style in modern gaming. Outside of bosses, most enemies will fall after one or two slashes from Kenji's katana.
Even in early combat scenarios, I often had to fight two or three enemies simultaneously, while dodging and deflecting ranged attacks along the way to my next target. Once the combat system started to click for me, it felt incredibly satisfying to bounce between enemies and use Kenji and Kumori's abilities in unison.
I left my hands-on with Ninja Gaiden: Ragebound excited to play the final product. Of course, the ultimate test of the game will be how fans receive it. "We really put a lot of effort into creating a Ninja Gaiden game," says Jaumandreu. "We didn't want it to be a Blasphemous game with ninjas. We really hope when players get the controller, they feel at home with the series."
Ninja Gaiden: Ragebound arrives this summer on PC, Nintendo Switch, PlayStation and Xbox.
This article originally appeared on Engadget at https://www.engadget.com/gaming/ninja-gaiden-ragebound-is-a-love-letter-to-the-series-nes-roots-150050773.html?src=rss