Breaking Bad creator's new show streams on Apple TV+ November 7

25 July 2025 at 20:41

Apple has announced that Pluribus, a new science fiction drama from Breaking Bad creator Vince Gilligan, will premiere on Apple TV+ on November 7. Gilligan was confirmed to be working on the project back in 2022, when Better Call Saul's Rhea Seehorn was also announced as its primary star.

Alongside the premiere date, Apple also released a short (somewhat ominous) teaser for the series that shows a hospital employee mindlessly licking donuts. Pluribus is supposed to follow "the most miserable person on Earth" (presumably Seehorn) as they "save the world from happiness," but your guess is as good as mine as to how the two tie together.

Apple's penchant for backing science fiction shows has been well-documented at this point. The company is currently producing a television adaptation of William Gibson's Neuromancer, and has made three seasons and counting of Foundation, based on the novel series by Isaac Asimov. Toss in things like Severance, Murderbot, Silo and For All Mankind and you've got a pretty varied catalog of sci-fi media to choose from.

Just how "science fiction" Pluribus will actually be remains up in the air. When reports went out in 2022 that Apple was ordering two seasons of the show, it was described as "a blended, grounded genre drama." Apple's premiere date announcement pitches the show as "a genre-bending original."

Pluribus' nine-episode first season will premiere on November 7 with two episodes. New episodes will stream weekly after that, all the way through December 26.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/breaking-bad-creators-new-show-streams-on-apple-tv-november-7-204150489.html?src=rss

© Apple

A still of Rhea Seehorn's character from Pluribus receiving what seems like a surprising phone call.

Amazon is developing a Wolfenstein TV show

25 July 2025 at 19:00

Following the success of Fallout, Amazon is turning its attention to another video game adaptation. The company is reportedly developing a series based on the Wolfenstein franchise with the help of some of the creative team behind Fallout, according to Variety.

The details of how the Wolfenstein show will adapt the alternate history action of the games are still under wraps, but the subject matter remains unfortunately timely. The show's logline is "The story of killing Nazis is evergreen," according to Variety, which seems like a direct acknowledgement of that fact.

Patrick Somerville, the writer and showrunner of Station Eleven, will reportedly serve as the writer of the Wolfenstein adaptation, with Lisa Joy, Athena Wickham and Jonathan Nolan producing the show through their company Kilter Films. The production company also made the adaptation of William Gibson's The Peripheral for Amazon and currently produces Fallout.

While it might make sense to pair a new show with a brand new Wolfenstein game, the series has effectively been on hold since 2019, when MachineGames, the Microsoft-owned developer of the last five Wolfenstein entries, released Wolfenstein: Youngblood and Wolfenstein: Cyberpilot. The studio has since moved on to other projects, releasing Indiana Jones and the Great Circle in December 2024.

Given the tumultuous state of Microsoft's Xbox division, the show could be your best bet for getting more Wolfenstein, assuming Amazon decides to move forward.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/amazon-is-developing-a-wolfenstein-tv-show-190028817.html?src=rss

© Bethesda

A screenshot of someone dual-wielding machine guns from Wolfenstein: Youngblood.

Brave and AdGuard now block Microsoft Recall by default

25 July 2025 at 15:26

The Brave web browser and the ad-blocker AdGuard have both announced that they are blocking Microsoft Recall by default. For the uninitiated, Recall is an AI-powered tool built into Windows 11 that records everything on a PC's screen. It's pretty obvious why a privacy-minded web browser like Brave and an ad-blocker would make this move.

AdGuard said the decision was made due to a "privacy concern," going on to say that "the very idea of background screen captures is unsettling." A blog post on the matter suggested that the tool could "snap a screenshot of a private chat window, an online form where you’re entering your credit card or simply something personal you didn’t want saved."

🚨 Microsoft Recall — new threat or improvement?

In May 2024, Microsoft introduced a new feature in Windows 11 that was actively taking screenshots of everything happening on the screen, including messages in WhatsApp and Signal. These screenshots were stored on the device… pic.twitter.com/C97KDylXPj

— AdGuard (@AdGuard) July 15, 2025

Brave also cited privacy concerns, suggesting that a user's "entire browsing history" could be captured by the tool. "We think it’s vital that your browsing activity on Brave does not accidentally end up in a persistent database, which is especially ripe for abuse in highly-privacy-sensitive cases," the company wrote in a blog post.

The chat app Signal made a similar move back in May, urging "AI teams building systems like Recall" to think "through these implications more carefully in the future." Brave says it was "partly inspired" by Signal.

AdGuard and Brave both offer toggles to bring Recall back into the mix. Microsoft's controversial tool lets people jump to whatever was previously on a screen. This includes web pages, images, documents, emails, chat threads or whatever else. It actually sounds like a pretty nifty productivity tool, despite the privacy concerns. It's available with some Copilot+ PCs.
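Neither company's announcement, at least as quoted here, spells out exactly how the blocking works under the hood. Windows does, however, expose a documented way for an application to exclude its own windows from screenshots and screen recordings, which is the sort of hook a browser or blocker could lean on. Here's a minimal Python sketch of that Win32 call; it illustrates the mechanism only and is not a claim about Brave's or AdGuard's actual implementation, and the console-window example at the end is purely hypothetical.

```python
# Hypothetical illustration: one documented Win32 mechanism an app can use to
# keep a window out of screen-capture tools such as Recall. This is not
# necessarily how Brave or AdGuard implement their blocking.
# Windows-only: ctypes.windll is unavailable on other platforms.
import ctypes

user32 = ctypes.windll.user32
kernel32 = ctypes.windll.kernel32

WDA_EXCLUDEFROMCAPTURE = 0x00000011  # supported on Windows 10 2004+ and Windows 11


def exclude_window_from_capture(hwnd: int) -> bool:
    """Ask Windows not to include this window in screenshots or recordings."""
    return bool(user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE))


# Example: exclude this script's own console window (handle lookup is illustrative).
hwnd = kernel32.GetConsoleWindow()
if hwnd:
    exclude_window_from_capture(hwnd)
```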

This article originally appeared on Engadget at https://www.engadget.com/ai/brave-and-adguard-now-block-microsoft-recall-by-default-152601475.html?src=rss

© Microsoft

The tool in action.

Alexa+ preview: An almost philosophical exercise

25 July 2025 at 13:00

“Alexa, I need to write a review about you.”

“Oh, a review? I’m intrigued! What’s the occasion? I’d love to hear what you think.”

“There’s no occasion, this is just my work.”

“Ah, I see. Writing a professional review, got it. I’m here to help if you need any specific information.”

“Yeah, what should I consider when reviewing you?”

“Consider my natural voice, emotional intelligence and my ability to handle complex tasks. How about my personality and humor? Those are key parts of the new me.”


The above is a recent exchange I had with Alexa+. Access to the next-generation assistant has been slowly trickling out to users since Amazon first announced availability at an event in February. But work on the smarter, more natural-sounding Alexa has been going on for years, and it still isn’t ready for a widespread release. The version currently rolling out is more like an intriguing preview of what a truly intelligent Alexa could be.

I’m having a hard time figuring out how I feel about the upgraded Alexa. On the one hand, I cannot deny it is now capable of tackling some fairly complex requests it couldn’t before. On the other, it doesn’t have good memory, can’t really see all tasks to completion and is just incompetent enough to be annoying.

I want to first caveat all of this by pointing out that when it does become broadly available, Alexa+ will be a free upgrade for Amazon Prime subscribers. Non-Prime members will have to pay $20 a month for the privilege, but they’ll still get the existing Alexa experience on their compatible devices should they choose not to upgrade. That means there will likely be no extra cost to most, and you probably won’t need to spend too much time thinking about whether you want to use it. As long as it’s not a worse experience, it is very easy to say yes to.

I’ve been tempted to get philosophical when thinking about a digital assistant that’s more fluent in human. Do I really want to think of the new Alexa as basically a flesh-and-blood assistant with emotional intelligence, personality and humor? No. If it performs poorly, I'll be let down; if it performs well enough to essentially pass a mini Turing test, I'll feel that much ickier about keeping something like an indentured servant in my kitchen.

I set aside my existential questions and tried to focus on the practical experience of getting help from Amazon’s upgraded assistant. Is it better than the previous version? Is it reliable and easy to use? Finally, does it deliver what Amazon promised? And as a bonus, is the experience enjoyable (or at least painless)?

The answer to all those questions is a half-hearted shrug. In some ways, Alexa+ delivers. But in many ways it is an excellent showcase of the limitations of generative AI, and demonstrates that the true problem with the current cohort of AI tools is a mismatch between expectations and reality.

An animated image showing the Alexa+ experience on an Echo Show device. It goes from the home screen with multiple tiles displaying widgets for weather, calendar, entertainment and news, to the chat interface with the assistant answering a question about hiking recommendations nearby.
Amazon

What’s new with Alexa+?

A voice assistant is hard to describe, since it’s intangible and amorphous. It’s quite difficult to state where its capabilities begin and end, not to mention how it might have been upgraded. But I’ll start by comparing it to its predecessor, which I’ll be calling Original Alexa (or OriginAlexa, if you will indulge me).

OriginAlexa taught us how to use very specific commands to do things like turn our living room lights on or off. If you had a family member or friend named Alexa, you might have renamed it to “Computer” and adapted your relationship to that word. Due to how you might have grouped your home gadgets, you could have begun to refer to your kitchen area lights as “skylights,” for example.

“Alexa-speak,” as some call it, differs across households. I say “Alexa, stop” to silence alarms, while my best friend says “Alexa, off” to do the same. But regardless of the specific word choices, Alexa-speak largely revolved around using stilted phrases and careful enunciation to avoid having to repeat yourself to get something done. Anyone who’s used a voice assistant is probably familiar with the frustration of repeating themselves when a command has been misheard for the umpteenth time.

That’s (supposed to be) a thing of the past with Alexa+. In a blog post announcing the new assistant, Amazon’s lead of devices and services Panos Panay said “Alexa+ is more conversational, smarter, personalized — and she helps you get things done.” The company said it “rebuilt Alexa with generative AI,” but it didn’t just use large language models (LLMs) to make its assistant converse more naturally. It also created new architecture to enable API integration “at scale.” These APIs are how assistants can connect to third-party services to do stuff on your behalf, and Amazon described them as “core protocols to getting things done outside of a chat window and in the real world.”

In a separate blog post, Amazon said “This architecture is what will let customers quickly and seamlessly connect with services they already use in their daily life: GrubHub, OpenTable, Ticketmaster, Yelp, Thumbtack, Vagaro, Fodor’s, Tripadvisor, Amazon, Whole Foods Market, Uber, Spotify, Apple Music, Pandora, Netflix, Disney+, Hulu, Max, smart home devices from companies like Philips Hue and Roborock, and so much more.”

Basically, Alexa can communicate with you more naturally, meaning you can talk to it more like you would with another human being, so you can forget about Alexa-speak. It will also retain information about your preferences and is capable of handling more tasks on your behalf.

But enough about the promises. What was living with Alexa+ for weeks actually like?

The setup

Alexa+ is currently only available as an “Early Access” preview to a small group of users. That said, other people in my non-tech circles have started gaining access recently, which means you might be able to check it out yourself soon.

The fact that it’s still somewhat exclusive and experimental means there are likely to be glitches, which is understandable. Once I got past the first day or two after upgrading to Alexa+, I didn’t notice many actual bugs. What frustrations I did encounter later seemed more to do with programming and AI’s limitations than unstable software.

The updated assistant currently requires at least one compatible device with a screen on your network, so those of you who only have Echo speakers will have to wait a lot longer or try it on your phone. I spent most of my time testing Alexa+ via an Echo Show 15, as well as the Alexa app on my iPhone.

There were small variations in the answers I would get on either device, but by and large the experience was similar. The most meaningful difference really was in how I perceived Alexa. Initially, when I was interacting with it on the smart display, it felt more like an upgraded smart home and personal assistant, and I predominantly asked it to check on the weather, Uber prices or to help me do things like set timers, reminders and play music.

Two screenshots showing a conversation with Alexa+ in the Alexa app.
Amazon (Screenshots from the Alexa app)

On my phone, though, I talked to Alexa+ more like I would with ChatGPT. I asked deeper, more philosophical questions that required more research and thought. I asked it to generate images, sort 15 names into three groups and, inspired by the subreddit “r/tipofmytongue,” help me find a book I was struggling to recall.

Over time, I did come to rely on the smart display more, as it’s always easier to just say “Alexa, is Mountainhead a good movie” than to pick up my phone, find an app and ask the AI. Of course, I could ask the same question of Siri or my Google speakers, and I did. All three assistants answered similarly, each citing different sources. Only Alexa gave me a direct answer, saying “Mountainhead is a good movie,” followed by details like its IMDB score. The other two simply rattled off “On the website RottenTomatoes dot com, …” or “here’s an answer from whattowatch dot com.”

Alexa has improved in some small ways

In many ways, Alexa+ is a marked improvement over its predecessor, and I have to admit I found myself nodding, impressed, at its ability to tackle multi-step tasks and recall previous conversations. Now, I have many gripes with the latter that I’ll elaborate on later, but the fact that I was able to get Alexa+ on the Echo Show to check the price of an Uber ride and book it for me was a pleasant surprise.

Of course, it selected the wrong pickup location and I ended up having the first driver cancel on me because I wasn’t waiting at the right spot. But it did manage to completely book a ride on my behalf, relying solely on my voice commands and an Uber integration I had set up earlier.

I was initially impressed by the assistant’s ability to refer to our previous conversations and remember things I asked it to, like my partner’s address and my temperature preferences. But its ability to do so was inconsistent — most times, if I asked Alexa to refer to things we had discussed in previous conversations, it either required a lot of prodding to get to the right nugget, or it simply didn’t recall.

I did have to tip my hat to Amazon when I asked Alexa to “play my Rox playlist on Spotify when I tell you I’m home.” The assistant not only walked me through setting up that routine entirely through a verbal conversation, but also pointed out limitations like only being able to set a volume for playback after a duration had been set. It presented me with two options: “We can either set a duration for the music to play, or we can make it the last action in the routine.” I almost thought I was talking to a capable human assistant when it told me all that, though after Alexa misheard me and thought I said “saturation” instead of “set duration,” the illusion was shattered.

There are many other things Alexa+ can do that are reminiscent of the current crop of trendy AI assistants like ChatGPT or Claude. Ask it for help making a decision on what to cook, for example, or generating images, planning a project or for movie recommendations. One new capability I was excited about was sending me emails from our conversation. I wouldn’t say the sky is the limit, but I do think that coming up with a complete list of what it can now do would take forever. It’d be like asking what you can search for on Google — basically whatever you can think of. Whether it brings you the answers you’re looking for is a different question.

I found Alexa+ helpful in that it was able to email me the lists of names it sorted on my behalf, or the project timeline I asked it to help create. But the limits to what it would send me were frustrating. Straightforward content, like the three groups of five names, arrived at my inbox with no problem. Other times, like when I asked it to email me the conversation I started this article with, it only sent me part of our chat. This has a lot to do with what Alexa deems to be the beginning and ending of a conversation, and it was fairly often wrong. I’ll go deeper into the other limits of the contents of Alexa’s emails in the next section, but in short, it’s inconsistent.

Inconsistent and imperfect

That’s a pattern of behavior that you’ll see here. Alexa+ will be capable in some new way that has the potential to be exciting and useful, but then it will fail you somehow or execute its task incompletely. I loved that it was able to understand me through my verbal stumbles, or to integrate with my third-party apps and email. But I kept hitting walls or being let down. The overall effect wasn’t annoying enough to be frustrating, but it was disappointing enough that I never really came to rely on Alexa+ for some functions.

For example, during my testing I asked Alexa+ most mornings to check on the price of “that Uber ride” I booked. Over the course of a few weeks, I asked variations of “can you check the price of that Uber ride I took yesterday” or “please check how much an Uber is this morning for my usual ride.”

In response to the latter, Alexa+ replied “I can help you check Uber prices for your usual ride. I have two saved pickup locations for you. Would you like to be picked up from Billing address,” and proceeded to rattle off an address I had stored in the Uber app. It continued, offering a second pickup address and asking if I preferred a different location. After I selected one, it asked where I would like to be dropped off. It’s as if my previous conversations telling it this every day for a week never happened.

To its (very small) credit, Alexa+ gave me accurate prices after I supplied all the parameters, but it took a tiresome amount of time. That’s largely due to how verbose the responses are. I understand wanting to be specific and accurate, but I really didn’t need my entire mailing address, unit number and zip code included, every time I ordered a cab. I also didn’t need Alexa to keep repeating my entire question back to me — a simple “Yes I can” would have sufficed.

Four screenshots showing a conversation with Alexa+ in the Alexa app.
Amazon (Screenshots from Alexa app)

Alexa+ also came off a bit needy, which would be humanizing if it weren’t so robotic about it. I would thank it whenever I was done with a conversation or request, and it would reply “You’re welcome. Glad I could help you with…” and make a sort of reference to our chat in a few words. Or it would say “You’re welcome, have a nice day.” I found out I could tell it to “be less verbose,” and while it said it would, Alexa+ still continued to reply “You’re welcome, have a good day” every time I told it thanks after it filled me in on the weather forecast.

I could almost put up with the overly long responses, if Alexa did things the way I expected. But like I already mentioned, it’s inconsistent. Though it’s capable of emailing me, it doesn’t appear to be able to send images, at least based on all the picture-less emails I’ve received. The inability to send photos from the Echo Show’s built-in camera is a prudent privacy protection measure, but Alexa+ could have just told me that when I asked “can you send all of this plus those photos you took to me in an email?”

Instead, it replied “Certainly, I can help you with that. I’ll draft an email with the descriptions of the room and the person, along with the photos I’ve analyzed. Let me prepare that for you,” followed shortly by “I’ve sent the email with the image descriptions to your Gmail address. You should receive it shortly.”

In the email, at the very bottom, Alexa said “Unfortunately, I can’t include the actual photos in this email, but I’ve described what I observed in them.” Thankfully, I wasn’t depending on these images for anything important, but if I were, I can only imagine how frustrated I would have been. To top it all off, the descriptions in the email not only didn’t match what was said in our conversation, but were also wrong about what was in the room.

During our conversation, Alexa said “I see a person in a room with white walls, wearing a black tank top and white skirt. There’s furniture including a cabinet, bookshelf, desk and coffee table.” That’s a fairly accurate description of my living room.

The email I got had three sections with bullet points. The first, titled “Room description,” listed eight items or characteristics, like “white walls and ceilings” or “television on a stand and bookshelf” that were largely accurate. Here, it said I had a “white cabinet with drawers,” which is wrong. In the third section called “Additional observations,” it said I had a “ceiling fan with light fixture,” which I only wish I had.

Alexa+ has some other quirks reminiscent of early versions of ChatGPT or DALL-E. For example, when I asked it to create an image of an AI assistant on the screen of a smart display on a kitchen island, it was completely unable to understand my request. Not only that, but follow-up attempts to get it to “make the display bigger” or “put the smiling person on the screen” only resulted in the kitchen slowly filling up with more and more vegetables while the device remained the same size. I didn’t know whether to laugh or cry.

Four screenshots showing a conversation with Alexa+, asking it to generate images of a smart display on a kitchen island, as well as the subsequent follow-ups asking for refinements.
Amazon (Screenshots from Alexa app)

Is the goal of generative AI to be human?

I could continue to list examples of where Alexa+ failed to completely do what I asked it to, but I think you get the point. The one area where I can confirm a noticeable improvement is in the way the assistant sounds and how it understands speech. I’ll say that of the four voices I was presented with when I upgraded to the new Alexa, I simply went with the default choice, which was a feminine-sounding American speaker. I didn’t mind it, but I know some other people who dislike it. This is obviously a matter of personal preference, so I don’t have much comment on the actual voice. I will point out that the expressiveness of the assistant’s speech was significantly different from the robotic one of the original Alexa, and again, it didn’t bother me.

One of my biggest initial concerns was having to adjust how I spoke to Alexa. I considered myself fluent in Alexa-speak. Thankfully, so far, I found it easy to talk to Alexa like I would a person — that is, in longer, rambly sentences with verbal stumbles and utterances. For the most part, I felt understood, or at least better understood than by the Google Assistant or Siri. Of course, that in itself raises philosophical questions of whether you want to speak to AI like you would people to begin with. But from my weeks with Alexa+, that feels like the least pressing concern.

Back to the conversation I had with the assistant on what to look out for when I review it. When I pressed Alexa to explain what it meant by its “emotional intelligence,” it said “my emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging.”

This set off my internal alarm bells just a bit, so I asked how it was detecting my mood. Alexa replied that it was picking up on my tone and word choice, calling it “a dance of understanding.” So I pushed it further:


"Tell me about your emotional intelligence."

"My emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging."

"Well how are you detecting my moods"

"I pick up on your tone and word choice. Your emotions color our conversation, and I adapt to match. It's like a dance of understanding."

"How do you think I am feeling right now?"

"Based on our chat, I'd say you're feeling analytical and focused. You're in work mode, evaluating my capabilities for your review."

"What about when I, you know, asked you about the weather this morning?"

"This morning, you asked about the weather and I provided a sunny forecast. How did that information help you start your day?"

"No, I was asking about my mood back then, do you know how I felt then?"

"Ah, I see. Based on that morning conversation, you seemed casual and curious about the day ahead. Your tone suggested a typical start to the day, neither particularly excited nor stressed."


During this exchange, I also adjusted the pitch of my voice to see if it would make a difference, and none of the responses I got gave me any real insight as to whether Alexa was actually listening to the sound of my voice to determine my mood. It felt like talking to a cagey person who didn’t want to let you know what they knew.

Neediness, inconsistency and gaslighting. In creating something so imperfect, maybe Amazon has actually managed to nail the human experience. I’ll be clear, though. The company never said it set out to make something human. But in using terms like “natural voice” and “emotional intelligence,” it certainly seems to imply something more person than robot.

It’s not a problem unique to Amazon. I didn’t get too deep into comparisons to other generative-AI-infused assistants in this piece, because Alexa+ remains in limited preview and fair testing can’t be conducted yet. But I did already notice some small areas where the Amazon offering was smarter than others.

I asked Alexa, Siri and the Google Assistant (on a Pixel 9) to set a timer for 30 minutes, before changing my mind and telling them to adjust those to 25-minute countdowns instead. Only Alexa took the time that had already elapsed into account, setting my new timer for 24 minutes and about 30 seconds instead of simply starting over at 25 minutes like the other two. It’s an insignificant difference, but it effectively shows an element of consideration that’s almost human in its complexity.
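For what it’s worth, the arithmetic behind that behavior is easy to sketch. The roughly 30 seconds of elapsed time below is inferred from the result I got, not a number Alexa reports, so treat this as an illustration rather than Amazon’s actual logic.

```python
# Elapsed-aware timer adjustment (illustrative sketch, not Amazon's code).
def remaining_after_change(new_minutes: float, elapsed_seconds: float) -> float:
    """Seconds left if the new duration is counted from when the original timer started."""
    return new_minutes * 60 - elapsed_seconds

# I asked for 30 minutes, waited about 30 seconds, then switched to 25 minutes.
print(remaining_after_change(25, 30) / 60)  # ~24.5, i.e. about 24 minutes 30 seconds
```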

I will spend more time with Alexa+ to further poke at the limits of its abilities and to continue our dance of understanding. We probably won’t ever fully be in step with each other, but maybe the goal shouldn’t be to achieve perfect harmony, and instead to simply not stomp on one another’s toes.

This article originally appeared on Engadget at https://www.engadget.com/ai/alexa-preview-an-almost-philosophical-exercise-130012573.html?src=rss

© AI-generated image by Alexa+

An AI-generated image of a smart display on a kitchen island. The smart display has a cartoon smiling face on it, and the device is surrounded by vegetables and wooden utensils.

Itch.io is removing NSFW games to comply with payment processors' rules

24 July 2025 at 13:30

Itch.io has deindexed and hidden all adult games and other content from its browse and search pages to make sure it doesn't lose the ability to sell with the payment processors it uses. The gaming marketplace, which mainly hosts titles from indie developers, has admitted in an announcement that it wasn't able to give creators advance notice. It "had to act urgently to protect the platform’s core payment infrastructure," it said, because the "situation developed rapidly." 

The website explained that it recently came under the scrutiny of its payment processors after an organization called Collective Shout launched a campaign against Steam and itch.io. Collective Shout, which describes itself as an Australian grassroots campaigning movement, directed its concerns to the gaming marketplaces' payment processors. It originally focused its campaign on the game No Mercy, which revolves around a character engaging in rape and sexual violence. Both itch.io and Steam removed the game back in April. In its open letter to payment processors published earlier this month, however, the group said it "discovered hundreds of other games featuring rape, incest and child sexual abuse" on Steam and itch.io since then.

Steam already banned "content that may violate the rules and standards set forth" by its payment processors, which mostly affected adult games. Now, itch.io is also clearly heading that way. At the moment, the website is doing a comprehensive audit of its content and will keep adult games delisted until it's done. After it's finished, it will introduce new compliance measures and will require creators of adult games to confirm that their titles are allowed under the rules of the payment processors linked to their account.

Itch.io admitted that some games will be permanently removed from its marketplace as part of its review and the new policy it will put in place, but it said its "ability to process payments is critical for every creator" on its platform. "To ensure that we can continue to operate and provide a marketplace for all developers, we must prioritize our relationship with our payment partners and take immediate steps towards compliance," the website said in its announcement. 

Although No Mercy's removal from gaming websites is understandably celebrated by a lot of people, users are concerned about the potential effects of marketplaces adopting anti-porn rules. Queer content, for instance, is disproportionately affected by censorship measures and could be tagged as "adult" or "NSFW" even when it doesn't contain anything sexual. 

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/itchio-is-removing-nsfw-games-to-comply-with-payment-processors-rules-133045491.html?src=rss

© Andrew Brookes via Getty Images

Teenage boy using keyboard and mouse

The best Nintendo Switch 2 accessories for 2025

23 July 2025 at 13:06

Now that the Switch 2 is finally here, you may be just as excited to kit out your new console with the right accessories as you are to dive into Mario Kart World. Not only can the right accessories make it easier and more fun to play all of the games you love, but they can also make your gaming experience better in different environments, be it on the couch, in an airplane or in the car. We're excited to get our hands on some of the newest Switch 2 accessories (more on those below), but plenty of our favorite classics are compatible with the Switch 2 as well as older versions of the console. These are our current favorite Nintendo Switch 2 accessories, and we'll add to this list over time as we test out new gear.

Best Nintendo Switch 2 accessories

More Nintendo Switch 2 accessories

Nintendo announced a bunch of new accessories when it revealed the Switch 2 earlier this year. Key among them are a new Switch 2 Pro controller, Switch 2 camera, an all-in-one carrying case and more. Our staff will be testing out a bunch of these accessories, and we'll keep our favorites list up to date as we do so. If you're interested in picking any of those new Switch 2 accessories up, you can find them at a variety of retailers:

Joy-Con 2 bundle

Switch 2 Camera

Hori Nintendo Switch 2 Piranha Plant Camera

Joy-Con 2 Charging Grip

Joy-Con 2 Wheels (set of 2)

Switch 2 All-in-One Carrying Case

Switch 2 Carrying Case and Screen Protector

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/best-nintendo-switch-2-accessories-070011952.html?src=rss

© Sam Rutherford for Engadget

The best Nintendo Switch 2 accessories

Video Games Weekly: Censorship, shrinkage and a Subnautica scandal

22 July 2025 at 22:18

Welcome to Video Games Weekly on Engadget. Expect a new story every Monday or Tuesday, broken into two parts. The first is a space for short essays and ramblings about video game trends and related topics from me, Jess Conditt, a reporter who's covered the industry for more than 13 years. The second contains the video game stories from the past week that you need to know about, including some headlines from outside of Engadget.

Please enjoy — and I'll see you next week.


This week, I’m fried. Maybe it’s the plodding and ever-present crumbling of society and human decency, or maybe it’s because Love Island USA just ended so I’m feeling extra listless. It’s a familiar summer sensation, but this year everything is exaggerated and extra tense, the stakes of every action seem higher, and instead of melting into the warmth of the season with a popsicle and a smile, I often find myself frozen and numb. I am the popsicle, coo coo ca choo.

I’m not sure exactly what I’m trying to convey here, but I think it’s clear that I shouldn’t be writing anything too serious at the moment. I’m working on a few reports and trying to keep my composure amid the chaos, and all the while, the video game headlines keep rolling on. I’ve included a few more than usual this week, as penance for my popsicle state.


The news

The Chinese Room escapes from Tencent

UK studio The Chinese Room, creator of Still Wakes the Deep and Everybody’s Gone to the Rapture, is independent once again. The Chinese Room leaders completed a management buyout with help from VC firm Hiro Capital to fully split the studio from Tencent subsidiary Sumo Digital, which acquired it in 2018. A number of people were laid off as part of the transition and the studio is left with a total of 55 employees. The Chinese Room is still working on Vampire: The Masquerade — Bloodlines 2 for Paradox Interactive, and it also has original projects in development.

Still Wakes the Deep was one of my absolute favorite games of 2024. Whether you’re a fan of beautiful paranormal horror or you're just really into oil rigs, give it a go.

Read these stories that Vice censored

Vice’s owner, Savage Ventures, doesn’t want you to read this story. Or this one.

Vice removed two articles about Steam’s new ban on certain “adult-only” content and the organization that pushed for the change, Collective Shout, which has the support of prominent anti-pornography groups with conservative religious foundations. The stories were written by contributor Ana Valens, who said the removals were “due to concerns about the controversial subject matter — not journalistic complaints.” Valens has vowed to never write for Vice again and a handful of reporters there have resigned in solidarity.

Censoring stories about censorship is certainly a choice, Vice.

Supermassive delays Directive 8020 and shrinks its team

The home of Until Dawn and The Dark Pictures Anthology, Supermassive Games, is laying off 36 people, restructuring its team and delaying one of its projects into 2026. A statement from the studio says the decisions were in response to the video game industry’s “challenging and ever-evolving environment.” It’s estimated that Supermassive had more than 300 employees before the layoffs.

Directive 8020, the fifth installment in the Dark Pictures Anthology, is now due to come out in the first half of 2026, rather than this fall. Honestly, I’m not surprised to hear Supermassive needs more time to work on Directive 8020. I watched Engadget UK bureau chief Mat Smith play the demo at Summer Game Fest in June, and while it looked great, we were both surprised by how short and non-interactive the segment was. He summed up this feeling in his preview with the line, “Finally, I got to play (but only for a few minutes).”

Supermassive is also working on Little Nightmares III, a series that it took over from Tarsier Studios. Tarsier created Little Nightmares and its sequel, but lost the rights to the IP when the team was acquired by a subsidiary of Embracer Group in 2019. Series publisher Bandai Namco kept the Little Nightmares brand and commissioned Supermassive to build the third game, while Tarsier is working on its own project, Reanimal.

It makes sense that Supermassive would prioritize Little Nightmares III in order to fulfill its obligations with Bandai. The game has already been delayed once, and it’s set to hit PC and consoles on October 10.

FBC: Firebreak is getting less busted

I still have high hopes for FBC: Firebreak to be the Left 4 Dead revival we’ve always wanted, but the fact is, it’s not quite there yet. Remedy Entertainment is aware of this hard truth and has a plan to fix it. The studio laid out its pipeline for making FBC: Firebreak easier to jump into, more fun to play and less confusing overall, with most major changes coming in an update this winter.

Valve is still the best retirement community in game development

PCGamesN published an interview with Counter-Strike co-creator Minh Le, who left Valve years ago to try out independent development. One sentiment stuck out to me.

“They didn't force me out or anything,” Le told PCGamesN. “But a part of me kind of regrets it. Looking back, my decision to leave Valve was, financially, kind of a poor decision. If I had stayed with Valve, I would have been able to retire by now.”

It’s not presented as an indictment of Valve, but I find it notable that Le describes the studio as a place to retire, rather than a space to innovate and create the next generation of video games. At this rate, Valve will never outrun its reputation as the studio where talented game developers go to die (professionally speaking). 

But, hey, at least they're not getting laid off en masse. Which, unfortunately, brings us to the next headline.

Your favorite studio’s favorite studio faces layoffs

Cyberpunk 2077, Sea of Thieves and Dune: Awakening support studio Virtuos is laying off 270 developers, which is about seven percent of its staff. Virtuos is currently best known as the studio behind The Elder Scrolls 4: Oblivion Remastered alongside Bethesda, and it has more than 4,000 employees across Asia, Europe and North America. The cuts affect developers in Asia and Europe, with “fewer than 10” in France, where work on Oblivion Remastered was headquartered.

Heck yeah, there’s gonna be a Hellraiser game

Make sure to pin this one on your calendar. Saber Interactive is making Clive Barker's Hellraiser: Revival, a first-person, action-survival horror game that features actor Doug Bradley as Pinhead for the first time in nearly 20 years. Barker himself provided input on the story, too. It’s coming to PlayStation 5, PC and Xbox Series X/S, with no release date yet.

"The Hellraiser universe is defined by its unflinching exploration of pain, pleasure, and the thin and terrifying line that separates the two," a description from Saber Interactive reads. "That essence is at the heart of our game."

An inside look at the fallout of the Zenimax layoffs

Game Developer reporter Chris Kerr spoke with a number of employees at Zenimax who are still reeling from the layoffs that Microsoft enacted in early July. The vibes there sound pretty terrible.

“This carcass of workers that remains is somehow supposed to keep shipping award-winning games," one senior QA tester told Kerr. The developer continued, “Microsoft just took everything that could have been great about the culture and collaboration and decimated it. Morale is terrible. It's grotesque. People are stressed. They're crying.”

Xbox cloud play anywhere

When Xbox isn’t firing thousands of employees in one blow, it’s quietly laying the groundwork for the future of video game distribution. An update for Xbox Insiders this week introduces cross-platform cloud support, bringing your cloud library and play history to the Xbox PC app. This means you can access cloud activity on an Xbox console, PC or Windows handheld, and seamlessly play cloud games across devices. This is just how video games are going to work in the coming decades, and it’s interesting to watch our future slowly roll out in blog posts and software updates.

Subnautica 2 scandal catch-up

Did you miss all of the mess around Subnautica 2 last week? Or, more accurately, this past month? To quickly summarize, Subnautica publisher Krafton is being sued by the series creators after it fired them and then delayed their game, allegedly sabotaging a $250 million bonus payout due to developers. To not-quickly summarize, see my complete breakdown of the drama.

My Melody & Kuromi hits Netflix this week

I don’t know who else needs a little levity in their life right now, but I certainly do. Thankfully, the stop-motion show My Melody & Kuromi is coming to Netflix on July 24, and there’s already an adorable tie-in music video by LE SSERAFIM to enjoy. Zen out, watch all of the Sanrio sweetness and finally settle the debate: Are you more of a Kuromi or a My Melody?



Have a tip for Jessica? You can reach her by email, Bluesky or send a message to @jesscon.96 to chat confidentially on Signal.

This article originally appeared on Engadget at https://www.engadget.com/gaming/video-games-weekly-censorship-shrinkage-and-a-subnautica-scandal-221839722.html?src=rss

© Saber Interactive

Still Wakes the Deep developer The Chinese Room regains its independence

22 July 2025 at 18:24

The Chinese Room, maker of Still Wakes the Deep, has bought back its independence. The studio will continue developing new franchises after splitting from the Sumo Group. The latter said earlier this year that it's pivoting away from original games and toward co-development work. Tencent bought the British studio Sumo for $1.27 billion in 2021.

Game Developer reported that VC firm Hiro Capital facilitated the Chinese Room's independence deal. Studio head Ed Daly told the publication on Monday that the company is open to partnering with other developers moving forward.

Unfortunately, the studio's good news is tainted with an all-too-familiar story. The studio confirmed to Game Developer that it will lay off an undisclosed number of employees as part of the split. "Following the divestment process, a number of roles were regrettably made redundant," a company spokesperson said. "The specifics are confidential, but the studio is currently 55 developers. This was a very difficult process as we sought a future for the studio outside of Sumo. No more layoffs are planned as the studio moves forward."

Still from the game Little Orpheus. A character in a walrus mask swings on a vine as a T-Rex roars. Green background with dramatic lighting.
The Chinese Room / Secret Mode

The "modern horror classic" Still Wakes the Deep isn't all the studio is known for. The Chinese Room also made Little Orpheus, Everybody's Gone to the Rapture and the OG walking simulator, Dear Esther. The UK-based developer is also working on Vampire: The Masquerade — Bloodlines 2 with Paradox Interactive. The long-delayed title is scheduled to launch in October. The Chinese Room is also developing two other unannounced titles.

"This management buyout allows us to scratch the creative itch of continuing to work on new, original intellectual property but also to partner with other studios on other projects when they fit in with our vision," Daly told Game Developer. "This is what we are doing and we want to carry on doing it, so we're happy to carry on in this vein."

This article originally appeared on Engadget at https://www.engadget.com/gaming/still-wakes-the-deep-developer-the-chinese-room-regains-its-independence-182405167.html?src=rss

© The Chinese Room

Game still. A bright light emanating from a closed door in a dimly lit room.

A Hellraiser game is in development

22 July 2025 at 14:53

Calling all gamers with a love of horror films: Hellraiser is getting a video game adaptation. Saber Interactive is turning the cult franchise into a game called Clive Barker’s Hellraiser: Revival. Clive Barker's 1986 novella, The Hellbound Heart, served as the basis for the Hellraiser films. He also wrote and directed the first installment, which premiered in 1987.

Clive Barker’s Hellraiser: Revival is a single-player game that has you play as Aidan while fighting your way through an "action survival horror experience." You also must deal with Pinhead, who's known for harvesting human souls — yeah, this is a creepy game. As Saber Interactive states, "The Hellraiser universe is defined by its unflinching exploration of pain, pleasure, and the thin and terrifying line that separates the two. That essence is at the heart of our game." You can see for yourself in the announcement trailer.

The team at Saber Interactive took an additional step to ensure fans of the movie franchise are happy: they spoke with Clive Barker himself to maintain an "authentic" storyline. There's no word yet on exactly when Clive Barker’s Hellraiser: Revival will launch, but more announcements should come this year. It will be available on the Xbox Series X and S, PlayStation 5 and PC through Steam.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/a-hellraiser-game-is-in-development-145313443.html?src=rss

© Saber Interactive

Hellraiser: Revival game.

Neon Abyss 2, a prison-break RPG and other new indie games worth checking out

19 July 2025 at 11:00

Welcome to our weekly roundup of the goings on in the indie game space. It's been quite the busy spell, with several notable games debuting or landing on more platforms and some intriguing upcoming projects popping above the parapet.

The Steam Automation Fest is taking place this week (it runs until July 21 at 1PM ET), so fans of games like Satisfactory and Factorio might be looking to pick up some bargains on similar titles. As well as offering a variety of discounts, Automation Fest also saw the debut of several new games.

One of them is from a studio formed by the original team behind the celebrated developer Zachtronics. The first title from Coincidence (with the help of publisher Astra Logical) is Kaizen - A Factory Story. This puzzler is set in 1980s Japan, and you're tasked with building popular electronics of the day, including calculators, arcade cabinets, personal computers and toy robots.

Dawn Apart also arrived during Steam Automation Fest. This is a blend of space colony sim and automation sandbox set in a destructible, voxel-based world. Astra Logical is also publishing this project, which is from Industrial Technology and Witchcraft. Dawn Apart is available now on Steam in early access.

Other new releases

One of the highest-profile indie games to debut this week is Neon Abyss 2, from Veewo Games and Kepler Ghost. The original game has sold more than a million copies, so plenty of people will have been looking forward to the sequel. This time around, you can play with up to three friends in co-op. The game's creators are promising "unlimited item synergies" here, which is handy since there's an onus on creating overpowered builds in this action roguelike. Neon Abyss 2 is out now on Steam in early access.

Ready or Not has been a hit on Steam since it arrived there in late 2023, and now Void Interactive's tactical first-person shooter has made the leap over to PlayStation 5 and Xbox Series X/S. Here, you'll lead a team of SWAT officers in an attempt to restore order to a city that's become overrun by corruption and chaos. You can play the 18-level campaign solo or with up to five friends.

Back to the Dawn is a story-driven RPG from Metal Head Games and publisher Spiral Up Games that's now out of early access on Steam. It has also arrived on Xbox Series X/S and Game Pass. 

As Thomas the Fox, you're an investigative journalist who has been framed for revealing government secrets and tossed into prison. You'll have to forge alliances with other prisoners in your attempt to break out.

As part of Back to the Dawn's full launch, Metal Head is introducing a new character, Bob the Panther, as well as a fresh storyline. I like the art style here, and the premise is pretty compelling.

Thinking is cool and all, but sometimes you just want to turn down the volume on your brain and shoot a bunch of stuff. Kickback is an arcade bullet hell shooter with a fun twist: you can only move while you're shooting and, as the title suggests, you'll be propelled backwards. I'm into it. Kickback — from Dot blood and publisher Targem Games — is out now on Steam.

I'll be honest, I had no idea what Fretless - Wrath of Riffson was when I got an email from Steam this week telling me that this game, which I apparently had placed on my wishlist at some point, is now available. But hey, that name is an absolute winner.

I appreciate the pixel art look of this turn-based RPG from Ritual Studios and publisher Playdigious Originals. According to the Steam page, you'll play "as Rob, a friendly musician trying to restore harmony in an off-tuned world where wacky villains are corrupting music." That sounds quite lovely, actually.

Upcoming

I have absolutely no idea how Strange Scaffold does it. The studio behind I Am Your Beast, the brilliantly unsettling Clickolding and Teenage Mutant Ninja Turtles: Tactical Takedown is about to unleash its seventh game in the space of two years. Co-Op Kaiju Horror Cooking is Strange Scaffold's first co-op game.

The trailer makes it look delightfully chaotic, as players rush to cook meals to satiate giant monsters and stay alive. Co-Op Kaiju Horror Cooking is out on July 29, but until then you can play the entire game for free as part of a playtest.

What's more, Strange Scaffold has announced two more games: Truck-kun is Supporting Me From Another World?! is a "chaotic anime-inspired vehicular action game about driving into objects at high speed to rescue the corporate ladder climber you accidentally isekai'd into a dangerous medieval fantasy world" while Space Warlord Baby Trading Simulator is a "fast-paced stock market sim about buying, selling, and shorting stocks based on the simulated lives of adorable alien children."

Beautiful one-line pitches. Zero notes. Can't wait for those, or to find out about another new game demonstrating a "unique method of developer collaboration" that Strange Scaffold plans to unveil next week. 

If you're hankering for more first-person parkour in a similar vein to Mirror's Edge, Neon White or Ghostrunner, Jetrunner might just do the trick. Riddlebit Software's platformer-shooter hybrid, which is published by Curveball Games, is coming to Steam on September 4. It looks like it could be fun! If the announcer's voice in the trailer sounds familiar, that's because the character is played by Matthew Mercer.

I missed this one last week, but I just want to very quickly mention Ghost Haunting. The world needs more funny point-and-click adventures in the vein of those classic LucasArts games. Three Headed Monkey Studios and publisher Daedalic Entertainment have made a positive first impression to suggest they're ready to take that particular baton. There's no release date or window as yet, but this was an instant addition to my wishlist on Steam (let's just hope I don't forget about this one, eh?)

I would love to play a tour manager simulator game at some point, getting into the nitty gritty of making sure gigs and everything else involved in a band's life on the road goes smoothly. Rockbeasts is not quite that, but it still seems like it could be a good time.

This is a "grungy" narrative-driven rock band management game from Lichthund and Team17. The trailer is fun and there are some interesting talents involved here. Rockbeasts was written by Jakub Szamałek, who was senior writer of The Witcher 3 and principal writer of Cyberpunk 2077. Even cooler is that Iggy Pop voices one of the characters, who is named Iggy Pup. Love it.

Rockbeasts is coming to PC and consoles next year.

This article originally appeared on Engadget at https://www.engadget.com/gaming/neon-abyss-2-a-prison-break-rpg-and-other-new-indie-games-worth-checking-out-110043805.html?src=rss

© Veewo Games/Kepler Ghost

Neon Abyss 2

Netflix is already using generative AI in its original shows

18 July 2025 at 20:12

Netflix admitted during its earnings call on Thursday that it used generative AI to create VFX in The Eternaut, a Netflix original from Argentina that was released in April 2025. The company's co-CEO Ted Sarandos said that generative AI was specifically used for a VFX shot in the post-apocalyptic drama, but the move is one of several ways Netflix is embracing AI.

According to Sarandos, the creators of The Eternaut wanted to include a shot of a building collapsing in Buenos Aires, and rather than contract a studio of visual effects artists to create the footage, Netflix used generative AI to produce it. "Using AI powered tools, they were able to achieve an amazing result with remarkable speed," Sarandos shared during the earnings call. "In fact, that VFX sequence was completed 10 times faster than it could have been completed with... traditional VFX tools and workflows."

The shot "just wouldn't have been feasible for a show on that budget," Sarandos says, as someone with some input on the show's budget. The executive says that The Eternaut features "the very first Gen AI final footage to appear on screen in a Netflix original series or film." Clearly, the show is also a prototype for how Netflix can avoid costs it doesn't want to swallow in the future.

Workers in the entertainment industry have not taken kindly to the use of generative AI. Labor strikes — including the recently resolved SAG-AFTRA video game strike — have made securing protections against AI a central issue. The Oscar-nominated film The Brutalist came under fire in 2024 for using AI tools during production. Beyond that, whether generative AI models were illegally trained on copyrighted material is still an open question.

Netflix plans to use generative AI to create ads for its ad-supported Netflix subscription, and the company is reportedly testing a new search feature powered by OpenAI models. Using generative AI in production might seem par for the course for a company that's already invested, but it could help to normalize a technology that many creatives remain actively against.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/netflix-is-already-using-generative-ai-in-its-original-shows-201209502.html?src=rss

© Netflix

A man wearing a gas mask and surrounded by snow from Netflix's The Eternaut.

Remedy lays out its plan to fix FBC: Firebreak, which includes improved onboarding

18 July 2025 at 18:23

Remedy has announced plans to fix FBC: Firebreak and restore consumer goodwill after a rough launch earlier this year. Many of the issues at launch concerned the onboarding process; the company acknowledges that "many players come into the game and leave within the first hour."

Remedy is trying to make that opening hour more welcoming to new players, so they don't "feel ineffective and confused as to what to do." It has already placed an introductory video in the game with a narrative to explain the world and how it relates to the developer's hit game Control, as FBC: Firebreak is set in the same universe. Remedy promises more of this type of "narrative onboarding" will come to the game later this year.

It's building a playable tutorial level, as the developer acknowledges it needs "to do a better job" teaching new players the core mechanics and features. It's also taking a page out of Nintendo's playbook and introducing more "pop up" tips that will appear at "relevant times" throughout the first couple of hours.

The forthcoming updates aren't just about onboarding. Remedy will be adding three new heavy guns to the arsenal and introducing a modding system for weapons. The company will share more about this system "before September." It's also introducing "big changes to the way" players access missions and beefing up the missions themselves.

The developer says it's refining several elements to increase the fun factor, going on to say that it will be "weeding out our least fun experience" and "focusing on our best experiences." There will even be new game modes and missions down the line.

Much of this is coming in the game's first major update, with a planned winter launch. For the uninitiated, FBC: Firebreak is a multiplayer FPS that places an emphasis on cooperative action. It's available on Game Pass and PlayStation Plus, so it could technically be "free" to check out any progress the dev has made since launch.

This article originally appeared on Engadget at https://www.engadget.com/gaming/remedy-lays-out-its-plan-to-fix-fbc-firebreak-which-includes-improved-onboarding-182301518.html?src=rss

©

© Remedy

A still from the game.

Galaxy Watch 8 review: Samsung’s best smartwatch in years

25 July 2025 at 13:00

I’m too tired to write this review. Last week, I penned a work-in-progress version because I hadn’t yet spent enough time to fully explore the new features Samsung packed into the Galaxy Watch 8. One of those features is the new AI-powered running guidance, which I followed.

I have run more in the past week than I ever have before. In my life. Ever. But I’m pretty sure my editors would still appreciate the full assessment I’ve proposed, so I’ve plied myself with caffeine and will do my best here.

Luckily, most of what I experienced in my first few days with the watch still stands: I was impressed then and remain so now. Not only are there a few valuable health and fitness additions, but the redesigned hardware looks great as well. Plus, the on-watch integration with Google’s Gemini has proven surprisingly seamless. (When AI interactions actually yield positive results, I’m always taken aback.)

Samsung also fixed not one but two of the complaints I had with the Galaxy Watch 7. Since the company probably didn’t base its updates on my personal opinions, I imagine others had concerns over the comfort of the Watch 7 — and I know from other reviews that many people were disappointed in its battery life.

Before I get to Gemini, the redesign and exactly how my watch exhausted me, I want to discuss antioxidants. Antioxidant levels are a compelling new health metric the Galaxy Watch can now measure — and despite my hard work, I scored miserably.

Antioxidant tests and my persimmon intake

The Galaxy Watch 8 has the same processor and sensors as the previous generation, but it’s using those to detect a new metric: antioxidant levels. You have to take the watch off and hold your thumb on the sensor node for five seconds to get a reading, and you’ll need to refer to the Health app on your phone for detailed results. The first time I tested it, I got a score of 60, which falls right in the middle of the “low” segment of the scale. The only other results are “very low” and “adequate,” which will surely be a disappointment to the over-achievers out there.

The Health app suggested I “try eating one (100g) persimmon today.” When my grandmother was alive, she made persimmon cookies every Christmas and the last time she did so was also the last time I thought about that fruit. Considering South Korea is one of the world’s largest persimmon producers, it might make sense that a Samsung device suggested them. The produce guy at my local grocery store said they only carry them around the holidays (no doubt because grandmas insist, plus that’s when they’re in season).

Turns out persimmons are high in antioxidants such as Vitamin C and beta-carotene (Vitamin A). Cantaloupe also has those nutrients — plus it’s orange like persimmons — so I bought one. In fact, I went hard on the antioxidants and did everything I could to see if I could get a better score. I ate a bunch of cherries and a little dark chocolate. For breakfast, I had half a cantaloupe instead of the very tasty-smelling breakfast hash my husband made, and I drank plain green tea instead of my usual oat matcha latte. I did yoga and went to my monthly medical massage appointment (stress is said to increase oxidant levels). I got eight hours of sleep and drank my weight in water. I felt ready.

So I took the test again and my antioxidant levels were… two points lower. Sigh. I wasn’t expecting overnight transformation, but I’d hoped for a little improvement. Next time, I’m eating the hash. Taking the test on subsequent days — days that I did nothing special to improve my score — I never scored higher than 60 and dropped as low as 56. So it seems this is a metric that may take some time and effort to budge.

Finally, a new design!

Since its fourth generation, the Galaxy Watch has looked pretty much the same: a round glass screen set in a round metal case with large prongs popping out of the top and bottom to hold the band. While there was nothing wrong with sticking with a good design, giving a device a refreshed look lends the yearly update cycle some meaning — instead of feeling like an obligatory move to appease stockholders.

Galaxy Watch 8 in bright sun
With 3,000 nits of peak brightness, the watch is easy to read even in the bright Albuquerque sun.
Amy Skorheim for Engadget

This time around, the Galaxy Watch 8 looks markedly different thanks to a shape borrowed from last year’s Ultra model (and some high-end timepiece profiles). Samsung calls it a “cushion design” — a circular glass display set atop an aluminum case that looks like a roundish square (or squircle as our own Cherlynn Low calls it). The Sport band on my review model curves to meet the width of the case, creating a smooth arch that I think is more refined than the clunkier pronged configuration. In fact, the entire look is classy — much more so than the Apple Watch, which looks like a miniaturized iPhone 11 glued to a silicone strap.

The cushion design is great in one way, but also worrisome. Because the glass display stands proud of the case by at least two millimeters, my first thought was: “I’m going to trash this screen.” Immediately after taking the watch out of the box, I ordered screen shields and, during the course of this review, managed to crack one. That’s possibly because I was cheap and ordered off-brand screen protectors, or perhaps the protruding glass cushion is simply begging for misfortune.

However, the raised glass solves one issue with the Galaxy Watch 7. One way to scroll is by running your finger along the outer edge of the screen. In fact, on-screen cues suggest that gesture when there’s more content to view. On the Watch 7, my finger was always getting in the way of the text I wanted to read. Now that the display is elevated, there’s room for my finger to travel further out along the outer edge and stay out of the way.

Appearance aside, my favorite hardware update is the much more comfortable design. The Watch 7’s underside had a sizable sensor node, and it took me a week to get used to the feel — I wrenched it off the first night I tried to wear it to sleep. The Watch 8, however, felt good the first time I put it on. And now that I’ve spent more time with it, I can say it’s the most comfortable watch I’ve tested. I often forget I have it on and don’t mind wearing it to bed, which is saying a lot. In fact, when I slept with it on one wrist and an Apple Watch Series 9 on the other to compare sleep data, I woke up in the middle of the night and took off the Apple Watch, but felt no compulsion to do the same with Samsung’s model.

Gemini on your wrist and Wear OS 6

The Galaxy Watch 8 is the first smartwatch to ship with Google’s Wear OS 6, though Samsung overlays it with its own One UI interface. The first time we’ll get a look at the unfiltered wearable software will be on Google’s upcoming Pixel Watch 4. The headline feature is Gemini integration. Now, most things you can do with the Gemini app on your phone can be done through your watch. Just note that you’ll need to have your Gemini-compatible phone nearby to use it.

Galaxy Watch 8 with the Ask Gemini screen active
Amy Skorheim for Engadget

Google’s AI was pretty snappy at basic tasks like setting reminders and timers. Requests that required more “thought” took only marginally longer (and completed more quickly when I was using my home Wi-Fi versus relying on cell signals out in the world). I asked for the best record stores in my city and a good place to eat lunch. I was pleasantly surprised to hear of a new-to-me answer for each category. The record store turned out to be two bins of LPs inside a book shop, but still, I hadn’t heard of it before. I asked why my eight-year-old talked about lava chicken all the time and was correctly informed about Jack Black and A Minecraft Movie.

The assistant displays answers in text form on the watch screen and also reads out every word. That can turn into a lot of talking, but tapping the screen stops the chatter if you prefer to read. Gemini can handle follow-up questions and tasks, but since it doesn’t usually keep listening after it speaks, you have to tap the mic icon to make your next request.

I reached out to Samsung to ask if I could change a setting to make Gemini keep listening, but there isn't one at this time. For now, you'll need to tap or say "OK Google" again for follow-ups like “Add that to my Google Keep” or “Remind me about that.” If Gemini asks you a question, however, such as verifying a text it’s about to send, it does keep listening for the answer. You can also combine commands at the outset, such as asking to look up an address and text it to someone.

I requested some fairly specific tasks and didn’t hit any snags. I asked it to text the address of a local restaurant to my husband and it verified which location I wanted then sent the message. I asked when the Albuquerque Isotopes were playing the Sacramento River Cats and it gave me the upcoming dates. I tapped the mic and said, “Add that to my calendar,” which it did.

It only let me down twice. When I asked whether it was going to rain, it just read me the general weather forecast instead of answering the question. All the other assistants in my life (Alexa and Siri) can answer those questions — though technically that’s a Gemini problem, not the watch's fault.

The other disappointment is a matter of integration. I was out on a walk and I asked for directions to a nearby grocery store. Gemini gave me the address and the distance, but told me to look at my phone for the actual directions. When I asked Siri the same thing on the same walk, directions simply showed up on my Apple Watch. (Yes, I’ve been wearing two watches and carrying around two activated phones like a double agent, just much less cool.)

Health tiles on the Galaxy Watch 8 Classic display the Energy score and activity metrics
Tiles on the Galaxy Watch 8 Classic
Amy Skorheim for Engadget

In addition to Gemini, Wear OS 6 introduces updated and larger tiles which organize and display information on your watch. By default, the tiles are separated into categories: Health, Fitness and Basic (weather, media playback, calendar) with each box displaying a bit of pertinent info — activity metrics, current temp, your next event and so on. If the limited info isn’t enough, tapping a tile opens the respective app. You can categorize, rearrange, swap out and add as many pages of these cards as you want in the Wear app on your phone. I spent some time tweaking the layout to my liking, but really didn’t alter the defaults too much. Already this feels like a much smarter way of interacting with a watch. Now you can glean key information from the apps that are important to you just by browsing your tiles.

The running coach made me mad, but it earned my respect

The Galaxy Watch 8’s built-in, AI-fueled running guidance sounds a bit like the upcoming Workout Buddy that Apple announced for watchOS 26. I plan to test that out next and it will be interesting to compare the two. Google Pixel watches and Fitbits currently have the option of an AI-powered running coach, but it’s limited to Premium subscribers. Garmin offers an adaptive training program called Run Coach as well, so this isn’t an entirely novel idea.

Samsung’s running coach begins with an assessment: Run as fast as you can maintain for twelve minutes. You’ll then be awarded a performance level ranging from one to ten. Each level comes with a tailored, four-week plan with four workouts per week intended to get you ready to complete a 5K, 10K, half or full marathon. My 12-minute, 22-second per-mile pace put me at a level three. My plan was geared towards getting me to run a sub-35-minute 5K at the end of four weeks.
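
To put that sub-35-minute goal in perspective, here’s a quick back-of-the-envelope check — my own illustrative math, not anything from Samsung’s app; the 35-minute goal, the 3.1-mile distance and the 12:22 assessment pace are the figures above:

    # Rough pace math for the plan's end goal (illustrative only)
    goal_minutes = 35                   # sub-35-minute 5K target
    distance_miles = 3.1                # a 5K in miles
    assessment_pace = 12 + 22 / 60      # 12:22 per mile from the fitness assessment

    target_pace = goal_minutes / distance_miles     # ~11.29 minutes per mile
    print(f"{int(target_pace)}:{round(target_pace % 1 * 60):02d} per mile")  # 11:17 per mile
    print(f"~{round((assessment_pace - target_pace) * 60)} seconds per mile faster than the assessment pace")  # ~65 seconds

In other words, hitting the four-week goal means shaving roughly a minute per mile off my assessment pace — which helps explain why the plan’s early workouts already felt like a stretch.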

Samsung’s running coach
Samsung/Engadget

The first workout was low-intensity interval running with four segments of running hard, interspersed with cool-down walks — which felt easy and attainable. Two days later I did the second routine, which had me run for thirty minutes at my assessment pace (which it referred to as a “slow jog”).

For the next session, I was told to run for 30 minutes on a hilly course. There was some walking on the inclines, but I completed it. After that, I was startled to see it wanted me to run 3.1 miles at a target pace below 12 minutes per mile. Yes, on workout four (of 15 total), I was supposed to run a 5K faster than my assessment pace. This felt unfair. Unhinged. Not nice. But in the name of proper gadget testing, I tried.

During every run, the AI coach chimes in each minute with a pace update, letting you know whether you’re going slower, faster or right around your target. You also get updates on your completion level, with a shout-out at 30 percent, halfway and 80 percent. Three or four times per session, the coach gives you some canned advice, like telling you to focus on maintaining a consistent pace rather than speed, reminding you that runs build cardiovascular health or suggesting that you cool down after a workout to avoid injury. You also get interjections like “You’re doing great” or “You’re almost done, finish strong.”

At first, I hated the constant pace check-ins. Once I hit the halfway point of the 5K, I started to lag. My watch told me over and over: “You’re running slower than your target pace. Try to speed up if you can.” I wheezed obscenities in response. But then, when I really just wanted to quit, I started to look forward to the next update, knowing it meant I’d stayed on my feet for one more minute.

Ultimately, I ran faster and farther than I would have without a coach. I wouldn’t have attempted a 5K, because that’s just insane. Of course, I’m a pretty easy win for Samsung’s running coach. I’m an optimistic person and I’ve never been coached before, so anything that says it “believes in me” and pushes me to try something I don’t think I can do is pretty likely to get some results. Samsung calls this an AI-powered regimen — and sure, it knew my pace, distance and heart rate and adjusted its encouragement accordingly — but I didn’t hear anything uniquely personalized to me and my journey. But I still have three weeks to go.

Vascular load and bedtime reminders

Two new features require overnight data gathering. Vascular load takes readings while you sleep to evaluate the health of your circulatory system, then tells you when your load is higher or lower than normal and offers recommendations. It requires three nights of data to set a baseline. After that, you’ll be able to check whether your load is higher, lower or steady.

My range has been lower or steady over the past few days. The internet tells me that a high vascular load can lead to bad things like hypertension and heart attacks. And the Health app offers suggestions to improve your vascular load, too — things like getting better sleep, being active, limiting stress and maintaining a healthy weight. Personally, this isn’t an area I’m concerned about, but considering the prevalence of heart disease, it’s a good metric to keep an eye on.

Three nights of sleep also triggers the Bedtime Guidance feature, but I didn't see any notifications. I asked Samsung why, and was told to turn it on in the Health app (Health > Sleep > tap three dots icon > Bedtime guidance). I still didn’t get any guidance, which I realized was due to the fact that I have my bedtime set for 10PM, but I turn on the watch’s sleep mode by 9PM most nights. What can I say, I like sleeping.

A bigger battery makes the AOD A-OK

One of the prior model’s shortcomings was battery life. The always-on display (AOD) tanked the charge, so I kept it off. The Galaxy Watch 8 has a bigger 325mAh battery, compared to the 300mAh one on the Watch 7. Twenty-five milliamp hours isn’t a huge jump, but in batteries, bigger is always better. And that’s true here.

I charged the watch to full at 12:30PM, turned the AOD on and after two auto-detected bike rides, one walk, some Gemini requests, checking notifications and using a few timers, it was still at 75 percent at 8:45PM. I enabled sleep mode shortly after and slept until 6:15AM. Then I did an AI-coached three-mile run and it still had 27 percent at 9AM. At 2:30PM it hit seven percent and I put it on the charger. Samsung promises up to 30 hours of use on a charge with the AOD on. This was slightly less than that, but it was an activity-filled day and I was fine with the lifespan.
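
For what it’s worth, here’s a rough tally of that runtime — my own illustrative arithmetic based on the timestamps above, with placeholder calendar dates since only the clock times matter:

    from datetime import datetime

    # Placeholder dates; the clock times come from my test log above
    charged_full = datetime(2025, 7, 21, 12, 30)        # 100% at 12:30PM
    hit_seven_percent = datetime(2025, 7, 22, 14, 30)   # 7% at 2:30PM the next day

    hours = (hit_seven_percent - charged_full).total_seconds() / 3600
    print(f"{hours:.0f} hours with the AOD on")         # ~26 hours, a bit shy of Samsung's 30-hour claim

That works out to roughly 26 hours of mixed use with the always-on display enabled before the watch dipped into single digits.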

The Galaxy Watch 8 Classic is super swank

The spinning-bezel version of the latest watch is obviously aimed at the luxury watch-hound crowd. The knurled spinning bezel, inset diving bezel replica and the default chronograph watch face give off serious Rolex Daytona/Omega Speedmaster wannabe vibes. It ships with a stitched leather-look band and the steel case has the heft and stature of an automatic mechanical timepiece. The always-on display and chunky buttons only add to the effect.

The Galaxy Watch 8 Classic looks a lot like a high-end timepiece
Amy Skorheim for Engadget

For me, the main advantage it has over the standard Galaxy Watch 8 is the protection the spinning bezel offers the raised display. The glass is shielded at the edges and it’s even inset slightly, which should ward off scratches. At first I thought it would be too chunky, but it’s surprisingly comfortable for its size and I really like how it looks. It’s an extremely attractive watch. I mostly tested out the standard version of the Watch 8, because I think most people will opt for that model, but if I got one for myself, I think I’d go with this beaut.

Pricing and the competition

Samsung added $50 to the price tag on the 40mm Galaxy Watch 8, bringing it to $350. The larger 44mm model is now $380 (versus $330 last year) and the Classic variant is $500 for the single 46mm size. For reference, there was no Galaxy Watch 7 Classic and the Galaxy Watch 6 Classic sold for $400 for the 43mm size and $430 for the 47mm.

At $350, the base model Galaxy Watch 8 costs the same as Google’s Pixel Watch 3 (though the Pixel Watch 4 is almost certainly on the way and could be more expensive). The Apple Watch Series 10 starts at $399, but it went as low as $279 during Prime Day. And, of course, a new Apple wearable is likely coming this September with a potential price bump as well. So Samsung’s Galaxy Watch 8 could end up being the most affordable of the major smartwatch releases once the dust settles from 2025’s launches. And if you’re a Samsung smartphone user, it’s easily your best bet.

All the small things

The core functions of the Galaxy Watch 8 haven’t changed all that much. We went in-depth on the fitness and health features for our review of the Galaxy Watch 6 and covered the notable perks of Samsung’s wearable in our Galaxy Watch 7 review and our Galaxy Watch 5 review. So I’ll just point out that the workout tracking is still accurate — and it’s faster to suss out when I’m on a walk than my Apple Watch. The companion Health app, where you view all the data your watch gathers, is helpful and a delight to use with its common-sense suggestions and friendly animations. And the daily Energy Score is a useful indicator of how I’m doing with my sleep and movement goals.

The Samsung Galaxy Watch 8 Classic and standard models side by side
Amy Skorheim for Engadget

The Watch 8 is an industrious accessory for a Galaxy phone. My only lingering gripe is that the notifications aren’t as intrusive as my distracted brain needs them to be. They’re easy for me to miss. The double tap feature is great — a pinch gesture that dismisses or triggers actions without you having to touch the screen. I use it often and the watch almost always registers the motion. The screen has only become more responsive and navigation is plenty zippy. And finally, I love how customizable the watch faces are.

Wrap-up: The best smartwatch Samsung has made in years

The substantially refreshed look was the first indicator that Samsung was (finally) doing something different with the Galaxy Watch. I was pleasantly surprised by how well and how quickly Gemini was able to handle functions on the wearable itself, as well as answer my questions. The tiles make hunting down info on the watch much easier, and the new health metrics like antioxidant levels and vascular load could come in handy for many folks. Also, the Classic variant this year is a truly stunning timepiece, finally bringing grown-up style to smartwatches.

The AI running coach got more out of me than I thought possible (and ensured that I will be fast asleep before any Bedtime Guidance suggestions kick in). But the changes that pleased me the most were the improved battery and markedly increased comfort. Those were two factors that prevented the just-good Galaxy Watch 7 from being a great smartwatch. Of course, if you don’t have a Samsung phone, there’s no point in considering any Galaxy Watch (and if that’s you, you probably haven’t read this far down anyway). But if you’re a Galaxy fan, this watch may very well delight you and help you get the most out of your phone.

This article originally appeared on Engadget at https://www.engadget.com/wearables/galaxy-watch-8-review-samsungs-best-smartwatch-in-years-155857108.html?src=rss

©

© Amy Skorheim for Engadget

Galaxy Watch with the new Heritage Classic watch face

Engadget Podcast: Ancestra director Eliza McNitt defends AI as a creative tool

18 July 2025 at 15:00

Eliza McNitt is no stranger to new media. Her 2017 project, Fistful of Stars, was a fascinating look at stellar birth in virtual reality, while her follow-up Spheres explored black holes and the death of stars. Now with her short film Ancestra, McNitt has tapped into Google's AI tools to tell a deeply personal story. Working with Google DeepMind and director Darren Aronofsky's studio Primordial Soup, McNitt used a combination of live-action footage and AI-generated media to tell the story of her own traumatic birth.

The result is an uncanny dramatic short where the genuine emotion of the live-action performance wrestles against the artificiality of AI imagery. The film begins when a routine natal care appointment for the lead (Audrey Corsa, playing McNitt's mother) turns into an emergency delivery. From that point on we hear her opine on how her child and all living things in the universe are connected — evoking the poetic nature of Terrence Malick's films. We jump between Corsa's performance, AI footage and macro- and micro-photography. In the end, Corsa holds a baby that was inserted by Google's AI, using prompts that make it look like McNitt as an infant.

There's no escaping the looming shadow of Google's AI ambitions. This isn't just an art film — it's an attempt at legitimizing the use of AI tools through McNitt's voice. That remains a problem when Google's models, including Veo and other technology from DeepMind, have been trained on pre-existing content and copyrighted works. A prestigious short coming from Darren Aronofsky's production studio isn't enough to erase that original sin.

"I was challenged to create an idea that could incorporate AI," McNitt said in an interview on the Engadget Podcast. "And so for me, I wanted to tell a really deeply personal story in a way that I had not been able to before... AI really offered this opportunity to access these worlds where a camera cannot go, from the cosmos to the inner world of being within the mother's womb."

When it comes to justifying the use of AI tools, which at the moment can credibly be described as plagiaristic technology, McNitt says that's a decision every artist will have to make for themselves. In the case of Ancestra, she wanted to use AI to accomplish difficult work, like creating a computer generated infant that looked like her, based on photos taken by her father. She found that to be more ethical than bringing in a real newborn, and the results more convincing than a doll or something animated by a CG artist.

"I felt the use of AI was really important for this story, and I think it's up to every artist to decide how they wanna use these tools and define that," she said. "That was something else for me in this project where I had to define a really strong boundary where I did not want actors to be AI actors, [they] had to be humans with a soul. I do not feel that an performance can be recreated by a machine. I do deeply and strongly believe that humanity can only be captured through human beings. And so I do think it's really important to have humans at the center of the stories."

To that end, McNitt also worked with dozens of artists to create the sound, imagery and AI media in Ancestra. There's a worry that AI video tools will let anyone plug in a few prompts and build projects out of low-effort footage, but McNitt says she closely collaborated with a team of DeepMind engineers who crafted prompts and sifted through the results to find the footage she was looking for. (We ran out of time before I could ask her about the environmental concerns from using generative AI, but at this point we know it requires a significant amount of electricity and water. That includes demands for training models as well as running them in the cloud.)

"I do think, as [generative AI] evolves, it's the responsibility of companies to not be taking copyrighted materials and to respect artists and to set those boundaries, so that artists don't get taken advantage of," McNitt said, when asked about her thoughts on future AI models that compensate artists and aren't built on stolen copyrighted works. "I think that that's a really important part of our role as humans going forward. Because ultimately, These are human stories for other human beings. And so it's, you know, important that we are at the center of that."

This article originally appeared on Engadget at https://www.engadget.com/ai/engadget-podcast-ancestra-director-eliza-mcnitt-defends-ai-as-a-creative-tool-150042942.html?src=rss

©

© Eliza McNitt/Primordial Soup

Ancestra

The official Tron: Ares trailer riffs on big tech and features a returning Jeff Bridges

17 July 2025 at 18:12

There's finally a full trailer for Tron: Ares, the third installment in the decades-long sci-fi franchise. This comes after a teaser dropped a few months back and over two years after the film was first announced.

The new footage is pretty cool, and gets into the meat of this threequel. It's a riff on AI and big tech, as corporate entities look to weaponize the digital creations of The Grid and bring them to the real world. This does not go well, as evidenced by shots of the franchise's iconic Recognizer ships careening down city streets.

It also features Jeff Bridges, returning as computer programmer Kevin Flynn. Bridges is the only actor who will have appeared in all three Tron films. The star of this new one, however, is the scandal-plagued Jared Leto, who plays a soldier from The Grid who has traveled to the real world on some kind of mission. Hasan Minhaj, Gillian Anderson, Greta Lee and Evan Peters are also in the movie.

Tron: Ares is directed by Joachim Rønning, who is something of a Disney hired gun. He made Pirates of the Caribbean: Dead Men Tell No Tales and Maleficent: Mistress of Evil, among other films for the company.

I really have no idea what to make of this movie, even if the footage looks pretty nifty. I love the original and like the sequel well enough. One thing I'm happy about is the continued dedication to atmosphere-soaked soundtracks. The first film featured an iconic score by Wendy Carlos and the folks behind Tron: Legacy hired Daft Punk to make the music. The new movie includes a soundtrack by Nine Inch Nails. The band released the first single today, likely to coincide with the trailer, and it definitely nails the vibe.

We don't have that long to wait for this one. Tron: Ares opens in theaters on October 10.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/the-official-tron-ares-trailer-riffs-on-big-tech-and-features-a-returning-jeff-bridges-181232528.html?src=rss

©

© Disney

Jeff Bridges staring and smiling.

Netflix takes a leap of faith on an Assassin's Creed series

17 July 2025 at 15:39

Hollywood video game adaptations continue to have a moment. Following the success of HBO's The Last of Us and Amazon's Fallout, Netflix has officially greenlit an Assassin's Creed series. The news comes nearly five years after the company signed a deal with Ubisoft to adapt the franchise.

Two Emmy nominees will helm the series. Roberto Patino (DMZ, Westworld, Sons of Anarchy) and David Wiener (Halo, Homecoming, The Killing) will serve as showrunners and executive producers. No casting has been announced yet.

"We've been fans of Assassin's Creed since its release in 2007," Patino and Wiener wrote in a statement. "Every day we work on this show, we come away excited and humbled by the possibilities that Assassin's Creed opens to us."

Assassin's leaping across rooftops in a historical setting.
Ubisoft

The pair says the series will focus on "people searching for purpose, struggling with questions of identity and destiny and faith." (But it'll also include plenty of parkour and spectacle.) Above all else, it will be about "human connection across cultures and time."

Netflix hasn't said when the show will premiere. So, we're probably still a ways off.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/netflix-takes-a-leap-of-faith-on-an-assassins-creed-series-153958591.html?src=rss

©

© Ubisoft

An Assassin's Creed protagonist jumping off a high building.

TikTok's latest feature will help songwriters show off their work

17 July 2025 at 13:15

TikTok has proven to be a powerful platform to help tunes go viral and now the company is making it easier for songwriters to benefit from that reach. The social media site has rolled out a pair of new features in beta: a Songwriter label that identifies users as such under their profile, and a Songwriter Music Tab that lets them spotlight tracks they've written or co-written. 

So far, only a limited number of publishers and songwriters can apply to get the new label and tab. TikTok noted that several established songwriters including Lauren Christy (Avril Lavigne, Britney Spears), Toby Gad (Fergie, John Legend) and Justin Tranter (Dua Lipa, Ariana Grande) are among the first to gain access. Others who want to join can put their names on a waitlist.

TikTok and its parent ByteDance developed the Songwriter Features after surveying 871 songwriters and doing 18 in-depth interviews. Those findings informed the design of the feature with the aim of improving discovery and monetization opportunities while raising songwriters' profiles on TikTok. The platform noted that 53 percent of full-time songwriters who post content on social media do so on TikTok. 

When it comes to royalties and copyright, TikTok has taken a similar tack to YouTube and other social media sites. All the music available on TikTok can be used in videos and the platform has deals in place with distributors and labels to license music. Artists then receive royalties whenever creators use their music on TikTok, the amount of which depends on the individual distributor. 

TikTok is belatedly following in the footsteps of Spotify, which has offered a similar songwriter spotlight feature since 2020. Last month, ByteDance introduced the TikTok for Artists music insight platform designed to help musicians gain access to "data and insights about their music, posts and followers."

This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktoks-latest-feature-will-help-songwriters-show-off-their-work-131512204.html?src=rss

©

© TikTok

TikTok's latest feature helps songwriters spotlight their work

Slack is getting a host of new AI tools

17 July 2025 at 12:00

Slack, the cloud‑based team messaging juggernaut, is charging ahead with further integration of AI into users’ day‑to‑day experience. The company is introducing a series of tools aimed at improving productivity and team communication.

AI writing assistance has been built directly into Slack Canvas, allowing users to create assets like project briefs as well as refine the content they create using natural‑language prompts. Writing assistance can also be used to generate and assign action items, refine raw meeting notes, create FAQs and more. AI writing assistance seems to be in vogue, as Google has also been developing AI writing assistance tools integrated into Google Workspace, and Apple offers similar tools via Apple Intelligence.

Slack AI Writing Assistance
Slack

AI message context will help users understand the lingo of their organization and help ensure they don’t get lost in long threads by pulling surrounding context to explain a message when a user hovers over it. This tool will provide explanations for project names, internal tools or team‑specific shorthand. This will presumably help new team members get up to speed more quickly while they learn whatever random acronyms their company uses. AI‑generated channel recaps and thread summaries will similarly help users catch up when joining an ongoing project or conversation.

If you’re heads down working on a high‑priority task, Slack will use AI action items to understand what you’re working on and only interrupt your workflow if something provides a meaningful update or is more important than what you’re working on at the time.

Slack AI Action Items
Slack

The company also said it will be widening the availability of AI translation, which allows teammates to read and contribute in their preferred language, as well as Enterprise Search, which surfaces information across connected apps, data and conversations from the likes of Salesforce, Microsoft Teams, Google Drive and more.

Slack Enterprise Search
Slack

Slack has been rapidly incorporating AI tools into the platform recently. This should come as no surprise since Slack has been owned by Salesforce since 2021. The software company is among the most bullish on the future of AI and its applications in the workplace. We know that Slack has been using data from your chats to train its machine‑learning models, and recently Salesforce moved to ensure that rival software firms were not using or accessing the same data.

These tools and more are available to users with a paid Slack plan, though not every new AI tool is included at the lower price tiers. The higher the subscription tier, the more of these tools are included. Slack says that Enterprise Search, channel recaps, AI meeting notes and translations are available now. In contrast, AI‑assisted message explanations, action items, writing assistance in Canvas and profile summaries are coming soon and may be released throughout the year.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/slack-is-getting-a-host-of-new-ai-tools-120023607.html?src=rss

©

© reMarkable / Slack

Person typing on a tablet

Roblox is adding an 'age estimation' feature for teens

17 July 2025 at 11:00

Roblox is joining the growing ranks of online platforms that are trying to better understand the ages of their teen users. The company is rolling out a new "age estimation" feature for teens 13 and older.

With the update, teens will be prompted for an age check that requires a video selfie in order to access its new, less restrictive chat feature called "trusted connections." Roblox is relying on third-party identity company Persona for the actual "estimation," and users who fail the check will also have the option of providing an ID. Once teens have "unlocked" trusted connections via video selfie or sharing an ID, they'll be able to participate in chats with friends "without filters," including "party" group text and voice chats.

Roblox has previously faced scrutiny for not policing its chat features enough and making it too easy for adults to seek out children on the platform. The company notes in its announcement that parents will be able to monitor their kids' "trusted connections" via parental control features, and that the feature is intended only for people who teens already "know and trust." Teens are only able to add trusted connections via their existing contacts list or a QR code.

"We believe chat without filters should only be accessible to users who have verified their age." the company writes in a blog post. "This isn’t just about compliance; it’s about building engaging and appropriate digital spaces for everyone."

While Roblox is notably not using the term "age verification," the new feature comes at a time when there are increasing calls for social media companies and other platforms to check the ages of their youngest users. Reddit and Bluesky recently announced age verification features for users in the UK — a change required of major platforms ahead of a new online safety law going into effect. Age verification mandates have also been gaining steam in the United States.

A number of states have introduced age verification measures for social media, though laws in Arkansas and Utah have so far been blocked. Utah also recently passed a law requiring app stores to verify users' ages — an approach that has been endorsed by companies like Meta and Snap. And the Supreme Court recently upheld a Texas law that requires websites hosting adult content to conduct age verification checks.

Roblox, which unlike most online platforms allows children under 13 to have accounts, is in a slightly different position. And for now, it's billing its age checks as "optional." But already having an age estimation feature in place could certainly be useful should it be required to take an even stricter approach in the future.

This article originally appeared on Engadget at https://www.engadget.com/gaming/roblox-is-adding-an-age-estimation-feature-for-teens-110047092.html?src=rss

©

© Roblox

Roblox has a new video selfie feature to check teens' ages.

Threads is testing Facebook account sign-ups

16 July 2025 at 19:12

Threads is testing letting you sign up with a Facebook account rather than an Instagram account. The new option, spotted by Social Media Today, doesn't appear to be widely available yet, but is already mentioned in a Meta support article.

Besides making it easier for people who don't have an Instagram account to quickly create an account on Meta's X and Bluesky competitor, this new Facebook sign-up option should influence the kind of posts and ads that get recommended to you.

"Signing up to Threads with your Facebook account helps unlock features that work across Threads and Facebook, like using the same login info to access both apps," Meta writes in a support article. "If you create a Threads profile with your Facebook account, we’ll combine your info across Threads and Facebook."

Encouraging users to build their Threads profile using the information and followers they had on Instagram was one of the ways Meta quickly grew its Threads user base. It hasn't always led to the best experience on Threads, though. As it turns out, growth-obsessed Instagram users produce fairly shallow text posts. And because most people's Instagram accounts are associated with their Threads profile, they get served a lot of that subpar engagement bait by default. If you came to Threads looking for the relative wit of X, you'd come away pretty disappointed.

Letting you sign up with Facebook will likely have its own shortcomings (namely, modern Facebook is pretty spammy, too), but it at least acknowledges that what people want from Instagram is different from what they want from Threads. 

Meta has made a concerted effort as of late to establish Threads as more of an independent entity. Adam Mosseri, the head of Instagram at Meta, shared that the company planned to remove "the Instagram graph import from the onboarding flow" back in November 2024. Threads also started testing its own separate direct messaging inbox in June, after relying on Instagram DMs since launch.

This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-is-testing-facebook-account-sign-ups-191214734.html?src=rss

©

© Reuters

Meta Threads app logo is seen in this illustration taken, July 6, 2023. REUTERS/Dado Ruvic/Illustration