Is Mark Zuckerberg flip-flopping on open source AI?

Earlier today, Mark Zuckerberg shared a rambling memo outlining his vision to build AI "superintelligence." In the memo, Zuckerberg hinted that the pursuit of more powerful AI might require the company to be more selective in what it open sources.

Citing "safety concerns," he wrote that Meta would need to be "rigorous" about such decisions. The line stood out to many because Zuckerberg — who once said "fuck that" in reference to closed platforms — has made open source central to Meta's AI strategy.

During Meta's second quarter earnings call, Zuckerberg further acknowledged there could be a shift, though he downplayed the significance of it. Here's what he said when asked if his thinking had changed.

I don't think that our thinking has particularly changed on this. We've always open sourced some of our models and not open sourced everything that we've done. So I would expect that we will continue to produce and share leading open source models. I also think that there are a couple of trends that are playing out. One is that we're getting models that are so big that they're just not practical for a lot of other people to use, so we kind of wrestle with whether it's productive or helpful to share that, or if that's really just primarily helping competitors or something like that. So I think that there's, there's that concern.

And then obviously, as you approach real superintelligence, I think there's a whole different set of safety concerns that I think we need to take very seriously, that I wrote about in my note this morning. But I think the bottom line is I would expect that we will continue open sourcing work. I expect us to continue to be a leader there, and I also expect us to continue to not open source everything that we do, which is a continuation of kind of what we, what we've been, been kind of working on.

That's notably different from what he wrote almost exactly a year ago in a different memo titled "Open Source AI is the Path Forward." In that even longer note, he said that open source is crucial for both Meta and developers.

"People often ask if I’m worried about giving up a technical advantage by open sourcing Llama, but I think this misses the big picture," he wrote. "I expect AI development will continue to be very competitive, which means that open sourcing any given model isn’t giving away a massive advantage over the next best models at that point in time."

He also argued that open source is safer. "There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute."

To be clear, Zuckerberg said the company would continue to open source some of its work. But he seems to be laying the groundwork for a future in which Meta's "superintelligence" could be a lot less open.

This article originally appeared on Engadget at https://www.engadget.com/ai/is-mark-zuckerberg-flip-flopping-on-open-source-ai-231310567.html?src=rss

Meta CEO Mark Zuckerberg delivers a speech at the Meta Connect event at the company's headquarters in Menlo Park, California, U.S., September 27, 2023. (REUTERS/Carlos Barria)

Showrunner, an AI-powered streaming service, launches in alpha this week

Fable, a startup designing tools to generate animated TV shows from scratch, is launching an AI-powered streaming service this week, Variety reports. The service is called Showrunner, and it will allow subscribers to generate scenes, view content created for Fable's in-house AI-generated animated shows and even insert themselves into the animations they generate.

Showrunner is launching in alpha, and based on Fable's website, you'll primarily interact with it through the company's Discord to start. Per Variety, subscribers will pay anywhere from $10 to $20 a month for credits that can be exchanged for the ability to generate new animated scenes. The word "scenes" is key here. While Fable is launching with a couple of distinct animated shows, they're really more like collections of worlds and characters that subscribers can use to create new scenes.

Those include Exit Valley, a Family Guy-inspired riff on the culture of Silicon Valley, and Everything is Fine, a comedy about a husband and wife who are stranded in an alternate world and trying to reunite. The company's goal is bigger than just original content. Fable founder Edward Saatchi imagines a Disney-owned "Toy Story of AI" where subscribers could generate endless new scenes of Buzz and Woody interacting. For now, though, interested Showrunner users will have to settle for knockoffs.

Engadget was able to preview an earlier version of Fable's Showrunner AI models back in 2019 that capably generated new South Park episodes based on audio inputs. The startup's model was able to create a watchable copy of the show's style even back then, which might be why Amazon has invested an undisclosed amount in Fable as part of the launch of Showrunner. 

While creatives remain anywhere from skeptical to downright antagonistic toward AI, companies have started using it more publicly in production. For example, Netflix recently touted its use of generative AI in its original show The Eternaut. Even if adoption grows, though, that doesn't really clarify whether the average film or TV viewer has any desire to generate content on their own, especially if it's as rough around the edges as most AI-generated media is. Besides being a public demonstration of Fable's tech, Showrunner seems like a test to see if that desire is even there.

This article originally appeared on Engadget at https://www.engadget.com/ai/showrunner-an-ai-powered-streaming-service-launches-in-alpha-this-week-204042241.html?src=rss

A Netflix-style interface of preview images for AI-generated animated shows. (Fable / Engadget)

Substack accidentally sent push alerts promoting a Nazi publication

It was easy to view Substack's 2023 Nazi controversy as a kicked can that could turn up again. Well, white supremacist content led to another headache for the company this week. User Mag reported on Tuesday that the app sent a push alert to some users promoting a Nazi newsletter. The company told Engadget the notification was an "extremely offensive and disturbing" error.

The Substack newsletter in question describes itself as "a National Socialist weekly newsletter." It includes "opinions and news important to the National Socialist and White Nationalist Community." The far-right blog has a mere 757 subscribers. (That's a drop in the ocean compared to, say, Heather Cox Richardson's 2.6 million, George Saunders' 312,000 and Kareem Abdul-Jabbar's 236,000.)

Given the newsletter's offensive content and relatively paltry audience, this wouldn't seem like something to promote. According to the company, it didn't mean to. "We discovered an error that caused some people to receive push notifications they should never have received," a Substack spokesperson told Engadget.

"In some cases, these notifications were extremely offensive or disturbing," the statement continued. "This was a serious error, and we apologize for the distress it caused. We have taken the relevant system offline, diagnosed the issue and are making changes to ensure it doesn't happen again."

Engadget asked Substack for extra details or context about how the accident happened. It didn't have further comment at the time of publication. We'll update this story if we find out more.

The newsletter in question. (Screenshot: Substack)

User Mag reports that those who clicked on the Nazi blog's profile received recommendations for a similar one. That one had a larger audience of 8,600 subscribers.

One reason social media users were quick to pounce on the latest incident: It provides a symbolic callback to Substack's 2023 Nazi shitstorm. That's when The Atlantic dug up "scores" of white-supremacist, neo-Confederate and Nazi newsletters on the platform. Some were monetized.

Substack's policy is one of anti-censorship. "I just want to make it clear that we don't like Nazis either — we wish no one held those views," Substack cofounder Hamish McKenzie wrote in December 2023. "But some people do hold those and other extreme views. Given that, we don't think that censorship (including through demonetizing publications) makes the problem go away — in fact, it makes it worse."

After weeks of negative press coverage and prominent authors leaving the platform, Substack relented… sort of. On one hand, the company removed "some" pro-Nazi publications. However, it did so without changing its policies. Instead, it said five publications violated its existing content guidelines. Specifically, they broke rules prohibiting "incitements to violence based on protected classes."

Some critics didn't believe that was enough. The Platformer's Casey Newton, a prominent voice who left Substack during the episode, thought the company needed to take more responsibility. "Every platform hosts its share of racists, white nationalists and other noxious personalities," Newton wrote in early 2024. "In some very real sense, there is no escaping them online. But there ought to be ways to see them less; to recommend them less; to fund them less. Other platforms have realized this as they’ve grown up. Here’s hoping Substack does the same."

Substack has since found its footing as a haven for independent content creators. Numerous journalists looking to build an audience sans traditional media have flocked to it. (Among others, that list includes Tina Brown, Jim Acosta, Terry Moran and Jennifer Rubin.) In recent years, Substack has added a Twitter-like social feature, live video and TikTok-esque vertical video.

The company said it had its biggest week during the 2024 presidential election with an 82 percent boost in paid subscriptions. It recently raised $100 million in funding.

This article originally appeared on Engadget at https://www.engadget.com/apps/substack-accidentally-sent-push-alerts-promoting-a-nazi-publication-191004115.html?src=rss

CANADA - 2025/07/03: In this photo illustration, the Substack logo is seen displayed on a smartphone screen. (Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images)

YouTube will be included in Australia's social media ban for children after all

YouTube will be included in Australia's social media ban for children under 16, as reported by Bloomberg. The country's Labor government said that the site will be subject to the same rules as other leading platforms like Facebook, Instagram, TikTok and X when the legislation goes into effect this December.

Australia had planned to exempt YouTube from the forthcoming social media ban, on the grounds that it's an educational tool. This irked other platforms, with Meta and TikTok urging the country's government to backtrack on the exclusion.

This move didn't sit too well with YouTube and parent company Alphabet. A company spokesperson told CNN that this decision “reverses a clear, public commitment” from the government to treat the platform as an educational tool. YouTube Kids isn't included in the ban because it doesn't allow users to upload videos or comment.

The government said this reversal was largely influenced by a survey released by Australia's independent online regulator, the eSafety Commission. It found that 37 percent of children surveyed had reported seeing harmful content on YouTube. This includes stuff like dangerous online challenges, fight videos and hateful rhetoric.

Communications Minister Anika Wells recently spoke to the Australian Parliament and noted that "YouTube uses the same persuasive design features as other social media platforms, like infinite scroll, like autoplay and algorithmic feed." She went on to say that she accepted the results of the aforementioned survey and that YouTube "should not be treated differently from other social media platforms."

The ban originally passed at the tail end of last year, though some of the details have yet to be ironed out. The government has until December to finalize everything. We do know that the ban puts the onus on the platforms themselves to prevent children from opening accounts, as they'll face hefty fines of up to nearly AU$50 million (about $32 million US) if they don't comply.

“Kids, God bless them, are going to find a way around this. Maybe they’re all going to swarm on LinkedIn. We don’t know,” Minister Wells said. My hunch is that it'll be more VPN than LinkedIn.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/youtube-will-be-included-in-australias-social-media-ban-for-children-after-all-161707399.html?src=rss

A logo on a phone. (Unsplash/Christian Wiediger)

TikTok's Community Notes era starts today

TikTok users in the United States will soon see crowd-sourced fact checks appearing alongside videos on the platform. The app is beginning to roll out Footnotes, its version of Community Notes, the company announced.

TikTok announced its plan to adopt the feature back in April, and since then almost 80,000 users have been approved as contributors. Footnotes works similarly to Community Notes on X. Contributors can add notes to videos that contain false claims or AI-generated content, or that otherwise require more context. Contributors are required to cite a source for the information they provide, and other contributors need to rate a footnote as helpful before it will show up broadly. Like X, TikTok will use a bridging algorithm to determine which notes have reached "a broad level of consensus."

According to screenshots shared by the company, Footnotes will appear prominently underneath a video's caption. Users will be able to read the full note and view a link to its source material.

While TikTok is the latest major platform to adopt the crowdsourced approach to fact-checking, unlike Meta, the company is continuing to work with professional fact-checking organizations, including in the United States. The company also points out that Footnotes will be subject to the same content moderation standards as the rest of its platform, and that people can report notes that might break its rules. The presence of a note won't, however, affect whether a particular video is eligible for recommendations in the "For You" feed.

For now, the company isn't making any commitments to roll out the system beyond the US. "We picked the US market because it's sufficiently large that it has a content ecosystem that can support this kind of a test," TikTok's head of integrity and authenticity product, Erica Ruzic, said during a press event. "We will be evaluating over the coming weeks and months, as we see how our US pilot is going, whether we would want to expand this to additional markets."

The test of Footnotes comes at a moment when the company's future in the United States is still somewhat in limbo. President Donald Trump has delayed a potential ban three times since taking office in January as a long-promised "deal" to create a US-owned TikTok entity has yet to materialize. Trump said a month ago that an agreement could be announced in "two weeks." Since then, there have also been reports that TikTok owner ByteDance is working on a new, US-only version of the app in anticipation of a deal. TikTok representatives declined to comment on those reports, which have suggested such an app could debut in early September.

This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktoks-community-notes-era-starts-today-110041152.html?src=rss

TikTok Footnotes. (TikTok)

YouTube is turning over age verification to AI

YouTube will start using machine learning to determine whether viewers should be on a teen account. The company said it plans to start using this AI application on a subset of US users in the coming weeks for a trial before rolling it out to the rest of the market. The tool will assess user behaviors including the types of videos being searched for, the categories of videos watched and how long the account has existed. When an account is deemed by machine learning to belong to a teen, YouTube will disable personalized advertising, activate digital wellbeing tools and add other safeguards for its younger users.

Since introducing supervised teen accounts in 2021, YouTube has continued to add protections aimed at its younger users. The company noted that it has been using this machine learning approach "in other markets for some time, where it is working well."

YouTube isn't the only service to leverage an AI tool to catch users who might be lying about their age. However, it's putting the burden of correcting false positives on its users: "If the system incorrectly estimates a user to be under 18, they will have the option to verify that they are 18 or over, such as using a credit card or a government ID." Meta has a similar tool it began rolling out across Facebook, Instagram and Messenger earlier this year, but it lets people simply change their settings if the AI tool incorrectly guesses their age.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-is-turning-over-age-verification-to-ai-185634150.html?src=rss

YouTube mobile app on a smartphone screen

Google adds Video Overviews to NotebookLM

NotebookLM, the Google research tool that gained attention for its AI-generated podcasts, is introducing a feature called Video Overviews. As the name suggests, this tool automatically creates videos on requested topics and pulls data from a user's uploaded images, diagrams, quotes and documents.

It's rolling out right now, but there are some caveats. These overviews are only available in English, though Google says more languages are on the way. Also, the tool doesn't make videos in a conventional sense. It creates slideshows with narration.

Despite this limitation, the company says the feature is "uniquely effective for explaining data, demonstrating processes and making abstract concepts more tangible." Google also says it'll be expanding the toolset in the future, so it won't always be just a slideshow machine.

Video Overviews do currently offer some handy playback options. There's the ability to skip back and forth by 10 seconds and adjust the playback speed. The company is also updating NotebookLM's Studio tab. It's getting a visual refresh, which rolls out "over the next few weeks" to all users.

Google has certainly been busy iterating on the platform these past few months. It released an official NotebookLM app back in May and began offering curated "featured notebooks" earlier this month. These let users experiment with the platform using pre-approved topics like William Shakespeare, so newbies won't have to upload anything themselves. Audio-only overviews are also now available in over 50 languages.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-adds-video-overviews-to-notebooklm-181524866.html?src=rss

NotebookLM's new feature. (Google)

Lies of P leads August’s lineup of PS Plus games

After kicking off its PS Plus 15th anniversary celebrations last month, Sony has announced the PS Plus Monthly games for August. From August 5, members of all of the service’s tiers will be able to add Lies of P (PS5, PS4), DayZ (PS4) and My Hero One’s Justice 2 (PS4) to their libraries.

The headliner is undoubtedly Lies of P, the surprisingly great Pinocchio-flavored soulslike from 2023. South Korea-based studio Neowiz’s twisted take on the oft-reimagined children’s novel stood out in what is now an overwhelmingly crowded genre, thanks to its stylish Belle Epoque-era setting and slick combat. There aren’t many better FromSoft tribute acts around. The post-apocalyptic survival game DayZ launched on PS4 with not-exactly-glowing reviews in 2019, but is still getting fairly regular updates in 2025. Meanwhile, 3D arena fighter My Hero One’s Justice 2 is worth checking out for fans of the My Hero Academia manga on which it’s based.

As it continues to mark 15 years of its Game Pass-rivalling subscription service, Sony has announced that from August 5, members can download a set of anniversary avatars. Themed around the likes of Cyberpunk 2077, Hogwarts Legacy, Diablo IV and God of War Ragnarok, as well as cult retro favorite Twisted Metal, these are also available to subscribers on all tiers.

As for last month’s games, you have until August 4 to add Diablo IV, The King of Fighters XV and Jusant to your game library.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/lies-of-p-leads-augusts-lineup-of-ps-plus-games-165600067.html?src=rss

Lies of P (Neowiz)

Oakley Meta glasses review: A familiar formula with some upgrades

If you're at all familiar with Meta's Ray-Ban-branded smart glasses, there won't be many surprises when it comes to its latest Oakley frames. The Oakley Meta glasses rely heavily on what's already been a successful playbook for the company: the style of a popular eyewear brand mixed with juuust enough tech to let you keep your phone in your pocket a little longer.

But the Oakley Meta glasses are also the social media company's first collaboration with a non-Ray-Ban brand (though both share a parent company in EssilorLuxottica). And while Meta stays pretty close to the strategy it's used for the last four years, its latest frames offer some hints about its long-term ambitions in the space.

Meta has described its Oakley-branded frames as "performance glasses," which isn't entirely surprising given Oakley's longtime association with athletes. But there are only a few actual upgrades compared to the Ray-Ban lineup. The Oakley Meta glasses have a notably longer battery life, both for the glasses themselves and the charging case. They are also able to capture higher quality video than previous versions.

With a starting price of nearly $400, though, I'm not sure those upgrades are worth an extra $100 to $200.

Why do they look like that?

Meta's debut Oakley-branded glasses are based on the brand's HSTN (pronounced how-stuhn) frames, and there's really nothing subtle about the design. The first model is a limited edition version with shiny gold lenses and bright white frames (which Meta inexplicably calls "warm grey").

Like previous Ray-Ban models, they don't look overtly techy, but I still wasn't a big fan of the design. The glasses felt just a little oversized for my face and something about the bright white paired with gold lenses reminded me a little too much of a bug. The color combo also accentuates just how thick the frames are, particularly around the awkwardly wide nosepiece.

The big gold lenses and white frames. (Karissa Bell for Engadget)

I posted a selfie on my Instagram Story and polled my friends on what they thought. And while a few politely said they thought I was "pulling them off," the majority said they looked too big for my face. A few told me they looked straight-up weird, and one summed up my feelings pretty well with "something looks off about them." Style is subjective, of course. And depending on your face shape and tolerance for contrasting colors, I could see others enjoying the design. I'm looking forward to seeing the rest of the HSTN collection, which is coming later this summer, and will hopefully have some more flattering color variations.

Looks aside, the glasses function almost identically to the Ray-Ban glasses Meta introduced in 2023. There's a 12-megapixel POV camera over the left eye, and an indicator light over the right that lights up when you snap a photo or start recording a video via the capture button. There are open-ear speakers in the arms so you can listen to music and hear notifications. Much like the Ray-Ban glasses, the speakers here are pretty good at containing the sound so others can't hear when you're listening at lower volumes, but it's definitely noticeable at higher levels. You can control music playback and volume pretty easily, though, with a touchpad on the right side of the glasses.

"Performance" upgrades

The most important upgrade that comes with the Oakley glasses is the battery. Meta claims the glasses can last up to eight hours with "typical" (non-constant) use and up to 19 on standby. I was able to squeeze a little over five hours of continuous music playback out of the battery in one sitting, which is about an hour better than the Ray-Ban frames. The charging case can provide up to 48 hours of additional runtime, according to Meta. It's been well over a week and I haven't yet had to plug in the case.

The charging case is, however, noticeably bigger and heavier than the Ray-Ban case. It's not a dealbreaker, but the case is too big for any of my pockets and just barely fits into my small sling bag. My other gripe with the charging case is the same complaint I had about the Ray-Ban case: there's no way to see the charge level of the case itself. There's a small LED in the front that will change from green to yellow to red based on the battery level, but it's hardly a precise indicator.

The case is fairly bulky. (Karissa Bell for Engadget)

The other major upgrade is the 12MP camera, which can now shoot in 3K compared to 1080p on previous models. The higher resolution video is, notably, not the default setting, but I appreciated having the option. I could see it being especially useful for creators looking to shoot POV footage, but I mostly use the glasses for still shots rather than video.

San Francisco is currently having a record-breaking cold summer so most of my testing has been in fairly overcast conditions. It might be a product of the gray weather, but I found the photos I've shot with the glasses a bit overly saturated for my taste. They looked fine on an Instagram Story, though. The camera has a bit of a wide angle with a 100-degree field of view, so there's still a bit of a learning curve in terms of figuring out how best to frame the shots. 

Another issue is that it's very easy for a hat or a piece of hair to make it into your photos without realizing it. My previous experience with the Ray-Ban Meta glasses meant I was careful to pull my hair back before snapping a picture, but I was bummed to realize after a long bike ride that the visor on my helmet was visible in the frame of every photo and video. It seems like Meta may have a plan to address this: I noticed a setting called "media quality" that's meant to alert you when something is partially obstructing the camera. The feature is apparently still in testing, though, and it wasn't functional during my review. A Meta spokesperson confirmed it will be added in a future update. "Media Quality Check is a feature we're working to bring to our AI glasses collection in the future that will alert users when photos are blurry or if something like your hair or a hat blocks what you capture," Meta said.

The Meta AI app (formerly known as Meta View) can help fix other issues, though. It has a "smart crop" feature that can automatically straighten your pics to correct for any head tilt. It also has built-in AI-powered edits for photos and video, so you can restyle your clips directly in the app. And while the functionality isn't limited to clips shot with the glasses, the possibility of adding AI edits after the fact makes shooting otherwise mundane clips a bit more appealing. The ability to restyle video, however, is only "free for a limited time," according to the Meta AI app.

Meta AI

While the core features of Meta's smart glasses have largely stayed the same since it first introduced the Ray-Ban Stories in 2021, one of the more interesting changes is how Mark Zuckerberg and other execs have shifted from calling them "smart glasses" to "AI glasses." As the company has shifted away from the metaverse and made AI a central focus, it's not surprising those themes would play out in its wearables too.

And while none of the Meta AI features are unique to the Oakley frames, Meta has added a couple of abilities since my last review that are worth mentioning. The first is live translation. The feature, which you have to enable in the Meta AI app, allows the onboard assistant to translate speech as you hear it. If both sides of a conversation have a pair of Meta glasses, then you can carry on a full conversation even if you don't speak the same language. The feature currently supports Spanish, French, Italian and English.

Meta AI has the same capabilities on all the company's smart glasses. (Karissa Bell for Engadget)

I tried it out with my husband — a native Spanish speaker who was also wearing a pair of Meta glasses — and we were both fairly impressed. I would say something in English and Meta AI on his glasses would relay it to him in Spanish. He would then respond in Spanish and Meta AI would translate the words into English.

It's not the most natural way to speak because you have to pause and wait for a translation, but it was mostly effective. There were a few bugs, though. Because we were sitting close to each other, sometimes Meta AI would overhear the translated audio from the other person's glasses and translate it back, which made the whole thing feel like a bizarre game of telephone.

And over the course of a several-minute conversation, there were a handful of times when Meta AI wouldn't pick up on what was said at all, or would only begin translating halfway through a statement. We also encountered some issues with Meta AI's translations when it came to slang or regional variations of certain words. While it wasn't perfect, I could see it being useful while traveling since it's much smoother than using Google Translate. There was also something endlessly amusing about hearing my husband's words relayed back to me by the voice of AI Judi Dench (Meta tapped a bunch of celebrities last year to help voice its AI).

Stills from a video of a walk through a parking lot (left), and the same image after using the "desert rave" effect in the Meta AI app. (Screenshots: Meta AI)

The other major AI addition is something called "Live AI," which is essentially a real-time version of the glasses' multimodal powers. Once you start a Live AI session, Meta's assistant is able to "see" everything you're looking at and you can ask it questions without having to repeatedly say "hey Meta." For example, you can look at plants and ask it to identify them, or ask about landmarks or your surroundings.

The feature can feel a bit gimmicky and it doesn't always work the way you want it to. For example, Meta AI can identify landmarks but it can't help you find them. While on a bike ride, I asked if it could help me navigate somewhere based on the intersection I was at and Meta AI responded that it was unable to help with navigation. It also didn't correctly identify some (admittedly exotic) plants during a walk through San Francisco's botanical gardens. But it did helpfully let me know that I may want to keep my distance from a pack of geese on the path.

I'm still not entirely sure what problems these types of multimodal features are meant to solve, but I think it offers an interesting window into how Meta is positioning its smart glasses as an AI-first product. It also opens up some intriguing possibilities whenever we get a version of Meta glasses with an actual display, which the rumor mill suggests could come as soon as this year.

Wrap-up

While I don't love the style of the Oakley Meta HSTN frames, Meta has shown that it's been consistently able to improve its glasses. The upgrades that come with the new Oakley frames aren't major leaps, but they deliver improvements to core features. Whether those upgrades justify the price, though, depends a lot on how you plan to use the glasses.

The special edition HSTN frames I tested are $499 and the other versions coming later this year will start at $399. Considering you can get several models of Meta's Ray-Ban glasses for just $299, I'm not sure the upgrades justify the added cost for most people. That's probably why Meta has positioned these as a "performance" model better suited to athletes and Oakley loyalists.

But the glasses do offer a clearer picture of where Meta is going with its smart glasses. We know the company is planning to add displays and, eventually, full augmented reality capabilities — both of which will benefit from better battery life and cameras. Both are also likely to cost a whole lot more than any of the frames we've seen so far. But, if you don't want to wait, the Oakley Meta glasses are the closest you can get to that right now.

This article originally appeared on Engadget at https://www.engadget.com/wearables/oakley-meta-glasses-review-a-familiar-formula-with-some-upgrades-120026844.html?src=rss

© Karissa Bell for Engadget

The Oakley Meta glasses.

The new Warner Bros. and Discovery Global would like you to forget Warner Bros. Discovery

Earlier this summer Engadget covered the news that Warner Bros. Discovery would split into two giant media companies. Today the conglomerate announced the names for the restructured entities.

The company retaining Warner Bros. film, television and game studios, along with New Line Cinema, DC Studios, HBO and HBO Max, will be called Warner Bros. Meanwhile, the other entity, which holds the original company’s full portfolio of live cable channels, including CNN, HGTV, Cartoon Network, Discovery and TLC, will be called Discovery Global.

These companies have a storied history of creative naming schemes, such as when WarnerMedia merged with Discovery to form Warner Bros. Discovery. It was also this parent company that brought us the saga of HBO Go being renamed HBO Max, which was then renamed Max, only to revert to HBO Max this year.

The announcement also confirmed that David Zaslav, the current president and CEO of Warner Bros. Discovery, will serve as president and CEO of the newly formed Warner Bros. Meanwhile, current Warner Bros. Discovery CFO Gunnar Wiedenfels will serve as president and CEO of Discovery Global.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/the-new-warner-bros-and-discovery-global-would-like-you-to-forget-warner-bros-discovery-183951865.html?src=rss

© Reuters / Reuters

FILE PHOTO: The exterior of the Warner Bros. Discovery Atlanta campus is pictured in Atlanta, Georgia, U.S. May 2, 2023. REUTERS/Alyssa Pointer/File Photo

Breaking Bad creator's new show streams on Apple TV+ November 7

Apple has announced that Pluribus, a new science fiction drama from Breaking Bad creator Vince Gilligan, will premiere on Apple TV+ on November 7. Gilligan was confirmed to be working on the project back in 2022, when Better Call Saul's Rhea Seehorn was also announced as its primary star.

Alongside the premiere date, Apple also released a short (somewhat ominous) teaser for the series that shows a hospital employee mindlessly licking donuts. Pluribus is supposed to follow "the most miserable person on Earth" (presumably Seehorn) as they "save the world from happiness," but your guess is as good as mine as to how the two tie together.

Apple's penchant for backing science fiction shows has been well-documented at this point. The company is currently producing a television adaptation of William Gibson's Neuromancer, and has made three seasons and counting of Foundation, based on the novel series by Isaac Asimov. Toss in things like Severance, Murderbot, Silo and For All Mankind and you've got a pretty varied catalog of sci-fi media to choose from.

Just how "science fiction" Pluribus will actually be remains up in the air. When reports went out in 2022 that Apple was ordering two seasons of the show, it was described as "a blended, grounded genre drama." Apple's premiere date announcement pitches the show as "a genre-bending original."

Pluribus' nine-episode first season will premiere on November 7 with two episodes. New episodes will stream weekly after that, all the way through December 26.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/breaking-bad-creators-new-show-streams-on-apple-tv-november-7-204150489.html?src=rss

© Apple

A still of Rhea Seehorn's character from Pluribus receiving what seems like a surprising phone call.

Amazon is developing a Wolfenstein TV show

Following the success of Fallout, Amazon is turning its attention to another video game adaptation. The company is reportedly developing a series based on the Wolfenstein franchise with the help of some of the creative team behind Fallout, according to Variety.

The details of how the Wolfenstein show will adapt the alternate history action of the games are still under wraps, but the subject matter remains unfortunately timely. The show's logline is "The story of killing Nazis is evergreen," according to Variety, which seems like a direct acknowledgement of that fact.

Patrick Somerville, the writer and showrunner of Station Eleven, will reportedly serve as the writer of the Wolfenstein adaptation, with Lisa Joy, Athena Wickham and Jonathan Nolan producing the show through their company Kilter Films. The production company also made the adaptation of William Gibson's The Peripheral for Amazon, and currently produces Fallout.

While it might make sense to pair a new show with a brand new Wolfenstein game, the series has effectively been on hold since 2019. MachineGames, the Microsoft-owned developer of the last five Wolfenstein games, hasn't produced a new entry since 2019's Wolfenstein: Youngblood and Wolfenstein: Cyberpilot. The studio released Indiana Jones and the Great Circle in December 2024.

Given the tumultuous state of Microsoft's Xbox division, the series could be your best bet for getting more Wolfenstein, assuming Amazon decides to move forward.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/amazon-is-developing-a-wolfenstein-tv-show-190028817.html?src=rss

© Bethesda

A screenshot of someone dual-wielding machine guns from Wolfenstein: Youngblood.

Brave and AdGuard now block Microsoft Recall by default

The Brave web browser and the ad-blocker AdGuard have both announced that they are blocking Microsoft Recall by default. For the uninitiated, Recall is an AI-powered Windows 11 feature that records everything on a PC's screen. It's pretty obvious why a privacy-minded web browser like Brave and an ad-blocker would make this move.

AdGuard said the decision was made due to a "privacy concern," going on to say that "the very idea of background screen captures is unsettling." A blog post on the matter suggested that the tool could "snap a screenshot of a private chat window, an online form where you’re entering your credit card or simply something personal you didn’t want saved."

🚨 Microsoft Recall — new threat or improvement?

In May 2024, Microsoft introduced a new feature in Windows 11 that was actively taking screenshots of everything happening on the screen, including messages in WhatsApp and Signal. These screenshots were stored on the device… pic.twitter.com/C97KDylXPj

— AdGuard (@AdGuard) July 15, 2025

Brave also cited privacy concerns, suggesting that a user's "entire browsing history" could be captured by the tool. "We think it’s vital that your browsing activity on Brave does not accidentally end up in a persistent database, which is especially ripe for abuse in highly-privacy-sensitive cases," the company wrote in a blog post.

The chat app Signal made a similar move back in May, urging "AI teams building systems like Recall" to think "through these implications more carefully in the future." Brave says it was "partly inspired" by Signal.

AdGuard and Brave both offer toggles to bring Recall back into the mix. Microsoft's controversial tool lets people jump to whatever was previously on a screen. This includes web pages, images, documents, emails, chat threads or whatever else. It actually sounds like a pretty nifty productivity tool, despite the privacy concerns. It's available with some Copilot+ PCs.

This article originally appeared on Engadget at https://www.engadget.com/ai/brave-and-adguard-now-block-microsoft-recall-by-default-152601475.html?src=rss

© Microsoft

The tool in action.

Alexa+ preview: An almost philosophical exercise

“Alexa, I need to write a review about you.”

“Oh, a review? I’m intrigued! What’s the occasion? I’d love to hear what you think.”

“There’s no occasion, this is just my work.”

“Ah, I see. Writing a professional review, got it. I’m here to help if you need any specific information.”

“Yeah, what should I consider when reviewing you?”

“Consider my natural voice, emotional intelligence and my ability to handle complex tasks. How about my personality and humor? Those are key parts of the new me.”


The above is a recent exchange I had with Alexa+. Access to the next-generation assistant has been slowly trickling out to users since Amazon first announced availability at an event in February. But work on the smarter, more natural-sounding Alexa has been going on for years, and it still isn’t ready for a widespread release. The version currently rolling out is more like an intriguing preview of what a truly intelligent Alexa could be.

I’m having a hard time figuring out how I feel about the upgraded Alexa. On the one hand, I cannot deny it is now capable of tackling some fairly complex requests it couldn’t before. On the other, it doesn’t have good memory, can’t really see all tasks to completion and is just incompetent enough to be annoying.

I want to first caveat all of this by pointing out that when it does become broadly available, Alexa+ will be a free upgrade for Amazon Prime subscribers. Non-Prime members will have to pay $20 a month for the privilege, but they’ll still get the existing Alexa experience on their compatible devices should they choose not to upgrade. That means there will likely be no extra cost to most, and you probably won’t need to spend too much time thinking about whether you want to use it. As long as it’s not a worse experience, it is very easy to say yes to.

I’ve been tempted to get philosophical when thinking about a digital assistant that’s more fluent in human. Do I really want to think of the new Alexa as basically a flesh-and-blood assistant with emotional intelligence, personality and humor? No. If it performs poorly, I'll be let down; if it performs well enough to essentially pass a mini Turing test, I'll feel even ickier about keeping something like an indentured servant in my kitchen.

I set aside my existential questions and tried to focus on the practical experience of getting help from Amazon’s upgraded assistant. Is it better than the previous version? Is it reliable and easy to use? Finally, does it deliver what Amazon promised? And as a bonus, is the experience enjoyable (or at least painless)?

The answer to all those questions is a half-hearted shrug. In some ways, Alexa+ delivers. But in many ways it is an excellent showcase of the limitations of generative AI, and demonstrates that the true problem with the current cohort of AI tools is a mismatch between expectations and reality.

An animated image showing the Alexa+ experience on an Echo Show device. It goes from the home screen with multiple tiles displaying widgets for weather, calendar, entertainment and news, to the chat interface with the assistant answering a question about hiking recommendations nearby.
Amazon

What’s new with Alexa+?

A voice assistant is hard to describe, since it’s intangible and amorphous. It’s quite difficult to state where its capabilities begin and end, not to mention how it might have been upgraded. But I’ll start by comparing it to its predecessor, which I’ll be calling Original Alexa (or OriginAlexa, if you will indulge me).

OriginAlexa taught us how to use very specific commands to do things like turn our living room lights on or off. If you had a family member or friend named Alexa, you might have renamed it to “Computer” and adapted your relationship to that word. Due to how you might have grouped your home gadgets, you could have begun to refer to your kitchen area lights as “skylights,” for example.

“Alexa speak,” as some call it, differs across households. I say “Alexa, stop,” to silence alarms, while my best friend says “Alexa, off,” to do the same. But regardless of the specific word choices, Alexa-speak largely revolved around using stilted phrases and careful enunciation to avoid having to repeat yourself to get something done. Anyone who’s used any voice assistant is probably familiar with the frustration of repeating yourself when a command has been misheard for the umpteenth time.

That’s (supposed to be) a thing of the past with Alexa+. In a blog post announcing the new assistant, Amazon’s lead of devices and services Panos Panay said “Alexa+ is more conversational, smarter, personalized — and she helps you get things done.” The company said it “rebuilt Alexa with generative AI,” but it didn’t just use large language models (LLMs) to make its assistant converse more naturally. It also created new architecture to enable API integration “at scale.” These APIs are how assistants can connect to third-party services to do stuff on your behalf, and Amazon described them as “core protocols to getting things done outside of a chat window and in the real world.”

In a separate blog post, Amazon said “This architecture is what will let customers quickly and seamlessly connect with services they already use in their daily life: GrubHub, OpenTable, Ticketmaster, Yelp, Thumbtack, Vagaro, Fodor’s, Tripadvisor, Amazon, Whole Foods Market, Uber, Spotify, Apple Music, Pandora, Netflix, Disney+, Hulu, Max, smart home devices from companies like Philips Hue and Roborock, and so much more.”

Basically, Alexa can communicate with you more naturally, meaning you can talk to it more like you would with another human being, so you can forget about Alexa-speak. It will also retain information about your preferences and is capable of handling more tasks on your behalf.

But enough about the promises. What was living with Alexa+ for weeks actually like?

The setup

Alexa+ is currently only available as an “Early Access” preview to a small group of users, though other people in my non-tech circles did start gaining access recently, which means you might be able to check it out yourself soon.

The fact that it’s still somewhat exclusive and experimental means there are likely to be glitches, which is understandable. Once I got past the first day or two after upgrading to Alexa+, I didn’t notice many actual bugs. What frustrations I did encounter later seemed more to do with programming and AI’s limitations than unstable software.

The updated assistant currently requires at least one compatible device with a screen on your network, so those of you who only have Echo speakers will have to wait a lot longer or try it on your phone. I spent most of my time testing Alexa+ via an Echo Show 15 as well as the Alexa app on my iPhone.

There were small variations in the answers I would get on either device, but by and large the experience was similar. The most meaningful difference really was in how I perceived Alexa. Initially, when I was interacting with it on the smart display, it felt more like an upgraded smart home and personal assistant, and I predominantly asked it to check on the weather, Uber prices or to help me do things like set timers, reminders and play music.

Two screenshots showing a conversation with Alexa+ in the Alexa app.
Amazon (Screenshots from the Alexa app)

On my phone, though, I talked to Alexa+ more like I would with ChatGPT. I asked deeper, more philosophical questions that required more research and thought. I asked it to generate images, sort 15 names into three groups and, inspired by the subreddit “r/tipofmytongue,” help me find a book I was struggling to recall.

Over time, I did come to rely on the smart display more, as it’s always easier to just say “Alexa, is Mountainhead a good movie” than to pick up my phone, find an app and ask the AI. Of course, I could ask the same question of Siri or my Google speakers, and I did. All three assistants answered similarly, each citing different sources. Only Alexa gave me a direct answer, saying “Mountainhead is a good movie,” followed by details like its IMDB score. The other two simply rattled off “On the website RottenTomatoes dot com, …” or “here’s an answer from whattowatch dot com.”

Alexa has improved in some small ways

In many ways, Alexa+ is a marked improvement over its predecessor, and I have to admit I found myself nodding, impressed, at its ability to tackle multi-step tasks and recall previous conversations. Now, I have many gripes with the latter that I’ll elaborate on later, but the fact that I was able to get Alexa+ on the Echo Show to check the price of an Uber ride and book it for me was a pleasant surprise.

Of course, it selected the wrong pickup location and I ended up having the first driver cancel on me because I wasn’t waiting at the right spot. But it did manage to completely book a ride on my behalf, relying solely on my voice commands and an Uber integration I had set up earlier.

I was initially impressed by the assistant’s ability to refer to our previous conversations and remember things I told it, like my partner’s address and my temperature preferences. But its ability to do so was inconsistent — most of the time, when I asked Alexa to refer to things we had discussed in previous conversations, it either required a lot of prodding to get to the right nugget, or it simply didn’t recall.

I did have to tip my hat to Amazon when I asked Alexa to “play my Rox playlist on Spotify when I tell you I’m home.” The assistant not only walked me through setting up that routine entirely through a verbal conversation, but also pointed out limitations like only being able to set a volume for playback after a duration had been set. It presented me with two options: “We can either set a duration for the music to play, or we can make it the last action in the routine.” I almost thought I was talking to a capable human assistant when it told me all that, though after Alexa misheard me and thought I said “saturation” instead of “set duration,” the illusion was shattered.

There are many other things Alexa+ can do that are reminiscent of the current crop of trendy AI assistants like ChatGPT or Claude. Ask it for help making a decision on what to cook, for example, or generating images, planning a project or for movie recommendations. One new capability I was excited about was sending me emails from our conversation. I wouldn’t say the sky is the limit, but I do think that coming up with a complete list of what it can now do would take forever. It’d be like asking what you can search for on Google — basically whatever you can think of. Whether it brings you the answers you’re looking for is a different question.

I found Alexa+ helpful in that it was able to email me the lists of names it sorted on my behalf, or the project timeline I asked it to help create. But the limits to what it would send me were frustrating. Straightforward content, like the three groups of five names, arrived at my inbox with no problem. Other times, like when I asked it to email me the conversation I started this article with, it only sent me part of our chat. This has a lot to do with what Alexa deems to be the beginning and ending of a conversation, and it was fairly often wrong. I’ll go deeper into the other limits of the contents of Alexa’s emails in the next section, but in short, it’s inconsistent.

Inconsistent and imperfect

That’s a pattern of behavior that you’ll see here. Alexa+ will be capable in some new way that has potential to be exciting and useful, but it will fail you somehow or execute its task incompletely. I loved that it was able to understand me through my verbal stumbles, or integrate with my third-party apps and email. But I kept hitting walls or being let down. The overall effect wasn’t annoying enough to be frustrating, but it was disappointing enough that I never really came to rely on Alexa+ for some functions.

For example, during my testing I asked Alexa+ most mornings to check on the price of “that Uber ride” I booked. Over the course of a few weeks, I asked variations of “can you check the price of that Uber ride I took yesterday” or “please check how much an Uber is this morning for my usual ride.”

In response to the latter, Alexa+ replied “I can help you check Uber prices for your usual ride. I have two saved pickup locations for you. Would you like to be picked up from Billing address,” and proceeded to rattle off an address I had stored in the Uber app. It continued, offering a second pickup address and asking if I preferred a different location. After I selected one, it asked where I would like to be dropped off. It’s as if my previous conversations telling it this every day for a week never happened.

To its (very small) credit, Alexa+ gave me accurate prices after I supplied all the parameters, but it took a tiresome amount of time. That’s largely due to how verbose the responses are. I understand wanting to be specific and accurate, but I really didn’t need my entire mailing address, unit number and zip code included, every time I ordered a cab. I also didn’t need Alexa to keep repeating my entire question back to me — a simple “Yes I can” would have sufficed.

Four screenshots showing a conversation with Alexa+ about checking Uber prices.
Amazon (Screenshots from Alexa app)

Alexa+ also came off a bit needy, which would be humanizing if it wasn’t so robotic about it. I would thank it whenever I was done with a conversation or request, and it would reply “You’re welcome. Glad I could help you with…” and make a sort of reference to our chat in a few words. Or it would say “you’re welcome, have a nice day.” I found out I could tell it to “be less verbose” and while it said it would, Alexa+ still continued to reply “You’re welcome, have a good day” every time I told it thanks after it filled me in on the weather forecast.

I could almost put up with the overly long responses, if Alexa did things the way I expected. But like I already mentioned, it’s inconsistent. Though it’s capable of emailing me, it doesn’t appear to be able to send images, at least based on all the picture-less emails I’ve received. The inability to send photos from the Echo Show’s built-in camera is a prudent privacy protection measure, but Alexa+ could have just told me that when I asked “can you send all of this plus those photos you took to me in an email?”

Instead, it replied “Certainly, I can help you with that. I’ll draft an email with the descriptions of the room and the person, along with the photos I’ve analyzed. Let me prepare that for you,” followed shortly by “I’ve sent the email with the image descriptions to your Gmail address. You should receive it shortly.”

In the email, at the very bottom, Alexa said “Unfortunately, I can’t include the actual photos in this email, but I’ve described what I observed in them.” Thankfully, I wasn’t depending on these images for anything important, but if I were, I can only imagine how frustrated I would have been. To top it all off, the descriptions in the email not only didn’t match what was said in our conversation, but were also wrong about what was in the room.

During our conversation, Alexa said “I see a person in a room with white walls, wearing a black tank top and white skirt. There’s furniture including a cabinet, bookshelf, desk and coffee table.” That’s a fairly accurate description of my living room.

The email I got had three sections with bullet points. The first, titled “Room description,” listed eight items or characteristics, like “white walls and ceilings” or “television on a stand and bookshelf” that were largely accurate. Here, it said I had a “white cabinet with drawers,” which is wrong. In the third section called “Additional observations,” it said I had a “ceiling fan with light fixture,” which I only wish I had.

Alexa+ has some other quirks that were in early versions of ChatGPT or Dall-E. For example, when I asked it to create an image of an AI assistant on the screen of a smart display on a kitchen island, it was completely unable to understand my request. Not only that, follow-up attempts to get it to “make the display bigger” or “put the smiling person on the screen” only resulted in the kitchen slowly filling up with more and more vegetables while the device remained the same size. I didn’t know whether to laugh or cry.

Four screenshots showing a conversation with Alexa+, asking it to generate images of a smart display on a kitchen island, as well as the subsequent follow-ups asking for refinements.
Amazon (Screenshots from Alexa app)

Is the goal of generative AI to be human?

I could continue to list examples of where Alexa+ failed to completely do what I asked it to, but I think you get the point. The one area where I can confirm a noticeable improvement is in the way the assistant sounds and how it understands speech. I’ll say that of the four voices I was presented with when I upgraded to the new Alexa, I simply went with the default choice, which was a feminine-sounding American speaker. I didn’t mind it, but I know some other people who dislike it. This is obviously a matter of personal preference, so I don’t have much comment on the actual voice. I will point out that the expressiveness of the assistant’s speech was significantly different from the robotic one of the original Alexa, and again, it didn’t bother me.

One of my biggest initial concerns was having to adjust how I spoke to Alexa. I considered myself fluent in Alexa-speak. Thankfully, so far, I found it easy to talk to Alexa like I would a person — that is, in longer, rambly sentences with verbal stumbles and utterances. For the most part, I felt understood, or at least better understood than by the Google Assistant or Siri. Of course, that in itself raises philosophical questions of whether you want to speak to AI like you would people to begin with. But from my weeks with Alexa+, that feels like the least pressing concern.

Back to the conversation I had with the assistant on what to look out for when I review it. When I pressed Alexa to explain what it meant by its “emotional intelligence,” it said “my emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging.”

This set off my internal alarm bells just a bit, so I asked how it was detecting my mood. Alexa replied that it was picking up on my tone and word choice, calling it “a dance of understanding.” So I pushed it further:


"Tell me about your emotional intelligence."

"My emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging."

"Well how are you detecting my moods"

"I pick up on your tone and word choice. Your emotions color our conversation, and I adapt to match. It's like a dance of understanding."

"How do you think I am feeling right now?"

"Based on our chat, I'd say you're feeling analytical and focused. You're in work mode, evaluating my capabilities for your review."

"What about when I, you know, asked you about the weather this morning?"

"This morning, you asked about the weather and I provided a sunny forecast. How did that information help you start your day?"

"No, I was asking about my mood back then, do you know how I felt then?"

"Ah, I see. Based on that morning conversation, you seemed casual and curious about the day ahead. Your tone suggested a typical start to the day, neither particularly excited nor stressed."


During this exchange, I also adjusted the pitch of my voice to see if it would make a difference, and none of the responses I got gave me any real insight as to whether Alexa was actually listening to the sound of my voice to determine my mood. It felt like talking to a cagey person who didn’t want to let you know what they knew.

Neediness, inconsistency and gaslighting. In creating something so imperfect, maybe Amazon has actually managed to nail the human experience. I’ll be clear, though. The company never said it set out to make something human. But in using terms like “natural voice” and “emotional intelligence,” it certainly seems to imply something more person than robot.

It’s not a problem unique to Amazon. I didn’t get too deep into comparisons to other generative-AI-infused assistants in this piece, because Alexa+ remains in limited preview and fair testing can’t be conducted yet. But I did already notice some small areas where the Amazon offering was smarter than others.

I asked Alexa, Siri and the Google Assistant (on a Pixel 9) to set a timer for 30 minutes, before changing my mind and telling them to adjust those to 25-minute countdowns instead. Only Alexa took the time that had already elapsed into account, setting my new timer for 24 minutes and about 30 seconds instead of simply starting over at 25 minutes like the other two. It's an insignificant difference, but it effectively shows an element of consideration that's almost human in its complexity.
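The elapsed-time bookkeeping behind that behavior is simple arithmetic. Here is a minimal sketch (hypothetical code, not Amazon's actual implementation) of the adjustment only Alexa got right:

```python
def adjust_timer(elapsed_seconds, new_minutes):
    """Remaining seconds after a user changes a running timer's length.

    Rather than restarting at the new duration, subtract the time that
    has already elapsed, clamping at zero for timers that would have
    already finished.
    """
    remaining = float(new_minutes) * 60 - elapsed_seconds
    return max(remaining, 0.0)

# Thirty seconds into a 30-minute timer, asking for "25 minutes" leaves
# about 24 minutes and 30 seconds on the clock.
print(adjust_timer(30, 25))  # → 1470.0
```

Starting over at a fresh 25:00, as Siri and the Google Assistant did, amounts to ignoring the `elapsed_seconds` term entirely.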

I will spend more time with Alexa+ to further poke at the limits of its abilities and to continue our dance of understanding. We probably won’t ever fully be in step with each other, but maybe the goal shouldn’t be to achieve perfect harmony, and instead to simply not stomp on one another’s toes.

This article originally appeared on Engadget at https://www.engadget.com/ai/alexa-preview-an-almost-philosophical-exercise-130012573.html?src=rss

© AI-generated image by Alexa+

An AI-generated image of a smart display on a kitchen island. The smart display has a cartoon smiling face on it, and the device is surrounded by vegetables and wooden utensils.

Itch.io is removing NSFW games to comply with payment processors' rules

Itch.io has deindexed and hidden all adult games and other content from its browse and search pages to make sure it doesn't lose the ability to sell with the payment processors it uses. The gaming marketplace, which mainly hosts titles from indie developers, has admitted in an announcement that it wasn't able to give creators advance notice. It "had to act urgently to protect the platform’s core payment infrastructure," it said, because the "situation developed rapidly." 

The website explained that it recently came under the scrutiny of its payment processors after an organization called Collective Shout launched a campaign against Steam and itch.io. Collective Shout, which describes itself as an Australian grassroots campaigning movement, directed its concerns to the gaming marketplaces' payment processors. It originally focused its campaign on the game No Mercy, which revolves around a character engaging in rape and sexual violence. Both itch.io and Steam removed the game back in April. In its open letter to payment processors published earlier this month, however, the group said it "discovered hundreds of other games featuring rape, incest and child sexual abuse" on Steam and itch.io since then.

Steam already banned "content that may violate the rules and standards set forth" by its payment processors, which mostly affected adult games. Now, itch.io is also clearly heading that way. At the moment, the website is doing a comprehensive audit of its content and will keep adult games delisted until it's done. After it's finished, it will introduce new compliance measures and will require creators of adult games to confirm that their titles are allowed under the rules of the payment processors linked to their account.

Itch.io admitted that some games will be permanently removed from its marketplace as part of its review and the new policy it will put in place, but it said its "ability to process payments is critical for every creator" on its platform. "To ensure that we can continue to operate and provide a marketplace for all developers, we must prioritize our relationship with our payment partners and take immediate steps towards compliance," the website said in its announcement. 

Although No Mercy's removal from gaming websites is understandably celebrated by a lot of people, users are concerned about the potential effects of marketplaces adopting anti-porn rules. Queer content, for instance, is disproportionately affected by censorship measures and could be tagged as "adult" or "NSFW" even when it doesn't contain anything sexual. 

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/itchio-is-removing-nsfw-games-to-comply-with-payment-processors-rules-133045491.html?src=rss

Image: Teenage boy using keyboard and mouse (© Andrew Brookes via Getty Images)

The best Nintendo Switch 2 accessories for 2025

Now that the Switch 2 is finally here, you may be just as excited to kit out your new console with the right accessories as you are to dive into Mario Kart World. Not only can the right accessories make it easier and more fun to play all of the games you love, but they can also make your gaming experience better in different environments, be it on the couch, in an airplane or in the car. We're excited to get our hands on some of the newest Switch 2 accessories (more on those below), but plenty of our favorite classics are compatible with the Switch 2 as well as older versions of the console. These are our current favorite Nintendo Switch 2 accessories, and we'll add to this list over time as we test out new gear.

Best Nintendo Switch 2 accessories

More Nintendo Switch 2 accessories

Nintendo announced a bunch of new accessories when it revealed the Switch 2 earlier this year. Key among them are a new Switch 2 Pro controller, Switch 2 camera, an all-in-one carrying case and more. Our staff will be testing out a bunch of these accessories, and we'll keep our favorites list up to date as we do so. If you're interested in picking up any of those new Switch 2 accessories, you can find them at a variety of retailers:

Joy-Con 2 bundle

Switch 2 Camera

Hori Nintendo Switch 2 Piranha Plant Camera

Joy-Con 2 Charging Grip

Joy-Con 2 Wheels (set of 2)

Switch 2 All-in-One Carrying Case

Switch 2 Carrying Case and Screen Protector

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/best-nintendo-switch-2-accessories-070011952.html?src=rss

Image: The best Nintendo Switch 2 accessories (© Sam Rutherford for Engadget)

Video Games Weekly: Censorship, shrinkage and a Subnautica scandal

Welcome to Video Games Weekly on Engadget. Expect a new story every Monday or Tuesday, broken into two parts. The first is a space for short essays and ramblings about video game trends and related topics from me, Jess Conditt, a reporter who's covered the industry for more than 13 years. The second contains the video game stories from the past week that you need to know about, including some headlines from outside of Engadget.

Please enjoy — and I'll see you next week.


This week, I’m fried. Maybe it’s the plodding and ever-present crumbling of society and human decency, or maybe it’s because Love Island USA just ended so I’m feeling extra listless. It’s a familiar summer sensation, but this year everything is exaggerated and extra tense, the stakes of every action seem higher, and instead of melting into the warmth of the season with a popsicle and a smile, I often find myself frozen and numb. I am the popsicle, coo coo ca choo.

I’m not sure exactly what I’m trying to convey here, but I think it’s clear that I shouldn’t be writing anything too serious at the moment. I’m working on a few reports and trying to keep my composure amid the chaos, and all the while, the video game headlines keep rolling on. I’ve included a few more than usual this week, as penance for my popsicle state.


The news

The Chinese Room escapes from Tencent

UK studio The Chinese Room, creator of Still Wakes the Deep and Everybody’s Gone to the Rapture, is independent once again. The Chinese Room leaders completed a management buyout with help from VC firm Hiro Capital to fully split the studio from Tencent subsidiary Sumo Digital, which acquired it in 2018. A number of people were laid off as part of the transition and the studio is left with a total of 55 employees. The Chinese Room is still working on Vampire: The Masquerade — Bloodlines 2 for Paradox Interactive, and it also has original projects in development.

Still Wakes the Deep was one of my absolute favorite games of 2024. Whether you’re a fan of beautiful paranormal horror or you're just really into oil rigs, give it a go.

Read these stories that Vice censored

Vice’s owner, Savage Ventures, doesn’t want you to read this story. Or this one.

Vice removed two articles about Steam’s new ban on certain “adult-only” content and the organization that pushed for the change, Collective Shout, which has the support of prominent anti-pornography groups with conservative religious foundations. The stories were written by contributor Ana Valens, who said the removals were “due to concerns about the controversial subject matter — not journalistic complaints.” Valens has vowed to never write for Vice again and a handful of reporters there have resigned in solidarity.

Censoring stories about censorship is certainly a choice, Vice.

Supermassive delays Directive 8020 and shrinks its team

The home of Until Dawn and The Dark Pictures Anthology, Supermassive Games, is laying off 36 people, restructuring its team and delaying one of its projects into 2026. A statement from the studio says the decisions were in response to the video game industry’s “challenging and ever-evolving environment.” It’s estimated that Supermassive had more than 300 employees before the layoffs.

Directive 8020, the fifth installment in the Dark Pictures Anthology, is now due to come out in the first half of 2026, rather than this fall. Honestly, I’m not surprised to hear Supermassive needs more time to work on Directive 8020. I watched Engadget UK bureau chief Mat Smith play the demo at Summer Game Fest in June, and while it looked great, we were both surprised by how short and non-interactive the segment was. He summed up this feeling in his preview with the line, “Finally, I got to play (but only for a few minutes).”

Supermassive is also working on Little Nightmares III, a series that it took over from Tarsier Studios. Tarsier created Little Nightmares and its sequel, but lost the rights to the IP when the team was acquired by a subsidiary of Embracer Group in 2019. Series publisher Bandai Namco kept the Little Nightmares brand and commissioned Supermassive to build the third game, while Tarsier is working on its own project, Reanimal.

It makes sense that Supermassive would prioritize Little Nightmares III in order to fulfill its obligations with Bandai. The game has already been delayed once, and it’s set to hit PC and consoles on October 10.

FBC: Firebreak is getting less busted

I still have high hopes for FBC: Firebreak to be the Left 4 Dead revival we've always wanted, but the fact is, it's not quite there yet. Remedy Entertainment is aware of this hard truth and has a plan to fix it. The studio laid out its pipeline for making FBC: Firebreak easier to jump into, more fun to play and less confusing overall, with most major changes coming in an update this winter.

Valve is still the best retirement community in game development

PCGamesN published an interview with Counter-Strike co-creator Minh Le, who left Valve years ago to try out independent development. One sentiment stuck out to me.

“They didn't force me out or anything,” Le told PCGamesN. “But a part of me kind of regrets it. Looking back, my decision to leave Valve was, financially, kind of a poor decision. If I had stayed with Valve, I would have been able to retire by now.”

It’s not presented as an indictment of Valve, but I find it notable that Le describes the studio as a place to retire, rather than a space to innovate and create the next generation of video games. At this rate, Valve will never outrun its reputation as the studio where talented game developers go to die (professionally speaking). 

But, hey, at least they're not getting laid off en masse. Which, unfortunately, brings us to the next headline.

Your favorite studio’s favorite studio faces layoffs

Cyberpunk 2077, Sea of Thieves and Dune: Awakening support studio Virtuos is laying off 270 developers, which is about seven percent of its staff. Virtuos is currently best known as the studio behind The Elder Scrolls 4: Oblivion Remastered alongside Bethesda, and it has more than 4,000 employees across Asia, Europe and North America. The cuts affect developers in Asia and Europe, with “fewer than 10” in France, where work on Oblivion Remastered was headquartered.

Heck yeah, there’s gonna be a Hellraiser game

Make sure to pin this one on your calendar. Saber Interactive is making Clive Barker's Hellraiser: Revival, a first-person, action-survival horror game that features actor Doug Bradley as Pinhead for the first time in nearly 20 years. Barker himself provided input on the story, too. It’s coming to PlayStation 5, PC and Xbox Series X/S, with no release date yet.

"The Hellraiser universe is defined by its unflinching exploration of pain, pleasure, and the thin and terrifying line that separates the two," a description from Saber Interactive reads. "That essence is at the heart of our game."

An inside look at the fallout of the Zenimax layoffs

Game Developer reporter Chris Kerr spoke with a number of employees at Zenimax who are still reeling from the layoffs that Microsoft enacted in early July. The vibes there sound pretty terrible.

“This carcass of workers that remains is somehow supposed to keep shipping award-winning games," one senior QA tester told Kerr. The developer continued, “Microsoft just took everything that could have been great about the culture and collaboration and decimated it. Morale is terrible. It's grotesque. People are stressed. They're crying.”

Xbox cloud play anywhere

When Xbox isn’t firing thousands of employees in one blow, it’s quietly laying the groundwork for the future of video game distribution. An update for Xbox Insiders this week introduces cross-platform cloud support, bringing your cloud library and play history to the Xbox PC app. This means you can access cloud activity on an Xbox console, PC or Windows handheld, and seamlessly play cloud games across devices. This is just how video games are going to work in the coming decades, and it’s interesting to watch our future slowly roll out in blog posts and software updates.

Subnautica 2 scandal catch-up

Did you miss all of the mess around Subnautica 2 last week? Or, more accurately, this past month? To quickly summarize, Subnautica publisher Krafton is being sued by the series creators after it fired them and then delayed their game, allegedly sabotaging a $250 million bonus payout due to developers. To not-quickly summarize, see my complete breakdown of the drama.

My Melody & Kuromi hits Netflix this week

I don’t know who else needs a little levity in their life right now, but I certainly do. Thankfully, the stop-motion show My Melody & Kuromi is coming to Netflix on July 24, and there’s already an adorable tie-in music video by LE SSERAFIM to enjoy. Zen out, watch all of the Sanrio sweetness and finally settle the debate: Are you more of a Kuromi or a My Melody?

Additional reading


Have a tip for Jessica? You can reach her by email, Bluesky or send a message to @jesscon.96 to chat confidentially on Signal.

This article originally appeared on Engadget at https://www.engadget.com/gaming/video-games-weekly-censorship-shrinkage-and-a-subnautica-scandal-221839722.html?src=rss

Image: © Saber Interactive

Still Wakes the Deep developer The Chinese Room regains its independence

The Chinese Room, maker of Still Wakes the Deep, has bought back its independence. The studio will continue developing new franchises after splitting from the Sumo Group. The latter said earlier this year that it's pivoting away from original games and toward co-development work. Tencent bought the British studio Sumo for $1.27 billion in 2021.

Game Developer reported that VC firm Hiro Capital facilitated the Chinese Room's independence deal. Studio head Ed Daly told the publication on Monday that the company is open to partnering with other developers moving forward.

Unfortunately, the studio's good news is tainted with an all-too-familiar story. The studio confirmed to Game Developer that it will lay off an undisclosed number of employees as part of the split. "Following the divestment process, a number of roles were regrettably made redundant," a company spokesperson said. "The specifics are confidential, but the studio is currently 55 developers. This was a very difficult process as we sought a future for the studio outside of Sumo. No more layoffs are planned as the studio moves forward."

Still from the game Little Orpheus. A character in a walrus mask swings on a vine as a T-Rex roars. Green background with dramatic lighting.
The Chinese Room / Secret Mode

The "modern horror classic" Still Wakes the Deep isn't all the studio is known for. The Chinese Room also made Little Orpheus, Everybody's Gone to the Rapture and the OG walking simulator, Dear Esther. The UK-based developer is also working on Vampire: The Masquerade — Bloodlines 2 with Paradox Interactive. The long-delayed title is scheduled to launch in October. The Chinese Room is also developing two other unannounced titles.

"This management buyout allows us to scratch the creative itch of continuing to work on new, original intellectual property but also to partner with other studios on other projects when they fit in with our vision," Daly told Game Developer. "This is what we are doing and we want to carry on doing it, so we're happy to carry on in this vein."

This article originally appeared on Engadget at https://www.engadget.com/gaming/still-wakes-the-deep-developer-the-chinese-room-regains-its-independence-182405167.html?src=rss

Image: Game still, a bright light emanating from a closed door in a dimly lit room (© The Chinese Room)

A Hellraiser game is in development

Calling all gamers with a love of horror films: Hellraiser is getting a video game adaptation. Saber Interactive is turning the cult franchise into a game called Clive Barker’s Hellraiser: Revival. Clive Barker's 1986 novella, The Hellbound Heart, served as the basis for the Hellraiser films. He also wrote and directed the first installment, which premiered in 1987.

Clive Barker’s Hellraiser: Revival is a single-player game that has you play as Aidan while fighting your way through an "action survival horror experience." You also must deal with Pinhead, who's known for harvesting human souls — yeah, this is a creepy game. As Saber Interactive states, "The Hellraiser universe is defined by its unflinching exploration of pain, pleasure, and the thin and terrifying line that separates the two. That essence is at the heart of our game." You can see for yourself in the announcement trailer.

The team at Saber Interactive took an additional step to ensure fans of the movie franchise are happy. They spoke with Clive Barker himself to maintain an "authentic" storyline. There's no word yet on exactly when Clive Barker’s Hellraiser: Revival will launch, but more announcements should come this year. It will be available on the Xbox Series X and S, PlayStation 5, and PC through Steam.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/a-hellraiser-game-is-in-development-145313443.html?src=rss

Image: Hellraiser: Revival game (© Saber Interactive)

Neon Abyss 2, a prison-break RPG and other new indie games worth checking out

Welcome to our weekly roundup of the goings on in the indie game space. It's been quite the busy spell, with several notable games debuting or landing on more platforms and some intriguing upcoming projects popping above the parapet.

The Steam Automation Fest is taking place this week (it runs until July 21 at 1PM ET), so fans of games like Satisfactory and Factorio might be looking to pick up some bargains on similar titles. As well as offering a variety of discounts, Automation Fest also saw the debut of several new games.

One of them is from a studio formed by the original team from the celebrated developer Zachtronics. The first title from Coincidence (with the help of publisher Astra Logical) is Kaizen - A Factory Story. This puzzler is set in 1980s Japan, and you're tasked with building popular electronics of the day, including calculators, arcade cabinets, personal computers and toy robots.

Dawn Apart also arrived during Steam Automation Fest. This is a blend of space colony sim and automation sandbox set in a destructible, voxel-based world. Astra Logical is also publishing this project, which is from Industrial Technology and Witchcraft. Dawn Apart is available now on Steam in early access.

Other new releases

One of the highest-profile indie games to debut this week is Neon Abyss 2, from Veewo Games and Kepler Ghost. The original game has sold more than a million copies, so plenty of people will have been looking forward to the sequel. This time around, you can play with up to three friends in co-op. The game's creators are promising "unlimited item synergies" here, which is handy since there's an onus on creating overpowered builds in this action roguelike. Neon Abyss 2 is out now on Steam in early access.

Ready or Not has been a hit on Steam since it arrived there in late 2023, and now Void Interactive's tactical first-person shooter has made the leap over to PlayStation 5 and Xbox Series X/S. Here, you'll lead a team of SWAT officers in an attempt to restore order to a city that's become overrun by corruption and chaos. You can play the 18-level campaign solo or with up to five friends.

Back to the Dawn is a story-driven RPG from Metal Head Games and publisher Spiral Up Games that's now out of early access on Steam. It has also arrived on Xbox Series X/S and Game Pass. 

As Thomas the Fox, you're an investigative journalist who has been framed for revealing government secrets and tossed into prison. You'll have to forge alliances with other prisoners in your attempt to break out.

As part of Back to the Dawn's full launch, Metal Head is introducing a new character, Bob the Panther, as well as a fresh storyline. I like the art style here, and the premise is pretty compelling.

Thinking is cool and all, but sometimes you just want to turn down the volume on your brain and shoot a bunch of stuff. Kickback is an arcade bullet hell shooter with a fun twist: you can only move while you're shooting and, as the title suggests, you'll be propelled backwards. I'm into it. Kickback — from Dot blood and publisher Targem Games — is out now on Steam.

I'll be honest, I had no idea what Fretless - Wrath of Riffson was when I got an email from Steam this week telling me that this game, which I apparently had placed on my wishlist at some point, is now available. But hey, that name is an absolute winner.

I appreciate the pixel art look of this turn-based RPG from Ritual Studios and publisher Playdigious Originals. According to the Steam page, you'll play "as Rob, a friendly musician trying to restore harmony in an off-tuned world where wacky villains are corrupting music." That sounds quite lovely, actually.

Upcoming

I have absolutely no idea how Strange Scaffold does it. The studio behind I Am Your Beast, the brilliantly unsettling Clickolding and Teenage Mutant Ninja Turtles: Tactical Takedown is about to unleash its seventh game in the space of two years. Co-Op Kaiju Horror Cooking is Strange Scaffold's first co-op game.

The trailer makes it look delightfully chaotic, as players rush to cook meals to satiate giant monsters and stay alive. Co-Op Kaiju Horror Cooking is out on July 29, but until then you can play the entire game for free as part of a playtest.

What's more, Strange Scaffold has announced two more games: Truck-kun is Supporting Me From Another World?! is a "chaotic anime-inspired vehicular action game about driving into objects at high speed to rescue the corporate ladder climber you accidentally isekai'd into a dangerous medieval fantasy world" while Space Warlord Baby Trading Simulator is a "fast-paced stock market sim about buying, selling, and shorting stocks based on the simulated lives of adorable alien children."

Beautiful one-line pitches. Zero notes. Can't wait for those, or to find out about another new game demonstrating a "unique method of developer collaboration" that Strange Scaffold plans to unveil next week. 

If you're hankering for more first-person parkour in a similar vein to Mirror's Edge, Neon White or Ghostrunner, Jetrunner might just do the trick. Riddlebit Software's platformer-shooter hybrid, which is published by Curveball Games, is coming to Steam on September 4. It looks like it could be fun! If the announcer's voice in the trailer sounds familiar, that's because the character is played by Matthew Mercer.

I missed this one last week, but I just want to very quickly mention Ghost Haunting. The world needs more funny point-and-click adventures in the vein of those classic LucasArts games. Three Headed Monkey Studios and publisher Daedalic Entertainment have made a positive first impression to suggest they're ready to take that particular baton. There's no release date or window as yet, but this was an instant addition to my wishlist on Steam (let's just hope I don't forget about this one, eh?).

I would love to play a tour manager simulator game at some point, getting into the nitty gritty of making sure gigs and everything else involved in a band's life on the road goes smoothly. Rockbeasts is not quite that, but it still seems like it could be a good time.

This is a "grungy" narrative-driven rock band management game from Lichthund and Team17. The trailer is fun and there are some interesting talents involved here. Rockbeasts was written by Jakub Szamałek, who was senior writer of The Witcher 3 and principal writer of Cyberpunk 2077. Even cooler is that Iggy Pop voices one of the characters, who is named Iggy Pup. Love it.

Rockbeasts is coming to PC and consoles next year.

This article originally appeared on Engadget at https://www.engadget.com/gaming/neon-abyss-2-a-prison-break-rpg-and-other-new-indie-games-worth-checking-out-110043805.html?src=rss

Image: Neon Abyss 2 (© Veewo Games/Kepler Ghost)