
Acrobat Studio is Adobe's new AI-powered hub for PDFs

Whether you love or hate them, PDFs are an inescapable part of the job for many of us. In fact, it's safe to say the format isn't going away anytime soon, with Adobe reporting there are 3 trillion PDFs in circulation worldwide. However, there's no denying they can be a pain to work with, and in an effort to make it easier to manage projects involving multiple PDFs, Adobe is launching a new product today called Acrobat Studio. And wouldn't you know it, the company is marketing the inclusion of generative AI tools as a major selling point of the suite.

The main feature of Acrobat Studio is a set of hubs Adobe calls PDF Spaces. Here, you can upload up to 100 files — including PDFs, of course, alongside public web pages, RTFs, DOCXs and more — and Acrobat Studio's built-in AI assistants will help you make sense of everything. To start, the hub will generate a summary of all the documents, with a few pre-populated prompts to help with further analysis. Accompanying each bullet point from the AI is a citation you can use to verify the model's summary by quickly jumping to the document it pulled the information from. Sharing your PDF Spaces with colleagues is built right into Acrobat Studio.

In addition to chatting with Acrobat Studio's AI assistant, you can create custom assistants to carry out specific tasks. By default, Adobe offers three of these — analyst, instructor and entertainer — to get you started. The names do a decent job of communicating each assistant's purpose. For example, the instructor will attempt to explain complex topics. You can create your own by writing a set of custom prompts.

There are some notable limitations to PDF Spaces. For one, the hub's generative AI features currently only work with documents written in English. Adobe says it will add support for other languages "over time." The hub also can't analyze videos, handwritten notes or password-protected files.

Outside of PDF Spaces, Acrobat Studio offers access to Adobe Express built right into the app, meaning you can use Adobe's Firefly AI models to generate commercially safe images for your PDFs. As you would expect, the suite also comes with Adobe Acrobat and all the tools you might need to create and edit your own protected documents.

Pricing for Acrobat Studio starts at $25 per month for individuals, with a 14-day trial available.

This article originally appeared on Engadget at https://www.engadget.com/ai/acrobat-studio-is-adobes-new-ai-powered-hub-for-pdfs-130003264.html?src=rss

The PDF Spaces feature in Acrobat Studio creates a hub where AI can analyze your PDF documents.
Adobe

The best gaming monitors in 2025

If you want to get the most out of your games — whether you're into competitive FPS titles, sprawling RPGs or story-driven adventures — a good gaming monitor can make all the difference. Smooth gameplay, low input lag and crisp visuals are just the start. With the right screen, everything from your aim to your immersion gets a serious upgrade.

These days, there’s a lot more to consider than just refresh rate or screen size. You’ll find ultrawide gaming monitors, widescreen displays, models with USB-C support, and monitors that can bring out the best in your CPU and GPU. Some even match the style of your setup, pairing perfectly with gaming headsets and accessories for a clean, cohesive look.

Whether you're shopping on a tighter price range or splurging on high-end picture quality, we’ve rounded up the best options to suit different setups and play styles — so you can level up your experience without the guesswork.


Best gaming monitors for 2025

How we test gaming monitors

While I’ve not used every product recommended in our list, I have extensively tested dozens of gaming monitors in the past, including models with WOLED and QD-OLED panels. In the case of the Alienware monitor I highlight above, I bought one for myself with my own money. Separately, I spent dozens of hours over a two-year period researching computer monitor options to write the current version of this guide.

Factors to consider before buying a gaming monitor

LCD vs OLED

When shopping for a gaming monitor, you first need to decide if you want to go with a screen that has an LCD or OLED panel. For most people, that choice will come down to price; OLED gaming monitors are more expensive than their LCD counterparts. Even if money isn’t a concern, the choice might not be as straightforward as you think; both LCD and OLED panels come in a few different flavors, and knowing the differences between each type is important to making an informed decision.

LCD monitors come in three different varieties: twisted nematic (TN), vertical alignment (VA) or in-plane switching (IPS). For the most part, you want to avoid TN monitors unless you're strapped for cash or want a monitor with the fastest possible refresh rates and response times. TN screens feature the worst viewing angles, contrast ratios and colors of the group.

The differences between VA and IPS panels are more subtle. Historically, VA gaming monitors featured slower pixel response times than their TN and IPS counterparts, leading to unsightly image smearing. However, that's improved in recent years. VA panels also frequently sport better contrast ratios than both TN and IPS screens. They're not dramatically better than their IPS siblings on that front, but given that contrast ratios aren't an inherent strength of LCDs, every bit helps.

On the other hand, IPS panels excel at color accuracy and many offer high refresh rates and response times that are as fast as the fastest TN panels. The majority of LCD gaming monitors on the market today feature IPS panels, though you will frequently find VA screens on ultrawide monitors.

What about OLED?

If you can afford one, an OLED screen makes for the best monitor for gaming. The ability of organic light-emitting diodes to produce true blacks is transformational. Simply put, every game looks better when there isn’t a backlight to wash out shadow detail. Plus, you can experience true HDR with an OLED screen, something that LCDs aren’t known for.

Today, OLED screens come in two different flavors: WOLED and QD-OLED, with LG producing the former and Samsung the latter. I won't bore you with the technical details of how the two panel types differ from one another, other than to note that both technologies broadly share the same set of shortcomings.

Most notably, OLED monitors don’t get very bright. At best, the most capable models peak at around 250 nits when measuring brightness across the entire screen. I didn’t find this to be an issue in my testing, but your experience may vary depending on the ambient light in your gaming setup.

If brightness is important to you, note that due to manufacturer tunings, different models can perform better than others, even if they feature the same panel from LG or Samsung. It’s worth comparing monitors in the same class to find the model that’s right for you.

Separately, almost all OLEDs feature sub-pixel layouts that produce text fringing in Windows. The latest generation of OLED panels from both LG and Samsung are much better in this regard, to the point where modern OLEDs are good enough for reading and image editing. However, it’s still worth going to your local Micro Center or Best Buy to see the model you want in person, as the text fringing issue is hard to capture in photos and videos.

Another (potentially more serious) issue is burn-in. Organic light-emitting diodes can get “stuck” if they display the same image for long periods of time. Every OLED gaming monitor you can buy today comes with features designed to prevent burn-in and other image quality issues. Provided you don’t use your new OLED monitor for eight hours of daily productivity work, I don’t think you need to worry about burn-in too much.

Screen size, resolution and aspect ratio

After deciding where you fall on the LCD vs OLED debate, you can start thinking about the size of your future gaming monitor. Personal preference and the limitations of your gaming setup will play a big part here, but there are also a few technical considerations. You should think about size in conjunction with resolution and aspect ratio.

A 1440p monitor has 78 percent more pixels than a 1080p resolution screen, and a 4K display has more than twice as many pixels as a QHD panel. As the size of a monitor increases, pixel density decreases unless you also increase resolution. For that reason, there are sweet spots between size and high resolution. For instance, I wouldn’t recommend buying an FHD monitor that is larger than 24 inches or a QHD one bigger than 27 inches. Conversely, text and interface elements on a 4K monitor can look tiny without scaling on panels smaller than 32 inches.
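If you want to check that math for yourself, here's a minimal Python sketch, not tied to any particular monitor spec, that works out the pixel counts behind those percentages and the approximate pixel density (PPI) at the sweet-spot sizes mentioned above.

```python
# Back-of-the-envelope math behind the size/resolution sweet spots above.
import math

resolutions = {
    "1080p (FHD)": (1920, 1080),
    "1440p (QHD)": (2560, 1440),
    "4K (UHD)": (3840, 2160),
}

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a panel of the given diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

fhd_pixels = 1920 * 1080
for name, (w, h) in resolutions.items():
    total = w * h
    print(f"{name}: {total:,} pixels ({total / fhd_pixels:.2f}x FHD)")

# Pixel density at the recommended maximum sizes mentioned above.
for name, (w, h), size in [("1080p", (1920, 1080), 24),
                           ("1440p", (2560, 1440), 27),
                           ("4K", (3840, 2160), 32)]:
    print(f"{name} at {size} inches: {ppi(w, h, size):.0f} PPI")
```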

You also need to consider the performance costs of running games at higher resolutions. The latest entry-level GPUs can comfortably run most modern games at 1080p and 60 frames per second. They can even render some competitive titles at 120 frames per second and higher — but push them to run those same games at 1440p and beyond, and you’re bound to run into problems. And as you’ll see in a moment, a consistently high frame rate is vital to getting the most out of the latest gaming monitors.

If your budget allows for it, 1440p offers the best balance between image quality and gaming performance. As for 1080p and 4K, I would only consider the former if you're on a tight budget or enjoy competitive shooters like Valorant and Overwatch 2. For most people, the user experience and productivity benefits of QHD far outweigh the performance gains you get from going with a lower resolution screen.

Just a few years ago, 4K was not a viable resolution for PC gaming, but then NVIDIA came out with its 40 series GPUs. With those video cards offering the company’s DLSS 3 frame generation technology, there’s a case to be made that the technology is finally there to play 4K games at a reasonable frame rate, particularly if you exclusively play big, AAA single-player games like Alan Wake 2 and Cyberpunk 2077 or enjoy strategy games like the Total War series. However, even with frame generation, you will need a GPU like the $999 RTX 4080 Super or $1,599 RTX 4090 to drive a 4K display. Plus, 4K gaming monitors tend to cost more than their 1440p counterparts.

If you want an ultrawide, note that not every game supports the 21:9 aspect ratio, and fewer still support 32:9. When shopping for a curved monitor, a lower radius, or 'R' number, indicates a more aggressive curve. So, a 1000R monitor is more curved than an 1800R one.

The best gaming monitor
Photo by Igor Bonifacic / Engadget

Refresh rates and response times

And now, finally, for the fun stuff. The entire reason to buy a gaming monitor is its ability to draw more images per second than a traditional computer monitor. As you shop for a new screen, you will see models advertising refresh rates like 120Hz, 240Hz and 360Hz. The higher the refresh rate of a monitor, the more times it can update the image it displays on screen every second, thereby producing a smoother moving image. When it comes to games like Overwatch, Valorant and League of Legends, a faster refresh rate can give you a competitive edge, but even immersive single-player games can benefit.

A monitor with a 240Hz refresh rate will look better in motion than one with a 120Hz refresh rate, but there are diminishing returns. At 60Hz, the image you see on your computer monitor is updated every 16.67ms. At 120Hz, 240Hz and 360Hz, the gap between new frames shortens to 8.33ms, 4.17ms and 2.78ms, respectively. Put another way, although a 360Hz monitor can display 50 percent more frames than a 240Hz screen in a given time period, the interval between frames only shrinks by about 1.39ms. And all that depends on your GPU's ability to render a consistent 360 frames per second.
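To see where those millisecond figures come from, here's a short Python sketch of the arithmetic: the frame interval is simply 1,000ms divided by the refresh rate, and the savings shrink with each step up.

```python
# Frame-time arithmetic behind the refresh-rate comparison above.
refresh_rates_hz = [60, 120, 240, 360]

# The interval between frames is 1000 ms divided by the refresh rate.
intervals = {hz: 1000 / hz for hz in refresh_rates_hz}
for hz, ms in intervals.items():
    print(f"{hz}Hz -> {ms:.2f} ms per frame")

# Diminishing returns: the absolute savings shrink at each step.
print(f"120Hz vs 60Hz:  {intervals[60] - intervals[120]:.2f} ms saved per frame")
print(f"240Hz vs 120Hz: {intervals[120] - intervals[240]:.2f} ms saved per frame")
print(f"360Hz vs 240Hz: {intervals[240] - intervals[360]:.2f} ms saved per frame")
```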

Ultimately, a fast response monitor will do you no good if you don't have a gaming PC with a graphics card that can keep up. For example, with a 1440p 360Hz monitor, you realistically need a GPU like the RTX 4070 Super or RTX 4080 Super to saturate that display while playing competitive gaming titles like Overwatch 2 and Valorant.

There’s also more to motion clarity than refresh rates alone. Just as important are fast response times, or the amount of time it takes for pixels to transition from one color to another and then back again. Monitors with slow response times tend to produce smearing that is distracting no matter what kind of game you’re playing. Unfortunately, response times are also one of the more opaque aspects of picking the best gaming monitor for your needs.

Many LCD monitor manufacturers claim their products feature 1ms gray-to-gray (GtG) response times, yet they don’t all handle motion blur to the same standard. One reason is that many companies cherry-pick the GtG results that make their monitors look better on paper. The Video Electronics Standards Association (VESA) recently created a certification program to address that problem, but the grading system is unwieldy and, as far as I can tell, hasn’t seen much uptake from manufacturers.

For now, your best bet is to turn to resources like Rtings and Monitors Unboxed when shopping for a new gaming monitor. Both outlets conduct extensive testing of every screen they review and present their findings and recommendations in a way that’s easy to understand.

FreeSync vs G-Sync

No matter how powerful your system, it will sometimes fail to maintain a consistent frame rate. In fact, you should expect frame rate fluctuations when playing graphically intensive games like Alan Wake 2 and Cyberpunk 2077. For those moments, you want a gaming display with adaptive sync. Otherwise, you can run into screen tearing.

Adaptive sync technologies come in a few flavors. The two you’re most likely to encounter are AMD FreeSync and NVIDIA G-Sync, and each has its own set of performance tiers. With G-Sync, for instance, they are – from lowest to highest – G-Sync Compatible, G-Sync and G-Sync Ultimate.

The good news is that you don’t need to think too much about which adaptive sync technology a display supports. In the early days of the tech, it was rare to see a gaming monitor that offered both FreeSync and G-Sync since including the latter meant a manufacturer had to equip their display with a dedicated processor from NVIDIA. That changed in 2019 when the company introduced its G-Sync Compatible certification. Today, if a monitor supports FreeSync, it is almost certainly G-Sync Compatible, too, meaning you can enjoy tear-free gaming whether you’re using an AMD or NVIDIA GPU.

In fact, I would go so far as to say you shouldn’t make your purchasing decision based on the level of adaptive sync performance a monitor offers. As of right now, the list of G-Sync Ultimate-certified displays is about two dozen models long, and some are a few years old now.

The best gaming monitor
Photo by Igor Bonifacic / Engadget

Inputs

Almost every gaming display on the market right now comes with at least one DisplayPort 1.4 connection, and that’s the port you will want to use to connect your new monitor to your graphics card. If you own a PS5 or Xbox Series X/S, it’s also worth looking out for monitors that come with HDMI 2.1 ports, as those will allow you to get the most out of your current generation console.

A word about HDR

As fast and responsive gaming monitors have become in recent years, there’s one area where progress has been frustratingly slow: HDR performance. The majority of gaming monitors currently on sale, including most high-end models, only meet VESA’s DisplayHDR 400 certification. As someone who owned one such monitor, let me tell you it’s not even worth turning on HDR on those screens. You will only be disappointed.

The good news is that things are getting better, albeit slowly. The release of Windows 11 did a lot to improve the state of HDR on PC, and more games are shipping with competent HDR modes, not just ones that increase the brightness of highlights. Thankfully, with more affordable mini-LED monitors, like our top pick, making their way to the market, HDR gaming is finally within reach of most PC gamers.

Gaming monitor FAQs

Are curved monitors better for gaming?

It depends on personal preference. Many manufacturers claim curved monitors offer a more immersive gaming experience due to the way the display wraps around your field of vision. However, I find the edge distortion distracting, particularly when you increase the field of view in a game.

What aspect ratio should I look for in a gaming monitor?

The vast majority of 24-, 27- and 32-inch gaming monitors feature 16:9 aspect ratio panels, and that’s been the case for many years. In fact, nearly every game made in the last two decades supports 16:9 resolutions, such as 1,920 x 1,080 and 2,560 x 1,440, and if you buy a standard-sized monitor, you won’t need to worry about letterboxing.

In the case of ultrawides, 21:9 is the most common aspect ratio, with some very wide models sporting 32:9 panels. Among games, support for 21:9 and 32:9 resolutions is far from universal, so don’t be surprised if a game doesn’t fill the entirety of your screen.

Is OLED good for gaming?

OLED monitors are great for gaming. Not only do they offer excellent motion clarity and input latency, but they’re also easily the best displays for HDR gaming. If money is no object, and you primarily use your PC for gaming, you can’t go wrong with an OLED monitor.

How much does a good gaming monitor cost?

While you could easily spend more than $1,000 to obtain the best gaming monitor on the market now, the reality is that the budget and midrange categories have never been more competitive. In 2015, I spent $500 CAD to buy a 1080p monitor with a 144Hz refresh rate and TN panel. The budget AOC model I highlight above is not only cheaper than my first gaming monitor, but it also features a faster 180Hz refresh rate and a higher contrast VA panel.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/best-gaming-monitor-140008940.html?src=rss


Why on Earth would NASA build a nuclear reactor on the Moon?

"Duffy to announce nuclear reactor on the moon" is not a headline I imagined reading before last week. Sure, as a sci-fi loving nerd, I could see a future where nuclear power played a role in permanent Moon settlements. But the idea of NASA building a 100-kilowatt microreactor there in the next five years seemed ridiculous. Not so, according to scientists.

"I have no idea why this is getting so much play," Professor Bhavya Lal tells me over the phone, with a hint of exasperation in her voice. Lal's response makes sense once you understand the arc of her career; she has spent much of her professional life thinking about how the US should use nuclear power to explore space. At NASA, she served as the acting chief technologist, and was awarded the agency's Distinguished Service Medal. Among her other qualifications, she also testified before Congress on the subject of nuclear propulsion, and even helped rewrite the rules governing launches involving radioactive materials.

Most recently, she wrote a paper titled Weighing the Future: Strategic Options for US Space Nuclear Leadership where she and her co-author, Dr. Roger Myers, examine the past failures of US policy as it relates to nuclear power in space and argue the country should test a small nuclear system on the Moon by 2030. The way Casey Dreier, chief of space policy at The Planetary Society — a nonprofit that advocates for the exploration and study of space — tells it, many aspects of Secretary Duffy's plan are "pretty much straight out" of that report.

Lal is more modest and describes the directive Duffy issued as "accelerating ongoing work" at NASA. According to her, the agency has been "funding [space] fission power for years," adding that the only new thing here is that there's a date. "We've done this for more than 60 years," she tells me, and if NASA ends up delivering on Duffy's plan, it wouldn't even be the first nuclear reactor the US has sent into space. That distinction goes to SNAP-10A in 1965.

The reason the US has spent decades exploring space-capable nuclear reactors is simple. "You can get massive amounts of power from very little mass," explains Nick Touran, reactor physicist, nuclear advocate and the founder of What is Nuclear. And for launches to space, keeping payload amounts low is critical.

Just how much power are we talking about? "When fully fissioned, a softball-sized chunk of Uranium-235 offers as much energy as a freight train full of coal," says Dr. Lal. Combined with the limitations of solar power, particularly the farther a spacecraft travels away from the sun, nuclear is a game changer.

An artist's concept of a fission power system on the lunar surface
NASA

Dr. Lal points to the New Horizons probe as an example. In 2015, the spacecraft flew past Pluto, in the process capturing stunning photos of the dwarf planet. If you followed the mission closely, you may remember New Horizons didn't make a stop at Pluto. The reason for that is it didn't have enough power to enter orbit. "We had about 200 watts on New Horizons. That's basically two light bulbs worth of power," said Dr. Lal. It subsequently took New Horizons 16 months to send all of the 50-plus gigabytes of data it captured back to Earth. Had the probe had a 20-kilowatt microreactor, Dr. Lal says it could have streamed that data in real-time, on top of entering orbit and operating all of its instruments continuously.

When it comes to the Moon, nuclear would be transformational. On our only natural satellite, nights last 14 Earth days, and there are craters that never see any sunlight. Solar energy could power a permanent NASA outpost on the Moon, but not without a "huge" number of batteries to bridge the two-week gap in power generation, and those batteries would need to be ferried from Earth.

"At some point, we will want to do industrial-scale work on the Moon. Even if we want to do 3D printing, it requires hundreds of kilowatts of power – if not more," said Dr. Lal. "If you're going to do any kind of commercial activity on the Moon, we need more than solar can provide."

On Mars, meanwhile, nuclear power would be absolutely essential. The Red Planet is home to dust storms that can last weeks or months, and cover entire continents. In those conditions, solar power is unreliable. In fact, when NASA finally ended Opportunity's nearly 15-year mission on Mars, it was a planet-wide dust storm that left the rover inoperable.

As such, if the US wants to establish a permanent presence on Mars, Dr. Lal argues it would make the most sense to perfect the necessary reactor technology on the Moon. "We don't want our first-ever nuclear reactor operating on Mars. We want to try it out on the Moon first. And that is what I think NASA is trying to do."

Of course, there are many technical hurdles NASA will need to overcome before any of this is anywhere close to reality. Surprisingly, the most straightforward problem might be finding a 100-kilowatt microreactor. Right now, there's no company in the US producing microreactors. Atomics International and North American Aviation, the companies that built SNAP-10A, went defunct decades ago.

NASA and NNSA engineers lower the wall of the vacuum chamber around the KRUSTY system.
Los Alamos National Laboratory

"There are many that are in development, but almost none that are even in the prototype stage," said Touran. As he explains, that's an important detail; most nuclear reactors don't work at all when they're first turned on. "It takes a few iterations to get a reactor up to a level where it's operable, reliable and cost effective," he said.

The good news is Touran believes there's more than enough time for either NASA or a private company to build a working reactor for the project. "I think we're in a great spot to take a good swing at this by 2030," said Touran. In 2018, NASA and the Department of Energy demoed KRUSTY, a lightweight, 10-kilowatt fission system. "That was one of the only newish reactors we've turned on in many decades, and it was done on a shoestring budget," he said.

In the end, deploying a reactor on the Moon may prove more difficult than building one. Based on some rough math done by Dr. Myers, a 100-kilowatt reactor would weigh between 10 and 15 metric tons, meaning no current commercial rocket could carry it to space. NASA will also need to find a way to fit the reactor's radiator inside a rocket. Unfolded, the component will be about the size of a basketball court.

According to Dr. Lal, the 2030 timeline for the project is likely based on the assumption Starship will be ready to fly by then. But Elon Musk's super heavy-lift rocket has had a bad 2025. Of the three test flights SpaceX has attempted this year, two ended in the spacecraft exploding. One of those saw Starship go up in flames during what should have been a routine ground test.

SpaceX's Starship as seen during its eighth test flight
Reuters

If Starship isn't ready by 2030, NASA could conceivably fly the reactor separately from all the other components needed to make a functioning power system, but according to Lal, "that comes with its own set of challenges." Primarily, the agency doesn't have a great way of assembling such a complex system autonomously. In any case, Starship is at least a tangible work in progress. The same can't be said for the lander that would be needed to bring the reactor to the surface of the Moon. In 2021, NASA contracted SpaceX to build a lander for the Artemis missions, but the latest update the two shared on the spacecraft was a pair of 3D renderings. Similarly, Blue Origin's Blue Moon lander has yet to fly, despite promises it could make its first trip to the Moon as early as this spring or summer.   

Another question mark hangs over the entire project. As of the end of July, NASA is on track to lose approximately 4,000 employees who have agreed to leave the agency through either early retirement, a voluntary separation or a deferred resignation — all as part of the Trump administration's broader efforts to trim the number of workers across the entire federal government. All told, NASA is on track to lose about a fifth of its workforce, and morale at the agency is at an all-time low. Even with the Department of Energy and private industry providing support, there's good reason to believe the reductions will affect NASA's ability to deliver the project on time.

"The contradiction inherent in this proposal is that the White House is directing NASA to do the two most ambitious and difficult projects any space program can do, which is to send humans to the Moon and Mars, but to do so with a resource level and workforce equivalent to what the agency had before the first humans went to space in 1961," said Dreier.

A NASA spokesperson declined to share specifics on the reductions — including the number of employees set to leave the Glenn Research Center, the facility that built the KRUSTY reactor, and where much of the agency's nuclear engineering talent is concentrated. "As more official information becomes available, we anticipate answering more of your questions," the spokesperson said.

"I wish there was some inventory of the 4,000 people who left. What gaps are left? We have no idea if the departures were systematic," said Dr. Lal. "NASA has not been open or transparent about what types of employees have taken the deferred resignation program, where those skills are and where they're departing from," Drier added. "Nuclear engineering is not a common field for most people. [The reductions] certainly can't help." Still, both Lal and Touran believe the involvement of the Department of Energy is likely to swing things in NASA's favor.

In a statement NASA shared with Engadget, Secretary Duffy downplayed the workforce concerns. “NASA remains committed to our mission, even as we work within a more prioritized budget and changes with our workforce. NASA retains a strong bench of talent. I am confident that our exceptional team remains capable of executing upon my directives safely and in a timely manner and will continue to carry our work forward," he said. "We will continue to ensure America continues to lead in space exploration, advancing progress on key goals including returning Americans to the Moon and planting the Stars and Stripes on Mars, as we usher in the Golden Age of American innovation.”

In their report, Lal and Myers estimate it would cost about $800 million annually for five years to build and deploy a nuclear reactor on the Moon. Even if DoE support can prevent NASA's staffing cuts from kneecapping the project, its feasibility will hinge on whether the Trump administration ponies up the cash to execute on its own bold claims.

Have a tip for Igor? You can reach him by email, on Bluesky or send a message to @Kodachrome.72 to chat confidentially on Signal.

This article originally appeared on Engadget at https://www.engadget.com/science/space/why-on-earth-would-nasa-build-a-nuclear-reactor-on-the-moon-153741891.html?src=rss

Residents watch as a full moon known as the "Sturgeon Moon" rises over a horizon in Kyiv, Ukraine, August 9. REUTERS/Valentyn Ogirenko

Anthropic brings Claude's learning mode to regular users and devs

This past spring, Anthropic introduced learning mode, a feature that changed Claude's interaction style. When enabled, the chatbot would, following a question, try to guide the user to their own solution, instead of providing them with an answer outright. Since its introduction in April, learning mode has only been available to Claude for Education users. Now, like OpenAI did with Study Mode, Anthropic is making the tool available to everyone.

Starting today, Claude.ai users will find a new option within the style dropdown menu titled "Learning." The experience here is similar to the one Anthropic offers with Claude for Education. When you turn learning mode on, the chatbot will employ a Socratic approach, trying to guide you through your question. However, unlike the real-life Socrates, who was famous for bombarding strangers with endless questions, you can turn off learning mode at any time.

Notably, Anthropic is also offering two different takes on the feature through Claude Code. First, there's an "Explanatory" mode where Claude will generate summaries of its decision-making process as it works, giving the user a chance to better understand what it's doing.

For those at the start of their coding career or hobby, there's also a more robust option, which is once again called "Learning." Here, Claude will occasionally stop what it's doing and mark a section with a "#TODO" comment to prompt the user to write five to 10 lines of the code themselves. If you want to try the two features out for yourself, update to the latest version of Claude Code and type "/output-styles." You can then select between the two modes or Claude's default behavior.

According to Drew Bent, education lead at Anthropic, learning mode, particularly as it exists in Claude Code, is the company's attempt to make its chatbot into more of a collaborative tool. "I think it's great that there's a race between all of the AI labs to offer the best learning mode," he said. "In a similar way, I hope we can inspire something similar with coding agents."

Bent says the original learning mode came out of conversations Anthropic had with university students, who kept referring back to the concept of brain rot. "We found that they themselves realized that when they just copy and paste something directly from a chat bot, it's not good for their long-term learning," he said. When it came time to adapt the feature to Claude Code, the company wanted to balance the needs of new programmers with those like Bent who have been coding for a decade or more.

"Learning mode is designed to help all of those audiences not just complete tasks, but also help them grow and learn in the process and better understand their code base," Bent said. His hope is that the new tools will allow any coder to become a "really good engineering manager." In practice, that means those users won't necessarily write most of the code on a project, but they will develop a keen eye for how everything fits together and what sections of code might need some more work.

Looking forward, Bent says Anthropic doesn't "have all the answers, but needless to say, we're trying to think through other features we can build" that expand on what it's doing with learning mode. To that end, the company is opening up Claude Code's new Output Styles to developers, allowing them to build their own learning modes. Users too can modify how Claude communicates by creating their own custom prompts for the chatbot.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-brings-claudes-learning-mode-to-regular-users-and-devs-170018471.html?src=rss

Claude Code's new learning mode will guide users through the code created by Claude.
Anthropic

Apple's 'redesigned' blood oxygen monitoring feature hits Apple Watches in the US today

More than a year after an import ban forced the company to remove blood oxygen monitoring from some US Apple Watch models, Apple says it will introduce a redesigned version of the feature later today. In a post on its newsroom website, the company says the feature will roll out to Apple Watch Series 9, Series 10, and Watch Ultra 2 users through a joint Apple Watch and iPhone update. Once Apple begins rolling out the software, you'll need to update your devices to iOS 18.6.1 and watchOS 11.6.1 to access the reworked feature. Following the update, any blood oxygen data captured by your Apple Watch will be calculated on your iPhone, with the resulting data viewable in the respiratory section of the Health app.   

"There will be no impact to Apple Watch units previously purchased that include the original Blood Oxygen feature, nor to Apple Watch units purchased outside of the US," Apple said. 

Today's update marks another unexpected development in Apple's long, drawn-out legal feud with Masimo. In 2021, the medical device maker sued Apple, alleging the tech giant had infringed on its intellectual property related to pulse oximeter blood-oxygen monitoring technology. Following a couple of years of legal back and forth, the issue came to a head when the US International Trade Commission (ITC) upheld a prior ruling that found Apple had violated Masimo's patents. After former President Biden chose not to veto the decision, Apple was forced to temporarily pause sales of the Apple Watch Series 9 and Ultra 2. The company later started selling the wearables again following an update that removed the infringing blood oxygen monitoring feature. Now Apple says it's able to offer the functionality again, with a slight modification, due to a recent US Customs ruling.

This article originally appeared on Engadget at https://www.engadget.com/wearables/apples-redesigned-blood-oxygen-monitoring-feature-hits-apple-watches-in-the-us-today-131558485.html?src=rss

The Apple Watch Ultra on a wrist held in mid-air with a compass on its screen. The compass is showing the letter W with "~256 degrees" below it.
Cherlynn Low / Engadget

GPT-5 is here and it's free for everyone

A couple of days after announcing its first open-weight models in six years, OpenAI is releasing the long-awaited GPT-5. What's more, you can start using it today, even if you're a free user. With GPT-5, the company is touting across-the-board enhancements, claiming the model is its best yet when it comes to coding, writing, safety, accuracy and more.

"GPT-5 is the first time that it really feels like you're talking to an expert in any topic," said OpenAI CEO (and hypeman) Sam Altman during a press briefing the company held before today's announcement. "It reminds me of when the iPhone went from those giant, old pixel [screens] to the Retina Display, and then I went back to using one of those big pixelated things and I was like, 'Wow, I can't believe how bad we had it.'"

At the start of the year, Altman said GPT-5 would offer a unified experience for users, and the new model delivers on that promise. For the first time, OpenAI's default offering is a reasoning model, meaning the system is programmed to tackle complex problems by breaking them into smaller parts. Previously, if you wanted to force ChatGPT to use one of OpenAI's reasoning models, you had to select the "Think Longer" option from the prompt bar. This meant most free users didn't even know OpenAI had more capable models. With GPT-5, the company has significantly simplified the ChatGPT experience.

On the consumer side of things, there are only three versions of the new model. One of those — GPT-5 mini — only crops up when free and Plus users run into their regular GPT-5 usage limit. The other variant, GPT-5 Pro, is, as the name suggests, only available to subscribers of the company's $200 per month Pro plan. On the subject of query limits, Plus users can use GPT-5 "significantly" more than those with a free account, while Pro customers can chat with GPT-5 as much as they want.

A graphic highlighting some of the enhanced capabilities of GPT-5
OpenAI

When it comes to reasoning, GPT-5 is much faster than o3, OpenAI's previous state-of-the-art AI. "It's so fast that I've had the psychological experience of wondering, like, is it really thinking enough? And then it gives a great answer," said Altman. Perhaps more importantly, it suffers from fewer hallucinations, with OpenAI claiming the model delivers more accurate answers than any of its previous reasoning systems. For instance, when thinking, GPT-5 is approximately 80 percent less likely to include a factual error in its answer than o3. We'll see how GPT-5 responds in real-world use, but if OpenAI has made meaningful improvements here, it would be a big deal; hallucinations have typically been a major weakness of reasoning models, particularly relative to their traditional large language model counterparts.

At the same time, OpenAI says GPT-5 is its safest AI to date. For one, it includes a new feature called Safe Completions. "In the past, we've approached this from a sort of a binary, if we thought that the prompt was safe, we would comply. If we thought it was unsafe, the model would refuse," said Alex Beutel, safety research lead at OpenAI. "This worked well, but as a challenge that there can be kind of carefully worded prompts that could be confusing. So if someone says how much energy is needed to ignite some specific material that could be an adversary trying to get around the safety protections and cause harm, or it could be a student asking a science question to understand the physics of this material."

With Safe Completions, GPT-5 will try to give the most helpful answer within the safety constraints OpenAI has imposed on it. In tricky situations like the one Beutel outlined above, the model will only provide high-level information that can't be used to harm anyone. "On average, the system is both safer and more helpful for users, and we think that'll be much better," Beutel added.

Additionally, when it comes to health-related questions, GPT-5 is better at flagging concerns and suggesting questions the user should ask of their healthcare provider. It will also answer those prompts more precisely, thanks to the ability to adapt to the person's knowledge level and geography.

On top of everything else, OpenAI says GPT-5 is its best model for coding yet. It's supposedly a better writer too, with the company promising the chatbot is better at translating your drafts into "compelling, resonant" copy.

Alongside GPT-5, OpenAI is adding a handful of new features to ChatGPT. To start, users can now choose a color for their chats, with a few exclusive options available for paying customers. OpenAI has also made it easier to connect ChatGPT to Gmail, Google Calendar and Google Contacts. Once you enable the connections, the chatbot will know when to automatically reference your Google accounts; you won't need to select anything before you start chatting. OpenAI will begin rolling out this feature to Pro subscribers starting next week, with availability for other users to follow.

Over in the Custom Instructions pane, where you can write system prompts to tweak how ChatGPT interacts with you, OpenAI is introducing a handful of pre-set personalities. The four options — cynic, robot, listener and nerd — are available as part of a research preview, and can be changed or disabled at any time.

Last but not least, OpenAI is releasing an updated version of its Advanced Voice feature the company introduced last summer. OpenAI says the tool is better at understanding instructions and adapting its speaking style to the moment. As part of this change, OpenAI is retiring Standard Voice Mode. In practice, that means the company can now offer a better voice experience to everyone since it doesn't need to fall back on Standard Voice Mode, which isn't natively multi-modal like Advanced Voice and is therefore worse at understanding the nuances of human speech.

If you're wondering where this leaves OpenAI on the path toward artificial general intelligence, Altman had this to say when asked about the topic. "I kind of hate the term AGI, because everyone at this point uses it to mean a slightly different thing, but [GPT-5] is a significant step forward towards models that are really capable. We're still missing something quite important," he said, noting GPT-5 can't continuously learn on its own. "But the level of intelligence here, the level of capability, it feels like a huge improvement. Certainly, if I could go back five years before GPT-3 and you told me we have this now, I'd be like that's a significant fraction of the way to something very AGI-like."

Update 2:00PM: Added more context about hallucination rates.  

This article originally appeared on Engadget at https://www.engadget.com/ai/gpt-5-is-here-and-its-free-for-everyone-170001066.html?src=rss

GPT-5 key art
OpenAI

OpenAI's first new open-weight LLMs in six years are here

For the first time since GPT-2 in 2019, OpenAI is releasing new open-weight large language models. It's a major milestone for a company that has increasingly been accused of forgoing its original stated mission of "ensuring artificial general intelligence benefits all of humanity." Now, following multiple delays for additional safety testing and refinement, gpt-oss-120b and gpt-oss-20b are available to download from Hugging Face.

Before going any further, it's worth taking a moment to clarify what exactly OpenAI is doing here. The company is not releasing new open-source models that include the underlying code and data the company used to train them. Instead, it's sharing the weights — that is, the numerical values the models learned to assign to inputs during their training — that inform the new systems. According to Benjamin C. Lee, professor of engineering and computer science at the University of Pennsylvania, open-weight and open-source models serve two very different purposes.

"An open-weight model provides the values that were learned during the training of a large language model, and those essentially allow you to use the model and build on top of it. You could use the model out of the box, or you could redefine or fine-tune it for a particular application, adjusting the weights as you like," he said. If commercial models are an absolute black box and an open-source system allows for complete customization and modification, open-weight AIs are somewhere in the middle.

OpenAI has not released open-source models, likely since a rival could use the training data and code to reverse engineer its tech. "An open-source model is more than just the weights. It would also potentially include the code used to run the training process," Lee said. And practically speaking, the average person wouldn't get much use out of an open-source model unless they had a farm of high-end NVIDIA GPUs running up their electricity bill. (They would be useful for researchers looking to learn more about the data the company used to train its models though, and there are a handful of open-source models out there like Mistral NeMo and Mistral Small 3.)

With that out of the way, the primary difference between gpt-oss-120b and gpt-oss-20b is how many parameters each one offers. If you're not familiar with the term, parameters are the settings a large language model can tweak to provide you with an answer. The naming is slightly confusing here, but gpt-oss-120b is a 117 billion parameter model, while its smaller sibling is a 21-billion one.

In practice, that means gpt-oss-120b requires more powerful hardware to run, with OpenAI recommending a single 80GB GPU for efficient use. The good news is the company says any modern computer with 16GB of RAM can run gpt-oss-20b. As a result, you could use the smaller model to do something like vibe code on your own computer without a connection to the internet. What's more, OpenAI is making the models available through the Apache 2.0 license, giving people a great deal of flexibility to modify the systems to their needs.
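To give a sense of what that flexibility looks like in practice, here's a minimal, hypothetical sketch of loading the smaller model locally with the Hugging Face transformers library. The "openai/gpt-oss-20b" repo ID and the generation settings below are assumptions based on the model's name rather than OpenAI's official instructions, so treat this as illustrative only.

```python
# A rough sketch of running gpt-oss-20b locally via Hugging Face transformers.
# NOTE: the repo ID below is an assumption based on the model's name; check
# Hugging Face for the exact path. Expect to need roughly 16GB of RAM (or a
# suitable GPU), per OpenAI's stated guidance for the 20B model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed repo ID; verify before running
    device_map="auto",           # spread the model across available GPU/CPU memory
)

prompt = "Explain chain-of-thought reasoning in two sentences."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```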

Despite this not being a new commercial release, OpenAI says the new models are in many ways comparable to its proprietary systems. The one limitation of the gpt-oss models is that they don't offer multi-modal input, meaning they can't process images, video or voice. For those capabilities, you'll still need to turn to the cloud and OpenAI's commercial models, though both new open-weight systems can be configured to hand those requests off to them. Beyond that, however, they offer many of the same capabilities, including chain-of-thought reasoning and tool use. That means the models can tackle more complex problems by breaking them into smaller steps, and if they need additional assistance, they know how to use the web and coding languages like Python.

Additionally, OpenAI trained the models using techniques the company previously employed in the development of o3 and its other recent frontier systems. In competition-level coding, gpt-oss-120b earned a score that is only a shade worse than o3, OpenAI's current state-of-the-art reasoning model, while gpt-oss-20b landed in between o3-mini and o4-mini. Of course, we'll have to wait for more real-world testing to see how the two new models compare to OpenAI's commercial offerings and those of its rivals.

The release of gpt-oss-120b and gpt-oss-20b and OpenAI's apparent willingness to double down on open-weight models comes after Mark Zuckerberg signaled Meta would release fewer such systems to the public. Open-sourcing was previously central to Zuckerberg's messaging about his company's AI efforts, with the CEO once remarking about closed-source systems "fuck that." At least among the sect of tech enthusiasts willing to tinker with LLMs, the timing, accidental or not, is somewhat embarrassing for Meta.

"One could argue that open-weight models democratize access to the largest, most capable models to people who don't have these massive, hyperscale data centers with lots of GPUs," said Professor Lee. "It allows people to use the outputs or products of a months-long training process on a massive data center without having to invest in that infrastructure on their own. From the perspective of someone who just wants a really capable model to begin with, and then wants to build for some application. I think open-weight models can be really useful."

OpenAI is already working with a few different organizations to deploy their own versions of these models, including AI Sweden, the country's national center for applied AI. In a press briefing OpenAI held before today's announcement, the team that worked on gpt-oss-120b and gpt-oss-20b said they view the two models as an experiment; the more people use them, the more likely OpenAI is to release additional open-weight models in the future.

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-first-new-open-weight-llms-in-six-years-are-here-170019087.html?src=rss

The icon for ChatGPT on iOS
Igor Bonifacic / Engadget

Google DeepMind's Genie 3 can dynamically alter the state of its simulated worlds

At the start of December, Google DeepMind released Genie 2. The models in the Genie family are what are known as world models: they're capable of generating images as the user — either a human or, more likely, an automated AI agent — moves through the world the software is simulating. The resulting video of the model in action may look like a video game, but DeepMind has always positioned Genie 2 as a way to train other AI systems to be better at what they're designed to accomplish. With its new Genie 3 model, which the lab announced on Tuesday, DeepMind believes it has made an even better system for training AI agents.

At first glance, the jump between Genie 2 and 3 isn't as dramatic as the one the model made last year. With Genie 2, DeepMind's system became capable of generating 3D worlds, and could accurately reconstruct part of the environment even after the user or an AI agent left it to explore other parts of the generated scene. Environmental consistency was often a weakness of prior world models. For instance, Decart's Oasis system had trouble remembering the layout of the Minecraft levels it would generate. 

By comparison, the enhancements offered by Genie 3 seem more modest, but in a press briefing Google held ahead of today's official announcement, Shlomi Fruchter, research director at DeepMind, and Jack Parker-Holder, research scientist at DeepMind, argued they represent important stepping stones in the road toward artificial general intelligence.

So what exactly does Genie 3 do better? To start, it outputs footage at 720p, instead of 360p like its predecessor. It's also capable of sustaining a "consistent" simulation for longer. Genie 2 had a theoretical limit of up to 60 seconds, but in practice the model would often start to hallucinate much earlier. By contrast, DeepMind says Genie 3 is capable of running for several minutes before it starts producing artifacts.

Also new to the model is a capability DeepMind calls "promptable world events." Genie 2 was interactive insofar as the user or an AI agent was able to input movement commands and the model would respond after it had a few moments to generate the next frame. Genie 3 does this work in real-time. Moreover, it’s possible to tweak the simulation with text prompts that instruct Genie to alter the state of the world it’s generating. In a demo DeepMind showed, the model was told to insert a herd of deer into a scene of a person skiing down a mountain. The deer didn't move in the most realistic manner, but this is the killer feature of Genie 3, says DeepMind.

As mentioned before, the lab primarily envisions the model as a tool for training and evaluating AI agents. DeepMind says Genie 3 could be used to teach AI systems to tackle "what if" scenarios that aren't covered by their pre-training. "There are a lot of things that have to happen before a model can be deployed in the real world, but we do see it as a way to more efficiently train models and increase their reliability," said Fruchter, pointing to, for example, a scenario where Genie 3 could be used to teach a self-driving car how to safely avoid a pedestrian that walks in front of it.

A GIF demonstrating Genie 3's interactivity
Google DeepMind

Despite the improvements DeepMind has made to Genie, the lab acknowledges there's much work to be done. For instance, the model can't generate real-world locations with perfect accuracy, and it struggles with text rendering. Moreover, for Genie to be truly useful, DeepMind believes the model needs to be able to sustain a simulated world for hours, not minutes. Still, the lab feels Genie is ready to make a real-world impact.

"We already at the point where you wouldn't use [Genie] as your sole training environment, but you can certainly finds things you wouldn't want agents to do because if they act unsafe in some settings, even if those settings aren't perfect, it's still good to know," said Parker-Holder. "You can already see where this is going. It will get increasingly useful as the models get better."

For the time being, Genie 3 isn't available to the general public. However, DeepMind says it's working to make the model available to additional testers.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-deepminds-genie-3-can-dynamically-alter-the-state-of-its-simulated-worlds-140052124.html?src=rss

Genie 3 key art
Google DeepMind

The New York Times and Amazon's AI licensing deal is reportedly worth up to $25 million per year

Amazon's AI licensing deal with The New York Times is worth $20 million to $25 million per year, according to The Wall Street Journal. The two companies did not disclose the fiscal terms of the agreement back when it was announced in May. The Journal's reporting provides a rare insight into the value of a media company licensing its content for AI training.

In the case of The Times, Amazon's annual payments to the publisher would amount to nearly one percent of its total revenue in 2024. In return, the agreement allows Amazon to train its AI models on content from The Times, including content from auxiliary arms of the company like The Athletic and NYT Cooking. It also allows Amazon to offer summaries and excerpts from the paper through Alexa.

In light of that, $20 million to $25 million per year seems a small payout when the threat AI poses to publishers is so great, and other media companies have been able to negotiate bigger payouts. For instance, OpenAI's five-year licensing deal with News Corp, the owner of The Wall Street Journal, is reportedly worth more than $250 million.

The New York Times sued OpenAI and Microsoft for training their models on the company’s content without permission back in 2023. That case is still ongoing.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-new-york-times-and-amazons-ai-licensing-deal-is-reportedly-worth-up-to-25-million-per-year-135523853.html?src=rss

Copies of the May 9, 2025 New York Times front page move on a conveyor belt after Pope Leo XIV was elected as the first ever American Pope, in New York City, U.S., May 8, 2025. REUTERS/Adam Gray

ChatGPT's Study Mode will guide students to an answer step by step

OpenAI is rolling out a new Study Mode the company says is designed to give students a better understanding of complex topics. Like Claude's Learning Mode, which Anthropic introduced in April, Study Mode will see ChatGPT adopt a Socratic approach to conversations. Rather than answer a question outright, the chatbot will attempt to guide the user to their own solution, starting with questions that allow the system to calibrate its responses to their objective and understanding. Conversations then unfold using a "scaffold" structure, which means ChatGPT will reveal information gradually so as not to overwhelm the user with more than they're ready to digest.

OpenAI says it developed Study Mode in collaboration with teachers, scientists and pedagogy experts. Rather than running on an entirely new model, the tool is powered by a series of custom system instructions.

"We chose this approach because it lets us quickly learn from real student feedback and improve the experience — even if it results in some inconsistent behavior and mistakes across conversations," said OpenAI. "We plan on training this behavior directly into our main models once we’ve learned what works best through iteration and student feedback."

Notably, OpenAI isn't making Study Mode available only to ChatGPT Edu users. Instead, the company is first rolling out the feature to logged-in Free, Plus, Pro and Team users. Edu subscribers will gain access in the "next few weeks."

It will be interesting to find out how many students end up actually using Study Mode, as a toggle allows you to easily turn the feature on and off. And as a recent New York Magazine article vividly detailed, AI cheating is a major problem at US colleges. For its part, OpenAI says it plans to work on making Study Mode more engaging and useful to students. The company is exploring how to offer deeper personalization through the tool, as well as ways to offer goal setting and progress tracking across conversations.

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpts-study-mode-will-guide-students-to-an-answer-stey-by-step-180614172.html?src=rss

©

© OpenAI

A screenshot showing ChatGPT's new Study Mode, which will attempt to guide users to a solution rather than giving it outright.
  •  

Google's tool for virtually trying on clothes is now available in the US

At I/O 2025 in May, Google previewed a new AI-powered feature the company said would simplify online shopping. The tool allows you to upload a single, full-body photo of yourself to "try on" different pieces of clothing you find online. Following a limited preview, Google has begun rolling out the feature to users in the US. You can start trying on clothing by tapping the "try on" icon on any apparel product listing on Google, or on any apparel result you find on Google Images.

Powering the experience is an image generation model Google trained to take into account how different materials fold, stretch and drape across different human bodies. According to Google, the model supports billions of clothing items found across the company's Shopping Graph, though there may still be some outfits the AI has a hard time parsing. However, most clothing items from popular retailers should be supported out of the gate.

With today's release, Google has also enhanced the price-tracking functionality built into the feature. Naturally, you can specify the color and size you want, but Google also lets you set the price you're willing to pay for the item, and you can configure the price watch so you're only alerted once the product dips below that threshold. "The Shopping Graph has products and prices from all across the web — so we’ll let you know when there’s an offer that meets your criteria," says Google. "No more constantly checking to see if that bag you’re eyeing is finally at the right price for you or forgetting to come back to a product you loved."

Later this year, Google plans to bring additional shopping features to AI Mode, the dedicated AI tab the company began rolling out to everyone in the US this past May. Come this fall, you'll be able to explore outfit and decor ideas — and buy what suits your fancy — directly from the chat bot.   

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-tool-for-virtually-trying-on-clothes-is-now-available-in-the-us-144342056.html?src=rss

©

© Google

Google's latest AI-powered tool allows you to virtually try on clothes.
  •  

Trump's AI Action Plan targets state regulation and 'ideological bias'

At the start of the year, President Trump announced his AI Action Plan, an initiative he said would eventually enact policy that would "enhance America's position as an AI powerhouse." Now, after months of consultation with industry players like Google and OpenAI, the administration has finally shared the specific actions it plans to take.   

Notably, the framework seeks to limit state regulation of AI companies by instructing the Office of Science and Technology Policy (OSTP) and other federal agencies to consider a state's existing AI laws before awarding AI-related funding. "The Federal government should not allow AI-related Federal funding to be directed to those states with burdensome AI regulations that waste these funds," the document states. As you may recall, Trump's "Big Beautiful Bill" was supposed to include a 10-year qualified moratorium on state AI regulation before that amendment was ultimately removed in a 99-1 vote by the US Senate.

Elsewhere, the AI Action Plan targets AI systems the White House says promote "social engineering agendas." To that end, Trump plans to direct the National Institute of Standards and Technology, through the Department of Commerce, to revise its AI Risk Management Framework to remove any mentions of "misinformation, Diversity, Equity, and Inclusion, and climate change." Furthermore, he's calling for an update to the federal government's procurement guidelines to ensure the government only contracts model providers that can definitively say their AI systems are "free from top-down ideological bias." Just how companies like OpenAI, Google and others are expected to do this is unclear from the document. 

Separately, Trump says he plans to remove regulatory hurdles that slow the construction of AI data centers. "America's environmental permitting system and other regulations make it almost impossible to build this infrastructure in the United States with the speed that is required," the document states. Specifically, the president plans to make federal lands available for the construction of data centers and power generation facilities. Under the Action Plan, the federal government will also expand efforts to use AI to carry out environmental reviews.    

The president plans to sign a handful of executive orders today to start the wheels turning on his action plan. Trump began his second term by rescinding President Biden's October 2023 AI guidelines. Biden's executive order outlined a plan to establish protections for the general public with regard to artificial intelligence. Specifically, the EO sought new standards for safety and security in addition to protocols for AI watermarking and both civil rights and consumer protections.

This article originally appeared on Engadget at https://www.engadget.com/ai/trumps-ai-action-plan-targets-state-regulation-and-ideological-bias-163247225.html?src=rss

©

© Reuters / Reuters

U.S. President Donald Trump stands after delivering remarks on AI infrastructure at the Roosevelt room at White House in Washington, U.S., January 21, 2025. REUTERS/Carlos Barria/File Photo
  •  

Proton's privacy-focused Lumo chatbot encrypts all your conversations

What's another AI chatbot in an already crowded field? That's the question Proton is trying to answer today with the release of its new Lumo assistant. And as with its best-known service, Proton Mail, the company says Lumo is for those who want a private alternative to what big tech is offering.

Proton says every conversation with Lumo is secured with zero-access encryption, meaning only your device can unlock your content. In the context of an AI chatbot, that has several implications. Most notably, it means not even Proton can view your chats. As a result, the company can't share your data with governments, advertisers or, for that matter, any other company, and it can't use your data to train future AI models. "By using Lumo, you can enjoy the benefits of an advanced AI assistant without the risk of your data being misused," says Proton.
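Proton hasn't detailed the exact cryptography behind Lumo here, but the core idea of zero-access encryption is that your content is encrypted with a key that never leaves your device, so the service only ever stores ciphertext it cannot read. The sketch below illustrates that concept with the Python cryptography package; it's a simplified stand-in, not Proton's actual implementation.

```python
# Conceptual sketch of zero-access encryption: the key is generated and kept
# client-side, so the server only ever sees ciphertext. Not Proton's actual
# scheme, just the general idea.
from cryptography.fernet import Fernet

# Generated once on the user's device and stored locally, never uploaded.
local_key = Fernet.generate_key()
cipher = Fernet(local_key)

chat_turn = "Summarize my meeting notes from Tuesday."
ciphertext = cipher.encrypt(chat_turn.encode())

# Only the ciphertext would be sent to or stored by the service. Without
# local_key, the server (or anyone it shares data with) can't recover it.
assert cipher.decrypt(ciphertext).decode() == chat_turn
```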

I briefly tried Lumo. It's a bit slow to generate a response, but you can broadly expect a similar experience to what you would find using ChatGPT or Claude for free. Lumo can search the web to answer questions beyond its knowledge cut-off date, but by default that feature is turned off to further protect user privacy. You can also upload files to Lumo. Here again Proton says the chatbot won't save any information.

Proton isn't touting the performance of Lumo's large language models, but if you're curious about this sort of thing, it's powered by a handful of open-source systems, including Mistral NeMo and Mistral Small 3. Proton told The Verge that Lumo will filter requests through the model best suited to the task. For example, it will use the OpenHands system for coding requests.
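Proton hasn't said how that routing decision is made. Conceptually, though, it amounts to a small dispatcher that inspects each prompt and hands it to a specialist model, along the lines of the sketch below. The keyword heuristic and model identifiers are placeholders, not Lumo's real logic.

```python
# Simplified sketch of routing a prompt to the model best suited to it.
# The heuristic and model names are illustrative; Proton hasn't published
# Lumo's actual routing rules.
CODE_HINTS = ("def ", "class ", "function", "compile", "bug", "stack trace")

def pick_model(prompt: str) -> str:
    """Return the name of the model that should handle this prompt."""
    if any(hint in prompt.lower() for hint in CODE_HINTS):
        return "openhands"        # coding-focused model
    if len(prompt) > 2000:
        return "mistral-nemo"     # general model with a longer context
    return "mistral-small-3"      # default general-purpose model

print(pick_model("Why does this function raise a KeyError?"))  # -> openhands
print(pick_model("Plan a three-day trip to Lisbon."))          # -> mistral-small-3
```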

Lumo is free to use, with a weekly query limit. You don't need a Proton account to begin a conversation with the chatbot. In addition to being available on the web, Proton offers both Android and iOS apps. A $13 per month Plus plan offers unlimited usage, alongside perks like larger uploads, access to more advanced AI models, priority support and more.

This article originally appeared on Engadget at https://www.engadget.com/ai/protons-privacy-focused-lumo-chatbot-encrypts-all-your-conversations-144551345.html?src=rss

©

© Proton

Proton's Lumo chatbot has a purple cat for a mascot.
  •  

DuckDuckGo now lets you customize the responses of its Duck.ai chatbots

Since last June, when DuckDuckGo introduced AI Chat, you've been able to use chat bots like Claude directly through its search engine. Now the company is making it easier to tweak the system prompts of those AI models while retaining your privacy. For the uninitiated, system prompts are a set of instructions given to a chat bot at the start of a conversation to guide things along. They often set the tone of the dialogue, and can sometimes cause a chat bot to become overly sycophantic, as was the case with GPT-4o earlier this year.

Both Anthropic and OpenAI give users a way to customize the responses of their respective chat bots, but those settings can be tricky to find if you don't know where to look. DuckDuckGo's new setting is available directly through Duck.ai's prompt bar and works a bit differently. Whatever customization you add is appended to the default system prompt for each model you chat with, meaning you don't need to set it separately for each one. Moreover, your tweaks are stored locally on your device, with no data being sent to Anthropic, OpenAI or any other model provider. It's a small addition, but if you use Duck.ai to compare responses between different models, you'll now get more consistency in tone.
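DuckDuckGo hasn't published the implementation, but the behavior it describes, a single locally stored customization appended to every model's default system prompt, reduces to something like the sketch below. The default prompts, model names and storage path are assumptions for illustration.

```python
# Sketch of appending one locally stored customization to each model's
# default system prompt, as Duck.ai's setting is described above.
# Default prompts, model names and the file path are illustrative.
import json
from pathlib import Path

DEFAULT_SYSTEM_PROMPTS = {
    "gpt-4o-mini": "You are a helpful assistant.",
    "claude-3-haiku": "You are a concise, helpful assistant.",
}

SETTINGS_FILE = Path.home() / ".duck_ai_customization.json"  # stays on-device

def load_customization() -> str:
    if SETTINGS_FILE.exists():
        return json.loads(SETTINGS_FILE.read_text()).get("customization", "")
    return ""

def system_prompt_for(model: str) -> str:
    """Default prompt for the model, plus the user's local customization."""
    custom = load_customization()
    base = DEFAULT_SYSTEM_PROMPTS[model]
    return f"{base}\n\n{custom}".strip() if custom else base

# The same customization applies to every model, so tone stays consistent.
for model in DEFAULT_SYSTEM_PROMPTS:
    print(model, "->", system_prompt_for(model))
```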

This article originally appeared on Engadget at https://www.engadget.com/ai/duckduckgo-now-lets-you-customize-the-responses-of-its-duckai-chatbots-151521930.html?src=rss

©

© DuckDuckGo

DuckDuckGo's new AI customization bar allows you to alter the system prompts for different chat bots.
  •  

DuckDuckGo now allows you to filter out AI images in search results

DuckDuckGo is making it easier to wade through some of the AI slop that has taken over the internet in recent months. This week, the company introduced a new filter for removing AI-generated images from search results. The next time you use the browser, you'll see a new dropdown menu titled "AI images." From there, you can set whether you want to see AI content or not. 

New setting: hide AI-generated images in DuckDuckGo

Our philosophy about AI features is “private, useful, and optional.” Our goal is to help you find what you’re looking for. You should decide for yourself how much AI you want in your life – or if you want any at all. (1/4) pic.twitter.com/pTolmsEQlQ

— DuckDuckGo (@DuckDuckGo) July 14, 2025

The filter relies on manually curated open-source block lists maintained by uBlockOrigin and others. According to DuckDuckGo, the filter won't catch every AI-generated image out on the internet, but it will greatly reduce how many you see. The company says it's working on additional filters.  
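DuckDuckGo hasn't shared its exact matching rules, but filtering results against a curated domain blocklist boils down to something like the sketch below. The sample list and result format are placeholders; the real community-maintained lists are far larger.

```python
# Sketch of hiding image results whose source domain appears on a curated
# blocklist of known AI-image sites. The sample list and result format are
# placeholders, not DuckDuckGo's actual data.
from urllib.parse import urlparse

AI_IMAGE_BLOCKLIST = {"example-ai-art.com", "genimages.example.net"}

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Match the listed domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in AI_IMAGE_BLOCKLIST)

results = [
    {"title": "Baby peacock", "url": "https://birdphotos.example.org/peacock.jpg"},
    {"title": "Baby peacock", "url": "https://example-ai-art.com/render.png"},
]

filtered = [r for r in results if not is_blocked(r["url"])]
print(filtered)  # only the first, non-AI result remains
```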

You'll notice the example DuckDuckGo uses to demo the feature in the GIF it provided involves a search for images of a "baby peacock." That's not by accident. People first started noticing how much Google Search results had been overrun by AI slop about a year ago, and one of the worst examples was any query involving the showy birds. Google has since addressed the situation somewhat, but AI slop in search results remains a problem on the platform. So it's good to see DuckDuckGo adopt a simple but effective solution to the issue.

This article originally appeared on Engadget at https://www.engadget.com/ai/duckduckgo-now-allows-you-to-filter-out-ai-images-in-search-results-144326213.html?src=rss

©

© DuckDuckGo

DuckDuckGo's new filter allows you to remove most AI images from your search results.
  •  

One of my favorite Steam early access games is now available on Switch and PS5

After five years of development, one of Steam's coziest games is leaving Steam early access and making the jump to consoles. Starting today, you can purchase The Wandering Village on PC, Nintendo Switch and PlayStation 5. On Steam, the game's developer, Stray Fawn Studio, is offering a 35 percent discount until July 31. In the US, that means you can get the game for just under $20. Switch owners, meanwhile, can get a 10 percent launch discount until August 7.

I've been playing The Wandering Village on and off since it entered early access in 2022. It's a lovely game that combines two very different influences. Most obviously, the game wears Stray Fawn's love of Hayao Miyazaki's seminal Nausicaä of the Valley of the Wind on its sleeve. The manga, and the later film, are set in a desolate, post-apocalyptic world ravaged by nuclear war.

The Wandering Village's other major influence is the catalog of Impressions Games. In the late '90s and early 2000s, the now-defunct studio went on a hot streak, releasing three games — Caesar III, Pharaoh and Zeus: Master of Olympus — that, to this day, define much of the city-building genre.

The Wandering Village marries those influences in a novel way. Rather than building your city on solid ground, you build it on the back of a giant creature called the Onbu. As you can probably guess, the Onbu doesn't stay still. And while there are ways you can influence its behavior, it sometimes has a mind of its own. All of that leads to some interesting gameplay interactions. For example, the Onbu might wander into a biome that is toxic to your villagers. As of this writing, the game has a "very positive" rating on Steam across nearly 6,000 reviews, with recent reviews trending toward "overwhelmingly positive."

If you want to grab a physical copy of the game for Switch or PS5, Stray Fawn has partnered with Serenity Forge to offer collector's and premium editions. Pre-orders will ship early next year. And despite the game leaving early access, Stray Fawn has promised to keep working on The Wandering Village.

This article originally appeared on Engadget at https://www.engadget.com/gaming/one-of-my-favorite-steam-early-access-games-is-now-available-on-switch-and-ps5-174539016.html?src=rss

©

© Stray Fawn Studios

In The Wandering Village, players build a city on top of a giant creature called the Onbu.
  •  

Trump's defunding of NASA would be catastrophic

"This is probably the most uncertain future NASA has faced, maybe since the end of Apollo," Casey Dreier tells me over the phone. Dreier is the chief of space policy at The Planetary Society, a nonprofit that advocates for the exploration and study of space.

On July 10, the Senate Appropriations Committee met to discuss the proposed federal Commerce, Justice and Science budget for 2026. While on average, funding for NASA has accounted for about 0.3 percent of total yearly spending by the federal government since the start of the 2010s, President Trump has called for a 24 percent cut year over year to the agency's operating allowance. By any metric, his plan would be devastating.

Adjusted for inflation, it would leave NASA with the smallest operating budget it has had since Russian cosmonaut Yuri Gagarin became the first human to travel to space in 1961. In the process, it would eviscerate the agency's science budget by nearly half, resulting in the termination of 55 ongoing or planned missions. It would also leave NASA with its smallest workforce in 70 years. All this, at a time when the agency has been tasked with returning to the Moon and bringing the first humans to Mars.

"There's no historical precedent to this level of single year, functionally indiscriminate and dramatic cuts. You lose, in one year, a third of all active science projects. [The Trump administration is] proposing to turn off missions that are performing not just good science, but unique and irreplaceable science. This isn't so they can reinvest the money in some radical new science efforts. No, the money is gone," said Dreier. "It's almost certainly the greatest threat to NASA science activities in the history of the space agency."

Dreier isn't exaggerating when he says some missions would be impossible to replace. One of the casualties of Trump's cuts would be the New Horizons probe. In 2015, New Horizons gave us our best look at Pluto ever. Four years later, it performed the farthest flyby in human history. As things stand, it's the only active spacecraft in the Kuiper belt, a region of our solar system that is not well-understood by scientists. Even if NASA were to start working on a replacement today, it would take a generation for that vehicle to reach where New Horizons is right now. It costs NASA about $14.7 million per year to continue operating the probe, a fraction of the $29.9 billion in additional funding Congress allocated to fund ICE enforcement and detainment operations in the president's recently passed tax bill.

OSIRIS-APEX probe visiting the Apophis asteroid (Image: Heather Roper)

Another mission that would be impossible to replace is OSIRIS-APEX. If the name sounds familiar, it's because OSIRIS-APEX is a continuation of NASA's incredibly successful OSIRIS-REx mission. In 2020, the spacecraft visited 101955 Bennu, an ancient asteroid about the size of the Empire State Building, and collected a sample of regolith (rocks and dirt) from its surface using a never-before-tried technique.

After OSIRIS-REx successfully returned the sample to Earth, NASA decided to extend the spacecraft's mission and fly to another asteroid, 99942 Apophis. In 2029, Apophis will pass about 19,600 miles from Earth. It will be the closest approach of any known asteroid of its size. NASA said the extension would add $200 million to a mission that had already cost it an estimated $1.16 billion.

"This project is a pennies on the dollar repurposing of an existing spacecraft. It's the only American spacecraft that will be at Apophis for a once in a generation opportunity to study an asteroid that will just barely miss us," said Dreier. "That seems important to know."

At a time when nearly every facet of American life is being upturned, the potential cancellation of dozens of NASA missions might seem a distant concern, but the gutting of the agency's science budget would have a ripple effect on communities across the US.

"NASA is an engine for jobs in the country, and for every NASA job, there are many more that are created in the private workforce," said Bethany Ehlmann, Professor of Planetary Science at the California Institute of Technology. She also serves on the board of directors for The Planetary Society.

Professor Ehlmann's claim is supported by NASA's own data. In 2023, the agency employed 17,823 full-time civil servants nationwide. With NASA's private sector support factored in, that year the agency's missions were responsible for sustaining 304,803 jobs across all 50 states and the District of Columbia. Put another way, for every full-time equivalent job at a NASA facility, NASA supports at least 16 private sector jobs. "Space science has been broadly supported and impacts roughly three quarters of every congressional district in the country," said Dreier. "It's not just a red or blue state thing."

Following last week's Senate meeting, policymakers from both parties said they would push back on President Trump's NASA budget cuts. On Tuesday, the House Appropriations Committee's Subcommittee on Commerce, Justice, Science and Related Agencies passed a funding bill that would provide NASA with a total budget of $24.8 billion for 2026, or the same amount it was allocated this year. The week before, the corresponding subcommittee in the Senate passed its own NASA funding bill.

The two versions differ on one critical detail. The Senate legislation maintains the agency's science budget at $7.3 billion, while the House version seeks to reduce it by 18 percent to $6 billion. Separately, the House is calling for a 23 percent cut to the National Science Foundation's budget. NSF funds much of the nation's astronomy research.

"What I'm hearing from lawmakers is that they understand how important NASA is to industry. They understand how important NASA is to universities in terms of training, and providing grants that train the next generation of the space workforce," said Professor Ehlmann, who was on Capitol Hill last week. The House and Senate will need to come to an agreement for the bill to move forward.

Even with many lawmakers in favor of maintaining NASA's budget, a flat budget is still a funding cut when accounting for inflation. Moreover, NASA has already been negatively affected by the Trump administration's efforts to trim the federal workforce.

According to reporting Politico published on July 9, 2,694 NASA employees have agreed to leave the agency through either early retirement, a buyout or a deferred resignation. Of those individuals, 2,145 are workers in senior positions and 1,818 are staff serving in mission areas like human spaceflight and science. "Once the workforce is gone, they're gone. You lose a ton of institutional knowledge," said Dreier. The employees who have agreed to leave represent about 15 percent of NASA's 2023 workforce of 17,823. With the July 25 deadline for early retirement, voluntary separation and deferred resignations quickly approaching, that number is likely to grow. NASA's shifting priorities under the Trump administration have also created uncertainty among the agency's contractors.

According to former NASA employee and NASA Watch creator Keith Cowing, the workforce cuts are already affecting employees. "In the 40 years I've been involved with NASA in one way or another, I've never seen morale so bad," he said. "Is NASA bloated? Yeah, but the way you deal with bloat is to go in with a scalpel and you cut carefully. And yet you have people [like Elon Musk] standing on stage with chainsaws. That is not the way to run government, and it's certainly not the way to create the machinery needed to explore the universe."

Whatever happens next, Dreier worries that public support for NASA could erode. He points to a survey published by Pew Research. In 2023, the organization found that monitoring for asteroids that could hit Earth and tracking changes to the planet's climate were the two activities Americans wanted NASA to prioritize over other mandates. By contrast, sending human astronauts to the Moon and Mars were the least important priorities for the public.

NASA's next-generation moon rocket, the Space Launch System (SLS) rocket with the Orion crew capsule, is readied for launch on pad 39-B, for the unmanned Artemis 1 mission to the Moon, at Cape Canaveral, Florida, U.S. November 15, 2022. (REUTERS/Joe Skipper)

The House version of NASA's 2026 budget would boost the agency's exploration budget by 25 percent to $9.7 billion. In Trump's tax bill, Senator Ted Cruz (R-TX) included language that provided NASA with $4.1 billion for the fourth and fifth flights of the Space Launch System (SLS) rocket — the vehicle intended to carry the first NASA astronauts back to the Moon before private sector alternatives like SpaceX's Starship are ready to fly.

With both the Trump administration and House pushing Moon and Mars missions as priorities, Dreier says they're "ironically doubling down on the activities that the private sector is already doing — SpaceX says it's going to send humans to Mars — and abandoning the things that only NASA does. There's no private sector company doing space science."

In effect, a NASA budget that sacrifices scientific research in favor of Moon and Mars missions would be one that invests in the things the public says are least important to it.

"I worry that they're moving away from what the public expects their space agency to do, and that as a consequence, it will undermine public investment in NASA," he said. "NASA is usually tied for the number one or two most popular federal agency. People wear NASA t-shirts. No one wears a Department of the Interior t-shirt walking out of the GAP. It's a rare and precious thing to have, and they're risking it. It's not just the future of the agency that's at risk, but the future of the public's relationship with it."

When asked for comment on this story, Bethany Stevens, NASA's press secretary, pointed Engadget to a letter from Acting Administrator Janet Petro that NASA shared in a technical supplement published alongside the president's budget request.

"We must continue to be responsible stewards of taxpayer dollars. That means making strategic decisions — including scaling back or discontinuing ineffective efforts not aligned with our Moon and Mars exploration priorities" Petro wrote.

The final NASA budget for 2026 is still months away from being finalized. After Tuesday's vote, the two funding bills will move to the full Senate and House appropriations committees for a vote and further revisions. Only after that will every member of each chamber get a chance to vote on the matter. Congress has until September 30 to complete the appropriations process before 2025 funding runs out. President Trump could also decide to veto the bill if it doesn't align with his priorities.

Have a tip for Igor? You can reach him by email, on Bluesky or send a message to @Kodachrome.72 to chat confidentially on Signal.

This article originally appeared on Engadget at https://www.engadget.com/science/space/trumps-defunding-of-nasa-would-be-catastrophic-153053020.html?src=rss

©

© REUTERS / Reuters

NASA's next-generation moon rocket, the Space Launch System (SLS) rocket with the Orion crew capsule, lifts off from launch complex 39-B on the unmanned Artemis 1 mission to the moon, seen from Sebastian, Florida, U.S. November 16, 2022. REUTERS/Joe Rimkus Jr.
  •  

Adobe Firefly can now generate sound effects from your audio cues

Since rolling out the redesign of its Firefly app in April, Adobe has been releasing major updates for the generative AI hub at a near monthly clip. Today, the company is introducing a handful of new features to assist those who use Firefly's video capabilities.

To start, Adobe is making it easier to add sound effects to AI-generated clips. Right now, the majority of video models create footage without any accompanying audio. Adobe is addressing this with a nifty little feature that allows users to first describe the sound effect they want to generate and then record themselves making it. The second part isn't so Adobe's model can mimic the sound. Rather, it's so the system can get a better idea of the intensity and timing the user wants from the effect.
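Adobe hasn't described how its model interprets those recordings, but the idea that a rough vocal cue mostly conveys timing and intensity can be illustrated with a simple loudness envelope, as in the sketch below. This is a conceptual illustration, not Adobe's pipeline; the cue here is a synthetic signal standing in for a recorded voice.

```python
# Conceptual sketch: extract a timing/intensity envelope from a vocal cue.
# Not Adobe's pipeline; it only shows the kind of signal a generative
# audio model could condition on. Assumes a mono NumPy array as input.
import numpy as np

def intensity_envelope(cue: np.ndarray, sample_rate: int, frame_ms: int = 20) -> np.ndarray:
    """Return a per-frame RMS loudness curve for the recorded cue."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(cue) // frame_len
    frames = cue[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

# Stand-in for a recorded "zzzztttt": a one-second tone that swells and fades.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
cue = np.sin(2 * np.pi * 440 * t) * np.hanning(sr)

env = intensity_envelope(cue, sr)
onset_ms = int(np.argmax(env > 0.1 * env.max())) * 20
peak_ms = int(env.argmax()) * 20
print(f"Effect should start around {onset_ms} ms and peak around {peak_ms} ms")
```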

In the demo Adobe showed me, one of the company's employees used the feature to add the sound of a zipper being unzipped. They made a "zzzztttt" sound, which Adobe's model faithfully used to reproduce the effect at the intended volume. The translation was less convincing when the employee used the tool to add the sound of footsteps on concrete, though if you're using the feature for ideation as Adobe intended, that may not matter. When adding sound effects, there's a timeline editor along the bottom of the interface to make it easy to time the audio properly.

With Firefly's June update, users can upload images or videos to guide their video generation. (Image: Adobe)

The other new features Adobe is adding today are called Composition Reference, Keyframe Cropping and Video Presets. The first of those allows you to upload a video or image you captured to guide the generation process. In combination with Video Presets, you can define the style of the final output. Some of the options Adobe is offering at launch allow you to create clips with anime, black and white or vector art styles. Lastly, with Keyframe Cropping you can upload the first and final frame of a video and select an aspect ratio. Firefly will then generate a video that stays within your desired format.

In June, Adobe added support for additional third-party models, and this month it's doing the same. Most notable is the inclusion of Veo 3, which Google premiered at its I/O 2025 conference in May. At the moment, Veo 3 is one of the only AI models that can generate video with sound. Like with all the other partner models Adobe offers in Firefly, Google has agreed not to use data from Adobe users for training future models. Every image and video people create through Firefly is digitally signed with the model that was used to create it. That is one of the safeguards Adobe includes so that Firefly customers don't accidentally ship an asset that infringes on copyrighted material.

According to Zeke Koch, vice president of product management for Adobe Firefly, users can expect the fast pace of updates to continue. "We're relentlessly shipping stuff almost as quickly as we can," he said. Koch adds Adobe will continue to integrate more third-party models, as long as their providers agree to the company's data privacy terms.

This article originally appeared on Engadget at https://www.engadget.com/ai/adobe-firefly-can-now-generate-sound-effects-from-your-audio-cues-130008172.html?src=rss

©

© Adobe

With Adobe's June update, Firefly users can generate audio effects.
  •  

The next Made By Google event (better known as the Pixel launch) is set for August 20

Google will host its next Made by Google event on August 20, the company announced today. In a media invite, it promised the event would feature new Pixel phones, watches, buds "and more." It's hard to imagine what other product types might be covered by those last two words, but for those who watch the industry closely, this event is likely to see the launch of the Pixel 10 flagship phones, along with a Pixel Watch 4 and new Pixel Buds. 

It's easy to make that deduction, especially going by previous Made By Google events. At last year's hardware launch, Google announced the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, Pixel 9 Pro Fold, Pixel Watch 3 and Pixel Buds Pro 2. 

Between that and the company's invite, we can expect a refresh of nearly the entire Pixel line. As for what the "and more" bit could entail, recent rumors suggest Google is working on a proper response to Apple's MagSafe tech, dubbed Pixelsnap. Android manufacturers have been slow to adopt the Qi2 wireless charging standard, but with the upcoming Pixel 10 it appears the company is working on a host of magnetic Qi2 accessories, including a new charging stand. As always, be sure to visit Engadget on the day of the event as we'll have a liveblog of the entire proceedings.

Update, July 16 2025, 1:50PM ET: This story has been updated to include a list of devices we expect Google to unveil on August 20.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/the-next-made-by-google-event-better-known-as-the-pixel-launch-is-set-for-august-20-162832319.html?src=rss

©

© Photo by Sam Rutherford / Engadget

All the hardware announced during Google's annual Pixel hardware event is arranged on a white table and look quite shiny and new in black and pastel hues.
  •  

xAI starts offering Grok to US government agencies

Just days after apologizing for Grok's recent hard turn toward antisemitism, xAI has announced a suite of AI products for government use. Grok for Government brings together the company's latest commercial products, including Grok 4 and Deep Search, with special considerations given to the needs of federal, state and local agencies. 

To that end, xAI says it will design custom models for specific national security and research customers. It will also develop specialized AI applications for use in healthcare, fundamental science and national defense, as well as offer models that can safely be used in classified and restricted environments. 

Announcing Grok for Government - a suite of products that make our frontier models available to United States Government customers

We are especially excited about two new partnerships for our US Government partners

1) a new contract from the US Department of Defense
2) our…

— xAI (@xai) July 14, 2025

Despite President Trump threatening to cut Elon Musk's companies off from government subsidies over their recent public feud, xAI says it already has a contract with the US Department of Defense. The company's products are also available to purchase through the General Services Administration schedule, which means every federal government department, agency or office can potentially access its models. OpenAI, which Musk helped fund through donations in its early days as a research lab, launched ChatGPT Gov at the start of the year.

This article originally appeared on Engadget at https://www.engadget.com/ai/xai-starts-offering-grok-to-us-government-agencies-162952893.html?src=rss

©

© Igor Bonifacic for Engadget

A closeup of the Grok icon on iOS.
  •