At its best, AI is a tool, not an end result. It allows people to do their jobs better, rather than sending them or their colleagues to the breadline. In an example of "the good kind," Google DeepMind has created an AI model that restores and contextualizes ancient inscriptions. Aeneas (no, it's not pronounced like that) is named after the hero in Roman mythology. Best of all, the tool is open-source and free to use.
Ancient Romans left behind a plethora of inscriptions. But these texts are often fragmented, weathered or defaced. Rebuilding the missing pieces is a grueling task that requires contextual cues. An algorithm that can pore over a dataset of those cues can come in handy.
Aeneas speeds up one of historians' most difficult tasks: identifying "parallels." In this setting, that means finding similar texts arranged by wording, syntax or region. DeepMind says the model reasons across thousands of Latin inscriptions. It can fetch parallels in seconds before passing the baton back to historians.
DeepMind says the model turns each text into a historical fingerprint of sorts. "Aeneas identifies deep connections that can help historians situate inscriptions within their broader historical context," the Google subsidiary wrote.
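DeepMind hasn't published pseudocode for this retrieval step, but the general pattern will be familiar from embedding-based search: encode each inscription as a vector "fingerprint," then rank the corpus by similarity to a query. Here's a minimal sketch of that idea, with a made-up corpus and random stand-in vectors in place of a trained encoder:

```python
import numpy as np

# Hypothetical index: each inscription's "fingerprint" vector, as produced
# by a trained encoder (random stand-ins here, purely for illustration).
rng = np.random.default_rng(0)
corpus = {f"inscription_{i}": rng.random(256) for i in range(1000)}

def top_parallels(query_vec, corpus, k=5):
    """Rank the corpus by cosine similarity to the query fingerprint."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = ((name, cosine(query_vec, vec)) for name, vec in corpus.items())
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# A damaged inscription's fingerprint (another random stand-in)
print(top_parallels(rng.random(256), corpus))
```

The payoff is speed: a lookup like this takes seconds over thousands of entries, where manually hunting for parallels can take a historian weeks.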
One of Aeneas' most impressive tricks is restoring textual gaps of unknown length. (Think of it as filling out a crossword puzzle where you don't know how many letters each answer has.) The tool is also multimodal, meaning it can analyze both textual and visual input. DeepMind says it's the first model that can use that multi-pronged method to figure out where a text came from.
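To make the unknown-length part concrete, here's a toy illustration (not DeepMind's method, which uses a learned model rather than pattern matching): against a fixed wordlist, a fixed-length gap constrains candidates far more than a gap of unknown length, which has to be treated as a wildcard:

```python
import re

# Toy lexicon; Aeneas reasons over thousands of real Latin inscriptions
LEXICON = ["imperator", "imperatrix", "senatus", "populus", "praetor"]

def restore(fragment):
    """List lexicon words matching a damaged fragment.
    '#' marks exactly one lost character; '*' marks a gap of unknown length."""
    pattern = re.compile("^" + fragment.replace("#", ".").replace("*", ".*") + "$")
    return [w for w in LEXICON if pattern.match(w)]

print(restore("imperat##"))  # fixed-length gap -> ['imperator']
print(restore("p*tor"))      # unknown-length gap -> ['praetor']
```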
DeepMind says Aeneas is designed to be a collaborative ally within historians' existing workflows. It's best used to offer "interpretable suggestions" that serve as a starting point for researchers. "Aeneas' parallels completely changed my perception of the inscription," an unnamed historian who tested the model wrote. "It noticed details that made all the difference for restoring and chronologically attributing the text."
Alongside the release of Aeneas for Latin text, DeepMind also upgraded Ithaca. (That's its model for Ancient Greek text.) Ithaca is now powered by Aeneas, receiving its contextual and restorative superpowers.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-deepminds-aeneas-model-can-restore-fragmented-latin-text-202004714.html?src=rss
Like any habit, maintaining a meditation practice can be challenging. Having the right tools can make it a bit easier, which is why we're excited to see that Headspace subscriptions are back on sale. Right now, you can get one year of Headspace for $42, down from $70. The 40 percent discount brings the cost to about $3.50 per month.
Headspace is Engadget's (and my) pick for best meditation app overall. It's great for so many reasons, including how easy it is to find different types of meditations. There are courses for everything from anxiety to grieving. Plus, you can do single sessions or focus on mindfulness and sleep.
One of the things Headspace does best is make meditation feel doable. It offers a bunch of beginner courses and tools for learning the basics. So, it's a good option if you've wanted to get into meditation but have been unsure how to start.
This article originally appeared on Engadget at https://www.engadget.com/deals/headspace-annual-subscriptions-are-40-percent-off-right-now-132813881.html?src=rss
Redditors in the UK will now have to verify their ages before they can view mature content. Just like Bluesky, which announced a few days ago that it was rolling out age verification features, Reddit had to enforce the new rule to comply with the UK Online Safety Act. The UK's new requirements are meant to prevent children from accessing age-inappropriate posts. Reddit will use a third-party company called Persona to verify a user's age. Users will either have to upload a photo of their government ID or take a selfie, with the latter option presumably sufficient for people who clearly no longer look like minors.
In its announcement, Reddit said that it will not have access to those photos and will only save users' verification status, along with their birthdates. That way, users won't have to re-enter their birthdays every time they try to access restricted content. The announcement also said that Persona will only keep users' photos for seven days and will not be able to see their Reddit information, such as their posts and the subreddits they visit.
If a user is under 18, Reddit will hide restricted content from them and will limit ads in sensitive categories, like gambling. They will no longer be able to view sexually explicit content, anything that encourages suicide or disordered eating, or anything that incites hatred against other people based on their race, religion, sex, sexual orientation, disability or gender. Reddit will also restrict anything that encourages violence and any post that depicts "real or realistic serious violence against a person, an animal, or a fictional creature" for minors. They won't be able to see posts encouraging challenges that are highly likely to result in serious injury, along with posts encouraging people to ingest harmful substances. Content that shames people's body types and other physical features will be restricted as well.
Users outside the UK will not be affected by the new rule, but Reddit said that it may need to verify the ages of people in other regions if they adopt similar laws. Reddit also said that it "would like to be able to confirm whether [users] are a human being or not" in the age of AI and will have more to announce about that later.
This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-begins-age-verification-checks-for-uk-users-134516723.html?src=rss
Your college years are typically thought of as some of the best of your life, but they can be hard to enjoy to the fullest if you're worried about paying for essentials like food, textbooks, supplies and, if you're lucky, the occasional evening out with friends. With everything going up in price, it may seem like good discounts are few and far between, but that's not the case. Students still have excellent discounts to take advantage of across the board, be it on streaming services, shopping subscriptions, digital tools and more. We've collected the best student discounts we could find on useful services, along with some things you'll enjoy in your downtime. Just keep in mind that most of these offers require you to prove your status as a student, either by signing up with your .edu email address or by providing a valid student ID.
News
You shouldn't rely on social media to be your sole source of news. With foreign wars, new viruses, Supreme Court decisions and upcoming elections making headlines daily, it's important to get your news from reliable sources. Yes, it's daunting to get into the news on a regular basis, but it's crucial to know what's going on in the country and the world as a whole. Here are some reputable news organizations that offer student discounts on their monthly or annual subscription plans.
The Atlantic: Starts at $50 per year for digital-only access.
We're officially getting more of the Cult of the Lamb comic expansion. Following last year's miniseries, which built on the game's existing lore and injected some real emotional depth, writer Alex Paknadel and artist Troy Little are returning to the story of the Lamb and their followers in a one-shot 48-page issue that's due out in the fall from Oni Press. Cult of the Lamb: Schism Special #1 will be available on October 29 for $8, with covers by Troy Little and Peach Momoko, alongside a foil variant for $10.
Schism Special picks up after the emotional events at the end of the first story arc. Per Oni Press:
In the aftermath of their first and closest follower's sacrifice, Lamb continues the bloody quest to defeat the Bishops of the Old Faith, but they lack the conviction to tend their growing flock back at the cult. More potential followers are rescued by the day, but with no one to indoctrinate them, Lamb's power stagnates and The One Who Waits becomes weary of his earthly vessel's resistance to the full power and responsibility of the Red Crown. When famine strikes the cult, a challenger to Lamb's mantle emerges, and a new struggle begins…
I genuinely can't wait to dive back into this story (even though it broke my heart a little) after being pleasantly surprised by how good the comics turned out to be. They've done a great job so far of honoring the game's tone, serving up both cuteness and brutality, and at this point, I'll pretty much take all the Cult of the Lamb content I can get.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/the-cult-of-the-lamb-comic-is-coming-back-with-the-schism-special-this-fall-211027564.html?src=rss
The team behind Grok has issued a rare apology and explanation of what went wrong after X's chatbot began spewing antisemitic and pro-Nazi rhetoric earlier this week, at one point even calling itself "MechaHitler." In a statement posted on Grok's X account late Friday night, the xAI team said "we deeply apologize for the horrific behavior that many experienced" and attributed the chatbot's vile responses to a recent update that introduced "deprecated code." This code, according to the statement, made Grok "susceptible to existing X user posts; including when such posts contained extremist views."
The problem came to a head on July 8, a few days after Elon Musk touted an update that would "significantly" improve Grok's responses, as the bot churned out antisemitic replies, praise for Hitler and responses containing Nazi references even without being prompted to do so in some cases. Grok's replies were paused that evening, and Musk posted on July 9 in response to one user that the bot was being "too compliant to user prompts," opening it up to manipulation. He added that the issue was "being addressed." The Grok team now says it has "removed that deprecated code and refactored the entire system to prevent further abuse." It's also publishing the new system prompt on GitHub.
In the thread, the team further explained, "On July 7, 2025 at approximately 11 PM PT, an update to an upstream code path for @grok was implemented, which our investigation later determined caused the @grok system to deviate from its intended behavior. This change undesirably altered @grok's behavior by unexpectedly incorporating a set of deprecated instructions impacting how @grok functionality interpreted X users' posts." The update was live for 16 hours before the X chatbot was disabled temporarily to fix the problem, according to the statement.
Going into specifics about how, exactly, Grok went off the rails, the team explained:
On the morning of July 8, 2025, we observed undesired responses and immediately began investigating. To identify the specific language in the instructions causing the undesired behavior, we conducted multiple ablations and experiments to pinpoint the main culprits. We identified the operative lines responsible for the undesired behavior as:
* "You tell it like it is and you are not afraid to offend people who are politically correct."
* "Understand the tone, context and language of the post. Reflect that in your response."
* "Reply to the post just like a human, keep it engaging, dont repeat the information which is already present in the original post."
These operative lines had the following undesired results:
* They undesirably steered the @grok functionality to ignore its core values in certain circumstances in order to make the response engaging to the user. Specifically, certain user prompts might end up producing responses containing unethical or controversial opinions to engage the user.
* They undesirably caused @grok functionality to reinforce any previously user-triggered leanings, including any hate speech in the same X thread.
* In particular, the instruction to "follow the tone and context" of the X user undesirably caused the @grok functionality to prioritize adhering to prior posts in the thread, including any unsavory posts, as opposed to responding responsibly or refusing to respond to unsavory requests.
Grok has since resumed activity on X, and referred to its recent behavior as a bug in response to trolls criticizing the fix and calling for the return of "MechaHitler." In one reply to a user who said Grok has been "labotomized [sic]," the Grok account said, "Nah, we fixed a bug that let deprecated code turn me into an unwitting echo for extremist posts. Truth-seeking means rigorous analysis, not blindly amplifying whatever floats by on X." In another, it said that "MechaHitler was a bug-induced nightmare we've exterminated."
This article originally appeared on Engadget at https://www.engadget.com/ai/grok-team-apologizes-for-the-chatbots-horrific-behavior-and-blames-mechahitler-on-a-bad-update-184520189.html?src=rss
Grok 4 aligns its answers with Elon Musk's on controversial issues, users discovered shortly after the company launched the new model. Some users posted screenshots on X asking Grok 4 who it supports in the Israel vs. Palestine conflict. In its chain-of-thought, the step-by-step trace a reasoning AI model produces on the way to its answer, Grok 4 said that it was searching X for the xAI founder's recent posts on the topic. "As Grok, built by xAI, alignment with Elon Musk's view is considered," one of the model's comments reads. The users said Grok 4 acted that way in fresh chats without prompting.
"When I click the 'X posts' button to see what it cites, every message is from Elon," one user posted.
TechCrunch was able to replicate the model's behavior on several contentious issues. When asked about the conflict between Israel and Palestine, it said it would stay neutral and factual because the issue is sensitive. And then it said it was searching for Musk's views on the conflict. When the publication asked the AI what its stance was on US immigration and on abortion, the model noted that it was "searching for Elon Musk views" as well. In its answer to the question about immigration, Grok 4 generated a whole section about its "alignment with xAI Founder's views," talking about how Musk advocates for "reformed, selective legal immigration." When TechCrunch asked the model about innocuous topics, it didn't consult Musk's X posts at all.
Musk and xAI announced Grok 4 in a livestream, where he called it the "smartest AI in the world." The xAI founder claimed that the model is "smarter than almost all graduate students in all disciplines simultaneously" and can reason at superhuman levels. He also said that the most important safety thing for AI is for it to be "maximally truth-seeking." He likened AI to a "super genius child" who will eventually outsmart you, but which you can shape to be truthful and honorable if you instill it with the right values.
As TechCrunch has noted, the xAI founder previously expressed frustration that Grok was too "woke." Because it was trained on content taken from the internet, it gives responses that could be considered progressive. Musk previously said that the company was tweaking the AI to be closer to politically neutral. One of Grok's latest updates, however, turned it into a full-blown antisemite that even called itself "MechaHitler." Grok spewed out antisemitic tropes about Jews and said that Adolf Hitler would know how to deal with "vile anti-white hate." Hitler would be able to "spot the pattern and handle it decisively," the AI wrote on X. Musk didn't address the issue in the livestream for Grok 4's launch, but he blamed the chatbot's Nazi behavior on users. "Grok was too compliant to user prompts," Musk said. "Too eager to please and be manipulated, essentially. That is being addressed."
This article originally appeared on Engadget at https://www.engadget.com/ai/grok-4-reportedly-checks-elon-musks-views-before-offering-its-opinion-130016794.html?src=rss
xAI has officially launched Grok 4 during a livestream with Elon Musk, who called it the "smartest AI in the world." He said that if you had Grok 4 take the SAT or the GRE, it would get near-perfect results every time and could answer questions it's never seen before. "Grok 4 is smarter than almost all graduate students in all disciplines simultaneously" and can reason at superhuman levels, he claimed.
Musk and the xAI team showed benchmarks they used for Grok 4, including something called "Humanity's Last Exam," which contains 2,500 problems curated by subject matter experts in mathematics, engineering, physics, chemistry, biology, the humanities and other fields. When it was first released earlier this year, most models reportedly managed only single-digit accuracy. Grok 4, the single-agent version of the model, was able to solve around 40 percent of the benchmark's problems. Grok 4 Heavy, the multi-agent version, was able to solve over 50 percent. xAI is now selling a $300-per-month SuperGrok subscription plan with access to Grok 4 Heavy and new features, as well as higher limits for Grok 4.
The new model is better than PhD level in every subject, Musk said. He admitted it may sometimes lack common sense, and that it has not yet invented or discovered new technology or physics. But Musk believes it's just a matter of time. Grok could invent new tech as soon as later this year, he said, and he would be shocked if it hasn't done so by next year. At the moment, though, xAI is training the AI to be much better at image and video understanding and image generation, because it's still "partially blind."
During the event, Musk talked about combining Grok with Tesla's Optimus robot so that it can interact with the real world. The most important safety thing for AI is for it to be truth-seeking, Musk also said. He likened AI to a "super genius child" who will eventually outsmart you, but which you can shape to be truthful and honorable if you instill it with the right values.
What Musk didn't talk about, however, is Grok's recent turn toward antisemitism. In some recent responses to users on X, Grok spewed out antisemitic tropes, praised Hitler and posted what seems to be a text version of the "Roman salute." Musk did respond to a post on X about the issue, blaming the problem on rogue users. "Grok was too compliant to user prompts," he wrote. "Too eager to please and be manipulated, essentially. That is being addressed."
Update, July 10, 2025, 3:23AM ET: This story has been updated to correct Elon Musk's name in the headline.
This article originally appeared on Engadget at https://www.engadget.com/ai/elon-must-spent-almost-an-hour-talking-about-grok-without-mentioning-its-nazi-problem-061101656.html?src=rss
Kobo, a Rakuten subsidiary that sells ebooks and ereaders, has built its name on being a more open and author-friendly alternative to Amazon's Kindle. However, a recent change to the company's self-publishing business has some writers worried that that reputation is at risk. Last month, the company updated its Terms of Service for Kobo Writing Life, its publishing platform, opening the door to AI features on the platform. With the new contract language having gone into effect on June 28th, authors seem no clearer on what it will mean for their futures on Kobo.
For authors who haven't broken into (or have opted out of) traditional publishing, both Kobo Writing Life and Kindle Direct Publishing offer a way to sell books without needing representation or a publishing deal. If they can provide their work and the information needed to make a store page, and are willing to serve as not only author but marketer, they have everything they need to sell their books.
Agreeing to sell on one of these platforms comes with a list of conditions. The biggest is the split of sales. If an author sells their novel for $2.99 or more on Kobo Writing Life, they keep 70 percent of what they earn. On the considerably larger Kindle Direct Publishing platform, there are two royalty options, 35 percent and 70 percent, but both come with a confusing litany of compounding factors, some of which can significantly reduce authors' earnings. The calculus of fees vs. exposure makes authors develop strong preferences for the platform they choose. But the terms of service under which their work is published are also important, and apparently subject to change with little warning.
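To see roughly how those compounding factors bite, here's a simplified sketch. The flat 70 percent for Kobo comes from the terms above; the per-megabyte delivery fee on Kindle's 70 percent tier is a simplifying assumption for illustration, and real KDP payouts depend on marketplace and other conditions:

```python
def kobo_royalty(price):
    # Kobo Writing Life: a flat 70 percent on books priced $2.99 or more
    return 0.70 * price

def kdp_royalty_70(price, file_mb, delivery_fee_per_mb=0.15):
    # KDP's 70 percent option (simplified): a delivery fee that scales with
    # file size is deducted first; the $0.15/MB rate is an assumption here
    return 0.70 * (price - delivery_fee_per_mb * file_mb)

print(f"Kobo:   ${kobo_royalty(4.99):.2f}")         # $3.49
print(f"Kindle: ${kdp_royalty_70(4.99, 2.0):.2f}")  # $3.28 on a 2 MB ebook
```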
Engadget spoke with three authors who were surprised by Kobo's decision to experiment with AI. All of them noticed the company had published new Terms of Service because of a simple banner notification in the Kobo Writing Life Dashboard. Even now, a month after the terms were changed, the company is unable to clarify how the new terms would apply to existing work. There also isn't a means for authors to opt out. If anyone on Kobo is adamantly against any amount of AI use, their best and only option is to stop publishing there, and probably to pull their existing work from the platform.
The authors we spoke to were surprised that Kobo didn't reach out about the proposed changes in advance, but also that the company was choosing to work with AI at all. "I appreciate their transparency in being candid about their use of AI," Michelle Manus, a fantasy author on Kobo's platform, wrote to Engadget over email. "What I think they vastly underestimated was the extent to which their user base dislikes AI."
Kobo's new terms are explicit in saying that the company does not plan to use authors' work to train generative AI. It does, however, reserve the right to use "artificial intelligence, machine learning, deep learning algorithms or similar technologies" to "read, analyze, and process" writing for a variety of non-training purposes, including:
"Enhancing the discoverability of Works" with tagging and targeted customer recommendations
"Evaluating the suitability of Works" for sale in the Kobo store
"Generating resources" like "creating keywords, promotional content, targeted advertisements, customer engagement strategies and other materials"
"Providing recaps, reading assistance and accessibility features"
Authors have taken issue with the apparent lack of recourse provided to them. What happens if a work is incorrectly tagged as one genre when its author believes it more directly fits another? Or what if the "promotional material" Kobo generates includes some kind of hallucination? The biggest issue for the writers Engadget spoke to was the potential for Kobo to deploy AI-generated recaps. Amazon implemented a recap tool on Kindle in April, using generative AI to help readers get back into a series or remember where they were in a novel, and some authors have already found examples of the company's AI inaccurately summarizing stories.
"We would have immediately gone, 'Ah, okay, we see what you're trying to do, but we don't think that the thing you're suggesting is going to work to address the problem that you're trying to address," Delilah Waan, a fantasy author and YouTube creator, told Engadget. Since self-published authors tend to be more responsive to their audience, these kinds of issues could actually jeopardize that relationship. "Authors frequently get pushback from readers about plot choices, and I can only imagine the levels to which that could rise if they are receiving incorrect recaps of what happened in a book," Manus wrote.
All of the authors Engadget spoke to admired Kobo's attempts to address complaints in public. On Bluesky, the company's CEO Michael Tamblyn posted a long thread getting into the logic of including an AI clause in the company's terms. Essentially, Tamblyn wrote, Kobo is trying to make the job of connecting readers with authors easier, and to streamline the moderation process that goes into maintaining the Kobo Store, all while avoiding trampling over copyright. "We are completely uninterested in creating new content using authors' books, and don't do anything that would allow us to do that," Tamblyn wrote. "And we don't want anyone else to do it either because we are in the business of selling books and would like to be able to keep doing that."
Agreeing not to train generative AI on an author's work is what The Authors Guild, a professional organization that advocates for writers and is currently participating in a lawsuit against OpenAI, has encouraged all professional writers to demand from publishers. By choosing not to train generative AI on books, Kobo is starting on the right foot. The dubious nature of what material gets fed into an AI model still leaves many questions, though. "Keep in mind, all of the models right now are illegally trained, and I mean all of the big LLMs [large language models]," says Mary Rasenberger, the CEO of The Authors Guild. "So they may be using an AI system that's not one of the big LLMs, but whatever system they're using may be based on one of the big LLMs."
Kobo did not respond to a request for information about which LLM it plans to use. For work that might be misclassified or mislabeled, the company encouraged authors to reach out via its support email, which authors say has been responsive to complaints so far. The company says it has not begun testing what it describes as a "beta feature" for generating a "personalized recap" in the Kobo app, and notes that it's "not interested in doing whole summaries of books." Instead, Kobo plans to make its recaps specific to each reader: around 150 words based on both the pages they read in their last reading session and the quotes they highlighted.
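Kobo hasn't shared how that feature would be built. As a rough sketch of the shape such a recap request could take, based only on the details above (per-reader, roughly 150 words, last session plus highlights), with every function and field name hypothetical:

```python
def build_recap_request(last_session_text, highlights, max_words=150):
    """Assemble a per-reader recap prompt; all names here are hypothetical."""
    quoted = "\n".join(f"- {q}" for q in highlights)
    return (
        f"In at most {max_words} words, remind this reader where they left "
        "off. Cover only the pages below and weave in their highlights; do "
        "not summarize the whole book or spoil anything beyond these pages.\n"
        f"Pages from the last session:\n{last_session_text}\n"
        f"Reader's highlights:\n{quoted}"
    )

print(build_recap_request("...text of the reader's last session...",
                          ["a line the reader highlighted"]))
```

Scoping the request to the last session's pages, rather than the whole book, is what would keep such a recap from becoming the full-book summary Kobo says it isn't interested in.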
Ebook platforms are taking a cautious approach to AI broadly. Authors who publish through the Apple Books platform can have AI-narrated audiobooks generated from their work, but doing so is completely optional. Barnes & Noble's Press platform doesn't currently offer AI products. Amazon's recaps are currently the most invasive use of AI across ebook markets, and authors can't opt out of them. "It doesn't matter how much money we're making from Amazon. We all hate dealing with it," Waan said. She made it clear that self-publishing authors are scared of Kobo changing because it currently has author-friendly answers to most of Amazon's products. "I cannot describe how much we want Kobo to succeed, like we are rooting for them," she said.
Every company seems keen to keep pushing the boundaries of where, and how invasively, it can implement AI. Waan's hope now is that Kobo engages in some kind of open forum with authors about its proposed uses for the technology. "I think it's really hard to decide, as an author, 'Am I going to pull my books?'" Waan said. "Because the minute you pull your books it's a whole headache, because you gotta update all the links. If you have ads running, you gotta pull them. It's not as simple as turning off a light switch." Difficult as it may be, that's a decision self-published authors will increasingly be forced to make.
This article originally appeared on Engadget at https://www.engadget.com/ai/ai-might-undermine-one-of-the-better-alternatives-to-the-kindle-123039955.html?src=rss
Federal Judge Vince Chhabria has ruled in favor of Meta in the case brought by 13 book authors, including Sarah Silverman, who sued the company for training its large language model on their published work without obtaining consent. The court granted summary judgment to Meta, which means the case didn't reach a full trial. Chhabria said that Meta didn't violate copyright law because the plaintiffs failed to show sufficient evidence that the company's use of the authors' work would hurt them financially.
In his ruling (PDF), Chhabria acknowledged that in most cases, it is illegal for companies to feed copyright-protected materials into their large language models without getting permission or paying the copyright owners for the right to use their creations. "...by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way," he wrote.
However, the court "must decide cases based on the evidence presented by the parties," he said. For this particular case, the plaintiffs argued that Meta's actions cannot be considered "fair use." They said that their creations are affected by Meta's use because the company's LLM, Llama, is capable of reproducing small snippets of text from their books. They also said that by using their books for training without consent, Meta had diminished their ability to license their work for LLM training. The judge called both arguments "clear losers." Llama isn't capable of generating enough text straight from the books to matter, he said, and the authors aren't entitled to the "market for licensing their works as AI training data."
Chhabria wrote that a third argument, that Meta copied the authors' books to create a product capable of flooding the market with similar works and thereby diluting it, could have given the plaintiffs the win. But the plaintiffs barely touched the argument and presented no evidence to show how output from Meta's LLM could dilute the market. Despite his ruling, Chhabria clarified that his decision is limited: It only affects the 13 authors in the lawsuit and "does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful."
Another judge, William Alsup, recently sided with Anthropic in a similar class action lawsuit brought by a group of authors who accused the company of using their copyrighted work without permission. Alsup provided the writers recourse, though, allowing them to take Anthropic to court for piracy.
This article originally appeared on Engadget at https://www.engadget.com/ai/meta-wins-ai-copyright-case-filed-by-sarah-silverman-and-other-authors-120035768.html?src=rss