Reddit is making it easier to gauge the impact your comments are having. The company is introducing detailed analytics for comments that measure views and other engagement metrics. Reddit shared the change as part of a larger batch of updates around how it handles comments on its platform.
Comment insights will provide details around upvotes (including the ratio of upvotes to downvotes), replies, views, shares and awards. Additionally, Redditors will now be able to share comments as a standalone post on Reddit.
The platform is also adding a drafts feature for comments that will allow people to save up to 20 drafts for 14 days. Drafts will automatically save, according to Reddit, so that you can revisit your thoughts if you navigate away from the page or leave the app mid-comment. In an update to moderators, Reddit said that drafts is "still in its early iteration" and that it may tweak how the feature works in future versions.
The changes are the latest way Reddit has added more flexibility around comments in recent weeks. Earlier this month, the company said it would allow users to hide their commenting history from their profiles.
This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-adds-analytics-and-drafts-for-comments-060550128.html?src=rss
Scams using AI deepfakes of celebrities have become an increasingly prominent issue for Meta over the last couple of years. Now, the Oversight Board has weighed in and has seemingly confirmed what other critics have said: Meta isn't doing enough to enforce its own rules, and makes it far too easy for scammers to get away with these schemes.
"Meta is likely allowing significant amounts of scam content on its platforms to avoid potentially overenforcing a small subset of genuine celebrity endorsements," the board wrote in its latest decision. "At-scale reviewers are not empowered to enforce this prohibition on content that establishes a fake persona or pretends to be a famous person in order to scam or defraud."
That conclusion came as the result of a case involving an ad for an online casino-style game called Plinko that used an AI-manipulated video of Ronaldo Nazário, a retired Brazilian soccer player. The ad, which according to the board showed obvious signs of being fake, was not removed by Meta even after it was reported as a scam more than 50 times. Meta later removed the ad, but didn't take down the underlying Facebook post behind it until the Oversight Board agreed to review the case. The ad was viewed more than 600,000 times.
The board says that the case highlights fundamental flaws in how Meta approaches content moderation for reported scams involving celebrities and public figures. The board says that Meta told its members that "it enforces the policy only on escalation to ensure the person depicted in the content did not actually endorse the product" and that individual reviewers' "interpretation of what constitutes a ‘fake persona’ could vary across regions and introduce inconsistencies in enforcement.” The result, according to the Oversight Board, is that a "significant" amount of scam content is likely slipping through the cracks.
In its sole recommendation to Meta, the board urged the company to update its internal guidelines, empower content reviewers to identify such scams and train them on "indicators" of AI-manipulated content. In a statement, a spokesperson for Meta said that "many of the Board's claims are simply inaccurate" and pointed to a test it began last year that uses facial recognition technology to fight "celeb-bait" scams.
“Scams have grown in scale and complexity in recent years, driven by ruthless cross-border criminal networks," the spokesperson said. "As this activity has become more persistent and sophisticated, so have our efforts to combat it. We’re testing the use of facial recognition technology, enforcing aggressively against scams, and empowering people to protect themselves through many different on platform safety tools and warnings. While we appreciate the Oversight Board’s views in this case, many of the Board's claims are simply inaccurate and we will respond to the full recommendation in 60 days in accordance with the bylaws.”
Scams using AI deepfakes of celebrities have become a major problem for Meta as AI tech gets cheaper and more easily accessible. Earlier this year, I reported that dozens of pages were running ads featuring deepfakes of Elon Musk and Fox News personalities promoting supplements that claimed to cure diabetes. Some of these pages repeatedly ran hundreds of versions of these ads with seemingly few repercussions. Meta disabled some of the pages after my reporting, but similar scam ads persist on Facebook to this day. Actress Jamie Lee Curtis also recently publicly slammed Mark Zuckerberg for not removing a deepfaked Facebook ad that featured her (Meta removed the ad after her public posts).
The Oversight Board similarly highlighted the scale of the problem in this case, noting that it found thousands of video ads promoting the Plinko app in Meta's Ad Library. It said that several of these featured AI deepfakes, including ads featuring Portuguese soccer star Cristiano Ronaldo and Meta's own CEO Mark Zuckerberg.
The Oversight Board isn't the only group that's raised the alarm about scams on Meta's platforms. The Wall Street Journal recently reported that Meta "accounted for nearly half of all reported scams on Zelle for JPMorgan Chase between the summers of 2023 and 2024" and that "British and Australian regulators have found similar levels of fraud originating on Meta’s platforms." The paper noted that Meta is "reluctant" to add friction to its ad-buying process and that the company "balks" at banning advertisers, even those with a history of conducting scams.
This article originally appeared on Engadget at https://www.engadget.com/social-media/the-oversight-board-says-meta-isnt-doing-enough-to-fight-celeb-deepfake-scams-194636203.html?src=rss
Whatever your opinion of X, you probably don't think of it as a platform known for fostering agreement. The company is apparently trying to change that, though, and is in the early stages of an experiment that aims to boost posts that are widely agreeable to the site's users.
With a new test, described by X as an "experimental pilot," the app will begin asking a small subset of users what they think of a particular post in their timeline. A screenshot shared by X shows that people can respond with a range of positive or negative opinions, like "it makes a meaningful point," "it's funny" or "it doesn't interest me." X will then use those responses to help it "develop an open source algorithm that could effectively identify posts liked by people from different perspectives."
The concept is somewhat similar to Community Notes, which already attempts to take differing perspectives into account when ranking fact checks. The new program, though, isn't about surfacing fact-checked content but about boosting posts that are likely to be, well, liked.
X's post about the test suggests it has lofty goals. "This experimental new feature seeks to uncover ideas, insights, and opinions that bridge perspectives," the company wrote. "It can bring awareness to what resonates broadly. It could motivate people to share those ideas in the first place."
Whether an open source algorithm based on data about users' likes can actually accomplish that, though, is unclear. A report published today by Pew Research shows that there is still a significant partisan divide in terms of how X is perceived and experienced by users. Overcoming that could be more difficult than boosting a few extra posts.
This article originally appeared on Engadget at https://www.engadget.com/social-media/x-tests-centrism-170939276.html?src=rss
Reddit has filed a lawsuit against Anthropic, alleging that the AI company behind the Claude chatbot has been using its data for years without permission. The lawsuit comes as Reddit has increasingly taken a hardline stance against scrapers and companies that use its data to train AI models.
In its filing, Reddit alleges that Anthropic was training its Claude chatbot on Reddit data as early as December 2021. The lawsuit also includes a screenshot in which Claude seems to acknowledge it was trained on Reddit data. In a statement to Engadget, a Reddit spokesperson said the lawsuit was the company's "final option to force Anthropic to stop its unlawful practices" after repeated warnings.
"We believe in the Open Internet—that does not give Anthropic the right to scrape Reddit content unlawfully, exploit it for billions of dollars in profit, and disregard the rights and privacy of our users," the spokesperson said. "In clear violation of Reddit’s terms and despite repeated requests to stop, Anthropic has been caught accessing or attempting to access Reddit content via automated bots at least 100,000 times. This isn’t a misunderstanding, it’s a sustained effort to extract value from Reddit while ignoring legal and ethical boundaries."
Reddit's vast archive of online discussions has become a particularly valuable commodity for the company as generative AI companies race to train new models. The company has struck lucrative licensing deals with companies like Google and OpenAI for access to its data. Reddit CEO Steve Huffman has previously called out Anthropic (along with other AI firms) for scraping Reddit. Last year, the company took steps to limit automated scraping and warned AI companies that they would need to pay up.
In its lawsuit, Reddit says that "Anthropic refused to engage" in discussions about licensing. "Unlike its competitors, Anthropic has refused to agree to respect Reddit users’ basic privacy rights, including removing deleted posts from its systems," it says. "This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer’s consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets."
In a statement, a spokesperson for Anthropic said, “we disagree with Reddit's claims and will defend ourselves vigorously."
Update, June 4, 2025 12:03 PM ET: This post was updated to include a statement from Anthropic.
This article originally appeared on Engadget at https://www.engadget.com/ai/reddit-is-suing-anthropic-for-allegedly-scraping-its-data-without-permission-185833267.html?src=rss
Bluesky is ramping up its verification program, even though it's still not exactly clear how it plans to determine which accounts are "authentic and notable" enough for a blue checkmark. One month after the company said it would start giving checkmarks to select accounts, the company is now allowing people to apply for verification.
Currently, the application consists of a multi-page Google Form that asks users to share details about their account and why they want to be verified. However, it's not exactly clear what criteria Bluesky will be taking into account or how it will be reviewing what will almost certainly be a flood of applications.
The company notes that it will only verify accounts that are "active and secure, authentic, and notable." Bluesky also recommends some obvious steps, like having a complete bio and using two-factor authentication. The linked form also asks users about what "category" they may fall into, such as an elected official, brand, athlete, journalist, academic or "other."
But it sounds like Bluesky is very much still figuring out verification as it goes. "Our criteria for verification is evolving based on user feedback," the form states. "We will continue to expand the scope of accounts that are eligible for verification over time. This is an initial version of the form that will change as we finalize the requirements for verification." It also notes that "meeting the basic criteria does not guarantee verification."
That could complicate things for Bluesky, which resisted the idea of having an in-house verification system until recently, despite repeated issues with impersonation. The service has more than 36 million sign-ups, and if even a small percentage of them request a badge, it could quickly overwhelm the company's small team.
Notably, the platform is also expanding its "trusted verifiers," which are third-party entities that can verify users (who get a slightly different-shaped checkmark) and vouch for their legitimacy. Organizations that want to verify on behalf of others can also sign up via the same form.
This article originally appeared on Engadget at https://www.engadget.com/social-media/you-can-now-apply-for-verification-on-bluesky-222802057.html?src=rss
WhatsApp is expanding its Discord-like voice chat feature so that group chats of any size can talk to each other in real time. Unlike group calling, which has existed on the app for years, real-time "audio hangouts" are more of a drop-in feature that doesn't ring every member of the chat.
Voice chats also offer a bit more flexibility than a traditional call because the interface doesn't take over your whole screen. That means you can still follow along in the chat for new messages or keep an eye on any incoming notifications.
Meta first introduced the feature in 2023, but for some reason limited it to larger groups of 32 to 256 participants, which is likely a lot bigger than the average group thread on the app. Now, though, WhatsApp users can start an audio hangout in both smaller group chats and even larger ones. WhatsApp supports groups of up to 1,024 participants, which sounds extremely chaotic even for texting, much less audio.
This article originally appeared on Engadget at https://www.engadget.com/social-media/whatsapp-audio-hangouts-are-now-open-to-group-chats-of-any-size-194504841.html?src=rss
The FTC just rested its case following weeks of testimony in a landmark antitrust case against Meta. But before Meta can begin its defense, the company's lawyers have opted for another move: asking the judge to throw out the case entirely.
The company filed a motion on Thursday asking US District Judge James Boasberg to toss out the FTC's case, arguing that the regulator has not proved that Meta acted anticompetitively. "Meta has made two promising mobile apps with uncertain prospects: two of the most successful apps in the world, enjoyed by approximately half of the planet's population (including hundreds of millions of U.S. consumers) on demand, in unlimited quantities, all for free," the filing says. "The FTC has not carried its burden to prove that Meta 'is currently violating the antitrust laws.'"
The company's reasoning is similar to past arguments it's made about the FTC's case. Meta has said that Instagram and WhatsApp were able to grow to one-billion-user services because of the company's investments. The company also takes issue with the FTC's claim that there is a lack of competition for "personal social networking services." (The FTC has argued that Meta's only competitors for social networking are Snapchat and MeWe, a small privacy-focused social app that runs on decentralized protocols.)
So far, the month-long trial has seen a number of prominent current and former Meta executives take the stand, including CEO Mark Zuckerberg, former COO Sheryl Sandberg and Instagram cofounder Kevin Systrom. Their testimony has revealed new details about the inner workings of the social media company and its tactics to stay ahead of potential competitors.
This article originally appeared on Engadget at https://www.engadget.com/big-tech/meta-is-trying-to-get-its-antitrust-case-thrown-out-in-the-middle-of-the-trial-204656979.html?src=rss
TikTok recently began experimenting with an in-app meditation feature that encouraged teens to "wind down" after 10PM. Now, the company is making the feature official for all users and turning it on by default for all teens under the age of 18.
With the change, teens will hit a full-screen "guided meditation exercise" when attempting to scroll after 10PM. The prompt is apparently something you can opt to ignore, but teens who do will encounter a second "harder to dismiss" prompt. TikTok's adult users will also be able to access the in-app meditations via the app's screen time controls (the feature will not be on by default for adults).
The company notes that its initial tests of "Sleep Hours" were successful, with 98 percent of teens opting to keep the late-night meditation settings on. Previous attempts by TikTok to limit screen time have a somewhat different track record. Documents that surfaced as part of a lawsuit against the company showed that teens were spending about 107 minutes a day in the app even when screen time was set to a 60-minute limit.
Since then, TikTok has beefed up some of its safety features, including its parental controls, amid increasing scrutiny of the company. TikTok's fate in the US is still, officially, in limbo as President Donald Trump signed off on another extension of a deadline to ban the app last month. Terms of a final deal that will allow it to remain in the country permanently have yet to be announced, though there are a number of interested buyers.
This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktok-will-try-to-force-teens-to-meditate-after-10pm-231118942.html?src=rss
Last month, Meta hosted LlamaCon, its first ever generative AI conference. But while the event delivered some notable improvements for developers, it also felt a bit underwhelming considering how important AI is to the company. Now, we know a bit more about why, thanks to a new report in The Wall Street Journal.
According to the report, Meta had originally intended to release its "Behemoth" Llama 4 model at the April developer event, but later delayed its release to June. Now, it's apparently been pushed back again, potentially until "fall or later." Meta engineers are reportedly "struggling to significantly improve the capabilities" of the model that Mark Zuckerberg has called “the highest performing base model in the world.”
Meta has already released two smaller Llama 4 models, Scout and Maverick, and has also teased a fourth lightweight model that's apparently nicknamed "Little Llama." Meanwhile, the "Behemoth" model will have 288 billion active parameters and "outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on several STEM benchmarks," the company said last month.
Meta has never given a firm timeline for when to expect the model. The company said last month that it was "still training." And while Behemoth got a few nods during the LlamaCon keynote, there were no updates on when it might actually be ready. That's probably because it could still be several months away. Inside Meta, there are apparently questions "about whether improvements over prior versions are significant enough to justify public release."
Meta didn't immediately respond to a request for comment. As the report notes, it wouldn't be the first company to run into snags as it races to release new models and outpace competitors. But the delay is still notable given Meta's lofty ambitions when it comes to AI. Zuckerberg has made AI a top priority, with Meta planning to spend as much as $72 billion on its AI infrastructure this year.
This article originally appeared on Engadget at https://www.engadget.com/ai/metas-behemoth-llama-4-model-might-still-be-months-away-221240585.html?src=rss
X has once again been accepting payments from people associated with terrorist groups and other entities subject to US sanctions, according to a new report from the Tech Transparency Project (TTP). According to the report, X has not only accepted payments in exchange for its premium service, but in some cases has provided an "ID verified" badge.
The report once again questions whether X is complying with US sanctions that restrict companies' ability to do business with individuals and entities that have been deemed a security threat. Last year, the TTP published a similar report that identified more than two dozen verified accounts that were affiliated with sanctioned groups, including leaders of Hezbollah and accounts associated with Houthis in Yemen. Many of those checkmarks were subsequently revoked, with X promising to "maintain a safe, secure and compliant platform."
But some of those accounts simply "resubscribed" to X's premium service or created fresh accounts, according to the report, which is based on research between November 2024 and April 2025. "TTP’s new investigation found an array of blue checkmark accounts for U.S.-sanctioned individuals and organizations, including several that appeared to simply re-subscribe to premium service or create new accounts after their old ones were restricted or removed by X," the report says. "Moreover, some of the accounts were 'ID verified,' meaning X conducted an additional review to confirm their identity."
The report once again highlights verified accounts associated with members of Hezbollah, including one of its founders, as well as Houthi officials who "are making heavy use of X for messaging and propaganda." The son of Libyan dictator Muammar Gadhafi, whose account was previously suspended, also currently has a blue check, as does Raghad Saddam Hussein al-Tikriti, one of Saddam Hussein's daughters. Both have been under sanctions for more than a decade.
X didn't respond to a request for comment on the report. In response to last year's report, the company said it would "take action if necessary." However, it's unclear if the company changed any of its practices regarding who can pay for premium subscriptions.
“If a small team can use X’s public facing search tools to identify these accounts, it’s unclear why a multi-billion-dollar company cannot do the same,” Michelle Kuppersmith, the executive director of Campaign for Accountability, the watchdog group that runs TTP, said in a statement. “It’s one thing to allow terrorists to have a voice on the platform; it’s another entirely to allow them to pay for a more effective megaphone.”
This article originally appeared on Engadget at https://www.engadget.com/social-media/x-is-once-again-selling-checkmarks-to-us-sanctioned-groups-report-says-194352896.html?src=rss
If you're active on Threads, you've probably noticed that posts with links don't do very well with the app's recommendation algorithm. This is especially noticeable if you're a publisher, creator or, yes, a journalist who depends on social media to share your work.
Threads' ambivalence to links isn't an accident. Instagram and Threads boss Adam Mosseri has confirmed that "we don’t place much value" on links, though the company doesn't intentionally downrank them. That may be starting to change, though. As Meta has made winning over creators a bigger priority for Threads, the company is now taking steps to make links a more prominent part of the service.
To start, the app will now allow users to add up to five links to their Threads bios. More importantly, Threads posts with links will now be surfaced more often in the app's recommendations. And Meta is adding link-specific insights to its built-in analytics tool so creators can track how often people are interacting with the content they share. "We want Threads to be a place that helps you grow your reach – even outside of Threads," Meta notes in a blog post.
Meta will show how many people are clicking on links you share on Threads.
While that will be welcome news for anyone hoping to turn their Threads account into a reliable traffic source, it's unclear just how dramatic of a shift users should expect. The app's algorithm is still a black box, even for power users. And Threads' emphasis on recommended posts means that even users with large numbers of followers tend to get more interactions from non-followers.
Publishers have also reported mixed results when it comes to Threads. Last year, several publishers reported that Bluesky, despite being far smaller than Threads or X, was a far more reliable traffic source than its larger counterparts. More recently though, some publishers have reported spikes in referral traffic from Threads following the company's reversal of a policy to not recommend political content. On the other hand, Meta's past is filled with numerous examples of why publishers and creators shouldn't rely too heavily on the social network. Still, it may be a good time to at least start experimenting with more links on Threads.
This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-is-finally-embracing-links-150012499.html?src=rss
Threads will finally start giving users more visibility into when their accounts are penalized for breaking its rules. Meta is bringing its “Account Status” feature to Threads, which will enable people to see when the company has removed or demoted posts or handed out other penalties.
The change adds a layer of much-needed transparency to Threads, which already has a recommendation algorithm that can be hard for creators to understand. Earlier this year, Meta reversed course on whether it would recommend political content to Threads users after it tried to limit posts about elections and other “social” topics last year.
As on Instagram (and Facebook), Account Status allows Threads users to view “actions” Meta has taken against their account. It will indicate if a post has been removed, made less visible in other users’ feeds or deemed un-recommendable by Meta. It will also show if a user has been blocked from using certain features for breaking the platform’s rules.
If Meta has “actioned” your account for some reason, Account Status is also where you can request an appeal. The company says it will alert users once their report has been reviewed.
Account Status is starting to roll out now and is accessible from the “account” section in Threads’ settings menu.
This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-will-start-telling-users-when-their-posts-are-demoted-204628224.html?src=rss
A jury has ruled that the company behind the infamous Pegasus spyware must pay Meta more than $167 million in damages for spreading malware via WhatsApp. The ruling is a major victory for Meta after a years-long legal battle with NSO Group.
Meta sued the NSO Group in 2019 over its Pegasus spyware. Meta said at the time that more than 1,400 people in 20 countries had been targeted, including journalists and human rights activists. The company said that the “highly sophisticated cyber attack” spread malware via video calls even when the calls went unanswered. Last year, a judge sided with Meta and found the Israeli company had violated the US Computer Fraud and Abuse Act. Tuesday’s verdict followed a week-long jury trial to determine just how much NSO should pay in damages to Meta.
The jury ultimately awarded Meta $444,719 in compensatory damages and $167,254,000 in punitive damages. In a statement, WhatsApp’s VP of Global Communications Carl Woog called the verdict “a critical deterrent to this malicious industry against their illegal acts aimed at American companies and the privacy and security of the people we serve.”
NSO Group, which describes itself as a “cyber intelligence" firm, has said that it’s not possible to use Pegasus on US phone numbers. In court, lawyers for the firm argued that WhatsApp wasn’t harmed in any way by Pegasus, according to Courthouse News Service.
In a statement, NSO’s Gil Lainer said the verdict was “another step in a lengthy judicial process” and said it would pursue “further proceedings” or an appeal. “We firmly believe that our technology plays a critical role in preventing serious crime and terrorism and is deployed responsibly by authorized government agencies,” Lainer said. “This perspective, validated by extensive real-world evidence and numerous security operations that have saved many lives, including American lives, was excluded from the jury's consideration in this case.”
WhatsApp’s Woog said Meta knows it has “a long road ahead” to collect damages from NSO. “Ultimately, we would like to make a donation to digital rights organizations that are working to defend people against such attacks around the world,” he said. He added that Meta plans to pursue a court order to prevent NSO from targeting WhatsApp in the future.
This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/meta-wins-more-than-167-million-in-damages-from-spyware-maker-that-targeted-whatsapp-215459438.html?src=rss
Meta is facing the biggest existential threat in its history. Years after the Federal Trade Commission first sued the social network in an attempt to unwind its acquisitions of Instagram and WhatsApp, the trial that will shape its future is finally underway. FTC v. Meta began last month when CEO Mark Zuckerberg took the stand, and is expected to last for several weeks.
The FTC is hoping to prove to US District Judge James Boasberg that Meta’s acquisitions of its one-time rivals were anticompetitive and hurt US consumers. Meta, meanwhile, has argued that Instagram and WhatsApp were only able to grow into the billion-user services they are today because of its investment in them over the last decade or more.
While the case is unlikely to be fully settled anytime soon, the trial has successfully uncovered tons of new details about the inner workings of Meta and its approach to potential competitors. And testimony from former execs like Instagram cofounder Kevin Systrom and longtime COO Sheryl Sandberg has shed new light on the company’s past.
Instagram’s former CEO speaks
Facebook’s 2012 acquisition of Instagram is a central part of the FTC’s case against Meta. The government has argued that Mark Zuckerberg bought Instagram in order to neutralize it as a competitor and is trying to force Meta to divest it. So it was more than a little eyebrow raising when Instagram’s cofounder and former CEO Kevin Systrom took the stand and didn’t exactly come to Meta’s defense.
While Zuckerberg had testified that Meta had helped Instagram grow, Systrom testified that Zuckerberg saw Instagram as a “threat” to Facebook’s growth and intentionally withheld company resources as a result. “As the founder of Facebook, he felt a lot of emotion around which one was better, meaning Instagram or Facebook," Systrom said.
Sheryl Sandberg thought Zuckerberg overpaid for Instagram
Facebook’s decision to pay $1 billion for Instagram — an app that had no revenue and just a handful of employees — seemed like an incredible sum to many onlookers at the time. Among them, though, was Zuckerberg’s former top lieutenant. The trial unearthed a 2012 exchange between Zuckerberg and Sandberg in which Zuckerberg asked if $1 billion was too much to pay. Sandberg replied that “yes, of course it’s way too much.”
On the stand, however, Sandberg said that she had been wrong. “I don’t think anyone today would say we paid too much for Instagram,” she said, in testimony reported by Bloomberg.
Zuckerberg knew the company could face a breakup
In one notable email exchange, Zuckerberg speculated that Meta could one day face antitrust action forcing it to divest Instagram. "I'm beginning to wonder whether spinning Instagram out is the only structure that will accomplish a number of important goals," Zuckerberg mused in a 2018 email. "As calls to break up the big tech companies grow, there is a non-trivial chance that we will be forced to spin out Instagram and perhaps WhatsApp in the next 5-10 years anyway."
Zuckerberg considered nuking friend lists to boost engagement
In 2022, facing rising competition from TikTok, Zuckerberg was apparently growing concerned that Facebook’s “cultural relevance is decreasing quickly.” To address this, he suggested deleting users’ friends lists as often as once a year in an effort to get people to “start again.” Bizarrely, he referred to this plan as “double down on friending,” as Business Insider noted.
Zuckerberg, apparently aware that the plan was somewhat risky, even suggested that Facebook could test out the idea in a “smaller country” first in order to gauge the effect it might have on users. However, Tom Alison, who oversees the Facebook app for Meta, quickly shot him down, according to The Verge, telling Zuckerberg the plan was not “viable.”
When asked about it directly on the stand, Zuckerberg simply stated that “we never did that.” Still, the fact that he even considered such a drastic move is telling. Zuckerberg floated the idea in 2022, at a time when TikTok’s popularity among US teens was surging and Meta was becoming increasingly alarmed at TikTok’s dominance. In the same email, Zuckerberg also questioned Alison about whether Facebook could move to a “follow model.”
Just how threatened they were by TikTok
Zuckerberg has previously talked about how Meta was “slow” to recognize the threat posed by TikTok. But the FTC trial has unearthed new details about Meta’s response to the app’s rise. In her testimony, Sandberg said that Meta was already feeling pressure from TikTok in 2018. By 2020, the company had invested more than $500 million into building its competitor, Reels, according to an internal email noted by The New York Times. That push also saw Meta hire more than 1,000 new employees to bolster its video efforts.
Zuckerberg also touched on TikTok, saying that the app quickly became a “highly urgent” threat to Meta. “We observed that our growth slowed down dramatically,” Zuckerberg said, referring to TikTok’s rise. That may sound surprisingly candid for Zuckerberg, but his remarks were also strategic for Meta’s defense. The company has argued that TikTok is an even bigger threat to its business than Instagram or WhatsApp ever was, and has slammed the government for claiming that TikTok isn’t a direct rival.
As the European Union has adopted stricter tech regulations over the last few years, the new laws have forced tech giants to change their products in sometimes meaningful ways. For Meta, one such change has been the addition of ad-free versions of Facebook and Instagram that are only available via subscription in the EU. The company began offering the plans in 2023 and more recently slashed their price following legal scrutiny.
But even with a price cut, it seems ad-free subscriptions to Facebook and Instagram are unpopular. On the stand, Meta’s Chief Revenue Officer John Hegeman testified that there has been “very little interest” in the plan with only “about .007 percent” of users opting in, according to testimony reported by The Verge.
Threads almost didn’t get its own app
Meta’s X competitor was almost relegated to a feature within Instagram. That’s according to Instagram and Threads chief Adam Mosseri, who testified (per The Verge) that Meta’s original plan was for Threads to live inside Instagram itself. However, that arrangement ended up being too “confusing,” so the company ultimately opted to break Threads out into its own service. Mosseri said the move was “contentious” at the time.
Now, though, it’s hard to argue that Meta didn’t make the right call. Threads has passed 350 million users and Zuckerberg has predicted it will be the company’s next billion-person app. It’s nearly impossible to imagine Threads reaching that level of success if it were merely yet another Instagram feature.
Update, May 8, 2025, 12:15PM PT: This story has been updated to reflect testimony from Instagram's Adam Mosseri.
This article originally appeared on Engadget at https://www.engadget.com/big-tech/what-weve-learned-from-ftc-v-meta-antitrust-trial-162048138.html?src=rss
A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit’s most popular communities using AI-generated comments to test the persuasiveness of large language models. The experiment, which was revealed over the weekend by moderators of r/changemyview, is described by Reddit mods as “psychological manipulation” of unsuspecting users.
“The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users,” the subreddit’s moderators wrote in a lengthy post notifying Redditors about the research. “This experiment deployed AI-generated comments to study how AI could be used to change views.”
The researchers used LLMs to create comments in response to posts on r/changemyview, a subreddit where Reddit users post (often controversial or provocative) opinions and request debate from other users. The community has 3.8 million members and often ends up on the front page of Reddit. According to the subreddit’s moderators, the AI took on numerous different identities in comments during the course of the experiment, including a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.” Many of the original comments have since been deleted, but some can still be viewed in an archive created by 404 Media.
In a draft of their paper, the unnamed researchers describe how they not only used AI to generate responses, but attempted to personalize its replies based on information gleaned from the original poster’s prior Reddit history. “In addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM,” they write.
The r/changemyview moderators note that the researchers violated multiple subreddit rules, including a policy requiring disclosure when AI is used to generate comments and a rule prohibiting bots. They say they filed an official complaint with the University of Zurich and have requested the researchers withhold publication of their paper.
Reddit also appears to be considering some kind of legal action. Chief Legal Officer Ben Lee responded to the controversy on Monday, writing that the researchers' actions were "deeply wrong on both a moral and legal level" and a violation of Reddit's site-wide rules.
We have banned all accounts associated with the University of Zurich research effort. Additionally, while we were able to detect many of these fake accounts, we will continue to strengthen our inauthentic content detection capabilities, and we have been in touch with the moderation team to ensure we’ve removed any AI-generated content associated with this research.
We are in the process of reaching out to the University of Zurich and this particular research team with formal legal demands. We want to do everything we can to support the community and ensure that the researchers are held accountable for their misdeeds here.
In an email, the University of Zurich researchers directed Engadget to the university's media relations department, which didn't immediately respond to questions. In posts on Reddit and in a draft of their paper, the researchers said their research had been approved by a university ethics committee and that their work could help online communities like Reddit protect users from more “malicious” uses of AI.
“We acknowledge the moderators’ position that this study was an unwelcome intrusion in your community, and we understand that some of you may feel uncomfortable that this experiment was conducted without prior consent,” the researchers wrote in a comment responding to the r/changemyview mods. “We believe the potential benefits of this research substantially outweigh its risks. Our controlled, low-risk study provided valuable insight into the real-world persuasive capabilities of LLMs—capabilities that are already easily accessible to anyone and that malicious actors could already exploit at scale for far more dangerous reasons (e.g., manipulating elections or inciting hateful speech).”
The mods for r/changemyview dispute that the research was necessary or novel, noting that OpenAI researchers have conducted experiments using data from r/changemyview “without experimenting on non-consenting human subjects.”
“People do not come here to discuss their views with AI or to be experimented upon,” the moderators wrote. “People who visit our sub deserve a space free from this type of intrusion.”
Update, April 28, 2025, 3:45PM PT: This post was updated to add details from a statement by Reddit's Chief Legal Officer.
This article originally appeared on Engadget at https://www.engadget.com/ai/researchers-secretly-experimented-on-reddit-users-with-ai-generated-comments-194328026.html?src=rss
Back when Meta first introduced its Twitter competitor Threads, many noted that the company had failed to secure the threads.com domain and instead launched the website at threads.net. At the time, the Threads dot com domain belonged to a messaging app startup that said it was reluctant to rebrand its business.
But that startup was later acquired by Shopify and Meta did eventually acquire the coveted threads.com domain for an undisclosed amount. Now, Meta is finally moving Threads’ website to threads.com, and adding some much needed functionality to the web version of Threads.
The update adds a new composer that pops up in its own window so you can continue to browse your feeds as you type out a new post. It also allows you to scroll your various custom feeds in a single-column view (much like Threads’ mobile app), and finally adds a menu shortcut for saved posts. (Previously, the only way to view saved posts on web was to add it as a pinned column.)
Meta is also stepping up its efforts to lure users directly from X. The company says it’s testing a new feature that allows users to upload a list of people they follow on X and find the corresponding accounts on Threads. The feature, currently labeled as being in “beta,” sounds a bit clunky according to Meta’s in-app description. It notes that downloading data from X can take as long as three days, so it’s not exactly a simple process. But in addition to giving users a way to find familiar accounts on Threads, it could also give Meta some valuable insight into users’ habits on other platforms.
This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-is-moving-to-threadscom-and-adding-a-bunch-of-new-web-features-190006238.html?src=rss
Meta is finally acknowledging that Facebook’s feed is filled with too many spammy posts. In an update, the company says it plans to start “cracking down” on some of the worst offenders. “Facebook Feed doesn’t always serve up fresh, engaging posts that you consistently enjoy,” the company writes. “We’re working on it.”
Specifically, Meta says it will lower the reach of creators that share posts with "long, distracting captions” as well as posts with captions that are irrelevant or unrelated to the shared content. These accounts will also no longer be eligible for monetization. Likewise, the company says it’s taking “more aggressive” steps to combat “spam networks that coordinate fake engagement.” This includes making comments from these accounts less visible, and removing Facebook pages meant to “inflate reach.” Meta is also testing a feature that allows users to anonymously downvote comments in order to flag them as not “useful.”
The update comes as Meta is trying to revamp Facebook to make it more appealing to “young adults.” The company recently brought back a tab for friends content, in an update Mark Zuckerberg described as making the platform more like “OG Facebook.” Notably though, Meta’s update doesn’t mention one of the more persistent forms of engagement bait that’s emerged on Facebook over the last year: AI slop.
The phenomenon, which has been extensively documented by 404 Media, involved bizarre, often nonsensical AI-generated images — like the now infamous “Shrimp Jesus” — that serve little purpose other than to farm engagement for people trying to make money on or off Facebook. These spammers are often aided by Facebook’s own algorithm, which boosts the posts, researchers have found.
AI slop and engagement bait aren’t the only types of low-quality posts that have overwhelmed users’ Facebook feeds in recent years. I regularly see posts from pages that seem to do nothing but screenshot old Reddit posts from r/AITA, or recycle old news about celebrities I don’t follow or particularly care about. Meta’s reports on the most widely viewed content on its platform regularly feature anodyne posts that are engineered to rack up millions of comments, like those that ask users to comment “amen” or solve basic math equations. Posts like that may not fit neatly into Meta’s latest crackdown, though it’s unlikely many Facebook users are actually enjoying this content.
The company does note it’s also trying to “elevate” the creators that are actually sharing original content, including by cracking down on accounts that steal their work. But given how much easier it is to make AI slop than good original content, it could be a long time before Meta is able to get Facebook’s spam problem under control.
This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-finally-acknowledges-that-facebook-has-a-major-spam-problem-175304372.html?src=rss
Facebook acquired Instagram in 2012 for $1 billion, but tensions between Mark Zuckerberg and the app’s founders persisted for years afterward. On Tuesday, Instagram’s former CEO and cofounder Kevin Systrom took the stand in Meta’s antitrust trial in Washington, D.C. and offered a firsthand account of how Zuckerberg viewed the photo-sharing app as a “threat” to Facebook.
Systrom, who ran Instagram until 2018, said that Zuckerberg slowed hiring and other investments into Instagram despite its success. Zuckerberg, Systrom testified, "believed we were a threat to their growth," and as a result "was not investing" in the photo-sharing app, according to testimony reported by The New York Times. As The Times notes, Instagram had only a fraction of Facebook's headcount even after reaching 1 billion users. "As the founder of Facebook, he felt a lot of emotion around which one was better, meaning Instagram or Facebook," Systrom reportedly said.
Tensions between Instagram’s founders and Zuckerberg over company resources have been previously reported, but Systrom’s testimony is the first time he’s publicly spoken in detail about the issues that ultimately led him to resign from the company. On the stand Tuesday, Systrom said that Zuckerberg “believed we were hurting Facebook’s growth,” according to Bloomberg.
Facebook’s acquisition of Instagram is central to the FTC’s case against Meta. The government has argued that Meta’s purchases of WhatsApp and Instagram were anticompetitive and that the social media company should be forced to divest the businesses. Systrom’s testimony comes a week after Zuckerberg took the stand and defended Meta’s $1 billion Instagram acquisition. However, a 2018 email from Zuckerberg that surfaced earlier in the trial showed the Facebook founder was aware even then that he could be forced to spin off the services into independent entities.
This article originally appeared on Engadget at https://www.engadget.com/social-media/instagrams-former-ceo-testifies-zuckerberg-thought-the-app-was-a-threat-to-facebook-202112282.html?src=rss
Earlier this year, right as TikTok and other ByteDance apps were temporarily pulled from Apple and Google’s app stores, Meta announced that it was working on a new video editing app tailored to Instagram creators. That app, called Edits, is now finally rolling out as Meta continues to try to leverage the uncertainty surrounding TikTok’s future to draw more creators to its apps.
As previewed in its earlier app store listings, Edits promises much more advanced editing tools than what’s been available in Meta’s apps. The in-app camera allows creators to capture up to 10 minutes of video and publish to Instagram in “enhanced quality.” It also features popular editing effects like green screen and Instagram’s extensive music catalog.
In keeping with Meta’s current focus on AI, Edits comes with a couple of AI-powered features as well. The “animate” feature allows users to create a video from a static image, while “cutouts” enables video makers to “isolate specific people or objects with precision tracking.” And unlike ByteDance’s popular editor CapCut, Edits doesn’t export videos with a watermark of any kind (Instagram downranks videos with visible watermarks).
While Edits is launching months after CapCut came back online in the US, Meta is adding some Instagram-specific features to lure Reels creators. This includes in-app post analytics, as well as the ability to import audio tracks they’ve previously saved in the app. And it sounds like Instagram creators can look forward to more specialized features in the future. In a blog post, the company notes that the current version of the app is merely “the first step” for Edits, and that it plans to collaborate with creators on more functionality going forward.
This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-is-rolling-out-edits-its-capcut-competitor-163045930.html?src=rss
As lawmakers and regulators call for social media companies to do more to protect the mental health of their youngest users, teens’ perception of social media also seems to be changing. A growing number of teens say that social media is harmful and takes up too much of their time, according to a new report from Pew Research.
The report, which was based on a survey of 1,391 teens and parents in the United States, sheds light on how teens’ perspective on social media has changed amid increasing calls to hold online platforms accountable for the alleged harms they’ve done to their youngest users.
According to the report, 48 percent of teens now view social media as a “mostly negative” influence on other people their age. That’s a significant jump from the last time Pew polled teens on the question in 2022, when just under a third of teens said the same. The number of teens who view social media as “mostly positive” also decreased, from 24 percent in 2022 to 11 percent in the latest poll. “Teens’ views of the impact of social media on their peers has grown increasingly negative,” Pew’s researchers note.
Interestingly, teens are significantly less likely to report that social media is harmful to themselves specifically. Only 14 percent of teens polled by Pew reported that social media “negatively affects them personally.” Pew’s researchers don’t speculate on the reason for that disparity, though the report notes that there have been growing conversations about the effect social media has on teen mental health, including a warning last year from the US Surgeon General.
Pew’s report also suggests that teens are becoming increasingly aware of how much time they spend on social media platforms. Forty-five percent of teens said they "spend too much time” on social media, up from 27 percent who said the same in 2023. A similar proportion of teens said that social media negatively affects their sleep (45 percent) and productivity (40 percent). And 44 percent of teens report that they’ve “cut back” their smartphone and social media use overall.
While this report is unlikely to settle the long-running debate about whether social media is more helpful or harmful to young people, the fact that teens’ views are shifting is telling. At a time when some lawmakers have proposed banning younger kids from social media altogether, Pew’s report suggests that adults aren’t the only ones worried about the issue.
This article originally appeared on Engadget at https://www.engadget.com/social-media/teens-are-becoming-more-worried-about-the-effects-of-social-media-113027657.html?src=rss