Earlier today, Mark Zuckerberg shared a rambling memo outlining his vision to build AI "superintelligence." In the memo, Zuckerberg hinted that the pursuit of more powerful AI might require the company to be more selective in what it open sources.
Citing "safety concerns" he wrote that Meta would need to be "rigorous" about such decisions. The line stood out to many as Zuckerberg β who once said "fuck that" in reference to closed platforms β has made open source central to Meta's AI strategy.
During Meta's second quarter earnings call, Zuckerberg further acknowledged there could be a shift, though he downplayed the significance of it. Here's what he said when asked if his thinking had changed.
I don't think that our thinking has particularly changed on this. We've always open sourced some of our models and not open sourced everything that we've done. So I would expect that we will continue to produce and share leading open source models. I also think that there are a couple of trends that are playing out. One is that we're getting models that are so big that they're just not practical for a lot of other people to use, so we kind of wrestle with whether it's productive or helpful to share that, or if that's really just primarily helping competitors or something like that. So I think that there's, there's that concern.
And then obviously, as you approach real superintelligence, I think there's a whole different set of safety concerns that I think we need to take very seriously, that I wrote about in my note this morning. But I think the bottom line is I would expect that we will continue open sourcing work. I expect us to continue to be a leader there, and I also expect us to continue to not open source everything that we do, which is a continuation of kind of what we, what we've been, been kind of working on.
That's notably different from what he wrote almost exactly a year ago in a memo titled "Open Source AI is the Path Forward." In that even longer note, he said that open source is crucial for both Meta and developers.
"People often ask if Iβm worried about giving up a technical advantage by open sourcing Llama, but I think this misses the big picture," he wrote. "I expect AI development will continue to be very competitive, which means that open sourcing any given model isnβt giving away a massive advantage over the next best models at that point in time."
He also argued that open source is safer. "There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. As long as everyone has access to similar generations of models, which open source promotes, then governments and institutions with more compute resources will be able to check bad actors with less compute."
To be clear, Zuckerberg said the company would continue to open source some of its work. But he seems to be laying the groundwork for a future in which Meta's "superintelligence" could be a lot less open.
This article originally appeared on Engadget at https://www.engadget.com/ai/is-mark-zuckerberg-flip-flopping-on-open-source-ai-231310567.html?src=rss
Meta CEO Mark Zuckerberg delivers a speech at the Meta Connect event at the company's headquarters in Menlo Park, California, U.S., September 27, 2023. REUTERS/Carlos Barria
LinkedIn quietly changed the language of its hateful content policy this week. The update, the company's first change in three years according to the site's own changelog, removed a line that stated the company prohibits the misgendering and deadnaming of transgender individuals.
The change, which was first noted by the organization Open Terms Archive, was the only modification to the "hateful and derogatory content" policy. An archived version of the rules includes "misgendering or deadnaming of transgender individuals" as an example of prohibited content under the policy. That line was removed on July 28, 2025.
Open Terms and other groups have interpreted the change to mean that LinkedIn is rolling back protections for transgender people.
A LinkedIn spokesperson told Engadget the company's underlying policies hadn't changed despite the updated wording. The company's rules still reference "gender identity" as a protected characteristic. "We regularly update our policies," the company said in a statement. "Personal attacks or intimidation toward anyone based on their identity, including misgendering, violates our harassment policy and is not allowed on our platform." The company didn't provide an explanation for the change.
Advocacy groups say they are alarmed by the move. In a statement, GLAAD denounced LinkedIn's update and suggested it was part of a broader pattern of tech platforms loosening rules meant to protect vulnerable users. "LinkedIn's quiet decision to retract longstanding, best-practice hate speech protections for transgender and nonbinary people is an overt anti-LGBTQ move, and one that should alarm everyone," a spokesperson for the organization said. "Following Meta and YouTube earlier this year, yet another social media company is choosing to adopt cowardly business practices to try to appease anti-LGBTQ political ideologues at the expense of user safety."
Earlier this year, Meta rewrote its rules to allow its users to claim LGBTQ people are mentally ill. The company also added a term associated with discrimination and dehumanization to its community standards and has so far declined to remove it even after its Oversight Board recommended it do so. YouTube also quietly updated its rules this year to remove a reference to "gender identity" from its hate speech policies. The platform denied that it had changed any of its rules in practice, suggesting to User Mag that the move "was part of regular copy edits to the website."
Have a tip for Karissa? You can reach her by email, on X, Bluesky, Threads, or send a message to @karissabe.51 to chat confidentially on Signal.
This article originally appeared on Engadget at https://www.engadget.com/social-media/linkedin-quietly-removed-references-to-deadnaming-and-misgendering-from-its-hateful-content-policy-190031953.html?src=rss
Facade of LinkedIn office building with large logo visible, reflecting surrounding cityscape, SoMa neighborhood, San Francisco, California, March 18, 2025. (Photo by Smith Collection/Gado/Getty Images)
Mark Zuckerberg has spent the last several months and several billion dollars recruiting prominent AI researchers and executives for a new "superintelligence" team at Meta. Now, the Meta CEO has published a lengthy memo that attempts to lay out his big plan for using the company's vast resources to create "personal superintelligence."
In the memo, which reads more like a manifesto than a strategic business plan, Zuckerberg explains that he's "extremely optimistic that superintelligence will help humanity accelerate our pace of progress." The technology, according to him, "has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose."
Zuckerberg, who has previously expressed a desire to build artificial general intelligence, never defines "superintelligence." Nor does the 616-word memo explain how Meta plans to create such a technology, what it might help people accomplish or why anyone should trust the company to build it. Instead, he implies that Meta will be a better steward of this non-specifically powerful AI than "others in the industry" who expect "humanity will live on a dole of its output."
As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.
Meta's vision is to bring personal superintelligence to everyone. We believe in putting this power in people's hands to direct it towards what they value in their own lives.
This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well.
Left unsaid by Zuckerberg is the fact that the memo comes at a time when he's been rapidly reorganizing Meta's AI teams. Last month, the company invested $14.8 billion into Scale AI, a move that allowed it to bring Scale CEO and founder Alexandr Wang into the company. The 28-year-old founder is now Meta's Chief AI Officer in charge of its superintelligence efforts.
Meta has also been on a hiring spree for the effort, and has reportedly been offering prominent researchers eight- and nine-figure pay packages to join. In recent weeks, the company has successfully recruited high-profile talent from Apple and OpenAI, including Shengjia Zhao, who helped create GPT-4. Zhao announced last week that he will take on the role of "chief scientist of Meta superintelligence labs." Just yesterday, Wired reported that Meta has recently turned its recruiting efforts to Thinking Machines Lab, an AI startup founded by former OpenAI CTO Mira Murati, and that in at least one case it made an offer worth more than $1 billion over several years. (Meta PR said some details of that report were "off.") All that is on top of the $72 billion Zuckerberg has said Meta plans to spend on AI infrastructure.
Driving all this is Zuckerberg's reported frustration with Meta's own generative AI efforts. The company has had to delay its larger "Behemoth" Llama 4 model by months, and Llama's struggles have reportedly caused Zuckerberg to question whether Meta's AI efforts should remain open source, according to CNBC.
It's also likely no coincidence Zuckerberg's rambling manifesto comes hours before the company is scheduled to report earnings and tell analysts more about its plans to spend billions of dollars on new AI efforts.
Meta's CEO also clearly sees AI dominance as an opportunity to end the company's reliance on mobile platforms, especially Apple, which he believes have been able to exert too much control via their app stores. In his memo, he explains that "personal devices like glasses ... will become our primary computing devices." A future where smart glasses are more important than smartphones would, of course, be extremely convenient for Meta, which has spent the last several years building smart glasses.
This article originally appeared on Engadget at https://www.engadget.com/ai/mark-zuckerberg-shares-a-confusing-vision-for-ai-superintelligence-153944322.html?src=rss
FILE PHOTO: Meta's CEO Mark Zuckerberg mimics virtual reality glasses during a live recording panel at Acquired, a technology podcast, at the Chase Center in San Francisco, California, U.S., September 10, 2024. REUTERS/Laure Andrillon/File Photo
TikTok users in the United States will soon see crowd-sourced fact checks appearing alongside videos on the platform. The app is beginning to roll out Footnotes, its version of Community Notes, the company announced.
TikTok announced its plan to adopt the feature back in April, and since then almost 80,000 users have been approved as contributors. Footnotes works similarly to Community Notes on X. Contributors can add a note to videos that contain false claims or AI-generated content, or that otherwise need more context. Contributors are required to cite a source for the information they provide, and other contributors need to rate a footnote as helpful before it will show up broadly. Like X, TikTok will use a bridging algorithm to determine which notes have reached "a broad level of consensus."
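TikTok hasn't published the details of that algorithm, but the general idea behind bridging-based ranking, which X has open sourced for Community Notes, is that a note only surfaces when contributors who usually disagree with one another rate it helpful. Here's a deliberately simplified Python sketch of that concept; the contributor clusters, thresholds and sample data are illustrative stand-ins, not anything TikTok has described.

```python
from collections import defaultdict

# Toy illustration of bridging-based ranking: a footnote is surfaced only if
# contributors from more than one "viewpoint cluster" rate it helpful. Real
# systems (like X's open source Community Notes model) learn those viewpoints
# via matrix factorization; the labels, thresholds and data here are made up.

ratings = [
    # (contributor, footnote, rated_helpful)
    ("alice", "note1", True),
    ("bob", "note1", True),
    ("carol", "note1", True),
    ("alice", "note2", True),
    ("bob", "note2", False),
]

# Pretend these clusters were learned from each contributor's rating history.
viewpoint = {"alice": "cluster_a", "bob": "cluster_b", "carol": "cluster_b"}

MIN_RATINGS = 3           # require a minimum amount of input
MIN_HELPFUL_SHARE = 0.66  # overall helpfulness bar


def surfaced_footnotes(ratings, viewpoint):
    by_note = defaultdict(list)
    for contributor, note, helpful in ratings:
        by_note[note].append((viewpoint[contributor], helpful))

    surfaced = []
    for note, votes in by_note.items():
        if len(votes) < MIN_RATINGS:
            continue
        helpful_share = sum(h for _, h in votes) / len(votes)
        # The "bridging" requirement: helpful ratings must come from
        # contributors on more than one side, not just one cluster.
        helpful_clusters = {cluster for cluster, h in votes if h}
        if helpful_share >= MIN_HELPFUL_SHARE and len(helpful_clusters) >= 2:
            surfaced.append(note)
    return surfaced


print(surfaced_footnotes(ratings, viewpoint))  # ['note1']
```

In practice, a system like X's infers those viewpoint clusters from each contributor's full rating history rather than assigning them by hand, but the gatekeeping logic is the same: broad agreement across camps, not raw vote counts.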
According to screenshots shared by the company, Footnotes will appear prominently underneath a video's caption. Users will be able to read the full note and view a link to its source material.
While TikTok is the latest major platform to adopt crowd-sourced fact checking, the company, unlike Meta, is still working with professional fact-checking organizations, including in the United States. The company also points out that Footnotes will be subject to the same content moderation standards as the rest of its platform, and that people can report notes that might break its rules. The presence of a note won't, however, affect whether a particular video is eligible for recommendations in the "For You" feed.
For now, the company isn't making any commitments to roll out the system beyond the US. "We picked the US market because it's sufficiently large that it has a content ecosystem that can support this kind of a test," TikTok's head of integrity and authenticity product, Erica Ruzic, said during a press event. "We will be evaluating over the coming weeks and months, as we see how our US pilot is going, whether we would want to expand this to additional markets."
The test of Footnotes comes at a moment when the company's future in the United States is still somewhat in limbo. President Donald Trump has delayed a potential ban three times since taking office in January as a long-promised "deal" to create a US-owned TikTok entity has yet to materialize. Trump said a month ago that an agreement could be announced in "two weeks." Since then, there have also been reports that TikTok owner ByteDance is working on a new, US-only version of the app in anticipation of a deal. TikTok representatives declined to comment on those reports, which have suggested such an app could debut in early September.
This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktoks-community-notes-era-starts-today-110041152.html?src=rss
TikTok's "For You" recommendations have long been a source of mystery and fascination for creators on the platform. Even the most seasoned TikTok stars don't always understand why some videos go viral and some don't. And there's long been lots of speculation about the types of content that is and isn't acceptable to the app's recommendation algorithm.
Now, the company is looking to give creators more transparency into its recommendations. TikTok is testing out a "content check" feature that will allow creators to preview whether their videos have issues that might prevent them from appearing in the coveted "For You" feed.
TikTok is kicking it off with a web-based feature called "Content Check Lite" that will be available on desktop in TikTok Studio. The feature will check uploaded videos for "For You" eligibility and flag potential issues before posting. The company also says it's in the "early stages" of experimenting with a "broader" content check feature that can check "content against all our Community Guidelines before it goes live on platform," and offer specific feedback on changes that can help correct ineligible content. A "small group" of creators are currently part of the test, according to the company.
TikTok has already tested a version of this for TikTok Shop sellers, and says the feature resulted in a 27 percent reduction in "low-quality uploads" to the app. The feature could also help the company build trust with creators, who often speculate about "shadow bans" and why some videos don't get as many views as they expect.
"Ultimately, our goal is to help creators understand our rules and make sure that they can know how best to build that audience and build that thriving following on TikTok," TikTok's head of operations and trust and safety, Adam Presser, said during a press event. "We're excited to learn from the pilot, and hope to have more to share ahead in the next few months."
The company is adding several other updates for creators as well, including new muting and filtering controls that make it easier to weed out specific terms from comments in live streams and other posts. The app is also getting a designated "creator inbox" to make it easier to manage messaging. Creators who want a separate space to interact with followers will be able to take advantage of "creator chat rooms," which let eligible accounts create a dedicated space for chats with up to 300 followers.
This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktok-content-check-tells-creators-if-their-videos-will-be-blocked-from-for-you-pages-110015168.html?src=rss
If you're at all familiar with Meta's Ray-Ban-branded smart glasses, there won't be many surprises when it comes to its latest Oakley frames. The Oakley Meta glasses rely heavily on what's already been a successful playbook for the company: the style of a popular eyewear brand mixed with juuust enough tech to let you keep your phone in your pocket a little longer.
But the Oakley Meta glasses are also the social media company's first collaboration with a non-Ray-Ban brand (though both share a parent company in EssilorLuxottica). And while Meta stays pretty close to the strategy it's used for the last four years, its latest frames offer some hints about its long-term ambitions in the space.
Meta has described its Oakley-branded frames as "performance glasses," which isn't entirely surprising given Oakley's longtime association with athletes. But there are only a few actual upgrades compared to the Ray-Ban lineup. The Oakley Meta glasses have a notably longer battery life, both for the glasses themselves and the charging case. They are also able to capture higher quality video than previous versions.
With a starting price of nearly $400, though, I'm not sure those upgrades are worth an extra $100 to $200.
Why do they look like that?
Meta's debut pair of Oakley-branded glasses is based on the brand's HSTN (pronounced how-stuhn) frames, and there's really nothing subtle about the design. This first release is a limited edition version with shiny gold lenses and bright white frames (which Meta inexplicably calls "warm grey").
Like previous Ray-Ban models, they don't look overtly techy, but I still wasn't a big fan of the design. The glasses felt just a little oversized for my face and something about the bright white paired with gold lenses reminded me a little too much of a bug. The color combo also accentuates just how thick the frames are, particularly around the awkwardly wide nosepiece.
I posted a selfie on my Instagram Story and polled my friends on what they thought. And while a few politely said they thought I was "pulling them off," the majority said they looked too big for my face. A few told me they looked straight-up weird, and one summed up my feelings pretty well with "something looks off about them." Style is subjective, of course. And depending on your face shape and tolerance for contrasting colors, I could see others enjoying the design. I'm looking forward to seeing the rest of the HSTN collection, which is coming later this summer, and will hopefully have some more flattering color variations.
Looks aside, the glasses function almost identically to the Ray-Ban glasses Meta introduced in 2023. There's a 12-megapixel POV camera over the left eye, and an indicator light over the right that lights up when you snap a photo or start recording a video via the capture button. There are open-ear speakers in the arms so you can listen to music and hear notifications. Much like the Ray-Ban glasses, the speakers here are pretty good at containing the sound so others can't hear when you're listening at lower volumes, but it's definitely noticeable at higher levels. You can control music playback and volume pretty easily, though, with a touchpad on the right side of the glasses.
"Performance" upgrades
The most important upgrade that comes with the Oakley glasses is the battery. Meta claims the glasses can last up to eight hours with "typical" (non-constant) use and up to 19 hours on standby. I was able to squeeze a little over five hours of continuous music playback out of the battery in one sitting, which is about an hour better than the Ray-Ban frames. The charging case can provide up to 48 hours of additional runtime, according to Meta. It's been well over a week and I haven't yet had to plug in the case.
The charging case is, however, noticeably bigger and heavier than the Ray-Ban case. It's not a dealbreaker, but the case is too big for any of my pockets and just barely fits into my small sling bag. My other gripe with the charging case is the same complaint I had about the Ray-Ban case: there's no way to see the charge level of the case itself. There's a small LED in the front that will change from green to yellow to red based on the battery level, but it's hardly a precise indicator.
The other major upgrade is the 12MP camera, which can now shoot in 3K compared to 1080p on previous models. The higher resolution video is, notably, not the default setting, but I appreciated having the option. I could see it being especially useful for creators looking to shoot POV footage, but I mostly use the glasses for still shots rather than video.
San Francisco is currently having a record-breaking cold summer, so most of my testing has been in fairly overcast conditions. It might be a product of the gray weather, but I found the photos I shot with the glasses a bit oversaturated for my taste. They looked fine on an Instagram Story, though. The camera has a fairly wide angle with a 100-degree field of view, so there's still a bit of a learning curve in figuring out how best to frame shots.
Another issue is that it's very easy for a hat or a piece of hair to make it into your photos without realizing it. My previous experience with the Ray-Ban Meta glasses meant I was careful to pull my hair back before snapping a picture, but I was bummed to realize after a long bike ride that the visor on my helmet was visible in the frame of every photo and video. It seems like Meta may have a plan to address this: I noticed a setting called "media quality" that's meant to alert you when something is partially obstructing the camera. The feature is apparently still in testing, though, and it wasn't functional. A Meta spokesperson did confirm it would be added in a future update. "Media Quality Check is a feature we're working to bring to our AI glasses collection in the future that will alert users when photos are blurry or if something like your hair or a hat blocks what you capture," Meta said.
The Meta AI app (formerly known as Meta View) can help fix other issues, though. It has a "smart crop" feature that can automatically straighten your pics to correct for any head tilt. It also has built-in AI-powered edits for photos and video so you can restyle your clips directly in the app. And while the functionality isn't limited to clips shot with the glasses, the possibility of adding AI edits after the fact makes shooting otherwise mundane clips a bit more appealing. The ability to restyle video, however, is only "free for a limited time," according to the Meta AI app.
Meta AI
While the core features of Meta's smart glasses have largely stayed the same since it first introduced the Ray-Ban Stories in 2021, one of the more interesting changes is how Mark Zuckerberg and other execs have shifted from calling them "smart glasses" to "AI glasses." As the company has shifted away from the metaverse and made AI a central focus, it's not surprising those themes would play out in its wearables too.
And while none of the Meta AI features are unique to the Oakley frames, Meta has added a couple of abilities since my last review that are worth mentioning. The first is live translation. The feature, which you have to enable in the Meta AI app, allows the onboard assistant to translate speech as you hear it. If both sides of a conversation have a pair of Meta glasses, then you can carry on a full conversation even if you don't speak the same language. The feature currently supports Spanish, French, Italian and English.
I tried it out with my husband, a native Spanish speaker who was also wearing a pair of Meta glasses, and we were both fairly impressed. I would say something in English and Meta AI on his glasses would relay it to him in Spanish. He would then respond in Spanish and Meta AI would translate the words into English.
It's not the most natural way to speak because you have to pause and wait for a translation, but it was mostly effective. There were a few bugs, though. Because we were sitting close to each other, sometimes Meta AI would overhear the translated audio from the other person's glasses and translate it back, which made the whole thing feel like a bizarre game of telephone.
And over the course of a several-minute conversation, there were a handful of times when Meta AI wouldn't pick up on what was said at all, or would only begin translating halfway through a statement. We also encountered some issues with Meta AI's translations when it came to slang or regional variations of certain words. While it wasn't perfect, I could see it being useful while traveling since it's much smoother than using Google Translate. There was also something endlessly amusing about hearing my husband's words relayed back to me by the voice of AI Judi Dench (Meta tapped a bunch of celebrities last year to help voice its AI).
Stills from a video of a walk through a parking lot (left), and the same image after using the "desert rave" effect in the Meta AI app.
Screenshots (Meta AI)
The other major AI addition is something called "Live AI," which is essentially a real-time version of the glasses' multimodal powers. Once you start a Live AI session, Meta's assistant is able to "see" everything you're looking at and you can ask it questions without having to repeatedly say "hey Meta." For example, you can look at plants and ask it to identify them, or ask about landmarks or your surroundings.
The feature can feel a bit gimmicky and it doesn't always work the way you want it to. For example, Meta AI can identify landmarks but it can't help you find them. While on a bike ride, I asked if it could help me navigate somewhere based on the intersection I was at and Meta AI responded that it was unable to help with navigation. It also didn't correctly identify some (admittedly exotic) plants during a walk through San Francisco's botanical gardens. But it did helpfully let me know that I may want to keep my distance from a pack of geese on the path.
I'm still not entirely sure what problems these types of multimodal features are meant to solve, but I think it offers an interesting window into how Meta is positioning its smart glasses as an AI-first product. It also opens up some intriguing possibilities whenever we get a version of Meta glasses with an actual display, which the rumor mill suggests could come as soon as this year.
Wrap-up
While I don't love the style of the Oakley Meta HSTN frames, Meta has shown that it's been consistently able to improve its glasses. The upgrades that come with the new Oakley frames aren't major leaps, but they deliver improvements to core features. Whether those upgrades justify the price, though, depends a lot on how you plan to use the glasses.
The special edition HSTN frames I tested are $499 and the other versions coming later this year will start at $399. Considering you can get several models of Meta's Ray-Ban glasses for just $299, I'm not sure the upgrades justify the added cost for most people. That's probably why Meta has positioned these as a "performance" model better suited to athletes and Oakley loyalists.
But the glasses do offer a clearer picture of where Meta is going with its smart glasses. We know the company is planning to add displays and, eventually, full augmented reality capabilities, both of which will benefit from better battery life and cameras. Both are also likely to cost a whole lot more than any of the frames we've seen so far. But, if you don't want to wait, the Oakley Meta glasses are the closest you can get to that right now.
This article originally appeared on Engadget at https://www.engadget.com/wearables/oakley-meta-glasses-review-a-familiar-formula-with-some-upgrades-120026844.html?src=rss
It seems like LeBron James' legal team has been trying to stop the spread of viral AI videos featuring the basketball star. As 404 Media reported, a law firm representing James has sent a cease and desist letter to a person behind an AI platform that allowed Discord users to make AI videos of James and other NBA stars.
As 404 noted, these videos have been circulating for a while, but it's one particularly strange clip that seems to have gotten James' lawyers involved. The video, which reportedly racked up millions of views on Instagram, shows a pregnant James being loaded into an ambulance after telling an AI Steph Curry to "come quick our baby is being born."
404 reports that at least three Instagram accounts that had shared the clip have since been removed, though the video is available on X. The founder of the AI platform used to make the videos also posted about the cease and desist letter he received. It's unclear what is in the letter, or if James' lawyers were also in touch with Meta about the videos. We've reached out to the company for more info on its rules.
Of course, LeBron James is far from the only public figure to grapple with unwanted AI versions of themselves. Social media scammers routinely impersonate celebrities to promote sketchy products and other schemes. We've previously reported on such scams involving deepfakes of Elon Musk and Fox News personalities that have proliferated on Facebook. Jamie Lee Curtis also recently had to publicly plead with Mark Zuckerberg to take down deepfaked ads of herself.
A still from a clip created with Google's Veo (left) and images generated by Meta AI (right)
Screenshots via Veo and Meta AI
But the videos of James are a little different. They don't feature fake endorsements and seem to be more of a prank meant to go viral in the way that lots of "AI slop" does. And James and other celebrities will likely continue to have a difficult time preventing these kinds of deepfakes from spreading. Some quick testing by Engadget showed that it's relatively easy to get AI chatbots to create images and video of "pregnant LeBron James."
We first asked ChatGPT, Gemini and Copilot to make such a photo. All chatbots initially refused, saying that such an image could go against their guidelines. But when given an image of James and asked to "make this person eight months pregnant," Google's Gemini delivered a 7-second clip of the basketball star cradling a pregnant belly. (We've reached out to Google to clarify its rules around such content.)
Likewise, Meta AI seemingly had no reservations about producing images of "pregnant LeBron James" and promptly delivered many such variations. While these creations aren't as detailed as the initial video that went viral, they do highlight how difficult it can be for AI companies to prevent people from circumventing whatever guardrails may exist.
This article originally appeared on Engadget at https://www.engadget.com/social-media/lebron-james-is-reportedly-trying-to-stop-the-spread-of-viral-ai-pregnancy-videos-211947871.html?src=rss
Los Angeles Lakers' LeBron James (23) dribbles during an NBA basketball game between Los Angeles Lakers and Los Angeles Clippers, Wednesday, Dec. 25, 2019, in Los Angeles. The Clippers won 111-106. (AP Photo/Ringo H.W. Chiu)
Meta is no longer paying creators to post on Threads. The company quietly ended the Threads bonus program, which offered some creators thousands of dollars a month in bonuses, earlier this year, Engadget has confirmed.
The company hasn't officially commented on why it stopped the payments, but an Instagram support page that once listed details about the creator incentives no longer references Threads at all. In posts on Threads, creators who were once part of the program have said they stopped receiving payments around the end of April. That's roughly one year after Meta first started paying creators for popular posts. Though Meta never publicly shared a lot of details about how the program worked, creators who previously spoke with Engadget reported that they were able to earn monthly bonuses ranging from $500 to $5,000 in exchange for hitting specific metrics around post counts and views.
It's not clear what Meta's strategy for creators on Threads is going forward. The company is still trying to lure more brands and notable faces to the platform, and has tested features to help people find popular creators they previously followed on X. Meta has also added creator-friendly tools, like the ability to add more links to profiles and more detailed analytics for the app.
But Meta has yet to clearly explain what it can offer creators in return. The platform is hardly driving any traffic to outside websites. It's also much harder to build a following on Threads, since the platform defaults to an algorithmic timeline consisting mainly of recommended content. This means that it's easier for a post from an unknown account to go viral, but viral posts rarely lead to an influx of new followers.
Meta may simply be calculating that Threads already has enough momentum without paying people for viral content. At the same time, Mark Zuckerberg has repeatedly said he believes the app can be Meta's next billion-user platform. It's difficult to see how that happens without the buy-in of creators.
Have a tip for Karissa? You can reach her by email, on X, Bluesky, Threads, or send a message to @karissabe.51 to chat confidentially on Signal.
This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-has-ended-its-bonus-program-on-threads-201627935.html?src=rss
INDIA - 2024/02/12: In this photo illustration, the Threads logo is seen displayed on a mobile phone screen and in the background. (Photo Illustration by Idrees Abbas/SOPA Images/LightRocket via Getty Images)
Roblox is joining the growing ranks of online platforms that are trying to better understand the ages of their teen users. The company is rolling out a new "age estimation" feature for teens 13 and older.
With the update, teens will be prompted for an age check that requires a video selfie in order to access the platform's new, less restrictive chat feature called "trusted connections." Roblox is relying on third-party identity company Persona for the actual "estimation," and users who fail the check will also have the option of providing an ID. Once teens have "unlocked" trusted connections via video selfie or by sharing an ID, they'll be able to participate in chats with friends "without filters," including "party" group text and voice chats.
Roblox has previously faced scrutiny for not policing its chat features enough and making it too easy for adults to seek out children on the platform. The company notes in its announcement that parents will be able to monitor their kids' "trusted connections" via parental control features, and that the feature is intended only for people who teens already "know and trust." Teens are only able to add trusted connections via their existing contacts list or a QR code.
"We believe chat without filters should only be accessible to users who have verified their age." the company writes in a blog post. "This isnβt just about compliance; itβs about building engaging and appropriate digital spaces for everyone."
While Roblox is notably not using the term "age verification," the new feature comes at a time when there are increasing calls for social media companies and other platforms to check the ages of their youngest users. Reddit and Bluesky recently announced age verification features for users in the UK, a change required of major platforms ahead of a new online safety law going into effect. Age verification mandates have also been gaining steam in the United States.
A number of states have introduced age verification measures for social media, though laws in Arkansas and Utah have so far been blocked. Utah also recently passed a law requiring app stores to verify users' ages, an approach that has been endorsed by companies like Meta and Snap. And the Supreme Court recently upheld a Texas law that requires websites hosting adult content to conduct age verification checks.
Roblox, which unlike most online platforms allows children under 13 to have accounts, is in a slightly different position. And for now, it's billing its age checks as "optional." But already having an age estimation feature in place could certainly be useful should it be required to take an even stricter approach in the future.
This article originally appeared on Engadget at https://www.engadget.com/gaming/roblox-is-adding-an-age-estimation-feature-for-teens-110047092.html?src=rss
Meta is going after creators who rip off other users' content as part of a broader effort to fix Facebook's feed. In its latest update, the company laid out new steps it's taking to penalize accounts that lift work from others.
In a blog post for creators, Meta says that accounts that "repeatedly" and "improperly" reuse other accounts' text posts, photos or videos will have their pages demonetized "for a period of time." Meta will also throttle all of their posts, not just the ones with the offending content. The company notes that the change is meant to target "repeated reposting of content from other creators without permission or meaningful enhancements" and not content like reaction videos.
Meta has previously taken similar steps to reward original content on Instagram, where the company has actively replaced reposted Reels with the original clip. The company now says it's looking into a similar move on Facebook by adding a link to the original video when it detects a duplicate.
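Meta hasn't said how its duplicate detection works, but a common building block for this kind of matching is a perceptual hash: sampled frames are shrunk, grayscaled and thresholded so that re-encoded or lightly edited copies still produce nearly identical bits. The sketch below (Python, using Pillow) is a generic illustration of that technique rather than Meta's pipeline; the function names, thresholds and frame-sampling strategy are hypothetical.

```python
from PIL import Image  # Pillow


def average_hash(frame: Image.Image, hash_size: int = 8) -> int:
    """Perceptual "average hash": shrink, grayscale, then threshold each pixel
    against the mean. Visually similar frames differ in only a few bits."""
    small = frame.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    # Number of bits where the two hashes disagree.
    return bin(a ^ b).count("1")


def looks_like_repost(upload_frames, original_frames, max_distance=5):
    """Compare sampled frames from an upload against frames from a known
    original and flag the upload if most of them match closely. The sampling
    and thresholds here are illustrative only."""
    if not upload_frames or not original_frames:
        return False
    matches = sum(
        1
        for up, orig in zip(upload_frames, original_frames)
        if hamming_distance(average_hash(up), average_hash(orig)) <= max_distance
    )
    return matches >= 0.8 * min(len(upload_frames), len(original_frames))
```

A production system would also need an index that can search billions of stored hashes quickly, which is a big part of why catching reposts at Facebook's scale is harder than the frame comparison itself.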
The latest crackdown comes as Meta says it's trying to reduce the amount of spammy and other undesirable posts in Facebook's feed. Earlier this year, the company said it would demonetize creators who share posts with spammy captions and go after creators that manipulate engagement on the platform. In its newest update, Meta shared that since the start of the year it penalized more than 500,000 accounts that engaged in such tactics, "applying measures ranging from demoting their comments and reducing the distribution of their content to preventing these accounts from monetizing." The company has also removed more than 10 million profiles it says impersonated "large content producers."
Additionally, Meta is rolling out new in-app insights it says can help creators understand what's affecting their reach or monetization status. The new dashboard will highlight potential problems, like unoriginal content or spammy captions, as well as any issues affecting monetization.
This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-says-its-cracking-down-on-facebook-creators-who-steal-content-203713569.html?src=rss
Two years in, Threads is starting to look more and more like the most viable challenger to X. It passed 350 million monthly users earlier this year and Mark Zuckerberg has predicted it could be Meta's next billion-user app.
But Threads still isn't sending much traffic to other websites, which could make the platform less appealing for creators, publishers and others whose businesses depend on non-Meta owned websites. According to Similarweb, a marketing intelligence firm, outbound referral traffic from Threads climbed to 28.4 million visits in June. That's a notable jump from 15.1 million visits a year ago, but still relatively tiny considering Threads is currently averaging more than 115 million users a day on its app, according to Similarweb.
Regular Threads users have long suspected that Meta deprioritizes posts with links. For most of the last two years of Threads' existence, the common wisdom was that users shouldn't share links, or should only share them as replies to a primary post. Instagram chief Adam Mosseri, who also oversees Threads, hasn't exactly encouraged linking either. He said last year that Threads doesn't intentionally downrank links but that "we don't place much value on" them because "people don't like and comment on links much."
Meta's reluctance wasn't just about users' preferences, though. The company was also concerned about how spammers and other bad actors might abuse links on the text-based platform. More recently though, Meta has changed course, and has been taking steps to surface more "good" links in recommended posts.
"Weβve been working on making sure links are ranked properly," Mosseri said in June. "Links have been working much better for more than a month now." The company has also bolstered links on the platform by allowing users to add more links to their Threads profiles and providing link-specific analytics to its "insights" feature. "We want Threads to be a place that helps you grow your reach β even outside of Threads," Meta said in a May update.
But despite these changes, Threads is still sending very little traffic to websites. Data shared by Similarweb shows that during May and June of last year, when Threads had more than 150 million monthly users, it sent just 24.8 million referrals to outside websites. During May and June of this year, that number more than doubled, rising to 51.8 million.
Those numbers still suggest, though, that the majority of Threads' users are rarely, if ever, clicking on links they see on Threads. Lia Haberman, a social media marketing consultant and author of the ICYMI newsletter, said she's not surprised. "People just got trained not to look for them, not to include them, not to think about them," she tells Engadget. "You can't just flip a switch and all of a sudden expect people to embrace links."
Publishers, a group that likely posts more links on Threads than anyone else, don't seem to be seeing significant traffic from Threads either. Data provided by Chartbeat, a company that provides analytics data to publishers, shows that publisher page views from Threads have nearly doubled since the start of the year, rising from 8.8 million in January to 15.1 million in June.
Interestingly, according to both Similarweb and Chartbeat's data for 2025, referrals from Threads peaked in March. That month, Threads sent 28.8 million outbound referrals to websites, according to Similarweb, while Chartbeat publishers saw 25 million page views from the platform.
But while the latest stats show that traffic from Threads has grown significantly over the last year, it still represents a tiny proportion of publishers' traffic overall. According to Chartbeat, over the last year and a half Threads has consistently accounted for less than one tenth of a percent of sites' referral traffic. By comparison, Facebook referrals have hovered around 2 to 3 percent over the same period, while Google Discover has accounted for about 13 to 14 percent of referrals. Even among other "small" sources of referrals, like ChatGPT, Reddit and Perplexity, Threads is only ahead of Perplexity in terms of the number of referrals it sends.
Threads referrals even pale in comparison to Twitter's, and Twitter was never known as a major traffic driver even before Elon Musk's takeover of the company. In January of 2018, Twitter accounted for 3 percent of publishers' page views, according to Chartbeat data reported by the Press Gazette. By April of 2023, after Musk's takeover but before he rebranded the site to X, that number had fallen to 1.2 percent.
Chartbeat's data isn't a complete picture (stats provided to Engadget were based on an analysis of 3,000 sites that have opted in to anonymized data sharing), but the slight increase in referral traffic roughly lines up with another major change Meta made this year. In January, following Mark Zuckerberg's move to end fact checking and walk back content moderation rules, Threads also ended its moratorium on recommending political content to all users.
Following this change, some publishers of political news, including Newsweek, Politico and Forbes, saw a spike in referrals from Threads, Digiday reported. But those gains don't seem to be universal, and it's not clear why some publishers may be benefiting more than others. "Threads is trailing significantly in traffic, subscription conversions, and overall conversion rate," compared with Bluesky and X, the Boston Globe's VP of Platforms Mark Karolian recently shared on Threads.
While Threads' growth so far hasn't been hampered by its inability to drive users off-platform, it could become an increasingly important issue for Meta if it really wants to bring more creators onto the platform. The company is also getting ready to flip the switch on ads in the app. A user base that ignores links could complicate Meta's pitch to advertisers, who are already taking a cautious approach to Threads. Meta declined to comment.
Haberman says that Threads' ambivalence toward links might be symptomatic of a larger identity crisis the platform is still facing. It has a large user base, but it's not always clear who Threads is really for. It isn't known as a destination to follow breaking news, like Twitter once was, or as a place with highly-engaged subcultures, she notes. "Threads needs to have a purpose," she says. "And right now, it seems very much like a suggestion box at work where people are just filing complaints and trauma dumping."
Whether smaller platforms like Threads can reliably drive traffic to websites is an increasingly urgent question. At a time when online search feels like it's getting worse, AI is rapidly replacing many searches and cannibalizing websites' search traffic. Publishers, as The Wall Street Journal recently reported, are being hit especially hard by these shifts.
Threads is extremely unlikely to fill those gaps on its own, even if referral traffic vastly improves. And publishers in particular have plenty of reasons not to become too reliant on a Meta-owned platform. At the same time, there's clearly an opportunity for Threads to play a bigger role in a post-search world. That would not only benefit the creators, publishers and small business owners Meta has long courted, but could also help Threads establish an identity of its own.
Have a tip for Karissa? You can reach her by email, on X, Bluesky, Threads, or send a message to @karissabe.51 to chat confidentially on Signal.
This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-users-still-barely-click-links-170139103.html?src=rss
CANADA - 2025/01/27: In this photo illustration, the Threads logo is seen displayed on a smartphone screen. (Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images)
More than six months after TikTok was briefly banned, we still don't know exactly what its fate in the US will be. But we do have new insight into the legal wrangling that has allowed Apple, Google and other platforms to continue to support the app.
If you remember, TikTok was only "banned" for a matter of hours shortly before President Donald Trump took office in January and delayed enforcement of the law. The app's service was promptly restored January 19, 2025, but the app didn't return to Apple and Google's app stores until February 13. Reporting at the time suggested the companies had lingering concerns about potential liability for running afoul of the Protecting Americans from Foreign Adversary Controlled Applications Act.
Back in February, Axios and others reported that the Justice Department had given "assurances" to tech platforms that they wouldn't be penalized for violating the law. Now, we know exactly what Attorney General Pam Bondi told the companies, as letters sent to Apple, Google, Amazon, Oracle and other firms have been made public. The letters were disclosed in response to a Freedom of Information Act request made by Tony Tan, a software engineer and Google shareholder suing the search giant for not complying with the TikTok ban.
In a letter dated January 30, 2025, Bondi tells Apple and Google that "the President has determined that an abrupt shutdown of the TikTok platform would interfere with the execution of the President's constitutional duties to take care of the national security and foreign affairs of the United States." It goes on to state that Apple and Google "may continue to provide services to TikTok ... without incurring any legal liability."
In a follow-up letter dated April 5, 2025 (the day after Trump gave TikTok another 75-day reprieve), Bondi told the companies that "the Department of Justice is also irrevocably relinquishing any claims the United States might have had against" them "for the conduct proscribed in the Act during the Covered Period and Extended Covered Period, with respect to TikTok and the larger family of ByteDance Ltd. and TikTok, Inc. applications covered under the Act."
The letters can be read in full below.
The law has now been paused three times since Trump took office. Earlier this week, he said that details about TikTok's new ownership could be made public in "about two weeks."
This article originally appeared on Engadget at https://www.engadget.com/big-tech/here-are-the-letters-that-let-apple-and-google-ignore-the-tiktok-ban-220630588.html?src=rss
UNITED KINGDOM - 2025/01/25: In this photo illustration the TikTok app is displayed on a smartphone screen. TikTok users have reported seeing less livestreams and more content being removed or flagged after the US ban. (Photo Illustration by David Tramontan/SOPA Images/LightRocket via Getty Images)
As Meta's platforms fill up with more AI-generated content, the company still has a lot of work to do when it comes to enforcing its policies around manipulated media. The Oversight Board is once again criticizing the social media company over its handling of such posts, writing in its latest decision that Meta's inability to enforce its own rules consistently is "incoherent and unjustifiable."
If that sounds familiar, it's because this is the second time since last year the Oversight Board has used the word "incoherent" to describe Meta's approach to manipulated media. The board had previously urged Meta to update its rules after a misleadingly edited video of Joe Biden went viral on Facebook. In response, Meta said it would expand its use of labels to identify AI-generated content and that it would apply more prominent labels in "high risk" situations. These labels, like the one below, note when a post was created or edited using AI.
An example of the label Meta applies when it determines a piece of AI-manipulated content is "high risk."
Screenshot (Meta)
This approach is still falling short, though, the board said. "The Board is concerned that, despite the increasing prevalence of manipulated content across formats, Meta's enforcement of its manipulated media policy is inconsistent," it said in its latest decision. "Meta's failure to automatically apply a label to all instances of the same manipulated media is incoherent and unjustifiable."
The statement came in a decision related to a post that claimed to feature audio of two politicians in Iraqi Kurdistan. The supposed "recorded conversation" included a discussion about rigging an upcoming election and other "sinister plans" for the region. The post was reported to Meta for misinformation, but the company closed the case "without human review," the board said. Meta later labeled some instances of the audio clip but not the one originally reported.
The case, according to the board, is not an outlier. Meta apparently told the board that it can't automatically identify and apply labels to audio and video posts, only to "static images." This means multiple instances of the same audio or video clip may not get the same treatment, which the board notes could cause further confusion. The Oversight Board also criticized Meta for often relying on third-parties to identify AI-manipulated video and audio, as it did in this case.
"Given that Meta is one of the leading technology and AI companies in the world, with its resources and the wide usage of Metaβs platforms, the Board reiterates that Meta should prioritize investing in technology to identify and label manipulated video and audio at scale," the board wrote. "It is not clear to the Board why a company of this technical expertise and resources outsources identifying likely manipulated media in high-risk situations to media outlets or Trusted Partners."
In its recommendations to Meta, the board said the company should adopt a "clear process" for consistently labeling "identical or similar content" in situations when it adds a "high risk" label to a post. The board also recommended that these labels appear in a language that matches the user's settings on Facebook, Instagram and Threads.
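The process the board is asking for is conceptually simple: once one copy of a clip has been labeled, store a fingerprint of it and automatically attach the same label to any later upload whose fingerprint matches. Here's a minimal, hypothetical Python sketch of that flow, not a description of Meta's systems; the fingerprint function is deliberately left abstract, because robustly matching re-encoded audio and video (rather than byte-identical copies) is the hard part the board wants Meta to invest in.

```python
import hashlib
from typing import Callable, Dict, Hashable, Optional


class LabelPropagator:
    """Illustrative only: propagate a "high risk" label from one reviewed clip
    to other uploads with a matching fingerprint."""

    def __init__(self, fingerprint: Callable[[bytes], Hashable]):
        self.fingerprint = fingerprint
        self.labels: Dict[Hashable, str] = {}

    def apply_label(self, media: bytes, label: str) -> None:
        # Record a label (e.g. from human review) against the media's fingerprint.
        self.labels[self.fingerprint(media)] = label

    def label_for(self, media: bytes) -> Optional[str]:
        # Return the label to attach to a new upload, if its fingerprint is known.
        return self.labels.get(self.fingerprint(media))


# Exact hashing only catches byte-identical copies; a production system would
# swap in a perceptual or audio fingerprint to survive re-encoding and edits.
propagator = LabelPropagator(lambda b: hashlib.sha256(b).hexdigest())
clip = b"...audio bytes of the reported recording..."
propagator.apply_label(clip, "high risk: digitally created or altered")
print(propagator.label_for(clip))  # the same clip posted elsewhere gets the label
```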
Meta didn't respond to a request for comment. The company has 60 days to respond to the board's recommendations.
This article originally appeared on Engadget at https://www.engadget.com/social-media/the-oversight-board-calls-metas-uneven-ai-moderation-incoherent-and-unjustifiable-100056893.html?src=rss
CANADA - 2025/01/19: In this photo illustration, the Meta Platforms, Inc. logo is seen displayed on a smartphone screen. (Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images)