NotebookLM is undoubtedly one of Google's best implementations of generative AI technology, giving you the ability to explore documents and notes with a Gemini AI model. Last year, Google added the ability to generate so-called "audio overviews" of your source material in NotebookLM. Now, Google has brought those fake AI podcasts to search results as a test. Instead of clicking links or reading the AI Overview, you can have two nonexistent people tell you what the results say.
This feature is not rolling out widely yet—it's available in Search Labs, which means you have to enable it manually. Anyone can opt in to the new Audio Overview search experience, though. If you join the test, you'll quickly see the embedded player in Google search results. However, it's not at the top with the usual block of AI-generated text. Instead, it appears after the first few search results, below the "People also ask" knowledge graph section.
As artificial intelligence has advanced, AI tools have emerged to make it possible to easily create digital replicas of lost loved ones, which can be generated without the knowledge or consent of the person who died.
Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. For some mourners, chatting with them feels like a close approximation of ongoing interaction with the people they love most. But the tech remains controversial, potentially complicating the grieving process while threatening to infringe on the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.
Because of these suspected harms, and perhaps a general revulsion at the idea, not everybody wants to become an AI ghost.
When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google's AI search results are spreading misinformation claiming the incident involved an Airbus plane—it was actually a Boeing 787.
Travelers are more attuned to the airliner models these days after a spate of crashes involving Boeing's 737 lineup several years ago. Searches for airline disasters are sure to skyrocket in the coming days, with reports that more than 200 passengers and crew lost their lives in the Air India Flight 171 crash. The way generative AI operates means some people searching for details may get the wrong impression from Google's results page.
Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We've run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup of both Airbus and Boeing. It's a mess.
Meta is making a $14.3 billion investment in artificial intelligence company Scale and recruiting its CEO, Alexandr Wang, to join a team developing “superintelligence” at the tech giant.
The deal announced Thursday reflects a push by Meta CEO Mark Zuckerberg to revive AI efforts at the parent company of Facebook and Instagram as it faces tough competition from rivals such as Google and OpenAI.
Meta announced what it called a “strategic partnership and investment” with Scale late Thursday. Scale said the $14.3 billion investment puts its market value at over $29 billion.
Scale said it will remain an independent company but the agreement will “substantially expand Scale and Meta’s commercial relationship.” Meta will hold a 49% stake in the startup.
Wang, though leaving for Meta with a small group of other Scale employees, will remain on Scale’s board of directors. Replacing him as interim CEO is Jason Droege, previously the company’s chief strategy officer, who has held executive roles at Uber Eats and Axon.
Zuckerberg’s increasing focus on the abstract idea of “superintelligence” — which rival companies call artificial general intelligence, or AGI — is the latest pivot for a tech leader who in 2021 went all-in on the idea of the metaverse, changing the company’s name and investing billions into advancing virtual reality and related technology.
It isn’t the first time since ChatGPT’s 2022 debut sparked an AI arms race that a big tech company has gobbled up the talent and products of an innovative AI startup without formally acquiring it. Microsoft hired key staff from startup Inflection AI, including co-founder and CEO Mustafa Suleyman, who now runs Microsoft’s AI division.
Google pulled in the leaders of AI chatbot company Character.AI, while Amazon made a deal with San Francisco-based Adept that sent its CEO and key employees to the e-commerce giant. Amazon also got a license to Adept’s AI systems and datasets.
Wang was a 19-year-old student at the Massachusetts Institute of Technology when he and co-founder Lucy Guo started Scale in 2016.
They won influential backing that summer from the startup incubator Y Combinator, which was led at the time by Sam Altman, now the CEO of OpenAI. Wang dropped out of MIT, following a trajectory similar to that of Zuckerberg, who quit Harvard University to start Facebook more than a decade earlier.
Scale’s pitch was to supply the human labor needed to improve AI systems, hiring workers to draw boxes around a pedestrian or a dog in a street photo so that self-driving cars could better predict what’s in front of them. General Motors and Toyota have been among Scale’s customers.
What Scale offered to AI developers was a more tailored version of Amazon’s Mechanical Turk, which had long been a go-to service for matching freelance workers with temporary online jobs.
More recently, the growing commercialization of AI large language models — the technology behind OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama — brought a new market for Scale’s annotation teams. The company claims to service “every leading large language model,” including those from Anthropic, OpenAI, Meta and Microsoft, by helping to fine-tune their training data and test their performance. It’s not clear what the Meta deal will mean for Scale’s other customers.
Wang has also sought to build close relationships with the U.S. government, winning military contracts to supply AI tools to the Pentagon and attending President Donald Trump’s inauguration. The head of Trump’s science and technology office, Michael Kratsios, was an executive at Scale for the four years between Trump’s first and second terms. Meta has also begun providing AI services to the federal government.
Meta has taken a different approach to AI than many of its rivals, releasing its flagship Llama system for free as an open-source product that enables people to use and modify some of its key components. Meta says more than a billion people use its AI products each month, but it’s also widely seen as lagging behind competitors such as OpenAI and Google in encouraging consumer use of large language models, also known as LLMs.
It hasn’t yet released its purportedly most advanced model, Llama 4 Behemoth, despite previewing it in April as “one of the smartest LLMs in the world and our most powerful yet.”
Meta’s chief AI scientist Yann LeCun, who in 2019 was a winner of computer science’s top prize for his pioneering AI work, has expressed skepticism about the tech industry’s current focus on large language models.
“How do we build AI systems that understand the physical world, that have persistent memory, that can reason and can plan?” LeCun asked at a French tech conference last year.
These are all characteristics of intelligent behavior that large language models “basically cannot do, or they can only do them in a very superficial, approximate way,” LeCun said.
Instead, he emphasized Meta’s interest in “tracing a path towards human-level AI systems, or perhaps even superhuman.” When he returned to France’s annual VivaTech conference again on Wednesday, LeCun dodged a question about the pending Scale deal but said his AI research team’s plan has “always been to reach human intelligence and go beyond it.”
“It’s just that now we have a clearer vision for how to accomplish this,” he said.
LeCun co-founded Meta’s AI research division more than a decade ago with Rob Fergus, a fellow professor at New York University. Fergus later left for Google but returned to Meta last month after a five-year absence to run the research lab, replacing longtime director Joelle Pineau.
Fergus wrote on LinkedIn last month that Meta’s commitment to long-term AI research “remains unwavering” and described the work as “building human-level experiences that transform the way we interact with technology.”
Polly Pocket may one day be your digital assistant.
Mattel Inc., the maker of Barbie dolls and Hot Wheels cars, has signed a deal with OpenAI to use its artificial intelligence tools to design and in some cases power toys and other products based on its brands.
The collaboration is at an early stage, and its first release won’t be announced until later this year, Brad Lightcap, OpenAI’s chief operating officer, and Josh Silverman, Mattel’s chief franchise officer, said in a joint interview. The technology could ultimately result in the creation of digital assistants based on Mattel characters, or be used to make toys and games like the Magic 8 Ball or Uno even more interactive.
“We plan to announce something towards the tail end of this year, and it’s really across the spectrum of physical products and some experiences,” Silverman said, declining to comment further on the first product. “Leveraging this incredible technology is going to allow us to really reimagine the future of play.”
Mattel shares rose 1.8% to $19.59 Thursday morning in New York. The stock is up 10% this year.
Mattel isn’t licensing its intellectual property to OpenAI as part of the deal, Silverman said, and remains in full control of the products being created. Introductory talks between the two companies began late last year, he said.
Mattel Chief Executive Officer Ynon Kreiz has been looking to evolve the company from just a toy manufacturer into a producer of films, TV shows and mobile games based on its popular characters. OpenAI, meanwhile, has been courting companies with valuable intellectual property to aid them in developing new products based on iconic brands.
“The idea exploration phase of creative design for companies like Mattel and many others, that’s a critical part of the workflow,” Lightcap said. “As we think about how AI builds tools that extend that capability, I think we’re very lucky to have partners like Mattel that we can work with to better understand that problem.”
On Tuesday, OpenAI released its newest model — o3-pro — which can analyze files, search online and complete other tasks; the company said it scored especially well with reviewers on “comprehensiveness, instruction-following and accuracy.”
OpenAI held meetings in Los Angeles with Hollywood studios, media executives and talent agencies last year to form partnerships in the entertainment industry and encourage filmmakers to integrate its new AI video generator into their work. In the meetings, led by Lightcap, the company demonstrated the capabilities of Sora, a service that at the time generated realistic-looking videos up to about a minute in length based on text prompts from users. OpenAI has not struck any deals with movie studios yet because it still has to establish a “level of trust” with Hollywood, Lightcap said in May at a Wall Street Journal conference in New York.
Tesla Inc. sued a former engineer with the company’s highly secretive Optimus program, accusing him of stealing confidential information about the humanoid robot and setting up a rival startup in Silicon Valley.
Zhongjie “Jay” Li worked at Tesla between August 2022 and September 2024, according to a complaint filed in San Francisco federal court late Wednesday. Li worked on “advanced robotic hand sensors—and was entrusted with some of the most sensitive technical data in the program,” Tesla’s lawyers said in the complaint.
The suit, also filed against his company Proception Inc., alleges that in the weeks before his departure, Li downloaded Optimus-related files onto two personal smartphones and then formed his own firm.
“Less than a week after he left Tesla, Proception was incorporated,” according to the complaint. “And within just five months, Proception publicly claimed to have ‘successfully built’ advanced humanoid robotic hands—hands that bear a striking resemblance to the designs Li worked on at Tesla.”
Li, who lists himself as founder and CEO of Proception on LinkedIn, didn’t respond to requests for comment sent on the platform outside of normal working hours. The company didn’t immediately respond to an emailed message seeking comment or a message sent through its website. Proception is based in Palo Alto, California.
Attorneys for Li or the company weren’t immediately listed in court filings.
Making a hand as dexterous as a human one is among the biggest challenges in robotics. Tesla intends for Optimus to perform a range of tasks, from working in the electric automaker’s factories to handling everyday chores like grocery shopping and babysitting. On Tesla’s earnings call in January, CEO Elon Musk said that Optimus has the most sophisticated hand ever made.
“My prediction long-term is that Optimus will be overwhelmingly the value of the company,” Musk said.
An exhibit to the complaint includes an emailed reminder to the Optimus team from August 2024 telling staff that Tesla IT assets and networks are monitored and that “incidents of mishandling or suspected theft of Tesla property, including data and code, will be thoroughly investigated.”
Li’s “conduct is not only unlawful trade secret misappropriation — it also constitutes a calculated effort to exploit Tesla’s investments, insights, and intellectual property for their own commercial gain,” Tesla’s lawyers said in the filing.
Milan Kovac, the head of engineering for Optimus, left Tesla last week, Bloomberg first reported. Ashok Elluswamy, who leads Tesla’s Autopilot teams, will take over responsibility for Optimus.
The case is Tesla, Inc. v. Proception, Inc. et al, Docket No. 5:25-cv-04963 (N.D. Cal. Jun 11, 2025).
Welcome to Eye on AI! In this edition…Disney and Universal join forces in lawsuit against AI image creator Midjourney…France’s Mistral gets a business boost thanks to fears over US AI dominance…Google names DeepMind’s Kavukcuoglu to lead AI-powered product development.
Mark Zuckerberg is rumored to be personally recruiting — reportedly at his homes in Lake Tahoe and Palo Alto — for a new 50-person “Superintelligence” AI team at Meta meant to gain ground on rivals like Google and OpenAI. The plan includes hiring a new head of AI research to work alongside Scale AI CEO Alexandr Wang, who is being brought in as part of a plan to invest up to $15 billion for a 49% stake in the training data company.
On the surface, it might appear that Zuckerberg could easily win this war for AI talent by writing the biggest checks.
And the checks Zuck is writing are, by all accounts, huge. Deedy Das, a VC at Menlo Ventures, told me that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor,” he said (a number that one AI researcher told me was “not outrageous at all” and “is likely low in certain sub-areas like LLM pre-training,” though most of the compensation would be in the form of equity). Later, on LinkedIn, Das went further, claiming that for candidates working at a big AI lab, “Zuck is personally negotiating $10M+/yr in cold hard liquid money. I’ve never seen anything like it.”
Some of these pro athlete-level offers are working. According to Bloomberg, Jack Rae, a principal researcher at Google DeepMind, is expected to join Meta’s “superintelligence” team, while it said Meta has also recruited Johan Schalkwyk, a machine learning lead at AI voice startup Sesame AI.
But money alone may not be enough to build the kind of AI model shop Meta needs. According to Das, several researchers have turned down Zuckerberg’s offers and taken roles at OpenAI and Anthropic instead.
There are several issues at play: For one thing, there simply aren’t that many top AI researchers, and many of them are happily ensconced at OpenAI, Anthropic, or Google DeepMind with high six- or low seven-figure salaries and access to all the computing capacity they could want. In a March Fortune article, I argued that companies are tracking top AI researchers and engineers like prized assets on the battlefield. The most intense fight is over a small pool of AI research scientists — estimated to be fewer than 1,000 individuals worldwide, according to several industry insiders Fortune spoke with — with the qualifications to build today’s most advanced large language models.
“In general, all these companies very closely watch each others’ compensation, so on average it is very close,” said Erik Meijer, a former senior director of engineering at Meta who left last year. However, he added that Meta uses “additional equity” which is a “special kind of bonus to make sure compensation is not the reason to leave.”
Beyond the financial incentives, personal ties to leading figures and adherence to differing philosophies about artificial intelligence have lent a tribal element to Silicon Valley’s AI talent wars. More than 19 OpenAI employees followed Mira Murati to her startup Thinking Machines earlier this year, for example. Anthropic was founded in 2021 by former OpenAI employees who disagreed with their employer’s strategic direction.
Das, however, said it really depends on the person. “I’d say a lot more people are mercenary than they let on,” he said. “People care about working with smart people and they care about working on products that actually work but they can be bought out if the price is right.” But for many, “they have too much money already and can’t be bought.”
Meta’s own sweeping layoffs earlier this year could also sour the market for AI talent, some told me. “I’ve decided to raise the bar on performance management and move out low-performers faster,” Zuckerberg said in an internal memo back in January. The memo said Meta planned to increasingly focus on developing AI, smart glasses and the future of social media. Following the memo, about 3,600 employees were laid off—roughly 5% of Meta’s workforce.
One AI researcher told me that he had heard about Zuckerberg’s high-stakes offers, but that people don’t trust Meta after the “weedwacker” layoffs.
Meta’s existing advanced AI research team, FAIR (Fundamental AI Research), has increasingly been sidelined in the development of Meta’s Llama AI models and has lost key researchers. Joelle Pineau, who had been leading FAIR, announced her departure in April. Most of the researchers who developed Meta’s original Llama model have left, including two cofounders of French AI startup Mistral. And a trio of top AI researchers left a year ago to found AI agent startup Yutori.
Finally, there are hard-to-quantify issues, like prestige. Meijer expressed doubt that Meta could produce AI products that experts in the field would perceive as embodying breakthrough capabilities. “The bitter truth is that Meta does not have any leaders that are good at bridging research and product,” he said. “For a long time Reality Labs and FAIR could do their esoteric things without being challenged. But now things are very different and companies like Anthropic, OpenAI, Google, Mistral, DeepSeek excel at pushing out research into production at record pace, and Meta is left standing on the sidelines.”
In addition, he said, huge salaries and additional equity “will not stick if the company feels unstable or if it is perceived by peers as a black mark on your resume. Prestige compounds, that is why top people self-select into labs like DeepMind, OpenAI, or Anthropic. Aura is not for sale.”
That’s not to say that Zuck’s largesse won’t land him some top AI talent. The question is whether it will be enough to deliver the AI product wins Meta needs.
With that, here’s the rest of the AI news.
Sharon Goldman
[email protected]
@sharongoldman
Jeannie Kim went to sleep thinking about budgets and enrollment challenges. She woke up to discover her college had been invaded by an army of phantom students.
“When we got hit in the fall, we got hit hard,” Kim, president of California’s Santiago Canyon College, told Fortune. “They were occupying our wait lists, and they were in our classrooms as if they were real humans—and then our real students were saying they couldn’t get into the classes they needed.”
Kim worked quickly to bring in an AI firm to help protect the college and strengthen its guardrails, she said. Santiago Canyon wound up dropping more than 10,000 enrollments representing thousands of students who were not really students, said Kim. By spring 2025, ghost student enrollments had dropped from 14,000 at the start of the spring term to fewer than 3,000.
Across America’s community colleges and universities, sophisticated criminal networks are using AI to deploy thousands of “synthetic” or “ghost” students—sometimes in the dead of night—to attack colleges. The hordes are cramming themselves into registration portals to enroll and illegally apply for financial aid. The ghost students then occupy seats meant for real students—and have even resorted to handing in homework just to hold out long enough to siphon millions in financial aid before disappearing.
The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House’s Office of the Federal Chief Information Officer, told Fortune that at some schools, more than half the students registering for classes have been found to be illegitimate. Among Socure’s client base, between 20% and 60% of student applicants are ghosts.
“Imagine a world where 20% of the student population are fraudulent,” said Burris. “That’s the reality of the scale.”
At one college, more than 400 different financial-aid applications could be tracked back to a handful of recycled phone numbers. “It was a digital poltergeist effectively haunting the school’s enrollment system,” said Burris.
The scheme has also proved incredibly lucrative. A Department of Education advisory revealed that about $90 million in aid was doled out to ineligible students, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time applicants submitting Free Application for Federal Student Aid (FAFSA) forms.
“Every dollar stolen by a ghost is a dollar denied to a real student attempting to change their life,” Burris explained. “That’s a misallocation of public capital we really can’t afford.”
The strikes tend to unfold in the quiet evening hours when campuses are asleep, and with surgical precision, explained Laqwacia Simpkins, CEO of AMSimpkins & Associates, an edtech firm that works with colleges and universities to verify student identities with a fraud-detection platform called SAFE.
Bryce Pustos, director of administrative systems at Chaffey Community College, recalled last fall’s enrollment period when faculty members reported going to bed with zero students registered for classes and waking up to find a full class and a mile-long wait list.
Michael Fink, Chaffey’s chief technology officer, said the attacks took place at scale and within minutes. “We’ll see things like 50 applications coming in within two seconds and then somebody enrolling in all 36 seats in a class within the first minute,” Fink told Fortune.
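For illustration only, here is a minimal sketch of how a registrar’s system might surface that kind of burst. The `BurstDetector` class, its thresholds, and its interface are hypothetical, not a description of Chaffey’s actual tooling:

```python
from collections import deque

class BurstDetector:
    """Flag application bursts like "50 applications within two seconds".

    The window and threshold below are illustrative values, not ones
    used by any school quoted in this story.
    """

    def __init__(self, window_secs=2.0, max_events=50):
        self.window_secs = window_secs
        self.max_events = max_events
        self._timestamps = deque()  # arrival times (seconds) inside the window

    def record(self, ts):
        """Record one application at time ts; return True if a burst is underway."""
        self._timestamps.append(ts)
        # Evict events that have aged out of the sliding window.
        while self._timestamps and ts - self._timestamps[0] > self.window_secs:
            self._timestamps.popleft()
        return len(self._timestamps) >= self.max_events
```

In practice, a flagged batch would presumably be held for identity review rather than rejected outright, since legitimate rushes (say, the moment enrollment opens) can also produce spikes.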
Simpkins told Fortune the scammers have learned to strike on vulnerable days in the academic calendar, around holidays, enrollment deadlines, culmination, or at the start or end of term when staff are already stretched thin or systems are more loosely monitored.
“They push through hundreds and thousands of records at the same time and overwhelm the staff,” Simpkins said.
Plus, enrollment workers and faculty are just that, noted Simpkins: educators who aren’t trained to detect fraud. Their remit is access, ensuring real students can get into the classes they need, not policing fake students trying to trick their way to illicit financial gain. That, too, makes institutions more vulnerable, she said.
“These are people who are admissions counselors who process applications and want to be able to admit students and give everybody an equal chance at an education,” she said.
Sadly, professors have dealt with cruel whiplash from the attacks, noted John Van Weeren, vice president of higher education at IT consulting firm Voyatek.
“One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section,” recalled Van Weeren. “When we worked with them as the first week of class was ongoing, we found out they were not real people.”
In a nightmare twist, community and technical colleges are seen as low-hanging fruit for this fraud scheme precisely because they are designed to serve local communities and the public with as few barriers to entry as possible. Community colleges are often required to accept every eligible student and typically don’t charge application fees. While financial-aid fraud isn’t new, the fraud rings themselves have evolved from pandemic-era cash grabs and boogeymen in their mom’s basement, said Burris.
“There is an acceleration due to the proliferation of these automated technologies,” he said. “These are organized criminal enterprises—fraud rings—that are coming both from within the U.S., but also internationally.”
Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges.
The attacks specifically zero in on coursework that maximizes financial-aid eligibility, said Mike McCandless, vice president of student services at Merced College. Social sciences and online-only classes with large numbers of students that allow for as many credits or units as possible are often choice picks, he said.
For the spring semester, Merced booted about half of its 15,000 initial registrations because they were fraudulent. Among the next tranche of about 7,500, some 20% were caught and removed from classes, freeing up space for real students.
In addition to financial theft, the ghost student epidemic is causing real students to get locked out of classes they need to graduate. Oftentimes, students have planned their work or childcare schedule around classes they intend to take—and getting locked out has led to a cascade of impediments.
“When you have fraudulent people taking up seats in classes, you have actual students who need to take those classes who can’t now, and it’s a barrier,” said Pustos.
The scheme continues to evolve, however, requiring constant changes to the algorithms schools are using to detect ghost students and prevent them from applying for financial aid—making the problem all the more explosive. Multiple school officials and cybersecurity experts interviewed by Fortune were reluctant to disclose the current signs of ghost students, for fear of the scheme further iterating.
In the past 18 months, schools have blocked thousands of bot applicants because they originated from the same mailing address, used hundreds of near-identical email addresses differing by a single digit, or registered with phone numbers and email addresses created moments before applying.
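As a hedged sketch of that single-digit-difference signal (hypothetical code, not any school’s or vendor’s actual system), one crude approach is to collapse the digits in each applicant email and look for suspiciously large clusters:

```python
import re
from collections import defaultdict

def cluster_similar_emails(emails, min_cluster=5):
    """Group emails that become identical once their digits are collapsed.

    Large clusters (applicant1@, applicant2@, ...) are a crude signal of
    bulk-generated accounts; min_cluster is an illustrative threshold.
    """
    clusters = defaultdict(list)
    for email in emails:
        # "jsmith1@example.com" and "jsmith2@example.com" share one key.
        key = re.sub(r"\d+", "#", email.strip().lower())
        clusters[key].append(email)
    return {k: v for k, v in clusters.items() if len(v) >= min_cluster}

if __name__ == "__main__":
    sample = [f"applicant{i}@example.com" for i in range(8)]
    sample.append("real.person@example.com")
    for pattern, members in cluster_similar_emails(sample).items():
        print(pattern, "->", len(members), "near-duplicate addresses")
```

A real screen would combine several such signals (shared mailing addresses, just-created phone numbers, and so on) and weigh them, since any single heuristic will also catch some legitimate applicants.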
Maurice Simpkins noted an uptick this year in the use of stolen American identities as more schools have engaged in hand-to-hand combat with the fraud rings. He has seen college graduates whose stolen identities were used to reenroll them at their former university, or whose old school email addresses were used to enroll at another institution.
Scammers are also using bizarre-looking short-term and disposable email addresses, some valid for just 10 minutes, to register for classes until they can get their hands on a .edu email address, said Simpkins. That verified email address is “like a gold bar,” Simpkins explained. The fraudster then appears legitimate going forward, is eligible for student discounts on hardware and software, and can use the college’s cloud storage.
“We had a school that reached out to us because some fraudsters ordered some computers and devices and other materials and then had them delivered overseas,” said Simpkins. “And they did it using an account with the school’s .edu email address.”
McCandless said initially it was easy to tell if a fake student was disguised as a local applicant because their IP address was generated overseas. But just a few semesters later, IP addresses were local. When the college’s tech team looked deeper, they would find the address was from an abandoned building or somewhere in the middle of Lake Merced.
Every time the school did something to lock out fraudulent applicants, the scammers would learn and tweak, McCandless said. The school’s system is now designed to block ghost applicants right out of the gate and at multiple stages before they start enrolling in classes.
McCandless said professors are assigning students homework for the first day of class, but the ghost students are completing the assignments with AI. Faculty have caught the fake homework, however, by noticing that half the class handed in identical work, or detecting the use of ChatGPT, for instance.
“They’re very innovative, very good at what they do,” said McCandless. “I just think the consistency with which they continue to learn and improve—it’s a multimillion-dollar scheme, there’s money there, why wouldn’t you invest in it?”
According to the DOE, the rate of financial fraud through stolen identities has reached a level that “imperils the federal student assistance programs under Title IV of the Higher Education Act.” In a statement, Secretary of Education Linda McMahon said the new temporary fix will help prevent identity theft fraud.
“When rampant fraud is taking aid away from eligible students, disrupting the operations of colleges, and ripping off taxpayers, we have a responsibility to act,” said McMahon.
Ultimately, what schools are trying to do is put in place hurdles that make it unappealing for scammers to attack because they have to do more front-end work to make the fraud scheme efficient, explained Jesse Gonzalez, assistant vice chancellor of IT services for Rancho Santiago Community College District. However, the schools are attempting to balance the delicate issue of accepting everyone eligible and remaining open to vulnerable or undocumented students, he said. “The more barriers you put in place, the more you’re going to impact students, and it’s usually the students who need the most help.”
Kim, of Santiago Canyon College, fears that too many anti-fraud measures could make it more difficult for students and members of the community—who for various reasons might have a new email, phone number, or address—to access education and other resources that can help them improve their lives.
“Our ability to provide that democratic education to those that would not otherwise have access is at stake, and it’s in jeopardy because of these bad actors turning our system into their own piggy banks,” said Kim. “We have to continue to figure out ways to keep them out so the students can have those rightful seats—and keep it open-access.”
Jensen Huang is not on board with some of Anthropic CEO Dario Amodei’s predictions about advanced AI. Responding to a question about Amodei’s recent prediction that AI could automate up to half of all entry-level office jobs within five years, Huang said he “pretty much disagree[d] with almost everything” his fellow AI CEO says.
“One, he believes that AI is so scary that only they should do it,” Huang said of Amodei at a press briefing at Viva Technology in Paris. “Two, [he believes] that AI is so expensive, nobody else should do it … And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it.
“I think AI is a very important technology; we should build it and advance it safely and responsibly,” Huang continued. “If you want things to be done safely and responsibly, you do it in the open … Don’t do it in a dark room and tell me it’s safe.”
Anthropic was founded by Amodei and other former OpenAI employees in 2021 with safety as one of its core missions. Many of Anthropic’s founding team reportedly left OpenAI owing to disagreements about the direction and safety culture at the company.
Amodei has made several public statements about his belief in the potential existential risks of AI. He’s said that he believes humanity may one day lose control of AI systems if they become smarter than humans. He’s also raised concerns about rogue actors weaponizing advanced AI to create bioweapons, engineer cyberattacks, or unleash tools of mass disruption long before machines surpass human intelligence.
More recently, in an interview with Axios, he predicted AI could wipe out roughly 50% of all entry-level white-collar jobs and urged lawmakers to prepare now to protect people’s livelihoods.
Huang acknowledged that the tech may have some impact on employees, but dismissed Amodei’s recent bold claim.
“Everybody’s jobs will be changed. Some jobs will be obsolete, but many jobs are going to be created … Whenever companies are more productive, they hire more people,” he said.
Anthropic did not immediately respond to a request for comment from Fortune.
Huang made the comments in a press briefing following Nvidia’s GTC Paris conference, where the company announced a new partnership with French startup Mistral as part of a push to develop European computing capacity.
Huang said Nvidia had more than 20 “AI factories” in the works across the continent, promising European researchers and startups that their “GPU shortage will be resolved” soon.
The CEO also touched on Nvidia’s quantum computing efforts, spotlighting Nvidia’s hybrid quantum-classical platform, CUDA-Q, and claiming that quantum computing is hitting an “inflection point.” Huang said that the tech could start solving real-world problems in the next few years.