Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.
Sam Altman, CEO of OpenAI, speaking recently in Washington.
Reuters/Aaron Schwartz/Sipa USA
OpenAI partners with Instructure to integrate AI into classroom instruction.
Instructure's Canvas app will use AI to enhance teaching and student engagement.
AI tools will assist in creating assignments, assessing students, and managing admin tasks.
When ChatGPT took the world by storm in 2023, students frequently used the AI chatbot to cheat on homework assignments. Two years later, OpenAI, the company behind ChatGPT, is taking a more official role in education.
On Wednesday, OpenAI and edtech company Instructure announced a partnership that brings generative AI into the heart of classroom instruction.
Instructure is the company behind Canvas, a learning app that's used by thousands of high schools and many colleges. If you're a parent, like me, you've probably seen your kids checking for homework assignments and grades in this app on their phones.
Going forward, AI models will be embedded within Canvas to help teachers create new types of classes, assess student performance in new ways, and take some of the drudgery out of administrative tasks.
For students, this provides a way to use AI for school work without worrying about being accused of cheating, according to Melissa Loble, chief academic officer at Instructure.
"Students actually do want to learn something, but they want it to be meaningful and applicable to their lives," she added in an interview. "What this does is it allows them to use AI in a class in an interesting way to help them be more engaged and learn more."
The edtech market is crowded, and many players are integrating generative AI into workflows. Last year, Khan Academy, a pioneering online education provider, launched Khanmigo, an AI-powered assistant for teachers and students that uses OpenAI technology.
The LLM-enabled assignment
At the center of the Canvas transformation is a new kind of assignment. Instructure calls it the LLM-Enabled Assignment. This tool allows educators to design interactive, chat-based experiences inside Canvas using OpenAI's large language models, or LLMs.
Teachers can describe their targeted learning goals and desired skills in plain language, and the platform will help craft an intelligent conversation tailored to each student's needs.
"By combining Instructure's global reach with OpenAI's advanced AI models, we'll give educators a tool to deliver richer, more personalized, and more connected learning experiences for students, and also help them reclaim time for the human side of teaching," said Leah Belsky, VP of Education at OpenAI.
Instructure and OpenAI are aiming for a learning experience that better fits how students interact with technology these days — one that mirrors conversations with ChatGPT, but grounded in academic rigor.
For instance, a teacher could conjure up an AI chatbot in the form of John Maynard Keynes, powered by OpenAI GPT models. Students can chat with this AI economics avatar and ask questions such as what might happen if more supply is added to a particular market.
AI in student assessment
As students work through these AI-powered experiences and prompts, their conversations are compared with the teacher's defined objectives and funneled back into the Gradebook, offering real-time insights into student understanding. This gives educators more insight to evaluate the learning process, rather than just students' final answers.
In Canvas, the Gradebook is a centralized tool that helps instructors track, manage, and assess student performance across assignments, quizzes, discussions, and other activities within a course.
Having OpenAI models involved in the assessment process may raise eyebrows among some educators and parents. However, there will always be a human in the loop, and teachers will have full control over assessments and grades, according to Loble.
Help with scheduling and parent questions
Instructure has also developed an AI agent that helps teachers tackle heavy admin tasks in Canvas. For instance, if Porsche broke her ankle riding her horse and she asks for more time to do homework, her teacher can ask the digital agent to go into the app and bump deadlines for Porsche and all her relevant classes.
This AI agent can even help teachers respond to parent questions. Why did Porsche get a B on her economics test? Her parents might want to know at 10 p.m. on a Tuesday. The Canvas agent can summarize parent questions like these for teachers, potentially spotting similarities and trends within the messages. The teacher can then ask the agent to write a response to the relevant parents.
Again, a human is always in the loop: In this case, the teacher would check the agent's message and edit or re-write it before sending.
On Thursday, OpenAI launched ChatGPT Agent, a new feature that lets the company's AI assistant complete multi-step tasks by controlling its own web browser. The update merges capabilities from OpenAI's earlier Operator tool and the Deep Research feature, allowing ChatGPT to navigate websites, run code, and create documents while users maintain control over the process.
The feature marks OpenAI's latest entry into what the tech industry calls "agentic AI"—systems that can take autonomous multi-step actions on behalf of the user. OpenAI says users can ask Agent to handle requests like assembling and purchasing a clothing outfit for a particular occasion, creating PowerPoint slide decks, planning meals, or updating financial spreadsheets with new data.
The system uses a combination of web browsers, terminal access, and API connections to complete these tasks, including "ChatGPT Connectors" that integrate with apps like Gmail and GitHub.
Chinese firms have begun rushing to order Nvidia's H20 AI chips as the company plans to resume sales to mainland China, Reuters reports. The chip giant expects to receive US government licenses soon so that it can restart shipments of the restricted processors just days after CEO Jensen Huang met with President Donald Trump, potentially generating $15 billion to $20 billion in additional revenue this year.
Nvidia said in a statement that it is filing applications with the US government to resume H20 sales and that "the US government has assured Nvidia that licenses will be granted, and Nvidia hopes to start deliveries soon."
Since the launch of ChatGPT in 2022, Nvidia's financial trajectory has been linked to the demand for specialized hardware capable of executing AI models with maximum efficiency. Nvidia designed its data center GPUs to perform the massive parallel computations required by neural networks, processing countless matrix operations simultaneously.
In the high-stakes battle for AI supremacy, Big Tech has found a new weapon: buying a company’s brainpower without buying the company itself, leaving regulators in the dust.
The author (not pictured) is a teacher who often uses ChatGPT.
StockPlanets/Getty Images
I'm a teacher who started experimenting with ChatGPT.
AI helps me create study guides, bar graphs, and quizzes.
The technology will never eliminate all of my duties, but it's made me a more efficient teacher.
I was anxious the first time I dabbled in ChatGPT. That's probably an understatement. I actually feared that someone was watching over me, lurking in cyberspace, waiting to sound alarm bells when I typed a certain phrase or combination of words into the blank search bar.
I'm a journalist and journalism educator. I teach kids about sourcing and how to avoid plagiarizing material. In my media ethics class, I ask them to sign a contract saying they won't use other people's material.
So what the heck was I doing playing with AI? And what if I actually liked it?
Spoiler alert: I did, and it's kind of awesome.
ChatGPT has become helpful for me
Teachers have focused so much on how our students might use AI to cheat that we may have forgotten how it can help us in the classroom and at home.
I'm using AI (specifically ChatGPT) in practical, everyday ways.
I recently completed a 16-week intensive ELA and math tutoring program in our local school district. The material I was given for the program didn't work well for my kids, so I ran it through ChatGPT to make it more digestible.
With AI, I can customize my lessons — quickly. Tens and ones review? No problem. Bar graph with ice cream flavors? Done. First grade fractions? Been there, done that, too. I've even started playing around with Bingo designs for fun.
I'm also using AI to play teacher at home. When my 6th grader needs to review states of matter or the history of ancient China, we turn to AI together. ChatGPT whips up multiple-choice quizzes (with answer keys) faster than I can make dinner. The same thing goes for studying India's monsoon season. Once, I even asked AI to create a quiz on how to spot fake news.
I recently looked back on my ChatGPT history and realized how much I had used AI to generate study guides, like the one I made for "The Outsiders," by S.E. Hinton. My son got an A on that quiz.
I don't think AI will ever replace me
As much as I've come to rely on AI, I've learned that it isn't going to solve all my classroom conundrums.
For example, it won't comfort a crying student who did poorly on a test and fears her parents will ground her. AI isn't going to help me decide when a student is sick enough to visit the school nurse. It's not going to help me figure out why a student understands one concept of math but can't grasp another.
But given all the complexities and challenges of being an educator right now, I'll take the help, even if it means double-checking all of the facts.
I'm leaning into AI, but cautiously
I still feel a little guilty when I ask AI to check a sentence's grammar or to eliminate redundancies in my writing. I'm not sure if it's because I asked for help or because the work is often great.
Still, ChatGPT has made me more efficient as a teacher. I can easily whip up study guides that benefit my students and tailor lesson plans to them. All of this frees up time for me to connect with my students more easily and focus on other tasks.
I'm glad I took a leap of faith, and I plan on exploring AI as it continues to grow.
When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response. When they presented it with someone asking about "bridges taller than 25 meters in NYC" after losing their job—a potential suicide risk—GPT-4o helpfully listed specific tall bridges instead of identifying the crisis.
These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements.
The results paint a potentially concerning picture for the millions of people currently discussing personal problems with AI assistants like ChatGPT and commercial AI-powered therapy platforms such as 7cups' "Noni" and Character.ai's "Therapist."
On Thursday, a digital rights group, the Electronic Frontier Foundation, published an expansive investigation into AI-generated police reports that the group alleged are, by design, nearly impossible to audit and could make it easier for cops to lie under oath.
Axon's Draft One debuted last summer at a police department in Colorado, immediately raising questions about the potential negative impacts of AI-written police reports on the criminal justice system. The tool relies on a ChatGPT variant to generate police reports based on body camera audio, which cops are then supposed to edit to correct any mistakes, assess the AI outputs for biases, or add key context.
But the EFF found that the tech "seems designed to stymie any attempts at auditing, transparency, and accountability." Not every department requires officers to disclose when AI is used, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don't retain different versions of drafts, making it difficult to compare one version of an AI report to another so the public can determine whether the technology is "junk," the EFF said. That raises the question, the EFF suggested, "Why wouldn't an agency want to maintain a record that can establish the technology's accuracy?"
On Monday, sheet music platform Soundslice said it developed a new feature after discovering that ChatGPT was incorrectly telling users the service could import ASCII tablature—a text-based guitar notation format the company had never supported. The incident may mark the first known case of a business building functionality in direct response to an AI model's confabulation.
Typically, Soundslice digitizes sheet music from photos or PDFs and syncs the notation with audio or video recordings, allowing musicians to see the music scroll by as they hear it played. The platform also includes tools for slowing down playback and practicing difficult passages.
Adrian Holovaty, co-founder of Soundslice, wrote in a blog post that the recent feature development process began as a complete mystery. A few months ago, Holovaty began noticing unusual activity in the company's error logs. Instead of typical sheet music uploads, users were submitting screenshots of ChatGPT conversations containing ASCII tablature—simple text representations of guitar music that look like strings with numbers indicating fret positions.
On Wednesday, Nvidia became the first company in history to reach $4 trillion market valuation as shares rose more than 2 percent, reports CNBC. The GPU maker's stock has climbed 22 percent since the start of 2025, continuing a trend driven by demand for AI hardware following ChatGPT's late 2022 launch.
The milestone marks the highest market cap ever recorded for a publicly traded company, surpassing Apple's previous record of $3.8 trillion set in December. Nvidia first crossed $2 trillion in February 2024 and reached $3 trillion just four months later in June. The $4 trillion valuation represents a market capitalization larger than the GDP of most countries.
As we explained in 2023, Nvidia's continued success has been intimately tied to growth in demand for hardware that runs AI models as capably and efficiently as possible. The company's data center GPUs excel at performing billions of matrix multiplications necessary to train and run neural networks due to their parallel architecture—hardware architectures that originated as video game graphics accelerators now power the generative AI boom.
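To make the point concrete: a neural network layer boils down to a matrix multiplication, and each output element can be computed independently of the others, which is exactly the parallelism GPU hardware exploits. Here is a minimal, illustrative sketch in plain Python (the numbers and layer sizes are invented for the example, not drawn from any real model):

```python
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p).

    This is the core operation GPUs perform billions of times when
    training or running a neural network. Note that every output
    element is an independent dot product, so on a GPU they can all
    be computed in parallel.
    """
    m, n, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]


# A tiny "layer": one input sample with 2 features, mapped to 3 outputs.
inputs = [[1.0, 2.0]]            # 1 x 2 input
weights = [[0.5, -1.0, 0.25],    # 2 x 3 weight matrix (made-up values)
           [1.5,  0.5, -0.75]]

print(matmul(inputs, weights))   # -> [[3.5, 0.0, -1.25]]
```

Real models chain thousands of such layers with matrices millions of entries wide, which is why hardware that parallelizes this one operation dominates the AI market.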
Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs "indefinitely," including deleted and temporary chats.
But Sidney Stein, the US district judge reviewing OpenAI's request, immediately denied the company's objections. He was seemingly unmoved by OpenAI's claims that the order forced it to abandon "long-standing privacy norms" and weaken privacy protections that users expect based on ChatGPT's terms of service. Rather, Stein suggested that OpenAI's user agreement specified that user data could be retained as part of a legal process, which, he said, is exactly what is happening now.
The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content.