As has been noted, AI is being used to cheat. A lot:
Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

Clio, the Muse of History
He’s stupid. But that’s OK, because he’s young. What studies are showing is that people who use AI too much get stupider.
The study surveyed 666 participants across various demographics to assess the impact of AI tools on critical thinking skills. Key findings included:
- Cognitive Offloading: Frequent AI users were more likely to offload mental tasks, relying on the technology for problem-solving and decision-making rather than engaging in independent critical thinking.
- Skill Erosion: Over time, participants who relied heavily on AI tools demonstrated reduced ability to critically evaluate information or develop nuanced conclusions.
- Generational Gaps: Younger participants exhibited greater dependence on AI tools compared to older groups, raising concerns about the long-term implications for professional expertise and judgment.
The researchers warned that while AI can streamline workflows and enhance productivity, excessive dependence risks creating “knowledge gaps” where users lose the capacity to verify or challenge the outputs generated by these tools.
Meanwhile, AI is hallucinating more and more:
Reasoning models, considered the “newest and most powerful technologies” from the likes of OpenAI, Google and the Chinese start-up DeepSeek, are “generating more errors, not fewer.” The models’ math skills have “notably improved,” but their “handle on facts has gotten shakier.” It is “not entirely clear why.”
If you can’t do the work without AI, you can’t check the AI. You don’t know when it’s hallucinating, and you don’t know when what it’s doing isn’t the best or most appropriate way to do the work. And if you’re totally reliant on AI, well, what do you bring to the table?
Students using AI to cheat are, well, cheating themselves:
It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.” That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”
This isn’t complicated to fix. Instead of relying on essays and unsupervised, out-of-class assignments, instructors are going to have to evaluate knowledge and skills by:
- Oral tests. Ask students questions, one on one, and see whether they can answer and how good their answers are.
- In-class, supervised exams and assignments. No AI aid, with proctors there to make sure of it: can you do the work without help?
The idea that essays and take-home assignments are the way to evaluate students wasn’t handed down from on high, and hasn’t always been the way students’ knowledge was judged.
Now, of course, this is extra work for instructors and the students will whine, but who cares? Those who graduate from such programs (which will also teach how to use AI, not everything has to be done without it), will be more skilled and capable.
Students are always willing to cheat themselves by cheating and not actually learning the material. This is a new way of cheating, but there are old methods which will stop it cold, IF instructors will do the work, and if they can give up the idea, in particular, that essays and at-home assignments are a good way to evaluate work. (They never were, entirely: there was an entire industry for writing other people’s essays, which I assume AI has pretty much killed.)
AI is here, and it requires changes to adapt. That’s all. And unless things change, it isn’t going to replace all workers or any such nonsense: the hallucination problem is serious, and researchers have no idea how to fix it. Right now there is no US company making money on AI; every single query, even from paying clients, costs more to run than it returns.
IF AI delivered reliable results and thus really could replace all workers, if it could fully automate knowledge work, well, companies might be willing to pay a lot more for it. But as it stands right now, I don’t see the maximalist position happening. And my sense is that this particular model of AI, a blend of statistical compression and reasoning, cannot be made reliable, period. A new model is needed.
So, make the students actually do the work, and actually learn, whether they want to or not.
This blog has always been free to read, but it isn’t free to produce. If you’d like to support my writing, I’d appreciate it. You can donate or subscribe by clicking on this link.
Hairhead
We are seeing this realization of the negative effects of our technological revolution(s) in several areas. One area in particular is the use of smartphones by students, particularly in elementary and middle schools. Just this year, BC’s Ministry of Education outright banned the use of cellphones by students during class time across the province. Similar bans and limitations are being initiated and applied in many other countries and smaller jurisdictions.
One can’t help but think of Socrates’ dislike of the written word. “Without a good, strong, developed memory, how can people order their thoughts and come to good conclusions?”, he mused (I paraphrase). What he said was and is true, but we have managed to integrate the use of books into our lives successfully — at least until now, when smartphones are basically the Library of Alexandria, with cat memes, in our pockets.
Ian Welsh
Thing is, smartphones aren’t used as the Library of Alexandria. Alas.
marku52
But AI sort of kinda works. At least it looks like it works. And it lets bosses fire workers and cut pay.
Hence it will be deployed massively, even if it loses money for them. As Kalecki pointed out long ago, capitalists will gladly suffer a loss in profit so long as they maintain power over workers.
One more argument for staying far, far away from any office work. Become a welder or a plumber. AI ain’t going to clear your clogged toilet.
Oakchair
“Meanwhile, AI is hallucinating more
generating more errors,”
—-
For some this is a bug; for some it is a feature.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” —Dune
AI isn’t spewing falsehoods and lies only because it’s garbage in, garbage out. It’s telling you what those who made it want you to hear, think, and believe.
—-
studies are showing is that people who use AI too much get stupider.
—-
In this respect AI is repeating a phenomenon that Ivan Illich detailed in the 1970s in “Deschooling Society” and “Disabling Professions”:
“School prepares people for the alienating institutionalization of life, by teaching the necessity of being taught. Once this lesson is learned, people lose their incentive to develop independently.”
We’ve been conditioned since birth to believe that in order to learn we need to be taught by a teacher. In order to be healthy, we need to follow the doctor’s orders. In order to be moral, we need to do what the priest tells us the Bible says. And so on.
The effect is a populace disabled by the professions. People can’t learn on their own because a teacher needs to do it for them. They can’t improve their health because only the doctor knows how. They can’t repair household appliances because a professional does it for them.
It’s created a fully anti-intellectual society where doing your own research marks one as a crazy, stupid conspiracy theorist. A society filled with people who don’t even try to read the sole clinical trial before taking a novel drug, over and over and over again. Everyone believes not only that they are too stupid to understand, but that trying to do so is a mark of ignorance.
This is a society perfectly suited to be enslaved by “AI” because it is already one whose ability to think and function has been disabled by religious obedience to the “experts”.
AI isn’t providing us with a new ruler; it’s the new face of the same rulers who’ve ruled us for generations.
Mary Bennet
Take-home assignments were one way a plain or awkward person could have their efforts known and appreciated. You still had to work harder than the lookers and the rich goofballs, but at least teach did have to read and grade what you turned in. And your work was hidden from the other students, so it didn’t provoke jealousy.