As has been noted, AI is being used to cheat. A lot:
Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

[Image: Clio, the Muse of History]
He’s stupid. But that’s OK, because he’s young. What studies are showing is that people who use AI too much get stupider:
The study surveyed 666 participants across various demographics to assess the impact of AI tools on critical thinking skills. Key findings included:
- Cognitive Offloading: Frequent AI users were more likely to offload mental tasks, relying on the technology for problem-solving and decision-making rather than engaging in independent critical thinking.
- Skill Erosion: Over time, participants who relied heavily on AI tools demonstrated reduced ability to critically evaluate information or develop nuanced conclusions.
- Generational Gaps: Younger participants exhibited greater dependence on AI tools compared to older groups, raising concerns about the long-term implications for professional expertise and judgment.
The researchers warned that while AI can streamline workflows and enhance productivity, excessive dependence risks creating “knowledge gaps” where users lose the capacity to verify or challenge the outputs generated by these tools.
Meanwhile, AI is hallucinating more and more:
Reasoning models, considered the “newest and most powerful technologies” from the likes of OpenAI, Google and the Chinese start-up DeepSeek, are “generating more errors, not fewer.” The models’ math skills have “notably improved,” but their “handle on facts has gotten shakier.” It is “not entirely clear why.”
If you can’t do the work without AI, you can’t check the AI. You don’t know when it’s hallucinating, and you don’t know when what it’s doing isn’t the best or most appropriate way to do the work. And if you’re totally reliant on AI, well, what do you bring to the table?
Students using AI to cheat are, well, cheating themselves:
It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.” That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”
This isn’t complicated to fix. Instead of relying on essays and unsupervised, out-of-class assignments, instructors are going to have to evaluate knowledge and skills with:
- Oral tests. Ask students questions, one-on-one, and see whether they can answer and how good their answers are.
- In-class, supervised exams and assignments. No AI aid, with proctors there to make sure of it: can you do the work without help?
The idea that essays and take-home assignments are the way to evaluate students wasn’t handed down from on high, and hasn’t always been the way students’ knowledge was judged.
Now, of course, this is extra work for instructors, and the students will whine, but who cares? Those who graduate from such programs (which will also teach how to use AI; not everything has to be done without it) will be more skilled and capable.
Students are always willing to cheat themselves by cheating and not actually learning the material. This is a new way of cheating, but there are old methods which will stop it cold, IF instructors will do the work, and if they can give up the idea, in particular, that essays and at-home assignments are a good way to evaluate work. (They never were, entirely; there was an entire industry for writing other people’s essays, which I assume AI has pretty much killed.)
AI is here; it requires changes to adapt. That’s all. And unless things change, it isn’t going to replace all workers or any such nonsense: the hallucination problem is serious, researchers have no idea how to fix it, and right now no US company is making money on AI. Every single query, even from paying clients, costs more to run than it returns.
IF AI delivered reliable results and thus really could replace all workers, if it could fully automate knowledge work, well, then companies might be willing to pay a lot more for it. But as it stands right now, I don’t see the maximalist position happening. And my sense is that this particular model of AI, a blend of statistical compression and reasoning, cannot be made reliable, period. A new model is needed.
So, make the students actually do the work, and actually learn, whether they want to or not.
This blog has always been free to read, but it isn’t free to produce. If you’d like to support my writing, I’d appreciate it. You can donate or subscribe by clicking on this link.
Hairhead
We are seeing this realization of the negative effects of our technological revolution(s) in several areas. One area in particular is the use of smartphones by students, especially in elementary and middle schools. Just this year, the Ministry of Education outright banned the use of cellphones by students during class time throughout all of BC. Other such bans/limitations are being initiated and applied in many other countries and smaller jurisdictions.
One can’t help but think of Socrates’ dislike of the written word. “Without a good, strong, developed memory, how can people order their thoughts and come to good conclusions?” he mused (I paraphrase). What he said was and is true, but we have managed to integrate the use of books into our lives successfully — at least until now, when smartphones are basically the Library of Alexandria, with cat memes, in our pockets.
Ian Welsh
Thing is, smartphones aren’t used as the Library of Alexandria. Alas.
marku52
But AI sort of kinda works. At least it looks like it works. And it lets bosses fire workers, and cut pay.
Hence it will be deployed massively. Even if it loses money for them. As Kalecki pointed out long ago, capitalists will gladly suffer a loss in profit so long as they maintain power over workers.
One more argument for staying far, far away from any office work. Become a welder or a plumber. AI ain’t going to clear your clogged toilet.
Oakchair
“Meanwhile, AI is hallucinating more … generating more errors.”
—-
For some this is a bug; for some it is a feature.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” —Dune
AI isn’t spewing falsehoods and lies only because it’s garbage in, garbage out. It’s telling you what those who made it want you to hear, think, and believe.
—-
“What studies are showing is that people who use AI too much get stupider.”
—-
In this respect AI is repeating a phenomenon that Ivan Illich detailed in the ’70s in “Deschooling Society” and “Disabling Professions”:
“School prepares people for the alienating institutionalization of life, by teaching the necessity of being taught. Once this lesson is learned, people lose their incentive to develop independently.”
We’ve been conditioned since birth to believe that in order to learn we need to be taught by a teacher. In order to be healthy we need to follow the doctor’s orders. In order to be moral we need to do what the priest tells us the Bible says, and so on.
The effect is a populace disabled by professions. People can’t learn on their own because a teacher needs to do it for them. They can’t improve their health because only the doctor knows how to. They can’t repair household appliances because a professional can do it for them.
It’s created a fully anti-intellectual society, where doing your own research marks one as a crazy, stupid conspiracy theorist. A society filled with people who don’t even try to read the sole clinical trial before taking a novel drug over and over and over again. Everyone believes not only that they are too stupid to understand, but that trying to do so is a mark of ignorance.
This is a society perfectly suited to be enslaved by “AI,” because it is already one whose ability to think and function has been disabled by its religious obedience to the “experts.”
AI isn’t providing us with a new ruler; it’s the new face of the same rulers who’ve ruled us for generations.
Mary Bennet
Take-home assignments were one way a plain or awkward person could have their efforts known and appreciated. You still had to work harder than the lookers and the rich goofballs, but at least teach did have to read and grade what you turned in. And your work was hidden from the other students, not provoking jealousy.
Curt Kastens
We will know that AI has finally been achieved the moment after an AI self-destructs (commits suicide?). Of course, that is if we are still around.
bruce wilder
“But, hard”
I wonder about that.
It seems to me, perhaps naively, that AI is poison for students, but it could be a huge help to teachers in creating useful exercises for students, evaluating student work, giving students feedback, and managing student work.
Traditional teaching involves both a lot of drudgery for the teacher as well as the students and reliance on methods of dubious efficiency such as the lecture. AI could be very effective, I am guessing, at creating flexibly structured instructional exercises and navigational pathways for students. Simple, progressive “fill-in-the-blank” narrative pathways are amazingly effective for students, but are rarely created because they are so much work for teachers. AI should be able to spin them out like cotton candy based on any specific textbook. Moreover, AI can monitor individual student progress and help to diagnose learning difficulties, something even observant teachers struggle to spot and name.
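To make the “cotton candy” claim concrete, here is a minimal sketch (my own toy example, not any real product): a few lines that blank out the rarest long words of a passage as stand-in “key terms.” A real system could presumably do this far more intelligently, tuned to a specific textbook:

```python
# Toy cloze-exercise generator: blank the rarest longish words in a
# passage as rough stand-ins for "key terms". The point is only that
# this kind of exercise generation is mechanical drudgery a model
# could automate at scale.
import re
from collections import Counter

def cloze(passage: str, blanks: int = 3) -> tuple[str, list[str]]:
    words = re.findall(r"[A-Za-z]+", passage)
    counts = Counter(w.lower() for w in words)
    # Rarest longish words first; alphabetical tie-break for determinism.
    targets = sorted({w for w in words if len(w) > 5},
                     key=lambda w: (counts[w.lower()], w))[:blanks]
    answers = []
    for t in targets:
        passage = re.sub(rf"\b{re.escape(t)}\b", "_____", passage, count=1)
        answers.append(t)
    return passage, answers

text = ("The mitochondrion converts nutrients into adenosine "
        "triphosphate, the energy currency of the cell.")
exercise, key = cloze(text)
print(exercise)
print("answers:", key)
```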
Evaluating student essays, grading exams, and listening to student presentations is exhausting drudgery. Teachers struggle against student apathy, against their own tendency to slack, and against the temptation to like best the students who need the teacher least. AI-generated schemas for evaluation and feedback would open opportunities for less adverse discrimination by teachers, and for filters that helped the students rather than merely relieving the teachers. Gamification and social media are the most obvious ways to “sugar-coat the pill” of learning in lieu of a teacher’s energy and charisma.
The antagonism between students and teachers built into the traditional teaching model is what leads to the reactionary expectation that AI must aid the students in their resistance to the pernicious efforts of teachers to, for example, “motivate” students with punitive grading. But AI could aid teachers to actually help students learn.
Joan
I had a one-year stint as a college writing instructor, then five years as a foreign language instructor at a university. We were already doing a lot of these things to combat internet use for cheating.
In the writing course, I did not assign homework that would generate unpaid grading time for me. I told them to read the books on our list. In class I discussed the texts with the students who had read them, and let the ones who hadn’t sleep and flunk out. For gradable assignments and exams, we used blue books: empty pamphlets of paper I provided. They wrote their essays and papers live for me, while I got a class period off from teaching. They could bring in a list of sources they wished to cite, but that was it.
For the foreign language courses, I had workbook pages & quizzes, lots of grading, but I was paid for it. We put our backpacks at the front of the room with cellphones in them, showing empty pockets (lol). This was at decent altitude, so students were allowed a water bottle at their desk and a pencil, and that was it. Kind of silly we had to go to such lengths, but it worked.
Joan
I have some ideas as to why AI generates so many errors. I’ve been job-searching for more technical work the last few months, now that I’ve been studying and practicing coding languages. My filters on LinkedIn show a lot of AI companies looking to hire people who will train their AI: check the prompt by googling it and verifying the information. Two problems with this: it only pays $20/hr, so who cares, and people could cheat to get the money, so who cares. I suspect a lot of people are just feeding it into another AI and saying yeah, that’s good. Even though the companies tell you not to do that, well, unless someone’s checking you, you’re not going to get caught.
Most of these jobs I’ve seen do not let you into the general verification at $20/hr without first requiring that you pass a test for more technical verification at $40/hr. The problem here is that if you have enough knowledge to pass the test for coding, math, chem, or engineering, you don’t need $40/hr, because you likely have a job that pays more than that or are busy searching for one. Any improvement on the technical side of things is perhaps math teachers running a side gig and honestly training it.
Alex
I may be missing something, but the article you’ve mentioned doesn’t prove that the AI makes you stupider. They just found that:
- Younger people use AI more and report engaging in critical thinking less.
- Older people use AI less and report doing more critical thinking.
This finding is compatible with other very likely explanations:
1. Younger people generally are worse at critical thinking.
2. This particular generation of young people is bad at critical thinking because of TikTok, Instagram, etc.
Ian Welsh
What it says is that older people have less loss. That’s to be expected; they already have the skills and have ground them in for longer.
Younger people lose more, and as the other article suggests, often don’t develop skills in the first place.
Alex
Respectfully, I have to disagree. They didn’t monitor anyone over time, so they can’t draw any conclusions about losses. It’s a snapshot study; all they can see are correlations.
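To illustrate (the numbers below are made up; only the structure matters): a pure cohort effect can produce exactly the correlation they report, even when AI use has no effect on anyone’s thinking at all.

```python
# Toy cross-sectional sample: critical thinking depends ONLY on age,
# and younger people use AI more. AI use has zero causal effect here.
import random

random.seed(0)
rows = []
for _ in range(666):  # same sample size as the survey
    age = random.randint(18, 70)
    ai_use = max(0.0, min(1.0, (70 - age) / 52 + random.gauss(0, 0.1)))
    thinking = age / 70 + random.gauss(0, 0.1)
    rows.append((ai_use, thinking))

# Pearson correlation between AI use and critical thinking.
n = len(rows)
mx = sum(x for x, _ in rows) / n
my = sum(y for _, y in rows) / n
cov = sum((x - mx) * (y - my) for x, y in rows) / n
sx = (sum((x - mx) ** 2 for x, _ in rows) / n) ** 0.5
sy = (sum((y - my) ** 2 for _, y in rows) / n) ** 0.5
print(f"correlation: {cov / (sx * sy):.2f}")  # comes out strongly negative
```

A snapshot can’t distinguish this from genuine skill loss; only following the same people over time can.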
Ian Welsh
You may be right, we’ll see, but I’d lay 10:1 that longitudinal studies will say the same thing. Skills you don’t use degrade. Reliance on AI will degrade skills and judgment over time, except for a small minority who use it very wisely, and they will see improvements.
different clue
I remember reading from time to time that the various TechLords of Silicon Valley forbade their children from using computers until at least a certain age, and that they put their own children in high-priced private schools where the teaching was all by human teachers and there was no computer teaching at all, least of all the computer teaching they made billions of dollars from by selling it to thousands of schools for us masses.
I finally found an article about that, about a year or so ago, in Town & Country magazine, and tried to bring a link to it here, but it was paywalled. (I read it in the physical ink-on-paper version of the magazine.) It was about the newest high-fashion trend among the Rich Digerati being Dumm Houses for themselves, utterly uninfected by all the Smart House digital cooties they sell billions of dollars’ worth of to us mere masses. And there was something in that article about no digitech for their children, and especially about social media being utterly forbidden to their children up to a certain age.
I imagine they will also forbid their own children from using AI, perhaps by sending them to AI-forbidden private schools. In the Kingdom of the Blind, the one-eyed man is King. In the Kingdom of the Dumm, the half-wit man is King. They want to make sure their young one-eyed half-wit wonders grow up to be default kings, by virtue of blinding and lobotomizing as many millions of other people’s children as possible. And AI will be one of their new weapons of choice in achieving that mission.
NRG
AI is being trained, at least partially, on the increasingly fantastical AI slop that is permeating the web. It is inevitably getting less accurate over time, as its errors and hallucinations are recycled among different AI apps. It’s a self-perpetuating problem that will only get worse.
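A toy sketch of that feedback loop (illustrative only; real training is vastly more complicated): model each generation of training as resampling from the previous generation’s output, and watch distinct “facts” disappear.

```python
# Each generation "trains" on the previous generation's output, modeled
# crudely as resampling with replacement. Diversity is steadily lost:
# once a fact drops out of the sample, no later generation recovers it.
import random

random.seed(0)
data = list(range(100))  # generation 0: 100 distinct "facts"
for gen in range(1, 21):
    data = [random.choice(data) for _ in range(100)]
    print(f"gen {gen:2d}: {len(set(data))} distinct facts remain")
```

Errors work the same way in reverse: once a hallucination gets sampled into the training pool, every later generation can inherit it.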