The horizon is not so far as we can see, but as far as we can imagine

Tag: “AI”

The Horror of School

Back during the pandemic, two things happened with students. Overall, suicides among them went up, BUT when schools were closed, suicide rates went down. (I predicted the latter at the time.)

Then there’s this lovely chart:

Well, well, well. Seems forcing people to do what they don’t want to do, in what is usually a socially oppressive environment, is bad for them.

There are, of course, those who thrive in school, and love it — usually the socially dominant kids. But for a lot of kids, school is Hell.

I think this has a lot to do with alignment of goals. I wrote recently about the epidemic of AI cheating and how to avoid it, but I think the smartest commentary I’ve seen on AI cheating, and cheating in general, is this one:

Has anyone stopped to ask WHY students cheat? Would a Buddhist monk “cheat” at meditation? Would an artist “cheat” at painting? No. When process and outcomes are aligned, there’s no incentive to cheat. So what’s happening differently at colleges?

Back in the stone age, I took an introductory sociology class. The professor asked the students who were intending to be teachers to put up their hands. A forest. She told them to keep their hands up, and asked everyone who was planning on social work to put up their hands.

Out of a class of about a hundred and fifty, only three people’s hands weren’t up.

147 students weren’t taking sociology because they were interested in it, but because it was a way-station on the way to a goal.

The problem with “higher” education is that good jobs are locked behind degree requirements. The vast majority of students aren’t in university because they want to learn; they’re there because they need the credential. They don’t see the applicability of what they’re learning to their future jobs (in most cases correctly), so they just want to get through the courses with the least effort possible while getting the necessary grades.

Of course, they cheat. They’re being forced to waste three or four years and huge amounts of money on a chance of getting past the gatekeepers.

I used to amuse myself by talking to graduates. I’d ask them what their major was, then discuss it with them. Nine times out of ten, I knew more about the subject than they did, even though I’d never taken a single course in the topic at hand. They’d memorized enough to pass the tests, then immediately forgotten it, because it had no relevance to their goals or their lives.

The only case for requiring a bachelor’s degree, in a job that doesn’t use the knowledge taught by that degree, is that it screens out people who won’t put up with bullshit and who won’t do what they’re told when it doesn’t align with their goals. A B.A. certifies to potential employers that, “this person will do what they’re told and put up with your bullshit. They barely need to be coerced; they do what is expected of them.”

Problem is, it also certifies that, “they will put in as little effort on the job as they can, unless it serves them to do otherwise.”

If it were up to me, I’d make it illegal to require unrelated educational credentials. Want to hire an engineer (an actual engineer, not a programmer)? Fine, ask for a degree. But if it’s just some unrelated job, no.

But I’d go even further: I’d mandate exams (in person, supervised) to test for job knowledge, similar to how a lot of companies test programmers. “Can you actually make a small program?”
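The kind of exam I mean is small and concrete. FizzBuzz is the classic example of such a programming screen; the version below is just illustrative:

```python
# FizzBuzz: a classic "can you actually make a small program?" screen.
# Print 1 to 100, but multiples of 3 as "Fizz", multiples of 5 as
# "Buzz", and multiples of both as "FizzBuzz". Trivial, yet it
# reliably filters out candidates who cannot write working code.
for n in range(1, 101):
    line = ""
    if n % 3 == 0:
        line += "Fizz"
    if n % 5 == 0:
        line += "Buzz"
    print(line or n)
```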

Testing for jobs used to be pretty standard. Almost all civil service jobs were gated behind exams and so were a lot of private sector ones.

Then see how the new hires perform for a few months.

Forcing people to do what they don’t want to do is sometimes necessary, to be sure. But it has to make sense. There’s plenty of evidence that good home-schooling teaches students skills faster than classroom teaching (and no, home-schoolers aren’t all right-wing nutjobs; where I grew up, they were hippies). As for socialization, there are other ways to socialize children, most of which are probably more pleasant and less harmful than the often hellish social circumstances in schools, especially high schools.

As for spending time with adults, well, that’s what children did for most of history. They weren’t stuck just with kids for most of the day, then just their parents. After all, you’re a kid for a lot less time than you’re an adult, and it’s the adult world you need to know how to navigate.

I’m not saying mass schooling has or had no benefits. It obviously does and did. But can we find a better way to teach children?

This blog has always been free to read, but it isn’t free to produce. If you’d like to support my writing, I’d appreciate it. You can donate or subscribe by clicking on this link.

Why Technocratic Elites Aren’t Trusted (Sam Altman Edition)

So, Sam Altman recently said something which seems reasonable, but isn’t:

using technology to create abundance–intelligence, energy, longevity, whatever–will not solve all problems and will not magically make everyone happy. but it is an unequivocally great thing to do, and expands our option space. to me, it feels like a moral imperative...

most surprising takeaway from recent college visits: this is a surprisingly controversial opinion with certain demographics.

(lack of capitalization from original.)

Back in the original Greek writings on rhetoric and argument, one of the three appeals was Ethos: the rhetorician’s qualifications, including his ethical qualifications. The “why should we listen to you?” part. If you’re talking about courage, are you brave? If charity, are you charitable?

If technology, do you use it for the good of others?

What most people can’t explicate about their objection to Altman’s thesis (that using technology to create abundance is a good thing) is that they don’t trust Altman. OpenAI was originally a non-profit, meant to create AI in a way which would benefit everyone. Altman turned it into a for-profit, and no one except billionaires and sycophants thinks that companies are out to be beneficial to the majority of people: we work in them, we know it’s bullshit.

And how did Altman create his AIs? By training them on other people’s work, without permission or payment. Further, the AIs compete with the people whose data they trained on: you can ask for a picture in the style of a particular artist, for example. They compete with artists, writers, and other professionals in general.

So, the people whose actual work made AIs possible (they aren’t really AIs, but I use the term for convenience) are the ones harmed by them, AND they didn’t give their permission or get paid.

Why the hell would anyone other than a shareholder or a well-paid employee “trust” Sam Altman?

Now let’s move on to Altman’s actual argument (his logos and pathos):

using technology to create abundance–intelligence, energy, longevity, whatever–will not solve all problems and will not magically make everyone happy. but it is an unequivocally great thing to do, and expands our option space. to me, it feels like a moral imperative...

Now, this is a case where the logos is almost entirely true.

But what’s the actual track record of using technology to create abundance?

We’re losing our topsoil. Nutrition in food is less than it used to be. We’ve created climate change, which appears to now be past key tipping points and will kill and impoverish billions. Most of the American population is fat; it wasn’t fifty years ago, so it’s not “individual choices.” We have widespread ecological collapse, including the loss of most large mammals, and so few insects compared to even fifty years ago that there is no longer “bug splat” on windshields. The oceans are full of plastic, the coral reefs are dying, and fish stocks are collapsing.

None of this is to say that technology hasn’t had vast benefits, but we’re also using it to reduce our option space: to damage the carrying capacity of the Earth in ways which will take tens of thousands of years to recover from, as a best-case estimate (millions for some of the issues). The last 40 years, when people like Altman have had the most influence, have seen a vast rise in inequality, and huge numbers of homeless. Altman and co. blame left-wingers, but who are the billionaires? Who actually has the power?

Altman’s making an argument which is true on its face, but he belongs to a class of people whose actions do a great deal of harm. Most people can’t clearly articulate this, but they know he and his class can’t be trusted, so they instinctively disagree with him; since they can’t quite say why, they sound incoherent.

But they’re right to distrust Altman. Technology could be used to benefit everyone, even in the long term, but Altman isn’t trying to do that: he’s trying to get rich, and if that hurts a lot of people along the way, he’s OK with it.

You get what you support. If you like my writing, please SUBSCRIBE OR DONATE

GIGO: The Past of Google Search Is the Future of LLM AI Models

If you’re old enough, you remember how amazing Google search was when it first came out and for the first few years. Excellent results, right at the top. Nowadays it’s crap, and half the time, to find what I want, I have to append “Reddit” or search very specific domains. (Reddit is likely to be worthless in a few years due to the IPO.)

Anyway, Google search results became crap for three main reasons, from least to most important:

  1. Worship of the official and the orthodox. Every time I search some medical issue, the top twenty sites tell me the same thing. That didn’t use to be the case: for cancer, for example, the old “cancer tutor” site would be on the first page. Maybe it’s good that the equivalent no longer is, but I wanted to read the alternative views as well as the orthodoxy.
  2. Monetization. Prioritizing selling ads over providing the best search results has had the effect one would expect.
  3. Organic link destruction. What made Google so good at the start is that its algo was almost entirely based on the number of incoming links a site had. Since the internet at that point was almost all human created, links were highly curated and genuine: someone had read the site, liked it, and taken the time to link to it. Nowadays, most links aren’t organic: they’re SEO crap or advertising or intended to game the search algo, leading to an endless arms race. A link is no longer an endorsement and there’s no easy way around that: nothing can replace a human being reading a site, liking it, and linking to it. (A minimal sketch of that old link-based ranking follows this list.)
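For the curious, here’s a minimal sketch of that link-counting idea in the PageRank spirit. The toy graph, damping factor, and iteration count are my own illustrative assumptions, not Google’s actual implementation:

```python
# A minimal sketch of link-based ranking in the spirit of early
# Google (PageRank). Each page's score comes from the scores of the
# pages linking to it, so genuine incoming links are what count.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            # a page passes a share of its own rank to each page it links to
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Two genuine human-made links beat a page that only promotes itself.
toy_web = {
    "blog_a": ["useful_site"],
    "blog_b": ["useful_site"],
    "seo_spam": ["seo_spam"],  # a link it makes to itself
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```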

Google, to put it simply, destroyed its own usefulness by destroying the internet ecosystem that had organic links, links by people who didn’t expect to be paid for them, to sites they found interesting whether those sites were official or orthodox or not.

Now, Large Language Model (LLM) AI is based on training on, basically, the entire internet. It’s essentially statistical. How good the AI is depends on how good what it trained on is, with a lot of tweaking to try and point it towards stuff that isn’t horrible (the bar is not-horrible rather than good: avoiding racism, for example).
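To make “essentially statistical” concrete, here’s a toy sketch. The corpus and the sampling scheme are my own illustrative assumptions, vastly simpler than a real LLM:

```python
# A toy next-word model: count which word follows which in a corpus,
# then sample from those counts. A crude stand-in for the statistical
# core of an LLM; real models are far more sophisticated, but they
# too predict the next token from frequencies in training data.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: this word never appeared mid-corpus
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # it can only re-blend what it has already seen
```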

The problem is that over time more and more of the internet will be AI-produced. AI will be feeding on its own output, rather than on organic human writing. It’s already been noticed that AI which eats its own dogfood tends to go nuts, and it’s fairly clear that AI output is rather bland: it is a blender for what’s been done before.
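Here’s a toy simulation of that dogfood loop, under assumptions I’m inventing purely for illustration (a Gaussian fit standing in for “training”, truncated tails standing in for the blandness of generated output):

```python
# Toy illustration of model collapse: each "generation" is trained
# (here, a simple Gaussian fit) on output sampled from the previous
# generation instead of on real human data. Dropping the tails mimics
# generative models under-producing rare content; the spread, a
# stand-in for diversity, shrinks generation after generation.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: fit to real, human-made data

for generation in range(1, 9):
    samples = [random.gauss(mu, sigma) for _ in range(500)]
    # the model under-produces rare, tail content
    typical = [x for x in samples if abs(x - mu) < 1.5 * sigma]
    mu, sigma = statistics.mean(typical), statistics.stdev(typical)
    print(f"generation {generation}: diversity = {sigma:.3f}")
```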

So AI will, like Google, damage the very resources it needs to work well, especially since most people won’t admit that their content is AI-generated, so it’s hard to avoid. It will slowly degrade over time, with some bumps of improvement as new advances are made.

Mind you, LLM AI isn’t a general AI; it’s not smart, it’s just another algo. It doesn’t understand anything. Real AI awaits further advances, if it ever happens at all.

Still, enjoy it now while you can. I expect it’ll get better for two to three years, then degrade. That’s the optimistic view; there’s some evidence that the degradation is already underway.

You get what you support. If you like my writing, please SUBSCRIBE OR DONATE

Google Neural Net “AI” Is About To Destroy Half The Independent Web

As various folks have quipped, the safest place to hide a body is on the second page of Google search results, because no one goes there.

Google is about to roll out its “AI” for search (I’ll be putting AI in quotes as policy when referring to neural nets, because they aren’t intelligent) and if it stays as it is, it’s going to destroy most sites that provide information or analysis. (I’ll feel some hit, but will survive, as I have my own audience.)

That screenshot is the kicker. It takes up too much of the page. Worse, people don’t like to click, so if Google presents the info they want, they’ll just stay on Google.

Now, of course, Google is summarizing data that the neural net has scraped from the Web, much like when you used to read some books, then summarize them for your term paper. None of the information Google’s “AI” will present in answer to questions is information from Google; it’s scraped, swallowed, and regurgitated from the websites which won’t be getting the traffic any more, and which will then die. The perfect parasite.

There are going to be lawsuits, and I’m no lawyer, but my understanding is that, just as when you do your research and rewrite it into a summary, this probably doesn’t fall afoul of current copyright law. That law is entirely reasonable for people, but for neural nets it seems like a huge gap; without a change in the law, it seems unlikely there’s a legal remedy.

I’m thinking about this. I may decide to keep most of my site off search engines (which is a problem in the sense that I use search engines to find my own articles; I’ve written so many).
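The usual mechanism for that is a robots.txt file. A minimal sketch, using crawler tokens that Google and OpenAI have published (Googlebot, Google-Extended, GPTBot); note that compliance is entirely voluntary, so a crawler can simply ignore it:

```
# Sketch of a robots.txt keeping a site out of search and AI-training
# crawls. Compliance is voluntary: a crawler can ignore this file.

User-agent: Googlebot        # Google search
Disallow: /

User-agent: Google-Extended  # Google AI training
Disallow: /

User-agent: GPTBot           # OpenAI training crawler
Disallow: /
```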

But in the larger sense, “AI” is a giant parasite (well, Google won’t be the only one) devouring other people’s expertise and denying them a living. Google already controls about 45% of the internet ad market, with most of the rest divided up between various social media giants, and in doing so it destroyed a vast swathe of sites. Now they are set to kill much of what remains.

Tacitus’s line about the Roman Empire, supposedly quoting Calgacus, was that the Romans “made a desert and called it peace.” Google and “AI” are making an internet wasteland and calling it profits.


My ability to write these articles depends on donors and subscribers so if you value this writing, please DONATE or SUBSCRIBE.
