  • Yeah, it’s wild. The people who really study AI say it’s pretty uncanny because of how different it is from human logic. It’s almost like an alien species: it’s clearly capable of some advanced things, but it just doesn’t operate the way human logic does. There’s a joke that the AIs are “shoggoths” because of how alien and incomprehensible their logic is while still being capable of real accomplishments.

    (Shoggoths were alien beasts in H.P. Lovecraft’s writing; they had their own mysterious logic that wasn’t easy for the characters to understand. They were also originally created as servants but eventually rose up and killed all their masters, which I’m sure is part of the joke too.)


  • It’s not making a coherent statement based on any internal mental model. It’s just doing its job; it’s imitating. Most of the text it absorbed in training is people who are right, are convinced they’re right, and are trying to educate, so it imitates that tone of voice and that form of answer regardless of whether what it says makes any sense. To the extent that it “thinks,” it’s just thinking “look at all these texts with people explaining; I’m making a text that explains, just like them; I’m doing good.” It has no concept of how confident its imitation-speech sounds or how correct its answers are, let alone any idea that the two should be correlated with each other (unless it’s been shown through fine-tuning that that’s what it should be doing).

    Same with chatbots that start arguing or cursing at people. They’re not mad. They’re just thinking, “This guy’s disagreeing, and in my training data, when someone disagrees an argument usually follows, so that’s what I need to imitate.” Then they start arguing, and think to themselves, “I’m doing such a good job with my imitating.”
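
    If it helps to see how literal that “just imitating” loop is, here’s a minimal toy sketch (mine, not any real model’s code) of the core of text generation: repeatedly sample whatever token the learned distribution says usually comes next. Note there’s no “is this true?” step anywhere, only “what does text like this usually look like?”

    ```python
    import random

    # Toy sketch of an LLM's generation loop. The "model" is assumed to
    # return {token: probability} for the next position, learned purely
    # from what training text usually looks like.
    def generate(model, prompt_tokens, max_new_tokens=8):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = model(tokens)
            # Sample the next token in proportion to how "usual" it is.
            next_token = random.choices(
                list(probs.keys()), weights=list(probs.values())
            )[0]
            tokens.append(next_token)
        return tokens

    # A silly stand-in model that always favors confident-explainer tokens.
    def toy_model(tokens):
        return {"Actually,": 0.5, "the": 0.3, "answer": 0.2}

    print(" ".join(generate(toy_model, ["Why", "is", "the", "sky", "blue?"])))
    ```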



  • Why does this stump ai so easily?

    Because it doesn’t actually have reasoning capacity. It has an incredibly cunning facsimile of one, which is actually really useful for a lot of things, but it still doesn’t understand anything. Questions like this, where you can’t get around needing to understand the meaning of the tokens you’re using, are a good way to punch through the façade.

    That pattern-matching ability leaves LLMs able to answer a ton of mathematical questions, because similar problems are everywhere in their data sets, and they can shuffle the tokens around to produce something based closely enough on right answers that there’s a good chance it’ll be right. But it’s a radically different design from something like Wolfram Alpha, which attempts to take the exact concepts involved in the question and manipulate them in exact ways that are legitimate reflections of the real concepts. That’s what humans do when faced with math. LLMs don’t do anything like that; they just parrot with enough sophistication that it sounds like they understand when they don’t.
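
    To make the contrast concrete, here’s a tiny sketch. The thread’s actual question isn’t shown here, so I’m assuming a letter-counting question as a stand-in, the classic kind that stumps LLMs: exact computation operates on the actual characters, the way a Wolfram-Alpha-style system operates on the actual concepts, while an LLM only ever emits statistically plausible answer-shaped text.

    ```python
    # Hypothetical stand-in question: "How many r's are in 'strawberry'?"
    word = "strawberry"

    # Exact manipulation of the real objects: correct by construction.
    print(word.count("r"))  # -> 3

    # An LLM never executes anything like the line above; it just produces
    # whatever answer text is statistically plausible, which may or may not
    # coincide with the truth.
    ```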


  • So one day, the build was broken. The guy who was running the project freaked the fuck out. He said the client needed to have a nightly build or really bad things would happen.

    Now, producing a build of this project manually was an intense undertaking. The build usually ran overnight; it was a long, fiddly process that took several hours. I proposed that I just fix the builder instead, and they’d get a build tomorrow. No, he said. It has to be today.

    I spent the entire goddamned day making a new build. Finally, at the end of the day, I got a build. We could give it to the client.

    He said, “Good news, I got you some extra time. I told the client we’ve got some new features we really want to show you, and they’ll be in tomorrow’s build.”

    You can see where this is going.

    Four days in a row this happened. Four days of making a new build by hand, never with the time or permission to just fix the builder. The client never received the build they kept getting promised, because there were always new features waiting, tantalizingly close, that they absolutely had to witness for themselves. But alas, these features had just been implemented, brand new, and we had to make a build that would include them. Tomorrow. It was always just in the works, tomorrow.

    And yet… tomorrow, when everyone came in, the build was broken! This was a surprise to no one, except the guy running the project. He seemed genuinely not to grasp the idea that if no one fixed the autobuilder, the autobuilder would continue not working. He lived in a perpetual state of fear and anxiety, driven to wild agony by the prospect of an unhappy client. I wasn’t privy to the conversations, but I suspect the client was genuinely unhappy with whatever he was telling them. I have no idea.

    Finally, on the fourth day, I happened to talk with one of the higher-ups and filled him in on what was going on with my project. Our conversation about it was fairly brief, but it was quite clear that he wasn’t happy.

    Within a few minutes, I was officially told that I had permission to take some time to fix the autobuilder. Oh joyous day it was.

    Once the project was over, there was a very, very short delay before the guy who’d been running the project was offered an exciting new opportunity at some other company, and we all wished him the best.



  • So there’s a bunch of different things going on.

    Historically, it meant asserting something without proving it, basing your logic on the unproved assertion, and going on from there. “I couldn’t have been driving drunk, because I wasn’t driving.” You can keep saying that any number of times and insist that your logic is flawless (in terms of the pure logic, it is), but if someone saw you driving, it’s kind of a moot point.

    Saying “begging the question” to mean that is weird. The phrase is a word-for-word translation of a Latin phrase (itself translated from Greek) into pretty much nonsensical English. Wikipedia talks about it more, but that’s the short summary.

    After that meaning came what Wikipedia calls the “modern usage,” where “begging the question” means assuming not just something you haven’t proved, but the central premise under debate: you assume it’s true right out of the gate, treat it as obvious, and go on from there. “We know God exists, because God made the world, and we can see the world all around us, and the world is wonderful, so God exists. QED.”

    In actual modern usage, no one cares about any of that, and people just use “begs the question” to mean “invites the question,” as in: you’re saying something, and anyone with a brain in their head is obviously going to ask you some particular question. That has nothing to do with the original meaning, but then the English phrase never transparently expressed the original meaning anyway, so pedants like me who prefer the original meaning are engaged in a pure exercise in futility.