Lvxferre [he/him]

I have two chimps within, Laziness and Hyperactivity. They smoke cigs, drink yerba, fling shit at each other, and devour the face of anyone who gets close to either.

They also devour my dreams.

  • 58 Posts
  • 5.39K Comments
Joined 2 years ago
Cake day: January 12th, 2024

  • To be clear, by “communication” I’m talking about the information conveyed by a certain utterance, while you’re likely referring to the utterance itself.

    Once you take that into account, your example is optimising for #2 at the expense of #1 — yes, you can get away with conveying info in more succinct ways, but at the expense of requiring a shared context; that shared context is also info the receiver knows beforehand. It works fine in this case because spouses accumulate that shared context across the years (so it’s a good trade-off), but if you replace the spouse with some random person it becomes a “how the fuck am I supposed to know what you mean?” matter.


  • For succinctness I’ll interpret “stupid” = “immoral and/or false”.

    This assumes your typical person naturally discards a stupid discourse once you show it’s stupid. I don’t think they do; instead they’ll discard a discourse that conflicts with their world view and is not emotionally engaging enough to replace it.

    In light of that, “let the Nazi talk” sounds like a notoriously bad idea. Especially when the Nazis in question are highly rhetorical, i.e. they make their stupidity highly emotionally engaging.

    A better approach is to address the core claims of the Nazi discourse, in the absence of their rhetoric, showing why they’re stupid (and cringe). That’s basically what a lot of people already do.


  • I believe that good communication has four attributes.

    1. It’s approachable: it demands from the reader (or hearer, or viewer) the least amount of reasoning and previous knowledge in order to receive the message.
    2. It’s succinct: it demands from the reader the least amount of time.
    3. It’s accurate: it neither states nor implies (for a reasonable = non-assumptive receiver) anything false.
    4. It’s complete: it provides all relevant information concerning what’s being communicated.

    However, no communication is perfect, and those four attributes are at odds with each other: if you try to optimise your message for one or more of them, the others are bound to suffer.

    Why this matters here: it shows the problem of ablation is unsolvable. Even if generative models were perfectly competent at rephrasing text (they aren’t), simply by asking them to make the text more approachable, you’re bound to lose info or accuracy. Especially on the current internet, where you’ve got a bunch of skibidi readers who’ll screech “WAAAAH!!! TL;DR!!!” at anything with more than two sentences.

    I’d also argue “semantic ablation” is actually way, way better as a concept than “hallucination”. The latter is not quite “additive error”; it’s a misleading metaphor for output that is generated by the model the same way as the rest, but happens to be incorrect when interpreted by human beings.



  • Link to the archived version of the article in question.

    I actually like the editor’s note. Instead of naming-and-shaming the author (Benj Edwards), it blames “Ars Technica”. It also claims they looked for further issues. It sounds surprisingly sincere for a corporate apology.

    Blaming AT as a whole is important because it acknowledges Edwards wasn’t the only one fucking it up. Whatever a journalist submits needs to be reviewed by at least a second person, exactly for this reason: to catch dumb mistakes. Either that system is not in place or it’s not working properly.

    I do think Edwards is to blame, but I wouldn’t go so far as saying he should be fired, unless he has a history of doing this sort of dumb shit. (AFAIK he doesn’t.) “People should be responsible for their tool usage” is not the same as “every infraction deserves capital punishment”; sometimes scolding is enough. I think @[email protected]’s comment was spot on in this regard: he should’ve taken sick time off, but this would have cost him vacation time, and even being forced to make this choice is a systemic problem. So ultimately it falls on his employer (AT) again.


  • Oh fuck. Then it gets even worse (and funnier). Because even if that was a human contributor, Shambaugh acted 100% correctly, and this defeats the core lie outputted by the bot.

    If you’ve got a serious collaborative project, you don’t want to enable the participation of people who act based on assumptions. Those people ruin everything they touch with their “but I thought that…”, unless you actively fix their mistakes — i.e. more work for you.

    And yet once you construe that bloody bot’s output as if it were human actions, that’s exactly what you get — a human who assumes. A dead weight and a burden.

    It remains an open question whether it was set up to do that, or, more probably, did it by itself because the Markov chain came up with the wrong token.

    A lot of people would disagree with me here, but IMO they’re the same picture. In either case, the human enabling the bot’s actions should be blamed as if those were their own actions, regardless of their “intentions”.


  • Pretty much this.

    I have a lot of issues with this sort of model, from energy consumption (cooking the planet) to how easy it is to mass produce misinformation. But I don’t think judicious usage (like at the top) is necessarily bad; the underlying issue is not the tech itself, but who controls it.

    However. Someone letting an AI “agent” go rogue out there is basically doing the latter, and expecting others to accept it. “I did nothing wrong! The bot did it lol lmao” style. (Kind of like Reddit mods blaming Automod instead of themselves when they fuck it up.)


  • It’s more like

    • [This case] “etymology shows this usage of the word is acceptable”
    • [Typically] “language change shows the usage of that other word is also acceptable”

    IMO they’re both poor grounds to defend the acceptability of a certain word usage. But they don’t really contradict each other; in fact they’re both the same fallacy (fallacy of origins aka genetic fallacy).

    I believe a better way to defend the acceptability of a certain word usage is to highlight that language is a communication system; the point is not to use this or that word, it’s to convey meaning. So if $vegetable milk conveys the meaning, it’s fine; if “skibidi” also conveys meaning, it’s also fine.

    Just my two cents.