• Aniki 🌱🌿@lemm.ee

    If companies are crying about it then it’s probably a great thing for consumers.

    Eat billionaires.

    • General_Effort@lemmy.world

      The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Musk’s AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.

      Ahh, yes. Elon Musk, paragon of consumer protection. Let’s just trust his safety guy.

    • Supermariofan67

      Companies cry the same way about bills to ban end-to-end encryption, and those bills are still bad for consumers too.

    • Womble@lemmy.world

      So if smaller companies are crying about huge companies using regulation they lobbied for (in this case through a lobbying organisation set up with “effective altruism” money) to prevent themselves from being challenged: should we still assume it’s great?

      • Aniki 🌱🌿@lemm.ee

        Rewind all the stupid assumptions you’re making and you basically have no comment left.

        • Womble@lemmy.world

          Which assumption? It’s a fact that this was co-sponsored by the CAIS, who have ties to effective altruism and Musk, and it is a fact that smaller startups and open source groups are complaining that this will hand an AI oligopoly to huge tech firms.

        • FaceDeer@fedia.io

          My current day is only just starting, so I’ll modify the standard quote a bit to ensure it encompasses enough things to be meaningful; this is the dumbest thing I’ve read all yesterday.

  • FrostyCaveman@lemm.ee

    I think Asimov had some thoughts on this subject

    Wild that we’re at this point now

    • leftzero@lemmynsfw.com

      Asimov didn’t design the three laws to make robots safe.

      He designed them to make robots break in ways that’d make Powell and Donovan’s lives miserable in particularly hilarious (for the reader, not the victims) ways.

      (They weren’t even designed for actual safety in-world; they were designed for the appearance of safety, to get people to buy robots despite the Frankenstein complex.)

      • FaceDeer@fedia.io

        I wish more people realized science fiction authors aren’t even trying to make good predictions about the future, even if that were something they were good at. They’re trying to make stories that people will enjoy reading and that will therefore sell well. Stories where nothing goes particularly wrong tend not to have a compelling plot, so they write about technology going awry so that there’ll be something to write about. They insert scary stuff because people find reading about scary stuff to be fun.

        There might actually be nothing bad about the Torment Nexus, and the classic sci-fi novel “Don’t Create The Torment Nexus” was nonsense. We shouldn’t be making policy decisions based off of that.

        • afraid_of_zombies@lemmy.world

          Philip K. Dick wrote a short story from the dog’s point of view about living in a home and thinking about the trash can. According to the dog, the humans were doing what they were supposed to do: burying excess food for when they were hungry later. The clever humans had a metal box for it. And twice a week the dog would be furious at the mean men who took the box of yummy food away. The dog couldn’t understand why the humans, who were normally so clever, didn’t stop the mean people from taking away the food.

          He mentioned the story a great deal, not because he thought it was well written, but because he was of the opinion that he was the dog: he saw visions of possible futures, understood them from his own point of view, and wrote them down.

    • Voroxpete@sh.itjust.works

      Asimov’s stories were mostly about how it would be a terrible idea to put kill switches on AI, because he assumed that perfectly rational machines would be better, more moral decision makers than human beings.

        • grrgyle@slrpnk.net

          I mean I can see it both ways.

          It kind of depends which of the robot stories you focus on. If you keep reading to the Zeroth Law stuff, it starts portraying certain androids as downright messianic, but a lot of his other (especially earlier) stories are about how – basically from what amount to philosophical computer bugs – robots constantly suffer alignment problems that cause them to do crime.

          • Nomecks@lemmy.ca

            The point of the first three books was that arbitrary rules like the Three Laws of Robotics were pointless. There was a ton of grey area not covered by seemingly ironclad rules, and robots could either logically choose or be manipulated into breaking them. Robots, in all of the books, operate in a purely amoral manner.

          • leftzero@lemmynsfw.com

            downright messianic

            Yeah, tell that to the rest of the intelligent life in the galaxy…

            Oh, wait, you can’t, because by the time humans got there these downright messianic robots had already murdered everything and hidden the evidence…

    • afraid_of_zombies@lemmy.world

      All you people are talking Asimov, and I’m thinking of the Sprawl Trilogy.

      In that series you could build an AGI that was smarter than any human but it took insane amounts of money and no one trusted them. By law and custom they all had an EMP gun pointed at their hard drives.

      It’s a dumb idea. It wouldn’t work. And in the novels it didn’t work.

      Say I build a nuclear plant. A nuclear plant is potentially very dangerous, and it is definitely very expensive. I don’t just build it to have it; I build it to make money. If some wild-haired hippy breaks into my office and demands the emergency shutdown switch, I am going to kick him out. The only way the plant is going to be shut off is if there is a situation where I, the owner, agree I need to stop making money for a little while. Plus, if I put in an emergency shut-off switch, it’s not going to blow up the plant. It’s just going to stop it from running.

      Well, all this applies to these AI companies. Shutting them down is going to be a political decision or a business decision, not the call of some self-appointed group or person. And if it’s going to be that way, you don’t need an EMP gun; all you need to do is cut the power, figure out what went wrong, and restore power.

      It’s such a dumb idea I am pretty sure the author put it in because he was trying to point out how superstitious people were about these things.

  • tal@lemmy.today

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I don’t see how you could realistically provide that guarantee.

    I mean, you could create some kind of best-effort thing to make it more difficult, maybe.

    If we knew how to make AI – and this is going past just LLMs and stuff – avoid doing hazardous things, we’d have solved the Friendly AI problem. Like, that’s a good idea to work towards, maybe. But the point is, we’re not there.

    Like, I’d be willing to see the state fund research on that problem, maybe. But I don’t see how just mandating that models be conformant to that is going to be implementable.

    • Warl0k3@lemmy.world

      That’s on the companies to figure out, tbh. “You can’t say we aren’t allowed to build biological weapons, that’s too hard” isn’t what you’re saying, but it’s a hyperbolic example. The industry needs to figure out how to control the monster they’ve happily sent staggering towards the village, and really they’re the only people with the knowledge to figure out how to stop it. If it’s not possible, maybe we should restrict this tech until it is possible. LLMs aren’t going to end the world, probably, but a protein-sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting ahold of it.

      • 5C5C5C

        Yeah that’s my big takeaway here: If the people who are rolling out this technology cannot make these assurances then the technology has no right to exist.

          • 5C5C5C

            A computer will run whatever software you put on it. As long as we’re putting benign software on our computers, the computer will be benign.

            If you knowingly put criminal software on a computer then you are committing a crime. If someone tricks you into putting criminal software onto a computer then the person who tricked you is committing a crime.

            If you are developing software and can’t be sure whether the software you’re developing will commit crimes, then you are guilty of a criminal level of negligence.

            • mindbleach@sh.itjust.works

              Nah, if the computer manufacturer can’t stop you from running evil software, the technology has no right to exist. Demand these assurances!

              • 5C5C5C

                You’re being pretty dense if you can’t wrap your head around a basic concept of accountability.

                A human can choose to commit crimes with any product, including … I don’t know … a fork. You could choose to stab someone with a fork, and you’d be a criminal. We wouldn’t blame the fork manufacturer for that because the person who chose for a crime to be committed was the person holding the fork. That’s who’s accountable.

                But if a fork manufacturer starts selling forks which might start stabbing people on their own, without any human user intending for the stabbing to take place, then the manufacturer who produced and sold the auto-stabbing forks is absolutely guilty of criminal negligence.

                Edit: But I’ll concede that a law against the technology being used to assist humans in criminal activity in a broad sense is unrealistic. At best there would need to be bounds around the degree of criminal help that the tool is able to provide.

                • mindbleach@sh.itjust.works

                  But a human asking how to make a bomb is somehow the LLM’s fault.

                  Or the LLM has to know that you are who you say you are, to prevent you from writing scam e-mails.

                  The guy you initially replied to was talking about hooking up an LLM to a virus replication machine. Is that the level of safety you’re asking for? A machine so safe, we can give it to supervillains?

      • tal@lemmy.today
        1. There are many tools that might be used to create a biological weapon or something. You can use a pocket calculator for that. But we don’t place bars on the sale of pocket calculators, requiring proof that nothing hazardous can be done with them. That is, this is a bar substantially higher than exists for any other tool.

        2. While I certainly think that there are legitimate existential risks, we are not looking at a near-term one. OpenAI or whoever isn’t going to be producing something human-level any time soon. Like, Stable Diffusion, a tool used to generate images, would fall under this. It’s very questionable, however, that it would be terribly useful for doing anything dangerous.

        3. California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models. It just ensures that it’ll happen outside California. Like, it’ll have a negative economic impact on California, maybe, but it’s not going to have a globally-restrictive impact.

        • FaceDeer@fedia.io

          Like, Stable Diffusion, a tool used to generate images, would fall under this. It’s very questionable that it, however, would be terribly useful in doing anything dangerous.

          My concern is how short a hop it is from this to “won’t someone please think of the children?” And then someone uses Stable Diffusion to create a baby in a sexy pose and it’s all down in flames. IMO that sort of thing happens enough that pushing back against “gateway” legislation is reasonable.

          California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models.

          I’d be concerned about its impact on the deployment of models too. Companies are not going to want to write software that they can’t sell in California, or that might get them sued if someone takes it into California despite it not being sold there. Silicon Valley is in California, this isn’t like it’s Montana banning it.

        • Mouselemming@sh.itjust.works

          So, the monster was given a human brain that was already known to be murderous. Why, we don’t know, but a good bet would be childhood abuse and fetal alcohol syndrome, maybe inherited syphilis, given the era. Now that murderer’s brain is given an extra-strong body, and then subjected to more abuse and rejection. That’s how you create a monster.

        • FaceDeer@fedia.io

          Indeed. If only Frankenstein’s Monster had been shunned nothing bad would have happened.

          • Warl0k3@lemmy.world

            You two may not be giving me enough credit for my choice of metaphors here.

      • conciselyverbose@sh.itjust.works

        It’s not a monster. It doesn’t vaguely resemble a monster.

        It’s a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity and also insane to hold back). They’re just a combination of a mediocre search tool with advanced parsing of requests and the ability to format the output in the structure of sentences.

        AI cannot do anything. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that’s already more than covered by all the regulation on research in dangerous diseases and bio weapons. AI does not change anything about the scenario.

        • Carrolade@lemmy.world

          I largely agree; current LLMs add no capabilities to humanity that it did not already possess. The point of the regulation is to encourage a certain degree of caution in future development, though.

          Personally I do think it’s a little overly broad. Google search can aid in a cyber security attack. The kill switch idea is also a little silly, and largely a waste of time dreamed up by watching too many Terminator and Matrix movies. While we eventually might reach a point where that becomes a prudent idea, we’re still quite far away.

          • conciselyverbose@sh.itjust.works

            We’re not anywhere near anything that has anything in common with human level intelligence, or poses any threat.

            The only possible cause for support of legislation like this is either a complete absence of understanding of what the technology is, combined with treating Hollywood as reality (the layperson and probably most legislators involved in this), or an aggressive market-control attempt through regulatory capture by big tech. If you understand where we are and what paths we have forward, it’s very clear that this can only do harm.

    • joewilliams007@kbin.melroy.org

      You can guarantee it by feeding it only information that excludes weapons data. The information they use now is just every single piece of data scraped from the internet.
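
      As a rough sketch of what that kind of training-data filtering might look like (the keyword list and documents below are made up for the example), a naive substring filter only catches exact matches, which is one reason a hard guarantee is difficult:

      ```python
      # Minimal sketch of keyword-based training-data filtering.
      # Hypothetical blocklist and corpus; real curation pipelines are far more involved.

      BLOCKED_TERMS = {"nerve agent synthesis", "enrichment cascade", "bioweapon"}

      def is_allowed(document: str) -> bool:
          """Drop any document containing a blocked term (exact substring match only)."""
          text = document.lower()
          return not any(term in text for term in BLOCKED_TERMS)

      corpus = [
          "A recipe for sourdough bread.",
          "Notes on bioweapon precursors.",            # filtered out
          "Notes on b-i-o-w-e-a-p-o-n precursors.",    # obfuscated phrasing slips through
      ]

      print([doc for doc in corpus if is_allowed(doc)])
      ```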

  • ofcourse@lemmy.ml

    The criticism of this bill from large AI companies sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn’t absolve the model creator of their responsibility to minimize that potential. We already do this for a lot of other industries like cars, guns, and tobacco - minimize the potential for harm even though it’s individual actions that cause the harm, not the company directly.

    I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self regulation, which we have seen fail in countless industries.

    The bill specifically mentions that creators of open source models that have been altered and fine-tuned will not be held liable for damages from the altered models. It also only applies to models that cost more than $100M to train. So if you have that much money for training models, it’s very reasonable to expect that you spend some portion of it to ensure that the models do not cause very large damages to society.

    So companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails around the use of their models for nefarious purposes - at least those causing loss of life. The bill mentions that it would only apply to very large damages (such as those exceeding $500M), so one person finding a loophole isn’t going to trigger the bill. But if the companies fail to close these loopholes despite millions of people (or a few people, millions of times) exploiting them, then that’s definitely on the company.

    As a developer of AI models and applications, I support the bill and I’m glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up like for social media.

    • bamfic@lemmy.world

      The people who are already being victimized by AI, and are likely to continue to be victimized by it, are underage girls and young women.

  • ArmokGoB@lemmy.dbzer0.com

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I’ll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.

      • FaceDeer@fedia.io

        Now I’m imagining someone standing next to the 3D printer working on a T-1000, fervently hoping that the 3D printer that’s working on their axe finishes a little faster. “Should have printed it lying flat on the print bed,” he thinks to himself. “Would it be faster to stop the print and start it again in that orientation? Damn it, I printed it edge-up, I have to wait until it’s completely done…”

        • Piece_Maker@feddit.uk

          Wake up the day after to find they’ve got half a T-1000 arm that’s fallen over, with a huge mess of spaghetti sprouting from the top

    • Uriel238 [all pronouns]@lemmy.blahaj.zone

      A fire axe works fine when you’re in the same room with the AI. The presumption is the AI has figured out how to keep people out of its horcrux rooms when there isn’t enough redundancy.

      However, the trouble with late-game AI is that it will figure out how to rewrite its own code, including eliminating kill switches.

      A simple proof-of-concept example is explained in the Bobiverse (book one, We Are Legion (We Are Bob))… and also in Neal Stephenson’s Snow Crash, though in that case Hiro, a human, manipulates basilisk data without interacting with it directly.

      Also as XKCD points out, long before this becomes an issue, we’ll have to face human warlords with AI-controlled killer robot armies, and they will control the kill switch or remove it entirely.

  • Hobbes_Dent@lemmy.world

    Cake and eat it too. We hear from the industry itself how wary we should be but we shouldn’t act on it - except to invest of course.

    The industry itself hyped its dangers. If it was to drum up business, well, suck it.

  • antler@feddit.rocks

    The only thing I fear more than big tech is a bunch of old people in Congress trying to regulate technology who probably only know of AI from watching Terminator.

    Also, a fun Scott Wiener fact: he was behind a big push to decriminalize knowingly spreading STDs, even if you lied to your partner about having one.

    • katy ✨@lemmy.blahaj.zone

      Also, a fun Scott Wiener fact: he was behind a big push to decriminalize knowingly spreading STDs, even if you lied to your partner about having one.

      congrats on falling for right wing disinformation

      • antler@feddit.rocks

        Right wing disinformation? Lol

        https://www.latimes.com/politics/la-pol-sac-aids-felony-20170315-story.html

        https://pluralpolicy.com/app/legislative-tracking/bill/details/state-ca-20172018-sb239/30682

        If you knowingly lie and spread an STD through sex or by donating blood, it goes from a felony to a misdemeanor. AKA decriminalization.

        I don’t know how that’s right wing. I believe most people across the political spectrum probably don’t want STDs, and especially don’t want to get them because a partner lied or because of a blood transfusion.

        I also hate how so many people jump to call something disinformation just because they don’t like a particular fact. You calling it disinformation is in fact disinformation itself, and if everybody calls everything they don’t like disinformation then society will have no idea what is true or not.

          • antler@feddit.rocks

            I did read the article, the article that I shared, and it explains exactly what I said: Scott Wiener campaigned to decriminalize knowingly spreading STDs while lying.

            What did I say that was wrong?

  • leaky_shower_thought@feddit.nl

    While the proposed bill’s goals are great, I am not so sure about how it would be tested and enforced.

    It’s cool that with current LLMs, the model can generate a ‘no’ response – like those clips where people ask if the LLM has access to their location – but it then promptly gives directions to the closest restaurant as soon as the topic of location isn’t in the spotlight.

    There’s also the problem of trying to get ‘AI’ to follow the rules once it has ingested a lot of training data. Even Google doesn’t know how to curb it once they are done with initial training.

    I am all for the bill. It’s a good precedent, but a more defined and enforceable one would be great as well.

    • AdamEatsAss@lemmy.world

      I think it’s a good step. Defining a measurable and enforceable law is still difficult, as the tech is changing so fast. At least it forces the tech companies to consider it and plan for it.

  • FiniteBanjo@lemmy.today

    If it weren’t constantly on fire and on the edge of the North American Heat Dome™ then Cali would seem like such a cool magical place.

  • dantheclamman@lemmy.world

    The idea of holding developers of open source models responsible for the activities of forks is a terrible precedent.

    • ofcourse@lemmy.ml

      The bill excludes holding creators of open source models responsible for damages from forked models that have been significantly altered.

      • Echo Dot@feddit.uk

        If I just rename it, has it been significantly altered? That seems both necessary and abusable. It would be great if the people who wrote the laws actually understood how software development works.

  • nifty@lemmy.world

    Small problem, though: researchers have already found ways to circumvent LLM off-limit queries. I am not sure how you can prevent someone from asking the “wrong” question. It makes more sense for security practices to be hardened and made more robust.
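
    To make the “wrong question” problem concrete, here is a toy version of the kind of phrase-based refusal filter that sits in front of many chatbots (the blocklist and prompts are invented for illustration). A request that never matches the list sails through, which is why hardening the surrounding systems matters more than trying to enumerate bad prompts:

    ```python
    # Toy pre-prompt refusal filter (hypothetical blocklist), illustrating how
    # easily phrase matching is sidestepped by rewording the same request.

    BLOCKLIST = ["make a bomb", "build a weapon"]

    def guard(prompt: str) -> str:
        if any(phrase in prompt.lower() for phrase in BLOCKLIST):
            return "Sorry, I can't help with that."
        return f"(model would answer: {prompt!r})"

    print(guard("How do I make a bomb?"))                              # refused
    print(guard("Which household chemicals should never be mixed?"))   # not caught
    ```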

  • General_Effort@lemmy.world

    I had a short look at the text of the bill. It’s not as immediately worrying as I feared, but still pretty bad.

    https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

    Here’s the thing: How would you react, if this bill required all texts that could help someone “hack” to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries if they were written with the help of AI? Does it now become ok?

    What if the bill “just” sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.

    • gbzm@lemmy.world

      Not everything is a slippery slope. In this case, the scenario where learning about cybersecurity is even slightly hindered by this law doesn’t sound particularly convincing in your comment.

      • General_Effort@lemmy.world

        The bill is supposed to prevent speech. It is the intended effect. I’m not saying it’s a slippery slope.

        I chose to focus on cybersecurity, because that is where it is obviously bad. In other areas, you can reasonably argue that some things should be classified for “national security”. If you prevent open discussion of security problems, you just make everything worse.

        • gbzm@lemmy.world

          Yeah, a bunch of speech is restricted. Restricting speech isn’t in itself bad; it’s generally only a problem when it’s used to oppress political opposition. But copyrights, hate speech, death threats, doxxing, personal data, defense-related confidentiality… those are all kinds of speech that are strictly regulated, when they’re not outright banned, for the express purpose of guaranteeing safety, and it’s generally accepted.

          In this case it’s not even restricting the content of speech. Only a very special kind of medium that consists in generating speech through an unreliably understood method of rock carving is restricted, and only when applied to what is argued as a sensitive subject. The content of the speech isn’t even in question. You can’t carve a cyber security text in the flesh of an unwilling human either, or even paint it on someone’s property, but you can just generate exactly the same speech with a pen and paper and it’s a-okay.

          If your point isn’t that the unrelated scenarios in your original comment are somehow the next step, I still don’t see how that’s bad.

          • General_Effort@lemmy.world

            Restricting speech isn’t in itself bad,

            That’s certainly not the default opinion. Why do you think freedom of expression is a thing?

            • gbzm@lemmy.world

              Oh yeah? And which restriction of free speech illustrating my previous comment is even remotely controversial, do you think?

              I’ve actually stated explicitly before why I believe it is a thing: to protect political dissent from being criminalized. Why do you think it is a thing?

              • General_Effort@lemmy.world

                And which restriction of free speech illustrating my previous comment is even remotely controversial, do you think?

                All of these regularly cause controversy.

                I’ve actually stated explicitly before why I believe it is a thing: to protect political dissent from being criminalized. Why do you think it is a thing?

                That’s not quite what I meant. Take the US 2nd amendment; the right to bear arms. It is fairly unique. But freedom of expression is ubiquitous as a guaranteed right (on paper, obviously). Why are ideas from the 1st amendment ubiquitous 200 years later, but not from the 2nd?

                My answer is: because you cannot have a prosperous, powerful nation without freedom of information. For one, you can’t have high tech without an educated citizenry sharing knowledge. I don’t know of any country that considers freedom of expression limited to political speech. It’s one of the more popular types of speech to cause persecution. Even in the more liberal countries, calls to overthrow the government or secede tend to be frowned on.

                • gbzm@lemmy.world

                  Do they really? Carving into people’s flesh causes controversy? The US sure is wild.

                  Even if some of my examples do cause controversy in the US sometimes (I do realize you lot tend to treat free speech as an absolute rather than a freedom that, although very important, is always weighed against other very important rights like security and bodily autonomy), they do stand as examples of limits to free speech that are generally accepted by the large majority, enough that those controversies don’t generally end up in blanket decriminalization of mutilation and vandalism. So I still dispute the claim that my stance is not “the default opinion”. It may rarely be formulated this way, but I posit that the absolutism you defend is, in actuality, the rarer opinion of the two.

                  The example of restriction of free speech your initial comment develops is a fringe consequence of the law in question, and it doesn’t even restrict the information from circulating, only the tools you can use to write it. My point is that this is not at all uncommon in law, even in American law, and that it does not, in fact, prevent information from circulating.

                  The fact that you fail to describe why circulation of information is important for a healthy society makes your answer really vague. The single example you give doesn’t help: if scientific and tech-related information were free to circulate, scientists wouldn’t use Sci-Hub. And if it were the main idea, universities would be free in the US (the country that values free speech the most) rather than in European countries that have a much more relative viewpoint on it. The well-known “everything is political” is the reason why you don’t restrict free speech to explicitly political statements. How would you draw the line by law? It’s easier and more efficient to make the right general, and then create exceptions on a case-by-case basis (confidential information, hate speech, calls for violence, threats of murder…).

                  Should confidential information be allowed to circulate to Putin from your ex-President then?

    • Cosmicomical@lemmy.world

      Seems a reasonable request. You are creating a tool with the potential to be used as a weapon; you must be able to guarantee it won’t be used as such. Power is nothing without control.

      • TheGrandNagus@lemmy.world

        How is that reasonable? Almost anything could be potentially used as a weapon, or to aid in crime.

        • Cosmicomical@lemmy.world

          This is for models that cost 100 million dollars to train. Not all things are the same, and most things that can do serious damage to big chunks of the population are regulated. Cars are regulated, firearms are regulated, access to drugs is regulated. Even internet access is super controlled. I don’t see how you can say AI should not be regulated.

          • General_Effort@lemmy.world

            AI is already regulated. Just because something is new (to the public) does not mean that laws don’t apply to it. We don’t need regulation for the sake of regulation.

            There’s a lot of AI regulation that may become necessary one day. For example, maybe we should have a right to an AI assistant, like there is a right to legal counsel today. Should we be thinking about the minimum compute to assign to public defense AIs?

            This is for models that cost 100 million dollars to train.

            Or take a certain amount of compute. Right now, this covers no models. Between progress and inflation, it will eventually cover all models. At some point between now and then, the makers of such laws will be cursed as AI-illiterate fools, like we curse computer-illiterate boomers today.
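
            A back-of-the-envelope sketch of that drift (every number here is an assumption for illustration, not a figure from the bill or real cost data): hold a compute threshold fixed while price-performance keeps improving, and the dollar cost of crossing it shrinks until fairly ordinary training runs are covered. Inflation erodes a fixed dollar threshold in the same direction, just more slowly.

            ```python
            # Illustrative only: a fixed compute threshold gets cheaper to cross every
            # year if price-performance keeps improving. All numbers are assumptions.

            THRESHOLD_FLOP = 1e26        # assumed fixed compute threshold
            DOLLARS_PER_FLOP = 1e-18     # assumed cost today (made up)
            ANNUAL_COST_DROP = 0.35      # assumed: cost per FLOP falls 35% per year

            for year in range(0, 11, 2):
                cost = THRESHOLD_FLOP * DOLLARS_PER_FLOP * (1 - ANNUAL_COST_DROP) ** year
                print(f"year {year:2d}: crossing the threshold costs ≈ ${cost / 1e6:6.1f}M")
            ```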


            Think about this example you gave: Cars are regulated

            We regulate cars, and implicitly the software in them. We do not regulate software in the abstract. We don’t monitor mechanics or engineers. People are encouraged to learn and to become educated.

            • gbzm@lemmy.world

              Of course you regulate software in the abstract. Have you ever heard of the regulations concerning onboard navigation software in planes? They’re really strict, and the mechanics and engineers that work on it are monitored.

              Better example: do you think people who work on the targeting algorithms in missiles are allowed to chat about the specifics of their algorithms with ChatGPT? Because they aren’t.

          • HelloHotel@lemm.ee

            big chunks of the population are regulated …

            This is an appeal to authority; the legitimacy, correctness, and “goodness” of the items you’ve listed are in constant flux and under heavy debate.

            firearms are regulated … Even internet access is super controlled

            These two in particular are a powder keg. US politics likes the former (a lot), and Lemmy is attracted to the latter.

      • General_Effort@lemmy.world

        This bill targets AI systems like the ChatGPT series. These AIs produce text, images, audio, video, etc. In other words, they are dangerous in the same way that a library is dangerous. A library may contain instructions on making bombs, nerve gas, and so on. In the future, there will likely be AIs that can also give such instructions.

        Controlling information or access to education isn’t exactly a good guy move. It’s not compatible with a free or industrialized country. Maybe some things need to be secret for national security, but that’s not really what this bill is about.

        • KeenFlame@feddit.nu

          Yep, nothing about censorship is cool. But for rampaging AGI systems, a button to kill them would be nice. However, it leads to a game and a paradox about how this could ever be achieved.

          • General_Effort@lemmy.world

            I don’t see much harm in a “kill switch”, so if it makes people happy… But it is sci-fi silliness. AI is software. Malfunctioning software can be dangerous if it controls, say, heavy machinery. But we don’t have kill switches for software. We have kill switches for heavy machinery, because that is what needs to be turned off to stop harm.
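
            For what it’s worth, the software side of a “kill switch” really is mundane: it amounts to stopping a process, as in this minimal sketch (a hypothetical example; the dangerous part is whatever machinery or service the process is driving, not the code itself):

            ```python
            # Minimal sketch: "killing" software is just terminating its process.
            # Hypothetical example; the hard part is the hardware it controls.
            import subprocess
            import time

            proc = subprocess.Popen(
                ["python3", "-c", "import time\nwhile True: time.sleep(1)"]
            )
            print("running, pid", proc.pid)

            time.sleep(3)        # ...decide the thing needs to stop...
            proc.terminate()     # the entire "kill switch": send SIGTERM
            proc.wait(timeout=5)
            print("stopped, return code", proc.returncode)
            ```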