• 0x0 · 5 months ago

    You’re kinda missing the point… the big American names are claiming AI should be limited because it’s so dangerous.

    Who should control AI? Them.

    • j4k3@lemmy.world · 5 months ago
      Only proprietary AI pushes for regulatory measures in an attempt at monopoly. That boils down to one man's corruption.

      Control is impossible. The technology is open source. Transformers are to the present what Apache is to the internet. There were several proprietary attempts to monopolize web servers too; all of them failed because of the open source alternative.

      When one of the old engineers from Bell Labs starts pushing some new open source thing, I pay close attention. The digital age is built on technology with those credentials.

      Yann LeCun is the chief AI scientist at Meta and a primary driver of open source AI. He operates independently of Zuck. Meta AI is not trying to monopolize AI; they are attempting to lead, not monopolize or control.

      The authoritarian idea of control is a fallacy. This is not something that can be controlled like that. People don’t seem to realize the ultimate scope yet. AI is on par with the entire internet in how it will change society long term.

      In a lot of ways, present AI is like the early days of the microprocessor and personal computers. Most people couldn’t really see the potential uses of computers when an Apple II ran a 6502 variant. It was barely more than a tech novelty. The chip itself was pretty useless until all the peripherals were developed around it. At present, AI is kinda messy in the public sphere. All of the tools available publicly are like rough examples only. There are a lot more capabilities that are not present in the libraries most people are using in code. This is the disconnect between publicly facing tools and why corporations appear to make foolish decisions: they are being approached by the AI companies directly for integrated solutions.

      Tangent aside, the current usefulness of AI may look limited to people who cannot see the bigger picture, or smaller if you will. At its core, AI reliably adds a new logic element to Boolean math operations: a contextual logic element with flexibility. No matter the issues at present, this is new math, and there are countless unexplored places to make use of it. Asking who should control this is like telling the world about division and then asking who should control division. Any attempt to do so is draconian nonsense. It is a fundamentally bad argument. Division is a phenomenon of the universe.

      The large language model is an enormous statistical math problem involving word vectors and imaginary numbers. It has packaged human language and culture into a deterministic math problem. Some humans struggle to understand basic division. Most humans struggle to understand vectors and ranked tensors. Of those who understand the latter, very few understand the math behind large language models. None of that changes the fundamental issue: this is just a math problem that has been publicly shared.

      The large corporate models do not mean very much. They are fighting to be the best generalist. They all strive to add the same types of safety alignment, but this means nothing. Anyone can do the math and make their own model. The real danger is not the generalist models; it is the specialists. Specialists do not need the enormous datasets.
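The "contextual logic element" idea can be illustrated with word vectors. Here is a toy sketch with made-up three-dimensional vectors (real models learn thousands of dimensions per token; all names and values below are hypothetical, chosen only to show the mechanism):

```python
import math

# Hypothetical word vectors, for illustration only.
VECTORS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def related(w1, w2, threshold=0.8):
    """A 'contextual Boolean': true when two words sit close in vector
    space, unlike a crisp string-equality test."""
    return cosine(VECTORS[w1], VECTORS[w2]) >= threshold

print(related("king", "queen"))  # close in meaning: True
print(related("king", "apple"))  # unrelated: False
```

The point of the sketch: the truth value comes from learned context (vector geometry), not from exact matching, which is the new logic element the paragraph describes.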

      The fantasy danger of the machines is nothing more than a modern emergence of a Greek pantheon mythos. At present, “safety” is more about populist stupidity in politics, public ignorance, and creating a way for the average person to interact with a tool they would otherwise write off as too hard to understand if they saw any of the real complexity. The public-facing experience has been dumbed down massively. For instance, I can split up a model and talk to the various underlying entities that underpin alignment and create the patterns you see in prompt replies. I’m working on understanding the sampler techniques that control this behavior and more. This is done with PyTorch in the model loader code and is separate from softmax settings like temperature and token cutoffs.
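The softmax settings mentioned above are simple to sketch. Below is a minimal, self-contained example in plain Python (no PyTorch; the function names are mine, not from any library) of how temperature and a top-k token cutoff shape the sampling distribution:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores (logits) into probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_top_k(logits, k=2, temperature=1.0, rng=random):
    """Token cutoff (top-k): keep only the k highest-scoring tokens,
    renormalize with softmax, then draw one token index at random."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in top], temperature)
    return rng.choices(top, weights=probs, k=1)[0]
```

With k=1 this degenerates to greedy decoding (always the argmax token); raising the temperature spreads probability mass onto lower-ranked tokens, which is why outputs get more varied.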

      Anyways, if AI were taken down by someone essentially turning off the internet, I would not be affected, and neither would millions of other people. I can run, code, and train AI completely independently. It is just a complex math problem. Controlling math is as draconian as thought policing.

      In the absolute sense, AGI is still a good ways off. Present AI is not persistent. It has no memory or ability to dynamically change yet. All of those apparent features are done in regular code and fed back into the model with each new prompt.
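That last point is easy to demonstrate: the "memory" lives in ordinary code that re-sends the whole transcript with every prompt. A minimal sketch, with `fake_model` as a stand-in for a real, stateless LLM call:

```python
def fake_model(prompt: str) -> str:
    """Stand-in for a stateless model: it only sees the text it is given,
    and forgets everything between calls."""
    return f"({len(prompt.splitlines())} lines of context seen)"

class Chat:
    def __init__(self):
        self.history = []  # persistence lives here, in plain code, not in the model

    def send(self, user_msg: str) -> str:
        self.history.append(f"User: {user_msg}")
        prompt = "\n".join(self.history)  # the full transcript is re-fed every turn
        reply = fake_model(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply

chat = Chat()
chat.send("hello")
chat.send("do you remember me?")
```

The model never "remembers" anything; the wrapper just keeps growing the prompt, which is exactly the feed-it-back-each-turn pattern described above.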

      In the end, AI requires risk mitigation policies. It is not controllable like that.