I’m a dev, and I was browsing Mozilla’s careers page and came across this. I find it a little odd that a privacy-respecting company is interested in building an AI-powered recommendation engine. Wouldn’t they need to sift through the very data we want kept private for a recommendation engine to be any good? A rough sketch of what I mean is below. Curious what others think.
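
To be concrete about what I mean by “sift through”, here’s a purely hypothetical sketch of a content-based recommender — nothing to do with the actual job posting, and the item tags and interest profile are my own invented example. The point is that ranking depends on a profile built from your history; the open question is whether that profile stays on your machine or gets shipped to a server.

```typescript
// Purely illustrative, not Mozilla's design: a toy content-based recommender.
// The interest profile is built from the user's own history and, in principle,
// never has to leave the device.

type Item = { id: string; tags: string[] };

// Count how often each tag appears in the items the user has engaged with.
function buildProfile(history: Item[]): Map<string, number> {
  const profile = new Map<string, number>();
  for (const item of history) {
    for (const tag of item.tags) {
      profile.set(tag, (profile.get(tag) ?? 0) + 1);
    }
  }
  return profile;
}

// Score a candidate by how strongly its tags overlap with the profile.
function score(candidate: Item, profile: Map<string, number>): number {
  return candidate.tags.reduce((sum, tag) => sum + (profile.get(tag) ?? 0), 0);
}

// Rank candidates locally; only the final ordering would ever be displayed.
function recommend(history: Item[], candidates: Item[], topN = 3): Item[] {
  const profile = buildProfile(history);
  return [...candidates]
    .sort((a, b) => score(b, profile) - score(a, profile))
    .slice(0, topN);
}

// Hypothetical browsing history and candidate articles.
const myHistory: Item[] = [
  { id: "a", tags: ["privacy", "browsers"] },
  { id: "b", tags: ["privacy", "encryption"] },
];
const candidates: Item[] = [
  { id: "c", tags: ["privacy", "vpn"] },
  { id: "d", tags: ["sports"] },
  { id: "e", tags: ["browsers", "encryption"] },
];
console.log(recommend(myHistory, candidates).map((i) => i.id)); // e.g. ["c", "e", "d"]
```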

  • fiat_lux@kbin.social · 34 points · 1 year ago

    Mozilla already has a huge amount of information submitted by volunteers that it could use to train its own subject-specific LLM.

    And as we saw from Meta’s nearly ethical-consideration-devoid CM3Leon paper (no, I will not pronounce it “Chameleon”), you don’t need a huge dataset to train on if you supplement it with your own preconfigured biases. For better or worse.

    Just because something is “AI-powered” doesn’t mean the training datasets have to be acquired unethically. Though there is something to be said about making material public and the inevitable consequence that it can then be used for training.

    I hope whoever gets the job can help pave the way for ethics standards in AI research.

    • mark (OP) · 6 up / 9 down · 1 year ago

      Ironically, this comment reads just like an AI wrote it.

      • fiat_lux@kbin.social · 11 points · 1 year ago

        The irony that writing which follows the rules educators harassed me into complying with is now difficult to distinguish from AI-generated responses is something I’ve found pretty amusing lately. It’s a bias built into the system, but it has the unintended side effect of delegitimising actual human opinions. What an own-goal for civilisation.

        I am regrettably all too human. I have even been issued hardware keys to prove it!