• popcar2 · 6 points · 1 year ago

    I low-key wish there were a separate AI leaderboard. It would be really interesting to see how fast bots can actually solve a problem as soon as it goes up, and it’d be nice to compare that to last year.

    • soulsource@discuss.tchncs.de · 4 points · 1 year ago

      Honestly, I’d be very surprised if AI could even solve those problems.

      The only AI I’ve heard of that might eventually be able to actually solve problems is OpenAI’s Project Q, and that one isn’t public yet. The publicly available AI tools can only repeat things they have already seen online (with a rather large chance of repeating them wrong due to their lossy nature). So, unless the riddles exist somewhere online in a reasonably similar form, I’d expect the chatbots to fail at solving them.

      (They can, however, help a human developer solve them quicker than the developer could without AI assistance.)

  • armchair_progamer · 2 points · 1 year ago

    Whoever makes the Advent of Code problems should test them all on GPT-4 and other LLMs, and try to make it so the AI can’t solve them.

  • derpgon · 1 point · 1 year ago (edited)

    I hope AoC is niche enough that the community won’t use AI before the leaderboards are filled.

    Would be interesting to compare this year’s times with previous years’ and see if there’s a trend.