• 0 Posts
  • 160 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • This.

    My unit and integration tests are for the things I thought of and, more importantly, don’t want to accidentally break in the future. I will be monumentally stupid a year from now and try to destroy something because I forgot it existed.

    Testers get in there to play, be creative, be evil, and then discuss what they find. Is this a problem? Do we want to get out in front of it before the customer finds it? They aren’t the red team, they aren’t the enemy. We sharpen each other. And we need each other.
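    To make that concrete, here is a minimal sketch of the kind of regression test I mean, in Python's standard `unittest`. The function name and behavior (`normalize_username`) are invented for illustration; the point is pinning down an edge case that future-me will have forgotten exists.

    ```python
    import unittest

    def normalize_username(name: str) -> str:
        """Hypothetical helper: lowercase and strip whitespace.

        Empty input deliberately stays empty rather than raising,
        because downstream code depends on that (the detail I will
        forget in a year).
        """
        return name.strip().lower()

    class TestNormalizeUsername(unittest.TestCase):
        def test_strips_and_lowercases(self):
            self.assertEqual(normalize_username("  Alice "), "alice")

        def test_empty_stays_empty(self):
            # The edge case this test exists to protect. If future-me
            # "fixes" empty input to raise, this fails loudly.
            self.assertEqual(normalize_username(""), "")

    if __name__ == "__main__":
        unittest.main()
    ```

    The second test is the valuable one: it documents a decision, not just behavior.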





  • I think this might be hypocritical of me, but in one sense I think I prefer that outcome. Let those existing trained models become the most vile and untouchable of copyright infringing works. Send those ill-gotten corporate gains back to the rights holders.

    What, me? Of course I’ve erased all my copies of those evil, evil models. There’s no way I’m keeping my own copies to run, illicitly, on my own hardware.

    (This probably has terrible consequences I haven’t thought far enough ahead on.)


  • I think you’re right about style. As a software developer myself, I keep thinking back to early commercial / business software license terms that exhaustively listed the ways you were forbidden to add their work to any “information retrieval system.” And I think, ultimately, computers cannot process style. They can process something, and style is just the closest label our brains can come up with for it.

    This feels trite at first, but computers process data. They don’t have a sense of style. They don’t have independent thought, even if you call it a “<think> tag”. Any work product created by a computer from copyrighted information is a derivative work, in the same way a machine-translated version of a popular fiction book is.

    This act of mass corporate disobedience, putting distillate made from our collective human works behind a paywall, needs to be punished.

    . . .

    But it won’t be. That bugs me to no end.

    (I feel like my tone became a bit odd, so if it felt like I was yelling at the poster I replied to, I apologize. The topic bugs me, but what you said is true and you’re also correct.)









  • I, too, think humans become incapable of learning from their mistakes when they become wealthy. That’s what keeps them wealthy, of course.

    More seriously, it makes sense that this could become a good thing. If it’s true that Kevin failed the first time by lacking the confidence to stand up for his ideals, why are we judging what we haven’t seen yet? Give him a chance.

    (Is that true? I’m open to being wrong.)

    If they ran ads asking Reddit moderators to catalogue their frustrations, it feels reasonable that he could be bankrolling solutions to address those weaknesses and problems.

    I’m excited to see what amazing new Fediverse features will be inspired by what he pays his teams to build for Digg.

    (I need some hope for the future, damnit. Do NOT take this away from me.)



  • mspencer712 to Science Memes@mander.xyz: “we are stardust” (28 days ago)

    In the US? You have to pay taxes, but you don’t have to file. See Substitute For Return (SFR). I’ve done this ever since the Bush administration.

    Intuit is quick to point out the massive deductions you’re missing out on. Then you pay for their services, and those theoretical deductions evaporate.




  • mspencer712 to Technology@lemmy.world: “Kevin Rose, Alexis Ohanian acquire Digg” (29 days ago)

    It’s ok to fear that someone else could get rich through trickery.

    It’s also ok to have hope that people learn from past mistakes and try to build something good.

    AI can generate slop, but it can also understand, categorize, filter, and moderate. It can also be slow to adapt to new attacks, or be analyzed and manipulated itself.

    I can’t offer much help to people who need to decide right now whether it’s good or bad. Predicting the future is a messy thing. But I choose to be cautiously optimistic.