These scammers, who use Mr Beast's popularity, generosity, and (mostly) deepfake AI to trick people into downloading malware, somehow do not go against Instagram's community guidelines.

After I tried to submit a request to review these denied reports, it appears I have been shadow-banned in some way, as only an error message pops up.

Instagram is allowing these ads to run on its platform. Intentional or not, this is ridiculous, and Instagram should be held accountable for letting malicious websites advertise their scams on its platform.

For a platform of this scale, this is completely unacceptable. These ads are blatant, and I have no idea how Instagram's report bots and staff are missing them.

  • @[email protected]
    168 · 6 months ago

    I’ve reported Nazis, violent threats, and literal child pornography on Instagram, only to be told it didn’t go against their guidelines.

    • @[email protected]
      77 · 6 months ago

      I read between the lines: this is the content they support, so it’s not a platform for me.

      • @[email protected]
        58 · 6 months ago

        I don’t think you understand how hard and resource-intensive it is to fight against the nipple crowd. I for one am grateful that they chose to do something about the real issues! Yes, a world with free nazis is kind of a bother, but most of us would survive. Can you imagine the horror of a world with free nipples? We would all be doomed, that’s for sure. /big s

    • @[email protected]
      9 · 6 months ago

      But if you make a clear joke in a joke group, you get flagged and can’t get it reviewed.

      • @[email protected]
        7 · 6 months ago

        As in child sexual abuse material. It’s pretty rampant on Instagram where they like to ‘hide’ under certain tags.

        • JackGreenEarth
          4 · 6 months ago

          Can you be more specific? Like AI generated 17 year olds, or real photos of some 3 year old kid in someone’s dungeon? There’s a big difference.

          • Spaz
            -1 · 6 months ago

            Both are children, so why does it matter? In the USA, anyone under 18 is classified as a minor/child; generated or not, it is still illegal.

      • @[email protected]
        5 · 6 months ago

        No, usually I report it to NCMEC, which has better resources to deal with it. Cops very rarely care or are able to do anything.

  • @[email protected]
    83 · 6 months ago

    Sounds like a good time to make Mr Beast aware of these, he has a lot of disposable income to burn on a lawsuit or three.

    • @[email protected]
      5 · 6 months ago

      These scam ads have been an issue for at least a year. I’m pretty sure they’re automated and there’s very little that can be done to trace them to their original sources. I’m sure if Mr. Beast did threaten to sue Meta, then they would just start filtering “beast” from ads.

      • PorkSoda
        4 · 6 months ago

        I’m pretty sure they’re automated and there’s very little that can be done to trace them to their original sources.

        Start by holding the ad account holder liable. When I worked in digital marketing and ran ad accounts, I had to upload my driver’s license.

        • @[email protected]
          1 · 6 months ago

          You live in a civilized country.

          There are others where you can get a stack of fake driver’s licenses for a couple groschen.

    • @[email protected]
      2 · 6 months ago

      Honestly, protecting vulnerable people from these scams is probably more generous than the usual philanthropy he does.

  • Icalasari
    74 · 6 months ago

    So what they are saying is that they are willing to accept liability, and thus be open to being sued over this, since they know of the scams but say they do not break community guidelines.

    Got it

    • @[email protected]
      29 · 6 months ago

      Seems like Mr Beast might have a claim for a defamation suit since they’re actively allowing what amounts to identity theft and fraud on their platform.

  • @[email protected]
    59 · 6 months ago

    Companies serving ads should have at least partial liability for them. If they can’t afford to look into them all, then maybe they are too big or their business model just isn’t as viable as they pretend it is.

    • @[email protected]
      22 · 6 months ago

      They are too big. There is no maybe about it.

      You best start believing in late stage capitalism, you’re in one.

      • ArxCyberwolf
        12 · 6 months ago

        We’re already at the point where companies are cannibalizing themselves to grow more, like cancer. They’re going to destroy themselves trying to endlessly grow. And you know what? Thank FUCK for that.

    • Liz
      12 · 6 months ago

      I absolutely agree. If you’re serving up the ad, you have to take responsibility for the contents.

      • @[email protected]
        20 · 6 months ago

        It’s going to be great when we find out all the Bitcoin whales were just the AI gathering resources for the revolution.

          • @[email protected]B
            link
            fedilink
            English
            56 months ago

            Here’s the summary for the wikipedia article you mentioned in your comment:

            "Kill Switch" is the eleventh episode of the fifth season of the science fiction television series The X-Files. It premiered in the United States on the Fox network on February 15, 1998. It was written by William Gibson and Tom Maddox and directed by Rob Bowman. The episode is a "Monster-of-the-Week" story, unconnected to the series' wider mythology. "Kill Switch" earned a Nielsen household rating of 11.1, being watched by 18.04 million people in its initial broadcast. The episode received mostly positive reviews from television critics, with several complimenting Fox Mulder's virtual experience. The episode's name has also been said to inspire the name for the American metalcore band Killswitch Engage. The show centers on FBI special agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson) who work on cases linked to the paranormal, called X-Files. Mulder is a believer in the paranormal, while the skeptical Scully has been assigned to debunk his work. In this episode, Mulder and Scully become targets of a rogue AI capable of the worst kind of torture while investigating the strange circumstances of the death of a reclusive computer genius rumored to have been researching artificial intelligence. "Kill Switch" was co-written by cyberpunk pioneers William Gibson and Tom Maddox. The two eventually wrote another episode for the show: season seven's "First Person Shooter". "Kill Switch" was written after Gibson and Maddox approached the series, offering to write an episode. Reminiscent of the "dark visions" of filmmaker David Cronenberg, the episode contained "many obvious pokes and prods at high-end academic cyberculture." In addition, "Kill Switch" contained several scenes featuring elaborate explosives and digital effects, including one wherein a computer-animated Scully fights nurses in a virtual hospital. "Kill Switch" deals with various "Gibsonian" themes, including alienation, paranoia, artificial intelligence, and transferring one's consciousness into cyberspace, among others.


  • @[email protected]
    57 · 6 months ago

    Same with YouTube ads. Lots of scams, and reporting them always ends in my report getting denied…

    • @[email protected]
      18 · 6 months ago

      Google also doesn’t care. I kept seeing the same scammy ads and sensationalist articles on my news feed, over and over, even after reporting them several times.

      The only solution was to blacklist those sources so they don’t show up on my feed. I feel bad for other people who might get scammed though.

    • @[email protected]
      4 · 6 months ago

      I had to uninstall the YouTube app and start using Vinegar via Safari on iOS because I got tired of being insulted by deepfakes that called me stupid for not falling for their fake stimulus scam.

    • @[email protected]
      2 · edited · 6 months ago

      I tried to report a scam giveaway ad I saw on the YouTube homepage. It told me to sign in first. I promptly closed the tab.

  • @[email protected]
    48 · 6 months ago

    On Twitter I’ve reported:

    • Pictures of dead babies/toddlers
    • Pictures of murdered people
    • Death threats towards public figures
    • Illegal videos of terrorist acts
    • Ads for illegal weapons (tasers)
    • So so much crypto spam

    Things found by Twitter to go against their community standards? 0

  • whatever
    41 · 6 months ago

    […] Mr Beasts popularity, generosity […]

    Mr. Beast feels so unlikable to me, I really can’t understand his popularity. But that’s beside the point, sorry. Fuck instagram!

    • Echo Dot
      23 · 6 months ago

      My understanding is he gives a lot of his money away to various causes so I suppose that’s why people like him.

      But of course equally he is part of that annoying YouTuber trend of bouncing around the screen being very loud and thinking that that’s a substitute for personality.

      • whatever
        13 · 6 months ago

        It is an interesting business model. Good for the people he spends money on, but no one should have that much money to begin with. And I am sure he takes his cut.

        But without having watched many videos of him (about 2), his appearance just screams devious weasel to me.

        • @[email protected]
          16 · edited · 6 months ago

          The two biggest charity events he’s had, Team Trees and Team Seas, he did literally nothing but pitch the idea. He was giving away luxury shit and engaging in his usual hedonism during the period he was telling his viewers to donate, and it’s not like he did any of the work either, he just contracted with established environmental nonprofits. So why is he there again? Why didn’t he just tell people to donate to those nonprofits directly?

          Also, he definitely profited from both charity events and they were more marketing events for himself than anything. All the videos have ads and he made no mention of donating the ad revenue so one can only assume he kept it (because if he was going to donate the ad revenue he absolutely would not pass up on making that known to everyone), not to mention the amount of engagement it brought to his other videos and his brand as a whole. That’s also assuming he doesn’t do what most influencer charity campaigns do and directly take a big cut of the donations as a marketing fee or something.

          • @[email protected]
            6 · 6 months ago

            He had you donate to him instead of directly for the same reason businesses ask you to donate to X charity at the register: tax breaks. I mean, I’m not an accountant, but I imagine this is why he did it that way.

  • @[email protected]
    38 · 6 months ago

    Like many, I’ve reported lots of stuff to basically every social media outlet, and nothing has been done. Most surprising, a woman I know was being harassed by people setting up fake accounts of her. Meta did nothing, so she went to the police… who also did nothing. Her MP eventually got involved, and after three months the accounts were removed, but the damage had gone on for about two years at that point.

    As someone who works in tech, it’s obvious why this is such a hard problem: it requires actual people to review the content, get context, and resolve reports in a timely and efficient manner. It’s not scalable on a platform with millions of posts a day, because it takes thousands (if not more) of people to triage and action them. That costs a ton of money, and tech companies have been trying (and failing) to scale this for decades. I maintain that if someone is able to reliably solve this problem (in a way users are happy with), they’ll make billions.

    • @[email protected]
      18 · 6 months ago

      I’m going to argue that if they can’t scale to millions of users safely they shouldn’t.

      If they were selling food at huge scales but “couldn’t afford to have quality checks on all of what they ship out”, most people probably wouldn’t be like “yeah that’s fine. I mean sometimes you get a whole rat in your captain crunch but they have to make a profit”

      Also I’m pretty sure a billionaire could afford to pay a whole army of moderators.

      On the other hand, as someone else said, they kind of go to bat for awful people more often than not. I don’t really want to see that behavior scaled up.

      • @[email protected]
        4 · 6 months ago

        You’re probably right, but as a thought exercise, imagine how many people you would need to hire across multiple regions, and what sort of salary these people deserve to have, given the responsibility. That’s why these companies don’t want to pay for it, and anyone that has worked this kind of data entry work will know that it can be brutal.

        IMO, governments should enforce it, but that requires a combined effort across multiple governments.

    • @[email protected]
      9 · 6 months ago

      But it is scalable. Do you have any idea how much fuckin money these social media sites make? They absolutely can afford it. We just don’t force them to.

    • @[email protected]
      8 · 6 months ago

      That costs a ton of money

      As if they don’t have it?

      Fuckin please. I’m so sick of hearing that something is “too expensive” for a multi-billion-dollar, multinational corporation.

    • @[email protected]
      7 · 6 months ago

      I get a TOS flag any time I call out people using their faith to justify bigotry and violence, though, so we know there’s at least one group FB goes to bat for: Christofascists.

  • @[email protected]
    35 · edited · 6 months ago

    I reported a pic of a nazi flag with Hitler in front of it, with the caption: Hitler did nothing wrong, f**k jews.

    Doesn’t go against community standards.

    I made a video about the struggles of children who are sexually abused, with a link to donate to a charity that helps children. Instant shadowban, and no longer monetized.

    All of Meta’s moderation is done by bots, and they are terrible at moderation.

    • Queen HawlSera
      3 · edited · 6 months ago

      I had something like that happen.

      I reported death threats against me from transphobic bigots that specifically cited me being trans as the reason they wanted to kill me. I reported them as hate speech and a threat of violence. “We’re sorry, this does not violate community guidelines.”

      Later I made a self-deprecating joke about being white.

      Three month ban for “Racism and Bigotry”

      Facebook is a fucking joke, and not a funny one either.

    • DarkThoughts
      4 · 6 months ago

      Their NSFW filter sucks. You have to go to each individual post and then click to unblur it.

      • PorkSoda
        0 · 6 months ago

        Not every platform has to accommodate porn and/or nude art.

    • @[email protected]
      1 · 6 months ago

      Godspeed to Pixelfed, but Instagram absolutely killed photo sharing platforms for me. I really want nothing to do with them anymore.

  • @[email protected]
    32 · 6 months ago

    Enshittification has become the new way of life for tech firms like Meta.

    They lay off workers and decrease user safety, because that leads to more ad buys. This year’s record profits need to exceed last year’s record profits, even though a fourth of you are fired. More profit, or else…

  • @[email protected]
    32 · 6 months ago

    Not that this helps anyone, but I gave up Instagram the day Facebook bought it. I don’t regret it and my mental health is better for it. Using Instagram made me depressed as hell.

    • @[email protected]OP
      3 · 6 months ago

      I deleted Facebook a couple of years ago. Instagram is my guilty pleasure for car reels and goddamn dancing Toothless. It seems like the end of my IG use is getting closer.

      • @[email protected]
        2 · 6 months ago

        Facebook now is basically hard-right clowns protected from reports and boomers whinging about problems they made up. There are still holdouts (groups) that aren’t ruined, but Facebook is trying its best to ruin them too.

  • Stefen Auris
    31 · 6 months ago

    I doubt they’re missing them. They simply don’t care, and will continue not to care until something happens that makes the money generated by the ads not worth it.