Gaywallet (they/it)

I’m gay

  • 298 Posts
  • 1.16K Comments
Joined 4 years ago
Cake day: January 28th, 2022

  • This solves nothing, the exact same people will just move to another company.

    The only way to effectively stop this kind of behavior is with regulation. The following types of regulation can help curb this behavior:

    1. Steep financial penalties for violations that are actually enforced. These need to be anchored directly to total value or profitability over a given time frame; a fixed dollar amount will quickly be outpaced by growth and consolidation, and gigantic companies can basically ignore it. Even a 100 million dollar fine can be shrugged off by companies the size of Amazon, Nvidia, and so forth. The EU has been good at architecting this kind of legislation.
    2. Strong rewards for whistleblowing on criminal behavior. Note that this is not the same as prosecuting the individuals responsible, which will be very difficult to prove in court: using simple information warfare tactics, folks can be glass cliffed, made into patsies, or otherwise scrubbed from any record of their involvement, requiring extremely in-depth investigations to untangle.
    3. Strong criminal prosecution for repeat offenders, and funding for real investigations of any company that has been found liable for penalties or is suspected of bad behavior. Some people hop from company to company doing the same thing over and over again. When we focus on the companies rather than the people behind such bad behavior, those people get a slap on the wrist at most and continue to do damage to society. We need to more aggressively profile and prosecute individuals with a track record of malicious behavior. As already mentioned, this is unfortunately the most difficult of the above to both legislate and enforce, because what counts as “malicious” behavior is up for debate and difficult to quantify.

  • It could be the person was already in a problematic situation with family and friends, and they just need to blame someone or something and don’t want to admit the real problems. Kind of what often happened back in the day with videogames getting blamed for killing humans.

    This is not a fair analogy for what is going on here. Blaming video games harkens back to the days when music and other countercultural media were blamed for people's behavior. We have a lot of literature showing that the passive consumption of media doesn't actually affect people in the ways it was blamed for. From the beginning, that argument lacked any logical or hypothetical framework as well; it was based entirely on the moral judgments of certain individuals in society who simply “believed” these media were the cause.

    AI, on the other hand, interacts back with you and can amplify psychosis. These are early days, and most of what we have is theoretical in nature, based on case studies or clinical hypotheses [1, 2, 3]. However, there is a clear difference in the medium itself: the chatbot is able to interact with the user in a dynamic way, and it is programmed to reinforce certain thoughts and feelings. The chatbot is also human-seeming enough for a person to anthropomorphize it and treat it like an individual for the purposes of therapy or an attempt at emotional closeness. While video games do involve human interaction, and a specific piece of media could be designed to be psychologically difficult to deal with, that would be particular to that media and not the medium as a whole. The issues with chatbots (the LLM subset of AI) are pervasive across all chatbots because of how they are designed and the populace they serve.

    we could end up in a society where everyone undermines real problems in physical world and blames Ai to sideload the question

    This is a valid point to bring up; however, I think it is shortsighted when we think in a broader context such as public health. We could say the same about addictive behaviors and personalities, for example, and absolve casinos of any blame for designing a system that takes advantage of those individuals and sends them spiraling into gambling addiction. Or we can recognize that this is a contributing and amplifying factor, by paying close attention to what is happening to individuals in a broad sense, as well as by smartly applying theory and hypothesis.

    I think it’s completely fair to say that this kid likely had many contributing factors to his depression and his final decision. There is a clear hypothetical framework, with some circumstantial evidence and strong theoretical support, suggesting that AI is exacerbating the problem and should be considered a contributing factor. This in turn suggests that regulation may be helpful, or at the very least increased public awareness that this particular technology has the potential to cause harm to certain individuals.


  • Great article. I laugh at the folks who think this dude has bought into the fantasy that, for some people, has turned into what best resembles a spirituality, as if they've never seen folks go a little too hard on any one part of their life. Sure, gooning as a term has long since entered the cultural zeitgeist and is now used, both ironically and not, simply to refer to excessive masturbation. But to discount that there is a loneliness epidemic out there, and that some folks have turned to gooning as a form of extreme kink or an outlet for a need for human connection and healing (going 24/7 like many dom/sub relationships, cnc, ferality, etc.), shows either a lack of exposure to the vastness of this damaged world or an attempt to poke fun at the author for seriously studying a cultural phenomenon. Either way, this is a fascinating look into a weird niche subculture and a really well-written article. Thank you for sharing.

  • Absolutely nothing about this is surprising to me in the least. What is surprising, however, is how many people recognize this as a serious problem that keeps getting worse, yet still insist that free speech is more important. We’ve placed restrictions on yelling fire in a theater when there is none, because doing so harms society. Why, similarly, can we not place restrictions on obviously hateful and intolerant speech? Certainly those who have larger platforms and more opportunity to sow this intolerance and erode democracy should face more scrutiny, no?


  • Actually, it’s pretty clear they are planning to completely gut this company. They’re taking on debt to finance this deal, debt which they will load onto the company itself. Their pitch is to eliminate jobs with AI (which they probably know won’t work), meaning they’ll cut most of the staff and “replace” it with AI, likely via contracts with companies they own, so they can keep leeching off whatever income comes in from game sales. The company will continue to churn out trash and make some money by repeating last year’s sports game this year, now with AI coding, until it eventually declares bankruptcy and is either auctioned off to be stripped for what’s left of its parts or simply shutters forever.

  • The right, as a political machine, didn’t bat an eye when Democratic government officials were assassinated. They have also completely ignored the facts of just about everything, substituting their own ideology or fantasy about what’s true and what’s not. What do you think “shouting from the rooftops” is going to accomplish here? This same nonsense has repeated itself multiple times with the attempted Trump assassinations and with other figures on the right. Ninety-nine times out of a hundred it’s a young, straight, white, conservative male behind these shootings, yet there is never any introspection on that issue. I cannot imagine this will change the minds of any significant number of those on the right. As Kirk himself said, this is the price of business.