• 2 Posts
  • 631 Comments
Joined 6 months ago
Cake day: January 29, 2025



  • If they wanted any games banned all they had to do was talk to the Office of Film and Literature Classification (OFLC) in Australia, where they’re based. Any of the games listed would have likely been added to the ‘Refused Classification’ list and thereby banned from sale and import in Australia. If they wanted them pulled from Steam or Itch entirely they could have talked to those platforms.

    But they didn’t want to raise objections through appropriate preexisting channels, they wanted to push their Christian-based ideology on the whole world by going Karen on the social media of all the payment processors.





  • Because letting extremely biased ideological groups dictate worldwide policy is always a bad thing that comes with negative consequences.

    I’m not personally familiar with any of the games in this ban wave, but Steam’s prior stance was that these games are free expression of art, made by adults, and that it’s not Steam’s job to police art. If a group does want to impose limitations on art sold on a worldwide storefront, that should be a national restriction performed by an appropriate body. Australia already has a stringent games rating system; if these games did not meet its approved standards they would be hit with ‘Refused Classification’, banned from sale or import in Australia, and Steam would region-block them for sale to Australia, as is already the case for many games.

    However, this group deemed following the appropriate channels too much work, so instead they went for a Karen smear campaign against the payment processors on social media, claiming the processors supported the sale of rape and incest games (simply by working with Steam). That pressured the payment processors to lobby Steam to remove the games entirely, as the easiest path for Steam to avoid payment-processing disruption.


  • They use GiveSendGo, which is the same thing. It’s founded as a Christian alternative payment platform and the CEO and CFO have gone on record numerous times to defend indefensible scum.

    GiveSendGo used to use Square, which actually does run CashApp (so your joke is not far from the truth), but after pressure on Square they seem to have dropped GiveSendGo, which now uses Stripe. Stripe has not responded to queries regarding the Shiloh Hendrix case (the awful lady who called a kid at a playground the n word and went on to raise nearly $800k), and continues to work with GiveSendGo.

    As a totally unrelated aside, major early investors in Stripe are Elon Musk, and Peter Thiel.



  • Sure, but it’s a bit of an open-ended question because it depends on your requirements (and potentially your clients’), and your risk comfort level. Sorry in advance, huge reply.

    Backing up a production environment is different from backing up personal data: you have to consider stateful backups of the data across the whole environment, to ensure, for instance, that an app’s config is aware of changes made recently on the database; otherwise you may be restoring inconsistent data that will cause issues/errors. For a small project that runs on a single server you can do a nightly backup that runs a pre-backup script to gracefully stop all of your key services, then performs the backup, then starts them again with a post-backup script. Large environments with multiple servers (or containers, etc.) or sites get much more complex.

    Keeping with the single-server example: those backups can be stored on a local NAS and synced to another location on a schedule (set to keep multiple copies, not to overwrite), and ideally you would take a periodic copy (e.g. weekly, whatever you’re comfortable with) off to a non-networked device like a USB drive or tape, which would also be offsite (e.g. carried home, or stored in a drawer in the case of a home office). This is loosely the 3-2-1 strategy: keep at least 3 copies of important data on 2 different mediums (‘devices’ is often used today) with 1 offsite. It keeps you protected from a local physical disaster (e.g. fire/burglary) and a network disaster (e.g. virus/crypto/accidental deletion), and adds enough redundancy that more than one thing has to go wrong to cause you serious data loss.
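    The sync-with-multiple-copies part might look something like this sketch: copy new archives to the second location without overwriting, then prune down to a retention window. The function name, the *.tar.gz naming, and the 7-copy default are all made up for illustration.

```python
import shutil
from pathlib import Path

def sync_with_retention(src_dir: Path, dst_dir: Path, keep: int = 7) -> None:
    """Copy backup archives to a second location, never overwriting,
    and keep only the `keep` most recent copies there."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for archive in src_dir.glob("*.tar.gz"):
        target = dst_dir / archive.name
        if not target.exists():          # keep existing copies, don't overwrite
            shutil.copy2(archive, target)
    # prune the oldest copies beyond the retention window
    copies = sorted(dst_dir.glob("*.tar.gz"),
                    key=lambda p: p.stat().st_mtime,
                    reverse=True)
    for old in copies[keep:]:
        old.unlink()
```

    In practice most people reach for rsync, restic, or borg for this step rather than rolling their own, but the never-overwrite-plus-retention idea is the same.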

    Really the best advice I can give is to make a disaster recovery plan (DRP), there are guides online, but essentially you plot out the sequence it would take you to restore your environment to up-and-running with current data, in case of a disaster that takes out your production environment or its data.

    How long would it take you to spin up new servers (or docker containers or whatever) and configure them to the right IPs, DNS, auth keys and so on? How long to get the most recent copy of your production data back on that newly-built system and running? Those are the types of questions you try to answer with a DRP.

    Once you have an idea of what a recovery would look like and how long it would take, it will inform how you want to approach your backups. You might decide that file-based backups of your core config, database files, and other unique data are not enough for you (because the restore process may have you out of business for a week), and that you’d rather do a machine-wide stateful backup of the system that could get you back up and running much quicker (perhaps a day).

    Whatever you choose, the most important step (and one that is often overlooked) is to actually do a test recovery once your backup plan is implemented and your DR plan considered. Take your live environment offline and attempt your recovery plan. It’s really not so hard for small environments, and it can surface all sorts of things you missed in the planning stage that need reconsideration. It’s much less stressful to find those problems when you know your real environment is just sitting there waiting to be turned back on. But like I said, it all comes down to how comfortable you are with risk, and how much of your time you want to spend on backups and DR.
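    Short of a full dress-rehearsal recovery, you can at least smoke-test each backup automatically: restore the archive into a scratch directory and check the files you care about actually came back. A minimal sketch (function name and arguments are invented for the example):

```python
import tarfile
from pathlib import Path

def smoke_test_restore(archive: Path, scratch: Path, expected: list[str]) -> bool:
    """Extract a backup archive into a scratch directory and verify that
    the expected files are present. A cheap sanity check that catches
    empty or corrupt backups early; not a substitute for a real test
    recovery of the whole environment."""
    scratch.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(scratch)
    # every expected relative path must exist after the restore
    return all((scratch / rel).exists() for rel in expected)
```

    Run it as the last step of your backup job and alert on failure; a backup you have never restored from is a hope, not a backup.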


  • Starlink is only ‘better’ (in some cases; it certainly isn’t better in all cases or at all times, as its speed is highly variable) because your US cable and ISP giants have lobbied for decades to remove any oversight the FCC might exert over them, up to and including banning local communities from building their own networks, and ensuring that they hold more or less monopoly control of many rural markets.

    The only way to fix this is with the hammer of FCC regulation. The ‘free market’ cannot fix a monopoly.

    So when you laugh at your rural citizenry getting shafted by FCC policy changes that will benefit nobody but the large ISPs and cable companies, because you’re in a city and have lots of options thanks to the competition that high-density living affords, you’re shooting your own regulatory powers in the foot for the sake of spite.

    This is all setting aside, of course, that a huge proportion of rural Americans are not MAGA.

    So yeah that’s why it’s a crappy take.









  • If you want one that has a calendar and supports inviting other people to events and accepting other people’s invitations - there’s really only Proton.

    I tried mailbox.org, which uses OX App Suite (Open-Xchange) on the backend; it is full of holes with regard to calendar invite sending/receiving. I also tried Tuta, which does not even attempt to support accepting invites from other services (iCloud / Exchange Online / Gmail).


  • I have a piezoelectric doorbell.

    The bell part plugs directly into a wall socket. The button part is completely wireless and batteryless and is affixed near my front door.

    Been working like clockwork for a decade to let me know when someone is at the door and I’m home.

    If I’m not home, the postman or delivery driver leaves a note to go to the collection center for my package. If it’s a small package not requiring signature, they just leave it at the door or in the mailbox if it fits. None of that changes with a camera.

    Why overcomplicate life?