Our business-critical internal software suite was written in Pascal as a temporary solution and has been unmaintained for almost 20 years. It transmits cleartext usernames and passwords as the URI components of GET requests. They also use a single decade-old Excel file to store vital statistics. A key part of the workflow involves an Excel file with a macro that processes an HTML document from the clipboard.
I offered them a better solution, which was rejected because the downtime and the minimal retraining would be more costly than working around the current issues.
The library I worked for as a teen used to process off-site reservations by writing them to a text file, which was automatically e-faxed to all locations every odd day.
If you worked at not-the-main-location, you couldn’t do an off-site reservation, so on even days, you would print your list and fax it to the main site, who would re-enter it into the system.
This was 2005. And yes, it broke every month with an odd number of days.
cleartext usernames and passwords as the URI components of GET requests
I’m not an infrastructure person. If the receiving web server doesn’t log the URI, and supposing the communication is encrypted with TLS, which keeps the credentials in the URI from being visible on the wire, are there security concerns?
Anyone who has access to any involved network infrastructure can trace the cleartext communication and extract the credentials.
What do you mean by any involved network infrastructure? The URI is encrypted by TLS; you would only see the host address/domain unless you had access to it after decryption on the server.
They said cleartext, so I would assume it’s not HTTPS.
The comment we are replying to is asking about a situation where there is TLS. Also, using cleartext values in the URI itself does not mean there wouldn’t be TLS.
When someone just says cleartext, I assume they mean transmission too.
OP replied confirming HTTP: https://lemmy.world/comment/1033128
I’m not 100% on this but I think GET requests are logged by default.
POST requests, normally used for passwords, don’t get logged by default.
BUT the URI would get logged for both, so if the URI contained username:password@ then it’s likely all there in the logs.
GET requests are logged
That’s why I specified
the receiving web server doesn’t log the URI
in my question.
GET and POST requests are logged.
The difference is that the logged GET requests will also include any query params:
GET /some/uri?user=Alpha&pass=bravo
A POST request will have those same params sent as part of the form body. Request bodies aren’t logged by default, so it would look like this:
POST /some/uri
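A minimal sketch of the difference in Python (using the requests library; httpbin.org is just a public echo service standing in for the real server):

```python
import requests

# Credentials in the query string: the full URI, query params included,
# normally ends up verbatim in the server's access log.
requests.get("https://httpbin.org/get", params={"user": "Alpha", "pass": "bravo"})

# Credentials in the form body: the access log typically records only
# "POST /post"; the body (and the credentials in it) is not logged by default.
requests.post("https://httpbin.org/post", data={"user": "Alpha", "pass": "bravo"})
```

Either way, if the connection is plain HTTP instead of HTTPS, both variants are readable to anyone on the wire.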
Nope, it’s bare-ass HTTP. The server software also connected to an LDAP server.
I don’t even let things communicate on /30 networks via HTTP/cleartext…this whole thing is horrifying.
I would still not sleep well; other things might log URIs to different unprotected places. Depending on how the software works, this might be the client, but also middleware or a proxy…
supposing the communication is encrypted with TLS
I can practically guarantee you it was not
Browser history
Even if the destination doesn’t log GET components, there could be corporate proxies that MITM the connection and log the URL. Corporate proxies usually present an internally trusted certificate to the client.
downtime
minimal retraining
I feel your pain. Many good ideas get rejected for exactly this reason. I’ve had ideas requiring one big chunk of downtime rejected even though they would eliminate short but constant downtimes, and the fix would mathematically pay for itself within a month, easily.
Then the minimal retraining is frustrating when work environments and coworkers still pretend computers are some crazy device they’ve never seen before.
Places like that never learn their lesson until The Event™ happens. At my last place, The Event™ was a derecho that knocked out power for a few days, and then when it came back on, the SAN was all kinds of fucked. On top of that, we didn’t have backups for everything because they didn’t want to pay for more storage. They were losing like $100K+ every hour they were down.
The speed at which they approved all-new hardware inside a colocation facility after The Event™ was absolutely hilarious, I’d never seen anything approved that quickly.
Trust me, they’re going to keep putting it off until you have your own version of The Event™, and they’ll deny that they ever disregarded the risk of it happening in the first place, even though you have years’ worth of emails saying “If we don’t do X, Y will occur.” And when Y occurs, they’ll scream “Oh my God, Y has occurred, no one could have ever foreseen this!”
It’ll happen. Wait and watch.
Sounds like a universal experience for pretty much all fields of work.
Government and policy? Climate change? A fucking pandemic?!
We’ve seen it all happen time and time again. People in positions of authority get overconfident that if things are working right now, they’ll keep working indefinitely. And then, despite being warned for decades, when things finally break they’ll claim no one could have foreseen the consequences of their lack of responsibility. Some people will even chime in and begin theorising that surely those who warned them must have been responsible for all the chaos: that it was an act of sabotage, not of foresight.
Places I work at usually end up bricking robots and causing tens of thousands of dollars of damage to them because they insist on running the robots without allowing small fixes.
Usually a big robot crash will be The Event that teaches people to respect early warning signs…for about 3 months. Then the old attitude slides back.
Good thing we aren’t building something that requires precision, like semiconductor wafers. Oh wait.
That’s just on them then, losing tons and tons of money from bad usable platter space lol. They’re machine-gunning themselves in the legs.
As weird as it may seem, this might be a good argument in favor of Pascal. I despised learning it at uni, as it seemed worthless, but it seems it can still handle business-critical software for 20 years.
What OP didn’t tell you is that, due to its age, it’s running on an unpatched WinXP SP2 install and patching, upgrading to SP3, or to any newer Windows OS will break the software calls that version of Pascal relies upon.
You’re literally describing the system that controlled employee keyscan badges a couple of jobs ago…
That thing was fun to try to tie into the user disable/termination script that I wrote. I ended up having to just manipulate its DB tables manually in the script instead of going through an API the software exposed, because it didn’t expose one. Figuring out their fucked-up DB schema was an adventure on its own too.
I’m also describing the machine in my office that runs my $20,000 laser plotter/large format scanner. The software in the machine uses (Java?) over a web interface which was deprecated and removed from all browsers around 2012-14, iirc. The machine isn’t supported anymore and the only way to clear an error or update where it sends scans is using that interface. I have a XPSP2 machine running the internal IE6 browser which will still display the interface. Since I’m now a one-person office, and I use the scanner about 6 times a year, I keep that machine around in case I need to turn it on to update the scanner or clear a print error. Buying a new plotter isn’t worth the time/money - when it dies I’ll just farm out the work to a 3rd party vendor; but while it does work it’s convenient to have in-house.
If it’s that old, I’m betting it doesn’t use HTTPS for its connections. You could do a network packet capture on the XP machine (or if you can find one, hook it up to a network hub with another computer attached and capture there) while performing the “clear error” action and find out how it works/what you need to send to it to clear the error. You could also set up a SPAN port on a switch and mirror the traffic on the port going to the printer to capture the traffic, if you have a switch capable of doing that. If not, you can get one off Amazon for about $100.
It’d be pretty simple to put together a script that sends the “clear error” action to the printer after seeing how it’s done in the packet capture. I’ve done this numerous times, the latest of which was for a network-connected temperature sensor that I wanted to tie into but which didn’t (publicly) expose an API of any kind.
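As a rough illustration, the replay script could be as short as this (a Python sketch using the requests library; the address, path, and form field are made-up placeholders that would come from the packet capture):

```python
import requests

# All values below are hypothetical; substitute whatever the packet capture
# shows the old web interface actually sending to the device.
DEVICE = "http://192.0.2.50"               # example address for the plotter/scanner
CLEAR_ERROR_PATH = "/cgi-bin/maintenance"  # hypothetical endpoint seen in the capture

resp = requests.post(
    f"{DEVICE}{CLEAR_ERROR_PATH}",
    data={"action": "clearError"},         # hypothetical form field
    timeout=10,
)
resp.raise_for_status()
print("Device responded:", resp.status_code)
```

Once the real request is known, the same approach would work for the scan-destination settings too, assuming those are also sent as plain HTTP forms.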
It’s more than that, though - it’s used to set up custom sheet widths as well as enter new server and login details for sending scans via FTP to a server. If I’m doing billable work, I’m charging $225/hr. If I’m snooping the network, which isn’t my field and something I do almost never (so it takes me several times longer than an expert), I’m making nothing. With an annual value on the machine’s services at less than $500 (more than half of which would become reimbursable if I didn’t have it), there’s no actual value in “fixing” it by creating a different workaround. 🤷♂️
Anything can if you don’t update it.
I worked for a hybrid hosting and cloud provider that was partnered with Electronic Arts for the SimCity reboot.
Well, halfway through they decided our cloud wasn’t worth it and moved providers. But no one bothered to tell all the outsourced foreign developers that they were on a new provider architecture.
The whole shitstorm of a failed launch of SimCity was because of extremely shitty code that was meant to work on one cloud and didn’t really work on another. But they assumed hurr hurr all server same.
So you guys got that shit launch, and I knew exactly why and couldn’t say a damn thing for YEARS.
Not to put the blame on the devs, but the problems might have been mitigated by defining a proper interface layer against the server.
It’s a damn single player game 💀
The multiplayer stuff was neat in theory, but any multiplayer thing you did took like 20+ minutes to actually propagate to other players’ games.
I wonder if that’s related to “the wrong cloud”. Imagine if someone wrote some super slick code that worked really really well in the original cloud, and just couldn’t figure out how to make it work in the new cloud, so everything is just an awful workaround.
Unless you’re really deep into a particular provider’s unique-esque products (Lambda, Azure AD, Fargate, etc), this is exactly why things like Terraform exist.
Oh for sure, but the games industry is one of the few that still does some weird stuff because a lot of the software is only expected to last 5 years or so at most, and needs to get every drop of performance.
I could definitely see some hyper optimized cloud API looking really great and then not having an equivalent in another ecosystem (or at least not one that could be quickly swapped out just before release).
Your comment seems to be related to something else… or I’m stupid, which is entirely plausible, too.
I think it’s referring to the fact that the rebooted SimCity was a single-player game (you could never play with someone else) but was always-online anyway.
There was no Rebooted SimCity in Ba Sing Se
All this fuss over servers for a single player game. Not only did they handle the migration poorly, it shouldn’t need to talk to servers period!
I think it’s AWS
That’s cool to know! I had been wondering what happened with that historically bad launch.
Kevin Fang - The Worst Website Launch of All Time <on YouTube> <on Piped’s frontend (thanks bot!)>
Here is an alternative Piped link: https://piped.video/watch?v=Ui5op0N700A
I knew that was gonna be gold after I read the first sentence.
It’s pretty depressing, but the fact is that soil and groundwater are almost certainly contaminated anywhere that humans have touched. I’ve seen all kinds of places, from gas stations, to dry cleaners, to mines, to fire stations, to military bases, to schools, to hydroelectric plants (the list could go on), and every last one of them had poison in the ground.
Some places are insanely polluted to the point where you wonder how a whole company could be so braindead and essentially poison themselves.
A place not far from where I live had a chemical plant which just dumped loads of chemicals on a meadow for years. Now there are ground water pumps installed there which need to run 24/7 so that the chemicals don’t contaminate nearby rivers and hence the rest of the country.
When taking samples from the pumped-up water, you can smell gasoline.

We’re house shopping and there has been a house on a lake sitting on the market forever. I got curious and researched the lake and… it’s a literal Superfund site. The company that was on the other side of the lake just dumped their waste chemicals right on the shore, and it has polluted both the lake and the groundwater essentially forever, because they don’t break down. I looked up the previous owner… died of cancer. The shit that companies are and were allowed to get away with is just insane. Meanwhile, right-wing nut jobs want to get rid of the EPA (which was ironically created by Richard Nixon).
Some places are insanely polluted to the point where you wonder how a whole company could be so braindead and essentially poison themselves.
“That’s the future guy’s problem, my problem is making money.”
No need to wonder. That’s how.
A place not far from where I live had a chemical plant which just dumped loads of chemicals on a meadow for years.
Sounds cheap.
The largest lake in the UK by area got massively polluted and turned into a swamp of toxic green algae. It’s crazy how people just let stuff like that happen.
It’s just as depressing when something counts as “clean”. My saddest example was a former sand pit: they spent 30 years digging out 15 meters of sand, then another 30 years filling it with anything from industrial to veterinary waste, “capped” it with rubble in the late 40s, and called it clean enough.
Had a bigass job digging out the top 3 meters of random waste, including several thousand barrels of whatever the fuck. And definitely no unexploded ordnance (spoiler: after finding several WW2 rifle stocks and helmets, the first mortar shells were dug up too). After making room, it was covered in sand, clay, bentonite and a protective grid.
So naturally, 3 months after that finished, some cockhead decided to throw an anchor, go all ahead flank on his asshole boat, and tear the whole thing up. No need to fix anything though, just shovel some more sand in, that’ll stop the anthrax!
This was all in open connection with a major river, of course. One people swim in.
@Tar_alcaran @thrawn21 fucking yikes. Was the public notified in any way? Did it make it to the news? Or just kind of brushed under the rug?
What are they poisoned with and how does it happen?
Varies depending on the site: sometimes it’s gasoline, or solvents, or heavy metals, or PFAS. As for how it happens: accidental or deliberate releases. I’ve found military documents from the 50s that say the official place to dispose of used motor oil was a pit they’d dug in the ground.
Yep, the regulation now is that a 5 ft cube of soil gets dug out around any spill. It’s resulted in folks being more careful but also hiding where things are spilled. I’ve not once seen a hole dug. Corporations are roughly similar. Small organizations don’t care at all.
Here’s a recent article about PFAS in drinking water. Very unfortunate.
Heavy metals and PCBs are most common in my area, with various VOCs not far behind. Prior to the EPA and associated legislation, companies would commonly use waste process water for dust control, dump wastes into pits or onto the ground, and leave spills to soak away, and general processes were dirtier and uncontrolled.
One terrible example from western NY that bugs me even more than Love Canal is the area’s involvement with the Manhattan Project. Local steel workers rolled uranium and were never told what it was, given any protections, or cared for when the inevitable happened. Radioactive waste was later used as fill for residential and commercial properties in the area. These hotspots still exist, and it is a slow process to get any cleanup done.
I work in air quality and it’s a similar story. It’s crazy to me seeing how much is unregulated, grandfathered in, or simply not enforced.
What do you want? They moved it out of the environment. . .
The programming team that is working hard on your project is just one dude and he smells funny. The programming team you’ve met in your introductory meeting are just the two unpaid interns that will be fired or will quit within the next two months and don’t know what’s happening. We don’t do agile despite advertising it. Also your project being a priority means it’ll be slapped together from start to finish 24 hours prior to the deadline. Oh and there will be extra charges to fix anything that doesn’t work as it should.
I think we work at the same company; the dude does not smell funny to me, but maybe that’s just me.
Are you that dude?
No, he is many things, including a functioning alcoholic and a choleric, but I could not detect a strong odor.
I do not know what my thing is because that’s obviously my blind spot.
That’s what he said, yep.
We all work for that company. Except at mine, I work remote, so I have only myself to blame the stinkiness on.
When you have a great programmer working on your project, he will be cycled to a new project in 2-3 months. Your new senior developer who silently takes over the project is part-time because he’s working on finishing his education.
No one knows how anything works, except that one guy, who left the company half a year ago. That’s how all software development is.
Throw in a mysterious comment that says “Don’t change anything below this line or everything breaks” and it’s complete.
“We don’t know why this works, but it does, don’t touch it.” would also be acceptable.
“The server mangles the authentication token after receiving it for reasons we don’t really understand, so this function just checks to see that it’s set in the request, but nothing actually cares if it’s valid. DO NOT RETURN USER ACCOUNT DATA HERE AND YES THAT MEANS YOU MARCUS”
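For illustration only, a minimal sketch of that kind of presence-only check (plain Python; every name here is invented):

```python
def check_auth_token(headers: dict) -> bool:
    """Hypothetical 'presence-only' check: confirms a token was sent,
    but never verifies that it is actually valid."""
    token = headers.get("X-Auth-Token", "")
    # Nothing downstream can validate the mangled token, so this only
    # asserts that *something* was supplied. Do not gate account data
    # on this check alone.
    return token != ""


# Usage sketch:
print(check_auth_token({"X-Auth-Token": "anything-at-all"}))  # True
print(check_auth_token({}))                                   # False
```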
This is basically my current team, haha.
In my company we have a very modern agile workflow where QA is top priority.
At least that what we advertise. In reality it’s all an unorganized clusterfuck where I’m pretty sure I am the only one who bothers to write automated tests. Who’s got time to write tests bro just push that shit out ASAP we’ll deal with it when the client calls us in the middle of the night to complain about previously-working shit being broken now.
I’ve worked for one company that actually did it right (complete with pair programming, even). It was pretty nice.
Too bad we were apparently the “experimental?” team and the only one in the whole company doing it that way.
I worked for a company like that. Wall Street shits bought us up and sold everything that wasn’t bolted down.
Ironically, that was the one time I was working for a large, publicly-traded company (a big-box retailer, no less – not even one of the halfway-respectable Fortune 500s!).
A lot of outsourcers do this. Here’s my experience with a few companies.
- The “team” you meet are competent, English-speaking fronts. They are the demo models of the people who will work on your projects.
- After the contract is signed, these people are swapped out with randos of varying competence.
- In some cases, some of these randos are further hidden behind aliases: people with names that are actually more than one person sharing logins and passwords.
- They will string you along, trying to charge maximum hours worked without regard to the product or services delivered.
- Most of these companies have a “bucket of crabs” mentality: the managers are horrible, the staff incompetent, and once they gain some skill, they leave for better companies. They backstab one another, hijack projects to fuck over coworkers, and lie and cover their tracks. Some of this is cultural, like a caste system, while some of it is just racist.
At one time, these people were pretty good, but they realized they had skills and left for other countries for better pay and better working conditions. The bids got more and more competitive, cutting costs until these companies were literally filled with low-skilled labor who can’t be promoted or leave, for economic or competence reasons.
Now that I read this, I’m kinda glad that our company doesn’t do anything like that. But it’s just a small indie team porting games to consoles, so I guess what you’re mentioning is the bigger corp problem.
Programming teams I’ve worked with are a joke.
Company A: We got hacked and the lead dev argued for days it wasn’t a hack. Malware was actively being served to customers during this time period because she refused to deal with it and there was no security team.
Company B: the programming team was the IT guy’s nephew and some random UI designer who hadn’t finished college and was never able to find employment after finishing college…
Company C: We interviewed a candidate who was way overqualified and would make our life so easy because he was eager and hungry. Instead we hired a bootcamper who had never heard of Docker (half our infra is Docker), React, or anything other than vanilla JavaScript. She failed our practical but still got hired because the hiring manager wanted an assistant. She has become a glorified project manager, but still has the title software engineer.
Can confirm. I am the smelly guy. Leave me alone and you get code. Bother me and you don’t.
Hah, is this contracting? And what do they do instead of agile?
Think waterfall. But like. No design and no testing.
Not contracting, just another one of those small shops that offers “complete” solutions from A to Z.
The only competent person in that org would be, oddly enough, the CEO. In terms of being useful, everybody else just feels like they show up to be marked present on an attendance sheet.
Think waterfall. But like. No design and no testing.
That’s just “cowboy coding.”
I used to work for a popular wrestling company: billionaire owner, very profitable, would write off any OSHA penalties as the ‘cost of doing business’, just as they did in 1998, when The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer’s table.
The company would bid on government contracts knowing full well they promised features that didn’t exist and never would, but calculating that the fine for not meeting the specs was lower than the benefit of the contract and getting the buyers locked into our system. I raised this to my boss; nothing changed, and I quit shortly after.
I’ve worked in IT consulting for over 10 years and have never once lied about the capabilities of a product. I have said, it doesn’t do that natively, but if that’s a requirement we can scope how much it would take to make it happen. Sadly my company is very much the exception.
The worst I saw was years ago, when I was working on an infrastructure upgrade of a Hyper-V environment. The client purchased a backup solution I wasn’t familiar with but that said it supported Hyper-V. It turns out their Hyper-V support was in “beta”. It wasn’t in beta. They were literally using this client as a development environment. It was a freaking joke. At one point I had to get on the phone with one of their developers and explain how high availability and failover worked.
I could very well have been that developer. Usual story: sales promised the world, that our VMware-based system would run on anything and everything, and of course it’s all HA and load-balanced. Smash cut to me on Monday morning trying to figure out how to make it do that before it goes live on Wednesday.
Eh, DHCP isn’t really important, right? Obviously, if it hasn’t changed since the ’80s, why would you need to reboot your server?
What are vulnerabilities?
You responded to the wrong comment, but I’ve been seeing that a lot, so I wonder what causes it.
Being a frontend dev myself, I’d guess someone screwed up the indexing of comments :P
Sounds like a DHCP issue.
(I mean, not really, but it rhymes I guess.)
It’s definitely DNS.
I’d actually wager the comments are cached, sent to the front end wrong (because of the bad cache), and then the front end posts against the wrong comment ID (maybe that’s what you mean to be fair :) ).
I had something different in mind, coming from Angular: There would be a list of comment objects associated with DOM nodes, then the comment list would get updated, and Angular would associate the DOM nodes with the wrong list entries.
How would a bad cache mess up the association between a comment and its ID?
I used to do AngularJS and I’ve done some react… maybe something like that could happen. I’d wager it’s unlikely though (bordering on Angular/Inferno itself having a bug).
I’ve seen some other things that seem like caching issues (e.g., seeing the wrong counts when switching between posts).
A cache could literally report the wrong ID for a comment to the front end in the JSON if the caching isn’t right (and bad input = bad output).
Granted, in both cases I’d wonder why we’re not seeing this all the time; it’s got to be something niche, possibly something already fixed but not on all instances.
The contractor I worked for was run by a man who used to say, “If the contract says they’ll blow up the contractor on delivery, we’re putting in a bid and solving the problem later.”
Promising features that never existed is part and parcel of a lot of software sales, whether gov or private. Speaking from post-sales experience.
I think it’s fine to promise them, but to claim they currently exist when you never plan to implement them is what I couldn’t support.
I worked in government contracting (and government, for that matter) for years and that blows my mind. I can’t remember the details, but if you even had a bad review, much less were found noncompliant, it could disqualify you entirely from some contract vehicles for a matter of years. Wild that there’s some agency that somehow lets people get away with fraud.
Also, if that cost the government money, there’s a chance you could report that after the fact and make some money.
Might be local government. Me and sales have this argument pretty often:
Me: it is in the spec
Sales: no one noticed it except you
Me: thanks?
Sales: no one is going to care
Me: then take it out of the spec and re-sign everything.
Sales: why are you making a big deal about this?
Me: because it is in the spec that we signed and if we don’t honor the spec they can backcharge us.
Sales: that won’t happen
Me: you are right, because we are going to follow the spec. If you don’t want me to, please email me, the department head, and the client, specifically ordering me not to follow the contract that we signed.
Yeah I’m in Europe and our customers were municipalities buying healthcare related solutions. It happened after our little startup got taken over by a big player and they started getting involved in the contract bids.
There are a million times more counterfeit/fake items on Amazon than you think, and they don’t care one bit about fixing the problem.
Geek Squad: we were flying under the radar upgrading MacBook RAM, until one day we became officially Apple Authorized to fix iPhones, which meant we were no longer allowed to upgrade MacBook RAM, since the MacBooks were older and considered “obsolete” by Apple. That meant we were unable to repair or upgrade the hardware the customer had paid for, simply because Apple said it was “too old”. It was at this point in my customer interactions that I’d recommend a repair shop down the road that isn’t held at gunpoint by Apple ;)
1-800-GOT-JUNK? doesn’t care at all about its environmental impact. No sorting whatsoever happens to what goes on their trucks; it all goes to landfills. All the ads will say they recycle and repurpose old furniture, but I was threatened with being fired when I recommended donating antiques instead of dumping a load of furniture.
More jobs and more profits come before anything else in that company, including employee health and safety. Several times I was told to enter spaces we weren’t trained for (attics and crawl spaces) and carry waste I legally couldn’t transport (human/organic wastes, and the law states the driver is fined, not the company). One guy injured his shoulder during an attic job and was told to finish the shift or lose his job. Absolute scum of a company with very sleazy management and possibly the labour board in their pocket, as they kept “losing the files” when I tried to file a report about buddy’s shoulder (he was hesitant to report it for fear of losing his job).
Anybody know that one waterfall attraction in the Southeast US? The one that advertises bloody everywhere? The waterfall is pumped during the dry seasons; otherwise there’d be nothing to see. Lots of the formations are fake, and the Cactus and Candle formation was either moved from a different spot in the cave or is from a different cave in New Mexico. Management doesn’t want people to know that, but fuck 'em.
Ruby Falls?
Ye!
After looking it up, you can find reports from others stating the same things. When I was there as a kid, I remember that they claimed no one knew where the source of the water came from… I guess they actually know enough to help it out at least, lol
I really enjoyed it and would like to go again, but it’s no Mammoth Cave.
Gravity Falls?
Niagara falls?
Nawh mate, that’s up in New York and Canada.
I’m a simple man, not from the US. I hear waterfalls, I think Niagara ¯\_(ツ)_/¯
Victoria falls?
For some reason I’m not surprised to learn this about Ruby Falls. Lived near it awhile and visited.
Eh kinda cruddy to learn, but also was still a cool experience.
I quit a well-known ecomm tech company a few months ago, ahead of (another) one of their layoff rounds, because upper mgmt was turning into ultra-Wall-Street corpo bullshit. With 30% of staff gone, and yet our userbase almost doubling over the same period, they wanted everyone to continue increasing output and quality. We were barely keeping up with our existing workload at that point; burnout was (and still is) rampant.
Over the two weeks after I gave my notice, I discovered that, in the third-party app ecosystem, many thousands of apps that had (approved) access to the Billing API weren’t even operating anymore. Some had quit operating years ago, but they were still billing end-users on a monthly basis. Many end-users install dozens of apps (just like people do with mobile phones) and then forget they ever did so. The monthly rates for these apps are anywhere from 3 to 20 dollars, and many people never checked their bank statements or invoices (when they eventually did, they’d contact support to complain about paying for an app that doesn’t even load and may not have for months or years at this point).
I gathered evidence on at least three dozen of these zombie apps. Many of them had hundreds of active installs and in some cases had been billing users for the past three years. I extrapolated that there were probably high-hundreds or low-thousands of these zombie apps billing users on the platform, amounting to high-thousands to low tens of thousands of installs… amounting to likely millions per year in faulty and sketchy invoicing happening over our Billing API.
Mgmt actually did put together a triage team to address my findings, but I can absolutely assure you the only reason they acted so quickly is because I was on the way out of the company. I’d spotted things like this in the wild previously and nothing had ever been done about it. The pat answer has always been “well, people are responsible for their own accounts and invoicing.” I believe they acted on this one because I was being very vocal about how it would be ‘a shame’ if this situation ever became public and all those end-users came after the company for those false invoices at one time. It would be a PR and Support nightmare.
You have definitely interacted with this ecommerce platform if you shop online.
The health insurance company I worked for would automatically reject claims over a certain amount without reviewing them, just to be dicks and make people have to resubmit. This was over 25 years ago, but it’s my understanding many health insurers still pull this shit. They don’t care if it’s legal or not. Enforcement is lazy, and fines are cheaper than medical claims.
Obviously this is in the USA.
Over a decade ago I worked as a freelancer for an investment bank (the largest one that went bankrupt in the 2008 Crash, which was a few years later) where the head of the Proprietary Trading Desk (the team of traders who invest for the profit of the bank) asked me if I could change the software so that they could see the investments the Client Trading Desk (who invest for clients, with client money) was making, with the assent of the latter team.
Now, if the guys investing money for the bank know what the guys investing customer money are doing, they can do things like front-run the customer trades (or serve them at exactly the right price to barely beat the competition), thus making more profits for the bank and hence getting bigger bonuses. This is why financial regulations say there are supposed to be so-called Chinese Walls between the proprietary trading and the customer trading activities: they’re supposed to be segregated and not visible to each other.
Note that the heads of both teams were mates and already regularly had chats, so they might already have been exchanging this info informally.
I was quite fresh in there (less than 1 year), and the software system I worked on at the time was used by both teams, but when I started looking into it I saw that the separation was very explicitly coded in the software, and that got me thinking about what I had learned from the mandatory compliance training I had done when I first joined (so, yeah, that stuff is not totally useless!!!).
So I asked for written confirmation from the heads of both teams, and just got some vague response e-mails, no clear “do such and such”.
So I played the fool and took it to a separate team called Compliance (responsible for compliance with financial regulations), saying I just wanted to make sure it was all prim and proper, “just in case”.
Of course, it kinda blew up (locally) and I ended up being called to a meeting with the heads of the Prop Desk and whatnot - all stern looks and barely contained angry tones - where I kept playing the fool.
Ultimately it ended up not being a problem for me at all, to the point that after that bank went bust and its component parts were sold to another bank, the technical team manager asked me to come back to work with the same IT group (remember, I was a freelancer) with even greater responsibilities, so this didn’t exactly damage my career.
That said, over the years there have been various cases of IT guys in large investment banks who went along with “innocent” requests from the traders and ended up as the fall guys for subsequent breaches of financial regulations, serving jail time, so had I gone along with that request I would’ve actually risked ending up in jail.
(Financial regulators were and are a complete and total joke when it comes to large banks, which actually makes it more likely that some poor techie guy will be made the fall guy to protect the bank and its heads.)
I worked for the railroad. Nothing is ever fixed. I witnessed hundreds of code violations every day for years. It doesn’t matter if a rail car or locomotive meets code; as long as it “can travel”, it’s good to go.
When an employee inspector finds a defective rail car, management determines if it will get fixed. If the supervisor “feels” like “it’s not that bad”, then the rail car is “let go”.
Worked at a globally popular fast food franchise many years ago. They had collection boxes for a charity that they raised money for. None of the money went to that charity; it was divided between owners and managers.