- cross-posted to:
- webdev
Why do I need to know all of this stuff, why isn’t the web safe by default?
The answer to questions like this is often that there was no need for such safety features when the underlying technology was introduced (more examples here) and adding it later required consensus from many people and organizations who wouldn’t accept something that broke their already-running systems. It’s easy to criticize something when you don’t understand the needs and constraints that led to it.
(The good news is that gradual changes, over the course of years, can further improve things without being too disruptive to survive.)
He’s not wrong in principle, though: Building safe web sites is far more complicated than it should be, and relies far too much on a site to behave in the user’s best interests. Especially when client-side scripts are used.
Anything that didn’t need that kind of security from the beginning also wouldn’t break if it were added. The stuff that would break is all vulnerable precisely because that security doesn’t exist.
It’s easy to criticize something when you don’t understand the needs and constraints that led to it.
And that assumption is exactly what led us to the current situation.
It doesn’t matter why the present is garbage; it’s garbage and we should address that. Statements like this are the engineering equivalent of “it is what it is shrug emoji”.
Take a step back and look at the pile of overengineered yet underthought, inefficient, insecure and complicated crap that we call the modern web. And it’s not only the browser, but also the backend stack.
Think about how many indirections and half-baked abstraction layers are between your code and what actually gets executed.
Statements like this are the engineering equivalent of “it is what it is shrug emoji”.
No, what I wrote is nothing like that. Please re-read until you understand it better.
Of course it is like that. You’re saying that the complaint is wrong because the author doesn’t know the history, and now you accuse me of not understanding you, because I pointed this out.
If you have to accuse everyone of “not understanding”, maybe you’re the one who doesn’t understand.
You’re saying that the complaint is wrong because the author doesn’t know the history
That’s not at all what he said. He literally even said “He’s not wrong in principle.”
If you don’t understand the history of why something is the way it is you can’t fix it. You can suggest your new “perfectly secured web site” but if Amazon, Microsoft, Google, Firefox, Apple, etc. don’t agree on your new protocol then there’s going to be exactly 1 person using it.
If you don’t understand the history of why something is the way it is you can’t fix it.
See also: Chesterton’s Fence.
I’d not heard of that before, thanks!
It doesn’t matter why the present is garbage; it’s garbage and we should address that. Statements like this are the engineering equivalent of “it is what it is shrug emoji”.
I don’t think your opinion is grounded in reality. The “it is what it is” actually reflects the fact that there is no way to fix the issue in backwards-compatible ways, and it’s unrealistic to believe that vulnerable frameworks/websites/webservices can be updated at a moment’s notice, or even at all. This fact is mentioned in the article. Those which can be updated have already moved to a proper authentication scheme. Those which didn’t still have to keep working after users upgrade their browsers.
A lot of the web used to run on Flash. Then Apple came around and said “Flash is terrible and insecure”. Within a number of years everything moved away from Flash, so it’s definitely possible to force the web in new directions.
And most old Flash content is basically gone now.
Take a step back and look at the pile of overengineered yet underthought, inefficient, insecure and complicated crap that we call the modern web…
Think about how many indirections and half-baked abstraction layers are between your code and what actually gets executed.
Think about that, and then…what, exactly? As a website author, you don’t control the browser. You don’t control the web standards.
I’m extremely sympathetic to this way of thinking, because I completely agree. The web is crap, and we shouldn’t be complacent about that. But if you are actually in the position of building or maintaining a website (or any other piece of software), then you need to build on what already exists, unless you’re in the exceedingly rare position of being able to near-unilaterally make changes to an existing platform (as Google does with Chrome, or Microsoft and Apple do with their OSes) or to throw out a huge amount of standard infrastructure and start as close to “scratch” as possible (e.g. GNU Hurd, Mill Computing, Oxide, Redox OS, etc; note that several of these are hobby projects not yet ready for “serious” use).
Okay, and how would you address it? The limitation is easy to criticize when you think about it in a vacuum. But in the real world, we’d need to find a way to change things that can actually be implemented by everyone.
Which usually means transformative change.
It doesn’t matter why the present is garbage; it’s garbage and we should address that.
The problem is fixing it without inadvertently breaking it for someone else. Changing the default behavior isn’t easy.
There are probably some critical systems that rely on old, outdated practices because that’s the way it worked when they were written 20 years ago. Why should they go back and fix their code when it has worked perfectly fine for the past two decades?
If you think anything in software has worked “perfectly fine for the past two decades”, you’re probably not looking closely enough.
I exaggerate, but honestly, not much.
Billions of programs worked perfectly fine today.
Cynicism is easy, but not helpful.
Yes, popular programs behave correctly most of the time.
But “perfectly fine for the last two decades” would imply a far lower rate of CVEs and general reliability than we actually have in modern software.
First and foremost _____ is a giant hack to mitigate legacy mistakes.
Wow, every article on web technology should start this way. And lots of non-web technologies, too.
Unless I’m missing something, the post is plain wrong in some parts. You can’t POST to a cross-site API because the browser will send a CORS preflight before sending the real request. The only way around that is, iirc, form submits; for those you need CSRF protection.
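For what it’s worth, the rule the commenter is gesturing at can be sketched like this (my simplification of the Fetch spec’s “simple request” definition — the real rule also checks header names and values byte-by-byte):

```python
# Simplified sketch of when a cross-origin request triggers a CORS preflight.
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}
SAFE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}

def needs_preflight(method: str, headers: dict) -> bool:
    """Return True if a cross-origin request would trigger a CORS preflight."""
    if method.upper() not in SIMPLE_METHODS:
        return True
    for name, value in headers.items():
        lname = name.lower()
        if lname not in SAFE_HEADERS:
            return True
        if lname == "content-type":
            # Strip any parameters, e.g. "text/plain;charset=UTF-8".
            if value.split(";")[0].strip().lower() not in SIMPLE_CONTENT_TYPES:
                return True
    return False

# A plain HTML form POST counts as a "simple request": no preflight happens,
# which is exactly why CSRF protection is still needed for form endpoints.
print(needs_preflight("POST", {"Content-Type": "application/x-www-form-urlencoded"}))  # False
print(needs_preflight("POST", {"Content-Type": "application/json"}))                   # True
```

So a cross-site POST with `application/json` is blocked behind a preflight, but a form-encoded POST goes straight through.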
Also, the CORS proxy statement is wrong, if I don’t misunderstand their point. Proxies don’t break security because they are obviously not on the cookie’s domain. To the browser they’re the proxy’s domain, so it will never send the site’s cookies to them.
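That point can be pictured with a toy cookie jar (my sketch; `cors-proxy.example` is a made-up hostname, and real matching also handles the Domain, Path, and Secure attributes and subdomains):

```python
# Toy cookie jar keyed by host. Browsers scope cookies to the site that set
# them, so a request to a different host gets none of them.
jar = {
    "your-bank.example": {"session": "secret-token"},
}

def cookies_for(request_host: str) -> dict:
    """Cookies a browser would attach to a request to request_host."""
    return jar.get(request_host, {})

# The bank's cookies go only to the bank itself; a CORS proxy sits on its
# own domain and therefore never receives them.
print(cookies_for("your-bank.example"))   # {'session': 'secret-token'}
print(cookies_for("cors-proxy.example"))  # {}
```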
Anyways, don’t trust the post or me. Just read https://owasp.org/ for web security advice.
As a userscript author, I can confirm it is some bullshit.
Thanks, very interesting. I’m a bit confused about what this means:
explicit credentials are unsuitable for server-rendered sites as they aren’t included in top-level navigation
What does “top-level navigation” mean here?
The article explains it: “Note: When I say ‘top-level’ I am talking about the URL that you see in the address bar. So if you load fun-games.example in your URL bar and it makes a request to your-bank.example then fun-games.example is the top-level site.”

Meaning explicit creds won’t be sent. Even if fun-games knows how to send explicit creds, it can’t, because fun-games does not have access to the creds stored for your-bank. Suppose your-bank’s creds are stored in local storage: since the current URL is fun-games, the page can only access fun-games’ local storage, not your-bank’s.
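The explanation above can be modeled as a tiny sketch (my simplification; the site names follow the article’s example, and real browsers have many more rules):

```python
# Toy model of why explicit credentials don't travel on top-level
# navigations, while cookies do.

# Per-origin storage buckets: script on a page can only read the bucket
# belonging to its own origin.
local_storage = {
    "your-bank.example": {"auth_token": "abc123"},
    "fun-games.example": {"high_score": "9000"},
}
cookie_jar = {
    "your-bank.example": {"session": "s3cret"},
}

def readable_storage(current_origin: str) -> dict:
    """Script running on current_origin sees only its own storage bucket."""
    return local_storage.get(current_origin, {})

def top_level_navigation(to_origin: str) -> dict:
    """What the browser attaches when the URL bar navigates to to_origin.

    Cookies for the destination are attached automatically; explicit
    credentials like an Authorization header are not, because no script
    gets to modify a top-level navigation.
    """
    return {"cookies": cookie_jar.get(to_origin, {}), "headers": {}}

# fun-games.example can't even read the bank's token...
print("auth_token" in readable_storage("fun-games.example"))  # False
# ...and a navigation to the bank carries cookies but no explicit creds:
print(top_level_navigation("your-bank.example"))
# {'cookies': {'session': 's3cret'}, 'headers': {}}
```

That’s why the article says explicit credentials are unsuitable for server-rendered sites: the server-rendered page arrives via a top-level navigation, which only cookies ride along on.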
Thank you! I was always wondering why the heck this (mostly) useless and broken mechanism exists. I had hesitations about disabling it because I had doubts about my understanding. Now I know I’m right.