This is something I have thought about a lot lately, since I recently saw a project that absolutely didn’t care about this in the slightest: it used vendor-specific features of MS SQL all over the place, which brought many advantages in terms of performance optimization.

Basically everyone advises you to write your backend generically, using technologies like ODBC, JDBC, Hibernate, … and to never use anything vendor-specific like stored procedures, vendor-specific data types, or meta queries, the argument being that you can later switch your DBMS without much hassle.
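
To make the contrast concrete, here is a minimal JDBC sketch of what “generic” means in practice (the table, column, and environment variable names are hypothetical): the active query sticks to standard SQL with bind parameters, while the commented-out variant shows the kind of T-SQL-only syntax (TOP, table hints) this advice tells you to avoid.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PortableQueryExample {
    public static void main(String[] args) throws Exception {
        // The JDBC URL is the only vendor-specific part; the query itself
        // sticks to standard SQL, so it runs unchanged on Postgres, MySQL,
        // MSSQL, and so on.
        try (Connection conn = DriverManager.getConnection(
                System.getenv("JDBC_URL"),      // e.g. jdbc:postgresql://...
                System.getenv("DB_USER"),
                System.getenv("DB_PASSWORD"))) {

            // Portable: standard SQL with bind parameters.
            String portable = "SELECT id, name FROM customers WHERE region = ?";

            // Not portable: T-SQL-only syntax (TOP, table hints) that locks
            // the query to MS SQL Server.
            // String mssqlOnly =
            //     "SELECT TOP 10 id, name FROM customers WITH (NOLOCK)";

            try (PreparedStatement ps = conn.prepareStatement(portable)) {
                ps.setString(1, "EU");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }
}
```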

I really wonder if this actually happens in the real world with production software, or if it is just advice that makes sense at surface level but never pans out in reality. I personally haven’t seen any large piece of software switch to a different DBMS, even when there would be long-term advantages to doing so, because the risk and the work needed to retest everything would be far too great.

The only examples I know of (like SAP) were really part of a much larger rewrite or update rather than “just” switching DBMS.

  • lemmyng@lemmy.ca · 9 months ago

    I’ve seen it successfully happen due to licensing costs and cloud migration (MSSQL->Spanner), as well as for scalability reasons (vanilla postgres->cockroach). The first was a significant change in features; the latter sacrificed some native plugins. In the first case the company was using vendor-specific features and rewrote the backend to fit the new vendor.

    There’s vendor agnosticism, and then there’s platform agnosticism. Writing your code so that it’s not tied to one specific implementation of postgres is fine, and lets you use a compatible drop-in. Writing your code so you can swap MSSQL for Oracle or Aurora or whatever at will does not make sense. In every case of attempted platform agnosticism I’ve seen, the team ended up abandoning the effort within a year or two with nothing to show for it.
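
    As a concrete illustration of that distinction, here is a minimal JDBC sketch (hosts, database name, and credentials are hypothetical): because CockroachDB speaks the Postgres wire protocol, the same pgJDBC driver and the same standard-SQL code can target either system, with only the connection URL changing.

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DropInExample {
        public static void main(String[] args) throws Exception {
            // CockroachDB speaks the Postgres wire protocol, so the same pgJDBC
            // driver works against both; only the URL differs (hypothetical
            // hosts; 26257 is CockroachDB's default port, 5432 is Postgres's).
            String url = args.length > 0 && args[0].equals("cockroach")
                    ? "jdbc:postgresql://crdb.internal:26257/app"  // CockroachDB
                    : "jdbc:postgresql://db.internal:5432/app";    // vanilla Postgres

            try (Connection conn = DriverManager.getConnection(url, "app", "secret");
                 Statement st = conn.createStatement();
                 // SELECT version() is understood by both systems.
                 ResultSet rs = st.executeQuery("SELECT version()")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
    ```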