Omg it’s so damn slow: it takes around 30 seconds to bulk-insert 15,000 rows

Disabling indexes doesn’t help, and the database recovery model is SIMPLE. My table is 50 columns wide, and from what I understand the main culprit is the stupid 2100-parameter-per-query limit in the ODBC driver. I am using the .NET SqlBulkCopy, and I only open the connection + transaction once per ~15,000 inserts (rough sketch of the code below)

I have 50 million rows to insert and it takes literally days. Please send help; I can fucking write with pen and paper faster than the damned Microsoft driver inserts rows
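
For reference, here is roughly the shape of the code (a trimmed sketch; the table name, connection string, and chunk source are stand-ins, not my real ones):

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

// One connection + transaction per ~15,000-row chunk, as described above.
// "dbo.MarketData" and connectionString are placeholder names.
static void WriteChunk(DataTable chunk, string connectionString)
{
    using var connection = new SqlConnection(connectionString);
    connection.Open();
    using var transaction = connection.BeginTransaction();

    using var bulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction)
    {
        DestinationTableName = "dbo.MarketData"
    };
    bulk.WriteToServer(chunk); // pushes the whole ~15k-row chunk
    transaction.Commit();
}
```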

  • LainTrain@lemmy.dbzer0.com · 3 months ago

    Hobbyist here. Is it normal for businesses to be inserting 50 million rows into a 50-column-wide table via 2100+-parameter queries, 15,000 inserts at a time, to a single DB?

    • kSPvhmTOlwvMd7Y7EOP · 3 months ago

      Oh buddy, enjoy your life & don’t touch Microsoft even with a 10-meter stick

    • deegeese@sopuli.xyz · 3 months ago

      Inserting 15k rows of 50 columns into a 50M-row table is something we do every day.

      2100 params on a query sounds like spaghetti code.

      I suspect OP is using single-row insert statements when they need a bulk insert to be performant (see the sketch below).
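
      To illustrate the difference (a minimal sketch with a hypothetical table `t` and made-up columns, not OP’s schema):

      ```csharp
      using System.Collections.Generic;
      using System.Data;
      using Microsoft.Data.SqlClient;

      static class InsertDemo
      {
          // Slow path: one parameterized statement and one round trip per row.
          public static void RowByRow(SqlConnection conn, IEnumerable<(int Id, string Name)> rows)
          {
              foreach (var (id, name) in rows)
              {
                  using var cmd = new SqlCommand(
                      "INSERT INTO t (id, name) VALUES (@id, @name)", conn);
                  cmd.Parameters.AddWithValue("@id", id);
                  cmd.Parameters.AddWithValue("@name", name);
                  cmd.ExecuteNonQuery();
              }
          }

          // Fast path: one bulk copy streams every row over the TDS bulk-load
          // protocol, with no per-row query parameters at all.
          public static void Bulk(SqlConnection conn, DataTable rows)
          {
              using var bulk = new SqlBulkCopy(conn) { DestinationTableName = "t" };
              bulk.WriteToServer(rows);
          }
      }
      ```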

      • kSPvhmTOlwvMd7Y7EOP · 3 months ago

        I am using SqlBulkCopy; given how bad MS is at naming things, that might as well be doing row inserts instead of bulk ones

      • kSPvhmTOlwvMd7Y7EOP · 3 months ago

        The 2100-parameter limit is a documented ODBC limitation (and it applies across all statements in a batch).

        This means that an “insert into t (c1, c2) values (?,?), (?,?), …” can only carry 2100 bound parameters in total. That has nothing to do with the code, and even less with the surrounding code being “spaghetti”.
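
        With 50 columns per row, that caps a single multi-row INSERT at 2100 / 50 = 42 rows, so a ~15,000-row chunk would need roughly 360 separate statements.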

        The tables ARE normalised; there are 50 columns because the underlying market-data calibration functions expect dozens of parameters and return dozens of results, such as volatility, implied duration, forward duration, and more.

        The amount of immaturity, inexperience, and ignorance coming from 2 people here is astounding

        Blocked

    • GetOffMyLan · 3 months ago (edited)

      No. This seems like a poorly designed system. It definitely sounds like a NoSQL database would be a much better fit for this task.

      And that many parameters seems like madness haha

      • kSPvhmTOlwvMd7Y7EOP · 3 months ago

        Please enlighten us? You barely know anything about the system or its usage, and you have deduced that NoSQL is better? Lol

        • GetOffMyLan · 3 months ago (edited)

          A flat 50-column table is usually an indicator of bad design and a lack of normalization.

          NoSQL is absolutely ideal for flat data with lots of columns and huge numbers of rows; it’s one of its main use cases.

          That many parameters is an indicator of poorly structured queries and spaghetti code. There is no way that’s the best structure for the data.