• 👍Maximum Derek👍 · 49 points · 10 months ago

    In my experience, if your logs are growing that fast for a reason, you’ll get to see it again… and again… and again. And you’ll show it to people going, “WTF, have you ever seen anything like this before?”

    • seahorse [Ohio] (OP) · 24 points · 10 months ago

      In my case Docker didn’t have a default max size that container logs would stop at, so they just grew and grew. I also had the highest log level turned on to debug something, so it was constantly logging a bunch of data.
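
      For reference: Docker’s default json-file log driver applies no size cap unless you configure one. A minimal sketch of capping it per container (image name and sizes here are just examples):

          # Rotate at 10 MB, keep at most 3 files (~30 MB per container)
          docker run --log-driver json-file \
              --log-opt max-size=10m \
              --log-opt max-file=3 \
              my-image

      The same options can go under "log-opts" in /etc/docker/daemon.json to apply daemon-wide to newly created containers.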

    • @[email protected]
      link
      fedilink
      810 months ago

      You’ll also have management breathing down your neck about the costs if it’s not absolutely necessary.

    • @[email protected]
      link
      fedilink
      310 months ago

      Built a centralized logging system to handle logging like this. Fun project but very much the result of bad logging hygiene.

  • @[email protected]
    link
    fedilink
    English
    3110 months ago

    Once I had a customer report that her computer was giving her out-of-disk-space errors. This was weird because we redirect their My Documents and Desktop folders to network file shares via script. Like, wtf could be using up the disk? While walking to their system I figured the drive was going bad. Nope.

    Just a 250+ GB log file from a chat program that they used. Like OMG that was amazing.

  • TipRing · 23 points · 10 months ago

    Just add
    : > logfile
    to crontab and run it once a minute, problem solved.
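
    Spelled out as an actual crontab entry (the path is just an example): the ':' command is the shell’s no-op builtin, and redirecting its empty output truncates the file in place.

        # m h dom mon dow  command
        * * * * * : > /var/log/myapp.log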

    • @[email protected]
      link
      fedilink
      5410 months ago

      Yeah, I centralize all my server logs; they point to a nifty location called /dev/null. It’s so good at collection and compression that it never grows in size!
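
      For anyone who wants this bulletproof log pipeline at home, it’s a one-liner (log path is just an example):

          # "Centralize" a log into the void; writes succeed, nothing is kept
          ln -sf /dev/null /var/log/myapp.log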

  • I Cast Fist · 20 points · 10 months ago

    I’ve had that happen with database logs where I used to work, back in 2015-2016.

    The reason was a very shitty system that, for some reason, threw around 140 completely identical delete queries per millisecond. When I say completely identical, I mean it. It’d end up something like this in the log:

    2015-10-22 13:01:42.226 = delete from table_whatever
          where id = 1
              and name = 'Bob'
              and other_identifier = '123';
    2015-10-22 13:01:42.226 = delete from table_whatever
          where id = 1
              and name = 'Bob'
              and other_identifier = '123';
    2015-10-22 13:01:42.226 = delete from table_whatever
          where id = 1
              and name = 'Bob'
              and other_identifier = '123';
    -- repeated over and over with the exact same fucking timestamp, then repeated again with slightly different parameters and different timestamp
    

    Of course, “no way it’s our system, it handles too much data, we can’t risk losing it, it’s your database that’s messy”. Yeah, sure, as if I’d set up triggers to repeat every fucking delete query. Fucking morons. Since they were “more important”, the fix was to disable database logging.

    • @[email protected]
      link
      fedilink
      1310 months ago

      Having query logging enabled on a production database is bonkers. The duplicate deletes are too, but query logging is intended for troubleshooting only; it kills performance.
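
      The thread never names the DBMS, but assuming MySQL/MariaDB for illustration, the general query log can be toggled at runtime, which is exactly why leaving it on permanently is an odd choice:

          -- Enable only while troubleshooting, then turn it back off
          SET GLOBAL general_log_file = '/var/log/mysql/query.log';
          SET GLOBAL general_log = 'ON';
          -- ... reproduce the issue ...
          SET GLOBAL general_log = 'OFF';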

      • I Cast Fist · 15 points · 10 months ago

        Take a wild guess as to why it had to be enabled in the first place, and only for DELETE queries.

    • @mostlypixels · 9 points · 10 months ago

      I saw PHP error logs cause a full disk in a few minutes (thankfully on a shared dev server), thanks to an accidental endless loop that just flooded everything with a wall of notices…

      And when you work with a CMS that allows third-party plugins that don’t bother to catch exceptions, aggressive web crawlers are not a good thing to encounter on a weekend… 1 exception × 400,000 product pages makes for a loooot of text.
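
      If it helps anyone, a minimal php.ini sketch for keeping notice spam out of the error log (the path and levels are just examples; tune per environment):

          ; Log errors to a file instead of displaying them,
          ; but drop notices and deprecations from the log
          log_errors = On
          error_log = /var/log/php/errors.log
          error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED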