• @[email protected]

    my website’s backend is made with bash, it calls make for every request and it probably has hundreds of remote arbitrary code execution bugs that will get me pwned someday, it’s great

    edit: to clarify, it uses a rust program i made to expose the bash scripts as http endpoints, i’m not crazy enough to implement http in bash

    it behaves like a static file server, but if a file has the others-execute permission bit set it executes the file instead of reading it

    it’s surprisingly nice for prototyping since you can just write a cli program and it’s automatically available over http too
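
    for illustration, a minimal sketch of what one of those executable “endpoints” could look like (the wrapper’s actual interface isn’t described here; this assumes it just runs the file and uses its stdout as the response body, and the file name is made up):

    #!/bin/bash
    # hypothetical endpoint script, e.g. site/hello.html
    # the wrapper is assumed to execute this and return whatever it prints
    printf '<h1>hello from bash</h1>\n'
    printf '<p>generated at %s</p>\n' "$(date -u)"

    flipping it from “static file” to “endpoint” is then just chmod o+x site/hello.html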

      • @[email protected]

        i thought it was neat how php lets you write your website’s logic with the same directory tree pattern that clients consume it from, but i didn’t want to learn php so i made my own, worse version

      • agilob

        You live like this?

      • @[email protected]

        I’ve taken some precautions, it’s running in a container as an unprivileged user and the only writable mount is the directory where make writes rendered pages, but i probably should move it into a vm if i want to be completely safe lol
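
        For reference, that kind of containment is roughly what these Docker flags express (the image name and paths are made up; this is a sketch, not the actual setup):

        # read-only root filesystem, unprivileged user,
        # only the render output directory mounted writable
        docker run --read-only \
            --user 1000:1000 \
            -v "$PWD/rendered:/srv/rendered" \
            -p 8080:8080 \
            bash-backend:latest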

    • @[email protected]

      I designed a chip architecture that runs bash code on silicon.

      I reimplemented x86 assembly purely in bash script.

    • @[email protected]

      you do realize that you can just use Apache instead of writing your own rust program for this, as this is more or less the CGI standard?
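
      For reference, CGI really is just “an executable writes the HTTP response to stdout”: a minimal bash CGI script, served by Apache from a directory with ExecCGI enabled, looks something like this:

      #!/bin/bash
      # minimal CGI script: headers first, then a blank line, then the body
      echo "Content-Type: text/html"
      echo ""
      echo "<h1>hello from CGI</h1>"
      echo "<p>query string: ${QUERY_STRING:-none}</p>"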

      • @[email protected]

        I know about the CGI standard, but mine does things a little differently (executable files don’t just render pages but also handle logging, access control, etc. when put in special positions within a directory), so I still think it was worth the afternoon i spent making it.

        • @[email protected]

          Yeah, especially if you did this for practice.

          Just saying that Apache, for big projects, is more battle-hardened. ;-)

  • agilob

    Before nginx was a thing, I worked with a guy who forked Apache httpd and wrote his blog in C, like, literally embedded the HTML and CSS inside the server, so when he made a typo or added another post he had to recompile the source code. The performance was out of this world.

    • @[email protected]

      There are a lot of solutions like that in rust. You basically compile the template into your code.

      • voxel

        yeah, templates can be parsed at compile time, but these frameworks are not embedding whole fucking prerendered static pages/assets

        • @[email protected]

          They are nowadays. Compiling assets and static data into the Rust binary and delivering a virtual DOM to the browser via WebSocket is the new cool kid in the corner.

          Have a look at dioxus

        • @[email protected]

          Compiling all assets into the binary is trivial in rust. When I have a small web server that generates everything in code I usually compile the favicon into the binary.

    • @[email protected]

      Does a file lookup really take that long? I’d say the trick was to have just plain old HTML with no bloat and you’re golden.

      • agilob

        Blog content was stored in memory and served zero-copy to the socket, so yeah, it was way faster. This was before the days of php-fpm and opcache that we use now. Back then things were deployed and talked to each other over TCP sockets (TCP to Rails, Django, or PHP) or by reading from disk, when the best HDDs were 5600 rpm and rare to find on shared hosting.

        • @[email protected]

          Couldn’t the HTML be loaded into memory at the beginning of the program and then served whenever? I understand that reading from disk will be slow, but that only happens once, at the beginning.

          • MeanEYE

            There are plenty of sins people still commit, and can commit, when it comes to web development, but reading from disk is not the bottleneck. If a site is slow, it’s most likely not disk read times, database access, or anything similar; it’s almost always the code generating the page that’s at fault.

      • MeanEYE

        The answer is no. The more a file is used, the longer it sits in the kernel filesystem cache. Getting a file from cache versus having it in process memory is a few function calls away, all of which take a few microseconds, which is negligible compared to network latency and other small issues that might be present in the code.

        On a few of our services we decided, on purpose, to store client configuration in JSON files instead of some sort of database storage. Accessing the config is insanely fast, and the kernel makes sure the file is cached, so when reading it you always get the latest version, quickly. That service currently handles around 100k requests a day, but we’ve had peaks in the past that went up to almost a million requests a day.

        Besides, when it comes to human interaction and websites, you only need to get first contentful paint within one second. Anything above 1.5 s will feel sluggish, but below 1 s it feels instant. That gives you on average around 800 ms to send data back. Plenty of time, unless you have a dependency nightmare and parse everything all the time.

    • Bazsalanszky

      This reminds me of one of my older projects. I wanted to learn more about network communications, so I started working on a simple P2P chat app. It wasn’t anything fancy, but I really enjoyed working on it. One challenge I faced was that, at the time, I didn’t know how to listen for user input while handling network communication simultaneously. So, after I had managed to get multiple TCP sockets working on one thread, I thought, why not open another socket for HTTP communication? That way, I could incorporate a fancy web UI instead of just a CLI interface.

      So, I wrote a simple HTTP server, which, in hindsight, might not have been necessary.

    • MeanEYE

      Nothing good old cache can’t solve. Compile the JS and CSS. Bundle the CSS with the main HTML file and send it in batches, since HTTP/2 supports chunking your output. HTTP prefers one big stream over multiple smaller ones anyway. So that guy was only inviting trouble for himself.

      • agilob

        You’re telling me about compiling JS, about a story of mine that is so old… I had to check, and yes, JS existed back then. HTTP/2? Wasn’t even planned. This was back when IRC communities weren’t sure whether the P in LAMP stood for Perl or PHP, because both were equally popular ;)

        • MeanEYE

          I’m just saying that embedding the source code into Apache is overkill. But I guess if Apache was so old that doing so wasn’t much of a chore, sure thing. I still think an Apache module would have been simpler.

    • @[email protected]

      Have you considered embedding python in those bash scripts? I have done this, and it is glorious.
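
      For short snippets a python3 -c call is all it takes (a quoted heredoc piped to python3 - works for longer ones); a tiny sketch:

      #!/bin/bash
      name="world"
      # inline Python one-liner; extra bash arguments show up in sys.argv
      python3 -c 'import sys; print("hello, " + sys.argv[1] + ", from python inside bash")' "$name"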

        • MeanEYE

          Did you know you can zip an entire Python project into a single file and make it executable? Quite a neat feature. Shove all the dependencies, modules, and assets in there and voilà: a single-file Python application.
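
          That’s the standard-library zipapp module; the workflow is roughly this (names are placeholders, and the app directory needs a __main__.py as its entry point):

          # vendor the dependencies into the app directory, then zip it up
          python3 -m pip install -r requirements.txt --target myapp/
          python3 -m zipapp myapp -p "/usr/bin/env python3" -o myapp.pyz
          ./myapp.pyz   # runs myapp/__main__.py from inside the archive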

  • FauxPseudo

    I’m currently trying to relearn all my advanced bash in python.

    • @[email protected]

      i already learned how to use my operating system, now you’re telling me I have to learn 30 new libraries that do the exact same shit?

    • DreamButt

      Just for fun or do you have a specific thing you feel would be better in python?

      • FauxPseudo

        Certain things I want to do will be easier in python and will be more portable. But bash is my home.

        • DreamButt

          Fair enough. The line for me has always been whether or not I expect to use it for more than just glue or a one off run

    • @philm

      but effectively it’s bash, I think /bin/sh is a symlink to bash on every system I know of…

      Edit: I stand corrected, thanks for the information; all the systems I’ve used had the symlink to bash. Also, it was not intended as a recommendation to use bash functionality under a #!/bin/sh shebang. As someone else pointed out, the recommendation would be #!/usr/bin/env bash, or #!/bin/sh if you know that you’re not using bash-specific functionality.

      • @[email protected]

        Still, don’t do this. If you use bash-specific syntax under that shebang, that’s a bashism and it causes issues for people using zsh, for example. Or on Debian/*buntu, which use dash as the default /bin/sh.

        Just use #!/bin/bash or #!/usr/bin/env bash if you’re funny.

        • @[email protected]

          #!/bin/bash doesn’t work on NixOS, since bash is in the nix store somewhere; #!/usr/bin/env bash resolves the correct location regardless of where bash is

          • JackbyDev

            Are there any distros with /usr/bin/env in a different spot? I still believe that’s the best approach for getting bash.

              • @[email protected]

                I do think a simple symlink is superior to a tool parsing stuff. A shame POSIX chose this approach.

                There’s still the issue that a POSIX shell can be on a non-POSIX system and vice versa. And certification versus actual practice. Btw, isn’t there only one POSIX-certified Linux distro? Was it Suse?

                • @[email protected]

                  Posix certification is dumb but posix compliance is nice to ensure some level of compatibility.

                  Symlinks would be pretty bad in the case of nixos. Wouldn’t fit at all

        • @[email protected]

          /bin/bash won’t work on every system, for example NixOS; some other systems may have bash in /usr/bin or elsewhere.

            • @[email protected]

              On NixOS, binaries are not in /usr/bin or /bin, except for /bin/sh and /usr/bin/env. Programs should not assume fixed paths for binaries and should instead look them up in $PATH.
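
              In script terms that just means resolving tools through $PATH instead of hard-coding their locations, e.g. (curl here is just an example tool):

              #!/usr/bin/env bash
              # look the tool up on $PATH instead of assuming /usr/bin/curl exists
              if ! command -v curl >/dev/null 2>&1; then
                  echo "curl not found in PATH" >&2
                  exit 1
              fi
              curl --version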

      • JackbyDev

        No no no no no, do not believe this; you will shoot yourself in the foot.

        https://wiki.debian.org/Shell

        Beginning with DebianSqueeze, Debian uses Dash as the target of the /bin/sh symlink. Dash lacks many of the features one would expect in an interactive shell, making it faster and more memory efficient than Bash.

        From DebianSqueeze to DebianBullseye, it was possible to select bash as the target of the /bin/sh symlink (by running dpkg-reconfigure dash). As of DebianBookworm, this is no longer supported.

      • @[email protected]

        It is a symlink, but bash will automatically enable posix compliance mode if you use it. So any bash specific features will bomb out unless you explicitly reset it in the script.

      • @[email protected]

        Wut, that is not even the case for Ubuntu. You’re probably thinking of dash. Example:

        sh -c '[[ true ]] && echo ya' 
        # sh: 1: [[: not found
        
        bash -c '[[ true ]] && echo ya' 
        # ya
        
      • callyral [he/they]

        i thought most unix-like systems had it symlinked to a shell like dash. it’s what i have on my system (void linux), of course not as an interactive shell lol

        i use #!/bin/sh for posix scripts and #!/usr/bin/env bash for bash scripts. #!/bin/sh works for posix scripts since even if it’s symlinked to bash, bash still supports posix features.

  • @onlinepersona

    The dude on the right is some neckbeard who yells “RTFM” and “i use Arch btw ;)” IRL.