It kind of makes me think of how odd it would have been if many of the old forums named themselves like bookclub.phpbulletin.com, metalheads.vbulletin.net, or something.

There’s nothing wrong with doing that, obviously, but it’s struck me as another interesting quirk of fediverse instances/sites. Generally as soon as you visit them you can tell by the site interface or an icon somewhere what software they’re using.

  • rglullis@communick.news
    1 year ago

    storage for networks they don’t want to host, so mirroring all of ActivityPub on all servers

    In my dream world, servers are only relays. They don’t store anything, unless a server wants to keep a copy for one of its clients, like POP3.

    • PeriodicallyPedantic@lemmy.ca
      1 year ago

      In my dream world, servers are only relays. They don’t store anything

      That sounds a lot like an ISP, except an ISP can relay anything, not just ActivityPub, which makes it even better.

      unless it wants to keep a copy for one of its clients

      ISP plus a CDN :p

      For the same reason that ISPs don’t eliminate the need for servers and server-side storage, moving all of your storage to the edge is usually a bad idea. You’re basically describing a serverless P2P social network, and with it come all of the pitfalls of strictly-P2P apps: searching becomes prohibitively expensive, and if your client goes offline (e.g. you board an airplane or your phone runs out of battery), reliably catching up can be problematic. How would this work for PeerTube, for example? Would every client that cared about PeerTube need to keep a copy of every video from every PeerTube server, just in case you wanted to search it? My phone would fill up instantly. Would my phone just save an address to look up the video from the original author’s personal device? Not only does that sound like a security nightmare, but also RIP to the author’s data cap if they published from their mobile device.
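To make the storage objection concrete, here is a back-of-envelope sketch of what a full local mirror would cost a client. All the numbers are illustrative assumptions, not measurements of any real PeerTube instance:

```python
# Illustrative back-of-envelope only; every number below is an assumption.
avg_video_mb = 200          # assumed average video size in MB
videos_per_server = 10_000  # assumed catalogue size of one instance
servers = 50                # assumed number of federated instances a client follows

total_gb = avg_video_mb * videos_per_server * servers / 1024
print(f"full local mirror: ~{total_gb:,.0f} GB per client")
# → full local mirror: ~97,656 GB per client
```

Even with these modest assumptions, a "mirror everything" client would need roughly 100 TB of local storage, which is the point being made about edge storage above.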

      I think that servers are needed. I don’t know if we need servers to partially mirror each other like Mastodon does, but I think that hosting the content on the servers themselves is the right practical move. And given that we’re more or less boxed into a federated server-client architecture, I think we’re getting it about as good as we’re going to get, until we choose some standards body to govern how to expose capabilities.

      I do think that the right approach is a discoverable API where clients can find out what capabilities a given piece of content has, and what those capabilities mean. Just as JavaScript feature detection is far better than user-agent detection, servers could integrate with any social network that supports some minimum set of capabilities, and clients could present all capabilities to the user (while ignoring unsupported ones) regardless of the originating social network. But we’re not there yet: we need that standard first, and the major players need to agree on it.
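The feature-detection idea above can be sketched in a few lines. This is a hypothetical illustration, not a real spec; the field name `capabilities` and the capability strings are invented for the example:

```python
# Hypothetical capability-discovery sketch; "capabilities" and the
# capability names are illustrative, not part of any real protocol.

def render_actions(content: dict, client_supported: set) -> list:
    """Return only the capabilities both the content and the client support,
    silently ignoring anything unknown -- analogous to JS feature detection."""
    return [cap for cap in content.get("capabilities", []) if cap in client_supported]

post = {
    "id": "https://example.social/post/1",
    "capabilities": ["reply", "upvote", "emoji-react", "edit-history"],
}
client_supported = {"reply", "upvote", "boost"}

print(render_actions(post, client_supported))  # → ['reply', 'upvote']
```

The key design point is that neither side needs to know which social network the other came from: unknown capabilities are simply not rendered, rather than causing an error.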

      • rglullis@communick.news
        1 year ago

        That sounds a lot like an ISP,

        No, that sounds exactly like Nostr, which is a lot more practical and cheaper to run than a Mastodon server, and actually scales quite well.

        moving all your storage to the edge is usually a bad idea.

        No. You just need to move the application state to the edge. Storage itself can still live in content-addressable data servers: IPFS, magnet links, or plain old (S)FTP servers.

        When someone posts a picture on Mastodon, the picture itself is not replicated, just a link to it. Now, imagine that your “smart client” version of Mastodon (or Peertube, or Lemmy) wants to post a picture. How would it work?

        • The user posts the photo to an IPFS server. It could be a cloud server or their own NAS running at home.
        • The upload produces a content hash for the photo.
        • The client puts the hash of the photo in the message.
        • The client signs the message and sends it to the server.
        • The server receives the message and processes it (indexes metadata, puts the image in its cache to help with seeding, etc.).
        • The server has a policy of keeping the image in its outbox cache until it has been delivered to at least 70% of the other clients, or for 5 days, whichever happens first.
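The client side of the flow above can be sketched with standard-library stand-ins. SHA-256 stands in for an IPFS CID and HMAC stands in for a real public-key signature (e.g. ed25519); both are simplifications for illustration, not what IPFS or Nostr actually use:

```python
import hashlib
import hmac
import json

# Sketch of the posting flow described above. SHA-256 is a stand-in for an
# IPFS content ID, and HMAC is a stand-in for a real public-key signature;
# both substitutions are assumptions made to keep the example self-contained.

def content_address(data: bytes) -> str:
    """Hash the uploaded photo; the hash, not the bytes, travels in the post."""
    return hashlib.sha256(data).hexdigest()

def sign_message(message: dict, secret: bytes) -> dict:
    """Client signs the message before handing it to the relay/server."""
    payload = json.dumps(message, sort_keys=True).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {**message, "sig": sig}

photo = b"\x89PNG...fake image bytes"          # pretend image upload
msg = {"type": "Note", "attachment": content_address(photo)}
signed = sign_message(msg, secret=b"client-secret-key")

# The server only ever sees the signed message; the photo itself stays on
# the user's IPFS node/NAS and is fetched (and optionally cached) by address.
print("sig" in signed, signed["attachment"] == content_address(photo))
```

Because the attachment is referenced by hash, any node that happens to hold the bytes (the author's NAS, a server's seeding cache) can serve them, which is what makes the 70%/5-day cache policy above workable.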

        I think that servers are needed.

        If by “servers” you mean “nodes in the network that are more stable and have stronger uptime/performance guarantees”, I agree 100%. If by “servers” you mean “centralized nodes responsible for application logic”, then I’d say you can easily be proven wrong by actual examples of distributed apps.

        • PeriodicallyPedantic@lemmy.ca
          1 year ago

          Looking at Nostr, I generally like the architecture, although in broad strokes it’s very similar.

          I like the simplification and separation of responsibilities. I don’t like using self-signing as an identity mechanism for a social network.

          But crucially, it seems to have the same problem we’re discussing here: different social networks based on the same protocol can have different message schemas and capabilities, making them incompatible.