Hey! 👋

Whether you’re an experienced PHP developer, a beginner just starting your journey, or an enthusiast interested in understanding more about the world of PHP, this is the place for you. We are a community dedicated to sharing knowledge, discussing new trends, and solving problems related to PHP.

As members of this community, we expect everyone to respect each other and foster a positive environment. Please make sure your posts are relevant to PHP, and remember to be kind and considerate in your discussions. Let’s learn from each other and help each other grow.

Whether you’re here to ask questions or to share your knowledge, we’re excited to have you here. Let’s make the most of this community together!

Welcome to /c/php! 🐘

  • @msage
    1 year ago

    Yeah, but any ‘single server’ PHP defeats the purpose of PHP, so I’m not a huge fan.

    I’ve used them in the past, and they work great, until you have to switch over to another node.

    • @[email protected]
      1 year ago

      Fair point.

      However, why do you need persistent connections? I’d expect the connection count to grow very slowly as instances increase, given that the queries are quick.

      • @msage
        1 year ago

        You lose about 5-6ms just connecting to the database. Keeping them open helps a lot.

        My goal is to write code that is as simple as possible, and everything else is super quick; it’s just that connections aren’t handled well in PHP, which is a shame.
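
        For what it’s worth, keeping them open is a one-flag change in PDO (the DSN and credentials below are placeholders, adjust to your setup):

        ```php
        // Hypothetical DSN/credentials – adjust to your environment.
        $pdo = new PDO(
            'pgsql:host=db.example;dbname=app',
            'app_user',
            'secret',
            [PDO::ATTR_PERSISTENT => true] // reuse the connection across requests
        );
        ```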

        • @[email protected]
          1 year ago

          It’s not that there isn’t an option, it’s just that I don’t know how to help you. MySQL has an option to reconnect; I suppose it might be the same for Postgres?

          The single running process that was so easily dismissed could save tons of queries, for example! Sorry, I keep thinking in that direction.

          • @msage
            1 year ago

            A single process doesn’t save any queries; I have no idea what you mean.

            Persistent connections persist between requests just like in a single process. It’s just that pool handling is hidden in PDO.

            • @[email protected]
              1 year ago

              Also, how’s the setup? You set up, for example, 5 max children in FPM and 5 persistent connections? Per server? So your overall connections to the DB server will be 5× your server instances?

              If you set up 5 FPM children and fewer connections, one child will eventually reuse a connection from another, but only when that connection is free (not running a query for another process, and PDO not still consuming a result set). If it tries to run a query at that moment, it will have to wait, and it will block. This is my understanding. Also, how do you do transactions with persistent connections?
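
              To put rough numbers on it (illustrative values, not a recommendation):

              ```ini
              ; php-fpm pool settings (per server) – illustrative values
              pm = static
              pm.max_children = 5
              ; each child can hold one persistent connection per unique
              ; DSN/credentials, so the worst case is roughly
              ; max_children × number of servers on the database side
              ```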

              This has evolved into such an interesting conversation.

              • @msage
                1 year ago

                From my current understanding, there is no pool, just one process keeps and reuses one database handle over and over again.

                And it’s not PDO but the driver that handles that.

                Transactions are handled within try/finally blocks. You can reset the DB connection, but it’s not free in terms of time. You get more performance by making sure the code doesn’t leak open transactions.
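
                One way that pattern can look (a sketch; `$pdo` is assumed to be an open PDO handle and the query is illustrative):

                ```php
                $pdo->beginTransaction();
                try {
                    $pdo->prepare('UPDATE accounts SET balance = balance - ? WHERE id = ?')
                        ->execute([100, 1]);
                    $pdo->commit();
                } finally {
                    // if commit() was never reached, don't leak the open
                    // transaction into the next request on this connection
                    if ($pdo->inTransaction()) {
                        $pdo->rollBack();
                    }
                }
                ```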

          • @[email protected]
            1 year ago

            Also, to work with persistent connections you will have to have a pool, right? Because when you query from instance 1, the connection is not available until you consume the result set. Or is that only for MySQL?

            • @msage
              1 year ago

              Yes, you have a pool, but it’s handled by PDO somewhere, and I have no idea how to manipulate it. It just occurred to me to try to open another resource before deallocating the first one.

              If that doesn’t work, I will give PgBouncer a try. In case that won’t do what I need, I’ll just use pg_pconnect.
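
              For the record, the pg_pconnect fallback is basically a one-liner (connection parameters are placeholders):

              ```php
              // pg_pconnect() keeps the connection alive across requests;
              // parameters here are placeholders
              $conn = pg_pconnect('host=db.example dbname=app user=app_user password=secret');
              if ($conn === false) {
                  die('connection failed');
              }
              ```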

              I love PHP so much, this is one of two issues I have with it.

              • @[email protected]
                1 year ago

                You can check with is_resource maybe?

                With a single process, you can cache queries in memory, depending for example on how the data change and how frequently the queries run.
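
                A minimal sketch of that idea, assuming a long-running process (in classic FPM, statics don’t survive between requests, so this only pays off in a persistent worker; the TTL is illustrative):

                ```php
                // Naive per-process query cache keyed by SQL text.
                // Only useful in a long-lived process; TTL is illustrative.
                function cachedQuery(PDO $pdo, string $sql, int $ttl = 60): array
                {
                    static $cache = [];
                    $now = time();
                    if (isset($cache[$sql]) && $cache[$sql]['expires'] > $now) {
                        return $cache[$sql]['rows'];
                    }
                    $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
                    $cache[$sql] = ['rows' => $rows, 'expires' => $now + $ttl];
                    return $rows;
                }
                ```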

                The manual (https://www.php.net/manual/en/pdo.connections.php) has some interesting notes, and https://www.php.net/manual/en/function.pg-connect.php mentions a force_new kind of setting if you need it. I think it’s not PDO, but a constant you might be able to pass.
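
                That constant is PGSQL_CONNECT_FORCE_NEW, passed to pg_connect() (so it’s the pgsql extension, not PDO; the connection parameters here are placeholders):

                ```php
                // Forces a brand-new connection even if an identical one
                // already exists in this process; parameters are placeholders.
                $conn = pg_connect(
                    'host=db.example dbname=app user=app_user',
                    PGSQL_CONNECT_FORCE_NEW
                );
                ```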

                Also, Stack Overflow has some user commentary:

                https://stackoverflow.com/questions/3332074/what-are-the-disadvantages-of-using-persistent-connection-in-pdo

                Personally, I don’t try to optimize that hard in PHP (5 to 10 ms due to the DB connection). There’s usually an improvement in the way things work, like in the code itself, that would probably give you an order of magnitude more performance. Just saying!

                • @msage
                  1 year ago

                  So I finally found the time, and also my issue - it was a PEBKAC all along.

                  My retry loop was written so haphazardly that it got stuck in an infinite loop after a rebalance, instead of correcting itself.

                  After fixing that, it all works as expected. There was no issue with persistent connections after all. Rebalancing halts the benchmark for 3 seconds, then traffic re-routes itself to the correct node.
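
                  Roughly, the corrected loop caps attempts and backs off instead of spinning (an illustrative sketch, not my exact code):

                  ```php
                  // Retries a database operation a fixed number of times with
                  // growing backoff, instead of spinning forever on failover.
                  function withRetry(callable $op, int $maxAttempts = 5, int $delayMs = 200)
                  {
                      for ($attempt = 1; ; $attempt++) {
                          try {
                              return $op();
                          } catch (PDOException $e) {
                              if ($attempt >= $maxAttempts) {
                                  throw $e; // give up instead of looping forever
                              }
                              usleep($delayMs * 1000 * $attempt); // linear backoff
                          }
                      }
                  }
                  ```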

                  The current setup is a three-node Postgres + PHP cluster, with HAProxy routing the pg connections to the writable node, and one nginx load balancer/reverse proxy. I tried PgBouncer with and without persistent PHP connections, and it made little to no measurable difference.

                  The whole deal is a proof of concept for replacing the current production project, which is written in Perl + MySQL (PXC). And I dislike both with such burning passion that I’m willing to use my free time to replace them.

                  And from my tests it seems that a Patroni cluster can fully replace the multi-master cluster with auto-failover, and we can do a switchover without losing a single request.

                  • @[email protected]
                    1 year ago

                    Glad to hear it. All of it, actually. Sounds like you are content with it now.

                    Had to Google PEBKAC. Aren’t all problems like that?

                • @msage
                  1 year ago

                  The issue is that PDO returns a resource, but for a connection that is no longer writable.

                  I did not have time to actually test anything, and I won’t until Sunday at least.

                  Just so you get my situation:

                  I need to benchmark 5,000 successful write requests per second. So yes, 5 ms is way too long for me; the rest of the request is done within 3 ms tops.

                  I beat that benchmark with ease; the only issue is with failover in the Patroni cluster. Once I get some time to sit down, I will report my findings.

                  There are many other ways to solve this, I just want to better understand what PDO actually does.