I see so many posts and so many people running NGINX as their reverse proxy. Why, though? There’s HAProxy and Apache, with Caddy being a simpler option.

If you’re starting from scratch, why did you pick/are you picking NGINX over the others?

  • Max-P@lemmy.max-p.me · 3 months ago

    It depends on what you use on the Python side. Classically that would have been uWSGI or one of the *SGI interfaces, and lately ASGI.
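
    For reference, the interface itself is tiny. A rough sketch of what an ASGI application callable looks like (the name `app` and the response body are just illustrative), i.e. the thing a server like uvicorn or a proxy-fronted worker pool calls once per request:

    ```python
    # Minimal ASGI application: an async callable taking (scope, receive, send).
    # This sketches the interface shape only, not a production app.

    async def app(scope, receive, send):
        # Only handle plain HTTP requests in this sketch.
        assert scope["type"] == "http"

        # Start the response with a status code and headers.
        await send({
            "type": "http.response.start",
            "status": 200,
            "headers": [(b"content-type", b"text/plain")],
        })
        # Send the body and signal that the response is complete.
        await send({
            "type": "http.response.body",
            "body": b"hello from an ASGI app\n",
        })
    ```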

    Sure, one can totally make Python apps that serve HTTP directly. The same can be done with PHP (and Ruby and others) as well, but most people still run their PHP through PHP-FPM over FastCGI because you can offload a lot of the work to the much faster NGINX side. A fair amount of apps make use of X-Accel-Redirect to serve private files, so you don’t tie up a PHP worker for an hour serving the user’s 2GB file.
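
    The X-Accel-Redirect trick isn’t PHP-specific either: the backend only returns headers, and NGINX streams the file from an internal-only location. A rough Python sketch of the idea (plain WSGI for brevity; the paths and the nginx location are made up for illustration):

    ```python
    # Sketch: hand a large private file off to NGINX via X-Accel-Redirect,
    # assuming an nginx config roughly like:
    #
    #   location /protected/ {
    #       internal;              # not reachable directly by clients
    #       alias /srv/private/;   # real files live here
    #   }

    def app(environ, start_response):
        # The application decides whether the user may have the file
        # (authentication/authorization checks would go here)...

        # ...then returns only headers; NGINX streams the file itself,
        # so no Python/PHP worker is tied up for the duration of the download.
        start_response("200 OK", [
            ("Content-Type", "application/octet-stream"),
            ("X-Accel-Redirect", "/protected/big-file.bin"),  # hypothetical path
        ])
        return [b""]
    ```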

    But yes, as those languages all move to async I/O and away from worker pools, it’s more common to see them serve HTTP directly, and there’s less and less need for a proxy that supports those other protocols. The async event loop is what made NGINX special when it came out, so languages that move to that model naturally reduce the need for it as well: they too can easily handle thousands of concurrent connections without problems. Plus these days people slap a CDN in front anyway, so static file performance doesn’t matter quite as much.
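
    To illustrate the model: a single event loop in one process can juggle thousands of open connections, because a connection that’s waiting doesn’t occupy a worker. A toy sketch using nothing but the standard library (the port and the fake backend delay are arbitrary):

    ```python
    # Sketch: one process, one event loop, many concurrent connections.
    import asyncio

    async def handle(reader, writer):
        # Read the request head (no real parsing in this toy example).
        await reader.read(1024)
        # Pretend to wait on a slow backend; the event loop keeps serving
        # other connections while this coroutine is suspended.
        await asyncio.sleep(1)
        writer.write(
            b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\nConnection: close\r\n\r\nok\n"
        )
        await writer.drain()
        writer.close()
        await writer.wait_closed()

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())
    ```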

    • db0@lemmy.dbzer0.com · 3 months ago

      Ye, pretty much. I was just quite astounded at that statement, as the AI Horde is basically just a lot of Python processes behind a very low-powered HAProxy server.

      Personally, I understand people like to stay with the familiar, which is perfectly fine for a non-demanding service, but when something becomes demanding, I find HAProxy’s specialization serves better. I wish the default Lemmy deployment used HAProxy myself.