I abandoned OpenLiteSpeed and went back to good ol’ Nginx

by Lee Hutchinson, Ars Technica
Ish is on fire, yo. (credit: Tim Macpherson / Getty Images)

Since 2017, in what spare time I have (ha!), I've been helping my colleague Eric Berger host his Houston-area weather forecasting site, Space City Weather. It's an interesting hosting challenge: on a typical day, SCW does maybe 20,000-30,000 page views to 10,000-15,000 unique visitors, which is a relatively easy load to handle with minimal work. But when severe weather events happen, especially in the summer, when hurricanes lurk in the Gulf of Mexico, the site's traffic can spike to more than a million page views in 12 hours. That level of traffic requires a bit more prep to handle.

Hey, it's Space City Weather! (credit: Lee Hutchinson)

For a very long time, I ran SCW on a backend stack made up of HAProxy for SSL termination, Varnish Cache for on-box caching, and Nginx for the actual web server application, all fronted by Cloudflare to absorb the majority of the load. (I wrote about this setup at length on Ars a few years ago for folks who want some more in-depth details.) This stack was fully battle-tested and ready to devour whatever traffic we threw at it, but it was also annoyingly complex, with multiple cache layers to contend with, and that complexity made troubleshooting issues more difficult than I would have liked.
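If you're curious how those three pieces chain together, here's a rough sketch of the kind of configs involved; the ports, hostnames, and file paths below are made-up placeholders for illustration, not the actual Space City Weather settings:

    # haproxy.cfg (sketch): terminate TLS on 443, hand plain HTTP to Varnish
    frontend https_in
        bind *:443 ssl crt /etc/haproxy/certs/example.pem
        default_backend varnish

    backend varnish
        server varnish_local 127.0.0.1:6081 check

    # default.vcl (sketch): Varnish serves cached hits, fetches misses from Nginx
    vcl 4.1;
    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

    # nginx site config (sketch): the actual web server, listening only locally
    server {
        listen 127.0.0.1:8080;
        root /var/www/example;
    }

Each layer only talks to the next one over localhost, which is tidy enough in theory, but it also means every request crosses three separate programs, each with its own config, logs, and caching behavior to reason about when something breaks.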

So during some winter downtime two years ago, I took the opportunity to jettison some complexity and reduce the hosting stack down to a single monolithic web server application: OpenLiteSpeed.

