Slashdotted/Digged

This guy is describing how his blog got “destroyed” by Digg because it had about 10,000 hits in a few hours. I don’t get this. We used to run OSNews (version 2) on a dual Celeron 466 MHz (we now run on Xeons), and when we got Slashdotted back in the day, the server would serve up to 45,000 pages per hour and still hold up fine. Today, Adam’s rewrite of OSNews (version 3) continues to work fine when Digged or Slashdotted.

In my opinion, it’s not only that these guys haven’t edited Apache’s and MySQL’s conf files to optimize them; the software they are running is also unnecessarily bloated and slow, including this very blog that I “lease” off Blogsome (WordPress). This was one of the reasons I decided back then to write OSNews v2 from scratch rather than use phpNuke or a similar CMS of that day. While I am nowhere near as good as Adam at optimizing stuff, it still proved way faster than the popular off-the-shelf solutions because we focused on simplicity rather than feature bloat.


Luis wrote on October 29th, 2006 at 8:57 AM PST:

My PHP skills are very, very basic, but I still find it much more convenient to write my own stuff than to use a free CMS or blog package. Those are great in functionality, but the code is too complicated and “heavy”. I’ve seen a Wiki/CMS that ran 130 MySQL queries to display the front-page content. That’s overkill.

In a very simple test I made, an Apache server could handle about 2,000 requests/sec for a static HTML page. When using the simplest PHP script (a “Hello world” one), it dropped to 300 reqs/sec. With one MySQL query against the simplest, smallest table, it could only handle 45 reqs/sec. PHP scripts and MySQL queries are expensive. So my theory is that everything that can be static should be static. For example, if this blog’s title is “Eugenia’s rants and thoughts” and it won’t change every day/week/month, it makes sense to hardcode it instead of connecting to a database to fetch and display it on every request. Most programs out there do the latter.
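To illustrate, here is roughly what the two approaches look like in PHP (the table name, column names and credentials are made up for the example):

<?php
// Dynamic way: a connection and a query on every request,
// just to print a value that almost never changes.
$db = mysql_connect('localhost', 'bloguser', 'secret'); // hypothetical credentials
mysql_select_db('blog', $db);
$res = mysql_query("SELECT value FROM settings WHERE name = 'title'", $db); // hypothetical table
$row = mysql_fetch_assoc($res);
echo '<h1>' . htmlspecialchars($row['value']) . '</h1>';

// Static way: the title lives in a config file and costs nothing per request.
define('BLOG_TITLE', 'Eugenia’s rants and thoughts');
echo '<h1>' . htmlspecialchars(BLOG_TITLE) . '</h1>';
?>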


This is the admin speaking...
Eugenia wrote on October 29th, 2006 at 9:16 AM PST:

There are some connection numbers you can tweak for MySQL and Apache in their config files. But overall, yes, static is good when it can be used. And most free CMSes are indeed overkill.
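For example (the numbers here are only a sketch; the right values depend entirely on your RAM and traffic):

# httpd.conf (prefork MPM): how many Apache workers may run at once
StartServers          8
MaxClients          150
MaxRequestsPerChild 1000
KeepAlive            On
KeepAliveTimeout      5

# my.cnf ([mysqld] section): connection cap and cache sizes
max_connections  = 200
key_buffer       = 64M
query_cache_size = 16M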


Adam wrote on October 30th, 2006 at 6:46 AM PST:

“When using the simplest PHP script (a ‘Hello world’ one), it dropped to 300 reqs/sec.”

Obviously, I know nothing about your server setup, but IN GENERAL, PHP is usually not the bottleneck. It’s very rare that PHP itself is the problem; it’s MUCH more common that a db connection is the bottleneck.

At OSNews, we solved this with caching. We now use two types of caching systems that I wrote. One is a cache-on-demand system: when the cache goes stale, the next request refreshes it. We use this for processor-intensive or query-heavy content that isn’t requested often. The other is a timed cache that refreshes itself every so often via a cron job, which is ideal for pages that get heavy traffic even when they are not query-heavy.
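The cache-on-demand idea can be sketched in a few lines (the path, TTL and renderer below are placeholders, not our actual code):

<?php
// Serve the cached copy if it is still fresh; otherwise the request
// that finds it stale pays the cost of rebuilding it once.
$cache_file = '/tmp/cache/frontpage.html'; // hypothetical path
$ttl        = 300;                         // e.g. five minutes

if (file_exists($cache_file) && (time() - filemtime($cache_file)) < $ttl) {
    readfile($cache_file);                 // cheap: no queries, no rendering
    exit;
}

ob_start();
build_frontpage();                         // hypothetical query-heavy renderer
$html = ob_get_clean();
file_put_contents($cache_file, $html);     // refresh the cache
echo $html;
?>

The timed cache is the same idea with the rebuild moved into a script that cron runs, so no visitor ever waits on it.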

I have been toying with the idea of using PHP to generate static pages, but to be honest, it’s probably more trouble than it’s worth, since the dynamic system seems to be working ok and is now optimized, and new code will likely be faster anyway.
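If I ever did go static, it would probably be a cron-run script that renders the page once and writes it where Apache serves plain files (paths here are hypothetical):

<?php
// regenerate.php, run from cron every few minutes
ob_start();
include '/var/www/osnews/frontpage.php';   // the existing dynamic page
file_put_contents('/var/www/htdocs/index.html', ob_get_clean());
?>

After that, Apache serves index.html with no PHP involved per request at all.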


Adam wrote on October 30th, 2006 at 6:50 AM PST:

I forgot to mention the bit about optimization.

We got around a lot of the intensive stuff by optimizing our raw SQL. We used the right JOINs in the right order, and ran additional queries where we found that was actually faster. In some cases it was faster to run 40 simple queries for a page than 8 big ones that took much longer to return. In fact, once the right SQL was in place, we could load our largest comments page in threaded mode (600+ comments, dynamically threaded) in about 3 seconds.

It’s easy to forget that the path your queries take is very often the culprit. A poorly written query can bring your whole site to its knees.
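To make that concrete, here are the two shapes of the trade-off for a threaded comments page (the table and column names are invented for the example; only EXPLAIN and a stopwatch can tell you which wins on your data):

<?php
// One big query: a single JOIN fetches everything. Fast when the
// indexes line up, but the optimizer can pick a bad plan on big tables.
$sql = "SELECT c.id, c.parent_id, c.body, u.username
        FROM comments c
        JOIN users u ON u.id = c.user_id
        WHERE c.story_id = 1234";

// Many small queries: one trivial query per thread level. More round
// trips, but each one is easy for MySQL to plan and execute.
$roots = mysql_query("SELECT id, body FROM comments
                      WHERE story_id = 1234 AND parent_id IS NULL");
while ($row = mysql_fetch_assoc($roots)) {
    $kids = mysql_query("SELECT id, body FROM comments
                         WHERE parent_id = " . (int)$row['id']);
    // ... render this comment, then recurse into $kids ...
}
?>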


Justin Silverton wrote on October 31st, 2006 at 5:33 AM PST:

I read this entire post and I’m not sure why his blog couldn’t handle the Digg.

I would get a new hosting provider ASAP. I have gotten about 5 of my articles to the front page of Digg in the past 6 months, and my website was able to handle it (I run WordPress on PHP/MySQL).

I think I got around 30,000 unique IP addresses to my site each time, in a matter of hours (and I’m not even talking about hits here; there were definitely more than that).

If you are running a dedicated server and have full control over the web server and other configuration files, you might want to look into memcached.

I’ve heard it works very well in these types of situations.
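From what I’ve seen, the PHP side of it is tiny. With the pecl Memcache extension it is roughly this (the server address, cache key and renderer function are just examples):

<?php
$mc = new Memcache();
$mc->connect('localhost', 11211);         // assumes memcached runs locally

$html = $mc->get('frontpage');            // hypothetical cache key
if ($html === false) {                    // cache miss: rebuild and store
    $html = build_frontpage();            // hypothetical expensive renderer
    $mc->set('frontpage', $html, 0, 300); // cache for 300 seconds
}
echo $html;
?>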

