How to handle a large spike in traffic?

We are planning on running a Living Social deal that should bring a lot of traffic to our site. How much traffic, I do not know. I'm wondering how to prepare for the temporary spike in traffic. In the past, when we had a burst of traffic with around 50-100 customers on our site at a time, our current server slowed down and the load got pretty high.



Our current host is Liquid Web, on an 8GB Storm VPS (https://www.liquidweb.com/StormServers/vps.html#showsignup). We contacted their sales team and they suggested we upgrade to their bare metal dedicated Storm server (https://www.liquidweb.com/StormServers/dedicated.html) with these specs:



Intel Dual Xeon E5506

Number of CPUs: 2

Number of Cores: 8

Amount of RAM: 8GB

Number of Hard Drives: 4

Total Disk Space: 261GB

Hard Drive Type: SAS

RAID Setup: RAID10



How many concurrent users will this handle? Is there even a way to figure that out? I was also considering a load balancer and another server, but that would require DNS changes, and for a temporary solution I want the least amount of downtime.



Does anyone out there have a high traffic site? If so, can you share your server specs and traffic numbers?

I'm running a bare metal setup at Liquid Web (i.e. Storm) with the following specs:



32GB RAM / 2TB SATA RAID 10 / Dual Xeon E5606 (8 cores total).



I host around 40 websites, one of which runs CS-Cart with 20+ simultaneous visitors at any given time. We also host some 200-odd mailboxes on the same box. Performance is good, but it's starting to peak. I'm sure going SAS would help performance substantially in your case.



My advice would be to investigate using a Storm SSD instance for your MySQL database, and set up two load-balanced web servers for the CS-Cart frontend. You could also implement a CDN for static files such as JavaScript and CSS.
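Splitting the database off is mostly a configuration change on the CS-Cart side. If I remember the key names right, something along these lines in config.local.php (the hostname and credentials here are just placeholders) would point the web frontends at a separate MySQL box; both load-balanced frontends would share the same settings so they hit one database:

[code]
// config.local.php (excerpt) - placeholder host/credentials, adjust for your setup
$config['db_host'] = 'db1.internal.example';  // separate Storm SSD instance running MySQL
$config['db_name'] = 'cscart';
$config['db_user'] = 'cscart_user';
$config['db_password'] = 'change_me';
[/code]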



Lastly, beware of Storm virtual instances. Unless you're running on a bare metal setup, you're not protected by a true RAID. Storm actually lost one of my virtual servers, and if it weren't for the backup service I paid extra for, I would've lost the entire server.



Best of luck,

Depending on what backend caching method you are using, the odds are you are going to be bound by file I/O rather than CPU or memory. However, more memory helps with both file I/O and the number of PHP or Apache processes that can stay resident in memory. Note that if you are using Apache, running PHP in FastCGI mode would probably help you here because of the way it keeps persistent PHP processes alive and reuses them across requests.
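If you do go the FastCGI route under Apache, a minimal mod_fcgid setup looks roughly like the sketch below; the process counts are only illustrative starting points, and the php-cgi path will vary depending on your build:

[code]
# Apache vhost excerpt (mod_fcgid) - illustrative values only
<IfModule mod_fcgid.c>
    AddHandler fcgid-script .php
    FcgidWrapper /usr/local/bin/php-cgi .php   # adjust to wherever your php-cgi lives
    FcgidMaxProcesses 50               # total persistent PHP processes kept alive
    FcgidMaxRequestsPerProcess 500     # recycle processes so memory use doesn't creep
    FcgidIdleTimeout 60                # kill processes idle longer than 60 seconds
</IfModule>
[/code]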



I'd be interested in what you learn here. I'm not sure I've ever seen a true cloud DB perform worth a damn, but if it's a private cloud with an optical connection, it could work within a cloud architecture shared across multiple web servers, as jtrotter says.



How were you detecting/measuring your 50-100 simultaneous users? What was the memory and disk usage of your system while this was occurring?

Also, there is no formula here. You should limit your database connections to what is reasonable for your application's needs. Typically, servers with applications using a connection pool shouldn't need more than a few hundred concurrent connections. Small to medium-sized websites may be fine with 100-200. Just see how it goes…



Also make sure you set a proper open_files_limit.
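For reference, both settings live in my.cnf under [mysqld]; the numbers below are just an illustration, not a recommendation for your particular box (and note the OS-level ulimit can cap open_files_limit):

[code]
# /etc/my.cnf (excerpt) - illustrative values, tune to your own load
[mysqld]
max_connections  = 150    # cap on concurrent client connections
open_files_limit = 8192   # must cover table files plus per-connection handles
[/code]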



I found a nice example:



If your average page execution time is 5 seconds, you have 300 users, and they tend to click once per minute, then 25 connections will have you at your saturation point: ((300 * 5) / 60) == 25. Your choices: make your pages faster (e.g. a sub-second page cuts that requirement by 5x or more) or allow more connections. Otherwise, your users will end up waiting longer.
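Applied to the 50-100 simultaneous visitors mentioned above, and assuming (a rough guess) a 2-second page execution time with one click per visitor per minute, that's ((100 * 2) / 60) ≈ 4 connections in use at steady state, so it's the deal-day spike rather than today's traffic that you need to size the connection limit and page speed for.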

[quote name='tbirnseth' timestamp='1366171156' post='160080']

How were you detecting/measuring your 50-100 simultaneous users? What was the memory and disk usage of your system while this was occurring?

[/quote]



We were detecting it with our LiveZilla live help install. It may not be entirely accurate, as it counts visitors with activity within the past 5 minutes. I'm not sure how to check what the memory/disk usage was. Is there a way to do that with cPanel/WHM? It is the site you just installed the custom previewer addon on for us, Tony.



Some other questions regarding servers:



What are the advantages/disadvantages of a Dual Xeon vs. the AMD Opteron? I've always been an Intel guy; any reason to switch?



What are the advantages/disadvantages of SAS vs. SATA? I know SSD would be ideal, but we are currently using around 500GB and there doesn't seem to be an SSD option we can afford.



We host multiple websites on the server, including a few CS-Cart installs as well as some low-traffic WordPress installs and low-traffic vBulletin forums.

I believe LiveZilla places its own load on the server for every active customer, i.e. it has to keep communicating with each client.

If you have a high-traffic site that makes you money, you shouldn't pollute it by hosting other sites on the same server unless you are sure you really have the spare resources (including at your peak times). If not, you're just putting rocks in your pack before your hike.



I'm a big believer in one site per server instance (whether a VPS or a dedicated server) unless you are really equipped to administer a multi-site hosting environment. Just my two cents toward maintaining sanity.



If you have low traffic, buy a cheap shared hosting plan. If you have light to moderate traffic, get a VPS and grow it as you need. If you get above 1,000 unique visitors a day and generate 100 or more orders/day, then you can probably afford a dedicated server, and it will help you during growth and peak times.