Hardware Specifications

From Roaring Penguin
Here's what our database servers for our Hosted CanIt service look like:

===Disk===

We have sixteen 10k RPM SATA drives arranged using Linux software RAID-10 with the "offset" strategy. This is described here:

http://www.ilsistemista.net/index.php/linux-a-unix/35-linux-software-raid-10-layouts-performance-near-far-and-offset-benchmark-analysis.html
 
  
The sixteen disks are arranged conceptually as eight stripes, with each stripe being a RAID-1 pair; this gives us (theoretically) 16x read performance and 8x write performance compared to a single disk. Actual measurements show that the performance improvement is quite close to what the theory says.
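The stripe arithmetic above is simple enough to sketch directly. This is a best-case theoretical model only (the function name is ours, not from any library); real throughput depends on layout, workload, and hardware:

```python
def raid10_theoretical_speedup(n_disks, copies=2):
    """Best-case RAID-10 speedup vs. a single disk.

    Reads can be served by any of the n_disks in parallel; each
    write must land on every mirror copy, so only n_disks/copies
    stripes accept independent writes at once.
    """
    if n_disks % copies != 0:
        raise ValueError("disk count must be a multiple of the copy count")
    return {"read": n_disks, "write": n_disks // copies}

# Sixteen disks arranged as eight RAID-1 pairs, as described above:
print(raid10_theoretical_speedup(16))  # {'read': 16, 'write': 8}
```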
 
  
We considered using SSDs for the database server, but I am still not completely comfortable with their reliability, and for us, at least, spinning disks still work fine. Our hosted service peaks at about 7 million messages/day and the database server is nowhere near I/O-saturated.
 
  
===RAM===

Our database servers have 256GB of RAM each. I recommend putting as much RAM as possible into the system; you want all of the small tables, such as sender rules and user-lookup settings, to wind up cached in RAM for best performance.
 
  
===CPU===

Our database servers have two physical processors:

Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz

Each processor has six cores and each core has hyperthreading, so cat /proc/cpuinfo reports 24 processors. That seems to be plenty of CPU for our needs.
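You can check the logical processor count the same way on any Linux box:

```shell
# Count logical processors as /proc/cpuinfo reports them; on the
# servers above (2 sockets x 6 cores x 2 threads) this prints 24.
grep -c '^processor' /proc/cpuinfo
```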
 
  
The database servers configured as above cost us $10,000 each. We use SuperMicro hardware; brand-name hardware like HP or Dell will probably be a bit more expensive.
 
<div style="float:right; clear:both; margin-right:0.5em">[[Support Wiki | [Home]]]</div>
 
 
[[category:All]][[category:Best Practices]]
 

Revision as of 13:58, 3 July 2014

Hardware provisioning guidelines for 25k / 50k / 100k mailboxes.

Our rule of thumb is that one mailbox accounts for 20-30 messages a day, so at that scale we're looking at 750k - 3M messages/day.

As a hardware rule of thumb, a single server can handle around 150k messages a day. For installations handling around 500k messages a day or less, we can use simple math: under 150k, 1 server; 150k-300k, 2 servers; 300k-500k, 3 servers.

(where "server" means a modern quad-core processor with 8 GB of RAM and sufficient storage, e.g. 1TB or so)
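That sizing rule can be expressed directly; this is just a transcription of the thresholds quoted above (`servers_needed` is a name we made up for illustration):

```python
def servers_needed(messages_per_day):
    """Rough server count for installations up to ~500k messages/day.

    Rule of thumb from above: one "server" (modern quad-core,
    8 GB RAM, ~1 TB storage) handles around 150k messages a day.
    """
    if messages_per_day > 500_000:
        raise ValueError("larger installations need case-by-case sizing")
    if messages_per_day < 150_000:
        return 1
    if messages_per_day <= 300_000:
        return 2
    return 3

print(servers_needed(200_000))  # -> 2
```

Note that the 25k-100k mailbox range above works out to 750k - 3M messages/day, which falls into the "larger installation" case discussed next.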

However, larger installations benefit from a number of optimizations that our development team is aware of. In these cases the details become more significant, and it's more important to tune our recommendations to the customer's specific needs (see Hardware Specifications Hosted).

For example: what kind of disks to buy; how to configure them (e.g. RAID-10 using Linux software RAID, since most hardware RAID controllers perform poorly compared to Linux software RAID); splitting certain partitions across certain disk pairs to maximize disk I/O throughput (our hosted systems use three pairs of disks, I believe, for this reason); whether or not to use PgBouncer, which pools database connections; and a number of other optimizations our development team is aware of.
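On the PgBouncer point, a minimal illustrative pgbouncer.ini might look like the following. The database name, addresses, ports, and pool size here are made-up placeholders, not our recommended values:

```ini
; Hypothetical example only -- tune every value for your own workload.
[databases]
canit = host=127.0.0.1 port=5432 dbname=canit

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
```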


So for smaller installations (under 500k messages/day) I'm comfortable recommending "general" hardware (quad-core processor, 8GB RAM, 1TB hard disk, up to three servers), but for larger installations I want the development team to provide more specific recommendations, since the details can make a big difference.

Otherwise a client could end up wasting money by buying too many servers; or two servers acting as database servers might need a different hard drive configuration than the others; or the database servers might need 32GB of RAM while the others could get away with 16GB; and so on.

So hopefully the above gives a good ballpark for understanding the basics, while also making clear that for a large installation it's best to let our development team come up with specifics to meet the customer's needs, so they don't over-provision, under-provision, or misconfigure.