Hosting again

Lyle - CosmicPerl.com perl at cosmicperl.com
Mon Oct 29 14:37:00 GMT 2007


Gareth Harper wrote:
> Lyle - CosmicPerl.com wrote:
>
>> On a side note, while we are on the subject... I plan on co-locating 
>> 2 servers with 49pence. I want it set up so that one is a mirror of 
>> the other, and if one falls over it automatically goes to the other. 
>> I'll be running Fedora, but not having done this before I've no idea 
>> how to get this set up, or even what this kind of setup is called. If 
>> someone would care to enlighten me it would be much appreciated :)
>
> The best way to run that is generally with an active-active system.  
> Both systems take traffic and you load balance between them.  If one 
> machine goes down your load balancer notices and stops sending traffic 
> to the machine which is down.  The advantage of this is you don't end 
> up having duplicated hardware sat there doing nothing, and you also 
> get the confidence that your "backup" machine is actually working 
> since it's dealing with live traffic all the time anyway.
>
> You can either get a dedicated hardware load balancer, or use 
> something like mod_proxy under apache which now has a 
> mod_proxy_balancer plugin for exactly this kind of thing.
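
For reference, a minimal mod_proxy_balancer setup along those lines (a 
sketch only, assuming Apache 2.2 with mod_proxy, mod_proxy_http and 
mod_proxy_balancer loaded; the hostnames are placeholders for the two 
web servers):

  <Proxy balancer://webcluster>
      # Two mirrored back ends; "retry" is how many seconds a member
      # stays out of rotation after a failure before being tried again.
      BalancerMember http://web1.example.com:80 retry=30
      BalancerMember http://web2.example.com:80 retry=30
  </Proxy>
  ProxyPass / balancer://webcluster/
  ProxyPassReverse / balancer://webcluster/

If one member stops answering, mod_proxy marks it as being in error and 
sends everything to the other until it recovers.
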
A single point of failure is something I'm really trying to avoid. How 
reliable is a hardware load balancer? I guess it takes up 1U as well, so 
that's extra cost.
>
> Obviously the slight downside to this is you need a third machine to 
> act as a load balancer (again, ideally 2 with a manual failover 
> process if the balancer goes down) but I guess you would need the same 
> even with an automatic failover process as you will need something 
> which chooses to direct the traffic to one machine or another.
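
If automatic failover of the balancer pair itself is wanted, one common 
approach (rather than the manual process suggested above) is a floating 
IP moved between the two balancer boxes by VRRP, e.g. with keepalived. A 
minimal sketch, with placeholder interface and address:

  vrrp_instance LB_VIP {
      state MASTER            # BACKUP on the second balancer
      interface eth0
      virtual_router_id 51
      priority 100            # lower value (e.g. 50) on the backup box
      advert_int 1
      virtual_ipaddress {
          192.0.2.10          # the floating IP that DNS points at
      }
  }

If the master stops sending VRRP adverts, the backup takes over the 
address within a few seconds.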
>
I'm looking at two of Supermicro's twin-node 1U chassis. So technically 
I'll have 4 hardware machines (in 2U), and maybe Xen running some more 
virtual machines... With the problems I've had with my old machine, I 
really want something that will stay up if one machine goes down. The 
latest clients I'm targeting won't put up with downtime :(


Lyle

