Re: IPVS/LVS load balancing Squid servers, anyone did it?

On 27/08/2020 at 14:14, Eliezer Croitor wrote:
> Hey Emmanuel,
>
> I was just trying to understand if and how nftables can do load balancing, and what exactly makes IPVS that fast.
nftlb is simply a rule manager in front of nftables. And IPVS is not
faster than nftables; it is the other way around.
Everything you can do with IPVS is normally doable with nftables/nftlb,
and more.
> It seems that IPVS in DR mode actually converts the Linux box into a switch.
Technically it is purely MAC (hardware) address mangling, not full switching.
> I am still unsure if MAC address replacement is better than FWMARK for DR.
FWMARK is just a way of grouping packet selection for MAC address
replacement; it does not replace it.
And with the expressive capability of nftables, this FWMARK dance is no
longer necessary.
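
Just as an illustration, here is a minimal, untested sketch of what a
round-robin load-balancing ruleset can look like in plain nftables (a
NAT-style example, not DR; the VIP 10.10.10.10 and the backends
10.10.10.11 and 10.10.10.12 on port 3128 are only placeholders borrowed
from Bruce's config below), loaded with "nft -f":

    table ip lb {
        chain prerouting {
            type nat hook prerouting priority -100; policy accept;
            # Spread connections across the two proxies round-robin.
            ip daddr 10.10.10.10 tcp dport 3128 dnat to numgen inc mod 2 map { 0 : 10.10.10.11, 1 : 10.10.10.12 }
        }
    }

nftlb essentially builds and manages this kind of ruleset for you.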

Emmanuel.
>
> Thanks,
> Eliezer
>
> ----
> Eliezer Croitoru
> Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1ltd@xxxxxxxxx
>
> -----Original Message-----
> From: squid-users <squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of FUSTE Emmanuel
> Sent: Thursday, August 27, 2020 2:23 PM
> To: squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re:  IPVS/LVS load balancing Squid servers, anyone did it?
>
> Hi,
>
> To complement this, on modern kernels take the opportunity to also try
> nftlb instead of LVS.
> https://www.zevenet.com/knowledge-base/nftlb/what-is-nftlb/
>
> Emmanuel.
>
> On 27/08/2020 at 06:35, Bruce Rosenberg wrote:
>> Hi Eliezer,
>>
>> We are running a couple of Squid proxies (the real servers) behind a
>> pair of LVS servers with keepalived, and it works flawlessly.
>> The two Squid proxies are active/active and the LVS servers are
>> active/passive.
>> If a Squid proxy dies, the remaining proxy takes all the traffic.
>> If the active LVS server dies, keepalived running on the backup LVS
>> server detects this via VRRP, moves the VIP to itself and takes all the
>> traffic. The only difference between the two LVS servers is that one has
>> a higher priority, so it gets the VIP first.
>> I have included some sanitised snippets from a keepalived.conf file
>> that should help you.
>> You could easily scale this out if you need more than 2 Squid proxies.
>>
>> The config I provided is for LVS/DR (Direct Routing) mode.
>> This method rewrites the destination MAC address of forwarded packets to
>> that of one of the real servers and is the most scalable way to run LVS.
>> It does require that the LVS and real servers be on the same L2 network.
>> If that is not possible, then consider LVS/TUN mode or LVS/NAT mode.
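>>
>> As a rough idea only: the hand-typed ipvsadm equivalent of the DR setup
>> in the keepalived.conf further down would be something like this ("-g"
>> selects DR/gatewaying mode):
>>
>>      ipvsadm -A -t 10.10.10.10:3128 -s wrr
>>      ipvsadm -a -t 10.10.10.10:3128 -r 10.10.10.11:3128 -g -w 1
>>      ipvsadm -a -t 10.10.10.10:3128 -r 10.10.10.12:3128 -g -w 1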
>>
>> As LVS/DR rewrites the MAC address, it requires each real server to
>> have the VIP address plumbed on an interface, and it also requires the
>> real servers to ignore ARP requests for the VIP address, as the only
>> device that should respond to ARP requests for the VIP is the active
>> LVS server.
>> We do this by configuring the VIP on the loopback interface of each
>> real server (as sketched below), but there are other methods as well,
>> such as dropping the ARP responses using arptables, iptables or
>> firewalld.
>> I think back in the kernel 2.4 and 2.6 days people used the noarp
>> kernel module, which could be configured to ignore ARP requests for a
>> particular IP address, but you don't really need this anymore.
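>>
>> As a rough sketch, on each real server (Squid proxy) that usually boils
>> down to something like this, using the VIP from the config below:
>>
>>      # Plumb the VIP on loopback so Squid can accept traffic sent to it.
>>      ip addr add 10.10.10.10/32 dev lo
>>      # Stop the real server from answering/announcing ARP for the VIP.
>>      sysctl -w net.ipv4.conf.all.arp_ignore=1
>>      sysctl -w net.ipv4.conf.all.arp_announce=2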
>>
>> More info on the loopback ARP blocking method -
>> https://www.loadbalancer.org/blog/layer-4-direct-routing-lvs-dr-and-layer-4-tun-lvs-tun-in-aws/
>> More info on firewall-type ARP blocking methods -
>> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-direct-vsa
>> More info about LVS/DR - http://kb.linuxvirtualserver.org/wiki/LVS/DR
>>
>> If you are using an RPM-based distro, then to set up the LVS servers you
>> only need the ipvsadm and keepalived packages.
>> Install Squid on the real servers, configure the VIP on each, and disable
>> ARP for the VIP as described above.
>> Then build the keepalived.conf on both LVS servers and restart keepalived.
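>>
>> Roughly, on the LVS servers that amounts to something like:
>>
>>      yum install ipvsadm keepalived
>>      systemctl enable --now keepalived
>>      ipvsadm -L -n        # verify the virtual/real server table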
>>
>> The priority setting in the vrrp_instance section determines the primary
>> VRRP node (LVS server) for that virtual router instance.
>> The secondary LVS server needs a lower priority than the primary.
>> You can configure one as MASTER and the other as BACKUP, but our guys
>> make them both BACKUP and let the priority sort out the election of the
>> primary.
>> I think this might be to avoid problems when bringing up a BACKUP
>> without a MASTER, but I can't confirm that.
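>>
>> In other words, on the secondary LVS server the vrrp_instance block
>> below would be identical apart from a lower value, for example:
>>
>>      priority 100     # secondary; the primary uses 150 as shown below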
>>
>>
>> Good luck.
>>
>>
>> $ cat /etc/keepalived/keepalived.conf
>>
>> global_defs {
>>
>>      notification_email {
>>          # rootmail@xxxxxxxxxxx
>>      }
>>      notification_email_from keepalive-daemon@xxxxxxxxxxxxxxxxx
>>      smtp_server 10.1.2.3        # mail.example.com
>>      smtp_connect_timeout 30
>>      lvs_id lvs01.example.com    # Name to mention in email.
>> }
>>
>> vrrp_instance LVS_example {
>>
>>      state BACKUP
>>      priority 150
>>      interface eth0
>>      lvs_sync_daemon_interface eth0
>>      virtual_router_id 5
>>      preempt_delay 20
>>
>>      virtual_ipaddress_excluded {
>>
>>          10.10.10.10   # Squid proxy
>>      }
>>
>>      notify_master "some command to log or send an alert"
>>      notify_backup "some command to log or send an alert"
>>      notify_fault "some command to log or send an alert"
>> }
>>
>>
>> # SQUID Proxy
>> virtual_server 10.10.10.10 3128 {
>>
>>      delay_loop 5
>>      lb_algo wrr
>>      lb_kind DR
>>      protocol TCP
>>
>>      real_server 10.10.10.11 3128 {   # proxy01.example.com
>>          weight 1
>>          inhibit_on_failure 1
>>          TCP_CHECK {
>>              connect_port 3128
>>              connect_timeout 5
>>          }
>>      }
>>
>>      real_server 10.10.10.12 3128 {   # proxy02.example.com
>>          weight 1
>>          inhibit_on_failure 1
>>          TCP_CHECK {
>>              connect_port 3128
>>              connect_timeout 5
>>          }
>>      }
>> }
>>
>>
>> On Thu, Aug 27, 2020 at 8:24 AM Eliezer Croitor <ngtech1ltd@xxxxxxxxx> wrote:
>>
>>      Hey All,
>>
>>      I have been reading about load balancing and tried to find an
>>      up-to-date example or tutorial specific to Squid, with no luck.
>>
>>      I have seen:
>>      http://kb.linuxvirtualserver.org/wiki/Building_Web_Cache_Cluster_using_LVS
>>
>>      That makes sense and is also similar, or more or less identical,
>>      to WCCP with GRE.
>>
>>      Does anyone know of a working Squid setup with IPVS/LVS?
>>
>>      Thanks,
>>
>>      Eliezer
>>
>>      ----
>>
>>      Eliezer Croitoru
>>
>>      Tech Support
>>
>>      Mobile: +972-5-28704261
>>
>>      Email: ngtech1ltd@xxxxxxxxx
>>
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



