
HAProxy and PROXY Protocol

Important

This is the recommended mode of clustering for Web Safety. As we frequently need to test new versions of the application, we use exactly this mode in our test lab and regularly add and remove nodes in the web filtering cluster without any client reconfiguration ever being needed.

This mode relies on an external haproxy load balancer placed in front of several Web Safety nodes. Browsers connect to the haproxy node, which in turn distributes TCP connections to the Web Safety nodes using a round robin scheme.

First, create a new type A record for proxy.example.lan with the IP address 192.168.178.10. This will be the haproxy frontend of our cluster.

haproxy frontend

Create a new type A record for node11.example.lan with the IP address 192.168.178.11. This will be our first Web Safety appliance node.

web safety node 1

Create another type A record for node12.example.lan with the IP address 192.168.178.12. This will be our second Web Safety appliance node.

web safety node 2
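
If your DNS zone is served by BIND rather than Active Directory DNS, the three records above could look roughly like the following zone file sketch (the example.lan zone and host names are the ones assumed in this article):

    ; fragment of the example.lan zone file (sketch only)
    proxy    IN  A  192.168.178.10   ; haproxy frontend
    node11   IN  A  192.168.178.11   ; first Web Safety node
    node12   IN  A  192.168.178.12   ; second Web Safety node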

Configure haproxy on proxy.example.lan using the following configuration file /etc/haproxy/haproxy.cfg. Note how each server is marked with the send-proxy directive; this is needed to relay client IP addresses to the Web Safety appliances using the PROXY protocol.

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend squid
    bind 192.168.178.10:3128
    default_backend squid_pool

backend squid_pool
    balance roundrobin
    mode tcp
    server squid1 192.168.178.11:3128 check send-proxy
    server squid2 192.168.178.12:3128 check send-proxy
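
Before restarting haproxy it may be worth validating the configuration and enabling the service on boot; on a systemd based distribution a typical sequence could look like this (paths as above):

    # check the configuration file for syntax errors
    haproxy -c -f /etc/haproxy/haproxy.cfg

    # enable haproxy on boot and (re)start it now
    systemctl enable --now haproxy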

Deploy two Web Safety virtual appliances and assign the IP address 192.168.178.11 to the first node and 192.168.178.12 to the second node.

Then enable support for the PROXY protocol in Admin UI / Squid / Settings / Network by ticking the Require presence of PROXY protocol header checkbox and providing the haproxy IP address in the address field as indicated on the following screenshot. Click Save and Restart.

Enable PROXY protocol on Squid
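
Under the hood this setting maps to the PROXY protocol directives of Squid. If you want to verify the generated configuration, the relevant part of squid.conf should look roughly like the sketch below (Squid 3.5 or later; the ACL name is an assumption, only the haproxy address 192.168.178.10 comes from this article):

    # accept connections only when they carry a PROXY protocol header
    http_port 3128 require-proxy-header

    # trust the PROXY header only when it comes from the haproxy frontend
    acl haproxy_frontend src 192.168.178.10
    proxy_protocol_access allow haproxy_frontend
    proxy_protocol_access deny all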

If Active Directory integration is required, follow the usual Active Directory configuration steps described in previous articles for each virtual appliance, but when configuring the Kerberos authenticator provide the SPN based on proxy.example.lan and check the Use GSS_C_NO_NAME checkbox. This lets the node process Kerberos authentication requests from browsers based on the credentials contained in the request rather than on the SPN (the SPN still needs to be configured though).

Kerberos No Name
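
For reference, with GSS_C_NO_NAME enabled the Kerberos helper line that Squid ends up using is expected to look roughly like the following sketch (the helper path and child count are distribution dependent and are assumptions here):

    # -s GSS_C_NO_NAME: accept tickets for any service principal present in the keytab
    auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s GSS_C_NO_NAME
    auth_param negotiate children 10
    auth_param negotiate keep_alive on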

Afterwards, restart haproxy with systemctl restart haproxy. The log in /var/log/haproxy.log should indicate that both proxy nodes are working.
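
As a quick end-to-end check (the test URL below is just an example), you can send a few requests through the haproxy frontend and watch them being spread over the two backend nodes in the haproxy log:

    # send a couple of test requests through the load balanced proxy
    curl -x http://proxy.example.lan:3128 -s -o /dev/null http://example.com
    curl -x http://proxy.example.lan:3128 -s -o /dev/null http://example.com

    # watch haproxy distribute the connections between squid1 and squid2
    tail -f /var/log/haproxy.log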