A Quick HAProxy Load Balancer for VMware Cloud Foundation Operations

When I need to stand up a quick load balancer for VMware Cloud Foundation Operations in the lab, I don't want to rebuild the same HAProxy configuration by hand every time. I just want a small Debian or Ubuntu VM, a VIP on the right VLAN, a backend pool of appliance nodes, and a fast path to getting the service online.

This helper script is what I use for that job. It prompts for the virtual IP, netmask, redirect FQDN, and backend node IPs, then wires up the network interface, installs HAProxy, generates a fresh configuration, and restarts the required services. It is intentionally opinionated, which is exactly why it's useful in a home lab.

Why I Keep This Around

For repeatable lab work, speed matters. I usually already know the VLAN, the VIP, and the Operations node addresses, so the value here is not flexibility. The value is avoiding manual edits under /etc/network/interfaces and /etc/haproxy/haproxy.cfg when I only need a simple, working load balancer in front of a VCF Operations cluster.

The script also bakes in a health check against the Operations API endpoint:

/suite-api/api/deployment/node/status?services=api&services=adminui&services=ui

That means HAProxy is not just checking whether port 443 accepts a TCP connection. It is looking for an ONLINE response from the platform services that matter for the UI and API path.
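If you want to see what that check observes by hand, here is a quick probe sketch. The node IP is a placeholder and the sample response is illustrative, not captured from a real appliance; the only contract the health check relies on is that a healthy node's response body contains the string ONLINE.

```shell
# Hypothetical manual probe of one Operations node (replace 192.0.2.21):
#   curl -ks "https://192.0.2.21/suite-api/api/deployment/node/status?services=api&services=adminui&services=ui"
# HAProxy marks the node healthy when the response body contains ONLINE.
# Illustrative sample response, matched the same way the tcp-check does:
sample_response='{"status":"ONLINE","services":[{"name":"api","health":"ONLINE"}]}'
if grep -q 'ONLINE' <<<"$sample_response"; then
    echo "node reports ONLINE"
fi
```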

What the Script Automates

At a high level, the script does the following:

  1. Validates the VIP address, netmask, and redirect FQDN.
  2. Prompts for the number of backend nodes and their IP addresses.
  3. Appends a VLAN interface definition for eth1.1110 to /etc/network/interfaces.
  4. Restarts networking.service.
  5. Installs HAProxy with apt-get.
  6. Backs up the default HAProxy configuration.
  7. Writes a new haproxy.cfg with:
    • an HTTP listener on port 80 that redirects to HTTPS
    • a TCP frontend on port 443
    • a backend pool using source hashing and consistent hashing
    • API-aware health checks against the Operations node status endpoint
    • a basic stats page on port 9090
  8. Restarts the HAProxy service.

For a disposable or fast-moving lab, that is usually enough.

Traffic Flow at a Glance

The overall request path is simple: clients hit the VIP, HAProxy redirects plain HTTP to HTTPS, then forwards secure traffic to whichever VCF Operations node is healthy based on the configured checks.

flowchart LR
    U[User or Admin Browser] --> H80[HAProxy VIP<br/>Port 80]
    U --> H443[HAProxy VIP<br/>Port 443]
    H80 --> R[HTTP Redirect<br/>to HTTPS FQDN]
    H443 --> F[TCP Frontend]
    F --> B[TCP Backend<br/>balance source<br/>hash-type consistent]
    B --> N1[Operations Node 1<br/>Port 443]
    B --> N2[Operations Node 2<br/>Port 443]
    B --> N3[Operations Node N<br/>Port 443]
    B -. Health check .-> API["/suite-api/api/deployment/node/status"]
    API -. ONLINE .-> B

Assumptions You Should Verify First

Lab Assumptions

This script is built for my home lab workflow, not as a general-purpose installer.

It assumes a Debian or Ubuntu style environment with apt-get, systemctl, and an /etc/network/interfaces based network configuration. It also assumes you want a VLAN subinterface named eth1.1110, an MTU of 8900, and a full replacement of /etc/haproxy/haproxy.cfg.

On current Debian releases, /etc/network/interfaces can still make sense if you are using ifupdown. On current Ubuntu releases, especially newer server builds, netplan is often the default. If your VM uses netplan, you should translate the interface portion of this script before running it rather than appending to /etc/network/interfaces and expecting it to take effect.
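For reference, a hypothetical netplan equivalent of the interface stanza this script appends. This is an untested sketch: the file name and address are placeholders (the address comes from the documentation range), and the parent device name and VLAN ID should match your environment.

```yaml
# /etc/netplan/60-vcfops-vip.yaml (hypothetical file name)
network:
  version: 2
  vlans:
    eth1.1110:
      id: 1110
      link: eth1
      mtu: 8900
      addresses:
        - 192.0.2.10/24   # placeholder VIP/prefix; substitute your own
```

Review with `netplan try` before committing with `netplan apply`.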

There are a few more rough edges worth calling out:

  • The netmask validation is intentionally narrow and only accepts masks that match the 255.255.255.X pattern.
  • The script appends to /etc/network/interfaces, so rerunning it can duplicate the interface stanza.
  • The HAProxy stats listener is protected only by the hard-coded credentials admin:admin, which is fine for an isolated lab and a bad idea anywhere else.
  • The script replaces the HAProxy configuration file, so it is best run on a VM dedicated to this role.
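If the duplicate-stanza issue bothers you, a small idempotency guard is enough: only append the VLAN stanza when it is not already present. The sketch below demonstrates the pattern against a temp file rather than the real /etc/network/interfaces so it can run anywhere.

```shell
# Demonstrate an append-once guard using a temp file as a stand-in
# for /etc/network/interfaces.
interfaces_file=$(mktemp)

append_stanza() {
    # Only append if the stanza is not already present.
    if ! grep -q '^auto eth1.1110$' "$interfaces_file"; then
        cat >> "$interfaces_file" <<'EOF'

# VIP
auto eth1.1110
iface eth1.1110 inet static
EOF
    fi
}

append_stanza
append_stanza   # second run is a no-op, so no duplicate stanza

stanza_count=$(grep -c '^auto eth1.1110$' "$interfaces_file")
echo "stanza count: $stanza_count"
rm -f "$interfaces_file"
```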

The Script

#!/bin/bash

# Function to validate IPv4 address.
function valid_ip() {
    local ip=$1
    local stat=1

    if [[ $ip =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
        OIFS=$IFS
        IFS='.'
        ip=($ip)
        IFS=$OIFS
        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

# Function to validate IPv4 netmask.
function valid_netmask() {
    local netmask=$1
    local stat=1

    if [[ $netmask =~ ^(255\.){3}(0|128|192|224|240|248|252|254|255)$ ]]; then
        stat=0
    fi
    return $stat
}

# Function to validate FQDN.
function valid_fqdn() {
    local fqdn=$1
    local stat=1

    if [[ $fqdn =~ ^([a-zA-Z0-9]+(-[a-zA-Z0-9]+)*\.)+[a-zA-Z]{2,}$ ]]; then
        stat=0
    fi
    return $stat
}

# Ask for the network interface address and validate it.
while true; do
    read -r -p "Enter the network interface address: " interface_address
    if valid_ip "$interface_address"; then
        break
    else
        echo "Invalid IPv4 address. Please try again."
    fi
done

# Ask for the network interface netmask and validate it.
while true; do
    read -r -p "Enter the network interface netmask: " interface_netmask
    if valid_netmask "$interface_netmask"; then
        break
    else
        echo "Invalid IPv4 netmask. Please try again."
    fi
done

# Ask for the FQDN for redirect and validate it.
while true; do
    read -r -p "Enter the FQDN for redirect: " fqdn
    if valid_fqdn "$fqdn"; then
        break
    else
        echo "Invalid FQDN. Please try again."
    fi
done

# Ask for the number of nodes.
while true; do
    read -r -p "Enter the number of nodes: " num_nodes
    if [[ "$num_nodes" =~ ^[1-9][0-9]*$ ]]; then
        break
    else
        echo "Invalid node count. Enter an integer greater than 0."
    fi
done

# Ask for the IPs of the nodes.
declare -a node_ips
for ((i=1; i<=num_nodes; i++)); do
    while true; do
        read -r -p "Enter IP address for node $i: " ip
        if valid_ip "$ip"; then
            node_ips+=("$ip")
            break
        else
            echo "Invalid IPv4 address. Please try again."
        fi
    done
done

# Append network interface configuration.
echo "Appending network interface configuration..."
cat <<EOL >> /etc/network/interfaces

# VIP
auto eth1.1110
iface eth1.1110 inet static
address $interface_address
netmask $interface_netmask
mtu 8900
EOL

# Restart network service.
echo "WARNING: Restarting networking can disconnect this session if you are connected remotely over SSH."
read -r -p "Confirm you have console access and want to restart networking now? [y/N]: " restart_networking
if [[ "$restart_networking" =~ ^[Yy]$ ]]; then
    echo "Restarting network service..."
    systemctl restart networking.service
else
    echo "Aborting to avoid dropping a remote session. Re-run from console access or restart networking manually when ready."
    exit 1
fi

# Install HAProxy.
echo "Updating package indexes and installing HAProxy..."
apt-get update
apt-get install -y haproxy

# Backup default haproxy.cfg.
echo "Backing up default haproxy.cfg..."
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

# Generate the HAProxy configuration file.
echo "Generating HAProxy configuration file..."
cat <<EOL > /etc/haproxy/haproxy.cfg
global
        log 127.0.0.1 local2
        chroot /var/lib/haproxy
        pidfile /var/run/haproxy.pid
        maxconn 2000
        user haproxy
        group haproxy
        daemon
        stats socket /var/lib/haproxy/stats
        ssl-server-verify none

defaults
        log global
        mode http
        option httplog
        option dontlognull
        timeout connect 5000ms
        timeout client 50000ms
        timeout server 50000ms

listen stats
        bind :9090
        mode http
        stats enable
        stats auth admin:admin
        stats uri /
        stats realm Haproxy\ Statistics

frontend vcfops_unsecured_redirect
        bind *:80
        http-request redirect prefix https://$fqdn code 301

frontend vcfops_frontend_secure
        bind $interface_address:443
        mode tcp
        option tcplog
        default_backend vcfops_backend_secure

backend vcfops_backend_secure
        mode tcp
        balance source
        hash-type consistent
        option tcp-check
        tcp-check connect port 443 ssl
        tcp-check send GET\ /suite-api/api/deployment/node/status?services=api&services=adminui&services=ui\ HTTP/1.0\r\n\r\n
        tcp-check expect rstring ONLINE
EOL

# Add the server nodes to the backend configuration.
echo "Adding server nodes to the backend configuration..."
for ((i=0; i<num_nodes; i++)); do
    echo "        server node$((i+1)) ${node_ips[$i]}:443 check inter 15s check-ssl maxconn 140 fall 3 rise 3" >> /etc/haproxy/haproxy.cfg
done

echo "HAProxy configuration file generated at /etc/haproxy/haproxy.cfg"

# Validate the generated configuration, then restart the HAProxy service.
echo "Validating configuration and restarting HAProxy service..."
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl restart haproxy

How I Use It

My normal pattern is to deploy a small Debian or Ubuntu virtual machine, dedicated to this one job, from a handy Packer-generated template, copy the script over, make it executable, and run it through sudo.

chmod +x ./vcf-ops-haproxy.sh
sudo ./vcf-ops-haproxy.sh

During execution I provide:

  • The VIP address that clients should use to reach Operations
  • The subnet mask for that VIP
  • The FQDN that should receive the HTTP to HTTPS redirect
  • The number of Operations nodes behind the load balancer
  • The IP address of each backend node

Because the backend is configured in TCP mode, HAProxy is passing through the encrypted session rather than terminating TLS locally. For this use case that keeps the load balancer simple and lets the Operations appliances continue to own the certificate and application behavior.

What I Verify

After the script completes, I usually check four things:

  1. The VLAN interface is up and holding the expected VIP.
  2. HAProxy is listening on ports 80, 443, and 9090.
  3. The stats page shows all expected backend nodes.
  4. A browser request to the Operations FQDN lands on the service over HTTPS.

These commands are usually enough:

ip addr show
systemctl status haproxy
ss -tulpn | grep -E '(:80|:443|:9090)'
curl -k https://<fqdn>/suite-api/api/deployment/node/status

This is one of those tiny utility scripts that earns its keep by saving ten or fifteen minutes of repetitive work every time I need a fresh load balancer in the lab.

It's not elegant. It's not perfect. It's not production ready and it does not try to be.

It's just fast, predictable, and close enough to my environment that I can get a load-balanced VCF Operations deployment online without stopping to think about HAProxy syntax.

If your lab looks similar, this may save you some time too. Just sanity-check the network model first, especially on newer Ubuntu systems where netplan is the default.

Disclaimer

This is not an official VMware by Broadcom document. This is a personal blog post.

The information is provided as-is with no warranties and confers no rights.

Please refer to the official documentation for the most up-to-date information.