Load balancer setup based on HAProxy¶
HAProxy is a reliable open source tool for implementing reverse proxy servers and load balancers. Many well-known load balancers, AWS LB for example, are based on modified HAProxy source code. Let's see how to set up a custom load balancer using HAProxy.
Prerequisites¶
To deploy a load balancer, prepare the following:
- a couple of servers with WCS installed and configured (cloud or hardware)
- a dedicated server to be the entry point for clients' incoming connections
- a domain name and SSL certificate
If the WCS servers are supposed to be part of a CDN, the CDN should be set up first. For example, if the goal is to balance publishers between a number of Origin servers, or subscribers between a number of Edge servers, all those instances should be configured before deploying the load balancer.
WCS servers setup¶
1. Incoming connection ports¶
Open all the necessary ports for incoming connections on every WCS server (if this is not already done). Look at the minimal port setup example for an AWS EC2 instance.
Note that TCP port 9707 should be added. This port will be used by HAProxy to check the current server state.
Media traffic ports (30000-33000 in the example above) should be available from outside networks if the server is behind a NAT, because HAProxy can proxy Websocket connections only, not WebRTC.
2. WCS setup¶
Add the following parameters to the flashphoner.properties file to use real client IP addresses in session identifiers
If the server load is supposed to be balanced depending on channel bandwidth, add the following setting too
Then restart WCS
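For example, if WCS is installed as the webcallserver systemd service (the service name here is an assumption; adjust it to your installation):
sudo systemctl restart webcallserver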
3. HAProxy agent setup¶
3.1. Install all the necessary dependencies to the server¶
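The agent script below uses jq, bc and curl, and is served via xinetd. On a CentOS-like system the packages may be installed, for example, as follows (use your distribution's package manager if it differs):
sudo yum install -y epel-release
sudo yum install -y jq bc curl xinetd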
3.2. Copy the necessary scripts to the server¶
Copy the scripts haproxy-agent-check.sh and haproxy-agent-check-launch.sh to the /usr/local/bin folder and allow execution.
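For example, assuming the scripts were saved to the current directory:
sudo cp haproxy-agent-check.sh haproxy-agent-check-launch.sh /usr/local/bin/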
haproxy-agent-check.sh
#!/bin/bash
CPU_MAX_LOAD=90
MAX_PUBLISHERS=100
MAX_SUBSCRIBERS=100
MAX_HLS_STREAMS=100
MAX_BANDWIDTH_IN=100
MAX_BANDWIDTH_OUT=100
function isTreshold_Cpu() {
local load=$(uptime | grep -E -o 'load average[s:][: ].*' | sed 's/,//g' | cut -d' ' -f3-5)
local cpus=$(grep processor /proc/cpuinfo | wc -l)
local l5util=0
while read -r l1 l5 l15; do {
l5util=$(echo "pct=$l5/$cpus*100; if(pct<1) print 0; pct" | bc -l | cut -d"." -f1);
if [[ $l5util -lt $CPU_MAX_LOAD ]]; then
true; return
else
false; return
fi
}; done < <(echo $load)
}
function isTreshold_Publishers() {
local statsJson=$1
local webrtcPublishers=$(echo $statsJson | jq '.streams_stats.streams_webrtc_in' | bc -l)
local rtmpPublishers=$(echo $statsJson | jq '.streams_stats.streams_rtmp_in' | bc -l)
local rtspStreamsIn=$(echo $statsJson | jq '.streams_stats.streams_rtsp_in' | bc -l)
local rtspPublishers=$(echo $statsJson | jq '.streams_stats.streams_rtsp_push_in' | bc -l)
local publishers=$(($webrtcPublishers + $rtmpPublishers + $rtspStreamsIn + $rtspPublishers))
if [[ $publishers -lt $MAX_PUBLISHERS ]]; then
true; return
else
false; return
fi
}
function isTreshold_Subscribers() {
local statsJson=$1
local webrtcSubscribers=$(echo $statsJson | jq '.streams_stats.streams_webrtc_out' | bc -l)
local rtmpSubscribers=$(echo $statsJson | jq '.streams_stats.streams_rtmp_out' | bc -l)
local rtmpRepublishers=$(echo $statsJson | jq '.streams_stats.streams_rtmp_client_out' | bc -l)
local rtspSubscribers=$(echo $statsJson | jq '.streams_stats.streams_rtsp_out' | bc -l)
local websocketSubscribers=$(echo $statsJson | jq '.streams_stats.streams_websocket_out' | bc -l)
local subscribers=$(($webrtcSubscribers + $rtmpSubscribers + $rtmpRepublishers + $rtspSubscribers + $websocketSubscribers))
if [[ $subscribers -lt $MAX_SUBSCRIBERS ]]; then
true; return
else
false; return
fi
}
function isTreshold_HlsStreams() {
local statsJson=$1
local hlsStreams=$(echo $statsJson | jq '.streams_stats.streams_hls' | bc -l)
if [[ $hlsStreams -lt $MAX_HLS_STREAMS ]]; then
true; return
else
false; return
fi
}
function isTreshold_BandwidthIn() {
local statsJson=$1
local bandwidthIn=$(echo $statsJson | jq '.network_stats.global_bandwidth_in' | bc -l)
local comparison=$(echo "$bandwidthIn < $MAX_BANDWIDTH_IN" | bc -l)
if [[ $comparison -ne 0 ]]; then
true; return
else
false; return
fi
}
function isTreshold_BandwidthOut() {
local statsJson=$1
local bandwidthOut=$(echo $statsJson | jq '.network_stats.global_bandwidth_out' | bc -l)
local comparison=$(echo "$bandwidthOut < $MAX_BANDWIDTH_OUT" | bc -l)
if [[ $comparison -ne 0 ]]; then
true; return
else
false; return
fi
}
function usage() {
echo "Usage: $(basename $0) [OPTIONS]"
echo -e " cpu [<treshold>]\t\tcheck CPU load (default 90 %)"
echo -e " publishers [<treshold>]\tcheck publishers count (default 100)"
echo -e " subscribers [<treshold>]\tcheck subscribers count (default 100)"
echo -e " hls [<treshold>]\t\tcheck HLS streams count (default 100)"
echo -e " band-in [<treshold>]\t\tcheck incoming channel bandwidth (default 100 Mbps)"
echo -e " band-out [<treshold>]\t\tcheck outgoing channel bandwidth (default 100 Mbps)"
echo ""
echo -e "Example: $(basename $0) cpu 90 publishers 100 subscribers 100 hls 100 band-in 100 band-out 100"
exit 0
}
function main() {
local checklist=()
local statsJson=""
local check=""
if [[ $# -eq 0 ]]; then
checklist=(
'Cpu'
'Publishers'
'Subscribers'
'HlsStreams'
'BandwidthIn'
'BandwidthOut'
)
else
while [[ $# -gt 0 ]]; do
case $1 in
cpu)
checklist+=('Cpu')
if [ -z "${2//[0-9]}" ]; then
CPU_MAX_LOAD=$2
shift
fi
shift
;;
publishers)
checklist+=('Publishers')
if [ -z "${2//[0-9]}" ]; then
MAX_PUBLISHERS=$2
shift
fi
shift
;;
subscribers)
checklist+=('Subscribers')
if [ -z "${2//[0-9]}" ]; then
MAX_SUBSCRIBERS=$2
shift
fi
shift
;;
hls)
checklist+=('HlsStreams')
if [ -z "${2//[0-9]}" ]; then
MAX_HLS_STREAMS=$2
shift
fi
shift
;;
band-in)
checklist+=('BandwidthIn')
if [ -z "${2//[0-9]}" ]; then
MAX_BANDWIDTH_IN=$2
shift
fi
shift
;;
band-out)
checklist+=('BandwidthOut')
if [ -z "${2//[0-9]}" ]; then
MAX_BANDWIDTH_OUT=$2
shift
fi
shift
;;
help|*)
usage
;;
esac
done
fi
if [[ -z "${checklist[@]}" ]]; then
usage
return 1
fi
statsJson=$(curl -s 'http://localhost:8081/?action=stat&format=json')
if [[ -z "$statsJson" ]]; then
echo "down"
return 1
fi
for check in ${checklist[@]}; do
if ! isTreshold_$check $statsJson; then
echo "down"
return 1
fi
done
echo "up 100%"
return 0
}
main "$@"
exit $?
The haproxy-agent-check.sh script is used to check the server state according to system information and WCS statistics. If any of the thresholds passed to the script is reached, the script returns the down state. HAProxy, in its turn, will not dispatch new connections to the server until the agent script returns up.
The following thresholds are supported:
- cpu - maximum CPU load average in percent, 90 by default
- publishers - maximum publishers count per server, including WebRTC, RTMP and RTSP streams, 100 by default
- subscribers - maximum subscribers count per server, including WebRTC, RTMP and RTSP players, 100 by default
- hls - maximum HLS streams count per server, 100 by default
- band-in - maximum incoming channel bandwidth occupied, 100 Mbps by default
- band-out - maximum outgoing channel bandwidth occupied, 100 Mbps by default
For example, to check if CPU load average is below 70%, the script should be launched as follows:
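/usr/local/bin/haproxy-agent-check.sh cpu 70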
3.3. Add the agent port to the server setup¶
Add the following string to the /etc/services file.
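Based on the service name and port used in the xinetd configuration below, the line should look like this:
haproxy-agent-check 9707/tcp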
3.4. Configure xinetd¶
Add the file haproxy-agent-check with the following content to the /etc/xinetd.d folder
# default: on
# description: haproxy-agent-check
service haproxy-agent-check
{
disable = no
flags = REUSE
socket_type = stream
port = 9707
wait = no
user = nobody
server = /usr/local/bin/haproxy-agent-check-launch.sh
log_on_failure += USERID
only_from = 172.31.42.154 127.0.0.1
per_source = UNLIMITED
}
The helper script haproxy-agent-check-launch.sh is used because xinetd does not support any command line keys in the server parameter.
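A minimal sketch of haproxy-agent-check-launch.sh, assuming the default thresholds should be checked (adjust the keys and values to your needs):
#!/bin/bash
# xinetd cannot pass command line keys via the "server" parameter,
# so the thresholds are set here instead
/usr/local/bin/haproxy-agent-check.sh cpu 90 publishers 100 subscribers 100 hls 100 band-in 100 band-out 100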
The only_from parameter allows connections to port 9707 only from the load balancer server where HAProxy will be installed, and from localhost for testing purposes.
3.5. Allow haproxy-agent-check execution¶
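For example, make the copied scripts executable (assuming they reside in /usr/local/bin as above):
sudo chmod +x /usr/local/bin/haproxy-agent-check.sh
sudo chmod +x /usr/local/bin/haproxy-agent-check-launch.sh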
3.6. Restart xinetd¶
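On a systemd based distribution, for example:
sudo systemctl restart xinetd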
3.7. Test that the agent works¶
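For example, run the script directly and query the agent port locally (nc is assumed to be available); both should print up 100% while the server is below all thresholds:
/usr/local/bin/haproxy-agent-check.sh
nc 127.0.0.1 9707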
Load balancer setup¶
1. Configure nginx to serve example applications (or any other frontend task)¶
1.1. Install nginx¶
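On a CentOS-like system, for example:
sudo yum install -y nginx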
1.2. Configure nginx¶
Change the default port in the /etc/nginx/nginx.conf file, and set the server name to localhost
server {
listen 8180;
listen [::]:8180;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
nginx will be available only locally, because HAProxy will provide the entry point for clients.
1.3. Restart nginx¶
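For example:
sudo systemctl restart nginx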
1.4. Download the latest WebSDK build bundle¶
Download the latest WebSDK build bundle
wget https://flashphoner.com/downloads/builds/flashphoner_client/wcs_api-2.0/flashphoner-api-2.0.206-7d9863ae4de631a59ff8793ddecd104ca2fd4a22.tar.gz
and unpack it to the /usr/share/nginx/html/wcs folder
sudo mkdir /usr/share/nginx/html/wcs
cd /usr/share/nginx/html/wcs
sudo tar -xzf ~/flashphoner-api-2.0.206-7d9863ae4de631a59ff8793ddecd104ca2fd4a22.tar.gz --strip-components=2
2. SSL certificates setup for HAProxy¶
2.1. Create a full certificate file in PEM format¶
Create a full certificate file in PEM format (it must include all the certificates and the private key) and copy it to a folder where the certificate file should be available
cat cert.crt ca.crt cert.key >> cert.pem
sudo mkdir -p /etc/pki/tls/mydomain.com
sudo cp cert.pem /etc/pki/tls/mydomain.com
3. HAProxy configuration¶
3.1. Install HAProxy¶
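On a CentOS-like system, for example:
sudo yum install -y haproxy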
3.2. Configure HAProxy¶
Edit the file /etc/haproxy/haproxy.cfg
haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log /dev/log local0
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend wcs-balancer
bind *:443 ssl crt /etc/pki/tls/mydomain.com/cert.pem
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr(Sec-WebSocket-Key) -m found
use_backend wcs_back if is_websocket
default_backend wcs_web_admin
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend wcs_back
http-request add-header X-Client-IP %ci:%cp
balance roundrobin
server wcs1_ws 172.31.44.243:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707
server wcs2_ws 172.31.33.112:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707
#---------------------------------------------------------------------
# WCS web admin dashboard
#---------------------------------------------------------------------
backend wcs_web_admin
server wcs_web_http localhost:8180 maxconn 100 check
All the parameters in the global and defaults sections may be left at their default values. Configure the frontend:
frontend wcs-balancer
bind *:443 ssl crt /etc/pki/tls/mydomain.com/cert.pem
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr(Sec-WebSocket-Key) -m found
use_backend wcs_back if is_websocket
default_backend wcs_web_admin
Set nginx with the WebSDK examples as the default backend:
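backend wcs_web_admin
server wcs_web_http localhost:8180 maxconn 100 check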
The backend to balance the load between two instances (the IP addresses are private and shown as an example only):
backend wcs_back
http-request add-header X-Client-IP %ci:%cp
balance roundrobin
server wcs1_ws 172.31.44.243:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707
server wcs2_ws 172.31.33.112:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707
Session stickiness may be set up as follows
backend wcs_back
http-request add-header X-Client-IP %ci:%cp
balance roundrobin
cookie SERVERID insert indirect nocache
server wcs1_ws 172.31.44.243:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707 cookie wcs1_ws
server wcs2_ws 172.31.33.112:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707 cookie wcs2_ws
In this case, all the connections from a certain client will be redirected to the same server unless it returns the down state.
Load balancing by maximum client connections to the server may be configured as follows
backend wcs_back
http-request add-header X-Client-IP %ci:%cp
balance leastconn
server wcs1_ws 172.31.44.243:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707
server wcs2_ws 172.31.33.112:8080 maxconn 100 weight 100 check agent-check agent-inter 5s agent-port 9707
In this case, all the clients will be redirected to the first server until either maxconn is reached or the server returns the down state.
3.3. Restart HAProxy¶
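For example:
sudo systemctl restart haproxy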
Testing¶
- Open the Two Way Streaming example, set port 443 in the Websocket URL input field and publish a stream.
- Check the statistics page on the first WCS server: one Websocket connection (1) and one incoming stream (2) named test (3) are displayed.
- Check the session Id: the client IP address and port are used in the session Id.
- Open the Two Way Streaming example in another browser window, set port 443 in the Websocket URL input field and publish a second stream.
- Check the statistics page on the second WCS server: one Websocket connection (1) and one incoming stream (2) named test2 (3) are displayed.