
Load balancing with nginx + keepalived


In the previous two articles, A'Tang explained in detail how to build a load-balancing setup on a single nginx host. That setup still has a problem: with only one nginx host doing the load balancing, the balancer itself is a single point of failure, so it is not an ideal architecture. In this article A'Tang takes the design a step further and makes the load-balancing layer redundant as well, using keepalived to provide failover for the nginx load balancers.

A'Tang's final web architecture is designed as follows.

Nginx_MASTER: 192.168.7.211 — load balancer (master)

Nginx_SLAVER: 192.168.7.218 — load balancer (backup)

Nginx_VIP_TP: 192.168.7.219 — the site's VIP (virtual IP)

Web1 server: 192.168.7.211 — web server

Web2 server: 192.168.7.160 — web server
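The load-balancing side itself (covered in the earlier articles) comes down to an upstream block on the nginx machines pointing at the two web servers. A minimal sketch for reference — the upstream name web_cluster is chosen here for illustration, not taken from the earlier articles:

```nginx
http {
    # The two backend web servers from the table above.
    upstream web_cluster {
        server 192.168.7.211:80;
        server 192.168.7.160:80;
    }

    server {
        listen 80;

        location / {
            # Forward every request to the backend pool.
            proxy_pass http://web_cluster;
        }
    }
}
```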

How it works:

The VIP is the address external clients connect to. keepalived, using VRRP, binds the VIP to the master or the backup machine, with the configured priorities deciding which one holds it. When the master goes down, keepalived on the master stops advertising and the backup takes over the VIP.

Install Nginx (omitted — see A'Tang's previous two articles).

Install keepalived, which will provide HA for the web/nginx layer:

#wget https://www.sodocs.net/doc/a311857798.html,/software/keepalived-1.1.15.tar.gz

#tar zxvf keepalived-1.1.15.tar.gz

#cd keepalived-1.1.15

#./configure --prefix=/usr/local/keepalived

#make

#make install

#cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

#cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

#cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/

#mkdir /etc/keepalived

#cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived

#cd /etc/keepalived/

Set up the keepalived configuration on the master and backup nginx machines. On the master:

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

notification_email {

heyitang@https://www.sodocs.net/doc/a311857798.html,

}

notification_email_from maxhe@lotery.hk

smtp_server 127.0.0.1

smtp_connect_timeout 30

router_id LVS_DEVEL

}

vrrp_instance VI_1 {

state MASTER

interface eth0

virtual_router_id 51

# IP address of the master Nginx.

mcast_src_ip 192.168.7.211

# This machine's priority is 100

priority 100

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.7.219

}

}

The VIP and the master/backup IPs were fixed in the architecture above, so they are filled in accordingly here. The backup machine's configuration file:

! Configuration File for keepalived

global_defs {

notification_email {

heyitang@https://www.sodocs.net/doc/a311857798.html,

}

notification_email_from maxhe@lotery.hk

smtp_server 127.0.0.1

smtp_connect_timeout 30

router_id LVS_DEVEL

}

vrrp_instance VI_1 {

state BACKUP

interface eth0

virtual_router_id 51

# IP address of the backup Nginx.

mcast_src_ip 192.168.7.218

# This machine's priority is 99

priority 99

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.7.219

}

}

At this point, pinging 192.168.7.219 gets no reply.

Now start the keepalived service on both machines:

#service keepalived start

Pinging 192.168.7.219 now succeeds.

At this point 192.168.7.219 is held by the nginx master, with the backup standing by to take it over. Check on the nginx master, 192.168.7.211:

Check the system log:

tailf /var/log/messages

May 29 18:32:16 localhost Keepalived_vrrp[27731]: Opening file

'/etc/keepalived/keepalived.conf'.

May 29 18:32:16 localhost Keepalived_vrrp[27731]: Configuration is using : 62906 Bytes

May 29 18:32:16 localhost Keepalived_vrrp[27731]: Using LinkWatch kernel netlink reflector...

May 29 18:32:16 localhost Keepalived_healthcheckers[27729]: Using LinkWatch kernel netlink reflector...

May 29 18:32:16 localhost Keepalived_vrrp[27731]: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]

May 29 18:32:17 localhost Keepalived_vrrp[27731]: VRRP_Instance(VI_1) Transition to MASTER STATE

May 29 18:32:18 localhost Keepalived_vrrp[27731]: VRRP_Instance(VI_1) Entering MASTER STATE

May 29 18:32:18 localhost Keepalived_vrrp[27731]: VRRP_Instance(VI_1) setting protocol VIPs.

May 29 18:32:18 localhost Keepalived_vrrp[27731]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.108

May 29 18:32:18 localhost Keepalived_healthcheckers[27729]: Netlink reflector reports IP 192.168.1.108 added

VRRP (the Virtual Router Redundancy Protocol) is now running. Use ip addr to check the IP assignment on the master Nginx.

The VIP is bound to the master Nginx machine: inet 192.168.7.219/32 scope global eth0

Capture the VRRP traffic with tcpdump:

[root@localhost ~]# tcpdump vrrp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

13:38:27.797982 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
13:38:28.794693 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
13:38:29.794518 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
13:38:30.798581 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
13:38:31.795902 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
13:38:32.804050 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
13:38:33.801191 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
13:38:34.798793 IP htuidc.bgp.ip > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
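It is worth verifying failover by hand before relying on it. A sketch of a manual drill, run on the live hosts (the service commands assume the SysV init script installed above):

```shell
# On the master: stop keepalived to simulate a load-balancer failure.
service keepalived stop

# From any other host: the VIP should keep answering while the backup
# takes over (watch /var/log/messages on the backup for
# "Entering MASTER STATE").
ping -c 3 192.168.7.219

# On the backup: the VIP should now be bound to eth0.
ip addr show eth0

# Bring the master back; with priority 100 > 99 it preempts
# and reclaims the VIP.
service keepalived start
```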

With that, the Nginx + Keepalived setup is complete.

Monitoring and master/backup failover

As a refinement, add real-time monitoring: if the load-balancing nginx fails, stop the keepalived service on that machine so the VIP moves to the backup.

nginx_check.sh:

#!/bin/bash
# Keep nginx alive; if it cannot be restarted, stop keepalived
# so the VIP fails over to the backup machine.
while :
do
    nginxpid=$(ps -C nginx --no-header | wc -l)
    if [ "$nginxpid" -eq 0 ]; then
        # nginx is down: try to restart it.
        service nginx start
        sleep 3
        nginxpid=$(ps -C nginx --no-header | wc -l)
        echo "$nginxpid"
        if [ "$nginxpid" -eq 0 ]; then
            # Restart failed: give up the VIP.
            service keepalived stop
        fi
    fi
    sleep 3
done
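The script hinges on one shell idiom: capturing the line count of ps -C nginx --no-header with command substitution. Two details matter: there must be no spaces around the =, and the output is captured with $( ), not quotes. A self-contained sketch of the same idiom, using fixed sample text in place of live ps output:

```shell
#!/bin/sh
# Stand-in for `ps -C nginx --no-header`: two fake process lines.
sample='nginx: master process
nginx: worker process'

# Command substitution ($(...)) captures the pipeline's output;
# note there are no spaces around the `=`.
count=$(printf '%s\n' "$sample" | wc -l)
echo "$count"    # prints 2

# The same numeric test nginx_check.sh performs.
if [ "$count" -eq 0 ]; then
    echo "nginx is down"
else
    echo "nginx is up"
fi
```

Writing `nginxpid = '...'` instead, as in many pasted versions of this script, would be parsed as running a command named nginxpid, which is why the assignment syntax matters.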

Then keep the script running in the background (the trailing & is required, otherwise nohup blocks the shell):

nohup /etc/nginx_check.sh &

Testing

Testing works as in A'Tang's previous two articles, except that the URL now points at the VIP (the virtual IP) rather than an individual nginx host's IP. The reasoning is simple given the architecture laid out at the top of this article, so A'Tang will not repeat it here.

Load balancing

Session persistence via ip_hash
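nginx's ip_hash directive hashes the client address when picking an upstream server, so a given visitor always lands on the same backend — a simple form of session persistence. A minimal sketch, reusing the two web servers from the architecture above (the upstream name web_cluster is illustrative):

```nginx
upstream web_cluster {
    # Hash the client IP so each visitor sticks to one backend,
    # preserving in-memory sessions on that server.
    ip_hash;
    server 192.168.7.211:80;
    server 192.168.7.160:80;
}
```

Note that ip_hash keys on the client address, so clients behind the same NAT or proxy all map to one backend.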
