Using SaltStack configuration management, we will automate the deployment and configuration of a small-to-medium web architecture, covering the following functions and services:
System initialization
The Haproxy service
The Keepalived service
The Nginx service
The PHP (FastCGI) service
The Memcached service
Following the approach of this case study, we design and implement along three layers: system initialization, function modules, and business modules:
System initialization: the initial configuration needed right after the operating system is installed, such as installing the monitoring agent, tuning kernel parameters, and setting up DNS resolution.
Function modules: the applications used in production, such as the installation and management of services like Nginx, PHP, Haproxy, and Keepalived. We create one directory per function, and we call this collection of directories the "function modules".
Business modules: the function modules contain a large number of basic states that are referenced directly at the business layer, so function modules should be as complete and as independent as possible. In a business module, each business type simply includes the installation and deployment states from the function modules and supplies its own specific configuration files. In the end, all we need to do in top.sls is assign the appropriate business state to each Minion.
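To make the layering concrete, here is a rough sketch of what such a top.sls assignment looks like (a sketch only; the actual states for this case are built up section by section below):

```yaml
# Sketch: each Minion is assigned one business state in top.sls
base:
  '*':
    - init.env_init            # system initialization for every host
prod:
  'salt-minion-1.example.com':
    - cluster.haproxy-outside  # business state that includes the needed function modules
```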
I. Environment Planning
Environment planning covers both the lab environment and the SaltStack environment.
1. Lab environment:
salt-master-1.example.com 10.0.241.122 Master
salt-minion-1.example.com 10.0.241.123 Minion, Haproxy+Keepalived, Nginx+PHP
salt-minion-2.example.com 10.0.241.124 Minion, Memcached, Haproxy+Keepalived, Nginx+PHP
2. SaltStack environment configuration
This example uses two environments, base and prod: base holds the initialization states, while prod holds the production configuration-management states:
[root@salt-master-1 ~]# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt/base
  prod:
    - /srv/salt/prod
pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod
[root@salt-master-1 ~]# mkdir -p /srv/salt/{base,prod}
[root@salt-master-1 ~]# mkdir -p /srv/pillar/{base,prod}
[root@salt-master-1 ~]# systemctl restart salt-master.service
II. System Initialization
1. DNS configuration
[root@salt-master-1 ~]# cat /srv/salt/base/init/dns.sls
/etc/resolv.conf:
  file.managed:
    - source: salt://init/files/resolv.conf
    - user: root
    - group: root
    - mode: 644
# Place the prepared resolv.conf under the /srv/salt/base/init/files/ directory
2. Timestamps in shell history
[root@salt-master-1 ~]# cat /srv/salt/base/init/history.sls
/etc/profile:
  file.append:
    - text:
      - export HISTTIMEFORMAT="%F %T `whoami` "
3. Command auditing
[root@salt-master-1 ~]# cat /srv/salt/base/init/audit.sls
/etc/bashrc:
  file.append:
    - text:
      - export PROMPT_COMMAND='{ msg=$(history 1 | { read x y; echo $y; }); logger "[euid=$(whoami)]":$(who am i):['prod']"$msg"; }'
4. Kernel parameter tuning
[root@salt-master-1 ~]# cat /srv/salt/base/init/sysctl.sls
net.ipv4.ip_local_port_range:
  sysctl.present:
    - value: 10000 65000
fs.file-max:
  sysctl.present:
    - value: 2000000
net.ipv4.ip_forward:
  sysctl.present:
    - value: 1
vm.swappiness:
  sysctl.present:
    - value: 0
5. EPEL repository
[root@salt-master-1 ~]# cat /srv/salt/base/init/epel.sls
yum_repo_release:
  pkg.installed:
    - sources:
      - epel-release: http://mirrors.aliyun.com/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
    - unless: rpm -qa | grep epel-release-7-5
6. Installing zabbix_agentd
We use Pillar to set the Zabbix server's IP address:
[root@salt-master-1 ~]# cat /srv/pillar/base/top.sls
base:
  '*':
    - zabbix
[root@salt-master-1 ~]# cat /srv/pillar/base/zabbix.sls
zabbix-agent:
  Zabbix_Server: 10.0.241.122
Install and start the Zabbix agent:
[root@salt-master-1 ~]# cat /srv/salt/base/init/zabbix_agent.sls
zabbix-agent:
  pkg.installed:
    - name: zabbix22-agent
  file.managed:
    - name: /etc/zabbix_agentd.conf
    - source: salt://init/files/zabbix_agentd.conf
    - template: jinja
    - defaults:
      Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }}
    - require:
      - pkg: zabbix-agent
  service.running:
    - enable: True
    - watch:
      - pkg: zabbix-agent
      - file: zabbix-agent
[root@salt-master-1 ~]# cat /srv/salt/base/init/env_init.sls
include:
  - init.dns
  - init.history
  - init.audit
  - init.sysctl
  - init.epel
  - init.zabbix_agent
[root@salt-master-1 ~]# cat /srv/salt/base/top.sls
base:
  '*':
    - init.env_init
# Run on the Master:
[root@salt-master-1 ~]# salt 'salt-minion-1' state.highstate test=True
III. Haproxy Configuration Management
Haproxy is an open-source, high-performance reverse proxy that supports layer-4 and layer-7 load balancing, multiple load-balancing algorithms, health checks, and more.
Keepalived is a high-availability clustering project and a complete implementation of the VRRP protocol. We use Keepalived to manage the VIP on the Haproxy nodes: when the primary Haproxy fails, the VIP floats to the standby Haproxy, which continues serving traffic.
[root@salt-master-1 ~]# mkdir /srv/salt/prod/pkg -p
[root@salt-master-1 ~]# mkdir /srv/salt/prod/haproxy/files -p
[root@salt-master-1 ~]# mkdir /srv/salt/prod/keepalived/files -p
# Under each service directory, create a files directory to hold the source tarball and the related init scripts, configuration files, etc.
1. The pkg module
[root@salt-master-1 ~]# cat /srv/salt/prod/pkg/pkg-init.sls
pkg-init:
  pkg.installed:
    - pkgs:
      - gcc
      - gcc-c++
      - glibc
      - make
      - autoconf
      - openssl
      - openssl-devel
2. Haproxy service configuration
[root@salt-master-1 ~]# cd /usr/local/src/ && wget http://www.haproxy.org/download/1.6/src/haproxy-1.6.2.tar.gz && tar zxf haproxy-1.6.2.tar.gz && cd haproxy-1.6.2/examples/
[root@salt-master-1 examples]# sed -i 's/\/usr\/sbin\/'\$BASENAME'/\/usr\/local\/haproxy\/sbin\/'\$BASENAME'/g' haproxy.init
# Adjust haproxy's init script to point at the /usr/local install path
[root@salt-master-1 examples]# cp haproxy.init /srv/salt/prod/haproxy/files/
Write the Haproxy install state as follows:
[root@salt-master-1 examples]# cat /srv/salt/prod/haproxy/install.sls
include:
  - pkg.pkg-init
haproxy-install:
  file.managed:
    - name: /usr/local/src/haproxy-1.6.2.tar.gz
    - source: salt://haproxy/files/haproxy-1.6.2.tar.gz
    - mode: 755
    - user: root
    - group: root
  cmd.run:
    - name: cd /usr/local/src/ && tar zxf haproxy-1.6.2.tar.gz && cd haproxy-1.6.2 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy
    - unless: test -d /usr/local/haproxy
    - require:
      - pkg: pkg-init
      - file: haproxy-install
/etc/init.d/haproxy:
  file.managed:
    - source: salt://haproxy/files/haproxy.init
    - mode: 755
    - user: root
    - group: root
    - require:
      - cmd: haproxy-install
net.ipv4.ip_nonlocal_bind:
  sysctl.present:
    - value: 1
haproxy-config-dir:
  file.directory:
    - name: /etc/haproxy
    - mode: 755
    - user: root
    - group: root
haproxy-init:
  cmd.run:
    - name: chkconfig --add haproxy
    - unless: chkconfig --list | grep haproxy
    - require:
      - file: /etc/init.d/haproxy
There are two ways to manage haproxy's configuration file:
1) Reference the haproxy install state wherever haproxy is needed, then add management of haproxy's configuration file and service there. Advantage: simple and clear. Disadvantage: not flexible or reusable.
2) Use a Jinja template: write the base haproxy configuration once and generate the rest automatically from Pillar. Advantage: very flexible and reusable. Disadvantage: it requires a lot of Jinja syntax (if, for, etc.) plus Pillar data to drive the configuration, so it is more complex, harder to write, and easier to get wrong.
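As a minimal sketch of the second approach (the Pillar key haproxy:backend_servers and its layout here are hypothetical, not part of this case), the backend servers could be generated from Pillar like this:

```jinja
# Hypothetical Pillar data (e.g. in /srv/pillar/prod/haproxy.sls):
#   haproxy:
#     backend_servers:
#       - { name: web-node1, ip: 10.0.241.123, port: 8080 }
#       - { name: web-node2, ip: 10.0.241.124, port: 8080 }
# Jinja-templated fragment of haproxy.cfg:
backend backend_www_example_com
option httpchk HEAD / HTTP/1.0
balance source
{% for server in pillar['haproxy']['backend_servers'] %}
server {{ server.name }} {{ server.ip }}:{{ server.port }} check inter 2000 rise 30 fall 15
{% endfor %}
```

The templated file would then be deployed with file.managed and `- template: jinja`, the same mechanism used for the Zabbix agent configuration above. In this case study we take the first, simpler approach.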
3. Referencing Haproxy from a business module
Instead of putting this in the function-module configuration, we now write a business module, Cluster, which calls the Haproxy states to complete the configuration management. The benefit of doing it this way is that the configuration management of base services stays separate from the business.
Create the cluster directory, and inside it a files directory to hold the configuration files:
[root@salt-master-1 ~]# mkdir -p /srv/salt/prod/cluster/files
[root@salt-master-1 ~]# cat /srv/salt/prod/cluster/files/haproxy-outside.cfg
global
    maxconn 100000
    chroot /usr/local/haproxy
    uid 99
    gid 99
    daemon
    nbproc 1
    pidfile /usr/local/haproxy/logs/haproxy.pid
    log 127.0.0.1 local3 info
# Default parameters
defaults
    option http-keep-alive
    maxconn 100000
    mode http
    timeout connect 5000ms
    timeout client 5000ms
    timeout server 5000ms
# Enable the Haproxy status page, with authentication
listen stats
    mode http
    bind 0.0.0.0:8888
    stats enable
    stats uri /haproxy-status
    stats auth haproxy:saltstack
# Frontend settings
frontend frontend_www_example_com
    bind 10.0.241.123:80
    mode http
    option httplog
    log global
    default_backend backend_www_example_com
# Backend settings
backend backend_www_example_com
    option forwardfor header X-REAL-IP
    option httpchk HEAD / HTTP/1.0
    balance source
    server web-node1 10.0.241.123:8080 check inter 2000 rise 30 fall 15
    server web-node2 10.0.241.124:8080 check inter 2000 rise 30 fall 15
Write the haproxy service-management state:
[root@salt-master-1 ~]# cat /srv/salt/prod/cluster/haproxy-outside.sls
include:
  - haproxy.install
haproxy-service:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://cluster/files/haproxy-outside.cfg
    - user: root
    - group: root
    - mode: 644
  service.running:
    - name: haproxy
    - enable: True
    - reload: True
    - require:
      - cmd: haproxy-init
    - watch:
      - file: haproxy-service
4. Applying the Haproxy state
[root@salt-master-1 ~]# cat /srv/salt/base/top.sls
base:
  '*':
    - init.env_init
prod:
  '*':
    - cluster.haproxy-outside
[root@salt-master-1 prod]# salt 'salt-minion-1' state.highstate test=True
IV. Keepalived Configuration Management
First, place the source tarball, Keepalived's init script, and its sysconfig file under the /srv/salt/prod/keepalived/files/ directory. Both the init script and the sysconfig file can be obtained from the source tarball.
1. Preparing the packages
[root@salt-master-1 ~]# cd /usr/local/src/ && wget && cp keepalived-1.2.19.tar.gz /srv/salt/prod/keepalived/files/ && tar zxf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19/ && cp keepalived/etc/init.d/keepalived.init /srv/salt/prod/keepalived/files/ && cp keepalived/etc/init.d/keepalived.sysconfig /srv/salt/prod/keepalived/files/
[root@salt-master-1 keepalived-1.2.19]# vim /srv/salt/prod/keepalived/files/keepalived.init
# Change: daemon keepalived ${KEEPALIVED_OPTIONS}
# To:     daemon /usr/local/keepalived/sbin/keepalived ${KEEPALIVED_OPTIONS}
2. Writing the Keepalived install sls
[root@salt-master-1 keepalived]# cat install.sls
keepalived-install:
  file.managed:
    - name: /usr/local/src/keepalived-1.2.19.tar.gz
    - source: salt://keepalived/files/keepalived-1.2.19.tar.gz
    - mode: 755
    - user: root
    - group: root
  cmd.run:
    - name: cd /usr/local/src/ && tar zxf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
    - unless: test -d /usr/local/keepalived
    - require:
      - file: keepalived-install
# Keepalived's sysconfig file
/etc/sysconfig/keepalived:
  file.managed:
    - source: salt://keepalived/files/keepalived.sysconfig
    - mode: 644
    - user: root
    - group: root
# Keepalived's service-management script
/etc/init.d/keepalived:
  file.managed:
    - source: salt://keepalived/files/keepalived.init
    - mode: 755
    - user: root
    - group: root
# Add Keepalived to system service management
keepalived-init:
  cmd.run:
    - name: chkconfig --add keepalived
    - unless: chkconfig --list | grep keepalived
    - require:
      - file: /etc/init.d/keepalived
# Keepalived's configuration directory
/etc/keepalived:
  file.directory:
    - user: root
    - group: root
3. Referencing Keepalived from the business module
As with Haproxy, we first need a Keepalived configuration file, but this time it differs slightly from Haproxy's: Keepalived has master and backup nodes, and some settings differ between them, so we use a Jinja template to manage the configuration file.
[root@salt-master-1 keepalived]# cat /srv/salt/prod/cluster/files/haproxy-outside-keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        saltstack@example.com
    }
    notification_email_from keepalived@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id {{ ROUTEID }}
}
vrrp_instance haproxy_ha {
    state {{ STATEID }}
    interface eth0
    virtual_router_id 36
    priority {{ PRIORITYID }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.241.123
    }
}
In the Cluster business directory, write the sls that uses Keepalived to make Haproxy highly available:
[root@salt-master-1 keepalived]# cat /srv/salt/prod/cluster/haproxy-outside-keepalived.sls
include:
  - keepalived.install
keepalived-server:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.conf
    - mode: 644
    - user: root
    - group: root
    - template: jinja
    {% if grains['fqdn'] == 'salt-minion-1.example.com' %}
    - defaults:
      ROUTEID: haproxy_ha
      STATEID: MASTER
      PRIORITYID: 150
    {% elif grains['fqdn'] == 'salt-minion-2.example.com' %}
    - defaults:
      ROUTEID: haproxy_ha
      STATEID: BACKUP
      PRIORITYID: 100
    {% endif %}
  service.running:
    - name: keepalived
    - enable: True
    - watch:
      - file: keepalived-server
4. Applying the Keepalived state
[root@salt-master-1 keepalived]# cat /srv/salt/base/top.sls
base:
  '*':
    - init.env_init
prod:
  '*':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived