4. Using Ansible - playbook mode - hands-on batch-operation cases
1. About playbooks
1) A playbook is a text file written in YAML syntax, made up of two parts: play and task.
play: defines which host or host group to operate on
task: defines the concrete work to run against that host or group; it can be one task or several (modules)
2) A playbook is built from one or more modules; different modules are combined to accomplish one job.
A playbook describes the desired state in YAML syntax; the file extension is .yaml (or .yml).
2. The three essentials of YAML
1) Indentation: YAML expresses hierarchy with a fixed indentation style; each level is two spaces, and tab characters are not allowed.
2) Colons: every colon must be followed by a space, except when the line ends with the colon.
3) Dashes: a dash plus a space marks a list item; items sharing the same indentation level belong to the same list.
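Taken together, the three rules look like this in a minimal fragment (a generic illustration, not part of the cases below):

```yaml
fruits:            # a colon at end of line needs no trailing space
  - apple          # "- " (dash plus space) marks a list item
  - banana         # same indentation level = same list
server: web01      # a colon followed by a value must have a space after it
```

Two spaces per level, and never a tab.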
3. ansible-playbook hands-on cases
Case 1: install httpd remotely in batch with ansible-playbook - if you edit the config file and push it again, the file is updated but the service is not reloaded, so the change does not take effect
Control node: 192.168.171.128
[root@localhost ~]# ls
httpd.conf httpd_install.yaml
[root@localhost ~]# vim httpd_install.yaml
#This is an Ansible playbook
#Step 1: who to target. hosts: the host-group name defined in Ansible's hosts inventory file
#Step 2: roughly what to do: install, configure, start
#Step 3: how to do it, concretely
#name: description; tasks holds 3 list items at the same level
#yum: install the service on the remote host with the yum module (installed)
#copy: push a file to the remote host with the copy module
#service: start the service on the remote host (started)
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
- hosts: test
remote_user: root
gather_facts: no
tasks:
- name: install httpd service
yum: name=httpd,httpd-tools state=installed
- name: configure httpd service
copy: src=/root/httpd.conf dest=/etc/httpd/conf/httpd.conf
- name: start httpd service
service: name=httpd state=started enabled=yes
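The key=value shorthand used above works, but pure-YAML module arguments are easier to read and are the style current Ansible documentation favors; the same play could also be written as (a sketch, functionally identical):

```yaml
- hosts: test
  remote_user: root
  gather_facts: no
  tasks:
    - name: install httpd service
      yum:
        name: httpd,httpd-tools
        state: installed
    - name: configure httpd service
      copy:
        src: /root/httpd.conf
        dest: /etc/httpd/conf/httpd.conf
    - name: start httpd service
      service:
        name: httpd
        state: started
        enabled: yes
```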
[root@localhost ~]# ansible-playbook --syntax-check httpd_install.yaml #check the playbook syntax
playbook: httpd_install.yaml
[root@localhost ~]# ansible-playbook -C httpd_install.yaml #-C does a dry run instead of really executing
[root@localhost ~]# ansible-playbook httpd_install.yaml #run it for real, installing the service on the remote machines in batch
All managed nodes: on 192.168.171.129 and 192.168.171.130 the httpd service is installed and started
[root@localhost ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-10-04 01:20:56 CST; 4s ago
Case 2: install httpd remotely in batch with ansible-playbook - if you edit the config file and push it again, the file is updated and a service restart is triggered, so the change takes effect.
Control node: 192.168.171.128
[root@localhost ~]# curl 192.168.171.129
httpd responds on port 80
[root@localhost ~]# curl 192.168.171.130
httpd responds on port 80
[root@localhost ~]# ls
httpd.conf httpd_install.yaml
[root@localhost ~]# vim httpd.conf
Listen 8888 #change the port
[root@localhost ~]# vim httpd_install.yaml
#This is an Ansible playbook
#Step 1: who to target. hosts: the host-group name defined in Ansible's hosts inventory file
#Step 2: roughly what to do: install, configure, start #Step 3: how to do it, concretely
#name: description; tasks holds 3 list items at the same level
#yum: install the service on the remote host with the yum module
#copy: push a file to the remote host with the copy module #service: start the service remotely
#notify: when the config file in this task changes, it triggers the restart in handlers below (matched by the handler's name)
#handlers: the action run when triggered - restart the httpd service
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
- hosts: test
remote_user: root
gather_facts: no
tasks:
- name: install httpd service
yum: name=httpd,httpd-tools state=installed
- name: configure httpd service
copy: src=/root/httpd.conf dest=/etc/httpd/conf/httpd.conf
notify: Restart httpd service
- name: start httpd service
service: name=httpd state=started enabled=yes
handlers:
- name: Restart httpd service
service: name=httpd state=restarted
[root@localhost ~]# ansible-playbook --syntax-check httpd_install.yaml #check the playbook syntax
playbook: httpd_install.yaml
[root@localhost ~]# ansible-playbook -C httpd_install.yaml #-C does a dry run instead of really executing
[root@localhost ~]# ansible-playbook httpd_install.yaml #run it for real, pushing the change to the remote machines in batch
[root@localhost ~]# curl 192.168.171.129
curl: (7) Failed connect to 192.168.171.129:80; Connection refused
[root@localhost ~]# curl 192.168.171.130
curl: (7) Failed connect to 192.168.171.130:80; Connection refused
[root@localhost ~]# curl 192.168.171.129:8888
httpd responds
[root@localhost ~]# curl 192.168.171.130:8888
httpd responds
All managed nodes: on 192.168.171.129 and 192.168.171.130 the httpd service is installed and started
[root@localhost ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-10-04 01:47:03 CST; 18s ago
[root@localhost ~]# netstat -anput |grep 80
(no output)
[root@localhost ~]# netstat -anput |grep 8888
tcp6 0 0 :::8888 :::* LISTEN 16723/httpd
Case 3: uninstall httpd remotely with ansible-playbook, and remove the corresponding user and config files
[root@localhost ~]# ansible test -m command -a "netstat -anput"
192.168.171.129 | CHANGED | rc=0 >>
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 921/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1075/master
tcp 0 0 192.168.171.129:22 192.168.171.1:55261 ESTABLISHED 1856/sshd: root@pts
tcp 0 0 192.168.171.129:22 192.168.171.128:48634 ESTABLISHED 3255/sshd: root@pts
tcp6 0 0 :::22 :::* LISTEN 921/sshd
tcp6 0 0 :::8888 :::* LISTEN 3233/httpd
tcp6 0 0 ::1:25 :::* LISTEN 1075/master
192.168.171.130 | CHANGED | rc=0 >>
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 928/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1083/master
tcp 0 0 192.168.171.130:22 192.168.171.1:55262 ESTABLISHED 1530/sshd: root@pts
tcp 0 0 192.168.171.130:22 192.168.171.128:45528 ESTABLISHED 2905/sshd: root@pts
tcp6 0 0 :::22 :::* LISTEN 928/sshd
tcp6 0 0 :::8888 :::* LISTEN 2887/httpd
tcp6 0 0 ::1:25 :::* LISTEN 1083/master
[root@localhost ~]# vim httpd_removed.yaml
#This is an Ansible playbook
#Step 1: who to target. hosts: the host-group name defined in Ansible's hosts inventory file
#Step 2: roughly what to do: remove the package, the user, and the files #Step 3: how to do it, concretely
#name: description; tasks holds 3 list items at the same level
#yum: state=absent removes the package from the remote host
#user: state=absent removes the user #file: state=absent removes the directory
#remote_user: the user to use on the remote hosts
#gather_facts: by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
- hosts: test
remote_user: root
gather_facts: no
tasks:
- name: remove httpd service
yum: name=httpd,httpd-tools state=absent
- name: remove apache user
user: name=apache state=absent
- name: remove data file
file: name=/etc/httpd state=absent
[root@localhost ~]# ansible-playbook httpd_removed.yaml
[root@localhost ~]# ansible test -m command -a "netstat -anput"
192.168.171.130 | CHANGED | rc=0 >>
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 928/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1083/master
tcp 0 0 192.168.171.130:22 192.168.171.128:45532 ESTABLISHED 3495/sshd: root@not
tcp 0 0 192.168.171.130:22 192.168.171.128:45534 ESTABLISHED 3676/sshd: root@pts
tcp 0 0 192.168.171.130:22 192.168.171.1:55262 ESTABLISHED 1530/sshd: root@pts
tcp6 0 0 :::22 :::* LISTEN 928/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1083/master
192.168.171.129 | CHANGED | rc=0 >>
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 921/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1075/master
tcp 0 0 192.168.171.129:22 192.168.171.128:48644 ESTABLISHED 4025/sshd: root@pts
tcp 0 0 192.168.171.129:22 192.168.171.1:55261 ESTABLISHED 1856/sshd: root@pts
tcp 0 0 192.168.171.129:22 192.168.171.128:48638 ESTABLISHED 3844/sshd: root@not
tcp6 0 0 :::22 :::* LISTEN 921/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1075/master
[root@localhost ~]# ansible test -m command -a "netstat -anput" |grep 88
(empty)
Case 4: install the NFS service on the control node, and batch-mount its shared directory on the managed nodes
Control node: 192.168.171.128
[root@localhost ~]# cat /etc/ansible/hosts
[test] #add a group name
192.168.171.129 #add a managed host IP
192.168.171.130 #add a managed host IP
[root@localhost ~]# yum -y install nfs-utils #both the control node and the managed nodes that do the mounting must have this installed, or the mount fails
[root@localhost ~]# vim /etc/exports
/data *(rw,no_root_squash)
[root@localhost ~]# ls /data/
a.txt
[root@localhost ~]# cat /data/a.txt
111
[root@localhost ~]# systemctl start nfs
[root@localhost ~]# cat web_mount.yaml
#test: the host group from /etc/ansible/hosts #tasks: the tasks to run
#name: description #mount: the mount module
#state=mounted: mount the device immediately and write the entry into /etc/fstab
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
- hosts: test
remote_user: root
gather_facts: no
tasks:
- name: Mount nfs server share data
mount: src=192.168.171.128:/data path=/data fstype=nfs opts=defaults state=mounted
#with state=absent it unmounts immediately and removes the entry from /etc/fstab
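As that comment says, state=absent reverses the operation; a teardown playbook would be the mirror image of the mount task (a sketch):

```yaml
- hosts: test
  remote_user: root
  gather_facts: no
  tasks:
    - name: Unmount nfs server share data
      mount: src=192.168.171.128:/data path=/data fstype=nfs state=absent
```

The mount module also accepts state=unmounted, which unmounts immediately but leaves the /etc/fstab entry in place.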
[root@localhost ~]# ansible-playbook web_mount.yaml #run the playbook
All managed nodes: 192.168.171.129 and 192.168.171.130
[root@localhost ~]# df -h|tail -1
192.168.171.128:/data 50G 1.3G 49G 3% /data
[root@localhost ~]# cat /etc/fstab |tail -1
192.168.171.128:/data /data nfs defaults 0 0
[root@localhost ~]# cat /data/a.txt
111
Case 5: install the rsync service remotely in batch, and have a config-file change on the control node trigger a service restart when the playbook runs
Control node: 192.168.171.128
[root@localhost ~]# ls
conf rsync_install.yaml web_mount.yaml
[root@localhost ~]# ls conf/
rsyncd.conf
[root@localhost ~]# cat conf/rsyncd.conf
uid = www
gid = www
port = 873
fake super = yes
use chroot = no
max connections = 200
timeout = 600
ignore errors
read only = false
list = false
auth users = rsync_backup
secrets file = /etc/rsyncd.password
log file = /var/log/rsyncd.log
[data]
path=/data
[root@localhost ~]# cat rsync_install.yaml
#test: the host group from /etc/ansible/hosts #tasks: the tasks to run
#name: description #yum: the yum module, installs services
#copy: the copy module, pushes files to remote hosts #file: the file module, creates remote directories
#service: the service module, manages remote services
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
- hosts: test
remote_user: root
gather_facts: no
tasks:
#install the rsync service
- name: Install Rsync Server
yum: name=rsync state=installed
#configure rsync: push our custom config file, and trigger a restart when it changes
- name: configure rsync server
copy: src=./conf/rsyncd.conf dest=/etc/rsyncd.conf
notify: Restart Rsync Server
#create the rsync virtual user and password file - user: rsync_backup, password: 1
- name: create Virt User
copy: content='rsync_backup:1' dest=/etc/rsyncd.password mode=600
#create the group and user on the remote host
- name: create group www
group: name=www gid=666
#create the user remotely; create_home=no: no home directory; the shell forbids logins
- name: create user www
user: name=www uid=666 group=www create_home=no shell=/sbin/nologin
#create the /data directory remotely as the share directory
- name: create data directory
file: path=/data state=directory recurse=yes owner=www group=www mode=755
#start the rsync service remotely
- name: start rsyncserver
service: name=rsyncd state=started enabled=yes
#the handlers below receive the notify trigger and restart the rsync service
handlers:
- name: Restart Rsync Server
service: name=rsyncd state=restarted
[root@localhost ~]# ansible-playbook rsync_install.yaml #run the remote installation
[root@localhost ~]# yum -y install rsync
[root@localhost ~]# echo 1 > /etc/rsync.pass
[root@localhost ~]# chmod -R 600 /etc/rsync.pass
[root@localhost ~]# echo 111 > a.txt
[root@localhost ~]# rsync -av a.txt rsync_backup@192.168.171.129::data --password-file=/etc/rsync.pass
[root@localhost ~]# rsync -av a.txt rsync_backup@192.168.171.130::data --password-file=/etc/rsync.pass
All managed nodes: 192.168.171.129 and 192.168.171.130
[root@localhost ~]# systemctl status rsyncd
● rsyncd.service - fast remote file copy program daemon
Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-10-04 17:16:39 CST; 4min 18s ago
[root@localhost ~]# netstat -anput |grep 873
tcp 0 0 0.0.0.0:873 0.0.0.0:* LISTEN 23117/rsync
[root@localhost ~]# ls /data/
a.txt
[root@localhost ~]# cat /data/a.txt
111
Case 6: install the NFS service remotely in batch, with a config-file change triggering a service restart when the playbook runs
Control node: 192.168.171.128
[root@localhost ~]# cat /etc/ansible/hosts
[test] #add a group name
192.168.171.129 #add a managed host IP
192.168.171.130 #add a managed host IP
[root@localhost ~]# ls
conf nfs_install.yaml
[root@localhost ~]# ls conf/
exports
[root@localhost ~]# cat conf/exports
/data *(rw,no_root_squash)
[root@localhost ~]# cat nfs_install.yaml
#hosts: the host group to operate on
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
- hosts: test
remote_user: root
gather_facts: no
tasks:
#install nfs on the remote hosts
- name: Install nfs server
yum: name=nfs-utils state=installed
#configure nfs: push the custom config file to the remote hosts, and trigger a service restart when it changes
- name: configure nfs server
copy: src=./conf/exports dest=/etc/exports
notify: Restart Nfs Server
#recursively create the share directory on the remote hosts
- name: create share data directory
file: path=/data state=directory recurse=yes owner=root group=root mode=755
#start nfs on the remote hosts
- name: start nfs server
service: name=nfs-server state=started enabled=yes
handlers:
- name: Restart Nfs Server
service: name=nfs-server state=restarted
[root@localhost ~]# ansible-playbook nfs_install.yaml #run the remote installation
[root@localhost ~]# yum -y install nfs-utils #install the client locally to inspect the exports
[root@localhost ~]# showmount -e 192.168.171.129
Export list for 192.168.171.129:
/data *
[root@localhost ~]# showmount -e 192.168.171.130
Export list for 192.168.171.130:
/data *
All managed nodes: 192.168.171.129 and 192.168.171.130
[root@localhost ~]# systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Fri 2019-10-04 17:39:08 CST; 14s ago
Case 7: add cron jobs remotely in batch
Control node: 192.168.171.128
[root@localhost ~]# cat /etc/ansible/hosts
[test] #add a group name
192.168.171.129 #add a managed host IP
192.168.171.130 #add a managed host IP
[root@localhost ~]# cat cron_add.yaml
#hosts: the host group to operate on
#tasks: the task list
#name: descriptive comment
#cron: the cron module adds a cron job; the fields are minute/hour/day/month/weekday, and any omitted field defaults to *. The task below only adds the entry; for it to actually do anything, the script must exist on the host.
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
- hosts: test
remote_user: root
gather_facts: no
tasks:
- name: create crontab script entry
cron: name='dellog scripts' minute=00 hour=01 job="/bin/sh /server/scripts/delete_log.sh &>/dev/null"
[root@localhost ~]# ansible-playbook cron_add.yaml #run the remote add
Note: the following would delete the cron job:
cron: name='backup scripts' minute=00 hour=01 job="/bin/sh /server/scripts/delete_log.sh &>/dev/null" state=absent
And the following would comment out (disable) the cron job:
cron: name='backup scripts' minute=00 hour=01 job="/bin/sh /server/scripts/delete_log.sh &>/dev/null" disabled=yes
All managed nodes: 192.168.171.129 and 192.168.171.130
[root@localhost ~]# crontab -l
#Ansible: dellog scripts
00 01 * * * /bin/sh /server/scripts/delete_log.sh &>/dev/null
Case 8: install nginx from source remotely in batch
Control node: 192.168.171.128
[root@localhost ~]# cat /etc/ansible/hosts
[test] #add a group name
192.168.171.129 #add a managed host IP
192.168.171.130 #add a managed host IP
[root@localhost ansible-playbook-deploy-nginx-source-code]# pwd
/root/ansible-playbook-deploy-nginx-source-code
[root@localhost ansible-playbook-deploy-nginx-source-code]# ls
nginx-1.23.3.tar.gz nginx_source_code_deploy.yaml
[root@localhost ansible-playbook-deploy-nginx-source-code]# cat nginx_source_code_deploy.yaml
#reference links: https://www.cnblogs.com/gjun/articles/12123253.html https://www.jianshu.com/p/a00f0699485c
#test: the host group from /etc/ansible/hosts #tasks: the tasks to run
#name: description #yum: the yum module, installs services
#copy: the copy module, pushes files to remote hosts #file: the file module, creates remote directories
#service: the service module, manages remote services
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
#before use, upload the nginx tarball to the /root/ansible-playbook-deploy-nginx-source-code/ directory
- hosts: test
remote_user: root
gather_facts: no
vars:
src_nginx: /root/ansible-playbook-deploy-nginx-source-code/nginx-1.23.3.tar.gz
nginx_jieya_dir: /usr/local
nginx_install_dir: /usr/local/nginx
nginx_jieyahou_name: nginx-1.23.3
tasks:
#upload the nginx tarball to /root/ansible-playbook-deploy-nginx-source-code/ in advance
#install the compiler toolchain and dependencies
#- name: Install gcc gcc-c++ and dependencies
# yum: name={{ item }} state=installed
# with_items:
# - gcc
# - gcc-c++
# - openssl-devel
# - openssl
# - zlib
# - zlib-devel
# - pcre
# - pcre-devel
#note: the yum install below could also be done with the with_items style above
#install the compiler toolchain and dependencies
- name: Install gcc gcc-c++ and dependencies
yum: name=gcc,gcc-c++,openssl-devel,openssl,zlib,zlib-devel,pcre,pcre-devel,vim,wget state=installed
#extract the nginx tarball
- name: Unarchive nginx package
unarchive:
src: "{{ src_nginx }}"
dest: "{{ nginx_jieya_dir }}"
#configure and compile nginx
- name: configure and compile nginx
shell: useradd -s /sbin/nologin nginx &&
cd {{ nginx_jieya_dir }} &&
cd {{ nginx_jieyahou_name }} &&
./configure --user=nginx --group=nginx --prefix={{ nginx_install_dir }} --with-http_stub_status_module --with-http_ssl_module &&
make && make install
#start nginx
- name: Start nginx
shell: /usr/local/nginx/sbin/nginx
#note: the extraction above could also be done with shell commands, as in this tomcat example:
#- name: Unarchive tomcat package
# copy: src=/root/ansible-playbook-deploy-tomcat/apache-tomcat-8.0.32.tar.gz dest=/tmp/
#- name: Unarchive tomcat
# shell: cd /tmp && tar -zxf apache-tomcat-8.0.32.tar.gz
[root@localhost ansible-playbook-deploy-nginx-source-code]# ansible-playbook nginx_source_code_deploy.yaml
Deployment starts
All managed nodes: check the nginx deployment on 192.168.171.129 and 192.168.171.130
[root@localhost ~]# ps -ef |grep nginx
root 37256 1 0 20:02 ? 00:00:00 nginx: master process /usr/local/nginx/sbin/nginx
nginx 37257 37256 0 20:02 ? 00:00:00 nginx: worker process
root 37268 26792 0 20:03 pts/0 00:00:00 grep --color=auto nginx
[root@localhost ~]# netstat -anput |grep 80|grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 37256/nginx: master
[root@localhost ~]# curl -I http://127.0.0.1/
HTTP/1.1 200 OK
Server: nginx/1.23.3
Date: Sat, 08 Apr 2023 12:04:37 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Sat, 08 Apr 2023 12:02:56 GMT
Connection: keep-alive
ETag: "643157f0-267"
Accept-Ranges: bytes
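One caveat about the Start nginx step in the playbook above: a plain `shell: /usr/local/nginx/sbin/nginx` task is not idempotent, because a second run fails when nginx is already bound to port 80. The shell module's creates argument can skip the task when the service is already up (a sketch; the pid-file path assumes nginx's default layout under this --prefix):

```yaml
- name: Start nginx only if it is not already running
  shell: /usr/local/nginx/sbin/nginx
  args:
    creates: /usr/local/nginx/logs/nginx.pid
```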
Case 9: install the tomcat binary distribution remotely in batch
Control node: 192.168.171.128
[root@localhost ~]# cat /etc/ansible/hosts
[test] #add a group name
192.168.171.129 #add a managed host IP
192.168.171.130 #add a managed host IP
[root@localhost ansible-playbook-deploy-tomcat]# pwd
/root/ansible-playbook-deploy-tomcat
[root@localhost ansible-playbook-deploy-tomcat]# ls
apache-tomcat-8.0.32.tar.gz jdk-8u65-linux-x64.gz tomcat_deploy.yaml
[root@localhost ansible-playbook-deploy-tomcat]# cat tomcat_deploy.yaml
#test: the host group from /etc/ansible/hosts #tasks: the tasks to run
#name: description #yum: the yum module, installs services
#copy: the copy module, pushes files to remote hosts #file: the file module, creates remote directories
#service: the service module, manages remote services
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
#before use, upload the packages (the tomcat tarball and the jdk tarball) to the /root/ansible-playbook-deploy-tomcat/ directory
- hosts: test
remote_user: root
gather_facts: no
vars:
src_jdk: /root/ansible-playbook-deploy-tomcat/jdk-8u65-linux-x64.gz
jdk_install_dir: /usr/local/
jdk_jieyahou_name: jdk1.8.0_65
src_tomcat: /root/ansible-playbook-deploy-tomcat/apache-tomcat-8.0.32.tar.gz
tomcat_install_dir: /usr/local/
tomcat_jieyahou_name: apache-tomcat-8.0.32
tasks:
#upload the jdk tarball to /root/ansible-playbook-deploy-tomcat/ in advance
#extract the jdk tarball
- name: Unarchive jdk package
unarchive:
src: "{{ src_jdk }}"
dest: "{{ jdk_install_dir }}"
#configure the jdk environment variables
- name: set jdk global env
shell: echo '''export JAVA_HOME=/usr/local/{{ jdk_jieyahou_name }}''' >> ~/.bashrc &&
echo '''export PATH=$JAVA_HOME/bin:$PATH''' >> ~/.bashrc &&
echo '''export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar''' >> ~/.bashrc &&
source ~/.bashrc
#with a yum repository available, the jdk could also be installed like this:
#install the jdk environment
#- name: Install jdk1.8
# yum: name=java-1.8.0-openjdk state=installed
#upload the tomcat package to the /root/ansible-playbook-deploy-tomcat/ directory in advance
#extract the tomcat package
- name: Unarchive tomcat package
unarchive:
src: "{{ src_tomcat }}"
dest: "{{ tomcat_install_dir }}"
#start tomcat. Note: on first start tomcat must be launched with nohup ./startup.sh & or nohup ./catalina.sh &; calling /.../.../tomcat.../bin/startup.sh directly will not start it
- name: Start tomcat
shell: cd "{{ tomcat_install_dir }}" && cd "{{ tomcat_jieyahou_name }}"/bin && nohup ./startup.sh &
#note: the extraction above could also be done with shell commands:
#- name: Unarchive tomcat package
# copy: src=/root/ansible-playbook-deploy-tomcat/apache-tomcat-8.0.32.tar.gz dest=/tmp/
#- name: Unarchive tomcat
# shell: cd /tmp && tar -zxf apache-tomcat-8.0.32.tar.gz
[root@localhost ansible-playbook-deploy-tomcat]# ansible-playbook tomcat_deploy.yaml
Deployment starts
All managed nodes: check the tomcat deployment on 192.168.171.129 and 192.168.171.130
[root@localhost ~]# ls /usr/local/jdk1.8.0_65/
bin COPYRIGHT db include javafx-src.zip jre lib LICENSE man README.html release src.zip THIRDPARTYLICENSEREADME-JAVAFX.txt THIRDPARTYLICENSEREADME.txt
[root@localhost ~]# ls /usr/local/apache-tomcat-8.0.32/
bin conf lib LICENSE logs NOTICE RELEASE-NOTES RUNNING.txt temp webapps work
[root@localhost ~]# ps -ef |grep tomcat
root 37600 1 4 20:09 ? 00:00:03 /usr/local/jdk1.8.0_65/bin/java -Djava.util.logging.config.file=/usr/local/apache-tomcat-8.0.32/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/local/apache-tomcat-8.0.32/endorsed -classpath /usr/local/apache-tomcat-8.0.32/bin/bootstrap.jar:/usr/local/apache-tomcat-8.0.32/bin/tomcat-juli.jar -Dcatalina.base=/usr/local/apache-tomcat-8.0.32 -Dcatalina.home=/usr/local/apache-tomcat-8.0.32 -Djava.io.tmpdir=/usr/local/apache-tomcat-8.0.32/temp org.apache.catalina.startup.Bootstrap start
root 37632 26792 0 20:10 pts/0 00:00:00 grep --color=auto tomcat
[root@localhost ~]# cat /root/.bashrc |grep export
export JAVA_HOME=/usr/local/jdk1.8.0_65
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
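One caveat about the environment-variable task in this playbook: every shell task runs in its own shell, so `source ~/.bashrc` only lasts for that task, and the non-login shells Ansible spawns for later tasks may never read ~/.bashrc at all. When a later task needs JAVA_HOME, the per-task environment: keyword is more dependable (a sketch using the paths from this case):

```yaml
- name: run a java command with JAVA_HOME set per task
  shell: $JAVA_HOME/bin/java -version
  environment:
    JAVA_HOME: /usr/local/jdk1.8.0_65
```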
Case 10: install the mysql 5.7 binary distribution remotely in batch
Control node: 192.168.171.128
[root@localhost ~]# cat /etc/ansible/hosts
[test] #add a group name
192.168.171.129 #add a managed host IP
192.168.171.130 #add a managed host IP
[root@localhost ansible-playbook-deploy-mysql5.7]# pwd
/root/ansible-playbook-deploy-mysql5.7
[root@localhost ansible-playbook-deploy-mysql5.7]# ls
my.cnf mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz mysql5.7_deploy.yaml mysqld.service
[root@localhost ansible-playbook-deploy-mysql5.7]# cat mysqld.service
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/mysql5.7/bin/mysqld --defaults-file=/etc/my.cnf
LimitNOFILE = 15000
[root@localhost ansible-playbook-deploy-mysql5.7]# cat mysql5.7_deploy.yaml
#test: the host group from /etc/ansible/hosts #tasks: the tasks to run
#name: description #yum: the yum module, installs services
#copy: the copy module, pushes files to remote hosts #file: the file module, creates remote directories
#service: the service module, manages remote services
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
#before use, upload the mysql tarball to the /root/ansible-playbook-deploy-mysql5.7/ directory
- hosts: test
remote_user: root
gather_facts: no
vars:
src_mysql: /root/ansible-playbook-deploy-mysql5.7/mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz
mysql_install_dir: /data/mysql5.7
mysql_data_dir: /data/mysql5.7/data
mysql_log_dir: /data/mysql5.7/log
mysql_yasuo_package_name: mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz
mysql_jieyahou_name: mysql-5.7.19-linux-glibc2.12-x86_64
config_mysql: /root/ansible-playbook-deploy-mysql5.7/my.cnf
service_mysql: /root/ansible-playbook-deploy-mysql5.7/mysqld.service
tasks:
#upload the mysql tarball to /root/ansible-playbook-deploy-mysql5.7/ in advance
#install the required dependencies
- name: install dependencies
yum: name=libaio-devel state=installed
#transfer the mysql tarball
- name: transfer mysql package
copy: src={{ src_mysql }} dest=/opt/
#extract the mysql tarball and move it into place
- name: Unarchive mysql package
shell: mkdir /data &&
cd /opt/ && tar -zxf {{ mysql_yasuo_package_name }} &&
mv {{ mysql_jieyahou_name }} {{ mysql_install_dir }}
#create the mysql user, the data and log directories, and set permissions
- name: create mysql user log data
shell: useradd -s /sbin/nologin mysql &&
mkdir {{ mysql_data_dir }} &&
mkdir {{ mysql_log_dir }} &&
chown -R mysql.mysql {{ mysql_install_dir }} &&
echo '''export PATH=/data/mysql5.7/bin/:$PATH''' >> ~/.bashrc &&
source ~/.bashrc
#push the prepared mysql config file to the remote hosts
- name: transfer my.conf
copy: src={{ config_mysql }} dest=/etc/
#initialize mysql
- name: init mysql
shell: mysqld --initialize --user=mysql --basedir={{ mysql_install_dir }} --datadir={{ mysql_data_dir }}
#push the prepared mysqld.service file to the remote hosts so systemctl can manage the service
- name: transfer mysqld.service
copy: src={{ service_mysql }} dest=/etc/systemd/system/
#reload the service files and start mysql
- name: flush service conf
shell: systemctl daemon-reload &&
systemctl enable mysqld &&
systemctl start mysqld
#change the mysql login password. After initialization the initial password is written to the log; grep for password in mysql_error.log to find it, then log in and run set password='xx';
#in the commented-out task below, the non-interactive mysql login cannot see the password variable, so change the password manually instead
#- name: change mysql password to '123456'
# shell: init_mysql_pass=`cat /data/mysql5.7/log/mysql_error.log |grep password |awk '{print $NF}'` &&
# mysql -uroot -p'${init_mysql_pass}' -e "set password='123456';"
#note: the extraction above could also be done with shell commands:
#- name: Unarchive tomcat package
# copy: src=/root/ansible-playbook-deploy-tomcat/apache-tomcat-8.0.32.tar.gz dest=/tmp/
#- name: Unarchive tomcat
# shell: cd /tmp && tar -zxf apache-tomcat-8.0.32.tar.gz
[root@localhost ansible-playbook-deploy-tomcat]# ansible-playbook mysql5.7_deploy.yaml
Deployment starts
All managed nodes: check the mysql deployment on 192.168.171.129 and 192.168.171.130
[root@localhost ~]# ps -ef |grep mysql
mysql 38278 1 1 20:20 ? 00:00:00 /data/mysql5.7/bin/mysqld --defaults-file=/etc/my.cnf
root 38320 26792 0 20:21 pts/0 00:00:00 grep --color=auto mysql
[root@localhost ~]# netstat -anput |grep 3306
tcp6 0 0 :::3306 :::* LISTEN 38278/mysqld
[root@localhost ~]# cat /data/mysql5.7/log/mysql_error.log |grep password |awk '{print $NF}' #look up the default password
5)P0Z=zkulkd
[root@localhost ~]# mysql -uroot -p'5)P0Z=zkulkd' #log in to mysql with the default initial password
mysql> set password='123456';
mysql> quit
[root@localhost ~]# mysql -uroot -p'123456' #log in to mysql with the new password
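The commented-out password-change task in the playbook fails for a concrete reason: `'${init_mysql_pass}'` sits inside single quotes, so the shell never expands the variable. Keeping the value inside Ansible with register avoids the quoting problem (a sketch; --connect-expired-password is needed because MySQL marks the initial password as expired):

```yaml
- name: read the initial mysql password from the error log
  shell: cat /data/mysql5.7/log/mysql_error.log |grep password |awk '{print $NF}'
  register: init_pass
- name: change mysql password to '123456'
  shell: /data/mysql5.7/bin/mysql -uroot -p'{{ init_pass.stdout }}' --connect-expired-password -e "set password='123456';"
```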
Case 11: managing the nginx config file with a template in a playbook
Control node: 192.168.171.128
[root@localhost ~]# cat /etc/ansible/hosts
[test] #add a group name
192.168.171.129 #add a managed host IP
192.168.171.130 #add a managed host IP
[root@localhost ansible-playbook-template-nginx-conf]# pwd
/root/ansible-playbook-template-nginx-conf
[root@localhost ansible-playbook-template-nginx-conf]# ls
playbook-template-nginx-conf.yaml site.j2
[root@localhost ansible-playbook-template-nginx-conf]# cat playbook-template-nginx-conf.yaml
#test: the host group from /etc/ansible/hosts #tasks: the tasks to run
#name: description #yum: the yum module, installs services
#copy: the copy module, pushes files to remote hosts #file: the file module, creates remote directories
#service: the service module, manages remote services
#remote_user: root specifies the user to use on the remote hosts
#gather_facts: no - by default a playbook run gathers facts about the target hosts first; disabling it speeds things up
#before use, place the site.j2 template in the /root/ansible-playbook-template-nginx-conf/ directory
- hosts: test
remote_user: root
gather_facts: no
vars:
http_port: 80 #define a variable
server_name: www.test.com #define a variable
tasks:
- template: src=site.j2 dest=/tmp/site.conf
#the template is copied to the target hosts and rendered there; /tmp is used here for testing, but it could be copied straight into nginx's conf directory
[root@localhost ansible-playbook-template-nginx-conf]# cat site.j2
server {
listen {{http_port}}; #reference the variable
server_name {{server_name}}; #reference the variable
location / {
root /var/www/html;
index index.html;
}
}
[root@localhost ansible-playbook-template-nginx-conf]# ansible-playbook playbook-template-nginx-conf.yaml
All managed nodes: check the config file under /tmp on 192.168.171.129 and 192.168.171.130
[root@localhost ~]# ls /tmp/site.conf
/tmp/site.conf
[root@localhost ~]# cat /tmp/site.conf
server {
listen 80; #reference the variable
server_name www.test.com; #reference the variable
location / {
root /var/www/html;
index index.html;
}
}
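Templates become more robust when variables have fallbacks: Jinja2's default filter supplies a value when the playbook does not define the variable. A hypothetical variant of site.j2:

```
server {
    listen {{ http_port | default(80) }};
    server_name {{ server_name | default('localhost') }};
}
```

With this version, removing http_port from vars would still render "listen 80;" instead of failing with an undefined-variable error.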