OpenStack Overview

I. Overview

[OpenStack official website]

1. Introduction

OpenStack implements the IaaS (Infrastructure as a Service) cloud computing model and is an open-source cloud platform developed under the Apache 2.0 license. At its core it is a virtual machine management platform: it records every VM's information in a database and manages the VMs from there.

2. OpenStack Architecture (SOA, service-oriented architecture)

Service                      Description                                          Port
Nova (compute)               Provides the VM runtime environment (KVM)            8774
Glance (image)               Manages image templates                              9292
Neutron (networking)         Provides networking services                         9696
Cinder (block storage)       Provides block storage for VMs                       8776
Ceilometer (telemetry)       Provides monitoring and metering/billing
Keystone (identity)          Provides login authentication
Horizon (dashboard)          Provides the web UI
Heat (orchestration)         Batch-creates VMs, similar to an Ansible playbook
Swift (object storage)       Provides object storage

Each of these services depends on shared infrastructure services: a database, a message queue, a memcached cache, and time synchronization.

3. Common Installation Steps for an OpenStack Service (excluding the infrastructure services)

1. Create the service's database and grant privileges to a service user;

2. Create a user in Keystone and associate it with a role (authorization);

3. Create the service entry in Keystone and register its API endpoints;

4. Install the service's packages;

5. Edit the service's configuration files (database connection, Keystone credentials, message queue connection, and other options);

6. Sync the database to create the tables;

7. Start the service.
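Steps 2 and 3 above can be sketched with the openstack client. This is only an illustrative sketch for a hypothetical service named "demo": the user name, password, and endpoint URL below are placeholders, not values used later in this post.

```shell
# Sketch of generic steps 2-3 for a hypothetical service "demo".
# All names, passwords, and URLs here are placeholders.
openstack user create --domain default --password DEMO_PASS demo        # step 2: create the user
openstack role add --project service --user demo admin                  # step 2: associate a role
openstack service create --name demo --description "Demo Service" demo # step 3: create the service
openstack endpoint create --region RegionOne demo public http://controller:8774/v2.1  # step 3: register an API endpoint
```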

II. OpenStack Base Environment

1. Environment Preparation (local YUM repository)

This post uses the Mitaka release as its example (this approach suits older releases whose packages are no longer available in the public YUM repositories).

step 00 Configure hosts-file name resolution between the management node and the compute nodes;

step 01 Mount the installation ISO and configure the local YUM repository (configuring auto-mount at boot is recommended);

mount /dev/cdrom /mnt # mount the ISO
[root@controller /opt]# mount /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@controller /opt]# ls /mnt/
CentOS_BuildTag  EULA  images    LiveOS    repodata              RPM-GPG-KEY-CentOS-Testing-7
EFI              GPL   isolinux  Packages  RPM-GPG-KEY-CentOS-7  TRANS.TBL
[root@controller /opt]# ssh root@10.0.0.31 "mount /dev/cdrom /mnt; ls /mnt"
root@10.0.0.31's password: 
mount: /dev/sr0 is write-protected, mounting read-only
CentOS_BuildTag
EFI
EULA
......
[root@controller /opt]# ssh root@10.0.0.32 "mount /dev/cdrom /mnt;"
root@10.0.0.32's password: 
mount: /dev/sr0 is write-protected, mounting read-only
[root@controller /opt]# echo 'mount /dev/cdrom /mnt' >>/etc/rc.d/rc.local
[root@controller /opt]# chmod +x /etc/rc.d/rc.local 
[root@controller /opt]# ssh root@10.0.0.31 "echo 'mount /dev/cdrom /mnt' >>/etc/rc.d/rc.local; chmod +x /etc/rc.d/rc.local"
root@10.0.0.31's password: 
[root@controller /opt]# ssh root@10.0.0.31 "tail -1 /etc/rc.local"
root@10.0.0.31's password: 
mount /dev/cdrom /mnt
[root@controller /opt]# ssh root@10.0.0.32 "echo 'mount /dev/cdrom /mnt' >>/etc/rc.d/rc.local; chmod +x /etc/rc.d/rc.local"
root@10.0.0.32's password: 
[root@controller /opt]# ssh root@10.0.0.32 "tail -1 /etc/rc.local"
root@10.0.0.32's password: 
mount /dev/cdrom /mnt
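Besides the rc.local approach used above, an /etc/fstab entry is a common alternative for auto-mounting the ISO at boot. A sketch; adjust the device and mount point to your hosts:

```shell
# Alternative to rc.local: mount the ISO at boot via /etc/fstab
echo '/dev/cdrom /mnt iso9660 defaults,ro 0 0' >> /etc/fstab
mount -a   # verify the entry mounts cleanly before rebooting
```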

step 02 Edit the YUM repo file (remove all other repo files)

# /etc/yum.repos.d/local.repo
[local]
name=local
baseurl="file:///mnt"
gpgcheck=0

[openstack]
name=openstack
baseurl="file:///<directory>/repo"
gpgcheck=0

[root@controller /opt]# vim /etc/yum.repos.d/local.repo
[root@controller /opt]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl="file:///mnt"
gpgcheck=0

[openstack]
name=openstack
baseurl="file:///opt/repo"
gpgcheck=0
[root@controller /opt]# scp -rp /etc/yum.repos.d/local.repo 10.0.0.31:/etc/yum.repos.d/local.repo
root@10.0.0.31's password: 
local.repo                                                                            100%  114   140.2KB/s   00:00    
[root@controller /opt]# scp -rp /etc/yum.repos.d/local.repo 10.0.0.32:/etc/yum.repos.d/local.repo
root@10.0.0.32's password: 
local.repo 
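After distributing local.repo, it is worth verifying that yum sees both repositories on every node. A quick check; the repo ids match the file above:

```shell
# Confirm the [local] and [openstack] repos are visible to yum
yum clean all
yum repolist | grep -E 'local|openstack'
```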

step 03 Upload the OpenStack package archive;

Demo package archive (extraction code: ngxz)
rz # upload the archive
scp -rp <archive> <IP>:/<directory> # distribute the archive
tar xf <archive> # extract the archive
[root@controller ~]# cd /opt/
[root@controller /opt]# rz -E
rz waiting to receive.
[root@controller /opt]# scp -rp openstack_rpm.tar.gz 10.0.0.31:`pwd`
root@10.0.0.31's password: 
openstack_rpm.tar.gz                                100%  236MB  86.9MB/s   00:02    
[root@controller /opt]# scp -rp openstack_rpm.tar.gz 10.0.0.32:`pwd`
root@10.0.0.32's password: 
openstack_rpm.tar.gz                                                                  100%  236MB  78.6MB/s   00:03    
[root@controller /opt]# tar xf openstack_rpm.tar.gz 
[root@controller /opt]# ls /opt/
openstack_rpm.tar.gz  repo
[root@controller /opt]# ssh root@10.0.0.31 "tar xf /opt/openstack_rpm.tar.gz -C /opt; ls /opt"
root@10.0.0.31's password: 
openstack_rpm.tar.gz
repo
[root@controller /opt]# ssh root@10.0.0.32 "tar xf /opt/openstack_rpm.tar.gz -C /opt; ls /opt"
root@10.0.0.32's password: 
openstack_rpm.tar.gz
repo

2. Installing the Infrastructure Components

The OpenStack architecture requires the clocks on all nodes to be kept synchronized.

step 00 Install the Chrony service

yum install -y chrony
[root@controller ~]# yum install -y chrony
Loaded plugins: fastestmirror
......
Installed:
  chrony.x86_64 0:3.4-1.el7                                                           

Dependency Installed:
  libseccomp.x86_64 0:2.3.1-4.el7                                                     

Complete!
[root@compute_01 ~]# yum install -y chrony &>/dev/null 
[root@compute_01 ~]# systemctl status chronyd.service 
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:chronyd(8)
           man:chrony.conf(5)
[root@compute_02 ~]# yum install -y chrony &>/dev/null 
[root@compute_02 ~]# systemctl status chronyd.service 
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:chronyd(8)
           man:chrony.conf(5)

step 01 Time synchronization

  • Controller node (edit the config file, then restart the chronyd service)
# /etc/chrony.conf
allow <network>/<prefix> # networks that NTP clients may connect from
[root@controller ~]# vim /etc/chrony.conf 
[root@controller ~]# egrep -v '^$|^#' /etc/chrony.conf
server ntp1.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 10.0.0.0/24
logdir /var/log/chrony
[root@controller ~]# systemctl restart chronyd.service 
[root@controller ~]# netstat -lntup | grep chronyd
udp        0      0 0.0.0.0:123             0.0.0.0:*                           10193/chronyd       
udp        0      0 127.0.0.1:323           0.0.0.0:*                           10193/chronyd       
udp6       0      0 ::1:323                 :::*                                10193/chronyd  
  • Compute nodes (edit the config file, then restart the chronyd service)
# /etc/chrony.conf
server <IP> iburst # NTP server this client synchronizes from
[root@compute_01 ~]# vim /etc/chrony.conf 
[root@compute_01 ~]# egrep -v '^$|^#' /etc/chrony.conf
server 10.0.0.11 iburst 
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
[root@compute_01 ~]# systemctl restart chronyd.service 
[root@compute_01 ~]# scp -rp /etc/chrony.conf 10.0.0.32:/etc/chrony.conf
Warning: Permanently added '10.0.0.32' (ECDSA) to the list of known hosts.
root@10.0.0.32's password: 
chrony.conf  
[root@compute_01 ~]# ssh root@10.0.0.32 systemctl restart chronyd
root@10.0.0.32's password: 
[root@compute_01 ~]# ssh root@10.0.0.31 date; date
root@10.0.0.31's password: 
Fri Mar 25 16:23:02 CST 2022
Fri Mar 25 16:23:02 CST 2022
[root@compute_01 ~]# ssh root@10.0.0.32 date; date
root@10.0.0.32's password: 
Fri Mar 25 16:23:08 CST 2022
Fri Mar 25 16:23:08 CST 2022
[root@compute_01 ~]# netstat -lntup | grep chronyd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           7737/chronyd        
udp6       0      0 ::1:323                 :::*                                7737/chronyd   
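Beyond comparing `date` output, chronyc can confirm that each compute node is actually tracking the controller (the `^*` marker denotes the currently selected source):

```shell
# On a compute node: list NTP sources; 10.0.0.11 should appear
# with the ^* selected-source marker once sync is established
chronyc sources -v
```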

step 02 Install the OpenStack repository

This post uses a local YUM repository as its example, installing the OpenStack Mitaka release.

yum makecache # rebuild the YUM metadata cache
yum list | grep openstack # search for OpenStack packages
yum install <package> # install the release package

This step also depends on the YUM Base repository.

[root@controller ~]# yum makecache 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
...... 
Metadata Cache Created
[root@controller ~]# yum list | grep openstack
ansible-openstack-modules.noarch         0-20140902git79d751a.el7      epel     
centos-release-openstack-queens.noarch   1-2.el7.centos                extras   
centos-release-openstack-rocky.noarch    1-1.el7.centos                extras   
centos-release-openstack-stein.noarch    1-1.el7.centos                extras   
centos-release-openstack-train.noarch    1-1.el7.centos                extras   
resalloc-openstack.noarch                8-1.el7                       epel 
[root@controller ~]# yum install -y centos-release-openstack-queens.noarch
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
......
Complete!
[root@compute_01 ~]# yum install -y centos-release-openstack-queens.noarch &>/dev/null 
[root@compute_01 ~]# echo $?
0
[root@compute_02 ~]# yum install -y centos-release-openstack-queens.noarch &>/dev/null 
[root@compute_02 ~]# echo $?
0

step 03 Install the OpenStack client and openstack-selinux

yum install -y python-openstackclient openstack-selinux
[root@controller ~]# yum install -y python-openstackclient openstack-selinux
......
Dependency Updated:
  audit.x86_64 0:2.8.5-4.el7         audit-libs.x86_64 0:2.8.5-4.el7         policycoreutils.x86_64 0:2.5-34.el7        

Replaced:
  python-six.noarch 0:1.9.0-2.el7                                                                                       

Complete!
[root@controller ~]# yum install -y python-openstackclient openstack-selinux
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * centos-qemu-ev: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Package python2-openstackclient-3.14.3-1.el7.noarch already installed and latest version
Package openstack-selinux-0.8.13-1.el7.noarch already installed and latest version
Nothing to do
[root@compute_01 ~]# yum install -y python-openstackclient openstack-selinux &>/dev/null 
[root@compute_01 ~]# echo $?
0
[root@compute_01 ~]# yum install -y python-openstackclient openstack-selinux
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * centos-qemu-ev: mirrors.bupt.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Package python2-openstackclient-3.14.3-1.el7.noarch already installed and latest version
Package openstack-selinux-0.8.13-1.el7.noarch already installed and latest version
Nothing to do
[root@compute_02 ~]# yum install -y python-openstackclient openstack-selinux 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * centos-qemu-ev: mirrors.bupt.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Package python2-openstackclient-3.14.3-1.el7.noarch already installed and latest version
Package openstack-selinux-0.8.13-1.el7.noarch already installed and latest version
Nothing to do
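A quick smoke test of the client install (the version it reports depends on which release package was installed):

```shell
# Verify the openstack CLI is installed and on PATH
openstack --version
```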

step 04 Install and configure the database

A NoSQL database (MongoDB) is used in the OpenStack architecture mainly for the monitoring and metering function; when building a private cloud it can be left out.
  • Controller node
yum install -y mariadb mariadb-server python2-pyMySQL

# /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = <IP> # address mysqld listens on
default-storage-engine = innodb # default storage engine
innodb_file_per_table # separate tablespace file per InnoDB table
max_connections = 4096 # maximum number of connections
collation-server = utf8_general_ci
character-set-server = utf8

systemctl enable mariadb # start MariaDB at boot
systemctl start mariadb # start MariaDB
mysql_secure_installation # secure the initial installation

[root@controller ~]# yum install -y mariadb mariadb-server python2-pyMySQL 1>/dev/null
[root@controller ~]# echo $?
0
[root@controller ~]# python -V
Python 2.7.5
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf 
[root@controller ~]# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
[root@controller ~]# systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@controller ~]# systemctl start mariadb
[root@controller ~]# mysql_secure_installation 
......
Enter current password for root (enter for none): 
......
Set root password? [Y/n] n
 ... skipping.
......
Remove anonymous users? [Y/n] y
 ... Success!
......
Disallow root login remotely? [Y/n] y
 ... Success!
......
Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
......
Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...
......
Thanks for using MariaDB!
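With MariaDB running, generic step 1 from the overview (create the service database and grant a user) looks like this. A sketch for a later service: the keystone database name and KEYSTONE_DBPASS password follow the upstream install-guide convention and are not values established in this post.

```shell
# Generic step 1 for a service database (names/passwords are placeholders)
mysql -u root -e "CREATE DATABASE keystone;
  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';"
```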

step 05 Deploy the message queue

The message queue is the component through which the otherwise independent services call one another.

This post uses RabbitMQ as the example.

  • Controller node
yum install -y rabbitmq-server # install the RabbitMQ packages
systemctl enable rabbitmq-server.service # start RabbitMQ at boot
systemctl start rabbitmq-server.service # start RabbitMQ
rabbitmqctl add_user <user> <password> # add a RabbitMQ user (user: openstack; password: RABBIT_PASS)
rabbitmqctl set_permissions <user> ".*" ".*" ".*" # grant the user configure, write, and read permissions
rabbitmq-plugins enable rabbitmq_management # enable the management plugin (for monitoring RabbitMQ)
[root@controller ~]# yum install -y rabbitmq-server.noarch >/dev/null
[root@controller ~]# echo $?
0
[root@controller ~]# systemctl start rabbitmq-server.service 
[root@controller ~]# systemctl enable rabbitmq-server.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack"
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*" 
Setting permissions for user "openstack" in vhost "/"
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  amqp_client
  cowlib
  cowboy
  rabbitmq_web_dispatch
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbit@controller... started 6 plugins
[root@controller ~]# netstat -lntup | grep beam
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      11763/beam          
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      11763/beam          
tcp6       0      0 :::5672                 :::*                    LISTEN      11763/beam 
The RabbitMQ management plugin serves a web console on port 15672; the default login user name and password are both guest.
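The new user and its permissions can be verified from the CLI, or by probing the management API on port 15672. A sketch; the curl check assumes the default guest account is still enabled (guest logins only work from localhost):

```shell
# Verify the openstack user and its permissions
rabbitmqctl list_users
rabbitmqctl list_permissions
# Optional: probe the management API from the controller itself
curl -s -u guest:guest http://127.0.0.1:15672/api/overview >/dev/null && echo "management API OK"
```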

step 06 Install and configure memcached for caching tokens

memcached supports only key-value storage, so as a pure cache it generally outperforms Redis; the trade-off is that memcached does not support data persistence.
  • Controller node
yum install -y memcached python-memcached # install the memcached packages
sed -i 's#127.0.0.1#<IP>#g' /etc/sysconfig/memcached # change the address memcached listens on
systemctl enable memcached.service # start memcached at boot
systemctl start memcached.service # start the memcached service
[root@controller ~]# yum install -y memcached python-memcached.noarch >/dev/null
[root@controller ~]# echo $?
0
[root@controller ~]# sed -i "s#127.0.0.1#10.0.0.11#g" /etc/sysconfig/memcached 
[root@controller ~]# grep -Ev '^$|^#' /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 10.0.0.11,::1"
[root@controller ~]# systemctl enable memcached.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@controller ~]# systemctl start memcached.service 
[root@controller ~]# netstat -lntup | grep memcached
tcp        0      0 10.0.0.11:11211         0.0.0.0:*               LISTEN      15987/memcached     
tcp6       0      0 ::1:11211               :::*                    LISTEN      15987/memcached
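A quick smoke test against the new listener: memcached speaks a plain-text protocol, so sending `stats` with nc confirms the service answers on the rebound address (assumes nc is installed):

```shell
# Ask memcached for its stats; a healthy instance replies with STAT lines
printf 'stats\r\nquit\r\n' | nc 10.0.0.11 11211 | head -5
```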