OpenStack Cloud Platform Deployment
Source: cnblogs | Author: SkyRainmom | Date: 2023/5/17 8:52:47

Preface: This deployment uses CentOS 8-Stream as the operating system and the OpenStack Victoria package repository. Apart from the basic configuration, the time synchronization service (one of the five core services) and the nova, neutron, and cinder components (of the seven components) must be configured on both nodes; all other services are configured on the controller node only. For neutron, choose either the public (provider) network or the private (self-service) network configuration; in most cases the public network configuration is the one to use. All passwords in this deployment are 111111; change them to suit your own needs.

Installation Environment

  • Virtualization software: VMware Workstation 16 Pro
  • Operating system: CentOS 8-Stream
  • Controller node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled
  • Compute node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled

Basic Configuration (both nodes)

Yum Repository Configuration

Alibaba Cloud mirror repository address: https://mirrors.aliyun.com. You can configure it yourself as needed, but it is not used directly here (the .repo files below reference it).

(1) To configure the CentOS 8 repositories, just edit the parameters in the yum repository .repo files, as follows

  #Edit the CentOS-Stream-AppStream.repo file and point the baseurl parameter at https://mirrors.aliyun.com
  [root@localhost ~]# cd /etc/yum.repos.d/
  [root@localhost yum.repos.d]# vi CentOS-Stream-AppStream.repo
  [appstream]
  name=CentOS Stream $releasever - AppStream
  #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=AppStream&infra=$infra
  baseurl=https://mirrors.aliyun.com/$contentdir/$stream/AppStream/$basearch/os/
  gpgcheck=1
  enabled=1
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
  #Edit the CentOS-Stream-BaseOS.repo file and point the baseurl parameter at https://mirrors.aliyun.com
  [root@localhost yum.repos.d]# vi CentOS-Stream-BaseOS.repo
  [baseos]
  name=CentOS Stream $releasever - BaseOS
  #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=BaseOS&infra=$infra
  baseurl=https://mirrors.aliyun.com/$contentdir/$stream/BaseOS/$basearch/os/
  gpgcheck=1
  enabled=1
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
  #Edit the CentOS-Stream-Extras.repo file and point the baseurl parameter at https://mirrors.aliyun.com
  [root@localhost yum.repos.d]# vi CentOS-Stream-Extras.repo
  [extras]
  name=CentOS Stream $releasever - Extras
  #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=extras&infra=$infra
  baseurl=https://mirrors.aliyun.com/$contentdir/$stream/extras/$basearch/os/
  gpgcheck=1
  enabled=1
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

(2) Configure the OpenStack repository

  #Create the openstack-victoria.repo file in the yum repository directory
  [root@localhost ~]# vi /etc/yum.repos.d/openstack-victoria.repo
  #Write the following content
  [victoria]
  name=victoria
  baseurl=https://mirrors.aliyun.com/centos/8-stream/cloud/x86_64/openstack-victoria/
  gpgcheck=0
  enabled=1

(3) Clean and rebuild the package cache

  [root@controller ~]# yum clean all
  [root@controller ~]# yum makecache
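Optionally, you can confirm the repositories are usable after rebuilding the cache; a quick check is:

  #List enabled repositories; the aliyun-backed repos configured above should appear
  [root@controller ~]# dnf repolist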

Network Configuration

  • Controller node, dual NICs -------> host-only IP: 10.10.10.10, NAT (external) IP: 10.10.20.10
  • Compute node, dual NICs -------> host-only IP: 10.10.10.20, NAT (external) IP: 10.10.20.20

(1) Install the network service

  #Install network-scripts: the NetworkManager service that ships with CentOS 8 conflicts with the neutron service, so install network, then stop NetworkManager and disable it
  [root@localhost ~]# dnf -y install network-scripts
  [root@localhost ~]# systemctl disable --now NetworkManager
  #Start the network service and enable it at boot
  [root@localhost ~]# systemctl enable --now network

(2) Configure static IPs

  #ens33, using the controller node as the example
  [root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
  BOOTPROTO=static #modify
  ONBOOT=yes #modify
  IPADDR=10.10.10.10 #add
  NETMASK=255.255.255.0 #add
  #ens34, using the controller node as the example
  [root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
  BOOTPROTO=static #modify
  ONBOOT=yes #modify
  IPADDR=10.10.20.10 #add
  NETMASK=255.255.255.0 #add
  GATEWAY=10.10.20.2 #add
  DNS1=8.8.8.8 #add
  DNS2=114.114.114.114 #add

(3) Restart the network and test external connectivity

  [root@localhost ~]# systemctl restart network
  [root@localhost ~]# ping -c 3 www.baidu.com

Host Configuration

(1) Set the hostnames

  #Controller node
  [root@localhost ~]# hostnamectl set-hostname controller
  [root@localhost ~]# bash
  [root@controller ~]#
  #Compute node
  [root@localhost ~]# hostnamectl set-hostname compute
  [root@localhost ~]# bash
  [root@compute ~]#

(2) Disable the firewall

  #Stop the firewall and disable it at boot
  [root@controller ~]# systemctl disable --now firewalld

(3) Disable the SELinux security subsystem

  #Set SELinux to disabled so it stays off across reboots
  [root@controller ~]# vi /etc/selinux/config
  SELINUX=disabled
  #Check the SELinux state with the getenforce command
  [root@controller ~]# getenforce
  Disabled
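Note that SELINUX=disabled in /etc/selinux/config only takes effect after a reboot; to stop enforcement immediately in the current session, a common approach is:

  #Switch SELinux to permissive mode for the current boot (getenforce then reports Permissive)
  [root@controller ~]# setenforce 0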

(4) Configure host name mappings

  #Controller node
  [root@controller ~]# cat >>/etc/hosts<<EOF
  > 10.10.10.10 controller
  > 10.10.10.20 compute
  > EOF
  #Compute node
  [root@compute ~]# cat >>/etc/hosts<<EOF
  > 10.10.10.10 controller
  > 10.10.10.20 compute
  > EOF

OpenStack Repository

  #Install the OpenStack Victoria release repository
  [root@controller ~]# dnf -y install centos-release-openstack-victoria
  #Upgrade all packages on the node
  [root@controller ~]# dnf -y upgrade
  #Install the OpenStack client and openstack-selinux
  [root@controller ~]# dnf -y install python3-openstackclient openstack-selinux

The Five Core Services

Chrony Time Synchronization (both nodes)

(1) Check whether chrony is installed

  [root@controller ~]# rpm -qa |grep chrony
  #If it is not installed, install it
  [root@controller ~]# dnf -y install chrony

(2) Edit the chrony configuration file

  #Controller node
  [root@controller ~]# vim /etc/chrony.conf
  server ntp6.aliyun.com iburst #add: synchronize with the Alibaba Cloud time server
  allow 10.10.10.0/24 #add
  #Compute node
  [root@compute ~]# vim /etc/chrony.conf
  server controller iburst #add: synchronize with the controller node

(3) Restart the time sync service and enable it at boot

  [root@controller ~]# systemctl restart chronyd && systemctl enable chronyd
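To verify that synchronization is actually working, chrony provides a sources command; on the compute node, controller should appear as a source:

  #List configured time sources and their sync state (^* marks the currently selected source)
  [root@controller ~]# chronyc sources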

MariaDB Database

(1) Install the MariaDB database

  [root@controller ~]# dnf -y install mariadb mariadb-server python3-PyMySQL
  #Start the mariadb database
  [root@controller ~]# systemctl start mariadb

(2) Create and edit the openstack.cnf file

  [root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
  [mysqld]
  bind-address = 10.10.10.10 #bind address; if the IP changes later, this line can be removed
  default-storage-engine = innodb
  innodb_file_per_table = on
  max_connections = 4096
  collation-server = utf8_general_ci
  character-set-server = utf8

(3) Run the initial database security setup

  [root@controller ~]# mysql_secure_installation
  Enter current password for root (enter for none): #enter the current root password; press Enter if it is empty
  OK, successfully used password, moving on...
  Set root password? [Y/n] y # set the root password
  New password: # enter the new password
  Re-enter new password: # enter the new password again
  Remove anonymous users? [Y/n] y # remove anonymous users
  Disallow root login remotely? [Y/n] n # whether to disallow remote root login
  Remove test database and access to it? [Y/n] y # remove the test database and access to it
  Reload privilege tables now? [Y/n] y # reload the privilege tables

(4) Restart the database service and enable it at boot

  [root@controller ~]# systemctl restart mariadb && systemctl enable mariadb

RabbitMQ Message Queue

Note: installing rabbitmq-server may fail because the configured install sources lack the SDL library it depends on; download the required package first, then installing rabbitmq-server will succeed.

Download command: wget http://rpmfind.net/linux/centos/8-stream/PowerTools/x86_64/os/Packages/SDL2-2.0.10-2.el8.x86_64.rpm

Install command: dnf -y install SDL2-2.0.10-2.el8.x86_64.rpm

(1) Install the rabbitmq package

  [root@controller ~]# dnf -y install rabbitmq-server

(2) Start the message queue service and enable it at boot

  [root@controller ~]# systemctl start rabbitmq-server && systemctl enable rabbitmq-server

(3) Add the openstack user and set its password

  [root@controller ~]# rabbitmqctl add_user openstack 111111

(4) Configure permissions for the openstack user

  [root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

(5) Enable the message queue web management plugin

  [root@controller ~]# rabbitmq-plugins enable rabbitmq_management
  #After this starts, ss -antlu shows port 15672 listening; you can log in to the RabbitMQ web UI at http://10.10.10.10:15672; the default user and password are both guest

Memcached Cache

(1) Install the memcached packages

  [root@controller ~]# dnf -y install memcached python3-memcached

(2) Edit the memcached configuration file

  [root@controller ~]# vim /etc/sysconfig/memcached
  ..........
  OPTIONS="-l 127.0.0.1,::1,controller" #modify this line

(3) Restart the cache service and enable it at boot

  [root@controller ~]# systemctl start memcached && systemctl enable memcached
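A quick way to confirm memcached is up is to check its default TCP port, 11211:

  #memcached should be listening on 11211 on the loopback and controller addresses
  [root@controller ~]# ss -tnlp | grep 11211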

Etcd Cluster

(1) Install the etcd package

  [root@controller ~]# dnf -y install etcd

(2) Edit the etcd configuration file

  [root@controller ~]# vim /etc/etcd/etcd.conf
  #Modify as follows
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  ETCD_LISTEN_PEER_URLS="http://10.10.10.10:2380"
  ETCD_LISTEN_CLIENT_URLS="http://10.10.10.10:2379"
  ETCD_NAME="controller"
  ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.10:2380"
  ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.10:2379"
  ETCD_INITIAL_CLUSTER="controller=http://10.10.10.10:2380"
  ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
  ETCD_INITIAL_CLUSTER_STATE="new"

(3) Start the etcd service and enable it at boot

  [root@controller ~]# systemctl start etcd && systemctl enable etcd
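Optionally, you can check cluster membership with etcdctl; the exact syntax varies with the etcd version, but a minimal check against the client endpoint configured above might be:

  #List cluster members via the advertised client URL
  [root@controller ~]# etcdctl --endpoints=http://10.10.10.10:2379 member list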

The Seven Components

Keystone Identity Service

(1) Create the database and grant privileges

  #Enter the database
  [root@controller ~]# mysql -u root -p111111
  #Create the keystone database
  MariaDB [(none)]> CREATE DATABASE keystone;
  #Grant privileges
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '111111';

(2) Install the keystone packages

  [root@controller ~]# dnf -y install openstack-keystone httpd python3-mod_wsgi

(3) Edit the configuration file

  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf
  #Edit
  [root@controller ~]# vim /etc/keystone/keystone.conf
  [database]
  connection = mysql+pymysql://keystone:111111@controller/keystone
  [token]
  provider = fernet

(4) Populate the database

  [root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

(5) View the keystone database tables

  [root@controller ~]# mysql -uroot -p111111
  MariaDB [(none)]> use keystone;
  MariaDB [keystone]> show tables;
  MariaDB [keystone]> quit

(6) Initialize the Fernet key repositories

  [root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  [root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

(7) Bootstrap the identity service

  [root@controller ~]# keystone-manage bootstrap --bootstrap-password 111111 --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

(8) Configure the Apache HTTP service

  #Edit the httpd.conf file
  [root@controller ~]# vim /etc/httpd/conf/httpd.conf
  ServerName controller #add this line
  <Directory />
  AllowOverride none
  Require all granted #change this line to this
  </Directory>
  #Create the wsgi-keystone.conf file link
  [root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

(9) Restart the httpd service and enable it at boot

  [root@controller ~]# systemctl restart httpd && systemctl enable httpd

(10) Create the admin environment variable script

  [root@controller ~]# vim /admin-openrc.sh
  export OS_USERNAME=admin
  export OS_PASSWORD=111111
  export OS_PROJECT_NAME=admin
  export OS_USER_DOMAIN_NAME=Default
  export OS_PROJECT_DOMAIN_NAME=Default
  export OS_AUTH_URL=http://controller:5000/v3
  export OS_IDENTITY_API_VERSION=3
  export OS_IMAGE_API_VERSION=2
  #Load the environment variables with source /admin-openrc.sh (or the equivalent . /admin-openrc.sh); to avoid loading them manually every time, add the source line to .bashrc so it runs at each login
  [root@controller ~]# vim .bashrc
  source /admin-openrc.sh #add this line

(11) Create domains, projects, users, and roles

  #Create a domain; a default domain named default already exists, so this one is just an example
  [root@controller ~]# openstack domain create --description "An Example Domain" example
  #Create the service project
  [root@controller ~]# openstack project create --domain default --description "Service Project" service
  #Create a demo project
  [root@controller ~]# openstack project create --domain default --description "Demo Project" myproject
  #Create a user; this command prompts for a password, enter it twice
  [root@controller ~]# openstack user create --domain default --password-prompt myuser
  #Create a role
  [root@controller ~]# openstack role create myrole
  #Bind the role to the project and user
  [root@controller ~]# openstack role add --project myproject --user myuser myrole

(12) Verify by requesting a token

  [root@controller ~]# openstack token issue

Glance Image Service

(1) Create the database and grant privileges

  #Enter the database
  [root@controller ~]# mysql -u root -p111111
  #Create the glance database
  MariaDB [(none)]> CREATE DATABASE glance;
  #Grant privileges
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '111111';

(2) Install the glance package

Note: if the installation fails, set enabled=1 in the CentOS-Stream-PowerTools.repo file and install again.

  [root@controller ~]# dnf install -y openstack-glance

(3) Edit the configuration file

  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
  #Edit
  [root@controller ~]# vim /etc/glance/glance-api.conf
  [database]
  connection = mysql+pymysql://glance:111111@controller/glance
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = Default
  user_domain_name = Default
  project_name = service
  username = glance
  password = 111111
  [paste_deploy]
  flavor = keystone
  [glance_store]
  stores = file,http
  default_store = file
  filesystem_store_datadir = /var/lib/glance/images/

(4) Populate the database

  [root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

(5) View the glance database tables

  [root@controller ~]# mysql -uroot -p111111
  MariaDB [(none)]> use glance;
  MariaDB [glance]> show tables;
  MariaDB [glance]> quit

(6) Create the glance user and service, and assign the admin role

  #Create the glance user
  [root@controller ~]# openstack user create --domain default --password 111111 glance
  #Assign the admin role
  [root@controller ~]# openstack role add --project service --user glance admin
  #Create the glance service
  [root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

(7) Register the API endpoints

  #public
  [root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
  #internal
  [root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
  #admin
  [root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

(8) View the service endpoints

  [root@controller ~]# openstack endpoint list

(9) Start the glance service and enable it at boot

  [root@controller ~]# systemctl start openstack-glance-api && systemctl enable openstack-glance-api

(10) Test the image service

  #The image used here is cirros-0.5.1-x86_64-disk.img; create it as follows
  [root@controller ~]# openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public
  #After creation, check it with the openstack command
  [root@controller ~]# openstack image list
  #Inspect the glance database; the image is recorded in the images table
  [root@controller ~]# mysql -uroot -p111111
  MariaDB [(none)]> use glance;
  MariaDB [glance]> select * from images\G
  #The image file appears under /var/lib/glance/images/; to delete an image, remove the database record first, then the image file
  [root@controller ~]# ls /var/lib/glance/images/

Placement Service

(1) Create the database and grant privileges

  #Enter the database
  [root@controller ~]# mysql -u root -p111111
  #Create the placement database
  MariaDB [(none)]> CREATE DATABASE placement;
  #Grant privileges
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '111111';

(2) Install the placement package

  [root@controller ~]# dnf install -y openstack-placement-api

(3) Edit the configuration file

  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak >/etc/placement/placement.conf
  #Edit
  [root@controller ~]# vim /etc/placement/placement.conf
  [placement_database]
  connection = mysql+pymysql://placement:111111@controller/placement
  [api]
  auth_strategy = keystone
  [keystone_authtoken]
  auth_url = http://controller:5000/v3
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = Default
  user_domain_name = Default
  project_name = service
  username = placement
  password = 111111

(4) Populate the database

  [root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

(5) View the placement database tables

  [root@controller ~]# mysql -uroot -p111111
  MariaDB [(none)]> use placement;
  MariaDB [placement]> show tables;
  MariaDB [placement]> quit

(6) Create the placement user and service, and assign the admin role

  #Create the placement user
  [root@controller ~]# openstack user create --domain default --password 111111 placement
  #Assign the admin role
  [root@controller ~]# openstack role add --project service --user placement admin
  #Create the placement service
  [root@controller ~]# openstack service create --name placement --description "Placement API" placement

(7) Register the API endpoints

  #public
  [root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
  #internal
  [root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
  #admin
  [root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

(8) View the service endpoints

  [root@controller ~]# openstack endpoint list

(9) Restart the httpd service

  [root@controller ~]# systemctl restart httpd

Check the placement service status

  [root@controller ~]# placement-status upgrade check

Nova Compute Service

1. Controller node (part 1)

(1) Create the databases and grant privileges
  #Enter the database
  [root@controller ~]# mysql -u root -p111111
  #Create the nova_api, nova, and nova_cell0 databases
  MariaDB [(none)]> CREATE DATABASE nova_api;
  MariaDB [(none)]> CREATE DATABASE nova;
  MariaDB [(none)]> CREATE DATABASE nova_cell0;
  #Grant privileges
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '111111';
(2) Install the nova packages
  [root@controller ~]# dnf install -y openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
(3) Edit the configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
  #Edit
  [root@controller ~]# vim /etc/nova/nova.conf
  [DEFAULT]
  enabled_apis = osapi_compute,metadata
  transport_url = rabbit://openstack:111111@controller:5672/
  my_ip = 10.10.10.10 #this node's IP; if the IP changes later, be sure to update this
  [api_database]
  connection = mysql+pymysql://nova:111111@controller/nova_api
  [database]
  connection = mysql+pymysql://nova:111111@controller/nova
  [api]
  auth_strategy = keystone
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000/
  auth_url = http://controller:5000/
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = Default
  user_domain_name = Default
  project_name = service
  username = nova
  password = 111111
  [vnc]
  enabled = true
  server_listen = $my_ip
  server_proxyclient_address = $my_ip
  [glance]
  api_servers = http://controller:9292
  [oslo_concurrency]
  lock_path = /var/lib/nova/tmp
  [placement]
  region_name = RegionOne
  project_domain_name = Default
  project_name = service
  auth_type = password
  user_domain_name = Default
  auth_url = http://controller:5000/v3
  username = placement
  password = 111111
(4) Initialize the databases
  # Populate the nova_api database
  [root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
  # Register the nova_cell0 database
  [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
  # Create the cell1 cell
  [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
  # Populate the nova database
  [root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
(5) Create the nova user and service, and assign the admin role
  #Create the nova user
  [root@controller ~]# openstack user create --domain default --password 111111 nova
  #Assign the admin role
  [root@controller ~]# openstack role add --project service --user nova admin
  #Create the nova service
  [root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
(6) Register the API endpoints
  #public
  [root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
  #internal
  [root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
  #admin
  [root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
(7) View the service endpoints
  [root@controller ~]# openstack endpoint list
(8) Verify that nova_cell0 and cell1 were registered successfully
  [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
(9) Start all nova services and enable them at boot
  [root@controller ~]# systemctl enable --now openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
(10) Check that the nova services are running
  [root@controller ~]# nova service-list
  #Usually only two services are listed, nova-scheduler and nova-conductor: this command is received by nova-api, which fronts those two services, so if nova-api is not running they show as down as well. Check the nova-novncproxy service by its port instead, for example:
  [root@controller ~]# netstat -lntup | grep 6080
  tcp 0 0 0.0.0.0:6080 0.0.0.0:* LISTEN 1456/python3
  [root@controller ~]# ps -ef | grep 1456
  nova 1456 1 0 18:29 ? 00:00:05 /usr/bin/python3 /usr/bin/nova-novncproxy --web /usr/share/novnc/
  root 27724 26054 0 20:51 pts/0 00:00:00 grep --color=auto 1456
(11) Viewing the console through the web
  #Without name resolution, use the IP directly
  http://10.10.10.10:6080
  #To use name resolution, add these entries to the hosts file under C:\Windows\System32\drivers\etc on your PC
  10.10.10.10 controller
  10.10.10.20 compute
  #Then visit
  http://controller:6080

2. Compute node

(1) Install the nova package
  [root@compute ~]# dnf install -y openstack-nova-compute
(2) Edit the configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
  [root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
  #Edit
  [root@compute ~]# vim /etc/nova/nova.conf
  [DEFAULT]
  enabled_apis = osapi_compute,metadata
  transport_url = rabbit://openstack:111111@controller
  my_ip = 10.10.10.20 #this node's IP; if the IP changes later, be sure to update this
  [api]
  auth_strategy = keystone
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000/
  auth_url = http://controller:5000/
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = Default
  user_domain_name = Default
  project_name = service
  username = nova
  password = 111111
  [vnc]
  enabled = true
  server_listen = 0.0.0.0
  server_proxyclient_address = $my_ip
  novncproxy_base_url = http://controller:6080/vnc_auto.html
  [glance]
  api_servers = http://controller:9292
  [oslo_concurrency]
  lock_path = /var/lib/nova/tmp
  [placement]
  region_name = RegionOne
  project_domain_name = Default
  project_name = service
  auth_type = password
  user_domain_name = Default
  auth_url = http://controller:5000/v3
  username = placement
  password = 111111
(3) Determine whether the compute node supports hardware acceleration for virtual machines
  #If this command returns a number greater than zero, the compute node supports hardware acceleration; if it returns 0, it does not, and you need to configure [libvirt] to use QEMU
  [root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
  #Configure [libvirt]
  [root@compute ~]# vim /etc/nova/nova.conf
  [libvirt]
  virt_type = qemu
(4) Start the compute node nova services and enable them at boot
  [root@compute ~]# systemctl enable --now libvirtd.service openstack-nova-compute.service

Controller node (part 2)

(5) Add the compute node to the cell database
  #Confirm the compute host exists in the database
  [root@controller ~]# openstack compute service list --service nova-compute
  #Discover the compute node from the controller
  [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
(6) Set the discovery interval (then restart the scheduler, as shown below)
  [root@controller ~]# vim /etc/nova/nova.conf
  [scheduler]
  discover_hosts_in_cells_interval = 300
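For the new interval to take effect, the scheduler needs to reread its configuration, so restarting it is the safe follow-up step:

  #Restart the scheduler so discover_hosts_in_cells_interval is picked up
  [root@controller ~]# systemctl restart openstack-nova-scheduler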

Neutron Networking Service

(1) Create the database and grant privileges
  #Enter the database
  [root@controller ~]# mysql -u root -p111111
  #Create the neutron database
  MariaDB [(none)]> CREATE DATABASE neutron;
  #Grant privileges
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '111111';
(2) Create the neutron user and service, and assign the admin role
  #Create the neutron user
  [root@controller ~]# openstack user create --domain default --password 111111 neutron
  #Assign the admin role
  [root@controller ~]# openstack role add --project service --user neutron admin
  #Create the neutron service
  [root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
(3) Register the API endpoints
  #public
  [root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
  #internal
  [root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
  #admin
  [root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
(4) View the service endpoints
  [root@controller ~]# openstack endpoint list

Controller Node: Public (Provider) Network

(1) Install the neutron packages
  [root@controller ~]# dnf -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
(2) Edit the neutron configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
  #Edit
  [root@controller ~]# vim /etc/neutron/neutron.conf
  [DEFAULT]
  core_plugin = ml2
  service_plugins =
  transport_url = rabbit://openstack:111111@controller
  auth_strategy = keystone
  notify_nova_on_port_status_changes = true
  notify_nova_on_port_data_changes = true
  [database]
  connection = mysql+pymysql://neutron:111111@controller/neutron
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = 111111
  [nova] #add this section if the configuration file does not have it
  auth_url = http://controller:5000
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = nova
  password = 111111
  [oslo_concurrency]
  lock_path = /var/lib/neutron/tmp
(3) Edit the ML2 plugin
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  type_drivers = flat,vlan
  tenant_network_types =
  mechanism_drivers = linuxbridge
  extension_drivers = port_security
  [ml2_type_flat]
  flat_networks = provider
  [securitygroup]
  enable_ipset = true
(4) Configure the Linux bridge agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  [linux_bridge]
  physical_interface_mappings = provider:ens34 #use the NAT NIC that provides connectivity to instances
  [vxlan]
  enable_vxlan = false
  [securitygroup]
  enable_security_group = true
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(5) Configure the DHCP agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/dhcp_agent.ini
  [DEFAULT]
  interface_driver = linuxbridge
  dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  enable_isolated_metadata = true
(6) Set up the bridge filter
  #Modify the system parameters file
  [root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
  #Load the br_netfilter module
  [root@controller ~]# modprobe br_netfilter
  #Check
  [root@controller ~]# sysctl -p
  net.bridge.bridge-nf-call-iptables = 1 #this output means the configuration succeeded
  net.bridge.bridge-nf-call-ip6tables = 1 #this output means the configuration succeeded
(7) Configure the metadata agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/metadata_agent.ini
  [DEFAULT]
  nova_metadata_host = controller
  metadata_proxy_shared_secret = METADATA_SECRET
  #'METADATA_SECRET' is a password you can define yourself (one way to generate it is shown below); it must match the metadata parameter configured in nova later
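Rather than the literal placeholder METADATA_SECRET, the install guide recommends using a random value; one way to generate it, as referenced above:

  #Generate a random secret to use as metadata_proxy_shared_secret
  [root@controller ~]# openssl rand -hex 10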
(8) Configure the compute service to use the network service
  #In the [neutron] section, configure access parameters and enable the metadata proxy
  [root@controller ~]# vim /etc/nova/nova.conf
  [neutron]
  auth_url = http://controller:5000
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = 111111
  service_metadata_proxy = true
  metadata_proxy_shared_secret = METADATA_SECRET #the secret must match
(9) Create the network service initialization script link
  [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(10) Populate the database
  [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(11) Restart the nova API service
  [root@controller ~]# systemctl restart openstack-nova-api.service
(12) Start the neutron services and enable them at boot
  [root@controller ~]# systemctl enable --now neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute Node: Public (Provider) Network

(1) Install the neutron packages
  [root@compute ~]# dnf install -y openstack-neutron-linuxbridge ebtables ipset
(2) Edit the neutron configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
  [root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
  #Edit
  [root@compute ~]# vim /etc/neutron/neutron.conf
  [DEFAULT]
  transport_url = rabbit://openstack:111111@controller
  auth_strategy = keystone
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = 111111
  [oslo_concurrency]
  lock_path = /var/lib/neutron/tmp
(3) Configure the Linux bridge agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
  [root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  #Edit
  [root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  [linux_bridge]
  physical_interface_mappings = provider:ens34 #use the NAT NIC that provides connectivity to instances
  [vxlan]
  enable_vxlan = false
  [securitygroup]
  enable_security_group = true
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) Set up the bridge filter
  #Modify the system parameters file
  [root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
  #Load the br_netfilter module
  [root@compute ~]# modprobe br_netfilter
  #Check
  [root@compute ~]# sysctl -p
  net.bridge.bridge-nf-call-iptables = 1 #this output means the configuration succeeded
  net.bridge.bridge-nf-call-ip6tables = 1 #this output means the configuration succeeded
(5) Configure the compute service to use the network service
  #In the [neutron] section, configure the access parameters
  [root@compute ~]# vim /etc/nova/nova.conf
  [neutron]
  auth_url = http://controller:5000
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = 111111
(6) Restart the compute service (the compute node runs nova-compute, not the nova API)
  [root@compute ~]# systemctl restart openstack-nova-compute.service
(7) Start the Linux bridge agent and enable it at boot
  [root@compute ~]# systemctl enable --now neutron-linuxbridge-agent.service

Verifying that the public network services are running

(8) View the network agent list on the controller
  #View the network agent list on the controller node
  [root@controller ~]# openstack network agent list
  #On success there are usually four agents: a Metadata agent, a DHCP agent, and two Linux bridge agents, one on controller and one on compute
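With the agents up, you can exercise the provider network end to end. The following is a minimal sketch using the flat network name provider configured above, following the standard install-guide commands; the allocation pool and DNS values are examples matched to the 10.10.20.0/24 NAT network here, so adapt them to your environment:

  #Create a shared external flat network on the 'provider' physical network
  [root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
  #Create a subnet on it (example range and gateway for the NAT network)
  [root@controller ~]# openstack subnet create --network provider --allocation-pool start=10.10.20.100,end=10.10.20.200 --dns-nameserver 114.114.114.114 --gateway 10.10.20.2 --subnet-range 10.10.20.0/24 provider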

Controller Node: Private (Self-Service) Network

(1) Install the neutron packages
  [root@controller ~]# dnf -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
(2) Edit the neutron configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
  #Edit
  [root@controller ~]# vim /etc/neutron/neutron.conf
  [DEFAULT]
  core_plugin = ml2
  service_plugins = router
  allow_overlapping_ips = true
  transport_url = rabbit://openstack:111111@controller
  auth_strategy = keystone
  notify_nova_on_port_status_changes = true
  notify_nova_on_port_data_changes = true
  [database]
  connection = mysql+pymysql://neutron:111111@controller/neutron
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = 111111
  [nova] #add this section if the configuration file does not have it
  auth_url = http://controller:5000
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = nova
  password = 111111
  [oslo_concurrency]
  lock_path = /var/lib/neutron/tmp
(3) Edit the ML2 plugin
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  type_drivers = flat,vlan,vxlan
  tenant_network_types = vxlan
  mechanism_drivers = linuxbridge,l2population
  extension_drivers = port_security
  [ml2_type_flat]
  flat_networks = provider
  [ml2_type_vxlan]
  vni_ranges = 1:1000
  [securitygroup]
  enable_ipset = true
(4) Configure the Linux bridge agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  [linux_bridge]
  physical_interface_mappings = provider:ens34 #use the NAT NIC that provides connectivity to instances
  [vxlan]
  enable_vxlan = true
  local_ip = 10.10.10.10
  l2_population = true
  [securitygroup]
  enable_security_group = true
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(5) Set up the bridge filter
  #Modify the system parameters file
  [root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
  #Load the br_netfilter module
  [root@controller ~]# modprobe br_netfilter
  #Check
  [root@controller ~]# sysctl -p
  net.bridge.bridge-nf-call-iptables = 1 #this output means the configuration succeeded
  net.bridge.bridge-nf-call-ip6tables = 1 #this output means the configuration succeeded
(6) Configure the DHCP agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/dhcp_agent.ini
  [DEFAULT]
  interface_driver = linuxbridge
  dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  enable_isolated_metadata = true
(7) Configure the layer-3 (L3) agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/l3_agent.ini
  [DEFAULT]
  interface_driver = linuxbridge
(8) Configure the metadata agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini
  #Edit
  [root@controller ~]# vim /etc/neutron/metadata_agent.ini
  [DEFAULT]
  nova_metadata_host = controller
  metadata_proxy_shared_secret = METADATA_SECRET
  #'METADATA_SECRET' is a password you can define yourself; it must match the metadata parameter configured in nova later
(9) Configure the compute service to use the network service
  #In the [neutron] section, configure access parameters and enable the metadata proxy
  [root@controller ~]# vim /etc/nova/nova.conf
  [neutron]
  auth_url = http://controller:5000
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = 111111
  service_metadata_proxy = true
  metadata_proxy_shared_secret = METADATA_SECRET #the secret must match
(10) Create the network service initialization script link
  [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(11) Populate the database
  [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(12) Restart the nova API service
  [root@controller ~]# systemctl restart openstack-nova-api.service
(13) Start the neutron services and enable them at boot
  [root@controller ~]# systemctl enable --now neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Compute Node: Private (Self-Service) Network

(1) Install the neutron packages
  [root@compute ~]# dnf install -y openstack-neutron-linuxbridge ebtables ipset
(2) Edit the neutron configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
  [root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
  #Edit
  [root@compute ~]# vim /etc/neutron/neutron.conf
  [DEFAULT]
  transport_url = rabbit://openstack:111111@controller
  auth_strategy = keystone
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = 111111
  [oslo_concurrency]
  lock_path = /var/lib/neutron/tmp
(3) Configure the Linux bridge agent
  #Back up the configuration file, then strip the comments from the working copy
  [root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
  [root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  #Edit
  [root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  [linux_bridge]
  physical_interface_mappings = provider:ens34 #use the NAT NIC that provides connectivity to instances
  [vxlan]
  enable_vxlan = true
  local_ip = 10.10.10.20
  l2_population = true
  [securitygroup]
  enable_security_group = true
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) Set up the bridge filter
  #Modify the system parameters file
  [root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
  #Load the br_netfilter module
  [root@compute ~]# modprobe br_netfilter
  #Check
  [root@compute ~]# sysctl -p
  net.bridge.bridge-nf-call-iptables = 1 #this output means the configuration succeeded
  net.bridge.bridge-nf-call-ip6tables = 1 #this output means the configuration succeeded
(5) Configure the compute service to use the network service
  #In the [neutron] section, configure the access parameters
  [root@compute ~]# vim /etc/nova/nova.conf
  [neutron]
  auth_url = http://controller:5000
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = 111111
(6) Restart the compute service (the compute node runs nova-compute, not the nova API)
  [root@compute ~]# systemctl restart openstack-nova-compute.service
(7) Start the Linux bridge agent and enable it at boot
  [root@compute ~]# systemctl enable --now neutron-linuxbridge-agent.service

Verifying that the private network services are running

(8) View the network agent list on the controller
  #View the network agent list on the controller node
  [root@controller ~]# openstack network agent list
  #On success there are usually five agents: a Metadata agent, a DHCP agent, an L3 agent, and two Linux bridge agents, one on controller and one on compute
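To exercise the self-service path, a minimal sketch following the install-guide commands would be the following; the network name, subnet range, and gateway are example values, and the router's external gateway assumes a provider network named provider already exists:

  #Create a self-service (vxlan tenant) network and a subnet on it
  [root@controller ~]# openstack network create selfservice
  [root@controller ~]# openstack subnet create --network selfservice --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
  #Create a router, attach the self-service subnet, and set the provider network as its gateway
  [root@controller ~]# openstack router create router
  [root@controller ~]# openstack router add subnet router selfservice
  [root@controller ~]# openstack router set router --external-gateway provider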

Dashboard

(1) Install the dashboard package

  [root@controller ~]# dnf install -y openstack-dashboard

(2) Edit the dashboard configuration file

  #For each option below, search the file; if the option already exists, modify it, otherwise add it
  [root@controller ~]# vim /etc/openstack-dashboard/local_settings
  OPENSTACK_HOST = "controller"
  #Without name resolution, the IPs must be listed here
  ALLOWED_HOSTS = ['controller','compute','10.10.10.10','10.10.10.20']
  SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
  CACHES = {
      'default': {
          'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
          'LOCATION': 'controller:11211',
      },
  }
  OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  OPENSTACK_API_VERSIONS = {
      "identity": 3,
      "image": 2,
      "volume": 3,
  }
  OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
  OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
  OPENSTACK_NEUTRON_NETWORK = {
      'enable_router': False,
      'enable_quotas': False,
      'enable_distributed_router': False,
      'enable_ha_router': False,
      'enable_lb': False,
      'enable_firewall': False,
      'enable_vpn': False,
      'enable_fip_topology_check': False,
  }
  TIME_ZONE = "Asia/Shanghai"

(3) Configure the httpd service

  [root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
  WSGIApplicationGroup %{GLOBAL} #add this line
  #Edit the dashboard configuration file
  [root@controller ~]# vim /etc/openstack-dashboard/local_settings
  WEBROOT = '/dashboard/' #add this line

(4) Restart the httpd and cache services

  [root@controller ~]# systemctl restart httpd.service memcached.service

(5) Log in to the web UI

  #Without name resolution, use the IP directly
  http://10.10.10.10/dashboard
  #To use name resolution, add these entries to the hosts file under C:\Windows\System32\drivers\etc on your PC
  10.10.10.10 controller
  10.10.10.20 compute
  #Then visit
  http://controller/dashboard

Cinder Block Storage

Controller node

(1) Create the database and grant privileges (the cinder packages must also be installed on the controller; see the note after this block)
  #Enter the database
  [root@controller ~]# mysql -u root -p111111
  #Create the cinder database
  MariaDB [(none)]> CREATE DATABASE cinder;
  #Grant privileges
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '111111';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '111111';
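The original skips the package installation step on the controller; per the Victoria install guide, the cinder API and scheduler packages need to be present before /etc/cinder/cinder.conf can be edited:

  #Install the block storage packages on the controller
  [root@controller ~]# dnf install -y openstack-cinder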
(2) Edit the configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
  [root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
  #Edit
  [root@controller ~]# vim /etc/cinder/cinder.conf
  [DEFAULT]
  transport_url = rabbit://openstack:111111@controller
  auth_strategy = keystone
  my_ip = 10.10.10.10
  [database]
  connection = mysql+pymysql://cinder:111111@controller/cinder
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = cinder
  password = 111111
  [oslo_concurrency]
  lock_path = /var/lib/cinder/tmp
(3) Populate the database
  [root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
(4) View the cinder database tables
  [root@controller ~]# mysql -uroot -p111111
  MariaDB [(none)]> use cinder;
  MariaDB [cinder]> show tables;
  MariaDB [cinder]> quit
(5) Create the cinder user and services, and assign the admin role
  #Create the cinder user
  [root@controller ~]# openstack user create --domain default --password 111111 cinder
  #Assign the admin role
  [root@controller ~]# openstack role add --project service --user cinder admin
  #Create the cinderv2 and cinderv3 services
  [root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
  [root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
(6) Register the API endpoints
cinderv2 service endpoints
  #public
  [root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
  #internal
  [root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
  #admin
  [root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
cinderv3 service endpoints
  #public
  [root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
  #internal
  [root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
  #admin
  [root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
(7) View the service endpoints
  [root@controller ~]# openstack endpoint list
(8) Configure the compute service to use block storage
  #Edit the nova configuration file
  [root@controller ~]# vi /etc/nova/nova.conf
  [cinder]
  os_region_name = RegionOne
  #Restart nova
  [root@controller ~]# systemctl restart openstack-nova-api.service
(9) Start the cinder services and enable them at boot
  [root@controller ~]# systemctl enable --now openstack-cinder-api.service openstack-cinder-scheduler.service

Compute node (shut down the VM and add a 50 GB disk)

(1) View the disks
  [root@compute ~]# fdisk --list
(2) Install the LVM packages
  [root@compute ~]# dnf -y install lvm2 device-mapper-persistent-data
(3) Create the LVM physical volume /dev/sdb
  [root@compute ~]# pvcreate /dev/sdb
(4) Create the LVM volume group cinder-volumes
  [root@compute ~]# vgcreate cinder-volumes /dev/sdb
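You can confirm that the physical volume and volume group were created with the standard LVM reporting commands:

  #Show the new physical volume and the cinder-volumes volume group
  [root@compute ~]# pvs
  [root@compute ~]# vgs cinder-volumes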
(5) Modify the LVM configuration
  #Back up the configuration file, then strip the comments from the working copy
  [root@compute ~]# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
  [root@compute ~]# grep -Ev '^$|#' /etc/lvm/lvm.conf.bak > /etc/lvm/lvm.conf
  #Edit
  [root@compute ~]# vi /etc/lvm/lvm.conf
  devices {
  filter = [ "a/sda/", "a/sdb/", "r/.*/"]
  }
(6) Install the cinder packages
  [root@compute ~]# dnf install -y openstack-cinder targetcli python3-keystone
(7) Edit the cinder configuration file
  #Back up the configuration file, then strip the comments from the working copy
  [root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
  [root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
  #Edit
  [root@compute ~]# vim /etc/cinder/cinder.conf
  [DEFAULT]
  transport_url = rabbit://openstack:111111@controller
  auth_strategy = keystone
  my_ip = 10.10.10.20
  enabled_backends = lvm
  glance_api_servers = http://controller:9292
  [database]
  connection = mysql+pymysql://cinder:111111@controller/cinder
  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = cinder
  password = 111111
  [lvm] #add this section if it does not exist
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes #must match the volume group name created above
  target_protocol = iscsi
  target_helper = lioadm
  [oslo_concurrency]
  lock_path = /var/lib/cinder/tmp
(8) Start the cinder services and enable them at boot
  [root@compute ~]# systemctl enable --now openstack-cinder-volume.service target.service
(9) Back on the controller node, view the volume service list
  [root@controller ~]# openstack volume service list
  #Output like the following means everything is working
  +------------------+-------------+------+---------+-------+----------------------------+
  | Binary           | Host        | Zone | Status  | State | Updated At                 |
  +------------------+-------------+------+---------+-------+----------------------------+
  | cinder-scheduler | controller  | nova | enabled | up    | 2023-05-11T08:12:03.000000 |
  | cinder-volume    | compute@lvm | nova | enabled | up    | 2023-05-11T08:12:02.000000 |
  +------------------+-------------+------+---------+-------+----------------------------+
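As a final end-to-end check, you can create a small test volume (the name and size here are arbitrary examples) and confirm it reaches the available state:

  #Create a 1 GB test volume on the LVM backend and list it
  [root@controller ~]# openstack volume create --size 1 test-volume
  [root@controller ~]# openstack volume list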

At this point, the Victoria-release OpenStack cloud platform deployment is complete.

Original article: https://www.cnblogs.com/skyrainmom/p/17401818.html
