Metonymical Deflection

Loose notes on everyday life, and occasionally IT infrastructure

Configuring OpenStack Neutron SR-IOV on CentOS 7

This article describes how to configure SR-IOV after building OpenStack on CentOS 7.7.*1

1. Overview

1-1. Environment
Chassis                          : ProLiant DL360e Gen8
System ROM                       : P73 01/22/2018
NIC                              : Intel X540-AT2
OS                               : CentOS7.7(1908)
Installed Environment Groups     : Server with GUI
Add-Ons for Selected Environment : Virtualization Client, Virtualization Hypervisor, Virtualization Tools 
1-2. Topology

f:id:metonymical:20191014185533p:plain

1-3. Overall flow

If you only need the SR-IOV-specific settings, the official documentation below is sufficient:
OpenStack Docs: SR-IOV
However, its reproducibility is questionable, so this article covers the configuration starting from a clean installation.*2
That said, since SR-IOV is the main topic, detailed explanations of everything else are omitted.

  • Preliminary setup
  • Controller preparation
  • Keystone configuration
  • Glance configuration
  • Nova configuration: Controller
  • Nova configuration: Compute
  • Neutron configuration: Controller
  • Neutron configuration: Compute
  • Horizon configuration
  • SR-IOV configuration: Controller
  • SR-IOV configuration: Compute
  • Launching an instance with SR-IOV

2. Overall preliminary setup

  • Edit the hosts files (or equivalent) on the Controller and Compute nodes so that they can resolve each other's names.
  • Disable firewalld and NetworkManager, and enable the network service.
  • Disable SELinux.
  • Make sure the interfaces corresponding to ens33 and eno1 have connectivity to the Internet.*3
  • The interfaces corresponding to ens36 and eno2 will be used as the OvS uplinks; no configuration is needed at this stage.
  • For ens1f0 and ens1f1, configure SR-IOV on the Compute node in advance, referring to my past articles.*4
  • The management IPs and hostnames of the Controller and Compute nodes are as follows:
Controller c76os11 192.168.11.101/24
Compute c76os12 192.168.11.102/24
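The preparation bullets above can be sketched roughly as the following commands, run on both nodes. This is a minimal sketch, assuming the hostnames and IPs from the table above; whether you use short names or FQDNs depends on your environment.

```shell
# Name resolution (append to /etc/hosts on both nodes)
cat >> /etc/hosts <<'EOF'
192.168.11.101 c76os11
192.168.11.102 c76os12
EOF

# Disable firewalld and NetworkManager, enable the legacy network service
systemctl disable --now firewalld NetworkManager
systemctl enable network

# Disable SELinux (takes effect after the next reboot)
sed -i -e 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```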

3. Controller preparation

Controller only

3-1. Repository setup
yum install -y centos-release-openstack-queens && \
yum upgrade -y

sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-OpenStack-queens.repo

reboot
3-2. Installing MariaDB
yum --enablerepo=centos-openstack-queens -y install mariadb mariadb-server python2-PyMySQL

vi /etc/my.cnf

# Add the charset setting
[mysqld]
character-set-server=utf8

systemctl start mariadb && \
systemctl enable mariadb

# Initial setup
mysql_secure_installation

# Example output of the initial setup

[root@c76os11 ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): (press Enter)
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password: (enter a password of your choice)
Re-enter new password: (enter the same password)
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
3-3. Installing RabbitMQ & Memcached
yum --enablerepo=centos-openstack-queens -y install rabbitmq-server memcached

vi /etc/sysconfig/memcached

# Add the Controller's IP address
OPTIONS="-l 127.0.0.1,::1,192.168.11.101"

systemctl start rabbitmq-server memcached && \
systemctl enable rabbitmq-server memcached

rabbitmqctl add_user openstack password
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
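If you want a quick check that the openstack user and its permissions were registered in RabbitMQ, you can list them (optional):

```shell
# Show registered users and their per-vhost permissions
rabbitmqctl list_users
rabbitmqctl list_permissions
```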

4. Keystone configuration

Controller only

4-1. Database registration in MariaDB
mysql -u root -p

create database keystone;
grant all privileges on keystone.* to keystone@'localhost' identified by 'password';
grant all privileges on keystone.* to keystone@'%' identified by 'password';
flush privileges;
exit
4-2. Installing Keystone
yum --enablerepo=centos-openstack-queens -y install openstack-keystone openstack-utils python-openstackclient httpd mod_wsgi

vi /etc/keystone/keystone.conf

# Add the following*5
[DEFAULT]
memcache_servers = 192.168.11.101:11211
[database]
connection = mysql+pymysql://keystone:password@192.168.11.101/keystone
[token]
provider = fernet

#db sync
su -s /bin/bash keystone -c "keystone-manage db_sync"

# Initialization
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

4-3. Bootstrap configuration
# Define the Controller's IP
export controller=192.168.11.101

# Keystone bootstrap (TESTPASSWD is a password of your choice)
keystone-manage bootstrap --bootstrap-password TESTPASSWD \
--bootstrap-admin-url http://$controller:5000/v3/ \
--bootstrap-internal-url http://$controller:5000/v3/ \
--bootstrap-public-url http://$controller:5000/v3/ \
--bootstrap-region-id RegionOne
4-4. Starting httpd
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl start httpd && \
systemctl enable httpd
4-5. Login environment variable file (optional)

This step is optional, but without it you will be prompted for credentials on every command, so I recommend it.
For export OS_PASSWORD=, set the same password you used in 4-3. Bootstrap configuration.

vi ~/keystonerc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=TESTPASSWD
export OS_AUTH_URL=http://192.168.11.101:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(ks)]\$ '

chmod 600 ~/keystonerc
source ~/keystonerc
echo "source ~/keystonerc " >> ~/.bash_profile
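As a quick sanity check of Keystone and the credential file, you can request a token; if the password or auth URL is wrong, this fails immediately:

```shell
# Should print a token table; errors here mean the keystonerc values are wrong
source ~/keystonerc
openstack token issue
openstack user list
```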
4-6. Creating a project
openstack project create --domain default --description "Default Project" service
openstack project list

5. Glance configuration

Controller only

5-1. User registration & endpoint setup
openstack user create --domain default --project service --password servicepassword glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image service" image

export controller=192.168.11.101
openstack endpoint create --region RegionOne image public http://$controller:9292
openstack endpoint create --region RegionOne image internal http://$controller:9292
openstack endpoint create --region RegionOne image admin http://$controller:9292
5-2. Database registration in MariaDB
mysql -u root -p

create database glance;
grant all privileges on glance.* to glance@'localhost' identified by 'password';
grant all privileges on glance.* to glance@'%' identified by 'password';
flush privileges;
exit
5-3. Installing and configuring Glance
yum --enablerepo=centos-openstack-queens -y install openstack-glance

# Configure glance-api.conf
mv /etc/glance/glance-api.conf /etc/glance/glance-api.conf.org
vi /etc/glance/glance-api.conf

[DEFAULT]
bind_host = 0.0.0.0

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[database]
connection = mysql+pymysql://glance:password@192.168.11.101/glance

[keystone_authtoken]
www_authenticate_uri = http://192.168.11.101:5000
auth_url = http://192.168.11.101:5000
memcached_servers = 192.168.11.101:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = servicepassword

[paste_deploy]
flavor = keystone


# Configure glance-registry.conf
mv /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.org
vi /etc/glance/glance-registry.conf

[DEFAULT]
bind_host = 0.0.0.0

[database]
connection = mysql+pymysql://glance:password@192.168.11.101/glance

[keystone_authtoken]
www_authenticate_uri = http://192.168.11.101:5000
auth_url = http://192.168.11.101:5000
memcached_servers = 192.168.11.101:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = servicepassword

[paste_deploy]
flavor = keystone

# Permissions and service startup*6
chmod 640 /etc/glance/glance-api.conf /etc/glance/glance-registry.conf
chown root:glance /etc/glance/glance-api.conf /etc/glance/glance-registry.conf
su -s /bin/bash glance -c "glance-manage db_sync"
systemctl start openstack-glance-api openstack-glance-registry && \
systemctl enable openstack-glance-api openstack-glance-registry
5-4. Registering images
mkdir /tmp/images

# Register cirros
wget -P /tmp/images http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

openstack image create "cirros-0.4.0" \
--file /tmp/images/cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 \
--container-format bare \
--public

openstack image list

# Register centos7
wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2

openstack image create "centos7" \
--file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 \
--disk-format qcow2 \
--container-format bare \
--public

openstack image list

6. Nova configuration: Controller

Controller only

6-1. User registration & endpoint setup
openstack user create --domain default --project service --password servicepassword nova
openstack role add --project service --user nova admin
openstack user create --domain default --project service --password servicepassword placement
openstack role add --project service --user placement admin
openstack service create --name nova --description "OpenStack Compute service" compute
openstack service create --name placement --description "OpenStack Compute Placement service" placement

export controller=192.168.11.101
openstack endpoint create --region RegionOne compute public http://$controller:8774/v2.1/%\(tenant_id\)s && \
openstack endpoint create --region RegionOne compute internal http://$controller:8774/v2.1/%\(tenant_id\)s && \
openstack endpoint create --region RegionOne compute admin http://$controller:8774/v2.1/%\(tenant_id\)s && \
openstack endpoint create --region RegionOne placement public http://$controller:8778 && \
openstack endpoint create --region RegionOne placement internal http://$controller:8778 && \
openstack endpoint create --region RegionOne placement admin http://$controller:8778
6-2. Database registration in MariaDB
mysql -u root -p

create database nova;
grant all privileges on nova.* to nova@'localhost' identified by 'password';
grant all privileges on nova.* to nova@'%' identified by 'password';
create database nova_api;
grant all privileges on nova_api.* to nova@'localhost' identified by 'password';
grant all privileges on nova_api.* to nova@'%' identified by 'password';
create database nova_placement;
grant all privileges on nova_placement.* to nova@'localhost' identified by 'password';
grant all privileges on nova_placement.* to nova@'%' identified by 'password';
create database nova_cell0;
grant all privileges on nova_cell0.* to nova@'localhost' identified by 'password';
grant all privileges on nova_cell0.* to nova@'%' identified by 'password';
flush privileges;
exit
6-3. Installing & configuring Nova
yum --enablerepo=centos-openstack-queens -y install openstack-nova

# Configure nova.conf
mv /etc/nova/nova.conf /etc/nova/nova.conf.org
vi /etc/nova/nova.conf

[DEFAULT]
my_ip = 192.168.11.101
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
log_dir = /var/log/nova
transport_url = rabbit://openstack:password@192.168.11.101

[api]
auth_strategy = keystone

[glance]
api_servers = http://192.168.11.101:9292

[oslo_concurrency]
lock_path = $state_path/tmp

[api_database]
connection = mysql+pymysql://nova:password@192.168.11.101/nova_api

[database]
connection = mysql+pymysql://nova:password@192.168.11.101/nova

[keystone_authtoken]
www_authenticate_uri = http://192.168.11.101:5000
auth_url = http://192.168.11.101:5000
memcached_servers = 192.168.11.101:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = servicepassword

[placement]
auth_url = http://192.168.11.101:5000
os_region_name = RegionOne
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = servicepassword

[placement_database]
connection = mysql+pymysql://nova:password@192.168.11.101/nova_placement

[wsgi]
api_paste_config = /etc/nova/api-paste.ini

# Change permissions
chmod 640 /etc/nova/nova.conf
chgrp nova /etc/nova/nova.conf

# Configure 00-nova-placement-api.conf
vi /etc/httpd/conf.d/00-nova-placement-api.conf

# Add just above </VirtualHost>
<Directory /usr/bin>
    Require all granted
</Directory>
6-4. DB registration and service startup
# Run the nova-manage commands below one line at a time.
su -s /bin/bash nova -c "nova-manage api_db sync"
su -s /bin/bash nova -c "nova-manage cell_v2 map_cell0"
su -s /bin/bash nova -c "nova-manage db sync"
su -s /bin/bash nova -c "nova-manage cell_v2 create_cell --name cell1"

systemctl restart httpd
chown nova. /var/log/nova/nova-placement-api.log

for service in api consoleauth conductor scheduler novncproxy; do
systemctl start openstack-nova-$service
systemctl enable openstack-nova-$service
done

openstack compute service list

# Reboot here
reboot

# Example output: the command below (su -s /bin/bash nova -c "nova-manage db sync") prints warnings, but you can ignore them and proceed.

[root@c76os11 images(keystone)]# su -s /bin/bash nova -c "nova-manage db sync"
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)

7. Nova configuration: Compute

Compute only

7-1. Preparation
yum install -y centos-release-openstack-queens && \
yum upgrade -y

sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-OpenStack-queens.repo

reboot
7-2. Installing & configuring Nova
yum --enablerepo=centos-openstack-queens -y install openstack-nova-compute

# Configure nova.conf
mv /etc/nova/nova.conf /etc/nova/nova.conf.org
vi /etc/nova/nova.conf

[DEFAULT]
my_ip = 192.168.11.102
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
log_dir = /var/log/nova
transport_url = rabbit://openstack:password@192.168.11.101
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type=kvm
cpu_mode=host-passthrough
hw_machine_type=x86_64=pc-i440fx-rhel7.6.0

[api]
auth_strategy = keystone

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.11.101:6080/vnc_auto.html

[glance]
api_servers = http://192.168.11.101:9292

[oslo_concurrency]
lock_path = $state_path/tmp

[keystone_authtoken]
www_authenticate_uri = http://192.168.11.101:5000
auth_url = http://192.168.11.101:5000
memcached_servers = 192.168.11.101:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = servicepassword

[placement]
auth_url = http://192.168.11.101:5000
os_region_name = RegionOne
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = servicepassword

[wsgi]
api_paste_config = /etc/nova/api-paste.ini


# Change permissions
chmod 640 /etc/nova/nova.conf
chgrp nova /etc/nova/nova.conf

# Start the service
systemctl start openstack-nova-compute && \
systemctl enable openstack-nova-compute
7-3. Compute node discovery

Controller only

su -s /bin/bash nova -c "nova-manage cell_v2 discover_hosts"
openstack compute service list

# Example output
[root@c76os11 ~(keystone)]# openstack compute service list
+----+------------------+---------------+----------+----------+-------+----------------------------+
| ID | Binary           | Host          | Zone     | Status   | State | Updated At                 |
+----+------------------+---------------+----------+----------+-------+----------------------------+
|  7 | nova-consoleauth | c76os11.md.jp | internal | enabled  | up    | 2019-10-14T01:49:01.000000 |
|  8 | nova-conductor   | c76os11.md.jp | internal | enabled  | up    | 2019-10-14T01:49:01.000000 |
| 10 | nova-scheduler   | c76os11.md.jp | internal | enabled  | up    | 2019-10-14T01:49:02.000000 |
| 11 | nova-compute     | c76os12.md.jp | nova     | enabled  | up    | 2019-10-14T01:49:00.000000 |
+----+------------------+---------------+----------+----------+-------+----------------------------+

8. Neutron configuration: Controller

Controller only

8-1. User & endpoint registration
openstack user create --domain default --project service --password servicepassword neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking service" network

export controller=192.168.11.101
openstack endpoint create --region RegionOne network public http://$controller:9696 && \
openstack endpoint create --region RegionOne network internal http://$controller:9696 && \
openstack endpoint create --region RegionOne network admin http://$controller:9696
8-2. Database registration
mysql -u root -p

create database neutron_ml2;
grant all privileges on neutron_ml2.* to neutron@'localhost' identified by 'password';
grant all privileges on neutron_ml2.* to neutron@'%' identified by 'password';
flush privileges;
exit
8-3. Installing and configuring Neutron
yum --enablerepo=centos-openstack-queens -y install openstack-neutron openstack-neutron-ml2

# Configure neutron.conf
mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.org
vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
state_path = /var/lib/neutron
dhcp_agent_notification = True
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
transport_url = rabbit://openstack:password@192.168.11.101

[keystone_authtoken]
www_authenticate_uri = http://192.168.11.101:5000
auth_url = http://192.168.11.101:5000
memcached_servers = 192.168.11.101:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = servicepassword

[database]
connection = mysql+pymysql://neutron:password@192.168.11.101/neutron_ml2

[nova]
auth_url = http://192.168.11.101:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = servicepassword

[oslo_concurrency]
lock_path = $state_path/tmp

# Set permissions
chmod 640 /etc/neutron/neutron.conf
chgrp neutron /etc/neutron/neutron.conf
8-4. Configuring metadata_agent.ini
vi /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = 192.168.11.101
metadata_proxy_shared_secret = metadata_secret

[cache]
memcache_servers = 192.168.11.101:11211
8-5. Configuring ml2_conf.ini
vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = flat,vlan
mechanism_drivers = openvswitch,l2population

[ml2_type_vlan]
network_vlan_ranges = physnet8:4000:4094

<Note>
physnet8:4000:4094 is an arbitrary name and range.
In this setup they are defined as follows:

  • physnet8 is used for instance management traffic.
  • 4000:4094 is the VLAN range used internally by OvS.
8-6. Additional nova.conf settings
vi /etc/nova/nova.conf

# Add the following
[DEFAULT]
use_neutron = True
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# Append the [neutron] section at the end of the file
[neutron]
auth_url = http://192.168.11.101:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = servicepassword
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata_secret

# Start services
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/bash neutron -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head"
systemctl start neutron-server neutron-metadata-agent && \
systemctl enable neutron-server neutron-metadata-agent && \
systemctl restart openstack-nova-api
8-7. Installing & configuring the remaining agents & OVS
yum --enablerepo=centos-openstack-queens -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

# Configure the L3 agent
vi /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = openvswitch

# Configure the DHCP agent
vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

# Configure OvS & start services
ovs-vsctl add-br br-int

systemctl start openvswitch && \
systemctl enable openvswitch

for service in dhcp-agent l3-agent metadata-agent openvswitch-agent; do
systemctl start neutron-$service
systemctl enable neutron-$service
done

9. Neutron configuration: Compute

Compute only

9-1. Installing & configuring Neutron
yum --enablerepo=centos-openstack-queens -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

# Configure neutron.conf
mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.org
vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
state_path = /var/lib/neutron
allow_overlapping_ips = True
transport_url = rabbit://openstack:password@192.168.11.101

[keystone_authtoken]
www_authenticate_uri = http://192.168.11.101:5000
auth_url = http://192.168.11.101:5000
memcached_servers = 192.168.11.101:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = servicepassword

[oslo_concurrency]
lock_path = $state_path/lock

# Change permissions
chmod 640 /etc/neutron/neutron.conf
chgrp neutron /etc/neutron/neutron.conf
9-2. Configuring ml2_conf.ini
vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = flat,vlan
mechanism_drivers = openvswitch,l2population

[ml2_type_vlan]
network_vlan_ranges = physnet8:4000:4094
9-3. Additional nova.conf settings
vi /etc/nova/nova.conf

# Add the following
[DEFAULT]
use_neutron = True
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal = True
vif_plugging_timeout = 300

# Append the [neutron] section at the end of the file
[neutron]
auth_url = http://192.168.11.101:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = servicepassword
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata_secret

# Start services
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
systemctl start openvswitch && \
systemctl enable openvswitch
9-4. OvS configuration
ovs-vsctl add-br br-int
systemctl restart openstack-nova-compute

systemctl start neutron-openvswitch-agent && \
systemctl enable neutron-openvswitch-agent
9-5. Management interface setup

Controller and Compute
These interfaces are used as the network interfaces for metadata-agent & dhcp-agent traffic.

# Configure on the Controller
ovs-vsctl add-br br-ens36
ovs-vsctl add-port br-ens36 ens36


vi /etc/sysconfig/network-scripts/ifcfg-br-ens36

NAME=br-ens36
DEVICE=br-ens36
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
NM_CONTROLLED=no
ONBOOT=yes
HOTPLUG=no


vi /etc/sysconfig/network-scripts/ifcfg-ens36

NAME=ens36
DEVICE=ens36
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ens36
BOOTPROTO=none
NM_CONTROLLED=no
HOTPLUG=no


vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2_type_flat]
flat_networks = *


vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
bridge_mappings = physnet8:br-ens36

systemctl restart neutron-openvswitch-agent


# Configure on the Compute node
ovs-vsctl add-br br-eno2
ovs-vsctl add-port br-eno2 eno2

vi /etc/sysconfig/network-scripts/ifcfg-br-eno2

NAME=br-eno2
DEVICE=br-eno2
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
NM_CONTROLLED=no
ONBOOT=yes
HOTPLUG=no


vi /etc/sysconfig/network-scripts/ifcfg-eno2

NAME=eno2
DEVICE=eno2
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-eno2
BOOTPROTO=none
NM_CONTROLLED=no
HOTPLUG=no


vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2_type_flat]
flat_networks = *


vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
bridge_mappings = physnet8:br-eno2

systemctl restart neutron-openvswitch-agent

10. Horizon configuration

Controller only

10-1. Installing & configuring Horizon
yum --enablerepo=centos-openstack-queens -y install openstack-dashboard

# Configure local_settings
vi /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['dlp.srv.world', 'localhost', '*']

OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.11.101:11211',
    },
}

OPENSTACK_HOST = "192.168.11.101"

# Configure openstack-dashboard.conf
# Add directly below WSGISocketPrefix run/wsgi
vi /etc/httpd/conf.d/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}

# Restart services
systemctl restart httpd.service memcached.service
10-2. Creating a flavor

Create a flavor in advance.

openstack flavor create --id 0 --vcpus 1 --ram 1024 --disk 10 m1.small
openstack flavor list
10-3. Quick functional check

As a functional check, create the mgmt network and verify that a test instance boots.

# Create nw-mgmt
openstack network create \
--share \
--no-default \
--enable \
--project admin \
--external \
--provider-network-type flat \
--provider-physical-network physnet8 \
nw-mgmt

# Create subnet-mgmt
openstack subnet create \
--project admin \
--gateway 10.10.0.254 \
--subnet-range 10.10.0.0/24 \
--allocation-pool start=10.10.0.16,end=10.10.0.127 \
--network nw-mgmt \
subnet-mgmt

# Launch an instance
netID=$(openstack network list | grep nw-mgmt | awk '{ print $2 }')
openstack server create --flavor m1.small --image cirros-0.4.0 --nic net-id=$netID inst01
openstack server list

# Example output
[root@c76os11 ~(keystone)]# openstack server list
+--------------------------------------+---------+---------+----------------------------------------------------------------------------+---------------------+----------+
| ID                                   | Name    | Status  | Networks                                                                   | Image               | Flavor   |
+--------------------------------------+---------+---------+----------------------------------------------------------------------------+---------------------+----------+
| 2d2af2b3-89eb-4508-b425-49e67b3dbcaa | inst01  | ACTIVE  | nw-mgmt=10.10.0.28                                                         | cirros-0.4.0        | m1.small |
+--------------------------------------+---------+---------+----------------------------------------------------------------------------+---------------------+----------+

# Verify the Dashboard
http://192.168.11.101/dashboard/
f:id:metonymical:20191014120535p:plain
Username: admin
Password: TESTPASSWD

This completes the preparation for configuring SR-IOV.
Also verify connectivity to the instance above.
If the connectivity check fails, try rebooting both the Controller and the Compute node.
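Since nw-mgmt has no router in this setup, one way to reach the test instance from the Controller is through the dhcp-agent's network namespace. This is a sketch: the qdhcp namespace name and instance IP will differ in your environment, so substitute the values shown by the commands themselves.

```shell
# List the dhcp namespaces, then ping the instance from inside the nw-mgmt one
ip netns
ip netns exec qdhcp-<network-uuid> ping -c 3 10.10.0.28
```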

11. SR-IOV configuration: Controller

Controller only

# Additional nova.conf settings
vi /etc/nova/nova.conf

[DEFAULT]
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters

# Additional ml2_conf.ini settings
# Add sriovnicswitch to mechanism_drivers and the extra VLAN ranges (physnet0, physnet1) as shown below.
vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = flat,vlan
mechanism_drivers = openvswitch,l2population,sriovnicswitch

[ml2_type_vlan]
network_vlan_ranges = physnet8:4000:4094,physnet0:300:304,physnet1:300:304

# Additional neutron.conf settings
vi /etc/neutron/neutron.conf

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

# Restart the openvswitch agent
systemctl restart neutron-openvswitch-agent

12. SR-IOV configuration: Compute

Compute only

# Install the sriov-nic-agent
yum --enablerepo=centos-openstack-queens -y install openstack-neutron-sriov-nic-agent

# Additional nova.conf settings
vi /etc/nova/nova.conf

[DEFAULT]
pci_passthrough_whitelist = {"devname":"ens1f0","physical_network":"physnet0"}
pci_passthrough_whitelist = {"devname":"ens1f1","physical_network":"physnet1"}


# Configure sriov_agent.ini
vi /etc/neutron/plugins/ml2/sriov_agent.ini

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

[sriov_nic]
physical_device_mappings = physnet0:ens1f0,physnet1:ens1f1


# Additional neutron.conf settings
vi /etc/neutron/neutron.conf

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

# Start services
systemctl restart neutron-openvswitch-agent && \
systemctl start neutron-sriov-nic-agent && \
systemctl enable neutron-sriov-nic-agent
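Back on the Controller, you can check that the sriov-nic-agent has registered alongside the other agents (a quick check; the host names depend on your environment):

```shell
# The list should include an "NIC Switch agent" entry for the Compute node
openstack network agent list
```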

# NIC configuration
vi /etc/sysconfig/network-scripts/ifcfg-ens1f0

TYPE=Ethernet
BOOTPROTO=none
NAME=ens1f0
DEVICE=ens1f0
ONBOOT=yes

vi /etc/sysconfig/network-scripts/ifcfg-ens1f1

TYPE=Ethernet
BOOTPROTO=none
NAME=ens1f1
DEVICE=ens1f1
ONBOOT=yes

systemctl restart network
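If the VFs were created beforehand as noted in section 2, ip link show on each PF should list vf entries. A quick check might look like this (VF counts depend on your earlier SR-IOV setup):

```shell
# Each configured VF appears as a "vf N" line under the PF
ip link show ens1f0
cat /sys/class/net/ens1f0/device/sriov_numvfs
```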

13. Launching instances with SR-IOV

Controller only

13-1. Creating the network, subnet, and ports for VLAN 300

From here we create the networks and related resources for SR-IOV. If a command is rejected with an error, try rebooting the Controller and Compute nodes once.*7

openstack network create \
--share \
--no-default \
--enable \
--project admin \
--external \
--provider-network-type vlan \
--provider-physical-network physnet0 \
--provider-segment 300 \
sriov300

openstack subnet create \
--project admin \
--no-dhcp \
--gateway 192.168.30.254 \
--subnet-range 192.168.30.0/24 \
--allocation-pool start=192.168.30.16,end=192.168.30.127 \
--network sriov300 \
sriov300_sb

openstack port create \
--network sriov300 \
--vnic-type direct \
--fixed-ip subnet=sriov300_sb,ip-address=192.168.30.128 \
--no-security-group \
--enable \
sriov300_port128

openstack port create \
--network sriov300 \
--vnic-type direct \
--fixed-ip subnet=sriov300_sb,ip-address=192.168.30.129 \
--no-security-group \
--enable \
sriov300_port129
13-2. Creating the network, subnet, and ports for VLAN 301
openstack network create \
--share \
--no-default \
--enable \
--project admin \
--external \
--provider-network-type vlan \
--provider-physical-network physnet1 \
--provider-segment 301 \
sriov301

openstack subnet create \
--project admin \
--no-dhcp \
--gateway 192.168.31.254 \
--subnet-range 192.168.31.0/24 \
--allocation-pool start=192.168.31.16,end=192.168.31.127 \
--network sriov301 \
sriov301_sb

openstack port create \
--network sriov301 \
--vnic-type direct \
--fixed-ip subnet=sriov301_sb,ip-address=192.168.31.128 \
--no-security-group \
--enable \
sriov301_port128

openstack port create \
--network sriov301 \
--vnic-type direct \
--fixed-ip subnet=sriov301_sb,ip-address=192.168.31.129 \
--no-security-group \
--enable \
sriov301_port129

<Note>
Create the ports via the CLI or API.
The official documentation states:
SR-IOV is not integrated into the OpenStack Dashboard (horizon). Users must use the CLI or API to configure SR-IOV interfaces.
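After creating a port, you can verify from the CLI that it came out as an SR-IOV (direct) port; binding_vif_type stays unbound until the port is attached to an instance:

```shell
# binding_vnic_type should show "direct" for an SR-IOV port
openstack port show sriov300_port128 -c binding_vnic_type -c binding_vif_type
```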

13-3. Creating the user data
vi udata.txt

#cloud-config
password: TESTPASSWD
chpasswd: { expire: False }
ssh_pwauth: True
13-4. Launching an instance from the CLI
netID=$(openstack network list | grep nw-mgmt | awk '{ print $2 }')
portID01=$(openstack port list |grep sriov300_port128 | awk '{ print $2 }')
portID02=$(openstack port list |grep sriov301_port128 | awk '{ print $2 }')

openstack server create \
--flavor m1.small \
--image centos7 \
--user-data udata.txt \
--nic net-id=$netID \
--nic port-id=$portID01 \
--nic port-id=$portID02 \
inst02

<Note>
Specify SR-IOV interfaces by port-id, not net-id.
The same applies when launching from the GUI in the next section: assign them on the "Network Ports" screen.*8

13-5. Launching an instance from the GUI

Go to Project > Compute > Instances and click Launch Instance.
f:id:metonymical:20191014182457p:plain
Enter an instance name and click Next.
f:id:metonymical:20191014182538p:plain
Allocate centos7 and click Next.
f:id:metonymical:20191014182649p:plain
Allocate m1.small and click Next.
f:id:metonymical:20191014182739p:plain
Allocate nw-mgmt and click Next.
f:id:metonymical:20191014182822p:plain
Allocate the ports created earlier, then keep clicking Next until the Configuration step.
f:id:metonymical:20191014182855p:plain
Paste in the contents of 13-3. Creating the user data and click Launch Instance.
f:id:metonymical:20191014183035p:plain
If the instance starts as shown below, you are done.
f:id:metonymical:20191014183107p:plain

13-6. Verifying the NICs
[root@c76os11 ~(keystone)]# openstack server list
+--------------------------------------+--------+---------+----------------------------------------------------------------------+--------------+----------+
| ID                                   | Name   | Status  | Networks                                                             | Image        | Flavor   |
+--------------------------------------+--------+---------+----------------------------------------------------------------------+--------------+----------+
| a6f54ee9-6c3d-4497-b2f5-61ab0b493e94 | inst03 | ACTIVE  | sriov300=192.168.30.129; sriov301=192.168.31.129; nw-mgmt=10.10.0.22 | centos7      | m1.small |
| afe80655-2c2c-4349-82cb-5843256d1d05 | inst02 | ACTIVE  | sriov300=192.168.30.128; sriov301=192.168.31.128; nw-mgmt=10.10.0.21 | centos7      | m1.small |
| c13aeb0a-85b3-4174-a977-d317b7c84cc3 | inst01 | SHUTOFF | nw-mgmt=10.10.0.18                                                   | cirros-0.4.0 | m1.small |
+--------------------------------------+--------+---------+----------------------------------------------------------------------+--------------+----------+

#As an example, we ssh into inst02 here.
[root@c76os11 ~(keystone)]# ip netns
qdhcp-5561d190-1fbf-4d06-aa54-253935fdce59 (id: 0)
[root@c76os11 ~(keystone)]# ip netns exec qdhcp-5561d190-1fbf-4d06-aa54-253935fdce59 ssh centos@10.10.0.21
centos@10.10.0.21's password:
Last login: Mon Oct 14 09:15:26 2019 from host-10-10-0-16.openstacklocal
[centos@inst02 ~]$ sudo su -
Last login: Mon Oct 14 09:15:29 UTC 2019 on pts/0

#Check the MAC addresses of ens5 and ens6 on inst02.
[root@inst02 ~]# ip link show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens5:  mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:43:a0:c5 brd ff:ff:ff:ff:ff:ff
3: ens6:  mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:43:f0:66 brd ff:ff:ff:ff:ff:ff
4: eth0:  mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:13:21:cb brd ff:ff:ff:ff:ff:ff

#Confirm that they match the MAC addresses of the VFs on the Compute node.
[root@c76os12 ~]# ip link show
6: ens1f0:  mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:3e:6d:68 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC fa:16:3e:ae:b4:91, vlan 300, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 MAC fa:16:3e:43:a0:c5, vlan 300, spoof checking on, link-state auto, trust off, query_rss off
7: ens1f1:  mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:3e:6d:6a brd ff:ff:ff:ff:ff:ff
    vf 0 MAC fa:16:3e:b7:1f:74, vlan 301, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 MAC fa:16:3e:43:f0:66, vlan 301, spoof checking on, link-state auto, trust off, query_rss off
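This instance-to-VF MAC comparison can also be done mechanically. A minimal sketch using hard-coded sample lines taken from the outputs above; on a live system you would feed in the actual `ip link show` output from the guest and from the Compute node:

```shell
# Sample "vf 1" line from "ip link show" on the Compute node (taken from
# the output above); field $4 is the MAC with a trailing comma.
vf_line='    vf 1 MAC fa:16:3e:43:a0:c5, vlan 300, spoof checking on'
vf_mac=$(echo "$vf_line" | awk '{ print $4 }' | tr -d ',')

# MAC of ens5 inside inst02, from the guest-side output above.
guest_mac='fa:16:3e:43:a0:c5'

[ "$vf_mac" = "$guest_mac" ] && echo "ens5 is backed by vf 1 on ens1f0"
```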
13-7. Configuring the NICs

Configure IP addresses on ens5 and ens6 of inst02, then restart the network service.

[root@inst02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens5

NAME=ens5
DEVICE=ens5
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=no
IPADDR=192.168.30.128
PREFIX=24

[root@inst02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens6

NAME=ens6
DEVICE=ens6
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=no
IPADDR=192.168.31.128
PREFIX=24
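Since the two ifcfg files differ only in the interface name and the third octet, they can be generated in one loop. A sketch that writes into the current directory for a safe dry run; on inst02 itself, CFGDIR would be /etc/sysconfig/network-scripts:

```shell
# Generate the two ifcfg files shown above in one loop.
# CFGDIR is "." here for a safe dry run; on inst02 itself it would be
# /etc/sysconfig/network-scripts.
CFGDIR=.
for PAIR in 'ens5 30' 'ens6 31'; do
  set -- $PAIR          # $1 = interface name, $2 = third octet
  cat > "${CFGDIR}/ifcfg-$1" <<EOF
NAME=$1
DEVICE=$1
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=no
IPADDR=192.168.$2.128
PREFIX=24
EOF
done
```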


That's all.

14. Closing Notes

I referred to the following sites:
CentOS 7 : OpenStack Queens インストールと設定 : Server World
OpenStack Docs: SR-IOV

Once you have succeeded even once, you start to get a feel for the tricky spots; but if you have never gotten it working, it is easy to get stuck, editing the ini and conf files on the Controller and Compute over and over and rebooting repeatedly.

*1: The diagrams say c76, but operation has also been verified on 7.7. Incidentally, the 7.6-related files have recently been removed from the CentOS mirror sites across the board, so even yum update may fail unless you use 7.7.

*2: I personally had a very hard time with this, so I wanted to write up the procedure that actually worked when building everything from scratch this way.

*3: Because you will be running yum update and the like.

*4: If you follow the earlier article, do not configure step 10. Depending on your environment, step 9 is also unnecessary.

*5: If you want to know exactly which section and roughly which line, search for strings such as memcache_servers or mysql+pymysql.

*6: Where a wait is needed, && \ is used.

*7: Because the agents may no longer be consistent with one another.

*8: Specifying them by net-id fails. My guess is that this is because the SR-IOV VFs are allocated dynamically.