OpenStack on Baserock
This guide covers how to deploy an OpenStack cluster with Baserock, along with a few short guides for verifying that it is working correctly; while OpenStack itself provides guides for configuring and verifying an OpenStack deployment, at the time of writing there is not yet an upstreamed guide.
The Install guides may still be a useful reference when following this.
Deployment
Gathering environment details
For OpenStack to be able to assign floating IP addresses to nodes, it needs to have a range of addresses to allocate.
Some services need to bind to specific interfaces only, and since the only way to configure this is via the binding IP address, the MAC addresses of the devices need to be collected and registered with the network administrator so that static leases can be assigned.
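One way to collect these, on hardware that is already running Linux, is to list every network device together with its MAC address; the snippet below is just one possible approach (any equivalent method works):
# Print every network device together with its MAC address
for dev in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
done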
In multi-node configurations, it is recommended to have a private network between the nodes, rather than communicating over the external network, so the mapping between the physical links and the devices needs to be recorded for later configuration.
In this guide we are going to allow DHCP for the external network, but we also need an extra address assignment in a different subnet for the floating IP range, which is going to be the 172.16.25.0/24 subnet.
Deploying system images
There are multiple cluster morphologies for deploying different configurations of OpenStack: one with everything on a single node, a second spread over two nodes, and a third over three nodes.
Deployment instructions are self-contained. For two-node and three-node
configurations it is recommended to use the HOSTS_*
options to add
entries to the hosts file for all the statically configured IP addresses,
and to use the host names for all other configuration entries that
require an address.
For multi-node configurations, addressing other nodes is handled by
assigning static addresses to the appropriate interface in the
SIMPLE_NETWORK
configuration option, and adding entries to hosts
files.
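As an illustration only (the host names and addresses below are assumptions, not values taken from the cluster definitions), the resulting hosts entries on each node might look something like:
10.0.0.2   controller
10.0.0.3   network
10.0.0.4   compute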
For more information, have a look at the cluster definitions.
Post-deployment networking configuration
Since we have our floating IP range in a different subnet, and the
networking is more complicated than SIMPLE_NETWORK
can handle at the
time of writing, we must do some post-hoc network configuration by
logging into the node that handles networking (the only node in a
single-node OpenStack configuration, or the controller node in a
two-node configuration) and running the following command.
cat /run/systemd/network/*-br-ex-dhcp.network - >>/etc/systemd/network/10-br-ex-dhcp.network <<'EOF'
# If we're on systemd 219, we can enable forwarding here
IPForward=yes
[Address]
Address=172.16.25.1/24
Label=internal
EOF
Any other deployment-specific networking configuration that cannot be
expressed in the SIMPLE_NETWORK
configuration may be done here.
To apply this networking change, run systemctl restart systemd-networkd.service.
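To double-check that the change was applied (assuming the external bridge is named br-ex, as the file name above suggests), you could run:
ip addr show br-ex | grep 172.16.25.1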
Authorisation environment variable setup
Create openrc by looking up the default username and password in the
cluster definition. The defaults are:
cat >openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=veryinsecure
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF
Substitute the word controller
in the OS_AUTH_URL
variable with an
address at which the controller node (the only node in single-node
OpenStack configurations) can be reached.
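As a quick sanity check that the credentials and OS_AUTH_URL are correct, you could source the file and run any read-only command, for example:
. openrc
keystone user-list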
Physical network configuration
You then need to start configuring networking. Now that we are actually
running OpenStack commands, we need the authentication configuration
from here on, so source it with . openrc
before running the following command.
neutron net-create ext-net --shared --router:external \
--provider:network_type=flat --provider:physical_network=External
If this command fails with unsupported locale setting
you may need to
run export LC_ALL=C
to fix this.
For this guide we use the 172.16.25.0/24 range, so configure that subnet with a reasonably sized allocation pool for virtual machines.
neutron subnet-create --name ext-subnet --disable-dhcp \
--allocation-pool start=172.16.25.11,end=172.16.25.250 \
--gateway 172.16.25.1 ext-net 172.16.25.0/24
If you forget to pass
--disable-dhcp
you will create a DHCP server in your network.
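You can review what was created with the usual Neutron show commands, for example:
neutron net-show ext-net
neutron subnet-show ext-subnet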
Testing with a demo tenancy
Add demo user and tenant
keystone tenant-create --name demo --description "Demo Tenant"
keystone user-create --name demo --tenant demo --pass demo --email demo
cat >demorc <<'EOF'
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v2.0
EOF
As before, swap controller
for the address of the controller node.
Adding a private network
We need to source the new user config, so we can set up the demo network's config.
. demorc
neutron net-create demo-net
neutron subnet-create --name demo-subnet --gateway 192.168.1.1 \
demo-net 192.168.1.0/24
Routing a private network to the external network
The network has been created, but before it is able to get external network connectivity, we need to add a router to bridge the networks.
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net
If the last command fails with Unable to find subnet with name
'ext-subnet'
, it may be that the ext-net
network is missing the
shared flag. To set it, authenticate with . openrc
and run neutron
net-update ext-net --shared
.
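To confirm that the router has both an interface on demo-subnet and a gateway on ext-net, you could list its ports:
neutron router-port-list demo-router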
Adding a test image
To download a test CirrOS image, run the following command. It is also recommended that you verify that the image download URL is still active.
glance image-create --name cirros64 \
--location http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img \
--disk-format qcow2 --container-format bare --progress
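Once the upload has finished, the image should be listed as active:
glance image-list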
Testing Virtual Machine booting
Boot instructions
To test that VMs work, run:
nova boot test --flavor 1 --image cirros64 --nic net-id=`neutron net-show demo-net -F id -f value`
It is possible to track its boot progress by running:
watch --differences=permanent nova show test
At this point the virtual machine may connect to other machines on its internal network, but for full use, it needs to have some port rules allowed, and a floating IP address.
Allowing access in
The following rules allow ICMP (ping) and TCP 22 (SSH).
neutron security-group-rule-create --protocol icmp \
--direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 22 \
--port-range-max 22 --direction ingress default
Note that it is also possible to set security group rules with nova
,
but this is for the legacy nova networking, and these rules won't be
applied.
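If you want to double-check that the rules landed in the Neutron security group, you can list them:
neutron security-group-rule-list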
Assigning a floating IP
ip=$(neutron floatingip-create ext-net -c floating_ip_address -f value | tail -n1)
nova floating-ip-associate test $ip
At this point it should be possible to ping or ssh into the VM from the
host, and between different VMs on the same host, via their floating IPs.
If we forget the assigned floating IP, we can look it up with
nova floating-ip-list
or nova list
, and then ping or ssh to that address.
Using the Horizon web interface
We may access the Horizon web interface by connecting to the controller node's address.
We should be presented with a log in screen. The credentials of the demo user we created earlier may be used for access.
Viewing the VMs console in Horizon
For a one-node system, the console in Horizon is available for the VMs by default; for multi-node systems you will need to set the required IPs on the compute node, following the instructions in the OpenStack admin guide VNC FAQ.
Adding a test volume
To create a volume of 2 GB run the following command:
cinder create 2 --display-name test-volume
To attach the volume to the test
virtual machine run:
nova volume-attach test `cinder show test-volume | awk '/ id /{print $4}'`
To confirm that the volume is attached, check that nova show test
shows the UUID of test-volume
.
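For example, a rough way to compare the two (the exact field name in the nova show output may vary between releases) is:
cinder show test-volume | awk '/ id /{print $4}'
nova show test | grep volumes_attached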
Bare Metal Provisioning
You can provision real hardware instead of Virtual Machines using the Ironic service of OpenStack. Before running any command with the OpenStack CLI clients, ensure that you are authenticated with the OpenStack Identity service.
Network Setup
How the network configuration should be done depends very much on your network topology, and in particular on how the bare metal machines are connected to the OpenStack node which handles networking. Here we describe two possible ways to configure the network: one for a one-node OpenStack system, and a second for a three-node OpenStack system.
(a) Network Setup For a One-Node OpenStack System
For one-node network configuration, we assume that the bare metal machines which you are about to provision with Ironic are placed in a separate network segment, which is connected through a dedicated network interface in the one-node machine (the network interface could be a virtual network interface with an associated vlan ID created from a physical network interface).
1) Configure the ml2 plugin by giving a name to the physical network where the bare metal machines are placed, and a name for the bridge which will be used to reach that network
sed -i '/^flat_networks/ s/$/,BaremetalNetwork/' /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i '/^bridge_mappings/ s/$/,BaremetalNetwork:br-bm/' /etc/neutron/plugins/ml2/ml2_conf.ini
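You can confirm that the sed commands took effect by inspecting the edited options:
grep -E '^(flat_networks|bridge_mappings)' /etc/neutron/plugins/ml2/ml2_conf.ini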
2) Create the Open vSwitch bridge for bare metal provisioning and add the network interface which is connected to the bare metal network (ens5 in this case) to this bridge
ovs-vsctl add-br br-bm
ovs-vsctl add-port br-bm ens5
3) Restart the OpenStack networking services
systemctl restart openstack-neutron-plugin-openvswitch-agent
systemctl restart openstack-neutron-server
4) Verify that the services started correctly
systemctl status openstack-neutron-plugin-openvswitch-agent openstack-neutron-server
5) Verify that the br-bm bridge is connected to the br-int bridge
through a patch port, using the ovs-vsctl show
command. You should
see output similar to this
...
Bridge br-bm
Port br-bm
Interface br-bm
type: internal
Port "ens5"
Interface "ens5"
Port phy-br-bm
Interface phy-br-bm
type: patch
options: {peer=int-br-bm}
6) Create the flat network which is going to be used for provisioning
the bare metal machines. If the commands issued with the Neutron
CLI client fail with unsupported locale setting
, you may
need to run export LC_ALL=C
to fix this.
# Create the bare metal network and subnet
neutron net-create --provider:network_type flat \
--provider:physical_network BaremetalNetwork --shared bm-net
neutron subnet-create --name bm-subnet --ip-version=4 \
--gateway=192.168.100.1 --enable-dhcp \
--allocation-pool start=192.168.100.12,end=192.168.100.254 \
bm-net 192.168.100.0/24
# Create the external network and subnet. Replace
# $EXT_NETWORK_CIDR and $EXT_NET_GATEWAY with the CIDR and gateway
# IP address of your external network
neutron net-create ext-net --router:external
neutron subnet-create --name ext-subnet --enable_dhcp=False \
--gateway=$EXT_NET_GATEWAY ext-net $EXT_NETWORK_CIDR
# Connect the bare metal network and the external network
neutron router-create bm-ext-router
neutron router-gateway-set bm-ext-router ext-net
neutron router-interface-add bm-ext-router subnet=bm-subnet
# Add an IP address to the bare metal bridge to create a route to
# the bare metal network. This is necessary so that the Ironic
# Conductor service reaches the bare metal machine to run the iSCSI
# commands
ip addr add 192.168.100.155/24 dev br-bm
ip link set br-bm up
(b) Network Setup For a Three-Node OpenStack System
For three-node configuration, we assume that the bare metal machines which you are about to provision with Ironic are placed in the same network segment as the management network. We are also assuming that the management IP address for the Networking node is 10.0.0.1 and the management subnet is 10.0.0.0/24.
On the Networking node, perform the following steps
1) Configure the ml2 plugin by giving a name to the physical network where the bare metal machines are placed, and a name for the bridge which will be used to reach that network
sed -i '/^flat_networks/ s/$/,BaremetalNetwork/' /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i '/^bridge_mappings/ s/$/,BaremetalNetwork:br-bm/' /etc/neutron/plugins/ml2/ml2_conf.ini
2) Create the Open vSwitch bridge for bare metal provisioning and add the network interface which is connected to the management network (enp2s0 in this case) to this bridge
ovs-vsctl add-br br-bm
ovs-vsctl add-port br-bm enp2s0
3) Restart the OpenStack Open vSwitch plugin
systemctl restart openstack-neutron-plugin-openvswitch-agent
4) Verify that the service started correctly
systemctl status openstack-neutron-plugin-openvswitch-agent
5) Verify that the br-bm bridge is connected to the br-int bridge
through a patch port, using the ovs-vsctl show
command. You should
see output similar to this
...
Bridge br-bm
Port br-bm
Interface br-bm
type: internal
Port "enp2s0"
Interface "enp2s0"
Port phy-br-bm
Interface phy-br-bm
type: patch
options: {peer=int-br-bm}
6) Move the management IP address from the management interface to the bare metal bridge
ip addr del 10.0.0.1/24 dev enp2s0
ip addr add 10.0.0.1/24 dev br-bm
ip link set br-bm up
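After moving the address, it is worth confirming that br-bm now carries the management IP:
ip addr show br-bm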
Now, on the Controller node, perform the following steps
1) Configure the ml2 plugin by providing the name of the physical network where the bare metal machines are placed
sed -i '/^flat_networks/ s/$/,BaremetalNetwork/' /etc/neutron/plugins/ml2/ml2_conf.ini
2) Restart the OpenStack Neutron server
systemctl restart openstack-neutron-server
3) Verify that the service started correctly
systemctl status openstack-neutron-server
4) Create the flat network which is going to be used for provisioning
the bare metal machines. If the commands issued with the Neutron
CLI client fail with unsupported locale setting
, you may
need to run export LC_ALL=C
to fix this.
neutron net-create --provider:network_type flat \
--provider:physical_network BaremetalNetwork \
--shared --router:external bm-net
neutron subnet-create --name bm-subnet --ip-version=4 \
--gateway=10.0.0.10 --enable-dhcp \
--allocation-pool start=10.0.0.11,end=10.0.0.254 \
bm-net 10.0.0.0/24
Node Cleaning
You can configure Ironic to perform node cleaning by running the following commands:
BM_NET_UUID="$(neutron net-show bm-net -F id -f value)"
sed -i "/^#cleaning_network_uuid/ s/.*/cleaning_network_uuid=$BM_NET_UUID/" \
/etc/ironic/ironic.conf
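To confirm that the substitution took effect, you could check the resulting line in ironic.conf (you may also need to restart the Ironic services for the change to be picked up):
grep '^cleaning_network_uuid' /etc/ironic/ironic.conf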
Image Setup
Bare Metal provisioning requires two sets of images: the deploy images and the user images. The deploy images are used by the Bare Metal Service to prepare the bare metal machine for the actual OS deployment, whereas the user images are installed on the bare metal machine for use by the end user.
You can use Diskimage Builder to build those images. You will need to run Diskimage Builder outside of a Baserock-based system though, as it is not compatible with some of the Busybox tools used in Baserock.
Install diskimage-builder on a non-Baserock machine
pip install diskimage-builder
Build the deploy image
ramdisk-image-create ubuntu deploy-ironic -o my-deploy-ramdisk
This will create my-deploy-ramdisk.kernel and my-deploy-ramdisk.initramfs images.
Build the image that the users will run (ubuntu in this case)
disk-image-create ubuntu baremetal dhcp-all-interfaces -o my-image
This will create the my-image.qcow2, my-image.vmlinuz and my-image.initrd images.
Note that by default (at least for the ubuntu image), the root account is not active, so you will not be able to log into the deployed OS. You can work around this by mounting the my-image.qcow2 image and editing the /etc/shadow file to enable the root account.
Upload the images to the OpenStack controller machine
scp my-* $USERNAME@$CONTROLLER_IP_ADDRESS:
Replace $CONTROLLER_IP_ADDRESS with the external IP address of your OpenStack controller machine.
Register the images with the OpenStack image service
DEPLOY_VMLINUZ_UUID=$(glance image-create --name deploy-vmlinuz --is-public True \
    --disk-format aki < my-deploy-ramdisk.kernel | awk '/ id / {print $4}')
DEPLOY_INITRD_UUID=$(glance image-create --name deploy-initrd --is-public True \
    --disk-format ari < my-deploy-ramdisk.initramfs | awk '/ id / {print $4}')
MY_VMLINUZ_UUID=$(glance image-create --name my-kernel --is-public True \
    --disk-format aki < my-image.vmlinuz | awk '/ id / {print $4}')
MY_INITRD_UUID=$(glance image-create --name my-image.initrd --is-public True \
    --disk-format ari < my-image.initrd | awk '/ id / {print $4}')
MY_IMAGE_UUID=$(glance image-create --name my-image --is-public True \
    --disk-format qcow2 --container-format bare \
    --property kernel_id=$MY_VMLINUZ_UUID --property ramdisk_id=$MY_INITRD_UUID \
    < my-image.qcow2 | awk '/ id / {print $4}')
Verify that the images were registered successfully by checking the UUIDs
cat << EOF
DEPLOY_VMLINUZ_UUID = $DEPLOY_VMLINUZ_UUID
DEPLOY_INITRD_UUID = $DEPLOY_INITRD_UUID
MY_VMLINUZ_UUID = $MY_VMLINUZ_UUID
MY_INITRD_UUID = $MY_INITRD_UUID
MY_IMAGE_UUID = $MY_IMAGE_UUID
EOF
Provisioning Bare Metal Directly With Ironic
To enroll a node with Ironic, you need to specify which driver will be used for deployment, the required driver-specific information, and the MAC address of the NIC which is going to be used to PXE boot the bare metal machine. The drivers supported in OpenStack on Baserock are the pxe_ipmitool and pxe_ssh drivers.
To enroll a node using the pxe_ipmitool driver, follow these steps
Create a node specifying the driver to use and passing the specific driver required information:
NODE_UUID=$(ironic node-create -d pxe_ipmitool \
    -i ipmi_address=$IPMI_IP_ADDRESS \
    -i ipmi_username=$IPMI_USERNAME \
    -i ipmi_password=$IPMI_PASSWORD \
    -i deploy_kernel=$DEPLOY_VMLINUZ_UUID \
    -i deploy_ramdisk=$DEPLOY_INITRD_UUID \
    | awk '/ uuid / {print $4}')
Replace $IPMI_IP_ADDRESS, $IPMI_USERNAME and $IPMI_PASSWORD with the IP address of the IPMI NIC interface on the bare metal machine, the IPMI username and the IPMI password respectively.
As there isn't a Nova flavor and the instance image is not provided with the nova boot command, you also need to specify some fields in instance_info. For PXE deployment, they are image_source, kernel, ramdisk, and root_gb.
ironic node-update $NODE_UUID add instance_info/image_source=$MY_IMAGE_UUID \
    instance_info/kernel=$MY_VMLINUZ_UUID \
    instance_info/ramdisk=$MY_INITRD_UUID instance_info/root_gb=8
The instance_info/root_gb field sets the size of the root partition which will be created on the bare metal machine's disk. Ensure that this value is greater than the size of the deployed image. Note that this value needs to be quoted to force a string interpretation, due to a bug in Ironic.
Create a port for the network interface on the bare metal machine which is going to be used for pxeboot. This will configure a dnsmasq process to serve DHCP for that network interface and provide the IP address of the TFTP server in the DHCP offer response
IRONIC_PORT_UUID=$(ironic port-create -n $NODE_UUID \
    -a $MAC_ADDRESS | awk '/ uuid / {print $4}')
NEUTRON_PORT_ID=$(neutron port-create bm-net \
    --mac-address $MAC_ADDRESS | awk '/ id / {print $4}')
ironic port-update $IRONIC_PORT_UUID add extra/vif_port_id=$NEUTRON_PORT_ID
Replace $MAC_ADDRESS with the MAC address of the bare metal NIC.
Verify that everything is ready to go
ironic node-validate $NODE_UUID
Start provisioning the node
ironic node-set-provision-state $NODE_UUID active
Verify the operation
ironic node-show $NODE_UUID
It should show active in the provision_state field after a while.
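If you prefer to follow the progress continuously, a simple option is to poll the field, for example:
watch "ironic node-show $NODE_UUID | grep provision_state"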
Provisioning Bare Metal Through Nova
If you set NOVA_BAREMETAL_SCHEDULING to true in the cluster morphology, you can provision bare metal machines using Nova commands.
Create a special bare metal flavor in Nova. This flavor will be mapped to the bare metal machine through hardware specifications
RAM_MB=1024
CPU=1
DISK_GB=8
ARCH=x86_64
nova flavor-create bm-flavor-1 auto $RAM_MB $DISK_GB $CPU
nova flavor-key bm-flavor-1 set cpu_arch=$ARCH
Create a node, giving extra properties to match the bare metal flavor created earlier
NODE_UUID=$(ironic node-create -d pxe_ipmitool \
    -i ipmi_address=$IPMI_IP_ADDRESS \
    -i ipmi_username=$IPMI_USERNAME \
    -i ipmi_password=$IPMI_PASSWORD \
    -i deploy_kernel=$DEPLOY_VMLINUZ_UUID \
    -i deploy_ramdisk=$DEPLOY_INITRD_UUID \
    -p cpus=$CPU -p memory_mb=$RAM_MB -p local_gb=$DISK_GB -p cpu_arch=$ARCH \
    | awk '/ uuid / {print $4}')
Create a port for the network interface on the bare metal machine which is going to be used for pxeboot. Note that you shouldn't link the Ironic port with a Neutron port as done before; this will be done by Nova.
ironic port-create -n $NODE_UUID -a $MAC_ADDRESS
Verify that everything is ready to go
ironic node-validate $NODE_UUID
Don't worry if the validation of the deploy interface fails; the missing parameters will be set by Nova.
Verify whether the bare metal node was discovered by Nova. The output of
nova hypervisor-show 1
should show the value of $RAM_MB in the memory_mb field. You may need to wait a few seconds for this to happen.
Provision the bare metal machine using Nova
nova boot --flavor bm-flavor-1 --image $MY_IMAGE_UUID \
    --nic net-id=`neutron net-show bm-net -F id -f value` \
    bm-instance-1
Verify the status of the operation by running
nova show bm-instance-1
Testing Ironic with VMs
You can test Ironic in an environment where the OpenStack one-node system runs on a virtual machine.
Let's assume that:
You are using libvirt and you have virt-manager installed
Your VM host has a network interface eth0 which is connected to a trunk port in a switch
Your bare metal machines are in a VLAN with ID $VLAN_ID (the commands below use a VLAN ID of 20 as an example)
On the VM host machine run the following commands
ip link add link eth0 name eth0.20 type vlan id 20
ip link add name br-vlan type bridge
ip link set dev eth0.20 master br-vlan
ip link set eth0.20 up
ip link set br-vlan up
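To confirm that the VLAN interface was created and enslaved to the bridge, you could check:
ip -d link show eth0.20
ip link show master br-vlan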
Now, in virt-manager, edit the network configuration of the virtual machine which runs the one-node system and add another network interface whose source device is "shared device name", with the bridge name set to br-vlan.
If you don't have a bare metal machine, you can create another virtual machine with a network interface attached to br-vlan and with a MAC address set to $MAC_ADDRESS, and then use the Ironic's pxe_ssh driver to conduct the deployment:
NODE_UUID=$(ironic node-create -d pxe_ssh \
-i ssh_address=$VM_HOST_IP_ADDR \
-i ssh_username=$SSH_USERNAME \
-i ssh_password=$SSH_PASSWORD \
-i ssh_virt_type=virsh \
-i deploy_kernel=$DEPLOY_VMLINUZ_UUID \
-i deploy_ramdisk=$DEPLOY_INITRD_UUID \
| awk '/ uuid / {print $4}')
Replace $VM_HOST_IP_ADDR, $SSH_USERNAME and $SSH_PASSWORD with the IP address of your VM host, and the username and password for logging into the VM host machine, respectively. Ensure that the virtual machine that is going to be provisioned has at least 512M of RAM.
More information
You can find additional information at the official install guide for Ironic.
Tempest Integration Test Suite
After installing the system, you can check that tempest itself is
working correctly by logging into the controller node, navigating to the
/etc/tempest
directory and running ./run_tests.sh -N
; this will validate
that the installation of tempest has completed successfully.
When these tests pass you may wish to familiarise yourself with the
tempest.conf
file, which controls what tests tempest will
run and what authentication will be used. Each field of this file
is documented in comment lines above it, so the individual fields
are not covered within this documentation.
To make it easier to configure the "compute" section in tempest.conf, the
OpenStack deployment includes a script called ./set_openstack_to_run_tempest.sh
which
creates a public image in the admin tenant and sets the fields
image_ref, image_ssh_user, image_ssh_password, image_ref_alt and
image_alt_ssh_user.
Remember to set the network-related fields to values that match your own network and subnet configuration; for example, the following values correspond to the external network and subnet created in the neutron section of this guide:
In tempest.conf:
fixed_network_name = ext-net
floating_ip_range = 172.16.25.0/24
public_network_id = *UUID* given by `neutron net-show ext-net -F id -f value`
floating_network_name = ext-subnet
In nova.conf:
default_floating_pool=ext-net
auto_assign_floating_ip=false
NOTE: this change requires restarting the nova-api service. Please run
systemctl restart openstack-nova-api.service
after editing nova.conf so that nova-api picks up the change.
To run tests against OpenStack itself, use the command
./run_tempest.sh
from the /etc/tempest
directory; this will run
tempest on the services specified within the tempest.conf
file.
It is possible to test more specific trouble areas of OpenStack without
running the whole suite and leaving ourselves a large amount of logs to
search through. This can be done by specifying the area to run tests
against. For example, the command ./run_tempest.sh api
will run only
the tempest tests that exercise the API.
In the case of Baserock, it is recommended not to run tempest in a
virtual environment, as downloading and creating dependencies
at runtime is something that Baserock is capable of, but not its
preferred working methodology. To force tempest not to create a virtual
environment for testing purposes, pass the -N
argument. This turns
the above call into ./run_tempest.sh api -N.
The currently used list of specific areas for testing with tempest is:
./run_tempest.sh services -N
./run_tempest.sh cli -N
./run_tempest.sh api -N
./run_tempest.sh scenario -N
./run_tempest.sh cmd -N
./run_tempest.sh stress -N
./run_tempest.sh thirdparty -N
Ceilometer
The telemetry service has been configured following the Install guides. Other information about this service, such as the alarms and metering data you can get from the cloud, can be found in the Admin cloud guide.