BMaaS Part 1: Deploying Openstack Ironic with Red Hat OSP13 / TripleO Queens

OpenStack Ironic is one of the most exciting projects I have had the pleasure of working with, and helping my clients with, over the last year. Ironic is all about provisioning clusters of baremetal nodes from images stored in Glance, with their networks managed by Neutron.

In the past Ironic has been neglected by system architects and system integrators, mostly due to a lack of feature parity with more traditional baremetal bootstrap mechanisms. That gap has now largely closed, and at the same time Ironic can take advantage of the components and methodologies that OpenStack is particularly good at (and where traditional tools fall short), such as:

  • programmatic infrastructure management
  • multi-tenancy
  • integration with devops tools such as ansible
  • elasticity

On top of these “cloudy” features, OpenStack also provides access to VM infrastructure and abstracts how compute is consumed. In other words, you can orchestrate workloads across both baremetal and VM infrastructure, and the process of consuming these different resources is mostly identical from the end user’s perspective.

All of this opens the door for OpenStack to be a compelling solution for use cases such as:

  • Supercomputing
  • Rendering farms
  • High Performance Computing
  • Gaming Clouds
  • Large Research Infrastructure

This is the first post in a series focused on baremetal services in OpenStack. Here I explain how to properly deploy OpenStack Ironic with Red Hat OpenStack Platform 13 / the TripleO Queens release. In later posts I will show how to add hardware inspection and discovery functionality, and I will end the series by explaining how to build images that are both UEFI and BIOS capable and how to take advantage of multi-tenancy.

I. Lab Architecture

OK, it’s not the clearest architectural diagram you have ever seen, but there are a few important roles you should note.

KVM Admin Host (up top) – a single pre-provisioned RHEL-based hypervisor that hosts the VMs used to bootstrap and manage the lifecycle of my OpenStack deployment.

OpenStack Controllers – I have a single monolithic OpenStack controller (all services on one node). The more traditional 3x controller architecture would be identical from a configuration perspective.

HCI nodes – Compute and Ceph Storage are combined in a hyperconverged fashion in my lab to save on physical resources.

Baremetal Pool – a pool of “snowflake” servers that will be managed by the Ironic service in the overcloud.

In my case networks have been defined as follows:

VLAN      | Type         | Undercloud | Controller | HCI    | Baremetal
----------|--------------|------------|------------|--------|----------
PXE       | Non-routable | Native     | Native     | Native | N/A
External  | Routable     | Native     | Tagged     | N/A    | N/A
MGMT      | Non-routable | N/A        | Tagged     | Tagged | N/A
Baremetal | Routable     | N/A        | Tagged     | Tagged | Native

II. Overcloud Deployment with Ironic

Start with your traditional overcloud deployment templates and verify that everything works without Ironic in the mix. At this point you should also be able to create a provider network running on the baremetal VLAN. I won’t cover this part in detail here; the normal OSP13 installation guide can be followed here or here (upstream).

After you verify the standard deployment without Ironic integration (baby steps), let’s look at the YAML files and parameters that need to be added to the deployment.

1. First, simply include the following in your deployment script (without making any modifications to the file itself):

-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml

This YAML file enables the ironic-api, ironic-conductor, ironic-pxe and nova-ironic containers to be deployed on your control plane.

Note: At the time of writing there is another ironic YAML file in the default templates that looks like a good candidate, located under environments/services/ironic.yaml. Do not use this file. If you do, your deployment will end up without the PXE service (a common error).
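
For reference, the correct services-docker/ironic.yaml file is essentially a resource_registry mapping the Ironic and Nova-Ironic services to their containerized service templates. The sketch below is approximate (relative paths can differ between minor releases), so check the file shipped on your undercloud:

resource_registry:
  # containerized Ironic control plane services (paths approximate)
  OS::TripleO::Services::IronicApi: ../../docker/services/ironic-api.yaml
  OS::TripleO::Services::IronicConductor: ../../docker/services/ironic-conductor.yaml
  OS::TripleO::Services::IronicPxe: ../../docker/services/ironic-pxe.yaml
  OS::TripleO::Services::NovaIronic: ../../docker/services/nova-ironic.yaml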

2. OSP13 is finally capable of defining its own custom (composable) networks. Let’s define one to isolate all Ironic network traffic (cleaning, deployment, etc.).

  • Let’s copy a standard template first:

(undercloud) [stack@undercloud ~]$ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml ~/templates/

  • Edit the file and add the following at the bottom:

(undercloud) [stack@undercloud ~]$ cat templates/network_data.yaml 
# List of networks, used for j2 templating of enabled networks
#
# Supported values:
...

- name: CustomBM
  name_lower: custombm
  vip: true
  ip_subnet: '172.31.10.0/24'
  allocation_pools: [{'start': '172.31.10.20', 'end': '172.31.10.49'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]

Note that I am allocating an entirely new subnet dedicated to Ironic network traffic. The allocation pool here should be rather small, since it is only handed out to your Ironic controller nodes. Later you will define another, non-overlapping pool on the same subnet for node cleaning and deployment. In future blog posts you will also learn about adding the Inspector and auto-discovery, which will require yet another slice of this subnet. A rough carve-up of the /24 is sketched below.
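
For example, in this lab the 172.31.10.0/24 subnet ends up carved up roughly as follows (the inspection/discovery slice is only a placeholder for a later post):

  172.31.10.20 - 172.31.10.49   : overcloud allocation pool (controller IPs and the CustomBM VIP)
  172.31.10.70 - 172.31.10.199  : Neutron provider subnet pool for cleaning/provisioning (created in part III)
  172.31.10.254                 : provider network gateway
  remaining addresses           : reserved, e.g. for a future inspection/discovery range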

The biggest advantage of assigning a dedicated network to Ironic is avoiding layer-3 (routed) traffic when cleaning and provisioning your baremetal nodes, which can be a major issue when MTU sizes differ along the path or when SSL encryption is in use.

  • Edit your network-environment.yaml file to include the just-created network’s CIDR, VLAN and allocation pool. Example:

(undercloud) [stack@undercloud ~]$ cat templates/network-environment.yaml 
#This file is an example of an environment file for defining the isolated
#networks and related parameters.
resource_registry:
  # Network Interface templates to use (these files must exist)
  OS::TripleO::Compute::Net::SoftwareConfig:
    ./bond-with-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig:
    ./bond-with-vlans/controller.yaml

 

parameter_defaults:

  # This section is where deployment-specific configuration is done
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 172.31.0.10
  EC2MetadataIp: 172.31.0.10  # Generally the IP of the Undercloud
  # Customize the IP subnets to match the local environment
  InternalApiNetCidr: 172.31.1.0/24
  StorageNetCidr: 172.31.3.0/24
  StorageMgmtNetCidr: 172.31.4.0/24
  TenantNetCidr: 172.31.2.0/24
  ExternalNetCidr: 172.31.8.0/24
  CustomBMNetCidr: 172.31.10.0/24
  # Customize the VLAN IDs to match the local environment
  InternalApiNetworkVlanID: 311
  StorageNetworkVlanID: 313
  StorageMgmtNetworkVlanID: 314
  TenantNetworkVlanID: 312
  ExternalNetworkVlanID: 318
  CustomBMNetworkVlanID: 320
  # Customize the IP ranges on each network to use for static IPs and VIPs
  InternalApiAllocationPools: [{'start': '172.31.1.20', 'end': '172.31.1.49'}]
  StorageAllocationPools: [{'start': '172.31.3.20', 'end': '172.31.3.49'}]
  StorageMgmtAllocationPools: [{'start': '172.31.4.20', 'end': '172.31.4.49'}]
  TenantAllocationPools: [{'start': '172.31.2.20', 'end': '172.31.2.49'}]
  CustomBMAllocationPools: [{'start': '172.31.10.20', 'end': '172.31.10.49'}]
  # Leave room if the external network is also used for floating IPs
  ExternalAllocationPools: [{'start': '172.31.8.25', 'end': '172.31.8.29'}]
  # Gateway router for the external network
  ExternalInterfaceDefaultRoute: 172.31.8.254
  # Uncomment if using the Management Network (see network-management.yaml)
  # ManagementNetCidr: 10.0.1.0/24
  # ManagementAllocationPools: [{'start': '10.0.1.10', 'end': '10.0.1.50'}]
  # Use either this parameter or ControlPlaneDefaultRoute in the NIC templates
  # ManagementInterfaceDefaultRoute: 10.0.1.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["172.31.8.1","8.8.4.4"]
  # List of Neutron network types for tenant networks (will be used in order)
  NeutronNetworkType: 'vxlan,vlan'
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: 'vxlan'
  # Neutron VLAN ranges per network, for example 'datacentre:1:499,tenant:500:1000':
  NeutronNetworkVLANRanges: 'datacentre:1:1000'
  # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 miimon=100"
  # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS active/backup.
  BondInterfaceOvsOptions: "bond_mode=active-backup"

  • Finally, modify your controller NIC config to include an extra VLAN and attach it to your external bridge. Example:

(chrisj) [stack@undercloud ~]$ cat templates/bond-with-vlans/controller.yaml
heat_template_version: queens
description: >
  Software Config to drive os-net-config with 2 bonded nics on a bridge with VLANs attached for the controller role.
parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
...
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string
  CustomBMIpSubnet:
    default: ''
    description: IP address/subnet on the Baremetal network
    type: string
  CustomBMNetworkVlanID:
    default: 70
    description: Vlan ID for the baremetal network traffic.
    type: number

resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
...
              - type: ovs_bridge
                name: bridge_name
                mtu: 9000
                dns_servers:
                  get_param: DnsServers
                members:
                - type: interface
                  name: nic2
                  mtu: 9000
                - type: vlan
                  mtu: 1500
                  device: bond1
                  vlan_id:
                    get_param: ExternalNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: ExternalIpSubnet
                  routes:
                  - default: true
                    next_hop:
                      get_param: ExternalInterfaceDefaultRoute
                - type: vlan
                  mtu: 9000
                  vlan_id:
                    get_param: InternalApiNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: InternalApiIpSubnet
                - type: vlan
                  mtu: 1500
                  vlan_id:
                    get_param: StorageNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageIpSubnet
                - type: vlan
                  mtu: 1500
                  vlan_id:
                    get_param: StorageMgmtNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageMgmtIpSubnet
                - type: vlan
                  mtu: 9000
                  vlan_id:
                    get_param: TenantNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: TenantIpSubnet
                - type: vlan
                  mtu: 1500
                  vlan_id:
                    get_param: CustomBMNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: CustomBMIpSubnet

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl

 

3. Copy the default composable roles template to your local templates directory and ensure the new CustomBM network is added to the Controller role:

(undercloud) [stack@undercloud ~]$ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml templates/

(undercloud) [stack@undercloud ~]$ cat templates/roles_data.yaml 
###############################################################################
# File generated by TripleO
###############################################################################
###############################################################################
# Role: Controller                                                            #
###############################################################################
- name: Controller
  description: |
    Controller role that has all the controler services loaded and handles
    Database, Messaging and Network functions.
  CountDefault: 1
  tags:
    - primary
    - controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant
    - CustomBM
  HostnameFormatDefault: '%stackname%-controller-%index%'
  # Deprecated & backward-compatible values (FIXME: Make parameters consistent)
  # Set uses_deprecated_params to True if any deprecated params are used.
  uses_deprecated_params: True
  deprecated_param_extraconfig: 'controllerExtraConfig'
  deprecated_param_flavor: 'OvercloudControlFlavor'
  deprecated_param_image: 'controllerImage'
  ServicesDefault:
    - OS::TripleO::Services::Aide
    - OS::TripleO::Services::AodhApi
...

4. Capture the remaining Ironic customization in another YAML file, or add the following values to one of your existing custom YAML files.

(undercloud) [stack@undercloud ~]$ cat templates/ExtraConfig.yaml

parameter_defaults:

  NeutronEnableIsolatedMetadata: 'True'

  NovaSchedulerDefaultFilters:
    - RetryFilter
    - AggregateInstanceExtraSpecsFilter
    - AggregateMultiTenancyIsolation
    - AvailabilityZoneFilter
    - RamFilter
    - DiskFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter

  IronicCleaningDiskErase: metadata   
  IronicIPXEEnabled: true   
  IronicCleaningNetwork: baremetal

  CustomBMVirtualFixedIPs: [{'ip_address':'172.31.10.14'}]

  ServiceNetMap:
    IronicApiNetwork: custombm
    IronicNetwork: custombm

 

5. Re-deploy your overcloud with the newly defined YAML files:

time openstack overcloud deploy  --templates --stack chrisj \
...

  -r /home/stack/templates/roles_data.yaml \
  -n /home/stack/templates/network_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/ExtraConfig.yaml \


6. After the deployment completes, source your overcloud rc file and verify that the Ironic drivers are active:

(undercloud) [stack@undercloud ~]$ source chrisjrc 
(chrisj) [stack@undercloud ~]$ openstack baremetal driver list
+---------------------+------------------------------+
| Supported driver(s) | Active host(s)               |
+---------------------+------------------------------+
| ipmi                | chrisj-controller-0.home.lab |
| pxe_drac            | chrisj-controller-0.home.lab |
| pxe_ilo             | chrisj-controller-0.home.lab |
| pxe_ipmitool        | chrisj-controller-0.home.lab |
| redfish             | chrisj-controller-0.home.lab |
+---------------------+------------------------------+
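
If you want to double-check that the Ironic containers actually landed on the controller, a quick look at Docker on that node is enough; this is just a sketch, and the exact container names can vary between OSP13 minor releases, but you should see the ironic-api, ironic-conductor and PXE containers listed:

[heat-admin@chrisj-controller-0 ~]$ sudo docker ps --format '{{.Names}}' | grep ironic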

 

III. Adding Baremetal Nodes

In this part we are going to focus on the manual process of adding nodes to the baremetal pool. You need to properly define all of the specs, including memory, CPU, MAC address, IPMI credentials, etc. In the next post in the series I’ll show the same process with auto-discovery and inspection enabled.

1. First, let’s create a network that matches the composable network we defined during deployment. This provider network will be used for cleaning and deploying the nodes.

neutron net-create baremetal  --provider:physical_network=fast \
    --provider:network_type=vlan --provider:segmentation_id=320 --mtu 1500 --shared
neutron subnet-create baremetal 172.31.10.0/24 \
    --name baremetal-sub --allocation-pool \
    start=172.31.10.70,end=172.31.10.199 --gateway 172.31.10.254 --dns-nameserver 172.31.8.1

 

Note that this allocation pool must not overlap with the pool assigned to the composable network during deployment. Also, to avoid MTU fragmentation issues, I am setting an MTU of 1500 on the network itself.
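
If you prefer the unified openstack client over the deprecated neutron CLI, the equivalent commands look roughly like this (same VLAN, MTU and allocation pool; verify the flag names against your client version):

openstack network create baremetal --provider-physical-network fast \
    --provider-network-type vlan --provider-segment 320 --mtu 1500 --share
openstack subnet create baremetal-sub --network baremetal \
    --subnet-range 172.31.10.0/24 --allocation-pool start=172.31.10.70,end=172.31.10.199 \
    --gateway 172.31.10.254 --dns-nameserver 172.31.8.1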

2. Define a YAML file that describes the hardware to be added to Ironic in the overcloud. Example:

(chrisj) [stack@undercloud ~]$ cat baremetal.yaml 
nodes:
  - name: baremetal1
    driver: ipmi
    resource_class: baremetal
    properties:
     cpus: 4
     cpu_arch: "x86_64"
     memory_mb: 16384
     local_gb: 60
     root_device:
       name: /dev/sda
    ports:
     - address: "D0:50:99:79:78:01"
       pxe_enabled: true
    driver_info:
     ipmi_address: "172.31.9.31"
     ipmi_username: "admin"
     ipmi_password: "secret"
  - name: baremetal2
    driver: ipmi
    resource_class: baremetal
    properties:
     cpus: 4
     cpu_arch: "x86_64"
     memory_mb: 16384
     local_gb: 60
     root_device:
       name: /dev/sda
    ports:
     - address: "D0:50:99:79:77:01"
       pxe_enabled: true
    driver_info:
     ipmi_address: "172.31.9.32"
     ipmi_username: "admin"
     ipmi_password: "secret"
  - name: baremetal3
    driver: ipmi
    resource_class: baremetal
    properties:
     cpus: 4
     cpu_arch: "x86_64"
     memory_mb: 16384
     local_gb: 60
     root_device:
       name: /dev/sda
    ports:
     - address: "D0:50:99:C0:A3:3A"
       pxe_enabled: true
    driver_info:
     ipmi_address: "172.31.9.33"
     ipmi_username: "admin"
     ipmi_password: "secret"

 

Alternatively, you could take nodes described in the undercloud instackenv.json format and convert them to a format Ironic can read. Here is an example -> here.
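
For reference, a single node in the instackenv.json format looks roughly like this (the values are illustrative and mirror baremetal1 above); the mapping to the Ironic enrollment YAML is mostly one-to-one:

{
  "nodes": [
    {
      "name": "baremetal1",
      "pm_type": "ipmi",
      "pm_addr": "172.31.9.31",
      "pm_user": "admin",
      "pm_password": "secret",
      "mac": ["D0:50:99:79:78:01"],
      "cpu": "4",
      "memory": "16384",
      "disk": "60",
      "arch": "x86_64"
    }
  ]
}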

3. Import the pre-defined nodes into Ironic:

(chrisj) [stack@undercloud ~]$ source chrisjrc 
(chrisj) [stack@undercloud ~]$ openstack baremetal create baremetal.yaml

4. Verify the nodes are in the enroll state:

(chrisj) [stack@undercloud ~]$ openstack baremetal node list
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None          | None        | enroll             | False       |
| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None          | None        | enroll             | False       |
| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None          | None        | enroll             | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+

 

Now we need to move the nodes from the enroll state to available, but first we need to import and assign a ramdisk that will be used for cleaning and image deployment. The easiest way to do this is to re-use the ramdisk images from the overcloud deployment.

5. Upload the ramdisk and kernel images to Glance and assign them to the just-imported baremetal nodes:

(chrisj) [stack@undercloud ~]$ openstack image create --public --container-format aki --disk-format aki --file ~/images/ironic-python-agent.kernel deploy-kernel

+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | 3fa68970990a6ce72fbc6ebef4363f68     |
| container_format | aki                                  |
| created_at       | 2017-01-18T14:47:58.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | aki                                  |
| id               | 845cd364-bc92-4321-98e6-f3979b5d9d29 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | deploy-kernel                        |
| owner            | c600f2a2bea84459b6640267701f2268     |
| properties       |                                      |
| protected        | False                                |
| size             | 5390944                              |
| status           | active                               |
| updated_at       | 2017-01-18T14:48:02.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

(chrisj) [stack@undercloud ~]$ openstack image create --public --container-format ari --disk-format ari --file ~/images/ironic-python-agent.initramfs deploy-ramdisk
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | b4321200478252588cb6e9095f363a54     |
| container_format | ari                                  |
| created_at       | 2017-01-18T14:48:18.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | ari                                  |
| id               | 9bce556e-215b-4e76-b1e0-ae7b8b4dda61 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | deploy-ramdisk                       |
| owner            | c600f2a2bea84459b6640267701f2268     |
| properties       |                                      |
| protected        | False                                |
| size             | 363726791                            |
| status           | active                               |
| updated_at       | 2017-01-18T14:48:23.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

(chrisj) [stack@undercloud ~]$ DEPLOY_KERNEL=$(openstack image show deploy-kernel -f value -c id)
(chrisj) [stack@undercloud ~]$ DEPLOY_RAMDISK=$(openstack image show deploy-ramdisk -f value -c id)
(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal1 --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK
(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal2 --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK
(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal3 --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK
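
Before moving on, it’s worth confirming the images were attached: the node’s driver_info should now carry the deploy_kernel and deploy_ramdisk Glance UUIDs alongside the IPMI credentials.

(chrisj) [stack@undercloud ~]$ openstack baremetal node show baremetal1 -f value -c driver_info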

 

6. Finally, let’s clean the just-added nodes and make them available for deployment:

(chrisj) [stack@undercloud ~]$ openstack baremetal node manage baremetal1
(chrisj) [stack@undercloud ~]$ openstack baremetal node list

+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None          | power off   | manageable         | False       |
| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None          | None        | enroll             | False       |
| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None          | None        | enroll             | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
(chrisj) [stack@undercloud ~]$ openstack baremetal node provide baremetal1
(chrisj) [stack@undercloud ~]$ openstack baremetal node list

+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None          | power off   | cleaning           | False       |
| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None          | None        | enroll             | False       |
| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None          | None        | enroll             | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
 

Now, after a few minutes of cleaning the node’s local disk, the node status should look like this:

(chrisj) [stack@undercloud ~]$ openstack baremetal node list
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None          | power off   | available          | False       |
| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None          | None        | enroll             | False       |
| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None          | None        | enroll             | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
 

Optional: If your hardware is set up to PXE boot in UEFI mode, you should also enable that mode in Ironic on each node by invoking:

(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal1 --property capabilities=boot_option:local,boot_mode:uefi

Repeat the same process for all remaining nodes in your baremetal pool:

(chrisj) [stack@undercloud ~]$ openstack baremetal node manage baremetal2
(chrisj) [stack@undercloud ~]$ openstack baremetal node manage baremetal3
(chrisj) [stack@undercloud ~]$ openstack baremetal node provide baremetal2
(chrisj) [stack@undercloud ~]$ openstack baremetal node provide baremetal3
(chrisj) [stack@undercloud ~]$ openstack baremetal node list

+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None          | power off   | available          | False       |
| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None          | power off   | available          | False       |
| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None          | power off   | available          | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+

You can now start deploying baremetal nodes from OpenStack.

IV. Deploying Operating System to Baremetal

First we need to get Glance images. If you plan to use only legacy BIOS, you are in luck: all of the images you typically download for VMs can also be used on baremetal (as long as they contain the appropriate drivers for your hardware).

The general info on where to find images of different distros (including Windows) can be found here. 

If your hardware requires UEFI boot, you can still use off-the-shelf images; unfortunately, not all distributions make them UEFI ready. The best ones I was able to find were from Red Hat and Canonical. For the rest, you will most likely have to build them on your own.

1. With OSP13 / TripleO the easiest way to get an image is to re-use overcloud-full.qcow2, which is both legacy and UEFI ready (as long as you include its ramdisk and kernel).

(chrisj) [stack@undercloud ~]$ KERNEL_ID=$(openstack image create --file ~/images/overcloud-full.vmlinuz --public --container-format aki --disk-format aki -f value -c id overcloud-full.vmlinuz)

(chrisj) [stack@undercloud ~]$ RAMDISK_ID=$(openstack image create --file ~/images/overcloud-full.initrd --public --container-format ari --disk-format ari -f value -c id overcloud-full.initrd)

(chrisj) [stack@undercloud ~]$ openstack image create --file ~/images/overcloud-full.qcow2 --public --container-format bare --disk-format qcow2 --property kernel_id=$KERNEL_ID --property ramdisk_id=$RAMDISK_ID rhel7-baremetal

If you need Ubuntu, just grab the latest cloud image from here.

(chrisj) [stack@undercloud nfsshare]$ wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img

(chrisj) [stack@undercloud nfsshare]$ openstack image create --file ./bionic-server-cloudimg-amd64.img --public --container-format bare --disk-format qcow2  ubuntu-bionic

Note: There is no need to include ramdisk and kernel files with the Ubuntu image; it will work in both legacy and UEFI mode.

Later in the series I plan to write another post on how to build images for Ironic. In the meantime, there is a great blog post on the topic by my friend Ken Savich here.

2. Create a flavor that will later steer our instances to the appropriate aggregate groups.

(chrisj) [stack@undercloud ~]$ openstack flavor create --ram 1024 --disk 40 --vcpus 1 baremetal

Note: It is good practice to set the ram and vcpus parameters rather low so the flavor can cover all of the available hardware (they only require that a node has at least 1 CPU and 1024 MB of RAM). For disk, I recommend setting it a little higher if you anticipate deploying Windows images, which typically take more than 20 GB of disk.

Starting with the Pike release we are also required to set the placement resource class properties:

(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:CUSTOM_BAREMETAL=1
(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:VCPU=0
(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:MEMORY_MB=0
(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:DISK_GB=0
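
As a quick sanity check, confirm the flavor now carries the resource overrides (you should see resources:CUSTOM_BAREMETAL=1 and the zeroed VCPU/MEMORY_MB/DISK_GB entries in the properties):

(chrisj) [stack@undercloud ~]$ openstack flavor show baremetal -f value -c properties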

Optional: If you anticipate using UEFI hardware, the proper UEFI capability also needs to be assigned to the flavor:

(chrisj) [stack@undercloud ~]$ openstack flavor create --ram 1024 --disk 40 --vcpus 1 --property baremetal=true --property "capabilities:boot_mode=uefi" --property "capabilities:boot_option=local" baremetal
 

3. Let’s create a host aggregate that covers all controllers (which ultimately manage the Ironic nodes).

(chrisj) [stack@undercloud ~]$ openstack aggregate create --property baremetal=true baremetal-hosts

(chrisj) [stack@undercloud ~]$ openstack host list
+------------------------------+-------------+----------+
| Host Name                    | Service     | Zone     |
+------------------------------+-------------+----------+
| chrisj-controller-0.home.lab | conductor   | internal |
| chrisj-hci-1.home.lab        | compute     | nova     |
| chrisj-hci-0.home.lab        | compute     | nova     |
| chrisj-controller-0.home.lab | consoleauth | internal |
| chrisj-controller-0.home.lab | scheduler   | internal |
| chrisj-controller-0.home.lab | compute     | nova     |
+------------------------------+-------------+----------+

 

(chrisj) [stack@undercloud ~]$ openstack aggregate add host baremetal-hosts chrisj-controller-0.home.lab

Note: If you use more than one Ironic controller, repeat the last step for the remaining controllers, as shown below.
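
For example, with a second controller (the hostname below is hypothetical for this lab):

(chrisj) [stack@undercloud ~]$ openstack aggregate add host baremetal-hosts chrisj-controller-1.home.lab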

4. Finally, upload your keypair and deploy the operating system of your choice on top of a baremetal node.

(chrisj) [stack@undercloud ~]$  openstack keypair create --public-key ~/.ssh/id_rsa.pub undercloud

(chrisj) [stack@undercloud ~]$ openstack server create --image rhel7-baremetal --flavor baremetal --nic net-id=baremetal --key-name undercloud rhel7-bm1

The Nova scheduler will pick one of the available nodes in the baremetal pool and deploy your OS on top of it:

(chrisj) [stack@undercloud ~]$ openstack baremetal node list
+--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+
| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None                                 | power off   | available          | False       |
| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | 30e45954-1511-4703-b3b2-317a58c75418 | power on    | wait call-back     | False       |
| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None                                 | power off   | available          | False       |
+--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+
(chrisj) [stack@undercloud ~]$ openstack server list
+--------------------------------------+-----------+--------+------------------------+-----------------+-----------+
| ID                                   | Name      | Status | Networks               | Image           | Flavor    |
+--------------------------------------+-----------+--------+------------------------+-----------------+-----------+
| 30e45954-1511-4703-b3b2-317a58c75418 | rhel7-bm1 | BUILD  | baremetal=172.31.10.72 | rhel7-baremetal | baremetal |
+--------------------------------------+-----------+--------+------------------------+-----------------+-----------+

When the process is done you should be able to SSH to the node to validate baremetal functionality.

(chrisj) [stack@undercloud ~]$ ssh cloud-user@172.31.10.72
The authenticity of host '172.31.10.72 (172.31.10.72)' can't be established.
ECDSA key fingerprint is SHA256:UMqiPXpbMVaVKLEP48h5T6vcQJ6pSg5GjzDZZ0EHYQ0.
ECDSA key fingerprint is MD5:52:7e:33:f9:b1:fa:8b:d3:25:db:b5:86:16:51:0d:b9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.31.10.72' (ECDSA) to the list of known hosts.
[cloud-user@rhel7-bm1 ~]$
 

Ultimately, VM and baremetal nodes get created the same way, so you can easily orchestrate a workload to land on the appropriate compute resource, as sketched below.
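
As a rough illustration, the only user-visible difference between a baremetal boot and a VM boot is the flavor (and whichever image/network you pick); the m1.small flavor, rhel7 image and tenant-net network below are placeholders for whatever exists in your cloud:

# baremetal instance, scheduled onto an Ironic node
openstack server create --image rhel7-baremetal --flavor baremetal --nic net-id=baremetal --key-name undercloud rhel7-bm2
# virtual machine, scheduled onto a KVM compute node (flavor, image and network are placeholders)
openstack server create --image rhel7 --flavor m1.small --nic net-id=tenant-net --key-name undercloud rhel7-vm1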
