{"id":9,"date":"2023-11-28T11:14:39","date_gmt":"2023-11-28T16:14:39","guid":{"rendered":"https:\/\/chrisj.cloud\/?p=9"},"modified":"2023-11-28T13:44:41","modified_gmt":"2023-11-28T18:44:41","slug":"bmaas-part-1-deploying-openstack-ironic-with-red-hat-osp13-tripleo-queens","status":"publish","type":"post","link":"https:\/\/chrisj.cloud\/index.php\/2023\/11\/28\/bmaas-part-1-deploying-openstack-ironic-with-red-hat-osp13-tripleo-queens\/","title":{"rendered":"BMaaS Part 1: Deploying Openstack Ironic with Red Hat OSP13 \/ TripleO Queens"},"content":{"rendered":"\n<p>OpenStack Ironic is one of the most exciting projects I have had the pleasure of working with, and helping my clients with, over the last year. Ironic is all about provisioning clusters of baremetal nodes from images stored in Glance, with their networks assigned and managed by Neutron.<\/p>\n\n\n\n<p>In the past, Ironic was neglected by system architects and system integrators, mostly due to a lack of feature parity with more traditional baremetal bootstrap mechanisms. That feature gap has now mostly closed, and at the same time Ironic can take advantage of components and methodologies that OpenStack is particularly good at (and where traditional tools fall short), such as:<\/p>\n\n\n\n<ul>\n<li>programmatic infrastructure management<\/li>\n\n\n\n<li>multi-tenancy<\/li>\n\n\n\n<li>integration with DevOps tools such as Ansible<\/li>\n\n\n\n<li>elasticity<\/li>\n<\/ul>\n\n\n\n<p>On top of these &#8220;cloudy&#8221; features, OpenStack also provides access to VM infrastructure and abstracts the compute consumption. 
In other words, you can now orchestrate your workloads across both baremetal and VM infrastructure, and the process of consuming these different resources is nearly identical from the end user&#8217;s perspective.<\/p>\n\n\n\n<p>All this opens&nbsp;the door for OpenStack to be a compelling solution for use cases such as:<\/p>\n\n\n\n<ul>\n<li>Supercomputing<\/li>\n\n\n\n<li>Rendering farms<\/li>\n\n\n\n<li>High Performance Computing<\/li>\n\n\n\n<li>Gaming Clouds<\/li>\n\n\n\n<li>Large Research Infrastructure<\/li>\n<\/ul>\n\n\n\n<p>This is the first post in a&nbsp;series focusing on baremetal services in OpenStack. First, I will explain how to properly deploy&nbsp;OpenStack Ironic with the Red Hat OpenStack Platform 13 and\/or TripleO Queens release. In a later post I will show how to add hardware inspection and discovery functionality. I will end the series by explaining how to build images that are both UEFI- and BIOS-capable and how to take advantage of multi-tenancy.<\/p>\n\n\n\n<p><strong>I. 
Lab Architecture<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"849\" src=\"https:\/\/chrisj.cloud\/wp-content\/uploads\/2023\/11\/home-lab-ironic-1024x849.png\" alt=\"\" class=\"wp-image-50\" srcset=\"https:\/\/chrisj.cloud\/wp-content\/uploads\/2023\/11\/home-lab-ironic-1024x849.png 1024w, https:\/\/chrisj.cloud\/wp-content\/uploads\/2023\/11\/home-lab-ironic-300x249.png 300w, https:\/\/chrisj.cloud\/wp-content\/uploads\/2023\/11\/home-lab-ironic-768x637.png 768w, https:\/\/chrisj.cloud\/wp-content\/uploads\/2023\/11\/home-lab-ironic.png 1200w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>OK, it&#8217;s not the clearest architectural diagram you have ever seen, but there are a few important roles you should note.<\/p>\n\n\n\n<p><strong>KVM Admin&nbsp;Host (up top)&nbsp;<\/strong>&#8211; a single pre-provisioned RHEL-based hypervisor that hosts the VMs used to bootstrap and manage the lifecycle of my OpenStack deployment<\/p>\n\n\n\n<p><strong>OpenStack Controllers&nbsp;<\/strong>&#8211; I have a single standard monolithic OpenStack controller (all services on a single node). 
The more traditional 3x controller architecture would be identical from the configuration perspective.<\/p>\n\n\n\n<p><strong>HCI nodes<\/strong>&nbsp;&#8211; Compute and Ceph Storage are combined in a hyperconverged fashion in my lab to save on physical resources<\/p>\n\n\n\n<p><strong>Baremetal Pool<\/strong>&nbsp;&#8211; This is the pool of &#8220;snowflake&#8221; servers that will be used by the Ironic service in the overcloud&nbsp;<\/p>\n\n\n\n<p>In my case the networks have been defined as follows:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>VLANs<\/strong><\/td><td><strong>Type<\/strong><\/td><td><strong>Undercloud<\/strong><\/td><td><strong>Controller<\/strong><\/td><td><strong>HCI<\/strong><\/td><td><strong>Baremetal<\/strong><\/td><\/tr><tr><td><strong>PXE<\/strong><\/td><td>Non-routable<\/td><td>Native<\/td><td>Native<\/td><td>Native<\/td><td>N\/A<\/td><\/tr><tr><td><strong>External<\/strong><\/td><td>Routable<\/td><td>Native<\/td><td>Tagged<\/td><td>N\/A<\/td><td>N\/A<\/td><\/tr><tr><td><strong>MGMT<\/strong><\/td><td>Non-routable<\/td><td>N\/A<\/td><td>Tagged<\/td><td>Tagged<\/td><td>N\/A<\/td><\/tr><tr><td><strong>Baremetal<\/strong><\/td><td>Routable<\/td><td>N\/A<\/td><td>Tagged<\/td><td>Tagged<\/td><td>Native<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>II. Overcloud Deployment with Ironic<\/strong><\/p>\n\n\n\n<p>Start with your traditional overcloud deployment templates and verify that everything works without Ironic in the mix. At this point you should also be able to create a provider network running on the baremetal VLAN. I won&#8217;t cover this part in detail here. 
The normal OSP13 installation guide can be followed&nbsp;<a href=\"https:\/\/access.redhat.com\/documentation\/en-us\/red_hat_openstack_platform\/13\/html\/director_installation_and_usage\/\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>&nbsp;or&nbsp;<a href=\"https:\/\/docs.openstack.org\/tripleo-docs\/latest\/install\/\" target=\"_blank\" rel=\"noreferrer noopener\">here (upstream)<\/a>.<\/p>\n\n\n\n<p>After you verify a standard deployment without Ironic integration (baby steps), let&#8217;s take a look at the yaml files and parameters that need to be added to our deployment.<\/p>\n\n\n\n<p>1. First, simply include the following in your deployment script (without making any modifications to the file):<\/p>\n\n\n\n<p>-e&nbsp;\/usr\/share\/openstack-tripleo-heat-templates\/environments\/services-docker\/ironic.yaml<\/p>\n\n\n\n<p>This yaml deploys the containers for the ironic-api, ironic-conductor, ironic-pxe, and nova-ironic services on your control&nbsp;plane.<\/p>\n\n\n\n<p>Note: At the time of writing, there is another ironic yaml file in the default templates that looks like a good candidate for deployment, located under&nbsp;environments\/services\/ironic.yaml. Do&nbsp;<strong>not<\/strong>&nbsp;use this file. If you do, your deployment will end up without the PXE service (a common error).<\/p>\n\n\n\n<p>2. OSP13 is finally capable of defining its own custom (composable) networks. 
Let&#8217;s define one to isolate all ironic network traffic (cleaning, deployment, etc.).<\/p>\n\n\n\n<ul>\n<li>Let&#8217;s copy a standard template first:<\/li>\n<\/ul>\n\n\n\n<p><code>(undercloud) [stack@undercloud ~]$ cp \/usr\/share\/openstack-tripleo-heat-templates\/network_data.yaml\u00a0~\/templates\/<\/code><\/p>\n\n\n\n<ul>\n<li>Edit the file and add the following at the bottom:<\/li>\n<\/ul>\n\n\n\n<p><code>(undercloud) [stack@undercloud ~]$ cat templates\/network_data.yaml\u00a0<br># List of networks, used for j2 templating of enabled networks<br>#<br># Supported values:<br>...<\/code><\/p>\n\n\n\n<p><code>- name: CustomBM<br>\u00a0 name_lower: custombm<br>\u00a0 vip: true<br>\u00a0 ip_subnet: '172.31.10.0\/24'<br>\u00a0 allocation_pools: [{'start': '172.31.10.20', 'end': '172.31.10.49'}]<br>\u00a0 ipv6_subnet: 'fd00:fd00:fd00:7000::\/64'<br>\u00a0 ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]<\/code><\/p>\n\n\n\n<p>Note that I am allocating an entirely new subnet dedicated solely to ironic network traffic. The range of IP addresses here should be rather small, since it is only allocated to your ironic controller nodes. You will later define another pool for cleaning and deployment that must not overlap with this one. In future blog posts you will also learn about adding Inspector and auto-discovery, which will require yet another slice of this subnet.<\/p>\n\n\n\n<p>The biggest advantage of assigning a dedicated network to ironic is avoiding layer-3 traffic when cleaning and provisioning your baremetal nodes, which can be a major issue when MTU sizes differ or when SSL encryption is in use.<\/p>\n\n\n\n<ul>\n<li>Edit your network-environment.yaml file to include the just-created subnet, CIDR, and VLAN. 
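<\/li>\n<\/ul>\n\n\n\n<p>Since several non-overlapping pools will eventually be carved out of this one \/24 (controllers now, cleaning\/provisioning later, inspection and discovery in future posts), a quick overlap check before deploying can save a failed stack update. Below is a minimal sketch using only the Python standard library; the ranges are the ones used in this lab, but the helper itself is hypothetical and not part of TripleO:<\/p>

```python
import ipaddress

def pools_overlap(pool_a, pool_b):
    """Return True if two (start, end) IP ranges share any address."""
    a_lo, a_hi = (int(ipaddress.ip_address(ip)) for ip in pool_a)
    b_lo, b_hi = (int(ipaddress.ip_address(ip)) for ip in pool_b)
    return a_lo <= b_hi and b_lo <= a_hi

# Pool reserved for the controllers on the CustomBM network (network_data.yaml)
controller_pool = ("172.31.10.20", "172.31.10.49")
# Pool assigned later to the Neutron "baremetal" subnet for cleaning/provisioning
neutron_pool = ("172.31.10.70", "172.31.10.199")

assert not pools_overlap(controller_pool, neutron_pool)
```

\n\n\n\n<ul>\n<li>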
Example:<\/li>\n<\/ul>\n\n\n\n<p><code>(undercloud) [stack@undercloud ~]$ cat templates\/network-environment.yaml\u00a0<br>#This file is an example of an environment file for defining the isolated<br>#networks and related parameters.<br>resource_registry:<br>\u00a0 # Network Interface templates to use (these files must exist)<br>\u00a0 OS::TripleO::Compute::Net::SoftwareConfig:<br>\u00a0 \u00a0 .\/bond-with-vlans\/compute.yaml<br>\u00a0 OS::TripleO::Controller::Net::SoftwareConfig:<br>\u00a0 \u00a0 .\/bond-with-vlans\/controller.yaml<\/code><br>\u00a0<\/p>\n\n\n\n<p><code>parameter_defaults:<\/code><\/p>\n\n\n\n<p><code>\u00a0 # This section is where deployment-specific configuration is done<br>\u00a0 # CIDR subnet mask length for provisioning network<br>\u00a0 ControlPlaneSubnetCidr: '24'<br>\u00a0 # Gateway router for the provisioning network (or Undercloud IP)<br>\u00a0 ControlPlaneDefaultRoute: 172.31.0.10<br>\u00a0 EC2MetadataIp: 172.31.0.10 \u00a0# Generally the IP of the Undercloud<br>\u00a0 # Customize the IP subnets to match the local environment<br>\u00a0 InternalApiNetCidr: 172.31.1.0\/24<br>\u00a0 StorageNetCidr: 172.31.3.0\/24<br>\u00a0 StorageMgmtNetCidr: 172.31.4.0\/24<br>\u00a0 TenantNetCidr: 172.31.2.0\/24<br>\u00a0 ExternalNetCidr: 172.31.8.0\/24<br><strong>\u00a0 CustomBMNetCidr: 172.31.10.0\/24<\/strong><br>\u00a0 # Customize the VLAN IDs to match the local environment<br>\u00a0 InternalApiNetworkVlanID: 311<br>\u00a0 StorageNetworkVlanID: 313<br>\u00a0 StorageMgmtNetworkVlanID: 314<br>\u00a0 TenantNetworkVlanID: 312<br>\u00a0 ExternalNetworkVlanID: 318<br><strong>\u00a0 CustomBMNetworkVlanID: 320<\/strong><br>\u00a0 # Customize the IP ranges on each network to use for static IPs and VIPs<br>\u00a0 InternalApiAllocationPools: [{'start': '172.31.1.20', 'end': '172.31.1.49'}]<br>\u00a0 StorageAllocationPools: [{'start': '172.31.3.20', 'end': '172.31.3.49'}]<br>\u00a0 StorageMgmtAllocationPools: [{'start': '172.31.4.20', 'end': 
'172.31.4.49'}]<br>\u00a0 TenantAllocationPools: [{'start': '172.31.2.20', 'end': '172.31.2.49'}]<br><strong>\u00a0 CustomBMAllocationPools: [{'start': '172.31.10.20', 'end': '172.31.10.49'}]<\/strong><br>\u00a0 # Leave room if the external network is also used for floating IPs<br>\u00a0 ExternalAllocationPools: [{'start': '172.31.8.25', 'end': '172.31.8.29'}]<br>\u00a0 # Gateway router for the external network<br>\u00a0 ExternalInterfaceDefaultRoute: 172.31.8.254<br>\u00a0 # Uncomment if using the Management Network (see network-management.yaml)<br>\u00a0 # ManagementNetCidr: 10.0.1.0\/24<br>\u00a0 # ManagementAllocationPools: [{'start': '10.0.1.10', 'end': '10.0.1.50'}]<br>\u00a0 # Use either this parameter or ControlPlaneDefaultRoute in the NIC templates<br>\u00a0 # ManagementInterfaceDefaultRoute: 10.0.1.1<br>\u00a0 # Define the DNS servers (maximum 2) for the overcloud nodes<br>\u00a0 DnsServers: [\"172.31.8.1\",\"8.8.4.4\"]<br>\u00a0 # List of Neutron network types for tenant networks (will be used in order)<br>\u00a0 NeutronNetworkType: 'vxlan,vlan'<br>\u00a0 # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.<br>\u00a0 NeutronTunnelTypes: 'vxlan'<br>\u00a0 # Neutron VLAN ranges per network, for example 'datacentre:1:499,tenant:500:1000':<br>\u00a0 NeutronNetworkVLANRanges: 'datacentre:1:1000'<br>\u00a0 # Customize bonding options, e.g. \"mode=4 lacp_rate=1 updelay=1000 miimon=100\"<br>\u00a0 # for Linux bonds w\/LACP, or \"bond_mode=active-backup\" for OVS active\/backup.<br>\u00a0 BondInterfaceOvsOptions: \"bond_mode=active-backup\"<\/code><\/p>\n\n\n\n<ul>\n<li>Finally modify your controller nic config to include an extra vlan and associate it to your external bridge. 
Example:<\/li>\n<\/ul>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ cat templates\/bond-with-vlans\/controller.yaml<br>heat_template_version: queens<br>description: ><br>\u00a0 Software Config to drive os-net-config with 2 bonded nics on a bridge with VLANs attached for the controller role.<br>parameters:<br>\u00a0 ControlPlaneIp:<br>\u00a0 \u00a0 default: ''<br>\u00a0 \u00a0 description: IP address\/subnet on the ctlplane network<br>\u00a0 \u00a0 type: string<br>...<br>\u00a0 EC2MetadataIp: # Override this via parameter_defaults<br>\u00a0 \u00a0 description: The IP address of the EC2 metadata server.<br>\u00a0 \u00a0 type: string<br><strong>\u00a0 CustomBMIpSubnet:<br>\u00a0 \u00a0 default: ''<br>\u00a0 \u00a0 description: IP address\/subnet on the Baremetal network<br>\u00a0 \u00a0 type: string<br>\u00a0 CustomBMNetworkVlanID:<br>\u00a0 \u00a0 default: 70<br>\u00a0 \u00a0 description: Vlan ID for the baremetal network traffic.<br>\u00a0 \u00a0 type: number<\/strong><\/code><\/p>\n\n\n\n<p><code>resources:<br>\u00a0 OsNetConfigImpl:<br>\u00a0 \u00a0 type: OS::Heat::SoftwareConfig<br>\u00a0 \u00a0 properties:<br>\u00a0 \u00a0 \u00a0 group: script<br>\u00a0 \u00a0 \u00a0 config:<br>\u00a0 \u00a0 \u00a0 \u00a0 str_replace:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 template:<br>...<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: ovs_bridge<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 name: bridge_name<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 9000<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 dns_servers:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: DnsServers<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 members:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: interface<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 name: nic2<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 9000<br>\u00a0 \u00a0 \u00a0 
\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: vlan<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 1500<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 device: bond1<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 vlan_id:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: ExternalNetworkVlanID<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 addresses:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - ip_netmask:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: ExternalIpSubnet<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 routes:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - default: true<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 next_hop:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: ExternalInterfaceDefaultRoute<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: vlan<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 9000<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 vlan_id:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: InternalApiNetworkVlanID<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 addresses:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - ip_netmask:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: InternalApiIpSubnet<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: vlan<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 1500<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 vlan_id:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: StorageNetworkVlanID<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 
\u00a0 addresses:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - ip_netmask:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: StorageIpSubnet<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: vlan<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 1500<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 vlan_id:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: StorageMgmtNetworkVlanID<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 addresses:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - ip_netmask:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: StorageMgmtIpSubnet<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: vlan<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 9000<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 vlan_id:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: TenantNetworkVlanID<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 addresses:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - ip_netmask:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: TenantIpSubnet<br><strong>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - type: vlan<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mtu: 1500<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 vlan_id:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: CustomBMNetworkVlanID<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 addresses:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 - ip_netmask:<br>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 get_param: 
CustomBMIpSubnet<\/strong><\/code><\/p>\n\n\n\n<p><code>outputs:<br>\u00a0 OS::stack_id:<br>\u00a0 \u00a0 description: The OsNetConfigImpl resource.<br>\u00a0 \u00a0 value:<br>\u00a0 \u00a0 \u00a0 get_resource: OsNetConfigImpl<\/code><br>\u00a0<\/p>\n\n\n\n<p>3. Copy composable roles default template to your local templates directory and ensure the new CustomBM network is being deployed&nbsp;<\/p>\n\n\n\n<p><code>(undercloud) [stack@undercloud ~]$ cp \/usr\/share\/openstack-tripleo-heat-templates\/roles_data.yaml templates\/<\/code><\/p>\n\n\n\n<p><code>(undercloud) [stack@undercloud ~]$ cat templates\/roles_data.yaml\u00a0<br>###############################################################################<br># File generated by TripleO<br>###############################################################################<br>###############################################################################<br># Role: Controller \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0#<br>###############################################################################<br>- name: Controller<br>\u00a0 description: |<br>\u00a0 \u00a0 Controller role that has all the controler services loaded and handles<br>\u00a0 \u00a0 Database, Messaging and Network functions.<br>\u00a0 CountDefault: 1<br>\u00a0 tags:<br>\u00a0 \u00a0 - primary<br>\u00a0 \u00a0 - controller<br>\u00a0 networks:<br>\u00a0 \u00a0 - External<br>\u00a0 \u00a0 - InternalApi<br>\u00a0 \u00a0 - Storage<br>\u00a0 \u00a0 - StorageMgmt<br>\u00a0 \u00a0 - Tenant<br><strong>\u00a0 \u00a0 - CustomBM<\/strong><br>\u00a0 HostnameFormatDefault: '%stackname%-controller-%index%'<br>\u00a0 # Deprecated &amp; backward-compatible values (FIXME: Make parameters consistent)<br>\u00a0 # Set uses_deprecated_params to True if any deprecated params are used.<br>\u00a0 
uses_deprecated_params: True<br>\u00a0 deprecated_param_extraconfig: 'controllerExtraConfig'<br>\u00a0 deprecated_param_flavor: 'OvercloudControlFlavor'<br>\u00a0 deprecated_param_image: 'controllerImage'<br>\u00a0 ServicesDefault:<br>\u00a0 \u00a0 - OS::TripleO::Services::Aide<br>\u00a0 \u00a0 - OS::TripleO::Services::AodhApi<br>...<\/code><\/p>\n\n\n\n<p>4. Capture all remaining Ironic customization in another yaml file, or add the following values to one of your existing custom yaml files.<\/p>\n\n\n\n<p><code>(undercloud) [stack@undercloud ~]$ cat templates\/ExtraConfig.yaml<\/code><\/p>\n\n\n\n<p><code>parameter_defaults:<\/code><\/p>\n\n\n\n<p><code>\u00a0 NeutronEnableIsolatedMetadata: 'True'<\/code><\/p>\n\n\n\n<p><code>\u00a0 NovaSchedulerDefaultFilters:<br>\u00a0 \u00a0 - RetryFilter<br>\u00a0 \u00a0 - AggregateInstanceExtraSpecsFilter<br>\u00a0 \u00a0 - AggregateMultiTenancyIsolation<br>\u00a0 \u00a0 - AvailabilityZoneFilter<br>\u00a0 \u00a0 - RamFilter<br>\u00a0 \u00a0 - DiskFilter<br>\u00a0 \u00a0 - ComputeFilter<br>\u00a0 \u00a0 - ComputeCapabilitiesFilter<br>\u00a0 \u00a0 - ImagePropertiesFilter<\/code><\/p>\n\n\n\n<p><code>\u00a0 IronicCleaningDiskErase: metadata<br>\u00a0 IronicIPXEEnabled: true<br>\u00a0 IronicCleaningNetwork: baremetal<\/code><\/p>\n\n\n\n<p><code>\u00a0 CustomBMVirtualFixedIPs: [{'ip_address':'172.31.10.14'}]<\/code><\/p>\n\n\n\n<p><code>\u00a0 ServiceNetMap:<br>\u00a0 \u00a0 IronicApiNetwork: custombm<br>\u00a0 \u00a0 IronicNetwork: custombm<\/code><br>\u00a0<\/p>\n\n\n\n<p>5. 
Re-deploy your overcloud with the newly defined yaml files:<\/p>\n\n\n\n<p><code>time openstack overcloud deploy \u00a0--templates --stack chrisj \\<br>...<\/code><\/p>\n\n\n\n<p><code>\u00a0 -r \/home\/stack\/templates\/roles_data.yaml \\<br>\u00a0 -n \/home\/stack\/templates\/network_data.yaml \\<br>\u00a0 -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/network-isolation.yaml \\<br>\u00a0 -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/services-docker\/ironic.yaml \\<br>\u00a0 -e \/home\/stack\/templates\/network-environment.yaml \\<br>\u00a0 -e \/home\/stack\/templates\/ExtraConfig.yaml \\<\/code><\/p>\n\n\n\n<p>&#8230;<br>6.&nbsp;After the deployment completes, source your overcloud rc file and verify the available drivers:<\/p>\n\n\n\n<p><code>(undercloud) [stack@undercloud ~]$ source chrisjrc\u00a0<br>(chrisj) [stack@undercloud ~]$ openstack baremetal driver list<br>+---------------------+------------------------------+<br>| Supported driver(s) | Active host(s) \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 |<br>+---------------------+------------------------------+<br>| ipmi \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| chrisj-controller-0.home.lab |<br>| pxe_drac \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| chrisj-controller-0.home.lab |<br>| pxe_ilo \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | chrisj-controller-0.home.lab |<br>| pxe_ipmitool \u00a0 \u00a0 \u00a0 \u00a0| chrisj-controller-0.home.lab |<br>| redfish \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | chrisj-controller-0.home.lab |<br>+---------------------+------------------------------+<\/code><br>\u00a0<\/p>\n\n\n\n<p><strong>III. Adding Baremetal Nodes<\/strong><\/p>\n\n\n\n<p>In this part of the series we are going to focus on the manual process of adding nodes to the baremetal pool.&nbsp;&nbsp;You will need to properly define all the specs, including memory, CPU, MAC addresses, IPMI credentials, etc. 
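<\/p>\n\n\n\n<p>Before enrolling anything, it can help to sanity-check each node definition for the fields Ironic will need. The sketch below is a hypothetical pre-flight check (it is not part of any OpenStack tooling); the field names mirror the baremetal.yaml example shown later in this post:<\/p>

```python
# Hypothetical pre-flight check for node definitions destined for
# `openstack baremetal create`; the required keys mirror the baremetal.yaml
# example in this post, but the helper itself is not part of OpenStack.
REQUIRED_PROPERTIES = {"cpus", "cpu_arch", "memory_mb", "local_gb"}
REQUIRED_DRIVER_INFO = {"ipmi_address", "ipmi_username", "ipmi_password"}

def validate_node(node):
    """Return a list of problems found in one node definition."""
    problems = []
    missing = REQUIRED_PROPERTIES - set(node.get("properties", {}))
    if missing:
        problems.append(f"missing properties: {sorted(missing)}")
    missing = REQUIRED_DRIVER_INFO - set(node.get("driver_info", {}))
    if missing:
        problems.append(f"missing driver_info: {sorted(missing)}")
    if not any(p.get("pxe_enabled") for p in node.get("ports", [])):
        problems.append("no PXE-enabled port")
    return problems

node = {
    "name": "baremetal1",
    "driver": "ipmi",
    "properties": {"cpus": 4, "cpu_arch": "x86_64",
                   "memory_mb": 16384, "local_gb": 60},
    "ports": [{"address": "D0:50:99:79:78:01", "pxe_enabled": True}],
    "driver_info": {"ipmi_address": "172.31.9.31",
                    "ipmi_username": "admin", "ipmi_password": "secret"},
}
assert validate_node(node) == []
```

Catching a missing IPMI credential or PXE port here is much cheaper than debugging a node stuck in cleaning later.

\n\n\n\n<p>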
In the next post in the series I&#8217;ll show the same process with auto-discovery&nbsp;and inspection enabled.<\/p>\n\n\n\n<p>1. First, let&#8217;s create a network that matches the composable network we defined during deployment. This provider network will be utilized for cleaning and deployment of the nodes.<\/p>\n\n\n\n<p><code>neutron net-create\u00a0<strong>baremetal<\/strong>\u00a0\u00a0--provider:physical_network=fast \\<br>\u00a0 \u00a0 --provider:network_type=vlan --provider:segmentation_id=320 --mtu 1500 --shared<br>neutron subnet-create baremetal 172.31.10.0\/24 \\<br>\u00a0 \u00a0 --name baremetal-sub --allocation-pool \\<br>\u00a0 \u00a0 start=172.31.10.70,end=172.31.10.199 --gateway 172.31.10.254 --dns-nameserver 172.31.8.1<\/code><br>\u00a0<\/p>\n\n\n\n<p>Note that this allocation pool must not overlap with the deployment pool assigned to the composable network. Also, to avoid MTU fragmentation issues, I am setting an MTU of 1500 on the network itself.<\/p>\n\n\n\n<p>2. Define a yaml file that describes the hardware to be added to Ironic in the overcloud. 
Example:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ cat baremetal.yaml\u00a0<br>nodes:<br>\u00a0 - name: baremetal1<br>\u00a0 \u00a0 driver: ipmi<br>\u00a0 \u00a0 resource_class: baremetal<br>\u00a0 \u00a0 properties:<br>\u00a0 \u00a0 \u00a0cpus: 4<br>\u00a0 \u00a0 \u00a0cpu_arch: \"x86_64\"<br>\u00a0 \u00a0 \u00a0memory_mb: 16384<br>\u00a0 \u00a0 \u00a0local_gb: 60<br>\u00a0 \u00a0 \u00a0root_device:<br>\u00a0 \u00a0 \u00a0 \u00a0name: \/dev\/sda<br>\u00a0 \u00a0 ports:<br>\u00a0 \u00a0 \u00a0- address: \"D0:50:99:79:78:01\"<br>\u00a0 \u00a0 \u00a0 \u00a0pxe_enabled: true<br>\u00a0 \u00a0 driver_info:<br>\u00a0 \u00a0 \u00a0ipmi_address: \"172.31.9.31\"<br>\u00a0 \u00a0 \u00a0ipmi_username: \"admin\"<br>\u00a0 \u00a0 \u00a0ipmi_password: \"secret\"<br>\u00a0 - name: baremetal2<br>\u00a0 \u00a0 driver: ipmi<br>\u00a0 \u00a0 resource_class: baremetal<br>\u00a0 \u00a0 properties:<br>\u00a0 \u00a0 \u00a0cpus: 4<br>\u00a0 \u00a0 \u00a0cpu_arch: \"x86_64\"<br>\u00a0 \u00a0 \u00a0memory_mb: 16384<br>\u00a0 \u00a0 \u00a0local_gb: 60<br>\u00a0 \u00a0 \u00a0root_device:<br>\u00a0 \u00a0 \u00a0 \u00a0name: \/dev\/sda<br>\u00a0 \u00a0 ports:<br>\u00a0 \u00a0 \u00a0- address: \"D0:50:99:79:77:01\"<br>\u00a0 \u00a0 \u00a0 \u00a0pxe_enabled: true<br>\u00a0 \u00a0 driver_info:<br>\u00a0 \u00a0 \u00a0ipmi_address: \"172.31.9.32\"<br>\u00a0 \u00a0 \u00a0ipmi_username: \"admin\"<br>\u00a0 \u00a0 \u00a0ipmi_password: \"secret\"<br>\u00a0 - name: baremetal3<br>\u00a0 \u00a0 driver: ipmi<br>\u00a0 \u00a0 resource_class: baremetal<br>\u00a0 \u00a0 properties:<br>\u00a0 \u00a0 \u00a0cpus: 4<br>\u00a0 \u00a0 \u00a0cpu_arch: \"x86_64\"<br>\u00a0 \u00a0 \u00a0memory_mb: 16384<br>\u00a0 \u00a0 \u00a0local_gb: 60<br>\u00a0 \u00a0 \u00a0root_device:<br>\u00a0 \u00a0 \u00a0 \u00a0name: \/dev\/sda<br>\u00a0 \u00a0 ports:<br>\u00a0 \u00a0 \u00a0- address: \"D0:50:99:C0:A3:3A\"<br>\u00a0 \u00a0 \u00a0 \u00a0pxe_enabled: true<br>\u00a0 \u00a0 driver_info:<br>\u00a0 \u00a0 
\u00a0ipmi_address: \"172.31.9.33\"<br>\u00a0 \u00a0 \u00a0ipmi_username: \"admin\"<br>\u00a0 \u00a0 \u00a0ipmi_password: \"secret\"<\/code><br>\u00a0<\/p>\n\n\n\n<p>Alternatively, you could use the undercloud instackenv.json format and convert it to an Ironic-readable format. An example can be found&nbsp;<a href=\"https:\/\/docs.openstack.org\/tripleo-docs\/latest\/install\/advanced_deployment\/baremetal_overcloud.html#preparing-undercloud\" target=\"_blank\" rel=\"noreferrer noopener\">here.<\/a><\/p>\n\n\n\n<p>3. Import the pre-defined nodes into Ironic:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ source chrisjrc\u00a0<br>(chrisj) [stack@undercloud ~]$ openstack baremetal create baremetal.yaml<\/code><\/p>\n\n\n\n<p>4. Verify the nodes are in the enroll state:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack baremetal node list<br>+--------------------------------------+------------+---------------+-------------+--------------------+-------------+<br>| UUID \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | Name \u00a0 \u00a0 \u00a0 | Instance UUID | Power State | Provisioning State | Maintenance |<br>+--------------------------------------+------------+---------------+-------------+--------------------+-------------+<br>| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| None \u00a0 \u00a0 \u00a0 \u00a0| enroll \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| None \u00a0 \u00a0 \u00a0 \u00a0| enroll \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| None \u00a0 \u00a0 \u00a0 \u00a0| enroll \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 
|<br>+--------------------------------------+------------+---------------+-------------+--------------------+-------------+<\/code><br>\u00a0<\/p>\n\n\n\n<p>Now we need to move nodes from enroll to available state, but fist we need to import and assign a ramdisk that will be used for cleaning up and deployment of the images. The easiest way for this is to re-use ramdisk images used for overcloud deployment<\/p>\n\n\n\n<p>5. Upload Ramdisk and kernel images to glance and assign it to be used by just imported baremetal nodes:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack image create --public --container-format aki --disk-format aki --file ~\/images\/ironic-python-agent.kernel deploy-kernel<\/code><\/p>\n\n\n\n<p>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+<\/p>\n\n\n\n<p>| Field &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| Value &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+<\/p>\n\n\n\n<p>| checksum &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 3fa68970990a6ce72fbc6ebef4363f68&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| container_format | aki &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| created_at &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 2017-01-18T14:47:58.000000 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| deleted &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| False 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| deleted_at &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| None &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| disk_format &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| aki &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| id &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 845cd364-bc92-4321-98e6-f3979b5d9d29 |<\/p>\n\n\n\n<p>| is_public &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| True &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| min_disk &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| min_ram &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| name &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| deploy-kernel &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| owner 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| c600f2a2bea84459b6640267701f2268&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| properties &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| protected &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| False &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| size &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 5390944 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| status &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| active &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| updated_at &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 2017-01-18T14:48:02.000000 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| virtual_size&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| None &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack image create --public --container-format ari --disk-format ari --file ~\/images\/ironic-python-agent.initramfs deploy-ramdisk<\/code><\/p>\n\n\n\n<p>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+<\/p>\n\n\n\n<p>| Field &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| Value &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+<\/p>\n\n\n\n<p>| checksum &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| b4321200478252588cb6e9095f363a54&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| container_format | ari &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| created_at &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 2017-01-18T14:48:18.000000 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| deleted &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| False &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| deleted_at &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| None &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| disk_format &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| ari &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| id 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 9bce556e-215b-4e76-b1e0-ae7b8b4dda61 |<\/p>\n\n\n\n<p>| is_public &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| True &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| min_disk &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| min_ram &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| name &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| deploy-ramdisk &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| owner &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| c600f2a2bea84459b6640267701f2268&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| properties &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| protected &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| False &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| size 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 363726791 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| status &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| active &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| updated_at &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| 2017-01-18T14:48:23.000000 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>| virtual_size&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| None &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<\/p>\n\n\n\n<p>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ DEPLOY_KERNEL=$(openstack image show deploy-kernel -f value -c id)<br>(chrisj) [stack@undercloud ~]$ DEPLOY_RAMDISK=$(openstack image show deploy-ramdisk -f value -c id)<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal1 --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal2 --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal3 --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK<\/code><br>\u00a0<\/p>\n\n\n\n<p>6. 
Finally, let&#8217;s clean up the just-added nodes and make them available for deployment:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack baremetal node manage baremetal1<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node list<\/code><br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| UUID \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | Name \u00a0 \u00a0 \u00a0 | Instance UUID | Power State | Provisioning State | Maintenance |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| power off \u00a0 | manageable \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| None \u00a0 \u00a0 \u00a0 \u00a0| enroll \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| None \u00a0 \u00a0 \u00a0 \u00a0| enroll \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br><code>(chrisj) [stack@undercloud ~]$ openstack baremetal node provide baremetal1<br>(chrisj) 
[stack@undercloud ~]$ openstack baremetal node list<\/code><br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| UUID \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | Name \u00a0 \u00a0 \u00a0 | Instance UUID | Power State | Provisioning State | Maintenance |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| power off \u00a0 | cleaning \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| None \u00a0 \u00a0 \u00a0 \u00a0| enroll \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| None \u00a0 \u00a0 \u00a0 \u00a0| enroll \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | False \u00a0 \u00a0 \u00a0 |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>\u00a0<\/p>\n\n\n\n<p>Now, after a few minutes of cleaning the node&#8217;s local disk, the status of the node should look like this:<\/p>\n\n\n\n<p>(chrisj) [stack@undercloud ~]$ openstack baremetal node 
list<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| UUID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Name &nbsp; &nbsp; &nbsp; | Instance UUID | Power State | Provisioning State | Maintenance |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| power off &nbsp; | available &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| False &nbsp; &nbsp; &nbsp; |<br>| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| None &nbsp; &nbsp; &nbsp; &nbsp;| enroll &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | False &nbsp; &nbsp; &nbsp; |<br>| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| None &nbsp; &nbsp; &nbsp; &nbsp;| enroll &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | False &nbsp; &nbsp; &nbsp; |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>&nbsp;<\/p>\n\n\n\n<p><strong>Optional<\/strong>: If your hardware is set up to pxe boot in&nbsp;<strong>UEFI&nbsp;<\/strong>mode, you should also enable that mode in Ironic for all the nodes by invoking:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack baremetal node set baremetal1 --property 
capabilities=boot_option:local,<strong>boot_mode:uefi<\/strong><\/code><\/p>\n\n\n\n<p>Repeat the same process for all remaining nodes in your baremetal pool:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack baremetal node manage baremetal2<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node manage baremetal3<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node provide baremetal2<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node provide baremetal3<br>(chrisj) [stack@undercloud ~]$ openstack baremetal node list<\/code><br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| UUID \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | Name \u00a0 \u00a0 \u00a0 | Instance UUID | Power State | Provisioning State | Maintenance |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| power off \u00a0 | available \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| False \u00a0 \u00a0 \u00a0 |<br>| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| power off \u00a0 | available \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| False \u00a0 \u00a0 \u00a0 |<br>| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| power off \u00a0 | available \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| False \u00a0 \u00a0 \u00a0 
|<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<\/p>\n\n\n\n<p>You can now start deploying baremetal nodes from OpenStack.<\/p>\n\n\n\n<p><strong>IV. Deploying&nbsp;Operating System to Baremetal<\/strong><\/p>\n\n\n\n<p>First we need to get Glance images. If you plan to use only legacy BIOS, you are in luck: all of the images that you typically download for VMs can also be used on baremetal (as long as they have the appropriate drivers for your hardware).<\/p>\n\n\n\n<p>The general info on where to find images of different distros (including Windows)&nbsp;can be found&nbsp;<a href=\"https:\/\/docs.openstack.org\/image-guide\/obtain-images.html\" target=\"_blank\" rel=\"noreferrer noopener\">here.<\/a>&nbsp;<\/p>\n\n\n\n<p>If your hardware requires UEFI boot, you could still use off-the-shelf images; unfortunately, not all distributions make them UEFI-ready. The best ones I was able to find were from Red Hat and Canonical. For the rest you will most likely have to build them on your own.<\/p>\n\n\n\n<p>1. 
With OSP13 \/ TripleO the easiest way to get an image is by re-using overcloud-full.qcow2, which is both legacy- and UEFI-ready (as long as you include the ramdisk and kernel).<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ KERNEL_ID=$(openstack image create --file ~\/images\/overcloud-full.vmlinuz --public --container-format aki --disk-format aki -f value -c id overcloud-full.vmlinuz)<\/code><\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ RAMDISK_ID=$(openstack image create --file ~\/images\/overcloud-full.initrd --public --container-format ari --disk-format ari -f value -c id overcloud-full.initrd)<\/code><\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack image create --file ~\/images\/overcloud-full.qcow2 --public --container-format bare --disk-format qcow2 --property kernel_id=$KERNEL_ID --property ramdisk_id=$RAMDISK_ID rhel7-baremetal<\/code><\/p>\n\n\n\n<p>If you need Ubuntu, just get the latest image from&nbsp;<a href=\"https:\/\/cloud-images.ubuntu.com\/bionic\/\">here<\/a>.<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud nfsshare]$ wget https:\/\/cloud-images.ubuntu.com\/bionic\/current\/bionic-server-cloudimg-amd64.img<\/code><\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud nfsshare]$ openstack image create --file .\/bionic-server-cloudimg-amd64.img --public --container-format bare --disk-format qcow2 ubuntu-bionic<\/code><\/p>\n\n\n\n<p>Note: There is no need to include ramdisk and kernel files with the Ubuntu image. It&nbsp;will work in both legacy and UEFI mode.<\/p>\n\n\n\n<p>Later in the series I plan to publish another post on how to build images for Ironic. In the meantime, there is a great blog post about that by my friend Ken Savich&nbsp;<a href=\"https:\/\/kensavich.com\/2018\/07\/03\/building-a-customized-uefi-rhel-7-5-image-for-openstack-with-diskimage-builder\/#more-725\">here.<\/a><\/p>\n\n\n\n<p>2. 
Create a flavor that will later direct our instances to the appropriate host aggregate.<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack flavor create --ram 1024 --disk 40 --vcpus 1 baremetal<\/code><\/p>\n\n\n\n<p>Note: It is good practice to set the ram and vcpus parameters rather low so that the flavor can cover all of the available hardware (these parameters mean that a node needs at least 1 vcpu and 1024 MB of ram available). For the disk I recommend setting it a little higher in case you anticipate deploying Windows images, which typically take above 20GB of disk.<\/p>\n\n\n\n<p>Starting with the Pike release we are also required to set these:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:CUSTOM_BAREMETAL=1<br>(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:VCPU=0<br>(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:MEMORY_MB=0<br>(chrisj) [stack@undercloud ~]$ openstack flavor set baremetal --property resources:DISK_GB=0<\/code><\/p>\n\n\n\n<p>Optional: If you anticipate using UEFI hardware, the proper uefi capabilities also need to be assigned to the flavor:<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack flavor create --ram 1024 --disk 40 --vcpus 1 --property baremetal=true --property \"capabilities:boot_mode=uefi\" --property \"capabilities:boot_option=local\" baremetal<\/code><br>\u00a0<\/p>\n\n\n\n<p>3. Let&#8217;s create a host aggregate that will cover all controllers (which ultimately manage the Ironic nodes).<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack aggregate create --property baremetal=true baremetal-hosts<\/code><\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack host list<br>+------------------------------+-------------+----------+<br>| Host Name \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| Service \u00a0 \u00a0 | Zone \u00a0 \u00a0 |<br>+------------------------------+-------------+----------+<br>| chrisj-controller-0.home.lab | conductor \u00a0 | internal |<br>| chrisj-hci-1.home.lab \u00a0 \u00a0 \u00a0 \u00a0| compute \u00a0 \u00a0 | nova \u00a0 \u00a0 |<br>| chrisj-hci-0.home.lab \u00a0 \u00a0 \u00a0 \u00a0| compute \u00a0 \u00a0 | nova \u00a0 \u00a0 |<br>| chrisj-controller-0.home.lab | consoleauth | internal |<br>| chrisj-controller-0.home.lab | scheduler \u00a0 | internal |<br>| chrisj-controller-0.home.lab | compute \u00a0 \u00a0 | nova \u00a0 \u00a0 |<br>+------------------------------+-------------+----------+<\/code><br>\u00a0<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack aggregate add host baremetal-hosts chrisj-controller-0.home.lab<\/code><\/p>\n\n\n\n<p>Note: If you use more than one Ironic controller, repeat the last step for the remaining controllers.<\/p>\n\n\n\n<p>4. 
Finally, upload your keypair and deploy the operating system of your choice on top of your baremetal node.<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack keypair create --public-key ~\/.ssh\/id_rsa.pub undercloud<\/code><\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ openstack server create --image rhel7-baremetal --flavor baremetal --nic net-id=baremetal --key-name undercloud rhel7-bm1<\/code><\/p>\n\n\n\n<p>Nova scheduler will pick one of your available nodes in the baremetal pool and deploy your OS on top:<\/p>\n\n\n\n<p>(chrisj) [stack@undercloud ~]$ openstack baremetal node list<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| UUID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Name &nbsp; &nbsp; &nbsp; | Instance UUID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| Power State | Provisioning State | Maintenance |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>| df57f6a4-00e4-424e-b25f-8d544d4a4e59 | baremetal1 | None &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | power off &nbsp; | available &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| False &nbsp; &nbsp; &nbsp; |<br>| a984821c-a0cd-4af2-884b-27157d1a96d8 | baremetal2 | 30e45954-1511-4703-b3b2-317a58c75418 | power on &nbsp; &nbsp;| wait call-back &nbsp; &nbsp; | False 
&nbsp; &nbsp; &nbsp; |<br>| 01886778-863d-46c3-919b-a7cc566c069a | baremetal3 | None &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | power off &nbsp; | available &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| False &nbsp; &nbsp; &nbsp; |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;-+<br>(chrisj) [stack@undercloud ~]$ openstack server list<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8211;+<br>| ID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Name &nbsp; &nbsp; &nbsp;| Status | Networks &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Image&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| Flavor &nbsp; &nbsp;|<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8211;+<br>| 30e45954-1511-4703-b3b2-317a58c75418 | rhel7-bm1 | BUILD &nbsp;| baremetal=172.31.10.72 | rhel7-baremetal&nbsp;| baremetal |<br>+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;+&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;+&#8212;&#8212;&#8212;&#8211;+<\/p>\n\n\n\n<p>When the process is done you should be able to ssh 
to the node to validate baremetal functionality.<\/p>\n\n\n\n<p><code>(chrisj) [stack@undercloud ~]$ ssh cloud-user@172.31.10.72<br>The authenticity of host '172.31.10.72 (172.31.10.72)' can't be established.<br>ECDSA key fingerprint is SHA256:UMqiPXpbMVaVKLEP48h5T6vcQJ6pSg5GjzDZZ0EHYQ0.<br>ECDSA key fingerprint is MD5:52:7e:33:f9:b1:fa:8b:d3:25:db:b5:86:16:51:0d:b9.<br>Are you sure you want to continue connecting (yes\/no)? yes<br>Warning: Permanently added '172.31.10.72' (ECDSA) to the list of known hosts.<br>[cloud-user@rhel7-bm1 ~]$<\/code>\u00a0<\/p>\n\n\n\n<p>Ultimately, VM and BM nodes get created the same way, and with that you can easily orchestrate a workload to land on the appropriate compute resource.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenStack Ironic is one of the most exciting project I had pleasure to work with and help my clients with in the last year. Project Ironic is all about provisioning clusters of Baremetal nodes base","protected":false},"author":1,"featured_media":15,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[1],"tags":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/posts\/9"}],"collection":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/comments?post=9"}],"version-history":[{"count":4,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/posts\/9\/revisions"}],"predecessor-version":[{"id":63,"href":"
https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/posts\/9\/revisions\/63"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/media\/15"}],"wp:attachment":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/media?parent=9"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/categories?post=9"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/tags?post=9"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}