{"id":67,"date":"2023-11-28T13:48:50","date_gmt":"2023-11-28T18:48:50","guid":{"rendered":"https:\/\/chrisj.cloud\/?p=67"},"modified":"2023-11-28T13:48:51","modified_gmt":"2023-11-28T18:48:51","slug":"openshift-on-openstack-no-brainer-on-prem-solution","status":"publish","type":"post","link":"https:\/\/chrisj.cloud\/index.php\/2023\/11\/28\/openshift-on-openstack-no-brainer-on-prem-solution\/","title":{"rendered":"OpenShift on OpenStack &#8211; no-brainer on-prem solution"},"content":{"rendered":"\n<p>I have to admit,&nbsp;it has taken me a while to produce this next article. Today, however it&#8217;s about to change and I am happy to introduce on this blog&nbsp;OpenShift on OpenStack &#8211; no-brain on-prem solution for building apps. What makes this combination so special? Here is the short list of the top integration features:<\/p>\n\n\n\n<p>&#8211; Fully Automated IPI experience<\/p>\n\n\n\n<p>&#8211; Ability to deploy on mix of VMs and BM nodes<\/p>\n\n\n\n<p>&#8211; With true multitenancy for VMs, BMs, Storage, Networking<\/p>\n\n\n\n<p>&#8211; Dynamic , multi-tier storage as a service<\/p>\n\n\n\n<p>&#8211; Best performance (No double network encapsulation, baremetal workers)<\/p>\n\n\n\n<p><strong>Disclaimer:<\/strong> I am not authorized to speak on behalf of Red Hat, are sponsored by Red Hat or are representing Red Hat and its views.&nbsp;The views and opinions expressed on this website are entirely my&nbsp;own.<\/p>\n\n\n\n<p>Check out my video first before jumping into configuration down below.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-embed-handler wp-block-embed-embed-handler wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"OpenShift 4.X on OpenStack (RHOSP13) - integration Demo\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/87b0vNnfNFE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; 
encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p><strong>I. OpenStack configuration<\/strong><\/p>\n\n\n\n<p><strong>1. deploy.sh<\/strong><\/p>\n\n\n\n<p>My environment is currently configured with RHOSP13.0.11 with the following features enabled:<\/p>\n\n\n\n<p>&#8211; network isolation<\/p>\n\n\n\n<p>&#8211; Ceph integration<\/p>\n\n\n\n<p>&#8211; Ceph RGW<\/p>\n\n\n\n<p>&#8211; Manila with Ganesha<\/p>\n\n\n\n<p>&#8211; self-signed SSL<\/p>\n\n\n\n<p>&#8211; Ironic + Inspector<\/p>\n\n\n\n<p>&#8211; Octavia<\/p>\n\n\n\n<p>&#8211; LDAP<\/p>\n\n\n\n<p>&#8211; net-ansible for BM multitenancy<\/p>\n\n\n\n<p>(undercloud) [stack@undercloud-osp13 ~]$ cat deploy.sh&nbsp;<br>#!\/bin\/bash<br>source ~\/stackrc<br>cd ~\/<br>time openstack overcloud deploy --templates --stack chrisj-osp13 \\<br>&nbsp; -r \/home\/stack\/templates\/roles_data.yaml \\<br>&nbsp; -n \/home\/stack\/templates\/network_data.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/network-isolation.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/ceph-ansible\/ceph-ansible.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/ceph-ansible\/ceph-rgw.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/manila-cephfsganesha-config.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/services\/ceph-mds.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/ssl\/tls-endpoints-public-dns.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/services-docker\/ironic.yaml \\<br>&nbsp; -e \/usr\/share\/openstack-tripleo-heat-templates\/environments\/services-docker\/ironic-inspector.yaml \\<br>&nbsp; -e 
\/usr\/share\/openstack-tripleo-heat-templates\/environments\/services-docker\/octavia.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/network-environment.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/ceph-custom-config.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/enable-tls.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/enable-ldap.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/ExtraConfig.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/inject-trust-anchor.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/inject-trust-anchor-hiera.yaml \\<br>&nbsp; -e \/home\/stack\/templates\/overcloud_images.yaml&nbsp;<br>&nbsp;<\/p>\n\n\n\n<p><strong>2. network_data.yaml custom networks:<\/strong><\/p>\n\n\n\n<p>- name: CustomBM<br>&nbsp; name_lower: custombm<br>&nbsp; vip: true<br>&nbsp; ip_subnet: '172.31.10.0\/24'<br>&nbsp; allocation_pools: [{'start': '172.31.10.20', 'end': '172.31.10.49'}]<br>&nbsp; vlan: 320<\/p>\n\n\n\n<p>- name: StorageNFS<br>&nbsp; name_lower: storage_nfs<br>&nbsp; enable: true<br>&nbsp; vip: true<br>&nbsp; vlan: 315<br>&nbsp; ip_subnet: '172.31.5.0\/24'<br>&nbsp; allocation_pools: [{'start': '172.31.5.20', 'end': '172.31.5.29'}]<br>&nbsp;<\/p>\n\n\n\n<p>These extra networks have been created to handle baremetal cleaning and provisioning, and to provide direct access to Manila NFS storage.<\/p>\n\n\n\n<p><strong>3. 
roles_data.yaml<\/strong><\/p>\n\n\n\n<p>My roles data changed mostly for the Controller role, adding the two networks for Manila and baremetal as well as two additional services:<\/p>\n\n\n\n<p>###############################################################################<br># Role: Controller &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;#<br>###############################################################################<br>- name: Controller<br>&nbsp; description: |<br>&#8230;<\/p>\n\n\n\n<p>&nbsp; networks:<br>&nbsp; &nbsp; - External<br>&nbsp; &nbsp; - InternalApi<br>&nbsp; &nbsp; - Storage<br>&nbsp; &nbsp; - StorageMgmt<br>&nbsp; &nbsp; - Tenant<br><strong>&nbsp; &nbsp; - CustomBM<br>&nbsp; &nbsp; - StorageNFS<\/strong><\/p>\n\n\n\n<p>&#8230;<\/p>\n\n\n\n<p>&nbsp; ServicesDefault:<br>&#8230;.<\/p>\n\n\n\n<p>&nbsp; &nbsp; - OS::TripleO::Services::<strong>CephNfs<\/strong><\/p>\n\n\n\n<p>&#8230;.<br>&nbsp; &nbsp; - OS::TripleO::Services::<strong>IronicInspector<\/strong><br>&#8230;..<\/p>\n\n\n\n<p><strong>4. 
NIC config controller.yaml<\/strong><\/p>\n\n\n\n<p>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; - type: vlan<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; mtu: 1500<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; vlan_id:<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; get_param: <strong>CustomBMNetworkVlanID<\/strong><br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; addresses:<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; - ip_netmask:<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; get_param: <strong>CustomBMIpSubnet<\/strong><br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; - type: vlan<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; mtu: 1500<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; vlan_id:<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; get_param: <strong>StorageNFSNetworkVlanID<\/strong><br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; addresses:<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; - ip_netmask:<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; get_param: <strong>StorageNFSIpSubnet<\/strong><\/p>\n\n\n\n<p><strong>5. 
network-environment.yaml<\/strong><\/p>\n\n\n\n<p>In order to enable net-ansible I have added the following:<\/p>\n\n\n\n<p>resource_registry:<br>&nbsp; OS::TripleO::Services::NeutronCorePlugin: OS::TripleO::Services::NeutronCorePluginML2Ansible<\/p>\n\n\n\n<p>parameter_defaults:<br>&nbsp; NeutronMechanismDrivers: openvswitch,ansible<br>&nbsp; NeutronTypeDrivers: local,vlan,flat,vxlan<br>&nbsp; ML2HostConfigs:<br>&nbsp; &nbsp; ex3400:<br>&nbsp; &nbsp; &nbsp; ansible_host: '172.31.8.254'<br>&nbsp; &nbsp; &nbsp; ansible_network_os: 'junos'<br>&nbsp; &nbsp; &nbsp; ansible_ssh_pass: 'XXXXXXX'<br>&nbsp; &nbsp; &nbsp; ansible_user: 'ansible'<br>&nbsp; &nbsp; &nbsp; manage_vlans: 'false'<br>&nbsp; &nbsp; &nbsp; mac: '84:c1:c1:48:06:1b'<\/p>\n\n\n\n<p><strong>6. Extra-config.yaml (mostly for Ironic and inspection)<\/strong><\/p>\n\n\n\n<p>&nbsp; NovaSchedulerDefaultFilters:<br>&nbsp; &nbsp; - RetryFilter<br>&nbsp; &nbsp; - AggregateInstanceExtraSpecsFilter<br>&nbsp; &nbsp; - AggregateMultiTenancyIsolation<br>&nbsp; &nbsp; - AvailabilityZoneFilter<br>&nbsp; &nbsp; - RamFilter<br>&nbsp; &nbsp; - DiskFilter<br>&nbsp; &nbsp; - ComputeFilter<br>&nbsp; &nbsp; - ComputeCapabilitiesFilter<br>&nbsp; &nbsp; - ImagePropertiesFilter<br>&nbsp; &nbsp; - ServerGroupAntiAffinityFilter<br>&nbsp; &nbsp; - ServerGroupAffinityFilter<\/p>\n\n\n\n<p>&nbsp; IronicCleaningDiskErase: metadata<br>&nbsp; IronicIPXEEnabled: true<br>&nbsp; IronicCleaningNetwork: baremetal<br>&nbsp; IronicProvisioningNetwork: baremetal<\/p>\n\n\n\n<p>&nbsp; IronicInspectorIpRange: '172.31.10.50,172.31.10.69'<br>&nbsp; IronicInspectorInterface: vlan320<br>&nbsp; IronicInspectorEnableNodeDiscovery: true<br>&nbsp; IronicInspectorCollectors: default,extra-hardware,numa-topology,logs<\/p>\n\n\n\n<p>&nbsp; ServiceNetMap:<br>&nbsp; &nbsp; IronicApiNetwork: custombm<br>&nbsp; 
&nbsp; IronicNetwork: custombm<br>&nbsp; &nbsp; IronicInspectorNetwork: custombm<br>&nbsp; &nbsp; IronicProvisioningNetwork: baremetal<\/p>\n\n\n\n<p>&nbsp; ControllerExtraConfig:<br>&nbsp; &nbsp; ironic::inspector::add_ports: all<br>&nbsp;<\/p>\n\n\n\n<p><strong>II. OpenShift Configuration<\/strong><\/p>\n\n\n\n<p>Since I wanted to take advantage of features that are not yet in the GA version of the product, I decided to use the latest (at the time of deployment) nightly developer-preview build, which happened to be:<\/p>\n\n\n\n<p><strong>1. openshift-install version<\/strong><br>openshift-install 4.5.0-0.nightly-2020-06-20-194346<br>built from commit 2874fb3204089822b561f98a0a3fe7b15a84da00<br>release image registry.svc.ci.openshift.org\/ocp\/release@sha256:749ebc496202f1e749f84c65ea8e16091c546528f0a37849bbc6340d1441fbe1<\/p>\n\n\n\n<p><strong>2. install-config<\/strong><\/p>\n\n\n\n<p>apiVersion: v1<br>baseDomain: openshift.lab<br>compute:<br>- architecture: amd64<br>&nbsp; hyperthreading: Enabled<br>&nbsp; name: worker<br><strong>&nbsp; platform:<br>&nbsp; &nbsp; openstack:<br>&nbsp; &nbsp; &nbsp; additionalNetworkIDs:<br>&nbsp; &nbsp; &nbsp; - 26531a24-0959-4cdb-8d8e-61a711b5847c<br>&nbsp; &nbsp; &nbsp; type: baremetal<\/strong><br>&nbsp; replicas: 2<br>controlPlane:<br>&nbsp; architecture: amd64<br>&nbsp; hyperthreading: Enabled<br>&nbsp; name: master<br>&nbsp; platform:<br><strong>&nbsp; &nbsp; openstack:<br>&nbsp; &nbsp; &nbsp; additionalNetworkIDs:<br>&nbsp; &nbsp; &nbsp; - 26531a24-0959-4cdb-8d8e-61a711b5847c<\/strong><br>&nbsp; replicas: 3<br>metadata:<br>&nbsp; creationTimestamp: null<br>&nbsp; name: production<br>networking:<br>&nbsp; clusterNetwork:<br>&nbsp; - cidr: 10.128.0.0\/14<br>&nbsp; &nbsp; hostPrefix: 23<br><strong>&nbsp; machineNetwork:<br>&nbsp; - cidr: 192.168.100.0\/24<\/strong><br>&nbsp; #- cidr: 10.0.0.0\/16<br>&nbsp; networkType: OpenShiftSDN<br>&nbsp; serviceNetwork:<br>&nbsp; - 
172.30.0.0\/16<br>platform:<br>&nbsp; openstack:<br>&nbsp; &nbsp; cloud: openstack<br>&nbsp; &nbsp; computeFlavor: m1.large<br>&nbsp; &nbsp; externalNetwork: external<br>&nbsp; &nbsp; lbFloatingIP: 172.31.8.151<br>&nbsp; &nbsp; octaviaSupport: \"1\"<br>&nbsp; &nbsp; region: \"\"<br>&nbsp; &nbsp; trunkSupport: \"1\"<br><strong>&nbsp; &nbsp; machinesSubnet: e7b40924-afd1-4c35-85c5-f0da36b3272c<br>&nbsp; &nbsp; clusterOSImage: rhcos45-raw<br>&nbsp; &nbsp; apiVIP: 192.168.100.200<br>&nbsp; &nbsp; ingressVIP: 192.168.100.201<\/strong><br>publish: External<br>pullSecret:&#8230;<br>sshKey: |<br>&nbsp; ssh-rsa AAAAB &#8230;<\/p>\n\n\n\n<p>Above I am highlighting a few features that are not available out of the box.<\/p>\n\n\n\n<p><strong>&#8211; additionalNetworkIDs &#8211;<\/strong> allows attaching more than a single network to instances. In my case I pointed it at the UUID of the StorageNFS network.<\/p>\n\n\n\n<p><strong>&#8211; type: baremetal &#8211;<\/strong> allows you to specify an OpenStack flavor (baremetal) to be used for each of the roles.<\/p>\n\n\n\n<p><strong>&#8211; machineNetwork \/ machinesSubnet <\/strong>&#8211; this is also a new feature that brings Bring-Your-Own-Network (BYON) functionality to the OCP IPI installer. In my case I pre-created a tenant VLAN network and asked OCP to use it. The CIDR needs to match as well.<\/p>\n\n\n\n<p><strong>&#8211; apiVIP\/ingressVIP<\/strong> &#8211; also part of the BYON functionality; allows you to manually pick the IPs used for these two functions.<\/p>\n\n\n\n<p>&#8211;&nbsp;<strong>clusterOSImage&nbsp;<\/strong>&#8211; allows you to specify a CoreOS image pre-uploaded to Glance (in my case I have been using different images).&nbsp;<\/p>\n\n\n\n<p><strong>III. Ansible<\/strong><\/p>\n\n\n\n<p>The Ansible Tower integration for zero-touch provisioning is rather simple and involves a single playbook that should be added to your Tower schedule. 
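Before handing anything to Tower or the installer, it is worth sanity-checking the addressing used throughout this setup. The short stand-alone script below is not part of any installer, just a quick offline check of my own; the literal values are taken from the install-config and TripleO templates shown earlier, so adjust them to your environment. It verifies that both VIPs sit inside the machineNetwork CIDR and that the Ironic inspector DHCP range does not collide with the CustomBM allocation pool:

```python
import ipaddress

# Literal values taken from the install-config and TripleO templates
# above; adjust them to match your own environment.
machine_net = ipaddress.ip_network("192.168.100.0/24")
vips = {"apiVIP": "192.168.100.200", "ingressVIP": "192.168.100.201"}

# With BYON, both VIPs must be free addresses inside machineNetwork.
for name, addr in vips.items():
    assert ipaddress.ip_address(addr) in machine_net, f"{name} outside {machine_net}"

# The IronicInspectorIpRange must not overlap the CustomBM neutron
# allocation pool (172.31.10.20-172.31.10.49 in network_data.yaml).
pool_end = ipaddress.ip_address("172.31.10.49")
inspector_start = ipaddress.ip_address("172.31.10.50")
assert inspector_start > pool_end, "inspector range overlaps allocation pool"

print("addressing checks passed")
```

Catching a VIP outside the machine network here takes seconds; catching it after a failed IPI run takes an hour.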
I have shared the source code here:<\/p>\n\n\n\n<p><a href=\"https:\/\/github.com\/OOsemka\/AnsibleTower-IronicTools\">https:\/\/github.com\/OOsemka\/AnsibleTower-IronicTools<\/a><\/p>\n\n\n\n<p><strong>IV. Post Deploy tasks<\/strong><\/p>\n\n\n\n<p><strong>1. Manila CSI driver integration<\/strong><\/p>\n\n\n\n<p>Thanks to Mike Fedosin (main developer for the Manila CSI integration), the process of adding the Manila driver to OCP is really simple. To make sure you have the latest instructions, please follow his GitHub README page:<\/p>\n\n\n\n<p><a href=\"https:\/\/github.com\/openshift\/csi-driver-manila-operator\">https:\/\/github.com\/openshift\/csi-driver-manila-operator<\/a><\/p>\n\n\n\n<p>Please note that for an OCP deployment with baremetal nodes, Manila is truly the &#8220;only game in town&#8221; from the OpenStack perspective. Cinder integration is there, but only for workers running as VMs.<\/p>\n\n\n\n<p><strong>2. Swift and self-signed certs workaround<\/strong><\/p>\n\n\n\n<p>Unfortunately there is still a bug if you end up using self-signed certs like myself. The bug itself can be found here:<\/p>\n\n\n\n<p><a href=\"https:\/\/bugzilla.redhat.com\/show_bug.cgi?id=1810461\">https:\/\/bugzilla.redhat.com\/show_bug.cgi?id=1810461<\/a><\/p>\n\n\n\n<p>The workaround is rather simple.<\/p>\n\n\n\n<p>After authenticating as kubeadmin to your deployed OCP cluster, execute the following:<\/p>\n\n\n\n<p>oc edit configs.imageregistry.operator.openshift.io\/cluster<\/p>\n\n\n\n<p>then find the <strong>spec<\/strong> section and add to it:<\/p>\n\n\n\n<p>disableRedirect: true<\/p>\n\n\n\n<p><strong>3. Timeout not long enough for initial baremetal node deployment<\/strong><\/p>\n\n\n\n<p>Plenty of servers happen to not boot quickly enough for the OCP installer to accommodate them. You will see the deployment fail in these cases. 
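Conceptually, the fix is just to keep waiting longer than the installer's built-in timeout does. The tiny stdlib sketch below is my own illustration of that retry loop, not installer code; the probe callable and the timeout values are made up. In practice the probe could try an HTTPS GET against the cluster's apiVIP health endpoint.

```python
import time

def wait_for(probe, timeout=3600.0, interval=30.0):
    """Keep calling `probe` until it returns True or `timeout`
    seconds elapse; return whether the probe ever succeeded."""
    deadline = time.monotonic() + timeout
    while True:
        if probe():
            return True
        # Stop once the next sleep would push us past the deadline.
        if time.monotonic() + interval > deadline:
            return False
        time.sleep(interval)

# Stub probe for illustration: succeeds on the third attempt,
# the way a slow-booting baremetal node eventually comes up.
attempts = iter([False, False, True])
print(wait_for(lambda: next(attempts), timeout=5.0, interval=0.01))  # True
```

This is exactly what re-running the installer's wait phase does for you, so in real life you do not need custom code, just patience.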
It is a bug that has been captured here:<\/p>\n\n\n\n<p><a href=\"https:\/\/bugzilla.redhat.com\/show_bug.cgi?id=1843979\">https:\/\/bugzilla.redhat.com\/show_bug.cgi?id=1843979<\/a><\/p>\n\n\n\n<p>The workaround is also quite simple. After the initial failure, just execute something like this (example):<\/p>\n\n\n\n<p>openshift-install --dir=&lt;insert-your-install-directory-here&gt; wait-for install-complete<\/p>\n\n\n\n<p>This will allow you to see the finish line.<\/p>\n\n\n\n<p>Please note this only happens with the initial deploy; scaling out should not have a similar problem.<\/p>\n\n\n\n<p>Big thanks to Paul Halmos, Mike Fedosin, Martin Andre and the entire shiftstack team!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I have to admit,&nbsp;it has taken me a while to produce this next article. Today, however it&#8217;s about to change and I am happy to introduce on this blog&nbsp;OpenShift on OpenStack &#8211; no","protected":false},"author":1,"featured_media":68,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[1],"tags":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/posts\/67"}],"collection":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/comments?post=67"}],"version-history":[{"count":1,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/posts\/67\/revisions"}],"predecessor-version":[{"id":70,"href":"https:\/\/chrisj.clo
ud\/index.php\/wp-json\/wp\/v2\/posts\/67\/revisions\/70"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/media\/68"}],"wp:attachment":[{"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/media?parent=67"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/categories?post=67"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/chrisj.cloud\/index.php\/wp-json\/wp\/v2\/tags?post=67"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}