Pushing OpenStack Workloads to the Edge!

Edge computing is an emerging trend that involves bringing computation and data storage closer to the edge of the network, closer to the users and devices that generate and consume data. This can be accomplished by deploying OpenStack at the edge, in locations such as cell towers, retail stores, and factories. However, managing edge workloads poses significant challenges due to the diverse and often resource-constrained nature of edge devices, as well as the complexity of the edge ecosystem. In this presentation, we give a technical overview of deploying edge workloads using OpenStack.

We will discuss the benefits of pushing OpenStack workloads to the edge and explore the challenges and considerations involved in doing so. We will also describe how to deploy and configure OpenStack at the edge, including some examples of deployment topologies and the results they have achieved. Finally, we will provide some best practices for deploying and managing OpenStack at the edge.

The cloud journey you DON’T want to take.

The adoption of a private cloud platform like OpenStack can bring many benefits to an organization, including increased agility, scalability, and cost savings. However, the journey to implement and effectively utilize such a platform can be complex and fraught with challenges. In this presentation, we outline some of the common mistakes made during the OpenStack adoption journey and offer suggestions on how to avoid them.

During this presentation we will discuss everything from understanding the organization's needs and estimating the time and resources required, to planning deployment and installation, migration processes, and the operational needs of a production deployment.

By avoiding the common pitfalls we will present, organizations can more smoothly navigate the OpenStack adoption journey and reap the full benefits of a private cloud platform.

OpenStack brings Kubernetes to the Edge

Businesses today are trying to bring data and applications as close to end users as possible to deliver real-time services. The problem of delivering a robust application platform in a traditional monolithic datacenter was solved a while ago, but how do you bring the same architecture to the edge? Can you turn your field offices or retail locations into small co-los? How do you connect all the locations together? Where are you going to store all the data? How will you manage the life cycle of hundreds of small locations? Come join us and learn how the development teams for Kubernetes and OpenStack joined forces with Field Engineering to deliver a push-button Kubernetes platform running on OpenStack for the edge. Let us be your guide as we walk you through how we struggled with, and finally solved, the problem of a geographically distributed Kubernetes platform. We will also talk about some of the limitations and gaps that remain and how we are working to address them.
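The abstract doesn't name the specific tooling, but one common path to push-button Kubernetes on OpenStack is the Magnum container-infrastructure service. A minimal sketch, assuming a Magnum-enabled cloud and an existing cluster template named k8s-template (both the template and site names are illustrative assumptions, not details from the talk):

```shell
# Create a small Kubernetes cluster at an edge site from a pre-defined
# template (template name, site name, and node counts are illustrative):
openstack coe cluster create edge-site-01 \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 2

# Once the cluster is ready, fetch its kubeconfig and use kubectl as usual:
openstack coe cluster config edge-site-01 --dir ~/clusters/edge-site-01
export KUBECONFIG=~/clusters/edge-site-01/config
kubectl get nodes
```

Repeating the same templated create across many small sites is what makes this approach amenable to automating hundreds of locations.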

What the Cloud? – OpenInfra Summit Berlin 2022

You’re at a company where the technical management reads something about “the cloud” and now every project is about moving your company there. But where is there, exactly? The one thing that is for sure is that the “cloud” is a service and your company wants in! So don’t do yourself a disservice and join this beginner session where we will dissect the polysemous nature of “cloud” and talk about how it applies to “Platform as a Service” and “Infrastructure as a Service” alike. We will discuss the pros and cons of various solutions that exist today, ranging from OpenStack to Kubernetes, and how they relate to both private and public clouds. With the knowledge gained from this session, you’ll be able to navigate the cloudy waves your management throws at you and get your company there, wherever that may be!

Using Ansible Tower To Automate Baremetal Benchmarking – AnsibleFest Atlanta 2019

At AMD we manufacture semiconductors, specifically CPUs and GPUs. As you might expect, we have to collect a LOT of performance data on them during development (and after). This is a case study on how AMD implemented Ansible, Red Hat Ansible Tower, and Red Hat OSP to create a bare metal deployment service that allows for automated benchmark runs. We will walk through the playbooks for both OS deployment and benchmark deployment and execution.
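As a rough sketch of the flow described above (playbook names, host names, and variables here are illustrative, not AMD's actual ones), the two playbook stages can be driven from the command line, or wrapped in a Tower job template so runs are launched on demand:

```shell
# Stage 1: provision a fresh OS onto the target bare metal node
# (target and image are example extra-vars, not real inventory entries):
ansible-playbook deploy-os.yml -e "target=bench-node-01 os_image=rhel-7.6"

# Stage 2: deploy and execute the benchmark against the freshly
# provisioned node, parameterized the same way:
ansible-playbook run-benchmark.yml -e "target=bench-node-01 benchmark=stream"

# With the playbooks wrapped in a Tower job template, a run can be
# launched remotely (template name is hypothetical):
tower-cli job launch --job-template "Benchmark: STREAM" --monitor
```

Keeping provisioning and benchmarking as separate, parameterized playbooks is what lets the same automation cover many OS images and benchmark suites.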

A Telco Story of OpenStack Success – OpenStack Summit Sydney 2017

Telcos are flocking to OpenStack for its NFV functionality and its ability to be deployed in modular POD architectures with federated deployments across datacenters, central offices & edge services. But to deploy something that can meet the business needs in the Telco world, there are a lot of crucial design decisions that must be made. What’s the right hardware? Which SDN and/or SDS option meets our performance needs? What topology delivers the right amount of High Availability? How can we make all of these decisions, yet make it cost effective? Come join us & hear our lessons learned deploying OpenStack at large Telcos. We’ll talk about topics such as L2/L3 stretching between OpenStack PODs, Service Provider integrations, identity federation, deployment automation, service function chaining & federated overlay networks. We’ll share pitfalls & issues encountered along the way but more importantly, we’ll tell you how we were able to overcome these hurdles so you can too!

Automated NFV deployment and management with TripleO – OpenStack Summit Sydney 2017

In this session, learn about the critical aspect of NFV enablement in OpenStack deployments: how to automatically & continuously manage the platform & network optimizations for NFV. Focusing on creating a seamless user experience, learn about the latest features and efforts underway to leverage composable roles to simplify life cycle management (enable, disable, & change) and provide greater control to the VNF user. We’ll also share valuable insight into the mistakes made and lessons learned that helped us along the way.

No Longer Considered an Epic Spell of Transformation: IN-PLACE UPGRADE! – OpenStack Summit Boston 2017

Welcome to the magical school of TripleO where impossibilities are not only possible, but a reality! Upgrade from Mitaka to Newton? Sure, we can do that and YOU can too! Let us show you the value of TripleO Magic! In-place upgrades are only one item in our bag of tricks (that can be YOUR bag of tricks too!) Come join us while we show you how to conjure a rock solid OpenStack environment that has continuous validations running to ensure stability throughout the entire lifecycle of that cloud, even THROUGH your in-place upgrade! This isn’t a cookie cutter deployment either! Customizations abound! The reference architecture not working for you? Use your magic wand to create custom composable roles and deploy YOUR cloud YOUR way!
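For flavor, the kind of invocation behind TripleO custom composable roles looks roughly like this; the roles and environment file names are illustrative, and the `roles generate` helper only appears in newer TripleO releases:

```shell
# Newer TripleO releases can generate a roles data file containing only
# the roles this cloud needs (role names here are examples):
openstack overcloud roles generate -o my_roles.yaml Controller Compute

# Deploy (or redeploy in place) the overcloud using the custom role
# definitions and any site-specific environment overrides:
openstack overcloud deploy --templates \
  -r my_roles.yaml \
  -e my-environment.yaml
```

Because the deploy command is re-run with the same templates and environments, the same workflow covers initial deployment, configuration changes, and the in-place upgrade path the talk describes.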

Feeling a Bit Deprecated? We Are Too. Let’s Work Together to Embrace the OpenStack Unified CLI! – OpenStack Summit Barcelona 2016

Does anyone else recall learning the CLIs of the OpenStack projects and scratching their head in bewilderment at the variance in naming, structure, and arguments across the projects? And then the output you received from executing a command differed depending on which project you were running it against?

Enter the OpenStack Unified CLI to bring consistency to these commands. The Unified CLI not only aims to create a consistent feel across the OpenStack projects, it also uses a pluggable architecture that dynamically provides commands depending on which OpenStack components you have installed. For instance, you don’t get the OpenStack bare metal commands unless you have Ironic installed.
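As a quick illustration (a hypothetical session; the exact command set depends on which client plugins your environment has installed), the unified client maps the old per-project commands onto a single `openstack` entry point:

```shell
# Legacy per-project clients, each with its own naming and argument style:
nova list              # list compute instances
glance image-list      # list images
neutron net-list       # list networks

# The same operations through the unified CLI, with consistent
# noun-verb structure and output formatting:
openstack server list
openstack image list
openstack network list

# Plugin-provided commands appear only when the matching client library
# is installed; adding the Ironic client enables the baremetal group:
pip install python-ironicclient
openstack baremetal node list
```

Running `openstack --help` (or `openstack command list`) shows exactly which command groups your installed plugins currently provide.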

In this session we will deep dive into the OpenStack Unified CLI. We’ll show how it works behind the scenes, and we’ll discuss its available plugins. We’ll also provide some troubleshooting guidance for those times when things don’t work as expected.