
Red Hat presented some much-needed innovation for Ansible at its summit in Boston. The biggest change is that Ansible is now Event-Driven: you no longer need to kick off an Ansible playbook manually or rely on third-party software, as Ansible can automatically start a playbook based on observed events. In addition, with Ansible Lightspeed, it is introducing a generative AI assistant that will help users write Ansible Playbooks.

To start with the most important part: Ansible is now Event-Driven. Given how IT environments are structured today and where organizations want to go, it is strange that this wasn't already possible, and that you couldn't run Ansible playbooks based on observed events. Previously, this required another software solution that both observed the environment and started the appropriate playbook. That has now changed, though only halfway: Ansible still cannot observe anything itself. Instead, it connects the infrastructure to the applications that handle observability. Once such an application communicates an event to Ansible, you can configure in Ansible how it should respond to that event.
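In practice, Event-Driven Ansible expresses this response logic in so-called rulebooks, which pair an event source with conditions and actions. The fragment below is a minimal, hypothetical sketch of that idea: it assumes a monitoring tool posts alerts to the built-in ansible.eda.webhook source, and the payload field and playbook path are illustrative, not part of Red Hat's announcement.

```yaml
# Minimal Event-Driven Ansible rulebook (illustrative sketch).
# Assumes a monitoring tool POSTs alert payloads to the webhook source;
# the payload field and playbook path below are hypothetical.
- name: Respond to service alerts
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart the service when it is reported down
      condition: event.payload.alert == "service_down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml
```

The rulebook is the glue the article describes: the observability tool still does the watching, while Ansible decides, per rule, which playbook to launch in response.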


The ultimate goal is IT automation: no more manually launching playbooks to solve a problem. The intention is also to become less dependent on third-party solutions, although you will remain dependent on observability tooling.

As far as we are concerned, it is a step in the right direction, although we think the demand for more IT automation is many times greater. IT Operations wants to spend far less time on management and troubleshooting than it does now. Much of what IT Operations does today could be almost entirely automated. Of course, supervision is still needed, but developing or configuring each Ansible playbook yourself should no longer be necessary.

With more AI, Ansible should also be able to detect on its own which Playbooks are needed for particular workloads and how to deploy them. Hopefully, that will be the next step in the near future.

Creating Playbooks faster and easier with Ansible Lightspeed

Red Hat could not ignore the hype around generative AI either, and did not want to let the success of OpenAI pass it by. For that reason, it looked at how it could apply generative AI within its own portfolio. Writing Ansible Playbooks was a quick and obvious choice. Red Hat told us that GPT-3 and GPT-4 are not good enough at this task to deploy commercially. However, Red Hat's parent company IBM has a powerful Watson AI engine on the shelf. This has allowed Red Hat to develop its own LLM (large language model), which users will soon be able to use to write Ansible Playbooks. This solution is called Ansible Lightspeed. Lightspeed will also automatically suggest enhancements and act as a spelling and grammar checker for Playbooks.
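The workflow Red Hat describes is that a user writes a task name in natural language and Lightspeed suggests the matching module code. The fragment below is a hypothetical illustration of that interaction, not actual Lightspeed output; the task names and suggested modules are our own assumptions.

```yaml
# Illustrative sketch of the Lightspeed workflow (hypothetical output).
# The engineer writes only the natural-language task name...
- name: Install nginx
  # ...and the assistant suggests the module code underneath:
  ansible.builtin.package:
    name: nginx
    state: present

- name: Enable and start the nginx service
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
```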

Large customers with their own extensive library of Ansible Playbooks will be able to use it in the future to train their own LLMs, so they can develop more Playbooks in the same style of code.

Red Hat could not yet explain how these LLMs actually operate and are trained. It did say that it considers it important that customers' Playbooks do not end up in a generic LLM. What is still unclear, however, is how the models will be corrected if, for example, they contain errors, or how they will be kept within bounds, because generative AI needs guardrails. Red Hat's local team in the Netherlands could not yet clarify this, but hopefully the answers will follow in the coming days or weeks.

Red Hat is now using generative AI to develop Playbooks, which allows organizations to develop and deploy them faster. That's a step in the right direction, although we expect not everyone in the community will see it that way; many developers are not very excited about generative AI.

However, we wonder: couldn't Red Hat also have capitalized on the no-code/low-code trend of recent years? That would have enabled IT operations staff without coding skills to create Playbooks. In our view, that would have been more effective than generative AI.

Developing your own AI/LLM with Red Hat OpenShift AI

What Red Hat does do well is facilitate. The demand for AI, machine learning and, for example, developing your own LLMs is growing tremendously. Training models requires a huge amount of underlying infrastructure; it involves a lot of computing power. Red Hat is responding to this with Red Hat OpenShift AI. It has optimized OpenShift for these types of workloads to provide a unified experience both on-premises and in the cloud. To do so, it has formed partnerships to better support various technologies, including Anaconda, IBM Watson Studio, Intel OpenVINO, the Intel AI Analytics Toolkit, Pachyderm and Starburst.

This will also allow large organizations with stringent compliance requirements to develop, train and deploy on-premises models.
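To make the infrastructure point concrete: on OpenShift, a training workload is ultimately scheduled like any other containerized job, with accelerators requested from the cluster. The snippet below is a minimal, hypothetical sketch of such a job; the container image and GPU count are assumptions for illustration, not part of Red Hat's announcement.

```yaml
# Hypothetical OpenShift/Kubernetes Job requesting GPUs for model training.
# The container image and resource counts are illustrative assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: llm-training-job
spec:
  template:
    spec:
      containers:
        - name: trainer
          image: quay.io/example/llm-trainer:latest  # hypothetical image
          resources:
            limits:
              nvidia.com/gpu: 2   # request two GPUs from the cluster
      restartPolicy: Never
```

Because the same manifest runs on-premises and in the cloud, this is also how compliance-sensitive organizations can keep training workloads inside their own data center.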