Red Hat presented some much-needed innovation for Ansible at its summit in Boston. One very big change is that Ansible is now event-driven: administrators no longer have to kick off an Ansible Playbook manually, because Ansible can start a Playbook automatically based on observed events. In addition, with Ansible Lightspeed, it is introducing a generative AI assistant that will help users write Ansible Playbooks.
To cut to the chase: Ansible is now event-driven. It is rather strange that this hadn't been the case up to this point, given that IT organizations have been trending in this direction for a while. Ansible Playbooks can now run based on observed events. Previously, this required a separate software solution that both observed the environment and started the appropriate Playbook. Half of that has now changed: Ansible still cannot observe anything itself, but it connects the infrastructure to the applications that handle observability. Once such an application sends its events to Ansible, you can configure in Ansible how it should respond to them.
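In practice, this configuration takes the form of a rulebook: a YAML file that pairs an event source with conditions and actions. The sketch below is illustrative, assuming a monitoring tool posts alerts to the `ansible.eda.webhook` event source; the playbook path and alert payload fields are hypothetical.

```yaml
# Minimal Event-Driven Ansible rulebook sketch (names and payload are assumptions)
- name: Respond to monitoring alerts
  hosts: all
  sources:
    # Listen for events pushed by an external observability tool
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart a service when an alert reports it as down
      condition: event.payload.alert == "service_down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml
```

The point is the division of labor the article describes: the observability tool detects the problem and emits the event, while Ansible only decides how to react to it.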
The end goal here is IT automation: no more manually launching playbooks to solve a problem, in other words. The intention is also to become less dependent on third-party solutions, although users will remain dependent on them when it comes to observability.
As far as we are concerned, it is a step in the right direction. However, we think the demand for more IT automation is many times greater. IT operations teams want to spend much less time managing and troubleshooting than they do now, and most of what they do today could be handled almost entirely through automation. Of course supervision is needed, but developing or configuring each Ansible Playbook yourself should no longer be necessary.
With more AI, Ansible should also be able to self-detect which Playbooks are needed for particular workloads and how to deploy them. Hopefully that will be a next step in the near future.
Making Playbooks faster and easier with Ansible Lightspeed
Red Hat couldn't ignore the hype around generative AI either. The company did not want to let the success of OpenAI pass it by, so it looked at how it could apply generative AI within its own portfolio. Writing Ansible Playbooks was a quick and obvious choice. Red Hat told us that GPT-3 and GPT-4 are not good enough at this task to deploy commercially. However, Red Hat's parent company IBM has the very powerful IBM Watson AI engine on offer. This has allowed Red Hat to develop its own LLM (large language model), which will soon help users write Ansible Playbooks. This solution is called Ansible Lightspeed. Lightspeed will also soon be able to automatically suggest enhancements and check spelling and grammar.
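To make the workflow concrete: the developer writes a natural-language task name, and Lightspeed suggests the module code beneath it. The completion below is purely illustrative, assuming a prompt of "Install and start nginx"; it is not an actual Lightspeed output.

```yaml
# Hypothetical Lightspeed-style completion: the user writes the task
# names, the model fills in the module parameters.
- name: Install and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx        # user-written prompt line
      ansible.builtin.package:   # suggested completion
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

This shows why Playbooks suit generative AI so well: each task already pairs a natural-language description with structured code.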
Large customers with a sizable library of their own Ansible Playbooks will also be able to use that library in the future to train their own LLMs, so the model generates new Playbooks in the same style of code.
Red Hat could not say much about the exact operation and training of these LLMs yet. It did state that it considers it important that customers' Playbooks do not end up in a generic LLM. What remains unclear, however, is how the models will be corrected when they make mistakes. Developers need to put guardrails around certain topics in particular, because generative AI can go astray without them. That was still somewhat unclear during our visit, but will hopefully become evident in the coming days or weeks.
Red Hat itself is now using generative AI to develop Playbooks as well, which allows it to develop and deploy them faster. That's a step in the right direction, although not everyone in the community will see it that way. At any rate, it leaves us wondering: couldn't Red Hat also have capitalized on the no-code/low-code hype of recent years? By turning in that direction, it could have enabled IT operations staff without coding skills to create Playbooks. In our view, that would have been more effective than generative AI.
Developing your own AI/LLM with Red Hat OpenShift AI
What Red Hat does do well is facilitate. The demand for AI, machine learning and the ability to develop one's own LLMs is now growing tremendously. Training models requires a huge amount of underlying infrastructure and computing power, and Red Hat OpenShift AI is the company's response. It has optimized OpenShift for these types of workloads to provide a unified experience both on-premises and in the cloud. To that end, it has formed partnerships to better support various technologies, including Anaconda, IBM Watson Studio, Intel OpenVINO and AI Analytics Toolkit, Pachyderm and Starburst.
This will also allow large organizations with stringent compliance requirements to develop, train and deploy models on-premises.