Cloud Hypervisor, an open source hypervisor project for cloud environments, is drawing a clear line: AI-generated code is not welcome. With the release of version 48, a new policy is in effect that rejects contributions written using large language models.
Cloud Hypervisor originated in 2018, when Google, Intel, Amazon, and Red Hat jointly launched the rust-vmm project to develop virtualization components in the Rust programming language. Intel later took its work in a different direction, which resulted in Cloud Hypervisor, a Virtual Machine Monitor (VMM) designed specifically for cloud workloads.
At the end of 2021, Cloud Hypervisor moved under the umbrella of the Linux Foundation, giving the project a neutral and transparent governance structure. Big names such as Microsoft, ARM, ByteDance, and Alibaba subsequently joined, and the software has since become a significant component in the infrastructure of public cloud providers.
The strength of Cloud Hypervisor lies in its lightweight and secure design. The hypervisor supports hot-plugging of CPUs, memory, and devices, runs both Linux and Windows VMs, and has a compact code base that is easier to maintain than many older alternatives. Thanks to these features, hyperscalers use the software as the base layer of their IaaS services, adapting it where needed to get the most out of their hardware.
Version 48 raises the maximum number of supported vCPUs on x86_64 hosts with KVM from 254 to 8,192, a significant step up in scalability. Support for Intel SGX has been removed, inter-VM shared memory has been added, and pausing virtual machines with many vCPUs has been sped up. In a short period, the project has thus grown from an experimental hypervisor into a fully fledged solution for production environments.
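To illustrate what the new ceiling means in practice, a minimal launch sketch using Cloud Hypervisor's documented command-line syntax might look as follows; the kernel and disk paths are placeholders, and a value like boot=300 would already exceed the old 254-vCPU limit:

    cloud-hypervisor \
        --kernel ./hypervisor-fw \
        --disk path=./cloud-image.raw \
        --cpus boot=300 \
        --memory size=64G

The vCPU count and memory size here are illustrative; actual limits depend on the host hardware and KVM configuration.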
No AI code policy
The release also marks the introduction of a so-called No AI code policy. Contributions that are known to have been generated, in whole or in part, using large language models will be rejected. The maintainers state that the policy is intended to avoid legal risks around licensing and to make the most effective use of the project's limited code-review capacity. They emphasize that the quality and traceability of contributions are crucial to maintaining the code base.
There are concerns within the community about whether the policy is sustainable in practice, since it is difficult to determine with certainty whether AI influenced a given piece of code. To make enforcement manageable, a mandatory confirmation in the pull request process has been suggested, as sketched below. Given that the project already has extensive contribution guidelines covering testing requirements, documentation updates, and code quality, such an additional confirmation would fit naturally into existing procedures.
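For illustration only, such a confirmation could take the form of a statement or checkbox in the pull request template; the wording below is hypothetical and not the project's actual text:

    [ ] I confirm that no part of this contribution was generated using a large language model.

A declaration of this kind would shift responsibility to the contributor, much as the Developer Certificate of Origin sign-off already does for code provenance in many open source projects.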