
AWS has introduced Amazon SageMaker for Kubernetes, a tool that automatically provisions hardware tailored to containerised machine learning workloads. Such workloads have specific requirements, and configuring them by hand is a time-consuming task.

Anyone who runs machine learning in Kubernetes containers runs into specific hardware requirements. Machine learning workloads are atypical: a model being trained makes heavy use of CPU power, for example, so an administrator must tune how Kubernetes allocates that CPU capacity to keep the workload efficient. In practice, using containers for machine learning on AWS therefore involves a lot of configuration work, and it carries a risk of overprovisioning hardware.
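To give a sense of what that manual tuning looks like, here is a minimal sketch using the official Kubernetes Python client to pin CPU and memory for a training pod. The image name and resource figures are made up for illustration; sizing them correctly for each workload is exactly the configuration burden described above.

```python
# Illustrative sketch: manually pinning compute resources for an ML training pod
# with the official Kubernetes Python client. All names and resource figures are
# hypothetical and only show the per-workload tuning an administrator has to do.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig is available

container = client.V1Container(
    name="trainer",
    image="example.com/ml/trainer:latest",  # hypothetical training image
    resources=client.V1ResourceRequirements(
        # Requests and limits must be sized per workload: too high wastes
        # cluster capacity, too low throttles training.
        requests={"cpu": "8", "memory": "32Gi"},
        limits={"cpu": "16", "memory": "64Gi"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ml-training-pod"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```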

AWS wants to address this inefficiency with Amazon SageMaker for Kubernetes. The tool works with the Kubernetes orchestrator to automate the management of machine learning containers: preconfigured workflows automatically configure and optimise compute resources for specific workloads.
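The announcement does not spell out the workflow interface, but the underlying Amazon SageMaker training API gives a sense of the resource specification such a workflow fills in automatically. The sketch below uses boto3; the job name, container image, IAM role, S3 paths and instance choice are all placeholders, not details from the announcement.

```python
# Illustrative sketch of the kind of managed training request the tool drives
# on behalf of a Kubernetes workload. Every name, ARN, bucket and instance type
# below is a placeholder.
import boto3

sm = boto3.client("sagemaker", region_name="eu-west-1")

sm.create_training_job(
    TrainingJobName="example-training-job",
    AlgorithmSpecification={
        "TrainingImage": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/trainer:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    InputDataConfig=[{
        "ChannelName": "training",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/training-data/",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/model-output/"},
    # SageMaker provisions this hardware itself, instead of the cluster admin
    # sizing requests and limits by hand.
    ResourceConfig={
        "InstanceType": "ml.p3.2xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```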

Optimised efficiency

In addition, the tool ensures that hardware is provisioned only when it is needed, and provisioned hardware is released again once containers no longer use it. According to AWS, this gives SageMaker for Kubernetes near-perfect scalability and close to fully optimised use of system resources. Developers can also adopt the tool without writing additional code.
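Continuing the hypothetical example above: once a managed training job finishes, the instances behind it are released automatically, which is the behaviour underlying the efficiency claim. The sketch below, again with boto3, simply waits for completion and reads back how much compute time was actually billed.

```python
# Illustrative follow-up: the managed job's instances are released when it
# completes, so billable time tracks actual training time. The job name is the
# placeholder used in the previous sketch.
import boto3

sm = boto3.client("sagemaker", region_name="eu-west-1")

waiter = sm.get_waiter("training_job_completed_or_stopped")
waiter.wait(TrainingJobName="example-training-job")

job = sm.describe_training_job(TrainingJobName="example-training-job")
print(job["TrainingJobStatus"])
print(job.get("BillableTimeInSeconds"))  # seconds of provisioned compute paid for
```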

Amazon announced SageMaker for Kubernetes at its annual re:Invent conference in Las Vegas. The tool is now available in a handful of AWS regions: EU (Ireland), US East (Ohio), US East (N. Virginia) and US West (Oregon).