Google Cloud has lowered the cost of Cloud Bigtable, its cloud-based storage solution for big data. Instead of a mandatory purchase of three nodes, a single node is now sufficient. The main advantage is that the service now also fits smaller workloads.

With Cloud Bigtable, the tech giant’s public cloud offers a fully managed NoSQL database. The service is particularly suited to running gigantic, petabyte-scale operational and analytical workloads.

Previously, Google Cloud offered this fully managed service at 65 dollar cents per node per hour and required users to purchase at least three nodes for production workloads. That made the service expensive and, in practice, suitable only for large companies.

Suitable for big or small use cases

With the changes now implemented, the tech giant says Cloud Bigtable should become the place where companies house their important large and small use cases for operational and analytical workloads. According to Google Cloud, that applies equally to start-ups and to large companies that want to move their self-managed Apache HBase big data stores or Apache Cassandra database clusters to the service.

Smaller companies can now also benefit from the managed service, which was previously unattractive to them because of the price.

Other improved functionality

By making Cloud Bigtable cheaper and easier to deploy, the tech giant also makes it possible to use replication to increase the availability of smaller clusters. In addition, customers can now more easily switch a one-node development instance to a one-node production instance whenever they want.
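As a rough illustration of what that switch looks like in practice, the steps below sketch creating a single-node development instance and later promoting it to a production instance with the gcloud CLI. The instance, cluster, and zone names are placeholders, and the exact flags may differ between gcloud versions; this is a sketch rather than an exact recipe, and it requires an authenticated gcloud setup with the Bigtable API enabled.

```shell
# Create a one-node development instance (names and zone are hypothetical examples).
gcloud bigtable instances create my-instance \
    --display-name="My Bigtable instance" \
    --instance-type=DEVELOPMENT \
    --cluster=my-cluster \
    --cluster-zone=europe-west1-b

# Later, promote the development instance to a production instance in place.
# (The upgrade subcommand may be in the beta track depending on the gcloud release.)
gcloud bigtable instances upgrade my-instance \
    --instance-type=PRODUCTION
```

After the upgrade, the same one-node instance continues serving traffic but is treated as a production instance, which ties in with the adjusted SLA coverage described below.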

Furthermore, the SLAs of Cloud Bigtable have been adjusted so that all Bigtable instances are now covered, regardless of their size.