Informatica announces serverless for better data management


Informatica, a specialist in enterprise data management solutions and applications, recently updated its iPaaS platform, Intelligent Cloud Services. The most important new feature is the arrival of serverless computing.

The data management specialist's iPaaS platform gets a big boost with the arrival of serverless computing. A few years ago, Informatica rebuilt the platform entirely on microservices; serverless computing now opens up even more possibilities.

According to Informatica, serverless computing is particularly useful for the processes surrounding data ingestion and integration. These processes are often carried out in batches and, depending on the mix of sources, have varying resource requirements, such as compute power.

Serverless computing helps here because it removes the need to provision capacity for peak loads: compute power automatically scales up or down with the amount of data traffic.
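As a rough illustration of the idea (not Informatica's actual implementation, and all thresholds here are hypothetical), a serverless runtime can scale to zero when idle and size its worker pool from the volume of queued data:

```python
import math


def workers_needed(queued_mb: float, mb_per_worker: float = 512.0,
                   min_workers: int = 0, max_workers: int = 16) -> int:
    """Scale the worker count with the amount of queued data traffic.

    Hypothetical sketch: with no traffic the pool shrinks to zero, and
    under load it grows proportionally, capped at max_workers.
    """
    if queued_mb <= 0:
        return min_workers  # scale to zero when there is nothing to process
    return max(min_workers,
               min(max_workers, math.ceil(queued_mb / mb_per_worker)))
```

A batch that queues 1 GB would, under these assumed parameters, run on two workers, while a sudden spike is capped by `max_workers` rather than forcing permanently provisioned peak capacity.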

Properties of serverless option

The newly released feature includes an autoscaling option with built-in high availability and recovery. Customers can still use the server-based options for, among other things, predictable and long-running workloads.

Furthermore, the serverless option offers customers a machine-learning-based 'calculator'. The calculator models new workloads and estimates the cost of running them. These estimates depend on whether customers consider performance (with parallel processing) or the final cost (running on a single node) to be more important.
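The trade-off the calculator weighs can be sketched with a toy cost model. All prices, throughput figures and the overhead factor below are invented for illustration; Informatica's actual model is not public in this article:

```python
from dataclasses import dataclass


@dataclass
class Estimate:
    nodes: int
    hours: float
    cost: float


def estimate(data_gb: float, gb_per_node_hour: float = 100.0,
             node_hour_price: float = 0.50, parallel_nodes: int = 8,
             parallel_overhead: float = 1.15) -> dict:
    """Toy workload-cost model (hypothetical numbers).

    A single node minimises total cost; parallel processing minimises
    runtime but pays a coordination-overhead premium on cost.
    """
    single_hours = data_gb / gb_per_node_hour
    single = Estimate(1, single_hours, single_hours * node_hour_price)

    parallel_hours = single_hours / parallel_nodes * parallel_overhead
    parallel = Estimate(parallel_nodes, parallel_hours,
                        parallel_hours * parallel_nodes * node_hour_price)
    return {"cost_optimised": single, "performance_optimised": parallel}
```

For a 1 TB workload under these assumptions, the single-node plan is cheaper while the parallel plan finishes far sooner, which is exactly the choice the calculator is said to surface.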

Informatica is not the first enterprise data management player to introduce serverless. Services such as AWS Glue, Azure Data Factory, Google Cloud Data Fusion, and Databricks already offer this option.

ML for data pipelines

Another feature in the latest Informatica release is the application of machine learning (ML) to rationalise data pipelines. As cloud-based low-code/no-code tools make it increasingly easy to set up data pipelines, customers increasingly end up building one-off applications.

The added ML tooling helps customers inspect data pipelines and scan data sources and operational processes. This makes it much easier for them to identify which pipelines use similar transformation patterns. In addition, the tooling helps build customisable templates for these pipelines.
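A greatly simplified stand-in for this kind of pattern matching (the pipeline representation below is hypothetical, not Informatica's data model) is to describe each pipeline as its sequence of transformation types and group pipelines that share a sequence:

```python
from collections import defaultdict


def group_by_pattern(pipelines: dict) -> dict:
    """Group pipelines that apply the same sequence of transformation
    types; groups with more than one member are template candidates.

    `pipelines` maps a pipeline name to its ordered list of step types,
    e.g. {"orders": ["filter", "join", "aggregate"]}.
    """
    groups = defaultdict(list)
    for name, steps in pipelines.items():
        groups[tuple(steps)].append(name)  # the step sequence is the signature
    return {sig: names for sig, names in groups.items() if len(names) > 1}
```

Two pipelines that both filter, join and then aggregate would land in one group, suggesting a single reusable template could replace both one-off builds. Real pattern detection would tolerate near-matches rather than requiring identical sequences.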

Data streams

For the ingestion of data streams, functionality has been added that scans the Kafka repository to track data lineage. This functionality was already available for database and file sources within the Informatica iPaaS platform.

Other updates

The release also includes a number of incremental updates. Notable additions include de-duplication functionality for the data quality services. The catalogue has also been improved for various end users: it now retrieves metadata not only from known data sources, but also from cloud services such as Microsoft Power BI, Qlik Sense, AWS Glue, Google Cloud and Snowflake.

The new release of the Informatica iPaaS data management platform is now available on AWS and Azure. A beta version of the platform is available on Google Cloud.