
Rocketlane CEO: AI requires a structural reset of professional SaaS

AI is driving a structural shift in technology, as we all know. But that shift is being felt most acutely in the infrastructure layers and application services that underpin Software-as-a-Service (SaaS) professional services. It goes beyond productivity improvements; some argue it is reshaping value and risk equations, how teams are organised and the speed and impact of the services themselves. What exactly is happening here, what do we mean by SaaS professional services and what are our next steps?

If we define SaaS, in the most general terms, as software delivered through the cloud, then SaaS professional services are the consultancy-driven services around it: implementation, customisation, integration, configuration, skills training and ongoing analysis that help software engineering teams deploy, tune and therefore maximise value from any given software platform or set of tools.

For technology leaders looking to use (there is always a temptation to say "leverage") the opportunities here, the way forward lies in redefining not just service delivery models, but the types of services themselves, to materially improve the success of client partnerships.

Keen to break down and explain this subject matter is Srikrishnan Ganesan, co-founder and CEO of Rocketlane, a company known for its platform that serves customer onboarding, implementation and professional services automation.

How the last 30% becomes strategic

Ganesan suggests that most SaaS products solve roughly 70% of customer requirements out of the box. The remaining 30% involves domain-specific workflows, complex integrations and regulatory nuances. This is where professional services traditionally step in.

The problem is that delivering professional services has traditionally required senior software engineers, long scoping cycles and custom extensions that are difficult to maintain. The work is necessary, but it rarely scales well.

Could AI now move this frontier forward?

“With generative AI development tools and natural-language interfaces, teams can now move from concept to a functional prototype in hours rather than weeks. Iteration cycles compress and requirements can be validated in real-time with customers. Extensions or custom apps can be built and refined within the implementation flow rather than handed off across various teams,” said Ganesan.

He thinks this makes all the difference between an experience where a product meets 70% of the need and the rest is stuck in slow custom development or feature requests waiting on product teams, and one where teams quickly reach a 90% fit through custom apps, rapidly built by the customer's own team or the vendor, that are also easy to maintain and evolve.

“This acceleration of the build cycle fundamentally changes the economics and the risks of the last mile. What once slowed growth can become a differentiator. The constraint shifts from coding capacity to clarity of outcomes and the strength of governance, turning the final 20-30% into a competitive advantage rather than a maintenance nightmare,” said Ganesan.

From delivery to orchestration

Agentic AI can clearly automate many of the core activities of a SaaS professional services team: documentation, configuration, solutioning, data transformation, validation, testing, planning and project management.

Because of this, Ganesan points out that as execution becomes increasingly automated, the centre of gravity in professional services shifts toward orchestration. As a result, he says, services teams deploying AI for customers are increasingly accountable for:

  • Translating AI capabilities into measurable business outcomes. 
  • Interpreting model outputs and resolving trade-offs. 
  • Governing reliability, compliance and risk. 
  • Continuously adapting solutions as models and data evolve.

“Implementation skills such as configuration, integration and workflow design remain important. However, the activities that require those skills are more often delegated to AI, making human involvement more of the ‘in-the-loop’ variety,” said Ganesan. “Human responsibility extends beyond completing scoped tasks to ensuring sustained performance, ROI and outcomes in dynamic environments.”

This change requires a systems mindset. Teams must understand observability, evaluation frameworks, guardrails and lifecycle management. The work shifts from building static artefacts to steering adaptive systems toward predictable outcomes.

“When you think about a supply chain system implementation, as an example, the focus for the human becomes understanding where most ROI will be unlocked, aligning customers with the right agentic use cases for them and then iterating to ensure the promised ROI is actually delivered and acknowledged, while overseeing the AI agents that execute the actual configuration work and summaries and data migration activities,” explained Ganesan.

Hybrid roles at the product–services boundary

As the product and services boundary shifts, the Rocketlane boss says new roles are emerging that combine field proximity with technical depth.

  • Agent builders focus on designing and governing AI agents that orchestrate workflows, automate decisions and interact with other systems. Their mandate is to ensure these agents are reliable, observable and aligned with business constraints. 
  • Customer engineers use natural-language tooling and composable platforms to build UI extensions, lightweight apps and integrations without a full-stack engineering background. They sit close to the customer, translating needs into working artefacts at the edge of the product. 
  • Forward-deployed engineers operate in the field with key accounts, rapidly prototyping solutions in the customer’s environment, validating them with real users and feeding patterns back into the core roadmap.

According to Ganesan, these roles redistribute innovation. Product teams no longer hold exclusive ownership of extensibility. Services organisations' forward-deployed engineers (FDEs) become contributors to platform evolution. Patterns discovered in the field can inform reusable capabilities inside the core product.

“In practice, this looks like a customer success management (CSM) or implementation consultant using AI-assisted builders to spin up a customer-specific dashboard or portal in days rather than waiting on the product backlog. Lightweight integrations with systems like ServiceNow or Zendesk can be built by services teams and, once proven successful across a few accounts, scaled to a supported pattern that product and engineering standardise for the broader customer base,” said Ganesan.

A more disciplined AI adoption model

The suggestion here is that AI initiatives often stall because they begin with broad mandates rather than targeting specific pain points or areas for improvement. A more pragmatic approach for service organisations follows a five-step sequence:

1. Identify points of margin erosion, delay, or rework within active engagements.

2. Quantify the cost of those constraints in time, revenue, or risk.

3. Define outcome-based success metrics tied to those constraints.

4. Execute tightly scoped pilots with clear evaluation criteria.

5. Convert validated approaches into repeatable playbooks.
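The first two steps of that sequence amount to ranking constraints by the money they burn. As a minimal sketch (all names, figures and rates below are hypothetical, purely to illustrate the prioritisation logic):

```python
# Hypothetical sketch of steps 1-2: quantify each constraint's cost,
# then rank so pilots (steps 3-4) target the most expensive pain first.
from dataclasses import dataclass


@dataclass
class Constraint:
    name: str           # step 1: a point of margin erosion, delay or rework
    hours_lost: float   # step 2: quantified cost in time...
    hourly_rate: float  # ...converted to money via a blended rate

    def cost(self) -> float:
        return self.hours_lost * self.hourly_rate


def prioritise(constraints: list[Constraint]) -> list[Constraint]:
    """Rank constraints by cost, most expensive first."""
    return sorted(constraints, key=lambda c: c.cost(), reverse=True)


# Illustrative, made-up engagement data
constraints = [
    Constraint("status reporting", hours_lost=120, hourly_rate=90),
    Constraint("data migration rework", hours_lost=300, hourly_rate=110),
    Constraint("scoping delays", hours_lost=80, hourly_rate=150),
]

for c in prioritise(constraints):
    print(f"{c.name}: ${c.cost():,.0f} at stake")
```

With these invented numbers, data migration rework ($33,000) would be the first pilot candidate; the playbook step (step 5) then captures whatever the pilot proves out.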

Ganesan says that this also provides clarity for services teams whose roles are changing. The goal is to improve the most painful, time‑consuming and error‑prone parts of the work. For instance, some services teams start by automating status updates and executive summaries, prove time savings and better stakeholder alignment and only then expand into forecasting and risk prediction. 

When work that once required 1,000 hours can be delivered in a fraction of the time, effort-based pricing loses alignment with value delivered. Greater predictability supports fixed-fee and outcome-linked models. Lower delivery costs expand the addressable market. Engagements that previously required six-figure budgets can fall into ranges that unlock new segments and use cases. 
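A toy calculation makes the pricing shift concrete. All figures below are hypothetical (a 1,000-hour engagement compressed to 200 hours, an assumed rate and internal cost), chosen only to show why effort-based pricing breaks down:

```python
# Toy comparison of effort-based vs fixed-fee pricing (hypothetical numbers).
hourly_rate = 150     # assumed blended services rate, in dollars
hours_before = 1_000  # traditional delivery effort
hours_after = 200     # AI-assisted delivery effort

effort_price_before = hours_before * hourly_rate  # what the work used to bill
effort_price_after = hours_after * hourly_rate    # effort pricing collapses 5x

# A fixed fee keyed to the outcome can sit between the two:
# the customer pays less than before, the vendor's margin rises.
fixed_fee = 90_000
vendor_cost = hours_after * 100  # assumed internal cost per delivery hour
margin = (fixed_fee - vendor_cost) / fixed_fee

print(f"effort-based before: ${effort_price_before:,}")
print(f"effort-based after:  ${effort_price_after:,}")
print(f"fixed-fee margin:    {margin:.0%}")
```

The gap between the $30,000 effort-based price and the value actually delivered is exactly the misalignment the article describes; it also shows how engagements that once required six-figure budgets can drop into new price ranges.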

The strategic decision for vendors and system integrators centres on how efficiency gains are deployed: preserve them as margin, reinvest them into scalability, or use them to expand reach and market penetration.

Ganesan advises that organisations that treat AI solely as a cost-optimisation tool will see only incremental gains. Those that redesign their service model around value, speed and extensibility will define the next operating model. In onboarding, teams that began by cutting delivery hours with AI soon realised they could instead launch new lower-priced tiers and richer “white-glove” experiences with the same headcount, improving both margins and market coverage.

The structural reset

AI is reorienting SaaS professional services around value creation rather than labour intensity. The last mile becomes strategic as services teams evolve into orchestrators of intelligent systems and pricing now reflects outcomes rather than hours worked.

In conclusion, Ganesan tells us that, for senior technology leaders, the path forward is clear: design services organisations around continuously engineered outcomes, align product extensibility with field execution and embed governance and observability at every layer. When services focus on value rather than effort, the last mile becomes a source of competitive advantage.