The Art of Cloud-Native Application Development

When a new application is to be developed today, one of the first questions posed is whether it should be designed to take advantage of the cloud. If the application needs to scale in response to varying load levels yet operate with near-zero downtime, the modern default is to choose cloud-native development unless requirements or regulations stipulate otherwise.

If, on the other hand, your application is a legacy monolith developed years or decades ago, you can reduce operating costs and gain value by modernizing it into a cloud-native application. In this case, the application’s internal architecture has to be redesigned, the code refactored, and the resulting application rehosted on a cloud platform. Modernizing business applications is often a prerequisite to modernizing the business model; in many cases, it leads to new operating models that help a business not only survive disruption but also compete in new ways and thrive.

Cloud-native programming is a combination of several aspects: architecting and developing microservices-based software, deploying the microservices in container environments on the cloud, hosting the data on cloud-native databases and data lakes, maximising leverage of cloud PaaS (Platform-as-a-Service), adopting DevOps to engineer continuous delivery of releases, and following an agile development culture. Let us explore each of these in turn.

The cloud-native development landscape starts with an application architecture that defaults to microservices. The microservices philosophy calls for decoupling an application’s componentry into multiple independently deployable services. The application is decomposed into modules that each perform a specific micro-function; these modules are exposed as services through standard programming interfaces. If your application is architected as microservices rather than as a “modular monolith”, you only need to rebuild and redeploy the services that you change. The twelve-factor app methodology lays down widely adopted best practices for building such services, as the sketch below illustrates.
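To make this concrete, here is a minimal sketch of an independently deployable microservice in Python, using the Flask web framework. The “orders” function, the ORDERS_DB_URL variable and the route are illustrative assumptions rather than a prescription; the point is that the service is small, self-contained, and reads its configuration from the environment in twelve-factor style.

```python
# A minimal sketch of an independently deployable microservice, assuming a
# hypothetical "orders" capability carved out of a larger application.
import os

from flask import Flask, jsonify

app = Flask(__name__)

# Twelve-factor practice: read configuration from the environment,
# never from files baked into the build. ORDERS_DB_URL is hypothetical.
DB_URL = os.environ.get("ORDERS_DB_URL", "sqlite:///orders.db")

@app.route("/orders/<order_id>")
def get_order(order_id):
    # A real service would query DB_URL here; stubbed for illustration.
    return jsonify({"id": order_id, "status": "shipped"})

if __name__ == "__main__":
    # The port comes from the environment, so the platform, not the code,
    # decides where the service listens.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Because the service owns nothing beyond its own function and configuration, it can be rebuilt and redeployed without touching any sibling service.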

Microservices are deployed in container environments. The open-source Kubernetes platform is the de facto standard for container orchestration. When deployed on Kubernetes pods (a pod is the basic replicable unit that hosts a microservice instance), each microservice becomes highly available and can scale and fail on its own terms. Each service can be deployed independently through an associated DevOps pipeline; each can be coded in a different language and developed on a separate timeline. Container environments are also more efficient than virtual machines (VMs) on hypervisors, since containers virtualize the operating system rather than the hardware. Because of this, containers consume fewer resources than VMs; they start faster and are more portable. In the container world, upgrades, rollbacks, redundancy and load balancing are software-defined and built into the platform, as the sketch below shows.
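As a small illustration of “software-defined” in practice, the sketch below scales a microservice with one API call, using the official Kubernetes Python client. The deployment name, namespace and replica count are assumptions for illustration.

```python
# A minimal sketch of software-defined scaling on Kubernetes, using the
# official Python client (pip install kubernetes). The "orders" deployment,
# namespace and replica count are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # reads your local kubeconfig
apps = client.AppsV1Api()

# Scale the hypothetical "orders" microservice to five pod replicas; the
# platform handles scheduling, load balancing and failure recovery itself.
apps.patch_namespaced_deployment_scale(
    name="orders",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```

The same change can of course be made declaratively in a manifest; the API call simply makes the software-defined nature of the platform explicit.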

Container environments are usually hosted on a cloud; however, they can also reside on-premises if there are reasons not to go to hyperscalers. If on the cloud, the container environment, and hence the application, can further capitalize on the scalability, on-demand consumption flexibility and software-defined availability that come with the cloud model. These capabilities are difficult to replicate on-premises. For example, consider a mission-critical India-scale application whose load fluctuates over time but reaches peak concurrent usage in the double-digit millions. An on-premises solution must provision capacity for the peak, leading to long periods of under-utilization and cost inefficiency; on the cloud, an autoscaler can grow and shrink capacity with demand, as sketched below.
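Here is a hedged sketch of letting the platform absorb fluctuating load instead of provisioning for peak: a Kubernetes HorizontalPodAutoscaler grows and shrinks a hypothetical “orders” deployment with CPU demand. The names and thresholds are illustrative assumptions.

```python
# A sketch of demand-driven scaling: a HorizontalPodAutoscaler for the
# hypothetical "orders" deployment. Thresholds are illustrative.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders"
        ),
        min_replicas=2,    # quiet periods run lean
        max_replicas=500,  # peaks scale out automatically
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

On an elastic cloud, the replica ceiling is a policy decision rather than a hardware purchase made months in advance.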

Application components that need to be triggered in response to occasional events – such as an asynchronous notification that must be pushed to a mobile device when a state change occurs in a cloud database – are candidates for serverless deployment. Serverless microservices, as well as monoliths, are common parts of cloud-native applications. Cloud cost for serverless compute accrues only when the code runs; moreover, operational overhead is minimal because you never see the servers! The sketch below illustrates the notification scenario.
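Below is a minimal sketch of that serverless pattern, assuming an AWS setup: a Lambda function triggered by a DynamoDB stream publishes a notification via SNS when a record changes. The topic ARN and record fields are illustrative assumptions.

```python
# A sketch of an event-triggered serverless function, assuming AWS Lambda
# wired to a DynamoDB stream. The topic ARN and fields are hypothetical.
import json

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:order-updates"  # hypothetical

def handler(event, context):
    # One invocation per batch of change records; there is no server to
    # manage, and compute cost accrues only while this code runs.
    for record in event.get("Records", []):
        if record.get("eventName") == "MODIFY":
            new_image = record["dynamodb"]["NewImage"]
            sns.publish(
                TopicArn=TOPIC_ARN,
                Message=json.dumps({"order": new_image.get("order_id")}),
            )
```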

An accompanying aspect is the need to leverage cloud-native data technologies such as cloud databases, data lakes and data warehouses to host application data. This lets you unleash the power of Artificial Intelligence (AI), Machine Learning (ML) and predictive analytics on the data through cloud PaaS. When a legacy application is being modernized, its data needs to be modernized as well: moved from legacy data repositories to cloud data stores. This helps transform your business model by taking advantage of AI-based decision-making and forecasting. The sketch below shows the first step of that journey: running analytics directly where the data lives.
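As a hedged example, here is a small sketch that queries a cloud data warehouse in place, assuming Google BigQuery as the PaaS data store. The project, dataset and table names are illustrative assumptions.

```python
# A sketch of analytics run where the data lives, assuming Google BigQuery
# (pip install google-cloud-bigquery). Table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # credentials come from the environment

# Aggregate demand by region, e.g. as an input to downstream ML forecasting.
sql = """
    SELECT region, COUNT(*) AS orders
    FROM `my_project.sales.orders`
    GROUP BY region
    ORDER BY orders DESC
"""
for row in client.query(sql).result():
    print(row.region, row.orders)
```

Once the data sits in a cloud store, the same PaaS exposes ML and forecasting services against it without the data ever leaving the platform.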

The cloud-native narrative also implies the adoption of DevSecOps (Development, Security and Operations) to shift from the traditional practice of a few large releases of a full application to continuous integration and continuous delivery (CI/CD) of component microservices. Cloud-native CI/CD pipelines can take advantage of cloud functions to perform iterative builds and rolling updates, so that new releases are deployed gradually across a large user base. On the cloud, infrastructure, policies and observability are stored and configured as code, which the deploy stage of the CI/CD pipeline can exploit, as sketched below.
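To illustrate the deploy stage, here is a minimal sketch in which a pipeline step patches a deployment’s container image; Kubernetes’ built-in rolling update then spreads the new release gradually across the pods. The service name, registry and image tag are illustrative assumptions.

```python
# A sketch of a CI/CD deploy step: patching the container image of the
# hypothetical "orders" deployment triggers a rolling update, replacing
# pods gradually rather than all at once. Names and tags are illustrative.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment(
    name="orders",
    namespace="default",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "orders",
                         "image": "registry.example.com/orders:v2.4.1"}
                    ]
                }
            }
        }
    },
)
```

If the new release misbehaves, the same pipeline can roll back by patching the image tag to its previous value, since the deployment history is itself held as code.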

Finally comes a critical component that goes hand-in-hand with cloud-native development and DevSecOps: an agile engineering culture. This is a value system that rewards development squads for failing fast and consequently improving fast, encourages “blameless post-mortems”, and motivates deployment planning that limits the blast radius of any failure. It embraces the deliver-fail-change-succeed cycle, where the way to a successful product is to bring out some features, test the market in a calibrated manner, and expand iteratively and continuously. Agile engineering operations combined with cloud-native CI/CD practices produce a high-speed organization with a high-trust culture.

Before concluding, let us take a look at a specialized application-modernization domain that is increasingly gaining prominence: mainframe modernization. Millions of lines of COBOL code that have run for decades on mainframes are being assessed, rewritten in modern programming languages, refactored into microservices, and deployed to run natively on the cloud. It is possible today to obtain mainframe-grade processing power on hyperscalers with consumption-based billing. The operating efficiencies that accrue from mainframe modernization make the investment worthwhile in many cases, but the re-engineering itself can take months or even years. There are tools to aid application modernization in general and mainframe modernization in particular; the sketch below gives a small flavour of the refactoring involved.
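As an illustrative, heavily simplified sketch of one step in that journey, consider a COBOL-style fixed-width record re-expressed as a Python function that a microservice could expose. The record layout below is a hypothetical copybook, not taken from any real system.

```python
# An illustrative sketch of mainframe refactoring: parsing a COBOL-style
# fixed-width record in Python. The layout mimics a hypothetical copybook:
# PIC X(10) account id, PIC X(30) name, PIC 9(9)V99 balance.
def parse_account_record(line: str) -> dict:
    return {
        "account_id": line[0:10].strip(),
        "name": line[10:40].strip(),
        "balance": int(line[40:51]) / 100,  # implied two decimal places
    }

# Build a sample record with the same fixed offsets and parse it.
record = "0000123456" + "JANE DOE".ljust(30) + "00001050075"
print(parse_account_record(record))  # balance parses as 10500.75
```

Real modernization tooling automates this kind of translation at the scale of millions of lines, which is why the re-engineering still takes months or years.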

The art of cloud-native programming thus combines a set of related facets: algorithmic thinking anchored in the microservices architecture, writing code that is cloud-aware, leveraging cloud data stores, maximising usage of cloud PaaS technologies such as machine learning, embracing DevSecOps practices, and adopting a fast-paced agile culture. Success here means that your cloud-native application leads to a more digitized business model, lower operating costs, faster growth, and overall transformation.

Authored by

Sreekrishnan Venkiteswaran, CTO, Kyndryl India

Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of ET Edge Insights, its management, or its members
