Many of the core banking applications that I developed more than 20 years ago, at the start of my working life, are still active today: processes built two decades ago remain necessary and are used in the day-to-day activities of financial institutions. This shows that the useful life, or lifespan, of Banks’ core applications is very long compared to the lifespan of applications developed for other sectors and industries.
By comparison, the lifespan of certain modern technologies, such as those used in Cloud computing platforms, is much shorter, with technology solutions continually being replaced by better ones. Technical application platforms developed today will be considered technically obsolete in two or three years and replaced by new ones.
With the majority of financial entities embarking on costly transformation programs to move systems from Mainframe platforms to lower-cost infrastructures, such as the Cloud, an undesired effect is taking place and beginning to pose a problem for the entities: the generation of technical debt, which occurs when applications created to provide service for 15 or 20 years are deployed on technical platforms that become obsolete in 2 or 3.
This technical debt generates an extra cost for entities, which have to operate platforms that should be decommissioned but, since they support applications still within their useful life, cannot be. Meanwhile, the organization continues to create new technical platforms based on whatever technology is fashionable at the time, steadily inflating the bubble.
The solution to this problem requires designing applications according to their expected lifespan. This includes:
- The adoption of portability and isolation patterns in those applications with a longer lifespan (a minimal sketch follows this list);
- The use of commercial solutions provided by specialized vendors instead of the “DIY IT” practice common in many organizations for the development of technical platforms;
- The use of low-code solutions which, in addition to other advantages associated with team productivity and application governance, are future-proof thanks to portability capabilities that allow applications to be moved to new platforms or allow the existing platform to evolve.
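To make the first point concrete, here is a minimal sketch, in Java, of what an isolation (ports-and-adapters) boundary might look like for a long-lived loan administration application. The names Loan and LoanRepository are illustrative and not taken from any particular product; the point is simply that the application depends only on this port and on platform-neutral domain types.

```java
// Shown together for brevity; in practice each type sits in its own file.
import java.math.BigDecimal;
import java.util.Optional;

// A platform-neutral domain type: no JPA, Kafka or cloud SDK imports.
record Loan(String loanId, String customerId, BigDecimal principal) { }

// The "port" the long-lived application codes against. The application never
// names a concrete database, so the database can be replaced when the
// platform is evolved or decommissioned.
interface LoanRepository {
    void save(Loan loan);                        // persist, whatever the store
    Optional<Loan> findByLoanId(String loanId);  // query by business identifier
}
```

A relational database today, a document store tomorrow, or an in-memory stub in a test: any of them can sit behind this port without the loan application noticing.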
Application lifespan as an application design criterion
The applications that support the business processes of financial institutions number in the hundreds. Each application serves a different purpose and responds to different requirements, including its expected lifespan.
In this context, we use the “2-speed Bank” concept, where certain applications change very frequently while others change much more slowly. For example, an application that supports customer or user interactions, such as a mobile application or a branch teller application, is expected to be renewed every four to five years. However, applications that support core processes, such as a loan administration system, may be in service for more than twenty years.
These applications are deployed on technical platforms, each with its own useful lifespan determined by the obsolescence of the technology on which it is built. Given the rapid evolution of the technology available for building Cloud platforms, this obsolescence arrives sooner and sooner, since organizations continually want to adopt newer, more efficient technologies and get rid of old ones so they don’t have to maintain them.
If the lifespan of an application is greater than that of the platform, and the application is technically coupled to it, the platform must be kept active beyond its useful life, with the cost that this implies. To decommission the platform, the applications must be migrated, which requires investment and prioritization that those responsible for the applications are generally not willing to face.
For applications with a lifespan similar to that of the platform, this problem does not exist, so adopting isolation or portability patterns is not necessary, which simplifies development and makes it cheaper. Hence the importance of correctly estimating the useful life of applications and technical platforms, and adopting the appropriate architecture patterns in each case.
Replace or Evolve platforms, and the cost of DIY in IT
To avoid technological obsolescence in technical platforms there are at least two options: decommission the platform after replacing it with a new, more modern one; or technically evolve the platform by replacing its technical services.
In the first case, the applications on the old platform must be redeployed onto the new one. In the second case, the applications have to be developed with isolation patterns that allow the technical services to be changed without affecting the application, for example by substituting one database, one event manager or one container manager for another. If the application is impacted and the application owner is required to act, it will rarely happen, whether because of budget, priority or risk… you choose the excuse.
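As an illustration of such an isolation pattern, the sketch below adds one possible adapter behind the LoanRepository port from the earlier sketch, built on plain JDBC. It is an assumed, minimal design rather than the pattern of any specific framework; the point is that replacing the database means writing another adapter, never touching the application code.

```java
// One interchangeable adapter for the LoanRepository port, built on plain JDBC.
// Swapping the database means writing another adapter and changing the wiring,
// not the application or domain code.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;
import javax.sql.DataSource;

public class JdbcLoanRepository implements LoanRepository {

    private final DataSource dataSource;

    public JdbcLoanRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void save(Loan loan) {
        String sql = "INSERT INTO loans (loan_id, customer_id, principal) VALUES (?, ?, ?)";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, loan.loanId());
            ps.setString(2, loan.customerId());
            ps.setBigDecimal(3, loan.principal());
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException("Could not persist loan " + loan.loanId(), e);
        }
    }

    @Override
    public Optional<Loan> findByLoanId(String loanId) {
        String sql = "SELECT loan_id, customer_id, principal FROM loans WHERE loan_id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, loanId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next()
                        ? Optional.of(new Loan(rs.getString("loan_id"),
                                               rs.getString("customer_id"),
                                               rs.getBigDecimal("principal")))
                        : Optional.empty();
            }
        } catch (SQLException e) {
            throw new IllegalStateException("Could not read loan " + loanId, e);
        }
    }
}
```

The choice of adapter then lives in a single composition root or in the platform's dependency injection configuration, which is the only place that changes when one database, one event manager or one container manager is substituted for another.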
In general, most organizations have overwhelmingly followed a “replace the platform” strategy and have rarely evolved existing ones. The reason is usually that the applications were not developed with the patterns that would allow the platform to evolve without impacting them. This simply removes evolution as an option and leaves developing new platforms as the only alternative. But the platforms to be replaced are never decommissioned when the new ones are created, because of the applications with a long lifespan still running on them, so the platforms accumulate and, with them, their associated cost.
This trend is accentuated by the DIY IT strategies of financial institutions. “Do It Yourself” IT consists of custom-developing technical platforms instead of using those already available in the market, like someone who builds their own furniture instead of buying it from manufacturers. I love DIY and I love tools, such as drills and saws, but I still buy my beds from manufacturers. This is not always what happens with Cloud application platforms. Most of these DIY platforms are copies of existing ones, yet take up to two years to develop; by the time they become available to development teams they are, in many cases, already technically obsolete.
A typical scenario in many organizations started with strategies to develop platforms based on Java application servers to progressively replace core systems on the Mainframe. This was followed by the development of platforms based on containers in private clouds and, more recently, of application platforms on public cloud infrastructures. There are organizations that currently maintain several platforms with these characteristics, without the possibility of decommissioning them given the high cost of migrating the applications deployed on them.
If organizations want to stop creating technical debt, better strategic planning is needed in the development of technical platforms: deciding whether to evolve or replace them, and requiring development teams to adopt the isolation or portability patterns that enable the chosen strategy.
Use of low-code solutions
In my many years working on the transformation and modernization of traditional banking core systems to Cloud infrastructures, I have helped clients adopt future-proof architectures through hexagonal architecture patterns, designed frameworks that allow applications to be deployed in hybrid clouds, and helped design teams adopt standards such as BIAN and practices such as Domain-Driven Design and Event-Driven Architecture, so that applications are developed decoupled from technical platforms and other systems, and perform properly in hybrid Cloud environments.
But my most inspiring experience has been the use of low-code solutions for the development of Cloud applications, in particular the IBM Financial Services Workbench. With IBM FSW, I have developed applications deployable on any of the hyperscalers, in private clouds or on traditional infrastructures. My background as a Cobol developer, with minimal experience in languages such as Java or JavaScript, has not constrained me when it comes to developing microservices. I have published APIs without knowledge of the API manager. I have published and subscribed to events without any technical knowledge of Kafka. I have persisted and queried data without knowing which database sat underneath. It is everything a banking application developer could ask for: being able to focus on building business capabilities without worrying about technical considerations.
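To give a feel for what this looks like from the developer's seat, here is a purely hypothetical sketch (it is not the actual IBM FSW API) of the kind of facade a low-code framework can put in front of the messaging infrastructure: business code publishes an event against a neutral interface and never imports a Kafka client or names a topic.

```java
// Hypothetical, framework-provided port; not the IBM FSW API. Which broker,
// topic and serialization are used is decided by the platform at deployment
// time, not by the application developer.
interface EventPublisher {
    void publish(String eventType, Object payload);
}

// Business code written against the neutral port.
public class LoanApprovalService {

    private final EventPublisher events;

    public LoanApprovalService(EventPublisher events) {
        this.events = events;
    }

    public void approve(Loan loan) {
        // Business logic only; the messaging technology behind 'events'
        // can change with the platform without this class being touched.
        events.publish("LoanApproved", loan);
    }
}
```

Whether the platform wires a Kafka producer, another broker or a test double behind EventPublisher is invisible to this code.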
And, most relevant in the context of this article, in the three years since I started using it the framework has constantly evolved technically: from its initial container management architecture, through deployment on ICP (IBM Cloud Private), to the current version on OpenShift, which can be used on most types of technical infrastructure. Throughout this evolution, the business applications that I have developed have never been impacted.
For me, it is a clear example of how organizations could, among many other advantages, avoid the great problem of technical debt. An organization adopting this kind of low-code framework could technically evolve the platform, optimize costs by deploying each application on the most cost-efficient platform, move solutions from one platform to another, and so on. And it could achieve all of this with no impact on the hundreds of developers who normally develop and maintain banks’ business applications.