Application Micronization

VC Sreedhar
7 min read · Dec 19, 2021
Source: https://us.nttdata.com/en/blog/2015/september/modernization-a-continuous-process

Imagine you are given a big ball of some material and your job is to reduce the diameter of the ball. There are many ways to do this: you can pound the material, grind it, blast it, and so on. Micronization is the process of reducing the size (e.g., the diameter) of a material to micrometer or even nanometer scale, without changing its chemical composition and under environmental constraints (such as temperature and pressure). Micronization is used to make medicines, capsules, cement powder, food powder, and more.

Now imagine you are given a “big ball of mud” software application and your job is to “micronize” it. You can take many approaches, such as “pounding on the application”, “grinding the application”, or “blasting the application”. At the end of the day, we cannot change the semantics of the application, and we must respect environmental constraints when micronizing it. Unlike the material world, the software world is a bit more forgiving, especially when it comes to the “chemical composition” of an application and its “environmental constraints”.

I do not like the term “Application Modernization”, because every generation is a modernization of the previous one. Application Modernization is also often focused on moving legacy applications to new (so-called modern) platforms, for instance moving mainframe applications to the public cloud. Application Micronization is not a platform or digital transformation (yet another buzzword); it is an architectural transformation. We should not invest in a platform (say, by signing a contract with a public cloud provider) before understanding the architectural transformations needed to micronize the applications. So I prefer the term Application Micronization, where the focus is on architecturally transforming legacy applications into micro-architected services, or microservices.

The “big ball of mud” and other anti-patterns live like cockroaches in legacy applications, and it is very hard to get rid of them. Meanwhile, many people building today’s software applications either ignore anti-patterns and clean architecture principles or are simply unaware of them. Many enterprises also want to cut today’s cost rather than long-term cost, so they end up hiring poor programmers who implement poor software.

There is another reason why so many anti-pattern cockroaches live in legacy applications. Many existing legacy applications, especially those running on mainframe computers, were written decades ago in Cobol and other older languages, in an era when procedural languages were the most popular. Then came object-oriented languages, functional languages, and so on. There are language-specific best practices, patterns, and anti-patterns, and understanding and adopting them is essential to writing better code. But they are just one aspect of developing good applications.

Developing good applications should start at the design thinking phase. I personally like the Stanford Design Thinking process; it is simple and very clean compared to others. It can also be adopted naturally for designing software applications, with its outcomes baked into software requirements, software architecture, microservices, and so on. I will write more on that in a different blog.

Once we complete (the first iteration of) design thinking, the next step is to create requirements. These days requirements are written as epics and user stories, and they should be driven by the outcomes of design thinking (especially the needs, insights, and ideations). Design thinking is an iterative process, and it is not the same as software design concepts such as design patterns and domain-driven design. Design thinking is a human- or user-centric approach for building empathy, converting observations into insights, and eventually prototyping and testing the prototype. When applying design thinking, focus on at least two different end users or personas:

  1. End users who use the legacy application. For instance, American Express (Amex) card holders expect that when they use their credit card to book a flight or a hotel, the card is authorized (a sketch of such a user story, captured as an acceptance test, follows this list).
  2. Application owners or managers who own or manage the legacy applications. For instance, credit card authorization and other supporting applications running on the mainframe are managed by application managers.
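
To make the first persona concrete, here is a minimal sketch of how one user story might be captured as an executable acceptance test. Everything in it is hypothetical and for illustration only: the `AuthorizationService` interface, the in-memory implementation, and the credit-limit rule are not taken from any real Amex system; the test assumes JUnit 5 on the classpath.

```java
// Hypothetical acceptance test sketch (JUnit 5) for the user story:
// "As an Amex card holder, when I book a flight with my card,
//  the authorization is approved if I have sufficient credit."
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class CardAuthorizationAcceptanceTest {

    // Illustrative domain interface; the real legacy system would sit behind it.
    interface AuthorizationService {
        boolean authorize(String cardNumber, long amountCents);
    }

    // Trivial in-memory stand-in so the sketch compiles and runs on its own.
    static class InMemoryAuthorizationService implements AuthorizationService {
        private final long creditLimitCents;
        InMemoryAuthorizationService(long creditLimitCents) { this.creditLimitCents = creditLimitCents; }
        @Override public boolean authorize(String cardNumber, long amountCents) {
            return amountCents <= creditLimitCents;   // deliberately simplified business rule
        }
    }

    @Test
    void approvesFlightBookingWithinCreditLimit() {
        AuthorizationService service = new InMemoryAuthorizationService(500_000); // $5,000 limit
        assertTrue(service.authorize("3714-496353-98431", 120_000));  // $1,200 flight: approved
        assertFalse(service.authorize("3714-496353-98431", 600_000)); // $6,000 booking: declined
    }
}
```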

Once we have collected user stories and epics (which capture the needs of the end users or application managers) and created acceptance criteria (which capture the insights and ideas generated during design thinking), we should next focus on designing, or architecting, the application. This is where we should follow approaches such as Domain-Driven Design (DDD), Clean Architecture, and the SOLID principles. I personally like the DDD approach combined with Clean Architecture. Concepts like ubiquitous language and bounded context should be driven from the design thinking and requirements exercises. There are typically two main types of legacy applications:

  1. Transaction processing, such as credit card authorization and airline ticketing, where consistency and availability are important. Such legacy applications use relational databases running on centralized mainframe systems and follow the CRUD (Create, Read, Update, and Delete) model. Think of the CAP theorem and ACID properties (a minimal sketch of such a transaction follows this list).
  2. Analytical processing, such as understanding customer behavior and business intelligence. These legacy applications also run on mainframes and use relational databases, with the database acting as a data warehouse. These applications do not worry too much about consistency, but availability is more important.
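
As a minimal sketch of the first category, the snippet below shows the kind of ACID guarantee CRUD transaction processing relies on: either the authorization record and the balance update both commit, or neither does. The table and column names are hypothetical, and plain JDBC is used only for illustration.

```java
// Hypothetical JDBC sketch: an authorization is recorded and the open-to-buy
// balance is reduced inside a single ACID transaction.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AuthorizationTransaction {
    public static void authorize(String url, String card, long amountCents) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);                       // start the transaction
            try (PreparedStatement insertAuth = conn.prepareStatement(
                     "INSERT INTO authorizations(card_number, amount_cents) VALUES (?, ?)");
                 PreparedStatement updateBalance = conn.prepareStatement(
                     "UPDATE accounts SET open_to_buy_cents = open_to_buy_cents - ? WHERE card_number = ?")) {
                insertAuth.setString(1, card);
                insertAuth.setLong(2, amountCents);
                insertAuth.executeUpdate();

                updateBalance.setLong(1, amountCents);
                updateBalance.setString(2, card);
                updateBalance.executeUpdate();

                conn.commit();                               // both writes become visible atomically
            } catch (SQLException e) {
                conn.rollback();                             // neither write survives on failure
                throw e;
            }
        }
    }
}
```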

Mainframes are centralized systems, and inter-process communication on these machines is extremely fast. Going back to the CAP theorem, you do not have to worry about partition tolerance on a mainframe platform; in essence, you get all three properties of the CAP theorem! When shifting applications from a centralized mainframe to the cloud, we can no longer ignore network partitions. No matter how carefully you design your network, it will be unreliable, especially in the public cloud. According to the CAP theorem, we must then give up either (strong) consistency or availability. For analytical processing we give up strong consistency and accept the weaker eventual consistency model. This means it is easier to “modernize” analytical processing applications and move them to the public cloud. It is foolish to move transaction processing applications to the public cloud. Remember that when we think of cloud computing, we should treat it as a (geographically) distributed system with asynchronous communication over an unreliable network. There is also the FLP impossibility result, which haunts you when you need consensus and consistency while tolerating failures in cloud computing.
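
To illustrate the trade-off above, here is a toy sketch (not any real replication protocol) of asynchronous replication: a write lands on the primary immediately, the replica catches up after a delay, and a read served by the replica in the meantime returns stale data. That staleness is exactly what eventual consistency accepts for analytical workloads and what strongly consistent transaction processing cannot tolerate.

```java
// Toy illustration of eventual consistency under asynchronous replication.
// A write is applied to the primary at once and copied to the replica later;
// reads served by the replica in between see a stale value.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class EventualConsistencyDemo {
    private final Map<String, Long> primary = new ConcurrentHashMap<>();
    private final Map<String, Long> replica = new ConcurrentHashMap<>();
    private final ScheduledExecutorService replicator = Executors.newSingleThreadScheduledExecutor();

    void write(String key, long value) {
        primary.put(key, value);
        // Replication is asynchronous: the replica only converges after a delay.
        replicator.schedule(() -> { replica.put(key, value); }, 200, TimeUnit.MILLISECONDS);
    }

    long readFromReplica(String key) {
        return replica.getOrDefault(key, 0L);
    }

    public static void main(String[] args) throws InterruptedException {
        EventualConsistencyDemo store = new EventualConsistencyDemo();
        store.write("balance:3714", 120_000L);
        System.out.println("immediate read = " + store.readFromReplica("balance:3714")); // stale: 0
        Thread.sleep(500);
        System.out.println("later read     = " + store.readFromReplica("balance:3714")); // converged: 120000
        store.replicator.shutdown();
    }
}
```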

Now, given that it is foolish to move transaction processing applications to the public cloud, what can we do? There are two things that should be considered.

  1. Micronize the application in place on the mainframe. The mainframe architecture is advancing and adopting modern technology such as containers and Kubernetes, so we can now take monolithic Cobol and Java applications and micronize them within the mainframe environment.
  2. Install a mainframe inside a data center and use the mainframe for critical transaction processing applications.

Getting back to DDD and application micronization, we should start with the applications that are easiest to chop up into microservices. First, focus on a ubiquitous language that maps back to the terminology from the design thinking process. Then identify the bounded contexts, core domains, and context maps needed to create a set of microservices. Here we must decide whether to follow CRUD or CQRS, depending on the application; if we are building a reactive microservice model, then CQRS makes more sense. I would highly recommend reading the Reactive Manifesto. DDD gives you a design approach for architecturally transforming legacy applications step by step, focusing on one bounded context at a time.
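
As a hedged sketch of the CQRS option, the declarations below separate the write side (commands) from the read side (queries) for a hypothetical card-authorization bounded context. The type and method names are illustrative, not taken from any specific framework, and Java records (Java 16+) are used only for brevity.

```java
// Minimal CQRS sketch for a hypothetical "card authorization" bounded context.
// Commands mutate state on the write side; queries read from a separate,
// possibly denormalized, read model.
import java.util.List;
import java.util.Optional;

// ----- write side (commands) -----
record AuthorizeCardCommand(String cardNumber, long amountCents) {}

interface AuthorizationCommandHandler {
    // Applies the command to the write model and, on success, emits a domain event.
    void handle(AuthorizeCardCommand command);
}

// ----- read side (queries) -----
record AuthorizationView(String authorizationId, String cardNumber, long amountCents, String status) {}

interface AuthorizationQueryService {
    Optional<AuthorizationView> findById(String authorizationId);
    List<AuthorizationView> findByCard(String cardNumber);
}
```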

The next step is architecting the deployment of the microservices. There are two considerations to keep in mind.

  1. It is important to ensure that the legacy application and the microservices run in parallel. Here we should use the strangler pattern: slowly and iteratively strangle the legacy application, part by part (a sketch of a routing façade follows this list). It is possible that we may still be stuck with some core logic on the legacy platform for a long time.
  2. Containerize the microservices and deploy them on Kubernetes or another container orchestration platform. It is important that we carefully follow deployment strategies where the old (legacy) application and the new (containerized) application run side by side for some time.
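
As a hedged sketch of the strangler pattern from point 1, the façade below routes requests for already-micronized capabilities to the new microservices while everything else still goes to the legacy application. The capability names, URLs, and routing rule are hypothetical; it uses the standard `java.net.http` client (Java 11+).

```java
// Hypothetical strangler facade: requests for already-micronized capabilities
// go to the new microservices, everything else still goes to the legacy system.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Set;

public class StranglerFacade {
    // Capabilities that have already been carved out as microservices.
    private static final Set<String> MIGRATED = Set.of("authorization", "statements");

    private final HttpClient client = HttpClient.newHttpClient();
    private final String legacyBaseUrl = "http://legacy-app.internal";           // hypothetical
    private final String microserviceBaseUrl = "http://microservices.internal";  // hypothetical

    public String route(String capability, String path) throws Exception {
        String base = MIGRATED.contains(capability) ? microserviceBaseUrl : legacyBaseUrl;
        HttpRequest request = HttpRequest.newBuilder(URI.create(base + path)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

As more bounded contexts are carved out, capabilities move into the migrated set until the legacy application is fully strangled.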

The next thing to keep in mind is that application micronization is not a digital platform transformation, such as moving the application to some other platform; it is an architectural transformation. I know our business executives want to get rid of the old-man mainframe in the house. Rather than focusing exclusively on how to get rid of the old man, focus on how to give him an energy boost so that we first perform the architectural transformation using him. The mainframe and z/OS from IBM are not stuck in the Neanderthal age; they are making significant architectural improvements to adopt modern processing and architecture, for instance Linux on Z, Z cloud, and containerization. Focus on leveraging these capabilities to transform your legacy applications before even thinking about the public cloud. Incidentally, many executives have FOMO syndrome. Focus on your end customers and do not try to follow the new cool kids (the public cloud).

Compared to a decade ago, Artificial Intelligence and Deep Learning have seen key advancements such as GANs (Generative Adversarial Networks), Reinforcement Learning, and attention-based learning. It is natural to take advantage of these techniques to build tools that accelerate application micronization. I do not believe, however, that even these advanced AI techniques can automatically generate microservices from a monolith. The reason is very simple: it is undecidable to determine whether two (Turing-complete) functions are equivalent. So when we transform a legacy application into microservices, it is very important to test for a period and keep the legacy application running in parallel with the “equivalent” microservices for some time. I will write more on this in a later blog.
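
Because function equivalence is undecidable in general, the practical substitute is exactly the parallel run described above: send the same request to both implementations, return the legacy answer, and compare the two results. Below is a minimal sketch of that idea; the generic interface and the mismatch logging are hypothetical.

```java
// Minimal parallel-run (shadow) comparison: the legacy application remains the
// system of record, the new microservice is exercised with the same input,
// and any disagreement is recorded for investigation.
import java.util.function.Function;
import java.util.logging.Logger;

public class ParallelRunChecker<I, O> {
    private static final Logger LOG = Logger.getLogger(ParallelRunChecker.class.getName());

    private final Function<I, O> legacy;       // existing behavior, still authoritative
    private final Function<I, O> microservice; // candidate replacement

    public ParallelRunChecker(Function<I, O> legacy, Function<I, O> microservice) {
        this.legacy = legacy;
        this.microservice = microservice;
    }

    public O handle(I input) {
        O legacyResult = legacy.apply(input);
        try {
            O newResult = microservice.apply(input);
            if (!legacyResult.equals(newResult)) {
                LOG.warning("Mismatch for input " + input + ": legacy=" + legacyResult + ", new=" + newResult);
            }
        } catch (RuntimeException e) {
            LOG.warning("Microservice failed for input " + input + ": " + e);
        }
        return legacyResult; // the legacy answer is what the caller actually gets
    }
}
```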

VC Sreedhar

VC Sreedhar is a Distinguished Engineer and VP focusing on FSS and Fintech at Kyndryl. He is an ACM Distinguished Scientist and holds a Ph.D. from McGill University.