Legacy Migration: Maintain Short Update Cycles — Deliver value fast and often

Adrian Stanek
4 min read · Aug 7, 2022



At bitsinmotion, we are working on a platform rebuild of a larger B2B solution in the real-estate sector. The legacy platform is a typical monolithic application of the kind we used to build in the past. The migration plan is already two years old, and in the early stages we thought about rebuilding everything as a greenfield project, all at once, and releasing it someday.

Well, this plan was binned soon after we realized how much this approach would stress the company in terms of budget, resources, and customer satisfaction. In addition, it would tie up many people inside and outside the company for an unclear timeframe. But the biggest problem was that the new platform wouldn't add to the value stream before it was released.

Since overall user satisfaction was decreasing slowly but steadily because of performance problems and a growing backlog of features, we needed to find a better approach.

The idea: Release an early version and iterate constantly.

Together with the developer team and the CEO, we defined the goal of releasing something right at the beginning of the migration and providing it to users so that it could create value for everyone early on.

The approach looked like this: we set up a new environment on a cloud-native platform next to the data center that hosts the legacy system. Microservices were the way to go for the new solution, and the backend team started to work on a general-purpose API, which was then deployed as containers on a cloud orchestration platform. From then on, the backend team wrapped more and more functionality of the legacy system into the new API service.
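
The post doesn't include code, but to make the wrapping step concrete, here is a minimal sketch of such an API service in TypeScript. It assumes an Express-based microservice; the legacy endpoint, the CustomerName field, and the routes are hypothetical and only stand in for the general idea, not the actual bitsinmotion API.

```typescript
// Minimal sketch of a "wrapper" API service (assumptions: Express on Node 18+,
// a hypothetical legacy endpoint customers.asp behind LEGACY_BASE_URL).
import express from "express";

const app = express();
const LEGACY_BASE_URL = process.env.LEGACY_BASE_URL ?? "https://legacy.example.com";

// New, clean REST route exposed by the microservice...
app.get("/api/v1/customers/:id", async (req, res) => {
  // ...which internally still delegates to the legacy monolith for now.
  const legacyResponse = await fetch(
    `${LEGACY_BASE_URL}/customers.asp?id=${encodeURIComponent(req.params.id)}`
  );
  if (!legacyResponse.ok) {
    return res.status(502).json({ error: "legacy system unavailable" });
  }
  // Translate the legacy payload into the new API's contract.
  const legacyData = await legacyResponse.json();
  res.json({ id: req.params.id, name: legacyData.CustomerName });
});

app.listen(3000);
```

Once a feature is later reimplemented natively, only the handler body changes; the route and its contract stay stable, so API consumers never notice the cut-over.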

In parallel, the frontend team built a new web app with navigation and authentication that was able to federate the old, existing pages into the new app. With that approach, we could use both frontend generations simultaneously as one.
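
To illustrate what that federation could look like, here is a minimal TypeScript sketch of a shell route table that renders either a new view or an embedded legacy page. The routes, file names, and the /legacy/* reverse-proxy prefix are assumptions for the example, not the real setup.

```typescript
// Minimal sketch: the new shell owns navigation and decides per route whether
// a view is implemented natively or still served by the legacy monolith.

type Route =
  | { kind: "new"; component: () => HTMLElement }  // implemented in the new app
  | { kind: "legacy"; legacyPath: string };        // still rendered by the monolith

const routes: Record<string, Route> = {
  "/dashboard": { kind: "new", component: renderDashboard },
  "/contracts": { kind: "legacy", legacyPath: "/contracts_overview.php" },
  "/objects":   { kind: "legacy", legacyPath: "/objects.aspx" },
};

function renderDashboard(): HTMLElement {
  const el = document.createElement("div");
  el.textContent = "New dashboard";
  return el;
}

// The shell mounts either the new component or the embedded legacy page.
function navigate(path: string, outlet: HTMLElement): void {
  const route = routes[path];
  if (!route) throw new Error(`Unknown route: ${path}`);
  outlet.innerHTML = "";
  if (route.kind === "new") {
    outlet.appendChild(route.component());
  } else {
    const frame = document.createElement("iframe");
    frame.src = `/legacy${route.legacyPath}`; // reverse-proxied to the old system
    frame.style.width = "100%";
    outlet.appendChild(frame);
  }
}
```

Migrating a page then means flipping its entry in this table from legacy to new; navigation, authentication, and the surrounding layout stay the same from the user's point of view.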

The key was combining the legacy system's ASP.NET and PHP sessions with the new JWT strategy in a stable way.
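
As a rough illustration of such a bridge, the following TypeScript sketch shows an Express middleware that accepts either a JWT issued by the new platform or a legacy session cookie, which it validates against the monolith. The /session/whoami endpoint, the x-access-token header, and the token contents are assumptions for the example, not the real implementation.

```typescript
// Minimal sketch of bridging the legacy session to JWTs
// (assumptions: Express, the `jsonwebtoken` package, Node 18+ global fetch).
import type { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

const JWT_SECRET = process.env.JWT_SECRET ?? "dev-only-secret";
const LEGACY_BASE_URL = process.env.LEGACY_BASE_URL ?? "https://legacy.example.com";

export async function bridgeAuth(req: Request, res: Response, next: NextFunction) {
  // 1) Prefer a JWT issued by the new platform.
  const bearer = req.headers.authorization?.replace(/^Bearer /, "");
  if (bearer) {
    try {
      (req as any).user = jwt.verify(bearer, JWT_SECRET);
      return next();
    } catch {
      /* fall through to the legacy session check */
    }
  }

  // 2) Otherwise, validate the legacy ASP.NET/PHP session cookie against the monolith.
  const legacyCookie = req.headers.cookie;
  if (legacyCookie) {
    const check = await fetch(`${LEGACY_BASE_URL}/session/whoami`, {
      headers: { cookie: legacyCookie },
    });
    if (check.ok) {
      const user = await check.json();
      // 3) Issue a short-lived JWT so new services never touch the legacy session.
      const token = jwt.sign({ sub: user.id }, JWT_SECRET, { expiresIn: "15m" });
      res.setHeader("x-access-token", token);
      (req as any).user = { sub: user.id };
      return next();
    }
  }

  res.status(401).json({ error: "not authenticated" });
}
```

The important property is that new services only ever deal with JWTs, while the legacy pages keep their existing session handling until they are replaced.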

The steps I've mentioned are all part of the first phase, in which the legacy and the new system serve users simultaneously. It's important to note that the database and filesystem still run in the existing data center and remain untouched.

Iterate and add continuously to the new platform

Part of the first phase is to replace the frontend entirely before we even consider removing the legacy backend and moving to a pure microservices architecture. The API services are the single point of communication for all new features and services we've added since then.
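
In practice this means new feature code never talks to the monolith directly, only to the API services. A small, hypothetical client sketch in TypeScript (endpoint paths and types are illustrative, not the real contract):

```typescript
// Minimal sketch: new frontend features go through one API entry point,
// regardless of whether the data is served natively or proxied from legacy.
const API_BASE_URL = "/api/v1";

interface PropertyListing {
  id: string;
  title: string;
  city: string;
}

// Single point of communication for all new features.
async function apiGet<T>(path: string): Promise<T> {
  const response = await fetch(`${API_BASE_URL}${path}`, {
    headers: { accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`API request failed: ${response.status}`);
  }
  return response.json() as Promise<T>;
}

// Whether listings come from a new microservice or are still wrapped around
// the legacy system is invisible to the feature code.
export async function loadListings(): Promise<PropertyListing[]> {
  return apiGet<PropertyListing[]>("/listings");
}
```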

Now, the focus for both teams was replacing the system feature by feature in many small changes. Meanwhile, users kept using the system as usual and perceived the constant changes positively. As a result, the system got faster, newly developed features offered improved functionality, and the platform looked fresh and modern.

Eventually, the legacy system will disappear: the Strangler Fig Pattern.

I wasn't aware of this specific name, which Martin Fowler introduced; Jonathan Hall told me about it in a conversation we had. The strangler fig is a plant that wraps itself around a host tree, grows continuously, and eventually kills the tree that supports it. The same will happen to our legacy system at some point in the future: ultimately, a modern cloud-native microservice architecture will replace the legacy platform.

The difference between rebuilding all at once and iterating

For bitsinmotion, an essential requirement was to provide value early and continuously while not putting too much stress on any single department. I wouldn't say this approach is more or less expensive in terms of total cost, but ROI is certainly realized faster by shipping features early in the game.

For whom is this approach a fit?

I recommend that small to medium-sized companies with monolithic legacy systems consider this approach. The fast iteration cycles keep you very flexible: you can quickly define and adjust your focus. And even though the processes are fast, you can still set your own pace, because the pressure to deliver new features is lowered significantly. Rebuilding as a greenfield solution is possible, but think twice about whether it is the best approach for your company.

My conclusion: The iteration won’t stop.

Last but not least, I want to mention that this progressive, iterative approach isn't only for migration projects. I recommend bringing this way of thinking and acting into your daily business operations anyway. When is there a reasonable point not to ship finished features to customers? I don't see one. Ship what's ready to ship.

When you have the technology ready to use, shape the mindset and culture of your company in that direction. There’s a great chance to become faster, more productive, and more confident as a team or company.

Features should be shipped as soon as they are ready to ship. Then, iterate with shorter cycles and smaller batch sizes to become more efficient in software development.


Adrian Stanek

CTO @webbar & raion.io | Blogger | CTO-Newsletter | Advocates web-native technologies to become the leading platform for digital businesses