Legacy Legacy Legacy
For many years, .NET Framework has been at the core of enterprise software, and some of the most crucial systems we use to this day still run on it. Like so many technologies that come and go, that are discontinued, deprecated, or no longer actively supported, .NET Framework has reached that stage. Although its support policy is still active and it will continue to ship with Windows, many tool authors have stopped ensuring backwards compatibility with the framework.
Since the decision to unify .NET Framework and .NET Core with the release of .NET 5 in 2020, a new .NET version has shipped every year, with every second release being an LTS version. So the time has come for some companies to make the difficult and costly decision of how to carry their legacy systems into the future. What is the game plan? How far behind are they? In my opinion, there are really only two clear paths to take.
Strangler Fig Pattern
Simply put: rebuild the system piece by piece until you have a shiny “new” system. A lot needs to be taken into consideration when going down this path, and it is best done with clear guidelines and well-planned execution. One could liken this approach to surgery: it requires the precision to know which components to migrate and when, clear boundaries, and an understanding of every dependency. But of course, do you actually have the know-how to put this strategy into place?
For more context: MS Learn
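To make the pattern concrete, here is a minimal sketch in C# (all names are hypothetical, not from our codebase): a facade routes each call to either the legacy implementation or its rewritten replacement, so components can be migrated one at a time behind a stable interface.

```csharp
// Strangler-fig sketch: the facade decides, per feature toggle, whether a
// call goes to the legacy code or to the rewritten implementation.
public interface IInvoiceService
{
    decimal CalculateTotal(int invoiceId);
}

public interface IFeatureToggle
{
    bool IsEnabled(string feature);
}

public class InvoiceServiceFacade : IInvoiceService
{
    private readonly IInvoiceService _legacy;    // wraps the old .NET Framework code
    private readonly IInvoiceService _rewritten; // the new implementation
    private readonly IFeatureToggle _toggles;

    public InvoiceServiceFacade(
        IInvoiceService legacy, IInvoiceService rewritten, IFeatureToggle toggles)
        => (_legacy, _rewritten, _toggles) = (legacy, rewritten, toggles);

    // Once the rewritten path has proven itself in production, the legacy
    // branch (and eventually the facade itself) gets deleted.
    public decimal CalculateTotal(int invoiceId) =>
        _toggles.IsEnabled("invoices-v2")
            ? _rewritten.CalculateTotal(invoiceId)
            : _legacy.CalculateTotal(invoiceId);
}
```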
Full Rewrite
As one would think, this is probably the better road to take: blank canvas and all. Wrong. This path, just like the first, leaves just as much room for error, if not more. If you are the gambling type, this approach is probably for you: higher risk, higher cost, higher requirements, and possibly a higher reward as well if executed skillfully.
I guess from the title, you can bet which approach we took.
Ready…Set…Go!
Let the rewriting begin. A team was assembled to spearhead the redesign, redevelopment, re-everything of the system, with everyone else following our lead. Its responsibilities included, but were not limited to, choosing the tech stack, the architecture, the infrastructure, and the tools that would be used.
Mistake One: Tech Stack
Language & Framework (BE)
As a company that relied heavily on the Microsoft ecosystem, with the legacy system built in C# on .NET Framework, the language choice was basically already made; no room for discussion there. Much of the core data models and structures could be reused with a few minor tweaks here and there, leveraging the new features that came with C# 12 and .NET 8. It all seemed feasible and straightforward. Language and framework: check. Onto the next thing on the list.
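To illustrate the kind of “minor tweak” reuse I mean, here is a hedged sketch (the types are invented for this example) of a carried-over data model tightened up with two C# 12 features, primary constructors and collection expressions:

```csharp
// Illustrative only: a legacy model carried over and modernised with C# 12.
public record Order(string Id, decimal Total);

public class Customer(string id, string name) // primary constructor replaces boilerplate
{
    public string Id { get; } = id;
    public string Name { get; } = name;

    // Collection expression `[]` replaces `new List<Order>()`.
    public List<Order> Orders { get; } = [];
}
```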
Database + ORM
Having already decided that our backend is C# and .NET, there aren’t really many ORM options to choose from, so why not the tried and tested Entity Framework, the constantly improving ORM that makes life so much easier? Nope. Fair enough, the original system was built with ADO.NET, so perhaps the aim is to use something similar and re-use some of the SQL queries, with Dapper as the improvement? Wrong again. We not only opted against Entity Framework, we opted for a NoSQL database altogether: Azure Cosmos DB.
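For a sense of what that combination looks like, here is a minimal repository sketch over the Microsoft.Azure.Cosmos SDK. The database, container, and model names are illustrative, and it assumes the container is partitioned on /id:

```csharp
using System.Net;
using Microsoft.Azure.Cosmos;

// Cosmos DB stores JSON documents and expects a lowercase "id" property.
public record CustomerDocument(string id, string Name);

public class CustomerRepository
{
    private readonly Container _container;

    public CustomerRepository(CosmosClient client) =>
        _container = client.GetContainer("app-db", "customers"); // names illustrative

    // Assumes the container's partition key path is /id.
    public Task AddAsync(CustomerDocument doc) =>
        _container.CreateItemAsync(doc, new PartitionKey(doc.id));

    public async Task<CustomerDocument?> GetAsync(string id)
    {
        try
        {
            var response = await _container.ReadItemAsync<CustomerDocument>(
                id, new PartitionKey(id));
            return response.Resource;
        }
        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.NotFound)
        {
            return null; // point reads throw on 404 rather than returning null
        }
    }
}
```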
Language & Framework (FE)
With the rewrite came a great opportunity to rethink the user interface and experience. We could explore and experiment as much as we wanted. The JS ecosystem has been booming with libraries, frameworks, and meta-frameworks that could really breathe new life into how the system looks. Err, wrong! Most of the team members are C# developers who have only ever used JS through the likes of jQuery; learning React, Angular, or Vue was going to be costly and time-consuming.
Enter the “saviour”, Microsoft’s newest frontend framework, Blazor.
“Blazor is a modern front-end web framework based on HTML, CSS, and C# that helps you build web apps faster.”
Sure, why not use Blazor? It lets developers who are primarily backend-focused work on the frontend as well. Seems like the better choice. So what’s next? The UI library. No shade to the long-standing Bootstrap, but it’s 2023 and there are plenty of other libraries one could use: TailwindCSS, Bulma, Materialize CSS, just to name a few. Forget that for a minute; maybe we can use a Blazor component library to speed up development. There are a few on the market: MudBlazor, Telerik, Syncfusion, Radzen. Ultimately, we went with what we already had a license for: Telerik.
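To show why that appeals to a C#-heavy team, here is a bare-bones Blazor component sketch (names invented for the example): markup and UI logic live in a single .razor file, all in C#, no JavaScript required.

```razor
@* Illustrative component: C# logic and markup in one file. *@
<h3>Invoices</h3>

<button @onclick="LoadInvoices">Load invoices</button>

<ul>
    @foreach (var invoice in _invoices)
    {
        <li>@invoice</li>
    }
</ul>

@code {
    private List<string> _invoices = new();

    private void LoadInvoices()
    {
        // A real app would call a backend service here;
        // hard-coded to keep the sketch self-contained.
        _invoices = new() { "INV-001", "INV-002" };
    }
}
```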
So there you have it, the main components of our tech stack:
- Blazor + Telerik
- C# + .NET
- Azure Cosmos DB + repository pattern
Mistake Two: Architecture
Having a blank canvas can break a system just as easily as it can make it. You start to think: if we had a monolithic system that ended up where it is now, where only a few people can contribute to it because of its fragility and spaghetti code, then we had best not repeat that mistake. So let’s go for a microservice architecture: individual services responsible for separate yet not-so-isolated processes, each able to run somewhat independently of the others.
Microservice Architecture
It is an easy technical design to commit to, provided all the considerations and requirements are taken into account and everything lines up:
- Do we have a bounded context map that clearly outlines what the different services will be and what their responsibilities will be?
- What sort of data management will we have? Will each service have its own database, or will there be a logical separation of a shared database?
- How will the services communicate with each other, and how will data flow between them? (One common answer is sketched after this list.)
- How will this be deployed and maintained?
- What sort of security will we have?
- How will the frontends communicate with the backend services?
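As one example of how the communication question can be answered in .NET, here is a hedged sketch (service names and DTO shape are hypothetical) of a typed HttpClient registered for a downstream service:

```csharp
using System.Net.Http.Json;

// Shared contract between the two services (hypothetical shape).
public record OrderDto(string Id, decimal Total);

// Typed client the consuming service uses to call the orders service.
public class OrdersClient(HttpClient http)
{
    public Task<OrderDto?> GetOrderAsync(string id) =>
        http.GetFromJsonAsync<OrderDto>($"/api/orders/{id}");
}

// Registered in Program.cs of the consuming service:
// builder.Services.AddHttpClient<OrdersClient>(client =>
//     client.BaseAddress = new Uri("https://orders.internal.example"));
```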
Sure enough, every question was answered, or so we thought. So where did it really start to go wrong?
Mistake Three: MVP aka POC
With “everything” taken into consideration, we got to writing code. We had to prove that these technologies could work together and that the architecture was sound. Why not start with a proof of concept that evolves into the minimum viable product? The requirements were set out, and so it began: day in, day out, pull request after pull request, demo after demo. One year later, we finally delivered a working MVP, presented via videos recorded while the system ran on localhost with multiple Visual Studio debugging instances.
Happiness all round: we had proved the concept works and the system “runs” on this new technology. Onto the next phase: delivering an actual deployable, “production-ready” version of the system.
Mistake Four: “Fail Fast, Fail Forward”
This term gets thrown around a lot in different environments and ecosystems. In the development world, it simply means identifying problems quickly (iterating quickly) and learning from them to make the next iteration much better. Put your team in a position that allows them to iterate quickly, test out functionality, and see whether it works. One word comes to mind: “Agile”. Agile + Scrum, why not? Two-week sprints with daily standups and sprint planning.
One of the most important steps when you want to fail fast but fail forward is to test everything end to end at the very beginning: scaffold your solution and an API or two, deploy to a real environment, and see whether it works with everything hooked up. You will quickly realise things you may have overlooked, things that influence how your team will work and how they will deliver on sprint items… developer experience.
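In practice, that can be as small as this: a walking-skeleton .NET 8 minimal API (endpoints illustrative) that exists only to be pushed through the build pipeline and into the target environment on day one, proving the plumbing works before any real features land.

```csharp
// The entire Program.cs of a walking-skeleton service: deploy this first
// and you learn early whether builds, hosting, and networking behave.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/health", () => Results.Ok(new { status = "healthy" }));
app.MapGet("/api/ping", () => "pong");

app.Run();
```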
Mistake Five: Developer Experience
With all the above mistakes made, we arrived at the biggest mistake of all: crucial, yet overlooked by plenty. Developer tools can only do so much to improve the overall experience of writing code; IntelliSense, debugging, code completion, and the like matter, but the constraints imposed on developers by earlier mistakes matter just as much. “My laptop is too slow to run the solution.” “My laptop is out of memory; I need more RAM.” “The build pipelines take close to 8 minutes to build one service.” “I’d rather work on my personal machine since it has more power.”
Lessons Learnt
The definition of insanity is doing the same thing over and over again and expecting different results
A lot was learned over the course of this rewrite, and we would surely be insane to do the same in future and expect success. It is crucial to understand the requirements of the path you are undertaking and not to rush toward an outcome while ignoring consequences that should have been weighed during planning. Lessons to take into the next journey:
- “Fail to plan, plan to fail” – without clear goals, architecture, and processes, you’ll waste time, mismanage resources, and deliver unstable systems.
- Understand your team’s capabilities (strengths & weaknesses) – knowing and understanding your team’s dynamics allows you to allocate work accordingly.
- Understand the tech you’re backing – the newest, shiniest tool isn’t always the sharpest tool in the shed or the best tool for the job. Take time to understand the scenarios in which the tool works best, rather than trying to bend it to work best for you.
One major lesson I learned: awareness without action is complicity. Seeing the car headed for disaster and saying nothing until it crashes doesn’t absolve you.
You plan intelligently, expect failure, detect it fast, and evolve forward.
Cheers ✌🏽