
Monday, January 29, 2018

Technical Codebase Visibility

In the last post, I wrote about the problems that arise from a lack of technical codebase management.
First, you cannot fix anything without knowing where you stand right now as well as where you are heading. With your goals in mind, you are fully equipped to do the right things and to monitor your numbers to make sure you stay on the right track.
How often should you monitor your metrics? In my opinion, given the fast-paced age we all live in, we should measure weekly.
In selecting these metrics, I kept the following criteria in mind:
  • No single metric will be sufficient. We should select a portfolio of metrics that shows the complete picture, to avoid fixing one thing at the expense of another.
  • Every metric should be simple to understand, even if it is not perfect on its own.
  • Every metric should be easy to measure using open source tools.
  • Metrics should be measured at the team level, so integration with a Continuous Integration server such as Jenkins is necessary (see the sketch right after this list).
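To make this concrete, here is a minimal sketch of a weekly snapshot script that a scheduled CI job (a Jenkins build, for example) could run. The file name and the placeholder collectors are assumptions for illustration; the real numbers would come from the tools discussed below.

```python
"""Sketch of a weekly metrics snapshot, meant to run from a scheduled
CI job. All names here are illustrative, not a real integration."""
import csv
import datetime
import pathlib

SNAPSHOT_FILE = pathlib.Path("metrics_history.csv")  # hypothetical location

def collect_metrics() -> dict:
    # Placeholder values; wire these up to real sources
    # (SonarQube's API, coverage reports, test counts, ...).
    return {
        "date": datetime.date.today().isoformat(),
        "lines_of_code": 0,
        "coverage_percent": 0.0,
    }

def take_snapshot() -> None:
    """Append this week's numbers to the running history file."""
    row = collect_metrics()
    is_new = not SNAPSHOT_FILE.exists()
    with SNAPSHOT_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if is_new:
            writer.writeheader()
        writer.writerow(row)

if __name__ == "__main__":
    take_snapshot()
```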
Here are some proposed metrics:
  • Code size measures. It seems trivial, but I cannot imagine working on a codebase without knowing its size. We can count things such as lines of code, classes, HTML templates, etc. (see the first sketch after this list).
  • Static code analysis using SonarQube. It is nowadays trivial to install SonarQube using Docker and analyse your code (sketched after this list). SonarQube provides coding-standards checks and code-smell checks. I like to describe that process as a learning process, as it enables developers to learn better coding techniques and patterns. If your developers do not learn that, they will simply repeat their mistakes, even with a greenfield codebase.
  • Duplicate code analysis. SonarQube provides this as well.
  • Count of automated test cases. If you have many, congratulations. Now you can move one step further and check the percentage of unit tests compared to other test types. You can do deeper analysis and ask yourself questions such as: Is this kind of test sufficient? What kinds of manual tests should be automated? Are the test cases really helping your team? How can we make them more useful?
  • Code coverage. The count of test cases is rarely useful in itself: 1,000 test cases may be good for a small system but are surely insufficient for a bigger one. This is why the percentage of lines of code covered by tests is important (sketched after this list).
  • Manual test case count. Which areas of the system are covered and which are not? Where should more manual tests be written?
  • Bugs analysis. The target of this analysis is to answer questions such as: Which parts of the system generate the most bugs? Which developer injects the most bugs? (A sketch follows this list.)
  • Build a refactoring backlog. Tools are getting better every day, but they are not perfect and never will be. Developers already know the system's problems, such as which parts of the system are the most complex and which legacy components should be replaced. Write all of these tasks in a refactoring backlog, with attributes such as estimated time, complexity, and the risk of introducing new bugs (see the last sketch below).
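Starting with the code size measure, here is a rough sketch that counts lines per file type. The extension-to-label mapping is an assumption; adjust it for your stack.

```python
"""Rough code-size measure: lines of code per file type."""
import pathlib
from collections import Counter

# Assumed mapping; extend with whatever your codebase contains.
EXTENSIONS = {".py": "Python", ".java": "Java", ".html": "HTML templates"}

def code_size(root: str = ".") -> Counter:
    sizes: Counter = Counter()
    for path in pathlib.Path(root).rglob("*"):
        label = EXTENSIONS.get(path.suffix)
        if label and path.is_file():
            sizes[label] += len(path.read_text(errors="ignore").splitlines())
    return sizes

if __name__ == "__main__":
    for label, lines in code_size().most_common():
        print(f"{label}: {lines} lines")
```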
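For the SonarQube analysis, here is a hedged sketch of starting the official Docker image and scanning a project with the sonar-scanner CLI. The project key is hypothetical and authentication-token handling is omitted; in a real setup you would poll the server's status API until it is up instead of sleeping.

```python
"""Sketch: run SonarQube locally in Docker and scan the current project.
Assumes Docker and the sonar-scanner CLI are installed."""
import subprocess
import time

def start_sonarqube() -> None:
    # Official image; the web UI will be on http://localhost:9000
    subprocess.run(
        ["docker", "run", "-d", "--name", "sonarqube",
         "-p", "9000:9000", "sonarqube"],
        check=True,
    )

def scan_project(project_key: str, sources: str = ".") -> None:
    subprocess.run(
        ["sonar-scanner",
         f"-Dsonar.projectKey={project_key}",
         f"-Dsonar.sources={sources}",
         "-Dsonar.host.url=http://localhost:9000"],
        check=True,
    )

if __name__ == "__main__":
    start_sonarqube()
    time.sleep(120)  # crude wait; poll the server's status API in practice
    scan_project("my-product")  # hypothetical project key
```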
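For code coverage on a Python codebase, assuming pytest and coverage.py are in use, one way to extract the covered-lines percentage for the weekly snapshot:

```python
"""Sketch: run the test suite under coverage.py and pull out the
line-coverage percentage. Assumes pytest and coverage.py 5+ are installed."""
import json
import subprocess

def measure_coverage() -> float:
    subprocess.run(["coverage", "run", "-m", "pytest"], check=True)
    subprocess.run(["coverage", "json"], check=True)  # writes coverage.json
    with open("coverage.json") as f:
        return json.load(f)["totals"]["percent_covered"]

if __name__ == "__main__":
    print(f"Line coverage: {measure_coverage():.1f}%")
```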
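For the bugs analysis, a sketch that aggregates a CSV export from an issue tracker. The file name and the "component" and "assignee" columns are assumptions; adapt them to whatever your tracker exports.

```python
"""Sketch: count bugs per component and per developer from a
hypothetical issue-tracker CSV export."""
import csv
from collections import Counter

def bug_stats(export_file: str = "bugs_export.csv"):
    by_component: Counter = Counter()
    by_developer: Counter = Counter()
    with open(export_file, newline="") as f:
        for row in csv.DictReader(f):
            by_component[row["component"]] += 1
            by_developer[row["assignee"]] += 1
    return by_component, by_developer

if __name__ == "__main__":
    components, developers = bug_stats()
    print("Most bug-prone components:", components.most_common(5))
    print("Developers injecting most bugs:", developers.most_common(5))
```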
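Finally, the refactoring backlog needs no heavy tooling; even a simple structure carrying the attributes above, sortable by risk and cost, is enough to start. The field names and entries here are illustrative only.

```python
"""Sketch: a refactoring-backlog item with the attributes discussed above."""
from dataclasses import dataclass

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

@dataclass
class RefactoringTask:
    title: str
    estimated_days: float
    complexity: str       # e.g. "low" / "medium" / "high"
    regression_risk: str  # risk of introducing new bugs

# Illustrative entries
backlog = [
    RefactoringTask("Replace legacy reporting component", 10, "high", "high"),
    RefactoringTask("Extract duplicated validation logic", 2, "low", "medium"),
]

# Surface low-risk, cheap wins first.
for task in sorted(backlog,
                   key=lambda t: (RISK_ORDER[t.regression_risk], t.estimated_days)):
    print(task)
```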
Now you should have visibility into where your codebase stands and the trend of each metric. The next question is: how do you plan and track the effort required to change that trend? For example, if code coverage is only 9%, how can you plan and track the effort required to increase it? That will be the topic of the next post.
Finally, thanks to Nour-Eldeen, my son, for reviewing this article.

Sunday, January 28, 2018

Lack of Technical Management

I have worked on many projects and found a recurring anti-pattern affecting their codebases: the lack of technical management. Here, I am sharing my thoughts about it.

The problem definition:
  • The team has no understanding of the size of the product and its complexity.
  • There is a lack of metrics for, and understanding of, source code quality, covering the source code itself as well as its design and architecture.
  • There are no defined coding standards, nor design and architectural standards.
  • Lack of metrics regarding automated testing and code coverage. Sometimes there is a complete lack of a test strategy.
  • No measures of how much the codebase is changing, or of how it is affected by those changes.
  • No statistical understanding of product quality from the customer's perspective, e.g. bugs.
In general, we need to know the baseline now, as well as, on a weekly basis, how the codebase is changing over time. For example, is code quality deteriorating or improving? Is complexity increasing? At what pace? What about code modifiability?

In general, I understand why project management is done properly most of the time: the customer pushes to get things done, and we understand that if we cannot satisfy our customers, our competitors will. But what if you mismanage the product technically and focus solely on feature delivery? In the short term, no one will notice, but as time passes, here is what happens, every time.
  1. The codebase becomes harder and harder to understand.
  2. Code modifications become time-consuming.
  3. Regression bugs increase. 
  4. The fear of changing code increases rapidly. Copy/paste strategies flourish, leading to a larger codebase and a lot of duplicate code, and then new inconsistency bugs appear.
  5. Codebase hate becomes widespread. Everyone curses the product and nags to start something cleaner and better structured, sometimes with new frameworks or even a new language altogether. Guess what: we start a greenfield project, and once the codebase grows, with the continued lack of disciplined technical management, the whole cycle repeats. Just go back up to item number one and read again.

I will share how to resolve this in upcoming posts.