
Monday, January 29, 2018

Technical Codebase Visibility

In the last post, I wrote about the problems that arise from a lack of technical codebase management.
First, you cannot improve anything without knowing where you stand right now, as well as where you are heading. With your goals in mind, you are fully equipped to do the right things and to monitor your numbers to make sure you are on the right track.
How often should you monitor your metrics? In my opinion, given the fast-paced age we all live in, we should measure weekly.
In selecting these metrics, I applied the following criteria:
  • No single metric will be sufficient. We should select a portfolio of metrics that together show the complete picture, to avoid fixing one thing at the expense of another.
  • Every metric should be simple to understand, even if it is not perfect on its own.
  • Every metric should be easy to measure using open source tools.
  • Metrics should be measured at the team level, so integration with a Continuous Integration server such as Jenkins is necessary.
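As a minimal sketch of what team-level measurement could look like, the script below is the kind of thing a weekly Jenkins job could run: collect the numbers, compare them to agreed thresholds, and fail the build if any fall short. The metric names and threshold values here are invented for illustration, not taken from any particular tool.

```python
# Hypothetical weekly metrics gate for a CI job (e.g. Jenkins).
# In a real setup the metrics dict would be populated from actual
# tools; here it is hard-coded purely for illustration.

def check_thresholds(metrics, thresholds):
    """Return the names of metrics that fall below their minimum."""
    return [name for name, minimum in thresholds.items()
            if metrics.get(name, 0) < minimum]

if __name__ == "__main__":
    metrics = {"coverage_percent": 9, "unit_test_count": 1200}
    thresholds = {"coverage_percent": 10, "unit_test_count": 1000}
    failing = check_thresholds(metrics, thresholds)
    if failing:
        # A CI job can exit non-zero here to fail the build.
        print("Below threshold:", ", ".join(failing))
```

The point is not the script itself but the shape: metrics are gathered centrally, per team, on a schedule, rather than ad hoc on individual machines.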
Here are some proposed metrics:
  • Code size measures. This seems trivial, but I cannot imagine working on a codebase without knowing its size. We can count things such as lines of code, classes, HTML templates, etc.
  • Static code analysis using SonarQube. It is nowadays trivial to install SonarQube using Docker and analyse your code. SonarQube provides coding-standards checks and code-smell checks. I like to describe this as a learning process, as it enables developers to learn better coding techniques and patterns. If your developers do not learn, they will simply repeat their mistakes, even in a green-field codebase.
  • Duplicate code analysis. SonarQube provides this as well.
  • Count of automated test cases. If you have many, congratulations. Now you can move one step further and check the percentage of unit tests compared to other test types. You can do deeper analysis and ask yourself questions such as: Is this kind of test sufficient? What kinds of manual tests should be automated? Are the test cases really helping your team? How can we make them more useful?
  • Code coverage. The count of test cases is rarely useful in itself: 1,000 test cases may be good for a small system but surely insufficient for a bigger one. This is why the percentage of lines of code covered by tests is important.
  • Manual test case count. Which areas of the system are covered and which are not? Where should more manual tests be written?
  • Bugs analysis. The goal of this analysis is to answer questions such as: Which parts of the system generate the most bugs? Which developers introduce the most bugs?
  • Build a refactoring backlog. Tools are getting better every day, but they are not perfect and never will be. Developers already know the system's problems, such as which parts are the most complex and which legacy components should be replaced. Write all of these tasks in a refactoring backlog, with attributes such as estimated time, complexity, and risk of introducing new bugs.
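To show how little machinery the code size measure needs, here is a rough sketch using only the Python standard library. The list of file extensions is an assumption for illustration; a real project would tune it or use a dedicated tool such as cloc.

```python
import os
from collections import Counter

def count_lines(root, extensions=(".py", ".html", ".js")):
    """Count lines per file extension under root -- a rough size measure."""
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1]
            if ext in extensions:
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    counts[ext] += sum(1 for _ in f)
    return counts
```

Run weekly from CI, the trend of these counts matters more than any single number.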
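Coverage numbers come from a coverage tool, but the ratio reasoning behind the coverage metric above is plain arithmetic. A small sketch, with invented numbers:

```python
def coverage_percent(covered_lines, total_lines):
    """Percentage of lines exercised by tests; 0 for an empty codebase."""
    if total_lines == 0:
        return 0.0
    return 100.0 * covered_lines / total_lines

# Example: 900 covered lines out of 10,000 total is 9% coverage --
# the same raw count of tests would look very different on a
# codebase one tenth that size.
```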
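For the bugs analysis, the core of the "which parts generate the most bugs" question is a grouping exercise. A minimal sketch; the record fields here are an assumption about what a bug-tracker export might contain:

```python
from collections import Counter

def bug_hotspots(bugs):
    """Rank components by the number of bugs filed against them."""
    return Counter(bug["component"] for bug in bugs).most_common()

# Illustrative, hand-written sample data.
bugs = [
    {"id": 1, "component": "billing"},
    {"id": 2, "component": "billing"},
    {"id": 3, "component": "search"},
]
```

The same grouping works per developer, per release, or per module, depending on which question from the list above you want to answer.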
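The refactoring backlog can start as nothing more than structured records. The fields below mirror the attributes listed above (estimated time, complexity, risk of new bugs); the task names and values are invented examples:

```python
from dataclasses import dataclass

@dataclass
class RefactoringTask:
    description: str
    estimated_days: int   # estimated time
    complexity: str       # e.g. "low" / "medium" / "high"
    bug_risk: str         # risk of introducing new bugs

backlog = [
    RefactoringTask("Replace legacy payment component", 10, "high", "high"),
    RefactoringTask("Split oversized OrderManager class", 3, "medium", "low"),
]

# Surface low-risk, low-effort tasks first.
RISK_ORDER = {"low": 0, "medium": 1, "high": 2}
quick_wins = sorted(backlog,
                    key=lambda t: (RISK_ORDER[t.bug_risk], t.estimated_days))
```

Even this much structure lets the team prioritise deliberately instead of refactoring whatever is nearest at hand.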
Now you should have visibility into where your codebase stands and the trend of each metric. The next question is: how do you plan and track the effort required to change a trend? For example, if code coverage is only 9%, how can you plan and track the effort required to increase it? That will be the topic of the next post.
Finally, thanks to Nour-Eldeen, my son, for reviewing this article.
