
Tuesday, February 05, 2019

Android Performance Story

I am developing a personal finance application that uses SQLite to store all of its data. I noticed slowness in the part of the application that calculates balances for all accounts.

The first thing that came to my mind was that DB access is slow and that I should research whether to add indexes to the tables. Before doing that, I gave the Android Studio Profiler a try.





What was a surprise to me was finding that the slowness is not related to database access but to a date conversion function. See:







See this code snippet:

import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale

object DateUtils {

    const val YYYY_MM_DD_HH_MM_SS_S = "yyyy-MM-dd HH:mm:ss"
    private val formatter = SimpleDateFormat(YYYY_MM_DD_HH_MM_SS_S, Locale("en"))

    fun toDate(dateString: String): Date = formatter.parse(dateString)
    ...

}

Although the formatter object is created once and reused by the toDate() function, toDate() is extremely slow. It is called for each row fetched from SQLite, but I still did not expect it to be the performance bottleneck. I had always assumed DB access is the slowest part of any application, which, in my little story, was wrong.
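
For illustration, one mitigation I could try inside DateUtils (a minimal sketch, not something I have verified in the app): many fetched rows repeat the same date string, so memoizing the parse result turns one parse per row into one parse per distinct value. Note that SimpleDateFormat is not thread-safe, so this assumes the same single-threaded access the original code already assumes.

    private val parsedDates = HashMap<String, Date>()

    // Sketch: reuse the formatter above, but parse each distinct date string
    // only once; repeated strings become a cheap map lookup.
    fun toDateCached(dateString: String): Date =
        parsedDates.getOrPut(dateString) { formatter.parse(dateString) }

Alternatively, storing dates in SQLite as INTEGER epoch milliseconds instead of formatted text would remove the parsing cost entirely.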

So I learned: do not assume; get the data, then decide.


Saturday, February 02, 2019

Using UUID as a Primary Key



I am developing a personal finance application on mobile and web. Data should be synced both ways between the mobile app and the web application.

The data schema is:

  • Currency table with currency name and conversion rate
  • Account table with name and a foreign key to its currency
  • Transactions table with an amount, a withdrawal account, a deposit account, and a date.

For compatibility with the server database, I added a user id to all tables, plus a small User table with just the user ID, name, and email.

I am using sample data with about 4500 transactions. The total size of the SQLite files is 324 KB.

I am using the auto-increment primary key managed by SQLite. Since I wanted to implement a sync feature between the mobile apps and the web app, I thought replacing the auto-increment id with a UUID would be a good way to enable two-way sync.

I did some research, and I found two options:

  • Implement the UUID as clear text. This has the advantage of clearly seeing the ID in the database, so debugging is more comfortable. The disadvantage is spending 36 bytes per id.
  • Implement the UUID as a BLOB of 16 bytes. Data is stored as binary, so you save 20 bytes per id. You can easily use HEX(id) in an SQL query to view the BLOB UUID as text. (See the conversion sketch below.)
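
For reference, the 16-byte packing itself is straightforward. A minimal Kotlin sketch using java.util.UUID (the extension function names are mine):

    import java.nio.ByteBuffer
    import java.util.UUID

    // Pack a UUID into the 16-byte form used for a BLOB primary key.
    fun UUID.toBytes(): ByteArray =
        ByteBuffer.allocate(16)
            .putLong(mostSignificantBits)
            .putLong(leastSignificantBits)
            .array()

    // Rebuild the UUID from a 16-byte BLOB read back from SQLite.
    fun ByteArray.toUuid(): UUID {
        val buffer = ByteBuffer.wrap(this)
        val msb = buffer.long  // first 8 bytes
        val lsb = buffer.long  // last 8 bytes
        return UUID(msb, lsb)
    }

Stored this way, HEX(id) in a query prints the UUID's hex digits in their usual order, just without the dashes.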

For the sake of saving space on the mobile device, I decided to use BLOB. But I worried about performance, so I did some research.

I read multiple blogs, articles, and the SQLite reference, and found that there is no negative performance impact from using UUIDs, either binary or text. But of course, the size will be much bigger, especially when using the 36-byte text-based UUID.

Before making any changes, these were the performance numbers of the main APIs, per one call, to compute the balances displayed on the application dashboard:

  1 ║ Calculations time   │ 634  │ milliseconds  
  2 ║ Asset Accounts      │ 358  │ milliseconds  
  3 ║ Liability Accounts  │ 70   │ milliseconds  
  4 ║ Expense Accounts    │ 68   │ milliseconds  
  5 ║ Income Accounts     │ 67   │ milliseconds  
  6 ║ Other Accounts      │ 69   │ milliseconds  
  7 ║ Total               │ 1266 │ milliseconds        

Rows 1 to 6 are SQLite fetches from the database plus balance calculations. I timed the display refresh and found it very fast, so the bottleneck is in SQL data access.

As the whole database is 324 KB, I expect SQLite to load most of it into memory and give the best performance.

Then I spent extensive effort changing all primary keys and foreign keys to 16-byte BLOBs and migrating the data. I noticed that:

The database grew to 807 KB.
Performance degraded almost immediately, as follows:

  1 ║ Totals Calculations │ 1295 │ milliseconds  
  2 ║ Asset Accounts      │ 670  │ milliseconds  
  3 ║ Liability Accounts  │ 120  │ milliseconds  
  4 ║ Expense Accounts    │ 147  │ milliseconds  
  5 ║ Income Accounts     │ 147  │ milliseconds  
  6 ║ Other Accounts      │ 98   │ milliseconds  
  7 ║ Total               │ 2477 │ milliseconds  
This is almost double the time!

The first thing to think of to improve DB performance is adding indexes. So I added indexes gradually and measured performance after each one. In the end, I had added 4 indexes on the fields used in the WHERE and ORDER BY parts of the queries. I measured again and found these performance numbers:


  1 ║ Totals Calculations │ 1078 │ milliseconds  
  2 ║ Asset Accounts      │ 639  │ milliseconds  
  3 ║ Liability Accounts  │ 111  │ milliseconds  
  4 ║ Expense Accounts    │ 123  │ milliseconds  
  5 ║ Income Accounts     │ 141  │ milliseconds  
  6 ║ Other Accounts      │ 97   │ milliseconds  
  7 ║ Total               │ 2189 │ milliseconds  

The performance improved by only 288 milliseconds, while the DB size jumped up to 1.4 MB. Remember, it was initially about 324 KB.

I do expect the database size to increase when adding indexes, but why did performance barely improve at all?


Lessons Learned and Ideas:

  • A DB size increase decreases performance.
  • Using UUIDs is slower than using auto-increment ids.
  • Adding indexes had no significant positive impact on performance. This did not make sense at all at first; it simply means database access is not the real problem.
  • As a hint: to make sure the queries were using the indexes, I prefixed my SQL queries with "EXPLAIN QUERY PLAN" in the open source "DB Browser for SQLite" tool. It showed clearly that my indexes were being used. (A programmatic version of this check is sketched below.)
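
The same check can also be done from code. Here is a sketch with hypothetical table and column names (my illustration, not the app's actual schema), assuming an android.database.sqlite.SQLiteDatabase handle:

    import android.database.sqlite.SQLiteDatabase
    import android.util.Log

    fun logQueryPlan(db: SQLiteDatabase) {
        // Index the columns used in the WHERE and ORDER BY parts.
        db.execSQL(
            "CREATE INDEX IF NOT EXISTS idx_trans_withdraw " +
                "ON transactions(withdraw_account_id, date)"
        )
        // Prefix the real query with EXPLAIN QUERY PLAN and log each plan row.
        db.rawQuery(
            "EXPLAIN QUERY PLAN SELECT amount FROM transactions " +
                "WHERE withdraw_account_id = x'00112233445566778899aabbccddeeff' " +
                "ORDER BY date",
            null
        ).use { cursor ->
            while (cursor.moveToNext()) {
                // An index in use shows up as: SEARCH ... USING INDEX idx_trans_withdraw
                Log.d("QueryPlan", cursor.getString(cursor.getColumnIndexOrThrow("detail")))
            }
        }
    }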

Conclusions:

  • The size increase is really significant, so I will go back to using auto-increment ids. I am happily using Git, so it is easy to just go back to the develop branch.
  • Performance needs analysis with a more appropriate tool. I tried the Android Profiler and will continue using it to optimize performance; measuring time the old way is not working well.
  • The assumption that I must use UUIDs for all PKs and FKs is not true. I will consider other options.



Useful Links:

  • https://tomharrisonjr.com/uuid-or-guid-as-primary-keys-be-careful-7b2aa3dcb439
  • https://medium.com/@JasonWyatt/squeezing-performance-from-sqlite-indexes-indexes-c4e175f3c346
  • https://developer.android.com/studio/profile/cpu-profiler
  • http://plaintexttools.github.io/plain-text-table/







Saturday, November 17, 2018

JHipster Pros and Cons

JHipster is a web application scaffold that generates backend and frontend code for CRUD applications. You describe entities in JDL format, among other options, and JHipster generates the backend server and frontend UI for you.

Here are some of the advantages:

  • Fast application code scaffolding. In a short time, a working application is generated.
  • Many options exist on the frontend side, such as React and Angular. Vue.js is under development.
  • Many options also exist on the backend side, such as monolithic or service-oriented architecture.
  • Backend code is generated in Java only. There is in-progress work for Kotlin.
  • It generates automated tests of all kinds: unit, integration, performance, and even e2e tests.
  • The code is well organized and uses best practices. You can learn a lot from it.
On the negative side:
  • JHipster generates tons of code with many libraries, some of which you probably do not know. You cannot just go ahead and add your business logic and modifications without reading tutorials about these technologies.
  • There is some overuse of components, libraries, and frameworks, which makes your initial greenfield project complex from day one. It is starting big rather than starting small, but that was your choice from the beginning!
Alternatives:
  • JHipster is based on Yeoman. There are many other scaffold generators; look at http://yeoman.io/generators/
  • You can create your own Yeoman generator to produce the code that is just right for you.

Refactoring and Git Developer Story

I refactored a critical, long class that had very long member functions. My main intentions were to:

  • Extract many smaller member functions to enhance the class's modifiability.
  • Break the long member functions into smaller ones, making it much easier to understand what the class is really doing.
  • With many smaller member functions, we have a chance to discover whether the class is not cohesive. We may then consider extracting a new class, which would be good news, as the class is already too long.
  • Enhance the memory footprint of the main class API.

I made many commits, each a small, logical refactoring step. When the code was sent for review, the reviewer complained about the time wasted reviewing all these commits. He asked: next time, can you make it one big commit?

No, we cannot, and here is why it is better to have fine-grained commits.

If you are doing a real code review, where you have to understand each change to ensure it meets quality checks, verify no bug is injected, evaluate side effects, and propose QA test cases to cover them, then one small logical commit is much easier and more accurate to review. It will take time, but it is worth it.

For the developer himself, I find it crucial to commit changes bit by bit, ensuring each small, logical change is committed with a good comment. It is like a self-review, an organized pass even before the code review.

If the reviewer still needs to see all changes in one shot, he can easily check out the old class and the final edited class and use any diff tool to see all differences at once.

Happy coding!

Tuesday, July 31, 2018

Lesson Learned: Code Regression


Giving advice is not very effective most of the time. Sharing real experiences and troubles, and how they were handled, is more effective. The best is whoever learns from others' mistakes. So I am sharing here a story where my team and I introduced a bug that leaked to the production environment. I knew most of this advice from when I was consulting people in Agile, but when the situation changed, I made the same mistake. The shame was painful, and hence the learning was intense. Experience is where you really learn, not advice from ivory towers.

I am sharing this here hoping it will be useful for me and for others.

Look at this code fragment, part of a timezone-based message broadcast.
    private String timezoneMetadataId = "23467fe6523310";

    ...

    // Check metadata for timezone
    List<MetadataValue> subTimezoneValue = sub.getSubscriberMetaData().get(timezoneMetadataId);
    for (String timezone : timezones.split("[;,]")) {
        for (MetadataValue mv : subTimezoneValue) {
            // Do some processing here
        }
    }


The question here: is timezoneMetadataId allowed to be null? In our case, it must not be null, and the initialization above ensures that.

Until now, everything is OK, but the hardcoded timezoneMetadataId is not recommended, especially as it is repeated in many places in the source code.

We wanted to remove this hardcoded id and place it in the configuration file. As we use Guice dependency injection, we changed the code above to:


    @Named(value = "mongo.wmprod.subscriber.metadata.timezoneId")
    private String timezoneMetadataId;


And here we made the mistake of forgetting the @Inject annotation, which meant the @Named annotation had no effect and the variable timezoneMetadataId was initialized to null. Sadly, the for loop above does not raise any errors, because getting the value of a null key yields an empty list, and looping over an empty list is fine.

This is a code regression: changes intended to make the code better ended up injecting a critical bug.

We have a code review process, but the reviewer did not notice the mistake, maybe because it was buried in a broad set of changed lines.

QA also did their own testing, but they did not run the timezone-based broadcast test case.

Here I am sharing lessons learned to avoid such regressions in the future:


Lessons Learned for Programming:

You are changing the code. Do not be fooled by the simplicity of your changes or assume they are safe. Follow proven programming practices all the time.

1) Verify the value is not null

All configuration items must be verified. See below:

import com.google.common.base.Preconditions;
Preconditions.checkNotNull(timezoneMetadataId);

You can also verify the size or validity of the key. Fail fast is your best friend.


2) Add a log statement that shows the id.

logger.info("Sending broadcast with timezone metadata id: {}", timezoneMetadataId);

If the value is invalid in some way, you can catch it in the log files.


3) Write a unit test that ensures failure in case of a null or invalid timezone metadata id.

Actually, this ensures your preconditions are really working. It is indeed easy to make mistakes, so verifying that your preconditions and checks work is essential. If the project has no unit tests at all, create a task to write your first unit test immediately.
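
A minimal sketch of such a test (JUnit 4.13+, with a hypothetical validateTimezoneId helper that wraps the precondition from lesson 1):

    import com.google.common.base.Preconditions
    import org.junit.Assert.assertThrows
    import org.junit.Test

    // Hypothetical helper wrapping the null check shown in lesson 1.
    fun validateTimezoneId(id: String?): String =
        Preconditions.checkNotNull(id, "timezone metadata id is not configured")

    class TimezoneConfigTest {

        @Test
        fun nullTimezoneIdFailsFast() {
            // The regression in this story would now fail here, in a unit test,
            // instead of silently broadcasting to nobody in production.
            assertThrows(NullPointerException::class.java) { validateTimezoneId(null) }
        }
    }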


4) Write integration tests. 

In this project's codebase, there was no integration testing framework in effect. If you join such a team, create a task to write your first integration test in your first week.

5) Use Constructor Injection
See https://github.com/google/guice/wiki/Injections
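
A sketch of what constructor injection looks like, shown in Kotlin for brevity (the class name is hypothetical): if @Inject on the constructor is forgotten, Guice cannot construct the object at all, and a missing binding for the named string produces an explicit Guice error instead of a silent null field.

    import com.google.inject.Inject
    import com.google.inject.name.Named

    class TimezoneBroadcaster @Inject constructor(
        @Named("mongo.wmprod.subscriber.metadata.timezoneId")
        private val timezoneMetadataId: String
    ) {
        init {
            // Extra fail-fast guard: a blank id is as wrong as a null one.
            require(timezoneMetadataId.isNotBlank()) { "timezone metadata id is not configured" }
        }
    }

As a bonus, constructor injection makes the class trivially testable: a test can pass any value directly, without an injector.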

Lessons Learned for Code Review:


We did not have a written checklist for code review. This is a mistake. Even if you join a team that does not have a checklist, do not wait until you have team experience to create one. Conduct a meeting and start with a simple checklist immediately. With each story or bug code review, the checklist should be filled in and attached to the story or bug. The checklist template should be updated frequently.

We took code review seriously, but with no structure. The ideas above are meant to make it a serious process with explicit checks and responsibility.

Lessons Learned for QA:

We kept assuring the Project Leader and QA of the safety of the changes. This was really a mistake. Fears should be reduced by adding more test cases, not by getting developers' assurances about the safety of the changes.

I find it necessary to list all changed functionality for QA. It is even better for QA to have access to the changes themselves. Once QA reads source code nouns such as variable names, function names, and class names, they can associate them with UI functionality and create the corresponding test cases.


In the end, I hope my mistakes and lessons learned will be useful.


Wednesday, July 18, 2018

Refactoring Tools

I used IntelliJ to do some package-renaming refactoring. I was confident that it was safe to do so; however, other parts of the code were changed when they should not have been. I discovered the problem later and fixed it. I used to avoid find/replace tools and grepping because they are dangerous: you can easily change parts that should not be changed.

I learned to never blindly trust any tool, regardless of how smart it is. In complex code projects, you have to be careful all the time and avoid being deceived by automated, smart refactoring tools. Using unsafe find/replace carefully is better than using smart refactoring tools carelessly.

Monday, January 29, 2018

Technical Codebase Visibility

In the last post, I wrote about the problems that happen because of the lack of technical codebase management.
First, you cannot do anything without knowing your status right now, as well as where you are heading. With your goals in mind, you are fully equipped to do the right things and monitor your numbers to make sure you are on the right track.
How often should you monitor your metrics? In my opinion, given the fast-paced age we live in, we should measure weekly.
In selecting those metrics, I put into mind the following criteria:
  • No single metric is sufficient. We should select a portfolio of metrics that shows the complete picture, to avoid fixing something at the expense of something else.
  • Every metric should be simple to understand, even if it is not perfect on its own.
  • Every metric should be easy to measure using open source tools.
  • Metrics should be measured at the team level, so integration with a Continuous Integration server such as Jenkins is necessary.
Here are some proposed metrics:
  • Code size measures. This seems trivial, but I cannot imagine working on a codebase while unaware of its size. We can count things such as lines of code, classes, HTML templates, etc.
  • Static code analysis using SonarQube. It is nowadays trivial to install SonarQube using Docker and analyse your code. SonarQube provides coding-standards checks and code-smell checks. I like to describe that process as a learning process, as it enables developers to learn the best coding techniques and patterns. If your developers do not learn that, they will simply repeat their mistakes, even with a greenfield codebase.
  • Duplicate code analysis. SonarQube provides it.
  • Count of automated test cases. If you have many, congratulations. Now you can move one step further and check the percentage of unit tests compared to other test types. You can do deeper analysis and ask yourself questions such as: Is this kind of test sufficient? What kinds of manual tests should be automated? Are the test cases really helping your team? How can we make them more useful?
  • Code coverage. The count of test cases is rarely useful in itself: 1000 test cases may be good for a small system but surely insufficient for a bigger one. This is why the percentage of lines of code covered by tests is important.
  • Manual test case count. Which areas of the system are covered and which are not? Where should you have more manual tests written?
  • Bug analysis. The target of this analysis is to answer questions such as: Which parts of the system generate the most bugs? Which developer injects the most bugs?
  • Build a refactoring backlog. Tools are getting better every day, but they are not perfect and never will be. Developers already know the system's problems, such as which parts are the most complex and which legacy components should be replaced. Write all these tasks in a refactoring backlog, with attributes such as estimated time, complexity, and the risk of introducing new bugs.
Now you should have visibility into where your codebase stands and the trend of each metric. The question becomes: how do you plan and track the effort required to change the codebase's trend? For example, if code coverage is only 9%, how can you plan and track the effort required to increase it? That will be the topic of the next post.
Finally, thanks to Nour-Eldeen, my son, for reviewing this article.

Sunday, January 28, 2018

Lack of Technical Management

I have worked on many projects and found a repeating anti-pattern in the codebase: the lack of technical management. Here I am sharing my thoughts about it.

The problem definition:
  • The team has no understanding of the size of the product and its complexity.
  • There is a lack of metrics and understanding of source code quality; that includes the source code itself as well as its design and architecture.
  • There are no defined coding standards, nor design and architectural standards.
  • There is a lack of metrics regarding automated testing and code coverage. Sometimes there is a complete lack of a test strategy.
  • There are no measures of the amount of codebase change and how the codebase is affected.
  • There is no statistical understanding of the quality of the product from the customer's perspective, e.g., bugs.
In general, we need to know the baseline now, as well as, on a weekly basis, how the codebase is changing over time. For example, is code quality deteriorating or improving? Is complexity increasing? At what pace? What about code modifiability?

In general, I understand why project management is done properly most of the time: usually because the customer is pushing to get things done, and because we understand that if we cannot satisfy our customers, our competitors will. But what if you mismanage the product technically and focus solely on feature delivery? In the short term, no one will notice, but as time passes, here is what happens every time.
  1. The code base becomes harder and harder to understand. 
  2. Code modifications become time consuming.
  3. Regression bugs increase.
  4. The fear of changing code increases rapidly. Copy/paste strategies flourish and lead to a larger codebase and a lot of duplicate code. Then new inconsistency bugs appear.
  5. Hatred of the codebase becomes widespread. Everyone curses the product and nags to start something cleaner and better structured, sometimes with new frameworks or even a new language altogether. Guess what: we start a greenfield project, and once the codebase grows, with the continued lack of disciplined technical management, the whole cycle repeats. Just go up to item number one and read again.

I will share how to resolve this in upcoming posts.

Thursday, March 09, 2017

Personal Maps in a Kick-off Meeting


I was in a KOM (kick-off meeting) for a new Agile coaching project with an organization in Egypt. On my first day, in the first meeting, I planned a simple agenda as follows:

  • Know about the team
  • Know about the product
  • Know about the project
  • And build a list of problems and objectives for the project
So naturally, the first thing was to get to know the team. I did not find anything better than exercising Jurgen Appelo's Personal Maps. We were a small team, and it was my first time meeting them.

We were co-located, which gave us a better chance for face-to-face conversations. I thought of using a whiteboard as a big canvas for our personal maps, but the meeting room had none. I asked one of the team members to bring a bunch of A3 printer paper and, if he could not find any, to bring A4 paper. Almost every office has A4 printer paper, and sure enough he came back shortly with a bunch of sheets.

I started by describing the Personal Maps objectives and process. I worried that the team might find it a wasteful activity, but I continued per my plan.

I gave each team member one blank A4 sheet and ran a timer for 3 minutes. All team members started drawing their personal maps. It was remarkable how popular Mind Maps are and how quickly everyone became productive from the first minute.

After the timer expired, everyone finished and we started showing each other our Personal Map papers. We agreed that no one would present his own map; we would only answer questions about it from other team members. Each of us presented someone else's mind map.

That was the first time I had met these members, and it was a very positive experience getting to know each other as human beings. We found shared hobbies, shared experiences, and sometimes remarkable things about individuals. The exercise succeeded not only as an introduction to each other but also as an ice breaker. Below are some of the maps that were made. They are not very fancy, but the discussions about them were.

Lessons Learned:
  • I plan next time to increase the time to 5 minutes, to allow more time for bigger maps.
  • I will bring A3 paper to enable and encourage team members to write more details.
  • I will let each person present someone else's mind map fully; this will surely encourage others to ask questions.
  • At the start, I will explain the importance of writing clearly, to make the maps easy for others to read.
Here are some other mind-maps.





Tuesday, January 31, 2017

Organizing Application Classes into Packages

In the Java language, we have encapsulation at many levels. First is the class, as it may have private data members and methods. This has the benefit of hiding complexity and provides a layer of abstraction. Second is the package. It provides access rights to the sibling classes inside it: classes not declared public are visible only to classes in the same package and are not exposed outside it. So packages in Java provide a means of encapsulation and detail hiding. There are also other means of encapsulation in Java, but I am not concerned with them in this post.

For more details about controlling access to class members for class, package, subclass and the world, see: https://docs.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html

After this introduction, my question is: how can we organize project classes into packages? I am describing two ways here and comparing them, using an example product related to finance. I will assume both ways use the same well-known separation of concerns: separate layers for entities, data access repositories, business logic, and the web layer.


1) Organize Packages based on Layer:

  • com.finance.entity:        stores all database entity classes.
  • com.finance.repository: contains all data access classes.
  • com.finance.business:    contains all business classes.
  • com.finance.web:           stores the whole web layer, including controllers and REST classes.


2) Organize Packages based on business domain:

  • com.finance.user:            includes all classes, from all layers, related to the user.
  • com.finance.account:      includes all classes, from all layers, related to accounts.
  • com.finance.transaction: contains everything related to transactions.
  • com.finance.charts:         contains everything related to charts.
Each package has separate classes for each layer. For example, in the account package we will have (a small sketch follows this list):


    • AccountEntity
    • AccountRepository
    • AccountService
    • AccountRest
    • Any other classes related to Account
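
To make the layout concrete, here is a minimal Kotlin sketch of the account package from approach 2. Only the class names come from the list above; the members are my own illustration:

    package com.finance.account

    // All layers of the account feature live side by side in one package, so
    // a change to the account model stays inside com.finance.account.
    class AccountEntity(val id: Long, var name: String, val currencyId: Long)

    class AccountRepository {
        fun findAll(): List<AccountEntity> = TODO("data access layer")
    }

    class AccountService(private val repository: AccountRepository) {
        fun count(): Int = repository.findAll().size
    }

    class AccountRest(private val service: AccountService) {
        fun accountCount(): Int = service.count()
    }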


In approach 1:

  • All applications' folder/package structures will look the same, as long as they use the same described architecture. No identity is exposed in the package names.
  • The encapsulation provided by Java packages is almost eliminated, or ignored.
  • This thinking may be imposed by a functional division of code that matches organizing your teams as functional teams rather than cross-functional teams.


In approach 2:

  • We utilize the package as a component: you can have classes that are private to the package.
  • You will have a unique identity for each application that matches its domain.
  • Each package can be easily separated into a micro-service.
  • It encourages, or is at least consistent with, cross-functional team organization.


Modifiability:
Assume you want to change the transaction date to a transaction date and time. This change requires you to touch all layers.

In approach one, you have to navigate to 4 different packages to make the change. I find this navigation tedious and distracting.
In approach two, all your changes are contained in one package. Other packages are not affected at all, unless you change a public class method.

This is why I prefer approach two and see it as consistent with the Java package concept, as well as with Agile cross-functional teams and micro-services.



Sunday, October 30, 2016

Why Velocity is a Broken Performance Metric?

In Scrum, we have two main metrics for measuring Sprints. The first is Velocity, defined as the total story points of completely developed and tested User Stories. The second is Burned Effort, the total number of hours spent by the team throughout the Sprint. We use the Burn Down chart to plot the remaining effort, check our progress daily, and ensure we are on the right track; otherwise, we are advised to take action.


Velocity is designed as a metric to measure and manage the scope of work within a release. It is not designed as a productivity measure that we should push to increase over time.

Many teams, and sometimes managers, consider Velocity a productivity measure and insist that teams should always optimize their work and increase their velocity. Optimizing engineering work is an acceptable target, but Velocity has another role to play. See what happens when teams are urged to increase their velocity over time:
  • They tend to estimate new stories in larger sizes, so a story that used to be sized 3 will be sized 5.
  • They frequently resize finished stories, as if they took more than the planned effort, and consider this a correction.
  • They relax the Done Definition, so more stories are earned even if they are not really complete. You will find out just before the release date, when a lot of bugs have accumulated and need a lot of extra time to be fixed.

This usually shows a false increase in Velocity, an inflation, where the increase in Velocity does not represent an actual increase in value delivered. But managers may feel happy achieving better Velocity over time. Is this misleading? Yes, of course. If you believe you are already improving, why would you consider other ideas for improvement?

So, the point is: do not use Velocity as a measure of team productivity, but to manage the release scope, so that you are able to track your progress effectively throughout the release.

Optimizing your work in Scrum has a different set of measures and actions; let me highlight some:

  1. Ensure a strict Done Definition over time through the heavy use of automation, so that each Sprint produces a truly potentially shippable product increment and there are no stability Sprints at the end of the release anymore.
  2. Ensure the reduction of the number of test/fix cycles. Testing is valuable, but too many bugs are a waste that switches our focus from learning to write better code the first time to expensive cycles of test-fix-retest.
  3. Ensure you finish almost all stories planned in the Sprint and avoid partially done stories that move from Sprint to Sprint.
  4. Reduce the Sprint timebox over time. For example, if you started with a 4-week Sprint, optimize to switch to a 3-week Sprint, then to a 2-week Sprint, etc. Agile prefers shorter cycles; per the Agile Manifesto principle: "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."

Finally, remember: Velocity is a metric to manage release scope, not a metric to measure team performance or productivity.

Notice: Illustrations copyright Jurgen Appelo, Management 3.0

Monday, October 10, 2016

Is Time Pressure an Acceptable Reason for Poor Quality?



I have worked with many teams who had the same issue: intense time pressure that led them to react by doing either or both of the following:
  • Coding in a hurry, falling into too many cycles of testing and bug fixing, which delayed them much more and made the situation worse.
  • Delivering code that is full of bugs, which makes the customer distrust the team.

When I ask developers and team leads why they fall into this situation, they always point to the same reason: they had unreasonably forced deadlines and pressure. They reported the situation to the business people, who refused to listen. That means the whole responsibility lies with the business people or the customer himself. Developers see themselves as victims of the bad guys. They wish the world would change around them and give them enough time to do their work the right way.

This perspective puts the whole blame on the pressure from business people and customers and none on the development team. And of course, business people blame the development team. This is a finger-pointing situation where everyone blames everyone else. I would like to ask some questions to highlight my point of view:
  • Who knows more about software engineering principles: developers, business people, or customers?
  • If a customer pushed you for the sake of optimizing your work, or even out of greed, is it helpful for you or your customer to ship buggy code?
  • Do many cycles of testing/fixing help you or your customer deliver faster, or do they just waste more time?

Under pressure, if developers decide to voluntarily work all weekends and a lot of overtime and ship the right product, it may be OK, relatively speaking. Developers sacrifice their personal lives for the sake of happy customers. It is a lose-win situation, which is not optimal, but at least someone wins.

What actually happens is much worse. We fall into a lose-lose situation, where developers are exhausted from too much overtime and ship a product with too many bugs, while on the other end customers and business people are dissatisfied. What could be worse than this?

Developers, please stop trying to change the world around you and start changing yourselves and your responses. You know your craft and you should act responsibly. I would like to give an example of what you should do as a professional.

Your car is broken and you need to fix it. You have a 500-mile journey and you are in a hurry. You go to a car maintenance center. After a careful investigation, the engineer tells you that it will take 6 hours to fix. You ask him to finish it in just 1 hour due to your situation. The mechanical engineer knows his craft and his work, you assume. I will fictionally describe two responses to the intense pressure to finish early.
First response: the engineer just listened to your demands and did an unreliable fix to the car in one hour. You were happy to be on time. You took the car and drove it, and in the middle of your journey, at 2:00 AM in an uninhabited place, the car stopped and fire caught your engine. What would be your feelings about the mechanical engineer? Are you still happy that he fixed your car in one hour? You asked him to finish in one hour to support your urgency, not to make your situation worse.

Second response: imagine this. The engineer took your request into consideration and consulted other engineers to optimize the time of the fix. He came back to you with a full explanation, facts, and context to explain the constraints. He told you: "There is no way to finish in one hour, but we will do some high-priority fixes in parallel and skip some minor enhancements. Some other fixes can be delayed until you reach your destination." With all these optimizations, they could finish in two hours, not less. You were not so happy, but you felt they were disciplined and professional. The fix was well done, and you drove your car to your destination safely (but with a one-hour delay).

From these responses, which one would be appropriate for the customer? Which response is expected from a disciplined engineer or a professional organization? What is the ethical reaction you should take regardless of the intense time pressure?

Nowadays, software is critical to billions of people's lives. Billions of dollars are lost because of bugs in software systems. People die from bugs in software. Can you imagine a bug's consequences on a radiology treatment machine? A bug in blood-bank software? A bug in charging a mobile battery that may make it explode? Or at least, do you know how many users are disappointed because of poor quality software?

ACM (the Association for Computing Machinery) developed the "ACM Code of Ethics and Professional Conduct" for software developers at [http://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct]. The declaration has 8 moral imperatives. The second one is "Avoid harm to others". In the scenario described, you are not only harming others, you are harming yourself as well.
As a conclusion, please:
  • Do not try to change the world around you.
  • Change yourself and choose your reactions with discipline.
  • There is no reason to fall into a lose-lose situation and harm your customer and yourself.
  • Always write clean code and follow technical excellence practices.
  • Technical practices will not delay you; they will save you time and hassle.

And remember: be powerful and be happy.

Notice: Illustrations copyright Jurgen Appelo, Management 3.0

Sunday, September 25, 2016

Google Maps and Agility



I commute by car almost daily, for 2 to 3 hours. I live and work in one of the biggest and busiest cities, I think: Cairo. If you do not regularly commute in a big, busy city, you probably cannot imagine the hassle and uncertainty of it. It is disappointing. I face unexpected congestion randomly and have very limited clues on how to manage my trip. Sometimes I find unexpected congestion and change my route to another one, assuming it is faster, only to find it much worse. Actually, I cannot be sure, as I do not have the knowledge to be sure. With fear and ignorance, my commute is a misery.

Last year, I started to use a local mobile application named "Bey2ollak". Bey2ollak is an Arabic word meaning "he is telling you". Yes, that much English in only one Arabic word, written in English characters and Arabic numerals! Bey2ollak is a social mobile application that breaks famous routes into segments and enables commuters to rate each segment by how busy it is, on a scale from "Perfectly Clean" to "Too busy - Avoid". They use nice smiley faces, as in the picture. It also enables users to easily register an account and post comments. That is all. Recently they added more features, but I am talking about my past experience, not evaluating the current state of the tool.




Using Bey2ollak, before going to work or returning home, I used to check my route conditions and decide whether I should go now, by which route, or whether it was better to stay a couple of extra hours at work until the congestion resolved. Generally better than before, but I still had several pains:
  • The application does not propose different routes.
  • I have to design my route myself, by selecting all segments manually.
  • Commuters rate segments manually. (Later, they automated it if you open your GPS.)
  • There is no time estimation for the entire route.
  • Sometimes people report on the wrong route segment.
  • Many times reporters get very emotional and start cursing the traffic and the city!

The Google Maps app was available at that time, but it did not handle my city's traffic conditions.


In recent months, Google Maps added traffic conditions and trip time estimation. Now I set my destination address, and it displays different routes with time estimates. It highlights the parts of the road that have congestion and shows how much time they will cost me. Not only that: as I drive the selected route, it alerts me to better routes that become available. When the situation changes, it alerts me to new congestion ahead and tells me how much time it will delay me.



With all this live information, the emotional experience of my commute has changed completely. The frustration of uncertainty and limited visibility is eliminated. The road conditions are not improved at all, but now I have visibility and timely information to manage the situation and reduce its adverse impact on me. With Maps, my situation has changed. Even on difficult, busy days, which I cannot control, the information given by Maps is calming. I no longer have disastrous, ambiguous situations hitting me like unexpected mines. It is a relief that, whatever the route status, I have the power of knowledge to decide my route, with accurate predictions and choices.


Yesterday, it popped into my mind that our Agile approach (typically Scrum) has similar characteristics. It enables me to manage uncertainty and enables the team and Product Owner to decide what to do. Agile planning and tracking will not prevent problems (just as Maps does not), but it helps you track them and manage the uncertainty inherent in software development. Let me highlight some Agile/Scrum concepts that help:

  1. Relative size estimates, done by the whole team without political pressure from management, help the Agile team reliably quantify the amount of features required for the next release.
  2. A Done Definition that treats completeness as a 100% criterion forces the team to reliably finish valuable requirements completely in order to get credit for finished stories. If a story is not completely done per the Done Definition, they earn zero. This frees them from debating whether some requirement is 60% or 80% finished, and hence enables more reliable progress visibility.
  3. With every Sprint, we measure how many story points were completed. This gives a strong sense of our progress and our velocity relative to the required scope of work.
  4. If problems happen, and they will, we can re-plan the remainder of the release accordingly.
  5. With Burn Down charts and daily Standup Meetings, the team can spot impediments and delays very early and react very fast.

So these Agile concepts and tools do not eliminate the uncertainty or complexity of developing software, but they set the stage to navigate them reliably. They help in the same sense that Maps helped me manage the complexity and uncertainty of my commute. Or at least, this is what happened with me.



Wednesday, September 14, 2016

Joined Happy Melly

I just joined the @Happy_Melly network and became a supporting member.

Why is that important?
I love my programming work and find it a joy, and it was a misery to me that current management practices make people unhappy and disappointed unnecessarily. But I simply had no solution. I worked as a manager for 8 years and still could not find how to really manage the right way.

My exploration started by discovering Agile and Extreme Programming, where I found ways to find joy while developing great applications. What was really great was finding that joy at the team level.

Next, I discovered self-organization through Open Space Technology, via Harrison Owen's great book and mailing list. I learned that we are already self-organized and happy creatures, and poor management just delays us and makes us unproductive and unhappy. But something was still missing in the puzzle.

That missing piece was uncovered recently. I discovered Management 3.0 and found the whole concept of self-organization and Agility expanded to all areas of management, even areas that seem untouchable, such as performance appraisals and bonuses.

I am learning and exploring all the time; very soon I will share my new stories with you.

Be Powerful Be Happy

Friday, September 09, 2016

Session Review: Next Evolution of Agile Leadership Roles



Session:  Agile Project Manager, Product Owner and ScrumMaster are all broken - The Next Evolution of Agile Leadership Roles
At:     Holiday Inn Hotel ... City Stars, Cairo, Egypt
Date: 8 Sept 2016

--------


There was a Q&A session followed by a session by Ahmed Sidky. My comments here relate to Sidky's session.

In my opinion, he was presenting his experience managing ownership and accountability at his company, Riot Games.

First, he explained that, after joining Riot, he found the traditional Agile roles confusing; they made it difficult to decide who is really accountable for team results. That situation pushed him to experiment and come up with a different model.

Sidky broke responsibilities up into 35 distinct responsibilities. He presented them in the form of plastic cards, similar to Planning Poker cards but double the size.
At Riot Games, they created 4 leadership roles, as follows:
  • Team Captain: the head of the other three leads. He is accountable for the project, and he is the only accountable person among the four leadership roles. Later on, I will explain what is meant by being accountable.
  • Delivery Lead: responsible for delivery deadlines.
  • Product Lead: responsible for the product scope, something like a Product Owner, I think.
  • Craft Lead: like a Test Lead, Developers Lead, and so on. He or she is responsible for the craft quality and standards of the team's work.
Sidky gave each leadership role a color; for example, red is assigned to the Team Captain. He showed us physical hats in the different colors. The only problem is that the hats were a little bit too small to be worn by humans.

Of the 35 responsibilities, 10 are hard-coded to their roles and cannot be changed.

First, the project team conducts a workshop with team members. They lay out the other 25 responsibility cards and collaborate to assign each of these responsibilities to one of the hats/roles.

Then the team self-organizes to assign a team member to each role/hat. There may be additional conversations and negotiations until the team agrees on the result. The result is that all 35 responsibilities are distributed across the four roles, and a team member is assigned to each role/hat. Each team member can have one or more hats, or none at all, of course.
 
Regarding accountability, Sidky described three steps for handling poor performance. The first case is to conduct a meeting to understand the problem and what to do about it, now and in the future. There is no blame culture; anyone mentioned in the conversation is invited immediately.

Second poor performance case: just like the first one, though I expect the conversation to be more difficult.

Third poor performance case: here is where management takes action.

He mentioned that we cannot overlook the fact that there are poor performers and good performers. In some cases, you have to fire poorly performing members.

Note: the above description represents my understanding of what Sidky proposed. It may be incomplete or inaccurate. Your feedback is welcome.

My Personal Notes:
  • I find it OK to design your own roles in your organization. Feel free to break the famous Scrum SM/PO roles if it makes sense. Roles are not included in the Agile Manifesto's values or principles; they were described in Scrum, and many people find them useful.
  • As Sidky said, this is the Ri phase of inventing things. If you are an Agile beginner, be careful and follow well-known recipes such as Scrum or Kanban.
  • I find it very self-organizing, and matching the Agile spirit, to let the team collaborate on responsibilities and role assignment.



It was a fun event; Sidky has a sense of humor. Here are some pictures from the event.








Tuesday, August 23, 2016

"As a system developer I want to ..." User Story

I worked with a team that was used to having stories that represent development tasks. It was a mobile application with back-end development. I explained to the team that a User Story is simply a story related to a system user. The developer is not a system user; he is the developer of the system. User stories are the language used between the product owner and the development team to express valuable functionality. How is a developer's technical back-end task of any direct value to the customer or product owner? This representation breaks the core idea of the User Story. It clearly breaks the famous INVEST mnemonic created by Bill Wake: INVEST contains the letter "V", which stands for "Valuable", valuable as perceived by the end user. The story is a vertical slice of a cake that contains all the layers composing the cake. I really like this metaphor; at least it is tasty.

I was curious to know the team's motivation for doing that. I asked several questions to explore why these non-valuable stories were treated this way. After listening to their answers, I got it. First, such a story has size points, which gives the team some credit. Second, I found that the back-end stories were typically preparation for the next Sprint's stories. So they actually break a story's work across two Sprints: the first Sprint develops the back-end code, and the second Sprint focuses on developing the user interface functionality on top of that back-end work.

Oh my God. There are so many ways creative teams manage to inject the waterfall mindset into Scrum. They easily break principles and values for many weak reasons.

So let me stress: this is not a user story, and this way of dividing work between Sprints is not Scrum. A user story is meant to be developed entirely in one Sprint, including all required activities. In the experience described here, with 3-week Sprints, I would expect many integration bugs as well as task-switching waste.

In addition, the Sprint velocity measure is broken. It sums the valuable work of this Sprint with the non-valuable back-end work of the previous Sprint. Comparing one Sprint's velocity to another's, or managing the scope of the release backlog, no longer makes sense.

To summarize: the same story has user interface and back-end work, and all of it should be planned and done in the same Sprint. Exceptions are OK as long as they remain exceptions.

Friday, May 20, 2016

User Story is Not a Documentation Format

User stories are not intended to capture requirements completely. We know that written words are not sufficient to capture requirements, for two reasons. First, a lot of the knowledge behind requirements is very difficult to document and hence to consume. Second, a large part of requirements is not visible. It is like an iceberg: what you see is much less than what is hidden.
The point I am trying to make is that we conduct a workshop with users, developers, and testers to discuss, model, and elaborate on what should be done. At the end, and as a reminder, they write a brief description of it. Although I like the User Story format as an Agilist, I think the shared-understanding concept is much more important than the written words of the requirements.
Writing documentation that is brief is an objective in itself. We rely on short-term memory to preserve a lot of knowledge, some of which we may not even be aware exists.
So, replacing Use Cases or another requirements documentation format with the User Story format does not mean you have implemented the User Stories approach. You have to follow the original intent of the approach. One of the models that helps us understand it is Ron Jeffries' 3C model, which describes a User Story as Card, Conversation, and Confirmation.
By Card, it means a simple, small index card is sufficient to denote the User Story. One of the negative features of most requirements management software tools is that you can write any amount of content per User Story. The limits of a physical card constrain how much content can be written. Reaching the physical limits is a good indication that either you are writing a lot of unnecessary detail or the story is so big that it is worth breaking into two stories.
The second word is Conversation: a User Story is not written by some person and handed off to the team for implementation. It actually emerges from conversation between users, developers, and testers. It represents a shared understanding rather than a complete, contractual specification.
The third word is Confirmation: the assurance that all conditions and scenarios expected by users and testers are covered. It ensures we are thinking not only of the happy scenarios but of all scenarios and special cases.
There is no typical order of importance for the 3Cs. Rather, they are complementary, and together they assure that the User Story is not a requirements documentation format but a methodology for working with user needs that is aligned with the Agile mindset.
A User Story is usually written in a format like this:
As a <type of user>, I want <some goal> so that <some reason>.

The format may attract people with a comprehensive-documentation mindset to compare it with other formats. But the format is actually the least important part of the User Story.