Sunday, October 4, 2020

Applying the 4 R's of conservation to Software Development

Refuse, Reduce, Reuse and Recycle. These principles work to reduce landfill, and they also work to ensure our code and documentation stand the test of time and do not end up in the virtual landfill.

Hopefully you'll see that applying each R will make it easier to apply the others.

Refuse
Do not write code you don't need to write. This applies at different levels; here are some examples:
  • Before starting to code, analyze the problem. Not every problem gets solved with automation.
  • Before automating a process, ensure it has been properly engineered. A bad process automated is still a bad process. The resulting code will be harder to maintain, as people may require drastic changes when they find problems with the process. And in the worst case, people won't be able to change the bad process because they depend on the system, eventually scrapping both the process and the software.
  • Always verify whether you need to write new code or something already exists. That is: don't rewrite code that already exists just because you don't understand or like the existing code.
  • Avoid automating one-off tasks, or tasks which take longer to automate than the time they save. However, this is a recommendation where one needs to balance being lazy (not writing unnecessary code) against being replaceable, by ensuring people can do what we do once we leave. So, I make an exception for things that are part of a formal process and may be hard for other people to understand.
Reduce
When it comes to code, less is more: less to maintain, fewer opportunities for bugs to creep in, and easier to extend.
  • Don't write code you "may" need. Write it as you need it. This is also known as YAGNI (You Aren't Gonna Need It).
  • The less code you write, the easier it is to refactor later as new needs arise.
Reuse
Reuse existing code. The likelihood that you are the first person to face a particular problem, or even that this is the first time you have faced it, is fairly low.
  • Search around for existing code, whether in libraries, frameworks, code on the internet, or your own past code.
  • Each project brings plenty of new challenges; by leveraging existing code, you can focus on those. And if, by looking at the existing code, you find it can be improved, then Recycle it.
Recycle
The fourth R encompasses four other R's:
  • Refactor existing code: As you change your code, refactor it to ensure it remains as clean and simple as you can make it. It is a virtuous cycle: it will be easier in the future to apply the R's of conservation.
  • Refurbish: As you refactor code, make an effort to leave it "as new". This includes removing old code, not just working around it.
  • Repair: Eventually operating systems, libraries, standards and needs evolve, breaking code that used to work. Frequently I see people doing back-flips trying to find ways to keep the old environment, when it may be easier and faster to fix the code to take advantage of the new environment.
  • Re-purpose: Whenever possible, refactor for Reuse. Identify patterns that emerge as you reuse code to create libraries, frameworks, or simply to eliminate redundant code. Examples include creating classes and functions from "in-line" code, introducing dependency injection, exposing extensible interfaces, and many others.
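To make that last point concrete, here is a hedged sketch in shell (the names and logic are invented for illustration, not taken from any real project): "in-line" code that used to be copy-pasted at every call site is re-purposed into a single function.

```shell
#!/bin/sh
# Hypothetical example: back-up logic that was duplicated in-line at every
# call site, re-purposed into one reusable function.

# backup_file <path>: keep a safety copy of a file next to the original.
backup_file() {
    cp "$1" "$1.bak"
    echo "backed up $1"
}

# Call sites now share one implementation instead of duplicated code.
echo "some data" > notes.txt
backup_file notes.txt
```

Once the function exists, improving it (logging, error checking) improves every call site at once, which is exactly the virtuous cycle described above.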
You will see that if you and your team follow these R's, you will end up writing less code and having a cleaner code base.

(I've updated this post which was originally published in April 2014)

Saturday, September 26, 2020

Coding is easy, creating Enterprise level code is hard

In the first chapter of The Mythical Man-Month, Frederick Brooks describes the progression from a program, to a programming product, to a programming system, and finally to a programming system product, and how each step increases the complexity of software development. I personally think he low-balled the differences: these days, the gap from one extreme to the other is not a single-digit multiplier but orders of magnitude.

This may not be obvious to the casual or beginner programmer who can do wonders by knowing a language, a platform and having an itch to scratch.

This came to mind recently when I was figuring out a way to have a separate background picture on each monitor of my Linux Mint system at home. Unfortunately, this is not default functionality in the Cinnamon desktop environment.

Very quickly I realized that I could create a large picture composed of two pictures and set it to span across the monitors. Then I figured out that I could use the same technique to span a single picture across the monitors. I opened my image editor and created the background. Task done! ... or was it?

If you've read my other posts, you know that I am lazy. I didn't want to manually edit each picture I wanted to put as background, so I figured out I could do it with three ImageMagick commands. One to scale and crop the image(s), and two to assemble them into the final image. 
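As a rough sketch of what those commands might look like (the file names, resolution, and even the exact split between scaling and assembling are my illustrative guesses, shown as a dry run that prints the commands rather than executing them):

```shell
#!/bin/sh
# Hypothetical reconstruction of the ImageMagick pipeline described above.
# The resolution and file names are made up for illustration.
MON=1920x1080   # assumed per-monitor resolution

# Scale each picture to fill its monitor, then crop the overflow:
# "^" makes -resize fill the box, and -extent crops to the exact size.
SCALE_LEFT="convert left.jpg -resize ${MON}^ -gravity center -extent $MON left-tile.png"
SCALE_RIGHT="convert right.jpg -resize ${MON}^ -gravity center -extent $MON right-tile.png"

# Join the tiles side by side into one image that spans both monitors.
ASSEMBLE="convert left-tile.png right-tile.png +append spanned.png"

# Dry run: print the commands instead of running them.
echo "$SCALE_LEFT"
echo "$SCALE_RIGHT"
echo "$ASSEMBLE"
```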

Awesome, but... I have hundreds of pictures from my last trip to Asia and I want to rotate them as my background. Executing those three commands for each picture is still faster than editing each picture manually, but I am lazier than that. I decided to create a little bash script.

The first version of the script was about 30 lines of code, and it did what I wanted it to do. After all, it is a very simple task: execute three commands. I had my program. Yeah!

Still, that was 10 times the original 3 lines of code.

The result was really pleasing and I remembered that while searching for the original solution I had found several people wanting to do the same. So being the nice guy I am, I decided to share it with others.

Sharing it meant that now I had to:

  • account for different monitor resolutions and configurations, (+18 lines of code)
  • read parameters, (+75 lines of code)
  • do parameter validation, (+38 lines of code)
  • do error checking,
  • consider edge cases,
  • ensure that dependencies were installed, (+7 lines of code)
  • follow standards,
  • create some help, (+47 lines of "code")
  • tidy up the code to ensure it was readable by others (code elegance should not be understated),
  • create a git repository to keep track of future versions and be able to share it.
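For a flavour of where some of those lines go, here is a minimal, hypothetical sketch of parameter reading, validation, and a dependency check. The option names and defaults are invented for illustration; the real script is far more thorough:

```shell
#!/bin/sh
# Minimal sketch: read parameters, validate them, check dependencies.
# Option names and defaults are invented for illustration.

usage() { echo "usage: $0 [-r WxH] [-o output]"; }

RESOLUTION=1920x1080
OUTPUT=background.png

# Read parameters.
while getopts "r:o:h" opt; do
    case $opt in
        r) RESOLUTION=$OPTARG ;;
        o) OUTPUT=$OPTARG ;;
        h) usage; exit 0 ;;
        *) usage; exit 1 ;;
    esac
done

# Validate parameters: the resolution must look like WIDTHxHEIGHT.
case $RESOLUTION in
    *[0-9]x[0-9]*) ;;
    *) echo "invalid resolution: $RESOLUTION" >&2; exit 1 ;;
esac

# Ensure dependencies are installed (warn only, in this sketch).
command -v convert >/dev/null 2>&1 || echo "warning: ImageMagick not found" >&2

echo "would build $OUTPUT at $RESOLUTION"
```

Even this toy version shows how quickly the "ancillary" code dwarfs the three commands that do the actual work.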

In total, the script right now is 300+ lines of code. If you are keeping track, that is about 10 times the original script and 100 times the original 3 commands I had to execute.

And this is just a personal script which does the basics I want it to do. I have a small list of other things I'd like the script to do, so eventually it will grow, but for now, this is good enough.

If this were an official release (a programming product), I'd probably need to test it in a variety of environments, different monitor setups, different video cards, even different desktop managers, and create an installation package. All that with the associated documentation, error checking, parameters, edge cases, etc., probably adding another order of magnitude. Just to show a background image!

If this were a function required for an enterprise-level system, I would also need to worry about security, additional standards, logging, decomposition and integration with other components, versioning, automated unit testing, integration into a DevOps workflow, and many other ancillary tasks.

Even more important: given that more people are involved in the creation of an enterprise system, a developer needs to understand the finer points of social interaction to be truly successful.

I hope you can now see how knowing a development language is just a tiny portion of what a developer must know to create production-ready, enterprise-level systems, and I also hope this blog is helping you become a better developer.

Friday, September 9, 2016

You should change your culture and mindset to successfully implement Agile and DevOps

As with many other practices, Agile is more about the developer mindset and team culture than about the tools or methodologies. While the latter can support Agile development, without the proper mindset and culture they are bound to become just another checklist tool, adding to the work instead of reducing it.


Throughout my time as a developer and leading teams, it has been interesting (to say the least) to find developers who can code the whole day without ever compiling, much less running and unit testing, and, rarer still, integrating the code. This may come from practices that were appropriate (and may still be appropriate) when execution time had a high cost, when people had to schedule their runs or, going even further back, when developers didn't have "execution rights" on their own code. While for most projects and most environments that is no longer the case, people who lived through those times still feel this is a sound practice, and it gets carried on to new developers as part of the team culture.


For Agile and DevOps to truly add value, developers need to embrace not just the tools and methodologies but the mindset and culture of continuous feedback and collaboration.


Culture and mindset changes do not happen overnight. Under pressure we tend to fall back to our old patterns. When I decided to learn to touch type, or to use the mouse with my left hand, I certainly worked more slowly, and in times of pressure I felt tempted to fall back to finger typing or to switch back to the right-hand mouse. It was by forcing myself to stick to my plan that I eventually mastered both. After a small dip, my productivity increased substantially. Now I can just think about what I want to type without worrying about what my fingers do; they just move.


But it is not just the personal changes that need to happen; the environment also needs to adapt. From the physical environment (where we sit, where we meet, how we collaborate) to changes in the legacy code we are working with.


Here is where we need to accept some temporary pain. There is a chicken-and-egg conundrum between continuous unit testing and refactoring: it is easier to refactor code when it is properly structured to be "unit testable", but legacy code tends to have evolved to be "un-unit-testable": big chunks of code with multiple code paths that do more than one thing. This leads to fear of refactoring, as it is difficult (or even impossible) to properly unit test the changes. Developers and managers need to understand that here they will need to slow down the development cycle because of the inherent risk of the refactoring. The reward will be refactored code which brings higher productivity down the road.
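As a hypothetical illustration in shell (the names are mine, not from any real codebase): when the pure logic is tangled with side effects it cannot be tested in isolation; extracting it into its own function makes it unit testable, which in turn makes further refactoring safer.

```shell
#!/bin/sh
# Before: parsing and side effects tangled in one chunk (untestable), e.g.:
#   line=$(head -1 usage.log); size=${line%% *}; echo "$size" > size.txt
# After: the pure part is its own function, testable without side effects.

# extract_size <line>: return the first whitespace-separated field.
extract_size() {
    echo "${1%% *}"
}

# The side-effecting wrapper now composes small, testable pieces.
record_size() {
    extract_size "$1" > size.txt
}

# A tiny unit test, possible only because extract_size is isolated.
[ "$(extract_size "1024 bytes used")" = "1024" ] && echo "extract_size: ok"
```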


I will close by extending my old saying "Don't bring a tool without a practice" and adding "and ensure that the mindset and culture change to adapt to that practice".

Tuesday, October 14, 2014

Assumptions are not facts

I was recently in the unenviable position of recommending a change of direction for a large set of projects after 6-8 months of planning.

For clarity's sake: by large I mean a 7-to-8-figure, multi-year program. By changing direction I mean changing one of the core technologies and reorganizing the order in which we were building the system, bringing some deliverables a year ahead of the original schedule.

The original plan and technology recommendation were made before I joined the team, by people who I respect a lot. So, why the change?

I must clarify that I normally start with the assumption that the people before me had good reasons to make the decisions they made. But following what you will read later in this post, I decided to ask "why?". I needed to understand those decisions to be able to present them to senior management.

What I eventually realized was that the original plan and technology were based on a set of assumptions that hadn't been validated in 6 to 8 months; meanwhile, the understanding of the system had increased and there had been important changes to the organization.

The team was under pressure and hadn't stopped to validate the assumptions. What's more, many people on the team didn't even understand the reasons behind the current direction.

At that point I recommended to STOP and verify assumptions. We brought the team together and brainstormed assumptions, identified conflicting assumptions and, most importantly, discarded the incorrect ones. With a new set of assumptions, we verified the original decisions and agreed that we would benefit from changing direction.

Assumptions are not facts
A very common problem with assumptions is that, after a while, they are taken as facts.

The most effective way to eliminate this problem is to avoid assumptions altogether and make our decisions purely based on facts. As the old saying goes: if you assume, you'll end up making an ASS of U and ME.

This is of course, easier said than done. Frequently we need to make decisions even when we don't have all the facts.

Here is what has worked for me:
  • Identify assumptions
    This is probably the hardest part, as not all assumptions are relevant and most of us do not even realize when we are making one. A good way to collect assumptions is to start from the decisions. Ask "Why?"; if there is no concrete evidence for the answer, then it is an assumption. Another way is to ask "under which circumstances would you change the decision?". Usually the answer highlights the assumption.
  • Document
    Write down and share what the assumption is but, most importantly, the consequence or impact of that assumption. That is: if the assumption changed, how would the decision change?
  • Verify
    Set aside time to prove or disprove an assumption. If you can, then there is one less assumption to worry about.
  • Challenge
    By challenging your own and other people's assumptions you avoid inertia.
  • Review
    It is a good practice to review assumptions at predefined stages in the development cycle. Even more important is to review them when you identify change.
  • Act
    If an assumption changes or is disproved, bring it up for discussion, review the decisions made based on that assumption and if necessary, recommend changing direction.
Doing the 180
It is not easy to do an about-face after having given all your arguments for the current decisions. However, having clearly stated assumptions can reduce the pain, and it certainly shows people that you are a professional who is not afraid to change direction when conditions change.

Wednesday, October 8, 2014

I'm back blogging. This time more agile

I'm back blogging. After a few months' hiatus, I've decided to start posting again. At first I thought going on vacation had changed my rhythm; then I thought it was the workload of an important project, or maybe the summer.

Finally I realized what it was: I choked trying to do too much at once.

My original plan for this blog was to write my thoughts as they came. Short posts that wouldn't bore people. Over time I would add related thoughts, and eventually those thoughts would be connected through the labels.

In methodology terms: I was planning to write things "agile". Write as much as I had in my mind.

Then I came to the topic of troubleshooting. I had a simple thing to say, but as I wrote I thought I could add more and more; then I realized I had to add structure, and reword things and... and... I got stuck. Instead of publishing what I had and improving it later, I went for a big-bang approach.

If that sounds familiar it may be because that's how many software projects get cancelled. They start with a simple, great idea but instead of implementing it and making it evolve, we try to improve the idea and add things to make it perfect on the first pass. As the idea gets reviewed by other people, new features are added to the list. We start thinking bigger, and bigger needs better architecture. As it gets better we need more input; after 6 months or a year of reviews and talks and meetings, still, nothing to show for it.

Most likely, releasing early would have shown whether the idea was good or not, and the features that were really missing would have been added over time. Six months or a year later, after learning from our mistakes, we would have a system that we could improve, refactor, even re-architect, but we would be doing it with real usage data, not pipe dreams from meeting rooms.

So, that's it, I have decided to start posting again. Sometimes the ideas will be raw, most likely I will get some wrong, but as I get feedback and receive comments and discuss with other people, I may come back and review them, or even write new posts with a clearer idea.

May this post be the first where I spill my raw thoughts.

Monday, May 26, 2014

Testers are my best friends

Some developers think that testers are mean. "How dare testers misuse this piece of software and cause a bug?" Hence, they try to direct the testers: train them in the application, even tell them how to test and what to test, including "where not to click". Sometimes that works with more junior testers.

Unfortunately, if the testers do not use the application in ways "it wasn't meant to be used", you can be sure that users will, and they will find those undiscovered bugs. I would rather have a tester find a bug than have a user find it.

To make matters worse, those undiscovered bugs tend to come from gaps in the specifications, which say what the system should do but not what it should not do, or how it should react to unexpected situations.

Experienced testers, on the other hand, know better and will not fall for the developers' complaints. They know that this level of testing for the unexpected is important for two reasons:

The first reason has always amazed me: users will find ways to use an application for purposes, or in ways, you never even thought of, even making features out of bugs.

The second reason is better known as Murphy's law. One day I was in the lab helping test a stand-alone desktop application. At one point the tester told me: "Now, save the transaction." And as I pressed Enter, she proceeded to unplug the computer. I went cold: How would the system react? Would it corrupt the data? Would the system even start again? While that simple test on its own wouldn't be enough to answer those questions, at least it made me ask them and review the design to ensure we knew how the system would react. Now, every time I go through a website that requires me to follow a set of steps, and I suddenly lose connectivity and am forced to start over, I realize that the testers of that site didn't test for Murphy's law.

That experienced-tester mentality can help us even before a single line of code has been written. If you have seasoned testers during specification review, they will ask questions to understand how the system will react to the unexpected. Remember: the sooner you find a potential bug, the easier it is to avoid it.

In my personal experience, if you make testers your best friends, you will end up saving a lot of money and, most importantly, they will help you keep your reputation as a great developer.

Tuesday, April 22, 2014

Making choices

A good enough choice made on time is better than a belated best choice

This has been my motto when making decisions. Do not get paralyzed by the fear of not making the best decision, as long as you make a good enough decision on time.

After all, software systems are complex and there is hardly ever a "perfect" answer to a problem. If, by spending 20% of the effort, you can get an answer that is 80% good enough (and probably the right one anyway), why waste your time and everybody else's?

Be lazy: Reduce the effort you put making choices. 

Of course, we know that making a quick decision does not guarantee a good enough answer:

- The teacher asks little Johnny: "Johnny, quickly, what is 2 times 5?"
- Little Johnny immediately replies: "It is 11, teacher."
- Teacher: "Johnny, that's wrong!"
- Johnny: "You asked for speed, not accuracy."

But there are techniques to help you make quick, "good enough" decisions.

Some of those are explained on the TED talk How to make choosing easier.

In future posts I will expand each of them showing specific examples on how to apply these techniques to software development.

Cut - Reduce the number of alternatives.
Only decide between options that make a difference. In fact, in some cases you can reduce the options to 1, which reduces the effort to 0. To help you cut down the options, you can use patterns, frameworks, standards and guidelines.

Concretize - Make it real.
When evaluating options, weigh them against the actual project requirements to make the final decision relevant.
I've seen people evaluate designs, libraries or tools based on "features". While that makes for a good magazine article, it is usually misleading and leads to a less than optimal decision. Compared feature by feature, one of the options may have more customizations, or better support, or be the favourite of (insert your tech guru here), but it may not be the best option for your project.

Categorize - We can handle more categories with fewer choices in each.
Don't try to make all decisions at once. Categorize the decisions by impact, area of the system, or area of responsibility.
A very common trap when making architectural or design decisions is trying to make all of them at once. There are so many decisions to make that we get paralyzed and feel that we cannot start the project until we have made the right choice for each of them.

Condition for complexity - Gradually increase the complexity of the decisions.
This one goes along with the three techniques above. After cutting, concretizing and categorizing, start by making the decisions with fewer options first. That will give you a sense of accomplishment and will probably drive other choices down the road. In fact, having many options usually means that the difference between them is not that big.

But wait, there is more!
As you see, all of these techniques provide frameworks to help you make decisions, but they are not the only ones. In the future, I will tag as "Choices" any posts I identify that can help you make better decisions.

Do you have your favourite techniques?