Welcome Changing Requirements
Here's another post about one of the 12 agile principles. Enjoy!
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Does that make anybody else cringe?
This is clearly for our customer's benefit, not the developer's. The customer is named explicitly in the second sentence and implicitly in the first. It's a nice goal, but really, what's the deal with changing requirements so late in the game? How is that supposed to work? How do we do that?
Let's back up a bit. If you look at the history of computing machinery, you'll see that the early equipment was a bunch of logic units connected with wire-wraps on the backplane of some rack-like beast, or with plug-boards. A new "program" required rewiring the backplane to arrange the logic units in a different manner. It took a while before the logic units were small enough and cheap enough that an additional layer of logic could be added to do the rewiring based upon some form of machine code. In this vein, software was born.
This was obviously a major leap in computing. Rewiring the backplane was a tedious and problematic process. Imagine trying to debug something like that if you got even one wire wrong! And how much time do you think it would take to make changes to a program? How much risk would there be in making changes to a working program!?! That perspective is as important today as it was in the early days.
I'm a developer. I write software. Soft-ware. It is supposed to be easily malleable, otherwise it would be called hard-ware!
Ok, enough ranting about how we got here. We live in the future. No flying cars (yet), but we're close to Johnny Cabs at least. Our customers have certain expectations of what we deliver, and we should have certain expectations of ourselves. Our software should be more easily changeable.
What makes changing code hard?
This list isn't comprehensive by any means. This is just my personal list, ordered from least to most significant.
Undocumented / Badly Documented
There are some functions which seem perfectly clear in their syntax, but come with no explanation of what the code actually does or the principle upon which it works. The code for a bubble sort looks surprisingly similar to that of a quick sort, or of a binary search. It's a bold developer who expects future maintainers to execute their code just to find out what it does.
This also includes "magic numbers" or other similar practices requiring divine providence.
Just because you've turned it into a constant doesn't make `private static final int TWENTY_TWO = 22;` useful. This is especially true when time has changed the line to `private static final int TWENTY_TWO = 47;`.
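To illustrate, here's a minimal sketch (the class and constant names are my own invention) of naming the same value by purpose instead of by value:

```java
// Hypothetical sketch: the same magic value, named by purpose rather than by value.
public class RetryPolicy {
    // If the requirement later changes 22 to 47, only the value changes;
    // the name still tells the truth.
    private static final int MAX_RETRY_ATTEMPTS = 22;

    public static boolean shouldRetry(int attemptsSoFar) {
        return attemptsSoFar < MAX_RETRY_ATTEMPTS;
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(3));   // true
        System.out.println(shouldRetry(22));  // false
    }
}
```

When the limit changes, `MAX_RETRY_ATTEMPTS` stays honest; `TWENTY_TWO` becomes a lie.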
The flow of control in a non-trivial application can be a challenge to understand at times. You'll often find yourself reciting a version of Dem Bones when describing how the Database is read by the `ORMThingy` into a DTO, and the `ORMThingy` is invoked by the `DTOManager`, and the `DTOManager` by the `WidgetController`, which refreshes the `WidgetViewSingleton`, which repaints the...
When the outputs of one chunk of logic aren't clearly delineated from the inputs of the next, you get coupling. You can't change one thing because there are 17 other things that would have to change as well. The other way this happens is through the incorrect distribution of responsibilities. Let's say you have a data object that needs its fields populated from 4 different sources. If the code has step 1 calling step 2 calling step 3 calling step 4, you've tangled all those responsibilities together into a big steaming mess. Far better to make each step separate, then have the combining (and any other logic needed in the sequence) be the responsibility of some other object.
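A sketch of that separation, with hypothetical names: each source populates its own fields independently, and a separate assembler owns the sequencing.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each field source is a separate step,
// and the assembler (not the steps) owns the order of operations.
interface FieldSource {
    void populate(Map<String, String> record);
}

class DataAssembler {
    private final FieldSource[] sources;

    DataAssembler(FieldSource... sources) {
        this.sources = sources;
    }

    Map<String, String> assemble() {
        Map<String, String> record = new HashMap<>();
        for (FieldSource source : sources) {
            source.populate(record);   // each step knows nothing about the others
        }
        return record;
    }
}

public class AssemblerDemo {
    public static void main(String[] args) {
        DataAssembler assembler = new DataAssembler(
            r -> r.put("name", "Ada"),               // e.g. from a database
            r -> r.put("email", "ada@example.com")   // e.g. from a directory service
        );
        System.out.println(assembler.assemble());
    }
}
```

Swapping, reordering, or adding a source now touches only the assembler's constructor call, not the other steps.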
Here's a hint: code which consists largely of keywords like `continue` needs to be refactored.
The code should represent a generalization of the problem space's solution, not a duplication of it. It may be tempting to say "lines 234-237 were added to cover requirement WWSD-39248", but having code match requirements one-for-one is a recipe for disaster. How can you deal with changing needs if every need is so explicitly laid out? It's just not manageable, because of how verbose the code becomes.
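One hedged sketch of what generalization can mean in practice (the shipping-rate scenario and names are invented for illustration): express the specifics as data and keep the code general.

```java
import java.util.Map;

// Hypothetical sketch: rather than one if-branch per requirement,
// generalize to a lookup that new requirements extend with data.
public class ShippingRates {
    // Adding a region is a data change, not a new branch of code.
    private static final Map<String, Double> RATE_BY_REGION = Map.of(
        "domestic", 5.00,
        "canada", 9.00,
        "overseas", 24.00
    );

    public static double rateFor(String region) {
        Double rate = RATE_BY_REGION.get(region);
        if (rate == null) {
            throw new IllegalArgumentException("Unknown region: " + region);
        }
        return rate;
    }

    public static void main(String[] args) {
        System.out.println(rateFor("canada")); // 9.0
    }
}
```

A new requirement for a new region becomes a one-line data change instead of yet another conditional.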
Note that this is different from what people call Cognitive Complexity. That's a different problem, where control structures dominate a method, introduce layers of nesting, and add mental overhead because the reader has to maintain mental state for every line of code.
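As a small illustration of that nesting overhead (an invented example, not from the original post), compare a nested conditional with the same logic flattened into guard clauses:

```java
// Hypothetical sketch: guard clauses replace nested conditionals,
// so the reader carries less mental state per line.
public class DiscountCheck {
    // Nested version: three levels of state to hold in your head.
    static boolean eligibleNested(boolean active, int orders, double total) {
        if (active) {
            if (orders > 3) {
                if (total > 100.0) {
                    return true;
                }
            }
        }
        return false;
    }

    // Flattened version: each early return discharges one condition.
    static boolean eligibleFlat(boolean active, int orders, double total) {
        if (!active) return false;
        if (orders <= 3) return false;
        return total > 100.0;
    }

    public static void main(String[] args) {
        System.out.println(eligibleNested(true, 5, 150.0) == eligibleFlat(true, 5, 150.0)); // true
    }
}
```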
Formatting and layout can be fixed with any modern IDE. That's not what I'm talking about, although properly formatted code certainly helps. I'm talking about code which looks like it has been run through a minifier for a different language. Code that makes you question your trust in the compiler or interpreter. Code that makes you wish there was a reasonable spell checking capability for variable and method names. Code that uses exceptions for flow control, or other bastardizations of the language. Sure, it works, but your mental model of the language gets warped when trying to debug. If it is hard to read, it's hard to change.
Have you ever opened a source code file and jumped back in shock? "What on earth happened here?!?" you may think to yourself. Momentarily you may entertain the notion of "fixing" the code, but only for a moment. Then the shock transforms into fear at the thought of cleaning up the mess. You KNOW that if you make changes to it you will certainly break it. And once you break it you own it. It will own you, just as the monstrous creature owned Dr. Frankenstein. You fear breaking it more than you fear the consequences of its existence in the system.
How do we make changing code easier?
Again, not comprehensive...
What, an agile person advocating for more documentation?!?
No, not more, just enough of the right documentation. Methods should have clear names describing the purpose they serve. If their implementation isn't immediately obvious, explain it: say that you're encoding using a variant of the LZW compression algorithm, for example. It doesn't need to be overly specific, since documentation has to be maintained, which adds overhead as changes are made. But someone unfamiliar with the code should be able to get the gist of what's going on quickly. Give descriptive names to your variables, and name constants by their purpose, not their value; value-named constants are essentially hardcoding, which should be handled carefully.
A module should be able to stand on its own. If it truly needs other modules in order to do its work, that dependency needs to be very carefully managed. Most importantly, you need to ensure that the dependency points toward the thing which is less likely to change in the future.
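A minimal sketch of pointing the dependency at the stable thing, using invented names: the notifier depends on a small, stable interface, while the volatile concrete sender can be swapped freely.

```java
// Hypothetical sketch: depend on a stable abstraction,
// not on the concrete (and more volatile) implementation.
interface MessageSender {              // stable: unlikely to change
    void send(String recipient, String body);
}

class ConsoleSender implements MessageSender {   // volatile detail, easily swapped
    public void send(String recipient, String body) {
        System.out.println("To " + recipient + ": " + body);
    }
}

public class Notifier {
    private final MessageSender sender;

    Notifier(MessageSender sender) {   // the dependency points at the abstraction
        this.sender = sender;
    }

    void notifyUser(String user) {
        sender.send(user, "Your order shipped.");
    }

    public static void main(String[] args) {
        new Notifier(new ConsoleSender()).notifyUser("ada");
    }
}
```

Replacing `ConsoleSender` with an email or SMS sender later changes nothing inside `Notifier`.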
Clearly there is value to `if` statements, otherwise most languages wouldn't implement them. However, tons of conditional logic should be identified as a code smell. In fact, most code smells are the identification of things which are functional but probably need to be refactored into a more flexible pattern. Learn some smells, learn some patterns. They tend to go hand-in-hand.
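One classic pattern for the conditional-logic smell is Replace Conditional with Polymorphism. Here's a hedged sketch with invented shape classes:

```java
// Hypothetical sketch: no switch on a type code; each shape knows
// its own formula, so adding a shape touches no existing branch.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());
        }
    }
}
```

Adding a triangle means adding a class, not editing a switch that every other shape already flows through.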
If your team hasn't settled on a formatting standard, do that now. Easy fix. Use good names for things. Spell them properly. Yes, that includes database column names. Follow best practices and patterns. Perform thorough code reviews (or even pair programming) to have another person weigh in on your "elegant" solution.
This is the big tuna of all the points to be made here. If the code is untested and it is bad, you likely will NOT try to improve it. You will fear it. The only possible outcome is that the code will rot and contaminate all the other code that touches it. Should this go on for too long, the whole system will become diseased. Automated testing allows you to have confidence in the code, and new code should be written so that it can be tested automatically. (Corollary: legacy code is code without tests.) Once you have tests in place, you can refactor or rewrite with confidence. So long as your tests still pass, you can be fairly sure that you haven't broken anything.
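A bare-bones sketch of such a safety net, using plain `assert` statements and an invented `slugify` function (run with `java -ea` so assertions are enabled):

```java
// Hypothetical sketch: a plain-assert safety net around a small function,
// so a later refactor can be verified by rerunning it.
public class SlugTest {
    // The code under test: turn a title into a URL slug.
    static String slugify(String title) {
        return title.trim().toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")
                    .replaceAll("(^-|-$)", "");
    }

    public static void main(String[] args) {
        // These assertions pin down current behavior; if a rewrite of
        // slugify() still passes them, confidence in the change is high.
        assert slugify("Hello World").equals("hello-world");
        assert slugify("  Agile, in 2024!  ").equals("agile-in-2024");
        System.out.println("all tests passed");
    }
}
```

A real project would use a framework like JUnit, but even this much is enough to make a rewrite of `slugify` a low-fear exercise.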
This focus on testing feeds into the other areas as well.
- Loosely coupled systems make testing easier, since you can evaluate one module at a time, with any dependencies typically mocked. Highly coupled systems make testing harder, so developers who practice Test-Driven Development naturally produce more loosely coupled systems.
- The tests serve as a form of documentation as well. How do you do a thing on a particular module? There should be a test case for each of the 3 ways to do it. What better documentation could there be for your use cases than examples which use your code?
- Making more generalized solutions becomes easier when you break the strict bonds between requirements and code. Again, you can refactor all the specific cases to more general ones in your code, and ensure that the specifics are satisfied by your tests instead. Your test then serves as the specification of what your code should do instead of your code serving this purpose.
- Calling your code from a test will help identify when the interface needs work. Then you can see when a method requires too many parameters, or has misspellings, or otherwise does weird things. It won't clean the code for you, but it should be easier to see the ugliness.
So, we've somehow managed to go from "welcoming changing requirements" to advocating automated testing and test-driven development. I know this may seem counterintuitive.
"We don't have time for all these tests; we need to get to market faster."
Getting to market faster is a worthwhile goal, but what if the market needs change? How fast can the team adapt if they can't confidently make changes to the code they've already written? You may start out faster (this is debatable), but as time goes on the cost of changing code will increase and it will take longer to meet future needs again and again. It's a very expensive proposition in the long-term.
"Every time we change something the tests break"
This is a complaint I hear frequently, specifically about testing against the UI. I may be in the minority on this point, but writing tests against the UI seems like a silly thing to do. If we are implementing things in vertical slices, then by definition some aspect of the UI should change with every story we implement. The trick, it seems, is to avoid testing the UI in a way where shifting something 3 pixels to the right would cause a problem. More semantic or descriptive testing of the underlying infrastructure which supports the UI would likely serve the same purpose, but prove significantly less fragile. The minutiae of the UI seem to be the one place where automated testing has less value, since this is the upper-most part of the system, which you can easily test with your eyes.
Should you have this "tests breaking constantly" problem, you probably aren't testing the right things anyway. Make an overall "smoke test" to ensure the application still functions and move on. Once the application's UI settles down a bit, you can more easily add specific, targeted UI tests for sanity checks. Future sweeping changes can be made where everything is verified by a human, then a new baseline can be taken for future testing.
Requirements are going to change. No matter how fast you code, the world changes faster. Instead of complaining about how the world is, focus efforts on how to adapt to the winds of change.