Complexity is inherent in software engineering. There are a lot of things that can go wrong, and they often do. What can we do about that? Let's explore two of your tools, spikes and testing, and see how they can help with complexity.
This is part of an ongoing series on principle-driven consulting. Read the previous post here.
The scope of large applications, or of individual features, is often a moving target. When building an app, for example, I like to plan for as many of the inevitable “gotchas” down the line as I can. How? I intentionally use development time to build features within all areas of the app to better understand the complexity that may present itself later on. How can we think bigger?
Consider a vertical slice approach
The vertical slice approach takes a cue from the culinary world. If I had a cake and sliced it from top to bottom, I’d get a piece of every layer: some of the decorative bits on top, some of the frosting, some of each layer of cake, and any of the filling the baker added between layers. This idea can be applied to technical design as well.
Video game developers and designers create vertical slices that showcase complexity. They do this when they want to validate a concept for a game, or generate interest from potential publishers.
A vertical slice features all the elements and a bit of the complexity: there’s design, the story, the mechanics (especially the core mechanics), maybe some sound design, and perhaps a bit of the win and lose states.
Models and Spikes in Software Development
One of the best tools we have for modeling complexity in agile software development is the spike.
A spike helps us know what we don’t know. It’s the simplest version, not the most comprehensive; it’s a vertical slice for all intents and purposes. We are intentionally doing the simple version of a feature to understand the potential complexities down the road.
It won’t have all of the decorations, or “nice to haves.” It’s meant to provide feedback on the viability of an approach to solving the problem. In fact, spikes should usually be thrown away, because they aren’t built with the same rigor and standards as production code. You might not even write tests for your spike.
Spikes are also useful when exploring problems relating to scale. I ask questions like: How many users can this support? How large can the upload file be? How many records of this data have to be processed in real time? Of course, you’ll want the most representative sample of data to work with, which is where my last post in this series comes into play.
See if you can relate to a story like this... A few years ago, we started a new client project whose core feature was parsing Excel files. The client sent us representative samples of these files before we started the work. We had a pretty good idea of how we would approach the feature: we found a Ruby gem that seemed to do all the things, and it looked pretty straightforward. We didn’t, however, build a spike of the functionality. That meant we had no idea how long parsing a file would take, and no idea how much memory the process would consume (in this case, it scaled with the number of rows in the sheet).

Both of these factors ended up being extremely important! The application runs on Heroku, which means that if the process ran long, it would have to move to a background job. It also meant we had to bump up the size of the dyno just to process a single file, and there would be dozens of them.
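A spike for this could have been tiny. Here's a minimal sketch in Ruby of what that measurement might look like; `parse_rows` is a stand-in for the real gem call (in an actual spike you'd run the gem against a client-provided sample file, and you'd also watch the process's memory), and the row counts are illustrative:

```ruby
require "benchmark"

# Stand-in for the real spreadsheet-parsing gem call. We fabricate rows
# here so the sketch runs anywhere; a real spike would load a sample file.
def parse_rows(row_count)
  Array.new(row_count) { |i| { id: i, value: "cell-#{i}" } }
end

# The question the spike should answer before the feature is committed to:
# how does parse time grow with sheet size?
[1_000, 10_000, 100_000].each do |rows|
  elapsed = Benchmark.realtime { parse_rows(rows) }
  puts format("%7d rows: %.3fs", rows, elapsed)
end
```

An hour spent on something like this would have told us whether parsing belonged in a web request or a background job, long before dyno sizes became a surprise.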
I’m happy to say that we were able to solve many of the issues, and this client is still happy with the work. Having said that, we could’ve been more efficient initially if we’d spiked on the process earlier to understand it better. This experience actually became a catalyst for this principle.
Test, Test, Test!
Testing is core to software development. We write tests to continually validate the feature code we end up writing. It’s a development fundamental, and yet strangely, testing can be an afterthought, or maybe even a never-thought.
If you aren’t going to test everything, you should at least test the most important things following these guidelines:
1. Have a Test Suite: A test suite that covers your important business logic ensures that your team is immediately aware when a new change breaks it. It’s also becoming more and more common to see test suites that cover the experience a person would have as a user of the software. This end-to-end testing automates clicking around in your software and makes sure that your user experience still works. Good test coverage makes sure that your software’s most complex elements are still working, and working the way you want them to.
2. Write Tests Before You Develop: Tests are a great way for you to map out your approach to a feature. If you start with the simplest version of those tests, you’ll also start with the simplest version of your feature. Hopefully this helps you head complexity off at the pass, keeping your software simple and straightforward. Starting with the code first and testing later doesn’t always lead to more complex code, but it often does.
3. Test Effectively: If you have confidence in your tests, you can move quickly knowing that if you break something it’ll be quickly obvious. This also makes it possible for a team made of diverse experience levels to support the project. Tests are often self-documenting, which helps newer team members get up to speed quickly. People who are newer to the industry can contribute to the project with relative confidence because your test suite supports change without much risk.
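To make the test-first idea concrete, here's a small sketch in Ruby using Minitest (which ships with Ruby). The feature and its name, `total_upload_bytes`, are hypothetical; the point is that the tests describe the simplest version of the behavior before the code grows any more complex:

```ruby
require "minitest/autorun"

# Hypothetical feature: total the byte counts of a batch of uploads.
# The simplest version -- no streaming, no validation -- driven by tests.
def total_upload_bytes(files)
  files.sum { |f| f.fetch(:bytes) }
end

class TotalUploadBytesTest < Minitest::Test
  def test_empty_batch_totals_zero
    assert_equal 0, total_upload_bytes([])
  end

  def test_sums_byte_counts
    files = [{ bytes: 100 }, { bytes: 250 }]
    assert_equal 350, total_upload_bytes(files)
  end
end
```

Tests this small double as documentation: a newer team member can read them and know exactly what the feature promises.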
Unnecessary complexity is poison to a software project. Many projects have been delivered far past deadlines, or not at all, because they were overly complex. Not accounting for complexity will slow you down, cause stress among your team members, and could sink your company entirely.
Root out areas of complexity by identifying them early, spiking on them to gain understanding, and writing good tests to move forward confidently.