In this series we will look at how we can improve the design of a piece of legacy code in order to add new functionality and make the code easier to maintain. First, let’s take a look at the code we’ll be dealing with.
https://github.com/aweidner/monty-hall/blob/master/index.html
If you’re not familiar with the Monty Hall problem, take a moment to read the Wikipedia article. The simulator in the GitHub repo helps convince us of the counterintuitive result by running a few thousand simulations and showing that the probabilities of the various outcomes converge to the expected values.
Suppose you are the maintainer of this project and a request comes in from a user to make this simulator work for N doors with N-1 goats instead of being hard-coded to 3 doors with 2 goats. Monty will still open only one door, but there could be many more than 3 doors to choose from. How should we go about implementing this change?
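To make the request concrete, here is a minimal sketch of the rule being asked for, assuming N doors, one car, and N-1 goats. This is only an illustration of the desired behavior, not code from the repository and not how we will end up changing it:

```javascript
// Sketch of the requested rule (not the repository's code): with numDoors doors
// (numDoors >= 3), the contestant picks one door, Monty opens exactly one other
// door that hides a goat, and the contestant may then switch to another closed door.
function playRound(numDoors, switchDoors) {
    const carDoor = Math.floor(Math.random() * numDoors);
    const firstPick = Math.floor(Math.random() * numDoors);

    // Monty opens one door that is neither the contestant's pick nor the car.
    let montyDoor;
    do {
        montyDoor = Math.floor(Math.random() * numDoors);
    } while (montyDoor === firstPick || montyDoor === carDoor);

    if (!switchDoors) {
        return firstPick === carDoor;
    }

    // Switch to a random closed door other than the first pick and Monty's door.
    let finalPick;
    do {
        finalPick = Math.floor(Math.random() * numDoors);
    } while (finalPick === firstPick || finalPick === montyDoor);

    return finalPick === carDoor;
}
```

With numDoors set to 3, this reduces to the classic game the simulator already plays.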
Why refactoring is necessary
In short, we need to:
- Figure out which behavior is hard-coded around the assumption of 3 doors
- Create the necessary abstractions to handle more than 3 doors
- Test our application to make sure the new functionality works
In order to accomplish the second step, we’ll have to refactor the existing abstractions and potentially build new ones. Unfortunately, if you look at the repository (starting at commit 94c1c45), you will note that there are no tests. That means that while we change and build our underlying abstractions, we’ll have no way of knowing whether our changes are safe until we test manually. Moreover, we’ll have to be very careful to run a comprehensive set of manual tests after each change to make sure we didn’t break anything.
Working with legacy code
Clearly, manually testing after every change is unreasonable. We need to build in some automation to make our job easier. Luckily, we can reach for the techniques in Working Effectively with Legacy Code. We are going to use three strategies:
- Make a few non-breaking changes in order to get the existing code into a test harness
- Write high level characterization tests to ensure that the functionality of our application continues to work as we move forward
- Use the seam model to support and guide our refactoring
Getting our code into a test harness
In order to start writing our characterization tests, we need to get the code into a test harness. Unfortunately, we will have to manually test any changes we make at this stage because we don’t yet have any test support.
The first step is to extract the JavaScript for the simulator into its own .js file so that we can load it independently of the HTML.
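Assuming the extracted file is named monty-hall.js (a hypothetical name for this sketch), index.html only needs a script tag pointing at it where the inline simulator code used to live:

```html
<!-- index.html: the inline simulator <script> block is replaced with a reference -->
<!-- to the extracted file; the file name here is just an assumption for the sketch -->
<script src="monty-hall.js"></script>
```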
After testing manually, we can see that everything still works as expected, meaning our extraction was a success. This is an example of a good change to make without unit tests because it is relatively low risk. Changes like this have a low chance of causing unexpected consequences and are good targets for our initial time investment before we have a comprehensive test suite.
QUnit
We’ll be using QUnit for testing. We create the test.html and test.js files and add a “nothing” test to make sure everything is running correctly.
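A minimal sketch of what that “nothing” test might look like in test.js, using QUnit’s standard test and assert API:

```javascript
// test.js: a "nothing" test that only verifies the QUnit harness is wired up.
QUnit.test("the test harness runs", function (assert) {
    assert.ok(true, "QUnit loaded and executed a test");
});
```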
Now we can begin writing our characterization tests.
Characterization tests
In some ways, characterization tests are similar to integration tests. They operate at the highest level of the testing pyramid and tell you what your application’s real behavior is. Put another way, these tests characterize the existing functionality, whether that functionality is correct or incorrect. When one of these tests breaks, you know that your refactoring has changed some existing behavior.
Our characterization tests will ensure:
- Playing 5,000 games with the switching strategy results in a P(win) of about 0.66 (+/- 0.03)
- Playing 5,000 games with the staying strategy results in a P(win) of about 0.33 (+/- 0.03)
Note that we do need to add a few invisible divs to our test page due to the tight binding between the DOM and our simulator code. We’ll slowly loosen our reliance on these elements as we add test cases.
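As a sketch, the switching-strategy characterization test could look something like the following. The runSimulation helper and its signature are hypothetical stand-ins for however we end up driving the existing simulator from the tests; the real test exercises the actual code through the DOM elements mentioned above.

```javascript
// Characterization test sketch: pin down the observed win rate when always switching.
// runSimulation(games, shouldSwitch) is a hypothetical wrapper around the existing
// simulator; the actual test drives the real code through the DOM it depends on.
QUnit.test("switching wins about two thirds of the time", function (assert) {
    var games = 5000;
    var wins = runSimulation(games, true);
    var winRate = wins / games;
    assert.ok(Math.abs(winRate - 0.66) <= 0.03,
        "P(win) when switching was " + winRate + ", expected 0.66 +/- 0.03");
});
```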
Moving on
Now that we have our characterization tests in place, we can proceed to refactor our code (while adding unit tests) to support our desired functionality. Stay tuned for part 2!