Clean Coders: What do you want viewers to get the most out of your videos?
Corey Haines: A main goal of my videos is to take it slow and talk about all the small (and large) decisions that we make while building a full application. By not focusing on "getting it done as quickly as possible," we can take our time to analyze the subtleties and complexities inherent in application development. In our day-to-day lives, we have pressure to finish; after all, that is what most of us are paid for: getting our applications into production. By slowing down, though, we give ourselves the opportunity to practice our craft more effectively.
Practicing and analyzing the small steps in a process allows us to internalize them and not think so much when it is time to apply them. This series is about discussing and understanding those steps.
CC: What is the next episode going to feature?
CH: The upcoming episode 2 will finish the feature we started in episode 1, viewing the running coderetreats. Along the way, we'll dive into some thoughts around the two primary uses for tests: verification and design feedback.
We left off episode 1 with just a dummy object serving up the data, so we'll see how we can listen to some feedback from our tests when pushing that functionality down our stack. This is a really exciting part of doing an outside-in style of development.
We'll also talk a bit about the "why" behind speeding up our tests while using the Rails framework.
And, of course, there will be cameos by Zak The Cat! Or, perhaps Zak is the star, and I'm just doing cameos.
CC: Tell us a bit about the series. What's coming?
CH: As we begin to pick up speed with our feature development, we'll start seeing how our tests can help guide us towards a malleable design that continues to accept new features without a lot of pain.
There is an exciting part coming within a couple of episodes, where we'll notice a resource just screaming to come out in our design. Luckily, our design will make it easy to reify this concept.
As we get to good pausing points in the application construction, I'm looking forward to taking a few detours to investigate other design styles that are becoming more popular in the Rails world.
CC: Your biggest mistake coding? Your biggest lesson.
CH: Just one? How to choose from all of them? :)
There is one mistake, though, that has really stuck with me over the years, one that I always look back on with fondness.
I was at a large enterprise, working on their software distribution and desktop management system. This was to be installed on every computer in the enterprise. We had an initial team of 2 developers, and we built a really nice system. It consisted of a bunch of smaller pieces, all linked together. The design of each sub-system was driven heavily by the feedback that our tests gave us, resulting in a beautiful layered structure. I was quite proud of it.
Our distributed architecture had a need for some basic polling functionality. The client application wanted to display a "Running..." animation when the system was active. So, a natural solution was to poll for our specific process on the desktop. This was written in C#, running on version 2 of the .NET platform, and we implemented our polling by using a standard API for listing the processes running on the computer.
After some development, we were ready to release it to the enterprise. Immediately after it was installed, we started getting reports that some of the call center computers were slowing down. Pretty rapidly, we got flooded with complaints that the computers in the call center basically froze up and stopped working. After a reboot, they would start up again for a time, but pretty quickly freeze up again. We rolled back our system and things went back to normal. So, there was something wrong with our application.
We hadn't seen this behavior before, either on our own computers or in the test lab. We spent quite a bit of time with the local Microsoft consultant (who had an amazingly sharp understanding of the internals of Windows). Memory dumps showed the names and executable information of other running processes leaking into our heap. There was a lot of discussion, working with the other development teams, and then we finally figured out what was going on.
The technique for getting processes in version 2 of the .NET framework used an API that took longer as more applications were loaded on the machine. So, the API started to take longer than my polling interval. This caused the threads to pile up on each other, effectively killing the machine. Yikes! Luckily, the newer version of the .NET framework used a better API to get the process list, so we were able to upgrade and the problem went away. This took us a long time to figure out, though.
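The failure mode is easy to reproduce in miniature. Here's a hypothetical sketch in Python (the original system was C# on .NET; the interval and call-duration numbers are invented for illustration) showing how ticks pile up once each poll outlasts the polling interval:

```python
import threading
import time

POLL_INTERVAL = 0.01   # how often a "timer tick" fires a poll (seconds)
POLL_DURATION = 0.05   # how long the process-listing call takes (seconds)

active_polls = 0       # polls currently in flight
peak_polls = 0         # worst-case overlap observed
lock = threading.Lock()

def slow_process_listing():
    """Stand-in for an API call that slows down as the machine gets busier."""
    global active_polls, peak_polls
    with lock:
        active_polls += 1
        peak_polls = max(peak_polls, active_polls)
    time.sleep(POLL_DURATION)      # the call outlasts the polling interval
    with lock:
        active_polls -= 1

threads = []
for _ in range(20):                # twenty timer ticks
    t = threading.Thread(target=slow_process_listing)
    t.start()
    threads.append(t)
    time.sleep(POLL_INTERVAL)      # next tick fires before the last poll ends

for t in threads:
    t.join()

print(f"overlapping polls peaked at {peak_polls}")
```

With these made-up numbers, roughly five polls end up in flight at once; on the real machines the overlap just kept growing until they froze. One common guard is to skip a tick while the previous poll is still running, so at most one poll is ever in flight.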
What did I learn from this? Well, two major things have stuck with me from this experience.
First, you should understand your production environment. What is running in it? What happens when your application comes into the ecosystem?
And, second, deploy even sooner and more frequently than you think you can, even before the application actually has enough features to be considered useful. It will help highlight any issues you might run into as soon as possible.