

Instructor

Ian McFarland

Founder of Neo, Pivotal Labs, Agile & Lean

Transcript

Lesson: Agile Development with Ian McFarland

Step #3 Pace: Continuous releases of tiny bits of code mitigate risk

The other thing that a rigorous test framework gives you is, again, the knowledge that you are meeting your requirements: when you write a new piece of code, it's tested well enough that you know you can push it live. So the next logical step is to stop treating releases as this big ceremonial thing. Especially with web-deployed software and modern infrastructure, it's actually much easier to deploy continuously to users, because you can run the tests in an automated way and know you have passing code. That makes it much safer to take those little tiny incremental changes and push them live to the site, push them out to customers. What that does is reduce risk.
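
A minimal sketch of that gate in Python, assuming pytest as the test suite and a made-up `deploy_to_production.sh` as the push-live step; any real pipeline's specifics will differ:

```python
import subprocess
import sys

def run_tests() -> bool:
    """Run the automated suite; we only push live on a pass."""
    return subprocess.run(["pytest", "--quiet"]).returncode == 0

def deploy() -> None:
    """Push the change live (deploy_to_production.sh is a hypothetical script)."""
    subprocess.run(["./deploy_to_production.sh"], check=True)

if __name__ == "__main__":
    if not run_tests():
        sys.exit("Tests failed: this change is not safe to push live.")
    deploy()
```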

The likelihood that you're on the wrong track goes way down if you're measuring it all the time. If you push something out to users and find out that it breaks, you can roll it back right away, because it's this tiny, tiny increment, this little tiny thing. We also know that if your release is 1,000 lines of code, or 10,000 lines, or a million lines, and something breaks, you have to look through all that code to figure out what happened. But when the change is just a tiny little change and it broke something, then you know from actual user experience pretty much exactly what the problem is. You can go look in that little tiny thing. First of all you can roll it back quickly. Then you can resolve it really quickly and move on.
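
For illustration, one common way to make that instant rollback cheap is to keep each release in its own directory and flip a symlink; the paths and layout here are assumptions, not any specific tool:

```python
import os

RELEASES_DIR = "/srv/app/releases"  # assumed layout: one directory per tiny release
CURRENT_LINK = "/srv/app/current"   # symlink the server actually runs

def rollback() -> str:
    """Point 'current' back at the previous release, atomically."""
    releases = sorted(os.listdir(RELEASES_DIR))
    if len(releases) < 2:
        raise RuntimeError("no previous release to roll back to")
    previous = os.path.join(RELEASES_DIR, releases[-2])
    tmp = CURRENT_LINK + ".tmp"
    os.symlink(previous, tmp)
    os.replace(tmp, CURRENT_LINK)   # rename over the old link: no half-done state
    return previous
```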

When people talk about continuous integration, they mean a lot of things. I will often talk about continuous releasability: the idea that every time something is checked in, it could be pushed live. The best-functioning teams, in my opinion, are the ones that really do push; every time something is checked into the source code repository, it gets pushed live to the site as part of the regular deployment process. When you get to that state, you're releasing 10 or 100 times a day, depending on the team size, obviously. And as a typical developer, I think it's healthy to be checking in every few minutes. You make a little change, and as soon as you have an increment of valuable code, it should be checked in so that it's shared as broadly as possible. In that kind of situation you may be deploying tens or hundreds or thousands of times a day, depending again on the team size and how many people are deploying.
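
As a sketch of what "every check-in gets pushed live" can look like mechanically, here is a hypothetical Git server-side post-receive hook; the checkout path and deploy script are assumptions, and in practice most teams hand this job to a CI service:

```python
#!/usr/bin/env python3
# Sketch of a post-receive hook: every check-in runs the same
# test-then-deploy pipeline as part of the regular process.
import subprocess
import sys

WORKDIR = "/srv/app/checkout"  # hypothetical working copy of the pushed code

def main() -> None:
    if subprocess.run(["pytest", "--quiet"], cwd=WORKDIR).returncode != 0:
        sys.exit("Check-in received, but the tests failed: not deploying.")
    subprocess.run(["./deploy_to_production.sh"], cwd=WORKDIR, check=True)

if __name__ == "__main__":
    main()
```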

Certainly you don't want stuff sitting around for a week or a month. Traditional software projects would have a six-month or even an annual release cycle, so you accumulate all of this validation debt: you don't know if the stuff actually works until you push it live. When you do push live, the best approach is an incremental deployment too, pushing to what Google calls canaries. Different people call it different things, but the idea is to push to 1% of your infrastructure. You also see feature flags and other mechanisms for saying, "I only want to expose this new behavior to a small fraction of my user population." That gives you a little bit of confidence that the new code is working well, without exposing a lot of your users to it. Once that's been validated, you can push it across more and more of your cluster, usually pretty rapidly.
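
A sketch of that kind of percentage gate as a feature flag check; the function and flag names here are made up, and real systems usually back this with a dedicated flag service:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Expose `feature` to roughly `percent` percent of users, deterministically.

    Hashing (feature, user) gives each user a stable bucket, so the same
    small slice of users keeps seeing the new behavior while you watch it.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000      # buckets 0..9999
    return bucket < percent * 100              # percent=1.0 -> 100 buckets = 1%

# Canary-style rollout: start at 1% and widen as confidence grows.
if in_rollout("user-42", "new-checkout-flow", percent=1.0):
    print("serve the new code path")
else:
    print("serve the existing behavior")
```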

What you find by doing rigorous testing is that the number of defects that actually make it to production is very, very small. But you still will have some that make it to production, and now you have narrowed who is exposed to those production issues. Any time you do get a production issue, you write a test. First you roll back the change, and then you write a test that demonstrates the odd behavior you saw and validates that the code doesn't do that. Obviously at this point the code does have the defect, so the test should fail, because that defect still exists. Then you fix the defect, and the test prevents you from recreating that defect in the future.
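
Here is what that write-the-test-first step can look like with pytest; `parse_price` and its bug are invented purely to show the shape of a regression test:

```python
from decimal import Decimal

def parse_price(text: str) -> Decimal:
    # The buggy version that shipped: it strips the currency symbol but
    # also silently drops the cents, so "$19.99" comes back as 19.
    return Decimal(text.lstrip("$").split(".")[0])

def test_parse_price_keeps_cents():
    # Written right after the rollback: this fails while the defect exists,
    # and once parse_price is fixed (return Decimal(text.lstrip("$"))),
    # it keeps that defect from ever coming back.
    assert parse_price("$19.99") == Decimal("19.99")
```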
