Tag Archives: Testing

Attention to Quality gets Results

Most managers, and some 'Agile' coaches, I encounter shudder when the subject of quality comes up.  They think quality hampers delivery.  In some ways they are right: you do want balance, avoiding over-polishing because it can delay delivery (though in mission-critical projects, where the cost of failure is high, it may be better to over-polish).

Some managers may shudder because, oops, they may have been found out: they have been pushing their teams to push out code which is not really ready.

Other managers, often but not always the inexperienced ones, have no idea what quality is.  They tend to believe their development teams when they claim the code is 'unit tested.'

As a manager, quality is one of your biggest concerns, if not the biggest.  Quality is also an economic concern, and managers are responsible for economic outcomes.  Poor quality will slow down your team.  And not just your team: it will also slow down your organisation.

How does this occur?  How does it slow down your organisation?  Well, I've seen this time and time again when I come to a new client.  Work is at a standstill, and when I dig a bit deeper I see the same issue appear.

Yes, teams are overwhelmed with work, so limiting WiP (work in progress) will help.  Workflow mapping and lean techniques will identify bottlenecks; yes, address those.  Importantly, within those bottlenecks I find that the reason for the delays is the negative feedback of poor quality.

Upstream, I see product ideas waiting to be developed and tested with the customer because a wave of poor quality is slowing down the delivery pipeline.  Overwhelmed with defects, teams struggle to pull new work into development.  When they do eventually pull that work in, it is developed with such poor quality, or worse, a facade of quality (e.g. formal QA), that it further feeds the negative feedback loop.

That loop runs ever more slowly until it turns into a 'three-month stabilisation' phase: an entire quarter in which the organisation freezes new releases into production to avoid an outage.

These sorts of mitigating steps often occur after a disaster whose root cause is poor quality.  One big example is releasing a new version of software that shuts down an assembly line at a factory and affects thousands of customers, who are prevented from using their mobile devices.  The cost there was millions of dollars.

So asking for quality is easier said than done.  Or does it have to be that hard?  Usually workers are over-burdened, so limit WiP, as mentioned earlier.  Then ask for quality.  Now that they are not over-burdened, they can put their effort into producing quality output.

Give teams the space to improve and they usually will.  Sometimes, as a coach, I can give them some training and they will do it all by themselves (this interview demonstrates this), at least to a point; sometimes being involved with them is needed from the outset, or when progress levels out or stalls.

In my experience, results can happen very quickly.  For an organisation I was recently involved with, it took three months to go from a cycle time of 40 days per feature down to 3 days.  This involved addressing quality and bringing in aligned practices from Extreme Programming and Specification by Example.  Reducing WiP and batch size also helped.  It happened in stages, guided by data from a Cumulative Flow Diagram that mapped the stages of delivery and guided improvement efforts.  The first stage got cycle time down to 9 days (by applying WiP limits, policies on work-item size, and changing the test strategy); the next stage, amplifying unit-testing practices to remove an archaic and slow UI-testing bottleneck (and the sunk-cost fallacy associated with it), really accelerated the flow of work.
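A cycle time per feature like the 40 days and 3 days above comes directly from the dates work enters and leaves the delivery pipeline, the same data a Cumulative Flow Diagram is built from.  A minimal sketch in JavaScript, using illustrative dates rather than data from the engagement described:

```javascript
// Average cycle time in days, given when each feature started and finished.
// The feature records below are made up for illustration.
function averageCycleTimeDays(features) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const total = features.reduce(
    (sum, f) => sum + (f.finished - f.started) / msPerDay, 0);
  return total / features.length;
}

const features = [
  { started: new Date("2015-01-01"), finished: new Date("2015-01-04") }, // 3 days
  { started: new Date("2015-01-02"), finished: new Date("2015-01-04") }, // 2 days
];

console.log(averageCycleTimeDays(features)); // 2.5
```

Tracking this number per stage of the workflow is what exposes where the bottleneck sits.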

AgWorld, the published case study, did it in six months.  They chose to do it themselves, which is fine.  Teams can achieve even more with a coach who can bring in the practices more quickly and help avoid the stagnation that does tend to happen.

Find a slow process and, more often than not, you will find poor quality.  Address quality and just observe how things get better for everyone.

 


Keeping up Technical Chops

I was asked to do this for a coaching job.  It was nice to do.

There is an important and underestimated place for technical excellence.  Heck, it's a principle!  Combine it with the others 🙂  Standing on the shoulders of giants: thanks Beck, Rainsberger, Meszaros, Fowler, Wake, the Pragmatic Programmers, and many more unnamed.  The repository has 36 commits to demonstrate the process.  Feel free to take a look at it.  #agile #tdd #quality #emergent

https://github.com/nzdojo/spellchecker

Read the notes:

https://github.com/nzdojo/spellchecker/wiki/Notes-on-Implementation

‘Keep hands on, that includes the code!’

 


ICAgile Agile Testing Expert Gate

In this recording of a webinar I participated in on January 28th, 2015, Janet Gregory, one of the track authors and co-author with Lisa Crispin of the immensely popular Agile Testing and now More Agile Testing, tells us about the criteria for achieving Expert status in Agile Testing.

It's quite a jump from taking a course, and we learn that demonstrated experience over a number of years is key.  No exams here, we emphatically say.

We also talk about the new book, More Agile Testing, and I pose some questions as well.  Janet and Lisa bring up organisational culture early in the book, and I commend them for this rather than assuming it is already in place.  I also liked the balanced reporting of views on the testing quadrants.  It's a mental model that has, in some quarters, been taken down in an unkind manner.  They report on the better variations and tell us not to take a 'cookie-cutter' approach to its application.

We also had other panelists involved, bringing in their expert views.  Thanks to Aldo Rall, Devin Hedge and Agile Bill Krebs.  Elinor Slomba once again handled the facilitation brilliantly over a medium that is difficult at times.


A Cloud9 Custom Runner combining TypeScript and Mocha

The online IDE Cloud9 is just awesome.  It is among a number of online IDEs out there, and I think it is the best.  With Microsoft making .NET open source and available on all platforms, I can also see myself doing .NET development on it.  Yes, I know there is Monaco from Microsoft, but that is still immature; and yes, there is Mono on Linux as well, but that requires fiddling about.

At the moment I'm doing Node.js development with Express, and now with TypeScript in the mix as well.  TypeScript looks awesome too.  I've ported some JavaScript code over to TypeScript and it feels really nice.

However, Cloud9 does not have a runner set up for TypeScript compilation, so I decided to set one up myself.  Cloud9 provides a way to do this by adding a New Runner from the Run menu.  I created the following runner, which associates the .ts file extension with the tsc compiler.  It then runs the associated Mocha test, which already existed from when the code was pure JavaScript.  It relies on a naming convention and directory location for the test, so as long as you stick to the convention you will be OK.  Here is the script:

{
  "cmd" : ["bash", "--login", "-c", "rm $file_base_name'.js'; tsc --sourcemap --module commonjs $file; mocha test/$file_base_name'Tests.js'"],
  "info" : "Compiling Typescript file $file",
  "env" : {},
  "selector" : "source.ts"
}

It uses bash to run multiple commands: first the TypeScript compiler, then the Mocha test runner.  Before running these, it deletes the previous .js file; otherwise, if tsc fails, the old file would still be there and the test runner would pick it up, which we would rather avoid.
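The naming convention the runner relies on can be made explicit: it takes Cloud9's $file_base_name variable (the file name without its extension) and looks for test/$file_base_name'Tests.js'.  A small sketch of that mapping in JavaScript; the function name is mine, for illustration only:

```javascript
// Mirrors the runner's convention: a source file app.ts in the project
// root maps to test/appTests.js (the function name is illustrative).
function testFileFor(sourceFile) {
  const base = sourceFile.replace(/\.ts$/, ""); // Cloud9's $file_base_name
  return "test/" + base + "Tests.js";
}

console.log(testFileFor("app.ts")); // "test/appTests.js"
```

If a source file does not have a matching test file in that location, the mocha step of the runner will simply fail, which is a useful nudge to keep the convention.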

I found that when I converted the JavaScript implementation of the main file to TypeScript and compiled it, my tests just worked.  The tests are still in pure JavaScript.  I'm still entertaining the idea of using Steve Fenton's tsUnit, but I fear it may not be as mature as the JavaScript test ecosystem.


Real Business Value of Testers

Another reminder-type blog post that may or may not get expanded upon further 🙂

Liz Keogh says:

“If you don’t have actual testers, get someone to play that role. Someone who *isn’t* a dev. Us devs are very good at abstraction, and will start solving a problem before we see the whole problem. I don’t believe that an awesome dev can also be an awesome tester, or vice-versa.”

I wonder if we still need tools.  James Shore said back in 2010 that the tools are a waste, and I was broadly in agreement with him.  But there is value in the thinking process, and devs can't do all of it.  I need to revisit James's article from then to make sure I've caught all the nuance and not glossed over details.

Tools help, but they are not the be-all and end-all.  Remember the Agile Manifesto.

I tend to think the agile tester (more so than the traditional tester) can be a great productivity booster and very valuable to the organisation.  Their critical thinking at the beginning should be regarded as a blessing.  I've also worked with some great testers at the end of the cycle, but you can feel their frustration as well.  We can help each other 🙂

This is a reminder only – ideas are still forming – but the conversation is nonetheless open.