October 4, 2012

Software Testing is NOT "Breaking Things" - Part Two

Breaking Things, or is it Monkey Testing?

For some odd reason, I really don't like it when software testers say "I enjoy breaking things".

When you test and find a bug, you haven't broken anything - it was already broken!  If anything, the developer who wrote the code broke it.


This article originally appeared in my blog: All Things Quality
My name is Joe Strazzere and I'm currently a Director of Quality Assurance.
I like to lead, to test, and occasionally to write about leading and testing.
Find me at http://AllThingsQuality.com/.


  1. Hi Joe,

    I think this is a good point and should be remembered when a tester is accused of breaking something. I like how 'Lessons Learned in Software Testing' puts it:

    "Testers don't like to break things; they like to dispel the illusion that things work."

    But I wouldn't necessarily bury the whole idea, as there was an interesting discussion about this in the TWiST (This Week in Software Testing) podcast #102. I don't remember who said it, but he was saying that a breaking attitude might sometimes help in testing. I mean, if you are prepared for everything to work, that is most likely what will happen. But if, on the other hand, you are prepared to break things and find a lot of bugs, then perhaps that could be a first step toward finding them? I don't have much personal experience to back this up, though.

    Good post.


    Aleksis Tulonen

  2. Yeah, but I read somewhere on Google's official testing blog something like "We build it, then break it, and we build a better product." And most people in the technical world think that if Google, Microsoft, or Apple has said something, it should be taken as guaranteed truth.

  3. Aleksis - I agree that it's critically important as a tester to remember that you aren't trying to demonstrate that "everything is working", but rather your goal is to show where it isn't working.

    I just don't call that "breaking" anything.

  4. Maybe another approach is to call it "a user scenario where the user has malicious intent, or is from a competitor demonstrating how 'bad' the software is".
    That way the tester is in the right mindset but doesn't violate the "it was already broken when it came to me" truth to which I subscribe as well.

  5. Those are reasonable terms, but I usually just call it Negative Testing.

    To me, this is a bit different from the connotations in the words "malicious" and "competitor".

    Over the years, I've seen Developers focus on the Happy Path. As a Tester, I want to make sure I cover other paths.

    And when the developers say "end users will never do that", I usually respond "Never is a long time...".
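    The happy-path vs. negative-testing distinction above can be sketched in a few lines of Python. This is only an illustrative sketch: `parse_age` and `expect_rejection` are hypothetical names invented for this example, not anything from the post.

    ```python
    def parse_age(text: str) -> int:
        """Parse a human age from a string; reject anything out of range."""
        value = int(text)  # raises ValueError for non-numeric input
        if not 0 <= value <= 150:
            raise ValueError(f"age out of range: {value}")
        return value

    def expect_rejection(bad_input: str) -> bool:
        """A negative test: passes only if the invalid input is rejected."""
        try:
            parse_age(bad_input)
        except ValueError:
            return True   # the code correctly refused the bad input
        return False      # it "worked" on garbage input -- that's the bug

    # Happy-path check: valid input is accepted.
    assert parse_age("42") == 42

    # Negative checks: each of these should be rejected, not silently accepted.
    for bad in ["forty-two", "-1", "900"]:
        assert expect_rejection(bad), f"accepted invalid input: {bad!r}"
    ```

    The point of the negative checks is exactly the one made above: they don't try to show that everything works, they go looking for the places where it doesn't.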

  6. My first experiences "breaking" software were in situations where it wasn't actually my job to do so. When I was an undergraduate, I got blamed and then banned from the computer lab. As a grad student, I got cleared of all blame by the developer and then invited to beta test. Very different perceptions of what "broken" means, huh?