Help with the QA Process

Hey guys,

I’m hoping someone can steer me in the right direction.

My SaaS product launched a couple months ago…

I have a test server set up where new commits go before they go live. Currently, it’s on me to go through and test new things, but there are instances where one new thing seems to work fine yet breaks something else. I also don’t want to spend my time doing this.

We don’t have unit testing or anything like that in place, and I don’t have any formal use cases written up.

I have a job post up on oDesk for QA testers and I’ve gotten over 100 applications, but I haven’t managed a QA process before. What advice does the community have for something like this?

Should I write the test cases myself when I add tickets to Sprint.ly, or should I have a lead QA guy review the tickets and prepare the test cases?

Do I hire multiple QA testers and have them all test the same things, so everything goes through a few layers before getting pushed, or do I have them all test different things? I think having everything go through a few layers would help me weed out poor QA testers.

Is there any software specifically for this type of process? I don’t want to use JIRA (I want to stick with Sprint.ly, as I just moved there from Codebase), and I don’t want any enterprise-level software. Am I better off just setting up a spreadsheet for this and sharing it with the QA testers?

What qualifications should I look for in a QA tester? Should they have some level of programming experience? I have to sort through quite a few applicants…

Any guidance here would be much appreciated!

I’d be curious to hear, in more depth, an example of the type of thing that breaks after something else is fixed.

I tend to believe that the best place to invest time is in writing clean, maintainable code in the first place.

Without having detailed scenarios describing how the system is supposed to work, I don’t think you’ll have any luck with outsourcing QA. Without these scenarios, the people you hire through a site like oDesk aren’t likely to do more than stick random text into your different form fields and see if something crashes.

Rather than investing in hiring and onboarding an external QA person, I think you would be better served by writing integration tests. With integration tests, you simulate the series of actions a user goes through on your site and make assertions about the results. In the Ruby on Rails world, we use Capybara to do this, but your framework probably has an analogous solution.

Once you learn how to write these integration tests, the effort involved in creating a new test is roughly equivalent to the effort it would take to describe the scenario to an outsourced QA person; however, it has the advantage of being a one-time cost, whereas with a QA person you’d need to have them go through the scenario each time you want to deploy a new version. Without integration tests, you’ll be playing whack-a-mole each time you find and fix a bug, so I think it’s worth investing in them.
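To make that concrete, here’s roughly what one of those integration tests looks like in Rails using Capybara (with RSpec, which is an assumption; any test framework Capybara plugs into will do). It’s a minimal sketch: the signup page, field labels, and redirect path are invented for illustration, so substitute whatever your app actually has.

```ruby
# spec/features/signup_spec.rb
require "rails_helper"

# Drives the app the way a user would: load the page, fill in the form,
# click the button, then assert on what the user should end up seeing.
RSpec.feature "Account signup" do
  scenario "a visitor signs up and lands on their dashboard" do
    visit "/signup"

    fill_in "Email",    with: "new.user@example.com"
    fill_in "Password", with: "s3cure-password"
    click_button "Create account"

    # Assert on visible results, not implementation details.
    expect(page).to have_content("Welcome")
    expect(current_path).to eq("/dashboard")
  end
end
```

Each scenario reads a lot like the instructions you’d otherwise write out for a QA person, which is why the per-test cost ends up comparable.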

As you’ve identified, I think in most cases, to successfully pick someone on the likes of oDesk (never mind actually managing the job), you need a reasonable idea of how to do the job yourself.

So in your case, what about using an outsourcing company that specialises in QA testing? They should already have experienced staff and procedures in place, and they should be able to help you prepare test plans, etc.

Andy has written up his experiences here.

I can personally recommend TestLab2, one of the companies he mentions: http://www.testlab2.com/

Unit/integration/UI tests are a good idea for sure, but they can’t capture everything.

An example: a UI feature that makes perfect sense to you but is confusing to everyone else should be flagged by a good QA person.

That’s true - you can’t catch UX issues with automated testing.

If you’re looking to test the user experience of your product, though, I think the best way of doing that is to have actual customers test it for you. Mention to a couple of your power-user customers that you’re launching a new feature and would like them to try it out before you launch it. Assuming they agree, ideally you meet with them in person, say over coffee. Then present them with a scenario that describes their goal but not how they accomplish it (for instance, if you added an invoicing feature to a time tracking app: “you want to invoice a customer for the hours you tracked over the last month”). By going directly to the customer, it’s a win all around: you get better feedback than you would from outsourcing, and the customer gets to feel involved in the evolution of the product.

Come on! In a non-trivial bit of software, have you never caused a bug by fixing something else? No “quick little change” that caused unforeseen problems elsewhere?

Very helpful - thank you!

Haha totally man. I’ve definitely done that my fair share of times and continue to do so on a regular basis! Sometimes when I write comments from my phone I don’t have the typing bandwidth to properly frame my comments. I really wish they would put “sent from phone” or something like that in the signature so I didn’t come out looking like a complete toolbag.

I’m not suggesting that looking into automated testing (AT) is a bad idea. Personally, I don’t do much AT. I did some back in the day, and the conclusion I came to was that it was a better use of my time to focus on code quality.

I could go into a lot of detail around that if you were curious, but I’m guessing you’re not :smile:

I’m guessing that most people would say AT is a must for any project these days, and I’m probably in the minority in not being on that bandwagon yet. That said, I do believe there are certain types of problems that lend themselves much better to AT. For example, I remember hearing about how Facebook has a lot of AT around their news feed privacy settings.

There are just so many different permutations of options that it’s the kind of problem where you could very easily make a small fix that accidentally breaks some other privacy setting.

I actually have a few areas of my app which I’ve been considering doing some AT around.

But the reason I was curious to hear more detail about the actual case where something broke by accident is that I’ve been in work environments where we tried to slap AT on top of a lot of technical debt and unmaintainable code, and IMO that isn’t going to help much.

Obviously we’re all human and we all make mistakes all the time. I just think that certain types of mistakes may be better addressed outside of AT.

@JustinMcGill We sometimes use a so-called staged rollout, where we divert (on a load balancer) a small percentage of users to a new version of a web application and monitor where and what is failing. Over time, we increase the percentage.

This has proven to be very valuable, because in the case of a major malfunction we can always just remove the server with the updated application from the LB, and things return to normal in no time.

Of course, this can be very hard to implement in certain scenarios. There are a lot of things to consider, particularly concerning shared state (database schema and data, sessions, etc.).
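Whatever ends up doing the routing (the load balancer in the setup above, or the application itself), the core decision is just stable percentage-based assignment: the same user should keep landing on the same version while you ramp the number up. Here’s that decision sketched in Ruby purely for illustration; the feature name, user id, and percentages are made up, and this isn’t how any particular LB implements it.

```ruby
require "digest"

# Deterministically assign a user to a bucket from 0-99. Hashing the user id
# (plus a feature label) means the same user always gets the same bucket, so
# they don't bounce between old and new versions on every request.
def rollout_bucket(user_id, feature)
  Digest::MD5.hexdigest("#{feature}:#{user_id}").to_i(16) % 100
end

# A user sees the new version only if their bucket falls under the current
# rollout percentage. Raising the percentage over time widens the audience.
def in_rollout?(user_id, feature, percentage)
  rollout_bucket(user_id, feature) < percentage
end

puts in_rollout?(42, "new-dashboard", 5)   # start with roughly 5% of users
puts in_rollout?(42, "new-dashboard", 50)  # later, widen to roughly 50%
```

This doesn’t replace the big operational win described above (being able to pull the new server out of the LB), but it’s the same gradual-exposure idea.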

@JustinMcGill - here’s something I had read back in 2013 about declining jobs:

5. Quality Assurance Specialists and Managers

Hiring professionals in the Dice survey placed Quality Assurance (QA) on the “low priority” side of the ledger. Do not expect this to change. These days, the tech industry seems to be following Google’s lead and turning everyone into beta testers. Users are the ultimate quality assurance staff - and they don’t get paid!

(source)

For our own applications, we run suites of unit & acceptance tests (automated on each commit/pull request/merge), deploy only from the CI server (using CircleCI), and keep the GUI/browser tests to a minimum (running them maybe once every 60 days).

As for the problems that do slip through (and they do, no matter what we’ve tried), they usually aren’t ‘major’; users report them, or we catch them in our logs (which are another important part of QA, IMO).

Hope that helps.

so-called staged rollout, where we divert (on a load balancer) a small percentage of users to a new version of a web application and monitor where and what is failing

Doing that can be costly on different fronts - why not use feature toggling?

From Wikipedia:

Feature Toggle is a technique in software development that attempts to provide an alternative to maintaining multiple source code branches, called feature branches.

Continuous release and continuous deployment enable you to get quick feedback on your coding. This requires you to integrate your changes as early as possible. Feature branches introduce a bypass to this process. Feature toggles bring you back on track, but the execution paths of your feature are still “dead” and “untested” while a toggle is “off”. Still, the effort to enable the new execution paths is low: just set the toggle to “on”.

If you are using PHP, here’s a library I had released some time ago: jadb/feature_toggle
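Whatever the language, the core of a toggle is tiny. Here’s a minimal sketch in Ruby (to match the Rails discussion above, not the library linked here); the flag names and hard-coded values are made up for illustration, and in practice you’d read them from config, an environment variable, or a database row you can flip at runtime.

```ruby
# A bare-bones feature-toggle registry. The hard-coded hash stands in for a
# real config source so the example stays self-contained.
class FeatureToggles
  FLAGS = {
    "new_invoicing" => true,   # ready to expose to users
    "bulk_export"   => false   # merged and deployed, but still dark
  }.freeze

  def self.enabled?(name)
    FLAGS.fetch(name, false)   # unknown flags default to off
  end
end

# The new code path ships with every deploy but stays dormant until the
# toggle is flipped on; no long-lived feature branch to merge later.
if FeatureToggles.enabled?("new_invoicing")
  puts "render the new invoicing UI"
else
  puts "render the old invoicing UI"
end
```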

I recently hired a QA on oDesk, and it’s working great.

I have no specs. I have no test cases. I threw the app and some product demo vids to the tester, and she started testing and raising defects. We have a call about once a week where she asks any questions, and we tic tac back and forth on email during the week. I’m very, very happy with how it’s going.

Now, automated testing is awesome! I do have a bunch of unit tests and a couple of automated end-to-end tests, but I’ve got a complex app with a lot of edge cases, and I know that I’m gonna miss some either the first, second, or third time around. So a second set of eyes is brilliant.

Code quality is also awesome! But on the other hand, so is shipping something, Pareto principle and all.

So, down to brass tacks. How did I find this amazing QA?

Two weird tricks:

  1. I went through a few hundred profiles to find good candidates.
  2. I got them to record screen casts of bugs in a demo app.

I think those are the key points, but here’s the full process I followed (in brief):

  1. I put a bunch of effort into the job ad, to make it sound appealing.
  2. I spent a couple of hours searching for and inviting good candidates (I ended up hiring one of these).
  3. Narrowed it down to applicants with 5/5 English who were independent contractors.
  4. Further screened applicants using a few questions in the job ad. (I have opinions about testing, so I looked for people with matching opinions. This works less well if you don’t have opinions.)
  5. Put up a demo app with some hilarious defects. Employed candidates for an hour at their going rate to find defects. They needed to write up a defect report, and record a screencast with a voiceover.

The screencasts are golden. They let me see the quality of their work, and hear the quality of their English, without leaving the comfort of my browser.

I probably put about half a day of effort into preparing the process, but it’s paid dividends.

Good luck!

Thanks so much for this breakdown @danielstudds. This is a huge help!

Wow fantastic process! Thanks for documenting. Nicely done!
