The attendees at the seminar all reminded me of university professors engulfed in their scientific background. There are many ways of conducting user studies and tests. The “correct” way of conducting such investigations, according to academia, is to ensure scientifically sound data. This includes collecting enough data for the findings to be empirically valid.
That means not conducting just one or two studies, but 10 or 20 – or even more. That means renting or building expensive user-test labs. That means hiring expensive experts to ensure unbiased data and, perhaps most important, introducing a separation between who is building a product and who is testing it.
Talking to the attendees at the seminar, this approach seemed to be the “correct” way of conducting user research. I must have provoked more than a dozen of them when I told them I couldn’t care less about that kind of approach.
In my view, user studies should be conducted by the people building the product. A developer is not only a developer, and neither is a designer. Developers and designers – the ones building the product – should know who they’re building it for, in person. Having somebody else do it for them allows for double interpretation of who the users are: one by the user experience experts writing it down on paper, and another by the developers and designers reading that paper.
Having studied HCI at university, I’ve tried both the “correct” way of doing user research and a looser, guerrilla-style HCI approach.
In my view, the most valuable thing to gain from user research is feedback, and the more rapid the feedback, the better. In my experience, I haven’t gained much more knowledge from conducting user research the “correct” way than from grabbing a victim in the cafeteria for a 30-minute session.
My goal with user testing is to rapidly test what I have just been building, correct what I’ve built based on what the user test showed, build something more, test that, correct it, build some more, and so on.
Have any of you out there experienced situations where the “correct” way gave better results than the budget way of doing user tests? I can’t find any reasons for it other than bureaucratic and organizational ones.