Wednesday, April 16, 2008

User Research Smoke & Mirrors

The article (in 5 parts) by Christopher Fahey can be found here:
1 2 3 4 5

My Reflections
Part 1: Design versus Science

When we talk about User Experience Design, the focus is undoubtedly on the user, i.e. it is human-centric. Given this focus, I believe the field of science we are really talking about here is the social sciences, as opposed to the natural sciences.

The key difference between the two fields is that the social sciences study the subjective, inter-subjective, and objective or structural aspects of societies, whereas the natural sciences focus on objective aspects of nature; there are no definitive answers in the social sciences, and findings depend largely on the context of the study.

Unfortunately, the problem with many UX designers (as identified by Fahey) is that they confuse user research with hard (natural) science research, and take for granted that their social-scientific research will provide definitive solutions to their design problems.

THEY DO NOT!

However, from an academic viewpoint, I still strongly believe that the value of such research should not be discounted, even in the field of user design. Designers, like academics, should take each piece of research data with a pinch of salt and gather as many varied perspectives as possible. Understanding these diverse perspectives helps lay the foundation of a designer's experience, i.e. designers can gain experience from this research. And experience, as Fahey pointed out, is essential to becoming an expert designer.

Part 2: Research as a Design Tool

The main problem I identified from reading this part of the article is that user researchers sometimes over-infer from their research data.

I believe this problem isn’t exclusive to user research; it is present in all other types of research as well. For example, a direct positive statistical correlation between two factors (X and Y) simply tells us that when X increases, Y increases as well. It does not tell us that the increase in X causes the increase in Y or vice versa, or rule out that a third factor Z causes both to increase at the same time.
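The confounding-factor scenario above can be sketched in a few lines of Python. This is a hypothetical illustration (the variables and numbers are invented for the example): a hidden factor Z drives both X and Y, so X and Y come out strongly correlated even though neither causes the other.

```python
import random

random.seed(42)

n = 1000
# Hidden confounder Z; X and Y each depend only on Z, never on each other.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.3) for zi in z]  # X = Z + noise
y = [zi + random.gauss(0, 0.3) for zi in z]  # Y = Z + noise

def pearson(a, b):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

print(f"correlation(X, Y) = {pearson(x, y):.2f}")
```

The printed correlation is strongly positive, yet a researcher who concluded from it that "X causes Y" would be over-inferring, which is exactly the mistake described above.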

So the issue in question here is the researchers’ ability to interpret the data as saying only what it actually says (or does not say), NOT the research itself. The data tells you only what it tells you, NOTHING MORE, NOTHING LESS.

With regard to the question of whether we need such expensive and complicated research methodology to tell us something ‘commonsensical’, I think it depends. From a commercial viewpoint, where decisions are often made based on the dollar sign, it may be rather wasteful and extravagant (in both money and time) to conduct such research. But from an academic viewpoint, I will say ‘why not?’.

My point is that what is ‘common sense’ may not be so common after all. Commonality is perhaps just a social construct that depends on the society in question. For example, it may seem like common sense that people will not put anything that stinks in their mouths, but in reality we have people who swear by the delicacy that is smelly bean curd, which is touted to taste better the worse it smells (not to me though; the smell is enough to put a few meters between me and the dish). Another example, somewhat related to eyetracking, is that theatre practitioners often preach that movement from upstage right (audience’s left) to downstage left (audience’s right) is the strongest and most attention-grabbing. This knowledge seems quite commonsensical, but it does not quite apply to a Chinese audience; the thing is that English texts read from left to right whereas Chinese texts read from right to left. Thus, what is common sense may in fact be common only to a particular person, group, or society.

Hence, the reasoning in part 2 of the article does not discount the value of research at all.

To me, the factor that determines the value of research is whether new knowledge can be garnered from it, and whether the value of that new knowledge outweighs the cost of gaining it. There’s no point proving again something that has long been proven, right? Isn’t that why we should always look through secondary research data first?

Part 3: Research as a Political Tool

This is the part where I fully agree with Fahey. My only gripe is that, more often than not, designers are forced to justify every single aspect of their designs. The effort needed to do so means that less effort can go into ‘perfecting’ the designs. It’s a lose-lose situation.

Part 4: Research as Bullshit

By now, I guess it is quite obvious that I place more significance on research than the author does (I may be wrong on this though; I suspect the author places just as much emphasis on well-planned and well-executed research that generates thoughtful insights). Hence, when I saw the sub-heading, I thought this part of the article had the potential to be flame bait. However, as I read further, I could not help but agree with his observations.

As mentioned above, the value of research to me lies in whether new knowledge can be garnered from it, and whether the value of that new knowledge outweighs the cost of gaining it. In the examples cited by Fahey, it is rather obvious that the cost outweighs the benefits. Seriously, a persona room?! Like, huh?

With regard to the passing off of subjective analysis as objective findings, I believe the issues here are the integrity of the actual research methodologies and how the researchers present their findings. Unfortunately, unlike academic research, where researchers have to provide details of how they carried out their studies, or even the raw data collected (usually for a small administrative fee), and discuss the limitations, commercial research is not bound by such practices (due to the competitive nature of business). This makes it harder to verify the accuracy, biases, reliability, validity, etc., of commercial research. Thus, the onus is on readers to take the results with a pinch of salt and not over-rely on a single source; multiple perspectives from varied sources should be sought to gain a clearer understanding.

Part 5: Non-Scientific User Research isn’t a Bad Thing

Indeed. All types of research should be accorded the same level of scrutiny and analysis.

Knowledge of users, like any other knowledge, can be gained from empirical research or critical/cultural research, from empiricism or rationalism. As such, our understanding of users can only progress with the accumulation of knowledge from all these varied perspectives. This is not unlike the accumulation of experience, which (I repeat) is a requirement for expert designers.

“If I have seen further it is by standing on the shoulders of giants.”

~ Sir Isaac Newton

My Conclusion
User Experience research (both scientific and non-scientific) is crucial in aiding designers’ understanding of users, especially in unfamiliar contexts. However, designers should not be over-reliant on it, as the interpretation of research data may be biased or flawed; knowledge of the research background, limitations, etc., is necessary to sift out such errors, but such knowledge is usually unavailable for commercial research.

It is true indeed that research alone does not guarantee good user experience design; the experience of a good designer is also crucial. But I feel that the two are intertwined: experience brings the ability to plan good research that will aid the design, and good research adds to the experience of the designer.

Understanding users is a never-ending cycle; thorough verification with the users should be carried out to ensure that the designers’ understanding (derived from their own experience or from research results) is reflective of the actual users. Ultimately, it all boils down to the users when it comes to User Experience Design.
