MAR 19 2006
"Usability is a technical problem we can solve on our own"A common solution I hear in OSS projects to learn about relevant features is to track the users' interactions with the software: The menu items they click, the settings they check, the applications they use. Implementing a tool that is able to track such data seems to be a widespread idea among Open Source developers. The rationale is usually to provide usability experts (like me) with data that describes the actions and behaviours of users. A noble gesture - but of limited use for us. Such a tool is an attempt to solve a usability problem technically. But usability is not about technics, it is about humans. The human's interaction with a computer. In order to understand the human's behaviour, we need to know about their context of use: Their goals, their tasks, their expectations and prior knowledge. Without that information, it is impossible to interprete data gathered by an action tracker. How can we know why users did not make use of templates when writing a document - didn't they find it, didn't they understand it, or wasn't it necessary in the scope of the task they were performing, e.g. making some quick notes? Did the users achieve their goals, or did they abort the task? The same accounts for settings: Did users click an option by accident or intentionally? Did they not configure something because it was irrelevant or because they did not understand it? And how do we know which kind of users sent in their answers? In the worst case it's only members of the community themselves, and the benefits are minimal. Context-less usage data will return some low peaks, some high peaks, and some sequences of actions. Without any further information, it is of limited use, and learning about priorities and relevancy is impossible. As a late response to Frans Englich's article on NewsForge Frans Englich's article on NewsForge Usability is a technical problem we can solve on our own in July 2004 I hereby emphasise: Usability is more than a technical problem. It requires extensive knowledge about the target users and the way they think. Don't underestimate the complexity of the human mind!
Comments
Why?
Why dismiss an avenue to get some hard facts? A lot of time has in fact been wasted in usability discussions just arguing about basic figures:
"Most users want it this way!"
"No -- this is how most people would like to have it!"
Also, do not underestimate the possibilities for mining the usage data and otherwise making sense of it. You may not believe this will ever become fruitful, but frankly, that argument is based on ignorance at this point.
I'll throw in an example: one of the most common data mining techniques is clustering. By clustering users' choices, at least the following would be possible:

- Grouping application options according to how they are used; creating presets.
- Quantitatively identifying groups of users, in addition to just making up personas.
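To make that concrete, here is a rough sketch of the kind of clustering I mean; the action names and per-user counts below are invented for illustration, not real tracking output:

```python
# Sketch: cluster users by how often they trigger each UI action.
# Action names and counts are made up for the example.
from sklearn.cluster import KMeans

ACTIONS = ["templates.new", "addressbook.open", "mail.send", "settings.fonts"]

# One row per (anonymised) user: how often each action was triggered.
usage = [
    [0, 12, 15, 0],   # heavy mailer, never touches templates
    [0, 10, 11, 1],
    [9,  0,  1, 4],   # document writer, lives in templates and settings
    [7,  1,  0, 5],
]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(usage)
for user, cluster in enumerate(labels):
    print(f"user {user} -> cluster {cluster}")
```

Users that land in the same cluster share a usage profile, and the actions that dominate a cluster hint at which options belong together in a preset.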
because of the context data!
of course you'd get some data from collecting click actions, and you could run statistical analysis on it. and sure, you'd get some indicators. but the most important factor, the context data, is missing. you would have to assume it, and then you are exactly at the point you said you wanted to bypass: you argue about basic figures.
i'm not saying i don't want quantitative methods. i am a psychologist and had a very good education in statistics, and i know they are crucial. but the pure tracking of actions, without collecting all the crucial covariates, is very poor from a statistical point of view.
> Grouping application options according to how they are used; creating presets.
those presets of options originate from the tasks users want to perform.
when you do proper ui design, you do not go into the software, look at the options, and group those that might help to accomplish a task. you do it the other way round: you analyse the task, consider what is required to accomplish it, and create a user interface that best suits it.
mining click data may be an indicator for certain things, but using it as the basis for ui design is just not the right way.
> Quantitatively identifying groups of users, in addition to just making up personas.
groups of users are not defined by the options they use. they have goals, prior knowledge, and different habits. actually, personas should be constructed statistically - considering exactly those factors instead of 'just' the click behaviour.
Why not?
There is no reason that you cannot extract underlying tasks from the data:
Task := group of clusters of click sequences
You would recover the classical tasks (e.g. sending an email to someone in your address book) from looking at the clusters, but might well find some new, less trivial ones as well. Granted, identifying a task would, at least in a naïve approach, require manual intervention in grouping clusters of clicks that take different paths to accomplish it. But this could to some extent be overcome by looking at higher-level events: e.g. an address book entry was accessed, followed by a mail being sent to that person. This sequence will always occur in the above task, no matter how the user chooses to do it. So perhaps the following is better:
Task := sequence of high-level events
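A minimal sketch of what counting such a task could look like; the event names and the in-order matching are my assumptions, not an existing tool:

```python
# Sketch: count how often a task, defined as an ordered sequence of
# high-level events, occurs in a user's event log.
def task_occurrences(log, task):
    """Count non-overlapping, in-order occurrences of `task` in `log`."""
    count, i = 0, 0
    for event in log:
        if event == task[i]:
            i += 1
            if i == len(task):   # whole sequence seen -> one occurrence
                count += 1
                i = 0
    return count

log = ["menu.file.new", "addressbook.entry.opened", "mail.composed",
       "mail.sent", "addressbook.entry.opened", "mail.sent"]

# "send mail to someone in the address book", however the user gets there
print(task_occurrences(log, ["addressbook.entry.opened", "mail.sent"]))  # -> 2
```

The point is that the two high-level events bracket the task no matter which path the user takes in between.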
It is the Google approach versus the Yahoo one. Few people expected dumb automatic indexing to be superior to human compilation, but clever data mining turned out to make all the difference.
I am certainly not saying that click data would answer everything, but you are in fact telling developers not to implement collecting it, as far as I understand you. That would [likely|possibly] be a loss.
representative user sample
> That would [likely|possibly] be a loss.
... in this regard, I'm doing a cost-benefit analysis.
There are surely situations where click tracking in combination with contextual data collection is useful. But that needs to happen in a controlled environment, and it's questionable whether huge OSS projects ever get the opportunity to do such a controlled analysis.
A controlled environment means:

1. Most click tracking approaches I've seen so far spend a lot of effort on developing the technique, but little on making sure the right people are tracked. If it is anonymous - how do you know it's a representative sample of your target users? Also: who will switch on such a tool? If it has to be downloaded as an extra plugin - will non-community members do that? If it is integrated with the application: doesn't the mere possibility that an application can 'call home' lessen the trust in that application?

2. The users who agreed to be tracked answer questions about their behaviour, which are correlated with the data set from the tracking, and they are possibly asked for a post-test interview. Otherwise it's hard to know whether people use the address book because auto-completion does not work properly, or because they need something special within the address book. This means a lot of effort, far from automagical data mining.
The first point is the one that worries me most. How do you get a representative sample without losing your users' trust? You ask them for very sensitive data, and as I wrote above, the mere possibility that an application can collect data may cause a loss of trust. Remember the 'Windows XP calls home' hysteria...
It may work for smaller applications with a smaller and more uniform user base, but for KDE or GIMP or any other OSS project where the target users are not even defined, it's impossible to address a representative sample without doing something illegal ;-)
True, but..
..don't completely disregard data that can be obtained easily and automatically. For instance, even if it only proved that nobody uses the paste/copy/cut toolbar icons in konqi much, that would be useful in deciding whether or not to remove them.
But yes, I agree: raw data like this isn't always useful and needs to be coupled with real observation and questioning. But who has time for that?
Logging during tests
I thought about the issue and had the following idea:
What if such a user tracking feature were present in applications only during usability tests, and logged the user's actions, including timestamps, to a file?
It could be a compile-time option, enabled only for software in usability labs.
The action logfile could then be used afterwards to check whether the test observer missed something, for example when users use keyboard shortcuts or perhaps mouse gestures.
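A minimal sketch of such a logger, assuming a simple enable flag; in a C++ application this would more likely be a real compile-time #ifdef, and the file name is made up:

```python
# Sketch of a test-only action logger: appends timestamped UI actions
# to a file. USABILITY_LOGGING stands in for a compile-time switch.
import time

USABILITY_LOGGING = True          # hypothetical build/test flag
LOGFILE = "actions.log"           # hypothetical path

def log_action(action, detail=""):
    if not USABILITY_LOGGING:
        return
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(LOGFILE, "a") as f:
        f.write(f"{stamp}\t{action}\t{detail}\n")

log_action("shortcut", "Ctrl+Shift+V")   # catches what an observer might miss
log_action("mousegesture", "back")
```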
yes
yes, that would be a good idea.
actually, it does not need to be limited to tests in usability labs, but could very well be used for task analysis, that is, observing users in their natural environment while they do their everyday tasks, and interviewing them before/while/after their work.
the keyword in all of that is a controlled setting: you have the opportunity to ask the users about their goals, and why they do something this way and not another, so you'll be better able to identify problems.