Evaluation
I was intrigued by how many different factors and methods come into play when evaluating, and surprised that it’s such a big area in interaction design. I thought beforehand that the most common way of evaluating a product today must be analytics, using data from a natural setting involving users, since most tech made today is connected and thus can feed user data back to the designer. It became clear that’s not the case. I believe that think-alouds, which are a controlled setting involving users, provide a lot of useful information to a designer. Unfortunately, the results can be somewhat unrepresentative, because the part of the brain that’s responsible for decision making is also used for speech and for a high level of cognitive awareness, meaning that when the most interesting decisions are made the user is most prone to go quiet.
As seen in the chapter on data gathering, usability testing might also provide both quantitative and qualitative data. The former might be given by keystrokes, mouse movements or the time to complete a task, while qualitative data might be collected through semi-structured interviews, for example. It’s important, though, when interpreting and presenting the data, to be aware that the results might be influenced by the setting: a test conducted in a lab says little about how users will interact with the design “in the wild”.
When lacking real users, heuristic evaluation is the way to go. The set of heuristics seems to be a great guideline for checking whether the design meets the demands of good interaction design, and doing this before releasing might save lots of time. Fitts’ law describes, in a mathematical manner, the time it takes to select objects on a screen; it needs no users at all and can still motivate major design decisions, which does seem really useful.
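For reference, the version of Fitts’ law I have seen most often (the Shannon formulation) expresses the movement time MT to reach a target as

MT = a + b * log2(D / W + 1)

where D is the distance to the target, W is the target’s width along the direction of movement, and a and b are constants fitted from measured data. To my understanding, this is why larger targets placed closer to the pointer, such as buttons docked at a screen edge, are faster to select.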
Question: What evaluation method would be the best for our design nr.2?