I have a few years' experience of working with and around interaction design, so, unsurprisingly and somewhat reassuringly, this week's reading did not offer any major revelations for me to reflect on. But I will endeavour to keep with the spirit of the exercise.
The material deals with data gathering, data analysis and requirements discovery, presented in that order and suggesting a work process in that same order. The order is also reinforced by the course project (so far): we first gather data about a group of people at a "journey-location" (notably, a group chosen for no business or even product-development reason), and then do a state-of-the-art analysis of what they might or ought to interact with in relation to the chosen journey-location, still with no concern for business or product. The aspiration is to discover the needs of the group at the journey-location and, from that, come up with a design that meets those needs, which is interesting though somewhat lofty.
A more practical situation to become accustomed to would be that of the start-up team that knows where its development skills and domain knowledge lie, agrees from that what product it can make, and therefore takes a product-centric approach: gathering data about potential users of the product, then analysing that very relevant data to discover what the product needs to be. Another approach is that of a business which has investment in (or at least funding for) a specific business plan, and therefore takes a business-centric approach to gathering data, and so on as in the previous example.
This is of course an iterative process, where a better understanding of the product leads to a better understanding of the users and vice versa. Hopefully the project will allow for at least one such repeated iteration.
The literature focuses on ways to gather data that are easy to label and put in a book: interviews, questionnaires and observations. Of these, the least covered, yet probably the most available and most used in software development, is indirect observation of usage statistics (including A/B testing), and to some degree observation in controlled environments (such as inviting users to the office to be observed while using the product, or sitting with a user at her workplace). There is no assumption of pre-existing expertise, or of really "becoming a user". (Direct insider observation suggests observing other "real" users as a spy, rather than being a real user and focusing on whatever real users focus on.)
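As a concrete, entirely hypothetical illustration of what such indirect observation often boils down to in practice, here is a minimal Python sketch of deterministic A/B bucketing plus usage-event logging; the names assign_variant and log_event are my own, not taken from the literature or from any particular analytics library.

```python
import hashlib

# Hypothetical sketch: bucket users into A/B variants deterministically,
# so that logged usage events can later be compared per variant.

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Hash user id + experiment name into a stable bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def log_event(user_id: str, experiment: str, event: str) -> None:
    """Record one usage event, tagged with the user's variant."""
    variant = assign_variant(user_id, experiment)
    # Stand-in for real analytics storage (a database, log pipeline, etc.)
    print(f"{experiment},{variant},{user_id},{event}")

# Example: two users are bucketed once and their actions are logged.
log_event("user-1", "new-checkout-flow", "clicked_buy")
log_event("user-2", "new-checkout-flow", "abandoned_cart")
```

The point of the deterministic hash is simply that the same user always lands in the same variant, so the usage statistics gathered "indirectly" stay comparable over time.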
The literature offers well-considered approaches to design, such as user-focused design (building what the users need) and activity-focused design (building what is needed to do the activity), but it should also mention practical considerations, e.g. working with what you know, as when designing tools for other developers (as in any API) or when the developers are themselves users and build what they want to use (as in most open source software). Then there is the very common "accidental design", which starts out with no design at all, done out of curiosity or for the sake of development itself; only later, if ever, are the edges rounded off for easier use. That has been the design approach behind much of the world's technical innovation, and most if not all high art. (Cf. the KTH insignia "Vetenskap och Konst", Science and Art.)
I was amused by the revelation that the think-aloud protocol is an established technique,
something to do consciously, rather than what inevitably happens when
you draw interactions on a whiteboard or demo a product/mockup :)
My question for the seminar is how to overcome the Dunning-Kruger effect: in re-evaluating what one has learned so far about the product and its users, in feedback from users and experts, and in those with greater business authority over the design (e.g. managers, executives, shareholders…).