I attended the Information and Communication faculty research day at EPFL, themed "Invisible Computing: Novel Interfaces to Digital Environments". Two bits from the presentations caught my eye. In her presentation entitled "The myth of touch", Chia Shen from Harvard University dealt with 3 wrong ideas about touch-based interfaces. She started by reminding us how people want touch because they have the impression that it's better for engaging users (and that it helps to "remember" and collaborate). She then moved to the description of 3 myths in this context:
- "Myth #1: touch is natural". The reality is not that simple. Actually, touch is natural only up to 3 UI-less gestures: zoom in/out, pan/scroll, tap. For some applications, the mouse may be better, and there is an entire user culture built around this vocabulary.
- "Myth #2: multi-touch = multi-user". As Bill Buxton put it, "now not only can my eye see the pixels, but the pixels can see my finger". There is, for example, a problem with 2 fingers... whose fingers? If you don't know whose fingers are there, how do you zoom, select, or pan?
- "Myth #3: touch is intuitive". It's really the data that is intuitive; if it's not, you become an interpreter for the interface (...) A large proportion of our cognitive system is devoted to interpreting sensory information from body parts with the most sensory receptors, such as our fingertips (...) Visual sensory input overwhelms audio and tactile in the human brain.
It's always interesting when technology researchers bring out and explore their own myths: what is taken for granted and why it's wrong. I wish the speaker had spent more time on this issue to dig into the details. The "natural" bit is interesting as it echoes some elements I discuss: it connects to the fact that what is natural is socially constructed and shifts over time.
The third presenter, Richard Harper (from Microsoft Research), used the example of "smart home" design to describe his perspective on innovation and the design process. Some points he started with echoed what I discuss in my courses:
"It doesn't matter where you start, but it matters what assumptions you make... you need to start with the right assumptions.
Users do not know what the future is. HOWEVER, the future is visible in your behaviors; the future lies here in the present, in weird behavior, the things we do that are actually special, evocative and rich. We have aspirations and hopes to make our life a success; we can learn from that for innovation, to bring out new ideas."
He then used his exploration of the complexity of the home experience to demonstrate this process, finishing with different technological projects to support his claims. Some of the points he made about people's experience at home were quite interesting:
"People want to make distinctions: when they make a home, they make it different from work, they make their home different from everyone else's. But it isn't easy; it's full of contradictions: people want to close the door on the world outside but they still want contact with that world (call their friends...). Furthermore, when they make their homes special, they cannot be so special that visitors don't feel at home.
When someone gets home, sits down and switches on the TV, they are switching themselves off, but they have to work at doing nothing (housework, kids asking for things, giving love to their partner). There is so much to do and so little time to do nothing.
And the occupants themselves make for contradictions: some tidy the home up, some make a mess, some set up homes, others leave home...
Do designers have to be smart to understand a smart home? Yes, but it's not the technology that requires them to be smart. Don't assume that there is an integrated model of the user (one that fits all), but that doesn't mean that there can't be innovation."
Why do I blog this? Some highly intriguing elements there. I find it quite interesting to see how this can push the envelope a bit at EPFL, where it's uncommon to have this sort of approach (unfortunately, technologies are often the starting point).