Research

Japanese hand calendar

Taken from "Making Meaning from a Clock: Material Artifacts and Conceptual Blending in Time-Telling Instruction", a thesis by Robert Frederick Williams: the Japanese hand calendar uses the structure of the hand to support naming the day of the week that corresponds to a given date.
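To make the contrast between material and symbolic anchoring concrete: the computation the hand supports is just a date-to-weekday mapping. Here is the same mapping done purely in symbols, as a minimal sketch using Zeller's congruence (my own illustration, nothing from Williams's thesis); the hand calendar anchors in knuckles and finger positions the relational structure this code holds in variables.

```python
# Day-of-week computation via Zeller's congruence (Gregorian calendar):
# a symbolic version of the mapping the hand calendar anchors materially.

def day_of_week(year: int, month: int, day: int) -> str:
    if month < 3:          # Zeller treats January and February as
        month += 12        # months 13 and 14 of the previous year
        year -= 1
    k, j = year % 100, year // 100  # year within century, century
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(day_of_week(2006, 6, 15))  # -> "Thursday"
```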

This is an example of how things can act as cognitive artifacts:

Cognitive artifacts as material anchors for conceptual blends

In recent work (Hutchins, in press), Hutchins uses conceptual integration theory to analyze cognitive artifacts whose spatial configurations support reasoning about temporal relations. (...) Hutchins argues that the material structure, here the configuration of bodies in space, anchors the conceptual blend, stabilizing and maintaining the set of conceptual relations during subsequent reasoning or computation.

Why do I blog this? Because it's a nice, visual example of the cognitive properties of artifacts and embodiment.

The cognitive life of things

In the following paper, Edwin Hutchins (proponent of the Distributed Cognition approach/framework) discusses what he calls "the cognitive life of things", attempting to place it in the context of rich multimodal interactions. Hutchins, E. (2006). Imagining the Cognitive Life of Things, presented at the symposium "The Cognitive Life of Things: Recasting the Boundaries of Mind" organized by Colin Renfrew and Lambros Malafouris at the McDonald Institute for Archaeological Research, Cambridge University, UK, 7-9 April 2006.

Hutchins's claim (which he developed in his book Cognition in the Wild) is that cognitive science was fundamentally flawed since its focus was to put cognitive properties inside the person and not in the social and material world. His book was criticized for saying almost nothing about the embodied practices of humans in his examples (ship navigation). This paper tries to make distributed cognition less disembodied by showing how interactions are richly multimodal, creating emergent cognitive effects. In this paper, the author also describes the concept of "cognitive ecology":

By cognitive ecology I mean that all of the elements and relations potentially interact with one another and that each is part of the environment for all of the others (...) This rich cognitive ecology gives rise to some powerful cognitive processes. The embodied interaction with things creates mechanisms for reasoning, imagination, “Aha!” insight, and abstraction. Cultural things provide the mediational means to domesticate the embodied imagination.

Why do I blog this? This kind of argument is interesting to me, especially when I think back to what I learned from my early cognitive psychology courses, which were definitely disembodied (not embodied at all, I would say). I also like the development around the idea of "using the body to imagine the dynamics of things"; this connects to things I've read about the affordance of space in socio-cognition.

A review of "Where the Action Is" (Paul Dourish)

Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press.

The book is about the common thread between current developments in Human-Computer Interaction and Computer-Supported Collaborative Work: embodiment, a central underlying concept for tangible computing and social computing. Tangible computing refers to the distribution of computation across different devices in the physical environment, devices that are sensitive to their location and proximity to other devices and people. Social computing refers to the increasing attempt to understand the social world for interactive system design. Both have in common a familiarity with the everyday world and the way we experience it, the fact that things are embedded in the world (a physical but also a social context).

What is also important here is that this notion of embodiment does not come out of the blue. It's connected to other schools of thought in philosophy, such as Husserl's phenomenology, Heidegger's hermeneutic phenomenology, Schutz's phenomenology of the social world and Merleau-Ponty's phenomenology of perception:

Husserl was concerned with how the life-world was based in everyday embodied experience rather than abstract reasoning; Schutz recognized that this conception of the life-world could be extended to address problems in social interaction. For Heidegger, embodied action was essential to our mode of being and to the way in which we encounter the world, while Merleau-Ponty emphasized the critical role of the body in mediating between internal and external experience.

Of course, as Dourish argues, others have also underlined the importance of physical embodiment as a resource for action: Gibson's concept of affordance, for example, is a relevant one.

Placing the source of action and meaning in the world is an important move. It leads to Dourish's refined definitions: "Embodiment is the property of our engagement with the world that allows us to make it meaningful" and "Embodied interaction is the creation, manipulation and sharing of meaning through engaged interaction with artifacts".

Of course, this is a shift from previous schools of thought in Human-Computer Interaction and CSCW, which were articulated more around cognitive psychology and user interface design. That orientation led to specific methodologies (controlled experiments, lab studies) drawn from experimental psychology and also to theoretical frameworks (like information theory or the importance of internal representations). In HCI/CSCW, these methods and frameworks have proven to be not wrong but incomplete; see for instance the work of Lucy Suchman (or Edwin Hutchins), who promoted the importance of situatedness in cognition (as opposed to internal and preformulated plans): resources for individual actions are situated in the environment. This theoretical shift is followed by a methodological one too: sociology and ethnomethodology (because understanding real-world settings, tensions and the organization of actions became important).

Some excerpts are also directly connected to my PhD research: the ones about 'awareness' (p165, p174-175):

"Awareness is the informal, often tacit, understanding that collaborators have of each other's activities. Being aware of each other's activities helps collaborators organize their own activities to contribute to the progress of the group's work. (...) The role of awareness as an element in the coordination of work emerged first from field studies of collaborative work [Heath and Luff, 1992] (...) Awareness in collaborative systems may arise directly through the visiblity of the effects of other people's actions, or indirectly through the visibility of the effects of actions on the objects of work"

The book is full of interesting stuff, ranging from the mention of Wittgenstein's connection with Elvis Presley to the relationships between designers and social scientists. Also interesting: Matthew Chalmers's review of the book.

Why do I blog this? First because it gives me a framework for my research, especially around the concepts of collaboration and awareness, and how these concepts relate to accountability, meaning or intentionality is of crucial importance. Furthermore, it allowed me to expand my point of view from psychology (my original field) to the social sciences as a whole (from sociology to psychology or anthropology). IMO, different methods can be applied to study different things. The problem is more about the theoretical frameworks, which are still not that clear and often nascent.

Finally, this embodiment concept connects to my work on video game research, since this is exactly what is at stake today: tangible and social computing are really the new directions that expand the vision of gaming situations. Studying players' embodiment and designing accordingly is one of the topics I am working on (not in my PhD but in my gaming research), in terms of learning how tangible interactions are achieved and what we can learn from that for video game design. For instance, the next Wii console will afford new interactions that we are clueless about.

Social visualization claims by T. Erickson

Erickson, T. Designing visualizations of social activity: six claims. In Proceedings of CHI 2003. The author of this paper describes a set of claims drawn from his work on visual representations of groups in online environments. He briefly presents the work done with Babble (see representation below), a tool that proposes "social visualizations": "a visual (or sonic or other perceptual) representation of information from which the presence, activities, and other characteristics of members of a social collectivity may be inferred, and, by extension, can provide the basis for making inferences about the activities and characteristics of the group as a whole". This corresponds to what has been called elsewhere an "awareness tool/interface".

Everyone sees the same thing; no customization: An important aspect of the power of a social visualization is the knowledge that everyone sees the same thing. If I see something, I know that you see it as well and that you know that I know.

Portray actions, not interpretation: let the users interpret—they understand the context better than the system ever will.

Social visualizations should allow deception.

Support micro/macro readings (a la Tufte)

Ambiguity is useful: suggest rather than inform: accurately presenting information is not the point of a social visualization; its primary role is to provide grist for inferences.

Use a third-person point of view: People learn what elements of the social visualization mean by watching it over time, and, particularly, by seeing their own behavior reflected in it.

Why do I blog this? Even though I am more interested in visualizations for designers rather than for users (as in this case), this gives interesting ideas for what I am currently working on. What I will use in my visualization: a third-person point of view (of course, because I want a synthetic view of players' actions), micro/macro readings, and portraying actions (communication, division of labor, roles...).

Replayer: replay and visualize heterogeneous data

Morrison, A., Tennent, P. and Chalmers, M. (2006). Coordinated Visualisation of Video and System Log Data, in Proceedings of the 4th International Conference on Coordinated & Multiple Views in Exploratory Visualization (CMV2006), London, UK. This article describes Replayer, an interesting platform that "replays" various sources of data, producing multiple visualisations that would eventually help analysts and evaluators of systems (in this case a collaborative location-based game).

Replayer [2] is a system designed to support both log and media analysis through the synchronised presentation of media and information visualisation style data exploration tools. A number of visualisation tools are provided for the visual exploration of log data, allowing an analyst to summarise all statistical data from a trial, or focus on a particular factor of interest. The video data are synchronised with the log visualisations, allowing analysts to make selections in one view and immediately jump to the corresponding section of the video. As well as supporting a richer appreciation of the recorded data, the provision of these multiple views allows an analyst to gain a fast overview of the recorded events and perform time-consuming video analysis on only the most salient areas.

A representation of this replay tool: "Five Replayer tools operating in coordination. Clockwise from top left, the figure shows the video component handling two streams, the event series charting signal strength for each user over time, a histogram and time series showing summary information on a system property and the map showing the recorded positions of users based on GPS. Data is taken from a multi-user mobile application.":

The tool offers different components, like time series and event series, histograms of events (distribution over time), and, maybe the most striking feature, a bridge with Google Earth that allows events to be displayed on a map.
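To make the "select in one view, jump in the other" coordination concrete, here is a minimal sketch of the underlying synchronization step, assuming timestamped log events and a known recording start time (names and formats are my assumptions, not Replayer's actual internals):

```python
# Sketch of log/video synchronisation in a Replayer-style tool: map a
# selected log event to an offset into the video so the video component
# can jump straight to the corresponding moment.

from dataclasses import dataclass

@dataclass
class LogEvent:
    t: float     # unix timestamp (seconds) when the event was logged
    user: str
    kind: str    # e.g. "gps_fix", "message", "signal_strength"

def video_offset(event: LogEvent, video_start_t: float) -> float:
    """Seconds into the video stream at which this event occurred."""
    return max(0.0, event.t - video_start_t)

events = [LogEvent(1149760010.0, "player1", "gps_fix"),
          LogEvent(1149760042.5, "player2", "message")]
video_start = 1149760000.0  # timestamp when the camera started recording

selected = events[1]  # the analyst clicks this event in a log view
print(f"jump video to {video_offset(selected, video_start):.1f}s")  # 42.5s
```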

Why do I blog this? Because I am working on similar issues (we previously worked on a replay tool for CatchBob), but I am less concerned with the "replay" function than with producing synthetic representations of the spatial coordination of mobile agents. As a social scientist, I am interested in visualization to help me analyze data from field studies like the one we ran with CatchBob. Currently, our direction is to merge logfiles and the qualitative coding of messages exchanged by CatchBob players into an XML file so that I can create visualizations of coordination. This might be relevant for different practitioners, ranging from video game designers to sports analysis tool developers.

Tangible interface issues with the Wii

French game site Overgame has a pertinent interview with Roman Campos Oriola, a game designer from Ubisoft who is working on a game for the Nintendo Wii. There are some good thoughts about the game controller and the potential interactions (I roughly translated the interesting excerpts):

- We had to reinvent control methods because there are no standards; we worked on that with Nintendo.
- Saber fights are achieved through motion detection: movements are detected and compared with a set of known movements; if there is a match, the program triggers animations (so it's not real spatial positioning).
- The challenge was to find the most natural movements for the players. Typically, for doors, we first thought a movement like a wrist rotation would be OK, because we are used to opening doors like that. We actually noticed that nobody does the same movement for that. In the end, the most natural way was to explain to players that they simply had to push the door, and this is the movement we kept. That's where the difficult part is: trying to know what will be understood by players in terms of movements.
- The problem is then to know what will be obvious for the player, but there are also other issues; for instance, we had to separate the different actions: it's not possible to ask players to perform simultaneous movements with both pads as if they were playing drums.
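The "compared with a set of known movements" step sounds like plain nearest-template matching; here is a hedged sketch of that idea on accelerometer traces (entirely my own illustration; Ubisoft's actual recognizer is certainly more sophisticated):

```python
# Minimal nearest-template gesture matching: compare an incoming
# accelerometer trace against stored movement templates and report the
# closest one if it is close enough to trigger an animation.

def distance(trace, template):
    """Mean squared distance between two (x, y, z) sample traces."""
    n = min(len(trace), len(template))
    return sum((a[i] - b[i]) ** 2
               for a, b in zip(trace[:n], template[:n])
               for i in range(3)) / n

def recognize(trace, templates, threshold=0.5):
    """Name of the closest known movement, or None (no animation fires)."""
    best_name = min(templates, key=lambda name: distance(trace, templates[name]))
    return best_name if distance(trace, templates[best_name]) < threshold else None

templates = {
    "push_door":   [(0.0, 0.0, 1.0), (0.0, 0.0, 2.0), (0.0, 0.0, 1.0)],
    "saber_slash": [(2.0, 0.0, 0.0), (0.0, -2.0, 0.0), (-2.0, 0.0, 0.0)],
}
sample = [(0.1, 0.0, 1.1), (0.0, 0.1, 1.9), (0.0, 0.0, 0.9)]
print(recognize(sample, templates))  # -> "push_door": trigger that animation
```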

Why do I blog this? These kinds of issues are very important, and empirical testing would be great for understanding the grammar of interactions that players can perform and understand with such tangible interfaces.

Sharing mobile device context: a comparison between Bluetooth and NFC

Kostakos, V., O'Neill, E., Shahi, A. (2006). Building Common Ground for Face to Face Interactions by Sharing Mobile Device Context. Workshop on Location and Context Awareness (LOCA 2006), Dublin, Ireland. Lecture Notes in Computer Science 3987, Springer, pp. 222-238. In this paper, the authors rely on Herbert Clark's theory of "grounding" (the construction of a shared understanding of the situation during collaboration), through a mobile application that allows users to securely exchange the contents of their address books (via Bluetooth and NFC).

The most important basis for the construction of common ground, evidence of common membership of cultural communities, is often difficult to establish. (...) Our application uses Bluetooth, NFC and mobile device address books as a means of locally sharing context. (...) We utilise users’ address books as the source of context. Using our application, two users are made aware of the common entries in their address books. (...) [The scenario:] Alice and Bob exchange digests of their address books. They then compare the received digests with their local digests to identify matches. Alice is then shown her local information linked to the matches, and so is Bob. The displayed information is not necessarily identical.
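The paper does not publish its exact digest scheme, but one plausible way to implement it is to hash each normalized address-book number, swap the hash sets, and intersect them, so each side learns only the common entries (a minimal sketch; function names and the normalization rule are my assumptions):

```python
# Sketch of a privacy-preserving address-book comparison: each user sends
# only hashes of normalised numbers; matches are mapped back to the local
# labels, so Alice and Bob each see their own names for common contacts.

import hashlib

def normalise(number: str) -> str:
    """Keep digits only, so formatting differences don't break matching."""
    return "".join(c for c in number if c.isdigit())

def digest(address_book: dict[str, str]) -> set[str]:
    """address_book maps a local contact name to a phone number."""
    return {hashlib.sha256(normalise(n).encode()).hexdigest()
            for n in address_book.values()}

def common_entries(local: dict[str, str], remote_digest: set[str]) -> list[str]:
    """Local names whose numbers also appear in the other user's book."""
    return [name for name, number in local.items()
            if hashlib.sha256(normalise(number).encode()).hexdigest() in remote_digest]

alice = {"Carol": "021 555 01 23", "Dave": "021 555 04 56"}
bob = {"Carol mobile": "021-555-0123", "Eve": "021 555 07 89"}

print(common_entries(alice, digest(bob)))  # -> ['Carol'], under Alice's own label
```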

The technological affordance is quite interesting too:

With Bluetooth, two people can use our application without having established prior physical communication (in the form of eye contact, body language, or verbal communication). On the other hand, the use of NFC requires that the users and devices enter each other’s “intimate zones”. (...) the different ranges of Bluetooth and NFC create two different models of interaction between the users. Using Bluetooth, users need verbally to negotiate and coordinate their efforts to exchange data. With NFC, users have the cue of physically touching their phones. This tangible interaction is an explicit action which synchronises both the data exchange between devices and the coordination process between the users.

The user study is quite simple and gave intriguing results concerning the Bluetooth/NFC differences, especially the fact that users preferred NFC over BT. Here are some of the lessons drawn from this study:

Bluetooth:
- Could be useful for getting to meet strangers
- Users reluctant to respond to requests from unknowns
- Does not give away physical location of user
- Weak joint experience
- Request-reply model
- Limited usability when using the phone

NFC:
- Participants initially thought the system would exchange numbers
- Preferred for face to face interaction
- Strong joint experience
- Symmetric model

Why do I blog this? This connects to the fact that I also use Clark's framework; the relations between technology/media and the establishment of common ground are therefore of interest to me. I also liked the comparison between the two technologies (NFC/BT), and I would be happy to see a broader field study in a more ecological context (namely, real contacts).

Visualizing the overall structure of a tennis match

Still browsing documents in the Information Visualization world, I ran across this paper: Liqun Jin; Banks, D.G. (1997): TennisViewer: a browser for competition trees, IEEE Computer Graphics and Applications, 17(4), 63-65. It's about a tool called "TennisViewer" that aims at giving coaches, players and fans a new way to analyze, review, and browse a tennis match; a very interesting cultural practice IMO.

A tennis novice watching a match for the first time might be surprised that the crowd erupts with cheers when a player wins one point, then barely applauds when he wins the next. The crowd is not necessarily fickle; some points are genuinely more important than others because a tennis match is hierarchically structured. One match consists of several sets. One set consists of several games. One game consists of several points. The match-winning point is the most important one. How can we make that importance visible? Our goal is to let a fan, a player, or a coach examine tennis data visually, extract the interesting parts, and jump from one item to another quickly and easily. The visualization tool should help parse the elements of a match.

We developed an interactive system called TennisViewer to visualize the dynamic, tree-structured data representing a tennis match. It provides an interface for users to quickly explore tennis match information. The visualization tool reveals the overall structure of the match as well as the fine details in a single screen. It uses a 2D display of translucent layers, a design that contains elements of Tree-Maps and of the Visual Scheduler system, which was designed to help faculty and students identify mutually available (transparent) time slots when arranging group meetings. TennisViewer provides MagicLens filters to explore specialized views of the information and a time-varying display to animate all or part of a match (...)

[Figure caption: TennisViewer displays a computer-generated tennis match. (a) A serve (top) is returned out of bounds (bottom). (b) One Magic Lens filter lies on top of another, revealing ball traces within a point.]
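The competition tree the authors describe maps naturally onto a weighted tree whose node sizes drive the Tree-Map layout; here is a minimal sketch of that data structure (my illustration of the idea, not the authors' code):

```python
# The match -> sets -> games -> points hierarchy TennisViewer visualises.
# Each node's point count is the weight its rectangle gets in a Tree-Map.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                        # "match", "set 1", "game 3", "point 2"...
    winner: str | None = None         # which player won this unit
    children: list["Node"] = field(default_factory=list)

    def size(self) -> int:
        """Number of points below this node: its Tree-Map weight."""
        return 1 if not self.children else sum(c.size() for c in self.children)

def point(i: int, w: str) -> Node:
    return Node(f"point {i}", winner=w)

game1 = Node("game 1", winner="A",
             children=[point(1, "A"), point(2, "B"), point(3, "A"), point(4, "A")])
set1 = Node("set 1", winner="A", children=[game1])
match = Node("match", winner="A", children=[set1])
print(match.size())  # 4 points: the relative area the match rectangle gets
```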

Why do I blog this? This paper is related to my current research project. I like this idea of showing the "overall structure" of the match; that's basically my aim with the CatchBob data: the point would be to show some underlying phenomena (the exchange of coordination information) and make them visible.

Visualizing collaboration in CatchBob!

I am currently working on visualizations of collaborative actions in CatchBob (the pervasive game platform I use to study the impacts of mutual location-awareness interfaces). What I am interested in is depicting a chronological account of collaborative processes, drawing on system logfiles (and the researcher's analysis in the form of message categorizations, all packed into an XML file). Processes such as:
- division of labor: the way a group of teammates divides the work among themselves; in my case it's mostly spatial (because the task is about exploring space), but there might be some simple roles;
- the exchange of "coordination keys" among players: mutually recognized information that enables the teammates to choose the right actions to perform so that the common goal might be reached.

It's not clear whether I will come up with one or more visualizations, but the chronological view seems very pertinent (to see the evolution of the coordination key exchange). I might add a more spatial representation and potentially statistical graphs (like the evolution of players' dispersion in space, or the evolution of the number of coordination devices exchanged).

As for the method, I now have an XML Schema that proposes a grammar for the system logfiles and the researcher's annotations. This allows me to transform my raw data into basic XML files that I can use to create visualizations. The point of this is to have a depiction of the activity; it might be useful for different persons: researchers (like me, interested in the analysis of collaboration), environment designers (concerned with evaluation), community managers or trainers (in the case of sports analysis), or even users. However, I am less interested in the "user" category, since it might need a different interface, taking into account the fact that these awareness tools are used while performing the task, which is not what I am studying here.
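To give an idea of what the merged file might look like and how it would feed a timeline, here is a minimal sketch (the actual CatchBob schema differs; every tag and attribute name below is made up for illustration):

```python
# Hypothetical merged trial file: logged events plus researcher-annotated
# messages, read back as a flat, time-ordered stream for a timeline view.

import xml.etree.ElementTree as ET

RAW = """
<trial team="T07">
  <event t="12.4" player="A" type="position" x="102.5" y="34.0"/>
  <event t="15.1" player="A" type="message" category="coordination-key">
    Bob is not in the east wing
  </event>
  <event t="18.9" player="B" type="position" x="98.0" y="51.2"/>
</trial>
"""

root = ET.fromstring(RAW)
for ev in root.iter("event"):
    t = float(ev.get("t"))
    if ev.get("type") == "message":
        # the researcher's category would drive the colour on the timeline
        print(t, ev.get("player"), ev.get("category"), ev.text.strip())
    else:
        print(t, ev.get("player"), "at", ev.get("x"), ev.get("y"))
```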

Looking back at the CSCW/CSCL literature made me realize that most of the studies about activity visualization are concerned with social networks/sociograms. See for instance what is done at the Sociable Media Group, like Visual Who (a tool for visualizing the complex relationships among a large group of items: it depicts the Media Lab community using the Lab's mailing lists) or History Flow (which provides a dynamic visualization of wiki modifications by multiple authors).

There is a tremendous mass of examples of such infoviz/informative art (a good way to find the most relevant ones is to browse infosthetics.com). What I am interested in is hence less the depiction of social networks than of processes (like message exchange over time, role switching...). Mark Ackerman (in Ackerman, M.S. and Starr, B. (1995). Social Activity Indicators for Groupware. In Proceedings of the Eighth ACM Symposium on User Interface and Software Technology, pp. 159-168. Pittsburgh, PA: ACM Press) calls these "social activity indicators"; he describes three types of them: social network diagrams (as explained above), notifications of users' actions, and activity graphs/depictions. I am interested in the latter.

Here are some examples I found that I think would be relevant for my purposes:

This example is taken from Actogram, a good PC-based tool for analysing qualitative data. It's quite complicated, but I like the multidimensional and synthetic perspective, even though the layout is not that sexy.
Proposed by Benford et al. (2002), "Staging and Evaluating Public Performances as an Approach to CVE": "In this case, we are looking at a GANTT chart representation of the key scenes in chapter 1 of Avatar Farm. Time runs from left to right and the different colours distinguish scenes that were occurring in different virtual worlds. The tool allows the viewer to overlay the paths of different participants through the structure. We see two participants (Role 2 and Role 3) in our example."
Developed by Fabien and Patrick for the portailvisualizer. It shows users' activities on a portal over time. This synthetic representation is one of my favorites.
This one is taken from Paul Tennent's replay tool (the picture actually depicts five different windows of his tool: "Clockwise from top left, the figure shows the video component handling two streams, the event series charting signal strength for each user over time, a histogram and time series showing summary information on a system property and the map showing the recorded positions of users based on GPS. Data is taken from a multi-user mobile application.", described in "Coordinated Visualisation of Video and System Log Data" by A. Morrison, P. Tennent, M. Chalmers (2006)).

Why do I blog this? It's a "so what" question here for my research project. Based on these examples, I am now trying to clarify what I want to visualize and how. The idea is to have a main representation (such as the portailvisualizer version, or maybe more like the Gantt chart from the Benford paper) and different small visualizations as in Tennent's work. I am still wondering about possible spatial representations such as:

Ideas or references are welcome!

Teams, problem detection and coordination

Klein, G. (2006): The strengths and limitations of teams for detecting problems, Cognition, Technology & Work. The paper is a "preliminary investigation of the ability of teams and organizations to detect problems": the author aims at identifying barriers, caused by the difficulties of coordination, that may restrict a team's problem detection ability. To do so, he examined different fields.

Problem detection in operational settings requires expertise and vigilance. It is a difficult task for individuals. If a problem is not detected early enough, the opportunity to avoid or reduce its consequences may be lost. Teams have many strengths that individuals lack. The team can attend to a wider range of cues than any of the individuals can. They can offer a wider range of expertise, represent different perspectives, reorganize their efforts to adapt to situational demands, and work in parallel. These should improve problem detection. However, teams can also fall victim to a wide range of barriers that may reduce their alertness, mask early problem indicators, confound attempts to make sense of initial data, and restrict their range of actions. Therefore, teams may not necessarily be superior to individuals at problem detection. The capability of a team to detect problems may be a useful measure of the team’s maturity and competence.

What is interesting are the different lists the author proposes of barriers to problem detection in individuals and in teams. Here is the one about barriers to problem detection in teams:

Initial alert
- Production pressure discourages vigilance for problems
- Team members face differential consequences of problems

Cue recognition
- The high cost of sending information filters out important messages
- A team member may fail to notify others in the mistaken belief that they already know
- A team member may assume that the absence of a message means that nothing happened
- There may be a disconnect between the data collectors and the data interpreters
- Inexperienced members as data collectors may miss early signs
- Unskilled data collectors can mask early signs of problems
- Bureaucratic rivalries disrupt the exchange of data
- Inconsistencies may be missed if they cross team boundaries
- There may be difficulty in communicating the urgency of a perceptual cue

Sensemaking
- Multiple patterns allow multiple interpretations, so it is easier to deflect urgency
- The team may fail to realize that a common understanding has been lost
- The team may fail to use a central node to form a common picture of events and catch patterns
- Expertise is lost in trying to form interpretations using indirect evidence
- Problem indicators may be repressed

Action
- Organizational inertia hinders action
- Challenges to credibility can prevent action

Why do I blog this? For my literature review about coordination and its inherent barriers.

Positioning and seamful design

Åsa Rudström, Kristina Höök and Martin Svensson (2005). Social positioning: Designing the Seams between Social, Physical and Digital Space. In 1st International Conference on Online Communities and Social Computing, at HCII 2005, 24-27 July 2005, Las Vegas, USA, Lawrence Erlbaum Associates. The paper is part of the "seamful design"/seamfulness trend in HCI (introduced by Weiser (1991) and further developed by Chalmers and Galani (2004) and by Chalmers, Dieberger, Höök & Rudström (2004)): getting away from a seamless vision of computation and rather taking advantage of seams (connections, gaps, overlays and mismatches) in design. This is all the more important considering the chaos and unpredictability of the physical world:

Most developers and researchers of mobile services make the assumption that users should never have to worry about when and how they are connected to the digital space. (...) But reality is and will continue to be less than perfect (...) Seamless design aims to hide what is perceived as unnecessary technical details from users. However, if these technicalities affect the functionality of a service, an alternative would be to carefully design features that enable users to visualise, understand and possibly take advantage of differences and variations in functionality or accessibility: seamful designs. An example of a successful, unobtrusive design of a “seam” is the visualisation of signal strength available on mobile phones. This visualisation is not strictly necessary – the user will be aware of signal strength anyway, since it affects the quality of the connection. However, without much explanation it becomes a tool that allows users to search for locations with better signal strength in areas with low coverage. It may also educate users in understanding of where connections could be expected to be stronger (close to a window) or weaker (in a tunnel).

What is interesting here is the idea of using seam visualizations to "educate" users and, as a consequence, get them to modify their behavior to make better use of the tool. This is the simplest example, and we also encountered it in CatchBob when users moved around to find a connection to the network.

Another part of the article that is very relevant for my research is where they address seamful design in "the quest for perfect positioning". They note that "Positioning is another area where the mobile industry – and much research – strives for perfection". Based on the study of GeoNotes (a place-based annotation system allowing users to attach digital "Post-it" notes to physical locations), they showed that "Positioning offered by technology often does not correspond to the positions people want to refer to":

GeoNotes used a WLAN network that technically speaking offered notes to be posted at each WLAN hotspot. However, hotspot coverage corresponds very poorly with the buildings, rooms and other places where users move about. Instead of providing lists of places where notes could be attached, GeoNotes users were allowed to themselves name the places where they wanted to attach their notes.

In a one-month field test with 78 users, seams between the underlying hotspot model and the user perceived model of how places should be named were elegantly handled by the users (Fagerberg, Espinoza & Persson, 2003). Place labels were created by the end-users to post notes at places that covered smaller areas than the positioning system could handle, such as “the sofa” or somewhat more esoteric “the lecturer’s forehead”. (...) By not forcing any official labelling system upon GeoNotes users, they were set free to explore the relation between hotspot coverage and perceived places – thus dynamically exploring the intermedia seam between the digital and the physical.

Why do I blog this? What I like is this idea of challenging one of the trends in mobile research and industry: the striving for seamless, continuous connection and for perfect positioning. This is perfectly in line with what I am working on (showing how human agency is important in mutual location awareness). I also like the idea that "Positioning offered by technology often does not correspond to the positions people want to refer to"; what I am interested in is not revealing messages' positions, as in GeoNotes, but rather people's locations in real time. What happens when you have this sort of service? Does it make sense? Judging from the CatchBob! experiment, people do not always benefit from that information (in terms of performance or socio-cognitive processes), but they manage to notice the flaws (which is what Fabien is looking at).

Besides, their short literature review of location-based applications is a good summary of what has happened in the last five years.

Awareness and Interruptions

Dabbish, L., Kraut, R. (2004). Controlling Interruptions: Awareness Displays and Social Motivation for Coordination, in Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work. ACM Press: Chicago, IL, pp. 182-191. The paper addresses the notion of awareness from an interesting angle: how awareness displays might interrupt and then impact people's activity (leading to performance problems). The authors used a very simple game to investigate whether "team membership influences interrupters' motivation to use awareness displays and whether the informational-intensity of a display influences its utility and cost".

Results indicate interrupters use awareness displays to time communication only when they and their partners are rewarded as a team and that this timing improves the target's performance on a continuous attention task. Eye-tracking data shows that monitoring an information-rich display imposes a substantial attentional cost on the interrupters, and that an abstract display provides similar benefit with less distraction.

This study has direct implications for design:

To balance the tradeoff between the amount of information presented and the incentive to use that information, electronic communications systems could regulate the awareness information they provide based on an interrupter’s inferred motivation to use that information. For example, in designing a corporate instant messaging client, one could apply these results by presenting a workload awareness display of a target’s activities only to people internal to the user’s project or company, and no such display to people outside the company.

Currently, the “away” and “busy” messages which various instant messaging clients use are too temporally coarse to provide sufficient information for synchronizing interruptions. (...) Displaying information about a remote collaborator’s workload helps both parties if that information is in an easy to process format and the potential interrupter has incentive to be polite.

Why do I blog this? Because my research is about studying how certain awareness tools (bringing mutual location awareness) influence collaboration in terms of producing mutual intelligibility. Taking interruptibility into account might be an issue; however, the activities I study are less continuous, so interruptions are less important.

"Media Space: Reflecting on 20 Years" workshop at CSCW 2006

What happens to "media space" when you have ubiquitous cell-phone cameras, web-cams, iChat, architectural-scale displays, the Internet, and globalized work? That's the sort of question a workshop led by Steve Harrison at CSCW 2006 will try to address.

Since the first media spaces were created in the 1980's, technology has changed and affordable real-time desktop conferencing is a reality. But what happened to the ideas of the media space? While there are ubiquitous cell-phone cameras, web-cams, iChat, architectural scale displays, the Internet, and globalized work, how do these current technologies and collaborative experiences look like and look different than those of a media space? What is the current state of systems that employ socially negotiated control instead of enforcing an established policy? What is the meaning of "awareness" and "presence" today? (...) We particularly seek significant unanswered questions and challenges to current paradigms that further media space research might address. Papers will be peer-reviewed and 15 will be selected. This workshop is inspired by an invitation to submit a book proposal on this topic to Springer's CSCW book series

Why do I blog this? Right on the spot with respect to my research interests; I don't know whether I will have enough time/energy to prepare something, but...

Besides, it will certainly be food for thoughts for Alex Pang's "end of cyberspace" discussion.

Workshop at NordiCHI: Near field interactions

This is a call for proposals for a workshop on user-centred interactions with the internet of things at NordiCHI 2006, October 14 and 15, 2006 in Oslo, Norway.

The user-centred Internet of Things

The so-called ‘Internet of Things’ is a vision of the future of networked things that share a record of their interactions with context, people and other objects. The evolution of networking to include objects occupying space and moving within the physical world presents an urgent design challenge for new kinds of networked social practice. The challenge for design is to overcome the current overarching emphasis on business and technology that has largely ignored practices that fall outside of operational efficiency scenarios.

What is imminently needed is a user-centred approach to understand the physical, contextual and social relationships between people and the networked things they interact with.

The mobile device as early enabler

The mobile phone is likely to play a key role in the early adoption of the internet of things. Mobile devices offer ubiquitous networks and interfaces, enabling otherwise offline objects at the edges of the network. Near Field Communication (NFC: http://www.nfc-forum.org/aboutnfc/) is a mobile technology that has been designed to integrate networked services into physical space and objects. NFC introduces a sense of ‘touch’, where interactions between devices are initiated by physical proximity.

In use, the mobile phone brings with it a history of personal and social activities and contexts. It is in this evolution that we see user-agency and social motivation emerging as an interesting area within the internet of things.

Workshop goals

In this workshop we intend to build knowledge around the hands-on problems and opportunities of designing user-centred interactions with networked objects. Through a process of ‘making things’ we will look closely at the kinds of interactions we may want to design with networked objects, and what roles the mobile phone may play in this.

We will focus on the design of simple, effective and innovative interactions between mobile phones and physical objects, rather than focusing on technical or network issues.

The primary questions for the workshop are:

- What kinds of common interactions will emerge as networked objects become everyday?
- What role will the mobile phone have to play in these interactions?
- How do we encourage playful, experimental and exploratory use of networked things?

Some secondary questions are:

- What interaction models can we bring to the internet of things? Do the fields of embodied interaction, tangible, social, ubiquitous or pervasive computing cover the required ground for designers?
- What new kinds of social practices could emerge out of the possibilities presented by networked things?
- How will the physical form of everyday objects and spaces be transformed by networks and near field interactions? How would this be reflected in users' behavior?
- How can the design of physical objects help in overcoming potential information or interaction overload, and how does search or findability change when in a physical context?
- How can we move beyond commonsensical features such as object activation or findability?
- What kind of user-communities will co-opt the technology, and how will they hack, adjust and re-form it for their needs?

Workshop structure

Each workshop day will begin with a keynote presentation from invited experts. On the first day, participants will each give a short presentation of their position paper, no longer than 5 minutes.

Groups of 3-4 people, each with different skills and backgrounds, will then work on concepts, scenarios and prototypes. Prototypes may take the form of physical models, scenarios or enactments. We encourage the use of our wood, plastic and rapid prototyping workshops to create physical prototypes of selected concepts. We will provide workshop assistants for the creation of physical models.

Outcomes

The outcomes should come in a range of implementation styles, allowing for a variety of outputs that speak to a wide audience. A report will be written on the workshop and published on the Touch project website and in other relevant channels.

Call for participation

The workshop is open to participants from human factors, mobile technology, social science, interaction and industrial design. Practitioners and those with industrial experience are strongly encouraged. Prior research work on embodied interaction, social and tangible computing would be particularly relevant. Participants will be selected based on their relevance to the workshop, and the overall balance of the group. Space is limited to 25 participants.

Call for short position papers

Application is by position paper no longer than two pages. The position paper can be visual or experimental in design and content. The themes should cover an issue that is relevant to the design of interactions with everyday objects.

The deadline for papers is 1 August; selected participants will be notified on 9 August. The workshop itself is October 14 and 15, 2006.

Papers and any questions should be submitted to timo (at) elasticspace (dot) com before 1 August.

Organisers

Timo Arnall is a designer and researcher at the Oslo School of Architecture & Design (AHO). Timo’s research looks at practices around ubiquitous computing in urban space. At the moment his work focuses on the personal and social use of Radio Frequency Identification (RFID) technologies, looking for potential interactions with objects and city spaces through mobile devices. Previously his research looked at flyposting and stickering in public space, suggesting possible design strategies for combining physical marking and digital spatial annotation. Timo leads the research project Touch at AHO, looking at the use of mobile technology and Near Field Communication.

Julian Bleecker is a Research Fellow at the University of Southern California’s Annenberg Center for Communication and an Assistant Professor in the Interactive Media Division, part of the USC School of Cinema-Television. Bleecker’s work focuses on emerging technology design, research and development, implementation, concept innovation, particularly in the areas of pervasive media, mobile media, social networks and entertainment. He has a BS in Electrical Engineering and an MS in computer-human interaction. His doctoral dissertation from the University of California, Santa Cruz is on technology, entertainment and culture.

Nicolas Nova is a Ph.D. student at the CRAFT (Swiss Federal Institute of Technology Lausanne) working on the CatchBob! project. His current research is directed towards the understanding of how people use location-awareness information when collaborating in mobile settings, with a peculiar focus on pervasive games. After an undergraduate degree in cognitive sciences, he completed a master in human-computer interaction and educational technologies at TECFA (University of Geneva, Switzerland). His work is at the crossroads of cognitive psychology/ergonomics and human-computer interaction; relying on those disciplines to gain better understanding of how people use technology such as mobile and ubiquitous computing.

CitiTag field study

Yanna Vogiazou, Bas Raijmakers, Erik Geelhoed, Josephine Reid, Marc Eisenstadt (2006). Design for emergence: experiments with a mixed reality urban playground game. Personal and Ubiquitous Computing, Vol. 10, 1, Springer. This paper reports a field study of CitiTag, a wireless location-based multiplayer game designed to enhance spontaneous social interaction and novel experiences in city environments by integrating virtual presence with physical presence.

The game design is pretty simple:

As a player of CitiTag, you belong to either of two teams (Reds or Greens) and you roam the city, trying to find players from the opposite team to ‘tag’. When you get close to someone from the opposite team, you get the opportunity to ‘tag’ them: an alert appears on the screen with a sound. You tap on the screen with your thumb to ‘tag’ the other person. You can also get ‘tagged’ if someone from the opposite team gets close to you and ‘tags’ you first. If this happens, you need to try and find a team member in vicinity to set you free, to ‘untag’ you.
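Reading that description, the core rule is a proximity test between the GPS fixes of players on opposite teams; here is a hedged sketch of that logic (distances, thresholds and field names are my assumptions, not the actual CitiTag implementation):

```python
# Core CitiTag-style rule as I read it: a player may tag a nearby player
# of the opposite team; a tagged player must then be freed by a teammate.

import math

TAG_RANGE_M = 20.0  # assumed proximity threshold

def metres_apart(p1, p2):
    """Rough planar distance between two (lat, lon) fixes; fine at city scale."""
    dlat = (p1[0] - p2[0]) * 111_320
    dlon = (p1[1] - p2[1]) * 111_320 * math.cos(math.radians(p1[0]))
    return math.hypot(dlat, dlon)

def can_tag(me, other):
    """True if 'me' may tag 'other': opposite team, in range, not already tagged."""
    return (me["team"] != other["team"]
            and not other["tagged"]
            and metres_apart(me["pos"], other["pos"]) <= TAG_RANGE_M)

red = {"team": "red", "tagged": False, "pos": (51.4545, -2.5879)}
green = {"team": "green", "tagged": False, "pos": (51.4546, -2.5880)}

if can_tag(red, green):
    green["tagged"] = True  # in the game, an alert and sound would fire here
```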

The field study is quite interesting. The methodology is both qualitative and quantitative, with different kinds of data:
- video of usage (almost a think-aloud protocol)
- group interviews ("open discussions loosely structured around the main research themes: experience of gameplay, the game as part of everyday life, group cooperation and strategies, awareness of others and interaction with the device")
- questionnaires (graphic rating scale questions to investigate correlations)

Some excerpts from the results that I found relevant to my work:

What was interesting in CitiTag is how participants turned the technical difficulties to their advantage (...) At least a couple of people in the Bristol trial tried to take advantage of server communication lags by teaming up in a pair: in this way they were in a more advantaged position than a lonely opponent: even if he or she tagged one of them, the other usually still had enough time to tag the opponent and then rescue the tagged team member. (...) Our players in Bristol tried to use the environment to their advantage by hiding behind obstacles when trying to approach another person. A few people also tried to stay behind a bush for some time. However, hiding is not only physical as there is another form of hiding possible in CitiTag; one participant mentioned that if you go under the bus stop you would lose GPS so you could not be tracked any more, what we have identified as hiding in the virtual world, i.e. still visible by others, but not virtually ‘there’. (...) ‘team aware’ individuals are good Cititag players, that CitiTag is a team game and that ‘group state’ awareness information is important for ‘team belongingness’ and for cooperation to emerge. (...) team awareness is significantly correlated to amusement, awareness of other people and the importance assigned to winning

Why do I blog this? First, because it's a good example of "the real world as an interface". Second, because the results are very interesting for my research. About amusement connected to awareness of others: interesting, because there is a similar result in CatchBob! (even though I haven't really dug into this). Finally, the discussion about "the mixed reality challenge: a mismatch between overlayed virtual reality and what users expect to see in the real world" is very relevant:

participants were frustrated by the fact that they would see people really close to them and expect game events to come up on screen (i.e. ‘tag’ ‘untag’ the other player), but there would be nothing new displayed or the events would come up with a delay. So the game did not correspond to the immediate environment as promptly as they would have expected. This was due to GPS errors and wi-fi loss and we believe that it is a typical and significant problem for mixed reality experiences. (...) Once we have provided a link between an overlayed reality and the real world, people expect to see the connection between the two. If what they see with their own eyes is not reflected in their device with a relevant timely alert, their expectation is not satisfied and this decreases enjoyment and hampers the game experience.

There is a lot more to draw from this paper, but I just stressed a few issues related to my work.

Science 2.0 examples

OpenWetWare ("an effort to promote the sharing of information, know-how, and wisdom among researchers and groups who are working in biology & biological engineering") has a very nice page about ideas concerning "Science2.0".

Some existing examples:
- Regularly scheduled printing of journal issues --> continuous release of articles in online format
- Peer-reviewed specialty journals --> articles aggregated and ranked by search engines (Google style) or via a catalog and user reviews (Amazon style)
- Methods & techniques publishing --> sharing materials

And some ideas to investigate:

- Slashdot for scientific articles and ideas
- Online lab notebooks
- Lab "feed": an equivalent of an RSS feed of what is coming out of a lab, updated daily/weekly. Results would be less finalized but might help people coordinate on projects across labs rather than just repeating each other's work in secret.
- Immediate data sharing: currently, authors must keep data private until a paper is published. Regardless of whether the publishing process can be accelerated, it would be desirable if the raw data could be shared earlier while protecting the authors' rights to publish papers based on the data. This idea is related to some of the suggestions/problems above.
- Scientific "currency" outside of authorship on papers: I think this might enable better data sharing, among other things. I.e., if you use my data you need to give me X credits (where the value of X credits to my career is less than authorship on a paper but greater than nothing). I have no idea what such a system would look like.
- Collaborative written works: currently, review articles written by experts in the field are the primary mechanism by which the state of a research area is evaluated. Instead, one could imagine using collaborative writing tools like wikis to maintain a real-time synopsis of a field.

Why do I blog this? I am interested in how web2.0 tech could reshape science (research!) practices.

Awareness and Accountability in MMORPG

A very good read yesterday on the train: Moore, Robert J., Nicolas Ducheneaut, and Eric Nickell (2006): "Doing Virtually Nothing: Awareness and Accountability in Massively Multiplayer Online Worlds", Computer Supported Cooperative Work (ISSN 1573-7551).

The paper acknowledges the fact that "despite their ever-increasing visual realism, today’s virtual game worlds are much less advanced in terms of their interactional sophistication". Through diverse investigations of MMORPGs using video-based conversation analysis (grounded in virtual ethnography), they look at the social interaction systems in massively multiplayer virtual worlds and then propose guidelines for increasing their effectiveness.

Starting from the face-to-face situation (the richest in terms of social interaction, as opposed to geographically dispersed settings), they state that participants are able to access certain observational information about what others are doing in order to interpret others' actions and design appropriate responses. This leads to coordination (I have personally used different frameworks to talk about this, for instance Herbert Clark's theory of coordination). In a face-to-face context, three important types of cues are available: "(1) the real-time unfolding of turns-at-talk; (2) the observability of embodied activities; and (3) the direction of eye gaze for the purpose of gesturing".

They then build their investigations around those three kinds of cues, which are less available in virtual worlds. This can be connected to the work of Tony Manninen (like "The Hunt for Collaborative War Gaming - CASE: Battlefield 1942"). It also makes me think of the seminal paper by Clark and Brennan about how different media modify the grounding process (the establishment of a shared understanding of the situation).

Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on Socially Shared Cognition. Washington, DC: APA Books.

Why do I blog this? I still have to go further into the details of each of these investigations, but I was very interested in their work because:
- the methodology is complementary to what I am doing in CatchBob to investigate mutual awareness and players' anticipation of their partners' actions. The interactionist approach here could be very valuable to apply in my context. I am thinking about deepening the analysis of the messages exchanged by players (the map annotations) to see how accountability is conveyed through the players' drawings.
- they translate results from empirical studies into concrete and relevant design recommendations (for instance: "other game companies should probably follow There’s lead and implement word-by-word (or even character-by-character) posting of chat messages. Such systems produce a turn-taking system that is more like that in face-to-face, and they better facilitate the coordination of turns-at-chat with each other and with other joint game activities.")

HCI research about awareness of others in nightclubs

"DJs' Perspectives on Interaction and Awareness in Nightclubs" is a paper by Carrie Gates (University of Saskatchewan), Sriram Subramanian (University of Saskatchewan), Carl Gutwin (University of Saskatchewan) at DIS2006. This is the account of their project which aims at investigating DJ-Audience Interaction in Nightclubs.

We are examining the ways in which DJs and audiences gain awareness of each other in nightclub environments in order to make a set of design principles for developing new technologies for nightclubs. We expect to discover opportunities to enhance communication and feedback mechanisms between DJs and audiences, and to discover opportunities for developing novel audience-audience communications in order to create more meaningful interactions between crowd members, more playful environments, and a new dimension of awareness in nightclubs. We also expect that these design principles could be explored later within other audience-presenter situations, such as in classrooms or theatres.

Why do I blog this? Though a bit curious, this is very relevant from an HCI point of view: questions related to the awareness of others are important in that context.

It reminds me of an article by Beatrice Cahour and Barbara Pentimalli about the awareness of waiters in a café (in French: Awareness and cooperative work in a café-restaurant). They show how awareness is linked to the attention mechanisms of the participants and how their level of awareness is constantly varying.

Ethnographic studies of ubiquitous computing

Supporting Ethnographic Studies of Ubiquitous Computing in the Wild by Crabtree, A., Benford, S., Greenhalgh, C., Tennent, P. and Chalmers, M., in Proc. ACM Designing Interactive Systems (DIS 2006). In this paper, the authors draw upon four recent studies to show how ethnographers are replaying system recordings of interaction alongside existing resources such as video recordings, to understand interactions and eventually assemble coherent understandings of the social character and purchase of ubiquitous computing systems. In doing this, they aim at identifying key challenges that need to be met to support the ethnographic study of ubiquitous computing in the wild.

One of the issues there is the fact that ubicomp distributes interactions across a wide range of applications, devices and artifacts. This fosters the need for ethnographers to develop a coherent understanding of the traces of the activity, both external (audio and video recordings of action and talk) and internal (logfiles, digital messages...). Additional problems for ethnographers are the facts that users of ubiquitous systems are often mobile, often interact with small displays and with invisible sensing systems (e.g. GPS), and that interaction is often distributed across different applications and devices. The difficulty then lies in reconciling these fragments to describe the accountable interactional character of ubiquitous applications.

I like that quote because it expresses the innovation here: the articulation between known methods and what they propose:

"Ubiquitous computing goes beyond logging machine states and events however, to record elements of social interaction and collaboration conducted and achieved through the use of ubiquitous applications as well. (...) System recordings make a range of digital media used in and effecting interaction available as resources for the ethnographer to exploit and understand the distinctive elements of ubiquitous computing and their impact on interaction. The challenge, then, is one of combining external resources gathered by the ethnographer with a burgeoning array of internal resources to support thick description of the accountable character of interaction in complex digital environments. "

The article also describes requirements for future tools, but I won't discuss that here (maybe in another post, reflecting on our own experience drawn from CatchBob!). Anyway, I share one of the most important concerns they have:

The ‘usability’ of the matter recognizes that ethnographic data, like all social science data, is an active construct. Data is not simply contained in system recordings but produced through their manipulation: through the identification of salient conversational threads in texts logs, for example, through the extraction of those threads, through the thickening up of those threads by synchronizing and integrating them with the contents of audio logs and video recordings, and through the act of thickening creating a description that represents interaction in coherent detail and makes it available to analysis
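To illustrate what this "thickening" could look like in code, here is a hypothetical sketch: extract the messages of one conversational thread from a text log, then compute the padded time window used to cut the matching slice of audio or video. The log format, the participant-based matching rule and the padding value are all assumptions on my part:

```python
# Hypothetical sketch of "thickening" a conversational thread: select the
# log entries belonging to one exchange, then derive the time window of
# the audio/video recordings to synchronize with it.

def extract_thread(log, participants):
    """Keep only the messages exchanged between the given participants."""
    return [entry for entry in log
            if entry["from"] in participants and entry["to"] in participants]

def clip_bounds(thread, padding=5.0):
    """Time window covering the thread, padded so surrounding talk and
    action on the video can be brought into the description too."""
    times = [entry["t"] for entry in thread]
    return max(0.0, min(times) - padding), max(times) + padding

chat_log = [
    {"t": 30.2, "from": "A", "to": "B", "text": "where are you?"},
    {"t": 33.8, "from": "B", "to": "A", "text": "north stairs"},
    {"t": 35.1, "from": "C", "to": "A", "text": "found it!"},
]

thread = extract_thread(chat_log, {"A", "B"})
print(clip_bounds(thread))  # (25.2, 38.8): the slice of video to inspect
```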

Why do I blog this? This paper describes a relevant framework of methods that I use, even though I would argue that my work is a bit more quantitative, using mixed methods (ethnographic and quantitative) with the same array of data (internal and external). It's full of relevant ideas and insights about this and about how effective tools could be designed to achieve this goal.

What is strange is that they do not spend much time on one of the most powerful usages of the replay tool: using it as a source for post-activity interviews with participants. This is a good way to use external traces to foster richer discussions. In CatchBob! this proved very efficient for gathering information from the users' perspective (even though it's clearly an a posteriori re-construction). This method is called "self-confrontation" and is very common in the French tradition of ergonomics (the work of Yves Clot or Jacques Theureau, mostly in French).

Besides, there are some good connections with what we did and the problems we had ("the positions recorded on the server for a player are often dramatically different from the position recorded by the GPS on the handheld computer.") or:

the use of Replayer also relies on technical knowledge of, e.g., the formats of system events and their internal names, and typically requires one of the system developers to be present during replay and analysis. This raises issues of how we might develop tools to more directly enable social science researchers to use record and replay tools themselves and it is towards addressing these and related issues that we now turn.
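On the position mismatch mentioned above, a quick-and-dirty way to get a handle on it is to pair server-side and device-side GPS fixes by timestamp and measure the distance between them. The data layout, the 2-second matching window and the coordinates below are invented for the sake of the example:

```python
import math

# Hypothetical sketch: quantify the gap between positions logged on the
# server and positions recorded by the GPS on the handheld computer.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def discrepancies(server_fixes, device_fixes, max_skew=2.0):
    """Pair fixes whose timestamps fall within max_skew seconds of each
    other and return the distance (in meters) between each pair."""
    gaps = []
    for t_s, lat_s, lon_s in server_fixes:
        for t_d, lat_d, lon_d in device_fixes:
            if abs(t_s - t_d) <= max_skew:
                gaps.append(haversine_m(lat_s, lon_s, lat_d, lon_d))
    return gaps

server = [(100.0, 46.5191, 6.5668)]   # (timestamp, lat, lon)
device = [(100.5, 46.5188, 6.5675)]
print([f"{gap:.0f} m" for gap in discrepancies(server, device)])
```

A systematic pass like this over a whole session would at least tell the analyst which of the two traces to trust before building an account on top of them.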

Spatial technology workshop at UpFing06

(Apologies for the rough English below: I took notes in real time and recomposed them quickly.) As I mentioned earlier, I had to run a workshop about "locative media" and spatial technology today. What was interesting is that attendees had quite different ideas in mind when coming to it: some were concerned with business models, others with memories in space, one or two with a curiosity about Google Earth and place-based annotations, others with mobility and technology. Maybe the description on the website was a bit too narrow: since it quoted Google Earth, Yellow Arrow or Flickr, different representations were triggered in people's minds.

After introducing the whole concept and describing the fact that it is a bit messy and covers lots of practices/technologies/services/usages, there were three presentations. The first one was by Yann Le Fichant, who leads a company called voxinzebox; he explained the different services they propose for city navigation (first on 2nd-generation GSM, now on PocketPC). He reminded us of the importance of self-geolocation in that context (people declaring their own location on a cell phone to get information about a specific place, which would eventually guide them to various landmarks). He also underlined the importance of PNDs (personal navigation devices) like TomTom or Garmin, which are more and more complex (improved memory, communication protocols) and could lead to innovative new tools. Yann provocatively asked why the sex industry has not yet found any big hits using location-based applications. The discussion also touched on Google's move into 3D modeling by buying SketchUp (a modeling tool that would eventually allow people to model their house in 3D and put it on a Google map).

Then Cyril Burger talked about his PhD research: an ethnography of mobile phone usage in the Parisian subway. Cyril investigated people's behavior and trajectories while using audio communication and SMS. He underlined the fact that the transport operator initially did not introduce any norms, so the rules that emerged were based on another normative code: how people drive. Through that code, rules of sociability emerged in terms of movement (for instance, stopping in locations that are not crowded so that the flow is not cut; the arrival of the metro often leads the user to stop the conversation). In terms of gesture, people often stay motionless while texting, whereas audio communication leads to more active/lively behavior (gestures, smiles...).

I also like his remark about the fact that non-material places need material places: servers have to be located somewhere. This is connected to what Jeffrey Huang talked about at LIFT06: the fact that networked technologies lead to new sorts of places (and, subsequently, that place still matters).

Then Georges Amar (foresight manager at RATP, the Paris subway operator) presented his company's new paradigm. Subway companies previously based their development on hygienist theories: efficiency was correlated with fluidity and as little contact as possible (nicely exemplified by contactless RFID subway passes); the subway was disconnected from the city. Automation led to layoffs and the disappearance of controllers and even drivers, and this made the subway permeable (more and more insecurity, people riding without paying): the city entered the subway. Now their model is rather about having both efficiency AND contact: let's take advantage of the presence of people; the city is in the metro and there are opportunities to offer relevant services. The crowd is seen as a resource and not as a constraint.

In terms of prospective services, places/stations can be transformed, new types of jobs can be created and the transporter's role changes accordingly. The subway could then be seen as a PLACE to meet people, or at least to do something with others. One of the attendees mentioned Starbucks' idea of positioning itself as a place for business meetings: would the subway have certain areas for business meetings? Another point: signs that are currently fixed and directed at every user could be individualized for certain categories of customers (with specific interests or disabilities), or, going even further, the crowd's traces in space could become material for creating new kinds of signs that foster better navigation or the discovery of places or people.

After those three presentations, we discussed different projects (current or prospective): earthTV (seeing real-time events in Google Earth; this has actually been considered in the Japanese subway to see where the crowd is and better avoid it), tags in Google Earth (very often community-based, such as "I use Linux" close to the Microsoft building), locators of personal objects (googling my shoes, finding my personal belongings), indoor technologies (museums), trackers (kid/prisoner tracking).

Overall, the discussion revolved more around mobility, people and (quite a lot) meetings, and less around technologies and usage. That's important from a rhetorical point of view: we discussed contexts and needs (with a particular emphasis on the subway experience) as opposed to the technology-push projects we've seen so far: allowing PEOPLE (in a specific context: mobility, a limited amount of time, limited cognitive resources because of route finding) to do something (having meetings and exchanges with others, discovering information related or not to the route).

One of the conclusions here was also that innovation in spatial technologies often comes from particular companies such as RATP (subway operations) or JCDecaux (urban ads), which are ubiquitous and bound to specific mobile needs. Some researchers from a French phone operator acknowledged that innovation is very tough for them because everything is either locked down or behind walled gardens when it comes to phones (SIM cards, low interoperability, different standards, hard-to-use voice/location-based applications, different kinds of phones/handhelds...). This resonates with discussions we had at the lab (see here or there).
