Research

CSCW course take-away

Some take-aways for students of our Computer-Supported Collaborative Work course:

Common errors:

  • Media Richness is the solution: no, adding a video channel is not necessarily better in terms of coordination effectiveness and group collaboration.
  • Text is knowledge: text is information but not knowledge; learning is not reading.

Challenges:

  • Mediating informal interactions among distributed teams
  • Invent uses of location-based services, pervasive computing and awareness tools

Today's trends:

  • Computation is back to real, to things you can touch (roomware, tangibles,…)
  • Physical space, places, topology, context and infrastructure matter

How will computation transform the new spaces that it comes to occupy?

Williams, A., Kabisch, E., and Dourish, P. (2005). From Interaction to Participation: Configuring Space through Embodied Interaction. Proc. Intl. Conf. Ubiquitous Computing Ubicomp 2005 (Tokyo, Japan). This paper addresses this very important question: how will ubiquitous computing transform the new spaces that it comes to occupy? Or, what sorts of impacts on space result when it is populated by ubicomp technologies? The paper starts by describing how space and social action are tightly entwined. The authors then examine the development and evaluation of a collective dynamic audio installation called SignalPlay (a series of physical objects with embedded computational properties that collectively control a dynamic “sound-scape” responding to the orientation, configuration, and movement of the component objects).

Some excerpts of this insightful paper:

Our fundamental concern is with the ways in which we encounter space not simply as a container for our actions, but as a setting within which we act. The embodied nature of activity is an issue for a range of technologies. (...) This social character means that spaces are not “given”; they are the products of active processes of interpretation. The meaningfulness of space is a consequence of our encounters with it. For ubiquitous computing, this is an important consideration. (...) The research challenge, then, is to understand how it is that computationally augmented spaces will be legible; with how people will be able to understand them and act within them. (...) A number of broad observations are particularly notable. (...) First, it was notable that people sought to understand the system not as a whole but in terms of the individual actions of different components. (...) Objects take on meanings and interpretations in their own right rather than as elements of a “system.” This suggests, then, that users’ experiences and interpretations of ubiquitous computing systems will often be of a quite different sort than those of their designers, because of the radically different ways in which they encounter these systems. (...) Second, one particularly interesting area for further exploration is the temporal organization of activity. (...) The temporality of interaction and encounters with technology is a neglected aspect of interaction design and an important part of our ongoing work. (...) Lastly, ubiquitous computing technologies are ones through which people encounter and come to understand infrastructures. (...) The presence or absence of infrastructure, or differences in its availability, becomes one of the ways in which spaces are understood and navigated.
At conferences or in airports, the seats next to power outlets are in high demand, and in a wide range of settings, the strength of a cellular telephone signal becomes an important aspect of how space is assessed and used. As we develop new technologies that rely on physical but invisible infrastructures, we create new ways of understanding the structure of space. (...) Our design models must address space not as a passive container of objects and actions, but as something that is explicitly constructed, managed, and negotiated in the course of interaction.

Why do I blog this? Simply, a large part of my research is geared towards studying the relations between space/place and social/cognitive processes; this paper is very relevant for that matter since it offers some pertinent ideas about how this would apply in the field of ubiquitous computing. I also appreciated the idea of taking as the core of ubicomp the relationship between people, objects, and activities, cast in terms of the ways in which practice evolves. Each of their findings is important for the results I am currently analysing concerning the CatchBob! game usage:

  • As for the first point (people sought to understand the system not as a whole but in terms of the individual actions of different components), the features we provided in CatchBob have individual consequences: the location-awareness tool, for instance, in itself creates a certain consistency of behavior.
  • The temporal organization of activity is very important in the CatchBob! pervasive game: each part of the activity is different, and the interface features have a different impact on each of them.
  • The link between the infrastructure and the activity of using the ubicomp technology in CatchBob lies in the fact that the network is sometimes available and sometimes not, and the accuracy of the positioning/message exchange varies over time; this will be Fabien's PhD work.

The Reconstruction of Space & Time through Mobile Communication Practices

A call for papers that might lead to something I'd be interested to read/work on:

call for papers for the first Mobile Communication Research Annual. In conjunction with Transaction Publishers and a distinguished editorial board, we are requesting submissions in the area of "The Reconstruction of Space & Time through Mobile Communication Practices."

Rich Ling and Scott Campbell

The volume's theme will be "The Reconstruction of Space and Time through Mobile Communication Practices." The proliferation of wireless and mobile communication technologies gives rise to important changes in how people experience space and time. These changes may be seen in many realms of social life, such as the transformation of public into private space and vice versa, the blurring of lines demarcating work and personal life, and new patterns of coordination and social networks. Recent scholarship has tried to make sense of these changes in space and time. For example, Manuel Castells argues that advances in telecommunications have contributed to new spatio-temporal forms, which he describes as "the space of flows" and "timeless time." According to Castells, these new forms mark a shift in the importance of the meaning of a place to the patterns of the de-sequenced, networked interactions that occur in that place. The purpose of this special issue is to continue and deepen the dialog on how space and time change as a result of the lower threshold for interaction due to mobile communication technologies.

Abstracts of 200 words describing the proposed papers are due by 17 March 2006 with those accepted due in final form by 1 September 2006.

Why do I blog this? I feel this topic is highly relevant to my research, especially with regard to how knowing where others (people + objects) are can change specific behaviors (communication content, channel of communication used, negotiation and coordination processes...)

Readers ask me for the URL of the CfP. I don't know it; I got it from a mailing list (telecom-cities) under the name "Call for Papers: THE MOBILE COMMUNICATION RESEARCH ANNUAL"...

Latour's inscriptions and software development

Following my thoughts about Latour's inscriptions (see last week's post), I ran across this good paper about distributed software development and its link with 'inscriptions'. Latour's inscriptions are about how "social arrangements, debates, divisions of labor, and patterns of work become inscribed into the artifacts and representations in which science trucks". In the context of software development, the authors want to study the relationship between technological artifacts and the social structures that shape them.

De Souza, C., Froehlich, J., and Dourish, P. (2005). Seeking the Source: Software Source Code as a Social and Technical Artifact. Proc. ACM Conf. Supporting Group Work GROUP 2005 (Sanibel Island, FL).

Our work has been motivated by the question of whether aspects of informal software process can be found in the structure of the software artifact itself. Using a software visualization tool, Augur, we have been conducting an analysis of the artifacts of a number of software projects, a “software archeology” to explore the relationships between artifacts and activities as they are negotiated in distributed software development through mining software repositories. (...) Each pane displays a different aspect of the system being examined: changes in one view are immediately reflected in the others. The large central pane shows the line-oriented view of the source code. In the figure, the color of each pixel line indicates how recently it was modified; this allows a developer, at a glance, to see how much activity has taken place recently and where that activity has been located.
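The line-age coloring described in that excerpt can be sketched in a few lines (an illustrative toy assuming a per-line age in days; the function and the palette are mine, not Augur's actual code):

```python
def age_to_color(age_days, max_age_days=365.0):
    """Map a source line's age (days since last modification) to an
    RGB color: freshly changed lines are bright red, untouched lines
    fade toward grey.  Illustrative only -- not Augur's palette."""
    # normalize age into [0, 1]: 0 = just modified, 1 = old
    t = min(max(age_days / max_age_days, 0.0), 1.0)
    red = int(255 * (1 - t) + 128 * t)
    grey_channel = int(128 * t)
    return (red, grey_channel, grey_channel)
```

Painting one such pixel line per source line is what lets a developer see, at a glance, where recent activity has been located.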

The conclusions are as follows:

Distributed software development presents two sources of complexity to its participants – the complexity of the software artifacts under development, and the complexity of the process of developing those artifacts. We have presented a study of software artifacts, conducted using a visualization tool, which demonstrates how these twin sources of complexity are intertwined. Software artifacts are not merely the objects of software development processes, but are also the means by which those processes are enacted and regulated. The structure of the artifact both reflects the processes by which it has been created and can be used to control those processes by centralizing points of access, by regulating the relationships between independent activities, and by making visible the relationships between individuals. It is a means, then, by which the articulation work of the project can be carried out.

Why do I blog this? I am fascinated by this: how technological artifacts and social structures might shape a phenomenon such as product development.

Latour's research inscriptions

Relying on Latour's sociology of science (and others such as Latour and Woolgar, 1991; Callon, Law, and Rip, 1986), inscriptions (journal articles, conference papers, presentations, grant proposals, patents, images of many sorts, databases...) are the core of scientific/knowledge work. Inscription is basically the process of creating technical artifacts that stabilize the research work so that it can travel across space and time and be combined with other work (and eventually establish the protection of an actor's interests/credibility). On a personal note, I still have to turn this into a higher-level research inscription: my PhD results mindmap

This is a kind of summary of my research results so far.

Kids and robots learning to play hide and seek

The following paper describes a field study about children and robots learning to play hide and seek (by a research group from the Naval Research Laboratory).

How do children learn how to play hide and seek? At age 3-4, children do not typically have perspective taking ability, so their hiding ability should be extremely limited. We show through a case study that a 3 1/2 year old child can, in fact, play a credible game of hide and seek, even though she does not seem to have perspective taking ability. We propose that children are able to learn how to play hide and seek by learning the features and relations of objects (e.g., containment, under) and use that information to play a credible game of hide and seek. We model this hypothesis within the ACT-R cognitive architecture and put the model on a robot, which is able to mimic the child's hiding behavior. We also take the “hiding” model and use it as the basis for a “seeking” model. We suggest that using the same representations and procedures that a person uses allows better interaction between the human and robotic system.

Why do I blog this? I found interesting the idea of "a specific object-relationship hypothesis dealing with how children learn to play hide and seek, and the second representational hypothesis dealing with the types of representations and algorithms or procedures that should be used for intelligent systems". Food for thought about cognition and problem solving.

Implications for design and ethnographical studies in HCI

A good read this afternoon: Dourish, P. 2006. Implications for Design. Proc. ACM Conf. Human Factors in Computing Systems CHI 2006 (Montreal, Canada).

The article criticizes the canonical papers (in the field of Human-Computer Interaction) which report results from an ethnographic study with a final section called "implications for design". The normative epistemology of the HCI field makes this section mandatory (as the author mentions, "the absence of this section tends to be correlated with negative reviews"). In this paper, Paul Dourish wants to explore "the ways in which the “implications for design” may underestimate, misstate, or misconstrue the goals and mechanisms of ethnographic investigation". To him, this focus is misplaced and researchers are consequently missing the point of how ethnography could benefit HCI research.

Some pertinent excerpts (with regard to my work + research interests):

ethnographic methods were originally brought into HCI research in response to the perceived problems of moving from laboratory studies to broader understandings of technology use. (...) The term “ethnography,” indeed, is often used as shorthand for investigations that are, to some extent, in situ, qualitative, or open-ended. (...) a corpus of field techniques for collecting and organizing data (...) often been aligned with the requirements gathering phase of a traditional software development model [a good connection here -nicolas] (...) In reducing ethnography to a toolbox of methods for extracting data from settings, however, the methodological view marginalizes or obscures the theoretical and analytic components of ethnographic analysis.

But Dourish does not mean that ethnography is useless for finding implications for design; rather, he wants to show that it is not only meant to bring this kind of contribution. And this is very interesting:

Ethnography provides insight into the organization of social settings, but its goal is not simply to save the reader a trip; rather, it provides models for thinking about those settings and the work that goes on there. The value of ethnography, then, is in the models it provides and the ways of thinking that it supports. Ethnography has a critical role to play in interactive system design, but this may be as much in shaping research (or corporate) strategy as in uncovering the constraints.

Why do I blog this? While considering the global framework for my PhD thesis, I keep these kinds of ideas in mind, especially when it comes to the contribution to the HCI field. However, even though I try to include some mixed methodologies, my work is quantitative-dominant, with ethnographical methodologies used on top of that (for instance for triangulating results).

User-Centered Needs in Pervasive Gaming

Player-Centred Game Design: Experiences in Using Scenario Study to Inform Mobile Game Design by Laura Ermi and Frans Mäyrä is an interesting paper I found in Game Studies, the (I would say 'an') international journal of computer game research, volume 5, issue 1, October 2005. The paper acknowledges the "need for systematic, research-based and tested game design methodologies that take the needs and preferences of different players into better consideration than the current industry practices". It also takes this approach in the context of pervasive game playing on mobile devices, which is actually our field of research. This is part of a research project called Wireless Gaming Solutions for the Future (MOGAME) carried out by the University of Tampere's Hypermedia Laboratory (see also the iPerg project, the European Union project connected to it).

The paper is full of relevant ideas. I like the following statement because it goes beyond the simple mobile gaming approach (i.e. porting old games to new phones, even though some are interesting):

we focused on developing mobile game concepts that are most suitable for contemporary kinds of wireless and mobile terminals. This involves taking advantage of these devices’ unique characteristics such as communication possibilities, mobility and positioning. In a previous research project on interactive television we have observed that communication with other players, especially those unfamiliar to each other in real life, may help in making the play experience feel more adventurous and interesting. Persistent communicative contacts are also important when developing persistent social networks, i.e. communities. Communication is thus an important component of social playability. Using a mobile phone as a communication device in the game also offers possibilities for telecom operators to take advantage of other sources of revenue than just the download price of the game.

The last part is very clever and close to what I read (and blogged about) last week in the IBM podcast about how mobile devices complement online playing.

The article also presents two types of research: basic research on games, players and playability, and applied research on the design of location-sensitive services and applications (basically a game called The Songs of North). Their aim was to evaluate the experiences gathered while using a scenario-based player study to inform pervasive mobile game design. The approach appeared to be good, and the article describes some flaws. Actually, since our research is less oriented towards designing games than towards studying how people use them, I was more interested in all the remarks about how the application/device impacted the study. For instance:

using player movement as a central game element may easily become too much of a burden for the players. Especially in a persistent game, designers have to take the daily lives of their players into consideration and try to intertwine the game movement to the daily routines or routes of the players to a certain degree. Otherwise the players will probably not have enough energy to keep on playing, possibly for several months in the persistent, mixed reality game world. One solution we came up with, besides taking advantage of the naturally occurring movement of the players, was providing support for team play. When playing in teams, players can easily reduce the amount of their movement if they jointly communicate and coordinate their gameplay. This was also in harmony with the aim of enticing players to communicate with each other – and informants’ wish to be able to form teams in games. (...) Contemporary mobile devices did not appear as very promising gameplay devices from the point of view of the player study informants. They felt mobile games often required too much concentration on the small device when trying to control the game using cramped buttons, and thus might take the attention away from the actual playing. Therefore we are emphasising the role of the auditory world of the game. (...) We are also aiming towards seamless integration of all of the game elements, including the mobile device and the real-world environment, so that the mobile device, for example, would not feel separate from the game

Why do I blog this? Judging from those results, I'd love to know more about it (user experience research methods + results), especially with regard to players' collective behavior and their movements in space!

Some thoughts about eye tracking (+collaborative or mobile settings)

At the lab we've been discussing how we can use eye-tracking methodologies for our research projects about 'mutual modeling'. This led me to a quick Web of Science/Google Scholar scan of what is available concerning the use of this technique to study the usage of collaborative interfaces. I went on by looking at whether it can be used in mobile settings. With regard to mobile context analysis, I ran across this intriguing project at igargoyle: Building a lightweight eyetracker by Jason S. Babcock & Jeff B. Pelz from the Rochester Institute of Technology:

(picture taken from the article)

Eyetracking systems that use video-based cameras to monitor the eye and scene can be made significantly smaller thanks to tiny micro-lens video cameras. Pupil detection algorithms are generally implemented in hardware, allowing for real-time eyetracking. However, it is likely that real-time eyetracking will soon be fully accomplished in software alone. This paper encourages an “open-source” approach to eyetracking by providing practical tips on building lightweight eyetracking from commercially available micro-lens cameras and other parts. While the headgear described here can be used with any dark-pupil eyetracking controller, it also opens the door to open-source software solutions that could be developed by the eyetracking and image-processing communities. Such systems could be optimized without concern for real-time performance because the systems could be run offline.

This seems interesting, but having three lightweight devices like this would be really hard. Another cheap solution can be found in this paper: Building a Low-Cost Device to Track Eye Movement by Ritchie Argue, Matthew Boardman, Jonathan Doyle and Glenn Hickey:

we examine the feasibility of creating a low-cost device to track the eye position of a computer user. The device operates in real-time using prototype Jitter software at over 9 frames per second on an Apple PowerBook laptop. The response of the system is sufficient to show a low-resolution cursor on a computer screen corresponding to user’s eye position, and is accurate to within 1 degree of error. The hardware components of the system can be assembled from readily available consumer electronics and off-the-shelf parts for under $30 with an existing personal computer.

Now, if we want to use this to study collaborative software, it's not easy, as attested by this paper: Using Eye-Tracking Techniques to Study Collaboration on Physical Tasks: Implications for Medical Research by Susan R. Fussell and Leslie D. Setlock. The paper discusses eye-tracking as a technique to study collaborative physical tasks, such as a surgical team collaborating to treat a patient. The authors bring forward its tremendous potential as a tool for studying collaborative physical tasks and highlight some limitations:

The eye tracker typically can’t be calibrated correctly for a sizeable proportion of participants (up to 20%). Furthermore, the head-mounted device may slip over the course of a task, requiring recalibration to avoid data loss. This creates problems in collecting high-quality data. (...) Gaze data also requires considerable effort to process. (...) manual coding could quickly become unwieldy in a setting with many, many possible targets

A butterfly-watching system with WiFi PDAs

Chen, Y-S, Kao, T-C, Yu, G-J and Sheu, J-P (2004). A mobile butterfly-watching learning system for supporting independent learning. Proceedings of the 2nd International Workshop on Wireless and Mobile Technologies in Education. JungLi, Taiwan: IEEE Computer Society, 11-18. It's a butterfly-watching system implemented and tested at an elementary school in Taiwan, aimed at teaching the different kinds of butterflies in the region.

The proposed BWL system was designed using a wireless mobile ad-hoc learning environment. In our designed system, each individual learner has a wireless handheld device, which is a PDA (Personal Digital Assistant) with an IEEE 802.11 wireless network card and a small-sized CCD camera. One instructor has a notebook computer with a Wi-Fi wireless LAN card which serves as the local server. The notebook has a complete butterfly database. All learners’ wireless handheld devices and the notebook constitute a mobile ad-hoc learning environment. During the butterfly-watching activity, each learner takes a distinct butterfly picture, and wirelessly transmits the picture to the local server. A content-based butterfly-image retrieval technique is applied herein to search for the most closely matching butterfly information, which is returned in real time to the learner’s wireless handheld device. To illustrate the effect of the BWL system, an outdoor BWL activity was actually performed at elementary school in Taiwan.

Why do I blog this? I am looking for references about mobile learning applications for a paper I am currently writing.

Qualitative analysis representation

Yesterday I attended Jean-Baptiste's lecture about qualitative analysis of human-computer interaction in the CSCW course. His slides are available here. What I really appreciated is the way JB visually represents the collected data to bring forward relevant information or patterns. For instance, in this example, he picked one of the critical events that occurred during a study about how students interact with an interactive table:

Why do I blog this? On a different granularity of analysis, I'd like to have this kind of approach in CatchBob!, especially while analyzing specific moments of interaction between players; namely when the participants understand (or not) what their partner(s) is (are) doing and which information they rely on to predict this. This is definitely one of the core questions of my PhD research, with an emphasis on how location-awareness can impact this.

Noise-Sensitive table at the lab

The noise-sensitive table is an interactive furniture project at our lab. The new prototype is now available, as depicted on the picture below.

The Noise Sensitive Table is an example of interactive furniture based on the concept of a group mirror. Its LED matrix, embedded in the physical table, displays a representation of the social interactions: the table namely reflects turn-taking patterns when students work collaboratively. The peripheral perception of this feedback allows them to reflect on the group's verbal interaction or on individual contributions and, finally, to deepen learning and regulate their collaboration. The first prototype of the noise sensitive table showed the interest of the concept. Continuing this project now requires adding more features. The aim is to move to more spontaneous and unconstrained interactions, where users can come, move and leave when they want.

(pictures taken by maurice cherubin)
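The turn-taking "group mirror" idea can be sketched in a few lines (a hypothetical illustration assuming per-participant microphone level samples; this is not the table's actual firmware):

```python
def turn_taking_shares(levels, threshold=0.2):
    """Given audio level samples (one list per participant), return
    each participant's share of the detected speaking time -- the
    quantity a group mirror could map onto its LED matrix.
    Hypothetical sketch, not the Noise Sensitive Table's code."""
    # count samples above the speech threshold for each participant
    speaking = [sum(1 for v in person if v > threshold) for person in levels]
    total = sum(speaking) or 1  # avoid division by zero when all are silent
    return [s / total for s in speaking]
```

Lighting a number of LEDs proportional to each share would give the peripheral feedback on individual contributions described above.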

Clumsy automation and user surprises

One of the outcomes of my research lately is that automating location-awareness might be detrimental to group collaboration in mobile settings (more about it in this paper .pdf). The concept of automation drawbacks has already been addressed in human-computer interaction and is often referred to as "clumsy automation". This is developed in David Woods' paper called "Human-Centered Software Agents: Lessons from Clumsy Automation". Some excerpts:

These machine agents often are called automation, and they were built in part in the hope that they would improve human performance by off loading work, freeing up attention, hiding complexity -- the same kinds of justifications touted for the benefits of software agents. (...)

The pattern that emerged is that strong but silent and difficult to direct machine agents create new operational complexities. In these studies we interacted with many different operational people and organizations,

  • through their descriptions of incidents where automated systems behaved in surprising ways,
  • through their behavior in incidents that occurred on the job,
  • through their cognitive activities as analyzed in simulator studies that examined the coordination between practitioner and automated systems in specific task contexts,
  • unfortunately, through the analysis of accidents where people misunderstood what their automated partners were doing until disaster struck.

In our case, the automation did not create cognitive or physical workload, nor incidents or surprising things, but it led users to a certain inertia in terms of communication (they communicated less) and strategy planning (they did not reshape their strategy). Users considered that the information given by the automatic location-awareness tool was sufficient to complete the task, and that's it.

Ref: Woods, D. D. (1997). Human-centered software agents: Lessons from clumsy automation. In J. Flanagan, T. Huang, P. Jones, & S. Kasif (Eds.), Human centered systems: Information, interactivity, and intelligence (pp. 288-293). Washington, DC: National Science Foundation.

Social Psychology and User Experience of Technology Studies

Recently, while re-doing my PhD mindmap, I thought again and again about methodologies to analyze user experience in the context of mobile collaboration using certain technologies (e.g. LBS). One of the inspiring fields is social psychology. I tried to think about some connections between social psychology and our field. The first point is that social psychology brings frameworks and theories relevant for CSCW: attribution theory (Heider) and social comparison theory (Festinger, Doise and Mugny...) might be interesting to use (in the hypotheses or in the methodology of the qualitative analysis). Heider's attribution theory is about how people make causal explanations of phenomena: the information they use in making these inferences, and what they do with this information to answer causal questions. Social comparison theory postulates the existence of a drive to evaluate one's opinions and abilities by comparison with the opinions and abilities of others. But the social psychologists with whom we discussed acknowledge that it is very difficult to take this into account. Anyway, this is useful when we confront groups with traces of their activity (in the context of the CatchBob! replay tool for instance) or during game interviews. It means, for example, being careful when formulating questions, to prevent people from being compared or comparing each other.

In addition, methodologies and statistical techniques developed by social psychology can be helpful. Yes, we have a strong quantitative flavor at the lab; that's why we're looking in that direction. Social psychology interestingly addressed the notion of the 'unit of analysis' when investigating collaboration and small-group phenomena. What is this unit: the individual? The group? The corollary conclusion social psychologists came up with is that some techniques can tell the observer what he should address. For instance, the intraclass correlation is a good index that can tell us whether the unit should be the individual or the group. The point is to verify the non-independence of the measures. This method has been formalized by Kenny (Kenny, D. A., Kashy, D. A., & Bolger, N. (1998). Data analysis in social psychology. In D. Gilbert, S. Fiske, & G. Lindzey (Eds.), Handbook of social psychology, vol. 1, pp. 233-251. Boston: McGraw-Hill). I also presented this here.
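The intraclass-correlation check can be sketched as follows (a minimal one-way ICC under the simplifying assumption of equal group sizes; variable names are mine, not Kenny's notation). A value near 1 means members of a group resemble each other far more than members of different groups, so the group should be the unit of analysis; a value near or below 0 means the individual measures can be treated as independent.

```python
def intraclass_correlation(groups):
    """One-way ICC: compare between-group and within-group variance.
    `groups` is a list of lists of individual scores, one inner list
    per group; all groups are assumed to be the same size."""
    k = len(groups)          # number of groups
    n = len(groups[0])       # members per group (assumed equal)
    grand_mean = sum(x for g in groups for x in g) / (k * n)
    group_means = [sum(g) / n for g in groups]
    # mean square between groups
    ms_between = n * sum((m - grand_mean) ** 2 for m in group_means) / (k - 1)
    # mean square within groups
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means)
                    for x in g) / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)
```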

Of course, there is also a lot to take from multi-level modeling or APIM (the Actor-Partner Interdependence Model), as in Strijbos, J. W., Martens, R. L., Jochems, W. M. G., & Broers, N. J. (2004). The effect of functional roles on group efficiency: Using multilevel modeling and content analysis to investigate computer-supported collaboration in small groups. Small Group Research, 35, 195-229. http://www.ou.nl/info-alg-english-r_d/OTEC_research/publications/jan%20willem%20strijbos/Strijbos%20et%20al.%20(2004)_Functional%20roles_SGR_35_195-229.pdf

Some more thoughts about location-awareness (of others) and position sharing

As Fabien points out, the MapQuest FindMe service (integrated with AIM) is a clever service that allows users to manually share their position, which is one of the guidelines emerging from our CatchBob! experiments. Self-disclosing one's location seems to be a good trend now, both in the real world of services and in the academic world of research, as in these papers:

Both papers advocate self-disclosure of location, though they rely on different approaches to reach this recommendation. Benford's paper takes a qualitative approach and is more focused on users' thoughts. Ours is more mixed-methods (though quantitative methods dominate); it proposes the same idea because of the underwhelming effects of automatic location-awareness on how people collaborate. Another paper, for a conference about 'designing for collaboration', will deal with this issue.

I am still digging into this issue of location-awareness and collaboration, working on both asynchronous location awareness and the importance of letting people express their own strategies.

CatchBob! automatic data analysis

Recent advances in the CatchBob! replay tool project allowed me to automatically compute a new and interesting index for the analysis of our pervasive game: the phasing of the activity. As emerged from the qualitative study of CatchBob!, the game is divided into 3 phases:

  • Dispersion of the individuals to efficiently locate Bob's area.
  • When one player finds the approximate location, the others converge towards him.
  • Then they re-spread to form a triangle so that they can surround the object.

What is great is that I now know when each phase starts, thanks to the analysis of the players' dispersion: the pictures below depict the evolution of the perimeter of the triangle formed by the three players' positions. The evolution of this perimeter shows the 3 different parts:
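The dispersion index behind these plots boils down to a simple computation over the logged positions. A minimal sketch (the function names and the `paths` data structure are illustrative, not the actual replay-tool code):

```python
import math

def perimeter(p1, p2, p3):
    """Perimeter of the triangle formed by three players' (x, y) positions."""
    return math.dist(p1, p2) + math.dist(p2, p3) + math.dist(p3, p1)

def perimeter_series(paths):
    """paths: three equally-long lists of logged (x, y) positions, one per
    player. Returns the group-dispersion index at each time step: it grows
    during dispersion, shrinks as players converge, and grows again when
    they re-spread to surround Bob."""
    return [perimeter(a, b, c) for a, b, c in zip(*paths)]

# A 3-4-5 right triangle: perimeter is 3 + 4 + 5 = 12.
print(perimeter((0, 0), (3, 0), (0, 4)))  # 12.0
```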

PhD mindmap update

After two years working on my PhD, I thought it was a good time to update my PhD research mindmap. The previous one was here, and here is the new version:

I only put on the map the topics I address in the current research, which is why I removed some aspects, especially the whole part concerning the social functions of space/place. Awareness of others' whereabouts, the technologies that enable it, and their impact on small-group collaboration are the cornerstone of the project. What is interesting is to see how the project evolved over the last 2 years, with some back-and-forth movements, the addition of new methods, and exploratory activities leading to new dimensions being tackled.

This investigation is mostly quantitative in the epistemological sense, meaning that we followed hypothetico-deductive reasoning, trying to benefit from hypotheses coming from virtual reality research (my master's thesis, see here). At the beginning of the project, the point was to replicate VR studies to see whether the results held in the physical world: would people pay attention to others' location as we saw in VR projects? It's a curious way to do research, but sometimes the path to a specific research question is weird. However, it certainly makes sense to tackle the issue of how small groups pay attention to their members' location in real space, since new technologies allow it. Maybe this weird circumvolution is due to the fact that this PhD is carried out in a human-computer interaction discipline and not in sociology or psychology.

Finally, I would point out that I tried to include some qualitative dimensions in my methodology, to get more details about the participants' experience as well as to deepen the understanding of the socio-cognitive processes involved. Some folks might find this not really valid from an epistemological perspective, but hey, that's a struggle out there between all the schools of thought we have to deal with.

Replay tool project update

Fabrice (the master's student I work with) has just completed another step in the replay tool project. The point is to have a tool that helps us replay the CatchBob! pervasive game, so that I can use it with participants to gather information about their activity, what happened, etc. In the latest version, it is possible to replay each player's path (in the bottom left-hand corner hereafter) plus some visualizations, such as the distance between two players (in the bottom right-hand corner) or the evolution of the group dispersion (in the upper right-hand corner). This can be useful for understanding the players' behavior while performing the activity.
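For instance, the inter-player distance curve shown in the tool is just a time series computed from two players' logged paths. A sketch of the idea (the names are mine, not Fabrice's actual code):

```python
import math

def pairwise_distance_series(path_a, path_b):
    """Distance between two players at each logged time step.

    path_a, path_b: equally-long lists of (x, y) positions, one entry
    per log sample; the result is what the replay tool can plot."""
    return [math.dist(p, q) for p, q in zip(path_a, path_b)]

# Two samples: players 5 units apart, then at the same spot.
print(pairwise_distance_series([(0, 0), (1, 1)],
                               [(3, 4), (1, 1)]))  # [5.0, 0.0]
```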

Evaluation of 3 Ubicomp systems

Prototypes in the Wild: Lessons from Three Ubicomp Systems by Scott Carter and Jennifer Mankoff, in IEEE Pervasive Computing, October-December 2005 (Vol. 4, No. 4), pp. 51-57. This paper is an account of the evaluation of three ubicomp systems at multiple design stages:

  • PALplates: to support office workers in doing everyday tasks by presenting key information and services at places of need, or locations where workers were most likely to need them.
  • A nutrition tracking system that uses inexpensive, low-impact sensing to collect data about what household members are purchasing and consuming and then uses simple yet persuasive techniques to suggest potential changes.
  • Hebb: a system that captures and conveys shared interests. It senses group members' interests via email analysis software and displays relationships between members on public and private displays to encourage conversation about those topics. The Hebb system includes interest sensors, presence sensors, and public and private displays.

Why do I blog this? For each of these projects, the authors describe how they evaluated them (mostly with paper prototypes first and field experiments with interactive prototypes afterwards). What is strikingly interesting is that their computer science perspective led them to "a struggle to balance quality of evaluation and ease of prototyping", as they say. This paper is yet another piece of evidence that testing ubicomp applications in field settings is particularly important. I was also interested in the fact that they studied their prototypes at different stages of design.