Research

From location to places

(via pierre) Extracting Places from Traces of Locations by Jong Hee Kang, William Welbourne, Benjamin Stewart, Gaetano Borriello; WMASH 2004: 110-118.

Location-aware systems are proliferating on a variety of platforms from laptops to cell phones. Locations are expressed in two principal ways: coordinates and landmarks. However, users are often more interested in “places” rather than locations. A place is a locale that is important to an individual user and carries important semantic meanings such as being a place where one works, lives, plays, meets socially with others, etc. Our devices can make more intelligent decisions on how to behave when they have this higher level information. For example, a cell phone can switch to a silent mode when the user is in a quiet place (e.g., a movie theater, a lecture hall, or a place where one meets socially with others). It would be tedious to define this in terms of coordinates. In this paper, we describe an algorithm for extracting significant places from a trace of coordinates, and evaluate the algorithm with real data collected using Place Lab [14], a coordinate-based location system that uses a database of locations for WiFi hotspots.

One of the algorithms for extracting significant places from a trace of coordinates.
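As a rough illustration of what such an algorithm can look like (my own sketch, not the authors' implementation), one common approach is dwell-time clustering: emit a "place" whenever the trace stays within a small radius for longer than a time threshold. All the parameter values and the sample trace below are made up.

```python
import math

def haversine_m(p, q):
    """Approximate distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def extract_places(trace, radius_m=50, min_dwell_s=300):
    """trace: list of (timestamp_s, lat, lon) sorted by time.
    A place is the centroid of a run of points that stays within radius_m
    of the run's first point for at least min_dwell_s."""
    places, cluster = [], []

    def flush():
        # keep the current run only if the dwell time was long enough
        if cluster and cluster[-1][0] - cluster[0][0] >= min_dwell_s:
            places.append((sum(c[1] for c in cluster) / len(cluster),
                           sum(c[2] for c in cluster) / len(cluster)))

    for t, lat, lon in trace:
        if cluster and haversine_m((cluster[0][1], cluster[0][2]), (lat, lon)) > radius_m:
            flush()          # the user left the current cluster
            cluster = []
        cluster.append((t, lat, lon))
    flush()
    return places

# Tiny synthetic trace: ten minutes near one spot, then a jump elsewhere
trace = [(i * 60, 46.5200, 6.5660) for i in range(11)] + [(700, 46.5300, 6.5800)]
print(extract_places(trace))
```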

New blog about space/place/locative tech: smartspace

Found via Technorati: smartspace by Scott Smith of Social Technologies (an international futures research and consulting firm based in Washington, DC):

Welcome to Smartspace, a new blog about annotated environments, intelligent infrastructure and digital landscapes--the merging of technology with the environment around us, and the overlay of digital environments on the physical ones we inhabit.

This includes discussions, observations and insights on ubiquitous and embedded computing, mapping, location-based services, surveillance and tracking, geotagging, smart homes, intelligent environments, the annotated reality, and virtual worlds, where they increasingly intersect with the physical.

An increasing amount of interest, research, development, investment and regulation is being directed at the world of smart spaces. The purpose of Smartspace is to provide context and explore implications of the convergence of the above mentioned factors as they relate to these activities. Hopefully we will feature interviews, guest authors, and other interesting features and contents that make Smartspace a compelling read.

I found it because Scott expanded on my post about giving one's location while calling on a cell phone, adding this intriguing workaround:

Meanwhile, I find it interesting that, while we are waiting for applications that alert the person on the other end of a mobile discussion automatically as to our location as the call comes in, it would be easier at the moment to take a picture of myself on the train and MMS it to my wife using something like ZoneTag, allowing her to see where I am before I call. Talk about a workaround.

Indeed, an image can convey the context the user wants to share, with the level of accuracy (in terms of contextual cues) he/she chooses to put into the message.

Why do I blog this? another interesting contributor in the field of social usage of space/place/locative tech, very relevant ideas so far.

Reconfiguration of social, cognitive and spatial practices in cities due to technological innovations

After my post about the inevitable existence of electronic ghettos in cities (quoting Mike Davis and William Gibson), I had a discussion with Anne about how technologists (and hence interaction designers) are sometimes not aware of the side-effects of their creations, especially in terms of social, political or even cognitive practices. For that reason, I am interested in the reconfiguration of specific practices in cities due to technological innovations. I have been trying for some time to list interesting case studies about this. Books like "City of Bits: Space, Place, and the Infobahn" (William J. Mitchell), "Smart Mobs: The Next Social Revolution" (Howard Rheingold) or "Beyond Blade Runner: Urban Control, the Ecology of Fear" (Mike Davis) give some elements. I tried to find other examples.

Before the introduction of the elevator/lift, there was a different social distribution of people within buildings. Wealthy people lived on the first floor, to avoid having to climb stairs; the higher you went in a building, the less wealthy its inhabitants. The use of elevators in residential buildings (previously the elevator was just used to carry materials such as coal) inverted this distribution: the top floors, now accessible thanks to the technology, were reserved for the rich. This is an example of how a technology created a social reconfiguration of space.
Another kind of effect is of course related to cognition. There are important consequences to having public transport information made available by new technologies (urban information displays in the vehicle or on an information board) and to the organization and interoperability of that information. For example, I like this example by Vincent Kauffman (urban sociologist here at the school): the regularity of the train schedule (there is a Geneva-Lausanne train every 20 minutes with regular offsets: 7:45, 8:15...) plus the interoperability of transport modes (city bus departures are coordinated with train arrivals) allows people to easily remember commuting schedules and hence better plan their spatial practices. These new technologies (urban displays) and the organization of information (made possible by technological advances) impact cognitive mechanisms (i.e. memory in the example I described). What's next? Would such an intelligent system achieve its goal (i.e. facilitating navigation by suggesting all possible alternative shortest routes connecting two or more transit points on a map)?
Likewise, there have been interesting concerns lately about whether location-based services might modify behaviors and practices in cities. This question often pops up when people think about location-based games. Results from the MogiMogi game test showed very interesting behaviors: players who wander around the city by car or metro when new objects are released; or the player who once complained because he went to a place where he thought an object would be, but it was not there since it only appeared when the moon was full.

Also, Daniel Blackburn (manager of Carbon Based Games) questions whether Bluetooth social games might modify people's behavior in physical space by creating new technosocial situations:

With GPS games such as Mogi some players would detour from their everyday routes to go and pick up a virtual object. With a Bluetooth-enabled game, will people try to get within range of someone whose phone is in their bag (so they are unlikely to hear it), so that they can steal virtual objects without their knowledge? Or will they steer clear of people at work because those people are at a higher level in the game than them and they want to avoid defeat again? Or will they be constantly checking their phone because they're convinced someone is trying to virtually assassinate them and could set off a bomb at any time, meaning they would need to run with their phone to get it out of range of the blast?

Even though I like it, I am still dubious about this last example (compared to the other two); there are still lots of big expectations around LBS.

Why do I blog this? Well, what I want to show here is that technologies sometimes reshuffle human practices in terms of spatial dispersion, cognitive appraisal of space and the social organization of infrastructures. Maybe I should write a better discussion of this and wrap it up in a paper; here it's quite messy. This said, there is still the question of foreseeing the future reconfigurations due to emerging technologies.

Qualitative data analysis in CatchBob!

This afternoon, I tried to formalize a bit my current research approach to analysing the qualitative data from CatchBob! The point is to draw on the users' annotations (in game) and the interviews I conducted after the game (based on a replay of the activity). This leads me to extract different kinds of valuable information about coordination processes in the game.

This is based on Herbert Clark's framework of coordination (as explained in the book "Using Language"). In this context, coordination is a matter of solving practical "coordination problems" through the exchange of what he calls 'coordination keys/devices', that is to say, mutually recognized information that enables the teammates to choose the right actions to perform so that the common goal might be reached. Such information allows a group to mutually expect the individual actions that are going to be carried out by the partners. According to Clark, a coordination device is not only defined by its content but also by the way the persons who collaborate mutually recognize it. For that matter, Clark differentiates four kinds of coordination devices: conventional procedures (when a convention is set by the participants), explicit agreement (when the participants explicitly acknowledge the information), precedent (when a previous experience allows participants to form expectations about others' behavior), and manifest events (when the environment or the information sent makes the next move apparent among the many moves that could conceivably be chosen).

This framework then leads to the creation of two coding schemes to analyze my data:

  • What a participant inferred about his/her partner during the game. This coding scheme is clearly data-driven in the sense that it emerged from the players’ verbalizations (namely those extracted during the self-confrontation phase after the game)
  • How a participant inferred this information about his/her partners: this one is theory-driven since I used Herbert Clark's theory of coordination keys/devices to have clear categories about what happened

Now, there is another dimension that should be taken into account: TIME. Different coordination keys are used at different moments in CatchBob, so I'm trying to put this together in a global model of spatial coordination. In the end, it would express which kinds of coordination keys are used to solve certain coordination problems in the context of a mobile collaboration task such as CatchBob. The potential outcome would be to understand whether specific tools can support the coordination process (for instance, would a location-awareness tool be useful at a certain point in the process?).
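To make these two schemes plus the time dimension concrete, here is a minimal sketch of how a coded event could be represented (purely illustrative: the category names follow Clark's four coordination devices, but the field names and the sample data are made up, not my actual coding tool):

```python
from dataclasses import dataclass
from enum import Enum

class CoordinationDevice(Enum):
    """Theory-driven scheme: Clark's four kinds of coordination devices."""
    CONVENTION = "conventional procedure"
    EXPLICIT_AGREEMENT = "explicit agreement"
    PRECEDENT = "precedent"
    MANIFEST = "manifest event"

@dataclass
class CodedEvent:
    game_time_s: int             # TIME: when in the game the inference was made
    inference: str               # data-driven scheme: what was inferred about the partner
    device: CoordinationDevice   # theory-driven scheme: how it was inferred

# Hypothetical excerpt from one coded CatchBob! replay
events = [
    CodedEvent(120, "partner A is heading to the north of the campus", CoordinationDevice.MANIFEST),
    CodedEvent(340, "partner B will wait near the agreed building", CoordinationDevice.EXPLICIT_AGREEMENT),
    CodedEvent(610, "partner A will circle the object as in the previous round", CoordinationDevice.PRECEDENT),
]

# Ordering the coded events over time gives the raw material for a temporal model of coordination
for e in sorted(events, key=lambda e: e.game_time_s):
    print(f"t={e.game_time_s:4d}s  [{e.device.value}]  {e.inference}")
```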

Social functions of location in mobile telephony

Arminen, I. (2005): Social Functions of Location in Mobile Telephony. Personal and Ubiquitous Computing. This article addresses a topic close to my PhD research: the importance of location awareness in (mobile) communication. Before studying the importance of location-based services (especially when it comes to buddy finders or granny locators), the author puts the emphasis on understanding a peculiar feature: the discussion of one's location over the phone.

To understand the dynamic nature of location, we have to study the actual communicative practices in which location gains its value. (...) Weilenmann has studied particularly the ways in which location references are used to signal communication difficulties: ‘‘I can’t talk now, I’m in a fitting room’’ (...) Laurier, for his part, has shown how mobile professionals routinely stated their locations on a mobile phone as a part of their mobile usage. Both these studies on actual communicative practices point out how the value of location is embedded in the activity in which the mobile user is engaged. (...) 74 Finnish mobile phone conversations were recorded (...) The material covered both mobile-to-mobile and landline-to-mobile or mobile-to-landline conversations (...) The calls were transcribed and analysed in detail by using conversation analytical (CA) method. (...) The usage of mobile communication device does not technically require the parties to get to know where the other party is. (...) 62 mobile calls out of 74 involved a sequence in which the mobile party stated her or his location to the other party

As for the context of this question, the author found that:

Location telling during mobile calls takes place in five different activity contexts. In other words, location seems relevant for the parties in mobile interaction during five different types of activities. (...) Location may be an index of interactional availability, a precursor for mutual activity, part of an ongoing activity, or it may bear emergent relevance for the activity or be presented as a social fact. (...) Most location-telling sequences in these data are linked with practical arrangements. People state their location as a precursor for some practical arrangements (...) Location telling is also commonly done as a part of the real-time ongoing activity in which the parties are engaged. (...) Location can also be a mutual real-time co-ordination task, such as seeing each other in the cafeteria to meet there (...) Finally, a kind of location that is also realized during the ongoing activities is a virtual location referring to a web page or other material at hand to be shared with the communicative partner. (...) A not common, but existing, social practice involves location telling due to its social, symbolic qualities [example: the beach, which signifies 'having fun']

Now, for the social functions of discussing locations:

Location may be an index of interactional availability, a precursor for mutual activity, part of an ongoing activity, or it may bear emergent relevance for the activity or be presented as a social fact. (...)
Interactional availability – audio-physical and social features of the proximal location: noise (disco), network availability (train, remote areas), involvement with proximal interaction, intimacy of the situation (toilet, etc.) (...)
Praxiological – spatio-temporal availability: readiness to engage in action (Are you doing anything special? Can you come to x?) – spatio-temporal location of a party vis-à-vis the engaged activity: temporal distance (half an hour [by car, by train, on foot, etc.]) – real-time perspicuous location in an ongoing action: visibility (I'm at x, where are you?), real-time location (I just saw a reindeer by the road, beware [told to the car driving behind]) – instructable location: spatialized requests (I'm/the accident is at the crossroads of A and B, etc.) – proximate praxiological location: micro-coordination of activity (I'm feeling his pulse, the wound stretches from elbow to breast, etc.) – virtual location (I'm on the web page x) (...)
Socioemotional – socio-emotional significance of location: biographical relevance (I'm at the cottage of x/my friend, I'm driving a car with x), cultural significance (I'm visiting x (old church, museum, medieval city, etc.)), aesthetic significance (it's very scenic here)

Why do I blog this? this kind of study is of tremendous relevance to my PhD research since I address the effects of location awareness on collaboration processes: communication, coordination, division of labor, mutual modeling... What the author describes here is very interesting; it's one of the few resources about this topic (along with Marc Relieu, Laurier (and there too), plus this one by Weilenmann).

However, the results from our field experiments with CatchBob make me a bit skeptical about the author's conclusion; when it comes to the implications of this study for LBS, he says "Location awareness that would also indicate the user's estimated temporal distance from the destination would have a wide applicability for a majority of mobile users. A simple and usable technical solution would immediately meet the end users' needs". The reason why I am skeptical is that automating location awareness can sometimes lead to putting the emphasis on information (others' location versus others' availability, intentions...) that might not be relevant at the time. Another problem is the kind of location that should be automated and made relevant for the other parties (place? country? lat/long? ...).

A manifesto for networked objects - Why things matter

Julian finally released the manifesto about the future of artifacts and the Internet of Things. It's called A Manifesto for Networked Objects — Cohabiting with Pigeons, Arphids and Aibos in the Internet of Things. And of course the short title is "Why Things Matter", which nicely expresses the fact that - hey - in the future things will matter. The document elaborates on the blogject topic, answering two questions: first, why would objects want to blog? Second, why would I care if objects "blog"? It presents the idea of objects that blog, what characteristics they would have (traces, history, agency), which protozoic blogjects we've already seen (the Aibo blog, the pigeon blogger...), what's at stake and why people envisioned that concept. My favorite part is certainly the end:

Forget about the Internet of Things as Web 2.0 and networked Barcaloungers. I want to know how to make the Internet of Things into a platform for World 2.0. How can the Internet of Things become a framework for creating more habitable worlds, rather than a technical framework for a television talking to my refrigerator? Now that we've shown that the Internet can become a place where social formations can accrete and where worldly change has at least a hint of possibility, what can we do to move that possibility out into the worlds in which we all have to live?

Why do I blog this? this is connected to the blogject thoughts I already discussed here, especially with regard to the workshop we had before LIFT06. The document also deals with issues very close to my current research (for instance when it relates to space/place and behavior).

Collaboration is made of socio-cognitive processes

In my PhD research I often mention the fact that I am studying how certain technologies (location-awareness features, tangible interactions, weird game controllers, VoIP...) might modify collaboration. The thing is that 'collaboration' is the research object, and sometimes it's not so easy to grasp what it means. A relevant resource about this is a paper by P. Dillenbourg and D. Traum entitled "Sharing solutions: persistence and grounding in multi-modal collaborative problem solving", Journal of the Learning Sciences, 15(1), 121-151.

current research no longer treats collaboration as a black box but attempts to grasp its mechanisms: What are the cognitive effects of specific types of interactions? Under which conditions do these interactions appear? These mostly verbal interactions are investigated from various angles, including: explanations (Webb, 1991), regulation (Wertsch, 1985), argumentation (Baker, 1994), and conflict resolution (Blaye, 1988). These various types of interactions contribute to the process of building and maintaining a shared understanding of the problem and its solution (Roschelle & Teasley, 1995).

As a matter of fact, collaboration is made of various processes that we can describe as socio-cognitive. This means that it is both related to information processing (cognitive) and bound to the social context (collaboration happening in small groups).

Other collaborative processes are more focused on the activity: division of labor among the group, coordination over time, inference about partners' intents. I am rather focused on those.

References quoted in the excerpts above:

Baker, M.J. (1994). A model for negotiation in teaching-learning dialogues, Journal of Artificial Intelligence in Education, 5 (2), 199-254.

Blaye, A. (1988) Confrontation socio-cognitive et résolution de problèmes. Doctoral dissertation, Centre de Recherche en Psychologie Cognitive, Université de Provence, 13261 Aix-en-Provence, France.

Webb, N.M. (1991) Task related verbal interaction and mathematics learning in small groups. Journal for Research in Mathematics Education, 22 (5), 366-389.

Wertsch, J.V. (1985) Adult-Child Interaction as a Source of Self-Regulation in Children. In S.R. Yussen (Ed).The growth of reflection in Children (pp. 69-97). Madison, Wisconsin: Academic Press.

Evaluating the promises of pervasive gaming

Pervasive Gaming in the Everyday World by Jegers, K. and Wiberg, M., Pervasive Computing, IEEE, 5 (1), pp. 78-85, 2006. The paper is a smart study that looks at how the vision of pervasive gaming is becoming reality in the context of SupaFly, an everyday-world pervasive game. They claim that pervasive gaming makes three promises: mobile, place-independent game play; integration between the physical and the virtual worlds; and social interaction between players. In this study they wanted to evaluate whether these promises hold.

It starts by saying that existing pervasive gaming examples (Uncle Roy, Human Pacman, Songs of the North) are valuable but some limitations remain:

One such limitation is that people play few of the existing pervasive games in their normal everyday life, which makes studying the games’ role and effect in these situations difficult. Such research is necessary to help commercial designers create successful pervasive games and to help identify and explore the issues arising when such computer gaming becomes situated in the everyday world.

In this paper, they try to go beyond that by studying SupaFly, a pervasive game developed by Daydream to evaluate how people perceive and play the game in normal, everyday settings.

Some results were quite unexpected especially about the anywhere/anytime issue of pervasive computing:

Considering the two subjects who stated that they played the game mostly at work, the picture becomes somewhat more problematic. Both subjects stated in the focus group interviews that they normally don't play computer games at work but that they considered the SMS game activities in SupaFly as different from traditional computer game playing. (...) Those two subjects' decision to play the game during what they classify as work time seems to run contrary to how people generally separate activities into work and recreation, pursued at separate times. This observation calls for further research considering pervasive gaming's anytime, anywhere aspect to clarify to what extent pervasive games might challenge people's conception of social contexts and related activities. (...) Analysis of the focus group data reveals that the game's integration of the physical and virtual worlds was of limited importance to the players. (...) From our evaluation, we conclude that the implemented integration of the physical and the virtual, based on location of players and virtual objects, was insufficient to be a meaningful and enriching part of the game. (...) We noticed that the players seemed to use the game to facilitate existing social interaction in groups that they belonged to before they played the game.

What is interesting is also the overall conclusion:

Although the threefold vision for pervasive gaming hardly became a reality for the users in our study, it still might be a good catalyst for developing ideas for future pervasive-gaming platforms.

This led them to refine their research agenda with new questions (which they will address through a longitudinal ethnographic study):

  • In what situations do people choose to enter the game?
  • Do people play alone or when they get together?
  • Is there any learning effect (for example, do people internalize the SMS commands over time)?
  • Does the cost of sending SMS messages create a barrier to long-term playing of the game?

Why do I blog this? I like this kind of empirical research on pervasive games a lot (even though my feeling is that we can go way further by using a mix of qualitative and quantitative methods). The results and the overall conclusion are very pertinent and make us rethink how pervasive/mobile games are presented by companies and labs: things are not that simple!

Location matters but... some questions raised by location awareness of others in multi-user applications

The "where are you?" question that opens mobile phone conversations is both a common social norm and an example of how important spatial information is. Asking for or giving one's physical location can help ground information such as conversationalists' availability (with regard to a social context) or support the coordination of activities (e.g. knowing what others do or did).

Location-based services ease this process of knowing others' location, be it spatial coordinates, a place or a context. Among all of those services, one of the most obvious features behind LBS is the positioning and tracking of individuals. This kind of application is used in various contexts (ranging from family management to the coordination of dispatched workers).

Apart from individual applications of LBS, there is now a strong trend towards the collaborative use of geolocation services. For instance, location-specific annotation applications (like Urban Tapestries) allow people to drop annotations at a specific location with a mobile device or through the web (the messages can then be accessed independently of the platform). Users who pass in the vicinity of a location can then read the messages and answer them, which gives them a feeling of re-appropriating the city. Location-tracking applications have also received a lot of attention (see for instance how Dodgeball was bought by Google, but there are plenty of others). The field is now known as "Mobile Social Software" (or MoSoSo).

That said, there seems to be a conspicuous lack of user-centered design in location-based services. The user's context is often not taken into account, and designers' frenzy to push for automatic positioning or complex features often leads to poor scenarios, as Russell pointed out some time ago. What is missing is not the technology (there are of course lots of clever positioning techniques: GPS, WiFi triangulation, RFIDs, TV waves...) but rather scenarios that fit users' needs and their context.

For instance, one of the crux issues in location-awareness usage is whether to automate the positioning mechanism or to let users disclose their own positions. At our lab, we investigated these issues through various field experiments, using a pervasive game as an alibi to test different interfaces. The game engaged players in a collaborative treasure hunt in which they could communicate using an application running on a TabletPC. The application shows the field map as well as the annotations sent by the participants. In one set of experiments, two kinds of interfaces were tested: in one case, we provided the users with an automatic location-awareness tool (their partners' positions are displayed on the screen); in the other case, players just see their own avatar on the campus map, without their partners' positions.

Automatically displaying the partners' positions on the interface did not change the groups' performance. However, not giving the partners' positions led players to communicate more, making much of their strategy explicit. In addition, another side effect of not being aware of the partners' positions was that users modified and reshaped their strategy more over time. Collaboration was therefore enriched by the absence of this location-awareness tool. It appears that it was better to provide users with a broader communication channel that allowed them to express what they wanted or found relevant. The results of this experiment show that automatic positioning prevented users from engaging in rich collaboration. Giving them the possibility to embed location cues in other kinds of information, like map annotations, appeared to be a good way to support collaborative processes like communication or strategy discussions. This is why I put the emphasis on the idea that location matters, but designers should keep in mind that automatic positioning merely shares information, whereas self-declared positioning is both information and a communication act. Sending one's position to the partners is at the same time a way to make manifest a fact that the player estimated as relevant for the activity. This is consistent with other user-experience research, see for instance the work of Benford and his team. They found that letting users manually reveal their positions was also a good way to get rid of location-awareness discrepancies (due to unreliable networks, latency, bandwidth, security, unstable topology, or network homogeneity).
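To make the design distinction concrete, here is a minimal sketch (hypothetical message formats, not the actual CatchBob! code): automatic positioning pushes raw coordinates on a timer, whereas self-declared positioning embeds the position in an annotation the player chooses to send, so the location arrives together with a communicative intent.

```python
import json
import time

def broadcast_position(player_id: str, lat: float, lon: float) -> str:
    """Automatic location awareness: the system pushes coordinates periodically,
    whether or not they are relevant to the partners at that moment."""
    return json.dumps({
        "type": "position_update",
        "player": player_id,
        "lat": lat,
        "lon": lon,
        "ts": time.time(),
    })

def declare_position(player_id: str, lat: float, lon: float, note: str) -> str:
    """Self-declared positioning: the player decides when to disclose a location
    and wraps it in an annotation, making the disclosure a communication act."""
    return json.dumps({
        "type": "annotation",
        "player": player_id,
        "lat": lat,
        "lon": lon,
        "note": note,   # e.g. a strategy element the player finds relevant right now
        "ts": time.time(),
    })

# Usage with made-up coordinates
print(broadcast_position("player_1", 46.5191, 6.5668))
print(declare_position("player_1", 46.5191, 6.5668, "I'll cover the north side, go south"))
```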

This post is part of the Carnival of the Mobilists XVI.

Research Bible

This book is a very rich manual on research methodologies (not only data-gathering techniques but also literature reviews and writing tips): On Research Methods, by Pertti Järvinen; Opinpaja Oy, 2004

This book is intended to give a holistic view of research methods. The main contribution is a classification of research approaches presented in an introductory chapter. In order to teach research work there are at least two alternatives: first, to train students in the use of one general method, e.g. the survey, or secondly, to allow students to define their research exercise problems themselves. We chose the latter. The students in a particular research course often select problems requiring different research methods. We thus got many real applications of most of the research approaches, five altogether (chapters 2-6). When a researcher or student is aware of many different research approaches she can better evaluate and utilize research reports produced by other researchers.

The number of research methods is large, so we cannot give detailed advice on each. We hope that the report in its present form can help a reader to find the right method and to get references to essential sources with detailed instructions.

Why do I blog this? I am interested in tips/ideas for structuring research practices; maybe it's because I am reaching the end of the PhD process, but I feel the need to structure what I did over the last 3 years...

3D prints of your WoW avatar

Following this morning's post about the connection between Bruce Sterling's Shaping Things and game design, I ran across this very interesting project about making 3D prints of Second Life or World of Warcraft avatars. It's based on Eyebeam's OGLE project:

OGLE (i.e. OpenGLExtractor) is a software package by Eyebeam R&D that allows for the capture and re-use of 3D geometry data from 3D graphics applications running on Microsoft Windows. It works by observing the data flowing between 3D applications and the system's OpenGL library, and recording that data in a standard 3D file format. In other words, a 'screen grab' or 'view source' operation for 3D data. The primary motivation for developing OGLE is to make available for re-use the 3D forms we see and interact with in our favorite 3D applications. Video gamers have a certain love affair with characters from their favorite games; animators may wish to reuse environments or objects from other applications or animations which don't provide data-level access; architects could use this to bring 3D forms into their proposals and renderings; and digital fabrication technologies make it possible to automatically instantiate 3D objects in the real world.
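OGLE's output step (writing the intercepted geometry to a standard 3D file format) can be illustrated in a few lines; this is only a sketch of that last step under my own assumptions, not OGLE's code, and it skips the OpenGL interception entirely:

```python
def write_obj(triangles, path):
    """triangles: list of ((x, y, z), (x, y, z), (x, y, z)) tuples captured elsewhere.
    Writes a minimal Wavefront OBJ file: one 'v' line per vertex, one 'f' line per face,
    which 3D-printing and modelling tools can import."""
    with open(path, "w") as f:
        index = 1
        for tri in triangles:
            for x, y, z in tri:
                f.write(f"v {x} {y} {z}\n")
            f.write(f"f {index} {index + 1} {index + 2}\n")
            index += 3

# A single made-up triangle standing in for geometry grabbed from the GL stream
write_obj([((0, 0, 0), (1, 0, 0), (0, 1, 0))], "avatar_fragment.obj")
```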

Example: 3D-printing your World of Warcraft character:

It can also be used to put avatars as mash-ups in Google Earth. Check their blog to stay tuned.

Why do I blog this? this is another interesting step towards having new artifacts generated from virtual content, as with spimes. It opens up lots of possibilities (especially if the avatars can be tagged). I'd be interested in printing my Nintendogs, putting an arphid on it and leaving it in Geneva... and seeing what happens... especially if there could be some interaction with people passing by (with their cell phones)....

Deferring context-awareness elements to users?

Intelligibility and Accountability: Human Considerations in Context-Aware Systems, Victoria Bellotti and Keith Edwards, Human-Computer Interaction, 16(2-4), 2001, 193-212. The paper is a very high-level computer science article about context awareness and the social issues that come with it. It focuses on the problem of defining which context-aware elements might be automatically extracted and shown to the users of interactive systems.

In particular, we argue that there are human aspects of context that cannot be sensed or even inferred by technological means, so context-aware systems cannot be designed simply to act on our behalf. Rather, they will have to be able to defer to users in an efficient and nonobtrusive fashion.

Why do I blog this? This is really one of the conclusions of my PhD research: certain processes (like location awareness) should not always be automated; sometimes deferring them to users can be more important, as we saw in CatchBob!

BUT:

Further, experience has shown that people are very poor at remembering to update system representations of their own state; even if it is something as static as whether they will allow attempts at connection in general from some person (Bellotti, 1997; Bellotti & Sellen, 1993) or, more dynamically, current availability levels (Wax, 1996). So we cannot rely on users to continually provide this information explicitly.

This might depend on the ACTIVITY: in CatchBob people kept updating their positions on the map so that others could be aware of what they were doing, because it was relevant at that moment and the cost of doing it was low.

Not directly related to my work, the paper also describes two principles for ubiquitous computing:

Intelligibility: Context-aware systems that seek to act upon what they infer about the context must be able to represent to their users what they know, how they know it, and what they are doing about it.

Accountability: Context-aware systems must enforce user accountability when, based on their inferences about the social context, they seek to mediate user actions that impact others.
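As a toy sketch of what "deferring to users" could look like (my own illustration, not from the paper): the system exposes its inference and the evidence behind it (intelligibility) and, below a confidence threshold, asks the user instead of acting silently (accountability). All names and thresholds below are made up.

```python
from dataclasses import dataclass

@dataclass
class ContextInference:
    label: str          # e.g. "in a lecture hall"
    confidence: float   # 0.0 - 1.0, however the system estimates it
    evidence: str       # intelligibility: what the inference is based on

def decide_ringer_mode(inference: ContextInference, ask_user) -> str:
    """Act on the user's behalf only when confident; otherwise defer.
    `ask_user` is any prompt callback (dialog, notification, ...)."""
    explanation = f"I think you are {inference.label} (based on {inference.evidence})."
    if inference.confidence >= 0.9:
        return "silent"   # confident enough to act automatically
    # Deferral: mediate the action instead of deciding silently
    answer = ask_user(f"{explanation} Switch the phone to silent?")
    return "silent" if answer else "ring"

# Usage with a dummy prompt standing in for a real, non-obtrusive UI
mode = decide_ringer_mode(
    ContextInference("in a lecture hall", 0.6, "WiFi access point + calendar entry"),
    ask_user=lambda question: print(question) or True,
)
print("ringer mode:", mode)
```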

Contextual Flickr Uploader: a step towards a camera blogject

Transcribing the notes from the blogject workshop, I connected the first project (a blogject camera) to a contextual Flickr uploader Chris recently sent us: the Context Watcher, developed by a team led by Johan Koolwaaij:

The Context Watcher is a mobile application developed in Python, and running on Nokia Series 60 phones. Its aim is to make it easy for an end-user to automatically record, store, and use context information, e.g. for personalization purposes, as input parameter to information services, or to share with family, friends, colleagues or other relations, or just to log them for future use or to perform statistics on your own life. The context watcher application is able to record information about the user's:

  • Location (GPS and/or GSM cell based)
  • Mood (based on user input)
  • Activities and meetings (based on reasoning)
  • Body data (based on heart and foot sensors)
  • Weather (based on a location-inferred remote weather CP)
  • Visual data (pictures enhanced with contextual data)

See the example here: for instance in this blog post, the content is made up of a picture and contextual elements: I visited Enschede (43.9%) and Glanerbrug (56.1%), mainly Home (56.2%) and Office (42.4%). I met lianne.meppelink (30.2%). My maximum speed was 23.0 km/h.
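As a rough sketch (my own illustration, not Context Watcher's actual code or data format), such a post could be rendered by aggregating sensed records into a sentence like the one quoted above:

```python
# Made-up aggregation of sensed context into a blog-post sentence,
# mimicking the Context Watcher example quoted above.
context_log = {
    "places": {"Enschede": 0.439, "Glanerbrug": 0.561},
    "labels": {"Home": 0.562, "Office": 0.424},
    "people_met": {"lianne.meppelink": 0.302},
    "max_speed_kmh": 23.0,
}

def render_post(log: dict) -> str:
    places = " and ".join(f"{p} ({share:.1%})" for p, share in log["places"].items())
    labels = " and ".join(f"{l} ({share:.1%})" for l, share in log["labels"].items())
    people = ", ".join(f"{p} ({share:.1%})" for p, share in log["people_met"].items())
    return (f"I visited {places}, mainly {labels}. "
            f"I met {people}. My maximum speed was {log['max_speed_kmh']} km/h.")

print(render_post(context_log))
```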

Why do I blog this? this application is definitely one step towards having blogjects. It achieves the first part of the process, which is about having an object that grasps contextual elements and uploads them to the web (the second part would be to let objects have conversations).

What is impressive is "I met lianne.meppelink (30.2%)": the fact that it can notice the presence of others is another good step towards a blogject world.

Vermersch's 'explicitation' interviewing technique

Today JB gave us a course about Vermersch's 'explicitation' interviewing technique (mostly used in France in the field of ergonomics and within the education system). Meant to elicit verbalisations of an activity, the idea of this technique is to favor evocation over rationalisation by the actor. Here is the process, in a nutshell:

  1. Contract between actor and observer: "if you agree, I will ask you to remember a specific moment...", "if there is something that you don't want to mention, don't tell it".
  2. Initial anchor: "put yourself back into the situation", "can you recall the moment when you were..." or "when you think of that moment, what is the first thing that comes to mind?" or fishing: "what is the first thing that came into your mind?". The point is to talk about a particular moment (anchor); the interviewer can specify a moment or let the person choose one.
  3. Prompting: "when you [do], what are you doing?", "when you see X, what are you doing?", "when you say you did X, what did you do?", trying to identify when the discourse becomes too general and asking the interviewee to be more precise about his/her actions; the interviewer also has to avoid introducing his/her own presuppositions. Use the present tense, use temporal markers ("and then?", "and what happens next?"), or use spatial markers ("where are you when you do X?").

It's possible to use specific cues (as in NLP), like the interviewee's gaze, to see whether he/she is in evocation or not (when he/she stares into space).

Also more about this here.

Why do I blog this? even though I use other techniques (such as self-confrontation for instance), this kind of exercise is interesting for our next CatchBob experiment, to reconstruct the game activity.

Similarity in on-line communities

In two interesting papers, Ludford and others discuss the importance of similarity in on-line communities:

In face-to-face interaction, people become friends with others who have interests and demographics similar to their own. This notion, supported by empirical sociology research, has not yet been widely explored by researchers studying online communication. If the same principles hold in the online world, the results could improve online communication forums. New technology could funnel those with similar (or complementary) interests to places where they could exchange ideas online.

"Studying the Effects of Similarity in Online Task-Focused Interactions", Cosley, D., Ludford P.J., Terveen, L.G., GROUP 2003 conference

"Think Different: Increasing Online Community Participation Using Uniqueness and Group Dissimilarity", Ludford, P.J., Cosley, D., Frankowski, D., Terveen, L.G., Ac, CHI 2004

Why do I blog this? I use this kind of material in a report about on-line community creation and maintenance that I am currently writing.

Projects review for the Cluster of Digital Entertainment Companies

Tomorrow I have to go to Lyon, France. I was asked to be part of the scientific committee of the "Pôle de compétitivité: Loisirs Numériques" (i.e. Cluster of Digital Entertainment Companies). Video game companies and research labs have worked over the past months on joint projects about Research and Development issues they could tackle together. The idea is that each project is proposed by at least two companies (developers or publishers) and one research lab. Projects have to be submitted by February 15th to get some funding (State + Regions), and tomorrow is more about giving ideas, thoughts, comments and critiques so that the projects are better suited for this deadline. The final choice is of course up to the State + Regions, joined in a kind of "state innovation agency".

I am looking forward to seeing (and commenting on) those projects!

Bruce Sterling's talk at LIFT06

Here are my notes from Bruce Sterling's presentation at LIFT06: Spimes and the future of artifacts. Some excerpts which are very insightful in terms of what a "world with spimes", the subject of his next novel, would be like:

so... now the challenge for the year is to try to describe in a novel what it's like to wake up in a world of spimes, i'll try to get this cultural experience down on paper, i will have characters on paper that will be surrounded with spimes

what difference does it make between the world i describe and the world with spimes: the primary advantage of a spime world is inside my head because i no longer inventory my possessions in my head, i don't care about what i own, they are all inventoried by some other magical inventory-voodoo, a spiming process for which searching/sorting works with a hosted machine

so i no longer remember where i put or find things or what they cost... and so forth, I just ask and then I am told, with instant real-time accuracy... we have an internet of things with a search engine so i no longer search for my shoes in the morning, i just google them, and as long as machines can crunch complexity, this interface makes my relationship to objects much simpler and more convenient than today! in a way that it never was before, and if it does not it will never be adopted; it is not stable, not a universal system, everyone will have their own reaction to spimes, with extreme conditions, conditions of catastrophe, of extreme poverty... and complete material loss... evacuation camps, prisons... the ultimate versions: clean rooms, life-support systems in hospitals, true bohemian madness, complete calamity between people and objects

where do i expect it to come from: from where it's at now, there won't be big decisions, but a natural evolution from the world of digital devices people already carry: laptops, mp3 players, camera phones, wands, and the wifi and broadband that serve them in location, and the global internet and these big socially generated objects, social applications

and now I am trying to write a novel where somebody wakes up in the morning unexcited about this, not excited, unexcited... new things saying hello, all things dying off, ghosts, shopware, possessions awaiting creation or their shipment to the junkyard

Some of the questions also addressed what it's going to be like:

- daniel k. schneider asks: "a preview question: in your novel, what would your characters encounter as major problems? because a novel needs problems"

it would hardly be dramatic to have a book without problems, it's not a utopian system, it will come in 30 years, will last like 20 and then will be replaced by another society... so my characters would have 2 types of problems: the legacy of the past they will try to reform with their spiming systems, and the difficulty of this new protocracy, other things coming in that will make their system tear apart... there will be new problems due to this technology... i expect them to have protocratic problems: some objects will work and others not, state of the art will mean it breaks down next month, cutting edge will mean it broke down last week

- alexandra deschamps-sonsino: do you expect people to have different emotional responses compared to today's objects?

yes of course, i suspect people will have an emotional response which departs from the object per se and kind of bleeds over into your records of the departed object and your plans to have other objects. I have a very similar emotional response to a novel manuscript because I began my career on a manual typewriter, so i can recall when a holographic manuscript you had sweated blood over was absolutely vital and valuable "i have got the original manuscript!": that's gone, there is no original manuscript, there is not even an original file! you're lucky if you have a file that you can send to the publisher and everybody comes back "wow, this 'bruce sterling treasure dot pdf'". BUT it does not mean that i care less about my book, rather that i care about where i got the book, where the book is moving... so my book becomes less physical and more relational, more like a social process, and you feel very differently.

imagine tableware or cars or apartments, things you bought at ikea... yes, the construction of emotions changes radically and may become more intense: the teddy bear you had when you were 6 months old never really leaves you, you can get another one, an absolute physical replica, and if you could do that would you really want it? if you have the perfect record of the teddy bear and can make another one that is practically identical, do you need the teddy bear or is the teddy bear just a hardcopy of a teddy-bear support system? and if that's the case, isn't it the support system that you are nostalgic about? i remember how i got that bear on ebay and then i saw it on amazon and here is the record of the paypal transaction and not you! your child! it's a struggle and that's why I am paid to write novels!

Pervasive Games and CSCW

New uses for mobile pervasive games - Lessons learned for CSCW systems to support collaboration in vast work sites by Matthew Chalmers and Oskar Juhlin, a paper for the workshop on gaming at the European CSCW conference in September 2005. The paper brings forward the idea that advances in pervasive games research (mostly location-based games) can benefit specific kinds of mobile work where a vast site is both the topic and a resource for getting the job done. They discuss how place-based annotations and information sharing could improve individual work, collaboration, as well as learning.

recent research in pervasive gaming demonstrates principles and lessons that can be applied more generally in CSCW systems for mobile work in vast work settings. There are similarities between many pervasive games and mobile work in vast settings since both have locations as resource and as topic, and more general issues to draw on with regard to how a large unfamiliar space becomes a place that one has experience of; that one understands in a social and practical way, and can interact in.

The similarities are:

  • Many forms of mobile work include collaboration and a focus on the geography both as a topic and a resource in the work. The size of a work site influences the way work is done. A vast work site has the consequence that workers have to move around to handle tasks, finding colleagues to enable collaboration is difficult, organisational procedures are difficult to relate to specific local objects, movement in vehicles negatively affects possibilities to communicate with locally available colleagues, and mobile workers become more solitary than co-located workers.
  • Coordination is then achieved through negotiations between different localities that take into account the changing situation in each locality

The article gives examples of collaborative activities for which space and others' locations are important: snow clearance at airports and on roads, as well as bus drivers' work.

The authors then argue that

games do not just support the use of locations as a resource in mobile game play, but also establish collaboration on finding and marking locations, and building up experience and understanding of how those locations fit into a larger picture of social and technological interaction. (...) Some of the games above support context-dependent gesture recognition. It includes two dimensions of context dependence. (...) We see strong and useful parallels with the situation of workers who create their work within organisational rules but also within their wider technical, social and environmental setting. The challenge for future research is to allow such design potential to be realised in ways that build on current work practices, and yet let people change those practices for the better as they use our technology to go about their work in their way in their work community.

Why do I blog this? this is very close to what we think too :) Our take is rather to study how players collaborate using these games so that we can understand how collaboration might be affected by location information (this is actually my PhD thesis). This paper is very relevant to my PhD work since it fills a kind of missing link about why one would use a pervasive game to inform CSCW practices.

Some of those things that blog

Working on the blogject workshop debriefing, I tried to gather some examples of 'objects that blog', or objects that upload their story to the web. The simplest form Alex Pang from the IFTF suggested to me is the webcam, but webcams are rather passive instruments, "reporting" whatever they see. Another simple example is a lamp which can show a history of the persons who have entered a specific room (see this Aula lamp on page 4).

Another suggestion (by fredhouse) was this project at EPFL that had a bunch of RSS feeds for sensor data from a mote-based sensor net. Using an embedded server component that publishes RSS data feeds, together with a datablogging platform, could be a way to upload this information. The point, as Gene described it, would be that every connected thing has syndication as a default capability, which is one of the things we discussed in our workshop the other day.
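As a rough sketch of the idea (my own illustration, not the EPFL project's actual code), one sensor reading could be syndicated as a minimal RSS 2.0 item that any feed reader or datablogging platform could pick up:

```python
from datetime import datetime, timezone
from email.utils import format_datetime
from xml.sax.saxutils import escape

def sensor_reading_to_rss_item(mote_id: str, quantity: str, value: float, unit: str) -> str:
    """Turn one sensor reading into an RSS 2.0 <item>, so the object's 'story'
    can be subscribed to like any other feed."""
    title = f"{mote_id}: {quantity} = {value} {unit}"
    description = f"Reading reported by mote {mote_id}"
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<pubDate>{format_datetime(datetime.now(timezone.utc))}</pubDate>"
        f"<description>{escape(description)}</description>"
        "</item>"
    )

# Hypothetical reading from a mote-based sensor network
print(sensor_reading_to_rss_item("mote-42", "temperature", 21.5, "C"))
```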

Of course, there is the AIBO blog (see the Aibo blog aggregator too), and the pigeon-that-blogs thing I blogged about last week is very close to this: "Pigeons with GPS enabled electronic air pollution sensing devices, capable of sending location based air pollution data as well as images to an online Mapping/Blogging Environment in real time".

Those things already exist; now some thoughts are beginning to pop up here and there:

Sascha thinks about something quite beyond that:

I spent some time thinking about objects that would tap into the flow of money within Google AdSense, ultimately ending up with an artifact that could make (grow?) money for you. I believe that this would be especially interesting because you could then give people that have no access to these abstract means of generating value (e.g. having a website or blog), or who are even illiterate, the means to access it and even make a living, using paradigms that are coming from a completely different background. Imagine an artificial plant that would generate clicks (money) on its own AdSense-equipped website whenever its solar cells are being exposed to the sun, thus combining the most

Overall, I like the datablogging concept because it's really close to the idea of various data aggregated with a potential goal, as in blogjects.

Well, we still have to write the workshop report :)
