Research

Between the virtual and the physical

The International Journal of Design Computing has a special issue on "The Space Between the Physical and the Virtual".

This special issue contains research which navigates the territory between the real and virtual world through metaphor, cognitive model, data stream and a designer's synergy. (...) In this volume of IJDC we attempted to solicit and select papers that explore that overlapping boundary between the physical and the virtual. In particular, we looked for research that contemplated the role of the subject user versus the machine automaton. The first paper, from Maher, Gero, Smith and Gu, utilizes agents that sense their environment and react accordingly. What is of particular interest is how those artificial agents responded to users who inhabit their world. Heylighen and Segers' DYNAMO article considers the synergy a designer develops between many forms of media and data, analog and digital. The system again works in partnership with the user, predicting their design goals and suggesting appropriate case studies. In the Bermudez cyberPRINT, a dancer interacts with the virtual manifestation of his physiological data. The performance aims to closely couple the human physical condition and the virtual condition such that, eventually, the boundaries between them are blurred. The article from Fischer and Fischer appropriates a morphogenetic biologic model to digital form finding. The human, in this case, acts as a director shaping and nudging largely independent virtual actors. It is in the lack of complete control that we find such systems intriguing. Their apparent independence gives the illusion of (or perhaps some may say illustrates) intelligence and purpose.

A live-action Scotland Yard in Bern, Switzerland

A game presented at the workshop on pervasive gaming at Ubicomp: "The hunt for Mr. X: Bringing a board game to the street" by Niklaus Moor (Swisscom Innovations). It's basically an adaptation of the board game 'Scotland Yard' to the scale of a real city.

"Mr. X" moves hidden through the streets of London. He has to show his location every 4th turn. The detectives know which kind of transportation he uses. With this information they have to figure out his position and surround him. When the detectives catch him, they win; if Mr. X escapes, he wins.

4 groups of 4 players hunt Mr. X and Mr. Y in the old city of Bern (Switzerland). Hunting by MMS: every ten minutes, Mr. X sends a picture of his current location. The detectives have to identify the position from the pictures and find Mr. X. They catch Mr. X by taking a picture of him. All players had cell phones with cameras and GPRS connectivity, MMS enabled. The pictures were sent by MMS to a photo weblog page, and the cell phones were enabled for (text-based) group chatting.

Why do I blog this? The game seems interesting and engaging. The document does not state whether there were further evaluations/tests/experiments; the user experience analysis appears to be quite limited here. The data collected might be extremely relevant (e.g. the pictures). I really like this: "The Photo-Blog gave a live overview of the hunt and the positions of players".

Prosopopeia: Live Action Role Play in Stockholm

News from the Swedish Institute of Computer Science:

The Live Action Role Play event Prosopopeia was held the weekend of June 11th and 12th. The game designer Martin Eriksson and his team had created a suggestive story, with its origin in ancient mythology but staged in the age in which we live. The game investigated the between-places and non-places of our society, and pointed out how our actions and relations will have consequences for coming generations, and for ourselves - on The Other Side of death.

Prosopopeia is:

a demonstrator in the "Enhanced live action roleplaying" work-package of the European research project iPerG and future events will be closely integrated with project results. In Prosopopeia the player becomes a willing channel for a ghost on a desperate mission to the land of the living. The ghosts and their channels use age-old magic, art, experimental technology and humane action to forge a path to the future while battling the spectres of their own past.

I like the premise of the project because I find it sooooo true:

Games on mobile platforms have so far been limited to relatively simple ports of older console games played to redeem the idle moments of modern life. This first stage of mobile gaming is only one possible gaming format using handheld devices. Experiments in the field of pervasive gaming hint at a deeper and more unique game format made possible by the unique traits of mobile devices. Player mobility, location-tracking, physical presence in reality and constant networked communication open new vistas for game design.

[And hey! btw fabien, it's an argument to say that pervasive gaming is one kind of mobile gaming ;)]

Besides, the gameplay seems quite interesting:

The "EVP machine", the modified reel-to-reel tape recorder, through which the players communicated with The Other Side, was completed. It was managed by the Game Master via a GSM link. The software "Thanatos" that transformed the voices of the Game Master group into “ghost voices”, worked in a complex environment of interactive sound technology including not only the voices but also a sampler with sound effects and music, with which the Game Master could improvise and guide the narrative. 

The "acclimatisation machine" which provided the players with background information through a meditative experience, was completed. It was made of 12 modified MP3 players, which could be synchronously remote controlled without any notice of the players. This was a generic and scalable technology that can be used in many future LARPs for embedding of sound or music on different locations or inside objects. 

All the scenes in Prosopopeia were intercepted. The Game Master could hear the dialogue of the players and was able to direct the game. Everything was recorded and will be used for documentation and evaluation. Many scenes were also surveyed with video cameras, both remote controlled W-LAN cameras, and IR sensitive night cameras. The different types of software for handling and storing of parallel video streams have been evaluated.

Why do I blog this? This project is carried out within the European iPerG project, which is amazingly relevant, and their website is full of resources. It's very interesting to follow their progress since they're paving the way for future developments in pervasive gaming.

Online Poker Games behavior

Mmh, this is not a 'Texas Hold'em' spam post! It's just that I found this topic curious: Hiding and Revealing in Online Poker Games by Scott A. Golder and Judith Donath. A nice account of "how card room interfaces can better support the psychological aspects of the game by critiquing the dominant methods of visualizing players: with generic avatars, and with text-only handles".

One of the significant problems in online poker is that most of the psychological and social information that can be gleaned at a card table is not present in online poker interfaces, greatly diminishing the authenticity and enjoyability of the game. In this paper, we discuss the nature and value of psychological and social information in poker, contrast the environments of some virtual card rooms, and make recommendations for general improvement. (...) The overarching problem with current poker interfaces is that much of the detail provided is devoid of meaning, while meaningful detail is absent. Useless detail like garish carpeting and scenery (Figures 1 and 2) provide no real meaning beyond the table metaphor. Misleading avatars (Figure 1) at once convey too much information (through stereotyped images) and too little (due to their being static, unchanging and unmoving). We suggest that human-like representations, if used at all, should not convey potentially meaningful cues that are not instigated by user action. (...) Though poker relies on social information, the majority of current online poker systems do not adequately convey that information, or do so in an inaccurate or problematic way. Being able to recognize other players and remember past interactions is essential, as is being given appropriate, accurate information on which to judge them.

Why do I blog this? first because I find the topic interesting. Second, because we're working on how people model/infer others' intents while doing something together, this is closely related to our concerns. There is indeed one paragraph about it in the paper:

Much socially interpretable information comes from activity directly related to gameplay, even when it is not a player’s turn. These activities include users repeatedly looking at their cards or manipulating their chips. The former is often an unconscious response to a good hand, and the latter may be indicative of one’s desire to bet. Players’ behavior during their non-turn time, whether they are contemplative, inattentive, or even disconnected, can reveal their state of mind. If these activities are transmitted to others, players will feel as though their behavior “counts” even when it is not their turn. Many things compete for computer users’ attention; if players multitask when it is not their turn, they may not give complete attention to the game. Because keeping track of one’s own socially interpretable information and that of others gives attentive players a competitive advantage, there would be a financial incentive to pay attention, which would make the game more captivating and therefore more enjoyable.

Drawing and Handwriting on Mobile Phones

Finally! A study about the use of handwriting on cell phones: Drawing and Handwriting on Mobile Phones by Marc Relieu at the Seeing, Understanding, Learning in the Mobile Age conference (Hungarian Academy of Sciences, April 28–30, 2005).

Abstract: Mobile phone studies offer several opportunities to explore how interactional practices make sense of new communicational affordances. Beside asynchronous messaging systems that allow combining text and pictures in artful ways, new instant messaging services permit to merge drawings with handwritten texts and to send them in real time on touch sensitive mobile phone displays. I propose an applied conversation analysis of such handwritten exchanges and explain how drawings can be systematically and dynamically coupled with texts in the communicational environments they contribute to produce. The creation of endless new combinations between handwritten text and drawings, either to solicit attention, to open an exchange, to produce an evaluation or to initiate a new topic turns out to be an endogenous game-like practice.

"The Drop": real world 'capture the flag'

The Drop: Pragmatic Problems in the Design of a Compelling, Pervasive Game by I. Smith, S. Consolvo and A. Lamarca (2005)

We are developing a new multiplayer pervasive game, called The Drop, designed to be compelling to play and yet practical to deploy in real-world settings. In The Drop, two teams use mobile phones to play a version of “capture the flag,” where one team hides a virtual “briefcase” in a public place and the other team attempts to find it within a specified amount of time. If the team that is searching for the briefcase finds it within the game’s time limit, they win; otherwise, the team that hid the briefcase wins. In this article we explain how the game is played, then discuss the technical, social, and business challenges we have faced while creating and implementing it.

Why do I blog this? What is interesting in this paper is that the authors not only describe the pervasive game they developed but also provide the reader with reflections on the issues game designers can face when working on such a project. The game design process is well explained. It reminds me of what I blogged about Mogi Mogi's development. It's something also developed in this paper: The Design History of a Geolocalized Mobile Game: From the Engineering of Displacements to the Engineering of Encounters - A case study of the development of a mobile game based on the geolocation of terminals (whole proceedings as a pdf!) by Christian Licoppe & Romain Guillot.

Playware: playground for tangible interactions

Playware is a cool project about tangible interactions for kids, carried out by Henrik Hautop Lund and Carsten Jessen (Maersk Institute, University of Southern Denmark, and The Danish University of Education). The point of their project is to engage kids in physical activities, instead of leaving them in front of a (TV) screen, through the embodiment of interaction in tangible material. Go check this technical report for more details. They designed an augmented playground which rocks! The playground is actually meant to support various games:

We implemented different games on the tangible tiles and analysed children’s physical play on the tiles in continuous use for 2 months at a school in Denmark (Tingager Skolen, Denmark). In one of the games, colour race, children compete against each other (more children can play in groups) by first choosing a colour (either blue or red) and then in a hurry jump on the tiles so they turn into their colour. Another example is a tangible version of the computer game Pong where a red arrow moves around randomly and when it gets to one side of the tiles configuration, a child has to step on the tile quickly, to return the arrow to the opponent. The arrow can move to one of the connected neighbours. The wicked witch game is an extension, which uses PDAs and WiFi localization to provide story lines and guidance for the children’s play.

Our goal with the prototype was to investigate whether children would accept the tiles as play equipment and whether these very simple tangible games actually would initiate physical and social play. We observed children playing indoor and outdoor on the tangible tiles and on ordinary playgrounds to investigate play and games activities. Children’s interaction with the tangible tiles was continuously video recorded and analyzed over the 2 months period in the Danish school.
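The tangible Pong variant described above is basically a random walk on a graph of connected tiles, plus a timed "step on the tile" interaction. Here is a toy simulation of that idea; the tile layout, the reset-to-centre rule and the reaction probability are my own assumptions, not details from the Playware report:

```python
# Toy simulation of the tangible Pong game: the arrow wanders randomly
# between connected tiles; when it reaches a player's edge tile, the child
# either steps on it in time (returning the arrow) or misses (a point for
# the opponent). Layout and timing rule are assumptions for illustration.
import random

# A 1-D row of 5 tiles; tiles 0 and 4 are the two players' edge tiles.
NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def play(rng, steps=1000, p_react=0.8):
    """Return (returns, misses) over a random walk of the arrow."""
    pos, returns, misses = 2, 0, 0
    for _ in range(steps):
        pos = rng.choice(NEIGHBOURS[pos])
        if pos in (0, 4):            # arrow reached an edge tile
            if rng.random() < p_react:
                returns += 1         # child stepped on the tile in time
            else:
                misses += 1          # too slow: point for the opponent
            pos = 2                  # arrow restarts from the middle
    return returns, misses

returns, misses = play(random.Random(1))
print(returns, misses)
```

Swapping the 1-D row for a 2-D grid of neighbours would give the configurable tile arrangements the report describes.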

Why do I blog this? I am looking forward to knowing more about the results: a description of what happened in terms of how children used this playground, accepted it and had fun.


A social network analysis of the CSCW community

Examinations of research communities are something I like to glance at, especially when it comes to my field of research. I ran across this relevant paper: Six degrees of Jonathan Grudin: A social network analysis of the evolution and impact of CSCW research by Daniel B. Horn, Thomas A. Finholt, Jeremy Birnholtz, Dheeraj Motwani and Swapnaa Jayaraman. The point of the article is to describe the evolution and impact of computer-supported cooperative work (CSCW) research through social network analysis of coauthorship data. This seems to be an interesting approach, different from regular bibliometric approaches. Some excerpts I found relevant:

The field of computer-supported cooperative work (CSCW) has an intense interest in studying collaborative practices, yet ironically, CSCW researchers remain unreflective about the structure and impact of their own collaborations. This indifference is in contrast to recent efforts in other disciplines, notably physics, where there is a growing literature on the organization and evolution of collaborations [4, 25]. Social network analysis is the primary lens used to understand patterns of collaborations in these other fields. (...) Given the tools and measures described above [Social Network Analysis], the interesting question becomes how to use these techniques to address hypotheses about the formation, structure, and impact of the CSCW research community. The motivation for examining CSCW researchers is twofold. First, as members of this community, we are curious about the origins and elaboration of the CSCW field. Second, in a more general sense, the emergence of CSCW research is an instance of the broader phenomenon of new disciplines forming at the intersection of existing fields. CSCW community composition over time. (...) we were able to create a picture of how cosmopolitan CSCW research was at any given moment between 1986 and 2003. (...) The data for this study came from the database of HCI publications supported by ACM and maintained by Gary Perlman at http://www.hcibib.org, which includes entries for journal articles, books, book chapters, conference proceedings, videos, and web sites (...) [Results] with respect to the ties between the CSCW community and the larger HCI community, CSCW researchers have maintained a steady association with the HCI world. That is, during the period when CSCW emerged as a separate sub-field, CSCW researchers had roughly equal numbers of CSCW coauthors and HCI coauthors. (...) 
Second, with respect to size and composition of the CSCW community over time, the community appears to be shrinking and has replaced itself almost completely over the preceding decade. (...) Third, with respect to the visibility of CSCW researchers within the HCI community, researchers central to the CSCW community tended to be central within the HCI community.

The excerpts above are just a glimpse; the paper is full of other interesting things.
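The core of a coauthorship analysis like this is simple: one edge per coauthor pair, then a centrality measure over the resulting graph. A minimal sketch with stdlib Python; the author names and paper list are invented for illustration (a real analysis would be built from a bibliography database like hcibib.org):

```python
# Toy coauthorship-network analysis: build an undirected graph with one
# edge per coauthor pair, then compute degree centrality. Names are made
# up for illustration (a wink at the paper's "six degrees of Grudin").
from itertools import combinations
from collections import defaultdict

papers = [
    ["Grudin", "Author A"],
    ["Grudin", "Author B"],
    ["Grudin", "Author C"],
    ["Author A", "Author B"],
]

# Adjacency sets: every pair of coauthors on a paper gets an edge.
adjacency = defaultdict(set)
for authors in papers:
    for a, b in combinations(authors, 2):
        adjacency[a].add(b)
        adjacency[b].add(a)

# Degree centrality: distinct coauthors, normalised by the number of
# other authors in the network.
n = len(adjacency)
centrality = {a: len(nbrs) / (n - 1) for a, nbrs in adjacency.items()}
most_central = max(centrality, key=centrality.get)
print(most_central)  # "Grudin": 3 distinct coauthors out of 3 possible
```

The paper's "six degrees" angle would then be shortest-path distances in this same graph.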

Digital Ethnography Workgroup

A good project and resources about digital ethnography can be found here. It's called DEW (Digital Ethnography Workgroup) and it's led by Edwin Hutchins (mister 'distributed cognition'): DEW is a community of professors, graduate students, and undergraduate students in the department of Cognitive Science at University of California, San Diego. We use the tools of ethnography, in conjunction with technology, to document cognition in real-world settings. This website serves as a common meeting place for developing thoughts, activities, ideas, resources, and projects related to DEW.

The point of this 'digital ethnography' is that digital technology can play an important role in each step of an ethnography: site selection / observation-interaction data collection / transcription / coding /analysis / publication-archiving. This page summarizes how this would work.

Some interesting resources can be found here: references, instructions to use material...

Why do I blog this? At the lab we've been having a discussion lately about this topic; we want to do something close to this initiative for our research. The main problem we face concerns our data sets: how can we do something more efficient to mine our data, which are so heterogeneous (mp3, video, logfiles, handwritten surveys...)?

Ubiquitous Computing, Entertainment and Games workshop

For morons like me who could not make it to Ubicomp 2005 and Julian's workshop about 'Ubiquitous Computing, Entertainment and Games', the solution is to have a look at the participants' presentations, which are really worth it:

Why do I blog this? This project may be considered state-of-the-art research in the field of ubiquitous computing applied to gaming. Since we're working in the area, it's a must to know them ;) And we'll be there next time ;) Besides, Julian gives an overview of the event on his blog.

3rd international workshop on mobile music technology

People into mobile music technology should submit something to the 3rd international workshop on mobile music technology (2-3 MARCH 2006, BRIGHTON, UK). Judging from the material extracted in the two previous workshops, it might be interesting!

Following two successful workshops that started to explore and establish the emerging field of mobile music technology, this third edition offers a unique opportunity to participate in the development of mobile music and hands-on experience of the latest cutting-edge technology. The programme will consist of presentations from invited speakers, in-depth discussions about the crucial issues of mobile music technology, hands-on group activities and break-out sessions where participants can get valuable feedback on their work-in-progress projects. The invited speakers include Michael Bull (University of Sussex, UK), often dubbed by the press as 'Professor iPod' for his iPod and car stereo user studies that reveal fascinating trends for mobile music.

People interested can drop a line to Frauke Behrendt: f.behrendt (at) sussex.ac.uk, Lalya Gaye (Hi Lalya!): lalya (at) viktoria.se or Drew Hemment (Hi Drew!): dh (at) loca-lab.org

CatchBob! video

As Fabien mentioned, here are some pictures from the CatchBob! video we did (editing by Fab and footage by Damien from Ecole des Arts Décoratifs of Geneva when he came to play last year). The pictures respectively show the strategy discussion before the game (1), strategy reshaping during the game (2), somebody writing annotations in real-time (3) and the use of the replay tool to reflect on the players' activity (4).

The long video (3:30, .mov, 15.8Mb) can be downloaded here. The short version (1:20, .mov, 8.3Mb) can be downloaded there.

Experiments in dialogue reconstruction

In a research article called 'Conversation', Slugoski and Hilton describe the 'methodology of reconstruction' created by Clarke:

a multifaceted research instrument by Clarke (1975; 1983), who predicted and found that people would be able to reassemble randomized turns in an unfamiliar conversation with greater than chance accuracy. (...) Interestingly, sorters’ success at the task was not tied to the presence of syntactic cues to turn position in the floor holdings, as Clarke (1983, Experiment 2) found that dialogues made devoid of such cues were, if anything, better reconstructed than those with syntactic cues present.

What is interesting now is how this can be applied to people who are or are not familiar with each other:

Kent, Davis and Shapiro (1981) found that the shared personal knowledge, or common ground, between friends made their conversations less accurately reconstructed by an outsider than were dialogues produced between two strangers, who did not share common knowledge and hence had to be more explicit in their utterances.

And of course some people tried even weirder variables:

Another relevant finding on individual differences in the generation of reconstructable speech patterns is that utterances produced by schizophrenic individuals are less accurately sorted than those produced by psychiatrically normal subjects (Rutter, 1979), a difference presumably due to schizophrenics’ frequent breaches of the Gricean (1975) maxim of relation; that is, the utterances were not connected in any relevant or meaningful way to one another (see Rochester, Martin, & Thurstone, 1977). It turns out, however, that simply telling sorters that the dialogue they are about to reconstruct involves a schizophrenic participant results in significantly poorer reconstruction than among those not so informed (Slugoski & Turnbull, 1987)
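To make the "greater than chance accuracy" idea concrete, here is a hypothetical sketch of how a reconstruction task could be scored. The scoring rule (fraction of turns placed in their original position, compared against a random-permutation baseline) is my own simplification; Clarke's actual measures were more elaborate:

```python
# Hypothetical scoring of a Clarke-style dialogue reconstruction task:
# compare a sorter's proposed turn ordering against the true order, and
# against the chance baseline of random orderings.
import random

def accuracy(proposed, true_order):
    """Fraction of turns placed in their original position."""
    return sum(p == t for p, t in zip(proposed, true_order)) / len(true_order)

true_order = list(range(8))                 # 8 dialogue turns, true sequence
sorter_attempt = [0, 1, 3, 2, 4, 5, 6, 7]   # two adjacent turns swapped

# Chance baseline: average accuracy of random orderings (expected 1/8).
rng = random.Random(42)
chance = sum(
    accuracy(rng.sample(true_order, len(true_order)), true_order)
    for _ in range(10_000)
) / 10_000

print(accuracy(sorter_attempt, true_order))  # 0.75
print(round(chance, 2))                      # close to 1/8
```

The Kent, Davis and Shapiro result above would then show up as lower `accuracy` scores for outsiders sorting friends' dialogues than strangers' dialogues.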

Why do I blog this? This is interesting for our project about mutual inferences in collaboration.

Modeling others' intents in CatchBob!

This week I am moving forward in the analysis of the CatchBob! data. The point was to create two coding schemes for the data analysis:

  • What a participant inferred about his/her partner during the game. This coding scheme is clearly data-driven in the sense that it emerged from the players' verbalizations (namely those extracted during the self-confrontation phase after the game)
  • How a participant inferred this information about his/her partner: this one is theory-driven since I used Herbert Clark's theory of coordination keys/devices to have clear categories about what happened

The next step is now to look back into the data and re-code them using both schemes with a chronological perspective, that is to say, trying to graphically represent (for each group/player) what they had to infer and how they made these inferences over time. My goal is to find patterns in the way they coordinate using this 'mutual modeling' process.

Let's (again) read the bible about this:


"Observing Interaction: An Introduction to Sequential Analysis" (Roger Bakeman, John M. Gottman), Cambridge University Press
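In its simplest form, the sequential analysis Bakeman & Gottman describe boils down to counting lag-1 transitions between coded events and asking which transitions are more frequent than expected. A minimal sketch; the event codes below are invented for illustration, not the actual CatchBob! coding scheme:

```python
# Minimal lag-1 sequential analysis: count how often one coded event
# follows another in a coded sequence. The codes (position / strategy /
# object) are hypothetical, not the real CatchBob! coding scheme.
from collections import Counter

# A coded sequence of one player's messages during a game.
events = ["position", "position", "strategy", "position",
          "object", "strategy", "position"]

# Lag-1 transition counts: (event at t, event at t+1).
transitions = Counter(zip(events, events[1:]))

# Conditional probability P(next = "strategy" | current = "position").
from_position = sum(c for (a, _), c in transitions.items() if a == "position")
p = transitions[("position", "strategy")] / from_position
print(transitions[("position", "strategy")], from_position, round(p, 2))
```

Comparing such conditional probabilities across groups or experimental conditions is exactly the kind of pattern-hunting described above.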

Hutchins' take on self-confrontation

Edwin Hutchins's position on self-confrontation (a research methodology that asks participants to explain their activity based on traces of their interactions, e.g. a video), taken from a discussion with the cognitive anthropology research group:

« My position with respect to verbal data goes back to the earliest work I told you on Monday [Hutchins, 1980], in which I was working at discourse and looking for a structure which seemed to be repeated, a schema, which seemed to account for memorization of the discourse that does not account for particular facts as mentioned in the discourse (…). The underlying hypothesis is that, in terms of this cognitive system, in order to communicate effectively from one person to the other, we have no choice but to use these structures that we share, that are legitimate ways of assembling ideas in our culture (…). If I violate the terms of these kinds of structures, then people say that does not make any sense, that those things don't go together that way. » (...) the problem for me precisely is : I observe some people engaged in a task, and who are producing verbal behaviour, the pilots for example, and I look at this action and communication and see if I can discover recurrent structures which tell me what pilots expect. If this happened and then that happens, we also expect that to happen. And if this did not happen, that will not happen. This is a structure of belief that pilots have about how events happen together, how events cause each other or precludes each other. And we would like to find out what that is. We can get data about that by watching them actually doing the task, or we can ask them questions about it, we can interview them. What we look for when they talk to us is not the truth or the falseness of the assertion they make. We are looking at what is the structure of the schema by which this is a sensible thing to say. Just as if we're going to the used-car salesman, we have to study the car salesman : we don't believe what he says but we ask why it is a reasonable thing for him to say that this automobile was owned by an old lady. 
So, if we come to the question of what do we do with the data of self-confrontation, the question is : what kind of interpretants are involved ? We present the subject with a task which is : generate for me a culturally meaningful account of your own behaviour while we will remind you what happened by showing you this videotape. At that point, the question is : do we take the content of what the subject says as to actually be information about what happened in that very event, or do we look at the structure of the account to see what it is that pilots believe is a meaningful way to construct a story about what happened in the event ? (…) The point is: shall we take the linguistic behaviour that is produced here as something which is true, or is this another source of data about the structure that subjects believe ? »

Why do I blog this? This statement is important in terms of how we can use self-confrontation (as we do in CatchBob!): participants should be placed in situations where their account is not just culturally possible, as stated by Jacques Theureau in his next book.

Live Action Role Playing Games and Technology

The following paper seems to be one of the first papers in the field of (live action) role-playing games and the technology to support them: How to Host a Pervasive Game: Supporting Face-to-Face Interactions in Live-Action Roleplaying by Jay Schneider and Gerd Kortuem (Ubicomp 2001). The paper describes a ubiquitous computing gaming environment that supports live-action roleplaying. The point is to enhance live-action games and have "a testing ground for our sociability enhancing mobile ad-hoc network applications".

The game they present is called Pervasive Clue; it's a "live-action roleplaying game based loosely on Hasbro's classic board game Clue augmented with short-range radio frequency (RF) PDA devices".

The goal of Pervasive Clue is to discover who killed the host, Mr. Bauer, where it was done and what was the murder weapon. Solving the murder is done through the discovery of clues, when a player feels they can solve the crime they are allowed to make an accusation. If any of the crime facts (murderer, location or weapon) are incorrect the player is eliminated.

To this end, each player has a Clue Finder like this (I am crazy about this device ;) ).

Why do I blog this? Apart from the scenario, which I find interesting (we're thinking about something similar for the next episode of CatchBob!), I appreciate the research avenues at the end of the paper:

Aside from our planned exploration into the environment of pervasive games, we see the following research issues to be open and worthy of further examination:

  • What features make pervasive computer games fun for the players? What are the pitfalls to avoid that detract from player enjoyment?
  • How can we measure the effectiveness or effect of pervasive technology in games?
  • What makes a game a "hit"? How does it vary among demographics?
  • What are the characteristics of pervasive games? Can we use these characteristics to categorize pervasive games?
  • What is the core set of applications needed by all pervasive games?

Activity analysis of CatchBob!

I am currently brainstorming about the activity analysis of CatchBob!. My aim is to study how coordination occurs in the game. In line with this goal, I am figuring out how to represent the joint activity over time. I would like to take certain factors into account (i.e. represent them graphically to have a clear picture of how things are going):

  • 3 players
  • time (using Herbert Clark's theory of coordination and how time is important like for opening the joint activity, dividing the action into individual acts...)
  • individual actions
  • information inputs (location awareness, communication acts, proximity sensor...)
  • coordination keys: grounded information among the group
  • grounded plan at the beginning
  • ...

I will use all the data extracted from the game to do so: qualitative and quantitative indexes can give me a clear picture of what happened. Besides, it would also be interesting to do this analysis for every group in both experimental conditions and then see whether there are differences in terms of coordination keys exchanged.

ActiveCampus: location awareness usage

W. G. Griswold, P. Shanahan, S. W. Brown, R. Boyer, M. Ratto, R. B. Shapiro, and T. M. Truong, "ActiveCampus - Experiments in Community-Oriented Ubiquitous Computing", IEEE Computer, to appear:

The UCSD ActiveCampus project is an exploration of wireless location-aware computing in the university setting. ActiveClass supports classroom activities such as anonymous asking of questions, polling, and student feedback. ActiveCampus Explorer supports several location aware applications, including location-aware instant messaging and maps of the user’s location annotated with dynamic hyperlinks of nearby buddies, digital graffiti, etc. This paper describes results on the use of these systems by several hundred students, drawing on observations, aggregate usage data, anecdotes, and the analytical perspective of Ecologies. Analysis exposes novel behaviors, the relevance of proximity in social computing, and a willingness to share location information with others.

Why do I blog this? The usage analysis is interesting:

we performed aggregated, anonymized analyses of our server data from ActiveCampus Explorer’s “launches” in April 2002 through May 2003. (...) We instead examined how many distinct people were creating content for each feature.

The top chart in Figure 3 [reproduced below] shows the number of distinct individuals who created each type of content during each month. The peaks correspond to the two launches. Generally, use decays at an exponential rate, a factor of two, over a month to month basis, until it stabilizes around 25 users. About a third of these are members of the ActiveCampus project. This disappointing outcome can be attributed to the ecological deterrents cited above.
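The decay pattern quoted above (halving each month until stabilizing around 25 users) can be sketched as a simple model; the starting value here is a hypothetical launch figure, not one reported in the paper:

```python
def monthly_users(initial, months, floor=25, decay=0.5):
    """Model usage that halves each month until it stabilizes at a floor."""
    users = initial
    series = []
    for _ in range(months):
        # Usage never drops below the stable core of users
        series.append(max(round(users), floor))
        users *= decay
    return series

# Hypothetical launch with 400 distinct content creators
print(monthly_users(400, 6))  # [400, 200, 100, 50, 25, 25]
```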

Here are the most significant results:

Since one of the underlying principles of ActiveCampus is that location matters, we analyzed message sender and receiver locations. This analysis was limited to the 1597 messages for which both the sending and receiving PDA had been located by the automatic geolocation system within the previous 100 seconds. There are numerous reasons why a user might not be currently geolocating, including use of a non-located computer or the user’s choice to hide location.

Next, we compared each sender-receiver pair’s average distance at the time of messaging to their average distance in general. The lower chart Figure 3 shows this relationship. For 473 out of 539 pairs the distance when messaging was less than the average distance. For 311 pairs the average messaging distance was less than 50 feet. This tendency held up when members of the project were excluded from the analysis, as well as data from the Explorientation. In short, relative location as a context seems to matter in community-oriented computing. Perhaps ActiveCampus Explorer’s presentation of nearest buddies at the top of the list highlighted their proximity. At the shortest distances, the pairs may have physically seen each other in the same room (using IM as a back channel) or knew they should be in class together. Finally, we examined privacy issues. Just 1% of users changed their default privacy settings to hide location from buddies, and 8.2% exposed their presence and location to non-buddies (0.3% more exposed just presence). In short, users seem unconcerned about location privacy with friends. A modest percentage will even trouble themselves to share location with non-buddies, perhaps as a way to meet people.
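A minimal sketch of the pair-level comparison the authors describe (average distance when messaging vs. average distance in general), on entirely hypothetical distance logs:

```python
from statistics import mean

# Hypothetical per-pair distance samples (in feet): distances recorded
# at messaging time vs. all recorded distances for the pair
pairs = {
    ("alice", "bob"):  {"messaging": [30, 45, 20], "overall": [200, 150, 400, 30]},
    ("carol", "dave"): {"messaging": [500, 480],   "overall": [450, 520, 510]},
}

closer_when_messaging = 0
under_50_feet = 0
for pair, d in pairs.items():
    msg_avg = mean(d["messaging"])
    all_avg = mean(d["overall"])
    # Was the pair closer than usual when messaging?
    if msg_avg < all_avg:
        closer_when_messaging += 1
    # Were they within the 50-foot threshold used in the paper?
    if msg_avg < 50:
        under_50_feet += 1

print(closer_when_messaging, under_50_feet)
```

On the real server logs this would be run over the 539 sender-receiver pairs mentioned in the paper.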

Visit at Swisscom Innovations (R&D lab)

Today, Pierre, Mauro and I went to the cosy Swiss capital (i.e. Bern) to meet R&D people at Swisscom Innovations. Emmanuel Corthay presented how the R&D unit works and gave us some details about the kinds of projects they carry out. A few numbers:

  • people: 70 specialists (mostly engineers and IT specialists, but also economists, psychologists, sociologists)
  • locations: Zurich/Bern/Silicon Valley (3 people doing scouting/reporting on trends)
  • budget: 40 MCHF per year

Mauro and I presented our research projects (Stamps and CatchBob!). Then we split into different groups to visit the lab. I joined the 'Economic and Social Aspects' (ESA) unit, which focuses on the human side of technology (mostly economists and sociologists). For some reason, the folks in charge of usability are not part of that group.

In the ESA unit, I met Stefana Broadbent, who is leading a group that watches social changes and technology usage (mostly from an ethnographical perspective). She gave a good overview of what they have done so far (studying families at home, performing various activities) and what they're planning to do in the future.

Then I met Regine Buschauer, whom I already knew from reading her blog. She described one of the research projects she works on, which is related to place/space activities and location-based services.

An interesting thing at Swisscom Innovations is that the structure of the building reflects the structure of the research units, since each floor corresponds to a specific research group (for instance, 7th floor = 'Economic and Social Aspects').


The whole day was interesting; it's very refreshing to meet pertinent people in the field. There might be some possibilities to collaborate.


dencity: virtual networks of real places

(Via infosthetics): denCity is very interesting:

denCity.net is an experiment concerning the territorialisation of the virtual and the deterritorialisation of the physical, en route to an augmented perception of urban reality and density. it's about "density", "city", about "den" (which in japanese means "electronic"), and about "nets"...

denCity.net creates virtual networks of real places. qr-codes (2d barcodes) are used to tag buildings and urban sites. your mobile camera phone can read these tags. simply shoot them and log in to denCity.net.

denCity.net provides information to the specific place. from there, you can browse through the web of tags. the tag is the key. once shot, you can always return to the places, virtually. it's the city in your pocket. anyone can tag places and thus create a new denCity-site. just log in to denCity.net - create a spot on the map and print the tag to attach it to the real-world location. [until now, the system only supports tag creation in the city of aachen, germany]
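The workflow quoted above (shoot a tag, resolve it to a place, browse from there) boils down to mapping a decoded tag payload to a place record. A minimal sketch, where the payload format, place names and messages are all hypothetical inventions, not denCity.net's actual scheme:

```python
# Hypothetical store of tagged places, keyed by decoded QR payload
places = {
    "dencity:aachen:0042": {"name": "Elisenbrunnen", "messages": ["nice spot"]},
}

def resolve(tag_payload):
    """Return the place record a decoded tag points to, or None if unknown."""
    return places.get(tag_payload)

rec = resolve("dencity:aachen:0042")
print(rec["name"])  # Elisenbrunnen
```

The tag acting as a stable key is what lets users "always return to the places, virtually": the physical sticker and the virtual record share one identifier.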

The goal of the project is compelling (especially for some folks here):

denCity.net examines the enrichment of real urban sites by a virtual dimension of information and networking, being accomplished by localisation of the virtual. it is about moderating between "virtual reality"-networks and the city as physical existence.

There are several ways to visualize the messages (bubbles). Why do I blog this? The project seems innovative in terms of visualization. I'd be happy to see how it's employed by potential users, and whether they have formal or less formal scenarios of use that are really deployed (I am always curious/dubious about how this kind of system gets used).
