(via this good blog from a group in Lausanne): Multitasking: You can't pay full attention to both sights and sounds, by Lisa De Nike. It's actually a lab study that suggests a reason why cell phones and driving don't mix:
"Our research helps explain why talking on a cell phone can impair driving performance, even when the driver is using a hands-free device," said Steven Yantis, a professor in the Department of Psychological and Brain Sciences in the university's Zanvyl Krieger School of Arts and Sciences.
"The reason?" he said. "Directing attention to listening effectively 'turns down the volume' on input to the visual parts of the brain. The evidence we have right now strongly suggests that attention is strictly limited -- a zero-sum game. When attention is deployed to one modality -- say, in this case, talking on a cell phone -- it necessarily extracts a cost on another modality -- in this case, the visual task of driving."
(...)
Using functional magnetic resonance imaging (fMRI), Yantis and his team recorded brain activity during each of these tasks. They found that when the subjects directed their attention to visual tasks, the auditory parts of their brain recorded decreased activity, and vice versa.
Yantis' team also examined the parts of the brain that control shifts of attention. They discovered that when a person was instructed to move his attention from vision to hearing, for instance, the brain's parietal cortex and the prefrontal cortex produced a burst of activity that the researchers interpreted as a signal to initiate the shift of attention. This surprised them, because it has previously been thought that those parts of the brain were involved only in visual functions.
Why do I blog this? I am more interested in the main result than in its implications for cell phone usage. The main lesson is that multitasking cannot be made effortless simply by spreading tasks across different modalities: attention appears to be a shared, zero-sum resource. Beyond cell phone use while driving, this can have important implications for software design, especially in mobile contexts. For instance, when conveying context/awareness information, it could be detrimental to task performance to give users feedthrough/awareness indications in two or three modalities at once (e.g. a sound to signal that an event happened plus a flash on the screen to display a message). This might, for instance, explain the attention troubles with current IM interfaces, where sounds and visual events are mixed... A minimal sketch of what a single-modality alternative could look like follows.
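To make this concrete, here is a minimal sketch (in TypeScript, with hypothetical names throughout: Modality, AwarenessEvent, SingleModalityNotifier are all made up for illustration, not a real API) of a notification policy that routes each awareness event through one channel only, instead of firing a sound and a screen flash at the same time:

```typescript
// Hypothetical sketch of a single-modality notification policy.
// The idea: if attention across modalities is zero-sum, a notifier
// should not double up cues across channels for the same event.

type Modality = "visual" | "auditory";

interface AwarenessEvent {
  message: string;
  urgency: "low" | "high";
}

class SingleModalityNotifier {
  // The modality the user's primary task currently occupies,
  // e.g. "visual" while driving or reading.
  constructor(private taskModality: Modality) {}

  notify(event: AwarenessEvent): void {
    // Pick exactly one channel per event. Here we route the cue
    // away from the channel the primary task already occupies;
    // either way, we never emit sound + flash simultaneously.
    const channel: Modality =
      this.taskModality === "visual" ? "auditory" : "visual";

    if (channel === "auditory") {
      this.playChime(event); // single earcon, no on-screen flash
    } else {
      this.showBanner(event); // single visual cue, no sound
    }
  }

  private playChime(event: AwarenessEvent): void {
    console.log(`[sound] chime (${event.urgency}): ${event.message}`);
  }

  private showBanner(event: AwarenessEvent): void {
    console.log(`[screen] banner (${event.urgency}): ${event.message}`);
  }
}

// Usage: while the user is engaged in a visual task, an IM event
// arrives as a single auditory cue rather than a sound-plus-flash combo.
const notifier = new SingleModalityNotifier("visual");
notifier.notify({ message: "New message from Alice", urgency: "low" });
```

Of course, whether to route the cue into the unoccupied channel or to queue it until the primary task pauses is itself a design decision the study doesn't settle; the point of the sketch is only the one-modality-per-event constraint.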
Another reason I blog this is that I think cognitive science results can be a good resource to orient design and reflections on design processes.