Tuesday 18 February 2014

What miming a steering wheel tells us about what we learn

Pro-tip: keep your eyes open while doing this
One of my favourite podcasts is '99% Invisible' by Roman Mars. It's about design, and about the consequences, for good and ill, that design has on our day-to-day lives. You should listen to it, because it is awesome.

An older episode fell into my ballpark a little bit, and I thought it was a nice idea worth repeating. It's about the steering wheel, and what we learn when we learn to use one. The moral of the story is this: when we learn, we don't acquire internal models of the features of a task which we can then access later on. Instead, we learn how to interact with a given task dynamic and how to use the information made available by that task dynamic.

The set-up is this:
If I asked you to close your eyes and mimic the action of using one of the simple human interfaces of everyday life, you could probably do it. Without having a button to push, you could close your eyes and pretend push a button, and that action would accurately reflect the action of pushing a real button[1]. The same goes for flipping a switch or turning a door knob. If you closed your eyes and faked the movement, it would sync up with its real world use.

Now if I asked you to do the same with a car’s steering wheel, you’d think you’d be able to describe steering accurately and mime the correct movements with your hands in the air, but you’d be wrong. Very, very wrong. You’d probably kill a bunch of imaginary people.
[1] ADW Note: This is not particularly true. Actions performed without the use of the typically present information (as is the case when miming) are generally not identical to the real action. This is what makes a good mime quite impressive. That said, you could generate an action in the ballpark.

The podcast features an interview with Steve Cloete, who has done research on this topic with his colleague Guy Wallis (e.g. Cloete & Wallis, 2009; Wallis et al., 2002, 2007). The basic finding is that when asked to mime a lane change, people turn the wheel but fail to turn it back to straighten the car. When actually driving, or in a simulator, people happily perform both elements of the manoeuvre. The researchers conclude (correctly, I think) that this means people have not internalised the dynamics of the steering wheel and the car. They don't have access to an internal representation of the correct action; instead, what they've learned to do is move so as to produce a particular pattern of visual information (flow across the retina in one direction and then the other). We move so as to produce certain perceptual consequences, and without those consequences available we don't reproduce the movement that we learned.
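
As an aside, the geometry of the manoeuvre is easy to sketch. Here is a minimal illustration (mine, not from the papers) using a standard kinematic bicycle model; the speed, wheelbase, steering angles and timings are made-up numbers, purely for illustration. A turn that is simply returned to centre leaves the car still heading off at an angle, like turning a corner; only a turn followed by an equal counter-steer ends up on a parallel course, i.e. an actual lane change.

import math

def simulate(steering_profile, duration=4.0, dt=0.02, speed=15.0, wheelbase=2.7):
    """Integrate a simple kinematic bicycle model.

    steering_profile(t) returns the front-wheel angle in radians (positive = left).
    Returns the final heading (radians) and lateral offset (metres).
    Speed, wheelbase, angles and timings are arbitrary illustrative values.
    """
    y = heading = 0.0
    t = 0.0
    while t < duration:
        delta = steering_profile(t)
        heading += speed / wheelbase * math.tan(delta) * dt  # yaw rate
        y += speed * math.sin(heading) * dt                  # lateral drift
        t += dt
    return heading, y

# One-phase input: turn left for a second, then just centre the wheel
# (roughly what the mimers do).
one_phase = lambda t: 0.05 if t < 1.0 else 0.0

# Two-phase input: turn left, then counter-steer right, then centre
# (what a real lane change requires).
two_phase = lambda t: 0.05 if t < 1.0 else (-0.05 if t < 2.0 else 0.0)

for name, profile in [("one-phase", one_phase), ("two-phase", two_phase)]:
    heading, lateral = simulate(profile)
    print(f"{name}: heading {math.degrees(heading):+.1f} deg, lateral offset {lateral:.1f} m")

Running this, the one-phase input finishes with the heading still roughly 16 degrees off and the car drifting ever further sideways, while the two-phase input finishes with the heading back near zero and a fixed lateral offset of a few metres, i.e. a completed lane change.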

One fun study (unpublished, as far as I can tell) involved people learning to do a lane change in an actual car while blindfolded. They drove down a test road with a person guiding them verbally, and over the course of the study they learned to change lanes correctly. Without vision, they initially did the incorrect mime version in which they only turned one way, but they eventually learned to do the full manoeuvre. However, this didn't transfer to the mime task. People had learned to use non-visual information to control the lane change, but this wasn't available when miming because it was created by the dynamics of the car.


Roman Mars finds this lack of transfer amazing; all this training in blindfolded driving and you still can't mime the lane change! But to me, this was always going to be the case. People don't learn internal models of the world; they learn how to interact with their environments and this depends critically on perception and information. No information, no successful action, and if the information doesn't overlap then no transfer either. This result is a lovely demonstration, I think. 


Speaking of lovely demonstrations... PsychScientists Jnr is an excellent pretend driver :)



References
Cloete, S. R., & Wallis, G. (2009). Limitations of feedforward control in multiple-phase steering movements. Experimental Brain Research, 195(3), 481-487.

Wallis, G., Chatziastros, A., & Bülthoff, H. (2002). An unexpected role for visual feedback in vehicle steering control. Current Biology, 12(4), 295-299.

Wallis, G., Chatziastros, A., Tresilian, J., & Tomasevic, N. (2007). The role of visual and nonvisual feedback in a vehicle steering task. Journal of Experimental Psychology: Human Perception and Performance, 33(5), 1127-1144.

10 comments:

  1. Excellent. This reminds me of the naive physics studies where observers make incorrect judgments when asked to do things such as draw the trajectory of a ball after it passes through a C-shaped tube, but when shown actual trajectories, both correct and incorrect, can easily pick out the correct one. Hence, our internal models aren't so great when we're asked to produce the trajectory, but we are nevertheless perceptually sensitive to natural motion.

  2. This is great. May I use your post (maybe reproduce parts of it, referenced, naturally) as an example when teaching students about EcoPsy? I am often asked, "well, what about in practice?" (the examples I currently use are too basic), and your post explains it very clearly and at a more appropriate level.

  3. Clearly I don't require an exhaustively explicit internal model of the large bay leaf tree I have in my garden: leaf for leaf, branch for branch, bird's nest for bird's nest. I need only recall minimal and salient features, e.g. it has increased in size threefold in four years, is between my balcony and the driveway, is hard to get the lawn mower under, goes great in Bolognese, etc. Surely no representationalist believes we rely exclusively on explicit internal models all the while ignoring what goes on in the world? Are you familiar with the work of Rick Grush? This is a recent paper (co-authored with Lucia Foglia and addressed to Evan Thompson) you might find annoying/interesting: The limitations of a purely enactive account of imagery

    Replies
    1. Surely no representationalist believes we rely exclusively on explicit internal models all the while ignoring what goes on in the world?
      In the standard cognitive model, representations of the world do all the causal work in our behaviour. How representations get functional content is rarely addressed, however.

      I know Grush's stuff but I was never that impressed.

  4. Were people asked to change lanes, then stay a safe distance behind an imaginary car in front? If they weren't, would that have (a) affected the outcome, or (b) been testing the same thing or not?

    Replies
    1. No, they weren't, and I'm not sure what that would have changed. The real issue they were testing was whether the learned ability to drive a car was a knowledge routine accessible to imagining, or a skill that emerges in real time and requires information, etc.

  5. I get it now!

    I tried this on one person, and they failed to straighten the wheel; on three others, with the extra instruction, they did straighten the wheel.

    But that would not straighten the car; that would be like turning a corner. You need to turn the wheel back the opposite way to change lanes, as you want to end up on a parallel course.

    I should've looked at the 2002 paper you referred to first: http://www.cell.com/current-biology/fulltext/S0960-9822(02)00685-1

    What was interesting, in just asking people rather than using a simulation, was how their reply formed part of a conversation, so what they showed you was a response to a question, not even a simple acting out.

    People also seemed to perform the action in relation to themselves, rather than attending to an imaginary road. One seemed more aware than the others of the layout of their car throughout the manoeuvre, while some seemed to have a sketchy sense of a stretch of road but not really of the effect of the movement of the vehicle through it.

    Looking at the paper reminded me of my attempts to steer a friend's narrowboat. There are some useful diagrams at http://www.wikihow.com/Control-a-Canal-Boat-(Narrowboat)

    I began learning at Step 8 and all went fine enough for a day or two, but when going through a tight gap, I found it very hard not to want to correct the wrong way. I can still picture it now. I know I would have to make a conscious effort to avoid doing this until I had got it pat.

    Anyway, whereas the inside of a car is fairly static, progress along a road, as you say, involves a continuous visual flow, and I'm not at all convinced that we often imagine that extensive a visual continuity, even if we are trying hard to do so.

    I would say I remember brief actions with transitions to the next moment of interest, similar to a dream.

    So I wonder how likely it would be that while someone is doing their mime business in the car, or even supported by a simulator, they are actually creating a constantly changing full scene for themselves to drive along?

    I suppose nobody learns to drive without having a road there to support their learning.

    I can well accept that pilots might land a plane taking in the visual flow of information together with physical feedback because they can use the scene to do the work for them.

    I think what I still don't get is what this has to do with the existence of representations, or even exactly what "representations" are. I already expect my mental models to be incomplete, because that is my experience of them, as edited and condensed versions of reality.

    I expect my unconscious "models" to work at the time, but my experience is that I don't have additional direct conscious access to them.

    Thanks for an interesting post!
