Surfacing

Thanks for all the work moving things forward. I’m just surfacing from yet another wave of chairing matters—here’s my breath of air (….)—and now am going to be doing some research trips, so I’ll be very tied up in the next few weeks.  Away this Thursday and the 21st and 28th.

I’ve just gotten a reply from Lenay; what he had to say is below. Three things to note: 1) we were right, or at least in agreement with them, to use an on/off response in the buzzer (as he puts it, to have a quantitative response would already be to give a form of spatiality in sensation itself); 2) their protocol is phenomenologically rather slender; 3) they’ve gone virtual, i.e., they are no longer using a lighted target.

Now, we were going to go virtual with our walls. My worry about going virtual with the objects had been that latency in the virtual system would make things less palpable than encountering real light through a light-detector circuit. But now, having experienced the setup and gotten a sense of its rhythm and pace, I can see that latency wouldn’t be such an issue. So I wonder (mournfully remembering how much work a number of you, and we together, have put in): if we had a good position and direction sensor on the subject, could we go entirely virtual too? This would make the setup easier. It would also make recording position data easier, as Lenay indicates. But can we really track the position and direction of a fingertip finely enough? NB: if our walls are virtual, we’d be recording the position of the fingertip anyway, so we could have a hybrid experiment. And if we do go virtual, then we are freed from the dark-room requirement.
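To make the fully-virtual option concrete, here is the sort of loop I have in mind. This is only a sketch: read_fingertip_pose() and set_buzzer() are stand-ins for hardware we haven't chosen, the walls are idealised axis-aligned planes, and all the numbers are placeholders.

    import time

    WALLS = [("x", 1.0), ("y", 2.0)]   # hypothetical walls: plane axis + coordinate (m)
    RANGE_M = 0.5                      # assumed reach of the simulated "light beam"
    AXES = {"x": 0, "y": 1, "z": 2}

    def read_fingertip_pose():
        """Placeholder: return ((x, y, z), unit direction (dx, dy, dz)) from the tracker."""
        raise NotImplementedError

    def set_buzzer(on):
        """Placeholder: drive the vibrator on or off."""
        raise NotImplementedError

    def pointing_at_wall(pos, direction):
        """True iff the fingertip's ray meets some wall within RANGE_M."""
        for axis, coord in WALLS:
            i = AXES[axis]
            if abs(direction[i]) < 1e-9:
                continue                           # pointing parallel to this wall
            t = (coord - pos[i]) / direction[i]    # metres along the ray to the plane
            if 0.0 <= t <= RANGE_M:
                return True
        return False

    log = []   # timestamped poses: recorded anyway, so a hybrid design stays open
    while True:
        pos, direction = read_fingertip_pose()
        buzzing = pointing_at_wall(pos, direction)   # strictly on/off, per point 1
        set_buzzer(buzzing)
        log.append((time.time(), pos, direction, buzzing))
        time.sleep(0.01)                             # ~100 Hz sampling

Note that the position log comes for free, which is exactly the advantage Lenay mentions below.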

Even if we went totally virtual, I’d still want to video-record the experiment, and to have an LED on the subject that lights up when they are buzzed in relation to an object or a wall (maybe two LED colours for the two different events).
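And driving that marker LED is trivial; a sketch, assuming, purely for illustration, a Raspberry Pi with the gpiozero library, with the pin numbers and colour assignments invented:

    from gpiozero import LED

    object_led = LED(17)   # e.g. green: buzzed by an object
    wall_led = LED(27)     # e.g. red: buzzed by a wall

    def mark_event(kind, buzzing):
        """Mirror each buzz on camera so the video and the log can be synchronised."""
        led = object_led if kind == "object" else wall_led
        if buzzing:
            led.on()
        else:
            led.off()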

From Lenay:

For our experiments, we have used three kinds of devices.

First, a simple photocell (at the bottom of a small tube, 1.5 cm in length) that activates a small vibrator. Activation was triggered, all or nothing, when the amount of received light exceeded a threshold.
Then, we used a CCD camera, analyzing the gray level in a small window at the center of the image.
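[To spell out the rule he is describing, a sketch of my own: both front ends reduce to the same all-or-nothing threshold. The read function, window size, and threshold value are stand-ins, not their actual parameters.]

    import numpy as np

    THRESHOLD = 0.5   # normalised light level; their actual cut-off isn't given

    def photocell_level(read_adc):
        """Device 1: light reaching the photocell at the bottom of the tube."""
        return read_adc()          # placeholder ADC read, returning 0.0-1.0

    def ccd_level(frame, win=10):
        """Device 2: mean gray level in a small central window of the image."""
        h, w = frame.shape         # frame: 2-D uint8 grayscale array
        cy, cx = h // 2, w // 2
        return frame[cy - win:cy + win, cx - win:cx + win].mean() / 255.0

    def vibrator_on(level):
        """Either way, the output is binary: vibrate or don't."""
        return level > THRESHOLD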
But now we have abandoned these systems and work instead in a virtual reality environment: a motion sensor is placed on the finger, and the tactile feedback is determined by the position of a virtual target. This has the advantage of allowing an accurate record of the subject's movements.
Of course, with all these devices it would be possible to deliver a tactile signal proportional to the intensity of light received, or proportional to the distance. But we did not want to give ourselves a quantitative dimension in the sensory input from the start (this would have been to give ourselves a form of spatiality in sensation itself), because we wanted to understand the genesis of the experience of space, and especially of depth.
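[His design choice, made concrete: given the distance d from fingertip to target, the two options differ only in the last line. The names and the range are illustrative.]

    def binary_feedback(d, range_m=0.5):
        """What both groups chose: contact or no contact, nothing in between."""
        return d < range_m

    def proportional_feedback(d, range_m=0.5):
        """The rejected option: the signal's intensity would already encode depth."""
        return max(0.0, 1.0 - d / range_m)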

If you want to perform this experiment easily, you can use the "enactive torch" developed by Tom Froese at Sussex University, following similar principles (http://enactivetorch.wordpress.com/).

For the protocol, we just asked subjects to locate the target and indicate its position, either verbally (from a learned set of positions) or by pointing to it with the opposite hand.

For the phenomenological description, it was I who, having carried out the experiment, tried to report accurately the different stages of the constitution of such a space of perception and action.