haptic-light sense organ --> place-memory experiment

I'm tremendously interested in seeing this little arc finish with some insights we can contribute in conversation with colleagues in the Consciousness and Cognition journal, or at SPEP. It's low-hanging fruit as far as craft is concerned (the engineering is child's play for us), but the research questions are deep.


Speaking of the research questions, it's crucial to understand the aim of the actual experiment. The light-haptic prosthetic organ was just the warm-up precursor to the actual experiment, which has to do with what Ed Casey called place-memory. Please read http://memoryplace.posthaven.com to build on the large body of prior work and understanding from the Memory Place Identity group.

We want a projective ray, along a very narrow cone -- essentially a line oriented along the pointing gesture.
And as I said before, no proximity sensing, only on/off with a threshold.
Yeah, you heard me right -- just on/off, no floating-point values between 0.0 and 1.0 ;)
There are deep phenomenological, and consequently methodological, reasons for this, which David Morris and the Memory Place Identity group arrived at in seminar.
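To make the sensing rule concrete, here is a minimal sketch of the on/off collapse. The function name, the raw-reading variable, and the threshold value are all illustrative assumptions, not the group's actual calibration; the only point it encodes is the one above: no proximity gradient, just a binary signal.

```python
def prosthetic_output(raw_intensity: float, threshold: float = 0.5) -> bool:
    """Collapse a continuous light reading along the pointing ray into a
    single binary on/off signal: no proximity gradient, no intermediate
    values between 0.0 and 1.0.  Threshold is a placeholder value."""
    return raw_intensity >= threshold
```

The same one-line comparison could live in a Max patch as a `>` object feeding the actuator; the sketch is only to pin down the contract.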

Then, the actual experiment is the place-memory experiment that I've described to Omar, Harry, Elena, and Liza, which we inherit from Patricia and Zohar's work with the Memory Place Identity group.

Please read Ed Casey's chapters on body- and especially place-memory, as well as the Lenay-Steiner paper whose experiment we're trying to replicate and then extend to the experiment where we turn the prosthetic on or off depending on where the person is located.  (I attached the chapter + article to the previous entry.)

We need to keep in mind that this is technically trivial engineering. And it ought to remain engineering-trivial, at least until we get through the place-memory experiment, so we can understand the timing and calibration issues. So I would simply retain Patricia and Zohar's electronics unless you can improve on them in less than 4 days of calendar time. All other work should go toward the complete experimental apparatus:
1. Write code to turn the prosthetic sense organ on/off (this needs new code, and wireless access to the on-body processor). If necessary, tether the subject with a cable! That's OK if it makes it possible to skip the step of acquiring a wifi+processor and simply hack it all in Max via a long cable.
2. Get the camera feed into the TML.
3. Use it to turn the prosthetic off/on as a function of where the wearer is standing in the TML.
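The steps above amount to a simple location gate: the camera tracker yields the wearer's floor position, and the prosthetic is enabled only inside a designated region. A sketch, with the region bounds, coordinate convention, and function names all assumptions for illustration (the real version would be a few objects in Max driven by the camera feed):

```python
def in_active_region(x: float, y: float,
                     region=((0.0, 0.0), (2.0, 2.0))) -> bool:
    """True if the wearer's floor position (x, y) lies inside the
    axis-aligned rectangle region = ((xmin, ymin), (xmax, ymax)).
    The default bounds are placeholders, not TML measurements."""
    (xmin, ymin), (xmax, ymax) = region
    return xmin <= x <= xmax and ymin <= y <= ymax

def gate_prosthetic(x: float, y: float, send_enable) -> None:
    """Turn the on-body sense organ on or off as a function of where
    the wearer stands.  send_enable stands in for whatever message
    path reaches the on-body processor (wireless, or the long cable)."""
    send_enable(in_active_region(x, y))
```

Note the gate is itself binary, consistent with the on/off-only decision above: crossing the region boundary is the event, with no gradient as the wearer approaches it.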

I would place stock in Harry and Liza's judgment on experiment construction, especially for moving this along briskly so as not to get bogged down in technicalities or artful polish at the expense of gaining phenomenological insight. Fail early, iterate rapidly. I'll add an extra request perhaps unique to the TML: please make it a 24x7 running environment, so that people who are not authors of the experiment can wander in and experience the environment live at any time, without you there to run it. Perhaps it can be another preset state on Nav and Julian's faders.