and try it out ca. March 22. Mazi and Zohar will try to cook up a test rig for audio (and Zohar will continue to keep track of where JS is at with Michael re the HMD -- thanks!).
Try emailing notes to post@memoryplace.posterous.com, please! Thanks,
Xin Wei

If I need to post this elsewhere, please let me know. See y'all Monday.

EXPERIMENTAL SITUATION:
Rig goggles to give real-time visual feedback in MIRROR IMAGE or 3RD PERSON.
Give the participant motor tasks (e.g., putting on shoes).
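For concreteness, the two feedback conditions above can be sketched as simple frame transforms. This is a hypothetical illustration only, assuming video frames arrive as numpy arrays; the actual goggles, cameras, and software are still being chosen in this thread:

```python
import numpy as np

def mirror_image(frame):
    # MIRROR IMAGE condition: flip each row left-to-right,
    # so the participant sees themselves as in a mirror
    return frame[:, ::-1]

def third_person(rear_camera_frame):
    # 3RD PERSON condition: route the feed from a camera placed
    # behind the participant; the transform is just pass-through
    return rear_camera_frame

# tiny demo frame: left half bright, right half dark
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, :320] = 255
mirrored = mirror_image(frame)
# after mirroring, the bright half sits on the right side
```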
QUESTION 1:
One question I'd like to ask: after doing this a while, where do you experience yourself to be moving FROM?
EXPERIENTIAL PARAMETERS VARIED:
One experiential parameter being varied is PLACEMENT, or motor view(point): the tacit "here" to which your movements refer -- that is, the body-reference (rather than the object-reference) of a perceptual event.
Another experiential parameter being varied is an established level involving both MEMORY and PERCEPTION: specifically, motor memory and visual perception not lining up on the established level.
SPECULATION:
I expect that a new body-reference (and so a new sense of PLACE) will have to be developed in order to make sense of the experience.

An analogy: a stereogram. (If you're not familiar with these, I recommend the stereogram of the lilium flower on the Wikipedia page: http://en.wikipedia.org/wiki/Stereogram) Two similar but dissonant images claim a similar object reference. Induced diplopia (splitting the visual body-reference) allows these at first to be experienced as double vision: diplopic divergence. The doubled image of the stereogram eventually converges in a third image, this time with stereoscopic depth. A new depth-wise orientation and concomitant body-reference allows the dissonant images to reference both a non-dislocated object and a non-dislocated body. The experience is accompanied by the odd feeling of being in a new or different dimension.

In the experiment, motility and vision offer a "double image," experienced first in a manner analogous to the diplopic divergence undergone in looking at the stereogram. If the experience is going to begin to make sense, the participant must begin to experience the dissonance itself as a new level, a kind of depth with an accompanying body-reference. For instance, say we do 3RD PERSON visual feedback. The participant is dealing with the dissonance of experiencing motility in the first person (as in movement in actual space) and vision in the third person (as in movement in the virtual space of a video game). I speculate that participants who manage to make sense of the situation may do so by developing a 2nd person body-reference: a bodily "here" that is also a "you," a third party that can be referenced by both motility and vision, despite the fact that one is first-person and the other third-person.

Note a potential difficulty: participants may simply suppress vision in order to accomplish the task in a remembered way. The task will matter: it must be something for which they depend on vision.
QUESTION 2:
Does this ever feel like relating to one's self across time? Memory and perception being dissonant, I speculate that the participant would begin to experience unusual expansions and contractions of lived time. She may, for instance, begin to experience a time interval between motility and vision. Note that this could be difficult to distinguish from performance difficulties due to the novelty of the situation.
PREP:
We could prep participants by viewing and discussing stereograms and stereoscopic phenomena together, working with them to develop some phenomenological vocabulary for the internal articulations of experiences like these. We would work to avoid technical terms, especially controversial or technically ambiguous ones like "subject" and "object," or mind-body distinctions.
PHILOSOPHICAL RUMINATION:
I'm looking for a doubling to occur that splits the original. Derrida uses a similar description of the evolving relation of form and matter in the event of the materialization of language as writing in the "La Brisure" chapter of De La Grammatologie. 1+1=3. Two senses of the "here" to which my movements refer result in a splitting of what we might think of as an original or absolute position: my sense of "here," where I stand and move and see from. This suggests it is not an absolute position after all: the sense of "here" that I come into everyday situations with may also have been established in a similar way, as the level of multiple body-references.
Sent: Tuesday, February 23, 2010 12:09 PM
To: 'New post on Posterous'
Subject: RE: Posterous Post (memoryplace) | Jhave Johnston scenario ideas
Perusing the inline note (specifically the description of the project) for the second time, an idea for one line of research-art occurred to me. I jot it down and send it over... (it's just a seed):

************************

Exploring identity's place-things through negating conventional place-things.

An immersive isolation experience: it occurs in a small sound-sealed room with walls of video and surround speakers;
each viewer-subject enters this room alone. They agree to stay there for a set duration (1? 3? 12? 24-hour periods?). The video footage content is all of the familiar world of the subject-viewer
(an archive of home movies that the subject has brought with them -- or had recorded for them -- specifically for the immersive experiment).

Initially, normal playback occurs: the viewer-subject watches their life -- people they know, places they visit, their homes, families. They hear voices they know, sounds emitted by their lived environment...

Slowly and progressively over the duration of the experiment, all the video is shredded and converted on a gradient of destruction until it is an intense, flickering, pure white light. In parallel, the familiar sounds of the subject slowly disintegrate into a set of pure sine waves. This change should occur slowly and almost subliminally, like flickers of amnesia, mould, corrosion, diffusion (suggested disintegration model: fluid-dynamic simulators or Gray-Scott reaction-diffusion -- perhaps the subject's activity rate could act like pebbles dropped into the memory-archive pool).

In the final segment, light intensity is linked to sound intensity. The room breathes light and an abstract sound beyond identity; all signifiers have been stripped away. The subject is reflected back onto their own mind. Note: the duration of the preliminary 'normal' (pre-disintegration) playback of video is the same as the duration of this final segment of white-light-white-noise.

On entry and again on exit, a questionnaire (online & direct into a db) is administered to the subject-viewer.
One of the questions on exit is: which was longer, the period at the beginning or the period of white light at the end? No watches, cellphones, ipods, pdas, or other communication devices are allowed in the room.

**************************

OK, so maybe not so feasible! Maybe too elaborate!
But it's sure fun pondering, and I will enjoy listening in and watching the group evolve... Thanks for the inspiration,
jhav
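An aside on the disintegration model suggested above: a Gray-Scott reaction-diffusion step is only a few lines of numpy. This is a hypothetical sketch, not project code; the parameter values and the way the video frame seeds the chemical fields are illustrative assumptions:

```python
# Gray-Scott reaction-diffusion as a "disintegration" operator on a
# grayscale frame. Du, Dv, F, k are typical demo values, not tuned.
import numpy as np

def laplacian(Z):
    # 5-point stencil with wraparound (toroidal) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, steps=100):
    # U: "substrate" field (seeded with the image), V: "catalyst" field
    for _ in range(steps):
        UVV = U * V * V
        U = U + Du * laplacian(U) - UVV + F * (1.0 - U)
        V = V + Dv * laplacian(V) + UVV - (F + k) * V
    return U, V

rng = np.random.default_rng(0)
frame = rng.random((64, 64))        # stand-in for a normalized video frame
U = frame.copy()
V = 0.25 * (frame < 0.5)            # catalyst seeded where the image is dark
U, V = gray_scott(U, V, steps=200)
# U drifts toward a pattern decreasingly related to the source frame;
# mapping step count to elapsed time gives the slow "corrosion" of the archive
```

The subject's activity rate could modulate `steps` or `F` per region, which would give the "pebbles dropped into the memory-archive pool" effect jhave describes.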
Voice: +1-650-815-9962 | Skype: shaxinwei
Memory + Place Project
(The memory+place-seminar@concordia.ca includes memory+place@concordia.ca, but also the extra folks who may be interested in a reading group.)
Members of list 'memory-place':
David Morris <davimorr@alcor.concordia.ca>
Harry Smoak <hsmoak@alcor.concordia.ca>
Xin Wei <sha@encs.concordia.ca>
Zohar Kfir <zohar@zzee.net>
Timothy Sutton <timsutton@fastmail.fm>
Michael Fortin <michael.fortin@gmail.com>
Jean-Sebastien Rousseau <jsrousseau@gmail.com>
Morgan Sutherland <morgan@morgansutherland.net>
Navid Navab <navid.nav@gmail.com>
Sha Xin Wei <shaxinwei@gmail.com>
Tristana Martin Rubio <tristana.martin.rubio@gmail.com>
frantovka@gmail.com
don.beith@mail.mcgill.ca
shiloh.whitney@mail.mcgill.ca
noahmb@gmail.com
maziar.j@gmail.com
16 subscribers
memory-place@concordia.ca
Jhave <jhave2@gmail.com>
Jen <jenniferbspiegel@gmail.com>
Lina Dib <linadib@rice.edu>
Omri Moses <omri.moses@concordia.ca>
Patrick <harropp@cc.umanitoba.ca>
"'Donald Jack Beith'" <don.beith@mail.mcgill.ca>
"'Shiloh Whitney, Ms'" <shiloh.whitney@mail.mcgill.ca>
"Noah Moss Brender" <mossbren@bc.edu>
David Morris <davimorr@alcor.concordia.ca>
Harry Smoak <hsmoak@alcor.concordia.ca>
Xin Wei <sha@encs.concordia.ca>
Zohar Kfir <zohar@zzee.net>
Timothy Sutton <timsutton@fastmail.fm>
Michael Fortin <michael.fortin@gmail.com>
Jean-Sebastien Rousseau <jsrousseau@gmail.com>
Morgan Sutherland <morgan@morgansutherland.net>
Navid Navab <navid.nav@gmail.com>
Sha Xin Wei <shaxinwei@gmail.com>
Liza Solomonova <liza.solomonova@gmail.com>
Tristana Martin Rubio <tristana.martin.rubio@gmail.com>
maziar.j@gmail.com
23 subscribers
Agreed. I'll ask Michael Montanaro and Anne Donovan + Blue Riders and get back to MP. The FG schedule has already been planned.

Xin Wei

PS. Here's the correct email for posting to our Memory+Place blog: post@memoryplace.posterous.com
(Sorry I sent the wrong one.)

On 2010-02-14, at 8:55 AM, David Morris wrote:

Sorry to be late in getting to reply here, but (and this is responding quickly): this line seems promising. Altogether, if we are doing moving body stuff, I think it's going to be better.

From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of Sha Xin Wei
Sent: Thursday, February 11, 2010 2:36 PM
To: memory-place@concordia.ca; Morgan Sutherland; zohar Kfir
Cc: post@www.posterous.com
Subject: Memory+Place: plan run through(s) for first week of March?

Dear MP folks,

How's it going with the gear search? Zohar, Tim, Navid?

Let's aim to assemble something in the coming 3 weeks for a sense "transference" experiment the first week of March. (Of course prelim tests would be great -- let us know :)

Here's an idea -- but I'd have to run it by everyone potentially impacted first: maybe we can schedule some runs in the BlackBox in the basement of the EV, for a time when the Frankensteins Ghosts group is not using the black box. (I'll ask my colleagues re that.) The BB is 50' x 50', so quite large, and has been used with a whole bunch of (non)dancers running around under our media in structured movement exercises. (See the Ouija documentation on the TML website, for example.) So we have some experience with such movement / walking experiments. This time it'll be only ourselves. Since the named gear seems to be all body-based at the moment, I would say that we can plan to get in and get out with no trace. But this depends on whether there's sufficient clear space on the floor of the BB. I think there is. Otherwise we could reserve the 10th floor projection room, which has no windows and is totally bare. But then we would definitely need people to physically set up.
So this means scheduling and committing to some blocks of time in the coming month or two.

I definitely advocate moving on parallel fronts:
- goggle search
- head-mounted earphones
- scenario design

However crude, we need to get some experience trying things out "live" & "in density" -- meaning even if the gear is borrowed and not perfect, it's worth running an entire "experiment" in sketch form, soup to nuts. Whether we transfer visual or sound is less important than running a full scenario, like the walking experiments. Then we can iterate, refining both our choices of tech and the "protocols."

I'm sure it will be quite motivating and enlightening to actually do it ourselves :)

Note: We have, thanks to Elena, a WIKI to record project info that will be a resource for the eventual papers or proposals to come out of this seed project.

Memory+Place project blog: http://memoryplace.posterous.com
To post to the Memory+Place blog: post@memoryplace.posterous.com

Cheers,
Xin Wei

PS. We have 2 roots for TML blogs: http://topologicalmedialab.com/blogs/ and ... They should be unified in some way (!), and should be linked and annotated from http://www.topologicalmedialab.net/joomla/main/content/blogsection/4/15/lang,en/

On 2010-02-08, at 8:56 AM, Timothy Sutton wrote:

Hi all,

I've just forwarded the links David & Zohar collected to a researcher
friend who just conducted a VR experiment in his research lab. As a
subject I tried the glasses they bought, which I suspect are probably
out of our range — but he may have some input. They had the particular need of tracking the movement on the same device to feed into a first-person game sim, which I'm not sure would be necessary for
MP's purposes (though helpful). The glasses were a bit uncomfortable
and the awkward ergonomics of movement took a bit out of the
experience, but the size of the frame and quality of the image was
close enough for jazz. By my memory they seemed like something along the lines of the 3DVisor product.

From a quick look at the i-Glasses 920, the proprietary processor seems at least to be able to deactivate the 3D feature.

I assume that anything above standard-def resolution is unnecessary
cost? Since a small mounted camera would not provide any better
resolution anyway.. we would just have to deal with outputting back to
a composite signal, ideally getting as low latency as possible in the
input-output loop. DV latency one-way is bad enough, but the DFG I/O
(not DV) is probably about the best we've got. I forget if you can
use two of them on one machine (two FW busses?) to get the input and
output. And I forget if both of ours are in working condition.
Tim

On Mon, Feb 8, 2010 at 10:26 AM, zohar <zohar@zzee.net> wrote:
The TAG group might have some, I will ask.

On Feb 8, 2010, at 9:41 AM, "David Morris" <davimorr@alcor.concordia.ca> wrote:

We hadn't set a next meeting. I don't think the 12th will work for me. I have meetings 9-6, and really should go out after 6 to take a speaker to dinner, so unless Xin Wei is physically going to be in town, I don't think I could fit in this meeting. The 19th I am away.

Can Zohar and others do research on the goggles? One other factor is availability: actually being able to get them in Canada.

I also wonder if it might be the case that some other lab on campus has some that we could borrow, if things work this way when you're doing experiments, etc. (So far the only thing I've ever needed to borrow is a book.)

David

From: Sha Xin Wei [mailto:shaxinwei@gmail.com]
Sent: Monday, February 08, 2010 4:19 AM
To: David Morris; memory-place@concordia.ca
Subject: Re: Googling for Googgles

Hi Everyone,

When's the next mtg -- Friday Feb 12, 6 pm? I would like to be present so we can decide on what to get etc.

Cheers,
Xin Wei

On 2010-02-06, at 10:40 AM, David Morris wrote:

Dear MIPers,

We had a nice go-round with Mazi's goggles displacing us via videocam hijinx, but we're realizing there are limits on those myVu goggles. First, lo-res; second, people with eyes like mine, with heavy-duty glasses, can't seem to get their image into focus.

So, I've been googling around a bit, and come up with these, which I leave to our tech people to look at further (we'd also been thinking 3D goggles would be better, to get independent inputs to each eye), as a start:

http://www.nvidia.com/object/3D_Vision_Main.html These look nice, cheap -- but proprietary system for getting input into them.

http://www.3dvisor.com/ Probably very good for our application, would work with glasses, but expensive; at best we could afford one pair.
But, from the FAQs, it looks like researchers like us are interested (e.g., one q is can you use them in MRI machines, another is can you use them with noise-cancelling headphones).

http://www.i-glassesstore.com/i-3d.html Midrange, but again I wonder about proprietary 3D inputs. (Haven't had a chance to read through these things thoroughly, but e.g. a review here says "3D Video Format: Interlaced 3D Video" for the i-Glasses HR 920, which I'm guessing would mean the two pictures are transmitted as one interlaced signal and then decoded in the glasses. That would mean we'd need to get interlaced output from Jitter, which might also mean, I guess, half the frame rate per image? Or a higher frame rate output? Do composite signals have a variable refresh rate? I don't know how they're structured.)

This might be a good resource: http://www.allvideoglasses.com/ (haven't quite figured these out).

Also see this re. my idea to use a Wii to track head rotations and use the motorized mount that the mirror is currently on to guide the cameras on a

David

-----Original Message-----
From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of zohar
Sent: Wednesday, February 03, 2010 6:56 PM
Subject: Reminder MEMORY + PLACE Friday Feb 5th 6 PM

Hi all,

Just a reminder that we will meet on Friday Feb 5th @ 6 PM to review the tech aspects, play with gear and brainstorm some more.

See you then!

Zohar
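David's guess above about the "Interlaced 3D Video" format is easy to illustrate: pack the two eye views into alternating scanlines of a single frame, so each eye ends up with half the vertical resolution. A hypothetical numpy sketch follows; whether the i-Glasses HR 920 actually expects this field layout would have to be checked against its documentation:

```python
import numpy as np

def interlace_stereo(left, right):
    # Pack two equal-size views into one frame: even scanlines carry
    # the left eye, odd scanlines the right -- consistent with the guess
    # that the glasses decode two pictures from one interlaced signal,
    # at half the vertical resolution per eye.
    assert left.shape == right.shape
    frame = np.empty_like(left)
    frame[0::2] = left[0::2]
    frame[1::2] = right[1::2]
    return frame

def deinterlace(frame):
    # Recover the two half-height fields (what the glasses would decode)
    return frame[0::2], frame[1::2]

left = np.full((480, 640), 10, dtype=np.uint8)    # dark test view
right = np.full((480, 640), 200, dtype=np.uint8)  # bright test view
packed = interlace_stereo(left, right)
l_field, r_field = deinterlace(packed)
# l_field is all 10, r_field is all 200, each 240 rows tall
```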