Elena Frantova 23-Jun-09: Memory Place Take 1: Mirror

On 23-Jun-09, at 5:48 PM, Elena Frantova wrote:

Hi everyone,

I would like to let you know about the current (this summer) and upcoming (fall) phenomenological experiments with David Morris.
They may affect you in one way or the other (especially the other, I hope).

First of all, this is the TML wiki page for the project:

It has a description of the project and, for those who are or will be part of the reading group, the list of readings with links to their PDFs:


The project has two parts:
-  "mirror" - happening in the TML right now. Designated corner: by the electronics/soldering station (may change).
-  the actual "memory place" - happening in the fall (initial setup to be done in summer, see below). Designated table: the big white oval TML table, which will be used as a projection surface.


1) "mirror":
- Apple Cinema Display - taken away from the video station to the designated corner
- G5 - Xin Wei suggested we use Varo; the video station now has its display instead of the big one
- 2 analogue cameras + 2 digitizers: for now, I've found two unemployed digitizers; all the cameras seem to be integrated into some setup (JS, does that mean they're in use?), so I'm borrowing the Hexagram cameras for now, but this will need to change....

2) "memory place" - table theatre
- Varo the G5, along with the cameras and digitizers, will migrate into the second phase of the project. The Apple Cinema Display will safely return to its natural video environment, and one of the small Mitsubishi projectors will be promoted to animate the table theatre (permanently installed there)...


So there are three working periods coming up this summer (scheduled for now; there could be a fourth one in the second half of August).

1. June 24 - June 30:
Set up the "mirror" experiment: see how simple manual (visual) and more complex (temporal) manipulations work. Come up with a protocol.

2. July 12 - July 18:
Improve the protocol - clarify what the experiment actually should be. Invite and schedule participants. Collect some data (?)

3. July 27 - August 3
Follow up whatever comes out in 1 and 2, maybe collect some more data with some more participants.
Set up the table theatre and brainstorm on it.

Suggestions and comments are welcome.




Thanks for a very rich meeting today.

The idea is to get a path working for stereo audio from one head to the other, and try it out ca. March 22.

Mazi and Zohar will try to cook up a test rig for audio (and Zohar will continue to keep track of where JS is at with Michael re the HMD -- thanks!).

Try emailing notes to post@memoryplace.posterous.com , please !

Xin Wei

Shiloh Whitney: An Experiment Idea

On 2010-03-02, at 12:40 PM, Shiloh Whitney, Ms wrote:

If I need to post this elsewhere, please let me know. See y'all Monday.

Rig goggles to give real time visual feedback in MIRROR IMAGE or 3RD PERSON.
Give the participant motor tasks (ex: putting on shoes).

One question I'd like to ask: after doing this a while, where do you experience yourself to be moving FROM?

One experiential parameter being varied is PLACEMENT, or motor view(point): the tacit "here" to which your movements refer. So, the body-reference (instead of the object-reference) of a perceptual event.
Another experiential parameter being varied is an established level involving both MEMORY and PERCEPTION: specifically, motor memory and visual perception not lining up on the established level.

I expect that a new body-reference (and so a new sense of PLACE) will have to be developed in order to make sense of the experience.

An analogy: a stereogram. (If you're not familiar with these, I recommend the stereogram of the lilium flower on the wikipedia page: http://en.wikipedia.org/wiki/Stereogram) Two similar but dissonant images claim a similar object reference. Induced diplopia (splitting the visual body-reference) allows these at first to be experienced as double vision: diplopic divergence. The doubled image of the stereogram eventually converges in a third image, this time with stereoscopic depth. A new depth-wise orientation and concomitant body-reference allow the dissonant images to reference both a non-dislocated object and a non-dislocated body. The experience is accompanied by the odd feeling of being in a new or different dimension.

In the experiment, motility and vision offer a "double image," experienced first in a manner analogous to the diplopic divergence undergone in looking at the stereogram. If the experience is going to begin to make sense, the participant must begin to experience the dissonance itself as a new level, a kind of depth with an accompanying body-reference.

For instance: say we do 3RD PERSON visual feedback. The participant is dealing with the dissonance of experiencing motility in the first person (as in movement in actual space) and vision in the third person (as in movement in the virtual space of a video game). I speculate that participants who manage to make sense of the situation may do so by developing a 2nd person body-reference: a bodily "here" that is also a "you," a third party who can be referenced by both motility and vision, despite the fact that one is in the 1st person and one in the 3rd. Note the potential difficulty that participants may simply suppress vision in order to accomplish the task in a remembered way. The task will matter: it must be something for which they depend on vision.

Does this ever feel like relating to one's self across time?

Memory and perception being dissonant, I speculate that the participant would begin to experience unusual expansions and contractions of lived time. She may, for instance, begin to experience a time interval between motility and vision. Note that this could be difficult to distinguish from performance difficulties due to the novelty of the situation.

We could prep participants by viewing and discussing stereograms and stereoscopic phenomena together, working with them to develop some phenomenological vocabulary for the internal articulations of experiences like these, working to avoid technical terms, especially controversial or technically ambiguous ones like "subject" and "object," or mind-body distinctions.

I'm looking for a doubling to occur that splits the original. Derrida uses a similar description of the evolving relation of form and matter in the event of the materialization of language as writing in the "La Brisure" chapter of De La Grammatologie. 1+1=3. Two senses of the "here" to which my movements refer result in a splitting of what we might think of as an original or absolute position: my sense of "here," where I stand and move and see from. This suggests it is not an absolute position after all: the sense of here that I come into everyday situations with may also have been established in a similar way, as the level of multiple body-references.


The best way to contact me will be by 
telephone:  +1-650-815-9962 (m, sf), +1-514-817-3505 (text)   
skype: shaxinwei
Sha Xin Wei, Ph.D.
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  topologicalmedialab.net/xinwei
Visiting Scholar • French and Italian Department • Stanford University • +1-650-815-9962
1-650-815-9962 (m, sf) • 1-514-817-3505 (m) • 1-514-848-2424 x 5949 (office) • 1-514-848-4252 (fax) • sha@encs.concordia.ca

head-mounted anything 2

One more note, re. hyperencephalophilia.

The body-restraint mesh was an interesting antidote to the head-centrism of so much sensory perception research. We have an opportunity in MP to avoid hyper-encephalophilia. Perhaps we can imagine the simplest ways to achieve a somewhat analogous estranging of the body from itself, or relative to a ground other than the head-in-normal-use.

For example, if we want to go the route of body restraint, we could:

(1) mount super-magnets into shoes (we have some), and ferrous plates in the floor;
(2) stitch elastics into shirts;
(3) tie long (12') elastics from shirt to the walls or to other shirts;


Always moving slowly, with spotters, ourselves.

Xin Wei

head-mounted anything

All the work we do around the TML has also been guided by "minimax": maximum experiential impact, minimum perceived tech.

So in that spirit, I would urge finding a way to avoid following my friend Blair MacIntyre's route from GaTech, with their backpack-mounted cybergoggles for AR research (ARToolkit). That's way too tech-heavy for my taste, and besides, so 2000 ;)  Basically, for softwear™, my criterion is that the device must be as wearable as common jewelry and clothing accessories before I would consider it a plausible body-based research platform.

What does that imply? Tetherless gear. No identifiable "computers" on the body. Can we see if there are wireless video goggles on the market or for loan? Another reason for audio instead of video. The strategy has been to do whatever processing cannot be done in softwear off the body -- i.e. beam data off the body, and deliver media to the body using the simplest, smallest devices. Anything more ambitious that requires more processing or an encumbering gadget is moved off the body to conventional computers (i.e. Macs) or to room-based gear, e.g. spatialized audio from a speaker array, which can be très très sophisticated.

Now having said that, I can relax temporarily on aesthetic/corporeal principles in order to get the experiment rolling, so we can learn what we can and rough out some scenarios as quickly as possible, tethers be damned. But we should still pay attention to the phenomenological commitments being made by the choice of sense modality and the media transforms being considered. Then TMLabbers can reasonably and equably re-introduce aesthetic (which are actually ethico-aesthetic) working concerns ... :)

Xin Wei

David re. Jhave Johnston scenario ideas

From: David Morris [mailto:davimorr@alcor.concordia.ca]
Sent: Tuesday, February 23, 2010 12:09 PM
To: 'New post on Posterous'
Subject: RE: Posterous Post (memoryplace) | Jhave Johnston scenario ideas
Interesting ideas from Jhave.

A quick note, on a quick glance: this emphasizes home recording as a way of drawing in the memorial dimension. Xin Wei's critical remarks about the head-mounted gear expressed worry about an ocular emphasis and disembodiment.

Earlier, before we had experimented with the myvu goggles, we had been thinking of a setup that would probe the sort of displacement (of body and world together) that Stratton felt when he donned the 'inverting goggles'. Namely, we had been thinking of a rig where the participant wears video goggles, a head-mounted camera, and GPS, fed through a backpack computer and manipulated in various ways (e.g.: invert the image, delay it, superimpose it with images recorded last week at the same place, etc.), and of having participants wear the rig in their own home.

I.e., this was sort of like Jhave’s, but instead of bringing the home into the lab, it would bring the lab into the home. Might be easier to do.
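The manipulations described above (mirror, delay, superimposing earlier footage) are simple enough to sketch as a per-frame transform. This is only a minimal NumPy sketch under stated assumptions -- the lab's actual pipeline was Jitter-based, and the function names and parameters here are hypothetical -- but it shows how little machinery the displacements themselves require:

```python
from collections import deque

import numpy as np


def make_pipeline(delay_frames=15, overlay=None, alpha=0.3, mirror=True):
    """Build a per-frame transform: optional mirror flip, a fixed frame
    delay, and optional blending with a previously recorded frame."""
    buffer = deque(maxlen=delay_frames + 1)

    def process(frame):
        out = frame[:, ::-1] if mirror else frame  # horizontal flip = mirror image
        buffer.append(out)
        delayed = buffer[0]  # oldest buffered frame; a short freeze until full
        if overlay is not None:
            # Superimpose a frame recorded earlier (e.g. last week, same place).
            delayed = (alpha * overlay + (1 - alpha) * delayed).astype(frame.dtype)
        return delayed

    return process
```

In a live rig each camera frame would pass through `process` before reaching the goggles; the deque length sets the temporal displacement, and `overlay` would hold the archival frame for the superimposition variant.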

But I think a key thing might be: how to vary the participant’s memorial relation to place as mediated by body and bodily habits? We want place and body to be concrete terms, and place can’t really be varied either practically or logically. (I say logically, because if we follow Casey and Husserl, place is precisely: the invariant (although variable) ground through which alone experience is possible in the first place.)

Earlier, from something Tristana said, we developed a sort of relation between ground and jointedness: ground is what allows us to develop joints, articulations, habits, ways of moving; fixed habits become relative grounds, because fixed habits lose articulateness, or the articulacy becomes implicit. But still, some ground is needed.


Jhave Johnston scenario ideas

Contributed by Jhave, Thanks :)  - Xin Wei

On 2009-09-19, at 10:47 AM, david jhave johnston wrote:


Perusing the inline note (specifically the description of the project) for the second time,
an idea for one line of research-art occurred to me,
i jot it down and send it over.... (it's just a seed):


Exploring identity's place-things through negating conventional place-things.

An immersive isolation experience: it occurs in a small sound-sealed room with walls of video and surround speakers;
each viewer-subject enters this room alone. They agree to stay there for a duration (1? 3? 12? 24-hour periods?).

The video footage content is all from the familiar world of the subject-viewer
(an archive of home movies that the subject has brought with them -- or had recorded for them -- specifically for the immersive experiment).

Initially, normal playback occurs: the viewer-subject watches their life, people they know, places they visit, their homes, families. They hear voices they know, sounds that are emitted by their lived environment...... Slowly and progressively over the duration of the experiment, all the video is shredded and converted on a gradient of destruction until it is an intense, flickering, pure white light. In parallel, the familiar sounds of the subject slowly disintegrate into a set of pure sine waves. This change should occur slowly and almost subliminally, like flickers of amnesia, mould, corrosion, diffusion (suggested disintegration model: fluid-dynamics simulators or Gray-Scott reaction-diffusion -- perhaps the subject's activity rate could act like pebbles dropped into the memory-archive pool).
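The Gray-Scott suggestion is concrete enough to sketch. Below is a minimal NumPy version of the standard Gray-Scott reaction-diffusion update, with the V field repurposed as a spreading "corrosion" mask that pushes footage toward white; the parameter values and the blending rule are illustrative assumptions, not part of the proposal:

```python
import numpy as np


def gray_scott(shape=(64, 64), steps=200, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Run a Gray-Scott reaction-diffusion simulation and return the V
    field, which grows outward from a seed like spreading corrosion."""
    u = np.ones(shape)
    v = np.zeros(shape)
    cx, cy = shape[0] // 2, shape[1] // 2
    v[cx - 3:cx + 3, cy - 3:cy + 3] = 0.5  # seed: a 'pebble' dropped in the pool

    def lap(a):  # 5-point Laplacian with wrap-around edges
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(steps):
        uvv = u * v * v
        u += Du * lap(u) - uvv + F * (1 - u)
        v += Dv * lap(v) + uvv - (F + k) * v
    return v


def corrode(frame, mask, t):
    """Blend a frame toward white as the mask spreads; t in [0, 1] is the
    position within the experiment's disintegration arc."""
    w = np.clip(mask * t * 4.0, 0.0, 1.0)[..., None]
    return (1 - w) * frame + w * 255.0
```

Running `gray_scott` for more steps as the session progresses, and raising `t`, would give the slow, almost subliminal drift toward pure white that the scenario calls for.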

In the final segment, light intensity is linked to sound intensity. The room breathes light and an abstract sound beyond identity; all signifiers have been stripped away. The subject is reflected back onto their own mind. Note: the duration of the preliminary 'normal' (pre-disintegration) playback of video is the same as the duration of this final segment of white-light-white-noise.

On entry, and again on exit, a questionnaire (online & direct into the db) is administered to the subject-viewer.
One of the exit questions is: which was longer, the period at the beginning, or the period of white light at the end?

No watches, cellphones, ipods, pdas, or other communication devices are allowed in room. 


ok, so maybe not so feasible! maybe too elaborate!
but it's sure fun pondering, and i will enjoy listening in and watching the group evolve....

thanks for the inspiration,


Memory + Place Project

Memory+Place Experiment: http://memoryplace.posterous.com
Memory Seminar:
Learning-To-Be: http://lrn2b.blogspot.com
MP Project Blog:

WIKI MP Project:

mailing lists: memory-place, mp-seminar

(The memory+place-seminar@concordia.ca  includes memory+place@concordia.ca, but also the extra folks who may be interested in a reading group.)

Members of list 'memory-place':

David Morris <davimorr@alcor.concordia.ca>
Harry Smoak <hsmoak@alcor.concordia.ca>
Xin Wei <sha@encs.concordia.ca>
Zohar Kfir <zohar@zzee.net>
Timothy Sutton <timsutton@fastmail.fm>
Michael Fortin <michael.fortin@gmail.com>
Jean-Sebastien Rousseau <jsrousseau@gmail.com>
Morgan Sutherland <morgan@morgansutherland.net>
Navid Navab <navid.nav@gmail.com>
Sha Xin Wei <shaxinwei@gmail.com>
Tristana Martin Rubio <tristana.martin.rubio@gmail.com>

16 subscribers

Members of list 'mp-seminar':

Elena Frantova <e_franto@encs.concordia.ca>
Jhave <jhave2@gmail.com>
Jen <jenniferbspiegel@gmail.com>
Lina Dib <linadib@rice.edu>
Omri Moses <omri.moses@concordia.ca>
Patrick <harropp@cc.umanitoba.ca>
"'Donald Jack Beith'" <don.beith@mail.mcgill.ca>
"'Shiloh Whitney, Ms'" <shiloh.whitney@mail.mcgill.ca>
"Noah Moss Brender" <mossbren@bc.edu>
David Morris <davimorr@alcor.concordia.ca>
Harry Smoak <hsmoak@alcor.concordia.ca>
Xin Wei <sha@encs.concordia.ca>
Zohar Kfir <zohar@zzee.net>
Timothy Sutton <timsutton@fastmail.fm>
Michael Fortin <michael.fortin@gmail.com>
Jean-Sebastien Rousseau <jsrousseau@gmail.com>
Morgan Sutherland <morgan@morgansutherland.net>
Navid Navab <navid.nav@gmail.com>
Sha Xin Wei <shaxinwei@gmail.com>
Liza Solomonova <liza.solomonova@gmail.com>
Tristana Martin Rubio <tristana.martin.rubio@gmail.com>

23 subscribers

2010-02-14: Memory+Place: plan run through(s) for first week of March?

On 2010-02-14, at 10:44 AM, Sha Xin Wei wrote:

Agreed. I'll ask Michael Montanaro and Anne Donovan +  Blue Riders and get back to MP.   The FG schedule has already been planned.
Xin Wei


Here's the correct email for posting to our Memory+Place blog:  post@memoryplace.posterous.com

(Sorry I sent the wrong one.)

On 2010-02-14, at 8:55 AM, David Morris wrote:

Sorry to be late in getting to reply here, but (and this is responding quickly):

This line seems promising. Altogether, if we are doing moving body stuff, I think it’s going to be better.

From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of Sha Xin Wei
Sent: Thursday, February 11, 2010 2:36 PM
To: memory-place@concordia.ca; Morgan Sutherland; zohar Kfir
Cc: post@www.posterous.com
Subject: Memory+Place: plan run through(s) for first week of March?

Dear MP folks,

How's it going with the gear search?   Zohar, Tim,  Navid?
Let's aim to  assemble something in the coming 3 weeks for a sense "transference" experiment first week of March.
(Of course prelim tests would be great -- let us know :)

Here's an idea -- but I'd have to run it by everyone potentially impacted first:

Maybe we can schedule some runs in the BlackBox in the basement of the EV, for a time when the Frankenstein's Ghosts group is not using the black box. (I'll ask my colleagues about that.) The BB is 50' x 50', so quite large, and has been used with a whole bunch of (non)dancers running around under our media, in structured movement exercises. (See the Ouija documentation on the TML website, for example.) So we have some experience with such movement / walking experiments. This time it'll be only ourselves. Since the named gear seems to be all body-based at the moment, I would say that we can plan to get in and get out with no trace. But this depends on whether there's sufficient clear space on the floor of the BB. I think there is. Otherwise we could reserve the 10th floor projection room, which has no windows and is totally bare. But then we would definitely need people to physically set up. So this means scheduling and committing to some blocks of time in the coming month or two.

I definitely advocate moving on parallel fronts :
            goggle search
            headmounted earphones
            scenario design

However crude, we need to get some experience trying things out "live" & "in density" -- meaning even if the gear is borrowed and not perfect, it's worth running an entire "experiment" in sketch form, soup to nuts.

Whether we transfer visuals or sound is less important than running a full scenario, like the walking experiments.
Then we can iterate, refining both our choices of tech and the "protocols".

I'm sure it will be quite motivating and enlightening to actually do it ourselves :)

Note: We have, thanks to Elena, a WIKI to record project info that will be a resource for the eventual papers or proposals to come out of this seed project:

Memory+Place project blog: http://memoryplace.posterous.com
To post to Memory+Place blog: post@memoryplace.posterous.com

Xin Wei

PS.  We have 2 roots for TML blogs:    http://topologicalmedialab.com/blogs/  and 

They should be unified in some way.  (!)

On 2010-02-08, at 8:56 AM, Timothy Sutton wrote:

Hi all,

I've just forwarded the links David & Zohar collected to a researcher
friend who just conducted a VR experiment in his lab. As a subject I
tried the glasses they bought, which I suspect are probably out of our
range -- but he may have some input. They had the particular need to
track the movement on the same device, as input to a first-person game
sim, which I'm not sure would be necessary for MP's purposes (though it
would be helpful). The glasses were a bit uncomfortable, and the awkward
ergonomics of movement took a bit out of the experience, but the size of
the frame and the quality of the image were close enough for jazz. From
memory they seemed like something along the lines of the 3DVisor product.

From a quick look at the i-Glasses 920, the proprietary processor
at least seems able to deactivate the 3D feature.

I assume that anything above standard-def resolution is unnecessary
cost? Since a small mounted camera would not provide any better
resolution anyway, we would just have to deal with outputting back to
a composite signal, ideally with as low a latency as possible in the
input-output loop. DV latency one-way is bad enough, but the DFG I/O
(not DV) is probably about the best we've got. I forget if you can
use two of them on one machine (two FW busses?) to get both the input
and the output. And I forget if both of ours are in working condition.


On Mon, Feb 8, 2010 at 10:26 AM, zohar <zohar@zzee.net> wrote:

The TAG group might have some, I will ask.

On Feb 8, 2010, at 9:41 AM, "David Morris" <davimorr@alcor.concordia.ca> wrote:

We hadn’t set a next meeting. I don’t think the 12th will work for me. I
have meetings 9-6, and really should go out after 6 to take a speaker to
dinner, so, unless Xin Wei is physically going to be in town, I don’t think
I could fit in this meeting. The 19th I am away.

Can Zohar and others do research on the goggles? One other factor is
availability: actually being able to get them in Canada.

I also wonder if it might be the case that some other lab on campus has some
that we could borrow, if things work this way when you’re doing experiments,
etc. (So far the only thing I’ve ever needed to borrow is a book.)


From: Sha Xin Wei [mailto:shaxinwei@gmail.com]
Sent: Monday, February 08, 2010 4:19 AM
To: David Morris; memory-place@concordia.ca
Subject: Re: Googling for Googgles

Hi Everyone

When's the next mtg -- Friday, Feb 12, 6 pm?

I would like to be present so we can decide on what to get, etc.


Xin Wei

On 2010-02-06, at 10:40 AM, David Morris wrote:

Dear MIPers,

We had a nice go-round with Mazi's goggles displacing us via videocam
hijinx, but we're realizing there are limits on those myVu goggles. First,
they're lo-res; second, people with eyes like mine, with heavy-duty glasses,
can't seem to get the image into focus.

So, I've been googling around a bit, and come up with these, which I leave
to our tech people to look at further (we'd also been thinking 3D goggles
would be better, to get independent inputs to each eye), as a start:

proprietary system for getting input into them.

http://www.3dvisor.com/ Probably very good for our application; would work
with glasses, but expensive -- at best we could afford one pair. But, from the
FAQs, it looks like researchers like us are interested (e.g., one question is
whether you can use them in MRI machines, another whether you can use them
with noise-cancelling headphones).

http://www.i-glassesstore.com/i-3d.html midrange, but again I wonder about
proprietary 3D inputs. (I haven't had a chance to read through these things
thoroughly, but e.g. a review here says "3D Video Format: Interlaced 3D Video"
for the i-Glasses HR 920, which I'm guessing means the two pictures are
transmitted as one interlaced signal and then decoded in the glasses. That
would mean we'd need to get interlaced output from Jitter, which might also
mean, I guess, half the frame rate per image? Or a higher frame-rate output?
Do composite signals have a variable refresh rate? I don't know how they
work; I can't quite figure these out.)

This might be a good resource: http://www.allvideoglasses.com/
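The guess about the "Interlaced 3D Video" format can be made concrete. The sketch below assumes a simple row-interleaved packing (left eye on even scanlines, right eye on odd), which is one common reading of "interlaced 3D"; the actual i-Glasses encoding may differ, and the function names are hypothetical. It also shows the resolution trade-off suspected above: each eye recovers only half the vertical lines.

```python
import numpy as np


def weave(left, right):
    """Pack two eye views into one frame: left on even scanlines,
    right on odd (one guess at 'interlaced 3D video')."""
    assert left.shape == right.shape
    frame = np.empty_like(left)
    frame[0::2] = left[0::2]
    frame[1::2] = right[1::2]
    return frame


def deweave(frame):
    """Recover the two half-height fields the glasses would separate;
    each eye gets half the vertical resolution."""
    return frame[0::2], frame[1::2]
```

On this reading, Jitter would only need to emit the woven frame at the normal composite rate; the per-eye cost is vertical resolution rather than frame rate, though a field-sequential variant would halve the frame rate instead.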

Also see this re. my idea to use a Wii to track head rotations and use the
motorized mount that the mirror is currently on to guide the cameras on a


-----Original Message-----
From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of zohar
Sent: Wednesday, February 03, 2010 6:56 PM
Subject: Reminder MEMORY + PLACE Friday Feb 5th 6 PM

Hi all,
just a reminder that we will meet on Friday Feb 5th @ 6 PM
to review the tech aspects, play with gear and brainstorm some more.

see you then !