No prob with investigating gear. But before we go really far down a gear-specific path, I'd like to send this out to the group. ... And ... also call in some expert experimentalists who are good for this sort of work: Satinder Gill in Cambridge, and Helgi Schweitzer in Innsbruck. They've been interested in our work. Helgi Schweitzer in particular is a master experimentalist who's worked with questions of collective rhythm and synchrony. http://www.dalaielian.de/jon-helgi/

And Satinder's coming to workshop with us March 1 - 8. So she may as well get warmed up :)

Cheers,
Xin Wei

Hi, dear Memory+Place folks!
Hello, did we miss a step? What are our scenarios?

The problem with almost all HMD experiments is that they are designed from an ocularcentric perspective, which in turn comes from putting a sharp divide between the observer and the observed. Our MP starts from a thoroughly embedded perspective, so to speak, so we have to be very careful that our experiments do not buy the framing assumption along with the tech. Having been around visualization research plus a lot of the AR stuff for a few decades, I can attest to the enormous and rigid representationalist assumptions that are hardwired, literally, into HMDs, starting with what is "3D."

So I'd be very very careful to design experiments that retain whole-body movement in physical space. Of course this increases the challenge of designing a scenario that'll allow the phenomena to emerge. Part of our "non-methodology" is to avoid pre-judging as much as possible what the salient "observables" are in a given situation.

The predominantly ocularcentric "VR" and "AR" tech is designed squarely under naive sensory-mode-specific assumptions that either cut out the body altogether (focussing on what the subject sees as representation) or, more subtly, assume that being in the world is simply summing the sense-modalities together, as if experience were merely the linear sum of video input + audio input + ___ input.

Of course we can use the gadgets without the engineers' perceptualist or cognitivist assumptions, but typically the more complicated the gear, the more we have to unbuild or hack around in order to use the gear outside its spec.

Sorry I was not present on Friday last week, else I would've been able to express this more easily verbally :)

Constructively, I would propose two design strategies:

(1) Let's try to mitigate the representationalist tendency by going to auditory transference rather than visual. There is a significant practical issue as well. Audio data takes so much less bandwidth and processing power (uncompressed CD-quality stereo is roughly 1.5 Mbit/s, versus a couple of hundred Mbit/s for uncompressed standard-definition video) that we can do a LOT richer augmentation and experiment with sound streams -- more phenomenological research for less engineering. Less experiment-specific gear means working with more familiar props. I propose to follow a modified version of Grotowski's "poor theater" and, instead of building entirely synthetic perceptual fields, see what experiments we can design that work by defamiliarizing familiar things and bodies and places in situ.

(2) What I think is utterly crucial at this stage is to imagine what we would do with such transference gear. For example, I propose we talk more over email about the scenario set-up. Where should this occur? Ideally, what do bodies do, and in what sort of space? Empty, or cluttered with familiar domestic or public props? Indoors? One or more than one person at a time?

EXAMPLE: WALKING EXPERIMENT

Scenario: Hard shoes, bare wooden floor. Blindfold one person, who hears the binaural sound of footfalls from another person.

Some "exercises":

(A) Walk with that person.
(B) Take a walk for 5 minutes. Return to where you were.
(C) Listen to a recording of another person. Try to walk as that person walked.

This can involve time, but also orientation if we unleash the full power of our spatialization set-up: we can ask the person to try to walk not only at the pace, but also where (in the apparent trajectory) of what s/he hears -- see the sketch below for one way we might check the pacing.
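To make exercise (C) and the pacing comparison concrete, here is a minimal sketch of how we might check it after the fact -- Python with numpy/scipy on plain WAV recordings; the filenames and onset-detection thresholds are placeholders I made up, not anything our rig dictates:

# Rough sketch: pull footstep onsets out of two WAV files and compare
# the average step interval. Filenames and thresholds are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def step_times(path, min_gap_s=0.3):
    """Approximate footstep onset times (in seconds) in a WAV recording."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                      # fold binaural/stereo down to mono
        samples = samples.mean(axis=1)
    env = np.abs(samples.astype(np.float64))
    win = max(1, int(0.02 * rate))            # ~20 ms smoothing of the envelope
    env = np.convolve(env, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(env, height=0.3 * env.max(),
                          distance=int(min_gap_s * rate))
    return peaks / rate

ref = step_times("b_footfalls.wav")           # what A heard in the headphones
own = step_times("a_walk.wav")                # what A actually walked
print("reference mean step interval:", np.diff(ref).mean())
print("participant mean step interval:", np.diff(own).mean())

Nothing here depends on the spatialization set-up; it only compares when the footfalls happen in the reference recording versus in A's own walk.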
This is very different from the usual externalist, representationalist approach: A observes B and tries to imitate B. Here we are putting A in B's place -- A hears what B hears -- and we ask A to corporeally do what B may have done corporeally in order for B to have heard what is in A's headphones.

So, let's hear more scenarios?

Cheers,
Xin Wei

On 2010-02-08, at 8:56 AM, Timothy Sutton wrote:

Hi all,

I've just forwarded the links David & Zohar collected to a researcher
friend who just conducted a VR experiment in his research lab. As a
subject I tried the glasses they bought, which I suspect are probably
out of our range -- but he may have some input. They had the particular need to track the movement on the same device to input into a first-person game sim, which I'm not sure would be necessary for MP's purposes (though it could be helpful). The glasses were a bit uncomfortable
and the awkward ergonomics of movement took a bit out of the experience, but the size of the frame and the quality of the image were close enough for jazz. By my memory they seemed like something along the lines of the 3DVisor product.

From a quick look at the i-Glasses 920... the proprietary processor
seems at least to be able to deactivate the 3D feature.

I assume that anything above standard-def resolution is unnecessary
cost? Since a small mounted camera would not provide any better
resolution anyway... we would just have to deal with outputting back to
a composite signal, ideally getting as low latency as possible in the
input-output loop. DV latency one-way is bad enough, but the DFG I/O
(not DV) is probably about the best we've got. I forget if you can
use two of them on one machine (two FW busses?) to get the input and
output. And I forget if both of ours are in working condition.
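If we do go down that road, a crude way to put a number on the round-trip latency would be something like the following (Python + OpenCV; the capture-device index and the brightness threshold are guesses): point the camera at the screen, flash the window white, and time how long the brightening takes to show up in the captured frames.

# Crude glass-to-glass latency probe: point the camera at the screen,
# flash the window white, and time how long until the captured frames
# go bright. Device index 0 and the threshold of 100 are guesses.
import time
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
black = np.zeros((480, 640), dtype=np.uint8)
white = np.full((480, 640), 255, dtype=np.uint8)

cv2.imshow("probe", black)
cv2.waitKey(500)                  # let the camera settle on the dark screen

cv2.imshow("probe", white)
cv2.waitKey(1)                    # force the white frame to actually be drawn
t0 = time.time()

while True:
    ok, frame = cap.read()
    if ok and frame.mean() > 100: # captured image has gone bright
        print("round-trip latency ~ %.0f ms" % ((time.time() - t0) * 1000))
        break

cap.release()
cv2.destroyAllWindows()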
Tim

On Mon, Feb 8, 2010 at 10:26 AM, zohar <zohar@zzee.net> wrote:
The TAG group might have some; I will ask.
On Feb 8, 2010, at 9:41 AM, "David Morris" <davimorr@alcor.concordia.ca>
wrote:
We hadn’t set a next meeting. I don’t think the 12th will work for me. I
have meetings 9-6, and really should go out after 6 to take a speaker to dinner, so, unless Xin Wei is physically going to be in town, I don't think I could fit in this meeting. The 19th I am away.
Can Zohar and others do research on the goggles? One other factor is
availability, actually being able to get them in Canada.
I also wonder if it might be the case that some other lab on campus has some
that we could borrow, if things work this way when you’re doing experiments,
etc. (So far the only thing I've ever needed to borrow is a book.)
David
From: Sha Xin Wei [mailto:shaxinwei@gmail.com]
Sent: Monday, February 08, 2010 4:19 AM
To: David Morris; memory-place@concordia.ca
Subject: Re: Googling for Googgles
Hi Everyone
When's the next mtg -- Friday Feb 12, 6 pm?
I would like to be present so we can decide on what to get, etc.
Cheers,
Xin Wei
On 2010-02-06, at 10:40 AM, David Morris wrote:
Dear MIPers,
We had a nice go-round with Mazi's goggles displacing us via videocam hijinx, but we're realizing there are limits on those myVu goggles. First, they're lo-res; second, people with eyes like mine, with heavy-duty glasses, can't
seem to get their image into focus.
So, I've been googling around a bit, and come up with these, which I leave to our tech people to look at further (we'd also been thinking 3D goggles would be better, to get independent inputs to each eye), as a start:
http://www.nvidia.com/object/3D_Vision_Main.html These look nice and cheap -- but it's a proprietary system for getting input into them.
http://www.3dvisor.com/ Probably very good for our application, would work with glasses, but expensive; at best we could afford one pair. But, from the FAQs, it looks like researchers like us are interested (e.g., one question is whether you can use them in MRI machines, another is whether you can use them with noise-cancelling headphones).
http://www.i-glassesstore.com/i-3d.html Midrange, but again I wonder about proprietary 3D inputs. (I haven't had a chance to read through these things thoroughly, but e.g. http://www.allvideoglasses.com/blog/2009/10/06/hr-920-specifications-1, a review there, says "3D Video Format: Interlaced 3D Video" for the i-Glasses HR 920. I'm guessing that would mean the two pictures are transmitted as one interlaced signal and then decoded in the glasses, which would mean we'd need to get interlaced output from Jitter, which might also mean, I guess, half the frame rate per image? Or a higher frame rate output? Do composite signals have a variable refresh rate? I don't know how they're structured.)
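If that guess is right, generating the signal would amount to something like this row-weaving step -- a numpy sketch rather than Jitter, and the frame size and the even/odd assignment are assumptions on my part:

# Sketch of what "interlaced 3D" presumably means: left-eye image on the
# even scanlines, right-eye image on the odd ones, so each eye gets half
# the vertical resolution (or half the effective frame rate).
import numpy as np

def weave(left, right):
    """Row-interleave two equal-size frames into one interlaced frame."""
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[0::2] = left[0::2]      # even lines carry the left-eye image
    out[1::2] = right[1::2]     # odd lines carry the right-eye image
    return out

left = np.zeros((480, 640), dtype=np.uint8)       # stand-in left-eye frame
right = np.full((480, 640), 255, dtype=np.uint8)  # stand-in right-eye frame
interlaced = weave(left, right)                   # what we'd send to the glasses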
This might be a good resource: http://www.allvideoglasses.com/
http://www.edimensional.com/product_info.php?cPath=21&products_id=28 -- can't
quite figure these out.
Also see this re: my idea to use a Wii to track head rotations and use the
motorized mount that the mirror is currently on to guide the cameras on a
tripod head. http://emol.org/3dentertainment/3dgaming/news/3dwiipc.html
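I have no idea yet what commands the motorized mount actually accepts, but the control loop would be something on the order of this sketch -- every name, range, and command format below is made up for illustration:

# Hypothetical head-tracking loop: take a yaw/pitch estimate from whatever
# tracker we settle on (Wii, IMU, ...) and turn it into pan/tilt commands
# for the motorized camera mount. All values and formats are placeholders.
import time

PAN_RANGE = (-90.0, 90.0)    # assumed mechanical limits of the mount, degrees
TILT_RANGE = (-30.0, 30.0)

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def to_mount_command(yaw_deg, pitch_deg):
    """Map head yaw/pitch to a clamped pan/tilt command string."""
    pan = clamp(yaw_deg, *PAN_RANGE)
    tilt = clamp(pitch_deg, *TILT_RANGE)
    return "PAN %.1f TILT %.1f" % (pan, tilt)

def read_head_orientation():
    """Placeholder for the tracker; would return (yaw, pitch) in degrees."""
    return 0.0, 0.0

while True:
    yaw, pitch = read_head_orientation()
    print(to_mount_command(yaw, pitch))  # would go to the mount instead of stdout
    time.sleep(0.05)                     # roughly 20 updates per second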
David
On 2010-02-08, at 11:27 AM, Sha Xin Wei wrote: