2010-02-14: Memory+Place: plan run through(s) for first week of March?

On 2010-02-14, at 10:44 AM, Sha Xin Wei wrote:

Agreed. I'll ask Michael Montanaro and Anne Donovan + Blue Riders and get back to MP. The FG schedule has already been planned.
Xin Wei

PS.

Here's the correct email for posting to our Memory+Place blog:  post@memoryplace.posterous.com

(Sorry I sent the wrong one.)


On 2010-02-14, at 8:55 AM, David Morris wrote:

Sorry to be late in getting to a reply here, but (responding quickly):

This line seems promising. Altogether, if we are doing moving-body stuff, I think it's going to be better.

From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of Sha Xin Wei
Sent: Thursday, February 11, 2010 2:36 PM
To: memory-place@concordia.ca; Morgan Sutherland; zohar Kfir
Cc: post@www.posterous.com
Subject: Memory+Place: plan run through(s) for first week of March?

Dear MP folks,

How's it going with the gear search? Zohar, Tim, Navid?
Let's aim to assemble something in the coming 3 weeks for a sense "transference" experiment the first week of March.
(Of course prelim tests would be great -- let us know :)

Here's an idea --  but I'd have to run it by everyone potentially impacted, first :

Maybe we can schedule some runs in the BlackBox in the basement of the EV, for a time when the Frankenstein's Ghosts group is not using the black box. (I'll ask my colleagues about that.) The BB is 50' x 50', so quite large, and has been used with a whole bunch of (non)dancers running around under our media in structured movement exercises. (See the Ouija documentation on the TML website, for example.) So we have some experience with such movement / walking experiments. This time it'll be only ourselves.

Since the named gear seems to be all body-based at the moment, I would say we can plan to get in and get out with no trace. But this depends on whether there's sufficient clear space on the floor of the BB; I think there is. Otherwise we could reserve the 10th floor projection room, which has no windows and is totally bare. But then we would definitely need people to physically set up, which means scheduling and committing to some blocks of time in the coming month or two.

I definitely advocate moving on parallel fronts :
            goggle search
            head-mounted earphones
            scenario design

However crude, we need to get some experience trying things out "live" & "in density" -- meaning even if the gear is borrowed and not perfect, it's worth running an entire "experiment" in sketch form, soup to nuts.

Whether we transfer visuals or sound is less important than running a full scenario, like the walking experiments.
Then we can iterate, refining both our choices of tech and the "protocols".

I'm sure it will be quite motivating and enlightening to actually do it ourselves :)

Note: We have, thanks to Elena, a WIKI to record project info that will be a resource for the eventual papers or proposals to come out of this seed project:

Memory+Place project blog: http://memoryplace.posterous.com
To post to Memory+Place blog: post@memoryplace.posterous.com

Cheers,
Xin Wei

PS.  We have 2 roots for TML blogs:    http://topologicalmedialab.com/blogs/  and 
            http://*.posterous.com/ 
                        memoryplace.posterous.com
                        topological.posterous.com

They should be unified in some way.  (!)

On 2010-02-08, at 8:56 AM, Timothy Sutton wrote:

Hi all,

I've just forwarded the links David & Zohar collected to a researcher
friend who just conducted a VR experiment in his research lab. As a
subject I tried the glasses they bought, which I suspect are probably
out of our range, but he may have some input. They had the particular
requirement of tracking movement on the same device to feed into a
first-person game sim, which I'm not sure would be necessary for
MP's purposes (though helpful). The glasses were a bit uncomfortable,
and the awkward ergonomics of movement took a bit out of the
experience, but the size of the frame and quality of the image were
close enough for jazz. From memory, they seemed like something along
the lines of the 3DVisor product.

From a quick look at the i-Glasses 920, the proprietary processor
at least seems to be able to deactivate the 3D feature.

I assume that anything above standard-def resolution is unnecessary
cost? Since a small mounted camera would not provide any better
resolution anyway, we would just have to deal with outputting back to
a composite signal, ideally getting as low latency as possible in the
input-output loop. DV latency one-way is bad enough, but the DFG I/O
(not DV) is probably about the best we've got. I forget whether you can
use two of them on one machine (two FireWire buses?) to get both the
input and the output, and whether both of ours are in working condition.
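Tim's latency concern can be made concrete. One common bench technique is to point the camera at a screen showing a running clock and compare the displayed time against the captured one; the sketch below is only the software-side version of that idea, timestamping each frame on its way through a capture-process-display loop. The `fake` stage callables are stand-ins invented for illustration, not any real DFG or DV API:

```python
import time

def measure_loop_latency(capture, process, display, n_frames=100):
    """Estimate the average in-to-out latency of a
    capture -> process -> display loop by timestamping each frame
    on entry and exit. Returns mean seconds per frame."""
    samples = []
    for _ in range(n_frames):
        t_in = time.monotonic()
        frame = capture()
        frame = process(frame)
        display(frame)
        samples.append(time.monotonic() - t_in)
    return sum(samples) / len(samples)

# Hypothetical stand-ins that simulate ~5 ms of work per stage;
# with real gear these would wrap the grabber input, Jitter patch,
# and video output respectively.
fake = lambda x=None: time.sleep(0.005) or x
latency = measure_loop_latency(fake, fake, fake, n_frames=10)
```

With three ~5 ms stages the measured mean comes out a little above 15 ms per frame; real DV or DFG paths would add their own buffering on top of the processing time.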


Tim

On Mon, Feb 8, 2010 at 10:26 AM, zohar <zohar@zzee.net> wrote:

The TAG group might have some, I will ask.

On Feb 8, 2010, at 9:41 AM, "David Morris" <davimorr@alcor.concordia.ca>
wrote:

We hadn’t set a next meeting. I don’t think the 12th will work for me. I
have meetings 9-6, and really should go out after 6 to take a speaker to
dinner, so, unless Xin Wei is physically going to be in town, I don’t think
I could fit in this meeting. The 19th I am away.

Can Zohar and others do research on the goggles? One other factor is
availability: actually being able to get them in Canada.

I also wonder if it might be the case that some other lab on campus has some
that we could borrow, if things work this way when you’re doing experiments,
etc. (So far the only thing I’ve ever needed to borrow is a book.)

David

From: Sha Xin Wei [mailto:shaxinwei@gmail.com]
Sent: Monday, February 08, 2010 4:19 AM
To: David Morris; memory-place@concordia.ca
Subject: Re: Googling for Googgles

Hi Everyone

Is the next meeting Friday, Feb 12, at 6 pm?

I would like to be present so we can decide on what to get, etc.

Cheers,

Xin Wei

On 2010-02-06, at 10:40 AM, David Morris wrote:

Dear MIPers,

We had a nice go-round with Mazi's goggles displacing us via videocam
hijinks, but we're realizing there are limits on those myVu goggles. First,
they're lo-res; second, people with eyes like mine, who wear heavy-duty
glasses, can't seem to get the image into focus.

So, I've been googling around a bit, and come up with these, which I leave
to our tech people to look at further (we'd also been thinking 3D goggles
would be better, to get independent inputs to each eye), as a start:

proprietary system for getting input into them.

http://www.3dvisor.com/ Probably very good for our application, would work
with glasses, but expensive, at best we could afford one pair. But, from the
faqs, it looks like researchers like us are interested (e.g., one q is can
you use them in MRI machines, another is can you use them with noise
cancelling headphones)

http://www.i-glassesstore.com/i-3d.html midrange, but again I wonder about
proprietary 3D inputs. (I haven't had a chance to read through these things
thoroughly, but, e.g., a review says "3D Video Format: Interlaced 3D Video"
for the i-Glasses HR 920. I'm guessing that means the two pictures are
transmitted as one interlaced signal and then decoded in the glasses, which
would mean we'd need to get interlaced output from Jitter. That might also
mean, I guess, half the frame rate per image? Or a higher frame rate output?
Do composite signals have a variable refresh rate? I don't know how they're
structured.)
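David's guess about "Interlaced 3D Video" can be sketched in code. The following is a minimal NumPy illustration assuming one plausible field layout (even scanlines carry the left eye, odd scanlines the right); the actual i-Glasses format may differ, so treat the pattern as an assumption, not the documented spec. It does make David's half-resolution point visible: each eye only ever occupies half the scanlines of the combined frame.

```python
import numpy as np

def interlace_stereo(left, right):
    """Pack two same-sized eye images into one field-interlaced frame:
    even rows from the left eye, odd rows from the right eye
    (a hypothetical layout, chosen for illustration)."""
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[0::2] = left[0::2]   # even scanlines: left eye
    out[1::2] = right[1::2]  # odd scanlines: right eye
    return out

# Toy grayscale frames at standard-def size: left all 10s, right all 20s.
h, w = 480, 640
left = np.full((h, w), 10, dtype=np.uint8)
right = np.full((h, w), 20, dtype=np.uint8)
frame = interlace_stereo(left, right)
```

In the toy frame, rows alternate between the two source values, so a decoder in the glasses could split the fields back apart, but each eye gets only 240 of the 480 lines.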

This might be a good resource: http://www.allvideoglasses.com/

I couldn't quite figure these out.

Also see this re. my idea to use a Wii to track head rotations and use the
motorized mount that the mirror is currently on to guide the cameras on a

David

-----Original Message-----
From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of zohar
Sent: Wednesday, February 03, 2010 6:56 PM
Subject: Reminder MEMORY + PLACE Friday Feb 5th 6 PM

Hi all,
just a reminder that we will meet on Friday Feb 5th @ 6 PM
to review the tech aspects, play with gear and brainstorm some more.

see you then !
Zohar