tag:memoryplace.posthaven.com,2013:/posts memory place identity 2018-01-15T17:15:36Z Xin Wei Sha tag:memoryplace.posthaven.com,2013:Post/1040258 2016-04-25T02:07:55Z 2018-01-15T17:15:36Z Goldsmiths : April 23: Rhythm as Pattern and Variation -- Political, Social and Artistic Inflections
For our Rhythm scrapbook:

TALK SLIDES (video): https://www.academia.edu/24710149/Rhythm_and_Textural_Temporality_slides_

http://www.gold.ac.uk/calendar/?id=9756


Rhythm as Pattern and Variation -- Political, Social and Artistic Inflections

April 23, 2016
Goldsmiths London
http://www.gold.ac.uk/calendar/?id=9756


Organizers: 
Paola Crespi and Eleni Ikoniadou

Participants included

Pascal Michon (KEYNOTE)
“Could Rhythm Become a New Scientific Paradigm for the Humanities?"
http://rhuthmos.eu/



RHYTHM and ART
Dee Reynolds
"Rhythmic Seascapes and the Art of Waves"
Paola Crespi
"'Time is Measurable and It's NOT Measurable': Polyrhythmicity in Rudolf Laban's Unpublished Notes and Drawings" 
Bruno Duarte
“Rhythm and Structure: Brecht's Rewriting of Hölderlin's 'Antigone'"


RHYTHM and THE SOCIAL
Ewan Jones
"How the Nineteenth Century Socialised Rhythm"
Mickey Vallee
"Notes Towards a Social Syncopation: Rhythm, History and the Matter of Black Lives"
John Habron
“Rhythm and the Asylum: Priscilla Barclay and the Development of Dalcroze's Eurhythmics as a Form of Music Therapy"


RHYTHM and MEDIA 
Simon Yuill 
and
Bev Skeggs
"Conflicted Rhythms of Value and Capital: Rhythmanalysis and Algorhythmic Analysis of Facebook" 
Sven Raeymaekers
“Silence as Structural Element in Hollywood Films"


RHYTHM and THE BODY 
Laura Potrovic
"Body-Flow: Co-Composing the Passage of Rhythmical Becoming(s)"
Mihaela Brebenel
"What Could Possibly Still Get Us Going: Rhythm and the Unresolved"
Eilon Morris 
“Rhythm and the Ecstatic Performer"



RHYTHM and NUMBER (Topology Research Unit Panel)
Peggy Reynolds
"Rhythms All the Way Down"
Julian Henriques
"Rhythmanalysis Weaponised"
Vesna Petresin
"Being Rhythmic"
Sha Xin Wei
“Rhythm and Textural Temporality: An Approach to Experience Without a Subject and Duration as an Effect"


RHYTHM and PHILOSOPHY 
Steve Tromans
"Rhythmicity, Improvisation and the Musical-Philosophical: Practice-as-Research in Jazz Performance"
Eliza Robertson
"Rhythm in Prose: Bergson's Durée and the Grammatical Verbal"
Yi Chen
“Rhythmanalysis: Using the Concept of Rhythm for Cultural Enquiry"


Sound Installation 
Annie Goh and Lendl Barcelos’ ‘DisqiETUDE'
St Hatcham Church G01
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/972791 2016-01-17T18:03:34Z 2016-01-17T18:03:35Z Synthesis: psychology, neuroscience, Helga Wild, Karl Pribram, Helgi-Jon Schweizer
Dr. Helga Wild’s earlier research in neuroscience and psychology was with:

Prof. Helgi-Jon Schweizer @ Innsbruck
Prolegomena zur Theoretischen Grundlegung der Psychologie: Kurzer einführender Text (Prolegomena to a Theoretical Foundation of Psychology: A Short Introductory Text)
Published: 2002
http://memoryplace.posthaven.com/helgi-jon-schweizer

Prof. Karl Pribram @ Stanford
Brain and Perception: Holonomy and Structure in Figural Processing
http://www.scholarpedia.org/article/Holonomic_brain_theory

Xin Wei
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/930639 2015-11-09T03:36:22Z 2015-11-09T03:36:22Z meaning, memory, G Longo on Zalamea

"Meaning derives, moreover, from the intentionality, even a pre-conscious one, that inheres in protensive gestures."

"A digital machine with a perfect memory cannot do mathematics, because it cannot constitute invariants and its associated transformation groups, because a perfect, non-protensive memory does not construct meaning."

"Only animal memory and its human meaning allow not only the construction of concepts and structures, but proof as well, as soon as the latter requires us to propose new concepts and structures, or the employment of ordering or invariance properties which go beyond the given formal system."

p. 18, Giuseppe Longo, "Synthetic Philosophy of Mathematics and Natural Sciences: Conceptual Analyses from a Grothendieckian Perspective. Reflections on 'Synthetic Philosophy of Contemporary Mathematics' by Fernando Zalamea."

Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/586498 2013-06-30T17:16:45Z 2013-10-08T17:26:52Z haptic-light sense organ --> place-memory experiment

I'm tremendously interested in seeing this little arc culminate in some insights that we can contribute in conversation with colleagues in the Consciousness and Cognition journal, or SPEP. It's low-hanging fruit as far as craft is concerned -- the craft is child's play for us -- but the research questions are deep.


Speaking of the research questions, it's crucial to understand the aim of the actual experiment. The light-haptic prosthetic organ was just the warm-up precursor to the actual experiment, which has to do with what Ed Casey called place-memory. Please read http://memoryplace.posthaven.com to build on the large amount of prior work and understanding from the Memory Place Identity group.

We want a projective ray, along a very narrow cone -- essentially a line oriented along the pointing gesture.
And as I said before, no proximity, only on/off with a threshold.
Yeah, you heard me right -- just on-off, no floating point values between 0.0 and 1.0 ;)
There are deep phenomenological and consequently methodological reasons for this that David Morris and the Memory Place Identity group seminar realized.

Then, the actual experiment is the place-memory experiment that I've described to Omar, Harry, Elena, Liza that we inherit from Patricia+Zohar's work with the Memory Place Identity group. 

Please read Ed Casey's chapters on body- and especially place-memory, as well as the Lenay-Steiner paper whose experiment we're trying to replicate and then extend to the experiment where we turn the prosthetic on or off depending on where the person is located.  (I attached the chapter + article to the previous entry.)

We need to keep in mind that this is technically trivial engineering, and it ought to remain trivial, at least until we get through the place-memory experiment and understand the timing and calibration issues. So I would simply retain Patricia and Zohar's electronics unless you can improve upon it in less than 4 days of calendar time. All other work should go toward the complete experimental apparatus:

- write code to turn the prosthetic sense organ on/off (needs new code, and wireless access to the on-body processor; if necessary, tether the subject with a cable -- that's OK if it makes it possible to skip acquiring a wifi+processor and simply hack it all in Max via a long cable)
- get the camera feed in the TML
- use it to turn the prosthetic off/on as a function of where the wearer is standing in the TML
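The gating described above is small enough to sketch in code. Here is a minimal Python sketch (the zone bounds and function names are hypothetical, not the TML's actual configuration; in the real apparatus this logic would live in Max, fed by the overhead-camera tracker):

```python
# Minimal sketch of the binary gating: the prosthetic sense organ is
# strictly on/off, switched by where the wearer stands on the floor.
# Zone bounds and names are illustrative assumptions only.

def in_active_zone(x, y, zone=((1.0, 3.0), (1.0, 3.0))):
    """True if the wearer's tracked floor position (metres) lies inside
    the rectangular zone ((xmin, xmax), (ymin, ymax))."""
    (xmin, xmax), (ymin, ymax) = zone
    return xmin <= x <= xmax and ymin <= y <= ymax

def prosthetic_gate(x, y):
    """1 = organ on, 0 = organ off.  In practice this bit would be sent
    to the on-body processor (over the tether cable, or wirelessly)."""
    return 1 if in_active_zone(x, y) else 0

# Feed (x, y) from the camera tracker each frame:
print(prosthetic_gate(2.0, 2.0))  # inside the zone -> 1
print(prosthetic_gate(0.5, 0.5))  # outside -> 0
```

The strictly binary output mirrors the "no floating points between 0.0 and 1.0" requirement; hysteresis at the zone boundary could be added later if flicker becomes a problem.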

I would place stock in Harry and Liza's judgment on experiment construction, especially for moving this along briskly, so as not to get bogged down in technicalities or artful polish at the expense of gaining phenomenological insight. Fail early, iterate rapidly. I'll add an extra request perhaps unique to the TML: please make it a 24x7 running environment, so that people who are not authors of the experiment can wander in and experience the environment live at any time without you there to run it. Perhaps it can be another preset state on Nav and Julian's faders.

Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/586496 2013-06-30T16:59:17Z 2016-07-08T10:23:12Z restarting the place-memory experiment

Who's interested, and has the technique, to rebuild the photocell + pressure = vibration prosthetic sense organ that Patricia and Zohar built some years ago in the Memory+Place experiments?

Omar and I are hoping to take it to the next level with Liza, and carry out the actual experiment that David and the group wanted to realize.   My hunch is that it would be straightforward and fun with contemporary tech + the expert knowledge that's available around the TML :)

We're hoping to write up the results of this experiment as soon as possible, in August.  If you're interested in doing a bit of pure research, RSVP Omar Faleh <omar@morscad.com>,  Liza Solomonova <liza.solomonova@gmail.com>, cc. me !
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342752 2012-10-03T13:16:11Z 2013-10-08T16:34:55Z Claire Petitmengin: Gestural and Transmodal Dimension of Lived Experience
Thanks to David Morris, from Shaun Gallagher:

Claire Petitmengin
CREA: Centre de Recherche en Epistémologie Appliquée (Ecole Polytechnique/CNRS)*

Towards the Source of Thoughts, The Gestural and Transmodal Dimension of Lived Experience
Journal of Consciousness Studies,  14,  No. 3,  2007,  pp. 54–82

Who Is I ? – Video, Talk @ Panel at Center for Creative Inquiry

* CREA has been the base also for Jean-Pierre Dupuy, and Jean Petitot.
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342756 2012-04-20T19:30:47Z 2013-10-08T16:34:55Z déjà vu
Déjà vu does not have to be visually perceived. Maybe kinesthetic methods would be more conducive to inducing such a sense. Doesn't déjà vu come with some vertigo?

Maybe it is wrong to say déjà vu is an experience; rather, it's a mode, a way of interpreting what's happening. Maybe in trying to induce such a mode, we'll discover ways to condition events with some other effect that won't fit the label "déjà vu" but which we'll find equally interesting. Just "zeroth order" effects of repetition could be an easy starting place: repeatedly showing representations of a situation.
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342771 2012-04-05T22:03:25Z 2013-10-08T16:34:55Z David Morris: Protocol for getting Phenomenological Descriptions out of participants in our experiments
I recently had a very productive conversation with Shaun Gallagher about protocols helpful for the sort of experimental phenomenology that we are doing.

In particular, he directed me to the work of Claire Petitmengin, http://claire.petitmengin.free.fr/topic/index.html, and especially to her article "Describing one's subjective experience in the second person: an interview method for the science of consciousness", Phenomenology and the Cognitive Sciences 5: 229-269, which you can download via the articles link on her site. I encourage you to read this if you are working on the phenomenological experiment issues.

(Tristana, can you send me again your paper on methodology—I can’t seem to find it. And I don’t remember if you had cited her in your summary.)

I think this could be a really powerful—although laborious—method for us. One insight Petitmengin has is that the interviewer isn't just asking the participant to report on what they already have available or are already ready to describe: the interviewer anchors and scaffolds a process that helps the participant explicate that experience in the first place. In this process it's important to keep the participant on track, sticking with description rather than presumption. I.e., even a well-trained phenomenologist can be diverted, and needs outside help to stay on track. So there's an interesting notion here that first-person reports might in fact be bettered by, or even require, an intersubjective relation. We could note that Husserl, say, internalizes that intersubjective relation, by checking his experience against that of his presumed reader, and by having his responsibility to his reader always guide focused investigation of experience. I.e., he's not doing wishy-washy spontaneous introspection, but a sort of 'self-interview' that is mega-laborious and endless.

Petitmengin’s procedure to my mind resembles, a bit, a therapy session, where instead of the therapist helping someone unpack a traumatic experience, they are helping them unpack an experience in general. Or, maybe a detective trying to lead an eyewitness through memory of an event by leading them back into it.

I think we could think of a protocol where we have various participants, variously expert or non-expert in movement/perception, go through an experience that we provide and record from the outside, then work them through the above procedure, then let them go back to the experience in relation to our recording, to coordinate their experience with their body movements, and then have a group of participants sit down together in a workshop to get multiple perspectives on this.

What I am thinking of is our obtaining “the experiential correlates of movement,” which is sort of the opposite of the “neural correlates of consciousness.” In the latter, we think the physiological stuff is the hard thing to find, and the consciousness is the easy and obvious thing. We do the opposite—seeing how people move from the outside is the ‘easy thing’, the hard thing is describing what it is that they are DOING.

Conceptually, the issue here is that MOVING IS NOT DISPLACEMENT OF OBJECTIVE BODIES. In terms of relative objective displacement, the situation is the same whether your body moves forward in relation to the walls around you, or whether the walls move backwards in relation to you (as in the sway-room experiment with the babies in the middle of a room that moves). But, in terms of EXPERIENCED MOVEMENT, the two situations are different, because in the one case you're the agent of displacement, in the other the patient of displacement. This is an argumentative opening for needing to study movement through our EXPERIENCE OF MOVING BODILY AGENCY. So we need to describe that.
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342779 2011-12-09T17:12:10Z 2013-10-08T16:34:56Z recent neuroscience re. vision - Austin Roorda: How the unstable eye sees a stable and moving world
[Thanks to Adrian-in-Berkeley.   This may add a bit of flesh to the Petitot-Connes-Sha;) thesis that objects are invariants of Lie groups for a suitably defined group of actions in the experienced world.   See also, Madeline Gins and Arakawa's Organism-That-Persons.]

How the unstable eye sees a stable and moving world

Austin Roorda
School of Optometry, UC Berkeley

Wednesday, December 14 at 12:00
560 Evans

How is it that the eye can have an exquisite sense of motion even while the
retinal image of the stable world during fixation is in constant motion?
Several hypotheses have arisen: The "efference-copy" hypothesis holds that
efferent signals derived from the opto-motor control circuitry are used to
exactly offset the image instability induced by eye-motion[1]. The
"data-driven" hypothesis holds that image stabilization is computed from the
content of the images, deriving compensatory information from the
displacement of image features over time[2].  Or, we might just suppress the
lowest common motion of any visual scene[3]. In any case, the physiology
underlying this phenomenon remains largely unknown. Recent experiments from
our lab using an adaptive-optics-based eye tracker have revealed that the
percept of motion bears a different relationship to actual eye-motion than
any extant hypothesis predicts. We found that stimulus motions that have
directions which are consistent with eye-motion, but largely independent of
magnitude of that motion, produce the most stable percepts. These new
observations not only challenge all existing theories but, more importantly,
define a simpler path toward a physiological solution. 

1. Helmholtz,H. Helmholtz's Treatise on Physiological Optics. Optical
Society of America, Rochester (1924). 
2. Poletti,M., Listorti,C. & Rucci,M. Stability of the visual world during
eye drift. J. Neurosci. 30, 11143-11150 (2010). 
3. Murakami,I. & Cavanagh,P. A jitter after-effect reveals motion-based
stabilization of vision. Nature 395, 798-801 (1998).

---------------------------------------
Bruno A. Olshausen
Helen Wills Neuroscience Institute & School of Optometry Director, Redwood
Center for Theoretical Neuroscience UC Berkeley 575A Evans hall, MC 3198
Berkeley, CA 94720-3198
(510) 642-7250 / 2-7206 (fax)
http://redwood.berkeley.edu/bruno
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342791 2011-11-18T22:36:46Z 2013-10-08T16:34:56Z Inscribing the body, exscribing space

Haven’t read it yet, but this article looks relevant to our group. http://philpapers.org/rec/HAGITB

David
--

Inscribing the body, exscribing space 

Ivar Hagendoorn 

© Springer Science+Business Media B.V. 2011

Hey David,

Thanks for posting this article (Hagendoorn on space and sensorymotor). Definitely up my alley. Had a chance to look through it and it brings up some quite interesting and relevant things.

A loose summary: 

H. uses 'exscription' (borrowed from J-L Nancy) to describe a kind of reciprocal verifying or exscribing of the body in space. He makes a distinction between 'allocentric' (a kind of 'other' space, abstract space, objects in relation to one another) and 'egocentric' (space created by movement and configuration of the body). The relation between the two is the reciprocal part, which constitutes full spatial awareness. From neuroscience he notes studies which indicate the existence of 'place cells' and 'grid cells' serving distinct spatial-awareness functions. Place cells fire in specific relation to location (so related to the egocentric, to memory/knowledge(?)). Grid cells fire regularly during a rat's encounter with an unknown space and function like a trail-marking, breadcrumb system, creating trajectories memorized as motor sequences. A side note is that these neurons actually seem to fire on the side of the brain where a desired object is located; they possibly have a topographical arrangement. H. notes that we can access an experience of this more basic construction of space by doing things like examining an object behind our back -- so taking place in a relatively unfamiliar part of the body zone and (importantly) out of sight. And, he extrapolates, this is similar to how we experience dance (either dancing or as observer). He's careful to stipulate that he means certain specific types of choreography conceived to reveal allocentric vs egocentric movement (but I think it's actually true as a more general point).

Makes me think of a few things:

First, neuroscientist Rodolfo Llinás's sea squirt, the little animal with a little brain, which begins life moving around to find a spot to anchor, at which point it promptly digests its brain and lives the rest of its life without one. Llinás's conjecture is that thought, or the function of neurons at its most basic, is related to movement and motor-negotiation of space; you could even say 'thought is movement', or the rehearsal of movement (well, which of these is it?).

H's point about how watching and doing dance are related is undeveloped by him, but super important. When I have tedious discussions with people about 'virtual space' created using linear perspective in electronic media, the point I make is that the virtual space is in our heads and it's not really virtual at all. Experience of movement or of architectural space is like real space. Our fascination with watching bodies dance, or play sports, or sitting at cafés watching the passers-by, or even watching animals, is that this is compelling on a very neuro-visceral level; watching unusual movement is doing in some sense -- we are rehearsing space, and in rehearsing it we are making/marking it. So thinking about dance, thinking about movement, thinking about space, thinking about architecture (rooms, gridded space) are all related.

The idea that doing something behind one's own back reveals a primal space-building (up to the point that one familiarizes that space) is similar to our first round of experiments, where we were reaching-for or sounding an unknown space or objects in a void. There are other similar spaces, kinds of unbounded rooms which echo this. I think of the inside of one's mouth as such a 'room'. We have very little rational or visual context for what this space is, yet it is deeply familiar. Put an object you have not seen in your mouth -- what are you experiencing? Try to imagine the boundaries of this space which is the mouth. You are engulfed by it (you are tiny inside this space) and surround it at the same time. That's pretty fascinating, experientially. I think that clarifying our idea of the nascent process of building primal spatio-temporal awareness could lead to several potential experiments/experiences/thought exercises which relate back to our room/space/memory exploration.

andrew
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342802 2011-08-01T15:51:00Z 2013-10-08T16:34:56Z Andrew F.: Thoughts on Memory Place (August 2011)

Here are my somewhat rambling impressions of where we are at, from my viewpoint... hope this is useful to getting the next round going.

 

Information and Other-Than-Information: Examining space before it gets to be information (Relevance to new-media, dance, architecture)

We live in a world of data, of encoded information. This is especially true of our visual world. Phenomenology, as we have been at it in these experiments, has something to say about experience of space, of being and space, prior to it being monetized as information that can be parsed in the realm of computation. This seems like a key area where the ‘experiments’ and experiences of Memory Place are a useful corrective to assumptions coming from science and technology (assumptions which, to be fair, also permeate art practice). Key question: is there a way of thinking about this information threshold (when experience/space becomes info) in a way that is useful to a better understanding of technologised culture and what we can do with it?

In relation to many new-media practices, the space of movement/performance is of prime interest as a hybrid between two encoded cultural spaces; firstly, of action grounded in the human body, with its proper perceptual world, and, secondly, of the ‘without-ground’ or ‘without body’ of new media and the virtual. Somewhat naively, electronic arts often use the theatre as a model for virtual/immersive space without examining the implications of that importation. One could argue that dance- and movement-based performance as well as architecture have become more relevant in contemporary visual culture precisely in critical relation to electronic screen-based media, virtual-reality and immersive interfaces. That is, we are trying to understand two different kinds (registers?, grasps?...) of space which are nested or interleaved one with the other. Virtual or immersive ‘reality’ relies on this confusion; that we believe (or are asked to believe) one is the other.

Returning to bodily movement as a form of encounter may be an important corrective to assumptions engendered by new technologies. For the human animal, watching another in movement is immersivity without technology. This makes movement- and space-based art practices (such as dance and architecture) the most relevant areas of artistic research today and a key point of overlap between traditional artistic disciplines and new technology in their ability to confront a culture of information with a culture anchored in the experience of space. How are movement and space ‘fixed’ by technology and new media? The nature of this ‘fixing’ should be a significant preoccupation for creative and critical practitioners in the face of new technologies.

Methods

As outlined in the grant summary, the methods of data organization, experimental set-up, interview and so on are all part of the same conundrum of distinguishing scientific/technological method from what it seeks to understand. Our recent experiments have shown that our observation of the participants 'working' in space is fascinating and useful (see the underlined sentence above). We have often said, "it was interesting watching so-and-so doing this or that", or that the particular 'style' of exploration someone has is exceptional. So incorporating rigorous observation into our method (and descriptions and write-ups) seems a good direction. There is a parallel here which might be worth following up in dance observation / dance movement therapy (DMT). All to say, if we are intrigued by observing movement we should do some work on bringing observation notes into the method, alongside alternate debrief methods such as multiple interviewees / three-way conversation, etc. In line with this is an idea in the grant summary that participants can be trained to some extent, to help them leave behind some presumptions and tune their own observation skills towards the simpler experiences we are interested in (this is what breathing and moving before the last round did -- could it be more comprehensive?). All in all, this means abandoning the naive subject and the invisible experimenter. This is complicated. Scientific experimental method is useful because it is effective, and vice versa. It 'moves forward', as everyone says these days. Our method is deficient in that regard. It's fuzzy in both the activity and analysis phases. The question becomes not how 'accurate' a method can be, but whether it is rigorous enough to be generative/creative of... something.

analogue /digital

David's suggestion in the last follow-up of a chair-based mechanical device was interesting on two counts. One, it gets our feet off the ground, thereby interfering with some intuitive ways of getting around and sounding out a space. Two, it is mechanical, therefore analogous to bodily mechanisms. Somehow I think we might adapt to it in a different way than to a device which uses information processing (something mechanical operates in the world of body physics we understand and believe; something computational can be programmed to lie, and we have to choose to believe it -- perhaps a subtle difference). There are a variety of simple 'analogue' experiments which could complement the more 'digital' ones to give us a sense of this.

Perspective and architectonic space

Is linear perspective learned from living in and amongst buildings? What prejudice does this layer onto our experience of space (linear perspective being the structuring basis of most virtual-visual environments)? But perspective, as a mathematical construction of points, is a representation of something that it itself is not. What other kinds/shapes of rooms are there?

Interdisciplinary Relevance

By definition a cross-disciplinary practice straddles multiple discourses and audiences, which brings about either a flattening or a sharpening of the work's critical positioning. Memory Place sits across philosophy, art/design, and computational environments in a superbly interesting way -- a way that puts stress on the assumptions coming from these disciplines, rather than using them purely as generators of a hybrid method for production/invention/innovation. That can be a redefinition of 'innovation' right there. This makes 'M-P' relevant to the elaboration of what 'inter-disciplinary' can be, in that it partly questions the disciplinary partitions themselves and asks how something creative, experimental and philosophically rigorous can work. Specifically, thinking about movement, memory and space puts M-P in a key position in relation to 'traditional' disciplines (dance, performance, architecture, computation, philosophy) which make claims to various parts of the shoreline of such an inquiry. Is the end product a work of art or a cognitive-science paper? Probably neither: more likely it is a method of working, observing, and generating understanding of movement, memory, and space that is relevant to multiple disciplines, but which helps those disciplines shed some prejudices.

Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342819 2011-07-03T14:21:39Z 2018-01-15T10:01:07Z on interdisciplinarity, and transdisciplinarity...
Speaking of interdisciplinarity ...

I was part of the symposium around this book, sponsored by the Rockefeller and the National Academies in the USA, when it came out in 2003. It was mostly written from the perspective of the engineering sciences, in particular information technology, computer science, and telecommunications. To their credit, the committee was trying to be open to art and design, guided by the best examples of multidisciplinary, interdisciplinary, and transdisciplinary collaborations between artists and engineers over the past century. Unfortunately their examples were dated. (i.e. they had not seen the TML ;)

It may be interesting to see how disciplines inter-percolate (an Alexander patterning transposed to the ecology of practices?)

We are far from transdisciplinarity as defined in chapter 4, but I expect that does not have to happen in order for significant work to be done. I'm not even sure that it ought to be an institutional goal, pace Simon Penny, who created the exemplary ACE program at UC Irvine -- the late ACE program from which Erik Conrad, one of the Memory Project's forebears, got his Masters before returning to the TML.

Attached is Chapter 4: The Influence of Art and Design on Computer Science Research and Development
from: Beyond Productivity (2003), NAS, William J. Mitchell,  Alan S. Inouye,  and Marjory S. Blumenthal,  Editors.

Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342834 2011-06-11T08:08:23Z 2013-10-08T16:34:56Z GyroOSC is good, IR is good
Hi David,

By the way, GyrOSC is a great little app -- built atop Adrian / John / CNMAT's OSC lib ;) No need for the Polhemus if we use the iPhone? And maybe the familiar form factor works in our favor?

I've downloaded it but don't have time to play with it just yet in Max -- I expect they have demo patches? Have you looked at the data coming out in Max/MSP? I bet it's clean, but I would like to check the effects of interference and range.

If GyrOSC works we can get attitude from the onboard sensors, so the only remaining issue is x-y position on the floor. I hope the demo patches include a simple little calibration to output orientation relative to the local room. ("Let's do the time warp again!")
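If the demo patches don't include such a calibration, it is small enough to sketch: capture the phone's yaw once while it points along a chosen room axis, then subtract that reference, wrapping into [-180, 180). A hypothetical Python sketch (the function and argument names are mine, not GyrOSC's):

```python
def room_relative_yaw(raw_yaw_deg, room_reference_deg):
    """Yaw relative to the room: subtract the reference heading captured
    while the phone pointed along a chosen room axis, wrapped to [-180, 180)."""
    return (raw_yaw_deg - room_reference_deg + 180.0) % 360.0 - 180.0

# e.g. with the reference captured at 10 degrees device yaw:
print(room_relative_yaw(30.0, 10.0))   # 20.0
print(room_relative_yaw(350.0, 10.0))  # -20.0 (wraps rather than 340)
```

The wrap matters because raw yaw jumps at the 0/360 boundary while the wearer turns smoothly.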

Tracking x-y location by an LED on the body or a handheld accoutrement is delicate -- that's the solution TG2001 used in Linz and Rotterdam, 2000-2002. We abandoned on-body LEDs after 2002 because they impose stiffening constraints on people moving around: people have to keep parts of their bodies or accoutrements presented to the camera. It's quite annoying when the system stops responding to you simply because line of sight is broken, or because of light interference. For our purposes this may be OK for now -- but we'll want to carefully fold the artifactual "stiffening" constraints into our design.

We (mostly) eliminate interference from visible light conditions by tracking in infra-red. CV has improved so much that we just track with cv.jit tools + overhead camera(s). Students and I have dreamed up various schemes for minimally encumbering ID-based tracking. But I have deliberately avoided tracking ID'ed people, expecting interesting ethico-epistemic implications. (For wearables, to be considered minimally encumbering, I use jewelry as the criterion. :)

Speaking of infra-red, it could be interesting to fold our prosthetic sense-organ "back" into the "ordinary" sensorium by using an IR bright source (assuming our photocells can pick it up).   Then we have more amplitude for the event design.

Cheers,
Xin Wei

On 2011-06-10, at 10:13 PM, Sha Xin Wei wrote:

Hi David,

True 6DOF was solved elegantly by Marek Alboszta's team (Espi) (quaternions are the way to go)  and supplied by Polhemus using a more elaborate set of sensors.  Each solution has trade-offs, for good engineering reasons. I wouldn't try to solve it myself.   These are not cheap gadgets for good reason.


On 2011-06-10, at 1:20 PM, David Morris <davimorr@alcor.concordia.ca> wrote:

Actually, I wonder if this might do what we need:

Glue rigid wood or fibreglass dowel to iPhone case.

Fix a ring of LEDs around the dowel shaft at two points, so an overhead camera can always pick up two points of light at known distances from the iPhone.

how would these distances be measured? 
not enough degrees of freedom to recover 6DOF, if I understand what you have in mind.
And at what time and space accuracy, and what frequency?  (Over the years many generations of TMLabbers and I have conceived different solutions, but I'm interested in fresh, feasible approaches.)

so this is a line of sight approach.

if we use a line of sight approach, then I'd simply use the Wii and hack up its IR receiver / sender pair, since it's so well engineered.

ditto Kinect for camera based tracking.

Adrian and I agree it makes sense to buy and hack this inexpensive off-the-shelf gear to get a feel for what can be done.  Good practice: "rapid prototyping, quick fail."   The question now is who will do the DIY electronics.

In that spirit, before that or meanwhile we can Wizard-of-Oz it, walk through some scenarios, and get a lot more juice out of the gloves by using them in many alternative scenarios, I think.

Looking forward to talking with you again, and catching up with people Wednesday next week!
Xin Wei

Put the iPhone in the case.

Run GyroOSC on the iPhone.

We now know the tilt of the iPhone relative to 3 axes—that gyroscope is pretty amazing.

From the overhead camera monitoring the LEDs we can get the position of the iPhone in the XY plane, and its rotation relative to the X and Y axes.

And then, I am thinking that by measuring the apparent distance between the two LEDs in the camera image, and combining this with the gyroscope tilt data, we could solve for the height of the iPhone from the floor.

Xin Wei, would the math work here? Does the gyroscope data plus the two LEDs in the image give us a unique 6DOF solution?
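A back-of-envelope check on the question above, as a sketch under pinhole-camera assumptions: the two LED rings are treated as being at roughly the same depth, and the camera's focal length in pixels is known from calibration. All names and numbers are illustrative, and the estimate degenerates as the dowel nears vertical (the LEDs coincide in the image):

```python
import math

def depth_from_leds(d_px, D_m, tilt_from_vertical_rad, f_px):
    """Pinhole estimate of the dowel's depth below an overhead camera.

    d_px: apparent separation of the two LED rings in the image (pixels)
    D_m:  known physical separation of the rings along the dowel (metres)
    tilt_from_vertical_rad: tilt from the phone's gyroscope
    f_px: camera focal length in pixels (hypothetical calibration value)
    """
    horiz = D_m * math.sin(tilt_from_vertical_rad)  # image-plane component
    if d_px <= 0:
        raise ValueError("LEDs coincide in image: depth unrecoverable")
    return f_px * horiz / d_px

# Illustrative numbers: rings 0.3 m apart, dowel tilted 60 deg from
# vertical, f = 800 px, rings 104 px apart in the image:
Z = depth_from_leds(d_px=104.0, D_m=0.3,
                    tilt_from_vertical_rad=math.radians(60), f_px=800.0)
print(round(Z, 2))  # ~2.0 m below the camera
```

Height above the floor is then camera height minus Z. So the math can work, but only away from the degenerate near-vertical pose, and only as well as the same-depth approximation holds.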

From: David Morris [mailto:davimorr@alcor.concordia.ca]
Sent: June-10-11 12:57 PM
To: 'Niomi Anna Cherney'; 'zohar'; 'p.a. duquette'; 'Sha Xin Wei'; 'Noah Brender'; 'Tristana Martin Rubio'; 'Andrew Forster'
Subject: Update: Further Research on 'Magic Wand'

I spent some further time today researching possibilities for the tech for the ‘magic wand’, first to see if we can get this via the iPhone. Results:

--it turns out that if you want to simulate lightsaber duels on the iPhone, you need to solve the 6 degrees of freedom problem: position and tilt in 3 dimensions. The app Lightsaber Duel from THQ apparently lets you have virtual duels over Bluetooth. I guess it’s doing relative position through Bluetooth? There is also iSamurai. I’ve downloaded these, and I am hoping someone else can bring an iPhone, so we can see how this works. What I don’t know is whether this only gives you audio feedback when you hit the other person’s virtual lightsaber, or also when you are pointing their way. (NB, the thing we are trying to do is like Luke in the first Star Wars movie tracking the practice drone while blindfolded through his lightsaber.)

--there are also a bunch of apps for doing readouts of the gyroscope sensor on the iPhone. One, GyroOSC, sends the data via OSC. If an overhead camera tracking a fiducial on the hand could give us 3D location, then we could use an iPhone or WiiMote to give tilt.

--I also came across this: http://www.free-track.net/english/. It’s a freeware package for Windows that uses LEDs mounted on someone’s head to track 6DOF data through a webcam or Wii camera. Maybe we could mount the LEDs on the end of a stick, and use an overhead camera?

]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342859 2011-06-09T11:51:19Z 2013-10-08T16:34:56Z Hexagram Research-Creation Seminar on Self-Powered Wireless Biomedical Devices - June 6
Hi Laura,

Thanks for taking the time to share such a delightfully opinionated report.   As Maturana and Varela said at the beginning of their Tree of Knowledge, everything said is said by someone.
It's characteristic of "design" not to ask framing questions at your depth.   It's interesting that Yong Lian used "simulations" rather than "experiment".   Had he used "experiment", qualified to characterize typical industrial drug research, then I might agree more.    Yong Lian's responses are very revealing: to say that TCM is "a closed system" expresses epistemic incommensurability.  To say that TCM has no theory reflects the inadequacy of western medical knowledge, which I always thought overwhelmingly un-theoretical compared with mathematics and physics.

Cheers,
Xin Wei

On 2011-06-07, at 12:10 AM, laura emelianoff wrote:

Hello lab and others,

I attended the talk on 'Wireless self-powered biomedical devices' today and wanted to share my notes. 
I tried to record but my Edirol ran out of battery power and was unable to get the whole talk. But I did capture at least a lengthy introduction outlining the speaker's achievements and academic positions held. 
I can share the file if anyone wants it.
Yong Lian is a researcher developing self-powered devices for monitoring biomedical data, such as heart activity and chemical levels. One such product was designed to report whether a patient had taken prescribed medication, in cases where the patient's memory was no longer reliable. It was an ingestible, powered by the acid in the stomach. 
Other biomedical interventions include pacemakers, cochlear implants, EEG sensors to detect seizures, and deep brain stimulation, used to treat Parkinsonian nervous disorders.
In general, his products (Wireless Biomedical Sensors, WBS) are intended to collect vital information and share it with medical staff, via a mobile phone for example. Dr. Lian says that 'wireless healthcare is the solution' for prevention-oriented care. Though he did not clearly address who should receive these prosthetics, we assume they are primarily intended for more extreme cases, for those who are not capable of autonomous self-care.  However, he seemed very casual about using them to replace person-to-person examinations, so perhaps we should assume that everyone should have a few ECG or EEG sensor pods stuck here and there.

Why should biomedical products be self-powered? Currently, a patient with a pacemaker needs open-chest surgery every 8-10 years to replace the battery. As we are in movement all the time, we are capable of generating electricity through electromechanical, thermal, and chemical means. Batteries should no longer be used.

His main points were:
1. 'There are not enough doctors, and they are overworked. The problem is that 30% of med students are female, and they have kids and quit the workforce.' Constant ambient monitoring of biosignals will reduce diagnostic costs and time.
'Only the technology can reduce the workload of doctors'
2. Wireless is lucrative! Just look at the chart of the value of the US dollar, at point in the last century when key technologies were introduced (TV, mobile phone, internet, etc.) 
3. Current monitoring systems for biodata are invasive and cumbersome: a heart-rate monitor requires several electrodes and a mess of wires, while the autonomous devices are much smaller. 
4. Streamlining signal processing can improve energy efficiency: 'event-based' sampling instead of a Nyquist-style periodic sampling rate can reduce power usage. Data can be transmitted in pulses instead of continuously, and fewer op-amps can be used in the circuitry.

My first question was, which power harvesting methods are found to be most useful?
Answer: vibration is often effective, but we are exploring uses of glucose as a fuel.

Second question- there are so many hands-on techniques of listening to vital signs and movements: practitioners of Traditional Chinese Medicine palpate tissues and listen to the pulse, as it reveals metabolic functions, many doctors learn to listen to their patients' respiration and heart activity using stethoscopes (auscultation) and cranio-sacral therapists feel the tides of spinal fluids with their own hands. How do you regard these existing methods?
Answer: (quote) 'Traditional Chinese Medicine has no theory. It's a closed system, simulations are hard to do'…. 

meaning that TCM doesn't provide 'data'; perhaps it provides more qualitative information. 
But….  health and wellness is not just about data, it is more complex. 

My third question which I didn't get to ask was, How should we remedy this problem of female students going off and having kids, then quitting the workforce? Is that really why they are dropping out of school? What about the obvious gender inequality in certain disciplines, such as we have here in this lecture room- 25 men and 5 women from the Engineering department?

But there are many more questions- why continue to develop unilaterally, where this monitoring technology feeds the isolation of patients? Why is that acceptable? How long will it take to address problems like implant rejection? 
If prevention is really the focus, why not emphasize healthcare long before measures such as stents and pacemakers become necessary?
The proposed technologies are useful and effective, in the specific applications where they are really necessary, but there seems to be a lack of perspective about what is important in healthcare, beyond just life support systems…. One woman I know who regularly has epileptic seizures is alerted four hours ahead of time, when her dog and cat begin to follow her around the house. She doesn't need an implanted EEG sensor.

comments anyone?
Laura E


From: shaxinwei@gmail.com
Subject: Hexagram Research-Creation Seminar on Self-Powered Wireless Biomedical Devices - June 6
Date: Fri, 20 May 2011 22:08:50 -0400
CC: memory-place@concordia.ca; artcrd@langate.gsu.edu
To: tml-active@concordia.ca

TMLabbers -- of possible interest to those who swear by bio-sensing, or would like to peek at the future of biopolitics -- docile mitochondria! :)  Will someone who can attend please send notes to tml-active?
Thanks!
Xin Wei

Begin forwarded message:

From: Momoko Allard <hexinfo@alcor.concordia.ca>
Date: May 20, 2011 10:56:08 AM EDT
To: Momoko Allard <hexinfo@alcor.concordia.ca>
Subject: Hexagram Research-Creation FW: Seminar on Self-Powered Wireless Biomedical Devices - June 6

-----Original Message-----
From: ENCS Communications <communications@encs.concordia.ca>
Subject: [All-ftfac-announce] Seminar on Self-Powered Wireless Biomedical Devices - June 6

Dear ENCS Members,

The following seminar may be of interest to you. This announcement is sent at the request of the Department of Electrical and Computer Engineering.

INVITED SPEAKER SEMINAR IN ELECTRICAL AND COMPUTER ENGINEERING

CO-SPONSORED BY:
The Department of Electrical and Computer Engineering IEEE Circuits and Systems Montreal Chapter IEEE Montreal Section

Monday, June 6, 2011
6:00 p.m.
Room EV002.184
(Refreshments will be served.)


“Towards Self-Powered Wireless Biomedical Devices”

DR.YONG LIAN
DEgr. (h.c.), FIEEE
Lutcher Brown Endowed Chaired Professor
The University of Texas
San Antonio, TX, USA

ABSTRACT
A Body Sensor Network (BSN) combined with wearable/ingestible/injectable/implantable biomedical devices is envisaged to create the next era of healthcare systems. Such systems allow continuous or intermittent monitoring of physiological signals and are critical for the advancement of both diagnosis and treatment. With the advances of nanotechnologies and integrated circuits, it is possible to build system-on-chip solutions for implantable or wearable wireless biomedical sensors. Such wireless biomedical sensors will benefit millions of patients needing constant monitoring of critical physiological signals anytime, anywhere, and help to improve quality of life. This talk will cover several topics related to wireless biomedical sensors, especially the development of self-powered wireless biomedical sensors and associated low-power techniques. A design example of a sub-mW wireless EEG sensor is discussed to illustrate the effectiveness of the low-power techniques.

BIOGRAPHY
Dr. Yong Lian received the Ph.D degree from the Department of Electrical Engineering of National University of Singapore in 1994. He worked in industry for 10 years and joined NUS in 1996. Currently he is a Provost's Chair Professor and Area Director of Integrated Circuits and Embedded Systems in the Department of Electrical and Computer Engineering. His research interests include biomedical circuits and systems and signal processing. Dr. Lian is the recipient of the 1996 IEEE CAS Society's Guillemin-Cauer Award for the best paper published in the IEEE Transactions on Circuits and Systems II, the 2008 Multimedia Communications Best Paper Award from the IEEE Communications Society for the paper published in the IEEE Transactions on Multimedia, and many other awards.
Dr. Lian is the Editor-in-Chief of the IEEE Transactions on Circuits and Systems II (TCAS-II), Steering Committee Member of the IEEE Transactions on Biomedical Circuits and Systems (TBioCAS), Chair of DSP Technical Committee of the IEEE Circuits and Systems (CAS) Society. He was the Vice President for Asia Pacific Region of the IEEE CAS Society from 2007 to 2008, Chair of the BioCAS Technical Committee of the IEEE CAS Society (2007-2009), the Distinguished Lecturer of the IEEE CAS Society (2004- 2005). Dr. Lian is a Fellow of IEEE.


For additional information, please contact:
Dr. Wei-Ping Zhu
514-848-2424 ext. 4132
weiping@ece.concordia.ca

]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342878 2011-05-31T15:06:43Z 2013-10-08T16:34:57Z random idea Re: "scientific" gesture / movement research ?
This is definitely an aside to the ongoing thread. But may be
applicable for next memory-place experiments (not as sophisticated
as UI device)... Just something I've been loosely imagining.....
Without yet thinking through the tech process or synthesis....

A couple of references first:

Depth and gesture mapping / tracking of a participant:
http://jmpelletier.com/freenect/

I was thinking that this environmental installation project
might offer some intriguing possibilities:
http://kinecthacks.net/spinny-glowy-foil-in-a-kinected-bunker/
In particular, notice the white spindly sculptural things (you'll see
them toward end of the video) hanging from the ceiling.

I was imagining, for example, what would happen if we built
the equivalent of the white 'glowy foil' sculptures - but made
them out of strips of IR LEDs. Then use a matched IR detector
apparatus for triggering vibratory feedback (e.g.:
http://www.radioshack.com/product/index.jsp?productId=2049723)
pretty much as our last device did. Only in this case, it would
be the overall temporal experience of detection that might add
up to the participant's recognition of distinct shapes. They might
undergo a slow process of telling one detection from another
as they explore the space... slowly developing a physical
relationship with the way they share the space with the
sculptures (known to them only as on/off vibrational feedback).

Pelletier's note (first link above) is that more than one Kinect
can be in use at once. It might be messy to work the bugs
out (interference-wise), but one Kinect program could be utilized
to trace participant movement in relation to the IR sculptures,
and another could be employed for retrieval of depth and gesture
information... if that's the sort of information we hope to
retrieve... I haven't thought out the means of integrating the
info at this point at all...

Also, I was thinking about the issue of participants bumping
into the sculptures... So... what if we used transparent cloths
as dividers, situated between participants and the
sculptures? The IR sculptures could be encircled with these
cloths, hung from ceiling to floor... light-weight and soft,
and distanced enough from the sculptures to serve as an
adequate indicator (to the participant) that they should move
in another direction. This instruction could be provided in
advance of the participant's blindfolded exploration (whenever
they feel the cloth against their skin, they should stop and
turn away from it). Would simplify the process, avoid collisions...

Could use battery power, rather than cabling, for the IR
sculptures... clearing out and freeing up space for participants
to move in... with the exception of the cloth veils, of course...

?

x patricia


----- Original Message -----

From: Sha Xin Wei

Sent: 05/31/11 06:41 AM

To: Michael Fortin

Subject: Re: "scientific" gesture / movement research ?

I suggested that someone open up a WiiMote and re-assemble the parts into a suitable form factor.  We still need a visible-light photocell though, and can't use line of sight, so that solution is also clunky...
 
We need a good versatile engineer to own this project and work with the MP group.
 
And strategically, in June I'd like to define a non-hobbyist grant proposal to NSF/NSERC/FQRNT, parallel to an MP grant, to do a gesture/movement tracking research project that meets different interests around the TML -- MP, Adrian (+Wessel), MM, Satinder.   I'll propose this to my EU colleague as well.
 
Xin Wei
 

On 2011-05-30, at 6:43 PM, Michael Fortin wrote:

This might be a jaded comment.... 
 
I'll call it an advanced WiiMote (WiiMote just tracks the x-y-rotation, they have some idea of angle and distance to the display which the WiiMote doesn't have).   (Morgan -- WiiMote has vibrotactile feedback)
 
Speaking of odd remotes, there's this unrelated interesting toy; http://www.thinkgeek.com/interests/dads/cf9b/
 
Cheers,
~Michael();


On Mon, May 30, 2011 at 03:55, Sha Xin Wei wrote:
Hi Adrian, and scientific researchers,
 
Raising the stakes and thinking ahead to more robust and precise instrumentation, here's the
 
NaviScribe 6-DOF 3D wand by Electronic Scripting Products, Inc. (ESPi) in Palo Alto
 
The exclusive patent describes 6 DOF: x, y, z plus Euler angles.  The company was founded by a physicist friend from Stanford, Marek Alboszta.  Not productized yet.  "Commercial" co-development would require O(100K) USD.  I've not discussed how to enter into an actual relation with this company, but we could perhaps work out a deal.  This would make sense in a real NSERC/NSF co-development grant.
 
Shall we think about this in context of a scientific gesture research proposal, along with high FPS cameras, and EONYX etc?  Let's discuss this in June.
 
Xin Wei
 
 
 
 

On 2011-05-20, at 9:32 PM, Marek Alboszta wrote:

Hello Xin Wei,
 
We can definitely do everything you ask (briefly - up to 100 Hz and better with all degrees of freedom (6DOF) reported in compact stream (right now not compressed), requires at most 120 MIPS to do everything (during periods of a lot of activity) - unit is small so can be in a ring or glasses or headgear or whatever you choose - we give you intervals so you can compute your derivatives, resolution in 3D space is considerably better than 1 cm (in plane it's down to 0.2 mm and better)).  I can't do wireless unless somebody gives me money to properly design a wireless beta unit (it is not a problem of technology but pure resources).  
...

Is your party ready to pay for this work ...?  If not then we should reschedule for when they are ready to commit resources for technology development (or if they/your side wants to do the work).  Anyway, we can talk about it if the allocation of resources is a given - let me know.
 
warm greetings,
<pastedGraphic.jpg>

_____________________
Marek Alboszta
 

On May 20, 2011, at 10:08 AM, Sha Xin Wei wrote:

Hi Marek,
 
For a memory & place experiment, we would like to give people a wand that they can hold that can report position, euler angles, and their time derivatives.  Ideally at better than 30 Hz for the entire 12-vector.
 
We need it wireless, range of say 10m suffices.
 
Spatial resolution is important, for tracking "pointing" at virtual objects that people infer by indirectly mapping position & angle to a vibration motor that will be embedded somewhere on their body.   I expect any pen-based input device has more than adequate time-space resolution.
 
We would also like to be able to have a "wand" small enough to be attached anywhere on the body in some not-too-obtrusive way.
 
We can write our own code to parse the data if you tell us the format coming in some standard protocol, serial or ethernet/port stream.
 
The person may be free to wander around the room and point in any direction whatsoever.
Does the wand need to see an IR array in "front", i.e. be constrained to a half-sphere, or can it be pointed in any direction provided a set of IR beacons...?
 
Cheers,
Xin Wei
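Since Marek's unit reports intervals so we can compute our own derivatives, the 12-vector requested above reduces to finite differences over successive 6DOF samples. A sketch under the assumption that poses arrive as position plus Euler angles; Euler wrap-around at +/-pi is ignored here:

```python
import numpy as np

def twelve_vector(t_prev, pose_prev, t, pose):
    """Given two successive 6DOF samples (x, y, z, roll, pitch, yaw) and
    their timestamps, return the 12-vector: pose + finite-difference
    time derivatives. Euler-angle wrap-around at +/-pi is ignored."""
    dt = t - t_prev
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    pose = np.asarray(pose, dtype=float)
    vel = (pose - np.asarray(pose_prev, dtype=float)) / dt
    return np.concatenate([pose, vel])

# Two hypothetical samples ~33 ms apart (about 30 Hz):
v12 = twelve_vector(
    0.000, [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    0.033, [0.01, 0.0, 1.0, 0.0, 0.0, 0.1],
)
print(v12.shape)  # (12,)
```

At 30 Hz the one-sample differences are noisy; in practice one would smooth over a short window, at the cost of a little latency.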
 

 


...............................................................................................
"Because the essence of technology is nothing technological, essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it. Such a realm is art. But certainly only if reflection upon art, for its part, does not shut its eyes to the constellation of truth, concerning which we are questioning." - Heidegger
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342900 2011-05-31T10:42:07Z 2013-10-08T16:34:57Z "scientific" gesture / movement research ?
I suggested that someone open up a WiiMote and re-assemble the parts into a suitable form factor.  We still need a visible-light photocell though, and can't use line of sight, so that solution is also clunky...

We need a good versatile engineer to own this project and work with the MP group.

And strategically, in June I'd like to define a non-hobbyist grant proposal to NSF/NSERC/FQRNT, parallel to an MP grant, to do a gesture/movement tracking research project that meets different interests around the TML -- MP, Adrian (+Wessel), MM, Satinder.   I'll propose this to my EU colleague as well.

Xin Wei


On 2011-05-30, at 6:43 PM, Michael Fortin wrote:

This might be a jaded comment.... 

I'll call it an advanced WiiMote (WiiMote just tracks the x-y-rotation, they have some idea of angle and distance to the display which the WiiMote doesn't have).   (Morgan -- WiiMote has vibrotactile feedback)

Speaking of odd remotes, there's this unrelated interesting toy; http://www.thinkgeek.com/interests/dads/cf9b/

Cheers,
~Michael();


On Mon, May 30, 2011 at 03:55, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian, and scientific researchers,

Raising the stakes and thinking ahead to more robust and precise instrumentation, here's the

NaviScribe 6-DOF 3D wand by Electronic Scripting Products, Inc. (ESPi) in Palo Alto

The exclusive patent describes 6 DOF: x, y, z plus Euler angles.  The company was founded by a physicist friend from Stanford, Marek Alboszta.  Not productized yet.  "Commercial" co-development would require O(100K) USD.  I've not discussed how to enter into an actual relation with this company, but we could perhaps work out a deal.  This would make sense in a real NSERC/NSF co-development grant.

Shall we think about this in context of a scientific gesture research proposal, along with high FPS cameras, and EONYX etc?  Let's discuss this in June.

Xin Wei


On 2011-05-20, at 9:32 PM, Marek Alboszta wrote:

Hello Xin Wei,

We can definitely do everything you ask (briefly - up to 100 Hz and better with all degrees of freedom (6DOF) reported in compact stream (right now not compressed), requires at most 120 MIPS to do everything (during periods of a lot of activity) - unit is small so can be in a ring or glasses or headgear or whatever you choose - we give you intervals so you can compute your derivatives, resolution in 3D space is considerably better than 1 cm (in plane it's down to 0.2 mm and better)).  I can't do wireless unless somebody gives me money to properly design a wireless beta unit (it is not a problem of technology but pure resources).  
...

Is your party ready to pay for this work ...?  If not then we should reschedule for when they are ready to commit resources for technology development (or if they/your side wants to do the work).  Anyway, we can talk about it if the allocation of resources is a given - let me know.

warm greetings,
<pastedGraphic.jpg>

_____________________
Marek Alboszta



On May 20, 2011, at 10:08 AM, Sha Xin Wei wrote:

Hi Marek,

For a memory & place experiment, we would like to give people a wand that they can hold that can report position, euler angles, and their time derivatives.  Ideally at better than 30 Hz for the entire 12-vector.

We need it wireless, range of say 10m suffices.

Spatial resolution is important, for tracking "pointing" at virtual objects that people infer by indirectly mapping position & angle to a vibration motor that will be embedded somewhere on their body.   I expect any pen-based input device has more than adequate time-space resolution.

We would also like to be able to have a "wand" small enough to be attached anywhere on the body in some not-too-obtrusive way.

We can write our own code to parse the data if you tell us the format coming in some standard protocol, serial or ethernet/port stream.

The person may be free to wander around the room and point in any direction whatsoever.
Does the wand need to see an IR array in "front", i.e. be constrained to a half-sphere, or can it be pointed in any direction provided a set of IR beacons...?

Cheers,
Xin Wei


]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342912 2011-05-31T04:19:31Z 2013-10-08T16:34:57Z Polhemus; Path dependence
Indeed Polhemus is the standard instrument.  From 2004, Polhemus gear seemed unacceptably clunky to be wearable and cost-effective by my "jewelry" standard, but Memory-Place could perhaps put its Verfremdungseffekt to good use.   And it's now sleeker.  If the MP group decides to really go after 6DOF in a future phase of this research, perhaps someone could source and borrow one?    Check TAG, Dr. Mudur, or CIISE.

Naviscribe seems to be an interesting case of the "good enough" solution.   Marek's patent is for reporting Euler angles, but the other 6DOF components composite the information, which is why his method is so compact, with very nice optics in a tiny lens.   The problem is path dependence.

Paul A. David, "Path dependence, its critics and the quest for 'historical economics'," All Souls College, Oxford & Stanford University, 2000.

As with other TML work less tightly coupled to the consumer commodity market (e.g. game controllers), we can try to go our own way and leverage our particular knowledge and friendship networks, subject to practical constraints.

Xin Wei


On 2011-05-30, at 7:52 PM, <adrian@adrianfreed.com> <adrian@adrianfreed.com> wrote:

As official grumpy old man on these things I should point out that a
high precision 6dof device is the holy grail
and  hard to do at any price. It is a question of making an inventory of
where the various solutions proposed fall down.
There are dozens of companies that have crashed and burned on this
already so we should be cautious as to where
we put our eggs.

The Naviscribe core technique was patented 4 years ago. What happened?
Why can't they implement it or raise the money
to implement it for a gaming controller?

For the particular needs of the Memory/Place work it may be easier to
borrow or rent the market leader for a short time:

http://www.polhemus.com/?page=Motion_PATRIOT%20Wireless


-------- Original Message --------
Subject: Re: "scientific" gesture / movement research ?
From: Morgan Sutherland <morgan@morgansutherland.net>
Date: Mon, May 30, 2011 10:36 am
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: Adrian Freed <adrian@adrianfreed.com>, memory-place@concordia.ca,
Satinder Gill <spg12@cam.ac.uk>


Scientifically speaking, I'd love to see somebody integrate vibrotactile
feedback into this. There is the feedback problem (vibration adds noise to the
signal, which adds noise to the vibration motor, ad infinitum) to figure out.

http://www.cim.mcgill.ca/~haptic/pub/HY-VH-RE-CAS-05.pdf ("A Tactile
Enhancement Instrument for Minimally Invasive Surgery")
http://www.cim.mcgill.ca/~haptic/pub/HY-VH-JASA-10.pdf (on better vibration
motors)

I remember I had a specific idea for this kind of device recently – I'll see
if I can remember...

As for the wireless part, it would be dead simple to use an XBee to beam the
data over to a USB data acquisition device (Teensy or whatever), just not
elegant (XBees are bulky in comparison to a pen). The question there is
whether we get analog or digital output from the pen itself. If it's
digital, then there could be synchronization problems unless Marek can
provide the spec for the communication protocol. If it's analog, then it's
dead simple - just cut the wire and put 2 XBees in between. Add 1 extra week
for unforeseeable headaches that always crop up when doing wireless.

I'm actually very excited about this – if this gets commercialized and keeps
its form factor, I can see myself using one regularly, hopefully by then in
conjunction with a fast RGB E-ink display. Viva post-television.

Morgan

On Mon, May 30, 2011 at 3:55 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Hi Adrian, and scientific researchers,

Raising the stakes and thinking ahead to more robust and precise
instrumentation, here's the

NaviScribe 6-DOF 3D wand by Electronic Scripting Products, Inc. (ESPi) in
Palo Alto


The exclusive patent describes 6 DOF: x, y, z plus Euler angles.  The
company was founded by a physicist friend from Stanford, Marek Alboszta.  Not
productized yet.  "Commercial" co-development would require O(100K) USD.
I've not discussed how to enter into an actual relation with this company, but
we could perhaps work out a deal.  This would make sense in a real NSERC/NSF
co-development grant.

Shall we think about this in context of a scientific gesture research
proposal, along with high FPS cameras, and EONYX etc?  Let's discuss this in
June.

Xin Wei





On 2011-05-20, at 9:32 PM, Marek Alboszta wrote:

Hello Xin Wei,

We can definitely do everything you ask (briefly - up to 100 Hz and better
with all degrees of freedom (6DOF) reported in compact stream (right now not
compressed), requires at most 120 MIPS to do everything (during periods of a
lot of activity) - unit is small so can be in a ring or glasses or headgear
or whatever you choose - we give you intervals so you can compute your
derivatives, resolution in 3D space is considerably better than 1 cm (in
plane it's down to 0.2 mm and better)).  I can't do wireless unless somebody
gives me money to properly design a wireless beta unit (it is not a problem
of technology but pure resources).

...

Is your party ready to pay for this work ...?  If not then we should
reschedule for when they are ready to commit resources for technology
development (or if they/your side wants to do the work).  Anyway, we can
talk about it if the allocation of resources is a given - let me know.

warm greetings,
<pastedGraphic.jpg>

_____________________
Marek Alboszta
marekalb@yahoo.com



On May 20, 2011, at 10:08 AM, Sha Xin Wei wrote:

Hi Marek,

For a memory & place experiment, we would like to give people a wand that
they can hold that can report position, euler angles, and their time
derivatives.  Ideally at better than 30 Hz for the entire 12-vector.

We need it wireless, range of say 10m suffices.

Spatial resolution is important, for tracking "pointing" at virtual objects
that people infer by indirectly mapping position & angle to a vibration
motor that will be embedded somewhere on their body.   I expect any
pen-based input device has more than adequate time-space resolution.

We would also like to be able to have a "wand" small enough to be attached
anywhere on the body in some not-too-obtrusive way.

We can write our own code to parse the data if you tell us the format
coming in some standard protocol, serial or ethernet/port stream.

The person may be free to wander around the room and point in any direction
whatsoever.
Does the wand need to see an IR array in "front", i.e. be constrained to a
half-sphere, or can it be pointed in any direction provided a set of IR
beacons...?

Cheers,
Xin Wei




]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342936 2011-05-30T18:38:10Z 2013-10-08T16:34:58Z "scientific" gesture / movement research ?

Hi Xin Wei,

Interesting device.

For the memory place investigation, though, it wouldn’t work, because it depends on line of sight between the wand and the LEDs around the TV. Also, at the moment, I wonder if it’s fast enough: the people in the demo are moving really slowly.

Best,

David

From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of Michael Fortin
Sent: May-30-11 1:44 PM
To: Sha Xin Wei
Cc: Adrian Freed; memory-place@concordia.ca; Satinder Gill; post@memoryplace.posterous.com
Subject: Re: "scientific" gesture / movement research ?

This might be a jaded comment.... 

I'll call it an advanced WiiMote (the WiiMote just tracks x-y rotation; this device also has some idea of angle and distance to the display, which the WiiMote doesn't).   (Morgan -- the WiiMote has vibrotactile feedback.)

Speaking of odd remotes, there's this unrelated interesting toy; http://www.thinkgeek.com/interests/dads/cf9b/

Cheers,
~Michael();

On Mon, May 30, 2011 at 03:55, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Hi Adrian, and scientific researchers,

Raising the stakes and thinking ahead to more robust and precise instrumentation, here's the

NaviScribe 6-DOF 3D wand by Electronic Scripting Products, Inc. (ESPi) in Palo Alto

The exclusive patent describes 6 DOF: x, y, z + Euler angles. The company was founded by a physicist friend from Stanford, Marek Alboszta. Not productized yet. "Commercial" co-development would require O(100K) USD. I've not discussed how to enter into an actual relation with this company, but we could perhaps work out a deal. This would make sense in a real NSERC/NSF co-development grant.

Shall we think about this in the context of a scientific gesture research proposal, along with high-FPS cameras, EONYX, etc.?  Let's discuss this in June.

Xin Wei

On 2011-05-20, at 9:32 PM, Marek Alboszta wrote:

Hello Xin Wei,

We can definitely do everything you ask (briefly - up to 100 Hz and better with all degrees of freedom (6DOF) reported in compact stream (right now not compressed), requires at most 120 MIPS to do everything (during periods of a lot of activity) - unit is small so can be in a ring or glasses or headgear or whatever you choose - we give you intervals so you can compute your derivatives, resolution in 3D space is considerably better than 1 cm (in plane it's down to 0.2 mm and better)).  I can't do wireless unless somebody gives me money to properly design a wireless beta unit (it is not a problem of technology but pure resources).  

...

Is your party ready to pay for this work ...?  If not then we should reschedule for when they are ready to commit resources for technology development (or if they/your side wants to do the work).  Anyway, we can talk about it if the allocation of resources is a given - let me know.

warm greetings,

<pastedGraphic.jpg>


_____________________

Marek Alboszta

On May 20, 2011, at 10:08 AM, Sha Xin Wei wrote:

Hi Marek,

For a memory & place experiment, we would like to give people a wand that they can hold that can report position, euler angles, and their time derivatives.  Ideally at better than 30 Hz for the entire 12-vector.

We need it wireless, range of say 10m suffices.

Spatial resolution is important, for tracking "pointing" at virtual objects that people infer by indirectly mapping position & angle to a vibration motor that will be embedded somewhere on their body.   I expect any pen-based input device has more than adequate time-space resolution.
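
As a sketch of the kind of position-and-angle-to-vibration mapping described here (every name and threshold below is hypothetical, not part of any actual apparatus): compute the angle between the wand's pointing direction and the direction to a virtual object, and drive the motor harder the closer the aim.

```python
import math

def forward_vector(yaw, pitch):
    """Unit vector the wand points along, from yaw/pitch in radians.
    Roll does not affect the pointing direction."""
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

def vibration_level(wand_pos, yaw, pitch, target_pos, cone=math.radians(20)):
    """Map pointing error to a 0..1 vibration intensity: full strength
    when aimed dead-on at the virtual object, zero outside the cone."""
    to_target = [t - w for w, t in zip(wand_pos, target_pos)]
    norm = math.sqrt(sum(c * c for c in to_target))
    if norm == 0.0:
        return 1.0  # standing on the target
    fwd = forward_vector(yaw, pitch)
    cos_err = sum(f * t / norm for f, t in zip(fwd, to_target))
    err = math.acos(max(-1.0, min(1.0, cos_err)))  # angular error, radians
    return max(0.0, 1.0 - err / cone)
```

The 20-degree cone is an arbitrary choice; a narrower cone makes "pointing" feel more precise but harder to find blind.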

We would also like the "wand" to be small enough to fit anywhere, attached to the body in some not too obtrusive way.

We can write our own code to parse the data if you tell us the format coming in some standard protocol, serial or ethernet/port stream.

The person may be free to wander around the room and point in any direction whatsoever.

Does the wand need to see an IR array in "front", i.e. be constrained to a half-sphere, or can it be pointed in any direction provided a set of IR beacons ...

Cheers,

Xin Wei

]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342956 2011-05-30T07:55:26Z 2013-10-08T16:34:58Z "scientific" gesture / movement research ? Hi Adrian, and scientific researchers,

Raising the stakes and thinking ahead to more robust and precise instrumentation, here's the

NaviScribe 6-DOF 3D wand by Electronic Scripting Products, Inc. (ESPi) in Palo Alto

The exclusive patent describes 6 DOF: x, y, z + Euler angles. The company was founded by a physicist friend from Stanford, Marek Alboszta. Not productized yet. "Commercial" co-development would require O(100K) USD. I've not discussed how to enter into an actual relation with this company, but we could perhaps work out a deal. This would make sense in a real NSERC/NSF co-development grant.

Shall we think about this in the context of a scientific gesture research proposal, along with high-FPS cameras, EONYX, etc.?  Let's discuss this in June.

Xin Wei


On 2011-05-20, at 9:32 PM, Marek Alboszta wrote:

Hello Xin Wei,

We can definitely do everything you ask (briefly - up to 100 Hz and better with all degrees of freedom (6DOF) reported in compact stream (right now not compressed), requires at most 120 MIPS to do everything (during periods of a lot of activity) - unit is small so can be in a ring or glasses or headgear or whatever you choose - we give you intervals so you can compute your derivatives, resolution in 3D space is considerably better than 1 cm (in plane it's down to 0.2 mm and better)).  I can't do wireless unless somebody gives me money to properly design a wireless beta unit (it is not a problem of technology but pure resources).  
...

Is your party ready to pay for this work ...?  If not then we should reschedule for when they are ready to commit resources for technology development (or if they/your side wants to do the work).  Anyway, we can talk about it if the allocation of resources is a given - let me know.

warm greetings,
<pastedGraphic.jpg>

_____________________
Marek Alboszta



On May 20, 2011, at 10:08 AM, Sha Xin Wei wrote:

Hi Marek,

For a memory & place experiment, we would like to give people a wand that they can hold that can report position, euler angles, and their time derivatives.  Ideally at better than 30 Hz for the entire 12-vector.

We need it wireless, range of say 10m suffices.

Spatial resolution is important, for tracking "pointing" at virtual objects that people infer by indirectly mapping position & angle to a vibration motor that will be embedded somewhere on their body.   I expect any pen-based input device has more than adequate time-space resolution.

We would also like the "wand" to be small enough to fit anywhere, attached to the body in some not too obtrusive way.

We can write our own code to parse the data if you tell us the format coming in some standard protocol, serial or ethernet/port stream.

The person may be free to wander around the room and point in any direction whatsoever.
Does the wand need to see an IR array in "front", i.e. be constrained to a half-sphere, or can it be pointed in any direction provided a set of IR beacons ...

Cheers,
Xin Wei
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342959 2011-05-27T17:19:44Z 2018-01-15T10:01:26Z Follow up Written Responses Hi everyone, 

Sorry this has taken me so long to send out. Here are all the responses I received except for Laura who I have not heard back from yet. 

Cheers, 

Tristana

]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342990 2011-05-21T14:31:28Z 2013-10-08T16:34:58Z 'Magic Wand' Followup
hi Erik, sure let me send a link. maybe we can talk by phone -- it'd be easier than my hen-pecking.... it'd be fun. FYI, everyone (else) -- Marek Alboszta's company in California holds the patents on true 6 DOF devices. He's trying to bring it to market. He replied... it's too expensive -- different league, but I'll talk w/ Marek when I go out to SF this week.

Xin Wei


On 2011-05-21, at 8:58 AM, Erik Conrad wrote:

> Hi Xin Wei + TML,
>
> Happy to help, but maybe I missed the part of the thread that explains
> the overall concept and desires? Could someone please fill me in?
>
> Best,
> Erik
>
> On Friday, May 20, 2011, Sha Xin Wei wrote:
>> I'm asking some experts: Erik Conrad, a TMLabber who built mappings to haptics (vibration motors on various parts of the body) from camera as well as GPS models of the built environment, as well as Marek Alboszta, whose company makes the only true 6DOF wand. (Asking for non-tethered, non-line-of-sight, but that may not be possible.)
>>
>> BTW, Deleuze's micro-perception lay behind my musing about the locus of sensing. It's not a satisfactory vocabulary, but an invitation to parse out the layers: sensing modality / sensing locus / interpretation / logic of response / feedback locus and type ... and of course not leave them split! A "locus" may not be spatial; it could be temporal: keeping a "stimulus" sharply delimited in time, or very clearly temporally textured, is a form of delimitation and localization. Another way is to have a crisply defined rhythm -- unbounded in time (or at least in an open set), and with no particular spatial locus.
>>
>> Warmly,
>> Cheers,
>> Xin Wei
>>
>> On 2011-05-20, at 1:09 PM, David Morris wrote:
>> Follow up on magic wand possibilities:
>> --Sandeep’s student has a ‘T-Stick’, http://www.idmil.org/projects/the_t-stick, but this is far too much and it doesn’t sense position.
>> --Lenay’s group is using an ‘enactive torch’, which looks like a handheld device that converts the distal measurements it makes into vibration stimuli, in a programmable way, with an Arduino chip. This isn’t quite what we want, because we are more interested in locatedness than distance, and want to be selective about the location/object that prompts a stimulus. But the design is interesting; see http://enactivetorch.wordpress.com/. We could use a similarly sized physical thing, if we could get position/acceleration sensors into it. NB the enactive torch project looks interesting.
>> --I was trying to find info on getting position in the room via Wii, but wasn’t sure we could, at least not in a robust way, because that seems to depend on IR-sensitive detectors, and so would get cut off if there is no line of sight…
>> David
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/343021 2011-05-20T20:27:04Z 2013-10-08T16:34:59Z 'Magic Wand' Followup
Yes... It could be that rhythm, textures, and/or seemingly fixed stimuli in the environment would provide participants with those 'anchors' I was speaking about. Of course, these can be generated through any number of experiential circumstances, events, or sensory references, providing a degree of experiential familiarity so that the distinct or contrasting qualities of the experience might become, all unto themselves, compelling. Especially if participants are still to endeavor the destabilizing experience of being blind. The search for more recognisable means of inquiring or tracing one's way through the environment may have to be satisfied in some manner, in order for whatever is unique or contrasting about their experience to be more notable. Or even if they are not going to move around through the space, it seems we then need otherwise to provide an intrigue that, when explored, supports some kind of experiential journeying (temporal evolutions). No?

xp

Just to drag around my 'sounding' (sea bottom) metaphor a little: there's a great recording on Alan Lomax's Deep River of Song: Mississippi Saints and Sinners of Joe Shores reciting a song/call for riverboats sounding depth in the Mississippi.... "no bottom" is the deepest call. Imagine if our sensing apparatus was a long string with a light or texture or sound sensor, or just an eraser, on the end (or if we were animals with one of these -- which we all are, in a way). Throw it out and drag it through space and time, building a place. Like Xin Wei's 'not necessarily spatial' locus -- the stimulus that is fixed defines itself.

af

 

----- Original Message -----

From: Sha Xin Wei

Sent: 05/20/11 01:31 PM

To: post@memoryplace.posterous.com, Erik Conrad, David Morris, Niomi Anna Cherney, Noah Brender, Tristana Martin Rubio, p.a. duquette, Andrew Forster, zoharKfir

Subject: Re: 'Magic Wand' Followup


I'm asking some experts: Erik Conrad, a TMLabber who built mappings to haptics (vibration motors on various parts of body) from camera as well as GPS models of built environment, as well as Marek Alboszta, whose company makes the only true 6DOF wand.   (Asking for non-tethered, non-line-of-sight,  but may not be possible.)
 
 
BTW, Deleuze's micro-perception lay behind my musing about the locus of sensing.  It's not a satisfactory vocabulary, but an invitation to parse out the layers: sensing modality / sensing locus / interpretation / logic of response / feedback locus and type ... and of course not leave them split!   A "locus" may not be spatial; it could be temporal: keeping a "stimulus" sharply delimited in time, or very clearly temporally textured, is a form of delimitation and localization.   Another way is to have a crisply defined rhythm -- unbounded in time (or at least in an open set), and with no particular spatial locus.
 
Warmly,
Cheers,
Xin Wei
 
On 2011-05-20, at 1:09 PM, David Morris wrote:

Follow up on magic wand possibilities:

 

 

--Sandeep’s student has a ‘T-Stick’, http://www.idmil.org/projects/the_t-stick, but this is far too much and it doesn’t sense position.

 

 

--Lenay’s group is using an ‘enactive torch’, which looks like a handheld device that converts the distal measurements it makes into vibration stimuli, in a programmable way, with an Arduino chip. This isn’t quite what we want, because we are more interested in locatedness than distance, and want to be selective about the location/object that prompts a stimulus. But the design is interesting; see http://enactivetorch.wordpress.com/. We could use a similarly sized physical thing, if we could get position/acceleration sensors into it. NB the enactive torch project looks interesting.

 

 

--I was trying to find info on getting position in room via Wii, but wasn’t sure we could, at least not in a robust way, because that seems to depend on IR sensitive detectors, and so would get cut off if there is no line of sight…

 

 

David

 

 


...............................................................................................
"Because the essence of technology is nothing technological, essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it. Such a realm is art. But certainly only if reflection upon art, for its part, does not shut its eyes to the constellation of truth, concerning which we are questioning." - Heidegger
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/343042 2011-05-20T17:31:59Z 2013-10-08T16:34:59Z 'Magic Wand' Followup
I'm asking some experts: Erik Conrad, a TMLabber who built mappings to haptics (vibration motors on various parts of body) from camera as well as GPS models of built environment, as well as Marek Alboszta, whose company makes the only true 6DOF wand.   (Asking for non-tethered, non-line-of-sight,  but may not be possible.)

BTW, Deleuze's micro-perception lay behind my musing about the locus of sensing.  It's not a satisfactory vocabulary, but an invitation to parse out the layers: sensing modality / sensing locus / interpretation / logic of response / feedback locus and type ... and of course not leave them split!   A "locus" may not be spatial; it could be temporal: keeping a "stimulus" sharply delimited in time, or very clearly temporally textured, is a form of delimitation and localization.   Another way is to have a crisply defined rhythm -- unbounded in time (or at least in an open set), and with no particular spatial locus.

Warmly,
Cheers,
Xin Wei

On 2011-05-20, at 1:09 PM, David Morris wrote:

Follow up on magic wand possibilities:

--Sandeep’s student has a ‘T-Stick’,

http://www.idmil.org/projects/the_t-stick, but this is far too much and it doesn’t sense position.

--Lenay’s group is using an ‘enactive torch’, which looks like a handheld device that converts the distal measurements it makes into vibration stimuli, in a programmable way, with an Arduino chip. This isn’t quite what we want, because we are more interested in locatedness than distance, and want to be selective about the location/object that prompts a stimulus. But the design is interesting; see http://enactivetorch.wordpress.com/. We could use a similarly sized physical thing, if we could get position/acceleration sensors into it. NB the enactive torch project looks interesting.

--I was trying to find info on getting position in room via Wii, but wasn’t sure we could, at least not in a robust way, because that seems to depend on IR sensitive detectors, and so would get cut off if there is no line of sight…

David
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/343076 2011-05-15T23:23:23Z 2013-10-08T16:34:59Z The experiments Edvata   
Didier Part 1   
Didier Part 2   
Jen   
Jeremie   
Laura   
Jennifer   
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/343096 2011-05-15T20:31:36Z 2013-10-08T16:35:00Z Debriefing videos
Here they are, the rest to be uploaded soon-
Debriefing_Didier_Edvta  
Debriefing_Jeniffer_Laura  
Debriefing_Jeremie_Jen  
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342728 2011-05-12T14:33:05Z 2018-01-15T10:00:56Z subjective experience vs. psychology
Yes, Andrew, the form regards a recording-rights issue, with permission to disseminate anonymized but still individualized accounts, as David pointed out.

Pertinent to David's observation about the conceit of the interchangeability of subjects, here's Al Bregler's keynote to CIRMMT / McGill in 2008.


Xin Wei


On 2011-05-12, at 9:35 AM, Andrew Forster wrote:

I can type something up for today's use to make sure we have something:

-short description (from protocol)
-participants name and contact info (for follow-up contact)
-participants occupation/specialty
-permission to be recorded by video/audio
-assurance that this recorded material or participants name won't be disseminated in public but is for research purposes
-further permission will be requested for any public dissemination of the material, if we want to do that

(this is really a recording rights thing, if I understand,  not an 'experimental subjects' thing--as we are framing this as participating in an 'environmental experience'..right?)

andrew

On 2011-05-12, at 9:10 AM, David Morris wrote:

I haven’t worked on this. Xin Wei, I’m wondering if your past work might give us some templates that could be quickly modified. I’ve never seen the language on such release forms.

From: Niomi Anna Cherney [mailto:niomi.anna@gmail.com] 
Sent: May-11-11 1:22 PM
To: Noah Moss Brender; Andrew Forster; David Morris; Sha Xin Wei; zohar; Tristana Martin Rubio
Subject: IMPORTANT

Has anyone made the waiver form for the participants to sign?

On Tue, May 10, 2011 at 11:39 AM, Niomi Anna Cherney <niomi.anna@gmail.com> wrote:
Hello Guides, 

Noah - is it ok if I send the participants your cell phone number in case they are running late or something? I've instructed them that one or both of you will be meeting them in the JM lobby and taking them upstairs. 

Let me know if this is ok.

-Niomi.

On Tue, May 10, 2011 at 11:37 AM, Niomi Anna Cherney <niomi.anna@gmail.com> wrote:
Holy jeeze that is amazing! We should buy Michael a present or something. 

Ok, I vote we set up a semi-permanent warm-up debrief space in the snack studio. I will also be responsible for bringing the cookies and tea. Have just sent the participants an email so I think we're all set to go. 

On Tue, May 10, 2011 at 10:15 AM, p.a.duquette <impetus@graffiti.net> wrote:
We are lucky wee experimenters, we are. Michael has been able to confirm a third studio for us. So we now have access to: MB 7.265, MB 7.251, and MB 7.255.

-----Original Message-----
From: Andrew Forster <af@reluctant.ca>
To: Niomi Anna Cherney <niomi.anna@gmail.com>
Cc: zohar <zohar@zzee.net>; p.a.duquette <impetus@graffiti.net>; davimorr@alcor.concordia.ca; tristana.martin.rubio@gmail.com; shaxinwei@gmail.com; noahmb@gmail.com
Sent: Mon, May 9, 2011 9:59 pm
Subject: Re: Untitled document (af@reluctant.ca)

I will bring lamp no have bulb... perfect.

Re: the warm up room... is there a curtain or separation possible in one of the studios, then we could use that space...

On 2011-05-09, at 9:36 PM, Niomi Anna Cherney wrote:


That's why I suggested we meet at 3:30 - so we could all gather together, talk and then split up and gather the gear. 

I can bring light bulbs but I don't have any lamps.

Niomi.

On Mon, May 9, 2011 at 9:32 PM, zohar <zohar@zzee.net> wrote:
I can pick up the equipment at 4ish and bring it to TML, pas problem. since Hexagram/CDA depots close at 5pm, 
4:40 might be stretching it, as gear is divided between 5th and 11th floors.

light bulbs and lamps, how many? can people bring such from home? I think there is one or two office lamps at TML.
also, what watts do we need, should be consistent?

/z.

On May 9, 2011, at 9:14 PM, p.a.duquette wrote:


I recall the desk lamps w/bare bulbs being the preference also.

Meeting earlier Thursday sounds wise to me also. Do note though that we don't have the studios until 5pm, so if we do pick up the gear @4pm, we'll only be bringing it over to TML. 

Zohar can we pick up gear @4:40pm instead, or is this a 'summer hrs' thing?

The debriefing could take place in one of the already-booked studios (same one the snacks are permitted in). Don't know how easy or possible it will be to get a third studio at this point... I can try... Would the hallway be an OK location for the warm-ups, unto themselves, though? 

xp

-----Original Message-----
From: Niomi Anna Cherney <niomi.anna@gmail.com>
To: zohar <zohar@zzee.net>
Cc: davimorr@alcor.concordia.ca; Andrew Forster <af@reluctant.ca>; Tristana Martin Rubio <tristana.martin.rubio@gmail.com>; Xin Wei Sha <shaxinwei@gmail.com>; Noah Moss Brender <noahmb@gmail.com>; p.a. duquette <impetus@graffiti.net>
Sent: Mon, May 9, 2011 8:39 pm
Subject: Re: Untitled document (af@reluctant.ca)

Zohar - I think we decided on the adjustable desk lamps with the bare bulbs.... am I wrong about this?

Other things as per David's numbering system (with an additional 5 & 6) :

1) I will be sending out the email to participants as soon as Patricia forwards me the security clearance attachment. I was thinking that perhaps it might be better to simply meet and gather participants in a central location in the lobby and then head up to the studios altogether. We thus avoid the security thing also. Perhaps Noah and Andrew could take this on as well?

2) I think we should meet at 3:30 to sit down as a group and just go over how the set up and running of the evening will proceed. At this time we can discuss any last minute logistical problems/ address the remaining questions about the lighting moves and so on. We can then get the equipment at 4pm and begin taking it over to the studios. Whoever is available at this time should head to the TML I think. I also suggest that once we have the set up roughly in place in the studios, we try a few dry runs right away so that we can tweak the edges of the transitions during the protocol while fine tuning the tech set up. We should have enough people present to have this happen. 

3) David, of course you will be involved in the debriefing. That must have been an oversight on my part. Sorry. 

4) Word. 

5) The hallway absolutely WILL NOT DO as a warm-up/ debrief zone. Do we have any other options? I believe there's a third studio on the same floor. Is there any way to get the additional studio?

6) I will add in additional stuff to the protocol and have multiple copies on hand for Thurs. Please add in any last minute stuff you can think of. I can also be in charge of keeping us on task/ schedule. 

-Niomi.

On Mon, May 9, 2011 at 7:58 PM, zohar <zohar@zzee.net> wrote:

Some thoughts regarding the lights - did we like the LED ones Andrew got for last time? Or better to use more of an omni-tungsten light?
As they gave very different results, we might want to be consistent.

Andrew, did you buy them in a dollar store?

Not a bad idea to meet tad before, I booked the equipment from 4pm, so maybe we can meet all at the TML and head over to the studios together?

/z.
On May 9, 2011, at 3:53 PM, David Morris wrote:

Hi Niomi,
 
Thanks for all the work on this!!!
 
Catching up:
 
I couldn’t get onto the online doc, but I had suggestions for revisions to the initial script, to build on what you’ve set up,  break the points down a bit, and use what I think is a bit more neutral vocabulary. (E.g., I think we should avoid the language of navigation, and just talk about moving around and finding things, because navigation might put them into a ‘map mindset’; also talk about ‘something’ vs. ‘an object’ just to leave open that they might not feel the thing as an object (might feel it as a barrier?) or may feel more than one thing). I paste my suggested reworking below; I hope you’ll find them an extension of your initial thoughtful framing and work.
 
1)      Have the participants been contacted?
2)      I see there are a number of questions we still need to answer, e.g., about how to move the lights around, and who is bringing what. Maybe we need to meet a bit earlier than Patricia’s suggested time?
3)      Also, I wanted to be part of the debriefing process.
4)      In general, I think we have to be careful with the debriefing, balancing letting them speak spontaneously and drawing them out, while also attending to not making it too confusing or having too many voices.
 
David
 
 
Welcome. Thanks for joining us.
 
We’re now going to do some exercises in body movement and bodily experiential awareness together, to help warm you up for experiencing a special environment that we have prepared for you. But before doing the exercises we wanted to tell you a little about this special environment.
 
It’s different from the ones you might be used to. It’s something like an art installation, but one experienced through a new way of sensing that we will provide to you. We’ll provide you with this new way of sensing by putting a sleeve-type apparatus on your finger, and also a band holding a further lightweight apparatus on your forearm. You should let us know if you find either of these things uncomfortable.
 
Once you are wearing this apparatus you will become sensitive to your environment in a new way. We’ll test this out together before you go into the environment. In the environment, there’s something you can find through this new way of sensing, and we’d like you to move around to find and interact with it.
 
The environment is in a different place than the one we’re now in. You’ll be with either Noah or Andrew the whole time. They’re going to guide you into this environment and then step back a little so that you can explore it.
 
If at any time during this process you feel uncomfortable you should let them know right away. You can speak to them the whole time, even if you are feeling completely comfortable. We’re here to help you move your way around this new environment, so feel free to say what comes to mind as you’re moving around. You can move around as much as you’d like but it’s probably a good idea not to move too fast so that your guide can keep up with you and spot you.
After you have completed your time in the environment, you’ll have an opportunity to discuss your experience with the other participant and with us.
 
During the time that you are in the environment, we’ll be audiovisually recording your experience. We would like to watch these recordings later to better observe your experience. We might also want to use some of this information for future trials and to help us build a more precise environment. If that’s ok, we would ask you to sign a form.
 
 
 
 
 
From: Andrew Forster [mailto:af@reluctant.ca]
Sent: May-07-11 1:12 PM
To: niomi.anna@gmail.com
Cc: tristana.martin.rubio@gmail.com; shaxinwei@gmail.com; noahmb@gmail.com; impetus@graffiti.net; zohar@zzee.net; davimorr@alcor.concordia.ca
Subject: Re: Untitled document (af@reluctant.ca)
 
. This is real
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342733 2011-05-12T13:12:08Z 2013-10-08T16:34:55Z Blind Folds

I have two body shop slumber masks that can be used as blind folds that I can bring for the experiment.

]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342741 2011-05-12T02:51:18Z 2013-10-08T16:34:55Z The Apparatus
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342750 2011-05-12T01:04:18Z 2013-10-08T16:34:55Z raw schematic (may 12)
]]>
Xin Wei Sha
tag:memoryplace.posthaven.com,2013:Post/342759 2011-05-08T12:27:16Z 2013-10-08T16:34:55Z notes / queries : memory-place Thurs trial
Hi All,

1. Today Xin Wei, Zohar, and I spoke to a graduate student (psychology / intermedia / performance) who is interested in sitting in on our upcoming trial (as an observer). Zoe may also be interested in sitting in on any forthcoming conversations. It is possible too that, if one of our volunteer participants doesn't make it Thursday, she could stand in as a backup. Barring that, there may be a technical role we can assign her... ? I've still to sit down with the documents Niomi sent (thanks!), but I figure we can wait and see how things pair up on that document before deciding what (if any) role Zoe could play?

2. Michael M and I have been cooperating to firm up details of our access to the studios. We have passed along a list of all collaborators and participants names to security personnel. Let me know if you notice anyone missing from this list (other than Zoe)?:
Dr. Sha Xin Wei, Prof. David Morris, Andrew Forster, Tristana Martin Rubio, Noah Brender, Niomi Anna Cherney, Zohar Kfir, Patricia Anne Duquette. Didier Chelin, Edvta Niemviska, Jeremie LeClerc, Jen Gibson, Laura Boyd-Clowes, Jennifer Spiegel.

3. Also it should be noted that pedestrian shoes are not permitted in the dance studios (everyone will need to remove these outside the door). Michael has noted that the hallway between the studios may not be suitable for the interviewing / discussion process (students often gather there). He has also noted that food and drink would be permitted only in one particular studio (MB 7.265). My suggestion then would be that the snacks and the interviews could both take place in this same studio. How much would that mess up the existing schedule?

Though this may also mean that we need less a/v recording equipment, Zohar and I were thinking we'd keep the booking and bring the gear 'just in case'... We may actually find it all comes in handy, or find the hallway empty and quiet...

4. We will need to let the participants know where to go and at what time (I don't think it's detailed in David's original call out)... Should we simply provide them with the studio numbers and directions (rather than meeting them in the foyer, for eg)? Should we schedule each participant's arrival in correlation with their scheduled time-block (as opposed to having everyone arrive at the same time, for eg)?

x patricia]]>
Xin Wei Sha