p 18, Synthetic Philosophy of Mathematics and Natural Sciences, Conceptual analyses from a Grothendieckian Perspective. Reflections on “Synthetic Philosophy of Contemporary Mathematics” by FERNANDO ZALAMEA, by Giuseppe Longo
I'm tremendously interested in seeing this little arc finish in some insights that we can contribute in conversation with colleagues in the Consciousness and Cognition journal, or SPEP. It's low-hanging fruit as far as craft is concerned, because the craft is child's play for us, but the research questions are deep.
Who's interested and has the technique to rebuild the photocell + pressure = vibration prosthetic sense organ that Patricia and Zohar built some years ago in the Memory+Place experiments?
Omar and I are hoping to take it to the next level with Liza, and carry out the actual experiment that David and the group wanted to realize. My hunch is that it would be straightforward and fun with contemporary tech + the expert knowledge that's available around the TML. :) We're hoping to write up the results of this experiment as soon as possible, in August. If you're interested in doing a bit of pure research, RSVP Omar Faleh <omar@morscad.com>, Liza Solomonova <liza.solomonova@gmail.com>, cc me!
Haven’t read it yet, but this article looks relevant to our group. http://philpapers.org/rec/HAGITB
David
Hey David,
Thanks for posting this article (Hagendoorn on space and the sensorimotor). Definitely up my alley. Had a chance to look through it and it brings up some quite interesting and relevant things.
A loose summary:
H. uses ‘exscription’ (borrowed from J-L Nancy) to describe a kind of reciprocal verifying or exscribing of the body in space. He makes a distinction between ‘allocentric’ (a kind of ‘other’ space, abstract space, objects in relation to one another) and ‘egocentric’ (space created by movement and configuration of the body). The relation between the two is the reciprocal part, which constitutes full spatial awareness. From neuroscience he notes studies which indicate the existence of ‘place cells’ and ‘grid cells’ serving distinct spatial awareness functions. Place cells fire in specific relation to location (so related to the egocentric, to memory/knowledge(?)). Grid cells fire regularly during a rat’s encounter with an unknown space and function like a trail-marking, breadcrumb system, creating trajectories memorized as motor sequences. A side note is that these neurons actually seem to fire on the side of the brain where a desired object is located; they possibly have a topographical arrangement. H. notes that we can access an experience of this more basic construction of space by doing things like examining an object behind our back—so taking place in a relatively unfamiliar part of the body zone and (importantly) out of sight. And, he extrapolates, this is similar to how we experience dance (either dancing or as observer). He's careful to stipulate that he means certain specific types of choreography conceived to reveal allocentric vs egocentric movement (but I think it's actually true as a more general point).
Makes me think of a few things:
First, neuroscientist Rodolfo Llinás's sea squirt, the little animal with a little brain, which begins life moving around to find a spot to anchor, at which point it promptly digests its brain and lives the rest of its life without one. Llinás's conjecture is that thought, or the function of neurons at its most basic, is related to movement and motor-negotiation of space; you could even say ‘thought is movement’, or the rehearsal of movement (well, which of these is it?).
H’s point about how watching and doing dance are related is undeveloped by him, but super important. When I have tedious discussions with people about ‘virtual space’ created using linear perspective in electronic media, the point I make is that the virtual space is in our heads and it's not really virtual at all. Experience of movement or of architectural space is like real space. Our fascination with watching bodies dance, or play sports, or sitting at cafés watching the passers-by, or even watching animals, is that this is compelling on a very neuro-visceral level: watching unusual movement is doing in some sense—we are rehearsing space, and in rehearsing it we are making/marking it. So thinking about dance, thinking about movement, thinking about space, thinking about architecture (rooms, gridded space) are all related.
The idea that doing something behind one’s own back reveals a primal space-building (up to the point that one familiarizes that space) is similar to our first round of experiments, where we were reaching-for or sounding an unknown space or objects in a void. There are other similar spaces, kinds of unbounded rooms, which echo this. I think of the inside of one’s mouth as such a ‘room’. We have very little rational or visual context for what this space is, yet it is deeply familiar. Put an object you have not seen in your mouth—what are you experiencing? Try to imagine the boundaries of this space which is the mouth. You are engulfed by it (you are tiny inside this space) and surround it at the same time. That's pretty fascinating, experientially. I think that a clarification of our idea of the nascent process of building primal spatio-temporal awareness could lead to several potential experiments/experiences/thought exercises which relate back to our room/space/memory exploration.
Here are my somewhat rambling impressions of where we are at, from my viewpoint... hope this is useful for getting the next round going.
Information and Other-Than-Information: Examining space before it gets to be information (Relevance to new-media, dance, architecture)
We live in a world of data, of encoded information. This is especially true of our visual world. Phenomenology, as we have been at it in these experiments, has something to say about experience of space, of being and space, prior to it being monetized as information that can be parsed in the realm of computation. This seems like a key area where the ‘experiments’ and experiences of Memory Place are a useful corrective to assumptions coming from science and technology (assumptions which, to be fair, also permeate art practice). Key question: is there a way of thinking about this information threshold (when experience/space becomes info) in a way that is useful to a better understanding of technologised culture and what we can do with it?
In relation to many new-media practices, the space of movement/performance is of prime interest as a hybrid between two encoded cultural spaces: firstly, of action grounded in the human body, with its proper perceptual world, and, secondly, of the ‘without-ground’ or ‘without-body’ of new media and the virtual. Somewhat naively, electronic arts often use the theatre as a model for virtual/immersive space without examining the implications of that importation. One could argue that dance- and movement-based performance, as well as architecture, have become more relevant in contemporary visual culture precisely in critical relation to electronic screen-based media, virtual reality and immersive interfaces. That is, we are trying to understand two different kinds (registers? grasps?...) of space which are nested or interleaved one with the other. Virtual or immersive ‘reality’ relies on this confusion: that we believe (or are asked to believe) one is the other.
Returning to bodily movement as a form of encounter may be an important corrective to assumptions engendered by new technologies. For the human animal, watching another in movement is immersivity without technology. This makes movement- and space-based art practices (such as dance and architecture) the most relevant areas of artistic research today and a key point of overlap between traditional artistic disciplines and new technology in their ability to confront a culture of information with a culture anchored in the experience of space. How are movement and space ‘fixed’ by technology and new media? The nature of this ‘fixing’ should be a significant preoccupation for creative and critical practitioners in the face of new technologies.
Methods
As outlined in the grant summary, the methods of data organization, experimental set-up, interview and so on are all part of the same conundrum of distinguishing scientific/technological method from what it seeks to understand. Our recent experiments have shown that observing the participants ‘working’ in space is itself fascinating and useful (see the underlined sentence above). We have often said, “it was interesting watching so-and-so doing this or that”, or that the particular ‘style’ of exploration someone has is exceptional. So incorporating rigorous observation into our method (and descriptions and write-ups) seems a good direction. There is a parallel here which might be worth following up in dance observation / dance movement therapy (DMT). All to say, if we are intrigued by observing movement we should do some work on bringing observation notes into the method, alongside alternate debrief methods such as multiple interviewees / 3-way conversation, etc. In line with this is an idea in the grant summary that participants can be trained to some extent to help them leave behind some presumptions and tune their own observation skills towards the simpler experiences we are interested in (which is what the breathing and moving before the last round did—could this be more comprehensive?). All in all, this means abandoning the naive subject and the invisible experimenter. This is complicated. Scientific experimental method is useful because it is effective, and vice versa. It 'moves forward', as everyone says these days. Our method is deficient in that regard. It's fuzzy in both the activity and analysis phases. The question becomes not how 'accurate' a method can be, but whether it is rigorous enough to be generative/creative of... something.
analogue /digital
David’s suggestion in the last follow-up of a chair-based mechanical device was interesting on two counts. One, it gets our feet off the ground, therefore interfering with some intuitive ways of getting around and sounding-out a space. Two, it is mechanical, therefore analogous to bodily mechanisms. Somehow I think we might adapt to it in a different way than to a device which uses information processing (something mechanical operates in the world of body physics we understand and believe; something computational can be programmed to lie, so we have to choose to believe it—perhaps a subtle difference). There are a variety of simple ‘analogue’ experiments which could complement the more ‘digital’ ones to give us a sense of this.
Perspective and architectonic space
Is linear perspective learned from living in and amongst buildings? What prejudice does this layer onto our experience of space (linear perspective being the structuring basis of most virtual-visual environments)? But perspective, as a mathematical construction of points, is a representation of something that it itself is not. What other kinds/shapes of rooms are there?
Interdisciplinary Relevance
By definition a cross-disciplinary practice straddles multiple discourses and audiences, which brings about either a flattening or a sharpening of the work’s critical positioning. Memory Place sits across philosophy, art/design and computational environments in a superbly interesting way—one that puts stress on assumptions coming from these disciplines rather than using them purely as generators of a hybrid method for production/invention/innovation. That can be a redefinition of ‘innovation’ right there. This makes ‘M-P’ relevant to the elaboration of what ‘inter-disciplinary’ can be, in that in part it questions the disciplinary partitions themselves and asks how something creative, experimental and philosophically rigorous can work. Specifically, thinking about movement, memory and space puts M-P in a key position in relation to ‘traditional’ disciplines (dance, performance, architecture, computation, philosophy) which make a claim to various parts of the shoreline of such an inquiry. Is the end product a work of art or a cognitive-science paper? Probably neither; more likely it is a method of working, observing, and generating understanding of movement, memory and space differently, relevant to multiple disciplines, but helping those disciplines shed some prejudices.
Hi David,

True 6DOF was solved elegantly by Marek Alboszta's team (ESPi) (quaternions are the way to go) and supplied by Polhemus using a more elaborate set of sensors. Each solution has trade-offs, for good engineering reasons. I wouldn't try to solve it myself. These are not cheap gadgets, for good reason.
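As a loose sketch of why quaternions are "the way to go" for orientation tracking: integrating gyro angular rates into a unit quaternion avoids the gimbal-lock singularity that Euler-angle tilt tracking runs into. This is a generic first-order integrator, my illustration, not ESPi's actual method:

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    # First-order update of orientation quaternion q from one body-frame
    # angular-rate reading omega (rad/s) over timestep dt.
    dq = 0.5 * dt * quat_mul(q, np.array([0.0, omega[0], omega[1], omega[2]]))
    q = q + dq
    return q / np.linalg.norm(q)  # renormalize each step to fight drift
```

Every orientation stays representable; there is no configuration where a degree of freedom collapses, which is exactly the Euler-angle failure mode.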
On 2011-06-10, at 1:20 PM, David Morris <davimorr@alcor.concordia.ca> wrote:

Actually, I wonder if this might do what we need:
Glue rigid wood or fibreglass dowel to iPhone case.
Fix a ring of LEDs around the dowel shaft at two points, so an overhead camera can always pick up two points of light at known distances from the iPhone.
How would these distances be measured? Not enough degrees of freedom to recover 6DOF, I understand what you have in mind. And at what time and space accuracy and frequency? (Over the years many generations of TMLabbers and I have conceived different solutions, but I'm interested in fresh feasible approaches.)

So this is a line-of-sight approach. If we use a line-of-sight approach, then I'd simply use the Wii and hack up its IR receiver / sender pair, since it's so well engineered. Ditto Kinect for camera-based tracking. Adrian and I agree it makes sense to buy and hack this inexpensive off-the-shelf gear to get a feel for what can be done. Good practice: "rapid prototyping, quick fail". The question now is who will do the DIY electronics.

In that spirit, before that or meanwhile we can wizard-of-oz, walk through some scenarios, and get a lot more juice out of the gloves by using them in many alternative scenarios, I think.

Looking forward to talking with you again, and catching up with people Wed. next week!

Xin Wei
Put the iPhone in the case.
Run GyroOSC on the iPhone.
We now know the tilt of the iPhone relative to 3 axes—that gyroscope is pretty amazing.
From the overhead camera monitoring the LEDs we can get the position of the iPhone in the XY plane, and its rotation relative to the X and Y axes.
And then, I am thinking that by measuring the apparent distance between the two LEDs in the camera image, and combining this with the gyroscope tilt data, we could solve for the height of the iPhone from the floor.
Xin Wei, would the math work here? Does the gyroscope data plus the two LEDs in the image give us a unique 6DOF solution?
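To check the geometry: under a simple pinhole-camera model the apparent pixel separation of the two LEDs scales inversely with their depth from the camera, and the gyro tilt tells us how much of the physical LED separation lies parallel to the image plane. A sketch (function names, the pinhole assumption, and the known camera height are all mine, not a worked-out solution):

```python
import numpy as np

def height_from_leds(pixel_sep, led_sep_m, tilt_rad, focal_px, cam_height_m):
    # pixel_sep:    apparent LED separation in the overhead image (pixels)
    # led_sep_m:    known physical separation of the LEDs on the dowel (m)
    # tilt_rad:     dowel tilt from vertical, from the gyroscope
    # focal_px:     camera focal length in pixels
    # cam_height_m: overhead camera height above the floor (m)
    parallel_sep = led_sep_m * np.sin(tilt_rad)
    if parallel_sep <= 0:
        # Degenerate case: a vertical dowel projects both LEDs to
        # (nearly) the same image point, so depth is unrecoverable.
        raise ValueError("tilt too small: LED separation projects to a point")
    # Pinhole model: pixel_sep = focal_px * parallel_sep / depth
    depth_m = focal_px * parallel_sep / pixel_sep
    return cam_height_m - depth_m
```

Note the degenerate case in the sketch: with the dowel near vertical the two LEDs coincide in the overhead view and the solution blows up, which looks like the "not enough degrees of freedom" worry raised above.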
From: David Morris [mailto:davimorr@alcor.concordia.ca]
Sent: June-10-11 12:57 PM
To: 'Niomi Anna Cherney'; 'zohar'; 'p.a. duquette'; 'Sha Xin Wei'; 'Noah Brender'; 'Tristana Martin Rubio'; 'Andrew Forster'
Subject: Update: Further Research on 'Magic Wand'

I spent some further time today researching possibilities for the tech for the ‘magic wand’, first to see if we can get this via the iPhone. Results:
--it turns out if you want to simulate lightsaber duels on the iPhone, you need to solve the 6 degrees of freedom problem: position and tilt in 3 dimensions. The app Lightsaber Duel from THQ apparently lets you have virtual duels through Bluetooth. I guess it’s doing relative position through Bluetooth? There is also iSamurai. I’ve downloaded these, and I am hoping someone else can bring an iPhone, so we can see how this works. What I don’t know is if this only gives you audio feedback when you hit the other person’s virtual lightsaber, or also if you are pointing their way. (NB, the thing we are trying to do is like Luke in the first Star Wars movie tracking the practice drone while blindfolded through his lightsaber.)
--there are also a bunch of apps for doing read-outs of the gyroscope sensor on the iPhone. One, GyroOSC, sends the data via OSC. If an overhead camera tracking a fiducial on the hand could give us 3D location, then we could use an iPhone or WiiMote to give tilt.
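Since GyroOSC streams its readings as OSC packets over UDP, pulling them into our own code is straightforward. A minimal OSC 1.0 message parser for float/int arguments (the '/gyro' address in the test is illustrative, not necessarily GyroOSC's actual namespace):

```python
import struct

def parse_osc_message(data: bytes):
    # OSC 1.0 layout: address pattern string, then a ","-prefixed type-tag
    # string, then big-endian arguments; every string is NUL-terminated
    # and padded to a 4-byte boundary.
    def read_string(buf, i):
        end = buf.index(b'\x00', i)
        s = buf[i:end].decode('ascii')
        return s, (end + 4) & ~3   # skip NUL + padding to next 4-byte boundary

    addr, i = read_string(data, 0)
    tags, i = read_string(data, i)
    args = []
    for t in tags[1:]:             # skip the leading ','
        if t == 'f':
            args.append(struct.unpack_from('>f', data, i)[0]); i += 4
        elif t == 'i':
            args.append(struct.unpack_from('>i', data, i)[0]); i += 4
    return addr, args
```

In practice one would bind a UDP socket on the port GyroOSC sends to and feed each datagram through this.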
--I also came across this: http://www.free-track.net/english/. It’s a freeware package for Windows that uses LEDs mounted on someone’s head to track 6DOF data through a webcam or Wii camera. Maybe we could mount the LEDs on the end of a stick, and use an overhead camera?
Second question: there are so many hands-on techniques of listening to vital signs and movements. Practitioners of Traditional Chinese Medicine palpate tissues and listen to the pulse, as it reveals metabolic functions; many doctors learn to listen to their patients' respiration and heart activity using stethoscopes (auscultation); and cranio-sacral therapists feel the tides of spinal fluids with their own hands. How do you regard these existing methods?

Answer (quote): 'Traditional Chinese Medicine has no theory. It's a closed system, simulations are hard to do'… meaning that TCM doesn't provide 'data'; perhaps it provides more qualitative information. But health and wellness is not just about data; it is more complex.

My third question, which I didn't get to ask, was: How should we remedy this problem of female students going off and having kids, then quitting the workforce? Is that really why they are dropping out of school? What about the obvious gender inequality in certain disciplines, such as we have here in this lecture room: 25 men and 5 women from the Engineering department?

But there are many more questions. Why continue to develop unilaterally, where this monitoring technology feeds the isolation of patients? Why is that acceptable? How long will it take to address problems like implant rejection? If prevention is really the focus, why not emphasize healthcare long before measures such as stents and pacemakers become necessary?

The proposed technologies are useful and effective in the specific applications where they are really necessary, but there seems to be a lack of perspective about what is important in healthcare, beyond just life support systems… One woman I know who regularly has epileptic seizures is alerted four hours ahead of time, when her dog and cat begin to follow her around the house. She doesn't need an implanted EEG sensor.

Comments, anyone?

Laura E
From: shaxinwei@gmail.com
Subject: Hexagram Research-Creation Seminar on Self-Powered Wireless Biomedical Devices - June 6
Date: Fri, 20 May 2011 22:08:50 -0400
CC: memory-place@concordia.ca; artcrd@langate.gsu.edu
To: tml-active@concordia.ca

TMLabbers -- of possible interest to those who swear by bio-sensing, or would like to peek at the future of biopolitics -- docile mitochondria! :) Will someone who can attend please send notes to tml-active?

Thanks!
Xin Wei

Begin forwarded message:

From: Momoko Allard <hexinfo@alcor.concordia.ca>
Date: May 20, 2011 10:56:08 AM EDT
To: Momoko Allard <hexinfo@alcor.concordia.ca>
Subject: Hexagram Research-Creation FW: Seminar on Self-Powered Wireless Biomedical Devices - June 6
-----Original Message-----
From: ENCS Communications <communications@encs.concordia.ca>
Subject: [All-ftfac-announce] Seminar on Self-Powered Wireless Biomedical Devices - June 6

Dear ENCS Members,

The following seminar may be of interest to you. This announcement is sent at the request of the Department of Electrical and Computer Engineering.

INVITED SPEAKER SEMINAR IN ELECTRICAL AND COMPUTER ENGINEERING

CO-SPONSORED BY:
The Department of Electrical and Computer Engineering
IEEE Circuits and Systems Montreal Chapter
IEEE Montreal Section

Monday, June 6, 2011
6:00 p.m.
Room EV002.184
(Refreshments will be served.)
“Towards Self-Powered Wireless Biomedical Devices”

DR. YONG LIAN
DEgr. (h.c.), FIEEE
Lutcher Brown Endowed Chaired Professor
The University of Texas
San Antonio, TX, USA

ABSTRACT
Body Sensor Networks (BSN) combined with wearable/ingestible/injectable/implantable biomedical devices are envisaged to create the next era of healthcare systems. Such systems allow continuous or intermittent monitoring of physiological signals and are critical for the advancement of both diagnosis and treatment. With advances in nanotechnologies and integrated circuits, it is possible to build system-on-chip solutions for implantable or wearable wireless biomedical sensors. Such wireless biomedical sensors will benefit millions of patients needing constant monitoring of critical physiological signals anytime, anywhere, and help to improve quality of life. This talk will cover several topics related to wireless biomedical sensors, especially the development of self-powered wireless biomedical sensors and associated low-power techniques. A design example of a sub-mW wireless EEG sensor is discussed to illustrate the effectiveness of the low-power techniques.

BIOGRAPHY
Dr. Yong Lian received the Ph.D. degree from the Department of Electrical Engineering of the National University of Singapore in 1994. He worked in industry for 10 years and joined NUS in 1996. Currently he is a Provost's Chair Professor and Area Director of Integrated Circuits and Embedded Systems in the Department of Electrical and Computer Engineering. His research interests include biomedical circuits and systems and signal processing. Dr. Lian is the recipient of the 1996 IEEE CAS Society's Guillemin-Cauer Award for the best paper published in the IEEE Transactions on Circuits and Systems II, the 2008 Multimedia Communications Best Paper Award from the IEEE Communications Society for the paper published in the IEEE Transactions on Multimedia, and many other awards.
Dr. Lian is the Editor-in-Chief of the IEEE Transactions on Circuits and Systems II (TCAS-II), Steering Committee Member of the IEEE Transactions on Biomedical Circuits and Systems (TBioCAS), Chair of DSP Technical Committee of the IEEE Circuits and Systems (CAS) Society. He was the Vice President for Asia Pacific Region of the IEEE CAS Society from 2007 to 2008, Chair of the BioCAS Technical Committee of the IEEE CAS Society (2007-2009), the Distinguished Lecturer of the IEEE CAS Society (2004- 2005). Dr. Lian is a Fellow of IEEE.
For additional information, please contact:
Dr. Wei-Ping Zhu
514-848-2424 ext. 4132
weiping@ece.concordia.ca
----- Original Message -----
From: Sha Xin Wei
Sent: 05/31/11 06:41 AM
To: Michael Fortin
Subject: Re: "scientific" gesture / movement research ?

I suggested that someone open up a Wii mote and reassemble the parts into a suitable form factor. We still need a visible-light photocell though, and can't use line of sight, so that solution is also clunky... We need a good versatile engineer to own this project and work with the MP group.

And strategically in June I'd like to define a non-hobbyist grant to NSF/NSERC/FQRNT parallel to an MP grant to do a gesture/movement tracking research project that meets different interests around the TML -- MP, Adrian (+Wessel), MM, Satinder. I'll propose this to my EU colleague as well.

Xin Wei

On 2011-05-30, at 6:43 PM, Michael Fortin wrote:

This might be a jaded comment... I'll call it an advanced WiiMote (the WiiMote just tracks x-y-rotation; they have some idea of angle and distance to the display which the WiiMote doesn't have). (Morgan -- the WiiMote has vibrotactile feedback.)

Speaking of odd remotes, there's this unrelated interesting toy: http://www.thinkgeek.com/interests/dads/cf9b/

Cheers,
~Michael();
On May 20, 2011, at 10:08 AM, Sha Xin Wei wrote:

Hi Marek,

For a memory & place experiment, we would like to give people a wand that they can hold that can report position, Euler angles, and their time derivatives, ideally at better than 30 Hz for the entire 12-vector. We need it wireless; a range of say 10 m suffices.

Spatial resolution is important, for tracking "pointing" at virtual objects that people infer by indirectly mapping position & angle to a vibration motor that will be embedded somewhere on their body. I expect any pen-based input device has more than adequate time-space resolution. We would also like the "wand" to be small enough to fit anywhere, attached to the body in some not too obtrusive way.

We can write our own code to parse the data if you tell us the format, coming in some standard protocol, serial or ethernet/port stream. The person may be free to wander around the room and point in any direction whatsoever. Does the wand need to see an IR array in "front", i.e. be constrained to a half-sphere, or can it be pointed in any direction provided a set of IR beacons...?

Cheers,
Xin Wei
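On the 12-vector: if the unit reports timestamped pose samples, the time-derivative half can be computed on our side with central differences over the actual intervals. A sketch (the array layout is my assumption):

```python
import numpy as np

def pose_derivatives(samples, times):
    # samples: (N, 6) array of [x, y, z, roll, pitch, yaw] readings
    # times:   (N,) timestamps in seconds (intervals need not be uniform)
    # Returns the 12-vector [pose, d(pose)/dt] at each interior sample,
    # via central differences over the reported intervals.
    # (For the angle columns you'd apply np.unwrap first to avoid +/-pi jumps.)
    samples = np.asarray(samples, dtype=float)
    times = np.asarray(times, dtype=float)
    dt = (times[2:] - times[:-2])[:, None]
    vel = (samples[2:] - samples[:-2]) / dt
    return np.hstack([samples[1:-1], vel])
```

At 30 Hz or better this introduces at most one sample of latency, which should be fine for driving a vibration motor.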
As official grumpy old man on these things I should point out that a high-precision 6DOF device is the holy grail, and hard to do at any price. It is a question of making an inventory of where the various solutions proposed fall down. There are dozens of companies that have crashed and burned on this already, so we should be cautious as to where we put our eggs.

The NaviScribe core technique was patented 4 years ago. What happened? Why can't they implement it, or raise the money to implement it for a gaming controller?

For the particular needs of the Memory/Place work it may be easier to borrow or rent the market leader for a short time: http://www.polhemus.com/?page=Motion_PATRIOT%20Wireless

-------- Original Message --------
Subject: Re: "scientific" gesture / movement research ?
From: Morgan Sutherland <morgan@morgansutherland.net>
Date: Mon, May 30, 2011 10:36 am
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: Adrian Freed <adrian@adrianfreed.com>, memory-place@concordia.ca,
Satinder Gill <spg12@cam.ac.uk>
Scientifically speaking, I'd love to see somebody integrate vibrotactile
feedback into this. There is the feedback problem to figure out (the
vibration adds noise to the signal, which adds noise to the vibration
motor, ad infinitum).
http://www.cim.mcgill.ca/~haptic/pub/HY-VH-RE-CAS-05.pdf ("A Tactile
Enhancement Instrument for Minimally Invasive Surgery")
http://www.cim.mcgill.ca/~haptic/pub/HY-VH-JASA-10.pdf (on better vibration
motors)
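One standard mitigation for that feedback loop, not discussed in the thread, is to time-multiplex actuation and sensing: drive the motor for part of each period and sample the motion sensor only in the quiet window, so motor noise never overlaps the signal path. Schematically (the period and duty values are arbitrary placeholders):

```python
def duty_cycle_schedule(t_ms, period_ms=50, actuate_ms=35):
    # Time-division of each period: the motor runs for the first
    # `actuate_ms` of every `period_ms`, and the sensor is read only
    # in the remaining quiet window, breaking the feedback loop.
    phase = t_ms % period_ms
    return 'actuate' if phase < actuate_ms else 'sample'
```

Whether a 15 ms sampling window suffices would depend on the sensor's settling time after the motor stops; that is exactly the kind of thing a bench test with the gloves would tell us.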
I remember I had a specific idea for this kind of device recently – I'll see
if I can remember...
As for the wireless part, it would be dead simple to use an XBee to beam the
data over to a USB data acquisition device (Teensy or whatever), just not
elegant (XBees are bulky in comparison to a pen). The question there is
whether we get analog or digital output from the pen itself. If it's
digital, then there could be synchronization problems unless Marek can
provide the spec for the communication protocol. If it's analog, then it's
dead simple: just cut the wire and put 2 XBees in between. Add 1 extra week
for unforeseeable headaches that always crop up when doing wireless.
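The synchronization worry can at least be contained by wrapping whatever goes over the XBees in a self-delimiting frame (start byte + length + checksum), so the receiver can resynchronize after dropped bytes. A sketch; the 0x7E start byte is just a convention borrowed from HDLC-style framing, and a real link layer would also byte-stuff 0x7E inside payloads:

```python
def frame_packet(payload: bytes) -> bytes:
    # Frame = start byte, payload length, payload, XOR checksum.
    chk = 0
    for b in payload:
        chk ^= b
    return bytes([0x7E, len(payload)]) + payload + bytes([chk])

def deframe_stream(stream: bytes):
    # Scan a raw byte stream, recovering every frame whose checksum
    # verifies; junk and truncated bytes are skipped, so the receiver
    # resyncs on its own after a dropout.
    packets, i = [], 0
    while i + 2 <= len(stream):
        if stream[i] != 0x7E:
            i += 1                       # hunt for the next start byte
            continue
        n = stream[i + 1]
        if i + 2 + n + 1 > len(stream):
            break                        # incomplete frame at end of stream
        payload = stream[i + 2:i + 2 + n]
        chk = 0
        for b in payload:
            chk ^= b
        if chk == stream[i + 2 + n]:
            packets.append(payload)
            i += 2 + n + 1
        else:
            i += 1                       # false start byte; keep scanning
    return packets
```

This costs three bytes per packet, which is negligible next to the XBee's own overhead.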
I'm actually very excited about this – if this gets commercialized and keeps
its form factor, I can see myself using one regularly, hopefully by then in
conjunction with a fast RGB E-ink display. Viva post-television.
Morgan
On Mon, May 30, 2011 at 3:55 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian, and scientific researchers,
Raising the stakes and thinking ahead to more robust and precise
instrumentation, here's the
NaviScribe 6-DOF 3D wand by Electronic Scripting Products, Inc. (ESPi) in
Palo Alto
The exclusive patent describes 6 DOF: x, y, z + Euler angles. The
company was founded by a physicist friend from Stanford: Marek Alboszta. Not
productized yet. "Commercial" co-development would require O(100K) USD.
I've not discussed how to enter into actual relation with this company, but
we could perhaps work out a deal. This would make sense in a real NSERC/NSF
co-development grant.
Shall we think about this in context of a scientific gesture research
proposal, along with high FPS cameras, and EONYX etc? Let's discuss this in
June.
Xin Wei
On 2011-05-20, at 9:32 PM, Marek Alboszta wrote:
Hello Xin Wei,
We can definitely do everything you ask (briefly - up to 100 Hz and better
with all degrees of freedom (6DOF) reported in compact stream (right now not
compressed), requires at most 120 MIPS to do everything (during periods of a
lot of activity) - unit is small so can be in a ring or glasses or headgear
or whatever you choose - we give you intervals so you can compute your
derivatives, resolution in 3D space is considerably better than 1 cm (in
plane it's down to 0.2 mm and better)). I can't do wireless unless somebody
gives me money to properly design a wireless beta unit (it is not a problem
of technology but pure resources).
...
Is your party ready to pay for this work ...? If not then we should
reschedule for when they are ready to commit resources for technology
development (or if they/your side wants to do the work). Anyway, we can
talk about it if the allocation of resources is a given - let me know.
warm greetings,
_____________________
Marek Alboszta
marekalb@yahoo.com
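Marek's point above about reporting intervals matters because derivative computation is left to the client: finite differences over the (possibly irregular) intervals turn the 6-vector pose stream into the full 12-vector. A minimal sketch; the function name and sample values are illustrative, not from ESPi's actual stream:

```python
# Sketch: per-step velocities from timestamped 6DOF pose samples.

def pose_rates(poses, intervals):
    """poses: list of 6-tuples (x, y, z, roll, pitch, yaw);
    intervals: seconds between consecutive poses (len(poses) - 1 of them).
    Returns per-step velocity 6-vectors (units per second)."""
    rates = []
    for p0, p1, dt in zip(poses, poses[1:], intervals):
        rates.append(tuple((b - a) / dt for a, b in zip(p0, p1)))
    return rates

poses = [(0, 0, 0, 0, 0, 0),
         (0.01, 0, 0, 0, 0, 0.1),   # moved 1 cm in x, yawed 0.1 rad
         (0.03, 0, 0, 0, 0, 0.1)]
intervals = [0.01, 0.01]            # ~100 Hz sampling
v = pose_rates(poses, intervals)
print(v[0][0], v[0][5])  # approx. 1.0 m/s in x, 10.0 rad/s yaw
```

Central differences or a small smoothing window would give less noisy derivatives at the cost of latency; the forward differences above are the simplest thing that works.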
On May 20, 2011, at 10:08 AM, Sha Xin Wei wrote:
Hi Marek,
For a memory & place experiment, we would like to give people a wand that
they can hold that can report position, Euler angles, and their time
derivatives. Ideally at better than 30 Hz for the entire 12-vector.
We need it wireless, range of say 10m suffices.
Spatial resolution is important, for tracking "pointing" at virtual objects
that people infer by indirectly mapping position & angle to a vibration
motor that will be embedded somewhere on their body. I expect any
pen-based input device has more than adequate time-space resolution.
We would also like to be able to have a "wand" small enough to fit anywhere
attached to the body in some not too obtrusive way.
We can write our own code to parse the data if you tell us the format
coming in some standard protocol, serial or ethernet/port stream.
The person may be free to wander around the room and point in any direction
whatsoever.
Does the wand need to see an IR array in "front", i.e. be constrained to a
half-sphere, or can it be pointed in any direction provided a set of IR
beacons ...
Cheers,
Xin Wei
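The mapping Xin Wei sketches above (position & angle indirectly driving a body-worn vibration motor) could be as simple as scaling motor intensity by how closely the wand's pointing ray passes to a virtual object. A hedged geometric sketch; the function names, falloff constant, and motor interface are assumptions, not anything from the actual experiment:

```python
# Sketch: vibration intensity from how closely the wand points at a target.
import math

def point_ray_distance(origin, direction, target):
    """Shortest distance from target to the ray from origin along unit direction."""
    to_target = [t - o for t, o in zip(target, origin)]
    along = max(0.0, sum(d * v for d, v in zip(direction, to_target)))  # ray, not line
    closest = [o + along * d for o, d in zip(origin, direction)]
    return math.dist(closest, target)

def vibration_level(origin, direction, target, falloff=0.5):
    """0.0 (far off target) .. 1.0 (pointing straight at it); falloff in metres."""
    d = point_ray_distance(origin, direction, target)
    return max(0.0, 1.0 - d / falloff)

# Wand at the room origin pointing along +x; virtual object 2 m away on the x axis
print(vibration_level((0, 0, 0), (1, 0, 0), (2, 0, 0)))    # on target
print(vibration_level((0, 0, 0), (1, 0, 0), (2, 0.4, 0)))  # slightly off
```

In use, the returned level would set a PWM duty cycle on the motor; clamping `along` to zero keeps objects behind the wand from triggering vibration.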
Hi Xin Wei,
Interesting device.
For the memory place investigation, though, it wouldn’t work, because it depends on line of sight between the wand and the LEDs around the TV. Also, at the moment, I wonder if it’s fast enough: the people in the demo are moving really slowly.
Best,
David
From: owner-memory-place@concordia.ca [mailto:owner-memory-place@concordia.ca] On Behalf Of Michael Fortin
Sent: May-30-11 1:44 PM
To: Sha Xin Wei
Cc: Adrian Freed; memory-place@concordia.ca; Satinder Gill; post@memoryplace.posterous.com
Subject: Re: "scientific" gesture / movement research ?
This might be a jaded comment....
I'll call it an advanced WiiMote. (The WiiMote just tracks x-y rotation; this one has some idea of angle and distance to the display, which the WiiMote doesn't.) (Morgan -- the WiiMote has vibrotactile feedback.)
Speaking of odd remotes, there's this unrelated interesting toy; http://www.thinkgeek.com/interests/dads/cf9b/
Cheers,
~Michael();
Yes... It could be that rhythm, textures, and/or seemingly fixed stimuli in the environment would provide participants with those 'anchors' I was speaking about. Of course, these can be generated through any number of experiential circumstances, events, or sensory references, providing a degree of experiential familiarity so that the distinct or contrasting qualities of the experience might become, all unto themselves, compelling. Especially if participants are still to undergo the destabilizing experience of being blind. The search for more recognisable means of inquiring or tracing one's way through the environment may have to be satisfied in some manner, in order for whatever is unique or contrasting about their experience to be more notable. Or even if they are not going to move around through the space, it seems we need to otherwise provide an intrigue that, when explored, supports some kind of experiential journeying (temporal evolutions). No?
xp

Just to drag around my 'sounding' (sea bottom) metaphor a little: there's a great recording on Alan Lomax's Deep River of Song: Mississippi Saints and Sinners of Joe Shores reciting a song/call for riverboats sounding depth in the Mississippi... "no bottom" is the deepest call. Imagine if our sensing apparatus was a long string with a light or texture or sound sensor, or just an eraser, on the end (or if we were animals with one of these -- which we all are, in a way). Throw it out and drag it through space and time, building a place. Like Xin Wei's 'not necessarily spatial' locus: the stimulus that is fixed defines itself.
af
----- Original Message -----
From: Sha Xin Wei
Sent: 05/20/11 01:31 PM
To: post@memoryplace.posterous.com, Erik Conrad, David Morris, Niomi Anna Cherney, Noah Brender, Tristana Martin Rubio, p.a. duquette, Andrew Forster, zoharKfir
Subject: Re: 'Magic Wand' Followup
I'm asking some experts: Erik Conrad, a TMLabber who built mappings to haptics (vibration motors on various parts of the body) from camera as well as GPS models of the built environment, as well as Marek Alboszta, whose company makes the only true 6DOF wand. (Asking for non-tethered, non-line-of-sight, but that may not be possible.)
BTW, Deleuze's micro-perception lay behind my musing about the locus of sensing. It's not a satisfactory vocabulary, but an invitation to parse out the layers: sensing modality / sensing locus / interpretation / logic of response / feedback locus and type ... and of course not leave them split! A "locus" may not be spatial; it could be temporal: keeping a "stimulus" sharply delimited in time, or very clearly temporally textured, is a form of delimitation and localization. Another way is to have a crisply defined rhythm -- unbounded in time (or at least in an open set), and with no particular spatial locus.
Warmly,
Cheers,
Xin Wei

On 2011-05-20, at 1:09 PM, David Morris wrote:
Follow up on magic wand possibilities:
--Sandeep’s student has a ‘T-Stick’, http://www.idmil.org/projects/the_t-stick, but this is far too much and it doesn’t sense position.
--Lenay’s group is using an ‘enactive torch’, which looks like a handheld device that converts the distal measurements it makes into vibration stimuli, in a programmable way, with an Arduino chip. This isn’t quite what we want, because we are more interested in locatedness than distance, and want to be selective about the location/object that prompts a stimulus. But the design is interesting; see http://enactivetorch.wordpress.com/. We could use a similarly sized physical thing, if we could get position/acceleration sensors into it. NB the enactive torch project looks interesting.
--I was trying to find info on getting position in room via Wii, but wasn’t sure we could, at least not in a robust way, because that seems to depend on IR sensitive detectors, and so would get cut off if there is no line of sight…
David
I can type something up for today's use to make sure we have something:
-short description (from protocol)
-participant's name and contact info (for follow-up contact)
-participant's occupation/specialty
-permission to be recorded by video/audio
-assurance that this recorded material or the participant's name won't be disseminated in public but is for research purposes
-further permission will be requested for any public dissemination of the material, if we want to do that
(This is really a recording-rights thing, if I understand, not an 'experimental subjects' thing -- as we are framing this as participating in an 'environmental experience'... right?)
andrew

On 2011-05-12, at 9:10 AM, David Morris wrote:
I haven’t worked on this. Xin Wei, I’m wondering if your past work might give us some templates that could be quickly modified. I’ve never seen the language on such release forms.

From: Niomi Anna Cherney [mailto:niomi.anna@gmail.com]
Sent: May-11-11 1:22 PM
To: Noah Moss Brender; Andrew Forster; David Morris; Sha Xin Wei; zohar; Tristana Martin Rubio
Subject: IMPORTANT

Has anyone made the waiver form for the participants to sign?
On Tue, May 10, 2011 at 11:39 AM, Niomi Anna Cherney <niomi.anna@gmail.com> wrote:
Hello Guides,
Noah - is it ok if I send the participants your cell phone number in case they are running late or something? I've instructed them that one or both of you will be meeting them in the JM lobby and taking them upstairs.
Let me know if this is ok.
-Niomi.

On Tue, May 10, 2011 at 11:37 AM, Niomi Anna Cherney <niomi.anna@gmail.com> wrote:
Holy jeeze that is amazing! We should buy Michael a present or something.
Ok, I vote we set up a semi-permanent warm-up debrief space in the snack studio. I will also be responsible for bringing the cookies and tea. Have just sent the participants an email so I think we're all set to go.

On Tue, May 10, 2011 at 10:15 AM, p.a.duquette <impetus@graffiti.net> wrote:
We are lucky wee experimenters, we are. Michael has been able to confirm a third studio for us. So we now have access to: MB 7.265, MB 7.251, and MB 7.255.

-----Original Message-----
From: Andrew Forster <af@reluctant.ca>
To: Niomi Anna Cherney <niomi.anna@gmail.com>
Cc: zohar <zohar@zzee.net>; p.a.duquette <impetus@graffiti.net>; davimorr@alcor.concordia.ca; tristana.martin.rubio@gmail.com; shaxinwei@gmail.com; noahmb@gmail.com
Sent: Mon, May 9, 2011 9:59 pm
Subject: Re: Untitled document (af@reluctant.ca)

I will bring lamp no have bulb... perfect.
Re: the warm up room... is there a curtain or separation possible in one of the studios? Then we could use that space...

On 2011-05-09, at 9:36 PM, Niomi Anna Cherney wrote:
That's why I suggested we meet at 3:30 - so we could all gather together, talk, and then split up and gather the gear.
I can bring light bulbs but I don't have any lamps.
Niomi.
On Mon, May 9, 2011 at 9:32 PM, zohar <zohar@zzee.net> wrote:
I can pick up the equipment at 4ish and bring it to TML, no problem. Since Hexagram/CDA depots close at 5pm, 4:40 might be stretching it, as gear is divided between the 5th and 11th floors.
Light bulbs and lamps: how many? Can people bring such from home? I think there are one or two office lamps at TML.
Also, what watts do we need? Should be consistent?
/z.

On May 9, 2011, at 9:14 PM, p.a.duquette wrote:
I recall the desk lamps w/bare bulbs being the preference also.
Meeting earlier Thursday sounds wise to me also. Do note though that we don't have the studios until 5pm, so if we do pick up the gear @4pm, we'll only be bringing it over to TML.
Zohar can we pick up gear @4:40pm instead, or is this a 'summer hrs' thing?
The debriefing could take place in one of the already-booked studios (same one the snacks are permitted in). Don't know how easy or possible it will be to get a third studio at this point... I can try... Would the hallway be an OK location for the warm-ups, unto themselves, though?
xp

-----Original Message-----
From: Niomi Anna Cherney <niomi.anna@gmail.com>
To: zohar <zohar@zzee.net>
Cc: davimorr@alcor.concordia.ca; Andrew Forster <af@reluctant.ca>; Tristana Martin Rubio <tristana.martin.rubio@gmail.com>; Xin Wei Sha <shaxinwei@gmail.com>; Noah Moss Brender <noahmb@gmail.com>; p.a.duquette <impetus@graffiti.net>
Sent: Mon, May 9, 2011 8:39 pm
Subject: Re: Untitled document (af@reluctant.ca)

Zohar - I think we decided on the adjustable desk lamps with the bare bulbs... am I wrong about this?
Other things, as per David's numbering system (with an additional 5 & 6):
1) I will be sending out the email to participants as soon as Patricia forwards me the security clearance attachment. I was thinking that perhaps it might be better to simply meet and gather participants in a central location in the lobby and then head up to the studios altogether. We thus avoid the security thing also. Perhaps Noah and Andrew could take this on as well?
2) I think we should meet at 3:30 to sit down as a group and just go over how the set-up and running of the evening will proceed. At this time we can discuss any last-minute logistical problems and address the remaining questions about the lighting moves and so on. We can then get the equipment at 4pm and begin taking it over to the studios. Whoever is available at this time should head to the TML, I think. I also suggest that once we have the set-up roughly in place in the studios, we try a few dry runs right away so that we can tweak the edges of the transitions during the protocol while fine-tuning the tech set-up. We should have enough people present to have this happen.
3) David, of course you will be involved in the debriefing. That must have been an oversight on my part. Sorry.
4) Word.
5) The hallway absolutely WILL NOT DO as a warm-up/debrief zone. Do we have any other options? I believe there's a third studio on the same floor. Is there any way to get the additional studio?
6) I will add in additional stuff to the protocol and have multiple copies on hand for Thursday. Please add in any last-minute stuff you can think of. I can also be in charge of keeping us on task/schedule.
-Niomi.

On Mon, May 9, 2011 at 7:58 PM, zohar <zohar@zzee.net> wrote:
Some thoughts regarding the lights - did we like the LED ones Andrew got for last time? Or better to use more of an omni-tungsten light? As they gave very different results, we might want to be consistent.
Andrew, did you buy them in a dollar store?
Not a bad idea to meet a tad before. I booked the equipment from 4pm, so maybe we can all meet at the TML and head over to the studios together?
/z.

On May 9, 2011, at 3:53 PM, David Morris wrote:
Hi Niomi,
Thanks for all the work on this!!!
Catching up: I couldn’t get onto the online doc, but I had suggestions for revisions to the initial script, to build on what you’ve set up, break the points down a bit, and use what I think is a bit more neutral vocabulary. (E.g., I think we should avoid the language of navigation, and just talk about moving around and finding things, because navigation might put them into a ‘map mindset’; also talk about ‘something’ vs. ‘an object’, just to leave open that they might not feel the thing as an object (they might feel it as a barrier?) or may feel more than one thing.) I paste my suggested reworking below; I hope you’ll find it an extension of your initial thoughtful framing and work.
1) Have the participants been contacted?
2) I see there are a number of questions we still need to answer, e.g., about how to move the lights around, and who is bringing what. Maybe we need to meet a bit earlier than Patricia’s suggested time?
3) Also, I wanted to be part of the debriefing process.
4) In general, I think we have to be careful with the debriefing, balancing letting them speak spontaneously and drawing them out; also, attending to not letting it get too confusing, or have too many voices.
David

Welcome. Thanks for joining us.
We’re now going to do some exercises in body movement and bodily experiential awareness together, to help warm you up for experiencing a special environment that we have prepared for you.
But before doing the exercises we wanted to tell you a little about this special environment. It’s different from the ones you might be used to. It’s something like an art installation, but one experienced through a new way of sensing that we will provide to you. We’ll provide you with this new way of sensing by putting a sleeve‑type apparatus on your finger, and also a band holding a further lightweight apparatus on your forearm. You should let us know if you find either of these things uncomfortable.

Once you are wearing this apparatus you will become sensitive to your environment in a new way. We’ll test this out together before you go into the environment. In the environment, there’s something you can find through this new way of sensing, and we’d like you to move around to find and interact with it.

The environment is in a different place than the one we’re now in. You’ll be with either Noah or Andrew the whole time. They’re going to guide you into this environment and then step back a little so that you can explore it.

If at any time during this process you feel uncomfortable you should let them know right away. You can speak to them the whole time, even if you are feeling completely comfortable. We’re here to help you move your way around this new environment, so feel free to say what comes to mind as you’re moving around. You can move around as much as you’d like, but it’s probably a good idea not to move too fast, so that your guide can keep up with you and spot you.

After you have completed your time in the environment, you’ll have an opportunity to discuss your experience with the other participant and with us.

During the time that you are in the environment, we’ll be audiovisually recording your experience. We would like to watch these recordings later to better observe your experience. We might also want to use some of this information for future trials and to help us build a more precise environment.
If that’s ok, we would ask you to sign a form.

From: Andrew Forster [mailto:af@reluctant.ca]
Sent: May-07-11 1:12 PM
To: niomi.anna@gmail.com
Cc: tristana.martin.rubio@gmail.com; shaxinwei@gmail.com; noahmb@gmail.com; impetus@graffiti.net; zohar@zzee.net; davimorr@alcor.concordia.ca
Subject: Re: Untitled document (af@reluctant.ca). This is real
I have two Body Shop slumber masks that can be used as blindfolds, which I can bring for the experiment.