ILYA: states, feel, and implementation

Informal thoughts, written on the train in.

To repeat, there are four basic experiential states to be composed; see the attached PDF from the blog.

The key here is that each state for which you compose a set of effects must be parametrizable by more than one parameter (two is good), each varying the aesthetic in some interesting and palpable way.


For example, to go from
Glass --> Discovery
I would like my state engine patch to:
blend from the Glass condition, where I see a specular refraction of myself (as in a glass window) superposed on the full-color, full-res view of the other, into the Navier-Stokes condition;
but ALSO control the effect of blending between Navier-Stokes wind (velocity field) from one side (East) or the other side (West). Call this parameter S.
In this case, I send 0-1 floats to those two params in JS's instruments.
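As a concrete sketch of the two-parameter idea (NumPy, with hypothetical array names -- this is an illustration, not JS's actual patch), the two 0-1 floats drive a cross-fade of the visual layers and a blend of the East/West wind fields:

```python
import numpy as np

def blend(a, b, t):
    """Linear cross-fade between two fields by a 0-1 parameter."""
    return (1.0 - t) * a + t * b

# Hypothetical stand-ins for the two conditions and the two wind fields.
glass  = np.zeros((4, 4, 3))               # specular-refraction layer
navier = np.ones((4, 4, 3))                # Navier-Stokes layer
wind_east = np.full((4, 4, 2), [1.0, 0.0])   # wind blowing from the East
wind_west = np.full((4, 4, 2), [-1.0, 0.0])  # wind blowing from the West

g = 0.25   # Glass -> Discovery blend, received as a 0-1 float
s = 0.8    # East/West wind parameter S, received as a 0-1 float

frame = blend(glass, navier, g)        # what the viewer sees
wind  = blend(wind_east, wind_west, s) # velocity field driving the fluid
```

At s = 0 the fluid is driven entirely from the East, at s = 1 entirely from the West, with a continuous mix in between.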

In
Discovery -> Storm

-- the general cloud (like Navier-Stokes densities being blown by wind derived from optical flow) --

I would like to begin to change from the blend (sum) of cv.a and cv.b according to S to the product of the optical flows of the two sides. Call this product video P.
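One possible reading of "product of the optical flows", sketched in NumPy (whether the product is taken over flow magnitudes or per component is a design choice, and the flow fields are assumed precomputed elsewhere):

```python
import numpy as np

def product_video(flow_a, flow_b):
    """Pointwise product of the two sides' optical-flow magnitudes.

    flow_a, flow_b: (H, W, 2) optical-flow fields, one per side.
    Returns an (H, W) image P that is large only where BOTH sides move.
    """
    mag_a = np.linalg.norm(flow_a, axis=-1)
    mag_b = np.linalg.norm(flow_b, axis=-1)
    return mag_a * mag_b
```

Unlike the sum, P is nonzero only where the two bodies' motion overlaps, which is what makes it a natural trigger for the spark generation.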

And in the overlap, radically change to the generation of vital substance:
sparse sparks, like flecks of red embers from a fire that is struck, like match heads, by the two bodies' friction
(particles born where the value of P exceeds a threshold. The velocity field is radial, away from some function D of the original bodies' density, i.e. the velocity field is the gradient of D. D could be, for example, ...)
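A minimal NumPy sketch of that birth rule, assuming P and D are given as 2D arrays (the threshold value and the choice of D are placeholders):

```python
import numpy as np

def spawn_sparks(P, D, threshold=0.5):
    """Particles are born where P exceeds the threshold; each takes its
    initial velocity from the gradient of D at its birth cell."""
    dDy, dDx = np.gradient(D)              # per-axis gradients of D
    ys, xs = np.nonzero(P > threshold)     # birth sites
    positions  = np.stack([xs, ys], axis=-1).astype(float)
    velocities = np.stack([dDx[ys, xs], dDy[ys, xs]], axis=-1)
    return positions, velocities
```

Note that the gradient points toward increasing D; if D is the density itself, `-np.gradient(D)` is what points away from the bodies, so the sign depends on how D is defined.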

Note: it is NOT necessary to synthetically generate vines -- only to perfect a video retargeting method that would take ANY video of a linearly growing thing, like
blood running down white cardboard, or
a streak of red, yellow, or white fire along a fuse (use a 10' long strip of flash paper from any magic supply store -- though it may burn too fast),
or a vine growing (need stock footage).

How about this: what JS needs to do for Navid is provide a patch that Navid can include on his side, to receive FOUR low-res video streams:
(1) cv.a, cv.b -- these can be plugged in directly to replace what Navid already uses as test input;
(2) a low-res copy of the final output.
The patch should be able to toggle between these streams and Navid's own camera feed.

If I can, I will work with Navid to further process the result into some 0-1 float params with sufficient dynamics. This can be trivial for now, as long as we can hear the result.
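As a trivial placeholder for that processing (a hypothetical sketch, not Navid's pipeline): smooth the mean brightness of the low-res stream into a single 0-1 float, so there is at least something with dynamics to listen to:

```python
import numpy as np

class ParamExtractor:
    """Reduce each incoming frame to one 0-1 float with simple temporal
    smoothing -- a one-pole filter on mean brightness."""

    def __init__(self, smoothing=0.8):
        self.smoothing = smoothing   # 0 = jumpy, near 1 = sluggish
        self.value = 0.0

    def update(self, frame):
        """frame: float array with values in [0, 1]."""
        target = float(np.clip(frame.mean(), 0.0, 1.0))
        self.value = (self.smoothing * self.value
                      + (1.0 - self.smoothing) * target)
        return self.value
```

Anything with this shape -- frame in, smoothed 0-1 float out -- can later be swapped for something with richer dynamics.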

Then, for example, diegetic video, like a bright line of fire racing down a fuse, will drive obvious dynamics in whatever Navid has running, since his instruments are already sensitized this way.

I would like to have MANY examples on hand for STORM, so we can quickly go through them and see what works well with respect to DISCOVERY and leading into DRY.

This modulation of
(1) a pre-composed rich media stream (i.e. a sound or video file treated as "input" or "impulse" in physical-model language), as well as
(2) live media input,
by
(3) params from my state engine
is a good model for visual as well as audio instruments.


Instead of completely painting a scene with presets, always leave some params open to an external patch -- ones that have a very large qualitative effect (like number of particles, lifetime, or friction -- i.e. energy loss). JS, you can tell us what the most interesting top 5 or 8 params are, and later we'll cook it down. :)
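A sketch of that contract (names and ranges are made up for illustration): the instrument keeps its preset baked in but publishes a short list of open params, each driven by a 0-1 float from the state engine and mapped internally to a useful range:

```python
class Instrument:
    """Preset scene with a few high-leverage params left open."""

    # (lo, hi) ranges the 0-1 control value is mapped onto
    OPEN_PARAMS = {
        "num_particles": (0, 5000),
        "lifetime": (0.1, 10.0),   # seconds
        "friction": (0.0, 1.0),    # energy loss per step
    }

    def __init__(self):
        # start each open param at the midpoint of its range
        self.params = {name: (lo + hi) / 2.0
                       for name, (lo, hi) in self.OPEN_PARAMS.items()}

    def set_param(self, name, value01):
        """Map a 0-1 float from the state engine into the param's range."""
        lo, hi = self.OPEN_PARAMS[name]
        self.params[name] = lo + value01 * (hi - lo)
```

Keeping the mapping inside the instrument means the state engine only ever speaks 0-1 floats, which makes "cooking it down" later a matter of editing one table.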

Regards,
Xin Wei