Max Code Structures

From Digital Arts Wiki

Basic Tutorials

Please find all the basic tutorials from the first classes here:

Performance Basics

Patches that help with basic video and image slide shows for performance environments
Lucasz's File control: Media:

Computer Vision Patches

Max Jitter Patches for Computer Vision: Media:
All these simple patches rely on the cv.jit.mass object by Jean-Marc Pelletier, so you must install the cv.jit external library of objects before they will work: [1]
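To give a feel for what this kind of object computes, here is a rough sketch in plain Python of the sort of analysis cv.jit-style objects perform on a binarized frame: the "mass" (count of on pixels), the centroid, and the bounding box. This is an illustration of the idea only, not the cv.jit external itself.

```python
# Sketch of cv.jit-style analysis on a binarized image:
# mass (count of on pixels), centroid, and bounding box.
# Plain Python for illustration; not the actual cv.jit code.

def analyze_binary_image(image):
    """image: 2D list of 0/1 pixel values (list of rows)."""
    mass = 0
    sum_x = sum_y = 0
    min_x = min_y = max_x = max_y = None
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if pixel:
                mass += 1
                sum_x += x
                sum_y += y
                min_x = x if min_x is None else min(min_x, x)
                max_x = x if max_x is None else max(max_x, x)
                min_y = y if min_y is None else min(min_y, y)
                max_y = y if max_y is None else max(max_y, y)
    if mass == 0:
        return {"mass": 0, "centroid": None, "bounds": None}
    return {
        "mass": mass,
        "centroid": (sum_x / mass, sum_y / mass),
        "bounds": (min_x, min_y, max_x, max_y),
    }

# A 4x4 frame with a 2x2 "blob" in the middle:
frame = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(analyze_binary_image(frame))
# mass 4, centroid (1.5, 1.5), bounds (1, 1, 2, 2)
```

In a Max patch, numbers like these are what you would map to sound or visual parameters after thresholding the camera input.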

A great read if you're getting into computer vision is Golan Levin's essay "Computer Vision for Artists" [2]

Working with the Xbox Kinect Sensor

I have also made this Prezi available as an outline of the Interactive Art Kinect Workshop.

There are a number of ways to get Kinect data into Max:

  • The most basic is to use the jit.freenect.grab external - this will give you the Kinect's depth map as video, and you can do basic blob and bounds tracking on this image.

The freenect object, however, does not give you any skeletal data (which is really the fun part of the Kinect); for that you need OpenNI (Natural Interaction) middleware between the Kinect and your patch.
There are, however, a number of really wonderful people in the world who have artists and musicians in mind and have made working with this a lot easier by creating standalone apps (many of them written in Processing with simpleOpenNI) that send the NI data via OSC, which makes it really easy to pick up in Max with the udpreceive object. Some of these are:

  • Synapse, by Ryan Challinor (who helped develop the Dance Central gestural menu system :) ), with loads of info on addressing individual joints and action data. Note that Synapse does require the psi pose to bind the skeleton.
    • Here are two basic patches showing how this can be interfaced in Max using the udpsend and udpreceive objects: File:UDP Synapse
    • To view the depth map in Max, you should also download the Synapse to Jitter plugin from the Synapse download page - this shows the Kinect depth image in a Jitter matrix.

  • Jon Bellona's two really great interfaces, which track all points and distances simultaneously, interfacing with Synapse, Processing, OSCeleton and NI mate! For our classes it would be best to choose one of the following (they are on the same page, Kinect-Via):
    • Kinect-Via-Synapse
    • Kinect-Via-Processing
      • You do require the simpleOpenNI library by Max Rheiner to be installed for this to function.
    • If you are planning on using any of these, I recommend reading Bellona's paper, Kinect-via: Max/MSP Performance Interface Series for Kinect User Tracking via OSC [3]

  • Using Patricio Gonzalez Vivo's KinectCoreVision, you will need to bring the data in with TUIO, a protocol designed for touch screens and touch tables. Here is a file with the object and some examples: File:TUIO
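All of the apps above talk to Max the same way: they fire binary OSC messages over UDP, which udpreceive unpacks into a Max message. As a rough stdlib-Python sketch of what actually travels over the wire, here is a minimal OSC message builder and parser. The address "/lefthand" and the joint values are made-up examples for illustration, not the exact messages any particular app emits.

```python
# Sketch of the binary OSC message format that skeleton-tracking apps
# send over UDP to Max's udpreceive object. Illustration only; the
# address and values are invented examples.
import struct

def osc_pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def build_osc_message(address: str, *args: float) -> bytes:
    """Build an OSC message with float32 arguments."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(args)).encode())  # type tag string
    for value in args:
        msg += struct.pack(">f", value)  # big-endian float32
    return msg

def parse_osc_message(packet: bytes):
    """Decode the address and float arguments from an OSC packet."""
    end = packet.index(b"\x00")
    address = packet[:end].decode()
    offset = (end // 4 + 1) * 4          # skip string padding
    tag_end = packet.index(b"\x00", offset)
    tags = packet[offset:tag_end].decode()
    offset = (tag_end // 4 + 1) * 4
    values = []
    for tag in tags[1:]:                 # skip the leading ","
        if tag == "f":
            values.append(struct.unpack(">f", packet[offset:offset + 4])[0])
            offset += 4
    return address, values

packet = build_osc_message("/lefthand", 0.5, 0.25, 1.0)
print(parse_osc_message(packet))
# → ('/lefthand', [0.5, 0.25, 1.0])
```

On the Max side, a [udpreceive] object listening on the sender's port would output the same address and values as a Max message, ready to route with [route] or [OSC-route].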

Arduino to Max 5: Firmata

Find the Firmata code and examples here: [4]
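For a sense of what Firmata actually sends between the Arduino and Max's serial object, here is a small stdlib-Python sketch of one part of the wire format: Firmata is MIDI-style, packing values into 7-bit bytes, and an analog reading arrives as a status byte (0xE0 | pin) followed by the value's low and high 7 bits. This is an illustration of the protocol, not the Firmata firmware or Max patch itself.

```python
# Sketch of Firmata's byte-level analog message format (MIDI-style):
# status byte (0xE0 | pin), then the 14-bit value split into
# low 7 bits and high 7 bits. Illustration only.

ANALOG_MESSAGE = 0xE0  # Firmata analog I/O message, pins 0-15

def encode_analog(pin: int, value: int) -> bytes:
    """Encode a 14-bit analog reading (0-16383) for a pin (0-15)."""
    return bytes([
        ANALOG_MESSAGE | (pin & 0x0F),
        value & 0x7F,          # low 7 bits
        (value >> 7) & 0x7F,   # high 7 bits
    ])

def decode_analog(packet: bytes):
    """Recover (pin, value) from a 3-byte analog message."""
    pin = packet[0] & 0x0F
    value = packet[1] | (packet[2] << 7)
    return pin, value

# A full-scale 10-bit reading (1023) on analog pin 3:
packet = encode_analog(3, 1023)
print(packet.hex(), decode_analog(packet))
# → e37f07 (3, 1023)
```

In Max, the Firmata abstractions do this packing and unpacking for you around the [serial] object; the sketch just shows why the raw serial bytes look nothing like the 0-1023 values you patch with.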

Mini Projects
Mid Year Exam