Sapfundament continued - Neural Networks, Motion Capture and Flamenco

////Background 


In January 2018 I spent a week with choreographer and dancer Annalouise Paul at The Drill Hall in Rushcutters Bay, Sydney. The week was kindly facilitated by Critical Path. We spent the time in creative development for a show called Mother Tongue that we hope to remount in 2019.

During this creative development I used and refined machine learning techniques, and this article covers what I discovered during the week and in subsequent analysis.
It directly continues my previous artistic research on machine learning, conducted at the Choreographic Coding Lab 2017 in Amsterdam.

Support from the School of Communication and Creative Arts, Deakin University provided a budget for robot parts and the post-week analysis time.

A quick summary of my earlier conclusions:
  • Machine learning allows us to program a computer by showing it results rather than describing what to do as logical code (see the toy sketch after this list).
  • This has potential applications for the recognition and interpretation of human dynamic and emotional input, e.g. dance or body language.
  • It’s important to understand that just because the machine can recognise or interpret emotions in a subjectively intuitive way doesn’t mean it has a general or animal-like intelligence. Everything the machine does follows logic; it’s just that this logic was very difficult to describe to the machine as pure code.
  • I find this artistically interesting for a few reasons:
  1. Machine learning techniques potentially save time and provide greater nuance than hand coding for some tasks in interactive art. Machine learning is essentially another tool in the toolkit of an interactive designer.
  2. With the right context you could create an artwork that demystifies human expression using machine learning.
  3. There is much interest in creating artworks where the machine and the human dance together. Making the machine a more interesting dancer is always useful.
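To make the first point concrete, here is a toy sketch of “programming by showing results” in Python using scikit-learn. This is not part of my actual setup, and the data is invented purely for illustration: instead of hand-writing rules, we fit a small neural network on labelled example pose vectors.

    # Toy illustration: fit a small neural network on labelled examples
    # rather than hand-coding rules. All data here is made up.
    from sklearn.neural_network import MLPClassifier

    # Each row is a fake flattened pose; each label is the desired result.
    examples = [[0.1, 0.9, 0.3], [0.2, 0.8, 0.4], [0.9, 0.1, 0.7], [0.8, 0.2, 0.6]]
    labels = ["flowing", "flowing", "staccato", "staccato"]

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(examples, labels)
    print(model.predict([[0.15, 0.85, 0.35]]))  # expected: ['flowing']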

In the earlier research most of the input was contemporary-style movement: flowing, abstract, and so on.
This week gave me a chance to work with flamenco style, as Annalouise is an accomplished flamenco choreographer and dancer.
Flamenco is interesting because it has a long history and formal structures, and is highly rhythmic. As far as body language goes, we could say contemporary dance and flamenco are two different languages.

When reading the following, bear in mind that my context as an artist is real-time live performance. I’m less interested in motion capture techniques from the film sector that require post-processing or specific costumes.

////Goals and Questions

  • Evaluate current real-time motion capture techniques and their suitability for use with flamenco.
  • Interpret flamenco-style dancing using a neural network and compare the results to previous experiments with contemporary dance.
  • Does the more structured nature of flamenco provide an opportunity for more structured input to the machine learning process? For example, could it recognise specific footwork? (A hypothetical sketch follows this list.)
  • Is it possible to create a machine that comprehends rhythm, in a sense more sophisticated than simple BPM recognition?
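On the footwork question, here is a hypothetical sketch of what recognition could look like on the output side: Wekinator trained as a classifier whose single output is a class index, mapped to flamenco footwork names. The class numbering and helper function are my own illustration, not something built during the week.

    # Hypothetical mapping from a classifier's output index to footwork names.
    FOOTWORK = {1: "golpe", 2: "tacon", 3: "planta", 4: "punta"}

    def footwork_name(class_index: float) -> str:
        return FOOTWORK.get(int(class_index), "unknown")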

Regarding the rhythm question, I should explain that to produce rhythm (and arguably to understand discrete footwork elements) we need some kind of concept of time in our system, and that could be a radical change. The nature of a neural network’s input is an extremely important part of establishing the network’s capabilities. It makes natural sense that the output of the machine, whether that’s a projector or a robot, is like its body, and if that body shape changes the expression changes. But the same is also true of the input, as the input represents sensors embedded within the machine’s body (whether they are physically embedded or not).

From Rolf Pfeifer and Josh Bongard in their book on embodied intelligence, How the Body Shapes the Way We Think:
if the sensors of a robot or organism are physically positioned on the body in the right places, some kind of preprocessing of the incoming sensory stimulation is performed by the very arrangement of the sensors, rather than by the neural system. That is, through the proper distribution of the sensors over the body, "good" sensory signals are delivered to the brain; it gets good "raw material" to work on.

I’m actually not entirely sure I agree with Pfeifer and Bongard on all their views yet. But I do agree that if we pre-processed the input data in a way that represents a physical sensor phenomenon, e.g. perception of time, then it should change the expression of the machine. The pre-processing could be done via regular coding or by another neural network.
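As a concrete sketch of that idea: one simple pre-processing step is to send frame-to-frame velocities alongside the raw joint values, so the input itself carries some perception of time. This is an assumption-laden illustration (python-osc, Wekinator’s default input port 6448, a window of 5 frames), not the configuration used in the studio.

    # Sketch: append sliding-window velocities to the 75 raw joint values
    # so the input itself encodes change over time.
    from collections import deque
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port
    history = deque(maxlen=5)  # keep the last 5 frames of joint values

    def send_frame_with_time(flat_joints):
        history.append(list(flat_joints))
        if len(history) < 2:
            return
        # Velocity = newest frame minus oldest frame in the window.
        velocities = [a - b for a, b in zip(history[-1], history[0])]
        client.send_message("/wek/inputs", list(flat_joints) + velocities)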

////Procedure 

As in the previous research, I will use a Kinect2 for input, Wekinator for the neural network, and a small two-dimensional robot (Sapfundament) as output. This will allow me to directly compare results with the previous experiments.

For reference, the setup is: Kinect2 (input) → Wekinator (mapping) → Sapfundament robot (output).

For input we will use the 25 skeleton joints from a Kinect2 sensor. This is a reasonably common sensor in the interactive arts world. 
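As a minimal sketch of this input side, assuming Wekinator’s default input port (6448) and message address (/wek/inputs); how the joint data actually arrives from the Kinect2 is left out here:

    # Sketch: flatten 25 (x, y, z) joints into 75 floats for Wekinator.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 6448)

    def send_frame(joints):
        """Send one skeleton frame (25 joints) to Wekinator."""
        flat = [float(v) for joint in joints for v in joint]
        assert len(flat) == 75  # 25 joints x 3 coordinates
        client.send_message("/wek/inputs", flat)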


For mapping we will use Wekinator by Dr. Rebecca Fiebrink. Wekinator is free, available for Mac, Windows and Linux, and provides simple OSC input/output for working with neural networks.
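On the output side, here is a minimal sketch of receiving Wekinator’s results over OSC, assuming its default output port (12000) and address (/wek/outputs); drive_robot() is a hypothetical stand-in for whatever actually moves Sapfundament.

    # Sketch: listen for Wekinator's outputs and forward them to the robot.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def drive_robot(values):
        print("robot command:", values)  # placeholder for real motor control

    def on_outputs(address, *values):
        # One float per Wekinator output.
        drive_robot(values)

    dispatcher = Dispatcher()
    dispatcher.map("/wek/outputs", on_outputs)
    BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()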