I've successfully connected the iPhone app to the plugin and am getting facial responses. I've read the README on the GitHub page, but I'm still wondering two basic things:
1) Whether, in order to translate recorded facial movements into mocap float parameters, the user needs to assign certain morphs...
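To clarify what I mean by "translate into mocap float parameters", here's a minimal Python sketch of my current understanding. ARKit face tracking streams blendshape coefficients as floats in 0.0–1.0; I'm assuming the plugin needs each one routed to a morph on the target model. The `morph_map` names and the `apply_frame` helper are purely hypothetical, not the plugin's actual API:

```python
# Hypothetical mapping: ARKit blendshape name -> morph name on my model.
# (ARKit keys like "jawOpen" are real blendshape names; the morph names
# on the right are made up for illustration.)
morph_map = {
    "jawOpen": "MouthOpen",
    "eyeBlinkLeft": "Blink_L",
    "eyeBlinkRight": "Blink_R",
}

def apply_frame(blendshapes: dict) -> dict:
    """Translate one frame of recorded 0.0-1.0 coefficients into morph values."""
    return {morph: float(blendshapes.get(shape, 0.0))
            for shape, morph in morph_map.items()}

# One recorded frame from the app (values are examples)
frame = {"jawOpen": 0.73, "eyeBlinkLeft": 0.05, "eyeBlinkRight": 0.04}
print(apply_frame(frame))
```

My question is essentially whether the user has to set up something like `morph_map` by hand, or whether the plugin resolves it automatically.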