Wondering if anyone has experimented with using GPT output as a control input for VAM characters, beyond the literal chatbot case (I recall seeing a few of those on the Hub).
For example, somebody has created a plugin for Blender that takes natural-language input and generates a script for Blender to create whatever is requested, like "make me a physics demo of 400 stacked cubes."
It seems like it may be possible to craft a "prompt" for GPT that lets it drive transitions between character animations. My hunch is that this would require a plugin on the VAM side to provide the hooks for that input, but it seems well within the capability of these models, assuming OpenAI doesn't suss out what we're using the API for and ban us, ha.
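To make that concrete, here's a minimal sketch of just the GPT side in Python (a real VAM plugin would be C#, and every animation name here, plus the fallback behavior, is made up for illustration). The trick is constraining the model's reply to a fixed menu of animations the plugin already knows how to trigger:

```python
# Hypothetical sketch: ask GPT to pick the next animation from a fixed menu.
# The animation names are invented; an actual VAM plugin would need to
# expose a trigger for each of these.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANIMATIONS = ["idle", "wave", "sit_down", "stand_up", "walk_to_door"]

SYSTEM_PROMPT = (
    "You control a virtual character. Given the scene description, reply "
    "with exactly one animation name from this list and nothing else: "
    + ", ".join(ANIMATIONS)
)

def choose_animation(scene_description: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": scene_description},
        ],
    )
    choice = resp.choices[0].message.content.strip()
    # Fall back to a safe default if the model strays from the menu.
    return choice if choice in ANIMATIONS else "idle"

print(choose_animation("A visitor just entered the room."))
```

The plugin would then only ever see a known animation name, so a misbehaving or banned API response can't break the scene.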
Even setting aside GPT-4 specifically, this tech seems genuinely exciting for expanding the capabilities of VAM. I feel like VAM could be a really robust creative platform for letting GPT-like LLMs drive human-like actors.
edit: for reference, this Google project gives an LLM a "toolkit" of commands to control the motion of a robot. The LLM doesn't write the code that moves the robot in a particular way; it selects which commands to run, and when, to accomplish a task. I think this could be analogous to selecting prescripted behaviors/animations for characters in VAM.
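Mocked up in the same spirit (again just Python, with every command name invented for illustration): the prompt advertises a small toolkit of prescripted behaviors, and a parser on the receiving side maps each returned line to an existing trigger rather than executing anything the model wrote itself:

```python
# Hypothetical "toolkit" planner in the spirit of the Google robot work:
# the LLM never writes motion code; it only sequences prescripted behaviors.
# All command names are invented; a VAM plugin would map each one to an
# existing trigger or animation in the scene.
import re
from openai import OpenAI

client = OpenAI()

TOOLKIT = {
    "walk_to": "walk_to(target): walk the character to a named scene object",
    "play_anim": "play_anim(name): play a prescripted animation clip",
    "say": "say(line): speak a line of dialogue",
}

PLANNER_PROMPT = (
    "You sequence behaviors for a virtual character.\n"
    "Available commands:\n"
    + "\n".join(f"- {doc}" for doc in TOOLKIT.values())
    + "\nReply with one command per line and nothing else."
)

def plan(task: str) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PLANNER_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [ln.strip() for ln in lines if ln.strip()]

def dispatch(step: str) -> None:
    """Parse 'command(arg)' and hand it off; a real plugin would fire a trigger."""
    m = re.fullmatch(r"(\w+)\((.*)\)", step)
    if m and m.group(1) in TOOLKIT:
        print(f"-> run {m.group(1)} with arg {m.group(2)!r}")
    else:
        print(f"-> ignoring unrecognized step: {step}")

for step in plan("Greet the visitor and offer them a seat."):
    dispatch(step)
```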