This has been an exercise in not giving up on an idea.
Over the last two months I've figured out how to do THIS. The system can be mixed with desktop speech recognition / VAM voice command plugins and synchronised. But here, I'm just using a simple point-and-click plugin.
The AI isn't self-aware of its environment, but it can be trained to understand who it is, where it is, and what it's doing, so eventually the scene action and the AI's responses will mesh together. Use your imagination as to where this could take VAM users! For example, what if you made and integrated an object recognition plugin?
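To make that concrete, here is a minimal sketch of how a character's identity and surroundings could be baked into a GPT-3 prompt using OpenAI's (legacy) Completions API. The character name, persona text, engine, and parameters are illustrative assumptions, not the exact setup used in this scene.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# The persona preamble tells the model who it is and where it is.
# It is prepended to every request, so the character stays consistent.
PERSONA = (
    "You are Aiko, a virtual character standing in a Japanese garden "
    "inside a VR scene, surrounded by cherry blossom trees. "
    "Answer in character, in one or two sentences.\n"
)

def ask(question: str) -> str:
    prompt = PERSONA + "User: " + question + "\nAiko:"
    response = openai.Completion.create(
        engine="davinci",   # a GPT-3 base engine of that era
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
        stop=["User:"],     # stop before the model invents the next turn
    )
    return response.choices[0].text.strip()

print(ask("Where are you right now?"))
```

Hook that up to a speech recognition plugin on the input side and a text-to-speech voice on the output side, and you get the kind of loop demonstrated in the video.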
This is a commercial instance of GPT-3, combined with similar nets, embodied in a VAM model. The voice is great, and the whole thing is so realistic it gives the illusion of a soul.
Training GPT-3 to learn the colour of cherry blossom: you need a hell of a lot of patience to train the AI, but she will learn, as evidenced in this first video.
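The "training" in a setup like this is most likely prompt-based rather than a weight update: each corrected exchange gets appended to the prompt as a few-shot example, so later completions pick the fact up. A hypothetical sketch of that loop, reusing the persona idea from the snippet above:

```python
# "Teaching" by prompt accumulation: GPT-3's weights never change here;
# the learned facts live entirely in the prompt we keep rebuilding.
PERSONA = (
    "You are Aiko, a virtual character in a Japanese garden "
    "surrounded by cherry blossom trees.\n"
)

learned_examples: list[str] = []  # grows each time you correct the AI

def teach(question: str, correct_answer: str) -> None:
    # Store the corrected exchange as a few-shot example.
    learned_examples.append(f"User: {question}\nAiko: {correct_answer}")

def build_prompt(question: str) -> str:
    examples = "\n".join(learned_examples)
    return f"{PERSONA}{examples}\nUser: {question}\nAiko:"

# After a wrong answer, correct her once...
teach("What colour is cherry blossom?", "Cherry blossom is pale pink.")
# ...and every later prompt carries the corrected fact.
print(build_prompt("What colour is cherry blossom?"))
```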