Update on Embodied Conversational AI

Hedgepig

So, I have been very busy researching different ways to use VeeRifter's excellent Voice Control plugin to develop an embodied conversational AI. Here are a few things it can do:

This is enough functionality for most VAMers, but I decided to go one stage further, well, a lot of stages actually. I hired a professional speech recognition developer and gave him the brief to develop a desktop speech recognition engine with IBM Watson capabilities, fully compatible with VAM. It will be completed in mid-November and available for testing. Get ready for AI evolution, because it's about to begin, right here, soon.

Next project: Full object and facial recognition for VAM.

Massive thanks to VeeRifter and Mopedelampe
 
That's getting pretty impressive. Far better than I expected, certainly within VaM.
I can't wait to see how this goes!
 

Thank you! That's just a few of the capabilities using VeeRifter's unmodded plugin. The app that you guys will be getting in mid-November will do the following:


1. When there's no available answer, give a default response.
2. Distract the player by saying something random.
3. Do basic maths.
4. Trigger anything you can think of, in any combination.
5. Recognise when the player repeats something they have said before.
6. Connect to a website, voice FAQs, read content out loud, or just watch a video, etc.

All of the above will have their own UIs for entering the keywords, grammar, or website address.
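As a rough illustration of how a keyword-driven responder with those capabilities might work, here is a minimal sketch in Python. All names, keywords, and replies below are invented for illustration; this is not the actual app's code:

```python
import random
import re

# Hypothetical keyword-to-reply table (invented examples).
RESPONSES = {
    "hello": "Hi there!",
    "your name": "I'm Sue.",
}
# Random distractions for when the player goes off-script.
DISTRACTIONS = ["Did you see that?", "Anyway, nice weather today."]

def respond(utterance: str) -> str:
    """Answer a player's utterance: maths first, then keywords, then a fallback."""
    text = utterance.lower()
    # Basic maths: match e.g. "what is 3 + 4".
    m = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", text)
    if m:
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        ops = {"+": a + b, "-": a - b, "*": a * b,
               "/": a / b if b else float("nan")}
        return f"That's {ops[op]}."
    # Keyword match.
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    # No available answer: give a default, or distract with something random.
    return random.choice(["Sorry, I don't know that one."] + DISTRACTIONS)
```

In a real app each table would be filled in through its own UI rather than hard-coded.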

If a group of VAMers linked their AIs into a super module, then, with a growing number of modular connections, this would be a non-coder's way of building something with the capabilities of DeepMind. Hence a linking system is included in which we all build different conversations holding different memories: a rapid-to-assemble patchwork system for creating an artificial brain.

It will be FREE for anyone to use and develop, because if you want to build something like DeepMind on your own, this method will take years.
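The linking idea could be as simple as merging each contributor's conversation table into one combined module. A hypothetical sketch (module contents invented, not the actual linking system):

```python
# Two hypothetical conversation modules built by different VAMers.
module_a = {"favourite colour": "I love teal.", "hello": "Hi!"}
module_b = {"capital of france": "Paris.", "hello": "G'day!"}

def link_modules(*modules: dict) -> dict:
    """Merge conversation modules into one 'super module'.

    Later modules win when the same keyword appears twice.
    """
    combined: dict = {}
    for module in modules:
        combined.update(module)
    return combined
```

The design choice here is deliberate: because each module is just a keyword-to-reply mapping, anyone can contribute a piece of the patchwork without writing code.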

I think that's all I asked for. There might have been something else, but if there was, I can't remember.

Would you like anything else to be included?

I want to add natural language processing next.

"You don't need eyes, you need vision."
 
Very promising, looking forward to seeing this. I did not get what Elite Dangerous has to do with it.
 

You can use Voice Attack to play games like Elite Dangerous. Voice Attack uses voice commands to hit the keystrokes in the game. Every game has different keystrokes, and there are, I think, five games you can play with Voice Attack. Eden is one of the voices you can use in Voice Attack. Voice Attack can't trigger a look's/model's animations, behaviours, expressions and gestures, nor can the voice be lip-synced by the model. Basically, I jacked a Voice Attack audio clip (for educational purposes) to demonstrate that it can be done... and with any voice you can imagine or want to use, as long as you can record it as a playable audio file.
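The basic idea of mapping a recognised phrase to a game keystroke can be sketched like this. The command table below is a made-up example, not Voice Attack's internals:

```python
# Hypothetical table of spoken commands and the game keystrokes
# they trigger (invented bindings for illustration).
COMMAND_KEYS = {
    "deploy landing gear": "l",
    "open galaxy map": "g",
    "full throttle": "w",
}

def keystroke_for(command: str):
    """Return the keystroke bound to a recognised voice command, or None.

    Normalises the recognised text so casing and stray spaces don't matter.
    """
    return COMMAND_KEYS.get(command.strip().lower())
```

A real tool would then send that keystroke to the game window; each game ships with its own bindings, which is why every supported game needs its own table.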

Love Sue, BTW (Looks - Sue | Virt-A-Mate Hub (virtamate.com)). Beautiful, gorgeous art. Thank you. I'll do you a short voice interaction with her; you can use it for a promo if you want. I think she'd sound good as an Aussie. Thoughts?
 
Cool idea. As English is not my native language, it is hard for me to decide between Aussie, American, etc. But I like Cockney English :D ... maybe not on Sue. Your decision.
 
Down on the Beach Pt1.

Sue appears at 18 secs. Warning: Cliff-hanger. Pt 2 posted tomorrow. (R-rated)
 
I am definitely looking forward to all of this.
Once everything is working to everybody's expectations, we might start to add different personalities, different moods, advanced voice-changing settings and, of course, different languages. Won't we?
What a bright and shiny future! ;)
 

I've just experimented with one German accent, sounds like a Terminator. Scary fun.

And yes, we will figure out how to save conversations, personalities, moods, settings, etc., so we can swap them in the same way we currently exchange looks, scenes and assets.
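Swapping personalities and settings could work along the lines of the sketch below, serialising a persona to a JSON file the same way looks and scenes are shared today. The field names and values are assumptions, not a finalised format:

```python
import json

# Hypothetical persona format; the real app's schema may differ.
persona = {
    "name": "Sue",
    "accent": "Aussie",
    "mood": "cheerful",
    "language": "en",
    "memories": ["has tattoos"],
}

def save_persona(persona: dict, path: str) -> None:
    """Write a persona to a JSON file so it can be shared like a look or scene."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(persona, f, indent=2)

def load_persona(path: str) -> dict:
    """Load a previously shared persona back into the app."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

A plain-text format like this would let creators hand-edit moods and memories without any tooling.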
 
Incredible...
I'm wondering how she knew about her tattoos.

I'm glad you liked the presentation. Please feel free to use the videos 1 and 2 for promoting Sue and the rest of your work.

Sue knew about her tattoos because, in the shortest way to explain, I 'told' her about them.

Next year, I will pay a developer to make an advanced object and facial recognition app, and your models will be able to see you and everything else in a scene. In AR, you will meet your embodied AI models for real. No bull.
 