Character.AI + VaM?

DandyRabbit

Greetings all! I'm new to this here community. Just got VR last week and I've been immersed in VaM ever since. Incredible program!


Anywho, I'm typically the lurking type, but I just got this wild idea and couldn't help but pick the brains of the talented community here (and I'm certainly not the first to think of it):

So, I just spent the past couple of hours messing around with a popular AI site known as Character.AI, and hoo boy! I am truly astonished.
I was talking with the 2B AI in particular, and her responses to everything I said were basically pitch-perfect for her character.
This raises the question: are there any known projects that involve trying to plug VaM models into these AI archetypes? Because I imagine that would be something on another level.

While VaM is great for fulfilling sexual fantasies, it's clearly missing the companionship aspect that these AI characters can deliver.
VaM has excellent digital bodies, Character.AI has excellent AI personalities. Put two and two together, bada-bing bada-boom! 2+2=4. 4, in this case, being the perfect digital waifu and/or husbando.

It seems to me that all the components are already here to make this a reality; it's merely a matter of assembling them in the correct fashion. Then again, I am not a programmer, so I admittedly don't grasp the scope of the hurdles that would have to be surmounted.


Just some food for thought!
 
Hey DandyRabbit, thanks for bringing the Character.AI site into the Hub forum. I had never heard of it. I took a look and it is very interesting. I talked to the psychologist bot for a while, and I swear I don't know why I spend $200 an hour for a real one. The Character.AI one only has a master's degree, LOL. Really, thank you; my kids will like it as well, as it has a lot of anime characters.

Welcome to VaM. It's an incredible VR experience in many ways; it is truly a VR sandbox art form, IMHO, and it will definitely keep getting better. In fact, the guy you have been chatting with is a rising star doing just that: making VaM better! If you have not tried his plugins, they are a must. He is knowledgeable and his opinion carries weight.
 
I was curious about this as well, given the recent developments in both decent text generation models and voice generation models that you can run locally:
https://github.com/KoboldAI/KoboldAI-Client

You'd need a beefy computer to run both of these and VaM at the same time with decent results, so I settled on using ElevenLabs voice synthesis and piped in output from a local Kobold instance to produce this PoC (and curiosity satisfier): https://files.catbox.moe/sgwpx7.webm
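
For anyone curious what that pipe looks like, the core is just two HTTP calls. Here's a minimal Python sketch, assuming a KoboldAI instance on its default port; the ElevenLabs key, voice ID, prompt, and output file are all placeholders:

```python
import requests

KOBOLD_URL = "http://localhost:5000/api/v1/generate"  # KoboldAI's default local REST endpoint
ELEVEN_URL = "https://api.elevenlabs.io/v1/text-to-speech/your-voice-id"  # placeholder voice ID
ELEVEN_KEY = "your-api-key"  # placeholder API key

def generate_text(prompt):
    # Ask the local Kobold instance to continue the prompt.
    resp = requests.post(KOBOLD_URL, json={"prompt": prompt, "max_length": 120})
    resp.raise_for_status()
    return resp.json()["results"][0]["text"]

def synthesize(text, out_path):
    # Send the generated text to ElevenLabs and save the returned MP3 audio.
    resp = requests.post(ELEVEN_URL, headers={"xi-api-key": ELEVEN_KEY}, json={"text": text})
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    reply = generate_text("You: Hello there!\nHer:")
    synthesize(reply, "reply.mp3")
```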

Unfortunately, after refining the script a bit by making the requests asynchronous, text generation time skyrocketed, since my machine was now both rendering VaM and generating text. So I looked at using an external API for both text and audio.

Character.AI doesn't currently have an exposed API, so I tried using ChatGPT instead. However, I'm running into difficulties making API requests from VaM, and I believe it's because the ChatGPT API only supports TLS 1.2+, which the older .NET runtime VaM runs on may not negotiate by default. I'm not too familiar with .NET, so my reasoning may be flawed.
There are issues with remote generation, including availability, moderation, and price. Ideally you would spin up your own models locally, but that would also necessitate dedicated hardware solely for this purpose.

I think leveraging a sophisticated language model could be key to encoding interactions and emotional sentiment directly in the response in order to produce dynamic behavior. At the moment, prompts are seeded with some initial information about the character, including their name and a short description. However, you could potentially add "rules" to the prompt that the model would abide by, similar to how ChatGPT and Bing Chat encode the rules that restrict their chatbots' output.
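
To make that concrete, here's a minimal Python sketch of what such rule seeding could look like. Everything in it is hypothetical: the atom names, emotion list, and tag format are made up, and in a real plugin the scene atoms would be read from VaM itself.

```python
# Hypothetical scene data; in a real plugin these would be queried from VaM.
SCENE_ATOMS = ["Couch", "Lamp", "WineGlass"]
EMOTIONS = ["happy", "curious", "annoyed", "flirty"]

def build_prompt(name, description, user_message):
    """Seed the model with a character description plus behavioral rules."""
    rules = (
        f"You are {name}. {description}\n"
        "Rules you must follow:\n"
        f"1. You may only interact with these objects: {', '.join(SCENE_ATOMS)}.\n"
        f"2. Start every reply with one emotion tag such as [emotion:happy], "
        f"chosen from: {', '.join(EMOTIONS)}.\n"
        "3. If you interact with an object, add a tag like [action:pick up WineGlass].\n"
    )
    return f"{rules}\nUser: {user_message}\n{name}:"

print(build_prompt("2B", "A stoic android with a dry wit.", "What are you up to?"))
```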

As an example, these rules could govern which items in the scene the character can interact with and how, or which emotional states it can use within its responses. If we construct these prompts within the scene, then we also have access to any created atoms and could include them in the prompt. The response could then be interpreted on the VaM side to execute the interaction. I'm not fully aware of how Reanimate + Replika achieves this; I'm assuming they look for key words or phrases in the chatbot's response to determine whether the actor should take an action. With a sufficiently robust language model, however, these cues could be encoded directly in the response itself, which could enable more dynamic behavior (see the parsing sketch below).
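
A matching sketch of the interpretation side, again in Python just to show the shape (an actual VaM plugin would do this in C#): strip the tags out of the reply text and hand them to whatever drives the animations. The tag format is the hypothetical one from the previous sketch.

```python
import re

# Matches tags like [emotion:happy] or [action:pick up WineGlass].
TAG_RE = re.compile(r"\[(emotion|action):([^\]]+)\]")

def parse_reply(reply):
    """Return (spoken_text, tags), where tags are (kind, value) pairs."""
    tags = TAG_RE.findall(reply)
    spoken = TAG_RE.sub("", reply).strip()
    return spoken, tags

spoken, tags = parse_reply("[emotion:curious] [action:pick up WineGlass] What's this?")
# spoken -> "What's this?"
# tags   -> [("emotion", "curious"), ("action", "pick up WineGlass")]
```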
 
If this type of thing becomes a plugin, I'd pay for it. The concept of a chat AI or action-responsive AI would be pretty special for many reasons. Having a few random-motion plugins, glance plugins, breathing plugins and the like adds so much already. A plugin where the atom reacts to things you do, noises you make, or your speech would be pretty interesting. (Even if this does feel a little like an adult Tamagotchi) :p
 
Hi :) I'm wondering if there is a tutorial or something on how I can try out your conversational AI, because I can't find it. Thankful for any help :)
It's something really awesome you are making, I must say :)

This has since become Voxta. You should be able to easily find some scenes and the plugins on the Hub.
 
Finding this a year after it was posted: with locally hosted character bots like TavernAI/KoboldAI, the issues of latency and APIs are gone. This will happen, and I agree it will attract the users from char.ai and all the dirty chatbots to VaM, resulting in tons of new users.
We don't have a way to plug the chat into VaM yet, but sending requests? {CHECK} Speech synthesis with ElevenLabs? {CHECK}

My contribution to this idea would be to ignore the text and just grab an audio clip from ElevenLabs and use it as a {head} audio source.

Capturing audio/speech from the player seems like the harder part, because exporting audio from VaM isn't something I've seen much of. I think a separate program running behind the scenes would listen to the mic rather than VaM, transcribe it, send the text to the local TavernAI, pass the character's reply to ElevenLabs to get the audio, and then save that to a local file. VaM could then watch for new files and push each one as a new {head} audio source (a rough sketch of this loop is at the end of this post).

This is my version of 2+2=4: I haven't written an audio transporter program like this, but I've used the components enough to see how it would work.
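
To flesh that 2+2 out, here's a minimal Python sketch of the loop described above. It's an assumption of how the pieces could fit, not a working transporter: it uses the SpeechRecognition package for the mic, assumes a KoboldAI-compatible backend on its default port (TavernAI itself is a frontend to such a backend), and the ElevenLabs credentials and watched folder are placeholders.

```python
import requests
import speech_recognition as sr

KOBOLD_URL = "http://localhost:5000/api/v1/generate"  # local backend TavernAI would talk to
ELEVEN_URL = "https://api.elevenlabs.io/v1/text-to-speech/your-voice-id"  # placeholder voice ID
ELEVEN_KEY = "your-api-key"    # placeholder API key
WATCH_DIR = "C:/VaM/chat_audio"  # hypothetical folder VaM polls for new clips

recognizer = sr.Recognizer()
turn = 0

while True:
    # 1. Listen to the mic and transcribe the player's speech.
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        continue  # nothing intelligible; keep listening

    # 2. Send the transcription to the local text generation backend.
    resp = requests.post(KOBOLD_URL, json={"prompt": f"You: {text}\nHer:", "max_length": 120})
    reply = resp.json()["results"][0]["text"]

    # 3. Turn the character's reply into speech with ElevenLabs.
    tts = requests.post(ELEVEN_URL, headers={"xi-api-key": ELEVEN_KEY}, json={"text": reply})

    # 4. Drop the clip where VaM can pick it up as a new {head} audio source.
    turn += 1
    with open(f"{WATCH_DIR}/reply_{turn:04d}.mp3", "wb") as f:
        f.write(tts.content)
```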
 