Imposter (Not Using OpenAI anymore)

twinwin submitted a new resource:

Imposter (Connects to ChatGPT) - Connects VAM to ChatGPT

I am new to VAM (as a creator).

This is meant as a developer preview plugin to help us get started on using large language models, but you can enjoy it as you wish.

To use this plugin, you need to follow the instructions on the Github page to run the needed local servers:

Everything is open source (MIT), except for the lipsync plugin that we have to use for the moment; I will be developing a new open source one, big...

Read more about this resource...
 
This is very interesting... Instead of running local text to speech, have you looked at using Azure's neural text to speech? It's free, and you can do a lot of different voices with emotions. It would require less dependency on the local machine. I created a guide for it here on the Hub.
 
Yes, Azure text to speech is very promising, and it provides the phonemes as well (for facial expressions). I will see how we can get rid of the local servers using Azure. While some people are 100% against using remote services because of privacy, others would appreciate the ubiquity.

I will keep you updated on my findings.
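As a sketch of the Azure route: the neural TTS service can be called over plain REST with an SSML body and a subscription key. The key, region, and voice name below are placeholders, not values from this thread; this is only a rough illustration, not the plugin's actual code.

```python
# Rough sketch of calling Azure neural TTS over REST (stdlib only).
# SUBSCRIPTION_KEY, REGION, and the voice name are placeholders.
import urllib.request

REGION = "eastus"              # placeholder Azure region
SUBSCRIPTION_KEY = "YOUR_KEY"  # placeholder subscription key
ENDPOINT = f"https://{REGION}.tts.speech.microsoft.com/cognitiveservices/v1"

def build_ssml(text, voice="en-US-JennyNeural"):
    """The TTS endpoint takes an SSML body selecting a neural voice."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">{text}</voice>'
        "</speak>"
    )

def synthesize(text):
    """POST the SSML and get audio bytes back (format set by the header)."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_ssml(text).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/ssml+xml",
            "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The viseme/phoneme timing mentioned above comes from the richer Speech SDK rather than this bare REST call, so a real lipsync integration would likely use the SDK instead.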
 
Frickin' awesome! Thanks!
 
This looks really cool, but the GitHub instructions make little sense to me. I have an OpenAI key, but beyond that, I'm scratching my head. Step-by-step instructions (for dummies) or a video runthrough of the GitHub steps would be much appreciated. Thanks for starting this project. Black Mirror stuff is around the corner. :-D
 
I have just updated the plugin; it now connects to a provisioned AWS server. The server is a bit slow and can be unusable if a lot of people connect to it at once.

For better performance, you should run local servers. I am working on improving the instructions, so please stay tuned.
 
This is amazing! I have been starting to run my own LLMs as I get the hardware together to build a separate system for it so I can still handle VR, with the single purpose of figuring out how to get it into VAM. Am I going to be able to eventually use my locally hosted AI's API?
 
Yes, you can simply swap the OpenAI API for your own local one.
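If your locally hosted model exposes an OpenAI-compatible chat endpoint (as llama.cpp and text-generation-webui servers can), swapping is mostly a matter of changing the base URL. A minimal stdlib sketch; the URL and model name are placeholders for whatever your local stack exposes:

```python
# Sketch: pointing a chat client at a local OpenAI-compatible server.
# BASE_URL and the model name are placeholders, not values from the plugin.
import json
import urllib.request

BASE_URL = "http://localhost:5000/v1"  # placeholder local endpoint

def build_chat_request(prompt, model="local-model"):
    """Build the same chat-completions payload the OpenAI API expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt):
    """Send one prompt to the local server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request and response shapes match OpenAI's, no other client code should need to change.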
 
The Discord link is out of date; it needs a fresh one. I'd like to help, as a developer!
 
What would be interesting is to start a conversation between two different characters, both powered by ChatGPT using different API keys to ensure it's 2 different sessions. Then you could literally sit back and watch them converse with each other.
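The two-character idea above boils down to keeping two independent chat sessions and feeding each reply to the other. A toy sketch of that loop; send() here is a stand-in for a real per-session API call, not anything from the plugin:

```python
# Toy sketch of two chatbots conversing: two separate histories,
# each reply becomes the other's next input. send() is a placeholder.
def send(history, message):
    # Placeholder for a chat-completion call using this session's history.
    history.append({"role": "user", "content": message})
    reply = f"[reply to: {message}]"
    history.append({"role": "assistant", "content": reply})
    return reply

def converse(opening_line, turns=4):
    """Alternate turns between characters A and B, returning the transcript."""
    a_history, b_history = [], []
    line = opening_line
    transcript = []
    for i in range(turns):
        speaker = "A" if i % 2 == 0 else "B"
        history = a_history if speaker == "A" else b_history
        line = send(history, line)
        transcript.append((speaker, line))
    return transcript
```

With two real API keys, send() would call the API with the given history, giving each character its own memory of the conversation.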
 
Have you contemplated trying to link it to something like llama? I have another computer on the local network that could run llama locally. I play with AI and build things like the SR6, but I am essentially code illiterate. If I could run the whole thing in-house, that would be dope, though.

That being said, I set up a VM running the orchestrator, TTS, and STT and everything, and when I try to use it, either with my server or the preconfigured one you set up with my OpenAI API key, it just says "could not connect". Very interested to try this out; if you could help me, it would be appreciated.

EDIT: I got it working with the local server, I guess. But she just kept saying "look to me and I will show you the way to true love". Then I said, "can you say anything other than 'look to me and I will show you the way to true love'?" The server said "I can't help you with that" in the log, but VAM crashed before she spoke.
 
Hello, the old version lacks a good AI; please stay tuned.
 
twinwin updated Imposter (Connects to ChatGPT) with a new update entry:

Imposter AI - (Jack and Emily)

This new version works out of the box; you may need to allow Internet access for the plugin, as well as whitelist this domain name: ec2-52-87-238-187.compute-1.amazonaws.com

If you are the only user, the interaction should feel like it is real time.

Emily can repeat herself, she is still learning.

Will post an update to the Github repository once everything is ready.

The server contains these services:
1- Speech to text.
2- Text to Speech.
3- Chatbot.
4-...

Read the rest of this update entry...
 
OMG, my brain is exploding right now thinking of all the possibilities! I never even thought about this! I am going to support this on Patreon for sure. Thank you for your work, you are a great man :)
 
I have finetuned several models using MRQ Tortoise and would gladly try them out. Questions:
Is there a way to input queries manually without using a microphone?
Can I use a combination of manual input in vam via virtual keyboard -> OpenAI API -> local TTS?
 

Hello, yes, you can disable the push-to-talk in the plugin configuration and only use the keyboard.
OpenAI is no longer supported, as it does not allow NSFW content; you can use the out-of-the-box model, which has very good quality.
If you want to run your own local model and TTS, please follow the GitHub link for instructions.
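The keyboard-only flow described above is just: typed text in, chatbot reply out, reply handed to TTS. A minimal sketch of that wiring; chat() and synthesize() are placeholders standing in for whatever local HTTP services you run, not the plugin's real functions:

```python
# Sketch of the manual-input pipeline: keyboard text -> chatbot -> TTS.
# chat() and synthesize() are placeholder stubs for your local services.
def chat(prompt):
    # Placeholder: forward the typed text to your chatbot server.
    return f"echo: {prompt}"

def synthesize(text):
    # Placeholder: send the reply to your local TTS server, get audio bytes.
    return text.encode("utf-8")

def handle_typed_input(prompt):
    """Run one turn: text in, reply text plus audio out, no microphone."""
    reply = chat(prompt)
    audio = synthesize(reply)
    return reply, audio
```

With push-to-talk disabled, the speech-to-text stage simply drops out of the chain and the typed text enters at the chat() step.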
 
I don't want to use my GitHub account to comment on this, so I'll put it here: the commands you listed don't install the Python package 'sentence_transformers'. You might want to add that to requirements.txt.
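Until the repo's requirements.txt includes it, the missing dependency can be added with one line. Note that the PyPI package name uses a hyphen even though the import name uses an underscore:

```
sentence-transformers
```

Adding that line to requirements.txt (or running pip install with the same name) should resolve the import error.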
 
Great job!
I hope this plugin can be combined with text-generation-webui.
 