Imposter (Not Using OpenAI anymore)

Someone has put a demo for it on reddit:
Update 3:

This new version works out of the box; you may need to allow Internet access to the plugin, as well as whitelist this domain name: ec2-52-87-238-187.compute-1.amazonaws.com

If you are the only user, the interaction should feel like it is real time.

Emily can repeat herself; she is still learning.

I will post an update to the GitHub repository once everything is ready.

The server contains these services:
1- Speech to text.
2- Text to Speech.
3- Chatbot.
4- NPC actions.
5- Chatbot choices.
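
For anyone wondering what one round trip through these services could look like from the plugin side, here is a rough Python sketch. The endpoint path, port, and JSON field names are my own assumptions for illustration, not the server's real API; the actual interface is the one documented on the GitHub page.

# Hypothetical round trip: upload recorded speech, get back recognized text,
# the bot's reply, a chosen NPC action, and the synthesized speech.
# The URL path, port, and field names below are assumptions, not the real API.
import requests

SERVER = "http://ec2-52-87-238-187.compute-1.amazonaws.com:8080"  # port is a guess

def talk(audio_path: str, persona: str = "Emily") -> dict:
    """One conversational turn: audio in, structured response out."""
    with open(audio_path, "rb") as f:
        resp = requests.post(
            f"{SERVER}/talk",                 # hypothetical endpoint
            files={"audio": f},
            data={"persona": persona},
            timeout=30,
        )
    resp.raise_for_status()
    # e.g. {"recognized": ..., "reply": ..., "action": ..., "speech_url": ...}
    return resp.json()

if __name__ == "__main__":
    result = talk("my_recording.wav")
    print(result["recognized"], "->", result["reply"])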

Triggers are not working yet.

Please enjoy while the server is not full.

Update 2:
I have provisioned an AWS server for around $60; it is a bit slow (~5s to respond), so please only use it for testing. For better performance, please run the local servers.
You still need to use your own OpenAI key, though.


Update 1:
Basically, you just talk and the bot responds; you can ask it to perform some actions, and it responds to touch.
The plugin looks for actions in: Scene Animation -> Triggers -> Add Animation Trigger -> Actions...
Please provide action names that can be recognized by the LLM.
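
To make "action names that can be recognized by the LLM" concrete, here is a minimal sketch of how the scene's trigger names could be handed to the model and how its choice could be mapped back to a trigger. The prompt wording and the example action names are assumptions for illustration, not the plugin's actual code.

# Sketch: offer the scene's animation-trigger action names to the model and
# map its reply back to one of them. The names below are hypothetical examples.
ACTION_NAMES = ["wave", "sit on couch", "dance", "look at player"]

def build_prompt(user_text: str) -> str:
    """Ask the model to answer in character and pick exactly one known action."""
    return (
        "You are Emily, a character in a 3D scene.\n"
        f"Available actions: {', '.join(ACTION_NAMES)}.\n"
        "Reply to the user, then on a new line write 'ACTION: <name>' using one "
        "of the available actions, or 'ACTION: none'.\n\n"
        f"User: {user_text}"
    )

def parse_action(model_reply: str) -> str | None:
    """Return the chosen action only if it matches a known trigger name."""
    for line in model_reply.splitlines():
        if line.upper().startswith("ACTION:"):
            name = line.split(":", 1)[1].strip().lower()
            if name in ACTION_NAMES:
                return name
    return None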

I am new to VAM (as a creator).

This is meant as a developer preview plugin to help us get started on using large language models, but you can enjoy it as you wish.

To use this plugin, you need to follow the instructions on the GitHub page to run the needed local servers:

Everything is open source (MIT), except for the lipsync plugin that we have to use for the moment; I will be developing a new open source one. Big thanks to the developer of lipsync.



How to use in game:
Push to talk: right click on mouse, or trigger on VR.
Send to LLM: sends your inputs to OpenAI.
Echo back: will repeat what you have said (recognized text).
Redirect Voice to Story: to write your own story using your voice.

This first version is only meant to be used by technical people and content creators.

I will be working on a version that works out of the box, but needs funding for the servers.

For the moment, we have 3 servers: the node orchestrator, the Text to Speech server, and the Speech to Text server.
I will be adding another open source Large Language Model server soon.
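
For readers wondering how these servers fit together, here is a minimal sketch of the orchestrator's role, with simple in-process stand-ins for the three services. The function names and return values are illustrative assumptions; the real servers are separate processes described on the GitHub page.

# One conversational turn through the pipeline: audio in -> recognized text,
# reply text, and synthesized reply audio out. The three helpers are stand-ins
# for the separate STT, chatbot, and TTS servers; names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Session:
    history: list = field(default_factory=list)   # running chat transcript

def speech_to_text(audio: bytes) -> str:
    # Stand-in for the Speech to Text server.
    return "hello, can you wave at me?"

def chat_reply(history: list, user_text: str) -> str:
    # Stand-in for the chatbot server (OpenAI today, an open LLM server later).
    return "Sure! *waves* Nice to meet you."

def text_to_speech(text: str) -> bytes:
    # Stand-in for the Text to Speech server; returns synthesized audio bytes.
    return b""

def handle_turn(session: Session, audio: bytes) -> dict:
    """What the node orchestrator conceptually does per push-to-talk press."""
    user_text = speech_to_text(audio)
    reply = chat_reply(session.history, user_text)
    session.history.extend([("user", user_text), ("assistant", reply)])
    return {"recognized": user_text, "reply": reply, "speech": text_to_speech(reply)}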


Some stats:
- Speech recognition + Speech generation => 1s.
- OpenAI response => 2s to 5s.
- Using OpenAI for a one hour session would cost around $1 (with the current config; a rough check of the arithmetic follows this list).
- The 3 servers are consuming around 6 GB of RAM.
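
The ~$1 per hour figure is consistent with gpt-3.5-turbo-era pricing; here is one way the arithmetic could work out. The per-exchange token count, conversation pace, and price are assumptions, not measurements from this resource.

# Back-of-the-envelope check of the ~$1/hour OpenAI cost estimate.
# Assumptions: one exchange every ~15 s, ~2,000 tokens per exchange once
# conversation history is resent in the prompt, ~$0.002 per 1,000 tokens.
price_per_1k_tokens = 0.002           # USD, assumed
tokens_per_exchange = 2_000           # prompt + history + reply, assumed
exchanges_per_hour = 3600 // 15       # one exchange every 15 seconds, assumed

cost_per_hour = exchanges_per_hour * tokens_per_exchange / 1000 * price_per_1k_tokens
print(f"~${cost_per_hour:.2f} per hour")   # ~$0.96, close to the quoted $1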


Still facing some performance issues, especially a frame lag when the microphone recording starts.

Please support me on Patreon: https://www.patreon.com/TwinWin. If you are a developer, please join me in making this happen; if you are a prompt engineer, please share your prompts with us.

Here is my discord server: https://discord.gg/GqN3STn8U8

Author: twinwin
Downloads: 8,445
Views: 21,856
Rating: 5.00 star(s), 15 ratings

Latest updates

  1. Add a subscene

    Add a subscene to allow you to use the plugin in any other scene. Please keep the same name...
  2. Imposter AI - (Jack and Emily)

    This new version works out of the box, you may need to allow Internet access to the plugin as...
  3. Imposter - Connects to ChatGPT

    I have provisioned an AWS server for around $60; it is a bit slow (~5s to respond), please only...

Latest reviews

The FIRST EVER AI on VAM! Paying tribute to the OG
Hello, excellent plugin. Do you know if I can use my own AI from "text-generation-webui" to connect with your plugin?
That's what I need. So good. Let's make actions, e.g. dance or other actions. And insert a second person for this (it must not be sex).
Great stuff, but I can't get the speech to text working and I don't know what I'm doing wrong. In push to talk, when I right click and talk into my microphone, it doesn't work.
Anyway, keep working on it, I appreciate it.
twinwin
Thanks. You have to choose the right microphone from the microphone list in the Imposter plugin configuration. Also, check that the microphone is working and not being used by another app.

If it is still not working, it might be that your microphone's configuration is not supported; can you try another microphone?
Amazing work, the response time has actually been better than stated in the resource. Can't wait to see where this is going.
twinwin
Thanks ;)
Really impressive, but there is a little problem when loading a look: even if send action and send trigger are unchecked, it always reloads the default look of your scene. Do you know how to fix this?
twinwin
Hello, thanks for the feedback. It is a bug that occurs when the plugin tries to trigger the face emotion morph change; as a workaround, you can just delete the actions from the Scene Animation's triggers.
Love the work you are doing, please keep it up!
twinwin
Thanks, appreciate it ;)
I haven't tried it because right now I lack an account and the technical understanding, but keep going! Mindblowing times. 👌
twinwin
Hello, the new version works out of the box; no need for an OpenAI account.
Amazing work! I've been playing with some of the less powerful LLMs, and a 3090 can generate some pretty complex responses with the right prompting. Would it be possible to include an option for connecting to local chat instances as well as AWS?
twinwin
Thanks, yes, everything is open source. I will be updating the Github repository with the latest code version.

Please note that the performance of VAM might be reduced when the LLM is working in the background on the same machine.
Holy FLip!