Large Language Model for VAM - looking for help

ReignMocap

So I've been experimenting with a Large Language Model trained for conversation, and the results are quite good, at least until it breaks down and starts to hallucinate, which happens to every bot eventually. I am running the LLM directly on my own hardware using my GPU, and the response time is fast enough to feel instant. The model is trained on lewd data, so it handles the kinds of conversations we VAM lovers are interested in. I am looking for a coder from the community who is willing to take on a task and help me run an experiment.
There is no way we could run VAM and the LLM locally on the same machine, but we can stream the data from one machine to another. As it happens, I have two machines: one running the bot and one running VAM.
I am looking for someone to help me attach a speech recognition module to the machine running the bot and a text-to-speech module to the machine running VAM. We could stream the audio data via Discord.
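To make the wiring concrete, here's a rough sketch of the two-machine hand-off over a plain TCP socket (shown instead of Discord just to illustrate the shape of it). The address, port, and the generate()/recognize()/speak() helpers are all placeholders for whatever modules end up being used:

```python
# Two-machine hand-off sketch. The LAN address, port, and the
# generate()/recognize()/speak() helpers are placeholders.
import socket

LLM_HOST = ("192.168.1.10", 5000)  # assumed LAN address of the LLM machine

# --- Machine A (runs the bot): answer each incoming line of text ---
def serve(port=5000):
    srv = socket.create_server(("", port))       # Python 3.8+
    while True:
        conn, _ = srv.accept()
        with conn:
            text = conn.recv(65536).decode("utf-8")
            reply = generate(text)               # placeholder: your LLM call
            conn.sendall(reply.encode("utf-8"))

# --- Machine B (runs VAM): ship recognized speech over, speak the reply ---
def ask(text):
    with socket.create_connection(LLM_HOST) as s:
        s.sendall(text.encode("utf-8"))
        s.shutdown(socket.SHUT_WR)               # mark end of request
        return s.recv(65536).decode("utf-8")

# The VAM-side loop would then look roughly like:
# while True:
#     reply = ask(recognize())   # recognize() = speech-recognition module
#     speak(reply)               # speak() = text-to-speech module
```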
It's a hack job, but it would at least give us a feel for the future. If someone is interested, please let me know.
 
Um, have you seen Voxta? And the ALM plugin? I don't get any credit for plugging this, but it's so awesome. And yes, with my setup I comfortably run the LLM, VAM, and TTS/STT just fine.


Processor: 12th Gen Intel(R) Core(TM) i9-12900K, 3.20 GHz base, OC @ 4.99 GHz
Installed RAM: 64.0 GB (63.7 GB usable)

@Acid-bubbles and @vaan20

Running koboldcpp, piper (TTS), and vosk (STT) locally.
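If anyone wants to see what gluing those three together looks like, here's a minimal single-turn sketch: vosk transcribes a WAV, koboldcpp's KoboldAI-compatible HTTP API (default port 5001) generates the reply, and the piper CLI speaks it. The model paths, voice file, and sampling values are my guesses, so swap in your own:

```python
# One turn of the loop: vosk (STT) -> koboldcpp (LLM) -> piper (TTS).
# Model paths, the port, the voice file, and sampling values are assumptions.
import json, subprocess, wave

import requests
from vosk import Model, KaldiRecognizer

def transcribe(wav_path):
    """Transcribe a 16 kHz mono WAV with vosk."""
    wf = wave.open(wav_path, "rb")
    rec = KaldiRecognizer(Model("vosk-model-small-en-us-0.15"), wf.getframerate())
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        rec.AcceptWaveform(data)
    return json.loads(rec.FinalResult())["text"]

def llm_reply(prompt):
    """Ask koboldcpp's KoboldAI-compatible endpoint for a completion."""
    r = requests.post(
        "http://localhost:5001/api/v1/generate",
        json={"prompt": prompt, "max_length": 120, "temperature": 0.7},
    )
    return r.json()["results"][0]["text"]

def speak(text, out_wav="reply.wav"):
    """Render text to a WAV with the piper CLI (voice file is an assumption)."""
    subprocess.run(
        ["piper", "--model", "en_US-amy-medium.onnx", "--output_file", out_wav],
        input=text.encode("utf-8"),
        check=True,
    )

speak(llm_reply("User: " + transcribe("mic.wav") + "\nBot:"))
```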

I run a 13B model and she stays pretty on track if you crank up the token size and add to the memory as needed.
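On the token size / memory point: koboldcpp's generate payload takes a max_context_length plus a koboldcpp-specific memory field (text pinned to the top of the context every turn), which is one way to keep her on track. The values below are illustrative guesses, not recommendations:

```python
# Pinning persistent character facts via koboldcpp's "memory" field
# and raising the context window. All values are illustrative guesses.
import requests

payload = {
    "prompt": "User: What were we talking about?\nBot:",
    "memory": "Eva is a playful companion who never breaks character.\n",
    "max_context_length": 4096,   # the "token size" to crank up
    "max_length": 120,
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```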
 
