
Large language model for VaM - looking for help

Um, have you seen Voxta? And the ALM plugin? I don't get any credit for plugging this, but it's so awesome, and yes, with my setup I comfortably run the LLM, VaM, and TTS/STT just fine.


Processor 12th Gen Intel(R) Core(TM) i9-12900K 3.20 GHz OC @ 4.99GHz
Installed RAM 64.0 GB (63.7 GB usable)

@Acid-bubbles and @vaan20
Running koboldcpp, Piper (TTS), and Vosk (STT) locally.
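For anyone wanting to try the same stack, a minimal sketch of launching koboldcpp locally. The model filename and port here are just examples, not from my actual setup; `--contextsize` and `--port` are real koboldcpp flags:

```shell
# Launch koboldcpp with a local 13B GGUF model (filename is an example).
# --contextsize raises the context window; --port picks where the local API listens.
python koboldcpp.py --model llama2-13b.Q4_K_M.gguf --contextsize 4096 --port 5001
```

Piper and Vosk then run as separate local processes; Voxta/ALM wires them together.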

I run a 13B model and she stays pretty on track if you crank up the token size and add to the memory as needed.
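If you're scripting against koboldcpp directly instead of going through Voxta, this is roughly what "add to the memory" looks like against its KoboldAI-compatible `/api/v1/generate` endpoint. A hedged sketch: the URL/port and the `memory` field are assumptions based on a default local install, so check them against your own instance:

```python
import json
import urllib.request

# Default local koboldcpp endpoint (port is an assumption; match your launch flags).
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt, memory="", max_length=120, max_context_length=4096):
    """Build a request body for koboldcpp's KoboldAI-compatible API.

    'memory' is text prepended ahead of the prompt each turn, which is how
    you keep the character on track; 'max_context_length' is the cranked-up
    token window mentioned above.
    """
    return {
        "prompt": prompt,
        "memory": memory,
        "max_length": max_length,
        "max_context_length": max_context_length,
    }

def generate(prompt, **kwargs):
    """Send a generate request to the local koboldcpp server."""
    data = json.dumps(build_payload(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        KOBOLD_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

Usage would be something like `generate("Hello!", memory="She is a sarcastic pirate.")` with the server running.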
