huggingface models

janet67

I was wondering how many others use the Hugging Face models for chat AI in VAM (https://huggingface.co/models), and which models you might recommend. I do not have the best setup, only an Nvidia RTX 3060 12 GB card, so I really struggle with the higher-parameter, higher-bit models. For me the 6B, 4-bit models are about my limit for good response time and playability, especially since I typically play with a lot of other plugins.

I have tried maybe 12 different ones, and the most interesting I have found so far, that I could get to work within my hardware limits, is Ancestral_Dolly_Shygmalion-6b-4bit-128g. The model works well, has good response time, and has a pretty varied personality, I guess depending on the card. She can be very sweet, but also quite bold and, at least one time, incredibly violent. I am not even sure what I said to bring it on, but I was quite shocked when she said she wanted to murder me and then went about describing the horrible ways in which she would do it. I am sure there are much bigger and better models, but like I said, my hardware limits me. Would love to hear some model recommendations.
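For anyone wondering why 6B at 4-bit is roughly the ceiling on a 12 GB card, a back-of-envelope estimate is weights ≈ parameters × bits ÷ 8 bytes, plus some overhead for the context cache and runtime (the overhead figure below is a rough assumption, not a measured number):

```python
# Rough VRAM estimate for a quantized model -- a sketch, not an exact figure.
# Real usage varies with context length, loader, and group size (e.g. 128g).
def estimate_vram_gb(params_billions, bits_per_weight, overhead_gb=1.5):
    """Weights only: params * bits / 8 bytes, plus an assumed fixed overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb

print(round(estimate_vram_gb(6, 4), 1))   # 6B 4-bit: ~4.5 GB
print(round(estimate_vram_gb(13, 4), 1))  # 13B 4-bit: ~8.0 GB
```

A 13B 4-bit model can fit in 12 GB on paper, but once VAM itself is using VRAM the headroom disappears fast, which matches the experience in this thread.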
 
Curious - what is this and what does it do?
I am using the Alive v45 plugin, which lets you connect VAM to oobabooga's chat AI; you need an Nvidia GPU card. Oobabooga uses Hugging Face for its models. For the voice you can use the free ones that come with Windows; there are multiple options for voices, and websites to get them. Right now I am using a few that are free trials, but there are some free options. The better your card and PC, the better the chat AI models you can download and use. My setup is not the best.

You can also use RiveScript to write your own chat AI and run it from your PC. I really like that as well; it's a lot of fun and you can create chat tailored to a specific scene. This is all pretty new stuff but quite impressive. I use my Oculus headset to talk with my mic and hear the VAM person reply. It's pretty cool. This is still in the early stages and has a bit of a learning curve, but if I can get it working, anybody can! I am still learning about oobabooga and Hugging Face, as the configuration of models and the model cards can make a difference.

A bonus is that the Alive plugin offers a lot beyond the chat AI capabilities. Here is the Hub URL: https://hub.virtamate.com/resources/alive.15483/ The plugin is now at v46, which I have not tried yet; I will wait for the next update, as v46 mainly pertains to AI imaging updates, which is cool, but I only have so much time to devote to VAM. LOL
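For the curious, the oobabooga (text-generation-webui) connection is just HTTP to a local server. A minimal sketch, assuming your install was started with its API enabled and exposes the OpenAI-compatible chat endpoint on port 5000 (the URL and port are assumptions; check your own setup):

```python
import json
from urllib import request

# Assumed local endpoint -- adjust host/port to match your oobabooga install.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def build_payload(user_text, max_tokens=200):
    """Package a single chat turn in the OpenAI-style request format."""
    return {
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

def chat(user_text):
    """Send one message to the local server and return the model's reply."""
    data = json.dumps(build_payload(user_text)).encode("utf-8")
    req = request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

# chat("Hello there!")  # only works with a local oobabooga server running
```

Plugins like Alive handle this plumbing for you, but seeing the raw request makes it clearer why the model, not the plugin, determines response quality and speed.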
 
Some I do use for kinky stuff:
TheBloke_airoboros-13b-gpt4-1.4-SuperHOT-8K-GPTQ
TheBloke_CAMEL-13B-Role-Playing-Data-SuperHOT-8K-GPTQ
TheBloke_WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GPTQ
anon8231489123_gpt4-x-alpaca-13b-native-4bit-128g

Combining Ooba, Eleven Labs and Speech Recognition in the Alive plugin is endless naughty fun ;)
 
I do not think I can run those, but maybe I will try. What is your setup and GPU? But I agree, it is pretty awesome.
 
I am on a 4070Ti, i7 13700k and 64GB DDR5 RAM. Biggest bottleneck is having “only” 12GB of VRAM
 
I tried a few of the ones you mentioned, but could not run them, or I need to play with the settings more. I just got an Nvidia card a few weeks ago and am still fine-tuning it. I have had some luck and can run 7B 4-bit models very well in some model configurations. I also seem to get quite a bit of difference between different android cards within Alive. Anyway, if you have time, can you try Ancestral_Dolly_Shygmalion-6b-4bit-128g? I just want to see if you find it interesting compared to the bigger 13B models you are using. I know it will not have the depth, but I would love to know what you think. BTW, I have an AMD CPU that reports as an Intel(R) Core(TM) i7-10700 @ 2.90 GHz, with 32 GB RAM and an RTX 3060 12 GB, so I have considerably less power than you.
 
When I use it together with Alive (or VAM in general), I also have to switch to 7B models; 12 GB is just not enough for that. Most of the mentioned models have a 7B equivalent.
I'll try the model later when I'm back from work.
 