"The future is NOW! amazing work" - vamX Patreon comment on the latest update
This AI solution is fully hosted, so it works out of the box. Setup time after download should take about 5 minutes.
New Voices for Chat AI
We now include a Verbatik voice subscription. We have a large (but limited) monthly supply of voice generation that greatly expands the number of voices you can choose from.
These Verbatik voices include MANY voices in non-English languages, so you can finally talk and get responses in almost any language. This is limited by what languages the Chat AI can handle, so try different Chat AIs if NSFW Ooba doesn't handle your language.
This is also a fun way to get cool accents: keep speaking in English, but set the voice to Spanish, Dutch, Italian, or another language, and the character will try to speak English using that voice.
New Chat AIs
Ooba II has been replaced with a better model (this was done around a week ago).
Chat models (LLMs) are defined by their parameter count. Larger models can give more complex responses because they have more learned information to draw on when generating. NSFW Ooba and Ooba II are 13B (13 billion parameter) models.
We've now added a 70B (70 billion parameter) model (select "Ooba 70B" from the NSFW Ooba drop down). The 70B model is slower to generate responses, but may give better results (though not always).
If NSFW Ooba isn't giving you the conversation you want, try Ooba II, Ooba 70B, or any Kobold Horde model.
You can now also use chat models through Kobold Horde. This adds more possible responses / Chat AIs, based on the Kobold cloud-hosted AIs (anyone can host any model, and anyone else can use it). Kobold Horde is slow, and even though vamX has priority access, you should expect to wait longer for responses (at least 7-10 seconds before the text response) when using the Horde.
When you choose Kobold Horde, it defaults to a model that we are hosting, which will generally be the fastest. You can also select any other model available on the Horde (we limit the models we display to those that should return a response within 30 seconds).
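For the curious, talking to the Horde directly looks roughly like this. This is a hedged sketch based on our reading of the public AI Horde text API (the endpoint, field names, and the anonymous "0000000000" API key are assumptions from that public API, not vamX internals):

```python
import json
import urllib.request

# Sketch of an AI Horde text-generation request. Field names follow the
# public AI Horde API; this is not how vamX talks to the Horde internally.
API_URL = "https://aihorde.net/api/v2/generate/text/async"

payload = {
    "prompt": "You are a friendly companion.\nUser: Hello!\nCompanion:",
    "params": {"max_length": 80, "temperature": 0.8},
    "models": [],  # empty list = let the Horde pick any available model
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"apikey": "0000000000", "Content-Type": "application/json"},
)

# Submitting returns a job id; you then poll .../v2/generate/text/status/<id>
# until a worker finishes. Uncomment to actually send the request:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp))
print(json.dumps(payload, indent=2))
```

The asynchronous submit-then-poll design is why Horde responses take several seconds: your prompt waits in a queue until a volunteer-hosted worker picks it up.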
Finally, if you want lightning-fast access to whatever LLM you want, you can now use your own hosted LLM instead of NSFW Ooba. This isn't a local solution: your LLM still connects with our action generation, voice generation, and vamX connections, but this way, if you want a fast, exclusive 70B model of your choice, you can host it on RunPod. We try to make this easy, but it is still an advanced feature for those who want to learn about LLMs.
Read the instructions here. If you are a super advanced user you can host somewhere else, but we will only support / help people get things running on RunPod.
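As a rough idea of what "your own hosted LLM" involves, here is a minimal sketch of querying a pod that exposes an OpenAI-style chat endpoint (which recent text-generation-webui builds can serve). The pod URL is a placeholder, and the endpoint path is an assumption about your setup, not a vamX requirement:

```python
import json
import urllib.request

# Hedged sketch: querying your own LLM pod on RunPod through an
# OpenAI-compatible chat endpoint. POD_URL is a placeholder you would
# replace with your actual pod's proxy address.
POD_URL = "https://your-pod-id-5000.proxy.runpod.net/v1/chat/completions"

payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 120,
}

request = urllib.request.Request(
    POD_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once your pod is actually running:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
#     print(reply)
print(json.dumps(payload, indent=2))
```

Because the pod is exclusively yours, there is no queue: response time depends only on your GPU and the model size you choose to run.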