AI Chat Plugin by vamX (doesn't require vamX, no dependencies) - 1.39




New Models (new AIs to chat with)

Support for Llama 3 models. Llama 3 is more efficient and generally provides better, smarter responses.

There is also now a Llama 3.1 70B model available.

Improved Existing Models (all existing AIs in vamX chat have been improved, including less repetition).

All models now do better with long conversations, especially Llama 3 - Lunaris, WestLake, Silicon Maid, and Llama 3 70B. These models all have 16k context, which means they can remember and respond well to long conversations.
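To give a rough sense of what a 16k context window means in practice, here is a minimal sketch (not vamX code; the message format and token counter are assumptions for the example) of trimming a chat history so that only the most recent messages that fit the token budget are sent to the model:

```python
def trim_history(messages, max_tokens, count_tokens):
    """Keep the most recent messages whose total token count fits the budget."""
    kept = []
    total = 0
    # Walk from newest to oldest so the most recent context is preserved.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    kept.reverse()
    return kept

def approx_tokens(text):
    # Crude estimate: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

history = ["hello there"] * 10
recent = trim_history(history, max_tokens=12, count_tokens=approx_tokens)
```

A larger context window simply means `max_tokens` is bigger, so fewer old messages get dropped and the AI "remembers" more of the conversation.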

As Always, Switch Models (AIs) to Improve the Conversation. If you are having a conversation with the AI and don't like the response, switch the model, then press Retry Last (or, if you are using NSFW Random AI, just press Retry Last). There are many models, and it's very easy to switch between them!
By request, we've added an Edit Character button to make it easy to scroll up and edit or summarize the personality (especially in VR where it can be hard to scroll). This button is next to Save Chat in the bottom right corner of the Web UI.

Editing the personality can be useful if you want to change the direction of the conversation without restarting the whole chat.
You can also press Summarize Personality to turn the whole chat history into an AI-generated summary. Summarizing the personality helps with long chats where the AI would lose track of what's important, or where you want to edit the history (if she gets confused, takes the conversation in a direction you don't want, or if you want to remove part of the conversation history).

Fixed a long-standing issue where she would (sometimes) say the same thing twice at the beginning of a response (if the response was split into parts and there was punctuation near the split).
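As an illustration of the kind of deduplication involved (a hypothetical sketch, not the actual vamX fix), one way to handle this when reassembling a split response is to drop a part's leading text when it repeats what was just spoken:

```python
def dedupe_join(parts):
    """Join response parts, dropping a part's leading text when it
    repeats the previously spoken part verbatim."""
    out = []
    for part in parts:
        part = part.strip()
        # If this part starts by repeating the previous part, cut the repeat.
        if out and part.startswith(out[-1]):
            part = part[len(out[-1]):].strip()
        if part:
            out.append(part)
    return " ".join(out)

# A split near punctuation could yield a repeated opening sentence:
merged = dedupe_join(["Hello there.", "Hello there. How are you?"])
```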
The AI Chat Web UI (the AI Chat window) is invisible from the back.

To make it easier to position and use, there are now buttons that cycle the Web UI to various locations (such as behind or to the left of the AI person).

After adding the plugin, in Person Plugins, press Open Custom UI... next to AI_CHAT_SERVER_AND_PERSONALITY_SET then use the new Cycle Web UI Location, Cycle Web UI Distance, or Select Web UI buttons to move the Web UI into place.

New in 1.39: Choose personality sets, improved scene compatibility, important tutorials.

Demo video shows AI Chat added to free ispinox scenes.



WHAT'S NEW (in 1.39)

Added the ability to choose personality sets. Added the ability to choose the US or EU server.

You can now choose other people in the scene to speak by changing the Speak As drop-down in the Web UI. Change the drop-down to Male or Threesome to quickly change who is speaking without reloading the Chat AI.

The narrator voice now works, allowing you to have text marked in () spoken in another voice. For example, if the AI writes: "I'd love to (Jenny pulls you closer)" you will hear the primary female voice say "I'd love to", and then a narrator, as if someone were reading the story, will say "Jenny pulls you closer".

You can also choose to set the main voice as a Narrator, so the AI isn't one of the people in the scene, but a sort of Narrator / God Voice / Other Viewer of the Scene.
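The split between spoken dialogue and parenthesized narration can be sketched in a few lines (an illustrative example, not the vamX implementation, which may handle edge cases differently):

```python
import re

def split_voices(reply):
    """Split an AI reply into (voice, text) segments, routing text in
    parentheses to a separate narrator voice."""
    segments = []
    # Capturing group keeps the (...) chunks in the split result.
    for part in re.split(r"(\([^)]*\))", reply):
        part = part.strip()
        if not part:
            continue
        if part.startswith("(") and part.endswith(")"):
            segments.append(("narrator", part[1:-1].strip()))
        else:
            segments.append(("main", part))
    return segments

segments = split_voices("I'd love to (Jenny pulls you closer)")
# Each segment would then be sent to the matching text-to-speech voice.
```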

AI Chat now has improved compatibility with VaM Moan and other plugins that control head audio. vamX AI Chat now prioritizes its own audio over externally playing head audio, so VaM Moan can be set to play a continuous breathing sound or moan without stopping the AI Chat from playing speech. There is also a button in the Custom UI to disable VaM Moan and Life.



TROUBLESHOOTING

Does the audio get cut off in the middle of playing? If so, there may be other plugins playing head audio for the same atom. In VaM, each person can only make one sound at a time (head audio). You can manually turn those plugins off, or disable them using this plugin's AI_CHAT_SERVER_AND_PERSONALITY_SET -> Open Custom UI...

Can't find the Web UI? It's invisible from behind. Get close to the person you put the AI Chat plugin on, and look behind you.

"The future is NOW! amazing work" - vamX Patreon comment on the latest update

This AI solution is fully hosted, so it works out of the box. Setup time after download should take about 5 minutes.


New Voices for Chat AI

We now have a Verbatik voice subscription included. We have a large (but limited) monthly voice-generation supply that greatly expands the number of voices you can choose.

These Verbatik voices include MANY voices in non-English languages, so you can finally talk and get responses in almost any language. This is limited by what languages the Chat AI can handle, so try different Chat AIs if NSFW Ooba doesn't handle your language.

This is also a fun way to get cool accents. Just continue to speak in English, but set the voice to Spanish, Dutch, Italian, or any other language to have the AI try to speak English using that voice, for a cool accent.

New Chat AIs

Ooba II has been replaced with a better model (this was done around a week ago).

Chat models (LLMs) are defined by their parameter size. Larger models can give more complex responses because they have more parameters to draw on when generating them. NSFW Ooba and Ooba II are 13B (13 billion parameter) models.

We've now added a 70B (70 billion parameter) model (you can select "Ooba 70B" from the NSFW Ooba drop down). The 70B model is slower to generate responses, but may give better results (not always though).

If NSFW Ooba isn't giving you the conversation you want, try Ooba II, Ooba 70B, or any Kobold Horde model.

You can now also use chat models through Kobold Horde.
This means additional possible responses / Chat AIs based on the Kobold cloud-hosted AIs (anyone can host any model, and anyone else can use it). Kobold Horde is slow; even though vamX has priority access to Kobold Horde, you should expect to wait longer for responses (at least 7-10 seconds before the text response) when using the Horde.

When choosing Kobold Horde, by default, it selects a Kobold Horde model that we are hosting. That one will generally be the fastest, but you can also select any Kobold Horde model available on the Horde (although we limit the models we display to those that would return a response within 30 seconds max).
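Under the hood, a Horde-style text request is just a prompt plus generation parameters. The sketch below builds such a payload; the field names follow the public AI Horde text API and are assumptions here (vamX constructs and sends this for you, so this is purely illustrative):

```python
def build_horde_request(prompt, model=None, max_length=120):
    """Build a Kobold Horde-style text-generation payload (illustrative;
    field names assumed from the public AI Horde API)."""
    payload = {
        "prompt": prompt,
        "params": {
            "max_length": max_length,       # tokens to generate
            "max_context_length": 4096,     # budget for the prompt itself
        },
    }
    if model is not None:
        # Restrict the request to one specific Horde-hosted model;
        # omitting "models" lets any available worker pick it up.
        payload["models"] = [model]
    return payload

# "some/model-name" is a hypothetical model identifier.
req = build_horde_request("You are Jenny. User: Hi!", model="some/model-name")
```

Omitting the `models` field is what "any Kobold Horde model available on the Horde" corresponds to: any worker whose model matches can serve the request.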

Finally, if you want to host your own model, for lightning-fast access to whatever LLM you want, you can now use your own hosted LLMs instead of NSFW Ooba. This isn't a local solution: your LLM still connects with our action generation, voice generation, and vamX connections, but this way, if you want a fast, exclusive 70B model of your choice, you can host it on RunPod. We try to make this easy, but it is still an advanced feature for those who want to learn about LLMs. Read the instructions here. If you are a super advanced user you can host somewhere else, but we will only support / help people get things running on RunPod.