New Models (new AIs to chat with)
Support for Llama 3 models. Llama 3 is more efficient and generally provides better, smarter responses.
You can try the Llama 3 - Lunaris model (the new default). This model is fast, smart, and has a long memory (16k context).
There is also now a Llama 3.1 70B model available for slower (but often better) responses.
Improved Existing Models (improved all existing AIs in vamX chat)
Fixed some issues to improve results from all models (including less repetition).
Better Memory / Conversations Over Time
All models will now handle long conversations better, especially Llama 3 - Lunaris, WestLake, Silicon Maid, and Llama 3 70B. These models all have a 16k context, which means they can remember and respond well to long conversations.
As Always, Switch Models (AIs) to Improve the Conversation. If you are having a conversation with the AI and don't like a response, switch the model and then press Retry Last (or, if you are using NSFW Random AI, just press Retry Last). There are many models, and it's very easy to switch between them!