So I've been experimenting with a Large Language Model trained for conversation. The results are quite good, at least until it breaks down and starts to hallucinate, which happens to every bot eventually. I'm running the LLM directly on my own hardware using my GPU, and the response time is practically instant. The model is trained on lewd data, so it handles the kind of conversations we VAM lovers are interested in. I'm looking for a coder from the community who is willing to take on an experiment.
There's no way we could run VAM and the LLM on the same machine locally, but we can stream the data from one machine to another. It so happens that I have two machines: one running the bot and one running VAM.
I'm looking for someone to help me attach a speech recognition module to the machine running the bot, and a text-to-speech module to the machine running VAM. We could stream the audio data via Discord.
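To give a rough idea of the plumbing between the two machines, here is a minimal sketch. It's purely illustrative: it swaps the Discord audio stream for a plain TCP text relay on localhost, the addresses are hypothetical, and the LLM call is a placeholder echo. The real version would carry audio (or recognized text) between the two boxes instead.

```python
import socket
import threading

# Hypothetical address; the real setup would use the LAN IP of the bot machine.
HOST, PORT = "127.0.0.1", 5050

def bot_machine(server_ready):
    """Sketch of the LLM side: receive recognized text, send a reply back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        server_ready.set()
        conn, _ = srv.accept()
        with conn:
            prompt = conn.recv(1024).decode("utf-8")
            # Placeholder for the local LLM call; here we just echo the prompt.
            reply = f"LLM reply to: {prompt}"
            conn.sendall(reply.encode("utf-8"))

def vam_machine():
    """Sketch of the VAM side: send the recognized speech as text,
    get back the reply that would be fed into text-to-speech."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"Hello from VAM")
        return cli.recv(1024).decode("utf-8")

# Run both ends in one process just to demonstrate the round trip.
ready = threading.Event()
t = threading.Thread(target=bot_machine, args=(ready,))
t.start()
ready.wait()
reply = vam_machine()
t.join()
print(reply)
```

In the actual experiment, the speech recognition module would produce the text sent on the VAM side, and the reply would be handed to the TTS module; Discord would simply be the transport instead of the raw socket shown here.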
It's a hack job, but it would at least give us a feel for the future. If someone is interested, please let me know.