koboldlink

bot1789 submitted a new resource:

koboldlink - AI responses from a VaM character using koboldcpp and SPQRTextAudioTool

This simple plugin allows you to send requests from VaM to a locally running koboldcpp (on the same computer or on another computer in the same local network) and voice the responses using game audio sources. The plugin can be added to Person and AudioSource atoms. Another feature is that requests to the AI can be sent not only by pressing the “Send Message” button in the plugin menu, but also by pressing in-game UI Buttons. In this case, the full text of the button is sent as a request...
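For readers curious what "sending a request to koboldcpp" involves under the hood, here is a minimal sketch in Python of the kind of call the plugin makes. The endpoint and response shape follow koboldcpp's /api/v1/generate API; the helper names and sampler values are illustrative assumptions, not the plugin's actual code.

```python
import json
import urllib.request

# koboldcpp's default local endpoint; change host/port if it runs on another machine
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt, max_length=120):
    """Assemble a minimal generation request for koboldcpp."""
    return {
        "prompt": prompt,          # full text sent to the model
        "max_length": max_length,  # number of tokens to generate
        "temperature": 0.7,        # sampling temperature (illustrative value)
    }

def ask_kobold(prompt):
    """POST the prompt and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        KOBOLD_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # koboldcpp answers with {"results": [{"text": "..."}]}
    return body["results"][0]["text"]
```

The actual plugin is a C# VaM script, but the request/response cycle is the same idea.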

Read more about this resource...
 
Is it possible to use alltalk instead of SPQRTextAudioTool? I prefer the more human-sounding voices (and the fact that you can train new ones fully, or just use a wav as a sample) ( https://github.com/erew123/alltalk_tts ). Also, I know it's called Koboldlink, but will it work with TextGenWebui in API mode too?
 
Thank you for your comment!
Indeed, SPQRTextAudioTool uses old-school SAPI voices. They are not very expressive, but the TTS works very quickly even on a weak PC. Regarding alltalk_tts, thank you for the advice; I will try to understand how easy it is to call it from VaM. If it is not too difficult, I will try to add it to this plugin.
I'm only afraid that a neural-network TTS will work much slower than SPQRTextAudioTool, and the delay before the audio file is generated will be much longer.

Regarding TextGenWebui (aka oobabooga), there is already a well-developed, currently free-to-use plugin that works with it. It is SPQR Alive:
Have you tried it yet?
 
bot1789 updated koboldlink with a new update entry:

v2

Small update:
- Added a speech-bubble display. Use the "SpeechBubbleOn" toggle to enable/disable the option, and the "SpeechBubbleDuration" slider to adjust how long the speech bubble is shown.
- Fixed a bug where a UIButton atom that has a parent could not be associated with the plugin. Many thanks to everlaster, who suggested the solution.
- Added the use of square brackets to separate the AI request from the rest of the button text. It is useful for not including the name of the...

Read the rest of this update entry...
 
bot1789 updated koboldlink with a new update entry:

v3

Minor update to the plugin's user interface and bug fixes:
- Added an "AIButtonsOn" toggle to allow/disable sending requests to Kobold using UIButtons;
- The AI-driving buttons are now checked when the plugin is added to an atom;
- The {char} and {user} placeholders are replaced with {{char}} and {{user}} to match the format of the character cards used in TavernAI and other similar programs;
- The lip-sync settings (on/off and the volume multiplier) specified in the pref.json file are set...
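The {{char}}/{{user}} placeholders mentioned in this update are plain string templating, as in TavernAI-style character cards. A minimal sketch (the helper name is illustrative, not from the plugin):

```python
def fill_card(template, char_name, user_name):
    """Replace TavernAI-style placeholders in a character card or prompt."""
    return template.replace("{{char}}", char_name).replace("{{user}}", user_name)
```

For example, `fill_card("{{char}} smiles at {{user}}.", "Eva", "Alex")` substitutes both names into the template before the text is sent to the model.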

Read the rest of this update entry...
 
bot1789 updated koboldlink with a new update entry:

v4

Update of memory and chat-history management:
- Added a text field displaying the chat history;
- The chat history is saved automatically to the file chat_history.txt in the Custom/PluginData/koboldlink directory;
- Added buttons for clearing the chat history and for loading it from chat_history.txt (this can be used to continue an old chat after restarting VaM);
- Added an "Update memory" button to load new content of the author's notes and the main memory block from files without reloading the plugin;
- The memory is...

Read the rest of this update entry...
 
Hi!
I just wanted to say thanks for your work!
The plugin is nice; I was waiting for a simple addon like this for a long time.

Although I had one problem: I could not make it speak yet. SPQR TextAudioTool is installed and says it is ready. I also edited pref.json to correct the audio path, and now I get no error messages after posting text. The character is enabled for lip sync, yet it remains silent. I am a bit stuck at the moment. Any recommendation? I add the kobold plugin to the Person atom.

Also, if that gets solved, I was wondering whether the voice could be changed? I use W11, and it already has quite good neural voices under Narrator. I also found different SAPI voices on the net.
 
Thank you very much for your support!

Regarding the non-working TTS, can you clarify the following details:
1) Is the sound file "tts_output.wav" with the correctly voiced AI response saved successfully in the directory .../SPQR.TextAudioTool/files?
2) When you press the "Repeat AI response" button, does the Person atom also not speak anything (in this case the previously saved tts_output.wav from the directory .../SPQR.TextAudioTool/files should play)?

To tell the truth, even for me the voicing does not always work stably, sometimes even causing the game to crash. Most likely, the problem is that the sound clip is imported into VaM in a non-optimal way. I am working on this...

I also still haven't figured out how to make arbitrary SAPI voices visible to the SpeechAudioTool. For example, I installed several female English voices using the Windows speech settings (Microsoft Catherine, Microsoft Susan, Microsoft Zira, Microsoft Hazel), but the tool can only see Zira and Hazel (you can check which voices it sees, and their names, by entering http://127.0.0.1:7069/voices in your browser or by using the "demo.html" browser application). You can try asking this question to the developer of the tool (SPQR) on Patreon or GitHub. He seems to be active; maybe he will explain something. If you manage to find out anything about this, please let me know as well.

Also note that in the latest version of the SpeechAudioTool, support for ElevenLabs has been added, but as I understand it, this is a paid option.
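The voice-list check mentioned above can also be scripted instead of opened in a browser. A sketch in Python (the endpoint is the one quoted in the post; that it returns JSON is my assumption — adjust the parsing to what the tool really sends):

```python
import json
import urllib.request

# SPQR TextAudioTool's local voice-list endpoint, as quoted above
VOICES_URL = "http://127.0.0.1:7069/voices"

def list_voices(url=VOICES_URL, timeout=5):
    """Fetch the list of SAPI voices the tool can actually see."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)  # assumed JSON; inspect the raw body if this fails
```

Printing the result makes it easy to compare what the tool sees against the voices installed in Windows.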
 
Ahh, I assumed it could be something local, maybe another plugin, or simply that my copy of VaM is bloated.
I will tinker with it.

On the other hand, if the TextAudioTool can handle the ElevenLabs API, that's nice, because it has a free tier too. If I remember correctly, you get 10,000 characters per month or so, and you can also set 2 or 3 voices as favourites. The quality is also outstanding. You should definitely give it a try.
So do I understand correctly that your plugin could use that too?!
 
Also note that if the plugin works correctly, you should see the receipt of the request in the TextAudioTool window (see the screen capture): "Received text to speak..."


I didn't know that ElevenLabs has a free tier. Thank you! I will try it then. My plugin doesn't support this yet because it's a different HTTP request with different parameters. I'll try to add it in the future.
 
TextAudioTool works, and the folder is also set; it's just that in VaM the person is not speaking.
I tried simpler scenes, but still no success.
It would probably be easier if the ElevenLabs API could be used.

However, I have a recommendation. Depending on their quality, LLMs start to hallucinate from time to time, or you may just want to take the conversation down a different track. So a "back" or "regenerate" button would be handy when you want to keep the existing chat history but need to adjust something.

Another thing I was thinking about, though not directly tool-related: I started to use the UI button function; is it possible to separate its text-editing field? I mean, now I click on the UIButton atom to send a message, but to edit the text I still have to write it in the usual VaM UI of that atom. It would be nice to have an atom in the scene to use like a textbox for writing the message, and then press the UIButton to send it. There might be a tool for this that I just haven't come across yet.
 
I still don't understand why the voicing doesn't work at all in your case. I've already tried my plugin on 3 different PCs, and so far speaking has worked everywhere. Sometimes the voice line doesn't play because the processing delay is too long, but then I click "Repeat AI response" and it plays after all.
I'll try to prepare a more detailed video guide on using the SpeechAudioTool together with my plugin. I hope it will help.

I agree that the "back" button is essential. I'm going to add it later.

Regarding the text input field, other users have also requested that it be added. I agree that it is necessary. However, I still cannot find an example of how to implement it in VaM without using the WebBrowser or WebPanel atoms. If you find an example of a scene where such a text input field is implemented, please let me know.
 
With a quick search, I think something like asco's UI Undocker would be good.
link - https://hub.virtamate.com/resources/plugin-ui-undocker.37716/

He said it probably won't work in VR, but that would require blind typing anyway.

I also came across a thread where hazmhox asked something similar and Acid gave a tip:
thread - https://hub.virtamate.com/threads/coding-canvas-and-layers-in-vam-text-inputs.2839/#post-7192

Or Acid's Keybindings might be useful - https://hub.virtamate.com/resources/keybindings.4400/
 
Thank you for your help! The Undocker plugin looks promising; I want to try it!
Regarding post 7192, I read it before, but I think they only talked about text fields in plugin UIs...
I agree that the Keybindings plugin is really useful. To tell the truth, it is one of my default session plugins.
 
Yes, you're right, the undocker plugin really helps free up screen space without completely hiding the dialog box UI.
 
I'm glad. Probably a bit of plugin UI redesign could further enhance its use. ;)
Thank you. I understand that the UI redesign is necessary. The problem is that it is not so easy to make separate tabs in VaM scripts. I hope I will do it anyway.
 
I just noticed that SPQR released a new plugin named Virtainput, which claims to add text input to VaM plugin UIs. I did not test it, but I assume it could work for yours too, if the UIButton text field could be separated.
 
bot1789 updated koboldlink with a new update entry:

v5

Update:
- A speech-to-text option is added (STT, uses SPQRTextAudioTool). Press "Record microphone (SST)" to record your message, then press the "Send SST Result" button to send it to Kobold. On Windows it uses the default microphone;
- The actions corresponding to the "Send Message (Generate AI Response)", "Record Microphone (SST)" and "Send SST Result" buttons have been made available to external triggers. This can be used to call the actions from in-game triggers or by pressing certain keyboard...

Read the rest of this update entry...
 
Thank you for telling me! I've checked the Virtainput plugin. Unfortunately, it implements text input fields only in plugin UIs (like the text inputs in the UI of my plugin). It still doesn't show how to implement a text input on a game-object canvas.
 
This looks AWESOME @bot1789!

Did I miss where input to the AI can be audio as well as text and if so is it difficult to setup? Thanks!
 
I'm not sure I understood your question correctly.
Do you mean the way to send a message to the AI?
If so, please check the video:

If you mean the form of the AI response, by default it should be both text and audio. For the audio response, the SPQR TextAudioTool must be launched together with VaM.
For the text response, make sure that the "SpeechBubbleOn" toggle is on.
 
This answers my question, thank you!
 
I just want to let you know that this plugin can work with a runpod koboldcpp pod.
Set the Kobold port to TCP before you deploy, and it gives you an IP and a forwarded port.
Copy them into the plugin UI instead of localhost, and it works!
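In other words, only the base address of the koboldcpp API changes between a local and a remote setup. A sketch (the IP and port below are example values, not real runpod ones):

```python
def kobold_endpoint(host="localhost", port=5001):
    """Base URL of the koboldcpp HTTP API; 5001 is koboldcpp's default port."""
    return "http://{}:{}/api/v1".format(host, port)

# Local default vs. a runpod-style public IP with a forwarded port:
local_api = kobold_endpoint()                       # localhost, default port
runpod_api = kobold_endpoint("203.0.113.7", 40125)  # example remote address
```

Everything else (the request payload, the response parsing) stays the same.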
 
Thank you! I'm also interested in it. I will try it.
 