SallySoundFX is designed to automate visualizations based on the audio currently playing in VAM.
For example, it can listen to a frequency and flash a spotlight with every beat.
It can be linked to anything that accepts 'values Actions' from 0.0 to 1.0.
It can use a microphone, too.
By default VAM can only do basic audio visualizations based on the volume (as far as I know).
SallySoundFX is therefore more accurate. A 'SallySoundFX Demo' scene is included.
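For readers new to VAM scripting, here is a rough, hedged sketch of what 'accepting values' can look like on the receiving side. The class and parameter names (BeatReceiver, BeatValue) are made up for illustration, assuming the usual MVRScript plugin API where registered floats show up as trigger receiver targets:
Code:
// Hypothetical receiver plugin - a sketch, not part of SallySoundFX.
using UnityEngine;

public class BeatReceiver : MVRScript
{
    private JSONStorableFloat beatValue;

    public override void Init()
    {
        // Registered floats appear as trigger receiver targets in VAM,
        // so SallySoundFX triggers can write a 0.0-1.0 value into this one.
        beatValue = new JSONStorableFloat("BeatValue", 0f, 0f, 1f);
        RegisterFloat(beatValue);
        CreateSlider(beatValue);
    }

    private void Update()
    {
        // A visualization would read the latest trigger value here each frame.
        float pulse = beatValue.val;
    }
}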
Does this plugin happen to need Windows 10 to run? I have the latest version of VAM installed on Windows 7 and it unfortunately didn't work. I only got an error message saying something along the lines of the .cslist file couldn't be found.
In theory there is no reason for the Plugin not to run on Windows 7, as long as VAM runs normally.
But I built and tested it on Windows 10 only.
Before I set up a Virtual Machine to test this:
Can you please check that it is not a compilation error caused by MacGruber.LogicBricks.12 not being installed?
It's the only dependency. VAM lists a second one, but that one comes preinstalled.
If the problem is not the missing 'LogicBricks' as shown below, please send me a private message with a copy of the error message from your error log.
Then I'll test in a VM.
Code:
!> Missing addon package MacGruber.LogicBricks.12 that package Sally.SallySoundFX.1 depends on
!> Compile of Sally.SallySoundFX.1:/Custom/Scripts/Sally/SallySoundFX.cslist failed. Exception: System.Exception: Path MacGruber.LogicBricks.12:\Custom\Scripts\MacGruber\LogicBricks\Internal\MacGruber_Utils.cs not found
at MVR.FileManagement.FileManager.ReadAllText (System.String path, Boolean restrictPath) [0x00000] in <filename unknown>:0
at MVRPluginManager.SyncPluginUrlInternal (.MVRPlugin mvrp, Boolean isFromConfirmDenyResponse) [0x00000] in <filename unknown>:0
!> Compile of Sally.SallySoundFX.1:/Custom/Scripts/Sally/SallySoundFX.cslist failed. Errors:
Thanks.
I am looking into this right now.
Unfortunately it does not look very promising.
The Plugin picks up the audio data from an AudioListener.
This Listener is always attached to the Camera.
(That is the camera that renders the 3D scene for the monitor, not the 'WindowCamera'-Atom.)
This makes sense, since the user should hear 'less' if the audio source is far away from the camera.
Basically spatial 3D audio.
The WebPanel and similar objects like the TV seem to play the audio directly and ignore the spatial concept.
You can test this in VAM:
Music playing on an AudioSource will become quieter the further the user moves away.
Music playing on a WebPanel does not.
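For context, the listener path looks roughly like this - a minimal sketch using the plain Unity API (not SallySoundFX's actual code). This is exactly the path that WebPanel audio bypasses, which is why nothing from it shows up in the spectrum:
Code:
// Sketch: read spectrum data from whatever the AudioListener 'hears'.
using UnityEngine;

public class SpectrumProbe : MonoBehaviour
{
    // 512 matches the FFT size mentioned later in this thread.
    private readonly float[] spectrum = new float[512];

    private void Update()
    {
        // Channel 0, Blackman-Harris window for low spectral leakage.
        AudioListener.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
        // spectrum[i] holds the magnitude of bin i; the bin width in Hz
        // is AudioSettings.outputSampleRate / (2f * spectrum.Length).
    }
}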
VAM always seems to load a hidden 'TestAudioSource'.
I tried to get the data from it too -> nothing.
So for now - the answer is: No, not possible.
Since I haven't figured out yet how the WebPanel actually plays the audio, there is still hope.
If I do, maybe there is another way to get the required data. Chances are very low though.
The audio has to come from the 'Scene audio' or AptSpeaker / AudioSource / RhythmAudioSource.
Note that by default the AptSpeaker-Atom has spatial audio disabled.
Great info, and happy hunting (I gave up my quest for it) - whoever controls WebPanel audio controls the universe. It's more or less the holy grail, as it would make dynamic online text-to-speech > head audio > mouth movement, models dancing to YouTube etc. possible. It's next level shit.
Another way would be some sort of proxy, fetching the sound server-side online (e.g. the output from a YouTube URL or a TTS server, server to server - not a local server, it's too much to ask the user to install local stuff) and have that server spit out a format the scene audio can stream, as if one had inserted a URL there. As far as I know, RhythmAudioControllers are able to fetch that.
@God
To find out how audio is handled for the web stuff, I started renaming some .dll files.
Then I started VAM with a WebPanel test scene to see if it still works. The scene was playing a YouTube video.
It is weird that VAM has so many audio library files, btw.
I tested these suspicious files until I broke the WebPanel:
VAM\VAM_Data\Managed\
Bass.Net.dll Version 2.4.13.2 (http://www.bass.radio42.com, http://www.un4seen.com) -> nope, still working
NAudio.dll Version 1.8.4.0 (https://markheath.net/category/naudio) -> nope, still working
UnityEngine.UnityWebRequestAudioModule.dll -> nope, still working
UnityEngine.WebModule.dll -> nope, still working
VAM\VAM_Data\Plugins\
bass.dll -> nope, still working
AudioPluginMsHRTF.dll -> nope, still working
AudioPluginOculusSpatializer.dll -> nope, still working
libmpg123-0.dll -> nope, still working
zf_cef.dll <- Yep, that is it!
ZFProxyWeb.dll <- Yep, that is it!
I will end my search here. Unlike a proper audio library, this Chromium-based browser is unlikely to offer a function to get sample data. Audio is not even mentioned in the documentation, except for some supported codecs. And even assuming it offered such a function, I still could not grab it from a Plugin, because there is no interface/connection to it. End of the road.
So the only option is sending the data to the Plugin from an external source. That's bad. The user would have to install extra stuff, and it would never be in perfect sync. Assuming the WebPanel plays a music video and has some lag, but the external source analyzing the audio has none, it would desync ... and since there is no way to read the video progress - how to resync it? *ugh*
Another idea would be to have a Windows program record whatever VAM is playing in Windows and silently send it back into a fake microphone device. That device could then be picked up by VAM/Unity and played on an AudioSource. However, this AudioSource would have to be super silent / far away - or else feedback loop. This would at least have no desync issues and little lag. But still, installing external software is *ugh*.
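On the Unity side, the fake-microphone pickup would be straightforward - a hedged sketch using the plain Unity Microphone API (the virtual audio cable itself is the external software that would have to be installed):
Code:
// Sketch: play a (virtual) microphone device through an AudioSource
// so the AudioListener - and therefore SallySoundFX - can 'hear' it.
using UnityEngine;

public class MicLoopback : MonoBehaviour
{
    private void Start()
    {
        if (Microphone.devices.Length == 0) return;
        string device = Microphone.devices[0]; // a virtual cable would show up here

        AudioSource src = gameObject.AddComponent<AudioSource>();
        src.clip = Microphone.Start(device, true, 10, 44100);
        src.loop = true;
        while (Microphone.GetPosition(device) <= 0) { } // wait for the mic to deliver data
        src.volume = 0.0001f; // near-silent to avoid the feedback loop
        src.Play();
    }
}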
Anyway - all solutions kind of suck. I'm out of 'good' ideas now.
Edit: I also iterated over the relevant objects and components to prove that the 'WebPanel' (and similar) do not use the UnityEngine.AudioModule at all. That basically means getting the sample data from there is impossible.
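The log itself is not reproduced here, but that kind of scan boils down to something like this hedged sketch (plain Unity API):
Code:
// Sketch: list every AudioSource/AudioListener in the scene to show
// that the WebPanel owns none of them.
using UnityEngine;

public class AudioComponentScan : MonoBehaviour
{
    private void Start()
    {
        foreach (AudioSource src in Object.FindObjectsOfType<AudioSource>())
            Debug.Log("AudioSource on: " + src.gameObject.name);
        foreach (AudioListener lst in Object.FindObjectsOfType<AudioListener>())
            Debug.Log("AudioListener on: " + lst.gameObject.name);
    }
}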
The error on loading the demo scene only shows the 'SallySoundFX.cslist doesn't exist' error, but upon reloading (or loading from scratch) the plugin itself, the MacGruber dependency issue comes up - which is strange, because the file definitely is there, and in the right location as well.
@Nameless Vagabond
I did test the Plugin inside a Windows 7 Virtual Machine using VirtualBox.
Except for running slower inside the VM, it worked perfectly normally.
From the screenshot I cannot tell whether the Explorer window shows the content of a .var file or a normal folder.
If Explorer shows your VAM\Custom folder, the installation is wrong.
The correct paths are: VAM\AddonPackages\Sally.SallySoundFX.1.var
VAM\AddonPackages\MacGruber.LogicBricks.12.var
I'm not sure whether your Explorer is maybe configured to show the content of .var files like a zip file - but I doubt it.
If you extracted content from .var files to VAM\Custom or VAM\Saves, you must remove these files, or else this will cause all kinds of trouble.
(Actually, the installation is kind of messed up in that case.)
Did you install the Plugin using VAM Hub or manually?
If you are 100% sure your paths are correct, maybe there is a folder permission problem. That could be the case if your VAM is saved to a user folder like the Desktop.
But in that case other Plugins should fail too. I'd move VAM to another drive then.
It seems to be used by the Rhythm*-Atoms in VAM. Btw, I cannot find any documentation for how the Rhythm-Atoms work at all. I just don't like how it seems to waste CPU cycles by using an 'fftWindowSize' of 4096! This is overkill!
Decompiling RhythmTool.Update() would be interesting, to learn how it processes the data it probably grabs from GetSpectrumData() in there.
Decompiled it. Turns out it uses a custom LomontFFT and not a Unity GetSpectrumData() call. No idea how good and fast that thing is. It *seems* to be optimized for high-precision (double), high-quality data and not for fast real-time calculations.
My idea was to potentially 'steal' the data from RhythmTool and save the CPU time for the GetSpectrumData() call. But the data is in a private double[] array. Also, it may or may not have been processed by the FFT algorithm yet, depending on whether my Plugin or RhythmTool gets its Update() call first.
This is from poking around in a scene with an MP3 playing on a 'RhythmAudioSource'.
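To make the 'steal the data' idea concrete: reading a private field is possible via reflection, roughly like the hedged sketch below. The field name "data" is purely a guess for illustration; the real name would have to come from the decompiled source.
Code:
// Sketch: peek into RhythmTool's private double[] via reflection.
using System.Reflection;
using UnityEngine;

public static class RhythmToolPeek
{
    public static double[] GrabSamples(Component rhythmTool)
    {
        // "data" is a hypothetical field name used for illustration only.
        FieldInfo field = rhythmTool.GetType()
            .GetField("data", BindingFlags.NonPublic | BindingFlags.Instance);
        return field != null ? field.GetValue(rhythmTool) as double[] : null;
    }
}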
Thanks for playing tech support for me - I did manually extract the VARs into my Custom and Saves folders, a habit that I developed in the teething days of the VAR system.
Deleting everything and putting the VARs into the AddonPackages folder worked - this plugin is seriously cool!
Hello! Dayum, found your amazing plugin and demo scene in MonsterShinkai's environment, where he mentions you!
I love this plugin you made very much!!!
Question:
In the demo scene I wanted to add more lights or objects that moved along with the music, but I couldn't do it. For example, adding a very similar neon light to the scene didn't change its lighting to the rhythm / it stayed a default light.
The demo scene doesn't trigger the lights directly. There I trigger the visible UI-sliders from the Plugin first, and they trigger the lights and more.
Let's for example assume you have added an 'InvisibleLight' and now you want to trigger it with the Plugin.
1) Open the SallySoundFX custom UI and click the red [Trigger 1 Actions]
2) [Add Transition Action] > [Settings...]
3) Here you set up which Atom should be 'connected' to the trigger. For a light you can control multiple things like range, color or intensity. For this example, let's use intensity to have the light flash based on the trigger's value.
Receiver Atom: InvisibleLight
Receiver: Light
Receiver Target: intensity
4) Now set the (start) intensity slider to 0.0 and the (end) intensity slider to 1.0, or higher if you want to 'amplify' the trigger value.
5) [OK] > in complex scenes you should add a descriptive trigger name now (that's optional) > [Done]
6) Play music and use the red [Trigger 1 frequency in Hz]-slider to find a good source frequency for the trigger. Note that this slider's position is shown as a red bar in the spectrum to make this easy.
Leaving the Trigger 1 frequency at around 100 Hz should give you a flashing light with every beat - probably the most common use. Technically this could be used for other things like speech lipsync too, but I think there are better, more specialized Plugins for that.
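As a side note on how such a frequency slider maps to the spectrum: a hedged back-of-the-envelope sketch, assuming a 512-sample spectrum buffer (the exact numbers inside SallySoundFX may differ):
Code:
// Sketch: convert a target frequency in Hz to a spectrum bin index.
using UnityEngine;

public static class TriggerMath
{
    public static int BinForFrequency(float hz, int spectrumSize)
    {
        // Bin width = half the sample rate spread over all bins.
        float binWidth = AudioSettings.outputSampleRate / (2f * spectrumSize);
        return Mathf.Clamp(Mathf.RoundToInt(hz / binWidth), 0, spectrumSize - 1);
    }
}

// At a 44100 Hz output rate and 512 bins, one bin spans ~43 Hz,
// so a 100 Hz trigger reads roughly bin 2 - right where kick drums live.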
Unfortunately, I am kind of stuck with the development of this Plugin.
My original plan was to use SallySoundFX as a sort of 'Master'-Plugin and then have many optional Visualization-Plugins that the user can load on demand. Like that dancefloor, except not loaded via a .cslist file, but as a separate Plugin. What I did not take into account is the fact that it's not possible to set up a C# interface to communicate between Plugins, as far as I know.
Currently I do have a private version with much better signal processing for visualizations. It no longer uses fixed amplification values for the frequency bands used by the dancefloor. It can adjust itself to any music automatically now.
What would be the best way to try and get BPM out of this, or is it already in there and I am missing it? If the trigger is at the right frequency I get the beat, but I'm not sure how to use a variable and time to get BPM without modding the code.
SallySoundFX only sends the live signal value for a frequency. Technically you can modify the code to measure BPM. But then you'd have to play the song for a while first to detect the BPM. I don't think this would be very good.
To get a BPM value before playing a music file, it's probably better to hook into the VAM-integrated RhythmTool.
It analyzes the file while loading. It creates 3 very basic frequency band profiles - low, mid and high.
So looking into that low profile may be worth it to get the BPM.
Either way - you'd have to touch code. I'm busy with Godot game development atm. I've had an unfinished Plugin version 2 here for months now.
RhythmTool: no live signal, low precision with 3 frequency bands of preprocessed data.
SallySoundFX: allows a live signal (microphone), high precision with 128 frequencies accessible, 8 bands internally for visualization.
RhythmTool's precision is technically higher, since it analyzes with an FFT size of 4096 while SallySoundFX only uses 512. But all that precision is lost by reducing everything to 3 frequency bands.
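For anyone who does want to mod the code: measuring BPM from the live beat trigger boils down to averaging the interval between detected beats over a sliding window. A hedged sketch (nothing here is actual SallySoundFX code):
Code:
// Sketch: estimate BPM from beat timestamps after the song has played a while.
using System.Collections.Generic;

public class BpmEstimator
{
    private readonly List<float> beatTimes = new List<float>();

    // Call this whenever the trigger value crosses the beat threshold.
    public void OnBeat(float time)
    {
        beatTimes.Add(time);
        if (beatTimes.Count > 16) beatTimes.RemoveAt(0); // keep a sliding window
    }

    public float CurrentBpm()
    {
        if (beatTimes.Count < 2) return 0f;
        float span = beatTimes[beatTimes.Count - 1] - beatTimes[0];
        float avgInterval = span / (beatTimes.Count - 1);
        return 60f / avgInterval; // beats per minute
    }
}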
Releasing SallySoundFX 2 quietly for now on my (free and public) Patreon.
Just in case I F'ed up something I will wait before I upload this to the Hub.
This will hopefully be the foundation for many cool Visualizations.
I've tested long enough. Only visual bugs remain and performance is good. Everything 'should' work.
Feedback welcome of course! And please be critical! I can take it.
Hope this does not disappoint people because it still looks 'basic'. What counts is the potential of all the crazy stuff you can do with this. If I had VR hardware, I'd try to make a beat saber Plugin ... that's complicated, but possible. However, it would never be 100% the same, since whatever object you spawn to slash is generated from a 'live' signal. Just some thoughts ...
To-do list: more visualizations *doh*, custom shaders, reflective materials, learning about advanced mesh rendering for even more performance.
This was the first 'working' version a few days ago. Later I found out it leaked memory like crazy. (Fixed now, ofc.)
Thank you for your work and effort on this! I'll check it out as fast as I can, but the way life goes, I won't have time for it for about a week. Anyway, I am really excited to get my hands on it!
Yep, that is not supposed to happen. Fortunately it's easy to reproduce. If the 'Load Texture' button gets a new valid texture back from the VaM file browser, I destroy the previous one in an attempt to keep memory clean. I expected it to be instanced separately from the other Plugins, and so I was very aggressive about keeping memory free.
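In other words, the cleanup looks roughly like this hedged sketch (names are illustrative) - which goes wrong exactly when the destroyed texture is still shared by another Plugin instance:
Code:
// Sketch: free the old texture when a new one is loaded.
using UnityEngine;

public class TileTextureSlot
{
    private Texture2D current;

    public void SetTexture(Texture2D loaded)
    {
        if (loaded == null) return;
        // Aggressive cleanup: fine for a unique texture, a bug if
        // another Plugin instance still uses 'current'.
        if (current != null)
            Object.Destroy(current);
        current = loaded;
    }
}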
Concerning what you (MonsterShinkai) wrote in a Discord DM:
One thing that came to mind, it would be nice if each tile was scalable.
That should be no problem to add. I was thinking about a drop-down menu similar to the Color Logic.
From there you can pick from various 'Geometry Logic' behaviors. Scaling the tiles would then be one of them.
That's probably something to look into when I investigate Shaders and Materials more. Currently the Visualizations use the 'Sprites/Default' Shader. Unlike other typical VaM Materials, the main reason to use this one is that it uses Vertex-Colors. So each geometry point has a color assigned. It is always 'unlit' - no shadows. And it's faster to render compared to the 'Standard' Shader, where the color comes from the material color. Vertex-Colors are nice for Visualizations because they allow fine control of the color of each point.
What would happen if I changed the Shader to "Standard"?
It would need light to become visible, it would have shadows, but no Vertex-Colors (material color only).
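To illustrate the Vertex-Color point, a minimal hedged sketch in plain Unity (not the actual SallySoundFX geometry code): a single quad rendered with 'Sprites/Default', where each corner gets its own color:
Code:
// Sketch: a quad with one color per vertex - unlit, no shadows, cheap.
using UnityEngine;

public class VertexColorQuad : MonoBehaviour
{
    private void Start()
    {
        var mesh = new Mesh();
        mesh.vertices = new[]
        {
            new Vector3(0, 0, 0), new Vector3(1, 0, 0),
            new Vector3(0, 1, 0), new Vector3(1, 1, 0)
        };
        mesh.triangles = new[] { 0, 2, 1, 1, 2, 3 };
        // One color per vertex - the fine control a material color can't give.
        mesh.colors = new[] { Color.red, Color.green, Color.blue, Color.white };

        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        gameObject.AddComponent<MeshRenderer>().material =
            new Material(Shader.Find("Sprites/Default"));
    }
}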
The Visualizations really need a good custom Shader.
That would open up the floodgates for all kinds of new customization options.
I've experimented with a simple custom one that I created with a Shader editor named Amplify. At one point I had a version where I loaded this one ...
... from an AssetBundle. It worked, but to be honest, I absolutely hate AssetBundles. I removed that code part again. I get why they are useful for game development, but I just find the workflow to create them super annoying; they add so many additional potential points of failure, and they are more or less a black box once packed. If I can avoid them, I will avoid them. Not sure whether it will be possible with Shaders to load them 'directly' into VaM.
I thought of that beat saber thing too, and if it helps, I think the easiest way to do it is to have a 'learn song' functionality. The user would just let the song play once, you record the data in a JSON, and then it would be used in the game - like, at 0:30 in the song you would start popping in Atoms based on the data for 0:25, or something like that.
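The recording half of that idea is simple enough - a hedged sketch (plain C#; the file format and names are made up for illustration):
Code:
// Sketch: record beat timestamps on a first playthrough and dump them
// as JSON, so a later playthrough can spawn objects ahead of the beat.
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;

public class BeatMapRecorder
{
    private readonly List<float> beats = new List<float>();

    // Call with the current song time whenever a beat is detected.
    public void OnBeat(float songTime)
    {
        beats.Add(songTime);
    }

    // Writes e.g. [0.512,1.024,1.536] - a bare JSON array of seconds.
    public void Save(string path)
    {
        string json = "[" + string.Join(",", beats
            .Select(t => t.ToString("F3", CultureInfo.InvariantCulture))
            .ToArray()) + "]";
        File.WriteAllText(path, json);
    }
}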
Hmm, that reminds me: I think the VaM-integrated RhythmTool does something similar when an audio file is loaded.
I could try to 'steal' the required data from that one too. I think it should be 100% the same data every time it loads - as long as the audio file is the same.