Thanks for the quick reply!
1) I suspected as much. Ideally expressions should indeed be handled entirely in BL, but as it stands Read My Lips lacks control options for someone like me and, I'd wager, your average Joe "content creator". This isn't a matter of being intense IMO, it's a matter of being able to say "no randomisation on this" and "please use this cool expression by Ashauryn" for that pose.
But FWIW this is a massive issue for me. I have an animation/art background and expressions are a huge part of what makes a scene. Having some randomisation among only a few things, and not even being able to have archetypes for shy, angry, indifferent, etc., is... probably a deal breaker. At the very least, if the systems conflict, it's quite important to be able to cut RML and have something like Timeline control it instead.
Clothing: just on and off really. QoL stuff, as this is a frequent need. As we both pointed out, this can be done with triggers, and in a way it's like what you did with the Timeline drop-down menu: making it more convenient to do from the UI. Obviously working with a 4k-item list is bad, so maybe shortlist the items present in the first pose and only work with that list. In the rare case where someone is dressing their characters up as the scene goes, having a first scene, or maybe hidden characters, with maximally dressed outfits could be a workaround.
2) In Timeline you can set, for each animation/segment, what to play next: play the next animation in order, or a random one within the segment. You can also name animations 1/1, 1/2, 1/3, then 2, 3, etc., and tell it to pick a random next animation within 1/ (between 1/1 and 1/x) and only those. You can also assign a weight to each animation so that some are more likely to be picked. So if animation 1/3 has only a 1% chance of being played and it's the one exiting the loop, you can have 1/1 and 1/2 alternate for a random but significant time, then have the animation jump to the next stage with a transition. As it is right now, BL only offers "next one" and "randomize amongst ALL animations regardless of level". This is quite frustrating if you're trying to have a progression yet some variation. Having the ability to say "random WITHIN CURRENT LEVEL" would be great. I would then only use levels to navigate from a UI standpoint, with poses being a more granular part of one actual pose (which begs the question of thumbnails for levels).
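To make the request concrete, here is a minimal sketch (not any actual plugin API; the naming convention and weight table are assumptions based on the Timeline behaviour described above) of "weighted random within current level" selection:

```python
import random

# Assumed convention: animations are named "level/index" (e.g. "1/1", "1/3"),
# and each has a weight controlling how often it is picked next.

def next_animation(current, animations, weights):
    """Pick the next animation at random, restricted to the current
    animation's level ("1/3" -> level "1"). Unlisted weights default to 1."""
    level = current.split("/")[0]
    candidates = [a for a in animations if a.split("/")[0] == level]
    w = [weights.get(a, 1.0) for a in candidates]
    return random.choices(candidates, weights=w, k=1)[0]

anims = ["1/1", "1/2", "1/3", "2/1", "2/2"]
# "1/3" is the loop exit, so give it a tiny weight: 1/1 and 1/2
# alternate for a while before the scene moves on.
weights = {"1/1": 50, "1/2": 49, "1/3": 1}
picked = next_animation("1/1", anims, weights)  # always some "1/x"
```

The point is that the level prefix both organizes the UI and scopes the randomisation, so progression and variation stop fighting each other.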
In general I don't like the current standard in VAM scenes of forcing the user to click for things to progress. It breaks immersion while they're busy doing something else.
At the very least, reducing the granularity of that seems like a good design angle.
3) This could be an interesting discussion to have here. I've designed enough apps to know I don't like having to solve everything alone.
One can note that the VAM creators also struggled with similar concepts when designing VAM, and came up with force atoms on one hand and position-based animation on the other, a bit like ancestors to BL and Timeline respectively. Their approach to force was to attribute all of the components (angle, direction) to one time control, so at least all of them would be synced, but of course this isn't exactly intuitive nor good UX.
Thinking out loud... For context, I come from audio app design, where modulation is a massive topic. The way things work there is that you have LFOs, modulators of several kinds (sine, triangle, randomisation, sometimes custom), that you then apply to parameters, typically ranging from -1 to +1. You would often have several different modulators, each driving one or (way) more parameters at a time. Sometimes you would even modulate one modulator's parameter (say, speed) with another.
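The audio-style model above can be sketched in a few lines. This is purely illustrative (no real audio or VAM API; the function names and the [-1, +1] convention follow the description above): one modulator drives several parameters at once, which is what keeps them in sync.

```python
import math
import random

def sine_lfo(rate_hz):
    """A sine modulator: returns a function of time with output in [-1, +1]."""
    return lambda t: math.sin(2 * math.pi * rate_hz * t)

def random_lfo(seed=0):
    """A randomisation modulator, also bounded to [-1, +1]."""
    rng = random.Random(seed)
    return lambda t: rng.uniform(-1.0, 1.0)

def apply_mod(base, depth, mod_value):
    """Offset a parameter's base value by depth * modulator output."""
    return base + depth * mod_value

lfo = sine_lfo(0.5)        # one slow sine LFO...
t = 0.25
mod = lfo(t)
intensity = apply_mod(0.8, 0.2, mod)  # ...driving two parameters
speed     = apply_mod(1.0, 0.5, mod)  # at once, so they stay in sync
```

Modulating a modulator is the same idea one level up: feed another LFO's output into `rate_hz` before sampling.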
This is one solve for the synchronisation issue, but of course this isn't audio, so it probably doesn't translate 1:1 into something of actual use. I think you're at the very least off to a good start with your "sync to thrust" options. Just having pose me-level animations also be able to sync to the fill me up master thrust would help. From there, it's likely that having not just the timing but also the intensity of the animation affected by what's going on could help. Maybe some extra concept like a delay of application could be useful too. Hard to conceptualize.
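A rough sketch of what I mean, purely hypothetical (none of these names exist in BL; this just combines the "sync to thrust", intensity, and delay ideas above): a master thrust signal that a secondary animation follows with a lag and its own intensity scale.

```python
import math

def master_thrust(t, rate_hz=1.0):
    """Normalized master thrust position in [0, 1] over time."""
    return 0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * t))

def secondary_value(t, delay=0.1, intensity_scale=1.0):
    """A secondary animation that samples the master signal `delay`
    seconds in the past, scaled by its own intensity. Timing stays
    locked to the thrust while feel is controlled separately."""
    return intensity_scale * master_thrust(t - delay)

v = secondary_value(0.1, delay=0.1, intensity_scale=0.5)  # lagged, half depth
```

The delay is the "delay of application" idea: reactions that trail the driving motion rather than snapping to it.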
I suspect this is all pretty important, because whatever VAM2 is, it will have learned from VAM's systems, the history of the various attempts at improving them, and what worked and what didn't. How do you build a straightforward, reasonably intuitive yet controllable system, so that someone with an entry-level understanding of animation can reasonably translate what they've seen on pornhub into a 3D app, but with the benefit of it being random and alive?