AnimationPoser


Oh my gosh, I have read a few posts there, I never figured out that multiple messages can have the same string!

Yes! There are really lots of things that the plugin addresses. :LOL: Most of them only become glaring when you are making animations with like 10 characters. With fewer characters, these issues pop up later on. But the plugin is capable of addressing almost any of them. I think now any improvement just boils down to how many clicks you need.

If I manage to show the entire feature-set of the plugin and a reasonable amount of patterns, I think it will have dozens of tutorials. It is like Blender, where you have thousands of "how to do this in Blender" and "how to do that in Blender". These tutorials familiarize the user with the interface, etc., and let them do some things by blindly following the tutorial until it clicks and they are able to do the stuff they want without needing a specific tutorial. But I'll try to release enough tutorials to cover the most common use cases. The possibilities of what people can come up with on their own are almost endless though.
 
For sure! I forgot to tell you, if you hadn't said that about the 180 degree flickering, I think I would have never figured out why the timeline virtual anchoring system was messed up. You've helped immensely with something that is so crucial for the plugin from a macro perspective. The stuff with rernat's turning around animation is amazing. I think the only thing missing now is a mirroring system.

YESSSS, I am really happy to hear that I helped!
Mirroring is indeed a good feature, imagine creating only lHand and lFoot animation and a few clicks for rHand rFoot. BOOM!

Another thing I wanna address is perhaps a curved interpolation with a defined plane, that makes the character turn left and right smoothly, even when she is going up and down.

Sounds like a niche feature though. Right now the only situation where I need it is a Japanese village, where there is an upward slope. And stairs where you can stare upwards for some nice scenes.

Yes! There are really lots of things that the plugin addresses. :LOL: Most of them only become glaring when you are making animations with like 10 characters. With fewer characters, these issues pop up later on. But the plugin is capable of addressing almost any of them. I think now any improvement just boils down to how many clicks you need.

If more clicks mean fewer atoms I am happy to do that. My priorities are performance > atoms > clicks lol

Nevertheless, the plugin is really promising!

The more hidden features I discover, the fewer atoms and plugins I need to remedy the situation. I have been cutting down the atom count a lot recently - 70% are still there due to VaM's limitations - I wish VaM had an easy way to recall and apply translation and rotation values at runtime, things like px=px+0.1 or rx=rx+15 like when I was handling MEL script; then I could ditch like 75% of the atoms, because I could simply maintain a float record for foot stride, foot spread, heel rotation, height, turning angle and direction, etc.

20% are there because of stable foot pace coupled with a smoothed path, which needs at least 3 points. Two sets of 3-point values, so that I can update one set without affecting the interpolation of the other set.

My ultimate goal is to have atoms only for location and pivot, with the rest handled by the character's plugin, but I am still very happy right now.
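Just to illustrate the px=px+0.1 idea above: something like this is what I mean (a hypothetical sketch with made-up names, not actual VaM or MEL code):

```csharp
// Hypothetical sketch: keep a small record of floats per character and
// apply them as incremental offsets at runtime, instead of spawning an
// extra atom for every tweakable value.
using System;
using System.Collections.Generic;

class PoseRecord
{
    // e.g. "footStride", "footSpread", "heelRotation", "height", "turnAngle" ...
    public Dictionary<string, double> Values = new Dictionary<string, double>();

    // px = px + dx style update, like a MEL-ish incremental assignment
    public void Add(string key, double delta)
    {
        double current;
        Values.TryGetValue(key, out current);
        Values[key] = current + delta;
    }
}

class Demo
{
    static void Main()
    {
        var record = new PoseRecord();
        record.Add("footStride", 0.6);    // 60 cm stride
        record.Add("heelRotation", 15.0); // rx = rx + 15
        record.Add("footStride", 0.1);    // px = px + 0.1
        Console.WriteLine(record.Values["footStride"]); // ≈ 0.7 (60 cm + 10 cm)
    }
}
```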

If I manage to show the entire feature-set of the plugin and a reasonable amount of patterns, I think it will have dozens of tutorials. It is like blender, where you have thousands of "how to do this in blender" and "how to do that in blender". These tutorials familiarize the user with the interface, etc, and allow him to do some things kind of blindly following the tutorial until it clicks and they are able to do the stuff they want to without needing a specific tutorial. But I'll try to release enough tutorials to be comprehensive to the most common use cases. The possibilities of what people can come up with on their own are almost endless though.

Yeah I can tell. The last time I was this excited was messing with MAYA, but I am not capable in the other aspects of complete game development. I am glad that I have a good platform right now for creating something I have been wishing to do for years.
 
The more hidden features I discover, the fewer atoms and plugins I need to remedy the situation. I have been cutting down the atom count a lot recently - 70% are still there due to VaM's limitations - I wish VaM had an easy way to recall and apply translation and rotation values at runtime, things like px=px+0.1 or rx=rx+15 like when I was handling MEL script; then I could ditch like 75% of the atoms, because I could simply maintain a float record for foot stride, foot spread, heel rotation, height, turning angle and direction, etc.

This feature might be interesting for you: https://github.com/haremlife/AnimationPoser/issues/25

Yeah I can tell. The last time I was this excited was messing with MAYA, but I am not capable in the other aspects of complete game development. I am glad that I have a good platform right now for creating something I have been wishing to do for years.

Same experience; I was very excited about virt-a-mate at first (coming from Blender), but it completely lacked good automation features (not that Blender doesn't), and animation plugins such as Timeline provided no or very limited features for animation rotation/translation (which Blender does provide), which made it impossible to reuse animations, making it necessary to rebuild everything (walking animations, etc.) absolutely from scratch for every slightly new situation.

I don't know how Timeline evolved since then in terms of the spatial rearranging features, but even before talking about randomness/automation, AnimationPoser already has extremely useful features for animation reusability/morphing that are unparalleled in the virt-a-mate ecosystem. For example, you can rotate/translate only a portion of an animation in real time. So if you have a mocap animation of a character dancing and walking to a chair, and you change the position of the chair, you can easily morph the animation so that it still works perfectly, changing nothing in the animation besides the distance of the chair, and that takes like 10 clicks tops.
 
My ultimate goal is to have atoms only for location and pivot, with the rest handled by the character's plugin, but I am still very happy right now.

Well, once you use the atoms (I'm guessing they are anchor atoms?) to adequately position the animation, you can just switch the anchor to world or control and get rid of the extra atoms, like I did in the first video on the plugin page.

I don't know if that is clear to you... But the anchors are PER STATE, meaning each state can be anchored to different stuff. The magic is that the timelines stretch, rotate, translate and morph when you change the anchor positions of the transition's source and target states. Also you can transform states into keyframes and vice-versa. So you can use atoms to morph the animation and then get rid of them. You can do that even for mocap data, by choosing a keyframe, turning it into a state and anchoring it. So "getting rid of the atoms" is a completely reversible process: you can bring them back whenever you like. Think of them as editing tools that you can put aside whenever you want. They don't have to be in the scene.
 

I am not quite sure about this anchor. Can it be targeted globally, e.g. layers on other roles?

If so, this can be interesting; I could grab anyone's boob easily, without a NodeAlign atom and scripting which person to target.

If it's limited to inside the plugin, it's still useful for saving a number of clicks I guess.

Same experience; I was very excited about virt-a-mate at first (coming from Blender), but it completely lacked good automation features (not that Blender doesn't), and animation plugins such as Timeline provided no or very limited features for animation rotation/translation (which Blender does provide), which made it impossible to reuse animations, making it necessary to rebuild everything (walking animations, etc.) absolutely from scratch for every slightly new situation.

I don't know how Timeline evolved since then in terms of the spatial rearranging features, but even before talking about randomness/automation, AnimationPoser already has extremely useful features for animation reusability/morphing that are unparalleled in the virt-a-mate ecosystem. For example, you can rotate/translate only a portion of an animation in real time. So if you have a mocap animation of a character dancing and walking to a chair, and you change the position of the chair, you can easily morph the animation so that it still works perfectly, changing nothing in the animation besides the distance of the chair, and that takes like 10 clicks tops.

Lol I wish I had discovered this plugin earlier. Timeline is noob friendly, but I stopped being a fan when I discovered that the body controllers weren't even anchored to the control node - the killer blow that stopped me from using it. No matter how nice an animation I make, I can't even reuse it 50 centimeters away.

Well, once you use the atoms (I'm guessing they are anchor atoms?) to adequately position the animation, you can just switch the anchor to world or control and get rid of the extra atoms, like I did in the first video on the plugin page.

Not all of them are anchors for animation. I am using the searchlight plugin to point character A towards the destination. Since I don't want A to robotically rotate 90 degrees while moving in straight lines, I need an anchor atom B and to blend it into the movement.

Another setback is that if the start point and end point are not at the same height, the character is gonna be tilted downwards or upwards. As a remedy I created y0 child nodes locking the heights. 3 extra atoms used.

To determine when you need a 180-degree turning animation instead of walking a half circle, I need 2 extra child atoms "orbited" 0.1 away from anchor B and character A.

If the two atoms are 0.2 apart they are at a 180-degree difference; 0.1 apart, they are at 60 degrees.

I know this method is kinda dirty, but I haven't thought of any elegant solution yet.

I don't know if that is clear to you... But the anchors are PER STATE, meaning each state can be anchored to different stuff. The magic is that the timelines stretch, rotate, translate and morph when you change the anchor positions of the transition's source and target states. Also you can transform states into keyframes and vice-versa. So you can use atoms to morph the animation and then get rid of them. You can do that even for mocap data, by choosing a keyframe, turning it into a state and anchoring it. So "getting rid of the atoms" is a completely reversible process: you can bring them back whenever you like. Think of them as editing tools that you can put aside whenever you want. They don't have to be in the scene.

I am utilising different anchors per state. Like I have said, I can make the character grab anything I want, with just a few hand pose variations.

I try to keep the atom count to a minimum as much as possible. If I were rendering 4K animations I probably wouldn't be worrying about how many atoms are clustered in the editing scene.

But I am doing animations not for rendering, but for gaming. I am considering a lot of "what-if" situations. What if someone wears 2-inch high heels at the office, drops her shoes when she's at home, and wears 3-inch high heels when dating?

In this case it's better to keep an extra "high heel height" node so I don't need to worry about the rest of the animations, as long as I am anchoring her foot to this node.

Foot stride and side steps: coupled with these I can make some random footing in the scene, a step 30cm forward and 10cm to the side. I can make wider steps when turning 180 so that the knees don't clash together. And in some dancing animations I can spread her legs wider according to how aroused she is.

That's why I need atoms - more precisely, some float values to fine-tune her pose, some body language to progress further in the game. This is a rough idea of course, but I am not satisfied with a gaming style that is merely clicking or scrolling speed bars.
 
I am not quite sure about this anchor. Can it be targeted globally, e.g. layers on other roles?
If so, this can be interesting; I could grab anyone's boob easily, without a NodeAlign atom and scripting which person to target.

I don't understand. Isn't this exactly why there are *regular* anchors?

The significance of layer anchors is that you could have multiple layers adding offsets to the SAME controller. So one layer is the main layer, the other would add slight incremental variations to the controller position to make it less predictable.

Not all of them are anchors for animation. I am using the searchlight plugin to point character A towards the destination. Since I don't want A to robotically rotate 90 degrees while moving in straight lines, I need an anchor atom B and to blend it into the movement.

You can probably achieve that with the new timeline system. If you have a timeline between states A and B, with a given rotation between them, the timeline BLENDS between the rotations, with blend amount t, where t is the time (normalized to be between 0 and 1).
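Roughly, the idea is just this (a simplified sketch with plain yaw angles, not the actual plugin code, which blends full transforms):

```csharp
// Simplified sketch: blend the rotation between a source and a target state
// with a normalized blend amount t in [0, 1], taking the shortest path.
using System;

class RotationBlend
{
    static float BlendYaw(float sourceDeg, float targetDeg, float t)
    {
        // shortest signed difference between the two headings
        float delta = ((targetDeg - sourceDeg + 540f) % 360f) - 180f;
        return sourceDeg + delta * t;
    }

    static void Main()
    {
        float duration = 2f;                         // seconds for the timeline
        for (float time = 0f; time <= duration; time += 0.5f)
        {
            float t = time / duration;               // normalize to [0, 1]
            Console.WriteLine(BlendYaw(0f, 90f, t)); // 0, 22.5, 45, 67.5, 90
        }
    }
}
```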

As a remedy I created y0 child nodes locking the heights.

Dumb question, why don't you use the floor? There is something that I am missing.

Anyway, I'm curious about your animations. They seem very complex and detailed.
 
In this case it's better to keep an extra "high heel height" node so I don't need to worry about the rest of the animations, as long as I am anchoring her foot to this node.

Foot stride and side steps: coupled with these I can make some random footing in the scene, a step 30cm forward and 10cm to the side. I can make wider steps when turning 180 so that the knees don't clash together. And in some dancing animations I can spread her legs wider according to how aroused she is.

Now it clicked. That's amazing. It's like the CLS plugin but with predefined walk cycles. I didn't think about that. Quite good stuff. Probably gonna do a tutorial about that. It sounds amazing.
 
I don't understand. Isn't this exactly why there are *regular* anchors?

What I was thinking is that the anchor defines which girl's boob, say Alisa's, is grabbed in the state. Now if this function works globally I can switch to Cathy by telling it which plugin's target layer I am pointing at.

That layer can have a Controller added but not controlling the position (pos and rot unchecked), only there for logging the controller's transformation.

But the last time I did something like this, the plugin still turned the controller from off to on, which I certainly don't want.

The significance of layer anchors is that you could have multiple layers adding offsets to the SAME controller. So one layer is the main layer, the other would add slight incremental variations to the controller position to make it less predictable.

I understand this conceptually; what I don't quite understand is how it is implemented at runtime. I was reading that stuff late last night and it looked alien to me, maybe I need some time to test it out.

Let's say I am applying height offsets and rotation offsets for lFoot high heels when the girl is walking.

Does that mean I am loading 3 plugin instances on the same character, the first plugin for walking (main layer), and the second and third for adjustment?

Now, if this works, that's gonna convert a lot of my atoms into plugins.

You can probably achieve that with the new timeline system. If you have a timeline between states A and B, with a given rotation between them, the timeline BLENDS between the rotations, with blend amount t, where t is the time (normalized to be between 0 and 1).

Dumb question, why don't you use the floor? There is something that I am missing.

Anyway, I'm curious about your animations. They seem very complex and detailed.

I am not using the floor because I have stepping upstairs/downstairs in mind. If I lock the child point on the floor (while its parent is moving up or down), I can lock the rotation on the y axis, then add an incremental height to the footsteps.

But perhaps it is not worth 2 extra atoms at runtime. Not sure, let's see how far I can go.

This is easy to remove and add back though. As long as there are two points defined in position, the plugin handles the ending rotation.

Right now I am using node A, which defines the starting point, node B, which defines the stride, say 60cm per cycle, and node C for the rotation (which is ALWAYS pointing at the target).

Let's say I am facing forward, but I am going to the right, 2 meters away. I am blending node B and node C, say 70/30.

So after the cycle she turns 27 deg clockwise, walking 27.2cm to the right (sin 27° × 60cm) and 53.5cm forward (cos 27° × 60cm).

The next cycle is like turning 45.1 deg, walking...

Argh, forget about the maths, I feel like I am making things complicated.
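For what it's worth, the per-cycle numbers above come out of something like this (just the maths as a standalone sketch, hypothetical names, not actual VaM code):

```csharp
// Sketch of one walk cycle: blend the current heading (node B) with the
// direction that points at the target (node C) using a 70/30 weight,
// then split the 60 cm stride into forward and sideways components.
using System;

class WalkCycle
{
    static void Main()
    {
        double headingDeg = 0.0;          // node B: keep walking straight
        double toTargetDeg = 90.0;        // node C: target is 90° to the right
        double blendTowardTarget = 0.3;   // 70/30 blend of B and C
        double strideCm = 60.0;

        double newHeading = headingDeg + (toTargetDeg - headingDeg) * blendTowardTarget; // 27°
        double rad = newHeading * Math.PI / 180.0;
        double sidewaysCm = Math.Sin(rad) * strideCm;  // ≈ 27.2 cm to the right
        double forwardCm = Math.Cos(rad) * strideCm;   // ≈ 53.5 cm forward

        Console.WriteLine($"{newHeading}° turn, {sidewaysCm:F1} cm right, {forwardCm:F1} cm forward");
    }
}
```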

Now it clicked. That's amazing. It's like the CLS plugin but with predefined walk cycles. I didn't think about that. Quite good stuff. Probably gonna do a tutorial about that. It sounds amazing.

Now that you mention it, I downloaded that a few weeks ago. I hadn't even tried it out before doing the walking anim in AnimPoser.

My first thought was to test out how this plugin works, and a walking-sitting-standing cycle seemed a good thing to stress test; in the end I am doing much more than I was hoping to do.

Thank you for your appreciation. I feel like it needs some polish, especially the pivot part.

I found out that switching pivots makes handling animations very easy. Walking is best pivoted on the floor, but sitting is better pivoted on the hip, so you move the chest, head and hands (everything except the feet) at the same time.

In theory it's simple: I want to make the end product as easy as "walk 3.78m forward and 2.81m to the right", "stop when you are 0.5m close" and "sit on the sofa that's 0.6m behind you and 0.55m tall", so that I only need to define a stop point, stop time and sit point, and put my efforts into something more interesting.

BUT because of how the stopping time is handled by a distance check, I find the pivot switching needs a lot of polish to time it right.

Anyway this is just some random grumble.

My gratitude for your update speed, I am reducing atoms day by day lol.
 
This is what I have achieved so far. Note that the distance of the blue dot (location control) or the red dot (pivot control) doesn't affect her pace.

There are a few bugs I am tackling:

1. It takes two sets of 3-point controls to set up the path, and I am blending rotation based on the searchlight plugin. Any discrepancy in rotation makes the character "flicker". It was not as apparent when the light position was fixed, but I parented the light position to the character and that is magnifying the error.

Maybe I need to put in some damping to smooth out the rotation (see the sketch at the end of this list).

2. I think it's 90% fine when the character turns more than 90 degrees; a few percent of the time she twists her ankle or her knees clash. Not perfect, but this 10% is not as prioritized as the first bug.

This could be tackled perfectly if I knew which direction (clockwise or anti-clockwise) and which foot is in front at the time of turning. For example, if your lFoot is in front, you can't turn anti-clockwise or else your rFoot is gonna step on your lFoot.

3. Also notice how the pivot (red dot) changes when she is sitting; this is another area prone to error. If the time control is not precise, the character hovers in the air for a short period of time.

Maybe I need to put another time control on the pivot, telling it when to play and when to stop.
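Regarding the damping idea in bug 1: what I have in mind is just an exponential smoothing of the heading, something like this (a hypothetical sketch with made-up names, not real VaM code):

```csharp
// Sketch: exponentially damp the heading coming from the searchlight blend,
// so small frame-to-frame discrepancies don't show up as flickering.
using System;

class HeadingDamper
{
    double smoothed;                 // current damped heading in degrees
    readonly double halfLife = 0.2;  // seconds to close half the remaining gap

    public double Update(double targetDeg, double deltaTime)
    {
        // fraction of the remaining error removed this frame
        double k = 1.0 - Math.Pow(0.5, deltaTime / halfLife);
        // shortest signed difference between current and target heading
        double delta = ((targetDeg - smoothed + 540.0) % 360.0) - 180.0;
        smoothed += delta * k;
        return smoothed;
    }

    static void Main()
    {
        var damper = new HeadingDamper();
        // a noisy target around 30° settles smoothly instead of flickering
        double[] noisyTargets = { 30, 33, 28, 31, 29, 30 };
        foreach (double target in noisyTargets)
            Console.WriteLine(damper.Update(target, 1.0 / 60.0).ToString("F2"));
    }
}
```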

 
What I was thinking is that the anchor defines which girl's boob, say Alisa's, is grabbed in the state. Now if this function works globally I can switch to Cathy by telling it which plugin's target layer I am pointing at.

Yes! You can do that by switching roles. I think it works if you switch roles before loading the animation. That was my idea when implementing role anchors anyway.

"But the last time I did something like this, the plugin still turned the controller from off to on, which I certainly don't want."

That feature is not yet implemented.

"Does that mean I am loading 3 plugin instances on the same character, the first plugin for walking (main layer), and the second and third for adjustment?"

Like I said it is not yet implemented, but it would work in a single plugin instance. One layer referencing another in the same character.

For example, breathing would be able to work even in a walking animation that controls the chest. It would be added as a separate layer, controlling the chest on top. So breathing plugins would become unnecessary.
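Conceptually the layers would compose something like this (a simplified sketch of the idea, not the actual layer code):

```csharp
// Simplified sketch: several layers each contribute an offset to the
// same controller; the final position is the base pose plus all offsets.
using System;
using System.Collections.Generic;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
    public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    public override string ToString() => $"({X:F2}, {Y:F2}, {Z:F2})";
}

class LayeredController
{
    static void Main()
    {
        Vec3 chestFromWalkLayer = new Vec3(0.00, 1.40, 0.10);   // main walking layer
        var offsetLayers = new List<Vec3>
        {
            new Vec3(0.00, 0.01, 0.02),   // breathing layer: small chest rise
            new Vec3(0.01, 0.00, 0.00),   // variation layer: tiny random sway
        };

        Vec3 final = chestFromWalkLayer;
        foreach (var offset in offsetLayers)
            final += offset;

        Console.WriteLine(final);   // (0.01, 1.41, 0.12)
    }
}
```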

"My first thought was to test out how this plugin works and walking-sitting-standing cycle seems a good thing to stress test, at the end I am doing much more I was hoping to do."

There are some issues with the walk cycle. The root controller is not being captured properly, so I'm working on that before I can make my own walking animations.
 
This is what I have achieved so far.

Wait... How does that work?? How is she following the controller in real time?? This looks like the CLS plugin. Is that AnimationPoser? I had no idea this was possible with the plugin lol
 
This result is reasonably satisfying now, with much less flickering; if you don't look at the shadow you barely notice it. I added a time control for the pivot, to make sure it won't switch too early.

The left and right feet do clash when turning sometimes, but I am leaving this for later, when I find a better approach to determine the rotation direction.

I guess it's time to complete the walk-stand-sit-stand-walk cycle, which, if it is successful, will make a good foundation for understanding animation transitions.

 
Wait... How does that work?? How is she following the controller in real time?? This looks like the CLS plugin. Is that AnimationPoser? I had no idea this was possible with the plugin lol

Lots of sweat and blood.

Joking. AnimationPoser takes a huge part in it, but it wouldn't be successful without other plugins: searchlight is needed to control the direction, NodeAlign to align the footstep positions, and VUML for some string/float/boolean IO.
 

Attachments

  • bandicam 2022-08-02 20-19-10-598.mp4
    25.4 MB
Like I said it is not yet implemented, but it would work in a single plugin instance. One layer referencing another in the same character.

For example, breathing would be able to work even in a walking animation that controls the chest. It would be added as a separate layer, controlling the chest on top. So breathing plugins would become unnecessary.

Okay, nice to hear that. I was planning to add 7 extra plugin instances, each dedicated to a single IK control, but 8 plugin instances would be hell to maintain lol
 
Hi,

Trying to figure out how this works.

I'm adding an animation, a layer and some states with some transitions. So far so good.
But I want to wait in each state a couple of minutes before transitioning to the next.

So I add "Wait duration min" but transitions seem to happen instantly anyway. Am I doing something wrong?

1659444834464.png
 
But I want to wait in each state a couple of minutes

A couple of minutes? Or seconds?

Because that wait duration is in seconds… So if you are looking at that timescale, try to put the wait duration min at 60 instead of 1
 
There are some issues with the walk cycle. The root controller is not being captured properly, so I'm working on that before I can make my own walking animations.
What makes the animation switch though?

First, there is a locatorControl atom that determines the starting point (let's call it S), using NodeAlign to trace the character's control position at runtime.

Next, an atom that determines the distance of the next stride, the end point (let's call it E), which is parented to the starting point.

Now when the starting point moves, the stride atom also moves, so if I update the current end point as the next starting point, I can restart the cycle.

To minimize flickering, I prepare two sets of atoms, one playing and one updating in the background.

The process is like this
Playing S1 E1 S2 E2 S1 E1...
Updating S2 E2 S1 E1 S2 E2...

Now if you are talking about the direction the character is facing, that's where searchlight kicks in. It works like ray tracing that always points towards the target atom, and I blend it into my walking animations.

The last thing is using VUML, which has a function to determine atom distance. If the location is far away, the blue location atom sets its anim speed to 0. If the character is close enough, the location jumps to the next state by setting the anim speed back to 1.
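In pseudo-ish code the whole loop is roughly this (a hypothetical sketch with made-up names, not real plugin or VUML calls):

```csharp
// Sketch of the walk loop: two start/end point pairs are ping-ponged (one
// drives the current stride while the other is updated in the background),
// and a distance check decides when the location advances to its next state.
using System;

class WalkLoop
{
    class StridePoints { public double Start, End; }   // 1D positions for simplicity

    static void Main()
    {
        double character = 0.0;
        double[] locations = { 3.0, 5.0 };   // blue dot waypoints (e.g. random spot, then sofa)
        int locationState = 0;
        double strideLength = 0.6, closeEnough = 0.5;

        var sets = new[] { new StridePoints(), new StridePoints() };
        int playing = 0;   // which S/E set currently drives the stride

        for (int cycle = 0; cycle < 12; cycle++)
        {
            double target = locations[locationState];

            // update the background set while the other one is playing
            var updating = sets[1 - playing];
            updating.Start = character;
            updating.End = character + Math.Min(strideLength, Math.Max(0.0, target - character));

            // swap and "play" one stride cycle with the freshly updated set
            playing = 1 - playing;
            character = sets[playing].End;

            // distance check: while far away the location waits (anim speed 0);
            // once close enough it jumps to its next state (anim speed back to 1)
            if (Math.Abs(target - character) < closeEnough && locationState < locations.Length - 1)
                locationState++;

            Console.WriteLine($"cycle {cycle}: at {character:F2}, heading to {locations[locationState]:F2}");
        }
    }
}
```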
 
First, there is a locatorControl atom that determines the starting point (let's call it S), using NodeAlign to trace the character's control position at runtime.

Next, an atom that determines the distance of the next stride, the end point (let's call it E), which is parented to the starting point.

Now when the starting point moves, the stride atom also moves, so if I update the current end point as the next starting point, I can restart the cycle.

To minimize flickering, I prepare two sets of atoms, one playing and one updating in the background.

The process is like this
Playing S1 E1 S2 E2 S1 E1...
Updating S2 E2 S1 E1 S2 E2...

Now if you are talking about the direction the character is facing, that's where searchlight kicks in. It works like ray tracing that always points towards the target atom, and I blend it into my walking animations.

The last thing is using VUML, which has a function to determine atom distance. If the location is far away, the blue location atom sets its anim speed to 0. If the character is close enough, the location jumps to the next state by setting the anim speed back to 1.

But there are different animations right? For example for sitting down. Maybe also for turning around? What is triggering the switch?
 
But there are different animations right? For example for sitting down. Maybe also for turning around? What is triggering the switch?
At this stage it is simply a button firing a message to the location controller (the blue dot), which switches the character's destination to the sofa from some random walking (I planned to fire location messages in a sim-like approach, but that's something to work on in the next phase).

Next, when the character is close enough, the pivot animation kicks in. You can notice the slight change in pivot position (the red dot), which the end user would never notice. Searchlight is raytracing this pivot, and it changes the rotation correspondingly.

When the change in rotation is very large, something larger than 150 degrees I guess, it fires a turning message.

Now, because there is no easy method for recalling the rotation values, I use a stupid method instead. I call this the two-moon method.

Imagine Earth has two moons. They are orbiting on the same plane at the same distance. Let's say each moon is 300k km away from Earth. When are the two moons exactly opposite - 180 degrees apart? When the two moons are 300k + 300k = 600k km apart, forming a straight line with Earth.

If they are at 60 degrees, the two moons are 300k km apart, forming an equilateral triangle with Earth.

The "moons" here are 0.1 apart, so I make the threshold something like 0.15, tested and feel satisfactory. Make the threshold larger than 0.2, then the character never turns, only walk into circles.

To determine the exact rotation angle I could apply a few cos or sin functions or whatever, but that's not necessary, since I don't need a very exact threshold that changes at runtime.

But if it is needed, VUML provides some simple maths tools to do that.
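To spell out the geometry (a standalone hypothetical sketch, not actual VUML calls): for two points orbiting at radius r, their straight-line separation is the chord 2·r·sin(angle/2), so the distance can always be turned back into an angle if needed.

```csharp
// Sketch of the "two-moon" relation: chord = 2 * r * sin(angle / 2),
// so angle = 2 * asin(chord / (2 * r)).
using System;

class TwoMoon
{
    static double AngleFromChord(double chord, double radius)
    {
        double ratio = Math.Min(1.0, chord / (2.0 * radius));
        return 2.0 * Math.Asin(ratio) * 180.0 / Math.PI;
    }

    static void Main()
    {
        double radius = 0.1;                              // the two "moons" orbit 0.1 away
        Console.WriteLine(AngleFromChord(0.2, radius));   // 180 degrees (opposite sides)
        Console.WriteLine(AngleFromChord(0.1, radius));   // 60 degrees (equilateral triangle)
    }
}
```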

So now you know why I am adding quite a lot of atoms to the scene, some as a remedy for what VaM is limited at.

The sitting animation uses similar logic: a distance check fires the sitting animation, but I added a timer to allow the character to turn her back before starting to sit.
 
The easiest way to convince people to use the plugin would be to create an awesome scene...
 
The easiest way to convince people to use the plugin would be to create an awesome scene...

In the works!

I’m no artist, so I’m not sure if people are gonna dig into the scene from a feeling perspective… But technically it shows so much stuff that is unheard of in virt-a-mate, so I hope to make a bang.

I wish I could share some 5+ character scenes, which is what I like, and where the plugin shines the most in my opinion. But then people would just be complaining about FPS. So it's a little bit hard to strike a balance between the plugin's capabilities and people's caveats when appreciating it. But I'm getting there.

For now I’m resting a little bit because the 2 weeks of development for the last releases were crazy ?
 
Now, because there is no easy method for recalling the rotation values, I use a stupid method instead. I call this the two-moon method.

So you have access to distances from the VUML plugin, but not angles, right? I guess the AnimationPoser plugin could have a rules tab, to add some rules related to controller distances, rotations and so on. Thinking about it, having for example some avoids automatically placed based on character atom distance so that they don’t collide feels right. Also I think it would be interesting if the anchoring system was expanded to include “pointing at” capabilities.

Now I’m starting to understand all that “atom” talk. It is VERY interesting the stuff that you are doing with the parenting/anchoring system. It really expands on the plugin capabilities by tying it to game objects. The “editing anchor” tool I showed at those videos was basically an accident... But it seems a lot more is possible. Thanks to MacGruber for the original anchoring system.

Edit: it became much easier to expand on the anchoring system because I coded the transformations into an explicit affine group (the ControlTransform class). The timeline morphing is an example of that.
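Roughly, each transform is a rotation plus a translation, and they compose and invert like this (a simplified 2D sketch of the idea, not the actual ControlTransform code):

```csharp
// Simplified 2D sketch of an affine transform group: compose, invert, apply.
using System;

struct Transform2D
{
    public double AngleDeg;   // rotation
    public double Tx, Ty;     // translation

    public Transform2D(double angleDeg, double tx, double ty)
    {
        AngleDeg = angleDeg; Tx = tx; Ty = ty;
    }

    // (R1, t1) ∘ (R2, t2) = (R1·R2, R1·t2 + t1)
    public Transform2D Compose(Transform2D other)
    {
        var (x, y) = Rotate(other.Tx, other.Ty, AngleDeg);
        return new Transform2D(AngleDeg + other.AngleDeg, x + Tx, y + Ty);
    }

    // inverse: (R, t)^-1 = (R^-1, -R^-1·t)
    public Transform2D Inverse()
    {
        var (x, y) = Rotate(-Tx, -Ty, -AngleDeg);
        return new Transform2D(-AngleDeg, x, y);
    }

    static (double, double) Rotate(double x, double y, double angleDeg)
    {
        double r = angleDeg * Math.PI / 180.0;
        return (x * Math.Cos(r) - y * Math.Sin(r), x * Math.Sin(r) + y * Math.Cos(r));
    }
}

class Demo
{
    static void Main()
    {
        var a = new Transform2D(90, 1, 0);
        var roundTrip = a.Compose(a.Inverse());   // identity up to rounding
        Console.WriteLine($"{roundTrip.AngleDeg} {roundTrip.Tx:F3} {roundTrip.Ty:F3}");
    }
}
```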
 
So you have access to distances from the VUML plugin, but not angles, right? I guess the AnimationPoser plugin could have a rules tab, to add some rules related to controller distances, rotations and so on. Thinking about it, having for example some avoids automatically placed based on character atom distance so that they don’t collide feels right. Also I think it would be interesting if the anchoring system was expanded to include “pointing at” capabilities.

Now I’m starting to understand all that “atom” talk. It is VERY interesting the stuff that you are doing with the parenting/anchoring system. It really expands on the plugin capabilities by tying it to game objects. The “editing anchor” tool I showed at those videos was basically an accident... But it seems a lot more is possible. Thanks to MacGruber for the original anchoring system.

Edit: it became much easier to expand on the anchoring system because I coded the transformations into an explicit affine group (the ControlTransform class). The timeline morphing is an example of that.

Yes, if distance and rotation checks are available then I can save even more atoms for CUAs.

On top of that, adding bool logic, string and float storage, and random functions, I am gonna rename your plugin to VaM GameMaker tool DELUXE lol (it was already a gamemaker tool for me)

The distance check itself can be an interesting game element: say you fiddle with your gf's ass and she pushes you away; now you have a short time frame, based on a distance threshold, in which you can hold her hand and give her some punishment. This is something like an FTG played on the bed, but with easier controls.

It can be further expanded into a triangle system, things like hit<hold<grab<hit, with some simple AI coded so she tries to guess your move.

Not to mention you can have some random CUA items in the scene that buff or debuff you.

I actually have a lot of ideas that are, technically speaking, feasible, to bring something that makes this really a game you can enjoy, not just the quick fap material Illusion is producing nowadays.

I miss the ol' days when Illusion experimented with different genres like FTG, ACT, etc., but thanks to VaM and your great effort, I have a feeling that we can achieve much more than what VaM is offering now.
 
Yes, if distance and rotation checks are available then I can save even more atoms for CUAs.

On top of that, adding bool logic, string and float storage, and random functions, I am gonna rename your plugin to VaM GameMaker tool DELUXE lol (it was already a gamemaker tool for me)

The distance check itself can be an interesting game element: say you fiddle with your gf's ass and she pushes you away; now you have a short time frame, based on a distance threshold, in which you can hold her hand and give her some punishment. This is something like an FTG played on the bed, but with easier controls.

It can be further expanded into a triangle system, things like hit<hold<grab<hit, with some simple AI coded so she tries to guess your move.

Not to mention you can have some random CUA items in the scene that buff or debuff you.

I actually have a lot of ideas that are, technically speaking, feasible, to bring something that makes this really a game you can enjoy, not just the quick fap material Illusion is producing nowadays.

I miss the ol' days when Illusion experimented with different genres like FTG, ACT, etc., but thanks to VaM and your great effort, I have a feeling that we can achieve much more than what VaM is offering now.

I'm working on the last issues because I'll relaunch it as "VamAutomata", which is a more appropriate name, and I want "VamAutomata" to start with a fresh blank page.

Like I mentioned before in this thread, right now we have a Markov chain as the basic engine, with a lot of interesting stuff added on top (indirect transitions, syncs, messages, avoids) that makes it less "memory-less" and more "context-full". Right now you can already create a layer without controllers to use as a counter or a timer. But the state engine can probably be recast into a Turing-complete machine. For example, float and string storage in a state is very easily feasible, with not much UI fuss.

I'll probably keep getting ideas about how to add very powerful capabilities with the least amount of explicit interface (which is what I have been doing for a while), because I don't want to include something like a tab for boolean logic scripting. Actually, since boolean logic is based on on/off gates, it seems that much if not all of boolean logic can already be modeled by the presence/absence of transitions or by controllerless layers with "true/false" states. The AND operation, for example, can be modeled by two layers, each with two states, that put avoids on two middle states in a chain in a third layer. So the path in the third layer is only unblocked ("1" or "true") when both of those layers are in their "true" state. In practice, things tend to get more semantic if you are talking about a specific application.
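As a toy model of that AND idea (not plugin code, just the concept): two controllerless "gate" layers each sit in a true/false state, and each gate, while false, puts an avoid on one of the middle states of the chain in the third layer, so the path only becomes walkable when both gates are true.

```csharp
// Toy model: an AND gate built from two boolean "gate layers" that block
// the middle states of a chain A -> M1 -> M2 -> B in a third layer.
using System;

class AndGateLayers
{
    static bool PathOpen(bool gate1True, bool gate2True)
    {
        // each gate, while false, places an avoid on one middle state
        bool m1Avoided = !gate1True;
        bool m2Avoided = !gate2True;
        // the chain A -> M1 -> M2 -> B is walkable only if no state is avoided
        return !m1Avoided && !m2Avoided;
    }

    static void Main()
    {
        Console.WriteLine(PathOpen(false, false)); // False
        Console.WriteLine(PathOpen(true,  false)); // False
        Console.WriteLine(PathOpen(false, true));  // False
        Console.WriteLine(PathOpen(true,  true));  // True  (AND)
    }
}
```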
 