Sensor Plugin


For Angle, I'm guessing it basically measures the largest angle between two nodes and the directions they're pointing, correct? Would there be any way to measure the angle created by three points instead? So like, the angle created between the foot, knee and hip points during a leg bend?
 
Yep! That's exactly how it works.

I'm sure you *could* measure the angle between points (but it would take more of a custom plugin I suspect). Out of curiosity, what's the use case there?
 
Basically, I have some bend morphs that I want to use automatically. I have a couple that morph the hip/buttocks area when the knee bends up. I currently have it working with your plugin using the distance between the knee and the chest node, but it's not perfect. The knee-bend morphs work well enough, but the hip ones don't quite.
 
So you would want to... say... measure the angle at the hip between the chest and knee (as the two extra points)?
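For what it's worth, the three-point version is just the angle between the two vectors that meet at the middle joint. Here's a minimal sketch in Python (the function name and the sample coordinates are illustrative, not part of the plugin):

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) formed by points a-b-c,
    e.g. hip-knee-foot for a leg bend."""
    # Vectors from the joint out to the two neighboring points
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    mag = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    # Clamp to avoid math.acos domain errors from floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

# A straight leg: hip, knee, foot collinear -> ~180 degrees
print(joint_angle((0, 2, 0), (0, 1, 0), (0, 0, 0)))  # 180.0
# A right-angle bend at the knee -> 90 degrees
print(joint_angle((0, 2, 0), (0, 1, 0), (1, 1, 0)))  # 90.0
```

In a VaM plugin the same calculation would use the three nodes' world positions; the clamp matters because tiny rounding can push the cosine just outside [-1, 1].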
 
Hey,

I'm thinking about using this in a demo scene for my plugin.

But... I have a feature request. No rush, and no biggie.

I have a rig set up on a character, with zones. When the player is not near enough to any zone (outer threshold), I want to trigger a state change. The same for if the player is too close to create any further "sensor" feedback (inner threshold). The way it looks now, I would need two instances of the plugin if I want to categorize movement as 1) in range, 2) too far, and 3) too close. Both instances would target the same receiver, but only one would have "scaled" + "outer threshold", and the other instance would just have a threshold serving as the "inner threshold".

If I'm understanding correctly, that's the use case I have in mind.
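To make the request concrete, the requested behavior boils down to classifying one distance against two thresholds. A rough sketch, assuming hypothetical threshold values and function/label names (none of these come from the plugin itself):

```python
import math

def classify_proximity(player, target, inner=0.5, outer=3.0):
    """Classify a player's distance to a target against two assumed
    thresholds: 'too_close' inside `inner`, 'too_far' beyond `outer`,
    and 'in_range' between them (where scaled sensor feedback applies)."""
    d = math.dist(player, target)
    if d < inner:
        return "too_close"
    if d > outer:
        return "too_far"
    return "in_range"

print(classify_proximity((0, 0, 0), (0, 0, 0.2)))  # too_close
print(classify_proximity((0, 0, 0), (0, 0, 2.0)))  # in_range
print(classify_proximity((0, 0, 0), (0, 0, 5.0)))  # too_far
```

A single plugin instance with both an inner and outer threshold could drive all three state changes, instead of two instances targeting the same receiver.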

The first plugin I released can be used to "geofence" areas of an environment (CollidersAsTriggers), Sensor can track and form interactions based on where in the geofenced section the player is, and then the SimpleStateMachine plugin can be used to create all of the states/changes that should be applied outside of the sensor triggers (e.g. moving the character/player/Atoms, changing their appearances or physics, setting off animations, etc.). Just those 3 simple plugins could create a lot of the types of interactions users don't get to see much, despite the high quality of scenes coming out. Some of the best scenes quality-wise seem to have the least pleasant/flexible interactions.

Sure, I could use another Atom with CollidersAsTriggers to handle "too far" and "too close", similar to how I use it for geofencing. If Sensor didn't exist, that would be an okay solution. But that approach is better suited to visible boxes/shapes, as when serving as the edge of a map/level, or when special geometries are needed.

It would be much better, in a design sense, to keep a separation between triggers related to proximity, using Sensor, and triggers related to geofencing AROUND the player, using CollidersAsTriggers.

Hope that makes sense.
 
While the idea is still in my head, imagine this scenario:
(note I'm not trying to build this scenario out - it just popped into my head as an example of how the plugins could be used together in a "large/complex" scene effectively)

The Environment has 3 rooms. Each room has 3 places to sit / lay down.

Let's say you're leading a character around (or they're leading you).

Put the 2 characters into a SubScene. Put the Sensor Plugin on whoever is leading.

Geofence the rooms with CollidersAsTriggers.

Attach SimpleStateMachine to the SubScene Atom.
- Create 3 state slots - one for each room.

Attach a second SimpleStateMachine instance to the SubScene Atom.
- Create 3 state slots - one for each place to sit/lay.

Create a 1-second Animation on the SubScene Atom which will call "Next" on the second instance of the state machine. Keep in mind the state machine can save/restore the state of the Sensor plugin - as if you attached a bunch of instances and only enabled one at a time.
- Edit the state slots by setting Sensor to target one of the places to sit for each state - probably keeping the same threshold values.
-- Now, every 1 second, the state machine will alter Sensor so that it can check proximity to each place to sit/lay, using a single instance of itself.
-- If within 3 seconds the leader is close enough to any of the places to sit, Sensor will trigger a position pose or animation directed at that piece of furniture.
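The cycling idea above can be sketched as a round-robin poll: one "sensor" that the state machine points at a different target each tick. Everything here is a hypothetical stand-in (the `tick` function, `targets` positions, and `THRESHOLD` value are invented for illustration), not the actual plugin API:

```python
import itertools
import math

# Hypothetical sit/lay targets with assumed world positions
targets = {"chair": (1.0, 0.0, 0.0), "sofa": (4.0, 0.0, 0.0), "bed": (8.0, 0.0, 0.0)}
THRESHOLD = 1.5  # assumed proximity threshold

def tick(player_pos, target_name):
    """One state-machine tick: point the single sensor at one target
    and fire if the player is within the threshold."""
    d = math.dist(player_pos, targets[target_name])
    return target_name if d <= THRESHOLD else None

# Cycle targets as the "Next" action would, checking a fixed player position.
player = (3.5, 0.0, 0.0)
cycle = itertools.cycle(targets)
hits = [tick(player, next(cycle)) for _ in range(3)]
print(hits)  # [None, 'sofa', None]
```

Over one full 3-second cycle, the single sensor has checked every target once; only the one the player is actually near fires.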

The first instance of the state machine would just move the furniture into the room before the player enters it - maybe change up the colors along with the placement of the furniture for each room.

All together, the single Sensor instance would be capable of tracking the proximity between the player and all "nine" pieces of furniture. Perhaps I would need one instance of the state machine per room... but either way, I'm pretty sure only one instance of Sensor would be required to make that whole scenario work.
 