Dear hazmhox,
we talked about that earlier, but could you please add an "actor" concept to this great plugin? I find it really cumbersome to manually set the avatar, the avatar position and adjust the colors via triggers every time the talking actor changes. Please let us define an array of actors with corresponding defaults for the settings mentioned above. This way we would only have to set the actor for the current dialog. Maybe the chosen actor's avatar could be shown in the UI, so that we can see at first glance who is supposed to speak the line.
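Something like this is what I have in mind (just a rough sketch, all of these names are placeholders, nothing from the existing plugin):

```csharp
using UnityEngine;
using System.Collections.Generic;

// Rough sketch of an "actor" entry: per-actor defaults so a dialog line
// only has to reference the actor by name. All field names are made up.
public class Actor
{
    public string name;
    public string avatarPath;       // default avatar texture/image
    public Vector3 avatarPosition;  // default avatar position in the UI
    public Color textColor;         // default text color
    public Color bubbleColor;       // default bubble/background color
}

// The plugin would keep a list of these; each dialog line then just stores
// the actor's name (and maybe an emotional state, see below).
public class ActorRegistry
{
    public List<Actor> actors = new List<Actor>();
}
```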
Some additional ideas:
Let us set up avatars for different emotional states of the actor. The current emotional state could then be chosen alongside the actor in the dialog UI.
It would be great if we could link a person atom to the actor, so that the UI could automatically be positioned beneath the atom's head. Maybe like this: position = atomHeadPosition + Vector3.Cross(atomHeadPosition - cameraTransform.position, Vector3.up).normalized * configurableOffset, where cameraTransform is either ViveCenterCamera, OVRCenterCamera or MonitorCenterCamera (on SuperController.singleton), depending on whether we are in VR.
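In code it could look roughly like this (untested sketch; apart from the camera transforms on SuperController that I mentioned above, the names and the "headControl" lookup are just my guesses for how it would be done inside the plugin):

```csharp
using UnityEngine;
using System.Linq;

public static class ActorUiPlacement
{
    // cameraTransform would be ViveCenterCamera, OVRCenterCamera or
    // MonitorCenterCamera from SuperController.singleton, depending on
    // whether we are in VR. configurableOffset would be a plugin setting.
    public static Vector3 GetUiPosition(Atom personAtom, Transform cameraTransform, float configurableOffset)
    {
        // Assumption: the head controller of a Person atom is named "headControl";
        // fall back to the main controller if it is not found.
        FreeControllerV3 head = personAtom.freeControllers.FirstOrDefault(c => c.name == "headControl");
        Vector3 atomHeadPosition = head != null
            ? head.transform.position
            : personAtom.mainController.transform.position;

        // Same formula as suggested above: offset the UI perpendicular to the
        // camera-to-head direction, so it sits next to the head from the
        // viewer's point of view.
        Vector3 side = Vector3.Cross(atomHeadPosition - cameraTransform.position, Vector3.up).normalized;
        return atomHeadPosition + side * configurableOffset;
    }
}
```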
If you don't consider this necessary, I will probably do it myself some time in the future. Would you like me to send you the modified code in that case?
Cheesy