VaM2 better hurry up. Others are chasing your tail

Interesting video. But what always annoys me about these videos is: people have the ability to test (speak to) an "AI", and the only question they have in mind is "what is your favourite animal?". Really?? How about the weather? :rolleyes:

I mean, not only after watching "Ex Machina", why not ask philosophical questions, or even questions that start with "What do you THINK about ...?". For everything else, there's Google or whatever. This has NOTHING to do with an AI. This is just a visualized digital library.

But thanks for sharing nevertheless. Lipsyncing looks great.
 
It connects to GPT, so it can potentially be as good as ChatGPT is. You can give them character prompts and make them talk to each other about anything (apparently*)

Sarah and David are two Metahumans powered by OpenAI's GPT-3. Sarah's prompt makes her believe in UFOs and David's makes him not believe in them. The conversation between the two is completely generated by GPT-3 and filmed live.

The problem with that is the latency, like the pause Siri or Alexa have.
*PS: Not my videos.
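If anyone wants to try the "two character prompts talking to each other" idea themselves, here is a rough sketch using the OpenAI Python client; the model name and the prompts are placeholders I made up, not anything taken from those videos:

```python
# Rough sketch: two character prompts taking turns via the OpenAI chat API.
# Model name and prompts are made-up placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CHARACTERS = {
    "Sarah": "You are Sarah. You firmly believe UFOs are real. Reply in one or two sentences.",
    "David": "You are David. You do not believe in UFOs. Reply in one or two sentences.",
}

def next_line(speaker, transcript):
    """Generate the next line for `speaker`; the other character's lines are passed as 'user'."""
    messages = [{"role": "system", "content": CHARACTERS[speaker]}]
    for name, text in transcript:
        role = "assistant" if name == speaker else "user"
        messages.append({"role": role, "content": text})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

transcript = [("David", "Sarah, you really think UFOs are visiting us?")]
for _ in range(3):  # a few turns each way
    for speaker in ("Sarah", "David"):
        line = next_line(speaker, transcript)
        transcript.append((speaker, line))
        print(f"{speaker}: {line}")
```

Every round trip to the API is also exactly where that Siri/Alexa-style pause comes from.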
 
It looks much better than most of the videos I see out there. But not all the way up there.

Ziva Dynamics should be available to creators when VaM migrates to Unity 2022. It deforms the skin in real time by simulating how muscle would move with the animation. That should make the lip sync look better too. UE5 also has support for it, but I don't think the previous videos I posted made use of it.
 
It's nice to see tech demos like this; Unreal Engine also has amazing tech demos. But my question is: how feasible is this kind of technology for the normal user? Of course companies can create these kinds of photorealistic characters, but will they also run on the average computers people have at home?

Those tech demos are always nice, but if you look at the bigger picture, you will notice that many tech demos released years ago still look better than the average triple-A product we have on the market.

I think we are still years away from seeing this kind of tech in games or, let's say, VR software. And this is just a single character, without any rendered environment or anything else.
 
With VaM, physics is the bottleneck: FastPhysicsDemo
You can have looks with all-8K textures, multiple lights and everything, and still run it fast on a good GPU if the scene doesn't require heavy physics, like epiTemplateChain.

It really depends on how things are implemented. Newer games have faster physics engines.

If something relies on the GPU, it will get faster with an upgrade or by lowering the resolution. If it relies on the single-thread performance of a CPU, it's gonna get stuck there. GPUs can do 2x in performance from generation to generation or from model to model, but CPU single-thread performance only does 1.1-1.2x. That's like 16x vs 2x over 4 generations.
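A quick back-of-the-envelope check of that compounding (the per-generation multipliers are rough assumptions, not benchmark numbers):

```python
# Compounding a rough ~2x-per-generation GPU gain vs a ~1.2x-per-generation
# CPU single-thread gain over 4 generations (assumed multipliers, not benchmarks).
generations = 4
gpu_per_gen = 2.0
cpu_per_gen = 1.2

gpu_total = gpu_per_gen ** generations  # 2.0^4  = 16x
cpu_total = cpu_per_gen ** generations  # 1.2^4 ≈ 2.07x

print(f"GPU after {generations} generations: ~{gpu_total:.0f}x")
print(f"CPU single-thread after {generations} generations: ~{cpu_total:.1f}x")
```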

So, if developers can pull it off with average graphics, meaning the physics are there, it won't be hard to crank the visuals all the way up. AAA games stream a whole city or sometimes a whole world. We only need room-sized content for room-scale experiences. We can have better visuals than AAA games.
 
"Memory Optimizations"

Heck yeah!!! :love:





edit:

If you see green ladies with the latest VaM update, clearing your cache might help. (Don't do it unless you get this problem)

Don't even bother if you're not seeing this problem. It might be something else on my side.

I'm just guessing here, but I think it's because I installed VaM fresh, started a scene, and didn't wait for the skin textures to finish loading, so the game failed to cache those textures properly and was then loading that bogus cache. Clearing the cache seems to have fixed it.
 
Holy quim, Batman! The gods of VaM have gathered upon Mt. Olympus to contemplate the meaning of virtual life and software dev. Happy that I have lived this long...
 
A Metahuman that listens and responds in different languages, analyzes your code, and adds the functions you ask for.

"A bicycle for the mind"

Apple Reality Pro is rumored to have an M2-derived SoC, a second processor dedicated to the data from the lidar and multiple cameras, high-quality color passthrough, high-resolution screens...

Apple will likely put Siri in your room this year or the next. That will change people's perception of VR.
  • It's no longer the nerd cave thing to do in isolation. It's part of life.
  • It's no longer real vs virtual, AI vs humans. It's coexisting.
  • It's no longer buying Nokia ringtones, downloading a better calendar app. It's adding new capabilities and looks to your virtual assistant. Including games like playing checkers with her in your living room.
That might start soon, followed by a non-Pro (cheaper) version of the device for the masses, and probably the Quest 3 in between as a cheaper version that you can also use for wireless PCVR. Not to mention what Valve might be cooking.

Pay more attention to VR.
VaM is the only thing out there that can do this.
Valve could do it better but wouldn't.
You can.

Future scenes and future versions would do themselves a favor if they followed the same VR/AR/MR-friendly design language.
 
Seems like an old game with the same idea; I can't remember the name, but it disappeared. From back when I had an Oculus DK2. I don't think anything could look as good as VaM, or give us the ability to natively add content without doing mod packs. We'll see though. I'm still a VAMMER and will be here for the long haul.
 
Meh, lighting looks very flat... VaM is so close to making realistic scenes, almost trick-your-mind style. I don't think any others out there (other than actual VR porn) come close. And most seem to focus on a genre, not the mechanics. It's like they had this great idea, but didn't have the drive, or the guys at VaM, to code for them.
 
Maybe VaM 3.0 will have lidar and augmented reality, where we don't need scenes, just a room to sit in, and use HoloLens, lol. You'd be sitting at the kitchen table with your glasses on, and there is your Virt-A-Mate right there with you, having a conversation with her, sitting across the table, diddling herself... oh... in 10 years, I bet it'll exist. Sign me up.
 
Quest Pro has full colour passthrough and room scanning already. You can already make a really good scan of your room with just an iPhone to use in VR. We are months to a couple of years at most away from VR being able to beat any AR solution.
I already use a scan of my room in VaM so I can use my own furniture; it's just more fun to be somewhere else, like a cyberpunk room, instead.

It's ChatGPT that will really revolutionise VaM.
 
And THAT is the only reason I would consider an iPhone. Gonna test the S23 scan out, cuz my S21 is... meh... ok... but if it doesn't do much better, I may bite the bullet just for that.
 
I want to test one in the store. My S21 does a mediocre job with room scans but can't do flat walls, cuz it relies on imaging, so the walls seem to bleed into nearby objects, like the wall behind my bed. I'd have to hang posters everywhere for it to work. The iPhone is supposedly much better at this. LIDAR ==>
  1. a detection system which works on the principle of radar, but uses light from a laser.
can get very accurate readings.

Ok, a quick Google, and never mind... not going to waste my money this go-around, no S23 testing needed; it's the same crap, two cameras and a PC to gueeeesss at stuff. It'd be quicker to make it in Blender and texture it with pictures. Maybe borrow an iPhone just for this.
"Samsung s23 doesn't have any depth camera. You can use it with photogrammetry, but you'll need a PC with a decent GPU to process the scans. "
 
On a sad note, my i7-8700K just died this week; had to sell a kidney and go get a 12900K, a new motherboard and a water cooler .... :( Paiiinful
 
How? Do you have any "insights"? Is it speculation, or do you have some valid points? I'm just curious.
You can't see?
Its all "on the verge" of being amazing. Look at how well services like Amazon Alexa, Apple Siri etc understand your voice.
Add in the recent amazing advances in generating human like speech and the obvious conversational breakthroughs with ChatGPT and you get... An AI "partner" who can listen, understand and answer back in a rational way.
It's only a lack of data training and the right model that stop AI models from controlling the movement too. I have seen AI trained motion models for game characters and they can walk around, interact with objects, duck under things and step over things etc all without pre-programmed fixed animation in dynamic environments.
Can you imagine a VaM model stood the other side of the room in your scene and you just speak, saying "come here and sit down" and she does. She walks over to you and sits in the chair next to you, or maybe sits on the floor, or your lap, without any pre-programmed animation or even knowing which she will chose.
this is all possible now.
Each part has been done, and works. It has even been done on consumer level hardware.
We are on the verge of getting realistic interactive virtual people. There needs to be a lot of improvement in hardware to get it all running together locally, none of this cloud shit but it's coming.
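To make the "listen, understand, answer back" loop concrete, here's a minimal sketch of how the pieces could be wired together, assuming the SpeechRecognition, openai and pyttsx3 Python packages (none of this is something VaM ships with):

```python
# Minimal listen -> transcribe -> LLM -> speak loop.
# SpeechRecognition handles the microphone, OpenAI does the "understanding",
# pyttsx3 gives a basic offline voice. All package choices are assumptions.
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()
recognizer = sr.Recognizer()
tts = pyttsx3.init()

history = [{"role": "system", "content": "You are a friendly virtual companion."}]

with sr.Microphone() as mic:
    while True:
        audio = recognizer.listen(mic)                    # 1. listen
        try:
            text = recognizer.recognize_google(audio)     # 2. speech -> text
        except sr.UnknownValueError:
            continue                                      # didn't catch that, keep listening
        history.append({"role": "user", "content": text})
        reply = client.chat.completions.create(           # 3. understand + respond
            model="gpt-4o-mini", messages=history
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        tts.say(reply)                                    # 4. text -> speech
        tts.runAndWait()
```

Each of those steps adds latency, which is exactly the gap that still needs closing before it feels like a real conversation.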
 
Check out the free Scaniverse App, or the semi-free Polycam.
Scaniverse does very well with just photogrammetry, and my experiments with lidar vs photogrammetry show very little difference on normal commercial mobile hardware.

All that said, it's still far from good enough for me. I think using lidar to scan the room and get all the measurements to create a base outline, then manually modelling and texturing from those dimensions, is the way to go for now.

For ease of use, a Scaniverse model of your room can be made in minutes and easily imported into VaM via Unity.

It's an odd feeling sitting on your sofa IRL and in VR at the same time, and looking over to see a virtual person sitting next to you that you feel you should be able to reach out and touch. If you put a dummy of some kind there too, it would be insane.
 
If we could just grab the AI from another Unity game... that would be awesome. Of course it would have to be 3 laws safe.
 
Alright, f it.

Let's implement this:


And this:


So that this girl:

https://www.facebook.com/1000315455...yQ9chkuyRNH6MmWJ213AomYQ4cEl/?mibextid=SDPelY

Can be in front of you as your assistant in VR, to help you reshape it from within like The Dark City, and also be the Siri to your “bicycle of the mind”.

Apple will be expensive at first. Quest 3 seems to have the depth sensor and camera + color passthrough to make it Mixed Reality and sit a Siri alternative next to you.

We'll go to Steam when their wireless follow-up to the Index comes.

The current proof of concept is based on an old version of Unity that wouldn’t get the API hooks to this stuff. It's all my resources put together.
All free btw, so be my guest if you think this can be done in VaM.
Either way I just want this thing to exist for creators.

I have a version with floating browsers that you can place anywhere in your FOV. Any number. And she dances for you and you have great menus. But the browser is outdated, thus not secure, and I would not let my business be done in that. So I'm migrating to UE5.

It will be the Virtual Art Maker (I own that site, empty atm) that reshapes the reality around you as you tell it to, and the existing version will be in a viral project.

The day my assistant (the third person I mentioned) can do what he has to do for you, from within the game, we'll know we got there.

We'll also have this when there are two characters.

We'll be capturing the TikTok girls who will be in the viral video and adding them to the VirtualArtMaker, so that they get digital doubles we can use in the same videos. They will also be available as assistant AI skins for this project, where you can tell them to dance like they did in the video. She'll guide you on our website to the archviz project that might work for your next art piece. The TikTok girls will be pushing it to get bigger, and they'll be joining it to be immortalized the way they are.

It's your movie studio where this Goddess is your assistant and she/he looks any way you want them to.
And creators can get paid for their archviz project online, without asking people to join Patreon or anything.

That's the gist of it. I have to move fast, so contact me if it interests you.


Wanna check the current version and hear more?

Poke me ASAP, magic UE5 people.

I will express a couple of my thoughts. VaM is great in its boundless creativity; thanks to the creators of mods, the functionality continues to increase, and this is exactly what makes it so popular. VaM allows you to realize almost any fantasy (and if you know Blender and Unity, then any), and the only one that can compete with it now is HS2, but it has no physics and so on. I've heard that some VaM users are making a game with similar functionality on UE5, but I can't say anything specific about it.
 
Imagine the robot as your assistant, with "GPT-4 with browser support", rigged with Ziva Dynamics for the animation, looking photorealistic as it's based on scans of the influencer, and taking voice commands from you to shape the reality you're in from within, as if it was The Dark City, with your floating charts wherever you place them.

How is this not the best possible productivity tool and a Virtual Art Maker?

How is it not the best thing to be working on today, so that it's ready for release when Apple Mixed Reality descends to mass-adoption price levels, as the Lisa did back in the day?

Apple Vision Pro is just the start.

I now have a production PC with a 4090. Let's move this thing; my friends and I are at your disposal.

Cheers

PS: My own stuff definitely has to be:
1. No nudity
2. No influencer under 18y.
3. Family and small business oriented.

You do you, with properly licensed content. I keep my stuff clean but advance you as much as I can. It’s a community.

It’s called Virtual Art Maker so that if @meshedvr feels like it, a synergy around the VAM logo can be created. But as I let him know, I would never do that unless he thought it would benefit him.

I’m an ally.

As I said before, I just want this thing to exist. You do it in the current VAM or the next, feel free to use everything that I have ever shared for free if the license permits.
 
Hi,

If this is not the right place, point me to the exact location.

I have Angela:

I want her to have this:

So that I can ask her to put on a different outfit from this:

And to bring in a different Room subScene from this:

Etc. etc. (lighting rig, GI, furniture, customizing the character, different skins, hair, a different look...)

She reshapes reality (including herself) as seen in The Dark City, if I tell her to do so.
(Unlike this one where you do it yourself but the logic is the same)
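If anyone does pick this up, here's a toy sketch of the kind of command-to-action routing I mean; the two action functions are purely hypothetical placeholders, not anything that exists in VaM:

```python
# Toy sketch: route a transcribed voice command to a scene action.
# load_outfit / load_subscene are hypothetical placeholders.
import re

def load_outfit(name):
    """Placeholder: would swap the character's clothing preset."""
    print(f"[scene] loading outfit preset: {name}")

def load_subscene(name):
    """Placeholder: would merge-load a room subscene."""
    print(f"[scene] loading room subscene: {name}")

COMMANDS = [
    (re.compile(r"put on (?:the )?(.+) outfit", re.I), load_outfit),
    (re.compile(r"bring in (?:the )?(.+) room", re.I), load_subscene),
]

def handle(utterance):
    """Run the first action whose pattern matches the spoken sentence."""
    for pattern, action in COMMANDS:
        match = pattern.search(utterance)
        if match:
            action(match.group(1))
            return
    print("[assistant] Sorry, I didn't catch that.")

handle("Angela, put on the red dress outfit")
handle("bring in the cyberpunk room")
```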

Help. Get paid. Poke.

 
epi.noah: It would be awesome if you could tell her to change clothes or something and she would automatically strip and change clothes.
 
Just saw this, and honestly it looks amazing. The cloth physics are outstanding.
None of that nipples-poking-through-the-top here!
It's also a Quest 2 native app (no PC needed!), but apparently the high-level physics stuff (the cloth) requires a PC. Not sure how it will look on headset only, but not needing the PC will be pretty cool.
I just saw this post; to be honest, ComeCloseGame is a "Total Joke" compared to VaM. It's like comparing one average-looking flower to a vast garden of flowers, some of which are of supreme beauty.
Per various internet sources, the Quest 3 GPU has about the same power as a GTX 1050 Ti. This means any game on Quest 3 is graphically not comparable to a modern PC VR game.
 