Stable Diffusion

For real? With a 4090 it takes around 10 seconds for 1024x768, and 1920x1080 takes 23 seconds :censored: :D
But that's with Euler a and 20 steps. Different sampling methods need different amounts of time. DPM++ SDE is the best one for me.
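If anyone wants to compare samplers outside of a UI, here's a rough sketch using the diffusers library. The model id, prompt, and resolution are just placeholders, not my actual setup:

```python
# Minimal sampler comparison sketch with diffusers (assumes a CUDA GPU;
# DPMSolverSDEScheduler additionally needs the torchsde package installed).
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverSDEScheduler,  # diffusers' DPM++ SDE scheduler
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder -- point this at whatever base checkpoint you use
    torch_dtype=torch.float16,
).to("cuda")

prompt = "photo of a city street at night, rain, neon reflections"

# Euler a, 20 steps
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe(prompt, width=1024, height=768, num_inference_steps=20).images[0].save("euler_a.png")

# DPM++ SDE, same step count, for a side-by-side comparison
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)
pipe(prompt, width=1024, height=768, num_inference_steps=20).images[0].save("dpmpp_sde.png")
```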
EpicRealism is the best realistic model if you ask me; with the right prompts you'll get really good results. For example:

View attachment 281228
View attachment 281229
View attachment 281231

It's insane.

The third picture is very good, I like that one.

This is what I've rendered so far, but I haven't looked much into it yet, since I don't have much time this week.
00000-333266261.png
00003-3318523159.png

And I've tried something like Ghost in the Shell and Motoko Kusanagi stuff. It took 1.5 hours to generate, and then I got this.
00004-2176119680.png

I mean, what the hell is this? Waited 1.5 hours for this.
 
I wouldn't even consider creating one picture that takes 90 minutes. Energy consumption alone is ridiculous. Hope you'll get a faster card soon.
 
Right, I forgot. But you can use the 1.5 models on SDXL too as you know.
Do you know a realistic SDXL model that gives good results?
 
Y'all should try ComfyUI if you like full control over your AI creations and unlimited workflow possibilities; it's great for merging multiple LoRAs and checkpoints for future use. When I try to make a new model, I use the crap out of this. Below is a simple example, but you can have multiple output paths with options and generate lots of different images in one workflow. I love it. I've even used it to create my ideal models from a combination of multiple LoRAs. Once you watch a few of the how-to videos, you get a good idea of how to use it and expand your AI content creation. I still use Automatic1111 for inpainting, upscaling, etc., but I like the flexibility of this so much better.

1698688877207.png
I'm not advocating for them, I just found it one day and think it's pretty cool. Just so you know.
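And if you ever want to drive it from a script instead of the browser, here's a rough sketch hitting the ComfyUI server's API. It assumes a local install on the default port and a workflow exported via "Save (API Format)"; the node id is just an example from my own export, yours will differ:

```python
# Minimal sketch of queueing a workflow against a local ComfyUI server.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI address (assumes a local install)

# Workflow previously exported from the UI with "Save (API Format)".
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Tweak a node before queueing, e.g. the positive prompt of a CLIPTextEncode node.
# "6" is just an example node id from my own export -- check your JSON for the right one.
workflow["6"]["inputs"]["text"] = "portrait photo, natural light, 85mm"

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    f"{COMFY_URL}/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # Returns a prompt_id you can poll later via the /history endpoint.
    print(json.loads(resp.read()))
```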
 
I don't know ... it looks so fucking complicated, and right now I'm not in the mood to invest time in learning something new. Recently I learned to create LoRAs myself, which is really amazing. But I'll keep that in mind for the future :)
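For anyone curious, testing a freshly trained LoRA outside the UI is pretty quick with diffusers. This is just a rough sketch; the base model id, folder, and file names are placeholders, and it assumes the LoRA was saved as a .safetensors file for an SD 1.5-style base:

```python
# Load a trained LoRA on top of a base pipeline and render one test image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder -- use the base model the LoRA was trained against
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder paths: directory containing the LoRA plus its file name.
pipe.load_lora_weights("./loras", weight_name="my_lora.safetensors")

image = pipe(
    "portrait photo, detailed skin, soft light",
    num_inference_steps=20,
    cross_attention_kwargs={"lora_scale": 0.8},  # LoRA strength
).images[0]
image.save("lora_test.png")
```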
 
Does anyone know a way to use OpenPose in SD to detect poses in an image and output the result to a .vap file?
 
Hey, is there a way to create a deepfake vam model using stable diffusion? I assume I could use it to create images with ReActor and then send those images to foto2vam, but it seems like there should be a better way with more accurate results. No?
 
What does that even mean? A "deepfake vam model"?

There is no magic button/software/tool to port 2D images to a 3D morph for VAM. Everything requires a shit ton of work to port a character, including modelling and texturing.
 