Stable Diffusion

The last few weeks I've spent 100% of my free time on Stable Diffusion! Unbelievable!

Ok, where is a plugin to export poses, depth maps and normal maps for ControlNet? The native export from VaM can create the perfect input for ControlNet. I am confident it would not be a very complex plugin. Create the plugin, pleeeease!


Why export when you can do it directly in VaM?


I didn't add ControlNet yet though
 
Now, something more difficult: is the reverse possible? Starting from AI (Stable Diffusion), is there a way to have, let's say, a generated AI image converted to a scene? I'm sure this will happen one day, but maybe it already has? :cool:
 

What people will use, imho, is something like AlexV posted: you'd see the frames passed/filtered through AI. That is, if you meant for the graphics to look more realistic.
 

Yes, this tool seems fun, but it is used to extract/filter "data" from VaM to the AI to generate an image. What I'm looking for is the reverse: the AI generating a VAR file from an image :devilish: Let's say an idle scene of a model coming from an AI-generated image... here's an example of an image of a valkyrie I created, well, the AI created the image LOL, I just provided what I wanted ahahahaha.

Could you imagine how powerful that would be if the AI created not only the image, but a VAR file of an idle scene from which you get the model!!! (at least that's my view on this ahahahah) :p
 

Attachments

  • File 3.png (2.3 MB)
There are some things like text-to-3D already (like 3dfy.ai), and we'll likely see revolutionary stuff in that area quite soon.

The thing about that, though, is that even when it works, it will only look as good as the game engine allows. If you had that right now, the model would still look like any other VaM model, I think, if only because of the lighting alone. It would still be amazing for asset generation though.
 
Came here to see if there was any VaM/SD crossover, and lo and behold!

Anyway, I think some kind of plugin to export OpenPose templates right from VaM would be very cool.
 
Yeah, I just wanted to see how effective it would be to take a pose from VaM and slap it on ControlNet

Turns out it's pretty damn easy

pic1.png 00007.jpg
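For anyone wanting to try the VaM-to-ControlNet pose idea, the control image itself is simple: a stick figure drawn on a black canvas. Below is a minimal sketch of what a plugin's pose export could produce, assuming a hypothetical set of 2D keypoints (all names and coordinates here are made up for illustration):

```python
from PIL import Image, ImageDraw

# Hypothetical 2D keypoints (x, y) in pixel coordinates, as a VaM plugin might export.
KEYPOINTS = {
    "head": (256, 80), "neck": (256, 130), "hip": (256, 300),
    "l_shoulder": (210, 140), "l_elbow": (180, 210), "l_hand": (160, 280),
    "r_shoulder": (302, 140), "r_elbow": (332, 210), "r_hand": (352, 280),
    "l_knee": (230, 400), "l_foot": (225, 490),
    "r_knee": (282, 400), "r_foot": (287, 490),
}
# Pairs of keypoints connected by a "bone".
BONES = [
    ("head", "neck"), ("neck", "hip"),
    ("neck", "l_shoulder"), ("l_shoulder", "l_elbow"), ("l_elbow", "l_hand"),
    ("neck", "r_shoulder"), ("r_shoulder", "r_elbow"), ("r_elbow", "r_hand"),
    ("hip", "l_knee"), ("l_knee", "l_foot"),
    ("hip", "r_knee"), ("r_knee", "r_foot"),
]

def draw_pose(keypoints, bones, size=(512, 512)):
    """Draw a simple stick-figure skeleton on a black canvas."""
    img = Image.new("RGB", size, "black")
    draw = ImageDraw.Draw(img)
    for a, b in bones:
        draw.line([keypoints[a], keypoints[b]], fill="white", width=4)
    for x, y in keypoints.values():
        draw.ellipse([x - 5, y - 5, x + 5, y + 5], fill="red")
    return img

pose = draw_pose(KEYPOINTS, BONES)
pose.save("pose.png")  # feed this as the ControlNet conditioning image
```

A true OpenPose control image color-codes each limb and joint; a plain skeleton like this is closer to what the scribble ControlNet expects, but it shows the idea.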
 
Now I was interested as well :D

Input
1652093385.jpg

Output
00159-2849461331.png


Like you said: it's fucking easy. ControlNet is really a game changer when it comes to posing, will do some more now.
 
He's talking about the 0.9 beta; I'm talking about 1.0, which came out 2 days ago or so.
 
Correct me if I'm wrong, but with Stable Diffusion XL, you need to buy a currency to generate pictures. I have no problem spending money on a service or AI, but not paying per picture.

And another thing: I've also tried to generate NSFW, but then I receive a message that something is wrong with my prompts. If I remove the NSFW prompts, then it works perfectly. But now I've got 5 images which I don't like and have already lost half of my currency.

If there was an X price per month for unlimited use without restrictions, then I would definitely subscribe. So, in my opinion, the XL version is not worth it.
 
Correct me if I'm wrong, but with Stable Diffusion XL, you need to buy a currency to generate pictures.

No, SD is free and offline

And another thing: I've also tried to generate NSFW, but then I receive a message that something is wrong with my prompts. If I remove the NSFW prompts, then it works perfectly. But now I've got 5 images which I don't like and have already lost half of my currency.
Are you doing a Discord thing or using some website? Those are mostly gimmicks to make money. What you see online as AI-generated images, what people post around here too, is stuff generated locally. Most paid AI image services are 'tourist traps' imo
 

If you go to Dreamstudio and select SDXL, it costs you points to generate pictures, and those points you need to buy. It's all in the video you posted.

I know that Stable Diffusion is offline and free, but not the one on Dreamstudio. Dreamstudio needs a currency to generate pictures, even if you use Stable Diffusion XL.

And that page doesn't generate NSFW either.

That's what I was talking about. And no, I don't use anything like Midjourney or any Discord AI or whatever. They are indeed gimmicks and even feel like scams. Those Discords are also filled with spam.
 
It's all in the video you posted.
Ah, that was only a review of how the new model's images look compared to the old ones. You can run it locally too.

Dreamstudio is legit; it's owned by the company that made Stable Diffusion. Until they released SDXL, I think people could only test the new model on their site, to preview beta versions.

Most people won't use the actual SDXL model but new models (checkpoints) trained by others, like https://civitai.com/tag/sdxl. It's the same now: all the fancy images you see around are not made with the actual stable-diffusion-v1.5 model, but with other custom models that look way better. Entirely so for NSFW images.

At the moment, SD1.5 custom models still look better, or just about the same, compared to SDXL, and are considerably quicker. So it remains to be seen when (or if) new custom models will be made to push SDXL to its potential and make it worth the performance cost, or if it will suffer the same fate as SD2.
 

Ah thanks, I thought you could only use Stable Diffusion XL through the Dreamstudio website. And I haven't worked with custom models yet myself; I planned to, but still haven't done it.
 

Midjourney is like that. StableDiffusion's thing is to be free, open AI models; you just need a UI to load them.

Myself, I haven't tried SDXL yet, but from what I can tell it should work now with Automatic1111, the most popular local UI. Until recently, I think people used ComfyUI to be able to load SDXL. There's also this new UI, https://github.com/lllyasviel/Fooocus, that I've seen people on reddit talk about.
 
@Toonen1988
Search for an Automatic1111 SDXL installation guide and you'll be happy.

Thanks,

I've tried to install the webUI in the past, but I had some issues and errors. I searched for an Automatic1111 SDXL installation guide and found this YouTube video, which made it very easy.



But I see the limitations of my computer now. I'm going to build a new computer in the near future; I'm still working with a GTX 1080 and it takes like half an hour to generate a 1920x1080 picture.
 
I'm still working with a GTX 1080 and it takes like half an hour to generate a 1920x1080 picture.
For real? With a 4090 it takes around 10 seconds for 1024x768. 1920x1080 takes 23 seconds :censored: :D
But that's with Euler A and 20 steps. Different sampling methods need different times. DPM++ SDE is the best to me.
EpicRealism is the best realistic model if you ask me; with the right prompts you'll get really good results. For example:

00664-2310505072.png 00455-1203093307.png 00151-1797297875.png

It's insane.
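The timing numbers above can be sanity-checked with a crude back-of-envelope model: per-image time grows at least roughly linearly with pixel count, so scaling the quoted 10 s at 1024x768 up to 1920x1080 predicts about 26 s, in the same ballpark as the 23 s reported. A minimal sketch (the reference time is just the figure quoted in this thread):

```python
def estimate_seconds(width, height, ref=(1024, 768, 10.0)):
    """Scale a measured (width, height, seconds) reference linearly by pixel count."""
    ref_w, ref_h, ref_t = ref
    return ref_t * (width * height) / (ref_w * ref_h)

print(round(estimate_seconds(1920, 1080), 1))  # ~26.4 s vs. the 23 s reported
```

It's only a rough heuristic: sampler choice, step count, and attention optimizations (xformers, SDP attention) shift real timings quite a bit.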
 