Installing a local version of Stable Diffusion with a simple one-click method
I see there is something called "Stability Matrix" on GitHub. There is an article here that talks all about it and how to install it in a few minutes:
https://openaijourney.com/automatic1111-guide/
This seems to have greatly simplified the local installation of Stable Diffusion. It basically turns the whole process into a single download and a one-click setup to get you up and running.
I am just wondering if anyone here has actually tried installing it this way yet? My concern is safety and privacy.
Could there be anything packaged in it that would scan the other files on my computer without my knowledge and send all my passwords somewhere? This stuff is all supposed to be open source, but do people actually look through the code, or is it just one of those things where everyone assumes someone else is looking through the code?
Anyway, I'm tempted to try it. I love using Daz the old-fashioned way, but if I can use AI locally to create some of my backgrounds more quickly (backgrounds that I would normally blur anyway), it seems like something that would be good to get into my workflow.
Comments
Another option is Pinokio.
I have a separate standalone installation of A1111, but only because I was already using it before finding out about Pinokio, which is a web-UI-based Python environment that does heaps of AI stuff.
You can download installers for text-to-speech, Live Portrait animation, and Stable Video Diffusion, to name a few, and even Flux, though my VRAM is lacking there; it works, but it literally takes hours for one image on a 2080 Ti.
Interesting. I'm looking into that as well, but I see a lot of people saying in YouTube comments that half of those Pinokio apps don't even work.
In any case, I decided to install Automatic1111, and I played around with it for a solid day. So far I am unimpressed. What I really hoped to get out of it is a way to upscale existing Daz renders, so I can have the best of both worlds: the consistency and control of Daz, with the benefit of AI taking care of some of the final rendering. I want something to handle the last 5% that my RTX 2060 struggles with.
But unfortunately, none of these tools seem geared for that. So far nothing I've tried to upscale has really looked much better. And as for the inpainting, I think I can do a better, faster job in Photoshop, to be honest.
I think I'm just going to reinstall Unreal or Unity and do fast renders of my backgrounds there, then do my character renders in Daz and composite them in Photoshop, the way I was doing before. That seems better than using AI, at least for me. Otherwise I don't see how I could ever tell a story and keep everything consistent using AI.
Also, I realize it just isn't fun for me. I don't get anything out of the process. I like spinning dials, creating my own morphs, making my own textures. I also paint, draw, and sketch, so I like to get my hands dirty in a sense, and AI takes away some of that pleasure for me.
Try Fooocus (or Forge, from the same dev). It's what I have been using for the past six months.
Thanks, this one does seem to be a little more user-friendly. And the inpainting to fix the hair on a Daz model actually worked! I tried the same thing with Automatic1111 and couldn't get good results with inpainting.
NP. I found ComfyUI and Automatic1111 too complicated for my liking, so I was happy when I found this app. I like the inpainting option, and I use the image prompt option a lot with renders to set poses when I can't get the right pose with a LoRA or checkpoint.
Now if it could work with Flux, that would be awesome. I guess time will tell.
Flux is a whole separate kettle of fish in terms of how it processes prompts, because it's the first SD base model trained on descriptive phrases rather than booru-style tags (and good riddance to those; they were never intended to accurately describe visual material). When I recently retrained Gina from SD1.5 for Flux, I had to caption the training data images in normal, natural language, e.g. "a woman with long red hair standing in a forest" rather than "1girl, long_hair, red_hair, forest". So if DAZ AI moves to it, people will need to rethink their prompts.
Ironically, Gina was created not by training on images/renders or even by using a web interface, but in a way closer to how we create custom characters in DAZ Studio: starting with a generic base figure, applying feature/body-type models to it at particular strengths, and finally exporting the resulting model. For technical reasons this was done with the web interface's underlying command-line tools, which are primitive enough to require that all submodels have the same data scale (a few had to be rescaled to ensure this). I don't recommend this method, but it worked.
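For anyone curious, the merge step boils down to something like the sketch below. To be clear, this is not the actual tool I used; the filenames, checkpoint layout, and strength value are made up for illustration, and it only shows the general idea of blending a base checkpoint with a feature checkpoint at a chosen strength.

```python
# Rough sketch of a weighted checkpoint merge, NOT the actual tool I used.
# Filenames, the "state_dict" layout, and the strength value are placeholders.
import torch

base = torch.load("base_model.ckpt", map_location="cpu")["state_dict"]
feature = torch.load("feature_model.ckpt", map_location="cpu")["state_dict"]

strength = 0.35  # how strongly the feature model pulls away from the base

merged = {}
for key, tensor in base.items():
    if key in feature and feature[key].shape == tensor.shape:
        # Cast both tensors to the same dtype before blending; mismatched
        # data scales are exactly why a few submodels had to be rescaled.
        merged[key] = (1.0 - strength) * tensor.float() + strength * feature[key].float()
    else:
        # Keys missing from the feature model are carried over unchanged.
        merged[key] = tensor

torch.save({"state_dict": merged}, "merged_model.ckpt")
```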