```
venv "D:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: ebf229bd1727a0f8f0d149829ce82e2012ba7318
Installing requirements
Launching Web UI with arguments: --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [fc2511737a] from D:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
DiffusionWrapper has 859.52 M params.
Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).
Loading VAE weights found near the checkpoint: D:\stable-diffusion-webui-directml\models\VAE\chilloutmix_NiPrunedFp32Fix.vae.ckpt
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.6s (load weights from disk: 1.1s, create model: 0.8s, apply weights to model: 0.9s, apply half(): 0.5s, load VAE: 0.6s, move model to device: 1.5s).

version: • python: 3.10.6 • torch: 2.0.0+cpu • xformers: N/A • gradio: 3.31.0 • checkpoint: fc2511737a
```

### Installation on Windows 10/11 with NVidia-GPUs using release package

1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111 ... ases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111 ... -Run-on-NVidia-GPUs)

### Automatic Installation on Windows

1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui-directml repository, for example by running `git clone https://github.com/lshqqytiger/s ... ebui-directml.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.

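Step 4 reads its settings from `webui-user.bat` in the repository root. A minimal sketch of that file, assuming the stock variable names from the upstream template; the `--autolaunch` flag matches the launch log above and is only an example:

```bat
@echo off
rem All values are optional; leave a variable blank to use the defaults
rem detected by webui.bat.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--autolaunch
call webui.bat
```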
### Automatic Installation on Linux

1. Install the dependencies:

```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```

2. Navigate to the directory you would like the webui to be installed in and execute the following command:

```bash
bash <(wget -qO- https://raw.githubusercontent.co ... bui/master/webui.sh)
```

3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
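The options mentioned in step 4 are plain `KEY=value` assignments that `webui.sh` sources before launching. A minimal sketch of a `webui-user.sh` override, assuming the stock variable names (`COMMANDLINE_ARGS`, `python_cmd`); the values shown are illustrative, not defaults:

```shell
#!/usr/bin/env bash
# Overrides sourced by webui.sh; every value here is an example.
export COMMANDLINE_ARGS="--autolaunch"  # extra flags forwarded to launch.py
export python_cmd="python3"             # interpreter used to create the venv
```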
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111 ... on-on-Apple-Silicon).

## Contributing

Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111 ... i/wiki/Contributing)

## Documentation

The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits

Licenses for borrowed code can be found in the `Settings -> Licenses` screen, and also in the `html/licenses.html` file.

- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based- ... sion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- Olive - https://github.com/microsoft/Olive
- (You)

https://cloud.189.cn/t/FvqQbeZfYrui (access code: 9ur0)

Install the runtime environment first (the installers are in the root directory of the network drive):

1. Git-2.41.0-64-bit
2. python-3.10.6-amd64 (during installation, be sure to check "Add Python to PATH")
3. The launcher's runtime dependency, net-dotnet-6.0.11

Once the runtime environment above is installed, open `webui-user.bat` in the root directory and wait a few seconds for stable-diffusion-webui to open. If it opens in the system's built-in IE browser, manually open the address in a newer browser such as Edge: http://127.0.0.1:7860