```
venv "D:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: ebf229bd1727a0f8f0d149829ce82e2012ba7318
Installing requirements
Launching Web UI with arguments: --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [fc2511737a] from D:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
DiffusionWrapper has 859.52 M params.
Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).
Loading VAE weights found near the checkpoint: D:\stable-diffusion-webui-directml\models\VAE\chilloutmix_NiPrunedFp32Fix.vae.ckpt
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.6s (load weights from disk: 1.1s, create model: 0.8s, apply weights to model: 0.9s, apply half(): 0.5s, load VAE: 0.6s, move model to device: 1.5s).
```

version: • python: 3.10.6 • torch: 2.0.0+cpu • xformers: N/A • gradio: 3.31.0 • checkpoint: fc2511737a
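As a sanity check on timing lines like the `Startup time:` one above, the per-phase values should roughly sum to the reported total (here the phases add up to 15.0s against a 15.1s total; the gap is rounding). A small stdlib-only sketch that parses such a line; the parser is illustrative and not part of the webui:

```python
import re

LOG_LINE = ("Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, "
            "import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, "
            "load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).")

def parse_startup_times(line):
    """Split a webui timing line into (reported_total, {phase: seconds})."""
    total = float(re.search(r"time: ([\d.]+)s", line).group(1))
    # Greedy match grabs the whole parenthesized phase list, even if a
    # phase name itself contains parentheses (e.g. "apply half()").
    inner = re.search(r"\((.*)\)", line).group(1)
    phases = {}
    for part in inner.split(", "):
        name, secs = part.rsplit(": ", 1)
        phases[name] = float(secs.rstrip("s"))
    return total, phases

total, phases = parse_startup_times(LOG_LINE)
print(f"reported {total}s, phases sum to {sum(phases.values()):.1f}s")
```

The same parser also handles the `Model loaded in 5.6s (...)` line, since both use the identical `phase: N.Ns` layout.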
### Installation on Windows 10/11 with NVidia-GPUs using release package

1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111 ... ases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111 ... -Run-on-NVidia-GPUs)

### Automatic Installation on Windows

1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui-directml repository, for example by running `git clone https://github.com/lshqqytiger/s ... ebui-directml.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.

### Automatic Installation on Linux

1. Install the dependencies:

```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```

2. Navigate to the directory you would like the webui to be installed in and execute the following command:

```bash
bash <(wget -qO- https://raw.githubusercontent.co ... bui/master/webui.sh)
```

3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
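Step 4's `webui-user.sh` is a plain shell file of (mostly commented-out) variable exports. A minimal illustrative fragment, assuming the upstream template's variable names; the values shown are examples, not defaults:

```shell
#!/bin/bash
# Illustrative webui-user.sh overrides (example values, not defaults).

# Extra command-line flags passed to the webui,
# e.g. the --autolaunch flag seen in the log above:
export COMMANDLINE_ARGS="--autolaunch"

# Python interpreter used to create the venv:
#python_cmd="python3"

# Directory for the virtual environment:
#venv_dir="venv"
```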

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111 ... on-on-Apple-Silicon).
## Contributing

Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111 ... i/wiki/Contributing)

## Documentation

The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits

Licenses for borrowed code can be found in the `Settings -> Licenses` screen, and also in the `html/licenses.html` file.

- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based- ... sion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- Olive - https://github.com/microsoft/Olive
- (You)

https://cloud.189.cn/t/FvqQbeZfYrui (access code: 9ur0)

You must install the runtime environment first (the installers are in the root directory of the cloud drive share):

1. Git-2.41.0-64-bit
2. python-3.10.6-amd64 (be sure to check "Add Python to PATH" during installation)
3. The launcher's runtime dependency, net-dotnet-6.0.11

Once the environment above is installed, run `webui-user.bat` in the root directory and wait a few seconds for stable-diffusion-webui to open. If it opens in the system's built-in IE browser, manually open the address http://127.0.0.1:7860 in a newer browser such as Edge.
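If the browser does not open automatically (or opens in IE), you can check whether the webui is actually listening before visiting http://127.0.0.1:7860 by hand. A stdlib-only sketch; the host and port are the defaults shown in the log above, and the helper name is ours, not part of the webui:

```python
import socket

def webui_is_up(host="127.0.0.1", port=7860, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if webui_is_up():
    print("webui is listening; open http://127.0.0.1:7860")
else:
    print("webui is not up yet; wait for the 'Running on local URL' line")
```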