```
venv "D:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: ebf229bd1727a0f8f0d149829ce82e2012ba7318
Installing requirements
Launching Web UI with arguments: --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [fc2511737a] from D:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
DiffusionWrapper has 859.52 M params.
Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).
Loading VAE weights found near the checkpoint: D:\stable-diffusion-webui-directml\models\VAE\chilloutmix_NiPrunedFp32Fix.vae.ckpt
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.6s (load weights from disk: 1.1s, create model: 0.8s, apply weights to model: 0.9s, apply half(): 0.5s, load VAE: 0.6s, move model to device: 1.5s).

version: • python: 3.10.6 • torch: 2.0.0+cpu • xformers: N/A • gradio: 3.31.0 • checkpoint: fc2511737a
```

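For reference, the per-phase timings in the `Startup time` summary above can be split out with standard shell tools. A minimal sketch, assuming the exact `phase: N.Ns` format shown in the log:

```shell
# Split the startup summary line from the log above into one phase per line.
# The string below is copied from the log; adjust it for your own run.
line='Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).'
# Drop everything up to the opening parenthesis and the trailing ").",
# then put each "phase: time" pair on its own line.
echo "$line" | sed -e 's/^[^(]*(//' -e 's/)\.$//' -e 's/, /\n/g'
```

Piping the result through `sort -t: -k2 -rn` should list the slowest phases first.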
### Installation on Windows 10/11 with NVidia-GPUs using release package

1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111 ... ases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.

> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111 ... -Run-on-NVidia-GPUs).

### Automatic Installation on Windows

1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui-directml repository, for example by running `git clone https://github.com/lshqqytiger/s ... ebui-directml.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.

### Automatic Installation on Linux

1. Install the dependencies:

```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
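The three commands above differ only by distro family; if you script the install, the family can be read from `/etc/os-release`. A minimal sketch, where `deps_cmd` is our own hypothetical helper, not part of webui:

```shell
# Map an os-release "$ID $ID_LIKE" string to the matching install command
# from the list above. deps_cmd is a hypothetical helper, not part of webui.
deps_cmd() {
  case " $1 " in
    *debian*|*ubuntu*)        echo "sudo apt install wget git python3 python3-venv" ;;
    *rhel*|*fedora*|*centos*) echo "sudo dnf install wget git python3" ;;
    *arch*)                   echo "sudo pacman -S wget git python3" ;;
    *) echo "unsupported distro: $1" >&2; return 1 ;;
  esac
}

# Example: deps_cmd "$(. /etc/os-release; echo "$ID ${ID_LIKE:-}")"
```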

2. Navigate to the directory you would like the webui to be installed in and run the following command:

```bash
bash <(wget -qO- https://raw.githubusercontent.co ... bui/master/webui.sh)
```

3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
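Step 4's `webui-user.sh` is a plain shell file of commented-out variables. A sketch of commonly edited entries, assuming the upstream template; names may vary between versions, so check your copy:

```shell
#!/bin/bash
# Sketch of webui-user.sh overrides; uncomment a line to use it.
# Extra arguments passed to launch.py, e.g. --autolaunch as in the log above.
#export COMMANDLINE_ARGS=""
# Python interpreter to use (must be a 3.10.x build).
#python_cmd="python3.10"
# Directory of the virtual environment webui creates.
#venv_dir="venv"
```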

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111 ... on-on-Apple-Silicon).

## Contributing

Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111 ... i/wiki/Contributing)

## Documentation

The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits

Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.

- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based- ... sion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- Olive - https://github.com/microsoft/Olive
- (You)
https://cloud.189.cn/t/FvqQbeZfYrui (access code: 9ur0)

Install the runtime environment first (the installers are in the root directory of the cloud drive):

1. Git-2.41.0-64-bit
2. python-3.10.6-amd64 (be sure to check "Add Python to PATH" during installation)
3. The launcher's runtime dependency, .NET (dotnet-6.0.11)

Once the runtime environment above is installed, open `webui-user.bat` in the root directory and wait a few seconds for stable-diffusion-webui to start. If it opens in the system's built-in IE browser, manually open http://127.0.0.1:7860 in a newer browser such as Edge.