```
venv "D:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: ebf229bd1727a0f8f0d149829ce82e2012ba7318
Installing requirements
Launching Web UI with arguments: --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [fc2511737a] from D:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
DiffusionWrapper has 859.52 M params.
Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).
Loading VAE weights found near the checkpoint: D:\stable-diffusion-webui-directml\models\VAE\chilloutmix_NiPrunedFp32Fix.vae.ckpt
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.6s (load weights from disk: 1.1s, create model: 0.8s, apply weights to model: 0.9s, apply half(): 0.5s, load VAE: 0.6s, move model to device: 1.5s).
```

version: python: 3.10.6 • torch: 2.0.0+cpu • xformers: N/A • gradio: 3.31.0 • checkpoint: fc2511737a
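The `Startup time:` line in the log breaks the launch into phases, which makes it easy to see where time goes. A minimal parsing sketch, not part of the webui itself; the regex and field names are assumptions based only on the log format shown above:

```python
import re

# 'Startup time:' line copied from the launch log above.
log_line = (
    "Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, "
    "import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, "
    "load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s)."
)

def parse_startup_phases(line):
    """Extract {phase: seconds} pairs from a 'Startup time:' log line."""
    return {
        name.strip(): float(seconds)
        for name, seconds in re.findall(r"([\w ]+): ([\d.]+)s", line)
    }

phases = parse_startup_phases(log_line)
total = phases.pop("Startup time")     # the overall figure reported first
slowest = max(phases, key=phases.get)  # the phase that took the longest
```

On the log above this reports `other imports` (4.4s) as the slowest phase, which is useful when comparing cold and warm starts.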
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111 ... ases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.

> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111 ... -Run-on-NVidia-GPUs)

### Automatic Installation on Windows

1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui-directml repository, for example by running `git clone https://github.com/lshqqytiger/s ... ebui-directml.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.
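Per-user options for the Windows install live in `webui-user.bat`, which the steps above run. A sketch of its typical shape; the `--autolaunch` flag mirrors the log at the top of this post, and the exact flags are examples, not requirements:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--autolaunch

call webui.bat
```

Leaving `PYTHON`, `GIT`, and `VENV_DIR` empty lets the launcher use its defaults; `COMMANDLINE_ARGS` is where launch flags such as `--autolaunch` go.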
### Automatic Installation on Linux

1. Install the dependencies:

```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```

2. Navigate to the directory you would like the webui to be installed in and execute the following command:

```bash
bash <(wget -qO- https://raw.githubusercontent.co ... bui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
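The options mentioned in step 4 are shell variables that `webui.sh` reads from `webui-user.sh`; most ship commented out so the defaults apply. A short excerpt-style sketch, with illustrative values rather than recommendations:

```bash
#!/bin/bash
# Install directory without trailing slash
#install_dir="/home/$(whoami)"

# Commandline arguments for webui.py, for example: --medvram
export COMMANDLINE_ARGS=""

# Python command to use (defaults to python3)
#python_cmd="python3"
```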

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111 ... on-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111 ... i/wiki/Contributing)
## Documentation

The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits

Licenses for borrowed code can be found in the `Settings -> Licenses` screen, and also in the `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based- ... sion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- Olive - https://github.com/microsoft/Olive
- (You)
https://cloud.189.cn/t/FvqQbeZfYrui (access code: 9ur0)

Install the runtime environment first (the installers are in the root directory of the cloud drive):

1. Git-2.41.0-64-bit
2. python-3.10.6-amd64 (be sure to check "Add Python to PATH" during installation)
3. The launcher's runtime dependency, dotnet-6.0.11

Once the runtime environment above is installed, run `webui-user.bat` in the root directory and wait a few seconds for stable-diffusion-webui to open. If it opens in the system's built-in IE browser, manually open the address http://127.0.0.1:7860 in a newer browser such as Edge.
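If the browser does not open automatically, you can confirm that the UI is listening before pointing Edge at it. A minimal standard-library sketch; the host and port match the log above, and `is_webui_up` is a hypothetical helper, not part of the webui:

```python
import socket

def is_webui_up(host="127.0.0.1", port=7860, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        # create_connection raises OSError on refusal or timeout.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: once this returns True, open http://127.0.0.1:7860 in the browser.
```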