stable-diffusion-webui-gfx803.git (master)
stable-diffusion-webui by AUTOMATIC1111 with patches for gfx803 GPU and Dockerfile
path: root/modules/sd_hijack_optimizations.py
Age        | Commit message                                                                    | Author
2023-06-04 | fix the broken line for #10990                                                    | AUTOMATIC
2023-06-03 | torch.cuda.is_available() check for SdOptimizationXformers                       | Vivek K. Vasishtha
2023-06-01 | revert default cross attention optimization to Doggettx                          | AUTOMATIC
2023-05-21 | Add a couple `from __future__ import annotations`es for Py3.9 compat             | Aarni Koskela
2023-05-19 | Apply suggestions from code review                                                | AUTOMATIC1111
2023-05-19 | fix linter issues                                                                 | AUTOMATIC
2023-05-18 | make it possible for scripts to add cross attention optimizations                | AUTOMATIC
2023-05-11 | Autofix Ruff W (not W605) (mostly whitespace)                                     | Aarni Koskela
2023-05-10 | ruff auto fixes                                                                   | AUTOMATIC
2023-05-10 | autofixes from ruff                                                               | AUTOMATIC
2023-05-08 | Fix for Unet NaNs                                                                 | brkirch
2023-03-24 | Update sd_hijack_optimizations.py                                                 | FNSpd
2023-03-21 | Update sd_hijack_optimizations.py                                                 | FNSpd
2023-03-10 | sdp_attnblock_forward hijack                                                      | Pam
2023-03-10 | argument to disable memory efficient for sdp                                      | Pam
2023-03-07 | scaled dot product attention                                                      | Pam
2023-01-25 | Add UI setting for upcasting attention to float32                                 | brkirch
2023-01-23 | better support for xformers flash attention on older versions of torch           | AUTOMATIC
2023-01-21 | add --xformers-flash-attention option & impl                                      | Takuma Mori
2023-01-21 | extra networks UI                                                                 | AUTOMATIC
2023-01-06 | Added license                                                                     | brkirch
2023-01-06 | Change sub-quad chunk threshold to use percentage                                 | brkirch
2023-01-06 | Add Birch-san's sub-quadratic attention implementation                            | brkirch
2022-12-20 | Use other MPS optimization for large q.shape[0] * q.shape[1]                      | brkirch
2022-12-10 | cleanup some unneeded imports for hijack files                                    | AUTOMATIC
2022-12-10 | do not replace entire unet for the resolution hack                                | AUTOMATIC
2022-11-23 | Patch UNet Forward to support resolutions that are not multiples of 64            | Billy Cao
2022-10-19 | Remove wrong self reference in CUDA support for invokeai                          | Cheka
2022-10-18 | Update sd_hijack_optimizations.py                                                 | C43H66N12O12S2
2022-10-18 | readd xformers attnblock                                                          | C43H66N12O12S2
2022-10-18 | delete xformers attnblock                                                         | C43H66N12O12S2
2022-10-11 | Use apply_hypernetwork function                                                   | brkirch
2022-10-11 | Add InvokeAI and lstein to credits, add back CUDA support                         | brkirch
2022-10-11 | Add check for psutil                                                              | brkirch
2022-10-11 | Add cross-attention optimization from InvokeAI                                    | brkirch
2022-10-11 | rename hypernetwork dir to hypernetworks to prevent clash with an old filenam...  | AUTOMATIC
2022-10-11 | fixes related to merge                                                            | AUTOMATIC
2022-10-11 | replace duplicate code with a function                                            | AUTOMATIC
2022-10-10 | remove functorch                                                                  | C43H66N12O12S2
2022-10-09 | Fix VRAM Issue by only loading in hypernetwork when selected in settings          | Fampai
2022-10-08 | make --force-enable-xformers work without needing --xformers                      | AUTOMATIC
2022-10-08 | add fallback for xformers_attnblock_forward                                       | AUTOMATIC
2022-10-08 | simplify xfrmers options: --xformers to enable and that's it                      | AUTOMATIC
2022-10-08 | emergency fix for xformers (continue + shared)                                    | AUTOMATIC
2022-10-08 | Merge pull request #1851 from C43H66N12O12S2/flash                                | AUTOMATIC1111
2022-10-08 | update sd_hijack_opt to respect new env variables                                 | C43H66N12O12S2
2022-10-08 | Update sd_hijack_optimizations.py                                                 | C43H66N12O12S2
2022-10-08 | add xformers attnblock and hypernetwork support                                   | C43H66N12O12S2
2022-10-08 | Add hypernetwork support to split cross attention v1                              | brkirch
2022-10-08 | switch to the proper way of calling xformers                                      | C43H66N12O12S2