stable-diffusion-webui-gfx803.git (branch: master)
stable-diffusion-webui by AUTOMATIC1111 with patches for gfx803 GPU and Dockerfile
Commit log for path: modules/sd_hijack.py
Age        | Commit message | Author
2022-10-11 | rename hypernetwork dir to hypernetworks to prevent clash with an old filenam... | AUTOMATIC
2022-10-11 | Merge branch 'master' into hypernetwork-training | AUTOMATIC
2022-10-11 | Comma backtrack padding (#2192) | hentailord85ez
2022-10-10 | allow pascal onwards | C43H66N12O12S2
2022-10-10 | Add back in output hidden states parameter | hentailord85ez
2022-10-10 | Pad beginning of textual inversion embedding | hentailord85ez
2022-10-10 | Unlimited Token Works | hentailord85ez
2022-10-09 | Removed unnecessary tmp variable | Fampai
2022-10-09 | Updated code for legibility | Fampai
2022-10-09 | Optimized code for Ignoring last CLIP layers | Fampai
2022-10-08 | Added ability to ignore last n layers in FrozenCLIPEmbedder | Fampai
2022-10-08 | add --force-enable-xformers option and also add messages to console regarding... | AUTOMATIC
2022-10-08 | check for ampere without destroying the optimizations. again. | C43H66N12O12S2
2022-10-08 | check for ampere | C43H66N12O12S2
2022-10-08 | why did you do this | AUTOMATIC
2022-10-08 | restore old opt_split_attention/disable_opt_split_attention logic | AUTOMATIC
2022-10-08 | simplify xfrmers options: --xformers to enable and that's it | AUTOMATIC
2022-10-08 | Merge pull request #1851 from C43H66N12O12S2/flash | AUTOMATIC1111
2022-10-08 | Update sd_hijack.py | C43H66N12O12S2
2022-10-08 | default to split attention if cuda is available and xformers is not | C43H66N12O12S2
2022-10-08 | fix bug where when using prompt composition, hijack_comments generated before... | MrCheeze
2022-10-08 | fix bugs related to variable prompt lengths | AUTOMATIC
2022-10-08 | do not let user choose his own prompt token count limit | AUTOMATIC
2022-10-08 | let user choose his own prompt token count limit | AUTOMATIC
2022-10-08 | use new attnblock for xformers path | C43H66N12O12S2
2022-10-08 | delete broken and unnecessary aliases | C43H66N12O12S2
2022-10-07 | hypernetwork training mk1 | AUTOMATIC
2022-10-07 | make it possible to use hypernetworks without opt split attention | AUTOMATIC
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2
2022-10-02 | Merge branch 'master' into stable | Jairo Correa
2022-10-02 | fix for incorrect embedding token length calculation (will break seeds that u... | AUTOMATIC
2022-10-02 | initial support for training textual inversion | AUTOMATIC
2022-09-30 | Merge branch 'master' into fix-vram | Jairo Correa
2022-09-30 | add embeddings dir | AUTOMATIC
2022-09-29 | fix for incorrect model weight loading for #814 | AUTOMATIC
2022-09-29 | new implementation for attention/emphasis | AUTOMATIC
2022-09-29 | Move silu to sd_hijack | Jairo Correa
2022-09-27 | switched the token counter to use hidden buttons instead of api call | Liam
2022-09-27 | added token counter next to txt2img and img2img prompts | Liam
2022-09-25 | potential fix for embeddings no loading on AMD cards | AUTOMATIC
2022-09-25 | Fix token max length | guaneec
2022-09-21 | --opt-split-attention now on by default for torch.cuda, off for others (cpu a... | AUTOMATIC
2022-09-21 | fix for too large embeddings causing an error | AUTOMATIC
2022-09-20 | fix a off by one error with embedding at the start of the sentence | AUTOMATIC
2022-09-20 | add the part that was missing for word textual inversion checksums | AUTOMATIC
2022-09-18 | Making opt split attention the default. Are you upset about this? Sorry. | AUTOMATIC
2022-09-18 | ..... | C43H66N12O12S2