Age | Commit message | Author
---|---|---
2023-06-18 | Fix typo in hints.js | zhtttylz
2023-06-18 | update the description of --add-stop-route | w-e-w
2023-06-16 | :bug: Allow Script to have metaclass | huchenlei
2023-06-16 | fix very slow loading speed of .safetensors files (sketch after the table) | dhwz
2023-06-15 | Add an opt-in infotext user name setting | Jared Deckard
2023-06-15 | Add a user pattern to the filename generator | Jared Deckard
2023-06-14 | Note the Gradio user in the Exif data | Jared Deckard
2023-06-15 | git clone show progress | w-e-w
2023-06-14 | Fix gradio special args in the call queue | Jared Deckard
2023-06-14 | terminate -> stop | w-e-w
2023-06-14 | respond 501 if not able to restart (sketch after the table) | w-e-w
2023-06-14 | update workflow kill test server | w-e-w
2023-06-14 | rename routes | w-e-w
2023-06-14 | Formatting code with Prettier | Danil Boldyrev
2023-06-14 | Reworked the disabling of functions, refactored part of the code | Danil Boldyrev
2023-06-13 | textual_inversion/logging.py: clean up duplicate keys from sets (and sort them) (Ruff B033) | Aarni Koskela
2023-06-13 | Upgrade ruff to 272 | Aarni Koskela
2023-06-13 | Remove stray space from SwinIR model URL | Aarni Koskela
2023-06-13 | Upscaler.load_model: don't return None, just use exceptions (sketch after the table) | Aarni Koskela
2023-06-13 | Add TODO comments to suspicious model loads | Aarni Koskela
2023-06-13 | Fix up `if "http" in ...:` checks to be more sensible `startswith` checks (sketch after the table) | Aarni Koskela
2023-06-13 | Move `load_file_from_url` to modelloader | Aarni Koskela
2023-06-13 | Use os.makedirs(..., exist_ok=True) (sketch after the table) | Aarni Koskela
2023-06-12 | remove console.log | Danil Boldyrev
2023-06-12 | Improved error output, improved settings menu | Danil Boldyrev
2023-06-12 | remove fastapi.Response | w-e-w
2023-06-12 | move _stop route to api | w-e-w
2023-06-12 | update model checkpoint switch code | Su Wei
2023-06-10 | quit restart | w-e-w
2023-06-09 | fixed typos | arch-fan
2023-06-09 | Merge branch 'dev' into release_candidate | AUTOMATIC
2023-06-09 | add changelog for 1.4.0 | AUTOMATIC
2023-06-09 | linter | AUTOMATIC
2023-06-09 | Merge pull request #11092 from AUTOMATIC1111/Generate-Forever-during-generation: Allow activation of Generate Forever during generation | AUTOMATIC1111
2023-06-09 | Merge pull request #11087 from AUTOMATIC1111/persistent_conds_cache: persistent conds cache | AUTOMATIC1111
2023-06-09 | Merge pull request #11123 from akx/dont-die-on-bad-symlink-lora: Don't die when a LoRA is a broken symlink | AUTOMATIC1111
2023-06-09 | Merge pull request #10295 from Splendide-Imaginarius/mk2-blur-mask: Split mask blur into X and Y components, patch Outpainting MK2 accordingly | AUTOMATIC1111
2023-06-09 | Merge pull request #11048 from DGdev91/force_python1_navi_renoir: Forcing Torch version to 1.13.1 for RX 5000 series GPUs | AUTOMATIC1111
2023-06-09 | Don't die when a LoRA is a broken symlink (fixes #11098; sketch after the table) | Aarni Koskela
2023-06-09 | Split Outpainting MK2 mask blur into X and Y components: fixes unexpected noise in non-outpainted borders when using the MK2 script | Splendide Imaginarius
2023-06-09 | Split mask blur into X and Y components: prerequisite to fixing the Outpainting MK2 mask blur bug (sketch after the table) | Splendide Imaginarius
2023-06-09 | add model-exists status check to modules/api/api.py, /sdapi/v1/options [POST] | Su Wei
2023-06-08 | Generate Forever during generation | w-e-w
2023-06-08 | persistent conds cache (Update shared.py) | w-e-w
2023-06-07 | Merge pull request #11058 from AUTOMATIC1111/api-wiki: link footer API to Wiki when API is not active | AUTOMATIC1111
2023-06-07 | Merge pull request #11066 from aljungberg/patch-1: Fix upcast attention dtype error | AUTOMATIC1111
2023-06-06 | Fix upcast attention dtype error: without this fix, enabling the "Upcast cross attention layer to float32" option while also using `--opt-sdp-attention` breaks generation in `sdp_attnblock_forward` (modules/sd_hijack_optimizations.py) with `RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.` The fix is to make sure to upcast the value tensor too (sketch after the table). | Alexander Ljungberg
2023-06-06 | Skip forcing Python and PyTorch versions if TORCH_COMMAND is already set | DGdev91
2023-06-06 | link footer API to Wiki when API is not active | w-e-w
2023-06-06 | Write "RX 5000 Series" instead of "Navi" in error message | DGdev91
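
For the 2023-06-16 safetensors loading fix (dhwz): one plausible reconstruction of the technique is to read the whole file into memory and deserialize from the buffer, rather than letting `safetensors` memory-map the file, which can be very slow on network drives and some Windows setups. A minimal sketch only; the function name, device handling, and the mmap explanation are assumptions, not the repository's exact change.

```python
import safetensors.torch

def read_state_dict(checkpoint_file: str, device: str = "cpu") -> dict:
    # Assumption: reading the whole file up front avoids mmap-backed lazy
    # loads, which can be extremely slow on some filesystems.
    with open(checkpoint_file, "rb") as f:
        state_dict = safetensors.torch.load(f.read())
    # safetensors.torch.load deserializes on CPU; move tensors afterwards.
    return {k: v.to(device) for k, v in state_dict.items()}
```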
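For the 2023-06-14 "respond 501 if not able to restart" change (w-e-w): HTTP 501 Not Implemented fits when the server cannot perform a restart at all, e.g. it was not launched by a wrapper that can relaunch it. A minimal FastAPI sketch of the idea; the route name and the `RESTARTABLE` flag are assumptions, not the repository's exact code.

```python
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

# Assumed flag: whether the process was started in a way that allows relaunch.
RESTARTABLE = False

@app.post("/sdapi/v1/server-restart")  # illustrative route name
def server_restart():
    if not RESTARTABLE:
        # 501 Not Implemented: restarting is not supported in this mode.
        return JSONResponse(status_code=501,
                            content={"detail": "restart is not supported"})
    # ... trigger the actual restart here ...
    return {"detail": "restarting"}
```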
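For the 2023-06-13 `Upscaler.load_model` change (Aarni Koskela): raising instead of returning `None` concentrates failure handling in one place and removes `if model is None` checks at every call site. A sketch of the pattern under that reading; the exception class and the loading body are illustrative, not the webui implementation.

```python
class ModelLoadError(Exception):
    """Raised when an upscaler model cannot be loaded (illustrative name)."""

def load_model(path: str) -> bytes:
    # Raise a descriptive exception instead of returning None, so callers
    # either handle the failure explicitly or let it propagate with context.
    if not path:
        raise ModelLoadError("no model path configured")
    try:
        with open(path, "rb") as f:
            return f.read()  # stand-in for the real deserialization step
    except OSError as e:
        raise ModelLoadError(f"could not load model from {path!r}") from e
```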
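For the 2023-06-13 URL-check fix: `"http" in path` matches any string that merely contains "http" anywhere, while `str.startswith` with a tuple of schemes checks only the prefix. A small sketch of the before/after; the function name is illustrative.

```python
def is_remote(path: str) -> bool:
    # Before: `"http" in path` -- also true for "my_http_helpers.bin".
    # After: true only when the string actually begins with a URL scheme.
    return path.startswith(("http://", "https://"))
```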
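For the 2023-06-13 `os.makedirs` cleanup: `exist_ok=True` replaces the racy check-then-create idiom, since another process can create the directory between the existence check and the create call. The directory path below is illustrative.

```python
import os

# Racy two-step idiom: os.makedirs can raise FileExistsError if the
# directory appears between the check and the call.
# if not os.path.exists(target_dir):
#     os.makedirs(target_dir)

# Single call that succeeds whether or not the directory already exists.
os.makedirs("models/ESRGAN", exist_ok=True)  # illustrative path
```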
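For the 2023-06-09 broken-symlink fix (#11098): a symlink whose target is gone still shows up in `os.listdir` but fails on `stat`/open, so a directory scan should skip such entries rather than crash. A minimal sketch of that idea, assuming a flat directory scan; the function name and extension list are illustrative.

```python
import os

def list_lora_files(lora_dir: str) -> list[str]:
    found = []
    for name in os.listdir(lora_dir):
        full = os.path.join(lora_dir, name)
        # os.path.exists follows symlinks, so it is False for a symlink
        # whose target was deleted; skip those instead of dying later.
        if os.path.islink(full) and not os.path.exists(full):
            print(f"Skipping broken symlink: {full}")
            continue
        if name.endswith((".safetensors", ".pt", ".ckpt")):
            found.append(full)
    return found
```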
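For the 2023-06-09 mask-blur split (Splendide Imaginarius): a 2-D Gaussian blur is separable, so the X and Y components can use different sigmas; feathering only along the axis being outpainted avoids bleeding noise across the other border. A sketch using `scipy.ndimage` on a NumPy mask; the sigma values and shapes are illustrative, not the script's actual code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_mask(mask: np.ndarray, blur_x: float, blur_y: float) -> np.ndarray:
    # gaussian_filter accepts one sigma per axis, ordered (rows, cols) =
    # (Y, X), so each blur component is controlled independently.
    return gaussian_filter(mask.astype(np.float32), sigma=(blur_y, blur_x))

mask = np.zeros((64, 64), dtype=np.float32)
mask[:, 32:] = 1.0                                  # right half selected
blurred = blur_mask(mask, blur_x=8.0, blur_y=0.0)   # feather only along X
```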
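For the 2023-06-06 upcast-attention fix (Alexander Ljungberg): the quoted `RuntimeError` occurs because `q` and `k` were upcast to float32 while `v` stayed in half precision, and `torch.nn.functional.scaled_dot_product_attention` requires all three dtypes to match. A minimal sketch of the idea described in the commit body, upcasting all three tensors together; the wrapper function and the cast-back step are illustrative.

```python
import torch

def sdp_attention_upcast(q: torch.Tensor, k: torch.Tensor,
                         v: torch.Tensor) -> torch.Tensor:
    # Upcast q, k, AND v: leaving v in half precision triggers
    # "Expected query, key, and value to have the same dtype".
    dtype = q.dtype
    q, k, v = q.float(), k.float(), v.float()
    out = torch.nn.functional.scaled_dot_product_attention(
        q, k, v, dropout_p=0.0, is_causal=False
    )
    return out.to(dtype)  # cast back for the rest of the half-precision net
```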