Age | Commit message | Author
---|---|---
2023-01-07 | CLIP hijack rework | AUTOMATIC
2023-01-06 | rework saving training params to file #6372 | AUTOMATIC
2023-01-06 | Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised: Save hypernet and textual inversion settings to text file, revised. | AUTOMATIC1111
2023-01-06 | allow loading embeddings from subdirectories | Faber
2023-01-05 | typo in TI | Kuma
2023-01-05 | Include model in log file. Exclude directory. | timntorres
2023-01-05 | Clean up ti, add same behavior to hypernetwork. | timntorres
2023-01-05 | Add option to save ti settings to file. | timntorres
2023-01-04 | Merge branch 'master' into gradient-clipping | AUTOMATIC1111
2023-01-04 | use shared function from processing for creating dummy mask when training inpainting model | AUTOMATIC
2023-01-04 | fix the merge | AUTOMATIC
2023-01-04 | Merge branch 'master' into inpaint_textual_inversion | AUTOMATIC1111
2023-01-04 | Merge pull request #6253 from Shondoit/ti-optim: Save Optimizer next to TI embedding | AUTOMATIC1111
2023-01-03 | add job info to modules | Vladimir Mandic
2023-01-03 | Save Optimizer next to TI embedding. Also add check to load only .PT and .BIN files as embeddings (since we add .optim files in the same directory). | Shondoit
2023-01-02 | feat(api): return more data for embeddings | Philpax
2023-01-02 | fix the issue with training on SD2.0 | AUTOMATIC
2022-12-31 | changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 | AUTOMATIC
2022-12-31 | validate textual inversion embeddings | Vladimir Mandic
2022-12-24 | fix F541 f-string without any placeholders | Yuval Aboulafia
2022-12-14 | Fix various typos | Jim Hays
2022-12-03 | Merge branch 'master' into racecond_fix | AUTOMATIC1111
2022-11-30 | Use devices.autocast instead of torch.autocast | brkirch
2022-11-27 | Merge remote-tracking branch 'flamelaw/master' | AUTOMATIC
2022-11-27 | set TI AdamW default weight decay to 0 | flamelaw
2022-11-26 | Add support Stable Diffusion 2.0 | AUTOMATIC
2022-11-23 | small fixes | flamelaw
2022-11-21 | fix pin_memory with different latent sampling method | flamelaw
2022-11-20 | Gradient accumulation, autocast fix, new latent sampling method, etc | flamelaw
2022-11-19 | change StableDiffusionProcessing to internally use sampler name instead of sampler index | AUTOMATIC
2022-11-05 | Simplify grad clip | Muhammad Rizqi Nur
2022-11-04 | Fixes race condition in training when VAE is unloaded: set_current_image can attempt to use the VAE when it is unloaded to the CPU while training | Fampai
2022-11-02 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur
2022-11-01 | fixed textual inversion training with inpainting models | Nerogar
2022-10-31 | Fixed minor bug: when unloading vae during TI training, generating images after training will error out | Fampai
2022-10-31 | Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations | Fampai
2022-10-31 | Added TI training optimizations: option to use xattention optimizations when training; option to unload vae when training | Fampai
2022-10-30 | Merge master | Muhammad Rizqi Nur
2022-10-30 | Fix dataset still being loaded even when training will be skipped | Muhammad Rizqi Nur
2022-10-30 | Add missing info on hypernetwork/embedding model log. Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513. Also group the saving into one | Muhammad Rizqi Nur
2022-10-30 | Revert "Add cleanup after training". This reverts commit 3ce2bfdf95bd5f26d0f6e250e67338ada91980d1. | Muhammad Rizqi Nur
2022-10-29 | Add cleanup after training | Muhammad Rizqi Nur
2022-10-29 | Add input validations before loading dataset for training | Muhammad Rizqi Nur
2022-10-29 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur
2022-10-28 | Fix log off by 1 | Muhammad Rizqi Nur
2022-10-28 | Learning rate sched syntax support for grad clipping | Muhammad Rizqi Nur
2022-10-28 | Gradient clipping for textual embedding | Muhammad Rizqi Nur
2022-10-26 | typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir | DepFA
2022-10-26 | Implement PR #3625 but for embeddings. | timntorres
2022-10-26 | Implement PR #3309 but for embeddings. | timntorres