Supplementary note: a "directory" on Linux/macOS is the same thing as a "folder" on Windows.
Options:
-h, --help Show this help message and exit the program.
show this help message and exit
--update-all-extensions Update all extensions at startup.
(This is a launch.py argument; the same applies to the entries below.) launch.py argument: download updates for all extensions when starting the program
--skip-python-version-check Skip the Python version check.
launch.py argument: do not check python version
--skip-torch-cuda-test Skip the torch CUDA test; do not check whether CUDA is able to work properly.
launch.py argument: do not check if CUDA is able to work properly
--reinstall-xformers Reinstall the appropriate version of xformers.
launch.py argument: install the appropriate version of xformers even if you have some version already installed
--reinstall-torch Reinstall the appropriate version of torch.
launch.py argument: install the appropriate version of torch even if you have some version already installed
--update-check Check for updates at startup.
launch.py argument: check for updates at startup
--test-server Configure the server for testing.
launch.py argument: configure server for testing
--log-startup Print a detailed startup log.
launch.py argument: print a detailed log of what's happening at startup
--skip-prepare-environment Skip all environment preparation.
launch.py argument: skip all environment preparation
--skip-install Skip package installation.
launch.py argument: skip installation of packages
--dump-sysinfo Dump a limited sysinfo file (without information about extensions or options) to disk and quit.
launch.py argument: dump limited sysinfo file (without information about extensions, options) to disk and quit
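The launch.py arguments above can be passed directly on the command line, or stored once in webui-user.sh / webui-user.bat via COMMANDLINE_ARGS so that webui.sh / webui.bat picks them up. A minimal sketch (the flag combination is only illustrative):

    # run launch.py directly with a couple of the arguments above
    python launch.py --skip-torch-cuda-test --update-all-extensions

    # or set them once in webui-user.sh (Linux/macOS) and start via ./webui.sh
    export COMMANDLINE_ARGS="--skip-torch-cuda-test --update-all-extensions"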
--loglevel LOGLEVEL Set the log level; one of: CRITICAL, ERROR, WARNING, INFO, DEBUG.
log level; one of: CRITICAL, ERROR, WARNING, INFO, DEBUG
--do-not-download-clip Do not download the CLIP package (openai/clip) even if the checkpoint does not include a CLIP model.
do not download CLIP model even if it's not included in the checkpoint
--data-dir DATA_DIR Base directory where all user data is stored.
base path where all user data is stored
--config CONFIG Path to the config file used to construct the model.
path to config which constructs model
--ckpt CKPT Path to a Stable Diffusion checkpoint; if specified, it is added to the checkpoint list and loaded.
path to checkpoint of stable diffusion model; if specified, this checkpoint will be added to the list of checkpoints and loaded
--ckpt-dir CKPT_DIR Directory holding Stable Diffusion checkpoints.
Path to directory with stable diffusion checkpoints
--vae-dir VAE_DIR Directory holding VAE files.
Path to directory with VAE files
--gfpgan-dir GFPGAN_DIR Directory holding GFPGAN.
GFPGAN directory
Note: GFPGAN is Tencent's open-source face restoration model: https://github.com/TencentARC/GFPGAN
--gfpgan-model GFPGAN_MODEL GFPGAN model file name.
GFPGAN model file name
--no-half Do not switch checkpoint models to half-precision (16-bit) floats.
do not switch the model to 16-bit floats
--no-half-vae Do not switch the VAE model to half-precision (16-bit) floats.
do not switch the VAE model to 16-bit floats
--no-progressbar-hiding Do not hide the image-generation progress bar in the gradio UI.
do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware acceleration in browser)
(The progress bar is hidden by default because it slows down generation when hardware acceleration is enabled in the browser.)
--max-batch-count MAX_BATCH_COUNT Maximum batch count value for the UI (default: 100).
maximum batch count value for the UI
--embeddings-dir EMBEDDINGS_DIR Directory where embeddings (Textual Inversion) models are stored.
embeddings directory for textual inversion (default: embeddings)
That is, the directory for Textual Inversion embeddings (default: stable-diffusion-webui/embeddings).
--textual-inversion-templates-dir TEXTUAL_INVERSION_TEMPLATES_DIR Directory with Textual Inversion templates.
directory with textual inversion templates
--hypernetwork-dir HYPERNETWORK_DIR Directory where hypernetwork models are stored.
hypernetwork directory
--localizations-dir LOCALIZATIONS_DIR Directory for localization files (JSON translation files, e.g. the Chinese UI translation); see the directory example below.
localizations directory
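The directory options above can be used to keep models and user data outside the webui install, for example on a larger drive. A minimal sketch, assuming a Linux install started via webui.sh (the paths are only illustrative):

    ./webui.sh --data-dir /mnt/sd-data \
               --ckpt-dir /mnt/sd-models/Stable-diffusion \
               --vae-dir /mnt/sd-models/VAE \
               --embeddings-dir /mnt/sd-models/embeddings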
--allow-code Allow custom script execution from the webui.
allow custom script execution from webui
--medvram (For limited VRAM) Enable model optimizations that trade a little speed for lower VRAM use (see the example after this group).
enable stable diffusion model optimizations for sacrificing a little speed for low VRAM usage
--medvram-sdxl Enable the --medvram optimizations only for SDXL models.
enable --medvram optimization just for SDXL models
--lowvram (For very low VRAM) Enable model optimizations that trade a lot of speed for very low VRAM use.
enable stable diffusion model optimizations for sacrificing a lot of speed for very low VRAM usage
--lowram Load Stable Diffusion checkpoint weights into VRAM instead of RAM.
load stable diffusion checkpoint weights to VRAM instead of RAM
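On cards with limited VRAM, these flags are typically combined with one of the attention optimizations listed further below. A sketch (which combination works best depends on the GPU):

    # moderate savings, small speed cost
    ./webui.sh --medvram --xformers

    # aggressive savings for very small cards, at a large speed cost
    ./webui.sh --lowvram --opt-split-attention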
--always-batch-cond-uncond (Inactive option; does nothing.) Always batch cond/uncond.
--unload-gfpgan (Inactive option; does nothing.) Unload GFPGAN.
--precision {full,autocast} Numerical precision to evaluate at (floating-point precision, not the "precision" metric from ML evaluation).
evaluate at this precision
--upcast-sampling Upcast sampling.
Note: has no effect when used together with --no-half.
Usually produces results similar to --no-half, with better performance and lower memory use.
upcast sampling. No effect with --no-half. Usually produces similar results to --no-half with better performance while using less memory.
--share Use share=True for gradio, making the UI accessible through gradio's public sharing service.
use share=True for gradio and make the UI accessible through their site
Gradio is an open-source Python library whose goal is to make deploying and using machine learning models simple.
--ngrok NGROK ngrok authtoken; ngrok is a tunneling service and an alternative to --share.
Note: ngrok website: https://ngrok.com/
ngrok authtoken, alternative to gradio --share
--ngrok-region NGROK_REGION (Inactive option; does nothing.) Set the ngrok region.
--ngrok-options NGROK_OPTIONS Options to pass to ngrok, in JSON format (see the example after this entry).
The options to pass to ngrok in JSON format, e.g.: '{"authtoken_from_env":true, "basic_auth":"user:password","oauth_provider":"google", "oauth_allow_emails":"[email protected]"}'
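A sketch of exposing the UI through an ngrok tunnel instead of --share (the authtoken and credentials are placeholders):

    ./webui.sh --ngrok YOUR_NGROK_AUTHTOKEN \
               --ngrok-options '{"basic_auth":"user:password"}'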
--enable-insecure-extension-access Enable the extensions tab regardless of other options (insecure).
enable extensions tab regardless of other options
--codeformer-models-path CODEFORMER_MODELS_PATH Directory with CodeFormer model file(s).
Path to directory with codeformer model file(s).
--gfpgan-models-path GFPGAN_MODELS_PATH Directory with GFPGAN model file(s).
Path to directory with GFPGAN model file(s).
--esrgan-models-path ESRGAN_MODELS_PATH Directory with ESRGAN model file(s).
Path to directory with ESRGAN model file(s).
--bsrgan-models-path BSRGAN_MODELS_PATH Directory with BSRGAN model file(s).
Path to directory with BSRGAN model file(s).
--realesrgan-models-path REALESRGAN_MODELS_PATH Directory with RealESRGAN model file(s).
Path to directory with RealESRGAN model file(s).
--dat-models-path DAT_MODELS_PATH Directory with DAT model file(s).
Path to directory with DAT model file(s).
--clip-models-path CLIP_MODELS_PATH Directory with CLIP model file(s).
Path to directory with CLIP model file(s).
--xformers Enable xformers for the cross-attention layers (memory-efficient attention).
enable xformers for cross attention layers
--force-enable-xformers Force-enable xformers regardless of the compatibility check.
enable xformers for cross attention layers regardless of whether the checking code thinks you can run it; do not make bug reports if this fails to work
--xformers-flash-attention Enable xformers with Flash Attention to improve reproducibility (see the example below).
Note: supported for SD2.x (and variants) only.
enable xformers with Flash Attention to improve reproducibility (supported for SD2.x or variant only)
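A sketch of enabling xformers, and of reinstalling a matching build when the installed one does not fit the current torch version:

    ./webui.sh --xformers

    # reinstall the version the webui considers appropriate, then enable it
    ./webui.sh --reinstall-xformers --xformers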
--deepdanbooru (Inactive option; does nothing.) Enable DeepDanbooru.
DeepDanbooru is a tag estimation system for anime-style images, originally built to tag the characteristics of anime-style girl images.
https://github.com/KichangKim/DeepDanbooru
--opt-split-attention Prefer Doggettx's cross-attention layer optimization when the optimization is chosen automatically.
prefer Doggettx's cross-attention layer optimization for automatic choice of optimization
--opt-sub-quad-attention Prefer the memory-efficient sub-quadratic cross-attention layer optimization when chosen automatically.
prefer memory efficient sub-quadratic cross-attention layer optimization for automatic choice of optimization
--sub-quad-q-chunk-size SUB_QUAD_Q_CHUNK_SIZE
query chunk size for the sub-quadratic cross-attention layer optimization to use
--sub-quad-kv-chunk-size SUB_QUAD_KV_CHUNK_SIZE
kv chunk size for the sub-quadratic cross-attention layer optimization to use
--sub-quad-chunk-threshold SUB_QUAD_CHUNK_THRESHOLD
the percentage of VRAM threshold for the sub-quadratic cross-attention layer optimization to use chunking
--opt-split-attention-invokeai Prefer InvokeAI's cross-attention layer optimization when chosen automatically.
prefer InvokeAI's cross-attention layer optimization for automatic choice of optimization
--opt-split-attention-v1 Prefer the older (v1) split-attention optimization when chosen automatically.
prefer older version of split attention optimization for automatic choice of optimization
--opt-sdp-attention Prefer scaled-dot-product (SDP) attention when chosen automatically; requires PyTorch 2.* (see the example below).
prefer scaled dot product cross-attention layer optimization for automatic choice of optimization; requires PyTorch 2.*
--opt-sdp-no-mem-attention Prefer SDP attention without memory-efficient attention when chosen automatically; makes image generation deterministic; requires PyTorch 2.*.
prefer scaled dot product cross-attention layer optimization without memory efficient attention for automatic choice of optimization, makes image generation deterministic; requires PyTorch 2.*
--disable-opt-split-attention Prefer no cross-attention layer optimization when chosen automatically.
prefer no cross-attention layer optimization for automatic choice of optimization
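On PyTorch 2.x, the SDP options above are a common alternative to xformers. A sketch (which flag is preferable depends on the GPU and on whether deterministic output matters):

    # scaled-dot-product attention
    ./webui.sh --opt-sdp-attention

    # deterministic variant; per the help text it skips memory-efficient attention, so expect higher VRAM use
    ./webui.sh --opt-sdp-no-mem-attention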
--disable-nan-check Disable the NaN check.
Do not check whether generated images/latents contain NaNs; useful when running without a checkpoint in CI (continuous integration) pipelines.
do not check if produced images/latent spaces have nans; useful for running without a checkpoint in CI
--use-cpu USE_CPU [USE_CPU ...] Use the CPU as the torch device for the specified modules.
use CPU as torch device for specified modules
--use-ipex Use an Intel XPU as the torch device.
use Intel XPU as torch device
--disable-model-loading-ram-optimization Disable the optimization that reduces RAM use when loading a model.
disable an optimization that reduces RAM use when loading a model
--listen Make the webui listen on 0.0.0.0 so it can answer requests from the network (see the example below).
launch gradio with 0.0.0.0 as server name, allowing to respond to network requests
--port PORT Port for the webui; defaults to 7860.
launch gradio with given server port, you need root/admin rights for ports < 1024, defaults to 7860 if available
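A sketch of making the UI reachable from other machines on the local network (port value illustrative):

    ./webui.sh --listen --port 8080
    # then open http://<machine-ip>:8080 from another device on the LAN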
--show-negative-prompt (Inactive option; does nothing.) Show the negative prompt.
--ui-config-file UI_CONFIG_FILE Filename to use for the UI configuration.
filename to use for ui configuration
--hide-ui-dir-config Hide directory configuration from the webui.
hide directory configuration from webui
--freeze-settings Freeze settings globally.
Disables editing of all settings.
disable editing of all settings globally
--freeze-settings-in-sections FREEZE_SETTINGS_IN_SECTIONS Freeze settings by section.
Disables editing of settings inside the specified sections (see the example below).
disable editing settings in specific sections of the settings page by specifying a comma-delimited list such as "saving-images,upscaling". The list of setting names can be found in the modules/shared_options.py file
--freeze-specific-settings FREEZE_SPECIFIC_SETTINGS Freeze individual settings.
disable editing of individual settings by specifying a comma-delimited list like "samples_save,samples_format". The list of setting names can be found in the config.json file
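A sketch that locks down parts of the settings page, reusing the section and setting names from the help text above:

    # freeze two whole sections of the settings page
    ./webui.sh --freeze-settings-in-sections "saving-images,upscaling"

    # freeze two individual settings
    ./webui.sh --freeze-specific-settings "samples_save,samples_format"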
--ui-settings-file UI_SETTINGS_FILE Filename to use for UI settings.
filename to use for ui settings
--gradio-debug Launch gradio with the --debug option.
launch gradio with --debug option
--gradio-auth GRADIO_AUTH Set gradio authentication (see the example below).
set gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"
--gradio-auth-path GRADIO_AUTH_PATH Path to a gradio authentication file.
set gradio authentication file path ex. "/path/to/auth/file" same auth format as --gradio-auth
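When the UI is exposed via --listen or --share, it is usually paired with gradio authentication. A sketch (credentials and file path are placeholders):

    ./webui.sh --listen --gradio-auth "alice:s3cret"
    ./webui.sh --share --gradio-auth-path /path/to/auth/file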
--gradio-img2img-tool GRADIO_IMG2IMG_TOOL (Inactive option; does nothing.)
--gradio-inpaint-tool GRADIO_INPAINT_TOOL (Inactive option; does nothing.)
--gradio-allowed-path GRADIO_ALLOWED_PATH
add path to gradio's allowed_paths, make it possible to serve files from it
--opt-channelslast Change the memory format used by Stable Diffusion to channels-last.
change memory type for stable diffusion to Channel Last
Note: PyTorch memory formats include channels-first (NCHW) and channels-last (NHWC).
--styles-file STYLES_FILE Path (wildcards allowed) to prompt styles file(s); multiple entries are allowed.
path or wildcard path of styles files, allow multiple entries.
--autolaunch Open the webui URL in the system's default browser upon launch.
open the webui URL in the system's default browser upon launch
--theme THEME Launch the UI with the light or dark theme.
launches the UI with light or dark theme
--use-textbox-seed Use a plain textbox for seeds in the UI (no up/down arrows, but long seeds can be entered).
use textbox for seeds in UI (no up/down, but possible to input long seeds)
--disable-console-progressbars Do not output progress bars to the console during generation.
do not output progressbars to console
--enable-console-prompts (Inactive option; does nothing.)
--vae-path VAE_PATH Path to a checkpoint to use as the VAE; setting this argument disables all VAE-related settings.
Checkpoint to use as VAE; setting this argument disables all settings related to VAE
--disable-safe-unpickle Disable checking PyTorch models for malicious code.
disable checking pytorch models for malicious code
--api Launch the API together with the webui (see the example below).
use api=True to launch the API together with the webui (use --nowebui instead for only the API)
--api-auth API_AUTH Set authentication for the API.
Set authentication for API like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"
--api-log Enable logging of all API requests.
use api-log=True to enable logging of all API requests
--nowebui Launch only the API, without the webui.
use api=True to launch the API instead of the webui
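A sketch of starting with the API enabled and issuing a text-to-image request; the /sdapi/v1/txt2img endpoint and the JSON fields below reflect the webui's built-in API as commonly documented, but treat the exact payload as illustrative:

    ./webui.sh --api --listen

    # minimal txt2img request against the default port
    curl -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
         -H "Content-Type: application/json" \
         -d '{"prompt": "a photo of a cat", "steps": 20}'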
--ui-debug-mode (Debug mode) Do not load a model, so the UI launches quickly.
Don't load model to quickly launch UI
--device-id DEVICE_ID Select the default CUDA device to use.
(You may need to set the environment variable first, e.g. export CUDA_VISIBLE_DEVICES=0 or 1.)
Select the default CUDA device to use (export CUDA_VISIBLE_DEVICES=0,1,etc might be needed before)
--administrator Run with administrator rights.
Administrator rights
--cors-allow-origins CORS_ALLOW_ORIGINS
Allowed CORS origin(s) in the form of a comma-separated list (no spaces)
--cors-allow-origins-regex CORS_ALLOW_ORIGINS_REGEX
Allowed CORS origin(s) in the form of a single regular expression
--tls-keyfile TLS_KEYFILE Path to the TLS private key file (see the example below).
Partially enables TLS, requires --tls-certfile to fully function
--tls-certfile TLS_CERTFILE Path to the TLS certificate file.
Partially enables TLS, requires --tls-keyfile to fully function
--disable-tls-verify Disable TLS verification.
When passed, enables the use of self-signed certificates.
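A sketch of serving the UI over HTTPS with a self-signed certificate (file names and the openssl invocation are only illustrative):

    # generate a self-signed certificate
    openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=localhost"

    ./webui.sh --listen --tls-keyfile key.pem --tls-certfile cert.pem --disable-tls-verify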
--server-name SERVER_NAME Set the hostname of the server.
Sets hostname of server
--gradio-queue (Inactive option; does nothing.)
--no-gradio-queue Disable the gradio queue.
Makes the webpage use HTTP requests instead of websockets; this was the default behavior in earlier versions.
Disables gradio queue; causes the webpage to use http requests instead of websockets; was the default in earlier versions
--skip-version-check Do not check the versions of torch and xformers.
Do not check versions of torch and xformers
--no-hashing Disable sha256 hashing of checkpoints to improve loading performance.
disable sha256 hashing of checkpoints to help loading performance
--no-download-sd-model Do not download the SD1.5 model.
Even if no SD1.5 model is found in --ckpt-dir, do not download it.
don't download SD1.5 model even if no model is found in --ckpt-dir
--subpath SUBPATH Customize the subpath for gradio; use together with a reverse proxy.
customize the subpath for gradio, use with reverse proxy
--add-stop-route (Inactive option; does nothing.)
--api-server-stop Allow stopping/restarting/killing the webui server via the API.
enable server stop/restart/kill via api
--timeout-keep-alive TIMEOUT_KEEP_ALIVE Set the TCP keep-alive timeout (timeout_keep_alive) for uvicorn.
set timeout_keep_alive for uvicorn
--disable-all-extensions Prevent all extensions from running, regardless of any other settings.
prevent all extensions from running regardless of any other settings
--disable-extra-extensions Prevent all extensions except built-in ones from running, regardless of any other settings.
prevent all extensions except built-in from running regardless of any other settings
--skip-load-model-at-start Skip loading a model at startup; only takes effect with --nowebui.
do not load a model at web start; only takes effect when --nowebui is used
--unix-filenames-sanitization Allow any characters except '/' in filenames.
Such characters may conflict with your browser and file system.
allow any symbols except '/' in filenames. May conflict with your browser and file system
--filenames-max-length FILENAMES_MAX_LENGTH Maximum length of saved image filenames.
Do not change this casually; overriding it can conflict with the filename length limit of your file system.
maximal length of filenames of saved images. If you override it, it can conflict with your file system
--no-prompt-history Disable prompt history.
Disables the "read prompt from last generation" feature;
setting this argument will prevent the '--data_path/params.txt' file from being created.
disable read prompt from last generation feature; setting this argument will not create '--data_path/params.txt' file
From: https://www.cnblogs.com/max27149/p/18178097