SDXL (Stable Diffusion XL) lets you generate images locally on your GPU
with no API keys or cloud costs. It requires an NVIDIA GPU with at least
12 GB VRAM (16 GB recommended).
Requirements:
NVIDIA GPU (RTX 3060 12 GB or better; RTX 4000/5000 series ideal)
~13 GB disk space for the model cache
Windows 10/11
Vizard 8
Open Command Prompt as Administrator (right-click Command Prompt → Run as administrator).
Important: You MUST run as admin so packages install into Vizard's
site-packages, not your user folder.
Step 1: Install PyTorch
RTX 5000-series GPUs (e.g., RTX 5080/5090) need PyTorch nightly with CUDA 12.8 support:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -m pip install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
For other supported GPUs (RTX 3060 through the 4000 series), stable PyTorch with CUDA 12.6 is fine:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -m pip install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
Verify the install:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -c "import torch; print(torch.__version__); print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
You should see something like:
2.12.0.dev20260329+cu128
True
NVIDIA GeForce RTX 5080
If torch.cuda.is_available() prints False, the install did not work correctly.
Step 2: Install the diffusion libraries
Still in the admin command prompt:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -m pip install diffusers transformers accelerate safetensors invisible-watermark pillow huggingface_hub
Step 3: Authenticate with Hugging Face
The SDXL model is hosted on Hugging Face and requires authentication.
Go to https://huggingface.co/join and sign up (free).
Open your token settings at https://huggingface.co/settings/tokens
Click "New token"
Name it anything (e.g., "vizard")
Select Read access (that's all you need)
Click "Generate"
Copy the token (starts with hf_)
Run this command (replace YOUR_TOKEN_HERE with your actual token):
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -c "from huggingface_hub import login; login(token='YOUR_TOKEN_HERE')"
No output means success. The token is saved to:
C:\Users\YOUR_USERNAME\.cache\huggingface\token
Security note: Never share your token in chat, email, or version control.
If you accidentally expose it, revoke it immediately at the URL above and
create a new one.
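If you are unsure whether a token is already saved, a stdlib-only check of the default cache location works; note that the HF_HOME environment variable can relocate this path, and the helper names below are illustrative, not part of huggingface_hub:

```python
from pathlib import Path

def token_path(home=None):
    """Default file where huggingface_hub saves the login token.
    (HF_HOME can relocate the cache; this checks only the default.)"""
    home = Path(home) if home else Path.home()
    return home / ".cache" / "huggingface" / "token"

def has_saved_token(home=None):
    """True if a token file exists and looks like an hf_... token."""
    p = token_path(home)
    return p.is_file() and p.read_text().strip().startswith("hf_")

# Usage: print(has_saved_token())
```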
Step 4: Download the model
Downloading the model now means the first image generation in the app is fast.
Still in the admin command prompt:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -c "from diffusers import StableDiffusionXLPipeline; StableDiffusionXLPipeline.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', use_safetensors=True)"
This will show progress bars as it downloads 19 files. It takes 10-20
minutes depending on your connection.
If the download fails with WinError 10054 or WinError 10038 (connection
forcibly closed), use this single-threaded download instead:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -c "from huggingface_hub import snapshot_download; snapshot_download('stabilityai/stable-diffusion-xl-base-1.0', max_workers=1)"
This avoids a known Windows bug where parallel downloads corrupt the
connection pool. It is slower but reliable. Once the download finishes,
from_pretrained in your code will use the cached files automatically.
If the single-threaded download also fails (some networks drop
long-lived connections), use the retry script included in the E-Learning
Lab folder (download_sdxl.py). Run it from the E-Learning Lab directory:
cd "D:\SightLab_Assembla\Demos\Lasell Templates\E-Learning Lab"
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" download_sdxl.py
This script calls snapshot_download in a loop with up to 20 retries.
Already-downloaded files are skipped automatically, so each retry picks
up where the last one left off. On a particularly unstable connection it
may take several attempts, but it will get there.
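The retry loop in download_sdxl.py presumably looks something like the sketch below (the script itself is not reproduced here; download_with_retries is an illustrative name). Because snapshot_download skips files already present in the cache, each retry resumes where the previous attempt stopped:

```python
import time

def download_with_retries(download_fn, max_retries=20, delay=5):
    """Call download_fn until it succeeds, up to max_retries attempts.
    Cached files are skipped by snapshot_download, so retries resume."""
    last_exc = None
    for attempt in range(1, max_retries + 1):
        try:
            return download_fn()
        except Exception as exc:  # network resets, WinError 10054, etc.
            last_exc = exc
            print(f"Attempt {attempt}/{max_retries} failed: {exc}")
            time.sleep(delay)
    raise last_exc

# Usage (requires huggingface_hub):
# from huggingface_hub import snapshot_download
# download_with_retries(lambda: snapshot_download(
#     'stabilityai/stable-diffusion-xl-base-1.0', max_workers=1))
```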
The files are cached at:
C:\Users\YOUR_USERNAME\.cache\huggingface\hub\models--stabilityai--stable-diffusion-xl-base-1.0
Note: You may see a warning about symlinks. This is harmless — it just
means the cache uses more disk space (~13 GB instead of ~7 GB).
To fix this, enable Windows Developer Mode:
Settings → System → For developers → Developer Mode → On
To download a different model, substitute its Hugging Face model ID (note: some models need a different pipeline class than StableDiffusionXLPipeline):
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -c "from diffusers import StableDiffusionXLPipeline; StableDiffusionXLPipeline.from_pretrained('SG161222/RealVisXL_V5.0', use_safetensors=True)"
Step 5: Configure the E-Learning Lab
Open AI_Enabled/AI_Agent/configs/AI_Agent_Config_Education.py.
The SDXL settings are already there. The key ones:
# To use SDXL as default, change this:
IMAGE_PROVIDER = 'SDXL (Local)'
# Model and quality settings:
SDXL_MODEL_ID = 'stabilityai/stable-diffusion-xl-base-1.0'
SDXL_STEPS = 30 # Higher = better quality, slower (20-50)
SDXL_GUIDANCE_SCALE = 5.0 # How closely to follow prompt (3.0-9.0)
SDXL_WIDTH = 1024 # Best at 1024x1024
SDXL_HEIGHT = 1024
You can also select SDXL (Local) from the dropdown in the app UI — no need
to edit the config file.
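For reference, here is a minimal standalone sketch of how settings like these plug into a diffusers call. The generation_kwargs and generate helpers are hypothetical names for illustration, not functions in the E-Learning Lab, and the actual provider code may differ:

```python
SDXL_MODEL_ID = 'stabilityai/stable-diffusion-xl-base-1.0'
SDXL_STEPS = 30
SDXL_GUIDANCE_SCALE = 5.0
SDXL_WIDTH = 1024
SDXL_HEIGHT = 1024

def generation_kwargs(prompt):
    """Map the config values onto StableDiffusionXLPipeline call arguments."""
    return {
        "prompt": prompt,
        "num_inference_steps": SDXL_STEPS,
        "guidance_scale": SDXL_GUIDANCE_SCALE,
        "width": SDXL_WIDTH,
        "height": SDXL_HEIGHT,
    }

def generate(prompt, out_path="sdxl_test.png"):
    # Imported here so this file loads even where torch is not installed.
    import torch
    from diffusers import StableDiffusionXLPipeline
    pipe = StableDiffusionXLPipeline.from_pretrained(
        SDXL_MODEL_ID, torch_dtype=torch.float16, use_safetensors=True
    ).to("cuda")
    pipe(**generation_kwargs(prompt)).images[0].save(out_path)

# Usage (requires the GPU setup above):
# generate("a labeled diagram of a plant cell")
```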
Step 6: Generate an image
Launch the E-Learning Lab
In the Presentation Wizard or Asset Browser, look for "SDXL (Local)"
in the Image Provider dropdown
Enter a prompt and generate an image
First generation loads the model into GPU memory (~10-15 seconds)
Subsequent generations are faster (~5-15 seconds depending on steps)
Troubleshooting

"No module named torch" (or diffusers)
The dependencies are not installed in Vizard's Python. Re-run Step 1 and
Step 2 from an admin command prompt.
Verify with:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -c "import torch; import diffusers; print('OK')"
Generation fails with a CUDA error
Your PyTorch version doesn't support your GPU. RTX 5080/5090 need the
nightly build with cu128. See Step 1.
CUDA out of memory
Your GPU doesn't have enough VRAM. Try reducing the resolution:
SDXL_WIDTH = 768
SDXL_HEIGHT = 768
Or reduce steps:
SDXL_STEPS = 20
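Beyond lowering resolution and steps, diffusers itself offers memory-saving modes. A hedged sketch follows; the helper names are illustrative, not part of the E-Learning Lab (enable_model_cpu_offload needs the accelerate package, which Step 2 installs):

```python
def fallback_resolution(width, height, scale=0.75):
    """Scale a resolution down, snapped to the multiple of 8 SDXL's VAE expects."""
    def snap(v):
        return max(8, int(v * scale) // 8 * 8)
    return snap(width), snap(height)

def load_low_vram_pipeline(model_id="stabilityai/stable-diffusion-xl-base-1.0"):
    """Build an SDXL pipeline with diffusers' memory-saving options enabled."""
    import torch
    from diffusers import StableDiffusionXLPipeline
    pipe = StableDiffusionXLPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16, use_safetensors=True
    )
    pipe.enable_model_cpu_offload()  # keep submodules on CPU until needed
    pipe.enable_attention_slicing()  # trade a little speed for less peak VRAM
    return pipe

# fallback_resolution(1024, 1024) -> (768, 768)
```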
pip install fails with a permissions error
You are NOT running as admin. Close the command prompt, right-click
Command Prompt, select "Run as administrator", and try again.
The first generation is slow
Normal. The model loads into GPU memory on first use (~10-15 seconds).
Subsequent generations reuse the loaded model.
WinError 10054 / WinError 10038 during download
This is a known bug in the httpx library on Windows. When Hugging Face
downloads multiple files in parallel, the connection pool can become
corrupted, causing "connection forcibly closed" and "not a socket" errors.
Fix: use snapshot_download with max_workers=1 to force sequential
downloads (see Step 4 above for the exact command). Once the files are
cached, from_pretrained will load them without re-downloading.
If even single-threaded downloads fail, use download_sdxl.py (see
Step 4). It retries automatically and resumes from where it left off.
Missing or corrupt model files when loading
The model cache is incomplete; a previous download was interrupted.
Delete the partial cache and re-download:
rmdir /s /q "%USERPROFILE%\.cache\huggingface\hub\models--stabilityai--stable-diffusion-xl-base-1.0"
Then re-run the download command from Step 4 (use max_workers=1 or
download_sdxl.py to avoid the same failure).
Symlink warning during download
Harmless. Enable Developer Mode to fix it:
Settings → System → For developers → Developer Mode → On
Disabling or uninstalling
To disable SDXL without uninstalling, just select a different provider
(Gemini, OpenAI, etc.) in the config or the UI dropdown. The SDXL code
is fully guarded — if the dependencies aren't loaded, nothing changes.
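The guard described above is presumably an import check along these lines (a hypothetical sketch; the actual provider registry in the E-Learning Lab may be structured differently):

```python
# Try the optional dependency once at import time; fall back cleanly if absent.
try:
    from diffusers import StableDiffusionXLPipeline  # noqa: F401
    SDXL_AVAILABLE = True
except ImportError:
    SDXL_AVAILABLE = False

def available_providers():
    """Only offer SDXL (Local) when its dependencies actually import."""
    providers = ["Gemini", "OpenAI"]
    if SDXL_AVAILABLE:
        providers.append("SDXL (Local)")
    return providers
```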
To remove the model cache and free ~13 GB:
rmdir /s /q "%USERPROFILE%\.cache\huggingface\hub\models--stabilityai--stable-diffusion-xl-base-1.0"
To uninstall the Python packages:
"C:\Program Files\WorldViz\Vizard8\bin\python.exe" -m pip uninstall torch torchvision torchaudio diffusers transformers accelerate safetensors invisible-watermark
Warning: Other features in the E-Learning Lab may depend on torch.
Only uninstall if you're sure nothing else needs it.