
LoRA Adapters

Load Low-Rank Adaptation (LoRA) weights to customize model behavior.

Basic Usage

curl -X POST "https://sync.render.weyl.ai/image/flux/dev/t2i?format=1024" \
  -H "Authorization: Bearer $WEYL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "a portrait in the style of artgerm",
    "loras": [
      {
        "url": "https://example.com/artgerm_lora.safetensors",
        "weight": 0.8
      }
    ]
  }' \
  -o output.webp
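
For reference, here is a minimal Python sketch of the same request using the requests library. The endpoint, headers, and body mirror the curl example above; everything else (timeout, output filename) is an illustrative choice, not part of the API.

import os
import requests

# Same endpoint and payload as the curl example above.
url = "https://sync.render.weyl.ai/image/flux/dev/t2i"
headers = {"Authorization": f"Bearer {os.environ['WEYL_API_KEY']}"}
payload = {
    "prompt": "a portrait in the style of artgerm",
    "loras": [
        {"url": "https://example.com/artgerm_lora.safetensors", "weight": 0.8},
    ],
}

resp = requests.post(url, params={"format": "1024"}, headers=headers, json=payload, timeout=120)
resp.raise_for_status()

# The response body is the rendered image.
with open("output.webp", "wb") as f:
    f.write(resp.content)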

Multiple LoRAs

Stack multiple adapters:

{
  "prompt": "cyberpunk portrait",
  "loras": [
    {
      "url": "https://cdn.render.weyl.ai/loras/cyberpunk.safetensors",
      "weight": 0.7
    },
    {
      "url": "https://cdn.render.weyl.ai/loras/cinematic.safetensors",
      "weight": 0.5
    }
  ]
}
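
If you assemble the loras array programmatically, a small helper keeps the stacking logic in one place. This is a minimal sketch assuming the request body format shown above; build_loras is a hypothetical helper, not part of the API.

# Hypothetical helper: turn (url, weight) pairs into the "loras" array
# expected by the request body shown above.
def build_loras(adapters):
    return [{"url": url, "weight": weight} for url, weight in adapters]

payload = {
    "prompt": "cyberpunk portrait",
    "loras": build_loras([
        ("https://cdn.render.weyl.ai/loras/cyberpunk.safetensors", 0.7),
        ("https://cdn.render.weyl.ai/loras/cinematic.safetensors", 0.5),
    ]),
}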

Weight Tuning

Accepted weight range: 0.0 - 1.5

  • 0.3-0.5 - Subtle influence
  • 0.6-0.9 - Moderate influence (recommended)
  • 1.0-1.2 - Strong influence
  • 1.3-1.5 - Maximum influence (may overpower the prompt or introduce artifacts)
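
To find a good weight empirically, it can help to render the same prompt at a few points across the range and compare the results side by side. A minimal Python sketch, reusing the endpoint from Basic Usage; the adapter URL and output filenames are placeholders.

import os
import requests

URL = "https://sync.render.weyl.ai/image/flux/dev/t2i"
HEADERS = {"Authorization": f"Bearer {os.environ['WEYL_API_KEY']}"}
LORA_URL = "https://example.com/artgerm_lora.safetensors"  # placeholder adapter

# Render the same prompt at subtle, moderate, and strong weights to compare.
for weight in (0.4, 0.8, 1.2):
    payload = {
        "prompt": "a portrait in the style of artgerm",
        "loras": [{"url": LORA_URL, "weight": weight}],
    }
    resp = requests.post(URL, params={"format": "1024"}, headers=HEADERS, json=payload, timeout=120)
    resp.raise_for_status()
    with open(f"portrait_w{weight}.webp", "wb") as f:
        f.write(resp.content)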

Compatibility

Supported:

  • FLUX Dev ✓
  • FLUX Dev2 ✓ (FLUX.1 LoRAs work)

Not Supported:

  • FLUX Schnell (distilled)
  • Z-Image (different architecture)
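
A client-side guard can catch incompatible combinations before a request is sent. The sketch below is only an illustration: the model names are taken from the compatibility list above, and attach_loras is a hypothetical helper.

# Hypothetical guard: only attach LoRAs to models listed as supported above.
LORA_CAPABLE = {"FLUX Dev", "FLUX Dev2"}

def attach_loras(payload, model, loras):
    """Add the "loras" field only when the target model supports adapters."""
    if model not in LORA_CAPABLE:
        raise ValueError(f"{model} does not support LoRA adapters")
    return {**payload, "loras": loras}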

LoRA Sources

  • Hugging Face Hub
  • CivitAI
  • Custom trained
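
Based on the examples above, the url field takes a direct link to a .safetensors file. As one hypothetical illustration, a LoRA hosted on the Hugging Face Hub can be referenced via its resolve URL; the repo and filename below are placeholders.

# Hypothetical Hugging Face Hub reference; repo and filename are placeholders.
repo = "your-username/your-flux-lora"
filename = "lora.safetensors"
lora_entry = {
    "url": f"https://huggingface.co/{repo}/resolve/main/{filename}",
    "weight": 0.8,
}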