Hello everyone, I've been trying to connect the Stability API to my n8n workflow for a while now. My Hugging Face credentials are properly configured and my URL is https://huggingface.co/proxy/router.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0, but after I execute the workflow the output returns a 404 "request could not be found" error. I've checked everywhere and can't seem to find a solution, so I would be grateful if anyone here knows how to get around that.
There are several valid shapes for Hugging Face inference URLs, and yours doesn't match any route the router actually serves.
Your 404 is coming from the URL shape, not from SDXL, not from n8n, and not from “bad Hugging Face credentials”.
You are calling:
https://huggingface.co/proxy/router.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0
That path (/models/...) is not a valid route on the Hugging Face Router for inference. The Router serves specific route families, and the OpenAI-compatible /v1/... endpoint is chat-only, not image generation. (Hugging Face)
What you want is the HF Inference provider route under the router:
https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/<model_id> (Hugging Face)
For SDXL base 1.0, that becomes:
https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/stabilityai/stable-diffusion-xl-base-1.0 (Hugging Face)
Background: what “router.huggingface.co” is (and why this trips people)
Hugging Face has been standardizing serverless inference behind Inference Providers. That system supports multiple tasks (text-to-image, embeddings, speech, etc.) and multiple providers. (Hugging Face)
Inside that system there are two “API styles” people mix up:
- OpenAI-compatible endpoint
  - Base: https://huggingface.co/proxy/router.huggingface.co/v1
  - Works for: chat completions only
  - Not for: text-to-image, embeddings, speech, etc. (Hugging Face)
- Provider-task inference (HF Inference, fal, Together, etc.)
  - Common pattern for HF Inference direct calls: https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/<model> (Hugging Face)
  - This is what you need for SDXL image generation.
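To make the split concrete, here is a minimal sketch assuming Python's requests library and an HF_TOKEN environment variable (the chat model name is only an illustrative placeholder):

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

# 1) OpenAI-compatible route: chat completions ONLY.
chat = requests.post(
    "https://huggingface.co/proxy/router.huggingface.co/v1/chat/completions",
    headers=headers,
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder chat model
        "messages": [{"role": "user", "content": "Hello"}],
    },
)

# 2) Provider-task route: what SDXL text-to-image actually needs.
image = requests.post(
    "https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/stabilityai/stable-diffusion-xl-base-1.0",
    headers=headers,
    json={"inputs": "Astronaut riding a horse"},
)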
So your current URL is basically “a route the router does not serve”, which yields a clean 404 Not Found.
Fix 1: use the correct endpoint
Use:
POST https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/stabilityai/stable-diffusion-xl-base-1.0 (Hugging Face)
This same endpoint pattern is shown in multiple Hugging Face examples (including Stable Diffusion model discussions and other integrations). (Hugging Face)
Fix 2: send the correct payload for text-to-image
Hugging Face’s text-to-image task spec is simple:
- JSON body has:
  - inputs: your prompt (string)
  - optional parameters: width, height, steps, guidance, seed, negative prompt, etc.
- Response body is the generated image as raw bytes (not JSON). (Hugging Face)
Example payload:
{
"inputs": "Astronaut riding a horse, cinematic lighting, ultra-detailed",
"parameters": {
"width": 1024,
"height": 1024,
"num_inference_steps": 30,
"guidance_scale": 7.5,
"negative_prompt": "blurry, low quality",
"seed": 12345
}
}
The field names above come directly from the task spec. (Hugging Face)
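To prove the endpoint and payload together outside of n8n, here is a minimal sketch assuming Python's requests library and an HF_TOKEN environment variable:

import os
import requests

URL = (
    "https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/"
    "stabilityai/stable-diffusion-xl-base-1.0"
)
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

payload = {
    "inputs": "Astronaut riding a horse, cinematic lighting, ultra-detailed",
    "parameters": {
        "width": 1024,
        "height": 1024,
        "num_inference_steps": 30,
        "guidance_scale": 7.5,
        "negative_prompt": "blurry, low quality",
        "seed": 12345,
    },
}

resp = requests.post(URL, headers=headers, json=payload, timeout=300)
resp.raise_for_status()

# The body IS the image, not JSON: write the raw bytes straight to disk.
with open("out.png", "wb") as f:
    f.write(resp.content)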
Fix 3: your Hugging Face token must have the right permission
For Inference Providers, Hugging Face explicitly requires a token with “Inference Providers” permission. (Hugging Face)
In practice:
- If you use a fine-grained token, make sure that permission is enabled.
- If you only have a read-only token, you can see confusing failures when calling router/provider endpoints.
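If you want to sanity-check the token itself before blaming the workflow, here is a quick sketch (assuming the requests library; whoami-v2 is Hugging Face's token-introspection endpoint, though the exact shape of its response can vary by token type):

import os
import requests

resp = requests.get(
    "https://huggingface.co/proxy/huggingface.co/api/whoami-v2",
    headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
)
print(resp.status_code)  # 401 here means the token itself is invalid
# Inspect the token's role/permissions; field layout may differ by token type.
print(resp.json().get("auth"))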
Fix 4: n8n must treat the response as binary
Text-to-image returns raw image bytes. (Hugging Face)
So in n8n, your HTTP Request node must be configured to keep the response as a file/binary blob.
Two useful n8n references:
- n8n docs: HTTP Request node basics and “Import cURL” (good for avoiding UI mistakes). (n8n Docs)
- n8n community guidance: add options → Response → set Response format to “File” to capture binary output. (n8n Community)
What to set in the HTTP Request node
- Method: POST (n8n Docs)
- URL: https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/stabilityai/stable-diffusion-xl-base-1.0
- Headers:
  - Authorization: Bearer <HF_TOKEN> (Hugging Face)
  - Content-Type: application/json
- Body: JSON (the payload above)
- Options: Response format: File (or equivalent binary setting in your node version) (n8n Community)
If you leave response parsing as JSON, you will either get errors or “garbled output” because PNG bytes are not JSON.
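A cheap way to confirm what actually came back (a sketch continuing the earlier example; resp is the requests response, inspected before any raise_for_status(); the PNG check is just the standard 8-byte file signature):

ctype = resp.headers.get("content-type", "")
if ctype.startswith("image/") or resp.content[:8] == b"\x89PNG\r\n\x1a\n":
    print("Got an image:", len(resp.content), "bytes")
else:
    # Failures come back as JSON/text, e.g. {"error": "..."}
    print("Not an image:", resp.text[:200])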
A fast way to eliminate n8n misconfiguration: prove it with curl, then import it
n8n explicitly supports importing a curl command into the HTTP Request node. (n8n Docs)
Use something like:
curl -X POST \
'https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/stabilityai/stable-diffusion-xl-base-1.0' \
-H "Authorization: Bearer $HF_TOKEN" \
-H "Content-Type: application/json" \
--data '{"inputs":"Astronaut riding a horse","parameters":{"width":1024,"height":1024}}' \
--output out.png
If that works, import the curl into n8n (“Import cURL”) and you usually get a working node immediately. (n8n Docs)
If you still see 404 after fixing the URL
There are two real possibilities:
1) Transient Hugging Face-side routing issues
There have been periods where users reported waves of 404s across models on inference endpoints. (Hugging Face Forums)
If your exact same request works later without changes, it was likely platform-side.
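If you suspect a platform-side blip, a small retry with backoff (a sketch reusing URL, headers, and payload from the earlier example) separates flaky routing from a genuinely wrong URL:

import time
import requests

for attempt in range(4):
    resp = requests.post(URL, headers=headers, json=payload, timeout=300)
    if resp.status_code != 404:
        break  # anything other than 404: not the transient-routing case
    time.sleep(2 ** attempt)  # back off 1s, 2s, 4s before retrying

# A 404 that survives every attempt points at the URL, not the platform.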
2) Wrong API style (common)
If you accidentally used:
https://huggingface.co/proxy/router.huggingface.co/v1/...for image generation, it will not work because that endpoint is chat-only. (Hugging Face)
High-quality references worth bookmarking
- Hugging Face Inference Providers overview (how routing works): https://huggingface.co/docs/inference-providers/en/index (Hugging Face)
- Text-to-Image API spec (inputs, parameters, raw bytes response, token permission): https://huggingface.co/docs/inference-providers/en/tasks/text-to-image (Hugging Face)
- HF Inference provider docs (shows router patterns and auth header examples): https://huggingface.co/docs/inference-providers/en/providers/hf-inference (Hugging Face)
- Example of calling Stable Diffusion via router.../hf-inference/models/... and reading response bytes: https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/discussions/12 (Hugging Face)
- n8n HTTP Request node docs (Import cURL): https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.httprequest/ (n8n Docs)
- n8n community tip on “Response format: File” for binary: https://community.n8n.io/t/help-with-http-request-node-for-uploading-a-binary-file-to-google-drive/166256 (n8n Community)
Summary
- Your 404 is because https://huggingface.co/proxy/router.huggingface.co/models/... is the wrong route.
- Use https://huggingface.co/proxy/router.huggingface.co/hf-inference/models/stabilityai/stable-diffusion-xl-base-1.0. (Hugging Face)
- Send { "inputs": "...", "parameters": {...} } and expect raw image bytes back. (Hugging Face)
- In n8n, set the HTTP response handling to File/Binary. (n8n Community)
- Ensure your HF token has Inference Providers permission. (Hugging Face)