Generate images with wan2.7-image and wan2.7-image-pro. These models route through /v1/chat/completions (not /v1/images/generations) using OpenAI’s multimodal chat schema.
Wan models do not use the /v1/images/generations path. Send a standard chat request with a multimodal content array containing a text prompt, and AIsa returns generated images as {type: "image"} parts inside choices[].message.content[].
Looking for Seedream (seedream-4-5-251128)? It uses a different route: /v1/images/generations. Gemini image previews use /v1/models/{model}:generateContent. This page only covers the Wan 2.7 family.

| Model | Cost per image | Typical use |
|---|---|---|
| wan2.7-image | $0.030 | Fast, general-purpose image generation |
| wan2.7-image-pro | $0.075 | Higher fidelity; also supports image-to-video via a separate flow |
Requests go to the same POST /v1/chat/completions endpoint you already use for text; the only differences are which model you pass and how content is structured.
Critical rule: messages[].content must be an array of typed parts. Passing a plain string returns 400 invalid_parameter_error with the message "Input should be a valid list: messages[*].content".
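A minimal valid request body therefore looks like the following sketch (the prompt text is a placeholder; note that content is an array of typed parts, never a bare string):

```json
{
  "model": "wan2.7-image",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "A watercolor fox in a snowy forest" }
      ]
    }
  ],
  "n": 1
}
```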
| Field | Type | Required | Notes |
|---|---|---|---|
| model | string | yes | wan2.7-image or wan2.7-image-pro |
| messages[].role | string | yes | user for the prompt turn |
| messages[].content | array | yes | Must be an array, not a string |
| messages[].content[].type | string | yes | text for prompt parts; image_url for image-to-image inputs |
| messages[].content[].text | string | when type=text | The prompt |
| messages[].content[].image_url.url | string | when type=image_url | Reference image URL |
| n | integer | no | Number of images. Default is 4 for wan2.7-image; pass 1 to save cost |
With n=4, you get 4 entries in choices[]. Each choice.message.content is an array with a single { "type": "image", "image": "..." } part. The image value is a short-lived URL (download it soon) or base64 data, depending on your workspace configuration. usage.total_tokens reflects the small token cost of the request framing; billing is per-image at the rate in the table above, not per token.

For image-to-image, add an image_url part to the content array and follow it with a text instruction:
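An image-to-image request body under this schema might look like the following sketch (the reference URL and instruction text are placeholders):

```json
{
  "model": "wan2.7-image-pro",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "image_url", "image_url": { "url": "https://example.com/reference.jpg" } },
        { "type": "text", "text": "Restyle this photo as a watercolor painting" }
      ]
    }
  ],
  "n": 1
}
```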
This is the same POST /v1/chat/completions request the standard OpenAI Chat endpoint uses; only the model and content shape are tuned for images. Your existing OpenAI-compatible SDK code works without modification once you swap in the image model and the multimodal content array.
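Because responses come back in Chat Completion shape, collecting the generated images is plain dictionary traversal. A minimal Python sketch, run against a hand-built sample response shaped per the description above (the URLs are placeholders, not real CDN paths):

```python
def extract_images(response: dict) -> list[str]:
    """Collect every {"type": "image"} part from a Chat Completion response."""
    images = []
    for choice in response.get("choices", []):
        for part in choice.get("message", {}).get("content", []):
            if part.get("type") == "image":
                # Short-lived URL or base64 data, depending on workspace config
                images.append(part["image"])
    return images


# Sample response shaped like the description above (n=2).
sample = {
    "choices": [
        {"message": {"content": [{"type": "image", "image": "https://cdn.example/img1.png"}]}},
        {"message": {"content": [{"type": "image", "image": "https://cdn.example/img2.png"}]}},
    ]
}
print(extract_images(sample))  # one entry per generated image
```

If your workspace returns short-lived URLs, download each entry promptly rather than storing the URL.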
- 400 invalid_parameter_error with "Input should be a valid list: messages[*].content": content was passed as a string; wrap it in an array of typed parts.
- 400 referencing messages: you sent the Gemini-style contents/parts. Use messages with OpenAI multimodal parts for Wan models.
- 404 openai_error on /v1/images/generations: wrong endpoint. Wan models do not route through that path.
- 500 model_not_found: your workspace isn't provisioned for the Wan family. Contact support.

Authentication uses a Bearer header of the form Bearer <token>, where <token> is your auth token.
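The two 400s above can be caught client-side before the request is sent. A minimal sketch; validate_messages is an illustrative helper, not part of any SDK:

```python
def validate_messages(messages: list[dict]) -> None:
    """Raise early on the malformed content shapes the API rejects with a 400."""
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, str):
            # Would trigger: 400 "Input should be a valid list: messages[*].content"
            raise TypeError("content must be an array of typed parts, not a string")
        for part in content:
            # Wan models expect OpenAI multimodal parts, not Gemini contents/parts
            if part.get("type") not in ("text", "image_url"):
                raise TypeError("each content part needs type 'text' or 'image_url'")


ok = [{"role": "user", "content": [{"type": "text", "text": "a red bicycle"}]}]
validate_messages(ok)  # passes silently
```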
Image-generation model. wan2.7-image ($0.030/image) for standard quality, wan2.7-image-pro ($0.075/image) for higher fidelity.
Allowed values: wan2.7-image, wan2.7-image-pro

Conversation messages. Image prompts go in the last user message's content array as {type: "text"} parts.
Number of images to generate. wan2.7-image returns 4 by default; pass 1 to save cost.
Constraint: 1 <= x <= 4

Images generated. Returned as a Chat Completion with message.content[] image parts.