Public pipelines

DALL·E Mini & DALL·E Mega

Inputs

| name | type | default | description |
| --- | --- | --- | --- |
| prompts | List[str] | [""] | Text caption for the model to convert into images. Batch inference is supported by passing multiple prompts. |

Inference arguments

| name | type | default | description |
| --- | --- | --- | --- |
| num_images | int | 9 | How many images to generate. Currently this must be a square number, such as 4 or 9. |
| diversity | int (optional) | 4 | Modifies the superconditioning scale: a lower diversity keeps images closer to the prompt, but they will be more similar to one another. |
| seed | int (optional) | -1 | Use a positive integer for deterministic sampling. |
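
As a sketch of how these arguments fit together, the snippet below builds a request body for a batch of prompts. The endpoint URL, API key, and transport are hypothetical placeholders; only the field names come from the tables above, so substitute your pipeline's actual endpoint and authentication.

```python
import requests

# Hypothetical endpoint and API key -- replace with your pipeline's actual
# URL and credentials. Only the field names below come from this page.
API_URL = "https://example.com/v1/pipelines/dalle-mega/run"
API_KEY = "YOUR_API_KEY"

payload = {
    "prompts": [                      # batch inference: one set of images per prompt
        "a watercolor painting of a lighthouse at dusk",
        "a robot reading a newspaper on the moon",
    ],
    "num_images": 9,                  # must be a square number (4, 9, ...)
    "diversity": 2,                   # lower = closer to the prompt, less varied
    "seed": 42,                       # positive integer for deterministic sampling
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=300,
)
response.raise_for_status()
result = response.json()
```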

Outputs

| name | type | description |
| --- | --- | --- |
| output_images | List[str] | Base64-encoded images at 256x256x3 resolution. Use your programming language's native base64 decoder to access the images in your app. |
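
Since each entry is a base64 string, decoding needs only the standard library. This sketch assumes the `result` dictionary from the previous example and that each decoded payload is an ordinary image file (the `.png` extension is an assumption); adjust to however your client receives the response.

```python
import base64

# Assumes `result` is the parsed JSON response from the request above and
# that each entry decodes to a standard image file (extension is a guess).
for i, encoded in enumerate(result["output_images"]):
    image_bytes = base64.b64decode(encoded)
    with open(f"image_{i}.png", "wb") as f:
        f.write(image_bytes)
```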

GPT-J & other GPT models

Inputs

| name | type | default | description |
| --- | --- | --- | --- |
| prompt | str | "" | Context from which the autoregressive model will predict next tokens. |

Inference arguments

| name | type | default | description |
| --- | --- | --- | --- |
| response_length | int (optional) | N/A | How many new tokens to generate. |
| include_input | bool (optional) | False | Whether to concatenate the generated text with your original prompt. |

We also support all the sampling arguments from the `transformers` implementation of GPT-J, including `temperature`, `top_k`, and `top_p`.
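
To make the argument names concrete, here is a hypothetical request body combining the inference arguments above with those sampling arguments. The endpoint and key are placeholders; only the names `prompt`, `response_length`, `include_input`, `temperature`, `top_k`, and `top_p` are taken from this page.

```python
import requests

# Hypothetical endpoint and API key -- replace with your pipeline's actual
# URL and credentials; the argument names come from the tables and the
# sampling arguments listed above.
API_URL = "https://example.com/v1/pipelines/gpt-j/run"
API_KEY = "YOUR_API_KEY"

payload = {
    "prompt": "Once upon a time, in a datacenter far away,",
    "response_length": 64,     # number of new tokens to generate
    "include_input": True,     # prepend the original prompt to the output
    "temperature": 0.8,        # transformers-style sampling arguments
    "top_k": 50,
    "top_p": 0.95,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json()["output_str"])
```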

Outputs

| name | type | description |
| --- | --- | --- |
| output_str | str | Generated text (or prompt + generated text if include_input is True). |
