Available models

NLP Models

| Model name | Pipeline ID | Avg. compute time | Info |
| --- | --- | --- | --- |
| GPT-J | pipeline_6908d8fb68974c288c69ef45454c8475 | 1K tokens (~40s of compute) | High-quality text generations on par with GPT-3. |
| GPT-Neo 2.7B | pipeline_262e705ae32c4d3c9513767ef41711fd | 1K tokens (~48s of compute) | High-quality text generations. |
| GPT-Neo 1.3B | pipeline_c8d9d179ec1b4ea3bc5ad3c8adb00f1d | | Medium-quality text generations. |
| GPT-Neo 125M | pipeline_3fb167364a2848298828354257f856f8 | | Fast text generations. |
| GPT-2 XL | pipeline_5dc496ffb4634764a310c3fe3bfd2b84 | | High-quality text generations. |
| GPT-2 Large | pipeline_2cda484e1e69442897854445cbd46b7a | 1K tokens (~30s of compute) | High-quality text generations. |
| GPT-2 Medium | pipeline_9b40eb8edeee40d8b8818cd29058f221 | | Medium-quality text generations. |
| GPT-2 | pipeline_2e6bbc4597f949e8b17603d90e8d8a78 | | Text generations. |

The following inputs, inference arguments, and outputs apply to all of the NLP models above.

Inputs

| name | type | default | description |
| --- | --- | --- | --- |
| prompt | str | `""` | Context from which the autoregressive model will predict next tokens. |

Inference arguments

| name | type | default | description |
| --- | --- | --- | --- |
| response_length | int (optional) | N/A | How many new tokens to generate. |
| include_input | bool (optional) | False | Whether to concatenate the generated text with your original prompt. |

We also support all the sampling arguments from the `transformers` implementation of GPT-J, including `temperature`, `top_k`, and `top_p`.

Outputs

| name | type | description |
| --- | --- | --- |
| output_str | str | Generated text (or prompt + generated text if `include_input` is True). |
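As a sketch of how these fields fit together, the payload below assembles a GPT-J text-generation request. The pipeline ID, input name, inference arguments, and sampling arguments come from the tables above; the envelope keys (`pipeline_id`, `inputs`, `inference_kwargs`) and how the payload is sent over the wire are assumptions, so check them against the API reference for your account.

```python
import json

# Sketch of a text-generation request body for the GPT-J pipeline.
# prompt, response_length, include_input, temperature, top_k, and top_p
# are the documented names; the envelope keys are assumptions.
payload = {
    "pipeline_id": "pipeline_6908d8fb68974c288c69ef45454c8475",  # GPT-J
    "inputs": {
        "prompt": "The old lighthouse keeper opened the door and",
    },
    "inference_kwargs": {
        "response_length": 64,   # generate up to 64 new tokens
        "include_input": True,   # return prompt + generated text
        "temperature": 0.8,      # transformers-style sampling arguments
        "top_k": 50,
        "top_p": 0.95,
    },
}

# Serialize for an HTTP POST to the (account-specific) inference endpoint.
body = json.dumps(payload)
```

With `include_input` set to True as above, `output_str` in the response would contain the prompt followed by the newly generated tokens.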

Image Generation Models

| Model name | Pipeline ID | Avg. compute time | Info |
| --- | --- | --- | --- |
| DALL•E Mini | pipeline_e118be46bd8248d7b251beec397172eb | 9 images (~15s of compute) | Generate images from a text prompt. |
| DALL•E Mega | pipeline_17ac3021b7674b10a6fbe3cb980ff57d | 9 images (~24s of compute) | Generate images from a text prompt. Higher quality than Mini. |
| Stable Diffusion | pipeline_67d9d8ec36d54c148c70df1f404b0369 | | Excellent-quality image generation from a text prompt. |

The following inputs, inference arguments, and outputs apply to all of the image generation models above.

Inputs

| name | type | default | description |
| --- | --- | --- | --- |
| prompts | List[str] | `[""]` | Text caption for the model to convert into images. Batch inference is supported by passing multiple prompts. |

Inference arguments

| name | type | default | description |
| --- | --- | --- | --- |
| num_images | int | 9 | How many images to generate. Currently this must be a square number such as 4 or 9. |
| diversity | int (optional) | 4 | Modifies the superconditioning: lower diversity keeps images closer to the prompt, but makes them more similar to each other. |
| seed | int (optional) | -1 | Use a positive integer for deterministic sampling. |
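The square-number constraint on `num_images` is easy to validate before sending a request. The sketch below checks it and assembles a DALL•E Mini payload; the documented fields are `prompts`, `num_images`, `diversity`, and `seed`, while the envelope keys (`pipeline_id`, `inputs`, `inference_kwargs`) are assumptions carried over for illustration.

```python
import math

def is_square(n: int) -> bool:
    """num_images must currently be a perfect square such as 4 or 9."""
    root = math.isqrt(n)
    return root * root == n

num_images = 9
assert is_square(num_images), "num_images must be a square number (4, 9, ...)"

# Hypothetical request body; only the inner field names are documented.
payload = {
    "pipeline_id": "pipeline_e118be46bd8248d7b251beec397172eb",  # DALL-E Mini
    "inputs": {"prompts": ["a watercolor painting of a lighthouse"]},
    "inference_kwargs": {
        "num_images": num_images,
        "diversity": 4,   # lower = closer to the prompt, less varied
        "seed": 42,       # positive integer => deterministic sampling
    },
}
```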

Outputs

| name | type | description |
| --- | --- | --- |
| output_images | List[str] | Base64-encoded images at 256x256x3 resolution. Use your programming language's native base64 decoder to access the images in your app. |
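In Python, each entry of `output_images` can be decoded with the standard `base64` module. The sketch below uses a short stand-in string in place of a real response (a real entry would decode to the bytes of a 256x256x3 image) and writes each decoded image to disk; the filenames are hypothetical.

```python
import base64

# Stand-in for the API response: output_images is a list of
# base64-encoded strings. A short placeholder keeps the example small;
# a real entry would decode to full image-file bytes.
output_images = [base64.b64encode(b"fake image bytes").decode("ascii")]

for i, encoded in enumerate(output_images):
    raw = base64.b64decode(encoded)          # image-file bytes
    with open(f"image_{i}.png", "wb") as f:  # hypothetical filename
        f.write(raw)
```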
