https://syncedreview.com/2021/01/05/this-time-openais-gpt-3-generates-images-from-text/

For Its Latest Trick, OpenAI’s GPT-3 Generates Images From Text Captions

OpenAI has trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.

By Synced

In the latest demonstration of popular large language model GPT-3’s power and potential, OpenAI researchers today unveiled DALL·E, a neural network trained to create images from text captions across a wide range of concepts expressible in natural language.

OpenAI’s GPT-3, released last June, showed that natural language inputs could be used to instruct a large neural network to perform a variety of text generation tasks. The same month, the company’s ImageGPT research showed that similar neural networks could generate high-fidelity images.

To start the new year, OpenAI's DALL·E builds on this work, aiming "to show that manipulating visual concepts through language is now within reach."

Deriving its name from a portmanteau of artist Salvador Dalí and Pixar’s WALL·E, DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions using a dataset of text–image pairs. DALL·E boasts a diverse set of capabilities, such as creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text and applying transformations to existing images.


DALL·E is a transformer-based language model whose vocabulary contains tokens for both text and image concepts. It receives the text and the image as a single stream of data containing up to 1280 tokens, and is trained using maximum likelihood to generate all of the tokens one after another, allowing it to create images from scratch. It can also regenerate regions of existing images in a manner consistent with the text prompt.
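To make that mechanism concrete, here is a minimal sketch, in PyTorch, of how such a model could be trained: caption tokens and discrete image codes are concatenated into one stream, and a causally masked transformer is trained with a maximum-likelihood (next-token prediction) objective. The vocabulary sizes, model dimensions and the assumption that images have already been compressed into discrete codes are illustrative; this is not OpenAI's implementation.

```python
# Minimal sketch of DALL·E-style autoregressive training, not OpenAI's actual code.
# Assumptions: captions are already tokenized, and each image has been compressed
# into a short grid of discrete codes (e.g. by a discrete VAE); sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 8192 + 16384   # assumed shared vocabulary: text tokens + image codes
MAX_LEN = 1280              # text + image tokens form a single stream of up to 1280 tokens

class TinyAutoregressiveTransformer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        self.pos = nn.Embedding(MAX_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens):
        b, t = tokens.shape
        x = self.embed(tokens) + self.pos(torch.arange(t, device=tokens.device))
        # Causal mask: each position may only attend to earlier positions in the stream.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=tokens.device), 1)
        return self.head(self.blocks(x, mask=mask))

model = TinyAutoregressiveTransformer()
text_tokens = torch.randint(0, 8192, (2, 64))              # stand-in for caption tokens
image_tokens = torch.randint(8192, VOCAB_SIZE, (2, 256))   # stand-in for discrete image codes
stream = torch.cat([text_tokens, image_tokens], dim=1)     # one stream: text first, then image

# Maximum-likelihood training: predict every next token in the combined stream.
logits = model(stream[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, VOCAB_SIZE), stream[:, 1:].reshape(-1))
loss.backward()
```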

OpenAI today also introduced CLIP (Contrastive Language–Image Pretraining), a neural network that efficiently learns visual concepts from natural language supervision. The researchers say CLIP can be applied to any visual classification benchmark simply by providing the names of the visual categories to be recognized, which is similar to the "zero-shot" capabilities of GPT-2 and GPT-3.
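The name describes the training signal: images and their captions are encoded into a shared embedding space, and the model learns to match each image with its own caption rather than any other caption in the batch. A minimal sketch of such a contrastive objective, with illustrative names and dimensions rather than OpenAI's released code, might look like this:

```python
# Minimal sketch of a CLIP-style contrastive objective; encoders and sizes are assumed.
import torch
import torch.nn.functional as F

def contrastive_loss(image_features, text_features, temperature=0.07):
    # Normalize so the dot product is a cosine similarity.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Similarity between every image and every caption in the batch.
    logits = image_features @ text_features.t() / temperature

    # Matching image-caption pairs lie on the diagonal; all other pairs are negatives.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_img = F.cross_entropy(logits, targets)       # image -> correct caption
    loss_txt = F.cross_entropy(logits.t(), targets)   # caption -> correct image
    return (loss_img + loss_txt) / 2

# Example with random stand-in embeddings for a batch of 8 image-caption pairs.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```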

Trained on a wide variety of images paired with the natural language supervision abundantly available on the Internet, the network can be instructed in natural language to perform many different classification benchmarks without being directly optimized for any single benchmark's performance.
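In practice, zero-shot classification with a CLIP-style model amounts to embedding a textual prompt for each candidate class and picking the class whose text embedding is most similar to the image embedding. The sketch below assumes pretrained image and text encoders exist; the prompt template, class names and stand-in embeddings are hypothetical:

```python
# Minimal sketch of CLIP-style zero-shot classification.
# The random tensors below stand in for the outputs of pretrained
# text and image encoders; the prompt template is illustrative.
import torch
import torch.nn.functional as F

class_names = ["dog", "cat", "avocado armchair"]
prompts = [f"a photo of a {name}" for name in class_names]

text_features = F.normalize(torch.randn(len(prompts), 512), dim=-1)   # would be encode_text(prompts)
image_features = F.normalize(torch.randn(1, 512), dim=-1)             # would be encode_image(image)

# Classify by choosing the class name whose text embedding is closest to the image embedding.
similarities = image_features @ text_features.t()
predicted = class_names[similarities.argmax(dim=-1).item()]
print(predicted)
```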

Figure: Robustness gap between CLIP and the original ResNet-50 on zero-shot ImageNet, without using any of the original 1.28M labelled examples.

CLIP is able to learn from unfiltered, highly varied and highly noisy data, and CLIP models are significantly more flexible and general than existing ImageNet models, the researchers say. Their results show that task-agnostic pretraining on Internet-scale natural language, which has powered recent breakthroughs in NLP, can also be leveraged to improve the performance of deep learning in fields such as computer vision.


Reporter: Yuan Yuan | Editor: Michael Sarazen



Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1428 artificial intelligence solutions across 12 pandemic scenarios.
