NVIDIA's neural network has learned to generate photorealistic images from text descriptions
Miscellaneous / November 26, 2021
It is already available to try in beta.
GauGAN ("Gauguin"), the neural network NVIDIA launched in 2019 that generates images from sketches in a graphics editor, has been updated to a second version. It can now produce photorealistic landscapes from text descriptions.
GauGAN 2 does not always work correctly, but with a simple, clear description of the scene you can still get an acceptable image. The beta version of the neural network is already available to try on a dedicated website.
To generate an image, accept the terms of service at the bottom of the page, leave only "text" enabled in the Input utilization row at the top, enter a description in the field on the right (for example, "wood and river"), and click the right arrow below. Clicking the left arrow copies the image to the left panel for editing. See the demo video for more details.
Sberbank previously released a similar neural network. Its model also generates images from descriptions and is not limited to landscapes.