A team of researchers with IBM Research, MIT CSAIL and MIT-IBM Watson AI Lab has launched a new online tool called GANPaint Studio that utilizes a GAN neural network and semantic brushes to ‘draw’ entirely new elements into existing images. In the case of this particular tool, the elements include grass, clouds, brick, doors, trees, sky and domes.
[Image: Unedited 'before' image.]
As demonstrated in the images above and below, GANPaint Studio is more of a fun demonstration than a serious tool for modifying images. Input images are downscaled to a very low resolution when uploaded, and the resulting images are clearly edited, though the neural network is capable of some surprisingly realistic edits.
[Image: After adding grass, trees and clouds.]
In addition to drawing elements into images, the tool features an eraser that lets users remove elements from the input image. This isn't the first time we've seen a neural network produce realistic image elements from a basic 'drawing' tool.
In March 2019, for example, NVIDIA Research demonstrated a similar tool called GauGAN, which generates a photorealistic image from a series of crudely painted marks, each mark representing an element type such as water, trees or sky. NVIDIA has published a sizeable body of research on AI and its potential for generating photorealistic images.
As for GANPaint Studio, anyone can access the photo editor here; it comes populated with a selection of preloaded images, but users can also upload their own. While testing the tool, we found that images need to be fairly low resolution, around 800 x 500, for the editor to successfully accept the upload.
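For readers preparing their own images, a minimal sketch of how one might pre-compute suitable dimensions before uploading, assuming an 800 x 500 limit (our observation from testing, not a documented requirement of the tool):

```python
def fit_within(width, height, max_w=800, max_h=500):
    """Return (new_width, new_height) scaled to fit within max_w x max_h,
    preserving aspect ratio. The 800 x 500 defaults are an assumption
    based on our testing of GANPaint Studio, not a published spec."""
    scale = min(max_w / width, max_h / height, 1.0)  # never upscale
    return round(width * scale), round(height * scale)

# Example: a 4000 x 3000 photo would be downscaled to roughly 667 x 500
# before upload, while a 640 x 480 image would be left untouched.
print(fit_within(4000, 3000))
print(fit_within(640, 480))
```

The actual resizing could then be done in any image editor (or a library such as Pillow) using the computed dimensions.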
The MIT and IBM researchers have made their research on the project publicly available [Note: This is a 48MB PDF].