AI artist imagines what's outside the frame of famous paintings including Girl with a Pearl Earring

An AI artist can now provide a glimpse of what the background settings of famous paintings and photos may have looked like. 

OpenAI, a San Francisco-based company, has created a new tool called ‘Outpainting’ for its text-to-image AI system, DALL-E. 

Outpainting allows the system to imagine what’s outside the frame of famous paintings such as Girl with a Pearl Earring, Mona Lisa and Dogs Playing Poker.

As users have shown, it can do this with any kind of image, such as the man on the Quaker Oats logo and the cover of the Beatles album ‘Abbey Road’.  

DALL-E relies on artificial neural networks (ANNs), which simulate the way the brain works in order to learn and create an image from text. 

MODIFIED: OpenAI, a San Francisco-based company, has created a new tool called ‘Outpainting’ for its text-to-image AI system, DALL-E. Outpainting allows the system to imagine what’s outside the frame of famous paintings such as Girl with a Pearl Earring

ORIGINAL: Girl with a Pearl Earring is an oil painting on canvas (c. 1665) by Dutch artist Johannes Vermeer. Pictured is the original painting, without any AI manipulation


HOW DOES IT WORK? 

OpenAI has created a new tool called ‘Outpainting’ for its text-to-image AI system, called DALL-E 2.  

DALL-E 2 and its predecessor DALL-E rely on artificial neural networks (ANNs), which simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information, like speech, text data, or visual images. 

OpenAI developers gathered data on millions of photos to allow the DALL-E algorithm to ‘learn’ what different objects are supposed to look like and eventually put them together.

When a user inputs text for DALL-E to generate an image from, it notes a series of key features that could be present.

A second neural network, known as the diffusion model, then creates the image, generating the pixels needed to visualise the features described.
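The denoising idea behind a diffusion model can be illustrated with a toy sketch: the process starts from pure noise and, step by step, removes a little more of it, guided by a target that stands in here for the text-conditioned features. This is a deliberately simplified illustration, not DALL-E’s actual model; the function and all numbers are invented for the example.

```python
import random

def toy_diffusion(target, steps=50, seed=0):
    """Start from pure noise and iteratively 'denoise' toward the
    target values, loosely imitating a diffusion model's reverse
    process. `target` stands in for the text-conditioned features."""
    rng = random.Random(seed)
    pixels = [rng.uniform(-1, 1) for _ in target]  # pure noise
    for _ in range(steps):
        # each step removes a fraction of the remaining "noise"
        pixels = [p + 0.2 * (t - p) for p, t in zip(pixels, target)]
    return pixels

target = [0.5, -0.3, 0.9]
result = toy_diffusion(target)
print(all(abs(p - t) < 0.01 for p, t in zip(result, target)))  # converged
```

A real diffusion model learns to predict and subtract noise with a neural network over thousands of steps; here a fixed update rule plays that role just to show the direction of the process.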

With Outpainting, users have to describe the new extended visuals in text form to DALL-E before it can ‘paint’ them. 

Outpainting, which is primarily aimed at professionals who work with images, will let users ‘extend their creativity’ and ‘tell a bigger story’, according to OpenAI. 

The firm said in a blog post: ‘Today we’re introducing Outpainting, a new feature which helps users extend their creativity by continuing an image beyond its original borders – adding visual elements in the same style, or taking a story in new directions – simply by using a natural language description. 

‘With Outpainting, users can extend the original image, creating large-scale images in any aspect ratio. 

‘Outpainting takes into account the image’s existing visual elements – including shadows, reflections, and textures – to maintain the context of the original image.’

US artist August Kamp used Outpainting to reimagine the famous 1665 painting ‘Girl with a Pearl Earring’ by Johannes Vermeer.

Amazingly, the tool managed to create a background that mimicked the painting style of the original. 

The results show the famous girl in a domestic setting, surrounded by crockery, houseplants, fruit, boxes and more.

It contrasts with the simplicity of Vermeer’s classic, which depicts the girl against a dark, blank background.

Other attempts are somewhat sillier. One shows the subject of the Mona Lisa making the devil horns gesture with her hand, with a UFO and a killer robot in the background, while an extended version of A Friend in Need shows another table of gambling canines next to the original group.

Another shows the man from the Quaker Oats logo with a hefty bust and wearing a dress, surrounded by drinks bottles. 

An extended version of A Friend in Need shows another table of gambling canines next to the original group (pictured, the original)

Other attempts are somewhat sillier – one shows the subject of the Mona Lisa making the devil horns gesture with her hand (original is pictured)


And yet another shows a couple of people crossing the famous zebra crossing outside Abbey Road Studios alongside the Beatles, amid a scattering of autumn leaves, even though the original photo was snapped at the height of summer.  

According to the Verge, DALL-E is available to more than 1 million people through a beta program, which gives users a certain amount of free image generations. 

People can join a waiting list to ‘create with DALL-E’ on the OpenAI website, although the company said it is ‘sending invites gradually over time’.

DALL-E already enables changes within a generated or uploaded image – a capability known as Inpainting. 

It is able to automatically fill in details, such as shadows, when an object is added, or even tweak the background to match if an object is moved or removed.

DALL-E can also produce a completely new image from a text description, such as ‘an armchair in the shape of an avocado’ or ‘a cross-section view of a walnut’. 

Another classic example of DALL-E’s work is ‘teddy bears working on new AI research underwater with 1990s technology’. 

DALL-E image from the prompt 'Teddy bears working on new AI research underwater with 1990s technology'


OpenAI is also known for AI-generated audio. 

In 2020, it revealed Jukebox, a neural network that generates eerie approximations of pop songs in the style of multiple artists, including Elvis Presley, Frank Sinatra and David Bowie. 

The neural network generates music, including rudimentary singing complete with lyrics in English and a variety of instruments like guitar and piano. 

OpenAI scraped 1.2 million songs, 600,000 of which are sung in English, from the internet and paired them with their lyrics and metadata, which were fed into the AI to generate approximations of the different artists. 

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn. ANNs can be trained to recognise patterns in information - including speech, text data, or visual images


Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge. 

A newer breed of ANNs, known as adversarial neural networks, pits two AI bots against each other, allowing them to learn from each other. 

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
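That adversarial back-and-forth can be sketched in a toy form: a one-parameter ‘generator’ tries to produce samples that a ‘discriminator’ cannot tell apart from real data, and each nudges the other along. Both models and all numbers here are invented for the example; a real adversarial network trains two full neural networks, not single parameters.

```python
import random

def train_adversarial(real_mean=4.0, rounds=200, seed=1):
    """Toy version of two models learning from each other: the
    discriminator tracks the real data, and the generator shifts
    its output toward whatever the discriminator currently accepts."""
    rng = random.Random(seed)
    gen = 0.0   # generator's single parameter (its output level)
    disc = 0.0  # discriminator's running estimate of real data
    for _ in range(rounds):
        real = real_mean + rng.gauss(0, 0.1)  # noisy real sample
        # discriminator moves toward the real data...
        disc += 0.1 * (real - disc)
        # ...and the generator moves toward fooling the discriminator
        gen += 0.1 * (disc - gen)
    return gen

print(train_adversarial())  # ends up near the real data's mean
```

The point of the competition is that neither side needs hand-labelled answers: each learns from the other’s behaviour, which is what speeds up training compared with feeding in labelled data by hand.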

source: dailymail.co.uk