A few weeks ago, the entire Internet was filled with strange and surprisingly artistic images generated by an Artificial Intelligence that creates pictures from text. That AI is none other than DALL-E from the company OpenAI (often confused with DALL-E Mini, an unofficial free alternative), which has just issued a statement announcing that it is now possible to edit images containing human faces through its popular AI.
This feature was previously banned for fear of misuse. However, OpenAI has improved its filters to remove images with “sexual, political and violent content”. The most interesting part of this news is that you will be able to ask DALL-E to change a person’s face so that it resembles someone else’s, or to alter their expression. The AI will also allow you to change someone’s clothes or hairstyle.
DALL-E can now change faces, hairstyles and outfits, does it increase the risk of deepfakes?
— Allan Harding (@allanharding) September 19, 2022
With news like this, it is inevitable to think that someone will use DALL-E to create deepfakes and compromise other people. In case you don’t know, a deepfake is a technique that replaces a person’s face in an image or video with results so realistic that they can fool anyone. To give an example of misuse, a deepfake could be used to make your partner believe you are being unfaithful by placing your face onto an image of another couple. That’s how dangerous deepfakes are!
OpenAI is aware of the risks posed by this new feature of DALL-E, so they have clarified the following:
“With enhancements to our security system, DALL-E is now ready to support these lovely and important use cases – while minimizing the potential for damage from deepfakes.”
In other words, DALL-E will detect attempted deepfakes and make them less realistic so that no one falls for them. In theory this sounds great, but we’ll see how it works in practice.
Do not forget that DALL-E’s terms and conditions of use prohibit uploading images of people without their consent, although there is no real filter or measure that prevents you from doing so. Therefore, let’s hope that users act ethically and do not use the AI for malicious purposes. Anyway, do you want to see what DALL-E is capable of? Here we explain how to test DALL-E 2.