Stability AI can't please everyone with Stable Diffusion 2.0

Hello and welcome to this special November monthly edition of Eye on AI. David Meyer here in Berlin, filling in for Jeremy.

Stable Diffusion is moving fast. Just three months after Stability AI introduced the image generator to the world, it has released version 2.0. But while this evolution of the system produces significantly better-quality images, some of the startup’s choices have proved controversial.

First, the unalloyed good. Stable Diffusion 2.0 offers new text-to-image models trained using a text encoder developed by LAION, which delivers a real improvement in image quality. The resulting images can also be bigger: 768×768 is now available as a default resolution, and a new upscaler model can take images to 2048×2048 or higher. Also of note: a new depth-guided model called depth2img, which infers the depth of an input image and can then generate new, very different images that still preserve its structure.
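If you want to kick the tires yourself, here is a minimal sketch of how one might run the 2.0 text-to-image model through Hugging Face’s diffusers library. The model repository name (“stabilityai/stable-diffusion-2”), the prompt, and the GPU settings are my assumptions for illustration, not instructions from Stability AI.

```python
# A minimal sketch (not Stability AI's official instructions): generating a
# 768x768 image with Stable Diffusion 2.0 via Hugging Face's diffusers library.
# The repo ID "stabilityai/stable-diffusion-2" and the prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,  # half precision so the model fits on a consumer GPU
)
pipe = pipe.to("cuda")

# Generate a single image at the new 768x768 default resolution and save it.
image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    height=768,
    width=768,
).images[0]
image.save("lighthouse.png")
```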

The controversy stems from how Stability AI has evolved the system in response to criticism of earlier versions. It has become harder to use Stable Diffusion to generate celebrity images or NSFW content. And the ability to tell Stable Diffusion to generate images “in the style of” specific artists, such as the famously ripped-off Greg Rutkowski, is gone. While the NSFW change came from cleaning up Stable Diffusion’s training data, the other changes resulted from how the tool now encodes and retrieves data, rather than from any filtering of artists, Stability AI founder Emad Mostaque told The Verge.

When it comes to NSFW imagery, Mostaque told users on Discord that Stability AI had to choose between stopping people from generating images of children and stopping them from generating pornographic images, because allowing both would be a recipe for disaster. That has not, of course, stopped the accusations of censorship.

Mostaque has been less keen to discuss whether the artist and celebrity changes were motivated by a desire to avoid legal action, but that’s a reasonable assumption to make. Copyright concerns have certainly been roiling artistic communities lately. When the venerable DeviantArt community announced its own Stable Diffusion-based text-to-image generator, DreamUp, earlier this month, it initially set the defaults so that users’ art was automatically included in third-party image datasets. Cue outrage and a same-day U-turn (although users still need to fill out a form to stop their “deviations” from being used to train DreamUp further).

It is clearly impossible to please everyone with these tools, but that comes with the territory when they are developing at such speed while remaining accessible to the general public. It’s a bit like sprinting along a tightrope, and who knows what pitfalls will appear in the months to come.

More AI-related news below.

David Meyer
david.meyer@fortune.com
@superglaze

AI IN THE NEWS

Swedish researchers used AI to design synthetic DNA. The Chalmers University of Technology team made DNA that “contains the exact instructions to control the amount of a specific protein,” in the words of lead researcher Aleksej Zelezniak. The result could be faster and cheaper development of drugs and vaccines, using techniques the team says are comparable to AI generating human faces: “The researchers’ AI has been taught the structure and regulatory code of DNA. The AI then designs synthetic DNA, where it is easy to modify its regulatory information in the desired direction of gene expression.”

Welcome to the “Matterverse.” A team from the University of California, San Diego has created a massive database of more than 31 million never-before-synthesized materials, using a graph neural network architecture called M3GNet (Nature article) that can predict their structure and properties. More than a million of these materials are potentially stable. The beauty of this deep learning-based tool is that it works accurately across all the elements of the periodic table; previous tools in this vein have tended to be either inaccurate or very limited in scope.

Notable headlines

San Francisco police will now be allowed to deploy robots that can kill you, by Janie Har and The Associated Press

South Dakota just banned TikTok from state-owned devices over fears of a national security threat, by Alex Barinka and Bloomberg


