
2024-10-22 20:40:22 | Engadget

Stable Diffusion, an open-source alternative to AI image generators like Midjourney and DALL-E, has been updated to version 3.5. The new model tries to right some of the wrongs (which may be an understatement) of the widely panned Stable Diffusion 3 Medium. Stability AI says the 3.5 model adheres to prompts better than other image generators and competes with much larger models in output quality. In addition, it's tuned for a greater diversity of styles, skin tones and features without needing to be prompted to do so explicitly.

The new model comes in three flavors. Stable Diffusion 3.5 Large is the most powerful of the trio, with the highest quality of the bunch, while leading the industry in prompt adherence. Stability AI says the model is suitable for professional uses at 1 MP resolution. Meanwhile, Stable Diffusion 3.5 Large Turbo is a distilled version of the larger model, focusing more on efficiency than maximum quality. Stability AI says the Turbo variant still produces "high-quality images with exceptional prompt adherence" in four steps. Finally, Stable Diffusion 3.5 Medium (2.5 billion parameters) is designed to run on consumer hardware, balancing quality with simplicity. With its greater ease of customization, the model can generate images between 0.25 and 2 megapixels in resolution. However, unlike the first two models, which are available now, Stable Diffusion 3.5 Medium doesn't arrive until October 29.

The new trio follows the botched Stable Diffusion 3 Medium release in June. The company admitted that the release "didn't fully meet our standards or our communities' expectations," as it produced some laughably grotesque body horror in response to prompts that asked for no such thing. Stability AI's repeated mentions of "exceptional prompt adherence" in today's announcement are likely no coincidence.

Although Stability AI only briefly mentioned it in its announcement blog post, the 3.5 series has new filters to better reflect human diversity. The company describes the new models' human outputs as "representative of the world, not just one type of person, with different skin tones and features, without the need for extensive prompting." Let's hope it's sophisticated enough to account for subtleties and historical sensitivities, unlike Google's debacle from earlier this year. Unprompted to do so, Gemini produced collections of egregiously inaccurate historical photos, like ethnically diverse Nazis and US Founding Fathers. The backlash was so intense that Google didn't re-enable image generation of people until six months later.

This article originally appeared on Engadget at https://www.engadget.com/ai/stable-diffusion-35-follows-your-prompts-more-closely-and-generates-more-diverse-people-184022965.html?src=rss
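For readers who want to try the new models locally, below is a minimal sketch of how the 3.5 checkpoints can be run through the Hugging Face diffusers library. The checkpoint IDs and sampler settings shown (stabilityai/stable-diffusion-3.5-large, 28 steps with guidance for the base model, 4 steps without guidance for Turbo) are assumptions based on common diffusers usage, not details confirmed in Stability AI's announcement; check the model cards before relying on them.

    # Minimal sketch: running Stable Diffusion 3.5 via Hugging Face diffusers.
    # Checkpoint IDs and sampler settings below are assumptions, not taken from
    # Stability AI's announcement.
    import torch
    from diffusers import StableDiffusion3Pipeline

    # Load the full-size model (assumed repo ID: stabilityai/stable-diffusion-3.5-large).
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large",
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    # Standard sampling: more steps, classifier-free guidance enabled.
    image = pipe(
        "a photo of a red bicycle leaning against a brick wall",
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    image.save("sd35_large.png")

    # The distilled Turbo variant is meant to generate in roughly four steps;
    # guidance is typically disabled for distilled models.
    turbo = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large-turbo",
        torch_dtype=torch.bfloat16,
    ).to("cuda")
    turbo_image = turbo(
        "a photo of a red bicycle leaning against a brick wall",
        num_inference_steps=4,
        guidance_scale=0.0,
    ).images[0]
    turbo_image.save("sd35_turbo.png")

The same pipeline class should accept the smaller 3.5 Medium checkpoint once it is released, which is the variant the announcement positions for consumer hardware.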


Category: Marketing and Advertising

 
