2024-10-22 20:40:22 | Engadget

Stable Diffusion, an open-source alternative to AI image generators like Midjourney and DALL-E, has been updated to version 3.5. The new model tries to right some of the wrongs (which may be an understatement) of the widely panned Stable Diffusion 3 Medium. Stability AI says the 3.5 model adheres to prompts better than other image generators and competes with much larger models in output quality. In addition, it's tuned for a greater diversity of styles, skin tones and features without needing to be prompted to do so explicitly.

The new model comes in three flavors. Stable Diffusion 3.5 Large is the most powerful of the trio, with the highest quality of the bunch, while leading the industry in prompt adherence. Stability AI says the model is suitable for professional uses at 1 MP resolution.

Meanwhile, Stable Diffusion 3.5 Large Turbo is a distilled version of the larger model, focusing more on efficiency than maximum quality. Stability AI says the Turbo variant still produces high-quality images with exceptional prompt adherence in four steps.

Finally, Stable Diffusion 3.5 Medium (2.5 billion parameters) is designed to run on consumer hardware, balancing quality with simplicity. With its greater ease of customization, the model can generate images between 0.25 and 2 megapixel resolution. However, unlike the first two models, which are available now, Stable Diffusion 3.5 Medium doesn't arrive until October 29.

The new trio follows the botched release of Stable Diffusion 3 Medium in June. The company admitted that the release "didn't fully meet our standards or our communities' expectations," as it produced some laughably grotesque body horror in response to prompts that asked for no such thing. Stability AI's repeated mentions of exceptional prompt adherence in today's announcement are likely no coincidence.

Although Stability AI only briefly mentioned it in its announcement blog post, the 3.5 series has new filters to better reflect human diversity. The company describes the new models' human outputs as "representative of the world, not just one type of person, with different skin tones and features, without the need for extensive prompting." Let's hope it's sophisticated enough to account for subtleties and historical sensitivities, unlike Google's debacle from earlier this year. Unprompted to do so, Gemini produced collections of egregiously inaccurate historical photos, like ethnically diverse Nazis and US Founding Fathers. The backlash was so intense that Google didn't reincorporate human generations until six months later.

This article originally appeared on Engadget at https://www.engadget.com/ai/stable-diffusion-35-follows-your-prompts-more-closely-and-generates-more-diverse-people-184022965.html?src=rss
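For developers who want to try the new checkpoints, models like these are typically run through Hugging Face's diffusers library. The snippet below is only a minimal sketch: it assumes the repository ID stabilityai/stable-diffusion-3.5-large and that the existing StableDiffusion3Pipeline class loads the 3.5 weights; check Stability AI's model cards for the exact IDs, licensing and recommended settings.

```python
# Minimal sketch: generating an image with Stable Diffusion 3.5 Large via diffusers.
# Assumptions: the Hugging Face repo ID "stabilityai/stable-diffusion-3.5-large" and
# that diffusers' StableDiffusion3Pipeline supports the 3.5 checkpoints; verify both
# against Stability AI's model card before relying on this.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.bfloat16,  # half precision keeps GPU memory usage manageable
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="A photograph of a street market at dusk, diverse crowd, film grain",
    num_inference_steps=28,  # the Turbo variant is advertised as needing only ~4 steps
    guidance_scale=4.5,
    height=1024,             # ~1 MP, the resolution Stability AI cites for professional use
    width=1024,
).images[0]

image.save("sd35_sample.png")
```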


Category: Marketing and Advertising

 

LATEST NEWS

2024-10-22 20:15:00 | Engadget

Anthropic's latest development gives its Claude AI assistant the ability to control a PC, reportedly just like a person would. The feature, dubbed "computer use," entered public beta today. With computer use, Claude can be directed to execute tasks such as "looking at a screen, moving a cursor, clicking buttons, and typing text," according to the company's announcement.

"We've built an API that allows Claude to perceive and interact with computer interfaces. This API enables Claude to translate prompts into computer commands. Developers can use it to automate repetitive tasks, conduct testing and QA, and perform open-ended research." (Anthropic (@AnthropicAI), October 22, 2024)

In theory, this could make the AI even more useful in automating repetitive computer tasks. However, a second blog post focused on computer use acknowledged that this application of Anthropic's AI models is still early in development and, to paraphrase, buggy as heck. The company said that in internal testing, Claude stopped in the middle of an assigned coding task and began opening images of Yellowstone National Park. While that is uncannily human behavior (who doesn't want to take a break to stare at natural beauty during the work day?), it's also a reminder that even the best AI models can have errors.

In addition to unveiling computer use, Anthropic also released an upgraded version of its Claude 3.5 Sonnet model alongside a brand-new model called Claude 3.5 Haiku that will be released later in October. In August, Anthropic joined OpenAI in agreeing to share its work with the US AI Safety Institute.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-is-letting-claude-ai-control-your-pc-181500127.html?src=rss
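As a rough illustration of what the developer-facing side looks like, here is a minimal sketch using Anthropic's Python SDK. The beta flag, tool type string and model name below are taken from the launch-day documentation and may have changed since, so treat them as assumptions; note also that Claude only requests actions, and your own code has to perform them (take screenshots, move the mouse) and feed the results back.

```python
# Minimal sketch of Anthropic's "computer use" beta via the anthropic Python SDK.
# The beta flag "computer-use-2024-10-22", tool type "computer_20241022" and the
# model name below match the October 2024 launch docs but may change; verify them
# against Anthropic's current documentation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }
    ],
    messages=[{"role": "user", "content": "Open the browser and check today's weather."}],
)

# Claude replies with tool_use blocks (e.g. screenshot, mouse_move, left_click);
# your code must execute those actions on a real or virtual display and return the
# results in a follow-up message for the loop to continue.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```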


Category: Marketing and Advertising

 

2024-10-22 20:00:14 | Engadget

If you're a word and game lover like me, then prepare to join me in excitement and eventual frustration, because there's a new daily word puzzle of sorts. New York-based art collective MSCHF has introduced an AOL-style chatroom called Redact-A-Chat that censors a word each time someone uses it. Josh Wardle, creator of Wordle, worked at MSCHF for a few years.

So, how does it work? There's a main chatroom where you can write anything, but if a word gets repeated, it's covered with a blurry blue line and becomes unavailable for the rest of the day. I got to try it out early, and it seems duplicated words within a sentence also lead to the second mention being blurred out. All words become fair game again at midnight. Announcements about newly censored words and when the clock resets come from three one-eyed safety pins reminiscent of Clippy, Microsoft Word's old paper clip assistant.

In a statement, MSCHF said Redact-A-Chat "forces creative communication. You must constantly keep ahead of the censor in order to continue your conversation. On the other hand, you can be that a**hole who starts working their way through the dictionary to deprive everyone else of language."

If you're unsure about participating in the main room, you can start a chat just for your friends. You just click the create a chat room button, give it a name and it will appear. You can then invite other people to the group with a unique code.

This article originally appeared on Engadget at https://www.engadget.com/ai/redact-a-chat-is-an-old-style-chatroom-that-censors-words-after-one-use-180014370.html?src=rss
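MSCHF hasn't published how the censor works under the hood, but the behavior described above (a word becomes unavailable after its first use, repeats within a single message get blurred, and everything resets at midnight) maps onto a very small piece of state. The sketch below is purely hypothetical, with invented names, just to illustrate the rule rather than reproduce their implementation.

```python
# Hypothetical sketch of Redact-A-Chat's censoring rule as described in the article:
# each word may appear once per day; any later occurrence (even within the same
# message) is redacted, and the used-word set clears at midnight. All names here
# are invented for illustration; this is not MSCHF's actual code.
import re
from datetime import date

used_words: set[str] = set()   # words already "spent" today
current_day = date.today()

def redact(message: str) -> str:
    global current_day, used_words
    if date.today() != current_day:           # midnight reset
        current_day, used_words = date.today(), set()

    out = []
    for token in re.findall(r"\w+|\W+", message):
        if not token.strip() or not token[0].isalnum():
            out.append(token)                 # whitespace/punctuation passes through
        elif token.lower() in used_words:
            out.append("█" * len(token))      # stand-in for the blue blur
        else:
            used_words.add(token.lower())     # first use today is allowed
            out.append(token)
    return "".join(out)

print(redact("hello hello world"))   # -> "hello █████ world"
print(redact("hello again"))         # -> "█████ again"
```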


Category: Marketing and Advertising

 

Latest from this category

22.10 NASA's newest telescope can detect gravitational waves from colliding black holes
22.10 OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism
22.10 A federal ban on fake online reviews is now in effect
22.10 Google Messages adds enhanced scam detection tools
22.10 Stable Diffusion 3.5 follows your prompts more closely and generates more diverse people
22.10 Anthropic is letting Claude AI control your PC
22.10 Redact-A-Chat is an old-style chatroom that censors words after one use
22.10 More than 10,500 artists sign open letter protesting unlicensed AI training
Marketing and Advertising »

All news

22.10 Philip R. Lane: Inflation and monetary policy in the euro area
22.10 Mid-Day Market Internals
22.10 Tomorrow's Earnings/Economic Releases of Note; Market Movers
22.10 Bull Radar
22.10 Bear Radar
22.10 Stocks Reversing Slightly Higher into Final Hour on Stable Long-Term Rates, Earnings Outlook Optimism, Technical Buying, Financial/Alt Energy Sector Strength
22.10 NASA's newest telescope can detect gravitational waves from colliding black holes
22.10 OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism
More »