2024-10-22 21:00:09| Engadget

Google just announced a spate of safety features coming to Messages. There's enhanced scam detection centered around texts that could lead to fraud. The company says the update provides improved analysis of scammy texts. For now, this tool will prioritize scams involving package deliveries and job offers. When Google Messages suspects a scam, it'll move the message to the spam folder or issue a warning. The app uses on-device machine learning models to detect these scams, meaning that conversations will remain private. This enhancement is rolling out now to beta users who have spam protection enabled.

Google's also set to broadly roll out intelligent warnings, a feature that's been in the pilot stage for a while. This tool warns users when they get a link from an unknown sender and automatically blocks messages with links from suspicious senders.

The updated safety tools also include new sensitive content warnings that automatically blur images that may contain nudity. This is an opt-in feature and also keeps everything on the device. It'll show up in the next few months.

Finally, there's a forthcoming tool that'll let people turn off messages from unknown international senders, thus cutting the scam spigot off at the source. This will automatically hide messages from international senders who aren't already in the contacts list. This feature is entering a pilot program in Singapore later this year before expanding to more countries.

In addition to the above tools, Google says it's currently working on a contact verification feature for Android. This should help put the kibosh on scammers trying to impersonate one of your contacts. The company has stated that this feature will be available sometime next year.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/google-messages-adds-enhanced-scam-detection-tools-190009890.html?src=rss
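Google hasn't published how its on-device models work, but a purely illustrative Python sketch of the general idea (scoring an incoming text for scam likelihood locally, so the conversation never leaves the phone) might look like this; the training examples and scikit-learn pipeline below are invented for demonstration and are not Google's implementation:

# Purely illustrative: a toy scam-text scorer in the spirit of on-device
# detection. None of this reflects Google's actual models or training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-written training set (hypothetical examples).
texts = [
    "Your package could not be delivered, pay a $2 redelivery fee here",
    "Congratulations, you were selected for a remote job, send your bank details",
    "Hey, are we still on for dinner tonight?",
    "Your dentist appointment is confirmed for Tuesday at 3pm",
]
labels = [1, 1, 0, 0]  # 1 = likely scam, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

incoming = "We tried to deliver your parcel, confirm your address and pay customs"
scam_probability = model.predict_proba([incoming])[0][1]
print(f"scam probability: {scam_probability:.2f}")  # a real app would warn or file to spam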


Category: Marketing and Advertising

 


2024-10-22 20:40:22| Engadget

Stable Diffusion, an open-source alternative to AI image generators like Midjourney and DALL-E, has been updated to version 3.5. The new model tries to right some of the wrongs (which may be an understatement) of the widely panned Stable Diffusion 3 Medium. Stability AI says the 3.5 model adheres to prompts better than other image generators and competes with much larger models in output quality. In addition, it's tuned for a greater diversity of styles, skin tones and features without needing to be prompted to do so explicitly.

The new model comes in three flavors. Stable Diffusion 3.5 Large is the most powerful of the trio, with the highest quality of the bunch, while leading the industry in prompt adherence. Stability AI says the model is suitable for professional uses at 1 MP resolution. Meanwhile, Stable Diffusion 3.5 Large Turbo is a distilled version of the larger model, focusing more on efficiency than maximum quality. Stability AI says the Turbo variant still produces high-quality images with exceptional prompt adherence in four steps. Finally, Stable Diffusion 3.5 Medium (2.5 billion parameters) is designed to run on consumer hardware, balancing quality with simplicity. With its greater ease of customization, the model can generate images between 0.25 and 2 megapixels in resolution. However, unlike the first two models, which are available now, Stable Diffusion 3.5 Medium doesn't arrive until October 29.

The new trio follows the botched Stable Diffusion 3 Medium in June. The company admitted that the release "didn't fully meet our standards or our communities' expectations," as it produced some laughably grotesque body horror in response to prompts that asked for no such thing. Stability AI's repeated mentions of exceptional prompt adherence in today's announcement are likely no coincidence.

Although Stability AI only briefly mentioned it in its announcement blog post, the 3.5 series has new filters to better reflect human diversity. The company describes the new models' human outputs as "representative of the world, not just one type of person, with different skin tones and features, without the need for extensive prompting." Let's hope it's sophisticated enough to account for subtleties and historical sensitivities, unlike Google's debacle from earlier this year. Unprompted to do so, Gemini produced collections of egregiously inaccurate historical photos, like ethnically diverse Nazis and US Founding Fathers. The backlash was so intense that Google didn't reincorporate human generations until six months later.

This article originally appeared on Engadget at https://www.engadget.com/ai/stable-diffusion-35-follows-your-prompts-more-closely-and-generates-more-diverse-people-184022965.html?src=rss
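For anyone who wants to try the new models locally, the weights are distributed through Hugging Face. Assuming the diffusers library's StableDiffusion3Pipeline handles the 3.5 checkpoints the way Stability AI's model cards indicate, a minimal generation script looks roughly like this (the model ID, dtype and sampler settings follow the published examples and may change):

# Sketch only: assumes access to the gated stabilityai/stable-diffusion-3.5-large
# repo on Hugging Face and a GPU with enough memory for the Large model.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.bfloat16,
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="a capybara wearing a suit, holding a sign that reads Hello World",
    num_inference_steps=28,   # the Large Turbo variant targets roughly 4 steps
    guidance_scale=3.5,
).images[0]
image.save("sd35_output.png")

The Medium model, once it ships on October 29, is intended to run the same way on consumer hardware.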


Category: Marketing and Advertising

 

2024-10-22 20:15:00| Engadget

Anthropic's latest development gives its Claude AI assistant the ability to control a PC, reportedly just like a person would. The feature, dubbed 'computer use,' entered public beta today. With computer use, Claude can be directed to execute tasks such as "looking at a screen, moving a cursor, clicking buttons, and typing text," according to the company's announcement.

"We've built an API that allows Claude to perceive and interact with computer interfaces. This API enables Claude to translate prompts into computer commands. Developers can use it to automate repetitive tasks, conduct testing and QA, and perform open-ended research. pic.twitter.com/eK0UCGEozm" Anthropic (@AnthropicAI), October 22, 2024

In theory, this could make the AI even more useful in automating repetitive computer tasks. However, a second blog post focused on computer use acknowledged that this application of Anthropic's AI models is still early in development and, to paraphrase, buggy as heck. The company said that in internal testing, Claude stopped in the middle of an assigned coding task and began opening images of Yellowstone National Park. While that is uncannily human behavior (who doesn't want to take a break to stare at natural beauty during the work day?), it's also a reminder that even the best AI models can have errors.

In addition to unveiling computer use, Anthropic also released an upgraded version of its Claude 3.5 Sonnet model alongside a brand new model called Claude 3.5 Haiku that will be released later in October. In August, Anthropic joined OpenAI in agreeing to share its work with the US AI Safety Institute.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-is-letting-claude-ai-control-your-pc-181500127.html?src=rss
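Per the announcement, the capability is exposed as a tool in the Messages API behind a beta flag. A minimal sketch with the anthropic Python SDK might look like the following; the model name, tool type and beta string follow the launch documentation and could change while the feature is in beta:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The "computer" tool describes the virtual display Claude can act on. The model
# responds with tool_use blocks (screenshot, click, type and so on) that the
# calling code must execute itself and report back in a loop.
response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }
    ],
    messages=[{"role": "user", "content": "Open a browser and check tomorrow's weather."}],
    betas=["computer-use-2024-10-22"],
)
print(response.content)

Note that the agent loop (taking screenshots, performing the clicks and keystrokes Claude requests, and feeding the results back) is left to the developer; the API only plans the actions.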


Category: Marketing and Advertising

 
