Amid stiff competition, Baidu says it is making its AI chatbot free to use. Starting on April 1, ERNIE Bot will be available to users at no cost, and Baidu says it will also issue refunds to users in some cases. The company cited reduced costs and technology improvements as reasons for making ERNIE Bot free across desktop and mobile. Baidu also plans to roll out an advanced search function on the same day, per Reuters. That feature will likewise be free and is said to include upgraded reasoning capability. Baidu began offering premium features in its search engine in late 2023, powered by advanced AI models such as ERNIE 4.0, and charged around $8 per month (59.9 yuan) for them. While Baidu was one of the first major Chinese companies to deploy its own AI chatbot amid the rise of ChatGPT, ERNIE is said to have struggled to find widespread adoption. By contrast, Reuters reports that domestic rivals such as ByteDance's Doubao chatbot and upstart DeepSeek (which offers its AI assistant for free) have seen stronger user adoption, according to a third-party data tracker.
A new investigation from The Markup claims the parent company of Tinder, Hinge, OkCupid and other dating apps turns a blind eye to allegedly abusive users on its platforms. The 18-month investigation found instances in which users who were repeatedly reported for drugging or assaulting their dates remained on the apps. One such case involves a Colorado-based cardiologist named Stephen Matthews. Over several years, multiple women on Match's platforms reported him for drugging or raping them. Despite these reports, his Tinder profile was at one point given Standout status, which is reserved for popular profiles and often requires in-app currency to interact with. Matthews wasn't removed from the platform until two months after one survivor went to the police. Match Group subsequently dragged its feet when Hinge received a search warrant, complying only after seven months. Matthews was eventually sentenced to 158 years to life in prison. How was something like this allowed to happen? According to internal company documents cited in the investigation, Match Group has known since 2016 which users were reported for assaulting, drugging or raping their dates. In 2019, Match Group's central database, Sentinel, began recording each user reported for assault or rape on any of its apps. Company insiders said that, three years later, the system was registering hundreds of incidents weekly. But the system was reportedly ineffective and easy to game. Not only could users evade bans by signing up with different contact information, but "internal company documents show information on IP addresses, photos, and birthdate were not used to ban a user if they appear on another Match dating app." A Tinder user banned over reports of rape could simply jump ship to Hinge without issue. There are reportedly many online tutorials for evading bans on Match-owned apps that require little to no technical expertise, and The Markup was able to validate three of them. But a poorly designed technical system wasn't the only thing to blame. In 2020, Match Group said it would release a transparency report documenting harm connected to its platforms; that report has still not been released. That same year, 11 members of Congress requested information about Match Group's process for handling sexual violence reports. Three years later, two representatives followed up after being prompted by this report's researchers; still, no data has been provided. In 2021, Match Group made public promises about improving safety, but company insiders told the researchers that it hasn't improved. That same year, the report claims, a presentation shown to employees on multiple occasions asked questions such as, "Do we publish only where we are required by law?" and "Do we push back on how much we are required to reveal, or do we try to go beyond what is required?" In 2022, Match Group entered a major partnership with background check company Garbo; the very next year that partnership dissolved, with Garbo writing publicly that "It's become clear that most online platforms aren't legitimately committed to trust and safety for their users." In 2024, Match Group cut its remaining central trust-and-safety team, outsourcing those positions overseas to workers whom the company's former head of safety described as operating under strenuous quotas and with little training. The report claims that at least one employee at the time was worried about the potential dangers of focusing too much on metrics.
They asked their bosses: "How much would you personally pay to stop just one person being sexually assaulted by a date, one child being trafficked or one vulnerable person being driven to suicide by a predator? I feel that if I asked members of our staff that question individually, they would put a high value of their own money on it. But as a group nobody is ready to hear that yet." "We recognize our role in fostering safer communities and promoting authentic and respectful connections worldwide," Kayla Whaling, senior director of communications at Match, said in a statement to The Markup. "We will always work to invest in and improve our systems, and search for ways to help our users stay safe, both online and when they connect in real life." The company did not dispute the investigation's findings.
Apple will use Alibaba's generative AI to power artificial intelligence features for iPhones sold in the Chinese market. Joe Tsai, Alibaba Group's chairman, confirmed the companies' partnership at the World Governments Summit in Dubai. He revealed that Apple talked to a number of other companies in China about a potential partnership but ultimately decided to team up with Alibaba. Apple Intelligence features are not currently accessible in China, and even people who purchased their iPhones outside the country cannot use those features once they change their region to mainland China. As CNBC explains, the country has strict regulations surrounding AI, including a requirement that large language models be approved for commercial use. Companies providing generative AI technologies in the region are also responsible for taking down illegal content. The Information reported on the partnership between Alibaba and Apple before Tsai confirmed it. The publication said the companies have already submitted the AI features Apple plans to roll out to the country's regulators for approval. A previous report said Apple tried to work with Baidu for its AI needs last year, but the Apple Intelligence models the Chinese company was developing failed to meet Apple's standards. Apple also reportedly talked to other Chinese companies, including Tencent and DeepSeek, but deemed the latter to be lacking the experience and manpower needed to handle a massive customer base. DeepSeek's AI assistant, if you'll recall, recently went viral and became the top free iPhone app in the US.