
2026-01-06 20:00:00 | Fast Company

Elon Musk took over X and folded in Grok, his sister company's generative AI tool, with the aim of making his social media ecosystem a more permissive, free-speech-maximalist space. What he's ended up with is the threat of multiple regulatory investigations, after people began using Grok to create explicit images of women without their permission, sometimes veering into images of underage children.

The problem, which surfaced in the past week as people began weaponizing Grok's image-generation abilities on innocuous posts by mostly female users of X, has raised the hackles of regulators across the world. Ofcom, the U.K.'s communications regulator, has made urgent contact with X over the images, while the European Union has called the ability to use Grok in such a way "appalling and disgusting."

In the three years since the release of ChatGPT, generative AI has faced numerous regulatory challenges, many of which are still being litigated, including alleged copyright infringement in the training of AI models. But the use of AI to target women in such a harmful way poses a major moral moment for the future of the technology.

"This is not about nudity. It's about power, and it's about demeaning those women, and it's about showing who's in charge and getting pleasure or titillation out of the fact that they did not consent," says Carolina Are, a U.K.-based researcher who has studied the harms of social media platforms, algorithms, and AI to users, including women.

For its part, X has said that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," echoing the wording of its owner, Elon Musk, who posted the same thing on January 3. The fact that it's at all possible to create such images suggests just how harmful it is to remove guardrails on generative AI and allow users to do essentially whatever they want. "This is yet another example of the wild disparities, inequalities, and double standards of the social media age, particularly during this period of time, but also of the impunity of the tech industry," Are says.

Precedented

While the scale and power of AI-created images feel unprecedented, some experts disagree that they represent the first real morality test for generative AI.

"AI (I'm using it here as an umbrella term) has long been a tool of discrimination, misogyny, homophobia and transphobia, and direct harm, including encouraging people to end their lives, causing depression and body dysmorphia, and more," says Ari Waldman, professor of law at the University of California, Irvine. "Creating deepfakes of women and girls is absolutely horrible, but it is not the first time AI has engaged in morally reprehensible conduct," he adds.

But the question of who bears legal responsibility for the production of these images is less clear than Musk's pronouncements make it seem. Eric Goldman, a professor at Santa Clara University School of Law, points out that the recently enacted Take It Down Act, which requires platforms to put in place, in the coming months, measures to take down illegal or infringing content within 48 hours, added new criminal provisions against intimate visual depictions, a category that would include AI-generated images. But whether that would cover bikini images of the type Grok is making by the load is uncertain. "This law has not yet been tested in court, but using Grok to create synthetic sexual content is the kind of thing the law was designed to discourage," Goldman says.
And given that we don't yet know whether the Take It Down Act has already put in place the regulatory solution the problem requires, making yet more laws may be premature. Experts like Rebecca Tushnet, a First Amendment scholar at Harvard Law School, say the necessary laws already exist. "The issue is enforcing them against the wrongdoers when the wrongdoers include the politically powerful or those contemptuous of the law," she says.

In recent years, many new anti-deepfake and explicit-image laws have been passed in the U.S., including a federal law to punish the distribution of sexually explicit digital forgeries, explains Mary Anne Franks, an intellectual property and technology expert at George Washington Law School. But the recent developments with Grok show the existing measures aren't good enough, she says. "We need to start treating technology developers like we treat other makers of dangerous products: hold them liable for harms caused by their products that they could and should have prevented."

Ultimate responsibility

This question of ultimate responsibility, then, remains unanswered. And it's the question that Musk may be trying to head off by expressing his distaste for what his users are doing.

"The tougher legal question is what, if any, liability Grok may have for facilitating the creation of intimate visual imagery," explains Goldman, pointing to the guardrails that firms voluntarily impose as part of their trust and safety protocols. It's unclear under U.S. law whether those guardrails reduce or eliminate any legal liability, he says, adding that it's also unclear whether a model's liability increases if its guardrails are obviously inadequate.

Waldman argues that lawmakers in Washington should pass a law that would hold companies legally responsible for designing and building AI tools capable of creating child pornography or pornographic deepfakes of women and girls. Right now, the legal responsibility of tech companies is contested, he adds. And while the Federal Trade Commission has statutory authority to take action, he worries that it won't: "The AI companies have aligned themselves with the president and the FTC doesn't appear to be fulfilling its consumer protection mandate in any real sense."


Category: E-Commerce

 

LATEST NEWS

2026-01-06 19:59:58 | Fast Company

Morgan Stanley is seeking regulatory approval to launch exchange-traded funds tied to the price of cryptocurrency tokens, according to filings with the U.S. Securities and Exchange Commission on Tuesday, the first such move by a big U.S. bank. The bank is looking to launch ETFs tied to the price of the cryptocurrencies bitcoin and solana, according to the filings, aiming to deepen its presence in the cryptocurrency space.

Regulatory clarity under U.S. President Donald Trump has encouraged mainstream finance companies to embrace digital assets, which were once considered merely speculative instruments. In December, the Office of the Comptroller of the Currency also allowed banks to act as intermediaries on crypto transactions, narrowing the gap between the traditional sector and digital assets. Many investors prefer holding crypto via ETFs, which offer greater liquidity and security, as well as simpler regulatory compliance, compared with managing the underlying asset directly.

"It's interesting to see Morgan Stanley move into a commoditized market, and I suspect that means they want to move clients that invest in bitcoin into their ETFs, which could give them a fast start despite their late entrance," said Bryan Armour, ETF analyst at Morningstar. "A bank entering the crypto ETF market adds legitimacy to it, and others could follow."

In the two years since the SEC approved the first U.S.-listed spot bitcoin ETF, a wide array of financial institutions, mostly asset managers, have stepped up to issue such funds. U.S. banks, which have mostly acted only as custodians of client investments, are looking to evolve from cautious facilitators to active advisers. In October, Morgan Stanley expanded access to crypto investments to include all clients and types of accounts, according to media reports. Bank of America followed suit, allowing its wealth advisers to recommend allocations to crypto in client portfolios from January, without any asset threshold.

By Arasu Kannagi Basil and Ateev Bhandari, Reuters


Category: E-Commerce

 

2026-01-06 17:45:00 | Fast Company

In moments of political chaos, deepfakes and AI-generated content can thrive. Case in point: the online reaction to the U.S. government's shocking operation in Venezuela over the weekend, which included multiple airstrikes and a clandestine mission that ended with the capture of the country's president, Nicolás Maduro, and his wife. They were soon charged with narcoterrorism, along with other crimes, and they're currently being held at a federal prison in New York.

Right now, the facts of the extraordinary operation are still coming to light, and the future of Venezuela is incredibly unclear. President Donald Trump says the U.S. government plans to run the country. Secretary of State Marco Rubio has indicated that, no, America isn't going to do that, and that the now-sworn-in former vice president, Delcy Rodríguez, will lead instead. Others are still calling for opposition leaders María Corina Machado and Edmundo González to take charge.

It's in moments like this that deepfakes, disinformation campaigns, and even AI-generated memes can pick up traction. When the truth, or the future, isn't yet obvious, generative artificial intelligence allows people to render content that answers the as-yet-unanswered questions, filling in the blanks with what they might want to be true.

We've already seen AI videos about what's going on in Venezuela. Some are meme-y depictions of Maduro handcuffed on a military plane, but some could be confused for actual footage. While a large number of Venezuelans did come out to celebrate Maduro's capture, videos displaying AI-generated crowds have also popped up, including one that apparently tricked X CEO Elon Musk.

At least anecdotally, deepfake content related to Venezuela has spiked in recent days, says Ben Colman, the cofounder and CEO of Reality Defender, a firm that tracks deepfakes. Those narratives aren't tied to any one movement and run the gamut from nationalist to anti-government, pro-Venezuela, pro-U.S., pro-unity, anti-globalization, and everything in between, he says.

"The difference between this event and events from even a few months ago is that image models have gotten so good in recent days that the most astute fact-checkers, media verification experts, and experts in our field are unable to manually verify many of them by pointing to specific aspects of the image as an indicator for validity or lack thereof," Colman explains. "That battle (of manual, visual verification) is pretty much lost."

OpenAI told Fast Company that it's monitoring how Venezuela is playing out across its products and says it will take action where it sees violations of its usage policies. The State Department's Global Engagement Center, a federal outfit established to monitor disinformation campaigns abroad, would have previously tracked the situation, a former employee says. During the Russian war in Ukraine, for instance, the State Department saw deepfakes of leaders trying to convince soldiers to lay down their arms, as well as fake narratives about additional entrants into the war. During political chaos, it's common for online actors to try to discourage opposing factions, the person adds.

That center was later shut down, after Republicans accused the outfit of censoring Americans. The State Department did not respond to a request for comment by time of publication.

'Accelerants'

Political deepfakes and AI-generated content are now commonplace.
A few years ago, AI-generated TV anchors spreading pro-government talking points, seemingly intended to promote the idea that Venezuela's economy and security were generally good, went viral across the country. In 2024, a party affiliated with former South African president Jacob Zuma shared a deepfake video featuring an AI-generated Donald Trump endorsing its platform (and that was far from the only example in the country). As even the recent New York City mayoral election showed, AI is often deployed during tense campaign seasons.

The Knight First Amendment Institute, which analyzed the use of AI in elections back in 2024, found that many deployments of AI, especially during election time, aren't necessarily meant to deceive, and that misinformation isn't always created with AI. The problem isn't just that it's easy to make disinformation with AI; it's that people are open to ingesting disinformation. In other words, there's demand for this kind of content.

"Deepfakes in this context aren't just misinformation, they are accelerants," Emmanuelle Saliba, chief investigative officer at GetReal Security, another firm that tracks deepfakes, told Fast Company. "While some of the fabricated content we've seen circulating is created to feed meme culture, some of it has been created and disseminated to confuse and destabilize people during an already volatile climate. Trust is hanging by a thread."


Category: E-Commerce

 

Latest from this category

07.01 Stop chasing AI experts
07.01 The Trump administration just gave the food pyramid a Sweetgreen makeover
07.01 GameStop says CEO compensation package doesn't include any guaranteed pay
07.01 Octopus Prime: Inside a growing and controversial farming effort
07.01 The psychology of the Chicken Little coworker
07.01 Job openings drop to 2nd-lowest level in 5 years in November
07.01 It's the first anniversary of the L.A. wildfires. Why have fewer than a dozen homes been rebuilt since then?
07.01 Tin Can phones have been overwhelmed since Christmas

All news

08.01 IWF finds sexual imagery of children which 'appears to have been' made by Grok
08.01 Bluetti's Charger 2 uses solar and engine power to charge your portable battery
07.01 Stocks Reversing Slightly Lower into Final Hour on Diminishing Fed Rate-Cut Odds, Technical Selling, Profit-Taking, Utility/Gambling Sector Weakness
07.01 Engadget Podcast: CES 2026 and the rocky year ahead for the PC industry
07.01 Samsung Display at CES 2026: Playful demos and mysterious prototypes
07.01 Trump backs ban on institutional investor home purchases
07.01 Orland Park Plan Commission endorses Amazon retail center, despite residents' concerns