Governments around the world are considering bans on Grok's app over AI sexual image scandal

2026-01-13 17:58:03 | Fast Company

As concerns grow over Grok’s ability to generate sexually explicit content without the subject’s consent, a number of countries are blocking access to Elon Musk’s artificial intelligence chatbot. At the center of the controversy is a feature called Grok Imagine, which lets users create AI-generated images and videos. That tool also features a “spicy mode,” which lets users generate adult content.

Both Indonesia and Malaysia ordered that restrictions be put in place over the weekend. Malaysian officials blocked access to Grok on Sunday, citing repeated misuse to generate obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images. Officials also cited “repeated failures by X Corp.” to prevent such content. Indonesia had blocked the chatbot the previous day for similar reasons.

In a statement accompanying Grok’s suspension, Meutya Hafid, Indonesia’s Minister of Communication and Digital, said: “The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity, and the security of citizens in the digital space.”

The responses could be just the beginning of Grok’s problems, though. Several other countries, including the U.K., India, and France, are considering following suit. The U.K. has launched an investigation into the chatbot’s explicit content, which could result in it being blocked in that country as well. “Reports of Grok being used to create and share illegal, non-consensual, intimate images and child sexual abuse material on X have been deeply concerning,” Ofcom, the country’s communications regulator, said in a statement. Musk, in a social media post following word of the Ofcom investigation, wrote that the U.K. government “just want[s] to suppress free speech.”

Fast Company attempted to contact xAI for comment about the actions in Indonesia and Malaysia, as well as similar possible blocks in other countries. An automatic reply from the company read “Legacy Media Lies.”

Beyond the U.K., officials in the European Union, Brazil, and India have called for probes into Grok’s deepfakes, which could ultimately result in bans as well. (The U.S. government, which has contracts with xAI, has been fairly silent on the matter so far.) In a press conference last week, European Commission spokesperson Thomas Regnier said the commission was “very seriously looking into this matter,” adding: “This is not ‘spicy.’ This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe.”

Musk and X are still feeling the effects of a $130 million fine the EU slapped on the company last month for violating the Digital Services Act, specifically over deceptive paid verification and a lack of transparency in the company’s advertising repository.

Beyond sexualized images of adults, a report from the nonprofit group AI Forensics that analyzed 20,000 Grok-generated images created between Dec. 25 and Jan. 1 found that 2% depicted a person who appeared to be 18 or younger. These included 30 images of young or very young women or girls in bikinis or transparent clothes. The analysis also found Nazi and ISIS propaganda material generated by Grok.

While the company has not addressed the countries blocking access to its services, it did comment on the use of its tool to create sexual content featuring minors. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety wrote in a post. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

The company has also announced it will limit image generation and editing features to paying subscribers. That, however, likely won’t be enough to satisfy government officials who want to block access to Grok while these images can still be generated.


LATEST NEWS

Make AI a force for good in schools

2026-01-13 17:41:37 | Fast Company

Advancements in artificial intelligence are shaping nearly every facet of society, including education. Over the past few years, especially with the availability of large language models like ChatGPT, there’s been an explosion of AI-powered edtech. Some of these tools are truly helping students, while many are not. For educational leaders seeking to leverage the best of AI while mitigating its harms, it’s a lot to navigate.

That’s why the organization I lead, the Advanced Education Research and Development Fund, collaborated with the Alliance for Learning Innovation (ALI) and Education First to write Proof Before Hype: Using R&D for Coherent AI in K-12 Education. I sat down with my coauthors, Melissa Moritz, an ALI senior advisor, and Ila Deshmukh Towery, an Education First partner, to discuss how schools can adopt innovative, responsible, and effective AI tools.

Q: Melissa, what concerns you about the current wave of AI edtech tools, and what would you change to ensure these tools benefit students?

Melissa: Too often, AI-powered edtech is developed without grounding in research or educators’ input. This leads to tools that may seem innovative but solve the wrong problems, lack evidence of effectiveness, ignore workflow realities, or exacerbate inequities. What we need is a fundamental shift in education research and development so that educators are included in defining problems and developing classroom solutions from the start. Deep collaboration across educators, researchers, and product developers is critical. Let’s create infrastructure and incentives that make it easier for them to work together toward shared goals. AI tool development must also prioritize learning science and evidence. Practitioners, researchers, and developers must continuously learn and iterate to give students the most effective tools for their needs and contexts.

Q: Ila, what is the AI x Coherence Academy, and what did Education First learn about AI adoption from the K-12 leaders who participated in it?

Ila: The AI x Coherence Academy helps cross-functional school district teams do the work that makes AI useful: define the problem, align with instructional goals, and then choose (or adapt) tools that fit system priorities. It’s a multi-district initiative that helps school systems integrate AI in ways that strengthen, rather than disrupt, core instructional priorities, so that adoption isn’t a series of disconnected pilots.

We’re learning three things through this work. First, coherence beats novelty. Districts prefer customizable AI solutions that integrate with their existing tech infrastructure rather than one-off products. Second, use cases come before tools. A clear use case that articulates a problem and names and tracks outcomes quickly filters out the noise. Third, trust is a prerequisite. In a world increasingly skeptical of tech in schools, buy-in is more likely when educators, students, and community members help define the problem and shape how the technology helps solve it.

Leaders are telling us they want tools that reinforce the teaching and learning goals already underway, have clear use cases, and offer feedback loops for continuous improvement.

Q: Melissa and Ila, what types of guardrails need to be in place for the responsible and effective integration of AI in classrooms?

Ila: For AI to be a force for good in education, we need several guardrails. Let’s start with coherence and equity. For coherence, AI adoption must explicitly align with systemwide teaching and learning goals, data systems, and workflows. To minimize bias and accessibility issues, product developers should publish bias and accessibility checks, and school systems should track relevant data, such as whether tools support (versus disrupt) learning and development, and the tools’ efficacy and impact on academic achievement. These guardrails need to be co-designed with educators and families, not imposed by technologists or policymakers. The districts making real progress through our AI x Coherence Academy are not AI maximalists. They are disciplined about how new tools connect to educational goals, in partnership with the people they hope will use them. In a low-trust environment, co-designed guardrails and definitions are the ones that will actually hold.

Melissa: We also need guardrails around safety, privacy, and evidence. School systems should promote safety and protect student data by giving families information about the AI tools being used and giving them clear opt-out paths. As for product developers, building on Ila’s points, they need to be transparent about how their products leverage AI. Developers also have a responsibility to provide clear guidance around how their product should and shouldn’t be used, as well as to disclose evidence of the tool’s efficacy. And of course, state and district leaders and regulators should hold edtech providers accountable.

Q: Melissa and Ila, what gives you hope as we enter this rapidly changing AI age?

Melissa: Increasingly, we are starting to have the right conversations about AI and education. More leaders and funders are calling for evidence, and for a paradigm shift in how we think about teaching and learning in the AI age. Through my work at ALI, I’m hearing from federal policymakers, as well as state and district leaders, that there is a genuine desire for evidence-based AI tools that meet students’ and teachers’ needs. I’m hopeful that together, we’ll navigate this new landscape with a focus on AI innovations that are both responsible and effective.

Ila: What gives me hope is that district leaders are getting smarter about AI adoption. They’re recognizing that adding more tools isn’t the answer; coherence is. The districts making real progress aren’t the ones with the most AI pilots; they’re the ones who are disciplined about how new tools connect to their existing goals, systems, and relationships. They’re asking: Does this reinforce what we’re already trying to do well, or does it pull us in a new direction? And they’re bringing a range of voices into defining use cases and testing solutions to center, rather than erode, trust. That kind of strategic clarity is what we need right now. When AI adoption is coherent rather than chaotic, it can strengthen teaching and learning rather than fragment it.

Auditi Chakravarty is CEO of the Advanced Education Research and Development Fund.



4 cybersecurity trends for business resilience in 2026

2026-01-13 17:31:41 | Fast Company

When I looked ahead to 2026, one issue jumped out in every conversation I had with business leaders: Resilience is buckling under pressure. The pace of change is no longer just fast; it is accelerating beyond the reach of traditional playbooks. We are entering an era of complexity risk, where the greatest threats stem not only from malicious actors, but from the sheer entanglement of our own systems. Below are the four shifts business leaders must prepare for to navigate 2026.

1. Recovery will become the most important metric

For years, companies have focused their investments on prevention. But AI changed the economics of cyber risk. Offensive AI makes it fast and inexpensive for attackers to generate malware, exploit known vulnerabilities, and pivot across a digital environment. Even strong defenses will miss things. Rubrik’s latest research highlights that only 28% of organizations believe they can fully recover from a cyberattack within 12 hours, a steep decline from 43% in 2024. The gap in confidence underscores the growing friction between rapid tech adoption and operational resilience.

The organizations that thrive in 2026 will prioritize validating data integrity before restoring systems, and establishing isolated cyber vaults for safe testing and rebuilding. Recovery strategies should guarantee that the restored environment is free of malicious code, making robust recovery engines a necessity, not a convenience.

2. Identity security will become the top budget priority

Identity is one of the biggest, and least understood, business risks. Most companies today drastically underestimate their identity footprint. In the AI era, non-human identities (the service accounts and machine credentials that fuel our automation) now outnumber humans. Instead of breaking in, attackers are logging in by exploiting this labyrinth of unprotected non-human credentials. By 2026, these silent entry points won’t just provide a foothold; they will be the primary lever for achieving full-system compromise.

Rubrik found that nearly 9 in 10 organizations plan to hire identity security professionals in the next year. As a result, executives should prepare for a significant rebalancing of budgets, with identity security moving to the top of the priority list. The reality is that identity sprawl will only accelerate as AI increases automation and service connectivity. In 2026, the ability to govern and secure identities will matter more than the data infrastructure those identities protect.

3. AI agent sprawl will trigger a governance renaissance

Many organizations are deploying AI agents, sometimes hundreds of them, to handle everything from customer support to code generation to workflow automation. But behind the scenes, most teams lack clear oversight into what those agents are doing, what data they touch, and whether their output is correct. This great AI sprawl is setting up a governance crisis.

In 2026, companies will realize that deploying AI agents at scale requires the same level of rigor as onboarding employees or granting system access. A new class of business-critical questions will emerge: Which systems can autonomous agents interact with? How do we validate the accuracy of their actions? What remediation steps are required when agents make mistakes? Success in agent-driven environments requires new frameworks for monitoring, workforce, and security, which includes heavy investment in robust governance and remediation systems. Done correctly, it enables transformation; done poorly, it creates uncontrollable risk.
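To make the first of those questions concrete, here is a minimal sketch of one possible answer: an allowlist-style policy gate that checks an agent's proposed action against per-system permissions before it runs, and logs every decision so mistakes can be traced and remediated. The names here (PolicyGate, AgentAction, "support-bot") are hypothetical illustrations for this article, not Rubrik's or any vendor's actual API.

    # Hypothetical sketch of an agent policy gate; all names are illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class AgentAction:
        agent_id: str   # which agent is asking
        system: str     # which system it wants to touch, e.g. "crm"
        operation: str  # what it wants to do, e.g. "read", "write", "delete"

    @dataclass
    class PolicyGate:
        # Allowlist keyed by (agent, system): the only operations permitted there.
        allowlist: dict = field(default_factory=dict)
        audit_log: list = field(default_factory=list)  # remediation/audit trail

        def permit(self, action: AgentAction) -> bool:
            allowed = action.operation in self.allowlist.get(
                (action.agent_id, action.system), set()
            )
            # Every decision is recorded so bad actions can be traced and remediated.
            self.audit_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "agent": action.agent_id,
                "system": action.system,
                "operation": action.operation,
                "allowed": allowed,
            })
            return allowed

    # Usage: the support agent may read the CRM, and nothing else, anywhere.
    gate = PolicyGate(allowlist={("support-bot", "crm"): {"read"}})
    print(gate.permit(AgentAction("support-bot", "crm", "read")))        # True
    print(gate.permit(AgentAction("support-bot", "billing", "delete")))  # False

A real deployment would enforce this at the infrastructure layer rather than in application code, but the shape of the questions stays the same: who is acting, where, doing what, and what happens on refusal.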
4. Multi-cloud complexity will force a unified control plane

Most enterprises today use a mix of cloud platforms, each with its own backup, security, and identity tools. What began as flexibility has evolved into operational drag. In 2026, the myth that native cloud tools are good enough will collapse. Fragmented environments slow recovery efforts, make migrations painful, and increase the time it takes to diagnose issues across platforms. Companies running multiple cloud-native backup systems are already experiencing longer recovery times, prompting emergency migrations and avoidable downtime.

The business case for unifying control across clouds, once seen as an IT optimization, will become a survival requirement. Future-proof organizations are consolidating multi-cloud management into a single pane. Success here depends on one thing: seamlessly merging identity security with data defense to create a unified hub for all corporate data. Leaders will then shift focus toward achieving centralized visibility across clouds, enabling unified orchestration for recovery.

2026 DEMANDS SHARPER RESILIENCE

In 2026, business resilience will depend on how effectively organizations recover, how intelligently they govern identity and AI agents, and how well they manage the complexity of multi-cloud environments. Executives who embrace these shifts early will reduce risk, accelerate innovation, and create more durable, adaptable enterprises. Those who delay may find that complexity (especially in managing non-human identities), not attackers, is the biggest threat to their future.

Arvind Nithrakashyap is CTO of Rubrik.

