2025-06-18 08:00:00| Fast Company

Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people, including children in foster care, adults in nursing homes, and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs.

Developers are using natural language processing, for example (a form of AI that interprets written or spoken language), to try to detect patterns of threats, manipulation, and control in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most at risk for abuse.

When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have helped social workers prioritize high-risk cases and intervene earlier.

But as a social worker with 15 years of experience researching family violence, and five years on the front lines as a foster-care case manager, child abuse investigator, and early childhood coordinator, I've seen how well-intentioned systems often fail the very people they are meant to protect.

Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements, not faces or voices, to detect physical violence. I'm grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?

New tech, old injustice

Many AI tools are trained to "learn" by analyzing historical data. But history is full of inequality, bias, and flawed assumptions. So are the people who design, test, and fund AI. That means AI algorithms can wind up replicating systemic forms of discrimination, like racism or classism.
A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model used to score families' risk levels (scores given to hotline staff to help them screen calls) would have flagged Black children for investigation 20% more often than white children, if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.

Language-based AI can also reinforce bias. For instance, one study showed that natural language processing systems misclassified African American Vernacular English as aggressive at a significantly higher rate than Standard American English, up to 62% more often in certain contexts. Meanwhile, a 2023 study found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.

These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled in child welfare systems, sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation, and family separation compared with white families, even after accounting for income and other socioeconomic factors. Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.

Surveillance over support

Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost. In hospitals and eldercare facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors, and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy.
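The flag-rate disparities described above are exactly what a bias audit quantifies. As a minimal, self-contained sketch of one such metric (the gap in positive-prediction rates across groups, sometimes called demographic parity difference), using made-up screening decisions rather than any real caseload data:

```python
# Minimal sketch of one bias-audit metric: the gap in
# positive-prediction ("flag") rates between demographic groups.
# All data below is invented for illustration.

def selection_rate(preds):
    """Fraction of cases flagged positive (e.g. 'investigate')."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Largest gap in flag rate across sensitive groups."""
    by_group = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(p) for p in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical screening decisions (1 = flagged for investigation)
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Flag-rate gap between groups: {gap:.2f}")
```

A gap of zero means both groups are flagged at the same rate; the open-source audit toolkits mentioned later in the article compute this and many related metrics over real model outputs.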
In a 2022 pilot program in Australia, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months, overwhelming staff and missing at least one real incident. The program's accuracy "did not achieve a level that would be considered acceptable to staff and management," according to the independent report.

Children are affected, too. In U.S. schools, AI surveillance tools like Gaggle, GoGuardian, and Securly are marketed as ways to keep students safe. Such programs can be installed on students' devices to monitor online activity and flag anything concerning. But they've also been shown to flag harmless behaviors, like writing short stories with mild violence, or researching topics related to mental health. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring searches or conversations about gender and sexuality.

Other systems use classroom cameras and microphones to detect aggression. But they frequently misidentify normal behavior like laughing, coughing, or roughhousing, sometimes prompting intervention or discipline.

These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans, data that often reflects social inequalities and biases. As sociologist Virginia Eubanks wrote in Automating Inequality, AI systems risk scaling up these long-standing harms.

Care, not punishment

I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I've developed a framework of four key principles for what I call trauma-responsive AI.

Survivor control: People should have a say in how, when, and if they're monitored.
Providing users with greater control over their data can enhance trust in AI systems and increase their engagement with support services, such as creating personalized plans to stay safe or access help.

Human oversight: Studies show that combining social workers' expertise with AI support improves fairness and reduces child maltreatment, as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.

Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools like IBM's AI Fairness 360, Google's What-If Tool, and Fairlearn assist in detecting and reducing such biases in machine learning models.

Privacy by design: Technology should be built to protect people's dignity. Open-source tools like Amnesia, Google's differential privacy library, and Microsoft's SmartNoise help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people's identities in video or photo data.

Honoring these principles means building systems that respond with care, not punishment.

Some promising models are already emerging. The Coalition Against Stalkerware and its partners advocate including survivors in all stages of tech development, from needs assessments to user testing and ethical oversight.

Legislation is important, too. On May 5, 2025, for example, Montana's governor signed a law restricting state and local government from using AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.

As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion.
But with the right values at the center, it might help us deliver more of it. Aislinn Conrad is an associate professor of social work at the University of Iowa. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

LATEST NEWS

2025-06-18 04:11:00| Fast Company

Gen Zers are putting their money where their mouths are when it comes to shopping. Nearly all Gen Z consumers (96%) say they shop with intention, and 66% believe their purchases should reflect their personal values, according to the newly released Lightspeed Commerce report, which surveyed over 2,000 North American shoppers.

Spending habits have never been more visible, thanks to social media. Today's consumers have more ways than ever to signal their morals and values, and more platforms to share those choices. Posting shopping hauls and "empties" videos, or filming a "fit check" with coffee in hand, has become prime social media fodder. In an age where everything is content, more consumers are choosing brands that reflect who they are and what they stand for.

"A hallmark of Gen Z is coming of age in a hyper-connected world. In this world, every follow, like, repost, and even purchase is a direct reflection of a person's identity and values," Lightspeed CEO Dax Dasilva tells Fast Company. "Through this connected world, there is a never-ending exposure to global issues, where activism, accountability, and cancel culture move at the speed of light."

Today, the wrong purchase can carry social consequences, not just from peers, but from the broader judgment of the internet. This pressure is especially strong among Gen Z: Thirty-two percent fear being canceled for supporting the wrong brands, which is more than five times higher than for boomers (6%).

"In many ways, this fear of being judged or canceled and the understanding of the weight of their buying decisions differentiates Gen Z from older generations, who have traditionally shopped based on things like price or quality," Dasilva says.

This trend, what Lightspeed calls "value spending," is part of a broader consumer shift. Nearly all consumers (92%) identify as at least somewhat intentional in their purchases.
While price (78%) and quality (67%) remain top priorities across generations, purchasing decisions that align with personal values or identity are close behind, cited by 62% of respondents. In the past six months, 27% of consumers made purchases based on national pride; 18% supported brands tied to charitable or social causes; another 18% chose products for their sustainability impact; and 15% factored in a CEO's political alignment.

For 32% of these value spenders, this is a new behavior, but half believe their spending carries more influence than ever before. Value for money has taken on a new meaning.


Category: E-Commerce

 

2025-06-18 00:15:00| Fast Company

When we talk about sustainable housing, we rarely talk about how to prove it. The construction industry is one of the most polluting sectors on the planet. While both expensive and inefficient, it is responsible for up to 40% of global solid waste. Despite widespread talk of green building, real data is often hard to find.

When we at Clearyst° partnered with Azure Printed Homes, which uses 3D printing and recycled plastic to redesign the homebuilding process, I was most interested in verifying whether the approach was truly sustainable.

The age-old saying for sustainable planning is that you can only manage what you measure. We sought a method to measure impact, make it repeatable, and withstand any scrutiny. We collaborated with Azure to create its first sustainability report, using the EU Taxonomy for Sustainable Activities as our guide.

The EU Taxonomy is one of the most widely recognized sustainability frameworks globally. It includes six objectives, ranging from climate mitigation to biodiversity protection, and requires companies to demonstrate how their operations contribute to these objectives, do not harm others, and meet minimum social safeguards. It's built for accountability. Therefore, we felt it was a very credible framework for Azure to lay the path for a sustainability roadmap.

The framework and findings

We applied this framework across Azure's operations, and the findings offer a blueprint that other organizations can use to assess and improve their own sustainability efforts.

Climate change mitigation

Mitigating climate change involves reducing emissions at every stage of a building's life cycle, from materials to operations. Organizations evaluating their carbon footprint should examine both embedded emissions and operational energy use. Azure's process uses 60% recycled plastic in a zero-waste, factory-controlled environment.
We evaluated all of their 3D printed homes from floor to ceiling, finding they were fully insulated and reduced operational energy use. The homes can include optional solar panels, which perform well thanks to the tight building envelope. Compared to cement and lumber construction, Azure significantly lowered its embedded carbon footprint.

Climate adaptation

This objective considers how buildings withstand climate-related risks like storms, heatwaves, and wildfires. Evaluating physical resilience is increasingly important for long-term planning and insurance. The process involves evaluating a structure's ability to adapt to climate issues. In this case, Azure engineers its units to endure 150-mph winds, wildfires, and earthquakes. Roofs are printed directly with the walls, so they can't detach in hurricanes. The homes feature double-pane windows, fire-resistant coatings, and ventless cooling systems, meeting or exceeding California's Chapter 7A code, which was designed for fire-prone zones.

Water protection

Sustainable construction should aim to reduce water usage, especially in areas facing drought or water stress. Reviewing factory water use and in-home fixtures is a good place to start. Traditional construction relies on vast amounts of fresh water for concrete production, cleaning, and dust suppression. Azure's process uses none. All homes are built in a dry factory environment and fitted inside with low-flow, energy-efficient fixtures.

Circular economy

A circular approach keeps materials in use and out of landfills. Applying this lens means examining waste streams and end-of-life options for all building components. We found that Azure's entire structures can be recycled at the end of their life. Unlike conventional builds, there are no drywall scraps or framing offcuts. An average 2,000-square-foot home build can generate up to 8,000 pounds of waste. Azure sends zero.
Pollution prevention

Construction sites are often major sources of air, noise, and chemical pollution. Evaluating production environments and material choices can highlight opportunities to reduce exposure and environmental harm. With Azure, factory-controlled production eliminates the air pollution typically associated with construction sites. There's no diesel equipment, no dust clouds, no VOC off-gassing. The process relies on PETG plastic, selected partly because it avoids volatile compounds and does not shed microplastics. Air quality in finished homes is higher than in traditional buildings.

Biodiversity protection

Protecting biodiversity includes avoiding practices that degrade natural habitats or deplete ecosystems. This may include material sourcing, site selection, and waste management. Using recycled materials reduces demand for virgin resources. Azure minimizes lumber use, thereby limiting deforestation risks. Also, every ton of plastic diverted from the landfill is one less threat to ecosystems.

Social impact

While not yet part of the EU Taxonomy, social equity is an emerging area of focus in sustainability reporting. Housing, access, and affordability are all essential components of a just transition. The Azure team collaborates with nonprofit organizations, such as Dignity Moves, to provide housing for individuals experiencing homelessness. Their units can be produced for under $20,000 in as little as one week, compared to the $600,000 price tag and six-year build time typically associated with Los Angeles or San Francisco. Housing like this doesn't just reduce emissions; it reduces stress and restores dignity.

Next steps

The impact report wasn't just a means to report results; it provided Azure with a strategic roadmap. We're now prepared for full life cycle assessments and deeper emissions tracking across operations and products.
We're also exploring how this approach can support future compliance with global regulations and sustainable finance standards. There is no shortage of innovation in housing right now, from prefabrication to bio-based materials. But innovation doesn't mean much unless we can measure its impact. The construction industry doesn't need more promises; it requires proof. That's what we set out to provide. And for companies willing to do the work, the frameworks to do it already exist.

Gene Eidelman is cofounder of Azure Printed Homes. Jamie Simon is the director of sustainability at Clearyst°.


Category: E-Commerce

 
