During the pandemic, over two million women left the workforce, many of them forced out in the absence of reliable childcare. It took years for female workers to recover from those losses, but eventually the share of working women surpassed pre-pandemic numbers, though their labor force participation still remained lower than that of male workers. In 2025, however, the gains that women had made in recent years started to slip away. In the first half of the year, about 212,000 women exited the workforce, and there was a marked dip in employment among working mothers: An analysis by the Washington Post found that the share of working mothers between the ages of 25 and 44 dropped steadily from January to June, an overall decrease of nearly three percentage points.

The most recent jobs report showed that in December, a total of 81,000 workers left the labor force (meaning they are no longer employed or looking for a new job). All of those workers were women, according to a new analysis by the National Women's Law Center (NWLC), which drew on the jobs report data. The overall losses were even higher: 91,000 women left the labor force last month, but that figure was offset by 10,000 men entering the labor force. Across 2025, the number of women entering the labor force did increase, but at a much slower clip than in past years. The rate of increase among female workers also pales in comparison with that of male workers: The pool of women in the labor force expanded by just 184,000, while that of men grew by 572,000.

This decline in employment also seems to be having an outsize impact on women of color, according to the NWLC. Unemployment among Black workers has been on the rise in recent months, particularly for Black women; the unemployment rate for Black women inched up to 7.3% in December, from 7.1% in November.
Latinas also saw a marginal increase in unemployment, from 4.4% the month prior to 4.5% in December. (The overall unemployment rate, by comparison, has hovered around 4.4%, with an even lower rate of 3.8% for white workers.)

These fluctuations in employment come amid a growing number of hurdles for working mothers. Remote work policies had made it easier for many parents to remain in the workforce, particularly mothers of young children. Over the last two years, many companies have forced their employees to return to the office, reversing the flexible work arrangements that had enabled parents to juggle their personal and professional obligations. In 2025, leading companies like Amazon and JPMorgan Chase started requiring that workers come into the office five days a week; a report from real estate company Jones Lang LaSalle found that the majority of employees at the hundred largest U.S. companies by revenue were required to come into the office.

The Trump administration's policies and executive actions have also taken a toll on women, and especially working mothers. Trump imposed his own return-to-office requirements on federal workers, hundreds of thousands of whom were working remotely. Many of the federal layoffs targeted agencies where women and people of color were overrepresented, and they also disproportionately impacted probationary workers (those in their first year of service or recently promoted), who are more likely to be women.

Now, working mothers may face new challenges as Trump takes aim at the childcare programs that make it possible for countless parents to work. The childcare industry has long struggled with inadequate funding and high labor costs, making it difficult for many providers to keep their doors open.
The current administration has exacerbated some of those issues: When he took office, Trump threatened funding for Head Start, which helps subsidize childcare costs for low-income families, and the federal layoffs also compromised the program by cutting staff and making it even harder for underresourced childcare providers to stay afloat. Just days into the new year, Trump has sought to withhold $10 billion in federal funding earmarked for childcare subsidies and social services in Democratic states, citing concerns over alleged fraud. A federal judge has blocked the funding freeze for now, but there's no telling how Trump may continue to target childcare and caregiving programs, to the detriment of working mothers.
Category:
E-Commerce
As concerns grow over Grok’s ability to generate sexually explicit content without the subject’s consent, a number of countries are blocking access to Elon Musk’s artificial intelligence chatbot. At the center of the controversy is a feature called Grok Imagine, which lets users create AI-generated images and videos. That tool also features a “spicy mode,” which lets users generate adult content.

Both Indonesia and Malaysia ordered that restrictions be put in place over the weekend. Malaysian officials blocked access to Grok on Sunday, citing repeated misuse to generate obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images. Officials also cited “repeated failures by X Corp.” to prevent such content. Indonesia had blocked the chatbot the previous day for similar reasons. In a statement accompanying Grok’s suspension, Meutya Hafid, Indonesia’s Minister of Communication and Digital, said: “The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity, and the security of citizens in the digital space.”

The responses could be just the beginning of Grok’s problems, though. Several other countries, including the U.K., India, and France, are considering following suit. The U.K. has launched an investigation into the chatbot’s explicit content, which could result in it being blocked in that country as well. “Reports of Grok being used to create and share illegal, non-consensual, intimate images and child sexual abuse material on X have been deeply concerning,” Ofcom, the country’s communications regulator, said in a statement. Musk, in a social media post following word of the Ofcom investigation, wrote that the U.K. government “just want[s] to suppress free speech.”

Fast Company attempted to contact xAI for comment about the actions in Indonesia and Malaysia, as well as similar possible blocks in other countries.
An automatic reply from the company read “Legacy Media Lies.” Beyond the U.K., officials in the European Union, Brazil, and India have called for probes into Grok’s deepfakes, which could ultimately result in bans as well. (The U.S. government, which has contracts with xAI, has been fairly silent on the matter so far.) In a press conference last week, European Commission spokesperson Thomas Regnier said the commission was “very seriously looking into this matter,” adding: “This is not ‘spicy.’ This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe.”

Musk and X are still feeling the effects of a $130 million fine the EU slapped on the company last month for violating the Digital Services Act, specifically over deceptive paid verification and a lack of transparency in the company’s advertising repository.

Beyond sexualized images of adults, a report from the nonprofit group AI Forensics that analyzed 20,000 Grok-generated images created between Dec. 25 and Jan. 1 found that 2% depicted a person who appeared to be 18 or younger. These included 30 images of young or very young women or girls in bikinis or transparent clothes. The analysis also found Nazi and ISIS propaganda material generated by Grok.

While the company has not addressed the countries blocking access to its services, it did comment on the use of its tool to create sexual content featuring minors. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety wrote in a post. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.” The company has also announced it will limit image generation and editing features to paying subscribers.
That, however, likely won’t be enough to satisfy government officials who want to block access to Grok while these images can still be generated.
Advancements in artificial intelligence are shaping nearly every facet of society, including education. Over the past few years, especially with the availability of large language models like ChatGPT, there's been an explosion of AI-powered edtech. Some of these tools are truly helping students, while many are not. For educational leaders seeking to leverage the best of AI while mitigating its harms, it's a lot to navigate. That's why the organization I lead, the Advanced Education Research and Development Fund, collaborated with the Alliance for Learning Innovation (ALI) and Education First to write Proof Before Hype: Using R&D for Coherent AI in K-12 Education. I sat down with my coauthors, Melissa Moritz, an ALI senior advisor, and Ila Deshmukh Towery, an Education First partner, to discuss how schools can adopt innovative, responsible, and effective AI tools.

Q: Melissa, what concerns you about the current wave of AI edtech tools, and what would you change to ensure these tools benefit students?

Melissa: Too often, AI-powered edtech is developed without grounding in research or educators' input. This leads to tools that may seem innovative but solve the wrong problems, lack evidence of effectiveness, ignore workflow realities, or exacerbate inequities. What we need is a fundamental shift in education research and development so that educators are included in defining problems and developing classroom solutions from the start. Deep collaboration across educators, researchers, and product developers is critical. Let's create infrastructure and incentives that make it easier for them to work together toward shared goals. AI tool development must also prioritize learning science and evidence. Practitioners, researchers, and developers must continuously learn and iterate to give students the most effective tools for their needs and contexts.
Q: Ila, what is the AI x Coherence Academy, and what did Education First learn about AI adoption from the K-12 leaders who participated in it?

Ila: The AI x Coherence Academy helps cross-functional school district teams do the work that makes AI useful: define the problem, align with instructional goals, and then choose (or adapt) tools that fit system priorities. It’s a multi-district initiative that helps school systems integrate AI in ways that strengthen, rather than disrupt, core instructional priorities so that adoption isn't a series of disconnected pilots.

We’re learning three things through this work. First, coherence beats novelty. Districts prefer customizable AI solutions that integrate with their existing tech infrastructure rather than one-off products. Second, use cases come before tools. A clear use case that articulates a problem and names and tracks outcomes quickly filters out the noise. Third, trust is a prerequisite. In a world increasingly skeptical of tech in schools, buy-in is more likely when educators, students, and community members help define the problem and shape how the technology helps solve it. Leaders are telling us they want tools that reinforce the teaching and learning goals already underway, have clear use cases, and offer feedback loops for continuous improvement.

Q: Melissa and Ila, what types of guardrails need to be in place for the responsible and effective integration of AI in classrooms?

Ila: For AI to be a force for good in education, we need several guardrails. Let's start with coherence and equity. For coherence, AI adoption must explicitly align with systemwide teaching and learning goals, data systems, and workflows. To minimize bias and accessibility issues, product developers should publish bias and accessibility checks, and school systems should track relevant data, such as whether tools support (versus disrupt) learning and development, along with the tools' efficacy and impact on academic achievement.
These guardrails need to be co-designed with educators and families, not imposed by technologists or policymakers. The districts making real progress through our AI x Coherence Academy are not AI maximalists. They are disciplined about how new tools connect to educational goals, in partnership with the people they hope will use them. In a low-trust environment, co-designed guardrails and definitions are the ones that will actually hold.

Melissa: We also need guardrails around safety, privacy, and evidence. School systems should promote safety and protect student data by giving families information about the AI tools being used, along with clear opt-out paths. As for product developers, building on Ila's points, they need to be transparent about how their products leverage AI. Developers also have a responsibility to provide clear guidance on how their product should and shouldn't be used, as well as to disclose evidence of the tool's efficacy. And of course, state and district leaders and regulators should hold edtech providers accountable.

Q: Melissa and Ila, what gives you hope as we enter this rapidly changing AI age?

Melissa: Increasingly, we are starting to have the right conversations about AI and education. More leaders and funders are calling for evidence, and for a paradigm shift in how we think about teaching and learning in the AI age. Through my work at ALI, I'm hearing from federal policymakers, as well as state and district leaders, that there is a genuine desire for evidence-based AI tools that meet students' and teachers' needs. I'm hopeful that together, we'll navigate this new landscape with a focus on AI innovations that are both responsible and effective.

Ila: What gives me hope is that district leaders are getting smarter about AI adoption. They’re recognizing that adding more tools isn’t the answer; coherence is.
The districts making real progress aren’t the ones with the most AI pilots; they’re the ones who are disciplined about how new tools connect to their existing goals, systems, and relationships. They’re asking: Does this reinforce what we’re already trying to do well, or does it pull us in a new direction? And they're bringing a range of voices into defining use cases and testing solutions to center, rather than erode, trust. That kind of strategic clarity is what we need right now. When AI adoption is coherent rather than chaotic, it can strengthen teaching and learning rather than fragment it.

Auditi Chakravarty is CEO of the Advanced Education Research and Development Fund.