If you were building global teams in 2025, you wouldn't need me to tell you it was a crazy year. We experienced economic volatility and AI disruption, and tightened borders forced companies to adjust and readjust their approaches. 2026 won't be calmer. But the elements we need to master to stay competitive are now coming into focus: navigating mobility disruption, creating unity across increasingly distributed workforces, and building the transparent, compliant infrastructure needed to employ people anywhere.

1. Rethink mobility strategies

After a decade or so of relative calm, global mobility is now being disrupted from every angle. Geopolitical instability, economic shifts, and competing visa regimes are fundamentally changing how companies access and rely on talent. Governments are modernizing immigration with digital platforms like the European Travel Information and Authorisation System, yet the same environment is producing abrupt travel restrictions, emergency evacuations, and rising protectionism. The result is a system that is technically more advanced but practically more unpredictable.

Sharp increases in visa costs in some major economies have pushed many companies to rethink their talent strategies. High fees and uncertainty are accelerating offshoring and nearshoring, especially for high-value work in AI, product development, and cybersecurity. Yet with many companies facing hiring freezes and restructuring, mobility in 2026 will need to be more selective and strategy-led, not volume-driven.

To thrive in this new landscape, companies should build mobility capability in-house, owning compliance knowledge, digital tooling, and real-time monitoring, while deepening partnerships with specialist mobility consultants who can navigate complex jurisdictions. This kind of hybrid model will ensure companies are poised to respond rapidly to regulatory changes in an uncertain world.

2. Overcommunicate around AI workflows

AI is already embedded in day-to-day work, but without clear communication, it can easily create more noise than value: generic content, duplicated effort, and confusion over what is trustworthy. Most teams are still bolting AI onto old workflows instead of redesigning those workflows with AI in mind.

Overcommunicating around AI workflows means making it clear how AI is used, why it's used, and where humans fit in the loop. Teams should openly align on what should be automated, what should remain human-led, and how decisions are made and documented. The clearer the communication, the more consistently teams can use AI without compromising quality or accountability.

For AI to support unity rather than undermine it, organizations should:

- Make it clear that AI is a tool for productivity, not a quiet headcount reducer. Transparency builds trust and encourages adoption.
- Establish shared guidelines on when and how to use AI.
- Create internal spaces where people can share prompts, tools, and lessons.

3. Hire for soft skills

Tethered to the emergence of AI is a widening skills gap. Workers often feel confident that they are employable, while employers increasingly question whether available talent matches the demands of modern, tech-driven roles. Education systems still lean toward linear, narrow training, while careers are becoming more non-linear and cross-functional.

In 2026, employers that struggle to find hard skills will need to hire for potential instead, focusing on soft skills like communication and problem-solving, and looking for curiosity and comfort with ambiguity. The most resilient global teams will be built around people who can move across domains, learn new tools quickly, and evolve with the business, instead of those optimized purely for today's job description.

4. Understand transparency mandates

Finding the right talent is one problem. Employing people compliantly and fairly across borders is another, especially with the new regulatory challenges 2026 is throwing our way. New pay transparency rules require employers to show not just what they pay, but how they arrived at those decisions. Early evidence from transparency laws in some regions suggests they can meaningfully narrow pay gaps when combined with structured reporting. The next wave, including EU-wide pay transparency requirements, will push employers to formalize compensation frameworks and maintain audit-ready data.

Many organizations are underprepared. Only just over half of employers are investing in improving wage transparency. Employees often feel in the dark about how pay works, and ad hoc transparency, such as publishing a few ranges, won't fix that.

In 2026, companies will need payroll and HR systems that can:

- Produce locally compliant payslips
- Classify roles consistently across borders
- Surface pay data by region, role, and gender

Without this infrastructure, it becomes difficult to demonstrate that pay outcomes are structured, comparable, and non-discriminatory.

5. Build the infrastructure of global employment

In 2026, global companies are expected to expand quickly, de-risk that expansion, and provide a consistent employee experience worldwide. Spreadsheets and fragmented vendors simply can't keep up. The response is the rise of dedicated global employment infrastructure: employers of record, global payroll systems, and collaboration suites. Built correctly, the right stack:

- Keeps contracts, benefits, and payslips locally compliant
- Provides a single source of truth for workforce data
- Enables real-time visibility and control for leaders
- Reduces misclassification, tax, and security risks

In a year of continuous change, this kind of infrastructure will prevent global expansion from becoming a tangle of entities, local providers, and hidden liabilities.

PREPARE FOR 2026

Mobility disruption, distributed work, AI, skills gaps, and regulatory shifts are converging into a single test: Can your organization operate as a coherent global system? The teams that win in 2026 will:

- Treat mobility as a strategic lever
- Design AI-augmented workflows that enhance clarity and cohesion
- Hire for adaptability and potential, not just narrow experience
- Treat transparency as a business priority rather than an afterthought
- Build compliant employment infrastructure that can scale

The world is not getting simpler. But with the right strategies in place, businesses can leap the hurdles and continue to unlock the benefits of global teams.

Sagar Khatri is CEO of Multiplier.
Six decades after it was created by Congress, the nonprofit that brought America Mister Rogers' Neighborhood and Sesame Street will shut down for good. The Corporation for Public Broadcasting (CPB) announced this week that it would officially close, ushering in an uncertain new era for public broadcasting. The organization has historically administered funds for NPR, PBS, and more than 1,000 local TV and radio stations nationwide.

The nonprofit was created by the Public Broadcasting Act of 1967 to manage federal funds for educational TV and radio shows, but it fell victim to a defunding campaign initiated by the Trump administration and approved by a Republican-led Congress.

"For more than half a century, CPB existed to ensure that all Americans, regardless of geography, income, or background, had access to trusted news, educational programming, and local storytelling," CPB president and CEO Patricia Harrison said in a press release. Harrison said that the CPB decided to dissolve the organization as its final act instead of keeping the nonprofit on life support, which could have made it susceptible to additional attacks.

The Trump administration asked for the cuts to the public broadcasting organization, along with a sweeping pullback in foreign aid spending, earlier this year. Congress ultimately complied, voting in July to cut $1.1 billion in federal funds, with no Democrats voting in support.

The public broadcasting fallout begins

The Trump administration's decision to defund the country's largest public broadcasting organization will likely echo for years to come, but we're already starting to see some of its effects. In early December, a commission that oversees public educational TV in Arkansas voted to part ways with PBS, citing the shortfall of federal funds. That group framed the decision as a cost-saving measure, arguing that it had relied on federal funding to pay annual dues of around $2.5 million to PBS in exchange for the broadcaster's programming. The organization's commissioners, who voted 6-2 in favor of parting with PBS, are appointed by the governor.

The Arkansas PBS network, now rebranded as Arkansas TV, struck an optimistic tone in its announcement, pointing to a slate of new programming it plans to develop to replace PBS, including two shows for children, two food series, and two new history-focused shows. "Public television in Arkansas is not going away," Arkansas TV executive director and CEO Carlton Wing said.

In spite of the state's upbeat tone around its new brand and bespoke programming, Arkansas residents broadly support PBS and will likely feel the absence of its long-running educational shows. More than 70% of state residents said in recent surveys that PBS is an excellent value to their community. "The commission's decision to drop PBS membership is a blow to Arkansans who will lose free, over-the-air access to quality PBS programming they know and love," a PBS spokesperson told Fast Company. "It also goes against the will of Arkansas viewers."

Arkansas was the first state to sever its ties with PBS, but more could follow. Public TV and radio stations in rural parts of the country lack the donor base that their urban counterparts rely on and may be particularly vulnerable to new shortfalls in federal funding.

Public support for public broadcasting

The Trump administration has made dismantling public broadcasting a priority in its first year, but that position looks out of step with most of the country. President Trump has expressed his personal ire for public broadcasting, referring to PBS and NPR as "two horrible and completely biased platforms" and calling on Congress to defund what he characterized as "a scam perpetrated by the Radical Left."

Unlike the Trump administration, most Americans approve of public broadcasting, which has long been funded through the now-shuttered Corporation for Public Broadcasting. In the U.S., 58% of households with a TV reported watching public programming through PBS in the course of a year. PBS consistently ranks as the most trusted source in America for news and public affairs, besting cable and broadcast networks, newspapers, and streaming services.

In 1969, Fred Rogers, the creator and host of Mister Rogers' Neighborhood, famously testified before Congress to defend the Corporation for Public Broadcasting, which was facing a major budget cut from the Nixon administration just after its creation. His testimony was initially met with a chilly reception, but within the span of six minutes, Rogers won over the senator questioning him. The Corporation for Public Broadcasting went on to secure its full $20 million in federal funding, a comeback story the nonprofit won't be telling in 2026.

"What has happened to public media is devastating," said CPB board chair Ruby Calvert. "Yet, even in this moment, I am convinced that public media will survive, and that a new Congress will address public media's role in our country because it is critical to our children's education, our history, culture, and democracy to do so."
Elon Musk took over X and folded in Grok, his sister company's generative AI tool, with the aim of making his social media ecosystem a more permissive, free-speech-maximalist space. What he's ended up with is the threat of multiple regulatory investigations after people began using Grok to create explicit images of women without their permission, sometimes veering into images of underage children.

The problem, which surfaced in the past week as people began weaponizing Grok's image-generation abilities on innocuous posts by mostly female users of X, has raised the hackles of regulators across the world. Ofcom, the U.K.'s communications regulator, has made urgent contact with X over the images, while the European Union has called the ability to use Grok in such a way "appalling and disgusting."

In the three years since the release of ChatGPT, generative AI has faced numerous regulatory challenges, many of which are still being litigated, including alleged copyright infringement in the training of AI models. But the use of AI in such a harmful way to target women poses a major moral moment for the future of the technology. "This is not about nudity. It's about power, and it's about demeaning those women, and it's about showing who's in charge and getting pleasure or titillation out of the fact that they did not consent," says Carolina Are, a U.K.-based researcher who has studied the harms of social media platforms, algorithms, and AI to users, including women.

For its part, X has said that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," echoing the wording of its owner, Elon Musk, who posted the same statement on January 3. The fact that it's at all possible to create such images suggests just how harmful it is to remove guardrails on generative AI and allow users to essentially do whatever they want. "This is yet another example of the wild disparities, inequalities, and double standards of the social media age, particularly during this period of time, but also of the impunity of the tech industry," Are says.

Precedented

While the scale and power of AI-created images feel unprecedented, some experts disagree that they represent the first real morality test for generative AI. "AI (I'm using it here as an umbrella term) has long been a tool of discrimination, misogyny, homophobia and transphobia, and direct harm, including encouraging people to end their lives, causing depression and body dysmorphia, and more," says Ari Waldman, professor of law at the University of California, Irvine. "Creating deepfakes of women and girls is absolutely horrible, but it is not the first time AI has engaged in morally reprehensible conduct," he adds.

But the question of who bears legal responsibility for the production of these images is less clear than Musk's pronouncements make it seem. Eric Goldman, a professor at Santa Clara University School of Law, points out that the recently enacted Take It Down Act, which will require platforms in the coming months to have measures to take down illegal or infringing content within 48 hours, added new criminal provisions against intimate visual depictions, a category that would include AI-generated images. But whether that would cover bikini images of the type Grok is making by the load is uncertain. "This law has not yet been tested in court, but using Grok to create synthetic sexual content is the kind of thing the law was designed to discourage," Goldman says. And given that we don't yet know whether the Take It Down Act has already put in place the regulatory solution the problem demands, it may be premature to make yet more laws.

Experts like Rebecca Tushnet, a First Amendment scholar at Harvard Law School, say the necessary laws already exist. "The issue is enforcing them against the wrongdoers when the wrongdoers include the politically powerful or those contemptuous of the law," she says. In recent years, many new anti-deepfake and explicit-image laws have been passed in the U.S., including a federal law to punish the distribution of sexually explicit digital forgeries, explains Mary Anne Franks, an intellectual property and technology expert at George Washington Law School. But the recent developments with Grok show the existing measures aren't good enough, she says. "We need to start treating technology developers like we treat other makers of dangerous products: hold them liable for harms caused by their products that they could and should have prevented."

Ultimate responsibility

This question of ultimate responsibility, then, remains unanswered. And it's the question that Musk may be trying to head off by expressing his distaste for what his users are doing. "The tougher legal question is what, if any, liability Grok may have for facilitating the creation of intimate visual imagery," explains Goldman, pointing to the voluntary imposition of guardrails as part of firms' trust and safety protocols. "It's unclear under U.S. law if those guardrails reduce or eliminate any legal liability," he says, adding that it's also unclear whether a model's liability will increase if it has obviously inadequate guardrails.

Waldman argues that lawmakers in Washington should pass a law that would hold companies legally responsible for designing and building AI tools capable of creating child pornography or pornographic deepfakes of women and girls. "Right now, the legal responsibility of tech companies is contested," he adds. While the Federal Trade Commission has statutory authority to take action, he worries that it won't. "The AI companies have aligned themselves with the president, and the FTC doesn't appear to be fulfilling its consumer protection mandate in any real sense."