Artificial intelligence and diversity, equity, and inclusion (DEI) are rapidly becoming two of the most challenging and consequential communication arenas for modern companies. According to Mission North's 2025 Brand Expectations Index (BEx), public sentiment is evolving in ways that require corporate leaders to rethink how they communicate about these issues, balancing transparency with strategic messaging to maintain trust and relevance.

We conducted BEx 2025 in November 2024, surveying 1,000 general-population adults and 500 knowledge workers. The goal was to provide practical and positive guidance for how executives such as CEOs, CCOs, and CMOs can bolster their brand's reputation while avoiding actions that might unintentionally harm it. We previously conducted BEx 2024, identifying and measuring the leading factors affecting brands today, and used it as the foundation for BEx 2025.

AI: BUILDING TRUST IN A FAST-MOVING LANDSCAPE

AI adoption is surging, with tools like OpenAI's ChatGPT and Google's Gemini becoming household names. As AI continues to disrupt industries, public perception of AI companies has shifted significantly, though trust remains divided. Top survey findings include:

Trust in AI companies and startups jumped 9 points in 2025 over 2024, reflecting increased awareness and adoption. Big tech is leading the trust surge, with Google (66%), Amazon (65%), and Microsoft (61%) ranking as the most trusted AI companies among the general public. Trust lags significantly for newer players (OpenAI: 41%, Anthropic: 23%).

Communication builds trust. Knowledge workers overwhelmingly support transparent AI development, with 81% emphasizing security, 78% ethical oversight, and 77% privacy. Owned content from companies ranks just behind local news as the most trusted source of AI information.

Security, privacy, and ethics are big factors in building trust. Security is a top driver of trust in AI companies for 81% of knowledge workers and 69% of the general public. Ethical oversight is also critical: 77% of knowledge workers and 66% of the public support external governance of AI development. Only 40% of the general public trust the government to regulate AI responsibly, and 58% of knowledge workers prefer industry self-regulation, reinforcing the need for corporate AI ethics programs.

The CEO is your best (or worst) asset, with 67% of knowledge workers and 57% of the public saying a company's CEO reputation directly influences their trust in the brand.

DEI AND ESG: NO LABELS, JUST ACTION

In a climate of political and cultural pushback against DEI and ESG initiatives, companies face a paradox: While these programs are under attack, employees and consumers still widely support their underlying principles.

The public supports DEI values, but not always the label. While corporate DEI programs face external pressures, 69% of the general public and 78% of knowledge workers believe in incorporating diverse perspectives. However, companies should remember that the DEI term may carry political baggage and could be rebranded to reflect its broader, inclusive mission.

Actions speak louder than words. The study reveals a significant perception gap: 73% of the public supports inclusivity measures, but only 49% believe companies are following through. Businesses must showcase real, meaningful action rather than performative statements to bridge this gap.

Environmental stewardship remains a priority. Despite shifting political winds, 68% of the general public and 77% of knowledge workers support corporate sustainability initiatives.

Public stances on social issues remain divisive. While 65% of knowledge workers support corporate activism, only 50% of the general public feels the same. Companies should weigh external positioning carefully, prioritizing internal action and policies that reflect core values.

SO WHERE DO WE GO FROM HERE?

As AI reshapes industries and DEI/ESG debates continue, communicators must stay ahead of evolving expectations. BEx points to some clear pathways forward:

Own your narrative. For companies looking to tell their AI story: take control of your story, educate stakeholders, and focus on security, privacy, and ethical leadership to maintain and build trust.

Double down on executive comms. The general public and knowledge workers are looking to CEOs for direction; look closely at how your leaders show up, both in words and actions. Transparency and authenticity are essential elements of an executive platform.

Focus less on labels and more on results. Embedding inclusive and sustainable practices into company culture without drawing unnecessary controversy will allow brands to maintain credibility while avoiding political landmines.

Direct communication is king. Audiences want to hear from you, and content increases knowledge: 81% of the general public and 84% of knowledge workers rank direct communications from companies (podcasts, long-form articles, and videos on technical and human-interest topics) among the most trusted sources of information, second only to local news.

COMMUNICATE WITH CONFIDENCE

The research makes one thing clear: Companies that proactively shape their narratives around AI and DEI will maintain stronger, more resilient brands. AI is no longer an emerging trend; it's an operational reality. Meanwhile, DEI and ESG efforts remain essential to corporate success, even if they require strategic repositioning. Companies that take control of their messaging, prioritize transparency, and consistently communicate their values will be best positioned to navigate the challenges and opportunities of 2025. By doubling down on education, trust-building, and authentic storytelling, corporate leaders can ensure their brands survive and thrive in the ever-evolving landscape of AI and DEI.

Tyler Perry is co-CEO of Mission North.

The Fast Company Impact Council is a private membership community of influential leaders, experts, executives, and entrepreneurs who share their insights with our audience. Members pay annual membership dues for access to peer learning and thought leadership opportunities, events, and more.
Software increasingly makes the world go round. Without this critical digital infrastructure, the economy, and society at large, wouldn't function. But as recent events like last year's CrowdStrike outage have shown, enormous leaps in software power and complexity, including the integration of AI into the development process, ratchet up the potential for things to go sideways, fast. How can software teams better harness the supercharged new tools at their disposal? Here's a look at five things that lie ahead.

1. As consumers lose patience with outages, developers make software more resilient

For consumers, the CrowdStrike outage is just the tip of the iceberg. Other recent software disruptions include American Airlines' holiday stoppage, the global shutdown of Meta's apps, and a Microsoft 365 failure. When we commissioned a survey of U.S. consumers in 2024, their resentment was palpable. More than half had been impacted by software outages. And for 70%, releasing bad code is equal to, or even worse than, supermarkets selling contaminated food. Business leaders are worried too. When asked late last year, almost 90% of global executives expected major IT outages in 2025.

The irony is that these outages aren't the result of hackers or security compromises; they're due to preventable glitches and oversights in the development process itself. The way out of this mess? Moving forward, more companies will fully embrace modern DevOps practices: tools and processes that make software delivery more reliable and efficient. An emerging discipline just a few years ago, DevOps is quickly becoming table stakes across industries. The most crucial safeguard: automating the software development pipeline. If every engineer must follow the same steps for planning, writing, testing, deploying, and maintaining code, the entire process is smoother, faster, and safer.

2. AI transcends the hype

Roughly 60% of developers deployed AI in 2024, up almost 20% from the previous year. But so far, we haven't seen the expected AI productivity gains. That will change going forward. As software teams figure out how to operationalize AI, the technology will start to show its real value. After all, AI's role in the development lifecycle goes well beyond writing code. Developers now have access to AI-native software delivery platforms that weave AI agents into every stage of the development process, not just coding. For example, a DevOps assistant lets software teams instantly create pipelines, and easily modify them.

3. Security threats (and responses)

While invaluable to developers, AI has also been a boon to hackers. For instance, state-backed actors from China, Russia, and Iran have been using OpenAI tools to sharpen their skills and deceive targets. Indeed, the majority of hackers agree that businesses adopting AI have created new attack vectors. One vulnerability increasingly exploited: APIs, the doors and windows into code that allow apps to talk to each other. In the past two years, 57% of organizations were hit by an API security breach. And in a survey we conducted, more than two-thirds of businesses said genAI poses a risk to API security.

For companies, fighting back requires taking stock of APIs and detecting and preventing attacks. The key steps: conducting an API inventory, ensuring that APIs meet specific security standards, and using smart tools to spot threats. Equally important: integrating security into the developer pipeline.
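To make that concrete, here is a minimal sketch of the kind of automated check that could run as one stage of a delivery pipeline: it inventories the endpoints declared in an OpenAPI spec and flags any operation that lacks an authentication requirement. The file name, the single rule enforced, and the CI wiring are illustrative assumptions, not a description of any particular vendor's tooling.

```python
# Minimal sketch: a CI step that inventories API endpoints from an OpenAPI
# spec and flags operations with no declared authentication. The spec path
# ("openapi.json") and the single rule enforced are illustrative assumptions.
import json
import sys

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def audit_spec(path: str) -> list[str]:
    with open(path) as f:
        spec = json.load(f)

    global_security = spec.get("security")  # spec-wide auth requirement, if any
    findings = []

    for route, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method not in HTTP_METHODS:
                continue  # skip path-level keys like "parameters"
            # Flag the operation if neither it nor the spec as a whole
            # declares a security requirement.
            if not op.get("security", global_security):
                findings.append(f"{method.upper()} {route}: no security requirement")
    return findings

if __name__ == "__main__":
    problems = audit_spec(sys.argv[1] if len(sys.argv) > 1 else "openapi.json")
    for p in problems:
        print(p)
    # A nonzero exit code fails the pipeline stage, so unauthenticated
    # endpoints can't slip into production unnoticed.
    sys.exit(1 if problems else 0)
```

In practice, a gate like this would be one of many layered checks (schema validation, rate limiting, dependency scanning) enforced uniformly across every team's pipeline.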
This so-called shift left isn't new; progressive companies have been working for years to get developers in on the ground floor of security initiatives. But we'll see shift left strengthen, part of a broader effort to close the gap between Dev and functions like FinOps and continuous integration/continuous delivery.

4. DevOps platforms offer developers a one-stop shop

As DevOps matures, a significant problem has surfaced: too many tools. Point solutions, the handy tools and automations that make developers' lives easier, are starting to overwhelm them. They add to daily toil instead of alleviating it. Currently, the average developer is asked to manage 14 different vendor tools. This context shifting between different interfaces, workflows, and licenses leads to confusion, cognitive overload, and development inconsistency. This all points to the need for a robust, integrated platform that brings the needed developer tools into one place. Critically, however, all the solutions need to be best of breed. Developers aren't forgiving and would rather build their own tools than use inferior ones. For all these reasons, expect platform engineering to go mainstream this year. Indeed, Forrester predicts that in 2025, half of enterprises will abandon individual software tools for DevOps platforms.

5. Cloud cost tools become mission-critical

For developers, getting a handle on cloud services costs has always been a headache. AI has taken that frustration to a whole new level. Because the latest AI applications use so much processing power, cloud costs can quickly escalate, leading to huge surprise bills. For the average business, cloud spending has leaped 30% in the past year, largely because of generative AI. That's one reason why FinOps, another emerging discipline that bridges the gap between finance and engineering, is increasingly important. With developer visibility into cloud spending and accountability for it, companies should see significant savings and efficiency gains. But that hinges on new tech and on harnessing AI as part of the solution. AI-powered tools let anyone easily pinpoint cloud waste by asking questions in plain language. Armed with that knowledge, developers can get recommendations on the right fixes. The best cloud cost management tools also automate fixes, forecast cloud spend, and enforce rules for cloud usage. The result is significant savings: as much as 50% at Fortune 1000 companies I know.

For software developers, big opportunities lie ahead, along with growing threats. Teams that stay vigilant and take advantage of the best tools will be better positioned to see real productivity gains from AI while avoiding the security and quality pitfalls.

Jyoti Bansal is CEO of Harness.

The Fast Company Impact Council is a private membership community of influential leaders, experts, executives, and entrepreneurs who share their insights with our audience. Members pay annual membership dues for access to peer learning and thought leadership opportunities, events, and more.
At Nvidia's developer conference on Thursday, a large group of energy companies, along with a few technology companies, announced plans to collaborate on building AI models and apps aimed at improving the generation and distribution of electric power. The initiative, called the Open Power AI Consortium, is organized by the Palo Alto-based Electric Power Research Institute (EPRI). Founding members include Nvidia, Microsoft, AWS, and Oracle. Notably absent from the group are all of the leading developers of frontier AI models, such as Anthropic, Google, and OpenAI.

"This is about getting the right data, and getting it clean, so that it can be used for AI," Jeremy Renshaw, who leads the consortium at EPRI, tells Fast Company. Renshaw says energy companies have mountains of data, but organizing it in a way that AI models can process is key.

Already, more than two dozen regional power companies in the U.S. have signed on, including Con Edison, Duke Energy, New York Power Authority, Pacific Gas and Electric Company, Southern California Edison, Tennessee Valley Authority, and Westinghouse Electric Company. EPRI president and CEO Arshad Mansoor said in a statement that the consortium will create an AI model, datasets, and apps to "enhance grid reliability, optimize asset performance, and enable more efficient energy management." It will also foster a collaborative environment where utilities, startups, academics, and national labs can work together to address power-sector challenges using AI.

The consortium doesn't include representatives from government agencies, but Renshaw said he'd like to see their inclusion. "We intend to include anyone involved in the making and moving of electricity," he says. "Government is important because they do the permitting, licensing, and they provide regulations."

The announcement comes amid growing concern in the tech sector over the strain that AI workloads can place on data centers. (Google even pledged last year to buy energy from small modular reactors developed by Kairos Power to support its growing AI ambitions.) Axios climate reporter Alex Freedman notes that the power demands of the so-called AI boom have become a top priority for energy company CEOs in the U.S. Freedman highlights an ongoing debate within that sector over whether the power demands of AI will prolong the use of fossil fuels. Should that prove to be the case, AI could further push back constructive work toward climate goals.