Despite considering themselves successful, most Americans also feel like they're lagging on at least one major milestone. But experts warn that dwelling on it could put them further behind.

In a recent survey conducted by daily development app Headway, 77% of respondents said they consider themselves successful. At the same time, in what researchers label the "success paradox," 81% said they're falling behind their peers in at least one major personal or professional domain. Roughly one-third said they feel behind others their age financially, 11% feel they're behind in life experiences, 10% feel they're lagging in their career progress, and another 10% said the same about their relationships.

"It's very easy to get caught in the trap of, 'I'm not good enough,'" says Cindy Cavoto, a certified productivity coach for Headway and one of the study's coauthors. "People put these expectations on themselves, and I think as a society we don't give ourselves enough slack."

Though many are facing economic challenges and career stagnation in the current job market, Cavoto says those setbacks can feel even bigger in the age of social media comparison. "People are only posting their best lives," she says.

Rather than focusing on others, Cavoto encourages folks to compare their progress against their own individual benchmarks, which most survey respondents felt positively about. "Are you in a better place than you were last year? Are you feeling better about where you're going this year?" Cavoto asks. "Stop looking around and just compare yourself to yourself. That's your best measure, because we're all on our own journey."

The Fog of Work

Part of the frustration many workers express is driven by feelings of persistent economic insecurity and career doubt, despite making personal sacrifices to further their professional ambitions.
According to the Headway survey, 44% of respondents have forfeited free time, 37% have sacrificed sleep and mental health, and 30% have compromised relationships in pursuit of their goals.

Despite those sacrifices, 66% of American workers feel like their career has stalled, according to a recent survey from online résumé builder MyPerfectResume. Furthermore, 45% said they want to leave their jobs but feel they can't in this market, and 70% have questioned or reconsidered their entire career path in the past year.

"That's pretty astronomical," says career expert Jasmine Escalera of MyPerfectResume. "There are a lot of employees out there who are dissatisfied with their day-to-day work."

Of those who said they've reconsidered their career path, 21% feel like it's too late to make a change, 19% believe they should be further along than they are, and 17% admit to just going through the motions. This state of uncertainty, in which workers struggle to see what's ahead, is what MyPerfectResume refers to as "career fog."

"There are a lot of employees out there who are feeling like they're not having the upward mobility that they want, they're not developing the skills that they want, they don't have the career progression that they want," Escalera says. "There are also a lot of employees who feel like they're not getting paid what they should."

According to a MyPerfectResume survey conducted last fall, 78% of workers have been assigned new duties without a raise or promotion, and more than half were promised promotions or opportunities that never materialized. In an analysis of U.S. wage growth between 2020 and 2024, MyPerfectResume found that despite an 18% increase in wages during that period, spending power declined by 2.6% due to inflation.

"Our recent reports show that a lot of people are struggling financially," Escalera says. "The question is, are people also not feeling like they're moving in the right direction because they're not being paid enough to afford basic necessities?"
In another recent MyPerfectResume survey, 74% of respondents cited high expectations, peer comparisons, or personal perfectionism as a driver of self-doubt, and 58% said self-doubt is negatively affecting their career growth. In other words, those negative feelings are further driving negative outcomes.

Focus on What You Can Control

Lots of workers feel like they've lost control of their careers, their personal finances, or their mental health, and for good reason. Economic instability, job market stagnation, layoffs, and AI fears have many workers questioning whether they're on the right path, and that self-doubt can put a damper on their motivation.

How they approach these challenges can play a significant role in the outcome, according to former Stanford lecturer and behavioral design expert Nir Eyal. In his new book, Beyond Belief, Eyal explains that our perception is driven by a set of beliefs that are neither pure fact nor fiction, making them uniquely malleable.

"Beliefs are tools, not truths," he says. "You can change how you see reality based on your beliefs."

Adopting what Eyal labels "limiting beliefs" anchored in self-doubt, such as "I'm not where I should be, despite my best efforts," saps our motivation and increases suffering. Rather than looking for opportunities to improve our situation, Eyal's research suggests those who maintain limiting beliefs wire their brains to look for evidence of their victimhood.

"How hard am I going to work if I'm thinking, 'I've been working this hard, and look, I still am not where I should be'? To me, it's pretty demotivating," he says. "We must reconcile that limiting belief to push beyond it, and we do that by adopting a liberating belief that serves us better."

Turning a limiting belief into a liberating belief, according to Eyal, starts with questioning the truth behind the limiting belief, and considering how outcomes might improve if we reversed it.
"With this inquiry-based stress reduction, we learned that the belief that we think is a fact may not be a fact, and there might be an alternative explanation," he says. "We learned that holding on to the belief doesn't necessarily serve us, doesn't make us better off, and that actually not holding on to that belief might be much better for us, all in a matter of seconds."

In other words, the more workers believe they're falling behind their peers, the more likely that sentiment is to become reality. Those who instead focus on their own success and potential are more likely to reduce the weight of their personal and professional challenges, and their negative feelings toward them.

"We can actually use the science behind belief to help us increase our motivation to do what we need to do to decrease our suffering around that situation, so that we can reconcile it," Eyal says. "Who do you become when you believe 'I'm exactly where I should be and I'm still learning'? You'll feel so much more motivated to go learn and keep working at it."
Every so often, a technical dispute reveals something much bigger. The recent blowup between the U.S. Department of Defense and Anthropic is one of those moments: not because it's about a $200 million contract, but because it makes visible a new kind of enterprise risk, one that most CEOs, CTOs, and CIOs are still treating as a procurement detail.

In a recent piece, "The Pentagon wants to rewrite the rules of AI," I focused on the political meaning of a government attempting to force an AI company to relax its own guardrails. For enterprise leaders, the most important takeaway is more practical: If your AI capabilities depend on a single provider's terms, policies, and enforcement mechanisms, your strategy is now downstream of someone else's conflict.

According to reporting, the Pentagon wanted the ability to use Anthropic's models for "all lawful purposes," while Anthropic insisted on explicit carve-outs, particularly around mass surveillance and fully autonomous weapons. When Anthropic wouldn't budge, the dispute escalated into threats of blacklisting and "supply chain risk" designation, with public pressure at the highest political levels. The Associated Press describes the demand for broader access and the potential consequences in detail, including the Pentagon's willingness to treat compliance as nonnegotiable for participation in its internal AI network, GenAI.mil.

Then came the second act: OpenAI stepped in with its own Pentagon agreement, presenting it as compatible with strong safety principles while debate continued over what the contract language actually prevents, especially regarding the use of publicly available data at scale.

You may not be selling to the Pentagon or to governments that are making democracy progressively look like a pipe dream. But you are almost certainly building on vendors whose models are shaped by policies, politics, contracts, and reputational risk.
And if you're deploying those models as is, or building agentic systems tightly coupled to one provider's tooling and assumptions, you're making a strategic bet you probably haven't priced in. This is what the Pentagon-Anthropic fight should teach every enterprise.

Your AI vendor is not just a supplier. It's a governance regime.

For the past two years, many companies have treated large language model (LLM) procurement like cloud procurement: Choose a provider, negotiate price, sign terms, integrate application programming interfaces (APIs), ship pilots. But LLM providers are not selling neutral infrastructure. They're selling models with built-in constraints, policies that can change, and enforcement mechanisms that can tighten overnight. Even when the models are accessed through APIs, the practical reality is that your capability is partly controlled elsewhere: through usage policies, refusal behaviors, rate limits, logging, retention choices, safety layers, and contractual wording.

That's why this dispute matters. Anthropic's stance wasn't simply ethical positioning. It was product governance. The Pentagon's stance wasn't simply buyer pressure. It was demanding control of governance.

Enterprise leaders should recognize the parallel immediately: Your company's AI behavior is partly determined by a vendor's definition of acceptable use, and that definition may collide with your own business requirements, your regulatory environment, your geography, or your risk appetite. In a sense, you are outsourcing part of your decision architecture. And when governance becomes the battleground, it's not a technical issue anymore. It's strategic.

Out-of-the-box AI is rented intelligence. Strategy requires owned capability.

I've written before that most current AI deployments are essentially "rented intelligence": powerful, convenient, but ultimately generic.
That was the core of my argument in "This is the next big thing in corporate AI" and in "Why world models will become a platform capability, not a corporate superpower." When everyone can rent similar capabilities from OpenAI, Anthropic, Google, xAI, or others, the differentiator becomes what you build above the model: your workflows, your feedback loops, your integration with operational reality.

The Pentagon dispute highlights a hard truth: When you depend on as-shipped AI behavior, your operational continuity depends on someone else's red lines, and those lines can be challenged by customers, governments, courts, or internal politics.

If you're a CIO or CTO, this is the moment to stop thinking of LLM selection as the AI strategy, and start treating it as a replaceable component in a larger system. Because the real strategic question is not "Which model do we choose?" It is: "Do we have the technical and organizational ability to switch models quickly, without rewriting our business logic, retraining our workforce, or rebuilding our agent systems?"

Agentic systems multiply lock-in and amplify the blast radius.

Did you really believe that by saying "we are developing an agentic system," you were somehow more sophisticated? Simple use cases such as summarization, drafting, and search augmentation are relatively portable. Agentic systems are not. The moment you build agents that call tools, trigger workflows, access internal systems, and make chained decisions, you start encoding business logic in places that are surprisingly hard to migrate: prompts, function-call schemas, tool-selection patterns, model-specific safety behavior, vendor-specific orchestration frameworks, and even quirks of how a particular model handles ambiguity.

That is why the Pentagon-Anthropic fight should feel like a corporate risk scenario, not a Washington drama.
A sudden policy shift, contract dispute, or reputational shock can force you to change providers fast, and if your agents are tightly coupled to one stack, your business doesn't switch. It stalls.

I made a related point, though from a different angle, in "Why your company (and every company) needs an AI-first approach." AI-first should not mean "deploy more AI." It should mean building systems where artificial intelligence is structurally embedded, but is also governed, testable, observable, and resilient under change. Resilience is the missing word in most enterprise AI plans.

The lesson isn't "ethics first." It's "architecture first."

You don't need to take a public moral stance like Anthropic (or maybe you do, but that's not the topic of this article). You do need to design as if your vendor relationship will be volatile . . . because it will be. Volatility can come from many directions: A provider changes its safety posture. A regulator introduces new constraints. A customer demands contractual carve-outs. A government pressures suppliers. A vendor shifts pricing, retention, or availability. A model is withdrawn, restricted, or re-tiered. A geopolitical event changes what "acceptable use" means.

The organizations that will navigate this era best are those that treat LLMs as interchangeable engines and build capabilities that are model-agnostic. That means investing in a layer above the model that belongs to you: evaluation, routing, policy, observability, and integration with your operational truth.

If you need a mental frame, think of what NIST is doing with the AI Risk Management Framework: a structured way to map, measure, and manage AI risk across contexts and use cases, rather than assuming the technology is inherently safe because a vendor says so. The Pentagon itself (ironically, given this dispute) has formal language around responsible AI principles and implementation, emphasizing governance, testing, and life cycle discipline.
Companies should read those documents not as government ethics, but as a reminder that the control plane matters as much as the model.

Build AI capabilities that reflect your business, not your provider.

The endgame is not model independence as an abstract principle. The endgame is strategy dependence: AI systems that are deeply shaped by your supply chain, your operating model, your risk posture, your customer obligations, and your competitive context, no matter how complex those are. That is the part most companies are still avoiding, because it is harder than buying a model. It requires building institutional competence: the ability to evaluate models, to swap them, to tune behavior through your own governance layers, to instrument outputs, to manage tool access, and to treat agents as production systems rather than demos.

In "What are the 2 categories of AI use and why do they matter?," I tried to describe the divide between organizations that use AI and those that build with AI. The Pentagon-Anthropic conflict is a perfect illustration of why that divide is becoming existential. If you only use, you inherit someone else's constraints. If you build, you can adapt.

The companies that keep treating AI as a cost-cutting plug-in will almost certainly underinvest in the architecture that makes switching possible. Efficiency narratives feel safe, but they often lock you into the shallowest version of the technology.

The Pentagon didn't want ethics getting in the way. Anthropic didn't want to yield control. OpenAI negotiated a different set of terms. That triangle is not a one-off story. It's a preview of how contested, politicized, and strategically consequential AI supply will become.

Your company's job is not to pick the right provider. Your job is to ensure that, when the inevitable conflict arrives, your business is not trapped inside someone else's argument.
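The "layer above the model" that makes switching possible can be sketched in a few lines. The following is a minimal illustration, not a real implementation: every class and method name here is hypothetical, and a production version would carry evaluation, logging, and policy checks in the same owned layer. The core idea is simply that business code depends on an interface you control, with provider choice and fallback handled by your own routing code.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Hypothetical adapter interface. Business logic depends on this,
    never on a specific vendor's SDK or API shape."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class VendorA(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Stand-in for a real API call to one provider.
        return f"[vendor-a] {prompt}"

class VendorB(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Stand-in for a second, interchangeable provider.
        return f"[vendor-b] {prompt}"

class Router:
    """The owned control plane: provider order is your policy, and any
    failure (outage, policy change, refusal) falls through to the next."""
    def __init__(self, providers):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error = None
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as err:
                last_error = err  # try the next provider
        raise RuntimeError("all providers failed") from last_error

# Swapping or reordering vendors is a configuration change,
# not a rewrite of the calling code.
router = Router([VendorA(), VendorB()])
```

The design choice is the point: when a vendor's terms, pricing, or policies shift, the change is absorbed in one adapter and one line of routing configuration, instead of rippling through every prompt and workflow in the business.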
Dieticians are warning that GLP-1 use can lead to extreme malnutrition, manifesting in diseases like scurvy, amid findings that the vast majority of studies fail to consider patients' eating habits. While GLP-1s like Ozempic and Wegovy have surged in popularity in recent years, and are now available through injections and in pill form, leading dieticians in Australia have discovered that existing research hasn't considered what patients are eating, and how much.

Nutritional Deficiencies

While the drugs work by suppressing appetite, eating too little or making poor dietary choices can lead to further issues.

"A reduction in body weight does not automatically mean the person is well-nourished or healthy," Professor Clare Collins told the Australian Financial Review (AFR). "Nutrition plays a critical role in health, and right now it's largely missing from the evidence."

She added that only two trials had recorded or published what GLP-1 users were eating. The current data shows that many patients using weight-loss medication are functionally malnourished, which can lead to severe vitamin deficiencies. A 2025 study of adults with type 2 diabetes found that more than 20 percent of participants had nutritional deficiencies after 12 months of GLP-1 use. And a study examining patients before joint surgery found that 38 percent of GLP-1 users suffered from malnutrition, versus 8 percent for patients not using GLP-1s.

Last year, British pop artist Robbie Williams told The Mirror he had developed a "17th century pirate disease" after taking "something like Ozempic." He was referring to scurvy, a rare but serious vitamin C deficiency. In the worst cases, the illness can lead to death.

"I'd stopped eating, and I wasn't getting nutrients," he said.

It's exactly the kind of health emergency the dieticians are working to combat.
The Proposed Solution

"Let's not wait for every GP (general practitioner) to see a case of scurvy, let's get on the front foot and link these GP chronic management plans to a dietician referral," said Collins.

GLP-1 use has also been tied to thiamine deficiency, which can cause neurological and cardiovascular disease. Magriet Raxworthy, CEO at Dieticians Australia, said it's essential that GLP-1 users receive nutritional guidance while taking the drug.

"Without personalized medical nutrition therapy provided by a dietitian, people may struggle to meet their nutritional needs and can be placed at risk of significant muscle loss, bone density loss, micronutrient deficiencies, and disordered eating behaviors," she said, according to the AFR. "In this case, it's clear: medication alone does not deliver sustainable health outcomes."

Some GLP-1 providers do offer nutrition assistance, but the issue hasn't yet been centralized in a way that effectively prevents serious deficiencies that can accompany the medication.

Ava Levinson

This article originally appeared on Fast Company's sister website, Inc.com. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy.