President Donald Trump said Sunday that he is “inclined” to keep ExxonMobil out of Venezuela after its top executive expressed skepticism about oil investment efforts in the country following the toppling of former President Nicolás Maduro.

“I didn’t like Exxon’s response,” Trump said to reporters on Air Force One as he departed West Palm Beach, Florida. “They’re playing too cute.”

During a meeting Friday with oil executives, Trump tried to assuage the concerns of the companies and said they would be dealing directly with the U.S., rather than the Venezuelan government. Some, however, weren’t convinced.

“If we look at the commercial constructs and frameworks in place today in Venezuela, today it’s uninvestable,” said Darren Woods, CEO of ExxonMobil, the largest U.S. oil company.

An ExxonMobil spokesperson did not immediately respond Sunday to a request for comment.

Also on Friday, Trump signed an executive order that seeks to ensure that Venezuelan oil revenue remains protected from being used in judicial proceedings. The executive order, made public on Saturday, says that if the funds were to be seized for such use, it could “undermine critical U.S. efforts to ensure economic and political stability in Venezuela.” Venezuela has a history of state asset seizures, ongoing U.S. sanctions and decades of political uncertainty.

Getting U.S. oil companies to invest in Venezuela and help rebuild the country’s infrastructure is a top priority of the Trump administration after Maduro’s capture. The White House is framing the effort to “run” Venezuela in economic terms, and Trump has seized tankers carrying Venezuelan oil, has said the U.S. is taking over the sales of 30 million to 50 million barrels of previously sanctioned Venezuelan crude, and plans to control sales worldwide indefinitely.

Kim reported from West Palm Beach, Florida. Seung Min Kim and Julia Nikhinson, Associated Press
Mattel Inc. is introducing an autistic Barbie on Monday as the newest member of its line intended to celebrate diversity, joining a collection that already includes Barbies with Down syndrome, a blind Barbie, a Barbie and a Ken with vitiligo, and other models the toymaker added to make its fashion dolls more inclusive.

Mattel said it developed the autistic doll over more than 18 months in partnership with the Autistic Self Advocacy Network, a nonprofit organization that advocates for the rights and better media representation of people with autism. The goal: to create a Barbie that reflected some of the ways autistic people may experience and process the world around them, according to a Mattel news release.

That was a challenge because autism encompasses a broad range of behaviors and difficulties that vary widely in degree, and many of the traits associated with the disorder are not immediately visible, according to Noor Pervez, who is the Autistic Self Advocacy Network’s community engagement manager and worked closely with Mattel on the Barbie prototype.

Like many disabilities, “autism doesn’t look any one way,” Pervez said. “But we can try and show some of the ways that autism expresses itself.”

For example, the eyes of the new Barbie shift slightly to the side to represent how some people with autism sometimes avoid direct eye contact, he said. The doll also was given articulated elbows and wrists to acknowledge stimming, hand flapping and other gestures that some autistic people use to process sensory information or to express excitement, according to Mattel.

The development team debated whether to dress the doll in a tight or a loose-fitting outfit, Pervez said. Some autistic people wear loose clothes because they are sensitive to the feel of fabric seams, while others wear figure-hugging garments to give them a sense of where their bodies are, he said. The team ended up choosing an A-line dress with short sleeves and a flowy skirt that provides less fabric-to-skin contact. The doll also wears flat shoes to promote stability and ease of movement, according to Mattel.

Each doll comes with a pink finger clip fidget spinner, noise-canceling headphones and a pink tablet modeled after the devices some autistic people who struggle to speak use to communicate.

The addition of the autistic doll to the Barbie Fashionistas line also became an occasion for Mattel to create a doll with facial features inspired by the company’s employees in India and mood boards reflecting a range of women with Indian backgrounds. Pervez said it was important to have the doll represent a segment of the autistic community that is generally underrepresented.

Mattel introduced its first doll with Down syndrome in 2023 and brought out a Barbie representing a person with Type 1 diabetes last summer. The Fashionistas also include a Barbie and a Ken with a prosthetic leg, and a Barbie with hearing aids, but the line also encompasses tall, petite and curvy body types and numerous hair types and skin colors.

“Barbie has always strived to reflect the world kids see and the possibilities they imagine, and we’re proud to introduce our first autistic Barbie as part of that ongoing work,” Jamie Cygielman, Mattel’s global head of dolls, said in a statement.

The doll was expected to be available at Mattel’s online shop and at Target stores starting Monday for a suggested retail price of $11.87.
Walmart stores are expected to start carrying the new Barbie in March, Mattel said.

The Centers for Disease Control and Prevention reported last year that the estimated prevalence of autism among 8-year-old children in the U.S. was 1 in 31. The estimate from the CDC’s Autism and Developmental Disabilities Monitoring Network said Black, Hispanic, Asian and Pacific Islander children in the U.S. were more likely than white children to have a diagnosis, and that the prevalence was more than three times higher among boys than girls.

Anne D’Innocenzio, AP Retail Writer
Late last year, Meta confirmed it would effectively be abandoning the metaverse, a nebulously defined project that spurred the company’s 2021 rebrand and has cost it over $70 billion since. At a strategy meeting at Mark Zuckerberg’s Hawaii compound, Reality Labs, the division responsible for the metaverse, was told to cut its budget by 30%, versus only 10% across the rest of the company. Reality Labs’ fate was arguably a long time coming: The division has never turned a profit, with cumulative losses these past five years totaling $73 billion. Wall Street reacted positively to the news, adding $69 billion to Meta’s market capitalization.

You remember the metaverse, don’t you? The next stage of the internet’s evolution: a virtual reality full of legless avatars, sprawling, lifeless, digital malls, and nausea-inducing headsets. Upon the inception of the metaverse, its enthusiasts looked at vast swaths of the economy (gaming, online retail, digital advertising, compulsory Zoom meetings) and said: Imagine we did more of this, but on virtual reality platforms, mediated by micro-transactions and facilitated by cryptocurrency-backed assets.

Relabeling the digital economy as the “metaverse” was a simple, elegant move (as well as a deeply cynical effort to rebrand already existing digital markets as the next internet) that allowed forecasts to assume an air of inevitability. Until it wasn’t.

Perhaps more urgently now, the metaverse should also be understood as a dress rehearsal for today’s AI boom: The former was to succeed the mobile internet, while the latter now promises to be more profound than electricity or fire. Perpetually inflating definitions. A single-minded focus on profit that identifies but fails to address egregious harms. Manufactured narratives about inevitability and technological progress. Burning eye-watering sums on infrastructure for a product nobody wants. Any of this sound familiar?

Talking it into existence

At the heart of the metaverse derangement was the persistent inflation of its definition. McKinsey & Company proclaimed in June 2022 that the metaverse could generate “up to $5 trillion in impact by 2030, equivalent to the size of the world’s third-largest economy today, Japan.” McKinsey also estimated that e-commerce would comprise $2 trillion to $2.6 trillion of that share. Of the 3,400 consumers and executives McKinsey surveyed, 95% of “business leaders” expected positive impact from the metaverse in five to ten years, while 61% expected moderate changes to their industry’s operations.

Incredibly, McKinsey’s was among the more conservative estimates. A few months before, Citigroup predicted the metaverse would become “the next iteration of the internet, or Web3.” While it would be “community-owned and governed and guarantee privacy by design,” it would also have a total addressable market (TAM) between $8 trillion and $13 trillion by 2030, with some five billion users to boot, the bank estimated. And one month before that, Morgan Stanley sent an investor note anticipating that the metaverse represented an $8 trillion market opportunity in China alone.

In an essay analyzing Web3 and the metaverse, tech critic Evgeny Morozov observes that a great deal of what was going on at this time was a performance aimed at conjuring new realities into being through language that, itself, spun up visions unmoored from reality. “The advocates of Web3 are quite explicit about this, we’ve got this beautiful map on our hands; all that’s missing is the territory it is supposed to refer to . . . if there’s no reality, we’ll create one by talking it into existence.”

A mass hallucination

Why was this mass hallucination indulged for so long? Part of it was because profit-driven surveillance and enclosure were core ambitions of the metaverse pivot. When it came to labor, the best-case scenario resembled employers’ platonic ideal: bypassing labor laws through remote work, misclassifying full-time workers due protections and benefits as contractors, paying arbitrage wages across borders, all while subjecting workers to cold, algorithmic overseers.

As for consumers, they would be enlisted into digital sharecropping. Take Axie Infinity, the “play-to-earn” game once hailed as a crown jewel of Web3 and the metaverse. “Managers” in wealthy countries bought expensive NFTs, or non-fungible tokens (remember those?), then rented them to “scholars” in the Global South, who grinded for hours and hours in the game for a few pennies an hour, all in hopes of earning enough to one day become a manager with their own scholars. Was this a new economy? The future of the internet? Or the same old bitter taste?

At the same time, a land grab for virtual real estate broke out. Speculators poured millions into Decentraland, The Sandbox, and other virtual worlds where land should, theoretically, be limitless and abundant. Yet the speculators imposed artificial limits, in hopes of inflating valuations of the digital real estate. This would allow investors to realize eye-watering returns on fundamentally worthless assets, like a slice of land in an abandoned virtual world. It would be akin to “buying property in Manhattan, but in a world where anyone could feasibly create an infinite amount of alternative Manhattans that are just as easy to get to. Which means the only reason for users to buy into this Manhattan is if it offers a better service than the others,” as Wired put it.

The humbling

Still, the emptiness did not deter Facebook, which rebranded as Meta on October 28, 2021, during Connect 2021, its annual developer conference. During the October announcement of Facebook’s pivot to the metaverse, Zuckerberg offered that “the last few years have been humbling for me and our company in a lot of ways.” That’s one word for it. That year, whistleblower Frances Haugen testified that Facebook products had harmed children and torched our democracy, while reaffirming the company’s complicity in genocide in Myanmar and in “literally fanning ethnic violence” elsewhere. On another front, Apple changed iPhone privacy settings so that users could opt out of being tracked for personalized ads; Meta told investors the changes would cost it $10 billion of revenue in 2022. The impact may have been so steep that the firm is currently accused of “deliberately bypass[ing] privacy rules on Apple iPhones in a bid to boost revenues.”

Amid all this came the metaverse Hail Mary: a transparent, desperate rebrand to sell the promise of “presence” in a virtual world. The pivot was about building a “total service environment”: a closed garden where consumers spend all day exclusively using one firm’s goods and services, a new world where Facebook was not seen as a parasite but understood to be the landlord, the benevolent god watching over everything. “We should all be concerned about how Facebook could and will use the data collected within the metaverse,” warned Bree McEwan, a VR researcher. The physical world was becoming increasingly hostile to Meta’s relentless profit-seeking.
Before Zuckerberg preached democratization, Meta had spent the past few years busy at work on patents aimed at optimizing ad delivery through eye-tracking, gait analysis (to identify users by how they walk), and haptic feedback suits monitoring heart rate and emotional arousal. Parents and children were raising concerns about its impact on mental health and social relations. European regulators and American competitors were implementing changes that thwarted data extraction.

Rise and fall

Yet within a year of the rebrand, there was already trouble in (digital) paradise. By October 2022, Meta’s flagship virtual-reality game, Horizon Worlds, proved so buggy and unpopular that it was placed on “quality lockdown.” There was a time when Horizon Worlds claimed to have 200,000 monthly users (walking back claims of 300,000) and hoped to hit 500,000 by the end of the year. But by August 2023, it wasn’t even clear if there were more than 1,000 daily active users. Other virtual worlds like Decentraland and The Sandbox appeared to fare even worse.

Some may insist that we can’t learn too much from the rise and fall of the metaverse: that it and Meta, more generally, are rogue mutations, aberrations from normal technological development or even from capitalism itself. But Meta is, actually, a more straightforwardly boring company than some critics might have you think. Facebook enthusiastically became Meta, and patented surveillance tools were adopted as a means to an end: making more and more of the rhythms of human life legible to markets. This is old wine in new bottles. From its earliest days, surveillance has helped minimize capitalist dysfunction by regimenting labor, stimulating consumer demand, satiating Wall Street’s hunger for reliable returns, and indulging the security state’s demand for total information awareness.

Meta has been on a vision quest for business ventures that might rival (or bolster) its core advertising business ($51.2 billion in Q3 2025, up 26% year over year). It tried and failed to take on the global financial system with a cryptocurrency called Libra, before stripping it down and selling what remained. It tried and failed to enter hardware with Building 8, which became Portal, which became nothing. Lacking his own currency or device, Zuckerberg made a bet that he could graft a virtual interface onto the digital and physical world (while pocketing a few more advertiser bucks along the way).

Aberration vs. symptom

If you are reading this in the year 2025 A.D., you may have noticed there are many similarities between our former inevitable future (the metaverse) and our current inevitable future (generative artificial intelligence). While the word “metaverse” was not uttered once on Meta’s most recent earnings call, executives gushed about generative AI and anticipated “notably larger” growth in capital expenditures in 2026 than in 2025, driven primarily by the AI infrastructure overbuild. The company expects to lose $72 billion on artificial intelligence through 2025. Reality Labs is expected to reallocate some metaverse funding to Meta’s Ray-Ban smart glasses (pitched as a new AI hardware product) that have seen huge growth in sales, even as the public galvanizes against the return of glassholes.

There is the matter of narrative. The metaverse was hailed as “the successor to the mobile internet,” whereas artificial intelligence is “the next general-purpose technology” that will revolutionize human civilization.
Just as the metaverse’s future was so obviously entwined with surveillance and enclosure, so too is the project of remaking the digital world for AI agents, regardless of whether they will ever exist, let alone work.

There is the tiny problem of the numbers. The metaverse got multi-trillion-dollar TAMs by reclassifying all digital activity; artificial intelligence gets multi-trillion-dollar GDP contribution estimates by assuming unprecedented productivity improvements and sidestepping questions about the $2 trillion in revenue it needs by 2030 to justify capital expenditures on AI infrastructure.

There is also the burning question of demand. In the metaverse era, startups offered crypto-tokens and complicated (Ponzi) schemes to artificially inflate demand. Today, tech firms are “investing” billions in AI startups but requiring those dollars be spent on the investor’s own cloud compute. Subsidizing your own revenue growth to impress Wall Street and create the illusion of organic demand is a tale as old as our tech sector’s origins. How will it go this time?

And then there is the question of the fate of our physical world. Intel estimated the metaverse might have required a thousandfold increase in computing capacity, powered by data centers whose energy and environmental costs would be excluded from glossy demos and decks. The metaverse also prominently featured cryptocurrency, which itself demanded substantial amounts of energy. One White House report notes that “from 2018 to 2022, annualized electricity from global crypto-assets grew rapidly, with estimates of electricity usage doubling to quadrupling,” landing somewhere between 120 and 240 billion kilowatt-hours per year. On the lower end, that’s more than the entirety of Argentina’s electricity usage; on the higher end, it would rival Australia’s.

Had Meta succeeded, we would’ve built out much more energy-intensive computational infrastructure with a growing ecological cost. But we also would’ve fleshed out supply chains to extract and deliver critical minerals, meaning we would likely intensify the child and slave labor that already figures prominently into these enterprises. A good thing, then, that Meta abandoned this path.

Ironically enough, Nvidia offers a bridge between the two worlds: the fusion of the dead metaverse and the living generative AI hype in the “Omniverse.” In The New Yorker’s 2023 profile of Nvidia chief executive Jensen Huang, he shows off “Diane,” a hyper-realistic avatar with blackheads on her nose and an “uncanny shimmer” in her eyes. “We’re working on that,” the specialist shares with the reporter. The goal is to speak whole universes into existence. The writer “felt dizzy” and shared that “I thought of science fiction; I thought of the Book of Genesis.” If that reaction is any guide, Nvidia may well succeed with its proselytizing where Meta failed.

It would be a mistake to simply celebrate the death of the metaverse. Instead, we should understand why such a delusional fervor took hold so that we can inoculate ourselves as the next one spreads.
I teach AI to editorial and PR teams for a living, and if there’s one thing that excites and engages them more than any other, it’s vibe coding. The highly visual and interactive projects my students create with vibe-coding tools often turn me into the person taking notes.

Vibe coding is definitely having a moment. It’s arguably the most impactful thing to come out of the field of generative AI in the past year, at least as far as applied AI goes. Broadly, vibe coding is the practice of using AI to create not just “content,” but webpages, apps, and experiences: software people can actually do things with. And you don’t need to know a lick of code: The AI will take your plain-language prompts and do all the programming for you, whipping up pages or even entire websites in minutes.

The thrill of the first click

The feeling you get the first time you vibe-code something is similar to what you probably felt the first time you asked ChatGPT to write an essay. You feel incredibly empowered, and maybe even a bit fooled. “It can’t be that easy,” is a common thought. And you’d be right: Vibe-coded experiences are visually and technically impressive, but they are almost always one-offs. Turning them into stable tools you can use on an ongoing basis typically requires a wider set of software and developer skills.

Nonetheless, vibe coding has the potential to be transformative for storytelling, newsrooms, and the media at large. At last, the people crafting content are no longer constrained by the tools forced upon them by their organizations. I remember the publication I worked at in the early days of blogging didn’t even have a gallery tool for readers to quickly scroll through images. Today, even absent AI, there are many platforms and plug-and-play tools to choose from, but they rarely have all the features you want. In any case, incorporating new software is typically a lengthy process in organizations.

With vibe coding, creators can now build experiences that are tailored to the content, not the other way around. Like I mentioned at the outset, this often ignites an enthusiasm in storytellers and domain experts, which is leading to a fantastic uncorking of creativity as more journalists dabble with vibe coding, like an interactive explorer of Newark’s municipal service data or a webpage that turns wildfire point data into Datawrapper-ready hexagon maps.

The challenge for media organizations is to translate that enthusiasm into deeper audience engagement, and to do it in an ongoing way. That, however, requires an approach that goes beyond simply giving reporters and editors permission to experiment. It requires new skills, specialized tools, and above all a culture shift.

Turning vibe coding into a team sport

The skill of vibe coding isn’t that different from “normal” AI skills, which is to say structured prompting and understanding how to collaborate with AI will get you a long way.
But to get the most out of vibe coding, it helps to think first about what inputs you need (beyond your story) and find examples of other interactive experiences that are similar. Most importantly, think about what your audience wants and how you expect readers to interact with what you’re creating. Data journalists will probably have an advantage here, but it ultimately comes down to thinking a bit more like a product manager than a writer.

You can technically vibe-code in the same platforms where you’re probably already using AI, such as ChatGPT and Claude, but software tailored to vibe coding can generally get you from prompt to product much faster. That said, the services that hew most closely to the familiar chat interface, such as Lovable and Base44, will be less intimidating to non-enthusiasts.

For teams, the goal is to have a go-to platform where anyone can experiment with stories in a safe and private way. Given that the whole point of this software is to create web experiences for pushing out to audiences, that can be tricky, but most vibe-coding platforms have controls for keeping things secure by default while still enabling publishing to a public-facing site when you want to.

To really take advantage of the interest in vibe coding, however, will often require a shift in culture. Many media organizations have rigid structures around product and editorial. AI has already begun to chip away at the wall separating creators and coders, and vibe coding essentially takes a sledgehammer to it. That can be unnerving to product teams used to roadmaps, strict QA, and defined KPIs.

The teams that get this right will properly balance the desire to allow their creative teams to experiment (sometimes publicly) without turning their sites and strategy into the Wild West. Collaboration is key, and doing it successfully means various teams need to be fully aligned on the end goal: creating a pipeline from creativity to new, polished, and highly engaging experiences.

As we move closer to “Google Zero” in 2026, media brands need to do more with the audiences they have, and vibe coding provides a means by which the entire team, not just product managers and engineers, can play a role in crafting that future.

The future favors the flexible

Vibe coding doesn’t need to replace existing newsroom workflows to matter. Its value comes in giving non-coder domain experts like journalists room to test ideas and think beyond the constraints of the CMS without waiting for an opening in the roadmap. Some of those ideas will remain one-offs, and that is fine. Others will point toward new formats worth formalizing.

The organizations that benefit most will be the ones that encourage vibe coding as legitimate editorial exploration, support it with light structure rather than heavy oversight, and accept that the path to stronger audience relationships now runs through experimentation as much as execution.
Where success is concerned (in whatever way you choose to define success), effort matters. So does skill. Experience. Perseverance. A willingness to do what others will not.

And a little bit of luck: A study published in Physics and Society found that while some degree of talent is necessary to be successful in life, almost never do the most talented people reach the highest peaks of success, being overtaken by mediocre but sensibly luckier individuals. Outworking, outthinking, and outlasting other people will definitely improve your odds of success, but still: You need a little luck.

Fortunately, all luck isn’t necessarily random. According to neurologist James Austin in his book Chase, Chance, and Creativity: The Lucky Art of Novelty, there are four basic types of luck, and three of them you at least partly control:

Blind luck. Opportunity, or outcome, without effort. Unforeseen, and more important, uncontrollable. Counting on blind luck? Good luck with that.

Luck from motion. Taking action. Trying things. Doing things. Like when I cold-emailed someone to tell them I admired their work. (Which, although I couldn’t have predicted it, nor intended it to happen, wound up landing me one of my most lucrative and fulfilling ghostwriting gigs.) You can’t luck into meeting the right person unless you meet a number of people; the more people you meet, the more your odds of getting lucky increase. Lucking into meeting the right person is just one form of luck from motion. Because luck is often found, but it’s almost never found on the literal (or figurative) couch.

Luck from awareness. Spotting (and then seizing) opportunities. Being lucky enough to recognize an opportunity is one thing; you’re only truly lucky if you also possess the skills, experience, resources, etc. required to take advantage of the opportunity. While I was lucky enough to live next door to a cofounder of Rosetta Stone, I didn’t have the foresight (or money) to invest. Even so, according to an experiment described by Richard Wiseman in The Luck Factor, people who consider themselves lucky tend to spot and seize more opportunities than people who consider themselves unlucky. Oddly enough, simply believing you’re lucky is causal. In the experiment, people who saw themselves as lucky spotted an opportunity much more quickly (in some cases, people who saw themselves as unlucky never spotted it) and were also quicker to believe it actually was an opportunity, and act on it. The difference was self-perception, not access to opportunity. The key is to pay attention, and believe that paying attention will make a difference. Because it will.

Luck from uniqueness. Austin says this involves distinctive, if not eccentric, hobbies, personal lifestyles, and motor behaviors (think actions). Keep in mind you don’t have to be a little wacky to be unique. I’m decidedly average in all things. But the fact that I know a lot about Formula 1, and Australian rules football, and construction, and Henry VIII’s six wives turned a chance meeting in an airport lounge into an hourlong conversation that sparked a decades-long, mutually beneficial professional and personal relationship. It was partly blind luck I ran into that person. But I was also in motion. And I did quickly realize he was a fascinating conversationalist. And the fact that we share a fairly esoteric blend of interests made us both distinctive, at least to each other.

Bottom line? If you want to get luckier, meet more people. Do more things. Try more things. Try more unusual things.
Be generous, especially with congratulations and praise. And when you see an opportunity, don’t be afraid to ask. Luck sometimes results from the right person saying yes: to your idea, to your startup, to your pitch, to your proposal, to your request. But no one says yes unless you ask. As Steve Jobs said, “Most people never ask, and that’s what separates, sometimes, the people who do things from the people who just dream about them.”

You can’t control blind luck. But you can, to some degree, control the other forms of luck. What you can control is how you respond to chance or circumstance. And, most important, how often you put yourself into a position to be lucky.

Inc.