An unprecedented, potentially record-breaking heat wave is expected to hit much of the American Southwest, from California to Colorado, this week, and experts are concerned about how temperatures will affect the region's already-low snowpacks. Temperatures in the Los Angeles area will be 15 to 25 degrees above seasonal norms on Thursday, March 12, and Friday, March 13, according to the National Weather Service (NWS), reaching into the 90s along the coast and potentially above 100 degrees in some areas. "Given the unprecedented length and magnitude of this extreme heat wave, heat stress will be increasing each day, especially in areas that aren't used to the heat, like the coastal areas," forecasters wrote.

Scorching temperatures will stretch through the Southwest. Tucson, Arizona, could see its earliest 100-plus-degree day next week; the March monthly record high for Tucson is 99 degrees, according to the NWS. Parts of Colorado are forecast to reach into the 90s, which would break state records. Some parts of the Southwest could see triple-digit temperatures when they have never before experienced that this early in the year, climate scientist Daniel Swain said on a recent livestream. The heat wave is expected to last for the foreseeable future, he added, with a 10- to 14-day stretch of extraordinarily anomalous weather. "It is quite likely that many cities and probably many states will set new all-time high March temperature records, as well as new records for the month of March cumulatively overall," Swain said. "All the way from Colorado to California, I think we're going to hit records everywhere in between."

Heat wave threatens already-low snowpacks

The extreme March heat wave comes on the heels of the warmest winter on record for the majority of the American West and Southern Plains, based on 131 years of climate data. It was also an exceptionally dry winter across the West, which has left the region, including the Sierra Nevada, with below-average snowpacks.
Many Western communities, including in California, depend on snowpacks as crucial natural reservoirs: they store water through the winter and release it over the spring and summer. The heat wave, though, threatens to melt the already-sparse snow, which means the reservoirs may not have enough water for residents and farms later in the year. The current snowpack is under 50% of its average throughout much of the American West, Swain said. "Every single basin, with no exceptions in the Western U.S. . . . is below average."

No "miracle March" this year

Meteorologists and climate experts use the term "miracle March" to describe the way the month can restock reservoirs, even after a winter without much water or snow. Cold, wet March conditions can turn a dry winter into a not-so-dry winter, Swain said. But this year, Swain noted, that is not going to happen. The record-breaking heat wave brings long-term concerns. Along with reducing the amount of water in reservoirs, it could set up dry soil conditions for the summer, which increases the risk of wildfires. The fact that these temperatures are coming so early in the year is also a concern for climate experts. "We're about to experience the hottest March temperatures we've ever seen across a lot of the Western U.S.," Swain said. "This is going to be a heat wave that people aren't going to be able to ignore because of when it's happening."
Just days after settling with the Department of Justice (DOJ), ticketing company Live Nation is again under fire after internal messages between employees revealed bragging about taking advantage of ticket buyers. In message exchanges from 2022, two regional directors of ticketing for Live Nation amphitheaters, Ben Baker and Jeff Weinhold, boasted about the prices they were able to get away with charging customers for ancillary fees, including things like parking, lawn chair rentals, and VIP access, with Baker writing, "I gouge them on ancil prices." In one exchange, Weinhold shared how he was able to charge $250 for VIP parking at a venue. "These people are so stupid," Baker replied. "I almost feel bad taking advantage of them." In another series of messages, Baker says he charges customers $50 to park in the grass and $60 for closer grass. "Robbing them blind baby," he added. "That's how we do it."

The DOJ's antitrust trial against Live Nation and Ticketmaster began this month, with the government alleging that Live Nation's control of Ticketmaster was monopolizing the ticketing industry and leading to unfair pricing for consumers. Last week, Live Nation filed a request for the judge to exclude six sets of Baker and Weinhold's messages from the trial, arguing that they would unfairly bias the jury. The DOJ and attorneys general for the states suing Live Nation opposed the request, and several media organizations later petitioned for the documents to be unsealed. On Monday, the DOJ and Live Nation reached a surprise settlement, letting the company retain ownership of Ticketmaster. But despite a legal win for Live Nation, the Baker and Weinhold messages have dealt another blow to the brand's reputation. In a statement to Fast Company, Live Nation condemned Weinhold and Baker's conduct, adding that its own executives were unaware of the exchanges prior to the trial documents being unsealed.
"The Slack exchange from one junior staffer to a friend absolutely doesn't reflect our values or how we operate," reads the statement. "Because this was a private Slack message, leadership learned of this when the public did, and will be looking into the matter promptly." A spokesperson for Live Nation emphasized that Baker and Weinhold's behavior was against company policy, and that their pricing exceeded limits put in place to protect ticket buyers. "We are digging into it now that we are aware," the spokesperson added. "This is not at all an acceptable way to behave or talk, and important to note that these are not executives."
At a recent AI summit in New Delhi, Sam Altman warned that early versions of superintelligence could arrive by 2028, that AI could be weaponized to create novel pathogens, and that democratic societies need to act before they are overtaken by the technology they have built. These concerns are widely shared across the industry. Geoffrey Hinton, the Nobel laureate known as the "godfather of AI," has warned that creating digital beings more intelligent than ourselves poses a genuine existential threat. Mustafa Suleyman, CEO of Microsoft AI, devoted much of his book The Coming Wave to the argument that AI's fusion with synthetic biology could put the tools to engineer a deadly pandemic within reach of a single individual. These are not warnings about a distant future. Last week, a clash over who controls AI, and on what terms, led to a complete collapse in the company's relationship with the Pentagon.

When politicians and business leaders try to make sense of issues like these, they are often tempted to look to the pharmaceutical industry for a regulatory model. Senator Richard Blumenthal, one of the few legislators actively pushing for meaningful AI regulation, has proposed that the way the U.S. government regulates the pharmaceutical industry can serve as a model for AI oversight. The analogy makes intuitive sense. The pharma model shows that strict licensing and oversight of potentially dangerous emerging technologies can limit threats without placing undue restrictions on innovation. The instinctive attraction of this approach isn't confined to legislators. Many companies are applying the same logic internally, whether consciously or not, managing AI risk through stage-gate reviews, pre-deployment testing, and post-launch monitoring. The pharma model, in other words, is already the de facto governance framework for much of the industry. The problem is that it's the wrong framework, and the differences are not just technical but existential.
Three disanalogies that matter

Pharmaceutical regulation works because the barriers to entry are high, the product is physical and controllable, and the development cycle is slow enough for oversight to keep pace. None of these conditions holds for AI. First, the barriers to entry are very different. Bringing a new drug to market costs an average of $1.1 billion, according to a 2020 study published in the Journal of the American Medical Association. The infrastructure alone (laboratories, clinical trial networks, manufacturing facilities) limits production to a relatively small number of identifiable companies that regulators can monitor. AI has no equivalent friction. Capable models can be built for a fraction of that cost, fine-tuned on consumer hardware, and deployed globally from a laptop. The universe of actors a regulator would need to track is not a handful of identifiable companies; it is potentially anyone, anywhere. Second, a pharmaceutical product is physical.
Manufacturing it requires raw materials, specialized equipment, and distribution logistics. All of this creates friction that regulators can exploit by imposing oversight checkpoints. But code has no such friction. Once released, an AI model's weights can be copied number-for-number and shared across borders far more quickly than any physical weapon or industrial system. Its marginal cost of replication is effectively zero. And you cannot recall software the way you recall a contaminated drug. Once it is in the wild, it stays in the wild. Even capabilities that are delivered purely through cloud access are vulnerable to replication, and thus to the breaking of corporate or regulatory guardrails. In just the last month, Anthropic disclosed that three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) had used 24,000 accounts to generate over 16 million exchanges with Claude, extracting its most advanced capabilities through a technique called distillation. The Chinese labs did not need to infiltrate a supply chain or build expensive factories. They only needed API access and carefully crafted prompts, routed through proxy networks designed to evade detection. There is no pharmaceutical equivalent of this replicability.

The final crucial disanalogy is speed. The pharma approval pipeline assumes that a product will go through years of controlled testing before it reaches the public. But AI models evolve on software timelines. Capabilities improve not only through hardware gains but through software updates, new training methods, and frequent model releases that can produce meaningful jumps in weeks rather than years. Anthropic, for instance, shipped two major Claude releases within ten weeks. The iteration cycle is so fast that by the time any pharma-style approval process could hope to evaluate a model, that model would already be obsolete, replaced by something far more powerful for which the evaluation process had not even begun.
Why "test, deploy, monitor" doesn't work

The problem isn't confined to government. The same pharma-shaped thinking that distorts regulatory frameworks has taken root inside organizations, and it leaves them exposed for the same reasons. Pharma-type risks are familiar: a product might have harmful side effects, so you test it before deployment, monitor it afterward, and pull it back if something goes wrong. Even without an external regulator, many companies are applying this logic to AI internally, managing risk via the familiar means of stage-gate reviews, pre-deployment testing, and post-launch monitoring. It feels responsible. It feels sufficient. This is precisely the danger. Of course, stage-gate reviews and pre-deployment testing are not worthless. They catch real errors, enforce discipline, and create a paper trail that demonstrates due diligence to boards and regulators. Any organization that has implemented them is better off than one that has done nothing. But these frameworks create a false sense of coverage. The risk they manage is the risk they were designed for: product defects, adverse effects, quality-control failures. AI's risk profile has a different shape entirely. It is defined by the potential for irreversibility, rapid proliferation, and misuse. Not every AI-driven outcome will trigger these risks. But unlike a defective product, you cannot issue a recall once the damage is done. This combination of potential threats means that the familiar toolkit of managed risk simply doesn't fit, and organizations that believe it does are accepting exposures they haven't mapped. It is precisely to meet these challenges that we developed the OPEN and CARE frameworks for managing AI innovation and risk. The CARE framework, in particular, provides a structured methodology for governing AI risk and is the foundation for the recommendations that follow.
Build governance for AI risk

The CARE framework works through four stages: Catastrophize, identifying what could go wrong; Assess, prioritizing those risks; Regulate, implementing controls; and Exit, planning for when those controls fail. Applied to your organization's AI exposure, the framework points toward five immediate actions.

1. Surface your shadow AI exposure. Ask your direct reports one question: What AI tools are you using that weren't provided by the company? The answers will tell you how large the gap is between the AI your organization officially uses and the AI your people are actually relying on.

2. Map your irreversibility points, and your fallbacks. Identify the AI-dependent processes where a failure would be irreversible or highly damaging, such as automated customer communications, AI-assisted code pushed to production, or algorithmic hiring screens. Ask whether your current safeguards assume you can catch and correct errors before they reach the outside world. If they do, redesign them, and build explicit fallback procedures for when they fail anyway.

3. Lock down your data exposure. Every AI tool your organization touches is a data pipeline running in both directions. Classify your data into tiers (public, internal, confidential, restricted) and map which AI tools are authorized for each tier. Audit your vendor agreements for training-data clauses. The moment proprietary data enters a third-party system, your ability to recall it is gone.

4. Red-team for misuse, not just malfunction. Red-teaming for malfunction asks, "What if this breaks?" Red-teaming for misuse asks, "What if this works exactly as intended and someone uses it for the wrong purpose?" As the CARE framework's Catastrophize phase emphasizes, you need both.

5. Assign clear executive ownership. None of the above matters if accountability is diffused across committees. Designate a single executive who owns AI risk the way your CFO owns financial risk.
That person needs authority, budget, and a direct line to the board.

The real stakes

For decades, pharma-style regulation has been one of the most successful bets in business: a framework that protects the public without strangling the industry. But the model is insufficient for AI. At the governmental level, serious people are reaching for serious solutions. Sam Altman's call at the New Delhi summit for an international regulatory body modeled on the International Atomic Energy Agency reflects a clearer-eyed view of what kind of technology this is: one that demands oversight frameworks commensurate with its actual risk profile, not models borrowed from industries that don't share its characteristics. Business leaders should follow the same path. The category of problem that governments are grappling with at the international level is the same category of problem you are grappling with inside your organization. Design your governance accordingly, for the technology you actually have, not the one you wish you were dealing with.