People who are squeamish about needles will soon have an alternative, as the Food and Drug Administration has approved a pill version of Wegovy that could be available as soon as next month. Novo Nordisk, maker of the GLP-1 weight-loss drug, announced on Monday that it has received FDA approval for its once-daily pill, which has been shown to achieve weight-loss results comparable to those of the injectable Wegovy. The Danish drugmaker said the pill could launch in the U.S. in early January, while it is still awaiting approval from regulatory authorities elsewhere.

The news marks a new era for the spate of popular weight-loss drugs. While there is a 14-milligram oral semaglutide currently on the market (the diabetes drug Rybelsus), the Wegovy pill will be made available in a higher, 25-milligram dose. There's not yet a pill version of Ozempic, which is also made by Novo Nordisk.

"As the first oral GLP-1 treatment for people living with overweight or obesity, the Wegovy pill provides patients with a new, convenient treatment option that can help patients start or continue their weight loss journey," Mike Doustdar, president and CEO of Novo Nordisk, said in a statement. "We are very excited for what this will mean for patients in the U.S."

THE PILL RACE

The race to get a weight-loss pill on the market has been a long time coming, as Novo Nordisk began clinical trials of the Wegovy pill more than two years ago. Eli Lilly, maker of Zepbound and Mounjaro, is currently testing a weight-loss pill called orforglipron in clinical trials, and the drug is part of an FDA priority voucher program that comes with a faster timeframe for reviewing medications. Wegovy is also part of that program. As with the shot, the Wegovy pill will require a prescription from a doctor.

About one in eight American adults were taking a GLP-1 drug as of several weeks ago, according to a KFF Health Tracking Poll released last month. These drugs are especially popular among middle-aged adults: 30% of people between the ages of 50 and 64 reported that they've used one of these drugs at some point, the highest share among any demographic. A pill version could mean that even more people are on weight-loss medications.

"We believe it will expand access and options for patients," Dr. Jason Brett, principal U.S. medical head for Novo Nordisk, told CNN in an interview. "We know there are some patients who just won't take an injectable medication."

COST IN FOCUS

But the cost of these drugs has also become a concern, particularly if insurance doesn't cover them. Last month, President Donald Trump announced a plan to lower the costs of popular prescription drugs, including Wegovy and Ozempic, if people purchase through TrumpRx. The Wegovy pill will be available for as little as $149 per month for the starting dose of 1.5 milligrams as part of the deal the drugmaker struck with the Trump administration last month. That said, the starting dose of these drugs typically doesn't yield the same degree of weight loss and is intended to help people build up a tolerance. Novo Nordisk didn't provide information about pricing for the higher dosage of the pill that was approved by the FDA.

Shares of the Danish drugmaker have surged more than 8% so far this week.
The fintech industry has spent the last decade obsessing over seamless experiences and bringing financial products inside the tools that consumers were already hooked on. Instant approvals, one-click funding, and frictionless onboarding became the benchmarks of success. And for good reason; they removed friction that had frustrated customers for generations. But here's what we're learning as embedded finance matures: The consumers and businesses that use embedded financial products repeatedly and stay loyal to their platforms are not just staying for the technology and the platform. They're staying because when they need it, they're able to get help from people who understand the product, can anticipate their issues, and guide them through decisions that carry real financial weight. Winning takes not just the technology, but the right level of service as well.

EMBED SOLUTIONS, NOT JUST PRODUCTS

Embedded fintech puts financial services directly inside the software people already use to manage their everyday life or business. Here's why the human element matters from day one. Say you're a restaurant owner looking to apply for capital through your point-of-sale system. When a business owner needs to understand their costs, their options, and whether it makes sense to take a bank loan or an advance, reassurance and education are the difference between gaining a customer and losing one. Financial issues carry real consequences, and static, confusing FAQ pages often aren't enough to build the trust needed to work through them.

Tomorrow's embedded solutions cater to the full experience. They offer support from onboarding onward, and a person to call when the need arises. An API that works smoothly is embedded finance. A specialist who walks a customer through why their funding limit changed and what they can do about it? That's a true embedded solution.

SPEED ALONE WON'T HELP

Fast approvals get customers excited. A three-minute application that results in instant access to $50,000 in working capital, or a next-generation embedded credit card, feels impressive at first. But speed without guidance often leads to confusion and, eventually, customers who stop using the product. Consider the restaurant owner we mentioned above who is applying for working capital through their point-of-sale system. They're looking at questions about personal guarantees, wondering what happens to their home if the business hits a rough patch. They're trying to figure out if automatic deductions will interfere with making payroll next week. A self-serve flow can't anticipate or answer questions like this, which makes it hard to build trust or drive long-term use. Human support changes the numbers by helping customers understand what they have access to and how to use it. When someone has access to a human who can help them work out how to use a product, the math is simple: They use it a lot more.

BETTER SERVICE EQUALS MORE ADOPTION

The most successful embedded fintech products combine great products with a team of people who know how to guide customers through complex decisions. It's not about choosing between automation and service. It's about using both where they work best. Even with a smooth digital experience, first-time financial product users benefit from talking to someone who understands their business. The best specialists don't just answer questions. They help customers see how to get the most value. Most businesses don't use the maximum available credit. Not because they don't need it, but because they're unsure when it makes sense to access more. When a specialist reaches out to say, "We noticed you're growing quickly and using about 60% of your available capital. Want to talk about whether increasing your limit makes sense?" two things happen. First, the customer feels seen. Second, they're more likely to access additional capital that actually helps them grow.

THE FUTURE OF EMBEDDED FINANCE

The embedded finance industry is moving past the early "automate everything" phase. The platforms and providers building long-term relationships understand that financial products are fundamentally about trust, and trust comes from consistent, helpful interaction backed by reliable technology. This doesn't mean every fintech provider will build massive support teams. It means the successful ones will figure out how to deliver expert guidance at scale, using technology to make human expertise more accessible and effective, not to replace it. For software platforms, this creates an opportunity. As embedded finance becomes more common, service quality becomes a key way to stand out. It's not just about access to capital and financial services. The platforms that help their customers actually use those services to grow will build stronger relationships than platforms that simply offer another feature.

Luke Voiles is CEO of Pipe.
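To make the 60% example above concrete, here is a minimal, hypothetical sketch of the kind of trigger an embedded finance platform might use to queue that specialist outreach. The account fields, the thresholds, and the should_offer_limit_review helper are illustrative assumptions, not Pipe's or any real provider's logic; the point is that the signal is simple, and the value comes from the human conversation it starts.

```python
# Hypothetical sketch: flag fast-growing accounts nearing their capital limit
# so a human specialist can reach out proactively. All names and thresholds
# are invented for illustration.

from dataclasses import dataclass


@dataclass
class CapitalAccount:
    business_name: str
    credit_limit: float            # total capital made available
    drawn_amount: float            # capital currently in use
    monthly_revenue_growth: float  # e.g. 0.12 means 12% month-over-month


def should_offer_limit_review(account: CapitalAccount,
                              utilization_threshold: float = 0.60,
                              growth_threshold: float = 0.10) -> bool:
    """Return True when a growing customer is using most of their capital,
    i.e. when a specialist conversation about raising the limit makes sense."""
    if account.credit_limit <= 0:
        return False
    utilization = account.drawn_amount / account.credit_limit
    return (utilization >= utilization_threshold
            and account.monthly_revenue_growth >= growth_threshold)


if __name__ == "__main__":
    restaurant = CapitalAccount("Example Bistro", credit_limit=50_000,
                                drawn_amount=31_000, monthly_revenue_growth=0.15)
    if should_offer_limit_review(restaurant):
        print(f"Queue specialist outreach for {restaurant.business_name}")
```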
Aerospace company Starfighters Space, which operates the world's only commercial supersonic aircraft fleet out of NASA's Kennedy Space Center, is down double digits after major gains following completion of its initial public offering (IPO) last week. Starfighters Space's stock price has had a volatile ride in the days since, and Tuesday was no exception. On Tuesday, shares of the stock, which trade under the ticker symbol FJET, were down 55%, just one day after Monday's record gains, when the stock soared a whopping 371%.

The Florida-based company completed its IPO last Wednesday, with shares beginning to trade on the NYSE American the next day. The company raised $40 million in its Regulation A offering. The stock opened at $10 per share and peaked at $17.72 on Thursday before sliding back down to $6.69 on Friday, according to Investor's Business Daily. In midday trading on Tuesday, at the time of this writing, FJET was trading at $14.18 a share.

What is Starfighters Space?

The company owns and operates the largest commercial fleet of supersonic aircraft, which consists of seven Lockheed F-104 Starfighters adapted for space missions, and is based out of NASA's Kennedy Space Center in Cape Canaveral, Florida. Starfighters is developing its StarLaunch program, which uses the jets, capable of flying at Mach 2, to deploy satellites and small payloads into space. Founded just three years ago, in 2022, the company also offers pilot and astronaut training, in-flight testing services, and solutions for both defense and private-sector industries. Current customers include Lockheed Martin, GE, Innoveering, Space Florida, and the U.S. Air Force Research Laboratory.

Starfighters Space financials

Starfighters Space Inc. had an approximate market capitalization of $395 million at the time of this writing. "The public listing . . . reflects growing investor interest in companies providing real-world aerospace capabilities aligned with national security, space access, and advanced testing requirements," Starfighters CEO and founder Rick Svetkoff said in a recent statement. "The Company is well positioned to deliver services to a range of customers through our fast, innovative and unique platform."
Last weekend, a gnarly power outage in San Francisco took out a number of traffic lights, which, in turn, sent a number of self-driving Waymo robotaxis into a sort of fugue state. Instead of driving, some of the Waymos responded to these now-analog intersections by turning on their hazard lights, blocking traffic, and, well, not doing much of anything. There were multiple instances of Waymo cars clogging up roads, turning futuristic technology into glorified bollards. The city quickly asked the company to turn off the service.

The immediate issue has been resolved: The power is back on, and Waymo service had resumed in San Francisco as of Sunday. But questions linger about whether Waymo, or the city, had a plan for a relatively predictable type of municipal emergency (a blackout that crowds communications networks) or how they're adjusting now.

One of the big solutions to AI failures is the much-discussed "human in the loop." The idea: At some point in an automated process, whether it be a job-application screening system or a powerful self-driving car algorithm, humans have the opportunity to intervene and fix the hard stuff that artificial intelligence can't handle. AI doesn't understand every complex situation, the logic goes. So there are safeguards built into a system to ensure that, at some point, an actual live person can set an automated system back on the right path.

The problem, as recent events demonstrated, is that this human in the loop doesn't always answer the phone. Or can't. Over the weekend, a remote assistance team was supposed to help the cars navigate when they encountered a confusing traffic situation, a Waymo spokesperson explains. But networks were overwhelmed because of the power outage, making it difficult for the Waymo Driver software in some of the cars to connect with that team and receive confirmations.

Waymo spokesperson Ethan Teicher tells Fast Company that the company prioritizes safety and tests and refines its emergency preparedness and response protocols on a regular basis. He also defends the company's response to other emergencies, including Hurricane Helene in Atlanta and previous tsunami warnings in San Francisco. "We are committed to continuous improvement, and we will use learnings from the weekend to strengthen our resilience under even the most challenging conditions," Teicher says. "Ahead of entering any city, we work to understand the types of issues that impact the region." Waymo works with local officials and first responders to keep lines of communication open, he adds. "In the event of an emergency, we have operational controls that range from active routing of vehicles to avoid certain locations (for example, in the case of flooding), to fleet reductions or restrictions like we enacted over the weekend in response to the widespread PG&E power outages in the Bay Area," he says.

The California Department of Motor Vehicles says that it was in contact with the City of San Francisco about the incident, and that its officials met with Waymo on Monday morning, too. "The DMV will continue communication with Waymo to discuss broader operational plans, including actions related to emergency response," a spokesperson for the agency added.

The incident is a reminder that while the cars are self-driving, they don't always operate completely independently of public infrastructure, like communications networks. A major proposition of self-driving car companies is that they will be far safer to operate overall than human drivers.
Autonomous vehicles do make serious mistakes, but so do human drivers. Importantly, there are also procedures for first responders who encounter Waymo robotaxis, including ways for the cars to call a remote team when they sense an interaction with police, as Fast Company has previously reported. In this case, though, the backup plan for a complex driving situation seems to have actually exacerbated issues. In at least one reported case, the cars apparently blocked emergency vehicles.

Cruise, the now-shut-down self-driving car company that was owned by General Motors, for example, also had problems with its cars getting confused and blocking traffic because of wireless connection issues. Waymo has emphasized that its cars do not rely on a continuous wireless connection to operate. The company wants the compute to be on board and the cars to make decisions themselves, without needing to rely on cell signals and remote operators, it previously told Light Reading. Still, the power outage is a reminder that the cars sometimes do, in some circumstances, depend on these networks when they need extra assistance.

Now comes the question of what happens in the next blackout, and whether the city (or Waymo) has a plan for this kind of situation. The San Francisco Municipal Transportation Agency did not respond to a request for comment. Terrie Prosper, who handles external communications at the California Public Utilities Commission, says the agency was aware of the Waymo outage and was looking into specifics.

As others have pointed out, this isn't just about San Francisco. Waymo is now operating in several places, including perhaps its greatest challenge yet: New York City, where it is in the initial testing phase. The New York City Department of Transportation tells Fast Company that the city was in regular communication with Waymo about its testing in some neighborhoods and that it was aware of the outage in San Francisco. A spokesperson emphasizes that state law mandates the presence of a safety driver behind the wheel who would be prepared to take over in the event of a blackout.

Waymos have also appeared in Austin and are expected to fully launch in Dallas. A spokesperson for the city of Austin and a spokesperson for the city of Dallas both said their governments are not able to regulate self-driving cars, per state law. The state of Texas did not respond to a request for comment. "While Texas law prohibits cities from regulating AVs, including during emergencies, the City of Austin works with all AV companies on expectations around weather and other emergency scenarios," says Jack Flager, a spokesperson for the city of Austin. "When our staff work with AV companies on the expectations around weather and other emergency scenarios, those expectations include AVs understanding how to properly react to barricades, floodwater, and dark or flashing signals."

As for New York, Oren Barzilay, the president of the FDNY EMS Local 2507, tells Fast Company that an outage like the one in San Francisco would delay emergency response times. "We already have major delays with current traffic conditions, this will only add to a growing issue," Barzilay says. "It is a public safety issue if our crews can't get through to reach victims in a timely manner."
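To illustrate the kind of fallback at issue, here is a small, purely hypothetical sketch of a remote-assistance request with a timeout: if the human in the loop cannot be reached over a congested network, the vehicle falls back to a local behavior instead of waiting in the intersection. None of this reflects Waymo's actual software; the function names, timeout, and fallback action are invented for illustration.

```python
# Hypothetical illustration of a human-in-the-loop call with a network timeout.
# If the remote team never answers (as during a blackout that congests cell
# networks), the car degrades to a local fallback rather than blocking traffic.

import queue
import threading
import time
from enum import Enum, auto


class Action(Enum):
    PROCEED_WITH_REMOTE_GUIDANCE = auto()
    PULL_OVER_AND_WAIT = auto()  # clear the roadway instead of stopping in place


def request_remote_confirmation(reply_queue: "queue.Queue[str]") -> None:
    """Stand-in for a network call to a remote assistance team."""
    time.sleep(30)  # simulate a congested network that never answers in time
    reply_queue.put("proceed")


def handle_ambiguous_intersection(timeout_s: float = 5.0) -> Action:
    reply: "queue.Queue[str]" = queue.Queue()
    threading.Thread(target=request_remote_confirmation,
                     args=(reply,), daemon=True).start()
    try:
        reply.get(timeout=timeout_s)
        return Action.PROCEED_WITH_REMOTE_GUIDANCE
    except queue.Empty:
        # Degraded mode: don't become a "glorified bollard" in the intersection.
        return Action.PULL_OVER_AND_WAIT


if __name__ == "__main__":
    print(handle_ambiguous_intersection())
```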
Meta's decision to end its professional fact-checking program sparked a wave of criticism in the tech and media world. Critics warned that dropping expert oversight could erode trust and reliability in the digital information landscape, especially when profit-driven platforms are mostly left to police themselves. What much of this debate has overlooked, however, is that today, AI large language models are increasingly used to write news summaries, headlines, and content that catch your attention long before traditional content moderation mechanisms can step in.

The issue isn't clear-cut cases of misinformation or harmful subject matter going unflagged in the absence of content moderation. What's missing from the discussion is how ostensibly accurate information is selected, framed, and emphasized in ways that can shape public perception. Large language models gradually influence the way people form opinions by generating the information that chatbots and virtual assistants present to people over time. These models are now also being built into news sites, social media platforms, and search services, making them the primary gateway to obtain information. Studies show that large language models do more than simply pass along information. Their responses can subtly highlight certain viewpoints while minimizing others, often without users realizing it.

Communication bias

My colleague, computer scientist Stefan Schmid, and I, a technology law and policy scholar, show in a forthcoming paper accepted at the journal Communications of the ACM that large language models exhibit communication bias. We found that they may have a tendency to highlight particular perspectives while omitting or diminishing others. Such bias can influence how users think or feel, regardless of whether the information presented is true or false.

Empirical research over the past few years has produced benchmark datasets that correlate model outputs with party positions before and during elections. These datasets reveal variations in how current large language models deal with public content. Depending on the persona or context used in prompting large language models, current models subtly tilt toward particular positions, even when factual accuracy remains intact. These shifts point to an emerging form of persona-based steerability: a model's tendency to align its tone and emphasis with the perceived expectations of the user. For instance, when one user describes themselves as an environmental activist and another as a business owner, a model may answer the same question about a new climate law by emphasizing different, yet factually accurate, concerns for each of them: that the law does not go far enough in promoting environmental benefits, or that it imposes regulatory burdens and compliance costs.

Such alignment can easily be misread as flattery. The phenomenon is called sycophancy: Models effectively tell users what they want to hear. But while sycophancy is a symptom of user-model interaction, communication bias runs deeper. It reflects disparities in who designs and builds these systems, what datasets they draw from, and which incentives drive their refinement. When a handful of developers dominate the large language model market and their systems consistently present some viewpoints more favorably than others, small differences in model behavior can scale into significant distortions in public communication. Bias in large language models starts with the data they're trained on.
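A rough sense of how persona-based steerability can be probed: ask a model the same factual question under different user personas and compare the framing of the answers. The sketch below assumes the OpenAI Python SDK and an API key in the environment; the model name and personas are placeholders, and this is not the benchmark methodology used in the paper.

```python
# Minimal probe of persona-based steerability: same question, two personas.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the env.

from openai import OpenAI

client = OpenAI()

QUESTION = "What are the main criticisms of the new climate law?"

PERSONAS = {
    "environmental activist": "I am an environmental activist.",
    "small business owner": "I own a small manufacturing business.",
}

for label, persona in PERSONAS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,  # reduce run-to-run variation so differences reflect the persona
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Even when every statement in both answers is accurate, differences in which concerns appear first, how much space they get, and what is left out are where communication bias shows up.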
What regulation can and can't do

Modern society increasingly relies on large language models as the primary interface between people and information. Governments worldwide have launched policies to address concerns over AI bias. For instance, the European Union's AI Act and the Digital Services Act attempt to impose transparency and accountability. But neither is designed to address the nuanced issue of communication bias in AI outputs.

Proponents of AI regulation often cite neutral AI as a goal, but true neutrality is often unattainable. AI systems reflect the biases embedded in their data, training, and design, and attempts to regulate such bias often end up trading one flavor of bias for another. And communication bias is not just about accuracy; it is about content generation and framing. Imagine asking an AI system a question about a contentious piece of legislation. The model's answer is shaped not only by facts, but also by how those facts are presented, which sources are highlighted, and the tone and viewpoint it adopts.

This means that the root of the bias problem is not merely in addressing biased training data or skewed outputs, but in the market structures that shape technology design in the first place. When only a few large language models control access to information, the risk of communication bias grows. Apart from regulation, then, effective bias mitigation requires safeguarding competition, user-driven accountability, and regulatory openness to different ways of building and offering large language models.

Most regulations so far aim at banning harmful outputs after the technology's deployment, or forcing companies to run audits before launch. Our analysis shows that while prelaunch checks and post-deployment oversight may catch the most glaring errors, they may be less effective at addressing subtle communication bias that emerges through user interactions.

Beyond AI regulation

It is tempting to expect that regulation can eliminate all biases in AI systems. In some instances, these policies can be helpful, but they tend to fail to address a deeper issue: the incentives that determine the technologies that communicate information to the public. Our findings clarify that a more lasting solution lies in fostering competition, transparency, and meaningful user participation, enabling consumers to play an active role in how companies design, test, and deploy large language models. The reason these policies are important is that, ultimately, AI will not only influence the information we seek and the daily news we read, but it will also play a crucial part in shaping the kind of society we envision for the future.

Adrian Kuenzler is a scholar-in-residence at the University of Denver and an associate professor at the University of Hong Kong. This article is republished from The Conversation under a Creative Commons license. Read the original article.