
2025-02-07 11:30:00| Fast Company

Branded is a weekly column devoted to the intersection of marketing, business, design, and culture.

In the run-up to the Super Bowl, the National Football League sought to send a statement about its engagement with issues around race and diversity. In fact, it ended up sending two statements, and together they come off as conflicting messages. On the one hand, commissioner Roger Goodell reaffirmed the league's diversity, equity, and inclusion efforts aimed at goals such as increasing the number of non-white coaches, despite the recent wave of DEI pullbacks announced by businesses from Target to McDonald's to Meta, not to mention the Trump administration's noisy demonization of such policies. "I believe that our diversity efforts have led to making the NFL better," Goodell said at his Super Bowl news conference this week. "It's attracted better talent. We think we're better if we get different perspectives, people with different backgrounds, whether they're women or men or people of color. We make ourselves stronger and we make ourselves better when we have that."

On the other hand, just one day later, The Athletic reported that the NFL would remove the "End Racism" messaging that has been stenciled over the back of the end zones in Super Bowl games since 2021. (This year, the end zone messages will be "It Takes All of Us" and "Choose Love.") Even critics who acknowledge that an end zone stencil is little more than a gesture nevertheless complained that removing it was a capitulation designed to avoid the wrath of Trump, who is scheduled to attend the game.

[Photo: Ryan Kang/Getty Images]

The tension between these two messages isn't a triviality for the NFL, a true mass brand that presides over one of the few remaining tentpole events in the U.S., regularly attracting an audience of 100 million or more. As both a brand and a business, the league has been grappling with issues of race and diversity since long before the current DEI debate.
Some of the diversity efforts Goodell was talking about came about precisely because of a very notable dearth of Black coaches and general managers. Among other policies, the so-called Rooney Rule, implemented in 2003, requires teams to interview minority and female candidates for coaching and other positions. (It is named after Dan Rooney, the Pittsburgh Steelers owner who was head of the league's diversity committee at the time.)

Opinions on the effectiveness of this and other NFL diversity efforts are mixed. The league says 53% of league and team staffs are women and minorities, and half of last year's eight head-coach openings were filled by non-white candidates. But of seven more recent head-coach openings, only one is expected to be filled by a Black coach. And some minority-candidate interviews are viewed as basically performative gestures by teams that have already made a decision. A little more than a quarter of head coaches are minority males, compared to about 70% of players.

While that progress may be limited, the hiring rules at least acknowledged the legitimacy of the underlying issue. Similarly, when the league first used the "End Racism" stencil not long after the slaying of George Floyd, it may have been just a gesture, but it was one that acknowledged racism as an ongoing issue. A few years earlier, then-49ers quarterback Colin Kaepernick began to kneel during the national anthem, to protest exactly the kind of brutality that later took Floyd's life, turning the NFL into a culture-war forum. (Trump famously said protesting players were "SOBs" who should be tossed off the field.) At a minimum, the league sought to project an image that embraced diversity. On-field protests have faded, but the rhetorical attacks on public diversity efforts and messaging have only gotten louder. America First Legal, an organization founded by Trump adviser Stephen Miller, has pointed to the Rooney Rule as an example of "anti-meritocratic discrimination" in the employment process.
If we can take Goodell at his word, the NFL is unmoved by this argument. "We're not in this because it's a trend to get in or a trend to get out of it," he said at the news conference this week, referring to the league's DEI work. "Our efforts are fundamental in trying to attract the best possible talent into the National Football League, both on and off the field."

Meanwhile, a league spokesman told The Athletic that the shift in the end zone messages is simply a response to recent tragedies, including the California fires, the New Orleans terror attack, and the fatal Washington, D.C., air collision. But it's hard not to see it as at least partly a response to the political climate (and, uh, notably, conservatives have baselessly implicated DEI policies in both the fires and the air collision). The upshot is a muddled message that seems less like a committed game plan and more like a punt.


Category: E-Commerce

 


2025-02-07 11:00:00| Fast Company

Zoom made a name for itself during the pandemic, becoming synonymous with video conference calls. But the company recently changed its name from “Zoom Video Communications Inc.” to simply “Zoom Communications Inc.,” a sign that it’s pushing beyond video. Other Zoom offerings include a Team Chat product comparable to Slack, a collaborative document platform that integrates with Zoom meetings, business phone features, and an AI companion.

Zoom CEO Eric Yuan spoke to Fast Company about the company’s offerings and ambitions beyond video, his vision for the future of AI-powered work, and what the return to the office has meant for how people use Zoom. This interview has been lightly edited and condensed for clarity.

You recently dropped “video” from your company name. What does that mean for the future of Zoom?

When I started Zoom in 2011, the mission was very simple: to make video communication frictionless. And that’s pretty much what we did. So, when we started, everything centered around video. Now, you look at what we’re doing today: Way beyond video, we have a full workplace platform. We have Zoom Phone, Contact Center, Team Chat, Whiteboard, Zoom Docs. Essentially, our new mission is to build an AI-first work platform for human connection. It’s not only centered around video anymore.

And what role is AI going to play in all that?

Before everyone talked about generative AI, we already heavily invested in AI, some traditional AI and some generative AI. We have a smart team and built our own large language model as well, even before ChatGPT. Today, I open up my Zoom Workplace and I still spend a lot of time manually doing so many things. I check my email, look at my channel messages, phone calls, calendars, meetings, and sometimes I need to write up meeting notes. A lot of manual work. I think AI can completely change that. Essentially, AI will become my personal assistant.
As step one, it can free up a lot of time, make my work more productive, and help coordinate so many things: booking travel and managing travel plans, making scheduling meetings much easier, leveraging agentic technology to improve productivity. Step two is even more interesting. We all work five days a week. I think in the next 10, maybe 15 years, the four-day working week might become a standard because of AI technology. Step two of digital assistant technology is more like my digital twin: a personal large language model with my personal contacts, knowledge, skills, and everything. I can even send my digital twin to join a meeting. Say you and I are working on a contract. You and I need to look at all the terms, negotiate, spend hours, days, or weeks to finalize the contract. In the future, I send my digital twin, you send your digital twin, and we let them work together and come up with a preliminary contract and just sign off.

Plenty of companies are working on AI, office software, and video conferencing. What sets Zoom apart?

I think on many fronts we definitely differentiate ourselves. One thing is our innovation velocity. We stay very close with the customers, really understand their pain points, to be the first one to come up with a solution. Number two is really about our philosophy. We want to build a product that just works. When you look at our customers, when they’re using Zoom versus competitors’ products, their feedback is: I really enjoy using Zoom because it’s a very simple, intuitive experience, no learning curve, and in any network environment and on all kinds of devices, it just works. The third thing is really about AI. We just finished our Q4 and we’re working on creating our quarterly board slide deck. Quite a few team members have to get all the information from all our systems and work on our slides: many days’ work just to get a quarterly slide deck. What if we leveraged AI and could tell the AI: please create our Q4 slide deck?
The AI agent will take action proactively, look at all the systems, grab the information and our board slide deck template, and create slides automatically. It used to be that every meeting, our chief of staff would write down all the notes and create a Zoom Doc to share. Today, we leverage Zoom AI and, after each meeting is over, we automatically create Zoom Docs with all the action items and insights, and also leverage our agent to create some tasks assigned to me or assigned to you. It’s a kind of AI-first experience.

How has the return to the office affected how people are using Zoom?

First of all, the way they use the conference room is very different. Prior to COVID, say you and I joined from a conference room, and some people joined remotely; probably they were in listening mode, because the conversation is driven by the people in the conference room. Now, it’s very different. Even if people join remotely, they want to have the same experience as the ones sitting in the conference room. Let’s say there are five people in the conference room. From the remote side, they want to see each of those people. The conference room experience is different, and we are much better positioned than other competitors.

Another change is, when you work remotely, there are probably more conferencing meetings and phone calls. Now that it’s back to the office, especially for internal meetings, sometimes it’s just a walk to your desk or your office, and we can talk. Asynchronous collaboration is used more frequently. We have a Zoom Team Chat solution. People use Zoom Team Chat more and more, and create more Zoom Docs. If you cannot reach your teammates in real time, create a Zoom Doc, share it to the Team Chat. Other people can look at it later on. These async collaboration capabilities are becoming more and more popular, together with the AI.

And Zoom is often associated with office work, but you also recently built Zoom Workplace for frontline workers.
What motivated that, and what does that expansion look like?

We build a workplace platform. However, there are different use cases for some vertical markets: for educators, the financial industry, healthcare, and frontline workers. The use case is different and the feature set is also different. You can’t build one feature set to serve all these different use cases. The frontline workers’ market is big. A lot of our customers already deploy the Zoom platform. However, they gave us feedback that they need some features for their frontline workers. So, back to our innovation philosophy: when customers share with us the pain point, what can we do? Listen to them and build a new service. That’s how we built Zoom Workplace for frontline workers, for educators, and for healthcare as well. I think the market is big and we wanted to build more vertical solutions for these different use cases.

And as you listen to customer needs, how do you decide which features to build out?



2025-02-07 11:00:00| Fast Company

Twenty-four hours before the White House and Silicon Valley announced the $500 billion Project Stargate to secure the future of AI, China dropped a technological love bomb called DeepSeek. DeepSeek R1 is a whole lot like OpenAI's top-tier reasoning model, o1. It offers state-of-the-art artificial thinking: the sort of logic that doesn't just converse convincingly, but can code apps, calculate equations, and think through a problem more like a human. DeepSeek largely matches o1's performance, but it runs at a mere 3% of the cost, is open source, can be installed on a company's own servers, and allows researchers, engineers, and app developers to look inside and even tune the black box of advanced AI.

In the two weeks since it launched, the AI industry has been supercharged with fresh energy around the products that could be built next. Through a dozen conversations with product developers, entrepreneurs, and AI server companies, it's clear that the worried narratives most of us have heard about DeepSeek (it's Chinese propaganda, it's techie hype) don't really matter to a free market. "Everyone wants OpenAI-like quality for less money," says Andrew Feldman, CEO and cofounder of the AI cloud hosting service Cerebras Systems, which is hosting DeepSeek on its servers. DeepSeek has already driven down OpenAI's own pricing on a comparable model by 13.6x. Beyond cost, DeepSeek is also demonstrating the value of open technologies versus closed, and wooing interest from Fortune 500s and startups alike. OpenAI declined an interview for this piece.

"Not to overstate it, but we've been in straight-up giddy mode over here since [DeepSeek] came out," says Dmitry Shevelenko, chief business officer at Perplexity, which integrated DeepSeek into its search engine within a week of its release. "We could not have planned for this. We had the general belief this is the way the world could go. But when it actually starts happening you obviously get very excited."
Looking back five years from now, DeepSeek may or may not still be a significant player in AI, but its arrival will be considered a significant chapter in accelerating our era of AI development.

The new era of low-cost thought, powered by interoperability

Krea, an AI-based creative suite, had long considered adding a chatbot to the heart of its generative design tools. When DeepSeek arrived, the decision was made. Krea spent 72 hours from the time R1 was announced to integrating it as a chat-based system to control its entire editing suite. R1 was released on a Monday; by that afternoon, the team realized that DeepSeek's APIs worked with their existing tools, and that it could even be hosted on their own machines. By Tuesday, they were developing a prototype, coding and designing the front end at the same time. By 3 a.m. Wednesday, they were done, so they recorded a demo video and shipped it by 7 a.m. "That's part of our culture; every Wednesday we ship something and do whatever it takes to get it done," says cofounder Victor Perez. "But it's a type of marketing that's actually usable. People want to play with DeepSeek, and now they can do it with Krea."

[Source Images: Gunes Ozcan/Getty Images]

Krea's story illustrates how fast AI is moving, and how product development in the space largely hinges on whatever model can deliver on speed, accuracy, and cost. It's the sort of supply-meets-demand moment that's only possible because of a shift underway in AI development. The apps we know are increasingly powered by AI engines. But something most people don't realize about swapping a large language model in and out, like R1 for o3, or ChatGPT for Claude, is that it's remarkably easy on the back end. "It would literally be a one-line change for us," says Sam Whitmore, cofounder of New Computer. "We could switch from o3 to DeepSeek in, like, five minutes. Not even a day. Like, it's one line of code."
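The "one line" works because most hosted models now speak the same chat-completions wire format that OpenAI popularized. A minimal sketch of what changes between providers; the endpoint URLs and model IDs below are illustrative assumptions, not details from the article:

```python
# Sketch of the "one-line change": with OpenAI-compatible hosts, only the
# base URL (and model name) differs between providers; the request payload
# keeps the same shape. URLs and model IDs here are illustrative assumptions.

BACKENDS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "o3-mini"},
    "deepseek": {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-reasoner"},
}

def chat_request(backend: str, prompt: str) -> dict:
    """Build the same chat-completions request for any compatible host."""
    cfg = BACKENDS[backend]
    return {
        "base_url": cfg["base_url"],  # the "one line" that changes
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Swapping providers leaves the message payload untouched:
a = chat_request("openai", "Summarize this article in one sentence.")
b = chat_request("deepseek", "Summarize this article in one sentence.")
assert a["json"]["messages"] == b["json"]["messages"]
```

In practice the swap is often literally one constructor argument: the official OpenAI Python SDK accepts a `base_url` parameter, so pointing the same client object at a different compatible host is a single edited line.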
A developer only needs to point a URL from one AI host to another, and more often than not, they're discovering the rest just works. The prompts connecting software to AI engines still return good, reliable answers. This is a phenomenon we predicted two years ago with the rise of ChatGPT, but even Perez admits his pleasant surprise. "Developers of [all] the models are taking a lot of care for this integration to be smooth," he says, and he credits OpenAI for setting API standards for LLMs that have been adopted by Anthropic, DeepSeek, and a host of others. "But the [AI] video and image space is still a fucking mess right now," he laughs. "It's a completely different situation."

Why DeepSeek is so appealing to developers

In its simplest distillation, DeepSeek R1 gives the world access to AI's top-tier thinking machine, which can be installed and tuned on local computers or cloud servers rather than connecting to OpenAI's models hosted by Microsoft. That means developers can touch and see inside the code, run it at a fixed cost on their own machines, and have more control over the data.

Called inference models, this generation of reasoning AI works differently than large language models like ChatGPT. When presented with a question, they follow several logical paths of thought to attempt to answer it. That means they run far slower than your typical LLM, but for heavy reasoning tasks, that extra time is the cost of thinking.

Developing these systems is computationally immense. Even before the advanced programming methods were involved, DeepSeek's creators fed the model 14.8 trillion pieces of information known as tokens, which constitute a significant portion of the entire internet, notes Iker García-Ferrero, a machine-learning researcher at Krea. From there, reasoning models are trained with psychological rewards. They're asked a simple math problem. The machine guesses answers. The closer it gets to right, the bigger the treat. Repeat countless times, and it learns math.
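The treat-for-better-guesses loop described above can be caricatured in a few lines. This toy hill-climb is an illustration of reward-driven learning only, not DeepSeek's actual training recipe (the R1 report describes large-scale reinforcement learning over verifiable rewards); the target, step size, and step count are arbitrary choices:

```python
import random

# Toy caricature of reward-based training: the "model" is a single number,
# nudged toward a target answer by keeping whichever guess earns the bigger
# reward (i.e., lands closer to correct). Not DeepSeek's actual method.

def train(target: float, steps: int = 2000, seed: int = 0) -> float:
    rng = random.Random(seed)
    guess = 0.0
    for _ in range(steps):
        candidate = guess + rng.uniform(-1.0, 1.0)  # propose a variation
        # "The closer it gets to right, the bigger the treat": keep the
        # candidate only if it out-scores the current guess.
        if abs(target - candidate) < abs(target - guess):
            guess = candidate
    return guess

learned = train(target=42.0)
assert abs(learned - 42.0) < 1.0  # after many repeats, it "learns" the answer
```

Real reasoning models replace the single number with billions of weights and the distance-to-answer reward with checks such as "did the final boxed answer match?", but the shape of the loop (propose, score, reinforce) is the same.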
R1 and its peers also have an additional step known as instruction tuning, which requires all sorts of hand-made examples to demonstrate, say, a good summary of a full article, and make the system something you can talk to. "Some of their optimizations have been overhyped by the general public, as many were already well known and used by other labs," concedes García-Ferrero, who notes the biggest technological breakthrough was actually in an R1-Zero sub-model that few people in the public are talking about, because it was built without any instruction tuning (or expensive human intervention).

But the reason R1 took off with developers was the sheer accessibility of high-tech AI. "[Before R1], there weren't good reasoning models in the open source community," says Feldman, whose company Cerebras has constructed the world's largest AI processing chip. "They built upon open research, which is what you'd want from a community, and they put out a comprehensive, or fairly comprehensive, paper on what they did and how."

A few beats later, Feldman echoes doubt shared by many of his peers: "[The paper] included some things that are clearly bullshit . . . they clearly used more compute [to train the model] than they said." Others have speculated R1 may have queried OpenAI's models to generate otherwise expensive data for its instruction tuning steps, or queried o1 in such a way that they could deconstruct some of the black-box logic at play. But this is just good old reverse engineering, in Feldman's eyes.

[Source Images: Gunes Ozcan/Getty Images]

"If you're a car maker, you buy the competitor's car, and you go, Whoa, that's a smooth ride. How'd they do that? Oh, a very interesting new type of shock! Yeah, that's what [DeepSeek] did, for sure."

China has been demonized for undercutting U.S.
AI investment with a free DeepSeek, but it's easy to forget that, two years ago, Meta did much the same thing when, trailing Microsoft and Google in the generative AI race, it released LLaMa as one of the first open-source LLMs. There was one difference, however: The devil is in the details with open-source agreements, and while LLaMa still includes provisions stopping its commercial use by Meta's competitors, DeepSeek used the gold-standard MIT license, which blows it wide open for anything. Now that R1 is trained and in the wild, the how, what, and why matter mostly to politicians, investors, and researchers. It's a moot point to most developers building products that leverage AI engines. "I mean, it's cool," says Jason Yuan, cofounder of the AI startup New Computer. "We're painters, and everyone's competing over giving you better and cheaper paints."

A wave of demand for DeepSeek

Feldman describes the last two weeks at Cerebras as overwhelming, as engineers have been getting R1 running on their servers to feed clients looking for cheap, smart compute. "It's like, every venture capitalist calls you and says, I got a company that can't find supply. Can you help out? I'm getting those three, four times a day," says Feldman. "It means you're getting hundreds of requests through your website. Your sales guys can't return calls fast enough. That's what it's like."

These sentiments are shared by Lin Qiao, CEO and cofounder of the cloud computing company Fireworks, which was the first U.S.-based company to host DeepSeek R1. Fireworks has seen a 4x increase in user signups month over month, which it attributes to offering the model. Qiao agrees that part of the appeal is price. I've heard estimates that R1 is about 3% of the cost of o1 to run, and Qiao notes that on Fireworks, they're tracking it as 5x cheaper than o1.

Notably, OpenAI responded to DeepSeek with a new model released last week called o3 mini.
According to Greg Kamradt, the founder of ARC Prize, a nonprofit AI benchmarking competition, o3 mini is 13.6x cheaper than o1 at processing tasks. Cerebras admits o3 mini is all-around more advanced than DeepSeek's R1, but claims the pricing is comparable. Fireworks contends o3 mini is still less expensive to query than R1. The truth is that costs are moving targets, but the bigger takeaway should be that R1 and o3 mini are similarly cheap. And developers don't need to bet on either horse today to take advantage of the new competition. "Our philosophy is always to try all models," writes Ivan Zhao, founder and CEO of Notion, over email. "We have a robust eval system in place, so it's pretty easy to see how each model performs. And if it does well, is cost effective, and meets our security and privacy standards, then we'll consider it."

DeepSeek offers transparent thought for the first time

Shevelenko insists that integrating DeepSeek into Perplexity was more than a trivial effort. "I wouldn't put it in the mindless bucket," he says. But the work was still completed within a week. In many ways, the larger concern for integration was not whether it would function, but whether Perplexity could mitigate R1's censorship on some topics as it leveraged AI for real-time internet queries. "The real work was we quickly hired a consultant that's an expert in Chinese censorship and misinformation, and we wanted to identify all the areas in which the DeepSeek model was potentially being censored or propagating propaganda," says Shevelenko. "And we did a lot of post-training in a quick time frame . . . to ensure that we were answering any question neutrally." But that work was worth it because "it just makes Perplexity better," he says. Shevelenko is not talking in platitudes; with DeepSeek, Perplexity can do something the world has never seen before: offer a peek inside how these AIs are actually thinking through a problem. This feature of R1 is called chain-of-thought.
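Because R1's weights are open, that chain-of-thought arrives as plain text a front end can display. A common convention for the open-weights build is that the reasoning is wrapped in `<think>…</think>` tags ahead of the final answer; assuming that convention (worth verifying against the exact build you host), splitting the two for a UI is a few lines:

```python
import re

# Sketch: separating R1-style chain-of-thought from the final answer.
# Assumes the open-weights convention of <think>...</think> tags preceding
# the answer; verify against the exact model build you host.

THINK_RE = re.compile(r"\s*<think>(.*?)</think>\s*", re.DOTALL)

def split_reasoning(raw: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no tags are present."""
    m = THINK_RE.match(raw)
    if m:
        return m.group(1).strip(), raw[m.end():].strip()
    return "", raw.strip()

reasoning, answer = split_reasoning(
    "<think>The user wants a sum. 2 + 2 = 4. Double-check: yes, 4.</think>"
    "The answer is 4."
)
assert answer == "The answer is 4."
assert "Double-check" in reasoning
```

A product like Perplexity's can then stream the first element into a "thinking" panel and the second into the answer box, which is essentially the transparency feature the article describes.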
Perplexity always offered some transparency in its front end, listing the websites it was crawling on your behalf to answer a question. But now, it will list the prompts as R1 literally talks to itself, step by step, as it reasons through an answer.

[Screenshot: courtesy of the author]

"OpenAI, for competitive purposes, never exposed [chain-of-thought]. One of Perplexity's strengths is UI; we are able to quickly figure out an elegant way of showing you how the model is thinking in real time," says Shevelenko. "There's a curiosity and a utility to it." You can see where the thinking may have gone wrong and reprompt, but "more than anything, part of the whole product law at Perplexity is not that you always get the best answer in one shot, it's that you're guided on the way to ask better and better questions. It makes you think of other questions."

Seeing AI reasoning laid bare also creates more intimacy with the user. "The biggest problem of AI right now: how can we trust it? Because we all know AI can hallucinate," says Qiao. However, if transparent thought can bridge this gap of trust, then she imagines developers will begin to do a lot more that we can't think of yet with all of this thinking data.

[Screenshot: courtesy of the author]

"There may be products built directly on top of chain-of-thought. Those products could be general search, or all kinds of assistants: coding assistant, teaching assistant, medical assistants." She also believes that, while AI has been obsessed with the assistant metaphor since the launch of ChatGPT, transparent thought will actually give people more faith in automated AI systems, because it will leave a trail that humans (or more machines!) can audit.

Buying breathing room for the future

Even as debates about Chinese vs. U.S. innovation rage on, the biggest single impact that DeepSeek will have is giving developers more autonomy and capability.
Some, like Anthropic CEO Dario Amodei, argue that we are simply witnessing the known pricing and capability curve of AI play out. Others recognize the kick in the ass that DeepSeek offered an industry hooked on fundraising and opaque profit margins. "There's no way OpenAI would have priced o3 as low as they did had it not been for R1," says Shevelenko. "It's a bit of a moving target; once you have an open-source drop, it dramatically curves down the pricing for closed models, too."

While nothing is to say that OpenAI or Anthropic won't release a far more cutting-edge model tomorrow that puts these systems to shame, this moving target is providing confidence to developers, who now see a path toward realizing implementations they'd only fantasized about, especially now that they can dip their own fingers into advanced AI. R1 on its own is still relatively slow for many tasks; a question might take 30 seconds or more to answer, as it has a habit of obsessively double-checking its own thinking, perhaps even burning more energy than it needs to give you an answer. But since it's open source, the community can distill R1 (think of it like a low-cost clone) to run faster and in lower-power environments. Indeed, developers are already doing this. Cerebras demonstrated a race between its own distilled version of R1 and o3 mini to code a chess game: Cerebras completed the task in 1.2 seconds, versus 22 seconds for o3 mini. Efficiencies, fueled by both internal developers and the open-source community, will only make R1 more appealing. (And force proprietary model developers to offer more for less.)

At Krea, the team is most excited about the same thing that's exciting the big AI server companies: They can actually task an engineer to adjust the weights of this AI (essentially tuning its brain like a performance vehicle).
This might allow them to run an R1 model on a single GPU themselves, sidestepping cloud compute altogether, and it can also let them mix homebuilt AI models with it. Being able to run models locally on office workstations, or perhaps even distilling them to run right on someone's phone, can do a lot to reduce the price of running an AI company.

Right now, developers of AI products are torn between short-term optimizations and long-term bets. While they charge $10 to $30 a month, those subscriptions make for a bad business today that's really betting on the future. It's really hard for any of those apps to be profitable because of the cost of doing intelligent workflows per person. "There's always this calculus you're doing where it's like, OK, I know that it's going to be cheap, long, long term. But if I build the perfect architecture right now with as much compute as I need, then I may run out of money if a lot of people use it in a month," says Whitmore. "So the pricing curve is difficult, even if you believe that long term, everything will be very cheap."

What this post-DeepSeek era will unlock, Whitmore says, is more experimentation from developers to build free AI services, because they can do complicated queries for relatively little money. And that trend should only continue. "I mean, the price of compute over the past 50 years has [nosedived], and now you have 30 computers in your house. Each of your kids has toys with it. Your TVs have computers in them. Your dishwashers have computers in them. Your fridges probably have five. If you look around, you got one in your pocket," says Feldman. "This is what happens when the price of compute drops: You buy a shitload of it."

What this will mean for the UX of AI will naturally change, too. While the way most of us use AI is still based in metaphors of conversation, when it can reason ahead faster than we can converse, the apps of tomorrow may feel quite different, even living steps ahead of where we imagine going next.
"As humans, even the smartest of us take time to reason. And right now, we're used to reasoning models taking a bit of time," says Yuan of New Computer. "But swing your eyes just a few months or even a year ahead, and imagine thinking takes one second or less, or even microseconds. I think that's when you'll start seeing the quote-unquote AI-native interfaces beyond chat. I think it's even hard to kind of imagine what those experiences will feel like, because you can't really simulate it. Even with science fiction, there's this idea that thinking takes time," he continues. "And that's really exciting. It feels like this will happen."


