2025-12-04 10:00:00| Fast Company

As gaming platforms Roblox and Fortnite have exploded in popularity with Gen Alpha, it's no surprise that more than half of children in the U.S. are putting video games high on their holiday wish lists. The Entertainment Software Association (ESA) surveyed 700 children between the ages of 5 and 17 and found that three in five kids are asking for video games this holiday season. However, the most highly requested gift isn't a console or even a specific game: It's in-game currency.

The survey didn't dig into which currency is proving most popular, but the category as a whole tops the list with a 43% request rate, followed by 39% for a console, 37% for accessories, and 37% for physical games.

A study published by Circana this year revealed that only 4% of video game players in the U.S. buy a new game more often than once per month, with a third of players not buying any games at all. Behind this shift is the immense popularity of live service games such as Fortnite and those offered on the Roblox platform. Both are free to play, which means the platforms have to generate revenue in other ways. Much of Roblox's $3.6 billion revenue in 2024 came from in-game microtransactions, particularly purchases of its virtual currency, Robux. Here, $5 will get you 400 Robux to spend in the game on emotes, character models, and skins, among other items. Players can also earn currency just by playing, but as with any free-to-play game, earning in-game points is slow and tedious compared to purchasing them outright.

It's worth noting that while these games often seem innocent enough, about half of parents surveyed by Ygam, an independent U.K. charity dedicated to preventing gaming and gambling harms among young people, noted there are gambling-like mechanisms in the games their child plays, including mystery boxes and loot boxes, which may be harmful to children.

Still, the average parent intends to spend $737 on game-related gifts, ESA reported. Parents who aren't able, or willing, to drop hundreds on Robux and V-bucks this holiday may be pleased to learn that more than half of the kids surveyed said they would like to spend more time playing games with their parents, rising to 73% among those ages 5 through 7. Turns out, the best gift you can give your child is quality time.
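
To put those numbers side by side: at the rate quoted above ($5 for 400 Robux), the average parent's $737 game-gift budget maps to a surprisingly large pile of virtual currency. The sketch below is a back-of-the-envelope illustration only; Roblox's real pricing tiers and bundle bonuses are not covered in this article and are not modeled here.

```python
# Back-of-the-envelope conversion based on the $5-for-400-Robux figure cited above.
# Real Roblox pricing tiers and bonuses may differ; this is purely illustrative.
ROBUX_PER_DOLLAR = 400 / 5  # 80 Robux per US dollar at the quoted tier

def robux_for(dollars: float) -> int:
    """Robux purchased for a given dollar spend, at the flat quoted rate."""
    return int(dollars * ROBUX_PER_DOLLAR)

if __name__ == "__main__":
    for spend in (5, 50, 737):  # $737 is the average planned parental spend per the ESA
        print(f"${spend:>4} -> {robux_for(spend):,} Robux")
```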


Category: E-Commerce

 

2025-12-04 10:00:00| Fast Company

Raquel Urtasun is the founder and CEO of self-driving truck startup Waabi as well as a computer science professor at the University of Toronto. Unlike some competitors, Waabi's AI technology is designed to haul goods all the way to their destinations, rather than merely to autonomous vehicle hubs near highways. Urtasun, one of Fast Company's AI 20 honorees for 2025, spoke with us about the relationship between her academic and industry work, what sets Waabi apart from the competition, and the role augmented reality and simulation play in teaching computers to drive even in unusual road conditions.

This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.

Can you tell me a bit about your background and how Waabi got started?

I've been working in AI for the last 25 years, and I started in academia, because AI systems weren't ready for the real world. There was a lot of innovation that needed to happen in order to enable the revolution that we see today. For the last 15 years, I've been dedicated to building AI systems for self-driving. Eight years ago, I made a jump to industry: I was chief scientist and head of R&D for Uber's self-driving program, which gave me a lot of visibility in terms of what building a world-class program and bringing the technology to market would look like.

One of the things that became clear was that there was a tremendous opportunity for a disrupter in the industry, because everybody was going with an approach that was extremely complex and brittle, where you needed to incorporate by hand all the knowledge that the system should have. It was not something that was going to provide a scalable solution. So a little bit over four years ago, I left Uber to go all in on a different generation of technology. I had deep conviction that we should build a system designed with AI-first principles, where it's a single AI system end-to-end, but at the same time a system that is built for the physical world. It has to be verifiable and interpretable. It has to have the ability to prove the safety of the system, be very efficient, and run onboard the vehicle. The second core pillar was that the data is as important as the model. You will never be able to observe everything and fully test the system by deploying fleets of vehicles. So we built a best-in-class simulator, where we can actually prove its realism.

And what differentiates your approach from the competition today?

The big difference is that other players have a black-box architecture, where they train the system basically with imitation learning to imitate what humans do. It's very hard to validate and verify, and impossible to trace a decision. If the system does something wrong, you can't really explain why that is the case, and it's impossible to really have guarantees about the system. That's okay for a level two system [where a human is expected to be able to take over], but when you want to deploy level four, without a human, that becomes a huge problem. We built something very different, where the system is forced to interpret and explain at every fraction of a second all the things it could do, and how good or bad those decisions are, and then it chooses the best maneuver. And then through the simulator, we can learn much better how to handle safety-critical situations, and much faster as well.
How are you able to ensure the simulator works as well as real-world driving?

The goal of the simulator is to expose the self-driving vehicle's full stack to many different situations. You want to prove that under each specific situation, how the system drives is the same as if the situation happens in the real world. So we take all the situations where the Waabi Driver has driven in the real world, and clone them in simulation, and then we see, did the truck do the same thing?

We also recently unveiled a really exciting breakthrough with mixed-reality testing. The way the industry does safety testing is they bring a self-driving vehicle to a closed course and they expose it to a dozen, maybe two dozen, scenarios that are very simple in order to say it has basic capabilities. It's very orchestrated, and they use dummies in order to test things that are safety critical. It's a very small number of non-repeatable tests. But you can actually do safety testing in a much better way if you can do augmented reality on the self-driving vehicle. With our truck driving around a closed course, we can intercept the live sensor data and create a view where there's a mix of reality and simulation, so in real time, as it's driving in the world, it's seeing all kinds of simulated situations as though they were real. That way, you can have orders of magnitude more tests. You can test all kinds of things that are otherwise impossible, like accidents on the road, a traffic jam, construction, or motorbikes cutting in front of you. You can mix real vehicles with things that are not real, like an emergency vehicle in the opposite lane.

You're also a full professor. Are you still teaching and supervising graduate students?

I do not teach; I obviously do not have time to teach at all. I do have graduate students, but they do their studies at the company. We have this really interesting partnership with the University of Toronto. If you want to really learn and do research in self-driving, it is a must that you get access to a full product. And that's impossible in academia. So a few years ago, we designed this program where students can do research within the company. It's one of a kind, and to me, this is the future of education for physical AI.

When did you realize the time was ripe for moving from academic research to industry work?

That was about eight and a half years ago. We were at the forefront of innovation, and I saw companies were using our technology, but it was hard for me to understand if we were working on the right things and if there was something that I hadn't thought of that is important when deploying a real product in the real world. And I decided at the time to join Uber, and I had an amazing almost four years. It blew my mind in terms of how the problem of self-driving is much bigger than I thought. I thought, "Okay, autonomy is basically it," and then I learned about how you need to design the hardware, the software, the systems around safety, etc., in a way that everything is scalable and efficient.

It was very clear to me that end-to-end systems and foundational models would be the thing. And four and a half years in, our rate of hitting milestones really speaks to this technology. It's amazing. To give an example, the first time that we drove in rain, the system had never seen rain before. And it drove with no interventions in rain, even though it never saw the phenomenon before. That for me was the "aha" moment.
I was actually [in the vehicle] with some investors on the track, so it was kind of nerve-racking. But it was amazing to see. I always have very, very high expectations, but it blew my mind what it could do.
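
Urtasun's description of cloning logged drives in simulation and checking whether the truck "did the same thing" amounts to a replay-and-compare test. The sketch below is a minimal, hypothetical illustration of that idea; the Pose type, the tolerance, and the pass/fail rule are assumptions for illustration, not Waabi's actual evaluation pipeline.

```python
# Minimal sketch of a "clone a logged drive and compare" check, as described above.
# All names here are hypothetical stand-ins; real closed-loop evaluation involves
# full sensor simulation, many metrics, and far more than a positional tolerance.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading: float

def max_deviation(real_track: list[Pose], sim_track: list[Pose]) -> float:
    """Largest positional gap between the logged drive and the simulated re-drive."""
    return max(
        ((r.x - s.x) ** 2 + (r.y - s.y) ** 2) ** 0.5
        for r, s in zip(real_track, sim_track)
    )

def scenario_matches(real_track: list[Pose], sim_track: list[Pose], tol_m: float = 0.5) -> bool:
    """Pass if the simulated truck stays within tol_m meters of the real-world trajectory."""
    return max_deviation(real_track, sim_track) <= tol_m

if __name__ == "__main__":
    real = [Pose(i * 1.0, 0.0, 0.0) for i in range(10)]   # logged real-world drive
    sim = [Pose(i * 1.0, 0.1, 0.0) for i in range(10)]    # same scenario re-driven in simulation
    print(scenario_matches(real, sim))  # True: within 0.5 m at every step
```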


Category: E-Commerce

 

2025-12-04 10:00:00| Fast Company

Bringing a new drug to market usually requires a decade-long, multibillion-dollar journey, with a high failure rate in the clinical trial phase. Nvidia's Kimberly Powell is at the center of a major industry effort to apply AI to the challenge.

"If you look at the history of drug discovery, we've been kind of circling around the same targets for a long time, and we've largely exhausted the drugs for those targets," she says. A target is a biological molecule, often a protein, that's causing a disease. But human biology is extraordinarily complex, and many diseases are likely caused by multiple targets.

"That's why cancer is so hard," says Powell. "Because it's many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently."

Nvidia, which in July became the first publicly traded company to cross $4 trillion in market capitalization, is the primary provider of the chips and infrastructure that power large AI models, both within the tech companies developing the models and the far larger number of businesses relying on them. New generative AI models are quite capable of encoding and generating words, numbers, images, and computer code. But much of the work in the healthcare space involves specialized data sets, including DNA and protein structures. The sheer number of molecule combinations is mind-bogglingly big, straining the capacity of language models. Nvidia is customizing its hardware and software to work in that world.

"[W]e have to do a bunch of really intricate data science work to . . . take this method and apply it to these crazy data domains," Powell says. "We're going from language and words that are just short little sequences to something that's 3 billion [characters] long."

Powell, who was recruited by Nvidia to jump-start its investment in healthcare 17 years ago, manages the company's relationships with healthcare giants and startups, trying to translate their business and research problems into computational solutions. Among those partners are 5,000 or so startups participating in Nvidia's Inception accelerator program.

"I spend a ton of my time talking to the disrupters," she explains. "Because they're really thinking about what [AI computing] needs to be possible in two to three years' time."

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
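
Powell's "3 billion [characters]" refers to the rough length of a human genome, which dwarfs the inputs language models typically handle. The snippet below is only a back-of-the-envelope illustration of that scale gap; the context-window sizes are assumed for comparison, and the one-character-per-token simplification is deliberately crude.

```python
# Rough scale comparison: a ~3-billion-character genome versus assumed model context windows.
# Context sizes below are illustrative assumptions, not tied to any specific model.
GENOME_CHARS = 3_000_000_000

for context_tokens in (8_192, 128_000, 1_000_000):
    # Crude assumption of one character per token, just to show the order of magnitude.
    windows_needed = GENOME_CHARS // context_tokens
    print(f"{context_tokens:>9,}-token window -> ~{windows_needed:,} windows to cover one genome")
```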


Category: E-Commerce

 

2025-12-04 10:00:00| Fast Company

You might not spend a lot of time thinking about your web browser, whether it's Safari, Chrome, or something else. But the decades-old piece of software remains a pretty important canvas for getting things done. That's why Tara Feener, who spent years developing creative tools with companies such as Adobe, WeTransfer, and Vimeo, decided to join the Browser Company and within two years became head of engineering, overseeing its AI-forward Dia browser. "This is more ambitious than any of the other things I've done, because it's where you live your life, and where you create within," she says.

Whereas a conventional browser presents you with a search box on its home screen, Dia will either answer your query with AI or route it to a traditional search based on what you write. You can also ask for information from your open tabs or have Dia intelligently sort them into groups. Several of these features have since found their way into more mainstream browsers such as Google Chrome and Microsoft Edge, and in September, Atlassian announced it had acquired the Browser Company and Dia (a $610 million deal), hoping to develop the ultimate AI browser for knowledge workers.

Other AI companies are catching on to the importance of owning a browser. Perplexity has launched Comet, and OpenAI launched ChatGPT Atlas in October. This strategic value isn't lost on Feener, who notes that browsers are typically the starting point for workers seeking information. They also provide a treasure trove of context for AI assistants. Dia can already do things like analyze your history for trends and draft messages in Gmail. Feener says her team has never felt more creative coming up with things to do next.

"With Dia, we have context, we have memory, we have your cookies, so we actually own the entire layer," she says. "Just like TikTok gets better with every swipe, every time you open something in Dia, we learn something about you."

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
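
As a rough illustration of the routing behavior described above (answer with AI versus hand off to a traditional search), here is a toy heuristic router. The rules are invented for this sketch and bear no relation to how Dia actually classifies input.

```python
# Toy illustration of routing a typed query either to an AI answer or to web search.
# The heuristics below are invented for this sketch; they are not Dia's actual logic.
QUESTION_WORDS = ("who", "what", "when", "where", "why", "how", "can", "should")

def route(query: str) -> str:
    q = query.strip().lower()
    if not q:
        return "web_search"
    first_word = q.split()[0]
    if "." in first_word and " " not in q:  # e.g. "fastcompany.com" looks like a URL
        return "navigate"
    if q.endswith("?") or q.startswith(QUESTION_WORDS):
        return "ai_answer"
    return "web_search"

if __name__ == "__main__":
    print(route("how do I sort my open tabs?"))  # ai_answer
    print(route("figma make hackathon"))         # web_search
    print(route("fastcompany.com"))              # navigate
```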


Category: E-Commerce

 

2025-12-04 10:00:00| Fast Company

Over the past decade, Figma has transformed how people within companies collaborate to turn software ideas into polished products. Now the company is itself being transformed by AI. The technology is beginning to show its potential to take on much of the detail work that has required human attention in design, coding, and other domains. But the end game involves far more than typing chatbot-style prompts and waiting for the results.

I spoke with Figma's head of AI, David Kossnick, one of Fast Company's AI 20 honorees for 2025, about what the company has accomplished so far and where he's trying to steer it. "We're still in chapter one, maybe the start of chapter two," he told me.

This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.

Talk a little bit about what your work at Figma encompasses and how you came to have this job.

Anything that has AI in it, I and my team touch in some way. It's everything from traditional AI tools like search, which we've rebuilt using multimodal embeddings, to some of our newer, AI-forward workflows. Figma Make is an example of that.

As to how I came to get this job, I'll give you a short version. I knew a lot of the Figma team for a long time. The chief product officer, Yuhki [Yamashita], and I went to college together. He was at my wedding. I did a startup of my own, and one of our board members was John Lilly, who was also on the board of Figma. I actually met [Figma cofounder/CEO] Dylan [Field] when there was, like, a 20-person Figma team, because we were building a game engine, and Figma is basically a game engine, with all sorts of custom renderings. [Lilly] was like, "You guys should compare notes." So I've known the team for a long time, and it's a product I've used a lot.

And then, about a year and a half ago, when I joined, I'd been working on AI at Coda, which was then acquired by Grammarly. As a big Figma user, I also felt like there was just such a huge opportunity for Figma, and it had barely gotten started. So I was thinking about what's next and sharing it with Yuhki: "There's a lot you guys could do." He was like, "I know, we just don't have the right team here yet. You wanna come?" I was like, "That sounds amazing."

Is there a particular Figma philosophy about AI and how to put it into this experience that's been around for a while, and which people choose to use because they like it, in most cases?

There have been a couple of learnings, both from our own team and from working with customers. A lot of our biggest customers are technology companies themselves. Many are integrating AI themselves. And so we've learned through them: what's working and what they're trying.

There have been two industry trends, and we've done both here. One is trying to find existing workflows that you can add AI to, to save users time, to delight them, to give them new capabilities. And also building totally new experiences that have AI as the core of the workflow. Interestingly, we've actually done some market research and surveys of users and other companies. People understand and value the new AI-first workflows even more. I think that is counterintuitive. You think you have such big products, and adding efficiencies to them is very viable. And it is. But often, AI is a little more invisible there.
It's embedded in a workflow that you're used to, and so the thing that is forefront in your mind is the workflow itself. That's good. We don't want to get in people's way. Figma Design's canvas is kind of like the Google homepage or Facebook news feed, where a single pixel of friction literally slows down millions of people every day. Which makes for interesting challenges. How do you introduce things so they don't bother people? But on the flip side, there are a lot of new workflows and new tools. People, especially our type of customers, are always experimenting. And so they're very open to trying a totally different approach.

Historically, Figma has been this thing that human beings use to collaborate with other human beings to create stuff from scratch, and often very carefully considered stuff. What's the experience like of integrating tools that take some of that heavy lifting off their shoulders?

I think it's super exciting. It feels and looks different for different user types. So as an example, we actually just finished up a $100,000 hackathon, our first ever, for Figma Make. It was totally inspiring seeing the range of things people have made. There were students. There were people who never learned to code. There were designers who code a lot, and it's just helping them do it faster. There were hobbyists. For a lot of those user types, a very common theme was, "Wow, I just couldn't have done this before."

The other way it feels is as a kind of thought partner to experts. I feel this myself as a [product manager] when I chat with Figma Make or ChatGPT. I have a problem. I have a solution in mind. And actually, there are some other solutions I hadn't thought about, because I was so focused on this one solution. It can help you pull back and see a wider solution space, and explore a few other threads in a very cheap way before you go too deep. It's like Doctor Strange, where he has this magic crystal that lets him look into all the different possible futures. Expert users are always running simulations in their heads. What if I move this button over here? How's the user behavior going to change? What does that mean for the next part of the experience? We're finding that these types of AI tools make that loop so much faster, where it's like, I'm just going to try exploring a bunch. I'm going to literally make them, but make them 10 times as quickly, and play out all those different end states.

How far is Figma down the continuum from having no AI to AI being everywhere and doing everything AI could possibly do?

It's an interesting question. There's AI today and AI in the future. If all research was frozen, there would probably still be five years of new product experiences that the industry could build from current models. But the pace of model improvement is still really high as well. For us, I'd say we're still in chapter one, maybe the start of chapter two. And chapter one was, "We're going to do a bunch of basic features, get our feet wet, save time in your workflows." Chapter two is, "We're doing some new AI-first experiences." Figma Make, that whole category of prompt-to-app, is very, very new.

As the models get better and faster and cheaper, what other new workflows are going to become available? Today, things like autocomplete, as an example, are hard to make fast, and hard to make cheap, and hard to make high quality. And, you know, we're still using many interfaces in the industry that feel like typing at a terminal from the '60s. That's not the final interface.
That's not the final workflow. I think the interfaces are going to become more visual, more exploratory. It's part of why I'm so excited about Figma and why I came here. As AI gets better, what you want the experience of working with an AI to feel like is going to be more and more similar to what you want the experience of working with a human to feel like. You're going to want to brainstorm with the AI before it goes off and thinks for 10 hours and then builds something. You're going to want to work through the big trade-offs. You're going to want your teammates in there too, not just the AI. I think that'll be a super exciting place, where things like code become implementation details that AIs are more and more capable of driving, with humans reviewing.
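
Kossnick mentions rebuilding Figma's search on multimodal embeddings. As a general illustration of embedding-based search (rank items by vector similarity to the query), here is a minimal sketch; the toy embed() function and the sample corpus are invented stand-ins, and nothing here reflects Figma's actual models or indexing pipeline.

```python
# General sketch of embedding-based search: represent items as vectors, rank by cosine
# similarity to the query vector. embed() is a toy stand-in, not a real multimodal model.
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy 'embedding': hash character trigrams into a fixed-size, L2-normalized vector."""
    vec = [0.0] * dims
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def search(query: str, corpus: dict[str, str], top_k: int = 3) -> list[str]:
    q = embed(query)
    return sorted(corpus, key=lambda name: cosine(q, embed(corpus[name])), reverse=True)[:top_k]

if __name__ == "__main__":
    files = {
        "onboarding-flow": "signup screens, welcome email, first-run checklist",
        "pricing-page": "plans, billing toggle, enterprise contact form",
        "icon-library": "system icons, glyphs, 24px grid",
    }
    print(search("billing and plans", files, top_k=1))  # likely ['pricing-page']
```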


Category: E-Commerce

 
