America's AI industry was left reeling over the weekend after a small Chinese company called DeepSeek released an updated version of its chatbot last week, one that appears to outperform even the most recent version of ChatGPT. But it's not just DeepSeek's performance that is rattling U.S. artificial intelligence giants. It's the fact that DeepSeek built its model in just a few months, using inferior hardware, and at a cost so low it was previously nearly unthinkable. Here's what you need to know about DeepSeek.

What is DeepSeek?

DeepSeek is a Chinese artificial intelligence lab. It was founded in 2023 and is based in Hangzhou, in China's Zhejiang province. It has released an open-source AI model, also called DeepSeek. The latest version, DeepSeek-V3, appears to rival and, in many cases, outperform OpenAI's ChatGPT, including its GPT-4o model and its latest o1 reasoning model. But the fact that the DeepSeek-V3 chatbot can apparently outperform OpenAI's ChatGPT, as well as Meta's Llama 3.1 and Anthropic's Claude 3.5 Sonnet, isn't the only thing unnerving America's AI experts. It's that DeepSeek appears to have developed DeepSeek-V3 in just a few months, using AI hardware that is far from state-of-the-art, and at a minute fraction of what other companies have spent developing their LLM chatbots.

How much did DeepSeek cost to develop?

Perhaps the most astounding thing about DeepSeek is how little it cost to develop. According to the company's technical report on DeepSeek-V3, the total cost of training the model was just $5.576 million. Yes, that's million. For less than $6 million, DeepSeek has managed to create an LLM while other companies have spent billions developing their own. (OpenAI reportedly spent $100 million on training GPT-4 alone, Wired noted in 2023.)

This raises several existential questions for America's tech giants, not the least of which is whether they spent billions of dollars they didn't need to in building their large language models. Those high research and development costs are why most LLMs haven't broken even for the companies involved; if America's AI giants could have built them for just a few million dollars instead, they wasted billions they didn't need to spend.

The fact that DeepSeek may have created a superior LLM for less than $6 million also raises serious competition concerns. When LLMs were thought to require hundreds of millions or billions of dollars to build, America's tech giants like Meta, Google, and OpenAI enjoyed a financial advantage: few companies or startups had the funding once thought necessary to create an LLM that could compete with ChatGPT. But if DeepSeek could build its LLM for only $6 million, American tech giants may soon face a lot more competition, not just from major players but from small startups in America and across the globe, in the months ahead.

Wasn't America supposed to prevent Chinese companies from getting a lead in the AI race?

Yes. The Biden administration placed a number of export controls on AI technologies in the hope that they would do just that. Some of those controls forbade American companies from selling their most advanced AI chips and other hardware to Chinese companies, and some of Nvidia's most advanced AI hardware fell under them. That's why DeepSeek's success is all the more shocking: the model was developed using hardware that was far from the most advanced.
DeepSeek trained its LLM using Nvidia's H800, a midrange AI chip. Despite being consigned to less advanced hardware, DeepSeek still created an LLM superior to ChatGPT. It is also much more energy efficient than LLMs like ChatGPT, which means it is better for the environment. Discussing DeepSeek's breakthroughs with CNBC, Perplexity CEO Aravind Srinivas said, "Necessity is the mother of invention. Because they had to figure out work-arounds, they actually ended up building something a lot more efficient."

How have America's AI giants reacted to DeepSeek?

With shock and concern. At the World Economic Forum in Davos, Switzerland, on Wednesday, Microsoft CEO Satya Nadella said, "To see the DeepSeek new model, it's super impressive in terms of both how they have really effectively done an open-source model that does this inference-time compute, and is super-compute efficient. We should take the developments out of China very, very seriously." Microsoft has invested billions in ChatGPT maker OpenAI.

Meta's chief AI scientist, Yann LeCun, has a slightly different take. On Threads, he argued that DeepSeek's success shows that open-source models are surpassing proprietary ones. "DeepSeek has profited from open research and open source (e.g. PyTorch and Llama from Meta)," LeCun wrote. "They came up with new ideas and built them on top of other people's work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source."

How have investors reacted to the DeepSeek news?

With some alarm. As of this writing, major AI and AI-adjacent stocks are down in premarket trading. Shares of Nvidia Corporation (Nasdaq: NVDA) are currently down over 10%. Nvidia's success in recent years, during which it has become the world's most valuable company, is largely due to companies buying as many of its most advanced AI chips as they can. If companies can now build AI models superior to ChatGPT on inferior chipsets, what does that mean for Nvidia's future earnings? Shares of ASML Holding N.V. (Nasdaq: ASML), which makes the equipment needed to produce advanced AI chips, were also down 9% in premarket trading. Shares of Microsoft Corporation (Nasdaq: MSFT), OpenAI's biggest investor, were down over 6%.

Can I use DeepSeek?

Yep. DeepSeek can be used for free; there is no cost to use the most advanced DeepSeek-V3 model, which in most tests beats ChatGPT's o1 model, and the latter costs $200 a month to use. DeepSeek is available for free on the web, where its interface looks no different from those of other LLMs. You can also use DeepSeek for free on your smartphone via the dedicated DeepSeek app for iOS and Android. And in a sign of how much mindshare DeepSeek has gained in the AI market over the past several days, the app is now the No. 1 app in Apple's App Store.
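For readers who want to go beyond the web and mobile apps, DeepSeek also offers programmatic access to its models. The short Python sketch below is an illustration only, not something drawn from the article above: it assumes an OpenAI-compatible endpoint at api.deepseek.com, a "deepseek-chat" model identifier, and an API key stored in a DEEPSEEK_API_KEY environment variable, all of which should be verified against DeepSeek's own documentation.

# Minimal sketch of calling DeepSeek via the OpenAI-compatible Python client.
# The base_url, model name, and environment variable are assumptions to verify
# against DeepSeek's documentation before relying on them.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed env var holding your key
    base_url="https://api.deepseek.com",     # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed identifier for the DeepSeek-V3 chat model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what makes DeepSeek-V3 notable."},
    ],
)

print(response.choices[0].message.content)

If the endpoint mirrors OpenAI's chat-completions interface, as assumed here, code written for ChatGPT can be pointed at DeepSeek by changing only the base URL, model name, and API key.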
We're at a fascinating yet concerning inflection point with AI. A recent Gallup poll reveals that 79% of Americans are already using AI-powered products in their daily lives, often without realizing it. Meanwhile, as MIT Sloan Review argues, the profound questions AI raises about consciousness, intelligence, and decision-making aren't primarily technical problems; they're philosophical ones. We need philosophy to help us understand what AI actually is, what it means to be intelligent, and how we should approach human-AI interaction. Without this philosophical foundation, we risk developing AI systems that don't align with human values and ways of thinking.

This creates what I call, in my forthcoming book TRANSCEND: Unlocking Humanity in the Age of AI, a "philosophical emergency." Unlike previous technological revolutions that primarily changed what we could do, AI is fundamentally altering how we think, reason, and relate to each other. Without developing strong critical thinking skills specifically calibrated for this AI age, we risk becoming passive consumers of AI-driven decisions rather than active, thoughtful partners with this technology.

The stakes are incredibly high. It's not just about using AI tools effectively; it's about maintaining our capacity for independent thought, authentic human connection, and meaningful decision-making in a world where AI is increasingly embedded in every aspect of our lives.

Here are seven essential critical thinking skills, grounded in philosophical wisdom, that we must develop to partner effectively with AI:

1. Recognizing Limitations (aka Epistemological Humility): Rooted in Socrates' famous wisdom: "I know that I know nothing." It also connects to Immanuel Kant's limits of human knowledge and reason. When we recognize our own limitations, paradoxically, we become wiser in our interactions with AI. Example: Deliberately choosing films outside AI's recommendation bubble, asserting human creativity over algorithmic patterns.

2. Pattern Recognition vs. Pattern Breaking: This draws from existentialist philosophy, particularly Sartre's concept of radical freedom. While AI follows patterns, humans have what Sartre called the ability to "transcend the given," to break free from predetermined patterns and create new possibilities. Example: Choosing to have difficult conversations in person rather than using AI to craft perfect messages, prioritizing authentic connection over convenience.

3. Value-Based Reasoning: Connects to Aristotle's concept of practical wisdom (phronesis), the ability to discern what truly matters in any situation. It also relates to Max Scheler's hierarchy of values, in which some values (like love and spiritual growth) are inherently higher than others (like comfort and utility). Example: Understanding that while an AI chatbot might offer comfort, it can't replace the deep mutual understanding possible in human friendships.

4. Authentic Connection Awareness: Draws heavily from Martin Buber's I and Thou philosophy. Buber distinguished between I-It relationships (treating others as objects) and I-Thou relationships (authentic encounters between subjects). This helps us understand the difference between AI interactions and genuine human connection. Example: Regularly auditing which decisions you've unconsciously delegated to AI, from content choices to shopping decisions.

5. Freedom-Conscious Decision Making: Based on Hannah Arendt's concept of "thoughtful willing," making conscious choices rather than being carried along by automation and convenience. It also connects to Kierkegaard's emphasis on authentic choice-making as central to human existence. Example: Regularly auditing which decisions you've unconsciously delegated to AI, from content choices to shopping decisions.

6. Ethical Impact Analysis: Builds on Hans Jonas's "imperative of responsibility," the idea that modern technology requires a new kind of ethics that considers long-term and far-reaching consequences. It also incorporates utilitarian considerations about maximizing good outcomes while minimizing harm. Example: Evaluating how using AI for hiring decisions might affect workplace diversity and human potential before implementation.

7. Transcendent Purpose Alignment: Draws from Viktor Frankl's logotherapy and the human need for meaning, combined with Maslow's concept of self-actualization. It's about using AI while staying focused on higher human purposes and potential. Example: Using AI to handle routine tasks while intentionally focusing freed-up time on meaningful work and relationships.

These seven critical thinking skills aren't just nice-to-have philosophical concepts; they're essential survival skills for maintaining our humanity and agency in an AI-augmented world. They help us engage with AI in a way that enhances rather than diminishes our humanity, allowing us to stay grounded in what makes us uniquely human while making the most of AI's capabilities.

The philosophical foundations remind us that we're not just dealing with technical challenges but with fundamental questions about human nature, purpose, and potential. The great philosophers wrestled with these questions long before AI came along, and their insights provide rich frameworks for thinking about how we can partner with AI while maintaining and enhancing our humanity.

As AI continues to evolve and integrate more deeply into our lives, developing these critical thinking skills becomes not just important but essential for our individual and collective flourishing. They provide the mental tools we need to navigate this new territory thoughtfully and intentionally, ensuring that we remain active participants in shaping our AI-augmented future rather than passive recipients of whatever that future might bring.

Adapted/published with permission from 'TRANSCEND' by Faisal Hoque (Post Hill Press, March 25, 2025). Copyright 2025, Faisal Hoque. All rights reserved.
Hello and welcome to Modern CEO! I'm Stephanie Mehta, CEO and chief content officer of Mansueto Ventures. Each week this newsletter explores inclusive approaches to leadership drawn from conversations with executives and entrepreneurs, and from the pages of Inc. and Fast Company. If you received this newsletter from a friend, you can sign up to get it yourself every Monday morning.

For all the talk about the rise of "the user" (user-generated content, user-centered design), it is rare that users (aka patients) are actively consulted in health innovations, especially in communities that lack access to basic healthcare. "Innovation must be designed with and for the people it serves, harnessing the insights of those who best understand their communities' needs," Jennifer Gardy, deputy director for global health at the Gates Foundation, told me recently.

Imperfect pipe dreams

Gardy says she understands why global health advocates, often optimists who want to improve outcomes, can sometimes succumb to "shiny thing-itis," as she calls it. "We'll see this cool new technology and we'll say, 'This is so cool. If we build it, they will come,'" she says. Unfortunately, without community input, those innovations often fall short.

In 2011, the foundation initiated a program called the Reinvent the Toilet Challenge. Some 3.5 billion people worldwide lack access to safe sanitation, which puts them at risk for diseases such as diarrhea, typhoid, and cholera. (To draw attention to the challenge, foundation cofounder Bill Gates went on The Tonight Show Starring Jimmy Fallon and convinced the host to drink water made from treated sewage.) During field testing of the earliest designs, research teams heard from women and girls in the communities about their health, safety, and privacy. They highlighted poor lighting and a lack of menstrual products.

As a result of this and other projects, the foundation launched a gender-integration effort. Now all foundation teams are trained in applying a gender lens to their work. "Whether it's developing a team-level guiding strategy that includes gender considerations or filling out a gender intentionality assessment for every investment we make, thinking about the impact (and unintended consequences) on communities, especially women and girls, is now baked into the fabric of how we operate as a foundation," Gardy tells me in an email exchange.

Gardy also shared some of these insights during a Fast Company panel produced in partnership with the Gates Foundation during the CES tech trade show earlier this month. The Gates Foundation, whose mission is to create a world where every person has an opportunity to live a healthy, productive life, participated in CES to showcase some of the health technologies it has supported.

Innovation through community

The call for more community input in innovation was echoed by panelists Laura Adams, senior advisor at the National Academy of Medicine, and Greg Simon, president of Simonovation, a science and tech policy consulting firm. "Digital health is about giving people access to the information that they ought to have anyway," says Simon. "The gripe I have with the digital medical community is that they stopped talking to patients," he adds, and instead collect data without understanding the human context. That's a shame, as research suggests patients can be an asset in health innovation.
A 2015 study of patients with rare diseases found that more than half had developed their own solutions for coping with their diseases, with 8% coming up with tactics that were truly novel. Adams also notes that if patients aren't consulted on health innovation, they'll likely take matters into their own hands, adding: "I think the sleeping giant is AI (artificial intelligence). We have no idea how far AI will take the empowered patient." She cited the example of a frustrated woman whose young son saw multiple physicians, none of whom could diagnose the cause of his chronic pain. The woman supplied ChatGPT with information from her son's MRI reports and other health data, and the chatbot suggested a diagnosis of tethered cord syndrome, which was confirmed by a neurosurgeon.

As generative AI tools become an increasingly important part of the health landscape, Gardy urges innovators to tap into communities to make sure they have representative data as they build their models. "You have to have lived experience part of your development process, whether it's an AI-based tool or a better community toilet," she says.

I asked Gardy what CEOs and other leaders can do to support the mission to create a world where every person can live a healthy, productive life. She replied: "I hope more business leaders, corporations, and tech developers embrace health innovation for all as a smart investment and seize the opportunity to improve lives and livelihoods around the world."

How does your company support health innovation?

Does your company support health innovation? If so, how do you engage communities to ensure that your product meets the needs of patients? Send your comments to me at stephaniemehta@mansueto.com. I'd like to share some of your insights in a future newsletter.

Read more:
healthcare innovation
8 next big technologies in health
Mark Cuban's audacious cure for high-priced drugs
Fast Company's most innovative healthcare companies for 2024