The tech industry has long been infatuated with metaphors relating to the original space race. Its favorite one is “moon shot,” a term it applies to any undertaking of atypical ambition. But Chinese startup DeepSeek's release of a reasoning AI model that may be a peer of OpenAI's o1, despite having been created on the cheap and without access to Nvidia's best chips, has everyone reaching back to 1957's original Sputnik moment as a point of comparison.

It somehow took most people a week to pay attention to DeepSeek's R1, which the company released on January 20. Once they did, it spawned an insta-frenzy whose shockwaves ranged from the technological to the geopolitical. They include a stock market beating for Nvidia and other chipmakers, new questions about whether vast resources actually provide an edge in AI after all, and shock that the Biden administration's bans on shipping the most powerful U.S.-designed chips to China didn't prevent that country's researchers from making a possibly epoch-shifting breakthrough with the stuff they had on hand.

DeepSeek's abrupt impact has undeniable similarities to the panic set off more than 67 years ago when the U.S.S.R. successfully put a satellite into orbit before the U.S. did. But as former Reddit CEO Yishan Wong pointed out in a post this week, the parallels are shallow. For one thing, the Soviets worked in deep secrecy. By contrast, DeepSeek is publishing code and research relating to its techniques for creating AI that does more with less. That gives the entire world the opportunity to quickly build upon what the company has created, potentially accelerating AI's use everywhere rather than preserving a daunting competitive advantage for one company or country.

To be sure, the sudden commodification of AI could have profound implications for the handful of powerful U.S. companies that have hitherto propelled the technology forward. But while the details and timing of such an inflection point were unpredictable, its inevitability was not. For example, an internal Google document leaked in early 2023 was titled “We Have No Moat, and Neither Does OpenAI.” Or, as Microsoft CEO Satya Nadella put it when I talked to him later that year, “As far as I'm concerned, early leads in technology don't matter.”

DeepSeek's R1, and other AI technologies modeled upon its approach, may well force AI's incumbent giants to reassess everything about their future. Yet that's hardly an end game for the industry as we've known it. Artificial intelligence isn't anywhere near hitting an insurmountable wall that prevents further progress, and it's tough to imagine that companies with access to vast resources won't be able to unlock some advances that those operating under greater constraints cannot.

Most importantly, the dizzying improvements we've seen in LLMs over the past few years have yet to be matched by the real-world AI in the applications we use. As generative AI's novelty wears off, tools such as Microsoft's Copilot look like rougher and rougher drafts of something that needs further ingenuity to live up to its potential. The work of hooking up AI to all the processes we use to get stuff done has barely begun, and a lot of money stands to be made by the companies that get it done. That's the underlying fact behind the industry's obsession with so-called agentic AI, a slightly annoying buzzword that encompasses forms of the technology that can perform complex tasks without constant human oversight.
There are some decent early stabs at the idea out there, such as Asana's AI teammates, which already shoulder some of the grunt work of wrangling tasks in the project-management app. But those examples are outnumbered by instances of agentic AI that mostly prove the technology isn't ready to do much on its own. Last week, for example, OpenAI released Operator, a research preview available to users of its $200/month ChatGPT Pro tier. Operator can type into a web browser and control a mouse pointer, a theoretical first step toward letting it handle all the tasks we humans perform on the web. Over at Platformer, Casey Newton reported on his hands-on experience with the service, which included asking it to perform tasks such as writing a high school lesson plan for The Great Gatsby. It took minutes to achieve results that were no better than what the non-agentic ChatGPT came up with almost instantly. And when Newton tried to use Operator to order groceries, something a stock chatbot can't do, it turned out that the current version of Operator is pretty hopeless at that job, too. In December, I got a demo of Google's experimental agentic AI, Project Mariner, that also involved grocery ordering and was too glacially slow to look like progress.

That Operator and Mariner aren't yet ready to handle a humble task such as buying a gallon of milk isn't evidence that they're exercises in futility, just that the goal of making AI usefully agentic remains largely aspirational, even at OpenAI and Google. DeepSeek and other feats of LLM optimization yet to come won't get in the way of further development of agentic AI. Indeed, they'll surely help by making the underlying infrastructure more accessible to more people with good ideas. Even then, the U.S.'s AI kingpins will maintain some distinct advantages, from the money and engineering talent they can throw at tomorrow's challenges to their ability to market new products to big, established customer bases. Maybe DeepSeek-R1's arrival marks a turning point for these companies. But only a failure of imagination would doom them to irrelevance.

You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you, or if you're reading it on FastCompany.com, you can check out previous issues and sign up to get it yourself every Wednesday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I'm also on Bluesky, Mastodon, and Threads.

More top tech stories from Fast Company:

Everything wrong with the AI landscape in 2025, hilariously captured in this SNL sketch
In a recent Saturday Night Live sketch, Bowen Yang and Timothée Chalamet managed to highlight several glaring flaws with the current technology.

What people on TikTok are really talking about when they say “cute winter boots”
The trend has nothing to do with footwear but is instead an example of algospeak, or the use of coded language to avoid filters and censorship.

Bookshop.org is launching e-books to help local bookstores compete with Amazon's Kindle
The challenger brand to Amazon's hegemony has big plans to build further, starting with a new e-book initiative.

These 5 trends show where music and streaming are headed in 2025
Data firm Luminate's music streaming data shows where the industry is headed in 2025, from 2024's big year for pop to growth in international subscribers.
Why did DeepSeek tell me it's made by Microsoft?
The Chinese-language model has shocked and awed the American stock market. But my chat with it indicates there are many reasons to be skeptical.

Your guide to avoiding job scams in 2025
You've probably felt the thrill that comes with receiving a job offer. You read the congratulatory email, begin to imagine life in your new role, then quickly fill out all the required HR paperwork and receive the necessary equipment. And if all is well, you start preparing for your first day. But what…
Welcome to Pressing Questions, Fast Company's work-life advice column. Every week, deputy editor Kathleen Davis, host of The New Way We Work podcast, will answer the biggest and most pressing workplace questions.

Q: How can I get more sleep?

A: I am writing this at 11:12 p.m., so this advice is as much for myself as it is for anyone else. Here's what we should all be doing differently:

First, set a schedule and stick to it. The “stick to it” part is hard, but it's called the golden rule of sleep for a reason. Set a bedtime, and then plan at least 20 to 40 minutes back from that time to start your bedtime routine. You might even need an alarm to remind you that it's time to end what you're doing. So, if you have to get up at 7 a.m. and you want to get seven hours of sleep, you want to be asleep by midnight. That means you should start your bedtime routine by 11:30 p.m.

And speaking of bedtime routines, you know you can't go directly from staring at a screen to lights out, right? Your mind needs to wind down. Sleep experts recommend that you not only stick to the same bedtime every night, but that you also stick to the same (or similar) process each night. One option is to take things in 15-to-20-minute stages: First, prep for the next day (pack lunches, set out clothes) and do your nightly hygiene routine. Then spend 20 minutes doing a relaxing activity like reading. Whatever you do, don't sleep with your phone next to you.

The other golden piece of sleep advice is intuitive, but many of us with desk jobs skip it: Do some kind of physical activity during the day, but not right before bed. If you spend 30 to 40 minutes a day being active, you will be more physically tired and it will be easier to fall asleep.

Want more advice on how to get more sleep? Here you go:

Ultimate guide to getting more sleep
5 ways to get a better night's sleep
Having trouble sleeping? Ask yourself these 6 questions
Illinois lawyer Mathew Kerbis markets himself as the Subscription Attorney, charging businesses and individual clients a monthly rate for legal advice and offering additional services like contract review and legal document drafting for a flat fee. Kerbis is a fairly tech-savvy lawyer: he's a regular at the American Bar Association's ABA Techshow conference, he hosts a podcast about subscription-based billing and other industry innovations, and he uses a Stripe-integrated web portal to streamline client payments. So it's not surprising that he's spent time experimenting with AI tools to help him do legal research, draft documents, and otherwise assist clients more efficiently. “The faster I can get something to a client, if you think about it in terms of time equals money, the more money I make,” he says. “But also, the more valuable it is to the clients to actually get things faster.”

Today, Kerbis is a customer of Paxton AI, a legal AI provider that boasts it can help lawyers quickly draft legal documents, analyze everything from contracts to court filings, and conduct research on legal questions based on up-to-date laws and court precedent. He says Paxton can help tweak model contracts for a client's situation, find relevant sections of law for a particular legal issue, and review proposed agreements for potentially troublesome terms.

Doing the work of a young attorney, at superhuman speed

“If a client books a call with me and gives me a contract right there on the phone with them, we could start identifying problematic issues before I've even put human eyes on the contract,” Kerbis says. Paxton, a startup that just announced a $22 million Series A funding round, is one of a number of companies offering AI-powered assistance to lawyers. Its competitors range from established legal data vendors like LexisNexis and Thomson Reuters to other startups, all looking to use the text-processing power of large language model AI to more speedily parse, analyze, and draft the voluminous and precise documents that are inherent to the practice of law, and part of the reason for lawyers' notoriously long hours.

“The idea is to do the work of, say, a young assistant or even a young attorney at a firm, but to do it at superhuman speed,” says Jake Heller, head of product for CoCounsel, a legal AI tool from Thomson Reuters. “If you talk to lawyers, I think there's a universal feeling that there aren't enough hours in the day to do all the things they want to do for the companies they work for, their clients, their law firms.”

[Screenshot: courtesy of Paxton]

A 2024 study by legal tech provider Clio found that 79% of legal professionals already use artificial intelligence tools to some degree. Roughly 25% “have adopted AI widely or universally,” according to the survey. Clio CTO Jonathan Watson says Clio Duo AI is the fastest-growing product in the company's history; it helps lawyers answer questions about particular documents, schedule meetings, and even analyze data from the company's law firm management software, all of which allows them to focus more on legal work and less on rote tasks. “What we ultimately want to do is free them of that burden, so they can get back to doing what they do best, and that's practicing law,” Watson says.
Putting AI through law school

And while lawyers' misuse of AI has sometimes generated headlines when they've submitted court filings with hallucinated quotes and citations, legal AI providers generally say they've built their software with guardrails to reduce such errors, as well as protections for attorney-client confidentiality, and linked their language models to specialized legal knowledge bases. “In order to be a proper legal assistant, you need legal training,” says Paxton AI cofounder and CEO Tanguy Chau. “And what that means to us is a complete understanding of all the laws, rules, regulations, statutes that govern the legal practice.”

Paxton AI has used a large language model to help analyze court decisions and document how they reference each other in a network graph structure, says cofounder and CTO Michael Ulin. “We have the LLM make a determination as to whether it's been upheld or overturned,” he says, something that was historically done by human attorneys updating legal reference books and databases.

Those sorts of materials are, in fact, part of what helps power CoCounsel, with Heller pointing to a company history of publishing legal information and references dating back to the founding of lawbook giant West Publishing, now part of Thomson Reuters, in the 1800s. “They've hired some of the best attorneys in the country and said, ‘Well, give us your commentary or thoughts on this topic,’” Heller says, with that material now accessible to the AI. “It's able to also do what a lawyer would do, which is draw from these resources, read them first, get a deep understanding of a topic or field from the world's best experts that really only we have, and then that informs its decision-making process.”

Lowering the risk of hallucination

Legal AI tools generally use the technique known as retrieval-augmented generation (RAG), in which search-engine-like processes first locate relevant source materials, then provide them to language models to use in producing a well-cited, on-point response with less risk of hallucination. “Even several years ago, it was very clear that the retrieval, you know, the R element to the RAG, was the most important part of this whole process: being able to properly identify what cases are responsive. Not just cases, but legislation, regulation,” says Mark Doble, cofounder and CEO of Alexi, which offers legal AI with a focus on litigation. “And so we've done a ton of work in making sure that the retrieval component is really good.”

Alexi has also developed rigorous automated and manual systems to verify that its AI consistently, and with statistical predictability, gives accurate responses, Doble says. Similarly, CoCounsel is routinely quizzed on tens of thousands of test cases, Heller says, and Paxton has published to GitHub its AI's results on two legal AI benchmarks, including one developed by scholars at Stanford and Yale as part of a study of legal hallucinations by LLMs, where Paxton claims a nearly 94% accuracy rate.

[Screenshot: courtesy of Paxton]

Still, even the most rigorously tested AI systems aren't immune to making mistakes. RAG-powered systems can still hallucinate, particularly if the underlying retrieval process produces irrelevant or misleading hits, something familiar to anyone who has read the AI summaries that now top many internet search results. “From the perspective of general AI research, we know that RAG can reduce the hallucination rate, but it is no silver bullet,” says Daniel E. Ho, a professor at Stanford Law School.
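To make the retrieve-then-generate loop described above concrete, here is a minimal sketch in Python. It illustrates the general RAG pattern only, not any vendor's pipeline; the function names, the toy relevance scoring, and the prompt format are all assumptions for illustration.

```python
# Minimal RAG sketch: retrieve candidate sources first, then hand them
# to a language model with instructions to answer only from those
# sources. Everything here is illustrative, not a real vendor API.
from dataclasses import dataclass

@dataclass
class CaseDocument:
    citation: str  # e.g., a case citation string
    text: str

def overlap_score(query: str, text: str) -> float:
    """Toy lexical-overlap stand-in for a real relevance model
    (production systems use vector search or a legal search engine)."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def retrieve(query: str, index: list[CaseDocument], k: int = 5) -> list[CaseDocument]:
    """The 'R' in RAG: pick the k most relevant documents for the query."""
    ranked = sorted(index, key=lambda d: overlap_score(query, d.text), reverse=True)
    return ranked[:k]

def answer_with_rag(query: str, index: list[CaseDocument], llm) -> str:
    """Generate a cited answer grounded in the retrieved sources.
    `llm` is any callable mapping a prompt string to a response string."""
    sources = retrieve(query, index)
    context = "\n\n".join(f"[{d.citation}]\n{d.text}" for d in sources)
    prompt = (
        "Answer the question using ONLY the sources below, citing them "
        f"by bracketed citation.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
    return llm(prompt)
```

As Doble's comment suggests, the quality of the whole loop hinges on that retrieval step: if `retrieve` surfaces the wrong documents, the model will confidently cite them anyway.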
Ho is one of the authors of the legal hallucination study and an additional paper looking specifically at errors made by legal-focused AI. In one example, Ho and his colleagues found that a RAG-powered legal AI system asked to identify notable opinions written by “Judge Luther A. Wilgarten” (a fictitious jurist with the notable initials L.A.W.) pointed to a case called Luther v. Locke. It was presumably the result of a routine false-positive search result based on the made-up judge's first name. While a human lawyer searching a decision database would have realized the mix-up and quickly skipped past that search result, the AI was apparently unable to note that the case was decided by a differently named judge. “It would not surprise any lawyer who's spent time trying to research cases that there are going to be false positives in those retrieved results,” Ho says. “And when those form the basis of the generated statement through RAG, that's when hallucinations can result.”

Even so, Ho says, the tools can still be useful in legal research, drafting, and other tasks, provided they're not treated as out-and-out replacements for human lawyers and, ideally, with AI developers providing information about how they're built and how they perform.

[Screenshot: courtesy of Paxton]

“The lawyer is fully responsible for the work on behalf of the client”

Since lawyers are already used to picking apart documents, arguments, and citations, as well as double-checking the work of junior associates and paralegals, they may be as well equipped as any professional to use AI as a helpful tool (the first or last pair of eyes on a document, or a path to finding case law relevant to a particular question) rather than an omniscient oracle. “This is something you're asking to infer how to respond to you, and you need to look at that with a critical eye and go, does that make sense?” says Clio's Watson.

As some attorneys using general-purpose AI tools have learned the hard way, groups like the American Bar Association have said relying on AI doesn't absolve lawyers of their basic duties to competently represent their clients, safeguard their confidentiality, and ensure what they present in court is accurate and truthful. “In short, regardless of the level of review the lawyer selects, the lawyer is fully responsible for the work on behalf of the client,” according to a formal American Bar Association opinion from July. Experts have suggested that lawyers who rely on mistaken AI-generated information could be sued by clients for malpractice. Even law-focused AI solutions often disclaim legal responsibility for errors, meaning lawyers are likely on the hook for any AI mistakes they fail to catch.

Kerbis says he uses Paxton AI to find on-point sections of the law, but he'll still read the actual cited references. And when he asks the AI to find potential red flags in a contract, he'll naturally evaluate them himself. Other Paxton customers use the tool to look for the relevant “needle in a haystack” in bulky document repositories, like medical records involved in an injury suit, Ulin says. Since AI can work so quickly, and its findings can often be quickly verified by human attorneys, there may be little reason not to at least see what it can find. Kerbis says the AI sometimes flags a section of a contract that's ultimately unobjectionable, like an unusually long contract term that actually benefits his client, but such findings are easy enough to skip past.
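Returning to Ho's Luther v. Locke example above: a human researcher would have noticed the case was decided by a differently named judge and skipped it. One hypothetical guardrail, sketched below, is to cross-check retrieved results against the entities named in the query before they ever reach the language model. This assumes retrieval results carry structured metadata such as the deciding judge; the field and function names are illustrative, not drawn from any of the products discussed here.

```python
# Hypothetical post-retrieval check for the false-positive failure mode
# Ho describes: discard retrieved cases whose deciding judge doesn't
# match the judge the user actually asked about. Illustrative only.
from dataclasses import dataclass

@dataclass
class RetrievedCase:
    citation: str
    judge: str    # deciding judge per the court record (assumed metadata)
    excerpt: str

def filter_by_judge(results: list[RetrievedCase], asked_judge: str) -> list[RetrievedCase]:
    """Keep only cases actually decided by the judge named in the query."""
    wanted = asked_judge.strip().lower()
    return [r for r in results if r.judge.strip().lower() == wanted]

# A lexical match on "Luther" surfaces Luther v. Locke, but the metadata
# check discards it because its (invented, illustrative) judge is not
# the fictitious "Luther A. Wilgarten" the query asked about.
results = [RetrievedCase("Luther v. Locke", "Jane Q. Locke", "...")]
print(filter_by_judge(results, "Luther A. Wilgarten"))  # -> []
```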
Since Kerbis generally charges clients flat fees instead of an hourly rate, it’s to his advantage if he can do good legal work faster. Some experts predict such arrangements will be more common if AI helps more lawyers work more quickly, though previous predictions of the end of the billable hour have proven premature. Also yet to be determined is whether AI-forward lawyers and law firms will ultimately prefer to get AI tools through an existing vendor like Thomson Reuters or Clio, or from an AI-focused startup like Alexi or Paxton. And it remains to be seen whether the technology will really provide lawyers the level of productivity boost proponents and AI vendors hope for. If it does, it may well become as necessary to the modern practice of law as Microsoft Word. Already, some of those paying the bills and waiting on legal advice have begun to urge lawyers to adopt AI, says Doble, adding, “It’s not clients getting angry when lawyers use AI tools. It’s clients getting angry when lawyers don’t use AI tools.”