Tech giants are making grand promises for the AI age. The technology, we are told, might discover a new generation of medical interventions, and possibly answer some of the most difficult questions facing physics and mathematics. Large language models could soon rival human intellectual abilities, they claim, and artificial superintelligence might even best us. This is exciting, but also scary, they say, since the rise of AGI, or artificial general intelligence, could pose an uncontrollable threat to the human species.

U.S. government officials working with AI, including those charged with both implementing and regulating the tech in the government, are taking a different tack. They admit that the government is still falling behind the private sector in implementing LLM tech, and that there's a reason for agencies to speed up adoption. Still, many question the hyperbolic terminology used by AI companies to promote the technology. And they warn that the biggest dangers presented by AI are not those associated with AGI that might rival human abilities, but other concerns, including unreliability and the risk that LLMs are eventually used to undercut democratic values and civil rights.

Fast Company spoke with seven people who've worked at the intersection of government and technology about the hype behind AI, and what excites and worries them about the technology. Here's what they said.

Charles Sun, former federal IT official

Sun, a former employee at the Department of Homeland Security, believes AI is, yes, overhyped, especially, he says, when people claim that AI is bigger than the internet. He describes the technology simply as "large-scale pattern recognition powered by statistical modeling," noting AI's current wave is "impressive but not miraculous." Sun argues that the tech is an accelerator of human cognition, not a replacement for it. "I prefer to say that AI will out-process us, not outthink us. Systems can already surpass human capacity in data scale and speed, but intelligence is not a linear metric. We created the algorithms, and we define the rules of their operation."

"AI in government should be treated as a critical-infrastructure component, not a novelty," he continues. "The danger isn't that AI becomes 'too intelligent,' but that it becomes too influential without accountability. The real threat is unexamined adoption, not runaway intelligence."

Former White House AI official

"I was worried at the beginning of this . . . when we decided that instead of focusing on mundane everyday use cases for workers, we decided at a national security front that we need to wholesale replace much of our critical infrastructure to support and be used by AI," says the person, who spoke on background. "That creates a massive single point of failure for us that depends largely on compute and data centers never failing, and models being impervious to attacks, neither of which I think anyone, no matter how technical they are or not, would place their faith in."

The former official says they're not worried about AGI, at least for now: "Next token prediction is not nearly enough for us to model complex behaviors and pattern recognition that we would qualify as general intelligence."
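What "next token prediction" means in practice can be shown with a toy sketch: a model that only counts which word tends to follow which, then repeatedly emits the most likely continuation. The bigram counter below, over a made-up corpus, is an illustration of the basic loop, not how any production LLM is built:

```python
# Toy illustration of next-token prediction: a bigram "model" that
# only knows counts of which word follows which. The corpus is
# invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count word -> next-word frequencies.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(word: str) -> str:
    """Return the single most likely next token (greedy decoding)."""
    return follows[word].most_common(1)[0][0]

# Generate text one token at a time, feeding each prediction back in.
token = "the"
for _ in range(4):
    token = predict_next(token)
    print(token, end=" ")  # -> cat sat on the
```

Real LLMs replace the frequency table with a neural network over tens of thousands of subword tokens, but the generate-one-token-and-feed-it-back loop is the same; the official's point is that this loop alone may not amount to general intelligence.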
David Nesting, former White House AI and cybersecurity adviser

"AI is fantastic at getting insights out of large amounts of data. Those who have AI will be better capable of using data to make better decisions, and to do so in seconds rather than days or weeks," Nesting says. "There's so much data about us out there that hasn't really hurt us because nobody's ever really had the tools to exploit it all, but that's changing quickly. I'm worried about the government turning AI against its own people, and I'm worried about AI being used to deprive people of their rights in ways that they can't easily understand or appeal."

Nesting adds: "I'm also worried about the government setting requirements for AI models intended to eliminate 'bias,' but without a clear definition of what 'bias' means. Instead, we get AI models biased toward some 'official' ideological viewpoint. We've already seen this in China: Ask DeepSeek about Tiananmen Square. Will American AI models be expected to maintain an official viewpoint on the January 6th riots?"

"I think we're going to be arguing about what AGI means long after it's effectively here," he continues. "Computers have been doing certain tasks better than people for nearly a century. AI is just expanding that set of tasks more quickly. I think the more alarming milestone will be the point at which AI can be exploited by people to increase their own power and harm others. You don't need AGI for that, and in some ways we're already there. Americans today are increasingly and unknowingly interacting online with fake accounts run by AI that are indistinguishable from real people, even whole communities of people, confirming every fear and anxiety they have, and validating their outrage and hatred."

Abigail Haddad, former member of the AI Corps at DHS

The biggest problem currently, Haddad argues, is that AI is actually being underused in government. An immense amount of work went into making these tools available inside federal agencies, she notes, but what's available in the government is still behind what's available commercially. There are concerns about LLMs training on data, but those tools operate on cloud systems that follow federal cybersecurity standards. "People who care about public services and state capacity should be irate at how much is still happening manually and in Excel," she says.

Tony Arcadi, former chief information officer of the Treasury Department

"Computers are already smarter than us. It's a very nebulous term. What does that really consist of? At least my computer is smarter than me when it comes to complex mathematical calculations," Arcadi says. "The sudden emergence of AGI or the singularity, there's this thing called Roko's basilisk, where the AI will go back in time and, I don't remember the exact thing, but kill people who interfered with its development. I don't really go for all of that."

He adds: "The big challenge that I see leveraging AI in government is less around, if you will, the fear factor of the AI gone rogue, but more around the resiliency, reliability, and dependability of AI, which, today, is not great."

Eric Hysen, former chief information officer at DHS

When asked a few months ago whether AI might become so powerful that the process of governing might be offloaded to software, Hysen shared the following: "I think there is something fundamentally human that Americans expect about their government. . . . Government decision-making, at some level, is fundamentally different than the way private companies make decisions, even if they are of very similar complexity." Some decisions, he added, "we're always going to want to be fundamentally made by a human being, even if it's AI-assisted in a lot of ways."
"It's going to look, more long term, like heavy use of AI that will still ultimately feed, for a lot of key things, to human decision-makers."

Arati Prabhakar, former science and technology adviser to President Biden

Prabhakar, who led the Office of Science and Technology Policy under President Joe Biden, is concerned that the conversation about AGI is being used to influence policy around the technology more broadly. She's also skeptical that the technology is as powerful as people foretell. "I really feel like I'm in a freshman dorm room at 2 in the morning when I start hearing those conversations," she says. "Your brain is using 20 or 25 watts to do all the things that it does. That includes all kinds of things that are way beyond LLMs. [It's] about 25 watts compared to the mega data centers that it takes to train and then to use AI models."

That's just one hint that we are so far from anything approximating human intelligence, she argues. "Most troubling is it puts the focus on the technology rather than the human choices that are being made in companies by policymakers about what to build, where to use it, and what kind of guardrails really will make it effective."
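Prabhakar's power comparison is easy to put in rough numbers. The 20-to-25-watt brain figure is from her quote; the data-center figure below is an illustrative assumption (training clusters are commonly described at tens-of-megawatts scale), not a number from the article:

```python
# Back-of-the-envelope version of Prabhakar's comparison.
BRAIN_WATTS = 20            # human brain, per her quote
CLUSTER_WATTS = 30_000_000  # assumed 30 MW training cluster (illustrative)

ratio = CLUSTER_WATTS / BRAIN_WATTS
print(f"Training draw is roughly {ratio:,.0f}x the brain's power budget")
# -> Training draw is roughly 1,500,000x the brain's power budget
```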
This story was supported by the Tarbell Center for AI Journalism.
President Trump recently promised to make America the "crypto capital of the world." And his administration is working hard to make that pledge a reality. White House officials have established a working group on digital asset markets and directed federal agencies to craft a strategy to cement U.S. leadership. The president's legislative team, meanwhile, helped push the GENIUS Act (Guiding and Establishing National Innovation for U.S. Stablecoins Act) through Congress earlier this summer, thus creating the first federal framework for stablecoins. And they're working to pass the Clarity Act (Digital Asset Market Clarity Act), which would finally settle disputes over which regulator oversees digital assets.

It's refreshing to see our political leaders working to bring digital assets into the financial mainstream, especially after years of hostility from the prior administration. But the work is far from finished, and achieving universal legitimacy will require not just favorable laws and regulations, but also behavioral changes at leading crypto firms.

Conflicting guidance

For more than a decade, crypto innovators faced a patchwork of state regimes and conflicting federal guidance. The lack of clear regulation led to a proliferation of scams and bad actors, and kept many investors on the sidelines. Big banks and other legacy financial institutions hesitated to adopt cryptocurrencies and the underlying blockchain technology they're based on, even as top financiers acknowledged blockchain's potential to reshape the entire industry.

The GENIUS Act represents Washington's first serious attempt to genuinely regulate, rather than ignore or suppress, one of the leading forms of cryptocurrency. The new law requires stablecoin issuers to maintain dollar-for-dollar reserves and submit to audits. Far from rejecting this level of regulation, crypto leaders practically begged for it. They recognized that federal oversight and transparent standards are needed to transform what the public previously viewed as a speculative product into a reliable payment instrument. That's why industry leaders are also working with the White House and Congress to finalize the Clarity Act, which would define the boundaries of authority between the Securities and Exchange Commission and the Commodity Futures Trading Commission, delivering the kind of predictability that underpins every functioning capital market.

Cultural shift

But better regulation alone won't bring about the mainstream approval that industry leaders seek. Only an internal cultural shift, and rigorous self-policing, can deliver that. Every blockchain transaction depends on various forms of intellectual property: from patents on mobile crypto wallets and bitcoin mining data centers, to trade secrets in proprietary trading algorithms, to copyrights protecting exchange software, to trademarks that build consumer trust. Coinbase, for instance, holds nearly 200 active patents. But most of the intellectual property powering today's blockchain activity belongs to third parties outside the crypto industry.

Yet even as leading platforms generate billions in revenue, the industry remains reluctant to acknowledge the legitimacy of IP rights. This reluctance is playing out in court. In May, Bancor's nonprofit arm sued Uniswap, alleging that the leading decentralized exchange built its multibillion-dollar business on Bancor's patented automated market maker technology without authorization.
And earlier this summer, Malikie Innovations filed suits against Core Scientific and Marathon Digital, claiming their bitcoin mining operations infringe on Malikie's patents for elliptic curve cryptography. Elliptic curve cryptography (ECC), a cryptographic technique developed and patented by Certicom years before crypto went mainstream, was licensed by companies like Cisco and Motorola, as well as the National Security Agency.

Cases like these highlight the tension: Crypto companies depend on IP to function, but too many are willing to disregard the IP rights of others, even as they clamor for legitimacy.
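For context, elliptic-curve signatures are the mechanism bitcoin-style systems use to prove ownership of funds. A minimal sketch of ECDSA over secp256k1 (the curve bitcoin uses), written with the open-source cryptography package, illustrates the general technique; it is not any party's patented implementation:

```python
# Minimal ECDSA sign/verify over secp256k1, the curve bitcoin uses.
# Illustrative only; uses the open-source `cryptography` package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

message = b"example transaction payload"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature if the message or key is wrong.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```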
Not how respectable companies operate

This simply isn't the way respectable companies in mature industries operate. Spotify and Apple Music wouldn't enjoy their positive reputations if they refused to pay royalties to artists and record labels. Streaming platforms like Netflix and Hulu would be pariahs if they pirated films. Banks would be shunned by investors if they treated software licenses as optional. If leading crypto firms want to be seen as respectable, investable pillars of the global economy, they need to meet those same standards when it comes to intellectual property.

Digital assets are here to stay. But universal legitimacy will come only from a combination of comprehensive regulation and a cultural shift within the industry itself.
If you slip a tiny wearable device on your fingertip and slide it over a smooth surface like a touchscreen, you can feel digital textures like denim or mesh. The device, designed by researchers at Northwestern University, is the first of its kind to achieve human resolution, meaning that it can more accurately match the complex way a human fingertip senses the world.

"In previous attempts at haptic devices like this, once you compare them to real textures, you realize there's something still missing," says Sylvia Tan, a PhD student at Northwestern and one of the authors of a new study in Science Advances about the research. "It's close, but not quite there. Our work is trying to just get that one step closer."

The wearable, made from flexible, paper-thin latex, is embedded with tiny nodes that push into the skin in a precise way and can move up to 800 times per second. Past devices had low resolution: the touch equivalent of a pixelated image, or an early movie from the 1890s with so few frames that the movement looks jerky. Using more nodes, arranged at a particular density, improves that resolution.

Earlier devices were also bulky. The ultrathin new technology, which weighs less than a gram, is designed to be comfortable to wear. "A big goal was to make it very lightweight so you aren't distracted by it," Tan says. "And [to make] something that we call 'haptically transparent': that means that even when you're wearing it, you can still perceive the real world, so you can perform everyday tasks."

In the study, users could identify fabrics like corduroy or leather with 81% accuracy. The technology is still in development, but in the future, it could make it possible to feel products as you shop online. It could also have more immediate uses for people who are visually impaired, like making it possible to feel a tactile map or translating text on a screen to braille without a large, expensive device. On devices like microwaves, where physical buttons have often been replaced by flat touchscreens, the wearable could help a visually impaired person know where to push. It could also help improve human-robot interfaces. "In the medical field, the Da Vinci robot has very good kinesthetic force feedback," Tan says. "But getting a surgeon to feel exactly what's happening at your fingertip as you move the angle of your finger is not quite there. And that's very important for high-skill workers."
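To make the article's resolution and refresh-rate ideas concrete, here is a hypothetical simulation of the rendering loop such a device implies: sample a texture pattern under the fingertip and recompute each node's amplitude 800 times per second. The array size, node spacing, and texture function are invented for illustration; this is not the Northwestern team's software:

```python
# Hypothetical sketch of rendering a texture on a haptic node array.
# Refresh rate is from the article; everything else is assumed.
import math
import time

REFRESH_HZ = 800  # node update rate reported in the article
GRID = 4          # assumed 4x4 node array (illustrative)

def texture_height(x: float, y: float) -> float:
    """Fake corduroy: parallel ridges, returned as a 0..1 amplitude."""
    return 0.5 + 0.5 * math.sin(40 * x)

def render_frame(finger_x: float, finger_y: float) -> list[list[float]]:
    """Compute per-node amplitudes for the patch under the finger."""
    spacing = 0.001  # assumed 1 mm node spacing
    return [[texture_height(finger_x + i * spacing, finger_y + j * spacing)
             for j in range(GRID)] for i in range(GRID)]

finger_x = 0.0
for _ in range(REFRESH_HZ):        # one second of simulated sliding
    frame = render_frame(finger_x, 0.0)
    # Real hardware would drive the actuator nodes with `frame` here.
    finger_x += 0.05 / REFRESH_HZ  # finger sliding at 5 cm/s
    time.sleep(1 / REFRESH_HZ)
```

The takeaway matches the study's framing: texture fidelity comes from both spatial density (more nodes per millimeter) and temporal resolution (how fast each node can move), the two dimensions where earlier devices fell short.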