Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the U.S. federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans.

Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the U.S. bureaucracy.

One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis." The second and third people said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department.

Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation that Musk launched in 2023 on his social media platform, X.

If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics. It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with, or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added.

Musk, the White House and xAI did not respond to requests for comment. A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse."

Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for specific business purposes. "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says.

As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost or leaked, violate the privacy of Americans, or expose the country to security threats. Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws.

Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse.
"Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy. His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok.

DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said.

APPEARANCE OF SELF-DEALING

In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, said the second and third people. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions. If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said. "They were pushing it to be used across the department," said one of the people.

Reuters could not independently establish how much, if anything, the federal government would have been charged to use Grok. Reporters also couldn't determine whether DHS workers followed the directive from DOGE staff to use Grok or ignored the request.

DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created an internal DHS chatbot. The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts. Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show.

In May, DHS officials abruptly shut down employee access to all commercial AI tools, including ChatGPT, after workers were suspected of improperly using them with sensitive data, said the second and third sources. Staff can still use the internal DHS AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS. DHS did not respond to questions about the matter.

Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can serve for only 130 days, and it is unclear when that term ends. If he reduces his hours to part time, he could extend his term beyond May. He has said, however, that his DOGE team will continue its work as he winds down his role at the White House.

If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute that bars officials, including special government employees, from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor.
"This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time.

If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing."

The push to use Grok coincides with a larger DOGE effort, led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI across the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members. Schutt and Coristine did not respond to requests for comment.

DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not loyal to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance.

In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations. Reuters also reviewed two separate text message exchanges by people who were directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used.

Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the University of Pennsylvania expert on federal regulations and ethics.

In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring, nor had DOGE been directed to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson. The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently.

(Additional reporting by Jeffrey Dastin and Alexandra Alper. Editing by Jason Szep) Marisa Taylor, Alexandra Ulmer, Reuters
On days of heavy pollution in Sulphur, a southwest Louisiana town surrounded by more than 16 industrial plants, Cynthia "Cindy" Robertson once flew a red flag outside her home so her community knew they faced health hazards from high levels of soot and other pollutants.

But she stopped flying the flag after Louisiana passed a law last May that threatened fines of up to $1 million for sharing information about air quality that did not meet strict standards.

On Thursday, Robertson's group Micah 6:8 Mission and other Louisiana environmental organizations sued the state in federal court over the law, which they say restricts their free speech and undermines their ability to promote public health in heavily industrialized communities.

When neighbors asked where the flags went, "I'd tell them, 'The state of Louisiana says we can't tell y'all that stuff,'" Robertson said.

While the state has argued the law ensures that accurate data is shared with the public, environmental groups like Micah 6:8 Mission believed it was intended to censor them with "onerous restrictions" and violates their free speech rights, according to the lawsuit.

Despite having received Environmental Protection Agency funding to monitor Sulphur's pollution using high-quality air monitors for several years, Micah 6:8 Mission stopped posting data on the group's social media after the law was signed last May, Robertson said.

While federal law requires publicly disclosed monitoring of major pollutants, fence-line communities in Louisiana have long sought data on their exposure to hazardous and likely carcinogenic chemicals like chloroprene and ethylene oxide, which were not subject to those same regulations. Under the Biden administration, the EPA tightened regulations for these pollutants, though the Trump administration has committed to rolling them back.

The Biden administration's EPA also injected funding to support community-based air monitoring, especially in neighborhoods on the "fence line" with industrial plants that emitted pollutants they were not required to publicly monitor under federal law. Some groups say they lack confidence in the data the state does provide and embraced the chance to monitor the air themselves with federal funding. "These programs help detect pollution levels in areas of the country not well served by traditional and costly air monitoring systems," the lawsuit stated.

In response to the influx of grassroots air monitoring, Louisiana's Legislature passed the Community Air Monitoring Reliability Act, or CAMRA, which requires that community groups monitoring pollutants "for the purpose of alleging violations or noncompliance" with federal law must follow EPA standards, including using approved equipment that can cost hundreds of thousands of dollars.

"You can't talk about air quality unless you're using the equipment that they want you to use," said David Bookbinder, director of law and policy at the Environmental Integrity Project, which represents the plaintiffs. He added there was no need for community groups to purchase such expensive equipment when cheaper technology could provide "perfectly adequate results . . . to be able to tell your community, your family, whether or not the air they're breathing is safe."

Community groups sharing information based on cheaper air monitoring equipment that did not meet these requirements could face penalties of $32,500 a day and up to $1 million for intentional violations, according to analysis from the Environmental Integrity Project.

"We're a small nonprofit, we couldn't afford to pay one day's worth of that," Robertson said. "And the way the law is written, it's so ambiguous, you don't really know what you can and can't do."

There is no known instance in which the state has pursued these penalties, but community groups say the law has a chilling effect on their work. "The purpose of this was very clear: to silence the science, preventing people from doing anything with it, sharing it in any form," said Caitlion Hunter, director of research and policy for Rise St. James, one of the plaintiffs in the lawsuit.

"I'm not sure how regulating community air monitoring programs 'violates their constitutional rights,'" Louisiana Attorney General Liz Murrill countered in a written statement.

Industry groups are excluded from the law's requirements, the lawsuit notes. The law presumes "that air monitoring information lacks accuracy if disseminated by community air monitoring groups, but not by industry participants or the state," the complaint states.

The Louisiana Department of Environmental Quality and the Environmental Protection Agency declined to comment, citing pending litigation.

Brook is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues. Jack Brook, Associated Press/Report for America
2026 may still be more than seven months away, but it's already shaping up as the year of consumer AI hardware. Or at least the year of a flurry of high-stakes attempts to put generative AI at the heart of new kinds of devices, several of which were in the news this week. Let's review.

On Tuesday, at its I/O developer conference keynote, Google demonstrated smart glasses powered by its Android XR platform and announced that eyewear makers Warby Parker and Gentle Monster would be selling products based on it. The next day, OpenAI unveiled its $6.5 billion acquisition of Jony Ive's startup IO, which will put the Apple design legend at the center of the ChatGPT maker's quest to build devices around its AI. And on Thursday, Bloomberg's Mark Gurman reported that Apple hopes to release its own Siri-enhanced smart glasses. In theory, all these players may have products on the market by the end of next year.

What I didn't get from these developments was any new degree of confidence that anyone has figured out how to produce AI gadgets that vast numbers of real people will find indispensable. When and how that could happen remains murky, in certain respects more than ever.

To be fair, none of this week's news involved products that are ready to be judged in full. Only Google has something ready to demonstrate in public at all: here's Janko Roettgers's report on his I/O experience with prototype Android XR glasses built by Samsung. That the company has already made a fair amount of progress is only fitting, given that Android XR scratches the same itch the company has had since it unveiled its ill-fated Google Glass a dozen years ago. It's just that the available technologies, including Google's Gemini LLM, have come a long, long way.

Unlike the weird, downright alien-looking Glass, Google's Android XR prototype resembles a slightly chunky pair of conventional glasses. It uses a conversational voice interface and a transparent mini-display that floats on your view of your surroundings. Google says that shipping products will have all-day battery life, a claim, vague though it is, that Glass could never make. But some of the usage scenarios the company is showing off, such as real-time translation and mapping directions, are the same ones it once envisioned Glass enabling.

The market's rejection of Glass was so resounding that one of the few things people remember about the product is that its fans were seen as creepy, privacy-invading "glassholes." Enough has happened since then, including the success of Meta's smart Ray-Bans, that Android XR eyewear surely has a far better shot at acceptance. But as demoed at I/O, the floating screen came off as a roadblock between the user and the real world. Worst case, it might simply be a new, frictionless form of screen addiction that further distracts us from human contact.

Meanwhile, the video announcement of OpenAI and IO's merger was as polished as a Jony Ive-designed product (San Francisco has rarely looked so invitingly lustrous) but didn't even try to offer details about their work in progress. Altman and Ive smothered each other in praise and talked about reinventing computing. Absent any specifics, Altman's assessment of one of Ive's prototypes ("the coolest piece of technology that the world will have ever seen") sounded like runaway enthusiasm at best and Barnumesque puffery at worst.

Reporting on an OpenAI staff meeting regarding the news, The Wall Street Journal's Berber Jin provided some additional tidbits about the OpenAI device.
Mostly, they involved what it isn't, such as a phone or glasses. It might not even be a wearable, at least on a full-time basis: according to Jin, the product will be able to rest in one's pocket or on one's desk and complement an iPhone and MacBook Pro without supplanting them. Whatever this thing is, Jin cites Altman predicting that it will sell 100 million units faster than any product before it. In 2007, by contrast, Apple forecast selling a more modest 10 million iPhones in the phone's first full year on the market, a challenging goal at the time, though the company surpassed it.

Now, discounting the possibility of something transformative emerging from OpenAI-IO would be foolish. Ive, after all, may have played a leading role in creating more landmark tech products than anyone else alive. Altman runs the company that gave us the most significant one of the past decade. But Ive rhapsodizing over their working relationship in the video isn't any more promising a sign than his rhapsodizing over the $10,000 solid gold Apple Watch was in 2015. And Altman, the biggest investor in Humane's doomed AI Pin, doesn't seem to have learned one of the most obvious lessons of that fiasco: until you have a product in the market, it's better to tamp down expectations than stoke them.

You can't accuse Apple of hyping any smart glasses it might release in 2026. It hasn't publicly acknowledged their existence, and won't until their arrival is much closer. If anything, the company may be hypersensitive to the downsides of premature promotion. Almost a year ago, it began trumpeting a new AI-infused version of Siri, one it clearly didn't have working at the time, and still hasn't released. After that embarrassing mishap, silencing the skeptics will require shipping stuff, not previewing what might be ahead. Even companies that aren't presently trying to earn back their AI cred should take note and avoid repeating Apple's mistake.

I do believe AI demands that we rethink how computers work from the ground up. I also hope the smartphone doesn't turn out to be the last must-have device, because if it were, that would be awfully boring. Maybe the best metric of success is hitting Apple's 10-million-units-per-year goal for the original iPhone, which, perhaps coincidentally, is the same one set by EssilorLuxottica, the manufacturer of Meta's smart Ray-Bans. If anything released next year gets there, it might be the landmark AI gizmo we haven't yet seen. And if nothing does, we can safely declare that 2026 wasn't the year of consumer AI hardware after all.

You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you, or if you're reading it on FastCompany.com, you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I'm also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.

More top tech stories from Fast Company

How Google is rethinking search in an AI-filled world
Google execs Liz Reid and Nick Fox explain how the company is rethinking everything from search results to advertising and personalization. Read More

Roku is doing more than ever, but focus is still its secret ingredient
The company that set out to make streaming simple has come a long way since 2008. Yet its current business all connects back to the original mission, says CEO Anthony Wood. Read More
Gen Z is willing to sell their personal data, for just $50 a month
A new app, Verb.AI, wants to pay the generation that's most laissez-faire about digital privacy for their scrolling time. Read More

Forget return-to-office. Hybrid now means human plus AI
As AI evolves, businesses should use the technology to complement, not replace, human workers. Read More

It turns out TikTok's viral clear phone is just plastic. Meet the Methaphone
Millions were fooled by a clip of a see-through phone. Its creator says it's not tech; it's a tool to break phone addiction. Read More

4 free Coursera courses to jump-start your AI journey
See what all the AI fuss is about without spending a dime. Read More