Watching videos on the Apple Vision Pro is one of the few use cases early adopters have found for the VR headset, but Apple's produced only a handful of immersive videos to watch on it. Blackmagic's new camera could change that. The Blackmagic URSA Cine Immersive is the first camera that can shoot in Apple's Immersive Video format, and it's available to pre-order now for $29,995, with shipping expected in late Q1 2025.
Blackmagic first announced it was working on hardware and software for producing content for the Vision Pro at WWDC 2024. As promised then, the camera is capable of capturing 3D footage at 90 fps, with a resolution of 8160 x 7200 per eye. Blackmagic says the URSA Cine Immersive uses custom lenses designed for the URSA Cine's large-format image sensor with extremely accurate positional data. It also has 8TB of network storage built in, which the company says records directly to the included Blackmagic Media Module and can be synced live to a DaVinci Resolve media bin for editors to access footage remotely.
[Image credit: Blackmagic Design]
Along with the URSA Cine Immersive, Blackmagic is also updating DaVinci Resolve Studio to work with Apple's Immersive Video format, and including new tools so editors can pan, tilt and roll footage while they edit on a 2D monitor or in a Vision Pro.
The whole package sounds expensive at nearly $30,000, but you're getting a lot more out of the box than you normally would with one of Blackmagic's cameras. A normal 12K URSA Cine camera costs around $15,000, but doesn't include lenses or built-in storage. Those come standard on the URSA Cine Immersive.
Apple has filmed several short documentaries, sports clips, and at least one short film in its Immersive Video format, but it hasn't released a camera of its own for third-party production companies to produce content with. And while any iPhone 15 Pro or iPhone 16 can capture 3D spatial videos, those phones can't produce Immersive Video, which has a 180-degree field of view. Blackmagic's camera should make it possible for a lot more immersive content to be created for the Vision Pro and other VR headsets. Now Apple just needs to make a Vision product more people are willing to pay for.
Meta's Threads app has now grown to 300 million users, with more than 100 million people using the service each day. Mark Zuckerberg announced the new milestone in a post on Threads, saying that Threads' "strong momentum continues."
Zuckerberg has repeatedly speculated that Threads has a good chance of becoming the company's next billion-user app. Though it's still pretty far from that goal, its growth seems to be accelerating. The app hit 100 million users last fall and reached 275 million in early November. Elsewhere, Apple revealed that Threads was the second-most downloaded app of 2024, behind shopping app Temu, which took the top spot in Apple's rankings.
The coming weeks could see some major changes for Threads as Meta looks to capitalize on that growth. The company plans to begin experimenting with the first ads for Threads in early 2025, according to a recent report from The Information.
Threads isn't the only app trying to reclaim the public square as some longtime users depart the platform now known as X. Bluesky has also seen significant growth of late. The decentralized service nearly doubled its user base in November and currently has just over 25 million users. (The company has never revealed how many of its users visit the site daily.) Though Bluesky is still much smaller than Threads, Meta seems to have taken inspiration from some of its signature features in recent weeks, including its take on starter packs and custom feeds.
NASA says it was able to use the James Webb Space Telescope to capture images of planet-forming disks around ancient stars that challenge theoretical models of how planets can form. The images support earlier findings from the Hubble telescope that hadn't been confirmed until now.
Webb's new, highly detailed images were captured in the Small Magellanic Cloud, a dwarf galaxy neighboring our own Milky Way. The telescope was specifically focused on a cluster called NGC 346, which NASA says is a good proxy for conditions in the early, distant universe, and which lacks the heavier elements that have traditionally been connected to planet formation. Webb captured a spectrum of light that suggests protoplanetary disks are still hanging around those stars, going against previous expectations that those disks would have blown away within a few million years.
[Image credit: NASA, ESA, CSA, STScI, Olivia C. Jones (UK ATC), Guido De Marchi (ESTEC), Margaret Meixner (USRA)]
"Hubble observations of NGC 346 from the mid-2000s revealed many stars about 20 to 30 million years old that seemed to still have planet-forming disks," NASA writes. Without more detailed evidence, that idea was controversial. The Webb telescope was able to fill in those details, suggesting that disks in our neighboring galaxy have a much longer window in which to collect the dust and gas that forms the basis of a new planet.
As to why those disks are able to persist in the first place, NASA says researchers have two possible theories. One is that radiation pressure from stars in NGC 346 simply takes longer to dissipate planet-forming disks. The other is that the larger gas cloud that's necessary to form a Sun-like star in an environment with fewer heavy elements would naturally produce larger disks that take longer to fade away. Whichever theory proves correct, the new images are beautiful evidence that we still don't have a full grasp of how planets form.
After a federal court last week denied TikTok's request to delay a law that could ban the app in the United States, the company is now turning to the Supreme Court in an effort to buy time. The social media company has asked the court to temporarily block the law, currently set to take effect January 19, 2025, it said in a brief statement.
"The Supreme Court has an established record of upholding Americans' right to free speech," TikTok wrote in a post on X. "Today, we are asking the Court to do what it has traditionally done in free speech cases: apply the most rigorous scrutiny to speech bans and conclude that it violates the First Amendment."
The company, which has argued that the law is unconstitutional, lost its initial legal challenge to the law earlier this month. It then requested a delay of the law's implementation, noting that President-elect Donald Trump had said he would "save TikTok." That request was denied on Friday.
In its filing with the Supreme Court, TikTok again referenced Trump's comments. "It would not be in the interest of anyone, not the parties, the public, or the courts, for the Act's ban on TikTok to take effect only for the new Administration to halt its enforcement hours, days, or even weeks later," it wrote. Trump's inauguration is one day after a ban of the app would take effect.
TikTok is now hoping the Supreme Court will intervene to suspend the law in order to give the company time to make its final legal appeal. Otherwise, app stores and Internet service providers will be forced to begin blocking TikTok next month, making the app inaccessible to its 170 million US users.
Update December 16, 2024, 1:30 PM PT: Updated with details from TikTok's court filing.
Google has yet another AI tool to add to the pile. Whisk is a Google Labs image generator that lets you use an existing image as your prompt. But its output only captures your starter image's essence rather than recreating it with new details. So, it's better for brainstorming and rapid-fire visualizations than edits of the source image.
The company describes Whisk as "a new type of creative tool." The input screen starts with a bare-bones interface with inputs for style and subject. This simple introductory interface only lets you choose from three predefined styles: sticker, enamel pin and plushie. I suspect Google found those three allowed for the kind of rough-outline outputs the experimental tool is best suited to in its current form.
As you can see in the image above, it produced a solid image of a Wilford Brimley plushie. (Google's terms forbid pictures of celebrities, but Wilford slipped through the gates, Quaker Oats in tow, without alerting the guards.)
Whisk also includes a more advanced editor (found by clicking "Start from scratch" on the main screen). In this mode, you can use text or a source image in three categories: subject, scene and style. There's also an input bar to add more text for finishing touches. However, in its current form, the advanced controls didn't produce results that looked anything like my queries.
For example, check out my attempt to generate the late Mr. Brimley in a lightbox scene in the style of a walrus plushie image I found online:
[Image credit: Google / Screenshot by Will Shanklin for Engadget]
Whisk spit out what looks like a vaguely Wilford Brimley-esque actor eating oatmeal inside a lightbox frame. As far as I can tell, that dude is not a plushie. So, it's clear why Google recommends using the tool more for rapid visual exploration and less for production-ready content.
Google acknowledges that Whisk will only draw from a few key characteristics of your source image. "For example, the generated subject might have a different height, weight, hairstyle or skin tone," the company warns.
To understand why, look no further than Google's description of how Whisk works under the hood. It uses the Gemini language model to write a detailed caption of the source image you upload. It then feeds that description into the Imagen 3 image generator. So, the result is an image based on Gemini's words about your image, not the source image itself.
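In rough Python pseudocode, that caption-then-generate flow might look like the sketch below. The helper names and stub bodies here are our own illustration of the described pipeline, not Google's actual Gemini or Imagen 3 APIs:

    def caption_image(image_bytes):
        # Hypothetical stand-in for the Gemini call that writes a detailed caption.
        return "a plushie of a mustachioed older man holding a box of oats"

    def generate_image(prompt):
        # Hypothetical stand-in for the Imagen 3 call that renders an image from text.
        return b"<generated image for: " + prompt.encode() + b">"

    def whisk_generate(source_image, finishing_touches=""):
        caption = caption_image(source_image)   # Step 1: describe the image in words.
        prompt = (caption + " " + finishing_touches).strip()
        return generate_image(prompt)           # Step 2: generate from the words, not the pixels.

Because the second step only ever sees Gemini's text, any detail the caption omits (height, hairstyle, skin tone) is free to drift in the output.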
Whisk is only available in the US, at least for now. You can try it at the project's Google Labs site.
Instagram is adding an option to schedule DMs. Social media expert Lindsey Gamble unearthed the feature, and Instagram confirmed to TechCrunch that it's rolling out scheduled DMs to all users.
When you type a message, simply hold down the send button and you can select a date and time. It seems messages can be scheduled up to 29 days in advance. Until all timed-up messages are sent, you'll see a banner reading something like "x scheduled messages."
This will be handy for folks who want to schedule birthday messages for a bunch of friends at once or, for instance, to remind someone to pick them up from the airport on a certain day. It'll also be useful for people who tend to take care of correspondence at night and don't want everyone to know how late they're staying awake. That's definitely not something I ever do with emails.
It's worth noting that Instagram is rolling out this DM scheduling feature before all users are able to time up posts and Reels in advance. For now, that feature is limited to folks who have set up a professional account.
Meanwhile, Instagram is rolling out several limited-time, end-of-year features to help you celebrate the holidays and your 2024 memories. For one thing, there's a collage tool for Stories that has an end-of-year theme. Based on images Instagram shared, it appears that you can go with a Happy New Year overlay.
There are multiple "Add Yours" templates based around New Year's as well, such as one you can use to prompt friends to share photos in the "how 2024 started/how 2024 ended" format. If you hit the like button on end-of-year Stories, you'll see a custom effect. There's a New Year font and a Countdown text effect for Stories, Reels and feed posts as well.
Festive chat themes for the holidays include New Year's, one called "chill" and, of course, another based on Mariah Carey. Last but not least, if you use certain emoji based around celebrations or phrases like "Happy New Year" or "hello 2025" in DMs or notes before the end of the year, you'll see a little Easter egg of some kind.
Snap is changing up its program that allows creators to make money from shortform videos. The company announced a new monetization program that will allow the app's influencers to make money from Spotlight videos that are one minute or longer by earning a share of their content's ad revenue.
The change will streamline Snap's monetization features across Spotlight, its in-app TikTok competitor, and Stories, where Snap first launched its revenue sharing feature. It also means the company will end its Spotlight Reward Program, the creator-fund-like arrangement that paid creators directly. That program will be discontinued January 30, 2025, with the new monetization arrangement taking effect February 1.
Snap announced the update as TikTok moves closer to an outright ban in the United States. The ByteDance-owned service is currently facing a January 19, 2025, deadline to sell or be banned if the Supreme Court doesn't intervene. In its announcement, Snap notes that Spotlight viewership is up 25% year-over-year and that there is a unique and growing opportunity for creators to monetize this format the same way they do with Stories.
Under the new unified program, creators are eligible to earn money from Spotlight videos or Stories if they meet the following requirements (combined as shown in the sketch after the list):
- Have at least 50,000 followers.
- Post at least 25 times per month to Saved Stories or Spotlight.
- Post to either Spotlight or Public Stories on at least 10 of the last 28 days.
- Achieve one of the following in the last 28 days:
  - 10 million Snap views
  - 1 million Spotlight views
  - 12,000 hours of view time
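Put another way, the first three requirements must all hold, while the viewership bar can be cleared by any one of the three metrics. A minimal sketch in Python, with field names that are our own shorthand rather than anything from Snap:

    def is_eligible(followers, posts_last_month, active_days_last_28,
                    snap_views, spotlight_views, view_time_hours):
        # The three view metrics all cover the last 28 days; any one suffices.
        viewership_bar = (snap_views >= 10_000_000
                          or spotlight_views >= 1_000_000
                          or view_time_hours >= 12_000)
        # The follower, posting and activity minimums must all be met.
        return (followers >= 50_000
                and posts_last_month >= 25
                and active_days_last_28 >= 10
                and viewership_bar)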
Some of those metrics are a bit higher than Snap's previous requirements for Stories, which set the bar at only 10 Story posts a month. But, as TechCrunch notes, the new threshold is much higher for Spotlight creators, who could previously earn money from the company's creator fund with only 1,000 followers and 10,000 unique views. The change also pushes creators to make longer content for Spotlight, as they can no longer be paid for videos shorter than one minute.
If TikTok does end up being banned, Snap will be one of several platforms trying to lure creators to its product. And while the app is known primarily for its private messaging features, the company says that the number of people posting publicly has more than tripled in the last year, and that it will be evolving and expanding the total rewards available to creators going forward.
The Ray-Ban Meta Smart Glasses already worked well as a head-mounted camera and pair of open-ear headphones, but now Meta is updating the glasses with access to live AI without the need for a wake word, live translation between several different languages, and access to Shazam for identifying music.
Meta first demoed most of these features at Meta Connect 2024 in September. Live AI lets you start a live session with Meta AI that gives the assistant access to whatever you're seeing and lets you ask questions without having to say "Hey Meta." If you need your hands free to cook or fix something, Live AI is supposed to keep your smart glasses useful even while you concentrate on whatever you're doing.
Live translation lets your smart glasses translate between English and either French, Italian, or Spanish. If live translation is enabled and someone speaks to you in one of the selected languages, you'll hear whatever they're saying in English through the smart glasses' speakers or see it as a typed transcript in the Meta View app. You'll have to download specific models for each language, and live translation needs to be enabled before it'll actually act as an interpreter, but it does seem more natural than holding out your phone to translate something.
With Shazam integration, your Meta smart glasses will also be able to identify whatever song you hear playing around you. A simple "Meta, what is this song?" will get the smart glasses' microphones to figure out whatever you're listening to, just like using Shazam on your smartphone.
All three updates baby-step the wearable toward Meta's end goal of a true pair of augmented reality glasses that can replace your smartphone, an idea its experimental Orion hardware is a real-life preview of. Pairing AI with either VR or AR seems to be an idea multiple tech giants are circling, too. Google's newest XR platform, Android XR, is built around the idea that a generative AI like Gemini could be the glue that makes VR or AR compelling. We're still years away from any company being willing to actually alter your field of view with holographic images, but in the meantime, smart glasses seem like a moderately useful stopgap.
All Ray-Ban Meta Smart Glasses owners will be able to enjoy Shazam integration as part of Meta's v11 update. For live translation and live AI, you'll need to be a part of Meta's Early Access Program, which you can join right now at the company's website.
If you've been waiting patiently to try ChatGPT Search, you won't have to wait much longer if you're a free user. After rolling out to paid subscribers this fall, OpenAI announced Monday it would make the tool available to everyone, no Plus or Pro membership necessary, "over the coming months."
At that point, all you need before you can start using ChatGPT Search is an OpenAI account. Once you're logged in, and if your query calls for it, ChatGPT will automatically search the web for the latest information to answer your question. You can also force it to search the web, thanks to a new icon located right in the prompt bar. OpenAI has also added the option to make ChatGPT Search your browser's default search engine.
OpenAI announced the expanded availability during its most recent "12 Days of OpenAI" livestream. In previous live streams, the company announced the general availability of Sora and ChatGPT Pro, a new $200 subscription for its chatbot. With four more days to go, it's hard to see the company topping that announcement, but at this point, OpenAI likely has a surprise or two up its sleeve.
Developing...
In perhaps the least surprising news of the past six weeks, President-elect Donald Trump reportedly plans to roll back President Biden's electric vehicle and emissions policies. Reuters reports that the incoming president's transition team has recommended cutting off support for EVs and charging stations while boosting measures to block cars, components and battery materials from China.
The transition team's other reported plans include new tariffs on all battery materials globally, boosting US production of battery materials and negotiating with allies for exemptions. They're also said to be planning to take money allocated for building charging stations and making EVs more affordable and redirect it to sourcing batteries and their required minerals from places other than China. In addition, they reportedly want to axe the Biden administration's $7,500 tax credit for consumer EV purchases.
The plans would let automakers produce more gas-powered vehicles by reversing emissions and fuel economy standards, pushing them back to 2019 levels. Reuters says that would lead to around 25 percent more emissions per vehicle mile than the current limits. It would also lower the average car fuel economy by about 15 percent.
Climate scientists have stressed the importance of transitioning from gas-powered cars to EVs in reducing carbon emissions and fending off the most ravaging scenarios for the planet. Greenhouse gases, including those from vehicle emissions, build up in the atmosphere and warm the climate. That leads to a cascade of effects in the atmosphere, on land and in oceans, some of which we're already seeing.
As for tariffs, economists have said Trump's plans would likely spur multiple trade wars as countries retaliate with tariffs on American goods, disrupt supply chains and pierce the heart of America's post-World War II alliances. "If we go down the tariff war path, we're going down a very dark path for the economy," Mark Zandi, the chief economist of Moody's Analytics, told The New York Times in October.
The Biden administration has championed climate legislation like the Inflation Reduction Act, which allocated $369 billion for green initiatives, and EPA rules that require automakers to ramp up EV sales.
Meanwhile, Trump has called climate change a hoax. In May, he reportedly told a group of oil executives that he would immediately reverse dozens of Biden's environmental rules while blocking new ones from being enacted. His asking price for such deregulation was that they raise $1 billion for his campaign. (Thanks, Citizens United!) So, while the reports about his transition team's plans are still a gut punch to those who care about leaving the planet in a habitable state for future generations (and slowing the effects we're already seeing), they aren't exactly shocking to anyone paying attention.