Watching videos on the Apple Vision Pro is one of the few use cases early adopters have found for the VR headset, but Apple has produced only a handful of immersive videos to watch on it. Blackmagic's new camera could change that. The Blackmagic URSA Cine Immersive is the first camera that can shoot in Apple's Immersive Video format, and it's available to pre-order now for $29,995, with shipping expected in late Q1 2025.

Blackmagic first announced it was working on hardware and software for producing Vision Pro content at WWDC 2024. As promised then, the camera can capture 3D footage at 90 fps with a resolution of 8160 x 7200 per eye. Blackmagic says the URSA Cine Immersive uses custom lenses designed for the URSA Cine's large-format image sensor, with extremely accurate positional data. It also has 8TB of network storage built in, which the company says records directly to the included Blackmagic Media Module and can be synced live to a DaVinci Resolve media bin so editors can access footage remotely.

Image credit: Blackmagic Design

Along with the URSA Cine Immersive, Blackmagic is also updating DaVinci Resolve Studio to work with Apple's Immersive Video format, adding new tools that let editors pan, tilt and roll footage while working on a 2D monitor or in a Vision Pro.

The whole package sounds expensive at nearly $30,000, but you're getting a lot more out of the box than you normally would with one of Blackmagic's cameras. A standard 12K URSA Cine costs around $15,000 but doesn't include lenses or built-in storage; those come standard on the URSA Cine Immersive.

Apple has filmed several short documentaries, sports clips and at least one short film in its Immersive Video format, but it hasn't released a camera of its own for third-party production companies to produce content. And while any iPhone 15 Pro or iPhone 16 can capture 3D spatial videos, they can't produce Immersive Video, which has a 180-degree field of view. Blackmagic's camera should make it possible for a lot more immersive content to be created for the Vision Pro and other VR headsets. Now Apple just needs to make a Vision product more people are willing to pay for.

This article originally appeared on Engadget at https://www.engadget.com/cameras/blackmagics-vision-pro-camera-is-available-for-pre-order-and-costs-30000-000053495.html?src=rss
Meta's Threads app has now grown to 300 million users, with more than 100 million people using the service each day. Mark Zuckerberg announced the new milestone in a post on Threads, saying the app's strong momentum continues. Zuckerberg has repeatedly speculated that Threads has a good chance of becoming the company's next billion-user app. Though it's still pretty far off that goal, its growth seems to be accelerating. The app hit 100 million users last fall and reached 275 million in early November. Elsewhere, Apple revealed that Threads was the second-most downloaded app of 2024, behind shopping app Temu, which took the top spot in Apple's rankings.

The coming weeks could see some major changes for Threads as Meta looks to capitalize on that growth. The company reportedly plans to begin experimenting with the first ads on Threads in early 2025, according to a recent report in The Information.

Threads isn't the only app trying to reclaim the public square as some longtime users depart the platform now known as X. Bluesky has also seen significant growth of late. The decentralized service nearly doubled its user base in November and currently has just over 25 million users. (The company has never revealed how many of its users visit the site daily.) Though Bluesky is still much smaller than Threads, Meta seems to have taken inspiration from some of its signature features in recent weeks, including its take on starter packs and custom feeds.

This article originally appeared on Engadget at https://www.engadget.com/social-media/metas-threads-has-grown-to-300-million-users-234138108.html?src=rss
NASA says it was able to use the James Webb telescope to capture images of planet-forming disks around ancient stars that challenge theoretical models of how planets can form. The images support earlier findings from the Hubble telescope that hadn't been confirmed until now.

The new, highly detailed Webb images were captured in the Small Magellanic Cloud, a dwarf galaxy neighboring our home, the Milky Way. The Webb telescope was specifically focused on a cluster called NGC 346, which NASA says is a good proxy for similar conditions in the early, distant universe, and which lacks the heavier elements that have traditionally been connected to planet formation. Webb was able to capture spectra of light suggesting protoplanetary disks are still hanging around those stars, going against previous expectations that they would have been blown away within a few million years.

Image credit: NASA, ESA, CSA, STScI, Olivia C. Jones (UK ATC), Guido De Marchi (ESTEC), Margaret Meixner (USRA)

Hubble observations of NGC 346 from the mid-2000s revealed many stars about 20 to 30 million years old that seemed to still have planet-forming disks, NASA writes. Without more detailed evidence, that idea was controversial. The Webb telescope was able to fill in those details, suggesting disks in our neighboring galaxy have a much longer window in which to collect the dust and gas that forms the basis of a new planet.

As for why those disks are able to persist in the first place, NASA says researchers have two possible theories. One is that the radiation pressure expelled from stars in NGC 346 simply takes longer to dissipate planet-forming disks. The other is that the larger gas cloud necessary to form a Sun-like star in an environment with fewer heavy elements would naturally produce larger disks that take longer to fade away. Whichever theory proves correct, the new images are beautiful evidence that we still don't have a full grasp of how planets are formed.

This article originally appeared on Engadget at https://www.engadget.com/science/space/nasas-new-webb-telescope-images-support-previously-controversial-findings-about-how-planets-form-213312055.html?src=rss
After a federal court last week denied TikTok's request to delay a law that could ban the app in the United States, the company is now turning to the Supreme Court in an effort to buy time. The social media company has asked the court to temporarily block the law, currently set to take effect January 19, 2025, it said in a brief statement.

"The Supreme Court has an established record of upholding Americans' right to free speech," TikTok wrote in a post on X. "Today, we are asking the Court to do what it has traditionally done in free speech cases: apply the most rigorous scrutiny to speech bans and conclude that it violates the First Amendment."

The company, which has argued that the law is unconstitutional, lost its initial legal challenge to the law earlier this month. The company then requested a delay of the law's implementation, saying that President-elect Donald Trump had said he would save TikTok. That request was denied on Friday.

In its filing with the Supreme Court, TikTok again referenced Trump's comments. "It would not be in the interest of anyone – not the parties, the public, or the courts – for the Act's ban on TikTok to take effect only for the new Administration to halt its enforcement hours, days, or even weeks later," it wrote. Trump's inauguration is one day after a ban of the app would take effect.

TikTok is now hoping the Supreme Court will intervene to suspend the law in order to give the company time to make its final legal appeal. Otherwise, app stores and internet service providers will be forced to begin blocking TikTok next month, making the app inaccessible to its 170 million US users.

Update December 16, 2024, 1:30 PM PT: Updated with details from TikTok's court filing.

This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktok-asks-the-supreme-court-to-delay-upcoming-ban-211510659.html?src=rss
Google has yet another AI tool to add to the pile. Whisk is a Google Labs image generator that lets you use an existing image as your prompt. But its output only captures your starter image's essence rather than recreating it with new details. So, it's better for brainstorming and rapid-fire visualizations than for edits of the source image.

The company describes Whisk as a new type of creative tool. The input screen starts with a bare-bones interface with inputs for style and subject. This simple introductory interface only lets you choose from three predefined styles: sticker, enamel pin and plushie. I suspect Google found those three allowed for the kind of rough-outline outputs the experimental tool is best suited for in its current form. As you can see in the image above, it produced a solid image of a Wilford Brimley plushie. (Google's terms forbid pictures of celebrities, but Wilford slipped through the gates, Quaker Oats in tow, without alerting the guards.)

Whisk also includes a more advanced editor (found by clicking Start from scratch on the main screen). In this mode, you can use text or a source image in three categories: subject, scene and style. There's also an input bar to add more text for finishing touches. However, in its current form, the advanced controls didn't produce results that looked anything like my queries. For example, check out my attempt to generate the late Mr. Brimley in a lightbox scene in the style of a walrus plushie image I found online:

Image credit: Google / Screenshot by Will Shanklin for Engadget

Whisk spit out what looks like a vaguely Wilford Brimley-esque actor eating oatmeal inside a lightbox frame. As far as I can tell, that dude is not a plushie. So, it's clear why Google recommends using the tool more for rapid visual exploration and less for production-ready content.

Google acknowledges that Whisk will only draw from a few key characteristics of your source image. For example, the generated subject might have a different height, weight, hairstyle or skin tone, the company warns. To understand why, look no further than Google's description of how Whisk works under the hood: it uses the Gemini language model to write a detailed caption of the source image you upload, then feeds that description into the Imagen 3 image generator. So, the result is an image based on Gemini's words about your image, not the source image itself.

Whisk is only available in the US, at least for now. You can try it at the project's Google Labs site.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-new-ai-tool-whisk-uses-images-as-prompts-210105371.html?src=rss
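For readers curious what that caption-then-regenerate flow looks like in practice, here is a minimal sketch of the idea. This is not Google's code: caption_image and generate_image are hypothetical stand-ins for whatever multimodal-captioning and text-to-image calls you have access to (in Whisk, Gemini and Imagen 3 play those roles). The point is simply that only words, not pixels, reach the second stage, which is why details of the source image can drift.

```python
# Sketch of a Whisk-style two-stage pipeline, under the assumptions above:
# stage 1 turns the source image into a detailed caption,
# stage 2 regenerates a new image from that caption alone.

from pathlib import Path


def caption_image(image_bytes: bytes) -> str:
    """Hypothetical stand-in: ask a multimodal model for a detailed caption."""
    raise NotImplementedError("wire this to your captioning model")


def generate_image(prompt: str) -> bytes:
    """Hypothetical stand-in: ask a text-to-image model to render the prompt."""
    raise NotImplementedError("wire this to your image generator")


def whisk_style_remix(subject_path: str, style_hint: str, extra_details: str = "") -> bytes:
    """Remix a source image by describing it in words, then regenerating from the words.

    Because only the caption reaches the image model, fine details of the
    source (height, hairstyle, skin tone and so on) can drift, which matches
    the behavior Google warns about.
    """
    source = Path(subject_path).read_bytes()
    caption = caption_image(source)            # stage 1: image -> words
    prompt = f"{caption}. Rendered as {style_hint}. {extra_details}".strip()
    return generate_image(prompt)              # stage 2: words -> new image


# Example usage, once the two stand-ins are implemented:
# png = whisk_style_remix("walrus.jpg", style_hint="a plushie toy",
#                         extra_details="soft lightbox lighting")
# Path("remix.png").write_bytes(png)
```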