DeepForest AI: Volumetric Forest Sensing FAQ
TL;DR
DeepForest technology offers a cost advantage over LiDAR by using standard drone cameras to monitor forest biomass for carbon accounting and environmental compliance.
DeepForest combines synthetic-aperture imaging with 3D neural networks to reconstruct volumetric reflectance stacks, improving deep-layer accuracy roughly 7-fold on average for detailed vegetation health analysis.
This breakthrough enables better monitoring of forest ecosystems, supporting biodiversity conservation and climate change mitigation through accessible environmental data.
Researchers taught drones to see through dense forest canopies using computational photography, revealing hidden ecological details with ordinary cameras.

DeepForest AI is a novel imaging technology that enables volumetric sensing of deep vegetation layers in forests using standard aerial cameras combined with synthetic-aperture imaging and 3D neural networks. It addresses a core limitation of traditional imaging techniques, which capture only top-layer reflectance, leaving under-canopy biomass invisible and restricting accurate carbon estimation, biodiversity monitoring, and climate-impact assessment.
DeepForest AI provides a low-cost alternative to expensive LiDAR or radar systems while offering both the structural penetration and the fine-resolution spectral detail needed for vegetation health assessment. Unlike photogrammetry, which reconstructs only the top layers, it recovers spectral cues across vertical forest segments and supports vegetation-index calculation with a 2–12× improvement in deep-layer reflectance accuracy.
Drones equipped with multispectral cameras capture images from a synthetic-aperture grid, which are computationally refocused into hundreds of focal slices forming a 3D stack. Depth-specific 3D convolutional neural networks trained on over 11 million simulated forest samples remove occlusion noise from overlapping leaves, and the corrected reflectance stacks are calibrated with field-reconstructed canopy points for accurate vegetation health assessment.
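The computational refocusing step described above can be illustrated with a classic shift-and-average synthetic-aperture sketch. This is not the authors' published implementation; the function name, the pinhole disparity model, and the integer-pixel shifting are our simplifying assumptions, shown only to convey how a grid of images becomes one focal slice of the 3D stack.

```python
import numpy as np

def refocus_slice(images, positions, depth, focal_length=1.0):
    """Shift-and-average synthetic-aperture refocusing (illustrative).

    images:    (N, H, W) captures from the drone's synthetic-aperture grid
    positions: (N, 2) per-camera offsets in metres within the aperture plane
    depth:     focal depth in metres to refocus onto

    Objects at `depth` align and stay sharp; occluders at other depths
    (e.g. overlapping leaves above) spread out and blur.
    """
    n, h, w = images.shape
    acc = np.zeros((h, w))
    for img, (dx, dy) in zip(images, positions):
        # Pinhole model: image-plane disparity is inversely
        # proportional to scene depth.
        sx = int(round(focal_length * dx / depth * w))
        sy = int(round(focal_length * dy / depth * h))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / n

# Repeating this over many depths builds the volumetric stack, e.g.:
# stack = np.stack([refocus_slice(imgs, pos, d) for d in depths])
```

In the paper's setup such slices are then denoised by the depth-specific 3D CNNs; here we only sketch the geometric refocusing that precedes that stage.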
Researchers from Johannes Kepler University Linz, Helmholtz Centre for Environmental Research, and Leipzig University developed DeepForest AI, which was published on November 11, 2025 in the Journal of Remote Sensing (DOI: 10.34133/remotesensing.0907).
The technology enables broad deployment for environmental monitoring, forestry, carbon accounting, and conservation by providing volumetric reflectance stacks that reveal forest structure from canopy to understory. Its compatibility with standard multispectral cameras significantly reduces costs while offering reliable vegetation health indicators, with field tests achieving MSE = 0.05 between real drone imagery and reconstructed upper layers.
The team scanned 30 × 30 m forest plots using a drone-mounted multispectral camera from 35 m altitude, sampling a 24 × 24 m synthetic-aperture grid. The approach improved deep-layer reflectance accuracy by 2–12 times, with an average ~7-fold correction, even in forests with densities up to 1,680 trees/ha, and enabled accurate NDVI estimation and volumetric visualization of biomass distribution.
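The NDVI estimation mentioned above uses the standard Normalized Difference Vegetation Index, computed per pixel from the NIR and red bands the cameras capture. The formula is the conventional one; the helper name and epsilon guard are our additions for a runnable sketch.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).

    Healthy vegetation reflects strongly in near-infrared and absorbs
    red light, pushing NDVI toward +1; bare soil or water sits near
    zero or below. `eps` avoids division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Applied slice by slice to the corrected reflectance stack, this yields depth-resolved vegetation-health maps rather than a single canopy-top index.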
The research used drones equipped with regular multispectral cameras (green, red, red-edge, NIR) rather than expensive LiDAR or radar systems, capturing 9×9 images that were refocused into 440 focal slices. The method employed synthetic-aperture imaging enhanced by 3D convolutional neural networks trained on simulated procedural forest datasets to resolve occlusion noise from overlapping leaves.
The complete research is published in the Journal of Remote Sensing with DOI: 10.34133/remotesensing.0907, available at the journal website: https://spj.science.org/journal/remotesensing.
Curated from 24-7 Press Release

