FAQ: AI-Powered High-Resolution Forest Canopy Mapping
TL;DR
Researchers developed an AI model that provides near-lidar accuracy for forest monitoring at low cost, offering a competitive edge in carbon credit verification and plantation management.
The AI model combines a large vision foundation model with self-supervised enhancement to estimate canopy height from RGB imagery, achieving sub-meter accuracy comparable to lidar systems.
This technology enables precise, affordable monitoring of forest carbon storage, supporting global climate initiatives and sustainable forestry for a healthier planet.
An AI can now map forest canopy heights with lidar-like precision using ordinary satellite photos, revolutionizing how we track carbon sequestration.

What does the research present?
The research presents a new artificial intelligence (AI) vision model that produces high-resolution canopy height maps with sub-meter accuracy from standard RGB satellite imagery alone, enabling precise monitoring of forest biomass and carbon storage.
Why is this research important?
It addresses the urgent need for cost-effective, high-resolution forest monitoring by providing near-lidar accuracy at much lower cost, which is essential for understanding global carbon cycles, assessing tree growth, managing plantation resources, and tracking carbon sequestration under initiatives such as China's Certified Emission Reduction program.
How does the model work?
The model combines three modules: a feature extractor powered by the DINOv2 large vision foundation model, a self-supervised feature enhancement unit that retains fine spatial details, and a lightweight convolutional height estimator. Together they achieve a mean absolute error of only 0.09 m and an R² of 0.78 against airborne lidar measurements.
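At a high level, the three-module design can be pictured as a simple pipeline: coarse features from a backbone, spatial-detail recovery, then a per-pixel height head. The sketch below is purely illustrative — the function names, the patch-pooling stand-in for DINOv2, the upsampling stand-in for the enhancement unit, and the weights are all hypothetical, not the published implementation:

```python
import numpy as np

def extract_features(rgb):
    # Stand-in for the DINOv2 backbone: pool RGB patches into a coarse
    # feature grid (the real backbone outputs learned patch embeddings).
    h, w, _ = rgb.shape
    patch = 14  # DINOv2 operates on 14x14-pixel patches
    feats = rgb[: h - h % patch, : w - w % patch].reshape(
        h // patch, patch, w // patch, patch, 3
    ).mean(axis=(1, 3))
    return feats  # shape (h/14, w/14, 3)

def enhance_features(feats, scale=14):
    # Stand-in for the self-supervised enhancement unit: restore fine
    # spatial detail by upsampling features back to pixel resolution.
    return feats.repeat(scale, axis=0).repeat(scale, axis=1)

def estimate_height(feats):
    # Stand-in for the lightweight convolutional head: map per-pixel
    # features to a single canopy-height value in meters.
    weights = np.array([0.05, 0.03, 0.02])  # arbitrary illustrative weights
    return feats @ weights

rgb = np.random.default_rng(0).uniform(0, 255, size=(280, 280, 3))
height_map = estimate_height(enhance_features(extract_features(rgb)))
print(height_map.shape)  # a per-pixel canopy-height map, (280, 280)
```

The point of the composition is that height is predicted at full image resolution even though the backbone reasons over coarse patches — which is why the enhancement stage matters for fine crown structure.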
Who developed the model, and where was it published?
A joint research team from Beijing Forestry University, Manchester Metropolitan University, and Tsinghua University developed the model, which was published in the Journal of Remote Sensing on October 20, 2025 (DOI: 10.34133/remotesensing.0880).
Where was the model tested?
The model was tested in the Fangshan District of Beijing, an area of fragmented plantations composed primarily of Populus tomentosa, Pinus tabuliformis, and Ginkgo biloba, and it also demonstrated robust accuracy when applied to a geographically distinct forest in Saihanba.
What results did the model achieve?
The model achieved a mean absolute error of 0.09 m and an R² of 0.78 against lidar measurements, detected individual trees with over 90% accuracy, showed strong correlations with measured above-ground biomass, and maintained robust accuracy across different forest types.
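For reference, the two accuracy metrics quoted throughout this article can be computed from paired height estimates as follows. The sample arrays are illustrative values, not the study's lidar data:

```python
import numpy as np

def mae(pred, truth):
    # Mean absolute error: average magnitude of height residuals (meters).
    return float(np.mean(np.abs(pred - truth)))

def r_squared(pred, truth):
    # Coefficient of determination: 1 - (residual sum of squares /
    # total sum of squares around the mean of the reference heights).
    ss_res = np.sum((truth - pred) ** 2)
    ss_tot = np.sum((truth - np.mean(truth)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative canopy heights in meters (hypothetical, not study data).
lidar = np.array([12.0, 15.5, 9.8, 20.1, 14.2])
model = np.array([12.1, 15.3, 9.9, 19.8, 14.5])

print(round(mae(model, lidar), 3))        # → 0.2
print(round(r_squared(model, lidar), 3))  # → 0.996
```

MAE reports typical error in the same units as the heights (meters), while R² reports how much of the lidar height variation the model explains — which is why the study quotes both.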
How does it compare with existing methods?
Traditional lidar systems provide accurate height data but are limited by high cost and technical complexity, while optical remote sensing often lacks structural precision. This AI model achieves near-lidar accuracy at much lower cost and captures subtle variations in tree crown structure that existing models often miss.
What applications does the technology enable?
The technology enables individual-tree segmentation, plantation-level biomass estimation with R² values exceeding 0.9 for key species, reconstruction of annual growth trends from archived satellite imagery, and scalable long-term carbon sink monitoring for precision forestry management.
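Reconstructing a growth trend from archived imagery amounts to fitting yearly height estimates over time. A minimal sketch, using hypothetical per-year canopy heights for a single stand (not values from the study):

```python
import numpy as np

# Illustrative annual mean canopy heights (m) for one plantation stand,
# as might be recovered by applying the model to archived imagery.
years = np.array([2018, 2019, 2020, 2021, 2022])
heights = np.array([8.4, 9.1, 9.9, 10.6, 11.2])

# A least-squares linear fit gives the stand's mean annual height growth.
growth_rate, intercept = np.polyfit(years, heights, 1)
print(round(growth_rate, 2))  # → 0.71 meters of height gained per year
```

The same fit applied per pixel or per tree crown would yield spatial maps of growth rate, which is the basis for long-term carbon sink monitoring from image archives.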
What challenges does it address?
It addresses the challenge of balancing cost, precision, and scalability in forest monitoring: traditional methods are either too expensive (lidar) or insufficiently precise (optical sensing), while existing deep learning methods require massive labeled datasets and lose fine spatial details.
What makes the model novel?
The model integrates a large vision foundation model with self-supervised enhancement to achieve near-lidar accuracy from standard RGB imagery alone, demonstrates strong generalization across forest types, and bridges the gap between expensive lidar systems and less precise optical remote sensing.
Curated from 24-7 Press Release

