
Thematic Sessions
ISPRS 2026
Connecting Ideas Across Core Themes
Thematic Sessions align with the scientific programme of ISPRS 2026, providing a platform for research presentations and discussions around specific themes. These sessions connect global experts, showcase the latest advancements, and encourage dialogue that drives the future of photogrammetry, remote sensing, and spatial information sciences.
Thematic Session Information:
Session Chairs:
Morgan A. Crowley
Joanne V. Hall
Tianjia Liu
Jacquelyn K. Shuman
Advancements in Wildfire Science, Management, and Engagement: Integrating Earth Observation Technologies and Collaborative Development
Wildland fires are an important part of many ecosystems and cultures. However, they can pose significant challenges to communities, ecosystems, and economies worldwide. Balancing the benefits of fire on the landscape with the negative impacts that can arise from wildfires requires innovative approaches that leverage cutting-edge technologies while fostering collaboration among multiple stakeholders. This session offers a comprehensive exploration of advancements in wildland fire management, monitoring, and modelling by integrating Earth observation technologies and collaborative development efforts. We will highlight emerging uses of remote sensing and spatial data in projects that foster cross-boundary, multi-jurisdictional collaborations among private, local, federal, and international partners. Efforts like Canada’s WildFireSat mission, NASA FireSense, the Pyregence Consortium, and the internationally funded GOFC-GOLD Fire Mapping and Monitoring Implementation Team are creating transformative wildland fire science, data products, and tools built on relationships and knowledge exchange. By bringing together a range of perspectives and expertise, we aim to advance the science and practice of fire management while enhancing the resilience of communities and ecosystems in the face of increasing wildfire risk and lengthening fire seasons. This session supports ISPRS’s commitment to addressing global societal challenges by exploring how applied Earth observation and geospatial technologies can enhance wildfire resilience, support emergency preparedness, and inform climate adaptation strategies across fire-prone regions around the world.
Session Chairs:
Masoud Mahdianpari
Remote Sensing of Methane: Technological and Methodological Advances
Methane (CH₄) is a highly potent greenhouse gas, with a global warming potential far exceeding that of carbon dioxide in the short term. Its emissions originate from a range of natural sources, including wetlands, lakes, and permafrost, as well as human activities such as fossil fuel extraction, agriculture, and waste management. Accurate detection and quantification of methane across this spectrum are critical for climate change mitigation and atmospheric science. This session focuses on recent advancements in remote sensing technologies, Earth Observation (EO), and Geospatial Artificial Intelligence (GeoAI) that enhance our ability to monitor methane emissions from both natural and anthropogenic sources across multiple spatial and temporal scales. Contributions are invited that explore developments in sensor systems, analytical methods, and validation strategies. Key themes include:
- Satellite-Based Monitoring: Use of EO data from public missions (e.g., Sentinel-5P, MethaneSAT) and commercial platforms (e.g., GHGSat, WorldView-3) to track methane emissions from diverse sources with improved spatial and temporal resolution.
- AI and Data Analytics: Application of machine learning and deep learning to automate plume detection, source attribution, and uncertainty reduction across varied emission types.
- Validation and Calibration: Ground-truth campaigns, controlled release experiments, and standardized protocols to verify observations across environments.
- Multi-Scale Monitoring: Case studies linking local, regional, and global methane data to understand dynamics across natural systems and human infrastructures.
This session encourages cross-disciplinary exchange and supports global initiatives aimed at reducing methane emissions, advancing both scientific understanding and climate policy.
Session Chairs:
Bing Wang
Bo Yang
Changhao Chen
Spatial Intelligence in the Wild
As AI systems increasingly permeate our built environments, the ability to endow machines with robust spatial intelligence (perception, reasoning, and cognition grounded in physical space) has become a central research challenge. This thematic session explores how spatial AI models can operate effectively in the wild, i.e., in unstructured, dynamic, and resource-constrained real-world settings beyond the curated conditions of labs and benchmarks. This session aligns with the ISPRS mission of advancing photogrammetry, remote sensing, and spatial information science through novel data-driven methodologies. It spotlights recent developments in multimodal spatial cognition that integrate vision, language, geometry, and motion to support embodied agents, urban analytics, and autonomous systems. We focus on critical capabilities such as map-free localization, self-supervised spatial learning, continual adaptation, and causal reasoning in 3D scenes. These are essential for enabling spatial systems to generalize across diverse environments and respond to uncertainty, degraded sensor inputs, or topology shifts. Furthermore, we examine the ethical, interpretability, and deployment trade-offs that arise when scaling spatial intelligence to large cities, public infrastructure, and safety-critical applications. This includes managing sensor sparsity, real-time compute constraints, and trustworthiness in decision-making. We welcome contributions from the photogrammetry, robotics, SLAM, 3D vision, urban AI, and foundation model communities. By bridging foundational research with real-world applications, this session fosters interdisciplinary dialogue and aims to establish a shared roadmap for building spatially intelligent systems that are resilient, adaptive, and impactful in the wild.
Session Chairs:
Joanne White
Yunsheng Wang
Mariana Campos
Xinlian Liang
Toward Smart Forests: Emerging Tools in Remote Sensing, Artificial Intelligence, and Field Robotics
This special session follows the successful Smart Forests workshop at GSW 2023 and the Smart Forests and Agriculture session at GSW 2025, jointly organized by Working Groups WGIII/1, WGI/7, and WGIII/8. Both events attracted strong submissions and received positive feedback from the ISPRS community. The thematic session "Toward Smart Forests" will explore cutting-edge advancements in sensor technology and data processing at the intersection of remote sensing, artificial intelligence (AI), and robotics. The session will focus on transformative applications in precision forestry, agriculture, and integrated forest and crop management, with particular attention to productivity, biodiversity, and climate-related challenges. As sustainable forestry and agriculture increasingly embrace data-driven approaches, this session will showcase how close-range technologies such as LiDAR and UAV-based imaging are being integrated with machine learning algorithms, deep learning, and robotic systems to enhance forest and agricultural automation, inventory, health monitoring, understanding of plant development, and ultimately decision-making. With the growing number of developments targeting precision agriculture and forestry, a key focus of the session will be benchmarking studies, which are essential for evaluating and comparing the performance of various methods and technologies across diverse forest conditions. Presentations will highlight innovations in individual tree detection, tree species classification, yield prediction, and autonomous field operations, while also emphasizing the importance of standardized datasets and protocols for method validation. By bringing together researchers and practitioners from diverse disciplines, the session aims to foster interdisciplinary dialogue and identify pathways for the operational deployment and robust assessment of smart forestry technologies.
Session Chairs:
Yiping Chen
Lingfei Ma
Haiyan Guan
Large Language Models for Intelligent LiDAR Point Cloud Processing
Large language models (LLMs) have revolutionized natural language understanding and are now poised to transform 3D geospatial data analysis. This session explores the integration of LLMs with LiDAR point cloud processing to advance tasks such as semantic segmentation, object detection, 3D reconstruction, and dynamic scene interpretation. We focus on harnessing LLMs for contextual reasoning, cross-modal alignment (e.g., text-to-3D), and efficient processing of large-scale point clouds. We invite contributions addressing novel methodologies, including but not limited to:
- Semantic understanding: LLM-enhanced segmentation, object detection, and classification in complex urban, forest, and infrastructure environments.
- Cross-modal alignment: Text-to-3D retrieval, natural language querying of point clouds, and joint embedding techniques for spatially grounded language models.
- Dynamic reconstruction: Spatiotemporal analysis of mobile/airborne LiDAR streams using attention mechanisms and sequence modeling.
- Efficiency frameworks: Distillation, quantization, and hybrid architectures for scaling LLMs to continent-scale point clouds.
- Interpretability and uncertainty quantification in LLM-guided 3D workflows.
- Real-world applications in urban digital twins, autonomous navigation, forestry, and disaster monitoring.
Emphasis will be placed on practical scalability and reproducibility, highlighting case studies that bridge theoretical innovation with operational deployment. This session aims to catalyze a community roadmap for generative AI in geospatial sciences, fostering interdisciplinary collaboration to address global sustainability challenges through intelligent 3D earth observation.
Session Chairs:
Myriam Servières
Valerio Signorelli
Arzu Çöltekin
Human-AI Collaboration for XR: Technology and Human Factors in Content Creation, Visualisation and Interaction
Artificial Intelligence (AI) powered XR approaches (XR & AI, or Spatial AI) are relevant to the entire spectrum of extended reality, i.e., augmented, mixed and virtual reality (AR, MR, VR), as well as forms of immersive visual analytics (VA). The convergence of spatial computing, immersive interfaces, situated intelligence and generative AI opens up new possibilities for rapidly transforming complex spatial data into multisensory, interactive narratives. In fact, AI-based approaches might replace manual visualization and 3D modeling entirely, or much of such processes, in the near future. As the latest developments in AI change the way we work, in this thematic session we explore the contemporary and future opportunities, as well as the limitations, that (generative) AI brings to XR and VA. Specifically, we examine the evolving collaboration between humans and AI in the context of XR and VA, focusing on how content creation, spatial visualisation, and interaction can benefit from technological innovation, experience-based and analytical processes, and human-centered design. From a technical standpoint, we are interested in how to create realistic XR content such as 3D scenes and objects; interfaces such as avatars, overlays, and adaptive menus; and interactions such as speech, gesture, and eye tracking, which not only enable human-computer interaction but also facilitate technology-mediated human-human interaction. We are also keen to include examples of immersive analytics and how they could be supported by current AI approaches. In this context, we invite case studies, applications demonstrating opportunities and bottlenecks in current technical developments, as well as state-of-the-art and position papers, including those with a critical voice.
From a human-centered standpoint, we also consider the effects of XR environments on human experience: automated methods are making 3D content creation more rapid, raising the expectation that immersive 3D experiences will become increasingly prevalent in the coming decade. We invite contributions that examine the affordances, functions, and effectiveness of XR systems in experience-based vs. analytical tasks, as well as trust, bias, and cognitive priors such as assumptions of ‘reality’ as the real and the virtual become hard to tell apart. We are also interested in other social and cultural implications, such as social interactions in the so-called ‘metaverse’. While we welcome any topic area within the above-mentioned context, given the scope of the ISPRS, a major focus of our session will be geospatial studies. In the context of such studies, contextual visualization is often framed as a medium for communication or engagement; however, we also seek to examine its potential as an analytical lens, one that enables spatial reasoning, pattern recognition, and the situated interpretation of complex spatial dynamics. We thus welcome contributions that explore how curated, situated, and immersive visual approaches can help make sense of urban complexity by fostering understanding through engagement, storytelling, and sensory interaction, particularly when augmented by the growing capabilities of AI ecosystems. We also welcome contributions that investigate how human-AI co-creation can enhance XR-based visualisation and interaction, whether through adaptive interfaces, generative content, or embodied engagement, and how these systems can be deployed in domains such as urban planning, environmental monitoring, education, or cultural heritage.
We are particularly interested in approaches that make data not only visible but also felt, narrated, experienced, and decoded, fostering deeper engagement with urban and natural spaces by leveraging human-AI interaction to support adaptive engagement and situated intelligence to ensure that interpretation is shaped by spatial, temporal, and contextual awareness.
Session Chairs:
Kyle Gao
Dening Lu
Jonathan Li
Multimodal Large Language Models for Remote Sensing Image Modalities
Recent breakthroughs in multimodal Large Language Models (LLMs) have demonstrated their remarkable ability to understand and reason over natural language as well as complex image-based data modalities. While these models have revolutionized domains such as Natural Language Processing (NLP) and Computer Vision (CV), their application in remote sensing remains in its infancy. This thematic session aims to explore the emerging intersection between the various remote sensing image modalities and LLMs, highlighting their potential to enhance semantic understanding, automate interpretation, and enable human-like interaction with geospatial data. Remote sensing images are rich in spatial and temporal information but often lack high-level natural language-based semantic descriptors. While the application of LLMs to RGB image analysis is well understood, their use with other remote sensing modalities such as Synthetic Aperture Radar (SAR), non-RGB bands, and hyperspectral imagery remains underexplored. The goal of the proposed session is to highlight and advance research on integrating LLMs with remote sensing image modalities, including RGB images as well as modalities with previously underexplored LLM integration such as SAR, non-RGB bands, and hyperspectral imagery. This session fits within the scope of Technical Commission III (Remote Sensing). It aims to bring together researchers from both the remote sensing and AI communities to present pioneering work, identify challenges, and shape future research directions for LLM-enhanced Earth observation.
Session Chairs:
Hongjie He
Jonathan Li
Satellite Image Super-Resolution in the Era of Generative AI
This session highlights the transformative potential of generative AI—particularly diffusion models—in advancing satellite image super-resolution for geospatial applications. Traditional interpolation and CNN-based methods often fall short in capturing fine-grained, physically consistent details. In contrast, generative models offer high-fidelity reconstructions that are both controllable and grounded in physical priors, enabling more accurate interpretation of complex remote sensing data. These advancements are critical for high-impact applications such as land use classification, urban infrastructure monitoring, and environmental change detection. We invite contributions that explore the design, training, and deployment of generative models customized for remote sensing, with a focus on controllable generation guided by semantic, physical, or temporal information; fusion of multi-resolution and multi-sensor data; uncertainty quantification; and model interpretability and efficiency. The session aims to foster interdisciplinary collaboration among experts in Earth observation, photogrammetry, and AI, in line with the ISPRS mission to promote scientific innovation and practical solutions in spatial information science. Special emphasis is placed on real-world scenarios involving domain-specific constraints, temporally misaligned data, or sensor-induced variations. By bridging methodological advances in generative modeling with pressing needs in geospatial intelligence and sustainable development, this session supports the creation of next-generation super-resolution frameworks that are explainable, adaptive, and operationally viable for global challenges including smart cities, disaster response, and climate resilience.
Session Chairs:
Eija Honkavaara
Conor Cahalane
Claire Ellul
Anka Lisec
EuroSDR Thematic Session: Emerging Challenges and Opportunities for National Mapping and Cadastral Agencies
National Mapping and Cadastral Agencies (NMCAs) operate within a dynamic landscape shaped by emerging technologies, societal expectations, and environmental pressures, presenting both new opportunities and complex challenges. Advances in sensor technologies, from traditional aerial platforms to drone and satellite systems, are expanding the capabilities for spatial data acquisition. GNSS interference is a critical emerging threat impacting data acquisition and emphasizes the need for robust positioning and navigation solutions. Simultaneously, Artificial Intelligence (AI) is transforming the entire geospatial data pipeline. A key emerging application is the development of Digital Twins for diverse use cases, which require the integration of multi-source geospatial data, often at high temporal and spatial resolution, introducing new demands on data standardisation, interoperability, and quality assurance. Rising economic, societal, and environmental demands urge NMCAs to enhance open data sharing, climate resilience, environmental monitoring, public engagement, and capacity building to ensure sustainable geospatial services. NMCAs play a vital role in delivering consistent, high-quality data at a national scale, enabling more equitable data-related opportunities and extending the benefits of Digital Twins to smaller towns and villages, beyond today’s city-centric focus. These cross-cutting challenges and innovations reflect the diverse roles of European NMCAs across the entire data pipeline, spanning ISPRS Commissions 1–5. This thematic session invites contributions addressing GeoAI-driven national mapping, GNSS interference mitigation, Digital Twins, new sensors, drones, standardisation, climate-resilient workflows, automation in quality assurance, and education and outreach. With this session, we hope to both share the perspective of European NMCAs and learn from others around the world.
Session Chairs:
Jianzhu Huai
Chris Heckman
Sebastian Scherer
Chris Xiaoxuan Lu
Resilient Localization, Mapping, and Perception in Adverse Conditions Using Modern Civilian Radars
Recent advances in 4D radar technology are unlocking new frontiers in resilient autonomous navigation, mapping, and environmental perception. Civilian radars—including automotive, wearable, and long-range systems—are increasingly leveraged in a wide spectrum of applications, ranging from self-driving vehicles and aerial drones to maritime platforms and search-and-rescue robotics. Their unique ability to function reliably in adverse weather, low-visibility, and cluttered environments makes radar sensing an essential technology for robust remote sensing and situational awareness. This session seeks original research and real-world case studies that advance the state of the art in radar-based localization, mapping, and perception, with a special focus on resilient operation in challenging conditions. Topics of interest include, but are not limited to:
- Multi-sensor calibration and fusion involving radar
- Radar-based odometry and SLAM for robust navigation
- Resilient perception, object detection, and tracking in dynamic and adverse environments
- Life detection, remote monitoring, and activity recognition
- Advances in signal processing, learning-based approaches, and hardware integration for resilience
We particularly welcome submissions that present novel applications of 4D and imaging radars, demonstrate resilience in real-world scenarios, or critically assess the strengths and limitations of radar technologies under adverse conditions. The session aims to bring together researchers and practitioners from remote sensing, robotics, and geospatial sciences—advancing the ISPRS mission to foster innovation in photogrammetry, remote sensing, and spatial information science.
Session Chairs:
Wei Huang
Shishuo Xu
Xintao Liu
Spatial Intelligence for Cyber-Physical Human-Urban Interaction
Urban environments have become increasingly complex, shaped by diverse human behavior in both physical and virtual worlds. Understanding the mechanisms of interaction between human behavior and cyber-physical environments is key to addressing issues ranging from pollution to equity in access to activity spaces and opportunities, in support of sustainable urban development. Advanced computational technologies, in particular spatial intelligence and multi-source spatial big data, have recently shown their power for spatial analysis and semantic understanding, which can help tackle human-urban conflicts. This session focuses on new advanced technologies in spatial data representation and interoperability, multimodal data fusion, and spatial intelligence that can harness human behavior-related spatial big data to discover the mechanisms of interaction between human behavior and cyber-physical urban environments. This interdisciplinary theme looks into human, urban cyber-physical, and social aspects through spatial information, aiming to provide a venue for researchers, engineers, and policymakers to communicate with each other and contribute to cutting-edge spatial information technologies.
Session Chairs:
Heiner Kuhlmann
Christoph Holst
TLS-based Deformation Analysis
Terrestrial laser scanning (TLS) is an established method for reality capture. For rigorous deformation analyses, however, it has so far been used only rarely, especially for infrastructure monitoring with high accuracy demands. Rigorous means here that a statistically based assessment of significance is needed to separate actual geometric changes from the uncertainty introduced by measurement procedures and data processing methods. The great advantage of TLS-based deformation analysis is that the object is sampled with a high spatial resolution in the resulting point cloud. Thus, the subjective discretization of the object with individual points can be omitted, leading to more objective analyses. Nevertheless, in order to use TLS for deformation analyses, several challenges need to be solved. These challenges are closely related to the demand for small measurement uncertainties and strict significance tests, which require an entirely determined uncertainty budget. The challenges are:
- A representation of the measured object surface is needed that allows for representing object details as well as for introducing smoothness assumptions. Additionally, changes in individual parameters should be connectable to individual, if possible spatially limited, deformations.
- Calibrating the laser scanner so that systematic instrumental errors are minimized.
- Determining a realistic variance-covariance matrix of the TLS measurements for describing the measurement uncertainty.
The goal of TLS-based deformation analysis is to turn point clouds of two or more epochs into meaningful information, e.g. whether the object is stable or not. Thus, the proposed thematic session follows the motto of the Congress: From Imagery to Understanding.
The session has close links to ISPRS: To TC I, especially to WG I/4 Lidar and to WG I/6 Calibration, and to TC II, especially to WG II/2 Point Cloud Generation and Processing and WG II/8 Infrastructure Monitoring.
Session Chairs:
Laurent Lebegue
Karsten Jacobsen
CO3D Mission
The CO3D constellation was successfully launched on July 25, 2025. The goal of the CO3D mission is the fully automatic production of an accurate worldwide DEM. CO3D is also a constellation of a new generation of four optical satellites. The relative DEM accuracy is expected to be one meter, with a one-meter grid spacing. Each of the satellites will provide images with 0.50 m resolution in red, green, blue and NIR bands. The satellite resources will be shared between, on the one hand, French institutions, which will have dedicated access and preferential price conditions, and, on the other hand, commercial customers interested in 2D and 3D products. CNES is in charge of the development of the whole 2D and 3D processing chains. A 6-month in-orbit commissioning period, including image quality CAL/VAL activities, will be conducted by CNES thanks to a dedicated Image Calibration Center. An 18-month demonstration phase will start after this commissioning period. A DSM (Digital Surface Model) over 90 % of the metropolitan territory of France and 80 % of an ‘Ark of Crisis’ area dedicated to French defense will be acquired, processed and assessed. The worldwide acquisition will start at the same time, and 90 % of the data should be collected by the end of 2029. In the scope of ISPRS, the thematic session will gather presentations of the mission, first results of the CAL/VAL activities, the ground segment development including the processing chains and the Image Calibration Center, 3D data quality assessment, and examples of 3D product applications.
Session Chairs:
Hao Cheng
Mareike Dorozynski
Max Mehltretter
Rongjun Qin
Fabio Remondino
Xin Wang
AI-Augmented Photogrammetry - Bridging Learning-based Approaches and Classical Geometric-based 3D Methods
Photogrammetry has long been a cornerstone for deriving accurate 3D information from imagery across multiple platforms - from satellite and aerial to terrestrial and underwater systems - using rigorous geometric principles. However, recent advances driven by artificial intelligence (AI) models, such as Vision Foundation Models, are reshaping the 3D reconstruction landscape. These learning-based models show remarkable capabilities in feature extraction, semantic interpretation and 3D representation, yet their integration into traditional photogrammetric workflows, such as bundle adjustment, dense image matching and geometric modeling, remains an open research challenge, and end-to-end learning-based approaches are generally preferred. This session invites early-career scientists, senior researchers and industrial partners to critically assess how AI techniques can enhance or transform photogrammetric pipelines. We seek contributions addressing:
- Hybrid methodologies combining geometric- and learning-based approaches
- Semantic segmentation and scene understanding in multi-view and multi-modal contexts
- Robust image orientation, camera calibration and outlier detection
- Feed-forward models or transformer-based architectures, such as VGGT, for efficient end-to-end 3D reconstruction
- 3D reconstruction of non-collaborative objects (transparent, reflective, etc.)
- Uncertainty quantification in AI-based 3D estimation
- Applications of 3D Gaussian Splatting (3DGS), neural radiance fields (NeRF) and implicit representations for scalable, photorealistic 3D modeling
By focusing on how AI tools can align with photogrammetry’s core principles (i.e. accuracy, precision and interpretability), this session aims to explore not just algorithmic improvements but also deeper methodological shifts.
A final panel discussion with session speakers will further illuminate the opportunities and limitations of this intersection, offering strategic insights for shaping the next generation of photogrammetric research.
Session Chairs:
Ribana Roscher
Devis Tuia
Marc Rußwurm
Clément Mallet
Data-Centric Learning for Geospatial Data
This session focuses on data-centric approaches for geospatial machine learning that prioritize data quality, structure, and representativeness over model complexity. Topics include label quality, uncertainty, and systematic annotation strategies; weakly, semi-, and self-supervised learning; and data augmentation, synthetic data generation, and active learning. We invite contributions on dataset-driven model design and evaluation protocols (e.g., slicing, stratification), benchmarking and reproducibility, and approaches tailored to the challenges of geospatial data. We also welcome hybrid machine learning methods that integrate data- and model-centric perspectives. The session builds on the recent WGII/4 position paper “Data-Centric Machine Learning for Earth Observation” (Roscher et al., 2024). Organized by ISPRS TC II/WG4, with support from ISPRS TC II/WG5, this session aligns with the ISPRS mission to advance photogrammetry, remote sensing, and spatial information science.
Session Chairs:
Marc Rußwurm
Esther Rolf
Konstantin Klemmer
Evan Shelhamer
Towards Geospatial Embeddings: Investigating Accurate and Accessible Deep Geospatial Feature Representations
A fundamental challenge across all geospatial sciences and ISPRS Commissions is the integration of diverse data types—ranging from satellite images to time series and textual descriptions—into machine learning models. Here, encoding geodata in neural networks as implicit neural representations or deep multi-modal foundation models offers an opportunity to efficiently encode heterogeneous geodata in network weights. Deep feature representations at one specific location, i.e., Geospatial Embeddings, offer a new way to integrate deep AI representations into heterogeneous downstream tasks through easy-to-learn, location-specific representations that unify heterogeneous geospatial data into a shared, metric embedding space. In this way, Earth Embeddings provide a consistent and scalable interface between remote sensing data and AI models. This session invites contributions from emerging research on Geospatial Neural Encoding Fields and location encoders, such as those utilising embeddings derived from data representations or remote sensing foundation models. It aligns closely with the ISPRS mission by advancing collaborative research at the intersection of photogrammetry, remote sensing, and AI. Earth Embeddings enhance geospatial data integration, supporting informed decision-making in areas such as climate adaptation, biodiversity monitoring, and urban planning—thereby directly benefiting both society and the environment. Furthermore, since Earth Embeddings can lower the barriers to entry for AI development and interpretation in the geosciences, this session empowers the next generation of researchers and fosters international knowledge exchange.
Session Chairs:
Jiaojiao Tian
Ksenia Bittner
Pablo d' Angelo
Cheng Wang
FoMo3D: Advancing 3D Earth Mapping with Foundation Models
Recent advances in foundation models such as SAM (Segment Anything Model), DINOv2, and CLIP are revolutionizing computer vision, and EO-specific models such as TerraMind and Prithvi promise the same for remote sensing and photogrammetry. Trained on massive datasets, these models offer unprecedented capabilities in feature extraction, semantic understanding, and transfer learning, paving the way for automated, scalable, and intelligent mapping systems. They also offer a powerful solution for processing the immense flow of Earth observation data, enabling an accurate, up-to-date 3D representation of our living environment. We propose a thematic session on 3D Earth Mapping using foundation models. As such, we expect statements, commentary, and position papers that examine where we are now and what the future directions are. This session is jointly organized by ISPRS WG I/8 (Multi-sensor Modelling and Cross-modality Fusion), WG I/1 (Satellite Missions and Constellations for Remote Sensing), and WG II/3 (3D Scene Reconstruction for Modeling & Mapping). We welcome a broad range of papers, including but not limited to the following:
- Foundation models (e.g., SAM, DINOv2, CLIP) for geospatial feature extraction
- 3D reconstruction with foundation models
- Integration of multimodal geospatial data (e.g., LiDAR, SAR, multispectral) with foundation models
- Open datasets and benchmarks for training and evaluating foundation models in geospatial applications
- Automated semantic segmentation and object detection in 2D and 3D datasets with foundation models
Session Chairs:
Yelda Turkan
Mehdi Maboudi
Kourosh Khoshelham
Jónatas Valença
Advances in Reality Capture, AI, and Digital Twin Technologies for Construction Engineering
The increasing availability of high-resolution spatial data from various sensors such as LiDAR and cameras, onboard static and mobile platforms including UAVs, combined with recent breakthroughs in computer vision and AI/ML, is transforming civil and construction engineering practice. This thematic session explores how these technologies are enabling a new era of intelligent, data-driven practices across the Architecture, Engineering, and Construction (AEC) industry. A primary focus is on the application of AI/ML algorithms to process diverse geospatial datasets for automated progress monitoring, quality control, and inspection of construction sites, among other relevant applications. Topics include, but are not limited to, object detection, semantic segmentation, co-registration of multimodal data, and change detection for tracking construction performance at various scales, from individual components to complex infrastructure systems. Digital twins will also be emphasized as a critical emerging paradigm for integrating real-time sensor data with spatial models, enabling continuous evaluation of the built environment. The session invites work on methods to synchronize as-designed and as-built models, evaluate geometric conformity, and support lifecycle-based decision-making using digital twins. Additional topics include integration of inspection data with BIM platforms, and the role of computer vision in hazard identification, worker tracking, and site compliance monitoring. Aligned with ISPRS’s mission to promote advancements in photogrammetry, remote sensing, and spatial information science, this session will bring together interdisciplinary researchers and practitioners to chart a data-driven, intelligent future for the AEC industry.
Session Chairs:
Markus Ulrich
Samanta Piano
Trustworthy AI in Photogrammetry: Metrological Foundations and Practical Challenges
Artificial Intelligence (AI) is increasingly integrated into photogrammetric workflows, increasing automation and improving performance in tasks like 3D reconstruction, object recognition, and measurement. However, the application of AI in safety-critical domains — such as industrial inspection, medical imaging, autonomous driving, and geospatial monitoring — raises important questions about trustworthiness and metrological soundness. This thematic session addresses the metrological and practical challenges of using AI in photogrammetry. Key topics include the quantification of uncertainty in AI-based regression and classification models, the interpretability and transparency of AI decisions (e.g., through explainable AI), the robustness and reliability of AI systems under varying data conditions, and data privacy (e.g., federated AI). A central focus is whether AI-based methods can meet the rigorous standards of photogrammetry and metrology, e.g.:
a) Can we compute standard deviations or confidence intervals for AI outputs?
b) Is there a framework comparable to the Guide to the Expression of Uncertainty in Measurement (GUM) for AI-based methods?
c) What are the limitations and practical workarounds for deploying AI in real-world measurement tasks?
The session welcomes contributions from a broad range of photogrammetric applications and scales, from close-range and micro-scale imaging to large-scale remote sensing, highlighting both theoretical developments and applied case studies. By fostering discussion on the scientific rigor and reliability of AI in photogrammetry, this session supports the ISPRS mission to advance accurate and trustworthy geospatial technologies.
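One common practical workaround for the question of standard deviations and confidence intervals is to treat repeated stochastic predictions (e.g., from Monte Carlo dropout or a model ensemble) as a sample and summarize its spread. The sketch below illustrates this idea only; it is not a GUM-compliant uncertainty budget, and the normal-approximation interval it reports is an assumption.

```python
import numpy as np

def ensemble_uncertainty(predictions, z=1.96):
    """Summarize repeated stochastic predictions for one input
    (e.g., Monte Carlo dropout passes or ensemble members) as a
    mean, sample standard deviation, and approximate 95% interval.

    A sketch of a common workaround, not a GUM-compliant
    procedure: the z-based interval assumes roughly normal spread.
    """
    p = np.asarray(predictions, dtype=float)
    mean = p.mean()
    std = p.std(ddof=1)  # sample standard deviation
    return mean, std, (mean - z * std, mean + z * std)

# Five stochastic forward passes for one measured quantity:
mean, std, ci = ensemble_uncertainty([10.1, 9.9, 10.0, 10.2, 9.8])
print(round(mean, 2), round(std, 3))  # 10.0 0.158
```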
Session Chairs:
Xuke Hu
Hongchao Fan
Lucy W. Mburu
Mapping the World with Text
Remote-sensing satellites and airborne LiDAR now deliver sub-metre views of Earth, yet a substantial share of geographic knowledge remains embedded in high-volume unstructured text—the continuous stream of news reports, web pages, social-media posts, and scientific publications—together with the vast body of accumulated historical documents. These sources record place names, spatial relations and descriptions of people, events, environments and societies that sensors cannot capture, ranging from real-time rescue requests during disasters to accounts of landscapes written centuries before the first aerial photograph. Unlocking this information at scale depends on recent advances in natural-language processing (NLP) and GeoAI. This session will showcase state-of-the-art techniques for extracting and analysing geospatial information from text, with particular emphasis on improving coverage for low-resource languages and under-represented regions. We welcome contributions that demonstrate practical applications in any domain—for example (but not limited to) urban planning, emergency management, spatial humanities, geographic search, epidemiology, tourism analytics, landscape assessment and biodiversity informatics. Submissions illustrating how text-derived geoinformation can be integrated with Earth‑observation imagery, LiDAR or volunteered geographic information (VGI) to tackle pressing challenges—such as SDG monitoring and climate-change adaptation—are especially encouraged. By bringing together researchers from GIScience, NLP, humanities and remote sensing, the session aims to strengthen collaboration between ISPRS and sister communities in computational linguistics, information retrieval and digital humanities. Our ultimate goal is to establish text as a rigorous, complementary source of geospatial insight—one that enriches sensor-based data and broadens the temporal and thematic reach of the ISPRS community.
Session Chairs:
Clément Mallet
Ana-Maria Raimond
Nesrine Chehata
Cidalia Fonte
The Global-local Exchange Loop: Coupling Earth Observation and Citizen Sciences for LCLU Mapping
Earth Observation (EO) is steadily advancing thanks to numerous satellites providing daily global coverage at multiple spatial, temporal and spectral scales. This deluge of data offers a unique momentum for global-scale yet fine-grained land-cover mapping and has stimulated researchers of various fields to develop the so-called foundation models ingesting any kind of EO data for multiple downstream tasks. In parallel, there is also a huge demand for local-scale quantification and qualification of land surfaces, which is today mainly fulfilled by in-situ measurements and observatories, through institutional and/or citizen science campaigns. Such campaigns, now even coupled with prompts when integrating LLMs or VLMs, are extensively carried out and offer complementary knowledge to coarser global land-cover/land-use maps. This session focuses on bridging the gap between both dynamics to foster interactions between local- and global-scale initiatives for their mutual benefit. Main challenges include training and validation data for local/global EO products, dynamics between land-cover/land-use classes with global and local focus, and AI approaches for the integration of global and local EO products. We welcome contributions that address any of these challenges. Papers may address specific (one class, one biophysical variable) to generic land-cover/land-use mapping (multiple classes, changes, parameters) in order to cover multiple configurations and thematic applications such as semantic segmentation, change detection, tracking, integration, calibration, upscaling, validation or quality assessment. This session seeks research that demonstrates either complementarity between both scales or interdisciplinarity (data and/or communities).
Session Chairs:
Roshanak Darvishzadeh
Jadu Dash
Catherine Champagne
Earth Observation for Crop Health and Resilient Food Systems
Agriculture and crop production worldwide are facing increasing pressures from climate change, extreme weather, soil degradation, and the rising spread of pests and plant diseases. These challenges are putting increasing strain on food systems, particularly in vulnerable regions, where crop failures can have immediate and far-reaching consequences. Detecting, monitoring, and responding to such pressures in a timely manner requires reliable and spatially detailed data. This is where Earth Observation (EO) technologies are playing an increasingly vital role. This thematic session explores how EO is being applied to monitor agriculture under stress, with a particular focus on crop health, pest and disease detection, and building resilient agricultural systems. Satellite-based data can detect early signs of vegetation stress through changes in crop physiological parameters, abnormal growth patterns, or conditions favourable for outbreaks. When integrated with ground data and predictive models, EO provides early warning tools for faster, more targeted interventions to safeguard crop yields and food supplies. A central theme is the translation of EO data into actionable insights for agricultural monitoring in global coordination efforts such as those led by GEOGLAM and CGIAR, particularly through research programs focused on plant health. The session will showcase innovations in remote sensing for crop health and demonstrate how EO enables smarter, faster, and more equitable responses to agricultural stress. It also aligns with the mission of the ISPRS to apply spatial science in solving real-world challenges. EO-based crop health monitoring and plant protection are clear examples of this mission in action, supporting SDG2 and SDG13.
Session Chairs:
Miaole Hou
Grazia Tucci
Mario Santana Quintero
Junshan Liu
Towards Large Cultural Heritage Foundation Models: Datasets, Semantic Alignment, and Component-Level Annotation
The rise of large-scale foundation models, with billions of parameters and trained on massive multimodal datasets, has reshaped the paradigms of many industries, opening transformative opportunities for cultural heritage conservation, visualization, and virtual restoration. However, heritage-specific foundation models face unique challenges: highly sparse and diverse datasets, strict requirements for precision and tolerance (often at millimeter level), and a lack of standardized multimodal semantic alignment. This thematic session aims to address these critical gaps by exploring frameworks for building cultural heritage-oriented large models, from establishing robust, semantically rich, cross-modal datasets to developing intelligent annotation methods at the component level of heritage assets. It will discuss the knowledge fusion strategies needed to align data sources including text, images, 3D point clouds, drawings, multimedia, and sensor data, ensuring high-quality, multi-perspective, multidisciplinary annotation. Moreover, the session will highlight how foundation models can integrate domain-specific knowledge from structural systems, material sciences, historical archives, and conservation practice, thus serving as a powerful tool for heritage protection, risk assessment, and public engagement. Aligned with the ISPRS mission, this session will connect experts pioneering these data-centric and AI-enhanced approaches, promoting synergy between spatial information science, digital humanities, and cultural heritage disciplines.
Session Chairs:
Mohamed Mostafa
Camillo Ressl
Felix Audirac
Charles Lemaire
Wide-Area Airborne Mapping: Theory and Practice
Wide-area airborne surveying and mapping projects are vital for modern infrastructure and urban planning. The primary goal is to update base maps that serve as essential tools in both public and private sectors, enabling a variety of applications including infrastructure, construction, urban planning, agriculture, forestry, oil and gas, and mining. As a result, large airborne surveying and mapping projects are conducted regularly around the world. These initiatives aim to acquire high-resolution aerial imagery that can encompass entire countries, states, provinces, counties, or cities. By leveraging advanced technology and navigating the complexities of LiDAR, photogrammetric, and geodetic systems, these projects can significantly contribute to the development and maintenance of accurate base maps, ultimately enhancing decision-making processes across sectors. This session addresses large-area airborne surveying projects. While advanced LiDAR and multi-camera systems are now standard, especially for country- or state-wide initiatives flown at high altitudes (e.g., 10 km AGL), current photogrammetric mathematical models often fall short. These models frequently fail to adequately account for key factors like atmospheric refraction at varying altitudes, the true curvature of the Earth, map projections, and significant geoid variations over large distances. This session will bring together academic researchers and industry experts to delve into both the scientific and practical challenges of executing these complex projects. We will explore best practices and cutting-edge technological advancements, and critically examine the limitations of outdated photogrammetric modeling. Attendees will gain valuable insights into overcoming these technical hurdles, fostering more precise and reliable geospatial data for diverse applications.
Session Chairs:
Maria J. Santos
Roshanak Darvishzadeh
Marc Paganini
Monitoring Biodiversity from Space
Understanding how biodiversity and ecosystems change and how to protect them requires consistent observations that are complementary to those obtained from ground-based measurements. Earth Observation (EO) is central to this effort, offering powerful data to detect, quantify, and understand changes in biodiversity and ecosystem health from local to global scales. This thematic session focuses on understanding how EO contributes to our ability to monitor ecosystems by assessing composition, functioning and dynamics. These insights are increasingly being incorporated into pipelines to calculate Essential Biodiversity Variables (EBVs), a proposed set of metrics that enable standardized quantification of biodiversity change across time and space. Remote sensing (RS)-enabled EBVs are critical in supporting reporting and decision-making under international frameworks such as the Kunming-Montreal Global Biodiversity Framework (GBF), which calls for measurable progress in reversing biodiversity loss by 2030. The session also highlights the growing ecosystem of EO resources provided by space agencies such as the European Space Agency (ESA) and NASA, which drive innovation through missions such as Copernicus Sentinel and Landsat and provide open-access satellite data that fuels biodiversity monitoring globally and supports EO integration into conservation and restoration strategies. Biodiversity monitoring from space reflects the broader mission of the ISPRS, which is to harness the power of spatial science for global benefit. As this field evolves, it improves how we observe the natural world and informs us of the actions needed to protect it. This session will present the latest developments and opportunities for future collaboration at the intersection of biodiversity, RS and global policy.
Session Chairs:
Masoud Mahdianpari
Saeid Homayouni
Earth Observation Foundation Models: Advances in Scalable, Multimodal
Foundation Models (FMs) are revolutionizing Earth Observation (EO) by enabling scalable, generalizable, and cross-modal analysis of geospatial data. Powered by architectures like transformers and trained on large-scale, diverse datasets, geospatial FMs such as Prithvi-EO, Clay, and TerraMind are unlocking new capabilities across remote sensing domains. This session will highlight how these models are transforming applications in agriculture, forestry, climate resilience, and land degradation monitoring. A key strength of FMs lies in their sensor independence—allowing them to process multispectral, hyperspectral, SAR, LiDAR, and VHR satellite data with minimal task-specific training. These models also support self-supervised learning and zero-shot inference, enabling robust analysis even in data-sparse environments. Fine-tuning and transfer learning approaches are being applied to tasks like biomass estimation, crop monitoring, early anomaly detection, and biophysical variable retrieval—areas often underrepresented in existing EO benchmarks. In addition, the session will explore advances in embedding-based approaches to compress, index, and align geospatial data across modalities. These methods address big data challenges posed by EO archives and high-resolution Earth System Models (ESMs), supporting efficient data sharing, search, and multimodal fusion. We invite contributions that address theoretical models, practical deployments, benchmarking strategies, and real-world impact. The session aims to foster cross-disciplinary collaboration among AI researchers, EO scientists, and decision-makers shaping the future of environmental intelligence.
Session Chairs:
Fabio Remondino
Juha Hyyppä
Jon Mills
Norbert Pfeifer
From Photogrammetry, Remote Sensing and AI to Climate Action
The convergence of geospatial sensor technologies, autonomous platforms and artificial intelligence is revolutionizing how we approach climate challenges through geospatial innovation. By harnessing high-resolution photogrammetric images, hyperspectral and LiDAR systems, multi-temporal satellite data and multi-platform data acquisitions, we are creating unprecedented opportunities to develop intelligent, analysis-ready geospatial products that directly support environmental monitoring. This thematic session stems from the I-DEAL EU project and aims to collect advancements and experiences with geospatial solutions that are supporting the European Green Deal and transforming our capacity to monitor, analyze and respond to climate changes. The thematic session seeks visionary contributions from researchers, mapping agencies, industry innovators and policymakers who are developing next-generation geospatial solutions for climate change mitigation. Of particular interest are comprehensive Earth observation frameworks, advanced climate impact assessment tools, biodiversity monitoring pipelines, smart urban sustainability solutions and decision support systems that leverage geospatial intelligence for sustainable development outcomes. This initiative will showcase how innovative geospatial technologies are becoming essential tools for contributing to the global response to climate change, driving both scientific advancement and practical solutions for a sustainable future.