
Tutorial Session Information

ISPRS 2026
Tutorial Sessions

Advance Your Expertise with Full-Day and Half-Day Learning Opportunities

Join us on Saturday, 4 July, and Sunday, 5 July, 2026, for a series of tutorial sessions designed to deepen your knowledge and practical skills in both emerging and foundational areas of photogrammetry, remote sensing, and spatial information sciences.

Led by international experts from academia, industry, and government, these full-day and half-day tutorials offer valuable opportunities for hands-on learning, in-depth discussion, and professional networking—just ahead of the main congress.

Please note that each tutorial session has a limited capacity.

Registration will be on a first-come, first-served basis until all spots are filled.

Tutorial Information:

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Rongjun Qin

Co-Instructors:

Shuang Song 

Jiyong Kim

3D Reconstruction from Multi-View Satellite Imagery: From Classic to Modern Methods

Full Day (8 Hours)   

High-resolution satellite images, which can now observe the Earth with a footprint as small as 30 cm (e.g., WorldView-3/4 sensors), play an important role in generating 3D data over wide areas. Due to orbital limitations, these images have unique characteristics, and their photogrammetric processing requires special considerations. This tutorial is intended for beginners and intermediate users, including students, researchers, and practitioners interested in 3D mapping with multi-view stereo satellite images and their applications. Participants will gain hands-on experience with relevant software and learn about recent developments in field-based methods for 3D reconstruction using satellite imagery, including 3D Gaussian Splatting (3DGS) and Neural Radiance Fields (NeRF). The tutorial will cover both theoretical and practical aspects of processing multi-view satellite images and will be divided into a theory session and a hands-on practical session. The theory session will cover:

· Existing satellite sensors;
· Camera models (rational polynomial models);
· Bundle adjustment;
· Surface generation;
· Orthophoto rectification;
· Emerging techniques such as NeRF and 3DGS.

In the practical session, we will demonstrate:

· Satellite imagery selection;
· Level-1 geometric correction;
· Pan-sharpening;
· Relative and absolute orientation;
· Dense matching using the RPC Stereo Processor (RSP) to generate colored point clouds, digital surface models (raster and textured meshes), and true-orthophotos;
· Examples of using NeRF and 3DGS to generate a DSM, shadow mask, and albedo (shadow-free) maps.
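The rational polynomial camera model behind this pipeline can be illustrated compactly: image row and column are ratios of polynomials evaluated on normalized ground coordinates. The sketch below uses a truncated (linear) polynomial basis and made-up coefficients purely for illustration; real RPCs carry 20-term cubic polynomials supplied with the imagery.

```python
# Illustrative sketch only: a truncated rational polynomial camera (RPC)
# model. Real RPCs use 20-term cubic polynomials with vendor-supplied
# coefficients; the linear basis and all numbers here are hypothetical.

def _poly(c, lat, lon, h):
    # Truncated basis (1, lon, lat, h); real RPCs extend to cubic terms.
    return c[0] + c[1] * lon + c[2] * lat + c[3] * h

def rpc_project(lat, lon, h, coeffs, offsets, scales):
    # Normalize ground coordinates to roughly [-1, 1].
    latn = (lat - offsets["lat"]) / scales["lat"]
    lonn = (lon - offsets["lon"]) / scales["lon"]
    hn = (h - offsets["h"]) / scales["h"]
    # Image row/column are ratios of numerator and denominator polynomials.
    rown = _poly(coeffs["line_num"], latn, lonn, hn) / _poly(coeffs["line_den"], latn, lonn, hn)
    coln = _poly(coeffs["samp_num"], latn, lonn, hn) / _poly(coeffs["samp_den"], latn, lonn, hn)
    # Denormalize to pixel coordinates.
    return rown * scales["row"] + offsets["row"], coln * scales["col"] + offsets["col"]

# Toy coefficients: row tracks latitude, column tracks longitude.
coeffs = {"line_num": [0, 0, 1, 0], "line_den": [1, 0, 0, 0],
          "samp_num": [0, 1, 0, 0], "samp_den": [1, 0, 0, 0]}
offsets = {"lat": 0.0, "lon": 0.0, "h": 0.0, "row": 1000.0, "col": 1000.0}
scales = {"lat": 1.0, "lon": 1.0, "h": 1.0, "row": 1000.0, "col": 1000.0}
print(rpc_project(0.5, 0.25, 0.0, coeffs, offsets, scales))  # (1500.0, 1250.0)
```

The normalize/evaluate/denormalize structure is the part that carries over to real data; bundle adjustment then refines (bias-corrects) these projections.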

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Erica Nocerino

Co-Instructors:

Fabio Menna 

Dimitrios Skarlatos

Panagiotis Agrafiotis

Gottfried Mandlburger

Caterina Balletti

Katja Richter

Hans-Gerd Maas

Christian Mulsow

A Full Immersion in 3D Underwater Mapping

Full Day (8 Hours)   

3D underwater mapping of oceans and inland environments is crucial for a wide range of applications, including marine ecology, archaeology, coastal monitoring, engineering, and hydrology. This tutorial will introduce participants to the latest developments in 3D underwater mapping—both from above and below the water surface—using passive and active sensing techniques. Lectures will cover the theoretical aspects of underwater photogrammetry, through-water optical bathymetry, and underwater image restoration and enhancement approaches. Hands-on practice will engage participants in using the 3D underwater simulation framework POSER (ISPRS ECBI 2024) and sharing benchmark datasets via the NAUTILUS portal (ISPRS SI 2023). Participants will gain practical experience in both photo- and laser-based bathymetry, as well as underwater color correction methods. By providing theoretical and practical insights into simulation, open datasets, and advanced processing and restoration techniques, the tutorial will promote a broad understanding of the challenges in underwater mapping and emphasize the importance of data validation and sharing. By the end of the tutorial, participants will:

· Deepen their understanding of underwater photogrammetry and optical bathymetry principles;
· Explore image restoration and color correction methods;
· Practice using the POSER simulation framework to design and test 3D underwater mapping scenarios;
· Access and utilize benchmark datasets through the NAUTILUS portal for training and evaluation.

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Mozhdeh Shahbazi

Co-Instructors:

Victor Al-Hassan 

Mikhail Sokolov

Geospatial Deep Learning in Practice

Full Day (8 Hours)   

Geospatial Artificial Intelligence (GeoAI) is an emerging field that combines geographic data with machine learning and deep learning techniques to derive insights from spatial information. It is a rapidly growing area driven by advances in remote sensing, big data, and AI technologies. This tutorial will focus on geospatial deep learning and its application to raster-like data sources, such as satellite imagery, airborne imagery, and elevation models. A short introduction to deep learning will first be given to provide context for participants with little experience in machine learning. This will be followed by a hands-on exercise involving semantic segmentation on large geo-referenced images (GeoTIFF/COG formats) using an open-source geo-deep-learning framework developed by Natural Resources Canada. Participants will learn how to use this framework through a complete workflow:

· Transforming large high-resolution imagery into manageable datasets;
· Training models with various architectures, including convolutional neural networks, transformers, and masked auto-encoders, using scalable solutions for handling large datasets, tracking experiments, and optimizing hyperparameters;
· Geo-inference using advanced techniques to optimize performance in both processing time and resource consumption.

This tutorial is suitable for all levels. Beginners will gain a solid introduction to geospatial deep learning, while advanced users will explore operationalization, scalability, and performance optimization.

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Ewelina Rupnik

Co-Instructors:

Mehdi Daakir

Marc Pierrot-Deseilligny

Hybrid and Precise Camera Pose Estimation in MicMacV2

Full Day (8 Hours)    

This tutorial aims to introduce participants to the open-source photogrammetric processing suite MicMacV2 (MMVII) through two datasets that combine hybrid observations (i.e., photogrammetry and surveying) and camera models (perspective camera and pushbroom sensor), along with hands-on programming examples. This tutorial will mark MicMacV2’s first international release. In development since 2021, MicMacV2 is the successor to MicMacV1. This new version is designed to provide an advanced photogrammetric tool that is accessible both to end users (experts and students) and to external developers interested in contributing to the project. The tutorial will include three sessions:

· The first session will focus on the metrological aspects of camera pose estimation, such as the simultaneous adjustment of photogrammetric and topometric/surveying observations.
· The second session will focus on refining the parameters of the aerial perspective camera model and the RPCs of the satellite pushbroom sensor in a unified adjustment.
· The third session (in Python or C++) will introduce participants to the MicMacV2 programming environment, including the basic mechanisms for using the library and, optionally, how to add a new command to the tool.

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Andreas Piter

Co-Instructors:

Mahmud H. Haghighi

Alison Seidel

InSAR Time Series Analysis with SARvey and InSAR Explorer

Full Day (8 Hours)   

InSAR is a powerful tool in engineering, enabling accurate assessment of ground deformations and structural stability. This tutorial offers a hands-on introduction to two open-source tools for analyzing and visualizing InSAR time series: SARvey and InSAR Explorer. SARvey is a software package designed for single-look InSAR time-series analysis, with a focus on detecting and monitoring deformations in engineering applications. Typical use cases include assessing dam stability, monitoring roads and railways, and mapping urban deformations at the building scale. The tutorial will provide a complete SARvey workflow—covering installation, parameter configuration, and advanced processing methods—making it an excellent entry point for newcomers to InSAR, as well as a valuable resource for experienced users seeking more advanced analytical capabilities. InSAR Explorer complements SARvey as a QGIS plugin, allowing smooth integration of InSAR-derived deformation data into Geographic Information Systems. The plugin provides intuitive tools for mapping, overlaying auxiliary datasets, and comparing outputs from various processing workflows. Its user-friendly interface enables quick visualization of deformation time series, creation of interactive plots, and in-depth evaluation of results. In this tutorial, participants will use notebooks in a Google Colab environment to follow the entire workflow—from software installation to the execution of real-world case studies using Sentinel-1 data. Attendees will learn how to adjust processing parameters, interpret deformation time series, and use InSAR Explorer within QGIS for data visualization and analysis. Whether participants are new to InSAR or are experienced practitioners exploring new tools, this tutorial will offer them a comprehensive, practical learning experience to advance their skills in Earth observation and deformation monitoring.
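For orientation, the basic quantity behind every InSAR deformation time series is the conversion from unwrapped interferometric phase to line-of-sight (LOS) displacement. A back-of-the-envelope sketch (sign conventions differ between processors, so only the magnitude is universal; the wavelength is the approximate Sentinel-1 C-band value):

```python
import math

# Approximate Sentinel-1 C-band radar wavelength in meters.
SENTINEL1_WAVELENGTH_M = 0.0555

def phase_to_los_displacement(dphi_rad, wavelength_m=SENTINEL1_WAVELENGTH_M):
    # LOS displacement from an unwrapped phase change: d = -lambda * dphi / (4*pi).
    # Sign conventions vary between tools; only the magnitude is universal.
    return -wavelength_m * dphi_rad / (4.0 * math.pi)

# One full fringe (a 2*pi phase cycle) corresponds to half a wavelength
# of line-of-sight motion:
print(round(abs(phase_to_los_displacement(2.0 * math.pi)), 5))  # 0.02775
```

This is why Sentinel-1 interferograms resolve centimeter-scale motion: one fringe is only about 2.8 cm of LOS displacement.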

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Jan Skaloud

Co-Instructors:

Aurélien Brun

Laurent Jospin

Open Point-to-point Correspondences for Loose or Tight Integration in Kinematic Laser Scanning

Full Day (8 Hours)   

This tutorial introduces a rigorous approach to addressing georeferencing challenges in mobile and airborne laser scanning. It builds upon recent advances in extracting point-to-point correspondences from overlapping sections of point clouds. These correspondences are used as spatial constraints in a unified, single-step multi-sensor fusion framework that tightly integrates raw inertial measurements and, when available, satellite-derived position and velocity data — referred to as the Dynamic Network. The tutorial will combine theoretical background with hands-on practice using both an open-source point-to-point matching code and the free, online Dynamic Network (DN) solver. By the end of the tutorial, participants will:

· Learn the theory behind generalized point-to-point correspondences and their integration into a dynamic factor graph;
· Explore the improved matching pipeline, including descriptor retraining, RANSAC filtering, and local refinement to sub-GSD level;
· Gain hands-on experience with open-source tools and public datasets, including aerial and mobile laser scanning experiments, and learn how sub-GSD-level correspondences contribute to improved georeferencing.

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Elena Belcore

Co-Instructors:

Paolo Dabove

Alessandro Frigeri

Darshana Rawal

Open-Source Geospatial Tools for Multisensor Environmental Surveying: Positioning, Photogrammetry, Machine/Deep Learning, and Data Security

Full Day (8 Hours)   

This tutorial presents key open-source tools for geospatial education, focusing on GNSS positioning, photogrammetry, AI-based image analysis, and data security. Designed for university-level teaching, it emphasizes accessible solutions for instructors and students. The tutorial will include four modules comprising lectures and hands-on exercises based on open datasets provided by the instructors.

Module 1 - GNSS Positioning: This module will cover the basics of satellite-based geolocation, including positioning accuracy, signal types, and data integration. Participants will use open-source tools, such as RTKLIB, to collect, process, and visualize GNSS data, gaining experience with real-world geospatial workflows.

Module 2 - Planetary Mapping: This module will introduce photogrammetric techniques and their application in open-source software (OpenDroneMap). It will also introduce the principles of planetary geological mapping, with a focus on mapping celestial bodies such as the Moon using free and open-source software (FOSS).

Module 3 - Information Extraction with Deep Learning: This module will explore object detection and classification of UAV imagery using machine and deep learning via several QGIS plugins (Deepness, TreeEyed, SegMAp) and prepared scripts in Jupyter notebooks.

Module 4 - Geospatial Data Security: In this module, participants will learn how to protect geospatial data during storage and transfer by integrating blockchain technology with cryptographic techniques for securely encrypting geospatial coordinates. Additionally, they will learn how smart contracts can enforce data-sharing policies and verify user permissions.

By the end of the tutorial, participants will learn how to:

· Process data using open-source positioning and photogrammetry platforms;
· Analyze and interpret data using QGIS and Python;
· Use blockchain-based methods for securing geospatial data.

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Shahpoor Moradi

Co-Instructors:

Mahkame Moghadam

Sohrab Ganjian

Vittorio Cannas

Stefania Amici 

Mozhdeh Shahbazi

Quantum Computing for Earth Observation

Full Day (8 Hours)    

Classical high-performance computing has long supported remote sensing and Earth observation activities. However, as data volumes and modeling complexity continue to grow, classical computational approaches are becoming increasingly strained. This trend motivates the exploration of emerging computing paradigms. One promising direction is quantum computing, which leverages quantum mechanical principles such as superposition, entanglement, and interference to perform certain computations more efficiently than classical systems. With this in mind, this tutorial offers a practical introduction to quantum computing, with a focus on quantum machine learning. Participants will have the opportunity to implement a quantum machine learning algorithm for a real-world application in multispectral satellite image analysis. Basic knowledge of linear algebra and Python programming is required to participate in this tutorial. The tutorial will begin with an accessible introduction to the principles of quantum computing. A representative problem based on satellite imagery will then be defined, and the tutorial will alternate between conceptual discussions and hands-on implementation, including gate-based quantum circuit design. As the session progresses, the interplay between quantum and classical resources will be explored across the full processing pipeline. A simulated implementation of a quantum system will be introduced, and, with the support of PINQ², access to IBM's real quantum platforms will also be provided. By the end of the tutorial, participants will gain a foundational understanding of quantum computing and its practical applications in remote sensing.

Saturday, 4 July, 2026 

8:30am - 5:00pm

Tutorial Lead:

Berk Anbaroğlu
Co-Instructor:
İbrahim Topcu

Web-Based Campus Routing and Event Management System

Full Day (8 Hours)    

Many universities offer web-based campus maps with a wide range of features, from 3D maps to indoor navigation tools. However, the implementation of commonly desired features — such as geolocating entities, routing between them, and displaying campus events — is often outsourced to external providers, which undermines the sustainability and maintainability of these systems. This tutorial introduces an open system called ROAMer — Reproducible OSRM for Mapping and Event Reporting — which leverages a suite of open technologies, including (but not limited to) Open Source Routing Machine (OSRM), PostgreSQL/PostGIS, React, and Node.js. This tutorial will provide participants with hands-on experience in replicating a web-based GIS tailored to their own university campuses on their local machines. All technologies introduced in this tutorial are open source, demonstrating that such a system is both sustainable and maintainable within the university context, without requiring software licensing fees or reliance on proprietary services. By the end of the tutorial, participants will gain an understanding of the key concepts behind web-based GIS design and development using a relational database management approach and will be able to apply these concepts to implement a functional, customizable campus GIS for their own institutions.

Saturday, 4 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Katharina Anders

Co-Instructors:

Bernhard Höfle

Xiaoyu Huang

Ronald Tabernig 

Open-source Scientific Software py4dgeo for Change Analysis in 3D/4D Point Clouds

Half Day (4 Hours)   

This tutorial introduces py4dgeo, an open-source Python library for analyzing geometric change and surface dynamics in 3D and 4D point cloud data. Designed to support scientific workflows, py4dgeo offers a reproducible and scalable tool for quantifying surface change in multitemporal point clouds and 3D time series across a broad range of topographic monitoring applications. Participants will gain a solid understanding of the key concepts and challenges in 3D/4D change analysis. This includes the full pipeline of multi-temporal point cloud alignment, 3D change quantification, and time series-based quantification methods. The tutorial will demonstrate how py4dgeo implements state-of-the-art algorithms to address these challenges. It will emphasize the library’s modular framework, offering accessible and configurable methods that empower users to perform transparent, fully automated scientific analysis. This tutorial is intended for researchers, students, and practitioners working with time-dependent 3D data who require a flexible, scalable, and open-source framework for surface change analysis. Hands-on exercises will guide participants through practical use of the library, including:

· Loading point cloud data;
· Applying 3D change detection algorithms (e.g., M3C2);
· Using a hierarchical approach for 3D change analysis;
· Performing time series-based analysis (e.g., 4D objects-by-change); and
· Visualizing results.

Example workflows will demonstrate how py4dgeo integrates with standard Python-based environments and complements other open-source tools such as CloudCompare. By the end of the tutorial, participants will understand core methods for 3D/4D change analysis, be able to reproduce and adapt workflows for research or applied monitoring tasks, and apply py4dgeo to their own datasets.
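To make per-point change quantification concrete, here is a deliberately simplified stand-in: a brute-force cloud-to-cloud (C2C) nearest-neighbor distance between two epochs. This is not py4dgeo's API; M3C2 as taught in the tutorial is more robust (normal-direction, cylinder-averaged, with uncertainty estimates), but the output has the same shape: one change value per core point.

```python
import numpy as np

def c2c_distances(epoch1, epoch2):
    # Brute-force nearest-neighbor distance from each epoch-1 point to
    # epoch 2; real tools use spatial indexing (k-d trees) instead.
    diffs = epoch1[:, None, :] - epoch2[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

# A 3 x 3 planar patch, then the same patch uniformly raised by 0.5 m.
flat = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
raised = flat + np.array([0.0, 0.0, 0.5])
print(c2c_distances(flat, raised))  # nine values, all 0.5
```

Note that plain C2C is unsigned and sensitive to point density, which is precisely why normal-oriented methods such as M3C2 exist.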

Saturday, 4 July, 2026 

1:00pm - 4:30pm

Tutorial Lead:

Ayman Habib

Co-Instructor:

Mohamed M.R. Mostafa

Photogrammetric Mapping by Drones: Theory and Practice

Half Day (4 Hours)   

This tutorial offers foundational insights into the intricacies of surveying with drones. It will address the design, development, integration, operation, and calibration of drone systems, along with best practices for achieving optimal accuracy with various payloads across a range of real-world applications. These include high-definition mapping for autonomous vehicles, civil engineering, mining, digital forestry, precision agriculture, and general surveying and mapping. The intended audience includes students, educators, technicians, engineers, surveying and mapping professionals, and decision-makers. The tutorial will cover the fundamentals of photogrammetry, multi-sensor fusion, and drone positioning and sensor georeferencing. A key focus will be on the integration of multiple imaging and navigation sensors, such as RGB, NIR, and thermal cameras, as well as GNSS and inertial systems. When properly integrated—either during post-mission processing or in near-real-time—these systems can produce high-precision mapping products, enabling diverse information extraction scenarios and delivering valuable insights across multiple application domains. The tutorial will also highlight the influence of technological challenges on day-to-day drone operations in relation to:

· Airframe selection,
· Mission planning,
· Data acquisition,
· Data processing,
· Calibration,
· Quality control, and
· Accuracy assessment.

Q&A sessions will be encouraged during each section of the tutorial to foster interactive learning and address participants' specific questions.

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Jiapan Wang

Co-Instructors:

Xiaoyu Huang

Katharina Anders

Advanced Topographic Time Series Data Management Using the Topo4d Extension of the Spatiotemporal Asset Catalog (STAC) for Curation, Analysis, and Visualization of 4D Point Clouds

Half Day (4 Hours)   

4D point cloud datasets, which capture changes in 3D over time, are becoming increasingly important across a broad range of applications, including environmental and infrastructure monitoring. As data sharing grows and diverse tools and methods become more widely available, the need for standardized approaches to managing time-dependent metadata becomes more critical too. This tutorial introduces participants to the challenges and solutions involved in handling multi-temporal 3D data, with a focus on automation, interoperability, and integration with existing tools. Participants will learn how to harmonize and automate the creation of time-dependent metadata using the open-source topo4d framework — a recent extension of the well-established Spatiotemporal Asset Catalog (STAC) standard. The tutorial will cover how to generate, read, and integrate metadata from diverse topographic data sources, regardless of acquisition method or temporal resolution. Emphasis will be placed on aligning with best practices to ensure metadata reusability and consistency across projects. Through hands-on sessions, participants will incorporate the topo4d workflow into existing 4D data pipelines and connect it with widely used tools such as PDAL and the py4dgeo Python library for change analysis. A key use case will demonstrate the benefits of this standardized approach for applications in geomorphology, hydrology, and ecology. By the end of the tutorial, attendees will be equipped with practical tools, clear guidelines, and example datasets to streamline their own 4D processing workflows — enhancing reproducibility and collaboration in the Earth science community.

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Ayman Habib

Co-Instructor:

Songlin Fei

Digital Twinning with UAV and Backpack Mobile Mapping Systems

Half Day (4 Hours)   

This tutorial provides a comprehensive overview of using photogrammetric and LiDAR sensors integrated onboard Unmanned Aerial Vehicles (UAVs) and wearable backpack systems for generating digital twins across diverse environments, including urban, natural, and mixed landscapes. The tutorial will begin with an in-depth discussion on sensor integration, emphasizing hardware configurations that combine optical imaging and LiDAR systems with GNSS/INS units. It will address the synchronization of cameras, LiDAR units, GNSS, and IMU sensors to ensure precise and reliable data capture. The discussion will proceed to system calibration, focusing on geometric calibration procedures. Participants will learn methods for refining both intrinsic and extrinsic parameters of cameras and LiDAR sensors — steps that are essential for maintaining data integrity and accuracy. The tutorial will continue with georeferencing techniques including: Direct georeferencing using GNSS/INS; Indirect georeferencing with ground control points (GCPs); Simultaneous Localization and Mapping (SLAM); and Trajectory Enhancement and Mapping (TEAM). It will also discuss strategies for integrating UAV and backpack datasets, which often differ in accuracy and spatial reference frames. The tutorial will also cover 3D modeling workflows, including point cloud generation, surface reconstruction, and feature extraction. Particular attention will be given to the challenges and solutions involved in fusing photogrammetric and LiDAR datasets to create high-resolution, geometrically accurate digital twins. Practical examples will demonstrate how multi-scale datasets can be used in applications such as urban infrastructure assessment, forest inventory, and disaster response planning. By the end of the tutorial, participants will gain hands-on knowledge of the complete workflow—from data acquisition to the creation of actionable digital twin models for both research and operational applications.

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead: 

Bastian van den Bout

Co-Instructor:

Katherine van Roon

FastFlood: Rapidly Using Earth Observation Data for Flood Forecasts

Half Day (4 Hours)  

FastFlood is a high-speed flood simulation model integrated into a free, web-based platform. This innovative simulation tool leverages new algorithms and rapidly processed EO data to allow quick model setup. The model is sufficiently fast to provide out-of-the-box interactive flood simulation results, allowing users to test climate scenarios and mitigation or adaptation measures. Users can either automatically pull data from global (lower-resolution) datasets or upload their own higher-quality data. The simulation app includes features such as mitigation design, coastal and fluvial boundary conditions, rainfall scenarios, infiltration modeling, and more. This tutorial will introduce participants to rapid flood hazard mapping and forecasting using FastFlood. Through practical hands-on sessions, participants will become familiar with the model and will use it to generate flood maps. The tutorial will begin with a brief overview of flood forecasting theory and principles, demonstrate how to perform rapid flood hazard simulations, explore climate scenarios and nature-based solutions, and discuss in-depth analysis options for evaluating such solutions.

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Zilong Zhong

Co-Instructor:

Jose Bermudez

Geospatial Foundation Models for Remote Sensing

Half Day (4 Hours)  

Recent years have witnessed significant advancements in foundation models applied to remote sensing data. Trained on large volumes of unlabeled data, these models have shown a remarkable ability to capture generalized spatial knowledge that can be effectively transferred to a wide range of downstream tasks. This tutorial provides a comprehensive overview of the advancements and applications of geospatial foundation models specifically tailored to satellite observations. Participants will engage in hands-on experiments using these models with diverse datasets, including multispectral and hyperspectral imagery, satellite-borne LiDAR data, and synthetic aperture radar (SAR) data. Practical applications will cover key Earth observation tasks such as land cover and land use classification, wildfire-induced burned area detection, and biomass estimation across various ecological zones. Participants will learn how to:

· Download Earth observation data using Google Earth Engine;
· Access pre-trained foundation models from Hugging Face;
· Set up a deep learning environment and deploy geospatial foundation models.

Python and the PyTorch Lightning framework will be used for training, validating, and testing foundation models, while Jupyter Lab will serve as the platform for demonstrating remote sensing applications. The session will conclude with a discussion of current challenges and future directions in the development and deployment of geospatial foundation models.

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

David Youssefi

Co-Instructors:

Valentine Bellet

Dimitri Lallement

Getting Started with CNES Open-Source 3D Tools in Python

Half Day (4 Hours)   

This hands-on tutorial presents a complete open-source workflow for generating and analyzing 3D geospatial data from stereo satellite imagery. It will begin with essential theoretical foundations, including satellite orbits and the principles of photogrammetry, before introducing three key tools: CARS, Bulldozer, and xDEM. This tutorial is designed for a broad audience, including beginners, students, geospatial analysts, researchers, and professionals interested in Earth observation, photogrammetry, and 3D mapping. It will offer an accessible introduction to advanced remote sensing techniques using open-source tools and real-world data. No prior experience is required to attend this tutorial; basic knowledge of Python is helpful but not necessary. Using interactive Python notebooks, participants will:

· Generate a Digital Surface Model (DSM) with CARS, creating a 3D representation of the Earth's surface as observed by satellites;
· Extract a Digital Terrain Model (DTM) with Bulldozer, isolating the bare ground from the DSM;
· Optionally, derive a Digital Height Model (DHM) to emphasize above-ground features such as buildings and vegetation — enabling applications like building height estimation for digital twin environments;
· Analyze 3D products with xDEM, performing post-processing tasks such as coregistration and detailed change detection.

By the end of the tutorial, participants will learn how to produce and interpret DSM, DTM, and DHM products, apply photogrammetric concepts, and perform advanced spatial analyses using satellite-derived 3D data.
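The DSM/DTM/DHM relation used in this workflow reduces to a per-pixel difference. A tiny synthetic sketch (real rasters would be read from the GeoTIFFs produced by CARS and Bulldozer):

```python
import numpy as np

# Tiny synthetic rasters; real data would come from GeoTIFFs.
dsm = np.array([[102.0, 105.0],
                [101.0, 110.0]])  # surface: terrain plus buildings/vegetation
dtm = np.array([[100.0, 100.0],
                [101.0, 100.0]])  # bare ground
dhm = dsm - dtm                   # above-ground height (also called nDSM)
print(dhm.ravel().tolist())  # [2.0, 5.0, 0.0, 10.0]
```

A zero in the DHM marks bare ground; positive values are above-ground features such as the 10 m structure in the last pixel.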

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Robert Gilmore Pontius Jr

Metrics That Make a Difference: How to Analyze Change and Error

Half Day (4 Hours)  

This half-day tutorial introduces fundamental concepts for quantifying temporal change and diagnostic error across a variety of applications — particularly within spatial information sciences and data quality assessment for remote sensing of land change. It explores how to avoid common blunders and emphasizes insightful methods such as the Total Operating Characteristic (TOC), Difference Components, and Trajectory Analysis. This tutorial will be of interest to a broad audience ranging from students to senior scientists. The tutorial will emphasize conceptual understanding rather than software usage, although relevant tools are freely available. No computer will be required during the session. The primary goal will be to help participants gain confidence in communicating clearly about data quality and selecting metrics appropriate to specific research questions. Participants with specific questions are encouraged to contact the instructor, Professor Pontius, in advance so that he can tailor the tutorial accordingly. This is the latest version of a tutorial he has delivered dozens of times in 17 countries, drawing in part from his solo-authored book, Metrics That Make a Difference.
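As a taste of the Difference Components mentioned above: for a binary comparison, overall difference splits exactly into a quantity component (mismatch in how much of the category each map contains) and an allocation component (mismatch in where it sits). A sketch with hypothetical pixel values:

```python
def difference_components(reference, comparison):
    # Binary maps as 0/1 sequences of equal length (hypothetical pixels).
    n = len(reference)
    overall = sum(r != c for r, c in zip(reference, comparison)) / n
    quantity = abs(sum(comparison) - sum(reference)) / n  # "how much" mismatch
    allocation = overall - quantity                       # "where" mismatch
    return overall, quantity, allocation

reference = [1, 1, 1, 0, 0, 0, 0, 0]
comparison = [1, 0, 0, 1, 1, 0, 0, 0]  # same amount of presence, shifted in space
print(difference_components(reference, comparison))  # (0.5, 0.0, 0.5)
```

Here both maps contain the category in 3 of 8 pixels, so the quantity component is zero and all disagreement is allocation, the kind of distinction a single accuracy figure hides.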

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Marc Rußwurm

Co-Instructors:

Esther Rolf 

Konstantin Klemmer

Evan Shelhamer

Towards Geospatial Embeddings: Investigating Accurate and Accessible Deep Geospatial Feature Representations

Half Day (4 Hours)   

This tutorial introduces participants to Earth Embeddings — a new class of deep geospatial representations that unify heterogeneous remote sensing and environmental data into a shared, learnable embedding space indexed by spatiotemporal coordinates. The tutorial is structured into two thematic blocks, each combining concise lectures by leading researchers with practical hands-on sessions.

Block 1: Geospatial Embedding Fields
This block will cover the foundations of Geospatial Neural Encoding Fields (GeoNEFs), which generate location-specific feature representations from multimodal geospatial data. Participants will learn how embedding fields are constructed using lightweight and foundation vision encoders, and how these embeddings support downstream geospatial tasks. A hands-on session will guide participants through generating embeddings from satellite imagery and environmental time series using pre-trained models.

Block 2: Location Encoding and Implicit Neural Representations
This block will introduce implicit neural representations and coordinate-based location encoding models that enable scalable geospatial embeddings via neural networks. Participants will explore pre-trained models such as SatCLIP and GeoCLIP, and will learn how to integrate these representations into various downstream use cases. A hands-on tutorial will demonstrate how to build and query Earth Embedding models using open-source libraries and geospatial datasets.
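A minimal flavor of coordinate-based location encoding: mapping a lon/lat pair to a vector of sines and cosines at multiple frequencies, the fixed building block that learned encoders extend. SatCLIP and GeoCLIP use more elaborate learned representations; the normalization and frequencies below are illustrative only.

```python
import math

def encode_location(lon, lat, num_freqs=4):
    # Sines/cosines of normalized coordinates at doubling frequencies,
    # in the spirit of positional encodings used by implicit neural
    # representations. All choices here are illustrative.
    feats = []
    for k in range(num_freqs):
        freq = 2.0 ** k
        for coord in (lon / 180.0, lat / 90.0):  # normalize to [-1, 1]
            feats.append(math.sin(math.pi * freq * coord))
            feats.append(math.cos(math.pi * freq * coord))
    return feats

vec = encode_location(8.5, 47.4)  # an arbitrary lon/lat pair
print(len(vec))  # 16
```

Downstream models then consume this fixed-length vector (optionally concatenated with imagery features) instead of raw coordinates, which makes spatial patterns at several scales learnable.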

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Prasad Deshpande

Co-Instructors:

Gaurav Jha

Vinayak Bhanage

Uncertainty-aware Bayesian Neural Networks

Half Day (4 Hours)  

Conventional AI and machine learning models often function as black boxes, offering predictions without indicating their confidence or reliability. In contrast, uncertainty-aware models provide insights into how much trust can be placed in each prediction. This is especially critical in remote sensing, where decisions frequently depend on model outputs under conditions of data sparsity and noise. This tutorial will equip participants with both theoretical foundations and practical skills to implement Bayesian Neural Networks (BNNs) that disentangle aleatoric (data) and epistemic (model) uncertainty. The session will cover the fundamentals of uncertainty and Bayesian approaches to predictive distributions, and will include hands-on coding exercises using Python and TensorFlow on remote sensing datasets for tasks such as land cover classification. This tutorial is intended for researchers, graduate students, and practitioners with basic machine learning and Python experience who seek to enhance the reliability and transparency of their remote sensing models. By the end of the tutorial, participants will be able to implement, visualize, and interpret uncertainty within BNN workflows to better identify sources of error and refine data collection strategies. They will also learn how to build robust, interpretable, and uncertainty-aware models for Earth observation and environmental modeling.
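The aleatoric/epistemic split the tutorial describes is commonly computed from Monte Carlo forward passes of a BNN (or MC dropout): predictive entropy is the total uncertainty, the expected per-pass entropy is the aleatoric part, and their difference (the mutual information) is the epistemic part. A minimal NumPy sketch of that decomposition, with an illustrative function name not drawn from the tutorial materials:

```python
import numpy as np

def decompose_uncertainty(mc_probs, eps=1e-12):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    mc_probs: array of shape (T, N, C) holding softmax outputs from T
              stochastic forward passes over N examples with C classes
              (e.g. MC dropout or samples from a BNN weight posterior).
    Returns (total, aleatoric, epistemic) per example, where
    total = aleatoric + epistemic (mutual-information decomposition).
    """
    mean_p = mc_probs.mean(axis=0)                                   # (N, C)
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)            # predictive entropy
    aleatoric = -(mc_probs * np.log(mc_probs + eps)).sum(axis=-1).mean(axis=0)
    epistemic = total - aleatoric                                    # mutual information
    return total, aleatoric, epistemic

# Two passes that disagree completely: each pass is confident (low aleatoric),
# but the model is uncertain about which answer is right (high epistemic).
mc = np.array([[[1.0, 0.0]],
               [[0.0, 1.0]]])
total, ale, epi = decompose_uncertainty(mc)
```

High epistemic uncertainty flags pixels where collecting more training data would help, whereas high aleatoric uncertainty points to noise inherent in the observations, which is exactly the distinction used to refine data collection strategies.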

Sunday, 5 July, 2026 

8:30am - 12:00pm

Tutorial Lead:

Ruisheng Wang

Co-Instructors:

Fabio Remondino 

Liangliang Nan

Gunho Sohn

Florent Lafarge

Urban Scene Modeling

Half Day (4 Hours)    

This tutorial offers an in-depth exploration of cutting-edge techniques and applications in 3D urban modeling and digital twin technologies. The objectives of the tutorial include:

· Enhancing participants' knowledge of urban modeling methodologies,
· Promoting the integration of geospatial data and analytics,
· Stimulating innovation in digital twin environments.

By showcasing recent advancements, the tutorial aims to support the development of state-of-the-art technologies to address complex urban challenges. Its significance lies in the growing demand for accurate, interactive, and scalable urban models that inform planning, management, and decision-making in smart cities. This tutorial will serve as a platform for interdisciplinary collaboration, encouraging the exchange of ideas among professionals from academia, government, and industry. By the end of the tutorial, participants will gain a solid understanding of the key techniques in 3D point cloud processing using AI, as well as data acquisition and processing for urban modeling. Through use-case demonstrations, expert presentations, and a panel discussion, attendees will gain valuable insights into the current landscape and future directions of urban scene modeling and digital twin development.


Congress Secretariat

International Conference Services Ltd.
555 Burrard Street, Vancouver, BC,
Canada, V7X 1M8


HOSTED BY: CRSS-SCT

THANK YOU TO OUR SPONSORS AND PARTNERS: MDPI, RIEGL, Pix4D, LATools

Copyright © 2026 International Society for Photogrammetry and Remote Sensing (ISPRS). All rights reserved. Privacy Statement
