Information fusion has become a key enabler for perception, decision-making, and control across a wide range of domains. By integrating data from multiple sensors, modalities, or sources, it produces more robust and accurate representations of the world. In recent years, the field has expanded significantly in both scope and impact, largely driven by the adoption of deep learning techniques. Advances in multimodal and multi-source fusion have led to notable improvements in diverse applications, including robotics, autonomous driving, medical imaging, remote sensing, surveillance, and infrastructure inspection. Deep learning models, such as CNNs, GANs, autoencoders, transformers, and diffusion models, have further accelerated progress in both fusion methodologies and their downstream applications.

This workshop aims to bring together researchers from across the information fusion community to present the latest developments in algorithms, datasets, evaluation strategies, and application-driven solutions. It also seeks to foster cross-disciplinary collaboration by welcoming participants from fields such as computer vision, natural language processing, robotics, healthcare, and remote sensing. By reviewing current trends and exploring future directions, the workshop intends to drive innovation and strengthen community ties in the evolving landscape of information fusion.
The workshop will cover (but is not limited to) the following topics:
TBD
TBD