Event News
Talk on "Cross-modal Retrieval with Uncertainty Modeling" by Dr. Wei Ji, National University of Singapore
We are pleased to announce an upcoming seminar by Dr. Wei Ji of the National University of Singapore, titled "Cross-modal Retrieval with Uncertainty Modeling". Everyone interested is cordially invited to attend!
Title:
Cross-modal Retrieval with Uncertainty Modeling
Abstract:
Cross-modal retrieval is an essential research area aimed at enhancing the retrieval of relevant information across various modalities, such as images, videos, audio, and text. This talk delves into innovative methods for improving retrieval processes through uncertainty modeling. In our research, we present a unified approach that integrates both coarse- and fine-grained retrieval using multi-grained uncertainty modeling and regularization. This method, applied to composed image retrieval with text feedback, improves recall rates by preventing the premature exclusion of potential candidates, demonstrating significant performance gains on datasets like FashionIQ, Fashion200k, and Shoes. Additionally, we explore the efficiency of annotation in video moment retrieval. Our proposed hierarchical uncertainty-based active learning model strategically selects frames with the highest uncertainty for binary annotations, significantly reducing annotation workload while maintaining competitive performance. This strategy, validated on public datasets, operates effectively at both frame and sequence levels, streamlining the human-in-the-loop annotation process. These advancements underscore the importance of incorporating uncertainty estimation in cross-modal retrieval, paving the way for more efficient and accurate systems in retrieving relevant content across different modalities.
Speaker Bio:
Wei Ji is currently a Senior Research Fellow in the School of Computing at the National University of Singapore. He received his Ph.D. in computer science from Zhejiang University in 2020. He has published several papers in top conferences such as CVPR, ECCV, SIGIR, and AAAI, and in journals including TPAMI, TIP, and TCYB. His current research interests include multi-modal learning, vision and language, and cross-modal retrieval. His work was selected as a CVPR 2022 Best Paper Finalist.
Time/Date:
11:00, July 22nd (Monday), 2024
Place:
Room 1512, NII
Contact:
If you would like to join, please contact us by email.
Email: satoh[at]nii.ac.jp