Important Dates

LSC'23 Schedule (12 June 2023)

LSC'23 Proceedings

14:00: LSC'23 Introduction

14:15: System Overview

  1. Maria Tysse Hordvik, Julie Sophie Teilstad Østby, Manoj Kesavulu, Thao-Nhu Nguyen, Tu-Khiem Le, Duc-Tien Dang-Nguyen. LifeLens: Transforming Lifelog Search with Innovative UX/UI Design
  2. Ahmed Alateeq, Mark Roantree, Cathal Gurrin. Voxento 4.0: A More Flexible Visualisation and Control for Lifelogs
  3. Thao-Nhu Nguyen, Tu-Khiem Le, Van-Tu Ninh, Cathal Gurrin, Minh-Triet Tran, Thanh Binh Nguyen, Graham Healy, Annalina Caputo, Sinead Smyth. E-LifeSeeker: An Interactive Lifelog Search Engine for LSC’23
  4. Ricardo Ribeiro, Luísa Amaral, Wei Ye, Alina Trifan, António J. R. Neves, Pedro Iglésias. MEMORIA: A Memory Enhancement and MOment RetrIeval Application for LSC 2023
  5. Ly Duyen Tran, Binh Nguyen, Liting Zhou, Cathal Gurrin. MyEachtra: Event-based Interactive Lifelog Retrieval System for LSC’23
  6. Quang-Linh Tran, Ly-Duyen Tran, Binh Nguyen, Cathal Gurrin. MemoriEase: An Interactive Lifelog Retrieval System for LSC’23
  7. Luca Rossetto, Oana Inel, Svenja Lange, Florian Ruosch, Ruijie Wang, Abraham Bernstein. Multi-Mode Clustering for Graph-Based Lifelog Retrieval
  8. Naushad Alam, Yvette Graham, Cathal Gurrin. Memento 3.0: An Enhanced Lifelog Search Engine for LSC’23
  9. Nhat Hoang-Xuan, Thang-Long Nguyen-Ho, Cathal Gurrin, Minh-Triet Tran. Lifelog Discovery Assistant: Suggesting Prompts and Indexing Event Sequences for FIRST at LSC 2023
  10. Klaus Schoeffmann. lifeXplore at the Lifelog Search Challenge 2023
  11. Tien-Thanh Nguyen-Dang, Xuan-Dang Thai, Gia-Huy Vuong, Van-Son Ho, Minh-Triet Tran, Van-Tu Ninh, Minh-Khoi Pham, Tu-Khiem Le, Graham Healy. LifeInsight: An Interactive Lifelog Retrieval System with Comprehensive Spatial Insights and Query Assistance
  12. Florian Spiess, Ralph Gasser, Heiko Schuldt, Luca Rossetto. The Best of Both Worlds: Lifelog Retrieval with a Desktop-Virtual Reality Hybrid System
  13. LSC'22 Baseline System (e-MySceal). Participation by Tu-Khiem Le.

15:10: Final System Test for all teams

15:40: Coffee Break

LSC'23 Expert Session

17:30: LSC'23 Novice Session

19:00: LSC'23 Conclusion

Configuration of the Challenge

For the LSC'23 workshop, participating systems will connect to the LSC host server (DRES), which has been developed by Luca Rossetto and his team at UZH. There will be a pre-LSC'23 test session in mid-June 2023 for all participating teams to ensure that their systems can interact with the server. The README file on the server's GitHub page gives details of the submission settings for participating systems. Specifically, the submission format for the LSC'23 competition is http(s)://{server}/submit?item={item}, where {item} is the identifier for the retrieved media item.
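As a minimal sketch, a client could construct a submission URL in the format above like this. The server hostname and item identifier used here are placeholders, not real LSC'23 values, and any authentication or session handling required by DRES is omitted:

```python
# Sketch only: builds a submission URL in the http(s)://{server}/submit?item={item}
# format described above. Hostname and item ID below are illustrative placeholders.
from urllib.parse import urlencode

def build_submission_url(server: str, item: str, scheme: str = "https") -> str:
    """Construct an LSC-style submission URL for a retrieved media item."""
    return f"{scheme}://{server}/submit?{urlencode({'item': item})}"

url = build_submission_url("example-lsc-server.org", "20230612_120000_000")
print(url)  # https://example-lsc-server.org/submit?item=20230612_120000_000
```

In practice the system would issue an HTTP GET to this URL when the user chooses to submit an item; consult the DRES README for the exact session and authentication requirements.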

During the search challenge, participating teams submit an item (image) to the host server for evaluation whenever a participant finds a potentially relevant item from the collection. The host server maintains a countdown clock and evaluates all submissions in real time against a provided ground truth. For each topic, a score is awarded based on the time taken to find the relevant content and on the number of incorrect items previously submitted by that team for that topic. A maximum of 3 or 5 minutes is allowed per topic (see below). Throughout the competition, an overall score is maintained for each team, which is the sum of the scores for all topics processed up to that point. Teams will be ranked by overall score (expert and novice sessions combined) and by novice score alone. For more detail on the scoring function and the LSC service, please see the LSC'18 overview paper.
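To make the shape of the scoring concrete, the sketch below shows a score that decays with elapsed time and drops with each wrong submission. This is an illustrative approximation only, not the official DRES scoring function, and every constant in it is an assumption; the LSC'18 overview paper gives the actual formula:

```python
# Illustrative sketch of time- and penalty-based scoring, NOT the official
# DRES scoring function. All constants are assumptions for illustration only.
def topic_score(seconds_taken: float, wrong_submissions: int,
                time_limit: float = 300.0, max_score: float = 100.0,
                wrong_penalty: float = 10.0) -> float:
    """Score decays linearly with elapsed time; each wrong submission costs points."""
    if seconds_taken > time_limit:
        return 0.0  # no correct submission within the time limit
    time_factor = 1.0 - (seconds_taken / time_limit) * 0.5  # keep >= 50% of max
    return max(0.0, max_score * time_factor - wrong_penalty * wrong_submissions)

print(topic_score(60, 0))   # 90.0 -- found quickly, no mistakes
print(topic_score(240, 3))  # 30.0 -- found late, after three wrong submissions
```

A team's overall score would then be the sum of `topic_score` over the topics processed so far.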

There will be 24 official English-language topics, which will be textual in nature and, as in previous LSC competitions, may be temporally enhanced queries. Either 3 or 5 minutes will be allocated per topic: KIS and Q&A topics will be given 5 minutes, while Ad-hoc topics will be allocated 3 minutes. The topics will be revealed only at the challenge. The LSC'19 topics are available for system testing. During the workshop, the topics are first seen on the central screen.

LSC’23 will employ three types of topic:

  • Known-Item search to find any one relevant image from the collection that addresses the topic. These topics have pre-generated relevance judgements, though judges can dynamically assess submissions that may be correct but are not in the relevance judgements. (5 minutes)
  • Ad-hoc search to find as many relevant images as possible from the lifelog that answer the topic. Answers are judged in real time. (3 minutes)
  • Q&A topics, which seek a correct answer to an information need. This type of topic is new; our plan is to support textual submissions, with answers judged in real time and some degree of flexibility applied. (5 minutes)

Challenge Rules

Participating teams can develop any type of interactive retrieval system that they wish, without restriction. Query input to each system is likewise unrestricted (save for the rules below). This means that some systems will accept text queries, others will facilitate faceted queries, others may support spoken input, and others relevance feedback. There is no limit on how interaction can be facilitated, provided it adheres to the rules below.

Each retrieval system is expected to have pre-indexed the LSC'23 dataset prior to arriving at the workshop. All aspects of the dataset can be indexed, and the integration of additional semantic and concept detectors is allowed, provided details of these are supplied. The revised rules of participation are intended to facilitate a fair competition and are listed as follows:

  • Every team is allowed to run only one instance of their system (i.e., one interface, or one set of VR glasses), so that the competition is fair.
  • However, a team may employ as many users as it wants at this interface (the number of users interacting with the system instance should also be visible on the external camera).
  • No communication is allowed between teams.
  • Taking photos of any image (including other teams' submissions) from the shared screens for use as query input is prohibited.
  • Each incorrect submission will be penalised according to the scoring algorithm, so it is recommended not to submit any result to the central server unless the user is confident that the result is likely to be correct.
  • The relevance judgements provided with the topic are final and cannot be questioned or queried. Every effort will be made to ensure that relevance judgements are not ambiguous.
  • It is prohibited for any system developer (who acts as the expert user for their system) to learn or familiarise themselves sufficiently with the dataset, so as to gain an unfair advantage over other competitors.
  • Interaction mechanisms employed should not interfere with, or disturb, other participants. For any participant bringing novel VR-type systems, a dedicated floor space will be prepared that is sufficiently distant from the other participants.

We will ask each team to capture text logs of all user interactions with their systems. This is to facilitate follow-on analysis and publication. We will define the log structure in May 2023.
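Since the log structure is still to be defined, the following is only a hypothetical sketch of what a per-interaction log entry might look like; every field name and value here is an assumption, not the agreed format:

```python
# Hypothetical interaction-log entry. Field names and values are illustrative
# assumptions only; the official LSC'23 log structure is still to be defined.
import json
from datetime import datetime, timezone

entry = {
    "team": "ExampleTeam",  # placeholder team name
    "timestamp": datetime(2023, 6, 12, 14, 30, tzinfo=timezone.utc).isoformat(),
    "event": "query",       # e.g. query, browse, submit
    "detail": {"text": "cooking in the kitchen", "results_shown": 20},
}
line = json.dumps(entry)  # one JSON object per line (JSON Lines style)
print(line)
```

A plain-text, one-object-per-line format like this keeps logs easy to append during the live challenge and easy to parse for follow-on analysis.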