Important Dates

The LSC workshop took place on Friday 30th October 2020. The LSC'20 workshop was open to all attendees of ICMR'20, and some 150 delegates and viewers attended the event. Access details were available from both the OnAir ICMR'20 platform and the Whova app for the LSC'20 workshop. Participants joined via the Zoom webinar, while guests could join via the Zoom link or via a public Twitch stream. The Zoom webinar began at 8am IST on 30th October and the Twitch stream went live shortly afterwards, attracting over 90 unique viewers.


  • 8:00 Welcome to LSC'20. Cathal Gurrin, Klaus Schoeffmann, Björn Þór Jónsson.

  • 8:15 Interactive Lifelog Retrieval with vitrivr. Loris Sauter, Silvan Heller, Mahnaz Amiri Parian, Ralph Gasser, Heiko Schuldt
  • 8:20 VRLE: Lifelog Interaction Prototype in Virtual Reality. Aaron Duane, Björn Þór Jónsson, Cathal Gurrin
  • 8:25 LifeGraph: a Knowledge Graph for Lifelogs. Luca Rossetto, Matthias Baumgartner, Narges Ashena, Florian Ruosch, Romana Pernischova, Abraham Bernstein
  • 8:30 Exquisitor at the Lifelog Search Challenge 2020. Omar Shahbaz Khan, Mathias Dybkjær Larsen, Liam Alex Sonto Poulsen, Björn Þór Jónsson, Jan Zahálka, Stevan Rudinac, Dennis Koelma, Marcel Worring
  • 8:35 Myscéal - An Experimental Interactive Lifelog Retrieval System for LSC'20. Ly Duyen Tran, Duy Nguyen, Binh Nguyen, Hyowon Lee, Cathal Gurrin
  • 8:40 A Multi-level Interactive Lifelog Search Engine with User Feedback. Jiayu Li, Min Zhang, Weizhi Ma, Yiqun Liu, Shaoping Ma
  • 8:45 lifeXplore at the Lifelog Search Challenge 2020. Andreas Leibetseder, Klaus Schoeffmann
  • 8:50 BIDAL-HCMUS@LSC2020: An Interactive Multimodal Lifelog Retrieval with Query-to-Sample Attention-based Search Engine. Anh-Vu Mai-Nguyen, Van-Luon Tran, Trong-Dat Phan, Anh-Khoa Vo, Minh-Son Dao, Koji Zettsu
  • 8:55 Multimodal Retrieval through Relations between Subjects and Objects in Lifelog Images. Tai-Te Chu, Chia-Chun Chang, An-Zi Yen, Hen-Hsen Huang, Hsin-Hsi Chen
  • 9:00 LifeSeeker 2.0 - Interactive Lifelog Search Engine at LSC 2020. Tu-Khiem Le, Van-Tu Ninh, Minh-Triet Tran, Thanh-An Nguyen, Hai-Dang Nguyen, Liting Zhou, Graham Healy, Cathal Gurrin
  • 9:05 VIRET Tool with Advanced Visual Browsing and Feedback. Gregor Kovalcik, Vit Skrhak, Tomas Soucek, Jakub Lokoc
  • 9:10 FIRST - Flexible Interactive Retrieval SysTem for Visual Lifelog Exploration at LSC 2020. Minh-Triet Tran, Thanh-An Nguyen, Quoc-Cuong Tran, Mai-Khiem Tran, Khanh Nguyen, Van-Tu Ninh, Tu-Khiem Le, Hai-Dang Nguyen, Trong-Le Do, Viet-Khoa Vo-Ho, Cathal Gurrin
  • 9:15 SOMHunter for lifelog search. Frantisek Mejzlik, Patrik Veselý, Miroslav Kratochvíl, Tomáš Souček, Jakub Lokoč
  • 9:20 Voxento - A Prototype Voice-controlled Interactive Search Engine for Lifelogs. Ahmed Alateeq, Mark Roantree, Cathal Gurrin


  • 9:30 Discussion


  • 10:00 Social Session. Cathal Gurrin, Klaus Schoeffmann, Björn Þór Jónsson


  • 10:30 LSC’20 Competition. All participating teams. This session will include a large number of expert runs to evaluate the systems. Please note that due to the virtual nature of LSC'20, there will be no novice runs this year.


  • 13:00 LSC’20 Review, Closing & Social Discussion. Cathal Gurrin, Klaus Schoeffmann, Björn Þór Jónsson




  • Configuration of the Challenge


    For the virtual LSC'20 workshop, each team will configure their own experimental setup. The participating systems will connect to the LSC host server, which has been developed by Luca Rossetto and his team at UZH. There will be a pre-LSC'20 test session in early October for all participating teams to ensure that their systems can interact with the server. The README file on the server's GitHub page gives details of the submission settings for the systems. Specifically, the submission format for the LSC'20 competition is http(s)://{server}/submit?item={item}, where {item} is the identifier of the retrieved media item. Please engage with the upcoming LSC'20 test session on Friday, October 23, 2020 at 9am Dublin time (10:00 CET, 16:00 Beijing time).
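    Based on the submission format above, a minimal client-side submission helper might look like the following sketch. The server address and item identifier are placeholders, and any further endpoint details (authentication, response format) are documented in the server's README, not assumed here:

    ```python
    # Minimal sketch of an LSC submission client, based on the
    # {server}/submit?item={item} format described above.
    # SERVER and the item identifier below are placeholders, not real values.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    SERVER = "https://example-lsc-host.org"  # placeholder host

    def build_submission_url(server: str, item: str) -> str:
        """Construct the submission URL for a retrieved media item."""
        return f"{server}/submit?{urlencode({'item': item})}"

    def submit(server: str, item: str) -> int:
        """Send the submission and return the HTTP status code."""
        with urlopen(build_submission_url(server, item)) as resp:
            return resp.status

    # Example (URL construction only; no network call is made here):
    url = build_submission_url(SERVER, "20160815_123456_000")
    ```

    During the live challenge each such submission is judged in real time by the host server, so a retrieval system would typically wire this call to a "submit" button in its interface.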

    During the search challenge, participating teams submit an item (image) to the host server for evaluation whenever a participant finds a potentially relevant item from the collection. The host server maintains a countdown clock and evaluates all submissions in real time against a provided groundtruth. For each topic, a score is awarded based on the time taken to find the relevant content and the number of incorrect items previously submitted by that team for that topic. A maximum of 3 or 5 minutes is provided per topic (see below). Throughout the competition, an overall score is maintained for each team, which is the sum of the scores of the topics processed up to that point. Since there are no novice runs this year, teams will be ranked by their overall expert score. For more detail on the scoring function and the LSC service, please see the
    LSC'18 overview paper.
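    The official scoring function is defined in the LSC'18 overview paper. Purely as an illustrative sketch of the behaviour described above (a score that decays with elapsed time and is penalised per incorrect submission), one assumed form, with made-up constants, could be:

    ```python
    # Illustrative sketch of a time-decaying topic score with a penalty for
    # incorrect submissions. The constants (100-point maximum, 50% decay over
    # the time limit, 10-point penalty) are assumptions for illustration only,
    # NOT the official LSC scoring function from the LSC'18 overview paper.

    def topic_score(seconds_elapsed: float, time_limit: float,
                    wrong_submissions: int) -> float:
        """Score for a correct submission: decays linearly with elapsed time,
        minus a fixed penalty per earlier incorrect submission; never negative."""
        time_component = 100.0 * (1.0 - 0.5 * seconds_elapsed / time_limit)
        penalty = 10.0 * wrong_submissions
        return max(0.0, time_component - penalty)

    def overall_score(topic_scores: list[float]) -> float:
        """Overall team score: the sum of scores over processed topics."""
        return sum(topic_scores)

    # A fast, clean answer scores higher than a slow one with two mistakes:
    fast = topic_score(30, 300, 0)    # 95.0
    slow = topic_score(240, 300, 2)   # 40.0
    ```

    Whatever its exact form, the key property is the same as in the real function: submitting early and avoiding speculative guesses both improve a team's score.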

    There will be 24 official topics, prepared in English, which will be textual in nature and, as in previous LSC competitions, will be temporally enhanced queries. This means that every query will consist of a title and 6 increasingly detailed descriptions, which will be revealed every 30 seconds. The users of the systems will be considered expert users, who know how to use the system. A total of either 3 or 5 minutes will be allocated per topic. The topics will be revealed only at the challenge. The
    LSC'19 topics are available for system testing. Please note that the LSC'20 topics will not be released in XML format until after the workshop. During the workshop, topics will first be seen on the central screen.
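    The incremental reveal described above can be sketched as follows. This assumes the title is shown at the start and the first detailed description appears after the first 30-second interval; the exact reveal timing on the server may differ:

    ```python
    # Sketch of the incremental topic reveal described above: a title is shown
    # at t=0, and one of the 6 increasingly detailed descriptions is revealed
    # every 30 seconds thereafter (assumed cadence, per the workshop text).
    REVEAL_INTERVAL = 30   # seconds between reveals
    MAX_HINTS = 6          # number of increasingly detailed descriptions

    def hints_visible(seconds_elapsed: float) -> int:
        """Number of detailed descriptions revealed after the given time."""
        return min(MAX_HINTS, int(seconds_elapsed // REVEAL_INTERVAL))

    # At 0s only the title is shown; under this assumed cadence, all 6
    # descriptions are visible after 3 minutes.
    start = hints_visible(0)
    mid = hints_visible(95)
    end = hints_visible(180)
    ```

    Under this cadence, even a 3-minute topic exposes the full description before the clock runs out, which is why waiting for more hints is a viable strategy on hard topics, at the cost of a lower time-based score.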

    We will use Zoom for LSC'20; the meeting details will be provided to you soon. Registered attendees of ICMR'20 will have access via the conference portal. Non-registered attendees can access via the Whova LSC'20 app.

    The idea is that every team shares a video stream of an external camera via the Zoom session, so that everybody else can see what each team is doing. Please note that we do not want you to share your screen for the actual competition, because such detailed information could be used by other teams to solve topics. We suggest that you set up a webcam that can see both your participants and your screen, as if someone were looking over your shoulder.

    However, screen sharing will be used for the short system presentation by each team, which is scheduled for the beginning of LSC'20. Please try to present your system in 5 minutes as per the schedule above. For non-European participants, please note that daylight saving time changes in Europe on Sunday 25th October, so the times in your local region may shift by one hour.

    We will also live stream an editor-curated version of the Zoom call on Twitch, so that all conference attendees can follow the challenge via that stream (alternatively, they can also join the Zoom call).

    Tasks/topics will be issued directly via the LSC server. There you should open the “competition viewer” interface, which is available in the “competition run”. We suggest using this on a second screen, so that you can always see the topic information (including the incrementally revealed details) as well as the status of your team and of the other teams.




    Challenge Rules


    Participating teams can develop any type of interactive retrieval system they wish, without restriction. Query input to each system is not restricted (save for the rules below); this means that some systems will accept text queries, others will facilitate faceted queries, others may support spoken input, and others relevance feedback. There is no limit on how interaction can be facilitated, as long as it adheres to the rules below.

    Each retrieval system is expected to have pre-indexed the
    LSC'20 dataset prior to the workshop. All aspects of the dataset can be indexed, and the integration of additional semantic and concept detectors is allowed, as long as details of these are provided. The revised rules of participation are intended to facilitate a fair competition and are as follows:
    - Every team is allowed to run only one instance of their system (i.e., one interface, or one VR headset), so that we have a fair competition.
    - However, you may employ as many users as you wish for this instance (the external camera should also make visible how many users are interacting with the system).
    - No communication is allowed between teams.
    - Taking photos of any image (other teams' submissions) from the shared screens for use as query input is prohibited.
    - Each incorrect submission will be penalised according to the scoring algorithm, so it is recommended not to submit any result to the central server unless the user is confident that the result is likely to be correct.
    - The relevance judgements provided with the topic are final and cannot be questioned or queried. Every effort will be made to ensure that relevance judgements are not ambiguous.
    - It is prohibited for any system developer (who acts as the expert user for their system) to familiarise themselves with the dataset to an extent that would give them an unfair advantage over other competitors.
    - Interaction mechanisms employed should not interfere with, or disturb, other participants. For any participant bringing a novel VR-type system, a dedicated flow space will be prepared, sufficiently distant from the other participants.