The format of LSC 2019 is as follows:
08:00 - Technical Configuration - Testing of systems for all teams.
11:00 - Welcome & Introduction to LSC2019
11:15 - Oral Session (5 mins each)
- Smart Lifelog Retrieval System with Habit-based Concepts and Moment Visualization. Nguyen-Khang Le, Dieu-Hien Nguyen, Trung-Hieu Hoang, Thanh-An Nguyen, Thanh-Dat Truong, Duy-Tung Dinh, Quoc-An Luong, Viet-Khoa Vo-Ho, Vinh-Tiep Nguyen and Minh-Triet Tran
- Exquisitor at the Lifelog Search Challenge 2019. Omar Shahbaz Khan, Björn Þór Jónsson, Jan Zahálka, Stevan Rudinac and Marcel Worring
- lifeXplore at the Lifelog Search Challenge 2019. Andreas Leibetseder, Bernd Muenzer, Manfred Jürgen Primus, Sabrina Kletz, Klaus Schöffmann, Fabian Berns and Christian Beecks
- A Two-Level Lifelog Search Engine at the LSC 2019. Isadora Nguyen Van Khan, Pranita Shrestha, Min Zhang, Yiqun Liu and Shaoping Ma
- Enhanced VIRET tool for lifelog data. Jakub Lokoc, Tomáš Souček, Premysl Cech and Gregor Kovalcik
- Retrieval of Structured and Unstructured Data with vitrivr. Luca Rossetto, Ralph Gasser, Silvan Heller, Mahnaz Amiri Parian and Heiko Schuldt
- VieLens, An Interactive Search engine for LSC2019. Son P Nguyen, Dien H Le, Uyen H Pham, Martin Crane, Graham Healy and Cathal Gurrin
- LifeSeeker Interactive Lifelog Search Engine at LSC 2019. Khiem Le Tu, Tu van Ninh, Duc Tien Dang Nguyen, Minh-Triet Tran, Liting Zhou, Pablo Redondo, Sinead Smyth and Cathal Gurrin
- An Interactive Approach to Integrating External Textual Knowledge for Multimodal Lifelog Retrieval. Chia-Chun Chang, Min-Huan Fu, Hen-Hsen Huang and Hsin-Hsi Chen
12:30 - Lunch
13:30 - Search Challenge Session
- 13:30 - Expert Runs (12 topics)
- 14:30 - Coffee & Novice User Training
- 15:30 - Novice Runs (12 topics)
17:00 - Wrap Up
For the interactive search challenge, each participating team will be given a desk, arranged in a semi-circle around a dedicated network switch to avoid any latency issues. Each team will have developed a prototype interactive lifelog search engine for the dataset. At the front of the room is a large screen upon which the host shows each topic, along with a real-time scoreboard and a countdown clock. Each team earns a score based on the speed with which it finds the known item represented by the search topic. When an item is found, the team submits it to a central server, which checks whether it is correct; a score is then allocated to the team based on how long it took to find the relevant content and on the accuracy of its submissions. We expect to process 16 topics within the session, with each topic requiring 3 minutes. The topics will follow a standard known-item search approach and, to ensure fairness, each system will be piloted/operated by a novice user rather than the system developer.
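To make the scoring mechanism described above concrete, the sketch below shows one way a per-topic score could be computed from elapsed time and submission accuracy. This is a minimal illustration, not the official LSC 2019 server logic: the 100-point ceiling, the linear decay over the 3-minute topic window, and the fixed penalty per incorrect submission are all assumptions introduced here for clarity.

```python
# Minimal sketch of a time-and-accuracy based scoring rule, assuming a
# 100-point ceiling, linear decay over the 180-second topic window, and a
# fixed penalty per wrong submission. These constants are illustrative and
# do not reflect the official LSC 2019 scoring server.

TOPIC_DURATION_S = 180          # each topic runs for 3 minutes
MAX_SCORE = 100                 # assumed score for an instant correct answer
WRONG_SUBMISSION_PENALTY = 10   # assumed deduction per incorrect submission


def score_submission(elapsed_s: float, wrong_submissions: int, correct: bool) -> float:
    """Score one team on one topic from search speed and submission accuracy."""
    if not correct or elapsed_s > TOPIC_DURATION_S:
        return 0.0
    # Faster correct answers earn more: score decays linearly over the window.
    speed_component = MAX_SCORE * (1 - elapsed_s / TOPIC_DURATION_S)
    # Each incorrect submission made before the correct one reduces the score.
    penalty = WRONG_SUBMISSION_PENALTY * wrong_submissions
    return max(0.0, speed_component - penalty)


if __name__ == "__main__":
    # Example: a team finds the known item after 45 seconds, with one wrong
    # guess beforehand -> 75 points for speed minus a 10-point penalty.
    print(score_submission(elapsed_s=45, wrong_submissions=1, correct=True))
```

A rule of this shape rewards both fast retrieval and careful submission, which matches the description above of scores depending on how long a team took and on the accuracy of its submissions.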