Video understanding is a popular field in computer vision and AI that aims to learn about and assess the world around us from video footage. It can benefit many real-world applications, such as training and education, patient monitoring, sports assessment, and security systems. By automating these applications through video analysis, we can not only save their users time and money but also reduce human error. Despite recent advances in other areas of computer vision, e.g. image analysis, video understanding remains an unsolved problem and is considered a very challenging task.
This workshop on video understanding aims to address the challenges in this field by making the following contributions:
Potential topics include, but are not limited to:
Papers are limited to 9 pages according to the BMVC format (cf. the main conference author guidelines). Accepted papers will be published in the BMVC 2024 workshop proceedings.
All papers should be submitted via the CMT website: https://cmt3.research.microsoft.com/VUABMVC2024.
10:00 - 10:05 | Welcome and Introduction |
10:05 - 10:55 | Keynote 1 (online) by Prof. Cees G.M. Snoek. Title: Learning to Generalize in Video Space and Time (40-minute talk followed by 10-minute Q&A) |
10:55 - 11:45 | Keynote 2 by Dr. Antonino Furnari. Title: Beyond Atomic Actions: Towards Long-Form and Procedural Understanding in Egocentric Videos (40-minute talk followed by 10-minute Q&A) |
11:45 - 12:50 | Keynote 3 by Dr. Laura Sevilla. Title: Video Understanding with Limited Resources (40-minute talk followed by 10-minute Q&A) |
12:50 - 13:30 | Break & Lunch |
13:30 - 13:45 | AI4ME Presentation: How Video Understanding Can Help Media Production by Faegheh Sardari (15 minutes) |
13:45 - 14:33 | Oral Presentations (four presentations, each 10 minutes with a 2-minute Q&A) |
14:33 - 14:40 | Closing the Workshop |
Prof. Cees G.M. Snoek, University of Amsterdam, Netherlands
Cees G.M. Snoek is a full professor in computer science at the University of Amsterdam, where he heads the Video & Image Sense Lab. He is the director of three public-private AI research labs: QUVA Lab with Qualcomm, Atlas Lab with TomTom and AIM Lab with Core42. He is also the director of the ELLIS Amsterdam Unit and scientific director of Amsterdam AI, a collaboration between government, academic, medical and other organisations in Amsterdam to develop and deploy responsible AI.
Dr. Antonino Furnari, University of Catania, Italy
Antonino Furnari is a tenure-track Assistant Professor at the University of Catania. His research interests lie in the field of egocentric vision, with a particular focus on video understanding and on building assistive wearable systems that support and empower humans. He is an active member of the EPIC-KITCHENS, EGO4D, and EGO-EXO4D projects, a Senior Member of the IEEE, and an ELLIS member.
Dr. Laura Sevilla, University of Edinburgh, United Kingdom
Laura Sevilla is an Associate Professor at the University of Edinburgh, where she has been since 2019 and leads a group focused on Video Understanding. Previously, she was a researcher at Facebook Research in California and a postdoctoral researcher at the Max Planck Institute in Germany. She obtained her PhD from the University of Massachusetts Amherst in 2015. Over her career, she has worked on most aspects of Video Understanding, from Optical Flow to Object Tracking, Video Captioning, and Perception for Robotics. Her work has been recognised with a Google Research Scholar Award (2022) and a Google Faculty Award (2020).
University of Surrey, United Kingdom
University of Surrey, United Kingdom
University of Surrey, United Kingdom
University of Surrey, United Kingdom
BBC R&D, United Kingdom
University of Surrey, United Kingdom
University of Surrey, United Kingdom
University of Surrey, United Kingdom
University of Surrey, United Kingdom
For additional information, please contact us here.