


13th ICVS 2021: Virtual Event
- Markus Vincze, Timothy Patten, Henrik I. Christensen, Lazaros Nalpantidis, Ming Liu: Computer Vision Systems - 13th International Conference, ICVS 2021, Virtual Event, September 22-24, 2021, Proceedings. Lecture Notes in Computer Science 12899, Springer 2021, ISBN 978-3-030-87155-0
Attention Systems
- Nolan B. Gutierrez, William J. Beksi: Thermal Image Super-Resolution Using Second-Order Channel Attention with Varying Receptive Fields. 3-13
- Ali Hamdi, Amr Aboeleneen, Khaled B. Shaban: MARL: Multimodal Attentional Representation Learning for Disease Prediction. 14-27
- Soubarna Banik, Mikko Lauri, Alois C. Knoll, Simone Frintrop: Object Localization with Attribute Preference Based on Top-Down Attention. 28-40
- Danu Caus, Guillaume Carbajal, Timo Gerkmann, Simone Frintrop: See the Silence: Improving Visual-Only Voice Activity Detection by Optical Flow and RGB Fusion. 41-51
Classification and Detection
- Riccardo Grigoletto, Elisa Maiettini, Lorenzo Natale: Score to Learn: A Comparative Analysis of Scoring Functions for Active Learning in Robotics. 55-67
- George Ciubotariu, Vlad-Ioan Tomescu, Gabriela Czibula: Enhancing the Performance of Image Classification Through Features Automatically Learned from Depth-Maps. 68-81
- Bertalan Kovács, Anders D. Henriksen, Jonathan Dyssel Stets, Lazaros Nalpantidis: Object Detection on TPU Accelerated Embedded Devices. 82-92
- Aishwarya Venkataramanan, Martin Laviale, Cécile Figus, Philippe Usseglio-Polatera, Cédric Pradalier: Tackling Inter-class Similarity and Intra-class Variance for Microscopic Image-Based Classification. 93-103
Semantic Interpretation
- Jean-Baptiste Weibel, Rainer Rohrböck, Markus Vincze: Measuring the Sim2Real Gap in 3D Object Classification for Different 3D Data Representation. 107-116
- Christina Theodoridou, Andreas Kargakos, Ioannis Kostavelis, Dimitrios Giakoumis, Dimitrios Tzovaras: Spatially-Constrained Semantic Segmentation with Topological Maps and Visual Embeddings. 117-129
- Andrei Haidu, Xiaoyue Zhang, Michael Beetz: Knowledge-Enabled Generation of Semantically Annotated Image Sequences of Manipulation Activities from VR Demonstrations. 130-143
- Matteo Terreran, Daniele Evangelista, Jacopo Lazzaro, Alberto Pretto: Make It Easier: An Empirical Simplification of a Deep 3D Segmentation Network for Human Body Parts. 144-156
Video and Motion Analysis
- Alexandros Vrochidis, Nikolaos Dimitriou, Stelios Krinidis, Savvas Panagiotidis, Stathis Parcharidis, Dimitrios Tzovaras: Video Popularity Prediction Through Fusing Early Viewership with Video Content. 159-168
- Victoria Manousaki, Konstantinos E. Papoutsakis, Antonis A. Argyros: Action Prediction During Human-Object Interaction Based on DTW and Early Fusion of Human and Object Representations. 169-179
- Özgür Erkent, David Sierra González, Anshul Paigwar, Christian Laugier: GridTrack: Detection and Tracking of Multiple Objects in Dynamic Occupancy Grids. 180-194
- Arezoo Sadeghzadeh, Md Baharul Islam, Reza Zaker: An Efficient Video Desnowing and Deraining Method with a Novel Variant Dataset. 195-208
Computer Vision Systems in Agriculture
- Raymond Kirk, Michael Mangan, Grzegorz Cielniak: Robust Counting of Soft Fruit Through Occlusions with Re-identification. 211-222
- Raymond Kirk, Michael Mangan, Grzegorz Cielniak: Non-destructive Soft Fruit Mass and Volume Estimation for Phenotyping in Horticulture. 223-233
- Timothy Patten, Alen Alempijevic, Robert Fitch: Learning Image-Based Contaminant Detection in Wool Fleece from Noisy Annotations. 234-244
- Usman A. Zahidi, Grzegorz Cielniak: Active Learning for Crop-Weed Discrimination by Image Classification from Convolutional Neural Network's Feature Pyramid Levels. 245-257

manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt-in for them to become active. All settings here will be stored as cookies with your web browser. For more information see our F.A.Q.