Vassileios Balntas
(Βασίλειος Μπαλντάς)
Imperial Computer Vision & Learning Lab
Imperial College London
v...@imperial.ac.uk
GitHub / Google Scholar

News


Projects

HPatches: Large-scale patch dataset for descriptor evaluation

One of the most important problems when evaluating feature descriptors is the non-reproducibility of results. We introduce a new dataset of patches extracted from a large set of matching sequences, enabling large-scale and reproducible descriptor evaluations.

Binary online learned descriptors

We present a novel approach to matching descriptors by locally masking feature dimensions for each individual patch. We show that by locally adapting descriptors to each patch, we are able to match or outperform SIFT while retaining the speed of BRIEF. A recent extension shows that this masking process can be successfully applied to all binary descriptors.
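As a rough illustration of per-patch masking (a sketch of the general idea, not the exact published formulation; the stability criterion and masking policy below are illustrative assumptions):

import numpy as np

def stability_mask(descriptors_of_warps, keep_ratio=0.5):
    # descriptors_of_warps: (n_warps, n_bits) binary descriptors of small
    # synthetic warps of the same patch; dimensions that stay constant
    # across warps are treated as stable and kept for this patch.
    variance = descriptors_of_warps.var(axis=0)
    n_keep = int(keep_ratio * descriptors_of_warps.shape[1])
    stable = np.argsort(variance)[:n_keep]          # most stable dimensions
    mask = np.zeros(descriptors_of_warps.shape[1], dtype=bool)
    mask[stable] = True
    return mask

def masked_distance(d1, mask1, d2, mask2):
    # Symmetric masked Hamming distance: each patch's mask selects the
    # dimensions considered reliable for that patch, and the two masked
    # distances are averaged.
    diff = d1 != d2
    return 0.5 * (np.count_nonzero(diff & mask1) + np.count_nonzero(diff & mask2))

Matching remains a cheap bitwise comparison, which is why the BRIEF-like speed is retained while the per-patch masks adapt the descriptor to each patch.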

Learning shallow convolutional descriptors

We show that very shallow convolutional networks can serve as efficient feature descriptors matched with the L2 distance. By training such networks with triplet losses we outperform the state of the art in both matching performance and computational efficiency. These descriptors can be extracted on the GPU at a rate of 10 μs per patch.
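A minimal PyTorch sketch of a shallow descriptor trained with a triplet loss (layer sizes, activations and hyperparameters here are illustrative assumptions, not the exact published architecture):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowDescriptor(nn.Module):
    # Maps a 32x32 grayscale patch to an L2-normalised embedding that is
    # matched with plain Euclidean distance.
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=2, padding=3), nn.Tanh(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=6), nn.Tanh(),
        )
        self.fc = nn.Linear(64 * 3 * 3, dim)

    def forward(self, x):
        x = self.features(x)
        x = self.fc(x.flatten(1))
        return F.normalize(x, p=2, dim=1)   # unit-length descriptor

net = ShallowDescriptor()
loss_fn = nn.TripletMarginLoss(margin=1.0, p=2)
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

# One training step on (anchor, positive, negative) patch batches (dummy data).
a, p, n = (torch.randn(16, 1, 32, 32) for _ in range(3))
loss = loss_fn(net(a), net(p), net(n))
opt.zero_grad()
loss.backward()
opt.step()

The triplet loss pulls matching patches together and pushes non-matching patches apart by at least the margin, which is what makes the plain L2 distance sufficient at matching time.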

Pose Guided RGBD feature learning

We present a novel approach to learning RGBD convolutional feature descriptors by directly using the pose of the objects as a guide. We formulate the feature optimisation problem as maximising the similarity between distances in the feature space and distances in the pose space. We show that the learnt RGBD features are more robust and significantly improve the pose retrieval success rate.
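As a rough sketch of the pose-guided objective (illustrative only, not the exact published formulation; the loss and pose-distance normalisation below are assumptions):

import torch
import torch.nn.functional as F

def pose_guided_loss(feat_a, feat_b, pose_dist):
    # Penalise the discrepancy between pairwise distances in feature space
    # and the corresponding distances in pose space, so that views with
    # nearby poses map to nearby descriptors.
    feat_dist = F.pairwise_distance(feat_a, feat_b)   # (batch,)
    return F.mse_loss(feat_dist, pose_dist)

# Dummy usage: descriptors of two views per object and their pose distance,
# e.g. a normalised rotation angle between the two views.
feat_a, feat_b = torch.randn(8, 64), torch.randn(8, 64)
pose_dist = torch.rand(8)
loss = pose_guided_loss(feat_a, feat_b, pose_dist)

Because the supervision comes directly from pose distances rather than binary match/non-match labels, the learnt feature space inherits the geometry of the pose space, which is what drives the improved pose retrieval.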