On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation

1Arizona State University 2University of Maryland, Baltimore County

CVPR 2024


Abstract

Recent advances in monocular depth estimation have been made by incorporating natural language as additional guidance. Although yielding impressive results, the impact of the language prior, particularly in terms of generalization and robustness, remains unexplored. In this paper, we address this gap by quantifying the impact of this prior and introducing methods to benchmark its effectiveness across various settings. We generate "low-level" sentences that convey object-centric, three-dimensional spatial relationships, incorporate them as additional language priors, and evaluate their downstream impact on depth estimation. Our key finding is that current language-guided depth estimators perform optimally only with scene-level descriptions and, counter-intuitively, fare worse with low-level descriptions. Despite leveraging additional data, these methods are not robust to directed adversarial attacks, and their performance declines as distribution shift increases. Finally, to provide a foundation for future research, we identify points of failure and offer insights to better understand these shortcomings. With an increasing number of methods using language for depth estimation, our findings highlight the opportunities and pitfalls that require careful consideration for effective deployment in real-world settings.
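
To make the notion of "low-level" guidance concrete, below is a minimal sketch of how an object-centric, three-dimensional spatial sentence of this kind could be templated from per-object image positions and depths. This is an illustrative assumption, not the paper's actual sentence-generation pipeline; the object names, fields, and depth threshold are hypothetical.

# Hypothetical sketch: templating a "low-level" spatial sentence from
# per-object annotations (normalized horizontal position + metric depth).
# Field names and the depth_margin threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    x_center: float   # normalized horizontal image coordinate in [0, 1]
    depth_m: float    # representative depth of the object's region, in meters

def spatial_sentence(a: SceneObject, b: SceneObject,
                     depth_margin: float = 0.3) -> str:
    """Describe how object `a` relates to object `b` in 3D."""
    if a.depth_m + depth_margin < b.depth_m:
        depth_rel = "in front of"
    elif a.depth_m > b.depth_m + depth_margin:
        depth_rel = "behind"
    else:
        depth_rel = "at roughly the same distance as"
    side_rel = "to the left of" if a.x_center < b.x_center else "to the right of"
    return f"The {a.name} is {depth_rel} and {side_rel} the {b.name}."

if __name__ == "__main__":
    chair = SceneObject("chair", x_center=0.3, depth_m=1.8)
    table = SceneObject("table", x_center=0.6, depth_m=2.6)
    # Prints: "The chair is in front of and to the left of the table."
    print(spatial_sentence(chair, table))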

Comparison of generated depth maps, evaluated in a zero-shot setting, across 5 different scene types and 4 kinds of natural language guidance. Performance drops as low-level information is progressively provided to the model.

BibTeX

@InProceedings{Chatterjee_2024_CVPR,
    author    = {Chatterjee, Agneet and Gokhale, Tejas and Baral, Chitta and Yang, Yezhou},
    title     = {On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {2794-2803}
}

Acknowledgement

The authors acknowledge resources and support from the Research Computing facilities at Arizona State University. This work was supported by NSF RI grants #1750082 and #2132724. The views and opinions expressed herein are those of the authors and do not necessarily state or reflect those of the funding agencies or employers. This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The template of this website is borrowed from the Nerfies website.