Is BERT Blind?

Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding

Morris Alper*   Michael Fiman*   Hadar Averbuch-Elor

CVPR 2023
* Equal Contribution

Although tasks such as concreteness prediction and color and shape association prediction are posed over text alone, solving them requires visual imagination.

In this work, we investigate whether vision-and-language pretraining can improve performance on text-only tasks involving visual reasoning. We propose a suite of visual language understanding tasks for probing the visual reasoning capabilities of text encoder models, and show that multimodally trained text encoders outperform unimodally trained encoders such as BERT on these tasks.


Abstract

Most humans use visual imagination to understand and reason about language, but models such as BERT reason about language using knowledge acquired during text-only pretraining. In this work, we investigate whether vision-and-language pretraining can improve performance on text-only tasks that involve implicit visual reasoning, focusing primarily on zero-shot probing methods. We propose a suite of visual language understanding (VLU) tasks for probing the visual reasoning abilities of text encoder models, as well as various non-visual natural language understanding (NLU) tasks for comparison. We also contribute a novel zero-shot knowledge probing method, Stroop probing, for applying models such as CLIP to text-only tasks without needing a prediction head such as the masked language modelling head of models like BERT. We show that SOTA multimodally trained text encoders outperform unimodally trained text encoders on the VLU tasks while being underperformed by them on the NLU tasks, lending new context to previously mixed results regarding the NLU capabilities of multimodal models. We conclude that exposure to images during pretraining affords inherent visual reasoning knowledge that is reflected in language-only tasks that require implicit visual reasoning. Our findings bear importance in the broader context of multimodal learning, providing principled guidelines for the choice of text encoders used in such contexts.


Stroop Probing

We propose a new zero-shot probing method for multimodal text encoders such as that of CLIP, taking inspiration from the psychological phenomenon known as the Stroop effect.

Try it yourself! Look at the color words in the boxes below, and read aloud the color in which each word is printed (not the color that the word names). Which set of colors is harder to read aloud?

Stroop probing uses this idea — that incongruent stimuli have an interference effect on the representation of their context — in order to probe models such as CLIP for knowledge without requiring a language modelling head.

[Figures: two Stroop effect examples, showing color words printed in congruent and incongruent ink colors.]
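As a rough illustration of the idea, the sketch below implements a Stroop-style probe over CLIP's text encoder using the Hugging Face transformers library. The prompt templates, the use of cosine similarity as a congruence score, and the rule of selecting the candidate whose insertion perturbs the prompt embedding least are illustrative assumptions and may differ from the exact formulation in the paper.

# A hedged sketch of a Stroop-style probe over CLIP's text encoder.
# Prompt templates and the scoring rule are illustrative assumptions.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(texts):
    # Return L2-normalized CLIP text embeddings for a list of strings.
    inputs = processor(text=texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def stroop_probe(obj, candidates):
    # Score each candidate attribute by how little its insertion perturbs the
    # representation of the base prompt (higher similarity = more congruent).
    anchor = embed([f"a photo of a {obj}"])
    probes = embed([f"a photo of a {obj} that is {cand}" for cand in candidates])
    sims = (anchor @ probes.T).squeeze(0)  # cosine similarity per candidate
    return dict(zip(candidates, sims.tolist()))

colors = ["red", "orange", "yellow", "green", "brown", "white"]
scores = stroop_probe("carrot", colors)
print(max(scores, key=scores.get))  # ideally "orange"

Selecting the most congruent candidate in this way requires no language modelling head, only access to the encoder's sentence-level embeddings.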

Sample Results

Compare the results of BERT-base with masked language modelling and CLIP with Stroop probing on the task of color association prediction. Note that the models receive only the name of the fruit or vegetable; the accompanying images are shown for illustration purposes only.

             BERT:    CLIP:
broccoli     green    green
carrot       green    orange
corn         red      yellow
potato       white    brown
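For comparison, the masked language modelling probe for BERT can be sketched roughly as below. The prompt template is an illustrative assumption rather than the exact wording used in the paper, and each candidate color is assumed to be a single WordPiece token in the BERT vocabulary.

# A hedged sketch of zero-shot color association prediction with BERT's MLM head.
# The prompt template is an illustrative assumption.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def mlm_color_probe(obj, candidates):
    # Rank candidate colors by the MLM logit at the [MASK] position.
    prompt = f"the color of a {obj} is [MASK]."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    cand_ids = [tokenizer.convert_tokens_to_ids(c) for c in candidates]
    return candidates[int(logits[cand_ids].argmax())]

print(mlm_color_probe("carrot", ["red", "orange", "yellow", "green", "brown", "white"]))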

In our work we examine additional visual language understanding tasks, including concreteness prediction and shape association prediction. As baselines, we also evaluate NLU tasks that do not directly involve visual reasoning.


Acknowledgements

We thank Noriyuki Kojima, Gabriel Stanovsky, and Adi Haviv for their helpful feedback.


Citation

@InProceedings{alper2023:is-bert-blind,
    author    = {Morris Alper and Michael Fiman and Hadar Averbuch-Elor},
    title     = {Is BERT Blind? Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2023}
}