Today we share more on PaLM-E! (http://palm-e.github.io)
Thread with blog post link at the end.
PaLM-E can do a lot of things across robotics, vision, and language… but let's look at a few capabilities in detail, step by step.
What happens when we train the largest vision-language model and add in robot experiences?
The result is PaLM-E, a 562-billion-parameter, general-purpose embodied visual-language generalist across robotics, vision, and language.
Website: https://palm-e.github.io
For one, "Let's think step by step" comes to multimodal models!
Zero-shot chain-of-thought has been one of those emergent behaviors that has drawn considerable interest in research on LLM capabilities…
With PaLM-E-562B, zero-shot visual chain-of-thought comes "included".
Multimodal chain-of-thought can be very helpful to get a sense of what the model is picking up on.
While the question here calls for only a 1-bit (yes/no) answer, the chain-of-thought provides much more than 1 bit of information about what the model sees.
Here's a many-step zero-shot CoT example (prompt by @Ayzaan Wahid!). Note that large VQA training datasets (VQAv2, OKVQA, etc.) typically have only 1-, 2-, or 3-word answers, so these many-step answers are considerably out-of-distribution.
Here's another multimodal reasoning question addressed with chain-of-thought, this time a visual math problem: no OCR step is needed despite the spatial-textual context required; everything happens in one model. This prompt is by @Fei Xia!
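To make the prompting pattern concrete, here's a minimal sketch, purely my own illustration and not the PaLM-E codebase or API: zero-shot multimodal CoT is the familiar "Let's think step by step." suffix applied to a prompt that interleaves an image with the question. `ImageRef`, `zero_shot_cot_prompt`, and the file name are hypothetical.

```python
# Minimal sketch (not the actual PaLM-E API) of a zero-shot multimodal
# chain-of-thought prompt: the usual "Let's think step by step." trick,
# with an image interleaved into the prompt instead of text only.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ImageRef:
    """Placeholder for an image the model would encode with its ViT."""
    path: str

# A "multimodal sentence": text segments and images interleaved freely.
Prompt = List[Union[str, ImageRef]]

def zero_shot_cot_prompt(image_path: str, question: str) -> Prompt:
    return [
        "Given <img>",
        ImageRef(image_path),
        f"Q: {question}",
        "A: Let's think step by step.",
    ]

prompt = zero_shot_cot_prompt("kitchen.jpg", "Can the robot reach the mug?")
# A hypothetical model.generate(prompt) would decode a multi-step rationale
# before the final short answer.
print(prompt)
```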
Moving on from chain-of-thought, another capability of PaLM-E that "just comes included" is the ability to do multi-image reasoning… despite only ever being trained on single-image examples.
For this multi-image reasoning, since PaLM-E flexibly supports multimodal sentences, it can answer questions about specific relationships between images. While the previous example was a "what matches?" question, this one is a "what's different?" question.
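As a rough sketch of what such a multimodal sentence could look like in the two-image case (again my own toy types and made-up file names, not PaLM-E's actual interface): images slot into the prompt wherever text could go, so the question can refer to them by position.

```python
# Toy two-image "multimodal sentence" for a "what's different?" question.
# ImageRef/Prompt mirror the toy types in the earlier sketch; this is
# illustrative only and differs from the real PaLM-E interface.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ImageRef:
    path: str

Prompt = List[Union[str, ImageRef]]

whats_different: Prompt = [
    "Photo 1:", ImageRef("desk_before.jpg"),
    "Photo 2:", ImageRef("desk_after.jpg"),
    "Q: What is different between photo 1 and photo 2?",
    "A:",
]
print(whats_different)
```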
Extending multi-image further, we can do more than just 2 images...
For this, let's look at a capability we showed last year with Socratic Models (https://socraticmodels.github.io/…, led by @Andy Zeng), where we could do long-form egocentric video understanding. Some examples here:
In Socratic Models, this worked by writing out a language-based world-state history, i.e., a timestamped log of textually represented events:
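For contrast, here's a rough sketch of that recipe; `build_world_state_log` and the dummy captioner are my own stand-ins rather than the actual Socratic Models code. Each frame gets captioned into text, and a text-only LLM then reasons over the timestamped log.

```python
# Rough sketch of the textual world-state history described above.
# build_world_state_log and the dummy captioner are illustrative stand-ins,
# not the actual Socratic Models code.
from typing import Any, Callable, List, Tuple

def build_world_state_log(
    frames: List[Tuple[str, Any]],   # (timestamp, video frame) pairs
    caption: Callable[[Any], str],   # any off-the-shelf image captioner
) -> str:
    """Turn frames into a timestamped log of textually represented events."""
    return "\n".join(f"{ts}: {caption(frame)}" for ts, frame in frames)

# Dummy usage: the log plus a question would go to a text-only LLM.
frames = [("14:02", None), ("14:07", None)]
log = build_world_state_log(frames, lambda _: "description of the frame")
print(log + "\nQ: Where did I last see my keys? A:")
# PaLM-E skips this intermediate text entirely: the frames go into one
# model, end-to-end.
```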
With PaLM-E, we can do this end-to-end, all in one model, with no explicit textual intermediate stage.
A wide set of temporal and visual reasoning capabilities is in scope.
Lots of potential AR & Robotics applications here.
Quantitatively, PaLM-E-562B sets a new state of the art of 66.1 on the OK-VQA dataset.
This number is achieved with a *generalist* (one model) that is also trained on diverse robotics and vision data, and without a final task-specific finetuning stage on OK-VQA.
In a recent-ish podcast (recorded in October, released in January), I had a few comments on where large-scale multimodal models are headed and on the "one big model" approach... (see around the 42-minute mark here)
https://twitter.com/gradientpub/status/1611408050987876355?s=20…
Another capability of PaLM-E-562B is that, quantitatively, it's an excellent language model, roughly as good as PaLM-540B.
Notably, scaling the model significantly reduces catastrophic forgetting of language: https://twitter.com/DannyDriess/status/1632904698108821505?s=20…
We observe a notable trend with model scale: the larger the language model, the more it maintains its language capabilities when trained on visual-language and robotics tasks. Quantitatively, the 562B PaLM-E model retains nearly all of its language capabilities.
For robotics, PaLM-E is a rapid learner of new planning tasks, requiring only a handful of examples to start generalizing well in a given domain. Here we plot PaLM-E's sample complexity relative to baselines; the difference is due solely to transfer learning. (Subset of Table 2)
PaLM-E can do few-shot and zero-shot generalization: it never had training data for "push the red blocks to the coffee cup", and had only ever seen this coffee cup in 3 images. See the website for the never-before-seen "turtle" object too.
Towards wrapping up here, in addition to all our co-authors, I want to especially give a shout-out and thanks to all the Google teams who helped make the effort possible! Especially the folks behind training PaLM and the large ViTs from which PaLM-E is built.
Today we share PaLM-E, a generalist, embodied language model for robotics. The largest instantiation, at 562 billion parameters, is also a state-of-the-art visual-language model, has PaLM's language skills, and can be successfully applied across robot types → https://goo.gle/3JsszmK
And I want to close with a Haiku.
Prompt in gray by @Brian Ichter, and the completion written by PaLM-E-562B: