Latency#
LLM inference is slow. A driving stack needs fresh decisions at roughly 10 Hz, which leaves only about 100 ms per cycle for perception, reasoning, and action decoding; running a multi-billion-parameter model inside that budget is a massive compute challenge.
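As a rough illustration of why the budget is tight, the sketch below works through the arithmetic for an assumed 7B-parameter dense decoder emitting 16 action tokens per control step in fp16. All of the figures are illustrative assumptions, not numbers from any particular system.

```python
# Back-of-the-envelope latency budget for a dense decoder-only VLA.
# The parameter count, token count, and precision below are illustrative
# assumptions, not figures from any particular system.

CONTROL_RATE_HZ = 10          # assumed planning frequency for driving
PARAMS = 7e9                  # assumed model size: 7B dense parameters
BYTES_PER_PARAM = 2           # fp16/bf16 weights
ACTION_TOKENS = 16            # assumed action tokens decoded per control step

budget_ms = 1000.0 / CONTROL_RATE_HZ                 # 100 ms per decision
weight_bytes = PARAMS * BYTES_PER_PARAM              # ~14 GB of weights

# Batch-size-1 autoregressive decoding is roughly memory-bound: every
# generated token streams the full weight matrix from HBM.
bytes_per_step = weight_bytes * ACTION_TOKENS
required_bandwidth = bytes_per_step / (budget_ms / 1000.0)

print(f"per-step budget:      {budget_ms:.0f} ms")
print(f"weights to stream:    {bytes_per_step / 1e9:.0f} GB per step")
print(f"needed HBM bandwidth: {required_bandwidth / 1e12:.1f} TB/s sustained")
```

Under these assumptions the decode alone needs on the order of 2 TB/s of sustained memory bandwidth, before accounting for the vision encoder or prompt prefill, which is already at the edge of a single modern accelerator.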
Safety#
Hallucinations in a chatbot are annoying; hallucinations in a car are fatal. VLA outputs must therefore pass through safety layers that can bound or veto them before they reach the actuators.
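A minimal sketch of what such a gate might look like, assuming a hypothetical `Command` interface, a single-lead-vehicle time-to-collision check, and made-up actuator limits; a production safety layer would rely on verified planners and far more rigorous checks.

```python
# Minimal sketch of a rule-based safety layer that gates VLA output.
# All names, thresholds, and the fallback policy here are hypothetical.

from dataclasses import dataclass

@dataclass
class Command:
    accel_mps2: float      # longitudinal acceleration command
    steer_rate_rps: float  # steering rate command

MAX_ACCEL = 3.0            # assumed comfort/actuator limit, m/s^2
MAX_DECEL = -8.0           # assumed emergency braking limit, m/s^2
MAX_STEER_RATE = 0.5       # assumed steering-rate limit, rad/s
MIN_TTC_S = 2.0            # assumed minimum acceptable time-to-collision

def time_to_collision(ego_speed: float, gap_m: float, lead_speed: float) -> float:
    closing = ego_speed - lead_speed
    return gap_m / closing if closing > 0 else float("inf")

def gate(vla_cmd: Command, ego_speed: float, gap_m: float, lead_speed: float) -> Command:
    """Pass the VLA command through only if it satisfies hard constraints;
    otherwise fall back to a conservative braking command."""
    within_limits = (
        MAX_DECEL <= vla_cmd.accel_mps2 <= MAX_ACCEL
        and abs(vla_cmd.steer_rate_rps) <= MAX_STEER_RATE
    )
    safe_headway = time_to_collision(ego_speed, gap_m, lead_speed) >= MIN_TTC_S
    if within_limits and (safe_headway or vla_cmd.accel_mps2 < 0):
        return vla_cmd
    # Fallback: ignore the unsafe (possibly hallucinated) command and brake.
    return Command(accel_mps2=-4.0, steer_rate_rps=0.0)

if __name__ == "__main__":
    risky = Command(accel_mps2=2.5, steer_rate_rps=0.1)   # VLA wants to speed up
    # Lead car 10 m ahead and 10 m/s slower: TTC = 1 s, so the gate brakes.
    print(gate(risky, ego_speed=15.0, gap_m=10.0, lead_speed=5.0))
```

The point of the design is that the learned policy never commands the actuators directly; a small, auditable rule set always sits between the model and the vehicle.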