Tags: Video-Text-to-Text · Transformers · Safetensors · llava_onevision · image-text-to-text · multimodal · multilingual · vlm · translation
Instructions for using utter-project/TowerVideo-9B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use utter-project/TowerVideo-9B with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("utter-project/TowerVideo-9B")
model = AutoModelForImageTextToText.from_pretrained("utter-project/TowerVideo-9B")
```

- Notebooks
- Google Colab
- Kaggle
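Beyond loading the model and processor, running inference requires a chat-style message that mixes a video with a text question. The sketch below builds that message structure; the generation calls are shown only as comments, since they would require downloading the 9B checkpoint. The `build_conversation` helper and the `"path"` field are illustrative assumptions following the chat format that Transformers uses for LLaVA-OneVision-style processors, not part of this model card.

```python
# A minimal sketch of a video-chat prompt for a LLaVA-OneVision-style
# processor. build_conversation is a hypothetical helper for illustration.

def build_conversation(question: str, video_path: str) -> list:
    """Build a single-turn user message combining a video and a text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "video", "path": video_path},
                {"type": "text", "text": question},
            ],
        }
    ]

conversation = build_conversation("Describe this clip.", "clip.mp4")

# With `processor` and `model` loaded as above, generation would then
# look roughly like this (commented out to avoid the checkpoint download):
# inputs = processor.apply_chat_template(
#     conversation, add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# )
# output = model.generate(**inputs, max_new_tokens=128)
# print(processor.decode(output[0], skip_special_tokens=True))
```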
Processor configuration (file size: 219 Bytes):

```json
{
  "image_token": "<image>",
  "num_image_tokens": 729,
  "processor_class": "LlavaOnevisionProcessor",
  "video_token": "<video>",
  "vision_aspect_ratio": "anyres_max_9",
  "vision_feature_select_strategy": "full"
}
```
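The configuration above can be inspected without loading the model at all. The snippet below parses the same JSON and checks a couple of its fields; note that `num_image_tokens: 729` corresponds to a 27×27 grid of vision tokens per tile, and `"anyres_max_9"` is the aspect-ratio strategy name (in LLaVA-OneVision this caps the number of tiles per image, an assumption worth verifying against the Transformers docs for your version).

```python
import json

# The processor configuration shown above, inlined for inspection.
PROCESSOR_CONFIG = json.loads("""
{
  "image_token": "<image>",
  "num_image_tokens": 729,
  "processor_class": "LlavaOnevisionProcessor",
  "video_token": "<video>",
  "vision_aspect_ratio": "anyres_max_9",
  "vision_feature_select_strategy": "full"
}
""")

# Each tile contributes a fixed budget of vision tokens: a 27x27 grid.
assert PROCESSOR_CONFIG["num_image_tokens"] == 27 * 27

# Placeholder tokens the processor expands into vision features.
print(PROCESSOR_CONFIG["image_token"], PROCESSOR_CONFIG["video_token"])
```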