How to use raphaelmerx/ko-en with Transformers:

```python
# Use a pipeline as a high-level helper
# Warning: the "translation" pipeline type is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="raphaelmerx/ko-en")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("raphaelmerx/ko-en")
model = AutoModelForSeq2SeqLM.from_pretrained("raphaelmerx/ko-en")
```

Forked from odegiber/ko-en, with a .tflite version of the model weights added.
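If you want to experiment with the bundled .tflite weights, a minimal sketch for loading them with the TensorFlow Lite interpreter is below. The file name `model.tflite` is an assumption (check the repository for the actual name), and running full seq2seq generation with TFLite still requires implementing the decoding loop yourself.

```python
# Minimal sketch: load the .tflite weights and inspect the expected inputs/outputs.
# Assumes the file in the repo is named "model.tflite" (check the actual file name).
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Print the tensor names, shapes, and dtypes the converted model expects.
for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```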
This is a distilled model from the Tatoeba-MT teacher Tatoeba-MT-models/kor-eng/opusTCv20210807-sepvoc_transformer-big_2022-07-28, which was trained on the Tatoeba dataset.

We used OpusDistillery to train a new student with the tiny architecture and a regular transformer decoder. For training data, we used Tatoeba. The configuration file fed into OpusDistillery can be found here.
```python
>>> from transformers import pipeline
>>> pipe = pipeline("translation", model="odegiber/ko-en", max_length=256)
>>> pipe("2017년 말, 시미노프는 쇼핑 텔레비젼 채널인 QVC에 출연했다.")
[{'translation_text': 'At the end of 2017, Siminof appeared on the shopping television channel QVC.'}]
```
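Since the translation pipeline is flagged as unsupported on transformers v5, a sketch of the equivalent direct-generation call is shown below. This is standard AutoModelForSeq2SeqLM usage; the max_length=256 value simply mirrors the pipeline example above and is not prescribed by the model card.

```python
# Equivalent translation without the pipeline helper, using generate() directly.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("raphaelmerx/ko-en")
model = AutoModelForSeq2SeqLM.from_pretrained("raphaelmerx/ko-en")

text = "2017년 말, 시미노프는 쇼핑 텔레비젼 채널인 QVC에 출연했다."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=256)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```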
| testset | BLEU | chr-F |
|---|---|---|
| flores200 | 20.3 | 50.3 |
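For reference, a minimal sketch of how BLEU and chrF scores like these are typically computed with sacrebleu is shown below. The hypothesis and reference lists are placeholders, and the flores200 evaluation data itself is not included here.

```python
# Minimal scoring sketch with sacrebleu; replace the placeholder lists with
# the model's translations and the corresponding flores200 references.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["system output sentence 1", "system output sentence 2"]
references = [["reference sentence 1", "reference sentence 2"]]

bleu = BLEU()
chrf = CHRF()
print(bleu.corpus_score(hypotheses, references))  # corpus-level BLEU
print(chrf.corpus_score(hypotheses, references))  # corpus-level chrF
```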