Chess Datasets Translation

Currently, Qwen3-30B-A3B-Instruct-2507-FP8 is used for fast inference (over 1,000 samples/min).

Getting started

  1. Install vLLM:
    pip install vllm
  2. Update the paths in translate.py (global variables DATASET_PATH and TRANSLATED_DATASET_PATH)
  3. Run:
    vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 --gpu-memory-utilization 0.9
  4. When the server is ready, launch the translation:
    python translate.py
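
The translation loop inside translate.py is roughly the following sketch. The file paths, prompt wording, and dataset field names are assumptions, not taken from the actual script; the request shape matches the OpenAI-compatible API that vllm serve exposes on port 8000 by default.

```python
# Sketch of a translate.py-style loop: build a prompt per sample and send it
# to the local vLLM OpenAI-compatible endpoint. Paths and prompt wording are
# placeholders (assumptions), not the repository's actual values.
import json
import urllib.request

DATASET_PATH = "data/chess_dataset.jsonl"                # placeholder path
TRANSLATED_DATASET_PATH = "data/chess_dataset_en.jsonl"  # placeholder path
API_URL = "http://localhost:8000/v1/chat/completions"    # vllm serve default port

def build_prompt(text: str) -> str:
    # Ask for translation while keeping chess move notation (SAN/PGN) intact.
    return ("Translate the following text to English. "
            "Leave chess move notation unchanged.\n\n" + text)

def translate(text: str) -> str:
    payload = json.dumps({
        "model": "Qwen/Qwen3-30B-A3B-Instruct-2507-FP8",
        "messages": [{"role": "user", "content": build_prompt(text)}],
    }).encode()
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The OpenAI-compatible response puts the text under choices[0].message.
    return body["choices"][0]["message"]["content"]
```

A real run would read DATASET_PATH line by line, call translate() per sample, and append results to TRANSLATED_DATASET_PATH; batching requests is what makes the >1,000 samples/min throughput plausible.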