Join the project
Human-in-the-loop Reviewers
Validate AI drafts, enforce placeholder and tag safety, and improve consistency.
Collaborators
Contribute datasets, glossaries, evaluation ideas, or model integrations (OpenAI / Ollama).
Models & benchmarks
Draft generation
- NLLB-200 for multilingual machine translation drafts.
- OpenAI (optional) for alternative draft generation and comparison.
- Ollama (optional) for local, pluggable LLM experimentation.
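Since drafts can come from several interchangeable engines (NLLB-200, OpenAI, Ollama), a pluggable backend registry keeps the pipeline engine-agnostic. The sketch below is illustrative only: the registry, the `generate_draft` signature, and the stub backend are assumptions, not the project's actual API.

```python
from typing import Callable, Dict

# A draft backend maps (text, source_lang, target_lang) -> draft translation.
# Hypothetical signature; the real project may differ.
DraftBackend = Callable[[str, str, str], str]

_BACKENDS: Dict[str, DraftBackend] = {}

def register_backend(name: str, fn: DraftBackend) -> None:
    """Register a draft-generation backend under a short name."""
    _BACKENDS[name] = fn

def generate_draft(text: str, src: str, tgt: str, backend: str = "nllb") -> str:
    """Route a translation request to the chosen backend."""
    if backend not in _BACKENDS:
        raise KeyError(f"unknown backend: {backend!r}")
    return _BACKENDS[backend](text, src, tgt)

# Stub standing in for a real NLLB-200 pipeline call; it only tags the
# input so the routing logic is testable without downloading a model.
def _stub_nllb(text: str, src: str, tgt: str) -> str:
    return f"[{src}->{tgt}] {text}"

register_backend("nllb", _stub_nllb)
```

In a real deployment, `_stub_nllb` would be replaced by a Hugging Face translation pipeline (and similar adapters would wrap the OpenAI and Ollama clients), which makes side-by-side draft comparison a matter of calling `generate_draft` with different backend names.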
Quality checks
- XLM-R sentence embeddings for semantic similarity and back-translation checks.
- AfroLingu-MT benchmark dataset for evaluating quality on underrepresented language pairs.
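The back-translation check above can be sketched as a cosine-similarity gate over sentence embeddings. In the real pipeline the vectors would come from an XLM-R sentence encoder; here `embed` is an injected placeholder so the logic is self-contained, and the threshold value is an assumption.

```python
import math
from typing import Callable, List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors (0.0 on zero norm)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def passes_back_translation_check(
    source: str,
    back_translation: str,
    embed: Callable[[str], List[float]],
    threshold: float = 0.85,  # illustrative cutoff, tuned per language pair
) -> bool:
    """Accept a draft only if its back-translation stays semantically
    close to the original source sentence."""
    return cosine_similarity(embed(source), embed(back_translation)) >= threshold
```

With real XLM-R embeddings plugged in as `embed`, the threshold would be calibrated against a benchmark such as AfroLingu-MT rather than fixed a priori.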