Evaluation Scenario Writer - QA

November 19, 2025
Status: Open
Location: Vietnam
Occupation: Part-time
Experience level: Mid-level
Job Summary

Mindrift, a platform powered by Toloka, is seeking intellectually proactive contributors for a remote, flexible, project-based role focused on enhancing the quality of evaluation scenarios for LLM agents. You’ll review, validate, and improve scenario tests, spot inconsistencies, and collaborate with writers and engineers to maintain clarity and robust coverage. Compensation can reach up to $38/hour, and the opportunity is suitable for those seeking freelance work that fits around other commitments.

Ideal candidates will have a QA background (manual or automation), strong critical thinking, and experience in test design, edge case detection, and reviewing structured formats (JSON, YAML). Skills in Python, JavaScript, communication, and familiarity with Git/GitHub and test management tools are important. Prior experience with AI systems or NLP is a plus.

Contributors delivering consistently high-quality work may be invited to participate in future projects. This is an excellent chance to gain valuable hands-on AI project experience, shape the future of generative AI, and expand your professional portfolio while working asynchronously from anywhere in the world.


At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI. 

The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.

Who we're looking for:

We’re looking for curious and intellectually proactive contributors who never miss an error and can think outside the box when brainstorming solutions.

Are you comfortable with ambiguity and complexity? Does an async, remote, flexible opportunity sound exciting? Would you like to learn how modern AI systems are tested and evaluated?

This is a flexible, project-based opportunity well-suited for:

  • Analysts, researchers, or consultants with strong critical thinking skills.
  • Students (senior undergrads / grad students) looking for an intellectually interesting gig.
  • People open to a part-time and non-permanent opportunity.

About the project:

We’re on the hunt for an Evaluation Scenario Writer - QA for a new project focused on ensuring the quality and correctness of evaluation scenarios created for LLM agents. The work blends manual scenario validation, automated-test thinking, and collaboration with writers and engineers. You will verify test logic, flag inconsistencies, and help maintain a high bar for evaluation coverage and clarity.

What you’ll be doing:

  • Reviewing and validating test scenarios from Evaluation Writers.
  • Spotting logical inconsistencies, ambiguities, or missing checks.
  • Suggesting improvements to structure, edge cases, or scoring logic.
  • Collaborating with infrastructure and tool developers to automate parts of the review.
  • Creating clean and testable examples for others to follow.

Although we’re only hiring experts for this project at present, contributors with consistently high-quality submissions may be invited to collaborate on future projects.

How to get started

Apply to this post, qualify, and get the chance to contribute to a project aligned with your skills, on your own schedule. Shape the future of AI while building tools that benefit everyone.

The ideal contributor will have:

  • Strong QA background (manual or automation), preferably in complex testing environments.
  • Understanding of test design, regression testing, and edge case detection.
  • Ability to evaluate logic and structure of test scenarios (even if written by others).
  • Experience reviewing and debugging structured test case formats (JSON, YAML).
  • Familiarity with Python and JS scripting for test automation or validation.
  • Clear communication and documentation skills.
  • Willingness to occasionally write or refactor test scenarios.
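To give a flavor of the scenario-review work described above, here is a minimal sketch of a Python validation script for a JSON test scenario. The field names (`id`, `prompt`, `expected_behavior`, `checks`) are hypothetical; a real project would define its own schema.

```python
import json

# Hypothetical required fields for an evaluation scenario; real projects
# will define their own schema.
REQUIRED_FIELDS = {"id", "prompt", "expected_behavior", "checks"}

def validate_scenario(raw: str) -> list[str]:
    """Return a list of human-readable issues found in a JSON scenario.

    An empty list means the scenario passed these basic checks.
    """
    try:
        scenario = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]

    issues = []
    missing = REQUIRED_FIELDS - scenario.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")

    # A scenario with no checks can never fail, which usually signals
    # an authoring mistake rather than an intentionally empty test.
    if not scenario.get("checks"):
        issues.append("no checks defined: scenario can never fail")

    return issues

# Example review: a scenario that parses but lacks an expected_behavior
# field and defines no checks.
sample = '{"id": "s-001", "prompt": "Book a flight to Hanoi", "checks": []}'
print(validate_scenario(sample))
```

In practice, checks like these are often automated in a review pipeline so that human reviewers can focus on logical inconsistencies and missing edge cases rather than schema errors.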

We also value applicants who have:

  • Experience testing AI-based systems or NLP applications.
  • Familiarity with scoring systems and behavioral evaluation.
  • Git/GitHub workflow familiarity (PR review, versioning of test cases).
  • Experience using test management systems or tracking tools.

Contribute on your own schedule, from anywhere in the world. This opportunity allows you to:

  • Get paid for your expertise, with rates that can go up to $38/hour depending on your skills, experience, and project needs.
  • Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments.
  • Participate in an advanced AI project and gain valuable experience to enhance your portfolio.
  • Influence how future AI models understand and communicate in your field of expertise.
Mindrift
Welcome to Mindrift — a space where innovation meets opportunity. We're a pioneering platform dedicated to advancing the field of artificial intelligence through collaborative online projects. Our focus lies in creating data for generative AI, offering a unique chance for freelancers to contribute to AI development from anywhere, at any time. At Mindrift, we believe in the power of collective intelligence to shape the future of AI. Our platform allows users to dive into a variety of tasks — ranging from creating training prompts for AI models to refining AI responses for greater relevance. Let's build the future of AI together, one task at a time.
Company size: 1,001-5,000
Industry: Information Technology & Services