Senior Data Engineer
Talent Power Vietnam is hiring a remote Senior Data Engineer to join its dynamic team on a contract basis through the end of December 2025. The position offers a competitive monthly salary of 50–65M VND (gross) along with attractive perks, including monthly Grab e-vouchers, life insurance, and comprehensive medical coverage. You’ll play a key role in scaling and optimizing the company’s data infrastructure, owning the development and maintenance of large-scale Apache Spark and Airflow pipelines that support personalization features. Collaboration with cross-functional teams will be crucial as you manage schema changes, track data lineage, and ensure pipeline stability across systems.
The ideal candidate will have 3–5 years of experience in data engineering and strong hands-on skills with Apache Spark (SparkSQL, PySpark), PrestoSQL/HiveSQL, Delta Lake, and orchestration tools like Apache Airflow. Proficiency in Python, Go, or Scala is required, as well as experience with GitLab, CI/CD, and writing robust, clean, object-oriented code. Familiarity with AWS services, Terraform (specifically IAM and S3 management), and backend development in Go will be considered a plus.
This is a great opportunity to contribute to impactful data systems at scale in a fully remote setup. Talent Power Vietnam is committed to empowering careers with flexibility, modern tech stacks, and a supportive work environment tailored for data professionals seeking growth and innovation.
Job Description
We’re looking for three passionate Data Engineers to join our team and help scale and improve the data infrastructure that powers critical personalization use cases across our company.
What We Offer
- Salary range: 50–65M VND gross
- Fully remote
- Monthly Grab e-voucher
- Life Insurance and comprehensive Medical Insurance
- Contract period: through the end of December 2025
What You’ll Do
- Own and maintain large-scale Apache Spark and Airflow pipelines to ensure high data availability for key downstream systems
- Refactor, optimize, and migrate existing data pipelines to more maintainable and scalable architectures
- Work closely with cross-functional teams to manage schema changes and ensure smooth integration across data consumers
- Ensure high data quality through data lineage tracking and column-level dependency mapping
What We’re Looking For
- 3–5 years of experience in Data Engineering roles
- Proficient in Python, Go, or Scala
- Strong hands-on experience with Apache Spark (SparkSQL, PySpark) and ETL development
- Skilled in PrestoSQL/HiveSQL and Delta Lake architecture
- Familiarity with Airflow and orchestration of large-scale pipelines
- Strong communication skills and experience managing changes across teams
- Experience with GitLab, CI/CD, and writing clean, modular, object-oriented code
Bonus Points (Nice to Have)
- Backend development experience with Go
- Solid understanding of AWS cloud services
- Experience with Terraform (especially S3 and IAM policy management)
- Background working on large-scale, high-impact data systems