Mid Data Engineer
ImportGenius, a leader in global trade data analytics, is seeking a skilled Data Engineer to join its team remotely. This full-time role involves developing scalable ETL pipelines, supporting cross-functional teams with data solutions, and contributing to projects that analyze international import/export trends. You'll collaborate with product and design teams to implement robust data infrastructure and machine learning solutions. The position offers competitive benefits, including 14 vacation days, 14 sick days, emergency leave, annual merit increases, milestone bonuses, and weekends and Philippine holidays off.
The ideal candidate has at least 2 years of experience in data management, with strong proficiency in Python, Apache Spark, and AWS tools such as Glue, S3, Lambda, Redshift, and Athena. Familiarity with MySQL and MongoDB and experience building data warehouses are essential. Strong communication, statistical analysis skills, and a creative approach to problem-solving are key. Experience with Hadoop and a grasp of statistics and probability are preferred.
This role offers a unique opportunity to work on impactful global trade data projects—like uncovering major global trends—while promoting knowledge-sharing and innovation within a high-performing data team. The hiring process includes aptitude tests, technical assessments, and interviews.
Job Overview and Responsibilities
About ImportGenius
ImportGenius is the pioneering company in data analytics for the global import/export industry. Our trade data is used by import/export businesses the world over to gain an advantage over their competitors. The insights we produce from our trade data have been used by analysts and journalists to predict an iPhone launch before the official announcement, track counterfeit money flowing into Venezuela, and, most recently, discover priceless antiques being shipped out of Russia by oligarchs. We are currently looking for talented data engineers to spearhead our data projects and inspire our gifted data team toward unparalleled innovation.

Duties & Responsibilities
● Use your creativity to find and extract useful insights from global trade data
● Build and maintain scalable ETL pipelines that perform complex transformations in parallel over large-scale datasets
● Work with stakeholders across the Executive, Product, Data, and Design teams to resolve data-related technical issues and support their needs
● Conduct upskilling or skill-transfer training from time to time (including, but not limited to, for Junior Data Engineers)
● Help implement industry-standard, world-class data engineering practices
● Help implement and scale machine learning techniques for processing trade data
● Perform other duties customarily performed by a professional in the same or similar engagements
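To give candidates a flavor of the ETL work above, here is a minimal, self-contained Python sketch of an extract/transform/load flow. In production this kind of logic would typically run on Apache Spark (e.g. via AWS Glue) rather than plain Python, and all record fields and sample values here (shipper, country, value_usd) are hypothetical placeholders, not ImportGenius's actual schema.

```python
# Minimal ETL sketch over hypothetical trade records.

def extract(raw_rows):
    """Parse raw CSV-like rows into dict records (hypothetical schema)."""
    records = []
    for row in raw_rows:
        shipper, country, value = row.split(",")
        records.append({
            "shipper": shipper.strip(),
            "country": country.strip().upper(),  # normalize country codes
            "value_usd": float(value),
        })
    return records

def transform(records, min_value=0.0):
    """Filter out low-value shipments and aggregate shipment value by country."""
    totals = {}
    for rec in records:
        if rec["value_usd"] >= min_value:
            totals[rec["country"]] = totals.get(rec["country"], 0.0) + rec["value_usd"]
    return totals

def load(totals):
    """Stand-in for a warehouse write (e.g. to Redshift): return sorted rows."""
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

raw = [
    "Acme Exports, cn, 120000",
    "Globex, CN, 80000",
    "Initech, ph, 50000",
]
print(load(transform(extract(raw))))  # → [('CN', 200000.0), ('PH', 50000.0)]
```

On Spark, the `transform` step would become a `groupBy("country").sum("value_usd")` over a DataFrame, letting the same aggregation run in parallel across a cluster.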
Required Skills and Experience
Programming Language: Python
Data Processing Framework & Engine: Apache Spark
Data Stack: AWS Glue, S3, Lambda, Redshift, OpenSearch, Athena, QuickSight
Cloud Service Provider: AWS
DBMS: MySQL, MongoDB

● Minimum 2 years of professional experience in Data Management
● Excellent written and verbal English communication skills
● Driven, with an intrinsic motivation to succeed and continuously improve yourself and your surroundings
● Creative and able to find and build solutions to complex problems
● Experience designing and building data warehouses and associated topologies
● Strong experience with relational, non-relational, and columnar databases
● Innovative, "out of the box" thinking; this role is central to the company's competitive advantage in providing unique insights from publicly available data
● Background in statistical analysis
● Ability to liaise with management and other teams
Why You Should Apply for This Position
- 14 Vacation Leaves
- 14 Sick Leaves
- 5 Emergency Leaves
- Annual Merit Increase based on performance
- Annual Milestone Bonus based on performance
- Weekends and Philippine holidays off
Preferred skills and experiences
● A basic grasp of statistics and probability is a plus
● Experience with Hadoop is a plus
Report to
Hiring Manager
Interview process
1. Aptitude and personality exam
2. Skill test
3. HR call
4. Final interview (1-2 rounds, based on level)