Requirements:
– Bachelor’s degree in Computer Science, Engineering, or a related field;
– 5-7 years of experience in data engineering;
– Expertise in data modeling, ETL/ELT processes, and workflow orchestration tools;
– Mastery of SQL and experience working with relational databases;
– Proficiency in Python and PySpark, and experience with big data platforms;
– Experience creating data pipelines;
– Hands-on experience with cloud platforms such as AWS and Azure.
Responsibilities:
– Design, develop, and maintain scalable, efficient data pipelines;
– Implement data modeling and schema design best practices to ensure data quality;
– Create and maintain large-scale data processing systems and infrastructure;
– Drive optimization, testing, and tooling efforts;
– Stay updated with emerging technologies.