About the Role
We are seeking a motivated, experienced Data Engineer to join our team supporting a prestigious global banking client. This high-impact role is central to the client's cloud transformation journey. If you're passionate about building scalable data pipelines, automating workflows, and working with modern cloud technologies, this opportunity is for you.
Key Responsibilities
- Design, develop, and maintain robust data pipelines using Python and PySpark
- Create and manage DAGs in Apache Airflow for data orchestration and workflow automation (see the sketch after this list)
- Work extensively with Google Cloud Platform (GCP), including BigQuery and other native services
- Build and deploy RESTful APIs and microservices using Flask
- Use Helm charts for deploying services on Kubernetes
- Implement automated testing frameworks using Pytest
- Collaborate with cross-functional teams to deliver scalable and efficient data solutions
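To give candidates a concrete sense of the day-to-day work, here is a minimal sketch of an Airflow DAG in the TaskFlow style (Airflow 2.x). The pipeline name, schedule, and data are purely illustrative assumptions, not drawn from the client's actual codebase:

```python
# Hypothetical example only: a tiny daily ELT pipeline in Airflow's TaskFlow style.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_transactions_pipeline():
    @task
    def extract() -> list[dict]:
        # In practice this might read from a source system or a GCS bucket.
        return [{"account_id": 1, "amount": 250.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder filter; real transformations would typically run in PySpark.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to BigQuery, e.g. via the
        # google-cloud-bigquery client or a BigQuery operator.
        print(f"Loading {len(rows)} rows")

    # Chaining the calls wires up the extract -> transform -> load dependencies.
    load(transform(extract()))


example_transactions_pipeline()
```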
Must-Have Skills
- Strong proficiency in Python programming
- Hands-on experience with Google Cloud Platform (GCP) services, especially BigQuery
- Deep understanding of Apache Airflow and its ecosystem
- Experience with Flask for building APIs and microservices (a brief sketch follows this list)
- Familiarity with Kubernetes and Helm charts
- Knowledge of automated testing using Pytest
- Solid understanding of data pipeline architecture and cloud data engineering best practices
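As a flavor of the Flask and Pytest skills listed above, here is a minimal sketch of a Flask endpoint with a pytest check against it. The route, payload, and test names are hypothetical examples, not the client's API:

```python
# Hypothetical example only: a small Flask health endpoint plus a pytest test.
from flask import Flask, jsonify

app = Flask(__name__)


@app.get("/health")
def health():
    # Simple liveness check returning a JSON body.
    return jsonify(status="ok")


# In a real project this test would live in tests/test_health.py
# and be discovered and run by pytest.
def test_health():
    client = app.test_client()
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.get_json() == {"status": "ok"}
```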
Why Join Us?
- Work with a leading multinational bank undergoing a digital transformation
- Potential to convert to a full-time position based on performance
- Exposure to cutting-edge technologies and cloud-native architecture
- Collaborative and growth-focused work environment
- Based in the heart of Hyderabad’s tech hub