Are you passionate about building scalable data lake solutions using AWS and PySpark? At BHR Code, we are looking for a Senior Data Engineer with a deep understanding of AWS ETL tools, Redshift, Glue, PySpark, and Kafka to join our high-impact engineering team.
✅ Key Skills:
AWS Data Lake, PySpark, Glue, Athena, Kinesis, Lambda, Kafka, Redshift, DynamoDB, Advanced SQL, Python, DB2 Migration
Primary Skills:
AWS Data Lake & Lake Formation
AWS ETL Services: Glue, Lambda, Kinesis, Athena
Programming Languages: Python, PySpark
Streaming/Queueing: Kafka
Database Experience: Redshift, DynamoDB, DB2 (including migration), SQL
Role Requirements:
6–8 years of hands-on Data Engineering experience with AWS and Big Data tools
End-to-end design, development, and deployment of scalable data lake solutions in AWS
Strong experience in migrating on-prem DB2 systems to cloud-based storage and compute layers
Deep understanding of AWS cloud services related to data movement, storage, and transformation
Hands-on with Python, PySpark, Kafka, and advanced SQL for ETL and data pipeline development
Experience with AWS database services, especially Redshift, RDS, Aurora, and DynamoDB
Exposure to CI/CD using AWS CodePipeline, CodeBuild, and CloudFormation
Familiarity with AWS networking and access-control concepts, including VPCs, subnets, security groups, and IAM
Strong knowledge of Agile methodologies and experience with tools such as JIRA and Confluence
Proven technical mentoring experience and excellent communication skills
Bonus Points For:
Working knowledge of DevOps tools and container orchestration
Prior work experience in BFSI (Banking, Financial Services, and Insurance) or other regulated industries
🌟 Why BHR Code?
Work with passionate engineers on challenging data problems, get exposure to enterprise-grade cloud solutions, and grow in a collaborative environment that values innovation and agility.