Overview: Seeking an experienced AWS Redshift Data Engineer with expertise in designing, developing, and optimizing data pipelines.
Qualifications:
- Advanced hands-on experience designing AWS data lake solutions.
- Experience integrating Redshift with other AWS services, such as DMS, Glue, Lambda, S3, Athena, and Airflow.
- Proficiency in Python programming with a focus on developing efficient Airflow DAGs and operators.
- Experience with PySpark and Glue ETL scripting, including functions such as relationalize, performing joins, and transforming DataFrames with PySpark code.
- Competency developing CloudFormation templates to deploy AWS infrastructure, including YAML-defined IAM policies and roles.
- Experience with Airflow DAG creation.
- Familiarity with debugging serverless applications using AWS tooling such as CloudWatch Logs and Logs Insights, CloudTrail, and IAM.
- Ability to work in a highly complex, object-oriented Python codebase.
- Strong understanding of ETL best practices, data integration, data modeling, and data transformation.
- Proficiency in identifying and resolving performance bottlenecks and fine-tuning Redshift queries.
- Familiarity with version control systems, particularly Git, for maintaining a structured code repository.
- Strong coding and problem-solving skills, and attention to detail in data quality and accuracy.
- Ability to work collaboratively in a fast-paced, agile environment and effectively communicate technical concepts to non-technical stakeholders.
Minimum Qualifications:
- 10+ years of relevant experience.
- Experience architecting, implementing, and managing end-to-end data pipelines, ensuring data accuracy, reliability, quality, performance, and timeliness.
- Candidates local to Wisconsin are strongly preferred.
- Dice Id: 91099677
- Position Id: 8320224