Job Description - Big Data Engineer
Develop and support ETL pipelines using Python and PySpark with robust monitoring and alarming
Develop data models that are optimized for business understandability and usability
Develop and optimize divisional data lake using best practices for partitioning, compression, parallelization, etc.
Develop and maintain metadata, data catalog, and documentation for internal business customers
Help internal business customers troubleshoot and optimize SQL and ETL solutions to solve reporting problems
9+ years of data warehouse experience, including Oracle, Teradata, and/or Redshift
Extensive experience working with AWS cloud services such as S3, Glue, RDS, Lambda, Redshift, DynamoDB, API Gateway, SNS, and SQS
Prior experience with Matillion ETL for Redshift is a plus
Excellent communication skills are a must
Extensive hands-on experience working with Apache Airflow
Original job Big Data Engineer posted on GrabJobs ©. To flag any issues with this job please use the Report Job button on GrabJobs.