Data Engineer (Big Data, Spark, Hadoop, Python) – Contract - Part-Time

Salary: $6,000 – $8,200 monthly

Job Type: Part-Time



Job Description

• Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals.

• Solve complex data problems to deliver insights that help the organization achieve its goals.

• Code in Python with tools like Apache Spark to build a multi-cluster data warehouse (a minimal illustrative sketch follows this list).

• Interact with other technology teams to define, prioritize, and ensure smooth deployments of operational components.

• Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices.

• Foster a culture of sharing, reuse, design for scale, stability, and operational efficiency of data and analytical solutions.

• Codify best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate data capturing and management.

• Strong experience in attribute mapping, data profiling, data cleansing, and technical data quality.

• Strong experience in ANSI SQL and in-depth knowledge of structured, semi-structured, and unstructured data.

• Good knowledge of data lakes and hands-on experience with cloud migration projects on providers such as AWS, Microsoft Azure, or GCP.

• Good to have: experience working with NoSQL and Spark SQL using AWS Glue, EMR, and columnar data stores.

• Good to have: an understanding of data security features such as data masking, data encryption, and role-based and fine-grained access control mechanisms.
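For illustration only (not part of the original posting): a minimal PySpark ETL sketch of the kind of Python/Spark pipeline work described above. All paths, table names, and column names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read raw semi-structured data (hypothetical S3 path).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: basic cleansing and data-quality filtering.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_amount").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Mask a sensitive column, as one simple example of a data-security control.
masked = clean.withColumn(
    "customer_email",
    F.regexp_replace("customer_email", r"(^.).*(@.*$)", r"$1***$2"),
)

# Load: write to a partitioned columnar store (Parquet) for the warehouse layer.
masked.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

spark.stop()

In practice, a job like this would be scheduled on EMR or AWS Glue and extended with data-quality checks and access controls of the sort listed above.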

Qualifications

• 4+ years of relevant experience in data engineering/analytics space

• Expertise in SQL and data analysis, with strong hands-on expertise in Python.

• Strong knowledge of one or more of the following big data tools: Hive, Hadoop, Impala, Spark, Kafka.

• Strong expertise and hands-on experience in ETL, reporting tools, data governance, and data warehousing.

• Experience developing solutions for cloud computing services and infrastructure.

• Experience developing and maintaining data warehouses in big data solutions.

• Up to date on industry trends in the analytics space from a data acquisition, processing, engineering, and management perspective.

• Experience in agile development.

• Strong people skills, specifically in collaboration and teamwork

• High level of curiosity, creativity, and problem-solving capabilities

No CV Required · Fast Interview via Chat



Location: 32 Pekin Street, Singapore 048762

