Senior Data Engineer

Company: HENI
Job Type: Full Time


Job Description

About us

HENI is a technology company pioneering art markets and information. We work with world-leading artists and estates across various sectors, including printmaking, book publishing, NFTs, digital content, and art research – all underpinned by cutting-edge research in analytics, AI, and machine learning.

Our team comprises talented web developers, data engineers, data scientists, and infrastructure specialists who thrive on solving complex problems. We enjoy developing machine learning tools and embracing the latest in infrastructure and coding methodologies. We maintain high standards of technical quality through thorough testing and a dedication to clean, efficient code.

HENI’s mission is to make art accessible to everyone by giving people the chance to learn about and collect art. We want to inspire people to love art, and we pride ourselves on broadening people’s mindsets and encouraging them to look at things from different perspectives. Many of our projects and commercial decisions are driven by our innovative collection of art market data.

Role overview

We are seeking a highly skilled and experienced Senior Data Engineer. You would join a strong team of roughly 20 people (software and data engineers, data scientists, and analysts) in a fast-paced and collaborative environment.

You will design and build robust data pipelines using the best of the open-source data engineering and scientific Python toolsets. Our tech stack includes Airbyte for data ingestion, Prefect for pipeline orchestration, and AWS Glue for managed ETL, along with Pandas and PySpark for pipeline logic. We use Delta Lake and PostgreSQL for data storage, emphasizing data integrity and version control in our workflows. Day-to-day, you will collaborate with the team via daily stand-ups, as well as engage with stakeholders at various levels across the business.
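
For illustration only, here is a minimal sketch of what a pipeline in this stack might look like, using a Prefect flow with Pandas for the transformation step. The dataset, column names, and file paths below are hypothetical and are not taken from HENI's actual codebase.

    import pandas as pd
    from prefect import flow, task

    @task
    def extract(path: str) -> pd.DataFrame:
        # Read raw records; in practice this step might be fed by Airbyte.
        return pd.read_csv(path)

    @task
    def transform(df: pd.DataFrame) -> pd.DataFrame:
        # Aggregate sales per artist (illustrative column names).
        return df.groupby("artist").agg(total_sales=("price", "sum")).reset_index()

    @task
    def load(df: pd.DataFrame, out_path: str) -> None:
        # Write the aggregated output; a real pipeline might target Delta Lake or PostgreSQL instead.
        df.to_parquet(out_path, index=False)

    @flow(name="artist-sales-aggregation")
    def sales_pipeline(in_path: str = "sales.csv", out_path: str = "sales_agg.parquet") -> None:
        raw = extract(in_path)
        load(transform(raw), out_path)

    if __name__ == "__main__":
        sales_pipeline()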

We are open to the role being remote; however, we would prefer candidates who can accommodate hybrid working, i.e. 2-3 days a week in the London office.

Responsibilities include:

Design, test, implement, and maintain scalable and secure data pipelines.
Ensure high levels of data availability and good data quality.
Monitor and optimise pipelines to ensure high performance and minimal downtime.
Collaboratively develop data models with different teams, proactively adapting them to suit evolving business needs.
Collaborate with data scientists and analytics teams to gather requirements, ingest, process, transform and aggregate data at scale.
Mentor other data engineers and team members on best practices in data engineering.

The Ideal Candidate Profile:

A minimum of 5 years of experience in data engineering.
A proven track record of deploying, monitoring, and maintaining production data pipelines, with a commitment to best practices in data engineering including operational metrics and observability.
Strong foundation in data engineering principles, with a particular focus on data quality and implementing data quality testing frameworks.
Proficiency in Python and familiarity with modern software engineering practices, including the 12-factor app principles, CI/CD, and Agile methodologies.
Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect), and of databases, data lakes, and data warehouses.
Experience with cloud technologies, particularly AWS services, with a focus on AWS Glue, S3, RDS, and Lambda.
Experience in implementing Infrastructure as Code (e.g. AWS Cloud Development Kit (CDK), Terraform, Ansible).
Familiarity with container technologies such as Docker, including the basics of building, managing, and optimizing containers.
Knowledge of Kubernetes is a plus, i.e. an understanding of containerized workflows and Kubernetes operations gained through practical experience developing, packaging, deploying, and managing applications in Kubernetes environments.
Strong qualifications in computing or quantitative fields (such as a First-Class Bachelor’s Degree) would be advantageous.

What We Offer

A unique approach to, and insights into, data in the creative industries.
A dynamic and challenging work environment with opportunities for growth and development.
A culture of innovation and continuous learning.
Flexible working hours and remote work options.
Open-plan and modern office located in central London.
Competitive salary and benefits package.

How to Apply

Please submit your CV, including a link to your GitHub profile and any relevant open-source contributions that illustrate your background. If you have any questions, you can contact us via

[email protected]


Location: London, England
