Data Engineer | Snowflake

Job Type: Full Time

Job Description - Data Engineer | Snowflake

Date: Jun 12, 2024

Location:

Charlotte, NC, US, 28210

Company: Popular

Workplace Type: Remote


At Popular, we offer a wide variety of services and financial solutions to serve our communities in Puerto Rico, the United States, and the Virgin Islands. As employees, we are dedicated to making our customers' dreams come true by offering financial solutions at each stage of their lives. Our extensive trajectory demonstrates the resiliency and determination of our employees to innovate, reach for the right solutions, and strongly support the communities we serve; this is why we value their diverse skills, experiences, and backgrounds.

Are you ready for a rewarding career?

Over 8,000 people in Puerto Rico, the United States, and the Virgin Islands work at Popular.

Come and join our community!

The Opportunity

As a Data Engineer, you'll hold a significant position within the Analytical Engineering & Enablement pillar, dedicating your advanced expertise to the detailed design, development, and implementation of analytical solutions. Your primary focus will be on data preprocessing, feature engineering, and ensuring smooth data movement, which are essential for guiding informed decision-making and deriving actionable insights. You'll delve into advanced statistical analysis and data transformation techniques to address notable business challenges, thereby enhancing our operational efficacy. Your senior position will also involve providing mentorship and leading initiatives to drive the analytical engineering agenda forward.

Your key responsibilities:

You will collaborate with multifaceted teams of specialists spread across various locations to offer a broad spectrum of data and analytics solutions. You will address complicated challenges and propel advancement within the Enterprise Data & Analytics function.

Specifically:

Work in collaboration with teams from data architecture, governance, security, and various business units, as well as data analysts and other key stakeholders, to establish requirements for data and analytical pipelines and to comprehend integration patterns of source systems.

Set and maintain best practices for data quality, performance, and cost optimization within data and analytical pipelines, and ensure continuous monitoring is in place.

Guide and manage the Snowflake Center of Excellence (COE), which involves the creation, evaluation, and selection of ETL and ELT products, as well as the optimization and monitoring of ETL/ELT processes.

Perform feature engineering both online and offline to boost the accuracy and performance of analytical models.

Develop and keep up-to-date extensive documentation related to data to ensure data processes and models are transparent and traceable.

Implement and uphold data quality rules, standards, and metrics to guarantee the accuracy and integrity of analytical results.

Promote and establish software engineering best practices within the analytics team to deliver reliable and high-quality analytical solutions.

Apply version control and DataOps principles to guarantee the reproducibility and scalability of analytical models and processes.

Execute thorough data testing to detect and correct errors, inconsistencies, and inaccuracies in data processes and analytical models.

Employ suitable encoding techniques for data preparation to ensure the robustness and efficiency of analytical models.

Engage closely with various business units, data engineers, and other stakeholders to comprehend business challenges, collect requirements, and devise analytical solutions that align with the organization's objectives.

Identify and incorporate new data sources and methods to enhance the precision and overall effectiveness of analytical solutions.

Keep abreast of the latest developments and technologies in data science and analytics, integrating cutting-edge techniques where suitable.

Perform thorough data analysis and preliminary assessments to uncover trends, intrinsic patterns, and insights within the data.

Validate the integrity, dependability, and strength of analytical methods and their outcomes through stringent validation procedures.

Contribute to AI visualization and user-driven analytics initiatives by developing data visualizations that demystify complex analyses for a broad range of business stakeholders.

Pursue ongoing education and skill development, gaining knowledge and expertise from experienced data scientists and analysts.

Uphold stringent compliance with data governance, security, and privacy standards.

Participate in the design, creation, and deployment of analytical models, including predictive analytics, sophisticated clustering algorithms, and machine learning techniques, to examine intricate datasets and extract insights.

To qualify for the role, you must have:

Bachelor’s degree in Computer Science, Information Systems, Engineering, Statistics, Mathematics, or a related field. A master’s degree in a related field is a plus.

Minimum 15 years of experience implementing large-scale Data & Analytics platforms in AWS, Azure, or Google Cloud, as well as on-prem and hybrid environments.

Minimum 5 years of experience leading and managing various functional teams within Enterprise Data & Analytics (ED&A), such as data integration, data engineering, analytical engineering, BI/data visualization, and Data Operations, or in a similar role.

Experience leading data/analytical engineering teams and delivering data capabilities following waterfall, iterative, scaled agile, scrum, and kanban methodologies.

In-depth knowledge of data integration methodologies such as change data capture, ETL and ELT processes, real-time data processing, microservices, data lifecycle management, data lake, data warehouse, data vault, data mesh, data marketplace, and data science concepts.

Hands-on experience with on-prem and cloud data platforms such as Snowflake, AWS Redshift, Azure Synapse Analytics, Databricks, AWS Aurora, Oracle Exadata, SQL Server, Hadoop, Spark, SAS, and R.

Proficiency in data integration tools and frameworks such as Informatica PowerCenter and IICS, IBM DataStage, dbt, Matillion, Microsoft SSIS, AWS Glue, AWS Batch, Azure Data Factory, AWS Data Pipeline, Qlik Replicate, Oracle GoldenGate, SharePlex, Apache NiFi, and Python-based frameworks.

Experience implementing tools and services in the data security and data governance domains, such as data modeling, data classification, data access control, data masking, data quality, metadata management, cataloging, auditing, balancing, reconciliation, and data privacy compliance (e.g., GDPR and CCPA).

Excellent data analysis, profiling, and statistics skills, coupled with proficiency in SQL tools and technologies such as Oracle, SQL Server, and MySQL, and libraries such as pandas, NumPy, ggplot, Shiny, SciPy, scikit-learn, and Matplotlib.

Strong proficiency in Hive, SQL, Spark, Python, R, SAS, or other data manipulation and transformation languages.

Experience handling data streams, APIs, events, and container orchestration products such as OpenShift, EKS, and ECS.

Experience designing both online and offline feature stores that provide efficient data access for machine learning models.

Implementation experience with one or more cloud AI/ML platforms such as SageMaker, Dataiku, DataRobot, H2O.ai, Snowpark, ModelOp Center, and Domino Data Lab.

Experience handling high volumes of data in structured, semi-structured, and unstructured formats such as relational tables, flat files, XML, JSON, Parquet, Avro, mainframe copybooks, CSV, fixed-width, and hierarchical files.

Experience with DevOps and DataOps products such as Jenkins, Git, GitLab, Maven, Bitbucket, and Jira.

Experience in cloud transformation, including implementing strategies such as rehost, re-platform, repurchase, refactor/re-architect, retire, and retain.

Experience with log integration and observability products such as Splunk, Datadog, Grafana, AppDynamics, and CloudWatch.

Hands-on experience designing and building data pipelines leveraging AWS services such as S3, S3 Glacier, EC2, ECS, EMR, SageMaker, IAM, RDS, DynamoDB, Hive, GraphDB, and DocumentDB.

Strong analytical, problem-solving, and critical thinking skills.

Ability to communicate complex data concepts effectively to a diverse array of stakeholders, both technical and non-technical.

Exposure to financial analytics, customer analytics, segmentation, or in-market hypothesis testing is considered advantageous.

A passion for continuous learning and staying abreast of industry innovations and trends.

What we look for:

We are seeking enthusiastic and proactive leaders who have a clear vision and an unwavering commitment to remain at the forefront of data technology and science. Our ideal candidates are those who aim to foster a team spirit and collaboration and have a knack for adept management. It is essential that you display comprehensive technical proficiency and possess a rich understanding of the industry.

If you have a genuine drive for helping consumers achieve the full potential of their data while working towards your own development, this role is for you.

Region Locations

North Carolina, Puerto Rico, Florida or Illinois

Work Schedule

Hybrid or Remote depending on location

Important: The candidate must provide evidence of academic preparation or courses related to the job posting, if necessary.

If you have a disability and need assistance with the application process, please contact us at [email protected]. This email inbox is monitored for such types of requests only. All information you provide will be kept confidential and will be used only to the extent required to provide reasonable accommodations. Any other correspondence will not receive a response.

As a leading financial institution in the communities we serve, we reaffirm our commitment to always offer essential financial services and solutions for our customers, including during emergency situations and/or natural disasters. Popular’s employees are considered essential workers, whose role is critical in the continuity of these important services even under such circumstances. By applying to this position, you acknowledge that Popular may require your services during and immediately after any such events.

If you are a California resident, please click here to learn more about your privacy rights.


Popular is an Equal Opportunity Employer

Learn more about us at www.popular.com and keep updated with our latest job postings at https://jobs.popular.com/usa/.

Connect with us!

LinkedIn (http://www.linkedin.com/company/popularbank) | Facebook (https://www.facebook.com/popularbank) | Twitter (https://twitter.com/popularbank) | Instagram (https://www.instagram.com/popularbank/) | Blog (http://blog.popularbank.com/)
Original job Data Engineer | Snowflake posted on GrabJobs ©.