Data Engineer
The Data Engineer is a key technical contributor within the Data & Analytics function, responsible for architecting, building, and operating the data infrastructure that underpins the organisation’s analytics and reporting capabilities. With a minimum of three years’ hands-on experience, the successful candidate will drive the end-to-end design of scalable data pipelines, govern data quality, and collaborate across business units to translate complex data requirements into robust, production-grade solutions.
Key Responsibilities:
- Design and implement scalable, resilient data architectures aligned to enterprise strategy and business requirements.
- Build and maintain production-grade ELT/ETL pipelines that ingest, transform, and load data from diverse on-premises and cloud sources.
- Develop and version-control data models and schemas optimised for efficient querying, reporting, and ML consumption.
- Conduct performance tuning and optimisation of pipelines, queries, and storage layers to meet SLA and scalability targets.
- Design and maintain data integration solutions to ensure consistency and accuracy across systems.
- Implement data validation, cleansing, profiling, and enrichment frameworks as part of pipeline workflows.
- Enforce data quality KPIs; proactively monitor and alert on anomalies or pipeline failures.
- Support data lineage tracking and metadata management to enable full auditability.
- Build and manage cloud-native data solutions on Microsoft Azure.
- Administer and optimise cloud storage and compute resources for cost and performance.
- Ensure all data engineering solutions adhere to data governance frameworks, privacy regulations and security policies.
- Implement role-based access controls, encryption at rest and in transit, and data masking where required.
- Contribute to the development and maintenance of data catalogues, data dictionaries, and governance documentation.
- Partner with data analysts, data scientists, and business stakeholders to gather requirements and deliver fit-for-purpose solutions.
- Produce and maintain comprehensive technical documentation — pipeline designs, runbooks, and architectural decision records.
Qualifications:
- Bachelor’s degree in Computer Science, Data Science, Engineering, Information Systems, or a related field
- A relevant Master’s degree or professional certification (e.g. Azure Data Engineer Associate, Databricks Certified Associate) is advantageous
Required Skills:
- 3+ years’ experience in data engineering or a related field
- Proficiency with analytical SQL engines
- Proficiency in Python for data engineering tasks (PySpark, pandas, data pipeline scripting)
- Hands-on experience with Azure cloud data services
- Experience with cloud storage
- Competency with data integration platforms
- Strong understanding of data warehousing concepts, ETL processes, and data modelling techniques
- Working knowledge of big data technologies
- Experience with Git version control and CI/CD tooling
- Familiarity with containerisation for data workloads
- Understanding of ML pipelines and MLOps practices to support data science teams
- Exceptional analytical and problem-solving abilities with a data-driven mindset
- Strong stakeholder management and collaborative working style
- Self-driven; able to operate independently in a fast-paced, ambiguous environment
- Attention to detail and commitment to delivering high-quality, maintainable work
Highly Beneficial Skills:
- Experience with dbt for transformation layer management
- Knowledge of data observability and monitoring tools
- Infrastructure-as-Code experience with Terraform, Bicep, or ARM templates