Data Engineer
Working hours: 12:00 PM – 8:00 PM (CEST)
Requirements
Strong experience in building data platforms (end-to-end, not only pipelines).
Advanced Python for data processing and pipeline development.
Strong SQL (CTEs, window functions, query optimization).
Experience with data integration projects (SEI integration strongly expected).
Strong knowledge of data modelling (3NF, star/snowflake schemas).
Experience with Power BI (mandatory) – building dashboards and transforming data for reporting.
Experience with cloud providers (Azure / AWS / GCP).
Experience with data warehousing platforms (e.g. Snowflake, BigQuery, Redshift, Synapse).
Understanding of ETL vs ELT and batch vs near real-time processing.
Experience building data pipelines and orchestrating workflows (e.g. Airflow, Prefect, Dagster, Azure Data Factory).
Experience working in multi-team engineering environments.
Nice to have
TBM Studio knowledge (highly valued; 3+ years of experience is a strong advantage).
Experience with Apache Spark (PySpark / Spark SQL).
Understanding of Hadoop ecosystem fundamentals.
Experience with streaming / event-driven data processing (Kafka, Kinesis, Pub/Sub, Event Hubs).
Programming in Java or Scala.
Experience with Bash / shell scripting.
Familiarity with dbt.
Awareness of BI tools (Tableau, Looker).
Experience with API / SaaS data integrations.
Understanding of cloud cost optimization (FinOps).
Exposure to data governance, compliance, or GRC frameworks.
Responsibilities
Design, build, and maintain production-grade data pipelines (ETL/ELT).
Develop scalable solutions for processing large datasets.
Model and structure data for analytics and reporting (fact/dimension models).
Optimize SQL queries and data storage for performance and cost efficiency.
Design and implement end-to-end data architectures (batch and near real-time).
Build and manage data workflows and orchestration pipelines (e.g., scheduling, dependencies, retries).
Ensure data quality, validation, and reliability across pipelines.
Implement monitoring, logging, and failure recovery mechanisms.
Work with cloud platforms to build secure and scalable data solutions.
Collaborate with analysts, data scientists, and other engineers to support data needs.
Apply software engineering best practices (version control, CI/CD, testing).
(Optional) Develop and maintain streaming / real-time data pipelines.
Client
A global leader with a sharp focus on lottery solutions. Building on a long history of delivering safe and secure technology, the company demonstrates a strong commitment to its customers as a dedicated lottery service provider. It leverages collective insight, experience, and expertise to create reliable and engaging solutions that help lottery clients achieve their objectives, meet player needs, and deliver meaningful benefits to their communities.