Big Data Engineer
BI & Data

Job type
Full-time
Experience
Senior specialist/Senior
Form of employment
B2B
Work mode
Hybrid work

Required skills

English

Big Data

Apache Spark

Java

Python

Groovy

Job description

Job Title: Full Stack Engineer (Big Data)

Location: Kraków


About the Role

You will work as part of a newly established engineering team in Kraków, responsible for the development, enhancement, and support of high-volume data processing systems and OLAP solutions used in global traded risk management.


Key Responsibilities

  • Design, develop, test, and deploy scalable IT systems to meet business objectives
  • Build data processing and calculation services integrated with risk analytics components
  • Collaborate with BAs, business users, vendors, and IT teams across regions
  • Integrate with analytical libraries and contribute to overall architecture decisions
  • Apply DevOps and Agile methodologies, focusing on test-driven development
  • Provide production support, manage incidents, and ensure platform stability
  • Contribute to both functional and non-functional aspects of delivery


Required Qualifications & Skills

  • Degree in Computer Science, IT, or a related field
  • Fluent in English, with strong communication and problem-solving skills
  • Hands-on experience with big data solutions and distributed systems (e.g., Apache Spark)
  • Strong backend development skills in Java 11+, Python, and Groovy
  • Experience in building REST APIs, microservices, and integrating with API gateways
  • Exposure to public cloud platforms, especially GCP or AWS
  • Familiarity with Spring (Boot, Batch, Cloud), Git, Maven, Unix/Linux
  • Experience with RDBMS (e.g., PostgreSQL) and data orchestration tools (e.g., Apache Airflow)
  • Solid understanding of test automation tools such as JUnit, Cucumber, Karate, and Rest Assured


Desirable Skills

  • Knowledge of financial or traded risk systems
  • Experience with UI/BI tools and streaming solutions
  • OLAP and distributed computation platforms such as ClickHouse, Druid, or Pinot
  • Familiarity with data lakehouse technologies (e.g., Dremio, Trino, Delta Lake, Iceberg)
  • Exposure to technologies like Apache Flink, Beam, Samza, Redis, Hazelcast
  • Containerization and orchestration tools: Docker, Kubernetes
  • Certifications: Scrum Master, PMP, FRM, or CFA
  • Knowledge of RPC frameworks (e.g., gRPC)


Check out other interesting job offers at: https://antal.pl/dla-kandydata