Big Data Engineer

  • Job category IT
  • Employment Full-time
  • Reference number VAC-10013593
  • Location Utrecht
  • Contract type Secondment via YER
  • Industry IT & Telecom

About this vacancy

Our client is rapidly expanding its use of Big Data and has multiple programs running. With a growing number of projects, they are looking to expand their Big Data team.

Job description

We have openings for Big Data Engineer positions. These engineers will help mature the client's AWS foundation, grow the real-time/streaming stack (Kafka), and develop streaming use cases. For this we need engineers who have experience with both CloudFormation and Big Data engineering. AWS experience is a must.

Company

YER is collaborating with one of the industry's leading telecom and broadcasting companies to achieve their goals. Are you looking for an opportunity to work in a complex environment on the latest Big Data innovations and to team up with highly skilled people? Then read on and apply!

Offer description

You will be employed by YER and seconded to our client. We offer:

  • Good employee benefits (e.g. work-life balance, pension, lease car, bonus model)
  • Challenging assignments
  • Excellent guidance from your consultant and YER's back office
  • Development opportunities
  • Intensive support for international candidates (including Dutch lessons, tax-return and accommodation assistance)
  • A cooperative, results- and relationship-driven way of working
  • Friendly atmosphere and open culture
  • Community/network with other technology professionals from a variety of multinationals
  • Events and master classes with interesting speakers and attractive companies


Candidate profile

Requirements:

  • Experience with deployment and provisioning automation tools (AWS CloudFormation, Ansible, Docker, CI/CD (GitLab), Kubernetes, or Helm charts)
  • 3+ years of programming experience (Python, Scala, etc.)
  • 3+ years of Big Data experience and a deep understanding and application of modern data processing technology stacks (Hadoop, Spark, Hive, Kafka, etc.)
  • Deep understanding of streaming data architectures and technologies for real-time and low-latency data processing (Kafka)
  • Experience with data pipeline and workflow management tools such as Oozie or Airflow
  • Ability to drive adoption of data engineering best practices and share your knowledge
  • 1+ years of recent experience working with AWS is a must (EMR, Glue)
  • Platform operations and DevOps experience on AWS is preferred


Knock-out criteria:

  • Agile way of working (WoW)
  • Hadoop
  • Spark
  • Hive
  • Kafka


Nice-to-haves:

  • Experience with exposing APIs and API gateways
  • Experience with data science tools such as SageMaker or MLflow
  • Experience with AWS Athena, IAM policies, and connectivity