Data Engineer

  • Job category IT
  • Contract Fulltime
  • Reference number VAC-10003410
  • Location Eindhoven
  • Contract type secondment via YER, Contract with Client
  • Industry IT & Telecom

About this vacancy

The supporting teams and the business need more (real-time) insight into the R&D IT design infrastructure. That is why we are looking for a front-end Splunk DevOps engineer / data engineer to develop reports, dashboards, and applications. As a front-end Splunk developer, your role is to talk with the business and supporting teams (often IT), understand their needs, and translate those into practical and reliable solutions. Data quality is key.

Job description

Given the huge quantity of data generated by the client's infrastructure, it is important to be comfortable working with Hadoop and Spark (import and export jobs, knowledge of HDFS) and to be able to join, correlate, process, clean, and handle all kinds of data formats (e.g. log files, CSV, TSV, key/value pairs, JSON, and unstructured data).


In addition, we consider proven experience in software engineering (e.g. Python, R), Linux infrastructure (Bash, Vim/Vi, terminal commands), and automation (GitLab, Docker, Ansible) key to being successful in this role. It is essential to be a team player and to continuously share knowledge with the team.

  • Technical design based on functional specifications
  • Mockups and prototypes to illustrate potential solutions
  • Develop Splunk reports, dashboards, and applications
  • Manage huge batches of data in Hadoop
  • Assemble & organize large, complex data sets that meet business requirements
  • Develop components in Python
  • Coach other team members on best practices
  • Take end-to-end responsibility for the products of the Data Analytics team
  • Support customers of the Data Analytics team with data analysis
  • Continuously improve products and processes
  • Write operational documentation
  • Maintain systems by monitoring and correcting software defects
  • Present and demo solutions to the business and other stakeholders
  • Liaise with vendors to resolve problems or to deliver improvements to existing components

Company

Our client provides High Performance Mixed Signal and Standard Product solutions that leverage its leading RF, Analog, Power Management, Interface, Security, and Digital Processing expertise. These innovations are used in a wide range of automotive, identification, wireless infrastructure, lighting, industrial, mobile, consumer, and computing applications. It is a global semiconductor company with operations in more than 35 countries, over 45,000 employees, and revenue of over $10 billion.

The R&D IT design infrastructure team supports the hardware designers with a HW design environment (HW design tools, compute, storage, network, and licenses). The aim is to reduce the lead times of IC design projects and to improve their product quality (Total Quality). The R&D IT Data Analytics team supports these teams by creating dashboards (with Splunk) that present real-time status information about the IT infrastructure, so that the utilization of IT resources can be optimized and any issue can be addressed quickly.

Offer description

  • Good employee benefits (e.g. work-life balance, pension, lease car, bonus model)
  • Challenging assignments
  • Excellent guidance from your consultant and YER's back office
  • Development opportunities, including the YER Talent Development Programme with a personal coach
  • Intensive support for international candidates (including Dutch lessons, tax-return and accommodation assistance)
  • A cooperative, results- and relationship-driven way of working
  • Friendly atmosphere and open culture
  • Community/network with other technology professionals from a variety of multinationals
  • Events and master classes with interesting speakers and attractive companies

Candidate profile

  • Master's degree or hogeschool (higher professional education) degree in computer science or a related field
  • Splunk certified
  • Hadoop certified (Nice to have)
  • Hands-on experience in building complex Big Data platforms and pipelines (ETL, both batch and stream data processing)
  • Hands-on experience managing and using Big Data processing frameworks (e.g. Hadoop, Kafka, Hive, Spark, Impala, Flume, Sqoop)
  • Experience setting up and testing CI/CD pipelines
  • Basic knowledge of databases (e.g. SQL, MS SQL)
  • Knowledge of AWS
  • Hands-on experience in the installation, configuration, and maintenance of the Hadoop ecosystem (HDFS, HBase, YARN, Hive, Sqoop, Hue, and Spark)
  • Hands-on experience with Linux (Red Hat), Unix, Amazon Web Services (AWS), Teradata, Oracle PL/SQL, and MySQL
  • Familiarity with data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling, and data mining
  • Data warehousing and ETL experience. Experience with data storage and retrieval is a plus.
  • Knowledge of tools such as Git, Jenkins, Docker, Spark Streaming, Kubernetes, and Ansible
  • Good knowledge of shell scripting
  • Proven capability to speak "business language"
  • Skills to define requirements with stakeholders
  • DevOps mindset; team player with a can-do mentality, eager to develop and maintain the products
  • Used to working in complex technical environments
  • Used to working in global teams
  • Experience with agile methodologies such as Scrum, DevOps, and SAFe (a plus)