In Customer Insights our mission is to create a competitive advantage by building a clear understanding of customers' total travel behavior.
One of the foundational tasks in delivering this is to create a company-wide source of truth for Trips Data, at both the transactional and customer level. The main projects to deliver this include connecting data from different products, bringing the connected trip to life in our databases, metrics, and insights, creating a data warehouse, and applying this data to business use cases such as cross-selling opportunities, with many more to come.
As a Data Engineer, you are responsible for the development, performance, quality, and scaling of our data pipelines, with a special focus on data quality. You will work independently and will also be responsible for making technical decisions within a team.
● Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
● Solving issues with data and data pipelines, prioritizing based on customer impact.
● End-to-end ownership of data quality in our core datasets and data pipelines.
● Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
● Providing tools that enhance Data Quality company-wide.
● Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
● Acting as a point of contact for problems, communicating with both technical and non-technical audiences.
● Contributing to the growth of Booking.com through interviewing, on-boarding, or other recruitment efforts.
Mandatory Skills Description:
● Minimum of 3 years of experience in the field, using two or more server-side programming languages, preferably Java, Python, or Perl.
● Experience building scalable data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, MySQL, etc.
● Knowledgeable about data modeling, data access, and data storage techniques.
● Able to understand and develop stream-processing applications using technologies such as Flink, Kafka Streams, Spark Streaming, etc.
● Hands-on experience developing in and contributing to open-source data technologies, such as Hadoop.
● Demonstrable experience with SQL, HQL, CQL, etc.
● Experience working on large-scale systems.
● Good understanding of basic analytics and machine learning concepts.
● Preferably a university degree in Computer Science.
● Excellent written and spoken communication skills.
Luxoft is a high-end application outsourcing provider of choice and a trusted technology advisor to Global 2000 and medium-sized growth companies that apply compelling technologies to obtain leadership positions in their respective markets.
Luxoft attracts top talent and offers career growth and employment benefits. Our teams work on highly complex, innovative projects for leading companies around the globe.
Job posted on 27 May 2021.