Middle Data Engineer

Our globally distributed team is responsible for turning data into insights, services, and products for one of the leaders in the hospitality and travel domain. Around 50 Data Specialists (Analysts, Scientists, and Engineers) work together to create solutions that ultimately serve millions of travelers worldwide.

We leverage a rich data landscape (spend data, booking data, rate data, web analytics data, etc.) to equip business stakeholders and customers with advanced analytical solutions and to provide insight into traveler behavior and the lodging market. We also bring state-of-the-art algorithms into production to enhance the Search & Booking experience (with personalized recommendations) and to optimize the corporate hotel portfolio so that it meets both customer requirements and traveler needs.

Responsibilities:
- Build a big data platform running Kubernetes, Spark, Airflow, and other open-source technologies on AWS container services such as EKS and ECS.
- Work with team members to build and run data platform tools based on AWS Data Analytics technologies such as EMR, Glue, Kinesis, Athena, and more.
- Engage with other software or data engineers and teams about their platform use to ensure we are building the right things and solving the right problems.
- Drive technical topics along with related discussions and communications.
- Leverage the right tools and engineering practices to deliver testable, maintainable, and reliable cloud-native solutions.
- Work closely with other team members on solving technical challenges and improving current processes.
- Design, build, and improve a set of team-owned components.

Hard Skills:
- Experience using AWS with infrastructure-as-code and cloud-native technologies, in a "you build it, you run it" environment.
- Hands-on experience maintaining cloud-native container infrastructure on AWS (such as Kubernetes), and familiarity with languages such as Python, PySpark, and SQL.
- Strong desire to learn big data technologies and build large-scale (hundreds of TB) distributed data systems using modern open-source "big data" tooling.
- True CI/CD advocate: you are equally at home developing software and operating services and comfortable with DevOps tools on AWS.
- Value software simplicity and performance.
- Upper-Intermediate English is a must.

Nice to have:
- Knowledge of data warehousing (DWH)
- Practical experience with Docker, Kubernetes, Airflow, and CloudFormation/Terraform (or willingness to quickly learn and work with them)

About Brightgrove

Brightgrove is a multinational IT services company with development hubs in the US, Germany, and Ukraine. We've been successfully serving our customers globally for the past 11 years by building advanced-skilled teams of mature pros. Our strength is that we can hire the rarest specialists and retain them for years, with an average tenure of 2 years. People stay on the bright side because they simply love what they do and appreciate how we treat them. That's what our satisfaction survey says.

Sounds cliché or too good to be true? Come and see for yourself. Or check what our former employees have to say.

Company website:
https://careers.brightgrove.com/

DOU company page:
https://jobs.dou.ua/companies/brightgrove/

The job ad is no longer active
Job unpublished on 1 July 2022
