BI / Data Engineer (Overonix)
Requirements:
• 2+ years of experience in the BI sphere;
• Experience with databases:
- SQL – 4+ years
- PostgreSQL – 2+ years
• 2+ years of experience with Python for data engineering;
• Experience designing and monitoring data pipelines;
• Strong data mining / processing skills;
• Understanding of data warehouse design principles;
• Hands-on experience designing ETL / ELT data pipelines and exposing processed data to end users.
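To illustrate the kind of work the ETL / ELT requirement refers to, here is a minimal Python sketch of an extract–transform–load step. The data, table name, and column names are hypothetical, and SQLite stands in for a real warehouse:

```python
import sqlite3

def extract():
    # Stand-in for pulling raw rows from a source system (API, CSV, OLTP DB).
    # These records and fields are hypothetical examples.
    return [
        {"order_id": 1, "amount": "19.99", "country": "ua"},
        {"order_id": 2, "amount": "5.00", "country": "de"},
    ]

def transform(rows):
    # Normalize types and casing before loading into the warehouse table.
    return [(r["order_id"], float(r["amount"]), r["country"].upper()) for r in rows]

def load(rows, conn):
    # Load cleaned rows into a (hypothetical) "orders" warehouse table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
```

In production, the extract step would typically read from PostgreSQL or a message stream, and the load target would be the data warehouse rather than an in-memory database.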
Responsibilities:
• Architect, design, and implement ETL pipelines that ingest massive amounts of data daily from multiple sources;
• Develop and maintain new projects and tools around the data warehouse;
• Optimize, improve, and refactor current applications as needed;
• Collaborate with software engineers to capture, format, and prepare data for various purposes;
• Create reports for customers in Metabase and Qlik Sense.
Will be a plus:
• Working experience with Metabase, Qlik Sense, or Tableau;
• Experience with Elasticsearch or other NoSQL data stores;
• Experience with streaming / near-real-time data warehouses, ideally Apache Kafka, KSQL, Spark, or similar;
• Familiarity with CI / CD using Jenkins pipelines or similar tools;
• Experience with data synchronization tools / processes between heterogeneous data sources (SQL, NoSQL, DMS, Debezium, Elasticsearch, etc.);
• Degree in Data Science, Statistics, Applied Math, Econometrics, Computer Science, or a related field.
We care about our employees and are interested in long-term cooperation. For our employees, we offer:
• Competitive compensation depending on experience and skills;
• Career growth opportunities;
• Paid sick leave and regular vacations;
• English classes with a native speaker;
• Health insurance;
• Free lunches.
About RQ Team
RQ Team is a recruiting company specializing in the high-quality selection of IT specialists for the teams of Ukrainian and foreign companies. It was created in response to the continuous growth of Ukrainian and foreign companies. Company website:
https://rqteam.io
The job ad is no longer active
Job unpublished on
28 October 2020