Senior/Regular Data Engineer (Python, Spark, Hadoop)
Project Description:
As part of a data migration into the Azure cloud: extract data from the Oracle Hive cluster, convert it into Parquet format, and load it into Azure Blob Storage.
Utilize Spark for further data manipulation and analysis in a distributed manner.
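The extract-convert-load step described above can be sketched in PySpark. This is a minimal illustration only; the cluster, storage account, container, table, and partition-column names are placeholder assumptions, not details from the ad.

```python
# Hedged sketch of the Hive -> Parquet -> Azure Blob Storage pipeline.
# All concrete names (table, container, account) are hypothetical.

def blob_path(container: str, account: str, path: str) -> str:
    """Build a wasbs:// URL for Azure Blob Storage (WASB driver)."""
    return f"wasbs://{container}@{account}.blob.core.windows.net/{path}"

def main() -> None:
    # pyspark is imported locally so the helper above stays usable
    # without a Spark installation.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-to-azure-parquet")
        .enableHiveSupport()           # read tables from the Hive metastore
        .getOrCreate()
    )

    df = spark.table("sales.orders")   # hypothetical Hive table
    (df.write
       .mode("overwrite")
       .partitionBy("order_date")      # hypothetical partition column
       .parquet(blob_path("raw", "mystorageacct", "orders")))

    spark.stop()

if __name__ == "__main__":
    main()
```

Writing with `partitionBy` keeps the Parquet output organized for downstream Spark queries, which is the usual pattern for this kind of migration.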
Responsibilities:
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure 'big data' technologies;
• Implement data flows connecting operational systems, BI systems, and the big data platform;
• Build real-time, reliable, scalable, high-performing, distributed, fault-tolerant systems;
• Clean and transform data into a usable state for analytics; build a data dictionary;
• Create data tools that assist analytics and data science team members in their ML endeavors;
• Design and develop code, scripts and data pipelines that leverage structured and unstructured data;
• Implement measures to address data privacy, security, and compliance.
Mandatory Skills:
HiveQL, Scala, Java, Apache HBase, Python, Kafka Streams, Big Data, Apache Kafka, Hadoop
• Experience with designing data and analytics architectures in Microsoft Azure cloud;
• Experience with Big Data technologies like Spark, Hadoop, Hive, HBase, Kafka etc.;
• Fluency in several programming languages such as Python, Scala, Java, with the ability to pick up new languages and technologies quickly;
• Experience with data warehousing, data ingestion, and data profiling;
• Demonstrated teamwork, strong communication skills, and collaboration on complex engineering projects.
Nice-to-Have Skills:
BS in computer science or a related STEM field
The job ad is no longer active. Job unpublished on 2 July 2021.