Data Engineer (offline)
Data-driven decision-making is integral to mobile advertising, marketing, and operations at our company. Your work will directly impact millions of end users. This is your chance to leave your legacy and be part of a highly successful, fast-growing company!
Responsibilities:
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies;
• Create and maintain optimal data pipeline architecture;
• Assemble large, complex data sets that meet functional and non-functional business requirements;
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.;
• Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs;
• Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Requirements:
• 3+ years of experience in a Data Engineer role;
• Advanced working knowledge of SQL, including experience with relational databases and query authoring, as well as working familiarity with a variety of database systems;
• Experience with AWS cloud services: EC2, EMR, RDS, Redshift;
• Experience with big data tools: Hadoop, Spark, Kafka, etc.;
• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra;
• Experience with object-oriented and functional scripting languages: PHP, Python, Java, C++, Scala, etc.;
• Experience building and optimizing "big data" pipelines, architectures, and data sets;
• Strong analytic skills related to working with unstructured datasets.
Would be a plus:
• Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management;
• Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores;
• Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.;
• Experience with stream-processing systems: Storm, Spark Streaming, etc.
We offer:
• Flexible hours, with the ability to work from home when necessary;
• Working directly with the customer;
• Cozy office in the city center, or remote work;
• Excellent compensation package.
The job ad is no longer active