Jobs
· 165 views · 22 applications · 8d
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · Intermediate
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Experience working with high-volume tables (10M+ rows).
Optional skills (as a plus):
• Experience with Spark (PySpark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of Data Science and Machine Learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (API, integration logic);
• Implement various data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models.
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 73 views · 21 applications · 19d
Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
MS Azure Platform:
Databricks: Experience in managing and analyzing large datasets, creating ETL processes, and data pipelines.
Azure Data Explorer (ADX): Knowledge in querying and analyzing data in real-time.
Azure Synapse Analytics: Experience in integrating and analyzing data from various sources.
CI/CD: Experience with Continuous Integration and Continuous Deployment to ensure automated and efficient development and deployment processes.
DevOps:
Experience collaborating with development teams to support the deployment and maintenance of data platforms.
Knowledge in automating infrastructure and processes.
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
· 75 views · 6 applications · 6d
Data Engineer
Ukraine · Product · 2 years of experience · Upper-Intermediate
Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.
At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.
About the project:
You will be part of our product team. The team is responsible for building data marts, creating JSON documents based on them, and sending them via Kafka. The new Data Platform is built in AWS.
We are looking for a motivated and result-oriented Data Engineer to join our team in developing Data Products on our new Data Platform.
Your future responsibilities:
- Building an ETL process using AWS services (S3, Athena, AWS Glue), Airflow, PySpark, SQL, GitHub, and Kafka
- Building SQL queries from data sources on PySpark
- Data processing and writing to the Data Mart Iceberg table
- Building an integration solution on the Airflow + Kafka stack
- Data processing in JSON with publication to Kafka (a short illustrative sketch follows this list)
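For illustration only (not part of the vacancy text): a minimal PySpark sketch of the kind of step described above. It assumes a Spark session already configured with an Iceberg catalog named glue_catalog and a reachable Kafka broker; the table, topic, and column names are made up.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes the session is started with Iceberg + AWS Glue catalog packages configured.
spark = SparkSession.builder.appName("datamart-to-kafka").getOrCreate()

# 1. Build the data mart with a SQL query over source tables (hypothetical names).
mart = spark.sql("""
    SELECT c.customer_id,
           SUM(t.amount)  AS total_amount,
           MAX(t.tx_date) AS last_tx_date
    FROM glue_catalog.raw.transactions t
    JOIN glue_catalog.raw.customers c ON c.customer_id = t.customer_id
    GROUP BY c.customer_id
""")

# 2. Write the result to an Iceberg data mart table.
mart.writeTo("glue_catalog.mart.customer_totals").createOrReplace()

# 3. Serialize each row as JSON and publish it to a Kafka topic.
(mart.select(F.col("customer_id").cast("string").alias("key"),
             F.to_json(F.struct(*mart.columns)).alias("value"))
     .write
     .format("kafka")
     .option("kafka.bootstrap.servers", "broker:9092")  # placeholder address
     .option("topic", "customer-totals")                # hypothetical topic
     .save())
```

On the real platform, steps like these would typically run as separate Airflow tasks rather than as a single script.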
Your skills and experience:
- Higher education in the field of Computer Science/Engineering
- 2+ years of relevant experience in data engineering or related roles
- Knowledge of programming languages: Python, PL/SQL
- 2+ years of experience in parsing, transforming and storing data in a Big Data environment (e.g., Hadoop, Spark)
- 1+ years of experience with AWS Lambda, Glue, Athena, and S3
- Experience with Kafka architecture, configuration and support
- Experience with database development and optimization (Oracle/PostgreSQL)
- Experience in developing Big Data pipelines
- Experience with Avro, JSON data formats
- Experience with AWS data services and infrastructure management
- Understanding of the principles of working in an Agile environment
We Offer What Matters Most to You:
- Competitive Salary: We guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
- Social Package: Official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
- Comfortable Working Conditions: Possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
- Wellbeing Program: All employees have access to medical insurance from the first working day, as well as consultations with a psychologist, nutritionist, or lawyer. We also offer discount programs for sports and purchases, family days for children and adults, and in-office massages
- Learning and Development: Access to over 130 online training resources, corporate training programs in CX, Data, IT Security, Leadership, Agile, as well as a corporate library and English lessons
- Great Team: Our colleagues form a community where curiosity, talent, and innovation are welcomed. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
- Career Opportunities: We encourage advancement within the bank across different functions
- Innovations and Technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, VictoriaMetrics, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: SQL-Oracle, PgSQL, MsSQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
- Support Program for Defenders: We maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and are developing the Bank's veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes
Why Raiffeisen Bank?
- People are our main value. We support, acknowledge, educate, and actively involve them in driving change
- One of the largest IT product teams among the country's banks
- Recognized as the best employer by EY, Forbes, Randstad, FranklinCovey, and Delo.UA
- One of the largest lenders to the economy and agricultural business among private banks
- The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans)
- One of the largest taxpayers in Ukraine; we paid 6.6 billion UAH in taxes in 2023
Opportunities for Everyone:
- Raiffeisen Bank is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
- We support the principles of diversity, equality and inclusiveness
- We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
- We cooperate with students and older people, creating conditions for growth at any career stage
Want to learn more? β Follow us on social media:
Facebook, Instagram, LinkedIn
· 52 views · 6 applications · 5d
Databricks Solutions Architect
Full Remote · Worldwide · 7 years of experience · Upper-Intermediate
Requirements:
- Hands-on and technical expertise with Apache Spark.
- Hands-on experience with Databricks over the course of several large-scale projects.
- Databricks Certified Data Engineer Professional certification
- Proven experience in designing and implementing big data technologies, including Hadoop, NoSQL, MPP, OLTP, OLAP.
- Over 7 years of experience working as a Software Engineer or Data Engineer, including query tuning, performance tuning, troubleshooting, and debugging Spark and/or other big data solutions.
- Proficiency in programming with Python, Scala, or Java.
- Familiarity with Development Tools for CI/CD, Unit and Integration testing, Automation and Orchestration, REST API, BI tools, and SQL Interfaces (e.g., Jenkins).
- Experience in customer-facing roles such as pre-sales, post-sales, technical architecture guidance, or consulting.
- Desired experience in Data Science/ML Engineering, including model selection, model lifecycle, hyper-parameter tuning, model serving, deep learning, using tools like MLFlow.
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
· 77 views · 24 applications · 29d
Senior Data Engineer
Countries of Europe or Ukraine · 3 years of experience · Upper-Intermediate
Role Overview:
As a Data Engineer at QuintaGroup, you will design and optimize data pipelines within the AWS ecosystem for a US-based B2B marketplace platform. The platform simplifies and accelerates business operations by providing seamless data solutions and advanced analytics. You'll collaborate with data scientists, analysts, and cross-functional teams to deliver innovative results.
Key Responsibilities:
• Develop, implement, and optimize data pipelines using PySpark in AWS environments (an illustrative sketch follows this list).
• Utilize AWS services such as S3, Glue, Lambda, and EMR to create scalable and efficient data solutions.
• Enhance PySpark workflows for performance, reliability, and cost-effectiveness.
• Maintain data quality through rigorous testing and monitoring processes.
• Apply data governance, security, and compliance best practices.
• Document workflows, processes, and designs to support team collaboration and maintenance.
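Purely as an illustration (not from the posting): a minimal AWS Glue PySpark job of the kind described above, reading raw JSON from S3, applying a light transformation, and writing partitioned Parquet back to S3. Bucket names, paths, and columns are assumptions.

```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job boilerplate: resolve job arguments and build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read raw JSON events from S3 (hypothetical bucket and layout).
events = spark.read.json("s3://example-raw-bucket/events/")

# Basic cleaning: drop malformed rows and derive a partition column.
cleaned = (events
           .dropna(subset=["event_id", "event_ts"])
           .withColumn("event_date", F.to_date("event_ts")))

# Write curated data back to S3 as partitioned Parquet, queryable via Athena / Glue Catalog.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))
```

The same job could equally be expressed with Glue DynamicFrames; plain Spark DataFrames are used here for brevity.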
Requirements:
• 3+ years of experience in data engineering, with a focus on PySpark.
• Strong experience with AWS services.
• Proficiency in Python and related frameworks or libraries.
• Solid understanding of distributed computing and Apache Spark.
• Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation) is a plus.
• Strong analytical and problem-solving skills with attention to detail.
• Excellent communication skills and ability to work in dynamic, team-oriented environments.
• Upper-Intermediate level of English.
Tech Stack:
• Programming Languages: Python (with Pandas and PySpark).
• AWS Services: S3, Glue (Glue Data Catalog, Glue Crawler, Glue Jobs with PySpark), Lambda, ECS, Athena, Aurora (RDS), AppConfig, API Gateway, Step Functions, Quicksight, EventBridge.
• Infrastructure: Terraform for Infrastructure-as-Code.
We Offer:
• Flexible working format: remote, office-based, or hybrid.
• Competitive salary and compensation package.
• Personalized career growth and mentorship programs.
• Professional development tools, including tech talks and training sessions.
• Access to active tech communities and regular knowledge sharing.
• Education reimbursement opportunities.
• Memorable anniversary gifts.
• Corporate events and team-building activities.
• Location-specific benefits.
· 28 views · 2 applications · 27d
Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
N-iX is looking for a Senior Data Engineer to join our skilled and continuously growing team! The position is for our fintech customer from Europe. The person would be a part of the customer's Data Platform team - a key function within the company, responsible for the architecture, development, and management of our core data infrastructure. We leverage Snowflake, Looker, Airflow (MWAA), and dbt while managing DevOps configurations for the platform. Our goal is to build and maintain a self-serve data platform that empowers stakeholders with tools for efficient data management while ensuring security, governance, and compliance standards.
Requirements:
- 6+ years of experience in Data Engineering.
- Strong proficiency in Airflow, Python, and SQL.
- Hands-on experience with cloud data warehouses (Snowflake or equivalent).
- Solid understanding of AWS services and Kubernetes at an advanced user level.
- Familiarity with Data Quality and Observability best practices.
- Ability to thrive in a dynamic environment with a strong sense of ownership and responsibility.
- Analytical mindset and problem-solving skills for tackling complex technical challenges.
- Bachelor's degree in Mathematics, Computer Science, or other relevant quantitative fields
Nice-to-Have Skills:
- Experience with DevOps practices, CI/CD, and Infrastructure as Code (IaC).
- Hands-on experience with Looker or other BI tools.
- Performance optimization of large-scale data pipelines.
- Knowledge of metadata management and Data Governance best practices.
Responsibilities:
- Design and develop a scalable data platform to efficiently process and analyze large volumes of data using Snowflake, Looker, Airflow, and dbt.
- Enhance the self-serve data platform by implementing new features to improve stakeholder access and usability.
- Work with cross-functional teams to provide tailored data solutions and optimize data pipelines.
- Foster a culture of knowledge sharing within the team to enhance collaboration and continuous learning.
- Stay updated on emerging technologies and best practices in data engineering and bring innovative ideas to improve the platform.
· 589 views · 52 applications · 8d
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer;
• Experience with SQL;
• Experience with Python;
Optional skills (as a plus):
• Experience with ETL / ELT pipelines;
• Experience with PySpark;
• Experience with Airflow;
• Experience with Databricks;
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions;
• Work with SQL queries for data extraction and analysis;
• Data analysis and application of data processing algorithms to solve business problems;
We offer:
• Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
• Opportunity to work with the high-skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 194 views · 24 applications · 16d
Data Engineer
Full Remote · Ukraine · Product · 3 years of experience · Intermediate
We are looking for an experienced Data Engineer to design and maintain robust data infrastructure across our systems. In this role, you will be responsible for building scalable data pipelines, ensuring data integrity, and integrating third-party data sources. Your primary focus will be to enable efficient data flow and support analytical capabilities across the organization. You will also contribute to the development of our data architecture, implement best engineering practices, and collaborate closely with cross-functional teams to turn raw data into actionable insights.
Responsibilities
- Communicate with both technical and non-technical audiences to gather requirements
- Review and analyze data and logic to ensure consistency and accuracy
- Design, implement, and maintain data pipelines for efficient data flow
- Integrate and support developed solutions
- Research and evaluate third-party components for potential use
- Follow best engineering practices: refactoring, code review, testing, continuous delivery, and Scrum
- Design, optimize, and support data storage
Requirements
- At least 5 years of experience in data engineering
- Experience in requirement gathering and communication with stakeholders
- Strong knowledge of DWH (data warehouse) architecture and principles
- Practical experience building ETL pipelines and designing data warehouses
- Deep experience with Python with a strong focus on PySpark
- Proficiency in SQL and databases such as PostgreSQL, ClickHouse, MySQL
- Hands-on experience with data scraping and integrating third-party sources and APIs
- Solid understanding of software design patterns, algorithms, and data structures
- Intermediate English proficiency
Will be a plus
- Experience with RabbitMQ or Kafka
- Understanding of web application architecture
- Familiarity with DataOps practices
- Background in FinTech or Trading domains
We offer
- Tax expenses coverage for private entrepreneurs in Ukraine
- Expert support and guidance for Ukrainian private entrepreneurs
- 20 paid vacation days per year
- 10 paid sick leave days per year
- Public holidays as per the company's approved Public holiday list
- Medical insurance
- Opportunity to work remotely
- Professional education budget
- Language learning budget
- Wellness budget (gym membership, sports gear and related expenses)
· 39 views · 8 applications · 30d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
Our long-standing client from the UK is looking for a Senior Data Engineer.
Project: Decommissioning legacy software and systems
Tech stack:
DBT, Snowflake, SQL, Python, Fivetran
Requirements:
- Solid experience with CI/CD processes in SSIS
- Proven track record of decommissioning legacy systems and migrating data to modern platforms (e.g., Snowflake)
- Experience with AWS (preferred) or Azure
- Communicative and proactive team player, able to collaborate and deliver
- Independent and flexible when switching between projects
- English: Upper Intermediate or higher
· 35 views · 0 applications · 2d
Data Engineer/Analyst
Office Work · Spain · Product · 3 years of experience · Intermediate · Ukrainian Product 🇺🇦
We are the creators of a new fintech era!
Our mission is to change this world by making blockchain accessible to everyone in everyday life. WhiteBIT is a global team of over 1,200 professionals united by one mission: to shape the new world order in the Web3 era. Each of our employees is fully engaged in this transformative journey.
We work on our blockchain platform, providing maximum transparency and security for more than 8 million users worldwide. Our breakthrough solutions, incredible speed of adaptation to market challenges, and technological superiority are the strengths that take us beyond ordinary companies. Our official partners include the National Football Team of Ukraine, FC Barcelona, Lifecell, FACEIT and VISA.
The future of Web3 starts with you: join us as a Data Engineer/Analyst!
Requirements:
- 3+ years of experience as a Data Analyst / Quant Analyst / Risk Analyst.
- Strong proficiency in Python (pandas, numpy, pyarrow, SQLAlchemy).
- Deep knowledge of SQL (analysis, aggregation, window functions).
- Experience with BI tools (Tableau, Grafana).
- Scripting experience in Python for automation and report integration.
- Solid understanding of trading principles, margining, VaR, and risk models.
- Proven ability to work with large-scale datasets (millions of rows, low-latency environments).
- Experience working with technical teams to deliver business-oriented analytics.
Responsibilities:
- Build and maintain analytics for PnL, risk, and positions (a toy sketch follows this list).
- Monitor key performance and risk metrics.
- Develop and optimize ETL/ELT pipelines (both batch and real-time).
- Configure and enhance BI dashboards (Tableau, Grafana).
- Support alerts and anomaly detection mechanisms.
- Work with internal databases, APIs, and streaming data pipelines.
- Collaborate closely with risk, engineering, and operations teams.
- Contribute to the development of the analytics platform: from storage to visualization.
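For illustration only (not from the posting): a small pandas sketch of the kind of position/PnL aggregation such analytics might start from. The trade records, column names, and mark prices are hypothetical.

```python
import pandas as pd

# Hypothetical trade records: positive qty = buy, negative qty = sell.
trades = pd.DataFrame({
    "account": ["A1", "A1", "A2", "A2"],
    "symbol":  ["BTCUSDT", "BTCUSDT", "ETHUSDT", "ETHUSDT"],
    "qty":     [0.5, -0.2, 4.0, -1.0],
    "price":   [60_000.0, 62_000.0, 3_000.0, 3_100.0],
})
mark_prices = {"BTCUSDT": 61_500.0, "ETHUSDT": 3_050.0}

# Net position and realized cash flow per account/symbol.
trades["cash_flow"] = -trades["qty"] * trades["price"]
agg = trades.groupby(["account", "symbol"], as_index=False).agg(
    position=("qty", "sum"),
    cash_flow=("cash_flow", "sum"),
)

# Mark-to-market PnL = realized cash flow + open position valued at the mark price.
agg["mark_price"] = agg["symbol"].map(mark_prices)
agg["pnl"] = agg["cash_flow"] + agg["position"] * agg["mark_price"]
print(agg)
```

Real risk analytics (VaR, margining) would build on aggregations like this, fed by ETL pipelines rather than in-memory literals.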
Work conditions
Immerse yourself in Crypto & Web3:
- Master cutting-edge technologies and become an expert in the most innovative industry.
Work with the Fintech of the Future:
- Develop your skills in digital finance and shape the global market.
Take Your Professionalism to the Next Level:
- Gain unique experience and be part of global transformations.
Drive Innovations:
- Influence the industry and contribute to groundbreaking solutions.
Join a Strong Team:
- Collaborate with top experts worldwide and grow alongside the best.
Work-Life Balance & Well-being:
- Modern equipment.
- Comfortable working conditions and an inspiring environment to help you thrive.
- 30 calendar days of paid leave.
- Additional days off for national holidays.
With us, you'll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you're ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
Let's Build the Future Together!
WhiteBIT offers all candidates an equal opportunity to join the team. All hiring decisions are made without regard to race, national origin, gender identity or sexual orientation, age, religion, disability, medical condition, marital status, familial status, veteran status, or any other legally protected characteristic of an individual.
· 23 views · 4 applications · 15d
Cloud System Engineer
Full Remote · Ukraine · Product · 2 years of experience · Pre-Intermediate
Requirements:
- Knowledge of the core functionality of virtualization platforms;
- Experience implementing and migrating workloads in virtualized environment;
- Experience in complex IT solutions and Hybrid Cloud solution projects.
- Good understanding of IT-infrastructure services is a plus;
- Strong knowledge in troubleshooting of complex environments in case of failure;
- At least basic knowledge in networking & information security is an advantage
- Hyper-V, Proxmox, VMWare experience would be an advantage;
- Experience in the area of services outsourcing (as customer and/or provider) is an advantage.
- Work experience of 2+ years in a similar position
- Scripting and programming experience/background in PowerShell/Bash is an advantage;
- Strong team communication skills, both verbal and written;
- Experience in technical documentation writing and preparation;
- English skills: intermediate level is the minimum and is mandatory for communication with global teams;
- Industry certification focused on relevant solution area.
Areas of Responsibility include:
- Participating in deployment and IT-infrastructure migration projects, Hybrid Cloud solution projects; Client support;
- Consulting on migration of IT workloads in complex infrastructures;
- Presales support (articulating service value in the sales process; up-sell and cross-sell capability);
- Project documentation: technical concepts
- Education and development in professional area including necessary certifications.
· 32 views · 2 applications · 6d
Data Engineer TL / Poland
EU · 4 years of experience · Upper-Intermediate
On behalf of our customer, we are seeking a DataOps Team Lead to join our global R&D department.
Our customer is an innovative technology company led by data scientists and engineers devoted to mobile app growth. They focus on solving the key challenge of growth for mobile apps by building Machine Learning and Big Data-driven technology that can both accurately predict what apps a user will like and connect them in a compelling way.
We are looking for a data-centric, quality-driven team leader focusing on data process observability. The person is passionate about building high-quality data products and processes, as well as supporting production data processes and ad-hoc data requests.
As a DataOps TL, you will be in charge of the quality of service as well as the quality of the data and knowledge platform for all data processes. You'll coordinate with stakeholders and play a major role in driving the business by promoting the quality and stability of the data performance and lifecycle, and by giving the operational groups immediate abilities to affect daily business outcomes.
Responsibilities:
- Process monitoring - managing and monitoring the daily data processes; troubleshooting server and process issues, escalating bugs and documenting data issues.
- Ad-hoc operation configuration changes - be the extension of the operation side into the data process; use Airflow and Python scripting alongside SQL to extract specific client-relevant data points and calibrate certain aspects of the process.
- Data quality automation - Creating and maintaining data quality tests and validations using python code and testing frameworks.
- Metadata store ownership - creating and maintaining the metadata store; managing the metadata system which holds metadata on tables, columns, calculations, and lineage; participating in the design and development of the knowledge-base metastore and UX, in order to be the pivotal point of contact for questions about tables, columns, and how they are connected (e.g., What is the data source? What is it used for? Why is this field calculated this way?).
Requirements:
- Over 2 years in a leadership role within a data team.
- Over 3 years of hands-on experience as a Data Engineer, with strong proficiency in Python and Airflow.
- Solid background in working with both SQL and NoSQL databases and data warehouses, including but not limited to MySQL, Presto, Athena, Couchbase, MemSQL, and MongoDB.
- Bachelorβs degree or higher in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical discipline.
- Highly organized with a proactive mindset.
- Strong service orientation and a collaborative approach to problem-solving.
Nice to have skills:
- Previous experience as a NOC or DevOps engineer is a plus.
- Familiarity with PySpark is considered an advantage.
What we can offer you
- Remote work from Poland, flexible working schedule
- Accounting support & consultation
- Opportunities for learning and developing on the project
- 20 working days of annual vacation
- 5 days paid sick leaves/days off; state holidays
- Provide working equipment
· 169 views · 30 applications · 6d
Data Engineer
Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product 🇺🇦
Headway Inc is a global tech company, revolutionizing lifelong learning by creating digital products for over 150 million users worldwide. Our mission is to help people grow. We're proud to be ranked 4th among the World's Top EdTech Companies by TIME magazine. We believe lifelong learning should be accessible, personalized, and impactful to each individual. That's how we change the world and why we bring together exceptional minds.
The core of our achievements is our team. We believe in people and shared values SELECT. That's why, together with Yuliya Savchuk, Engineering Manager of the MIT team, we're looking for a Data Engineer to join our team of superstars transforming the EdTech industry.
About the role:
With business scaling, we see the need to strengthen the team that is working on building a data analytics platform for Headway Inc. We need to ensure that every business area and our products have reliable data to drive deep insights and innovation.
Data is at the core of our company. You will build and maintain a reliable, efficient, and scalable data infrastructure that enables Headway Inc to leverage data as a strategic asset for informed decision-making, driving innovation, and achieving business goals.
What awaits you on our team:
- Have the opportunity to join the team of a global EdTech company that creates socially impactful products for the international market.
- Have the opportunity to collaborate with a large team of analysts and marketers β to create solutions that have a direct and tangible impact on their work.
- You'll be able to use a wide variety of modern tools and independently decide which technologies are most appropriate to apply.
- We work in an atmosphere of freedom and responsibility.
- Your decisions and ideas will actively impact the business. You'll own the full development lifecycle, from solution design through to user feedback and iteration.
What will you do:
At MIT, the Engineering team develops data platforms and automation tools that help teams work more efficiently and make informed marketing decisions. We create solutions that allow us to analyze and utilize data for effective decision-making in marketing strategies, improving results and increasing return on investment.
- Communicate and collaborate with the analytics team, being responsible for delivering data to the analytical database for visualization.
- Create and maintain optimal and scalable pipeline architecture. Develop new pipelines and refine existing ones.
- Develop ETL/ELT processes and Data Lake architecture.
- Research and collect large, complex data.
- Identify, design, and implement internal process improvements.
- Continuously learn, develop, and utilize cutting-edge technologies.
What do you need to join us:
- Experience in production development and knowledge of any programming language, including Python, Golang, Java, etc.
- Understanding of Data Lakes, Data Warehousing, OLAP/OLTP approaches, and ETL/ELT processes.
- Proficiency in SQL and experience working with databases.
- Workflow orchestration experience.
- Problem-solving skills and a passion for creating efficient, well-tested, and maintainable solutions.
- Alignment with the values of our team (SELECT).
Good to have:
- Experience with GCP Data Services and Airflow.
- Experience with CI/CD in Data Engineering.
- Knowledge of Data Governance and Security principles.
- Experience optimizing data pipeline performance.
- Experience in MarTech or AdTech platforms, like marketing campaign orchestration.
What do we offer:
- Work within an ambitious team on a socially impactful education product.
- An office with a reliable shelter, generators, satellite internet, and other amenities.
- Access to our corporate knowledge base and professional communities.
- Personal development plan.
- Partial compensation for English language learning, external training, and courses.
- Medical insurance coverage with a $70 employee contribution and full sick leave compensation.
- Company doctor and massage in the office.
- Sports activities: running, yoga, boxing, and more.
- Corporate holidays: we go on a week-paid holiday to rest and recharge twice a year.
- Supporting initiatives that help Ukraine. Find out more about our projects here.
Working schedule:
This is a full-time position with a hybrid remote option. It means that you can decide for yourself: whether you want to work from the office, remotely, or combine these options.
Are you interested?
Send your CV!
· 214 views · 38 applications · 7d
Data Engineer
Full Remote · EU · 5 years of experience · Upper-Intermediate
Hello, fellow data engineers! We are Stellartech, an educational technology product company, and we believe in inspiration but heavily rely on data. And we are looking for a true pipeline detective and zombie process hunter!
Why? Because we trust our Data Platform for daily business decisions. From "What ad platform presents us faster? Which creative media presents our value to customers in the most touching way?" to "What would our customers like to learn the most about? What can make education more enjoyable?", we rely on numbers, metrics and stuff. But as we are open and curious, there's a lot to collect and measure! That's why we need to extend, improve, and speed up our data platform.
That's why we need you to:
- Build and maintain scalable data pipelines using Python and Airflow to provide data ingestion, transformation, and delivery (see the toy sketch after these lists).
- Develop and optimize ETL/ELT workflows to ensure data quality, reliability, and performance.
- Bring your vision and opinion to define data requirements and shape solutions to business needs.
- Smartly monitor, relentlessly troubleshoot, and bravely resolve issues in data workflows, striving for high availability and fault tolerance.
- Propose, advocate, and implement best practices for data storage and querying using AWS services such as S3 and Athena.
- Document data workflows and processes, ensuring you don't have to say it twice and have time for creative experiments. Sure, it's about clarity and maintainability across the team as well.
For that, we suppose you'd be keen on:
- AWS services such as S3, Kinesis, Athena, and others.
- dbt and Airflow for data pipeline and workflow management.
- Application of data architecture, ETL/ELT processes, and data modeling.
- Advanced SQL and Python programming.
- Monitoring tools and practices to ensure data pipeline reliability.
- CI/CD pipelines and DevOps practices for data platforms.
- Monitoring and optimizing platform performance at scale.
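Purely as a toy example (not from the posting), assuming Airflow 2.4+ with the TaskFlow API and boto3; the DAG name, Athena database, and S3 locations are made up. A pipeline like the one described above might start out as a small DAG that submits an Athena query over curated S3 data:

```python
from datetime import datetime

import boto3
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def ad_performance_pipeline():
    @task
    def run_athena_query() -> str:
        """Submit an Athena query over curated S3 data and return the execution id."""
        client = boto3.client("athena")
        resp = client.start_query_execution(
            QueryString="SELECT platform, COUNT(*) AS installs "
                        "FROM analytics.events GROUP BY platform",   # hypothetical table
            QueryExecutionContext={"Database": "analytics"},          # hypothetical database
            ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
        )
        return resp["QueryExecutionId"]

    @task
    def report(query_execution_id: str) -> None:
        # In a real pipeline this would poll for completion and load results downstream.
        print(f"Athena query submitted: {query_execution_id}")

    report(run_athena_query())


ad_performance_pipeline()
```

In practice the query id would be polled for completion (or an Athena operator/sensor used) before downstream tasks consume the results.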
Will be nice to
- Understand cloud services (we use AWS), advances, trade-offs, and perspectives.
- Keep an analytical approach in mind and consider future perspectives in system design, in daily practice, and in technical decisions.
Why You'll Love Working With Us:
- Impactful Work: Your contributions will directly shape the future of our company.
- Innovative Environment: We're all about trying new things and pushing the envelope in EdTech.
- Freedom: a flexible role, either fully remote or hybrid from one of our offices in Cyprus or Poland.
- Health: we offer a health insurance package for hybrid mode (Cyprus, Poland) and a health corner in the Cyprus office.
- AI solutions: GPT chatbot / ChatGPT subscription and other tools.
- Wealth: we offer a competitive salary.
- Balance: flexible paid time off, you get 21 days of annual leave + 10 bank holidays.
- Collaborative Culture: Work alongside passionate professionals who are as driven as you are.
· 309 views · 28 applications · 2d
Middle Data Engineer
Full Remote · Countries of Europe or Ukraine · 2 years of experience · Intermediate
Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project, a cutting-edge data intelligence platform for e-commerce analytics. You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, analysis, and integrations. We are waiting for your CV!
Requirements:
- 2+ years of commercial experience with Python.
- Experience working with PostgreSQL databases.
- Profound understanding of algorithms and their complexities, with the ability to analyze and optimize them effectively.
- Solid understanding of ETL principles and best practices.
- Excellent collaborative and communication skills, with demonstrated ability to mentor and support team members.
- Experience working with Linux environments, cloud services (AWS), and Docker.
- Strong decision-making capabilities with the ability to work independently and proactively.
Will be a plus:
- Experience in web scraping, data extraction, cleaning, and visualization.
- Understanding of multiprocessing and multithreading, including process and thread management.
- Familiarity with Redis.
- Excellent programming skills in Python with a strong emphasis on optimization and code structuring.
- Experience with Flask / Flask-RESTful for API development.
- Knowledge and experience with Kafka.
Key Responsibilities:
- Develop and maintain a robust data processing architecture using Python.
- Design and manage data pipelines using Kafka and SQS (a brief sketch follows this list).
- Optimize code for better performance and maintainability.
- Design and implement efficient ETL processes.
- Work with AWS technologies to ensure flexible and reliable data processing systems.
- Collaborate with colleagues, actively participate in code reviews, and improve technical knowledge.
- Take responsibility for your tasks and suggest improvements to processes and systems.
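As a rough, hypothetical sketch of one stage of such a pipeline (the library choice, topic, table, and connection details are assumptions, not part of the vacancy): consume JSON events from Kafka and upsert them into PostgreSQL.

```python
import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python

# Consume JSON events from a hypothetical topic.
consumer = KafkaConsumer(
    "product-events",
    bootstrap_servers=["broker:9092"],
    group_id="dropship-etl",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Placeholder connection details.
conn = psycopg2.connect(host="localhost", dbname="dropship", user="etl", password="changeme")

UPSERT_SQL = """
    INSERT INTO products (product_id, title, price, updated_at)
    VALUES (%(product_id)s, %(title)s, %(price)s, now())
    ON CONFLICT (product_id)
    DO UPDATE SET title = EXCLUDED.title, price = EXCLUDED.price, updated_at = now();
"""

for message in consumer:
    event = message.value  # expected to carry product_id, title, price keys
    with conn, conn.cursor() as cur:  # commits per message; batching would be used in practice
        cur.execute(UPSERT_SQL, event)
```

In production, batching, retries, and dead-letter handling would be added around this loop.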
We offer:
- Working in a fast-growing company;
- Great networking opportunities with international clients, challenging tasks;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities, corporate events.