Jobs

  • 165 views · 22 applications · 8d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · Intermediate

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, MongoDB, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Experience working with high-volume tables (10M+ rows).


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and Machine Learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, integration logic); a minimal sketch follows this list;
    • Implement various data processing algorithms;
    • Take part in creating forecasting, recommendation, and classification models.
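
    For illustration only, not part of the vacancy: a minimal sketch of the kind of pandas-based ETL step this role involves, assuming a hypothetical orders table in PostgreSQL and chunked reads to cope with tables of 10M+ rows.

        # Hypothetical sketch; table, column, and connection names are illustrative.
        import pandas as pd
        from sqlalchemy import create_engine

        engine = create_engine("postgresql://user:password@host:5432/shop")

        def etl_orders(chunksize: int = 100_000) -> None:
            # Stream the large source table in chunks instead of loading it at once.
            for chunk in pd.read_sql_query("SELECT * FROM orders", engine, chunksize=chunksize):
                chunk["order_date"] = pd.to_datetime(chunk["order_date"])
                daily = chunk.groupby(chunk["order_date"].dt.date)["amount"].sum().reset_index()
                # Append per-chunk aggregates; a real pipeline would merge partial sums.
                daily.to_sql("orders_daily", engine, if_exists="append", index=False)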

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • 73 views · 21 applications · 19d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    MS Azure Platform:
    Databricks: Experience in managing and analyzing large datasets, creating ETL processes, and data pipelines (see the sketch below).
    Azure Data Explorer (ADX): Knowledge in querying and analyzing data in real time.
    Azure Synapse Analytics: Experience in integrating and analyzing data from various sources.
    CI/CD: Experience with continuous integration and continuous deployment to ensure automated and efficient development and deployment processes.
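
    A minimal, purely illustrative PySpark sketch of the kind of Databricks ETL step described above; paths and column names are assumptions, not from the vacancy.

        # Illustrative only: ingest raw JSON events, clean them, write curated Parquet.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("events-etl").getOrCreate()

        raw = spark.read.json("/mnt/raw/events/")  # hypothetical landing path
        clean = (raw.dropDuplicates(["event_id"])
                    .withColumn("event_date", F.to_date("event_ts"))
                    .filter(F.col("event_type").isNotNull()))
        clean.write.mode("overwrite").partitionBy("event_date").parquet("/mnt/curated/events/")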

    DevOps:
    Experience collaborating with development teams to support the deployment and maintenance of data platforms.
    Knowledge in automating infrastructure and processes.

     

    We offer:

    • Attractive financial package

    • Challenging projects

    • Professional & career growth

    • Great atmosphere in a friendly small team

  • 75 views · 6 applications · 6d

    Data Engineer

    Ukraine · Product · 2 years of experience · Upper-Intermediate

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.

    At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.

    About the project:

    You will be part of our product team. The team is responsible for building data marts, creating JSON documents based on them, and sending them via Kafka. The new Data Platform is built on AWS.

    We are looking for a motivated and results-oriented Data Engineer who can join our team in the development of Data Products on our new Data Platform.

    Your future responsibilities:

    • Building an ETL process using AWS services (S3, Athena, AWS Glue) together with Airflow, PySpark, SQL, GitHub, and Kafka (see the illustrative sketch after this list)
    • Building SQL queries over data sources on PySpark
    • Data processing and writing to Data Mart Iceberg tables
    • Building an integration solution on the Airflow + Kafka stack
    • Processing data as JSON and publishing it to Kafka
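
    Purely as a hedged illustration of that flow (mart, topic, and broker names are hypothetical): read a small data-mart table with PySpark, serialize rows to JSON, and publish them to Kafka with the kafka-python client.

        import json
        from pyspark.sql import SparkSession
        from kafka import KafkaProducer  # kafka-python client

        spark = SparkSession.builder.appName("mart-to-kafka").getOrCreate()
        # collect() is acceptable only for small marts; larger ones need foreachPartition.
        rows = spark.sql("SELECT customer_id, balance FROM mart.customer_balances").collect()

        producer = KafkaProducer(bootstrap_servers="broker:9092",
                                 value_serializer=lambda v: json.dumps(v).encode("utf-8"))
        for row in rows:
            producer.send("customer-balances", row.asDict())
        producer.flush()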

    Your skills and experience:

    • Higher education in the field of Computer Science/Engineering
    • 2+ years of relevant experience in data engineering or related roles
    • Knowledge of programming languages: Python, PL/SQL
    • 2+ years of experience in parsing, transforming and storing data in a Big Data environment (e.g., Hadoop, Spark)
    • 1+ years of experience in AWS Lambda, Glue, Athena and S3
    • Experience with Kafka architecture, configuration and support
    • Experience with database development and optimization (Oracle/PostgreSQL)
    • Experience in developing Big Data pipelines
    • Experience with Avro, JSON data formats
    • Experience with AWS data services and infrastructure management
    • Understanding the principles of working in an Agile environment

    We Offer What Matters Most to You:

    • Competitive Salary: We guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social Package: Official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable Working Conditions: Possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing Program: All employees have access to medical insurance from the first working day, as well as consultations with a psychologist, nutritionist, or lawyer. We also offer discount programs for sports and purchases, family days for children and adults, and in-office massages
    • Learning and Development: Access to over 130 online training resources, corporate training programs in CX, Data, IT Security, Leadership, Agile, as well as a corporate library and English lessons
    • Great Team: Our colleagues form a community where curiosity, talent, and innovation are welcomed. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career Opportunities: We encourage advancement within the bank across different functions
    • Innovations and Technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, VictoriaMetrics, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: SQL-Oracle, PgSQL, MsSQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
    • Support Program for Defenders: We maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and are developing the Bank's veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes

    Why Raiffeisen Bank?

    • People are our main value. We support, acknowledge, educate, and actively involve them in driving change
    • One of the largest IT product teams among the country’s banks
    • Recognized as the best employer by EY, Forbes, Randstad, FranklinCovey, and Delo.UA
    • One of the largest lenders to the economy and agricultural business among private banks
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans)
    • One of the largest taxpayers in Ukraine; we paid 6.6 billion UAH in taxes in 2023

    Opportunities for Everyone:

    • Raiffeisen Bank is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? Follow us on social media:

    Facebook, Instagram, LinkedIn


  • 52 views · 6 applications · 5d

    Databricks Solutions Architect

    Full Remote · Worldwide · 7 years of experience · Upper-Intermediate

    Requirements:

    - Hands-on and technical expertise with Apache Spark.
    - Hands-on experience with Databricks over the course of several large-scale projects.
    - Databricks Certified Data Engineer Professional certification.
    - Proven experience in designing and implementing big data technologies, including Hadoop, NoSQL, MPP, OLTP, OLAP.
    - Over 7 years of experience working as a Software Engineer or Data Engineer, including query tuning, performance tuning, troubleshooting, and debugging Spark and/or other big data solutions.
    - Proficiency in programming with Python, Scala, or Java.
    - Familiarity with Development Tools for CI/CD, Unit and Integration testing, Automation and Orchestration, REST API, BI tools, and SQL Interfaces (e.g., Jenkins).
    - Experience in customer-facing roles such as pre-sales, post-sales, technical architecture guidance, or consulting.
    - Desired experience in Data Science/ML Engineering, including model selection, model lifecycle, hyper-parameter tuning, model serving, and deep learning, using tools like MLflow.
     

    We offer:

    • Attractive financial package

    • Challenging projects

    • Professional & career growth

    • Great atmosphere in a friendly small team

     

  • 77 views · 24 applications · 29d

    Senior Data Engineer

    Countries of Europe or Ukraine · 3 years of experience · Upper-Intermediate

    Role Overview:

     

    As a Data Engineer at QuintaGroup, you will design and optimize data pipelines within the AWS ecosystem for a US-based B2B marketplace platform. The platform simplifies and accelerates business operations by providing seamless data solutions and advanced analytics. You'll collaborate with data scientists, analysts, and cross-functional teams to deliver innovative results.

     

    Key Responsibilities:

     

    • Develop, implement, and optimize data pipelines using PySpark in AWS environments (a Glue job sketch follows this list).

    • Utilize AWS services such as S3, Glue, Lambda, and EMR to create scalable and efficient data solutions.

    • Enhance PySpark workflows for performance, reliability, and cost-effectiveness.

    • Maintain data quality through rigorous testing and monitoring processes.

    • Apply data governance, security, and compliance best practices.

    • Document workflows, processes, and designs to support team collaboration and maintenance.
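
    A minimal sketch (names are hypothetical, not from the vacancy) of an AWS Glue PySpark job of the kind described above: read from the Glue Data Catalog, deduplicate, and write partitioned Parquet to S3.

        import sys
        from awsglue.context import GlueContext
        from awsglue.job import Job
        from awsglue.utils import getResolvedOptions
        from pyspark.context import SparkContext

        args = getResolvedOptions(sys.argv, ["JOB_NAME"])
        glue = GlueContext(SparkContext.getOrCreate())
        job = Job(glue)
        job.init(args["JOB_NAME"], args)

        # Hypothetical catalog database/table and target bucket.
        dyf = glue.create_dynamic_frame.from_catalog(database="marketplace", table_name="orders")
        df = dyf.toDF().dropDuplicates(["order_id"])
        df.write.mode("append").partitionBy("order_date").parquet("s3://example-curated/orders/")

        job.commit()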

     

    Requirements:

     

    • 3+ years of experience in data engineering, with a focus on PySpark.

    • Strong experience with AWS services.

    • Proficiency in Python and related frameworks or libraries.

    • Solid understanding of distributed computing and Apache Spark.

    • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation) is a plus.

    • Strong analytical and problem-solving skills with attention to detail.

    • Excellent communication skills and ability to work in dynamic, team-oriented environments.

    • Upper-Intermediate level of English.

     

    Tech Stack:

     

    • Programming Languages: Python (with Pandas and PySpark).

    • AWS Services: S3, Glue (Glue Data Catalog, Glue Crawler, Glue Jobs with PySpark), Lambda, ECS, Athena, Aurora (RDS), AppConfig, API Gateway, Step Functions, QuickSight, EventBridge.

    • Infrastructure: Terraform for Infrastructure-as-Code.

     

    We Offer:

     

    • Flexible working format: remote, office-based, or hybrid.

    • Competitive salary and compensation package.

    • Personalized career growth and mentorship programs.

    • Professional development tools, including tech talks and training sessions.

    • Access to active tech communities and regular knowledge sharing.

    • Education reimbursement opportunities.

    • Memorable anniversary gifts.

    • Corporate events and team-building activities.

    • Location-specific benefits.

     

  • 28 views · 2 applications · 27d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    N-iX is looking for a Senior Data Engineer to join our skilled and continuously growing team! The position is for our fintech customer from Europe. You would be part of the customer's Data Platform team, a key function within the company responsible for the architecture, development, and management of our core data infrastructure. We leverage Snowflake, Looker, Airflow (MWAA), and dbt while managing DevOps configurations for the platform. Our goal is to build and maintain a self-serve data platform that empowers stakeholders with tools for efficient data management while ensuring security, governance, and compliance standards.
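
    As a hedged illustration of that orchestration (the DAG id, schedule, and paths are assumptions): a minimal Airflow DAG that runs and then tests dbt models.

        from datetime import datetime
        from airflow import DAG
        from airflow.operators.bash import BashOperator

        with DAG(dag_id="dbt_daily_build",
                 start_date=datetime(2024, 1, 1),
                 schedule="@daily",  # Airflow 2.4+ keyword
                 catchup=False) as dag:
            dbt_run = BashOperator(task_id="dbt_run",
                                   bash_command="dbt run --profiles-dir /opt/dbt")
            dbt_test = BashOperator(task_id="dbt_test",
                                    bash_command="dbt test --profiles-dir /opt/dbt")
            dbt_run >> dbt_test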
     

    Requirements:

    • 6+ years of experience in Data Engineering.
    • Strong proficiency in Airflow, Python, and SQL.
    • Hands-on experience with cloud data warehouses (Snowflake or equivalent).
    • Solid understanding of AWS services and Kubernetes at an advanced user level.
    • Familiarity with Data Quality and Observability best practices.
    • Ability to thrive in a dynamic environment with a strong sense of ownership and responsibility.
    • Analytical mindset and problem-solving skills for tackling complex technical challenges.
    • Bachelor's degree or higher in Mathematics, Computer Science, or other relevant quantitative fields
       

    Nice-to-Have Skills:

    • Experience with DevOps practices, CI/CD, and Infrastructure as Code (IaC).
    • Hands-on experience with Looker or other BI tools.
    • Performance optimization of large-scale data pipelines.
    • Knowledge of metadata management and Data Governance best practices.
       

    Responsibilities:

    • Design and develop a scalable data platform to efficiently process and analyze large volumes of data using Snowflake, Looker, Airflow, and dbt.
    • Enhance the self-serve data platform by implementing new features to improve stakeholder access and usability.
    • Work with cross-functional teams to provide tailored data solutions and optimize data pipelines.
    • Foster a culture of knowledge sharing within the team to enhance collaboration and continuous learning.
    • Stay updated on emerging technologies and best practices in data engineering and bring innovative ideas to improve the platform.
  • 589 views · 52 applications · 8d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    • 6+ months of experience as a Data Engineer;

    • Experience with SQL;

    • Experience with Python;

     

     

    Optional skills (as a plus):

    • Experience with ETL / ELT pipelines;

    • Experience with PySpark;

    • Experience with Airflow;

    • Experience with Databricks;

     

    Key Responsibilities:

    • Apply data processing algorithms;

    • Create ETL/ELT pipelines and data management solutions;

    • Work with SQL queries for data extraction and analysis;

    • Analyze data and apply data processing algorithms to solve business problems.

     

     

    We offer:

    • Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;

    • Opportunity to work with a highly skilled engineering team on challenging projects;

    • Interesting projects with new technologies;

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • 194 views · 24 applications · 16d

    Data Engineer

    Full Remote · Ukraine · Product · 3 years of experience · Intermediate

    We are looking for an experienced Data Engineer to design and maintain robust data infrastructure across our systems. In this role, you will be responsible for building scalable data pipelines, ensuring data integrity, and integrating third-party data sources. Your primary focus will be to enable efficient data flow and support analytical capabilities across the organization. You will also contribute to the development of our data architecture, implement best engineering practices, and collaborate closely with cross-functional teams to turn raw data into actionable insights.

     

    Responsibilities

    • Communicate with both technical and non-technical audiences to gather requirements
    • Review and analyze data and logic to ensure consistency and accuracy
    • Design, implement, and maintain data pipelines for efficient data flow
    • Integrate and support developed solutions
    • Research and evaluate third-party components for potential use
    • Follow best engineering practices: refactoring, code review, testing, continuous delivery, and Scrum
    • Design, optimize, and support data storage

     

    Requirements

    • 5+ years of experience in data engineering
    • Experience in requirement gathering and communication with stakeholders
    • Strong knowledge of DWH (data warehouse) architecture and principles
    • Practical experience building ETL pipelines and designing data warehouses
    • Deep experience with Python with a strong focus on PySpark
    • Proficiency in SQL and databases such as PostgreSQL, ClickHouse, MySQL
    • Hands-on experience with data scraping and integrating third-party sources and APIs
    • Solid understanding of software design patterns, algorithms, and data structures
    • Intermediate English proficiency

     

    Will be a plus

    • Experience with RabbitMQ or Kafka
    • Understanding of web application architecture
    • Familiarity with DataOps practices
    • Background in FinTech or Trading domains

     

    We offer

    • Tax expenses coverage for private entrepreneurs in Ukraine
    • Expert support and guidance for Ukrainian private entrepreneurs
    • 20 paid vacation days per year
    • 10 paid sick leave days per year
    • Public holidays as per the company's approved Public holiday list
    • Medical insurance
    • Opportunity to work remotely
    • Professional education budget
    • Language learning budget
    • Wellness budget (gym membership, sports gear and related expenses)


     

  • 39 views · 8 applications · 30d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate

    Our long-standing client from the UK is looking for a Senior Data Engineer 

     

    Project: Decommissioning legacy software and systems

     

    Tech stack:
    dbt, Snowflake, SQL, Python, Fivetran

     

    Requirements:

    • Solid experience with CI/CD processes in SSIS
    • Proven track record of decommissioning legacy systems and migrating data to modern platforms (e.g., Snowflake)
    • Experience with AWS (preferred) or Azure
    • Communicative and proactive team player — able to collaborate and deliver
    • Independent and flexible when switching between projects
    • English: Upper Intermediate or higher
  • 35 views · 0 applications · 2d

    Data Engineer/Analyst

    Office Work · Spain · Product · 3 years of experience · Intermediate · Ukrainian Product 🇺🇦

    We are the creators of a new fintech era!
    Our mission is to change this world by making blockchain accessible to everyone in everyday life. WhiteBIT is a global team of over 1,200 professionals united by one mission — to shape the new world order in the Web3 era. Each of our employees is fully engaged in this transformative journey.
    We work on our blockchain platform, providing maximum transparency and security for more than 8 million users worldwide. Our breakthrough solutions, incredible speed of adaptation to market challenges, and technological superiority are the strengths that take us beyond ordinary companies. Our official partners include the National Football Team of Ukraine, FC Barcelona, Lifecell, FACEIT and VISA.

    The future of Web3 starts with you: join us as a Data Engineer/Analyst!


    Requirements

    — 3+ years of experience as a Data Analyst / Quant Analyst / Risk Analyst.
    — Strong proficiency in Python (pandas, numpy, pyarrow, SQLAlchemy).
    — Deep knowledge of SQL (analysis, aggregation, window functions).
    — Experience with BI tools (Tableau, Grafana).
    — Scripting experience in Python for automation and report integration.
    — Solid understanding of trading principles, margining, VaR, and risk models.
    — Proven ability to work with large-scale datasets (millions of rows, low-latency environments).
    — Experience working with technical teams to deliver business-oriented analytics.


    Responsibilities

    — Build and maintain analytics for PnL, risk, and positions (an illustrative VaR sketch follows this list).
    — Monitor key performance and risk metrics.
    — Develop and optimize ETL/ELT pipelines (both batch and real-time).
    — Configure and enhance BI dashboards (Tableau, Grafana).
    — Support alerts and anomaly detection mechanisms.
    — Work with internal databases, APIs, and streaming data pipelines.
    — Collaborate closely with risk, engineering, and operations teams.
    — Contribute to the development of the analytics platform: from storage to visualization.
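
    A hedged, synthetic-data sketch of one such risk metric: historical Value-at-Risk over a daily PnL series with pandas and numpy (the 99% level and the data itself are assumptions for illustration).

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        pnl = pd.Series(rng.normal(loc=0.0, scale=1_000.0, size=10_000))  # synthetic daily PnL

        def historical_var(pnl: pd.Series, confidence: float = 0.99) -> float:
            # VaR is the loss threshold exceeded only (1 - confidence) of the time.
            return -float(np.quantile(pnl, 1.0 - confidence))

        print(f"99% one-day VaR: {historical_var(pnl):,.0f}")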


    Work conditions

    Immerse yourself in Crypto & Web3:
    — Master cutting-edge technologies and become an expert in the most innovative industry.
    Work with the Fintech of the Future:
    — Develop your skills in digital finance and shape the global market.
    Take Your Professionalism to the Next Level:
    — Gain unique experience and be part of global transformations.
    Drive Innovations:
    — Influence the industry and contribute to groundbreaking solutions.
    Join a Strong Team:
    — Collaborate with top experts worldwide and grow alongside the best.
    Work-Life Balance & Well-being:
    — Modern equipment.
    — Comfortable working conditions and an inspiring environment to help you thrive.
    — 30 calendar days of paid leave.
    — Additional days off for national holidays.

    With us, you'll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you're ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
    Let's Build the Future Together!

    WhiteBIT offers all candidates an equal opportunity to join the team. All hiring decisions are made without regard to race, national origin, gender identity or sexual orientation, age, religion, disability, medical condition, marital status, familial status, veteran status, or any other legally protected characteristic of an individual.

  • 23 views · 4 applications · 15d

    Cloud System engineer

    Full Remote · Ukraine · Product · 2 years of experience · Pre-Intermediate

    Requirements:

    • Knowledge of the core functionality of virtualization platforms;
    • Experience implementing and migrating workloads in virtualized environments;
    • Experience in complex IT solutions and Hybrid Cloud solution projects;
    • Good understanding of IT-infrastructure services is a plus;
    • Strong knowledge of troubleshooting complex environments in case of failure;
    • At least basic knowledge of networking and information security is an advantage;
    • Hyper-V, Proxmox, VMware experience would be an advantage;
    • Experience in the area of services outsourcing (as customer and/or provider) is an advantage;
    • 2+ years of work experience in a similar position;
    • Scripting and programming experience/background in PowerShell/Bash is an advantage;
    • Strong team communication skills, both verbal and written;
    • Experience in writing and preparing technical documentation;
    • English: intermediate level is the minimum and is mandatory for communication with global teams;
    • Industry certification focused on the relevant solution area.

    Areas of Responsibility includes:

    • Participating in deployment and IT-infrastructure migration projects and Hybrid Cloud solution projects; client support;
    • Consulting on the migration of IT workloads in complex infrastructures;
    • Presales support (articulating service value in the sales process; up-sell and cross-sell capability);
    • Project documentation: technical concepts;
    • Education and development in the professional area, including necessary certifications.
  • 32 views · 2 applications · 6d

    Data Engineer TL / Poland

    EU · 4 years of experience · Upper-Intermediate

    On behalf of our customer, we are seeking a DataOps Team Lead to join their global R&D department.

     

    Our customer is an innovative technology company led by data scientists and engineers devoted to mobile app growth. They focus on solving the key challenge of growth for mobile apps by building Machine Learning and Big Data-driven technology that can both accurately predict what apps a user will like and connect them in a compelling way. 

    We are looking for a data-centric, quality-driven team leader focused on data process observability. The person is passionate about building high-quality data products and processes, as well as supporting production data processes and ad-hoc data requests.

    As a DataOps TL, you will be in charge of the quality of service as well as the quality of the data and knowledge platform for all data processes. You'll coordinate with stakeholders and play a major role in driving the business by promoting the quality and stability of data performance and lifecycle, and by giving the operational groups immediate abilities to affect daily business outcomes.

     

    Responsibilities:

    • Process monitoring - managing and monitoring the daily data processes; troubleshooting server and process issues, escalating bugs, and documenting data issues.
    • Ad-hoc operation configuration changes - being the extension of the operations side into the data process, using Airflow and Python scripting alongside SQL to extract specific client-relevant data points and calibrate certain aspects of the process.
    • Data quality automation - creating and maintaining data quality tests and validations using Python code and testing frameworks (see the sketch after this list).
    • Metadata store ownership - creating and maintaining the metadata store; managing the metadata system that holds metadata on tables, columns, calculations, and lineage; participating in the design and development of the knowledge-base metastore and UX, so as to be the pivotal point of contact for questions such as: What is the data source? What is it used for? Why is this field calculated this way?
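
    A minimal sketch of the kind of automated data-quality check described above, written as pytest tests over a daily snapshot (the table path and columns are hypothetical).

        import pandas as pd

        def load_daily_snapshot() -> pd.DataFrame:
            # Stand-in for a real extract (e.g., an Athena/Presto query result).
            return pd.read_parquet("s3://warehouse/daily_users/latest.parquet")

        def test_no_duplicate_user_ids():
            df = load_daily_snapshot()
            assert df["user_id"].is_unique, "duplicate user_id rows in daily snapshot"

        def test_row_count_within_expected_range():
            df = load_daily_snapshot()
            assert 1_000 <= len(df) <= 10_000_000, f"unexpected row count: {len(df)}"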

       

    Requirements:

    • Over 2 years in a leadership role within a data team.
    • Over 3 years of hands-on experience as a Data Engineer, with strong proficiency in Python and Airflow.
    • Solid background in working with both SQL and NoSQL databases and data warehouses, including but not limited to MySQL, Presto, Athena, Couchbase, MemSQL, and MongoDB.
    • Bachelor’s degree or higher in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical discipline.
    • Highly organized with a proactive mindset.
    • Strong service orientation and a collaborative approach to problem-solving.

       

    Nice to have skills:

    • Previous experience as a NOC or DevOps engineer is a plus.
    • Familiarity with PySpark is considered an advantage.

       

    What we can offer you

    • Remote work from Poland, flexible working schedule
    • Accounting support & consultation
    • Opportunities for learning and developing on the project
    • 20 working days of annual vacation
    • 5 days paid sick leaves/days off; state holidays
    • Working equipment provided
  • 169 views · 30 applications · 6d

    Data Engineer

    Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product 🇺🇦

    Headway Inc is a global tech company, revolutionizing lifelong learning by creating digital products for over 150 million users worldwide. Our mission is to help people grow. We're proud to be ranked 4th among the World's Top EdTech Companies by TIME magazine. We believe lifelong learning should be accessible, personalized, and impactful to each individual. That's how we change the world and why we bring together exceptional minds.
     

    The core of our achievements is our team. We believe in people and our shared values, SELECT. That's why, together with Yuliya Savchuk, Engineering Manager of the MIT team, we're looking for a Data Engineer to join our team of superstars transforming the EdTech industry.
     

    About the role:

    With business scaling, we see the need to strengthen the team that is working on building a data analytics platform for Headway Inc. We need to ensure that every business area and our products have reliable data to drive deep insights and innovation.

    Data is at the core of our company. You will build and maintain a reliable, efficient, and scalable data infrastructure that enables Headway Inc to leverage data as a strategic asset for informed decision-making, driving innovation, and achieving business goals.
     

    What awaits you on our team:

    • Have the opportunity to join the team of a global EdTech company that creates socially impactful products for the international market.
    • Have the opportunity to collaborate with a large team of analysts and marketers — to create solutions that have a direct and tangible impact on their work.
    • You'll be able to use a wide variety of modern tools and independently decide which technologies are most appropriate to apply.
    • We work in an atmosphere of freedom and responsibility.
    • Your decisions and ideas will actively impact the business. You'll own the full development lifecycle — from solution design through to user feedback and iteration.
       

    What will you do:

    At MIT, the Engineering team develops data platforms and automation tools that help teams work more efficiently and make informed marketing decisions. We create solutions that allow us to analyze and utilize data for effective decision-making in marketing strategies, improving results and increasing return on investment.
     

    • Communicate and collaborate with the analytics team, being responsible for delivering data to the analytical database for visualization.
    • Create and maintain optimal and scalable pipeline architecture. Develop new pipelines and refine existing ones.
    • Develop ETL/ELT processes and Data Lake architecture.
    • Research and collect large, complex data.
    • Identify, design, and implement internal process improvements.
    • Continuously learn, develop, and utilize cutting-edge technologies.
       

    What do you need to join us:

    • Experience in production development and knowledge of any programming language, including Python, Golang, Java, etc.
    • Understanding of Data Lakes, Data Warehousing, OLAP/OLTP approaches, and ETL/ELT processes.
    • Proficiency in SQL and experience working with databases.
    • Workflow orchestration experience.
    • Problem-solving skills and a passion for creating efficient, well-tested, and maintainable solutions.
    • Alignment with the values of our team (SELECT).
       

    Good to have:

    • Experience with GCP Data Services and Airflow (a BigQuery sketch follows this list).
    • Experience with CI/CD in Data Engineering.
    • Knowledge of Data Governance and Security principles.
    • Experience optimizing data pipeline performance.
    • Experience in MarTech or AdTech platforms, like marketing campaign orchestration.
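
    A hedged sketch of querying one such GCP data service, BigQuery, with a parameterized query (project, dataset, and table names are assumptions for illustration).

        from google.cloud import bigquery

        client = bigquery.Client(project="example-project")
        query = """
            SELECT user_id, SUM(revenue) AS revenue
            FROM analytics.events
            WHERE event_date = @day
            GROUP BY user_id
        """
        job = client.query(query, job_config=bigquery.QueryJobConfig(
            query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")],
        ))
        for row in job.result():  # blocks until the query finishes
            print(row.user_id, row.revenue)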
       

    What do we offer:

    • Work within an ambitious team on a socially impactful education product.
    • An office with a reliable shelter, generators, satellite internet, and other amenities.
    • Access to our corporate knowledge base and professional communities.
    • Personal development plan.
    • Partial compensation for English language learning, external training, and courses.
    • Medical insurance coverage with a $70 employee contribution and full sick leave compensation.
    • Company doctor and massage in the office.
    • Sports activities: running, yoga, boxing, and more.
    • Corporate holidays: twice a year we go on a paid week-long holiday to rest and recharge.
    • Supporting initiatives that help Ukraine. Find out more about our projects here.
       

    Working schedule:

    This is a full-time position with a hybrid remote option. It means that you can decide for yourself: whether you want to work from the office, remotely, or combine these options.
     

    Are you interested?

    Send your CV!

     

  • 214 views · 38 applications · 7d

    Data Engineer

    Full Remote · EU · 5 years of experience · Upper-Intermediate

    Hello, fellow data engineers! We are Stellartech - an educational technology product company, and we believe in inspiration but heavily rely on data. And we are looking for a true pipeline detective and zombie process hunter!

     

    Why? Because we trust our Data Platform for daily business decisions. From "Which ad platform serves us faster? Which creative media presents our value to customers in the most touching way?" to "What would our customers like to learn the most about? What can make education more enjoyable?", we rely on numbers, metrics, and stuff. But as we are open and curious, there's a lot to collect and measure! That's why we need to extend, improve, and speed up our data platform.

     

    That's why we need you to:

    • Build and maintain scalable data pipelines using Python and Airflow to provide data ingestion, transformation, and delivery.
    • Develop and optimize ETL/ELT workflows to ensure data quality, reliability, and performance.
    • Bring your vision and opinion to define data requirements and shape solutions to business needs.
    • Smartly monitor, relentlessly troubleshoot, and bravely resolve issues in data workflows, striving for high availability and fault tolerance.
    • Propose, advocate, and implement best practices for data storage and querying using AWS services such as S3 and Athena.
    • Document data workflows and processes, ensuring you don’t have to say it twice and have time for creative experiments. Sure, it’s about clarity and maintainability across the team as well.

     

    For that, we suppose you'd be keen on:

    • AWS services such as S3, Kinesis, Athena, and others.
    • dbt and Airflow for data pipeline and workflow management.
    • Application of data architecture, ETL/ELT processes, and data modeling.
    • Advanced SQL and Python programming.
    • Monitoring tools and practices to ensure data pipeline reliability.
    • CI/CD pipelines and DevOps practices for data platforms.
    • Monitoring and optimizing platform performance at scale.

     

    Will be nice to 

    • Understand cloud services (we use AWS), their advantages, trade-offs, and perspectives.
    • Keep the analytical approach in mind and consider future perspectives in system design, both in daily practice and in technical decisions.

     

    Why You'll Love Working With Us:

    • Impactful Work: Your contributions will directly shape the future of our company.
    • Innovative Environment: We're all about trying new things and pushing the envelope in EdTech.
    • Freedom: a flexible role, either remote or hybrid from one of our offices in Cyprus or Poland.
    • Health: we offer a health insurance package for hybrid mode (Cyprus, Poland) and a health corner in the Cyprus office.
    • AI solutions — GPT chatbot / ChatGPT subscription and other tools.
    • Wealth: we offer a competitive salary.
    • Balance: flexible paid time off, you get 21 days of annual leave + 10 bank holidays.
    • Collaborative Culture: Work alongside passionate professionals who are as driven as you are.

     

  • 309 views · 28 applications · 2d

    Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · Intermediate

    Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project — a cutting-edge data intelligence platform for e-commerce analytics. You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, analysis, and integrations. We are waiting for your CV!

    Requirements:

    - 2+ years of commercial experience with Python.

    - Experience working with PostgreSQL databases.
    - Profound understanding of algorithms and their complexities, with the ability to analyze and optimize them effectively.
    - Solid understanding of ETL principles and best practices.
    - Excellent collaborative and communication skills, with demonstrated ability to mentor and support team members.
    - Experience working with Linux environments, cloud services (AWS), and Docker.
    - Strong decision-making capabilities with the ability to work independently and proactively.

    Will be a plus:
    - Experience in web scraping, data extraction, cleaning, and visualization.
    - Understanding of multiprocessing and multithreading, including process and thread management.
    - Familiarity with Redis.
    - Excellent programming skills in Python with a strong emphasis on optimization and code structuring.
    - Experience with Flask / Flask-RESTful for API development.
    - Knowledge and experience with Kafka.
     

    Key Responsibilities:

    - Develop and maintain a robust data processing architecture using Python.

    - Design and manage data pipelines using Kafka and SQS (see the consumer sketch after this list).

    - Optimize code for better performance and maintainability.

    - Design and implement efficient ETL processes.

    - Work with AWS technologies to ensure flexible and reliable data processing systems.

    - Collaborate with colleagues, actively participate in code reviews, and improve technical knowledge.

    - Take responsibility for your tasks and suggest improvements to processes and systems.
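
    A hedged sketch of one side of such a pipeline (topic, broker, and group names are hypothetical): consume a Kafka topic with kafka-python and batch records for a downstream ETL step.

        import json
        from kafka import KafkaConsumer  # kafka-python client

        consumer = KafkaConsumer(
            "product-updates",
            bootstrap_servers="broker:9092",
            group_id="dropship-etl",
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        )

        batch = []
        for message in consumer:
            batch.append(message.value)
            if len(batch) >= 500:
                process_batch(batch)  # hypothetical downstream ETL step, defined elsewhere
                batch.clear()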

    We offer:

    - Working in a fast-growing company;

    - Great networking opportunities with international clients, challenging tasks;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities, corporate events.
