Jobs: Data Engineer
· 570 views · 55 applications · 7d
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Working with high-volume tables (10M+ rows).
Optional skills (as a plus):
• Experience with Spark (PySpark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of DS and machine learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (API, integration logic; see the sketch after this list);
• Implement various data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models.
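To give the responsibilities above some shape, here is a minimal, hypothetical Pandas ETL sketch; the connection string, table names, and columns are all invented for illustration, and the chunked read stands in for the 10M+ row tables mentioned in the requirements:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection and tables; replace with real credentials.
engine = create_engine("postgresql://user:password@localhost:5432/analytics")

# Extract in chunks so 10M+ row tables never have to fit in memory at once.
chunks = pd.read_sql_query(
    "SELECT user_id, event_type, amount, created_at FROM raw_events",
    engine,
    chunksize=500_000,
)

for chunk in chunks:
    # Transform: basic cleaning plus one derived column.
    chunk = chunk.dropna(subset=["user_id"])
    chunk["created_at"] = pd.to_datetime(chunk["created_at"])
    chunk["is_large"] = chunk["amount"] > 1_000

    # Load: append each cleaned batch to a reporting table.
    chunk.to_sql("events_clean", engine, if_exists="append", index=False)
```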
We offer:
• Great networking opportunities with international clients and challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leave;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team-building activities and corporate events.
· 109 views · 21 applications · 10d
Data Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
Lead the development and scaling of our scientific knowledge graph: ingesting, structuring, and enriching massive datasets from research literature and global data sources into meaningful, AI-ready insights.
Requirements:
- Strong experience with knowledge graph design and implementation (Neo4j, RDFLib, GraphQL, etc.); a small graph-building sketch follows the Nice to Have list below.
- Advanced Python for data engineering, ETL, and entity processing (Spark/Dask/Polars).
- Proven track record with large dataset ingestion (tens of millions of records).
- Familiarity with life-science or biomedical data (ontologies, research metadata, entity linking).
- Experience with Airflow/Dagster/dbt, and data APIs (OpenAlex, ORCID, PubMed).
- Strong sense of ownership, precision, and a delivery mindset.
Nice to Have:
- Domain knowledge in life sciences, biomedical research, or related data models.
- Experience integrating vector/semantic embeddings (Pinecone, FAISS, Weaviate).
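For flavor, a minimal knowledge-graph sketch using RDFLib, one of the libraries named above; the namespace, entities, and relations are invented for the example:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace for a scientific knowledge graph.
EX = Namespace("http://example.org/science/")

g = Graph()
g.bind("ex", EX)

# Model one paper, one author, and the relation between them.
paper = EX["paper/10.1234-demo"]
author = EX["author/0000-0002-1825-0097"]

g.add((paper, RDF.type, EX.Paper))
g.add((paper, RDFS.label, Literal("A Demo Paper on Knowledge Graphs")))
g.add((author, RDF.type, EX.Author))
g.add((paper, EX.hasAuthor, author))

# SPARQL over the in-memory graph.
query = """
    SELECT ?label WHERE { ?p a ex:Paper ; rdfs:label ?label . }
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.label)
```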
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
· 84 views · 2 applications · 25d
Data Engineer
Ukraine · Product · 2 years of experience · B2 - Upper Intermediate
Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.
At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.
Your responsibilities:
- Develop and maintain scalable ETL/ELT processes for data collection, transformation, and loading
- Design and implement robust data pipelines for real-time and batch data processing
- Ensure data quality, consistency, and availability for analytical and operational systems
- Optimize query performance and database architecture
- Automate the deployment and monitoring of data infrastructure components
- Work closely with analytics, development, and business teams to implement data-driven solutions
Preferred qualifications:
- 2+ years of relevant experience in data engineering
- Solid commercial experience with Python and Groovy
- Deep knowledge of Apache NiFi and hands-on experience in building and administering complex data flows
- Proficient in PostgreSQL, understanding of architecture, experience in query optimization and data schema design
- Experience with Apache Kafka and building real-time data pipelines (a minimal consumer sketch follows this list)
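As a hedged illustration of the Kafka requirement, a minimal consumer loop; the kafka-python client, topic name, broker address, and message schema are assumptions for the example (the posting does not name a client library):

```python
import json

from kafka import KafkaConsumer

# Hypothetical topic and broker; replace with real cluster settings.
consumer = KafkaConsumer(
    "payments.transactions",
    bootstrap_servers="localhost:9092",
    group_id="data-eng-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real pipeline, validation/enrichment would happen here
    # before writing to PostgreSQL or a downstream topic.
    print(event.get("transaction_id"), event.get("amount"))
```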
Will be a plus:
- Experience with Apache Airflow, workflow organization, monitoring and automation
- Working with Apache Spark frameworks for distributed big data processing
- Experience with AWS Athena (an interactive query service) and S3
- Experience with Apache Iceberg, understanding of modern table formats for data lakes
- Experience with Terraform, practice using the Infrastructure as Code (IaC) approach
- Experience with Kubernetes, containerization and service orchestration
We offer what matters most to you:
- Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
- Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
- Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, and modern equipment
- Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
- Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, and Agile; a corporate library and English lessons.
- Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
- Career opportunities: we encourage advancement within the bank across functions
- Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, VictoriaMetrics, Vault, OpenTelemetry, Elasticsearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Oracle, PostgreSQL, MS SQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
- Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bankβs veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes
Why Raiffeisen Bank?
- Our main value is people, and we support and recognize them, educate them, and involve them in changes. Join Raif's team, because for us YOU matter!
- One of the largest lenders to the economy and agricultural business among private banks
- Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
- The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, Π‘ΠΠΠΠΠΠ)
- One of the largest IT product teams among the country's banks. One of the largest taxpayers in Ukraine: 6.6 billion UAH paid in taxes in 2023
Opportunities for Everyone:
- Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
- We support the principles of diversity, equality and inclusiveness
- We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
- We cooperate with students and older people, creating conditions for growth at any career stage
Want to learn more? Follow us on social media:
Facebook, Instagram, LinkedIn
· 53 views · 15 applications · 11d
Senior Data Engineer (PySpark / Data Infrastructure)
Full Remote · Worldwide · Product · 5 years of experience · C1 - Advanced
Senior Data Engineer (PySpark / Data Infrastructure)
We're hiring a Senior Data Engineer to help lead the next phase of our data platform's growth.
At Forecasa, we provide enriched real estate transaction data and analytics to private lenders and investors. Our platform processes large volumes of public data, standardizes and enriches it, and delivers actionable insights that drive lending decisions.
We recently completed a migration from a legacy SQL-based ETL stack (PostgreSQL/dbt) to PySpark, and we're now looking for a senior engineer to take ownership of the new pipeline, maintain and optimize it, and develop new data-driven features to support our customers and internal analytics.
What You'll Do
- Own and maintain our PySpark-based data pipeline, ensuring stability, performance, and scalability (a minimal sketch follows this list).
- Design and build new data ingestion, transformation, and validation workflows.
- Optimize and monitor data jobs using Airflow, Kubernetes, and S3.
- Collaborate with data analysts, product owners, and leadership to define data needs and deliver clean, high-quality data.
- Support and mentor junior engineers working on scrapers, validation tools, and quality monitoring dashboards.
- Contribute to the evolution of our data infrastructure and architectural decisions.
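As a loose sketch of the PySpark work described above, a minimal batch transformation; the input path, schema, and output location are invented for the example:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions-demo").getOrCreate()

# Hypothetical raw transaction records landed in S3 as Parquet.
raw = spark.read.parquet("s3a://example-bucket/raw/transactions/")

# Standardize and enrich: trim county names, parse dates, bucket loan sizes.
clean = (
    raw.withColumn("county", F.trim(F.lower(F.col("county"))))
       .withColumn("recorded_at", F.to_date("recorded_at"))
       .withColumn(
           "size_bucket",
           F.when(F.col("amount") < 100_000, "small").otherwise("large"),
       )
       .dropDuplicates(["transaction_id"])
)

# Write partitioned output for downstream analytics.
clean.write.mode("overwrite").partitionBy("recorded_at").parquet(
    "s3a://example-bucket/curated/transactions/"
)
```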
Our Tech Stack
Python β’ PySpark β’ PostgreSQL β’ dbt β’ Airflow β’ S3 β’ Kubernetes β’ GitLab β’ Grafana
What We're Looking For
- 5+ years of experience in data engineering or backend systems with large-scale data processing.
- Strong experience with PySpark, including building scalable data pipelines and working with large datasets.
- Solid command of SQL, data modeling, and performance tuning (especially in PostgreSQL).
- Experience working with orchestration tools like Airflow, and containers via Docker/Kubernetes.
- Familiarity with cloud storage (preferably S3) and modern CI/CD workflows.
- Ability to work independently and communicate clearly in a remote, async-first environment.
Bonus Points
- Background in real estate or financial data
- Experience with data quality frameworks or observability tools (e.g., Great Expectations, Grafana, Prometheus)
- Experience optimizing PySpark jobs for performance and cost-efficiency
· 12 views · 1 application · 6d
Presale Engineer
Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary
Requirements:
- Knowledge of the core functionality of virtualization platforms;
- Experience implementing and migrating workloads in virtualized environments;
- Experience with complex IT solutions and Hybrid Cloud solution projects;
- Good understanding of IT infrastructure services is a plus;
- Strong troubleshooting skills in complex environments in case of failure;
- At least basic knowledge of networking and information security is an advantage;
- Hyper-V, Proxmox, or VMware experience would be an advantage;
- Experience in the area of services outsourcing (as customer and/or provider) is an advantage;
- 2+ years of work experience in a similar position;
- Scripting and programming experience/background in PowerShell/Bash is an advantage;
- Strong team communication skills, both verbal and written;
- Experience in writing and preparing technical documentation;
- English skills: intermediate level is the minimum, and mandatory for communication with global teams;
- Industry certification focused on the relevant solution area.
Areas of responsibility include:
- Participating in deployment and IT infrastructure migration projects and Hybrid Cloud solution projects; client support;
- Consulting on migration of IT workloads in complex infrastructures;
- Presales support: articulating service value in the sales process, with up-sell and cross-sell capability;
- Project documentation: technical concepts;
- Education and development in the professional area, including necessary certifications.
· 52 views · 2 applications · 27d
Senior Market Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate
We are looking for a skilled and experienced Software Engineer to join our team, building high-performance real-time data pipelines to process financial market data, including security prices for various asset classes such as equities, options, futures, and more. You will play a key role in designing, developing, and optimizing data pipelines that handle large volumes of data with low latency and high throughput, ensuring that our systems can process market data in real-time and batch modes.
Key Responsibilities:
- Architect, develop, and enhance market data systems (a small tick-aggregation sketch follows this list)
- Contribute to the software development lifecycle in a collaborative team environment, including design, implementation, testing, and support
- Design highly efficient, scalable, mission-critical systems
- Maintain good software quality and test coverage
- Participate in code reviews
- Troubleshoot incidents and reported bugs
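To make the market-data context concrete, a small, hypothetical example of aggregating raw ticks into one-minute OHLC bars with pandas; the symbol, prices, and sizes are toy values:

```python
import pandas as pd

# Hypothetical tick stream for one symbol: timestamp, trade price, size.
ticks = pd.DataFrame(
    {
        "ts": pd.to_datetime(
            ["2024-01-02 09:30:00.10", "2024-01-02 09:30:12.48",
             "2024-01-02 09:30:59.90", "2024-01-02 09:31:03.02"]
        ),
        "price": [4770.25, 4770.50, 4769.75, 4771.00],
        "size": [3, 1, 5, 2],
    }
).set_index("ts")

# Resample trades into 1-minute OHLC bars; sum sizes into volume.
bars = ticks["price"].resample("1min").ohlc()
bars["volume"] = ticks["size"].resample("1min").sum()
print(bars)
```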
Requirements:
- Bachelorβs or advanced degree in Computer Science or Electrical Engineering
- Proficiency in the following programming languages: Java, Python or Go
- Prior experience working with equities or futures market data (e.g., CME data, US equities options) is a must
- Experience in engineering and supporting Market Data feed handlers
- Technically fluent (Python, SQL, JSON, ITCH, FIX, CSV); comfortable discussing pipelines and validation specs.
- Prior experience working with tick data storage, such as KDB+ or ClickHouse
- Familiarity with time series analysis
- Good understanding of the Unix/Linux programming environment
- Expertise with SQL and relational databases
- Excellent problem-solving and communication skills
- A self-starter who works well in a fast-paced environment
· 88 views · 4 applications · 24d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate · MilTech
Who We Are
OpenMinds is a cognitive defence tech company countering authoritarian influence in the battle for free and open societies. We work with over 30 governments and organisations worldwide, including Ukraine, the UK, and NATO member governments, leading StratCom agencies, and research institutions.
Our expertise lies in accessing restricted and high-risk environments, including conflict zones and closed platforms.
We combine ML technologies with deep local expertise. Our team, based in Kyiv, Lviv, London, Ottawa, and Washington, DC, includes behavioural scientists, ML/AI engineers, data journalists, communications experts, and regional specialists.
Our core values are: speed, experimentation, elegance and focus. We are expanding the team and welcome passionate, proactive, and resourceful professionals who are eager to contribute to the global fight in cognitive warfare.
Who we're looking for
OpenMinds is seeking a skilled and curious Data Engineer who's excited to design and build data systems that power meaningful insight. You'll work closely with a passionate team of behavioral scientists and ML engineers on creating a robust data infrastructure that supports everything from large-scale narrative tracking to sentiment analysis.
In the position you will:
- Take ownership of our multi-terabyte data infrastructure, from data ingestion and orchestration to transformation, storage, and lifecycle management (see the sketch after this list)
- Collaborate with data scientists, analysts, ML engineers, and domain experts to develop impactful data solutions
- Optimize and troubleshoot data infrastructure to ensure high performance, cost-efficiency, scalability, and resilience
- Stay up-to-date with trends in data engineering and apply modern tools and practices
- Define and implement best practices for data processing, storage, and governance
- Translate complex requirements into efficient data workflows that support threat detection and response
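As a rough illustration of the orchestration side of this role, a minimal Airflow DAG; the DAG id, task names, and ingest/transform logic are placeholders, and the `schedule` argument assumes Airflow 2.4+:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    # Placeholder: pull raw posts/documents from a source API into storage.
    print("ingesting raw data")


def transform():
    # Placeholder: clean the batch and load it into BigQuery.
    print("transforming and loading")


with DAG(
    dag_id="narrative_tracking_demo",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest_task >> transform_task  # ingest runs before transform
```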
We are a perfect match if you have:
- 5+ years of hands-on experience as a Data Engineer, with a proven track record of leading complex data projects from design to production
- Highly skilled in SQL and Python for advanced data processing, pipeline development, and optimization
- Deep understanding of software engineering best practices, including SOLID, error handling, observability, performance tuning, and modular architecture
- Ability to write, test and deploy production-ready code
- Extensive experience in database design, data modeling, and modern data warehousing, including ETL orchestration using Airflow or equivalent
- Familiarity with Google Cloud Platform (GCP) and its data ecosystem (BigQuery, GCS, Pub/Sub, Cloud Run, Cloud Functions, Looker)
- Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances
- Experience with DevOps (Docker/K8s, IaC, CI/CD) and MLOps
- Fluent in English with excellent communication and cross-functional collaboration skills
We offer:
- Work in a fast-growing company with proprietary AI technologies, solving the most difficult problems in the domains of social behaviour analytics and national security
- Competitive market salary
- Opportunity to present your work on tier 1 conferences, panels, and briefings behind closed doors
- Work face-to-face with world-leading experts in their fields, who are our partners and friends
- Flexible work arrangements, including adjustable hours, location, and remote/hybrid options
- Unlimited vacation and leave policies
- Opportunities for professional development within a multidisciplinary team, boasting experience from academia, tech, and intelligence sectors
- A work culture that values resourcefulness, proactivity, and independence, with a firm stance against micromanagement
· 24 views · 5 applications · 24d
Senior ML/GenAI Engineer
Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate
Senior ML Engineer
Full-time / Remote
About Us
ExpoPlatform is a UK-based company founded in 2013, delivering advanced technology for online, hybrid, and in-person events across 30+ countries. Our platform provides end-to-end solutions for event organizers, including registration, attendee management, event websites, and networking tools.
Role Responsibilities:
- Develop AI Agents, tools for AI Agents, API as a service
- Prepare development and deployment documentation
- Participate in R&D activities of Data Science team
Required Skills & Experience:
- 5+ years of experience with DL frameworks (PyTorch and/or TensorFlow)
- 5+ years of experience in software development in Python
- Hands-on experience with LLM, RAG, and AI Agent development
- Experience with Amazon SageMaker, Amazon Bedrock, LangChain, LangGraph, LangSmith, LlamaIndex, Hugging Face, OpenAI
- Hands-on experience using AI tools for software development to increase efficiency and code quality, including AI tools for code review.
- Knowledge of SQL, NoSQL, and vector databases
- Understanding of embedding vectors and semantic search (see the sketch after this list)
- Proficiency in Git (Bitbucket) and Docker
- Upper-Intermediate (B2+) or higher level of English
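Since the list above mentions embedding vectors and semantic search, here is a small, self-contained sketch of cosine-similarity search over embeddings; the vectors are toy values, and in practice they would come from an embedding model:

```python
import numpy as np

# Toy document embeddings (in practice: output of an embedding model).
docs = ["refund policy", "event registration", "attendee networking"]
doc_vecs = np.array(
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.3], [0.0, 0.2, 0.9]]
)

query_vec = np.array([0.2, 0.7, 0.4])  # toy query embedding

# Cosine similarity: dot product of L2-normalized vectors.
doc_norm = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
q_norm = query_vec / np.linalg.norm(query_vec)
scores = doc_norm @ q_norm

# Rank documents from most to least similar.
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {docs[idx]}")
```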
Would Be a Plus:
- Hands-on experience with SLM and LLM fine-tuning
- Education in Data Science, Computer Science, Applied Math or similar
- AWS certifications (AWS Certified ML or equivalent)
- Experience with TypeSense
- Experience with speech recognition, speech-to-text ML models
What We Offer:
- Career growth with an international team.
- Competitive salary and financial stability.
- Flexible working hours (Mon-Fri, 8 hours).
- Free English courses and a budget for education
· 35 views · 3 applications · 26d
Senior Data Engineer at Payments AI Team
Hybrid Remote · Ukraine · Product · 3 years of experience · B2 - Upper Intermediate
Job Description
As a Senior Data Engineer on the Wix Payments AI Team, you'll play a crucial role in the design and integration of emerging AI solutions into the Payments product. You'll have significant responsibilities, which include:
- Developing & maintaining infrastructure for both generative AI and classical data science applications.
- Researching emerging AI technology stacks and methodologies to identify optimal solutions.
- Monitoring data pipeline performance and troubleshooting issues.
- Leading & driving the entire lifecycle of a typical team project: ideation → map business constraints, research and evaluate alternative solutions → design & implement a proof-of-concept in collaboration with various stakeholders across the organization, including data engineers, analysts, data scientists, and product managers.
Qualifications
- Proficient in Trino SQL (with the ability to craft complex queries) and highly skilled in Python, with expertise in Python frameworks (e.g., Streamlit, Airflow, Pyless, etc.).
- Ability to design, prototype, code, test and deploy production-ready systems.
- Experience with a versatile range of infrastructure, server and frontend tech stacks.
- Experience implementing and integrating GenAI models, particularly LLMs, into production systems.
- Experience with AI agentic technologies (e.g. MCP, A2A, ADK) - an advantage.
- An independent and quick learner.
- Passion for product and technical leadership.
- Business-oriented thinking and skills: data privacy and system security awareness, understanding of business objectives and how to measure their key performance indicators (KPIs), derive and prioritize actionable tasks from complex business problems, business impact guided decision making.
- Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances.
- Fluent in English with strong communication abilities
About the Team
We're the Wix Payments team.
We provide Wix users with the best way to collect payments from their customers and manage their Wix income online, in person, and on the go. We're passionate about crafting the best experience for our users, and empowering any business on Wix to realize its full financial potential. We have developed our own custom payment processing solution that blends many integrations into one clean and intuitive user interface. We also build innovative products that help our users manage their cash and grow their business. The Payments AI team is instrumental in promoting AI-based capabilities within the payments domain and is responsible for ensuring the company is always at the forefront of the AI revolution.
· 53 views · 9 applications · 6d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate
We are seeking a talented and experienced Data Engineer to join our professional services team of 50+ engineers on a full-time basis. This remote-first position requires in-depth expertise in data engineering, with a preference for experience in cloud platforms like AWS and Google Cloud. You will play a vital role in ensuring the performance, efficiency, and integrity of our customers' data pipelines while contributing to insightful data analysis and utilization.
About us: Opsfleet is a boutique services company that specializes in cloud infrastructure, data, AI, and human-behavior analytics to help organizations make smarter decisions and boost performance.
Our experts provide end-to-end solutions, from data engineering and advanced analytics to DevOps, ensuring scalable, secure, and AI-ready platforms that turn insights into action.
Role Overview
As a Data Engineer at Opsfleet, you will lead the entire data lifecycle: gathering and translating business requirements, ingesting and integrating diverse data sources, and designing, building, and orchestrating robust ETL/ELT pipelines with built-in quality checks, governance, and observability. You'll partner with data scientists to prepare, deploy, and monitor ML/AI models in production, and work closely with analysts and stakeholders to transform raw data into actionable insights and scalable intelligence.
What You'll Do
* E2E Solution Delivery: Lead the full spectrum of data projects: requirements gathering, data ingestion, modeling, validation, and production deployment.
* Data Modeling: Develop and maintain robust logical and physical data models, such as star and snowflake schemas, to support analytics, reporting, and scalable data architectures (see the sketch after this list).
* Data Analysis & BI: Transform complex datasets into clear, actionable insights; develop dashboards and reports that drive operational efficiency and revenue growth.
* ML Engineering: Implement and manage model-serving pipelines using the cloud's MLOps toolchain, ensuring reliability and monitoring in production.
* Collaboration & Research: Partner with cross-functional teams to prototype solutions, identify new opportunities, and drive continuous improvement.
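As a toy illustration of the star-schema modeling mentioned above, a join of one fact table to two dimensions using pandas; all table and column names are invented:

```python
import pandas as pd

# Dimension tables: customers and dates.
dim_customer = pd.DataFrame(
    {"customer_key": [1, 2], "segment": ["retail", "enterprise"]}
)
dim_date = pd.DataFrame(
    {"date_key": [20240101, 20240102], "month": ["2024-01", "2024-01"]}
)

# Fact table: one row per order, keyed to the dimensions.
fact_orders = pd.DataFrame(
    {
        "customer_key": [1, 1, 2],
        "date_key": [20240101, 20240102, 20240102],
        "revenue": [120.0, 80.0, 950.0],
    }
)

# Star-schema query: join facts to dimensions, then aggregate.
report = (
    fact_orders.merge(dim_customer, on="customer_key")
               .merge(dim_date, on="date_key")
               .groupby(["month", "segment"], as_index=False)["revenue"]
               .sum()
)
print(report)
```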
What We're Looking For
Experience: 4+ years in a data-focused role (Data Engineer, BI Developer, or similar)
Technical Skills: Proficient in SQL and Python for data manipulation, cleaning, transformation, and ETL workflows. Strong understanding of statistical methods and data modeling concepts.
Soft Skills: Excellent problem-solving ability, critical thinking, and attention to detail. Outstanding written and verbal communication.
Education: BSc or higher in Mathematics, Statistics, Engineering, Computer Science, Life Science, or a related quantitative discipline.
Nice to Have
Cloud & Data Warehousing: Hands-on experience with cloud platforms (GCP, AWS, or others) and modern data warehouses such as BigQuery and Snowflake.
· 23 views · 1 application · 12d
Infrastructure Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced
We are looking for a Senior Infrastructure Engineer to manage and improve our IT systems and cloud environments. You'll work closely with DevOps and security teams to ensure system availability and reliability.
Details:
Experience: 5 years
Schedule: Full time, remote
Start: ASAP
English: Fluent
Employment: B2B contract
Responsibilities:
- Design, deploy, and manage infrastructure environments
- Automate deployments using Terraform, Ansible, etc.
- Monitor and improve system performance and availability
- Implement disaster recovery plans
- Support troubleshooting across environments
Requirements:
- Strong Linux administration background
- Experience with AWS, GCP, or Azure
- Proficiency with containerization tools (Docker, Kubernetes)
- Infrastructure as Code (IaC) using Terraform or similar
- Scripting skills in Python, Bash, etc.
· 36 views · 1 application · 27d
Data Quality Engineer
Office Work · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · MilTech
We're building a large-scale data analytics ecosystem powered by Microsoft Azure and Power BI. Our team integrates, transforms, and visualizes data from multiple sources to support critical business decisions. Data quality is one of our top priorities, and we're seeking an engineer who can help us enhance the reliability, transparency, and manageability of our data landscape.
Your responsibilities:
- Develop and maintain data quality monitoring frameworks within the Azure ecosystem (Data Factory, Data Lake, Databricks).
- Design and implement data quality checks, including validation, profiling, cleansing, and standardization (see the sketch after this list).
- Detect data anomalies and design alerting systems (rules, thresholds, automation).
- Collaborate with Data Engineers, Analysts, and Business stakeholders to define data quality criteria and expectations.
- Ensure high data accuracy and integrity for Power BI reports and dashboards.
- Document data validation processes and recommend improvements to data sources.
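As a loose sketch of the kind of checks described above, a small pandas-based validation pass; the dataset, columns, and thresholds are invented for the example:

```python
import pandas as pd

# Hypothetical batch of records to validate before it reaches Power BI.
batch = pd.DataFrame(
    {
        "order_id": [1, 2, 2, 4],
        "amount": [100.0, None, 250.0, -5.0],
        "country": ["UA", "UA", "PL", None],
    }
)

# Rule name -> pass/fail. Thresholds are illustrative only.
rules = {
    "amount_completeness_ge_95pct": batch["amount"].notna().mean() >= 0.95,
    "country_completeness_ge_95pct": batch["country"].notna().mean() >= 0.95,
    "order_id_unique": not batch["order_id"].duplicated().any(),
    "amount_non_negative": bool((batch["amount"].dropna() >= 0).all()),
}

failed = [name for name, ok in rules.items() if not ok]
print("failed checks:", failed)  # feeds an alert or dashboard in a real pipeline
```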
Requirements:
- 3+ years of experience in a Data Quality, Data Engineering, or BI Engineering role.
- Hands-on experience with Microsoft Azure services (Data Factory, SQL Database, Data Lake).
- Advanced SQL skills (complex queries, optimization, data validation).
- Familiarity with Power BI or similar BI tools.
- Understanding of DWH principles and ETL/ELT pipelines.
- Experience with data quality frameworks and metrics (completeness, consistency, timeliness).
- Knowledge of Data Governance, Master Data Management, and Data Lineage concepts.
Would be a plus:
- Experience with Databricks or Apache Spark.
- DAX and Power Query (M) knowledge.
- Familiarity with DataOps or DevOps principles in a data environment.
- Experience in creating automated data quality dashboards in Power BI.
· 30 views · 1 application · 5d
Data Engineer
Hybrid Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate
We are looking for a Data Engineer to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar's NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.
What you will do
- Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
- Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
- Implement NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering toxic content, de-duplication, de-noising, and detection and removal of personal data (see the sketch after this list).
- Form specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as teacher.
- Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
- Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
- Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
- Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
- Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
- Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
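As a rough sketch of the text cleaning and de-duplication step referenced in the list above, a small, self-contained Python example; the normalization rules and exact-hash de-dup are simplified stand-ins for a production pipeline (near-duplicate detection would need MinHash or similar):

```python
import hashlib
import re
import unicodedata


def normalize(text: str) -> str:
    """Basic cleaning: Unicode normalization, whitespace, control chars."""
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"\s+", " ", text)             # collapse whitespace
    text = re.sub(r"[\x00-\x1f\x7f]", "", text)  # strip control characters
    return text.strip()


def dedupe(corpus):
    """Exact de-duplication via content hashing."""
    seen, unique = set(), []
    for doc in corpus:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique


raw_docs = ["Привіт,  світе! ", "Привіт, світе!", "Інший документ."]
clean_docs = dedupe(normalize(d) for d in raw_docs)
print(clean_docs)  # the two normalized greetings collapse into one
```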
Qualifications and experience needed
- Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
- NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project's focus. Understanding of FineWeb2 or a similar processing pipeline approach.
- Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
- Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
- Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
- Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
- Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
- Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
A plus would be
- Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
- Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
- CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
- Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
- Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.
What we offer
- Office or remote: it's up to you. You can work from anywhere, and we will arrange your workplace.
- Remote onboarding.
- Performance bonuses.
- We train employees with the opportunity to learn through the company's library, internal resources, and programs from partners.
- Health and life insurance.
- Wellbeing program and corporate psychologist.
- Reimbursement of expenses for Kyivstar mobile communication.
· 8 views · 0 applications · 18d
IT Infrastructure Administrator
Office Work · Ukraine (Dnipro) · Product · 1 year of experience
Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.
Key responsibilities:
- Administration of Active Directory
- Managing group policies
- Managing services via PowerShell
- Administration of the VMware platform
- Administration of Azure Active Directory
- Administration of Exchange 2016/2019 mail servers
- Administration of Exchange Online
- Administration of VMware Horizon View
Required professional knowledge and skills:
- Experience in writing automation scripts (PowerShell, Python, etc.)
- Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
- Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
- Experience with Veeam Backup & Replication, VMware vSphere (vCenter, DRS, vMotion, HA), and VMware Horizon View
- Windows Server 2019/2025 (installation, configuration, and adaptation)
- Diagnostics and troubleshooting
- Working with anti-spam systems
- Managing mail transport systems (exim) and monitoring systems (Zabbix)
We offer:
- Interesting projects and tasks
- Competitive salary (discussed during the interview)
- Convenient work schedule: Mon-Fri, 9:00-18:00; partial remote work possible
- Official employment, paid vacation, and sick leave
- Probation period: 2 months
- Professional growth and training (internal training, reimbursement for external training programs)
- Discounts on Biosphere Corporation products
- Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)
Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).
Learn more about Biosphere Corporation, our strategy, mission, and values at:
http://biosphere-corp.com/
https://www.facebook.com/biosphere.corporation/
Join our team of professionals!
By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
If your application is successful, we will contact you within 1-2 business days.
· 23 views · 1 application · 27d
PHP Developer / Data Engineer
Hybrid Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate · Ukrainian Product
Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return.
Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free. Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.
Requirements:
- Design and develop scalable backend services using PHP 7/8;
- Strong understanding of OOP concepts, design patterns, and clean code principles;
- Extensive experience with MySQL, including database design, query optimization, and indexing;
- Experience working with NoSQL databases (e.g., Redis);
- Proven experience working on high-load projects;
- Understanding of ETL processes and data integration;
- Experience working with ClickHouse;
- Strong experience with API development;
- Strong knowledge of Symfony 6+ and Yii2;
- Experience with RabbitMQ.
Nice to Have:
- AWS services
- Payment APIs (Stripe, SolidGate, etc.)
- Docker, GitLab CI
- Python
Responsibilities:
- Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
- API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
- Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
- Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy.
- Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.
What we offer:
For personal growth:
- A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
- An educational allowance to ensure that your skills stay sharp;
- English and German classes to strengthen your capabilities and widen your knowledge.
For comfort:
- A great environment where you'll work with true professionals and amazing colleagues whom you'll call friends quickly;
- The choice of working remotely or in our office space located on Podil, equipped with everything you might need for productive and comfortable work.
For health:
- Medical insurance;
- Twenty-one days of paid sick leave per year;
- Healthy fruit snacks full of vitamins to keep you energized
For leisure:
- Twenty-one days of paid vacation per year;
- Fun times at our frequent team-building activities.