Jobs
· 285 views · 31 applications · 21d
Junior Data Engineer
Full Remote · Ukraine · 0.5 years of experience · B1 - Intermediate
We are looking for a Data Engineer to join our team!
The Data Engineer is responsible for designing, maintaining, and optimizing data infrastructure for data collection, management, transformation, and access.
They will be in charge of creating pipelines that convert raw data into usable formats for data scientists and other data consumers.
The Data Engineer should be comfortable working with RDBMS and have good knowledge of the appropriate RDBMS programming language(s).
The Data Engineer processes client data according to proper specifications and documentation.
*Ukrainian students in Ukraine only (2nd year and higher).
Main responsibilities:
- Design and develop ETL pipelines;
- Data integration and cleansing;
- Implement stored procedures and functions for data transformations (a minimal sketch follows this list);
- Optimize ETL process performance.
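As an illustration only, a minimal sketch of such an ETL step, assuming PostgreSQL and the psycopg2 package; the schema, table, and procedure names are hypothetical placeholders rather than part of this vacancy:

```python
# Minimal sketch of one ETL step, assuming PostgreSQL and psycopg2.
# Schema, table, and procedure names are hypothetical placeholders.
import psycopg2

def run_daily_load(conn_str: str) -> None:
    # The connection context manager commits the transaction on success.
    with psycopg2.connect(conn_str) as conn:
        with conn.cursor() as cur:
            # Extract/load: copy raw rows into a staging table.
            cur.execute(
                "INSERT INTO staging.orders_raw (order_id, amount, created_at) "
                "SELECT order_id, amount, created_at FROM src.orders_export"
            )
            # Transform: delegate the heavy lifting to a stored procedure.
            cur.execute("CALL warehouse.transform_orders()")

if __name__ == "__main__":
    run_daily_load("postgresql://user:password@localhost:5432/dwh")
```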
Skills and Requirements:
- Experience with ETL tools (taking charge of ETL processes and performing tasks connected with data analytics, data science, business intelligence, and system architecture);
- Database/DBA/Architect background (understanding of data storage requirements and warehouse architecture design, basic expertise with SQL/NoSQL databases and data mapping, awareness of the Hadoop environment);
- Data analysis expertise (basic expertise in data modeling, mapping, and formatting is required);
- Knowledge of scripting languages (Python is preferable);
- Troubleshooting skills (data processing systems operate on large amounts of data and include multiple structural elements; the Data Engineer is responsible for the proper functioning of the system, which requires strong analytical thinking and troubleshooting skills);
- Tableau experience is good to have;
- Software engineering background is good to have;
- Good organizational skills and task management abilities;
- Effective self-motivator;
- Good communication skills in written and spoken English.
Salary Range
Compensation packages are based on several factors including but not limited to: skill set, depth of experience, certifications, and specific work location.
· 533 views · 49 applications · 4d
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Working with high-volume tables (10M+ rows); a chunked-processing sketch follows this list.
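To illustrate the kind of high-volume work mentioned above, a small chunked-aggregation sketch with pandas and SQLAlchemy, assuming a PostgreSQL source; the connection string and table are placeholders:

```python
# Sketch of chunked aggregation over a high-volume table (10M+ rows),
# assuming pandas + SQLAlchemy; the connection string and table are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost/analytics")

def aggregate_events(chunksize: int = 500_000) -> pd.DataFrame:
    partials = []
    # Stream the table in chunks instead of loading all rows at once.
    for chunk in pd.read_sql_query(
        "SELECT user_id, amount FROM events", engine, chunksize=chunksize
    ):
        partials.append(chunk.groupby("user_id")["amount"].sum())
    # Combine the partial aggregates into the final result.
    return pd.concat(partials).groupby(level=0).sum().reset_index()

if __name__ == "__main__":
    print(aggregate_events().head())
```

Streaming in chunks keeps memory flat even when the source table grows well past 10M rows.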
Optional skills (as a plus):
• Experience with Spark (PySpark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of DS and machine learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (API, integration logic);
• Implement different data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models.
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 74 views · 14 applications · 8d
Data Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
Lead the development and scaling of our scientific knowledge graph: ingesting, structuring, and enriching massive datasets from research literature and global data sources into meaningful, AI-ready insights.
Requirements:
- Strong experience with knowledge graph design and implementation (Neo4j, RDFLib, GraphQL, etc.).
- Advanced Python for data engineering, ETL, and entity processing (Spark/Dask/Polars).
- Proven track record with large dataset ingestion (tens of millions of records).
- Familiarity with life-science or biomedical data (ontologies, research metadata, entity linking).
- Experience with Airflow/Dagster/dbt, and data APIs (OpenAlex, ORCID, PubMed).
- Strong sense of ownership, precision, and delivery mindset (a small ingestion sketch follows the list below).
Nice to Have:
- Domain knowledge in life sciences, biomedical research, or related data models.
- Experience integrating vector/semantic embeddings (Pinecone, FAISS, Weaviate).
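A hedged sketch of a tiny ingestion step of the kind described above, assuming the public OpenAlex works API and the official neo4j Python driver (v5); the URI, credentials, and node label are placeholders:

```python
# Sketch of a tiny ingestion step: fetch a page of works from the public OpenAlex API
# and MERGE them into Neo4j. URI, credentials, and the node label are placeholders.
import requests
from neo4j import GraphDatabase

OPENALEX_URL = "https://api.openalex.org/works"

def ingest_batch(tx, works):
    for w in works:
        # MERGE keeps the load idempotent when a batch is re-run.
        tx.run(
            "MERGE (p:Paper {openalex_id: $id}) "
            "SET p.title = $title, p.year = $year",
            id=w["id"],
            title=w.get("display_name"),
            year=w.get("publication_year"),
        )

def main() -> None:
    works = requests.get(OPENALEX_URL, params={"per-page": 25}, timeout=30).json()["results"]
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        session.execute_write(ingest_batch, works)
    driver.close()

if __name__ == "__main__":
    main()
```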
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
· 66 views · 2 applications · 12d
Data Engineer
Ukraine · Product · 2 years of experience · B2 - Upper Intermediate
Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.
At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.
Your responsibilities:
- Develop and maintain scalable ETL/ELT processes for data collection, transformation, and loading
- Design and implement robust data pipelines for real-time and batch data processing
- Ensure data quality, consistency, and availability for analytical and operational systems
- Optimize query performance and database architecture
- Automate the deployment and monitoring of data infrastructure components
- Work closely with analytics, development, and business teams to implement data-driven solutions
Preferred qualifications:
- 2+ years of relevant experience in data engineering
- We expect you to have solid commercial experience with Python and Groovy
- Deep knowledge of Apache NiFi and hands-on experience in building and administering complex data flows
- Proficient in PostgreSQL, understanding of architecture, experience in query optimization and data schema design
- Experience with Apache Kafka and building real-time data pipelines (a minimal consumer sketch follows this list)
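A hedged sketch of one such real-time step, assuming the kafka-python and psycopg2 packages; the topic, table, and credentials are placeholders, not the bank's actual setup:

```python
# Sketch of one real-time step: consume a Kafka topic and upsert rows into PostgreSQL.
# Assumes kafka-python and psycopg2; topic, table, and credentials are placeholders.
import json
import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payment-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
conn = psycopg2.connect("postgresql://user:password@localhost:5432/dwh")

for message in consumer:
    event = message.value  # expected to carry event_id, amount, created_at
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO ops.payment_events (event_id, amount, created_at) "
            "VALUES (%(event_id)s, %(amount)s, %(created_at)s) "
            "ON CONFLICT (event_id) DO NOTHING",
            event,
        )
    conn.commit()  # per-message commit for simplicity; real pipelines batch this
```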
Will be a plus:
- Experience with Apache Airflow, workflow organization, monitoring and automation
- Working with Apache Spark frameworks for distributed big data processing
- Experience with AWS Athena and S3, interactive query services
- Experience with Apache Iceberg, understanding of modern table formats for data lakes
- Experience with Terraform, practice using the Infrastructure as Code (IaC) approach
- Experience with Kubernetes, containerization and service orchestration
We offer what matters most to you:
- Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
- Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
- Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
- Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
- Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, Agile; corporate library and English lessons
- Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
- Career opportunities: we encourage advancement within the bank across functions
- Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Sql-Oracle, PgSql, MsSql, Sybase. Data management: Kafka, Airflow, Spark, Flink
- Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bank's veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes
Why Raiffeisen Bank?
- Our main value is people, and we support and recognize them, educate them and involve them in changes. Join Raif's team because for us YOU matter!
- One of the largest lenders to the economy and agricultural business among private banks
- Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
- The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, Π‘ΠΠΠΠΠΠ)
- One of the largest IT product teams among the country's banks. One of the largest taxpayers in Ukraine; 6.6 billion UAH were paid in taxes in 2023
Opportunities for Everyone:
- Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
- We support the principles of diversity, equality and inclusiveness
- We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
- We cooperate with students and older people, creating conditions for growth at any career stage
Want to learn more? Follow us on social media:
Facebook, Instagram, LinkedIn
· 1390 views · 137 applications · 7d
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · B1 - Intermediate
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer;
• Experience with SQL;
• Experience with Python;
Optional skills (as a plus):
• Experience with ETL / ELT pipelines;
• Experience with PySpark;
• Experience with Airflow (a minimal DAG sketch follows this list);
• Experience with Databricks;
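For illustration, a minimal two-task DAG sketch, assuming Airflow 2.x and the TaskFlow API; the schedule and task bodies are placeholders:

```python
# Sketch of a two-task DAG (extract then transform), assuming Airflow 2.x
# and the TaskFlow API; the task bodies are placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_etl():
    @task
    def extract() -> list[dict]:
        # A real pipeline would pull rows from an API or a source database here.
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(rows: list[dict]) -> int:
        return sum(r["value"] for r in rows)

    transform(extract())

daily_etl()
```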
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions;
• Work with SQL queries for data extraction and analysis;
• Data analysis and application of data processing algorithms to solve business problems;
We offer:
• Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
• Opportunity to work with the high-skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 81 views · 3 applications · 24d
Data Engineer (AI and Data Pipeline Focus)
Full Remote · Ukraine · Product · 5 years of experience · B1 - Intermediate
Do you want to develop your deep data engineering skills in a complex and high-impact AI product? You have the opportunity to apply your knowledge and grow across all areas of our robust data ecosystem!
Join Aniline.ai! We are a forward-thinking technology company dedicated to harnessing the power of AI across various sectors, including HR, facility monitoring, retail analytics, marketing, and learning support systems. Our mission is to transform data into actionable insights and innovative solutions.
We are seeking a highly skilled Data Engineer with a strong background in building scalable data pipelines, optimizing high-load data processing, and supporting AI/LLM architectures. In this critical role, you will be the backbone of our data operations, ensuring quality, reliability, and efficient delivery of data across our entire platform.
Key Responsibilities & Focus Areas
You will be a key contributor across our platform, with a primary focus on the following data engineering areas:
1. Data Pipeline Design & Automation (Primary Focus)
- Design, build, and maintain scalable data pipelines and ETL/ELT processes.
- Automate the end-to-end data pipeline for the periodic collection, processing, and deployment of results to production. This includes transitioning manual processes to robust automated solutions.
- Manage the ingestion of raw data (company reviews from various sources) into our GCP Data Lake and its subsequent transformation and loading into the GCP Data Warehouse (e.g., BigQuery); a minimal load sketch follows this list.
- Set up and maintain systems for pipeline orchestration.
- Develop ETL/ELT processes to update client-facing databases like Firebase and refresh reference data in PostgreSQL.
- Integrate data from various sources, ensuring data quality and reliability for analytics and reporting.
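A hedged sketch of one such load step, assuming the google-cloud-bigquery client; the bucket, dataset, and table names are placeholders:

```python
# Sketch of a GCS-to-BigQuery load step, assuming the google-cloud-bigquery client;
# the bucket, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

def load_reviews(uri: str = "gs://example-data-lake/reviews/*.json") -> None:
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(
        uri, "example_dataset.company_reviews", job_config=job_config
    )
    load_job.result()  # block until the load job finishes
    print(f"Loaded {load_job.output_rows} rows")

if __name__ == "__main__":
    load_reviews()
```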
2. AI Data Support & Integration
- Engineer data flow specifically for AI/LLM solutions, focusing on contextual retrieval and input data preparation.
- Automate the pipeline for updating contexts in the Pinecone vector database for the Retrieval-Augmented Generation (RAG) architecture; a small upsert sketch follows this list.
- Prepare processed and analyzed data for loading into result tables (including statistics and logs), which serve as the foundation for LLM inputs and subsequent client reporting.
- Perform general Python development tasks to maintain and support existing data-handling code, including LangChain logic and data processing within Jupyter Notebooks.
- Collaborate with cross-functional teams (data scientists and AI engineers) to ensure data requirements are met for LLM solution deployment and prompt optimization.
- Perform data analysis and reporting using BI tools (Looker, Power BI, Tableau, etc.).
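A hedged sketch of such a context-refresh step, assuming the current pinecone and openai Python SDKs (v3+/v1+ style clients); the index name, embedding model, and metadata fields are placeholders:

```python
# Sketch of a context-refresh step for RAG, assuming the pinecone and openai SDKs;
# the index name, model, and metadata fields are placeholders.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                    # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_KEY")  # placeholder key
index = pc.Index("company-reviews")         # hypothetical index name

def refresh_contexts(docs: list[dict]) -> None:
    """docs: [{'id': ..., 'text': ...}] chunks produced by the upstream pipeline."""
    texts = [d["text"] for d in docs]
    emb = openai_client.embeddings.create(model="text-embedding-3-small", input=texts)
    vectors = [
        {"id": d["id"], "values": e.embedding, "metadata": {"text": d["text"]}}
        for d, e in zip(docs, emb.data)
    ]
    index.upsert(vectors=vectors)

refresh_contexts([{"id": "review-1", "text": "Great onboarding and clear growth paths."}])
```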
3. Infrastructure & Optimization
- Work with cloud platforms (preferably GCP) to manage, optimize, and secure data lakes and data warehouses.
- Apply knowledge of algorithmic skills and complexity analysis (including Big O notation) to select the most efficient algorithms for high-load data processing.
- Conduct thorough research and analysis of existing infrastructure, data structures, and code bases to ensure seamless integration and stability of new developments.
Requirements
- Proven experience as a Data Engineer, focusing on building and optimizing ETL/ELT processes for large datasets.
- Strong proficiency in Python development and the data stack (Pandas, NumPy).
- Hands-on experience with cloud-based data infrastructure (GCP is highly preferred), including Data Warehouses (BigQuery) and Data Lakes.
- Familiarity with database technologies including PostgreSQL, NoSQL (Firebase), and, crucially, vector databases (Pinecone, FAISS, or similar).
- Experience supporting LLM-based solutions and frameworks like LangChain is highly desirable.
- Solid grasp of software engineering best practices, including Git and CI/CD.
Nice-to-Have Skills
- Proven track record in building and optimizing ETL/ELT processes for large datasets.
- Experience integrating OpenAI API or similar AI services.
- Experience in a production environment with multi-agent systems.
Next Steps
We are keen to see your practical data engineering experience! We would highly value a submission that includes a link to a Git repository demonstrating your expertise in building a robust data pipeline, especially one that interfaces with LLM/RAG components (e.g., updating a vector database).
Ready to architect our next-generation data ecosystem? Apply today!
· 48 views · 2 applications · 14d
Senior Market Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate
We are looking for a skilled and experienced Software Engineer to join our team, building high-performance real-time data pipelines to process financial market data, including security prices for various asset classes such as equities, options, futures, and more. You will play a key role in designing, developing, and optimizing data pipelines that handle large volumes of data with low latency and high throughput, ensuring that our systems can process market data in real time and batch modes.
Key Responsibilities:
- Architect, develop, and enhance market data systems
- Contribute to the software development lifecycle in a collaborative team environment, including design, implementation, testing, and support
- Design highly efficient, scalable, mission-critical systems
- Maintain good software quality and test coverage
- Participate in code reviews
- Troubleshoot incidents and reported bugs
Requirements:
- Bachelorβs or advanced degree in Computer Science or Electrical Engineering
- Proficiency in the following programming languages: Java, Python or Go
- Prior experience working with equities or futures market data, such as CME data, US Equities Options, is a must
- Experience in engineering and supporting Market Data feed handlers
- Technically fluent (Python, SQL, JSON, ITCH, FIX, CSV); comfortable discussing pipelines and validation specs.
- Prior experience working on tick data storage, such as KDB+ or ClickHouse
- Familiarity with time series analysis (a small tick-to-bars resampling sketch follows this list)
- Good understanding of the Unix/Linux programming environment
- Expertise with SQL and relational databases
- Excellent problem-solving and communication skills
- Self-starter who works well in a fast-paced environment
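To illustrate the kind of time-series work mentioned above, a small hedged sketch of tick-to-bar aggregation with pandas; the column names and sample ticks are made up:

```python
# Sketch of basic tick-to-bar aggregation with pandas; column names and the
# sample ticks are made up for illustration.
import pandas as pd

ticks = pd.DataFrame(
    {
        "ts": pd.to_datetime(
            ["2024-01-02 09:30:00.10", "2024-01-02 09:30:15.42", "2024-01-02 09:31:02.07"]
        ),
        "price": [187.10, 187.25, 186.90],
        "size": [100, 50, 200],
    }
).set_index("ts")

# Resample raw ticks into 1-minute OHLCV bars.
bars = ticks["price"].resample("1min").ohlc()
bars["volume"] = ticks["size"].resample("1min").sum()
print(bars)
```

Production feed handlers would do this on a tick store such as KDB+ or ClickHouse; the sketch only shows the aggregation logic.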
· 70 views · 4 applications · 11d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate · MilTech
Who We Are
OpenMinds is a cognitive defence tech company countering authoritarian influence in the battle for free and open societies. We work with over 30 governments and organisations worldwide, including Ukraine, the UK, and NATO member governments, leading StratCom agencies, and research institutions.
Our expertise lies in accessing restricted and high-risk environments, including conflict zones and closed platforms.
We combine ML technologies with deep local expertise. Our team, based in Kyiv, Lviv, London, Ottawa, and Washington, DC, includes behavioural scientists, ML/AI engineers, data journalists, communications experts, and regional specialists.
Our core values are: speed, experimentation, elegance and focus. We are expanding the team and welcome passionate, proactive, and resourceful professionals who are eager to contribute to the global fight in cognitive warfare.
Who weβre looking for
OpenMinds is seeking a skilled and curious Data Engineer who's excited to design and build data systems that power meaningful insight. You'll work closely with a passionate team of behavioral scientists and ML engineers on creating a robust data infrastructure that supports everything from large-scale narrative tracking to sentiment analysis.
In the position you will:
- Take ownership of our multi-terabyte data infrastructure, from data ingestion and orchestration to transformation, storage, and lifecycle management
- Collaborate with data scientists, analysts, ML engineers, and domain experts to develop impactful data solutions
- Optimize and troubleshoot data infrastructure to ensure high performance, cost-efficiency, scalability, and resilience
- Stay up-to-date with trends in data engineering and apply modern tools and practices
- Define and implement best practices for data processing, storage, and governance
- Translate complex requirements into efficient data workflows that support threat detection and response
We are a perfect match if you have:
- 5+ years of hands-on experience as a Data Engineer, with a proven track record of leading complex data projects from design to production
- Highly skilled in SQL and Python for advanced data processing, pipeline development, and optimization
- Deep understanding of software engineering best practices, including SOLID, error handling, observability, performance tuning, and modular architecture
- Ability to write, test and deploy production-ready code
- Extensive experience in database design, data modeling, and modern data warehousing, including ETL orchestration using Airflow or equivalent
- Familiarity with Google Cloud Platform (GCP) and its data ecosystem (BigQuery, GCS, Pub/Sub, Cloud Run, Cloud Functions, Looker); a minimal Pub/Sub ingestion sketch follows this list
- Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances
- Experience in DevOps (Docker/K8s, IaC, CI/CD) and MLOps
- Fluent in English with excellent communication and cross-functional collaboration skills
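A hedged sketch of a streaming ingestion entry point on the GCP stack named above, assuming the google-cloud-pubsub client; the project and subscription names are placeholders:

```python
# Sketch of a streaming ingestion entry point, assuming google-cloud-pubsub;
# project and subscription names are placeholders.
from concurrent import futures
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "raw-posts-sub")

def handle_message(message: pubsub_v1.subscriber.message.Message) -> None:
    # A real pipeline would validate the payload and write it to GCS/BigQuery here.
    print("Received:", message.data[:80])
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=handle_message)
try:
    streaming_pull_future.result(timeout=30)  # listen briefly in this sketch
except futures.TimeoutError:
    streaming_pull_future.cancel()
```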
We offer:
- Work in a fast-growing company with proprietary AI technologies, solving the most difficult problems in the domains of social behaviour analytics and national security
- Competitive market salary
- Opportunity to present your work at tier-1 conferences, panels, and briefings behind closed doors
- Work face-to-face with world-leading experts in their fields, who are our partners and friends
- Flexible work arrangements, including adjustable hours, location, and remote/hybrid options
- Unlimited vacation and leave policies
- Opportunities for professional development within a multidisciplinary team, boasting experience from academia, tech, and intelligence sectors
- A work culture that values resourcefulness, proactivity, and independence, with a firm stance against micromanagement
· 22 views · 5 applications · 11d
Senior ML/GenAI Engineer
Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate
Senior ML Engineer
Full-time / Remote
About Us
ExpoPlatform is a UK-based company founded in 2013, delivering advanced technology for online, hybrid, and in-person events across 30+ countries. Our platform provides end-to-end solutions for event organizers, including registration, attendee management, event websites, and networking tools.
Role Responsibilities:
- Develop AI Agents, tools for AI Agents, API as a service
- Prepare development and deployment documentation
- Participate in R&D activities of Data Science team
Required Skills & Experience:
- 5+ years of experience with DL frameworks (PyTorch and/or TensorFlow)
- 5+ years of experience in software development in Python
- Hands-on experience with LLM, RAG, and AI Agent development
- Experience with Amazon SageMaker, Amazon Bedrock, LangChain, LangGraph, LangSmith, LlamaIndex, Hugging Face, OpenAI
- Hands-on experience using AI tools for software development to increase efficiency and code quality, including AI tools for code review
- Knowledge of SQL, NoSQL, and vector databases
- Understanding of embedding vectors and semantic search
- Proficiency in Git (Bitbucket) and Docker
- Upper-Intermediate (B2+) or higher level of English
Would Be a Plus:
- Hands-on experience with SLM and LLM fine-tuning
- Education in Data Science, Computer Science, Applied Math or similar
- AWS certifications (AWS Certified ML or equivalent)
- Experience with TypeSense
- Experience with speech recognition, speech-to-text ML models
What We Offer:
- Career growth with an international team.
- Competitive salary and financial stability.
- Flexible working hours (Mon-Fri, 8 hours).
- Free English courses and a budget for education
· 32 views · 3 applications · 13d
Senior Data Engineer at Payments AI Team
Hybrid Remote · Ukraine · Product · 3 years of experience · B2 - Upper Intermediate
Job Description
As a Senior Data Engineer on the Wix Payments AI Team, you'll play a crucial role in the design and integration of emerging AI solutions into the Payments product. You'll have significant responsibilities which include:
- Developing & maintaining infrastructure for both generative AI and classical data science applications.
- Researching emerging AI technology stacks and methodologies to identify optimal solutions.
- Monitoring data pipeline performance and troubleshooting issues.
- Leading & driving the entire lifecycle of a typical team project: ideation → map business constraints, research and evaluate alternative solutions → design & implement a proof-of-concept in collaboration with various stakeholders across the organization, including data engineers, analysts, data scientists, and product managers.
Qualifications
- Proficient in Trino SQL (with the ability to craft complex queries; a small client sketch follows this list) and highly skilled in Python, with expertise in Python frameworks (e.g., Streamlit, Airflow, Pyless).
- Ability to design, prototype, code, test and deploy production-ready systems.
- Experience with a versatile range of infrastructure, server and frontend tech stacks.
- Experience implementing and integrating GenAI models, particularly LLMs, into production systems.
- Experience with AI agentic technologies (e.g. MCP, A2A, ADK) - an advantage.
- An independent and quick learner.
- Passion for product and technical leadership.
- Business-oriented thinking and skills: data privacy and system security awareness, understanding of business objectives and how to measure their key performance indicators (KPIs), the ability to derive and prioritize actionable tasks from complex business problems, and business-impact-guided decision making.
- Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances.
- Fluent in English with strong communication abilities
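A hedged sketch of running one analytical query through the trino Python client; the host, catalog, and table names are placeholders, not Wix's actual setup:

```python
# Sketch of one analytical query through the trino Python client; the host,
# catalog, and table names are placeholders, not an actual production setup.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="payments",
)
cur = conn.cursor()
cur.execute(
    """
    SELECT merchant_id, COUNT(*) AS txn_count, SUM(amount) AS total_amount
    FROM transactions
    WHERE created_at >= DATE '2024-01-01'
    GROUP BY merchant_id
    ORDER BY total_amount DESC
    LIMIT 10
    """
)
for row in cur.fetchall():
    print(row)
```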
About the Team
We're the Wix Payments team.
We provide Wix users with the best way to collect payments from their customers and manage their Wix income online, in person, and on-the-go. We're passionate about crafting the best experience for our users, and empowering any business on Wix to realize its full financial potential. We have developed our own custom payment processing solution that blends many integrations into one clean and intuitive user interface. We also build innovative products that help our users manage their cash and grow their business. The Payments AI team is instrumental in promoting AI-based capabilities within the payments domain and is responsible for ensuring the company is always at the forefront of the AI revolution.
· 29 views · 1 application · 14d
Data Quality Engineer
Office Work · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · MilTech
We're building a large-scale data analytics ecosystem powered by Microsoft Azure and Power BI. Our team integrates, transforms, and visualizes data from multiple sources to support critical business decisions. Data quality is one of our top priorities, and we're seeking an engineer who can help us enhance the reliability, transparency, and manageability of our data landscape.
Your responsibilities:
- Develop and maintain data quality monitoring frameworks within the Azure ecosystem (Data Factory, Data Lake, Databricks).
- Design and implement data quality checks, including validation, profiling, cleansing, and standardization (a two-rule sketch follows this list).
- Detect data anomalies and design alerting systems (rules, thresholds, automation).
- Collaborate with Data Engineers, Analysts, and Business stakeholders to define data quality criteria and expectations.
- Ensure high data accuracy and integrity for Power BI reports and dashboards.
- Document data validation processes and recommend improvements to data sources.
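A hedged sketch of two simple data-quality rules (completeness and freshness) expressed as SQL, assuming pyodbc against an Azure SQL database; table and column names are placeholders:

```python
# Sketch of two simple data-quality rules (completeness and freshness) run as SQL,
# assuming pyodbc against an Azure SQL database; table and column names are placeholders.
import pyodbc

CHECKS = {
    "orders_null_customer":
        "SELECT COUNT(*) FROM dbo.orders WHERE customer_id IS NULL",
    "orders_not_loaded_today":
        "SELECT CASE WHEN MAX(load_date) < CAST(GETDATE() AS date) THEN 1 ELSE 0 END FROM dbo.orders",
}

def run_checks(conn_str: str) -> dict:
    results = {}
    with pyodbc.connect(conn_str) as conn:
        cur = conn.cursor()
        for name, sql in CHECKS.items():
            cur.execute(sql)
            results[name] = cur.fetchone()[0]  # 0 means the rule passed
    return results

if __name__ == "__main__":
    failures = {name: val for name, val in run_checks("DSN=azure_dwh").items() if val}
    if failures:
        print("Data quality alerts:", failures)
```

In practice such rules would be scheduled (e.g., in Data Factory or Databricks) and wired to alerting rather than printed.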
Requirements:
- 3+ years of experience in a Data Quality, Data Engineering, or BI Engineering role.
- Hands-on experience with Microsoft Azure services (Data Factory, SQL Database, Data Lake).
- Advanced SQL skills (complex queries, optimization, data validation).
- Familiarity with Power BI or similar BI tools.
- Understanding of DWH principles and ETL/ELT pipelines.
- Experience with data quality frameworks and metrics (completeness, consistency, timeliness).
- Knowledge of Data Governance, Master Data Management, and Data Lineage concepts.
Would be a plus:
- Experience with Databricks or Apache Spark.
- DAX and Power Query (M) knowledge.
- Familiarity with DataOps or DevOps principles in a data environment.
- Experience in creating automated data quality dashboards in Power BI.
· 6 views · 0 applications · 5d
IT Infrastructure Administrator
Office Work · Ukraine (Dnipro) · Product · 1 year of experience
Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.
Key responsibilities:
- Administration of Active Directory
- Managing group policies
- Managing services via PowerShell
- Administration of VMWare platform
- Administration of Azure Active Directory
- Administration of Exchange 2016/2019 mail servers
- Administration of Exchange Online
- Administration of VMWare Horizon View
Required professional knowledge and skills:
- Experience in writing automation scripts (PowerShell, Python, etc.)
- Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
- Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
- Experience with Veeam Backup & Replication, VMWare vSphere (vCenter, DRS, vMotion, HA), VMWare Horizon View
- Windows Server 2019/2025 (installation, configuration, and adaptation)
- Diagnostics and troubleshooting
- Working with anti-spam systems
- Managing mail transport systems (exim) and monitoring systems (Zabbix)
We offer:
- Interesting projects and tasks
- Competitive salary (discussed during the interview)
- Convenient work schedule: Mon-Fri, 9:00-18:00; partial remote work possible
- Official employment, paid vacation, and sick leave
- Probation period: 2 months
- Professional growth and training (internal training, reimbursement for external training programs)
- Discounts on Biosphere Corporation products
- Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)
Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).
Learn more about Biosphere Corporation, our strategy, mission, and values at:
http://biosphere-corp.com/
https://www.facebook.com/biosphere.corporation/
Join our team of professionals!
By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
If your application is successful, we will contact you within 1-2 business days.
· 19 views · 1 application · 14d
PHP developer/ Data Engineer
Hybrid Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate · Ukrainian Product
Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return.
Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free.
Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.
Requirements:
- Design and develop scalable backend services using PHP 7 / 8.
- Strong understanding of OOP concepts, design patterns, and clean code principles
- Extensive experience in MySQL, with expertise in database design, query optimization, and indexing.
- Experience working with NoSQL databases (e.g., Redis).
- Proven experience working on high-load projects
- Understanding of ETL processes and data integration
- Experience working with ClickHouse
- Strong experience with API development
- Strong knowledge of Symfony 6+, yii2
- Experience with RabbitMQ
Nice to Have:
- AWS services
- Payment API (Stripe, SolidGate etc.)
- Docker, GitLab CI
- Python
Responsibilities:
- Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
- API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
- Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
- Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy.
- Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.
What we offer:
For personal growth:
- A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
- An educational allowance to ensure that your skills stay sharp;
- English and German classes to strengthen your capabilities and widen your knowledge.
For comfort:
- A great environment where you'll work with true professionals and amazing colleagues whom you'll call friends quickly;
- The choice of working remotely or in our office space located on Podil, equipped with everything you might need for productive and comfortable work.
For health:
- Medical insurance;
- Twenty-one days of paid sick leave per year;
- Healthy fruit snacks full of vitamins to keep you energized
For leisure:
- Twenty-one days of paid vacation per year;
- Fun times at our frequent team-building activities.
· 40 views · 7 applications · 21d
GenAI Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · C1 - Advanced
Who we are?
We are building a next-generation AI-native sales automation platform for B2B teams. Our goal is to change the very paradigm of how people interact with business applications. Manual data entry becomes a thing of the past as the platform proactively connects to your communication and information channels. It seamlessly captures, structures, and transforms data into real-time, actionable awareness.
You no longer work for the tool. The tool works for you, anticipating your needs, surfacing the right context at the right moment, and guiding your next steps with intelligence and precision.
Our vision is to give teams an always-on AI-driven partner that lets them focus entirely on creating value and closing deals.
Philosophy
We value open-mindedness, rapid delivery, and impact. You're not just coding features; you shape architecture, UX, and product direction. Autonomy, accountability, and a startup builder's mindset are essential.
Requirements
- Strong backend: Python, FastAPI, Webhooks, Docker, Kubernetes, Git, CI/CD.
- Hands-on with OpenAI-family LLMs, LangChain/LangGraph/LangSmith, prompt engineering, agentic RAG, and vector stores (Azure AI Search, Pinecone, Neo4j, FAISS); a tiny FAISS lookup sketch follows this list.
- SQL, Pandas, Graph DBs (Neo4j), NetworkX, advanced ETL/data cleaning, Kafka/Azure EventHub.
- Proven experience building and operating retrieval-augmented generation (RAG) pipelines.
- Familiarity with graph algorithms (community detection, similarity, centrality).
- Good English (documentation, API, teamwork).
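A hedged sketch of a tiny vector-store lookup with FAISS and NumPy; the embeddings here are random stand-ins for vectors produced by a real embedding model:

```python
# Sketch of a tiny vector-store lookup with FAISS and NumPy; the embeddings are
# random stand-ins for vectors produced by a real embedding model.
import faiss
import numpy as np

dim = 384
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype=np.float32)  # pretend document embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 search; fine for small corpora
index.add(doc_vectors)

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)
print("Nearest document ids:", ids[0])
```

In a RAG pipeline the returned ids would map back to text chunks fed into the LLM prompt.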
Nice to Have
- Generative UI (React).
- Multi-agent LLM frameworks.
- Big Data pipelines in cloud (Azure preferred).
- Production-grade ML, NLP engineering, graph ML.
Responsibilities
- Design, deploy, and maintain GenAI/RAG pipelines for the product
- Integrate LLM/agentic assistants into user business flows.
- Source, ingest, cleanse, and enrich external data streams.
- Build vector search, embedding stores, and manage knowledge graphs.
- Explore and implement new ML/GenAI frameworks.
- Mentor developers and encourage team knowledge-sharing.
What else is important:
- Startup drive, proactivity, independence.
- Willingness to relocate/freedom to travel in Europe; full time.
- Eagerness to integrate latest AI frameworks into real-world production.
Our Team
Agile, tight-knit product group (5-6 experts) with deep experience in SaaS, AI, graph data, and cloud delivery. We move fast, give each member autonomy, and engineer for impact, not just features.
Who makes the final decision:
The team makes the decision based on a technical interview.
Our benefits
- Startup culture: minimal bureaucracy, maximum flexibility
- Remote-first: work from anywhere
- Unlimited vacation β we value results, not hours spent
- Opportunity to grow together with an AI-first product company
- Direct impact on a breakthrough AI-native product
Recruitment process
- HR interview (VP Team) → Technical prescreen (Q&A)
- Technical interview with CTO/Data Officer (real-life case)
- Offer
· 109 views · 3 applications · 29d
Data Engineer (NLP-Focused)
Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate
About us:
Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.
About the client:
Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.
About the role:
We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.
You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.
Requirements:
- Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
- NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project's focus. Understanding of FineWeb2 or a similar processing pipeline approach.
- Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
- Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
- Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
- Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
- Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
- Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
Responsibilities:
- Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
- Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
- Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
- Implementation of NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering of toxic content, de-duplication, de-noising, and detection and deletion of personal data (a minimal cleaning sketch follows this list).
- Formation of specific SFT/RLHF datasets from existing data, including data augmentation/labeling with LLM as teacher.
- Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
- Automate data processing workflows and ensure their scalability and reliability.
- Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
- Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
- Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
- Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
- Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
- Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
- Manage data security, access, and compliance.
- Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
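For illustration, a hedged sketch of a minimal cleaning step of the kind described above (Unicode normalization, language filtering, and exact de-duplication by hash), assuming the langdetect package; the length cut-off is arbitrary:

```python
# Sketch of a minimal text-cleaning step: Unicode normalization, language filtering,
# and exact de-duplication by hash; assumes langdetect, length cut-off is arbitrary.
import hashlib
import unicodedata
from langdetect import detect

def clean_corpus(texts: list[str], lang: str = "uk") -> list[str]:
    seen, cleaned = set(), []
    for raw in texts:
        text = unicodedata.normalize("NFC", raw).strip()
        if len(text) < 20:            # drop fragments too short to be useful
            continue
        try:
            if detect(text) != lang:  # keep only the target language
                continue
        except Exception:
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:            # exact de-duplication
            continue
        seen.add(digest)
        cleaned.append(text)
    return cleaned

print(clean_corpus(["Привіт, світе! Це приклад речення для перевірки.", "too short"]))
```

Real corpora would add fuzzy de-duplication, toxicity filtering, and PII removal on top of this skeleton.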
The company offers:
- Competitive salary.
- Equity options in a fast-growing AI company.
- Remote-friendly work culture.
- Opportunity to shape a product at the intersection of AI and human productivity.
- Work with a passionate, senior team building cutting-edge tech for real-world business use.