Jobs

  • · 285 views · 31 applications · 21d

    Junior Data Engineer

    Full Remote · Ukraine · 0.5 years of experience · B1 - Intermediate

    We are looking for a Data Engineer to join our team!

     

    The Data Engineer is responsible for designing, maintaining, and optimizing data infrastructure for data collection, management, transformation, and access.

    He/she will be in charge of creating pipelines that convert raw data into usable formats for data scientists and other data consumers to utilize.

    The Data Engineer should be comfortable working with RDBMSs and have good knowledge of the appropriate RDBMS programming language(s).

    The Data Engineer processes client data according to the relevant specifications and documentation.

     

    *Ukrainian students in Ukraine (2nd year or higher).

     

         Main responsibilities:

    • Design and develop ETL pipelines;
    • Data integration and cleansing;
    • Implement stored procedures and functions for data transformations (a minimal sketch follows this list);
    • ETL process performance optimization.
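
    By way of illustration, a minimal, self-contained sketch of such an ETL step in Python follows; sqlite3 stands in for the production RDBMS, and the table and column names are hypothetical, not part of the vacancy:

```python
# Minimal ETL sketch: extract raw rows, transform them, load a clean table.
# sqlite3 is a stand-in for the production RDBMS; names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [(1, "1050"), (2, "2325"), (3, None)])

# Transform: cast text to numeric dollars, drop rows with missing amounts.
rows = conn.execute("SELECT id, amount_cents FROM raw_orders").fetchall()
clean = [(i, int(a) / 100.0) for i, a in rows if a is not None]

# Load into the target table consumed by analysts and other data consumers.
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount_usd REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)
conn.commit()
print(conn.execute("SELECT * FROM orders").fetchall())
```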

     

         Skills and Requirements:

    • Experience with ETL tools (taking charge of ETL processes and performing tasks involving data analytics, data science, business intelligence, and system architecture);
    • Database/DBA/architect background (understanding of data storage requirements and warehouse architecture design; basic expertise with SQL/NoSQL databases and data mapping; awareness of the Hadoop environment);
    • Data analysis expertise (data modeling, mapping, and formatting);
    • Knowledge of scripting languages (Python is preferable);
    • Troubleshooting skills (data processing systems handle large amounts of data and include multiple structural elements; the Data Engineer is responsible for keeping the system functioning properly, which requires strong analytical thinking and troubleshooting skills);
    • Tableau experience is good to have;
    • Software engineering background is good to have;
    • Good organizational and task management skills;
    • Effective self-motivator;
    • Good communication skills in written and spoken English.

     

         Salary Range

    Compensation packages are based on several factors including but not limited to: skill set, depth of experience, certifications, and specific work location.

  • · 533 views · 49 applications · 4d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Working with high-volume tables (10M+ rows; see the chunked-processing sketch below).
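
    As a hedged illustration of the last point, one common pattern for 10M+ row tables is streaming aggregation in chunks so memory use stays flat; sqlite3 and the events table below are stand-ins for demonstration, not Dataforest's actual stack:

```python
# Chunked aggregation over a large SQL table with pandas; memory stays flat
# because only one chunk is materialized at a time.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 1000, float(i)) for i in range(100_000)])

totals = {}
for chunk in pd.read_sql_query("SELECT user_id, amount FROM events",
                               conn, chunksize=10_000):
    partial = chunk.groupby("user_id")["amount"].sum()
    for user, amount in partial.items():
        totals[user] = totals.get(user, 0.0) + amount

print(len(totals), "users aggregated")
```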


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and Machine Learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, integration logic);
    • Implement different data processing algorithms;
    • Involvement in creating forecasting, recommendation, and classification models.

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • · 74 views · 14 applications · 8d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate

    Lead the development and scaling of our scientific knowledge graph: ingesting, structuring, and enriching massive datasets from research literature and global data sources into meaningful, AI-ready insights.

     

    Requirements: 

    - Strong experience with knowledge graph design and implementation (Neo4j, RDFLib, GraphQL, etc.). 

    - Advanced Python for data engineering, ETL, and entity processing (Spark/Dask/Polars). 

    - Proven track record with large dataset ingestion (tens of millions of records). 

    - Familiarity with life-science or biomedical data (ontologies, research metadata, entity linking). 

    - Experience with Airflow/Dagster/dbt, and data APIs (OpenAlex, ORCID, PubMed). 

    - Strong sense of ownership, precision, and delivery mindset.

    Nice to Have:

    - Domain knowledge in life sciences, biomedical research, or related data models. 

    - Experience integrating vector/semantic embeddings (Pinecone, FAISS, Weaviate).

     

    We offer:

    • Attractive financial package

    • Challenging projects

    • Professional & career growth

    • Great atmosphere in a friendly small team

  • · 66 views · 2 applications · 12d

    Data Engineer

    Ukraine · Product · 2 years of experience · B2 - Upper Intermediate

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.

    At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.

    Your responsibilities:

    • Develop and maintain scalable ETL/ELT processes for data collection, transformation, and loading
    • Design and implement robust data pipelines for real-time and batch data processing
    • Ensure data quality, consistency, and availability for analytical and operational systems
    • Optimize query performance and database architecture
    • Automate the deployment and monitoring of data infrastructure components
    • Work closely with analytics, development, and business teams to implement data-driven solutions

    Preferred qualifications:

    • 2+ years of relevant experience in data engineering
    • Solid commercial experience with Python and Groovy
    • Deep knowledge of Apache NiFi and hands-on experience in building and administering complex data flows
    • Proficient in PostgreSQL, understanding of architecture, experience in query optimization and data schema design
    • Experience with Apache Kafka, building real-time data pipelines

    Will be a plus:

    • Experience with Apache Airflow: workflow organization, monitoring, and automation (see the minimal DAG sketch after this list)
    • Working with Apache Spark frameworks for distributed big data processing
    • Experience with AWS Athena and S3, interactive query services
    • Experience with Apache Iceberg, understanding of modern table formats for data lakes
    • Experience with Terraform, practice using the Infrastructure as Code (IaC) approach
    • Experience with Kubernetes, containerization and service orchestration
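
    For orientation, a minimal Airflow DAG sketch using the TaskFlow API (Airflow 2.x assumed); the DAG id, schedule, and task bodies are illustrative assumptions, not the bank's actual pipeline:

```python
# Minimal Airflow 2.x TaskFlow DAG: a daily extract -> load workflow.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list:
        # Stand-in for pulling rows from a NiFi/Kafka-fed source.
        return [{"id": 1, "amount": 10.5}]

    @task
    def load(rows: list) -> None:
        # Stand-in for writing to PostgreSQL.
        print(f"loading {len(rows)} rows")

    load(extract())

example_etl()
```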

    We offer what matters most to you:

    • Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
    • Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, and Agile; corporate library and English lessons
    • Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career opportunities: we encourage advancement within the bank across functions
    • Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Oracle, PostgreSQL, MSSQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
    • Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bank’s veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes

    Why Raiffeisen Bank?

    • Our main value is people, and we support and recognize them, educate them, and involve them in changes. Join Raif's team because for us YOU matter!
    • One of the largest lenders to the economy and agricultural business among private banks
    • Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, СМІЛИВІ)
    • One of the largest IT product teams among the country's banks. One of the largest taxpayers in Ukraine: 6.6 billion UAH paid in taxes in 2023

    Opportunities for Everyone:

    • Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? Follow us on social media:

    Facebook, Instagram, LinkedIn

  • · 1390 views · 137 applications · 7d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · B1 - Intermediate

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    • 6+ months of experience as a Data Engineer;

    • Experience with SQL;

    • Experience with Python;

     

     

    Optional skills (as a plus):

    • Experience with ETL / ELT pipelines;

    • Experience with PySpark;

    • Experience with Airflow;

    • Experience with Databricks;

     

    Key Responsibilities:

    • Apply data processing algorithms;

    • Create ETL/ELT pipelines and data management solutions;

    • Work with SQL queries for data extraction and analysis;

    • Data analysis and application of data processing algorithms to solve business problems.

     

     

    We offer:

    • Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;

    • Opportunity to work with a highly skilled engineering team on challenging projects;

    • Interesting projects with new technologies;

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • · 81 views · 3 applications · 24d

    Data Engineer (AI and Data Pipeline Focus)

    Full Remote · Ukraine · Product · 5 years of experience · B1 - Intermediate

    Do you want to develop your deep data engineering skills in a complex and high-impact AI product? You have the opportunity to apply your knowledge and grow across all areas of our robust data ecosystem!

     

    Join Aniline.ai! We are a forward-thinking technology company dedicated to harnessing the power of AI across various sectors, including HR, facility monitoring, retail analytics, marketing, and learning support systems. Our mission is to transform data into actionable insights and innovative solutions.

    We are seeking a highly skilled Data Engineer with a strong background in building scalable data pipelines, optimizing high-load data processing, and supporting AI/LLM architectures. In this critical role, you will be the backbone of our data operations, ensuring quality, reliability, and efficient delivery of data across our entire platform.

    Key Responsibilities & Focus Areas

    You will be a key contributor across our platform, with a primary focus on the following data engineering areas:

    1. 💾 Data Pipeline Design & Automation (Primary Focus)

    • Design, build, and maintain scalable data pipelines and ETL/ELT processes.
    • Automate the end-to-end data pipeline for the periodic collection, processing, and deployment of results to production. This includes transitioning manual processes to robust automated solutions.
    • Manage the ingestion of raw data (company reviews from various sources) into our GCP Data Lake and its subsequent transformation and loading into the GCP Data Warehouse (e.g., BigQuery); a load sketch follows this list.
    • Set up and maintain systems for pipeline orchestration.
    • Develop ETL/ELT processes to update client-facing databases like Firebase and refresh reference data in PostgreSQL.
    • Integrate data from various sources, ensuring data quality and reliability for analytics and reporting.
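
    As a rough illustration of the ingestion step above, the sketch below loads newline-delimited JSON from a GCS data-lake path into BigQuery with the google-cloud-bigquery client; the bucket, dataset, and table names are hypothetical:

```python
# Load raw JSON review files from a GCS data lake into a BigQuery table.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # infer the schema from the raw files
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
load_job = client.load_table_from_uri(
    "gs://example-datalake/reviews/2024-06-01/*.json",  # hypothetical path
    "example-project.reviews.raw_reviews",              # hypothetical table
    job_config=job_config,
)
load_job.result()  # block until the load job completes
```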

    2. 🧠 AI Data Support & Integration

    • Engineer data flow specifically for AI/LLM solutions, focusing on contextual retrieval and input data preparation.
    • Automate the pipeline for updating contexts in the Pinecone vector database for Retrieval-Augmented Generation (RAG) architecture (see the upsert sketch after this list).
    • Prepare processed and analyzed data for loading into result tables (including statistics and logs), which serve as the foundation for LLM inputs and subsequent client reporting.
    • Perform general Python development tasks to maintain and support existing data-handling code, including LangChain logic and data processing within Jupyter Notebooks.
    • Collaborate with cross-functional teams (data scientists and AI engineers) to ensure data requirements are met for LLM solution deployment and prompt optimization.
    • Perform data analysis and reporting using BI tools (Looker, Power BI, Tableau, etc.).
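
    A hedged sketch of such a context-refresh step, assuming the current openai and pinecone Python clients; the index name, documents, and embedding model below are illustrative, not Aniline's actual configuration:

```python
# Refresh RAG contexts: embed documents and upsert them into Pinecone.
from openai import OpenAI      # reads OPENAI_API_KEY from the environment
from pinecone import Pinecone  # reads PINECONE_API_KEY from the environment

oai = OpenAI()
index = Pinecone().Index("company-reviews")  # hypothetical index name

docs = {"rev-1": "Great place to work", "rev-2": "Long hours, low pay"}
emb = oai.embeddings.create(model="text-embedding-3-small",
                            input=list(docs.values()))
index.upsert(vectors=[
    {"id": doc_id, "values": e.embedding, "metadata": {"text": text}}
    for (doc_id, text), e in zip(docs.items(), emb.data)
])
```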

    3. βš™οΈ Infrastructure & Optimization

    • Work with cloud platforms (preferably GCP) to manage, optimize, and secure data lakes and data warehouses.
    • Apply algorithmic knowledge and complexity analysis (including Big O notation) to select the most efficient algorithms for high-load data processing.
    • Conduct thorough research and analysis of existing infrastructure, data structures, and code bases to ensure seamless integration and stability of new developments.

    Requirements

    • Proven experience as a Data Engineer, focusing on building and optimizing ETL/ELT processes for large datasets.
    • Strong proficiency in Python development and the data stack (Pandas, NumPy).
    • Hands-on experience with cloud-based data infrastructure (GCP is highly preferred), including Data Warehouses (BigQuery) and Data Lakes.
    • Familiarity with database technologies including PostgreSQL, NoSQL (Firebase), and, crucially, vector databases (Pinecone, FAISS, or similar).
    • Experience supporting LLM-based solutions and frameworks like LangChain is highly desirable.
    • Solid grasp of software engineering best practices, including Git and CI/CD.

    Nice-to-Have Skills

    • Proven track record in building and optimizing ETL/ELT processes for large datasets.
    • Experience integrating OpenAI API or similar AI services.
    • Experience in a production environment with multi-agent systems.

     

     

    Next Steps

    We are keen to see your practical data engineering experience! We would highly value a submission that includes a link to a Git repository demonstrating your expertise in building a robust data pipeline, especially one that interfaces with LLM/RAG components (e.g., updating a vector database).

     

    Ready to architect our next-generation data ecosystem? Apply today!

    More
  • · 48 views · 2 applications · 14d

    Senior Market Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    We are looking for a skilled and experienced Software Engineer to join our team, building high-performance real-time data pipelines to process financial market data, including security prices for various asset classes such as equities, options, futures, and more. You will play a key role in designing, developing, and optimizing data pipelines that handle large volumes of data with low latency and high throughput, ensuring that our systems can process market data in real time and batch modes.

     

    Key Responsibilities:

    • Architect, develop, and enhance market data systems
    • Contribute to the software development lifecycle in a collaborative team environment, including design, implementation, testing, and support
    • Design highly efficient, scalable, mission-critical systems
    • Maintain good software quality and test coverage
    • Participate in code reviews
    • Troubleshoot incidents and reported bugs

     

    Requirements:

    • Bachelor's or advanced degree in Computer Science or Electrical Engineering
    • Proficiency in the following programming languages: Java, Python, or Go
    • Prior experience working with equities or futures market data, such as CME data or US equities options, is a must
    • Experience in engineering and supporting market data feed handlers
    • Technically fluent (Python, SQL, JSON, ITCH, FIX, CSV); comfortable discussing pipelines and validation specs (a FIX parsing sketch follows this list)
    • Prior experience working on tick data storage, such as KDB+ or ClickHouse
    • Familiarity with time series analysis
    • Good understanding of the Unix/Linux programming environment
    • Expertise with SQL and relational databases
    • Excellent problem-solving and communication skills
    • Self-starter who works well in a fast-paced environment
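
    For flavor, a small self-contained sketch of FIX tag=value parsing; the sample message and tag subset are made up for demonstration and are not tied to any particular feed:

```python
# Parse a raw FIX message (SOH-delimited tag=value pairs) into a dict.
SOH = "\x01"

def parse_fix(message: str) -> dict:
    """Split a FIX message on the SOH delimiter into tag -> value pairs."""
    fields = (f.split("=", 1) for f in message.strip(SOH).split(SOH) if f)
    return {tag: value for tag, value in fields}

raw = SOH.join(["8=FIX.4.2", "35=D", "55=AAPL", "54=1", "38=100",
                "44=189.50"]) + SOH
msg = parse_fix(raw)
print(msg["55"], msg["44"])  # symbol (tag 55) and price (tag 44)
```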
  • · 70 views · 4 applications · 11d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate MilTech 🪖

    Who We Are
     

    OpenMinds is a cognitive defence tech company countering authoritarian influence in the battle for free and open societies. We work with over 30 governments and organisations worldwide, including Ukraine, the UK, and NATO member governments, leading StratCom agencies, and research institutions.

    Our expertise lies in accessing restricted and high-risk environments, including conflict zones and closed platforms.

    We combine ML technologies with deep local expertise. Our team, based in Kyiv, Lviv, London, Ottawa, and Washington, DC, includes behavioural scientists, ML/AI engineers, data journalists, communications experts, and regional specialists.

    Our core values are: speed, experimentation, elegance and focus. We are expanding the team and welcome passionate, proactive, and resourceful professionals who are eager to contribute to the global fight in cognitive warfare.
     

    Who we’re looking for

    OpenMinds is seeking a skilled and curious Data Engineer who's excited to design and build data systems that power meaningful insight. You'll work closely with a passionate team of behavioral scientists and ML engineers on creating a robust data infrastructure that supports everything from large-scale narrative tracking to sentiment analysis.
     

    In the position you will:

    • Take ownership of our multi-terabyte data infrastructure, from data ingestion and orchestration to transformation, storage, and lifecycle management
    • Collaborate with data scientists, analysts, ML engineers, and domain experts to develop impactful data solutions
    • Optimize and troubleshoot data infrastructure to ensure high performance, cost-efficiency, scalability, and resilience
    • Stay up-to-date with trends in data engineering and apply modern tools and practices
    • Define and implement best practices for data processing, storage, and governance
    • Translate complex requirements into efficient data workflows that support threat detection and response
       

    We are a perfect match if you have:

    • 5+ years of hands-on experience as a Data Engineer, with a proven track record of leading complex data projects from design to production
    • Highly skilled in SQL and Python for advanced data processing, pipeline development, and optimization
    • Deep understanding of software engineering best practices, including SOLID, error handling, observability, performance tuning, and modular architecture
    • Ability to write, test and deploy production-ready code
    • Extensive experience in database design, data modeling, and modern data warehousing, including ETL orchestration using Airflow or equivalent
    • Familiarity with Google Cloud Platform (GCP) and its data ecosystem (BigQuery, GCS, Pub/Sub, Cloud Run, Cloud Functions, Looker)
    • Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances
    • Experience in DevOps (Docker/K8s, IaC, CI/CD) and MLOps
    • Fluent in English with excellent communication and cross-functional collaboration skills
       

    We offer:

    • Work in a fast-growing company with proprietary AI technologies, solving the most difficult problems in the domains of social behaviour analytics and national security
    • Competitive market salary
    • Opportunity to present your work on tier 1 conferences, panels, and briefings behind closed doors
    • Work face-to-face with world-leading experts in their fields, who are our partners and friends
    • Flexible work arrangements, including adjustable hours, location, and remote/hybrid options
    • Unlimited vacation and leave policies
    • Opportunities for professional development within a multidisciplinary team, boasting experience from academia, tech, and intelligence sectors
    • A work culture that values resourcefulness, proactivity, and independence, with a firm stance against micromanagement
  • · 22 views · 5 applications · 11d

    Senior ML/GenAI Engineer

    Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    Senior ML Engineer 

    Full-time / Remote 

     

    About Us

    ExpoPlatform is a UK-based company founded in 2013, delivering advanced technology for online, hybrid, and in-person events across 30+ countries. Our platform provides end-to-end solutions for event organizers, including registration, attendee management, event websites, and networking tools.

     

    Role Responsibilities:

    • Develop AI Agents, tools for AI Agents, and APIs as a service
    • Prepare development and deployment documentation
    • Participate in R&D activities of Data Science team

     

    Required Skills & Experience:

    • 5+ years of experience with DL frameworks (PyTorch and/or TensorFlow)
    • 5+ years of experience in software development in Python
    • Hands-on experience with LLM, RAG, and AI agent development
    • Experience with Amazon SageMaker, Amazon Bedrock, LangChain, LangGraph, LangSmith, LlamaIndex, Hugging Face, OpenAI
    • Hands-on experience using AI tools for software development to increase efficiency and code quality, including AI tools for code review
    • Knowledge of SQL, NoSQL, and vector databases
    • Understanding of embedding vectors and semantic search
    • Proficiency in Git (Bitbucket) and Docker
    • Upper-Intermediate (B2+) or higher level of English

     

    Would Be a Plus:

    • Hands-on experience with SLM and LLM fine-tuning
    • Education in Data Science, Computer Science, Applied Math or similar
    • AWS certifications (AWS Certified ML or equivalent)
    • Experience with TypeSense
    • Experience with speech recognition, speech-to-text ML models

     

    What We Offer:

    • Career growth with an international team.
    • Competitive salary and financial stability.
    • Flexible working hours (Mon-Fri, 8 hours).
    • Free English courses and a budget for education.


     

  • · 32 views · 3 applications · 13d

    Senior Data Engineer at Payments AI Team

    Hybrid Remote · Ukraine · Product · 3 years of experience · B2 - Upper Intermediate

    Job Description

    As a Senior Data Engineer on the Wix Payments AI Team, you'll play a crucial role in the design and integration of emerging AI solutions into the Payments product. You'll have significant responsibilities which include:

    • Developing & maintaining infrastructure for both generative AI and classical data science applications.
    • Researching emerging AI technology stacks and methodologies to identify optimal solutions.
    • Monitoring data pipeline performance and troubleshooting issues.
    • Leading & driving the entire lifecycle of a typical team project: ideation → map business constraints, research and evaluate alternative solutions → design & implement a proof-of-concept in collaboration with various stakeholders across the organization, including data engineers, analysts, data scientists, and product managers.

     

    Qualifications

    • Proficient in Trino SQL (with the ability to craft complex queries) and highly skilled in Python, with expertise in Python frameworks (e.g., Streamlit, Airflow, Pyless, etc.).
    • Ability to design, prototype, code, test and deploy production-ready systems.
    • Experience with a versatile range of infrastructure, server and frontend tech stacks.
    • Experience implementing and integrating GenAI models, particularly LLMs, into production systems. 
    • Experience with AI agentic technologies (e.g. MCP, A2A, ADK) - an advantage.
    • An independent and quick learner.
    • Passion for product and technical leadership.
    • Business-oriented thinking and skills: data privacy and system security awareness, understanding of business objectives and how to measure their key performance indicators (KPIs), derive and prioritize actionable tasks from complex business problems, business impact guided decision making. 
    • Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances.
    • Fluent in English with strong communication abilities.

     

    About the Team

    We're the Wix Payments team.

    We provide Wix users with the best way to collect payments from their customers and manage their Wix income online, in person, and on the go. We're passionate about crafting the best experience for our users, and empowering any business on Wix to realize its full financial potential. We have developed our own custom payment processing solution that blends many integrations into one clean and intuitive user interface. We also build innovative products that help our users manage their cash and grow their business. The Payments AI team is instrumental in promoting AI-based capabilities within the payments domain and is responsible for ensuring the company is always at the forefront of the AI revolution.

     

  • · 29 views · 1 application · 14d

    Data Quality Engineer

    Office Work · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate MilTech 🪖

    We're building a large-scale data analytics ecosystem powered by Microsoft Azure and Power BI. Our team integrates, transforms, and visualizes data from multiple sources to support critical business decisions. Data quality is one of our top priorities, and we're seeking an engineer who can help us enhance the reliability, transparency, and manageability of our data landscape.

    Your responsibilities: 

    • Develop and maintain data quality monitoring frameworks within the Azure ecosystem (Data Factory, Data Lake, Databricks). 
    • Design and implement data quality checks, including validation, profiling, cleansing, and standardization. 
    • Detect data anomalies and design alerting systems (rules, thresholds, automation); a minimal check-and-alert sketch follows this list.
    • Collaborate with Data Engineers, Analysts, and Business stakeholders to define data quality criteria and expectations. 
    • Ensure high data accuracy and integrity for Power BI reports and dashboards. 
    • Document data validation processes and recommend improvements to data sources. 
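
    A minimal sketch of what such checks might look like in pandas; the column names, rules, and print-based alert are assumptions for illustration, not the team's actual framework:

```python
# Toy data-quality checks: completeness, uniqueness, and validity rules.
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount":   [10.0, None, 25.0, -5.0],
})

checks = {
    "completeness: amount has no nulls": bool(df["amount"].notna().all()),
    "uniqueness: order_id is unique":    df["order_id"].is_unique,
    "validity: amount is non-negative":  bool((df["amount"].dropna() >= 0).all()),
}

for name, passed in checks.items():
    if not passed:
        print(f"ALERT: failed -> {name}")  # stand-in for a real alert channel
```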

    Requirements: 

    • 3+ years of experience in a Data Quality, Data Engineering, or BI Engineering role. 
    • Hands-on experience with Microsoft Azure services (Data Factory, SQL Database, Data Lake). 
    • Advanced SQL skills (complex queries, optimization, data validation). 
    • Familiarity with Power BI or similar BI tools. 
    • Understanding of DWH principles and ETL/ELT pipelines. 
    • Experience with data quality frameworks and metrics (completeness, consistency, timeliness). 
    • Knowledge of Data Governance, Master Data Management, and Data Lineage concepts. 

    Would be a plus: 

    • Experience with Databricks or Apache Spark. 
    • DAX and Power Query (M) knowledge. 
    • Familiarity with DataOps or DevOps principles in a data environment. 
    • Experience in creating automated data quality dashboards in Power BI. 

     

  • · 6 views · 0 applications · 5d

    IT Infrastructure Administrator

    Office Work · Ukraine (Dnipro) · Product · 1 year of experience

    Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.

    Key responsibilities:

    • Administration of Active Directory
    • Managing group policies
    • Managing services via PowerShell
    • Administration of VMWare platform
    • Administration of Azure Active Directory
    • Administration of Exchange 2016/2019 mail servers
    • Administration of Exchange Online
    • Administration of VMWare Horizon View

    Required professional knowledge and skills:

    • Experience in writing automation scripts (PowerShell, Python, etc.)
    • Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
    • Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
    • Experience with Veeam Backup & Replication, VMWare vSphere (vCenter, DRS, vMotion, HA), VMWare Horizon View
    • Windows Server 2019/2025 (installation, configuration, and adaptation)
    • Diagnostics and troubleshooting
    • Working with anti-spam systems
    • Managing mail transport systems (exim) and monitoring systems (Zabbix)

    We offer:

    • Interesting projects and tasks
    • Competitive salary (discussed during the interview)
    • Convenient work schedule: Mon–Fri, 9:00–18:00; partial remote work possible
    • Official employment, paid vacation, and sick leave
    • Probation period: 2 months
    • Professional growth and training (internal training, reimbursement for external training programs)
    • Discounts on Biosphere Corporation products
    • Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)

    Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).

    Learn more about Biosphere Corporation, our strategy, mission, and values at:
    http://biosphere-corp.com/
    https://www.facebook.com/biosphere.corporation/

    Join our team of professionals!

    By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
    If your application is successful, we will contact you within 1–2 business days.

  • · 19 views · 1 application · 14d

    PHP developer/ Data Engineer

    Hybrid Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate Ukrainian Product 🇺🇦

    Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
    Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return.

     

    Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free. Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.

     

    Requirements:

    • Experience designing and developing scalable backend services using PHP 7/8;
    • Strong understanding of OOP concepts, design patterns, and clean code principles;
    • Extensive experience with MySQL, with expertise in database design, query optimization, and indexing;
    • Experience working with NoSQL databases (e.g., Redis);
    • Proven experience working on high-load projects;
    • Understanding of ETL processes and data integration;
    • Experience working with ClickHouse;
    • Strong experience with API development;
    • Strong knowledge of Symfony 6+ and Yii2;
    • Experience with RabbitMQ.

     

    Nice to Have:

    • AWS services
    • Payment API (Stripe, SolidGate etc.)
    • Docker, GitLab CI
    • Python

     

    Responsibilities:

    • Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
    • API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
    • Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
    • Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy.
    • Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.

     

    What we offer:

    For personal growth:

    • A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
    • An educational allowance to ensure that your skills stay sharp;
    • English and German classes to strengthen your capabilities and widen your knowledge.

    For comfort:

    • A great environment where you'll work with true professionals and amazing colleagues whom you'll call friends quickly;
    • The choice of working remotely or in our office space located on Podil, equipped with everything you might need for productive and comfortable work.

    For health:

    • Medical insurance;
    • Twenty-one days of paid sick leave per year;
    • Healthy fruit snacks full of vitamins to keep you energized.

    For leisure:

    • Twenty-one days of paid vacation per year;
    • Fun times at our frequent team-building activities.
  • · 40 views · 7 applications · 21d

    GenAI Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · C1 - Advanced

    Who we are?
    We are building a next-generation AI-native sales automation platform for B2B teams. Our goal is to change the very paradigm of how people interact with business applications.

    Manual data entry becomes a thing of the past as the platform proactively connects to your communication and information channels. It seamlessly captures, structures, and transforms data into real-time, actionable awareness.

    You no longer work for the tool. The tool works for you, anticipating your needs, surfacing the right context at the right moment, and guiding your next steps with intelligence and precision.

    Our vision is to give teams an always-on AI-driven partner that lets them focus entirely on creating value and closing deals.
     

    Philosophy

    We value open-mindedness, rapid delivery, and impact. You're not just coding features; you shape architecture, UX, and product direction. Autonomy, accountability, and a startup builder's mindset are essential.
     

    Requirements

    • Strong backend: Python, FastAPI, webhooks, Docker, Kubernetes, Git, CI/CD.
    • Hands-on with OpenAI-family LLMs, LangChain/LangGraph/LangSmith, prompt engineering, agentic RAG, vector stores (Azure AI Search, Pinecone, Neo4j, FAISS).
    • SQL, Pandas, graph DBs (Neo4j), NetworkX, advanced ETL/data cleaning, Kafka/Azure Event Hubs.
    • Proven experience building and operating retrieval-augmented generation (RAG) pipelines.
    • Familiarity with graph algorithms (community detection, similarity, centrality); a toy sketch follows this list.
    • Good English (documentation, API, teamwork).
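
    A toy sketch of those graph algorithms on a built-in NetworkX graph (a recent NetworkX with louvain_communities is assumed):

```python
# Centrality, community detection, and node similarity on a toy graph.
import networkx as nx

G = nx.karate_club_graph()  # classic 34-node social graph

centrality = nx.degree_centrality(G)                        # centrality
communities = nx.community.louvain_communities(G, seed=42)  # community detection
similarity = list(nx.jaccard_coefficient(G, [(0, 33)]))     # node similarity

print("most central node:", max(centrality, key=centrality.get))
print(len(communities), "communities found")
print(f"Jaccard(0, 33) = {similarity[0][2]:.3f}")
```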
       

    Nice to Have

    • Generative UI (React).
    • Multi-agent LLM frameworks.
    • Big Data pipelines in cloud (Azure preferred).
    • Production-grade ML, NLP engineering, graph ML.
       

    Responsibilities

    • Design, deploy, and maintain GenAI/RAG pipelines for the product.
    • Integrate LLM/agentic assistants into user business flows.
    • Source, ingest, cleanse, and enrich external data streams.
    • Build vector search, embedding stores, and manage knowledge graphs.
    • Explore and implement new ML/GenAI frameworks.
    • Mentor developers and encourage team knowledge-sharing.
       

    What else is important:

    • Startup drive, proactivity, independence.
    • Willingness to relocate/freedom to travel in Europe; full time.
    • Eagerness to integrate latest AI frameworks into real-world production.
       

    Our Team

    Agile, tight-knit product group (5–6 experts) with deep experience in SaaS, AI, graph data, and cloud delivery. We move fast, give each member autonomy, and engineer for impact, not just features.
     

    Who makes the final decision:

    The team makes the decision based on a technical interview.
     

    Our benefits

    • Startup culture: minimal bureaucracy, maximum flexibility
    • Remote-first: work from anywhere
    • Unlimited vacation: we value results, not hours spent
    • Opportunity to grow together with an AI-first product company
    • Direct impact on a breakthrough AI-native product
       

    Recruitment process

    1. HR interview (VP Team) and technical prescreen (Q&A)
    2. Technical interview with CTO/Data Officer (real-life case)
    3. Offer
  • · 109 views · 3 applications · 29d

    Data Engineer (NLP-Focused)

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.

    You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.

    Requirements:
    - Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    - NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project's focus.
    - Understanding of FineWeb2 or a similar processing pipeline approach.
    - Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    - Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    - Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
    - Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    - Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    - Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

    Responsibilities:
    - Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
    - Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    - Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
    - Implementation of NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering toxic content, de-duplication, de-noising, and detection and deletion of personal data (see the sketch after this list).
    - Formation of specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as teacher.
    - Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    - Automate data processing workflows and ensure their scalability and reliability.
    - Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    - Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
    - Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    - Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
    - Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    - Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    - Manage data security, access, and compliance.
    - Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
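
    An illustrative sketch of the cleaning steps above: Unicode normalization, exact de-duplication by hash, and a deliberately naive email-redaction rule (real PII detection needs far more than one regex):

```python
# Toy text-cleaning pipeline: normalize, redact emails, de-duplicate.
import hashlib
import re
import unicodedata

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def clean(text: str) -> str:
    text = unicodedata.normalize("NFC", text)  # canonical Unicode form
    text = EMAIL.sub("[EMAIL]", text)          # naive PII redaction
    return " ".join(text.split())              # collapse whitespace

def dedupe(corpus: list) -> list:
    seen, out = set(), []
    for doc in map(clean, corpus):
        digest = hashlib.md5(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append(doc)
    return out

print(dedupe(["Привіт,  світ!", "Привіт, світ!", "mail me at a@b.com"]))
```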

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.
