Jobs

  • 94 views · 9 applications · 8h

    Junior Data Engineer

    Full Remote · Ukraine · 0.5 years of experience · B1 - Intermediate

    We are looking for a Data Engineer to join our team!

     

    The Data Engineer is responsible for designing, maintaining, and optimizing data infrastructure for data collection, management, transformation, and access.

    They will be in charge of creating pipelines that convert raw data into usable formats for data scientists and other data consumers.

    The Data Engineer should be comfortable working with RDBMS and have good knowledge of the appropriate RDBMS programming language(s).

    The Data Engineer processes client data according to the relevant specifications and documentation.

     

    *Must be a Ukrainian student based in Ukraine (2nd year or higher).

     

         Main responsibilities:

    • Design and develop ETL pipelines (a brief illustrative sketch follows this list);
    • Data integration and cleansing;
    • Implement stored procedures and functions for data transformations;
    • Optimize ETL process performance.
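
    For illustration only, a minimal sketch of such an ETL pipeline in Python with pandas and SQLAlchemy; the connection string, table names, and columns are hypothetical:

    # Sketch: extract raw orders, clean them, and load them into a reporting table.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@localhost:5432/dwh")  # hypothetical DSN

    def run_etl() -> None:
        # Extract: pull raw rows from the source table
        raw = pd.read_sql("SELECT id, amount, created_at FROM raw_orders", engine)

        # Transform: drop incomplete rows and normalize types
        clean = raw.dropna(subset=["amount"]).copy()
        clean["created_at"] = pd.to_datetime(clean["created_at"])

        # Load: write the cleaned result into the reporting schema
        clean.to_sql("orders_clean", engine, schema="reporting",
                     if_exists="replace", index=False)

    if __name__ == "__main__":
        run_etl()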

     

         Skills and Requirements:

    • Experience with ETL tools (taking charge of ETL processes and performing tasks that draw on data analytics, data science, business intelligence, and system architecture skills);
    • Database/DBA/architect background (understanding of data storage requirements and warehouse architecture design; basic expertise with SQL/NoSQL databases and data mapping; awareness of the Hadoop environment);
    • Data analysis expertise (basic expertise in data modeling, mapping, and formatting is required);
    • Knowledge of scripting languages (Python is preferable);
    • Troubleshooting skills (data processing systems operate on large amounts of data and include multiple structural elements; the Data Engineer is responsible for the proper functioning of the system, which requires strong analytical thinking and troubleshooting skills);
    • Tableau experience is good to have;
    • Software engineering background is good to have;
    • Good organizational skills and task management abilities;
    • Effective self-motivator;
    • Good communication skills in written and spoken English.

     

         Salary Range

    Compensation packages are based on several factors including but not limited to: skill set, depth of experience, certifications, and specific work location.

  • 33 views · 12 applications · 29d

    Azure Data and AI Engineer

    Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate

    Requirement:

    - Must have strong experience with Azure;

    - Must have strong experience in data engineering;

    - Must have strong experience with AI.

     

    We offer:

    β€’ Attractive financial package

    β€’ Paid vacation, holidays and sick leaves

    β€’ Challenging projects

    β€’ Professional & career growth

    β€’ Great atmosphere in a friendly small team

    β€’ Flexible working hours

  • 414 views · 42 applications · 5d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    β€’ 2+ years of experience with Python;
    β€’ 2+ years of experience as a Data Engineer;
    β€’ Experience with Pandas;
    β€’ Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    β€’ Familiarity with Amazon Web Services;
    β€’ Knowledge of data algorithms and data structures is a MUST;
    β€’ Working with high volume tables 10m+.


    Optional skills (as a plus):
    β€’ Experience with Spark (pyspark);
    β€’ Experience with Airflow;
    β€’ Experience with Kafka;
    β€’ Experience in statistics;
    β€’ Knowledge of DS and Machine learning algorithms..

     

    Key responsibilities:
    β€’ Create ETL pipelines and data management solutions (API, Integration logic);
    β€’ Different data processing algorithms;
    β€’ Involvement in creation of forecasting, recommendation, and classification models.

     

    We offer:

    β€’ Great networking opportunities with international clients, challenging tasks;

    β€’ Building interesting projects from scratch using new technologies;

    β€’ Personal and professional development opportunities;

    β€’ Competitive salary fixed in USD;

    β€’ Paid vacation and sick leaves;

    β€’ Flexible work schedule;

    β€’ Friendly working environment with minimal hierarchy;

    β€’ Team building activities, corporate events.

  • 1245 views · 123 applications · 15d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · B1 - Intermediate

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    β€’ 6+ months of experience as a Data Engineer

    β€’ Experience with SQL ;

    β€’ Experience with Python;

     

     

    Optional skills (as a plus):

    β€’ Experience with ETL / ELT pipelines;

    β€’ Experience with PySpark;

    β€’ Experience with Airflow;

    β€’ Experience with Databricks;

     

    Key Responsibilities:

    β€’ Apply data processing algorithms;

    β€’ Create ETL/ELT pipelines and data management solutions;

    β€’ Work with SQL queries for data extraction and analysis;

    β€’ Data analysis and application of data processing algorithms to solve business problems;

     

     

    We offer:

    β€’ Onboarding phase with hands-on experience with major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark

    β€’ Opportunity to work with the high-skilled engineering team on challenging projects;

    β€’ Interesting projects with new technologies;

    β€’ Great networking opportunities with international clients, challenging tasks;

    β€’ Building interesting projects from scratch using new technologies;

    β€’ Personal and professional development opportunities;

    β€’ Competitive salary fixed in USD;

    β€’ Paid vacation and sick leaves;

    β€’ Flexible work schedule;

    β€’ Friendly working environment with minimal hierarchy;

    β€’ Team building activities, corporate events.

  • 41 views · 4 applications · 12d

    Cloud System Engineer

    Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary

    Requirements:

    • Knowledge of the core functionality of virtualization platforms;
    • Experience implementing and migrating workloads in virtualized environments;
    • Experience in complex IT solutions and Hybrid Cloud solution projects;
    • Good understanding of IT infrastructure services is a plus;
    • Strong knowledge of troubleshooting complex environments in case of failure;
    • At least basic knowledge of networking and information security is an advantage;
    • Hyper-V, Proxmox, or VMware experience would be an advantage;
    • Experience in services outsourcing (as a customer and/or provider) is an advantage;
    • 2+ years of work experience in a similar position;
    • Scripting and programming experience in PowerShell/Bash is an advantage;
    • Strong team communication skills, both verbal and written;
    • Experience in writing and preparing technical documentation;
    • English skills: intermediate level is the minimum and is mandatory for communication with global teams;
    • Industry certification focused on the relevant solution area.

    Areas of Responsibility include:

    • Participating in deployment and IT infrastructure migration projects, Hybrid Cloud solution projects, and client support;
    • Consulting on migrating IT workloads in complex infrastructures;
    • Presales support (articulating service value in the sales process; up- and cross-sell capability);
    • Project documentation: technical concepts;
    • Education and development in the professional area, including necessary certifications.
  • 24 views · 1 application · 20d

    Data Engineer TL / Poland

    Office Work · Poland · 4 years of experience · B2 - Upper Intermediate

    On behalf of our customer, we are seeking a DataOps Team Lead to join their global R&D department.

     

    Our customer is an innovative technology company led by data scientists and engineers devoted to mobile app growth. They focus on solving the key challenge of growth for mobile apps by building Machine Learning and Big Data-driven technology that can both accurately predict what apps a user will like and connect them in a compelling way. 

    We are looking for a data-centric, quality-driven team leader with a focus on data process observability, someone passionate about building high-quality data products and processes, as well as supporting production data processes and ad-hoc data requests.

    As a DataOps TL, you will be in charge of the quality of service as well as the quality of the data and knowledge platform for all data processes. You will coordinate with stakeholders and play a major role in driving the business by promoting the quality and stability of data performance and the data lifecycle, and by giving the operational groups immediate abilities to affect daily business outcomes.

     

    Responsibilities:

    • Process monitoring: managing and monitoring the daily data processes; troubleshooting server and process issues, escalating bugs, and documenting data issues.
    • Ad-hoc operation configuration changes: being the extension of the operations side into the data process, using Airflow and Python scripting alongside SQL to extract specific client-relevant data points and calibrate certain aspects of the process.
    • Data quality automation: creating and maintaining data quality tests and validations using Python code and testing frameworks (a minimal sketch follows this list).
    • Metadata store ownership: creating and maintaining the metadata store; managing the metadata system that holds metadata on tables, columns, calculations, and lineage; participating in the design and development of the knowledge-base metastore and UX, so as to be the pivotal point of contact for questions about tables and columns and how they are connected (What is the data source? What is it used for? Why is this field calculated this way?).
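
    A minimal sketch of such an automated quality check, assuming the Airflow 2.x TaskFlow API and the Postgres provider; the DAG id, connection id, and table name are hypothetical:

    # Sketch: a daily DAG that fails loudly when the daily load looks empty.
    from datetime import datetime

    from airflow.decorators import dag, task
    from airflow.providers.postgres.hooks.postgres import PostgresHook

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["dataops"])
    def daily_data_quality():
        @task
        def count_todays_rows() -> int:
            # Pull a single metric from the warehouse with plain SQL
            hook = PostgresHook(postgres_conn_id="warehouse")
            return hook.get_first(
                "SELECT COUNT(*) FROM daily_events WHERE event_date = CURRENT_DATE"
            )[0]

        @task
        def validate(count: int) -> None:
            # A failed task surfaces in monitoring and triggers the escalation flow
            if count == 0:
                raise ValueError("daily_events received no rows today")

        validate(count_todays_rows())

    daily_data_quality()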

       

    Requirements:

    • Over 2 years in a leadership role within a data team.
    • Over 3 years of hands-on experience as a Data Engineer, with strong proficiency in Python and Airflow.
    • Solid background in working with both SQL and NoSQL databases and data warehouses, including but not limited to MySQL, Presto, Athena, Couchbase, MemSQL, and MongoDB.
    • Bachelor's degree or higher in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical discipline.
    • Highly organized with a proactive mindset.
    • Strong service orientation and a collaborative approach to problem-solving.

       

    Nice to have skills:

    • Previous experience as a NOC or DevOps engineer is a plus.
    • Familiarity with PySpark is considered an advantage.

       

    What we can offer you

    • Remote work from Poland, flexible working schedule
    • Accounting support & consultation
    • Opportunities for learning and development on the project
    • 20 working days of annual vacation
    • 5 paid sick days/days off; state holidays
    • Working equipment provided
  • 57 views · 9 applications · 29d

    Data Engineer

    Hybrid Remote · Ukraine (Kyiv) · Product · 2 years of experience · B1 - Intermediate

    3Shape develops 3D scanners and software solutions that enable dental and hearing professionals to treat more people, more effectively. Our products are market leading innovative solutions that make a real difference in the lives of both patients and dental professionals around the world.
     

    3Shape is headquartered in Copenhagen, with development teams in Denmark, Ukraine, North Macedonia and with a production site in Poland.
     

    We are a global company with a presence in Europe, Asia, and the Americas. Founded in 2000, today we provide services to customers in over 130 countries. Our growing talent pool of over 2,500 employees spans 45+ nationalities.
     

    3Shape as an employer is committed to Ukraine. Our UA office was founded in 2006, and we continue to grow, hire, and take care of our employees even during the war in Ukraine. Among other actions, we support our contractors who are called up for military service, and we look after our colleagues' mental health through a range of activities.
     

    If you are looking for stability in your future, we are the right place for you.


    About the role:

    The Customer Data Strategy is a high-priority initiative with significant potential and senior management buy-in. Join our expanding team that currently includes a Data Analyst, Data Engineer, Data Architect, and Manager.
     

    Key responsibilities: 

    • Develop and optimize Azure Databricks in collaboration with cross-functional teams to enable a 'one-stop-shop' for analytical data
    • Translate customer-focused commercial needs into concrete data products
    • Build data products to unlock commercial value and help integrate systems
    • Coordinate technical alignment meetings between functions
    • Act as customer data ambassador to improve 'data literacy' across the organization

    Your profile:

    • Experience working with data engineering in a larger organization, tech start-up, or as an external consultant
    • Extensive experience with Azure Databricks, Apache Spark, and Delta Lake
    • Proficiency in Python, PySpark and SQL
    • Experience with optimizing and automating data engineering processes
    • Familiarity with GitHub and GitHub Actions for CI/CD processes
    • Knowledge of Terraform as a plus


    Being part of us means:

    • Make an impact in one of the most exciting Danish tech companies in the medical device industry
    • Work on solutions used by thousands of dental professionals worldwide
    • Be part of 3Shape's continued accomplishments and growth
    • Contribute to meaningful work that changes the future of dentistry
    • Develop professionally in a unique and friendly environment
    • Enjoy a healthy work-life balance
    • Occasional business trips to Western Europe
       

    We offer:

    • 39 hours of cooperation per week within a flexible time frame
    • 24 business days of annual leaves
    • Medical insurance (with additional Dentistry Budget and 10 massaging sessions per year included)
    • Possibility of flexible remote cooperation
    • Good working conditions in a comfortable office near the National Technical University “KPI”, including blackout-ready infrastructure, a corporate paper book library, and a gym room with a shower
    • A parking lot with free spaces for employees
    • Partial compensation of lunches
    • Paid sick leaves and child sick leaves
    • Maternity, paternity and family issues leaves
    • Well-being program: monthly well-being meetings and an individual psychological hotline
       

    Want to join us and change the future of dentistry?

  • 517 views · 42 applications · 8d

    Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

    Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project, a cutting-edge data intelligence platform for e-commerce analytics. You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, analysis, and integrations. We are waiting for your CV!

    Requirements:

    - 2+ years of commercial experience with Python.

    - Experience working with PostgreSQL databases.
    - Profound understanding of algorithms and their complexities, with the ability to analyze and optimize them effectively.
    - Solid understanding of ETL principles and best practices.
    - Excellent collaborative and communication skills, with demonstrated ability to mentor and support team members.
    - Experience working with Linux environments, cloud services (AWS), and Docker.
    - Strong decision-making capabilities with the ability to work independently and proactively.

    Will be a plus:
    - Experience in web scraping, data extraction, cleaning, and visualization.
    - Understanding of multiprocessing and multithreading, including process and thread management.
    - Familiarity with Redis.
    - Excellent programming skills in Python with a strong emphasis on optimization and code structuring.
    - Experience with Flask / Flask-RESTful for API development.
    - Knowledge and experience with Kafka.
     

    Key Responsibilities:

    - Develop and maintain a robust data processing architecture using Python.

    - Design and manage data pipelines using Kafka and SQS (a brief sketch follows this list).

    - Optimize code for better performance and maintainability.

    - Design and implement efficient ETL processes.

    - Work with AWS technologies to ensure flexible and reliable data processing systems.

    - Collaborate with colleagues, actively participate in code reviews, and improve technical knowledge.

    - Take responsibility for your tasks and suggest improvements to processes and systems.
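
    As one possible shape of such a pipeline, a minimal Kafka-to-SQS bridge; the topic, broker address, and queue URL are hypothetical, and the kafka-python and boto3 libraries are assumed:

    # Sketch: forward JSON records from a Kafka topic to an SQS queue for downstream workers.
    import json

    import boto3
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "raw-products",                       # hypothetical topic
        bootstrap_servers="localhost:9092",   # hypothetical broker
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/products"  # hypothetical

    for message in consumer:
        # Each Kafka record becomes one SQS message
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message.value))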

    We offer:

    - Working in a fast-growing company;

    - Great networking opportunities with international clients, challenging tasks;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities, corporate events.

  • 66 views · 1 application · 3d

    Data Engineer (AI and Data Pipeline Focus)

    Full Remote · Ukraine · Product · 5 years of experience · B1 - Intermediate

    Do you want to develop your deep data engineering skills in a complex and high-impact AI product? You have the opportunity to apply your knowledge and grow across all areas of our robust data ecosystem!

     

    Join Aniline.ai! We are a forward-thinking technology company dedicated to harnessing the power of AI across various sectors, including HR, facility monitoring, retail analytics, marketing, and learning support systems. Our mission is to transform data into actionable insights and innovative solutions.

    We are seeking a highly skilled Data Engineer with a strong background in building scalable data pipelines, optimizing high-load data processing, and supporting AI/LLM architectures. In this critical role, you will be the backbone of our data operations, ensuring quality, reliability, and efficient delivery of data across our entire platform.

    Key Responsibilities & Focus Areas

    You will be a key contributor across our platform, with a primary focus on the following data engineering areas:

    1. 💾 Data Pipeline Design & Automation (Primary Focus)

    • Design, build, and maintain scalable data pipelines and ETL/ELT processes.
    • Automate the end-to-end data pipeline for the periodic collection, processing, and deployment of results to production. This includes transitioning manual processes to robust automated solutions.
    • Manage the ingestion of raw data (company reviews from various sources) into our GCP Data Lake and subsequent transformation and loading into the GCP Data Warehouse (e.g., BigQuery).
    • Set up and maintain systems for pipeline orchestration.
    • Develop ETL/ELT processes to update client-facing databases like Firebase and refresh reference data in PostgreSQL.
    • Integrate data from various sources, ensuring data quality and reliability for analytics and reporting.

    2. 🧠 AI Data Support & Integration

    • Engineer data flow specifically for AI/LLM solutions, focusing on contextual retrieval and input data preparation.
    • Automate the pipeline for updating contexts in the Pinecone vector database for the Retrieval-Augmented Generation (RAG) architecture (a hedged sketch follows this list).
    • Prepare processed and analyzed data for loading into result tables (including statistics and logs), which serve as the foundation for LLM inputs and subsequent client reporting.
    • Perform general Python development tasks to maintain and support existing data-handling code, including LangChain logic and data processing within Jupyter Notebooks.
    • Collaborate with cross-functional teams (data scientists and AI engineers) to ensure data requirements are met for LLM solution deployment and prompt optimization.
    • Perform data analysis and reporting using BI tools (Looker, Power BI, Tableau, etc.).
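
    A hedged sketch of the Pinecone context-update step described above, assuming the current pinecone and openai Python clients; the index name, embedding model, and document shape are illustrative:

    # Sketch: re-embed freshly processed texts and upsert them into Pinecone
    # so the RAG retriever sees up-to-date contexts. All names are assumptions.
    from openai import OpenAI
    from pinecone import Pinecone

    openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
    index = Pinecone(api_key="...").Index("contexts")  # hypothetical index name

    def refresh_contexts(docs: list[tuple[str, str]]) -> None:
        """docs: (doc_id, text) pairs produced by the processing pipeline."""
        texts = [text for _, text in docs]
        resp = openai_client.embeddings.create(model="text-embedding-3-small", input=texts)
        index.upsert(vectors=[
            {"id": doc_id, "values": item.embedding, "metadata": {"text": text}}
            for (doc_id, text), item in zip(docs, resp.data)
        ])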

    3. ⚙️ Infrastructure & Optimization

    • Work with cloud platforms (preferably GCP) to manage, optimize, and secure data lakes and data warehouses.
    • Apply algorithmic knowledge and complexity analysis (including Big O notation) to select the most efficient algorithms for high-load data processing.
    • Conduct thorough research and analysis of existing infrastructure, data structures, and code bases to ensure seamless integration and stability of new developments.

    Requirements

    • Proven experience as a Data Engineer, focusing on building and optimizing ETL/ELT processes for large datasets.
    • Strong proficiency in Python development and the data stack (Pandas, NumPy).
    • Hands-on experience with cloud-based data infrastructure (GCP is highly preferred), including Data Warehouses (BigQuery) and Data Lakes.
    • Familiarity with database technologies including PostgreSQL, NoSQL (Firebase), and, crucially, vector databases (Pinecone, FAISS, or similar).
    • Experience supporting LLM-based solutions and frameworks like LangChain is highly desirable.
    • Solid grasp of software engineering best practices, including Git and CI/CD.

    Nice-to-Have Skills

    • Proven track record in building and optimizing ETL/ELT processes for large datasets.
    • Experience integrating OpenAI API or similar AI services.
    • Experience in a production environment with multi-agent systems.

     

     

    Next Steps

    We are keen to see your practical data engineering experience! We would highly value a submission that includes a link to a Git repository demonstrating your expertise in building a robust data pipeline, especially one that interfaces with LLM/RAG components (e.g., updating a vector database).

     

    Ready to architect our next-generation data ecosystem? Apply today!

  • 37 views · 4 applications · 29d

    Lead Data Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · B2 - Upper Intermediate

    We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

     

    Does this relate to you?

    • 7+ years of experience in the data engineering field
    • At least 1 year of experience as a Lead/Architect
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions.
    • Build and optimize ETL/ELT pipelines for efficient data integration.
    • Design and implement data models to support analytical and reporting needs.
    • Ensure data integrity, quality, and security across all pipelines.
    • Optimize data performance and scalability using best practices.
    • Work with big data technologies such as Redshift.
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implement CI/CD pipelines for data workflows.
    • Monitor, troubleshoot, and improve data processes and system performance.
    • Stay updated with industry trends and emerging technologies in data engineering.

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program implying attractive bonuses
    • External & internal training and IT certifications

     

    Ready to try your hand? Don't hesitate to send your CV!

  • 96 views · 4 applications · 21d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate · MilTech 🪖

    Who We Are
     

    OpenMinds is a cognitive defence tech company countering authoritarian influence in the battle for free and open societies. We work with over 30 governments and organisations worldwide, including Ukraine, the UK, and NATO member governments, leading StratCom agencies, and research institutions.

    Our expertise lies in accessing restricted and high-risk environments, including conflict zones and closed platforms.

    We combine ML technologies with deep local expertise. Our team, based in Kyiv, Lviv, London, Ottawa, and Washington, DC, includes behavioural scientists, ML/AI engineers, data journalists, communications experts, and regional specialists.

    Our core values are: speed, experimentation, elegance and focus. We are expanding the team and welcome passionate, proactive, and resourceful professionals who are eager to contribute to the global fight in cognitive warfare.
     

    Who we're looking for

    OpenMinds is seeking a skilled and curious Data Engineer who's excited to design and build data systems that power meaningful insight. You'll work closely with a passionate team of behavioural scientists and ML engineers to create a robust data infrastructure that supports everything from large-scale narrative tracking to sentiment analysis.
     

    In the position you will:

    • Take ownership of our multi-terabyte data infrastructure, from data ingestion and orchestration to transformation, storage, and lifecycle management
    • Collaborate with data scientists, analysts, ML engineers, and domain experts to develop impactful data solutions
    • Optimize and troubleshoot data infrastructure to ensure high performance, cost-efficiency, scalability, and resilience
    • Stay up-to-date with trends in data engineering and apply modern tools and practices
    • Define and implement best practices for data processing, storage, and governance
    • Translate complex requirements into efficient data workflows that support threat detection and response
       

    We are a perfect match if you have:

    • 5+ years of hands-on experience as a Data Engineer, with a proven track record of leading complex data projects from design to production
    • Highly skilled in SQL and Python for advanced data processing, pipeline development, and optimization
    • Deep understanding of software engineering best practices, including SOLID, error handling, observability, performance tuning, and modular architecture
    • Ability to write, test and deploy production-ready code
    • Extensive experience in database design, data modeling, and modern data warehousing, including ETL orchestration using Airflow or equivalent
    • Familiarity with Google Cloud Platform (GCP) and its data ecosystem (BigQuery, GCS, Pub/Sub, Cloud Run, Cloud Functions, Looker); a brief ingestion sketch follows this list
    • Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances
    • Experience in DevOps (Docker/K8s, IaC, CI/CD) and MLOps
    • Fluent in English with excellent communication and cross-functional collaboration skills
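
    For reference, a minimal sketch of one common ingestion step on this stack, loading a newline-delimited JSON file from GCS into BigQuery with the official client; the bucket, dataset, and table ids are hypothetical:

    # Sketch: batch-load a GCS file into a BigQuery table.
    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,  # let BigQuery infer the schema for this illustration
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(
        "gs://raw-narratives/2024-06-01.jsonl",  # hypothetical source file
        "my-project.analytics.narratives_raw",   # hypothetical table id
        job_config=job_config,
    )
    load_job.result()  # block until the load finishes
    print(f"loaded {load_job.output_rows} rows")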
       

    We offer:

    • Work in a fast-growing company with proprietary AI technologies, solving the most difficult problems in the domains of social behaviour analytics and national security
    • Competitive market salary
    • Opportunity to present your work on tier 1 conferences, panels, and briefings behind closed doors
    • Work face-to-face with world-leading experts in their fields, who are our partners and friends
    • Flexible work arrangements, including adjustable hours, location, and remote/hybrid options
    • Unlimited vacation and leave policies
    • Opportunities for professional development within a multidisciplinary team, boasting experience from academia, tech, and intelligence sectors
    • A work culture that values resourcefulness, proactivity, and independence, with a firm stance against micromanagement
  • 21 views · 4 applications · 29d

    Senior ML/GenAI Engineer

    Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    Senior ML Engineer 

    Full-time / Remote 

     

    About Us

    ExpoPlatform is a UK-based company founded in 2013, delivering advanced technology for online, hybrid, and in-person events across 30+ countries. Our platform provides end-to-end solutions for event organizers, including registration, attendee management, event websites, and networking tools.

     

    Role Responsibilities:

    • Develop AI Agents, tools for AI Agents, and APIs as a service
    • Prepare development and deployment documentation
    • Participate in R&D activities of Data Science team

     

    Required Skills & Experience:

    • 5+ years of experience with DL frameworks (PyTorch and/or TensorFlow)
    • 5+ years of experience in software development in Python
    • Hands-on experience with LLM, RAG, and AI Agent development
    • Experience with Amazon SageMaker, Amazon Bedrock, LangChain, LangGraph, LangSmith, LlamaIndex, Hugging Face, OpenAI
    • Hands-on experience using AI tools for software development to increase efficiency and code quality, including AI tools for code review
    • Knowledge of SQL, NoSQL, and vector databases
    • Understanding of embedding vectors and semantic search (a toy sketch follows this list)
    • Proficiency in Git (Bitbucket) and Docker
    • Upper-Intermediate (B2+) or higher level of English
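
    A toy sketch of semantic search over embedding vectors; the vectors here are made up, and in production they would come from an embedding model and live in a vector database:

    # Sketch: rank documents by cosine similarity between embeddings.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Pretend these are embeddings of three documents and one query (dimension 4 for brevity)
    docs = {
        "pricing": np.array([0.9, 0.1, 0.0, 0.2]),
        "support": np.array([0.1, 0.8, 0.3, 0.0]),
        "events":  np.array([0.2, 0.1, 0.9, 0.4]),
    }
    query = np.array([0.85, 0.15, 0.05, 0.1])

    # The top hit is the closest document in embedding space, not a keyword match
    ranking = sorted(docs, key=lambda name: cosine_similarity(docs[name], query), reverse=True)
    print(ranking)  # ['pricing', ...]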

     

    Would be a Plus:

    • Hands-on experience with SLM and LLM fine-tuning
    • Education in Data Science, Computer Science, Applied Math or similar
    • AWS certifications (AWS Certified ML or equivalent)
    • Experience with TypeSense
    • Experience with speech recognition, speech-to-text ML models

     

    What We Offer:

    • Career growth with an international team.
    • Competitive salary and financial stability.
    • Flexible working hours (Mon-Fri, 8 hours).
    • Free English courses and a budget for education


     

  • 18 views · 0 applications · 8d

    Senior BI Engineer

    Full Remote · Ukraine · Product · 5 years of experience · B1 - Intermediate

    FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
    We develop innovations in the field of gambling and betting through a complex multi-component platform that is capable of withstanding enormous loads and providing a unique experience for players.
    FAVBET Tech does not organize or conduct gambling on its platform. Its main focus is software development.

     

    We are looking for a Senior BI Engineer to join our BI SB Team.

    Requirements:

    — At least 5 years of experience in designing and creating modern data integration solutions.

    — Leading the BI SB Team.

    — People management and task definition skills are a must.
    — Master's degree in Computer Science or a related field.
    — Proficient in Python and SQL, particularly for data engineering tasks.
    — Experience with data processing, ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes, and data pipeline development.
    — Experience with the DBT framework and Airflow orchestration.
    — Practical experience with both SQL and NoSQL databases (such as PostgreSQL, MongoDB).
    — Experience with Snowflake.
    — Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
    — Experience in managing data warehouses and data lakes; familiarity with star and snowflake DWH design schemas; understanding of the difference between OLAP and OLTP.

    — Experience in designing analytical reports with QuickSight, Pentaho Services, or Power BI.

     

    Would be a plus:

    — Experience with cloud data services (e.g., AWS Redshift, Google BigQuery).

    — Experience with tools like GitHub, GitLab, Bitbucket.

    — Experience with real-time data processing (e.g., Kafka, Flink).
    — Familiarity with orchestration tools like Airflow, Luigi.
    — Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
    — Knowledge of data security and privacy practices.

     

    Responsibilities:

    — Design, construct, install, test, and maintain highly scalable data management systems.
    — Develop ETL/ELT processes and frameworks for data transformation and loading.

    — Implement, optimize, and support reports for the Sportsbook domain.
    — Ensure efficient storage and retrieval of big data.
    — Optimize data retrieval and query performance.
    — Work closely with data scientists and analysts to provide data solutions and insights.

     

    We can offer:

    — 30 days of paid vacation and sick days — we value rest and recreation. We also observe national holidays.

    — Medical insurance for employees, the possibility of training at the company's expense, and gym membership.

    — Remote work; after Ukraine wins the war, our own modern loft office with spacious workplaces and brand-new work equipment (near Pochaina metro station).

    — Flexible work schedule — we expect a full-time commitment but do not track your working hours.

    — Flat hierarchy without micromanagement — our doors are open, and all teammates are approachable.

     

     

     

  • 40 views · 3 applications · 16d

    Middle BI Engineer

    Full Remote · Ukraine · Product · 2 years of experience

    FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
    We develop innovations in the field of gambling and betting through a complex multi-component platform that is capable of withstanding enormous loads and providing a unique experience for players.
    FAVBET Tech does not organize or conduct gambling on its platform. Its main focus is software development.

     

    We are looking for a Middle BI Engineer to join our BI SB Team.
     

    Requirements:

    — At least 2 years of experience in designing and creating modern data integration solutions.
    — Master's degree in Computer Science or a related field.
    — Proficient in Python and SQL, particularly for data engineering tasks.
    — Experience with data processing, ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes, and data pipeline development.
    — Experience with the DBT framework and Airflow orchestration.
    — Practical experience with both SQL and NoSQL databases (such as PostgreSQL, MongoDB).
    — Experience with Snowflake.
    — Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
    — Experience in managing data warehouses and data lakes; familiarity with star and snowflake DWH design schemas; understanding of the difference between OLAP and OLTP.

    — Experience in designing analytical reports with QuickSight, Pentaho Services, or Power BI.
     

    Would be a plus:

    — Experience with cloud data services (e.g., AWS Redshift, Google BigQuery).

    — Experience with tools like GitHub, GitLab, Bitbucket.

    — Experience with real-time data processing (e.g., Kafka, Flink).
    — Familiarity with orchestration tools like Airflow, Luigi.
    — Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
    — Knowledge of data security and privacy practices.

    Responsibilities:

    — Design, construct, install, test, and maintain highly scalable data management systems.
    — Develop ETL/ELT processes and frameworks for data transformation and loading.

    — Implement, optimize, and support reports for the Sportsbook domain.
    — Ensure efficient storage and retrieval of big data.
    — Optimize data retrieval and query performance.
    — Work closely with data scientists and analysts to provide data solutions and insights.
     

    We can offer:

    — 30 days of paid vacation and sick days — we value rest and recreation. We also observe national holidays.

    — Medical insurance for employees, the possibility of training at the company's expense, and gym membership.

    — Remote work; after Ukraine wins the war, our own modern loft office with spacious workplaces and brand-new work equipment (near Pochaina metro station).

    — Flexible work schedule — we expect a full-time commitment but do not track your working hours.

    — Flat hierarchy without micromanagement — our doors are open, and all teammates are approachable.

     

     

     

  • 112 views · 22 applications · 30d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    We are seeking a talented and experienced Data Engineer to join our professional services team of 50+ engineers on a full-time basis. This remote-first position requires in-depth expertise in data engineering, with a preference for experience with cloud platforms such as AWS and Google Cloud. You will play a vital role in ensuring the performance, efficiency, and integrity of our customers' data pipelines while contributing to insightful data analysis and utilization.


    About us: 

    Opsfleet is a boutique services company that specializes in cloud infrastructure, data, AI, and human-behavior analytics, helping organizations make smarter decisions and boost performance.

    Our experts provide end-to-end solutions, from data engineering and advanced analytics to DevOps, ensuring scalable, secure, and AI-ready platforms that turn insights into action.

     

    Role Overview

    As a Data Engineer at Opsfleet, you will lead the entire data lifecycle: gathering and translating business requirements, ingesting and integrating diverse data sources, and designing, building, and orchestrating robust ETL/ELT pipelines with built-in quality checks, governance, and observability. You'll partner with data scientists to prepare, deploy, and monitor ML/AI models in production, and work closely with analysts and stakeholders to transform raw data into actionable insights and scalable intelligence.

     

    What You’ll Do

    * E2E Solution Delivery: Lead the full spectrum of data projects: requirements gathering, data ingestion, modeling, validation, and production deployment.

    * Data Modeling: Develop and maintain robust logical and physical data models, such as star and snowflake schemas, to support analytics, reporting, and scalable data architectures.

    * Data Analysis & BI: Transform complex datasets into clear, actionable insights; develop dashboards and reports that drive operational efficiency and revenue growth.

    * ML Engineering: Implement and manage model-serving pipelines using the cloud's MLOps toolchain, ensuring reliability and monitoring in production.

    * Collaboration & Research: Partner with cross-functional teams to prototype solutions, identify new opportunities, and drive continuous improvement.

     

    What We’re Looking For

    Experience: 4+ years in a data-focused role (Data Engineer, BI Developer, or similar)

    Technical Skills: Proficient in SQL and Python for data manipulation, cleaning, transformation, and ETL workflows. Strong understanding of statistical methods and data modeling concepts.

    Soft Skills: Excellent problem-solving ability, critical thinking, and attention to detail. Outstanding written and verbal communication.

    Education: BSc or higher in Mathematics, Statistics, Engineering, Computer Science, Life Science, or a related quantitative discipline.

     

    Nice to Have

    Cloud & Data Warehousing: Hands-on experience with cloud platforms (GCP, AWS, or others) and modern data warehouses such as BigQuery and Snowflake.


     
