Jobs (68)

  • 53 views · 1 application · 12d

    Middle-Senior Data Engineer (Grafana)

    Full Remote · Ukraine · 2 years of experience · Upper-Intermediate

    Our mission at Geniusee is to help businesses thrive through tech partnership and to strengthen the engineering community by sharing knowledge and creating opportunities. 🌿 Our values are Continuous Growth, Team Synergy, Taking Responsibility, Conscious Openness and Result Driven. We offer a safe, inclusive and productive environment for all team members, and we're always open to feedback. 💜
    If you want to work from home or work from the city center of Kyiv, apply right now.

     

    About the project:
    Generative AI technologies are rapidly changing how digital content is created and consumed. However, many of these systems are trained on vast amounts of data, including articles, videos, and other creative works, often without the knowledge or consent of the original creators. As a result, publishers, journalists, and content producers face the risk of losing both visibility and critical revenue streams such as advertising, subscriptions, and licensing.

    Our project addresses this issue by developing a system that allows AI platforms to identify when specific content has influenced a generated result. This enables transparent attribution and the possibility for content creators to receive compensation based on how often their work is used. The goal is to build a sustainable ecosystem where creators are fairly rewarded, while AI-generated content remains trustworthy and ethically grounded.

     

    Requirements:
    ● 2+ years of experience in Data Engineering;
    ● Hands-on experience with Grafana, Loki, Promtail, and Grafana Agent;
    ● Strong knowledge of log processing pipelines, including log parsing, structuring, and indexing;
    ● Proficiency in query languages such as LogQL, PromQL, or SQL;
    ● Experience setting up alerting and reporting in Grafana;
    ● Proficiency in Python;
    ● English: Upper-Intermediate+.
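
    For orientation, here is a minimal sketch of the LogQL proficiency mentioned above: querying Loki's HTTP API from Python. The Loki address, label names, and the query itself are illustrative assumptions, not details of the actual project.

    # Hedged sketch: run a LogQL range query against Loki's HTTP API.
    # /loki/api/v1/query_range is part of Loki's documented API; everything
    # else (host, labels, pipeline stages) is a placeholder.
    import time
    import requests

    LOKI_URL = "http://localhost:3100"                                  # assumed Loki address
    QUERY = '{app="payments"} |= "error" | json | level="error"'        # hypothetical labels

    def recent_error_lines(minutes: int = 15, limit: int = 100) -> list[str]:
        """Return raw log lines matching the LogQL query for the last N minutes."""
        now_ns = int(time.time() * 1e9)
        params = {
            "query": QUERY,
            "start": now_ns - minutes * 60 * 10**9,
            "end": now_ns,
            "limit": limit,
            "direction": "backward",
        }
        resp = requests.get(f"{LOKI_URL}/loki/api/v1/query_range", params=params, timeout=30)
        resp.raise_for_status()
        streams = resp.json()["data"]["result"]
        return [line for stream in streams for _, line in stream["values"]]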

     

    What you will get:
    ● Competitive salary and good compensation package;
    ● Exciting, challenging and stable startup projects with a modern stack;
    ● Corporate English course;
    ● Ability to practice English and communication skills through permanent interaction with clients from all over the world;
    ● Professional study compensation, online courses and certifications;
    ● Career development opportunity, semi-annual and annual salary review process;
    ● Necessary equipment to perform work tasks;
    ● VIP medical insurance or sports coverage;
    ● Informal and friendly atmosphere;
    ● The ability to focus on your work: a lack of bureaucracy and micromanagement;
    ● Flexible working hours (start your day between 8:00 and 11:30);
    ● Team buildings, corporate events;
    ● Paid vacation (18 working days) and sick leaves;
    ● Cozy offices in 2 cities (Kyiv & Lviv) with electricity and Wi-Fi (Generator & Starlink);
    ● Compensation for coworking (except for employees from Kyiv and Lviv);
    ● Corporate lunch + soft skills clubs;
    ● Unlimited work from home from anywhere in the world (remote);
    ● Geniusee has its own charity fund.

  • 36 views · 0 applications · 12d

    Senior Data Architect

    Full Remote · Poland · 9 years of experience · Upper-Intermediate

    N-iX is a software development service company that helps businesses across the globe develop successful software products. Over 21 years on the market, and by leveraging the talent of Eastern Europe, the company has grown to 2000+ professionals with a broad portfolio of customers ranging from Fortune 500 companies to technology start-ups. N-iX has come a long way and now has a presence in nine countries: Poland, Ukraine, Romania, Bulgaria, Sweden, Malta, the UK, the US, and Colombia.

     

    The Data and Analytics practice, part of the Technology Office, is a team of high-end experts in data strategy, data governance, and data platforms, and contributes to shaping the future of data platforms for our customers. As a Senior Data Architect, you will play a crucial role in designing and overseeing the implementation of our strategic Databricks-based data and AI platforms. You will collaborate with data engineers and data scientists, define architecture standards, and ensure alignment across multiple business units. Your role will be pivotal in shaping the future state of our data infrastructure and driving innovative solutions within the automotive claims management domain.

     

    Key Responsibilities:

     

    • Design scalable and robust data architectures using Databricks and cloud technologies (Azure/AWS)
    • Oversee and guide the implementation of Databricks platforms across diverse business units
    • Collaborate closely with data engineers, data scientists, and stakeholders to define architecture standards and practices
    • Develop and enforce governance strategies, ensuring data quality, consistency, and security across platforms
    • Lead strategic decisions on data ingestion, processing, storage, and analytics frameworks
    • Evaluate and integrate new tools and technologies to enhance data processing capabilities
    • Provide mentorship and guidance to engineering teams, ensuring architectural compliance and effective knowledge transfer
    • Develop and maintain detailed architectural documentation.
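
    To make the responsibilities above concrete, here is a minimal, hedged sketch of the kind of pipeline step such an architecture governs on Databricks: a PySpark job that reads a raw Delta table, applies basic cleansing, and writes a curated table. Table and column names are hypothetical.

    # Hedged sketch of a bronze-to-silver step on Databricks (PySpark + Delta).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("claims-silver-build").getOrCreate()

    raw_claims = spark.table("bronze.claims_raw")            # assumed source table

    silver_claims = (
        raw_claims
        .dropDuplicates(["claim_id"])                        # assumed business key
        .filter(F.col("claim_amount").isNotNull())
        .withColumn("ingested_date", F.to_date("ingested_at"))
    )

    (
        silver_claims.write
        .format("delta")
        .mode("overwrite")
        .option("overwriteSchema", "true")
        .saveAsTable("silver.claims")                        # assumed target table
    )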

     

    Requirements:

     

    • 5+ years of experience as a Solution/Data Architect in complex enterprise environments
    • Extensive expertise in designing and implementing Databricks platforms
    • Strong experience in cloud architecture, preferably Azure or AWS
    • Proficient in Apache Spark and big data technologies
    • Advanced understanding of data modeling, data integration patterns, and data governance
    • Solid background in relational databases (MS SQL preferred) and SQL proficiency
    • Practical knowledge of data orchestration and CI/CD practices (Terraform, GitLab)
    • Ability to articulate complex technical strategies to diverse stakeholders
    • Strong leadership and mentorship capabilities
    • Fluent English (B2 level or higher)
    • Exceptional interpersonal and communication skills in an international team setting.

       

    Nice to have:

     

    • Experience with Elasticsearch or vector databases
    • Knowledge of containerization technologies (Docker, Kubernetes)
    • Familiarity with dbt (data build tool)
    • Willingness and ability to travel internationally twice a year for workshops and team alignment.
  • 4 views · 0 applications · 12d

    Infrastructure Engineer with Java (hybrid work in Warsaw)

    Office Work · Poland · 5 years of experience · Upper-Intermediate

    The product we are working on is one of TOP-3 navigation systems, complex web services, and other solutions related to it. The web and mobile apps handle information at a massive scale and extend well beyond the search, giving people and companies a lot of new, useful options.

    This role focuses on executing critical migration projects within the backend infrastructure of the project. The Backend Infrastructure team is undertaking several large-scale migrations to modernize its systems, improve reliability, and reduce maintenance overhead. This TVC position will be instrumental in performing the hands-on work required for these migrations, working closely with the infrastructure team and other Backend teams.
     

    Responsibilities:
     

    • Execute Migrations: Actively participate in and drive the execution of large-scale code and system migrations across various backend services. Some examples include:
      • Migrating event processing systems from custom infrastructure to managed infrastructure solutions;
      • Transitioning services from custom OpenCensus metrics collection to OpenTelemetry;
      • Migrating custom metrics to standard OpenTelemetry metrics.
    • Code Modification and Updates: Update and refactor existing codebases (primarily Java) to align with new libraries, platforms, and infrastructure.
    • Testing: Work with the Infrastructure team to create a testing plan for migrations to ensure that changes do not break running services and execute the test plans.
    • Collaboration: Work closely with the Backend Infrastructure team and other software engineers to understand migration requirements, plan execution strategies, and ensure smooth transitions with minimal disruption.
    • Problem Solving: Investigate, debug, and resolve technical issues and complexities encountered during the migration processes.
    • Documentation: Maintain clear and concise documentation for migration plans, processes, changes made, and outcomes.
    • Best Practices: Adhere to software development best practices, ensuring code quality, and follow established guidelines for infrastructure changes.

       

    Requirements:

    • 5+ years of hands-on experience in backend software development.
    • Strong proficiency in Java programming.
    • Strong communication and interpersonal skills, with the ability to collaborate effectively within a technical team environment.
    • Bachelor’s degree in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
    • Good spoken and written English level: Upper-Intermediate or higher.
       

    Nice to have:

    • Experience with observability frameworks such as OpenTelemetry or OpenCensus.
    • Familiarity with gRPC.
    • Knowledge of Google Cloud Platform (GCP) services, particularly data processing services like Dataflow.
       

    We offer:

    • Opportunities to develop in various areas;
    • Compensation package (20 paid vacation days, paid sick leaves);
    • Flexible working hours;
    • Medical insurance;
    • English courses with a native speaker, yoga (Zoom);
    • Paid tech training and other activities for professional growth;
    • Hybrid work mode (~3 days in the office);
    • International business trips;
    • Comfortable office.

       

    If your qualifications and experience match the requirements of the position, our recruitment team will reach out to you within a week. Please rest assured that we carefully consider each candidate, but due to the volume of applications, the review and further processing of your candidacy may take some time.

  • 38 views · 0 applications · 9d

    Data Engineer to $7500

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product: 

     

    Our client, Harmonya, develops an AI-powered product data enrichment, insights, and attribution platform for retailers and brands. Its proprietary technology processes millions of online product listings, extracting valuable insights from titles, descriptions, ingredients, consumer reviews, and more.

     

    Harmonya builds robust tools to help uncover insights about the consumer drivers of market performance, improve assortment and merchandising, categorize products, guide product innovation, and engage target audiences more effectively.

     

    About the Role: 


    We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

     

    Key Responsibilities: 

    • Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
    • Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
    • Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
    • Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
    • Apply best practices for data security, integrity, and performance across all systems.
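
    As a rough illustration of the data-acquisition work described above, the sketch below pulls paginated JSON from a source API and lands the raw pages in Google Cloud Storage for downstream processing. The API URL, bucket name, and pagination scheme are assumptions, not Harmonya's actual setup.

    # Hedged sketch: land raw API pages in GCS for later processing.
    import json
    from datetime import datetime, timezone

    import requests
    from google.cloud import storage

    SOURCE_URL = "https://api.example.com/v1/products"       # hypothetical source API
    BUCKET = "raw-product-listings"                          # hypothetical bucket

    def land_raw_pages(max_pages: int = 10) -> int:
        """Fetch up to max_pages of results and store each page as a JSON blob."""
        client = storage.Client()
        bucket = client.bucket(BUCKET)
        run_date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
        pages = 0
        for page in range(1, max_pages + 1):
            resp = requests.get(SOURCE_URL, params={"page": page}, timeout=30)
            resp.raise_for_status()
            payload = resp.json()
            if not payload.get("items"):                     # assumed response shape
                break
            blob = bucket.blob(f"raw/products/dt={run_date}/page_{page:05d}.json")
            blob.upload_from_string(json.dumps(payload), content_type="application/json")
            pages += 1
        return pages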

     

    Required Competence and Skills:

    • 4+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
    • Proven track record in designing, developing, and deploying complex data applications.
    • Hands-on experience with orchestration and processing tools (e.g. Apache Airflow and/or Apache Spark).
    • Experience with public cloud platforms (preferably GCP) and cloud-native data services.
    • Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).
    • Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
    • Strong verbal and written communication skills in English.
    • Excellent communication skills and a strong team player, capable of working cross-functionally.

     

    Nice to have:

    • Familiarity with data science tools and libraries (e.g., pandas, scikit-learn).
    • Experience working with Docker and Kubernetes.
    • Hands-on experience with CI tools such as GitHub Actions

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus official national holidays of the country you are based in).

     

    We provide full accounting and legal support in all countries where we operate.

     

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

     

    We offer a highly competitive package with yearly performance and compensation reviews.

  • 54 views · 1 application · 9d

    Senior Data Engineer

    Full Remote · Ukraine, Poland, Romania · 4 years of experience · Upper-Intermediate

    Job Description

    Required Qualifications

     

    • 4+ years of experience in Data Engineering, with hands-on ETL development.
    • Proven experience with Apache Airflow (DAG design, scheduling, monitoring), Apache NiFi.
    • Strong experience with Snowflake architecture, data migration, and performance optimization.
    • Proficient in SQL, Python, and working with REST APIs for data ingestion.
    • Experience with cloud environments (Azure or AWS).
    • Top-notch English written and verbal communication skills (a candidate will report to the USA-based PMO)

     

    Preferred Qualifications

    • Experience in data infrastructure modernization projects.
    • Exposure to CI/CD practices and DevOps collaboration.
    • Familiarity with integrating tools such as Azure DevOps, Snyk, OpsGenie, or Datadog
    • Previous working experience with PAM or Identity Management Solutions

     

    Job Responsibilities

    In the capacity of Senior Data Engineer - you will be expected to:

    • Design scalable data pipeline architectures to support real-time and batch processing.
    • Deploy and configure Apache Airflow for the orchestration of complex ETL workflows.
    • Develop Airflow DAGs for key integrations (a minimal sketch follows this list):
      - Azure DevOps (Work Items, build/release pipelines, commit data)
      - OpsGenie (incident and alert data)
      - Snyk (security vulnerability data)
      - Datadog (infrastructure monitoring and logs)
    • Migrate the existing data warehouse infrastructure and historical data to Snowflake.
    • Create documentation for data architecture, Airflow configurations, and DAGs.
    • Collaborate with Engineering and DevOps teams to align on integration and deployment strategies.
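
    For orientation, the minimal sketch referenced in the list above shows the general shape of such an integration DAG in Airflow 2.x: pull alert data from a REST API on a schedule and load it into Snowflake. The endpoint, credentials, and target table are placeholders, not the client's actual configuration.

    # Hedged sketch of a REST-API-to-Snowflake integration DAG (Airflow 2.4+ style).
    from datetime import datetime, timedelta

    import requests
    import snowflake.connector
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    API_URL = "https://api.example.com/v2/alerts"            # hypothetical alert source
    SNOWFLAKE_CONN = dict(account="xy12345", user="etl_user", password="***",
                          warehouse="ETL_WH", database="OPS", schema="RAW")  # assumed

    def extract_alerts(**context):
        resp = requests.get(API_URL, timeout=60)
        resp.raise_for_status()
        return resp.json()["alerts"]                         # assumed response shape

    def load_alerts(ti, **context):
        alerts = ti.xcom_pull(task_ids="extract_alerts")
        rows = [(a["id"], a["status"], a["createdAt"]) for a in alerts]
        conn = snowflake.connector.connect(**SNOWFLAKE_CONN)
        try:
            conn.cursor().executemany(
                "INSERT INTO RAW.ALERTS (ID, STATUS, CREATED_AT) VALUES (%s, %s, %s)", rows
            )
        finally:
            conn.close()

    with DAG(
        dag_id="alerts_to_snowflake",
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract_alerts", python_callable=extract_alerts)
        load = PythonOperator(task_id="load_alerts", python_callable=load_alerts)
        extract >> load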

     

    Department/Project Description

    The client is an international product company with a shared goal of redefining the legacy approach to Privileged Access Management by delivering multi-cloud-architected solutions that enable digital transformation at scale. The client establishes a root of trust and then grants least-privilege access just in time, based on verifying who is requesting access, the context of the request, and the risk of the access environment.

    The client's products centralize and orchestrate fragmented identities, improve audit and compliance visibility, and reduce risk, complexity, and costs for the modern, hybrid enterprise. Over half of the Fortune 100, as well as the world's largest financial institutions, intelligence agencies, and critical infrastructure companies, trust this company to stop the leading cause of breaches: privileged credential abuse.

    The client seeks an experienced Data Engineer to support the Engineering Team in building a modern, scalable data pipeline infrastructure. This role will focus on migrating existing ETL processes to Apache Airflow and transitioning the data warehouse to Snowflake. The engineer will be responsible for designing architecture, developing DAGs, and integrating data sources critical to operations, development, and security analytics.

    The position will work under the supervision of the US-based client PMO in the areas of data analysis, reporting, database querying, and presenting results to stakeholders. The selected candidate will contribute to a distinguished PAM solution company and become an integral part of the reporting squad, the team responsible for analyzing, structuring, and presenting company data to senior and VP-level stakeholders.

  • 58 views · 15 applications · 8d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

    Does this sound like you?

    • 5+ years of experience in Data Engineering or a related field
    • Strong expertise in SQL and data modeling concepts
    • Hands-on experience with Airflow
    • Experience working with Redshift
    • Proficiency in Python for data processing
    • Strong understanding of data governance, security, and compliance
    • Experience in implementing CI/CD pipelines for data workflows
    • Ability to work independently and collaboratively in an agile environment
    • Excellent problem-solving and analytical skills

     

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions
    • Build and optimize ETL/ELT pipelines for efficient data integration
    • Design and implement data models to support analytical and reporting needs
    • Ensure data integrity, quality, and security across all pipelines
    • Optimize data performance and scalability using best practices
    • Work with big data technologies such as Redshift
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
    • Implement CI/CD pipelines for data workflows
    • Monitor, troubleshoot, and improve data processes and system performance
    • Stay updated with industry trends and emerging technologies in data engineering
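
    As a hedged illustration of the ELT work listed above, the sketch below loads a partition from S3 into Redshift with a COPY statement and runs a simple row-count check in the same transaction. The cluster endpoint, S3 prefix, and IAM role ARN are placeholders.

    # Hedged sketch: COPY from S3 into Redshift with a basic quality gate.
    import psycopg2

    DSN = dict(host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
               port=5439, dbname="analytics", user="etl_user", password="***")  # assumed

    COPY_SQL = """
        COPY staging.orders
        FROM 's3://example-dwh-landing/orders/dt=2024-01-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS PARQUET;
    """

    def load_orders() -> int:
        conn = psycopg2.connect(**DSN)
        try:
            with conn, conn.cursor() as cur:   # commits on success, rolls back on error
                cur.execute("TRUNCATE staging.orders;")
                cur.execute(COPY_SQL)
                cur.execute("SELECT COUNT(*) FROM staging.orders;")
                loaded = cur.fetchone()[0]
                if loaded == 0:
                    raise ValueError("COPY loaded zero rows; aborting load")
                return loaded
        finally:
            conn.close()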


     

  • 48 views · 6 applications · 8d

    Data Engineer with Knime experience

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    We are looking for a skilled Data Engineer with hands-on experience in KNIME to join our data team. You will be responsible for designing and maintaining scalable data pipelines and ETL workflows, integrating multiple data sources, and ensuring high data quality and availability across the organization.
     

    Key Responsibilities

    • Design, develop, and manage ETL workflows using KNIME Analytics Platform
    • Integrate and transform data from various sources (databases, APIs, flat files, etc.)
    • Optimize data pipelines for performance and reliability
    • Collaborate with data analysts, scientists, and business stakeholders
    • Maintain clear documentation of data processes and pipelines
    • Monitor and troubleshoot data quality issues

    Requirements

    • 2+ years of experience as a Data Engineer or similar role
    • Solid hands-on experience with KNIME (workflow creation, integration, automation)
    • Strong knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL, MSSQL)
    • Familiarity with Python or R for data manipulation (a plus)
    • Understanding of ETL concepts and data warehousing principles
    • Experience working with APIs, JSON, XML, Excel, and CSV data
    • Good communication skills and ability to work in cross-functional teams

    Nice to Have

    • Experience with cloud platforms (AWS, Azure, GCP)
    • Knowledge of other ETL tools (e.g., Talend, Alteryx, Apache NiFi)
    • Basic knowledge of machine learning or business intelligence tools

    What We Offer

    • Competitive salary and performance bonuses
    • Flexible working hours and remote work options
    • Opportunities for professional growth and training
    • Collaborative and supportive team culture
  • 20 views · 1 application · 7d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    We are looking for an experienced Data Engineer to join us in one of our projects for the pharmaceutical sector. You will be involved in building complex big data/analytics platforms, data warehouses, and data pipelines for a wide range of departments (Finance, IoT, Operations, HR, Sales, Marketing, etc.) for different use cases.

    Your tasks

    • Designing, implementing, and operating Data & Analytics solutions in a DevOps working environment
    • Creating ETL/ELT processes and scalable data pipelines for the data science team & BI Team
    • Managing and releasing system resources with versioning using Continuous Integration/Continuous Delivery
    • Implementing relational and multidimensional data models
    • Standardizing data discovery, acquisition, and harmonization processes, optimizing the data
    • Ensuring compliance with all required security considerations

    Requirements

    • At least 5 years of experience working with big data ecosystems as a Data Engineer
    • Strong experience with cloud-based services and with containers (Docker, Kubernetes, Functions)
    • Hands-on experience in creating and managing an enterprise-scale Data Platform, setting best practices for security, privacy, monitoring & alerting, and CI/CD
    • Excellent communication skills
    • Experience with BI & Master Data (Technology, Process & Governance) will be an advantage
    • Good knowledge of the English language
  • 35 views · 0 applications · 7d

    Data Engineer/DataOps Team Lead to $8000

    Poland · 5 years of experience · Upper-Intermediate

    About Us

    We are a leading Israeli IT company with 15 years of market experience and 8 years in Ukraine. Officially registered in Ukraine, Israel, and Estonia, we employ over 100 professionals worldwide. Specializing in successful startup collaboration, we offer services across e-commerce, Fintech, logistics, and healthcare.
    Our client is a leading mobile app company that depends on high-volume, real-time data pipelines to drive user acquisition and engagement. This role is instrumental in maintaining data reliability, supporting production workflows, and enabling operational agility across teams. This is a hands-on leadership role that requires deep technical expertise, an ownership mindset, and strong collaboration across engineering and business stakeholders.

    Key Requirements:

    🔹 5+ years of experience in data engineering, with strong hands-on expertise in building and maintaining data pipelines;
    🔹 At least 2 years in a team leadership or technical lead role;
    🔹 Proficient in Python, SQL, and data orchestration tools such as Airflow;
    🔹 Experience with both SQL and NoSQL databases, such as MySQL, Presto, Couchbase, MemSQL, or MongoDB;
    🔹 Bachelor's degree in Computer Science, Engineering, or a related field;
    🔹 English – Upper-Intermediate or higher.

    Will be a plus:

    🔹 Background in NOC or DevOps environments;
    🔹 Familiarity with PySpark.

    What you will do:

    🔹 Oversee daily data workflows, troubleshoot failures, and escalate critical issues to ensure smooth and reliable operations;
    🔹 Use Python, SQL, and Airflow to configure workflows, extract client-specific insights, and adjust live processes as needed;
    🔹 Build and maintain automated data validation and testing frameworks to ensure data reliability at scale (see the sketch after this list);
    🔹 Own and evolve the metadata system, maintaining table lineage, field definitions, and data usage context to support a unified knowledge platform;
    🔹 Act as the primary point of contact for operational teams and stakeholders, ensuring consistent collaboration and high data quality across the organization.
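
    The sketch below illustrates the kind of automated validation mentioned in the list above: lightweight, reusable checks that run after a pipeline task and fail loudly on bad data. Table and column names are hypothetical.

    # Hedged sketch of a small data-validation gate (pandas-based).
    import pandas as pd

    def validate_daily_events(df: pd.DataFrame) -> list[str]:
        """Return human-readable violations; an empty list means the batch passes."""
        problems: list[str] = []

        if df.empty:
            problems.append("daily_events batch is empty")
        if df["event_id"].duplicated().any():
            problems.append("duplicate event_id values found")
        if df["user_id"].isna().mean() > 0.01:               # tolerate up to 1% missing
            problems.append("more than 1% of rows are missing user_id")
        # Assumes event_time is a tz-aware UTC timestamp column.
        if (df["event_time"] > pd.Timestamp.now(tz="UTC")).any():
            problems.append("event_time contains future timestamps")

        return problems

    def validation_gate(df: pd.DataFrame) -> None:
        """Typical use inside an orchestrated task: raise so the task fails and alerts fire."""
        problems = validate_daily_events(df)
        if problems:
            raise ValueError("data validation failed: " + "; ".join(problems))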

    Interview stages:

    🔹 HR Interview;
    🔹 Pro-Interview;
    🔹 Technical Interview;
    🔹 Final Interview with HR;
    🔹 Reference Check;
    🔹 Offer.

    Why Join Us?

    🔹 Be part of a friendly international team, working together on interesting global projects;
    🔹 Enjoy many chances to grow, learn from mentors, and work on projects that make a real difference;
    🔹 Join a team that loves fresh ideas and supports creativity and new solutions;
    🔹 Work closely with clients, building great communication skills and learning directly from their needs;
    🔹 Thrive in a workplace that values your needs, offering flexibility and a good balance between work and life.

  • 68 views · 5 applications · 7d

    Data Engineer with AI-pipelines experience

    Full Remote · Worldwide · Product · 1 year of experience · Upper-Intermediate

    Requirements

    • Solid Python or JS (to build data pipelines in LangGraph, LangChain or with similar tools)
    • Hands-on work with LLMs or chatbot frameworks (LangGraph, ADK etc.)
    • Basic knowledge of content quality checks (fact-checking, toxicity filters)
    • Curious, fast-learning, well-organized, and reliable
    • English β€” Intermediate or higher

    Nice to have: experience with vector databases, prompt tuning, or content moderation.

     

    Responsibilities

    • Build and maintain AI agent pipelines that auto-create and update content
    • Design quick validation pipelines to catch errors, duplicates, or policy violations
    • Write and refine prompts to get the best answers from LLMs
    • Track quality metrics; debug and improve the agents over time
    • Suggest new ideas for smarter, safer content automation
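
    To make the validation responsibility above a bit more concrete, here is a hedged sketch of a quick rule-based pass over generated content items before publication: cheap checks for duplicates, length, and banned phrases. Field names and the banned-phrase list are assumptions; in practice, fact-checking and toxicity scoring would sit behind these hooks.

    # Hedged sketch: rule-based pre-publication checks for generated content.
    import hashlib

    BANNED_PHRASES = ("as an ai language model", "lorem ipsum")   # illustrative only

    def validate_items(items: list[dict]) -> tuple[list[dict], list[str]]:
        """Split generated items into (accepted, rejection_reasons)."""
        seen_hashes: set[str] = set()
        accepted, rejections = [], []

        for item in items:
            text = item.get("body", "").strip()
            digest = hashlib.sha256(text.lower().encode()).hexdigest()

            if len(text) < 200:
                rejections.append(f"{item.get('id')}: body too short")
            elif digest in seen_hashes:
                rejections.append(f"{item.get('id')}: duplicate of an earlier item")
            elif any(p in text.lower() for p in BANNED_PHRASES):
                rejections.append(f"{item.get('id')}: contains a banned phrase")
            else:
                seen_hashes.add(digest)
                accepted.append(item)

        return accepted, rejections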

     

    We Offer:

    • WFH and flexible working hours 
  • 27 views · 0 applications · 7d

    Big Data Engineer (Forward Deployed Engineer)

    Office Work · Ukraine (Kyiv) · 4 years of experience · Upper-Intermediate

    In our project, we help organizations turn big and complex multi-modal datasets into information-rich geo-spatial data subscriptions that can be used across a wide spectrum of use cases. We turn petabytes of raw data into clear, actionable insights by applying advanced analytics to multi-sourced data, enabling customers to gain comprehensive understanding of organizations, events, and behaviors across land, sea, air, cyber and space domains.

    We are seeking a Big Data Engineer (Forward Deployed Engineer) to work on-site in Kyiv, supporting critical geospatial analytics and natural language processing initiatives. This hybrid role combines hands-on big data engineering with product development work, focusing on analyzing massive datasets to surface actionable intelligence patterns and behavioral insights.

    As a forward-deployed engineer, you will be embedded directly with the operations team, working on cutting-edge projects that transform petabytes of structured and unstructured geospatial data into strategic intelligence. You'll play a crucial role in developing and optimizing the data fusion capabilities that reduce task force response times from days to hours, while collaborating closely with multidisciplinary teams supporting mission-critical applications.

     

    RESPONSIBILITIES
      1. Design and implement scalable big data processing pipelines for ingesting and analyzing petabytes of multi-modal geospatial datasets
      2. Develop and optimize data fusion algorithms that automatically identify relationships and surface hidden patterns in near real-time
      3. Build and maintain robust data engineering infrastructure supporting behavioral analytics and anomaly detection at unprecedented scale
      4. Collaborate with data scientists and analysts to translate complex analytical requirements into production-ready data processing solutions
      5. Implement and optimize natural language processing workflows for unstructured data analysis and entity relationship extraction
      6. Ensure data quality, governance, and security compliance across all data processing workflows
      7. Document data architectures, processing standards, and operational procedures to support knowledge transfer and audit readiness
      8. Participate in incident response and troubleshooting for data pipeline and processing issues
      9. Develop custom attribution and modeling technologies for real-time threat detection and opportunity identification specific to regional requirements
      10. Build integration layers connecting the data engine with tactical mission networks and front-end visualization tools used by local partners
      11. Implement behavioral analytics capabilities that enable rapid identification of meaningful activities, patterns, and anomalies in complex regional datasets
      12. Support product development initiatives by prototyping new analytical capabilities and validating them against real-world operational scenarios
      13. Present analytical findings, insights, and recommendations to both technical and executive audiences, translating complex data relationships into actionable intelligence
      14. Conduct on-site client workshops and training sessions on the analytical capabilities and data products
      15. Optimize data processing performance for tactical decision-making timelines, ensuring sub-hour response times for critical intelligence queries

     

    SKILLS

    1. Big Data Technologies: 4+ years hands-on experience with distributed computing frameworks (Apache Spark, Hadoop, Kafka, Flink)
    2. Programming Proficiency: Expert-level skills in Python, Scala, or Java for large-scale data processing; SQL expertise for complex analytical queries
    3. Geospatial Analytics: Experience with geospatial data formats (GeoTIFF, SHP, KML, NetCDF) and processing libraries (GDAL, PostGIS, GeoPandas, Shapely)
    4. Cloud Platforms: Proficiency with cloud-based big data services (Azure Data Factory, Databricks, HDInsight, or AWS equivalents)
    5. Data Engineering: Strong understanding of ETL/ELT pipelines, data warehousing concepts, and real-time streaming architectures
    6. Database Systems: Experience with both SQL (PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra, Elasticsearch) database technologies
    7. DevOps & Infrastructure: Familiarity with containerization (Docker, Kubernetes), CI/CD pipelines, and infrastructure-as-code principles
    8. Performance Optimization: Proven track record of optimizing big data workloads for speed, cost, and reliability at petabyte scale
    9. Real-time Analytics: Required hands-on experience building low-latency data processing systems for near real-time behavioral analytics and alerting
    10. Geospatial Intelligence: Required understanding of GEOINT workflows, temporal analysis, and multi-source intelligence fusion methodologies
    11. Data Visualization Integration: Required experience integrating data engines with tactical visualization tools and mission planning software
    12. Multi-source Data Fusion: Required proven ability to correlate and analyze disparate data sources (satellite imagery, communications data, social media, sensor networks)
    13. Client Presentation Skills: Required experience presenting complex technical findings to diverse audiences including C-level executives, leadership, and technical teams
    14. Stakeholder Management: Required ability to manage multiple client relationships simultaneously while delivering high-quality technical solutions
    15. Security Awareness: Required understanding of data security best practices, particularly in sensitive operational environments
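
    For orientation, here is a hedged sketch of a routine geospatial step implied by the skills above: joining point events to region polygons with GeoPandas so they can be aggregated per region. File paths, column names, and the CRS handling are assumptions for illustration.

    # Hedged sketch: spatial join of point events to admin regions with GeoPandas.
    import geopandas as gpd

    def events_per_region(events_path: str, regions_path: str) -> gpd.GeoDataFrame:
        events = gpd.read_file(events_path)      # e.g. a GeoJSON of point observations
        regions = gpd.read_file(regions_path)    # e.g. a shapefile of admin boundaries

        # Work in a common coordinate reference system before joining.
        events = events.to_crs(regions.crs)

        joined = gpd.sjoin(events, regions[["region_id", "geometry"]],
                           how="inner", predicate="within")
        counts = joined.groupby("region_id").size().rename("event_count").reset_index()
        return regions.merge(counts, on="region_id", how="left").fillna({"event_count": 0})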

     

    BENEFITS:

    • Fair Compensation: Enjoy a competitive salary and bonuses that recognize your hard work.
    • Work-Life Balance: Choose from flexible work arrangements, whether you prefer working from home, the office, or a mix of both.
    • Grow with Us: Take advantage of opportunities for professional growth through training, certifications, and attending exciting conferences.
    • Care for Your Well-being: Benefit from comprehensive health benefits, wellness programs, and other perks designed to support your overall well-being.
  • 62 views · 11 applications · 7d

    Data Engineer

    Full Remote · EU · 5 years of experience · Upper-Intermediate

    Hello, fellow data engineers! We are Stellartech - an educational technology product company, and we believe in inspiration but heavily rely on data. And we are looking for a true pipeline detective and zombie process hunter!

     

    Why? Because we trust our Data Platform for daily business decisions. From "What ad platform presents us faster? Which creative media presents our value to customers in the most touching way?" to "What would our customers like to learn the most about? What can make education more enjoyable?", we rely on numbers, metrics and stuff. But as we are open and curious, there's a lot to collect and measure! That's why we need to extend, improve and speed up our data platform.

 

    That's why we need you to:

    • Build and maintain scalable data pipelines using Python and Airflow to provide data ingestion, transformation, and delivery.
    • Develop and optimize ETL/ELT workflows to ensure data quality, reliability, and performance.
    • Bring your vision and opinion to define data requirements and shape solutions to business needs.
    • Smartly monitor, relentlessly troubleshoot, and bravely resolve issues in data workflows, striving for high availability and fault tolerance.
    • Propose, advocate, and implement best practices for data storage and querying using AWS services such as S3 and Athena (a small sketch follows this list).
    • Document data workflows and processes, ensuring you don't have to say it twice and have time for creative experiments. Sure, it's about clarity and maintainability across the team as well.
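
    The small sketch referenced in the list above shows one common S3 + Athena pattern from Python: submit a query, poll until it finishes, and read the results. The database name, result bucket, and table are assumptions, not the actual platform configuration.

    # Hedged sketch: run an Athena query over S3 data with boto3.
    import time
    import boto3

    athena = boto3.client("athena", region_name="eu-west-1")         # assumed region

    def run_athena_query(sql: str) -> list[dict]:
        qid = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": "analytics"},                       # assumed database
            ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
        )["QueryExecutionId"]

        while True:
            state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(2)
        if state != "SUCCEEDED":
            raise RuntimeError(f"Athena query {qid} finished in state {state}")

        result = athena.get_query_results(QueryExecutionId=qid)      # first page of results only
        header, *rows = result["ResultSet"]["Rows"]
        columns = [c["VarCharValue"] for c in header["Data"]]
        return [dict(zip(columns, [c.get("VarCharValue") for c in row["Data"]])) for row in rows]

    # Example (assumed events table partitioned by dt):
    # run_athena_query("SELECT dt, COUNT(*) AS signups FROM events WHERE dt >= '2024-01-01' GROUP BY dt")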

     

    For that, we suppose you'd be keen on

    • AWS services such as S3, Kinesis, Athena, and others.
    • dbt and Airflow for data pipeline and workflow management.
    • Application of data architecture, ETL/ELT processes, and data modeling.
    • Advanced SQL and Python programming.
    • Monitoring tools and practices to ensure data pipeline reliability.
    • CI/CD pipelines and DevOps practices for data platforms.
    • Monitoring and optimizing platform performance at scale.

     

    Will be nice to 

    • Understand cloud services (we use AWS), advances, trade-offs, and perspectives.
    • Keep in mind the analytical approach and the ability to consider future perspectives in system design in daily practice and technical decisions

     

    Why You'll Love Working With Us:

    • Impactful Work: Your contributions will directly shape the future of our company.
    • Innovative Environment: We're all about trying new things and pushing the envelope in EdTech.
    • Freedom: a flexible role, either fully remote or hybrid from one of our offices in Cyprus or Poland.
    • Health: we offer a Health Insurance package for hybrid mode (Cyprus, Poland) and a health corner in the Cyprus office.
    • AI solutions: a GPT chat bot / ChatGPT subscription and other tools.
    • Wealth: we offer a competitive salary.
    • Balance: flexible paid time off, you get 21 days of annual leave + 10 bank holidays.
    • Collaborative Culture: Work alongside passionate professionals who are as driven as you are.

     

  • 34 views · 1 application · 7d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    We are seeking a proactive Senior Data Engineer to join our vibrant team.

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges. 
     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency. 
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.


    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript
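
    For orientation, here is a hedged sketch of a Foundry pipeline step using the tools listed above, following the publicly documented transforms-python @transform_df pattern. The dataset paths and columns are hypothetical and would come from the project's own Foundry setup.

    # Hedged sketch of a Foundry transforms-python step (PySpark under the hood).
    from pyspark.sql import functions as F
    from transforms.api import transform_df, Input, Output

    @transform_df(
        Output("/Project/clinical/datasets/clean_site_visits"),          # assumed output path
        raw_visits=Input("/Project/clinical/datasets/raw_site_visits"),  # assumed input path
    )
    def clean_site_visits(raw_visits):
        """Deduplicate raw visit records and normalize the visit date for downstream use."""
        return (
            raw_visits
            .dropDuplicates(["visit_id"])
            .filter(F.col("site_id").isNotNull())
            .withColumn("visit_date", F.to_date("visit_ts"))
        )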


    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.


    Nice to have:

    • Certification in Cloud platforms, or related areas;
    • Experience with the Apache Lucene search engine and REST web services;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.
       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 41 views · 6 applications · 5d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    About Us

    We are a leading Israeli IT company with 15 years of market experience and 8 years in Ukraine. Officially registered in Ukraine, Israel, and Estonia, we employ over 100 professionals worldwide. Specializing in successful startup collaboration, we offer services across e-commerce, Fintech, logistics, and healthcare.
    Our client is a technology company focused on supply chain management and procurement optimization. The client's product simplifies purchasing processes by leveraging modern technology, helping businesses automate supply chains and reduce costs. We're seeking an experienced Data Engineer with expertise in Python, a passion for startups, and a genuine interest in ML and data to join our client's team.

    Key Requirements:

    🔹 5+ years of commercial experience in Data Engineering;
    🔹 Proven expertise with the Python programming language;
    🔹 Strong understanding of ETL processes;
    🔹 Familiarity with Looker, Tableau, or other BI tools;
    🔹 Deep expertise in Airflow, Snowflake;
    🔹 Proficiency in database technologies (SQL, NoSQL, etc.);
    🔹 English – Upper-Intermediate or higher.

    What you will do:

    🔹 Create data pipelines that can be used to process and analyze large amounts of data, using Snowflake, Looker, Airflow, and DBT;
    🔹 Develop ETL workflows to process structured/unstructured data, ensuring data quality and reliability;
    🔹 Implement and enforce data quality checks, validation processes, and governance best practices to maintain data integrity;
    🔹 Collaborate with cross-functional teams to align data solutions with business objectives;
    🔹 Document technical processes and maintain version control for reproducibility.

    Interview stages:

    🔹 HR Interview;
    🔹 Pro-Interview;
    🔹 Test Task;
    🔹 Final Interview;
    🔹 Reference Check;
    🔹 Offer.

    Why Join Us?

    🔹 Be part of a friendly international team, working together on interesting global projects;
    🔹 Enjoy many chances to grow, learn from mentors, and work on projects that make a real difference;
    🔹 Join a team that loves fresh ideas and supports creativity and new solutions;
    🔹 Work closely with clients, building great communication skills and learning directly from their needs;
    🔹 Thrive in a workplace that values your needs, offering flexibility and a good balance between work and life.

  • 41 views · 7 applications · 5d

    Power BI Engineer

    Ukraine · 3 years of experience · Upper-Intermediate

    Swan Software Solutions is a fast-growing, quality-driven IT services company providing cutting-edge solutions. We believe we have found the ideal blend of global talent, innovative technologies, and highly standardized processes to fully leverage our core values: reliability, scalability, and affordability.
    We're looking for talented and creative software engineers to join our growing team!
     

    EXPERIENCE AND SKILLS REQUIRED:

    Key Responsibilities:

    • Migrate existing Crystal Reports into Power BI, ensuring functional parity and performance optimization
    • Interpret and translate legacy report logic and data sources into Power BI datasets and reports
    • Work with product managers and internal stakeholders to understand reporting requirements and user expectations
    • Design and build efficient, reusable, and reliable Power BI datasets, dashboards, and paginated reports
    • Optimize performance and data refresh processes
    • Validate data accuracy and ensure compliance with security and access control policies
    • Own and manage tasks and/or projects with minimal direction
    • Transform data into information through ETL technology
    • Source code deployment and management 
    • Design database structures for maximum performance and usability
    • Visualize data and information to maximum business benefit  
    • Ability to successfully troubleshoot issues/provide solutions with minimal direction
    • Ability to support legacy BI solutions
    • Update job knowledge by participating in educational opportunities, reading professional publications, maintaining personal networks, and participating in professional organizations
    • Perform root cause analysis of issues/problems and create preventative measures; ensure proper escalation and coordination of all issues
    • Contribute DDL/DML to SCRUM application release cycles
    • Perform development and quality assurance tasks with equal enthusiasm and skill
    • Provide technical assistance by responding to inquiries regarding errors, problems, or technical questions
    • Promote adherence to departmental/software development standards
    • Set a professional example for the team through model behavior, superior performance, and proper attendance
    • Facilitate teamwork to meet goals
    • Evaluate and recommend new procedures, software or other tools to enhance capabilities
    • Participate in technical reviews to ensure that the architecture supports current and future business requirements.

       

    Required Knowledge, Skills & Abilities:

    • BS degree in Computer Science, Information systems, or Engineering 
    • 3+ years developing complex SQL and stored procedures
    • 3+ years of BI development experience, preferably within a SQL Server environment
    • Knowledge of Business Intelligence tools and concepts (ETL, Analytics, Visualization)
    • 2+ years experience designing complex reports within Power BI utilizing DAX 
    • 3+ years of Microsoft SSIS, SSAS, and SSRS experience
    • 1+ year of Python development experience
    • ETL experience; ability to move and transform data efficiently and accurately
    • Database design experience, for both relational and dimensional modelling
    • Knowledge of star schemas
    • Knowledge of Cube development
    • Ability to develop and execute thorough test plans
    • 1+ years QA testing experience. Able to successfully validate software solutions
    • Ability to develop automated test solutions
    • Ability to read/analyze queries and stored procedures to diagnose performance issues
    • Clear and effective written and verbal English communications skills 
    • Strong technical and interpersonal skills with the ability to work with minimal supervision
    • Ability to create technical documentation
    • Ability to apply proper security principles to solutions
       

    Preferred Knowledge, Skills & Abilities:

    • SQL Server Change Data Capture
    • Software Development experience
    • Non-Microsoft BI development tools
       

      WE OFFER:

    • A team of experienced professionals, ready to share their knowledge and skills;
    • Strong SDLC process with use of Agile, Scrum, depending on a project;
    • Competitive salary according to your skills and expectations;
    • Corporate English trainings/IT business trainings;
    • Strong compensation packages based on experience;
    • Flexible bonus payment system that allows our team members to earn money above and beyond their standard salary.
       

    We have offices in Kyiv, Poltava, Uzhhorod, Cherkasy, and Ivano-Frankivsk and would love for you to become a part of our team!
    In your CV, please include contact details and examples of projects, indicating your role in each project.
    The position is open due to the emergence of new projects!
