Jobs

  • · 7 views · 0 applications · 2h

    Palantir Data Engineer

    Full Remote · Ukraine, Romania, Poland, Spain, Portugal · 4 years of experience · Upper-Intermediate

    We are seeking a skilled and adaptable Data Engineer who is passionate about data infrastructure and long-term career growth. This role offers an opportunity to build and maintain scalable data solutions while developing expertise in Palantir Foundry and other modern data tools. We value individuals who are excited to expand their technical capabilities over time, work on multiple accounts, and contribute to a dynamic and growing team.

    You will play a pivotal role in transforming raw data from various sources into structured, high-quality data products that drive business decisions. The ideal candidate should be motivated to learn and grow within the organization, actively collaborating with experienced engineers to strengthen our data capabilities over time.

    About the project

    This project focuses on building a centralized data platform for a leading investment firm that supports data-driven decision-making for high-growth companies. Currently, data is sourced from multiple locations, including Excel files, third-party tools, and custom applications, managed within separate systems. This decentralized approach creates inefficiencies and introduces the potential for data inaccuracies.

    The objective is to integrate these data sources into a single, unified platform that streamlines access and reduces manual errors. By transforming financial, legal, and operational data into structured data marts, the platform will enable advanced analytics and real-time visualization through BI tools on both web and mobile interfaces.

     

    Skills & Experience

    • Bachelor's degree in Computer Science, Software Engineering, or equivalent experience.
    • Minimum 3 years of experience in Python, SQL, and data engineering processes.
    • Experience with Palantir Foundry.
    • Proficiency in multiple database systems, such as PostgreSQL, Redis, and a data warehouse like Snowflake, including query optimization.
    • Hands-on experience with Microsoft Azure services.
    • Strong problem-solving skills and experience with data pipeline development.
    • Familiarity with testing methodologies (unit and integration testing); an illustrative test sketch follows this list.
    • Docker experience for containerized data applications.
    • Collaborative mindset, capable of working across multiple teams and adapting to new projects over time.
    • Fluent in English (written & verbal communication).
    • Curiosity and enthusiasm for finance-related domains (personal & corporate finance, investment concepts).
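
    For illustration only (not part of the original posting): a minimal pytest-style sketch of the kind of unit test for a pipeline transform that the testing bullet above refers to. pandas, the transform, and its column names are hypothetical.

    ```python
    # Hypothetical transform and unit test; pandas and pytest are assumed available.
    import pandas as pd

    def normalize_amounts(df: pd.DataFrame) -> pd.DataFrame:
        """Drop rows with missing amounts and convert cents to currency units."""
        out = df.dropna(subset=["amount_cents"]).copy()
        out["amount"] = out["amount_cents"] / 100.0
        return out.drop(columns=["amount_cents"])

    def test_normalize_amounts_drops_nulls_and_scales():
        raw = pd.DataFrame({"amount_cents": [1250.0, None, 99.0]})
        result = normalize_amounts(raw)
        assert list(result["amount"]) == [12.5, 0.99]
        assert "amount_cents" not in result.columns
    ```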

     

    Nice to have

    • Experience with Databricks.
    • Experience with Snowflake.
    • Background in wealth management, investment analytics, or financial modeling.
    • Contributions to open-source projects or personal projects showcasing data engineering skills.

     

    Responsibilities

    • Design and maintain scalable data pipelines to ingest, transform, and optimize data.
    • Collaborate with cross-functional teams (engineering, product, and business) to develop solutions that address key data challenges.
    • Support data governance, data quality, and security best practices.
    • Optimize data querying and processing for efficiency and cost-effectiveness.
    • Work with evolving technologies to ensure our data architecture remains modern and adaptable.
    • Contribute to a culture of learning and knowledge sharing, supporting newer team members in building their skills.
    • Grow into new roles within the company by expanding your technical expertise and working on diverse projects over time.

     

    We are looking for individuals who want to be part of a long-term, growing team: people who may not have all the skills today but are eager to bridge the gap and build their expertise alongside experienced engineers. If you're excited about building your data muscles and growing in your career, we'd love to hear from you!

  • · 17 views · 1 application · 3h

    Middle/Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Pre-Intermediate

    We're Applyft, an IT product company that creates value-driven mobile apps. Our journey began with the Geozilla family locator product, but now our portfolio consists of four apps in the Family Safety, Entertainment, and Mental Health spheres. We're proud to have a 5M monthly active user base and to achieve 20% QoQ revenue growth.

     

    Now we are looking for a Middle/Senior Data Engineer to join our Analytics team.

     

    What you'll do:

     

    • Design, develop, and maintain data pipelines and ETL processes for the internal DWH
    • Develop and support integrations with 3rd-party systems (an illustrative sync sketch follows this list)
    • Be responsible for the quality of data presented in BI dashboards
    • Collaborate with data analysts to troubleshoot data issues and optimize data workflows
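
    For illustration only: a minimal sketch of the kind of third-party sync step described above, assuming the requests library; the API URL, auth scheme, and field names are hypothetical.

    ```python
    # Hypothetical third-party-to-DWH sync step; requests is assumed available.
    import requests

    def fetch_subscriptions(api_url: str, token: str) -> list:
        """Pull one page of subscription events from a hypothetical partner API."""
        resp = requests.get(
            f"{api_url}/v1/subscriptions",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["items"]

    def to_staging_rows(items: list) -> list:
        # Shape the payload to match a hypothetical DWH staging table.
        return [(i["id"], i["status"], i["renewed_at"]) for i in items]
    ```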

     

    Your professional qualities:

     

    • 3+ years of BI/DWH development experience
    • Excellent knowledge of database concepts and hands-on experience with SQL
    • Proven experience of designing, implementing, and maintaining ETL data pipelines
    • Hands-on experience writing production-level Python code
    • Experience working with cloud-native technologies (AWS/GCP)

     

    Will be a plus:

     

    • Experience with Business Intelligence software (Looker Studio)
    • Experience with billing systems, enterprise financial reporting, subscription monetization products
    • Experience of supporting product and marketing data analytics

     

    We offer:

     

    • Remote-First culture: we provide a flexible working schedule, and you can work anywhere in the world
    • Health care program: we provide health insurance, sports compensation, and 20 paid sick days
    • Professional Development: the company provides a budget for each employee for courses, trainings, and conferences
    • Personal Equipment Policy: we provide all the equipment necessary for your work; for Ukrainian employees we also provide an EcoFlow
    • Vacation Policy: each employee has 20 paid vacation days plus extra days on the occasion of special events
    • Knowledge sharing: we are glad to share our knowledge and experience at our internal events
    • Corporate Events: we organize corporate events and team-building activities across our hubs
  • · 28 views · 4 applications · 4h

    Senior Data Engineer (Python) to $7000

    Full Remote · Ukraine, Poland, Portugal, Romania, Bulgaria · 5 years of experience · Upper-Intermediate

    Who we are:

     

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product: 
    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

     

    About the Role:
    As a data engineer, you'll have end-to-end ownership, from system architecture and software development to operational excellence.

     

    Key Responsibilities:
    ● Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution (an illustrative DAG sketch follows this list).

    ● Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.

    ● Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.

    ● Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.

    ● Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
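
    As a sketch only, assuming a recent Airflow 2.x release with the TaskFlow API and dynamic task mapping (the task bodies are hypothetical placeholders), parallel execution of the kind the first bullet describes can look like this:

    ```python
    # Minimal Airflow 2.x sketch of a fan-out ML pipeline; task logic is hypothetical.
    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
    def ml_pipeline():
        @task
        def list_segments() -> list:
            # Stand-in for discovering the data partitions to train on.
            return ["segment_a", "segment_b", "segment_c"]

        @task
        def train(segment: str) -> str:
            # Stand-in for the real training step; runs once per segment, in parallel.
            return f"model_for_{segment}"

        @task
        def publish(models: list) -> None:
            print("publishing", models)

        # .expand() maps `train` over the segments so they execute concurrently.
        publish(models=train.expand(segment=list_segments()))

    ml_pipeline()
    ```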

     

    Required Competence and Skills:
    To excel in this role, candidates should possess the following qualifications and experiences:

    ● A Bachelor's or higher in Computer Science, Software Engineering, or a closely related technical field, demonstrating strong analytical and coding skills.

    ● At least 5 years of experience as a data engineer, software engineer, or a similar role, using data to drive business results.

    ● At least 5 years of experience with Python, building modular, testable, and production-ready code.

    ● Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).

    ● Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).

    ● A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.

    ● Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.

     

    Nice-to-Haves

    ● Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.

    ● Familiarity with API development frameworks (e.g., FastAPI).

    ● Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).

    ● Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).

     

    We provide full accounting and legal support in all countries where we operate.

     

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

     

    We offer a highly competitive package with yearly performance and compensation reviews.

  • · 12 views · 0 applications · 5h

    System Engineer

    Office Work · Ukraine (Kyiv) · Product · 5 years of experience · MilTech 🪖

    Airlogix is a company specializing in the production of innovative products in the field of unmanned aerial vehicles (UAVs). Right now our main mission is Ukraine's victory as soon as possible, which is why we are looking for talented and creative professionals who are ready to put all their energy and resources, together with us, into achieving this important goal. We are young and ambitious, always looking ahead and working toward victory.

    We are now delighted to invite a System Engineer to join our team.

    We would be glad to see in your background:

    • 6+ years of experience in aviation, UAVs, or the space industry;
    • 3+ years of experience in mechanical design or electronics development;
    • solid knowledge of modeling and analysis (FEA, CFD, thermal, trajectory simulation);
    • hands-on experience with SolidWorks, CATIA, and Autodesk Inventor;
    • experience managing engineering teams of 5+ people.

    Your responsibilities will include:

    • technical leadership of projects as a Lead Engineer;
    • ensuring the product development process is followed on the projects under your control;
    • defining the technical requirements for the product being developed;
    • developing product concepts and 3D models of parts and assemblies;
    • preparing technical design documentation: explanatory notes for the preliminary, draft, and detailed design stages, technical specifications, etc.;
    • taking part in design documentation reviews as a reviewer;
    • as a plus, mentoring Middle- and Senior-level specialists.

    What we offer:

    • official employment with a fully declared salary, annual paid vacation, and sick leave;
    • a generator, a bomb shelter, and stable internet;
    • the opportunity to influence both the company's direction and our country's victory;
    • decent pay, reviewed at least once a year;
    • comprehensive support from management, the team, and government bodies;
    • working hours: 08:00-17:00 or 09:00-18:00, Mon-Fri;
    • draft deferral for candidates with valid military registration documents.

    Interested? We look forward to your applications and will be glad to work with you on the same team.

    *Please note that CV review takes up to 10 business days. If we have not contacted you within this period, it means we are not ready to invite you to an interview at this time. However, we will keep your CV in our candidate database so that in the future we can offer you other career opportunities relevant to your experience.

  • · 13 views · 1 application · 9h

    Senior Python Data Engineer (only Ukraine)

    Ukraine · Product · 6 years of experience · Upper-Intermediate

    The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.

     

    Requirements:

     

    • At least 5 years of experience with Python.
    • At least 3 years of experience processing structured terabyte-scale data (at minimum, structured datasets of several hundred gigabytes).
    • Solid experience with SQL and NoSQL storage (ideally GCP storage: Firestore, BigQuery, Bigtable, and/or Redis, Kafka), including advanced DML skills.
    • Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc.); an illustrative query sketch follows this list.
    • Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark).
    • Experience in automated test creation (TDD).
    • Fluent spoken English.
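
    For illustration only: a minimal sketch of querying one of the OLAP stores named above, assuming the google-cloud-bigquery client library; the project, dataset, and table are hypothetical.

    ```python
    # Hypothetical BigQuery aggregation; google-cloud-bigquery is assumed installed.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project id

    sql = """
        SELECT user_id, COUNT(*) AS events
        FROM `my-project.analytics.events`  -- hypothetical table
        WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
        GROUP BY user_id
        ORDER BY events DESC
        LIMIT 100
    """

    for row in client.query(sql).result():
        print(row.user_id, row.events)
    ```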

       

    Advantages:

     

    • Not being afraid of mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although ML knowledge is not required for the current position, it would be awesome if you felt some passion for algorithms).
    • Experience in any OOP language.
    • Experience in DevOps (familiarity with Docker and Kubernetes).
    • Experience with GCP services would be a plus.
    • Experience with IaC would be a plus.
    • Experience in Scala.

     

    What we offer:

    • 20 working days' vacation;
    • 10 paid sick leaves;
    • public holidays;
    • equipment;
    • an accountant who helps with documents;
    • many cool team activities.

     

    Apply now and open a new chapter of fast career growth with us!

  • · 20 views · 2 applications · 10h

    Senior Data Engineer to $7200

    Full Remote · Ukraine, Poland · Product · 5 years of experience · Upper-Intermediate

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product:

    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

     

    About the Role:

    As a data engineer, you'll have end-to-end ownership, from system architecture and software development to operational excellence.

     

    Key Responsibilities:

    • Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
    • Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
    • Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
    • Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift (an illustrative validation sketch follows this list).
    • Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
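
    A minimal, hand-rolled sketch of the kind of validation and drift check the bullet above describes, assuming pandas; the schema, thresholds, and column names are hypothetical (a real pipeline might use a tool such as Great Expectations instead).

    ```python
    # Hypothetical incoming-batch validation; pandas is assumed available.
    import pandas as pd

    EXPECTED_COLUMNS = {"sku", "price", "observed_at"}

    def validate_batch(df: pd.DataFrame, reference_mean: float,
                       drift_tolerance: float = 0.2) -> list:
        """Return a list of human-readable issues; an empty list means the batch passes."""
        issues = []
        missing = EXPECTED_COLUMNS - set(df.columns)
        if missing:
            return [f"missing columns: {sorted(missing)}"]
        if df["price"].isna().mean() > 0.01:
            issues.append("more than 1% of prices are null")
        # Crude drift check: flag a batch whose mean moved too far from the reference.
        drift = abs(df["price"].mean() - reference_mean) / reference_mean
        if drift > drift_tolerance:
            issues.append(f"price mean drifted {drift:.1%} from the reference")
        return issues
    ```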

     

    Required Competence and Skills:

    • A Bachelor's or higher in Computer Science, Software Engineering, or a closely related technical field, demonstrating strong analytical and coding skills.
    • At least 5 years of experience as a data engineer, software engineer, or a similar role, using data to drive business results.
    • At least 5 years of experience with Python, building modular, testable, and production-ready code.
    • Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
    • Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
    • A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
    • Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.

     

    Nice-to-Have:

    • Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
    • Familiarity with API development frameworks (e.g., FastAPI).
    • Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
    • Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
  • · 32 views · 2 applications · 1d

    Data Engineer

    Full Remote · Ukraine, Poland, Romania, Spain, Portugal · 3 years of experience · Intermediate

    We're expanding our team and looking for a skilled Data Engineer to join a team working with our American client.

    We're looking for a specialist with solid experience and understanding of big data technologies. In this role, you will be responsible for ingesting a high volume and variety of enterprise-level data and transforming it into outputs to accelerate decision-making.  
     
    Requirements: 

    - BS+ in computer science or equivalent experience 

    - 3+ years of experience in Data Engineering 

    - 3+ years of experience with Python 

    - Strong experience with AWS stack: Glue, Athena, EMR Serverless, Kinesis, Redshift, Lambda, Step Functions, Data Migration Service (DMS) 

    - Experience with Spark, PySpark, Iceberg, Delta Lake, Aurora DB, DynamoDB.

     

    Nice to have:  

    - Experience with data modeling and managing high-volume data transformation jobs with strict timing requirements; 

    - AWS CodePipeline, Beanstalk, Azure DevOps, CloudFormation; 

    - Strong ability to collaborate with cross-functional teams, including communicating effectively with people of varying levels of technical knowledge; 

    - Readiness to learn new technologies. 

     

    Responsibilities: 

    - Setting up data imports from external data sources (DB, CSV, API), as sketched after this list; 

    - Building highly scalable pipelines to process high-volume data for reporting and analytics consumption; 

    - Designing data assets that support experimental and organizational processes, and are efficient and easy to work with; 

    - Close cooperation with engineers, data scientists, product managers, and business teams to make sure data products are aligned with organizational needs. 
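
    For illustration only: a minimal PySpark sketch of the CSV-import step mentioned above, with hypothetical bucket names and paths.

    ```python
    # Hypothetical CSV-to-Parquet ingest; PySpark is assumed available.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("csv_ingest").getOrCreate()

    raw = (
        spark.read.option("header", True)
        .option("inferSchema", True)
        .csv("s3://example-bucket/incoming/*.csv")  # hypothetical source
    )

    cleaned = (
        raw.dropDuplicates(["id"])  # assumes the source carries an `id` column
        .withColumn("ingest_date", F.current_date())
    )

    # Date-partitioned Parquet keeps downstream Athena/Redshift scans cheap.
    cleaned.write.mode("append").partitionBy("ingest_date").parquet(
        "s3://example-bucket/curated/events/"
    )
    ```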

     
     
    What we offer: 

    - Paid training programs and English/Spanish language courses; 

    - Medical insurance, sports program compensation, pet care and other benefits compensation program, which can be selected by each employee according to personal preferences; 

    - Comfortable working hours; 

    - Awesome team events and a wide variety of knowledge sharing opportunities. 


     

  • · 21 views · 2 applications · 1d

    Data Engineer with Palantir experience

    Full Remote · Ukraine, Poland, Portugal, Romania, Spain · 3 years of experience · Upper-Intermediate

    We are seeking a skilled and adaptable Data Engineer with Palantir experience, or a strong willingness to learn Palantir technology, who is passionate about data infrastructure and long-term career growth. This role offers an opportunity to build and maintain scalable data solutions while developing expertise in Palantir Foundry and other modern data tools. We value individuals who are excited to expand their technical capabilities over time, work on multiple accounts, and contribute to a dynamic and growing team.

    You will play a pivotal role in transforming raw data from various sources into structured, high-quality data products that drive business decisions. The ideal candidate should be motivated to learn and grow within the organization, actively collaborating with experienced engineers to strengthen our data capabilities over time.


    About the project

    This project focuses on building a centralized data platform for a leading investment firm that supports data-driven decision-making for high-growth companies. Currently, data is sourced from multiple locations, including Excel files, third-party tools, and custom applications, managed within separate systems. This decentralized approach creates inefficiencies and introduces the potential for data inaccuracies.

    The objective is to integrate these data sources into a single, unified platform that streamlines access and reduces manual errors. By transforming financial, legal, and operational data into structured data marts, the platform will enable advanced analytics and real-time visualization through BI tools on both web and mobile interfaces.

    Skills & Experience

    • Bachelor's degree in Computer Science, Software Engineering, or equivalent experience.
    • Minimum 3 years of experience in Python, SQL, and data engineering processes.
    • Experience with Palantir Foundry or a strong willingness to learn and develop expertise in it.
    • Proficiency in multiple database systems, such as PostgreSQL, Redis, and a data warehouse like Snowflake, including query optimization.
    • Hands-on experience with Microsoft Azure services.
    • Strong problem-solving skills and experience with data pipeline development.
    • Familiarity with testing methodologies (unit and integration testing).
    • Docker experience for containerized data applications.
    • Collaborative mindset, capable of working across multiple teams and adapting to new projects over time.
    • Fluent in English (written & verbal communication).
    • Curiosity and enthusiasm for finance-related domains (personal & corporate finance, investment concepts).

    Nice to have

    • Experience with Databricks.
    • Experience with Snowflake.
    • Background in wealth management, investment analytics, or financial modeling.
    • Contributions to open-source projects or personal projects showcasing data engineering skills.

    Responsibilities

    • Design and maintain scalable data pipelines to ingest, transform, and optimize data.
    • Collaborate with cross-functional teams (engineering, product, and business) to develop solutions that address key data challenges.
    • Support data governance, data quality, and security best practices.
    • Optimize data querying and processing for efficiency and cost-effectiveness.
    • Work with evolving technologies to ensure our data architecture remains modern and adaptable.
    • Contribute to a culture of learning and knowledge sharing, supporting newer team members in building their skills.
    • Grow into new roles within the company by expanding your technical expertise and working on diverse projects over time.

    We are looking for individuals who want to be part of a long-term, growing team: people who may not have all the skills today but are eager to bridge the gap and build their expertise alongside experienced engineers. If you're excited about building your data muscles and growing in your career, we'd love to hear from you!

  • · 20 views · 2 applications · 1d

    Senior Architect Data Engineer

    Full Remote · Ukraine, Romania, Portugal, Poland, Spain · 5 years of experience · Upper-Intermediate

    Tech stack: Palantir Foundry, Microsoft Azure, Azure DataLake, Azure App Service, SQL, Spark, Databricks, Python, FastAPI, Pandas, Streamlit, GitHub Actions, OpenAI, LLMs


    About the role

    We are seeking a Senior Architect Data Engineer to lead the design and evolution of our Palantir Foundry-based data platform for a finance-focused initiative. This role goes beyond building data pipelines: you will own the data architecture end-to-end, mentor other engineers, and shape the technical roadmap for transforming scattered raw data into robust, analytics-ready products.

    You will collaborate directly with leadership and product teams to understand strategic data needs, set technical direction, and ensure our infrastructure scales with business growth. Your expertise will be crucial in driving innovation across ingestion, transformation, quality, and real-time access layers of our modern data stack.

    This is a hands-on leadership role for someone who thrives on both solving complex problems and empowering others to grow.


    About the project

    This project focuses on building a centralized data platform for a leading investment firm that supports data-driven decision-making for high-growth companies. Currently, data is sourced from multiple locations, including Excel files, third-party tools, and custom applications, managed within separate systems. This decentralized approach creates inefficiencies and introduces the potential for data inaccuracies.

    The objective is to integrate these data sources into a single, unified platform that streamlines access and reduces manual errors. By transforming financial, legal, and operational data into structured data marts, the platform will enable advanced analytics and real-time visualization through BI tools on both web and mobile interfaces.

    Location
    Remote: LATAM / Poland / Europe / Ukraine

     

    Skills & Experience

    • Possess a Bachelor's degree in Computer Science or Software Engineering, or demonstrate substantial experience as a seasoned app developer.
    • Demonstrate a minimum of 8 years of proficiency in Python, SQL, and the data systems development life cycle.
    • Experience with Palantir Foundry.
    • Advanced experience using different kinds of databases (for example, PostgreSQL, BigQuery, Redis), including query and optimization techniques; an illustrative plan-check sketch follows this list.
    • Display a comprehensive understanding of and practical experience with Google Cloud services.
    • Profound experience with data pipeline testing methodologies.
    • Hands-on experience working with Docker.
    • Proven background in collaborative efforts with product managers and fellow engineers, particularly within distributed multicultural teams.
    • An excellent command of the English language, both written and verbal.
    • Outstanding communication skills, coupled with a sense of humor, and a keen interest in the domains of personal and corporate finance.
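
    As an illustration of the query-optimization point above (a sketch only, assuming psycopg2 and a hypothetical DSN and table):

    ```python
    # Hypothetical query-plan check in PostgreSQL; psycopg2 is assumed installed.
    import psycopg2

    conn = psycopg2.connect("dbname=platform user=app host=localhost")
    with conn, conn.cursor() as cur:
        cur.execute(
            "EXPLAIN ANALYZE SELECT * FROM positions WHERE fund_id = %s", (42,)
        )
        for (line,) in cur.fetchall():
            print(line)  # each row is one line of the execution plan
    conn.close()
    ```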

    Nice to have

    • Experience with LLM integrations (OpenAI, LangChain) and prompt engineering.
    • Experience with Databricks.
    • Experience with Microsoft Azure.
    • Knowledge of the financial domain and understanding of wealth management, investment concepts.
    • Contributions to open-source projects, personal projects, or thought leadership showcasing data engineering skills.
    • Experience influencing data strategy in financial services or investment tech platforms.

    Responsibilities

    • Collaborate with business leaders to align data architecture with strategic goals.
    • Lead end-to-end implementation of complex data pipelines and integrations.
    • Design, enforce, and implement data governance, lineage tracking, and access policies across environments.
    • Review and improve engineering standards, mentoring engineers and reviewing critical code paths.
    • Proactively identify tech debt, bottlenecks, and opportunities for optimization or automation.
    • Drive adoption of emerging technologies in the modern data stack.
    • Represent the data engineering function in cross-functional discussions and roadmap planning.
    • Stay up-to-date with the latest trends and technologies in the data engineering field (Modern Data Stack) and propose improvements to the existing architecture.
  • · 24 views · 1 application · 1d

    Data Engineer

    Ukraine · 4 years of experience · Upper-Intermediate

    About the Role:

    As a Data platform engineer, you will design, develop, and build the core stream processing platform, collaborating with cross-functional teams, including product managers and other engineering teams, to deliver end-to-end solutions.

     

    Key Responsibilities:

    • Participate in the design, development, and deployment of scalable and reliable data processing pipelines.
    • Implement robust developer and testing infrastructure to streamline development workflows and ensure high-quality code.
    • Stay current with the latest technologies and industry trends, evaluating and integrating new tools and methodologies as appropriate.
    • Work closely with development, operations, and other teams to ensure alignment and collaboration.
    • Demonstrate strong debugging, documentation, and communication skills.
    • Communicate effectively, both verbally and in writing, to technical and non-technical audiences.

       

    Required Skills and Experience:

    • 4+ years of experience in large-scale software development with a specific focus on data processing.
    • Strong proficiency with large-scale data processing technologies like Apache Flink, Apache Spark, Kafka, and Kinesis (an illustrative consumer sketch follows this list).
    • Proficiency in Java and Python.
    • Comfortable dealing with distributed system complexity.
    • Experience in relational data models and databases.
    • Experience with SQL queries and optimization.
    • Experience with GitHub tooling (actions, workflows, repositories).
    • Familiarity with CI/CD pipelines and automation tools.
    • Problem-solving and troubleshooting skills.
    • Strong communication and collaboration abilities.
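
    A minimal consumer sketch for the Kafka item above, assuming the confluent-kafka client; the broker address, topic, and group id are hypothetical.

    ```python
    # Hypothetical Kafka consumer loop; confluent-kafka is assumed installed.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # hypothetical broker
        "group.id": "data-platform-dev",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["events"])  # hypothetical topic

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                print("consumer error:", msg.error())
                continue
            # A real pipeline would deserialize, validate, and route the record here.
            print(msg.topic(), msg.partition(), msg.value())
    finally:
        consumer.close()
    ```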

       

    Bonus Points:

    • Experience building or designing database systems.
    • Contributions to open-source projects (especially related to Flink, Kafka, or Spark).
    • Proficiency with containerization and orchestration technologies (Docker, Kubernetes).
    • Proficiency with cloud platforms (AWS, GCP, or Azure).
    • Proficiency with Golang.
    • Understanding of communication protocols (REST, gRPC) and how to use them when building microservices.
    • Proficiency with ANTLR or other compiler tools.
    • Knowledge of security best practices and compliance standards.

     

    We offer:

    • IT Club membership
    • 18 days of vacation + 8 days of paid state holidays
    • Health insurance
    • Compensation for language trainings 
    • Compensation for educational courses, training, and certificates
    • Compensation for sport activities
    • Mentorship program
    • Employee recognition program with awards
    • Running club
    • Reading club
    • Cozy and pet-friendly office
    • Weekly sweets & fruits office days
    • Corporate bookshelf
    • Office relax zone with PS4, VR, table games, table tennis, aero hockey, mini football table.

     

    Are you interested? We would be glad to receive your CV.

  • · 12 views · 1 application · 1d

    Data Solutions Architect

    Full Remote · Countries of Europe or Ukraine · 8 years of experience · Advanced/Fluent

    The Data and Analytics practice, part of the Technology Office, is a team of high-end experts in data strategy, data governance, and data platforms, and it contributes to shaping the future of data platforms for our customers. As a Data Solutions Architect, you will play a crucial role in designing and implementing data solutions for our clients and in developing and executing the Data and Analytics practice within the company.


    Responsibilities:

    • Client Engagement: Demonstrate deep expertise in Data Platform Modernization to build credibility with prospective clients. Share relevant customer stories and proof points
    • Requirement Analysis: Engage clients to understand their needs, scope projects, and define solutions. Articulate business benefits, use cases, and create roadmaps to achieve client goals
    • Presales Support: Assist the sales team in creating customer proposals, including Statements of Work (SOWs) and responses to RFPs. Participate in project scoping calls
    • Opportunity Identification: Identify opportunities to upgrade and optimize clients' data platforms to meet modern demands
    • Implementation Support: Play a key role in overseeing Data Platform modernization implementation projects. Ensure the solution meets client requirements and provide best practice guidance
    • Architectural Guidance: Provide strategic guidance and support in architecting and planning the implementation of modern data platforms for clients
    • Technology Assessment: Stay current with the latest developments in modern data platforms and apply this knowledge to client projects 
    • Stakeholder Collaboration: Participate in meetings with internal and external stakeholders, including Delivery Managers, Account Managers, Client Partners, and Delivery Teams, as well as ecosystem partners
    • Practice Development: Co-create and develop the Data & Analytics practice within the company, focusing on data services and consulting offerings


    Requirements:

    • Proven Experience: Demonstrated track record in leveraging Databricks Data Intelligence and/or Snowflake in major projects.
    • Architectural Expertise: Over 10 years of progressive experience from data engineer to Architect role, designing, building, and maintaining complex solutions
    • Solution Design: Expert knowledge and experience applying data concepts like Data Mesh, Data Fabric, Data Warehouse, Data Lake, Data Lakehouse, from design to implementation
    • Sales Acumen: Worked on implementation/proposals as a Solution Architect responsible for designing solutions end to end
    • Cloud Data Knowledge: Strong knowledge of the Cloud Data Platform landscape, including vendors and offerings across domains such as Cloud Data Platform foundations, migration and modernization, and data intelligence
    • Continuous Learning: Passion, curiosity and desire to learn what is new in the data platforms market, modern technology trends and data stacks
    • Communication Skills: Ability to present technical ideas in business-friendly language. Excellent English verbal and written communication skills are a must.
    • Travel: Willingness to travel on business as required


    Why Join Us:

    • Innovative Projects: Work on cutting-edge data platform modernization projects with leading industry clients.
    • Professional Growth: Opportunities for continuous learning and professional development.
    • Collaborative Environment: Join a team of passionate experts dedicated to delivering excellence.
    • Competitive Compensation: Attractive salary and benefits package.


    We offer:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits
  • · 48 views · 0 applications · 1d

    Trainee/Junior Data Engineer/BigData Engineer

    Office Work · Ukraine (Lviv) · Upper-Intermediate

    Inforce is a Software Development Company that provides a full range of top-quality IT services. Our mission is to develop first-class applications and Websites to provide our clients with the best solutions for maximizing their profits and converting their ideas into reality.

     

    Responsibilities:

    • Assist in designing and building data pipelines.

    • Support database management tasks.

    • Learn and contribute to the automation of data processes.

    • Collaborate with the team on various data-driven projects.

     

    Requirements:

    • Good spoken English

    • Basic proficiency in Python, SQL, and data processing frameworks.
    • Basic knowledge of PySpark and Airflow.
    • Eagerness to learn and adapt to new technologies.
    • Strong analytical and problem-solving skills.
    • Excellent communication and teamwork abilities.

       

     

    We offer:

    - Competitive salary

    - Interesting and challenging projects

    - Future career growth opportunities

    - Paid sick leave and vacation (counted in working days)

    - A friendly team of professionals

    - Delicious coffee, biscuits, and tea for your good mood

    - The company covers 50% of the cost of courses you need

    - Exciting team-building activities and corporate parties

    - Office in the city center

  • · 63 views · 1 application · 1d

    Junior Data Engineer IRC262233

    Full Remote · Ukraine · 1 year of experience · Upper-Intermediate

    Description

    The Digital Health organization is a technology team focused on next-generation Digital Health capabilities that deliver on the Medicine mission and vision to deliver Insight Driven Care. This role will operate within the Digital Health Applications & Interoperability subgroup of the broader Digital Health team, focused on patient engagement, care coordination, AI, healthcare analytics, and interoperability, amongst other advanced technologies that enhance our product portfolio with new services while improving clinical & patient experiences.

     

    The project is a cloud-based PaaS ecosystem built with a privacy-by-design approach to provide a centralized cloud platform to store, classify, and control access to federated datasets in a scalable, secure, and efficient manner.

    The ecosystem will allow Customer Operating Units (medical device departments) to store federated data sets of varying sizes and formats and to control access to those data sets through data steward(s). Source data sets can be exposed to qualified use cases and workflows through different project types.

    The Healthcare Data Platform ecosystem will provide ML/AI project capabilities for streamlined development processes and an ML/AI workbench to enhance data exploration, wrangling, and model training.

    In the queue: 15+ OUs. At the moment the data platform is working with the Neuro, Cardio, and Diabetes OUs, but more OUs could come up with requirements in the future.

    GL role: to work on enhancing current capabilities, including taking over the work that the AWS ProServe team is doing, and to develop new requirements that will keep coming from different OUs in the future.

    Requirements

    Python, Data Engineering, Data Lake or Lakehouse, Apache Iceberg (nice to have), Parquet

    Good communication skills, pro-active/initiative

     

     MUST HAVE

    • AWS Platform: Working experience with AWS data technologies, including S3, AWS RDS, Lake Formation
    • Programming Languages: Strong programming skills in Python
    • Data Formats: Experience with JSON, XML and other relevant data formats
    • CI/CD Tools: Ability to deploy using established CI/CD pipelines using GitLab CI, Jenkins, Terraform or similar tools
    • Scripting and automation: experience in a scripting language such as Python, PowerShell, etc.
    • Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, Splunk, ELK, Dynatrace, Prometheus
    • Source Code Management: Expertise with GitLab
    • Documentation: Experience with markdown and in particular Antora for creating technical documentation

     

    NICE TO HAVE

    • Previous Healthcare or Medical Device experience
    • Experience implementing enterprise-grade cybersecurity & privacy by design in software products
    • Experience working in Digital Health software
    • Experience developing global applications
    • Strong understanding of SDLC; experience with Agile methodologies
    • Software estimation
    • Experience leading software development teams onshore and offshore
    • Experience with FHIR

     

    Job responsibilities

    KEY RESPONSIBILITIES

    • Implement data pipelines using AWS services such as AWS Glue, Lambda, Kinesis, etc. (an illustrative sketch follows this list)
    • Implement integrations between the data platform and systems such as Atlan, Trino/Starburst, etc
    • Complete logging and monitoring tasks through AWS and Splunk toolsets
    • Develop and maintain ETL processes to ingest, clean, transform and store healthcare data from various sources
    • Optimize data storage solutions using Amazon S3, AWS RDS, Lake Formation and other AWS technologies.
    • Document, configure, and maintain systems specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.
    • Participate in planning system and deployment activities, and be responsible for meeting compliance and security standards.
    • Actively identify system functionality or performance deficiencies, execute changes to existing systems, and test functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
    • Document testing and maintenance of system updates, modifications, and configurations.
    • Leverage platform process expertise to assess whether existing standard platform functionality will solve a business problem or a customized solution would be required.
    • Test the quality of a product and its ability to perform a task or solve a problem.
    • Perform basic maintenance and performance optimization procedures in each of the primary operating systems.
    • Ensure system implementation complies with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001).
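
    For illustration only: a minimal Lambda-style sketch of the pipeline bullet above, assuming boto3; the bucket name and event shape are hypothetical.

    ```python
    # Hypothetical Lambda handler landing records in the raw zone; boto3 assumed.
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Normalize an incoming record and store it in the data lake's raw zone."""
        record = json.loads(event["body"])  # hypothetical API Gateway-style event
        key = f"raw/{record['source']}/{record['id']}.json"
        s3.put_object(
            Bucket="example-health-data-lake",  # hypothetical bucket
            Key=key,
            Body=json.dumps(record).encode("utf-8"),
        )
        return {"statusCode": 200, "body": json.dumps({"stored": key})}
    ```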
  • · 13 views · 0 applications · 1d

    DWH Oracle Developer

    Full Remote · Ukraine · Product · 3 years of experience
    ΠšΠΎΠΌΠΏΠ°Π½Ρ–Ρ, Ρ‰ΠΎ ΡΠΏΠ΅Ρ†Ρ–Π°Π»Ρ–Π·ΡƒΡ”Ρ‚ΡŒΡΡ Π½Π° Ρ€ΠΎΠ·Ρ€ΠΎΠ±Ρ†Ρ– ΠΏΠ»Π°Ρ‚Ρ„ΠΎΡ€ΠΌΠΈ ΠΊΡ€ΠΈΠΏΡ‚ΠΎ-Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΡ–Ρ‡Π½ΠΎΠ³ΠΎ Ρ‚Ρ€Π΅ΠΉΠ΄ΠΈΠ½Π³Ρƒ. Π„ інструмСнти, які Π²ΠΎΠ½ΠΈ Π·Ρ€ΠΎΠ±ΠΈΠ»ΠΈ ΠΏΠ΅Ρ€ΡˆΠΈΠΌΠΈ Ρƒ світі. Π’ΠΎΠ½ΠΈ Π²Ρ–Ρ€ΡΡ‚ΡŒ Π² Ρ—Ρ… Ρ†Ρ–Π½Π½Ρ–ΡΡ‚ΡŒ Ρ– Π½Π΅ΠΎΠ±Ρ…Ρ–Π΄Π½Ρ–ΡΡ‚ΡŒ Ρ€ΠΈΠ½ΠΊΡƒ. НСобхідні Π½Π°Π²ΠΈΡ‡ΠΊΠΈ β€’ Π’ΠΏΠ΅Π²Π½Π΅Π½Ρ– знання Π΄Π΅Π½ΠΎΡ€ΠΌΠ°Π»Ρ–Π·ΠΎΠ²Π°Π½ΠΈΡ… ΠΌΠΎΠ΄Π΅Π»Π΅ΠΉ Π΄Π°Π½ΠΈΡ…. β€’...

    ΠšΠΎΠΌΠΏΠ°Π½Ρ–Ρ, Ρ‰ΠΎ ΡΠΏΠ΅Ρ†Ρ–Π°Π»Ρ–Π·ΡƒΡ”Ρ‚ΡŒΡΡ Π½Π° Ρ€ΠΎΠ·Ρ€ΠΎΠ±Ρ†Ρ– ΠΏΠ»Π°Ρ‚Ρ„ΠΎΡ€ΠΌΠΈ ΠΊΡ€ΠΈΠΏΡ‚ΠΎ-Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΡ–Ρ‡Π½ΠΎΠ³ΠΎ Ρ‚Ρ€Π΅ΠΉΠ΄ΠΈΠ½Π³Ρƒ. Π„ інструмСнти, які Π²ΠΎΠ½ΠΈ Π·Ρ€ΠΎΠ±ΠΈΠ»ΠΈ ΠΏΠ΅Ρ€ΡˆΠΈΠΌΠΈ Ρƒ світі. Π’ΠΎΠ½ΠΈ Π²Ρ–Ρ€ΡΡ‚ΡŒ Π² Ρ—Ρ… Ρ†Ρ–Π½Π½Ρ–ΡΡ‚ΡŒ Ρ– Π½Π΅ΠΎΠ±Ρ…Ρ–Π΄Π½Ρ–ΡΡ‚ΡŒ Ρ€ΠΈΠ½ΠΊΡƒ.

    НСобхідні Π½Π°Π²ΠΈΡ‡ΠΊΠΈ
    β€’ Π’ΠΏΠ΅Π²Π½Π΅Π½Ρ– знання Π΄Π΅Π½ΠΎΡ€ΠΌΠ°Π»Ρ–Π·ΠΎΠ²Π°Π½ΠΈΡ… ΠΌΠΎΠ΄Π΅Π»Π΅ΠΉ Π΄Π°Π½ΠΈΡ…. 

    β€’ ΠžΡ€Π³Π°Π½Ρ–Π·Π°Ρ†Ρ–Ρ Ρ– ΠΏΡ–Π΄Ρ‚Ρ€ΠΈΠΌΠΊΠ° DWH, розуміння сСгмСнтації
    β€’ Розуміння рСляційних Π‘Π”, нормалізація, посилкова Ρ†Ρ–Π»Ρ–ΡΠ½Ρ–ΡΡ‚ΡŒ Ρ– Ρ‚.Π΄..
    β€’ НалагодТСння Ρ€ΠΎΠ±ΠΎΡ‚ΠΈ Oracle Advanced Queuing
    β€’ Навички застосування всіх Π²ΠΈΠ΄Ρ–Π² Π·'Ρ”Π΄Π½Π°Π½ΡŒ Ρ– Π³Π»ΠΈΠ±ΠΎΠΊΡ– знання PL/SQL
    β€’ ΠΠΊΡƒΡ€Π°Ρ‚Π½Ρ–ΡΡ‚ΡŒ Ρƒ написанні ΠΊΠΎΠ΄Ρƒ Π²Ρ–Π΄ΠΏΠΎΠ²Ρ–Π΄Π½ΠΎ Π΄ΠΎ стандартів ΠšΠΎΠΌΠΏΠ°Π½Ρ–Ρ—
    β€’ Застосування Ρ€Ρ–Π·Π½ΠΈΡ… ΠΏΡ–Π΄Ρ…ΠΎΠ΄Ρ–Π² Π΄ΠΎ ΠΎΠΏΡ‚ΠΈΠΌΡ–Π·Π°Ρ†Ρ–Ρ— ΠΊΠΎΠ΄Ρƒ, Ρ€ΠΎΠ±ΠΎΡ‚Π° Ρ–Π· ΠΏΠ»Π°Π½ΠΎΠΌ Π·Π°ΠΏΠΈΡ‚Ρƒ
    β€’ Новички DBA Π½Π° Π±Π°Π·ΠΎΠ²ΠΎΠΌΡƒ Ρ€Ρ–Π²Π½Ρ–
    β€’ Π”ΠΎΠ΄Π°Ρ‚ΠΊΠΎΠ²ΠΎ - досвід ΠΏΠΎΠ±ΡƒΠ΄ΠΎΠ²ΠΈ OLAP систСм
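
    For illustration of the query-plan bullet (a sketch only, assuming the python-oracledb driver; the connection details and query are hypothetical):

    ```python
    # Hypothetical query-plan inspection in Oracle; python-oracledb is assumed.
    import oracledb

    conn = oracledb.connect(user="app", password="secret", dsn="localhost/XEPDB1")
    with conn.cursor() as cur:
        cur.execute(
            "EXPLAIN PLAN FOR "
            "SELECT * FROM trades WHERE trade_date = DATE '2024-01-01'"
        )
        cur.execute("SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())")
        for (line,) in cur:
            print(line)  # one line of the formatted execution plan per row
    conn.close()
    ```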

    ΠžΠ±ΠΎΠ²β€™ΡΠ·ΠΊΠΈ
    β€’ Π ΠΎΠ±ΠΎΡ‚Π° Π· SQL, Oracle PL/SQL. β€’ ETL Π±Ρ–Ρ€ΠΆΠ΅Π²ΠΈΡ… Π΄Π°Π½ΠΈΡ…
    β€’ Написання SQL-скриптів Ρ€Ρ–Π·Π½ΠΎΠ³ΠΎ ступСня складності
    β€’ ΠŸΡ€ΠΎΠ΅ΠΊΡ‚ΡƒΠ²Π°Π½Π½Ρ Π΄Π΅Π½ΠΎΡ€ΠΌΠ°Π»Ρ–Π·ΠΎΠ²Π°Π½ΠΈΡ… ΠΌΠΎΠ΄Π΅Π»Π΅ΠΉ Π΄Π°Π½ΠΈΡ…
    β€’ Π ΠΎΠ±ΠΎΡ‚Π° Π· Π²Π΅Π»ΠΈΠΊΠΈΠΌΠΈ обсягами Π΄Π°Π½ΠΈΡ… (ΠΌΠ»Ρ€Π΄ записів)
    β€’ ΠŸΠΎΠ±ΡƒΠ΄ΠΎΠ²Π° Π²Ρ–Ρ‚Ρ€ΠΈΠ½ Π΄Π°Π½ΠΈΡ…, ΠΌΠ°Ρ‚Π°Ρ€ΠΈΠ°Π»ΡŒΠ·ΠΎΠ²Π°Π½ΠΈΡ… ΠΏΡ€Π΅Π΄ΡΡ‚Π°Π²Π»Π΅Π½ΡŒ, Ρ‚ΠΎΡ‰ΠΎ
    β€’ ΠžΠΏΡ‚ΠΈΠΌΡ–Π·Π°Ρ†Ρ–Ρ Π·Π°ΠΏΠΈΡ‚Ρ–Π² Ρ‚Π° адаптація Ρ—Ρ… ΠΏΡ–Π΄ навантаТСння
    β€’ ΠœΡ–Π³Ρ€Π°Ρ†Ρ–Ρ Π΄Π°Π½ΠΈΡ…, Ρ€ΠΎΠ±ΠΎΡ‚Π° Π· транспортним Ρ€Ρ–Π²Π½Π΅ΠΌ MQ
    β€’ Π ΠΎΠ·Ρ€ΠΎΠ±ΠΊΠ° Ρ– ΠΏΡ–Π΄Ρ‚Ρ€ΠΈΠΌΠΊΠ° систСм Ρ€Π΅ΠΏΠΎΡ€Ρ‚ΠΈΠ½Π³Ρƒ

    Π‘Ρ‚Π΅ΠΊ ΠΏΡ€ΠΎΠ΄ΡƒΠΊΡ‚Ρƒ
    AWS, Docker, CI/CDGitlab, Java, Spring, Hibernate, RESTfulAPI, C#, .Net5.0, Blazor, Python, Oracle, Telegram API, Grafana, Excel Data Access, Prometheus, Confluence, Jira.

    We offer
    • The opportunity to grow in a team of professionals.
    • Decent pay.
    • Work with modern technologies and a high-load information system.
    • Work format: remote or office (a modern, tech-equipped office 15 minutes from the Pozniaky metro station).
    • No pointless bureaucracy or routine.

    We value your qualities
    • Engagement.
    • Constructive energy.
    • The drive to achieve results.

    Our rules
    • Respect and honesty, and the accountability that follows from them.
    • Punctuality.
    • Your professional and financial growth is an essential component of our company's success.

  • · 50 views · 9 applications · 2d

    Data Engineer

    Full Remote · Ukraine, Poland, Bulgaria, Germany, Spain · 3 years of experience · Intermediate

    We're expanding our team and looking for a skilled Data Engineer to join a team working with our American client.

    We're looking for a specialist with solid experience and understanding of big data technologies. In this role, you will be responsible for ingesting a high volume and variety of enterprise-level data and transforming it into outputs to accelerate decision-making.  
     
    Requirements: 
     

    • BS+ in computer science or equivalent experience;
    • 3+ years of experience in Data Engineering; 
    • 3+ years of experience with Python;
    • Strong experience with AWS stack: Glue, Athena, EMR Serverless, Kinesis, Redshift, Lambda, Step Functions, Data Migration Service (DMS); 
    • Experience with Spark, PySpark, Iceberg, Delta Lake, Aurora DB, DynamoDB.

     

    Nice to have:  
     

    • Experience with data modeling and managing high-volume data transformation jobs with strict timing requirements; 
    • AWS CodePipeline, Beanstalk, Azure DevOps, CloudFormation; 
    • Strong ability to collaborate with cross-functional teams, including communicating effectively with people of varying levels of technical knowledge; 
    • Readiness to learn new technologies. 

     

    Responsibilities: 
     

    • Setting up data imports from external data sources (DB, CSV, API); 
    • Building highly scalable pipelines to process high-volume data for reporting and analytics consumption; 
    • Designing data assets that support experimental and organizational processes, and are efficient and easy to work with; 
    • Close cooperation with engineers, data scientists, product managers, and business teams to make sure data products are aligned with organizational needs. 

     
     
    What we offer: 

     

    • Paid training programs and English/Spanish language courses; 
    • Medical insurance, sports program compensation, pet care and other benefits compensation program, which can be selected by each employee according to personal preferences; 
    • Comfortable working hours; 
    • Awesome team events and a wide variety of knowledge sharing opportunities. 

     
