Jobs

72
  • · 25 views · 0 applications · 20d

    Senior Data Engineer (Data Science/MLOps Background)

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    Our Client is seeking a proactive Senior Data Engineer to join their team.

     

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry.

    Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives.

    The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.

     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
    • Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
    • Support the implementation of basic MLOps practices, such as model versioning and monitoring.
    • Assist in optimizing data pipelines to improve machine learning workflows.
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
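    The performance-monitoring bullet above can be sketched in a framework-agnostic way: time each pipeline stage and flag the slowest one as the first optimization candidate. A minimal illustration in plain Python (all names and the toy three-stage pipeline are hypothetical; a real Foundry/Spark pipeline would rely on the platform's built-in job metrics):

```python
import time
from typing import Callable, Dict, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable]], records: list) -> Tuple[list, Dict[str, float]]:
    """Run records through named stages, recording per-stage wall-clock latency."""
    timings: Dict[str, float] = {}
    for name, fn in stages:
        start = time.perf_counter()
        records = fn(records)
        timings[name] = time.perf_counter() - start
    return records, timings

def bottleneck(timings: Dict[str, float]) -> str:
    """Name of the slowest stage -- the first candidate for optimization."""
    return max(timings, key=timings.get)

# Hypothetical three-stage pipeline: extract -> clean -> aggregate.
stages = [
    ("extract", lambda _: [{"region": "EU", "amount": 10},
                           {"region": "EU", "amount": 5},
                           {"region": "US", "amount": None}]),
    ("clean", lambda rows: [r for r in rows if r["amount"] is not None]),
    ("aggregate", lambda rows: [{"region": "EU",
                                 "total": sum(r["amount"] for r in rows if r["region"] == "EU")}]),
]
result, timings = run_pipeline(stages, [])
print(result)  # [{'region': 'EU', 'total': 15}]
```

    The same idea scales up: persist the per-run timings and alert when a stage's latency drifts from its baseline.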

     

    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript

     

    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Familiarity with MLOps concepts, including model deployment and monitoring;
    • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch;
    • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
    • Experience working with feature engineering and data preparation for machine learning models;
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.
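    The feature-engineering and data-preparation requirement above mostly comes down to transformations such as scaling numeric columns and encoding categorical ones. A minimal sketch in plain Python (toy data; in practice this would be done with PySpark or scikit-learn equivalents):

```python
from typing import Dict, List

def min_max_scale(values: List[float]) -> List[float]:
    """Scale numeric values into [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in values]

def one_hot(values: List[str]) -> List[Dict[str, int]]:
    """One-hot encode a categorical column with a stable (sorted) category order."""
    categories = sorted(set(values))
    return [{c: int(v == c) for c in categories} for v in values]

print(min_max_scale([20, 30, 40]))      # [0.0, 0.5, 1.0]
print(one_hot(["red", "blue", "red"]))  # [{'blue': 0, 'red': 1}, {'blue': 1, 'red': 0}, {'blue': 0, 'red': 1}]
```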

     

    Nice to have:

    • Certification in Cloud platforms, or related areas;
    • Experience with the Apache Lucene search engine and REST web service APIs;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

     

    Company offers:

    • Flexible working format – remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
  • · 33 views · 2 applications · 20d

    Data Engineer TL / Poland

    EU · 4 years of experience · Upper-Intermediate

    On behalf of our customer, we are seeking a DataOps Team Lead to join our global R&D department.

     

    Our customer is an innovative technology company led by data scientists and engineers devoted to mobile app growth. They focus on solving the key challenge of growth for mobile apps by building Machine Learning and Big Data-driven technology that can both accurately predict what apps a user will like and connect them in a compelling way. 

    We are looking for a data-centric, quality-driven team leader focused on data process observability: someone passionate about building high-quality data products and processes, as well as supporting production data processes and ad-hoc data requests.

    As a DataOps TL, you will be in charge of the quality of service as well as the quality of the data and knowledge platform for all data processes. You’ll coordinate with stakeholders and play a major role in driving the business by promoting the quality and stability of the data performance and lifecycle, and by giving the operational groups immediate ability to affect the daily business outcomes.

     

    Responsibilities:

    • Process monitoring - managing and monitoring the daily data processes; troubleshooting server and process issues, escalating bugs and documenting data issues.
    • Ad-hoc operation configuration changes - Be the extension of the operations side into the data process; use Airflow and Python scripting alongside SQL to extract specific client-relevant data points and calibrate certain aspects of the process.
    • Data quality automation - Creating and maintaining data quality tests and validations using Python code and testing frameworks.
    • Metadata store ownership - Creating and maintaining the metadata store; managing the metadata system which holds metadata on tables, columns, calculations and lineage; participating in the design and development of the knowledge-base metastore and UX, in order to be the pivotal point of contact when information is needed on tables, columns and how they are connected. I.e., What is the data source? What is it used for? Why are we calculating this field in this manner?
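    The data-quality-automation bullet above can be illustrated with a few standalone validation checks over rows modeled as dictionaries. A minimal sketch in plain Python (all names and the sample rows are hypothetical; in practice these checks would be wired into a testing framework such as pytest and run against real tables):

```python
from typing import Any, Dict, List

Row = Dict[str, Any]

def check_not_null(rows: List[Row], column: str) -> List[int]:
    """Indices of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows: List[Row], column: str) -> List[Any]:
    """Values of `column` that appear more than once."""
    counts: Dict[Any, int] = {}
    for r in rows:
        counts[r.get(column)] = counts.get(r.get(column), 0) + 1
    return [v for v, n in counts.items() if n > 1]

def check_in_range(rows: List[Row], column: str, lo: float, hi: float) -> List[int]:
    """Indices of rows whose non-null `column` value falls outside [lo, hi]."""
    return [i for i, r in enumerate(rows)
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

rows = [
    {"id": 1, "score": 0.9},
    {"id": 2, "score": 1.4},   # out of range
    {"id": 2, "score": None},  # duplicate id, null score
]
assert check_not_null(rows, "score") == [2]
assert check_unique(rows, "id") == [2]
assert check_in_range(rows, "score", 0.0, 1.0) == [1]
```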

       

    Requirements:

    • Over 2 years in a leadership role within a data team.
    • Over 3 years of hands-on experience as a Data Engineer, with strong proficiency in Python and Airflow.
    • Solid background in working with both SQL and NoSQL databases and data warehouses, including but not limited to MySQL, Presto, Athena, Couchbase, MemSQL, and MongoDB.
    • Bachelor’s degree or higher in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical discipline.
    • Highly organized with a proactive mindset.
    • Strong service orientation and a collaborative approach to problem-solving.

       

    Nice to have skills:

    • Previous experience as a NOC or DevOps engineer is a plus.
    • Familiarity with PySpark is considered an advantage.

       

    What we can offer you

    • Remote work from Poland, flexible working schedule
    • Accounting support & consultation
    • Opportunities for learning and developing on the project
    • 20 working days of annual vacation
    • 5 paid sick leave days / days off; state holidays
    • Working equipment provided
  • · 52 views · 12 applications · 19d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    About the Role:
     

    We are seeking a Senior Data Engineer with deep expertise in distributed data processing and cloud-native architectures. This is a unique opportunity to join a forward-thinking team that values technical excellence, innovation, and business impact. You will be responsible for designing, building, and maintaining scalable data solutions that power critical business decisions in a fast-paced B2C environment.

     

    Responsibilities:
     

    • Design, develop, and maintain robust ETL/ELT data pipelines using Apache Spark and AWS Glue
    • Build Zero-ETL pipelines using AWS services such as Kinesis Firehose, Lambda, and SageMaker
    • Write clean, efficient, and well-tested code primarily in Python and SQL
    • Collaborate with data scientists, analysts, and product teams to ensure timely and accurate data delivery
    • Optimize data workflows for performance, scalability, and cost-efficiency
    • Integrate data from various sources (structured, semi-structured, and unstructured)
    • Implement monitoring, alerting, and logging to ensure data pipeline reliability
    • Contribute to data governance, documentation, and compliance efforts
    • Work in an agile environment, participating in code reviews, sprint planning, and team ceremonies
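    The "clean, efficient, and well-tested code primarily in Python and SQL" bullet above can be made concrete with a tiny ETL step: load raw events, then express the transform in SQL. A minimal, self-contained sketch using Python's standard-library sqlite3 as a stand-in for a warehouse (table and column names are hypothetical):

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "purchase", 9.99), (1, "purchase", 5.00),
     (2, "refund", -5.00), (2, "purchase", 20.00)],
)

# Transform step expressed in SQL: revenue per user, purchases only.
revenue = conn.execute(
    """
    SELECT user_id, ROUND(SUM(amount), 2) AS revenue
    FROM events
    WHERE event = 'purchase'
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()
print(revenue)  # [(1, 14.99), (2, 20.0)]
```

    In production the same shape appears with Spark SQL or AWS Glue jobs instead of sqlite3, with the transform kept in version-controlled, tested SQL.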
       

    Expected Qualifications:
     

    • 5+ years of professional experience in data engineering
    • Advanced proficiency in Apache Spark, Python, and SQL
    • Hands-on experience with AWS Glue, Kinesis Firehose, and Zero-ETL pipelines
    • Familiarity with AWS Lambda and SageMaker for serverless processing and ML workflows
    • Experience with ETL orchestration and transformation tools such as Airflow or dbt
    • Solid understanding of cloud computing concepts, especially within AWS
    • Strong problem-solving skills and the ability to work independently and collaboratively
    • Experience working in B2C companies or data-rich product environments
    • Degree in Computer Science or related field (preferred but not required)
    • Bonus: Exposure to JavaScript and data science workflows
  • · 137 views · 34 applications · 17d

    Jnr/Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 1 year of experience · Upper-Intermediate

    Position responsibilities:
    - Migrate clients' data from other solutions to the Jetfile data model (mostly MS SQL)
    - Write custom reports that will be used inside the Jetfile application, in the form of custom SQL queries, reports, and dashboards
    - Analyze and optimize performance on large data loads
    - Create migrations for Jetfile internal products

     

    Must have:
    - Bachelor's degree in Computer Science, Engineering, or related field
    - Ability to work independently and remotely
    - 1 year of experience
    - Strong SQL skills
    - Experience with business application development


    Nice to have:
    - Leadership experience
    - Azure knowledge
    - ERP, accounting, fintech or insurance tech experience

  • · 35 views · 1 application · 16d

    Technical Lead/Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    Project Description:

    • As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
      As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments, e.g., Risk, FP&A, Tax, Order to Cash, the best possible platform for all of their Analytics, Reporting & Data needs.
      Collaborating closely with a talented team of engineers and product managers, you'll lead the delivery of features that meet the evolving needs of our business on time. You will be responsible for conceptualizing, designing, building and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real-time.
      If you're passionate about owning end-to-end solution delivery, thinking ahead, and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
       

      Responsibilities:

      Strategy and Project Delivery
      ● Together with the business Subject Matter Experts and Product Manager, conceptualize, define, shape and deliver the roadmap for achieving the company's priorities and objectives
      ● Lead business requirement gathering sessions and translate them into an actionable delivery backlog for the team to build
      ● Lead technical decisions in the process to achieve excellence and contribute to organizational goals.
      ● Lead the D&A teams in planning and scheduling the delivery process, including defining project scope, milestones, risk mitigation and timeline management, including allocating tasks to team members and ensuring that the project stays on track.
      ● Take full responsibility for ensuring the D&A teams deliver new products on time; set up processes and operational plans from end to end, e.g., collecting user requirements, designing, building & testing the solution, and ops maintenance
      ● Act as a technical leader with strategic thinking for the team and the organization: a visionary who can deliver strategic projects and products for the organization.
      ● Own the data engineering processes, architecture across the teams
      Technology, Craft & Delivery
      ● Experience in designing and architecting data engineering frameworks, dealing with high volume of data
      ● Experience in large scale data processing and workflow management
      ● Mastery in technology leadership
      ● Engineering delivery, quality and practices within own team
      ● Participating in defining, shaping and delivering the wider engineering strategic objectives
      ● Ability to get into the technical detail (where required) to provide technical coaching, support and mentoring to the team
      ● Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
      ● Learn new technologies and keep abreast of existing technologies to be able to share learnings and apply these to a variety of projects when needed
       

      Mandatory Skills Description:

      Role Qualifications and Requirements:
      ● Bachelor's degree
      ● At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced and complex environment to deliver complex projects or products on time and with demonstrable positive results.
      ● 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3 and Kubernetes
      ● Solid working experience working with Data engineering platforms involving languages like PySpark, Python or other equivalent scripting languages
      ● Experience working with cloud data platforms and public cloud providers such as Snowflake and AWS
      ● Experience working in complex, multi-stakeholder organizations
      ● A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions.
      ● Strong technical skills (Coding & System design) with ability to get hands-on with your team when needed
      ● Excellent communicator with strong stakeholder management experience, good commercial awareness and technical vision
      ● You have driven successful technical, business and people related initiatives that improved productivity, performance and quality
      ● You are a humble and thoughtful technology leader, you lead by example and gain your teammates' respect through actions, not the title
      ● Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams

  • · 30 views · 11 applications · 15d

    Senior Data Engineer

    Full Remote · Azerbaijan, Brazil, Colombia, Kazakhstan · 5 years of experience · Upper-Intermediate

    We are seeking a proactive Senior Data Engineer to join our vibrant team.

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges. 


    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency. 
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.


    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript


    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.


    Nice to have:

    • Certification in Cloud platforms, or related areas;
    • Experience with the Apache Lucene search engine and REST web service APIs;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.


    We offer:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits
  • · 16 views · 1 application · 15d

    Senior Data Engineer with Snowflake

    Full Remote · Ukraine · 7 years of experience · Upper-Intermediate
    • Project Description:


      As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
      As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments, e.g., Risk, FP&A, Tax, Order to Cash, the best possible platform for all of their Analytics, Reporting & Data needs.
      Collaborating closely with a talented team of engineers and product managers, you'll lead the delivery of features that meet the evolving needs of our business on time. You will be responsible for conceptualizing, designing, building and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real-time.
      If you're passionate about owning end-to-end solution delivery, thinking ahead, and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
       

    • Responsibilities:

      Strategy and Project Delivery
      ● Together with the business Subject Matter Experts and Product Manager, conceptualize, define, shape and deliver the roadmap for achieving the company's priorities and objectives
      ● Lead business requirement gathering sessions and translate them into an actionable delivery backlog for the team to build
      ● Lead technical decisions in the process to achieve excellence and contribute to organizational goals.
      ● Lead the D&A teams in planning and scheduling the delivery process, including defining project scope, milestones, risk mitigation and timeline management, including allocating tasks to team members and ensuring that the project stays on track.
      ● Take full responsibility for ensuring the D&A teams deliver new products on time; set up processes and operational plans from end to end, e.g., collecting user requirements, designing, building & testing the solution, and ops maintenance
      ● Act as a technical leader with strategic thinking for the team and the organization: a visionary who can deliver strategic projects and products for the organization.
      ● Own the data engineering processes, architecture across the teams
      Technology, Craft & Delivery
      ● Experience in designing and architecting data engineering frameworks, dealing with high volume of data
      ● Experience in large scale data processing and workflow management
      ● Mastery in technology leadership
      ● Engineering delivery, quality and practices within own team
      ● Participating in defining, shaping and delivering the wider engineering strategic objectives
      ● Ability to get into the technical detail (where required) to provide technical coaching, support and mentoring to the team
      ● Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
      ● Learn new technologies and keep abreast of existing technologies to be able to share learnings and apply these to a variety of projects when needed
       

    • Mandatory Skills Description:

      Role Qualifications and Requirements:
      ● Bachelor degree
      ● At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced and complex environment to deliver complex projects or products on time and with demonstrable positive results.
      ● 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3 and Kubernetes
      ● Solid working experience working with Data engineering platforms involving languages like PySpark, Python or other equivalent scripting languages
      ● Experience working with public cloud providers such as Snowflake, AWS
      ● Experience to work in a complex stakeholders' organizations
      ● A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions.
      ● Strong technical skills (Coding & System design) with ability to get hands-on with your team when needed
      ● Excellent communicator with strong stakeholder management experience, good commercial awareness and technical vision
      ● You have driven successful technical, business and people related initiatives that improved productivity, performance and quality
      ● You are a humble and thoughtful technology leader, you lead by example and gain your teammates' respect through actions, not the title
      ● Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams
       

    • Languages:
      • English: B2 Upper Intermediate
    More
  • Β· 72 views Β· 2 applications Β· 15d

    Middle Strong/Senior Data Engineer

    Full Remote Β· Ukraine Β· 2 years of experience Β· Upper-Intermediate
    Our mission at Geniusee is to help businesses thrive through tech partnership and strengthen the engineering community by sharing knowledge and creating opportunities Our values are Continuous Growth, Team Synergy, Taking Responsibility, Conscious...

    Our mission at Geniusee is to help businesses thrive through tech partnership and strengthen the engineering community by sharing knowledge and creating opportunities 🌿Our values are Continuous Growth, Team Synergy, Taking Responsibility, Conscious Openness and Result Driven. We offer a safe, inclusive and productive environment for all team members, and we’re always open to feedbackπŸ’œ
    If you want to work from home or work in the city center of Kyiv, great β€” apply right now.

    About the project:
    Generative AI technologies are rapidly changing how digital content is created and consumed. However, many of these systems are trained on vast amounts of data, including articles, videos, and other creative worksβ€”often without the knowledge or consent of the original creators. As a result, publishers, journalists, and content producers face the risk of losing both visibility and critical revenue streams such as advertising, subscriptions, and licensing.

    Our project addresses this issue by developing a system that allows AI platforms to identify when specific content has influenced a generated result. This enables transparent attribution and the possibility for content creators to receive compensation based on how often their work is used. The goal is to build a sustainable ecosystem where creators are fairly rewarded, while AI-generated content remains trustworthy and ethically grounded.

    Requirements:
    ● 3+ years of experience in Data Engineering;
    ● Solid Python programming skills, especially in data processing and system automation;
    ● Proven experience with Airflow, Kubeflow, or Kafka for orchestrating data workflows;
    ● Familiarity with search engine concepts and indexing;
    ● Experience working with structured and semi-structured web data (HTML, JSON, APIs);
    ● Ability to work with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure);
    ● English: Upper-Intermediate+.
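One requirement above, familiarity with search-engine concepts and indexing, can be illustrated with a toy inverted index in plain Python (the documents and query below are invented for the example):

```python
from collections import defaultdict

# Toy inverted index: maps each token to the set of document ids containing it.
# Real search engines add tokenization, stemming and ranking on top of this idea.
docs = {
    1: "generative ai content attribution",
    2: "content creators and ai platforms",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.split():
        index[token].add(doc_id)

# A query is answered by intersecting the posting sets of its tokens.
def search(query):
    postings = [index[t] for t in query.split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("ai content"))  # documents containing both tokens -> [1, 2]
```

This is only a sketch of the indexing concept; production systems store postings on disk and score results rather than returning raw id lists.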

    What you will get:
    ● Competitive salary and good compensation package;
    ● Exciting, challenging and stable startup projects with a modern stack;
    ● Corporate English course;
    ● Ability to practice English and communication skills through permanent interaction with clients from all over the world;
    ● Professional study compensation, online courses and certifications;
    ● Career development opportunity, semi-annual and annual salary review process;
    ● Necessary equipment to perform work tasks;
    ● VIP medical insurance or sports coverage;
    ● Informal and friendly atmosphere;
    ● The ability to focus on your work: a lack of bureaucracy and micromanagement;
    ● Flexible working hours (start your day between 8:00 and 11:30);
    ● Team buildings, corporate events;
    ● Paid vacation (18 working days) and sick leaves;
    ● Cozy offices in 2 cities ( Kyiv & Lviv ) with electricity and Wi-Fi (Generator & Starlink);
    ● Compensation for coworking (except for employees from Kyiv and Lviv);
    ● Corporate lunch + soft skills clubs;
    ● Unlimited work from home from anywhere in the world (remote);
    ● Geniusee has its own charity fund.




     

  • Β· 23 views Β· 0 applications Β· 15d

    Technical Lead/Senior Data Engineer

    Full Remote Β· Ukraine Β· 7 years of experience Β· Upper-Intermediate
    Project Description: As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right...
    • Project Description:

      As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
      As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments, e.g., Risk, FP&A, Tax, Order to Cash, the best possible platform for all of their Analytics, Reporting & Data needs.
      Collaborating closely with a talented team of engineers and product managers, you'll lead the on-time delivery of features that meet the evolving needs of our business. You will be responsible for conceptualizing, designing, building and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real time.
      If you're passionate about owning end-to-end solution delivery, planning for the future and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
       

    • Responsibilities:

      Strategy and Project Delivery
      ● Together with the business Subject Matter Experts and Product Manager, conceptualize, define, shape and deliver the roadmap for achieving the company's priorities and objectives
      ● Lead business-requirement gathering sessions and translate them into an actionable delivery backlog for the team to build
      ● Lead technical decisions in the process to achieve excellence and contribute to organizational goals
      ● Lead the D&A teams in planning and scheduling the delivery process: defining project scope, milestones, risk mitigation and timelines, allocating tasks to team members and ensuring the project stays on track
      ● Take full responsibility for successful delivery by the D&A teams: shipping new products on time and setting up processes and operational plans end to end, e.g., collecting user requirements, designing, building and testing the solution, and Ops maintenance
      ● Act as a technical leader with strategic thinking for the team and the organization; a visionary who can deliver strategic projects and products for the organization
      ● Own the data engineering processes and architecture across the teams
      Technology, Craft & Delivery
      ● Experience in designing and architecting data engineering frameworks, dealing with high volume of data
      ● Experience in large scale data processing and workflow management
      ● Mastery in technology leadership
      ● Engineering delivery, quality and practices within own team
      ● Participating in defining, shaping and delivering the wider engineering strategic objectives
      ● Ability to get into the technical detail (where required) to coach, support and mentor the team
      ● Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
      ● Learn new technologies and keep abreast of existing technologies to be able to share learnings and apply these to a variety of projects when needed
       

       

    • Mandatory Skills Description:

      Role Qualifications and Requirements:
      ● Bachelor's degree
      ● At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced and complex environment to deliver complex projects or products on time and with demonstrable positive results.
      ● 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3 and Kubernetes
      ● Solid hands-on experience with data engineering platforms, using PySpark, Python or equivalent scripting languages
      ● Experience working with public cloud providers and cloud data platforms such as AWS and Snowflake
      ● Experience working in complex, multi-stakeholder organizations
      ● A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions.
      ● Strong technical skills (Coding & System design) with ability to get hands-on with your team when needed
      ● Excellent communicator with strong stakeholder management experience, good commercial awareness and technical vision
      ● You have driven successful technical, business and people related initiatives that improved productivity, performance and quality
      ● You are a humble and thoughtful technology leader who leads by example and gains teammates' respect through actions, not title
      ● Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams
       

       

    • Languages:
      • English: B2 Upper Intermediate
  • Β· 35 views Β· 1 application Β· 15d

    Data Engineer 2070/06 to $5500

    Office Work Β· Poland Β· 3 years of experience Β· Upper-Intermediate
    Our partner is a leading programmatic media company, specializing in ingesting large volumes of data, modeling insights, and offering a range of products and services across Media, Analytics, and Technology. Among their clients are well-known brands such...

    Our partner is a leading programmatic media company, specializing in ingesting large volumes of data, modeling insights, and offering a range of products and services across Media, Analytics, and Technology. Among their clients are well-known brands such as Walmart, Barclaycard, and Ford.

     

    The company has expanded to over 700 employees, with 15 global offices spanning four continents. With the imminent opening of a new office in Warsaw, we are seeking experienced Data Engineers to join their expanding team.

     

    The Data Engineer will be responsible for developing, designing, and maintaining end-to-end optimized, scalable Big Data pipelines for our products and applications. In this role, you will collaborate closely with team leads across various departments and receive support from peers and experts across multiple fields.

     

    Opportunities:

     

    • Possibility to work in a successful company
    • Career and professional growth
    • Competitive salary
    • Hybrid work model (3 days per week work from office space in the heart of Warsaw city)
    • Long-term employment with 20 working days of paid vacation, sick leaves, and national holidays

     

    Responsibilities:

     

    • Follow and promote best practices and design principles for Big Data ETL jobs
    • Help in technological decision-making for the business’s future data management and analysis needs by conducting POCs
    • Monitor and troubleshoot performance issues on data warehouse/lakehouse systems
    • Provide day-to-day support of data warehouse management
    • Assist in improving data organization and accuracy
    • Collaborate with data analysts, scientists, and engineers to ensure best practices in terms of technology, coding, data processing, and storage technologies
    • Ensure that all deliverables adhere to our world-class standards

     

    Skills:

     

    • 3+ years of overall experience in Data Warehouse development and database design
    • Deep understanding of distributed computing principles
    • Experience with AWS cloud platform, and big data platforms like EMR, Databricks, EC2, S3, Redshift
    • Experience with Spark, PySpark, Hive, Yarn, etc.
    • Experience in SQL and NoSQL databases, as well as experience with data modeling and schema design
    • Proficiency in programming languages such as Python for implementing data processing algorithms and workflows
    • Experience with Presto and Kafka is a plus
    • Experience with DevOps practices and tools for automating deployment, monitoring, and management of big data applications is a plus
    • Excellent communication, analytical, and problem-solving skills
    • Knowledge of scalable service architecture
    • Experience in scalable data processing jobs on high-volume data
    • Self-starter, proactive, and able to work to deadlines
    • Nice to have: Experience with Scala

     

    If you are looking for an environment where you can grow professionally, learn from the best in the field, balance work and life, and enjoy a pleasant and enthusiastic atmosphere, submit your CV today and become part of our team!

    Everything you do will help us lead the programmatic industry and make it better.

  • Β· 22 views Β· 4 applications Β· 14d

    Data Engineering Team Lead

    Poland Β· 5 years of experience Β· Upper-Intermediate
    About Us We are a leading Israeli IT company with 15 years of market experience and 8 years in Ukraine. Officially registered in Ukraine, Israel, and Estonia, we employ over 100 professionals worldwide. Specializing in successful startup collaboration,...

    About Us

    We are a leading Israeli IT company with 15 years of market experience and 8 years in Ukraine. Officially registered in Ukraine, Israel, and Estonia, we employ over 100 professionals worldwide. Specializing in successful startup collaboration, we offer services across e-commerce, Fintech, logistics, and healthcare.
    Our client is a leading mobile app company that depends on high-volume, real-time data pipelines to drive user acquisition and engagement. This role is instrumental in maintaining data reliability, supporting production workflows, and enabling operational agility across teams. This is a hands-on leadership role that requires deep technical expertise, an ownership mindset, and strong collaboration with engineering and business stakeholders.

    Key Requirements:

    πŸ”Ή 5+ years of experience in data engineering, with strong hands-on expertise in building and maintaining data pipelines;
    πŸ”Ή At least 2 years in a team leadership or technical lead role;
    πŸ”Ή Proficient in Python, SQL, and data orchestration tools such as Airflow;
    πŸ”Ή Experience with both SQL and NoSQL databases, such as MySQL, Presto, Couchbase, MemSQL, or MongoDB;
    πŸ”Ή Bachelor’s degree in Computer Science, Engineering, or a related field;
    πŸ”Ή English – Upper-Intermediate or higher.

    Will be plus:

    πŸ”Ή Background in NOC or DevOps environments is a plus;
    πŸ”Ή Familiarity with PySpark is an advantage.

    What you will do:

    πŸ”Ή Oversee daily data workflows, troubleshoot failures, and escalate critical issues to ensure smooth and reliable operations;
    πŸ”Ή Use Python, SQL, and Airflow to configure workflows, extract client-specific insights, and adjust live processes as needed;
    πŸ”Ή Build and maintain automated data validation and testing frameworks to ensure data reliability at scale;
    πŸ”Ή Own and evolve the metadata system, maintaining table lineage, field definitions, and data usage context to support a unified knowledge platform;
    πŸ”Ή Act as the primary point of contact for operational teams and stakeholders, ensuring consistent collaboration and high data quality across the organization.
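The automated data-validation duty above can be sketched, at its simplest, as batch-level checks in plain Python (the field names, sample rows and thresholds here are hypothetical, not from the posting):

```python
# Minimal sketch of an automated data-validation check of the kind the role
# describes: flag fields whose null ratio exceeds a threshold for a batch.
def validate_rows(rows, required_fields, max_null_ratio=0.1):
    """Return a list of human-readable failures for a batch of dict rows."""
    failures = []
    if not rows:
        return ["batch is empty"]
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        ratio = nulls / len(rows)
        if ratio > max_null_ratio:
            failures.append(f"{field}: {ratio:.0%} nulls exceeds {max_null_ratio:.0%}")
    return failures

batch = [
    {"user_id": 1, "event": "install"},
    {"user_id": 2, "event": None},
    {"user_id": None, "event": "click"},
]
print(validate_rows(batch, ["user_id", "event"]))
```

In practice such checks would run as tasks inside the Airflow workflows mentioned above, with failures routed to alerting rather than printed.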

    Interview stages:

    πŸ”Ή HR Interview;
    πŸ”Ή Pro-Interview;
    πŸ”Ή Technical Interview;
    πŸ”Ή Final Interview;
    πŸ”Ή Reference Check;
    πŸ”Ή Offer.

    Why Join Us?

    πŸ”Ή Be part of a friendly international team, working together on interesting global projects;
    πŸ”Ή Enjoy many chances to grow, learn from mentors, and work on projects that make a real difference;
    πŸ”Ή Join a team that loves fresh ideas and supports creativity and new solutions;
    πŸ”Ή Work closely with clients, building great communication skills and learning directly from their needs;
    πŸ”Ή Thrive in a workplace that values your needs, offering flexibility and a good balance between work and life.

  • Β· 39 views Β· 0 applications Β· 14d

    Lead Data Engineer (ETL)

    Full Remote Β· Ukraine, Poland Β· 5 years of experience Β· Upper-Intermediate
    Description: Our Client is the Enterprise Worldwide Company. The product you will be working with, provides management and data processing/handling capabilities for networks of the clients scientific lab equipment such as microscopes, etc. The main...

    Description:
     

    Our Client is a worldwide enterprise company. The product you will be working with provides management and data-processing/handling capabilities for networks of the client's scientific lab equipment, such as microscopes. The main goals are:

    – Collection and centralized management of data outputs (measurement results, etc.) provided by client devices
    – Utilization of outdated data
    – Managing large volumes of data acquired from measurement devices in the cloud, securely and reliably
    – Seamless sharing of measurement data with collaborators
    – The ability to share measurement results and accelerate customer service
     

    Requirements:
     

    We are looking for a Lead Data Engineer with at least 6 years of commercial experience in the development of data platforms for enterprise applications, with experience leading a team of engineers and taking responsibility for the technical solution.

    – Proficiency in Airflow for workflow orchestration, dbt for data transformation, and SQL for data querying and manipulation.
    – Experience in data modeling, ETL (Extract, Transform, Load) processes, and data warehousing concepts.
    – Familiarity with cloud platforms (AWS) and their data services.
    – Excellent analytical and problem-solving skills with meticulous attention to detail.
    – Strong communication and collaboration skills with the ability to lead and motivate cross-functional teams.
    – Good to have: the ability to participate in onsite meetings.
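To make the ETL terms in these requirements concrete, here is a toy Extract-Transform-Load round trip in plain Python, with SQLite standing in for the warehouse (table and column names are invented; a real pipeline would orchestrate this with Airflow and dbt as noted above):

```python
import sqlite3

# Toy ETL round trip: extract raw rows, transform (clean + aggregate),
# load into a warehouse table. SQLite stands in for the real warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_measurements (device TEXT, value REAL)")
conn.executemany(
    "INSERT INTO raw_measurements VALUES (?, ?)",
    [("microscope-a", 1.5), ("microscope-a", 2.5), ("microscope-b", None)],
)

# Transform: drop nulls and aggregate per device; Load into a clean table.
conn.execute(
    """
    CREATE TABLE device_daily AS
    SELECT device, COUNT(*) AS n, AVG(value) AS avg_value
    FROM raw_measurements
    WHERE value IS NOT NULL
    GROUP BY device
    """
)
print(conn.execute("SELECT * FROM device_daily").fetchall())
# [('microscope-a', 2, 2.0)]
```

The null row is filtered out during the transform step, which is the kind of data-quality decision this role would own.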

     

    Job responsibilities:
     

    β€’ Implement new solutions in the current system, both by refactoring existing code and by building from scratch;
    β€’ Prepare technical documentation;
    β€’ Participate in client meetings to understand business and user requirements and estimate tasks;
    β€’ Collaborate closely with other engineers, product owners and testers to identify and solve challenging problems;
    β€’ Take part in defect investigation, bug fixing and troubleshooting.

  • Β· 44 views Β· 5 applications Β· 13d

    Power BI Developer to $3000

    Full Remote Β· Countries of Europe or Ukraine Β· 4 years of experience Β· Upper-Intermediate
    We’re implementing a Microsoft-first analytics stack, designed to integrate data from Google Forms, ESRI ArcGIS / Survey123, and other HTTP-based sources into OneLake (Microsoft Fabric), with insights delivered through Power BI and access controlled via...

    We’re implementing a Microsoft-first analytics stack, designed to integrate data from Google Forms, ESRI ArcGIS / Survey123, and other HTTP-based sources into OneLake (Microsoft Fabric), with insights delivered through Power BI and access controlled via Microsoft 365 roles.

    As a Power BI Engineer, you’ll own the end-to-end data pipelineβ€”from ingestion to visualization. You’ll be responsible for building connectors, modeling data in OneLake, and delivering fast, accurate, and secure dashboards.

     

    Key Responsibilities

    • Develop and maintain Dataflows, Pipelines, and Power Query connectors for various sources including Google Forms, ArcGIS REST, Survey123, CSV/JSON, and other HTTP-based feeds
    • Design efficient OneLake tables and implement star-schema models for Power BI reporting
    • Deliver high-quality executive dashboards and enable self-service analytics for internal users
    • Optimize dataset refresh, manage incremental data loads, and configure DirectQuery/Import modes
    • Implement and manage row-level and role-based security, integrated with Microsoft 365 group permissions
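Outside Fabric itself, the ingestion-and-modeling steps above can be sketched in plain Python: the payload below mimics the shape of an ArcGIS REST `features` response (in practice fetched over HTTP), and the table split illustrates the star-schema idea of a dimension plus a fact table (all names are hypothetical):

```python
# Hypothetical ArcGIS-style REST payload; a real connector would fetch this
# over HTTP from the service's query endpoint.
payload = {
    "features": [
        {"attributes": {"site": "Kyiv", "metric": "pm25", "value": 12}},
        {"attributes": {"site": "Lviv", "metric": "pm25", "value": 9}},
        {"attributes": {"site": "Kyiv", "metric": "no2", "value": 4}},
    ]
}

# Star-schema split: a small dimension of sites plus a fact table keyed by it.
sites = sorted({f["attributes"]["site"] for f in payload["features"]})
dim_site = {name: i for i, name in enumerate(sites)}          # dimension table
fact_readings = [
    (dim_site[a["site"]], a["metric"], a["value"])            # fact rows
    for f in payload["features"]
    for a in [f["attributes"]]
]
print(dim_site)        # {'Kyiv': 0, 'Lviv': 1}
print(fact_readings)   # [(0, 'pm25', 12), (1, 'pm25', 9), (0, 'no2', 4)]
```

In the actual stack this split would be expressed as Dataflows landing OneLake tables, with Power BI relationships joining fact to dimension.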

     

    Required Skills & Experience

    • 4+ years of hands-on experience with Power BI development
    • Strong knowledge of Microsoft Fabric and OneLake
    • Experience building custom or REST-based Power Query connectors
    • Proficiency in SQL for data modeling and performance optimization
    • Practical experience with security models in Power BI, including row-level security and M365 role-based access
    • Upper-intermediate or higher English for daily communication with international clients

     

    Why Join Us?

    Work on modern, mission-driven data solutions using cutting-edge Microsoft tools. Enjoy the freedom of remote work, a supportive team, and real ownership of your work.

  • Β· 28 views Β· 3 applications Β· 12d

    Data Engineer

    Full Remote Β· Poland Β· 4 years of experience Β· Upper-Intermediate
    Who we are: Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. About the Product: Our client, Harmonya, develops an AI-powered product...

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product: 

    Our client, Harmonya, develops an AI-powered product data enrichment, insights, and attribution platform for retailers and brands. Its proprietary technology processes millions of online product listings, extracting valuable insights from titles, descriptions, ingredients, consumer reviews, and more.

    Harmonya builds robust tools to help uncover insights about the consumer drivers of market performance, improve assortment and merchandising, categorize products, guide product innovation, and engage target audiences more effectively.

     

    About the Role: 
    We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

     

    Key Responsibilities: 

    • Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
    • Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
    • Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
    • Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
    • Apply best practices for data security, integrity, and performance across all systems.
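The proactive-monitoring duty above usually reduces to freshness and volume checks on each pipeline run; a minimal stdlib sketch (the run records and thresholds are made up for illustration):

```python
from datetime import datetime, timedelta

def check_pipeline(runs, now, max_age=timedelta(hours=6), min_rows=100):
    """Return alert strings if the latest run is stale or suspiciously small."""
    last = max(runs, key=lambda r: r["finished_at"])
    alerts = []
    if now - last["finished_at"] > max_age:
        alerts.append("stale: last successful run is too old")
    if last["rows"] < min_rows:
        alerts.append(f"low volume: {last['rows']} rows < {min_rows}")
    return alerts

runs = [
    {"finished_at": datetime(2024, 1, 1, 3, 0), "rows": 5000},
    {"finished_at": datetime(2024, 1, 1, 9, 0), "rows": 42},
]
print(check_pipeline(runs, now=datetime(2024, 1, 1, 18, 0)))
```

A production setup would emit these alerts to paging or chat rather than returning them, but the freshness/volume split is the core of most pipeline monitors.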

     

    Required Competence and Skills:

    • 4+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
    • Proven track record in designing, developing, and deploying complex data applications.
    • Hands-on experience with orchestration and processing tools (e.g. Apache Airflow and/or Apache Spark).
    • Experience with public cloud platforms (preferably GCP) and cloud-native data services.
    • Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).
    • Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
    • Strong verbal and written communication skills in English.
    • Excellent communication skills and a strong team player, capable of working cross-functionally.

     

    Nice to have:

    • Familiarity with data science tools and libraries (e.g., pandas, scikit-learn).
    • Experience working with Docker and Kubernetes.
    • Hands-on experience with CI tools such as GitHub Actions.

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in).

    We provide full accounting and legal support in all the countries in which we operate.

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

    We offer a highly competitive package with yearly performance and compensation reviews.

     
