Jobs

  • 63 views · 2 applications · 21d

    Team/Tech Lead Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Looking for a Team Lead Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    As a Team Lead, you will be an expert and a leader, playing a crucial role in guiding the development team, making technical decisions, and ensuring the successful delivery of high-quality software products.

     

    Skills requirements:

    • 5+ years of experience with Python;

    • 4+ years of experience as a Data Engineer;

    • Knowledge of data algorithms and data structures is a MUST;

    • Excellent experience with Pandas;

    • Excellent experience with SQL databases, NoSQL stores (Redis, MongoDB, Elasticsearch), and BigQuery;

    • Experience with Apache Kafka and Apache Spark (PySpark);

    • Experience with Hadoop;

    • Familiarity with Amazon Web Services;

    • Understanding of cluster computing fundamentals;

    • Experience working with high-volume tables (100M+ rows).

     

    Optional skills (as a plus):

    • Experience with scheduling and monitoring (Databricks, Prometheus, Grafana);

    • Experience with Airflow;

    • Experience with Snowflake, Terraform;

    • Experience in statistics;

    • Knowledge of data science and machine learning algorithms.

     

    Key responsibilities:

    • Manage the development process and support team members;

    • Conduct R&D work with new technologies;

    • Maintain high-quality coding standards within the team;

    • Create ETL pipelines and data management solutions (API, integration logic);

    • Develop various data processing algorithms;

    • Be involved in creating forecasting, recommendation, and classification models;

    • Develop and implement workflows for receiving and transforming new data sources to be used in the company;

    • Develop the existing Data Engineering infrastructure to make it scalable and prepare it for projected future volumes;

    • Identify, design, and implement process improvements (i.e. automation of manual processes, infrastructure redesign, etc.).

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • 10 views · 1 application · 5h

    Senior Big Data Engineer

    Full Remote · Ukraine, Poland, Spain, Romania, Bulgaria · Product · 5 years of experience · Upper-Intermediate

    Who we are

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.

     

    About the Product

    The product of our client stands at the forefront of advanced threat detection and response, pioneering innovative solutions to safeguard businesses against evolving cybersecurity risks. It is a comprehensive platform that streamlines security operations, empowering organizations to swiftly detect, prevent, and automate responses to advanced threats with unparalleled precision and efficiency.

     

    About the Role

    We are looking for a proactive, innovative, and responsible Senior Big Data Engineer with extensive knowledge of and experience with streaming and batch processing, as well as building a DWH from scratch. Join our high-performance team to work with cutting-edge technologies in a dynamic and agile environment.

     

    Key Responsibilities: 

    • Design & Development: Architect, develop, and maintain robust distributed systems with complex requirements, ensuring scalability and performance.
    • Collaboration: Work closely with cross-functional teams to ensure the seamless integration and functionality of software components.
    • System Optimization: Implement and optimize scalable server systems, utilizing parallel processing, microservices architecture, and security development principles.
    • Database Management: Effectively utilize SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases in system design and development.
    • Big Data Tools: Leverage big data tools such as Spark or Flink to enhance system performance and scalability (experience with these tools is advantageous).
    • Deployment & Management: Demonstrate proficiency in Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.

     

    Required Competence and Skills:

    • At least 5 years of experience in the Data Engineering domain
    • Proficiency in SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases
    • Experience building a DWH from scratch and working with real-time data and streaming processes
    • Experience with GoLang (commercial or non-commercial)
    • Experience with big data tools such as Spark or Flink to enhance system performance and scalability
    • Proven experience with Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.
    • Ability to work effectively in a collaborative team environment
    • Excellent communication skills and a proactive approach to learning and development

     

    Advantages:

    • Experience in the data cybersecurity domain
    • Experience with a growing startup product

     

    Why Us

    We utilize a remote working model, providing a powerful workstation and a co-working space of your choice in case you need it.

    We offer a highly competitive package

    We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in)

    We prioritize the professional growth and well-being of our team members. Hence, we organize various social events throughout the year to foster connections and promote wellness

  • 48 views · 4 applications · 21d

    Senior Data Engineer

    Full Remote · Ukraine · Product · 5 years of experience · Upper-Intermediate

    Simulmedia is looking for an experienced and dynamic Data Engineer with a curious and creative mindset to join our Data Services team. The ideal candidate will have a strong background in Python, SQL and REST API development. This is an opportunity to join a team of amazing engineers, data scientists, product managers and designers who are obsessed with building the most advanced streaming advertising platform in the market. As a Data Engineer, you will build services and data processing systems to support our platform. You will work on a team that empowers the other teams to use our huge amount of data efficiently. Using a large variety of technologies and tools, you will solve complicated technical problems and build solutions to make our services robust and flexible and our data easily accessible throughout the company.

     

    Only for candidates from Ukraine. This position is located in either Kyiv or Lviv, Ukraine. The team is located in both Kyiv and Lviv and primarily works remotely with occasional team meetings in our offices.

     

    Responsibilities:

    • Build products that leverage our data and solve problems that tackle the complexity of streaming video advertising
    • Develop containerized applications, largely in Python, that are deployed to the Cloud
    • Work within an Agile team that releases cutting-edge new features regularly
    • Learn new technologies, and make an outsized impact on our industry-leading tech platform
    • Take a high degree of ownership and freedom to experiment with new technologies to improve our software
    • Develop maintainable code and fault tolerant solutions
    • Collaborate cross-functionally with product managers and stakeholders across the company to deliver on product roadmap
    • Join a team of passionate engineers in search of elegant solutions to hard problems

     

    Qualifications:

    • Bachelor’s degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
    • 7+ years of work experience as a data engineer
    • Proficiency in Python and using it as the primary development language in recent years
    • Proficiency in SQL and relational databases (Postgres, MySQL, etc)
    • Ability to design complex data models (normalized and multi-dimensional)
    • Experience building REST services (Flask, Django, aiohttp, etc)
    • Experience developing, maintaining, and debugging problems in large server-side code bases
    • Good knowledge of engineering best practices and testing (unit test, integration test)
    • The desire to take a high level of ownership of the things you work on
    • Ability to learn new things quickly, maintain a high bar for quality, and be pragmatic
    • Must be able to communicate with U.S.-based teams
    • Experience with AWS is a plus
    • Ability to work 11 am to 8 pm EEST

     

    Our Tech Stack:

    • Almost everything we run is on AWS
    • We mostly use Python, Ruby and Go
    • For data, we mostly use Postgres and Redshift
  • 45 views · 1 application · 24d

    Senior Data Engineer

    Full Remote · Poland · Product · 5 years of experience · Upper-Intermediate

    Project

    Toshiba is the global market share leader in retail store technology. As retail’s first choice for integrated in-store solutions, Toshiba’s innovative technology enhances customer engagement, transforms the in-store experience, and accelerates the digital transformation of the retail industry. Today, Toshiba is in a position to define the dominant practices of retail automation and advance the future of retail.

    The product is aimed at comprehensive retail chain automation and covers all work processes of large retail chain operators. The product covers retail store management, warehouse management, payment systems integration, logistics management, hardware/software store automation, etc.
    The product is already adopted by the market, and the biggest US and global retail operators are among the clients.

     

    Technology Stack

    Azure Databricks, Apache Spark (PySpark), Delta Lake, ADF, Synapse, Python, SQL, Power BI, MongoDB/CosmosDB, PostgreSQL, Terraform, Jenkins

     

    What you will do

    We are looking for an experienced Azure Databricks Engineer to join our team and contribute to building and optimizing large-scale data solutions. You will be responsible for working with Azure Databricks and Power BI, writing efficient Python and SQL scripts, optimizing data workflows to ensure performance and scalability, and building meaningful reports.

     

    Must-have skills

    • Bachelor’s or Master’s degree in Data Science, Computer Science or related field.
    • 3+ years of experience as a Data Engineer or in a similar role.
    • Proven experience in data analysis, data warehousing, and data reporting.
    • Proven experience with Azure Databricks (Python, PyTorch) and Azure infrastructure
    • Experience with Business Intelligence tools like Power BI.
    • Proficiency in querying languages like SQL.
    • Strong problem-solving skills and attention to detail.
    • Proven ability to translate business requirements into technical solutions.

     

    Nice-to-have skills

    • Knowledge and experience in e-commerce/retail
  • 43 views · 1 application · 24d

    Senior Data Engineer/Lead Data Engineer (Healthcare domain)

    Full Remote · EU · 5 years of experience · Upper-Intermediate

    We are looking for a Senior Data Engineer with extensive experience in data engineering who is passionate about making an impact. Join our team, where you will have the opportunity to drive innovation, improve solutions, and help us reach new heights!

    If you're ready to take your expertise to the next level and contribute significantly to the success of our projects, submit your resume now.

    Our client is a leading medical technology company. The portfolio of products, services, and solutions is central to clinical decision-making and treatment pathways. Patient-centered innovation has always been at the core of the company, which is committed to improving patient outcomes and experiences, no matter where they live or what challenges they face. The company is innovating sustainably to provide healthcare for everyone, everywhere.

    The Project’s mission is to enable healthcare providers to increase their value by equipping them with innovative technologies and services in diagnostic and therapeutic imaging, laboratory diagnostics, molecular medicine, and digital health and enterprise services.


    Responsibilities:

    • Work closely with the client (PO) as well as other team members to clarify tech requirements and expectations
    • Contribute to the design, development, and optimization of squad-specific data architecture and pipelines adhering to defined ETL and Data Lake principles
    • Implement architectures using Azure Cloud platforms (Data Factory, Databricks, Event Hub)
    • Discover, understand, and organize disparate data sources, structuring them into clean data models with clear, understandable schemas
    • Evaluate new tools for analytical data engineering or data science and suggest improvements
    • Contribute to training plans to improve analytical data engineering skills, standards, and processes


    Requirements:

    • Solid experience in data engineering and cloud computing services, specifically in the areas of data and analytics (Azure preferred)
    • Strong conceptual knowledge of data analytics fundamentals, including dimensional modeling, ETL, reporting tools, data governance, data warehousing, and handling both structured and unstructured data
    • Expertise in SQL and at least one programming language (Python/Scala)
    • Excellent communication skills and fluency in business English
    • Familiarity with Big Data DB technologies such as Snowflake, BigQuery, etc. (Snowflake preferred)
    • Experience with database development and data modeling, ideally with Databricks/Spark
  • 58 views · 9 applications · 3d

    Senior Data Engineer (Python) to $8000

    Full Remote · Bulgaria, Poland, Portugal, Romania, Ukraine · 5 years of experience · Upper-Intermediate

    Who we are:

     

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product: 
    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

     

    About the Role: 
    As a data engineer, you’ll have end-to-end ownership, from system architecture and software development to operational excellence.

     

    Key Responsibilities: 
    ● Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution (a minimal illustrative sketch follows this list).

    ● Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.

    ● Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.

    ● Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.

    ● Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
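
    For illustration only (this is not the client’s code), a minimal sketch of the kind of Airflow pipeline described in the first item above, assuming the Airflow 2.x TaskFlow API with dynamic task mapping so independent partitions run in parallel; the DAG, task, and source names are hypothetical:

        # Hypothetical sketch of a parallel Airflow pipeline (Airflow 2.x TaskFlow API).
        # Names, schedule, and logic are illustrative assumptions, not from the posting.
        from datetime import datetime

        from airflow.decorators import dag, task


        @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
        def ml_pricing_pipeline():
            @task
            def extract(source: str) -> str:
                # Placeholder: pull raw data for one source (e.g. from S3).
                return f"s3://raw-bucket/{source}"

            @task
            def train(path: str) -> str:
                # Placeholder: fit or score a model on the extracted partition.
                return f"model trained on {path}"

            # expand() fans the same task out over several inputs, so the mapped
            # task instances can execute in parallel on the workers.
            paths = extract.expand(source=["eu", "us", "apac"])
            train.expand(path=paths)


        ml_pricing_pipeline()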

     

    Required Competence and Skills:
    To excel in this role, candidates should possess the following qualifications and experiences:

    ● A Bachelor’s or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.

    ● At least 5 years of experience as a data engineer, software engineer, or similar role and using data to drive business results.

    ● At least 5 years of experience with Python, building modular, testable, and production-ready code.

    ● Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).

    ● Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).

    ● A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.

    ● Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.

     

    Nice-to-Haves

    ● Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.

    ● Familiarity with API development frameworks (e.g., FastAPI).

    ● Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).

    ● Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in).

     

    We provide full accounting and legal support in all countries where we operate.

     

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

     

    We offer a highly competitive package with yearly performance and compensation reviews.

  • 40 views · 3 applications · 20d

    Senior Python Data Engineer (only Ukraine)

    Ukraine · Product · 5 years of experience · Upper-Intermediate

    The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.

     

    Requirements:

     

    • At least 5 years of experience with Python
    • At least 3 years of experience processing structured terabyte-scale data (structured datasets of several hundred gigabytes and more).
    • Solid experience in SQL and NoSQL (ideally GCP storage services such as Firestore, BigQuery, and Bigtable, and/or Redis and Kafka).
    • Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc).
    • Deep understanding of data processing services (Apache Airflow, GCP Dataflow, Hadoop, Apache Spark).
    • Experience in automated test creation (TDD).
    • Fluent spoken English.

       

    Advantages:

     

    • Not being afraid of mathematical algorithms (part of our team’s responsibility is developing ML models for data analysis; although knowledge of ML is not required for the current position, it would be awesome if you feel some passion for algorithms).
    • Experience in any OOP language.
    • Experience in DevOps (familiarity with Docker and Kubernetes).
    • Experience with GCP services would be a plus.
    • Experience with IaC would be a plus.
    • Experience in Scala.


    What we offer:

     

    • 20 working days’ vacation; 
    • 10 paid sick leaves;
    • public holidays;
    • equipment;
    • an accountant who helps with documents;
    • many cool team activities.

       

    Apply now and start a new page of your fast career growth with us!

  • 9 views · 2 applications · 4h

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

    Does this relate to you?

    • 5+ years of experience in Data Engineering or a related field
    • Strong expertise in SQL and data modeling concepts
    • Hands-on experience with Airflow
    • Experience working with Redshift
    • Proficiency in Python for data processing
    • Strong understanding of data governance, security, and compliance
    • Experience in implementing CI/CD pipelines for data workflows
    • Ability to work independently and collaboratively in an agile environment
    • Excellent problem-solving and analytical skills

     

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions
    • Build and optimize ETL/ELT pipelines for efficient data integration
    • Design and implement data models to support analytical and reporting needs
    • Ensure data integrity, quality, and security across all pipelines
    • Optimize data performance and scalability using best practices
    • Work with big data technologies such as Redshift
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
    • Implement CI/CD pipelines for data workflows
    • Monitor, troubleshoot, and improve data processes and system performance
    • Stay updated with industry trends and emerging technologies in data engineering

     

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration reviews
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program with attractive bonuses
    • External & internal training and IT certifications
  • 55 views · 1 application · 27d

    Senior Data Engineer (FinTech Project)

    Full Remote · EU · 5 years of experience · Upper-Intermediate

    We are looking for a Senior Data Engineer to join our Data Center of Excellence, part of Sigma Software’s complex organizational structure, which combines collaboration with diverse clients, challenging projects, and continuous opportunities to enhance your expertise in a collaborative and innovative environment.
     

    Customer

    Our client is one of Europe’s fastest-growing FinTech innovators, revolutionizing how businesses manage their financial operations. They offer an all-in-one platform that covers everything from virtual cards and account management to wire transfers and spend tracking. As a licensed payment institution, the client seamlessly integrates core financial services into their product, enabling companies to streamline their financial workflows with speed and security.
     

    Project

    You will join a dynamic team driving the evolution of a high-performance data platform that supports real-time financial operations and analytics. The project focuses on building scalable data infrastructure that will guarantee accuracy, reliability, and compliance across multiple financial products and services.

     

    Requirements:

    • At least 5 years of experience in data engineering or software engineering with a strong focus on data infrastructure
    • Hands-on experience in AWS (or equivalent cloud platforms like GCP) and data analytics services
    • Strong proficiency in Python and SQL
    • Good understanding of database design, optimization, and maintenance (using dbt)
    • Strong experience with data modeling, ETL processes, and data warehousing
    • Familiarity with Terraform and Kubernetes
    • Expertise in developing and managing large-scale data flows efficiently
    • Experience with job orchestrators or scheduling tools like Airflow
    • At least an Upper-Intermediate level of English
       

    Would be a plus:

    • Experience managing RBAC on a data warehouse
    • Experience maintaining security on a data warehouse (IP whitelists, masking, sharing data between accounts/clusters, etc.)

     

    Responsibilities:

    • Collaborate with stakeholders to identify business requirements and translate them into technical specifications
    • Design, build, monitor, and maintain data pipelines in production, including complex pipelines (Airflow, Python, event-driven systems)
    • Develop and maintain ETL processes for ingesting and transforming data from various sources
    • Monitor and troubleshoot infrastructure issues, such as Kubernetes and Terraform, including data quality, ETL processes, and cost optimization
    • Collaborate closely with analytics engineers on CI and infrastructure management
    • Drive the establishment and maintenance of the highest coding standards and practices, ensuring the development of efficient, scalable, and reliable data pipelines and systems
    • Participate in data governance initiatives to ensure data accuracy and integrity
    • Actively participate in the data team’s routines and enhancement plans
    • Stay up to date with the latest developments in data technology and provide recommendations for improving our analytics capabilities
  • 35 views · 1 application · 24d

    Senior Data Engineer

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    As a Senior/Tech Lead Data Engineer, you will play a pivotal role in designing, implementing, and optimizing data platforms for our clients. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.

     

    Key Responsibilities:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources.
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions.
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency.
    • Provide expertise in data modeling and database design to ensure scalability and reliability of data platforms.
    • Contribute to the development of best practices and standards for data engineering processes.
    • Stay updated on emerging technologies and trends in the data engineering landscape.

     

    Required Skills and Qualifications:

    • Bachelor's Degree in Computer Science or related field.
    • Minimum of 5 years of experience in tech lead data engineering or architecture roles.
    • Proficiency in Python and PySpark for ETL development and data processing.
    • At least 2 years of experience with AWS cloud
    • Extensive experience with cloud-based data platforms, particularly EMR.
    • Must have knowledge of Spark.
    • Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
    • Leadership experience, with a proven track record of leading data engineering teams.

     

    Benefits

     

    • 20 days of paid vacation, 5 sick leave days
    • National holidays observed
    • Company-provided laptop

     

     

  • 51 views · 6 applications · 21d

    Consultant Data Engineer (Python/Databricks)

    Part-time · Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Softermii is looking for a part-time Data Engineering Consultant / Tech Lead who will conduct technical interviews, assist with upcoming projects, and occasionally be hands-on with complex development tasks, including data pipeline design and solution optimization on Databricks.

     


    Type of cooperation: Part-time

     

    ⚡️ Your responsibilities on the project will be:

    • Interview and hire Data Engineers
    • Supervise the work of other engineers and be hands-on with the most complicated tasks from the backlog, focusing on unblocking other data engineers in case of technical difficulties
    • Develop and maintain scalable data pipelines using Databricks (Apache Spark) for batch and streaming use cases (a minimal illustrative sketch follows this list).
    • Work with data scientists and analysts to provide reliable, performant, and well-modeled data sets for analytics and machine learning.
    • Optimize and manage data workflows using Databricks Workflows and orchestrate jobs for complex data transformation tasks.
    • Design and implement data ingestion frameworks to bring data from various sources (files, APIs, databases) into Delta Lake.
    • Ensure data quality, lineage, and governance using tools such as Unity Catalog, Delta Live Tables, and built-in monitoring features.
    • Collaborate with cross-functional teams to understand data needs and support production-grade machine learning workflows.
    • Apply data engineering best practices: versioning, testing (e.g., with pytest or dbx), documentation, and CI/CD pipelines
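
    As a purely illustrative sketch (not Softermii’s code; paths and table names are assumptions), the Databricks pipeline work described in the third item above typically looks like a small PySpark job that lands raw files in a Delta table, with the same transformation reused for batch and for streaming via Auto Loader:

        # Hypothetical Databricks ingestion sketch; paths and table names are assumptions.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

        RAW_PATH = "s3://example-bucket/raw/events/"   # assumed landing zone
        TARGET = "analytics.events_bronze"             # assumed Delta table

        def ingest_batch() -> None:
            # Batch path: read raw JSON, stamp the load time, append to the Delta table.
            (spark.read.json(RAW_PATH)
                  .withColumn("ingested_at", F.current_timestamp())
                  .write.format("delta").mode("append").saveAsTable(TARGET))

        def ingest_stream() -> None:
            # Streaming path: the same transformation, made incremental via Auto Loader.
            (spark.readStream.format("cloudFiles")
                  .option("cloudFiles.format", "json")
                  .load(RAW_PATH)
                  .withColumn("ingested_at", F.current_timestamp())
                  .writeStream
                  .option("checkpointLocation", "s3://example-bucket/_checkpoints/events/")
                  .toTable(TARGET))

        if __name__ == "__main__":
            ingest_batch()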



     

    🕹 Tools we use: Jira, Confluence, Git, Figma

     

    🗞 Our requirements for you:

    • 5+ years of experience in data engineering or big data development, with production-level work.
    • Architect and develop scalable data solutions on the Databricks platform, leveraging Apache Spark, Delta Lake, and the lakehouse architecture to support advanced analytics and machine learning initiatives.
    • Design, build, and maintain production-grade data pipelines using Python (or Scala) and SQL, ensuring efficient data ingestion, transformation, and delivery across distributed systems.
    • Lead the implementation of Databricks features such as Delta Live Tables, Unity Catalog, and Workflows to ensure secure, reliable, and automated data operations.
    • Optimize Spark performance and resource utilization, applying best practices in distributed computing, caching, and tuning for large-scale data processing.
    • Integrate data from cloud-based sources (e.g., AWS S3), ensuring data quality, lineage, and consistency throughout the pipeline lifecycle.
    • Manage orchestration and automation of data workflows using tools like Airflow or Databricks Jobs, while implementing robust CI/CD pipelines for code deployment and testing.
    • Collaborate cross-functionally with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights through robust data infrastructure.
    • Mentor and guide junior engineers, promoting engineering best practices, code quality, and continuous learning within the team.
    • Ensure adherence to data governance and security policies, utilizing tools such as Unity Catalog for access control and compliance.
    • Continuously evaluate new technologies and practices, driving innovation and improvements in data engineering strategy and execution.
    • Experience in designing, building, and maintaining data pipelines using Apache Airflow, including DAG creation, task orchestration, and workflow optimization for scalable data processing.
    • Upper-Intermediate English level.

     

     

    👨‍💻 Who will you have the opportunity to meet during the hiring process (stages):
    Call, HR, Tech interview, PM interview.

     

    🥯 What we can offer you:

    • We have stable and highly-functioning processes – everyone has their own role and clear responsibilities, so decisions are made quickly and without unnecessary approvals. 
    • You will have enough independence to make decisions that can affect not only the project but also the work of the company.
    • We are a team of like-minded experts who create interesting products during working hours and enjoy spending free time together.
    • Do you like to learn something new in your profession or do you want to improve your English? We will be happy to pay 50% of the cost of courses/conferences/speaking clubs.
    • Do you want an individual development plan? We will form one especially for you + you can count on mentoring from our seniors and leaders.
    • Do you have a friend who is currently looking for new job opportunities? Recommend them to us and get a bonus.
    • And what if you want to relax? Then we have 21 working days off.
    • What if you are feeling bad? You can take 5 sick leaves a year.
    • Do you want to volunteer? We will add you to a chat, where we can get a bulletproof vest, buy a pickup truck or send children's drawings to the front.
    • And we have the most empathetic HRs (who also volunteers!). So we are ready to support your well-being in various ways.

     

    πŸ‘¨β€πŸ«A little more information that you may find useful:

    - our adaptation period lasts 3 months; this period is enough for us to understand each other better;

    - there is a performance review after each year of our collaboration where we use a skills map to track your growth;

    - we really have no boundaries in the truest sense of the word – your working day is flexible and up to you.

     

    Of course, we have a referral bonus system.

  • 46 views · 4 applications · 15d

    Data Engineer (PostgreSQL, Snowflake, Google BigQuery, MongoDB, Elasticsearch)

    Full Remote · Worldwide · 5 years of experience · Intermediate

    We are looking for a Data Engineer with a diverse background in data integration to join the Data Management team. Some data are small, some are very large (1 trillion+ rows); some data is structured, some is not. Our data comes in all kinds of sizes, shapes, and formats: traditional RDBMSs like PostgreSQL, Oracle, and SQL Server; MPPs like StarRocks, Vertica, Snowflake, and Google BigQuery; and unstructured, key-value stores like MongoDB and Elasticsearch, to name a few.

     

    We are looking for individuals who can design and solve any data problems using different types of databases and technology supported within our team.  We use MPP databases to analyze billions of rows in seconds.  We use Spark and Iceberg, batch or streaming to process whatever the data needs are.  We also use Trino to connect all different types of data without moving them around. 

     

    Besides a competitive compensation package, you’ll be working with a great group of technologists interested in finding the right database to use, the right technology for the job in a culture that encourages innovation.  If you’re ready to step up and take on some new technical challenges at a well-respected company, this is a unique opportunity for you.

     

    Responsibilities:

    • Work within our on-prem Hadoop ecosystem to develop and maintain ETL jobs
    • Design and develop data projects against RDBMS such as PostgreSQL 
    • Implement ETL/ELT processes using various tools (Pentaho) or programming languages (Java, Python) at our disposal 
    • Analyze business requirements, design and implement required data models
    • Lead data architecture and engineering decision making/planning.
    • Translate complex technical subjects into terms that can be understood by both technical and non-technical audiences.

     

    Qualifications: (must have)

    • BA/BS in Computer Science or in related field
    • 5+ years of experience with RDBMS databases such as Oracle, MSSQL or PostgreSQL
    • 2+ years of experience managing or developing in the Hadoop ecosystem
    • Programming background with either Python, Scala, Java or C/C++
    • Experience with Spark: PySpark, SparkSQL, Spark Streaming, etc.
    • Strong in any of the Linux distributions: RHEL, CentOS, or Fedora
    • Working knowledge of orchestration tools such as Oozie and Airflow
    • Experience working in both OLAP and OLTP environments
    • Experience working on-prem, not just cloud environments
    • Experience working with teams outside of IT (i.e. Application Developers, Business Intelligence, Finance, Marketing, Sales)

     

    Desired: (nice to have)

    • Experience with Pentaho Data Integration or any ETL tools such as Talend, Informatica, DataStage or HOP.
    • Deep knowledge of shell scripting, scheduling, and monitoring processes on Linux
    • Experience using reporting and Data Visualization platforms (Tableau, Pentaho BI)
    • Working knowledge of data unification and setup using Presto/Trino
    • Web analytics or Business Intelligence experience is a plus
    • Understanding of Ad stack and data (Ad Servers, DSM, Programmatic, DMP, etc)
  • 76 views · 19 applications · 14d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Boosty Labs is one of the most prominent outsourcing companies in the blockchain domain. Among our clients are such well-known companies as Ledger, Consensys, Storj, Animoca brands, Walletconnect, Coinspaid, Paraswap, and others.

    About the project: Advanced blockchain analytics and on-the-ground intelligence to empower financial institutions, governments, and regulators in the fight against cryptocurrency crime.

    • Requirements:
      • 6+ years of experience with Python backend development
      • Solid knowledge of SQL (including writing/debugging complex queries)
      • Understanding of data warehouse principles and backend architecture
      • Experience working in Linux/Unix environments
      • Experience with APIs and Python frameworks such as Flask or FastAPI (a minimal sketch follows this list)
      • Experience with PostgreSQL
      • Familiarity with Docker
      • Basic understanding of unit testing
      • Good communication skills and ability to work in a team
      • Interest in blockchain technology or willingness to learn
      • Experience with CI/CD processes and containerization (Docker, Kubernetes) is a plus
      • Strong problem-solving skills and the ability to work independently
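
      For illustration only (hypothetical endpoint and model names, not the platform’s actual API), the kind of Python backend service referenced in the requirements above can be sketched with FastAPI roughly like this:

          # Hypothetical FastAPI sketch; routes and fields are illustrative assumptions.
          from fastapi import FastAPI
          from pydantic import BaseModel

          app = FastAPI()

          class Transfer(BaseModel):
              tx_hash: str
              amount: float

          @app.get("/health")
          def health() -> dict:
              # Simple liveness probe for the backend service.
              return {"status": "ok"}

          @app.post("/transfers")
          def create_transfer(transfer: Transfer) -> dict:
              # Placeholder: a real service would validate and persist to PostgreSQL.
              return {"received": transfer.tx_hash}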
    • Responsibilities:
      • Integrate new blockchains, AMM protocols, and bridges into our platform
      • Build and maintain data pipelines and backend services
      • Help implement new tools and technologies into the system
      • Participate in the full cycle of feature development – from design to release
      • Write clean and testable code
      • Collaborate with the team through code reviews and brainstorming
    • Nice to Have:
      • Experience with Kafka, Spark, or ClickHouse
      • Knowledge of Kubernetes, Terraform, or Ansible
      • Interest in crypto, DeFi, or distributed systems
      • Experience with open-source tools
      • Some experience with Java or readiness to explore it
    • What we offer:
      • Remote working format 
      • Flexible working hours
      • Informal and friendly atmosphere
      • The ability to focus on your work: a lack of bureaucracy and micromanagement
      • 20 paid vacation days
      • 7 paid sick leaves
      • Education reimbursement
      • Free English classes
      • Psychologist consultations
    • Recruitment process:

      Recruitment Interview – Technical Interview

  • 24 views · 2 applications · 13d

    Senior Data Engineer (IRC264689)

    Full Remote · Poland, Romania, Croatia, Slovakia · 5 years of experience · Upper-Intermediate

    Our client provides collaborative payment, invoice and document automation solutions to corporations, financial institutions and banks around the world. The company’s solutions are used to streamline, automate and manage processes involving payments, invoicing, global cash management, supply chain finance and transactional documents. Organizations trust these solutions to meet their needs for cost reduction, competitive differentiation and optimization of working capital.

    Serving industries such as financial services, insurance, health care, technology, communications, education, media, manufacturing and government, Bottomline provides products and services to approximately 80 of the Fortune 100 companies and 70 of the FTSE (Financial Times) 100 companies.

    Our client is a participating employer in the Employment Verification (E-Verify) program EOE/AA/M/F/V/D/E-Verify Employer

    Our client is an Equal Employment Opportunity and Affirmative Action Employer.

    As part of the GlobalLogic team, you will grow, be challenged, and expand your skill set working alongside highly experienced and talented people.

    Don’t waste a second, apply!

     

    Skill Category

    Data Engineering

     

    We expect experienced candidates to work with a new team and demonstrate experience in the following:

    • Experience with Databricks or similar
    • Hands-on experience with the Databricks platform or similar is helpful.
    • Managing delta tables, including tasks like incremental updates, compaction, and restoring versions (a short illustrative sketch follows this list)
    • Proficiency in Python (or other programming languages) and SQL, commonly used to create and manage data pipelines and to query and run BI/DWH workloads on Databricks
    • Familiarity with other languages like Scala (common in the Spark/Databricks world) or Java can also be beneficial.
    • Experience with Apache Spark
    • Understanding of Apache Spark’s architecture and data processing concepts (RDDs, DataFrames, Datasets)
    • Knowledge of Spark-based workflows
    • Experience with data pipelines
    • Experience in designing, building, and maintaining robust and scalable ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines.
    • Data understanding and business acumen
    • The ability to analyse and understand data, identify patterns, and troubleshoot data quality issues is crucial, as is familiarity with data profiling techniques.
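
    As a short, purely illustrative sketch (table and column names are assumptions, not the client’s schema), the delta table maintenance tasks listed above map onto standard Delta Lake operations on Databricks: MERGE for incremental updates, OPTIMIZE for compaction, and RESTORE for reverting to an earlier version:

        # Hypothetical Delta Lake maintenance routine on Databricks; names are assumptions.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.getOrCreate()

        # Incremental update: upsert the latest change set into the target table.
        spark.sql("""
            MERGE INTO analytics.orders AS t
            USING staging.orders_updates AS s
            ON t.order_id = s.order_id
            WHEN MATCHED THEN UPDATE SET *
            WHEN NOT MATCHED THEN INSERT *
        """)

        # Compaction: rewrite many small files into fewer, larger ones (optionally Z-ordered).
        spark.sql("OPTIMIZE analytics.orders ZORDER BY (order_date)")

        # Restore: roll the table back to a previous version recorded in its history.
        spark.sql("RESTORE TABLE analytics.orders TO VERSION AS OF 42")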

     

    Job responsibilities

    • Developing a Postgres-based central storage location as the basis for long-term data storage
    • Standing up microservices to retain data based on tenant configuration, and a UI to enable customers to configure their retention policy
    • Creating the pipeline to transform the data from the transactional database into a format suited to analytical queries
    • Helping pinpoint and fix issues in data quality
    • Participating in code review sessions
    • Following the client’s standards for code and data quality


    #Remote

  • 41 views · 5 applications · 13d

    Senior Data Engineer

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    Description

    Method is a global design and engineering consultancy founded in 1999. We believe that innovation should be meaningful, beautiful and human. We craft practical, powerful digital experiences that improve lives and transform businesses. Our teams based in New York, Charlotte, Atlanta, London, Bengaluru, and remote work with a wide range of organizations in many industries, including Healthcare, Financial Services, Retail, Automotive, Aviation, and Professional Services.

     

    Method is part of GlobalLogic, a digital product engineering company. GlobalLogic integrates experience design and complex engineering to help our clients imagine what’s possible and accelerate their transition into tomorrow’s digital businesses. GlobalLogic is a Hitachi Group Company.

     

    Your role is to collaborate with multidisciplinary individuals and support the project lead on data strategy and implementation projects. You will be responsible for data and systems assessment, identifying the critical data and quality gaps required for effective decision support, and contributing to the data platform modernization roadmap. 

     

    Responsibilities:

    • Work closely with data scientists, data architects, business analysts, and other disciplines to understand data requirements and deliver accurate data solutions.
    • Analyze and document existing data system processes to identify areas for improvement.
    • Develop detailed process maps that describe data flow and integration across systems.
    • Create a data catalog and document data structures across various databases and systems.
    • Compare data across systems to identify inconsistencies and discrepancies.
    • Contribute towards gap analysis and recommend solutions for standardizing data.
    • Recommend data governance best practices to organize and manage data assets effectively.
    • Propose database design standards and best practices to suit various downstream systems, applications, and business objectives
    • Strong problem-solving abilities with meticulous attention to detail and relevant experience.
    • Experience with requirements gathering and methodologies. 
    • Excellent communication and presentation skills with the ability to clearly articulate technical concepts, methodologies, and business impact to both technical teams and clients.
    • A unique point of view. You are trusted to question approaches, processes, and strategy to better serve your team and clients.

     

    Skills Required 

    Technical skills

    • Proven experience (5+ years) in data engineering.
    • 5+ years of proven data engineering experience with expertise in data warehousing, data management, and data governance in SQL or NoSQL databases.
    • Deep understanding of data modeling, data architecture, and data integration techniques.
    • Advanced proficiency in ETL/ELT processes and data pipeline development from raw, structured to business/analytics layers to support BI Analytics and AI/GenAI models.
    • Hands-on experience with ETL tools, including: Databricks (preferred), Matillion, Alteryx, or similar platforms.
    • Commercial experience with a major cloud platform like Microsoft Azure (e.g., Azure Data Factory, Azure Synapse, Azure Blob Storage).

     

     

    Core Technology stack

    Databases

    • Oracle RDBMS (for OLTP): Expert SQL for complex queries, DML, DDL.
    • Oracle Exadata (for OLAP/Data Warehouse): Advanced SQL optimized for analytical workloads. Experience with data loading techniques and performance optimization on Exadata.

    Storage:

    • S3-Compatible Object Storage (On-Prem): Proficiency with S3 APIs for data ingest, retrieval, and management.

    Programming & Scripting:

    • Python: Core language for ETL/ELT development, automation, and data manipulation.
    • Shell Scripting (Linux/Unix): Bash/sh for automation, file system operations, and job control.

    Version Control:

    • Git: Managing all code artifacts (SQL scripts, Python code, configuration files).

    Related Technologies & Concepts:

    • Data Pipeline Orchestration Concepts: Understanding of scheduling, dependency management, monitoring, and alerting for data pipelines
    • Containerization: Docker, basic understanding of how containerization works
    • API Interaction: Understanding of REST APIs for data exchange (as they might need to integrate with the Java Spring Boot microservices).
       

    Location

    • Remote across Poland

     

    Why Method?

    We look for individuals who are smart, kind and brave. Curious people with a natural ability to think on their feet, learn fast, and develop points of view for a constantly changing world find Method an exciting place to work. Our employees are excited to collaborate with dispersed and diverse teams that bring together the best in thinking and making. We champion the ability to listen and believe that critique and dissonance lead to better outcomes. We believe everyone has the capacity to lead and look for proactive individuals who can take and give direction, lead by example, enjoy the making as much as they do the thinking, especially at senior and leadership levels.

    Next Steps

    If Method sounds like the place for you, please submit an application. Also, let us know if you have a presence online with a portfolio, GitHub, Dribbble, or another platform.

     

    * For information on how we process your personal data, please see Privacy: https://www.method.com/privacy/
