Data Engineer Jobs (142)
-
Data Engineer (Healthcare, FHIR, Kafka, Knowledge Graphs)
Full Remote · Countries of Europe or Ukraine · 3 years of experience · Upper-Intermediate
Type: Remote
Trial Period: 1 month
Job Description
We are looking for a skilled Data Engineer to work on data standardization, integration, and knowledge graph construction in the healthcare domain. The role involves developing data processing pipelines for Electronic Health Record (EHR) systems, implementing FHIR-based data transformation, and integrating Large Language Models (LLMs) with Knowledge Graphs (KGs) to enhance predictive analytics.
Key Responsibilities:
• Develop and maintain data pipelines to standardize and process data from various EHR systems based on FHIR standards.
• Map and normalize incoming data to ensure consistency and completeness.
• Work with Kafka for real-time data streaming and integration.
• Implement mechanisms for data segmentation, enabling both structured storage and AI model training.
• Develop knowledge graphs to represent relationships and enhance semantic search capabilities.
• Leverage LLMs for structured data extraction and contextual analysis.
• Ensure robust error handling and logging for auditing and troubleshooting.
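For illustration, here is a minimal sketch of the kind of pipeline step these responsibilities describe: consuming raw EHR events from Kafka and normalizing them into a FHIR R4 Patient resource. The topic and source field names are hypothetical; a production pipeline would add validation against FHIR profiles, error handling, and dead-letter queues.

```python
# Minimal sketch (hypothetical topic and field names): consume raw EHR
# events from Kafka and normalize them into FHIR R4 Patient resources.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "ehr.admissions.raw",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

def to_fhir_patient(record: dict) -> dict:
    """Map a source-specific EHR record to a minimal FHIR Patient resource."""
    return {
        "resourceType": "Patient",
        "identifier": [{"system": "urn:example:mrn", "value": record.get("mrn")}],
        "name": [{"family": record.get("last_name"),
                  "given": [record.get("first_name")]}],
        "gender": (record.get("sex") or "unknown").lower(),
        "birthDate": record.get("dob"),  # expects ISO-8601 YYYY-MM-DD
    }

for message in consumer:
    patient = to_fhir_patient(message.value)
    # Downstream: validate against FHIR profiles, then publish or persist.
    print(json.dumps(patient))
```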
Requirements:
• Experience as a Data Engineer, preferably in healthcare or similar domains.
• Strong knowledge of FHIR and HL7 standards for data interoperability.
• Proficiency in Python and SQL for data transformation and querying.
• Experience with Kafka for real-time data streaming.
• Understanding of Knowledge Graphs and graph databases (Neo4j, RDF-based systems).
• Familiarity with machine learning models and AI-driven data processing.
• Strong problem-solving skills and ability to work in a collaborative team environment.
Nice to Have:
• Experience with Docker and Kubernetes.
• Prior work in AI-driven data engineering solutions.
What We Offer:
• Competitive salary.
• 1-month trial period.
• Fully remote work.
• Work on cutting-edge healthcare data solutions.
• Friendly and professional team.
If you are passionate about data engineering and want to work on innovative solutions in healthcare, apply now!
-
Data Engineer
Full Remote · Ukraine · Product · 4 years of experience
At Wiztech Group, we are revolutionizing the gaming industry by seamlessly integrating cutting-edge technology with immersive gaming experiences. Rooted in transparency, integrity, and accountability, we are on the lookout for a Data Engineer to architect and maintain the backbone of our data infrastructure, ensuring a seamless, efficient, and high-performance data ecosystem.
Why You'll Love Working With Us
Join a team where your expertise directly influences the performance, scalability, and reliability of gaming experiences enjoyed by millions worldwide. As a Data Engineer, you will collaborate with top-tier professionals to develop robust data pipelines and systems using the latest technologies, driving innovation at every step.
Your Role: What You’ll Do
Data Pipeline Development
- Design, develop, and maintain scalable, secure, and high-performance data pipelines using Python and SQL.
- Implement ETL processes to efficiently extract, transform, and load data from diverse sources into our data warehouse.
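As one illustration of the loading step described above, a hedged sketch of a bulk load into Redshift using the standard S3 COPY pattern; the cluster endpoint, credentials, table, and IAM role are placeholders.

```python
# Sketch of a Redshift bulk load (placeholder credentials, table, and role).
# Redshift speaks the PostgreSQL wire protocol, so psycopg2 works as a client.
import psycopg2  # pip install psycopg2-binary

COPY_SQL = """
    COPY analytics.game_events
    FROM 's3://example-bucket/events/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS JSON 'auto'
    TIMEFORMAT 'auto';
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439, dbname="prod", user="etl_user", password="...",
)
try:
    with conn, conn.cursor() as cur:
        cur.execute(COPY_SQL)  # COPY is the idiomatic bulk path into Redshift
finally:
    conn.close()
```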
Database Management
- Design and manage relational database schemas in Amazon Redshift.
- Optimize database queries for enhanced performance and reliability at scale.
- Oversee database migrations and version control to ensure seamless updates.
Data Modeling & Transformation
- Develop and maintain data models using dbt, ensuring data is structured and accessible for analysis.
- Collaborate with analytics teams to define and implement data transformations that empower business intelligence initiatives.
Collaboration & Teamwork
- Work closely with backend and frontend developers to ensure seamless integration between data systems and applications.
- Partner with DevOps teams to develop and optimize CI/CD pipelines for data workflows.
- Participate in Agile ceremonies, contributing to sprint planning and retrospectives.
Testing & Quality Assurance
- Write and implement unit, integration, and end-to-end tests to ensure code quality and reliability.
- Identify and resolve data-related issues to maintain system stability and performance.
Documentation
- Create and maintain comprehensive documentation for data pipelines, infrastructure, and development processes, supporting scalability and team collaboration.
What Makes You a Perfect Fit
Key Skills & Experience
- 3+ years of data engineering experience with expertise in Python and SQL.
- Proven track record in designing and managing relational databases, particularly Amazon Redshift.
- Hands-on experience with dbt for data modeling and transformation.
- Strong ability to build efficient, scalable ETL pipelines.
- In-depth understanding of data warehousing concepts and best practices.
- Experience with CI/CD pipelines and containerization tools like Docker.
- Excellent problem-solving skills, attention to detail, and strong communication abilities.
Preferred Advantages
- Experience in the gaming industry, with knowledge of data compliance standards.
- Familiarity with event-driven architectures and message queues like Kafka or AWS SQS.
- Understanding of serverless computing and microservices architectures.
- Experience with performance monitoring and optimization tools.
What We Offer
- Competitive salary with ample growth opportunities.
- Top-tier equipment to support your work.
- Flexible work schedules to ensure a healthy work-life balance.
- A chance to join a visionary team redefining the future of gaming across platforms.
Ready to Build the Data Backbone of Gaming?
Join Wiztech Group as a Data Engineer and be a driving force in shaping the next-generation gaming experience. Help us create a scalable, secure, and forward-thinking data infrastructure that fuels unforgettable gaming adventures. Let’s innovate together!
-
Big Data Engineer
Hybrid Remote · Poland · 2 years of experience · Upper-Intermediate
Experience Required: 2 years
Sector: Banking and Finance
Work Mode: Hybrid
Location: Poland – Poznań, Łódź, Warsaw, Kraków, Wrocław
Job Overview
We are seeking an experienced Big Data Engineer to join a dynamic team for a 6-month project, with the possibility of extension. This role requires someone with hands-on experience in data engineering, Java, and Hadoop ecosystems.
Required Skills and Experience
- At least 2 years of practical experience as a Data Engineer in a commercial environment.
- A minimum of 2 years of commercial experience with Java and Hadoop.
- Proficiency in SQL.
- Hands-on experience with the Hadoop ecosystem, including HDFS, YARN, Spark, HBase, Impala, and Hive.
- Expertise in distributed computing and large-scale data processing.
- Familiarity with ETL pipeline development and data modeling.
- Strong understanding of cluster management, big data troubleshooting, and API development with Java Spring.
- Excellent analytical skills and attention to detail.
Key Responsibilities
Hadoop Ecosystem Expertise:
- Design, develop, and optimize data pipelines using technologies like HDFS, YARN, Spark, HBase, Impala, and Hive.
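To make the Hadoop-side work concrete, a small sketch of reading a Hive table and writing an aggregated, partitioned result back to HDFS. It is shown in PySpark (the examples in this digest use Python, while the posting itself centers on Java tooling), and the database, table, and path names are hypothetical.

```python
# Hypothetical names throughout; shown in PySpark for brevity.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("daily-transaction-rollup")
         .enableHiveSupport()          # lets Spark read Hive metastore tables
         .getOrCreate())

txns = spark.table("raw_db.transactions")

daily = (txns
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date")
         .agg(F.count(F.lit(1)).alias("txn_count"),
              F.sum("amount").alias("total_amount")))

# Partitioned Parquet on HDFS keeps downstream Impala/Hive scans cheap.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/daily_txns"))
```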
Java Development for APIs and Batch Processes:
- Develop and deploy RESTful APIs using Java Spring Boot for data integration across systems.
- Design and manage batch processes to ensure efficient data workflows and business logic execution.
- Implement error handling, logging, and monitoring for reliable processes.
Database Management:
- Maintain and enhance relational databases, especially with Oracle DB.
- Write complex SQL and Spark SQL queries for improved performance.
Performance Optimization and Troubleshooting:
- Monitor, troubleshoot, and optimize data processes to ensure efficiency and data quality.
Collaboration and Agile Participation:
- Participate in Agile practices, including stand-ups and retrospectives, to support project success.
- Contribute to code reviews, design discussions, and team collaboration to maintain high development standards.
Location Requirements
- Work Mode: Hybrid
- Offices: Poznań, Łódź, Warsaw, Kraków, Wrocław
- On-Site: 2 times per week at the client's office
- Note: Please specify your city when applying.
Additional Information
- Start Date: ASAP
- Project Duration: 6 months (with the possibility of extension)
- Verification Process: At least two interviews (one with our client and one with the end client)
If you are passionate about big data technologies and meet the above requirements, we encourage you to apply!
-
Data Engineer
Ukraine · Product · 3 years of experience · Advanced/Fluent
Key Responsibilities:
- Azure Databricks Architecture & Design:
- Design, build, and optimize scalable, high-performance data pipelines and data lakes using Azure Databricks.
- Architect and implement end-to-end analytics solutions leveraging Databricks and other Azure services (e.g., Azure Data Lake, Azure SQL, Azure Blob Storage).
- Lead the design of cloud-based architectures using Azure Databricks for data processing, transformation, and reporting.
- Data Engineering & Integration:
- Design and implement ETL/ELT processes to ingest, process, and transform data across various sources (structured and unstructured).
- Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and develop data solutions.
- Manage data integration workflows between Databricks and other platforms like Power BI, Azure SQL, Synapse, etc.
- Optimization & Performance Tuning:
- Identify performance bottlenecks in Databricks environments and optimize clusters, queries, and code.
- Continuously improve and scale data pipelines to accommodate growing data volumes and business needs.
- Continuous Improvement & Innovation:
- Stay up to date with the latest advancements in Azure Databricks, cloud technologies, and data engineering trends.
- Continuously evaluate and introduce new technologies and methodologies to enhance the data platform.
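A hedged sketch of the kind of ingestion step these responsibilities describe: landing raw JSON from Azure Data Lake Storage into a Delta table on Databricks. The storage paths and column names are assumptions; `spark` is the session Databricks provides in a notebook.

```python
# Assumes a Databricks notebook, where `spark` is predefined.
# Paths and columns are hypothetical.
from pyspark.sql import functions as F

raw = spark.read.json("abfss://landing@exampleaccount.dfs.core.windows.net/events/")

clean = (raw
         .dropDuplicates(["event_id"])
         .withColumn("event_date", F.to_date("event_ts"))
         .withColumn("ingested_at", F.current_timestamp()))

# Delta gives ACID writes plus time travel, the basis of a Lakehouse table.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("abfss://lake@exampleaccount.dfs.core.windows.net/bronze/events"))
```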
Required Skills and Qualifications:
- Experience:
- At least 2 years of experience working with Azure Databricks, building data pipelines, and performing data engineering tasks.
- Strong experience with Azure services such as Azure Data Lake, Azure Synapse Analytics, Azure Blob Storage, Azure SQL Database, etc.
- Technical Skills:
- Proficiency in Spark, PySpark, Scala, or SQL for large-scale data processing.
- Expertise in Databricks notebooks, clusters, job scheduling, and libraries.
- Deep understanding of cloud-native data engineering practices and architecture on Azure.
- Familiarity with data modeling, data lakes, and data warehouse concepts.
- Expertise in Azure services: Data Factory, Data Lake, Synapse Analytics, Event Hubs, Cosmos DB.
- Data Pipeline and ETL/ELT Development:
- Experience building and optimizing ETL/ELT workflows on Databricks.
- Knowledge of data orchestration tools such as Azure Data Factory, Apache Airflow, or similar.
- Big Data Technologies:
- Proficiency in handling large-scale distributed data processing with tools like Apache Spark.
- Experience with technologies such as Kafka, Delta Lake, and Databricks Runtime.
- Strong understanding of Delta Lake and Lakehouse architecture.
- Programming Languages:
- Strong programming skills in Python, Scala, or Java.
- Experience with SQL-based querying and optimizations.
- Cloud & DevOps:
- Strong understanding of Azure cloud services and architecture.
- Familiarity with DevOps principles, CI/CD pipelines, and automation tools; Terraform is required.
- Knowledge of Git, Azure DevOps, or similar version control and deployment systems.
Preferred Qualifications:
- Certifications:
- Microsoft Certified: Azure Data Engineer Associate or equivalent certification.
- Databricks Certified Associate Developer for Apache Spark.
- Education:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
-
Data Engineer
Part-time · Full Remote · Countries of Europe or Ukraine · 3 years of experience · Upper-Intermediate
About the Role
We are seeking a Data Engineer with strong analytical and problem-solving skills to join our growing team. This role sits at the intersection of product analytics, customer analytics, and data engineering, enabling data-driven decision-making. The ideal candidate is passionate about turning data into actionable insights and driving commercial impact. They will be responsible for analyzing all data across the platform, generating insights, and working with stakeholders across the organization to drive data-driven decisions.
Responsibilities
● Design, develop, and maintain scalable ETL pipelines using Python to support data ingestion, transformation, and integration from multiple sources.
● Implement and manage workflow orchestration using Apache Airflow to automate data processes and ensure reliability (see the sketch after this list).
● Build and maintain RESTful APIs with Flask to facilitate data access and integration with internal systems.
● Develop and maintain A/B testing frameworks, track key metrics, and analyze experiment results to inform product strategies.
● Collaborate with cross-functional teams, including customer success, sales, and marketing, to support commercial strategies with data solutions.
● Analyze customer and operational data to identify trends, patterns, and opportunities that support revenue growth and improve customer engagement.
● Develop and maintain dashboards and reports using SQL and data visualization tools like Looker, Tableau, or Power BI to present insights to stakeholders.
● Write and optimize complex SQL queries for data analysis and support decision-making processes.
● Ensure data integrity and quality through regular audits, validations, and monitoring.
● Assist in the development of metrics and KPIs to measure the success of business initiatives.
● Partner with commercial teams to analyze performance data and support customer-facing initiatives.
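A minimal sketch of that Airflow orchestration: three placeholder ETL callables wired into a daily DAG. The DAG id, schedule, and task bodies are hypothetical.

```python
# Minimal Airflow 2.x DAG; all names and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull data from source systems

def transform():
    ...  # clean and reshape

def load():
    ...  # write to the warehouse

with DAG(
    dag_id="daily_customer_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t  # linear dependency chain
```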
What We’re Looking For
● Bachelor’s degree in Computer Science, Data Engineering, Statistics, Mathematics, or a related field.
● 3+ years of experience in data engineering, ETL development, or a related role.
● Strong proficiency in Python, with experience in building ETL workflows and data processing scripts.
● Hands-on experience with Apache Airflow for workflow orchestration and automation.
● Experience with Flask for developing RESTful APIs and data services.
● Strong SQL skills, including query optimization and database modeling.
● Proficiency with data visualization tools (e.g., Tableau, Power BI, or Looker).
● Familiarity with A/B testing methodologies and the ability to analyze experiment results.
● Solid understanding of Excel, including advanced functions and pivot tables.
● Familiarity with cloud platforms and data storage solutions (e.g., AWS, GCP, Azure) is a plus.
● Strong analytical and problem-solving skills with a keen attention to detail.
● Excellent communication skills to present data-driven insights to non-technical stakeholders, which may include customers.
● Ability to work collaboratively in a team-oriented environment.
● A proactive mindset and eagerness to learn new tools and technologies.
● Comfortable multitasking across various roles and working independently.
-
Data Engineer (Hourly rated, 80 hours per month) $2000-2500
Part-time · Full Remote · Ukraine · 3 years of experience · Intermediate
About Us
Acropolium is a trusted software development partner with 20+ years of experience delivering custom, scalable, and secure solutions. We specialize in AI, automation, cybersecurity, and cloud solutions across industries like HoReCa, logistics, healthcare, fintech, and trading. 🚀
Role Objective 🎯
We’re seeking a Data Engineer to analyze platform data, generate insights, and enable data-driven decision-making. This role bridges product analytics, customer analytics, and data engineering, driving commercial impact.
Requirements 📋
- 3+ years in data engineering or ETL development.
- Strong Python skills for ETL workflows and data scripts.
- Proficient in SQL (query optimization, modeling).
- Experience with Apache Airflow (automation) and Flask (RESTful APIs).
- Knowledge of BI tools (Looker, Tableau, or Power BI).
- Familiarity with A/B testing, cloud platforms (AWS/GCP), and Excel.
- Excellent communication and problem-solving skills.
Responsibilities 📌
- Develop scalable ETL pipelines and automate workflows using Airflow.
- Build RESTful APIs with Flask for data access and integration.
- Analyze customer/operational data to identify trends and opportunities.
- Create dashboards and reports with SQL and BI tools.
- Ensure data integrity through audits and validations.
- Support cross-team commercial strategies with data solutions.
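For the Flask responsibility, a small sketch of a read-only data endpoint; the route, metric names, and in-memory data source are invented for illustration and would be a warehouse query in practice.

```python
# Illustrative Flask data service (Flask >= 2.0 for the @app.get shortcut).
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real warehouse query.
FAKE_METRICS = {"weekly_active_users": 4210, "churn_rate": 0.031}

@app.get("/api/metrics/<metric_name>")
def get_metric(metric_name: str):
    if metric_name not in FAKE_METRICS:
        return jsonify({"error": "unknown metric"}), 404
    return jsonify({"metric": metric_name, "value": FAKE_METRICS[metric_name]})

if __name__ == "__main__":
    app.run(debug=True)  # dev server only; use gunicorn/uwsgi in production
```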
Product Tech Stack:
Key Tools: Postgres, Redshift, DBT, Periscope, Python, SQL
Others: Airflow, PeerDB, Bytewax, Lambda, SQS, Datadog
We Offer 🌱
- Career growth opportunities.
- Flexible work policy.
- 20 vacation days annually.
- Option to work remotely.
- Support for your initiatives.
-
Senior Data Engineer/ Data Analyst
Full Remote · Ukraine, Poland · 5 years of experience · Upper-Intermediate
Binariks is looking for a highly motivated Senior Data Engineer / Data Analyst who will be a significant part of a healthcare technology company that provides a platform to support patients with complex medical needs.
The platform is designed to help patients better manage their health conditions and reduce hospitalizations and other costly interventions. It also includes a mobile app that allows patients to communicate with their care team, track their symptoms, and access educational resources. The platform also provides care teams with data analytics and tools to identify high-risk patients and proactively manage their care.
What We’re Looking For:
- 5+ years of experience in data engineering
- Proven experience in data analysis, with expertise in SQL and Excel
- Proven experience with Tableau
- Proficiency in tools such as Looker, Metabase, QuickSight, or similar
- Practical experience with Snowflake
- Work experience with AWS environment
- Strong analytical and problem-solving skills
- Focus on data accuracy and quality
- Strong teamwork and collaboration skills
- Ability to work independently and manage multiple projects in a fast-paced, dynamic environment
- Passionate about our mission to improve people’s lives
- At least an Upper-Intermediate level of English
Responsibilities:
- Extracting, transforming, and loading (ETL) data from various sources using SQL
- Ensuring data accuracy, consistency, and completeness by cleaning and preprocessing data
- Ensuring efficient data storage and management by using Snowflake data storage and Tableau reporting
- Analyzing data to find trends, patterns, and anomalies
- Working with business teams to define key performance indicators (KPIs) and creating useful reports
- Creating and maintaining Excel spreadsheets and dashboards to present data clearly
- Usage of data visualization tools to explain complex data to non-technical stakeholders
- Monitoring data quality and fixing any issues that arise
- Collaboration with teams in marketing, sales, finance, and operations to understand their data needs and provide insights
Would be a plus:
- Healthcare or related experience
-
Senior Data Engineer
Full Remote · Poland · 5 years of experience · Upper-Intermediate
Location: Poland
About the Client:
Our client is a top-tier global management consulting firm, ranked among the world's most prestigious. Hundreds of Fortune 500 companies, including leading financial institutions, top media firms, technology companies, and government agencies, trust our client’s proven platform and services.
About the Project:
The project focuses on a dynamic solution that helps companies optimize promotional activities for maximum impact. It collects and validates data, analyzes promotional effectiveness, plans calendars, and seamlessly integrates with existing systems. The tool enhances vendor collaboration, negotiates better deals, and uses machine learning to optimize promotional plans, enabling companies to make informed decisions and maximize ROI.
Tech Stack:
SQL, Databricks, Python for ETL
Requirements:
- Strong experience with SQL Server and Databricks
- Proficiency in Python for ETL development and data manipulation
- Experience with data visualization libraries in Python (e.g., matplotlib, seaborn) and Power BI
- Familiarity with CI/CD tools and workflows, especially GitHub Actions
- Knowledge of T-SQL
- Experience with AWS services for data engineering
Responsibilities:
- Data Migration: Lead the migration and optimization of a product from SQL Server to Databricks, ensuring a smooth transition with minimal operational disruption
- ETL Pipeline Development: Design, develop, and maintain ETL pipelines in Python to support data ingestion, transformation, and loading
- Database Design: Create and optimize database schemas for efficient data storage and retrieval
- CI/CD Workflows: Implement and manage CI/CD workflows using GitHub Actions to automate testing, deployment, and monitoring of data pipelines
- Data Quality Management: Ensure high data quality through validation, cleansing, and monitoring processes, working closely with data scientists
- Data Visualization: Utilize Python visualization libraries (matplotlib, seaborn) and Power BI to create insightful data visualizations for stakeholders
- Collaboration and Communication: Maintain effective communication with technical and non-technical teams, including business stakeholders, data scientists, and IT leadership
- Task Management: Track and manage tasks using Jira to ensure timely project delivery
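To illustrate the visualization responsibility, a short matplotlib/seaborn sketch; the promo-uplift figures are fabricated placeholders, not project data.

```python
# Placeholder data; illustrates the matplotlib/seaborn workflow only.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "promo_week": ["W1", "W2", "W3", "W4"],
    "uplift_pct": [4.2, 6.8, 3.1, 7.5],
})

ax = sns.barplot(data=df, x="promo_week", y="uplift_pct")
ax.set(title="Promotion uplift by week (illustrative)",
       xlabel="Promo week", ylabel="Uplift, %")
plt.tight_layout()
plt.savefig("promo_uplift.png", dpi=150)
```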
Additional Information:
- English proficiency: Upper-Intermediate+
-
Senior Data Engineer
Full Remote · Poland · 5 years of experience · Upper-Intermediate
Location: Poland
About the Client:
Our client is a leading global management consulting firm, ranked among the top 5 and considered one of the most prestigious in the world. Hundreds of Fortune 500 companies, including the largest financial institutions, media companies, tech giants, and government agencies, rely on our client’s platform and services.
About the Project:
The project focuses on a dynamic solution that helps companies optimize their promotional activities for maximum impact. The tool collects and validates data, analyzes promotion effectiveness, plans calendars, and integrates seamlessly with existing systems. It enhances vendor collaboration, negotiates better deals, and leverages machine learning to optimize promotional plans, enabling companies to make informed decisions and maximize ROI.
Tech Stack:
Terraform, AWS
Requirements:
- Proven experience in Data Engineering and building data pipelines, ideally within AWS environments
- Expertise in Terraform
- Strong background in pipeline performance optimization and diagnostics
- Interest in Generative AI and Machine Learning
Responsibilities:
- Develop and enhance ETL pipelines for new data sources or optimize existing ones
- Improve existing functionalities in the Universal API or create new APIs when needed
- Implement new search functionalities to support emerging use cases
Additional Information:
- English proficiency: Upper-Intermediate+
-
Data Engineer
Full Remote · Europe except Ukraine · Product · 4 years of experience
PIN-UP Global is an international holding specializing in the development and implementation of advanced technologies, B2B solutions, and innovative products for the iGaming industry. We certify and license our products, providing customers and partners of the holding with high-quality and reliable solutions.
We are looking for a Data Engineer to join our team.
Requirements:
- 2+ years of experience with Python;
- Experience with Python libraries: Pandas, NumPy, SciPy, scikit-learn, etc.;
- Experience with SQL and RDBMS (ClickHouse), as well as NoSQL databases;
- Experience with ETL tools (Airflow);
- Experience with Git, CI/CD;
- Good knowledge of mathematics;
- AWS experience;
Will be a plus:
- Experience working on API integrations;
- NoSQL database experience;
- Understanding of ML algorithms and concepts;
- Experience of project implementation from PoC to production;
Responsibilities:
- Create an ingestion framework for multiple use cases;
- Raw data processing: transform data from unstructured to structured formats;
- Creating / maintaining data pipelines for ML models;
- Document and test data ingestion, data pipelines.
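As a sketch of the "unstructured to structured" responsibility, flattening raw JSON events into a typed, columnar table with pandas; the file name and fields are hypothetical.

```python
# Hypothetical input file and fields; shows raw JSON -> typed Parquet.
import json

import pandas as pd

with open("events.jsonl", encoding="utf-8") as fh:
    raw_events = [json.loads(line) for line in fh]

# Flatten nested payloads into columns like "user_id", "payload_bet_amount".
df = pd.json_normalize(raw_events, sep="_")

df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
df = df.dropna(subset=["user_id", "created_at"])

df.to_parquet("events.parquet", index=False)  # columnar output for analytics
```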
Our benefits to you:
☘️An exciting and challenging job in a fast-growing product holding, the opportunity to be part of a multicultural team of top professionals in Development, Architecture, Management, Operations, Marketing, Legal, Finance and others
🤝🏻Great working atmosphere with passionate experts and leaders, sharing a friendly culture and a success-driven mindset is guaranteed
🧑🏻💻Modern corporate equipment based on macOS or Windows and additional equipment are provided
🏖Paid vacations, sick leave, personal events days, days off
💵Referral program — enjoy cooperation with your colleagues and get the bonus
📚Educational programs: regular internal training, compensation for external education, attendance of specialized global conferences
🎯Rewards program for mentoring and coaching colleagues
🗣Free internal English courses
🦄Multiple internal activities: online platform for employees with quests, gamification and presents for collecting bonuses, PIN-UP team clubs for movie / book / pets lovers, etc
🎳Other benefits could be added based on your location
-
Data Engineer
Full Remote · Europe except Ukraine · 4 years of experience · Upper-Intermediate
Project Description:
Our client is a Dutch company specializing in innovative network monitoring solutions. With over three decades of experience, they have established a solid reputation in a unique international market, supported by a diverse and multicultural team. As they continue to grow and evolve, their R&D department is actively seeking new talent to join the team.
Role Overview:
In this role, you will participate in designing and developing systems for analyzing network traffic.
Collaborating closely with the software development team and network engineers, you will have the opportunity to influence the user experience of our solutions. This position allows for exploration and learning about how network traffic data can be used for monitoring and troubleshooting complex computer networks.
Our solutions require highly performant, user-friendly designs that follow the latest technical standards. Serving over 1,100 clients in more than 70 countries, you'll work in an international environment, contributing to enhanced network visibility and analytics across both physical and virtual infrastructures worldwide.
Requirements:
- Advanced knowledge of SQL and experience with analytical database management.
- Thorough understanding of database structure, indexing, and enhancement methods.
- Practical experience with columnar storage systems.
- Exceptional problem-solving abilities and keen attention to detail.
- Strong communication and teamwork skills, enabling effective collaboration within a team setting.
Will be a plus:
- Familiarity with Git for version control.
- Experience with GNU/Linux.
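The posting names no specific engine, so purely as an illustration of why columnar storage suits analytical workloads, here is a DuckDB query over a Parquet file: only the referenced columns are read from disk. DuckDB is a stand-in choice, and the file and column names are hypothetical.

```python
# DuckDB used only as a convenient stand-in for a columnar engine.
import duckdb  # pip install duckdb

# Scans just the `src_ip` and `bytes` columns of the Parquet file,
# which is the core advantage of columnar layouts for analytics.
top_talkers = duckdb.sql("""
    SELECT src_ip, SUM(bytes) AS total_bytes
    FROM 'flows.parquet'
    GROUP BY src_ip
    ORDER BY total_bytes DESC
    LIMIT 10
""").df()

print(top_talkers)
```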
-
Senior Data Engineer
Full Remote · Europe except Ukraine · 5 years of experience · Upper-Intermediate
Job Summary:
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.
Key Responsibilities:
- Design, develop, and maintain scalable data warehouse solutions.
- Build and optimize ETL/ELT pipelines for efficient data integration.
- Design and implement data models to support analytical and reporting needs.
- Ensure data integrity, quality, and security across all pipelines.
- Optimize data performance and scalability using best practices.
- Work with big data technologies such as Redshift.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Implement CI/CD pipelines for data workflows.
- Monitor, troubleshoot, and improve data processes and system performance.
- Stay updated with industry trends and emerging technologies in data engineering.
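One hedged sketch of the data-integrity responsibility above: a lightweight validation gate that fails a pipeline run early. The column names, rules, and thresholds are illustrative assumptions, not a prescribed framework.

```python
# Illustrative validation gate; column names and rules are assumptions.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    problems = []

    if df["order_id"].duplicated().any():
        problems.append("duplicate order_id values")

    for col in ("order_id", "customer_id", "amount"):
        if df[col].isna().any():
            problems.append(f"nulls in required column {col!r}")

    if (df["amount"] < 0).any():
        problems.append("negative amounts")

    if problems:
        # Failing fast keeps bad batches out of the warehouse.
        raise ValueError("data quality check failed: " + "; ".join(problems))
    return df
```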
Required Qualifications:
- 5+ years of experience in Data Engineering or a related field.
- Strong expertise in SQL and data modeling concepts.
- Hands-on experience with Airflow.
- Experience working with Redshift.
- Proficiency in Python for data processing.
- Strong understanding of data governance, security, and compliance.
- Experience in implementing CI/CD pipelines for data workflows.
- Ability to work independently and collaboratively in an agile environment.
- Excellent problem-solving and analytical skills.
-
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate
This is an exciting opportunity to work on a high-impact project, architecting an end-to-end data solution in a collaborative and forward-thinking environment. The ideal candidate will be at the forefront of delivering scalable, efficient, and best-in-class data engineering solutions supporting business-critical insights and reporting capabilities.
We are seeking a Senior Data Engineer to lead the design and implementation of robust data pipelines and warehouse solutions leveraging Snowflake, AWS, and Azure. This role will focus on ingesting and transforming data from marketing and sales systems, enabling advanced analytics and reporting capabilities. The candidate will play a key advisory role in defining and implementing best practices for data ingestion, transformation, and reporting.
Our client is a global real estate services company specializing in the management and development of commercial properties. Over the past several years, the organization has made significant strides in systematizing and standardizing its reporting infrastructure and capabilities. Due to the increased demand for reporting, the organization is seeking a dedicated team to expand capacity and free up existing resources.
Skills & Experience
- 5+ years of experience in data architecture, data engineering, or related roles.
- Deep knowledge of Python
- Ability to work independently, gather requirements and deliver results in a fast-paced environment.
- Proven expertise in designing and implementing data pipelines on AWS / Azure.
- Deep knowledge of data warehouse architectures, including Snowflake architecture and data governance.
- Good to know: Workato (or similar integration tools) and Power BI for dashboards and reporting.
- Experience in data validation, cleansing, and optimization techniques.
- Exceptional communication and stakeholder management skills
Responsibilities
- Design and implement scalable data pipelines on AWS / Azure to ingest and transform data from different sources
- Integrate SFDC data using tools like Workato (or propose alternative solutions).
- Provide advisory services on Snowflake architecture and implement best practices for ingestion, validation, cleansing, and transformation.
- Develop the initial set of data products (analytics, dashboards, and reporting) in Power BI.
- Ensure scalability, efficiency, and performance of the data infrastructure.
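A hedged sketch of the Snowflake ingestion path this posting alludes to, using the Python connector and a COPY INTO from a pre-created external stage; the account, credentials, stage, and table names are placeholders.

```python
# Placeholder account, stage, and table names throughout.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="example-account",
    user="etl_user",
    password="...",
    warehouse="LOAD_WH",
    database="MARKETING",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO from an external stage pointing at S3 or Azure Blob storage.
    cur.execute("""
        COPY INTO RAW.CAMPAIGN_EVENTS
        FROM @MARKETING_STAGE/events/
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'CONTINUE'
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```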
-
Middle/ Senior Data Engineer (Azure, Databricks)
Full Remote · Europe except Ukraine · 2 years of experience · Upper-Intermediate
We are looking for a Middle/Senior Data Engineer for one of our projects. The Client is among the top 100 of Fortune’s Global 500 companies, a leading global supplier of technology and services (Mobility Solutions, Industrial Technology, Consumer Goods, and Energy and Building Technology).
You’ll have a chance to improve the quality of life worldwide with products and services that are innovative and spark enthusiasm.
You are going to be involved in the operations of a leading department responsible for providing all the relevant data for logistics and supply chain.
Responsibilities:
- Implementation of business logic in the Data Warehouse according to the specifications
- Some business analysis to ensure the relevant data is provided in a suitable form
- Conversion of business requirements into data models
- Pipelines management (ETL pipelines in Data Factory)
- Load and query performance tuning
- Migrating from Azure Synapse to Databricks
- Working with senior staff on the customer's side, who will provide requirements, while the engineer may propose ideas of their own
Requirements:
- 2+ years of experience in the development of database systems (MS-SQL/T-SQL, Synapse SQL)
- Writing well-performing SQL code and investigating and implementing performance measures
- Experience in creation and maintenance of Azure DevOps & Data Factory pipelines
- Data warehousing / dimensional modeling
- Working within an Agile project setup
- Developing robust data pipelines with DBT
- Experience with Databricks
- Upper-Intermediate English level
Would be a plus:
- Previous working experience in Supply Chain & Logistics
- Knowledge of SAP MM Data structures
We offer:
- Flexible working format: remote, office-based, or mixed
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
-
Data Engineer (Scala experience)
Full Remote · Poland · 7 years of experience · Upper-Intermediate
Experience Level: 7+ Years
Industry: IT
Location: Remote (Poland)
Job Overview
We are seeking experienced Data Engineers with a solid background in Scala and Spark for a long-term project. You will be responsible for designing, building, and optimizing data pipelines, ensuring data integrity, and supporting advanced analytics solutions on Microsoft Azure.
Technical Requirements
- Experience: 7+ years in Data Engineering.
- Scala & Spark: 6+ years of experience working with these technologies in a cloud environment (Azure preferred).
- Programming Languages: Proficiency in Scala and Python.
- Data Tools: Hands-on experience with data pipeline frameworks.
- Databases: Expertise in both relational and non-relational databases, including SQL.
- Cloud Platforms: Strong familiarity with cloud data services, especially Microsoft Azure.
- Containerization: Knowledge of Docker, Kubernetes, and related tools is a plus.
- Skills: Strong problem-solving abilities, excellent communication, and collaboration skills.
Key Responsibilities
- Data Pipeline Maintenance: Monitor, maintain, and optimize data pipelines using Scala and SQL on Spark. Resolve data processing issues and improve performance.
- Technical Support & Version Control: Provide technical support for data analysis while managing source code and configurations using GitHub. Automate deployments with GitHub Actions/Workflows.
- Technical Leadership: Guide the development of Spark-based data applications on Azure Synapse Spark Runtime.
- Pipeline Optimization: Design and improve data pipelines based on the Medallion architecture using Azure Synapse Pipelines.
- Data Management: Supervise data ingestion, enforce data quality checks, and manage validation and error-handling workflows.
- Configuration Management: Create and maintain JSON-based configuration settings for various data zones.
- Collaboration: Work closely with cross-functional teams, including data scientists and analysts, to ensure seamless integration of data solutions.
- Logging & Auditing: Implement logging, auditing, and error-handling mechanisms using Azure Log Analytics and KQL queries.
- Testing & Quality Assurance: Conduct unit tests using ScalaTest and enforce data quality protocols for reliable processing results.
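To ground the Medallion-architecture responsibility, a short bronze-to-silver promotion sketch. The posting's stack is Scala, but this digest's examples are in Python, and these PySpark calls map one-to-one onto the Scala Dataset API; the paths and columns are hypothetical, and `spark` is the session the Synapse/Databricks runtime provides.

```python
# Bronze -> silver promotion in a Medallion layout (hypothetical paths/columns).
# Assumes a Synapse/Databricks session where `spark` is predefined.
from pyspark.sql import functions as F

bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

silver = (bronze
          .filter(F.col("order_id").isNotNull())   # drop unusable rows
          .dropDuplicates(["order_id"])            # enforce entity uniqueness
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .withColumn("processed_at", F.current_timestamp()))

(silver.write
       .format("delta")
       .mode("overwrite")
       .save("/mnt/lake/silver/orders"))
```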
Required Technical Skills
- Data Engineering
- Python
- Spark
- Scala
- SQL
- GitHub
- Microsoft Azure
- JSON
If you are passionate about building high-quality data solutions and eager to work in a dynamic environment, we would love to hear from you!