Jobs
· 30 views · 10 applications · 2d
Senior Data Engineer – (PySpark / Data Infrastructure)
Full Remote · Worldwide · Product · 5 years of experience · Advanced/Fluent
We're hiring a Senior Data Engineer to help lead the next phase of our data platform’s growth.
At Forecasa, we provide enriched real estate transaction data and analytics to private lenders and investors. Our platform processes large volumes of public data, standardizes and enriches it, and delivers actionable insights that drive lending decisions.
We recently completed a migration from a legacy SQL-based ETL stack (PostgreSQL/dbt) to PySpark, and we're now looking for a senior engineer to take ownership of the new pipeline, maintain and optimize it, and develop new data-driven features to support our customers and internal analytics.
What You’ll Do
- Own and maintain our PySpark-based data pipeline, ensuring stability, performance, and scalability.
- Design and build new data ingestion, transformation, and validation workflows.
- Optimize and monitor data jobs using Airflow, Kubernetes, and S3.
- Collaborate with data analysts, product owners, and leadership to define data needs and deliver clean, high-quality data.
- Support and mentor junior engineers working on scrapers, validation tools, and quality monitoring dashboards.
- Contribute to the evolution of our data infrastructure and architectural decisions.
Our Tech Stack
Python • PySpark • PostgreSQL • dbt • Airflow • S3 • Kubernetes • GitLab • Grafana
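For context on the kind of work this pipeline involves, below is a minimal, hedged PySpark sketch of an ingestion-and-standardization step. The bucket paths, column names, and dedup key are invented for illustration and are not Forecasa's actual schema.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: standardize raw county transaction records and
# keep one row per transaction id. Paths and columns are illustrative only.
spark = SparkSession.builder.appName("transactions_standardize").getOrCreate()

raw = spark.read.parquet("s3a://example-bucket/raw/county_transactions/")

standardized = (
    raw
    .withColumn("recorded_date", F.to_date("recorded_date", "yyyy-MM-dd"))
    .withColumn("sale_amount", F.col("sale_amount").cast("decimal(18,2)"))
    .withColumn("county", F.upper(F.trim("county")))
    .dropDuplicates(["transaction_id"])        # keep one record per transaction
    .filter(F.col("sale_amount").isNotNull())  # basic validation rule
)

(standardized
 .repartition("county")
 .write.mode("overwrite")
 .partitionBy("county")
 .parquet("s3a://example-bucket/standardized/transactions/"))
```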
What We’re Looking For
- 5+ years of experience in data engineering or backend systems with large-scale data processing.
- Strong experience with PySpark, including building scalable data pipelines and working with large datasets.
- Solid command of SQL, data modeling, and performance tuning (especially in PostgreSQL).
- Experience working with orchestration tools like Airflow, and containers via Docker/Kubernetes.
- Familiarity with cloud storage (preferably S3) and modern CI/CD workflows.
- Ability to work independently and communicate clearly in a remote, async-first environment.
Bonus Points
- Background in real estate or financial data
- Experience with data quality frameworks or observability tools (e.g., Great Expectations, Grafana, Prometheus)
- Experience optimizing PySpark jobs for performance and cost-efficiency
-
· 3 views · 1 application · 2d
On-Site Data Center Engineer (Hyper-V and Infrastructure Upgrades)
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Intermediate
Requirements:
- 4+ years of hands-on experience managing on-premise data center infrastructure, including server hardware setup, troubleshooting, and virtualization (Hyper-V preferred)
- Install, configure, and maintain physical servers for Hyper-V virtualization environments.
- Experience with server hardware setup, including RAID and remote management tools configurations, BIOS/UEFI settings, and hardware diagnostics.
- Troubleshoot and resolve hardware and network issues
- Knowledge of critical data center infrastructure (Power configurations, HVAC, Cabling)
- Demonstrated proficiency with software applications such as the Microsoft Office 365 suite and G Suite
- Strong troubleshooting methodology and attention to detail
- Working knowledge of HPE/Dell server platforms and Juniper or Arista networking equipment.
- Ability to work on-site in London to support ongoing infrastructure upgrades.
- Strong troubleshooting skills and ability to assist with real-time problem resolution.
Nice-to-Have:
- Experience with large-scale data center setups or expansions.
- Strong understanding of maintaining mission-critical power infrastructure in a live environment
- Familiarity with networking and server provisioning.
- Previous experience coordinating with remote teams to ensure smooth project execution.
- Experience using Data Center Infrastructure Management (DCIM) tools
- Experience managing vendors while working on data center build projects
Key Responsibilities:
- Assist with Hyper-V installations and configuration within the data center.
- Setting up and configuring on-premise networks for server infrastructures.
- Work closely with the remote engineering team to facilitate a smooth and efficient upgrade process.
- Document infrastructure setups and procedures.
- Provide on-site support to minimize travel requirements for the core team.
- Identify and resolve any issues that arise during installations and upgrades.
About the Project:
This project focuses on optimizing the power infrastructure within a London-based data center while deploying Hyper-V installations. The goal is to leverage all remaining power resources efficiently, ensuring a seamless and accelerated implementation. Having an on-site contractor will reduce the need for frequent travel, speeding up the project timeline and ensuring smooth execution.
-
· 22 views · 1 application · 2d
Big Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
An international IT company is looking for a Big Data Engineer. The position is fully remote.
The company helps businesses around the world build innovative software products. Founded in 2002, it now brings together 2,000+ professionals in 41 countries, building solutions for leading companies, including Fortune 500 firms.
The project is a long-term engagement with a Swiss media company that has worked with them for five years. Its portfolio includes around 140 companies in print and digital media, radio, sports media, and technology, as well as leading online marketplaces for cars, real estate, and job listings.
You will:
- Develop and optimize scalable data pipelines in Palantir Foundry, ensuring data integrity and process efficiency.
- Translate business requirements into reliable solutions using cloud technologies and data engineering best practices.
- Build and maintain ETL/ELT processes for collecting, processing, and integrating data, with security and governance requirements in mind.
- Optimize data pipeline performance, identify bottlenecks, and reduce latency during data processing.
- Explore new technologies and introduce innovative approaches to data workflows.
Required experience:
- Python;
- PySpark;
- Spoken English.
-
· 33 views · 1 application · 2d
Data Engineer
Full Remote · Europe except Ukraine · Product · 3 years of experience · Upper-Intermediate
At PAR Technology, our relentless drive for innovation and unwavering commitment to customer success are at the heart of everything we do. We lead the restaurant and retail industries by ensuring that our products – from point-of-sale systems to loyalty programs, digital ordering, restaurant operations solutions, payment services, and hardware – work “better together.” This unified approach, fueled by over 40 years of experience, amplifies our ambition to not just meet but exceed the evolving needs of our global clientele. By optimizing integrations into all leading restaurant solutions, we’re not just creating technology, we’re crafting a future where operations are streamlined, experiences are enhanced, and every interaction is an opportunity for growth.
Position Description
We are seeking a Data Engineer to join our team and help us build the next generation of data tools for our retail and commerce loyalty platform. The ideal candidate will have a passion for tackling complex challenges related to extracting insights from transactional data. In this role, you will work closely with our Application Development teams, Customer Support, Business SMEs, and stakeholders to design and implement data pipelines that transform our data into valuable insights. As we strive to scale and deploy new technologies that enhance the user experience and provide deeper insights, your expertise will be critical to our success.
Requirements:
· Experience: 2-4 years in data engineering or related roles
· Problem-Solving Skills: Proactive approach to troubleshooting and optimizing solutions
· Communication: Ability to convey technical concepts to non-technical stakeholders
· Continuous Learning: Commitment to staying updated with industry technology trends
With a side of (additional skills):
· Programming: Python, SQL
· Data Modeling: Basic understanding of data modeling concepts and practices
· Big Data Technologies: Familiarity with Spark; exposure to Databricks is a plus
· Databases: Experience with relational databases (Postgres); exposure to NoSQL
· CI/CD: Basic understanding of CI/CD tools like Github Actions
· Cloud Platforms: AWS or Azure and their data solutions
· Version Control: Git, Github
What you will be doing and owning:
· Designing and implementing efficient data pipelines
· Keeping aware of emerging technologies and best practices in data engineering
· Contributing to code reviews and providing constructive feedback to peers
We offer:
- Long-term employment;
- Competitive compensation with regular performance-based salary and career development reviews;
- 22 working days of vacation per year;
- 8 paid sick leave working days per year;
- Health insurance programme;
- Flexible working hours;
- Comfortable and cozy office or full remote;
- Sponsored company educational program, corporate library;
- Funny celebrations, team outings and company events;
- Unique and friendly environment where everyone can explore and learn new technologies.
-
· 33 views · 5 applications · 3d
Data Engineer (IRC261473)
Full Remote · Poland, Romania · 4 years of experience
Description
Upbound Group (NASDAQ: UPBD) is an omni-channel platform company committed to elevating financial opportunity for all through innovative, inclusive, and technology-driven financial solutions that address the evolving needs and aspirations of consumers. The Company’s customer-facing operating units include industry-leading brands such as Rent-A-Center® and Acima® that facilitate consumer transactions across a wide range of store-based and digital retail channels, including over 2,400 company branded retail units across the United States, Mexico, and Puerto Rico. Upbound Group, Inc. is headquartered in Plano, Texas.
The client is looking for a Data Engineer to assist their Data Engineering team.
Requirements
• 2-5 years of experience in a Data Engineering role
• Bachelor’s degree or equivalent experience in Computer Science or related technical field
• Strong Python experience
• Strong SQL experience
• Strong REST API experience
• Experience using Linux
• Experience with data warehousing: Redshift and Snowflake
• Warehouse data modeling experience is a plus
• Ability and motivation to learn new technologies quickly with minimal support and guidance
• Strong communication skills
• Experience supporting and working with cross-functional teams in a dynamic environment.
Job responsibilities
• Create and maintain data pipelines and ETL jobs using Python and SQL
• Assemble large, complex data sets that meet functional / non-functional business requirements
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Collaborate with business leaders, Executives, Data Scientists, BI Analytics, Product, Engineering, and other operational departments to ensure successful delivery of data integration and BI projects
• Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
-
· 4 views · 0 applications · 3d
Autodesk Construction Cloud (ACC) Platform Expert IRC261435
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Intermediate
Role Description
This is a full-time remote role for an Autodesk Construction Cloud (ACC) Platform Expert at GlobalLogic. The role involves day-to-day tasks related to budgeting, construction, architecture, inspection, and utilizing Microsoft Office tools to support project management and collaboration on the ACC platform.
Qualifications:
ACC Build: This is the core module likely to be used for safety workflows (e.g., issue tracking, checklists, forms, daily logs). The technical person must understand its functionalities thoroughly.
ACC Docs: For managing safety documents, plans, and procedures.
ACC Insight: For potential data analysis and reporting on safety metrics.
ACC BIM Collaborate/BIM Collaborate Pro: Understanding how safety information can be linked to the BIM model.
Autodesk Construction Cloud Platform API: The technical person should be proficient in using the ACC API to extend its functionality, integrate with other systems, and build custom applications. This includes understanding RESTful APIs, authentication methods (e.g., OAuth 2.0), and API rate limits.
Integration Capabilities: Understanding how ACC can integrate with other Autodesk products and third-party applications (Power BI).
Job Responsibilities
Communicate to the team the capabilities of the Autodesk Construction Cloud (ACC) Platform
Implement ACC workflows based on the defined business requirements
Department/Project Description
The customer is a global technology leader in power technologies and energy systems.
-
· 49 views · 6 applications · 3d
Middle/Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience
mono is a multi-product fintech company from Ukraine.
Since 2017, millions of people have been purring over our products: monobank, Experienza, Base, Market by Mono... We aim to build even more for our ecosystem. Let's bring ambitious ideas to life together!
Requirements
- 3+ years of database development experience
- Knowledge of database design specifics (OLTP, OLAP)
- Experience with ClickHouse, PostgreSQL, MongoDB
- Confident command of SQL
- Experience building ETL/ELT processes
- Experience developing with BI tools
Nice to have
- Understanding of the business unit's goals and objectives, and a focus on its results
- Basic understanding of the goals, tasks, and process of data modeling
Working conditions
- Decent compensation, reviewed regularly based on results
- Flexible working hours, with no trackers or paranoia
- Hybrid or fully remote work
- 18 working days of vacation per year (or 24 calendar days) + 2 days for emergencies + 6 days for learning
- Paid health-related days off, with no texts or sick notes required
- Corporate English courses
- 100% health insurance + mental health support
- A Platinum card from monobank and perks from our partners
🇺🇦 We support Ukraine's defense forces with our own fundraisers and foster a culture of donating.
-
· 15 views · 2 applications · 3d
Senior Data Solutions Architect (AWS)
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
About the project
Provectus is a leading AI consultancy and solutions provider specializing in Data Engineering and Machine Learning. With a focus on helping businesses unlock the power of their data, we leverage the latest technologies to build innovative data platforms that drive results. Our Data Engineering team consists of top-tier professionals who design, implement, and optimize scalable, data-driven architectures for clients across various industries.
Join us if you have the same passion for making products using AI/ML technologies, cloud services, and data engineering.
As a Data Solutions Architect, you will lead the design, architecture, and implementation of large-scale data solutions for our clients. You will act as a strategic technical leader, collaborating with cross-functional teams to deliver innovative data platforms that drive business value.
Responsibilities:
Strategic Technical Leadership
- Lead high-impact customer engagements focused on AWS Data Platform solutions;
- Define and drive technical strategies that align AWS capabilities with customer objectives, incorporating Databricks, GCP, or Azure where appropriate.
Solution Architecture and Design
- Architect and design scalable data platforms using AWS, ensuring optimal performance, security, and cost-efficiency;
- Integrate AWS services with other solutions (Databricks, Snowflake, GCP, or Azure) as needed, selecting the right technologies and tools to meet customer needs;
- Develop and maintain comprehensive architectural documentation aligned with organizational technical standards.
Pre-Sales Activities
- Partner with the sales team, providing technical expertise to position AWS-based data solutions effectively;
- Participate in customer meetings to assess technical needs, scope solutions, and identify growth opportunities;
- Create technical proposals, solution architectures, and presentations to support sales efforts and align with customer expectations;
- Assist in responding to RFPs/RFIs with accurate technical input and align solutions to client requirements;
- Demonstrate AWS capabilities through POCs and technical demonstrations to showcase proposed solutions.
Customer Engagement and Relationship Management
- Build and maintain strong relationships with key customer stakeholders, acting as a trusted advisor for data platform initiatives;
- Lead discovery workshops to understand customer requirements, KPIs, and technical constraints.
Project Leadership and Delivery
- Oversee the end-to-end implementation of AWS-based data platforms, coordinating with engineering teams to ensure successful delivery;
- Manage technical risks and develop mitigation strategies.
Innovation and Best Practices
- Stay up-to-date with the latest developments in AWS, Databricks, GCP, Azure, and cloud technologies;
- Develop and promote best practices in data platform architecture, data pipelines, and data governance.
Cross-Functional Collaboration
- Collaborate with AI/ML teams to integrate advanced analytics and machine learning capabilities into AWS and other cloud platforms;
- Work with DevOps teams to implement CI/CD pipelines and automation for data workflows.
Mentorship and Knowledge Sharing
- Mentor junior architects and engineers, fostering a culture of continuous learning and professional development;
- Contribute to knowledge-sharing initiatives through technical blogs, case studies, and industry event presentations.
Governance, Compliance, and Security
- Ensure that AWS-based data platform solutions comply with relevant security standards and regulations;
- Implement data governance frameworks to maintain data quality and integrity.
Requirements:
- Experience in data solution architecture;
- Proven experience in designing and implementing large-scale data engineering solutions on AWS;
- Experience with Databricks, GCP, or Azure solutions is required;
- Deep expertise in AWS platform services, including S3, EC2, Lambda, EMR, Glue, Redshift, AWS MSK, and EKS;
- Proficient in programming languages like Python, SQL, and Scala;
- Experience with data warehousing, ETL processes, and real-time data streaming;
- Familiarity with open-source technologies and tools in data engineering;
- AWS Certified Solutions Architect – Professional (or similar) is required;
- Excellent communication and presentation skills, with the ability to convey complex technical concepts to non-technical stakeholders;
- Strong leadership and project management skills;
- Ability to work collaboratively in a cross-functional team environment.
Will Be a Plus:
- Certifications in Databricks, GCP, or Azure;
- Experience with AWS Migration Acceleration Programs (MAP);
- Experience with AI/ML integration in data platforms;
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field;
- Contributions to open-source projects or active participation in the data engineering community.
-
· 7 views · 0 applications · 3d
Senior Data Solutions Architect (AWS)
Full Remote · Europe except Ukraine · 5 years of experience · Upper-Intermediate
About the project
Provectus is a leading AI consultancy and solutions provider specializing in Data Engineering and Machine Learning. With a focus on helping businesses unlock the power of their data, we leverage the latest technologies to build innovative data platforms that drive results. Our Data Engineering team consists of top-tier professionals who design, implement, and optimize scalable, data-driven architectures for clients across various industries.
Join us if you have the same passion for making products using AI/ML technologies, cloud services, and data engineering.
As a Data Solutions Architect, you will lead the design, architecture, and implementation of large-scale data solutions for our clients. You will act as a strategic technical leader, collaborating with cross-functional teams to deliver innovative data platforms that drive business value.
Responsibilities:
Strategic Technical Leadership
- Lead high-impact customer engagements focused on AWS Data Platform solutions;
- Define and drive technical strategies that align AWS capabilities with customer objectives, incorporating Databricks, GCP, or Azure where appropriate.
Solution Architecture and Design
- Architect and design scalable data platforms using AWS, ensuring optimal performance, security, and cost-efficiency;
- Integrate AWS services with other solutions (Databricks, Snowflake, GCP, or Azure) as needed, selecting the right technologies and tools to meet customer needs;
- Develop and maintain comprehensive architectural documentation aligned with organizational technical standards.
Pre-Sales Activities
- Partner with the sales team, providing technical expertise to position AWS-based data solutions effectively;
- Participate in customer meetings to assess technical needs, scope solutions, and identify growth opportunities;
- Create technical proposals, solution architectures, and presentations to support sales efforts and align with customer expectations;
- Assist in responding to RFPs/RFIs with accurate technical input and align solutions to client requirements;
- Demonstrate AWS capabilities through POCs and technical demonstrations to showcase proposed solutions.
Customer Engagement and Relationship Management
- Build and maintain strong relationships with key customer stakeholders, acting as a trusted advisor for data platform initiatives;
- Lead discovery workshops to understand customer requirements, KPIs, and technical constraints.
Project Leadership and Delivery
- Oversee the end-to-end implementation of AWS-based data platforms, coordinating with engineering teams to ensure successful delivery;
- Manage technical risks and develop mitigation strategies.
Innovation and Best Practices
- Stay up-to-date with the latest developments in AWS, Databricks, GCP, Azure, and cloud technologies;
- Develop and promote best practices in data platform architecture, data pipelines, and data governance.
Cross-Functional Collaboration
- Collaborate with AI/ML teams to integrate advanced analytics and machine learning capabilities into AWS and other cloud platforms;
- Work with DevOps teams to implement CI/CD pipelines and automation for data workflows.
Mentorship and Knowledge Sharing
- Mentor junior architects and engineers, fostering a culture of continuous learning and professional development;
- Contribute to knowledge-sharing initiatives through technical blogs, case studies, and industry event presentations.
Governance, Compliance, and Security
- Ensure that AWS-based data platform solutions comply with relevant security standards and regulations;
- Implement data governance frameworks to maintain data quality and integrity.
Requirements:
- Experience in data solution architecture;
- Proven experience in designing and implementing large-scale data engineering solutions on AWS;
- Experience with Databricks, GCP, or Azure solutions is required;
- Deep expertise in AWS platform services, including S3, EC2, Lambda, EMR, Glue, Redshift, AWS MSK, and EKS;
- Proficient in programming languages like Python, SQL, and Scala;
- Experience with data warehousing, ETL processes, and real-time data streaming;
- Familiarity with open-source technologies and tools in data engineering;
- AWS Certified Solutions Architect – Professional (or similar) is required;
- Excellent communication and presentation skills, with the ability to convey complex technical concepts to non-technical stakeholders;
- Strong leadership and project management skills;
- Ability to work collaboratively in a cross-functional team environment.
Will Be a Plus:
- Certifications in Databricks, GCP, or Azure;
- Experience with AWS Migration Acceleration Programs (MAP);
- Experience with AI/ML integration in data platforms;
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field;
- Contributions to open-source projects or active participation in the data engineering community.
-
· 14 views · 1 application · 3d
Middle/Senior Data Engineer (AWS)
Full Remote · Poland · 5 years of experience · Upper-Intermediate
We are seeking a talented and experienced Data Engineer to join our team at Provectus. As part of our diverse practices, including Data, Machine Learning, DevOps, Application Development, and QA, you will collaborate with a multidisciplinary team of data engineers, machine learning engineers, and application developers. You will encounter numerous technical challenges and have the opportunity to contribute to Provectus’ open source projects, build internal solutions, and engage in R&D activities, providing an excellent environment for professional growth.
Requirements:
- Experience in data engineering;
- Experience working with Cloud Solutions (preferably AWS, also GCP or Azure);
- Experience with Cloud Data Platforms (e.g., Snowflake, Databricks);
- Proficiency with Infrastructure as Code (IaC) technologies like Terraform or AWS CloudFormation;
- Experience handling real-time and batch data flow and data warehousing with tools and technologies like Airflow, Dagster, Kafka, Apache Druid, Spark, dbt, etc.;
- Proficiency in programming languages relevant to data engineering such as Python and SQL;
- Experience in building scalable APIs;
- Experience in building Generative AI Applications (e.g., chatbots, RAG systems);
- Familiarity with Data Governance aspects like Quality, Discovery, Lineage, Security, Business Glossary, Modeling, Master Data, and Cost Optimization;
- Advanced or Fluent English skills;
- Strong problem-solving skills and the ability to work collaboratively in a fast-paced environment.
Nice to Have:
- Relevant AWS, GCP, Azure, Databricks certifications;
- Knowledge of BI Tools (Power BI, QuickSight, Looker, Tableau, etc.);
- Experience in building Data Solutions in a Data Mesh architecture;
- Familiarity with classical Machine Learning tasks and tools (e.g., OCR, AWS SageMaker, MLFlow, etc.).
Responsibilities:
- Collaborate closely with clients to deeply understand their existing IT environments, applications, business requirements, and digital transformation goals;
- Collect and manage large volumes of varied data sets;
- Work directly with Data Scientists and ML Engineers to create robust and resilient data pipelines that feed Data Products;
- Define data models that integrate disparate data across the organization;
- Design, implement, and maintain ETL/ELT data pipelines;
- Perform data transformations using tools such as Spark, Trino, and AWS Athena to handle large volumes of data efficiently;
- Develop, continuously test and deploy Data API Products with Python and frameworks like Flask or FastAPI.
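As a hedged illustration of the last responsibility above (Data API products with FastAPI), here is a minimal sketch of a small data endpoint; the metric names, fields, and run command are assumptions made up for the example, not Provectus code.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example Data API")

# Hypothetical in-memory "dataset"; in practice this would come from a
# warehouse query (e.g., via Athena or Trino) behind a repository layer.
METRICS = {
    "daily_orders": {"value": 1287, "unit": "orders/day"},
    "active_users": {"value": 54210, "unit": "users"},
}

class Metric(BaseModel):
    name: str
    value: float
    unit: str

@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    """Return a single metric by name, or 404 if it is unknown."""
    record = METRICS.get(name)
    if record is None:
        raise HTTPException(status_code=404, detail=f"unknown metric: {name}")
    return Metric(name=name, **record)

# Run locally with: uvicorn example_api:app --reload
```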
About Provectus
-
· 27 views · 0 applications · 3d
Data Engineer (Azure)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate
Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client’s platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.
Key Responsibilities:
- Create and manage scalable data pipelines with Azure SQL and other databases;
- Use Azure Data Factory to automate data workflows;
- Write efficient Python code for data analysis and processing;
- Develop data reports and dashboards using Power BI;
- Use Docker for application containerization and deployment streamlining;
- Manage code quality and version control with Git.
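To make the Azure SQL plus Python side of these responsibilities concrete, here is a rough sketch assuming a hypothetical sensor-readings table and a placeholder connection string; it is illustrative, not the client's actual pipeline.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string for an Azure SQL database via the ODBC driver.
ENGINE = create_engine(
    "mssql+pyodbc://user:password@example-server.database.windows.net/waterdb"
    "?driver=ODBC+Driver+18+for+SQL+Server"
)

def load_sensor_readings(csv_path: str) -> int:
    """Clean a batch of hypothetical sensor readings and append them to Azure SQL."""
    df = pd.read_csv(csv_path, parse_dates=["measured_at"])
    df = df.dropna(subset=["station_id", "ph_level"])         # basic validation
    df["ph_level"] = df["ph_level"].clip(lower=0, upper=14)   # sanity bounds
    df.to_sql("sensor_readings", ENGINE, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    print(load_sensor_readings("readings_2024-06-01.csv"))
```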
Skills requirements:
- 3+ years of experience with Python;
- 2+ years of experience as a Data Engineer;
- Strong SQL knowledge, preferably with Azure SQL experience;
- Experience with Azure Data Factory for orchestrating data processes;
- Python skills for data manipulation;
- Expertise in Docker for app containerization;
- Familiarity with Git for managing code versions and collaboration;
- Upper-Intermediate level of English.
Optional skills (as a plus):
- Experience developing APIs with FastAPI or Flask;
- Proficiency in Databricks for big data tasks;
- Experience in a dynamic, agile work environment;
- Ability to manage multiple projects independently;
- Proactive attitude toward continuous learning and improvement.
We offer:
- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
-
· 20 views · 4 applications · 3d
Senior GCP Data Engineer – ETL
Full Remote · Europe except Ukraine · Product · 5 years of experience · Upper-Intermediate
We're looking for a Senior Data Engineer – ETL with strong GCP and Python or Java experience to join a stable, long-term project for the biggest insurance tech company in the USA. We're building a completely new, advanced data platform based on the lakehouse architecture, using bronze/silver/gold layers and a data model layer. You'll work with advanced technologies and alongside one of the best GCP data architects in the world.
About Company
Our client is a large USA product company, a global leader in insurance technologies, and is seeking an experienced Data Engineer with strong expertise in Google Cloud Platform (GCP). Join us in scaling our Data and Analytics capabilities to drive data-informed decisions across our organization. You will design, build, and maintain efficient data pipelines, optimize data workflows, and integrate data seamlessly from diverse sources.
What You Will Do:
- Build and maintain CI/CD pipelines to enhance productivity, agility, and code quality.
- Optimize data pipelines and workflows for performance and scalability.
- Design efficient processes to minimize data refresh delays, leveraging reusable components and automated quality checks.
- Develop robust, scalable data pipelines supporting business needs.
- Code BigQuery procedures, functions, and SQL database objects.
- Monitor application performance, troubleshoot issues, and implement effective monitoring and alerting.
- Lead design and build-out of production data pipelines using GCP services (BigQuery, DBT, Apache Airflow, Celigo, Python).
- Ensure data quality through rigorous testing and validation.
- Maintain thorough technical documentation and stay current with industry trends.
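As one hedged example of the Airflow-plus-BigQuery work listed above, here is a minimal DAG sketch using the Google provider's BigQuery operator; the project, dataset, table names, and SQL are placeholders rather than the client's real pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical daily job that refreshes a silver-layer table from a bronze-layer table.
with DAG(
    dag_id="example_policies_silver_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    refresh_silver = BigQueryInsertJobOperator(
        task_id="refresh_policies_silver",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `example-project.silver.policies` AS
                    SELECT policy_id, customer_id, premium, effective_date
                    FROM `example-project.bronze.policies_raw`
                    WHERE effective_date IS NOT NULL
                """,
                "useLegacySql": False,
            }
        },
        location="US",
    )
```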
What You Need to Succeed:
- 8+ years in Data/ETL Engineering, Architecture, and pipeline development, with at least 2 years focused on GCP.
- Proven experience building scalable cloud Data Warehouses (preferably BigQuery).
- 3+ years advanced SQL and strong Python or Java programming experience.
- Extensive experience optimizing ETL/ELT pipelines, data modeling, and schema design.
- Expertise with GCP services: Composer, Compute, GCS, Cloud Functions, BigQuery.
- Proficiency in DevOps tools (Git, GitLab) and CI/CD pipeline integration with GCP.
- Strong automation scripting skills, especially with GCP Composer.
- Solid understanding of Data Lake/Warehouse concepts and data modeling techniques (star schema, snowflake schema, normalization).
- Excellent problem-solving skills; able to work independently and collaboratively.
- Strong communication skills, capable of explaining technical concepts clearly.
- Bachelor's degree in Computer Science, MIS, CIS, or equivalent experience.
-
· 20 views · 8 applications · 3d
Senior GCP Data Engineer
Full Remote · Europe except Ukraine · Product · 5 years of experience · Upper-Intermediate
We're looking for a Senior Data Engineer with GCP, BigQuery, and DBT experience to join a stable, long-term project for the biggest insurance tech company in the USA. We're building a completely new, advanced data platform based on the lakehouse architecture, using bronze/silver/gold layers and a data model layer. You'll work with advanced technologies and alongside one of the best GCP data architects in the world.
About Company
Our client is a large USA product company, a global leader in insurance technologies, and is seeking an experienced Data Engineer with strong expertise in Google Cloud Platform (GCP). Join us in scaling our Data and Analytics capabilities to drive data-informed decisions across our organization. You will design, build, and maintain efficient data pipelines, optimize data workflows, and integrate data seamlessly from diverse sources.
What You Will Do:
- Design, develop, and operationalize robust, scalable data pipelines.
- Develop BigQuery procedures, functions, and SQL objects.
- Optimize ETL processes for efficiency, scalability, and performance.
- Create production data pipelines using GCP (BigQuery, Dataflow, DBT), Python, SQL, Apache Airflow, Celigo, etc.
- Deploy streaming and batch jobs on GCP (Cloud Dataflow, Java/Python).
- Build ETL frameworks with reusable components and automated quality checks.
- Develop and maintain scalable data models and schemas for analytics and reporting.
- Implement performance tuning, capacity planning, and proactive monitoring/alerting.
- Ensure rigorous data quality through testing and validation.
- Promptly troubleshoot and resolve data-related issues.
- Maintain thorough technical documentation.
- Stay current with industry trends to improve engineering practices.
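To illustrate the BigQuery development side of this role, below is a small, hedged sketch that loads a batch file into a staging table with the official google-cloud-bigquery client; the project, bucket, and table names are invented for the example.

```python
from google.cloud import bigquery

# Hypothetical batch load of a newline-delimited JSON file into a staging table.
client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    autodetect=True,  # infer the schema for this illustrative example
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/landing/claims/2024-06-01.json",
    "example-project.staging.claims_raw",
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish

table = client.get_table("example-project.staging.claims_raw")
print(f"Loaded {table.num_rows} rows into staging.claims_raw")
```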
What You Need to Succeed:
- 5+ years in Data/ETL Engineering and Architecture, with at least 2 years on GCP.
- Proven expertise in building cloud-based data warehouses (preferably BigQuery).
- Hands-on experience with GCP services: DataProc, Dataflow, BigQuery, DBT.
- Proficiency in SQL, Python, Apache Airflow, Composer, and ETL tools (Talend, Fivetran).
- Experience using Git and DBT for version control and data transformation.
- Knowledge of Data Lake/Warehouse concepts and data modeling techniques (star schema, snowflake schema, normalization).
- Strong analytical and problem-solving skills.
- Excellent communication skills; ability to explain technical concepts clearly.
- Bachelor's degree in Computer Science, MIS, CIS, or equivalent experience.
-
· 86 views · 2 applications · 4d
Senior Detective, Digital Investigations and Criminal Data Management Unit (Data Engineer) to $1950
Office Work · Ukraine (Kyiv) · 2 years of experience · Intermediate
Senior Detective, Digital Investigations and Criminal Data Management Unit, Criminal Analysis and Financial Investigations Department
The National Anti-Corruption Bureau of Ukraine (NABU) is an independent state body specializing in the investigation of high-level corruption crimes. Working at NABU is an opportunity to join the fight against corruption and contribute to building a transparent and fair society.
CONDITIONS:
- Work in Kyiv.
- Full-time employment.
- Base salary: UAH 79,939.00.*
- Allowances: in accordance with Article 23 of the Law of Ukraine "On the National Anti-Corruption Bureau of Ukraine".
- Under para. 24, part 1, Article 23 of the Law of Ukraine "On Mobilization Preparation and Mobilization", National Bureau employees liable for military service are NOT subject to conscription during mobilization (deferral).
REQUIREMENTS:
- Knowledge of and practical skills in one or more of the following programming languages: Python (Django, Flask, FastAPI) / JavaScript (Node.js) / Java / Dart / Go (Golang) / C# (.NET) / Rust / PHP, or other programming languages.
- Ability to work with APIs and integrate external services.
- IT solutions and automation: experience developing and implementing information and communication systems and automating business processes.
- Understanding of data processing principles; basic knowledge of methods for collecting and analyzing information, preparing analytical documentation, and visualizing data. Experience with data collection and analysis tools.
- Knowledge of the basics of working with databases (SQL/NoSQL).
- Higher education (Master's, Specialist, or Bachelor's degree obtained in 2016 or later) in one of the following fields: Electronics, Automation and Electronic Communications; Information Technology; Mathematics and Statistics; Social and Behavioral Sciences (Economics).
- At least two years of experience in one of the following areas: information technology, audit, risk management, systems and business analysis, corporate (economic) intelligence, or data management.
- Fluent command of the state language; proficiency in a foreign language (English, French, or German) at Upper-Intermediate (B2) level or higher is an additional advantage.
SCOPE OF WORK:
- Analysis and preparation for development:
Reviewing the technical specification. Identifying the required tools and technologies, estimating data volumes, and choosing the optimal approach.
- Business process automation:
Identifying tasks that can be optimized with software solutions. Developing and deploying automation tools (e.g., data parsing, automatic form filling, document generation).
- Software development:
Creating scripts, modules, or full applications for automation, using programming languages within your area of competence. Designing and implementing APIs for integration with other systems. Related work with databases.
- Testing solutions:
Verifying code on test data. Detecting errors, testing edge cases (empty data, large volumes), and so on.
- Optimization.
- Systems integration:
Connecting the developed solutions to other platforms via APIs or direct database queries. Setting up data exchange between different services. Verifying the stability of integrations under real-world loads.
- Providing, within your competence, information and analytical support for the National Bureau's activities aimed at preventing, detecting, stopping, investigating, and solving corruption and other criminal offenses within the Bureau's jurisdiction, as well as other offenses.
- Taking part in investigative (search) and other procedural actions as an information technology specialist.
- Collecting, processing, and analyzing data from digital devices using software and hardware tools.
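As a small, hedged illustration of the API integration and data collection work described above, the following Python sketch pages through records from a hypothetical external registry API with basic error handling; the URL, parameters, and response fields are invented and do not correspond to any actual NABU system.

```python
import requests

BASE_URL = "https://registry.example.gov.ua/api/v1/records"  # hypothetical endpoint

def fetch_records(company_code: str, timeout: int = 30) -> list[dict]:
    """Fetch all pages of records for a company code from the example API."""
    records, page = [], 1
    while True:
        response = requests.get(
            BASE_URL,
            params={"code": company_code, "page": page},
            timeout=timeout,
        )
        response.raise_for_status()        # fail loudly on HTTP errors
        payload = response.json()
        records.extend(payload.get("items", []))
        if not payload.get("has_next"):     # stop when the last page is reached
            break
        page += 1
    return records

if __name__ == "__main__":
    rows = fetch_records("12345678")
    print(f"Fetched {len(rows)} records")
```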
COMPETITION STAGES:
- Submission of documents.
- Testing (Level 1 legislation knowledge test – the list of questions is available at https://nabu.gov.ua/perelik-pytan-do-kvalifikaciynogo-ispytu; a general abilities test; psychological testing).
- Interview.
*Base salaries of National Bureau employees undergoing probation are set with a reduction factor of 1.5.
**At the time of submitting documents, candidates must hold or obtain the State Certificate of proficiency in the state language.
-
· 20 views · 0 applications · 4d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate
We are looking for a Senior Data Engineer who will play a key role in optimizing and automating our data pipeline processes.
You will work with our Customer, a leading international consulting company based in Germany, which is expanding its product portfolio for existing and new clients. You will be working on a project that involves high-load platforms and large volumes of data and analytics.
Requirements:
- Ability to work as an Azure Databricks / Python / PySpark developer
- 5+ years of experience with Python (PySpark and Databricks)
- Hands-on experience implementing complex user stories
- Expertise in cloud-based big data integration and infrastructure tech stacks using Azure Databricks
- A data engineering mindset: building and maintaining pipelines and monitoring critical pipelines closely
- Strong problem-solving and analytical abilities
- Excellent communication and collaboration skills
- A passion for delivering high-quality software solutions and driving innovation.
- LangChain.
- Azure.
- GitLab.
- OOP.
- Upper-intermediate English.
Responsibilities:
- Build, optimize, and maintain ETL/ELT pipelines using Azure Databricks, Python, and PySpark.
- Apply OOP principles to write clean, scalable code.
- Work closely with development, analytics, and DevOps teams to solve complex challenges.
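A hedged sketch of how one pipeline step might combine PySpark with the OOP approach mentioned above; the table names and columns are invented for illustration and are not the customer's schema.

```python
from pyspark.sql import DataFrame, SparkSession, functions as F
from pyspark.sql.window import Window

class DeduplicateStep:
    """Reusable pipeline step: keep the newest record per business key."""

    def __init__(self, key_column: str, order_column: str):
        self.key_column = key_column
        self.order_column = order_column

    def run(self, df: DataFrame) -> DataFrame:
        window = Window.partitionBy(self.key_column).orderBy(F.col(self.order_column).desc())
        return (
            df.withColumn("_rn", F.row_number().over(window))
              .filter(F.col("_rn") == 1)
              .drop("_rn")
        )

if __name__ == "__main__":
    spark = SparkSession.builder.getOrCreate()
    # Hypothetical Databricks tables; on a real workspace these would be
    # Unity Catalog or Hive metastore table names.
    orders = spark.table("raw.orders")
    latest = DeduplicateStep(key_column="order_id", order_column="updated_at").run(orders)
    latest.write.mode("overwrite").saveAsTable("curated.orders_latest")
```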
We offer:
— A working environment with plenty of scope for creativity.
— Independent work with a well-rehearsed team in the background.
— Remote work.
— Competitive salary with a long-term contract.
— Attractive compensation package (20 days of vacation, 5 sick days — 100% compensation).
— Trust and support from the management team.
— High degrees of responsibility and autonomy.
— Agile teams where your ideas and solutions are valued.