Jobs
· 59 views · 1 application · 18d
Team/Tech Lead Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
Looking for a Team Lead Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
As a Team Lead, you will be an expert and a leader, playing a crucial role in guiding the development team, making technical decisions, and ensuring the successful delivery of high-quality software products.
Skills requirements:
• 5+ years of experience with Python;
• 4+ years of experience as a Data Engineer;
• Knowledge of data algorithms and data structures is a MUST;
• Excellent experience with Pandas;
• Excellent experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Experience with Apache Kafka and Apache Spark (PySpark), illustrated in the sketch after this list;
• Experience with Hadoop;
• Familiarity with Amazon Web Services;
• Understanding of cluster computing fundamentals;
• Experience working with high-volume tables (100M+ rows).
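To make the Kafka/Spark requirement concrete, here is a minimal, illustrative PySpark Structured Streaming sketch; the broker address, topic, and event schema are assumptions invented for the example, not details of the actual project.

```python
# Minimal sketch: consume JSON events from Kafka with Spark Structured
# Streaming and aggregate per user. Broker, topic, and schema are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("events-aggregation").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", LongType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Running count of events per user and type.
counts = events.groupBy("user_id", "event_type").count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```

At the 100M+-row scale the posting mentions, the same aggregation would typically be written to a partitioned table rather than the console.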
Optional skills (as a plus):
• Experience with scheduling and monitoring (Databricks, Prometheus, Grafana);
• Experience with Airflow;
• Experience with Snowflake, Terraform;
• Experience in statistics;
• Knowledge of DS and Machine Learning algorithms.
Key responsibilities:
• Manage the development process and support team members;
• Conduct R&D work with new technologies;
• Maintain high-quality coding standards within the team;
• Create ETL pipelines and data management solutions (API, integration logic);
• Elaborate different data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models;
• Develop and implement workflows for receiving and transforming new data sources to be used in the company;
• Develop the existing Data Engineering infrastructure to make it scalable and prepare it for projected future volumes;
• Identify, design, and implement process improvements (e.g., automating manual processes, infrastructure redesign).
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 45 views · 11 applications · 12d
Databricks Solutions Architect
Full Remote · Worldwide · 7 years of experience · Upper-Intermediate
Requirements
- 7+ years of experience in data engineering, data platforms & analytics
- Completed Data Engineering Professional certification & required classes
- Minimum 6-8+ projects delivered with hands-on Databricks development experience
- Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with deep expertise in at least one
- Deep experience with distributed computing with Spark with knowledge of Spark runtime internals
- Familiarity with CI/CD for production deployments
- Current knowledge across the breadth of Databricks product and platform features
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
· 47 views · 3 applications · 14d
Senior Software Data Engineer
Full Remote · Worldwide · Product · 7 years of experience · Upper-Intermediate
Join Burny Games, a Ukrainian company that creates mobile puzzle games. Our mission is to create top-notch innovative games to challenge players' minds daily.
What makes us proud?
- In just two years, we've launched two successful mobile games worldwide: Playdoku and Colorwood Sort. We have paused some projects to focus on making our games better and helping our team improve.
- Our games have been enjoyed by over 8 million players worldwide, and we keep attracting more players.
- We've created a culture where we make decisions based on data, which helps us grow every month.
- We believe in keeping things simple, focusing on creativity, and always searching for new and effective solutions.
We are seeking an experienced software engineer to create a high-performance, scalable, and flexible real-time analytics platform.
You will be a key member of our team, responsible for the architecture, development, and optimization of services for processing and analyzing large volumes of data (terabytes).
Required professional experience:
- 5+ years of experience in developing distributed systems or systems at scale.
- Willingness to upskill in Go; proficiency in one of the following languages: Go, Python, Java/Scala/Kotlin, Rust.
- Rock-solid computer science fundamentals.
- Experience with any NoSQL (preferably Cassandra) and OLAP (preferably ClickHouse) databases.
- Experience with a distributed log-based messaging system (one of: Kafka, NATS JetStream, etc.)
- Experience with Kubernetes (Helm, ArgoCD).
Desired Skills:
- Experience with common networking protocols.
- Experience working with observability tools, such as metrics and traces.
- Database fundamentals.
- Understanding of scalable system design principles and architectures for real-time data processing.
- Experience with a distributed processing engine (one of: Flink, Spark).
- Experience with an open table format (one of: Apache Iceberg, Delta Lake, Hudi).
- Experience with cloud platforms (one of: Google Cloud, AWS, Azure).
Key Responsibilities:
- Design and develop the architecture of a behavioral analytics platform for real-time big data processing.
- Implement key engine systems: data collection, event processing, aggregation, and preparing data for visualization (see the sketch after this list).
- Optimize the platform performance and scalability for handling large data volumes.
- Develop tools for user behavior analysis and product metrics.
- Collaborate with data analysts and product managers to integrate the engine into analytics projects.
- Research and implement new technologies and methods in data analysis.
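As a loose sketch of the event-collection and aggregation systems described above, the following Python example consumes game events from Kafka and batch-inserts them into ClickHouse. The libraries (kafka-python, clickhouse-driver), topic, and table names are assumptions for illustration only.

```python
# Minimal sketch: consume JSON game events from Kafka and batch-insert
# them into ClickHouse for real-time analytics. Topic, table, and
# connection details are hypothetical.
import json

from kafka import KafkaConsumer          # pip install kafka-python
from clickhouse_driver import Client     # pip install clickhouse-driver

consumer = KafkaConsumer(
    "game-events",                        # hypothetical topic
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
ch = Client(host="clickhouse")            # hypothetical host

BATCH_SIZE = 1000
batch = []
for message in consumer:
    event = message.value
    batch.append((event["user_id"], event["event_type"], event["ts"]))
    if len(batch) >= BATCH_SIZE:
        # Columnar OLAP stores favor large batched inserts over
        # row-by-row writes.
        ch.execute(
            "INSERT INTO events (user_id, event_type, ts) VALUES",
            batch,
        )
        batch.clear()
```

Batching matters here: ClickHouse ingests large blocks far more efficiently than single rows, which is central to keeping a real-time pipeline scalable.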
What we offer:
- 100% payment of vacations and sick leave [20 days vacation, 22 days sick leave], medical insurance.
- A team of the best professionals in the games industry.
- Flexible schedule [start of work from 8 to 11, 8 hours/day].
- L&D center with courses.
- Self-learning library, access to paid courses.
- Stable payments.
The recruitment process:
CV review → Interview with talent acquisition manager → Interview with hiring manager → Job offer.
If you share our goals and values and are eager to join a team of dedicated professionals, we invite you to take the next step.
· 38 views · 2 applications · 28d
Senior Data Engineer (with Go)
Full Remote · Bulgaria, Spain, Poland, Romania, Ukraine · Product · 5 years of experience · Upper-Intermediate
Who we are
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product
The product of our client stands at the forefront of advanced threat detection and response, pioneering innovative solutions to safeguard businesses against evolving cybersecurity risks. It is a comprehensive platform that streamlines security operations, empowering organizations to swiftly detect, prevent, and automate responses to advanced threats with unparalleled precision and efficiency.
About the Role
We are looking for a proactive, innovative, and responsible Senior Big Data Engineer with extensive knowledge and experience in Go (GoLang), streaming and batch processing, and building a DWH from scratch. Join our high-performance team to work with cutting-edge technologies in a dynamic and agile environment.
Key Responsibilities:
- Design & Development: Architect, develop, and maintain robust distributed systems with complex requirements, ensuring scalability and performance.
- Collaboration: Work closely with cross-functional teams to ensure the seamless integration and functionality of software components.
- System Optimization: Implement and optimize scalable server systems, utilizing parallel processing, microservices architecture, and security development principles.
- Database Management: Effectively utilize SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases in system design and development.
- Big Data Tools: Leverage big data tools such as Spark or Flink to enhance system performance and scalability (experience with these tools is advantageous).
- Deployment & Management: Demonstrate proficiency in Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.
Required Competence and Skills:
- At least 5 years of experience in Data Engineering domain
- At least 2 years of experience with GoLang
- Proficiency in SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases
- Experienced with big data tools such as Spark or Flink to enhance system performance and scalability
- Proven experience with Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.
- Ability to work effectively in a collaborative team environment
- Excellent communication skills and a proactive approach to learning and development
Advantages:
- Experience in data cybersecurity domain
- Experience in a growing startup product
Why Us
We utilize a remote working model, providing a powerful workstation and a co-working space of your choice if you need it.
We offer a highly competitive package
We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in)
We prioritize the professional growth and well-being of our team members. Hence, we organize various social events throughout the year to foster connections and promote wellness
· 29 views · 4 applications · 11d
Middle Software Developer (Data Researcher/Data Integration)
Full Remote · Ukraine · 3 years of experience · Upper-Intermediate
Our partner is a leading technology company transforming the way investigations are done with smart tools that help teams collect, analyze, and use data effectively. Their AI-powered platform simplifies case management, data visualization, and reporting, making it a valuable solution for industries like law enforcement, financial investigations, and cyber threat intelligence. With deep expertise in business intelligence and data, they help organizations make faster and better decisions. They are focused on innovation and collaboration, creating a positive and dynamic workplace.
You'll collaborate closely with the team of engineers and data wizards to develop solutions that make a tangible impact in the world of security. Join a team that pushes boundaries, embraces challenges, and has a great time doing it.
P.S. Being the first to uncover hidden insights in data? Just one of the perks.
Required Skills
- 2.5+ years of experience in data engineering or software development
- Experience with Python scripting
- Upper-Intermediate level of English
- Readiness to collaborate with a remote team
- Strong problem-solving abilities and attention to detail
- Can-do attitude
Will be a Bonus
- Familiarity with integrating APIs and handling various data sources
- Ability to anticipate and handle multiple potential edge cases related to data consistency
Your Day-to-Day Responsibilities Will Include
- Researching and analyzing various APIs and data sources
- Integrating new data sources into the existing system for seamless data flow (see the sketch after this list)
- Collaborating closely with the team to define and implement data solutions
- Identifying and addressing multiple potential edge cases in data integration
- Planning your work, estimating effort, and delivering on deadlines
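For illustration, a minimal Python sketch of the API-integration work above: pulling records from a paginated REST source with timeouts, retries, and basic edge-case handling. The endpoint and response shape are hypothetical, not the partner's actual data sources.

```python
# Minimal sketch: fetch paginated records from a REST data source with
# timeouts, retries, and basic edge-case handling. The endpoint and
# response shape are hypothetical.
import time

import requests

BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint

def fetch_all(max_retries: int = 3) -> list[dict]:
    records, page = [], 1
    while True:
        for attempt in range(max_retries):
            try:
                resp = requests.get(BASE_URL, params={"page": page}, timeout=10)
                resp.raise_for_status()
                break
            except requests.RequestException:
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)  # exponential backoff
        payload = resp.json()
        batch = payload.get("items", [])
        if not batch:  # empty page: end of data
            return records
        # Edge case: skip malformed rows instead of failing the whole run.
        records.extend(r for r in batch if "id" in r)
        page += 1
```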
We Offer
Constant professional growth and improvement:
- Challenging projects with cutting-edge technologies
- Close cooperation with clients and industry leaders
- Support for personal development and mentorship
Comfortable, focused work environment:
- Remote work encouraged and supported
- Minimal bureaucracy
- Flexible schedule
- High-quality hardware provided
And, of course, all the traditional benefits you'd expect in the IT industry.
· 128 views · 2 applications · 28d
Data Engineer (Azure)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate
Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client's platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Key Responsibilities:
- Create and manage scalable data pipelines with Azure SQL and other databases (a short sketch follows this list);
- Use Azure Data Factory to automate data workflows;
- Write efficient Python code for data analysis and processing;
- Develop data reports and dashboards using Power BI;
- Use Docker for application containerization and deployment streamlining;
- Manage code quality and version control with Git.
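A minimal sketch of the first responsibility, assuming pandas with SQLAlchemy and the Microsoft ODBC driver; the server, credentials, input file, and table name are placeholders invented for the example.

```python
# Minimal sketch: clean a dataset with pandas and load it into Azure SQL.
# Server, database, credentials, and table name are placeholders.
import pandas as pd
from sqlalchemy import create_engine

# Azure SQL over ODBC; requires pyodbc and the Microsoft ODBC driver.
engine = create_engine(
    "mssql+pyodbc://user:password@myserver.database.windows.net:1433/mydb"
    "?driver=ODBC+Driver+18+for+SQL+Server"
)

df = pd.read_csv("measurements.csv")           # hypothetical input
df = df.dropna(subset=["sensor_id", "value"])  # basic cleaning step
df["ingested_at"] = pd.Timestamp.utcnow()

# Append into a staging table; chunksize keeps memory bounded.
df.to_sql("stg_measurements", engine, if_exists="append",
          index=False, chunksize=1000)
```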
Skills requirements:
- 3+ years of experience with Python;
- 2+ years of experience as a Data Engineer;
- Strong SQL knowledge, preferably with Azure SQL experience;
- Python skills for data manipulation;
- Expertise in Docker for app containerization;
- Familiarity with Git for managing code versions and collaboration;
- Upper-Intermediate level of English.
Optional skills (as a plus):
- Experience with Azure Data Factory for orchestrating data processes;
- Experience developing APIs with FastAPI or Flask;
- Proficiency in Databricks for big data tasks;
- Experience in a dynamic, agile work environment;
- Ability to manage multiple projects independently;
- Proactive attitude toward continuous learning and improvement.
We offer:
- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
· 44 views · 3 applications · 18d
Senior Data Engineer
Full Remote · Ukraine · Product · 5 years of experience · Upper-Intermediate
Simulmedia is looking for an experienced and dynamic Data Engineer with a curious and creative mindset to join our Data Services team. The ideal candidate will have a strong background in Python, SQL and REST API development. This is an opportunity to join a team of amazing engineers, data scientists, product managers and designers who are obsessed with building the most advanced streaming advertising platform in the market. As a Data Engineer you will build services and data processing systems to support our platform. You will work on a team that empowers the other teams to use our huge amount of data efficiently. Using a large variety of technologies and tools, you will solve complicated technical problems and build solutions to make our services robust and flexible and our data easily accessible throughout the company.
Only for candidates from Ukraine. This position is located in either Kyiv or Lviv, Ukraine. The team is located in both Kyiv and Lviv and primarily works remotely with occasional team meetings in our offices.
Responsibilities:
- Build products that leverage our data and solve problems that tackle the complexity of streaming video advertising
- Develop containerized applications, largely in Python, that are deployed to the Cloud
- Work within an Agile team that releases cutting-edge new features regularly
- Learn new technologies, and make an outsized impact on our industry-leading tech platform
- Take a high degree of ownership and freedom to experiment with new technologies to improve our software
- Develop maintainable code and fault tolerant solutions
- Collaborate cross-functionally with product managers and stakeholders across the company to deliver on product roadmap
- Join a team of passionate engineers in search of elegant solutions to hard problems
Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
- 7+ years of work experience as a data engineer
- Proficiency in Python and using it as the primary development language in recent years
- Proficiency in SQL and relational databases (Postgres, MySQL, etc)
- Ability to design complex data models (normalized and multi-dimensional)
- Experience building REST services (Flask, Django, aiohttp, etc.; see the sketch below)
- Experience developing, maintaining, and debugging problems in large server-side code bases
- Good knowledge of engineering best practices and testing (unit test, integration test)
- The desire to take a high level of ownership of the things you work on
- Ability to learn new things quickly, maintain a high bar for quality, and be pragmatic
- Must be able to communicate with U.S.-based teams
- Experience with AWS is a plus
- Ability to work 11 am to 8 pm EEST
Our Tech Stack:
- Almost everything we run is on AWS
- We mostly use Python, Ruby and Go
- For data, we mostly use Postgres and Redshift
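Purely as an illustration of the REST experience the qualifications call for, here is a minimal Flask service; the route and payload fields are invented and are not Simulmedia's actual API.

```python
# Minimal sketch of a REST endpoint in Flask: accept a campaign record,
# validate it, and return it with an id. Route and fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
_campaigns: dict[int, dict] = {}  # in-memory store for the sketch

@app.post("/campaigns")
def create_campaign():
    payload = request.get_json(silent=True) or {}
    if "name" not in payload:
        return jsonify(error="'name' is required"), 400
    campaign_id = len(_campaigns) + 1
    _campaigns[campaign_id] = {"id": campaign_id, "name": payload["name"]}
    return jsonify(_campaigns[campaign_id]), 201

@app.get("/campaigns/<int:campaign_id>")
def get_campaign(campaign_id: int):
    campaign = _campaigns.get(campaign_id)
    if campaign is None:
        return jsonify(error="not found"), 404
    return jsonify(campaign)

if __name__ == "__main__":
    app.run(debug=True)
```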
· 43 views · 1 application · 21d
Senior Data Engineer
Full Remote · Poland · Product · 5 years of experience · Upper-Intermediate
Project
Toshiba is the global market share leader in retail store technology. As retailβs first choice for integrated in-store solutions, Toshibaβs innovative technology enhances customer engagement, transforms in-store experience, and accelerates the digital transformation of the retail industry. Today, Toshiba is in a position wherein it defines dominating practices of retail automation and advances the future of retail.
The product is aimed at comprehensive retail chain automation and covers all work processes of large retail chain operators. The product covers retail store management, warehouse management, payment systems integration, logistics management, hardware/software store automation, etc.
The product is already adopted by the market, and the biggest US and global retail operators are among the clients.
Technology Stack
Azure Databricks, Apache Spark (PySpark), Delta Lake, ADF, Synapse, Python, SQL, Power BI, MongoDB/CosmosDB, PostgreSQL, Terraform, Jenkins
What you will do
We are looking for an experienced Azure Databricks Engineer to join our team and contribute to building and optimizing large-scale data solutions. You will be responsible for working with Azure Databricks and Power BI, writing efficient Python and SQL scripts, optimizing data workflows to ensure performance and scalability, and building meaningful reports. A short illustrative sketch of this kind of workflow follows.
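The sketch below is a rough illustration only, assuming a Databricks environment with Delta tables; the paths and column names are invented, not details of the Toshiba project.

```python
# Illustrative Databricks-style sketch: read a Delta table, aggregate
# daily sales, and write the result back as Delta. Paths and columns
# are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales").getOrCreate()

sales = spark.read.format("delta").load("/mnt/retail/sales")  # hypothetical path

daily = (
    sales
    .withColumn("sale_date", F.to_date("sold_at"))
    .groupBy("store_id", "sale_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("order_id").alias("orders"),
    )
)

# Overwrite the aggregate table; a Power BI dataset could read from it.
(daily.write.format("delta")
      .mode("overwrite")
      .save("/mnt/retail/daily_sales"))
```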
Must-have skills
- Bachelor's or Master's degree in Data Science, Computer Science, or a related field.
- 3+ years of experience as a Data Engineer or in a similar role.
- Proven experience in data analysis, data warehousing, and data reporting.
- Proven experience with Azure Databricks (Python, PyTorch) and Azure infrastructure
- Experience with Business Intelligence tools like Power BI.
- Proficiency in querying languages like SQL.
- Strong problem-solving skills and attention to detail.
- Proven ability to translate business requirements into technical solutions.
Nice-to-have skills
- Knowledge and experience in e-commerce/retail
· 41 views · 4 applications · 24d
Senior Python Engineer - Data Platform
Full Remote · Worldwide · Product · 8 years of experience · Upper-Intermediate
Duties and responsibilities:
- Integration of blockchains, Automated Market Maker (AMM) protocols, and bridges within product's platform;
- Active participation in development and maintenance of our data pipelines and backend services;
- Integrate new technologies into our processes and tools;
- End-to-end feature designing and implementation;
- Code, debug, test and deliver features and improvements in a continuous manner;
- Provide code review, assistance, and feedback for other team members.
Required:
- 8+ years of experience developing Python backend services and APIs;
- Advanced knowledge of SQL: the ability to write, understand, and debug complex queries;
- Basic data warehousing and database architecture principles;
- POSIX/Unix/Linux ecosystem knowledge;
- Strong knowledge and experience with Python and API frameworks such as Flask or FastAPI;
- Knowledge of blockchain technologies or willingness to learn;
- Experience with PostgreSQL database system;
- Knowledge of Unit Testing principles;
- Independent and autonomous way of working;
- Team-oriented work and good communication skills are an asset.
Would be a plus:
- Practical experience with big data frameworks (Kafka, Spark, Flink), data lakes, and analytical databases such as ClickHouse;
- Knowledge of Docker, Kubernetes and Infrastructure as Code - Terraform, Ansible, etc;
- Passion for Bitcoin and Blockchain technologies;
- Experience with distributed systems;
- Experience with open-source solutions;
- Experience with Java or willingness to learn.
· 42 views · 4 applications · 6d
Strong Middle/Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
Job Description
We are looking for an experienced and skilled Senior Data Engineer to work on building Data Processing pipelines with 4+ years of commercial experience in Spark (BigData solutions).
Experience in building Big Data solutions on AWS or other cloud platforms
Experience in building Data Lake platforms
Strong practical experience with Apache Spark.
Hands-on experience in building data pipelines using Databricks
Hands-on experience in Python, Scala
Upper-Intermediate English level
Bachelor's degree in Computer Science, Information Systems, Mathematics, or a related technical discipline
Job Responsibilities
Responsible for the design and implementation of data integration pipelines
Perform performance tuning and improve functionality with respect to NFRs.
Contribute design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, and data extraction, transformation, and loading across multiple data stores
Take part in the full-cycle of feature development (requirements analysis, decomposition, design, etc)
Design, develop and implement data platform enterprise solutions with other talented engineers in a collaborative team environment.
Contribute to the overall quality of development services through brainstorming, unit testing, and proactively offering improvements and innovations.
Department/Project Description
Is it even possible to sleep not only deeply, but smartly? Yes, it is, if the GlobalLogic and Sleep Number teams get down to business! Sleep Number is a pioneer in the development of technologies for monitoring sleep quality. Smart beds have already provided 13 million people with quality sleep, and this is just the beginning.
The GlobalLogic team is a strategic partner of Sleep Number in the development of innovative technologies to improve sleep. By joining the project, you will be dealing with technologies that have already turned the smart bed into a health improvement and wellness center. The world's largest biometric database allows building necessary infrastructure for future inventions.
Join the team and get ready to innovate, lead the way, and improve lives!
· 39 views · 1 application · 21d
Senior Data Engineer/Lead Data Engineer (Healthcare domain)
Full Remote · EU · 5 years of experience · Upper-Intermediate
We are looking for a Senior Data Engineer with extensive experience in data engineering who is passionate about making an impact. Join our team, where you will have the opportunity to drive innovation, improve solutions, and help us reach new heights!
If you're ready to take your expertise to the next level and contribute significantly to the success of our projects, submit your resume now.
Our client is a leading medical technology company. The portfolio of products, services, and solutions is central to clinical decision-making and treatment pathways. Patient-centered innovation has always been at the core of the company, which is committed to improving patient outcomes and experiences, no matter where they live or what challenges they face. The company is innovating sustainably to provide healthcare for everyone, everywhere.
The Project's mission is to enable healthcare providers to increase their value by equipping them with innovative technologies and services in diagnostic and therapeutic imaging, laboratory diagnostics, molecular medicine, and digital health and enterprise services.
Responsibilities:
- Work closely with the client (PO) as well as other team members to clarify tech requirements and expectations
- Contribute to the design, development, and optimization of squad-specific data architecture and pipelines adhering to defined ETL and Data Lake principles
- Implement architectures using Azure Cloud platforms (Data Factory, Databricks, Event Hub)
- Discover, understand, and organize disparate data sources, structuring them into clean data models with clear, understandable schemas
- Evaluate new tools for analytical data engineering or data science and suggest improvements
- Contribute to training plans to improve analytical data engineering skills, standards, and processes
Requirements:
- Solid experience in data engineering and cloud computing services, specifically in the areas of data and analytics (Azure preferred)
- Strong conceptual knowledge of data analytics fundamentals, including dimensional modeling, ETL, reporting tools, data governance, data warehousing, and handling both structured and unstructured data
- Expertise in SQL and at least one programming language (Python/Scala)
- Excellent communication skills and fluency in business English
- Familiarity with Big Data DB technologies such as Snowflake, BigQuery, etc. (Snowflake preferred)
- Experience with database development and data modeling, ideally with Databricks/Spark
· 38 views · 1 application · 21d
Senior Python Data Engineer (only Ukraine)
Ukraine · Product · 6 years of experience · Upper-Intermediate
The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.
Requirements:
- At least 5 years of experience with Python
- At least 3 years of experience processing structured terabyte-scale data (structured datasets of several hundred gigabytes and up).
- Solid experience with SQL and NoSQL (ideally GCP storage: Firestore, BigQuery, Bigtable, and/or Redis, Kafka), including advanced DML skills.
- Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc).
- Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark); a minimal Airflow sketch follows this list.
- Experience in automated test creation (TDD).
- Fluent spoken English.
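As a small illustration of the orchestration tools named above, here is a minimal Airflow DAG with an extract task feeding a load task; the task bodies and identifiers are placeholders, not the company's actual pipeline.

```python
# Minimal Airflow sketch: a daily DAG with an extract task feeding a
# load task. Task bodies and names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull the day's events from a source system.
    print("extracting events for", context["ds"])

def load(**context):
    # Placeholder: write transformed events into the warehouse.
    print("loading events for", context["ds"])

with DAG(
    dag_id="events_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # "schedule" in newer Airflow versions
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```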
Advantages:
- Not being afraid of mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although ML knowledge is not required for this position, a passion for algorithms would be awesome).
- Experience in any OOP language.
- Experience in DevOps (familiarity with Docker and Kubernetes).
- Experience with GCP services would be a plus.
- Experience with IaC would be a plus.
- Experience in Scala.
What we offer:
- 20 working days' vacation;
- 10 paid sick leaves;
- public holidays;
- equipment;
- accountant helps with documents;
- many cool team activities.
Apply now and start a new page of your fast career growth with us!
· 37 views · 2 applications · 17d
Senior Python Data Engineer (only Ukraine)
Ukraine · Product · 5 years of experience · Upper-Intermediate
The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.
Requirements:
- At least 5 years of experience with Python
- At least 3 years of experience processing structured terabyte-scale data (structured datasets of several hundred gigabytes and up).
- Solid experience with SQL and NoSQL (ideally GCP storage: Firestore, BigQuery, Bigtable, and/or Redis, Kafka).
- Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc).
- Deep understanding of data processing services (Apache Airflow, GCP Dataflow, Hadoop, Apache Spark).
- Experience in automated test creation (TDD).
- Fluent spoken English.
Advantages:
- Not being afraid of mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although ML knowledge is not required for this position, a passion for algorithms would be awesome).
- Experience in any OOP language.
- Experience in DevOps (familiarity with Docker and Kubernetes).
- Experience with GCP services would be a plus.
- Experience with IaC would be a plus.
- Experience in Scala.
What we offer:
- 20 working days' vacation;
- 10 paid sick leaves;
- public holidays;
- equipment;
- accountant helps with documents;
- many cool team activities.
Apply now and start a new page of your fast career growth with us!
· 66 views · 7 applications · 28d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.
Does this sound like you?
- 5+ years of experience in Data Engineering or a related field
- Strong expertise in SQL and data modeling concepts
- Hands-on experience with Airflow
- Experience working with Redshift
- Proficiency in Python for data processing
- Strong understanding of data governance, security, and compliance
- Experience in implementing CI/CD pipelines for data workflows
- Ability to work independently and collaboratively in an agile environment
- Excellent problem-solving and analytical skills
A new team member will be in charge of:
- Design, develop, and maintain scalable data warehouse solutions
- Build and optimize ETL/ELT pipelines for efficient data integration
- Design and implement data models to support analytical and reporting needs
- Ensure data integrity, quality, and security across all pipelines
- Optimize data performance and scalability using best practices
- Work with big data technologies such as Redshift (a short load sketch follows this list)
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Implement CI/CD pipelines for data workflows
- Monitor, troubleshoot, and improve data processes and system performance
- Stay updated with industry trends and emerging technologies in data engineering
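To ground the Redshift item above, here is a hedged sketch of one ELT load step: COPYing a staged S3 extract into Redshift via psycopg2 and upserting into the target table. The cluster, bucket, IAM role, and table names are all placeholders.

```python
# Minimal ELT sketch: load a staged file from S3 into Redshift with COPY,
# then upsert into the target table. All identifiers are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="etl_user", password="...",
)

with conn, conn.cursor() as cur:
    # Bulk-load the staged extract; COPY is the idiomatic Redshift path.
    cur.execute("""
        COPY stg_orders
        FROM 's3://my-bucket/exports/orders.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        CSV IGNOREHEADER 1;
    """)
    # Simple upsert: delete matching keys, then insert the fresh rows.
    cur.execute("DELETE FROM orders USING stg_orders "
                "WHERE orders.order_id = stg_orders.order_id;")
    cur.execute("INSERT INTO orders SELECT * FROM stg_orders;")
conn.close()
```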
Already looks interesting? Awesome! Check out the benefits prepared for you:
- Regular performance reviews, including remuneration
- Up to 25 paid days off per year for well-being
- Flexible cooperation hours with work-from-home
- Fully paid English classes with an in-house teacher
- Perks on special occasions such as birthdays, marriage, childbirth
- Referral program implying attractive bonuses
- External & internal training and IT certifications
· 83 views · 20 applications · 28d
Python Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
Tech stack: Python, Java, PostgreSQL, C#, Scala, SQL, Data
Expected rate: TBD
Domain expertise:
Location: remote
Expected date: ASAP
Duration: Long-term
Expiration: ASAP
Description:
Requirements:
- University degree in Computer Science or equivalent experience
- Strong analytical and problem-solving skills
- Data Engineering working experience, ideally with Python
- Willingness to learn new technologies
- Experience with the big data technology stack, for example (Py)Spark, Delta tables, etc.
- 5+ years of experience with general-purpose languages such as C#, Python, Java, Scala, etc.
- Fluency in SQL and relational DB systems (PostgreSQL, SQL Server)
Nice to have:
- Experience with data warehousing, infrastructure, ETL/ELT and analytic tools such as Azure Synapse or Databricks
- Experience with multiple Big Data file formats (Parquet, Avro, Delta Lake)
- Experience with time-series databases and streaming data processing
- Proficiency with PySpark
- Proficiency with .Net
- Experience setting up MLOps processes and a machine learning model registry
- Experience with CI/CD pipelines on Gitlab
- Some experience with Terraform and k8s