Jobs

106
  • · 204 views · 31 applications · 6d

    Junior Data Engineer

    Full Remote · Ukraine · Intermediate

    We are looking for a Data Engineer to join our team!

     

    The Data Engineer is responsible for designing, maintaining, and optimizing data infrastructure for data collection, management, transformation, and access.

    They will be in charge of creating pipelines that convert raw data into usable formats for data scientists and other data consumers.

    The Data Engineer should be comfortable working with RDBMSs and have good knowledge of the appropriate RDBMS programming language(s).

    The Data Engineer processes client data according to the relevant specifications and documentation.

     

    *Ukrainian student in Ukraine (2nd year or higher).

     

    Main responsibilities:

     

    - Design and develop ETL pipelines;

    - Integrate and cleanse data;

    - Implement stored procedures and functions for data transformations;

    - Optimize ETL process performance.

     

    Skills and Requirements:

     

    - Experience with ETL tools (taking charge of ETL processes and performing tasks involving data analytics, data science, business intelligence, and system architecture);

    - Database/DBA/architect background (understanding of data storage requirements and warehouse architecture design; basic expertise with SQL/NoSQL databases and data mapping; awareness of the Hadoop environment);

    - Data analysis expertise (basic expertise in data modeling, mapping, formatting, and data analysis is required);

    - Knowledge of scripting languages (Python is preferred);

    - Troubleshooting skills (data processing systems operate on large amounts of data and include multiple structural elements; the Data Engineer is responsible for the proper functioning of the system, which requires strong analytical thinking and troubleshooting skills);

    - Tableau experience is good to have;

    - Software engineering background is good to have;

    - Good organizational skills and task management abilities;

    - Effective self-motivator;

    - Good communication skills in written and spoken English.

     

    Salary Range

     

    Compensation packages are based on several factors including but not limited to: skill set, depth of experience, certifications, and specific work location.

  • · 87 views · 7 applications · 20d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · Intermediate

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Experience working with high-volume tables (10M+ rows).


    Optional skills (as a plus):
    • Experience with Spark (pyspark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and machine learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, Integration logic);
    • Implement various data processing algorithms;
    • Involvement in creation of forecasting, recommendation, and classification models.

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • · 214 views · 21 applications · 27d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    • 6+ months of experience as a Data Engineer;

    • Experience with SQL;

    • Experience with Python.

     

     

    Optional skills (as a plus):

    • Experience with ETL / ELT pipelines;

    • Experience with PySpark;

    • Experience with Airflow;

    • Experience with Databricks.

     

    Key Responsibilities:

    • Create ETL/ELT pipelines and data management solutions;

    • Work with SQL queries for data extraction and analysis;

    • Analyze data and apply data processing algorithms to solve business problems.

     

     

    We offer:

    • Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;

    • Opportunity to work with a highly skilled engineering team on challenging projects;

    • Interesting projects with new technologies;

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • · 52 views · 12 applications · 25d

    Databricks Solutions Architect

    Full Remote · Worldwide · 7 years of experience · Upper-Intermediate

    Requirements

    - 7+ years of experience in data engineering, data platforms & analytics

    - Completed Data Engineering Professional certification & required classes

    - Minimum of 6-8 projects delivered, with hands-on development experience on Databricks

    - Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with deep expertise in at least one 

    - Deep experience with distributed computing on Spark, including knowledge of Spark runtime internals

    - Familiarity with CI/CD for production deployments 

    - Current knowledge across the breadth of Databricks product and platform features

     

    We offer:

    • Attractive financial package

    • Challenging projects

    • Professional & career growth

    • Great atmosphere in a friendly small team

  • · 62 views · 5 applications · 27d

    Senior Software Data Engineer

    Full Remote · Worldwide · Product · 7 years of experience · Upper-Intermediate

    Join Burny Games — a Ukrainian company that creates mobile puzzle games. Our mission is to create top-notch innovative games to challenge players' minds daily.

    What makes us proud?

    • In just two years, we've launched two successful mobile games worldwide: Playdoku and Colorwood Sort. We have paused some projects to focus on making our games better and helping our team improve.
    • Our games have been enjoyed by over 8 million players worldwide, and we keep attracting more players.
    • We've created a culture where we make decisions based on data, which helps us grow every month.
    • We believe in keeping things simple, focusing on creativity, and always searching for new and effective solutions.


    We are seeking an experienced software engineer to create a high-performance, scalable, and flexible real-time analytics platform.
    You will be a key member of our team, responsible for the architecture, development, and optimization of services for processing and analyzing large volumes of data (terabytes).
     

    Required professional experience:

    • 5+ years of experience in developing distributed systems or systems at scale.
    • Willingness to upskill in Go; proficiency in one of: Go, Python, Java/Scala/Kotlin, or Rust.
    • Rock solid computer science fundamentals.
    • Experience with any NoSQL (preferably Cassandra) and OLAP (preferably ClickHouse) databases.
    • Experience with a distributed log-based messaging system (e.g., Kafka, NATS JetStream).
    • Experience with Kubernetes (Helm, ArgoCD).
       

    Desired Skills:

    • Experience with common networking protocols.
    • Experience working with observability tools, such as metrics and traces.
    • Database fundamentals.
    • Understanding of scalable system design principles and architectures for real-time data processing.
    • Experience with distributed processing engine (one of: Flink, Spark).
    • Experience with open table format (one of: Apache Iceberg, Delta Lake, Hudi).
    • Experience with cloud platforms (one of: Google Cloud, AWS, Azure).
       

    Key Responsibilities:

    • Design and develop the architecture of a behavioral analytics platform for real-time big data processing.
    • Implement key engine systems (data collection, event processing, aggregation, and data preparation for visualization).
    • Optimize the platform performance and scalability for handling large data volumes.
    • Develop tools for user behavior analysis and product metrics.
    • Collaborate with data analysts and product managers to integrate the engine into analytics projects.
    • Research and implement new technologies and methods in data analysis.
       

    What we offer:

    • 100% payment of vacations and sick leave [20 days vacation, 22 days sick leave], medical insurance.
    • A team of the best professionals in the games industry.
    • Flexible schedule [start of work from 8 to 11, 8 hours/day].
    • L&D center with courses.
    • Self-learning library, access to paid courses.
    • Stable payments.
       

    The recruitment process:

    CV review → Interview with talent acquisition manager → Interview with hiring manager → Job offer.

    If you share our goals and values and are eager to join a team of dedicated professionals, we invite you to take the next step.

  • · 52 views · 6 applications · 10d

    Data Engineer

    Full Remote · EU · Product · 2 years of experience · Upper-Intermediate

    Role Overview:

    We are looking for a Data Engineer to join the growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

    Key Responsibilities:
     

    — Develop and maintain data infrastructure and data warehouse solutions;

    — Design, develop, and maintain scalable and efficient data pipelines and ETL processes;

    — Develop APIs;

    — Gather and define business requirements for data tools and analytics;

    — Communicate and collaborate with the analytics team;

    — Monitor and troubleshoot data pipelines and infrastructure, and implement measures to ensure data integrity, security, and performance;

    — Assist in the implementation of data science solutions;

    — Develop and maintain documentation for data pipelines, infrastructure, and workflows;

    — Stay up-to-date with the latest data engineering technologies and best practices, and make recommendations for new tools and approaches to improve efficiency and quality;

    — Automate data processes;

    — Collect data from different sources.
     

    Ideal profile for the position:
     

    — 2+ years of work experience as a Data Engineer;

    — Experience with AWS (S3, Redshift, DMS, Glue, Lambda, Athena, QuickSight);

    — Excellent level of SQL;

    — Proficient in Python;

    — Knowledge and experience with the development of data warehousing and ETL pipelines;

    — API development experience;

    — Basic understanding of machine learning and data science;

    — Experience in relational and non-relational databases;

    — Good-level written and verbal communication skills;

    — Upper-intermediate or higher English level.
     

    The company guarantees you the following benefits:
     

    Global Collaboration: Join an international team where everyone treats each other with respect and moves towards the same goal;

    Autonomy and Responsibility: Enjoy the freedom and responsibility to make decisions without the need for constant supervision;

    Competitive Compensation: Receive competitive salaries reflective of your expertise and knowledge as our partner seeks top performers;

    Remote Work Opportunities: Embrace the flexibility of fully remote work, with the option to visit company offices that align with your current location;

    Flexible Work Schedule: Focus on performance, not hours, with a flexible work schedule that promotes a results-oriented approach;

    Unlimited Paid Time Off: Prioritize work-life balance with unlimited paid vacation and sick leave days to prevent burnout;

    Career Development: Access continuous learning and career development opportunities to enhance your professional growth;

    Corporate Culture: Experience a vibrant corporate atmosphere with exciting parties and team-building events throughout the year;

    Referral Bonuses: Refer talented friends and receive a bonus after they successfully complete their probation period;

    Medical Insurance Support: Choose the right private medical insurance, and receive compensation (full or partial) based on the cost;

    Flexible Benefits: Customize your compensation by selecting activities or expenses you'd like the company to cover, such as a gym subscription, language courses, Netflix subscription, spa days, and more;

    Education Foundation: Participate in a biannual raffle for a chance to learn something new, unrelated to your job, as part of our commitment to ongoing education.

     

     

    Interview process:

    — A 30-minute interview with a member of our HR team to get to know you and your experience;

    — A final 2-hour interview with the team to gauge your fit with our culture and working style.

     

     

    If you find this opportunity right for you, don't hesitate to apply or get in touch with us if you have any questions!

     

  • · 77 views · 9 applications · 4d

    Data Engineer

    Full Remote · Worldwide · Product · 3 years of experience · Intermediate

    Primary Responsibilities:

    • Organizing and maintaining real-time data collection, processing, and analysis;
    • Designing and implementing automated reporting systems for business metric monitoring;
    • Configuring and optimizing data pipelines (ETL processes);
    • Working with data visualization tools (e.g., Grafana) to create clear and informative dashboards;
    • Optimizing high-load analytical queries;
    • Developing and maintaining predictive models and machine learning algorithms for data analysis (if required).


    Core Skills:

    • Strong knowledge of SQL with experience in query optimization for large datasets;
    • Hands-on experience with data pipeline orchestration tools;
    • Proficiency in data visualization tools (e.g., Grafana, Power BI, Tableau);
    • Experience working with real-time analytics and data warehouses;
    • Expertise in big data processing and ETL optimization;
    • Proficiency in data processing programming languages (e.g., Python, Scala, or SQL);
    • Experience with Databricks (preferred).


    Additional Skills:

    • Understanding of machine learning fundamentals and experience with libraries such as scikit-learn, TensorFlow, or PyTorch (a plus);
    • Experience working with cloud platforms (AWS, GCP) to deploy analytical solutions;
    • Understanding of CI/CD processes for automating data analytics infrastructure.


    Language Requirements:

    • Intermediate English proficiency for working with technical documentation and communicating with external service support.


    We offer:

    • An interesting project and non-trivial tasks that will allow you to show your professional attitude and creativity;
    • Friendly team;
    • Comfortable working schedule and working conditions;
    • Opportunity to work remotely as well as in an office located in the city centre;
    • Stable, competitive salary;
    • Paid vacation and sick leaves;
    • Opportunity for professional growth and career development;
    • English classes, paid professional courses, coffee/fruit, and other perks :)
  • · 29 views · 4 applications · 10d

    Senior Big Data Engineer

    Full Remote · Ukraine, Poland, Spain, Romania, Bulgaria · Product · 5 years of experience · Upper-Intermediate

    Who we are

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.

     

    About the Product

    The product of our client stands at the forefront of advanced threat detection and response, pioneering innovative solutions to safeguard businesses against evolving cybersecurity risks. It is a comprehensive platform that streamlines security operations, empowering organizations to swiftly detect, prevent, and automate responses to advanced threats with unparalleled precision and efficiency.

     

    About the Role

    We are looking for a proactive, innovative, and responsible Senior Big Data Engineer with extensive knowledge of and experience with streaming and batch processes, as well as building a DWH from scratch. Join our high-performance team to work with cutting-edge technologies in a dynamic and agile environment.

     

    Key Responsibilities: 

    • Design & Development: Architect, develop, and maintain robust distributed systems with complex requirements, ensuring scalability and performance.
    • Collaboration: Work closely with cross-functional teams to ensure the seamless integration and functionality of software components.
    • System Optimization: Implement and optimize scalable server systems, utilizing parallel processing, microservices architecture, and security development principles.
    • Database Management: Effectively utilize SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases in system design and development.
    • Big Data Tools: Leverage big data tools such as Spark or Flink to enhance system performance and scalability (experience with these tools is advantageous).
    • Deployment & Management: Demonstrate proficiency in Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.

     

    Required Competence and Skills:

    • At least 5 years of experience in Data Engineering domain
    • Proficiency in SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases
    • Experience building DWH from scratch and working with real-time data and streaming processes
    • Experience with GoLang (commercial/non-commercial)
    • Experienced with big data tools such as Spark or Flink to enhance system performance and scalability
    • Proven experience with Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.
    • Ability to work effectively in a collaborative team environment
    • Excellent communication skills and a proactive approach to learning and development

     

    Advantages:

    • Experience in data cybersecurity domain
    • Experience in startup growing product

     

    Why Us

    We utilize a remote working model, providing a powerful workstation and a co-working space of your choice in case you need it.

    We offer a highly competitive package.

    We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).

    We prioritize the professional growth and well-being of our team members. Hence, we organize various social events throughout the year to foster connections and promote wellness.

  • · 34 views · 4 applications · 24d

    Middle Software Developer (Data Researcher/Data Integration)

    Full Remote · Ukraine · 3 years of experience · Upper-Intermediate

    Our partner is a leading technology company transforming the way investigations are done with smart tools that help teams collect, analyze, and use data effectively. Their AI-powered platform simplifies case management, data visualization, and reporting, making it a valuable solution for industries like law enforcement, financial investigations, and cyber threat intelligence. With deep expertise in business intelligence and data, they help organizations make faster and better decisions. They are focused on innovation and collaboration, creating a positive and dynamic workplace.

     

    You'll collaborate closely with the team of engineers and data wizards to develop solutions that make a tangible impact in the world of security. Join a team that pushes boundaries, embraces challenges, and has a great time doing it.

     

    P.S. Being the first to uncover hidden insights in data? Just one of the perks 😉.

     

    Required Skills

    • 2.5+ years of experience in data engineering or software development
    • Experience with Python scripting
    • Upper-Intermediate level of English
    • Ready to collaborate with remote team
    • Strong problem-solving abilities and attention to detail
    • Can-do attitude

       

    Will be a Bonus

    • Familiarity with integrating APIs and handling various data sources
    • Ability to anticipate and handle multiple potential edge cases related to data consistency

     

    Your Day-to-Day Responsibilities Will Include

    • Researching and analyzing various APIs and data sources
    • Integrating new data sources into the existing system for seamless data flow
    • Collaborating closely with the team to define and implement data solutions
    • Identifying and addressing multiple potential edge cases in data integration
    • Planning your work, estimating effort, and delivering on deadlines

     

    We Offer

    📈 Constant professional growth and improvement:

    • Challenging projects with cutting-edge technologies
    • Close cooperation with clients and industry leaders
    • Support for personal development and mentorship

    😄 Comfortable, focused work environment:

    • Remote work encouraged and supported
    • Minimal bureaucracy
    • Flexible schedule
    • High-quality hardware provided

    And, of course, all the traditional benefits you'd expect in the IT industry.

  • · 34 views · 1 application · 10d

    Data Engineer (Azure)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate

    Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client’s platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.

    Key Responsibilities:
    - Create and manage scalable data pipelines with Azure SQL and other databases;
    - Use Azure Data Factory to automate data workflows;
    - Write efficient Python code for data analysis and processing;
    - Develop data reports and dashboards using Power BI;
    - Use Docker for application containerization and deployment streamlining;
    - Manage code quality and version control with Git.

    Skills requirements:
    - 3+ years of experience with Python;
    - 2+ years of experience as a Data Engineer;
    - Strong SQL knowledge, preferably with Azure SQL experience;
    - Python skills for data manipulation;
    - Expertise in Docker for app containerization;
    - Familiarity with Git for managing code versions and collaboration;
    - Upper-Intermediate level of English.

    Optional skills (as a plus):
    - Experience with Azure Data Factory for orchestrating data processes;
    - Experience developing APIs with FastAPI or Flask;
    - Proficiency in Databricks for big data tasks;
    - Experience in a dynamic, agile work environment;
    - Ability to manage multiple projects independently;
    - Proactive attitude toward continuous learning and improvement.

    We offer:

    - Great networking opportunities with international clients, challenging tasks;

    - Building interesting projects from scratch using new technologies;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities and corporate events.

  • · 94 views · 16 applications · 30d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Intermediate

    We are a forward-thinking software development company, known for our proprietary core platform and its integral components — Sportsbook, CRM, Risk Management, Antifraud, Bonus Engine, and Retail Software Suite.
    Join our team to be part of pioneering solutions that not only streamline the work of thousands of employees in every department but also significantly boost profits and elevate product quality.
    Your role with us is a key part of a team shaping the future of software solutions in an impactful and dynamic environment.

     

    Requirements:

    • 3+ years of professional experience as a Data Engineer or in a similar role.
    • Expert-level proficiency in Python for data processing, scripting, and automation.
    • Strong experience with AWS services (S3, Redshift, Glue, Lambda, etc.).
    • Deep understanding of database technologies (SQL and NoSQL systems, data modeling, performance tuning).
    • Experience with ETL/ELT pipeline design and development.
    • Strong understanding of data warehousing concepts and best practices.
    • Familiarity with version control systems like Git and CI/CD pipelines.
    • Excellent problem-solving skills and the ability to work independently.

       

    Responsibilities:

    • Design, build, and maintain robust data pipelines and ETL/ELT processes.
    • Optimize data systems and architecture for performance and scalability.
    • Collaborate with data scientists, analysts, and other engineers to meet organizational data needs.
    • Implement and manage data security and governance measures.
    • Develop and maintain data documentation and ensure data quality standards are met.
    • Lead data architecture discussions and make strategic recommendations.
    • Monitor and troubleshoot data pipeline issues in production environments.

       

    Nice to have:

    • Knowledge of streaming platforms such as Kafka or AWS Kinesis.
    • Familiarity with machine learning workflows and model deployment.
    • Previous experience in a leadership or mentorship role.

       

    Work Conditions:

    • Vacation: 22 working days per year
    • Sick leave: 5 paid days per year
    • Sick leave with a medical certificate: 100% paid, up to 10 days per year
    • Personal benefit-cafeteria limit for health support

       

    Join our passionate and talented team and be at the forefront of transforming the iGaming industry with groundbreaking live tech solutions. If you’re a visionary professional eager to shape the future of iGaming, Atlaslive invites you to apply and drive our continued success in this dynamic market.

    Atlaslive — The Tech Behind the Game.

    *All submitted resumes undergo a thorough review. If we do not contact you within 5 business days after your application, it means that we are unable to proceed with your candidacy at this time. However, we appreciate your interest in our company and will keep your contact details on file for future opportunities.

  • · 16 views · 0 applications · 9d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    N-iX is looking for a Senior Data Engineer to join our skilled and continuously growing team! The position is with our fintech customer from Europe. The person will be part of the customer’s Data Platform team - a key function within the company, responsible for the architecture, development, and management of its core data infrastructure. We leverage Snowflake, Looker, Airflow (MWAA), and dbt while managing DevOps configurations for the platform. Our goal is to build and maintain a self-serve data platform that empowers stakeholders with tools for efficient data management while ensuring security, governance, and compliance standards.
     

    Requirements:

    • 6+ years of experience in Data Engineering.
    • Strong proficiency in Airflow, Python, and SQL.
    • Hands-on experience with cloud data warehouses (Snowflake or equivalent).
    • Solid understanding of AWS services and Kubernetes at an advanced user level.
    • Familiarity with Data Quality and Observability best practices.
    • Ability to thrive in a dynamic environment with a strong sense of ownership and responsibility.
    • Analytical mindset and problem-solving skills for tackling complex technical challenges.
    • Bachelor's degree in Mathematics, Computer Science, or another relevant quantitative field.
       

    Nice-to-Have Skills:

    • Experience with DevOps practices, CI/CD, and Infrastructure as Code (IaC).
    • Hands-on experience with Looker or other BI tools.
    • Performance optimization of large-scale data pipelines.
    • Knowledge of metadata management and Data Governance best practices.
       

    Responsibilities:

    • Design and develop a scalable data platform to efficiently process and analyze large volumes of data using Snowflake, Looker, Airflow, and dbt.
    • Enhance the self-serve data platform by implementing new features to improve stakeholder access and usability.
    • Work with cross-functional teams to provide tailored data solutions and optimize data pipelines.
    • Foster a culture of knowledge sharing within the team to enhance collaboration and continuous learning.
    • Stay updated on emerging technologies and best practices in data engineering and bring innovative ideas to improve the platform.
  • · 56 views · 4 applications · 20d

    Data Engineer (RnD team)

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Intermediate
    In Competera, we are building a place where optimal pricing decisions can be made easily. We believe that AI technologies will soon drive all challenging decisions and are capable of helping humans be better. We are now looking for a Data Engineer to...

    In Competera, we are building a place where optimal pricing decisions can be made easily. We believe that AI technologies will soon drive all challenging decisions and are capable of helping humans be better.
    We are now looking for a Data Engineer to improve our data processing pipelines from performance, cost and correctness standpoints.
    You could be a perfect match for the position if

    You want to:

    • Migrate BigQuery workloads to Spark 3.5 on Databricks.
      Re-engineer our daily-TB batch jobs into Delta Lake pipelines that run faster and cost less.
    • Turn full reloads into true incremental processing.
      Build CDC / MERGE logic so we scan only the data that changed and deliver fresh features within minutes.
    • Add quality gates & observability from day one.
      Instrument every stage with custom metrics, data-drift alerts and cost reports the product team can read.
    • Set up monitoring & slot-second cost dashboards.
      Expose processing-time, SLA and $-per-feature charts so we can make data-driven trade-offs.
    • Pair with Data Scientists and Product Managers. 
      Work side-by-side with Data Scientists and Product Managers from idea to release, instead of simply passing datasets back and forth.
    • Continuously tune for scale.
      Dozens of terabytes move through the platform daily; you’ll experiment with partitioning, Z-ORDER, and Photon to keep latency low as volume grows.
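    The incremental-processing idea from the bullets above (CDC / MERGE logic that scans only changed rows) can be sketched in plain Python, without Spark or Delta Lake. The row layout, the `updated_at` column, and the watermark convention are illustrative assumptions, not the actual pipeline:

```python
# Sketch of watermark-based incremental processing: instead of a full
# reload, upsert only the rows that changed since the last watermark.
# Table layout and column names ("id", "updated_at") are hypothetical.

def merge_incremental(target, changes, watermark):
    """Upsert rows newer than `watermark` into `target`; return the new watermark."""
    new_watermark = watermark
    for row in changes:
        if row["updated_at"] <= watermark:
            continue  # already processed on a previous run - skip the scan
        target[row["id"]] = row  # MERGE-like upsert keyed on the primary key
        new_watermark = max(new_watermark, row["updated_at"])
    return new_watermark

target = {1: {"id": 1, "price": 10, "updated_at": 0}}
changes = [
    {"id": 1, "price": 12, "updated_at": 5},  # update
    {"id": 2, "price": 7, "updated_at": 3},   # insert
    {"id": 3, "price": 1, "updated_at": 0},   # stale - ignored
]
watermark = merge_incremental(target, changes, watermark=0)
```

On Databricks the same upsert would typically be expressed as a `MERGE INTO` statement over a Delta table, fed only the rows newer than the stored watermark.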

    You have: 

    • 3+ years of experience in a data engineering role.
    • Strong knowledge of SQL, Spark, Python, Airflow, and binary file formats.
    • English level: Upper-Intermediate or higher.

    Pleasant extras:

    • Databricks, GCP, BigQuery, Kafka, data modeling patterns, data quality approaches and tools.

    Soft skills:

    • Product mindset.
    • Ability to work in a fast-paced environment.
    • Willingness to take ownership of a feature and guide it through all stages of the development lifecycle.
    • Proactivity, openness, and a desire to dive deep into the domain and learn new approaches and tools.
       

    You’re gonna love it, and here’s why:
     

    • Rich innovative software stack, freedom to choose the best suitable technologies.
    • Remote-first ideology: freedom to operate from the home office or any suitable coworking.
    • Flexible working hours (we start between 8 and 11 am) and no time-tracking systems.
    • Regular performance and compensation reviews.
    • Recurrent 1-1s and measurable OKRs.
    • In-depth onboarding with a clear success track.
    • Competera covers 70% of your training/course fee.
    • 20 vacation days, 15 days off, and up to one week of paid Christmas holidays.
    • 20 business days of sick leave.
    • Partial medical insurance coverage.

    Drive innovations with us. Be a Competerian.

  • · 87 views · 27 applications · 23d

    Data Engineer

    Full Remote · Worldwide · 4 years of experience · Advanced/Fluent
    Responsibilities: • Develop and maintain data pipelines and ETLs. • Support the development and maintenance of data visualization solutions for the developed data products. • Build and maintain cloud infrastructure for multiple solutions using various AWS...

    Responsibilities: 

    • Develop and maintain data pipelines and ETLs. 

    • Support the development and maintenance of data visualization solutions for the developed data products. 

    • Build and maintain cloud infrastructure for multiple solutions using various AWS services through AWS CDK written in Python. 

    • Build reusable components for multiple solutions. 

    • Design, build, and implement data quality checks. 

    • Gather and translate business requirements into technical requirements. 

    • Implement Data Engineering best practices. 

    • Document all developed components. 

    • Assist in solution architecture design and implementation. 

    • Build queries to solve analytical questions. 

    • Ensure information security standards are always maintained. 

    • Design, build, and maintain robust and scalable data models across various database vendors and types, including SQL and NoSQL. 
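    As a rough illustration of the "data quality checks" item above, here is a minimal sketch in plain Python. The column names and the specific checks (completeness, key uniqueness) are hypothetical; a real pipeline would more likely run such checks with a dedicated framework over the warehouse tables:

```python
# Minimal data-quality-check sketch: completeness and uniqueness checks
# over plain dict rows. Column names ("id", "amount") are hypothetical.

def run_quality_checks(rows, required=("id", "amount")):
    """Return a list of human-readable check failures (empty list = all checks pass)."""
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        for col in required:  # completeness: required columns must be non-null
            if row.get(col) is None:
                failures.append(f"row {i}: missing {col}")
        rid = row.get("id")
        if rid in seen_ids:  # uniqueness: primary key must not repeat
            failures.append(f"row {i}: duplicate id {rid}")
        seen_ids.add(rid)
    return failures

clean = run_quality_checks([{"id": 1, "amount": 5}, {"id": 2, "amount": 3}])
dirty = run_quality_checks([{"id": 1, "amount": None}, {"id": 1, "amount": 2}])
```

In practice the failure list would feed an alerting channel or fail the pipeline run, rather than just being returned.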

     

    We offer:

    • Attractive financial package

    • Challenging projects

    • Professional & career growth

    • Great atmosphere in a friendly small team

  • · 50 views · 4 applications · 19d

    Strong Middle/Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
    Job Description We are looking for an experienced and skilled Senior Data Engineer with 4+ years of commercial experience in Spark (Big Data solutions) to build data processing pipelines. Experience in building Big Data solutions on AWS or other...

    Job Description

    We are looking for an experienced and skilled Senior Data Engineer with 4+ years of commercial experience in Spark (Big Data solutions) to build data processing pipelines.

    • Experience in building Big Data solutions on AWS or other cloud platforms
    • Experience in building Data Lake platforms
    • Strong practical experience with Apache Spark
    • Hands-on experience in building data pipelines using Databricks
    • Hands-on experience in Python and Scala
    • Upper-Intermediate English level
    • Bachelor’s degree in Computer Science, Information Systems, Mathematics, or a related technical discipline

    Job Responsibilities

    • Design and implement data integration pipelines
    • Perform performance tuning and improve functionality with respect to NFRs
    • Contribute design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, and data extraction, transformation, and loading across multiple data stores
    • Take part in the full cycle of feature development (requirements analysis, decomposition, design, etc.)
    • Design, develop, and implement enterprise data platform solutions with other talented engineers in a collaborative team environment
    • Contribute to the overall quality of development services through brainstorming, unit testing, and proactively offering improvements and innovations

    Department/Project Description

    Is it even possible to sleep not only deeply, but smartly? Yes, it is, if the GlobalLogic and Sleep Number teams get down to business! Sleep Number is a pioneer in the development of technologies for monitoring sleep quality. Smart beds have already provided 13 million people with quality sleep, and this is just the beginning.

    The GlobalLogic team is a strategic partner of Sleep Number in the development of innovative technologies to improve sleep. By joining the project, you will be dealing with technologies that have already turned the smart bed into a health improvement and wellness center. The world's largest biometric database allows building the necessary infrastructure for future inventions.

    Join the team and get ready to innovate, lead the way, and improve lives!
