Jobs
· 347 views · 54 applications · 19d
Junior Data Engineer
Full Remote · Ukraine · Intermediate
We are looking for a Data Engineer to join our team!
Data Engineer is responsible for designing, maintaining, and optimizing data infrastructure for data collection, management, transformation, and access.
He/she will be in charge of creating pipelines that convert raw data into usable formats for data scientists and other data consumers to utilize.
The Data Engineer should be comfortable working with RDBMS and have good knowledge of the appropriate RDBMS programming language(s) as well.
The Data Engineer processes client data according to the proper specifications and documentation.
*Ukrainian students in Ukraine (2nd year and higher).
Main responsibilities:
- Design and develop ETL pipelines;
- Data integration and cleansing;
- Implement stored procedures and functions for data transformations;
- ETL processes performance optimization.
Skills and Requirements:
- Experience with ETL tools (to take charge of ETL processes and perform tasks involving data analytics, data science, business intelligence, and system architecture);
- Database/DBA/architect background (understanding of data storage requirements and warehouse architecture design; basic expertise with SQL/NoSQL databases and data mapping; awareness of the Hadoop environment);
- Data analysis expertise (basic expertise in data modeling, mapping, and formatting is required);
- Knowledge of scripting languages (Python is preferable);
- Troubleshooting skills (data processing systems operate on large amounts of data and include multiple structural elements; the Data Engineer is responsible for the proper functioning of the system, which requires strong analytical thinking and troubleshooting skills);
- Tableau experience is good to have;
- Software engineering background is good to have;
- Good organizational and task management skills;
- Effective self-motivator;
- Good communication skills in written and spoken English.
Salary Range
Compensation packages are based on several factors including but not limited to: skill set, depth of experience, certifications, and specific work location.
· 117 views · 5 applications · 30d
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · Intermediate
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Working with high-volume tables (10M+ rows).
Optional skills (as a plus):
• Experience with Spark (PySpark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of DS and machine learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (API, integration logic);
• Develop various data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models.
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 43 views · 0 applications · 9d
Team/ Tech Lead Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
Looking for a Team Lead Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
As a Team Lead, you will be an expert and a leader, playing a crucial role in guiding the development team, making technical decisions, and ensuring the successful delivery of high-quality software products.
Skills requirements:
• 5+ years of experience with Python;
• 4+ years of experience as a Data Engineer;
• Knowledge of data algorithms and data structures is a MUST;
• Excellent experience with Pandas;
• Excellent experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Experience with Apache Kafka and Apache Spark (PySpark);
• Experience with Hadoop;
• Familiarity with Amazon Web Services;
• Understanding of cluster computing fundamentals;
• Working with high-volume tables (100M+ rows).
Optional skills (as a plus):
• Experience with scheduling and monitoring (Databricks, Prometheus, Grafana);
• Experience with Airflow;
• Experience with Snowflake, Terraform;
• Experience in statistics;
• Knowledge of DS and machine learning algorithms.
Key responsibilities:
• Manage the development process and support team members;
• Conduct R&D work with new technology;
• Maintain high-quality coding standards within the team;
• Create ETL pipelines and data management solutions (API, integration logic);
• Elaborate different data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models;
• Develop and implement workflows for receiving and transforming new data sources to be used in the company;
• Develop the existing Data Engineering infrastructure to make it scalable and prepare it for anticipated future volumes;
• Identify, design, and implement process improvements (e.g., automation of manual processes, infrastructure redesign).
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 111 views · 9 applications · 5d
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer;
• Experience with SQL;
• Experience with Python.
Optional skills (as a plus):
• Experience with ETL / ELT pipelines;
• Experience with PySpark;
• Experience with Airflow;
• Experience with Databricks.
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions;
• Work with SQL queries for data extraction and analysis;
• Data analysis and application of data processing algorithms to solve business problems.
We offer:
• Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
• Opportunity to work with a highly skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 27 views · 8 applications · 3d
Databricks Solutions Architect
Full Remote · Worldwide · 7 years of experience · Upper-Intermediate
Requirements
- 7+ years of experience in data engineering, data platforms & analytics
- Completed Data Engineering Professional certification & required classes
- Minimum of 6-8+ projects delivered, with hands-on development experience on Databricks
- Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with deep expertise in at least one
- Deep experience with distributed computing with Spark with knowledge of Spark runtime internals
- Familiarity with CI/CD for production deployments
- Current knowledge across the breadth of Databricks product and platform features
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
· 36 views · 3 applications · 5d
Senior Software Data Engineer
Full Remote · Worldwide · Product · 7 years of experience · Upper-Intermediate
Join Burny Games, a Ukrainian company that creates mobile puzzle games. Our mission is to create top-notch innovative games to challenge players' minds daily.
What makes us proud?
- In just two years, we've launched two successful mobile games worldwide: Playdoku and Colorwood Sort. We have paused some projects to focus on making our games better and helping our team improve.
- Our games have been enjoyed by over 8 million players worldwide, and we keep attracting more players.
- We've created a culture where we make decisions based on data, which helps us grow every month.
- We believe in keeping things simple, focusing on creativity, and always searching for new and effective solutions.
We are seeking an experienced software engineer to create a high-performance, scalable, and flexible real-time analytics platform.
You will be a key member of our team, responsible for the architecture, development, and optimization of services for processing and analyzing large volumes of data (terabytes).
Required professional experience:
- 5+ years of experience in developing distributed systems or systems at scale.
- Willingness to upskill in Go; proficiency in one of the following languages: Go, Python, Java/Scala/Kotlin, Rust.
- Rock-solid computer science fundamentals.
- Experience with any NoSQL (preferably Cassandra) and OLAP (preferably ClickHouse) databases.
- Experience with a distributed log-based messaging system (one of: Kafka, NATS JetStream, etc.).
- Experience with Kubernetes (Helm, ArgoCD).
Desired Skills:
- Experience with common networking protocols.
- Experience working with observability tools, such as metrics and traces.
- Database fundamentals.
- Understanding of scalable system design principles and architectures for real-time data processing.
- Experience with distributed processing engine (one of: Flink, Spark).
- Experience with open table format (one of: Apache Iceberg, Delta Lake, Hudi).
- Experience with cloud platforms (one of: Google Cloud, AWS, Azure).
Key Responsibilities:
- Design and develop the architecture of a behavioral analytics platform for real-time big data processing.
- Implement key engine systems (data collection, event processing, aggregation, and data preparation for visualization).
- Optimize the platform performance and scalability for handling large data volumes.
- Develop tools for user behavior analysis and product metrics.
- Collaborate with data analysts and product managers to integrate the engine into analytics projects.
- Research and implement new technologies and methods in data analysis.
What we offer:
- 100% payment of vacations and sick leave [20 days vacation, 22 days sick leave], medical insurance.
- A team of the best professionals in the games industry.
- Flexible schedule [start of work from 8 to 11, 8 hours/day].
- L&D center with courses.
- Self-learning library, access to paid courses.
- Stable payments.
The recruitment process:
CV review β Interview with talent acquisition manager β Interview with hiring manager β Job offer.
If you share our goals and values and are eager to join a team of dedicated professionals, we invite you to take the next step.
· 78 views · 6 applications · 22d
Data Engineer
Full Remote · EU · Product · 2 years of experience · Upper-Intermediate
Role Overview:
We are looking for a Data Engineer to join the growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow. The right candidate will be excited by the prospect of optimizing or even re-designing our companyβs data architecture to support our next generation of products and data initiatives.
Key Responsibilities:
- Develop and maintain data infrastructure and data warehouse solutions;
- Design, develop, and maintain scalable and efficient data pipelines and ETL processes;
- Develop APIs;
- Gather and define business requirements for data tools and analytics;
- Communicate and collaborate with the analytics team;
- Monitor and troubleshoot data pipelines and infrastructure, and implement measures to ensure data integrity, security, and performance;
- Assist in the implementation of data science solutions;
- Develop and maintain documentation for data pipelines, infrastructure, and workflows;
- Stay up to date with the latest data engineering technologies and best practices, and recommend new tools and approaches to improve efficiency and quality;
- Automate data processes;
- Collect data from different sources.
Ideal profile for the position:
- 2+ years of work experience as a Data Engineer;
- Experience with AWS: S3, Redshift, DMS, Glue, Lambda, Athena, QuickSight;
- Excellent level of SQL;
- Proficient in Python;
- Knowledge and experience with the development of data warehousing and ETL pipelines;
- API development experience;
- Basic understanding of machine learning and data science;
- Experience with relational and non-relational databases;
- Good written and verbal communication skills;
- Upper-Intermediate or higher English level.
The company guarantees you the following benefits:
- Global Collaboration: Join an international team where everyone treats each other with respect and moves towards the same goal;
- Autonomy and Responsibility: Enjoy the freedom and responsibility to make decisions without the need for constant supervision;
- Competitive Compensation: Receive competitive salaries reflective of your expertise and knowledge as our partner seeks top performers;
- Remote Work Opportunities: Embrace the flexibility of fully remote work, with the option to visit company offices that align with your current location;
- Flexible Work Schedule: Focus on performance, not hours, with a flexible work schedule that promotes a results-oriented approach;
- Unlimited Paid Time Off: Prioritize work-life balance with unlimited paid vacation and sick leave days to prevent burnout;
- Career Development: Access continuous learning and career development opportunities to enhance your professional growth;
- Corporate Culture: Experience a vibrant corporate atmosphere with exciting parties and team-building events throughout the year;
- Referral Bonuses: Refer talented friends and receive a bonus after they successfully complete their probation period;
- Medical Insurance Support: Choose the right private medical insurance and receive compensation (full or partial) based on the cost;
- Flexible Benefits: Customize your compensation by selecting activities or expenses you'd like the company to cover, such as a gym subscription, language courses, a Netflix subscription, spa days, and more;
- Education Foundation: Participate in a biannual raffle for a chance to learn something new, unrelated to your job, as part of our commitment to ongoing education.
Interview process:
- A 30-minute interview with a member of our HR team to get to know you and your experience;
- A final 2-hour interview with the team to gauge your fit with our culture and working style.
If you find this opportunity right for you, don't hesitate to apply or get in touch with us if you have any questions!
· 28 views · 0 applications · 20d
SAP Analytics Cloud Consultant
Office Work · Ukraine (Kyiv) · Product · 2 years of experience · Pre-Intermediate · Ukrainian Product 🇺🇦
Ajax Systems is a global technology company and the leading developer and manufacturer of Ajax security systems with smart home capabilities in Europe. It encompasses a comprehensive ecosystem featuring 135 devices, mobile and desktop applications, and a robust server infrastructure. Each year, we experience substantial growth in both our workforce and our user base worldwide. Currently, the company employs over 3,300 individuals, while Ajax sensors safeguard 2.5 million users across more than 187 countries.
We have an open position for a Senior SAC Consultant in our team. We are looking for a professional who will help us design and implement advanced analytics solutions, transforming business data into strategic insights using SAP Analytics Cloud and SAP S/4HANA Embedded Analytics.
Key Responsibilities:
- Analytics Solution Development: Design and develop interactive dashboards, reports, and KPIs in SAP Analytics Cloud (SAC);
- Focus on visual design principles to create user-friendly and intuitive dashboards. Propose a design code guide to follow in the team;
- Utilize SAC capabilities, including planning, forecasting, and predictive analytics;
- Enable and optimize Embedded Analytics in SAP S/4HANA by consuming CDS Views.
- Data Integration and exploration: Integrate data from various sources (SAP and non-SAP systems, SQL databases, etc.) into SAC for unified analytics and reporting;
- Perform exploratory data analysis (EDA) to uncover patterns, validate data quality, and derive actionable insights.
- Business Collaboration: Work closely with business stakeholders to gather requirements, define KPIs, and translate business needs into technical solutions;
- Provide guidance on self-service analytics to empower business users and enhance BI adoption.
- Solution Optimization and Governance: Ensure SAC solutions align with data governance and performance best practices;
- Manage access controls, data security, and compliance across SAC deployments;
- Monitor and optimize dashboards for usability, performance, and scalability.
- Documentation and Knowledge Sharing: Document all solutions, models, data integrations, and reporting workflows to ensure transparency and maintainability;
- Create clear technical and user documentation to support stakeholders and end-users.
- Integration and Enablement: Collaborate with SAP Data Engineers to ensure data readiness from SAP S/4HANA and SAP Data Sphere;
- Integrate SAC with SAP back-end systems and other data sources for unified analytics and reporting.
- Leadership and Support: Provide thought leadership on advanced SAC functionalities, trends, and best practices;
- Mentor team members and train end-users to maximize the value of SAC.
Key Requirements:
- 4+ years of experience in SAP Analytics Cloud (SAC), including planning, predictive analytics, and dashboard development.
- Strong knowledge of SAP S/4HANA Embedded Analytics and integration of CDS Views into SAC models.
- Hands-on experience performing exploratory data analysis (EDA) across various data sources to identify patterns, insights, and data quality issues.
- Experience working with SQL-based databases to query, validate, and manipulate datasets for analytics purposes.
- Solid understanding of data visualization best practices and performance optimization techniques.
- Familiarity with integrating SAC with SAP and non-SAP systems for seamless data consumption.
- Excellent ability to gather and translate business requirements into technical analytics solutions.
- Strong communication and stakeholder engagement skills, enabling collaboration with technical and business teams.
- Problem-solving mindset to address complex analytics challenges and ensure data-driven decision-making.
- Ability to work in cross-functional teams, coordinating with Data Engineers and IT stakeholders to ensure data readiness.
- Proactive, detail-oriented, and committed to delivering high-quality solutions.
We offer:
- Opportunity to build your own processes and best practices;
- A dynamic team working within a zero-bullshit culture;
- Working in a comfortable office at UNIT.City (Kyiv). The office is safe as it has a bomb shelter;
- Reimbursement for external training for professional development;
- Ajax's security system kit to use;
- Official employment with Diia City;
- Medical Insurance;
- Flexible work schedule.
Ajax Systems is a Ukrainian success story, a place of incredible strength and energy.
· 34 views · 2 applications · 19d
Senior Data Engineer (with Go)
Full Remote · Bulgaria, Spain, Poland, Romania, Ukraine · Product · 5 years of experience · Upper-Intermediate
Who we are
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product
The product of our client stands at the forefront of advanced threat detection and response, pioneering innovative solutions to safeguard businesses against evolving cybersecurity risks. It is a comprehensive platform that streamlines security operations, empowering organizations to swiftly detect, prevent, and automate responses to advanced threats with unparalleled precision and efficiency.
About the Role
We are looking for a proactive, innovative, and responsible Senior Big Data Engineer with extensive knowledge of and experience with Go (GoLang), streaming and batching processes, and building a DWH from scratch. Join our high-performance team to work with cutting-edge technologies in a dynamic and agile environment.
Key Responsibilities:
- Design & Development: Architect, develop, and maintain robust distributed systems with complex requirements, ensuring scalability and performance.
- Collaboration: Work closely with cross-functional teams to ensure the seamless integration and functionality of software components.
- System Optimization: Implement and optimize scalable server systems, utilizing parallel processing, microservices architecture, and security development principles.
- Database Management: Effectively utilize SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases in system design and development.
- Big Data Tools: Leverage big data tools such as Spark or Flink to enhance system performance and scalability (experience with these tools is advantageous).
- Deployment & Management: Demonstrate proficiency in Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.
Required Competence and Skills:
- At least 5 years of experience in Data Engineering domain
- At least 2 years of experience with GoLang
- Proficiency in SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases
- Experienced with big data tools such as Spark or Flink to enhance system performance and scalability
- Proven experience with Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.
- Ability to work effectively in a collaborative team environment
- Excellent communication skills and a proactive approach to learning and development
Advantages:
- Experience in data cybersecurity domain
- Experience in startup growing product
Why Us
We utilize a remote working model, providing a powerful workstation and a co-working space of your choice if you need it.
We offer a highly competitive package.
We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
We prioritize the professional growth and well-being of our team members. Hence, we organize various social events throughout the year to foster connections and promote wellness.
· 21 views · 3 applications · 2d
Middle Software Developer (Data Researcher/Data Integration)
Full Remote · Ukraine · 3 years of experience · Upper-Intermediate
Our partner is a leading technology company transforming the way investigations are done with smart tools that help teams collect, analyze, and use data effectively. Their AI-powered platform simplifies case management, data visualization, and reporting, making it a valuable solution for industries like law enforcement, financial investigations, and cyber threat intelligence. With deep expertise in business intelligence and data, they help organizations make faster and better decisions. They are focused on innovation and collaboration, creating a positive and dynamic workplace.
You'll collaborate closely with the team of engineers and data wizards to develop solutions that make a tangible impact in the world of security. Join a team that pushes boundaries, embraces challenges, and has a great time doing it.
P.S. Being the first to uncover hidden insights in data? Just one of the perks.
Required Skills
- 2.5+ years of experience in data engineering or software development
- Experience with Python scripting
- Upper-Intermediate level of English
- Ready to collaborate with remote team
- Strong problem-solving abilities and attention to detail
- Can-do attitude
Will be a Bonus
- Familiarity with integrating APIs and handling various data sources
- Ability to anticipate and handle multiple potential edge cases related to data consistency
Your Day-to-Day Responsibilities Will Include
- Researching and analyzing various APIs and data sources
- Integrating new data sources into the existing system for seamless data flow
- Collaborating closely with the team to define and implement data solutions
- Identifying and addressing multiple potential edge cases in data integration
- Planning your work, estimating effort, and delivering on deadlines
We Offer
Constant professional growth and improvement:
- Challenging projects with cutting-edge technologies
- Close cooperation with clients and industry leaders
- Support for personal development and mentorship
Comfortable, focused work environment:
- Remote work encouraged and supported
- Minimal bureaucracy
- Flexible schedule
- High-quality hardware provided
And, of course, all the traditional benefits you'd expect in the IT industry.
· 87 views · 11 applications · 30d
Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate
We are seeking a skilled Data Engineer to join our team and contribute to the development of large-scale analytics platforms. The ideal candidate will have strong experience in cloud ecosystems such as Azure and AWS, as well as expertise in AI and machine learning applications. Knowledge of the healthcare industry and life sciences is a plus.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines for large-scale analytics platforms.
- Implement cloud-based solutions using Azure and AWS, ensuring reliability and performance.
- Work closely with data scientists and AI/ML teams to optimize data workflows.
- Ensure data quality, governance, and security across platforms.
- Collaborate with cross-functional teams to integrate data solutions into business processes.
Required Qualifications
- Bachelor's degree (or higher) in Computer Science, Engineering, or a related field.
- 3+ years of experience in data engineering, big data processing, and cloud-based architecture.
- Strong proficiency in cloud services (Azure, AWS) and distributed computing frameworks.
- Mandatory hands-on experience with Databricks (Unity Catalog, Delta Live Tables, Delta Sharing, etc.)
- Expertise in SQL and database management systems (SQL Server, MySQL, etc.).
- Experience with data modeling, ETL processes, and data warehousing solutions.
- Knowledge of AI and machine learning concepts and their data requirements.
- Proficiency in Python, Scala, or similar programming languages.
- Basic knowledge of C# and/or Java programming.
- Familiarity with DevOps, CI/CD pipelines.
- High-level proficiency in English (written and spoken).
Preferred Qualifications
- Experience in the healthcare or life sciences industry.
- Understanding of regulatory compliance related to healthcare data (HIPAA, GDPR, etc.).
- Familiarity with interoperability standards such as HL7, FHIR, and EDI.
· 119 views · 2 applications · 19d
Data Engineer (Azure)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate
Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client's platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Key Responsibilities:
- Create and manage scalable data pipelines with Azure SQL and other databases;
- Use Azure Data Factory to automate data workflows;
- Write efficient Python code for data analysis and processing;
- Develop data reports and dashboards using Power BI;
- Use Docker for application containerization and deployment streamlining;
- Manage code quality and version control with Git.
Skills requirements:
- 3+ years of experience with Python;
- 2+ years of experience as a Data Engineer;
- Strong SQL knowledge, preferably with Azure SQL experience;
- Python skills for data manipulation;
- Expertise in Docker for app containerization;
- Familiarity with Git for managing code versions and collaboration;
- Upper-Intermediate level of English.
Optional skills (as a plus):
- Experience with Azure Data Factory for orchestrating data processes;
- Experience developing APIs with FastAPI or Flask;
- Proficiency in Databricks for big data tasks;
- Experience in a dynamic, agile work environment;
- Ability to manage multiple projects independently;
- Proactive attitude toward continuous learning and improvement.
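The Python-plus-SQL work described in the responsibilities can be pictured with a small, hedged sketch: clean raw readings and load them into a database. sqlite3 stands in for Azure SQL, and the field names are invented for this water-monitoring example:

```python
import sqlite3
from datetime import datetime

# Hedged sketch of one pipeline step: validate and coerce raw sensor
# readings, then load them. sqlite3 stands in for Azure SQL; the fields
# (site, ph, measured_at) are invented for illustration.
raw = [
    {"site": "A", "ph": "7.2", "measured_at": "2024-05-01T10:00:00"},
    {"site": "B", "ph": "n/a", "measured_at": "2024-05-01T10:05:00"},  # bad value
    {"site": "C", "ph": "6.9", "measured_at": "2024-05-01T10:10:00"},
]

def clean(record):
    """Coerce types; return None for rows that fail validation."""
    try:
        return (
            record["site"],
            float(record["ph"]),
            datetime.fromisoformat(record["measured_at"]).isoformat(),
        )
    except ValueError:
        return None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (site TEXT, ph REAL, measured_at TEXT)")
cleaned = [r for r in (clean(rec) for rec in raw) if r is not None]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", cleaned)
loaded = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

In the actual role, orchestration of steps like this would presumably sit in Azure Data Factory rather than a single script.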
We offer:- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
-
· 39 views · 3 applications · 9d
Senior Data Engineer
Full Remote · Ukraine · Product · 5 years of experience · Upper-Intermediate
Simulmedia is looking for an experienced and dynamic Data Engineer with a curious and creative mindset to join our Data Services team. The ideal candidate will have a strong background in Python, SQL, and REST API development. This is an opportunity to join a team of amazing engineers, data scientists, product managers, and designers who are obsessed with building the most advanced streaming advertising platform in the market. As a Data Engineer you will build services and data processing systems to support our platform. You will work on a team that empowers other teams to use our huge amount of data efficiently. Using a wide variety of technologies and tools, you will solve complicated technical problems and build solutions that make our services robust and flexible and our data easily accessible throughout the company.
Only for candidates from Ukraine. This position is located in either Kyiv or Lviv, Ukraine. The team is located in both Kyiv and Lviv and primarily works remotely with occasional team meetings in our offices.
Responsibilities:
- Build products that leverage our data and solve problems that tackle the complexity of streaming video advertising
- Develop containerized applications, largely in Python, that are deployed to the Cloud
- Work within an Agile team that releases cutting-edge new features regularly
- Learn new technologies, and make an outsized impact on our industry-leading tech platform
- Take a high degree of ownership and freedom to experiment with new technologies to improve our software
- Develop maintainable code and fault tolerant solutions
- Collaborate cross-functionally with product managers and stakeholders across the company to deliver on product roadmap
- Join a team of passionate engineers in search of elegant solutions to hard problems
Qualifications:
- Bachelorβs degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
- 7+ years of work experience as a data engineer
- Proficiency in Python and using it as the primary development language in recent years
- Proficiency in SQL and relational databases (Postgres, MySQL, etc)
- Ability to design complex data models (normalized and multi-dimensional)
- Experience building REST services (Flask, Django, aiohttp, etc.)
- Experience developing, maintaining, and debugging problems in large server-side code bases
- Good knowledge of engineering best practices and testing (unit test, integration test)
- The desire to take a high level of ownership of the things you work on
- Ability to learn new things quickly, maintain a high bar for quality, and be pragmatic
- Must be able to communicate with U.S.-based teams
- Experience with AWS is a plus
- Ability to work 11 am to 8 pm EEST
Our Tech Stack:
- Almost everything we run is on AWS
- We mostly use Python, Ruby and Go
- For data, we mostly use Postgres and Redshift
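As a hedged illustration of the "normalized and multi-dimensional" data modeling named in the qualifications, here is a tiny star-schema sketch; sqlite3 stands in for Postgres/Redshift, and the ad-impression tables are invented for this streaming-advertising context:

```python
import sqlite3

# Hedged star-schema sketch: one fact table joined to dimensions.
# sqlite3 stands in for Postgres/Redshift; tables are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_campaign (campaign_id INTEGER PRIMARY KEY, advertiser TEXT);
CREATE TABLE dim_date (date_id TEXT PRIMARY KEY);
CREATE TABLE fact_impressions (
    campaign_id INTEGER REFERENCES dim_campaign(campaign_id),
    date_id TEXT REFERENCES dim_date(date_id),
    impressions INTEGER
);
INSERT INTO dim_campaign VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO dim_date VALUES ('2024-05-01');
INSERT INTO fact_impressions VALUES (1, '2024-05-01', 100),
                                    (2, '2024-05-01', 250);
""")
# Typical rollup query: impressions per advertiser for a day.
totals = dict(conn.execute("""
    SELECT c.advertiser, SUM(f.impressions)
    FROM fact_impressions f JOIN dim_campaign c USING (campaign_id)
    GROUP BY c.advertiser
"""))
```

The design choice the interviewers likely probe: facts hold additive measures keyed by surrogate IDs, while descriptive attributes live in the dimensions.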
-
· 33 views · 3 applications · 4d
Data Engineer
Ukraine · Product · 2 years of experience · Pre-Intermediate
Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been creating and developing the banking system of our country. More than 5,500 employees work at Raiffeisen, including one of the largest product IT teams, which consists of over 800 professionals. Every day, we work hand in hand so that more than 2.7 million of our clients can receive quality service, use the bank's products and services, and develop their businesses, because we are #Разом_з_Україною.
Your future responsibilities:
- Create and validate ideas for building data-driven approaches at the domain scale
- Develop ETL data pipelines: loading from various sources, cleaning, transformation, enrichment with external data, normalization/denormalization
- Manage data ingestion, storage, and processing pipelines to ensure efficient data flow and accessibility
- Utilize Python and Scala extensively for data processing and analytics tasks
- Conduct data modeling to optimize database performance and ensure data integrity
- Implement and refactor Spark jobs to improve performance and scalability
- Develop Kafka-based solutions on Python for real-time data streaming and processing
- Create dashboards and alerts to address potential issues proactively
Your skills and experience:
- 3+ years of experience in a dedicated data engineer role
- Experience working with large structured and unstructured data in various formats
- Knowledge or experience with streaming data frameworks and distributed data architectures
- Experience in building lakehouses on AWS
- Practical experience in the operation of Big Data stack: Kafka, Spark
- Experience with Python (and Scala would be a plus)
- Expert knowledge of PySpark: transformations, aggregations, window functions, writing and optimizing UDFs
- Practical experience with RDBMS and NoSQL databases
- Development experience in Docker/Kubernetes environment
- An open, team-minded personality and strong communication skills
- Willingness to work in an agile environment
- Databricks experience would be a plus
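To make the "PySpark window functions" requirement concrete, here is a pure-Python sketch of what a partitioned running sum computes; in real PySpark this would be `Window.partitionBy(...).orderBy(...)` with `sum(...).over(window)`, and the data and column names here are invented:

```python
from itertools import groupby
from operator import itemgetter

# Pure-Python stand-in for a PySpark window aggregation:
# a running sum per partition, ordered within each partition.
rows = [
    {"client": "a", "ts": 1, "amount": 10},
    {"client": "a", "ts": 2, "amount": 5},
    {"client": "b", "ts": 1, "amount": 7},
]

def running_sum(rows):
    out = []
    # Sort by (partition key, order key), then accumulate per partition.
    ordered = sorted(rows, key=itemgetter("client", "ts"))
    for client, group in groupby(ordered, key=itemgetter("client")):
        total = 0
        for r in group:
            total += r["amount"]
            out.append({**r, "running_total": total})
    return out

result = running_sum(rows)
```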
We offer what matters most to you:
- Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
- Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
- Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
- Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
- Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, Agile. Corporate library and English lessons
- Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
- Career opportunities: we encourage advancement within the bank across functions
- Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Oracle, PostgreSQL, MS SQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
- Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bankβs veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes
Why Raiffeisen Bank?
- Our main value is people, and we support and recognize them, educate them and involve them in changes. Join Raifβs team because for us YOU matter!
- One of the largest lenders to the economy and agricultural business among private banks
- Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
- The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, Π‘ΠΠΠΠΠΠ)
- One of the largest IT product teams among the country's banks
- One of the largest taxpayers in Ukraine: 6.6 billion UAH paid in taxes in 2023
Opportunities for Everyone:
- Raif is guided by principles focused on human development and the well-being of 5,500 employees and over 2.7 million clients
- At Raif, we support principles of diversity, equality, and inclusivity
- We develop programs to support defenders
- We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your needs
- We work with students and older people, creating conditions for growth at any career stage
You matter at Raif!
Want to learn more? Follow us on social media:
Facebook, Instagram, LinkedIn
-
· 73 views · 13 applications · 8d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Intermediate
We are a forward-thinking software development company, known for our proprietary core platform and its integral components: Sportsbook, CRM, Risk Management, Antifraud, Bonus Engine, and Retail Software Suite.
Join our team to be part of pioneering solutions that not only streamline the work of thousands of employees in every department but also significantly boost profits and elevate product quality.
Your role with us is a key part of a team shaping the future of software solutions in an impactful and dynamic environment.
Requirements:
- 3+ years of professional experience as a Data Engineer or in a similar role.
- Expert-level proficiency in Python for data processing, scripting, and automation.
- Strong experience with AWS services (S3, Redshift, Glue, Lambda, etc.).
- Deep understanding of database technologies (SQL and NoSQL systems, data modeling, performance tuning).
- Experience with ETL/ELT pipeline design and development.
- Strong understanding of data warehousing concepts and best practices.
- Familiarity with version control systems like Git and CI/CD pipelines.
- Excellent problem-solving skills and the ability to work independently.
Responsibilities:
- Design, build, and maintain robust data pipelines and ETL/ELT processes.
- Optimize data systems and architecture for performance and scalability.
- Collaborate with data scientists, analysts, and other engineers to meet organizational data needs.
- Implement and manage data security and governance measures.
- Develop and maintain data documentation and ensure data quality standards are met.
- Lead data architecture discussions and make strategic recommendations.
- Monitor and troubleshoot data pipeline issues in production environments.
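A hedged sketch of the kind of data-quality gate a production pipeline might run before publishing a batch; the thresholds and field names are invented for illustration:

```python
# Hedged sketch of a data-quality check run before publishing a batch;
# min_rows, max_null_rate, and the user_id field are invented examples.
def check_batch(rows, min_rows=1, max_null_rate=0.1):
    """Return a list of human-readable issues; an empty list means the batch passes."""
    issues = []
    if len(rows) < min_rows:
        issues.append(f"row count {len(rows)} below minimum {min_rows}")
    if rows:
        null_rate = sum(r.get("user_id") is None for r in rows) / len(rows)
        if null_rate > max_null_rate:
            issues.append(
                f"user_id null rate {null_rate:.0%} exceeds {max_null_rate:.0%}"
            )
    return issues

ok = check_batch([{"user_id": 1}, {"user_id": 2}])
bad = check_batch([{"user_id": None}, {"user_id": 2}])
```

In practice a check like this would feed the monitoring and alerting mentioned above rather than just returning a list.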
Nice to have:
- Knowledge of streaming platforms such as Kafka or AWS Kinesis.
- Familiarity with machine learning workflows and model deployment.
- Previous experience in a leadership or mentorship role.
Work Conditions:
- Vacation: 22 working days per year
- Sick leave: 5 paid days per year
- Sick leave with a medical certificate: 100% paid, up to 10 days per year
- Benefit cafeteria: a personal limit for health support
Join our passionate and talented team and be at the forefront of transforming the iGaming industry with groundbreaking live tech solutions. If you're a visionary professional eager to shape the future of iGaming, Atlaslive invites you to apply and drive our continued success in this dynamic market.
Atlaslive – The Tech Behind the Game.
*All submitted resumes undergo a thorough review. If we do not contact you within 5 business days after your application, it means that we are unable to proceed with your candidacy at this time. However, we appreciate your interest in our company and will keep your contact details on file for future opportunities.