Jobs
· 68 views · 18 applications · 23d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate
Job overview:
We are seeking an experienced Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.
Does this relate to you?
- 5+ years of experience in the Data Engineering field
- Strong expertise in SQL and data modeling concepts.
- Hands-on experience with Airflow.
- Experience working with Redshift.
- Proficiency in Python for data processing.
- Strong understanding of data governance, security, and compliance.
- Experience in implementing CI/CD pipelines for data workflows.
- Ability to work independently and collaboratively in an agile environment.
- Excellent problem-solving and analytical skills.
A new team member will be in charge of:
- Design, develop, and maintain scalable data warehouse solutions.
- Build and optimize ETL/ELT pipelines for efficient data integration.
- Design and implement data models to support analytical and reporting needs.
- Ensure data integrity, quality, and security across all pipelines.
- Optimize data performance and scalability using best practices.
- Work with big data technologies such as Redshift.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Implement CI/CD pipelines for data workflows.
- Monitor, troubleshoot, and improve data processes and system performance.
- Stay updated with industry trends and emerging technologies in data engineering.
Already looks interesting? Awesome! Check out the benefits prepared for you:
- Regular performance and remuneration reviews
- Up to 25 paid days off per year for well-being
- Flexible cooperation hours with work-from-home
- Fully paid English classes with an in-house teacher
- Perks on special occasions such as birthdays, marriage, childbirth
- Referral program with attractive bonuses
- External & internal training and IT certifications
Ready to try your hand? Don't hesitate to send us your CV!
· 28 views · 2 applications · 22d
Salesforce Service Cloud Specialist
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · C1 - Advanced
Job: Salesforce Service Cloud Specialist
About the role: A Salesforce development company is looking for a Senior Salesforce Service Cloud Specialist who will be responsible for maintaining a scalable, secure, and user-friendly Salesforce environment for a call center with a high volume of requests.
You will receive ready-made infrastructure and resources; all that remains is to maintain and improve it.
Candidate requirements:
- 3+ years of practical experience working with Salesforce Service Cloud.
- Experience with Omni-Channel setup, configuration, and optimization.
- Relevant Salesforce Service Cloud certification.
- In-depth knowledge of the Salesforce platform architecture, its limitations, and data security.
- Experience with Salesforce Flows, SQL, data modeling, LWC, and data migration.
- Experience working in customer service or call centers.
- Knowledge of Agile methodologies and DevOps tools for Salesforce deployment.
- Excellent verbal and written communication skills with technical and non-technical audiences.
- Strong analytical mindset and problem-solving ability.
- High attention to detail with a focus on scalable, clear solutions.
- Ability to collaborate cross-functionally and adapt in a rapidly changing environment.
- Commitment to continuous learning and keeping up with updates to the Salesforce ecosystem.
- Certifications:
- Salesforce Advanced Administrator
- Platform Developer I
- Salesforce Certified Service Cloud Consultant
- Salesforce Certified Agentforce Specialist
- Sales Cloud Consultant
Key responsibilities:
- Maintain a complex Salesforce Service Cloud organization with over 1 million records and hundreds of users.
- Configure and optimize Omni-Channel routing, IVR, macros, telephony, and automation rules.
- Use Einstein AI tools (Service Analytics, advanced bots, case classification) to improve user experience and response quality.
- Use Agentforce Studio (Agent Builder, Prompt Builder, etc.) to manage and optimize intelligent agents.
- Develop and implement permission sets, role hierarchies, and ensure system uptime.
- Create advanced reports, dashboards, flows, formulas, and validation rules.
- Write clear documentation: technical specifications, change sets, release notes, and in-app instructions.
- Collaborate with developers, analysts, and business teams using Agile and DevOps practices (CI/CD pipelines).
Would be a plus:
- Experience with Einstein GPT, RAG (Retrieval-Augmented Generation), or similar LLM-enhanced AI tools.
- Knowledge of prompt engineering and LLM-context optimization.
- Hands-on work experience with Agentforce AI workflows and custom agent actions.
- Experience in cross-channel service strategies and AI voice integrations.
The company offers:
- An interesting and challenging project.
- Comfortable working conditions.
- Paid vacation and sick leave, additional days off.
- A friendly team and democratic corporate culture (corporate events to bring the team together, corporate parties).
- Full-time employment with a flexible work schedule.
- Free English grammar and vocabulary courses.
Send us your resume; we will be happy to talk to you!
· 73 views · 23 applications · 22d
Data Engineer
Full Remote · Worldwide · 5 years of experience · Native
Key Responsibilities
- Architect, build, and maintain high-performance data pipelines and warehouses in Snowflake.
- Design and optimize dimensional data models (star & snowflake schemas) using Kimball methodology.
- Implement automated data quality checks, monitoring, and alerting.
- Partner with analytics, product, and engineering teams to translate requirements into technical solutions.
- Mentor junior engineers and champion data engineering best practices.
- Develop and optimize ETL/ELT workflows integrating multiple cloud and SaaS data sources.
- Continuously tune Snowflake performance and manage cloud resources efficiently.
- Document data architecture, transformations, and operational processes.
Required Qualifications
- 5+ years of experience in data engineering.
- 3+ years of hands-on Snowflake experience, including query and warehouse optimization.
- Strong skills in dimensional modeling and data warehouse design.
- Proficiency in SQL and at least one programming language (Python preferred).
- Experience with modern data pipeline tooling and orchestration.
- Solid understanding of data warehouse concepts and best practices.
Preferred Skills
- Experience with dbt (data build tool).
- Familiarity with AWS, Azure, or GCP (AWS preferred).
- Experience with Git-based workflows and CI/CD pipelines.
- Knowledge of data governance and security in the cloud.
- Exposure to BI tools such as Looker, Tableau, or Power BI.
· 24 views · 0 applications · 22d
Data Ops/Engineer (with Capital markets exp.)
Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate
Develop a scalable data collection, storage, and distribution platform to house data from vendors, research providers, exchanges, PBs, and web scraping. Make data available to systematic & fundamental PMs, and to enterprise functions: Ops, Risk, Trading, and Compliance. Develop internal data products and analytics.
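The collection side of such a platform typically starts with small scraping jobs that land raw payloads in object storage for the Snowflake/AWS pipelines to pick up later. Purely as an illustration (the bucket name, URL, and key layout below are hypothetical, not details from this role), a minimal Python sketch of that pattern:

```python
import datetime

import boto3
import requests

S3_BUCKET = "example-market-data-raw"  # hypothetical landing bucket


def scrape_to_s3(url: str, dataset: str) -> str:
    """Fetch one vendor/web page and land the raw payload in S3 for later processing."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()

    # Partition raw files by dataset and ingestion date so downstream
    # pipelines can pick them up incrementally.
    today = datetime.date.today().isoformat()
    key = f"raw/{dataset}/{today}/{datetime.datetime.utcnow():%H%M%S}.html"

    boto3.client("s3").put_object(Bucket=S3_BUCKET, Key=key, Body=response.content)
    return key


if __name__ == "__main__":
    print(scrape_to_s3("https://example.com/research/daily", dataset="research_notes"))
```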
Responsibilities:
Web scraping using scripts/APIs/Tools
Help build and maintain greenfield data platform running on Snowflake and AWS
Understand the existing pipelines and enhance them to meet new requirements.
Onboarding new data providers
Data migration projects
Mandatory Skills Description:
• 10+ years of experience as a Data Engineer
• SQL
• Python
• Linux
• Containerization (Docker, Kubernetes)
• Good communication skills
• AWS
• Strong DevOps skills (K8s, Docker, Jenkins)
• Ready to work in the EU time zone
Nice-to-Have Skills Description:
• Market data projects / capital markets experience
• Snowflake is a big plus
• Airflow
· 45 views · 0 applications · 21d
Lead Big Data Engineer
Full Remote · Ukraine · 6 years of experience · B2 - Upper Intermediate
Role Overview:
As a Lead Big Data Engineer, you will combine hands-on engineering with technical leadership. You'll be responsible for designing, developing, and optimizing Spark-based big data pipelines in Palantir Foundry, ensuring high performance, scalability, and reliability. You will also mentor and manage a team of engineers, driving best practices in big data engineering, ensuring delivery excellence, and collaborating with stakeholders to meet business needs. While our project uses Palantir Foundry, prior experience with it is a plus, but not mandatory.
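As a rough illustration of the Spark work this role centres on (not project code; the paths, column names, and shuffle-partition setting are assumptions), a minimal PySpark sketch of a batch aggregation with a broadcast join, one common performance-tuning lever:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("events_daily_aggregation")
    .config("spark.sql.shuffle.partitions", "400")  # tune to actual data volume
    .getOrCreate()
)

# Hypothetical inputs: a large event table and a small user dimension
events = spark.read.parquet("s3://example-bucket/raw/events/")
users = spark.read.parquet("s3://example-bucket/dims/users/")

# Broadcasting the small dimension avoids a shuffle-heavy join
enriched = events.join(F.broadcast(users), on="user_id", how="left")

daily = (
    enriched
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "country")
    .agg(F.count("*").alias("events"), F.countDistinct("user_id").alias("active_users"))
)

(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_events/"))
```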
Key Responsibilities:
- Lead the design, development, and optimization of large-scale, Spark-based (PySpark) data processing pipelines.
- Build and maintain big data solutions using Palantir Foundry
- Ensure Spark workloads are tuned for performance and cost efficiency.
- Oversee and participate in code reviews, architecture discussions, and best practice implementation.
- Maintain high standards for data quality, security, and governance.
- Manage and mentor a team of Big Data Engineers, providing technical direction
- Drive continuous improvement in processes, tools, and development practices.
- Foster collaboration across engineering, data science, and product teams to align on priorities and solutions.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years in Big Data Engineering, with at least 1-2 years in a lead (tech/team lead) role.
- Deep hands-on expertise in Apache Spark (PySpark) for large-scale data processing.
- Proficiency in Python and distributed computing principles.
- Experience designing, implementing, and optimizing high-volume, low-latency data pipelines.
- Strong leadership, communication, and stakeholder management skills.
- Experience with Palantir Foundry is a plus, but not required.
- Familiarity with CI/CD and infrastructure as code (Terraform, CloudFormation) is desirable.
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
· 34 views · 1 application · 21d
Senior Data Streaming Engineer
Hybrid Remote · Ukraine (Kyiv, Lviv) · 4 years of experience · B2 - Upper Intermediate
Who we are!
At Levi9, we are passionate about what we do. We love our work, and together in a team, we are smarter and stronger. We are looking for skilled team players who make change happen. Are you one of these players?
About the role
As a Data Streaming Engineer in the customer team, you will leverage millions of daily connections with readers and viewers across the online platforms as a competitive advantage to deliver reliable, scalable streaming solutions. You will collaborate closely with analysts, data scientists and developers across all departments throughout the entire customer organisation. You will design and build cloud-based data pipelines, both batch and streaming, and their underlying infrastructure. In short: you live up to our principle, You Build It, You Run It.
You will be working closely with a tech stack that includes Scala, Kafka, Kubernetes, Kafka Streams, and Snowflake.
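The production stack here is Scala with Kafka Streams, so the following is only an illustration of the underlying idea behind the real-time customer profile described in the responsibilities below: consuming behaviour events and folding them into a per-user aggregate, sketched in Python with confluent-kafka. The topic name, group id, and event fields are assumptions.

```python
import json
from collections import defaultdict

from confluent_kafka import Consumer

# Hypothetical broker, topic, and consumer group
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "profile-builder",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["page-views"])

# In-memory per-user profile: counts of article categories read
profiles: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Fold each behaviour event into the per-user profile
        profiles[event["user_id"]][event["article_category"]] += 1
finally:
    consumer.close()
```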
Responsibilities
- Deliver reliable, scalable streaming solutions
- Collaborate closely with analysts, data scientists and developers across all departments throughout the entire organisation
- Design and build cloud-based data pipelines, both batch and streaming, and their underlying infrastructure
- You Build It, You Run It.
- Building a robust real-time customer profile by aggregating their online behaviour and allowing the usage of this profile to recommend other articles on customers' online platforms.
- Co-develop and cooperate on streaming architectures from inception and design, through deployment, operation and refinement to meet the needs of millions of real-time interactions.
- Closely collaborate with business stakeholders, data scientists and analysts in our daily work, data engineering guild and communities of practice.
Requirements
- Experience implementing highly available and scalable big data solutions
- In-depth knowledge of at least one cloud provider, preferably AWS
- Proficiency in languages such as Scala, Python, or shell scripting, specifically in the context of streaming data workflows
- Extensive experience with streaming technologies, so you can challenge the existing setup.
- Experience with Infrastructure as Code and CI/CD pipelines
- Full understanding of modern software engineering best practices
- Experience with Domain-driven design
- DevOps mindset
- You see the value in a team and enjoy working together with others, also with techniques like pair programming
- You either have an AWS certification or are willing to achieve AWS certification within 6 months (minimum: AWS Certified Associate)
- We welcome candidates living in Ukraine or Europe who are willing and able to travel for business trips to Belgium and the Netherlands.
Interview stages
- HR interview
- Technical interview in English
- Test assignment
- Final interview
9 reasons to join us:
- Today we're working with the technology of tomorrow.
- We don't wait for a change. We are the change.
- We're experts in creating experts (Levi9 academy, Lead9 program for leaders).
- No micromanagement. We are free birds with a clear understanding of what high performance is!
- Learning at Levi9 never stops (unlimited Udemy for Business, meetups, English & German courses, professional trainings).
- Here you can train your body and mind.
- We've gathered the best locations - comfortable, cosy and pet-friendly offices in Kyiv (5 minutes from Olimpiyska metro station) and Lviv (overlooking the Stryiskyi Park) with regular offline internal events
- We have a master's degree in work-life balance.
- We are actively supporting Ukraine with constant donations and volunteering
Simple step to get this job
Click the APPLY NOW button and leave your contacts!
· 80 views · 15 applications · 21d
Senior Data Platform Engineer
Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate
We, at Grid Dynamics, are seeking a Senior Platform Data Engineer to join our team of experts. This role focuses on developing and maintaining a scalable data platform using cutting-edge technologies to meet the client's evolving needs. The ideal candidate is a proactive problem-solver, passionate about working with complex data systems, and enjoys collaborating in an innovative and supportive environment.
About the Project:
Join our team working with the largest pan-European online car marketplace with over 1.5 million listings and 43,000 car dealer partners. Our client provides inspiring solutions and services that empower customers and deliver real value. As part of this dynamic project, you'll play a key role in shaping and optimizing their data platform, leveraging modern tools and methodologies.
Responsibilities:
Core Data Platform Development:
Develop and maintain scalable data pipelines and integrations to manage increasing data volume and complexity.
Design and implement data contracts to streamline communication and dependencies between teams.
Build pipelines from scratch and on templates, utilizing modern tools and techniques.
Collaboration & Quality:
Work with analytics and business teams to improve data models feeding business intelligence tools, fostering data-driven decision-making.
Implement and monitor systems ensuring data quality, governance, and accuracy for all production data.
Data Infrastructure Management:
Manage and enhance the data platform, incorporating technologies like Airflow, Glue Jobs, and data mesh principles.
Design data integrations and establish a data quality framework.
Define company data assets, document transformations, and maintain engineering wikis.
Operations & Compliance:
Collaborate with engineering, product, and analytics teams to develop and maintain strategies for long-term data platform architecture.
Troubleshoot and resolve data-related issues in production environments.
Tech Stack:
Cloud Technologies: AWS (Athena, Glue, EMR, Firehose, etc.), Azure, GCP.
Data Tools: Airflow, Hadoop, Spark, Trino, Kafka.
Programming Languages: Python, SQL.
Additional Tools: DataStage, Jenkins, Git, Linux/AIX/z/OS.
Qualifications:
Must have
Expertise in AWS services, especially Glue, Athena, MWAA
Proficiency in Python and SQL
Experience with streaming platforms (Kafka or Firehose)
Experience with third-party solutions and APIs
Nice to have
Proficiency in data modelling techniques and best practices
Experience in implementing data contracts
Experience in applying data governance policies
Experience with data quality frameworks (Great Expectations, Soda)
Familiarity with the data mesh architecture and its principles
· 60 views · 2 applications · 20d
Data Engineer (NLP-Focused)
Hybrid Remote · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate
We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar's NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.
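As a rough sketch of the pipeline work described above (this is not Kyivstar's actual DAG; the task names, schedule, and processing steps are placeholders), an Airflow 2.x skeleton for a text-corpus pipeline might look like this:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_corpus(**context):
    """Placeholder: pull raw documents from crawls, APIs, or internal databases."""
    print("extracting raw text")


def clean_and_dedupe(**context):
    """Placeholder: normalize text, filter toxic content, drop duplicates, remove personal data."""
    print("cleaning and de-duplicating")


def load_corpus(**context):
    """Placeholder: write the cleaned corpus to the data lake / warehouse."""
    print("loading cleaned corpus")


with DAG(
    dag_id="ukrainian_text_corpus",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_corpus)
    clean = PythonOperator(task_id="clean_and_dedupe", python_callable=clean_and_dedupe)
    load = PythonOperator(task_id="load", python_callable=load_corpus)

    extract >> clean >> load
```

In practice each step would call into the team's own processing libraries rather than print placeholders.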
What you will do
- Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
- Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
- Implement NLP/LLM-specific data processing: clean and normalize text, including filtering of toxic content, de-duplication, de-noising, and detection and removal of personal data.
- Build SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
- Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
- Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
- Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
- Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
- Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
- Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
Qualifications and experience needed
- Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
- NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project's focus. Understanding of FineWeb2 or a similar processing pipeline approach.
- Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
- Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
- Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
- Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
- Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
- Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
A plus would be
- Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
- Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
- CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
- Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
- Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.
What we offer
- Office or remote: it's up to you. You can work from anywhere, and we will arrange your workplace.
- Remote onboarding.
- Performance bonuses.
- We train employees with the opportunity to learn through the company's library, internal resources, and programs from partners.
- Health and life insurance.
- Wellbeing program and corporate psychologist.
- Reimbursement of expenses for Kyivstar mobile communication.
· 81 views · 10 applications · 17d
Middle/Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate
About the project:
Our customer is the European online car market with over 30 million monthly users and a market presence in 18 countries. The company is now merging with a similar company in Canada and needs support with this transition. As a Data & Analytics Engineer, you will play a pivotal role in shaping the future of online car markets and enhancing the user experience for millions of car buyers and sellers.
Requirements:
- 5+ years of experience in Data Engineering or Analytics Engineering roles
- Strong experience building and maintaining pipelines in BigQuery, Athena, Glue, and Airflow
- Advanced SQL skills and experience designing dimensional models (star/snowflake)
- Experience with AWS Cloud
- Solid Python skills, especially for data processing and workflow orchestration
- Familiarity with data quality tools like Great Expectations
- Understanding of data governance, privacy, and security principles
- Experience working with large datasets and optimizing performance
- Proactive problem solver who enjoys building scalable, reliable solutions
- English: Upper-Intermediate or higher
- Great communication skills
Responsibilities:
- Collaborate with analysts, engineers, and stakeholders to understand data needs and deliver solutions
- Build and maintain robust data pipelines that deliver clean and timely data
- Organize and transform raw data into well-structured, scalable models
- Ensure data quality and consistency through validation frameworks like Great Expectations
- Work with cloud-based tools like Athena and Glue to manage datasets across different domains
- Help set and enforce data governance, security, and privacy standards
- Continuously improve the performance and reliability of data workflows
- Support the integration of modern cloud tools into the broader data platform
· 65 views · 19 applications · 2d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
Automat-it is where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI support that drives real results.
We work across EMEA, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.
We're looking for a Senior Data Engineer to play a key role in building our Data & Analytics practice and delivering modern data solutions on AWS for our clients. In this role, you'll be a customer-facing, hands-on technical engineer who designs and implements end-to-end data pipelines and analytics platforms using AWS services like AWS Glue, Amazon OpenSearch Service, Amazon Redshift, and Amazon QuickSight. From migrating legacy ETL workflows to AWS Glue to building scalable data lakes for AI/ML training, you'll ensure our customers can unlock the full value of their data. You'll work closely with client stakeholders (from startup founders and CTOs to data engineers) to create secure, cost-efficient architectures that drive real business impact.
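For a flavour of the Glue migration work mentioned above, here is a minimal PySpark Glue job skeleton; the catalog database, table, and output bucket are hypothetical, and a real job would add transforms, partitioning, and error handling:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (hypothetical names)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Keep and cast only the columns the curated layer needs
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "double", "amount", "double"),
        ("created_at", "string", "created_at", "timestamp"),
    ],
)

# Write the curated output as Parquet so Athena/Redshift Spectrum can query it
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```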
Work location: remote from Ukraine
If you are interested in this opportunity, please submit your CV in English.
Responsibilities
- Design, develop, and deploy AWS-based data and analytics solutions to meet customer requirements. Ensure architectures are highly available, scalable, and cost-efficient.
- Develop dashboards and analytics reports using Amazon QuickSight or equivalent BI tools.
- Migrate and modernize existing data workflows to AWS. Re-architect legacy ETL pipelines to AWS Glue and move on-premises data systems to Amazon OpenSearch/Redshift for improved scalability and insights.
- Build and manage multi-modal data lakes and data warehouses for analytics and AI. Integrate structured and unstructured data on AWS (e.g. S3, Redshift) to enable advanced analytics and generative AI model training using tools like SageMaker.
- Implement infrastructure automation and CI/CD for data projects. Use Infrastructure as Code (Terraform) and DevOps best practices to provision AWS resources and continuously integrate/deploy data pipeline code.
- Lead customer workshops and proof-of-concepts (POCs) to demonstrate proposed solutions. Run technical sessions (architecture whiteboards, Well-Architected reviews) to validate designs and accelerate customer adoption.
- Collaborate with engineering teams (Data Scientist, DevOps and MLOps teams) and stakeholders to deliver projects successfully. Ensure solutions follow AWS best practices and security guidelines, and guide client teams in implementing according to the plan.
- Stay up-to-date on emerging data technologies and mentor team members. Continuously learn new AWS services (e.g. AWS Bedrock, Lake Formation) and industry trends, and share knowledge to improve our delivery as we grow the Data & Analytics practice.
Requirements
- 5+ years of experience in data engineering, data analytics, or a related field, including 3+ years of hands-on AWS experience (designing, building, and maintaining data solutions on AWS).
- Production experience with AWS cloud and data services, including building solutions at scale with tools like AWS Glue, Amazon Redshift, Amazon S3, Amazon Kinesis, Amazon OpenSearch Service, etc.
- Skilled in AWS analytics and dashboard tools: hands-on expertise with services such as Amazon QuickSight or other BI tools (Tableau, Power BI) and Amazon Athena.
- Experience with ETL pipelines: ability to build ETL/ELT workflows (using AWS Glue, Spark, Python, SQL).
- Experience with data warehousing and data lakes - ability to design and optimize data lakes (on S3), Amazon Redshift for data warehousing, and Amazon OpenSearch for log/search analytics.
- Proficiency in programming (Python/PySpark) and SQL skills for data processing and analysis.
- Understanding of cloud security and data governance best practices (encryption, IAM, data privacy).
- Excellent communication skills with an ability to explain complex data concepts in clear terms. Comfortable working directly with clients and guiding technical discussions.
- Proven ability to lead end-to-end technical engagements and work effectively in fast-paced, Agile environments.
- AWS certifications, especially in Data Analytics or Machine Learning, are a plus.
- DevOps/MLOps knowledge: experience with Infrastructure as Code (Terraform), CI/CD pipelines, containerization, and AWS AI/ML services (SageMaker, Bedrock) is a plus.
Benefits
- Professional training and certifications covered by the company (AWS, FinOps, Kubernetes, etc.)
- International work environment
- Referral program: enjoy cooperation with your colleagues and get a bonus
- Company events and social gatherings (happy hours, team events, knowledge sharing, etc.)
- Wellbeing and professional coaching
- English classes
- Soft skills training
Country-specific benefits will be discussed during the hiring process.
Automat-it is committed to fostering a workplace that promotes equal opportunities for all and believes that a diverse workforce is crucial to our success. Our recruitment decisions are based on your experience and skills, recognising the value you bring to our team.
· 13 views · 2 applications · 16d
IT Infrastructure Administrator
Office Work · Ukraine (Dnipro) · Product · 1 year of experience
Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.
Key responsibilities:
- Administration of Active Directory
- Managing group policies
- Managing services via PowerShell
- Administration of VMWare platform
- Administration of Azure Active Directory
- Administration of Exchange 2016/2019 mail servers
- Administration of Exchange Online
- Administration of VMWare Horizon View
Required professional knowledge and skills:
- Experience in writing automation scripts (PowerShell, Python, etc.)
- Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
- Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
- Experience with Veeam Backup & Replication, VMWare vSphere (vCenter, DRS, vMotion, HA), VMWare Horizon View
- Windows Server 2019/2025 (installation, configuration, and adaptation)
- Diagnostics and troubleshooting
- Working with anti-spam systems
- Managing mail transport systems (exim) and monitoring systems (Zabbix)
We offer:
- Interesting projects and tasks
- Competitive salary (discussed during the interview)
- Convenient work schedule: Mon-Fri, 9:00-18:00; partial remote work possible
- Official employment, paid vacation, and sick leave
- Probation period: 2 months
- Professional growth and training (internal training, reimbursement for external training programs)
- Discounts on Biosphere Corporation products
- Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)
Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).
Learn more about Biosphere Corporation, our strategy, mission, and values at:
http://biosphere-corp.com/
https://www.facebook.com/biosphere.corporation/
Join our team of professionals!
By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
If your application is successful, we will contact you within 1-2 business days.
· 53 views · 0 applications · 16d
Big Data Software Engineer
Full Remote · Poland · Product · 2 years of experience · B2 - Upper Intermediate
Software Engineer - Data Team
Founded in 2009, we are a global leader in online multi-vertical marketplaces. Through our leading brands, we help millions of people worldwide make informed decisions every day. Our proprietary platform harnesses AI and ML technologies to help consumers choose the right products and services tailored to their needs.
We are looking for a strong and passionate Java Software Developer to join our Data group. You'll help us build the data platform and applications that power analytics, reporting, machine learning, and other critical business needs.
Responsibilities
- Design, build & maintain data services with Java, Spring Boot, Kafka, MongoDB, Kubernetes, MySQL, Python, Streamlit, and more.
- Develop large-scale data platforms using Spark, Kafka, Snowflake, Airflow, Flink, Iceberg, and other big-data frameworks.
- Translate product requirements into clear technical designs and deliver impactful projects end-to-end.
- Leverage AI developer tools (e.g., Cursor) and multi-agent frameworks to accelerate development and align output with business goals.
- Create monitoring & operational tooling to ensure data reliability, performance, and usability.
- Contribute to a collaborative engineering culture: participate in code/design reviews, share knowledge, and support teammates.
Requirements
- 2+ years' experience with Big Data and Java (strong juniors or mid-level welcome).
- Experience building, optimizing, and maintaining large-scale Big Data systems with open-source frameworks (Kafka, Spark, Hive, Iceberg, Airflow, etc.).
- 2+ years' experience with Python.
- Strong expertise with SQL; experience with SQL/NoSQL/key-value databases.
- Hands-on experience with Spring / Spring Boot.
- Experience with AWS cloud services (EMR, Aurora, Snowflake, S3, Athena, Glue) is an advantage.
- Proven ability to debug and identify root causes in distributed production platforms.
- Hands-on experience with CI/CD, Microservices, Docker, Kubernetes.
· 15 views · 1 application · 16d
PHP developer/ Data Engineer
Hybrid Remote · Poland, Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · Ukrainian Product
Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return.
Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free. Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.
Requirements:
- Design and develop scalable backend services using PHP 7 / 8.
- Strong understanding of OOP concepts, design patterns, and clean code principles.
- Extensive experience in MySQL, with expertise in database design, query optimization, and indexing.
- Experience of work with NoSQL databases (e.g., Redis).
- Proven experience working on high-load projects
- Understanding of ETL processes and data integration
- Experience of work with ClickHouse
- Strong experience with API development
- Strong knowledge of Symfony 6+, yii2
- Experience with RabbitMQ
Nice to Have:
- AWS services
- Payment API (Stripe, SolidGate etc.)
- Docker, GitLab CI
- Python
Responsibilities:
- Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
- API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
- Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
- Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy.
- Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.
What we offer:
For personal growth:
- A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
- An educational allowance to ensure that your skills stay sharp;
- English and German classes to strengthen your capabilities and widen your knowledge.
For comfort:
- A great environment where youβll work with true professionals and amazing colleagues whom youβll call friends quickly;
- The choice of working remotely or in our office space located on Podil, equipped with everything you might need for productive and comfortable work.
For health:
- Medical insurance;
- Twenty-one days of paid sick leave per year;
- Healthy fruit snacks full of vitamins to keep you energized
For leisure:
- Twenty-one days of paid vacation per year;
- Fun times at our frequent team-building activities.
· 55 views · 14 applications · 16d
Senior Data Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
We are looking for an experienced Data Engineer to join a long-term B2C project. The main focus is on building Zero ETL pipelines, as well as maintaining and improving existing ones.
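On AWS, "Zero ETL" usually refers to managed integrations (for example, Firehose delivering straight into the lake, or Aurora-to-Redshift integration) rather than hand-written transform code, so the custom code tends to be thin glue. As a hedged sketch of one such piece, a Lambda handler that forwards B2C events to a Kinesis Data Firehose delivery stream (the stream name and event shape are assumptions):

```python
import json

import boto3

firehose = boto3.client("firehose")

# Hypothetical delivery stream that lands events in the S3 data lake,
# where Glue and Redshift can query them without a separate ETL step.
STREAM_NAME = "b2c-events-to-datalake"


def handler(event, context):
    """Forward a batch of incoming application events to Firehose."""
    records = [
        {"Data": (json.dumps(item) + "\n").encode("utf-8")}
        for item in event.get("events", [])
    ]
    if records:
        firehose.put_record_batch(DeliveryStreamName=STREAM_NAME, Records=records)
    return {"forwarded": len(records)}
```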
Responsibilities:
- Build and maintain scalable Zero ETL pipelines.
- Design and optimize data warehouses and data lakes on AWS (Glue, Firehose, Lambda, SageMaker).
- Work with structured and unstructured data, ensuring quality and accuracy.
- Optimize query performance and data processing workflows (Spark, SQL, Python).
- Collaborate with engineers, analysts, and business stakeholders to deliver data-driven solutions.
Requirements:
- 5+ years of experience in Data Engineering.
- Advanced proficiency in Spark, Python, SQL.
- Expertise with AWS Glue, Firehose, Lambda, SageMaker.
- Experience with ETL tools (dbt, Airflow etc.).
- Background in B2C companies is preferred.
- JavaScript and Data Science knowledge are a plus.
- Degree in Computer Science (preferred, not mandatory).
· 25 views · 1 application · 16d
BigData Engineer (Scala, Spark) IRC273773
Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate
Description
Founded in 2007, Rubicon Project's pioneering technology created a new model for the advertising industry. Today, our automated advertising platform is used by the world's leading publishers and applications to transact with top brands around the globe, enabling them to reach more than 1 billion consumers. Rubicon Project operates the largest independent Display Advertising Exchange and Supply Side Platform that automates the buying and selling of Display Advertising across all formats (banner, video) on all devices (desktop browsers, mobile devices, billboards). Rubicon Project auctions over 20 Billion Ads on a daily basis in real-time in less than 1/2 of a second each. Rubicon Project is a publicly traded company (NYSE: RUBI) headquartered in Los Angeles, California, USA.
Requirements
- Experience building and operating large-scale, high-throughput, enterprise apps;
- 3+ years of working experience in server-side Scala;
- Working experience with data processing systems (Hadoop, Hive, Kafka, Spark);
- A strong understanding of algorithms, data structures, and an ability to recognise the business and technical trade-offs between different solutions;
- Expertise in threading and concurrency;
- Experience with automated testing frameworks (TDD, Mocking, Unit/Functional/Integration);
- Experience with SQL queries (MySQL is a plus);
- Experience with development and CI tools (Maven, git, Jenkins, Puppet, Crucible, Jira, Airflow, Python is a plus);
- Experience working in a Linux environment;
- Expertise in building software in an agile development environment;
- Demonstrated strong English language verbal and written communication skills.
Job responsibilities
- Write production-ready code and unit tests that meet both system and business requirements;
- Respond to feature requests, bug reports, performance issues, and ad-hoc questions;
- Work collaboratively with multiple teams to deliver quality software.
- Comfortable with multitasking in a fast-paced dev process;
- Support operation of services in production.