Jobs
· 66 views · 8 applications · 11d
Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
We’re hunting for a Senior Data Engineer to join a high-impact, long-term project, building cutting-edge data infrastructure. You’ll craft robust data pipelines, ETL workflows, and scalable AWS cloud solutions using Python and AWS CDK, while powering data visualization with tools like Tableau or QuickSight. If you’re a pro at turning complex data into actionable results and thrive in a remote, collaborative environment, let’s talk!
What You’ll Bring:
• Expertise in Python and AWS (S3, Lambda, Glue, Redshift, etc.) for building scalable infrastructure.
• Proficiency in AWS CDK for infrastructure-as-code.
• Strong skills in ETL and data pipeline development.
• Experience with data visualization tools (Tableau, QuickSight, or similar).
• Upper-Intermediate+ English for seamless communication.
• Ability to work remotely with partial CET time zone overlap.
Tech Stack:
Python, AWS (S3, Lambda, Glue, Redshift), AWS CDK, ETL, Data Pipelines, Tableau/QuickSight
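For illustration only: a minimal AWS CDK v2 sketch in Python of the infrastructure-as-code work this stack implies (an S3 landing bucket plus a Lambda ingest function). The stack name, resource names, and the lambda_src asset path are hypothetical placeholders, not taken from the project.

```python
# Minimal AWS CDK v2 sketch (Python): S3 landing bucket + Lambda ingest function.
# Resource names and the "lambda_src" asset path are hypothetical placeholders.
from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct

class DataPlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Raw landing zone for ingested files
        raw_bucket = s3.Bucket(self, "RawDataBucket", versioned=True)

        # Lambda that would parse incoming objects and trigger downstream ETL
        ingest_fn = _lambda.Function(
            self,
            "IngestFunction",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda_src"),
        )
        raw_bucket.grant_read(ingest_fn)  # least-privilege read access

app = App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```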
Why This Rocks:
• Tackle a long-term project with a top-tier European client.
• Work with modern cloud and Big Data tech in a fully remote setup.
• Join a sharp, distributed team where your impact matters.
Details: Full-time, remote, long-term.
Send your resume. Show us your data engineering chops!
· 84 views · 14 applications · 10d
Data Engineer (6 months, Europe-based)
Full Remote · EU · 4 years of experience · Upper-Intermediate
The client is seeking an experienced Data Engineer to build and migrate data solutions to Google Cloud Platform (GCP) in support of data analytics and ML/AI initiatives.
Key responsibilities:
- Develop data products on GCP using BigQuery and DBT
- Integrate data from multiple sources using Python and Cloud Functions
- Orchestrate pipelines with Terraform and Cloud Workflows
- Collaborate with Solution Architects, Data Scientists, and Software Engineers
Tech stack:
GCP (BigQuery, Cloud Functions, Cloud Workflows), DBT, Python, Terraform, Git
Requirements:
- Ability to work independently and within cross-functional teams;
- Strong hands-on experience;
- English: Upper Intermediate or higher
Nice to have:
- Experience with OLAP cubes and PowerBI
· 50 views · 11 applications · 10d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate
📣 Senior Data Engineer | Fintech | Remote | Full-Time
🧠 Level: Senior
🗣️ English: Upper-Intermediate or higher
🕒 Workload: Full-time
🌍 Location: Fully remote (Preference for time zones close to Israel)
🕐 Time Zone: CET (Israel)
🚀 Start Date: ASAP
📆 Duration: 6+ months
🧾 About the Client:
Our client is an innovative fintech company dedicated to optimizing payment transaction success rates. Their advanced technology integrates seamlessly into existing infrastructures, helping payment partners and platforms recover lost revenue by boosting transaction approval rates.
🔧 Project Stage: Ongoing development
💼 What You’ll Be Doing:
- Design and implement robust, scalable data pipelines and ETL workflows
- Develop comprehensive end-to-end data solutions to support analytics, product, and business needs
- Define data requirements, architect systems, and build reliable data models
- Integrate backend logic into data processes for actionable insights
- Optimize system performance, automate processes, and monitor for improvements
- Collaborate closely with cross-functional teams (Product, Engineering, Data Science)
🧠 Must-Have Skills:
- 5+ years of experience in data engineering
- Deep expertise in building data warehouses and BI ecosystems
- Strong experience with modern analytical databases (e.g., Snowflake, Redshift)
- Proficient with data transformation tools (e.g., dbt, Dataform)
- Familiar with orchestration tools (e.g., Airflow, Prefect)
- Skilled in Python or Java and advanced SQL (including performance tuning)
- Experience managing large-scale data systems in cloud environments
- Infrastructure as code and DevOps mindset
🤝 Soft Skills:
- High ownership and accountability
- Strong communication and collaboration abilities
- Experience in dynamic, startup-like environments
- Analytical thinker with a proactive mindset
- Comfortable working independently
- Fluent spoken and written English
🧪 Tech Stack:
Python or Java, SQL, Snowflake, Redshift, dbt, Dataform, Airflow
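For illustration only: a minimal Airflow 2.x DAG sketch (Python) of the orchestration work this stack implies. The DAG id, task names, schedule, and the extract/load callables are hypothetical placeholders; Airflow 2.4+ is assumed for the schedule argument.

```python
# A minimal Airflow 2.x sketch of a daily extract-then-load pipeline.
# DAG id, task names, and the callables below are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_transactions(**context):
    # Placeholder: pull raw payment events from the source system
    print("extracting transactions...")

def load_to_warehouse(**context):
    # Placeholder: load transformed rows into the analytical database
    print("loading to warehouse...")

with DAG(
    dag_id="payments_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_transactions)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load  # run extract before load
```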
📋 Interview Process:
- English Check (15 min)
- Technical Interview (1–1.5 hours)
- Final Interview (1 hour) – Client
· 42 views · 0 applications · 10d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · Intermediate
Job Description
Strong experience in designing, building, and maintaining data pipelines with Databricks Workflows for data ingestion and transformation using PySpark
Design, create and maintain data pipelines that leverage Delta tables for efficient data storage and processing within a Databricks environment
Experience with Unity Catalog
Experience with RDBMS, such as MS SQL or MySQL, as well as NoSQL
Data modeling and schema design
Proven understanding and demonstrable implementation experience in Azure (Databricks + Key Vault + ADLS Gen 2)
Excellent interpersonal and teamwork skills
Strong problem-solving, troubleshooting and analysis skills
Good knowledge of Agile Scrum
MUST HAVE SKILLS
Databricks, PySpark, MS SQL, ADLS Gen 2, Unity Catalog
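For illustration only: a minimal PySpark sketch of a Databricks-style ingestion step implied by these skills, reading a raw CSV drop from ADLS Gen2 and appending it to a Unity Catalog Delta table. The paths, catalog/schema/table names, and columns are hypothetical; on Databricks a SparkSession is already provided and Delta support is built in.

```python
# Minimal PySpark sketch: ingest a raw CSV drop and append it to a Delta table.
# Paths and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("abfss://landing@storageaccount.dfs.core.windows.net/claims/")  # ADLS Gen2 path (placeholder)
)

cleaned = (
    raw.dropDuplicates(["claim_id"])          # de-duplicate on the business key
    .withColumn("ingested_at", F.current_timestamp())
)

# Write as a Delta table registered in Unity Catalog (catalog.schema.table is a placeholder)
cleaned.write.format("delta").mode("append").saveAsTable("main.claims.claims_raw")
```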
Job Responsibilities
Responsible for the design and implementation of key components in the system.
Takes ownership of features, leads design decisions
Peer-review the code and provide constructive feedback
Takes part in defining technical strategies and best practices for the team
Assists with backlog refinement and estimation at story level
Identifies and resolves bottlenecks in the development process (such as performance bottlenecks)
Solves complex tasks without supervision.
Department/Project Description
GlobalLogic is searching for a motivated, results-driven, and innovative engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.
· 99 views · 18 applications · 10d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
Our long-standing client from the UK is looking for a Senior Data Engineer
Project: Decommissioning legacy software and systems
Tech stack:
DBT, Snowflake, SQL, Python, Fivetran
Requirements:
- Solid experience with CI/CD processes in SSIS
- Proven track record of decommissioning legacy systems and migrating data to modern platforms (e.g., Snowflake)
- Experience with AWS (preferred) or Azure
- Communicative and proactive team player — able to collaborate and deliver
- Independent and flexible when switching between projects
- English: Upper Intermediate or higher
· 64 views · 18 applications · 10d
Data Engineer to $4800
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
We are currently seeking a skilled Data Engineer to join our team in the development and maintenance of robust data solutions. This role involves building and optimizing data pipelines, managing ETL processes, and supporting data visualization needs for business-critical use cases.
As part of your responsibilities, you will design and implement cloud infrastructure on AWS using AWS CDK in Python, contribute to solution architecture, and develop reusable components to streamline delivery across projects. You will also implement data quality checks and design scalable data models leveraging both SQL and NoSQL technologies.
Project details:
- Start: ASAP
- Duration: Until 31.12.2026
- Location: Remote
- Language: English
Responsibilities:
- Develop, monitor, and maintain efficient ETL pipelines and data workflows
- Build infrastructure on AWS using AWS CDK (Python)
- Design and implement reusable data engineering components and frameworks
- Ensure data quality through validation, testing, and monitoring mechanisms
- Contribute to solution architecture and technical design
- Create and optimize scalable data models in both SQL and NoSQL databases
- Collaborate with cross-functional teams including data scientists, analysts, and product owners
Requirements:
- Solid experience in building and maintaining ETL pipelines
- Hands-on experience with data visualization tools or integrations (e.g., Tableau, Power BI, or custom dashboards via APIs)
- Strong working knowledge of AWS services, especially with AWS CDK (Python)
- Good understanding of SQL and NoSQL database technologies
- Familiarity with version control systems (e.g., Git)
- Experience working in Agile environments
- Strong communication skills and ability to work autonomously in remote teams
· 83 views · 10 applications · 9d
Middle Data Engineer
Full Remote · EU · Product · 1 year of experience · Intermediate · Ukrainian Product 🇺🇦
GR8 Tech is a global product company that provides innovative, scalable platforms and business solutions for the iGaming industry.
We have great experience: the GR8 Tech platform successfully handles millions of active players and offers best practices to develop and grow in the gambling industry. We are here to provide great gaming tech to satisfy even greater ambition!
We develop complete tech coverage for gambling businesses worldwide, including iGaming platform solutions, consulting, integration, and long-lasting operation services.
We are driven by our ambition to make a great product with great people! Together we move the world of iGaming forward — join!
About your key responsibilities and impact:
- Developing and maintaining data transformation pipelines;
- Data collection: Kafka, Google Analytics, Firebase, AppsFlyer, Cloudflare, and other third-party apps;
- Data modeling: building a centralized data catalog with well-validated and documented data marts;
- Data quality/integrity testing automation;
- Semantic layer development and integration;
- Designing and implementing REST-based APIs.
Essential professional experience:
Hands-on experience with the following technologies:
- Storage formats, their pros and cons: Parquet / ORC, AVRO, JSON, CSV, TSV;
- Distributed computation: HDFS, Spark, Presto / Trino / Drill, Apache Flink, etc;
- Relational databases (PostgreSQL, Microsoft SQL Server, etc.);
- Columnar storage: AWS Redshift, Google BigQuery, ClickHouse, etc.;
- Metadata management: DataHub, OpenMetadata, etc.;
- Data access management (RBAC, TBAC, RLS, CLS, Data Masking);
- ETL, Data Warehousing tasks;
- Job scheduling, task queues;
- Cloud providers: AWS, Google Cloud Platform, etc.;
- Designing, implementing REST API (Aiohttp, Flask, FastAPI).
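For illustration only: a minimal FastAPI sketch of the REST API work named in the last item. The endpoint path, response model, and get_active_players helper are hypothetical placeholders.

```python
# Minimal FastAPI sketch: expose a data-mart metric over REST.
# Endpoint, model, and the get_active_players helper are hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="data-marts-api")

class PlayerStats(BaseModel):
    date: str
    active_players: int

def get_active_players(date: str) -> int:
    # Placeholder: a real service would query the warehouse / data mart here
    return 0

@app.get("/stats/active-players/{date}", response_model=PlayerStats)
def active_players(date: str) -> PlayerStats:
    return PlayerStats(date=date, active_players=get_active_players(date))

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```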
Knowledge in the following areas:
- Performance tuning of ETL jobs, SQL queries, partitioning, indexing;
- Database theory: types, their pros and cons;
- Streaming data processing: Kafka, AWS Kinesis, RabbitMQ, Redpanda etc.
What we offer:
Benefits Cafeteria:
- Sports compensation;
- Medical coverage;
- Psychological support;
- Home-office coverage.
Work-life:
- Remote work, Coworking compensation;
- Childcare budget;
- Maternity leave;
- Paternity leave;
- Additional 2 days for family events.
Our GR8 Culture:
- Open feedback and transparent direct communications;
- Growth and development: better every day;
- High tolerance to experiment and mistakes;
- Supportive friendly environment.
Data Protection Information regarding the processing of your personal data in connection with the recruitment and selection process can be found in the Candidate Privacy Notice at https://gr8.tech/candidate-privacy-notice/.
· 134 views · 15 applications · 9d
Data Engineer / Data Analyst
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience
NuxGame works with iGaming operators of all scales, helping companies access new markets or enhance their existing brands. As a casino gaming software company, NuxGame provides solutions that allow building outstanding brands and fulfilling your business goals.
We are looking for a Data Engineer / Data Analyst to join our team.
Responsibilities
- Design, build, and maintain robust data pipelines for large-scale processing.
- Develop and optimize ETL workflows and data ingestion from various sources (DBs, APIs, event streams).
- Create and maintain data models and schemas tailored for analytics and reporting.
- Collaborate with analysts and business teams to understand reporting needs and deliver automated dashboards.
- Build high-quality reports and dashboards using BI tools.
- Own and ensure data quality, consistency, and freshness.
- Implement data security best practices, access controls, and data governance.
- Improve and monitor data infrastructure performance (e.g., ClickHouse, BigQuery).
- Work with event-based data (web tracking) to enable product and marketing analytics.
- Collaborate closely with DevOps and Engineering to deploy and scale data solutions.
- Investigate new technologies and tools to enhance our data ecosystem.
Experience:
- 3+ years of experience as a Data Analyst, Data Engineer, or in a hybrid role.
- Solid knowledge of SQL and experience with NoSQL databases.
- Proven experience building data pipelines and ETL processes from scratch.
- Hands-on experience with modern Data Warehouses (e.g., ClickHouse, BigQuery, Snowflake).
- Familiarity with workflow orchestration tools like Airflow, dbt, or similar.
- Experience working with event-based data (e.g., user behavior tracking).
- Proficiency in Python for data manipulation and transformation.
- Experience building custom data connectors or integrating APIs.
- Strong knowledge of BI tools — especially Tableau, Power BI, or similar.
- Understanding of cloud platforms (GCP, AWS, or Azure).
- Familiarity with Git, Docker, and containerized environments.
Nice to have:
- Experience working in the gambling or betting industry — or deep interest in gaming data.
- Practical knowledge of ClickHouse Cloud, ClickPipes, and related tools.
- Exposure to data streaming platforms (e.g., Apache Kafka).
- Understanding of DevOps and automation pipelines.
- Bachelor's degree in Computer Science, Data Science, Math, or a related field.
What We Offer:
- Work Format: Remote work format.
- Working Hours: Typically 09:00/10:00 to 17:00/18:00 (Kyiv time), Monday-Friday.
- Compensation: Timely payment of competitive wages (salary).
- Employment: Official employment.
- Leave: 24 days of vacation annually.
- Team Environment: A friendly team and pleasant atmosphere without pressure or stress; open and democratic work organization.
- Projects: Interesting work on successful projects within the dynamic iGaming sector.
· 50 views · 2 applications · 8d
Senior Data Engineer (Python) to $8000
Full Remote · Ukraine, Poland, Bulgaria, Portugal · 8 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a Data Engineer, you will operate at the intersection of data engineering, software engineering, and system architecture. This is a high-impact, cross-functional role where you’ll take end-to-end ownership — from designing scalable infrastructure and writing robust, production-ready code to ensuring the reliability and performance of our systems in production.
Key Responsibilities:
- Collaborate closely with software architects and DevOps engineers to evolve our AI training, inference, and delivery architecture and deliver resilient, scalable, production-grade machine learning pipelines.
- Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
- Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
- Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
- Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
- Represent the data science team’s needs in cross-functional technical discussions and solutions design.
Required Competence and Skills:
- A Bachelor’s or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.
- 8+ years of experience as a data engineer, software engineer, or similar role, with a proven track record of using data to drive business outcomes.
- Strong Python skills, with experience building modular, testable, and production-ready code.
- Solid understanding of Databases and SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
- Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
- A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
- Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
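For illustration only: a minimal boto3 sketch of the kind of AWS plumbing these requirements describe, starting an Athena query over S3 data and polling for completion. The region, database, query, and results bucket are hypothetical placeholders.

```python
# Minimal boto3 sketch: run an Athena query and poll until it finishes.
# Region, database, query, and the results bucket are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

execution = athena.start_query_execution(
    QueryString="SELECT model_name, AVG(error) AS avg_error FROM predictions GROUP BY model_name",
    QueryExecutionContext={"Database": "metrics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

query_id = execution["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)  # simple polling; production code would add timeouts and retries

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} result rows")  # first row is the header
```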
Nice-to-Haves
- Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
- Familiarity with API development frameworks (e.g., FastAPI).
- Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
- Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
Why Us?
We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
We provide full accounting and legal support in all countries where we operate.
We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.
We offer a highly competitive package with yearly performance and compensation reviews.
· 29 views · 5 applications · 5d
Senior Data Engineer
Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
We are building a next-generation AI-powered platform designed for comprehensive observability of digital infrastructure, including mobile networks and data centers. By leveraging advanced analytics, automation, and real-time monitoring, we empower businesses to optimize performance, enhance reliability, and prevent failures before they happen.
Our platform delivers deep insights, anomaly detection, and predictive intelligence, enabling telecom operators, cloud providers, and enterprises to maintain seamless connectivity, operational efficiency, and infrastructure resilience in an increasingly complex digital landscape.
We have offices in Doha, Qatar and Muscat, Oman. This position requires relocation to one of these offices.
Job Summary
As a Senior Data Engineer, you will be responsible for building and maintaining end-to-end data infrastructure that powers our AI-driven observability platform. You will work with large-scale datasets, both structured and unstructured, and design scalable pipelines that enable real-time monitoring, analytics, and machine learning. This is a hands-on engineering role requiring deep expertise in data architecture, cloud technologies, and performance optimization.
Key Responsibilities
Data Pipeline Development
- Design, develop, and maintain scalable ETL/ELT pipelines from scratch using modern data engineering tools
- Ingest and transform high-volume data from multiple sources, including APIs, telemetry, and internal systems
- Write high-performance code to parse and process large files (JSON, XML, CSV, etc.)
- Ensure robust data delivery for downstream systems, dashboards, and ML models
Infrastructure & Optimization
- Build and manage containerized workflows using Docker and Kubernetes
- Optimize infrastructure for performance, availability, and cost-efficiency
- Implement monitoring, alerting, and data quality checks across the data pipeline stack
Collaboration & Best Practices
- Work closely with AI/ML, backend, and platform teams to define and deliver on data requirements
- Enforce best practices in data modeling, governance, and engineering
- Participate in CI/CD processes, infrastructure automation, and documentation
Required Qualifications
Experience
- 4+ years of hands-on experience in data engineering or similar backend roles
- Proven experience designing and deploying production-grade data pipelines from scratch
Technical Skills
- Proficiency in Python or Scala for data processing
- Deep knowledge of SQL and NoSQL systems (e.g., MongoDB, DynamoDB, Cassandra, Firebase)
- Hands-on experience with cloud platforms (AWS, GCP, or Azure)
- Familiarity with data tools like Apache Spark, Airflow, Kafka, and distributed systems
- Experience with CI/CD practices and DevOps for data workflows
Soft Skills
- Excellent communication skills and the ability to work independently in a fast-paced environment
- Strong analytical mindset and attention to performance, scalability, and system reliability
Preferred Qualifications
- Background in the telecom or IoT industry
- Certifications in cloud platforms or data technologies
- Experience with real-time streaming, event-driven architectures, or ML/Ops
- Familiarity with big data ecosystems (e.g., Hadoop, Cloudera)
- Knowledge of API development or experience with Flask/Django
- Experience setting up A/B test infrastructure and experimentation pipelines
Nice to have
Experience with the integration and maintenance of vector databases (e.g., Pinecone, Weaviate, Milvus, Qdrant) to support LLM workflows including embedding search, RAG, and semantic retrieval.
What We Offer
- Performance-Based Compensation: Tied to achieving and exceeding performance targets, with accelerators for surpassing goals
- Shares and Equity: Participation in our Employee Stock Option Plan (ESOP)
- Growth Opportunities: Sponsored courses, certifications, and continuous learning programs
- Comprehensive Benefits: Health insurance, pension contributions, and professional development support
- Annual Vacation: Generous paid annual leave
- Dynamic Work Environment: A culture of innovation, collaboration, and creative freedom
- Impact and Ownership: Shape the future of digital infrastructure and leave your mark
- Flexible Work Arrangements: Options to work remotely or from our offices
- A Mission-Driven Team: Join a diverse, passionate group committed to meaningful change
· 28 views · 3 applications · 4d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · Upper-Intermediate
Dataforest is seeking an experienced Senior Data Engineer to join our dynamic team. You will be responsible for developing and maintaining data-processing architecture, as well as optimizing and monitoring our internal systems.
Requirements:
- 3+ years of commercial experience with Python.
- Extensive experience with ElasticSearch and PostgreSQL.
- Knowledge and experience with Kafka.
- Proven experience in setting up and managing monitoring systems with CloudWatch, Prometheus, and Grafana.
- Profound understanding of algorithms and their complexities, with the ability to analyze and optimize them effectively.
- Excellent programming skills in Python with a strong emphasis on optimization and code structuring.
- Solid understanding of ETL principles and best practices.
- Excellent collaborative and communication skills, with demonstrated ability to mentor and support team members.
- Experience working with Linux environments, cloud services (AWS), and Docker.
- Strong decision-making capabilities with the ability to work independently and proactively.
Will be a plus:
- Experience in web scraping, data extraction, cleaning, and visualization.
- Understanding of multiprocessing and multithreading, including process and thread management.
- Familiarity with Redis.
- Experience with Flask / Flask-RESTful for API development.
Key Responsibilities:
- Develop and maintain a robust data processing architecture using Python.
- Effectively utilize ElasticSearch and PostgreSQL for efficient data management.
- Design and manage data pipelines using Kafka and SQS.
- Implement and maintain logging and monitoring systems with CloudWatch, Prometheus, and Grafana.
- Optimize code structure and performance for maximum efficiency.
- Design and implement efficient ETL processes.
- Analyze and optimize algorithmic solutions for better performance and scalability.
- Collaborate within the AWS stack to ensure flexible and reliable data processing systems.
- Provide mentorship and guidance to colleagues, fostering a collaborative and supportive team environment.
- Independently make decisions related to software architecture and development processes to drive the project forward.
We offer:
- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
· 17 views · 4 applications · 4d
Cloud System Engineer
Full Remote · Ukraine · Product · 2 years of experience · Pre-Intermediate
Requirements:
- Knowledge of the core functionality of virtualization platforms;
- Experience implementing and migrating workloads in virtualized environments;
- Experience in complex IT solutions and Hybrid Cloud solution projects.
- Good understanding of IT-infrastructure services is a plus;
- Strong knowledge in troubleshooting of complex environments in case of failure;
- At least basic knowledge in networking & information security is an advantage
- Hyper-V, Proxmox, VMWare experience would be an advantage;
- Experience in the area of services outsourcing (as customer and/or provider) is an advantage.
- Work experience of 2+ years in a similar position
- Scripting and programming experience/background in PowerShell/Bash is an advantage;
- Strong team communication skills, both verbal and written;
- Experience in technical documentation writing and preparation;
- English skills: intermediate level is the minimum and is mandatory for communication with global teams;
- Industry certification focused on relevant solution area.
Areas of Responsibility include:
- Participating in deployment and IT-infrastructure migration projects, Hybrid Cloud solution projects; Client support;
- Consulting regarding migration IT-workloads in complex infrastructures;
- Presales support (articulating service value in the sales process; up-sell and cross-sell capability);
- Project documentation: technical concepts
- Education and development in professional area including necessary certifications.
· 22 views · 2 applications · 4d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
We are hiring a Senior Full-Stack Software Developer. Our client team consists of frontend and backend developers, data engineers, data scientists, QA engineers, cloud engineers, and project managers.
Responsibilities:
• Participate in requirements clarification and sprint planning sessions.
• Design technical solutions and implement them, including ETL pipelines: build robust data pipelines in PySpark to extract, transform, and load data
• Optimize ETL Processes - Enhance and tune existing ETL processes for better performance, scalability, and reliability
• Write unit and integration tests.
• Support QA teammates in the acceptance process.
• Resolve PROD incidents as a 3rd-line engineer.
Mandatory Skills Description:
* Min 7 years of experience in IT/Data
* Bachelor's degree in IT or a related field.
* Exceptional logical reasoning and problem-solving skills
* Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
* SQL: Strong expertise in writing and optimizing complex SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
* Data Warehousing: Experience working with data warehousing concepts and platforms, ideally DataBricks
* ETL Tools: Familiarity with ETL tools & processes
* Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
* Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
* Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
* Data Quality Tools: Experience implementing data validation, cleansing, and quality framework
· 11 views · 0 applications · 3d
Senior Data Engineer
Full Remote · Ukraine · 7 years of experience · Upper-Intermediate
Project description
We are hiring a Senior Full-Stack Software Developer. Our client team consists of frontend and backend developers, data engineers, data scientists, QA engineers, cloud engineers, and project managers.
Responsibilities
Participate in requirements clarification and sprint planning sessions.
Design technical solutions and implement them, including ETL pipelines
Build robust data pipelines in PySpark to extract, transform, and load data
Optimize ETL Processes
Enhance and tune existing ETL processes for better performance, scalability, and reliability
Write unit and integration tests.
Support QA teammates in the acceptance process.
Resolve PROD incidents as a 3rd-line engineer.
Skills
Must have
Min 7 years of experience in IT/Data
Bachelor's degree in IT or a related field.
Exceptional logical reasoning and problem-solving skills
Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
SQL: Strong expertise in writing and optimizing complex SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
Data Warehousing: Experience working with data warehousing concepts and platforms, ideally DataBricks
ETL Tools: Familiarity with ETL tools & processes
Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
Data Quality Tools: Experience implementing data validation, cleansing, and quality framework
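For illustration only: a minimal PySpark sketch of the data-validation step named in the last item, failing a pipeline when null keys or duplicates exceed a threshold. The source table, column names, and the 1% threshold are hypothetical placeholders.

```python
# Minimal PySpark data-quality check: abort when null keys or duplicates exceed a threshold.
# Table name, key column, and the 1% threshold are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("staging.trades")  # placeholder source table

total = df.count()
null_keys = df.filter(F.col("trade_id").isNull()).count()
duplicates = total - df.dropDuplicates(["trade_id"]).count()

if total == 0 or (null_keys + duplicates) / total > 0.01:
    raise ValueError(
        f"data quality check failed: {null_keys} null keys, {duplicates} duplicates out of {total} rows"
    )
```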
Nice to have
Understanding of Investment Data domain.
Languages
English: B2 Upper Intermediate
· 31 views · 6 applications · 3d
Data Engineer
Full Remote · Worldwide · 4 years of experience · Upper-Intermediate
At Uvik Software, we are looking for a talented 🔎 Data Engineer 🔎 to join our team. If you are passionate about data, cloud technologies, and building scalable solutions, this role is for you!
You will work on designing, developing, and optimizing data pipelines, implementing machine learning models, and leveraging cloud platforms like AWS (preferred), Azure, or GCP. You’ll collaborate with cross-functional teams to transform raw data into actionable insights, enabling smarter business decisions.
📊Key Responsibilities:
- Develop and maintain scalable ETL/ELT pipelines for data processing.
- Design and optimize data warehouses and data lakes on AWS, Azure, or GCP.
- Implement machine learning models and predictive analytics solutions.
- Work with structured and unstructured data, ensuring data quality and integrity.
- Optimize query performance and data processing workflows.
- Collaborate with software engineers, analysts, and business stakeholders to deliver data-driven solutions.
📈Requirements:
- 4+ years of experience as a Data Engineer.
- Strong proficiency in SQL and experience with relational and NoSQL databases.
- Hands-on experience with cloud services: AWS (preferred), Azure, or GCP.
- Proficiency in Python or Scala for data processing.
- Experience with Apache Spark, Kafka, Airflow, or similar tools.
- Solid understanding of data modeling, warehousing, and big data processing frameworks.
- Experience with machine learning frameworks (TensorFlow, Scikit-learn, PyTorch) is a plus.
- Familiarity with DevOps practices, CI/CD pipelines, and Infrastructure as Code (Terraform, CloudFormation) is an advantage.