Jobs
· 9 views · 0 applications · 3d
Senior/Tech Lead Data Engineer
Hybrid Remote · Poland, Ukraine (Kyiv, Lviv) · 5 years of experience · Upper-Intermediate
Quantum is a global technology partner delivering high-end software products that address real-world problems.
We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.
Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations around the globe attain technological leadership.
We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isn't yet a best practice. If you share our passion for problem-solving and making an impact, join us and draw on our wealth of experience!
About the position
Quantum is expanding the team and has a brilliant opportunity for a Data Engineer. As a Senior/Tech Lead Data Engineer, you will be pivotal in designing, implementing, and optimizing data platforms. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.
The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Our mission is to bridge the knowledge gap in the investment world and empower investors of all types to become "super-investors."
Through our generative AI technology implemented into brokerage platforms and other financial institutions' infrastructures, we offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide.
Must have skills:
- Bachelor's Degree in Computer Science or related field
- At least 5 years of experience in Data Engineering
- Proven experience as a Tech Lead or Architect in data-focused projects, leadership skills, and experience managing or mentoring data engineering teams
- Strong proficiency in Python and PySpark for building ETL pipelines and large-scale data processing
- Deep understanding of Apache Spark, including performance tuning and optimization (job execution plans, broadcast joins, partitioning, skew handling, lazy evaluation)
- Hands-on experience with AWS Cloud (minimum 2 years), including EMR and Glue
- Familiarity with PySpark internals and concepts (Window functions, Broadcast joins, Sort & merge joins, Watermarking, UDFs, Lazy computation, Partition skew)
- Practical experience with performance optimization of Spark jobs (MUST); a minimal illustrative sketch follows this list
- Strong understanding of OOD principles and familiarity with SOLID (MUST)
- Experience with cloud-native data platforms and lakehouse architectures
- Comfortable with SQL & NoSQL databases
- Experience with testing practices such as TDD, unit testing, and integration testing
- Strong problem-solving skills and a collaborative mindset
- Upper-Intermediate or higher level of English (spoken and written)
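The Spark tuning topics above (broadcast joins, partitioning, skew handling, lazy evaluation) are easiest to picture in code. Below is a minimal, illustrative PySpark sketch only; the tables, column names, and the salt factor are invented for the example and are not taken from the job description.

```python
# Minimal PySpark sketch of two tuning techniques named in the list above:
# a broadcast join for a small dimension table and key salting to spread a
# skewed aggregation key. All data and names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

facts = spark.createDataFrame(
    [(1, "US", 10.0), (2, "US", 20.0), (3, "DE", 5.0)],
    ["order_id", "country", "amount"],
)
dims = spark.createDataFrame(
    [("US", "United States"), ("DE", "Germany")],
    ["country", "country_name"],
)

# Broadcast the small dimension table so the join avoids a full shuffle.
enriched = facts.join(F.broadcast(dims), "country")

# Salt a skewed key: aggregate per (key, salt) first, then collapse the salt.
salted = enriched.withColumn("salt", (F.rand() * 8).cast("int"))
partial = salted.groupBy("country", "salt").agg(F.sum("amount").alias("amt"))
totals = partial.groupBy("country").agg(F.sum("amt").alias("total_amount"))

# Everything above is lazy; the plan only executes when an action runs.
totals.show()
```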
Your tasks will include:
- Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources
- Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions
- Lead efforts in performance tuning and query optimization to enhance data processing efficiency
- Provide expertise in data modeling and database design to ensure the scalability and reliability of data platforms
- Contribute to the development of best practices and standards for data engineering processes
- Stay updated on emerging technologies and trends in the data engineering landscape
We offer:
- Delivering high-end software projects that address real-world problems
- Surrounding experts who are ready to move forward professionally
- Professional growth plan and team leader support
- Taking ownership of R&D and socially significant projects
- Participation in worldwide tech conferences and competitions
- Taking part in regular educational activities
- Being a part of a multicultural company with a fun and lighthearted atmosphere
- Working from anywhere with flexible working hours
- Paid vacation and sick leave days
Join Quantum and take a step toward your data-driven future.
· 84 views · 18 applications · 30d
Data Engineer
Full Remote · Worldwide · 4 years of experience · Upper-Intermediate
We are Uvik Software, a successful software development company with a global market presence that works with the world's most successful companies.
We seek a highly skilled and autonomous Data Engineer to join our dynamic team. This role requires a blend of technical expertise, creative problem-solving, and leadership to drive projects from concept to deployment.
Key Responsibilities:
- Develop and implement robust data models and software architectures.
- Utilize Python and advanced ML libraries to build and deploy AI systems.
- Engage in machine learning engineering, particularly in NLP/NLU and language model development using platforms like GPT.
- Stay abreast of current trends in AI, including MLLMs, AI Agents, and RAG technologies.
- Lead and guide teams through the project lifecycle to meet strategic business goals.
Qualifications:
- Profound knowledge of data structures, data modelling, and software architecture principles.
- Expertise in Python.
- Proven track record in the engineering and deployment of AI systems.
- Strong interest and experience in NLP/NLU and developing language models.
- Familiarity with major cloud platforms including GCP, Azure, and AWS.
- Excellent problem-solving, communication, and leadership skills.
Nice-to-Have:
- Experience in startup environments, ideally scaling new ventures from ground zero.
- Hands-on experience with major ML libraries.
- Active engagement with the AI community, whether through research, presentations, or contributions to open-source projects.
- Experience with innovative interfaces like SMS apps, browser extensions, or interactive modules.
- Technical proficiency in React/Next.js, FastAPI, MongoDB, and Marqo AI Vector DB.
We offer:
- 12 sick leave days and 18 paid vacation business days per year
- Comfortable work conditions (including a MacBook Pro and Dell monitor at each workplace)
- Smart environment
- Interesting projects from renowned clients
- Flexible work schedule
- Competitive salary according to qualifications
- Guaranteed full workload during the term of the contract
- Corporate leisure activities
- Game, lounge, and sports zones
· 176 views · 45 applications · 29d
Middle Python / Data Engineer
Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate
Involvement: ~15-20 hours/week
Start Date: ASAP
Location: Remote
Client: USA-based
Project: Legal IT, an AI-powered legal advisory platform
About the Project
Join a growing team behind Legal IT, an intelligent legal advisory platform that simplifies legal support for businesses. The platform features:
- A robust contract library
- AI-assisted document generation & guidance
- Interactive legal questionnaires
- A dynamic legal blog with curated insights
Weβre building out advanced AI-driven proof-of-concepts (PoCs) and are looking for a strong Python/Data Engineer to support the backend logic and data pipelines powering these tools.
Core Responsibility
- Collaborate directly with the AI Architect to develop and iterate on proof-of-concept features as development continues
Being a part of 3asoft means having:
- High level of flexibility and freedom
- p2p relationship with worldwide customers
- Competitive compensation paid in USD
- Fully remote working
· 36 views · 1 application · 28d
Senior Data Engineer
Full Remote · Poland · 5 years of experience · Upper-Intermediate
As a Senior/Tech Lead Data Engineer, you will play a pivotal role in designing, implementing, and optimizing data platforms for our clients. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.
Key Responsibilities:
- Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources.
- Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions.
- Lead efforts in performance tuning and query optimization to enhance data processing efficiency.
- Provide expertise in data modeling and database design to ensure scalability and reliability of data platforms.
- Contribute to the development of best practices and standards for data engineering processes.
- Stay updated on emerging technologies and trends in the data engineering landscape.
Required Skills and Qualifications:
- Bachelor's Degree in Computer Science or related field.
- Minimum of 5 years of experience in tech lead data engineering or architecture roles.
- Proficiency in Python and PySpark for ETL development and data processing.
- At least 2 years of hands-on experience with AWS cloud services.
- Extensive experience with cloud-based data platforms, particularly EMR.
- Strong working knowledge of Spark (must have).
- Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
- Leadership experience, with a proven track record of leading data engineering teams.
Benefits
- 20 days of paid vacation, 5 sick leave days
- National holidays observed
- Company-provided laptop
· 62 views · 2 applications · 28d
Middle Data Support Engineer (Python, SQL)
Ukraine · 3 years of experience · Upper-Intermediate
N-iX is looking for a Middle Data Support Engineer to join our team. Our customer is the leading school transportation solutions provider in North America. Every day, the company completes 5 million student journeys, moving more passengers than all U.S. airlines combined, and delivers reliable, quality services, including full-service transportation and management, special-needs transportation, route optimization and scheduling, maintenance, and charter services for 1,100 school district contracts.
Responsibilities:
- Provide support in production and non-production environments (Azure cloud)
- Install, configure and provide day-to-day support after implementation, including off hours as needed;
- Troubleshoot defects and errors and resolve arising problems;
- Plan, test, and implement server upgrades, maintenance fixes, and vendor-supplied patches;
- Help in resolving incidents;
- Monitor ETL jobs;
- Perform small enhancements (Azure/SQL).
Requirements:
- Proven knowledge and 3+ years experience in Python
- Proficiency in RDBMS systems (MS SQL experience is a plus);
- Experience with Azure cloud services;
- Understanding of Azure Data Lake / Storage Accounts;
- Experience in creation and managing data pipelines in Azure Data Factory;
- Upper-Intermediate/Advanced English level.
Nice to have:
- Experience with administration of Windows Server 2012 and higher;
- Experience with AWS, Snowflake, Power BI;
- Experience with technical support;
- Experience with .NET.
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
· 44 views · 5 applications · 28d
Data engineer (relocation to Berlin)
Office Work · Germany · 5 years of experience · Upper-Intermediate
At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
About the Data Solution Team
As a Data Engineer, you will join our Data Solution Team, which drives our data-driven innovation. The team is pivotal to powering our business processes and enhancing customer experiences through effective data utilization. Our focus areas include:
- Developing integrations between systems.
- Analyzing customer data to derive actionable insights.
- Improving customer experience by leveraging statistical and machine learning models.
Our tech stack includes:
- Cloud & Infrastructure: AWS (S3, EKS, Quicksight, and monitoring tools).
- Data Engineering & Analytics: Apache Spark (Scala and PySpark on Databricks), Apache Kafka (Confluent Cloud).
- Infrastructure as Code: Terraform.
- Development & Collaboration: BitBucket, Jira.
- Integration Tools & APIs: Segment.io, Blueshift, Zendesk, Google Maps API, and other external systems.
Job requirements
As a Data Engineer, you will:
- Design, build, and own near-real-time and batch data processing workflows (a minimal streaming sketch follows this list).
- Develop efficient, low-latency data pipelines and systems.
- Maintain high data quality while ensuring GDPR compliance.
- Analyze customer data and extract insights to drive business decisions.
- Collaborate with Product, Backend, Marketing, and other teams to deliver impactful features.
- Help data scientists deliver ML/AI solutions.
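As a rough illustration of the near-real-time workflows mentioned in the list above, here is a minimal PySpark Structured Streaming sketch reading from Kafka. It is not the client's pipeline: the broker address, topic, schema, and console sink are placeholder assumptions, and the spark-sql-kafka connector package is assumed to be available.

```python
# Minimal streaming sketch: consume JSON events from Kafka and print
# micro-batches to the console. All names and addresses are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
       .option("subscribe", "events")                         # placeholder topic
       .load())

# Kafka delivers bytes; decode the value column and parse the JSON payload.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# A real job would write to Delta/S3; the console sink keeps the sketch small.
query = (events.writeStream.outputMode("append")
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events")  # keeps stream state
         .start())
query.awaitTermination()
```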
Requirements:
- 5+ years of experience as a Data Engineer, with expertise in Apache Spark using Python and Scala.
- 3+ years of experience with Apache Kafka.
- Management experience or Tech Lead experience.
- Strong proficiency in SQL.
- Experience with CI/CD processes and platforms.
- Hands-on experience with cloud technologies such as AWS, GCP, or Azure.
- Familiarity with Terraform.
- Comfortable working in an agile environment.
- Excellent problem-solving and self-learning skills, with the ability to operate both independently and as part of a team.
Nice to have:
- Hands-on experience with Databricks.
- Experience with document databases, particularly Amazon DocumentDB.
- Familiarity with handling high-risk data.
- Exposure to BI tools such as AWS Quicksight or Redash.
- Work experience in a B2C software company, especially in the FinTech industry.
What we offer:
Our goal is to set up a great working environment. Become part of the process and:
- Shape the future of our organization as part of the international founding team.
- Take on responsibility from day one.
- Benefit from various coaching and training opportunities, including a sports subscription, German classes, and a €1000 yearly self-development budget.
- Work in a hybrid working model from the comfortable Berlin office.
- Enjoy a modern workplace in the heart of Berlin with drinks, fresh fruit, kicker, and ping pong.
· 44 views · 1 application · 27d
Data Engineer
Hybrid Remote · Slovakia · 4 years of experience · Upper-Intermediate
Now is an amazing time to join our company as we continue to empower innovators to change the world. We provide top-tier technology consulting, R&D, design, and software development services across the USA, UK, and EU markets. And this is where you come in!
We are looking for a Skilled Data Engineer to join our team.
About the Project
We're launching a Snowflake Proof of Concept (PoC) for a leading football organization in Germany. The project aims to demonstrate how structured and well-managed data can support strategic decision-making in the sports domain.
Key Responsibilities
- Define data scope and identify data sources
- Design and build the data architecture
- Implement ETL pipelines into a data lake (a minimal loading sketch follows this list)
- Ensure data quality and consistency
- Collaborate with stakeholders to define analytics needs
- Deliver data visualizations using Power BI
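As a rough sketch of one small loading step such a PoC might need, assuming the snowflake-connector-python package with its pandas extras: read a CSV of match events and bulk-load it into an existing Snowflake table with write_pandas. Account settings, the file, and the table name are placeholders, not the client's data model.

```python
# Load a CSV into an existing Snowflake table. All identifiers are placeholders.
import os
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    account=os.environ["SF_ACCOUNT"],
    user=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
    warehouse="POC_WH",
    database="FOOTBALL_POC",
    schema="RAW",
)

df = pd.read_csv("match_events.csv")          # placeholder source file
df["LOAD_TS"] = pd.Timestamp.now(tz="UTC")

# Bulk-load the frame into the target table in the current database/schema.
write_pandas(conn, df, "MATCH_EVENTS")
conn.close()
```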
Required Skills
- Strong experience with Snowflake, ETL pipelines, and data lakes
- Power BI proficiency
- Knowledge of data architecture and modeling
- Data quality assurance expertise
- Solid communication in English (B2+)
Nice to Have
- Familiarity with GDPR
- Experience in sports or media-related data projects
- Experience with short-term PoCs and agile delivery
What We Offer
- Contract for the PoC phase with potential long-term involvement
- All cloud resources and licenses provided by the client
- Hybrid/onsite work in Bratislava
- Opportunity to join a meaningful data-driven sports project with European visibility
Interested? Send us your CV and hourly rate (EUR).
We're prioritizing candidates based in Bratislava or elsewhere in Europe.
Interview Process:
1. Internal technical interview
2. Interview with the client
· 63 views · 6 applications · 26d
Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product
We are Boosta, an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.
We're looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you'll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.
Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You'll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.
WHAT YOU'LL DO
- Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
- Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the related stakeholders.
- Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place.
- Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection, using tools like Great Expectations (a minimal orchestration-and-validation sketch follows this list).
- Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic.
- Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
- Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
- Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
- Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
- Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
- Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
- Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.
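To make the orchestration-and-validation bullet above concrete, here is a minimal sketch assuming a recent Airflow 2.x: an ingest task followed by a data-quality task, with retries and a failure callback standing in for alerting. The DAG id, task bodies, and sample rows are hypothetical placeholders, not the actual pipeline.

```python
# Sketch of a monitored pipeline: ingest, then validate, with retries and a
# failure callback. Everything here is a placeholder for illustration.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_failure(context):
    # Placeholder: a real deployment would push to Slack/PagerDuty here.
    print(f"Task {context['task_instance'].task_id} failed")


def ingest_affiliate_data(**_):
    # Placeholder for an API pull that lands raw rows in GCS/BigQuery.
    print("pulling affiliate postbacks...")


def validate_batch(**_):
    # Placeholder checks; schema drift and duplicate detection would live here.
    rows = [{"click_id": "a1", "payout": 1.2}, {"click_id": "a2", "payout": None}]
    nulls = [r for r in rows if r["payout"] is None]
    if nulls:
        raise ValueError(f"{len(nulls)} rows with null payout")


with DAG(
    dag_id="affiliate_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
    default_args={
        "retries": 3,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_failure,
    },
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_affiliate_data)
    validate = PythonOperator(task_id="validate", python_callable=validate_batch)
    ingest >> validate
```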
WHAT WE EXPECT FROM YOU
- Strong proficiency in SQL and Python.
- Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
- Proven ability to design, deploy, and scale ETL/ELT pipelines.
- Hands-on experience integrating and automating data from various platforms.
- Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
- Skilled in orchestration tools like Airflow, Prefect, or Dagster.
- Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
- Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
- Experience with Docker to build isolated and reproducible environments for data workflows.
- Exposure to iGaming data structures and KPIs is a strong advantage.
- Strong sense of data ownership, documentation, and operational excellence.
HOW IT WORKS
- Stage 1: pre-screen with a recruiter.
- Stage 2: test task.
- Stage 3: interview.
- Stage 4: bar-raising.
- Stage 5: reference check.
- Stage 6: job offer!
A trial period for this position is 3 months, during which we will get used to working together.
WHAT WE OFFER
- 28 business days of paid time off
- Flexible hours and the possibility to work remotely
- Medical insurance and mental health care
- Compensation for courses, trainings
- English classes and speaking clubs
- Internal library, educational events
- Outstanding corporate parties, teambuildings
· 49 views · 1 application · 26d
Data Engineer 2070/06 to $5500
Office Work · Poland · 3 years of experience · Upper-Intermediate
Our partner is a leading programmatic media company, specializing in ingesting large volumes of data, modeling insights, and offering a range of products and services across Media, Analytics, and Technology. Among their clients are well-known brands such as Walmart, Barclaycard, and Ford.
The company has expanded to over 700 employees, with 15 global offices spanning four continents. With the imminent opening of a new office in Warsaw, we are seeking experienced
Data Engineers to join their expanding team.
The Data Engineer will be responsible for developing, designing, and maintaining end-to-end optimized, scalable Big Data pipelines for our products and applications. In this role, you will collaborate closely with team leads across various departments and receive support from peers and experts across multiple fields.
Opportunities:
- Possibility to work in a successful company
- Career and professional growth
- Competitive salary
- Hybrid work model (3 days per week work from office space in the heart of Warsaw city)
- Long-term employment with 20 working days of paid vacation, sick leaves, and national holidays
Responsibilities:
- Follow and promote best practices and design principles for Big Data ETL jobs
- Help in technological decision-making for the business's future data management and analysis needs by conducting POCs
- Monitor and troubleshoot performance issues on data warehouse/lakehouse systems
- Provide day-to-day support of data warehouse management
- Assist in improving data organization and accuracy
- Collaborate with data analysts, scientists, and engineers to ensure best practices in terms of technology, coding, data processing, and storage technologies
- Ensure that all deliverables adhere to our world-class standards
Skills:
- 3+ years of overall experience in Data Warehouse development and database design
- Deep understanding of distributed computing principles
- Experience with AWS cloud platform, and big data platforms like EMR, Databricks, EC2, S3, Redshift
- Experience with Spark, PySpark, Hive, Yarn, etc.
- Experience in SQL and NoSQL databases, as well as experience with data modeling and schema design
- Proficiency in programming languages such as Python for implementing data processing algorithms and workflows
- Experience with Presto and Kafka is a plus
- Experience with DevOps practices and tools for automating deployment, monitoring, and management of big data applications is a plus
- Excellent communication, analytical, and problem-solving skills
- Knowledge of scalable service architecture
- Experience in scalable data processing jobs on high-volume data
- Self-starter, proactive, and able to work to deadlines
- Nice to have: Experience with Scala
If you are looking for an environment where you can grow professionally, learn from the best in the field, balance work and life, and enjoy a pleasant and enthusiastic atmosphere, submit your CV today and become part of our team!
Everything you do will help us lead the programmatic industry and make it better.
· 56 views · 6 applications · 25d
Consultant Data Engineer (Python/Databricks)
Part-time · Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
Softermii is looking for a part-time Data Engineering Consultant / Tech Lead who will do technical interviews, assist with upcoming projects, and occasionally be hands-on with complex development tasks, including data pipeline design and solution optimization on Databricks.
Type of cooperation: Part-time
Your responsibilities on the project will be:
- Interview and hire Data Engineers
- Supervise the work of other engineers and stay hands-on for the most complicated tasks in the backlog, focusing on unblocking other data engineers when they hit technical difficulties
- Develop and maintain scalable data pipelines using Databricks (Apache Spark) for batch and streaming use cases.
- Work with data scientists and analysts to provide reliable, performant, and well-modeled data sets for analytics and machine learning.
- Optimize and manage data workflows using Databricks Workflows and orchestrate jobs for complex data transformation tasks.
- Design and implement data ingestion frameworks to bring data from various sources (files, APIs, databases) into Delta Lake (a minimal upsert sketch follows this list).
- Ensure data quality, lineage, and governance using tools such as Unity Catalog, Delta Live Tables, and built-in monitoring features.
- Collaborate with cross-functional teams to understand data needs and support production-grade machine learning workflows.
- Apply data engineering best practices: versioning, testing (e.g., with pytest or dbx), documentation, and CI/CD pipelines
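A minimal batch-ingestion sketch of the Delta Lake work described above, assuming a Databricks runtime or a local Spark session configured with the delta-spark package. The mount paths, column names, and merge key are illustrative assumptions, not project specifics.

```python
# Land raw JSON, dedupe, and upsert it into a Delta table. Paths and
# column names are placeholders for illustration only.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

raw = (spark.read.json("/mnt/raw/orders/2024-06-01/")   # placeholder source path
       .dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp()))

target_path = "/mnt/lake/silver/orders"                  # placeholder Delta path

if DeltaTable.isDeltaTable(spark, target_path):
    # Incremental run: merge new rows into the existing table by key.
    (DeltaTable.forPath(spark, target_path).alias("t")
        .merge(raw.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # First run: create the table from the incoming batch.
    raw.write.format("delta").mode("overwrite").save(target_path)
```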
Tools we use: Jira, Confluence, Git, Figma
Our requirements for you:
- 5+ years of experience in data engineering or big data development, with production-level work.
- Architect and develop scalable data solutions on the Databricks platform, leveraging Apache Spark, Delta Lake, and the lakehouse architecture to support advanced analytics and machine learning initiatives.
- Design, build, and maintain production-grade data pipelines using Python (or Scala) and SQL, ensuring efficient data ingestion, transformation, and delivery across distributed systems.
- Lead the implementation of Databricks features such as Delta Live Tables, Unity Catalog, and Workflows to ensure secure, reliable, and automated data operations.
- Optimize Spark performance and resource utilization, applying best practices in distributed computing, caching, and tuning for large-scale data processing.
- Integrate data from cloud-based sources (e.g., AWS S3), ensuring data quality, lineage, and consistency throughout the pipeline lifecycle.
- Manage orchestration and automation of data workflows using tools like Airflow or Databricks Jobs, while implementing robust CI/CD pipelines for code deployment and testing.
- Collaborate cross-functionally with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights through robust data infrastructure.
- Mentor and guide junior engineers, promoting engineering best practices, code quality, and continuous learning within the team.
- Ensure adherence to data governance and security policies, utilizing tools such as Unity Catalog for access control and compliance.
- Continuously evaluate new technologies and practices, driving innovation and improvements in data engineering strategy and execution.
- Experience in designing, building, and maintaining data pipelines using Apache Airflow, including DAG creation, task orchestration, and workflow optimization for scalable data processing.
- Upper-Intermediate English level.
Who will you have the opportunity to meet during the hiring process (stages):
Call, HR, Tech interview, PM interview.
What we can offer you:
- We have stable and highly functioning processes: everyone has their own role and clear responsibilities, so decisions are made quickly and without unnecessary approvals.
- You will have enough independence to make decisions that can affect not only the project but also the work of the company.
- We are a team of like-minded experts who create interesting products during working hours and enjoy spending free time together.
- Do you like to learn something new in your profession or do you want to improve your English? We will be happy to pay 50% of the cost of courses/conferences/speaking clubs.
- Do you want an individual development plan? We will form one especially for you + you can count on mentoring from our seniors and leaders.
- Do you have a friend who is currently looking for new job opportunities? Recommend them to us and get a bonus.
- And what if you want to relax? Then we have 21 working days off.
- What if you are feeling bad? You can take 5 sick leaves a year.
- Do you want to volunteer? We will add you to a chat, where we can get a bulletproof vest, buy a pickup truck or send children's drawings to the front.
- And we have the most empathetic HRs (who also volunteers!). So we are ready to support your well-being in various ways.
A little more information that you may find useful:
- our adaptation period lasts 3 months, which is enough time for us to understand each other better;
- there is a performance review after each year of our collaboration, where we use a skills map to track your growth;
- we really have no boundaries in the truest sense of the word: our working day is flexible and up to you.
Of course, we have a referral bonus system.
· 133 views · 36 applications · 21d
Middle+ Data Engineer
Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate
Start Date: ASAP
Weekly Hours: ~15-20 hours
Location: Remote
Client: USA-based LegalTech Platform
About the Project
Join a growing team working on an AI-powered legal advisory platform designed to simplify and streamline legal support for businesses. The platform includes:
- A robust contract library
- AI-assisted document generation and guidance
- Interactive legal questionnaires
- A dynamic legal insights blog
We're currently developing a Proof of Concept (PoC) for an advanced AI agent and are looking for a skilled Python/Data Engineer to support core backend logic and data workflows.
Your Core Responsibilities
- Design and implement ETL/ELT pipelines in the context of LLMs and AI agents
- Collaborate directly with the AI Architect on PoC features and architecture
- Contribute to scalable, production-ready backend systems for AI components
- Handle structured and unstructured data processing
- Support data integrations with vector databases and AI model inputs (a minimal embedding sketch follows the must-have list below)
Must-have experience with:
- Python (3+ years)
- FastAPI
- ETL / ELT pipelines
- Vector Databases (e.g., Pinecone, Weaviate, Qdrant)
- pandas, numpy, unstructured.io
- Working with transformers and LLM-adjacent tools
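As an illustration of the vector-database and embedding flow named above, here is a hedged sketch assuming the sentence-transformers and qdrant-client packages. The model name, collection name, sample clauses, and fixed-size chunking are invented for the example and are not project requirements.

```python
# Chunk documents, embed them, and upsert the vectors into Qdrant, then run
# a small similarity search. Runs fully locally via Qdrant's in-memory mode.
from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

docs = ["Clause 1: the supplier shall...", "Clause 2: termination requires..."]

# Naive fixed-size chunking; production code would chunk by structure/tokens.
chunks = [d[i:i + 500] for d in docs for i in range(0, len(d), 500)]

model = SentenceTransformer("all-MiniLM-L6-v2")           # 384-dim embeddings
vectors = model.encode(chunks)

client = QdrantClient(":memory:")                          # local, for the sketch
client.recreate_collection(
    collection_name="contract_chunks",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)
client.upsert(
    collection_name="contract_chunks",
    points=[PointStruct(id=i, vector=v.tolist(), payload={"text": c})
            for i, (v, c) in enumerate(zip(vectors, chunks))],
)

hits = client.search(collection_name="contract_chunks",
                     query_vector=model.encode(["termination terms"])[0].tolist(),
                     limit=3)
print([h.payload["text"][:40] for h in hits])
```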
Being a part of 3asoft means having:
- High level of flexibility and freedom
- p2p relationship with worldwide customers
- Competitive compensation paid in USD
- Fully remote working
· 38 views · 5 applications · 21d
Data Engineer (Azure stack)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate
Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client's platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Key Responsibilities:
- Create and manage scalable data pipelines with Azure SQL and other databases.
- Use Azure Data Factory to automate data workflows.
- Write efficient Python code for data analysis and processing (a minimal sketch follows this list).
- Use Docker for application containerization and deployment streamlining.
- Manage code quality and version control with Git.
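A minimal sketch of the Python-plus-Azure-SQL step mentioned above, assuming the pandas, SQLAlchemy, and pyodbc packages with Microsoft's ODBC driver installed. The server, database, table, and column names are invented placeholders, not the client's schema.

```python
# Pull readings from an Azure SQL table, clean them, and land a staging
# extract for downstream pipeline activities. All names are placeholders.
import os
import pandas as pd
from sqlalchemy import create_engine

conn_str = (
    "mssql+pyodbc://{user}:{pwd}@myserver.database.windows.net:1433/water_db"
    "?driver=ODBC+Driver+18+for+SQL+Server"
).format(user=os.environ["SQL_USER"], pwd=os.environ["SQL_PASSWORD"])

engine = create_engine(conn_str)

# Read raw sensor readings, drop invalid rows, add a derived quality flag.
df = pd.read_sql(
    "SELECT station_id, measured_at, ph, turbidity FROM raw_readings", engine
)
df = df.dropna(subset=["ph"])
df["out_of_range"] = ~df["ph"].between(6.5, 8.5)

# Land the cleaned extract in a staging table for the next pipeline step.
df.to_sql("stg_readings_clean", engine, if_exists="replace", index=False)
```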
Skills Requirements:
- 3+ years of experience with Python.
- 2+ years of experience as a Data Engineer.
- Strong SQL knowledge, preferably with Azure SQL experience.
- Python skills for data manipulation.
- Expertise in Docker for app containerization.
- Familiarity with Git for managing code versions and collaboration.
- Upper-intermediate level of English.
Optional Skills (as a plus):
- Experience with Azure Data Factory for orchestrating data processes.
- Experience developing APIs with FastAPI or Flask.
- Proficiency in Databricks for big data tasks.
- Experience in a dynamic, agile work environment.
- Ability to manage multiple projects independently.
- Proactive attitude toward continuous learning and improvement.
We offer:
- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
· 106 views · 2 applications · 20d
Middle/Senior Data Engineer (3445)
Full Remote · Ukraine · 3 years of experience · Intermediate
General information:
We're ultimately looking for someone who understands data flows well, has strong analytical thinking, and can grasp the bigger picture. If you're the kind of person who asks the right questions and brings smart ideas to the table, some specific requirements can be flexible; we're more interested in finding "our person" :)
Responsibilities:
Implementation of business logic in the Data Warehouse according to the specifications
Some business analysis is required to provide the relevant data in a relevant manner
Conversion of business requirements into data models
Pipelines management (ETL pipelines in Datafactory)
Loadings and query performance tuning
Working with senior staff on the customer's side, who will provide requirements, while the engineer may propose their own ideas
Requirements:
Experience with Azure and readiness to work (up to 80% of time) with SQL is a must
Development of database systems (MS SQL/T-SQL, SQL)
Writing well performing SQL code and investigating & implementing performance measures
Data warehousing / dimensional modeling
Working within an Agile project setup
Creation and maintenance of Azure DevOps & Data Factory pipelines
Developing robust data pipelines with dbt
Experience with Databricks (optional)
Work in Supply Chain & Logistics and awareness of the SAP MM data structure (optional).
· 46 views · 5 applications · 20d
Senior Data Platform Engineer
Full Remote · Countries of Europe or Ukraine · Product · 8 years of experience · Upper-Intermediate
Position Summary:
We are looking for a talented Senior Data Platform Engineer to join our Blockchain team, to participate in the development of the data collection and processing framework to integrate new chains. This is a remote role and we are flexible with considering applications from anywhere in Europe.
Duties and responsibilities:
- Integration of blockchains, Automated Market Maker (AMM) protocols, and bridges within Crystal's platform;
- Active participation in development and maintenance of our data pipelines and backend services;
- Integrate new technologies into our processes and tools;
- End-to-end feature designing and implementation;
- Code, debug, test and deliver features and improvements in a continuous manner;
- Provide code review, assistance and feedback for other team members.
Required:
- 8+ years of experience developing Python backend services and APIs;
- Advanced knowledge of SQL - ability to write, understand and debug complex queries;
- Data warehousing and basic database architecture principles;
- POSIX/Unix/Linux ecosystem knowledge;
- Strong knowledge of and experience with Python and API frameworks such as Flask or FastAPI (a minimal sketch follows this list);
- Knowledge about blockchain technologies or willingness to learn;
- Experience with PostgreSQL database system;
- Knowledge of Unit Testing principles;
- Experience with Docker containers and proven ability to migrate existing services;
- Independent and autonomous way of working;
- Team-oriented work and good communication skills are an asset.
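To illustrate the Python API-service side of the requirements, here is a minimal FastAPI sketch. The endpoint, response model, and in-memory store are hypothetical stand-ins (a real service would query PostgreSQL), not Crystal's actual API.

```python
# Tiny read-only lookup service. Run locally with: uvicorn app:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="address-lookup-sketch")


class AddressInfo(BaseModel):
    address: str
    chain: str
    risk_score: float


# Placeholder store; a real service would query PostgreSQL instead.
_DB = {
    "bc1qexample": AddressInfo(address="bc1qexample", chain="bitcoin", risk_score=0.12),
}


@app.get("/addresses/{address}", response_model=AddressInfo)
def get_address(address: str) -> AddressInfo:
    info = _DB.get(address)
    if info is None:
        raise HTTPException(status_code=404, detail="address not found")
    return info
```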
Would be a plus:
- Practical experience with big data frameworks (Kafka, Spark, Flink), data lakes, and analytical databases such as ClickHouse;
- Knowledge of Kubernetes and Infrastructure as Code (Terraform and Ansible);
- Passion for Bitcoin and Blockchain technologies;
- Experience with distributed systems;
- Experience with opensource solutions;
- Experience with Java or willingness to learn.
· 40 views · 0 applications · 19d
Middle BI/DB Developer
Office Work · Ukraine (Lviv) · Product · 2 years of experience · Upper-Intermediate
About us:
EveryMatrix is a leading B2B SaaS provider delivering iGaming software, content and services. We provide casino, sports betting, platform and payments, and affiliate management to 200 customers worldwide.
But that's not all! We're not just about numbers, we're about people. With a team of over 1000 passionate individuals spread across twelve countries in Europe, Asia, and the US, we're all united by our love for innovation and teamwork.
EveryMatrix is a member of the World Lottery Association (WLA) and the European Lotteries Association. In September 2023, it became the first iGaming supplier to receive WLA Safer Gambling Certification. EveryMatrix is proud of its commitment to safer gambling and player protection whilst producing market-leading gaming solutions.
Join us on this exciting journey as we continue to redefine the iGaming landscape, one groundbreaking solution at a time.
We are looking for a passionate and dedicated Middle BI/DB Developer to join our team in Lviv!
About the unit:
DataMatrix is the part of the EveryMatrix platform responsible for collecting, storing, processing, and utilizing hundreds of millions of transactions from the whole platform every single day. We develop Business Intelligence solutions, reports, third-party integrations, data streaming, and other products for both external and internal use. The team consists of 35 people and is located in Lviv.
What You'll get to do:
- Develop real time data processing and aggregations
- Create and modify data marts to enhance our data warehouse (a minimal BigQuery sketch follows the stack list below)
- Take care of internal and external integrations
- Forge various types of reports
Our main stack:
- DB: BigQuery, PostgreSQL
- ETL: Apache Airflow, Apache NiFi
- Streaming: Apache Kafka
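As a small illustration of the data-mart work mentioned above, here is a sketch assuming the google-cloud-bigquery client library and ambient GCP credentials. The dataset, table, and column names are placeholders rather than the real DataMatrix schema.

```python
# Materialize a daily aggregate from a raw transactions table in BigQuery.
# All dataset/table/column names below are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
CREATE OR REPLACE TABLE analytics.daily_bets_mart AS
SELECT
  DATE(created_at)  AS bet_date,
  brand_id,
  COUNT(*)          AS bet_count,
  SUM(stake_amount) AS total_stake
FROM raw.bet_transactions
GROUP BY bet_date, brand_id
"""

# Runs the aggregation as a single BigQuery job and waits for completion.
client.query(sql).result()
print("daily_bets_mart refreshed")
```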
What You need to know:
Here's what we offer:
- Start with 22 days of annual leave, with 2 additional days added each year, up to 32 days by your fifth year with us.
- Stay Healthy: 10 sick leave days per year, no doctor's note required; 30 medical leave days with medical allowance
- Support for New Parents:
- 21 weeks of paid maternity leave, with the flexibility to work from home full-time until your child turns 1 year old.
- 4 weeks of paternity leave, plus the flexibility to work from home full-time until your child is 13 weeks old.
Our office perks include on-site massages and frequent team-building activities in various locations.
Benefits & Perks:
- Daily catered lunch or monthly lunch allowance.
- Private medical subscription.
- Access to online learning platforms like Udemy for Business, LinkedIn Learning or O'Reilly, and a budget for external training.
- Gym allowance
At EveryMatrix, we're committed to creating a supportive and inclusive workplace where you can thrive both personally and professionally. Come join us and experience the difference!