Digis (a Fiverr company)

Data Engineer

About Digis

Digis is a European IT company with 200+ specialists delivering complex SaaS products, enterprise platforms, and AI-powered solutions worldwide.

We partner with companies from the US, UK, and EU to build long-term distributed development teams. Our engineers enjoy transparency, stability, and modern engineering practices, while working directly with strong technical teams on the client side.
 

About the Project

We are strengthening the team on a long-term data-intensive SaaS platform used by large international enterprises in the hospitality domain. The platform processes data from thousands of locations globally and plays a key role in helping organizations increase revenue through analytics, training, and performance insights.

The project is stable: several Digis engineers already work there, and the partnership has been running for years.

You will work with large-scale distributed data processing, AWS cloud services, and modern ETL/ELT pipelines.
 

Tech Stack

Spark / PySpark, AWS S3, DynamoDB, PostgreSQL or MySQL, AWS ECS Fargate.

Additional tools:
AWS Glue, EMR, AWS Batch, Flink, Beam, Cassandra / ScyllaDB, AWS Step Functions

Team: up to 50 people, including a CTO, 6 data engineers, 2 DevOps engineers, and 2 Digis engineers.
A Team Lead/CTO performs code reviews and supports architectural decisions.
 

Responsibilities

  • Develop and optimize data pipelines using PySpark and AWS.
  • Extract, process, and transform data from multiple storage and streaming sources.
  • Write and tune SQL queries (PostgreSQL / Redshift).
  • Build and debug ETL/ELT processes on AWS Glue or EMR.
  • Participate in architectural improvements and propose technical optimizations.
  • Collaborate closely with backend engineers, DevOps, data engineers, and the CTO.
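To illustrate the "write and tune SQL queries" responsibility above, here is a minimal sketch using Python's built-in sqlite3 as a local stand-in for PostgreSQL/Redshift (the table, columns, and index name are hypothetical). It shows the typical tuning loop: inspect the query plan, add an index on the filter column, and confirm the plan switches from a full scan to an index search.

```python
import sqlite3

# In-memory SQLite database standing in for PostgreSQL/Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (location_id INTEGER, revenue REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

query = "SELECT SUM(revenue) FROM events WHERE location_id = 7"

# Before indexing: the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Tuning step: add an index on the filter column.
conn.execute("CREATE INDEX idx_events_location ON events (location_id)")

# After indexing: the same query now uses the index.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[-1][-1])  # e.g. "SCAN events"
print(plan_after[-1][-1])   # e.g. "SEARCH events USING INDEX idx_events_location ..."
```

On PostgreSQL the equivalent step would be `EXPLAIN ANALYZE` plus `CREATE INDEX`; the workflow is the same.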
     

Examples of real tasks:

  • Scaling the case detection pipeline.
  • Splitting a large API into multiple microservices.
  • Emulating pub/sub or cloud-run flows locally.
  • Extracting & processing data via PySpark + S3.
  • Running/debugging ETL jobs on Glue/EMR with Spark UI monitoring.
     

Requirements

  • 3+ years of experience as a Data Engineer.
  • 1+ year of commercial experience with Spark/PySpark.
  • 2+ years of SQL experience (PostgreSQL / MySQL / Redshift).
  • 1+ year of AWS experience on a production project.
  • English: Upper-Intermediate+.
     

Why You Will Enjoy Working Here

  • Stable, long-term project with Digis engineers already onboard.
  • Technical ownership: propose architecture improvements without heavy bureaucracy.
  • End-to-end responsibility over the full data lifecycle.
  • Clear business impact — your work directly improves operational and revenue performance across thousands of locations.
  • Modern cloud-first environment built on AWS & distributed processing.
  • Strong engineering culture with a supportive CTO and cross-functional team.
     

Interested? Send your CV and we’ll be happy to talk and share more details about the project and the team.

Required languages

English B2 - Upper Intermediate
Python, SQL, Git, ETL, PostgreSQL, Data Warehouse, PySpark, Apache Spark, Apache Airflow, Snowflake
Published 28 November