Strong Middle/Senior Data Engineer

Our customer is a world leader in entertainment. You will build dimensional analytics infrastructure (dbt + Snowflake) for a high-volume live-events ticketing and hospitality business, transforming existing Snowflake source data into production-ready BI models that support commercial strategy, pricing, and revenue optimization.

Work schedule: until 20:00 (Ukraine time).

 

Requirements:

  • 5+ years building dimensional data models in production (facts and dimensions)
  • Expert SQL + dbt with deep understanding of materialization strategies (views, tables, incremental)
  • Snowflake production experience
  • Git workflows: pull requests, code review, and peer collaboration
  • Code quality adherence: linting, pre-commit hooks, following team conventions
  • Strong data validation and quality assurance skills
  • Experience debugging and correcting transformation logic
  • Can deliver independently with minimal supervision while collaborating effectively with Analytics/Data Platform teams
  • Portfolio required: GitHub or similar showing dimensional modeling/dbt work
  • At least Upper Intermediate English level
  • Preferred:
    • E-commerce/ticketing/transactional business experience
    • Experience with cross-system data reconciliation
    • BI platform integration knowledge (Tableau, Omni Analytics, Looker)

 

Responsibilities:

  • Expected Effort Distribution: 20% setup/design → 30% initial model building → 40% validation/iteration → 10% documentation/handoff
  • Scope Boundaries: All source data exists in the Snowflake Landing layer (Bronze). You will build the Analytics layer (Silver) dimensional models – facts and dimensions. Analysts will create their own Marts (Gold layer) using Omni Analytics for self-service BI, so your dimensional models – especially dimensions – must be production-ready, well-documented, and high quality for direct downstream consumption. You will not build data ingestion pipelines, orchestration workflows, or final BI dashboards. (An illustrative sketch of this layering follows this list.)
  • Expect significant time on validation work: fixing data quality issues, implementing complex business rules, and iterating based on stakeholder feedback to ensure accuracy.
  • Code Review Process: All code pushed to GitHub for peer review and approval by Data Platform team before merging. Must adhere to established linting standards, pre-commit hooks, and data platform conventions.
  • Strong collaboration and communication with Data Platform team required throughout implementation.
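
For illustration only, not a deliverable described in this posting: a minimal sketch of the kind of Silver-layer dbt model this role would own, assuming the incremental materialization strategy named in the requirements and reading from a hypothetical Bronze source. All model, source, and column names are invented.

    -- models/analytics/fct_ticket_sales.sql (hypothetical name)
    -- Silver-layer fact model built from a Bronze (Landing) source table,
    -- using the incremental materialization strategy listed in the requirements.
    {{ config(materialized='incremental', unique_key='ticket_sale_id') }}

    select
        ticket_sale_id,
        event_id,          -- join key for a dim_event dimension
        customer_id,       -- join key for a dim_customer dimension
        sale_amount,
        currency_code,
        sold_at
    from {{ source('landing', 'ticket_sales') }}  -- hypothetical Bronze source
    {% if is_incremental() %}
      -- on incremental runs, only pick up rows newer than what is already loaded
      where sold_at > (select max(sold_at) from {{ this }})
    {% endif %}

In practice, a model like this would be accompanied by tests and column documentation in the model's YAML file, in line with the validation and documentation work described above.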

 

What we offer:

  • Annual paid vacation of 18 working days.
  • Extra vacation days for long-term cooperation.
  • Annual paid sick leave of 10 days.
  • Maternity/Paternity leave.
  • The opportunity for sabbatical leave.
  • Marriage and Parenthood Package.
  • Compensation for sports activities (up to $250 per year) or health insurance coverage (70%), available after the trial period.
  • Internal education (corporate library, Udemy courses).
  • Career development plan.
  • English and Spanish classes.
  • Support with paying taxes and managing your PE (Private Entrepreneur).
  • Technical equipment.
  • Internal Referral program.
  • Opportunity to take part in company volunteering activities.
  • Sombra is a “Friendly to Veterans” award-holder.

 

Required languages

English B2 - Upper Intermediate