Solvits Solutions

Joined in 2021
SOLVITS Solutions is a company operating across EMEA geographies.
We provide a wide range of cross-industry services in the Software Development and Business Management & Advisory domains.

SOLVITS specializes in software development services spanning the entire lifecycle—from the inception or conceptualization phase to the operational deployment and ongoing support of the software.

    Senior Data Engineer

    Full Remote · Ukraine · Product · 5 years of experience · English - B2

    About the job:

    We are an innovative AI-driven construction intelligence startup, committed to transforming the construction industry with cutting-edge technology. Our mission is to enhance the efficiency, safety, and productivity of construction projects through intelligent solutions.

    We’re hiring a hands-on Senior Data Engineer who wants to build data products that move the needle in the physical world. Your work will help construction professionals make better, data-backed decisions every day. You’ll be part of a high-performing engineering team based in Tel Aviv.

    Responsibilities:

    • Lead the design, development, and ownership of scalable data pipelines (ETL/ELT) that power analytics, product features, and downstream consumption.
    • Collaborate closely with Product, Data Science, Data Analytics, and full-stack/platform teams to deliver data solutions that serve product and business needs.
    • Build and optimize data workflows using Databricks, Spark (PySpark, SQL), Kafka, and AWS-based tooling.
    • Implement and manage data architectures that support both real-time and batch processing, including streaming, storage, and processing layers.
    • Develop, integrate, and maintain data connectors and ingestion pipelines from multiple sources.
    • Manage the deployment, scaling, and performance of data infrastructure and clusters, including Spark on Kubernetes, Databricks, Kafka, and AWS services.
    • Use Terraform (and similar tools) to manage infrastructure-as-code for data platforms.
    • Model and prepare data for analytics, BI, and product-facing use cases, ensuring high performance and reliability.

    Requirements:

    • 8+ years of hands-on experience working with large-scale data systems in production environments.
    • Proven experience designing, deploying, and integrating big data frameworks such as PySpark, Kafka, and Databricks.
    • Strong expertise in Python and SQL, with experience building and optimizing batch and streaming data pipelines.
    • Experience with AWS cloud services and Linux-based environments.
    • Background in building ETL/ELT pipelines and orchestrating workflows end-to-end.
    • Proven experience designing, deploying, and operating data infrastructure / data platforms.
    • Mandatory hands-on experience with Apache Spark in production environments.
    • Mandatory experience running Spark on Kubernetes.
    • Mandatory hands-on experience with Apache Kafka, including Kafka connectors.
    • Understanding of event-driven and domain-driven design principles in modern data architectures.
    • Familiarity with infrastructure-as-code tools (e.g., Terraform) — advantage.
    • Experience supporting machine learning or algorithmic applications — advantage.
    • BSc or higher in Computer Science, Engineering, Mathematics, or another quantitative field.