Datagrok

Joined in 2020
Datagrok is a next-generation, web-based integrated data analytics platform that provides a unified experience for data access, data augmentation, exploratory data analysis, advanced visualizations, scientific computations, machine learning, security, governance, and collaboration. Our proprietary technology enables ingesting big (up to 10M rows) datasets and performing CPU-intensive scientific computations, interactive data exploration, and visualization entirely on the client side, in the browser.

We are a young startup, yet we already positively impact millions of lives (ask us how we are connected to the research and development of the COVID-19 vaccines). We are pushing the limits of what’s possible, so we need people who are up to the challenge. You will be solving hard problems, learning complex scientific domains, and managing ever-increasing complexity.

We are unlike anything you’ve seen; check out what we can do at:
https://youtu.be/67LzPsdNrEc
Or try it yourself (click on LAUNCH): datagrok.ai

    Sr. DevOps Engineer / Data Architect

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Advanced/Fluent

    We are building a browser-based data analytics platform with computational capabilities for biopharma.

    We are looking for a Senior DevOps Engineer to own infrastructure design, automate deployments, and optimize performance at scale. This is a DevOps-first role, but experience with systems architecture, high-performance or scientific computing, or data engineering is a huge plus.

    What you’ll do

    DevOps & infra engineering (80% focus)

    • Design, automate, and maintain scalable cloud & on-prem infrastructure (AWS, GCP, Kubernetes)
    • Build and optimize CI/CD pipelines (GitHub Actions, Jenkins)
    • Enhance system observability and monitoring
    • Ensure security, performance, and fault tolerance across all environments
    • Simplify deployment and infrastructure management to keep maintenance minimal

    Performance optimization & systems engineering (15% focus)

    • Architect and prototype an OLAP solution that syncs with arbitrary external databases
    • Architect the caching solution for scientific computations
    • Optimize database performance and backend services 
    • Design and refine cloud-based data processing architectures for analytics pipelines
    • Work with backend teams to ensure smooth integration between applications and infrastructure

    Data infrastructure (5% bonus, not required)

    • Integrate and optimize data warehouses (ClickHouse, Redshift, Snowflake)
    • Optimize query performance and storage for large-scale analytics

    Some tech you will be working with

    AWS (ECS, EKS), CloudFormation, Terraform, Jenkins, GitHub Actions, Prometheus, CloudWatch, Docker, Kubernetes, ArgoCD, Python, Bash
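    To give a flavor of the CI/CD tooling above, a pipeline of the kind you would build and optimize might start as small as this (a sketch only; the workflow name, image tag, and steps are illustrative, not our actual setup):

    ```yaml
    # Illustrative GitHub Actions workflow: build a Docker image and run tests on every push.
    name: ci
    on:
      push:
        branches: [main]
    jobs:
      build-and-test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build container image
            run: docker build -t app:${{ github.sha }} .
          - name: Run tests inside the image
            run: docker run --rm app:${{ github.sha }} ./run-tests.sh
    ```

    In practice, the role is about evolving pipelines like this into multi-environment deployments with caching, observability hooks, and minimal maintenance overhead.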

     

    What we’re looking for

    We don’t expect you to check every box, but you should be comfortable solving DevOps and software engineering problems at scale.

    Location: remote (GMT to GMT+3); a minimum 4-hour overlap with EST is required

    Must-have skills

    • 5+ years of experience in DevOps, infrastructure, or backend engineering
    • 3+ years managing complex deployment environments
    • 3+ years with cloud environments: AWS (primary), GCP (a bonus)
    • 3+ years of experience with CI/CD pipelines
    • Strong Docker experience; Kubernetes is a must
    • Infrastructure as Code (IaC): Terraform + CloudFormation
    • Proficiency in backend scripting/automation (Python, Go, Rust, or TypeScript)
    • Observability & monitoring: any of Prometheus, Grafana
    • Experience with complex enterprise deployments
    • Experience with rapid-growth environments and a track record of scaling infrastructure
    • Security best practices for cloud & on-prem deployments
    • Strong communication skills, experience working with diverse remote teams, and resilience under pressure

    Bonus points (for broader impact)

    • SaaS and startup experience
    • Experience leading teams
    • Experience with data analytics software
    • Data engineering & warehouse performance tuning (ClickHouse, Snowflake, Redshift)
    • MLOps or AI model deployment experience
    • High-performance computing (HPC)

    Why join us?

    • Work on high-impact problems in cloud infrastructure, automation, and performance optimization
    • Own and shape DevOps strategy in a fast-growing company
    • Remote-friendly, high-trust culture with minimal bureaucracy

    If you’re excited about building fast, scalable, and reliable infrastructure while leveraging your software engineering skills (and maybe some data engineering too), let’s talk!
