Lead Data Scientist

Project Description
We're looking for an experienced Data Scientist to help our new client, CognitOps, take their data and ML ecosystems to the next level.

CognitOps is a cloud- and ML-driven supply chain startup founded in 2018 and headquartered in Austin, TX. They tackle cool, challenging problems: for example, they help warehouses supply hospitals with PPE and critical medical equipment more efficiently, and they help e-commerce businesses ship orders on time. They solve these problems using modern technologies such as machine learning, queue-based simulation, Google Cloud, Kubernetes, Kafka, and Scala. The team you'll join has decades of startup experience building scalable, fault-tolerant, secure software on big data, as well as decades of experience working with warehouses and an empathetic understanding of their needs. CognitOps is still a small, early-stage company, so you'll have the opportunity to come in, make a huge impact, and become a leader as the company grows.

Responsibilities
- Own complex projects from ideation to deployment
- Work closely with product teams to identify important questions and answer them with data
- Design and interpret experiments to measure the impact of new features
- Define core data sets and schemas, and visualize and track key metrics
- Run impactful inferential analyses and data investigations to identify recurring patterns and root causes, and propose actionable product solutions
- Communicate analyses and data-backed recommendations to stakeholders
- Champion a data-first approach to decision-making across the entire organization
- Mentor and grow other data scientists into senior roles and establish a culture of statistical excellence

Our stack:
- Python (scikit-learn, pandas, PySpark). Experience in a Python environment is important.
- Airflow, Docker, Kubernetes, Google Cloud Platform, MLflow. Experience with these is nice to have, but not necessary.
- PyTorch, Keras, TensorFlow, PyMC3, statsmodels, Prophet, and similar libraries are also nice to have but not required.

Skills Required
- 10+ years of experience as a Data Engineer and Data Scientist, with increasingly impactful accomplishments
- Deep understanding of various statistical techniques and experimentation analysis workflows
- Strong familiarity with SQL, data visualization tools, and working knowledge of Python
- Experience with tools like Tableau, Power BI, Periscope, Looker, or similar
- For this role, we'd love to see expertise in a field that is useful to us and where we don't currently have an expert on staff. Right now, our ideal areas are:
  - Deep learning
  - Time series models, especially multivariate time series
  - Causal analysis
  - Econometrics
- Experience with cloud computing (Google Cloud Platform, AWS, etc.) is desirable
- Excellent communication skills, both written and spoken (English)

About Zoolatech

We are an IT company that combines an extensive technology stack, flexibility, and charity initiatives. We are free from bureaucracy, and we embrace digital transformation.


- We are responsible: we know that we create our own environment, so we strive to take care of others.
- We encourage and support colleagues in their desire to learn and develop professionally.
- Work-life balance is natural and genuine at ZoolaTech: we work, and then we have fun.

Company website:
https://zoolatech.com/

DOU company page:
https://jobs.dou.ua/companies/zoolatech/
