Database Administrator with DevOps skills

— 5+ years of experience in a DevOps/DBA/DBE role

— Extensive experience in SQL (PostgreSQL, PostgreSQL, & more PostgreSQL, and maybe some MySQL)

— Working knowledge of JSON and the PostgreSQL JSONB type

— Strong operational experience in Linux/Unix environments; scripting in Shell, Perl, or Python is required

— Strong troubleshooting skills

— At least upper-intermediate level of English, both spoken and written

— When you talk in your sleep, it should be PostgreSQL-compatible SELECT statements.

 

As a plus:

— Experience in horizontal database scaling (Citus, TimescaleDB)

— Familiarity with PostGIS.

— Experience with NoSQL databases (MongoDB).

— Experience with Airflow and Pentaho PDI (Kettle)/PDR.

 

Our philosophy is that we are a small, close-knit team, and we care deeply about you:

— Competitive salary;

— Great new office space;

— Flexible working schedule, remote work possible;

— Working directly with colleagues from Silicon Valley and around the world;

— Team trips, certification and events compensation, medical insurance, sports, etc.;

— Last but not least, we are really fun to work with!

 

Responsibilities:

— Maintain roughly 100 custom data pipelines written in Perl, PHP, Python & Java.

— Create new data pipelines and transformations.

— Assist with data migrations and database component software updates

— Assist with cloud and bare-metal infrastructure buildout

— Build out new services such as data presentation systems (Metabase, Pentaho Report Designer, Tableau, etc.)

— Work with the BI team to turn raw data into insights as efficiently as possible

— Help troubleshoot any BI tool issues such as failing jobs and reporting layer slowness

— Work with other teams to create ETLs.

— Set up integration environments and provision databases for new projects

— Develop the DevOps process using CI/CD tools

— Repair failing components and be available on call when needed

— Be given only a mallet and some small wrenches with which to work
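To give a feel for the day-to-day pipeline work described above, here is a minimal sketch of one pipeline stage in Python. All names and fields are illustrative, not taken from the actual codebase: it parses raw JSON rows, drops malformed ones, and normalizes a couple of fields.

```python
import json

def transform(raw_records):
    """Toy pipeline stage: parse JSON rows, skip malformed ones,
    and normalize field names and values (all names are illustrative)."""
    clean = []
    for line in raw_records:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # a real pipeline would log and count rejects
        clean.append({
            "user_id": int(record["id"]),
            "email": record.get("email", "").strip().lower(),
        })
    return clean

rows = [
    '{"id": "1", "email": " Alice@Example.com "}',
    "not json",
    '{"id": "2"}',
]
print(transform(rows))
# → [{'user_id': 1, 'email': 'alice@example.com'}, {'user_id': 2, 'email': ''}]
```

The real pipelines are in Perl, PHP, Python, and Java; this only sketches the parse → validate → normalize shape common to all of them.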

 

This position involves:

 

— Caring for our large horizontally scaled database clusters, processes, and custom data pipelines.

— Creating queries and ETLs and improving poorly written ones.

— Optimizing the clusters to run as efficiently as possible, maintaining data consistency.

— Helping data producers and consumers to move their data efficiently and produce the results they want.

— Contributing to the planning of future growth.

— Managing revision control and migrating data forward through new software versions and architectures.

— Implementing our technologies on cloud environments (AWS, Azure, GCP) and bare metal.

— Dreaming about the numbers and ways to improve the systems when asleep.

— Implementing solutions that are good enough now, while planning for better solutions in the future.

— Building tools that make others within the company more productive.

The job ad is no longer active
Job unpublished on 12 April 2021
