Big Data Engineer

If you are excited by technology that handles hundreds of thousands of transactions per second, collects tens of billions of events each day, and evaluates thousands of data points in real time while responding within a few milliseconds, then we are the place for you!

What you’ll do:
- Work with Big Data technologies such as Hadoop, MapReduce, Kafka, and/or Spark, as well as columnar databases
- Architect, design, code, and maintain components for aggregating tens of billions of daily transactions
- Migrate services from on-premises infrastructure to the cloud
- Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation for streaming and batch ETLs and RESTful APIs
- Mentor junior team members

Requirements:
- 5+ years of recent hands-on experience with Java and/or Python
- Strong knowledge of collections, multi-threading, the JVM memory model, etc.
- Great understanding of designing for performance, scalability, and reliability
- Superb understanding of algorithms and the trade-offs involved in a Big Data setting
- In-depth understanding of object-oriented programming concepts
- AWS experience, especially with EMR, Step Functions, Glue, and CDK
- Excellent interpersonal and communication skills
- Understanding of full software development life cycle, agile development, and continuous integration
- Good knowledge of Linux command-line tools
- Experience with Hadoop MapReduce, Spark, Airflow, Pig, and Hive
- Solid understanding of database fundamentals, good knowledge of SQL

The job ad is no longer active
Job unpublished on 29 August 2020
