Hi! We are looking for a Senior Data Engineer
Project: a complex analytics platform in the investment market (USA)
Requirements: 3+ years of experience with Python; experience with Scala and cloud platforms is a plus; intermediate English
Fully remote work is possible

On behalf of Ciklum Digital, we are looking for a Senior Data Engineer to join the UA team on a full-time basis. You will join a highly motivated team working on a modern solution for an existing client. We are looking for technology experts who want to make an impact on new business by applying best practices and taking ownership.

Project description:
This challenging initiative involves designing, implementing, and maintaining a complex intelligence platform that processes multipart Big Data from established vendors and turns it into meaningful, real-life, investment-grade event signals for a leading US-based private equity investor focused on the high-tech industry, with a portfolio that stretches across the globe. Ciklum has been actively and successfully building and maintaining this AWS-based product for over three years and is the lead systems integrator for the intelligent, predictive decision-making technology of the entire end-to-end marketing and decision-science platform, which covers data ingestion, cleansing, standardization, codification, correlation, and predictive decision-making based on heuristic analytical models.

Responsibilities:
- Builds, deploys, and maintains mission-critical analytics solutions that process data quickly at big data scale
- Designs and implements data integration pipelines on the AWS Big Data tech stack, using Apache Spark, Hive, HBase, ELK, PostgreSQL, and Lambda
- Contributes design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, and data extraction, transformation, and loading across multiple data stores
- Owns one or more key components of the infrastructure and works to continually improve them, identifying gaps and improving the platform's quality, robustness, maintainability, and speed
- Cross-trains other team members on the technologies being developed, while continuously learning new technologies from them
- Interacts with engineering teams and ensures that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability
- Performs development, QA, and DevOps roles as needed to ensure total end-to-end responsibility for solutions
- Contributes to CoE activities and community building, participates in conferences, and promotes engineering excellence and best practices

Requirements:
- 3+ years of experience coding in SQL, Python, PySpark, and Scala, with solid CS fundamentals including data structure and algorithm design
- 2+ years contributing to production deployments of large backend data processing and analysis systems as a team lead
- 1+ years of hands-on implementation experience with a combination of the following technologies: Hadoop, Hive, Spark, and SQL and NoSQL data warehouses such as HBase
- 1+ years of experience in cloud data platforms (AWS)
- Knowledge of professional software engineering best practices for the full software development life cycle
- Knowledge of Data Warehousing design, implementation, and optimization
- Knowledge of Data Quality testing, automation and results visualization
- Knowledge of BI reports and dashboards design and implementation
- Knowledge of development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Experience participating in an Agile software development team, e.g. Scrum
- Experience designing, documenting, and defending designs for key components in large distributed computing systems
- A consistent track record of delivering exceptionally high-quality software on large, complex, cross-functional projects
- Demonstrated ability to learn new technologies quickly and independently
- Ability to handle multiple competing priorities in a fast-paced environment
- Undergraduate degree in Computer Science or Engineering from a top CS program required; a Master's degree is preferred
- Experience supporting data scientists and complex statistical use cases is highly desirable

Will be a plus:
- Understanding of cloud infrastructure design and implementation
- Experience in data science and machine learning
- Experience in backend development and deployment
- Experience in CI/CD configuration
- Good knowledge of data analysis in enterprise environments

Personal skills:
- A curious mind and willingness to work with the client in a consultative manner to find areas to improve
- Intermediate+ English
- Good analytical skills
- Good team player motivated to develop and solve complex tasks
- Self-motivated, self-disciplined and result-oriented
- Strong attention to detail and accuracy

What's in it for you:
- A Centre of Excellence is ultimately a community that allows you to improve yourself and have fun. Our Centres of Excellence (CoEs) bring together Ciklumers from across the organisation to share best practices, support, advice, and industry knowledge, and to build a strong community

About Ciklum International

Ciklum is a top-five global Digital Solutions Company for Fortune 500 and fast-growing organisations alike around the world.
Our 3,000+ developers, located in delivery centres across the globe, provide our clients with a range of services, including outsourced software development, enterprise app development, quality assurance, security, R&D, and Big Data & Analytics.

Job posted on 3 November 2020
