Requirements
- 5+ years of hands-on experience with Apache Spark (PySpark/SparkR/sparklyr)
- Experience using Spark in the Databricks ecosystem
- Deep knowledge of the design and implementation of distributed file systems, data warehousing, and streaming technologies, including data architecture patterns, on Azure
- Experience in deploying and managing Spark ML models at scale
- Excellent communication and presentation skills
- Passionate about learning and experimenting with new technologies
Job Responsibilities
- Provide expertise and guidance in the architectural design and implementation of Spark projects, from data engineering with cloud storage to machine learning/deep learning pipelines in data science and AI
- Implement and deliver globally scalable and supportable Spark workloads to a production-grade level of quality in an Agile DevOps working environment
- Engage and consult with end users to understand real-world problems worth solving, and explain how distributed cloud computing can solve them
- Provide technical leadership and impart Spark knowledge to project team members and others in the Data Science Community
About GlobalLogic
GlobalLogic is a leader in digital product engineering services. We help our clients design and build innovative products, platforms, and digital experiences for the modern world. By integrating strategic design, complex engineering, and vertical industry expertise, we help our clients imagine what’s possible and accelerate their transition into tomorrow’s digital businesses. Headquartered in Silicon Valley, GlobalLogic operates design studios and engineering centers around the world, extending our deep expertise to customers in the communications, automotive, healthcare, technology, media and entertainment, manufacturing, and semiconductor industries.
Company website:
https://www.globallogic.com/ua/
The job ad is no longer active. Job unpublished on 19 June 2020.