Project Description
The client is an automation solutions company for the open digital media industry. Featuring the leading omni-channel revenue automation platform for publishers and enterprise-grade programmatic tools for media buyers, Client's publisher-first approach enables advertisers to access premium inventory at scale. Processing nearly one trillion ad impressions per month, Client has built a global infrastructure that activates meaningful connections between consumers, content, and brands. Since 2006, Client's focus on data and technology innovation has fueled the growth of the programmatic industry as a whole. Headquartered in Redwood City, California, Client operates 11 offices and six data centers worldwide.
Responsibilities
• Design, build, and implement a highly scalable, fault-tolerant, highly available big data platform that processes terabytes of data and provides customers with in-depth analytics.
• Develop Big Data pipelines using a modern technology stack such as Spark, Hadoop, Kafka, HBase, Hive, and Presto (a minimal Spark sketch follows this list).
• Develop analytics applications from the ground up using a modern technology stack such as Java, Spring, Tomcat, Jenkins, REST APIs, JDBC, Amazon Web Services, and Hibernate (a minimal Spring REST sketch also follows this list).
• Build data pipelines that automate high-volume data collection and processing and provide real-time analytics.
• Customize Client's reporting and analytics platform based on customer requirements and deliver scalable, production-ready solutions.
• Lead multiple projects to develop features for the data processing and reporting platform; collaborate with product managers, cross-functional teams, and other stakeholders to ensure successful delivery.
• Use established mechanisms to fetch data from external data sources and reconcile it with Client's processed data.
• Collaborate with functional teams to deliver end-to-end products and features and fix bugs to improve performance.
• Develop robust & fault-tolerant systems and monitor implications of changes on data processing pipeline and performance.
• Leverage a broad range of Client's data architecture strategies and propose both data flows and storage solutions.
• Manage Hadoop MapReduce and Spark jobs and resolve ongoing issues with operating the cluster.
• Work closely with cross-functional teams to improve the availability and scalability of the large data platform and the functionality of PubMatic software.
• Implement professional software engineering best practices across the full software development life cycle, including coding standards, code reviews, committing to GitHub, documentation in Confluence, continuous delivery with Jenkins, automated testing, and operations.
• Participate in Agile/Scrum processes such as sprint planning, sprint retrospective, backlog grooming, user story management, work item prioritization, etc.
• Discuss regularly with product managers which software features to include in the Client Data Analytics platform, and understand the technical aspects of customer requirements.
• Stay in regular contact with the quality engineering team, which ensures the quality of the platforms/products and the performance SLAs of the Java-based microservices and the Spark-based data pipeline.
• Support customer issues over email or JIRA (bug tracking system), and provide updates and patches to customers to fix the issues.
• Work with the technical writing team on the technical documents published on the documentation portal.
• Perform code and design reviews for code implemented by peers, as per the code review process.
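As a rough illustration of the data pipeline responsibilities above, the sketch below shows a minimal Spark batch aggregation job in Java. It is an assumption-laden example, not Client's actual pipeline: the input path, the publisherId/adSizeId/revenue columns, the job name, and the ImpressionAggregationJob class are all hypothetical.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;
import static org.apache.spark.sql.functions.lit;
import static org.apache.spark.sql.functions.sum;

public class ImpressionAggregationJob {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("impression-aggregation")   // hypothetical job name
                .getOrCreate();

        // Assumed input: Parquet impression logs with publisherId, adSizeId, and revenue columns.
        Dataset<Row> impressions = spark.read()
                .parquet("s3://example-bucket/impressions/dt=2020-11-01/");

        // Aggregate impression counts and revenue per publisher and ad size.
        Dataset<Row> report = impressions
                .groupBy(col("publisherId"), col("adSizeId"))
                .agg(count(lit(1)).alias("impressionCount"),
                     sum(col("revenue")).alias("totalRevenue"));

        // Write the daily report for the downstream reporting/analytics layer (location is illustrative).
        report.write()
                .mode("overwrite")
                .parquet("s3://example-bucket/reports/impressions-daily/dt=2020-11-01/");

        spark.stop();
    }
}

A production pipeline of this kind would typically be partitioned by date, scheduled (for example via Jenkins or a workflow engine), and monitored for data quality; those pieces are outside the scope of this sketch.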
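Similarly, the analytics application work can be pictured as a small Spring Boot REST service. The controller path, response shape, and the idea of querying a reporting store are illustrative assumptions only; this posting does not describe Client's actual API.

import java.util.List;
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class AnalyticsApplication {

    public static void main(String[] args) {
        SpringApplication.run(AnalyticsApplication.class, args);
    }

    // Returns a daily impression/revenue report for a publisher; the payload is a placeholder.
    @GetMapping("/api/v1/publishers/{publisherId}/reports/daily")
    public List<Map<String, Object>> dailyReport(@PathVariable long publisherId,
                                                 @RequestParam String date) {
        // A real implementation would query the reporting store (for example via JDBC, Hibernate, or Presto)
        // instead of returning this hard-coded row.
        return List.of(Map.of(
                "publisherId", publisherId,
                "date", date,
                "impressionCount", 0L,
                "totalRevenue", 0.0));
    }
}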
Skills
Must have
Data Science, Kafka Streams, Big Data, Spark, Java