Technical Support Engineer

Requirements:

At least five years of experience in the development, testing, and maintenance of Python, Java, or Scala-based applications.

Proficient in compiling, building, and navigating Apache Spark™ source code.

Skilled in identifying and implementing patches/bug fixes for Apache Spark™ source code.

Expertise in crafting and managing data pipelines for Big Data/Hadoop/Apache Spark™/Kafka/Elasticsearch environments.

Practical experience with SQL-based database systems.

Proficient in troubleshooting JVM issues, including garbage collection (GC) analysis and thread dump interpretation.

Experience with AWS or Azure services.

A Bachelor's degree in Computer Science or a related field is mandatory.


Responsibilities:

Troubleshoot and resolve complex customer issues through deep code-level analysis of Apache Spark™, covering Apache Spark™ core internals, Apache Spark™ SQL, Structured Streaming, and Databricks Delta.

Provide best practices guidance around Apache Spark™ runtime performance and usage of Apache Spark™ core libraries and APIs for custom-built solutions developed by Databricks customers.

Help the support team with detailed troubleshooting guides and runbooks.

Contribute to automation and tooling programs to make daily troubleshooting efficient.

Work with the Apache Spark™ Engineering Team and spread awareness of upcoming features and releases.

Identify Apache Spark™ bugs and suggest possible workarounds.

Demonstrate ownership and coordinate with engineering and escalation teams to achieve resolution of customer issues and requests.

Participate in an on-call rotation covering weekdays and weekends.


About the project:

The platform offers a comprehensive suite of cloud-based tools for big data processing, analytics, and machine learning. It integrates with a wide range of data storage systems for efficient data management, and its support for multiple programming languages serves the varied needs of data professionals.

Collaborative workspaces let data engineers, scientists, and analysts work together effectively, improving productivity and innovation. Interactive notebooks support iterative exploration and analysis, speeding the path from data to insight.

The platform also provides advanced machine learning capabilities that let users build, refine, and deploy models at scale, shortening the development cycle while improving the accuracy and scalability of machine learning solutions.

