Senior Data Engineer

You will be part of a team developing digital collaboration tools with features enabling analog-to-digital workflows. You will be working with a very well-known brand. The current focus is on applications built for Microsoft Teams and iOS/Android mobile apps. The product backlog spans a broad range of technologies and business needs; this year's focus is global scalability, compliance, and new UX features. The product is built entirely on the Azure technology stack with GitHub code repositories, and the team operates using Agile Scrum processes. This is a top digital initiative for the company and a fun team to work on!

 

Minimum Requirements:

- Minimum of 4 years of experience with SQL and Python, specifically for data engineering tasks.

- Proficiency in working with cloud technologies such as Azure or AWS.

- Experience with Spark and Databricks or similar big data processing and analytics platforms.

- Experience working with large data environments, including data processing, data integration, and data warehousing.

- Experience with data quality assessment and improvement techniques, including data profiling, data cleansing, and data validation (a brief sketch follows this list).

- Familiarity with data lakes and their associated technologies, such as Azure Data Lake Storage, AWS S3, or Delta Lake, for scalable and cost-effective data storage and management.

- Experience with NoSQL databases, such as MongoDB or Azure Cosmos DB, for handling unstructured and semi-structured data.
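
For illustration only, a minimal sketch of the kind of data profiling and validation work these requirements describe, using PySpark. The dataset, column names, and business rules are hypothetical; in a real pipeline the data would be read from a lake store such as ADLS, S3, or a Delta table rather than built in memory.

```python
# Illustrative only: hypothetical dataset and rules, not project code.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("data-quality-sketch").getOrCreate()

# Hypothetical orders data; in practice this would be read from the lake,
# e.g. spark.read.format("delta").load(path) on ADLS or S3.
orders = spark.createDataFrame(
    [(1, "2024-01-05", 120.0), (2, None, 75.5), (3, "2024-01-07", -10.0)],
    ["order_id", "order_date", "amount"],
)

# Profiling: row count and per-column null counts.
total = orders.count()
orders.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in orders.columns]
).show()

# Validation/cleansing: split rows that satisfy simple business rules
# from rows routed to a quarantine set for inspection.
rules = F.col("order_date").isNotNull() & (F.col("amount") >= 0)
clean = orders.filter(rules)
quarantine = orders.filter(~rules)
print(f"{clean.count()} of {total} rows passed validation")
```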

 

Additional Skills (Nice to Have):

- Familiarity with Agile and Scrum methodologies, including working with Azure DevOps and Jira for project management.

- Knowledge of DevOps methodologies and practices, including continuous integration and continuous deployment (CI/CD).

- Experience with Azure Data Factory or similar data integration tools for orchestrating and automating data pipelines.

- Ability to build and maintain APIs for data integration and consumption (a brief sketch follows this list).

- Experience with data backends for software platforms, including database design, optimization, and performance tuning.
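
For illustration, a minimal sketch of a data-consumption API of the sort described above. The posting does not name a framework, so FastAPI is assumed here; the endpoint, metric names, and in-memory store are hypothetical stand-ins for a real warehouse or lakehouse serving layer.

```python
# A minimal sketch, assuming FastAPI; names and data are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-api-sketch")

class Metric(BaseModel):
    name: str
    value: float

# Stand-in for a query against the warehouse / lakehouse serving layer.
_FAKE_STORE = {"daily_active_users": 1234.0, "orders_per_minute": 8.2}

@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    # Return a single curated metric, or 404 if it is unknown.
    if name not in _FAKE_STORE:
        raise HTTPException(status_code=404, detail="unknown metric")
    return Metric(name=name, value=_FAKE_STORE[name])

# Run locally with: uvicorn api_sketch:app --reload
```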

 

Responsibilities:

- Build, deploy, and maintain mission-critical analytics solutions that process data quickly at big-data scale.

- Design and implement data integration pipelines.

- Contribute design, code, configuration, and documentation for components that manage data ingestion, real-time streaming, batch processing, and extract-transform-load (ETL) workflows across multiple data stores (see the sketch after this list).

- Take part in the full cycle of feature development (requirements analysis, decomposition, design, etc.).

- Design, develop, and implement web and back-end solutions with other talented engineers in a collaborative team environment.

- Contribute to the overall quality of development services through brainstorming, unit testing, and proactively offering improvements and innovations.
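
As referenced above, a minimal sketch of a streaming ingestion step with Spark Structured Streaming; it is not the team's actual pipeline. The built-in "rate" source stands in for a real feed such as Event Hubs or Kafka, and the sink and checkpoint paths are hypothetical.

```python
# A minimal streaming-ingestion sketch; source, sink, and paths are
# hypothetical stand-ins, not the team's actual pipeline.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# "rate" emits synthetic rows so the sketch is self-contained; in
# production the source would be Event Hubs, Kafka, or similar.
stream = (
    spark.readStream.format("rate").option("rowsPerSecond", 10).load()
    .withColumn("is_even", F.col("value") % 2 == 0)  # toy transformation
)

query = (
    stream.writeStream
    .format("parquet")                      # swap for "delta" on Databricks
    .option("path", "/tmp/sketch/output")   # hypothetical sink location
    .option("checkpointLocation", "/tmp/sketch/checkpoint")
    .outputMode("append")
    .trigger(processingTime="10 seconds")   # micro-batch every 10 seconds
    .start()
)
query.awaitTermination(30)  # run briefly for the sketch, then stop
query.stop()
```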

