Senior Data Engineer (IRC264689)
Our client provides collaborative payment, invoice and document automation solutions to corporations, financial institutions and banks around the world. The company’s solutions are used to streamline, automate and manage processes involving payments, invoicing, global cash management, supply chain finance and transactional documents. Organizations trust these solutions to meet their needs for cost reduction, competitive differentiation and optimization of working capital.
Serving industries such as financial services, insurance, health care, technology, communications, education, media, manufacturing and government, Bottomline provides products and services to approximately 80 of the Fortune 100 companies and 70 of the FTSE 100 companies.
Our client is a participating employer in the Employment Verification (E-Verify) program. EOE/AA/M/F/V/D/E-Verify Employer.
Our client is an Equal Employment Opportunity and Affirmative Action Employer.
As part of the GlobalLogic team, you will grow, be challenged, and expand your skill set working alongside highly experienced and talented people.
Don’t waste a second: apply!
Skill Category
Data Engineering
We expect experienced candidates who can join a new team and demonstrate experience in the following areas:
- Experience with Databricks or similar
  - Hands-on experience with the Databricks platform or a similar one is helpful
  - Managing Delta tables, including tasks like incremental updates, compaction, and restoring previous versions
  - Proficiency in Python (or another programming language) and SQL, commonly used to create and manage data pipelines and to query and run BI/DWH workloads on Databricks
  - Familiarity with other languages such as Scala (common in the Spark/Databricks world) or Java is also beneficial
- Experience with Apache Spark
  - Understanding of Apache Spark’s architecture and data processing concepts (RDDs, DataFrames, Datasets)
  - Knowledge of Spark-based workflows
- Experience with data pipelines
  - Experience in designing, building, and maintaining robust and scalable ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines
- Data understanding and business acumen
  - The ability to analyse and understand data, identify patterns, and troubleshoot data quality issues is crucial; familiarity with data profiling techniques
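The Delta table tasks listed above (incremental updates, restoring versions) revolve around merge/upsert semantics and version history. A minimal plain-Python sketch of that pattern, using hypothetical records keyed by an assumed `id` column instead of a real Delta table:

```python
# Sketch of the upsert ("MERGE INTO") pattern behind incremental
# Delta table updates, with plain dicts standing in for a table.
# Record contents and the "id" key are hypothetical.

def upsert(table: dict, batch: list[dict], key: str = "id") -> dict:
    """Apply an incremental batch: update rows with matching keys, insert new ones."""
    merged = dict(table)  # copy, so earlier versions stay intact
    for record in batch:
        merged[record[key]] = record  # matched -> update, otherwise -> insert
    return merged

# Keeping prior snapshots mimics Delta's version history / RESTORE.
versions = [{}]  # version 0: empty table keyed by id

batch_1 = [{"id": 1, "status": "new"}, {"id": 2, "status": "new"}]
batch_2 = [{"id": 2, "status": "paid"}, {"id": 3, "status": "new"}]

versions.append(upsert(versions[-1], batch_1))  # version 1
versions.append(upsert(versions[-1], batch_2))  # version 2

current = versions[-1]
restored = versions[1]  # "restore" the table to version 1

print(current[2]["status"])   # paid
print(restored[2]["status"])  # new
```

On Databricks the same effect comes from `MERGE INTO` plus `RESTORE TABLE ... TO VERSION AS OF`; this sketch only illustrates the semantics.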
Job responsibilities
- Developing a Postgres-based central storage location as the basis for long-term data storage
- Standing up microservices to retain data based on tenant configuration, and a UI that enables customers to configure their retention policies
- Creating the pipeline that transforms data from the transactional database into a format suited to analytical queries
- Helping pinpoint and fix data quality issues
- Participating in code review sessions
- Following the client’s code and data quality standards
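The transactional-to-analytical transformation mentioned above can be sketched in plain Python; the table shape, tenant names, and column names are hypothetical, and a real pipeline would express this as Spark/SQL on Databricks:

```python
from collections import defaultdict
from typing import Iterable

# Hypothetical row-level transactional records, one per payment event.
transactions = [
    {"tenant": "acme",   "amount": 120.0, "status": "settled"},
    {"tenant": "acme",   "amount": 80.0,  "status": "failed"},
    {"tenant": "globex", "amount": 50.0,  "status": "settled"},
]

def to_analytical(rows: Iterable[dict]) -> list[dict]:
    """Aggregate row-level transactions into a per-tenant summary
    suited to analytical queries (one row per tenant)."""
    totals = defaultdict(lambda: {"settled_amount": 0.0, "txn_count": 0})
    for row in rows:
        summary = totals[row["tenant"]]
        summary["txn_count"] += 1
        if row["status"] == "settled":
            summary["settled_amount"] += row["amount"]
    return [{"tenant": t, **s} for t, s in sorted(totals.items())]

print(to_analytical(transactions))
```

The design choice mirrored here is the usual one: keep the transactional store normalized and row-oriented, and materialize denormalized aggregates for the analytical side.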
#Remote
The job ad is no longer active