
Akvelon
The development office in Ukraine (Kharkiv) was opened in November 2008 and presently employs more than 300 people. Currently, Akvelon Ukraine has offices in Kharkiv, Dnipro, Lviv, Ivano-Frankivsk, and Gdansk. During 2021 alone, our staff grew by 49%!
Companies such as Microsoft, Reddit, LinkedIn, GitHub, Amazon, Pinterest, Airbnb, Starbucks, T-Mobile, Intel, Nokia, Tideworks, Dropbox, and many more have greatly benefited from working with Akvelon's talented employees.
We offer a chance to be relocated to the USA to our customers' HQ.
Akvelon is about socially significant projects, career growth and development across various stacks, a culture of healthy communication and empathy, innovative technologies, and a flexible approach to work. We do not hire candidates for a single project; we bring people into a company where there is always an opportunity to grow and develop effectively in tandem with the team.
Data Engineer with Python/Spark skills
Full Remote · Worldwide · 3 years of experience · Upper-Intermediate
Akvelon is a well-known USA company with offices in Seattle, Mexico, Ukraine, Poland, and Serbia. Our company is an official vendor of Microsoft and Google. Our clients also include Amazon, Evernote, Intel, HP, Reddit, Pinterest, AT&T, T-Mobile, Starbucks, and LinkedIn. To work with Akvelon means to be connected with the best and brightest engineering teams from around the globe and to work with a current technology stack, building Enterprise, CRM, LOB, Cloud, AI and Machine Learning, Cross-Platform, Mobile, and other types of applications customized to clients' needs and processes.
We are looking for a Data Engineer with Python/Spark skills to join the Data Platform Team on a 3-month contract basis.
About the Project
The project is a leading provider of innovative software solutions for terminal operating systems and logistics management. Its products help ports, terminals, and intermodal facilities optimize cargo movement, improve operational efficiency, and streamline supply chain processes. The project offers data-driven solutions for real-time container tracking, yard management, vessel and rail planning, and automated workflows, enabling businesses to handle increasing cargo volumes with greater accuracy and speed.
Responsibilities:
- Develop, maintain, and optimize ETL pipelines using Python and Apache Spark.
- Implement data transformations, cleansing, and enrichment processes to support business needs.
- Work with large-scale distributed data processing using Apache Spark.
- Design and optimize SQL databases, ensuring efficient data modeling and querying.
- Work with Kubernetes, ensuring smooth deployment and management of containerized applications.
- Analyze and document data mapping and data journey workflows.
- Work with messaging architectures and platforms, understanding their advantages, limitations, and best use cases.
- Collaborate in an Agile Scrum environment, following industry-standard SDLC practices.
Requirements:
- Strong experience in Python for production environments.
- Proven hands-on experience with Apache Spark (batch and streaming processing).
- Solid understanding of ETL processes, data transformation, and pipeline optimization.
- Experience working with large-scale distributed data systems.
- Proficiency with Kubernetes and familiarity with kubectl.
- Expertise in SQL, data modeling, and database design.
- Understanding of messaging architectures and their trade-offs.
- Experience with data mapping and documentation methods.
- Previous experience working in an Agile Scrum environment.
Ready to take the next step? Apply now!