Trainee AI developer / Data engineer IRC269588

Description

Department/Project Description
GlobalLogic is building an analytical platform for the client. The platform will gather information about companies from various sources, normalize it according to a predefined flow, and present it to users as a set of analytical dashboards. The gathered and processed information will let customers assess the business state of the companies under investigation, predict their future state, and improve the customer’s own business processes.

The client provides high-value analysis and support to partner companies for identifying and mitigating emerging business challenges. Today, the customer’s business processes are highly manual and fragmented.

The project aims to define and deliver a digital platform that connects client employees with relevant, meaningful information about their portfolio holdings, enabling insights and action.

Requirements

Job Description

We are looking for a motivated Trainee Data Engineer to join our team and support AI-focused challenges as well as the development of data pipelines for Generative AI and large language model (LLM) projects. This role is ideal for someone eager to grow in the fields of AI/ML, data engineering, and natural language processing.

Qualifications

  • Student in Computer Science, Data Science, Engineering, or related field.
  • Basic programming skills in Python with OOP; familiarity with SQL and NoSQL.
  • Basic knowledge of REST APIs; familiarity with the FastAPI framework.
  • Basic knowledge of Git, Docker, and Linux commands.
  • Basic knowledge of asynchronous programming and multithreaded flows.
  • Interest in AI — especially GenAI, agents, chats, prompt tuning, and current trends and patterns.
  • Some exposure to ML frameworks (TensorFlow, PyTorch) or NLP libraries (Hugging Face Transformers, spaCy).
  • Understanding of DevOps/MLOps principles and cloud platforms is welcome.

Previous experience and a solid understanding of the following would be a plus:

  • Kafka and other event-streaming services
  • AWS data access/warehousing tools, such as Athena (in use) and S3 storage

Job responsibilities

Key Responsibilities

  • Collaborate with data engineers on model integration and data transformation, including prompt engineering and RAG (Retrieval-Augmented Generation) setups.
  • Develop AI agents and chat applications.
  • Stay aware of modern patterns and trends in the GenAI area; bring new ideas for how they may be leveraged in the project.
  • Work with vector databases and embedding models.

Published 3 July