Senior ML/MLOps Engineer
Akvelon is a US-based company with offices in Seattle as well as Mexico, Ukraine, Poland, and Serbia. We are an official vendor for Microsoft and Google, and our clients include Amazon, Evernote, Intel, HP, Reddit, Pinterest, AT&T, T-Mobile, Starbucks, and LinkedIn. Working with Akvelon means being connected with some of the best and brightest engineering teams around the globe and using a modern technology stack to build Enterprise, CRM, LOB, Cloud, AI and Machine Learning, Cross-Platform, Mobile, and other applications customized to each client's needs and processes.
We are looking for a skilled MLOps Engineer with expertise in building and optimizing data pipelines, deploying models, processing large datasets, and managing infrastructure for machine learning applications. The ideal candidate will have a strong background in data engineering and hands-on experience in Google Cloud Platform (GCP), including Vertex AI, Kubeflow, BigQuery, and Cloud Storage.
Responsibilities:
- Develop and maintain ML pipelines for data ingestion, processing, and model inference, focusing on large-scale structured and unstructured data.
- Leverage GCP services (Vertex AI, Kubeflow, BigQuery, Cloud Storage, etc.) to build scalable and efficient ML infrastructure.
- Deploy deep learning models for both real-time inference and batch processing using tools like Vertex AI endpoints, Nvidia Triton, ONNX, and Dataflow.
- Implement ETL processes to clean, transform, and optimize data pipelines for ML applications.
- Manage large-scale text data, including preprocessing, document segmentation, and feature extraction.
- Collaborate with ML engineers and data scientists to curate high-quality datasets and optimize data workflows.
- Optimize performance of data pipelines and storage solutions to handle increasing data complexity and volume.
- Ensure automation and monitoring of data pipelines for reliability and efficiency.
- Document processes and best practices for pipeline architectures and data management.
Requirements:
- 4+ years of experience in MLOps/ML.
- Strong knowledge of Python and SQL.
- Hands-on experience with GCP services: Vertex AI, Kubeflow, BigQuery, Dataflow, and Cloud Storage.
- Experience in machine learning model deployments and pipelines.
- Familiarity with workflow orchestration tools (e.g., Kubeflow, Airflow, Cloud Composer).
- Strong understanding of ETL processes, data integration, and database management.
- Ability to work independently and overlap with the EST time zone until at least 2 PM EST.
Nice to have:
- Experience with NLP and text processing tools (e.g., NLTK, spaCy, regular expressions).
- Familiarity with LLM ecosystem tools (e.g., LangChain, Embeddings, Vector DBs).
- Experience with GPU-based ML workloads and cloud deployment tools such as Cloud Run and Kubernetes.
Working conditions and benefits:
- Paid vacation and sick leave (no sick note required)
- Official state holidays: 11 days considered public holidays
- Professional growth through challenging projects, with the possibility to switch roles and master new technologies and skills with company support
- Flexible working schedule: 8 hours per day, 40 hours per week
- Personal Career Development Plan (CDP)
- Employee support program (Discount, Care, Health, Legal compensation)
- Paid external training, conferences, and professional certifications that meet the company's business goals
- Internal workshops & seminars
- Corporate library (Paper/E-books) and internal English classes
This is an exciting opportunity to contribute to ML infrastructure in the financial services domain while working with a cutting-edge tech stack.