Jobs: Data Engineer
· 841 views · 70 applications · 4d
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · English - B1
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL and NoSQL databases (Redis, MongoDB, Elasticsearch) or BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Experience working with high-volume tables (10M+ rows); see the chunked-processing sketch after this list.
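For a concrete picture of the Pandas-plus-SQL work behind these requirements, here is a minimal sketch of processing a high-volume table in bounded-memory chunks. The connection string, table, and column names are illustrative assumptions.

```python
# Minimal sketch: stream a 10M+ row table in chunks instead of loading it at once.
# The connection string, table, and column names are illustrative only.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@host:5432/analytics")

totals = []
# chunksize keeps memory bounded even for very large tables
for chunk in pd.read_sql_query(
    "SELECT user_id, amount, created_at FROM orders", engine, chunksize=500_000
):
    chunk["created_at"] = pd.to_datetime(chunk["created_at"])
    daily = chunk.groupby(chunk["created_at"].dt.date)["amount"].sum()
    totals.append(daily)

# Combine the per-chunk aggregates into a single daily revenue series
daily_revenue = pd.concat(totals).groupby(level=0).sum()
print(daily_revenue.head())
```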
Optional skills (as a plus):
• Experience with Spark (PySpark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of DS and machine learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (APIs, integration logic);
• Implement various data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models.
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
-
· 97 views · 15 applications · 6d
Senior Data Engineer
Full Remote · Ukraine · Product · 5 years of experience · English - B2
Join a Company That Invests in You
Seeking Alpha is the world's leading community of engaged investors. We're the go-to destination for investors looking for actionable stock market opinions, real-time market analysis, and unique financial insights. At the same time, we're also dedicated to creating a workplace where our team thrives. We're passionate about fostering a flexible, balanced environment with remote work options and an array of perks that make a real difference.
Here, your growth matters. We prioritize your development through ongoing learning and career advancement opportunities, helping you reach new milestones. Join Seeking Alpha to be part of a company that values your unique journey, supports your success, and champions both your personal well-being and professional goals.
What We're Looking For
Seeking Alpha is looking for a Senior Data Engineer responsible for designing, building, and maintaining the infrastructure necessary for analyzing large data sets. This individual should be an expert in data management, ETL (extract, transform, load) processes, and data warehousing and should have experience working with various big data technologies, such as Hadoop, Spark, and NoSQL databases. In addition to technical skills, a Senior Data Engineer should have strong communication and collaboration abilities, as they will be working closely with other members of the data and analytics team, as well as other stakeholders, to identify and prioritize data engineering projects and to ensure that the data infrastructure is aligned with the overall business goals and objectives.
What You'll Do
- Work closely with data scientists/analytics and other stakeholders to identify and prioritize data engineering projects and to ensure that the data infrastructure is aligned with business goals and objectives
- Design, build, and maintain optimal data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources, including external APIs, data streams, and data stores (a minimal PySpark sketch follows this list).
- Continuously monitor and optimize the performance and reliability of the data infrastructure, and identify and implement solutions to improve scalability, efficiency, and security
- Stay up-to-date with the latest trends and developments in the field of data engineering, and leverage this knowledge to identify opportunities for improvement and innovation within the organization
- Solve challenging problems in a fast-paced and evolving environment while maintaining uncompromising quality.
- Implement data privacy and security requirements to ensure solutions comply with security standards and frameworks.
- Enhance the team's DevOps capabilities.
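As a rough illustration of the pipeline work described above, the sketch below reads raw events from S3 with PySpark, cleans them, and writes partitioned Parquet. Bucket names, paths, and columns are assumptions made for the example.

```python
# Minimal PySpark sketch: raw JSON events from S3 -> cleaned, partitioned Parquet.
# Bucket names, paths, and columns are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/events/2024-01-01/")

cleaned = (
    raw.filter(F.col("user_id").isNotNull())        # drop events without a user
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])                  # idempotent re-runs
)

(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/events/"))
```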
Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 2+ years of proven experience developing large-scale software using an object-oriented or functional language.
- 5+ years of professional experience in data engineering, focusing on building and maintaining data pipelines and data warehouses
- Strong experience with Spark, Scala, and Python, including the ability to write high-performance, maintainable code
- Experience with AWS services, including EC2, S3, Athena, Kinesis/Firehose, Lambda, and EMR
- Familiarity with data warehousing concepts and technologies, such as columnar storage, data lakes, and SQL
- Experience with data pipeline orchestration and scheduling using tools such as Airflow
- Strong problem-solving skills and the ability to work independently as well as part of a team
- High-level English - a must.
- A team player with excellent collaboration skills.
Nice to Have:
- Expertise with Vertica or Redshift, including experience with query optimization and performance tuning
- Experience with machine learning and/or data science projects
- Knowledge of data governance and security best practices, including data privacy regulations such as GDPR and CCPA.
- Knowledge of Spark internals (tuning, query optimization)
-
· 294 views · 24 applications · 7d
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · English - B2
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer;
• Experience with SQL;
• Experience with Python;
Optional skills (as a plus):
• Experience with ETL/ELT pipelines;
• Experience with PySpark;
• Experience with Airflow;
• Experience with Databricks;
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions (see the minimal example after this list);
• Work with SQL queries for data extraction and analysis;
• Data analysis and application of data processing algorithms to solve business problems;
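A minimal example of the extract-transform-load loop these bullets describe, assuming a PostgreSQL source and hypothetical table and column names:

```python
# Tiny ETL illustration: extract with SQL, transform with pandas, load to Parquet.
# The query, table, and output path are hypothetical; to_parquet requires pyarrow.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@host:5432/shop")

# Extract: pull only the columns needed for the report
orders = pd.read_sql_query(
    "SELECT order_id, customer_id, status, total FROM orders WHERE status = 'paid'",
    engine,
)

# Transform: simple cleanup and a derived column
orders["total"] = orders["total"].astype(float)
orders["is_large_order"] = orders["total"] > 100

# Load: write an analytics-friendly columnar file
orders.to_parquet("paid_orders.parquet", index=False)
```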
We offer:
• Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
• Opportunity to work with a highly skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
-
· 97 views · 12 applications · 22d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - None · MilTech
Who We Are
OpenMinds is a cognitive defence tech company countering authoritarian influence in the battle for free and open societies. We work with over 30 governments and organisations worldwide, including Ukraine, the UK, and NATO member governments, leading StratCom agencies, and research institutions.
Our expertise lies in accessing restricted and high-risk environments, including conflict zones and closed platforms.
We combine ML technologies with deep local expertise. Our team, based in Kyiv, Lviv, London, Ottawa, and Washington, DC, includes behavioural scientists, ML/AI engineers, data journalists, communications experts, and regional specialists.
Our core values are: speed, experimentation, elegance and focus. We are expanding the team and welcome passionate, proactive, and resourceful professionals who are eager to contribute to the global fight in cognitive warfare.
Who we're looking for
OpenMinds is seeking a skilled and curious Data Engineer who's excited to design and build data systems that power meaningful insight. You'll work closely with a passionate team of behavioral scientists and ML engineers on creating a robust data infrastructure that supports everything from large-scale narrative tracking to sentiment analysis.
In the position you will:
- Take ownership of our multi-terabyte data infrastructure, from data ingestion and orchestration to transformation, storage, and lifecycle management
- Collaborate with data scientists, analysts, ML engineers, and domain experts to develop impactful data solutions
- Optimize and troubleshoot data infrastructure to ensure high performance, cost-efficiency, scalability, and resilience
- Stay up-to-date with trends in data engineering and apply modern tools and practices
- Define and implement best practices for data processing, storage, and governance
- Translate complex requirements into efficient data workflows that support threat detection and response (a small BigQuery sketch follows this list)
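One small, hedged example of such a workflow step, assuming the google-cloud-bigquery client and illustrative project, dataset, and table names:

```python
# Minimal sketch: run a transformation in BigQuery and land the result in a
# reporting table. Project, dataset, and table names are assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT source, DATE(published_at) AS day, COUNT(*) AS mentions
    FROM `example-project.raw.posts`
    WHERE published_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY source, day
"""

destination = bigquery.TableReference.from_string("example-project.reporting.weekly_mentions")
job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

client.query(query, job_config=job_config).result()  # block until the job finishes
```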
We are a perfect match if you have:
- 5+ years of hands-on experience as a Data Engineer, with a proven track record of leading complex data projects from design to production
- Highly skilled in SQL and Python for advanced data processing, pipeline development, and optimization
- Deep understanding of software engineering best practices, including SOLID, error handling, observability, performance tuning, and modular architecture
- Ability to write, test and deploy production-ready code
- Extensive experience in database design, data modeling, and modern data warehousing, including ETL orchestration using Airflow or equivalent
- Familiarity with Google Cloud Platform (GCP) and its data ecosystem (BigQuery, GCS, Pub/Sub, Cloud Run, Cloud Functions, Looker)
- Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances
- Experience in DevOps (Docker/K8s, IaC, CI/CD) and MLOps
- Fluent in English with excellent communication and cross-functional collaboration skills
We offer:
- Work in a fast-growing company with proprietary AI technologies, solving the most difficult problems in the domains of social behaviour analytics and national security
- Competitive market salary
- Opportunity to present your work on tier 1 conferences, panels, and briefings behind closed doors
- Work face-to-face with world-leading experts in their fields, who are our partners and friends
- Flexible work arrangements, including adjustable hours, location, and remote/hybrid options
- Unlimited vacation and leave policies
- Opportunities for professional development within a multidisciplinary team, boasting experience from academia, tech, and intelligence sectors
- A work culture that values resourcefulness, proactivity, and independence, with a firm stance against micromanagement
-
· 34 views · 0 applications · 19d
Data Quality Engineer
Office Work · Ukraine (Kyiv) · Product · 3 years of experience · English - None · MilTech
We're building a large-scale data analytics ecosystem powered by Microsoft Azure and Power BI. Our team integrates, transforms, and visualizes data from multiple sources to support critical business decisions. Data quality is one of our top priorities, and we're seeking an engineer who can help us enhance the reliability, transparency, and manageability of our data landscape.
Your responsibilities:
- Develop and maintain data quality monitoring frameworks within the Azure ecosystem (Data Factory, Data Lake, Databricks).
- Design and implement data quality checks, including validation, profiling, cleansing, and standardization (see the check sketch after this list).
- Detect data anomalies and design alerting systems (rules, thresholds, automation).
- Collaborate with Data Engineers, Analysts, and Business stakeholders to define data quality criteria and expectations.
- Ensure high data accuracy and integrity for Power BI reports and dashboards.
- Document data validation processes and recommend improvements to data sources.
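A minimal, framework-agnostic sketch of the kind of checks listed above (completeness, uniqueness, freshness), written with PySpark since the stack includes Databricks; the table name, columns, and thresholds are assumptions.

```python
# Minimal data-quality check sketch (completeness, uniqueness, freshness).
# Table name, columns, and thresholds are illustrative assumptions.
from datetime import datetime, timedelta
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("sales.orders")

total = df.count()
failures = []

# Completeness: customer_id must be populated on at least 99% of rows
non_null = df.filter(F.col("customer_id").isNotNull()).count()
if total == 0 or non_null / total < 0.99:
    failures.append("customer_id completeness below 99%")

# Uniqueness: order_id must not repeat
duplicates = df.groupBy("order_id").count().filter(F.col("count") > 1).count()
if duplicates > 0:
    failures.append(f"{duplicates} duplicated order_id values")

# Freshness: newest record must be less than 24 hours old
latest = df.agg(F.max("updated_at")).first()[0]
if latest is None or latest < datetime.now() - timedelta(hours=24):
    failures.append("no records newer than 24 hours")

if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
```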
Requirements:
- 3+ years of experience in a Data Quality, Data Engineering, or BI Engineering role.
- Hands-on experience with Microsoft Azure services (Data Factory, SQL Database, Data Lake).
- Advanced SQL skills (complex queries, optimization, data validation).
- Familiarity with Power BI or similar BI tools.
- Understanding of DWH principles and ETL/ELT pipelines.
- Experience with data quality frameworks and metrics (completeness, consistency, timeliness).
- Knowledge of Data Governance, Master Data Management, and Data Lineage concepts.
Would be a plus:
- Experience with Databricks or Apache Spark.
- DAX and Power Query (M) knowledge.
- Familiarity with DataOps or DevOps principles in a data environment.
- Experience in creating automated data quality dashboards in Power BI.
-
· 15 views · 0 applications · 16d
IT Infrastructure Administrator
Office Work · Ukraine (Dnipro) · Product · 1 year of experience · English - None
Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.
Key responsibilities:
- Administration of Active Directory
- Managing group policies
- Managing services via PowerShell
- Administration of VMWare platform
- Administration of Azure Active Directory
- Administration of Exchange 2016/2019 mail servers
- Administration of Exchange Online
- Administration of VMWare Horizon View
Required professional knowledge and skills:
- Experience in writing automation scripts (PowerShell, Python, etc.)
- Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
- Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
- Experience with Veeam Backup & Replication, VMWare vSphere (vCenter, DRS, vMotion, HA), VMWare Horizon View
- Windows Server 2019/2025 (installation, configuration, and adaptation)
- Diagnostics and troubleshooting
- Working with anti-spam systems
- Managing mail transport systems (exim) and monitoring systems (Zabbix)
We offer:
- Interesting projects and tasks
- Competitive salary (discussed during the interview)
- Convenient work schedule: Mon-Fri, 9:00-18:00; partial remote work possible
- Official employment, paid vacation, and sick leave
- Probation period: 2 months
- Professional growth and training (internal training, reimbursement for external training programs)
- Discounts on Biosphere Corporation products
- Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)
Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).
Learn more about Biosphere Corporation, our strategy, mission, and values at:
http://biosphere-corp.com/
https://www.facebook.com/biosphere.corporation/
Join our team of professionals!
By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
If your application is successful, we will contact you within 1-2 business days.
-
· 50 views · 2 applications · 3d
Data Engineer
Full Remote · Ukraine · Product · 3 years of experience · English - None
About us:
Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.
About the client:
Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.
About the role:
We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.
You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.
Requirements:
- Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
- NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and datasets, or experience with multilingual data processing, can be an advantage given the project's focus. Understanding of FineWeb2 or a similar processing pipeline approach (see the text-cleaning sketch after this list).
- Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
- Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
- Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
- Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
- Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
- Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
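A small sketch of the text preprocessing implied by these requirements: Unicode normalization, language filtering with langdetect, and hash-based de-duplication. The filtering rules are deliberately simplified assumptions.

```python
# Sketch of basic corpus cleaning: normalize, keep Ukrainian text, drop exact duplicates.
# The rules are simplified; real pipelines add PII removal, quality filters, etc.
import hashlib
import unicodedata

from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException


def clean_corpus(docs):
    seen_hashes = set()
    for text in docs:
        text = unicodedata.normalize("NFC", text).strip()
        if len(text) < 50:            # drop fragments too short to be useful
            continue
        try:
            if detect(text) != "uk":  # keep Ukrainian-language documents only
                continue
        except LangDetectException:
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:     # exact-duplicate removal
            continue
        seen_hashes.add(digest)
        yield text
```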
Nice to have:
- Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
- Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
- CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
- Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
- Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimizing existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve the workflows.
Responsibilities:
- Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
- Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
- Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
- Implementation of NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering of toxic content, de-duplication, de-noising, and detection and deletion of personal data.
- Formation of specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as a teacher.
- Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
- Automate data processing workflows and ensure their scalability and reliability.
- Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles (a minimal DAG skeleton follows this list).
- Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
- Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
- Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
- Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
- Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
- Manage data security, access, and compliance.
- Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
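A skeleton of the Airflow orchestration described above, assuming Airflow 2.x; task bodies are placeholders, and the DAG id, schedule, and step names are assumptions.

```python
# Skeleton Airflow DAG for a collect -> clean -> load text pipeline.
# Task bodies are placeholders; dag_id, schedule, and step names are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def collect_raw_text(**_):
    ...  # e.g., pull new documents from crawlers or external APIs


def clean_and_deduplicate(**_):
    ...  # normalization, language filtering, de-duplication, PII removal


def load_to_warehouse(**_):
    ...  # write curated corpus partitions to the lake/warehouse


with DAG(
    dag_id="ukrainian_corpus_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    collect = PythonOperator(task_id="collect", python_callable=collect_raw_text)
    clean = PythonOperator(task_id="clean", python_callable=clean_and_deduplicate)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    collect >> clean >> load
```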
The company offers:
- Competitive salary.
- Equity options in a fast-growing AI company.
- Remote-friendly work culture.
- Opportunity to shape a product at the intersection of AI and human productivity.
- Work with a passionate, senior team building cutting-edge tech for real-world business use.
-
· 15 views · 0 applications · 11d
Senior Data Engineer
Full Remote · Ukraine · 6 years of experience · English - None
Project Description
The project focuses on the modernization, maintenance, and development of an eCommerce platform for a large US-based retail company, serving millions of omnichannel customers weekly. Solutions are delivered by several Product Teams working on different domains: Customer, Loyalty, Search & Browse, Data Integration, and Cart.
Current key priorities:
- New brands onboarding
- Re-architecture
- Database migrations
- Migration of microservices to a unified cloud-native solution without business disruption
Responsibilities
- Design data solutions for a large retail company.
- Support the processing of big data volumes.
- Integrate solutions into the current architecture.
Mandatory Skills
- Microsoft Azure Data Factory / SSIS
- Microsoft Azure Databricks
- Microsoft Azure Synapse Analytics
- PostgreSQL
- PySpark
Mandatory Skills Description
- 3+ years of hands-on expertise with Azure Data Factory and Azure Synapse.
- Strong expertise in designing and implementing data models (conceptual, logical, physical).
- In-depth knowledge of Azure services (Data Lake Storage, Synapse Analytics, Data Factory, Databricks) and PySpark for scalable data solutions.
- Proven experience in building ETL/ELT pipelines to load data into data lakes/warehouses.
- Experience integrating data from disparate sources (databases, APIs, external providers).
- Proficiency in data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault); see the fact-load sketch after this list.
- Strong SQL skills: complex queries, transformations, performance tuning.
- Experience with metadata and governance in cloud data platforms.
- Certification in Azure/Databricks (advantage).
- Experience with cloud-based analytical databases.
- Hands-on with Azure MI, PostgreSQL on Azure, Cosmos DB, Azure Analysis Services, Informix.
- Experience in Python and Python-based ETL tools.
- Knowledge of Bash/Unix/Windows shell scripting (preferable).
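As a rough sketch of the dimensional-modeling work referenced above, the example below resolves a dimension surrogate key and appends to a Delta fact table; all table and column names are assumptions.

```python
# Sketch: load a fact table in a star schema by resolving dimension surrogate keys.
# Table names, keys, and the target table are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fact-load").getOrCreate()

stage_sales = spark.table("stage.sales")          # raw sales from source systems
dim_customer = spark.table("dw.dim_customer")     # customer_sk, customer_id, ...

fact_sales = (
    stage_sales.alias("s")
    .join(dim_customer.alias("c"), F.col("s.customer_id") == F.col("c.customer_id"), "left")
    .select(
        F.col("c.customer_sk"),
        F.col("s.order_id"),
        F.to_date("s.order_ts").alias("order_date"),
        F.col("s.amount"),
    )
)

fact_sales.write.format("delta").mode("append").saveAsTable("dw.fact_sales")
```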
Nice-to-Have Skills
- Experience with Elasticsearch.
- Familiarity with Docker/Kubernetes.
- Skills in troubleshooting and performance tuning for data pipelines.
- Strong collaboration and communication skills.
Languages
- English: B2 (Upper Intermediate)
-
· 66 views · 0 applications · 3d
Sales Executive (Google Cloud+Google Workspace)
Full Remote · Czechia · Product · 2 years of experience · English - B2
Cloudfresh is a Global Google Cloud Premier Partner, Zendesk Premier Partner, Asana Solutions Partner, GitLab Select Partner, Hubspot Platinum Partner, Okta Activate Partner, and Microsoft Partner.
Since 2017, we've been specializing in the implementation, migration, integration, audit, administration, support, and training for top-tier cloud solutions. Our products focus on cutting-edge cloud computing, advanced location and mapping, seamless collaboration from anywhere, unparalleled customer service, and innovative DevSecOps.
We are seeking a dynamic Sales Executive to lead our sales efforts for GCP and GWS solutions across the EMEA and CEE regions. The ideal candidate will be a high-performing A-player with experience in SaaS sales, adept at navigating complex sales environments, and driven to exceed targets through strategic sales initiatives.
Requirements:
- Fluency in English and native Czech is essential;
- At least 2 years of proven sales experience in SaaS/IaaS fields, with a documented history of achieving and exceeding sales targets, particularly in enterprise sales;
- Sales experience on GCP and/or GWS specifically;
- Sales or technical certifications related to Cloud Solutions are advantageous;
- Experience in expanding new markets with outbound activities;
- Excellent communication, negotiation, and strategic planning abilities;
- Proficient in managing CRM systems and understanding their strategic importance in sales and customer relationship management.
Responsibilities:
- Develop and execute sales strategies for GCP and GWS solutions, targeting enterprise clients within the Cloud markets across EMEA and CEE;
- Identify and penetrate new enterprise market segments, leveraging GCP and GWS to improve client outcomes;
- Conduct high-level negotiations and presentations with major companies across Europe, focusing on the strategic benefits of adopting GCP and GWS solutions;
- Work closely with marketing and business development teams to align sales strategies with broader company goals;
- Continuously assess the competitive landscape and customer needs, adapting sales strategies to meet market demands and drive revenue growth.
Work conditions:
- Competitive Salary & Transparent Motivation: Receive a competitive base salary with commission on sales and performance-based bonuses, providing clear financial rewards for your success.
- Flexible Work Format: Work remotely with flexible hours, allowing you to balance your professional and personal life efficiently.
- Freedom to Innovate: Utilize multiple channels and approaches for sales, allowing you the freedom to find the best strategies for success.
- Training with Leading Cloud Products: Access in-depth training on cutting-edge cloud solutions, enhancing your expertise and equipping you with the tools to succeed in an ever-evolving industry.
- International Collaboration: Work alongside A-players and seasoned professionals in the cloud industry. Expand your expertise by engaging with international markets across the EMEA and CEE regions.
- Vibrant Team Environment: Be part of an innovative, dynamic team that fosters both personal and professional growth, creating opportunities for you to advance in your career.
- When applying to this position, you consent to the processing of your personal data by CLOUDFRESH for the purposes necessary to conduct the recruitment process, in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 (GDPR).
- Additionally, you agree that CLOUDFRESH may process your personal data for future recruitment processes.
-
· 24 views · 3 applications · 25d
CloudOps Engineer
Full Remote · EU · Product · 4 years of experience · English - B1
We are looking for a CloudOps Engineer to join our teams!
Requirements:
- 4+ years of experience with DevOps practices
- 3+ years of experience in public cloud platforms (AWS, GCP, GCore, etc.)
- Strong knowledge of Linux architecture and systems implementation
- Strong knowledge of IaC approach (Ansible, Terraform)
- Strong scripting skills in Bash, Python, or other automation languages
- Strong knowledge of cloud-based approaches
- Knowledge of Kubernetes management
- Good understanding of networking concepts and protocols
- Experience in microservices architecture, distributed systems, and scaling production environments.
- Experience/awareness of automated DevOps activities, concepts, and toolsets.
- Experience with AWS Control Tower, Config, IAM and other technologies that enable high-level administration
- Experience building and maintaining CI/CD pipelines using tools like GitLab/GitHub CI
- Experience with AWS CloudWatch, GCP Cloud Monitoring, Prometheus, Grafana for monitoring and log aggregation
- Problem-solving and troubleshooting skills, ability to analyze complex systems and identify the causes of problems
- Preferable experience with GCP Cloud Resource management, IAM, Organization policies, and other technologies that enable high-level administration
Will be a plus:
- AWS Certified SysOps Administrator
- AWS Certified DevOps Engineer
- GCP Certified Cloud Engineer
- GCP Certified Cloud DevOps Engineer
- Similar Public Cloud certificates
Soft Skills:
- Team player
- Critical Thinking
- Good communicator
- Open to challenges and new opportunities
- Thirst for knowledge
- Time Management
Responsibilities:
- Support and evolution of the current public cloud infrastructure
- Automating repetitive tasks and processes in public cloud infrastructure (see the sketch after this list)
- Automation and improvement of current processes related to the administration and support of public clouds
- Implementation of new providers of public cloud services
- Collaborate with cross-functional teams to define cloud strategies, governance, and best practices.
- Conduct architectural assessments and provide recommendations for optimizing existing public cloud environments
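A small example of the "automating repetitive tasks" responsibility, assuming boto3 and an illustrative tagging policy; the region and tag name are assumptions.

```python
# Small automation sketch: report EC2 instances missing a required "Owner" tag.
# The region and the tag policy are illustrative assumptions.
import boto3

REQUIRED_TAG = "Owner"

ec2 = boto3.client("ec2", region_name="eu-central-1")

untagged = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                untagged.append(instance["InstanceId"])

print(f"Instances missing the {REQUIRED_TAG} tag: {untagged}")
```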
Our benefits to you:
- An exciting and challenging job in a fast-growing holding, the opportunity to be part of a multicultural team of top professionals in Development, Architecture, Management, Operations, Marketing, Legal, Finance and more
- Great working atmosphere with passionate experts and leaders, sharing a friendly culture and a success-driven mindset is guaranteed
- Modern corporate equipment based on macOS or Windows and additional equipment are provided
- Paid vacations, sick leave, personal events days, days off
- Referral program: enjoy cooperation with your colleagues and get the bonus
- Educational programs: regular internal training sessions, compensation for external education, attendance of specialized global conferences
- Rewards program for mentoring and coaching colleagues
- Free internal English courses
- In-house Travel Service
- Multiple internal activities: online platform for employees with quests, gamification, presents and news, PIN-UP clubs for movie / book / pets lovers and more
- Other benefits could be added based on your location
-
· 86 views · 20 applications · 25d
Senior AI Engineer
Full Remote · Worldwide · Product · 4 years of experience · English - B2
We need an AI Engineer who can work autonomously on complex agentic AI systems while contributing to our growing AI practice. You'll work directly on client projects involving LLM agents, RAG systems, and AI model integration - not just building prototypes, but production systems that scale.
This isn't a research role. This is hands-on engineering: building, deploying, iterating, and supporting AI applications that solve real business problems.
What You'll Actually Do
Core Responsibilities (70% of time)
- Build agentic AI systems using LangChain, LangGraph, or similar frameworks
- Implement and optimize RAG pipelines with vector databases (PGVector, Pinecone, etc.) - see the retrieval sketch after this list
- Integrate multiple LLM providers (OpenAI, Google Gemini, Anthropic Claude)
- Fine-tune models when off-the-shelf solutions don't meet requirements
- Develop Python backends (FastAPI/Flask) that power AI applications
- Write production-quality code with tests, documentation, and proper error handling
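A stripped-down sketch of the retrieval step of such a RAG pipeline over Postgres with pgvector, using the OpenAI SDK and raw SQL rather than a full agent framework; the table, column, and model names are assumptions.

```python
# Stripped-down RAG retrieval sketch over Postgres + pgvector.
# Table/column names, the embedding model, and the prompt format are assumptions.
import psycopg2
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")


def answer(question: str) -> str:
    # 1) Embed the query
    emb = client.embeddings.create(model="text-embedding-3-small", input=question)
    query_vector = emb.data[0].embedding
    vector_literal = "[" + ",".join(str(x) for x in query_vector) + "]"

    # 2) Retrieve the nearest chunks by cosine distance (pgvector's <=> operator)
    with conn.cursor() as cur:
        cur.execute(
            "SELECT content FROM document_chunks ORDER BY embedding <=> %s::vector LIMIT 4",
            (vector_literal,),
        )
        context = "\n\n".join(row[0] for row in cur.fetchall())

    # 3) Ask the model to answer using only the retrieved context
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content
```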
Technical Ownership (20% of time)
- Design AI system architectures for new client projects
- Make build/buy decisions on AI tooling and infrastructure
- Conduct technical discovery with clients to understand requirements
- Estimate project complexity and technical feasibility
Team Collaboration (10% of time)
- Code reviews for junior AI engineers
- Knowledge sharing on AI patterns and best practices
- Client communication on technical approaches and trade-offs
Our Current Tech Stack
AI/ML:
- LangChain, LangGraph for agentic systems
- OpenAI GPT-4, Google Gemini, Anthropic Claude
- PGVector for embeddings and retrieval
- Fine-tuning pipelines (OpenAI, Gemini)
Backend:
- Python 3.11+ (FastAPI primary, Flask acceptable)
- PostgreSQL with PGVector extension
- Google Cloud Run, Cloud Functions
Infrastructure:
- Google Cloud Platform (BigQuery, Cloud SQL, Cloud Run, Vertex AI)
- Firebase/Supabase for user management
- Docker for containerization
Frontend Integration:
- REST APIs consumed by React/TypeScript frontends
- Real-time capabilities (WebSockets, Server-Sent Events)
Development:
- Git/GitHub for version control
- Linear for project management
- Bolt.new for rapid prototyping (you won't use this much)
Must-Have Skills
Technical Requirements
- 4+ years of professional Python development - You've built production systems, not just scripts. You understand design patterns, testing strategies, and code that scales.
- 2+ years building production AI/LLM applications - Real systems serving real users, not just experiments or prototypes. You've handled model deployment, monitoring, and iteration based on production feedback.
- Expert-level Python engineering - You write clean, maintainable, testable code. You can explain the difference between "staticmethod" and "classmethod" and actually care about it. You've debugged memory leaks, optimized performance bottlenecks, and know when to use async/await.
- Deep LangChain/LangGraph experience - You've built multi-agent systems with complex orchestration. You understand the framework's internals well enough to work around its limitations. You've debugged agent loops, optimized token usage, and handled error states gracefully.
- RAG implementation expertise - You've built RAG systems that actually work in production. You understand chunking strategies, embedding models, hybrid search, re-ranking, and when RAG isn't the right solution. You've tuned retrieval quality beyond the tutorial examples.
- Vector database expertise - PGVector, Pinecone, Weaviate, or equivalent. You've designed schemas, optimized queries, and handled millions of embeddings. You understand the trade-offs between different distance metrics.
- Production LLM integration - OpenAI, Anthropic, Google Gemini in real applications. You've handled rate limiting, cost optimization, prompt engineering, context management, and fallback strategies. You know how to debug hallucinations and improve consistency.
- Backend API development - FastAPI or Flask in production. You've designed RESTful APIs, handled authentication, managed database connections, implemented error handling, and written API documentation. You understand HTTP status codes and when to use which one (a minimal endpoint sketch follows this list).
- SQL and database design - PostgreSQL specifically. You write efficient queries, design normalized schemas, use indexes appropriately, and understand when to denormalize. You've debugged slow queries and optimized database performance.
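A minimal FastAPI endpoint of the shape such backends expose, with request validation and explicit error handling; the route, schema, and placeholder answer are illustrative assumptions.

```python
# Minimal FastAPI sketch of an AI-backend endpoint with request validation
# and explicit error handling. Route, schema, and the canned answer are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class AskRequest(BaseModel):
    question: str


class AskResponse(BaseModel):
    answer: str


@app.post("/ask", response_model=AskResponse)
def ask(req: AskRequest) -> AskResponse:
    if not req.question.strip():
        raise HTTPException(status_code=422, detail="Question must not be empty")
    # In a real service this would call the RAG/agent layer (see the sketch above).
    return AskResponse(answer=f"You asked: {req.question}")
```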
Working Style Requirements
- Self-managed QA mindset - You write unit tests, integration tests, and think about edge cases before they become production bugs. You don't need a QA engineer to tell you something doesn't work.
- Pragmatic problem-solver - You prefer "simplest thing that works" over perfect solutions. You know when to refactor and when to ship. You understand technical debt is a tool, not a failure.
- Comfortable with ambiguity - Client requirements evolve mid-project. You adapt architecture without rebuilding from scratch. You ask clarifying questions but don't wait for perfect specifications before starting.
- Strong technical communicator - You explain technical trade-offs to non-technical stakeholders. You write clear documentation. You give useful code review feedback. You can justify your architectural decisions.
- European timezone - Core hours overlap with UK (GMT/BST). You're available for morning standups and client calls during UK business hours.
-
· 27 views · 5 applications · 2d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B1
GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.
Requirements
- Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
- Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
- Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
- Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
- Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
- Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
- Strong understanding of data modeling, schema design, and database performance optimization
- Practical experience working with various file formats, including JSON, Parquet, and ORC
- Familiarity with machine learning and AI integration within the data platform context
- Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
- Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
- Strong analytical and problem-solving skills with attention to detail
- Excellent teamwork and communication skills
- Upper-Intermediate English (spoken and written)
Job responsibilities
- Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
- Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data (a CDC merge sketch follows this list)
- Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
- Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
- Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
- Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
- Design and maintain data models and schemas optimized for analytical and operational workloads
- Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
- Participate in architecture discussions, backlog refinement, estimation, and sprint planning
- Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
- Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
- Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
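A short sketch of the CDC upsert pattern these responsibilities describe, using the Delta Lake MERGE API; table names, the key column, and the change-flag convention are assumptions.

```python
# Sketch: apply a CDC batch to a Delta table with MERGE (upsert + delete).
# Table names, the key column, and the _op change-flag convention are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdc-merge").getOrCreate()

changes = spark.table("staging.customer_changes")   # incoming CDC rows with an _op flag
target = DeltaTable.forName(spark, "lakehouse.customers")

(target.alias("t")
    .merge(changes.alias("c"), "t.customer_id = c.customer_id")
    .whenMatchedDelete(condition="c._op = 'D'")
    .whenMatchedUpdateAll(condition="c._op <> 'D'")
    .whenNotMatchedInsertAll(condition="c._op <> 'D'")
    .execute())
```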
-
· 32 views · 1 application · 5d
Middle/Senior/Lead Python Cloud Engineer (IRC280058)
Hybrid Remote · Ukraine · 5 years of experience · English - B2
Job Description
• Terraform
• AWS Platform: Working experience with AWS services - in particular serverless architectures (S3, RDS, Lambda, IAM, API Gateway, etc.) supporting API development in a microservices architecture (see the handler sketch after this list)
• Programming Languages: Python (strong programming skills)
• Data Formats: Experience with JSON, XML, and other relevant data formats
• CI/CD Tools: Experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools
• Scripting and Automation: Experience in scripting languages such as Python, PowerShell, etc.
• Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.
• Source Code Management: Expertise with git commands and associated VCS (GitLab, GitHub, Gitea, or similar)
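A minimal example of the serverless API pattern named above: an AWS Lambda handler behind API Gateway (proxy integration); the payload shape and bucket name are illustrative assumptions.

```python
# Minimal AWS Lambda handler behind API Gateway (proxy integration).
# The payload shape and the bucket name are illustrative assumptions.
import json

import boto3

s3 = boto3.client("s3")


def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    patient_id = body.get("patient_id")
    if not patient_id:
        return {"statusCode": 400, "body": json.dumps({"error": "patient_id is required"})}

    # Persist the request payload; a real service would validate and route it further.
    s3.put_object(
        Bucket="example-intake-bucket",
        Key=f"requests/{patient_id}.json",
        Body=json.dumps(body).encode("utf-8"),
    )
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}
```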
NICE TO HAVE
• Strongly Preferred: Infrastructure as Code: Experience with Terraform and CloudFormation - Proven ability to write and manage Infrastructure as Code (IaC)
• Documentation: Experience with markdown and, in particular, Antora for creating technical documentation
• Experience working with Healthcare Data, including HL7v2, FHIR, and DICOM
• FHIR and/or HL7 Certifications
• Building software classified as Software as a Medical Device (SaMD)
• Understanding of EHR technologies such as EPIC, Cerner, etc.
• Experience in implementing enterprise-grade cyber security & privacy by design into software products
• Experience working in Digital Health software
• Experience developing global applications
• Strong understanding of SDLC - Waterfall & Agile methodologies
• Software estimation
• Experience leading software development teams onshore and offshore
Job Responsibilities
• Develops, documents, and configures systems specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.
• Involved in planning of system and development deployment, as well as responsible for meeting compliance and security standards.
• API development using AWS services
• Experience with Infrastructure as Code (IaC)
• Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
• May document testing and maintenance of system updates, modifications, and configurations.
• May act as a liaison with key technology vendor technologists or other business functions.
• Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.
• Leverages platform process expertise to assess whether existing standard platform functionality will solve a business problem or a customization solution would be required.
• Test the quality of a product and its ability to perform a task or solve a problem.
• Perform basic maintenance and performance optimization procedures in each of the primary operating systems.
• Ability to document detailed technical system specifications based on business system requirements
• Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001, etc.)
Department/Project Description
The Digital Health organization is a technology team that focuses on next-generation Digital Health capabilities, which deliver on the Medicine mission and vision to deliver Insight Driven Care. This role will operate within the Digital Health Applications & Interoperability subgroup of the broader Digital Health team, focused on patient engagement, care coordination, AI, healthcare analytics & interoperability amongst other advanced technologies which enhance our product portfolio with new services, while improving clinical & patient experiences.
Authorization and Authentication platform & services for Digital Health
Secure cloud platform for storing and managing medical images (DICOM compliant). Leverages AWS for cost-effective storage and access, integrates with existing systems (EHR, PACS), and offers a customizable user interface.
-
· 57 views · 12 applications · 11d
Senior Data Engineer
Full Remote · Worldwide · 4 years of experience · English - B2
We're currently looking for a Senior Data Engineer for a long-term project, with an immediate start.
The role requires:
- Databricks certification (mandatory)
- Solid hands-on experience with Spark
- Strong SQL (Microsoft SQL Server) knowledge
The project involves the migration from Microsoft SQL Server to Databricks, along with data-structure optimization and enhancements.
-
· 28 views · 0 applications · 2d
Data Engineer (with Azure)
Full Remote · EU · 3 years of experience · English - B1
Main Responsibilities:
The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline to support customer requirements.
You will work on cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, in implementation projects for corporate clients across the EU, CIS, the United Kingdom, and the Middle East.
Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.
Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and model optimization.
Mandatory Requirements:
- 3+ years of experience, ideally within a Data Engineer role.
- Understanding of data modeling, data warehousing concepts, and ETL processes
- 2+ years of experience with Azure Cloud technologies
- Experience in distributed computing principles and familiarity with key architectures; broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)
- Understanding of landing, staging area, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)
- SQL skills
- Communication and interpersonal skills
- English - B2
It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, advanced analytics projects, and/or professional certification in data & analytics.
We offer:
- Professional growth and international certification
- Free-of-charge technical and business trainings and the best bootcamps (worldwide, including HQ Microsoft Redmond courses)
- Innovative data & analytics projects, practical experience with cutting-edge Azure data & analytics technologies at various customers' projects
- Great compensation and individual bonus remuneration
- Medical insurance
- Long-term employment
- Individual development plan