Jobs

  • · 43 views · 6 applications · 9d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product 🇺🇦

    We are Boosta – an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.

    We're looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you'll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.

    Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You'll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.
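
    Purely as an illustration of the kind of automated validation described above (null tracking, duplicate detection, schema drift), here is a minimal, framework-free Python sketch; the expected schema, column names, and thresholds are hypothetical:

    ```python
    # Illustrative only: simple standalone data-quality checks (nulls, duplicates,
    # schema drift). Column names and the 5% null threshold are hypothetical.
    import pandas as pd

    EXPECTED_SCHEMA = {"partner_id": "int64", "clicks": "int64", "revenue": "float64"}

    def validate(df: pd.DataFrame) -> list[str]:
        """Return a list of human-readable data-quality issues found in df."""
        issues = []
        # Schema drift: columns added, removed, or with an unexpected dtype.
        for col, dtype in EXPECTED_SCHEMA.items():
            if col not in df.columns:
                issues.append(f"missing column: {col}")
            elif str(df[col].dtype) != dtype:
                issues.append(f"dtype drift in {col}: {df[col].dtype} != {dtype}")
        for col in df.columns:
            if col not in EXPECTED_SCHEMA:
                issues.append(f"unexpected column: {col}")
        # Null / missing value tracking.
        null_rates = df.isna().mean()
        issues += [f"{c}: {r:.1%} nulls" for c, r in null_rates.items() if r > 0.05]
        # Duplicate detection on the business key.
        if "partner_id" in df.columns and df.duplicated(subset=["partner_id"]).any():
            issues.append("duplicate partner_id rows found")
        return issues
    ```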

     

    WHAT YOU'LL DO

    • Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
    • Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the related stakeholders.
    • Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place (see the illustrative sketch after this list).
    • Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection, using tools such as Great Expectations.
    • Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic.
    • Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
    • Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
    • Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
    • Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
    • Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
    • Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
    • Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.
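
    The orchestration sketch referenced above: a minimal, recent-Airflow DAG with retries and a failure callback for alerting. The DAG id, schedule, and task bodies are placeholders, not the team's actual pipeline:

    ```python
    # Illustrative Airflow sketch: retry and alerting mechanics only.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def alert_on_failure(context):
        # Swap in a Slack/e-mail hook; context carries the failed task instance.
        print(f"Task failed: {context['task_instance'].task_id}")


    default_args = {
        "retries": 3,                              # automatic retry mechanism
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": alert_on_failure,   # alerting hook
    }

    with DAG(
        dag_id="affiliate_ingest",                 # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",
        catchup=False,
        default_args=default_args,
    ) as dag:
        extract = PythonOperator(task_id="extract_partner_api", python_callable=lambda: None)
        load = PythonOperator(task_id="load_bigquery", python_callable=lambda: None)
        extract >> load
    ```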

       

    WHAT WE EXPECT FROM YOU

    • Strong proficiency in SQL and Python.
    • Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
    • Proven ability to design, deploy, and scale ETL/ELT pipelines.
    • Hands-on experience integrating and automating data from various platforms.
    • Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
    • Skilled in orchestration tools like Airflow, Prefect, or Dagster.
    • Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
    • Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
    • Experience with Docker to build isolated and reproducible environments for data workflows.
    • Exposure to iGaming data structures and KPIs is a strong advantage.
    • Strong sense of data ownership, documentation, and operational excellence.

       

    HOW IT WORKS

    • Stage 1: pre-screen with a recruiter.
    • Stage 2: test task.
    • Stage 3: interview.
    • Stage 4: bar-raising.
    • Stage 5: reference check.
    • Stage 6: job offer!

    A trial period for this position is 3 months, during which we will get used to working together.

     

    WHAT WE OFFER

    • 28 business days of paid time off
    • Flexible hours and the possibility to work remotely
    • Medical insurance and mental health care
    • Compensation for courses and trainings
    • English classes and speaking clubs
    • Internal library, educational events
    • Outstanding corporate parties and team buildings

     

  • · 20 views · 0 applications · 3d

    Middle BI/DB Developer

    Office Work · Ukraine (Lviv) · Product · 2 years of experience · Upper-Intermediate

    About us:

    EveryMatrix is a leading B2B SaaS provider delivering iGaming software, content and services. We provide casino, sports betting, platform and payments, and affiliate management to 200 customers worldwide.

    But that's not all! We're not just about numbers, we're about people. With a team of over 1000 passionate individuals spread across twelve countries in Europe, Asia, and the US, we're all united by our love for innovation and teamwork.

    EveryMatrix is a member of the World Lottery Association (WLA) and European Lotteries Association. In September 2023 it became the first iGaming supplier to receive WLA Safer Gambling Certification. EveryMatrix is proud of its commitment to safer gambling and player protection whilst producing market leading gaming solutions.

    Join us on this exciting journey as we continue to redefine the iGaming landscape, one groundbreaking solution at a time.
     

    We are looking for a passionate and dedicated Middle BI/DB Developer to join our team in Lviv!

    About the unit:

    DataMatrix is part of the EveryMatrix platform and is responsible for collecting, storing, processing and utilizing hundreds of millions of transactions from the whole platform every single day. We develop Business Intelligence solutions, reports, 3rd party integrations, data streaming and other products for both external and internal use. The team consists of 35 people and is located in Lviv.

    What You'll get to do:

    • Develop real-time data processing and aggregations
    • Create and modify data marts (enhance our data warehouse)
    • Take care of internal and external integrations
    • Forge various types of reports

    Our main stack:

    • DB: BigQuery, PostgreSQL
    • ETL: Apache Airflow, Apache NiFi
    • Streaming: Apache Kafka (a minimal consumer sketch follows)
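
    As a rough illustration of the streaming side of this stack, a tiny kafka-python consumer that keeps a running aggregation; the topic, broker address, and message fields are hypothetical:

    ```python
    # Illustrative only: consume transaction events and keep per-operator counts.
    import json
    from collections import Counter

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "transactions",                               # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw),
        auto_offset_reset="earliest",
    )

    totals = Counter()
    for message in consumer:
        txn = message.value
        totals[txn.get("operator", "unknown")] += 1   # running aggregation
    ```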

    What You need to know:

    Here's what we offer:

    • Start with 22 days of annual leave, with 2 additional days added each year, up to 32 days by your fifth year with us.
    • Stay Healthy: 10 sick leave days per year, no doctor's note required; 30 medical leave days with medical allowance
    • Support for New Parents:
    • 21 weeks of paid maternity leave, with the flexibility to work from home full-time until your child turns 1 year old.
    • 4 weeks of paternity leave, plus the flexibility to work from home full-time until your child is 13 weeks old.

    Our office perks include on-site massages and frequent team-building activities in various locations.

    Benefits & Perks:

    • Daily catered lunch or monthly lunch allowance.
    • Private Medical Subscription.
    • Access online learning platforms like Udemy for Business, LinkedIn Learning or O'Reilly, and a budget for external training.
    • Gym allowance

    At EveryMatrix, we're committed to creating a supportive and inclusive workplace where you can thrive both personally and professionally. Come join us and experience the difference!

  • · 39 views · 7 applications · 3d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Boosty Labs is one of the most prominent outsourcing companies in the blockchain domain. Among our clients are such well-known companies as Ledger, Consensys, Storj, Animoca brands, Walletconnect, Coinspaid, Paraswap, and others.

    About project: Advanced blockchain analytics and on-the-ground intelligence to empower financial institutions, governments & regulators in the fight against cryptocurrency crime

    • Requirements:
      • 6+ years of experience with Python backend development
      • Solid knowledge of SQL (including writing/debugging complex queries)
      • Understanding of data warehouse principles and backend architecture
      • Experience working in Linux/Unix environments
      • Experience with APIs and Python frameworks (e.g., Flask, FastAPI)
      • Experience with PostgreSQL
      • Familiarity with Docker
      • Basic understanding of unit testing
      • Good communication skills and ability to work in a team
      • Interest in blockchain technology or willingness to learn
      • Experience with CI/CD processes and containerization (Docker, Kubernetes) is a plus
      • Strong problem-solving skills and the ability to work independently
    • Responsibilities:
      • Integrate new blockchains, AMM protocols, and bridges into our platform
      • Build and maintain data pipelines and backend services
      • Help implement new tools and technologies into the system
      • Participate in the full cycle of feature development – from design to release
      • Write clean and testable code
      • Collaborate with the team through code reviews and brainstorming
    • Nice to Have:
      • Experience with Kafka, Spark, or ClickHouse
      • Knowledge of Kubernetes, Terraform, or Ansible
      • Interest in crypto, DeFi, or distributed systems
      • Experience with open-source tools
      • Some experience with Java or readiness to explore it
    • What we offer:
      • Remote working format 
      • Flexible working hours
      • Informal and friendly atmosphere
      • The ability to focus on your work: a lack of bureaucracy and micromanagement
      • 20 paid vacation days
      • 7 paid sick leaves
      • Education reimbursement
      • Free English classes
      • Psychologist consultations
    • Recruitment process:

      Recruitment Interview – Technical Interview

  • · 90 views · 14 applications · 29d

    Data Engineer (6 months, Europe-based)

    Full Remote · EU · 4 years of experience · Upper-Intermediate

    The client is seeking an experienced Data Engineer to build and migrate data solutions to Google Cloud Platform (GCP) in support of data analytics and ML/AI initiatives.

     

    Key responsibilities:

    • Develop data products on GCP using BigQuery and DBT
    • Integrate data from multiple sources using Python and Cloud Functions
    • Orchestrate pipelines with Terraform and Cloud Workflows
    • Collaborate with Solution Architects, Data Scientists, and Software Engineers

     

    Tech stack:
    GCP (BigQuery, Cloud Functions, Cloud Workflows), DBT, Python, Terraform, Git
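
    As an illustration of the "integrate data using Python and Cloud Functions" responsibility above, a hedged sketch of an HTTP-triggered Cloud Function that appends API data to BigQuery; the source URL and table ID are placeholders:

    ```python
    # Illustrative only: pull a source API and load rows into BigQuery.
    import functions_framework
    import requests
    from google.cloud import bigquery

    SOURCE_URL = "https://api.example.com/v1/orders"   # hypothetical source API
    TARGET_TABLE = "my-project.raw.orders"             # hypothetical table

    @functions_framework.http
    def ingest_orders(request):
        rows = requests.get(SOURCE_URL, timeout=30).json()
        client = bigquery.Client()
        job = client.load_table_from_json(rows, TARGET_TABLE)  # async load job
        job.result()                                            # wait for completion
        return f"loaded {len(rows)} rows", 200
    ```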

     

    Requirements:
    Ability to work independently and within cross-functional teams; 
    Strong hands-on experience;
    English: Upper Intermediate or higher

     

    Nice to have:
    Experience with OLAP cubes and PowerBI

     

  • · 54 views · 11 applications · 29d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    📣 Senior Data Engineer | Fintech | Remote | Full-Time

    🧠 Level: Senior
    🗣️ English: Upper-Intermediate or higher
    🕒 Workload: Full-time
    🌍 Location: Fully remote (Preference for time zones close to Israel)
    🕐 Time Zone: CET (Israel)
    🚀 Start Date: ASAP
    📆 Duration: 6+ months


    🧾 About the Client:
    Our client is an innovative fintech company dedicated to optimizing payment transaction success rates. Their advanced technology integrates seamlessly into existing infrastructures, helping payment partners and platforms recover lost revenue by boosting transaction approval rates.

    🔧 Project Stage: Ongoing development


    💼 What You'll Be Doing:

    • Design and implement robust, scalable data pipelines and ETL workflows
    • Develop comprehensive end-to-end data solutions to support analytics, product, and business needs
    • Define data requirements, architect systems, and build reliable data models
    • Integrate backend logic into data processes for actionable insights
    • Optimize system performance, automate processes, and monitor for improvements
    • Collaborate closely with cross-functional teams (Product, Engineering, Data Science)


    🧠 Must-Have Skills:

    • 5+ years of experience in data engineering
    • Deep expertise in building data warehouses and BI ecosystems
    • Strong experience with modern analytical databases (e.g., Snowflake, Redshift)
    • Proficient with data transformation tools (e.g., dbt, Dataform)
    • Familiar with orchestration tools (e.g., Airflow, Prefect)
    • Skilled in Python or Java and advanced SQL (including performance tuning)
    • Experience managing large-scale data systems in cloud environments
    • Infrastructure as code and DevOps mindset


    🀝 Soft Skills:

    • High ownership and accountability
    • Strong communication and collaboration abilities
    • Experience in dynamic, startup-like environments
    • Analytical thinker with a proactive mindset
    • Comfortable working independently
    • Fluent spoken and written English


    🧪 Tech Stack:
    Python or Java, SQL, Snowflake, Redshift, dbt, Dataform, Airflow


    📋 Interview Process:

    1. English Check (15 min)
    2. Technical Interview (1–1.5 hours)
    3. Final Interview (1 hour) – Client
  • · 107 views · 19 applications · 29d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate

    Our long-standing client from the UK is looking for a Senior Data Engineer 

     

    Project: Decommissioning legacy software and systems

     

    Tech stack:
    DBT, Snowflake, SQL, Python, Fivetran

     

    Requirements:

    • Solid experience with CI/CD processes in SSIS
    • Proven track record of decommissioning legacy systems and migrating data to modern platforms (e.g., Snowflake)
    • Experience with AWS (preferred) or Azure
    • Communicative and proactive team player – able to collaborate and deliver
    • Independent and flexible when switching between projects
    • English: Upper Intermediate or higher
  • · 73 views · 20 applications · 29d

    Data Engineer to $4800

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate

    We are currently seeking a skilled Data Engineer to join our team in the development and maintenance of robust data solutions. This role involves building and optimizing data pipelines, managing ETL processes, and supporting data visualization needs for business-critical use cases.
     

    As part of your responsibilities, you will design and implement cloud infrastructure on AWS using AWS CDK in Python, contribute to solution architecture, and develop reusable components to streamline delivery across projects. You will also implement data quality checks and design scalable data models leveraging both SQL and NoSQL technologies.
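
    For flavour, a minimal AWS CDK (Python, v2) sketch of the kind of infrastructure work described above; the stack and bucket names are hypothetical, not the project's real resources:

    ```python
    # Illustrative only: a tiny CDK stack with one versioned raw-data bucket
    # and a one-year expiration lifecycle rule.
    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3

    class DataPlatformStack(cdk.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            s3.Bucket(
                self,
                "RawDataBucket",                     # hypothetical bucket construct
                versioned=True,
                lifecycle_rules=[
                    s3.LifecycleRule(expiration=cdk.Duration.days(365))
                ],
            )

    app = cdk.App()
    DataPlatformStack(app, "data-platform")          # hypothetical stack name
    app.synth()
    ```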

     

    Project details:

    • Start: ASAP
    • Duration: Until 31.12.2026
    • Location: Remote
    • Language: English


    Responsibilities:

    • Develop, monitor, and maintain efficient ETL pipelines and data workflows
    • Build infrastructure on AWS using AWS CDK (Python)
    • Design and implement reusable data engineering components and frameworks
    • Ensure data quality through validation, testing, and monitoring mechanisms
    • Contribute to solution architecture and technical design
    • Create and optimize scalable data models in both SQL and NoSQL databases
    • Collaborate with cross-functional teams including data scientists, analysts, and product owners

     

    Requirements:

    • Solid experience in building and maintaining ETL pipelines
    • Hands-on experience with data visualization tools or integrations (e.g., Tableau, Power BI, or custom dashboards via APIs)
    • Strong working knowledge of AWS services, especially with AWS CDK (Python)
    • Good understanding of SQL and NoSQL database technologies
    • Familiarity with version control systems (e.g., Git)
    • Experience working in Agile environments
    • Strong communication skills and ability to work autonomously in remote teams
  • · 64 views · 3 applications · 27d

    Senior Data Engineer (Python) to $8000

    Full Remote · Ukraine, Poland, Bulgaria, Portugal · 8 years of experience · Upper-Intermediate

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.

    About the Product:
    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

    About the Role:
    As a Data Engineer, you will operate at the intersection of data engineering, software engineering, and system architecture. This is a high-impact, cross-functional role where you'll take end-to-end ownership – from designing scalable infrastructure and writing robust, production-ready code to ensuring the reliability and performance of our systems in production.

    Key Responsibilities:

    • Collaborate closely with software architects and DevOps engineers to evolve our AI training, inference, and delivery architecture and deliver resilient, scalable, production-grade machine learning pipelines.
    • Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
    • Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
    • Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs, and alerts (a minimal sketch follows this list).
    • Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
    • Represent the data science team's needs in cross-functional technical discussions and solutions design.
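
    The monitoring sketch referenced above: running a daily Athena KPI query with boto3 and returning its terminal state so a scheduler can alert on failures. The database, table, and S3 output location are hypothetical:

    ```python
    # Illustrative only: submit an Athena query and poll for its final state.
    import time

    import boto3

    athena = boto3.client("athena", region_name="eu-west-1")

    def run_kpi_query() -> str:
        resp = athena.start_query_execution(
            QueryString="SELECT count(*) AS rows_today FROM metrics.predictions "
                        "WHERE dt = current_date",
            QueryExecutionContext={"Database": "metrics"},
            ResultConfiguration={"OutputLocation": "s3://my-athena-results/kpi/"},
        )
        query_id = resp["QueryExecutionId"]
        # Poll until the query finishes; a real pipeline would use a sensor or callback.
        while True:
            state = athena.get_query_execution(QueryExecutionId=query_id)[
                "QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                return state
            time.sleep(2)
    ```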

    Required Competence and Skills:

    • A Bachelor's or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.
    • 8+ years of experience as a data engineer, software engineer, or similar role, with a proven track record of using data to drive business outcomes.
    • Strong Python skills, with experience building modular, testable, and production-ready code.
    • Solid understanding of Databases and SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
    • Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
    • A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
    • Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.

    Nice-to-Haves

    • Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
    • Familiarity with API development frameworks (e.g., FastAPI).
    • Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
    • Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in).

    We provide full accounting and legal support in all countries where we operate.

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

    We offer a highly competitive package with yearly performance and compensation reviews.

  • · 42 views · 7 applications · 24d

    Data Engineer

    Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate

    We are building a next-generation AI-powered platform designed for comprehensive observability of digital infrastructure, including mobile networks and data centers. By leveraging advanced analytics, automation, and real-time monitoring, we empower businesses to optimize performance, enhance reliability, and prevent failures before they happen.

    Our platform delivers deep insights, anomaly detection, and predictive intelligence, enabling telecom operators, cloud providers, and enterprises to maintain seamless connectivity, operational efficiency, and infrastructure resilience in an increasingly complex digital landscape.

    We have offices in Doha, Qatar and Muscat, Oman. This position requires relocation to one of these offices.

    Job Summary
    As a Senior Data Engineer, you will be responsible for building and maintaining end-to-end data infrastructure that powers our AI-driven observability platform. You will work with large-scale datasets, both structured and unstructured, and design scalable pipelines that enable real-time monitoring, analytics, and machine learning. This is a hands-on engineering role requiring deep expertise in data architecture, cloud technologies, and performance optimization.
    Key Responsibilities

    Data Pipeline Development
     

    • Design, develop, and maintain scalable ETL/ELT pipelines from scratch using modern data engineering tools
    • Ingest and transform high-volume data from multiple sources, including APIs, telemetry, and internal systems
    • Write high-performance code to parse and process large files (JSON, XML, CSV, etc.); a minimal streaming sketch follows below
    • Ensure robust data delivery for downstream systems, dashboards, and ML models
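
    The streaming sketch referenced above: processing large newline-delimited JSON and CSV files record by record instead of loading them whole; file paths and the aggregation are hypothetical:

    ```python
    # Illustrative only: constant-memory parsing of large NDJSON / CSV files.
    import csv
    import json
    from collections import Counter
    from pathlib import Path

    def count_events_ndjson(path: Path) -> Counter:
        counts = Counter()
        with path.open() as fh:                 # one JSON record per line
            for line in fh:
                event = json.loads(line)
                counts[event.get("type", "unknown")] += 1
        return counts

    def count_events_csv(path: Path) -> Counter:
        counts = Counter()
        with path.open(newline="") as fh:
            for row in csv.DictReader(fh):      # streams row by row
                counts[row.get("type", "unknown")] += 1
        return counts
    ```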


    Infrastructure & Optimization
     

    • Build and manage containerized workflows using Docker and Kubernetes
    • Optimize infrastructure for performance, availability, and cost-efficiency
    • Implement monitoring, alerting, and data quality checks across the data pipeline stack


    Collaboration & Best Practices
     

    • Work closely with AI/ML, backend, and platform teams to define and deliver on data requirements
    • Enforce best practices in data modeling, governance, and engineering
    • Participate in CI/CD processes, infrastructure automation, and documentation

    Required Qualifications

    Experience
     

    • 4+ years of hands-on experience in data engineering or similar backend roles
    • Proven experience designing and deploying production-grade data pipelines from scratch


    Technical Skills
     

    • Proficiency in Python or Scala for data processing
    • Deep knowledge of SQL and NoSQL systems (e.g., MongoDB, DynamoDB, Cassandra, Firebase)
    • Hands-on experience with cloud platforms (AWS, GCP, or Azure)
    • Familiarity with data tools like Apache Spark, Airflow, Kafka, and distributed systems
    • Experience with CI/CD practices and DevOps for data workflows


    Soft Skills
     

    • Excellent communication skills and the ability to work independently in a fast-paced environment
    • Strong analytical mindset and attention to performance, scalability, and system reliability

    Preferred Qualifications
     

    • Background in the telecom or IoT industry
    • Certifications in cloud platforms or data technologies
    • Experience with real-time streaming, event-driven architectures, or ML/Ops
    • Familiarity with big data ecosystems (e.g., Hadoop, Cloudera)
    • Knowledge of API development or experience with Flask/Django
    • Experience setting up A/B test infrastructure and experimentation pipelines

    Nice to have 
     

    Experience with the integration and maintenance of vector databases (e.g., Pinecone, Weaviate, Milvus, Qdrant) to support LLM workflows including embedding search, RAG, and semantic retrieval.

     
     What We Offer
     

    • Performance-Based Compensation: Tied to achieving and exceeding performance targets, with accelerators for surpassing goals
    • Shares and Equity: Participation in our Employee Stock Option Plan (ESOP)
    • Growth Opportunities: Sponsored courses, certifications, and continuous learning programs
    • Comprehensive Benefits: Health insurance, pension contributions, and professional development support
    • Annual Vacation: Generous paid annual leave
    • Dynamic Work Environment: A culture of innovation, collaboration, and creative freedom
    • Impact and Ownership: Shape the future of digital infrastructure and leave your mark
    • Flexible Work Arrangements: Options to work remotely or from our offices
    • A Mission-Driven Team: Join a diverse, passionate group committed to meaningful change
  • · 46 views · 3 applications · 23d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · Upper-Intermediate

    Dataforest is seeking an experienced Senior Data Engineer to join our dynamic team. You will be responsible for developing and maintaining data-processing architecture, as well as optimizing and monitoring our internal systems.
    Requirements:
    - 3+ years of commercial experience with Python.
    - Solid foundational knowledge of ElasticSearch, including:
      • Ability to perform batch updates using bulk operations (a minimal sketch follows this requirements list).
      • Understanding index mapping and how to adapt it for your project's needs.
      • (Nice to have) Some exposure to vector search concepts.
    - Experience working with PostgreSQL databases.
    - Proven experience in setting up and managing monitoring systems with CloudWatch, Prometheus, and Grafana.
    - Profound understanding of algorithms and their complexities, with the ability to analyze and optimize them effectively.
    - Excellent programming skills in Python with a strong emphasis on optimization and code structuring.
    - Solid understanding of ETL principles and best practices.
    - Excellent collaborative and communication skills, with demonstrated ability to mentor and support team members.
    - Experience working with Linux environments, cloud services (AWS), and Docker.
    - Strong decision-making capabilities with the ability to work independently and proactively.
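
    The batch-update sketch referenced above, using the elasticsearch-py bulk helper; the index name, document ids, and fields are hypothetical:

    ```python
    # Illustrative only: partial updates of many documents in one bulk call.
    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import bulk

    es = Elasticsearch("http://localhost:9200")      # hypothetical cluster address

    def bulk_update_prices(price_by_id: dict[str, float]) -> int:
        actions = (
            {
                "_op_type": "update",                # partial update, not reindex
                "_index": "products",                # hypothetical index
                "_id": doc_id,
                "doc": {"price": price},
            }
            for doc_id, price in price_by_id.items()
        )
        success, _errors = bulk(es, actions)
        return success
    ```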

      Will be a plus:
      - Experience in web scraping, data extraction, cleaning, and visualization.
      - Understanding of multiprocessing and multithreading, including process and thread management.
      - Familiarity with Redis.
      - Experience with Flask / Flask-RESTful for API development.
      - Knowledge and experience with Kafka.

      Key Responsibilities:
      - Develop and maintain a robust data processing architecture using Python.
      - Effectively utilize ElasticSearch and PostgreSQL for efficient data management.
      - Design and manage data pipelines using Kafka and SQS.
      - Optimize code structure and performance for maximum efficiency.
      - Design and implement efficient ETL processes.
      - Analyze and optimize algorithmic solutions for better performance and scalability.
      - Collaborate within the AWS stack to ensure flexible and reliable data processing systems.
      - Provide mentorship and guidance to colleagues, fostering a collaborative and supportive team environment.
      - Independently make decisions related to software architecture and development processes to drive the project forward.

      We offer:

    - Great networking opportunities with international clients, challenging tasks;

    - Building interesting projects from scratch using new technologies;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities and corporate events.

  • · 17 views · 0 applications · 22d

    Senior Data Engineer

    Full Remote · Ukraine · 7 years of experience · Upper-Intermediate

    Project description

    We are hiring a Senior Data Engineer. Our client team consists of frontend and backend developers, data engineers, data scientists, QA engineers, cloud engineers, and project managers.

    Responsibilities

    Participate in requirements clarification and sprint planning sessions.

    Design technical solutions and implement them, including ETL pipelines

    Build robust data pipelines in PySpark to extract, transform, and load data

    Optimize ETL processes: enhance and tune existing ETL processes for better performance, scalability, and reliability

    Writing unit and integration tests.

    Support QA teammates in the acceptance process.

    Resolving PROD incidents as a 3rd line engineer.

    Skills

    Must have

    Min. 7 years of experience in IT/Data

    Bachelor's degree in IT or a related field.

    Exceptional logical reasoning and problem-solving skills

    Programming: Proficiency in PySpark for distributed computing and Python for ETL development.

    SQL: Strong expertise in writing and optimizing complex SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.

    Data Warehousing: Experience working with data warehousing concepts and platforms, ideally Databricks

    ETL Tools: Familiarity with ETL tools & processes

    Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.

    Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.

    Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.

    Data Quality Tools: Experience implementing data validation, cleansing, and quality framework

    Nice to have

    Understanding of Investment Data domain.

     

    Languages

    English: B2 Upper Intermediate

  • · 56 views · 8 applications · 22d

    Data Engineer

    Full Remote · Worldwide · 4 years of experience · Upper-Intermediate

    At Uvik Software, we are looking for a talented 🔎 Data Engineer 🔎 to join our team. If you are passionate about data, cloud technologies, and building scalable solutions, this role is for you!

    You will work on designing, developing, and optimizing data pipelines, implementing machine learning models, and leveraging cloud platforms like AWS (preferred), Azure, or GCP. You'll collaborate with cross-functional teams to transform raw data into actionable insights, enabling smarter business decisions.
     

    📊 Key Responsibilities:
     

    • Develop and maintain scalable ETL/ELT pipelines for data processing.
    • Design and optimize data warehouses and data lakes on AWS, Azure, or GCP.
    • Implement machine learning models and predictive analytics solutions.
    • Work with structured and unstructured data, ensuring data quality and integrity.
    • Optimize query performance and data processing workflows.
    • Collaborate with software engineers, analysts, and business stakeholders to deliver data-driven solutions.


    📈 Requirements:
     

    • 4+ years of experience as a Data Engineer.
    • Strong proficiency in SQL and experience with relational and NoSQL databases.
    • Hands-on experience with cloud services: AWS (preferred), Azure, or GCP.
    • Proficiency in Python or Scala for data processing.
    • Experience with Apache Spark, Kafka, Airflow, or similar tools.
    • Solid understanding of data modeling, warehousing, and big data processing frameworks.
    • Experience with machine learning frameworks (TensorFlow, Scikit-learn, PyTorch) is a plus.
    • Familiarity with DevOps practices, CI/CD pipelines, and Infrastructure as Code (Terraform, CloudFormation) is an advantage.
  • · 27 views · 0 applications · 22d

    Senior Data Engineer

    Full Remote · Ukraine · 7 years of experience · Upper-Intermediate
    • Project Description:

      We are hiring a Senior Data Engineer. Our client team consists of frontend and backend developers, data engineers, data scientists, QA engineers, cloud engineers, and project managers.
       

    • Responsibilities:

      • Participate in requirements clarification and sprint planning sessions.
      • Design technical solutions and implement them, including ETL pipelines: build robust data pipelines in PySpark to extract, transform, and load data.
      • Optimize ETL processes: enhance and tune existing ETL processes for better performance, scalability, and reliability.
      • Write unit and integration tests.
      • Support QA teammates in the acceptance process.
      • Resolve PROD incidents as a 3rd line engineer.
       

    • Mandatory Skills Description:

      * Min. 7 years of experience in IT/Data
      * Bachelor's degree in IT or a related field.
      * Exceptional logical reasoning and problem-solving skills
      * Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
      * SQL: Strong expertise in writing and optimizing complex SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
      * Data Warehousing: Experience working with data warehousing concepts and platforms, ideally Databricks
      * ETL Tools: Familiarity with ETL tools & processes
      * Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
      * Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
      * Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
      * Data Quality Tools: Experience implementing data validation, cleansing, and quality framework
       

    • Nice-to-Have Skills Description:

      Understanding of Investment Data domain.

  • · 25 views · 1 application · 22d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    N-iX is looking for a Senior Data Engineer (with Data Science/MLOps experience) to join our team!

    Our client: a global biopharmaceutical company.

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.
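
    As a generic illustration of the pipeline work described above (not Palantir Foundry-specific), a short PySpark sketch that reads raw events, cleans them, and writes an analytics-ready table; paths and column names are hypothetical:

    ```python
    # Illustrative only: a small PySpark ETL step.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_pipeline").getOrCreate()

    raw = spark.read.parquet("s3://raw/events/")            # hypothetical input
    clean = (
        raw.dropDuplicates(["event_id"])                    # de-duplicate on business key
           .withColumn("event_date", F.to_date("event_ts")) # derive partition column
           .filter(F.col("event_date").isNotNull())
    )
    clean.write.mode("overwrite").partitionBy("event_date").parquet("s3://curated/events/")
    ```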

     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
    • Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
    • Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
    • Assist in optimizing data pipelines to improve machine learning workflows.
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.

       

    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript

       

    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Familiarity with ML Ops concepts, including model deployment and monitoring.
    • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch.
    • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
    • Experience working with feature engineering and data preparation for machine learning models.
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities.
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.

       

    Nice to have:

    • Certification in Cloud platforms, or related areas;
    • Experience with search engine Apache Lucene, Web Service Rest API;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • · 25 views · 0 applications · 21d

    Senior Data Engineer (Data Science/MLOps Background)

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    Our Client is seeking a proactive Senior Data Engineer to join their team.

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry.

    Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives.

    The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.

     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
    • Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
    • Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
    • Assist in optimizing data pipelines to improve machine learning workflows.
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.

     

    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript

     

    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Familiarity with ML Ops concepts, including model deployment and monitoring.
    • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch.
    • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
    • Experience working with feature engineering and data preparation for machine learning models.
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities.
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.

     

    Nice to have:

    • Certification in Cloud platforms, or related areas;
    • Experience with search engine Apache Lucene, Web Service Rest API;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

     

    Company offers:

    • Flexible working format – remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings