Jobs

  • 19 views · 1 application · 5d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    N-iX is looking for a Senior Data Engineer (with Data Science/MLOps experience) to join our team!

    Our client: a global biopharmaceutical company.

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives. The ideal candidate will possess a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.

     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
    • Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
    • Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
    • Assist in optimizing data pipelines to improve machine learning workflows.
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.

       

    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript
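
    For illustration, a hedged PySpark sketch of the kind of pipeline work described above. The dataset paths and column names are invented for the example, and inside Palantir Foundry similar logic would typically be wrapped in Foundry's transforms API rather than a standalone SparkSession:

        # Hypothetical ingest-and-clean step; paths and columns are illustrative only.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("example_ingest").getOrCreate()

        raw = spark.read.parquet("s3://example-bucket/raw/lab_results/")  # assumed source
        cleaned = (
            raw.dropDuplicates(["sample_id"])                             # de-duplicate records
               .withColumn("result_value", F.col("result_value").cast("double"))
               .filter(F.col("result_value").isNotNull())                 # drop unparseable rows
        )
        cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/lab_results/")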

       

    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Familiarity with MLOps concepts, including model deployment and monitoring;
    • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch;
    • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
    • Experience working with feature engineering and data preparation for machine learning models;
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.

       

    Nice to have:

    • Certification in cloud platforms or related areas;
    • Experience with the Apache Lucene search engine and REST web service APIs;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 15 views · 0 applications · 4d

    Senior Data Engineer (Data Science/MLOps Background)

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    Our Client is seeking a proactive Senior Data Engineer to join their team.

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry.

    Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives.

    The ideal candidate will possess a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
    • Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
    • Support the implementation of basic MLOps practices, such as model versioning and monitoring.
    • Assist in optimizing data pipelines to improve machine learning workflows.
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.

    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript

    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Familiarity with MLOps concepts, including model deployment and monitoring;
    • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch;
    • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
    • Experience working with feature engineering and data preparation for machine learning models;
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.

    Nice to have:

    • Certification in cloud platforms or related areas;
    • Experience with the Apache Lucene search engine and REST web service APIs;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

    Company offers:

    • Flexible working format – remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
  • 67 views · 16 applications · 4d

    Data Engineer

    Countries of Europe or Ukraine · Product · 1.5 years of experience · Pre-Intermediate

    Data Engineer 

    Genesis is a co-founding company that builds global tech businesses with outstanding entrepreneurs from CEE. We are one of the largest global app developers — products from Genesis companies have been downloaded over 300 million times and are used by tens of millions monthly.

    We're proud to have one of the strongest tech teams in Europe, with our experts regularly recognized among the best IT professionals in CEE and Ukraine.

    We're looking for a Data Engineer who's excited to build something from the ground up and make a real impact on how the Finance team works with data.

    Here's what your day-to-day will look like:

    🛠 Build and Own Our Finance Data Platform. Create and maintain a robust analytical database for the Finance team — you'll be the go-to expert for anything data-related.

    🤝 Collaborate with Stakeholders. Work closely with finance team members and business process owners to understand their data needs and turn them into smart, scalable solutions.

    🚀 Design and Launch Data Pipelines. Build reliable data pipelines to pull in data from various sources — S3, SQL databases, APIs, Google Sheets, CSVs, and more.

    πŸ— Manage Data Infrastructure. Ensure our data systems are well-structured, scalable, reliable, and backed up regularly.

    📊 Deliver Reports & Dashboards. Make sure key stakeholders get the right data at the right time — whether it's for regular reports or one-off deep dives.

    βš™οΈ Automate Manual Work. Help move the Finance team away from Excel by automating repetitive tasks and creating a centralized, easy-to-use data platform.

     

    Key Qualifications of the Ideal Candidate:

    ✅ Experience:

    • 1.5 to 2+ years of hands-on experience in data engineering.
    • Experience with financial datasets is a strong advantage, but not required.

    🧠 SQL Mastery:

    • You're confident writing complex SQL and working with large-scale datasets.
    • You know your way around CTEs, window functions, joins, and indexes.
    • You've optimized queries for performance and helped make data easy to consume for others (see the sketch below).
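
    A minimal, self-contained illustration of that bar, with hypothetical table and column names; it runs with Python's standard-library sqlite3 module (window functions require SQLite 3.25+):

        # Rank accounts by monthly spend using a CTE plus a window function.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE payments (account_id TEXT, month TEXT, amount REAL)")
        conn.executemany(
            "INSERT INTO payments VALUES (?, ?, ?)",
            [("a1", "2024-01", 120.0), ("a2", "2024-01", 90.0), ("a1", "2024-02", 40.0)],
        )

        query = """
        WITH monthly AS (                     -- CTE: total spend per account per month
            SELECT account_id, month, SUM(amount) AS total
            FROM payments
            GROUP BY account_id, month
        )
        SELECT account_id, month, total,
               RANK() OVER (PARTITION BY month ORDER BY total DESC) AS spend_rank
        FROM monthly
        """
        for row in conn.execute(query):
            print(row)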

    πŸ” ETL / ELT Skills:

    • You've worked with tools like Airflow, Airbyte, or similar for orchestrating data pipelines.
    • You've set up automated data extraction from sources like S3, SQL databases, APIs, Google Sheets, or CSVs.
    • You can build and maintain pipelines that update financial metrics for dashboards.

       

    πŸ› οΈ Data Infrastructure & Scripting:

    • You have experience maintaining and scaling analytical databases.
    • You follow good data quality practices — validation, logging, and retries are part of your playbook.
    • You can write Python scripts for transforming and automating data workflows (see the sketch below).
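
    As a rough sketch of that playbook (the extraction step and its data are stand-ins, not a real source):

        # Retry a flaky extraction with exponential backoff, log failures, and
        # validate the result before it goes anywhere near the warehouse.
        import logging
        import time

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("finance_etl")

        def with_retries(fn, attempts=3, backoff_seconds=2.0):
            for attempt in range(1, attempts + 1):
                try:
                    return fn()
                except Exception:
                    log.exception("attempt %d/%d failed", attempt, attempts)
                    if attempt == attempts:
                        raise
                    time.sleep(backoff_seconds * 2 ** (attempt - 1))

        def validate(rows):
            if not rows:
                raise ValueError("extraction returned no rows")
            return rows

        rows = validate(with_retries(lambda: [{"invoice": 1, "amount": 99.5}]))
        log.info("loaded %d validated rows", len(rows))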

     

    We Offer:

    • A comprehensive social package in addition to cash compensation, including a comfortable office in Kyiv, just 5 minutes' walk from Taras Shevchenko metro station.
    • Competitive salary and comprehensive benefits such as paid conferences, corporate doctor, medical insurance (for personnel located in Ukraine), and quality food daily (breakfasts and lunches), as well as fresh fruits, snacks, and coffee.
    • A dynamic team environment with opportunities for professional growth.
    • Exceptional opportunities for professional development, including in-house training sessions and seminars, a corporate library, English classes, and compensation for professional qualification costs after the probationary period.
    • Flexible working conditions and a supportive health and sports program.

    Ready to shape your future with Genesis?

    Connect with us, and let's create the future together!



     

  • 21 views · 1 application · 4d

    Data Engineer TL / Poland

    EU · 4 years of experience · Upper-Intermediate

    On behalf of our customer, we are seeking a DataOps Team Lead to join their global R&D department.

     

    Our customer is an innovative technology company led by data scientists and engineers devoted to mobile app growth. They focus on solving the key challenge of growth for mobile apps by building Machine Learning and Big Data-driven technology that can both accurately predict what apps a user will like and connect them in a compelling way. 

    We are looking for a data-centric, quality-driven team leader with a focus on data process observability: someone passionate about building high-quality data products and processes, as well as supporting production data processes and ad-hoc data requests.

    As a DataOps TL, you will be in charge of the quality of service as well as the quality of the data and knowledge platform for all data processes. You'll coordinate with stakeholders and play a major role in driving the business by promoting the quality and stability of data performance and lifecycle, and by giving the operational groups the ability to immediately affect daily business outcomes.

     

    Responsibilities:

    • Process monitoring - managing and monitoring the daily data processes; troubleshooting server and process issues, escalating bugs and documenting data issues.
    • Ad-hoc operation configuration changes - be the extension of the operations side into the data process, using Airflow and Python scripting alongside SQL to extract specific client-relevant data points and calibrate certain aspects of the process (see the sketch below).
    • Data quality automation - creating and maintaining data quality tests and validations using Python code and testing frameworks.
    • Metadata store ownership - creating and maintaining the metadata store; managing the metadata system that holds metadata on tables, columns, calculations, and lineage, and participating in the design and development of the knowledge-base metastore and UX, so that you become the pivotal point of contact when information is needed on tables and columns and how they are connected (e.g., What is the data source? What is it used for? Why is this field calculated this way?).
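
    A minimal sketch of the kind of scheduled check this implies, in Airflow 2.4+ style; the DAG id, schedule, and check body are assumptions for illustration, not the client's actual pipeline:

        # One daily task with retries; the real check would query the warehouse.
        from datetime import datetime, timedelta

        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def check_daily_rowcount(**context):
            print("validating daily load for", context["ds"])  # placeholder check

        with DAG(
            dag_id="daily_data_quality",
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
            default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
        ):
            PythonOperator(
                task_id="check_daily_rowcount",
                python_callable=check_daily_rowcount,
            )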

       

    Requirements:

    • Over 2 years in a leadership role within a data team.
    • Over 3 years of hands-on experience as a Data Engineer, with strong proficiency in Python and Airflow.
    • Solid background in working with both SQL and NoSQL databases and data warehouses, including but not limited to MySQL, Presto, Athena, Couchbase, MemSQL, and MongoDB.
    • Bachelor's degree or higher in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical discipline.
    • Highly organized with a proactive mindset.
    • Strong service orientation and a collaborative approach to problem-solving.

       

    Nice to have skills:

    • Previous experience as a NOC or DevOps engineer is a plus.
    • Familiarity with PySpark is considered an advantage.

       

    What we can offer you

    • Remote work from Poland, flexible working schedule
    • Accounting support & consultation
    • Opportunities for learning and developing on the project
    • 20 working days of annual vacation
    • 5 days paid sick leaves/days off; state holidays
    • Working equipment provided
  • 34 views · 8 applications · 4d

    Data Engineer (with Azure)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate

    Would you like to increase your cloud expertise? We're looking for a Data Engineer to join an international cloud technology company.

    This is a leading Microsoft & Azure partner providing cloud services in Europe and East Asia.

    Working with different customer domains + the most professional team – growth! Let's discuss.

     

    Main Responsibilities:

    The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure a data processing pipeline requires to support customer requirements.

     

    You will work on cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, in implementation projects for corporate clients across the EU, the CIS, the United Kingdom, and the Middle East.

    Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.

     

    Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and model optimization.

     

    Mandatory Requirements:

    – 2+ years of experience, ideally within a Data Engineer role.

    – Understanding of data modeling, data warehousing concepts, and ETL processes

    – Experience with Azure Cloud technologies

    – Experience with distributed computing principles and familiarity with key architectures; broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)

    – Understanding of landing and staging areas, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)

    – SQL skills

    – Communication and interpersonal skills

    – English — B2

    – Ukrainian language

     

    It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, advanced analytics projects, and/or professional certification in data & analytics.

     

    We offer:

    – professional growth and international certification

    – free-of-charge technical and business trainings and the best bootcamps (worldwide, including courses at Microsoft HQ in Redmond)

    – innovative data & analytics projects, practical experience with cutting-edge Azure data & analytics technologies at various customers' projects

    – great compensation and individual bonus remuneration

    – medical insurance

    – long-term employment

    – individual development plan

  • 30 views · 7 applications · 3d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    About the Role:
     

    We are seeking a Senior Data Engineer with deep expertise in distributed data processing and cloud-native architectures. This is a unique opportunity to join a forward-thinking team that values technical excellence, innovation, and business impact. You will be responsible for designing, building, and maintaining scalable data solutions that power critical business decisions in a fast-paced B2C environment.

     

    Responsibilities:
     

    • Design, develop, and maintain robust ETL/ELT data pipelines using Apache Spark and AWS Glue
    • Build Zero-ETL pipelines using AWS services such as Kinesis Firehose, Lambda, and SageMaker
    • Write clean, efficient, and well-tested code primarily in Python and SQL
    • Collaborate with data scientists, analysts, and product teams to ensure timely and accurate data delivery
    • Optimize data workflows for performance, scalability, and cost-efficiency
    • Integrate data from various sources (structured, semi-structured, and unstructured)
    • Implement monitoring, alerting, and logging to ensure data pipeline reliability
    • Contribute to data governance, documentation, and compliance efforts
    • Work in an agile environment, participating in code reviews, sprint planning, and team ceremonies
       

    Expected Qualifications:
     

    • 5+ years of professional experience in data engineering
    • Advanced proficiency in Apache Spark, Python, and SQL
    • Hands-on experience with AWS Glue, Kinesis Firehose, and Zero-ETL pipelines
    • Familiarity with AWS Lambda and SageMaker for serverless processing and ML workflows
    • Experience with ETL orchestration tools such as Airflow or dbt
    • Solid understanding of cloud computing concepts, especially within AWS
    • Strong problem-solving skills and the ability to work independently and collaboratively
    • Experience working in B2C companies or data-rich product environments
    • Degree in Computer Science or related field (preferred but not required)
    • Bonus: Exposure to JavaScript and data science workflows
  • 72 views · 16 applications · 1d

    Jnr/Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 1 year of experience · Upper-Intermediate

    Position responsibilities:
    - Migrate client data from other solutions to the Jetfile data model (mostly MS SQL)
    - Write custom reports to be used inside the Jetfile application, in the form of custom SQL queries, reports, and dashboards
    - Analyze and optimize performance under heavy data loads
    - Create migrations for Jetfile internal products

     

    Must have:
    - Bachelor's degree in Computer Science, Engineering, or related field
    - Ability to work independently and remotely
    - 1 year of experience
    - Strong SQL skills
    - Experience with business application development


    Nice to have:
    - Leadership experience
    - Azure knowledge
    - ERP, accounting, fintech or insurance tech experience

  • 32 views · 6 applications · 12h

    Data Engineer (Middle Level)

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Intermediate · Ukrainian Product 🇺🇦

    Experience Level: Middle to Senior
    Employment Type: Full-time

    We are looking for a skilled and detail-oriented Data Engineer or Data Scientist to join our team. You will be working on complex data processing tasks, developing algorithms for data extrapolation, and building robust data infrastructures.
     
    Requirements:
     

    • Solid understanding of relational database design principles.
    • Strong knowledge of ANSI SQL, including CTEs and window functions.
    • Proficiency in at least one programming language: Python or R.
    • Analytical mindset with a desire to delve into complex data processing and extrapolation challenges.
    • Strong teamwork skills and stress resilience.
    • Goal-oriented and result-driven approach.
       

    Preferred Qualifications:
     

    • Familiarity with cloud storage systems.
    • Experience developing distributed systems.
    • In-depth knowledge and experience with writing advanced PostgreSQL procedures.
       

    Key Responsibilities:
     

    • Design data structures and schemas.
    • Develop procedures and modules for data ingestion from various sources using SQL and Python/R.
    • Contribute to the development of data processing algorithms.
    • Program data extrapolation algorithms.
       

    If you are passionate about data, algorithms, and working in a collaborative environment, we would love to hear from you!



     

  • 7 views · 0 applications · 5h

    Middle BigData Engineer to $2300

    Full Remote · Ukraine · 2 years of experience

    Description of the project:

    We are looking for a Middle Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies.

     

    Your qualification:

    • 2+ years of experience in Big Data engineering.
    • Solid knowledge and practical experience with OLAP technologies.
    • Strong SQL skills and experience with schema design.
    • Proficiency in Java or Python for process automation.
    • Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus.
    • Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis.
    • Understanding of distributed systems such as Spark, Hadoop, etc.
    • Experience working with Kafka or other message broker systems.
    • Familiarity with data governance tools and data science/analytics workbenches.
    • Experience with Ezmeral Data Fabric is a plus.
    • Knowledge of UNIX and experience in Shell scripting for automation tasks.
    • Technical English proficiency (reading and understanding documentation).

     

    Responsibilities:

    • Design and implement data extraction, processing, and transformation pipelines based on MPP architecture.
    • Build and maintain data warehouses and OLAP-based systems.
    • Design database schemas and develop dimensional data models.
    • Work with distributed systems and clusters for big data processing.

     

    We are delighted to provide you with the following benefits:

    • Opportunities for growth and development within the project
    • Flexible working hours
    • Option to work remotely or from the office
  • 6 views · 1 application · 5h

    Senior BigData Engineer to $3700

    Full Remote · Ukraine · 4 years of experience · Intermediate

    Description of the project:

    We are looking for a Senior Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies.

     

    Your qualification:

    • 4+ years of experience in Big Data engineering.
    • Solid knowledge and practical experience with OLAP technologies.
    • Strong SQL skills and experience with schema design.
    • Proficiency in Java or Python for process automation.
    • Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus.
    • Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis.
    • Understanding of distributed systems such as Spark, Hadoop, etc.
    • Experience working with Kafka or other message broker systems.
    • Familiarity with data governance tools and data science/analytics workbenches.
    • Experience with Ezmeral Data Fabric is a plus.
    • Knowledge of UNIX and experience in Shell scripting for automation tasks.
    • Technical English proficiency (reading and understanding documentation).

       

    Responsibilities:

    • Design and implement data extraction, processing, and transformation pipelines based on MPP architecture.
    • Build and maintain data warehouses and OLAP-based systems.
    • Design database schemas and develop dimensional data models.
    • Work with distributed systems and clusters for big data processing.

       

    We are delighted to provide you with the following benefits:

    • Opportunities for growth and development within the project
    • Flexible working hours
    • Option to work remotely or from the office
  • 7 views · 2 applications · 4h

    Senior BigData Engineer to $4000

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Intermediate

    We are looking for a Senior Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies. 

     

    Requirements:

    • 4+ years of experience in Big Data engineering. 
    • Solid knowledge and practical experience with OLAP technologies. 
    • Strong SQL skills and experience with schema design.
    •  Proficiency in Java or Python for process automation. 
    • Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus. 
    • Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis. 
    • Understanding of distributed systems such as Spark, Hadoop, etc. 
    • Experience working with Kafka or other message broker systems.
    • Familiarity with data governance tools and data science/analytics workbenches. 
    • Experience with Ezmeral Data Fabric is a plus.
    • Knowledge of UNIX and experience in Shell scripting for automation tasks. 
    • Technical English proficiency (reading and understanding documentation). 

     

    Responsibilities: 

    • Design and implement data extraction, processing, and transformation pipelines based on MPP architecture. 
    • Build and maintain data warehouses and OLAP-based systems. 
    • Design database schemas and develop dimensional data models.
    • Work with distributed systems and clusters for big data processing. 

     

    We are delighted to provide you with the following benefits:

    • Opportunities for growth and development within the project.
    • Flexible working hours.
    • Option to work remotely or from the office.
  • 6 views · 1 application · 4h

    Middle BigData Engineer to $2500

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · Intermediate

    We are looking for a Middle Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies. 

     

    Requirements: 

    • 2+ years of experience in Big Data engineering.
    • Solid knowledge and practical experience with OLAP technologies. 
    • Strong SQL skills and experience with schema design. 
    • Proficiency in Java or Python for process automation. 
    • Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus. 
    • Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis. 
    • Understanding of distributed systems such as Spark, Hadoop, etc. 
    • Experience working with Kafka or other message broker systems. 
    • Familiarity with data governance tools and data science/analytics workbenches. 
    • Experience with Ezmeral Data Fabric is a plus.
    • Knowledge of UNIX and experience in Shell scripting for automation tasks. 
    • Technical English proficiency (reading and understanding documentation). 

       

    Responsibilities: 

    • Design and implement data extraction, processing, and transformation pipelines based on MPP architecture. 
    • Build and maintain data warehouses and OLAP-based systems.
    • Design database schemas and develop dimensional data models.
    • Work with distributed systems and clusters for big data processing. 

     

    We are delighted to provide you with the following benefits: 

    • Opportunities for growth and development within the project 
    • Flexible working hours
    • Option to work remotely or from the office
  • 12 views · 3 applications · 3h

    Data Ops Engineer

    Full Remote · Worldwide · 5 years of experience · Advanced/Fluent

    What you'll do

    Become part of an iconic brand that is set to revolutionize the electric pick-up truck & rugged SUV marketplace by achieving the following:

    Contribute to the design, implementation, and maintenance of the overall cloud infrastructure data platform using modern IaC (Infrastructure as Code) practices.

    Work closely with software development and systems teams to build Data Integration solutions.

    Design and build data models using tools such as Lucid, Talend, Erwin, and MySQL Workbench.

    Define and enhance the enterprise data model to reflect relationships and dependencies.

    Review application data systems to ensure adherence to data governance policies.

    Design and build ETL (Python) and ELT (Python) infrastructure, automation, and solutions to transform data as required.

    Design and implement BI dashboards to visualize trends and forecasts.

    Design and implement data infrastructure components, ensuring high availability, reliability, scalability, and performance.

    Design, train, and deploy ML models.

    Implement monitoring solutions to proactively identify and address potential issues.

    Collaborate with security teams to ensure the data platform meets industry standards and compliance requirements.

    Collaborate with cross-functional teams, including product managers, developers, and business partners to ensure robust and reliable systems.

    What you'll bring

    We expect all employees to have integrity, curiosity, resourcefulness, and strive to exhibit a positive attitude, as well as a growth mindset. You'll be comfortable with change and flexible in a fast-paced, high-growth environment. You'll take a collaborative approach to achieve ambitious goals. Here's what else you'll bring:

    Bachelor's degree in computer science, information technology, or a related field, or equivalent work experience.

    5+ years of hands-on experience as a DataOps Engineer in a manufacturing or automotive environment.

    Experience with streaming and event-based architecture.

    Proficient in building data pipelines using languages such as Python and SQL.

    Experience with AWS based data services such as Glue, Kinesis, Firehose or other comparable services.

    Experience with structured, unstructured, and time-series databases.

    Solid understanding of cloud data storage solutions such as RDS, DynamoDB, DocumentDB, Mongo, Cassandra, Influx.

    Experience implementing data lakehouse solutions using Databricks.

    Several years of experience working with cloud platforms such as AWS and Azure.

    Experience with infrastructure as code (Terraform).

    Proven ability to develop and deploy scalable ML models.

    Hands-on experience in designing, training, and deploying ML models.

    Strong ability to extract actionable insights using ML techniques.

    Ability to leverage ML algorithms for forecasting trends and decision-making.

    Excellent problem-solving and troubleshooting skills. When a problem occurs, you run towards it, not away.

    Effective communication and collaboration skills. You treat colleagues with respect. You have a desire for clean implementations but are also humble in discussing alternative solutions and options.
     

  • 8 views · 0 applications · 3h

    Technical Lead/Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    Project Description:

    • As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will work not only as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
      As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments, e.g., Risk, FP&A, Tax, Order to Cash, the best possible platform for all of their Analytics, Reporting & Data needs.
      Collaborating closely with a talented team of engineers and product managers, you'll lead the on-time delivery of features that meet the evolving needs of our business. You will be responsible for conceptualizing, designing, building, and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real time.
      If you're passionate about owning end-to-end solution delivery, thinking ahead, and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
       

      Responsibilities:

      Strategy and Project Delivery
      ● Together with the business Subject Matter Experts and the Product Manager, conceptualize, define, shape, and deliver the roadmap for achieving the company's priorities and objectives
      ● Lead business requirement gathering sessions and translate the results into an actionable delivery backlog for the team to build
      ● Lead technical decisions in the process to achieve excellence and contribute to organizational goals
      ● Lead the D&A teams in planning and scheduling the delivery process, including defining project scope, milestones, risk mitigation, and timeline management, allocating tasks to team members, and ensuring that the project stays on track
      ● Take full responsibility for ensuring the D&A teams deliver new products on time, setting up processes and operational plans end to end, e.g., collecting user requirements, designing, building and testing the solution, and ops maintenance
      ● Act as a technical leader with strategic thinking for the team and the organization; a visionary who can deliver strategic projects and products for the organization
      ● Own the data engineering processes and architecture across the teams
      Technology, Craft & Delivery
      ● Design and architect data engineering frameworks that handle high volumes of data
      ● Manage large-scale data processing and workflow management
      ● Demonstrate mastery in technology leadership
      ● Own engineering delivery, quality, and practices within your team
      ● Participate in defining, shaping, and delivering the wider engineering strategic objectives
      ● Get into the technical detail (where required) to provide technical coaching, support, and mentoring to the team
      ● Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
      ● Learn new technologies and keep abreast of existing technologies to share learnings and apply them to a variety of projects when needed
       

      Mandatory Skills Description:

      Role Qualifications and Requirements:
      ● Bachelor's degree
      ● At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced, complex environment to deliver complex projects or products on time and with demonstrable positive results
      ● 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3, and Kubernetes
      ● Solid working experience with data engineering platforms involving languages like PySpark, Python, or other equivalent scripting languages
      ● Experience working with public cloud providers such as Snowflake and AWS
      ● Experience working in complex stakeholder organizations
      ● A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions
      ● Strong technical skills (coding & system design) with the ability to get hands-on with your team when needed
      ● Excellent communicator with strong stakeholder management experience, good commercial awareness, and technical vision
      ● A track record of driving successful technical, business, and people-related initiatives that improved productivity, performance, and quality
      ● A humble and thoughtful technology leader who leads by example and gains teammates' respect through actions, not title
      ● Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams

  • 80 views · 7 applications · 6d

    Data Quality Engineer

    Ukraine · Product · 1 year of experience

    We are looking for a Data Quality Engineer who wants to work in a dynamic environment and shares the values of mutual trust, openness, and initiative.

    PrivatBank is the largest bank in Ukraine and one of the most innovative banks in the world. It leads the industry in all financial indicators and accounts for about a quarter of the country's entire banking system.

    We want to find a goal-driven professional who can multitask and is focused on quality and results.

    About the project: the team builds modern data quality assurance and control processes in the company, aimed at improving data-driven decision-making and the quality of digital services.

    Main responsibilities:

    - Design, build, and implement processes and procedures for collecting, storing, using, and securing data

    - Assess the level of trust in data sources

    - Ensure and guarantee the quality of corporate data

    - Document and enforce the rules for collecting, storing, and using data

    - Monitor and resolve incidents related to data quality.

    Main requirements:

    - Higher technical education

    - 2+ years of experience in the data domain

    - Experience in the banking sector

    - Experience working with large data sets

    - Experience with SQL

    - Understanding of database theory (SQL, NoSQL, NewSQL)

    - Knowledge of the fundamentals of designing and working with enterprise data warehouses and data lakes (Data Warehouse, Data Lake), as well as ETL/ELT processes

    Nice to have:
    - Experience with Big Data

    We offer our employees:

    - Work at the largest and most innovative bank in Ukraine

    - Official employment and 24 calendar days of vacation

    - Competitive salary

    - Medical insurance and corporate mobile service

    - Corporate training

    - A modern, comfortable office

    - Interesting projects, ambitious tasks, and dynamic development