Jobs
· 7 views · 0 applications · 2h
Palantir Data Engineer
Full Remote · Ukraine, Romania, Poland, Spain, Portugal · 4 years of experience · Upper-Intermediate
We are seeking a skilled and adaptable Data Engineer who is passionate about data infrastructure and long-term career growth. This role offers an opportunity to build and maintain scalable data solutions while developing expertise in Palantir Foundry and other modern data tools. We value individuals who are excited to expand their technical capabilities over time, work on multiple accounts, and contribute to a dynamic and growing team.
You will play a pivotal role in transforming raw data from various sources into structured, high-quality data products that drive business decisions. The ideal candidate should be motivated to learn and grow within the organization, actively collaborating with experienced engineers to strengthen our data capabilities over time.
About the project
This project focuses on building a centralized data platform for a leading investment firm that supports data-driven decision-making for high-growth companies. Currently, data is sourced from multiple locations, including Excel files, third-party tools, and custom applications, managed within separate systems. This decentralized approach creates inefficiencies and introduces the potential for data inaccuracies.
The objective is to integrate these data sources into a single, unified platform that streamlines access and reduces manual errors. By transforming financial, legal, and operational data into structured data marts, the platform will enable advanced analytics and real-time visualization through BI tools on both web and mobile interfaces.
Skills & Experience
- Bachelor's degree in Computer Science, Software Engineering, or equivalent experience.
- Minimum 3 years of experience in Python, SQL, and data engineering processes.
- Experience with Palantir Foundry
- Proficiency in multiple database systems, such as PostgreSQL, Redis, and a data warehouse like Snowflake, including query optimization.
- Hands-on experience with Microsoft Azure services.
- Strong problem-solving skills and experience with data pipeline development.
- Familiarity with testing methodologies (unit and integration testing).
- Docker experience for containerized data applications.
- Collaborative mindset, capable of working across multiple teams and adapting to new projects over time.
- Fluent in English (written & verbal communication).
- Curiosity and enthusiasm for finance-related domains (personal & corporate finance, investment concepts).
Nice to have
- Experience with Databricks.
- Experience with Snowflake.
- Background in wealth management, investment analytics, or financial modeling.
- Contributions to open-source projects or personal projects showcasing data engineering skills.
Responsibilities
- Design and maintain scalable data pipelines to ingest, transform, and optimize data (see the Foundry sketch after this list).
- Collaborate with cross-functional teams (engineering, product, and business) to develop solutions that address key data challenges.
- Support data governance, data quality, and security best practices.
- Optimize data querying and processing for efficiency and cost-effectiveness.
- Work with evolving technologies to ensure our data architecture remains modern and adaptable.
- Contribute to a culture of learning and knowledge sharing, supporting newer team members in building their skills.
- Grow into new roles within the company by expanding your technical expertise and working on diverse projects over time.
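To make the pipeline bullet above concrete, here is a minimal, hedged sketch of a Palantir Foundry Python transform (using Foundry's transforms.api in a Code Repository); the dataset paths and column names are hypothetical:

```python
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Finance/datasets/clean/positions"),   # hypothetical output dataset
    raw=Input("/Finance/datasets/raw/positions"),  # hypothetical raw dataset
)
def clean_positions(raw):
    # Deduplicate, cast, and filter the raw extract into an analytics-ready product.
    return (
        raw.dropDuplicates(["position_id"])
        .withColumn("market_value", F.col("market_value").cast("double"))
        .filter(F.col("as_of_date").isNotNull())
    )
```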
We are looking for individuals who want to be part of a long-term, growing team - people who may not have all the skills today but are eager to bridge the gap and build their expertise alongside experienced engineers. If you're excited about building your data muscle and growing in your career, we'd love to hear from you!
-
· 17 views · 1 application · 3h
Middle/Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Pre-Intermediate
We're Applyft, an IT product company which creates value-driven mobile apps. Our journey began with the Geozilla family locator product, but now our portfolio consists of four apps in the Family Safety, Entertainment, and Mental Health spheres. We're proud to have a 5M monthly active user base and to achieve 20% QoQ revenue growth.
Now we are looking for a Middle/Senior Data Engineer to join our Analytics team
What you'll do:
- Design, develop and maintain Data pipelines and ETL processes for internal DWH
- Develop and support integrations with 3rd party systems
- Be responsible for the quality of data presented in BI dashboards
- Collaborate with data analysts to troubleshoot data issues and optimize data workflows
Your professional qualities:
- 3+ years of BI/DWH development experience
- Excellent knowledge of database concepts and hands-on experience with SQL
- Proven experience of designing, implementing, and maintaining ETL data pipelines
- Hands-on experience writing production-level Python code
- Experience working with cloud-native technologies (AWS/GCP)
Will be a plus:
- Experience with Business Intelligence software (Looker Studio)
- Experience with billing systems, enterprise financial reporting, subscription monetization products
- Experience of supporting product and marketing data analytics
We offer:
- Remote-First culture: We provide a flexible working schedule and you can work anywhere in the world
- Health care program: We provide health insurance, sports compensation, and 20 paid sick days
- Professional Development: The company provides budget for each employee for courses, trainings and conferences
- Personal Equipment Policy: We provide all necessary equipment for your work. For Ukrainian employees we also provide Ecoflow
- Vacation Policy: Each employee in our company has 20 paid vacation days and extra days on the occasion of special events
- Knowledge sharing: We are glad to share our knowledge and experience in our internal events
- Corporate Events: We organize corporate events and team-building activities across our hubs
-
· 28 views · 4 applications · 4h
Senior Data Engineer (Python) to $7000
Full Remote · Ukraine, Poland, Portugal, Romania, Bulgaria · 5 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a data engineer you'll have end-to-end ownership, from system architecture and software development to operational excellence.
Key Responsibilities:
- Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution (see the DAG sketch after this list).
- Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
- Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
- Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
- Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
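As a hedged illustration of the first bullet, a minimal Airflow DAG that fans extraction out in parallel before a single training step (assuming Airflow 2.4+; the task names and sources are hypothetical):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(source: str) -> None:
    print(f"extracting {source}")  # placeholder for real extraction logic

with DAG(
    dag_id="ml_feature_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Fan out: one extraction task per source, executed in parallel by the scheduler.
    extracts = [
        PythonOperator(
            task_id=f"extract_{source}",
            python_callable=extract,
            op_kwargs={"source": source},
        )
        for source in ("prices", "competitors", "inventory")
    ]
    train = PythonOperator(
        task_id="train_model",
        python_callable=lambda: print("training"),  # placeholder
    )
    extracts >> train  # all extracts must finish before training starts
```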
Required Competence and Skills:
To excel in this role, candidates should possess the following qualifications and experiences:
- A Bachelor's or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.
- At least 5 years of experience as a data engineer, software engineer, or similar role and using data to drive business results.
- At least 5 years of experience with Python, building modular, testable, and production-ready code.
- Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
- Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
- A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
- Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Haves
- Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
- Familiarity with API development frameworks (e.g., FastAPI).
- Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
- Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
Why Us?
We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in).
We provide full accounting and legal support in all countries where we operate.
We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.
We offer a highly competitive package with yearly performance and compensation reviews.
-
· 12 views · 0 applications · 5h
System Engineer
Office Work · Ukraine (Kyiv) · Product · 5 years of experience · MilTech 💪
Airlogix is a company specializing in the production of innovative products in the field of unmanned aerial vehicles (UAVs). Right now our main mission is Ukraine's swiftest possible victory, which is why we are looking for talented and creative professionals who are ready to join us in putting all their energy and resources into achieving this important goal. We are young and ambitious, always looking ahead and working toward victory.
We are now happy to invite a System Engineer to join our team.
We would be glad to see in your experience:
- 6+ years of experience in aviation, UAVs, or the space industry;
- 3+ years of experience in design and electronics development;
- confident knowledge of modeling and analysis (FEA, CFD, Thermal, Trajectory simulation);
- experience with SolidWorks, CATIA, Autodesk Inventor;
- experience managing engineering teams of 5+ people.
What your responsibilities will include:
- technical leadership of projects as a Lead Engineer;
- ensuring the product development process is followed on the projects under your supervision;
- defining technical requirements for the products being developed;
- developing product concepts and 3D models of parts and assemblies;
- preparing technical design documentation: explanatory notes for the preliminary design, draft design, and technical design stages, technical specifications, etc.;
- participating in design reviews as a reviewer;
- mentoring Middle- and Senior-level specialists would be a plus.
What we offer:
- official employment with a fully declared salary, annual paid vacation, and sick leave;
- a generator, bomb shelter, and stable internet;
- the opportunity to influence both the company's development vector and our country's victory;
- a decent salary level, reviewed at least once a year;
- comprehensive support from management, the team, and government bodies;
- working schedule: 08:00-17:00 or 09:00-18:00, Mon-Fri;
- deferment from mobilization (reservation), provided valid military registration documents are in place.
Interested? We look forward to your applications and will be glad to work with you on the same team.
*Please note that the resume review period is 10 working days. If we have not contacted you within this period, it means that at this point we are not ready to invite you to an interview. However, we will keep your resume in our candidate database so that in the future we can offer you other career opportunities relevant to your experience.
-
· 13 views · 1 application · 9h
Senior Python Data Engineer (only Ukraine)
Ukraine · Product · 6 years of experience · Upper-Intermediate
The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.
Requirements:
- At least 5 years of experience with Python
- At least 3 years of experience processing structured data at terabyte scale (structured data sets of several hundred gigabytes and up).
- Solid experience in SQL and NoSQL (ideally GCP storage services: Firestore, BigQuery, Bigtable and/or Redis, Kafka), including advanced DML skills.
- Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc.); see the BigQuery sketch after this list.
- Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark).
- Experience in automated test creation (TDD).
- Fluent spoken English.
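For the OLAP item above, a small hedged sketch using BigQuery's Python client (google-cloud-bigquery); the project, dataset, and table names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up application-default credentials

# Weekly per-user event counts; `my-project.analytics.events` is a hypothetical table.
query = """
    SELECT user_id, COUNT(*) AS events
    FROM `my-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.user_id, row.events)
```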
Advantages:
- Comfort with mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although knowledge of ML is not required for the current position, it would be awesome if a person felt some passion for algorithms).
- Experience in any OOP language.
- Experience in DevOps (Familiarity with Docker and Kubernetes).
- Experience with GCP services would be a plus.
- Experience with IaC would be a plus.
- Experience in Scala.
What we offer:
- 20 working days' vacation;
- 10 paid sick leaves;
- public holidays;
- equipment;
- an accountant who helps with documents;
- many cool team activities.
Apply now and start a new chapter of fast career growth with us!
-
· 20 views · 2 applications · 10h
Senior Data Engineer to $7200
Full Remote · Ukraine, Poland · Product · 5 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a data engineer you'll have end-to-end ownership, from system architecture and software development to operational excellence.
Key Responsibilities:
- Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
- Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
- Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
- Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
- Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
Required Competence and Skills:
- A Bachelor's or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.
- At least 5 years of experience as a data engineer, software engineer, or similar role and using data to drive business results.
- At least 5 years of experience with Python, building modular, testable, and production-ready code.
- Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
- Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
- A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
- Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Have:
- Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
- Familiarity with API development frameworks (e.g., FastAPI).
- Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
- Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
-
· 32 views · 2 applications · 1d
Data Engineer
Full Remote · Ukraine, Poland, Romania, Spain, Portugal · 3 years of experience · Intermediate
We're expanding our team and looking for a skilled Data Engineer who will complete a team for our American client.
We're looking for a specialist with solid experience and understanding of big data technologies. In this role, you will be responsible for ingesting a high volume and variety of enterprise-level data and transforming it into outputs to accelerate decision-making.
Requirements:
- BS+ in computer science or equivalent experience
- 3+ years of experience in Data Engineering
- 3+ years of experience with Python
- Strong experience with AWS stack: Glue, Athena, EMR Serverless, Kinesis, Redshift, Lambda, Step Functions, Data Migration Service (DMS)
- Experience with Spark, PySpark, Iceberg, Delta lake, Aurora DB, DynamoDB.
Nice to have:
- Data modeling and managing data transformation jobs with high volume and timing requirements experience;
- AWS CodePipeline, Beanstalk, Azure DevOps, Cloud Formation;
- Strong ability to collaborate with cross-functional teams, including communicating effectively with people of varying levels of technical knowledge;
- Readiness to learn new technologies.
Responsibilities:
- Setting up data imports from external data sources (DB, CSV, API; see the PySpark sketch after this list);
- Building highly scalable pipelines to process high-volume data for reporting and analytics consumption;
- Designing data assets that support experimental and organizational processes, and are efficient and easy to work with;
- Close cooperation with engineers, data scientists, product managers, and business teams to make sure data products are aligned with organizational needs.
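A hedged sketch of the first two responsibilities, reading a CSV drop and writing partitioned Parquet with PySpark (part of the stack above); bucket paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest a raw CSV drop (hypothetical bucket/path); schema inference keeps the sketch short.
orders = (
    spark.read.option("header", True).option("inferSchema", True)
    .csv("s3://example-bucket/raw/orders/")
)

# Light cleaning: deduplicate and derive a partition column.
clean = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write partitioned Parquet for reporting/analytics consumers (e.g. Athena, Redshift Spectrum).
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```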
What we offer:
- Paid training programs and English/Spanish language courses;
- Medical insurance, sports program compensation, pet care and other benefits compensation program, which can be selected by each employee according to personal preferences;
- Comfortable working hours;
- Awesome team events and a wide variety of knowledge sharing opportunities.
-
· 21 views · 2 applications · 1d
Data Engineer with Palantir experience
Full Remote · Ukraine, Poland, Portugal, Romania, Spain · 3 years of experience · Upper-Intermediate
We are seeking a skilled and adaptable Data Engineer with Palantir experience or a strong willingness to learn Palantir technology.
We are looking for an engineer who is passionate about data infrastructure and long-term career growth. This role offers an opportunity to build and maintain scalable data solutions while developing expertise in Palantir Foundry and other modern data tools. We value individuals who are excited to expand their technical capabilities over time, work on multiple accounts, and contribute to a dynamic and growing team.
You will play a pivotal role in transforming raw data from various sources into structured, high-quality data products that drive business decisions. The ideal candidate should be motivated to learn and grow within the organization, actively collaborating with experienced engineers to strengthen our data capabilities over time.
About the project
This project focuses on building a centralized data platform for a leading investment firm that supports data-driven decision-making for high-growth companies. Currently, data is sourced from multiple locations, including Excel files, third-party tools, and custom applications, managed within separate systems. This decentralized approach creates inefficiencies and introduces the potential for data inaccuracies.
The objective is to integrate these data sources into a single, unified platform that streamlines access and reduces manual errors. By transforming financial, legal, and operational data into structured data marts, the platform will enable advanced analytics and real-time visualization through BI tools on both web and mobile interfaces.
Skills & Experience
- Bachelor's degree in Computer Science, Software Engineering, or equivalent experience.
- Minimum 3 years of experience in Python, SQL, and data engineering processes.
- Experience with Palantir Foundry or a strong willingness to learn and develop expertise in it.
- Proficiency in multiple database systems, such as PostgreSQL, Redis, and a data warehouse like Snowflake, including query optimization.
- Hands-on experience with Microsoft Azure services.
- Strong problem-solving skills and experience with data pipeline development.
- Familiarity with testing methodologies (unit and integration testing).
- Docker experience for containerized data applications.
- Collaborative mindset, capable of working across multiple teams and adapting to new projects over time.
- Fluent in English (written & verbal communication).
- Curiosity and enthusiasm for finance-related domains (personal & corporate finance, investment concepts).
Nice to have
- Experience with Databricks.
- Experience with Snowflake.
- Background in wealth management, investment analytics, or financial modeling.
- Contributions to open-source projects or personal projects showcasing data engineering skills.
Responsibilities
- Design and maintain scalable data pipelines to ingest, transform, and optimize data.
- Collaborate with cross-functional teams (engineering, product, and business) to develop solutions that address key data challenges.
- Support data governance, data quality, and security best practices.
- Optimize data querying and processing for efficiency and cost-effectiveness.
- Work with evolving technologies to ensure our data architecture remains modern and adaptable.
- Contribute to a culture of learning and knowledge sharing, supporting newer team members in building their skills.
- Grow into new roles within the company by expanding your technical expertise and working on diverse projects over time.
We are looking for individuals who want to be part of a long-term, growing team - people who may not have all the skills today but are eager to bridge the gap and build their expertise alongside experienced engineers. If you're excited about building your data muscle and growing in your career, we'd love to hear from you!
· 20 views · 2 applications · 1d
Senior Architect Data Engineer
Full Remote · Ukraine, Romania, Portugal, Poland, Spain · 5 years of experience · Upper-Intermediate
Tech stack: Palantir Foundry, Microsoft Azure, Azure DataLake, Azure App Service, SQL, Spark, Databricks, Python, FastAPI, Pandas, Streamlit, GitHub Actions, OpenAI, LLMs
About the role
We are seeking a Senior Architect Data Engineer to lead the design and evolution of our Palantir Foundry-based data platform for a finance-focused initiative. This role goes beyond building data pipelines: you will own the data architecture end-to-end, mentor other engineers, and shape the technical roadmap for transforming scattered raw data into robust, analytics-ready products.
You will collaborate directly with leadership and product teams to understand strategic data needs, set technical direction, and ensure our infrastructure scales with business growth. Your expertise will be crucial in driving innovation across ingestion, transformation, quality, and real-time access layers of our modern data stack.
This is a hands-on leadership role for someone who thrives on both solving complex problems and empowering others to grow.
About the project
This project focuses on building a centralized data platform for a leading investment firm that supports data-driven decision-making for high-growth companies. Currently, data is sourced from multiple locations, including Excel files, third-party tools, and custom applications, managed within separate systems. This decentralized approach creates inefficiencies and introduces the potential for data inaccuracies.
The objective is to integrate these data sources into a single, unified platform that streamlines access and reduces manual errors. By transforming financial, legal, and operational data into structured data marts, the platform will enable advanced analytics and real-time visualization through BI tools on both web and mobile interfaces.
Location
Remote: LATAM / Poland / Europe / Ukraine
Skills & Experience
- Possess a Bachelor's degree in Computer Science or Software Engineering, or demonstrate substantial experience as a seasoned app developer.
- Demonstrate a minimum of 8 years of proficiency in Python, SQL, and the data systems development life cycle.
- Experience with Palantir Foundry.
- Advanced experience using different kinds of databases (for example, PostgreSQL, BigQuery, Redis) including experience with query and optimization techniques.
- Display a comprehensive understanding and practical experience with Google Cloud services.
- Profound experience with data pipeline testing methodologies.
- Hands-on experience working with Docker
- Proven background in collaborative efforts with product managers and fellow engineers, particularly within distributed multicultural teams.
- An excellent command of the English language, both written and verbal.
- Possess outstanding communication skills, coupled with a sense of humor, and express a keen interest in the domains of personal and corporate finance.
Nice to have
- Experience with LLM integrations (OpenAI, LangChain) and prompt engineering.
- Experience with Databricks.
- Experience with Microsoft Azure.
- Knowledge of the financial domain and understanding of wealth management, investment concepts.
- Contributions to open-source projects or personal projects showcasing data engineering skills.
- Experience influencing data strategy in financial services or investment tech platforms.
- Contributions to open-source or thought leadership in the data community.
Responsibilities
- Collaborate with business leaders to align data architecture with strategic goals.
- Lead end-to-end implementation of complex data pipelines and integrations.
- Design, enforce, and implement data governance, lineage tracking, and access policies across environments.
- Review and improve engineering standards, mentoring engineers and reviewing critical code paths.
- Proactively identify tech debt, bottlenecks, and opportunities for optimization or automation.
- Drive adoption of emerging technologies in the modern data stack.
- Represent the data engineering function in cross-functional discussions and roadmap planning.
- Stay up-to-date with the latest trends and technologies in the data engineering field (Modern Data Stack) and propose improvements to the existing architecture.
-
· 24 views · 1 application · 1d
Data Engineer
Ukraine · 4 years of experience · Upper-Intermediate
About the Role:
As a Data Platform Engineer, you will design, develop, and build the core stream-processing platform, collaborating with cross-functional teams, including product managers and other engineering teams, to deliver end-to-end solutions.
Key Responsibilities:
- Participate in the design, development, and deployment of scalable and reliable data processing pipelines (see the Kafka sketch after this list).
- Implement robust developer and testing infrastructure to streamline development workflows and ensure high-quality code.
- Stay current with the latest technologies and industry trends, evaluating and integrating new tools and methodologies as appropriate.
- Work closely with development, operations, and other teams to ensure alignment and collaboration.
- Demonstrate strong debugging, documentation, and communication skills.
- Communicate effectively, both verbally and in writing, to technical and non-technical audiences.
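As one hedged illustration of the pipeline work in the first bullet, a toy consumer using the kafka-python client (Kafka being one of the technologies listed below); the broker address, topic, and event shape are hypothetical:

```python
import json

from kafka import KafkaConsumer

# Subscribe to a hypothetical "events" topic on a local broker.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-aggregator",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

counts = {}
for message in consumer:
    event = message.value
    # Toy stateful aggregation: count events per type as they stream in.
    counts[event["type"]] = counts.get(event["type"], 0) + 1
    print(counts)
```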
Required Skills and Experience:
- At least 4+ years of experience in large-scale software development with a specific focus on data processing.
- Strong proficiency with large-scale data processing technologies like Apache Flink, Apache Spark, Kafka, Kinesis.
- Proficiency in Java and Python.
- Comfortable dealing with distributed system complexity.
- Experience in relational data models and databases.
- Experience with SQL queries and optimization.
- Experience with GitHub tooling (actions, workflows, repositories).
- Familiarity with CI/CD pipelines and automation tools.
- Problem-solving and troubleshooting skills.
- Strong communication and collaboration abilities.
Bonus Points:
- Experience building or designing database systems.
- Contributions to open-source projects (especially related to Flink, Kafka or Spark)
- Proficiency with containerization and orchestration technologies (Docker, Kubernetes).
- Proficiency with cloud platforms (AWS, GCP, or Azure).
- Proficiency with Golang.
- Understanding of communication protocols (REST, gRPC) and how to use them when building microservices.
- Proficiency with Antlr or other compiler tools.
- Knowledge of security best practices and compliance standards.
We offer:
- IT Club membership
- 18 days of vacation + 8 days of paid state holidays
- Health insurance
- Compensation for language trainings
- Compensation for educational courses, training, and certificates
- Compensation for sport activities
- Mentorship program
- Employee recognition program with awards
- Running club
- Reading club
- Cozy and pet-friendly office
- Weekly sweets & fruits office days
- Corporate bookshelf
- Office relax zone with PS4, VR, table games, table tennis, aero hockey, mini football table.
Are you interested? We would be glad to receive your CV.
· 12 views · 1 application · 1d
Data Solutions Architect
Full Remote · Countries of Europe or Ukraine · 8 years of experience · Advanced/Fluent
The Data and Analytics practice, part of the Technology Office, is a team of high-end experts in data strategy, data governance, and data platforms, and contributes to shaping the future of data platforms for our customers. As a Solution Data Architect, you will play a crucial role in designing and implementing data solutions for our clients and in developing and executing the Data and Analytics practice within the company.
Responsibilities:
- Client Engagement: Demonstrate deep expertise in Data Platform Modernization to build credibility with prospective clients. Share relevant customer stories and proof points
- Requirement Analysis: Engage clients to understand their needs, scope projects, and define solutions. Articulate business benefits, use cases, and create roadmaps to achieve client goals
- Presales Support: Assist the sales team in creating customer proposals, including Statements of Work (SOWs) and responses to RFPs. Participate in project scoping calls
- Opportunity Identification: Identify opportunities to upgrade and optimize clients and data platforms to meet modern demands
- Implementation Support: Play a key role in overseeing Data Platform modernization implementation projects. Ensure the solution meets client requirements and provide best practice guidance
- Architectural Guidance: Provide strategic guidance and support in architecting and planning the implementation of modern data platforms for clients
- Technology Assessment: Stay current with the latest developments in modern data platforms and apply this knowledge to client projects
- Stakeholder Collaboration: Participate in meetings with internal and external stakeholders, including Delivery Managers, Account Managers, Client Partners, and Delivery Teams, as well as ecosystem partners
- Practice Development: Co-create and develop the Data & Analytics practice within the company, focusing on data services and consulting offerings
Requirements:
- Proven Experience: Demonstrated track record in leveraging Databricks Data Intelligence and/or Snowflake in major projects.
- Architectural Expertise: Over 10 years of progressive experience from data engineer to Architect role, designing, building, and maintaining complex solutions
- Solution Design: Expert knowledge and experience applying data concepts like Data Mesh, Data Fabric, Data Warehouse, Data Lake, Data Lakehouse, from design to implementation
- Sales Acumen: Worked on implementation/proposals as a Solution Architect responsible for designing solutions end to end
- Cloud Data Knowledge: Strong knowledge of the Cloud Data Platform landscape, including vendors and offerings across domains such as Cloud Data Platform foundations, migration and modernization, and data intelligence
- Continuous Learning: Passion, curiosity and desire to learn what is new in the data platforms market, modern technology trends and data stacks
- Communication Skills: Ability to present technical ideas in a business-friendly language. Excellent English verbal and written communication skills are a must.
- Travel: Willingness to travel on business as required
Why Join Us:
- Innovative Projects: Work on cutting-edge data platform modernization projects with leading industry clients.
- Professional Growth: Opportunities for continuous learning and professional development.
- Collaborative Environment: Join a team of passionate experts dedicated to delivering excellence.
- Competitive Compensation: Attractive salary and benefits package.
We offer:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
-
· 48 views · 0 applications · 1d
Trainee/Junior Data Engineer/BigData Engineer
Office Work · Ukraine (Lviv) · Upper-Intermediate
Inforce is a Software Development Company that provides a full range of top-quality IT services. Our mission is to develop first-class applications and websites to provide our clients with the best solutions for maximizing their profits and converting their ideas into reality.
Responsibilities:
• Assist in designing and building data pipelines.
• Support database management tasks.
• Learn and contribute to the automation of data processes.
• Collaborate with the team on various data-driven projects.
Requirements:
• Good spoken English
- Basic proficiency in Python, SQL, and data processing frameworks.
- Basic knowledge of PySpark and Airflow
- Eagerness to learn and adapt to new technologies.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
We offer:
- Competitive salary
- Interesting and challenging projects
- Future career growth opportunities
- Paid sick leave and working day vacation
- A friendly team of professionals
- Delicious coffee, biscuits, and tea for your good mood
- The company covers 50% of the cost of courses you need
- Exciting team-building activities and corporate parties
- Office in the city center
· 63 views · 1 application · 1d
Junior Data Engineer IRC262233
Full Remote · Ukraine · 1 year of experience · Upper-Intermediate
Description
The Digital Health organization is a technology team focused on next-generation Digital Health capabilities that deliver on the Medicine mission and vision to deliver Insight-Driven Care. This role will operate within the Digital Health Applications & Interoperability subgroup of the broader Digital Health team, focused on patient engagement, care coordination, AI, healthcare analytics, and interoperability, among other advanced technologies which enhance our product portfolio with new services while improving clinical & patient experiences.
The project is a cloud-based PaaS Ecosystem built with a privacy by design centric approach to provide a centralized cloud-based platform to store, classify, and control access to federated datasets in a scalable, secure, and efficient manner.
The ecosystem will allow Customer Operating Units (medical device departments) to store federated data sets of varying sizes and formats and control access to those data sets through Data steward(s). Source data sets can be exposed to qualified use cases and workflows through different project types.
The Healthcare Data Platform ecosystem will provide ML/AI project capabilities for streamlined development processes and an ML/AI workbench to enhance data exploration, wrangling, and model training.
In queue: 15+ OUs. At the moment the data platform is working with the Neuro, Cardio, and Diabetes OUs, but more OUs could come up with requirements in the future.
GL role: to work on the enhancement of current capabilities, including taking over the work that the AWS ProServe team is doing, and to develop new requirements that will keep coming from different OUs in the future.
Requirements
Python, Data Engineering, Data Lake or Lakehouse, Apache Iceberg (nice to have), Parquet
Good communication skills, proactive and shows initiative
MUST HAVE
- AWS Platform: Working experience with AWS data technologies, including S3, AWS RDS, Lake Formation
- Programming Languages: Strong programming skills in Python
- Data Formats: Experience with JSON, XML and other relevant data formats
- CI/CD Tools: Ability to deploy using established CI/CD pipelines using GitLab CI, Jenkins, Terraform or similar tools
- Scripting and automation: experience in a scripting language such as Python, PowerShell, etc.
- Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, Splunk, ELK, Dynatrace, Prometheus
- Source Code Management: Expertise with GitLab
- Documentation: Experience with markdown and in particular Antora for creating technical documentation
NICE TO HAVE
- Previous Healthcare or Medical Device experience
- Experience implementing enterprise-grade cybersecurity & privacy by design into software products
- Experience working in Digital Health software
- Experience developing global applications
- Strong understanding of SDLC; experience with Agile methodologies
- Software estimation
- Experience leading software development teams onshore and offshore
- Experience with FHIR
Job responsibilities
KEY RESPONSIBILITIES
- Implement data pipelines using AWS services such as AWS Glue, Lambda, Kinesis, etc. (see the Lambda sketch after this list)
- Implement integrations between the data platform and systems such as Atlan, Trino/Starburst, etc
- Complete logging and monitoring tasks through AWS and Splunk toolsets
- Develop and maintain ETL processes to ingest, clean, transform and store healthcare data from various sources
- Optimize data storage solutions using Amazon S3, AWS RDS, Lake Formation and other AWS technologies.
- Document, configure, and maintain systems specifications that conform to defined architecture standards, address business requirements, and processes in the cloud development & engineering.
- Participate in planning of system and development deployment as well as responsible for meeting compliance and security standards.
- Actively identify system functionality or performance deficiencies, execute changes to existing systems, and test functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
- Document testing and maintenance of system updates, modifications, and configurations.
- Leverage platform process expertise to assess if existing standard platform functionality will solve a business problem or customization solution would be required.
- Test the quality of a product and its ability to perform a task or solve a problem.
- Perform basic maintenance and performance optimization procedures in each of the primary operating systems.
- Ensure system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001)
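As a hedged sketch of the first responsibility, a minimal AWS Lambda handler consuming records from a Kinesis stream (the payload fields are hypothetical):

```python
import base64
import json

def handler(event, context):
    """Decode and process each Kinesis record delivered to this Lambda."""
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the event envelope.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Placeholder: route, clean, or persist the payload (e.g. to S3 or RDS).
        print(payload.get("device_id"), payload.get("event_type"))
    # An empty list signals that no records in the batch failed.
    return {"batchItemFailures": []}
```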
-
· 13 views · 0 applications · 1d
DWH Oracle Developer
Full Remote · Ukraine · Product · 3 years of experience
A company specializing in developing a crypto algorithmic trading platform and tools that they were the first in the world to build. They believe in their value and in the market's need for them.
Required skills
• Solid knowledge of denormalized data models.
• Organizing and supporting a DWH, understanding of segmentation
• Understanding of relational databases, normalization, referential integrity, etc.
• Setting up and tuning Oracle Advanced Queuing
• Skill in applying all types of joins and deep knowledge of PL/SQL
• Accuracy in writing code according to the Company's standards
• Applying various approaches to code optimization, working with query plans
• Basic-level DBA skills
• As a plus: experience building OLAP systems
Responsibilities
• Working with SQL and Oracle PL/SQL.
• ETL of exchange trading data
• Writing SQL scripts of varying degrees of complexity
• Designing denormalized data models
• Working with large volumes of data (billions of records)
• Building data marts, materialized views, etc. (see the sketch after this list)
• Optimizing queries and adapting them to load
• Data migration, working with the MQ transport layer
• Developing and supporting reporting systems
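A hedged sketch of the materialized-view responsibility above, triggering a complete refresh from Python with the python-oracledb driver; the credentials, DSN, and view name are hypothetical:

```python
import oracledb

# Connect to the warehouse instance (hypothetical credentials and DSN).
connection = oracledb.connect(user="dwh", password="secret", dsn="dbhost:1521/ORCLPDB1")

with connection.cursor() as cursor:
    # DBMS_MVIEW.REFRESH(list => 'TRADES_DAILY_MV', method => 'C' for a complete refresh).
    cursor.callproc("DBMS_MVIEW.REFRESH", ["TRADES_DAILY_MV", "C"])

connection.commit()
connection.close()
```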
Product stack
AWS, Docker, CI/CD (GitLab), Java, Spring, Hibernate, RESTful API, C#, .NET 5.0, Blazor, Python, Oracle, Telegram API, Grafana, Excel Data Access, Prometheus, Confluence, Jira.
We offer
• The opportunity to grow in a team of professionals.
• Decent pay
• Work with modern technologies and a highload information system
• Work format: remote or office (a modern, well-equipped office 15 minutes from Pozniaky metro station)
• No unnecessary bureaucracy or routine
We value your qualities
• Engagement
• Constructive energy
• The drive to achieve results
Our rules
• Respect, honesty, and responsibility as a consequence
• Punctuality
• Your professional and financial growth is an essential component of our company's success
-
· 50 views · 9 applications · 2d
Data Engineer
Full Remote · Ukraine, Poland, Bulgaria, Germany, Spain · 3 years of experience · Intermediate
We're expanding our team and looking for a skilled Data Engineer who will complete a team for our American client.
We're looking for a specialist with solid experience and understanding of big data technologies. In this role, you will be responsible for ingesting a high volume and variety of enterprise-level data and transforming it into outputs to accelerate decision-making.
Requirements:
- BS+ in computer science or equivalent experience;
- 3+ years of experience in Data Engineering;
- 3+ years of experience with Python;
- Strong experience with AWS stack: Glue, Athena, EMR Serverless, Kinesis, Redshift, Lambda, Step Functions, Data Migration Service (DMS);
- Experience with Spark, PySpark, Iceberg, Delta lake, Aurora DB, DynamoDB.
Nice to have:
- Data modeling and managing data transformation jobs with high volume and timing requirements experience;
- AWS CodePipeline, Beanstalk, Azure DevOps, Cloud Formation;
- Strong ability to collaborate with cross-functional teams, including communicating effectively with people of varying levels of technical knowledge;
- Readiness to learn new technologies.
Responsibilities:
- Setting up data imports from external data sources (DB, CSV, API);
- Building highly scalable pipelines to process high-volume data for reporting and analytics consumption;
- Designing data assets that support experimental and organizational processes, and are efficient and easy to work with;
- Close cooperation with engineers, data scientists, product managers, and business teams to make sure data products are aligned with organizational needs.
What we offer:
- Paid training programs and English/Spanish language courses;
- Medical insurance, sports program compensation, pet care and other benefits compensation program, which can be selected by each employee according to personal preferences;
- Comfortable working hours;
- Awesome team events and a wide variety of knowledge sharing opportunities.