Jobs
108
-
· 19 views · 1 application · 5d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
N-iX is looking for a Senior Data Engineer (with Data Science/MLOps experience) to join our team!
Our client: a global biopharmaceutical company.
As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives. The ideal candidate will possess a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.
Key Responsibilities:
- Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
- Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
- Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
- Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
- Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
- Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
- Assist in optimizing data pipelines to improve machine learning workflows.
- Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
- Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
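For a flavor of the pipeline work these responsibilities describe, here is a minimal, hedged PySpark sketch; the dataset paths and column names are invented for illustration, and inside Palantir Foundry this logic would normally live in a Foundry transform rather than a standalone Spark session:

```python
# Minimal PySpark ETL sketch: clean a raw dataset and aggregate it for reporting.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trial-data-pipeline").getOrCreate()

raw = spark.read.parquet("/data/raw/trial_results")  # hypothetical input

cleaned = (
    raw.dropDuplicates(["subject_id", "visit_date"])       # basic de-duplication
       .filter(F.col("visit_date").isNotNull())            # drop incomplete records
       .withColumn("visit_date", F.to_date("visit_date"))  # normalize types
)

# One row per site per month for downstream reporting.
summary = (
    cleaned.groupBy("site_id", F.date_trunc("month", F.col("visit_date")).alias("month"))
           .agg(F.countDistinct("subject_id").alias("active_subjects"))
)

summary.write.mode("overwrite").parquet("/data/curated/site_monthly_summary")
```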
Tools and skills you will use in this role:
- Palantir Foundry
- Python
- PySpark
- SQL
- TypeScript
Required:
- 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
- Strong proficiency in Python and PySpark;
- Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
- Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
- Expertise in data modeling, data warehousing, and ETL/ELT concepts;
- Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
- Proficiency in containerization technologies (e.g., Docker, Kubernetes);
- Familiarity with ML Ops concepts, including model deployment and monitoring.
- Basic understanding of machine learning frameworks such as TensorFlow or PyTorch.
- Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
- Experience working with feature engineering and data preparation for machine learning models.
- Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities.
- Strong communication and teamwork abilities;
- Understanding of data security and privacy best practices;
- Strong mathematical, statistical, and algorithmic skills.
Nice to have:
- Certification in Cloud platforms, or related areas;
- Experience with the Apache Lucene search engine and RESTful web service APIs;
- Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
- Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
- Previous experience working with JavaScript and TypeScript.
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
-
· 15 views · 0 applications · 4d
Senior Data Engineer (Data Science/MLOps Background)
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
Our Client is seeking a proactive Senior Data Engineer to join their team.
As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry.
Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives.
The ideal candidate will possess a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.
Key Responsibilities:
- Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
- Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
- Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
- Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
- Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
- Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
- Assist in optimizing data pipelines to improve machine learning workflows.
- Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
- Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
Tools and skills you will use in this role:
- Palantir Foundry
- Python
- PySpark
- SQL
- TypeScript
Required:
- 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
- Strong proficiency in Python and PySpark;
- Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
- Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
- Expertise in data modeling, data warehousing, and ETL/ELT concepts;
- Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
- Proficiency in containerization technologies (e.g., Docker, Kubernetes);
- Familiarity with ML Ops concepts, including model deployment and monitoring;
- Basic understanding of machine learning frameworks such as TensorFlow or PyTorch;
- Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
- Experience working with feature engineering and data preparation for machine learning models;
- Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
- Strong communication and teamwork abilities;
- Understanding of data security and privacy best practices;
- Strong mathematical, statistical, and algorithmic skills.
Nice to have:
- Certification in Cloud platforms, or related areas;
- Experience with the Apache Lucene search engine and RESTful web service APIs;
- Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
- Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
- Previous experience working with JavaScript and TypeScript.
Company offers:
- Flexible working format: remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
-
· 67 views · 16 applications · 4d
Data Engineer
Countries of Europe or Ukraine · Product · 1.5 years of experience · Pre-Intermediate
Genesis is a co-founding company that builds global tech businesses with outstanding entrepreneurs from CEE. We are one of the largest global app developers β products from Genesis companies have been downloaded over 300 million times and are used by tens of millions monthly.
We're proud to have one of the strongest tech teams in Europe, with our experts regularly recognized among the best IT professionals in CEE and Ukraine.
We're looking for a Data Engineer who's excited to build something from the ground up and make a real impact on how the Finance team works with data.
Here's what your day-to-day will look like:
- Build and Own Our Finance Data Platform. Create and maintain a robust analytical database for the Finance team; you'll be the go-to expert for anything data-related.
- Collaborate with Stakeholders. Work closely with finance team members and business process owners to understand their data needs and turn them into smart, scalable solutions.
- Design and Launch Data Pipelines. Build reliable data pipelines to pull in data from various sources: S3, SQL databases, APIs, Google Sheets, CSVs, and more.
- Manage Data Infrastructure. Ensure our data systems are well-structured, scalable, reliable, and backed up regularly.
- Deliver Reports & Dashboards. Make sure key stakeholders get the right data at the right time, whether it's for regular reports or one-off deep dives.
- Automate Manual Work. Help move the Finance team away from Excel by automating repetitive tasks and creating a centralized, easy-to-use data platform.
Key Qualifications of the Ideal Candidate:
Experience:
- 1.5 to 2+ years of hands-on experience in data engineering.
- Experience with financial datasets is a strong advantage, but not required.
SQL Mastery:
- You're confident writing complex SQL and working with large-scale datasets.
- You know your way around CTEs, window functions, joins, and indexes.
- You've optimized queries for performance and helped make data easy to consume for others (a short sketch of these constructs follows below).
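As a concrete, self-contained illustration of those constructs (a CTE plus a window function), here is a small sketch run against an in-memory SQLite database; the table and figures are invented, and window functions require SQLite 3.25+:

```python
# CTE + window function demo on an in-memory SQLite database (3.25+).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments (id INTEGER, team TEXT, amount REAL, paid_at TEXT);
    INSERT INTO payments VALUES
        (1, 'finance', 100.0, '2024-01-05'),
        (2, 'finance', 250.0, '2024-02-20'),
        (3, 'ops',      80.0, '2024-01-11');
""")

query = """
WITH monthly AS (                        -- CTE: aggregate to team/month grain
    SELECT team,
           substr(paid_at, 1, 7) AS month,
           SUM(amount)           AS total
    FROM payments
    GROUP BY team, month
)
SELECT team, month, total,
       SUM(total) OVER (PARTITION BY team ORDER BY month) AS running_total  -- window
FROM monthly
ORDER BY team, month;
"""
for row in conn.execute(query):
    print(row)
```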
ETL / ELT Skills:
- You've worked with tools like Airflow, Airbyte, or similar for orchestrating data pipelines.
- You've set up automated data extraction from sources like S3, SQL databases, APIs, Google Sheets, or CSVs.
- You can build and maintain pipelines that update financial metrics for dashboards (a minimal Airflow sketch follows below).
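To make that concrete, here is a hedged sketch of such a pipeline as an Airflow DAG (assuming Airflow 2.4+; the bucket, file and metric names are invented, and the task bodies are placeholders):

```python
# Skeleton Airflow DAG: extract a finance file, then refresh dashboard metrics.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_csv_from_s3(**context):
    # Placeholder: a real task would pull the file (e.g., via S3Hook)
    # and land it in the analytical database.
    print("extracting finance_report.csv from s3://finance-raw/ ...")

def refresh_metrics(**context):
    # Placeholder: recompute the financial metrics the dashboards read.
    print("refreshing monthly revenue metrics ...")

with DAG(
    dag_id="finance_metrics_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_csv_from_s3)
    refresh = PythonOperator(task_id="refresh", python_callable=refresh_metrics)
    extract >> refresh
```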
Data Infrastructure & Scripting:
- You have experience maintaining and scaling analytical databases.
- You follow good data quality practices; validation, logging, and retries are part of your playbook (sketched below).
- You can write Python scripts for transforming and automating data workflows.
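One stdlib-only sketch of that playbook, with an invented fetch step, might look like this:

```python
# Validation, logging and retries around a (stubbed) extraction step.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("finance_etl")

def with_retries(fn, attempts=3, delay_s=5):
    """Run fn, retrying on failure with a fixed delay."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            log.exception("attempt %d/%d failed", attempt, attempts)
            if attempt == attempts:
                raise
            time.sleep(delay_s)

def validate(rows):
    """Reject batches that are empty or contain negative amounts."""
    if not rows:
        raise ValueError("no rows extracted")
    bad = [r for r in rows if r["amount"] < 0]
    if bad:
        raise ValueError(f"{len(bad)} rows with negative amounts")
    return rows

def fetch_payments():
    # Placeholder for a real extraction (API call, SQL query, S3 read, ...).
    return [{"amount": 100.0}, {"amount": 250.0}]

rows = validate(with_retries(fetch_payments))
log.info("loaded %d validated rows", len(rows))
```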
We Offer:
- A comprehensive social package in addition to cash compensation, including a comfortable office in Kyiv, just 5 minutes' walk from Taras Shevchenko metro station.
- Competitive salary and comprehensive benefits such as paid conferences, corporate doctor, medical insurance (for personnel located in Ukraine), and quality food daily (breakfasts and lunches), as well as fresh fruits, snacks, and coffee.
- A dynamic team environment with opportunities for professional growth.
- Exceptional opportunities for professional development, including in-house training sessions and seminars, a corporate library, English classes, and compensation for professional qualification costs after the probationary period.
- Flexible working conditions and a supportive health and sports program.
Ready to shape your future with Genesis?
Connect with us, and let's create the future together!
-
· 21 views · 1 application · 4d
Data Engineer TL / Poland
EU · 4 years of experience · Upper-Intermediate
On behalf of our customer, we are seeking a DataOps Team Lead to join our global R&D department.
Our customer is an innovative technology company led by data scientists and engineers devoted to mobile app growth. They focus on solving the key challenge of growth for mobile apps by building Machine Learning and Big Data-driven technology that can both accurately predict what apps a user will like and connect them in a compelling way.
We are looking for a data-centric, quality-driven team leader focused on data process observability, someone passionate about building high-quality data products and processes, as well as supporting production data processes and ad-hoc data requests.
As a DataOps TL, you will be in charge of the quality of service as well as the quality of the data and knowledge platform for all data processes. You'll coordinate with stakeholders and play a major role in driving the business by promoting the quality and stability of data performance and lifecycle, and by giving the operational groups immediate abilities to affect daily business outcomes.
Responsibilities:
- Process monitoring - managing and monitoring the daily data processes; troubleshooting server and process issues, escalating bugs and documenting data issues.
- Ad-hoc operation configuration changes - being the extension of the operations side into the data process; using Airflow and Python scripting alongside SQL to extract specific client-relevant data points and calibrate certain aspects of the process.
- Data quality automation - creating and maintaining data quality tests and validations using Python code and testing frameworks (see the pytest-style sketch after this list).
- Metadata store ownership - creating and maintaining the metadata store; managing the metadata system which holds metadata on tables, columns, calculations and lineage; participating in the design and development of the knowledge-base metastore and UX, so as to be the pivotal point of contact for questions about tables, columns and how they are connected, e.g., What is the data source? What is it used for? Why are we calculating this field in this manner?
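A hedged sketch of what such automated checks can look like with pytest (table, columns and thresholds are invented; a real suite would run against the production warehouse rather than SQLite):

```python
# pytest-style data quality checks against a toy in-memory table.
import sqlite3

import pytest

@pytest.fixture()
def conn():
    c = sqlite3.connect(":memory:")
    c.executescript("""
        CREATE TABLE daily_installs (app_id TEXT, day TEXT, installs INTEGER);
        INSERT INTO daily_installs VALUES ('app-1', '2024-01-01', 120);
    """)
    return c

def test_no_null_keys(conn):
    nulls = conn.execute(
        "SELECT COUNT(*) FROM daily_installs WHERE app_id IS NULL OR day IS NULL"
    ).fetchone()[0]
    assert nulls == 0

def test_installs_non_negative(conn):
    bad = conn.execute(
        "SELECT COUNT(*) FROM daily_installs WHERE installs < 0"
    ).fetchone()[0]
    assert bad == 0
```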
Requirements:
- Over 2 years in a leadership role within a data team.
- Over 3 years of hands-on experience as a Data Engineer, with strong proficiency in Python and Airflow.
- Solid background in working with both SQL and NoSQL databases and data warehouses, including but not limited to MySQL, Presto, Athena, Couchbase, MemSQL, and MongoDB.
- Bachelorβs degree or higher in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical discipline.
- Highly organized with a proactive mindset.
- Strong service orientation and a collaborative approach to problem-solving.
Nice to have skills:
- Previous experience as a NOC or DevOps engineer is a plus.
- Familiarity with PySpark is considered an advantage.
What we can offer you
- Remote work from Poland, flexible working schedule
- Accounting support & consultation
- Opportunities for learning and developing on the project
- 20 working days of annual vacation
- 5 days paid sick leaves/days off; state holidays
- Working equipment provided
-
· 34 views · 8 applications · 4d
Data Engineer (with Azure)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate
Would you like to increase your cloud expertise? We're looking for a Data Engineer to join an international cloud technology company.
This is a leading Microsoft & Azure partner providing cloud services in Europe and East Asia.
Working with different customer domains plus a highly professional team means growth! Let's discuss.
Main Responsibilities:
The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline that supports customer requirements.
You will work on cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, in implementation projects for corporate clients across the EU, the CIS, the United Kingdom, and the Middle East.
Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.
Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and model optimization.
Mandatory Requirements:
- 2+ years of experience, ideally within a Data Engineer role
- Understanding of data modeling, data warehousing concepts, and ETL processes
- Experience with Azure Cloud technologies
- Experience in distributed computing principles and familiarity with key architectures; broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)
- Understanding of landing and staging areas, data cleansing, data profiling, data security and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)
- SQL skills
- Communication and interpersonal skills
- English: B2
- Ukrainian language
Experience in SQL migration from on-premises to cloud, data modernization and migration, advanced analytics projects, and/or professional certification in data & analytics will be beneficial.
We offer:
- Professional growth and international certification
- Free technical and business trainings and the best bootcamps (worldwide, including courses at Microsoft HQ in Redmond)
- Innovative data & analytics projects, practical experience with cutting-edge Azure data & analytics technologies on various customers' projects
- Great compensation and individual bonus remuneration
- Medical insurance
- Long-term employment
- Individual development plan
-
· 30 views · 7 applications · 3d
Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
About the Role:
We are seeking a Senior Data Engineer with deep expertise in distributed data processing and cloud-native architectures. This is a unique opportunity to join a forward-thinking team that values technical excellence, innovation, and business impact. You will be responsible for designing, building, and maintaining scalable data solutions that power critical business decisions in a fast-paced B2C environment.
Responsibilities:
- Design, develop, and maintain robust ETL/ELT data pipelines using Apache Spark and AWS Glue
- Build Zero-ETL pipelines using AWS services such as Kinesis Firehose, Lambda, and SageMaker (a sketch of this ingestion style follows after this list)
- Write clean, efficient, and well-tested code primarily in Python and SQL
- Collaborate with data scientists, analysts, and product teams to ensure timely and accurate data delivery
- Optimize data workflows for performance, scalability, and cost-efficiency
- Integrate data from various sources (structured, semi-structured, and unstructured)
- Implement monitoring, alerting, and logging to ensure data pipeline reliability
- Contribute to data governance, documentation, and compliance efforts
- Work in an agile environment, participating in code reviews, sprint planning, and team ceremonies
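As a hedged illustration of the Zero-ETL style of ingestion mentioned above, the sketch below pushes events into a Kinesis Data Firehose delivery stream, which can load a destination (e.g., S3 or Redshift) directly without a separate batch ETL job; the stream name and event shape are invented:

```python
# Publish events to a Kinesis Data Firehose delivery stream via boto3.
import json

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

def publish_event(event: dict) -> None:
    firehose.put_record(
        DeliveryStreamName="user-activity-stream",  # hypothetical stream
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

publish_event({"user_id": "u-42", "action": "purchase", "amount": 19.99})
```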
Expected Qualifications:
- 5+ years of professional experience in data engineering
- Advanced proficiency in Apache Spark, Python, and SQL
- Hands-on experience with AWS Glue, Kinesis Firehose, and Zero-ETL pipelines
- Familiarity with AWS Lambda and SageMaker for serverless processing and ML workflows
- Experience with ETL orchestration tools such as Airflow or dbt
- Solid understanding of cloud computing concepts, especially within AWS
- Strong problem-solving skills and the ability to work independently and collaboratively
- Experience working in B2C companies or data-rich product environments
- Degree in Computer Science or related field (preferred but not required)
- Bonus: Exposure to JavaScript and data science workflows
-
· 72 views · 16 applications · 1d
Jnr/Middle Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 1 year of experience · Upper-Intermediate
Position responsibilities:
- Migrate client data from other solutions to the Jetfile data model (mostly MS SQL)
- Write custom reports to be used inside the Jetfile application, in the form of custom SQL queries, reports, and dashboards
- Analyze and optimize performance under big data loads
- Create migrations for Jetfile internal products
Must have:
- Bachelor's degree in Computer Science, Engineering, or related field
- Ability to work independently and remotely
- 1 year of experience
- Strong SQL skills
- Must have experience with business application development
Nice to have:
- Leading experience
- Azure knowledge
- ERP, accounting, fintech or insurance tech experience
-
· 32 views · 6 applications · 12h
Data Engineer (Middle Level)
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Intermediate · Ukrainian Product 🇺🇦
Experience Level: Middle to Senior
Employment Type: Full-time
We are looking for a skilled and detail-oriented Data Engineer or Data Scientist to join our team. You will be working on complex data processing tasks, developing algorithms for data extrapolation, and building robust data infrastructures.
Requirements:
- Solid understanding of relational database design principles.
- Strong knowledge of ANSI SQL, including CTEs and window functions.
- Proficiency in at least one programming language: Python or R.
- Analytical mindset with a desire to delve into complex data processing and extrapolation challenges.
- Strong teamwork skills and stress resilience.
- Goal-oriented and result-driven approach.
Preferred Qualifications:
- Familiarity with cloud storage systems.
- Experience developing distributed systems.
- In-depth knowledge and experience with writing advanced PostgreSQL procedures.
Key Responsibilities:
- Design data structures and schemas.
- Develop procedures and modules for data ingestion from various sources using SQL and Python/R.
- Contribute to the development of data processing algorithms.
- Program data extrapolation algorithms.
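By way of illustration, a minimal extrapolation of the kind this role involves could be a least-squares trend fit projected forward; the numbers here are invented and a production model would be considerably more careful:

```python
# Fit a linear trend to observed monthly values and extrapolate three months ahead.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6])
values = np.array([10.2, 11.1, 12.3, 12.9, 14.2, 15.1])

slope, intercept = np.polyfit(months, values, deg=1)  # least-squares linear fit

future = np.array([7, 8, 9])
forecast = slope * future + intercept
print(dict(zip(future.tolist(), np.round(forecast, 2).tolist())))
```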
If you are passionate about data, algorithms, and working in a collaborative environment, we would love to hear from you!
-
· 7 views · 0 applications · 5h
Middle BigData Engineer to $2300
Full Remote · Ukraine · 2 years of experience
Description of the project:
We are looking for a Middle Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies.
Your qualification:
- 2+ years of experience in Big Data engineering.
- Solid knowledge and practical experience with OLAP technologies.
- Strong SQL skills and experience with schema design.
- Proficiency in Java or Python for process automation.
- Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus.
- Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis.
- Understanding of distributed systems such as Spark, Hadoop, etc.
- Experience working with Kafka or other message broker systems.
- Familiarity with data governance tools and data science/analytics workbenches.
- Experience with Ezmeral Data Fabric is a plus.
- Knowledge of UNIX and experience in Shell scripting for automation tasks.
- Technical English proficiency (reading and understanding documentation).
Responsibilities:
- Design and implement data extraction, processing, and transformation pipelines based on MPP architecture.
- Build and maintain data warehouses and OLAP-based systems.
- Design database schemas and develop dimensional data models.
- Work with distributed systems and clusters for big data processing.
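For a sense of what such a pipeline can look like, here is a hedged PySpark Structured Streaming sketch that reads telecom-style events from Kafka and aggregates them; the broker, topic, schema and console sink are all illustrative, and the spark-sql-kafka connector must be on the classpath:

```python
# Kafka -> Spark Structured Streaming -> aggregated usage per cell (toy sink).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("telecom-usage").getOrCreate()

schema = (StructType()
          .add("msisdn", StringType())
          .add("cell_id", StringType())
          .add("bytes_used", LongType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
          .option("subscribe", "usage-events")                # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

per_cell = events.groupBy("cell_id").agg(F.sum("bytes_used").alias("total_bytes"))

query = (per_cell.writeStream
         .outputMode("complete")   # full recomputed aggregate each trigger
         .format("console")        # a real job would write to the warehouse
         .start())
query.awaitTermination()
```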
We are delighted to provide you with the following benefits:
- Opportunities for growth and development within the project
- Flexible working hours
- Option to work remotely or from the office
-
· 6 views · 1 application · 5h
Senior BigData Engineer to $3700
Full Remote · Ukraine · 4 years of experience · Intermediate
Description of the project:
We are looking for a Senior Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies.
Your qualification:
- 4+ years of experience in Big Data engineering.
- Solid knowledge and practical experience with OLAP technologies.
- Strong SQL skills and experience with schema design.
- Proficiency in Java or Python for process automation.
- Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus.
- Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis.
- Understanding of distributed systems such as Spark, Hadoop, etc.
- Experience working with Kafka or other message broker systems.
- Familiarity with data governance tools and data science/analytics workbenches.
- Experience with Ezmeral Data Fabric is a plus.
- Knowledge of UNIX and experience in Shell scripting for automation tasks.
- Technical English proficiency (reading and understanding documentation).
Responsibilities:
- Design and implement data extraction, processing, and transformation pipelines based on MPP architecture.
- Build and maintain data warehouses and OLAP-based systems.
- Design database schemas and develop dimensional data models.
- Work with distributed systems and clusters for big data processing.
We are delighted to provide you with the following benefits:
- Opportunities for growth and development within the project
- Flexible working hours
- Option to work remotely or from the office
-
· 7 views · 2 applications · 4h
Senior BigData Engineer to $4000
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Intermediate
We are looking for a Senior Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies.
Requirements:
- 4+ years of experience in Big Data engineering.
- Solid knowledge and practical experience with OLAP technologies.
- Strong SQL skills and experience with schema design.
- Proficiency in Java or Python for process automation.
- Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus.
- Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis.
- Understanding of distributed systems such as Spark, Hadoop, etc.
- Experience working with Kafka or other message broker systems.
- Familiarity with data governance tools and data science/analytics workbenches.
- Experience with Ezmeral Data Fabric is a plus.
- Knowledge of UNIX and experience in Shell scripting for automation tasks.
- Technical English proficiency (reading and understanding documentation).
Responsibilities:
- Design and implement data extraction, processing, and transformation pipelines based on MPP architecture.
- Build and maintain data warehouses and OLAP-based systems.
- Design database schemas and develop dimensional data models.
- Work with distributed systems and clusters for big data processing.
We are delighted to provide you with the following benefits:
- Opportunities for growth and development within the project.
- Flexible working hours.
- Option to work remotely or from the office.
-
· 6 views · 1 application · 4h
Middle BigData Engineer to $2500
Full Remote · Countries of Europe or Ukraine · 3 years of experience · Intermediate
We are looking for a Middle Big Data Engineer to join a large-scale telecommunications project. This role involves designing and implementing robust data processing systems, building data warehouses, and working with modern big data tools and technologies.
Requirements:
- 2+ years of experience in Big Data engineering.
- Solid knowledge and practical experience with OLAP technologies.
- Strong SQL skills and experience with schema design.
- Proficiency in Java or Python for process automation.
- Experience with NoSQL databases such as HBase, Elasticsearch; familiarity with Redis or MongoDB is a plus.
- Hands-on experience with Vertica or other DBMS suitable for large-scale data analysis.
- Understanding of distributed systems such as Spark, Hadoop, etc.
- Experience working with Kafka or other message broker systems.
- Familiarity with data governance tools and data science/analytics workbenches.
- Experience with Ezmeral Data Fabric is a plus.
- Knowledge of UNIX and experience in Shell scripting for automation tasks.
- Technical English proficiency (reading and understanding documentation).
Responsibilities:
- Design and implement data extraction, processing, and transformation pipelines based on MPP architecture.
- Build and maintain data warehouses and OLAP-based systems.
- Design database schemas and develop dimensional data models.
- Work with distributed systems and clusters for big data processing.
We are delighted to provide you with the following benefits:
- Opportunities for growth and development within the project
- Flexible working hours
- Option to work remotely or from the office
-
· 12 views · 3 applications · 3h
Data Ops Engineer
Full Remote · Worldwide · 5 years of experience · Advanced/Fluent
What you'll do
Become part of an iconic brand that is set to revolutionize the electric pick-up truck & rugged SUV marketplace by achieving the following:
Contribute to the design, implementation, and maintenance of the overall cloud infrastructure data platform using modern IaC (Infrastructure as Code) practices.
Work closely with software development and systems teams to build Data Integration solutions.
Design and build data models using tools such as Lucid, Talend, Erwin, and MySQL Workbench.
Define and enhance enterprise data model to reflect relationships and dependencies.
Review application data systems to ensure adherence to data governance policies.
Design and build ETL and ELT (Python) infrastructure, automation, and solutions to transform data as required.
Design and Implement BI dashboards to visualize Trends and Forecasts.
Design and implement data infrastructure components, ensuring high availability, reliability, scalability, and performance.
Design, train, and deploy ML models.
Implement monitoring solutions to proactively identify and address potential issues.
Collaborate with security teams to ensure the data platform meets industry standards and compliance requirements.
Collaborate with cross-functional teams, including product managers, developers, and business partners to ensure robust and reliable systems.
What youβll bring
We expect all employees to have integrity, curiosity, and resourcefulness, and to strive to exhibit a positive attitude as well as a growth mindset. You'll be comfortable with change and flexible in a fast-paced, high-growth environment. You'll take a collaborative approach to achieve ambitious goals. Here's what else you'll bring:
Bachelor's degree in computer science, information technology, or related field or equivalent work experience.
5+ years of hands-on experience as DataOps Engineer in a manufacturing or automotive environment.
Experience with streaming and event-based architecture.
Proficient in building data pipelines using languages such as Python and SQL.
Experience with AWS based data services such as Glue, Kinesis, Firehose or other comparable services.
Experience with Structured, unstructured and time series databases.
Solid understanding of cloud data storage solutions such as RDS, DynamoDB, DocumentDB, Mongo, Cassandra, Influx.
Experience implementing data lakehouse solutions using Databricks.
Several years of experience working with cloud platforms such as AWS and Azure.
Experience with infrastructure as code (Terraform).
Proven ability to develop and deploy scalable ML models.
Hands-on experience in designing, training, and deploying ML models (a minimal sketch follows after this list).
Strong ability to extract actionable insights using ML techniques.
Ability to leverage ML algorithms for forecasting trends and decision-making.
Excellent problem-solving and troubleshooting skills. When a problem occurs, you run towards it, not away from it.
Effective communication and collaboration skills. You treat colleagues with respect. You have a desire for clean implementations but are also humble in discussing alternative solutions and options.
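As a hedged sketch of that train-and-deploy loop (toy data, invented file name; a real pipeline would add evaluation, versioning and monitoring):

```python
# Train a small trend model, persist it, and reload it for serving.
import joblib
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy training data: feature = week number, target = units produced.
X = np.arange(1, 11).reshape(-1, 1)
y = np.array([50, 54, 57, 63, 66, 70, 74, 77, 83, 86])

model = LinearRegression().fit(X, y)

joblib.dump(model, "forecast_model.joblib")      # persist for deployment
reloaded = joblib.load("forecast_model.joblib")
print(reloaded.predict(np.array([[11], [12]])))  # forecast the next two weeks
```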
-
· 8 views · 0 applications · 3h
Technical Lead/Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
Project Description:
As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments, e.g., Risk, FP&A, Tax, Order to Cash, the best possible platform for all of their Analytics, Reporting & Data needs.
Collaborating closely with a talented team of engineers and product managers, you'll lead the on-time delivery of features that meet the evolving needs of our business. You will be responsible for conceptualizing, designing, building and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real time.
If you're passionate about owning end-to-end solution delivery, thinking ahead, and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
Responsibilities:
Strategy and Project Delivery
- Together with the business Subject Matter Experts and Product Manager, conceptualize, define, shape and deliver the roadmap to achieving the company's priorities and objectives
- Lead business requirement gathering sessions and translate them into an actionable delivery backlog for the team to build
- Lead technical decisions in the process to achieve excellence and contribute to organizational goals
- Lead the D&A teams in planning and scheduling the delivery process, including defining project scope, milestones, risk mitigation and timeline management, allocating tasks to team members and ensuring that the project stays on track
- Take full responsibility for the D&A teams' successful, on-time delivery of new products; set up end-to-end processes and operational plans, e.g., collecting user requirements, designing, building and testing solutions, and ops maintenance
- Act as a technical leader with strategic thinking for the team and the organization; a visionary who can deliver strategic projects and products for the organization
- Own the data engineering processes and architecture across the teams
Technology, Craft & Delivery
- Experience in designing and architecting data engineering frameworks that deal with high volumes of data
- Experience in large-scale data processing and workflow management
- Mastery in technology leadership
- Engineering delivery, quality and practices within your own team
- Participating in defining, shaping and delivering the wider engineering strategic objectives
- Ability to get into the technical detail (where required) to coach, support and mentor the team
- Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
- Learn new technologies and keep abreast of existing technologies to share learnings and apply them to a variety of projects when needed
Mandatory Skills Description:
Role Qualifications and Requirements:
- Bachelor's degree
- At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced and complex environment to deliver complex projects or products on time and with demonstrable positive results
- 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3 and Kubernetes
- Solid working experience with data engineering platforms involving languages like PySpark, Python or other equivalent scripting languages
- Experience working with public cloud providers and platforms such as Snowflake and AWS
- Experience working in complex stakeholder organizations
- A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions
- Strong technical skills (coding & system design) with the ability to get hands-on with your team when needed
- Excellent communicator with strong stakeholder management experience, good commercial awareness and technical vision
- You have driven successful technical, business and people-related initiatives that improved productivity, performance and quality
- You are a humble and thoughtful technology leader; you lead by example and gain your teammates' respect through actions, not the title
- Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams
-
· 80 views · 7 applications · 6d
Data Quality Engineer
Ukraine · Product · 1 year of experience
We are looking for a Data Quality Engineer who wants to work in a dynamic environment and shares our values of mutual trust, openness and initiative.
PrivatBank is the largest bank in Ukraine and one of the most innovative banks in the world. It holds leading positions in all of the industry's financial indicators and accounts for about a quarter of the country's entire banking system.
We want to find a goal-oriented professional who can multitask and is focused on quality and results.
About the project: the team builds modern processes for assuring and controlling data quality across the company, aimed at improving data-driven decision-making and the quality of digital services.
Key responsibilities:
- Design, develop and implement processes and procedures for data collection, storage, usage and security
- Determine the level of trust in data sources
- Assure and guarantee the quality of corporate data
- Document and enforce the rules for data collection, storage and usage
- Monitor and resolve incidents related to data quality.
Key requirements:
- Higher technical education
- More than 2 years of experience in the data domain
- Experience in the banking sector
- Experience working with large data sets
- Experience with SQL
- Understanding of database theory (SQL, NoSQL, NewSQL);
- Knowledge of the fundamentals of designing and working with enterprise data warehouses and data lakes (Data Warehouse, Data Lake), as well as ETL/ELT processes;
Nice to have:
- Experience with Big Data
We offer our employees:
- Work at the largest and most innovative bank in Ukraine
- Official employment and 24 calendar days of vacation
- Competitive salary
- Medical insurance and corporate mobile communications
- Corporate training
- A modern, comfortable office
- Interesting projects, ambitious tasks and dynamic growth