Jobs
· 25 views · 0 applications · 20d
Senior Data Engineer (Data Science/MLOps Background)
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
Our Client is seeking a proactive Senior Data Engineer to join their team.
As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry.
Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives.
The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.
Key Responsibilities:
- Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
- Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
- Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
- Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
- Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
- Support the implementation of basic MLOps practices, such as model versioning and monitoring.
- Assist in optimizing data pipelines to improve machine learning workflows.
- Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
- Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
Tools and skills you will use in this role:
- Palantir Foundry
- Python
- PySpark
- SQL
- TypeScript
Required:
- 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
- Strong proficiency in Python and PySpark;
- Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
- Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
- Expertise in data modeling, data warehousing, and ETL/ELT concepts;
- Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
- Proficiency in containerization technologies (e.g., Docker, Kubernetes);
- Familiarity with MLOps concepts, including model deployment and monitoring;
- Basic understanding of machine learning frameworks such as TensorFlow or PyTorch;
- Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
- Experience working with feature engineering and data preparation for machine learning models;
- Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
- Strong communication and teamwork abilities;
- Understanding of data security and privacy best practices;
- Strong mathematical, statistical, and algorithmic skills.
Nice to have:
- Certification in Cloud platforms, or related areas;
- Experience with the Apache Lucene search engine and REST web service APIs;
- Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
- Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
- Previous experience working with JavaScript and TypeScript.
Company offers:
- Flexible working format: remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
-
· 33 views · 2 applications · 20d
Data Engineer TL / Poland
EU · 4 years of experience · Upper-Intermediate
On behalf of our customer, we are seeking a DataOps Team Lead to join our global R&D department.
Our customer is an innovative technology company led by data scientists and engineers devoted to mobile app growth. They focus on solving the key challenge of growth for mobile apps by building Machine Learning and Big Data-driven technology that can both accurately predict what apps a user will like and connect them in a compelling way.
We are looking for a data-centric, quality-driven team leader focused on data process observability. This person is passionate about building high-quality data products and processes, as well as supporting production data processes and ad-hoc data requests.
As a DataOps Team Lead, you will be in charge of quality of service, as well as the quality of the data and knowledge platform, for all data processes. You'll coordinate with stakeholders and play a major role in driving the business by promoting the quality and stability of data performance and lifecycle, and by giving the operational groups the immediate ability to affect daily business outcomes.
Responsibilities:
- Process monitoring - managing and monitoring the daily data processes; troubleshooting server and process issues, escalating bugs and documenting data issues.
- Ad-hoc operation configuration changes - be the extension of the operations side into the data process, using Airflow and Python scripting alongside SQL to extract specific client-relevant data points and calibrate certain aspects of the process.
- Data quality automation - creating and maintaining data quality tests and validations using Python code and testing frameworks.
- Metadata store ownership - creating and maintaining the metadata store; managing the metadata system that holds metadata on tables, columns, calculations and lineage; participating in the design and development of the knowledge-base metastore and UX, so as to be the pivotal point of contact for questions about tables, columns and how they are connected (e.g., What is the data source? What is it used for? Why is this field calculated in this manner?).
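The data-quality automation described above boils down to small, repeatable checks run against each batch. A minimal, purely illustrative sketch (all function names here are invented for the example, not part of any framework the listing mentions):

```python
# Toy data-quality checks of the kind that could run inside an Airflow task.
# check_not_null / check_unique are hypothetical names for this sketch.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        value = row.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

rows = [
    {"id": 1, "country": "PL"},
    {"id": 2, "country": None},
    {"id": 2, "country": "UA"},
]

null_rows = check_not_null(rows, "country")  # row 1 has no country
dup_ids = check_unique(rows, "id")           # id 2 occurs twice
```

In practice, checks like these would be wrapped in a testing framework so a failing validation fails the pipeline run rather than silently passing bad data downstream.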
Requirements:
- Over 2 years in a leadership role within a data team.
- Over 3 years of hands-on experience as a Data Engineer, with strong proficiency in Python and Airflow.
- Solid background in working with both SQL and NoSQL databases and data warehouses, including but not limited to MySQL, Presto, Athena, Couchbase, MemSQL, and MongoDB.
- Bachelorβs degree or higher in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical discipline.
- Highly organized with a proactive mindset.
- Strong service orientation and a collaborative approach to problem-solving.
Nice to have skills:
- Previous experience as a NOC or DevOps engineer is a plus.
- Familiarity with PySpark is considered an advantage.
What we can offer you
- Remote work from Poland, flexible working schedule
- Accounting support & consultation
- Opportunities for learning and developing on the project
- 20 working days of annual vacation
- 5 days paid sick leaves/days off; state holidays
- Working equipment provided
-
· 52 views · 12 applications · 19d
Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
About the Role:
We are seeking a Senior Data Engineer with deep expertise in distributed data processing and cloud-native architectures. This is a unique opportunity to join a forward-thinking team that values technical excellence, innovation, and business impact. You will be responsible for designing, building, and maintaining scalable data solutions that power critical business decisions in a fast-paced B2C environment.
Responsibilities:
- Design, develop, and maintain robust ETL/ELT data pipelines using Apache Spark and AWS Glue
- Build Zero-ETL pipelines using AWS services such as Kinesis Firehose, Lambda, and SageMaker
- Write clean, efficient, and well-tested code primarily in Python and SQL
- Collaborate with data scientists, analysts, and product teams to ensure timely and accurate data delivery
- Optimize data workflows for performance, scalability, and cost-efficiency
- Integrate data from various sources (structured, semi-structured, and unstructured)
- Implement monitoring, alerting, and logging to ensure data pipeline reliability
- Contribute to data governance, documentation, and compliance efforts
- Work in an agile environment, participating in code reviews, sprint planning, and team ceremonies
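The ETL/ELT responsibilities above follow a standard extract-transform-load shape. As a toy illustration of that pattern only (in-memory Python; a real pipeline for this role would use Spark or AWS Glue, and every name below is invented for the sketch):

```python
# Minimal extract-transform-load sketch over newline-delimited JSON.
# Illustrates the pattern, not any specific AWS service API.
import json

def extract(raw_lines):
    """Parse non-empty lines of newline-delimited JSON (semi-structured input)."""
    return [json.loads(line) for line in raw_lines if line.strip()]

def transform(records):
    """Drop incomplete records and normalize the amount field to integer cents."""
    out = []
    for rec in records:
        if rec.get("amount") is None:
            continue  # data-quality rule: skip records with no amount
        out.append({"user": rec["user"], "cents": int(round(rec["amount"] * 100))})
    return out

def load(records, sink):
    """Append transformed records to a sink (here a plain list) and report count."""
    sink.extend(records)
    return len(records)

raw = ['{"user": "a", "amount": 1.5}', '{"user": "b", "amount": null}', '']
sink = []
loaded = load(transform(extract(raw)), sink)
# loaded == 1; sink == [{"user": "a", "cents": 150}]
```

Each stage being a pure function over records is what makes pipelines like this easy to test, monitor, and re-run idempotently.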
Expected Qualifications:
- 5+ years of professional experience in data engineering
- Advanced proficiency in Apache Spark, Python, and SQL
- Hands-on experience with AWS Glue, Kinesis Firehose, and Zero-ETL pipelines
- Familiarity with AWS Lambda and SageMaker for serverless processing and ML workflows
- Experience with ETL orchestration tools such as Airflow or dbt
- Solid understanding of cloud computing concepts, especially within AWS
- Strong problem-solving skills and the ability to work independently and collaboratively
- Experience working in B2C companies or data-rich product environments
- Degree in Computer Science or related field (preferred but not required)
- Bonus: Exposure to JavaScript and data science workflows
-
· 137 views · 34 applications · 17d
Jnr/Middle Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 1 year of experience · Upper-Intermediate
Position responsibilities:
- Migrate clients' data from other solutions to the Jetfile data model (mostly MS SQL)
- Write custom reports to be used inside the Jetfile application, in the form of custom SQL queries, reports, and dashboards
- Analyze and optimize performance under big data loads
- Create migrations for Jetfile internal products
Must have:
- Bachelor's degree in Computer Science, Engineering, or related field
- Ability to work independently and remotely
- 1 year of experience
- Strong SQL skills
- Experience with business application development
Nice to have:
- Leading experience
- Azure knowledge
- ERP, accounting, fintech or insurance tech experience
-
· 35 views · 1 application · 16d
Technical Lead/Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
Project Description:
As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments (e.g., Risk, FP&A, Tax, Order to Cash) the best possible platform for all of their Analytics, Reporting & Data needs.
Collaborating closely with a talented team of engineers and product managers, you'll lead the on-time delivery of features that meet the evolving needs of our business. You will be responsible for conceptualizing, designing, building and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real time.
If you're passionate about owning end-to-end solution delivery, thinking ahead, and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
Responsibilities:
Strategy and Project Delivery
- Together with the business Subject Matter Experts and Product Manager, conceptualize, define, shape and deliver the roadmap to achieve company priorities and objectives
- Lead business requirement gathering sessions and translate them into an actionable delivery backlog for the team to build
- Lead technical decisions in the process to achieve excellence and contribute to organizational goals
- Lead the D&A teams in planning and scheduling the delivery process, including defining project scope, milestones, risk mitigation and timeline management, allocating tasks to team members and ensuring that the project stays on track
- Take full responsibility for successful delivery by the D&A teams: deliver new products on time and set up processes and operational plans end to end, e.g., collecting user requirements, designing, building & testing solutions, and Ops maintenance
- Act as a technical leader with strategic thinking for the team and the organization: a visionary who can deliver strategic projects and products for the organization
- Own the data engineering processes and architecture across the teams
Technology, Craft & Delivery
- Experience in designing and architecting data engineering frameworks that deal with high volumes of data
- Experience in large-scale data processing and workflow management
- Mastery in technology leadership
- Engineering delivery, quality and practices within your own team
- Participating in defining, shaping and delivering the wider engineering strategic objectives
- Ability to get into the technical detail (where required) to provide technical coaching, support and mentoring to the team
- Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
- Learn new technologies and keep abreast of existing technologies to be able to share learnings and apply them to a variety of projects when needed
Mandatory Skills Description:
Role Qualifications and Requirements:
- Bachelor's degree
- At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced and complex environment to deliver complex projects or products on time, with demonstrable positive results
- 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3 and Kubernetes
- Solid working experience with data engineering platforms and languages such as PySpark, Python or other equivalent scripting languages
- Experience working with public cloud platforms such as Snowflake and AWS
- Experience working in complex stakeholder organizations
- A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions
- Strong technical skills (coding & system design) with the ability to get hands-on with your team when needed
- Excellent communicator with strong stakeholder management experience, good commercial awareness and technical vision
- A track record of driving successful technical, business and people-related initiatives that improved productivity, performance and quality
- A humble and thoughtful technology leader who leads by example and gains teammates' respect through actions, not title
- Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams
-
· 30 views · 11 applications · 15d
Senior Data Engineer
Full Remote · Azerbaijan, Brazil, Colombia, Kazakhstan · 5 years of experience · Upper-Intermediate
We are seeking a proactive Senior Data Engineer to join our vibrant team.
As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.
Key Responsibilities:
- Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
- Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
- Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
- Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
- Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
- Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
Tools and skills you will use in this role:
- Palantir Foundry
- Python
- PySpark
- SQL
- TypeScript
Required:
- 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
- Strong proficiency in Python and PySpark;
- Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
- Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
- Expertise in data modeling, data warehousing, and ETL/ELT concepts;
- Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
- Proficiency in containerization technologies (e.g., Docker, Kubernetes);
- Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
- Strong communication and teamwork abilities;
- Understanding of data security and privacy best practices;
- Strong mathematical, statistical, and algorithmic skills.
Nice to have:
- Certification in Cloud platforms, or related areas;
- Experience with the Apache Lucene search engine and REST web service APIs;
- Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
- Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
- Previous experience working with JavaScript and TypeScript.
We offer:
- Flexible working format: remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
-
· 16 views · 1 application · 15d
Senior Data Engineer with Snowflake
Full Remote · Ukraine · 7 years of experience · Upper-Intermediate
Project Description:
As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments (e.g., Risk, FP&A, Tax, Order to Cash) the best possible platform for all of their Analytics, Reporting & Data needs.
Collaborating closely with a talented team of engineers and product managers, you'll lead the on-time delivery of features that meet the evolving needs of our business. You will be responsible for conceptualizing, designing, building and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real time.
If you're passionate about owning end-to-end solution delivery, thinking ahead, and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
Responsibilities:
Strategy and Project Delivery
- Together with the business Subject Matter Experts and Product Manager, conceptualize, define, shape and deliver the roadmap to achieve company priorities and objectives
- Lead business requirement gathering sessions and translate them into an actionable delivery backlog for the team to build
- Lead technical decisions in the process to achieve excellence and contribute to organizational goals
- Lead the D&A teams in planning and scheduling the delivery process, including defining project scope, milestones, risk mitigation and timeline management, allocating tasks to team members and ensuring that the project stays on track
- Take full responsibility for successful delivery by the D&A teams: deliver new products on time and set up processes and operational plans end to end, e.g., collecting user requirements, designing, building & testing solutions, and Ops maintenance
- Act as a technical leader with strategic thinking for the team and the organization: a visionary who can deliver strategic projects and products for the organization
- Own the data engineering processes and architecture across the teams
Technology, Craft & Delivery
- Experience in designing and architecting data engineering frameworks that deal with high volumes of data
- Experience in large-scale data processing and workflow management
- Mastery in technology leadership
- Engineering delivery, quality and practices within your own team
- Participating in defining, shaping and delivering the wider engineering strategic objectives
- Ability to get into the technical detail (where required) to provide technical coaching, support and mentoring to the team
- Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
- Learn new technologies and keep abreast of existing technologies to be able to share learnings and apply them to a variety of projects when needed
Mandatory Skills Description:
Role Qualifications and Requirements:
- Bachelor's degree
- At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced and complex environment to deliver complex projects or products on time, with demonstrable positive results
- 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3 and Kubernetes
- Solid working experience with data engineering platforms and languages such as PySpark, Python or other equivalent scripting languages
- Experience working with public cloud platforms such as Snowflake and AWS
- Experience working in complex stakeholder organizations
- A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions
- Strong technical skills (coding & system design) with the ability to get hands-on with your team when needed
- Excellent communicator with strong stakeholder management experience, good commercial awareness and technical vision
- A track record of driving successful technical, business and people-related initiatives that improved productivity, performance and quality
- A humble and thoughtful technology leader who leads by example and gains teammates' respect through actions, not title
- Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams
Languages:
- English: B2 Upper-Intermediate
-
· 72 views · 2 applications · 15d
Middle Strong/Senior Data Engineer
Full Remote Β· Ukraine Β· 2 years of experience Β· Upper-IntermediateOur mission at Geniusee is to help businesses thrive through tech partnership and strengthen the engineering community by sharing knowledge and creating opportunities Our values are Continuous Growth, Team Synergy, Taking Responsibility, Conscious...Our mission at Geniusee is to help businesses thrive through tech partnership and strengthen the engineering community by sharing knowledge and creating opportunities πΏOur values are Continuous Growth, Team Synergy, Taking Responsibility, Conscious Openness and Result Driven. We offer a safe, inclusive and productive environment for all team members, and weβre always open to feedbackπ
If you want to work from home or work in the city center of Kyiv, great β apply right now.
About the project:
Generative AI technologies are rapidly changing how digital content is created and consumed. However, many of these systems are trained on vast amounts of data, including articles, videos, and other creative works, often without the knowledge or consent of the original creators. As a result, publishers, journalists, and content producers face the risk of losing both visibility and critical revenue streams such as advertising, subscriptions, and licensing.
Our project addresses this issue by developing a system that allows AI platforms to identify when specific content has influenced a generated result. This enables transparent attribution and the possibility for content creators to receive compensation based on how often their work is used. The goal is to build a sustainable ecosystem where creators are fairly rewarded, while AI-generated content remains trustworthy and ethically grounded.
Requirements:
β 3+ years of experience in Data Engineering;
β Solid Python programming skills, especially in data processing and system automation;
β Proven experience with Airflow, Kubeflow, or Kafka for orchestrating data workflows;
β Familiarity with search engine concepts and indexing;
β Experience working with structured and semi-structured web data (HTML, JSON, APIs);
β Ability to work with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure);
β English: Upper-Intermediate+.
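The "structured and semi-structured web data" requirement above can be illustrated with a minimal sketch: flattening a JSON API payload into flat records ready for indexing. The payload shape and field names here are invented for illustration, not taken from the posting.

```python
import json

# Hypothetical API payload; the schema is an assumption for this sketch.
raw = '''
{"items": [
  {"title": "Article A", "source": {"name": "Site 1"}, "tags": ["ai", "news"]},
  {"title": "Article B", "source": {"name": "Site 2"}, "tags": []}
]}
'''

def flatten(payload: str) -> list[dict]:
    """Normalize semi-structured JSON into flat records for downstream indexing."""
    data = json.loads(payload)
    return [
        {
            "title": item["title"],
            # .get() guards against missing optional fields in real-world feeds
            "source": item.get("source", {}).get("name"),
            "tag_count": len(item.get("tags", [])),
        }
        for item in data["items"]
    ]

records = flatten(raw)
print(records[0])  # {'title': 'Article A', 'source': 'Site 1', 'tag_count': 2}
```

In a production pipeline the same normalization step would typically run inside an Airflow task, one of the orchestrators the posting lists.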
What you will get:
β Competitive salary and good compensation package;
β Exciting, challenging and stable startup projects with a modern stack;
β Corporate English course;
β Ability to practice English and communication skills through permanent interaction with clients from all over the world;
β Professional study compensation, online courses and certifications;
β Career development opportunity, semi-annual and annual salary review process;
β Necessary equipment to perform work tasks;
β VIP medical insurance or sports coverage;
β Informal and friendly atmosphere;
β The ability to focus on your work: a lack of bureaucracy and micromanagement;
β Flexible working hours (start your day between 8:00 and 11:30);
β Team buildings, corporate events;
β Paid vacation (18 working days) and sick leaves;
β Cozy offices in 2 cities (Kyiv & Lviv) with electricity and Wi-Fi (generator & Starlink);
β Compensation for coworking (except for employees from Kyiv and Lviv);
β Corporate lunch + soft skills clubs;
β Unlimited work from home from anywhere in the world (remote);
β Geniusee has its own charity fund.
-
· 23 views · 0 applications · 15d
Technical Lead/Senior Data Engineer
Full Remote · Ukraine · 7 years of experience · Upper-Intermediate
Project Description:
As a Data & Application Engineer for FP&A, you are responsible for the engineering team and the technology that the team owns. You will not only work as a coach for your team but also as a technical leader, ensuring that the right technical decisions are made when building our data and reporting product(s).
As a data and analytics team, we are responsible for building a cloud-based Data Platform for BHI Global Services and its stakeholders across brands. We aim to provide our end users from different Finance departments, e.g., Risk, FP&A, Tax, Order to Cash, the best possible platform for all of their Analytics, Reporting & Data needs.
Collaborating closely with a talented team of engineers and product managers, you'll lead the on-time delivery of features that meet the evolving needs of our business. You will be responsible for conceptualizing, designing, building and maintaining data services through data platforms for the assigned business units. Together, we'll tackle complex engineering challenges to ensure seamless operations at scale and in (near) real time.
If you're passionate about owning end-to-end solution delivery, thinking ahead, and driving innovation, and you thrive in a fast-paced environment, join us in shaping the future of the Data and Analytics team!
Responsibilities:
Strategy and Project Delivery
— Together with the business Subject Matter Experts and Product Manager, conceptualize, define, shape and deliver the roadmap for achieving the company's priorities and objectives
— Lead business requirement gathering sessions and translate the results into an actionable delivery backlog for the team to build
— Lead technical decisions in the process to achieve excellence and contribute to organizational goals
— Lead the D&A teams in planning and scheduling the delivery process, including defining project scope, milestones, risk mitigation and timeline management, allocating tasks to team members and ensuring that the project stays on track
— Take full responsibility for the D&A teams' successful, on-time delivery of new products; set up processes and operational plans end to end, e.g., collecting user requirements, designing, building and testing the solution, and operations maintenance
— Act as a technical leader with strategic thinking for the team and the organization; a visionary who can deliver strategic projects and products for the organization
— Own the data engineering processes and architecture across the teams
Technology, Craft & Delivery
— Experience in designing and architecting data engineering frameworks that deal with high volumes of data
— Experience in large-scale data processing and workflow management
— Mastery in technology leadership
— Engineering delivery, quality and practices within your own team
— Participating in defining, shaping and delivering the wider engineering strategic objectives
— Ability to get into the technical detail (where required) to provide technical coaching, support and mentoring to the team
— Drive a culture of ownership and technical excellence, including reactive work such as incident escalations
— Learn new technologies and keep abreast of existing technologies to share learnings and apply them to a variety of projects when needed
Mandatory Skills Description:
Role Qualifications and Requirements:
β Bachelor degree
β At least 5 years of experience leading and managing one or multiple teams of engineers in a fast-paced and complex environment to deliver complex projects or products on time and with demonstrable positive results.
— 7+ years' experience with data at scale, using Kafka, Spark, Hadoop/YARN, MySQL (CDC), Airflow, Snowflake, S3 and Kubernetes
— Solid working experience with data engineering platforms, using languages such as PySpark, Python or equivalent scripting languages
— Experience with public cloud providers and data platforms such as AWS and Snowflake
— Experience working in complex, multi-stakeholder organizations
— A deep understanding of software or big data solution development in a team, and a track record of leading an engineering team in developing and shipping data products and solutions
— Strong technical skills (coding & system design) with the ability to get hands-on with your team when needed
— Excellent communicator with strong stakeholder management experience, good commercial awareness and technical vision
— You have driven successful technical, business and people-related initiatives that improved productivity, performance and quality
— You are a humble and thoughtful technology leader; you lead by example and gain your teammates' respect through actions, not title
— Exceptional and demonstrable leadership capabilities in creating unified and motivated engineering teams
- Languages:
- English: B2 Upper Intermediate
-
· 35 views · 1 application · 15d
Data Engineer 2070/06 to $5500
Office Work · Poland · 3 years of experience · Upper-Intermediate
Our partner is a leading programmatic media company, specializing in ingesting large volumes of data, modeling insights, and offering a range of products and services across Media, Analytics, and Technology. Among their clients are well-known brands such as Walmart, Barclaycard, and Ford.
The company has expanded to over 700 employees, with 15 global offices spanning four continents. With the imminent opening of a new office in Warsaw, we are seeking experienced
Data Engineers to join their expanding team.
The Data Engineer will be responsible for developing, designing, and maintaining end-to-end optimized, scalable Big Data pipelines for our products and applications. In this role, you will collaborate closely with team leads across various departments and receive support from peers and experts across multiple fields.
Opportunities:
- Possibility to work in a successful company
- Career and professional growth
- Competitive salary
- Hybrid work model (3 days per week work from office space in the heart of Warsaw city)
- Long-term employment with 20 working days of paid vacation, sick leaves, and national holidays
Responsibilities:
- Follow and promote best practices and design principles for Big Data ETL jobs
- Help in technological decision-making for the business's future data management and analysis needs by conducting POCs
- Monitor and troubleshoot performance issues on data warehouse/lakehouse systems
- Provide day-to-day support of data warehouse management
- Assist in improving data organization and accuracy
- Collaborate with data analysts, scientists, and engineers to ensure best practices in terms of technology, coding, data processing, and storage technologies
- Ensure that all deliverables adhere to our world-class standards
Skills:
- 3+ years of overall experience in Data Warehouse development and database design
- Deep understanding of distributed computing principles
- Experience with AWS cloud platform, and big data platforms like EMR, Databricks, EC2, S3, Redshift
- Experience with Spark, PySpark, Hive, Yarn, etc.
- Experience in SQL and NoSQL databases, as well as experience with data modeling and schema design
- Proficiency in programming languages such as Python for implementing data processing algorithms and workflows
- Experience with Presto and Kafka is a plus
- Experience with DevOps practices and tools for automating deployment, monitoring, and management of big data applications is a plus
- Excellent communication, analytical, and problem-solving skills
- Knowledge of scalable service architecture
- Experience in scalable data processing jobs on high-volume data
- Self-starter, proactive, and able to work to deadlines
- Nice to have: Experience with Scala
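The "scalable data processing jobs on high-volume data" point above is, at its core, about keeping memory bounded regardless of input size. A hedged sketch in plain Python (a real job of this kind would run on Spark/PySpark, as listed in the skills): process an event stream in fixed-size batches via generators, so only one batch is ever materialized.

```python
from itertools import islice

def batched(iterable, size):
    """Yield fixed-size batches so high-volume data never sits fully in memory."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

# Toy stand-in for a high-volume bid-event stream; field names are invented.
events = ({"bid": i % 3} for i in range(10))

totals = 0
for batch in batched(events, 4):
    # Each iteration holds at most 4 events in memory.
    totals += sum(e["bid"] for e in batch)
print(totals)  # 9
```

The same pattern (chunked reads, incremental aggregation) is what distributed engines apply per partition; the generator version just makes the memory argument explicit.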
If you are looking for an environment where you can grow professionally, learn from the best in the field, balance work and life, and enjoy a pleasant and enthusiastic atmosphere, submit your CV today and become part of our team!
Everything you do will help us lead the programmatic industry and make it better.
-
· 22 views · 4 applications · 14d
Data Engineering Team Lead
Poland · 5 years of experience · Upper-Intermediate
About Us
We are a leading Israeli IT company with 15 years of market experience and 8 years in Ukraine. Officially registered in Ukraine, Israel, and Estonia, we employ over 100 professionals worldwide. Specializing in successful startup collaboration, we offer services across e-commerce, Fintech, logistics, and healthcare.
Our client is a leading mobile app company that depends on high-volume, real-time data pipelines to drive user acquisition and engagement. This role is instrumental in maintaining data reliability, supporting production workflows, and enabling operational agility across teams. It is a hands-on leadership role that requires deep technical expertise, an ownership mindset, and strong collaboration with engineering and business stakeholders.
Key Requirements:
🔹 5+ years of experience in data engineering, with strong hands-on expertise in building and maintaining data pipelines;
🔹 At least 2 years in a team leadership or technical lead role;
🔹 Proficient in Python, SQL, and data orchestration tools such as Airflow;
🔹 Experience with both SQL and NoSQL databases, such as MySQL, Presto, Couchbase, MemSQL, or MongoDB;
🔹 Bachelor's degree in Computer Science, Engineering, or a related field;
🔹 English — Upper-Intermediate or higher.
Will be plus:
🔹 Background in NOC or DevOps environments is a plus;
🔹 Familiarity with PySpark is an advantage.
What you will do:
🔹 Oversee daily data workflows, troubleshoot failures, and escalate critical issues to ensure smooth and reliable operations;
🔹 Use Python, SQL, and Airflow to configure workflows, extract client-specific insights, and adjust live processes as needed;
🔹 Build and maintain automated data validation and testing frameworks to ensure data reliability at scale;
🔹 Own and evolve the metadata system, maintaining table lineage, field definitions, and data usage context to support a unified knowledge platform;
🔹 Act as the primary point of contact for operational teams and stakeholders, ensuring consistent collaboration and high data quality across the organization.
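The "automated data validation and testing frameworks" duty above can be sketched in a few lines: a rule-checking function that reports which rows fail which checks. The column names and thresholds here are illustrative assumptions, not from the posting.

```python
# Minimal data-validation sketch; "user_id" and "score" rules are hypothetical.
def validate_rows(rows):
    """Return (row_index, error) tuples for rows failing basic quality checks."""
    errors = []
    for i, row in enumerate(rows):
        if row.get("user_id") is None:
            errors.append((i, "missing user_id"))
        if not (0 <= row.get("score", 0) <= 100):
            errors.append((i, "score out of range"))
    return errors

rows = [{"user_id": 1, "score": 42}, {"user_id": None, "score": 150}]
print(validate_rows(rows))  # [(1, 'missing user_id'), (1, 'score out of range')]
```

A framework version of this would register many such rules per table and run them as a scheduled Airflow task, failing the pipeline when error counts cross a threshold.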
Interview stages:
🔹 HR Interview;
🔹 Pro-Interview;
🔹 Technical Interview;
🔹 Final Interview;
🔹 Reference Check;
🔹 Offer.
Why Join Us?
🔹 Be part of a friendly international team, working together on interesting global projects;
🔹 Enjoy many chances to grow, learn from mentors, and work on projects that make a real difference;
🔹 Join a team that loves fresh ideas and supports creativity and new solutions;
🔹 Work closely with clients, building great communication skills and learning directly from their needs;
🔹 Thrive in a workplace that values your needs, offering flexibility and a good balance between work and life. -
· 39 views · 0 applications · 14d
Lead Data Engineer (ETL)
Full Remote · Ukraine, Poland · 5 years of experience · Upper-Intermediate
Description:
Our Client is an enterprise operating worldwide. The product you will be working with provides management and data processing/handling capabilities for networks of the client's scientific lab equipment, such as microscopes. The main goals are:
Collection and centralized management of data outputs (measurement results, etc.) provided by the client's devices
Outdated data utilization
Managing large volumes of data acquired from measurement devices in the cloud securely and reliably
Seamless sharing of measurement data with collaborators
The ability to share measurement results and accelerate customer service.
Requirements:
We are looking for a Lead Data Engineer with at least 6 years of commercial experience in the development of data platforms for enterprise applications, able to lead a team of engineers and take responsibility for the technical solution.
— Proficiency in Airflow for workflow orchestration, dbt for data transformation, and SQL for data querying and manipulation.
— Experience in data modeling, ETL (Extract, Transform, Load) processes, and data warehousing concepts.
— Familiarity with cloud platforms (AWS) and their data services.
— Excellent analytical and problem-solving skills with meticulous attention to detail.
— Strong communication and collaboration skills with the ability to lead and motivate cross-functional teams.
Good to have: the ability to participate in onsite meetings.
Job responsibilities:
• Implement new solutions into the current system, both by refactoring and from scratch;
β’ Preparing the technical documentation;
β’ Participating in client meetings to understand business and user requirements and estimate tasks;
β’ Collaborating closely with other engineers, product owners and testers to identify and solve challenging problems;
β’ Taking part in defect investigation, bug fixing, troubleshooting. -
· 44 views · 5 applications · 13d
Power BI Developer to $3000
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
We're implementing a Microsoft-first analytics stack, designed to integrate data from Google Forms, ESRI ArcGIS / Survey123, and other HTTP-based sources into OneLake (Microsoft Fabric), with insights delivered through Power BI and access controlled via Microsoft 365 roles.
As a Power BI Engineer, you'll own the end-to-end data pipeline, from ingestion to visualization. You'll be responsible for building connectors, modeling data in OneLake, and delivering fast, accurate, and secure dashboards.
Key Responsibilities
- Develop and maintain Dataflows, Pipelines, and Power Query connectors for various sources including Google Forms, ArcGIS REST, Survey123, CSV/JSON, and other HTTP-based feeds
- Design efficient OneLake tables and implement star-schema models for Power BI reporting
- Deliver high-quality executive dashboards and enable self-service analytics for internal users
- Optimize dataset refresh, manage incremental data loads, and configure DirectQuery/Import modes
- Implement and manage row-level and role-based security, integrated with Microsoft 365 group permissions
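The star-schema modeling mentioned in the responsibilities reduces to a narrow fact table joined to descriptive dimension tables. A minimal sketch using in-memory SQLite (table and column names are invented for illustration; in the actual stack this would live in OneLake and be queried from Power BI):

```python
import sqlite3

# Star-schema sketch: one fact table keyed to a dimension table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_source (source_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_responses (
    response_id INTEGER PRIMARY KEY,
    source_id INTEGER REFERENCES dim_source(source_id),
    score INTEGER
);
INSERT INTO dim_source VALUES (1, 'Google Forms'), (2, 'Survey123');
INSERT INTO fact_responses VALUES (1, 1, 5), (2, 1, 3), (3, 2, 4);
""")

# Reports aggregate the fact table and label results via the dimension.
rows = con.execute("""
    SELECT d.name, COUNT(*) AS n, AVG(f.score) AS avg_score
    FROM fact_responses f JOIN dim_source d USING (source_id)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('Google Forms', 2, 4.0), ('Survey123', 1, 4.0)]
```

Keeping facts narrow and pushing descriptive attributes into dimensions is what lets Power BI aggregate quickly and apply row-level security on dimension values.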
Required Skills & Experience
- 4+ years of hands-on experience with Power BI development
- Strong knowledge of Microsoft Fabric and OneLake
- Experience building custom or REST-based Power Query connectors
- Proficiency in SQL for data modeling and performance optimization
- Practical experience with security models in Power BI, including row-level security and M365 role-based access
- Upper-intermediate or higher English for daily communication with international clients
Why Join Us?
Work on modern, mission-driven data solutions using cutting-edge Microsoft tools. Enjoy the freedom of remote work, a supportive team, and real ownership of your work.
-
· 28 views · 3 applications · 12d
Data Engineer
Full Remote · Poland · 4 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client, Harmonya, develops an AI-powered product data enrichment, insights, and attribution platform for retailers and brands. Its proprietary technology processes millions of online product listings, extracting valuable insights from titles, descriptions, ingredients, consumer reviews, and more.
Harmonya builds robust tools to help uncover insights about the consumer drivers of market performance, improve assortment and merchandising, categorize products, guide product innovation, and engage target audiences more effectively.
About the Role:
We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.
Key Responsibilities:
- Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
- Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
- Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
- Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
- Apply best practices for data security, integrity, and performance across all systems.
Required Competence and Skills:
- 4+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
- Proven track record in designing, developing, and deploying complex data applications.
- Hands-on experience with orchestration and processing tools (e.g. Apache Airflow and/or Apache Spark).
- Experience with public cloud platforms (preferably GCP) and cloud-native data services.
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).
- Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
- Strong verbal and written communication skills in English.
- Excellent communication skills and a strong team player, capable of working cross-functionally.
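The orchestration experience listed above (Airflow, Spark pipelines) boils down to running tasks in dependency order. The scheduling core of such a tool can be sketched with the standard library's topological sorter; the task names are invented for illustration.

```python
from graphlib import TopologicalSorter

# Toy DAG of pipeline tasks, mapping each task to the tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "enrich": {"transform"},
    "load": {"transform", "enrich"},
}

# static_order yields tasks so every dependency precedes its dependents.
order = list(TopologicalSorter(dag).static_order())
print(order)  # a valid order, e.g. ['extract', 'transform', 'enrich', 'load']
```

A real orchestrator adds scheduling, retries, and parallel execution of ready tasks on top of exactly this ordering; `TopologicalSorter` also raises `CycleError` if the graph is not a DAG, which is why such tools reject cyclic pipelines at parse time.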
Nice to have:
- Familiarity with data science tools and libraries (e.g., pandas, scikit-learn).
- Experience working with Docker and Kubernetes.
- Hands-on experience with CI tools such as GitHub Actions
Why Us?
We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in).
We provide full accounting and legal support in all countries we operate.
We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.
We offer a highly competitive package with yearly performance and compensation reviews.