Competitoor

Competitoor is a B2B data platform providing information about online prices. It is a SaaS product that combines web scraping and artificial intelligence, with a focus on analyzing the prices of goods and services sold online.

We were recently acquired by Deda Group.

    Crawling/Scraping Expert with experience in Node.js

    Full Remote · Worldwide · Product · 2 years of experience · B1 - Intermediate

    We are looking for a Crawling/Scraping Expert with experience in Node.js to join our technical team to enhance and scale our data collection systems in the field of price intelligence and e-commerce monitoring.

    The successful candidate will be responsible for designing, developing and maintaining high-performance, resilient crawlers capable of acquiring large volumes of data from e-commerce sites in a structured and scalable manner.

    Proven experience in web scraping/crawling is required, with a particular focus on anti-bot management, IP rotation and dynamic parsing.
    A technical test will be required during the selection process.
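
    For context, "dynamic parsing" here means rendering JavaScript-heavy pages before extracting data. As an illustration only, a minimal Node.js sketch using Playwright with an optional proxy; the proxy address and user-agent string are placeholders, not a description of our actual stack:

    ```js
    // Minimal sketch: fetch a JavaScript-rendered page through an optional
    // proxy with a fixed user agent. Proxy and UA values are placeholders.
    const { chromium } = require('playwright');

    async function fetchRenderedPage(url, proxyServer) {
      const browser = await chromium.launch({
        headless: true,
        // e.g. proxyServer = 'http://127.0.0.1:8080' (placeholder)
        proxy: proxyServer ? { server: proxyServer } : undefined,
      });
      const context = await browser.newContext({
        userAgent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)', // placeholder
        viewport: { width: 1366, height: 768 },
      });
      const page = await context.newPage();
      try {
        // 'networkidle' waits until XHR/fetch-driven content has settled
        await page.goto(url, { waitUntil: 'networkidle', timeout: 30000 });
        return await page.content(); // fully rendered HTML
      } finally {
        await browser.close();
      }
    }
    ```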

    The candidate will join the data team full-time (8 hours/day).
    This is expected to be the candidate's only job; no secondary or freelance work is permitted during our collaboration.

    Main responsibilities

    • Develop and maintain crawlers in Node.js to acquire structured data from e-commerce sites and marketplaces.
    • Manage anti-bot systems (CAPTCHA, honeypots, rate-limiting).
    • Implement proxy rotation, user-agent management, and retry mechanisms (see the sketch after this list).
    • Monitor the stability of scraping jobs and intervene in case of errors or crashes.
    • Collaborate with the data engineering and operations team to ensure the quality and timeliness of the data collected.
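
    The proxy rotation, user-agent management, and retry bullet above refers to patterns along the following lines. This is a minimal sketch using Axios; the proxy pool, user-agent list, and backoff schedule are placeholder assumptions, not our production setup:

    ```js
    // Minimal sketch: rotate proxies and user agents across retry attempts,
    // with exponential backoff. All pool entries below are placeholders.
    const axios = require('axios');

    const PROXIES = [
      { protocol: 'http', host: '127.0.0.1', port: 8080 }, // placeholder
      { protocol: 'http', host: '127.0.0.1', port: 8081 }, // placeholder
    ];
    const USER_AGENTS = [
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',       // placeholder
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)', // placeholder
    ];

    async function fetchWithRetry(url, maxAttempts = 3) {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        try {
          const res = await axios.get(url, {
            proxy: PROXIES[attempt % PROXIES.length], // rotate proxy per attempt
            headers: {
              'User-Agent': USER_AGENTS[attempt % USER_AGENTS.length],
            },
            timeout: 10000,
          });
          return res.data;
        } catch (err) {
          if (attempt === maxAttempts - 1) throw err;
          // exponential backoff: 1s, 2s, 4s, ...
          await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
        }
      }
    }
    ```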

    Minimum requirements

    • Proven experience in web scraping/crawling, with portfolio or examples of real projects.
    • Excellent knowledge of Node.js and its crawling libraries (e.g. Puppeteer, Cheerio, Playwright, Axios); see the parsing sketch after this list.
    • Experience in managing proxies, headless browsers, and anti-detection techniques.
    • Ability to write modular, reusable and maintainable code.
    • Excellent knowledge of relational databases.
    • Autonomy in debugging and solving complex problems.
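
    As a small illustration of the kind of structured extraction these libraries support, a minimal Cheerio sketch; the CSS selectors and price format are hypothetical, since every product page differs:

    ```js
    // Minimal sketch: parse static HTML into a structured record with Cheerio.
    // All selectors are hypothetical; real sites need per-site adaptation.
    const cheerio = require('cheerio');

    function extractProduct(html) {
      const $ = cheerio.load(html);
      const rawPrice = $('.product-price').first().text(); // hypothetical selector
      return {
        title: $('h1.product-title').text().trim(),        // hypothetical selector
        // naive normalization for simple prices like '€ 19,99'
        price: parseFloat(rawPrice.replace(/[^\d.,]/g, '').replace(',', '.')),
        inStock: $('.availability').text().toLowerCase().includes('in stock'),
      };
    }
    ```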

    Preferred requirements

    • Knowledge of other languages used in crawling (e.g. Python).
    • Familiarity with NoSQL databases (Elasticsearch, Redis).
    • Experience in SaaS environments or technology start-ups.

    Selection process
    The process includes:

    • Introductory interview.
    • Technical test on a real scraping case (estimated time: 2 hours).
    • Final technical interview with the team.