INCOAlliance

Joined in 2018
INCOAlliance is a software boutique: we capture the client’s specifications, turn them into a prototype or mockups, and follow up with customized development. The outcome is tailored software developed privately for the company’s needs.
As a boutique company, we focus on detailed technical specifications and pixel-perfect design. We aim to save your time and money from the very beginning, so clients can be confident they are getting knowledge and proven experience with the latest technologies.

As a boutique development company, INCO takes on a smaller number of projects, with a focus on a dedicated workflow and 24/7 direct communication. There is no need to push information through several departments to reach a goal: the client is always up to date on quality, progress, and project status. INCO knows every client in person and keeps this information as private as possible; personal and company NDAs are available on request. Every development and design stage is treated with care, and the client knows that only the best and most passionate people are involved.

    Data Scientist

    Hybrid Remote · Spain · 2 years of experience · B2 - Upper Intermediate
    We are looking for a Data Scientist (4 months project) to work on customer-facing projects, combining advanced data science techniques with machine learning and big data technologies to design and implement solutions that meet specific customer needs and business objectives.

     

    Requirements:

    • 2+ years data science experience
    • BSc or equivalent in Mathematics, Statistics, Computer Science, Economics, or related field 
    • Proficient in Apache Spark, Python/PySpark, and SQL
    • Experience with Hadoop ecosystem (Hive, Impala, HDFS, Sqoop) and pipeline optimization
    • Hands-on experience with AI agents and LLMs
    • Strong ML feature engineering and financial analytics skills
    • Experience with workflow tools (Airflow, MLflow, n8n) and Git
    • Customer-facing and team training abilities
    • Fluent English
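
    For illustration only (not part of the posting): the "ML feature engineering" skill listed above amounts to turning raw event data into per-entity model inputs. A minimal sketch in plain Python, assuming hypothetical names like `customer_features` (in production this would typically be a PySpark `groupBy`/`agg` over a DataFrame):

    ```python
    from collections import defaultdict

    def customer_features(transactions):
        """Aggregate per-customer features: transaction count, total, and max amount."""
        feats = defaultdict(lambda: {"n_tx": 0, "total": 0.0, "max_amount": 0.0})
        for customer, amount in transactions:
            f = feats[customer]
            f["n_tx"] += 1
            f["total"] += amount
            f["max_amount"] = max(f["max_amount"], amount)
        return dict(feats)

    tx = [("a", 10.0), ("a", 30.0), ("b", 5.0)]
    print(customer_features(tx))
    # {'a': {'n_tx': 2, 'total': 40.0, 'max_amount': 30.0}, 'b': {'n_tx': 1, 'total': 5.0, 'max_amount': 5.0}}
    ```

    The same aggregations (count, sum, max per key) are what a distributed engine parallelizes across partitions.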

       

    Responsibilities:

    • Deploy solutions and manage pilots
    • Design customer-specific technical solutions across project lifecycle
    • Lead data science teams and technical partnerships
    • Engineer features from diverse data sources
    • Extract insights and investigate anomalies in big data
    • Build technical relationships with customers and partners
    • Provide product requirements to management
    • Train customers on system usage and monitoring
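
    As an aside on the "investigate anomalies in big data" responsibility above: the simplest possible version of that task is a z-score filter. This is an assumed illustrative example in plain Python, not the company's actual method (at scale it would run as a Spark job):

    ```python
    from statistics import mean, pstdev

    def flag_anomalies(values, threshold=2.5):
        """Return values whose z-score (distance from mean in std devs) exceeds threshold."""
        mu = mean(values)
        sigma = pstdev(values)
        if sigma == 0:
            return []  # all values identical: nothing to flag
        return [v for v in values if abs(v - mu) / sigma > threshold]

    # One transaction amount far from the rest is flagged.
    amounts = [10, 11, 9, 10, 12, 10, 11, 95]
    print(flag_anomalies(amounts))  # [95]
    ```

    Real financial-crime detection uses far richer models, but the pattern (score each record against the population, surface the outliers) is the same.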

     

    Recruitment process:

    • Screening call (20 min)
    • Technical Interview with Team Lead (60 min)
    • Interview with Global Head of Data (30 min)
    • Interview with VP of R&D (30 min)
    • HR Interview (30 min)
    • Reference Checks

    Timeline: Complete process typically takes 2-3 weeks from application to offer.

     

    We offer:

    • Competitive compensation based on your skills and experience
    • Exciting projects involving the newest technologies
    • Flexible working hours

    Data Engineer

    Hybrid Remote · Spain · 2 years of experience · B2 - Upper Intermediate
    We are looking for a talented Data Engineer (4 months project) to join our growing team. In this role, you will be responsible for designing, implementing, and optimizing sophisticated data pipeline flows within our advanced financial crime detection system.

     

    Requirements:

    • 2+ years of hands-on experience with Apache Spark using PySpark or Scala (mandatory requirement)
    • Bachelor's degree or higher in Computer Science, Statistics, Informatics, Information Systems, Engineering, or related quantitative field
    • Proficiency in SQL for data querying and manipulation
    • Experience with version control systems, particularly Git
    • Working knowledge of Apache Hadoop ecosystem components (Hive, Impala, Hue, HDFS, Sqoop)
    • Demonstrated experience in data transformation, validation, cleansing, and ML feature engineering
    • Strong analytical capabilities with structured and semi-structured datasets
    • Excellent collaboration skills for cross-functional team environments
    • Fluent English
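
    As an illustrative aside (not part of the posting): the "transformation, validation, cleansing" work described above boils down to steps like the following plain-Python sketch, where `clean_records` is a hypothetical helper; a real pipeline would express these as PySpark DataFrame operations:

    ```python
    def clean_records(records, required=("id", "amount")):
        """Drop records missing required fields and coerce amount to float."""
        cleaned = []
        for rec in records:
            # Validation: skip rows with missing or empty required fields.
            if any(rec.get(field) in (None, "") for field in required):
                continue
            # Cleansing: coerce the amount column to a numeric type.
            rec = dict(rec, amount=float(rec["amount"]))
            cleaned.append(rec)
        return cleaned

    raw = [
        {"id": 1, "amount": "19.99"},
        {"id": 2, "amount": ""},         # missing amount -> dropped
        {"id": None, "amount": "5.00"},  # missing id -> dropped
    ]
    print(clean_records(raw))  # [{'id': 1, 'amount': 19.99}]
    ```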

     

    Nice to have:

    • Machine learning pipeline development and deployment
    • Proficiency with Zeppelin or Jupyter notebook environments
    • Experience with workflow automation platforms (Jenkins, Apache Airflow)
    • Knowledge of microservices architecture, including containerization technologies (Docker, Kubernetes)

     

    Responsibilities:

    • Design, implement, and maintain production-ready data pipeline flows 
    • Build and optimize machine learning data pipelines to support advanced analytics capabilities
    • Develop solution-specific data flows tailored to unique use cases and customer requirements
    • Create sophisticated data tools and frameworks to empower analytics and data science teams
    • Collaborate closely with product, R&D, data science, and analytics teams to enhance system functionality and drive innovation
    • Work with cross-functional stakeholders to translate business requirements into scalable technical solutions
    • Build and nurture technical relationships with customers and strategic partners

     

    Recruitment process:

    • Screening call (20 min)
    • Technical Interview with Team Lead (60 min)
    • Technical Test (2-hour take-home assignment)
    • Interview with Global Head of Data (30 min)
    • Interview with VP of R&D (30 min)
    • HR Interview (30 min)
    • Reference Checks

    Timeline: Complete process typically takes 2-3 weeks from application to offer.

     

    We offer:

    • Competitive compensation based on your skills and experience
    • Exciting projects involving the newest technologies
    • Flexible working hours