Jobs Scala

  • 39 views · 1 application · 16d

    Senior Scala Developer

    Ukraine · Product · 5 years of experience · English - None · Ukrainian Product 🇺🇦

    Responsibilities:

    - Develop and maintain high-performance, scalable, and fault-tolerant systems using Scala;
    - Collaborate with cross-functional teams to design and develop software solutions that meet business needs;
    - Design, develop, and test software components, services, and APIs;
    - Write clean, efficient, and maintainable code;
    - Troubleshoot and debug production issues, collaborate with the DevOps team;
    - Work proactively on code quality, observability, and best practices across the company.

    Hard Skills:

    - Proficiency in Scala and its ecosystem (sbt, ZIO, Play Framework, Sangria, and other libraries; a minimal Sangria sketch follows this list). At least 3 years of experience is desired;
    - Experience with functional programming and distributed systems;
    - Understanding how the JVM, JDK/JRE, and accompanying technologies work;
    - Familiarity with database technologies (SQL, NoSQL);
    - Hands-on experience using Kafka;
    - Knowledge of web technologies (HTTP, REST, GraphQL) and microservice architecture (service mesh);
    - Experience with software testing and debugging tools.
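
    For illustration of this stack, a minimal Sangria schema in Scala could look like the sketch below; the Job type, repository, and field names are hypothetical and invented purely for this example.

    import sangria.renderer.SchemaRenderer
    import sangria.schema._

    // Hypothetical domain type, used only to illustrate a Sangria schema.
    case class Job(id: String, title: String)

    class JobRepo {
      def all: Seq[Job] = Seq(Job("1", "Senior Scala Developer"))
    }

    object JobSchema {
      // GraphQL object type describing the hypothetical Job.
      val JobType: ObjectType[Unit, Job] = ObjectType(
        "Job",
        fields[Unit, Job](
          Field("id", StringType, resolve = _.value.id),
          Field("title", StringType, resolve = _.value.title)
        )
      )

      // Root query exposing a "jobs" field backed by the repository.
      val QueryType: ObjectType[JobRepo, Unit] = ObjectType(
        "Query",
        fields[JobRepo, Unit](
          Field("jobs", ListType(JobType), resolve = _.ctx.all)
        )
      )

      val schema: Schema[JobRepo, Unit] = Schema(QueryType)
    }

    object RenderSchema extends App {
      // Render the schema as GraphQL SDL to check that it compiles and wires up.
      println(SchemaRenderer.renderSchema(JobSchema.schema))
    }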

    Nice to Have:

    - Familiarity with Kubernetes and Docker.
    - Experience with Aerospike or another KV (key-value) database.
    - Experience with the FP paradigm and other JVM languages.
    - Understanding of CI/CD tools and methodologies.

     

    What We Offer:

    - A job in a stable company – we have been in the market for over 13 years;
    - Truly engaging tasks – take part in creating the media service of the future;
    - Relationships built on trust and plenty of opportunities for growth;
    - Table tennis lessons and piano classes;
    - Free English lessons;
    - A corporate psychologist;
    - Discounts from partner brands and our service.
     

    By applying for the vacancy and submitting your resume to the Company (LLC “MEGOGO”), registered and operating in accordance with the legislation of Ukraine, registration number 38347009, address: Ukraine, 01011, Kyiv, Rybalska Street, 22 (hereinafter referred to as the “Company”), you confirm and agree that the Company processes your personal data provided in your resume in accordance with the Law of Ukraine “On Personal Data Protection” and GDPR regulations.

    We’d appreciate it if you could take a moment to fill out a short survey about what matters most to you. It will help us better understand candidates’ expectations and create an even more comfortable environment at MEGOGO. Here’s the link: bit.ly/43YaxBH

  • 8 views · 3 applications · 2d

    Senior/Lead Scala Engineer

    Full Remote · Poland, Romania, Croatia, Slovakia · 5 years of experience · English - None

    Required Skills:

    • Expert-level Scala programming and backend development.
    • GraphQL service design and implementation.
    • Strong experience with cloud-native engineering (AWS, Kubernetes, Containers, CI/CD).
    • Distributed data processing using Spark or similar frameworks.
    • Working knowledge of Angular (for full-stack contributions).

     

    Preferred Experience:

    • Large-scale, production data platforms.
    • Apache Iceberg, AWS Athena, or similar technologies.
    • Experience in a high-throughput, secure environment.

     

    Job Responsibilities

    • Support our Scala-based GraphQL domain services API, integrating distributed Spark processing with our Angular-based front end.
    • Help develop core services for a petabyte-scale Apache Iceberg data lake on AWS.
    • Build and maintain GraphQL APIs for high-throughput data access.
    • Enforce cloud-native development standards (Containers, Kubernetes, CI/CD via GitHub).
    • Collaborate with cross-functional teams to deliver robust, scalable solutions.
    • Contribute to UI development in Angular where needed.
    • Implement and optimize large-scale data processing pipelines (Athena, Spark).

     

    Department/Project Description

    Small Teams; Big Data
    We look for individuals who are motivated by complex and challenging work. We want to work with people who share compelling solutions to those challenges, solutions informed by their unique experiences, passions, and expertise. We plan to build a Data Platform on top of the AWS ecosystem to port over some of the EMS legacy applications. We plan to use Scala to build microservices that will provide self-onboarding functionality to our customers and internal users. We furthermore plan to build data pipelines on Spark (Scala) and other open-source technologies, with the goal of having robust, scalable, and resilient data services. The platform will have several user interfaces built on top of Angular and TypeScript to connect the pipelines and services together and provide an optimized user experience. The applications, pipelines, and microservices we build will run on top of managed Kubernetes.
    We're open-minded about new technologies, we're passionate about what we do, and we make time for everyone to learn and grow as the industry changes. Engineers on the team are approachable and ambitious people who think outside the box and solve big problems together. Are you up for the challenge?
    We are looking to build multiple value stream teams composed of front-end, back-end, software/data, and infrastructure engineering. We need people who can work hands-on as engineers, solving complex problems and building advanced software solutions. We face daily challenges that are both unique and engaging while processing data at petabyte scale; that is over one trillion data points across multiple data periods. We keep consumer privacy and data security at the forefront of all that we do. Our size also allows us to use cutting-edge, open-source technologies to tackle the ever-growing challenges.
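
    As a hedged sketch of the Spark-on-Iceberg work described above: the catalog name, warehouse location, and table names below are invented for illustration and are not the project's real configuration.

    import org.apache.spark.sql.SparkSession

    object IcebergPipelineSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("iceberg-pipeline-sketch")
          // Assumed Iceberg catalog wiring; the real AWS/Glue setup would differ.
          .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
          .config("spark.sql.catalog.lake.warehouse", "s3://example-warehouse/")
          .getOrCreate()

        // Read an existing Iceberg table, derive a daily aggregate,
        // and append it into a pre-created results table.
        val events = spark.read.table("lake.analytics.events")
        val daily  = events.groupBy("event_date").count()

        daily.writeTo("lake.analytics.daily_counts").append()

        spark.stop()
      }
    }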

  • 47 views · 5 applications · 24d

    Senior Java Developer (Spark, Hadoop)

    Full Remote · Ukraine · 5 years of experience · English - B2

    Project Description:

    A next-generation cross-asset data management platform to provide globally consistent data and innovative tools to support business strategy for trade/sales clients that is built on big data architecture, highly scalable, and cloud-ready. The platform enables industry-leading analytics, client reporting, regulatory compliance, surveillance, supervisory reporting, and data science solutions (data flame). The project tech stack: Apache Spark, Hive, Java, Scala, Spring, SQL, Kafka, Hadoop.

    We are looking for a strong Senior Java Developer with Big Data experience (Hadoop, Spark), capable of building the design and establishing the framework to expose and distribute the Risk data to a wide variety of consumers, including Finance, Front Office Risk Systems, Trading Systems, Back Office, and Account systems. This should cover low-latency access and streaming, with push/pull mechanisms depending on the type of consumer.
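
    As a hedged sketch of the streaming "push" distribution described above (written in Scala to match the project stack; the broker address, topic names, and checkpoint path are assumptions made for illustration only):

    import org.apache.spark.sql.SparkSession

    object RiskDistributionSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("risk-distribution-sketch")
          .getOrCreate()

        // Consume risk records from an assumed upstream Kafka topic.
        val risk = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "risk-measures")
          .load()

        // Pass the payload through as strings; enrichment/modeling would go here.
        val out = risk.selectExpr(
          "CAST(key AS STRING) AS key",
          "CAST(value AS STRING) AS value")

        // Push the records to a downstream topic for consumers.
        val query = out.writeStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("topic", "risk-distribution")
          .option("checkpointLocation", "/tmp/risk-distribution-checkpoint")
          .start()

        query.awaitTermination()
      }
    }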

    The role of the Senior Java Developer involves working with a strong development team (3 Senior Java Developers) and offers the opportunity to work with big data, improving knowledge in this domain while utilizing the resources of an international bank.

    Responsibilities:

    • Design and build frameworks for ingesting/modeling the complex Risk data
    • Develop Low latency caching to service Front office risk systems

    Mandatory Skills Description:

    • At least 5 years of experience in software development, including professional backend software development experience;
    • Strong knowledge of the Java programming language
    • 1+ years of big data development, with extensive hands-on Spark programming experience
    • Strong knowledge of big data technologies like Hadoop (Hive) and Spark
    • Familiarity with algorithms and design patterns
    • Understanding of distributed systems.
    • Understanding of CI/CD workflow.
    • Familiarity with the Linux environment, including scripting skills

    Nice-to-Have Skills Description:

    • Experience with in-memory databases and caching
    • Job orchestration tools like Autosys or Airflow
    • Financial background (where possible)

    Languages:

    English: B2 Upper Intermediate
