Join our Core Team as a highly skilled Python Developer to build and scale our high-load Geospatial AI Data Platform. You will be the critical link productionizing Data Science models and transforming massive volumes of raw imagery into meaningful, structured insights that power our client-facing product. If you're driven by complex, large-scale big-data challenges, this is the role for you.
Key Responsibilities:
• Data Pipeline Development: Implement and optimize robust ETL pipelines for collecting, cleaning, transforming, and loading large volumes of geospatial data.
• API Development & Deployment: Design, build, and maintain high-speed, scalable RESTful APIs using frameworks such as FastAPI or Powertools for AWS Lambda (a minimal sketch follows this list).
• Model Integration & Productionization: Work closely with Data Scientists to containerize and deploy models (e.g., roof age prediction) into our Geospatial AI Data Platform.
• Code Quality & Optimization: Write clean, efficient, testable, and reusable Python code. Optimize data processing and reduce platform latency so that requests and queries are served at massive scale and high volume.
• System Architecture: Contribute to the design and implementation of our Microservices and Serverless architectures, ensuring high availability, security, scalability, and observability.
• Collaboration: Work within an Agile team alongside Data Scientists, DevOps, and Product Managers to translate business requirements into technical solutions.
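For a flavor of the API work described above, here is a minimal FastAPI sketch; the endpoint path, response model, and in-memory store are hypothetical and invented purely for illustration, not taken from our codebase:

```python
# Minimal FastAPI sketch of a geospatial query endpoint.
# All names (the /parcels route, RoofInsight, _FAKE_STORE) are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Geospatial AI Data Platform (sketch)")

class RoofInsight(BaseModel):
    parcel_id: str
    predicted_roof_age_years: float
    confidence: float

# Placeholder store; in production this would be a database or model service.
_FAKE_STORE = {
    "p-123": RoofInsight(parcel_id="p-123",
                         predicted_roof_age_years=14.5,
                         confidence=0.87),
}

@app.get("/parcels/{parcel_id}/roof-age", response_model=RoofInsight)
def get_roof_age(parcel_id: str) -> RoofInsight:
    """Return the model-predicted roof age for a parcel."""
    insight = _FAKE_STORE.get(parcel_id)
    if insight is None:
        raise HTTPException(status_code=404, detail="parcel not found")
    return insight
```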
Required Qualifications:
• B.Sc. in Computer Science or a related technical field.
• 4+ years of demonstrable professional experience as a Python Developer, with a focus on data-intensive backend systems.
• Expert knowledge of Python 3 and adherence to PEP 8.
• Experience developing with OOP and SOLID principles.
• Proven experience designing and developing production-ready APIs using a modern Python web framework such as FastAPI or Powertools for AWS Lambda.
• Experience with AWS services such as S3, SQS, EC2, API Gateway, Lambda, DynamoDB, and CloudFormation/SAM, using boto3 (see the sketch after this list).
• Experience with Docker/Kubernetes.
• Experience with Linux, preferably Ubuntu.
• Experience with SQL databases (e.g., PostgreSQL).
• Experience with version control (Git) and collaborative development workflows.
• Good communication skills in English.
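As an illustration of the AWS/boto3 experience listed above, a small sketch of an SQS-driven S3 download loop; the queue URL, message schema, and bucket/key fields are assumptions made for the example:

```python
# Sketch of an SQS consumer that downloads referenced S3 objects via boto3.
# QUEUE_URL and the message body layout ({"bucket": ..., "key": ...}) are
# hypothetical placeholders, not a real deployment.
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/imagery-events"

def poll_once() -> None:
    """Fetch one batch of messages and download the referenced S3 objects."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        filename = body["key"].rsplit("/", 1)[-1]
        s3.download_file(body["bucket"], body["key"], f"/tmp/{filename}")
        # Delete only after successful processing.
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```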
Preferred/Bonus Qualifications:
• Experience with geospatial libraries (e.g., GeoPandas, GDAL, Shapely, Rasterio); a toy example follows this list.
• Familiarity with model serving technologies such as NVIDIA Triton Inference Server.
• Experience with Workflow Management platforms such as Apache Airflow.
• Experience with on-prem servers.
• Knowledge of CI/CD pipelines (GitHub Actions) and DevOps best practices.
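For the geospatial-library bonus, a toy GeoPandas/Shapely example of the kind of vector operation such platforms perform; the coordinates and the choice of metric CRS (EPSG:32636) are invented for illustration:

```python
# Toy GeoPandas/Shapely example: buffer points and compute areas.
# All data here is invented purely for illustration.
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame(
    {"name": ["site_a", "site_b"]},
    geometry=[Point(30.52, 50.45), Point(30.61, 50.40)],
    crs="EPSG:4326",
)

# Reproject to a metric CRS before measuring; EPSG:32636 (UTM zone 36N)
# is just an assumption that fits this toy data.
metric = gdf.to_crs(epsg=32636)
metric["buffer_area_m2"] = metric.geometry.buffer(100).area
print(metric[["name", "buffer_area_m2"]])
```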