Middle Data Engineer
Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project, a cutting-edge data intelligence platform for e-commerce analytics.
You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, processing, analysis, and integrations.
If you are passionate about data optimization, system performance, and architecture, we're waiting for your CV!
Requirements:
- 2+ years of commercial experience with Python.
- Advanced experience with SQL databases, including optimization, monitoring, and performance tuning.
- PostgreSQL is a must-have.
- Solid understanding of ETL principles and best practices.
- Strong knowledge of algorithms and computational complexity, with the ability to design efficient, scalable data logic.
- Experience with Linux environments, cloud services (AWS: boto3, Lambda, S3, SQS, ECS), and Docker; a sketch of a typical task follows this list.
- Familiarity with message brokers.
- Experience with Pandas and NumPy for data processing and analysis.
- Understanding of system architecture and monitoring (logs, metrics, Prometheus, Grafana).
- Excellent problem-solving and communication skills, with the ability to work both independently and collaboratively.
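To give a flavor of the day-to-day work behind these requirements, here is a minimal ETL sketch, not project code: it pulls a raw CSV from S3 with boto3, cleans it with pandas, and writes the result back. The bucket, keys, and the "order_date" column are hypothetical names used only for illustration.

```python
import gzip
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

def etl_daily_orders(bucket: str, raw_key: str, clean_key: str) -> None:
    """Extract a raw CSV from S3, transform it, load it back (gzipped)."""
    # Extract: fetch the raw object from S3.
    obj = s3.get_object(Bucket=bucket, Key=raw_key)
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Transform: normalize column names, drop duplicates, coerce types.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    # "order_date" is a hypothetical column used for illustration.
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df = df.dropna(subset=["order_date"])

    # Load: write the cleaned data back to S3 as a gzipped CSV.
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write(df.to_csv(index=False).encode("utf-8"))
    s3.put_object(Bucket=bucket, Key=clean_key, Body=buf.getvalue())
```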
Will Be a Plus:
- Experience in web scraping, data extraction, cleaning, and visualization.
- Understanding of NoSQL databases, ideally hands-on Elasticsearch experience (not only as part of ready-made logging stacks such as Logstash or Filebeat).
- Experience with multiprocessing and multithreading.
- Familiarity with Flask / Flask-RESTful for API development.
- Experience with Kafka pipelines and stream processing.
- Strong skills in code optimization and structuring.
- TimescaleDB is good to have.
Key Responsibilities:
- Develop and maintain a robust and scalable data processing architecture using Python.
- Design, optimize, and monitor data pipelines using Kafka and AWS SQS (see the sketch after this list).
- Implement and optimize ETL processes for various data sources.
- Manage and optimize SQL and NoSQL databases (PostgreSQL, TimescaleDB, Elasticsearch).
- Work with AWS infrastructure to ensure reliability, scalability, and cost efficiency.
- Build monitoring and alerting for data systems (Prometheus, Grafana, logs, metrics).
- Collaborate with cross-functional teams, participate in code reviews, and contribute to continuous improvement.
- Proactively identify bottlenecks and suggest technical improvements.
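As an illustration of the pipeline-monitoring responsibilities above, here is a hedged sketch, again not project code: a Kafka consumer that exposes throughput metrics for Prometheus to scrape and Grafana to chart. The topic, group, port, and metric names are assumptions.

```python
import json

from kafka import KafkaConsumer              # kafka-python
from prometheus_client import Counter, start_http_server

# Hypothetical metric; the label lets Grafana split ok vs. error throughput.
EVENTS = Counter("pipeline_events_total", "Events consumed", ["status"])

def process(event: dict) -> None:
    ...  # placeholder for the actual transform/load step

def run(topic: str = "product-updates") -> None:
    # Expose /metrics on :8000 for Prometheus to scrape.
    start_http_server(8000)
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers="localhost:9092",
        group_id="etl-workers",
        value_deserializer=json.loads,       # payloads assumed to be JSON
    )
    for msg in consumer:
        try:
            process(msg.value)
            EVENTS.labels(status="ok").inc()
        except Exception:
            EVENTS.labels(status="error").inc()

if __name__ == "__main__":
    run()
```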
We offer:
- Work in a fast-growing company;
- Great networking opportunities with international clients and challenging tasks;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leave;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities, corporate events.
Required skills and experience
| Skill | Experience |
| Python | 2 years |
| PostgreSQL | 2 years |
| ETL | 2 years |
| AWS | 2 years |
| Pandas | 2 years |
| Kafka | 1.5 years |
| Elasticsearch | 6 months |
Required languages
| Language | Level |
| English | B1 (Intermediate) |
| Ukrainian | Native |
Published 11 November · Updated 20 November