Important information
- Contract type: Permanent contract
- Salary: According to profile
- Location: Casablanca, Morocco
- Starting date: 2 to 4 weeks
- Work mode: Onsite, Hybrid
- Published on: 28 April 2026
What they need
Context
Collective.work is building the next-generation AI-powered sourcing platform for recruiters. Our mission is to help talent teams identify, engage, and hire the best candidates faster through intelligent automation and data-driven insights. We operate at the intersection of data, AI, and recruiting workflows—where high-quality data infrastructure is critical to our success.
Missions
- Design and maintain scalable data pipelines (batch and real-time)
- Build and optimize ETL/ELT workflows across Azure and/or GCP
- Develop data models and architectures to support analytics and ML use cases
- Ensure data quality, integrity, and reliability across systems
- Collaborate with ML engineers to prepare and serve training datasets
- Monitor and improve pipeline performance, cost efficiency, and scalability
- Implement best practices for data governance, security, and compliance
- Contribute to tooling and infrastructure decisions
Tools & Environment
- Cloud: Azure (Data Factory, Synapse) and/or GCP (BigQuery, Dataflow)
- Data Processing: Python, SQL, Spark
- Orchestration: Airflow / Prefect
- Storage: Data lakes, warehouses
- Streaming: Kafka / PubSub (nice to have)
- DevOps: Docker, CI/CD
Working Conditions
- Flexible remote work environment
- Opportunity to work on a product at the cutting edge of AI and recruiting
- High ownership and impact from day one
- Collaborative, product-driven engineering culture
- Opportunity to shape the data foundation of a growing platform
Desired profile
- 3+ years of experience in data engineering or a similar role
- Strong experience with Azure and/or GCP data ecosystems
- Proficiency in Python and SQL
- Experience building scalable ETL pipelines
- Familiarity with data warehousing concepts and modeling
- Understanding of distributed systems and big data tools (e.g., Spark)
- Experience working with APIs and integrating external data sources
- Strong problem-solving skills and attention to detail