Within a team of Data Engineers, reporting to an Engineering Manager and working directly with the CRM project managers, you will take part in the entire CRM feature development process, handling the Data Engineering side.
As a Data Engineer, you will:
- Develop and build data products and services, and integrate them into business systems and processes.
- Implement data flows to connect operational systems.
- Automate manual data flows to enable scaling and repeatable use.
- Transform data into a format useful for analysis, with optimized ETL tasks and stream processing.
- Make data accessible for analysis and for the development of data products.
- Know privacy regulations and act as a privacy-by-design ambassador.
As an engineer, you will participate in the development, evolution and maintenance of software to best serve our users and offer them the best possible user experience, while helping to achieve the team's and leboncoin's objectives:
- Produce software architectures aligned with the product's functional and non-functional needs and integrated into the leboncoin ecosystem.
- Apply quality software practices (software craftsmanship) and support team members in learning and implementing them.
- Develop new features that follow best practices for quality, performance, monitoring and scalability.
- Help improve development, testing (unit and integration) and deployment environments and methods (code reviews, CI/CD, etc.) with a DevOps approach.
- Keep a technology watch and take the initiative to improve the product.
As part of the "Crew Fidélisation", you will interact with the teams in charge of audience acquisition (SEO) and the segmentation platform to provide support, new developments and innovations.
Development on Ubuntu in Java, Python and SQL, with IntelliJ, Gradle, Travis, Docker, GitHub, Ansible, Terraform, Concourse and Helm.
In an environment at the cutting edge of current technologies: Airflow, Spark, Elasticsearch, Kafka / Kafka Streams / Kafka Connect, AWS (S3, Redshift, Athena, Glue, DynamoDB), Kubernetes, Jupyter, MLflow, Hudi.
What we expect for this position:
You are a Data Engineer with at least 3 years of experience, including proven experience delivering production data software with high quality, performance and scalability requirements.
You know Unix environments and have an advanced level in Java and Python.
You are familiar with the AWS cloud environment, and have solid notions of distributed architecture and high volume data platform management.
You have a strong command of SQL (PostgreSQL in particular).
Experience with Elasticsearch is a plus.
You are able to work independently and ramp up quickly.
You work well in a team, sharing knowledge and helping others.
You are fluent in English both written and spoken.
What we offer:
- An attractive base salary.
- Participation in our Short Term Incentive plan (annual bonus).
- Employee Stock Purchase Program with a match from Adevinta.
- Work From Anywhere: enjoy up to 20 days a year of working from anywhere! Maybe not from the moon - then again, why not? Just make sure you have an internet connection!
- A 24/7 Employee Assistance Program for you and your family, because we care.
- Win together, lose together is one of our key behaviours. At Adevinta you will find a collaborative environment with an opportunity to explore your potential and grow.
On top of these, we also provide a range of locally relevant benefits. Want to know more? Apply and ask our recruiters!