Design of distributed data solutions (warehouses, lakes, mesh) involving batch and stream processing via ETL jobs.
Our products are data-driven at their core. Our product-building process will lead you not only to capitalize on your technical skills, but also to become a product-minded professional.
You and your team will own the complete delivery process: ideating, delivering and supporting the iteration of your own product.
What have you done before joining us?
- Extracting and transforming data with Go;
- Designing and implementing data pipelines; batch and stream processing with Kafka is something you are already used to;
- Querying relational and non-relational data models;
- Collaborating with peers on the design, implementation and evolution of data pipelines, lakes, warehouses and meshes;
- Learning and working with C++, Java or Golang.
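As a minimal sketch of the extract-and-transform work described above, the Go program below parses a hypothetical CSV-style input into typed records and drops rows that fail basic validation. The `Record` type, field layout, and sample data are illustrative assumptions, not part of any real pipeline here.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Record is a hypothetical typed row produced by the transform step.
type Record struct {
	Name string
	Age  int
}

// extract splits raw CSV-style text into rows of fields.
func extract(raw string) [][]string {
	var rows [][]string
	for _, line := range strings.Split(strings.TrimSpace(raw), "\n") {
		rows = append(rows, strings.Split(line, ","))
	}
	return rows
}

// transform converts raw fields into typed records, skipping rows
// with the wrong number of fields or a non-numeric age.
func transform(rows [][]string) []Record {
	var out []Record
	for _, r := range rows {
		if len(r) != 2 {
			continue
		}
		age, err := strconv.Atoi(strings.TrimSpace(r[1]))
		if err != nil {
			continue
		}
		out = append(out, Record{Name: strings.TrimSpace(r[0]), Age: age})
	}
	return out
}

func main() {
	raw := "alice,30\nbob,notanage\ncarol,25"
	// The invalid "bob" row is dropped during transformation.
	fmt.Println(transform(extract(raw)))
}
```

In a real pipeline the extract step would read from a source system (files, a database, a Kafka topic) rather than an in-memory string, but the shape of the code stays the same.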
Traits we value the most
- Strong communication and time-management skills;
- Pragmatism and empathy;
- Investigation and experimentation attitude;
- Risk anticipation, identification and mitigation;
- Ability to thrive in a fast-paced, fast-changing, and unstructured environment;
- Hunger to join a team that aims to leave a mark on the world;
- A team player with an owner’s mentality.
What will you do?
- Work with other engineers, risk specialists and product managers to evolve the backend ecosystem to build data products belonging to our data mesh;
- Design and build ETL jobs based on multiple data sources related to specific risk domains (Identity Validation, AML, etc.) and relevant subdomains;
- Apply ingestion techniques to systematically extract data from sources;
- Enhance the data completeness, validity, and consistency;
- Provide feedback on the quality and transformation needs regarding data.
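One way to make the "completeness, validity, and consistency" responsibility concrete is a small data-quality metric. The Go sketch below computes the completeness of a column as the fraction of non-empty values; the column name and sample data are hypothetical, and a real quality check would cover validity and consistency rules as well.

```go
package main

import "fmt"

// completeness returns the fraction of non-empty values in a column,
// a simple proxy for the completeness dimension of data quality.
func completeness(values []string) float64 {
	if len(values) == 0 {
		return 0
	}
	filled := 0
	for _, v := range values {
		if v != "" {
			filled++
		}
	}
	return float64(filled) / float64(len(values))
}

func main() {
	// Hypothetical "document ID" column with one missing value.
	col := []string{"NL123", "", "DE456", "FR789"}
	fmt.Printf("completeness: %.2f\n", completeness(col))
}
```

Metrics like this are typically computed per ingestion batch and fed back to source owners as the quality feedback the last bullet describes.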