Senior Data Engineer
As a Senior Data Engineer, your main goal is to help build the Data Virtualization platform for data scientists and analysts. You will work closely with DataOps engineers, supporting them with highly performant virtualized views built on Denodo, optimized data flow and collection for cross-functional teams, and the infrastructure and tools they need to deliver end-to-end solutions and visualizations for business problems. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities
- Work with business and app/solution teams to implement data strategies, build data flows, and develop virtualized data views with Denodo.
- Hands-on modeling, design, configuration, installation, and performance tuning.
- Keep our data separated and secure across multiple environments, both on-premises and in the cloud.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Participate in Agile ceremonies and weekly demos.
- Communicate your daily commitments.
Qualifications
- Advanced SQL knowledge and experience working with relational databases, including query authoring and working familiarity with a variety of databases.
- Good knowledge of metadata management, data modeling, and ETL.
- Expertise in object-oriented design, data modeling, and database principles (relational and non-relational).
- Experience with Agile ceremonies.
- Passion for good engineering practices and testing.
- Ability to organize, prioritize, and communicate daily and weekly goals.
- Curiosity, humility, a love of learning, and a drive to get things done.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with cloud environments (e.g., Azure, AWS, GCP).
- Experience with ETL, data ingestion protocols, and integration projects/tools such as Azure Data Factory, Talend, Databricks, and Data Lake.
- Experience with data warehouse and enterprise big data platforms in multi-data-center contexts
- Experience with stream-processing systems: Spark Streaming, etc.
- Experience with object-oriented/functional scripting languages: Python, PySpark, Java, etc.