ID 1289 - Sr Data Infrastructure Engineer (100% remote)

CONEXIONHR
    Job Overview
    • Remote: Yes

    About the Company
    This company is the world’s leading roadside assistance platform, expanding mobility and transportation options for consumers and for automotive, logistics, and technology companies.

    Responsibilities
    First 3 months:
    o Understand our platform development environment and philosophy
    o Understand our cloud architecture and applications’ infrastructure
    o Understand our engineering teams’ work culture

    First 6 months:
    o Build CI/CD pipelines to deploy and monitor data pipelines
    o Architect and build data infrastructure not available off-the-shelf
    o Build deployment tools that provide blue/green and zero-downtime deployments for our services
    o Ensure data flow and integrity between systems, championing the end-to-end flow of data across all our systems and ensuring consistency across the chain

    Ongoing:
    o Monitor table schemas (e.g. partitions, compression, distribution) to minimize costs and maximize performance
    o Fine-tune our performance with a focus on high availability and scalability
    o Work with different teams to make data available for reporting and analytics
    o Monitor our operations and security
    o Follow AWS best practices to deploy services

    Requirements

    • In-depth and demonstrable knowledge of AWS cloud
    • Experience deploying and maintaining EMR, Glue, Athena, RDS
    • Experience deploying and maintaining data warehousing technologies such as Amazon Redshift and Google BigQuery
    • Experience deploying and maintaining NoSQL databases like Apache Solr, DynamoDB, MongoDB
    • Experience deploying messaging and data pipeline tools like Apache Kafka, Amazon Kinesis, etc.
    • Proficient in Python
    • Proficient in Jenkins pipelines
    • Experience in writing Terraform/CloudFormation templates
    • Experience with ECS/Kubernetes in Production environments
    • Experience using and/or implementing modern observability tooling such as
      Prometheus, InfluxDB, Grafana, Logstash, Kibana or Jaeger

    Bonus points
    Knowledge of Airflow or other workflow management systems in a distributed setup