ID 773 – Sr. Python Developer (100% remote)

CONEXIONHR
    Job description

    For more than 10 years, our client has been developing customized software solutions for web & mobile platforms and providing IT staff augmentation services to clients from all over the world. They strive to develop solutions with creativity and professionalism in a friendly and inspiring environment. 

    Profile & Seniority
    They are looking for a Sr. Python Developer to join their team on a highly challenging project. The technology stack includes:
    ● Cloud Provider AWS: EC2, Lambda, Aurora, Redshift, DynamoDB, ECS, SQS, SNS, Kinesis, S3, CloudFront, CloudFormation, SageMaker, KMS, CodePipeline, etc.
    ● DSL-based Search: multiple large-scale Elasticsearch clusters searched using their Disco Query Language (DQL).
    ● Event Bus: Kafka and Schema Registry
    ● 3rd Party Vendors: Redis, Auth0 for Cloud Identity Federation (SSO, SAML, etc.).
    ● AI: MinHash, FastText, Word2Vec, Convolutional Neural Nets, Algorithmia (Lambda with GPUs) for training, PyTorch, Recurrent Neural Networks, Latent Dirichlet Allocation for Topic Modeling, etc.
    ● Deployment: Terraform, Docker (via ECS), Consul for: App Config, Service Discovery, Shared Secrets.
    ● Visibility: ELK Stack for logging, Datadog, New Relic, Sentry.io
    ● Programming Languages: Python, JavaScript, C#/.NET, Java.
    ● Transport Mechanisms: Protobuf, Avro, HTTP Rest/JSON
    ● CI/CD: Jenkins, CodeDeploy, GitHub, Artifactory

    Position requirements:
    ● Must design and communicate external and internal architectural perspectives of well-encapsulated systems (e.g. Service-Oriented Architecture, Docker-based services, microservices) using patterns and tools such as Architecture/Design Patterns and Sequence Diagrams.
    ● Must have experience with 'Big Data' technologies such as: Elasticsearch, NoSQL stores, Kafka, columnar databases, dataflow or pipeline systems, graph data stores.
    ● Should have experience with the design, implementation, and operation of data-intensive, distributed systems. (The book Designing Data-Intensive Applications is a good reference.)
    ● Should embrace the discipline of Site Reliability Engineering.

    ● Should have experience using Continuous Integration and Continuous Deployment (CI/CD) with an emphasis on a well-maintained testing pyramid.
    ● Should have API and Data Model Design or Implementation experience, including how to scale out, make highly available, or map to storage systems.
    ● Should have experience with multiple software stacks, have opinions and preferences, and not be married to a specific stack.
    ● Should have experience designing and operating software in a Cloud Provider such as AWS, Azure, or GCP.
    ● Might have experience using Feature or Release Toggles as a code branching strategy.
    ● Might have experience designing, modifying, and operating multi-tenant systems.
    ● Might know about algorithm development for intensive pipeline processing systems.
    ● Might understand how to design and develop from a security perspective.
    ● Might know how to identify, select, and extend 3rd-party components (commercial or open source) that provide operational leverage but do not constrain product and engineering creativity.
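As a rough illustration of the Feature/Release Toggle idea mentioned above, here is a minimal, hypothetical sketch in Python (the flag names and functions are invented for illustration, not the client's actual tooling). Instead of keeping new work on a long-lived branch, both code paths ship together and a runtime flag selects between them:

```python
# Minimal feature-toggle sketch (hypothetical flag/function names).
# Both the legacy and the new code path are deployed; a flag decides
# at runtime which one executes, avoiding long-lived code branches.

FEATURE_FLAGS = {"new_search_ranking": False}

def is_enabled(flag: str) -> bool:
    # In production this would typically query a config service.
    return FEATURE_FLAGS.get(flag, False)

def rank_results(results: list[dict]) -> list[dict]:
    if is_enabled("new_search_ranking"):
        # New path: rank by score, highest first.
        return sorted(results, key=lambda r: r["score"], reverse=True)
    return results  # legacy path: preserve original order

# Flipping the flag enables the new behavior without a redeploy.
FEATURE_FLAGS["new_search_ranking"] = True
ranked = rank_results([{"score": 1}, {"score": 3}])
```

Rolling back a misbehaving feature then becomes a config change rather than a revert-and-redeploy cycle.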

    Your day might include designing and operating platform-wide services such as:
    ● Event Bus and Event Sourcing capabilities that provide business and engineering leverage and efficiencies.
    ● Highly scalable and extremely performant search systems.
    ● Transactional or eventually consistent stores that provide well-encapsulated domain object semantics.
    ● Orchestrated scale-out data pipelines that can leverage serverless and containerized compute, balancing cost, latency, and duration.
    ● Algorithmically intensive data engines that operate on streaming, large, or multi-tenant datasets.
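To picture the Event Sourcing capability listed above, here is a minimal, hypothetical Python sketch (the event names and `Account` type are invented for illustration): current state is never stored directly, but rebuilt by replaying an append-only log of domain events.

```python
# Event Sourcing in miniature (hypothetical event names and types).
# The event log is the source of truth; state is derived by replay.
from dataclasses import dataclass

@dataclass
class Account:
    balance: int = 0

def apply_event(state: Account, event: tuple) -> Account:
    kind, amount = event
    if kind == "deposited":
        state.balance += amount
    elif kind == "withdrawn":
        state.balance -= amount
    return state

# Replaying the full log reconstructs current state from scratch.
event_log = [("deposited", 100), ("withdrawn", 30)]
state = Account()
for event in event_log:
    state = apply_event(state, event)
```

In a real system the log would live on a durable bus such as Kafka, and consumers would fold events into their own read models the same way.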

    Job details