
Senior Data & AI Platform Engineer (DAPE)

Toronto, ON
  • Number of positions available: 1

  • To be discussed
  • Contract job

  • Starting date: 1 position to fill as soon as possible

Our client is seeking a Senior Data & AI Platform Engineer (DAPE) to design and build robust infrastructure on AWS for a financial institution.


This is an initial 1-year contract with possible extension, hybrid in downtown Toronto (3 days/week onsite, 2 days/week remote).


Roles and Responsibilities:

  • Work with a team of cloud engineers focused on designing and building robust infrastructure on AWS.
  • Work with development teams to understand technical requirements, then design and provision the AWS platform services required to support complex ETL/ELT data pipelines and machine-learning solutions.
  • Work with product managers to develop tools that support experimentation, ML model training, and production operations, drawing on a solid understanding of provisioning end-to-end data solutions enabled with DevOps.
  • Architect scalable, low-latency systems; implement a capacity-planning framework; and design data pipelines and the required disaster-recovery services.
  • Promote software-development best practices, conduct rigorous code reviews, and identify and solve technical challenges.
  • Balance and prioritize projects to maximize efficiency and ensure company objectives are achieved.
  • Lead and mentor a team of talented engineers on the backend distributed-systems team, making a positive impact on the team's productivity and growth.


Must Haves:

  • 7+ years of experience as a cloud infrastructure engineer working with data and data-warehousing technologies.
  • Strong knowledge of data lakes, data warehousing, distributed systems, and data infrastructure concepts.
  • Solid experience with ML infrastructure and ML DevOps (MLOps).
  • Understanding of core AWS and Azure services and architecture best practices.
  • Experience migrating data from on-prem Hadoop to AWS/Azure cloud.
  • Hands-on experience across domains such as database architecture, business intelligence, machine learning, advanced analytics, and big data.
  • Solid experience creating CI/CD pipelines to manage infrastructure and code deployments.
  • Sound experience provisioning the following cloud services, including their design and architecture components:
    • CDK with TypeScript, Node.js, Python, Terraform
    • CodePipeline, CI/CD, build/deploy
    • AWS EMR, Glue, Managed Workflows for Apache Airflow, Lake Formation, Athena, S3, ELK/OpenSearch
    • RDS, Redshift, DocumentDB, Neptune, and Database Migration Service
    • EC2, S3, IAM, Secrets Manager, Systems Manager, CloudWatch
    • Amazon SageMaker Studio (ML), Docker and containers
    • Amazon QuickSight and Power BI reporting tools

Requirements

  • Level of education: undetermined
  • Work experience (years): undetermined
  • Written languages: undetermined
  • Spoken languages: undetermined