
Senior Data Engineering Manager to lead a technical team of data engineers, architecting, implementing, and optimizing end-to-end data solutions on Databricks

Toronto, ON
  • Number of positions available: 1

  • To be discussed
  • Permanent job

  • Starting date: 1 position to fill as soon as possible

Full Time Opportunity

Twice a week on site in Downtown Toronto


Must Haves:

  • Bachelor’s degree in Engineering, Computer Science, or equivalent
  • 5+ years of hands-on experience with Databricks and Apache Spark, demonstrating expertise in building and maintaining production-grade data pipelines
  • Proven experience leading and mentoring data engineering teams in complex, fast-paced environments
  • Extensive experience with AWS cloud services (S3, EC2, Glue, EMR, Lambda, Step Functions)
  • Strong programming proficiency in Python (PySpark) or Scala, and advanced SQL skills for analytics and data modeling
  • Demonstrated expertise in infrastructure as code using Terraform or AWS CloudFormation for cloud resource management
  • Strong background in data warehousing concepts, dimensional modeling, and experience with RDBMS systems (e.g., Postgres, Redshift)
  • Proficiency with version control systems (Git) and CI/CD pipelines, including automated testing and deployment workflows
  • Excellent communication and stakeholder management skills, with demonstrated ability to translate complex technical concepts into business terms
  • Demonstrated use of AI tools in the development lifecycle
  • Some travel may be required to the US


Nice To Haves:

  • Knowledge of the financial industry is preferred


Responsibilities


As the Data Engineering Manager, you will be responsible for architecting, implementing, and optimizing end-to-end data solutions on Databricks while integrating with core AWS services. You will lead a technical team of data engineers, ensuring best practices in performance, security, and scalability. This role requires a deep, hands-on understanding of Databricks internals and a track record of delivering large-scale data platforms in a cloud environment.

  • Lead a team of data engineers in the architecture and maintenance of the Databricks Lakehouse platform, ensuring optimal platform performance and efficient data versioning using Delta Lake
  • Manage and optimize Databricks infrastructure including cluster lifecycle, cost optimization, and integration with AWS services (S3, Glue, Lambda)
  • Design and implement scalable ETL/ELT frameworks and data pipelines using Spark (Python/Scala), incorporating streaming capabilities where needed
  • Drive technical excellence through advanced performance tuning of Spark jobs, cluster configurations, and I/O optimization for large-scale data processing
  • Implement robust security and governance frameworks using Unity Catalog, ensuring compliance with industry standards and internal policies
  • Lead and mentor data engineering teams, conduct code reviews, and champion Agile development practices while serving as technical liaison across departments
  • Establish and maintain comprehensive monitoring solutions for data pipeline reliability, including SLAs, KPIs, and alerting mechanisms
  • Configure and manage end-to-end CI/CD workflows using version control, automated testing, and automated deployment
