Description
Being part of Air Canada means becoming part of an iconic Canadian symbol, recently ranked the best airline in North America. Let your career take flight by joining our diverse and vibrant team at the leading edge of passenger aviation.
As a Data Scientist, Data & AI (Applied AI) at Air Canada, you will be embedded in a cross-functional team and will contribute to the MLOps pipelines and processes used to scale and deploy ML, optimization, and agentic solutions. We are looking for a Data Scientist to help us build and operate our ML, optimization, and agentic platform to increase reliability and governance. You will primarily work with data scientists, research scientists, solution architects, and system integrators to contribute to the architecture and the products powered by advanced analytics and AI.
The ideal candidate will possess a strong foundation in software design principles, with a proven track record of designing, developing, and deploying robust systems. The candidate will also be well-versed in model version control, deployment pipelines, and monitoring standards. The candidate is expected to have some knowledge of the inner workings of the models, rather than treating them as black boxes.
You will join the Data Science & AI Team, a central group within Air Canada’s IT organization, building machine learning and optimization solutions for internal business units such as Revenue Management, Network Planning, Operations, Maintenance, and Cargo, as well as customer-facing solutions. Collaboration with both technical and non-technical stakeholders is essential as you deliver production-grade applications.
All initiatives follow an agile methodology, with 2-to-3-week sprints and incremental releases leading to the final production deployment. This approach fosters continuous improvement, adaptability, and close alignment with business needs.
Responsibilities:
- Build, deploy, and scale ML, Agentic AI, and optimization models in production across Azure and AWS, ensuring reliability, low latency, and cost efficiency.
- Implement end-to-end MLOps practices, including CI/CD, automated retraining, monitoring, and model/data versioning, using cloud-native tooling.
- Monitor model performance and data drift using Azure Monitor, AWS CloudWatch, and similar tools, triggering retraining or recalibration as needed.
- Orchestrate and automate complex AI workflows using Azure Machine Learning, AI Foundry, and Azure Functions on Azure, or Bedrock, SageMaker, and Akka for distributed processing on AWS.
- Standardize tooling, testing frameworks, and performance benchmarking to ensure consistent model validation and infrastructure reliability across platforms.
- Translate business requirements into scalable ML and generative AI solutions, collaborating closely with cross-functional teams.
- Document architectures, workflows, and best practices, and communicate them effectively to both technical and non-technical stakeholders.
- Provide technical leadership by mentoring engineers, driving continuous improvement, and promoting innovative AI engineering practices.
- Partner with IT security to perform audits, vulnerability assessments, and ensure secure operation of AI systems in cloud environments.
- Integrate fairness, explainability, and transparency into model development and deployment using cloud-native and open-source tools.
- Optimize cloud resource usage and implement efficient scaling strategies using autoscaling and distributed computing frameworks (including Akka).
- Develop incident response protocols for AI system failures, lead post-mortems, and implement corrective actions in cloud environments.
- Foster innovation by researching, prototyping, and piloting emerging cloud ML services and distributed computing technologies.
- Implement and maintain model governance frameworks, including approval workflows, audit trails, and lifecycle documentation.
- Build automated testing frameworks (unit, integration, regression) for ML models using cloud-based CI/CD platforms.
Qualifications
- Master’s or PhD in Data Science, Computer Science, or a closely related field (or equivalent), plus 5+ years of relevant work experience.
- Proven experience managing the full MLOps lifecycle, including automated training pipelines, feature store integration, batch and real-time inference, model monitoring, and data/code versioning best practices.
- Strong proficiency in Python and its ML/data ecosystem, including libraries such as Pandas, scikit-learn, MLflow, PySpark, TensorFlow, and others.
- Hands-on experience with Azure’s ML and AI services, including Azure Machine Learning, Azure Databricks, Azure Data Factory, Azure Functions, Azure OpenAI, and Azure AI Search, along with their SDKs.
- Experience building CI/CD pipelines for ML workflows using Git-based platforms such as Azure DevOps and GitHub Actions.
- Working knowledge of large language models (LLMs), prompt engineering, Retrieval-Augmented Generation (RAG) architectures, and open-source frameworks for generative AI.
- Strong problem-solving skills with the ability to work independently and collaboratively in cross-functional teams.
- Demonstrated ability to standardize and productize ML solutions into reusable components and scalable infrastructure.
- Excellent communication skills, both written and verbal, with the ability to convey complex technical concepts to diverse audiences.
- Demonstrate punctuality and dependability to support overall team success in a fast-paced environment.
Asset Qualifications
- Familiarity with Amazon Web Services (AWS) and its ML/AI offerings (e.g., SageMaker, Lambda, S3, EKS).
- Experience with LLMOps tools and practices for managing large language model deployment and monitoring.
- Proficiency in Java, particularly for integrating ML models into production systems.
- Experience deploying models on Azure Kubernetes Service (AKS) or similar container orchestration platforms.
- Familiarity with optimization solvers and tools, including commercial (e.g., CPLEX, Gurobi, FICO Xpress) and open-source (e.g., COIN-OR, SCIP) platforms.
- Relevant certifications (e.g., Azure AI Engineer, AWS Certified Machine Learning, TensorFlow Developer).
Conditions of Employment:
Candidates must be eligible to work in the country of interest at the time any offer of employment is made and are responsible for obtaining any required work permits, visas, or other authorizations necessary for employment. Prior to their start date, candidates will also need to provide proof of their eligibility to work in the country of interest.
Linguistic Requirements
Based on equal qualifications, preference will be given to bilingual candidates.
Diversity and Inclusion
Air Canada is strongly committed to Diversity and Inclusion and aims to create a healthy, accessible and rewarding work environment which highlights employees’ unique contributions to our company’s success.
As an equal opportunity employer, we welcome applications from all to help us build a diverse workforce that reflects the diversity of our customers and of the communities in which we live and serve.
Air Canada thanks all candidates for their interest; however only those selected to continue in the process will be contacted.