Analytics Data Engineer - 82002

Toronto, ON
  • Number of positions to fill: 1

  • Salary: to be discussed
  • Contract position

  • Start date: 1 position to fill as soon as possible

Line of Business: Treasury/Finance

Duration: 6 months

Location: Remote or hybrid - downtown Toronto [the candidate must be able to come into the office for technical issues or special occasions.]

Role Responsibilities Include (but are not limited to):

  • Data Collection and Preparation: Gather data from multiple sources and preprocess it to eliminate errors and inconsistencies and to ensure quality.
  • Statistical Analysis: Apply statistical modeling techniques to identify patterns and relationships, leveraging methods such as hypothesis testing, regression, and clustering.
  • Leverage SQL, Python, or R to uncover patterns, trends, and relationships.
  • Interpret these findings to provide insights that address specific business challenges, such as understanding customer behavior or optimizing operations.
  • Create visualizations using tools like Tableau, Power BI, or matplotlib. These visualizations, along with detailed reports, help stakeholders understand complex data and make data-driven decisions.
  • Bridge the gap between technical data and business needs, enabling organizations to improve efficiency, reduce costs, and enhance customer satisfaction.
  • Incorporate advanced techniques such as predictive modeling and machine learning to address complex challenges.

Must-Have Skills:

Programming & Tools

  • Strong programming skills in Python, SAS, and SQL.
  • Experience with Power BI, DAX, and M Code for dashboarding.
  • Proficiency with MS 365 Suite: Office, Power Automate, SharePoint, OneDrive.

Database & Data Engineering

  • Advanced configuration of SQL Server for high-throughput analytical workloads, including memory allocation and parallel query execution.
  • Design and implementation of partitioned tables, indexed views, and columnstore indexes to support large-scale data operations.
  • Deep understanding of SQL Server recovery models (Simple, Full, Bulk-Logged), including backup/restore strategies, log management, and disaster recovery planning.
  • Development of robust ETL pipelines to extract, transform, and load data from IBM Netezza/Hadoop, ensuring efficient handling of large datasets.
  • Experience with data staging, incremental loads, and change data capture techniques.
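A watermark-based incremental load — one common form of the staging/CDC techniques listed above — can be sketched against an in-memory SQLite database; the `src_orders`/`stg_orders` tables and their columns are hypothetical stand-ins for real source and staging tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src_orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE stg_orders (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

def incremental_load(con):
    # High-water mark: the largest id already staged
    (watermark,) = con.execute(
        "SELECT COALESCE(MAX(id), 0) FROM stg_orders").fetchone()
    # Extract only rows newer than the watermark and load them into staging
    con.execute(
        "INSERT INTO stg_orders SELECT id, amount FROM src_orders WHERE id > ?",
        (watermark,))
    con.commit()

incremental_load(con)                              # initial load: rows 1-3
con.execute("INSERT INTO src_orders VALUES (4, 40.0)")
incremental_load(con)                              # picks up only row 4
print(con.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0])  # 4
```

Real pipelines would track the watermark in a control table and key on a modification timestamp or log sequence number rather than a surrogate id, but the idea is the same: load only what changed since the last run.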

Data Architecture

  • Experience with star/snowflake schemas, fact/dimension modeling, and slowly changing dimensions.
  • Design and implementation of batch processing pipelines using Python and SQL.
  • Solid understanding of RDBMS, NoSQL, and data file formats (CSV, Parquet, JSON).
  • Proficient in translating business requirements into scalable data models.

Cloud & Big Data

  • Strong experience with AWS (Redshift, Glue, MLOps).
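The slowly-changing-dimension modeling called for under Data Architecture can be illustrated with a Type 2 update — expire the current row and insert a new version — against SQLite; the `dim_customer` table, its columns, and the sample customer are illustrative assumptions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customer (
        sk INTEGER PRIMARY KEY AUTOINCREMENT,
        customer_id INTEGER, city TEXT,
        valid_from TEXT, valid_to TEXT, is_current INTEGER
    );
    INSERT INTO dim_customer (customer_id, city, valid_from, valid_to, is_current)
    VALUES (42, 'Toronto', '2024-01-01', '9999-12-31', 1);
""")

def apply_scd2(con, customer_id, new_city, effective):
    row = con.execute(
        "SELECT sk, city FROM dim_customer "
        "WHERE customer_id = ? AND is_current = 1",
        (customer_id,)).fetchone()
    if row and row[1] != new_city:
        # Expire the current version of the row...
        con.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 WHERE sk = ?",
            (effective, row[0]))
        # ...and insert the new version with an open-ended validity window
        con.execute(
            "INSERT INTO dim_customer "
            "(customer_id, city, valid_from, valid_to, is_current) "
            "VALUES (?, ?, ?, '9999-12-31', 1)",
            (customer_id, new_city, effective))
    con.commit()

apply_scd2(con, 42, 'Ottawa', '2024-06-01')
# The dimension now holds two versions of customer 42
print(con.execute(
    "SELECT COUNT(*) FROM dim_customer WHERE customer_id = 42").fetchone()[0])
```

History is preserved: fact rows loaded before the change keep joining to the Toronto version via its surrogate key, while new facts pick up the Ottawa version.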

Nice to Have Skills:

  • Previous Data Analyst Experience

Interviews:

  • 1st round - Reporting Manager
  • 2nd Round - Hiring Manager

Disclaimer:

AI may be used in evaluating candidates.

Requirements

Education level

not determined

Years of experience

not determined

Written languages

not determined

Spoken languages

not determined