Data Warehouse Engineer
Toronto
Our client is a global software innovator, enabling organizations to digitally transform how they collaborate and get work done. Founded in 2005 and with a growing global reach, the company serves customers across every major industry, including more than half of all Fortune 500 companies.
We are looking for a strong Data Engineering candidate to help us build and scale reliable data pipelines and analytics infrastructure. You will get the opportunity to work with an industry leader and an expanding customer base that ranges from SMBs to the Fortune 500. This role is pivotal to the growth of the organization, and you will work closely with our Analytics, Data Science, Product, and Platform teams. The team is based in the UK, so experience working with a remote team is a plus.
Working closely with the Analytics and Development teams, you will be responsible for growing our data ingestion pipelines and the infrastructure needed to scale our Integrations and Analytics services. Using modern development technologies (Serverless, AWS Glue, Redshift, Athena) and languages (TypeScript/Python), our Data Engineering team is focused on building best-in-class infrastructure.
You'll own these pipelines from Source to Stream to Storage, ensuring that data flows to the right team at the right time.
Our Data Engineering teams build on modern development practices, so comfort with continuous delivery and deployment, infrastructure as code (Terraform, CDK), and agile ways of working is a must. We value the autonomy that modern development teams thrive on, and we give teams the space to build and grow in their own ways.
What we're looking for
- Proven experience in the field of Data Engineering
- Strong knowledge of modern languages like Python and TypeScript
- Passion for delivering large-scale data pipeline and warehousing solutions
- Familiarity with AWS (we use S3, Redshift, Athena, DynamoDB, Kinesis, and more)
- Experience with modern development tooling like Git
- Experience with workflow orchestration frameworks like Apache Airflow
- The ability to solve complex problems in a timely manner
- Comfort writing unit and integration tests
- Experience with both batch processing and streaming architectures
It's a bonus if you have
- Run services on Kubernetes
- Familiarity with the Atlassian suite of tools
- Developed against RESTful endpoints
- Worked with Serverless frameworks and methodologies
- Experience with any of the following: Hadoop, Spark, TensorFlow, Kafka
- Experience with Amazon Kinesis Data Firehose, Kinesis Data Streams, or other event ingestion frameworks