
Data Engineer

Company Name: Seattle Kraken
Location: Seattle, WA
Type of Company: Sports Team
Position Type:
Contact Name: Kevin Landauer
Contact Email:
Date Posted: February 9, 2022
Anticipated Start Date: March 21, 2022

Position Description

Position Summary: In 2021, the Seattle Kraken embarked on a series of projects with their partner Amazon Web Services (AWS). The two primary projects built the Seattle Data Platform, which transformed the Kraken's and Climate Pledge Arena's ability to deliver tailored, targeted experiences for their fans and guests. The Data Engineer will support the Seattle Data Platform, which ingests, transforms, organizes, stores, and serves data to and from a variety of sources. This individual will continue to build and maintain the Platform's full suite of AWS data infrastructure in support of the Seattle Kraken, Climate Pledge Arena (CPA), and Kraken Community Iceplex (KCI). We are looking for an engineer with strong experience in SQL, scripting, AWS, data architecture, and pipeline design, and a passion for creating scalable data architecture that drives continuous, reliable business value. This employee will work closely with the Business Intelligence Engineer to build out an architecture that scales with the needs of the business. There will be opportunities to build data pipelines in AWS using Apache Airflow, dbt, Lambda, and Glue.

Essential Duties & Responsibilities:
- Work with the Senior Manager of Data Engineering to support and maintain the Seattle Data Platform
- Design and monitor data pipelines drawing from a variety of data sources
- Deploy, configure, and update cloud services through automated processes
- Develop monitoring scripts and templates to gather metrics, detect patterns and events, and generate alarms
- Implement infrastructure-as-code products through CI/CD pipelines
- Ensure that pipelines follow security and governance standards
- Own end-to-end development of data resources for the BI team and the business

Desired Qualifications

Required Experience & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent work experience
- 3+ years of IT experience
- 2+ years working with a data warehouse and creating pipelines using SQL
- 2+ years of Python for programming or data analysis
- 2+ years of strong SQL experience
- A keen interest in new technologies and open source
- Willingness to research and self-study to keep technical skills relevant in a highly complex environment
- Ability to work independently and as part of a team
- Experience working in an Agile/Scrum environment
- Innovative, with the ability to think outside the box for creative problem solving

Preferred Experience & Qualifications:
- Experience with AWS Redshift and PostgreSQL
- Experience with Apache Airflow
- Experience with dbt (data build tool)
- Familiarity with AWS services (IAM, Lambda, Glue, S3)
- Familiarity with infrastructure-as-code (IaC) concepts (CDK, CloudFormation, Terraform, SAM)
- Software development/coding experience (Python, Scala, Java, C#, etc.)

How to Apply