If you're interested in one of our open positions, start by applying here and attaching your resume.
GCP Data Engineer (ETL)
Location: Vancouver, Calgary, Toronto
Job description
THIS IS NOT A DEVOPS POSITION.
EXPERIENCE IN GCP, PYSPARK, BIGQUERY, DATAPROC, AND AIRFLOW IS MANDATORY.
As a senior data engineer, you will be a hands-on developer who designs and delivers data warehouses, data lakes, self-service tooling, real-time streaming, and big data solutions for our rapidly growing Supply Chain Operations. Working within the Data Engineering organization, you will partner closely with business stakeholders, product managers, and engineering leaders to build the data platforms that enable these goals. You will be instrumental in building out the data pipelines that generate and transform our data over time, creating a cohesive, scalable, accurate, and performant foundational source of truth on which all data and analytics users across the company will build.
What You’ll Do
- Build data pipelines to assemble large, complex sets of data that meet non-functional and functional business requirements
- Deliver data engineering capabilities for streaming and batch-based data ingestion, enrichment, and aggregation
- Design and develop sophisticated data models and visualizations that support multiple use cases across different products or domains
- Define and manage SLAs for all data sets in your allocated areas of ownership
- Work closely with data architects, SMEs, and other technology partners to develop and execute the data architecture roadmap for different functional areas
- Mentor and grow the technical skills of engineers across multiple sprint teams by giving high-quality feedback in design and code reviews and providing training on new methods, tools, and patterns
- Collaborate with your stakeholders and other business analytics team leaders
What You Have
- 5+ years of experience in a data engineering role
- 3+ years of programming experience in at least one language such as Python, Scala, Java, or another modern OOP language
- 3+ years of experience writing SQL
- 3+ years of experience with schema design and dimensional data modeling
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
- Experience developing on cloud platforms such as Google Cloud Platform (GCP)
- Experience with real-time data streaming tools like Kafka, Kinesis, Apache Storm, or any similar tools
- Experience in designing data engineering solutions using open source and proprietary cloud data pipeline tools such as Airflow, dbt, Glue, and Beam
- Experience developing custom-built BI and big data reporting solutions using tools like Looker, Data Studio, or similar tools
- Experience with code management tools (e.g., Git, SVN) and DevOps tools (e.g., Docker, Jenkins)
- Excellent communication and presentation skills, strong business acumen, critical thinking, and the ability to work cross-functionally with engineering and business partners
Please send your resume to info@suviatechnologies.com