AWS Redshift, Sr Mgr
Your Role and Responsibilities
SSGA Technology & Architecture is adopting a computing framework for its analytic applications built on open-source software and public/private cloud hosting. The work encompasses architecture and development of the analytic computing framework in the cloud, migration of data and compute to the cloud, and cloud operations. As a member of a high-performing group, you will be responsible for designing, developing, and implementing cloud solutions and for optimizing their performance and quality.
A day in the role will include...
- Design & develop data pipelines that consolidate several data marts and on-premises data warehouses into AWS Redshift
- Design & develop the data warehouse in AWS Redshift, covering migration and transformation projects; design large-scale data processing systems, develop data pipelines optimized for scale, and troubleshoot platform issues
- Implement automated ETL workflows and routines using orchestration tools such as Airflow (a minimal sketch follows this list)
- Work with other employees and contractors, and communicate with business stakeholders
- Drive collaborative reviews of designs, code, test plans, and dataset implementations, and maintain high quality standards
- Troubleshoot complex data issues and perform root-cause analysis to resolve operational problems
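In practice, the Airflow orchestration work mentioned above often comes down to a small DAG wiring an extract step to a Redshift load. Below is a minimal sketch, assuming Airflow 2.x; the DAG name, task names, and callables are hypothetical placeholders, not a specific SSGA pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_mart_to_s3(**context):
    """Hypothetical: dump the nightly extract from an on-premises mart to S3."""
    ...


def copy_s3_to_redshift(**context):
    """Hypothetical: issue a COPY so Redshift loads the staged S3 files."""
    ...


with DAG(
    dag_id="mart_to_redshift_nightly",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,  # skip backfilling runs before the deploy date
) as dag:
    extract = PythonOperator(
        task_id="extract_mart_to_s3",
        python_callable=extract_mart_to_s3,
    )
    load = PythonOperator(
        task_id="copy_s3_to_redshift",
        python_callable=copy_s3_to_redshift,
    )

    # Load into Redshift only after the extract has landed in S3.
    extract >> load
```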
Required Technical and Professional Expertise
- 8-12 years of overall IT experience, with at least 4 years of expertise in a data warehouse platform such as Teradata, Netezza, Redshift, or Snowflake
- Experience working with the AWS ecosystem and with AWS pipeline services to develop ETL for data movement into Redshift; experience mapping source-to-target rules and fields
- Experience with performance tuning, parallel query optimization and execution, and large-scale data analytics
- Database modeling and design for enterprise data warehousing (star and snowflake schemas)
- Expert with at least one major database; expert in PL/SQL and developing stored procedures
- Working knowledge of Unix/Linux environments; expert at writing bash/shell scripts
- Experience writing Python scripts for ETL/data pipelines (see the loading example after this list)
- Exposure to distributed computing, including Spark, Airflow, Hadoop, EMR, and Databricks
- Exposure to running, configuring, and developing Spark jobs
- A minimum of a bachelor's degree in Computer Science or another relevant subject area
- Experience working in a fast-paced, results-oriented environment
- Demonstrated ability to learn new technologies and skill sets
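To illustrate the Python-for-ETL and Redshift skills called out above, here is a minimal loading sketch. It assumes psycopg2 as the client library; the cluster endpoint, credentials, schema, table, S3 bucket, and IAM role are all hypothetical:

```python
import psycopg2

# Hypothetical cluster endpoint and credentials; in practice these would
# come from a secrets manager, not literals in the script.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="change-me",
)

# COPY pulls staged files from S3 in parallel across the cluster's slices,
# which is the preferred bulk-load path into Redshift (vs. row INSERTs).
copy_sql = """
    COPY staging.trades
    FROM 's3://example-bucket/trades/2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # the outer `with conn` commits on success
```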