AVP, Data Engineering Trainer, Group Consumer Banking and Big Data Analytics Technology, Technology and Operations
Business Function
Group Technology and Operations (T&O) enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group T&O, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.

Responsibilities
- Devise technical training programmes according to organisational requirements
- Determine course content according to objectives
- Produce training schedules and agendas, and deliver training (virtual / classroom)
- Design and create e-learning modules
- Work with architects and data engineers to analyse source data and data flows, covering both structured and unstructured data
- Analyse and visualise data from diverse sources, interpret the results in the business context, and report findings clearly and concisely.
- Apply data mining, NLP, and machine learning (both supervised and unsupervised) to improve relevance and personalization algorithms.
- Discover data sources, get access to them, import them, clean them up, and make them "model-ready". You need to be willing and able to do your own ETL.
- Create and refine features from the underlying data. You'll enjoy developing just enough subject-matter expertise to have an intuition about which features might make your model perform better, and then you'll lather, rinse and repeat (a brief sketch of this kind of ETL and feature work follows this list).
- Run regular A/B tests, gather data, perform statistical analysis, draw conclusions on the impact of your optimizations and communicate results to peers and leaders.
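To make the ETL and feature-engineering responsibilities above concrete, here is a minimal Scala/Spark sketch; the input path, column names and derived features are illustrative assumptions, not a specification of the actual data.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FeaturePipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("feature-pipeline-sketch").getOrCreate()

    // Import the raw source (hypothetical path and columns), then clean it up.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/raw/transactions.csv")

    val cleaned = raw
      .na.drop(Seq("customer_id", "amount"))   // drop rows missing key fields
      .filter(col("amount") > 0)               // keep only valid transactions

    // Derive simple per-customer features to make the data "model-ready".
    val features = cleaned
      .groupBy("customer_id")
      .agg(
        count(lit(1)).as("txn_count"),
        avg(col("amount")).as("avg_amount"),
        max(col("amount")).as("max_amount")
      )

    features.write.mode("overwrite").parquet("/data/features/customer_features")
    spark.stop()
  }
}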
Apply Now
We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognises your achievements.

Requirements
- Experience in one or more areas of big data and machine learning
- Experience in delivering technical training in classroom or virtual format
- The ability to work with loosely defined requirements and exercise your analytical skills to clarify questions, share your approach and build/test elegant solutions in weekly sprint/release cycles.
- Development experience in Java/Scala and pride in producing clean, maintainable code
- Real-world experience in solving business problems by deploying one or more machine learning techniques
- Experience creating pipelines to analyse data, extract features and update models in production.
- Independence and self-reliance while being a pro-active team player with excellent communication skills.
- Hands-on development with key technologies including Scala, Spark, and other relevant distributed computing languages, frameworks, and libraries.
- Experience with distributed databases and the key issues affecting their performance and reliability.
- Experience using high-throughput, distributed message queueing systems such as Kafka.
- Familiarity with operational technologies, including Docker, Chef, Puppet, Zookeeper, Terraform, and Ansible
- An ability to periodically deploy systems to on-prem environments.
- Experience with key development tools such as Git, and familiarity with collaboration tools such as Jira and Confluence.
- Experience in graph and stream processing
- Experience in migrating SQL from traditional RDBMS platforms to Spark and big data technologies (see the sketch after this list)
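To illustrate the last requirement, below is a minimal Scala sketch of running a legacy-style aggregate query on Spark SQL; the table name, columns and paths are assumptions made for the example.

import org.apache.spark.sql.SparkSession

object SqlMigrationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sql-migration-sketch").getOrCreate()

    // Expose a Parquet extract of the legacy table as a temporary view so the
    // original RDBMS query can run on Spark SQL largely unchanged.
    spark.read.parquet("/data/extracts/transactions").createOrReplaceTempView("transactions")

    val monthlyTotals = spark.sql(
      """SELECT account_id,
        |       date_trunc('month', txn_date) AS month,
        |       SUM(amount)                   AS net_amount
        |FROM transactions
        |GROUP BY account_id, date_trunc('month', txn_date)""".stripMargin)

    monthlyTotals.write.mode("overwrite").parquet("/data/marts/monthly_totals")
    spark.stop()
  }
}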