• Competitive
  • Singapore
  • Permanent, Full time
  • OCBC Bank
  • 2018-10-15

Hadoop Big Data Product Support specialist

The Hadoop Big Data Product Support specialist will be a techno-functional member of the support team within the Information Technology Department.

In this role, the staff member will be responsible for Incident and Problem resolution, providing end-to-end resolution to ensure all jobs are completed within defined objectives. He or she will troubleshoot production issues and coordinate with other teams to remediate them and implement fixes in production, minimizing impact to the business and downstream systems. The role also involves engaging and partnering with Application Development teams and various platform owners, such as Teradata, ETL, and Business Intelligence.

Roles/Responsibilities
  • Develop, administer, and troubleshoot Hadoop technologies such as HDFS, MapReduce, Hive, Sqoop, HBase, Kafka, and Spark
  • Monitor, administer, and manage the bank's Cloudera CDH Hadoop cluster
  • Maintain and support Hadoop ecosystem components
  • Identify and mitigate risks to service delivery, and proactively resolve issues before they impact SLAs
  • Communicate effectively and efficiently with business line staff and application development teams
  • Lead by example through high performance in customer service, collaboration, teamwork, reliability, productivity, and execution
  • Coordinate platform maintenance, including change and release management (planned and emergency deployments, configuration, and patching)
  • Monitor systems in real time using custom and off-the-shelf tools; engineer and implement custom monitoring solutions where needed
  • Participate in run book development
  • Provide end-to-end Incident and Problem resolution
  • Perform in-depth research to identify the sources of production issues
  • Perform root cause analysis of issues and report the outcome to the business community and management
  • Develop and utilize core support tools and processes, improving day-to-day practices for support team members with the goal of delivering service improvements to the business
  • Fine-tune applications and systems for high performance and higher volume throughput


Qualifications
  1. Preferably 2-5 years of experience working in the Hadoop/Big Data field
  2. Working experience with tools such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, and MapReduce
  3. Deep understanding of the Hadoop ecosystem and strong conceptual knowledge of Hadoop architecture components
  4. Experience in data management, data visualization, and statistical analysis
  5. Self-starter who works with minimal supervision, with the ability to work in a team of diverse skill sets
  6. Experience working in an Agile development process and a good understanding of the various phases of the Software Development Life Cycle
  7. Good interpersonal skills and excellent communication skills in written and spoken English