Senior Full Stack Developer

in Singapore
Permanent, Full time
Experian's Product Accelerator team, which sits within the Asia Pacific Innovation Hub, is looking for an experienced person to join the team as a Data Engineer, working with a small team of Technical Architects, Product Managers, Business Analysts, and Data Scientists. You will be responsible for supporting the assessment of technology within the Innovation Hub, specifically for infrastructure, hardware, data ingestion and pipeline decisions, cloud technology, and software considerations. We need a candidate who can establish a technical vision and provide thought leadership on technological development, working alongside the senior Technical Architect and Product Strategists. The role is focused on the APAC region. You will bring a mix of data engineering and software engineering skills, as our team covers data-heavy products, software applications, and products that combine both.
Key aspects of the role are to:

  1. Identify, assess and evaluate existing technology assets
  2. Identify, assess and evaluate emerging technologies
  3. Influence Experian's technology investment plan
  4. Lead and execute experiments involving Experian and emerging technologies
  5. Work with start-ups and collaborate with other corporates
  6. Evaluate and identify appropriate technology platforms and stacks for product delivery
  7. Raise market awareness of Experian's value to technology companies
  8. Work as a team with other members of the Innovation Hub to drive success

Scope of the role will cover:
- Support product delivery across the region - for an existing pipeline of initiatives and products that require planning and build acceleration, and for future innovative products
- Contribute to new product ideas through the innovation team
- Support project estimation and the identification of key risks and issues
- Plan and execute build work in a continuous delivery and agile way
- Stakeholder management: identify relevant stakeholders (clients, third party suppliers, internal teams) and their area and level of influence in each project. Support the Product Strategists and Project Managers with appropriate communication to relevant stakeholders.
- Working in a fast-paced environment - the Innovation Hub is in start-up mode, so leaning in is required; the environment is dynamic and changing and requires flexibility, but has well-defined strategic goals to achieve.
- Work with multiple personalities and skill sets (think X-Men characters), including product strategists, project managers, business analysts, technical analysts, data engineers, technical architects, and software developers. Be comfortable working with teams across Australia, NZ, India, Japan, China, Malaysia, the U.S., and the U.K.
- Able to investigate issues, identify impacts and root causes, collaborate with multiple stakeholders, and drive out pragmatic solutions

Key Responsibilities

Responsibilities include, but are not limited to, taking the lead in the following areas:
  • The ideal candidate will demonstrate the ability to deliver results quickly while working in a dynamic, cross-functional, team-oriented environment
  • Support the engineering and analytics teams through the use and understanding of analytical tools operating in a big data cloud environment
  • Collaborate with the Global Architecture team members to continue to evaluate, recommend, and integrate new analytical tools
  • Provide technical leadership and hands-on validation of systems during design, development, and testing
  • Proven record of success in world-class enterprise big data engineering
  • Experience with commercial big data environments (Cloudera preferred)
  • Experience with cloud solutions (AWS preferred); good to know: AWS CLI, Boto, AWS Lambda, automation
  • Experience with analytical domain tools including SAS, RStudio, Hue, Python, JupyterHub, H2O, Cloudera, Databricks, etc.
  • Working knowledge of automated configuration and deployment of tools is a plus
  • Working knowledge of scripting required. Experience with Python is preferred, and other scripting experience is a plus
  • Working knowledge of both Windows and Linux environments is required, along with Hadoop (Cloudera preferred)
  • Knowledge of software development practices and IT operations life cycles is required
  • Experience working in Scrum Agile environment
  • Experience working in a complex, matrix organization, and working with multiple stakeholders across functional and technical skillsets
  • Ability to effectively work with business and other IT teams
  • Excellent verbal and written communication skills; ability to communicate effectively with development, operations, project management, and business personnel
  • Applies existing knowledge, and ability to learn and apply more. Tries new approaches and learns from each work assignment
  • Demonstrates flexibility within a variety of changing situations, while working with individuals and groups.
  • Multitasks when required and demonstrates the ability to produce quality work with minimal supervision
  • Hand over specifications to the technical teams so they can develop technical specifications that meet the business requirements. Support the technical teams in understanding the business requirements.
  • Maintain a high level of mutual respect and open communication with the client stakeholders at all levels of the project team, PM and executive sponsors
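The scripting and automated-deployment responsibilities above (Python preferred) typically involve small glue tools. As a hedged illustration only - the function name, config keys, and environment names below are hypothetical, not Experian specifics - a minimal deployment-config validation script might look like this:

```python
# Minimal config-validation sketch for a deployment script.
# All keys, names, and allowed values are hypothetical examples.
REQUIRED_KEYS = {"environment", "region", "instance_count"}
VALID_ENVIRONMENTS = {"dev", "staging", "prod"}

def validate_config(config: dict) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    env = config.get("environment")
    if env is not None and env not in VALID_ENVIRONMENTS:
        problems.append(f"unknown environment: {env!r}")
    count = config.get("instance_count")
    if count is not None and (not isinstance(count, int) or count < 1):
        problems.append(f"instance_count must be a positive integer, got {count!r}")
    return problems

if __name__ == "__main__":
    good = {"environment": "dev", "region": "ap-southeast-1", "instance_count": 2}
    bad = {"environment": "qa", "instance_count": 0}
    print(validate_config(good))  # empty list: config passes
    print(validate_config(bad))   # reports missing key, bad environment, bad count
```

Checks like these are the kind of lightweight automation that keeps continuous-delivery pipelines honest before a rollout.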

Candidate Profile and Experience


  • Minimum of 5-8 years in progressive professional roles in big data and analytics
  • A bachelor's degree in information systems, engineering, computer science or equivalent work experience in big data is preferred
  • A high-calibre individual who can create and propose technical data engineering solutions using an open-source technology stack that supports our data scientists' analytical tools of choice
  • You will play a critical role in designing, developing and implementing data lakes using big data technologies such as Hadoop.
  • Candidate needs to be hands-on and have a curious nature
  • You will be responsible for building scalable distributed data solutions
  • In collaboration with subject matter experts and data stewards, define and implement data strategy, policies, controls, and programs to ensure enterprise data is accurate, complete, secure, and reliable
  • Manage, analyze, and resolve data initiative issues, and manage the revisions needed to best meet internal and customer requirements while adhering to published data standards
  • Assist in applying and implementing data standards and guidelines on data stewardship, coding structures, and data replication to ensure access to and integrity of data sets
  • Be proactive and a self-starter, and be comfortable with ambiguity
  • Experian product knowledge is advantageous, but not essential
  • Preferably some vendor experience
  • Should have previously worked on a large-scale Big Data project through to implementation
  • Strong presentation skills
  • Self-motivated, assertive, goal oriented and able to work productively with minimal supervision
  • Excellent written and verbal communication skills
  • Good interpersonal skills and ability to work effectively with other team members, on a global basis
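The data-lake work described above commonly uses Hive-style partitioned layouts on HDFS or S3, where partition columns are encoded in the directory path so query engines can prune whole partitions. The sketch below only illustrates that path convention; the table and column names are invented for the example and assume nothing about Experian's actual data model:

```python
from datetime import date

def partition_path(table: str, dt: date, country: str) -> str:
    """Build a Hive-style partition path; the partition columns are illustrative."""
    return f"{table}/dt={dt.isoformat()}/country={country.upper()}"

# A hypothetical daily partition for a Singapore dataset:
print(partition_path("credit_events", date(2024, 1, 31), "sg"))
# credit_events/dt=2024-01-31/country=SG
```

Because the partition values live in the path itself, engines such as Hive, Impala, and Spark can skip irrelevant directories entirely when a query filters on `dt` or `country`.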

Technical Skills:
  • 5+ years of hands-on programming using Python, Java, C, C++, and Scala
  • 5+ years of hands on experience using:
    • Big Data technologies: HDFS, HBase, Hive, MapReduce, Pig, Spark, Impala, Storm, Kafka
    • NoSQL technologies: Solr, Cassandra, Neo4j, and Redis
    • Cloud technologies: AWS; good to know: AWS CLI, Boto, AWS Lambda, automation
  • Working knowledge of relational databases (RDBMS)
  • Working knowledge of developing RESTful APIs
  • Knowledge of UNIX shell scripting is highly preferable
  • Knowledge of multi-tier architectures and deployment
  • Knowledge of real-time, batch applications
  • Knowledge of IBM mainframes, z/OS, and virtualization technologies a plus
  • Knowledge of Change Data Capture/Replication technologies a plus
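The MapReduce model named in the Big Data list can be illustrated without any cluster: the same map / shuffle / reduce shape that Hadoop and Spark apply at scale works on plain Python lists. This is a teaching sketch of the programming model, not production code:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str):
    # Map: emit a (word, 1) pair for each word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts for each word.
    return {key: sum(values) for key, values in grouped.items()}

lines = ["big data big pipelines", "data pipelines scale"]
counts = reduce_phase(shuffle_phase(chain.from_iterable(map_phase(l) for l in lines)))
print(counts)
# {'big': 2, 'data': 2, 'pipelines': 2, 'scale': 1}
```

In a real cluster the map and reduce phases run in parallel across many machines and the shuffle moves data over the network; the logic per record stays this simple.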