Big Data Engineer (Java, Python, Hadoop)

  • Salary: £600 - £650
  • Location: London, England, United Kingdom
  • Job Type: Contract, full time
  • Company: Alexander Ash Consulting
  • Updated on: 23 Jul 2017

Client

My client, a leading financial services electronic broker, is currently searching for a Senior Big Data Engineer. You’ll be working on a very large-scale, cloud-based, multi-tenant, and secure data and analytics platform, with a focus on ingesting, managing, and analysing large data sets, and on building APIs and visualisations that turn that information into business insights.

Skills

  • Strong background in Java 8 and Python
  • Scala (nice to have)
  • Hands-on experience with PostgreSQL, Cassandra, Kafka, and Akka (a brief Kafka sketch follows this list)
  • Strong understanding of, and experience with, the AWS cloud storage and computing platform
  • Expert knowledge of data modelling and an understanding of different data structures and their benefits and limitations under particular use cases
  • Hands-on experience with Apache Flink or Spark
  • Experience with data integration tools such as Talend, Pentaho, and Informatica for both ETL and ELT processing
  • Experience working in a Linux environment and with shell scripting
  • Knowledge of software engineering best practices such as Test-Driven Development (TDD) and Continuous Integration (CI)
  • Solid understanding of enterprise patterns and of applying best practices when integrating various inputs and outputs at scale
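
To make the stack concrete, here is a minimal Java 8 Kafka ingestion sketch, purely for illustration; the broker address, topic name, and market-data payload are assumptions for the example rather than details of the client's platform.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Minimal Kafka ingestion sketch; all names below are illustrative.
    public class TickProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            // try-with-resources closes the producer and flushes pending sends
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // hypothetical market-data tick keyed by instrument symbol
                producer.send(new ProducerRecord<>("ticks", "EURUSD", "1.0921"));
            }
        }
    }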

Responsibilities

  • Translate complex functional and technical requirements into detailed technical designs and high-performing, distributed big data applications
  • Design and build end-to-end data pipelines for a given business use case, based on a set of core components, using big data tools and technologies (see the sketch after this list)
  • Partner with other engineering teams to help architect and build the data pipelines that ingest hundreds of millions of data points into our data platform
  • Design SQL/NoSQL data models that support sub-millisecond queries and store data in the most space-efficient way
  • Implement physical database models ensuring that data models are consistent with the ecosystem model (e.g., entity names, relationships and definitions)
  • Design, implement, and maintain enterprise-level security
  • Become an expert on the AWS services we leverage, and help to integrate our big data infrastructure efficiently in the AWS cloud
  • Build services and tools to make our platform accessible to all our business applications
  • Conduct design and code reviews in a Continuous Deployment environment
  • Support automated system testing, user testing, and production implementation
  • Work in a cross-functional team alongside talented engineers and data scientists
  • Write clean, highly performant code that is unit tested, code reviewed, and checked in regularly for continuous integration
  • Great problem-solving skills, and the ability and confidence to hack your way out of tight corners
  • Strong organisational skills and the ability to successfully manage multiple tasks
  • Excellent written skills, essential for report writing and proofreading
  • Ability to prioritise and meet deadlines
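
By way of illustration of the pipeline responsibilities above, the following is a minimal ingest, transform, and store sketch in Java using Spark, one of the two engines named in the skills list; the input file, column name, and output path are hypothetical.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    // Minimal end-to-end pipeline sketch; paths and column names are assumed.
    public class PipelineSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("pipeline-sketch")
                    .master("local[*]") // local mode, for illustration only
                    .getOrCreate();

            // Ingest: read a hypothetical CSV of trade events with a header row
            Dataset<Row> trades = spark.read()
                    .option("header", "true")
                    .csv("trades.csv");

            // Transform and store: count trades per symbol, write as Parquet
            trades.groupBy("symbol").count()
                  .write().mode("overwrite").parquet("trade_counts");

            spark.stop();
        }
    }

A production version of such a pipeline would read from the platform's Kafka topics and write to cloud storage rather than local files.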