• Competitive
  • Raleigh, NC, USA
  • Permanent, Full time
  • Credit Suisse
  • 20 Aug 18

Big Data Engineer # 115238

We Offer
We are seeking a talented, experienced Big Data Engineer to join a growing, high-visibility cross-Bank team that is developing and deploying solutions to some of Credit Suisse's most challenging analytic and big data problems. As a member of this team, you will work with clients and data spanning Credit Suisse's global organization to solve emerging mission-critical challenges using emerging technologies such as:
  • Distributed file systems and storage technologies (HDFS, HBase, Accumulo, Hive).
  • Large-scale distributed data analytic platforms and compute environments (Spark, Map/Reduce).
  • Tools for semantic reasoning and ontological data normalization (RDF, SPARQL, Tamr).

The role offers:
  • A hands-on engineering position responsible for supporting client engagements for Big Data engineering and planning.
  • A solid platform for you to drive the engineering/design decisions needed to achieve cost-effective, high-performance results.
  • The opportunity to think outside the box on improvements to current processes and enhancements to the existing platform.
  • You will be part of a global team of Big Data engineers who are engineering the platform and innovating in core areas of big data, real time analytics and large-scale data processing.

Credit Suisse maintains a Working Flexibility Policy, subject to the terms as set forth in the Credit Suisse United States Employment Handbook.

You Offer
  • You have a formal background and proven experience in engineering, mathematics and computer science, particularly within the financial services sector.
  • You have hands-on programming/scripting experience (Python, Java, Scala, Bash).
  • You have experience with DevOps tools (Chef, Docker, Puppet, Bamboo, Jenkins).
  • You are proficient at the Linux/Windows command line and have an understanding of Unix/Linux, including system administration and shell scripting.
  • You have proficiency with Hadoop v2, MapReduce, HDFS, and Spark.
  • You have experience managing a Hadoop cluster, with all included services.
  • You have good knowledge of Big Data querying tools, such as Pig, Hive, Impala, and Spark.
  • You understand core data concepts (ETL, near-/real-time streaming, data structures, metadata, and workflow management).
  • You have the ability to function within a multidisciplinary, global team. You are a self-starter with a strong curiosity for extracting knowledge from data and the ability to elicit technical requirements from a non-technical audience.
  • You collaborate with team members, business stakeholders, and data SMEs to elicit, translate, and prescribe requirements, and you cultivate sustained innovation to deliver exceptional products to customers.
  • Do you have experience with integration of data from multiple data sources?
  • Do you have strong communication skills and the ability to present deep technical findings to a business audience?

For more information visit Technology Careers
