Data Engineer

We are seeking talented, experienced data engineering and science professionals to join a growing, high-visibility cross-Bank team that develops and deploys solutions to some of Credit Suisse’s most challenging analytic problems. As a member of this team, the successful candidate will work with clients and data spanning Credit Suisse’s global organization to solve mission-critical challenges using emerging technologies such as:

  • Distributed file systems and storage technologies (HDFS, HBase, Accumulo, Hive)

  • Large-scale distributed data analytic platforms and compute environments (Spark, MapReduce)

  • Emerging data science techniques and mathematical models

Required Skills

  • A formal background and proven experience in mathematics, statistics and computer science, particularly within the financial services sector. Experience within related fields such as physics, chemistry or biology will be considered.

  • A successful history of developing production-quality enterprise software using programming languages such as Python, C, C++, Java or Scala.

  • A good knowledge of Unix/Linux including system administration and shell scripting.

  • Experience in the administration, management and deployment of Hadoop production clusters.

  • The ability to function within a multidisciplinary, global team; a self-starter with a strong curiosity for extracting knowledge from data and the ability to elicit technical requirements from a non-technical audience.

  • Solid communication skills and the ability to present deep technical findings to a business audience.

Additional Skills

  • Experience with distributed computing environments such as Hadoop, Spark or High Performance Computing (HPC) systems and an understanding of parallel application performance and reliability.

  • Experience developing software within a large enterprise environment (agile or related development methodologies, release management, JIRA).

  • Experience with a Hadoop distribution and core Hadoop ecosystem components: MapReduce and HDFS.

  • Experience with Hadoop ETL/Data Ingestion: Sqoop, Flume, Hive, Spark.

  • Experience with open-source configuration management and deployment tools, and scripting in Python, shell or Bash.

  • Investment Banking experience is a plus.


For more information, visit Technology Careers.