- London, England, United Kingdom
- Permanent, Full time
- 23 May 19
Help us to build scalable, robust data systems.
Annual payment volume at GoCardless exceeds 7.5 billion, and we process tens of thousands of transactions every day. As the company grows, so does demand for the data behind these transactions, and our ability to make sound, timely business decisions depends heavily on the availability and reliability of that data.
We're looking for a data engineer who can help us build data systems that scale with this demand.
As one of our Data Engineers, you will help drive our Data Engineering development strategy from day one. You'll play a vital role in productionising new data systems, as well as scaling and improving existing ones, such as our in-house fraud detection system. You will also work with people across technical and commercial teams to understand their data needs, and implement the best possible infrastructure to meet them.
In terms of our stack, we're heavy users of Postgres: both our primary application database and our analytics database run on Postgres. Our backend is built in Ruby, but we use Python and Java for our data projects, and a fair amount of Go in our infrastructure team.
We're also rolling out Tableau to give everyone at GoCardless simple, self-service access to our data, backed by a data warehouse in Google BigQuery. You'll be responsible for improving, scaling and maintaining these systems, as well as leading the adoption of best practices in testing, monitoring and security.
What we're looking for
We want to work with people who are passionate about building and maintaining reliable, performant systems, and who have practical experience of doing so. You should have a solid professional background in software engineering and a deep understanding of relational databases. Given the versatile nature of the role, we're looking for someone who learns fast, enjoys working with others, and makes pragmatic decisions.
Bonus points for:
- Strong Python, Java or Scala skills.
- Experience designing data warehouses and assembling data pipelines.
- Familiarity with modern data warehousing tools such as Google BigQuery or Amazon Redshift.
- Computer Science degree, or equivalent experience.
- Experience with Python's core data science libraries, e.g. pandas and scikit-learn.
- Experience with Apache Beam.
In your application, please include your CV, a link to your GitHub, and a brief description of any interesting projects you've worked on that are relevant to the role.