Data Warehouse Engineer - Enterprise Metering (Consultant)
Our challenge is complexity at scale. As Bloomberg's Enterprise Data business has grown year over year, the volume of financial data our clients consume has exploded. We need to build systems that support high-volume data ingestion, large-scale data storage and distributed data processing. Our data captures the who, what, when, where and how of our clients' use of various Bloomberg Enterprise products. Our mission is to use this data to power operational use cases like invoicing and to feed our business intelligence systems. We are very sensitive to our customers' data privacy needs, and our goal is to ensure appropriate controls and auditability of our systems.
As an engineering consultant on the Metering team, you will help us build and improve our Enterprise Metering Platform to track billions of data points per day and apply various algorithms that drive our product commercials. What's in it for you:
You'll be part of a team that works hard, works smart and has a mission to leverage big data technologies to tackle our at-scale data processing problems. You will work with many open-source tools and technologies and will be encouraged to research and propose the right solutions to our technology and business challenges. You'll get involved in building systems that will eventually expand to impact the whole firm, not just the Enterprise business!
On our team, you will be a self-starting individual, motivated and eager to learn and grow, responsible for the development of our Greenplum-based MPP (massively parallel processing) data warehouse. We are looking for a professional with knowledge of Greenplum, PostgreSQL and/or Microsoft SQL Server, significant experience writing ETL, and Python scripting skills. You will apply your deep knowledge of data warehousing methodologies and best practices. You will be involved in all phases of the development lifecycle and will be expected to work on the design, architecture and implementation of our data warehouse environment, participate in code and technology reviews, and work closely with other members of the team. You will be exposed to a variety of data domains and have enormous opportunities to learn and contribute ideas. You'll need to have:
- Extensive database experience developing complex SQL stored procedures/functions, loading and processing large data sets, and performing storage optimization and query tuning
- Python scripting experience with data access and writing custom libraries
- Deep understanding of analytical SQL, such as windowing functions and OLAP concepts
- Excellent understanding of VLDB performance aspects: table partitioning, sharding, table distribution and optimization techniques
- Bachelor's/Master's/PhD degree in Computer Science, Engineering, Finance or a related field (or equivalent experience)
- Solid understanding of Kimball data warehousing methodologies, the various stages of ETL processing, and dimensional data modeling

We'd love to see:
- Hands-on experience working with Kafka, Phoenix, Hadoop and streaming frameworks
- Familiarity with other databases (Oracle, DB2, MySQL)
- Working knowledge of Linux scripting languages, e.g. ksh, Bash
- Python experience with numpy/pandas/scipy

If this sounds like you, apply!
Bloomberg is an equal opportunities employer, and we value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.