Posted Oct 17, 2018 - Requisition No. 71337
The Enterprise Content & Delivery development team is responsible for building highly performant distributed systems that manage large volumes of client data. We work with multiple groups within Bloomberg to gather millions of data points per day, and we develop tools to transform and store that data efficiently for trend analysis, billing, and business intelligence.
We are looking to enhance our software suite with a new automated data ingestion and analytics pipeline. First, we want every application we on-board to define its own schemas and use them to drive data ingestion. Second, we want to adopt or build a framework for running analysis against this data store, all in a manner that preserves operational independence. The right candidate will have extensive hands-on experience with Big Data technologies such as Hadoop and HBase, as well as frameworks like Apache Spark and/or Apache Kafka.
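To give a flavor of the schema-driven ingestion idea described above, here is a minimal, purely illustrative sketch in Python. All names in it (`IngestSchema`, `IngestionPipeline`, `register_schema`, `ingest`) are hypothetical and not part of any Bloomberg system; the sketch only shows the pattern of an on-boarded application registering a schema that then validates its records at ingestion time.

```python
# Hypothetical sketch: each on-boarded application registers a schema,
# and the pipeline validates incoming records against it before storage.
from dataclasses import dataclass


@dataclass
class IngestSchema:
    """A named schema mapping field names to expected Python types."""
    name: str
    fields: dict  # field name -> expected type

    def validate(self, record: dict) -> bool:
        # Record must have exactly the declared fields, each of the right type.
        return (record.keys() == self.fields.keys()
                and all(isinstance(record[k], t) for k, t in self.fields.items()))


class IngestionPipeline:
    """Holds registered schemas and accepts only records that validate."""

    def __init__(self):
        self._schemas = {}
        self._store = []  # stand-in for a real data store (e.g. HBase)

    def register_schema(self, schema: IngestSchema) -> None:
        self._schemas[schema.name] = schema

    def ingest(self, schema_name: str, record: dict) -> bool:
        schema = self._schemas[schema_name]
        if not schema.validate(record):
            return False  # reject malformed records at the door
        self._store.append((schema_name, record))
        return True


# Example: an on-boarded "billing" application defines its own schema.
pipeline = IngestionPipeline()
pipeline.register_schema(IngestSchema("billing", {"client_id": str, "usage": int}))
assert pipeline.ingest("billing", {"client_id": "abc", "usage": 42})
assert not pipeline.ingest("billing", {"client_id": "abc"})  # missing field
```

In a production setting this role is typically played by a schema registry and a serialization format such as Avro or Protobuf, with the validated stream flowing through Kafka into the data store.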