Creating an extensible 100+ PB real-time big data platform by unifying storage and serving

Title
Creating an extensible 100+ PB real-time big data platform by unifying storage and serving / Reza Shiftehfar.
Publication
[Place of publication not identified] : O'Reilly Media, 2019.
Physical Description
1 online resource (1 streaming video file (41 min., 49 sec.)) : digital, sound, color
Local Notes
Access is available to the Yale community.
Notes
Title from title screen (viewed July 24, 2020).
Access and use
Access restricted by licensing agreement.
Summary
"Reza Shiftehfar reflects on the challenges faced and proposes architectural solutions to scale a big data platform to ingest, store, and serve 100+ PB of data with minute-level latency while efficiently utilizing the hardware and meeting security needs. You'll get a behind-the-scenes look at the current big data technology landscape, including various existing open source technologies (e.g., Hadoop, Spark, Hive, Presto, Kafka, and Avro) as well as what Uber's tools such as Hudi and Marmaray. Hudi is an open source analytical storage system created at Uber to manage petabytes of data on HDFS-like distributed storage. Hudi provides near-real-time ingestion and provides different views of the data: a read-optimized view for batch analytics, a real-time view for driving dashboards, and an incremental view for powering data pipelines. Hudi also effectively manages files on underlying storage to maximize operational health and reliability. Reza details how Hudi lowers data latency across the board while simultaneously achieving orders of magnitude of efficiency over traditional batch ingestion. He then makes the case for near-real-time dashboards built on top of Hudi datasets, which can be cheaper than pure streaming architectures. Marmaray is an open source plug-in based pipeline platform connecting any arbitrary data source to any data sink. It allows unified and efficient ingestion of raw data from a variety of sources to Hadoop as well as the dispersal of the derived analysis result out of Hadoop to any online data store. Reza explains how Uber built and designed a common set of abstractions to handle both the ingestion and dispersal use cases, along with the challenges and lessons learned from developing the core library and setting up an on-demand self-service workflow. Along the way, you'll see how Uber scaled the platform to move around billions of records per day. You'll also dive into the technical aspects of how to rearchitect the ingestion platform to bring in 10+ trillion events per day at minute-level latency, how to scale the storage platform, and how to redesign the processing platform to efficiently serve millions of queries and jobs per day. You'll leave with greater insight into how things work in an extensible modern big data platform and inspired to reenvision your own data platform to make it more generic and flexible for future new requirements. This session is from the 2019 O'Reilly Strata Conference in New York, NY."--Resource description page.
Variant and related titles
Creating an extensible one hundred plus petabyte real-time big data platform by unifying storage and serving
O'Reilly Safari. OCLC KB.
Format
Images / Online / Video & Film
Language
English
Added to Catalog
March 03, 2022
Performers
Presenter, Reza Shiftehfar.