Reading Notes (in English) on Mining of Massive Datasets, 3rd Edition: Chapter 2 MapReduce and the New Software Stack

Source: the publicly available English textbook and slides for Mining of Massive Datasets, 3rd Edition.

Chapter 2 MapReduce and the New Software Stack

A computing cluster is a large collection of commodity hardware: conventional processors (“compute nodes”) connected by Ethernet cables or inexpensive switches.

The software stack begins with a new form of file system, called a “distributed file system,” which features much larger units than the disk blocks in a conventional operating system. Distributed file systems also provide replication of data or redundancy to protect against the frequent media failures that occur when data is distributed over thousands of low-cost compute nodes.

Compute nodes are stored on racks, perhaps 8–64 on a rack. The nodes on a single rack are connected by a network, typically gigabit Ethernet. There can be many racks of compute nodes, and racks are connected by another level of network or a switch. The bandwidth of inter-rack communication is somewhat greater than the intra-rack Ethernet, but given the number of pairs of nodes that might need to communicate between racks, this bandwidth may be essential.

💡 rack means “机架”

Some important calculations take minutes or even hours on thousands of compute nodes. If we had to abort and restart the computation every time one component failed, then the computation might never complete successfully.
The solution to this problem takes two forms:

  1. Files must be stored redundantly. If we did not duplicate the file at several compute nodes, then if one node failed, all its files would be unavailable until the node was replaced. If we did not back up the files at all and the disk crashed, the files would be lost forever.
  2. Computations must be divided into tasks, such that if any one task fails to execute to completion, it can be restarted without affecting other tasks. This strategy is followed by the MapReduce programming system.

Summary of Chapter 2

  • Cluster Computing: A common architecture for very large-scale applications is a cluster of compute nodes (processor chip, main memory, and disk). Compute nodes are mounted in racks, and the nodes on a rack are connected, typically by gigabit Ethernet. Racks are also connected by a high-speed network or switch.
  • Distributed File Systems: An architecture for very large-scale file systems has developed recently. Files are composed of chunks of about 64 megabytes, and each chunk is replicated several times, on different compute nodes or racks.
  • MapReduce: This programming system allows one to exploit parallelism inherent in cluster computing, and manages the hardware failures that can occur during a long computation on many nodes. Many Map tasks and many Reduce tasks are managed by a Master process. Tasks on a failed compute node are rerun by the Master.
  • The Map Function: This function is written by the user. It takes a collection of input objects and turns each into zero or more key-value pairs. Keys are not necessarily unique. (For a concrete illustration of Map and Reduce together, see the word-count sketch after this list.)
  • The Reduce Function: A MapReduce programming system sorts all the key-value pairs produced by all the Map tasks, forms all the values associated with a given key into a list and distributes key-list pairs to Reduce tasks. Each Reduce task combines the elements on each list, by applying the function written by the user. The results produced by all the Reduce tasks form the output of the MapReduce process.
  • Reducers: It is often convenient to refer to the application of the Reduce function to a single key and its associated value list as a “reducer.”
  • Hadoop: This programming system is an open-source implementation of a distributed file system (HDFS, the Hadoop Distributed File System) and MapReduce (Hadoop itself). It is available through the Apache Foundation.
  • Managing Compute-Node Failures: MapReduce systems support restart of tasks that fail because their compute node, or the rack containing that node, fails. Because Map and Reduce tasks deliver their output only after they finish (the blocking property), it is possible to restart a failed task without concern for possible repetition of the effects of that task. It is necessary to restart the entire job only if the node at which the Master executes fails.
  • Applications of MapReduce: While not all parallel algorithms are suitable for implementation in the MapReduce framework, there are simple implementations of matrix-vector and matrix-matrix multiplication. Also, the principal operators of relational algebra are easily implemented in MapReduce (see the matrix-vector and natural-join sketches after this list).
  • Workflow Systems: MapReduce has been generalized to systems that support any acyclic collection of functions, each of which can be instantiated by any number of tasks, each responsible for executing that function on a portion of the data.
  • Spark: This popular workflow system introduces Resilient Distributed Datasets (RDDs) and a language in which many common operations on RDDs can be written. Spark has a number of efficiencies, including lazy evaluation of RDDs to avoid writing intermediate results to secondary storage, and the recording of lineage for RDDs so they can be reconstructed as needed.
  • TensorFlow: This workflow system is specifically designed to support machine learning. Data is represented as multidimensional arrays, or tensors, and built-in operators carry out many powerful tasks, such as linear algebra and model training.
  • Recursive Workflows: When implementing a recursive collection of functions, it is not always possible to preserve the ability to restart any failed task, because recursive tasks may have produced output that was consumed by another task before the failure. A number of schemes for checkpointing parts of the computation, to allow restart of single tasks or restart of all tasks from a recent point, have been proposed.
  • Communication-Cost: Many applications of MapReduce or similar systems do very simple things for each task. Then, the dominant cost is usually the cost of transporting data from where it is created to where it is used. In these cases, efficiency of a MapReduce algorithm can be estimated by calculating the sum of the sizes of the inputs to all the tasks.
  • Multiway Joins: It is sometimes more efficient to replicate tuples of the relations involved in a join and have the join of three or more relations computed as a single MapReduce job. The technique of Lagrangean multipliers can be used to optimize the degree of replication for each of the participating relations.
  • Star Joins: Analytic queries often involve a very large fact table joined with smaller dimension tables. These joins can always be done efficiently by the multiway-join technique. An alternative is to distribute the fact table and replicate the dimension tables permanently, using the same strategy as would be used if we were taking the multiway join of the fact table and every dimension table.
  • Replication Rate and Reducer Size: It is often convenient to measure communication by the replication rate, which is the communication per input. Also, the reducer size is the maximum number of inputs associated with any reducer. For many problems, it is possible to derive a lower bound on replication rate as a function of the reducer size.
  • Representing Problems as Graphs: It is possible to represent many problems that are amenable to MapReduce computation by a graph in which nodes represent inputs and outputs. An output is connected to all the inputs that are needed to compute that output.
  • Mapping Schemas: Given the graph of a problem, and given a reducer size, a mapping schema is an assignment of the inputs to one or more reducers so that no reducer is assigned more inputs than the reducer size permits, and yet for every output there is some reducer that gets all the inputs needed to compute that output. The requirement that there be a mapping schema for any MapReduce algorithm is a good expression of what makes MapReduce algorithms different from general parallel computations.
  • Matrix Multiplication by MapReduce: There is a family of one-pass MapReduce algorithms that performs multiplication of n × n matrices with the minimum possible replication rate r = 2n²/q, where q is the reducer size (a quick numeric check of this formula appears after the list). On the other hand, a two-pass MapReduce algorithm for the same problem with the same reducer size can use up to a factor of n less communication.
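
To make the Map and Reduce functions concrete, here is a minimal word-count sketch in plain Python. It only simulates MapReduce semantics in memory (an ordinary dictionary stands in for the system's sort-and-shuffle step); it is not Hadoop's API, and all function names are made up for illustration.

```python
from collections import defaultdict

def map_fn(document):
    # Map: turn one input object (a document) into zero or more key-value pairs.
    # Keys are not unique: a word produces one ("word", 1) pair per occurrence.
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce: combine the list of values associated with a single key.
    return (word, sum(counts))

def mapreduce(documents):
    # Stand-in for the sort-and-shuffle step: group values by key.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    # Each application of reduce_fn to one (key, value-list) pair is a "reducer".
    return [reduce_fn(key, values) for key, values in groups.items()]

print(mapreduce(["the cat sat", "the cat ran"]))
# [('the', 2), ('cat', 2), ('sat', 1), ('ran', 1)]
```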
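
Matrix-vector multiplication, mentioned in the applications bullet, follows the same pattern: Map emits (i, m_ij · v_j) for each matrix element, and the reducer for row i sums its partial products. Below is a sketch under the simplifying assumption that the whole vector fits in memory at every Map task; the grouping step is again simulated in memory.

```python
from collections import defaultdict

def map_matrix_vector(elements, v):
    # elements: (i, j, m_ij) triples of the matrix; v: the full vector,
    # assumed small enough to be available at every Map task.
    for i, j, m_ij in elements:
        yield (i, m_ij * v[j])           # one partial product per element

def reduce_row_sum(i, partial_products):
    # The reducer for key i sums its partial products to give x_i.
    return (i, sum(partial_products))

def matrix_vector(elements, v):
    groups = defaultdict(list)           # simulated sort-and-shuffle
    for i, value in map_matrix_vector(elements, v):
        groups[i].append(value)
    return dict(reduce_row_sum(i, vals) for i, vals in groups.items())

# M = [[1, 2], [3, 4]] stored as (row, column, value) triples; v = [5, 6]
M = [(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)]
print(matrix_vector(M, [5, 6]))          # {0: 17, 1: 39}
```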
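
For the relational-algebra claim, the natural join R(A, B) ⋈ S(B, C) is the standard example: Map keys every tuple by the join attribute B and tags it with the relation it came from, and Reduce pairs each R-tuple with each S-tuple sharing that B value. A sketch with made-up example relations:

```python
from collections import defaultdict

def map_join(r_tuples, s_tuples):
    # Key every tuple by its B value and tag it with the relation it came from.
    for a, b in r_tuples:
        yield (b, ("R", a))
    for b, c in s_tuples:
        yield (b, ("S", c))

def reduce_join(b, tagged):
    # Pair each R-tuple with each S-tuple that shares this B value.
    a_values = [v for tag, v in tagged if tag == "R"]
    c_values = [v for tag, v in tagged if tag == "S"]
    return [(a, b, c) for a in a_values for c in c_values]

def natural_join(r_tuples, s_tuples):
    groups = defaultdict(list)            # simulated sort-and-shuffle
    for key, value in map_join(r_tuples, s_tuples):
        groups[key].append(value)
    joined = []
    for b, tagged in groups.items():
        joined.extend(reduce_join(b, tagged))
    return joined

R = [(1, "x"), (2, "y")]                  # tuples (A, B)
S = [("x", 10), ("x", 20)]                # tuples (B, C)
print(natural_join(R, S))                 # [(1, 'x', 10), (1, 'x', 20)]
```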
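
Finally, a quick numeric check of the replication-rate formula r = 2n²/q from the last bullet, with values of n and q chosen here purely for illustration:

```python
n = 1_000            # the matrices are n x n (example value)
q = 200_000          # reducer size: maximum number of inputs per reducer (example value)

r = 2 * n**2 / q     # minimum possible replication rate for a one-pass algorithm
total_inputs = 2 * n**2            # elements of the two input matrices
communication = r * total_inputs   # each input is communicated to r reducers

print(r, communication)            # 10.0 20000000.0
```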
