[Data Mining] 2017 Quizzes 1-3, Compiled with Answers

This article compiles, with answers, Quizzes 1-3 (2017) from a data mining course. Hopefully it is a useful reference; corrections for any errors or omissions are welcome.

Quiz 1

Answer Problems 1-2 based on the following training set, where $A, B, C$ are describing attributes, and $D$ is the class attribute:

A   B   C   D
1   1   1   y
1   0   1   y
0   1   1   y
1   1   0   y
0   1   1   n
1   1   1   n
0   0   0   n
0   1   0   n

Problem 1 (20%). Describe an (arbitrary) decision tree that correctly classifies 6 of the 8 records in the training set. Furthermore, based on your decision tree, what is the predicted class for a record with $A=1, B=0, C=0$?

[Figure: the original answer's decision tree; not reproduced here.]
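Since the answer figure is unavailable, here is one valid tree (an assumption, but easy to verify against the table): a single split on $A$, predicting y when $A=1$ and n when $A=0$. It misclassifies only record 3 ($A=0$, class y) and record 6 ($A=1$, class n), so it gets 6 of 8 right, and it predicts y for $A=1, B=0, C=0$. A minimal Python check:

```python
# Verify the single-split tree "A = 1 -> y, A = 0 -> n" against the
# training set above (a hypothetical answer; the original figure is lost).
data = [  # (A, B, C, D)
    (1, 1, 1, "y"), (1, 0, 1, "y"), (0, 1, 1, "y"), (1, 1, 0, "y"),
    (0, 1, 1, "n"), (1, 1, 1, "n"), (0, 0, 0, "n"), (0, 1, 0, "n"),
]

def predict(a, b, c):
    return "y" if a == 1 else "n"

print(sum(predict(a, b, c) == d for a, b, c, d in data))  # 6 of 8 correct
print(predict(1, 0, 0))                                   # y
```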

Problem 2 (40%). Suppose that we apply Bayesian classification with the following conditional independence assumption: conditioned on a value of $C$ and a value of $D$, attributes $A$ and $B$ are independent. What is the predicted class for a record with $A=1, B=1, C=1$? You must show the details of your reasoning.
Answer:

$$
\begin{aligned}
\Pr[D \mid A, B, C] &= \frac{\Pr[A, B, C \mid D] \cdot \Pr[D]}{\Pr[A, B, C]}
= \frac{\Pr[A, B \mid C, D] \cdot \Pr[C \mid D] \cdot \Pr[D]}{\Pr[A, B, C]} \\
&= \frac{\Pr[A \mid C, D] \cdot \Pr[B \mid C, D] \cdot \Pr[C \mid D] \cdot \Pr[D]}{\Pr[A, B, C]}
\end{aligned}
$$

Since the denominator $\Pr[A=1, B=1, C=1]$ is the same for both classes, it suffices to compare the numerators:

$$
\begin{aligned}
&\Pr[A=1 \mid C=1, D=y] \cdot \Pr[B=1 \mid C=1, D=y] \cdot \Pr[C=1 \mid D=y] \cdot \Pr[D=y]
= \frac{2}{3} \cdot \frac{2}{3} \cdot \frac{3}{4} \cdot \frac{1}{2} = \frac{1}{6} \\
&\Pr[A=1 \mid C=1, D=n] \cdot \Pr[B=1 \mid C=1, D=n] \cdot \Pr[C=1 \mid D=n] \cdot \Pr[D=n]
= \frac{1}{2} \cdot 1 \cdot \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{8}
\end{aligned}
$$

Since $\frac{1}{6} > \frac{1}{8}$, the predicted class for a record with $A=1, B=1, C=1$ is $D=y$.
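The two scores can also be recomputed mechanically from the table; a small Python sketch (the lambdas encode the events, e.g. $A=1$ as `r[0] == 1`):

```python
# Recompute the two Bayes scores directly from the 8-record training set.
data = [  # (A, B, C, D)
    (1, 1, 1, "y"), (1, 0, 1, "y"), (0, 1, 1, "y"), (1, 1, 0, "y"),
    (0, 1, 1, "n"), (1, 1, 1, "n"), (0, 0, 0, "n"), (0, 1, 0, "n"),
]

def pr(pred, given=lambda r: True):
    """Empirical conditional probability Pr[pred | given] over the table."""
    base = [r for r in data if given(r)]
    return sum(pred(r) for r in base) / len(base)

for d in ("y", "n"):
    score = (
        pr(lambda r: r[0] == 1, lambda r: r[2] == 1 and r[3] == d)    # Pr[A=1 | C=1, D=d]
        * pr(lambda r: r[1] == 1, lambda r: r[2] == 1 and r[3] == d)  # Pr[B=1 | C=1, D=d]
        * pr(lambda r: r[2] == 1, lambda r: r[3] == d)                # Pr[C=1 | D=d]
        * pr(lambda r: r[3] == d)                                     # Pr[D=d]
    )
    print(d, score)  # y: 1/6 = 0.1667..., n: 1/8 = 0.125
```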

Problem 3 (40%). The following figure shows a training set of 5 points. Use the Perceptron algorithm to find a plane that (i) crosses the origin, and (ii) separates the black points from the white ones. Recall that this algorithm starts with a vector $\vec{c} = \vec{0}$ and iteratively adjusts it using a point from the training set. You need to show the value of $\vec{c}$ after every adjustment.

[Figure: the training set of 5 points; not reproduced here.]

Answer:

round   $\vec{c}$   adjusting point $\vec{p}$
1       $(0, 0)$    $A(0, 2)$
2       $(0, 2)$    $C(2, 0)$
3       $(2, 2)$    (none)

The algorithm terminates with $\vec{c} = (2, 2)$.
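Only $A(0, 2)$ and $C(2, 0)$ are recoverable from the answer table; the full 5-point set is in the missing figure. Below is a generic sketch of the update rule (the labels and the third point are hypothetical), which reproduces the trajectory $(0,0) \to (0,2) \to (2,2)$ on these inputs:

```python
# Perceptron sketch: labels are +1 (black) / -1 (white); the plane through
# the origin is {x : c . x = 0}. Point list is hypothetical except A and C.
points = [((0, 2), +1), ((2, 0), +1), ((-1, -2), -1)]  # (coords, label)

c = (0.0, 0.0)
changed = True
while changed:
    changed = False
    for (x, y), label in points:
        # A point is violated when label * (c . p) <= 0; then c += label * p.
        if label * (c[0] * x + c[1] * y) <= 0:
            c = (c[0] + label * x, c[1] + label * y)
            print("adjusted:", c)  # (0.0, 2.0), then (2.0, 2.0)
            changed = True
print("final:", c)
```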

Quiz 2

Problem 1 (30%). The figure below shows the boundary lines of 5 half-planes. Find the point with the smallest $y$-coordinate in the intersection of these half-planes with the linear programming algorithm that we discussed in class. Assume the algorithm (randomly) permutes the boundary lines into $\ell_1, \ell_2, \ldots, \ell_5$ and processes them in that order. Starting from the second round, give the point maintained by the algorithm at the end of each round.

[Figure: the boundary lines $\ell_1, \ldots, \ell_5$ of the 5 half-planes; not reproduced here.]

Answer: Let $H_1, \ldots, H_5$ be the half-planes whose boundary lines are $\ell_1, \ldots, \ell_5$, respectively, and let $p$ be the point maintained by the algorithm. At the end of the second round, $p$ is the intersection $A$ of $\ell_1$ and $\ell_2$. At Round 3, because $p = A$ does not fall in $H_3$, the algorithm computes a new $p$ as the lowest point on $\ell_3$ that satisfies all of $H_1, \ldots, H_3$; as a result, $p$ is set to $B$. At Round 4, because $p = B$ does not fall in $H_4$, the algorithm computes a new $p$ as the lowest point on $\ell_4$ that satisfies all of $H_1, \ldots, H_4$; as a result, $p$ is set to $C$. Finally, the last round processes $H_5$. As $p = C$ falls in $H_5$, the algorithm finishes with $C$ as the final answer.
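The actual half-planes are fixed by the missing figure, so the following is only a sketch of the incremental (Seidel-style) algorithm the answer describes, under the assumptions that each half-plane is given as $ax + by \le c$ and the problem stays bounded at every round; the half-planes in `H` are made up:

```python
# Incremental 2D LP: minimize y subject to halfplanes a*x + b*y <= c.
def lowest_on_line(a, b, c, halfplanes):
    """Lowest point on the line a*x + b*y = c satisfying all halfplanes."""
    # Parametrize the line as p0 + t*d with direction d = (-b, a).
    n2 = a * a + b * b
    p0 = (a * c / n2, b * c / n2)
    d = (-b, a)
    lo, hi = float("-inf"), float("inf")
    for aj, bj, cj in halfplanes:
        coef = aj * d[0] + bj * d[1]          # constraint becomes coef*t <= rhs
        rhs = cj - (aj * p0[0] + bj * p0[1])
        if coef > 0:
            hi = min(hi, rhs / coef)
        elif coef < 0:
            lo = max(lo, rhs / coef)
        elif rhs < 0:
            return None                       # line misses this halfplane
    if lo > hi:
        return None                           # infeasible
    if d[1] > 0:                              # y = p0[1] + t*d[1] grows with t
        t = lo
    elif d[1] < 0:
        t = hi
    else:                                     # horizontal line: y is fixed
        t = lo if lo != float("-inf") else hi
    return (p0[0] + t * d[0], p0[1] + t * d[1])

H = [(-1, -1, -1), (1, -1, 1), (0, -1, -2), (-1, 0, 3), (1, 1, 10)]  # made up

# Rounds 1-2: start at the lowest point satisfying H[0] and H[1]
# (here simply the intersection of their boundary lines).
p = lowest_on_line(*H[1], [H[0]])
print("after round 2:", p)
for i in range(2, len(H)):
    ai, bi, ci = H[i]
    if ai * p[0] + bi * p[1] > ci:             # p violates the new halfplane:
        p = lowest_on_line(ai, bi, ci, H[:i])  # re-solve on its boundary line
    print("after round", i + 1, ":", p)
```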

[Figure: the points $A$, $B$, $C$ found during the rounds; not reproduced here.]

Problem 2 (30%). Consider a set $P$ of red points $A(2,1), B(2,-2)$ and blue points $C(-2,1), D(-2,-3)$. Give an instance of quadratic programming for finding a separation line with the maximum margin.

Answer: Minimize $w_1^2 + w_2^2$ subject to the following constraints:

$$
\begin{aligned}
2w_1 + w_2 &\geq 1 \\
2w_1 - 2w_2 &\geq 1 \\
-2w_1 + w_2 &\leq -1 \\
-2w_1 - 3w_2 &\leq -1
\end{aligned}
$$
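As a sanity check (not part of the required answer), the instance can be handed to a generic solver; a sketch with scipy (an assumption: the quiz only asks for the formulation), with the two $\leq$ constraints rewritten in the solver's $g(w) \geq 0$ form. The minimizer works out to $w = (1/2, 0)$, i.e., the vertical line $x = 0$, with margin $1/\lVert w \rVert = 2$:

```python
import numpy as np
from scipy.optimize import minimize

objective = lambda w: w[0] ** 2 + w[1] ** 2

# All four constraints rewritten as g(w) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda w: 2 * w[0] + w[1] - 1},      # A(2, 1), red
    {"type": "ineq", "fun": lambda w: 2 * w[0] - 2 * w[1] - 1},  # B(2, -2), red
    {"type": "ineq", "fun": lambda w: 2 * w[0] - w[1] - 1},      # C(-2, 1), blue
    {"type": "ineq", "fun": lambda w: 2 * w[0] + 3 * w[1] - 1},  # D(-2, -3), blue
]
res = minimize(objective, x0=np.array([1.0, 1.0]), constraints=constraints)
print(res.x)  # approximately (0.5, 0.0): the line x = 0, margin 1/|w| = 2
```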

Problem 3 (40%). Let $P$ be a set of points as shown in the figure below. Assume $k=3$, and that the distance metric is Euclidean distance.

[Figure: the point set $P$; not reproduced here.]

Run the $k$-center algorithm we discussed in class on $P$. If the first center is (randomly) chosen as point $a$, what are the second and third centers?

Answer: The second center is $h$, and the third is $d$.
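The $k$-center algorithm here is presumably the farthest-first (Gonzalez) heuristic: each new center is the point farthest from its nearest existing center. The figure's coordinates are unavailable, so the coordinates below are hypothetical (chosen so that this run happens to reproduce the $a \to h \to d$ pattern):

```python
import math

# Hypothetical coordinates; replace with the figure's actual points.
points = {"a": (0, 0), "b": (1, 0), "c": (0, 1), "d": (5, 5),
          "e": (6, 5), "h": (10, 0)}

def k_center(points, k, first):
    centers = [first]
    while len(centers) < k:
        # Next center: the point farthest from its nearest chosen center.
        far = max(
            points,
            key=lambda p: min(math.dist(points[p], points[c]) for c in centers),
        )
        centers.append(far)
    return centers

print(k_center(points, k=3, first="a"))  # ['a', 'h', 'd'] for these coords
```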

Quiz 3

Problem 1 (30%). Consider the dataset as shown in the figure below. What is the covariance matrix of the dataset?

[Figure: the 4-point dataset; not reproduced here.]

Answer: Let $A = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{yx} & \sigma_{yy} \end{bmatrix}$ be the covariance matrix, where $\sigma_{xx}$ ($\sigma_{yy}$) is the variance along the x- (y-) dimension, and $\sigma_{xy}$ ($= \sigma_{yx}$) is the covariance of the x- and y-dimensions. Since the means along both the x- and y-dimensions are 0, we have:
$$
\begin{aligned}
\sigma_{xx} &= \frac{1}{4}\left((-3)^2 + (-2)^2 + 1^2 + 4^2\right) = 30/4 = 7.5 \\
\sigma_{yy} &= \frac{1}{4}\left(4^2 + 1^2 + (-2)^2 + (-3)^2\right) = 30/4 = 7.5 \\
\sigma_{xy} &= \frac{1}{4}\left((-3) \times 4 + (-2) \times 1 + 1 \times (-2) + 4 \times (-3)\right) = -28/4 = -7
\end{aligned}
$$

Therefore, $A = \begin{bmatrix} 7.5 & -7 \\ -7 & 7.5 \end{bmatrix}$.
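The four points can be read back from the arithmetic above: $(-3,4)$, $(-2,1)$, $(1,-2)$, $(4,-3)$. A quick numpy check (`bias=True` divides by $n=4$, matching the $\frac{1}{4}$ factor):

```python
import numpy as np

# Points inferred from the answer's arithmetic (the figure itself is lost).
pts = np.array([[-3, 4], [-2, 1], [1, -2], [4, -3]], dtype=float)
print(np.cov(pts.T, bias=True))  # [[ 7.5 -7. ]
                                 #  [-7.   7.5]]
```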

Problem 2 (30%). Use PCA to find the line passing through the origin on which the projections of the points in Problem 1 have the greatest variance.

Answer: Let $\lambda$ be an eigenvalue of $A$, which implies that the determinant of $\begin{bmatrix} 7.5-\lambda & -7 \\ -7 & 7.5-\lambda \end{bmatrix}$ is 0. Expanding the determinant gives:
$$
(7.5 - \lambda)^2 - 49 = 0.
$$

It follows that $\lambda_1 = 14.5$ and $\lambda_2 = 0.5$ are the eigenvalues of $A$, where $\lambda_1$ is the larger one.

Let $\vec{v} = \begin{bmatrix} x \\ y \end{bmatrix}$ be an eigenvector corresponding to $\lambda_1$, which satisfies

$$
\begin{bmatrix} 7.5 & -7 \\ -7 & 7.5 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} 14.5x \\ 14.5y \end{bmatrix}
$$

The first row gives $7.5x - 7y = 14.5x$, i.e., $x + y = 0$ (the second row yields the same condition), so the equation is satisfied by any pair of $x$ and $y$ with $x + y = 0$. As the line chosen by PCA has the same direction as $\vec{v}$, the line is $x + y = 0$.
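A one-line check with numpy (`np.linalg.eigh` returns the eigenvalues of a symmetric matrix in ascending order, so the last column of the eigenvector matrix is the top principal direction; its sign is arbitrary):

```python
import numpy as np

A = np.array([[7.5, -7.0], [-7.0, 7.5]])
vals, vecs = np.linalg.eigh(A)
print(vals)        # [ 0.5 14.5]
print(vecs[:, 1])  # proportional to (1, -1) up to sign: the line x + y = 0
```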

Problem 3 (40%). Run DBSCAN on the set of points shown in the figure below with $\epsilon = 1$ and minpts $= 4$. What are the core points and the clusters?

[Figure: the point set for DBSCAN; not reproduced here.]

Answer: The core points are $b, e, g, j, k$, and $o$. There are three clusters:

  • Cluster 1: $a, b, c, d, e, f$
  • Cluster 2: $f, g, h, i, j, k, l$
  • Cluster 3: $m, n, o, p, q$
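Note that $f$ appears in both Cluster 1 and Cluster 2: presumably it is a border point within distance $\epsilon$ of core points in both clusters, in which case DBSCAN may legitimately assign it to either (a well-known ambiguity for border points). Without the figure's coordinates the result can only be reproduced generically; a sketch with scikit-learn (an assumption: the course did not prescribe a library, and sklearn's `min_samples` counts the point itself, which may differ from the course's minpts convention):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder coordinates -- fill these in from the missing figure.
coords = np.array([
    [0.0, 0.0], [0.5, 0.0], [1.0, 0.0], [0.5, 0.5], [0.5, -0.5],
])

db = DBSCAN(eps=1, min_samples=4).fit(coords)
print(db.labels_)               # cluster id per point; -1 marks noise
print(db.core_sample_indices_)  # indices of the core points
```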
