[FL]Adversarial Machine Learning (1)


Reading Alleviates Anxiety [Simba's Reading-Disorder Treatment Plan: #1]

Reading Notes for [NIST AI 100-2e2023](https://csrc.nist.gov/pubs/ai/100/2/e2023/final).

Adversarial Machine Learning

A Taxonomy and Terminology of Attacks and Mitigations

Section 1: Introduction

  • There are two broad classes of AI systems, based on their capabilities: Predictive AI (PredAI) and Generative AI (GenAI).

  • However, despite the significant progress that AI and machine learning (ML) have made in a number of different application domains, these technologies are also vulnerable to attacks that can cause spectacular failures with dire consequences.

  • Unlike cryptography, there are no information-theoretic security proofs for the widely used machine learning algorithms. Moreover, information-theoretic impossibility results have started to appear in the literature [102, 116] that set limits on the effectiveness of widely-used mitigation techniques. As a result, many of the advances in developing mitigations against different classes of attacks tend to be empirical and limited in nature.

Section 2: Predictive AI Taxonomy

  • The attacker’s objectives are shown as disjoint circles with the attacker’s goal at the center of each circle: Availability breakdown, Integrity violations, and Privacy compromise.


  • Machine learning involves a TRAINING STAGE, in which a model is learned, and a DEPLOYMENT STAGE, in which the model is deployed on new, unlabeled data samples to generate predictions.

  • Adversarial machine learning literature predominantly considers adversarial attacks against AI systems that could occur at either the training stage or the ML deployment stage. During the ML training stage, the attacker might control part of the training data, their labels, the model parameters, or the code of ML algorithms, resulting in different types of poisoning attacks. During the ML deployment stage, the ML model is already trained, and the adversary could mount evasion attacks to create integrity violations and change the ML model’s predictions, as well as privacy attacks to infer sensitive information about the training data or the ML model.

  • Training-time attacks. Poisoning Attack. Data poisoning attacks are applicable to all learning paradigms, while model poisoning attacks are most prevalent in federated learning, where clients send local model updates to the aggregating server, and in supply-chain attacks where malicious code may be added to the model by suppliers of model technology.
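
To make the data-poisoning idea concrete, here is a minimal, purely illustrative sketch (not a method from the report) in which an attacker who controls a fraction of the training labels flips them to degrade the learned model; the dataset, model, and 30% flip rate are all placeholder choices.

```python
# Hypothetical label-flipping data-poisoning sketch: flip a fraction of the
# training labels and compare test accuracy of the clean vs. poisoned model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, rng):
    """Return a copy of y with `fraction` of the (binary) labels flipped."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

rng = np.random.default_rng(0)
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
poisoned_acc = (LogisticRegression(max_iter=1000)
                .fit(X_train, flip_labels(y_train, 0.3, rng))
                .score(X_test, y_test))
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```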

  • Deployment-time attacks. Adversarial examples: inputs modified at inference time so that the model’s prediction changes while the perturbation remains small or imperceptible.
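
As a concrete (illustrative, not from the report) example of such an evasion attack, the sketch below implements the classic fast gradient sign method (FGSM), assuming white-box access to a differentiable PyTorch classifier; the toy model, input, and epsilon are placeholders.

```python
# Minimal FGSM sketch: take one signed-gradient step that increases the loss,
# staying inside an L-infinity ball of radius epsilon around the input.
import torch
import torch.nn as nn

def fgsm_example(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), label)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative usage with a toy model and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = fgsm_example(model, x, label)
print("max perturbation:", (x_adv - x).abs().max().item())
```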

  • Attacker Goals and Objectives. Availability breakdown, Integrity violations, and Privacy compromise.

  • Privacy Compromise. Attackers might be interested in learning information about the training data (resulting in DATA PRIVACY attacks) or about the ML model (resulting in MODEL PRIVACY attacks). The attacker could have different objectives for compromising the privacy of training data, such as DATA RECONSTRUCTION (inferring content or features of training data), MEMBERSHIP-INFERENCE ATTACKS (inferring the presence of data in the training set), data EXTRACTION (ability to extract training data from generative models), and PROPERTY INFERENCE (inferring properties about the training data distribution). MODEL EXTRACTION is a model privacy attack in which attackers aim to extract information about the model.
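
One of the simplest membership-inference baselines thresholds the model's per-sample loss: training members tend to have lower loss than non-members. The sketch below is a hypothetical illustration of that idea only; the loss distributions and threshold are made up for demonstration.

```python
# Loss-threshold membership-inference sketch: guess "member" when the
# model's loss on a sample is unusually low.
import numpy as np

def membership_guess(per_sample_loss, threshold):
    """Guess membership (True = training member) by thresholding the loss."""
    return per_sample_loss < threshold

# Illustrative loss values: members tend to have lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=500)     # seen during training
nonmember_losses = rng.exponential(scale=1.0, size=500)  # never seen
threshold = 0.5

tpr = membership_guess(member_losses, threshold).mean()     # true-positive rate
fpr = membership_guess(nonmember_losses, threshold).mean()  # false-positive rate
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```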

  • Attacker Capabilities. Training Data Control. Model Control. Testing Data Control. Label Limit. Source Code Control. Query Access.

  • Attacker Knowledge. White-box attacks. These assume that the attacker operates with full knowledge about the ML system, including the training data, model architecture, and model hyper-parameters. Black-box attacks. These attacks assume minimal knowledge about the ML system. An adversary might get query access to the model, but they have no other information about how the model is trained. Gray-box attacks. There are a range of gray-box attacks that capture adversarial knowledge between black-box and white-box attacks. Suciu et al. introduced a framework to classify gray-box attacks. An attacker might know the model architecture but not its parameters, or the attacker might know the model and its parameters but not the training data.

  • Data Modality: Image. Text. Audio. Video. Cybersecurity. Tabular Data.

  • Recently, the use of ML models trained on multimodal data has gained traction, particularly the combination of image and text data modalities. Several papers have shown that multimodal models may provide some resilience against attacks, but other papers show that multimodal models themselves could be vulnerable to attacks mounted on all modalities at the same time.

  • An interesting open challenge is to test and characterize the resilience of a variety of multimodal ML models against evasion, poisoning, and privacy attacks.

  • Evasion Attacks and Mitigations. Methods for creating adversarial examples in black-box settings include zeroth-order optimization, discrete optimization, and Bayesian optimization, as well as transferability, which involves the white-box generation of adversarial examples on a different model architecture before transferring them to the target model.
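
As an illustration of the zeroth-order idea, the sketch below estimates a gradient from loss queries alone, using symmetric finite differences along random directions (in the spirit of ZOO-style black-box attacks); `loss_fn` stands in for "query the target model and score its output", and all constants are placeholders.

```python
# Zeroth-order gradient estimation: approximate the gradient of a loss that
# can only be queried (no backpropagation through the target model).
import numpy as np

def zo_gradient_estimate(loss_fn, x, sigma=1e-3, n_samples=50, rng=None):
    """Estimate grad of loss_fn at x from loss queries only."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Symmetric finite difference along a random direction u.
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

# Sanity check on a known function: loss = ||x||^2, true gradient = 2x.
x = np.array([1.0, -2.0, 0.5])
est = zo_gradient_estimate(lambda v: float(np.dot(v, v)), x, n_samples=500,
                           rng=np.random.default_rng(0))
print("estimated:", np.round(est, 2), " true:", 2 * x)
```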

  • The most promising directions for mitigating the critical threat of evasion attacks are adversarial training (iteratively generating and inserting adversarial examples with their correct labels at training time); certified techniques, such as randomized smoothing (evaluating ML prediction under noise); and formal verification techniques [112, 154] (applying formal method techniques to verify the model’s output). Nevertheless, these methods come with different limitations, such as decreased accuracy for adversarial training and randomized smoothing, and computational complexity for formal methods. There is an inherent trade-off between robustness and accuracy [297, 302, 343]. Similarly, there are trade-offs between a model’s robustness and fairness guarantees.
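
A minimal sketch of the randomized-smoothing prediction step follows, assuming a base PyTorch classifier: the smoothed classifier returns the class most frequently predicted under Gaussian input noise. The toy model, noise scale, and sample count are illustrative, and a full certified defense (e.g., Cohen et al.) would also compute a certified radius rather than just a majority vote.

```python
# Randomized-smoothing prediction sketch: majority vote of the base model's
# predictions under Gaussian input noise of scale sigma.
import torch
import torch.nn as nn

def smoothed_predict(base_model, x, sigma=0.25, n_samples=100, n_classes=10):
    """Return the class most often predicted for x under Gaussian noise."""
    with torch.no_grad():
        votes = torch.zeros(n_classes, dtype=torch.long)
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            votes[base_model(noisy).argmax().item()] += 1
        return votes.argmax().item()

# Illustrative usage with a toy 10-class model and a random input.
base_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
print("smoothed prediction:", smoothed_predict(base_model, x))
```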
