Trustworthy Deep Learning: Related Papers

This article collects recent papers on trustworthy deep learning, organized by topic. We hope it is a useful reference; if you spot errors or omissions, corrections are welcome.

Survey

An Overview of Catastrophic AI Risks. [paper]

Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation. [paper]

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy. [paper]

Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example. [paper]

Out-of-Distribution Generalization

Simple and Fast Group Robustness by Automatic Feature Reweighting. [paper]

Optimal Transport Model Distributional Robustness. [paper]

Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization. [paper]

Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Models. [paper]

Rethinking the Evaluation Protocol of Domain Generalization. [paper]

Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape. [paper]

On the nonlinear correlation of ML performance between data subpopulations. [paper]

An Adaptive Algorithm for Learning with Unknown Distribution Drift. [paper]

PGrad: Learning Principal Gradients For Domain Generalization. [paper]

Benchmarking Low-Shot Robustness to Natural Distribution Shifts. [paper]

Reweighted Mixup for Subpopulation Shift. [paper]
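To ground the terminology, here is a minimal sketch of vanilla mixup (convex-combining two examples and their labels). The paper above reweights this interpolation to handle subpopulation shift; this sketch shows only the vanilla form, and all names in it are illustrative.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Vanilla mixup: convex-combine two examples and their labels.

    The reweighted-mixup paper above adjusts this interpolation to
    correct for subpopulation shift; this sketch is the plain version.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing weight in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(7)
x_mix, y_mix = mixup(np.array([0.0, 0.0]), 0.0,
                     np.array([1.0, 1.0]), 1.0, rng=rng)
# The mixed label equals the weight placed on the second example.
assert np.isclose(y_mix, x_mix[0]) and 0.0 <= y_mix <= 1.0
```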

ERM++: An Improved Baseline for Domain Generalization. [paper]

Domain Generalization via Nuclear Norm Regularization. [paper]
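As a hedged illustration of the idea behind nuclear norm regularization (not the paper's exact training objective), one can penalize the nuclear norm of a batch's feature matrix to encourage low-rank features; the function and toy data below are assumptions for illustration.

```python
import numpy as np

def nuclear_norm_penalty(features: np.ndarray) -> float:
    """Sum of singular values of a (batch x dim) feature matrix.

    Penalizing this encourages the feature matrix to be low-rank,
    which suppresses extra (potentially spurious) feature directions.
    """
    return float(np.linalg.norm(features, ord="nuc"))

# Toy check: at equal Frobenius norm, a rank-1 matrix has the
# smallest possible nuclear norm, so the penalty prefers it.
rng = np.random.default_rng(0)
low_rank = rng.normal(size=(8, 1)) @ rng.normal(size=(1, 4))
full_rank = rng.normal(size=(8, 4))
full_rank *= np.linalg.norm(low_rank) / np.linalg.norm(full_rank)
assert nuclear_norm_penalty(low_rank) <= nuclear_norm_penalty(full_rank)
```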

ManyDG: Many-domain Generalization for Healthcare Applications. [paper]

DEJA VU: Continual Model Generalization For Unseen Domains. [paper]

Alignment with human representations supports robust few-shot learning. [paper]

Free Lunch for Domain Adversarial Training: Environment Label Smoothing. [paper]
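The title above applies label smoothing to the *environment* labels seen by the domain discriminator. As a hedged sketch, here is the smoothing operation itself (standard label smoothing, not the paper's full training pipeline):

```python
import numpy as np

def smooth_labels(one_hot: np.ndarray, eps: float) -> np.ndarray:
    """Standard label smoothing: mix one-hot targets with uniform.

    Environment label smoothing feeds these softened environment
    targets to the domain discriminator; this sketch shows only
    the smoothing step.
    """
    k = one_hot.shape[-1]
    return (1.0 - eps) * one_hot + eps / k

envs = np.eye(3)                      # 3 environments, one-hot labels
smoothed = smooth_labels(envs, eps=0.1)
assert np.allclose(smoothed.sum(axis=-1), 1.0)    # still distributions
assert np.isclose(smoothed[0, 0], 0.9 + 0.1 / 3)  # softened target
```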

Effective Robustness against Natural Distribution Shifts for Models with Different Training Data. [paper]

Leveraging Domain Relations for Domain Generalization. [paper]

Evasion Attacks and Defenses

Jailbroken: How Does LLM Safety Training Fail? [paper]

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. [paper]

On adversarial robustness and the use of Wasserstein ascent-descent dynamics to enforce it. [paper]

On the Robustness of AlphaFold: A COVID-19 Case Study. [paper]

Data Augmentation Alone Can Improve Adversarial Training. [paper]
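For readers new to this section's terminology, here is a minimal, hedged sketch of what an "adversarial example" is: a one-step FGSM-style perturbation against a fixed linear classifier. It illustrates the threat model only, not any specific paper's attack; the model and numbers are made up.

```python
import numpy as np

def fgsm_linear(x, y, w, eps):
    """One-step FGSM attack on a linear classifier sign(w.x + b).

    The gradient of the margin y*(w.x + b) w.r.t. x is y*w, so
    stepping eps against its sign maximally reduces the margin
    under an L-infinity budget of eps.
    """
    grad = y * w                      # gradient of the margin w.r.t. x
    return x - eps * np.sign(grad)    # move against the margin

w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.3, -0.2]), 1.0     # clean point, true label +1
clean_margin = y * (w @ x + b)        # positive: correctly classified
x_adv = fgsm_linear(x, y, w, eps=0.5)
adv_margin = y * (w @ x_adv + b)      # negative: now misclassified
assert clean_margin > 0 and adv_margin < 0
```

The same idea scales to deep networks by backpropagating the loss gradient to the input.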

Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing. [paper]

Uncovering Adversarial Risks of Test-Time Adaptation. [paper]

Benchmarking Robustness to Adversarial Image Obfuscations. [paper]

Are Defenses for Graph Neural Networks Robust? [paper]

On the Robustness of Randomized Ensembles to Adversarial Perturbations. [paper]

Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation. [paper]

Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness. [paper]

Poisoning Attacks and Defenses

Poisoning Language Models During Instruction Tuning. [paper]

Backdoor Attacks Against Dataset Distillation. [paper]

Run-Off Election: Improved Provable Defense against Data Poisoning Attacks. [paper]

Temporal Robustness against Data Poisoning. [paper]

Poisoning Web-Scale Training Datasets is Practical. [paper]

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning. [paper]

TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets. [paper]
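To make the threat model in this section concrete, here is a hedged toy illustration of label-flipping data poisoning against a nearest-centroid classifier. It shows why a small fraction of mislabeled training points can move a decision boundary; it is not drawn from any paper above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 1-D classes centred at -2 and +2.
x_neg = rng.normal(-2.0, 0.5, size=100)
x_pos = rng.normal(+2.0, 0.5, size=100)

def centroid_boundary(neg, pos):
    """Decision threshold of a nearest-centroid classifier."""
    return (neg.mean() + pos.mean()) / 2.0

clean_threshold = centroid_boundary(x_neg, x_pos)

# Poison: the attacker relabels 30 positive points as negative,
# dragging the negative centroid toward the positive class.
poisoned_neg = np.concatenate([x_neg, x_pos[:30]])
poisoned_pos = x_pos[30:]
poisoned_threshold = centroid_boundary(poisoned_neg, poisoned_pos)

# The boundary shifts toward the positive class, so borderline
# positive points near the old threshold become misclassified.
assert poisoned_threshold > clean_threshold
```

Provable defenses like the run-off election scheme above bound how far such a shift can go for a given number of poisoned points.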

Privacy

SoK: Privacy-Preserving Data Synthesis. [paper]

Ticketed Learning-Unlearning Schemes. [paper]

Forgettable Federated Linear Learning with Certified Data Removal. [paper]

Privacy Auditing with One (1) Training Run. [paper]

DPMLBench: Holistic Evaluation of Differentially Private Machine Learning. [paper]

On User-Level Private Convex Optimization. [paper]
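Several titles in this section build on differential privacy. As a hedged refresher on the core primitive (the textbook Laplace mechanism for a counting query, not any paper's method):

```python
import numpy as np

def laplace_mechanism(true_count: float, epsilon: float, rng) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has L1 sensitivity 1 (one person changes the
    count by at most 1), so Laplace noise with scale 1/epsilon
    suffices for epsilon-DP.
    """
    sensitivity = 1.0
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
true_count = 128
# Smaller epsilon = stronger privacy = noisier answers.
noisy = laplace_mechanism(true_count, epsilon=1.0, rng=rng)
# The noise is zero-mean, so many independent releases average back
# to the true count (though each release spends privacy budget).
mean_release = np.mean(
    [laplace_mechanism(true_count, 1.0, rng) for _ in range(5000)]
)
assert abs(mean_release - true_count) < 1.0
```

Privacy auditing work such as the papers above empirically estimates how much a trained model actually leaks relative to such an epsilon guarantee.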

Re-thinking Model Inversion Attacks Against Deep Neural Networks. [paper]

A Recipe for Watermarking Diffusion Models. [paper]

CUDA: Convolution-based Unlearnable Datasets. [paper]

Why Is Public Pretraining Necessary for Private Model Training? [paper]

Personalized Privacy Auditing and Optimization at Test Time. [paper]

Interpretability

Towards Trustworthy Explanation: On Causal Rationalization. [paper]

Don't trust your eyes: on the (un)reliability of feature visualizations. [paper]

Probabilistic Concept Bottleneck Models. [paper]
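As a hedged sketch of the concept bottleneck idea (predict human-interpretable concepts, then predict the label from the concepts alone), here is a deterministic toy version; the paper above makes the concepts probabilistic, which this sketch does not attempt, and all weights are made up.

```python
import numpy as np

# A concept bottleneck model factors prediction as
# input -> concepts -> label, so the label depends on the input
# only through interpretable concept scores.
W_concept = np.array([[1.0, 0.0],    # concept 1 reads feature 1
                      [0.0, 1.0]])   # concept 2 reads feature 2
w_label = np.array([1.0, -1.0])      # label head sees concepts only

def predict(x):
    concepts = 1 / (1 + np.exp(-(W_concept @ x)))  # concept scores
    return concepts, float(w_label @ concepts)

x = np.array([2.0, -2.0])
concepts, score = predict(x)

# Intervening on a concept changes the prediction directly --
# the interpretability benefit of the bottleneck.
concepts_fixed = concepts.copy()
concepts_fixed[1] = 1.0                        # expert sets concept 2
assert float(w_label @ concepts_fixed) < score
```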

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. [paper]

eXplainable Artificial Intelligence on Medical Images: A Survey. [paper]



Source: https://www.toymoban.com/news/detail-538099.html

 


