
Proj CDeepFuzz Paper Reading: Aries: Efficient Testing of Deep Neural Networks via Labeling-Free Accuracy Estimation

Date: 2023-08-29 22:33:48  Views: 61

Tags: methods, Neural, similar, Testing, Aries, Estimation, data, accuracy

Abstract

Background:

  1. The de facto standard for assessing the quality of DNNs in industry is to check their performance (accuracy) on a collected set of labeled test data.
  2. Test selection can reduce labeling effort while still being used to assess the model.

Premise: the model should have similar prediction accuracy on data that have similar distances to the decision boundary.

This paper: Aries
Github: https://github.com/wellido/Aries
Task: estimate the performance of DNNs on new unlabeled data using only the information obtained from the original test data

Experiments:
Datasets: CIFAR-10, Tiny-ImageNet
Subjects: CIFAR10-ResNet20, CIFAR10-VGG16, TinyImageNet-ResNet101, TinyImageNet-DenseNet; 13 types of data transformation methods.
Competitors: Cross Entropy-based Sampling, Practical Accuracy Estimation
Results:

  1. The accuracy estimated by Aries differs from the true accuracy by only 0.03%–2.60%.
  2. Outperforms other labeling-free methods in 50/52 cases.
  3. Outperforms other selection-labeling-based methods in 96/128 cases.

3. Methodology

Finding 1: A DNN has similar accuracy on data sets that have similar distances to the decision boundary.
Finding 2: There is a linear relationship between the percentage of highly confident data (LVR = 1) and the accuracy of the whole set. Therefore, given some labeled data, if we know 1) the accuracy of the DNN in each bucket, and 2) the percentage of highly confident data, we can estimate the accuracy of new unlabeled data.
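The two findings suggest a simple estimator: group samples into buckets by a confidence proxy for their distance to the decision boundary, measure per-bucket accuracy on the original labeled test set, then weight those bucket accuracies by the bucket occupancy of the new unlabeled data. The sketch below illustrates this idea only; the function names, bucket count, and the scalar confidence proxy are assumptions for illustration, not the paper's exact procedure (Aries derives its distance proxy from dropout-based label consistency rather than a raw softmax confidence).

```python
import numpy as np

def bucket_ids(confidences, n_buckets=10):
    # Map each sample's confidence proxy in [0, 1] (e.g., a dropout
    # label-vote ratio) to a bucket index in [0, n_buckets - 1].
    return np.minimum((confidences * n_buckets).astype(int), n_buckets - 1)

def estimate_accuracy(labeled_conf, labeled_correct, unlabeled_conf, n_buckets=10):
    # 1) Per-bucket accuracy from the original labeled test data (Finding 1:
    #    accuracy is similar for data at similar distances to the boundary).
    lab_ids = bucket_ids(labeled_conf, n_buckets)
    bucket_acc = np.zeros(n_buckets)
    for b in range(n_buckets):
        mask = lab_ids == b
        if mask.any():
            bucket_acc[b] = labeled_correct[mask].mean()
    # 2) Bucket occupancy of the new unlabeled data (no labels needed).
    unl_ids = bucket_ids(unlabeled_conf, n_buckets)
    proportions = np.bincount(unl_ids, minlength=n_buckets) / len(unlabeled_conf)
    # 3) Estimated accuracy = occupancy-weighted sum of bucket accuracies.
    return float(proportions @ bucket_acc)
```

For example, if the labeled data show that the high-confidence bucket is 100% accurate and the low-confidence bucket is 0% accurate, and 75% of the unlabeled data fall into the high-confidence bucket, the estimate is 0.75 without labeling a single new sample.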

From: https://www.cnblogs.com/xuesu/p/17665760.html
