CIFAR-100 / ResNet-20:
1. Baseline
Namespace(batch_size=128, decay=0.0003, epoch=200, gammas=[0.1, 0.1, 0.5], learning_rate=0.1, momentum=0.9, optimizer='SGD', schedule=[80, 120, 160])
Best acc: 68.85%
Epochs 80 and 120 are the inflection points of the accuracy curve.
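The per-milestone gammas in the Namespace suggest a step decay where each milestone multiplies the current learning rate by its own factor. A minimal sketch of that rule, assuming this is how `schedule` and `gammas` interact (the function name `adjust_learning_rate` is an assumption, not necessarily this repo's code):

```python
def adjust_learning_rate(base_lr, epoch, schedule, gammas):
    """Apply every gamma whose milestone epoch has been reached.

    With schedule=[80, 120, 160] and gammas=[0.1, 0.1, 0.5], the lr is
    0.1 until epoch 80, 0.01 until 120, 0.001 until 160, then 0.0005.
    """
    lr = base_lr
    for milestone, gamma in zip(schedule, gammas):
        if epoch >= milestone:
            lr *= gamma
    return lr
```

This decay rule is consistent with the inflection points observed at epochs 80 and 120.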
2. batch_size & gammas
Namespace(batch_size=512, decay=0.0003, epoch=200, gammas=[0.1, 0.1, 0.1], learning_rate=0.1, momentum=0.9, optimizer='SGD', schedule=[55, 110, 160])
Best acc: 66.51%
Increasing the batch size (with the learning rate held fixed) lowers accuracy.
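A common mitigation for this effect, not tried in this log, is the linear scaling rule: grow the learning rate in proportion to the batch size. A hedged sketch (the helper name is illustrative):

```python
def scaled_lr(base_lr, base_batch, new_batch):
    """Linear scaling rule: keep lr / batch_size roughly constant.

    For a base lr of 0.1 at batch size 128, batch size 512 would use lr 0.4.
    """
    return base_lr * new_batch / base_batch
```

Under this rule the batch-512 run above would have started at lr 0.4 instead of 0.1, which might recover some of the lost accuracy.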
3. lr * 0.1
Namespace(batch_size=128, decay=0.0003, epoch=200, gammas=[0.1, 0.1, 0.5], learning_rate=0.01, momentum=0.9, optimizer='SGD', schedule=[80, 120, 160])
Best acc: 66.67%
Lowering the initial lr to 0.01 reduces accuracy.
4. Adjusted schedule & gammas
Namespace(batch_size=128, decay=0.0003, epoch=200, gammas=[0.1, 0.1, 0.1], learning_rate=0.1, momentum=0.9, optimizer='SGD', schedule=[55, 110, 160])
Best acc: 68.85%
No change from the baseline.
5. epoch=100 baseline
Namespace(batch_size=64, decay=0.0003, epoch=100, gammas=[0.1, 0.1, 0.5], lambda1=0.0, learning_rate=0.1, momentum=0.9, optimizer='SGD', schedule=[20, 40, 60], time=0)
EPOCH=98, best_acc=64.310%
After changing the dataloader:
6. epoch=200, batch_size=64
Namespace(batch_size=64, decay=0.0003, epoch=200, gammas=[0.1, 0.1, 0.1], lambda1=0.0, learning_rate=0.1, momentum=0.9, optimizer='SGD', schedule=[55, 110, 160])
Best acc: 69.200%
Test acc: 68.15%
Continued training:
0: EPOCH=271, best_acc=69.380%, test accuracy 68.470%
1: EPOCH=282, best_acc=69.460%, test accuracy 68.550%
2: EPOCH=213, best_acc=69.450%, test accuracy 68.560%
3: EPOCH=213, best_acc=69.450%, test accuracy 68.570%
After changing the validation split:
Namespace(batch_size=64, decay=0.0003, epoch=150, gammas=[0.1, 0.1, 0.1], learning_rate=0.1, momentum=0.9, num_workers=16, optimizer='SGD', schedule=[40, 80, 120])
EPOCH=138, best_acc=83.920% (on the held-out validation split)
Test acc: 66.30%
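The log does not say how the validation set was carved out. One plausible sketch, holding out part of the 50,000 CIFAR-100 training images (the function name, seed, and 10% fraction are all assumptions):

```python
import random

def split_train_val(n_train=50000, val_fraction=0.1, seed=0):
    """Shuffle training indices and hold out a disjoint validation subset."""
    idx = list(range(n_train))
    random.Random(seed).shuffle(idx)
    n_val = int(n_train * val_fraction)
    return idx[n_val:], idx[:n_val]  # train indices, val indices
```

A gap as large as 83.92% validation vs. 66.30% test accuracy may indicate the validation split overlaps the training data (e.g., split applied after augmentation, or indices reused), which would be worth checking before trusting the validation numbers for model selection.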