
Classifying the MNIST dataset with KNN, SVM, CNN, logistic regression, MLP, RNN and other methods (PyTorch implementation and source-code walkthrough)


    training_data = torchvision.datasets.MNIST(
        root='./mnist/',
        train=True,
        transform=torchvision.transforms.ToTensor(),
        download=downloads
    )

    # convert training data to numpy
    # (newer torchvision exposes these as .data / .targets instead of .train_data / .train_labels)
    train_data = training_data.train_data.numpy()[:train_amount]
    train_label = training_data.train_labels.numpy()[:train_amount]

    # print training data size
    print('training data size: ', train_data.shape)
    print('training data label size: ', train_label.shape)
    plt.imshow(train_data[0])
    plt.show()

    # scale pixel values to [0, 1]
    train_data = train_data / 255.0
    return train_data, train_label

#%% load the test data

def mnist_dataset_test(downloads, test_amount):
    # load dataset
    testing_data = torchvision.datasets.MNIST(
        root='./mnist/',
        train=False,
        transform=torchvision.transforms.ToTensor(),
        download=downloads
    )

    # convert testing data to numpy
    test_data = testing_data.test_data.numpy()[:test_amount]
    test_label = testing_data.test_labels.numpy()[:test_amount]

    # print testing data size
    print('test data size: ', test_data.shape)
    print('test data label size: ', test_label.shape)
    plt.imshow(test_data[0])
    plt.show()

    # scale pixel values to [0, 1]
    test_data = test_data / 255.0
    return test_data, test_label

#%% main function for mnist dataset

if __name__ == '__main__':
    # training arguments settings
    parser = argparse.ArgumentParser(description='saak')
    parser.add_argument('--download_mnist', default=True, metavar='DL',
                        help='download mnist (default: true)')
    parser.add_argument('--train_amount', type=int, default=60000,
                        help='amount of training samples')
    parser.add_argument('--test_amount', type=int, default=2000,
                        help='amount of testing samples')
    args = parser.parse_args()

    # print arguments
    print('\n----------argument values-----------')
    for name, value in vars(args).items():
        print('%s: %s' % (str(name), str(value)))
    print('------------------------------------\n')

    # load training data & testing data
    train_data, train_label = mnist_dataset_train(args.download_mnist, args.train_amount)
    test_data, test_label = mnist_dataset_test(args.download_mnist, args.test_amount)

    # flatten each 28x28 image into a 784-dimensional feature vector
    training_features = train_data.reshape(args.train_amount, -1)
    test_features = test_data.reshape(args.test_amount, -1)

    # training svm
    print('------training and testing svm------')
    clf = svm.SVC(C=5, gamma=0.05, max_iter=10)
    clf.fit(training_features, train_label)

    # test on test data
    test_result = clf.predict(test_features)
    precision = sum(test_result == test_label) / test_label.shape[0]
    print('test precision: ', precision)

    # test on training data
    train_result = clf.predict(training_features)
    precision = sum(train_result == train_label) / train_label.shape[0]
    print('training precision: ', precision)

    # show the confusion matrix
    matrix = confusion_matrix(test_label, test_result)
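The confusion matrix is computed above but not displayed in this excerpt. Below is a minimal sketch of how it could be printed and drawn as a heatmap with matplotlib; the show_confusion_matrix helper and its styling are illustrative assumptions, not part of the article's code, and it only relies on the matrix variable computed above.

import numpy as np
import matplotlib.pyplot as plt

def show_confusion_matrix(matrix, num_classes=10):
    # print the raw counts, then draw the matrix as a simple heatmap
    print('confusion matrix:\n', matrix)
    plt.imshow(matrix, cmap='Blues')   # rows = true labels, columns = predicted labels
    plt.colorbar()
    plt.xticks(np.arange(num_classes))
    plt.yticks(np.arange(num_classes))
    plt.xlabel('predicted label')
    plt.ylabel('true label')
    plt.title('SVM confusion matrix on MNIST')
    plt.show()

# usage with the matrix computed above:
# show_confusion_matrix(matrix)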

2. CNN implementation

# library

# standard library
import os

# third-party library
import torch
import torch.nn as nn
import torch.utils.data as data
import torchvision
import matplotlib.pyplot as plt
plt.rc("font", family='kaiti')   # use the KaiTi font so Chinese figure labels render correctly

torch.manual_seed(1)    # reproducible

# hyper parameters
epoch = 1               # train the training data n times; to save time, we just train 1 epoch
batch_size = 50
lr = 0.001              # learning rate
download_mnist = False

# mnist digits dataset

if not(os.path.exists('./mnist/')) or not os.listdir('./mnist/'):
    # mnist folder is missing or empty, so download the dataset
    download_mnist = True

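The excerpt of the CNN section is cut off at this point. For orientation, here is a minimal, hedged sketch of the kind of two-convolution-layer network typically used for MNIST in PyTorch; the cnn class name, layer sizes, and the dummy-batch shape check are my own assumptions, not necessarily the article's exact implementation.

import torch
import torch.nn as nn

class cnn(nn.Module):
    def __init__(self):
        super(cnn, self).__init__()
        self.conv1 = nn.Sequential(           # input shape (1, 28, 28)
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),      # output shape (16, 14, 14)
        )
        self.conv2 = nn.Sequential(           # input shape (16, 14, 14)
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),      # output shape (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # 10 digit classes

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)             # flatten to (batch, 32*7*7)
        return self.out(x)

# quick shape check on a dummy batch (hypothetical usage)
if __name__ == '__main__':
    net = cnn()
    dummy = torch.zeros(4, 1, 28, 28)         # a batch of 4 blank 28x28 images
    print(net(dummy).shape)                   # expected: torch.Size([4, 10])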
