# Day 36 · Training an MLP Neural Network

Contents: data preparation · model design · training · visualization

There are plenty of tutorials for installing PyTorch and CUDA, so that is not repeated here. A quick check that CUDA is visible:

```python
import torch

# Check whether CUDA is available
if torch.cuda.is_available():
    print("CUDA is available")
    # Number of visible CUDA devices
    device_count = torch.cuda.device_count()
    print(f"Number of CUDA devices: {device_count}")
    # Index of the device currently in use
    current_device = torch.cuda.current_device()
    print(f"Current CUDA device index: {current_device}")
    # Name of the current device
    device_name = torch.cuda.get_device_name(current_device)
    print(f"Current CUDA device name: {device_name}")
    # CUDA version PyTorch was built against
    cuda_version = torch.version.cuda
    print(f"CUDA version: {cuda_version}")
else:
    print("CUDA is not available.")
```

```
CUDA is available
Number of CUDA devices: 1
Current CUDA device index: 0
Current CUDA device name: NVIDIA GeForce RTX 4070 Laptop GPU
CUDA version: 12.4
```

## Data preparation

```python
# Load the 3-class iris dataset
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import numpy as np

iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
```

```
(120, 4)
(120,)
(30, 4)
(30,)
```

```python
# Neural networks are sensitive to the scale of their inputs,
# so normalize the features first
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Convert the data to tensors; PyTorch trains on tensors,
# which can be thought of as specialized arrays
X_train = torch.FloatTensor(X_train)
y_train = torch.LongTensor(y_train)
X_test = torch.FloatTensor(X_test)
y_test = torch.LongTensor(y_test)
```

## Model design

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Define the MLP model
model = nn.Sequential(
    nn.Linear(4, 10),
    nn.ReLU(),
    nn.Linear(10, 3),
)

# Or, equivalently:
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 10)  # input layer -> hidden layer
        self.relu = nn.ReLU()        # introduces non-linearity
        self.fc2 = nn.Linear(10, 3)  # hidden layer -> output layer
        # The output layer needs no activation function, because we will use
        # the cross-entropy loss, which applies softmax internally to turn
        # the outputs into probabilities.

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# model = MLP()

# Cross-entropy loss for a classification problem
criterion = nn.CrossEntropyLoss()
# Adam optimizer
optimizer = optim.Adam(model.parameters(), lr=0.01)
```

## Training

Training uses the cross-entropy loss and the Adam optimizer. Move the model and the data to the same device first, then maintain loss and accuracy lists inside the loop.

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
X_train = X_train.to(device)
y_train = y_train.to(device)
X_test = X_test.to(device)
y_test = y_test.to(device)

num_epochs = 20_000
train_losses, val_losses = [], []
train_accuracies, val_accuracies = [], []

# Compute accuracy from logits
def calculate_accuracy(logits, labels):
    preds = torch.argmax(logits.detach(), dim=1)
    return (preds == labels).float().mean().item()

for epoch in range(1, num_epochs + 1):
    model.train()
    optimizer.zero_grad()
    outputs = model(X_train)
    train_loss = criterion(outputs, y_train)
    train_loss.backward()
    optimizer.step()
    train_losses.append(train_loss.item())
    train_accuracies.append(calculate_accuracy(outputs, y_train))

    model.eval()
    with torch.no_grad():
        val_outputs = model(X_test)
        val_loss = criterion(val_outputs, y_test).item()
        val_acc = calculate_accuracy(val_outputs, y_test)
    val_losses.append(val_loss)
    val_accuracies.append(val_acc)

    if epoch % 400 == 0:
        print(f"Epoch [{epoch}/{num_epochs}] "
              f"train_loss={train_loss.item():.4f} val_loss={val_loss:.4f} "
              f"train_acc={train_accuracies[-1]:.4f} val_acc={val_acc:.4f}")
```

```
Epoch [400/20000]  train_loss=0.0629 val_loss=0.0538 train_acc=0.9750 val_acc=0.9667
Epoch [800/20000]  train_loss=0.0497 val_loss=0.0292 train_acc=0.9833 val_acc=1.0000
Epoch [1200/20000] train_loss=0.0473 val_loss=0.0203 train_acc=0.9833 val_acc=1.0000
Epoch [1600/20000] train_loss=0.0468 val_loss=0.0173 train_acc=0.9833 val_acc=1.0000
Epoch [2000/20000] train_loss=0.0467 val_loss=0.0161 train_acc=0.9833 val_acc=1.0000
Epoch [2400/20000] train_loss=0.0466 val_loss=0.0157 train_acc=0.9833 val_acc=1.0000
...
Epoch [20000/20000] train_loss=0.0466 val_loss=0.0153 train_acc=0.9833 val_acc=1.0000
```

From roughly epoch 2400 onward the metrics plateau (train_loss ≈ 0.0466, val_loss ≈ 0.0153, train_acc 0.9833, val_acc 1.0000), so the intermediate log lines are elided above.

## Visualization

With the loss/accuracy arrays in hand, a pair of subplots shows at a glance whether the model is over- or under-fitting. In practice it also helps to record experiment notes here (epoch count, learning rate, whether a GPU was used) for future comparisons.

```python
import matplotlib.pyplot as plt

epochs = range(1, num_epochs + 1)
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.plot(epochs, train_losses, label="Train Loss")
plt.plot(epochs, val_losses, label="Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Loss over Epochs")
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(epochs, train_accuracies, label="Train Accuracy")
plt.plot(epochs, val_accuracies, label="Validation Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Accuracy over Epochs")
plt.legend()

plt.tight_layout()
plt.show()
```

浙大疏锦行
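The claim that the output layer needs no activation, because cross-entropy applies softmax internally, can be verified directly. A minimal sketch (the logits and targets below are made-up values, not from the trained model) showing that `F.cross_entropy` on raw logits equals `log_softmax` followed by negative log-likelihood:

```python
import torch
import torch.nn.functional as F

# Made-up logits for a batch of 2 samples over 3 classes, plus made-up targets
logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.2, 0.3]])
targets = torch.tensor([0, 1])

# cross_entropy consumes raw logits: it applies log_softmax internally...
ce = F.cross_entropy(logits, targets)

# ...which is equivalent to log_softmax followed by the NLL loss
manual = F.nll_loss(F.log_softmax(logits, dim=1), targets)

print(torch.allclose(ce, manual))  # True
```

This is why the forward pass ends at `self.fc2` with no softmax: applying softmax before `nn.CrossEntropyLoss` would effectively apply it twice and distort the gradients.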