Date: 2021-05-22
Question: after training a network, how can you inspect the weights of a particular layer, or look at the features produced by an intermediate layer?
1. Getting the weights of a layer and saving them to Excel

Using resnet18 as an example:
import torch
import pandas as pd
import numpy as np
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
parm = {}
for name, parameters in resnet18.named_parameters():
    print(name, ':', parameters.size())
    parm[name] = parameters.detach().numpy()

The code above stores each module's parameters in the parm dictionary; parameters.detach().numpy() converts each tensor into a numpy array, which is convenient for writing to a spreadsheet later. The output is:
conv1.weight : torch.Size([64, 3, 7, 7])
bn1.weight : torch.Size([64])
bn1.bias : torch.Size([64])
layer1.0.conv1.weight : torch.Size([64, 64, 3, 3])
layer1.0.bn1.weight : torch.Size([64])
layer1.0.bn1.bias : torch.Size([64])
layer1.0.conv2.weight : torch.Size([64, 64, 3, 3])
layer1.0.bn2.weight : torch.Size([64])
layer1.0.bn2.bias : torch.Size([64])
layer1.1.conv1.weight : torch.Size([64, 64, 3, 3])
layer1.1.bn1.weight : torch.Size([64])
layer1.1.bn1.bias : torch.Size([64])
layer1.1.conv2.weight : torch.Size([64, 64, 3, 3])
layer1.1.bn2.weight : torch.Size([64])
layer1.1.bn2.bias : torch.Size([64])
layer2.0.conv1.weight : torch.Size([128, 64, 3, 3])
layer2.0.bn1.weight : torch.Size([128])
layer2.0.bn1.bias : torch.Size([128])
layer2.0.conv2.weight : torch.Size([128, 128, 3, 3])
layer2.0.bn2.weight : torch.Size([128])
layer2.0.bn2.bias : torch.Size([128])
layer2.0.downsample.0.weight : torch.Size([128, 64, 1, 1])
layer2.0.downsample.1.weight : torch.Size([128])
layer2.0.downsample.1.bias : torch.Size([128])
layer2.1.conv1.weight : torch.Size([128, 128, 3, 3])
layer2.1.bn1.weight : torch.Size([128])
layer2.1.bn1.bias : torch.Size([128])
layer2.1.conv2.weight : torch.Size([128, 128, 3, 3])
layer2.1.bn2.weight : torch.Size([128])
layer2.1.bn2.bias : torch.Size([128])
layer3.0.conv1.weight : torch.Size([256, 128, 3, 3])
layer3.0.bn1.weight : torch.Size([256])
layer3.0.bn1.bias : torch.Size([256])
layer3.0.conv2.weight : torch.Size([256, 256, 3, 3])
layer3.0.bn2.weight : torch.Size([256])
layer3.0.bn2.bias : torch.Size([256])
layer3.0.downsample.0.weight : torch.Size([256, 128, 1, 1])
layer3.0.downsample.1.weight : torch.Size([256])
layer3.0.downsample.1.bias : torch.Size([256])
layer3.1.conv1.weight : torch.Size([256, 256, 3, 3])
layer3.1.bn1.weight : torch.Size([256])
layer3.1.bn1.bias : torch.Size([256])
layer3.1.conv2.weight : torch.Size([256, 256, 3, 3])
layer3.1.bn2.weight : torch.Size([256])
layer3.1.bn2.bias : torch.Size([256])
layer4.0.conv1.weight : torch.Size([512, 256, 3, 3])
layer4.0.bn1.weight : torch.Size([512])
layer4.0.bn1.bias : torch.Size([512])
layer4.0.conv2.weight : torch.Size([512, 512, 3, 3])
layer4.0.bn2.weight : torch.Size([512])
layer4.0.bn2.bias : torch.Size([512])
layer4.0.downsample.0.weight : torch.Size([512, 256, 1, 1])
layer4.0.downsample.1.weight : torch.Size([512])
layer4.0.downsample.1.bias : torch.Size([512])
layer4.1.conv1.weight : torch.Size([512, 512, 3, 3])
layer4.1.bn1.weight : torch.Size([512])
layer4.1.bn1.bias : torch.Size([512])
layer4.1.conv2.weight : torch.Size([512, 512, 3, 3])
layer4.1.bn2.weight : torch.Size([512])
layer4.1.bn2.bias : torch.Size([512])
fc.weight : torch.Size([1000, 512])
fc.bias : torch.Size([1000])

Indexing a single kernel, parm['layer1.0.conv1.weight'][0,0,:,:], outputs:
array([[ 0.05759342, -0.09511436, -0.02027232],
       [-0.07455588, -0.799308  , -0.21283598],
       [ 0.06557069, -0.09653367, -0.01211061]], dtype=float32)

The following function saves all parameters of one layer to a spreadsheet while preserving the kernel shape, so a 3x3 kernel is still written as a 3x3 block:
def parm_to_excel(excel_name, key_name, parm):
    # parm maps parameter names to numpy arrays, as built above
    with pd.ExcelWriter(excel_name) as writer:
        [output_num, input_num, filter_size, _] = parm[key_name].shape
        for i in range(output_num):
            for j in range(input_num):
                data = pd.DataFrame(parm[key_name][i, j, :, :])
                data.to_excel(writer, index=False, header=True,
                              startrow=i * (filter_size + 1),
                              startcol=j * filter_size)

Because many of the weights are very small, you can also keep only values above a fixed threshold and write all of the weights into one Excel file:
counter = 1
with pd.ExcelWriter('test1.xlsx') as writer:
    for key in parm_resnet50.keys():   # a parameter dict built the same way as parm above
        data = parm_resnet50[key].reshape(-1, 1)
        data = data[data > 0.001]
        data = pd.DataFrame(data, columns=[key])
        data.to_excel(writer, index=False, startcol=counter)
        counter += 1

2. Getting the features of an intermediate layer
Rewrite a forward function that returns the output of the layer you need:
import torch.nn.functional as F

def resnet_cifar(net, input_data):
    x = net.conv1(input_data)
    x = net.bn1(x)
    x = F.relu(x)
    x = net.layer1(x)
    x = net.layer2(x)
    x = net.layer3(x)
    x = net.layer4[0].conv1(x)  # extracts the output of the first conv layer in the first block of layer4
    x = x.view(x.shape[0], -1)
    return x

model = models.resnet18()
x = resnet_cifar(model, input_data)

That is everything in this example of getting the weights or features of an intermediate layer in PyTorch. I hope it serves as a useful reference.