Backpropagating gradients only through specified variables in PyTorch

Date: 2021-05-22

How do you make only specified variables propagate gradients backward in PyTorch?

(Or put differently: how do you keep a specified variable out of the backward pass?)

Take the following computation, where we want the derivative of L with respect to xvar:

out1 = xvar * y1var
out2 = xvar * y2var
L = (out1 - out2)^2

and consider three variants of it:

In (1), both out1 and out2 stay in the graph, so the derivative of L with respect to xvar includes both the out1 branch and the out2 branch;

In (2), the derivative of L with respect to xvar only includes the out2 branch, because out1 has requires_grad=False;

In (3), the derivative of L with respect to xvar only includes the out1 branch, because out2 has requires_grad=False.
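With the concrete values used in the verification script below (xvar = 1, y1var = 2, y2var = 7, so out1 = 2 and out2 = 7), the gradients work out to:

(1) dL/dxvar = 2 * (out1 - out2) * (y1var - y2var) = 2 * (-5) * (-5) = 50
(2) dL/dxvar = 2 * (out1 - out2) * (-y2var) = 2 * (-5) * (-7) = 70, since out1 is treated as a constant
(3) dL/dxvar = 2 * (out1 - out2) * (y1var) = 2 * (-5) * (2) = -20, since out2 is treated as a constant

These are the values xvar.grad should show in each case.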

Verified as follows:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""Created on Wed May 23 10:02:04 2018
@author: hy
"""
import torch
from torch.autograd import Variable

print("Pytorch version: {}".format(torch.__version__))

x = torch.Tensor([1])
xvar = Variable(x, requires_grad=True)   # leaf variable whose gradient we want
y1 = torch.Tensor([2])
y2 = torch.Tensor([7])
y1var = Variable(y1)                     # requires_grad defaults to False
y2var = Variable(y2)

# (1) both out1 and out2 participate in the backward pass
print("For (1)")
print("xvar requires_grad: {}".format(xvar.requires_grad))
print("y1var requires_grad: {}".format(y1var.requires_grad))
print("y2var requires_grad: {}".format(y2var.requires_grad))
out1 = xvar * y1var
print("out1 requires_grad: {}".format(out1.requires_grad))
out2 = xvar * y2var
print("out2 requires_grad: {}".format(out2.requires_grad))
L = torch.pow(out1 - out2, 2)
L.backward()
print("xvar.grad: {}".format(xvar.grad))
xvar.grad.data.zero_()                   # clear the gradient before the next case

# (2) detach out1, so only the out2 branch contributes to the gradient
print("For (2)")
print("xvar requires_grad: {}".format(xvar.requires_grad))
print("y1var requires_grad: {}".format(y1var.requires_grad))
print("y2var requires_grad: {}".format(y2var.requires_grad))
out1 = xvar * y1var
print("out1 requires_grad: {}".format(out1.requires_grad))
out2 = xvar * y2var
print("out2 requires_grad: {}".format(out2.requires_grad))
out1 = out1.detach()                     # cut out1 off from the graph
print("after out1.detach(), out1 requires_grad: {}".format(out1.requires_grad))
L = torch.pow(out1 - out2, 2)
L.backward()
print("xvar.grad: {}".format(xvar.grad))
xvar.grad.data.zero_()

# (3) detach out2, so only the out1 branch contributes to the gradient
print("For (3)")
print("xvar requires_grad: {}".format(xvar.requires_grad))
print("y1var requires_grad: {}".format(y1var.requires_grad))
print("y2var requires_grad: {}".format(y2var.requires_grad))
out1 = xvar * y1var
print("out1 requires_grad: {}".format(out1.requires_grad))
out2 = xvar * y2var
print("out2 requires_grad: {}".format(out2.requires_grad))
out2 = out2.detach()                     # cut out2 off from the graph
print("after out2.detach(), out2 requires_grad: {}".format(out2.requires_grad))
L = torch.pow(out1 - out2, 2)
L.backward()
print("xvar.grad: {}".format(xvar.grad))
xvar.grad.data.zero_()

In PyTorch, setting a variable's requires_grad to False keeps it out of gradient backpropagation;

but you cannot simply assign out1.requires_grad = False, because requires_grad can only be changed on leaf variables;

instead, the Variable type provides a detach() method, and the variable it returns has requires_grad set to False.
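As a minimal sketch of the difference (assuming PyTorch 0.4 or later, where Tensor and Variable are merged), flipping requires_grad on a non-leaf tensor raises a RuntimeError, while detach() simply returns a new tensor that is cut off from the graph:

import torch

x = torch.tensor([1.0], requires_grad=True)
out1 = x * 2                        # non-leaf tensor produced by an operation

try:
    out1.requires_grad = False      # only allowed on leaf tensors
except RuntimeError as e:
    print("direct assignment failed: {}".format(e))

out1_detached = out1.detach()       # new tensor, detached from the graph
print(out1_detached.requires_grad)  # False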

Note: if out1 and out2 both have requires_grad=False, xvar.grad cannot be computed: L.backward() raises an error because the gradient never reaches xvar.
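To see this concretely, here is a minimal sketch (again assuming PyTorch 0.4+): once both branches are detached, nothing in L's history requires a gradient and backward() fails:

import torch

x = torch.tensor([1.0], requires_grad=True)
out1 = (x * 2).detach()   # detached: treated as a constant
out2 = (x * 7).detach()   # detached: treated as a constant
L = torch.pow(out1 - out2, 2)

try:
    L.backward()          # no tensor in L's graph requires grad
except RuntimeError as e:
    print("backward failed: {}".format(e))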

Supplementary note:

volatile=True marks a variable as one for which no gradients are computed. For reference, from the old documentation: "Volatile is recommended for purely inference mode, when you're sure you won't be even calling .backward(). It's more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False."
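Note that volatile was removed in PyTorch 0.4; in current versions the equivalent is the torch.no_grad() context manager. A minimal sketch:

import torch

x = torch.tensor([1.0], requires_grad=True)

with torch.no_grad():       # replacement for volatile=True: no graph is recorded
    y = x * 2

print(y.requires_grad)      # False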

That is everything in this post on backpropagating gradients only through specified variables in PyTorch. I hope it serves as a useful reference.

