>>> import torch
>>> a = torch.randn(1, 1)
>>> b = torch.randn(1, 1)
>>> c = a + b
>>> print(b.data, b.grad, b.requires_grad, b.grad_fn)
tensor([[-2.4292]]) None False None
>>> print(c.data, c.grad, c.requires_grad, c.grad_fn)
tensor([[-3.5453]]) None False None
>>> c.backward()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
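A minimal sketch of the fix for this first error: creating the leaf tensor with requires_grad=True makes autograd track the addition, so c gets a grad_fn and backward() succeeds. The variable names mirror the transcript above; the printed gradient follows from dc/da = 1 for addition.

>>> a = torch.randn(1, 1, requires_grad=True)  # leaf tensor now tracks gradients
>>> b = torch.randn(1, 1)
>>> c = a + b                                  # c.grad_fn is now AddBackward0
>>> c.backward()                               # succeeds: c requires grad
>>> print(a.grad)                              # dc/da = 1 for addition
tensor([[1.]])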
When we call backward() a second time, another RuntimeError appears, but this time the cause is "the buffers have already been freed". To avoid this, simply pass retain_graph=True to the first backward() call, as in the sketch after the traceback below.
>>> d.backward()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
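A minimal sketch of the retain_graph fix (the variable names here are illustrative, not taken from the transcript): retain_graph=True keeps the graph's intermediate buffers alive after the first backward pass, so a second pass succeeds and the gradients accumulate into .grad.

>>> x = torch.randn(1, 1, requires_grad=True)
>>> y = 2 * x
>>> y.backward(retain_graph=True)  # keep the graph's buffers for reuse
>>> y.backward()                   # second pass now works; gradients accumulate
>>> print(x.grad)                  # dy/dx = 2 per pass, summed to 4
tensor([[4.]])

Note that gradients accumulate across backward passes by default, which is why x.grad ends up at 4 rather than 2; call x.grad.zero_() between passes if accumulation is not wanted.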