About the autograd category

| Topic | Replies | Views | Activity |
|---|---|---|---|
| About the autograd category | 0 | 3906 | May 13, 2017 |
| Segmentation fault when calling .backward() after moving data to GPU (PyTorch + CUDA 12.1) | 3 | 11 | March 28, 2025 |
| PyTorch autograd wrong values on Coriolis & centrifugal matrix | 0 | 9 | March 25, 2025 |
| Greedy optimisation with random noise in gradients | 10 | 2405 | March 24, 2025 |
| Memory used by `autograd` when `torch.scatter` is involved | 8 | 58 | March 21, 2025 |
| Where does the ctx variable come from? | 4 | 965 | March 21, 2025 |
| Why does merging all loss in a batch make sense? | 6 | 2109 | March 21, 2025 |
| Get softmax_lse value for sdpa kernel? | 0 | 6 | March 20, 2025 |
| Does tensor.register_post_accumulate_grad_hook() always fire once, or multiple times? | 0 | 8 | March 19, 2025 |
| How to store temp variables with register_autograd without returning them as output? | 1 | 30 | March 19, 2025 |
| CUDA memory issue in Hessian vector product | 0 | 14 | March 17, 2025 |
| CUDA memory profiling: peculiar memory values | 6 | 316 | March 17, 2025 |
| How to calculate Jacobians for a batch | 5 | 49 | March 16, 2025 |
| Initializing tensor inside custom loss fn causes CUDA memory error | 5 | 43 | March 12, 2025 |
| Computing gradient of loss w.r.t. learning rate | 1 | 35 | March 12, 2025 |
| PINN for 2D heat conduction always converges to a constant solution | 2 | 42 | March 10, 2025 |
| Forward-mode AD with multiple tangents for each primal | 3 | 84 | March 7, 2025 |
| Efficiently computing the per-pixel gradient? | 3 | 34 | March 6, 2025 |
| `compile` a function with `autograd` | 1 | 21 | March 5, 2025 |
| How to preserve computational graph while initializing a network with weights | 2 | 28 | March 4, 2025 |
| Gradient of loss (that depends on gradient of network) with respect to parameters | 2 | 23 | March 3, 2025 |
| Propagate loss through inner loop in meta-learning | 2 | 52 | March 3, 2025 |
| Checkpoint with BatchNorm running averages | 7 | 2859 | February 28, 2025 |
| Fixing seeds affects performance? | 1 | 17 | February 26, 2025 |
| Grad is None confusion in the "What is torch.nn" tutorial | 3 | 28 | February 24, 2025 |
| Does slicing/trimming during training cause a memory leak? | 1 | 26 | February 22, 2025 |
| How does PyTorch handle in-place operations without losing information necessary for backpropagation? | 3 | 32 | February 17, 2025 |
| Gradient computation with PyTorch autograd with 1st- and 2nd-order derivatives does not work | 1 | 53 | February 15, 2025 |
| Free some saved tensors after partial backward | 6 | 137 | February 14, 2025 |
| How does autograd merge 'parallel paths'? | 3 | 36 | February 13, 2025 |