- Resets parameter data pointer so that they can use faster code paths.
- Right now, this works only if the module is on the GPU and cuDNN is enabled. Otherwise, it’s a no-op.
In other words: it resets the parameters' data pointers. This is really a contiguity (`contiguous`) issue; a PyTorch issue shows the following warning:
UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
My understanding is that, to improve memory utilization and efficiency, calling flatten_parameters() re-packs the parameters' data into a contiguous chunk of memory, similar to what we do when calling tensor.contiguous() on a tensor.
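As a minimal sketch of where this comes up (the layer sizes and input shape below are arbitrary, chosen only for illustration): the warning typically appears after an RNN's weights have been moved or replicated, e.g. after `.cuda()` or inside `DataParallel`, and calling `flatten_parameters()` before the forward pass re-packs them:

```python
import torch
import torch.nn as nn

# Arbitrary sizes, for illustration only.
rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)

if torch.cuda.is_available():
    rnn = rnn.cuda()
    # Re-pack the weights into one contiguous chunk so cuDNN can take
    # its fused fast path; on CPU (or without cuDNN) this is a no-op.
    rnn.flatten_parameters()

# Input shape is (seq_len, batch, input_size) since batch_first=False.
x = torch.randn(5, 3, 10, device=next(rnn.parameters()).device)
output, (h_n, c_n) = rnn(x)
```

A workaround commonly seen in the wild, when the warning keeps reappearing (e.g. under `DataParallel`, which re-scatters the weights on every replica), is to call `self.rnn.flatten_parameters()` at the top of the wrapping module's `forward()`.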
From: https://www.cnblogs.com/zjuhaohaoxuexi/p/16755352.html