libcity.model.trajectory_loc_prediction.DeepMove

class libcity.model.trajectory_loc_prediction.DeepMove.Attn(method, hidden_size, device)[source]

Bases: torch.nn.modules.module.Module

Attention module. Heavily borrowed from Practical PyTorch: https://github.com/spro/practical-pytorch/tree/master/seq2seq-translation

forward(out_state, history)[source]

Compute attention scores between the current hidden states and the historical hidden states.

Parameters
  • out_state (tensor) – batch_size * state_len * hidden_size

  • history (tensor) – batch_size * history_len * hidden_size

Returns

attention scores of shape (batch_size, state_len, history_len)

Return type

tensor

training: bool
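The shape contract above can be illustrated with a minimal sketch. This is not the library's implementation (the actual module supports several scoring methods selected by the `method` argument); it shows only the simplest dot-product scoring, which maps the current states and the history to an unnormalized score matrix of shape (batch_size, state_len, history_len):

```python
import torch

def dot_attention_scores(out_state: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
    """Unnormalized dot-product attention scores.

    out_state: (batch_size, state_len, hidden_size)
    history:   (batch_size, history_len, hidden_size)
    returns:   (batch_size, state_len, history_len)
    """
    # Batched matrix product pairs every current state with every history state.
    return torch.bmm(out_state, history.transpose(1, 2))

# Shape check with random inputs.
scores = dot_attention_scores(torch.randn(4, 5, 16), torch.randn(4, 7, 16))
print(scores.shape)  # torch.Size([4, 5, 7])
```

A softmax over the last dimension would turn these scores into attention weights over the history positions.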
class libcity.model.trajectory_loc_prediction.DeepMove.DeepMove(config, data_feature)[source]

Bases: libcity.model.abstract_model.AbstractModel

RNN model with long-term history attention

calculate_loss(batch)[source]
Parameters

batch (Batch) – a batch of input

Returns

the training loss

Return type

torch.Tensor
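For next-location prediction, the training loss is typically a negative log-likelihood over the candidate locations. The sketch below illustrates that shape of computation with stand-in tensors; the variable names and shapes are assumptions for illustration, not the library's actual Batch fields:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: model scores over `loc_size` candidate locations
# for each sample, and the ground-truth next-location indices.
batch_size, loc_size = 8, 100
scores = torch.randn(batch_size, loc_size)          # stand-in for model output
target = torch.randint(0, loc_size, (batch_size,))  # stand-in for the batch target

# Negative log-likelihood over log-softmax scores yields a scalar loss tensor.
loss = F.nll_loss(F.log_softmax(scores, dim=1), target)
print(loss.dim())  # 0 (scalar)
```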

forward(batch)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
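The point about hooks can be demonstrated with any small module; the example below uses a plain `nn.Linear` rather than DeepMove itself. Calling the module instance triggers registered forward hooks, while calling `forward` directly silently skips them:

```python
import torch
import torch.nn as nn

calls = []
layer = nn.Linear(3, 2)
# Record every time the module's forward pass is run through __call__.
layer.register_forward_hook(lambda mod, inp, out: calls.append("hook"))

x = torch.randn(1, 3)
layer(x)          # runs the registered hook
layer.forward(x)  # bypasses __call__, so the hook does NOT fire
print(calls)      # ['hook']
```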

init_weights()[source]

Reproduce the Keras default weight initialization, for consistency with the Keras version of the model.

predict(batch)[source]
Parameters

batch (Batch) – a batch of input

Returns

the prediction result for this batch

Return type

torch.Tensor

training: bool