libcity.model.traffic_speed_prediction.DCRNN

class libcity.model.traffic_speed_prediction.DCRNN.DCGRUCell(input_dim, num_units, adj_mx, max_diffusion_step, num_nodes, device, nonlinearity='tanh', filter_type='laplacian', use_gc_for_ru=True)[source]

Bases: torch.nn.modules.module.Module

forward(inputs, hx)[source]

Gated recurrent unit (GRU) with Graph Convolution.

Parameters
  • inputs – (B, num_nodes * input_dim)

  • hx – (B, num_nodes * rnn_units)

Returns

shape (B, num_nodes * rnn_units)

Return type

torch.Tensor
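
The cell replaces the dense matrix multiplications of a standard GRU with diffusion convolutions over the sensor graph. The numpy sketch below illustrates that gate structure only; the `diffusion_conv` stand-in (a single random-walk support, one shared weight matrix per gate, and the node axis kept explicit instead of flattened to `(B, num_nodes * input_dim)`) is an assumption made for readability, not the actual `GCONV` used by libcity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def random_walk(adj):
    # D^{-1} A: row-normalized transition matrix used as the diffusion support
    d = adj.sum(axis=1, keepdims=True)
    return adj / np.where(d > 0, d, 1.0)

def diffusion_conv(x, supports, weights, bias, K):
    # Concatenate x with its diffusion steps s^k x (k = 1..K) along the
    # feature axis, then apply one shared linear map (stand-in for GCONV).
    feats = [x]
    for s in supports:
        xk = x
        for _ in range(K):
            xk = np.einsum('ij,bjd->bid', s, xk)
            feats.append(xk)
    return np.concatenate(feats, axis=-1) @ weights + bias

def dcgru_cell(x, h, supports, params, K):
    # One DCGRU step: GRU reset/update gates built from diffusion convolutions
    xh = np.concatenate([x, h], axis=-1)
    r = sigmoid(diffusion_conv(xh, supports, *params['r'], K))
    u = sigmoid(diffusion_conv(xh, supports, *params['u'], K))
    xrh = np.concatenate([x, r * h], axis=-1)
    c = np.tanh(diffusion_conv(xrh, supports, *params['c'], K))
    return u * h + (1.0 - u) * c

B, N, D_in, units, K = 2, 4, 3, 5, 2
adj = rng.random((N, N))
np.fill_diagonal(adj, 0)
supports = [random_walk(adj)]
n_feat = (D_in + units) * (1 + len(supports) * K)
params = {g: (0.1 * rng.standard_normal((n_feat, units)), np.zeros(units))
          for g in ('r', 'u', 'c')}
x = rng.random((B, N, D_in))
h = np.zeros((B, N, units))
h_next = dcgru_cell(x, h, supports, params, K)
```

With `max_diffusion_step=K` and the hidden state initialized to zeros, the output is `(1 - u) * c`, so every entry stays inside (-1, 1).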

training: bool
class libcity.model.traffic_speed_prediction.DCRNN.DCRNN(config, data_feature)[source]

Bases: libcity.model.abstract_traffic_state_model.AbstractTrafficStateModel, libcity.model.traffic_speed_prediction.DCRNN.Seq2SeqAttrs

calculate_loss(batch, batches_seen=None)[source]

Takes a batch of data and returns the training loss; i.e., this method defines the loss function.

Parameters

batch (Batch) – a batch of input

Returns

return training loss

Return type

torch.Tensor
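
DCRNN implementations are commonly trained with a masked MAE that ignores missing sensor readings encoded as a null value; whether calculate_loss does exactly this is an assumption. A minimal numpy sketch of that loss:

```python
import numpy as np

def masked_mae(preds, labels, null_val=0.0):
    # Mean absolute error over valid entries only; positions equal to
    # null_val (e.g. a missing reading stored as 0) are masked out.
    mask = (labels != null_val).astype(float)
    mask /= mask.mean()                       # rescale so masked cells don't shrink the loss
    loss = np.abs(preds - labels) * mask
    return np.nan_to_num(loss).mean()

labels = np.array([[1.0, 0.0],                # 0.0 marks a missing reading
                   [2.0, 3.0]])
loss = masked_mae(labels + 0.5, labels)       # every valid error is 0.5
```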

decoder(encoder_hidden_state, labels=None, batches_seen=None)[source]

Decoder forward pass

Parameters
  • encoder_hidden_state – (num_layers, batch_size, self.hidden_state_size)

  • labels – (self.output_window, batch_size, self.num_nodes * self.output_dim) [optional, absent during inference]

  • batches_seen – global step [optional, absent during inference]

Returns

(self.output_window, batch_size, self.num_nodes * self.output_dim)

Return type

torch.Tensor

encoder(inputs)[source]

Encoder forward pass over t time steps.

Parameters

inputs – shape (input_window, batch_size, num_nodes * input_dim)

Returns

(num_layers, batch_size, self.hidden_state_size)

Return type

torch.Tensor

forward(batch, batches_seen=None)[source]

Seq2seq forward pass.

Parameters
  • batch

    a batch of input, batch['X']: shape (batch_size, input_window, num_nodes, input_dim)

    batch['y']: shape (batch_size, output_window, num_nodes, output_dim)

  • batches_seen – batches seen till now

Returns

(batch_size, self.output_window, self.num_nodes, self.output_dim)

Return type

torch.Tensor
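
forward() has to move batch['X'] from the batch-major layout above into the time-major, flattened layout the encoder consumes. A small numpy sketch of that reshaping:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, input_window, num_nodes, input_dim = 8, 12, 207, 2
X = rng.random((batch_size, input_window, num_nodes, input_dim))

# (B, T, N, D) -> (T, B, N * D): time-major, with node/feature dims flattened
inputs = X.transpose(1, 0, 2, 3).reshape(input_window, batch_size,
                                         num_nodes * input_dim)
```

Each time-major row `inputs[t, b]` is exactly the flattened node/feature slice `X[b, t]`.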

predict(batch, batches_seen=None)[source]

Takes a batch of data and returns the corresponding predictions, usually the result of **multi-step** forecasting; typically delegates to nn.Module's forward() method.

Parameters

batch (Batch) – a batch of input

Returns

predict result of this batch

Return type

torch.Tensor

training: bool
class libcity.model.traffic_speed_prediction.DCRNN.DecoderModel(config, adj_mx)[source]

Bases: torch.nn.modules.module.Module, libcity.model.traffic_speed_prediction.DCRNN.Seq2SeqAttrs

forward(inputs, hidden_state=None)[source]

Decoder forward pass.

Parameters
  • inputs – shape (batch_size, self.num_nodes * self.output_dim)

  • hidden_state – (num_layers, batch_size, self.hidden_state_size); optional, defaults to zeros; hidden_state_size = num_nodes * rnn_units

Returns

tuple contains:

output: shape (batch_size, self.num_nodes * self.output_dim)

hidden_state: shape (num_layers, batch_size, self.hidden_state_size)

(lower indices mean lower layers)

Return type

tuple

training: bool
class libcity.model.traffic_speed_prediction.DCRNN.EncoderModel(config, adj_mx)[source]

Bases: torch.nn.modules.module.Module, libcity.model.traffic_speed_prediction.DCRNN.Seq2SeqAttrs

forward(inputs, hidden_state=None)[source]

Encoder forward pass.

Parameters
  • inputs – shape (batch_size, self.num_nodes * self.input_dim)

  • hidden_state – (num_layers, batch_size, self.hidden_state_size); optional, defaults to zeros; hidden_state_size = num_nodes * rnn_units

Returns

tuple contains:

output: shape (batch_size, self.hidden_state_size)

hidden_state: shape (num_layers, batch_size, self.hidden_state_size)

(lower indices mean lower layers)

Return type

tuple

training: bool
class libcity.model.traffic_speed_prediction.DCRNN.FC(num_nodes, device, input_dim, hid_dim, output_dim, bias_start=0.0)[source]

Bases: torch.nn.modules.module.Module

forward(inputs, state)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class libcity.model.traffic_speed_prediction.DCRNN.GCONV(num_nodes, max_diffusion_step, supports, device, input_dim, hid_dim, output_dim, bias_start=0.0)[source]

Bases: torch.nn.modules.module.Module

forward(inputs, state)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class libcity.model.traffic_speed_prediction.DCRNN.Seq2SeqAttrs(config, adj_mx)[source]

Bases: object

libcity.model.traffic_speed_prediction.DCRNN.calculate_normalized_laplacian(adj)[source]

L = D^{-1/2} (D - A) D^{-1/2} = I - D^{-1/2} A D^{-1/2}

Parameters

adj – adj matrix

Returns

L

Return type

np.ndarray
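
A direct numpy sketch of the formula above, evaluated on a 3-node path graph (isolated nodes get a zero inverse degree to avoid division by zero):

```python
import numpy as np

def normalized_laplacian(adj):
    # L = I - D^{-1/2} A D^{-1/2}, with D the diagonal degree matrix of A
    d = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    return np.eye(len(adj)) - adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# 3-node path graph: the eigenvalues of L are 0, 1 and 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = normalized_laplacian(A)
```

L is symmetric, its smallest eigenvalue is 0 for a connected graph, and its spectrum is bounded above by 2.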

libcity.model.traffic_speed_prediction.DCRNN.calculate_random_walk_matrix(adj_mx)[source]
libcity.model.traffic_speed_prediction.DCRNN.calculate_reverse_random_walk_matrix(adj_mx)[source]
libcity.model.traffic_speed_prediction.DCRNN.calculate_scaled_laplacian(adj_mx, lambda_max=2, undirected=True)[source]
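
For Chebyshev-style filters, the normalized Laplacian is conventionally rescaled to roughly [-1, 1] via L~ = (2 / lambda_max) L - I, with lambda_max=2 a valid upper bound for any symmetric normalized Laplacian; that this matches libcity's implementation exactly is an assumption. Sketch:

```python
import numpy as np

def scaled_laplacian(adj, lambda_max=2.0):
    # L~ = (2 / lambda_max) * L - I: pushes the spectrum of the
    # symmetric normalized Laplacian L into [-1, 1]
    d = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    n = len(adj)
    L = np.eye(n) - adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return (2.0 / lambda_max) * L - np.eye(n)

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
Lt = scaled_laplacian(A)
```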
libcity.model.traffic_speed_prediction.DCRNN.count_parameters(model)[source]
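
count_parameters is conventionally the one-liner summing trainable parameter counts; assuming libcity follows that idiom:

```python
import torch.nn as nn

def count_parameters(model):
    # Sum of element counts over all trainable parameters
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# nn.Linear(10, 5) stands in for a DCRNN model here:
# a 10x5 weight matrix plus a 5-element bias
n_params = count_parameters(nn.Linear(10, 5))
```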