4. Sioux Falls Example

Here we present example usage for the classic Sioux Falls network, again estimating parameters from simulated data. This is the same example appearing in the README, but we give a little more detail.

import numpy as np

from recursiveRouteChoice.data_loading import load_tntp_node_formulation
from recursiveRouteChoice import RecursiveLogitModelPrediction, ModelDataStruct, \
    RecursiveLogitModelEstimation
from recursiveRouteChoice import optimisers

# DATA
import os
print("sys path is", os.getcwd(), os.listdir(os.getcwd()))
network_file = os.path.join("tests", "docs", "SiouxFalls_net.tntp")
node_max = 24  # from network file

Here we use one of the convenience loader methods provided for the TNTP format. Since the network is very small and easily fits in memory, we store it in a dense representation.


# load the chosen network attributes into a list of matrices
# (the exact column names follow the TNTP file header; "length" and
#  "capacity" are assumed here, matching the two-component beta below)
data_list, data_list_names = load_tntp_node_formulation(
    network_file, columns_to_extract=["length", "capacity"])
distances = data_list[0]

# incidence matrix: an arc exists wherever the distance is nonzero
incidence_mat = (distances > 0).astype(int)

This is the standardised way of passing data to the model: the network attributes are collected into a ModelDataStruct instance, alongside the corresponding incidence matrix. We then simulate a series of observations between origin-destination pairs. Note that when sampling every pair, whilst the code will report 24 × 24 = 576 observations, there will actually be fewer, since observations starting and ending at the same node are omitted.

network_struct = ModelDataStruct(data_list, incidence_mat)

beta_sim = np.array([-0.8, -0.00015])
model = RecursiveLogitModelPrediction(network_struct,
                                      initial_beta=beta_sim, mu=1)
print("Linear system size", model.get_exponential_utility_matrix().shape)

# sparse sample for quick running example
orig_indices = np.arange(0, node_max, 2)
dest_indices = (orig_indices + 5) % node_max
# sample every OD pair once
# orig_indices = np.arange(0, node_max, 1)
# dest_indices = np.arange(0, node_max, 1)
obs_per_pair = 1

seed = 42  # seed value assumed for illustration; fixing it makes the simulation reproducible
obs = model.generate_observations(origin_indices=orig_indices,
                                  dest_indices=dest_indices,
                                  num_obs_per_pair=obs_per_pair, iter_cap=2000, rng_seed=seed)

We now construct the estimation model to attempt to recover the parameters the data was simulated with. We construct an optimiser instance, this time using L-BFGS, and provide an initial iterate for the optimisation algorithm. Note that the two components are of differing orders of magnitude, since capacity is measured on the scale of tens of thousands for Sioux Falls. Attempting to use beta_est_init = [-5, -5] will fail, as the matrix \(M\) will be numerically zero, giving rise to a degenerate solution; this case is explicitly caught as an error by the code.
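To see why, recall that utilities enter \(M\) exponentially. A quick sanity check in plain numpy (the capacity value of 10 000 is illustrative only):

import numpy as np

# roughly, entries of M behave like exp(beta . attributes); a capacity
# coefficient of -5 against capacities of order 10**4 underflows to
# exactly zero, so the linear system is degenerate
print(np.exp(-5 * 10_000))        # 0.0
print(np.exp(-0.00001 * 10_000))  # ~0.905, a sensible magnitude

This is why beta_est_init below keeps a very small second component for the capacity coefficient.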

optimiser = optimisers.ScipyOptimiser(method='l-bfgs-b')
beta_est_init = [-5, -0.00001]
model_est = RecursiveLogitModelEstimation(network_struct, observations_record=obs,
                                          initial_beta=beta_est_init, mu=1,
                                          optimiser=optimiser)

beta_est = model_est.solve_for_optimal_beta(verbose=False)

Running the code with the specified seed gives the output below; the original parameters have been recovered somewhat well. If we instead simulate a trip between every origin and destination (the commented-out indices above), we see more accurate recovery of the parameters.

# Sparse sample of OD pairs:
beta expected: [-0.8000, -0.0001], beta_actual: [-1.021, -0.000175]

# Every OD pair, one observation each:
beta expected: [-0.8000, -0.0001], beta_actual: [-0.7963, -0.0002]

4.1. Efficiency

Note that there are also drop-in replacement classes recursiveRouteChoice.RecursiveLogitModelEstimationSM and recursiveRouteChoice.RecursiveLogitModelPredictionSM, which provide identical functionality to the variants without suffixes. These, however, use the Sherman-Morrison formula to solve the component linear systems more efficiently. This leverages the structure of the problem by noting that the change in sub-problems for different destinations can be described in terms of a rank-one update of a destination-independent matrix.
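Concretely, the identity used is \((A + uv^\top)^{-1} = A^{-1} - \frac{A^{-1}uv^\top A^{-1}}{1 + v^\top A^{-1}u}\), so once the destination-independent system has been solved, each destination only costs a cheap correction rather than a fresh solve. Since the classes are drop-in replacements, swapping them in should only require changing the class name; a minimal sketch, assuming the constructor signature matches the estimation example above:

from recursiveRouteChoice import RecursiveLogitModelEstimationSM

# identical arguments to RecursiveLogitModelEstimation above
model_est_sm = RecursiveLogitModelEstimationSM(network_struct, observations_record=obs,
                                               initial_beta=beta_est_init, mu=1,
                                               optimiser=optimiser)
beta_est_sm = model_est_sm.solve_for_optimal_beta(verbose=False)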