Model: Sherman Morrison Variant (recursiveRouteChoice.recursive_logit_efficient_update)

Separate prediction and estimation classes using the more efficient recursive logit model variant.

Very similar to the recursive logit in recursive_route_choice.py, but uses the Sherman-Morrison update formula to avoid a substantial number of matrix inverse calculations.

Note that empirical tests suggest forming the matrix inverse explicitly is much faster, but may be less numerically stable than an LU factorisation. In either case, the inverse (or the LU factors) are likely dense even if A is sparse, so Krylov subspace methods may be required for large instances.
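The rank-one update behind the saving can be sketched directly. This is an illustrative NumPy implementation of the Sherman-Morrison identity, not the package's internal code: given a precomputed inverse of A, the inverse of A + u vᵀ is obtained in O(n²) rather than by a fresh O(n³) inversion.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A_inv = A^{-1} (rank-one update)."""
    Au = A_inv @ u            # A^{-1} u
    vA = v @ A_inv            # v^T A^{-1}
    denom = 1.0 + v @ Au      # 1 + v^T A^{-1} u, must be nonzero
    return A_inv - np.outer(Au, vA) / denom

# Small fixed example: compare the update against a direct inversion.
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [0.0, 3.0, 1.0, 0.0],
              [0.0, 0.0, 5.0, 1.0],
              [1.0, 0.0, 0.0, 4.0]])
u = np.array([1.0, 0.0, 0.5, 0.0])
v = np.array([0.0, 2.0, 0.0, 1.0])

updated = sherman_morrison_update(np.linalg.inv(A), u, v)
direct = np.linalg.inv(A + np.outer(u, v))
assert np.allclose(updated, direct)
```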

class recursiveRouteChoice.recursive_logit_efficient_update.DenseLUObj(mat)

Bases: object

Wrapper class around lu_factor and lu_solve to allow consistent API usage between SciPy dense and SuperLU sparse decompositions, i.e. having a solve method.

__init__(mat)
solve(b, trans=0, overwrite_b=False, check_finite=True)

Solves Ax=b via back substitution, where A is the matrix the class was initialised with. For options documentation see scipy.linalg.lu_solve
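A minimal sketch of the pattern this wrapper implements (the class name DenseLU and the surrounding code here are illustrative, not the package's source): factorise once up front, then expose a solve method matching the sparse SuperLU object's interface.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

class DenseLU:
    """Illustrative dense-LU wrapper with a SuperLU-style solve method."""

    def __init__(self, mat):
        # One O(n^3) factorisation, stored as (lu, piv) for reuse.
        self._lu_and_piv = lu_factor(mat)

    def solve(self, b, trans=0, overwrite_b=False, check_finite=True):
        # Each solve is only O(n^2) forward/back substitution.
        return lu_solve(self._lu_and_piv, b, trans=trans,
                        overwrite_b=overwrite_b, check_finite=check_finite)

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
solver = DenseLU(A)
x = solver.solve(np.array([9.0, 8.0]))
assert np.allclose(A @ x, [9.0, 8.0])
```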

class recursiveRouteChoice.recursive_logit_efficient_update.RecursiveLogitModelEstimationSM(data_struct: ModelDataStruct, optimiser: OptimiserBase, observations_record, initial_beta=-1.5, mu=1, sort_obs=True)

Bases: RecursiveLogitModelEstimation

__init__(data_struct: ModelDataStruct, optimiser: OptimiserBase, observations_record, initial_beta=-1.5, mu=1, sort_obs=True)

Initialise recursive logit model for estimation

Parameters:
  • data_struct (ModelDataStruct) – container for the network attributes of the desired network

  • optimiser (ScipyOptimiser or LineSearchOptimiser) – The wrapper instance for the desired optimisation routine

  • observations_record (ak.highlevel.Array or scipy.sparse.spmatrix or list of list) – record of observations to estimate from

  • initial_beta (float or list or array_like) – initial guesses for the values of beta with which to begin the optimisation algorithm. If a scalar, all beta values are uniformly initialised to this value

  • mu (float) – The scale parameter of the Gumbel random variables being modelled. Generally set equal to 1 as it is non-identifiable due to the uncertainty in the parameter weights.

  • sort_obs (bool) – flag to sort input observations to allow for efficient caching. Should only be set to False if the data is large and already known to be sorted.

compute_value_function(m_tilde, data_is_sparse=None) → bool

Solves the linear system \(z = Mz+b\) and stores the output for future use. Returns a boolean indicating whether solving the linear system was successful.

Parameters:
  • m_tilde (scipy.sparse.csr_matrix) – The matrix M modified to reflect the current location of the sink destination state.

  • data_is_sparse (bool, optional) – flag to indicate whether the data (in this case m_tilde) is sparse. If supplied, the sparsity does not need to be checked manually.

Returns:

error_flag – True if an error was encountered, False if execution finished successfully. Errors arise when the linear system has no solution, a high residual, or a negative solution, in which case the value functions have no real solution.

Return type:

bool
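The fixed point \(z = Mz + b\) rearranges to the linear system \((I - M)z = b\). A small illustrative solve with SciPy (the matrix values here are made up, not taken from a real network; note that a negative component of z would make the value functions, which involve log z, undefined):

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import splu

n = 3
# Toy substochastic matrix M (row sums < 1, so I - M is invertible).
M = csr_matrix(np.array([[0.0, 0.2, 0.1],
                         [0.1, 0.0, 0.3],
                         [0.0, 0.1, 0.0]]))
b = np.ones(n)

# Rearrange z = Mz + b into (I - M) z = b and solve via sparse LU.
A = (identity(n, format="csr") - M).tocsc()
z = splu(A).solve(b)

assert np.allclose(M @ z + b, z)   # z satisfies the fixed point
assert (z > 0).all()               # z <= 0 would mean no real value functions
```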

update_base_matrix_system()
update_beta_vec(new_beta_vec, from_init=False) → bool

Update the internal value of the network parameters beta with the supplied value.

Parameters:
  • new_beta_vec (numpy.ndarray of float) – the new values for the network parameters beta

  • from_init (bool) – flag used in subclasses to avoid messy dependence sequencing issues

Returns:

error_flag – a boolean flag indicating whether updating beta was successful. This fails for an ill-chosen beta which is either positive (and hence illegal), or so large in magnitude that the short-term utility calculation overflows.

This flag is likely not used by an end user, only by the internal code.

Return type:

bool

class recursiveRouteChoice.recursive_logit_efficient_update.RecursiveLogitModelPredictionSM(data_struct: ModelDataStruct, initial_beta=-1.5, mu=1.0)

Bases: RecursiveLogitModelPrediction

This class is a little messy because there is an inheritance problem. A potential fix is to roll the Prediction and Estimation classes together, but they may be wanted as separate instances. The shared methods could also live in a common subclass of RecursiveLogitModel, but working through the MRO so that the prediction and estimation subclasses behave correctly has not yet been done.

__init__(data_struct: ModelDataStruct, initial_beta=-1.5, mu=1.0)

Initialises a RecursiveLogitModel instance.

Parameters:
  • data_struct (ModelDataStruct) – The ModelDataStruct corresponding to the network being estimated on

  • initial_beta (float or int or list[float] or array_like) – The initial value for the parameter weights of the network attributes

  • mu (float) – The scale parameter of the Gumbel random variables being modelled. Generally set equal to 1 as it is non-identifiable due to the uncertainty in the parameter weights.
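The non-identifiability of mu can be seen directly: scaling mu and the utilities (hence beta) by the same factor leaves the logit choice probabilities unchanged. A small illustrative check, independent of the package's internals:

```python
import numpy as np

def choice_probs(utilities, mu):
    """Multinomial logit choice probabilities with Gumbel scale mu."""
    e = np.exp(np.asarray(utilities) / mu)
    return e / e.sum()

v = np.array([-1.0, -2.5, -0.3])
p1 = choice_probs(v, mu=1.0)
p2 = choice_probs(2.0 * v, mu=2.0)  # beta (hence v) and mu both doubled
assert np.allclose(p1, p2)          # identical probabilities
```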

update_base_matrix_system()

Copy of the method from Estimation. Should be implemented in a common parent class, or as an interface/subclass, rather than duplicated.