pyfan.stats.multinomial.multilogit.UtilityMultiNomial

class pyfan.stats.multinomial.multilogit.UtilityMultiNomial(scale_coef=1)[source]
each_j_indirect_utility:

N by J matrix, where N is the number of individuals (unique states) and J is the number of choices.

N might be 0.
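To illustrate the shape conventions above, here is a minimal sketch (the variable names mirror the docs, but the data is invented) that builds an N by J utility matrix and turns it into standard multinomial-logit choice probabilities:

```python
import numpy as np

# Illustrative data only: N individuals (rows), J choices (columns).
N, J = 3, 4
rng = np.random.default_rng(0)
each_j_indirect_utility = rng.normal(size=(N, J))  # N by J indirect utilities

scale_coef = 1.0
# Standard multinomial-logit probabilities: exponentiate scaled utilities,
# then normalize each row by its own denominator.
exp_u = np.exp(each_j_indirect_utility / scale_coef)
prob_j = exp_u / exp_u.sum(axis=1, keepdims=True)

print(prob_j.shape)        # (3, 4)
print(prob_j.sum(axis=1))  # each row sums to 1
```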

Methods

expected_u_integrate_allj(prob_denominator)

See Train's discussion of consumer surplus and logit. TODO: check the reference that Train cites to make sure this integration applies in this case; it should be derived independently. If one option has much higher utility and the variance is low, the integrated utility is linear in that option's utility; in fact the two are equal.

prob_denominator(all_J_indirect_utility)

Denominator of the logit choice probabilities; see the underflow discussion in the full docstring below.

prob_j(all_J_indirect_utility[, …])

get_outputs

expected_u_integrate_allj(prob_denominator)[source]

See Train's discussion of consumer surplus and logit. TODO: check the reference that Train cites to make sure this integration applies in this case; it should be derived independently. If one option has much higher utility and the variance is low, the integrated utility is linear in that option's utility; in fact the two are equal.
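The integration referred to here is presumably the logit log-sum ("inclusive value") formula for the expected maximum of utilities with extreme-value shocks; the sketch below is a hedged illustration of that formula (the function name is mine, not the class's API, and the additive Euler-constant term is omitted since it does not affect comparisons). It also demonstrates the limiting behavior the docstring describes: with one dominant option and a small scale, the expected utility equals that option's utility.

```python
import numpy as np

def expected_utility_logsum(V, scale=1.0):
    """Log-sum formula: E[max_j (V_j + scale*eps_j)] up to an additive constant.

    Computed as m + scale * log(sum_j exp((V_j - m) / scale)) with m = max(V),
    i.e. the max is subtracted first so the exponentials never overflow.
    """
    V = np.asarray(V, dtype=float)
    m = V.max(axis=-1)
    return m + scale * np.log(np.exp((V - m[..., None]) / scale).sum(axis=-1))

# One option dominates and the scale (variance) is small: the log-sum
# collapses to the dominant option's utility, as the docstring notes.
V = np.array([10.0, 0.0, 0.0])
print(expected_utility_logsum(V, scale=0.01))  # ~= 10.0
```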

prob_denominator(all_J_indirect_utility)[source]
if:

all_J_indirect_utility / self.scale_coef = -598.66 / 0.75

then each exponentiated term underflows to zero:

exp(-598.66 / 0.75) = 0.0

then the denominator, the sum of these terms, is also zero:

prob_denominator = sum of exp(all_J_indirect_utility / self.scale_coef) = 0

then the probability computation divides zero by zero:

np.exp(all_J_indirect_utility / self.scale_coef) / prob_denominator_tile = INVALID

So there must be some minimal level (floor) for this division, or equivalently the utilities must be rescaled before exponentiating.
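The failure mode above, and one standard rescaling that avoids it (subtracting the per-row maximum utility before exponentiating, which leaves the probabilities unchanged), can be sketched as follows; the array values and names are illustrative, not taken from the class:

```python
import numpy as np

scale_coef = 0.75
# Very negative indirect utilities, as in the example above.
u = np.array([[-598.66, -600.0, -601.2]])

# Naive denominator: exp(-598.66 / 0.75) underflows to 0.0, the row sum is 0,
# and the division produces 0/0 = nan.
naive = np.exp(u / scale_coef)
with np.errstate(invalid="ignore", divide="ignore"):
    bad = naive / naive.sum(axis=1, keepdims=True)

# Stable version: shift each row by its maximum. The probabilities are
# identical in exact arithmetic because the exp(max/scale) factor cancels
# between numerator and denominator.
shifted = np.exp((u - u.max(axis=1, keepdims=True)) / scale_coef)
good = shifted / shifted.sum(axis=1, keepdims=True)

print(np.isnan(bad).all())  # True: the naive computation is invalid
print(good.sum(axis=1))     # [1.]: the rescaled probabilities are well defined
```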