# Evaluating a model

For a given model and set of parameters, we can measure overall fit to a dataset by calculating the log likelihood. The likelihood is the probability of the data according to the model. A probability is calculated for each full recall sequence, and these probabilities are multiplied across sequences to obtain an overall probability of the data. In practice, this leads to extremely small probabilities that may be difficult for the computer to represent. Therefore, we work with log probabilities, summing the log probability of each sequence instead of multiplying raw probabilities. For both likelihood and log likelihood, greater values indicate a better fit of the model to the data.

In [1]: from cymr import fit, parameters

In [2]: data = fit.sample_data('Morton2013_mixed').query('subject <= 3')
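To see why raw probabilities are a problem, consider a quick NumPy sketch (the event count and probability here are made up for illustration):

```python
import numpy as np

# suppose 1000 recall events, each with probability 0.05 under the model
probs = np.full(1000, 0.05)

# multiplying raw probabilities underflows float64 precision to zero
product = np.prod(probs)
print(product)  # 0.0

# summing log probabilities stays in a representable range
logl = np.sum(np.log(probs))
print(logl)  # a large negative, but finite, value
```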


## Patterns and Weights

To simulate free recall using the CMR-Distributed model, we must first define pre-experimental weights for the network. For this example, we’ll define localist patterns, which are distinct for each presented item. They can be represented by an identity matrix with one entry for each item.

In [3]: import numpy as np
....: n_items = 768

In [4]: loc_patterns = np.eye(n_items)
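A small sketch of what this identity matrix looks like, using four items for readability:

```python
import numpy as np

# localist patterns: one distinct unit vector per item
loc = np.eye(4)
print(loc[0])  # [1. 0. 0. 0.]

# each item matches only itself; different items have zero overlap
print(loc @ loc.T)  # identity matrix: no similarity between distinct items
```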


To indicate where the patterns should be used in the network, they are specified as vector patterns (for the $$\mathrm{M}^{FC}$$ and/or $$\mathrm{M}^{CF}$$ matrices) or similarity patterns (for the $$\mathrm{M}^{FF}$$ matrix). We also label each pattern with a name; here, we’ll refer to the localist patterns as 'loc'.

In [5]: patterns = {'vector': {'loc': loc_patterns}}
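For illustration only, a patterns dictionary using both kinds of entries might look like the following; the 'sem' patterns here are hypothetical random embeddings, not part of this example's dataset:

```python
import numpy as np

n_items = 8
loc_patterns = np.eye(n_items)

# hypothetical distributed patterns (e.g., semantic embeddings),
# included only to show the dictionary layout
sem_patterns = np.random.randn(n_items, 3)

patterns = {
    'vector': {'loc': loc_patterns, 'sem': sem_patterns},
    'similarity': {'sem': sem_patterns @ sem_patterns.T},
}
print(patterns['similarity']['sem'].shape)  # (8, 8)
```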


## Parameters

Parameters objects define how parameter values will be interpreted. One use is to define the layers and sublayers of a network.

Each pattern is placed in a region of the connection matrix. The region is defined by the sublayer and segment of the $$f$$ and $$c$$ layers. Conventionally, the $$f$$ layer has only one sublayer called 'task'. The $$c$$ layer may have multiple sublayers with different names. Here, we’ll just use one, also called 'task'.

First, we indicate what sublayers will be included in the network.

In [6]: param_def = parameters.Parameters()

In [7]: param_def.set_sublayers(f=['task'], c=['task'])

Patterns may include multiple components, each of which can be weighted differently. Weight parameters set the weighting of each component. Here, we have only one component, which we assign a weight based on the value of the w_loc parameter.

When setting the weights, we first indicate the region to apply weights to, followed by an expression. This expression may reference parameters and/or patterns.

In [8]: weights = {(('task', 'item'), ('task', 'item')): 'w_loc * loc'}

In [9]: param_def.set_weights('fc', weights)

In [10]: param_def.set_weights('cf', weights)
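Conceptually, the weight expression is evaluated with parameter and pattern names in scope. The following is a rough sketch of that evaluation, not cymr's actual implementation:

```python
import numpy as np

# names available to the expression: parameters and patterns
param = {'w_loc': 1.0}
patterns = {'loc': np.eye(4)}

# 'w_loc * loc' resolves each name, then evaluates to the region weights
env = {**param, **patterns}
region_weights = eval('w_loc * loc', {}, env)  # illustration only
print(region_weights.shape)  # (4, 4)
```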


Segments for simulating the start of the list will also be added automatically.

Finally, we define the parameters that we want to evaluate by creating a dictionary with a name and value for each parameter. We’ll get a different log likelihood for each parameter set. For a model to be evaluated, all parameters expected by that model must be defined, including any parameters used for setting weights (here, w_loc).

In [11]: param = {
....:     'B_enc': 0.7,
....:     'B_start': 0.3,
....:     'B_rec': 0.9,
....:     'w_loc': 1,
....:     'Lfc': 0.15,
....:     'Lcf': 0.15,
....:     'P1': 0.2,
....:     'P2': 2,
....:     'T': 0.1,
....:     'X1': 0.001,
....:     'X2': 0.25
....: }
....:


## Evaluating log likelihood

Define a model (here, cmr.CMRDistributed) and use likelihood() to evaluate the log likelihood of the observed data according to that model and these parameter values. Greater (i.e., less negative) log likelihood values indicate a better fit. In Fitting a model, we’ll use a parameter search to estimate the best-fitting parameters for a model.

In [12]: from cymr import cmr

In [13]: model = cmr.CMRDistributed()

In [14]: logl, n = model.likelihood(data, param, param_def=param_def, patterns=patterns)

In [15]: print(f'{n} data points evaluated.')
1178 data points evaluated.

In [16]: print(f'Log likelihood is: {logl:.4f}')
Log likelihood is: -3073.1892
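Log likelihood differences between parameter sets have a direct interpretation: the difference is the log of a likelihood ratio. A small sketch, with a made-up second log likelihood value:

```python
import math

# log likelihoods for the same data under two hypothetical parameter sets
logl_a = -3073.19
logl_b = -3080.42

# the difference in log likelihood is the log of the likelihood ratio
ratio = math.exp(logl_a - logl_b)
print(f'Data are {ratio:.0f}x more probable under parameter set A.')
```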