Mr. Estimator¶
Welcome to the Toolbox for the Multistep Regression Estimator (“Mister Estimator”).
If you find bugs, encounter unexpected behaviour or want to comment, please let us know via mail or open an issue on GitHub. Any input is greatly appreciated.
- Documentation
- Getting Started
- Python Package index
- Github
- arXiv (a nicely formatted PDF)
- Details on the multistep regression estimator: J. Wilting and V. Priesemann, Nat. Commun. 9, 2325 (2018)
If you use our toolbox for a scientific publication please cite it as
Spitzner FP, Dehning J, Wilting J, Hagemann A, P. Neto J, Zierenberg J, et al. (2021) MR. Estimator, a toolbox to determine intrinsic timescales from subsampled spiking activity. PLoS ONE 16(4): e0249447. https://doi.org/10.1371/journal.pone.0249447
Dependencies¶
- Python (>=3.5)
- numpy (>=1.11.0)
- scipy (>=1.0.0)
- matplotlib (>=1.5.3)
Optional Dependencies¶
- numba (>=0.44), for parallelization
- tqdm, for progress bars
We recommend (and develop with) the latest stable versions of the dependencies, at the time of writing that is Python 3.7.0, numpy 1.15.1, scipy 1.1.0 and matplotlib 2.2.3.
Installation¶
Assuming a working Python3 environment, you can usually install via pip (the [full] extra also installs the optional dependencies):
pip3 install 'mrestimator[full]'
To install (or update an existing installation) with optional dependencies:
pip3 install -U 'mrestimator[full]'
If you run into problems during installation, they are most likely due to numpy and scipy. You may check the official scipy.org documentation or try using anaconda as outlined below.
Install Using Anaconda¶
We sincerely recommend using conda, especially if you are unsure about the dependencies on your system or lack administrator privileges. It is easy to install, allows you to manage different versions of Python, and if something breaks, you can roll back and reinstall easily - all without leaving your user directory.
Head over to anaconda.com, and download the installer for Python 3.7.
After following the installation instructions (default settings are fine for most users), start a new python session by typing python in a new terminal window. You will see something similar to the following:
Python 3.7.0 (default, Jun 28 2018, 07:39:16)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
End the session (exit() or Ctrl-D) and type conda list, which will output a list of the packages that came bundled with anaconda. All dependencies for Mr. Estimator are included.
Optionally, you can create a new environment (e.g. named 'myenv') for the toolbox with conda create --name myenv and activate it with source activate myenv (activate myenv on Windows). For more details on managing environments with conda, see here.
Now install using pip: pip install 'mrestimator[full]'. Afterwards you should be able to import the module into any python3 session:
python
>>> import mrestimator as mre
INFO Loaded mrestimator v0.1.6, writing to /tmp/mre_paul/
Manual Installation¶
Clone the repository via ssh or https
git clone git@github.com:Priesemann-Group/mrestimator.git
git clone https://github.com/Priesemann-Group/mrestimator.git
And optionally,
export PYTHONPATH="${PYTHONPATH}:$(pwd)/mrestimator"
This line adds the downloaded directory to your PYTHONPATH
environment
variable, so that it will be found automatically when importing. If you want to add the path
automatically when you login, you can add it to your ~/.bashrc
or ~/.profile
:
echo 'export PYTHONPATH="${PYTHONPATH}:'$(pwd)'/mrestimator"' >> ~/.bashrc
Pre-release versions¶
You can upgrade to pre-release versions using pip
pip install -U --pre 'mrestimator[full]'
To revert to the stable version, run
pip install mrestimator==0.1.6
or
pip install --force-reinstall mrestimator
for a complete (longer) reinstall of all dependencies.
Parallelization and running on clusters¶
By default, the toolbox and its dependencies use all threads available on the host machine. While this is great when running locally, it is undesirable for distributed computing, as the workload manager expects jobs in serial queues to use only one thread. To disable multi-threading, you can set the following environment variables (e.g. at the beginning of a job file):
export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export OMP_NUM_THREADS=1
export NUMBA_NUM_THREADS=1
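If you prefer, the same limits can be set from within python, as long as this happens before numpy (or the toolbox) is imported for the first time. A minimal sketch:

```python
import os

# limit the numeric libraries to one thread each; this only takes
# effect if set *before* numpy/mrestimator are imported
for var in ("OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS",
            "NUMEXPR_NUM_THREADS", "OMP_NUM_THREADS",
            "NUMBA_NUM_THREADS"):
    os.environ[var] = "1"

# only now import the numeric stack, e.g.
# import numpy as np
# import mrestimator as mre
```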
Getting Started¶
If you installed the toolbox via pip, you can import it directly. Also import numpy and matplotlib, most examples in the documentation use them.
import numpy as np
import matplotlib.pyplot as plt
import mrestimator as mre
If you installed the toolbox manually, add the location where it is stored before importing, so python can find it.
import sys
sys.path.append('/path/to/mrefolder/')
Below we walk through the example script with the provided data. You can either follow along step by step, copying snippets into a python console, or run the full script and modify it to your needs.
You can grab the resources on github.
Preparing Data¶
The toolbox is built with spike-train data in mind where an activity \(A_t\) is recorded for sequential times \(t\). \(A_t\) can be the number of recorded events in an interval or a continuous observable.
Furthermore, source data needs to be in a trial (replica) structure. We use a two-dimensional numpy.ndarray where the first index is the trial number and the second index the measurement point (sample) at time \(t\). Even if there is only one time series (one repetition), we still use the first index. All trials need to have the same length; if they are too short or too long, you will have to trim them (this will be more flexible in the future).
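To illustrate the trial structure with plain numpy (no toolbox functions involved), here is a sketch that stacks three separately recorded series into the (numtrials, datalength) layout, trimming them to a common length:

```python
import numpy as np

# three trials, recorded as separate 1d arrays (here with unequal lengths)
trial_a = np.array([1., 2., 3., 4., 5.])
trial_b = np.array([2., 3., 4., 5.])
trial_c = np.array([3., 4., 5., 6., 7., 8.])

# trim all trials to the shortest common length and stack them:
# first index = trial, second index = measurement point
length = min(len(t) for t in (trial_a, trial_b, trial_c))
data = np.stack([t[:length] for t in (trial_a, trial_b, trial_c)])
print(data.shape)    # (3, 4): 3 trials, 4 samples each

# a single time series still gets the (length-one) trial index
single = trial_a.reshape(1, -1)
print(single.shape)  # (1, 5)
```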
In a typical scenario, you want to read your data from disk. For reading plain text files we will use the input_handler() function.
Download the example data and remember where you saved it, e.g. /Users/me/example/data/. In a new python shell, set the work directory so we can use relative paths and create an output directory to save results
import os
os.chdir('/Users/me/example/data/')
os.makedirs('./output', exist_ok=True)
First, let us import data from a single file ./data/full.tsv where each column corresponds to one trial (repetition). In the example we have ten trials with length 10000.
filepath = './data/full.tsv'
srcful = mre.input_handler(filepath)
print('srcful has shape: ', srcful.shape)
We can also import a single column if the others contain unwanted data. drive.tsv contains a continuous index in the first (zeroth) column and the actual drive we used for the example branching process in the second (first) column.
srcdrv = mre.input_handler('./data/drive.tsv', usecols=1)
print('drive has shape: ', srcdrv.shape)
Note how the returned structure again has two dimensions, even though we only have one timeseries.
We can also pass a list of filepaths or use wildcards * to match a file pattern. The ./data/sub_*.tsv files have one column each, so we will get three trials for the filelist and ten when using the wildcard, since there are ten files matching the pattern.
filelist = [
'./data/sub_01.tsv',
'./data/sub_02.tsv',
'./data/sub_03.tsv']
srcsub = mre.input_handler(filelist)
print('imported trials from list: ', srcsub.shape[0])
# overwrite srcsub
srcsub = mre.input_handler('./data/sub_*.tsv')
print('imported trials from wildcard: ', srcsub.shape[0])
The advantage of the trial structure is that we can easily compute e.g. averages over all trials:
avgful = np.mean(srcful, axis=0)
avgsub = np.mean(srcsub, axis=0)
Analysis¶
For convenience, we have built a wrapper function full_analysis() that does all the sequential steps (preparing data, calculating coefficients, fitting and exporting) in the right order. As you might see, it uses the input_handler() like we have just done manually.
Please note: full_analysis() might change in the future, as we are still experimenting to find out what the easiest interface is. Check the changelog before updating, so your scripts don't break.
auto = mre.full_analysis(
data='./data/sub_*.tsv',
coefficientmethod='ts',
targetdir='./output',
title='Full Analysis',
dt=4, dtunit='ms',
tmin=0, tmax=8000,
fitfuncs=['exp', 'exp_offs', 'complex'],
)
plt.show()
A window should pop up looking something like below.
At the top there is an overview of the imported data. Individual trials/realisations of ./data/sub_*.tsv are slightly transparent and the average at time \(t\) is plotted darker. In the second row you see how the average activity (per trial) develops across your trials. On the bottom are the plotted results and the values calculated for \(\tau\) and \(m\).
So what did all the arguments to full_analysis() do? First of all, you will find the exact same plot, with the specified title, as Full Analysis.pdf in the targetdir (here output).
dt and dtunit set the time scale, i.e. how far apart measurement points are. In the example, we have a recording every 4ms. With tmin and tmax we specify the interval (in dtunits) over which the autocorrelations are fitted. In the third plot you see that we fitted up to tmax=8000 ms and used the three builtin fitfunctions (a plain exponential, an exponential with offset, and the complex function; see fitfunctions).
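As a quick sanity check of that conversion (the toolbox does this internally, and its exact rounding convention may differ), the fit interval in real time translates to steps \(k\) by dividing by dt:

```python
dt = 4                 # one step corresponds to 4 dtunits (here: ms)
tmin, tmax = 0, 8000   # fit interval in ms, as passed above

# the corresponding interval in integer steps k
kmin, kmax = int(tmin / dt), int(tmax / dt)
print(kmin, kmax)      # 0 2000
```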

The function returns a figure (a matplotlib axes element), here assigned to auto, that only contains the third subplot with the correlation result. If you want to plot into an existing figure, you can provide a matplotlib.axes.Axes instance using the targetplot keyword argument.
Manual Analysis and Customization¶
Let's start by recreating the first subplot from above with some customization. To plot data with default styling, we create an OutputHandler and manually add the data we imported before as a time series.
oful = mre.OutputHandler()
oful.add_ts(srcful)
By default, if we add more than one trial at once, the data is plotted transparently. Next we add an average over trials and specify the plot color and a label.
avgful = np.mean(srcful, axis=0)
oful.add_ts(avgful, color='navy', label='average (full)')
Any kwargs (keyword arguments, think named options) are passed to matplotlib's plot function (which the toolbox uses for plotting). You can find more options to specify in the matplotlib documentation.
avgsub = np.mean(srcsub, axis=0)
oful.add_ts(srcsub, alpha=0.25, color='yellow', label='trials (subs.)')
oful.add_ts(avgsub, ls='dashed', color='maroon', label='average (subs.)')
oful.add_ts(srcdrv, color='green', label='drive')
plt.show()

(Note: matplotlib's ability to deal with colors has improved a lot since version 1.5.3. The code above uses backwards-compatible styling, but we recommend using the newer syntax, e.g. color='C0', if available. See the latest matplotlib api references compared to v1.5.3.)
So far, so good. After checking that the input is indeed what we want, we calculate the correlation coefficients \(r_k\) using the coefficients() function.
rkdefault = mre.coefficients(srcful, method='ts')
print(rkdefault)
print('this guy has the following attributes: ', rkdefault._fields)
coefficients() returns a CoefficientResult, which is a fancy way of saying we put all the needed information into a structure. One can access its content like this (for instance, to get the coefficients \(r_k\) that were calculated): rkdefault.coefficients.
We can manually specify the time steps for which we want to calculate coefficients (along with the unit of each time step, dtunit, and the number of units per step, dt):
rk = mre.coefficients(srcsub, method='ts',
steps=(1, 5000), dt=4, dtunit='ms', desc='mydat')
Here we want all coefficients from \(1\times 4 \rm{ms}\) to \(5000\times 4 \rm{ms}\), where, again, the measurement points of our data srcsub are 4ms apart. We also provided a custom description desc that will automatically appear in the plot legend.
Next, we have to call fit() to estimate the branching parameter and autocorrelation time. Note that the resulting \(\tau\) is independent of the time scale used, but \(m\) directly depends on dt. There will be a dedicated page showing this relation in the future.
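Until that page exists, the gist can be sketched from the exponential ansatz \(r_k = m^k = e^{-k\,dt/\tau}\), which implies \(m = e^{-dt/\tau}\): the branching parameter changes with the bin size dt while \(\tau\) does not (you can verify this against the m and tau values in the tsv output further below). A quick numeric check:

```python
import math

tau = 700.0  # autocorrelation time in ms; independent of the binning

for dt in (1, 4, 10):
    m = math.exp(-dt / tau)        # branching parameter for bin size dt
    tau_back = -dt / math.log(m)   # inverting the relation recovers tau
    print(dt, round(m, 5), round(tau_back, 1))
```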
Again, we can either use the default arguments, which take the details from the coefficients function, or specify more. Most importantly, we can set a custom range over which to fit (without recalculating the coefficients every time) and the fitfunction to use.
m = mre.fit(rk)
m2 = mre.fit(rk, steps=(1, 3000), fitfunc='offset')
The result of the fit is again grouped into a structure, but for now let's create a new OutputHandler and save it. You can add multiple things when creating the handler, or add them individually later.
ores = mre.OutputHandler([rkdefault, m])
ores.add_coefficients(rk)
ores.add_fit(m2)
ores.save('./output/custom')
plt.show()
This should show the plot below. Note that fits are drawn dashed over the range that did not contribute to the fitting.

You will also find the plot as custom.pdf in the output save location. Along with it is the raw data in a custom.tsv (tab-separated values) file. Let us look into that:
# legendlabel: mydat Fit Exponential
# description: mydat
# m=0.9943242837852002, tau=702.7549736664002[ms]
# fitrange: 1 <= k <= 5000[4.0ms]
# function: $A e^{-k/\tau}$
# with parameters:
# tau = 702.7549736664002
# A = 0.957555920246417
#
# legendlabel: mydat Fit Exp+Offset
# description: mydat
# m=0.9952035404934625, tau=831.9468533876288[ms]
# fitrange: 1 <= k <= 3000[4.0ms]
# function: $A e^{-k/\tau} + O$
# with parameters:
# tau = 831.9468533876288
# A = 0.9853947305761671
# O = -0.04020540759362338
#
# 1_steps[1ms] 2_coefficients 3_stderrs 4_mydat_coefficients 5_mydat_stderrs
1.000000000000000000e+00 9.967336095137534491e-01 2.769460094101808875e-04 nan nan
2.000000000000000000e+00 9.929733724524110183e-01 6.003410730629407926e-04 nan nan
3.000000000000000000e+00 9.888006597244309859e-01 9.619184985625585079e-04 nan nan
4.000000000000000000e+00 9.842378364504643651e-01 1.344448266892795014e-03 9.453904925500404843e-01 4.286386552253950571e-03
...
In the (admittedly quite long) header, marked by #, you find the two plotted fits: their description (if set), legendlabel, fitrange, the underlying function, and the parameters that were obtained.
After the fits come the column labels of the plotted data sets: first the x-axis values (steps and their units), followed by the corresponding y values. We first added rkdefault, which had no description specified, so its columns are simply labeled 2_coefficients and 3_stderrs. For the second data set we specified mydat, so you find that in columns 4 and 5. Note that we only have data for multiples of \(4\rm{ms}\) in the latter data set, so it is padded with nans.
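Since the header lines start with #, the file can be read straight back in, e.g. with numpy (loadtxt treats # lines as comments by default). A sketch using a stripped-down stand-in for custom.tsv:

```python
import io
import numpy as np

# a minimal stand-in for the saved custom.tsv
tsv = io.StringIO(
    "# 1_steps[1ms] 2_coefficients 3_stderrs\n"
    "1.0 0.9967 0.0003\n"
    "2.0 0.9930 0.0006\n"
)

# '#' lines are skipped as comments by default
data = np.loadtxt(tsv)
steps, coeffs, stderrs = data.T
print(data.shape)  # (2, 3)
print(steps)       # [1. 2.]
```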
We strongly advise keeping those meta tsv files around so you can reproduce the plots at a later time, in another plotting program (e.g. gnuplot is fine with this layout), or share them with your colleagues.
The Toolbox¶
Importing Data¶
- mrestimator.input_handler(items, **kwargs)[source]¶
Helper function that attempts to detect the provided input and convert it to the format used by the toolbox. Ideally, you provide the native format, a numpy.ndarray of shape (numtrials, datalength).
Not implemented yet: All trials should have the same data length, otherwise they will be padded.
The toolbox uses two-dimensional ndarrays to pass data to/from functions. This allows consistent access to trials and data points via the first and second index, respectively.
Parameters:
- items (str, list or ndarray) – A string is assumed to be the path to a file that is then imported as pickle or plain text. Wildcards should work. Alternatively, you can provide a list or ndarray containing strings or already imported data. In the latter case, input_handler attempts to convert it to the right format.
- kwargs – Keyword arguments passed to numpy.loadtxt() when filenames are detected (see the numpy documentation for a full list). For instance, you can provide usecols=(1,2) if your files have multiple columns and only columns 1 and 2 contain the trial data you want to use. The input handler adds each column in each file to the list of trials.
Returns: ndarray – containing your data (hopefully) formatted correctly. Access via [trial, datapoint].
Example

# import a single file
prepared = mre.input_handler('/path/to/yourfiles/trial_1.csv')
print(prepared.shape)

# or from a list of files
myfiles = ['~/data/file_0.csv', '~/data/file_1.csv']
prepared = mre.input_handler(myfiles)

# all files matching the wildcard, but only columns 3 and 4
prepared = mre.input_handler('~/data/file_*.csv', usecols=(3, 4))

# access your data, e.g. measurement 10 of trial 3
pt = prepared[3, 10]
- mrestimator.simulate_branching(m, a=None, h=None, length=10000, numtrials=1, subp=1, seed='random')[source]¶
Simulates a branching process with Poisson input. Returns data in the trial structure.
Per default, the function discards the first few time steps to produce stationary activity. If a drive is passed as h=0, the recording starts instantly (and produces exponentially decaying activity).
Parameters:
- m (float) – Branching parameter.
- a (float) – Stationary activity of the process. Only considered if no drive h is specified.
- h (array, optional) – Specify a custom drive (possibly changing) for every time step. If h is given, its length takes priority over the length parameter. If the first or only value of h is zero, the recording starts instantly with set activity a and the resulting timeseries will not be stationary in the beginning.
- length (int, optional) – Number of steps for the process, thereby sets the total length of the generated time series. Overwritten if drive h is set as an array.
- numtrials (int, optional) – Generate ‘numtrials’ trials. Default is 1.
- seed (int, optional) – Initialise the random number generator with a seed. Per default, seed='random' and the generator is seeded randomly (hence each call to simulate_branching() returns different results). seed=None skips (re)seeding.
- subp (float, optional) – Subsample the activity with the probability subp (calls simulate_subsampling() before returning).
Returns: ndarray – with numtrials time series, each containing length entries of activity. Per default, one trial is created with 10000 measurements.
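For intuition, the dynamics described above can be sketched in a few lines of plain numpy (this is an illustration only, not the toolbox's implementation): each active unit spawns offspring with mean m per time step, plus an external Poisson drive h = a(1-m) that keeps the stationary activity at a:

```python
import numpy as np

def branching_sketch(m, a, length=10000, numtrials=1, rng=None):
    """Minimal branching process with Poisson input (illustration only,
    not mrestimator's simulate_branching)."""
    rng = np.random.default_rng(rng)
    h = a * (1 - m)                  # drive yielding stationary activity a
    act = np.zeros((numtrials, length))
    act[:, 0] = a                    # start in the stationary state
    for t in range(1, length):
        # offspring of the previous step plus external drive; the sum of
        # A Poisson(m) offspring is distributed as Poisson(m * A)
        act[:, t] = rng.poisson(m * act[:, t - 1] + h)
    return act

bp = branching_sketch(m=0.98, a=100, length=2000, numtrials=5, rng=42)
print(bp.shape)              # (5, 2000)
print(round(bp.mean(), 1))   # fluctuates around the stationary activity a
```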
- mrestimator.simulate_subsampling(data, prob=0.1, seed='random')[source]¶
Apply binomial subsampling.
Parameters:
- data (ndarray) – Data (in trial structure) to subsample. Note that data will be cast to integers. For instance, if your activity is normalised, consider multiplying by a constant first.
- prob (float) – Subsample to probability prob. Default is 0.1.
- seed (int, optional) – Initialise the random number generator with a seed. Per default set to 'random': seed randomly (hence each call to simulate_subsampling() returns different results). Set seed=None to keep the rng state.
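The binomial subsampling described here amounts to keeping each recorded event independently with probability prob. A plain numpy sketch (not the toolbox internals):

```python
import numpy as np

def subsample_sketch(data, prob=0.1, rng=None):
    """Binomial subsampling: each event survives with probability prob
    (illustration only, not mrestimator's simulate_subsampling)."""
    rng = np.random.default_rng(rng)
    # draw, for each time bin, how many of the n events are observed
    return rng.binomial(data.astype(int), prob)

full = np.full((1, 10000), 100)   # constant activity of 100 events per bin
sub = subsample_sketch(full, prob=0.1, rng=1)
print(round(sub.mean(), 1))       # on average prob * 100 = 10 events
```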
Correlation Coefficients¶
- mrestimator.coefficients(data, method=None, steps=None, dt=1, dtunit='ms', knownmean=None, numboot=100, seed=5330, description=None, desc=None)[source]¶
Calculates the coefficients of correlation \(r_k\).
Parameters:
- data (ndarray) – Input data, containing the time series of activity in the trial structure. If a one-dimensional array is provided instead, we assume a single trial and reshape the input.
- method (str) – The estimation method to use, either 'trialseparated' ('ts') or 'stationarymean' ('sm'). 'ts' calculates the \(r_k\) for each trial separately and averages over trials. The resulting coefficients can be biased if the trials are too short. 'sm' assumes the mean activity and its variance to be constant across all trials. The mean activity is then calculated from the larger pool of data from all trials and the short-trial bias might be compensated. If you are unsure, compare results from both methods. If they agree, trials should be long enough.
- steps (array, optional) – Specify the steps \(k\) for which to compute coefficients \(r_k\). If an array of length two is provided, e.g. steps=(minstep, maxstep), all enclosed integer values will be used. Arrays larger than two are assumed to contain a manual choice of steps. Strides other than one are possible.
- dt (float, optional) – The size of each step in dtunits. Default is 1.
- dtunit (str, optional) – Units of step size. Default is ‘ms’.
- description (str, optional) – Set the description of the CoefficientResult. By default, all results of functions working with this set inherit its description (e.g. plot legends).
Other Parameters:
- knownmean (float, optional) – If the (stationary) mean activity is known beforehand, it can be provided here. In this case, the provided value is used instead of approximating the expectation value of the activity using the mean.
- numboot (int, optional) – Enable bootstrapping to generate numboot (resampled) series of trials from the provided one. This allows to approximate statistical errors, returned in stderrs. Default is numboot=100.
- seed (int, None or 'random', optional) – If bootstrapping (numboot>0), a custom seed can be passed to the random number generator used for resampling. Per default, it is set to the same value every time coefficients() is called, to return consistent results when repeating the analysis on the same data. Set to None to prevent (re)seeding; 'random' seeds using the wall clock time. For more details, see numpy.random.RandomState.
Returns: CoefficientResult – The output is grouped and can be accessed using its attributes (listed below).
- class mrestimator.CoefficientResult[source]¶
Result returned by coefficients(). Subclassed from namedtuple.
Attributes are set to None if the specified method or input data do not provide them. All attributes of type ndarray and lists are one-dimensional.
Variables:
- coefficients (ndarray) – Contains the coefficients \(r_k\), has length numsteps. Access via .coefficients[step]
- steps (ndarray) – Array of the \(k\) values matching coefficients.
- dt (float) – The size of each step in dtunits. Default is 1.
- dtunit (str) – Units of step size. Default is ‘ms’.
- method (str or None) – The method that was used to calculate the coefficients
- stderrs (ndarray or None) – Standard errors of the \(r_k\).
- trialactivities (ndarray) – Mean activity of each trial in the provided data. To get the global mean activity, use np.mean(trialactivities). Has length numtrials.
- description (str) – Description (or name) of the data set; by default, all results of functions working with this set inherit its description (e.g. plot legends).
- numtrials (int) – Number of trials that contributed.
- numboot (int) – Number of bootstrap replicas that were created.
- numsteps (int) – Number of steps in coefficients, steps and stderrs.
- bootstrapcrs (list) – List containing the numboot CoefficientResult instances that were calculated from the resampled input data. The list is empty if bootstrapping was skipped (numboot=0).
- trialcrs (list) – List of the CoefficientResult instances calculated from individual trials. Only has length numtrials if the trialseparated method was used; otherwise it is empty.
Note
At the time of writing, ndarray behaves a bit unexpectedly when creating arrays from objects that are sequence-like (such as CoefficientResult and FitResult), even when specifying dtype=object. Numpy converts the objects into an n-dimensional structure instead of creating the (probably desired) 1d array. To work around the issue, use a list, or manually create the array with dtype=object and add the entries after creation.
Example

import numpy as np
import matplotlib.pyplot as plt
import mrestimator as mre

# branching process with 15 trials
bp = mre.simulate_branching(m=0.995, a=10, numtrials=15)

# the bp returns data already in the right format
rk = mre.coefficients(bp, method='ts', dtunit='step')

# fit
ft = mre.fit(rk)

# plot coefficients and the autocorrelation fit
mre.OutputHandler([rk, ft])
plt.show()

# print the coefficients
print(rk.coefficients)

# get the documentation
print(help(rk))

# rk is inherited from namedtuple with all the bells and whistles
print(rk._fields)

Fit of the coefficients¶
- mrestimator.fit(data, fitfunc=f_exponential_offset, steps=None, fitpars=None, fitbnds=None, maxfev=None, ignoreweights=True, numboot=0, quantiles=None, seed=101, desc=None, description=None)[source]¶
Estimate the Multistep Regression Estimator by fitting the provided correlation coefficients \(r_k\). The fit is performed using scipy.optimize.curve_fit() and can optionally be provided with (multiple) starting fit parameters and bounds.
Parameters:
- data (CoefficientResult or array) – Correlation coefficients to fit. Ideally, provide this as a CoefficientResult as obtained from coefficients(). If arrays are provided, the function tries to match the data.
. If arrays are provided, the function tries to match the data. - fitfunc (callable, optional) – The model function, f(x, …).
Directly passed to curve_fit():
It must take the independent variable as
the first argument and the parameters to fit as separate remaining
arguments.
Default is
f_exponential_offset
. Other builtin options aref_exponential
andf_complex
- steps (array, optional) – Specify the steps \(k\) for which to fit (think fitrange). If an array of length two is provided, e.g. steps=(minstep, maxstep), all enclosed values present in the provided data, including minstep and maxstep, will be used. Arrays larger than two are assumed to contain a manual choice of steps, and those that are also present in data will be used. Strides other than one are possible. Ignored if data is not passed as a CoefficientResult. Default: all values given in data are included in the fit.
Other Parameters:
- fitpars (ndarray, optional) – The starting parameters for the fit. If the provided array is two-dimensional, multiple fits are performed and the one with the smallest sum of squared residuals is returned.
- fitbnds (ndarray, optional) – Lower and upper bounds for each parameter handed to the fitting routine. Provide as a numpy array of the form [[lowpar1, lowpar2, ...], [uppar1, uppar2, ...]]
- numboot (int, optional) – Number of bootstrap samples to compute errors from. Default is 0
- seed (int, None or 'random', optional) – If numboot is not zero, provide a seed for the random number generator. If seed=None, seeding will be skipped. Per default, the rng is (re)seeded every time fit() is called so that every repeated call returns the same error estimates.
- quantiles (list, optional) – If numboot is not zero, provide the quantiles to return (between 0 and 1). See numpy.quantile. Defaults are [.125, .25, .4, .5, .6, .75, .875]
- maxfev (int, optional) – Maximum iterations for the fit.
- description (str, optional) – Provide a custom description.
Returns: FitResult – The output is grouped and can be accessed using its attributes (listed below).
- class mrestimator.FitResult[source]¶
Result returned by fit(). Subclassed from namedtuple.
Variables:
- tau (float) – The estimated autocorrelation time in dtunits. Default is 'ms'.
- mre (float) – The branching parameter estimated from the multistep regression.
- fitfunc (callable) – The model function, f(x, …). This allows fitting directly with popt. To get the (TeX) description of a (builtin) function, use ut.math_from_doc(fitfunc).
- popt (array) – Final fit parameters obtained from the (best) underlying scipy.optimize.curve_fit(). Beware that these are not corrected for the time bin size; this needs to be done manually (for time and frequency variables).
- pcov (array) – Final covariance matrix obtained from the (best) underlying scipy.optimize.curve_fit().
- ssres (float) – Sum of the squared residuals for the fit with popt. This is not yet normalised per degree of freedom.
- steps (array) – The step numbers \(k\) of the coefficients \(r_k\) that were included in the fit. Think fitrange.
- dt (float) – The size of each step in dtunits. Default is 1.
- dtunit (str) – Units of step size and of the calculated autocorrelation time. Default is 'ms'. dt and dtunit are inherited from CoefficientResult. Overwrite them by providing data from coefficients() with the desired values set there.
and the desired values set there. - quantiles (list or None) – Quantile values (between 0 and 1, inclusive) calculated from
bootstrapping. See
numpy.quantile
. Defaults are[.125, .25, .4, .5, .6, .75, .875]
- tauquantiles (list or None) – Resulting \(\tau\) values for the respective quantiles above.
- mrequantiles (list or None) – Resulting \(m\) values for the respective quantiles above.
- description (str) – Description, inherited from CoefficientResult. A description provided to fit() takes priority, if set.
Example

import numpy as np
import matplotlib.pyplot as plt
import mrestimator as mre

bp = mre.simulate_branching(m=0.99, a=10, numtrials=15)
rk = mre.coefficients(bp, dtunit='step')

# compare the builtin fitfunctions
m1 = mre.fit(rk, fitfunc=mre.f_exponential)
m2 = mre.fit(rk, fitfunc=mre.f_exponential_offset)
m3 = mre.fit(rk, fitfunc=mre.f_complex)

# plot manually without using OutputHandler
plt.plot(rk.steps, rk.coefficients, label='data')
plt.plot(rk.steps, mre.f_exponential(rk.steps, *m1.popt),
    label='exponential m={:.5f}'.format(m1.mre))
plt.plot(rk.steps, mre.f_exponential_offset(rk.steps, *m2.popt),
    label='exp + offset m={:.5f}'.format(m2.mre))
plt.plot(rk.steps, mre.f_complex(rk.steps, *m3.popt),
    label='complex m={:.5f}'.format(m3.mre))
plt.legend()
plt.show()

Wrapper¶
- mrestimator.full_analysis(data=None, dt=None, kmax=None, dtunit=' time unit', fitfuncs=None, coefficientmethod=None, tmin=None, tmax=None, steps=None, substracttrialaverage=False, targetdir=None, title=None, numboot='auto', seed=1, loglevel=None, targetplot=None, showoverview=True, saveoverview=False, method=None)[source]¶
Wrapper function that performs the following four steps:
- check data with input_handler()
- calculate correlation coefficients via coefficients()
- fit autocorrelation function with fit()
- export/plot using the OutputHandler
Usually it should suffice to tweak the arguments and call this wrapper function (multiple times). Calling the underlying functions individually gives slightly more control, though. We recommend setting showoverview=False when calling in loops, to avoid opening many figures (and wasting RAM).
Parameters: - data (str, list or numpy.ndarray) – Passed to input_handler(). Ideally, import and check data first. A string is assumed to be the path to file(s) that is then imported as pickle or plain text. Alternatively, you can provide a list or ndarray containing strings or already imported data. In the latter case, input_handler() attempts to convert it to the right format.
- dt (float) – How many dtunits separate the measurements of the provided data. For example, if measurements are taken every 4ms: dt=4, dtunit=’ms’.
- kmax (int) – Maximum time lag k (in time steps of size dt) to use for coefficients. Alternatively, tmax or steps can be specified
Other Parameters: - dtunit (str, optional) – Unit description/name of the time steps of the provided data.
- fitfuncs (list, optional) – Which fitfunctions to use, e.g. fitfuncs=['e', 'eo', 'c']. Renamed from fitfunctions in v0.1.4.
. Renamed from fitfunctions in v0.1.4. - coefficientmethod (str, optional) – ts or sm, method used for determining the correlation
coefficients. See the
coefficients()
function for details. Default is ts. - method (str, optional) – same as coefficientmethod, introduced in v0.1.6.
- tmin (float) – Smallest time separation to use for coefficients, in units of dtunit. Only one argument is possible, either kmax or steps or tmin and tmax.
- tmax (float) – Maximum time separation to use for coefficients. For example, to fit the autocorrelation between 8ms and 2s set: tmin=8, tmax=2000, dtunit=’ms’ (independent of dt).
- steps (array, optional) – Specify the fitrange in steps \(k\) for which to compute coefficients \(r_k\). Note that \(k\) provided here would need to be multiplied with units of [dt * dtunit] to convert back to (real) time. If an array of length two is provided, e.g. steps=(minstep, maxstep), all enclosed integer values will be used. Arrays larger than two are assumed to contain a manual choice of steps. Strides other than one are possible. Only one argument is possible: either steps, or kmax, or tmin and tmax.
- substracttrialaverage (bool, optional) – Subtract the average across all trials before calculating correlation coefficients. Default is False.
- targetdir (str, optional) – String containing the path to the target directory where files are saved with the filename title. Per default, targetdir=None and no files are written to disk.
- title (str, optional) – String for the filenames. Also sets the main title of the overview panel.
- numboot (int or 'auto', optional) – Number of bootstrap samples to draw. This repeats every fit numboot times so that we can provide an uncertainty estimate of the resulting branching parameter and autocorrelation time. Per default, bootstrapping is only applied in coefficients() as most of the computing time is needed for the fitting. Thereby we have uncertainties on the \(r_k\) (which will be plotted) but each fit is only done once. Default is numboot='auto' where the number of samples depends on the fitfunction (100 for the exponential).
- seed (int, None or 'random', optional) – If numboot is not zero, provide a seed for the random number generator. If seed=None, seeding will be skipped. Per default, the rng is (re)seeded every time full_analysis() is called so that every repeated call returns the same error estimates.
- showoverview (bool, optional) – Whether to show the overview panel. Default is True. Set to False when calling full_analysis() repeatedly or when just saving the panels to disk with saveoverview (this temporarily overwrites your matplotlib rc parameters for more consistency). Otherwise, matplotlib may create large amounts of figures and leak memory. Note that even when set to True the panel might not show if full_analysis() is called through a script instead of an (interactive) shell; this depends on your matplotlib configuration.
- saveoverview (bool, optional) – Whether to save the overview panel in targetdir. Default is False.
- loglevel (str, optional) – The loglevel to use for the logfile created as title.log in the targetdir. 'ERROR', 'WARNING', 'INFO' or 'DEBUG'. Per default, no log is written unless loglevel and targetdir are provided.
- targetplot (matplotlib.axes.Axes, optional) – You can provide a matplotlib axes element (i.e. a subplot of an existing figure) to plot the correlations into. The axis will be passed to the OutputHandler and all plotting will happen within that axes. Per default, a new figure is created - that cannot be added as a subplot to any other figure later on. This is due to the way matplotlib handles subplots.
Returns: OutputHandler – associated with the correlation plot, fits and coefficients. Also saves meta data and plotted pdfs to targetdir.
Example
# test data, subsampled branching process
bp = mre.simulate_branching(m=0.95, h=10, subp=0.1, numtrials=50)

mre.full_analysis(
    data=bp,
    dt=1,
    tmin=0, tmax=1500,
    dtunit='step',
    fitfuncs=['exp', 'exp_offs', 'complex'],
    targetdir='./output',
    title='Branching Process')
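How a real-time fit range (tmin, tmax, in dtunits) relates to the integer lags k used internally can be sketched in plain numpy (for illustration only; the toolbox performs this conversion for you):

```python
import numpy as np

dt = 4                # one measurement every 4 ms, i.e. dtunit='ms'
tmin, tmax = 8, 2000  # desired fit range in ms

# integer time lags k such that k * dt lies within [tmin, tmax]
kmin = int(np.ceil(tmin / dt))
kmax = int(np.floor(tmax / dt))
steps = np.arange(kmin, kmax + 1)

# converting back to real time: multiply k by dt (in dtunits)
print(steps[0] * dt, steps[-1] * dt)  # 8 2000
```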
Fitfunctions¶
The builtin fitfunctions all follow this form:
- mre.f_fitfunction(k, arg1, arg2, ...)¶
Parameters: - k (array_like) – Independent variable as first argument. If an array is provided, an array of the same length will be returned where the function is evaluated elementwise.
- args (float) – Function arguments
Return type: float or array
Example
import numpy as np
import matplotlib.pyplot as plt
import mrestimator as mre

# evaluate exp(-1) via A e^(-k/tau)
print(mre.f_exponential(1, 1, 1))

# test data
rk = mre.coefficients(mre.simulate_branching(m=0.9, h=10, numtrials=10))

# pass builtin function to fit
f = mre.f_exponential_offset
m = mre.fit(rk, f)

# provide an array as function argument to evaluate elementwise
# this is useful for plotting
xargs = np.array([0, 1, 2, 3])
print(m.popt)

# unpack m.popt to provide all contained arguments at once
print(f(xargs, *m.popt))

# get a TeX string compatible with matplotlib's legends
print(mre.math_from_doc(f))
- mrestimator.f_complex(k, tau, A, O, tauosc, B, gamma, nu, taugs, C)[source]¶
\(|A| e^{-k/\tau} + B e^{-(k/\tau_{osc})^\gamma} \cos(2 \pi \nu k) + C e^{-(k/\tau_{gs})^2} + O\)
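The formula translates directly into numpy - a sketch for illustration (the toolbox ships this as mre.f_complex; parameter order as in the signature above):

```python
import numpy as np

def f_complex_sketch(k, tau, A, O, tauosc, B, gamma, nu, taugs, C):
    # term-by-term transcription of the displayed formula
    return (np.abs(A) * np.exp(-k / tau)
            + B * np.exp(-(k / tauosc) ** gamma) * np.cos(2 * np.pi * nu * k)
            + C * np.exp(-(k / taugs) ** 2)
            + O)

# at k=0 every exponential is 1 and the cosine is 1,
# so the value reduces to |A| + B + C + O
print(f_complex_sketch(0, 100, 1.0, 0.1, 50, 0.2, 1.5, 0.05, 20, 0.3))  # 1.6
```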
- mrestimator.default_fitpars(fitfunc)[source]¶
Called to get the default starting parameters for the built-in fitfunctions that are used to initialise the fitting routine. Timelike values specified here were derived assuming a timescale of milliseconds.
Parameters: fitfunc (callable) – The builtin fitfunction Returns: pars (~numpy.ndarray) – The default parameters of the given function, 2d array for multiple sets of initial conditions.
Plotting and Exporting¶
- class mrestimator.OutputHandler(data=None, ax=None)[source]¶
The OutputHandler can be used to export results and to create charts with timeseries, correlation coefficients or fits.
The main concept is to have one handler per plot. It contains functions to add content into an existing matplotlib axis (subplot), or, if not provided, creates a new figure. Most importantly, it also exports plaintext of the respective source material so figures are reproducible.
Note: If you want to have a live preview of the figures that are automatically generated with matplotlib, you HAVE to assign the result of mre.OutputHandler() to a variable. Otherwise, the created figures are not retained and vanish instantly.
Variables: - rks (list) – List of the CoefficientResult. Added with add_coefficients()
- fits (list) – List of the FitResult. Added with add_fit()
Example
import numpy as np
import matplotlib.pyplot as plt
import mrestimator as mre

bp = mre.simulate_branching(numtrials=15)
rk1 = mre.coefficients(bp, method='trialseparated', desc='T')
rk2 = mre.coefficients(bp, method='stationarymean', desc='S')
m1 = mre.fit(rk1)
m2 = mre.fit(rk2)

# create a new handler by passing a list of elements
out = mre.OutputHandler([rk1, m1])

# manually add elements
out.add_coefficients(rk2)
out.add_fit(m2)

# save the plot and meta to disk
out.save('~/test')
Working with existing figures:
# create figure with subplots
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)

# show each chart in its own subplot
mre.OutputHandler(rk1, ax1)
mre.OutputHandler(rk2, ax2)
mre.OutputHandler(m1, ax3)
mre.OutputHandler(m2, ax4)

# matplotlib customisations
myaxes = [ax1, ax2, ax3, ax4]
for ax in myaxes:
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
plt.show(block=False)

# hide a legend
ax1.legend().set_visible(False)
plt.draw()
- __init__(data=None, ax=None)[source]¶
Construct a new OutputHandler; optionally you can provide a list of elements to plot.
ToDo: Make the OutputHandlers talk to each other so that when one is written (possibly linked to others via one figure) all subfigure meta data is exported, too.
Parameters: - data (list, CoefficientResult or FitResult, optional) – List of the elements to plot/export. Can be added later.
- ax (Axes, optional) – An instance of a matplotlib axes (a subplot) to plot into.
- add_coefficients(data, **kwargs)[source]¶
Add an individual CoefficientResult. Note that it is not possible to add the same data twice; instead it will be redrawn with the new arguments/style options provided.
Parameters: - data (CoefficientResult) – Added to the list of plotted elements.
- kwargs – Keyword arguments passed to matplotlib.axes.Axes.plot. Use to customise the plots. If a label is set via kwargs, it will be used to overwrite the description of data in the meta file. If an alpha value or linestyle is set, the shaded error region will be omitted.
Example
rk = mre.coefficients(mre.simulate_branching())
mout = mre.OutputHandler()
mout.add_coefficients(rk, color='C1', label='test')
- add_fit(data, **kwargs)[source]¶
Add an individual FitResult. By default, the part of the fit that contributed to the fitting is drawn solid, the remaining range is dashed. Note that it is not possible to add the same data twice; instead it will be redrawn with the new arguments/style options provided.
Parameters: - data (FitResult) – Added to the list of plotted elements.
- kwargs – Keyword arguments passed to matplotlib.axes.Axes.plot. Use to customise the plots. If a label is set via kwargs, it will be added as a note in the meta data. If linestyle is set, the dashed plot of the region not contributing to the fit is omitted.
- add_ts(data, **kwargs)[source]¶
Add timeseries (possibly with trial structure). Not compatible with OutputHandlers that have data added via add_fit() or add_coefficients().
Parameters: - data (ndarray) – The timeseries to plot. If the ndarray is two dimensional, a trial structure is assumed and all trials are plotted using the same style (default or defined via kwargs). Not implemented yet: Providing a ts with its own custom axis
- kwargs – Keyword arguments passed to matplotlib.axes.Axes.plot. Use to customise the plots.
Example
bp = mre.simulate_branching(numtrials=10)
tsout = mre.OutputHandler()
tsout.add_ts(bp, alpha=0.1, label='Trials')
tsout.add_ts(np.mean(bp, axis=0), label='Mean')
plt.show()
- save(fname='', ftype='pdf', dpi=300)[source]¶
Saves plots (the ax element of this handler) and the source it was created from to the specified location.
Parameters: fname (str, optional) – Path where to save, without file extension. Defaults to “./mre”
- save_meta(fname='')[source]¶
Saves only the details/source used to create the plot. It is recommended to call this manually if you decide to save the plots yourself or when you want only the fit results.
Parameters: fname (str, optional) – Path where to save, without file extension. Defaults to “./mre”
- save_plot(fname='', ftype='pdf', dpi=300)[source]¶
Only saves plots (ignoring the source) to the specified location.
Parameters: fname (str, optional) – Path where to save, without file extension. Defaults to "./mre"
- set_xdata(data=None, dt=1, dtunit=None)[source]¶
Adjust the xdata of the plot, matching the input value. Returns an array of indices matching the incoming indices to already present ones. Automatically called when adding content.
If you want to customize the plot range, add all the content and use matplotlib's set_xlim function once at the end. (set_xdata() also manages meta data and can only increase the plot range.)
Parameters: - data (array) – x-values to plot the fits for. data does not need to be spaced equally but is assumed to be sorted.
- dt (float) – Check if existing data can be mapped to the new, provided dt or the other way around. set_xdata() pads undefined areas with nan.
- dtunit (str) – Check if the new dtunit matches the one set previously. Any padding to match dt is only done if dtunits are the same; otherwise the plot falls back to using generic integer steps.
Returns: array – containing the indices where the data given to this function coincides with (possibly) already existing data that was added/plotted before.
Example
out = mre.OutputHandler()

# 100 intervals of 2ms
out.set_xdata(np.arange(0, 100), dt=2, dtunit='ms')

# increase resolution to 1ms for the first 50ms
# this changes the existing structure in the meta data. also
# the axis of `out` is not equally spaced anymore
fiftyms = np.arange(0, 50)
out.set_xdata(fiftyms, dt=1, dtunit='ms')

# data with larger intervals is less dense, the returned list
# tells you which index in `out` belongs to every index
# in `xdat`
xdat = np.arange(0, 50)
ydat = np.random.random_sample(50)
inds = out.set_xdata(xdat, dt=4, dtunit='ms')

# to pad `ydat` to match the axis of `out`:
temp = np.full(out.xdata.size, np.nan)
temp[inds] = ydat
Changelog¶
v0.1.6 (23.04.2020)¶
- Changed: Now under BSD 3-Clause License
- Changed: When the data has more than one trial, we now require the user to choose which coefficient method to use (ts or sm) in mre.coefficients() and mre.full_analysis(). We showed that the resulting time scale can differ severely between the two methods. If unsure, compare results from both. We explain the difference in the paper and print some recommendations from the toolbox.
- Changed: Due to the above, method is now the second positional argument (this might break scripts that gave steps, dt, or dtunit as positional arguments). Call mre.coefficients(data, 'ts') or, as before, via keyword: mre.coefficients(data, method='ts')
- Fixed: Typo that caused full_analysis() to crash when calling the consistency check.
- Fixed: Workaround to prevent a memory leak when calling full_analysis() repeatedly. Always set showoverview=False when using full_analysis() in for loops. We now temporarily set matplotlib.rcParams['interactive'] = showoverview to avoid opening a new figure every time. This should make the panel and showoverview argument feel more consistent. The same workaround can be used in your custom scripts when using the OutputHandler (that also opens figures): nest the loop inside a with matplotlib.rc_context(rc={'interactive': False}): block (or adjust your rc parameters) to avoid figures.
- Fixed: Various small bugs
- New: coefficients has a new keyword argument knownmean to provide a known mean activity. If provided, it will be used as the expectation value of the activity instead of calculating the mean as an approximation (in both the stationarymean and trialseparated methods). This allows for custom estimates but, for instance, m>1 will not be detectable as the covariance cannot diverge when the same (time independent) expectation value is used for <a_{t}> and <a_{t+k}>. As one example, knownmean=0 restrains the fitted line (with slope r_k) to go through the origin (0,0). See Zierenberg et al., in press.
v0.1.5 (24.09.2019)¶
- Changed: One-file spaghetti code was separated into submodules.
- Fixed: stationarymean method for coefficients should work for m>1 (Note that this is a non-standard case. A detailed discussion will follow.)
- New: Optional Numba dependency to parallelize and precompile the computation of the correlation coefficients. To install numba along with mrestimator: pip install -U mrestimator[numba]
- New: Uploading pre-release versions to pypi. To switch, run pip install -U --pre mrestimator[full] and to go back to stable: pip install mrestimator==0.1.5.
- New: Basic unit tests. python -m unittest mrestimator.test_suite
v0.1.4 (05.02.2019)¶
- Changed: full_analysis() argument fitfunctions renamed to fitfuncs to be consistent with fit() and coefficients()
- Changed: full_analysis() was rewritten and now only has three required arguments: data, dt and kmax, where kmax can be substituted by steps or tmax.
- Changed: Concerning the seed argument of various functions: all functions take either seed=None (no reseeding), seed='random' (reseeding to a random value, causing irreproducible results) or a fixed value seed=int(yourseed). Per default, the analysis functions - full_analysis(), fit() and coefficients() - produce the same results each call by seeding to a fixed value. (Only confidence intervals are affected by seeding.) Per default, simulate_branching() and simulate_subsampling() seed to random.
- Fixed: When calling the branching process with subp and providing a seed, the subsampling no longer reseeds the rng device. (Hence every call produces the same outcome, as expected.)
- Fixed: simulate_subsampling() now returns np arrays of correct dimensions
- New: full_analysis() now shows a warning in the overview panel if consistency checks fail (so far only one).
- New: Version number is printed into the overview panel of full_analysis() and into saved meta data
v0.1.3 (16.01.2019)¶
This is a bugfix version in preparation for the wrapper rewrite in 0.1.4.
- Changed: If no steps are provided to coefficients(), the default maxstep is (for now) 1/10 of the trial length. (Was hard coded to 1500 before)
- Changed: Default logs are less verbose to be clearer. The new function mre._enable_detailed_logging() enables fully detailed output to console and logfile. This also calls the two new switches, see next point. mre._enable_detailed_logging() also enables console display of runtime warnings that are usually only printed into the log.
- Fixed: Crash due to logfiles. If the toolbox was used by more than one user on one machine, the logfile created in the temporary directory could not be overwritten by other users. We now try to set file permissions of the logfile and target directory to 777 if they are not subfolders of the user folder. Also, per default, each user gets their own directory /tmp/mre_username. The logfile handler is now rotating and creates a maximum of 10 logfiles, 50mb each.
- Fixed: full_analysis() no longer crashes with substracttrialaverage=True when the provided input is of integer type.
- Fixed: fit() now returns a (mostly empty) FitResult when no fit converged instead of raising an exception. Helps with scripts that run multiple fits. The returned FitResult works with the OutputHandler in default settings and a note about the failed fit is added to the description and meta data.
- Fixed: Calling coefficients() with custom steps, e.g. steps=np.arange(0,100,5), is more robust and does not crash due to steps < 1. Incorrect entries are replaced.
- Fixed: OutputHandler now has a destructor that closes the matplotlib figure if it was not provided as an argument. Hence, opening many handlers (e.g. by reassigning a variable in a loop, o = mre.OutputHandler()) does not keep the figure after reusing the variable. This used to cause a warning: More than 20 figures have been opened.
- New: Enable logging of function arguments to console and logfile with mre._log_locals = True. Enable logging of stack traces to logfile via mre._log_trace = True. (Avoiding the console printout of stack traces on exceptions is not feasible at the moment.) Per default, both options are False.
v0.1.2 (27.11.2018)¶
- Changed: coefficients() with the trialseparated method calculates rk differently (now strictly linear regression). This should enable m>1 estimates.
- Changed: builtin fitfunctions now use absolute values of the amplitude of the exponential term
- Changed: fits drawn above data (again), otherwise they get hidden if data is noisy
- Fixed: The maximum step k in coefficients can no longer exceed half the trial length. This could lead to strong fluctuations in r_k and fits would fail
- Fixed: Crashes when providing custom fitfunctions to fit() due to unhandled request of default parameters
- New: Rasterization of plots in the OutputHandler. Especially timeseries grow large quickly. Now, if OutputHandlers create their own figures/axes elements (ax argument not given on construction), all elements with zorder<0 are rasterized. Per default, add_ts() uses a zorder of -1 but add_coefficients() and add_fit() have values above one so they stay vectorized. Call ax.set_rasterization_zorder(0) on your custom ax axes element if you want the same effect on customized figures.
- New: export as png option for OutputHandler.save_plot()
v0.1.1 (01.11.2018)¶
- Changed: We reworked the structure of CoefficientResult to be more consistent. It is now completely self-similar, where each child entry has exactly the same structure as the parent. The new attributes trialcrs and bootstrapcrs replace samples. Both are now lists containing again CoefficientResults; any (previously multidimensional) ndarrays are now 1d.
- Changed: Per default, full_analysis() initialises the random number generator (used for bootstrapping) once per call and passes None to the seed arguments of lower functions so they do not reseed. We introduced the convention that seed=None tells a function to use the current state of the rng without seeding. (Added an auto option for seeding where needed)
- Changed: All prints now use the logging module. Hopefully nothing broke :P.
- Changed: Default log level to console is now 'INFO', and some logs that one could consider info go to 'DEBUG' to decrease the spam. Default loglevel to file is 'DEBUG' (logfile placed in the default temporary directory, which is also printed when loading the toolbox).
- Changed: When providing no loglevel to full_analysis() it uses the currently set level of mre._logstreamhandler.
- Fixed: When calling full_analysis() with one trial, a running average is shown instead of an empty plot.
- New: Added quantiles (and standard errors) to fit results when bootstrapping. The new default option numboot='auto' calculates 250 bootstrap samples for the exponential and exp+offset fit functions (which are decently fast) and skips error estimation for the builtin complex (and custom) fits.
- New: Added function set_logfile(fname, loglevel='DEBUG') to change the path of the global logfile + level. This should allow running the toolbox in parallel, with a separate logfile per thread and relatively silent/no console output when combined with mre._logstreamhandler.setLevel('ERROR') or calling full_analysis(..., loglevel='ERROR')
- New: An undocumented way to change the respective loglevels is e.g. mre._logstreamhandler.setLevel('WARNING') for console and mre._logfilehandler.setLevel('DEBUG') for file
- New: Added a custom handler class that does not log 'None Type' traces if log.exception() is called without a try statement
v0.1.0 (11.10.2018)¶
- Changed: The OutputHandler's set_xdata() now adjusts existing data and is slightly smarter. It now returns an array containing the indices where the x-axis value is right for the provided data (wrt the existing context). See the example in the documentation.
- Changed: When calling an OutputHandler's add_coefficients() or add_ts(), the meta data and plot range will be extended using set_xdata. Trying to add duplicates only changes their style to the newly provided values (without adding meta).
- Changed: The parameters of simulate_branching() are different. activity is now a, m is no longer optional and it is possible to set a (time dependent) drive using h.
- Fixed: Calling fit() with only one trial does not crash anymore due to missing uncertainties
- Fixed: Calling fit() without specifying steps now uses the range used in coefficients().
- New: added full_analysis(), the wrapper function to chain individual tasks together.
- New: added simulate_subsampling()
- New: When adding time series in trial structure with more than one trial to the OutputHandler via add_ts(), they are drawn slightly transparent by default. Setting alpha overwrites this. add_ts does not use the new set_xdata() yet.
- New: Version bump so we have the last digit for bugfixes :)
- New: Mr. Estimator came up with his logo.
v0.0.3 (19.09.2018)¶
- Changed: Check for old numpy versions in fit()
- Changed: Per default, fits are drawn solid (dashed) over the fitted (remaining) range
- Fixed: Typos
(14.09.2018)¶
- New: CoefficientResult constructor now has some default arguments. Still required: steps and coefficients. Also added the dt, dtunit attributes.
- New: FitResult constructor now has some default arguments. Still required: tau, mre, fitfunc. Also added the dt, dtunit, steps attributes.
- New: fit() takes the argument steps=(minstep, maxstep) to specify a custom fitrange. OutputHandler plots the fitted range opaque (the excluded range has less alpha).
- Changed: dt is no longer an argument for fit(). Setting dt (the step size) and its unit dtunit is done via the equally named parameters of coefficients(). It is added to the CoefficientResult, so fit and the OutputHandler can rely on it.
(13.09.2018)¶
- Renamed: module from mre to mrestimator, use import mrestimator as mre
- Renamed: correlation_coefficients() to coefficients()
- Renamed: correlation_fit() to fit()
- Renamed: CorrelationFitResult to FitResult