Design Evaluation

There are several metrics available for evaluating whether a design will be appropriate for your analysis.

Statistical Power

A power calculation shows how likely your design is to detect an effect of a given size.

f_power(model, design, effect_size, alpha)

Calculates the power of an F test.

This calculates the probability that the F-statistic will exceed its critical value at significance level alpha, given an effect of the specified size.

Parameters:
  • model (patsy.formula) – A patsy formula for which to calculate power.
  • design (pandas.DataFrame) – A pandas.DataFrame representing a design.
  • effect_size (float) – The size of the effect that the test should be able to detect (also called a signal to noise ratio).
  • alpha (float between 0 and 1) – The significance level of the test, i.e. the acceptable probability of a type I error (rejecting a true null hypothesis).
Returns:

A list giving, for each column in the model, the percent probability that an F-test will detect an effect of the given size at the given alpha value.

Usage:
>>> design = dexpy.factorial.build_factorial(4, 8)
>>> print(dexpy.power.f_power("1 + A + B + C + D", design, 2.0, 0.05))
[ 95.016, 49.003, 49.003, 49.003, 49.003 ]
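The underlying calculation can be sketched with scipy's noncentral F distribution. The function name, the assumption that each coefficient is tested with one numerator degree of freedom, and the noncentrality form ncp = n · (effect_size / 2)² are illustrative assumptions about the usual signal-to-noise formulation, not dexpy's exact implementation:

```python
# Hypothetical sketch of a per-column F-test power calculation.
from scipy.stats import f, ncf

def column_power(n_runs, n_params, effect_size, alpha):
    dfn = 1                      # one degree of freedom per model coefficient
    dfd = n_runs - n_params      # residual degrees of freedom
    crit = f.ppf(1 - alpha, dfn, dfd)          # critical F value at alpha
    ncp = n_runs * (effect_size / 2.0) ** 2    # noncentrality (assumed form)
    return 1.0 - ncf.cdf(crit, dfn, dfd, ncp)  # P(F > crit | effect present)

# 8-run factorial, 5 model terms (intercept + A..D), signal/noise = 2:
print(round(column_power(8, 5, 2.0, 0.05), 3))
```

With these assumptions the factor columns come out near 0.49, consistent with the 49.003% shown above.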

Alias List

alias_list(model, design)

Returns a human-readable list of aliased (linearly dependent) model columns.

This is done by solving AX = B, where X is the full-rank model matrix and B contains all of the columns of the model matrix. The result is a matrix of coefficients indicating to what degree each column is collinear with another column.

Usage:
>>> design = dexpy.factorial.build_factorial(4, 8)
>>> aliases, alias_coefs = dexpy.alias.alias_list("(A+B+C+D)**2", design)
>>> print(aliases)
['A:B = C:D', 'A:C = B:D', 'A:D = B:C']
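The AX = B computation can be sketched with NumPy. This assumes the 8-run design is the standard 2^(4-1) half fraction with generator D = ABC (an assumption about build_factorial's output made for illustration):

```python
import itertools
import numpy as np

# Reconstruct a 2^(4-1) fractional factorial (8 runs), generator D = ABC.
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
D = A * B * C  # generator column

# Full-rank model matrix X: intercept, main effects, and one member of
# each aliased two-factor interaction pair (A:B, A:C, A:D).
X = np.column_stack([np.ones(8), A, B, C, D, A * B, A * C, A * D])

# Aliased columns B (here named Z): the other member of each pair.
Z = np.column_stack([C * D, B * D, B * C])

# Solve X @ coefs = Z by least squares; each nonzero coefficient shows
# which full-rank column an aliased column is collinear with.
coefs, *_ = np.linalg.lstsq(X, Z, rcond=None)
print(np.round(coefs, 6))
```

Each column of `coefs` has a single 1 in the row of its alias partner (C:D on A:B, B:D on A:C, B:C on A:D), which is exactly the pairing printed by `alias_list` above.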