Configuration¶
The configuration is loaded upon import from a YAML file named pyphi_config.yml in the directory where PyPhi is run. If no file is found, the default configuration is used.
The various options are listed here with their defaults:
>>> import pyphi
>>> defaults = pyphi.config.DEFAULTS
It is also possible to manually load a YAML configuration file within your script:
>>> pyphi.config.load_config_file('pyphi_config.yml')
Or load a dictionary of configuration values:
>>> pyphi.config.load_config_dict({'SOME_CONFIG': 'value'})
Theoretical approximations¶
This section deals with assumptions that speed up computation at the cost of theoretical accuracy.
pyphi.config.ASSUME_CUTS_CANNOT_CREATE_NEW_CONCEPTS
: In certain cases, making a cut can actually cause a previously reducible concept to become a proper, irreducible concept. Assuming this can never happen can increase performance significantly; however, the obtained results are not strictly accurate.
>>> defaults['ASSUME_CUTS_CANNOT_CREATE_NEW_CONCEPTS']
False
pyphi.config.L1_DISTANCE_APPROXIMATION
: If enabled, the L1 distance will be used instead of the EMD when computing MIPs. If a mechanism and purview are found to be irreducible, the \(\varphi\) value of the MIP is recalculated using the EMD.
>>> defaults['L1_DISTANCE_APPROXIMATION']
False
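To make the approximation concrete, here is a minimal sketch of the L1 distance between two probability distributions, in plain Python. This is illustrative only, not PyPhi's internal implementation; its appeal is that it is a simple elementwise sum, far cheaper than the flow optimization required for the EMD.

```python
# Illustrative sketch (not PyPhi internals): the L1 distance between
# two probability distributions is a cheap elementwise computation,
# which is why it can serve as a fast stand-in for the EMD.
def l1_distance(p, q):
    """Sum of absolute differences between two distributions."""
    return sum(abs(a - b) for a, b in zip(p, q))

p = [0.5, 0.5, 0.0]
q = [0.25, 0.25, 0.5]
print(l1_distance(p, q))  # 1.0
```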
System resources¶
These settings control how much processing power and memory are available for PyPhi to use. The default values may not be appropriate for your use case or machine, so please check these settings before running anything. Otherwise, there is a risk that simulations might crash (potentially after running for a long time!), resulting in data loss.
pyphi.config.PARALLEL_CONCEPT_EVALUATION
: Control whether concepts are evaluated in parallel when computing constellations.
>>> defaults['PARALLEL_CONCEPT_EVALUATION']
False
pyphi.config.PARALLEL_CUT_EVALUATION
: Control whether system cuts are evaluated in parallel, which requires more memory. If cuts are evaluated sequentially, only two BigMip instances need to be in memory at once.
>>> defaults['PARALLEL_CUT_EVALUATION']
True
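Conceptually, cut evaluation is a map over candidate cuts followed by taking the minimum \(\varphi\). The following toy sketch shows that shape using the standard library; the cut names and \(\varphi\) values are made up, and this is not PyPhi code.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of parallel cut evaluation: evaluate each candidate cut
# (here just a made-up phi value per cut) and keep the minimum.
def evaluate_cut(cut):
    """Stand-in for the expensive phi computation for one cut."""
    phi_values = {'cut_a': 0.5, 'cut_b': 0.25, 'cut_c': 1.0}
    return phi_values[cut], cut

cuts = ['cut_a', 'cut_b', 'cut_c']
with ThreadPoolExecutor() as pool:
    results = list(pool.map(evaluate_cut, cuts))

min_phi, mip_cut = min(results)
print(mip_cut)  # cut_b, the minimum-information partition
```

When evaluated sequentially instead, only the running minimum and the current candidate need to be held in memory at once, which is the trade-off the option describes.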
Warning
PARALLEL_CONCEPT_EVALUATION and PARALLEL_CUT_EVALUATION should not both be set to True. Enabling both parallelization modes will slow down computations. If you are doing \(\Phi\)-computations (with big_mip, main_complex, etc.), PARALLEL_CUT_EVALUATION will be fastest. Use PARALLEL_CONCEPT_EVALUATION if you are only computing constellations.
pyphi.config.NUMBER_OF_CORES
: Control the number of CPU cores used to evaluate unidirectional cuts. Negative numbers count backward from the total number of available cores, with -1 meaning “use all available cores.”
>>> defaults['NUMBER_OF_CORES']
-1
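One plausible way to resolve a negative core count to a positive one is sketched below. This is an illustrative interpretation of the "count backward" rule, not PyPhi's actual implementation; the function name is made up.

```python
import os

def resolve_core_count(number_of_cores):
    """Resolve a NUMBER_OF_CORES-style setting to a positive core count.

    Illustrative sketch: negative values count backward from the total,
    so on an 8-core machine -1 -> 8, -2 -> 7, and so on.
    """
    total = os.cpu_count()
    if number_of_cores < 0:
        return total + number_of_cores + 1
    return number_of_cores

print(resolve_core_count(-1))  # the total number of available cores
```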
pyphi.config.MAXIMUM_CACHE_MEMORY_PERCENTAGE
: PyPhi employs several in-memory caches to speed up computation. However, these can quickly use a lot of memory for large networks or large numbers of them; to avoid thrashing, this option limits the percentage of a system’s RAM that the caches can collectively use.
>>> defaults['MAXIMUM_CACHE_MEMORY_PERCENTAGE']
50
Caching¶
PyPhi is equipped with a transparent caching system for BigMip and Concept objects, which stores them as they are computed to avoid having to recompute them later. This makes it easy to play around interactively with the program, or to accumulate results with minimal effort. For larger projects, however, it is recommended that you manage the results explicitly, rather than relying on the cache. For this reason it is disabled by default.
pyphi.config.CACHE_BIGMIPS
: Control whether BigMip objects are cached and automatically retrieved.
>>> defaults['CACHE_BIGMIPS']
False
pyphi.config.CACHE_POTENTIAL_PURVIEWS
: Controls whether the potential purviews of mechanisms of a network are cached. Caching speeds up computations by not recomputing expensive reducibility checks, but uses additional memory.
>>> defaults['CACHE_POTENTIAL_PURVIEWS']
True
pyphi.config.CACHING_BACKEND
: Control whether precomputed results are stored and read from a database or from a local filesystem-based cache in the current directory. Set this to 'fs' for the filesystem, 'db' for the database. Caching results on the filesystem is the easiest to use but least robust caching system. Caching results in a database is more robust and allows for caching individual concepts, but requires installing MongoDB.
>>> defaults['CACHING_BACKEND']
'fs'
pyphi.config.FS_CACHE_VERBOSITY
: Control how much caching information is printed. Takes a value between 0 and 11. Note that printing during a loop iteration can slow down the loop considerably.
>>> defaults['FS_CACHE_VERBOSITY']
0
pyphi.config.FS_CACHE_DIRECTORY
: If the caching backend is set to use the filesystem, the cache will be stored in this directory. This directory can be copied and moved around if you want to reuse results, e.g. on another computer, but it must be in the directory from which PyPhi is run.
>>> defaults['FS_CACHE_DIRECTORY']
'__pyphi_cache__'
pyphi.config.MONGODB_CONFIG
: Set the configuration for the MongoDB database backend. This only has an effect if the caching backend is set to use the database.
>>> defaults['MONGODB_CONFIG']['host']
'localhost'
>>> defaults['MONGODB_CONFIG']['port']
27017
>>> defaults['MONGODB_CONFIG']['database_name']
'pyphi'
>>> defaults['MONGODB_CONFIG']['collection_name']
'cache'
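For reference, the host and port fields of such a config dict are what a MongoDB client assembles into a connection URI. The sketch below is illustrative only; PyPhi manages the connection itself.

```python
# Illustrative only: given a MONGODB_CONFIG-style dict, this is the
# kind of connection URI a MongoDB client would build from it.
mongodb_config = {
    'host': 'localhost',
    'port': 27017,
    'database_name': 'pyphi',
    'collection_name': 'cache',
}

uri = 'mongodb://{host}:{port}'.format(**mongodb_config)
print(uri)  # mongodb://localhost:27017
```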
pyphi.config.REDIS_CACHE
: Specifies whether to use Redis to cache Mice.
>>> defaults['REDIS_CACHE']
False
pyphi.config.REDIS_CONFIG
: Configure the Redis database backend. These are the defaults in the provided redis.conf file.
>>> defaults['REDIS_CONFIG']['host']
'localhost'
>>> defaults['REDIS_CONFIG']['port']
6379
Logging¶
These are the settings for PyPhi logging. You can control the format of the logs and the name of the log file. Logs can be written to standard output, a file, both, or neither. See the documentation on Python’s logging module for more information.
pyphi.config.LOGGING_CONFIG['file']['enabled']
: Control whether logs are written to a file.
>>> defaults['LOGGING_CONFIG']['file']['enabled']
True
pyphi.config.LOGGING_CONFIG['file']['filename']
: Control the name of the log file.
>>> defaults['LOGGING_CONFIG']['file']['filename']
'pyphi.log'
pyphi.config.LOGGING_CONFIG['file']['level']
: Control the log level of file logging. Can be one of 'DEBUG', 'INFO', 'WARNING', 'ERROR', or 'CRITICAL'.
>>> defaults['LOGGING_CONFIG']['file']['level']
'INFO'
pyphi.config.LOGGING_CONFIG['stdout']['enabled']
: Control whether logs are written to standard output.
>>> defaults['LOGGING_CONFIG']['stdout']['enabled']
True
pyphi.config.LOGGING_CONFIG['stdout']['level']
: Control the log level of standard output logging. Same possible values as file logging.
>>> defaults['LOGGING_CONFIG']['stdout']['level']
'WARNING'
pyphi.config.LOG_CONFIG_ON_IMPORT
: Controls whether the current configuration is printed when PyPhi is imported.
>>> defaults['LOG_CONFIG_ON_IMPORT']
True
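These options correspond to Python's standard logging machinery. The following sketch shows roughly how the default file and stdout settings above would look as a stdlib dictConfig; it is an illustration using only the standard library, not PyPhi's actual setup code.

```python
import logging
import logging.config

# Roughly how the LOGGING_CONFIG defaults above map onto Python's
# stdlib logging. Illustrative sketch, not PyPhi's actual setup code.
logging.config.dictConfig({
    'version': 1,
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'pyphi.log',
            'level': 'INFO',
        },
        'stdout': {
            'class': 'logging.StreamHandler',
            'level': 'WARNING',
        },
    },
    'root': {'handlers': ['file', 'stdout'], 'level': 'DEBUG'},
})

# INFO passes the file handler's threshold but not stdout's WARNING.
logging.getLogger(__name__).info('written to pyphi.log, not to stdout')
```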
Numerical precision¶
pyphi.config.PRECISION
: Computations in PyPhi rely on finding the Earth Mover’s Distance. This is done via an external C++ library that uses flow optimization to find a good approximation of the EMD. Consequently, systems with zero \(\Phi\) will sometimes be computed to have a small but non-zero amount. This setting controls the number of decimal places to which PyPhi will consider EMD calculations accurate. Values of \(\Phi\) lower than \(10^{-PRECISION}\) will be considered insignificant and treated as zero. The default value is about as accurate as the EMD computations get.
>>> defaults['PRECISION']
6
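To make the cutoff concrete, here is a plain-Python sketch of treating values below \(10^{-PRECISION}\) as zero. The helper name is made up for illustration; this is not PyPhi code.

```python
# Illustrative sketch: phi values smaller than 10**(-PRECISION) are
# treated as numerical noise from the approximate EMD computation.
PRECISION = 6

def is_zero(phi):
    """True if phi is below the precision cutoff."""
    return abs(phi) < 10 ** (-PRECISION)

print(is_zero(3.2e-8))  # True: below 1e-6, treated as zero
print(is_zero(0.25))    # False: a significant phi value
```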
Miscellaneous¶
pyphi.config.VALIDATE_SUBSYSTEM_STATES
: Control whether PyPhi checks if the subsystem’s state is possible (reachable from some past state), given the subsystem’s TPM (which is conditioned on background conditions). If this is turned off, then calculated \(\Phi\) values may not be valid, since they may be associated with a subsystem that could never be in the given state.
>>> defaults['VALIDATE_SUBSYSTEM_STATES']
True
pyphi.config.SINGLE_NODES_WITH_SELFLOOPS_HAVE_PHI
: If set to True, this defines the \(\Phi\) value of subsystems containing only a single node with a self-loop to be 0.5. If set to False, their \(\Phi\) will actually be computed (to be zero, in this implementation).
>>> defaults['SINGLE_NODES_WITH_SELFLOOPS_HAVE_PHI']
False
pyphi.config.READABLE_REPRS
: If set to True, this causes repr calls on PyPhi objects to return pretty-formatted and legible strings. Although this breaks the convention that __repr__ methods should return a representation which can reconstruct the object, readable representations are convenient, since the Python REPL calls repr to represent all objects in the shell and PyPhi is often used interactively with the REPL. If set to False, repr returns more traditional object representations.
>>> defaults['READABLE_REPRS']
True
- pyphi.config.load_config_dict(config)¶
Load configuration values.
Parameters: config (dict) – The dict of config to load.
- pyphi.config.load_config_file(filename)¶
Load config from a YAML file.
- pyphi.config.load_config_default()¶
Load default config values.
- pyphi.config.get_config_string()¶
Return a string representation of the currently loaded configuration.
- pyphi.config.print_config()¶
Print the current configuration.
- class pyphi.config.override(**new_conf)¶
Decorator and context manager to override config values.
The initial configuration values are reset after the decorated function returns or the context manager exits its block, even if the function or block raises an exception. This is intended for use by test cases that require specific configuration values.
Example
>>> from pyphi import config
>>>
>>> @config.override(PRECISION=20000)
... def test_something():
...     assert config.PRECISION == 20000
...
>>> test_something()
>>> with config.override(PRECISION=100):
...     assert config.PRECISION == 100
...
- __enter__()¶
Save original config values; override with new ones.
- __exit__(*exc)¶
Reset config to initial values; reraise any exceptions.
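The save-and-restore behavior of __enter__ and __exit__ can be sketched in plain Python with the standard library. This is illustrative only, not PyPhi's implementation; the config object here is a stand-in dict.

```python
import contextlib

# Stand-in for the real config module: a plain dict of option values.
config = {'PRECISION': 6, 'CACHE_BIGMIPS': False}

@contextlib.contextmanager
def override(**new_conf):
    """Temporarily override config values, restoring them afterwards.

    Values are restored even if the block raises an exception,
    mirroring the behavior documented for pyphi.config.override.
    """
    saved = {key: config[key] for key in new_conf}
    config.update(new_conf)
    try:
        yield
    finally:
        config.update(saved)

with override(PRECISION=100):
    assert config['PRECISION'] == 100
assert config['PRECISION'] == 6  # restored after the block
```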