hxtorch.spiking.Experiment

class hxtorch.spiking.Experiment(mock: bool = False, dt: float = 1e-06, hw_routing_func=<_pygrenade_vx_network_routing.PortfolioRouter object>)

Bases: hxtorch.spiking.experiment.BaseExperiment

Experiment class for describing experiments on hardware

__init__(mock: bool = False, dt: float = 1e-06, hw_routing_func=<_pygrenade_vx_network_routing.PortfolioRouter object>) → None

Instantiate a new experiment, representing an experiment on hardware and/or in software.

Parameters
  • mock – If True, the experiment is simulated in software; if False, it is executed on hardware.

  • input_loopback – Record input spikes and use them for gradient calculation. Depending on link congestion, this may or may not be beneficial for the calculated gradient’s precision.
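
For example, a minimal instantiation in mock mode (a sketch; dt is assumed to be the simulation time step in seconds, matching the 1e-06 default):

   import hxtorch.spiking as hxsnn

   # Simulate in software (mock=True) instead of running on hardware.
   exp = hxsnn.Experiment(mock=True, dt=1e-6)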

Methods

__init__([mock, dt, hw_routing_func])

Instantiate a new experiment, representing an experiment on hardware and/or in software.

clear()

Reset the experiment’s state.

connect(module, input_handles, output_handle)

Add a module to the experiment and connect it to other experiment modules via input and output handles.

get_hw_results(runtime)

Executes the experiment in mock mode or on hardware using the information added to the experiment for a time given by runtime and returns a dict of hardware data represented as PyTorch data types.

register_population(module)

Register a module as population.

register_projection(module)

Register a module as projection.

wrap_modules(modules[, func])

Wrap a number of given modules into a wrapper to which a single function func can be assigned. In the PyTorch graph the individual module functions are then bypassed and only the wrapper’s function is considered when building the PyTorch graph. This functionality is of interest if several modules have cyclic dependencies and need to be represented by one PyTorch function.

Attributes

default_execution_instance

Getter for the default ExecutionInstance object.

last_run_chip_configs

Getter for the chip configs used in the last run.

clear() → None

Reset the experiment’s state. Corresponds to creating a new Experiment instance.
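
For example, to reuse a single instance across independent runs (a sketch in mock mode):

   import hxtorch.spiking as hxsnn

   exp = hxsnn.Experiment(mock=True)
   # ... build a network, run it, read out results ...
   exp.clear()  # same effect as constructing a fresh Experiment
   # ... build and run the next, independent experiment ...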

connect(module: torch.nn.modules.module.Module, input_handles: Tuple[hxtorch.spiking.handle.TensorHandle], output_handle: hxtorch.spiking.handle.TensorHandle)

Add a module to the experiment and connect it to other experiment modules via input and output handles.

Parameters
  • module – The HXModule to add to the experiment.

  • input_handles – The TensorHandles serving as input to the module (its obsv_state).

  • output_handle – The TensorHandle outputted by the module, serving as input to subsequent HXModules.
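
In typical use connect is not called directly: applying a module to a handle in the forward pass registers the connection with the experiment. A rough sketch, assuming the Synapse, Neuron and NeuronHandle APIs from hxtorch.spiking and an input shape of (time steps, batch, neurons):

   import torch
   import hxtorch.spiking as hxsnn

   exp = hxsnn.Experiment(mock=True)
   syn = hxsnn.Synapse(10, 20, experiment=exp)
   nrn = hxsnn.Neuron(20, experiment=exp)

   spikes_in = hxsnn.NeuronHandle(torch.zeros(100, 1, 10))

   # Each module call registers itself via
   #   exp.connect(module, input_handles, output_handle)
   # and returns the handle consumed by the next module.
   currents = syn(spikes_in)
   spikes_out = nrn(currents)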

property default_execution_instance

Getter for the default ExecutionInstance object. All modules that have the same Experiment instance assigned and do not hold an explicit ExecutionInstance are assigned to this default execution instance.

Returns

The default execution instance
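
For illustration (a sketch; mock mode, so no hardware is required):

   import hxtorch.spiking as hxsnn

   exp = hxsnn.Experiment(mock=True)
   # Modules constructed with experiment=exp and no explicit
   # execution instance are all grouped under this default.
   print(exp.default_execution_instance)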

get_hw_results(runtime: Optional[int]) → Dict[_pygrenade_vx_network.PopulationOnNetwork, Tuple[Optional[torch.Tensor], ...]]

Executes the experiment in mock mode or on hardware using the information added to the experiment for a time given by runtime and returns a dict of hardware data represented as PyTorch data types.

Parameters

runtime – The runtime of the experiment on hardware in ms.

Returns

Returns the data map as dict, where the keys are the population descriptors and values are tuples of values returned by the corresponding module’s post_process method.
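
A sketch of consuming the returned data map (continuing an experiment exp that already holds a network; runtime in ms as documented above):

   data_map = exp.get_hw_results(runtime=100)
   for descriptor, tensors in data_map.items():
       # descriptor: grenade population descriptor (the dict key)
       # tensors: outputs of the corresponding module's post_process
       shapes = [t.shape for t in tensors if t is not None]
       print(descriptor, shapes)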

property last_run_chip_configs

Getter for the chip configs used in the last run.

register_population(module: hxtorch.spiking.modules.hx_module.HXModule) → None

Register a module as population.

Parameters

module – The module to register as population.

register_projection(module: hxtorch.spiking.modules.hx_module.HXModule) → None

Register a module as projection.

Parameters

module – The module to register as projection.

wrap_modules(modules: List[hxtorch.spiking.modules.hx_module.HXModule], func: Optional[Callable] = None)

Wrap a number of given modules into a wrapper to which a single function func can be assigned. In the PyTorch graph the individual module functions are then bypassed and only the wrapper’s function is considered when building the PyTorch graph. This functionality is of interest if several modules have cyclic dependencies and need to be represented by one PyTorch function.

Parameters
  • modules – A list of modules to be wrapped. These modules need to constitute a closed sub-graph with no modules in between that are not part of the wrapper.

  • func – The function to assign to the wrapper. TODO: Add info about this function’s signature.
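
As a sketch (the Synapse and Neuron constructors are assumptions; func is left at its None default since its expected signature is not documented above):

   import hxtorch.spiking as hxsnn

   exp = hxsnn.Experiment(mock=True)
   syn = hxsnn.Synapse(10, 20, experiment=exp)
   nrn = hxsnn.Neuron(20, experiment=exp)

   # Treat both modules as one node in the PyTorch graph, e.g. when
   # they form a cyclic dependency that cannot be expressed as two
   # separate PyTorch functions.
   exp.wrap_modules([syn, nrn])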