
The state of MIIND

de Kamps, M., Baier, V., Drever, J., Dietz, M., Mosenlechner, L. and van der Velde, F. (2008) The state of MIIND. Neural Networks, 21 (8). pp. 1164-1181. ISSN 0893-6080

Full text available as:
dekamps1.pdf (2632 KB). Available under licence: see the attached licence file.
Abstract

MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: the Wilson–Cowan and Ornstein–Uhlenbeck types, and population density techniques for leaky-integrate-and-fire neurons driven by Poisson input. A design principle of MIIND is to support detailing: the refinement of an originally simple model into a form where more biological detail is included. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically-inspired models of artificial vision, such as HMAX and recent successors. In the long run we hope to be able to apply suitably adapted neuronal mechanisms of attention to these artificial models.
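The Wilson–Cowan network processes mentioned above describe coupled excitatory and inhibitory population firing rates evolving under a sigmoidal gain function. As a rough illustration of the kind of dynamics DynamicLib simulates (this is not MIIND's API; all weights, time constants and inputs are made-up values), the C++ sketch below integrates a single excitatory–inhibitory Wilson–Cowan pair with forward Euler:

// Minimal sketch of Wilson-Cowan rate dynamics (forward Euler).
// Illustrative only; parameter names and values are not taken from MIIND or the paper.
#include <cmath>
#include <cstdio>

// Sigmoidal population response function.
double f(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main() {
    // Hypothetical coupling weights, time constants (ms) and external drive.
    const double w_ee = 16.0, w_ei = 12.0, w_ie = 15.0, w_ii = 3.0;
    const double tau_e = 10.0, tau_i = 10.0;
    const double input_e = 1.5, input_i = 0.0;
    const double dt = 0.1; // integration step (ms)

    double E = 0.1, I = 0.05; // excitatory and inhibitory population rates

    for (int step = 0; step < 1000; ++step) {
        double dE = (-E + f(w_ee * E - w_ei * I + input_e)) / tau_e;
        double dI = (-I + f(w_ie * E - w_ii * I + input_i)) / tau_i;
        E += dt * dE;
        I += dt * dI;
    }
    std::printf("E = %f, I = %f\n", E, I);
    return 0;
}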

Item Type: Article
Copyright, Publisher and Additional Information: © 2008 Elsevier B.V. This is an author produced version of a paper published in Neural Networks. Uploaded in accordance with the publisher's self archiving policy.
Institution: The University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering (Leeds) > School of Computing (Leeds)
Depositing User: Sherpa Assistant
Date Deposited: 08 Jan 2009 11:25
Last Modified: 08 Feb 2013 17:05
Published Version: http://dx.doi.org/10.1016/j.neunet.2008.07.006
Status: Published
Publisher: Elsevier B.V.
Refereed: Yes
Identification Number: 10.1016/j.neunet.2008.07.006
URI: http://eprints.whiterose.ac.uk/id/eprint/5235
