Reducing duplication and redundancy in declarative model specifications
Robert Cannon (Textensor Limited), Padraig Gleeson (University College London), Sharon Crook (Arizona State University), Angus Silver (University College London)
understanding that modelers have of the systems they work on. Supporting this style of description allows models to be expressed more concisely than with flatter structures, since it avoids repeating common elements. It also enables a wide range of model-related processing beyond simply executing models, including systematic comparison, archiving, model composition, and sensitivity analysis.
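To illustrate how avoiding repeated common elements shortens a specification, the following is a minimal, hypothetical sketch in Python (the names `ComponentType` and `Component` are illustrative only, not the actual NeuroML or LEMS schema): the governing equation is stated once in a parameterised type, and each instance supplies only its own parameter values.

```python
from dataclasses import dataclass

# Hypothetical sketch: a parameterised "component type" holds the shared
# structure, so instances never duplicate the equations.
@dataclass(frozen=True)
class ComponentType:
    name: str
    parameters: tuple   # parameter names declared by the type
    dynamics: str       # governing equation, stated exactly once

@dataclass(frozen=True)
class Component:
    type: ComponentType
    values: dict        # only the parameter values differ per instance

# The exponential-decay dynamics are written a single time ...
decay = ComponentType("decay", ("tau",), "dx/dt = -x / tau")

# ... and reused by every element that needs them, with different parameters.
fast = Component(decay, {"tau": 1.0})
slow = Component(decay, {"tau": 20.0})

assert fast.type.dynamics == slow.type.dynamics  # nothing repeated
```

A flat specification would instead copy the `dx/dt = -x / tau` expression into every element, which is exactly the redundancy this style of description removes.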
The resulting system, known as LEMS (Low Entropy Model Specification), has been developed to meet the needs of NeuroML but can also be applied to other declarative model specifications. It makes it possible to retrofit most of the existing high-level concepts in NeuroML version 1 with domain-independent definitions built out of the LEMS elements. This preserves the benefits of the existing semantically rich high-level concepts while adding the ability to extend the set of concepts at the user level rather than requiring changes to the language itself. Simulator developers have a choice between directly supporting the library of core types in NeuroML or supporting the underlying LEMS definitions. Libraries are available in both Java and Python to allow simulator developers to work with LEMS models without having to implement the low-level support themselves. These include options for flattening models into sets of simultaneous differential equations, and for code generation to create standalone executable units.
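The flattening step mentioned above can be sketched as follows. This is a hypothetical Python illustration, not the API of the actual Java or Python LEMS libraries: each component contributes its state variables and rate expressions to one combined system of simultaneous ODEs, which a simulator can then integrate directly (here with a simple forward-Euler step).

```python
# Hypothetical sketch of "flattening": gather the state variables of every
# component in a model into one system of simultaneous ODEs.
# The names flatten, euler_step, and the model layout are illustrative.

def flatten(components):
    """Return (names, rate_fns) describing one combined ODE system."""
    names, rates = [], []
    for comp in components:
        for var, rate in comp["dynamics"].items():
            names.append(f'{comp["id"]}.{var}')
            p = comp["params"]
            # Bind this component's rate expression and parameters.
            rates.append(lambda x, rate=rate, p=p: rate(x, p))
    return names, rates

def euler_step(state, rates, dt):
    """Advance every state variable by one forward-Euler step."""
    return [x + dt * f(x) for x, f in zip(state, rates)]

# Two components sharing the same dynamics, dx/dt = -x / tau.
model = [
    {"id": "fast", "params": {"tau": 1.0},
     "dynamics": {"x": lambda x, p: -x / p["tau"]}},
    {"id": "slow", "params": {"tau": 20.0},
     "dynamics": {"x": lambda x, p: -x / p["tau"]}},
]

names, rates = flatten(model)
state = [1.0, 1.0]
for _ in range(100):
    state = euler_step(state, rates, 0.001)

# The component with the smaller time constant decays faster.
assert state[0] < state[1]
```

Code generation follows the same idea: instead of interpreting the flattened equations at runtime, the combined system is emitted as source code for a standalone executable unit.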