Draft:Norse (neuron simulator)
Submission declined on 28 March 2022 by Artem.G (talk).
Comment: Maybe it's too early for the subject to be notable, or more independent reliable sources are needed to show notability. Artem.G, 18:47, 28 March 2022 (UTC)
| Norse | |
|---|---|
| Initial release | December 2019 |
| Repository | https://github.com/norse/norse |
| Written in | Python |
| License | LGPLv3 |
| Website | https://norse.ai |
Norse is a simulator for biological neuron models with an emphasis on gradient-based optimization and integration with neuromorphic hardware and event-based cameras. Norse is developed primarily by researchers at Heidelberg University and KTH Royal Institute of Technology and is publicly available under the LGPLv3 license.
Differences from other simulators
Norse distinguishes itself from the more "classical" branch of simulators, such as Neuron (software), NEST (software), and Brian (software), in that it models neural networks as directed graphs that operate purely on tensors, similar to deep learning libraries like PyTorch and TensorFlow[1]. Such computational graphs lend themselves well to hardware acceleration on CPUs, GPUs, and TPUs. Tensor-based graphs also enable automatic differentiation across neuron discontinuities, such as the spike threshold in spiking neural networks, via gradient approximation methods like SuperSpike[2], which Norse implements to allow the training of arbitrarily deep networks using backpropagation through time.
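The core idea behind surrogate-gradient methods like SuperSpike can be illustrated in a few lines of plain Python. The sketch below is illustrative only, not Norse's actual implementation; the firing threshold of 0 and the steepness parameter beta = 10 are assumed values chosen for the example:

```python
def heaviside(v):
    """Spike nonlinearity: emit a spike when the membrane
    potential v crosses the threshold (assumed here to be 0)."""
    return 1.0 if v >= 0.0 else 0.0


def superspike_surrogate(v, beta=10.0):
    """SuperSpike surrogate derivative: 1 / (1 + beta * |v|)^2.

    The true derivative of the Heaviside step is zero almost
    everywhere, which blocks backpropagation through spiking
    layers; this smooth surrogate peaks at the threshold so
    gradient information can still flow.
    """
    return 1.0 / (1.0 + beta * abs(v)) ** 2
```

The surrogate equals 1 exactly at the threshold and decays quadratically away from it, so weights feeding neurons far from firing receive vanishingly small updates, while neurons near threshold dominate learning.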
Example
The following example demonstrates a network that combines leaky integrate-and-fire neurons with conventional convolutions to classify digits from the MNIST dataset with >99% accuracy.
import torch
import torch.nn as nn

from norse.torch import LICell           # Leaky integrator
from norse.torch import LIFCell          # Leaky integrate-and-fire
from norse.torch import SequentialState  # Stateful sequential layers

model = SequentialState(
    nn.Conv2d(1, 20, 5, 1),   # Convolve from 1 -> 20 channels
    LIFCell(),                # Spiking activation layer
    nn.MaxPool2d(2, 2),
    nn.Conv2d(20, 50, 5, 1),  # Convolve from 20 -> 50 channels
    LIFCell(),
    nn.MaxPool2d(2, 2),
    nn.Flatten(),             # Flatten to 800 units
    nn.Linear(800, 10),
    LICell(),                 # Non-spiking integrator layer
)

data = torch.randn(8, 1, 28, 28)  # Batch of 8 samples, 1 channel, 28x28 pixels
output, state = model(data)       # Returns a tuple: (tensor of shape (8, 10), neuron state)
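The leaky integrate-and-fire dynamics used by the spiking layers above can be sketched as a simple discretized update in plain Python. This is a minimal illustration of the general LIF model, not Norse's implementation; the time constant, threshold, and step size are assumed values, and Norse's own defaults and integration scheme may differ:

```python
def lif_step(v, i_in, tau=20.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron:
    dv/dt = -v / tau + i_in, with spike-and-reset at v_th."""
    v = v + dt * (-v / tau + i_in)
    if v >= v_th:
        return 0.0, 1  # reset membrane potential, emit a spike
    return v, 0

# Drive the neuron with a constant input current and count spikes.
v, spikes = 0.0, 0
for _ in range(100):
    v, s = lif_step(v, i_in=0.1)
    spikes += s
```

With these parameters the membrane potential charges toward a steady state above threshold, so the neuron fires periodically; the spike count, rather than the raw potential, is what the downstream layers in the example network consume.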
External references
[edit]- Public code repository on GitHub: https://github.com/norse/norse/
Category:Simulation software Category:Computational neuroscience
- ^ Pehle, Christian-Gernot; Pedersen, Jens Egholm (2021-01-06), "Norse - A deep learning library for spiking neural networks", Zenodo, Bibcode:2021zndo...4422025P, doi:10.5281/zenodo.4422025, retrieved 2022-03-06
- ^ Zenke, Friedemann; Ganguli, Surya (2018-06-01). "SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks". Neural Computation. 30 (6): 1514–1541. doi:10.1162/neco_a_01086. ISSN 0899-7667. PMC 6118408. PMID 29652587.