Dear Nest Users,
I hope all of you are in good health during these times.
I want to create a balanced network as the first step of my master's thesis. The model I want to use is "iaf_cond_exp", which is a must for later purposes. There is an example of a balanced network in the PyNEST examples folder, but it uses "iaf_psc_alpha" and does not fit my goals.
When I try to change the model and run the program, the network does not become active and there is nothing to record.
I divided my question into two parts:
- Does anyone have a balanced network with “iaf_cond_exp” neurons and all of its necessary parameters to run?
- In general, how do people find or calculate the network parameters that fit the neuron model they use, without getting lost in the massive number of parameters?
Note: I also tried the "brunel_alpha_evolution_strategies.py" example, which uses an evolutionary algorithm to find the best parameters. It finds parameters after 50 generations, and I use those parameters afterwards, but it just doesn't work!
Kind regards,
Nosratullah Mohammadi
Dear all,
Just a little reminder that the submission deadline for the (virtual)
NEST Conference 2020 is *this Monday, June 1st*. We are looking forward
to your contributions.
The NEST Conference provides an opportunity for the NEST Community to
meet, exchange success stories, swap advice, and learn about current
developments in and around NEST spiking network simulation and its
applications.
This year's conference will take place as a *virtual conference* on
*Monday/Tuesday 29/30 June 2020*.
We are inviting contributions to the conference, including plenary
talks, "posters" and breakout sessions on specific topics.
*Important dates*
*01 June 2020* — Deadline for submission of contributions
*08 June 2020* — Notification of acceptance
*10 June 2020* — Deadline for NEST Initiative Membership applications
(registration is free for members in 2020)
*22 June 2020* — Registration deadline
*29 June 2020* — NEST Conference 2020 starts
For more information on how to submit your contribution, register and
participate, please visit the conference website
*https://nest-simulator.org/conference*
We are looking forward to seeing you all in June!
Hans Ekkehard Plesser, Susanne Kunkel, Dennis Terhorst, Anne Elfgen &
many more
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference on
Monday 25 May, 11.30-12.30 CEST (UTC+2).
In the Project team round, a contact person from each team will give a
short statement summarizing ongoing work in the team and cross-cutting
points that need discussion among the teams. The remainder of the
meeting will be devoted to a more in-depth discussion of topics
suggested by the teams.
Agenda
Welcome
Review of NEST User Mailing List
Project team round
In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2020-05-25-Open-NEST-Developer-…
Looking forward to seeing you soon!
best,
Dennis Terhorst
------------------
Log-in information
------------------
We use a virtual conference room provided by DFN (Deutsches Forschungsnetz).
You can use the web client to connect. However, we encourage everyone
to use a headset for better audio quality, or even a proper video
conferencing system or software (see below) when available.
Web client
* Visit https://conf.dfn.de/webapp/conference/97938800
* Enter your name and allow your browser to use camera and microphone
* The conference does not need a PIN to join, just click join and you're in.
In case you see a dfnconf logo and the phrase "Auf den
Meetingveranstalter warten", just be patient, the meeting host needs to
join first (a voice will tell you).
VC system/software
How to log in with a video conferencing system depends on your VC
system or software.
- Using the H.323 protocol (e.g. Polycom): vc.dfn.net##97938800 or
194.95.240.2##97938800
- Using the SIP protocol: 97938800@vc.dfn.de
- By telephone: +49-30-200-97938800
For those who do not have a video conference system or suitable
software, Polycom provides a pretty good free app for iOS and Android,
so you can join from your tablet (Polycom RealPresence Mobile, available
from AppStore/PlayStore). Note that firewalls may interfere with
videoconferencing in various and sometimes confusing ways.
For more technical information on logging in from various VC systems,
please see
http://vcc.zih.tu-dresden.de/index.php?linkid=1.1.3.4
Hello everyone!
I am writing to you regarding two matters:
ResetNetwork/ResetKernel in NEST 2 -> NEST 3
In the last developer conference, Daphne Cornelisse mentioned that she used ResetNetwork() to solve her problem.
ResetNetwork() is marked as deprecated, yet no one said anything, so I was confused why this is apparently the recommended, or at least an approved, way. She showed me that it works (in her case). Since it is deprecated, there is probably some good reasoning behind that. The documentation says: "ResetNetwork is deprecated and will be removed in NEST 3, because this function is not fully able to reset network and simulator state." What are the edge cases where its use causes problems?
In this ticket, it is stated that the feature is simply removed without any replacement:
https://github.com/nest/nest-simulator/issues/525
Thus, in NEST 3 there is only ResetKernel().
This means that you have to rebuild the network for any application in which you run multiple simulations with different inputs or parameter changes. I am using NEST 3 for reinforcement learning, and in each training episode I have to extract all the weights and save them, reset the kernel, reconstruct the net, and then load all the weights back. This adds a lot of performance overhead and bloats my code; I basically have another front end storing the net and talking to the NEST back end.
Therefore, the update to NEST 3 is a downgrade for many applications. I don't have a solution for this issue, but I want to spark some discussion, as I have learned that I am not the only NEST user to stumble into it.
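For reference, the per-episode cycle described here can be sketched as follows. This is a rough illustration rather than the author's actual code: build_network() is a hypothetical placeholder for the user's own construction routine, and the NEST 3 API calls should be checked against the installed version.

```python
# Sketch of the save / reset / rebuild / restore cycle (assumptions:
# NEST 3 API; build_network is a user-supplied placeholder function).
try:
    import nest
except ImportError:  # keep the sketch importable without NEST installed
    nest = None

def run_episode(build_network, saved_weights=None, sim_time=1000.0):
    """One training episode: reset the kernel, rebuild the network,
    restore previously saved weights, simulate, and return the new
    (possibly plastic) weights as (source, target, weight) triples."""
    nest.ResetKernel()            # NEST 3: the only reset available
    build_network()               # user code recreating nodes/connections
    if saved_weights:
        for src, tgt, w in saved_weights:
            conns = nest.GetConnections(
                source=nest.NodeCollection([src]),
                target=nest.NodeCollection([tgt]))
            conns.set(weight=w)
    nest.Simulate(sim_time)
    status = nest.GetConnections().get(["source", "target", "weight"])
    return list(zip(status["source"], status["target"], status["weight"]))
```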
STDP performance boost by manual computation in Python
In the paper "Demonstrating Advantages of Neuromorphic Computation: A Pilot Study“ by Wunderlich et al. (https://www.frontiersin.org/articles/10.3389/fnins.2019.00260/full) some performance improvement on STDP was reported.
"The synaptic weight updates in each iteration were restricted to those synapses which transmitted spikes, i.e., the synapses from the active input unit to all output units (32 out of the 1,024 synapses), as the correlation a+ of all other synapses is zero in a perfect simulation without fixed-pattern noise. This has the effect of reducing the overall time required to simulate one iteration[…]“
The provided source code (https://github.com/electronicvisions/model-sw-pong/blob/976e0778ca05cfd96c4…) indeed contains a manual computation of STDP. When using the NEST library, I would not expect doing some of the computation in Python to be faster. It appears to me that the NEST implementation computes STDP every time, even without spikes? Maybe someone can comment on whether this can be improved in NEST?
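The optimization quoted from the paper can be illustrated in plain Python (a sketch of the idea, not the paper's code): with a single active input per iteration, only that input's row of the weight matrix needs the correlation-driven update, because every other correlation trace is zero.

```python
def sparse_stdp_update(weights, traces, active_input, lr=0.01):
    """Update only the synapses from the active input unit.

    weights, traces: list-of-lists of shape (n_in, n_out); `traces`
    holds the accumulated correlations a+. Only the row of the single
    input unit that spiked this iteration is touched -- the speed-up
    described in the paper.
    """
    new = [row[:] for row in weights]
    new[active_input] = [w + lr * a
                         for w, a in zip(weights[active_input],
                                         traces[active_input])]
    return new

# 32 inputs x 32 outputs; only the row of input 5 changes.
w0 = [[0.0] * 32 for _ in range(32)]
tr = [[1.0] * 32 for _ in range(32)]
w1 = sparse_stdp_update(w0, tr, active_input=5)
```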
Kind regards,
Benedikt S. Vogler
--
Benedikt S. Vogler
benedikt.s.vogler(a)tum.de
Student M.Sc. Robotics, Cognition, Intelligence
Dear all,
I would like to simulate a network of current-based LIF neurons
('iaf_psc_delta') with inhibitory plasticity.
I tried using the model vogels_sprekeler_synapse, but the resulting
weights are positive (and they increase when the firing rate is high).
I guess this is because, when used with conductance-based neurons, the
resulting weights would effectively be multiplied by gI < 0? With
current-based neurons this is not the case; the weight is used directly
as the synaptic weight. Do I understand this right?
Is there a way to implement inhibitory plasticity using the
vogels_sprekeler_synapse and current-based LIF neurons?
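For intuition, here is a plain-Python sketch of the symmetric Vogels-Sprekeler rule as I understand it from Vogels et al. (2011) — not NEST's implementation, and the parameter values are arbitrary illustration values. The rule itself only produces a non-negative magnitude, so with current-based neurons the learned weight would have to be applied with a negative sign to act inhibitorily.

```python
import math

def vogels_sprekeler(pre_spikes, post_spikes, eta=0.01, alpha=0.12,
                     tau=20.0, w0=0.5):
    """Evolve one weight under the symmetric Vogels-Sprekeler rule.

    On a presynaptic spike:  w += eta * (x_post - alpha)
    On a postsynaptic spike: w += eta * x_pre
    x_pre / x_post are exponential traces with time constant tau (ms).
    The rule yields a magnitude; with current-based neurons the synapse
    would have to be applied as -w to be inhibitory.
    """
    events = sorted([(t, "pre") for t in pre_spikes] +
                    [(t, "post") for t in post_spikes])
    w, x_pre, x_post, t_last = w0, 0.0, 0.0, 0.0
    for t, kind in events:
        decay = math.exp(-(t - t_last) / tau)  # decay both traces
        x_pre *= decay
        x_post *= decay
        t_last = t
        if kind == "pre":
            w += eta * (x_post - alpha)
            x_pre += 1.0
        else:
            w += eta * x_pre
            x_post += 1.0
    return w

# Correlated high-rate firing drives the magnitude up, consistent with
# the observation that the weights grow when the firing rate is high.
w_high = vogels_sprekeler([1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
```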
Best,
Júlia
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference on
Monday 11 May, 11.30-12.30 CEST (UTC+2).
In the Project team round, a contact person from each team will give a
short statement summarizing ongoing work in the team and cross-cutting
points that need discussion among the teams. The remainder of the
meeting will be devoted to a more in-depth discussion of topics
suggested by the teams.
Agenda
Welcome
Review of NEST User Mailing List
Project team round
In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2020-05-11-Open-NEST-Developer-…
Looking forward to seeing you soon!
best,
Dennis Terhorst
Dear all,
I am working with frequency-encoded spike trains and would therefore
like to observe the changing spike frequencies of individual
connections in my network (if possible in real time). While I can
record spike events with their times using 'spike_detector' units, I
could not find a direct method to calculate the frequency. Is there any
method implemented, or a preferred way to calculate frequencies from
spike trains over time, or is this calculation up to the user? I am
also thinking about an indirect approach: attaching a leaky
integrate-and-fire neuron with an infinite threshold.
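In the absence of a built-in rate unit, one common route is to post-process the recorded spike times yourself. A minimal sliding-window sketch in plain Python (with NEST 2 the input would come from something like nest.GetStatus(detector, 'events')[0]['times'], though that call should be double-checked against the installed version):

```python
def sliding_rate(spike_times, t_start, t_stop, window=100.0, step=10.0):
    """Estimate the instantaneous firing rate (spikes/s) from a list of
    spike times (ms), using a sliding window of `window` ms advanced in
    steps of `step` ms. Returns (window_center_ms, rate_hz) pairs."""
    times = sorted(spike_times)
    rates = []
    t = t_start
    while t + window <= t_stop:
        n = sum(1 for s in times if t <= s < t + window)
        rates.append((t + window / 2.0, n / window * 1000.0))
        t += step
    return rates

# 10 spikes evenly spread over 1000 ms -> 10 spikes/s in every window.
rates = sliding_rate([i * 100.0 for i in range(10)], 0.0, 1000.0)
```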
Thanks!
Best,
Benedikt
--
Benedikt Feldotto M.Sc.
Research Assistant
Human Brain Project - Neurorobotics
Technical University of Munich
Department of Informatics
Chair of Robotics, Artificial Intelligence and Real-Time Systems
Room HB 2.02.20
Parkring 13
D-85748 Garching b. München
Tel.: +49 89 289 17628
Mail: feldotto(a)in.tum.de
https://www6.in.tum.de/en/people/benedikt-feldotto-msc/
www.neurorobotics.net
Dear all,
I hope you are staying safe.
I'm working on a balanced network consisting of "iaf_cond_exp" model neurons.
I want to control these two parameters at the same time:
- increasing the synaptic decay time,
- reducing the synaptic weight.
The goal is to keep the area under the synaptic current constant, so that we know the only parameter affecting the network is the synaptic decay time.
I wonder if you know of any mathematical formula or algorithm that could help me automate this process while keeping the area under the curve constant.
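If the synapse is exponential (as in "iaf_cond_exp"), the compensation is analytic: a post-synaptic response of the form w * exp(-t/tau) has area w * tau, so keeping the product w * tau constant keeps the area fixed. (Strictly, for a conductance-based model this holds for the conductance trace; the resulting current also depends on the driving force.) A minimal sketch:

```python
def rescale_weight(w_old, tau_old, tau_new):
    """Return the weight that keeps the area under an exponential
    post-synaptic response constant (area = w * tau) when the decay
    time is changed from tau_old to tau_new."""
    return w_old * tau_old / tau_new

# Doubling the decay time halves the weight; the area w * tau is preserved.
w_new = rescale_weight(1.0, 5.0, 10.0)
assert abs(w_new * 10.0 - 1.0 * 5.0) < 1e-12
```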
Best wishes,
Nosratullah
Hi all,
I'm trying to understand some inner workings of NEST. Right now I'm
running simulations with close to half a million elements, using mpirun
on a cluster with 25 nodes. The problem I am having is that the "setup"
phase (layer creation and connections) takes close to 8 min while the
simulation only takes 1 min.
So I tried to use python's multiprocessing package to speed it up, with the
following code:
nest.ResetKernel()
nest.SetKernelStatus({"local_num_threads": 1})
#...
connections = [
    (layer1, layer1, conn_ee_dict, 1),
    (layer1, layer2, conn_ee_dict, 2),
    (layer2, layer2, conn_ee_dict, 3),
    (layer2, layer1, conn_ee_dict, 4)
]

# Process the connections.
def parallel_topology_connect(parameters):
    [pre, post, projection, number] = parameters
    print(f"Connection number: {number}")
    topology.ConnectLayers(pre, post, projection)

pool = multiprocessing.Pool(processes=4)
pool.map(parallel_topology_connect, connections)
The above example takes around 0.9 s, but if the last two lines are
changed to a sequential call, it takes 2.1 s:
for [pre, post, projection, number] in connections:
    print(f"Connection number: {number}")
    topology.ConnectLayers(pre, post, projection)
So far the multiprocessing works great; the problem comes when the
"local_num_threads" parameter is changed from 1 to 2 or more (in the
cluster it could be 32). The code gets stuck in topology.ConnectLayers
without any error; after a while I just stopped it.
I also realised that topology.ConnectLayers spawns only one thread to
connect the layers, even though local_num_threads is set to more than one.
Any idea what is going on?
Thanks in advance
Juan Manuel
PS: The full example code is attached (60 lines of code). The
local_num_threads and multiprocessing_flag variables change the
behavior of the code.
Hello,
I hope everyone is well.
I am working on replicating some results from Teramae et al. 2012
<https://www.nature.com/articles/srep00485#Sec4> and have run into an
area of uncertainty. The paper calls for a synaptic-weight-dependent
failure rate of synaptic transmission:
[image: equation for the weight-dependent failure rate (inline image
not preserved)]
Is there any way to implement this in NEST? If it would help, I am
working on a custom neuron model built in NESTML. I can also write some
C++ if there is an easy way to add this functionality with a short
function. Alternatively, are there canonical ways to alter the network
structure to account for such a failure rate if it cannot be
implemented in NEST?
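One possible route, offered as a hedged sketch rather than a canonical answer: NEST ships a bernoulli_synapse whose p_transmit parameter drops spikes stochastically. p_transmit is a fixed per-connection value, but as long as the weights are static it can be set per connection as a function of that connection's weight, approximating a weight-dependent failure rate. The mapping p_of_w below is a hypothetical placeholder, not the formula from the paper.

```python
# Sketch: weight-dependent transmission probability via bernoulli_synapse.
try:
    import nest
except ImportError:  # keep the sketch importable without NEST installed
    nest = None

def p_of_w(w, w_half=0.5):
    """Hypothetical transmission probability increasing with weight;
    substitute the actual weight dependence from the paper here."""
    return w / (w + w_half)

def apply_weight_dependent_failures(conns):
    """Set each connection's p_transmit from its current weight.

    conns: a SynapseCollection of bernoulli_synapse connections
    (note: get("weight") returns a scalar for a single connection,
    which would need wrapping in a list)."""
    weights = conns.get("weight")
    conns.set(p_transmit=[p_of_w(w) for w in weights])
```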
Thank you and best wishes,
Josh Stern