Hi everyone!


I realized today that profiling PyNEST is easier than I thought. In IPython, you can just run


 run -p -s cumulative -D srn.prof ../src/pynest/examples/store_restore_network.py


which will run the script, print a summary sorted by cumulative time, and write binary profiling data (pstats format) to the file srn.prof.
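If you prefer to stay inside Python, the same pstats-format data can also be inspected programmatically with the standard-library cProfile and pstats modules. A minimal, self-contained sketch (using a toy workload and file name, not the NEST script itself):

```python
import cProfile
import pstats

def build(n):
    # toy workload standing in for the PyNEST script
    return [i * i for i in range(n)]

prof = cProfile.Profile()
prof.enable()
build(100_000)
prof.disable()

# same binary pstats format as the -D srn.prof option above
prof.dump_stats("toy.prof")

# load it back and print the ten most expensive calls by cumulative time
stats = pstats.Stats("toy.prof")
stats.sort_stats("cumulative").print_stats(10)
```

This is handy when you want to compare several profile files or filter the output by function name in a script.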


Then run (gprof2dot is available from, e.g., PyPI)


gprof2dot -f pstats -o srn.dot srn.prof


and finally


dot -Tpdf srn.dot -o srn.pdf


Both tools have lots of options. In my case (an older version of the script above, currently under review in #1919, not yet in master), the attached PDF resulted, showing that getting connection properties indeed takes a lot of time. Note that the graph only resolves time spent in Python code; time spent in C++ code hides behind "run()".


Below are some more timing results from a network of 1000 neurons with 100,000 connections:


In [16]: %time c = nest.GetConnections()
CPU times: user 66.5 ms, sys: 8.09 ms, total: 74.6 ms
Wall time: 75.7 ms

In [17]: %time c = nest.GetConnections().weight
CPU times: user 869 ms, sys: 75.3 ms, total: 944 ms
Wall time: 955 ms

In [18]: %time c = nest.GetConnections().get("weight", output="pandas")
CPU times: user 1.69 s, sys: 186 ms, total: 1.88 s
Wall time: 1.9 s


Clearly, GetConnections() is quite fast, while reading out the weights is costly. What surprised me most is that returning the data as a Pandas DataFrame costs a whole extra second; I wonder if we are doing something suboptimal here.
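I don't know which code path NEST's pandas conversion actually takes, but as a general illustration (toy data, not NEST code) of how much the construction strategy matters: building a DataFrame from one small dict per connection is far slower than handing pandas whole columns at once.

```python
import time
import pandas as pd

n = 100_000
weights = [0.1] * n  # stand-in for per-connection weight values

# row-wise: one dict per connection, as one might naively collect status dicts
t0 = time.perf_counter()
df_rows = pd.DataFrame([{"weight": w} for w in weights])
t1 = time.perf_counter()

# columnar: one list per column, handed to pandas in a single call
df_cols = pd.DataFrame({"weight": weights})
t2 = time.perf_counter()

print(f"row-wise: {t1 - t0:.4f} s, columnar: {t2 - t1:.4f} s")
```

If the conversion assembles the DataFrame from per-connection status dictionaries, something like the first pattern could account for a chunk of the extra time; this is only a guess until someone profiles that code path.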


Best,

Hans Ekkehard


--


Prof. Dr. Hans Ekkehard Plesser

Head, Department of Data Science


Faculty of Science and Technology

Norwegian University of Life Sciences

PO Box 5003, 1432 Aas, Norway


Phone +47 6723 1560

Email hans.ekkehard.plesser@nmbu.no

Home http://arken.nmbu.no/~plesser