Brauch: Processing scientific data in Python and numpy, but doing it fast

by jake from LWN.net on (#1ANPT)
On his blog, Sven Brauch has some suggestions on how to use NumPy to process scientific data and how to avoid some pitfalls that will ruin its performance. "In general, copying data is cheap. But if your program simulates 25 million particles, each having a float64 location in 3d, you already have 8*3*25e6 = 600 MB of data. Thus, if you write r = r + v*dt, you will copy 1.2 GB of data around in memory: once 600 MB to calculate v*dt, and again to calculate r+(v*dt), and only then is the result written back to r. This can really become a major bottleneck if you aren't careful. Fortunately, it is usually easy to circumvent; instead of writing r = r + dv, write r += dv. Instead of a = 3*a + b, write a *= 3; a += b. This avoids the copying completely. For calculating v*dt and adding it to r, the situation is a bit more tricky; one good idea is to just have the unit of v be such that you don't need to multiply by dt. If that is not possible, it might even be worth it to keep a copy of v which is already multiplied by dt, and update that whenever you update v. This is advantageous if only a few v values change per step of your simulation. I would not recommend writing it like this everywhere, though; it's often not worth the loss in readability. Do it just for really large arrays and code that is executed frequently."
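
As a rough sketch of the in-place pattern Brauch describes (the particle count and dt value are taken from the quote; the scratch-buffer variant using NumPy's out= argument is one common way to avoid the remaining temporary, not necessarily the one from the original post):

    import numpy as np

    n = 25_000_000                    # 25 million particles
    r = np.zeros((n, 3))              # positions, float64: 8*3*25e6 = 600 MB
    v = np.random.rand(n, 3)          # velocities, another 600 MB
    dt = 0.01

    # Naive update: allocates two 600 MB temporaries,
    # one for v*dt and one for r + (v*dt).
    # r = r + v * dt

    # In-place update with a preallocated scratch buffer:
    # np.multiply writes its result into an existing array,
    # so no fresh allocation happens per simulation step.
    scratch = np.empty_like(v)
    np.multiply(v, dt, out=scratch)   # scratch = v * dt, no temporary
    r += scratch                      # in-place add, no temporary

    # Likewise, a = 3*a + b becomes two in-place operations:
    a = np.random.rand(n)
    b = np.random.rand(n)
    a *= 3
    a += b

The scratch buffer is allocated once and reused on every step, which is the same trade Brauch suggests with a pre-scaled copy of v: a bit more memory held permanently in exchange for no per-step copying.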