Performance improvement when updating hundreds of thousands of particles (objects) at runtime

Hi all, I am working on a viewer system visualising a point cloud in Unity. About 300,000 3D points need their positions updated in 3D space every frame, as fast as possible. The particle system in Unity does not seem to support manipulating the position of each individual point. The point cloud looks similar to this:

[image: screenshot of the point cloud]

Until now I have tried several approaches: I’ve initialized a primitive cube as each 3D point and tried to turn off all shadows and keep the renderer as simple as possible, since you don’t actually need to see much detail in the points at all:

// Disable shadows and remove the collider so each point is as cheap as possible
var rend = point3d.GetComponent<Renderer>();
rend.shadowCastingMode = UnityEngine.Rendering.ShadowCastingMode.Off;
rend.receiveShadows = false;
Destroy(point3d.GetComponent<Collider>()); // physics isn't needed for display-only points

Now, at only about 8,000 points, the frame rate is already quite poor. Any ideas what else I can do to optimize?

As a next step, it would be nice to assign colors to the points according to a depth scale. But that means I would have to assign a material first, which I’m afraid will kill the performance again. Is there maybe a workaround? Some sort of layers with color in 3D space?

Any solution that involves accessing individual positions on the CPU is going to be slow with this many points. The cube solution you have now is particularly awful for you: your GPU is drawing 24 vertices per cube (4 vertices for each of its 6 faces!) and the CPU is issuing one draw call per cube, to draw what should essentially be just one point or one pixel. At 300,000 points that works out to 7.2 million vertices and 300,000 draw calls per frame, just because that’s the only way you could get to control the position through script. That’s a huge overhead and drastically sub-optimal, I’m afraid.

To dynamically change the position of hundreds of thousands of points in real-time, you really need to look into shader code, I’m afraid. Setting 3D positions of particles in real-time is typically done by storing the data in textures. Each particle is assigned a UV coordinate at creation, which it uses in a shader on the GPU to look up its position, encoded as pixel values, in that texture. The GPU then moves the primitive into position as it draws it, usually in a vertex shader.
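
To make the texture-lookup idea concrete, here’s a minimal C# sketch of the setup side, assuming a square RGBAFloat texture and a point-topology mesh. The names (PositionTextureSetup, TexSize, BuildPointMesh) are mine, not an established API, and the vertex shader that actually samples the texture isn’t shown:

using UnityEngine;
using UnityEngine.Rendering;

public class PositionTextureSetup : MonoBehaviour
{
    const int TexSize = 1024; // 1024 x 1024 pixels = room for roughly a million points
    public Texture2D positionTex;

    void Start()
    {
        // RGBAFloat keeps full float precision; x, y, z go into the RGB channels.
        positionTex = new Texture2D(TexSize, TexSize, TextureFormat.RGBAFloat, false);
        positionTex.filterMode = FilterMode.Point; // exact per-pixel lookup, no filtering
    }

    // Build a mesh of point primitives whose UVs each address one pixel.
    public Mesh BuildPointMesh(int pointCount)
    {
        var vertices = new Vector3[pointCount]; // dummies; real positions come from the texture
        var uvs = new Vector2[pointCount];
        var indices = new int[pointCount];
        for (int i = 0; i < pointCount; i++)
        {
            int x = i % TexSize;
            int y = i / TexSize;
            uvs[i] = new Vector2((x + 0.5f) / TexSize, (y + 0.5f) / TexSize); // pixel center
            indices[i] = i;
        }
        var mesh = new Mesh { indexFormat = IndexFormat.UInt32 }; // allow more than 65k vertices
        mesh.vertices = vertices;
        mesh.uv = uvs;
        mesh.SetIndices(indices, MeshTopology.Points, 0);
        mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 1000f); // generous bounds so nothing gets culled
        return mesh;
    }
}

Point filtering and pixel-center UVs matter here, because each vertex has to hit exactly its own pixel in the lookup.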

The tricky part with these kinds of challenges is always to come up with an efficient way to get the position data from main memory into video memory, since bus bandwidth is the most precious resource we have and ends up being the bottleneck in tons of different cases. So, every time your individual positions change, you’d need to update the texture with the new pixel data. People do this either by writing the pixel data to the texture on the CPU and calling Texture2D.Apply, or by having the GPU render the data into the texture itself with some intricate use of RenderTextures. Either way, it’ll take a bit of time and custom tinkering to get this working for you, but the good news is that it’s definitely doable.
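
For the CPU-side route, the per-frame upload could look roughly like this. Again just a sketch; the positionTex field is assumed to be the float texture from the setup above:

using UnityEngine;

public class PositionTextureUpdater : MonoBehaviour
{
    public Texture2D positionTex; // the RGBAFloat texture the shader samples
    Color[] pixels;

    // Repack this frame's positions into pixel data and upload once.
    public void UploadPositions(Vector3[] newPositions)
    {
        int total = positionTex.width * positionTex.height;
        if (pixels == null || pixels.Length != total)
            pixels = new Color[total];

        // One position per pixel; the alpha channel is unused here.
        for (int i = 0; i < newPositions.Length; i++)
            pixels[i] = new Color(newPositions[i].x, newPositions[i].y, newPositions[i].z, 1f);

        positionTex.SetPixels(pixels);
        positionTex.Apply(false); // a single upload over the bus per frame
    }
}

The RenderTexture route avoids this CPU-to-GPU copy entirely, but it only helps if the new positions can be computed on the GPU in the first place.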

I don’t have an out-of-the-box solution for you since your problem isn’t exactly trivial, but I figured I’d at least tell you why your current approach won’t work, and what people in the industry do in these cases. 🙂

This could also be a solution for you: click me

Hi, you can set the position of each particle in a particle system using GetParticles and SetParticles. 300k particles is a lot and you may hit some limits of the particle system (maybe splitting the cloud across a few systems could work). You can also change the color, size, lifetime etc. before calling SetParticles, as in the sketch below. I used this for a few thousand mesh-based particles on a desktop PC without any CPU/GPU slowdown; using shaders should give better performance, though.
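
Here’s a rough sketch of what that can look like; the 0–100 depth range used for the color gradient is just an assumed example:

using UnityEngine;

[RequireComponent(typeof(ParticleSystem))]
public class PointCloudParticles : MonoBehaviour
{
    public Vector3[] cloudPoints; // the point cloud positions for the current frame
    ParticleSystem ps;
    ParticleSystem.Particle[] particles;

    void Start()
    {
        ps = GetComponent<ParticleSystem>();
        var main = ps.main;
        main.maxParticles = cloudPoints.Length; // the default cap is only 1000
        particles = new ParticleSystem.Particle[cloudPoints.Length];
        ps.Emit(cloudPoints.Length); // emit one particle per point up front
    }

    void Update()
    {
        int count = ps.GetParticles(particles);
        for (int i = 0; i < count && i < cloudPoints.Length; i++)
        {
            particles[i].position = cloudPoints[i];
            particles[i].remainingLifetime = float.MaxValue; // keep every particle alive

            // Color by depth: map z from an assumed 0..100 range onto a blue-to-red gradient.
            float t = Mathf.InverseLerp(0f, 100f, cloudPoints[i].z);
            particles[i].startColor = Color.Lerp(Color.blue, Color.red, t);
        }
        ps.SetParticles(particles, count);
    }
}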