
# How do you generate an Infinite Voronoi diagram?

Hello there!

I've been dabbling in procedural world generation for a little while now, and I've gotten a decent understanding of some parts: I can create height maps, temperature maps, and moisture maps. It seems people usually generate biomes based on various factors, e.g. temperature and moisture maps.

Now what I want to do is to generate a heightmap, then add biomes, and then modify each biome based on whatever parameters I come up with.

**So... to the point** - I was looking into Voronoi diagrams and implemented one. My problem is that I have a hard time understanding how to implement this for an infinite world. For a heightmap, I can simply offset it by whatever distance on the x/y axis, and all is good (and deterministic based on the seed, which is important).

**What I've gathered so far** during my research is this:

When generating the Voronoi diagram for a chunk, you take the eight adjacent chunks into account. Now, as I'm writing this, my understanding seems to be getting a bit better, but I'm still not entirely sure exactly how to do this. I guess this means, assuming chunks 1-9 where 5 is the center chunk: if I generate my Voronoi diagram for chunk 5, and then do the same for, say, chunk 4 (left of 5), it will seamlessly fit together with chunk 5? Is this assumption correct?

Assuming my sudden mid-text epiphany is correct, I'm still not really sure how to take the adjacent chunks into account when generating the Voronoi diagram. My current implementation looks like this:

```
// Shared RNG; seed it (e.g. from the world seed) for deterministic output.
private static System.Random _rand = new System.Random();

public static Color[] GenerateVoronoiDiagram(int width, int height, int numberOfRegions)
{
    // Pick a random centroid position and a random color for each region.
    var centroids = new Vector2Int[numberOfRegions];
    var regions = new Color[numberOfRegions];
    for (int i = 0; i < numberOfRegions; i++)
    {
        centroids[i] = new Vector2Int(_rand.Next(0, width), _rand.Next(0, height));
        regions[i] = new Color((float)_rand.NextDouble(), (float)_rand.NextDouble(), (float)_rand.NextDouble());
    }

    // Color each pixel after its closest centroid.
    // Note: y must run over the height and x over the width, and the
    // row-major index is y * width + x (the original had these swapped).
    var colorDiagram = new Color[width * height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            colorDiagram[y * width + x] = regions[GetClosestCentroidIndex(new Vector2Int(x, y), centroids)];
        }
    }
    return colorDiagram;
}

private static int GetClosestCentroidIndex(Vector2Int pixelPos, Vector2Int[] centroids)
{
    var smallestDstSqr = float.MaxValue;
    var index = 0;
    for (int i = 0; i < centroids.Length; i++)
    {
        // Compare squared distances to avoid the square root.
        var distanceSqr = (pixelPos - centroids[i]).sqrMagnitude;
        if (distanceSqr < smallestDstSqr)
        {
            smallestDstSqr = distanceSqr;
            index = i;
        }
    }
    return index;
}
```

What I'm struggling with is how to take the other chunks into account. Simply multiplying width and height by 3 (3×3 = 9 chunks) and multiplying numberOfRegions by 9 just results in the exact same generation, no matter where I generate it. Initially I thought I could simply offset the centroids by width and height multiplied by the chunk's x and y coordinates respectively, but that did not give me the desired result.
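(Side note on the determinism requirement: one trick I've seen, sketched here as an assumption rather than something from this thread, is to derive each chunk's RNG seed from the world seed plus the chunk coordinates, instead of reusing one global `_rand` for everything:

```
// Sketch: derive a deterministic RNG for each chunk from the world seed
// and the chunk's coordinates. The mixing constants below are an
// arbitrary choice; any stable integer hash works.
private static System.Random GetChunkRandom(int worldSeed, int chunkX, int chunkY)
{
    unchecked
    {
        int hash = worldSeed;
        hash = hash * 31 + chunkX;
        hash = hash * 31 + chunkY;
        return new System.Random(hash);
    }
}
```

That way a given chunk always produces the same centroids, no matter which neighboring chunk is currently being generated.)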

Sorry for the wall of text, but I wanted to include as much of the info I gathered as possible and to clarify what I do not understand. Any help is greatly appreciated :-)

**Answer** by Bunny83
·
Sep 13 at 05:06 PM

Well, the only thing you need is the additional "centroid" positions around the edges of the chunk you're generating. So for each adjacent chunk, you just generate the centroids for that chunk as well and include them in your array when you do your distance calculation. Since this effectively multiplies your centroid count by 9, GetClosestCentroidIndex becomes much more expensive. To optimise the distance calculation you can build a quadtree out of all the points; however, this only makes sense if your number of regions/centroids is fairly high.

Of course, currently you generate your centroids in the local space of each chunk, but you need all the centroids in the same space. So the centroids generated for the chunk left of the current chunk need to be offset to the left. You end up with a list of the positions from all those chunks combined.

So I would recommend moving the generation of the centroids into a method that takes a `List<Vector2Int>` and a relative offset, and fills in the centroids for that chunk. That way you can simply gather all the necessary centroids into one list. Currently you keep the region color and the position separate; I would recommend combining them into a single struct so you have a single list. That makes the handling much easier.

Keep in mind that you would still just generate the same map, only for your middle chunk, so your "x" and "y" loops stay exactly the same. However, with the neighboring centroids in the array/list, the transition into the next chunk will be correct.
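A minimal sketch of this approach, with names of my own invention (`Cell`, `AddChunkCentroids`, `GatherCells` are not from the thread, and the seed-mixing constants are arbitrary):

```
// Combines region color and centroid position into one struct, as suggested.
public struct Cell
{
    public Vector2Int Position; // in world/pixel space
    public Color Color;
}

// Fills 'cells' with the centroids of one chunk, offset into world space.
private static void AddChunkCentroids(List<Cell> cells, int chunkX, int chunkY,
                                      int width, int height, int numberOfRegions, int worldSeed)
{
    // Seed the RNG from the chunk coordinates so each chunk always produces
    // the same centroids, no matter which chunk we are currently rendering.
    var rand = new System.Random(worldSeed ^ (chunkX * 73856093) ^ (chunkY * 19349663));
    for (int i = 0; i < numberOfRegions; i++)
    {
        cells.Add(new Cell
        {
            // Local position plus the chunk's world-space offset.
            Position = new Vector2Int(rand.Next(0, width) + chunkX * width,
                                      rand.Next(0, height) + chunkY * height),
            Color = new Color((float)rand.NextDouble(), (float)rand.NextDouble(), (float)rand.NextDouble())
        });
    }
}

// Gather centroids for the chunk and its eight neighbours; the x/y pixel
// loops over the middle chunk then stay exactly the same as before.
public static List<Cell> GatherCells(int chunkX, int chunkY, int width, int height,
                                     int numberOfRegions, int worldSeed)
{
    var cells = new List<Cell>();
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            AddChunkCentroids(cells, chunkX + dx, chunkY + dy, width, height, numberOfRegions, worldSeed);
    return cells;
}
```

The pixel loop then searches the gathered list in world space, so centroids just across a chunk border win the pixels they should, and adjacent chunks line up seamlessly.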

Hey @Bunny83

Thanks a lot for your detailed answer, this was exactly what I needed. I'll look into how bad the performance is after implementing it; the first iteration will just be to get results, then optimization ;-) Since I'm planning to use the regions as biomes on my terrain, I wouldn't expect the region count to get too high, as each chunk will likely only ever contain/be part of 2-5 biomes, and thus the same number of regions. Great tip about combining position and region color, much appreciated.

All in all, awesome feedback - Again thanks a lot for the helpful input, especially in such a timely manner ;-)

Yes, of course performance shouldn't be the first concern :)

ps: Some time ago I stumbled across a neat alternative way to draw a Voronoi diagram: render a 3D cone at each centroid with the color of that region. Thanks to the depth buffer, overlapping cones intersect each other and automatically create the Voronoi diagram for those points. Here's an interactive WebGL example that uses this approach. You can see how it works better when you reduce the cone radius to something like 100; then you can see where the cones end and that they really are just cones. Note that to get clean lines between regions you have to use a relatively high triangle count when rendering the cones. When you reduce the triangle count you'll notice that the lines get zig-zaggy.

Though this approach usually isn't worth it, since you could do the Voronoi generation in a shader as well, with perfect precision. I just wanted to add this little detail: I think it's an interesting solution and gives a different view of what a Voronoi diagram actually represents.

Hey, thanks a lot for the additional input. That tool is quite nifty to play around with, especially while still learning. It's a great visual aid, especially with the settings you suggested :-)

I pulled out my old voronoi test script. I had 4 different implementations which have vastly different performance stats. Note that I did not clean up any of the code. The different methods are simply executed from Start.

The test image is 4k × 4k and I used a relatively large number of centroids, specifically 100k. The naive approach never finished, but I extrapolated the runtime and calculated it would take about 1 hour and 20 minutes ^^. Keep in mind that with that resolution and point count it has to do 1.6 trillion distance calculations.

The second case uses a simple quadtree implementation, though even a quadtree requires quite a bit of work. This solution took about 31 seconds. Much faster, but still quite slow.

The third approach actually uses a "depth buffer" and essentially renders growing "square cones" around each point. In addition to the color array, we use an extra int array that holds the depth value of each pixel. For each pixel drawn, we calculate the theoretical depth based on its distance from the point it belongs to. Each step, the "radius" is increased by 1 and we draw the four one-pixel-wide lines on the perimeter of the square around each point; the depth buffer determines whether a pixel should be drawn or not.
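A rough sketch of how that description could look in code (my reconstruction, not the actual test script; it assumes a hypothetical `Cell` struct holding a centroid's `Position` and `Color`, and a real implementation would also stop expanding a centroid's ring once it can no longer win any pixel):

```
// Every centroid grows a square ring per step; a per-pixel depth buffer
// (squared distance to the owning centroid) decides which centroid wins.
public static Color[] DrawSquareCones(List<Cell> cells, int width, int height, int maxRadius)
{
    var colors = new Color[width * height];
    var depth = new int[width * height];
    for (int i = 0; i < depth.Length; i++) depth[i] = int.MaxValue;

    void TryPlot(int x, int y, int cx, int cy, Color c)
    {
        if (x < 0 || y < 0 || x >= width || y >= height) return;
        int dx = x - cx, dy = y - cy;
        int dSqr = dx * dx + dy * dy;   // theoretical "depth" of this pixel
        int idx = y * width + x;
        if (dSqr < depth[idx]) { depth[idx] = dSqr; colors[idx] = c; }
    }

    for (int r = 0; r <= maxRadius; r++)    // grow all squares one ring per step
    {
        foreach (var cell in cells)
        {
            int cx = cell.Position.x, cy = cell.Position.y;
            for (int t = -r; t <= r; t++)   // the four edges of the current ring
            {
                TryPlot(cx + t, cy - r, cx, cy, cell.Color);
                TryPlot(cx + t, cy + r, cx, cy, cell.Color);
                TryPlot(cx - r, cy + t, cx, cy, cell.Color);
                TryPlot(cx + r, cy + t, cx, cy, cell.Color);
            }
        }
    }
    return colors;
}
```

Because each centroid only touches the ring pixels near it each step, the total work is roughly proportional to the pixel count rather than pixels × centroids, which is where the speedup comes from.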

This approach is much faster again: with the same resolution and point count it only took about 3.34 seconds.

The last solution is the same as the previous one, just with the "ProcessPoint" method inlined. This gave it an additional boost and it runs in just 1.37 seconds.

So the last two solutions are a great example of trading memory for processing power. A lot of problems can be optimised either for speed or for memory usage. In many cases, using more memory yields a much faster solution; on the other hand, if a small memory footprint is important, such solutions usually take more processing time.
