To know what is in a circle, should I use a circle collider or Vector3.Distance?

I want to know which objects are within a given radius. Which is faster: putting all my objects (around 10) in an array and checking them like this in Update()?

void Update()
{
    for (int i = 0; i < myObjectArray.Length; i++)
    {
        if (Vector3.Distance(transform.position, myObjectArray[i].transform.position) > radius)
        {
            // do stuff
        }
    }
}
Or do it with a circle collider (marked as a trigger) and track the objects that collide with it? Or is Physics.OverlapSphere() faster? What is the fastest way?

First I have to give you the standard answer.

You should ask this question only if you are experiencing performance problems and are trying to squeeze every last drop of CPU time out of your device after first optimizing the more probable causes of performance issues, like draw calls, polygon counts, texture sizes, unnecessary garbage collection etc. :slight_smile:

Just use the method that’s the easiest in your case and doesn’t cause bugs.

That being said… Vector3.Distance uses the familiar [Pythagorean theorem][1] to calculate the distance. That calculation contains a square root, which is a relatively slow operation, so it’s good to keep in mind that you can very often avoid it. Almost every time, you are only comparing the distance to a threshold, like in your example:

    if (Vector3.Distance(transform.position, myObjectArray[i].transform.position) > radius)

It’s slightly faster to compare the squared distance to your squared threshold value (radius):

    if ((transform.position - myObjectArray[i].transform.position).sqrMagnitude > radius * radius)
If you ever wondered why Vector2/Vector3 have a sqrMagnitude property, this is one of the reasons: it lets you skip the calculation of the square root.
Of course it’s even better if radius doesn’t change often so you can calculate radius*radius only when radius changes and store that value.
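The trick is language-agnostic, so here is a minimal sketch in plain Python (not Unity code; all names are made up for the example) showing that the square-root check and the squared-distance check agree, with the squared radius cached as suggested:

```python
import math

def within_radius_sqrt(center, point, radius):
    # Naive check: math.dist computes the Pythagorean distance (square root inside).
    return math.dist(center, point) < radius

def within_radius_squared(center, point, radius_squared):
    # Same result, no square root: compare squared distance to squared radius.
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    dz = point[2] - center[2]
    return dx * dx + dy * dy + dz * dz < radius_squared

center = (0.0, 0.0, 0.0)
point = (1.0, 2.0, 2.0)           # distance 3 from the origin
radius = 5.0
radius_squared = radius * radius  # cache this while radius is unchanged

print(within_radius_sqrt(center, point, radius))             # True
print(within_radius_squared(center, point, radius_squared))  # True
```

In Unity the squared check corresponds to `sqrMagnitude`; the point is only that both comparisons always give the same answer, so you can keep the cheaper one.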
Physics engines like the ones in Unity are highly optimized especially if you use them properly (mark unmovable objects static, disable collisions between layers that don’t need to collide etc.) so it’s a pretty safe bet to use their functionality.
If you are using this to detect hits from multiple moving objects (like an actual physics game with objects bouncing around), I’d go with the trigger collider, just because then you are sure to take advantage of the [broad phase optimization][2] and the rest of the physics engine’s machinery.
If this is not the case (say you only do the checks when the player clicks or something) AND it’s more convenient for you for some reason, go with Physics.OverlapSphere().
Also, in your question you are mixing Physics.OverlapSphere (3D physics) with a circle collider (2D physics); be sure to use only 2D physics if you can get away with that.
[1]: https://en.wikipedia.org/wiki/Pythagorean_theorem
[2]: http://ianqvist.blogspot.fi/2010/07/broad-and-narrow-phase-collision.html

“I want to know what objects are in a given radius what if Faster: Put all my objects(around 10) in an array and put this in update? … Or do it with an circle collider(and mark it as trigger) and track object that collides with the circle collider? Or is Physics.OverlapSphere() faster. What is the fastest way?”

For about 10 objects? Optimize the time it takes you to implement a solution :stuck_out_tongue: Seriously, 10 tests won’t make much of a dent in performance. If they do, it most likely means your game does nothing else than run those 10 containment tests. The point is that 10 containment tests will be dwarfed by the other code that will be running.

Each solution has a relatively fixed overhead and then a relatively variable execution time.

For a few objects, it might be faster to loop through an array.

For a lot of objects, it might be faster to use the physics system (to get the benefit of culling).

For specialized usage, it might be faster to use a different culling mechanism entirely (i.e. you mention “circles”, perhaps you want to try a QuadTree for example?) - the solution you choose should benefit your use case. For 10 tests, don’t even bother…
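To make the scaling argument concrete, here is a hypothetical sketch in plain Python (not Unity code, and every name is invented for the example) of the kind of culling a physics engine or quadtree does internally: a uniform grid restricts the exact distance tests to the few cells the query circle can overlap, instead of testing every object.

```python
from collections import defaultdict

def brute_force_query(points, center, radius):
    # O(n): test every point against the circle.
    r2 = radius * radius
    return [p for p in points
            if (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 <= r2]

def build_grid(points, cell):
    # Bucket points by integer cell coordinates (the "broad phase" structure).
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def grid_query(grid, cell, center, radius):
    # Broad phase: only visit cells the query circle can overlap.
    # Narrow phase: exact squared-distance test on those candidates.
    r2 = radius * radius
    cx0, cx1 = int((center[0] - radius) // cell), int((center[0] + radius) // cell)
    cy0, cy1 = int((center[1] - radius) // cell), int((center[1] + radius) // cell)
    hits = []
    for gx in range(cx0, cx1 + 1):
        for gy in range(cy0, cy1 + 1):
            for p in grid.get((gx, gy), []):
                if (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 <= r2:
                    hits.append(p)
    return hits

points = [(float(x), float(y)) for x in range(100) for y in range(100)]
grid = build_grid(points, cell=8.0)
a = sorted(brute_force_query(points, (50.0, 50.0), 5.0))
b = sorted(grid_query(grid, 8.0, (50.0, 50.0), 5.0))
print(a == b)  # True: same results, far fewer distance tests for large sets
```

For 10 objects the brute-force loop wins on simplicity; the grid only pays off when the point count is large enough that skipping most of it matters.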

It’s impossible, or at least very hard, to give a concrete answer as to which is the fastest way, because it depends on many factors. What is the fastest way? Well, I don’t know, but I assume you mean “in general”. There’s probably some clever patented algorithm somewhere that utilizes the hardware super efficiently and may be the fastest way: fully cache-aware, using all cores for large sets, one core for small sets, the GPU for certain sets, and probably your sound card for some strange case. Check out Judy arrays, for example:

Judy arrays are designed to keep the number of processor cache-line fills as low as possible, and the algorithm is internally complex in an attempt to satisfy this goal as often as possible. Due to these cache optimizations, Judy arrays are fast, especially for very large datasets. On data sets that are sequential or nearly sequential, Judy arrays can even outperform hash tables.

The point is, perfect optimization is hard. Really hard. But do you need perfect optimization?

I believe, the real question is “Is either solution fast enough for my current needs, or reasonably expected needs?”.

The answer then would be “measure it, and measure it well”.

If it’s below your performance requirements, try different approaches. Compare it with your previous benchmark to see if you’re making progress or making it worse.
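As a sketch of what “measure it” can look like, here is a hypothetical micro-benchmark in plain Python using timeit, comparing the square-root check against the squared-distance check on 10 random points. In Unity you would use the Profiler instead, but the idea is the same: time each variant under identical conditions and compare against your previous numbers rather than guessing.

```python
import timeit

setup = """
import math, random
random.seed(0)
points = [(random.uniform(-50, 50), random.uniform(-50, 50), random.uniform(-50, 50))
          for _ in range(10)]
center = (0.0, 0.0, 0.0)
radius = 25.0
r2 = radius * radius
"""

with_sqrt = """
hits = [p for p in points if math.dist(center, p) < radius]
"""

squared = """
hits = [p for p in points
        if (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 + (p[2] - center[2]) ** 2 < r2]
"""

# Run each variant many times and take the best of several repeats;
# absolute numbers vary per machine, which is exactly why you measure.
print("sqrt   :", min(timeit.repeat(with_sqrt, setup, number=10_000)))
print("squared:", min(timeit.repeat(squared, setup, number=10_000)))
```

Whichever variant wins on your hardware, the difference for 10 points will be tiny, which nicely confirms the "optimize your implementation time first" advice above.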