Problem with camera rendering

Hello,

I would like to get the output of a camera in a native plugin, but I am having trouble making it work. Here is the behaviour I use for the camera:

public class TestNativePlugin : MonoBehaviour 
{
  private Camera m_Camera;

  IEnumerator Start()
  {
    m_Camera = gameObject.GetComponent<Camera>();
    if (m_Camera != null && m_Camera.pixelHeight > 0 && m_Camera.pixelWidth > 0)
    {
      CreateTextureAndPassToPlugin();
      yield return StartCoroutine("CallPluginAtEndOfFrames");
    }
  }

  private void CreateTextureAndPassToPlugin()
  {
    var myRenderTexture = new RenderTexture(m_Camera.pixelWidth, m_Camera.pixelHeight, 24, RenderTextureFormat.ARGB32);

    myRenderTexture.Create();

    /*
    Texture2D tex = new Texture2D(512, 512);
    Color[] colors = new Color[512 * 512];

    for (int i = 0; i < 512 * 512; i++)
    {
      colors[i] = Color.black;
    }

    tex.SetPixels(colors);
    tex.Apply();
    */

    m_Camera.targetTexture = myRenderTexture;
    InitializePlugin(myRenderTexture.GetNativeTexturePtr());
  }

  private IEnumerator CallPluginAtEndOfFrames()
  {
    while (true)
    {
      yield return new WaitForEndOfFrame();

      GL.IssuePluginEvent(GetRenderEventFunc(), 1);
    }
  }
}
In my native plugin I simply copy the texture to the CPU and write its contents to a file. With this behaviour my output is all zeros.
If I initialize my native plugin with a Texture2D that I construct myself (the commented-out code above), everything works: the values I write into the texture are the values output by my native plugin.
I don’t really understand what is happening =/
Thank you for your help !

After a lot of tests and a lot of luck, I figured out the problem: the initialization of the RenderTexture. The following piece of code works:

IEnumerator Start()
  {
    m_Camera = gameObject.GetComponent<Camera>();

    if (m_Camera != null && m_Camera.pixelHeight > 0 && m_Camera.pixelWidth > 0)
    {
      CreateTextureAndPassToPlugin();
      yield return StartCoroutine("CallPluginAtEndOfFrames");
    }
  }

  private void CreateTextureAndPassToPlugin()
  {
    m_RenderTexture = new RenderTexture(m_Camera.pixelWidth, m_Camera.pixelHeight, 24, RenderTextureFormat.ARGB32);
    m_RenderTexture.Create();

    m_Camera.targetTexture = m_RenderTexture;
    m_Camera.Render();

    InitializePlugin(m_RenderTexture.GetNativeTexturePtr());
  }

  private IEnumerator CallPluginAtEndOfFrames()
  {
    while (true)
    {
      yield return new WaitForEndOfFrame();

      GL.IssuePluginEvent(GetRenderEventFunc(), 1);
    }
  }

And the magic key is the call to Render() after assigning the camera's target texture. The RenderTexture is not actually created until it is filled with something, even if you call the Create() method, which should create it on the hardware. According to the documentation, the RenderTexture is effectively created when it is set as active…
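A minimal sketch of that creation behaviour (this is an illustration, not the exact code from above — it assumes, as described here, that the native texture pointer is needed before the camera's first render):

```csharp
using UnityEngine;

public class RenderTextureCreation : MonoBehaviour
{
  void Start()
  {
    var rt = new RenderTexture(256, 256, 24, RenderTextureFormat.ARGB32);
    rt.Create(); // requests the hardware resource

    // Forcing the texture to become the active render target guarantees
    // the underlying GPU resource exists before GetNativeTexturePtr()
    // is handed to the plugin (same effect as calling Camera.Render()
    // with this texture as the camera's targetTexture).
    RenderTexture.active = rt;
    RenderTexture.active = null;

    Debug.Log(rt.IsCreated()); // true once the resource exists on the GPU
    System.IntPtr ptr = rt.GetNativeTexturePtr();
  }
}
```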

I am also trying to do the same thing. Can you please tell me how you have used this native plugin with Unity game code?

Do I just import this as a .jar and put it into the Plugins folder under Assets?

What I know is that we have to import the plugin using DllImport and then call its functions from the game code.
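A hedged sketch of what that import looks like (the library name "TestNativePlugin" is an assumption; the two function signatures are inferred from the calls in the behaviour above):

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeBindings : MonoBehaviour
{
  // The string must match the compiled library's name
  // (e.g. TestNativePlugin.dll on Windows, libTestNativePlugin.so on Linux/Android),
  // with the binary placed under Assets/Plugins.
  [DllImport("TestNativePlugin")]
  private static extern void InitializePlugin(IntPtr texturePtr);

  [DllImport("TestNativePlugin")]
  private static extern IntPtr GetRenderEventFunc();
}
```

Once declared this way, the functions can be called from game code like any other C# method, as in the behaviour above.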

I just want to know: if we have the same code as a C# function in my project, rather than as a plugin, and call it from there, is there any difference? Getting the frame captured by the camera normally slows down the game.
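For reference, the pure-C# capture path that this question refers to can be sketched as follows (an illustrative sketch, not code from this thread); ReadPixels is the synchronous GPU-to-CPU copy that typically causes the slowdown mentioned:

```csharp
using UnityEngine;

public class ManagedCapture : MonoBehaviour
{
  private Camera m_Camera;
  private Texture2D m_ReadbackTexture;

  void Start()
  {
    m_Camera = GetComponent<Camera>();
    m_ReadbackTexture = new Texture2D(m_Camera.pixelWidth, m_Camera.pixelHeight,
                                      TextureFormat.ARGB32, false);
  }

  // Called after this camera has finished rendering.
  void OnPostRender()
  {
    // ReadPixels copies from the currently active render target to CPU memory;
    // this stalls the rendering pipeline, which is why the managed path is slow.
    m_ReadbackTexture.ReadPixels(
        new Rect(0, 0, m_ReadbackTexture.width, m_ReadbackTexture.height), 0, 0);
    m_ReadbackTexture.Apply();
  }
}
```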