Showing posts with label Android.

3/12/19

AR Based Indoor Navigation


Left: The fused output of the navigation system, visualized in a VR fashion, simulating the user's trajectory in the building's digital twin.
Right: The actual video feed recorded along this trajectory.
Trajectory length: 70+ meters, including the stairs.
Note: The markers shown in this video are not used for localization! They serve only as anchor points for evaluating the navigation system.

My master's thesis project was the result of a cooperation between the NavVis Navigation Team and TUM. The research investigates the viability and practicality of using visual techniques, combined with knowledge of the building's 3D structure, for indoor navigation. The proposed system is a hybrid of absolute visual positioning and relative visual tracking. Positioning is done with an image-based pose estimation approach that relies on dense feature matching (using a deep neural network) and the building's 3D structure, while tracking is done with the help of Google ARCore. This video shows the result of the strategic fusion between the absolute and relative data.
All the indoor scans were acquired using the NavVis M6 Trolley, one of the state-of-the-art mobile scanning devices in the industry.
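To give a rough idea of what the strategic fusion does, here is a minimal sketch (illustrative only, not the thesis implementation; the 2D Pose class and the method names are made up for this example): whenever the absolute visual positioning produces a fix, compute the offset between it and the relative track, and apply that offset to every subsequent relative pose until the next fix.

class Pose {
    double x, y, heading; // position in meters, heading in radians
    Pose(double x, double y, double heading) { this.x = x; this.y = y; this.heading = heading; }
}

class SimpleFusion {
    private double dx = 0, dy = 0, dHeading = 0; // current correction: absolute minus relative

    // Called whenever the absolute (image-based) positioning produces a fix.
    void onAbsoluteFix(Pose absolute, Pose relativeAtSameTime) {
        dx = absolute.x - relativeAtSameTime.x;
        dy = absolute.y - relativeAtSameTime.y;
        dHeading = absolute.heading - relativeAtSameTime.heading;
    }

    // Called every frame with the latest relative (ARCore-style) tracking pose.
    Pose fuse(Pose relative) {
        // A full implementation would also rotate the relative motion by dHeading
        // and weight the correction by the confidence of each source; omitted here.
        return new Pose(relative.x + dx, relative.y + dy, relative.heading + dHeading);
    }
}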

2/16/14

Augmented Reality App using Nokia HAIP Technology

Hi everyone,

Today I present to you my latest achievement in the field of Augmented Reality.

In the last post I presented a way, using libGDX, to augment a box with the correct orientation.
That solution is location invariant, which means that whenever I change my location, the box changes its location with me; there is no sense of displacement.

Today, using the Nokia HAIP (High Accuracy Indoor Positioning) technology, I was able to add a sense of position to the augmented objects.
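Conceptually, the OpenGL camera now gets its position from HAIP while its orientation still comes from the phone sensors, exactly as in the previous post. Here is a minimal sketch of such a render loop, assuming a hypothetical haip.getPosition() helper that returns the tag's coordinates already mapped into the OpenGL world:

 public void render() {
  float[] p = haip.getPosition();          // hypothetical HAIP wrapper: x, y, z in world space
  camera.position.set(p[0], p[1], p[2]);   // position comes from HAIP
  float[] up = orientation.getUp();        // orientation still comes from the sensors
  camera.up.set(up[0], up[1], up[2]);
  float[] look = orientation.getLookAt();
  camera.lookAt(look[0], look[1], look[2]);
  camera.update();
  camera.apply(Gdx.gl10);
  // The augmented objects keep their own fixed world coordinates, so walking
  // around now produces a real sense of displacement.
 }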

Here is a video showing the result we got, as well as the devices and technologies used.



Note: HAIP is based on BLE (Bluetooth Low Energy) to determine the location of objects and people.

3/23/13

Outdoor/Geo Augmented Reality: Camera Orientation from Mobile Sensors (using LibGDX) in Android

Hi again everyone, today's topic is AR, specifically outdoor AR (based on your geo location).
I am currently working on an AR browser (with both outdoor and marker-based modes). In the beginning I had a big challenge: how to show POIs on the screen. My choices were to (1) use the Android SDK with a SurfaceView, so all my work would be on a canvas and everything would be 2D, or (2) use OpenGL and deal with its mess, especially since I am already using the camera rendering module from my last post, which is built with OpenGL. After some research I got the crazy idea of using a game engine or framework that would handle all the OpenGL mess and hide it behind new API calls. Back to research again, and I found that LibGDX is a perfect candidate that solves all these issues, with high performance too.

So here is the situation: we want to show a shape that is positioned at a specific place in the OpenGL coordinate system, in front of the OpenGL camera, and when the mobile device's orientation changes, the shape should not leave its position. This will make it feel as if the shape is augmented at a specific position in the real world.



To achieve this, we need to make our orientation sensors give us the mobile camera's look-at point and up vector.

Notes:


  • You should be familiar with LibGDX, because the code uses a lot of its classes (such as Matrix4 and Vector3), and you need to know the LibGDX life cycle.
  • The phone orientation here is landscape (for portrait mode, the orientation matrix taken from the sensors needs to be remapped; see the sketch right after these notes).
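For the portrait case, the remapCoordinateSystem call in the code below would use different axes. As a pointer only, the Android SensorManager documentation suggests the following remap for an augmented-reality style app where the camera looks out of the back of the device (verify the axes for your own setup):

     SensorManager.remapCoordinateSystem(newMat,
       SensorManager.AXIS_X, SensorManager.AXIS_Z,
       newMat);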


Here is some sample code for getting the orientation (this method is taken from a bigger class, so any variable not defined here should be considered a global variable):


 public void start(Context context) {
  listener = new SensorEventListener() {
   private float orientation[] = new float[3];
   private float acceleration[] = new float[3];

   public void onAccuracyChanged(Sensor arg0, int arg1) {}

   public void onSensorChanged(SensorEvent evt) {
    int type = evt.sensor.getType();
    // Smooth the raw sensor values (see the low-pass filter below).
    if (type == Sensor.TYPE_MAGNETIC_FIELD) {
     orientation = lowPass(evt.values, orientation, 0.1f);
    } else if (type == Sensor.TYPE_ACCELEROMETER) {
     acceleration = lowPass(evt.values, acceleration, 0.05f);
    }

    if ((type == Sensor.TYPE_MAGNETIC_FIELD) || (type == Sensor.TYPE_ACCELEROMETER)) {
     // Build the rotation matrix from the gravity and geomagnetic readings.
     float newMat[] = new float[16];
     SensorManager.getRotationMatrix(newMat, null, acceleration, orientation);
     // Remap for a landscape device (see the portrait note above).
     SensorManager.remapCoordinateSystem(newMat,
       SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X,
       newMat);
     // Wrap in a Matrix4, transposed to account for the row-major layout
     // returned by SensorManager.
     Matrix4 matT = new Matrix4(newMat).tra();
     // Rotate the default look-at and up vectors by the device orientation.
     float[] newLookAt = {0, 0, -far, 1}; // far = 1000
     float[] newUp = {0, 1, 0, 1};
     Matrix4.mulVec(matT.val, newLookAt);
     Matrix4.mulVec(matT.val, newUp);
     matrix = matT.val;
     lookAt[0] = newLookAt[0] + position[0]; // position can be dropped if the camera stays at the origin
     lookAt[1] = newLookAt[1] + position[1];
     lookAt[2] = newLookAt[2] + position[2];
     up = newUp;
    }
   }
  };
  sensorMan = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
  sensorAcce = sensorMan.getSensorList(Sensor.TYPE_ACCELEROMETER).get(0);
  sensorMagn = sensorMan.getSensorList(Sensor.TYPE_MAGNETIC_FIELD).get(0);
  sensorOrien = sensorMan.getSensorList(Sensor.TYPE_ORIENTATION).get(0); // not actually used by the listener above
  sensorMan.registerListener(listener, sensorAcce, SensorManager.SENSOR_DELAY_FASTEST);
  sensorMan.registerListener(listener, sensorMagn, SensorManager.SENSOR_DELAY_FASTEST);
  sensorMan.registerListener(listener, sensorOrien, SensorManager.SENSOR_DELAY_NORMAL);
 }

As you have seen, a low-pass filter is applied to the data coming from the sensors, because the raw data is too noisy and would cause the shape to jitter on the screen.

Here is the filter:

public class LowPassFilter {

 float[] output;

 public LowPassFilter() {

 }

 // Simple exponential low-pass filter: output += alpha * (input - output)
 public float[] lowPass(float[] input, float alpha) {
  if (output == null) {
   // First sample: clone it so we don't keep a reference to the sensor's own array.
   output = input.clone();
   return output;
  }

  for (int i = 0; i < input.length; i++) {
   output[i] = output[i] + alpha * (input[i] - output[i]);
  }
  return output;
 }

}
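Note that onSensorChanged above calls a three-argument lowPass(input, previous, alpha). The original class only shows the two-argument form, but the three-argument variant would simply be the same filter with the previous value passed in explicitly, roughly like this:

 // Assumed three-argument variant matching the call in onSensorChanged:
 public float[] lowPass(float[] input, float[] previous, float alpha) {
  if (previous == null)
   return input.clone();
  for (int i = 0; i < input.length; i++) {
   previous[i] = previous[i] + alpha * (input[i] - previous[i]);
  }
  return previous;
 }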

OK, now we need to update our camera's lookAt vector and up vector in the LibGDX render loop:


public void render(){

  upVector = orientation.getUp();       // the up vector computed in the start() method above
  camera.up.set(upVector[0], upVector[1], upVector[2]);
  lookVector = orientation.getLookAt(); // the lookAt vector computed in the start() method above
  camera.lookAt(lookVector[0], lookVector[1], lookVector[2]);
  camera.update();                      // recompute the camera matrices after changing up/lookAt
  camera.apply(Gdx.gl10);               // load them into the GL 1.x fixed-function pipeline
 }

In the end we achieved our target: the OpenGL camera changes its orientation when the sensors change, so now we have the effect of a shape fixed in the real world. I hope you found something useful in this post, and if you have any questions, feel free to ask.

Here is a repository for the code above:
https://github.com/MESAI/AugmentedRealityLibGDX/tree/master/Augmented%20Reality%20With%20Libgdx


1/25/13

Android Native Camera With OpenCV and OpenGLES

Today I will show how to use OpenCV and OpenGL to make your own camera preview. It can be used later for image processing or, as in my case, in an Augmented Reality app. Some questions, assumptions, and info need to be considered first.

Why access the camera natively?
Because image processing is done with OpenCV, which is written in C and C++. You don't want to grab your frames in Java, send them through JNI (which is slow), get the result back, and then render the resulting frame on the screen. So what we are going to do is use the camera, OpenCV, and OpenGL all from C++.

I heard it is not supported on all phones, is that right?
Here is the problem: Android developers change their native camera module very frequently. The OpenCV people try to cover all the modules implemented so far, and they have come up with 7 native libraries to cover the majority of phones. So I guess the answer is yes and no.

My code was tested on a Samsung Galaxy Note, where I got 30+ FPS (frames per second) in the render loop, though the camera hardware itself can't deliver more than 30 FPS.

I am using OpenGL ES 1.1 because it's easier. It may not be as efficient, since there are no FBOs (Frame Buffer Objects), but I didn't care so much because I got my target FPS and even more.

No camera controls are implemented here, such as taking pictures or adjusting focus; we just grab frames and render them with OpenGL.

This topic doesn't have a lot of resources, and that is what made me create a blog and write this post.

So let's begin :D



I am assuming that you have installed OpenCV and the NDK and know how to use them.

First of all, you need to have a JNI folder, so create one if it's not there. Then add another folder inside it called build. Inside build you need to put the OpenCV include folder and libs folder; both come with the OpenCV SDK:

OpenCV_Directory->sdk->native->jni->include
OpenCV_Directory->sdk->native->libs
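So the JNI folder should end up looking roughly like this (a sketch only; armeabi-v7a is shown as an example ABI, use whichever ABIs you target; Android.mk and CameraRenderer.cpp are both shown later in this post):

jni/
  Android.mk
  CameraRenderer.cpp
  build/
    include/                 (the OpenCV headers, e.g. opencv2/...)
    libs/
      armeabi-v7a/
        libopencv_java.so
        libnative_camera_r*.so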

Now you are ready to go.

I will explain the structure of my code in terms of threads.
We have the main thread (1), which controls the whole app. Then, after creating a GLSurfaceView and setting its renderer, we get another thread called the GLThread (2), which is responsible for drawing the frames we grab from the camera and rendering them on the screen. The last thread is the frame-grabber thread (3) (a really slow thread); all it does is take frames and store them in a buffer so they can later be drawn to the screen.

Here we have our main Activity class, CameraPreviewer.java:


package com.mesai.nativecamera;

import java.util.List;

import org.opencv.android.CameraBridgeViewBase.ListItemAccessor;
import org.opencv.android.NativeCameraView.OpenCvSizeAccessor;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.view.Display;

public class CameraPreviewer extends Activity {

    GLSurfaceView mView;
    
    @Override protected void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        Native.loadlibs();
        // Open the camera from Java only to query the supported preview sizes,
        // then release it right away (see the note after the code).
        VideoCapture mCamera = new VideoCapture(Highgui.CV_CAP_ANDROID);
        List<Size> sizes = mCamera.getSupportedPreviewSizes();
        mCamera.release();
 
        mView = new GLSurfaceView(getApplication()){
         @Override
         public void onPause() {
          super.onPause();
          Native.releaseCamera();
         }
        };
        Size size = calculateCameraFrameSize(sizes,new OpenCvSizeAccessor());
        mView.setRenderer(new CameraRenderer(this,size));
        setContentView(mView);
    }
    
 protected Size calculateCameraFrameSize(List supportedSizes,
   ListItemAccessor accessor) {
  int calcWidth = Integer.MAX_VALUE;
  int calcHeight = Integer.MAX_VALUE;

  Display display = getWindowManager().getDefaultDisplay();

  int maxAllowedWidth = 1024;
  int maxAllowedHeight = 1024;

  for (Object size : supportedSizes) {
   int width = accessor.getWidth(size);
   int height = accessor.getHeight(size);

   if (width <= maxAllowedWidth && height <= maxAllowedHeight) {
    if ( width <= calcWidth 
      && width>=(maxAllowedWidth/2)
      &&(display.getWidth()%width==0||display.getHeight()%height==0)) {
     calcWidth = (int) width;
     calcHeight = (int) height;
    }
   }
  }

  return new Size(calcWidth, calcHeight);
 }
    @Override protected void onPause() {
        super.onPause();
        mView.onPause();
       
    }

    @Override protected void onResume() {
        super.onResume();
        mView.onResume();
        
    }
}

Here we can see some weird lines in onCreate: we open a VideoCapture from Java just to call getSupportedPreviewSizes(), because that method is not available in the C++ version, and I needed the supported camera resolutions so I could pick one that suits me. After that, the GLSurfaceView used for video rendering is created.

Now our custom renderer, CameraRenderer.java:
package com.mesai.nativecamera;


import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import org.opencv.core.Size;

import android.content.Context;
import android.opengl.GLSurfaceView.Renderer;


public class CameraRenderer implements Renderer {

 private Size size;
 private Context context;
 public CameraRenderer(Context c,Size size) {
  super();
  context = c;
  this.size = size;
 }
 
 public void onSurfaceCreated(GL10 gl, EGLConfig config) {
  Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
  Native.initCamera((int)size.width,(int)size.height);
 }

 public void onDrawFrame(GL10 gl) {
//  long startTime   = System.currentTimeMillis();
  Native.renderBackground();
//  long endTime   = System.currentTimeMillis();
//  if(30-(endTime-startTime)>0){
//   try {
//    Thread.sleep(30-(endTime-startTime));
//   } catch (InterruptedException e) {}
//  }
//  endTime   = System.currentTimeMillis();
  //System.out.println(endTime-startTime+" ms");
 }
 
 public void onSurfaceChanged(GL10 gl, int width, int height) {
  Native.surfaceChanged(width,height,context.getResources().getConfiguration().orientation);
 }
} 
Uncommenting the timing block in onDrawFrame will give your camera a steady 30 fps and will print the time needed to draw each frame (just for debugging).
The renderer refers to a class called Native in all of its methods, so here it is:
package com.mesai.nativecamera;

public class Native {
 public static void loadlibs(){
  System.loadLibrary("opencv_java");
  System.loadLibrary("NativeCamera");
 }
 public static native void initCamera(int width,int height);
 public static native void releaseCamera();
 public static native void renderBackground();
 public static native void surfaceChanged(int width,int height,int orientation);
}
OK, now for the native part. Here is the code for CameraRenderer.cpp:
#include <jni.h>
#include <GLES/gl.h>
#include <GLES/glext.h>
#include <android/log.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <pthread.h>
#include <string.h>
#include <time.h>
#include <math.h>

// Utility for logging:
#define LOG_TAG    "CAMERA_RENDERER"
#define LOG(...)  __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)

GLuint texture;
cv::VideoCapture capture;
cv::Mat buffer[30];
cv::Mat rgbFrame;
cv::Mat inframe;
cv::Mat outframe;
int bufferIndex;
int rgbIndex;
int frameWidth;
int frameHeight;
int screenWidth;
int screenHeight;
int orientation;
pthread_mutex_t FGmutex = PTHREAD_MUTEX_INITIALIZER;
pthread_t frameGrabber;
pthread_attr_t attr;
struct sched_param param;

GLfloat vertices[] = {
      -1.0f, -1.0f, 0.0f, // V1 - bottom left
      -1.0f,  1.0f, 0.0f, // V2 - top left
       1.0f, -1.0f, 0.0f, // V3 - bottom right
       1.0f,  1.0f, 0.0f // V4 - top right
       };

GLfloat textures[8];

extern "C" {

void drawBackground();
void createTexture();
void destroyTexture();
void *frameRetriever(void*);

JNIEXPORT void JNICALL Java_com_mesai_nativecamera_Native_initCamera(JNIEnv*, jobject,jint width,jint height)
{
 LOG("Camera Created");
 capture.open(CV_CAP_ANDROID + 0);
 capture.set(CV_CAP_PROP_FRAME_WIDTH, width);
 capture.set(CV_CAP_PROP_FRAME_HEIGHT, height);
 frameWidth =width;
 frameHeight = height;
 LOG("frameWidth = %d",frameWidth);
 LOG("frameHeight = %d",frameHeight);
 glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
 glShadeModel(GL_SMOOTH);
 glClearDepthf(1.0f);
 glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
 pthread_attr_t attr;
 pthread_attr_init(&attr);
 pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
 pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
 memset(&param, 0, sizeof(param));
 param.sched_priority = 100;
 pthread_attr_setschedparam(&attr, &param);
 pthread_create(&frameGrabber, &attr, frameRetriever, NULL);
 pthread_attr_destroy(&attr);

}

JNIEXPORT void JNICALL Java_com_mesai_nativecamera_Native_surfaceChanged(JNIEnv*, jobject,jint width,jint height,jint orien)
{
 LOG("Surface Changed");
 glViewport(0, 0, width,height);
 if(orien==1) {
   screenWidth = width;
   screenHeight = height;
   orientation = 1;
  } else {
   screenWidth = height;
   screenHeight = width;
   orientation = 2;
  }


 LOG("screenWidth = %d",screenWidth);
 LOG("screenHeight = %d",screenHeight);
 glMatrixMode(GL_PROJECTION);
 glLoadIdentity();
 float aspect = (float) screenWidth / (float) screenHeight; // cast to avoid integer division
 float bt = (float) tan((45.0 / 2.0) * M_PI / 180.0); // tan of half the vertical FOV, in radians
 float lr = bt * aspect;
 glFrustumf(-lr * 0.1f, lr * 0.1f, -bt * 0.1f, bt * 0.1f, 0.1f,
   100.0f);
 glMatrixMode(GL_MODELVIEW);
 glLoadIdentity();
 glEnable(GL_TEXTURE_2D);
 glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
 glClearDepthf(1.0f);
 glEnable(GL_DEPTH_TEST);
 glDepthFunc(GL_LEQUAL);
 createTexture();
}

JNIEXPORT void JNICALL Java_com_mesai_nativecamera_Native_releaseCamera(JNIEnv*, jobject)
{
 LOG("Camera Released");
 capture.release();
 destroyTexture();

}

void createTexture() {
  textures[0] = ((1024.0f-frameWidth*1.0f)/2.0f)/1024.0f;
  textures[1] = ((1024.0f-frameHeight*1.0f)/2.0f)/1024.0f + (frameHeight*1.0f/1024.0f);
  textures[2] = ((1024.0f-frameWidth*1.0f)/2.0f)/1024.0f + (frameWidth*1.0f/1024.0f);
  textures[3] = ((1024.0f-frameHeight*1.0f)/2.0f)/1024.0f + (frameHeight*1.0f/1024.0f);
  textures[4] = ((1024.0f-frameWidth*1.0f)/2.0f)/1024.0f;
  textures[5] = ((1024.0f-frameHeight*1.0f)/2.0f)/1024.0f;
  textures[6] = ((1024.0f-frameWidth*1.0f)/2.0f)/1024.0f + (frameWidth*1.0f/1024.0f);
  textures[7] = ((1024.0f-frameHeight*1.0f)/2.0f)/1024.0f;
 LOG("Texture Created");
 glGenTextures(1, &texture);
 glBindTexture(GL_TEXTURE_2D, texture);
 glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024,1024, 0, GL_RGB,
   GL_UNSIGNED_SHORT_5_6_5, NULL);
 glBindTexture(GL_TEXTURE_2D, 0);
}

void destroyTexture() {
 LOG("Texture destroyed");
 glDeleteTextures(1, &texture);
}

JNIEXPORT void JNICALL Java_com_mesai_nativecamera_Native_renderBackground(
  JNIEnv*, jobject) {
 drawBackground();
}

void drawBackground() {
 glClear (GL_COLOR_BUFFER_BIT);
 glBindTexture(GL_TEXTURE_2D, texture);
 if(bufferIndex>0){
 pthread_mutex_lock(&FGmutex);
 cvtColor(buffer[(bufferIndex - 1) % 30], outframe, CV_BGR2BGR565);
 pthread_mutex_unlock(&FGmutex);
 cv::flip(outframe, rgbFrame, 1);
 if (texture != 0)
   glTexSubImage2D(GL_TEXTURE_2D, 0, (1024-frameWidth)/2, (1024-frameHeight)/2, frameWidth, frameHeight,
   GL_RGB, GL_UNSIGNED_SHORT_5_6_5, rgbFrame.ptr());
 }
 glEnableClientState (GL_VERTEX_ARRAY);
 glEnableClientState (GL_TEXTURE_COORD_ARRAY);
 glLoadIdentity();
 if(orientation!=1){
  glRotatef( 90,0,0,1);
 }
 // Set the face rotation
 glFrontFace (GL_CW);
 // Point to our vertex buffer
 glVertexPointer(3, GL_FLOAT, 0, vertices);
 glTexCoordPointer(2, GL_FLOAT, 0, textures);
 // Draw the vertices as triangle strip
 glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
 //Disable the client state before leaving
 glDisableClientState(GL_VERTEX_ARRAY);
 glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}

void *frameRetriever(void*) {
 while (capture.isOpened()) {
  capture.read(inframe);
  if (!inframe.empty()) {
   pthread_mutex_lock(&FGmutex);
   inframe.copyTo(buffer[(bufferIndex++) % 30]);
   pthread_mutex_unlock(&FGmutex);
  }
 }
 LOG("Camera Closed");
 pthread_exit (NULL);
}


}
Here we have a lot of things that need to be explained:
  1. In initCamera: we initialize the camera by saying which camera we want to access, then we set the resolution we want our frames to have.
  2. initCamera (cont.): we also start a new thread (the frame grabber); for more information on native threads, search for POSIX threads (pthreads).
  3. surfaceChanged: it's all OpenGL initialization stuff.
  4. destroyTexture and releaseCamera are for exiting the app.
  5. drawBackground: renders the frames.
  6. frameRetriever: the method run by the frame-grabber thread.
Now you need an Android.mk file to compile the .cpp:

LOCAL_PATH := $(call my-dir)




include $(CLEAR_VARS)
LOCAL_MODULE := opencv-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libopencv_java.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := camera1-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libnative_camera_r4.2.0.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := camera2-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libnative_camera_r4.1.1.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := camera3-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libnative_camera_r4.0.3.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := camera4-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libnative_camera_r4.0.0.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := camera5-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libnative_camera_r3.0.1.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := camera6-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libnative_camera_r2.3.3.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := camera7-prebuilt
LOCAL_SRC_FILES = build/libs/$(TARGET_ARCH_ABI)/libnative_camera_r2.2.0.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/build/include
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
OPENGLES_LIB  := -lGLESv1_CM
OPENGLES_DEF  := -DUSE_OPENGL_ES_1_1
LOCAL_MODULE    := NativeCamera
LOCAL_SHARED_LIBRARIES := opencv-prebuilt 
LOCAL_SRC_FILES := CameraRenderer.cpp
LOCAL_LDLIBS +=  $(OPENGLES_LIB) -llog -ldl -lEGL

include $(BUILD_SHARED_LIBRARY)
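Depending on your setup you may also need an Application.mk file next to Android.mk. A minimal sketch (the ABI, STL, and platform values here are assumptions typical for OpenCV-for-Android projects of that era; adjust them to your targets):

APP_ABI := armeabi-v7a
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_PLATFORM := android-9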

I hope you have enjoyed my first tutorial post; if you have any questions, just leave a comment and feel free to ask.

UPDATE 1:

Here you can find the source code:
https://github.com/MESAI/NativeCamera
Add the OpenCV Java shared library.
Add the build folder and its contents to the JNI folder.
Then run ndk-build.