Processing and Arduino

I have a project with video and painting that I want to present in a space.

import processing.opengl.*;
import codeanticode.glgraphics.*;
import codeanticode.gsvideo.*;
import*; //Camera
Capture video; //Camera
GSMovie mov;
GLTexture tex;
int fcount, lastm;
float frate;
int fint = 3;
float x[] = new float[10000]; //Camera
float y[] = new float[10000]; //Camera 
color c[] = new color[10000]; //Camera
boolean show[] = new boolean[10000]; //Camera
int sum = 0; //Camera
color trackColor; //Camera
void setup() {
  size(640, 480, GLConstants.GLGRAPHICS);
  mov = new GSMovie(this, "station.avi");
  mov.loop();
  video = new Capture(this, width, height, 30); //Camera;
  // Use texture tex as the destination for the movie pixels.
  tex = new GLTexture(this);
  mov.setPixelDest(tex);
  // This is the size of the buffer where frames are stored
  // when they are not rendered quickly enough.
  // New frames put into the texture when the buffer is full
  // are deleted forever, so this could lead to dropped frames:
  // Otherwise, they are kept by gstreamer and will be sent
  // again later. This avoids losing any frames, but increases
  // the memory used by the application.
  trackColor = color(255, 255, 255);
}
void draw() {
  if (video.available()) { //Camera; //Camera
  }
  // Using the available() method and reading the new frame inside draw()
  // instead of movieEvent() is the most effective way to keep the
  // audio and video synchronization.
  if (mov.available()) {;
    // putPixelsIntoTexture() copies the frame pixels to the OpenGL texture
    // encapsulated by the tex object.
    if (tex.putPixelsIntoTexture()) {
      // Calculating height to keep aspect ratio.
      float h = width * tex.height / tex.width;
      float b = 0.5 * (height - h);
      image(tex, 0, b, width, h);
      /*String info = "Resolution: " + mov.width + "x" + mov.height +
                     " , framerate: " + nfc(frate, 2) +
                     " , number of buffered frames: " + tex.getPixelBufferUse();*/
      //rect(0, 0, textWidth(info), b);
      //text(info, 0, 15);
      fcount += 1;
      int m = millis();
      if (m - lastm > 1000 * fint) {
        frate = float(fcount) / fint;
        fcount = 0;
        lastm = m;
      }
    }
  }
  // Mirror the drawing. With scale(-1, 1) active, x offsets must be
  // negative to land on screen.
  scale(-1.0, 1.0);
  image(mov, -mov.width, 0);
  // Before we begin searching, the "world record" for closest color is set to a high number that is easy for the first pixel to beat.
  float worldRecord = 500; 
  // XY coordinate of closest color
  int closestX = 0;
  int closestY = 0;
  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      // What is the current color?
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);
      // Using Euclidean distance to compare colors: the dist() function
      // compares the current color with the color we are tracking.
      float d = dist(r1, g1, b1, r2, g2, b2);
      // If the current color is more similar to the tracked color than
      // the closest color so far, save the current location and difference.
      if (d < worldRecord) {
        worldRecord = d;
        closestX = x;
        closestY = y;
      }
    }
  }
  // We only consider the color found if its color distance is less than 40.
  // This threshold of 40 is arbitrary; you can adjust it depending on how
  // accurate you require the tracking to be.
  if (worldRecord < 40) {
    // Record the tracked pixel (it is drawn as a point below).
    // Two options: store the point as white...
    x[sum] = closestX;
    y[sum] = closestY;
    c[sum] = color(255);
    // ...or store it as black:
    //c[sum] = color(0);
    show[sum] = true;
    sum++;
    if (sum >= 10000) {
      sum = 0;
    }
  }
  // Paint surface
  for (int i = 0; i < sum; i++) {
    if (show[i] == true) {
      stroke(c[i]); // draw each point with its stored color
      point(x[i] - 640, y[i]);
    }
  }
}

I want to use a sensor on the Arduino to start a video when a person enters the space. I don't want just one video; I want several. How do I add the other videos? And how do I use the sensor so that a video starts when a person enters the space, one video each time?
I need your help using Arduino with Processing. I don't know Arduino and I am not good at Processing.
I have an Arduino Uno and a Sharp 2Y0A02 sensor.
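The Sharp 2Y0A02 is an analog infrared distance sensor with a roughly 20–150 cm range: the Arduino reads a voltage that falls as distance grows. A minimal sketch of the conversion and a "person close enough?" check, written in plain Java so the logic can be tested off-board (the constants 65.0 and -1.10 are an approximate power-law fit to the sensor's response curve, not calibrated values; calibrate against your own sensor):

```java
public class PresenceSensor {
    // Approximate power-law fit for the Sharp 2Y0A02 response:
    // distanceCm ≈ 65 * volts^-1.10 (hypothetical constants, assumption).
    static double distanceCm(int adcReading) {
        double volts = adcReading * 5.0 / 1023.0; // 10-bit ADC, 5 V reference
        return 65.0 * Math.pow(volts, -1.10);
    }

    // A person counts as "present" when something is closer than the threshold.
    static boolean personDetected(int adcReading, double thresholdCm) {
        return distanceCm(adcReading) < thresholdCm;
    }

    public static void main(String[] args) {
        // A high ADC reading (high voltage) means a close object.
        System.out.println(personDetected(500, 100.0)); // close -> true
        System.out.println(personDetected(100, 100.0)); // far -> false
    }
}
```

On the Arduino itself the same arithmetic would run on the result of `analogRead()`; the Java version above only exists so you can check the threshold logic before wiring anything.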

It sounds as if you have a general idea but no details yet. There seem to be several major unknowns to be addressed.

Have you identified the type of sensor that you’re going to use to sense the people entering your space? This seems pretty fundamental to your project.

Your post includes some code that looks like Java (is it Processing code?) that reads from a video source and seems to do some video output. Does that mean that you’ve got the video display side sorted out? To have that triggered from an Arduino, you’d need to write an Arduino sketch that sent status messages over the serial port, and have the (Processing?) application on the PC read from the serial port, parse the messages and process them. It’s quite common to interface Processing with an Arduino like that and there’s a section of the Arduino forum dedicated to that type of thing. If you have any problems with it, look in the playground for examples, and if you still need help with that part I suggest you ask in the “Interfacing w/ Software on the Computer” section.
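To give the serial idea a concrete shape: the Arduino sketch would print a short status word such as "ENTER" each time the sensor fires, and the application on the PC would read lines from the serial port, parse them, and start the next video. The message-handling logic is sketched below in plain Java; the protocol word ENTER and the video file names are made up for illustration, and in Processing you would obtain each line with the Serial library's readStringUntil('\n'):

```java
import java.util.List;

public class VideoTrigger {
    // Hypothetical playlist of movie files to cycle through.
    private final List<String> videos;
    private int next = 0;

    public VideoTrigger(List<String> videos) {
        this.videos = videos;
    }

    // Handle one line received from the Arduino over the serial port.
    // Returns the file name of the video to start, or null to do nothing.
    public String onSerialLine(String line) {
        if (line == null) return null;
        if (line.trim().equals("ENTER")) {
            // Someone entered: pick the next video, wrapping around,
            // so each entrance starts a different movie.
            String v = videos.get(next);
            next = (next + 1) % videos.size();
            return v;
        }
        return null; // ignore noise and any other message
    }

    public static void main(String[] args) {
        VideoTrigger t = new VideoTrigger(
            List.of("station.avi", "second.avi", "third.avi"));
        System.out.println(t.onSerialLine("ENTER")); // station.avi
        System.out.println(t.onSerialLine("junk"));  // null
        System.out.println(t.onSerialLine("ENTER")); // second.avi
    }
}
```

On the Arduino side the matching sketch would simply call Serial.println("ENTER") when the measured distance drops below your threshold, ideally with a short lock-out delay so one person doesn't trigger several videos in a row.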

I posted here because I didn't know whether I should open my own topic, and I need your help.