Digital Scramble

This is a project I made for a course called ‘New Media New Technology’. We had to work with openFrameworks, which was a challenge because I had never coded in C++ before.

Inspiration
I was thinking about personal space and digital creatures. Research shows that humans very easily assign emotions and wishes to abstract shapes [1]. This effect is even stronger when the abstract shapes show some sort of seemingly deliberate behavior [2; watch also the movie]. I thought that an easy way of creating this illusion would be to give digital objects their own personal space. With many objects and different ‘sizes’ of personal space, this could create the feeling of some objects not liking others (fleeing because their required personal space is bigger than that of the other object), and of some coming closer than others. I wanted to make the requirements of the objects depend on their visible features, for instance making them more ‘scared’ when they were small, more ‘attracted’ to similarly shaped objects, etc. A rough sketch of this idea follows below.
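Since I never got this behavior working (see below), the following is only a sketch of what such personal-space rules could look like; the Creature struct, the updateCreatures function and all numbers are made up for illustration and are not code from the project:

#include <cmath>
#include <vector>

// Hypothetical sketch (not project code): each creature keeps a radius of
// personal space; the one that needs the larger radius flees, the other
// one comes closer.
struct Creature {
    float x, y;          // position on screen
    float personalSpace; // radius the creature wants to keep clear
};

void updateCreatures(std::vector<Creature> &creatures, float speed) {
    for (size_t i = 0; i < creatures.size(); i++) {
        for (size_t j = 0; j < creatures.size(); j++) {
            if (i == j) continue;
            float dx = creatures[i].x - creatures[j].x;
            float dy = creatures[i].y - creatures[j].y;
            float dist = std::sqrt(dx * dx + dy * dy);
            if (dist > 0 && dist < creatures[i].personalSpace) {
                if (creatures[i].personalSpace > creatures[j].personalSpace) {
                    // 'scared': the other is inside our personal space, flee
                    creatures[i].x += speed * dx / dist;
                    creatures[i].y += speed * dy / dist;
                } else {
                    // 'attracted': comfortable with less space, come closer
                    creatures[i].x -= speed * dx / dist;
                    creatures[i].y -= speed * dy / dist;
                }
            }
        }
    }
}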

However, I also wanted to give the user the opportunity to add objects from the real world to the digital ‘herd’ of creatures. Inspired by a project from last year, the Quantisatron, I used a contour finder to make it possible for the user to draw a shape on a piece of paper, hold it up to their webcam, and see the shape come to life in the digital world.

Unfortunately, keeping the shapes stable when there were several of them on the screen turned out to be very difficult and beyond my abilities in openFrameworks. This was probably due to something in the altered contour tracker add-on I used, but I was not able to solve the problem, and it made it impossible to have the shapes react to each other as planned (they sometimes took points from each other’s vectors and therefore always touched). I therefore adapted my end goal to use this bug as a feature, and decided to concentrate on the destruction of the real shapes in the digital world. When the user enters a new ‘creature’ into the digital world, it slowly loses its defining features; in other words, it is ‘scrambled’. The end result of moving shapes is then created equally by the unique shapes added by the user and by the semi-random scrambling of the application.

Process
I used the ContourFinder addon and its example and changed both to fit my needs. I altered the ‘blobs’ class that holds the shapes the contour finder finds: I added variables for color and speed, and made the speed dependent on the size of the blob. I then created a second vector of blobs to hold the shapes the user selected. This is the final code of the blob class:

class ofxCvBlob {
public:
    float       area;
    float       length;
    ofRectangle boundingRect;
    ofPoint     centroid;
    bool        hole;
    ofColor     color;
    float       speedx;
    float       speedy;

    vector <ofPoint> pts;  // the contour points of the blob
    int              nPts; // number of pts

    //----------------------------------------
    ofxCvBlob() {
        area   = 0.0f;
        length = 0.0f;
        hole   = false;
        nPts   = 0;
        color.set(0, 0, 0);
        // The speed is dependent on the size of the blob:
        // small blobs move fast, big blobs move slowly.
        if (area < 8000){
            speedx = 1;
            speedy = 1;
        }
        if (area >= 8000 && area < 12000){
            speedx = 0.5;
            speedy = 0.7;
        }
        if (area >= 12000){
            speedx = 0.15;
            speedy = 0.1;
        }
    }

    //----------------------------------------
    void draw(float x = 0, float y = 0){
        ofFill();
        ofSetColor(color);
        ofBeginShape();
        for (int i = 0; i < pts.size(); i++){
            // This is where the blob is drawn. I have tried everything I could
            // think of to stop it from scrambling (put it in other places,
            // changed it up, made a separate class) but I didn't get it to
            // work properly.
            ofVertex(x + pts[i].x, y + pts[i].y);
        }
        ofEndShape();
    }
};
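One caveat with this class: area is initialized to 0.0f at the top of the constructor, so the size check below it always takes the first branch and every blob starts with speed 1. The speed only really becomes size-dependent if it is set again after the contour finder has computed the actual area, for instance with a small helper like this (a sketch, not part of the project code):

// Hypothetical helper (not project code): set the speed once the
// contour finder has filled in the real area of the blob.
void setSpeedForArea(ofxCvBlob &blob) {
    if (blob.area < 8000) {
        blob.speedx = 1;
        blob.speedy = 1;
    } else if (blob.area < 12000) {
        blob.speedx = 0.5;
        blob.speedy = 0.7;
    } else {
        blob.speedx = 0.15;
        blob.speedy = 0.1;
    }
}

Called right after shapes.push_back(...) in keyPressed below, this would give each captured shape a speed that actually matches its size.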

Apart from this, I only altered the contour finder so that it draws the contours on the video in black and does not draw the bounding rectangles, which it normally does automatically.
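In sketch form, that alteration boils down to something like the following; this is a reconstruction for illustration, not the exact addon code (the real draw also contains some scaling logic):

// Reconstruction (not the exact addon code): draw every blob as a black
// outline and skip the bounding rectangles the stock version draws.
void ofxCvContourFinder::draw(float x, float y) {
    ofSetHexColor(0x000000); // contours in black
    ofNoFill();
    for (int i = 0; i < (int)blobs.size(); i++) {
        ofBeginShape();
        for (int j = 0; j < blobs[i].nPts; j++) {
            ofVertex(x + blobs[i].pts[j].x, y + blobs[i].pts[j].y);
        }
        ofEndShape();
        // the stock version would also draw blobs[i].boundingRect here
    }
}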
This is the main bit of coding, with comments added:

#include "testApp.h"
//--------------------------------------------------------------
void testApp::setup(){
    // In setup, the video grabber is set up, and the size of the various images determined.
    vidGrabber.setVerbose(true);
    vidGrabber.initGrabber(320,240);

    colorImg.allocate(320,240);
    grayImage.allocate(320,240);
    grayBg.allocate(320,240);
    grayDiff.allocate(320,240);

    bLearnBakground = true;
    threshold = 60;
    // The threshold can be changed when the lighting conditions change.
    // It is now set for good lighting conditions.
}
//--------------------------------------------------------------
void testApp::update(){
    // Most of this is standard contour finder example code. I removed the bits I did not need.
    ofBackground(0,0,0);

    bool bNewFrame = false;
    vidGrabber.update();
    bNewFrame = vidGrabber.isFrameNew();

    if (bNewFrame){
        colorImg.setFromPixels(vidGrabber.getPixels(), 320,240);

        grayImage = colorImg;
        if (bLearnBakground == true){
            grayBg = grayImage;
            bLearnBakground = false;
        }
        // Background subtraction: difference with the stored baseline, then threshold.
        grayDiff.absDiff(grayBg, grayImage);
        grayDiff.threshold(threshold);

        // Arguments: input image, min area, max area, max number of blobs, find holes.
        // It detects only 1 contour every frame.
        contourFinder.findContours(grayDiff, 20, (320*240)/3, 1, false);
    }
}
//--------------------------------------------------------------
void testApp::draw(){
    // This writes the instructions and draws the screen
    ofSetColor(255,255,255);
    ofDrawBitmapString("Hold a blank piece of paper in front of the camera and press 'r' to reset the baseline. Press 'x' to capture a shape.", 10,15);
    ofSetHexColor(0xffffff);
    colorImg.draw(10,20);

    // The main contour finder is drawn on top of the video
    contourFinder.draw(10,20);

    // The already existing 'shapes' (a vector of blobs) are drawn,
    // and their direction is changed when they hit the walls
    for (int i = 0; i < shapes.size(); i++){
        shapes[i].draw(10,20);
        for (int j = 0; j < shapes[i].pts.size(); j++){
            shapes[i].pts[j].x += shapes[i].speedx;
            shapes[i].pts[j].y += shapes[i].speedy;
            if (shapes[i].pts[j].x < 0) {
                shapes[i].speedx = -shapes[i].speedx;
            }
            if (shapes[i].pts[j].y < 0) {
                shapes[i].speedy = -shapes[i].speedy;
            }
            if (shapes[i].pts[j].x > ofGetWidth()) {
                shapes[i].speedx = -shapes[i].speedx;
            }
            if (shapes[i].pts[j].y > ofGetHeight()) {
                shapes[i].speedy = -shapes[i].speedy;
            }
        }
    }
}

//--------------------------------------------------------------
void testApp::keyPressed(int key){
    switch (key){
        case 'r':
            // When 'r' is pressed, the program uses the current image as a baseline
            bLearnBakground = true;
            break;
        case 'x': {
            // When 'x' is pressed, the current blob is put in the 'shapes' vector...
            shapes.push_back(contourFinder.blobs[0]);
            float a = contourFinder.blobs[0].centroid.x;
            float b = contourFinder.blobs[0].centroid.y;
            int currShape = shapes.size()-1;
            // ...and the color of the shape is determined by the pixel at the
            // centroid of the blob; the grabbed region matches where the video is drawn.
            img.grabScreen(10,20,320,240);
            shapes[currShape].color = img.getColor(a,b);
            contourFinder.reset();
            break;
        }
    }
}
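One small extension that would fit here (a sketch, not something the project contains): since the threshold is hard-coded for good lighting in setup, two extra cases in the switch above could make it tunable at runtime:

// Hypothetical extra cases for the switch in keyPressed (not project code):
// tune the threshold when the lighting conditions change.
case '+':
    threshold = MIN(threshold + 5, 255); // less sensitive to differences
    break;
case '-':
    threshold = MAX(threshold - 5, 0); // more sensitive to differences
    break;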

And finally, a video explaining how the final product works and showing how it looks:

1. Pavlova, M., Sokolov, A. A., & Sokolov, A. (2005). Perceived dynamics of static images enables emotional attribution. Perception, 34, 1107-1116.
2. Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57, 243-259.
