Processing – Reflection

Design Iterations, Processing

As the Processing/Design Iterations blog draws to a close, I want to reflect on the whole unit, considering both what I could have improved and what I thought were my strengths.

I will start by talking about my original idea: I wanted to include sound that would play whenever a participant moved in either direction, and I imported ‘minim’ as it was the advised library to download for this type of sketch. Personally, I think this would have been a superior interactive piece compared to my final completed sketch. Unfortunately, time constraints caused this idea to be abandoned, because I just could not learn the code on my own within the time I had left.
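As a rough sketch of what that abandoned idea might have looked like (this is a reconstruction, not code I actually wrote; the filename is a stand-in, and mouse movement stands in for the participant motion detection I never got working), Minim can load and trigger a sound in just a few lines:

```processing
import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(640, 480);
  minim = new Minim(this);
  // "tone.wav" is a stand-in filename; it would sit in the sketch's data folder
  player = minim.loadFile("tone.wav");
}

void draw() {
  // Mouse movement stands in for participant movement in either direction
  if ((mouseX != pmouseX || mouseY != pmouseY) && !player.isPlaying()) {
    player.rewind();
    player.play();
  }
}
```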

Firstly, I admire the fact that Processing is open source, as well as its ability to create such detailed art pieces; it really is a well-made piece of software. In terms of learning Processing, I think I understood the software fairly well. I had an issue with the ‘minim’ library, as we hadn’t learnt it in the workshop, but other than that there weren’t many issues that proved too time-consuming. If I were to do the unit again, I would certainly learn more of the contributed libraries like ‘minim’, ‘openCV’ and ‘blobDetection’, because they offer very well-made examples which can be developed upon (as I did with facial detection).

Time management on the blogging is another area I could have been more focused on. Luckily, I made sure to log what code I had written and took screenshots of the sketches for later reference, but in future I should focus more on the detail of my blogging and use my time more efficiently.

When the time came to set up the sketch in the foyer, I connected the laptop to the television without much issue, and the whole process of interaction within the space caused little trouble, aside from the poor latency that occurred while connected. It did work, however, and responded well to the environment I had placed it in, i.e. the lighting and people walking past that could have disrupted the interaction.

As for the sketch itself, I am pleased with the outcome overall. It worked exactly the way I wanted it to, tracking every face visible to the camera and ‘censoring’ it out. As for improvements, I think adding sound would have created a better atmosphere, but this relates more to my previous idea of creating sound via movement, which unfortunately proved too difficult to develop within my time frame.


Processing – Feedback

Design Iterations, Processing

After I exhibited my interactive piece within the space of Weymouth House, I asked a few participants to write me a short paragraph about my idea and what they thought of it, as well as how it could have been improved.

“I feel like the blurring of the faces in its simplicity holds a powerful message as it not only dehumanises individuals but also shows that in [their] interaction with the installation that we are the ones dehumanising ourselves. As a media student and somewhat of an addict to digital technology, I find this concept very relatable.” – Rebecca Goodchild, Student.

“I found your interactive piece provides an interesting commentary on the current state of technology, and the way that it controls our identity. By blurring out my face, the piece made me consider how technologies create a new identity for me that may or may not be representative of my actual self.” – Lawrence Holmes, Student.

The fact that one of the concepts and themes behind this piece came across clearly through my sketch, without any explanation, is a very good sign that I executed my idea successfully, which I am pleased with.

This quote is also interesting as it shows that my interactive piece gives the user an ambiguous view of identity through technology. I think this is actually a positive thing: it leaves the user questioning whether or not this is something we should want.

When asked what I could improve, people suggested developing the concept of identity further; one viewer suggested that blurring out the entire body would be the next step, which I have to agree with and would have done had I had more time to complete the sketch.

Processing – User Testing

Design Iterations, Processing

After I had finished my code, the next step was to take the sketch to the Weymouth House foyer. I connected my laptop to a TV that wasn’t in use and added an external webcam, which meant I had to recode where my camera would capture the video; I did this by entering the name of the camera and the frame rate at which it should capture. As well as the camera name, the screen size also had to be changed to fit the television screen, as the sketch appeared far too small.

This is the original code of the ‘void setup()’ function. There is no need to write the camera name, as it is selected automatically; I had also used a small screen size so that the sketch would run quicker.

originalCameraCode

As you can see, I changed all the screen sizes to ‘displayWidth’ and ‘displayHeight’; this automatically reformats the sketch to fit whatever screen it is being displayed on. The name for the external webcam was “HD Pro Webcam C920”, which thankfully caused me no further problems, so that was one less issue.

foyerCameraCode
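Since the actual code lives in the screenshots above, here is a rough reconstruction of what the foyer version of setup() would have looked like, based on my description; the Capture constructor that takes a camera name and frame rate is part of the Processing video library, but treat the rest as a sketch of the idea rather than a copy of my code:

```processing
import processing.video.*;
import gab.opencv.*;

Capture video;
OpenCV opencv;

void setup() {
  // Fit the sketch to whatever screen it is being displayed on
  size(displayWidth, displayHeight);
  // Name the external webcam explicitly and set the capture frame rate
  video = new Capture(this, 640, 480, "HD Pro Webcam C920", 30);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}
```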

FoyerTest2

Fortunately, as the two images show, the interactive piece was successful within the foyer: the blur tracked directly over the face without issue. Depth did not prove to be a problem either, as you can see the viewers are a fair distance away from the camera, another fortunate aspect I had been unsure about.

FoyerTest1

However, the increased size proved to be an obstacle for my code; I tried various changes to combat the issue, but unfortunately nothing seemed to work. The code still runs, but at a much slower rate than expected; if I had more time, I would find out how to combat this issue.

Because of this issue, I decided to record a screen capture on my computer so that I could show how the interactive piece was meant to run in the foyer. This is how I actually wanted it to look: as you can see, it tracks the face in real time without lag in any direction.
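With hindsight, one way I could have tackled the lag (a rough sketch of the idea, not code I actually ran) would be to run the face detection on a downscaled copy of each frame and scale the detections back up to the full screen, so the expensive OpenCV work happens on far fewer pixels:

```processing
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
int factor = 4;  // detect on a quarter-size frame to cut the per-frame cost

void setup() {
  size(displayWidth, displayHeight);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640 / factor, 480 / factor);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  // Shrink the frame before handing it to OpenCV
  PImage small = video.get();
  small.resize(640 / factor, 480 / factor);
  opencv.loadImage(small);

  image(video, 0, 0, width, height);
  noFill();
  stroke(255, 0, 0);
  float sx = width / 640.0, sy = height / 480.0;
  for (Rectangle f : opencv.detect()) {
    // Scale each small-frame detection back up to full-screen coordinates
    rect(f.x * factor * sx, f.y * factor * sy,
         f.width * factor * sx, f.height * factor * sy);
  }
}
```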

Processing – Successful Sketch

Design Iterations, Processing

After I rewrote the code on top of the ‘LiveCamTest’ example, the blur successfully covered the face directly, as opposed to the previous sketch, where it was askew. In this blog post, I am going to talk about the code I wrote and how it creates the facial blur successfully over the face.

The reason the blur works over the face is due to these lines of code:

Screen Shot 2015-01-26 at 04.39.58

The way the code works is to use a ‘get’ function to grab the square area of the detected face, then draw that square (or ‘rect’) back over the face at a lower resolution, all in real time. The ‘faceGrab’ image ‘grabs’ the detected face and is then resized to a tenth of its original resolution, using int(faceGrab.width*0.1) and int(faceGrab.height*0.1); drawing this tiny image back over the detection at full size creates the pixelated, blur-like effect (shown above).
Overall, I am extremely happy with the outcome, because the final design was exactly as I had imagined it, and there were few issues when it came to designing the piece.
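Since the screenshots carry the actual sketch, here is a rough reconstruction of the draw() loop described above; the ‘faceGrab’ name follows this post, but the rest is my reading of the gab.opencv ‘LiveCamTest’ pattern rather than the original file:

```processing
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  image(video, 0, 0);
  for (Rectangle face : opencv.detect()) {
    // Grab the square area of the detected face...
    PImage faceGrab = video.get(face.x, face.y, face.width, face.height);
    // ...shrink it to a tenth of its resolution...
    faceGrab.resize(int(faceGrab.width * 0.1), int(faceGrab.height * 0.1));
    // ...and stretch it back over the face, producing the pixelated 'censor' blur
    image(faceGrab, face.x, face.y, face.width, face.height);
  }
}
```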

Here is the whole code that I used to create the sketch:

finalCode1

finalcode2

finalcode3

Processing – Unsuccessful Sketch

Design Iterations, Processing

The original sketch I wrote was successful in that it detected faces and blurred them out accordingly. However, the blur was slightly askew, creating the image of a distorted face on top of another; if time had been short, I would have settled for this outcome, but I had enough time to review where I was going wrong and correct the issue.

Reviewing the code, I could not find an exact reason why the sketch was not working, but I believe it was down to the facial tracking area of the code. Fortunately, this did not prove much of an obstacle, because it was simply a matter of moving my code over to the ‘LiveCamTest’ example code.

Here is the code for this sketch, showing the successful facial tracking but unsuccessful (askew) facial blur:

unsuccessfulCensorSketch1

unsuccessfulCensorSketch2

unsuccessfulCensorSketch3

Processing – OpenCV

Processing

I was advised by my lecturers to import different contributed libraries into Processing if I wanted to explore the different types of sketches the programme is capable of. Because I wanted to use the camera as part of my interactive pieces, I imported ‘OpenCV’, as it contains good examples of camera sketches, such as Face Detection, Live Cam Test and Find Edges.
OpenCV is described on the website as: “..an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.” – (OpenCV, 2015).

What is good about using contributed libraries is that you can write a single line, like “import gab.opencv.*;”, and the library works according to the code you write around it. I will definitely use this for my final interactive piece in Weymouth House.
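As a minimal illustration of how little code the library needs (this is based on the pattern of the library’s bundled Face Detection example rather than my own sketch, and “test.jpg” is a stand-in filename), a single import line gives you face detection on a still image:

```processing
import gab.opencv.*;
import java.awt.Rectangle;

OpenCV opencv;
PImage img;

void setup() {
  size(640, 480);
  // "test.jpg" is a stand-in for an image in the sketch's data folder
  img = loadImage("test.jpg");
  opencv = new OpenCV(this, img);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  image(img, 0, 0);
  noFill();
  stroke(0, 255, 0);
  // Draw a rectangle around every face the cascade finds
  for (Rectangle face : opencv.detect()) {
    rect(face.x, face.y, face.width, face.height);
  }
}
```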

Reference:

http://opencv.org/about.html

Processing – Image Pixelation/Manipulation

Processing

“A digital image is nothing more than data — numbers indicating variations of red, green, and blue at a particular location on a grid of pixels. Most of the time, we view these pixels as miniature rectangles sandwiched together on a computer screen. With a little creative thinking and some lower level manipulation of pixels with code, however, we can display that information in a myriad of ways..” – (Processing, 2008)
In this workshop, we worked with the pixels of an image and explored how to manipulate them in basic but effective ways.
The first thing to do is create a folder for the sketch and, within that, a ‘data’ folder for the source images, saving whichever images I wished to use (I am going to use two similarly sized images: two “The Dark Knight Rises” posters).

The next thing we did was enter the code below:

imageBrushcode

This code uses ‘PImage’, which works similarly to a function but is for holding and creating images; here, the code gives the image a brush-style effect.
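For anyone reading without the screenshot, a brush effect along these lines appears in the Processing pixels tutorial; this is a rough reconstruction of the idea rather than the workshop code, and “poster.jpg” is a stand-in filename:

```processing
// Brush-style effect: each frame, sample the source image's colour
// at a random point and paint a soft dot of that colour there.
PImage source;

void setup() {
  size(640, 480);
  // "poster.jpg" is a stand-in for an image in the data folder
  source = loadImage("poster.jpg");
  source.resize(width, height);
  background(255);
  noStroke();
}

void draw() {
  int x = int(random(width));
  int y = int(random(height));
  // get() reads the pixel colour at (x, y) from the source image
  color c = source.get(x, y);
  fill(c, 120);
  ellipse(x, y, 10, 10);
}
```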

We also learnt how to put an effect over an image that makes it look almost unstable; this is done by using an integer step to create multiple small squares, each given a random offset so it jitters around within a small area.

unstableImagecode

http://giant.gfycat.com/ConsciousConfusedAgama.gif
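Again as a rough reconstruction of the idea (not the workshop code, with “poster.jpg” as a stand-in filename), the ‘unstable’ effect rebuilds the image from a grid of small squares and jitters each one every frame:

```processing
// 'Unstable' effect: the image is rebuilt every frame from small squares,
// each drawn at a slightly randomised position so the picture shakes.
PImage source;
int step = 10;  // size of each square

void setup() {
  size(640, 480);
  // "poster.jpg" is a stand-in for an image in the data folder
  source = loadImage("poster.jpg");
  source.resize(width, height);
  noStroke();
}

void draw() {
  background(0);
  for (int x = 0; x < width; x += step) {
    for (int y = 0; y < height; y += step) {
      color c = source.get(x, y);
      fill(c);
      // Jitter each square by a few pixels in both directions
      rect(x + random(-3, 3), y + random(-3, 3), step, step);
    }
  }
}
```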

Reference:

https://www.processing.org/tutorials/pixels/