Applied Design – Xcode


For this unit we are going to be working with Xcode, a piece of software created by Apple to help developers create apps. Apple describes it on their website as follows: “Xcode provides everything developers need to create great applications for Mac, iPhone, and iPad. Xcode brings user interface design, coding, testing, and debugging all into a unified workflow. The Xcode IDE combined with the Cocoa and Cocoa Touch frameworks, and the Swift programming language make developing apps easier and more fun than ever before.”
Because I have worked with Processing, I can see similarities between the two, which hopefully means that I will pick this up much more quickly than before. It will be interesting to see my final outcome.

Applied Design – Salisbury Cathedral’s Magna Carta Exhibition

Applied Design

For this unit, I was given the task of creating an app for Salisbury Cathedral’s Magna Carta exhibition. The requirements involve the ability to interact with the user as they walk around the cathedral, with the possibility of geolocation. We will be put into groups of seven or eight, and each of us will individually design something for the app; some example ideas shown in the presentation were a puzzle, a game and a treasure hunt.

To gain a better understanding of what the Magna Carta is, we are going on a trip to Salisbury Cathedral to visit the exhibit, and I will look online as well for inspiration. Personally, I am looking forward to this unit and I hope to maximise my knowledge of Xcode to get the best out of the app; it will be interesting to see what my final outcome is.

Visitors view the Magna Carta in the Chapter House

References:
, 2015. Visiting Magna Carta. [online] Available from: [Accessed 6 Feb. 2015].
, 2015. Salisbury | Magna Carta Trust 800th Anniversary | Celebrating 800 years of democracy. [online] Available from: [Accessed 6 Feb. 2015].

Main Blog Posts


The main blog posts I would like my lecturers to read are listed below:

Processing – Reflection

Design Iterations, Processing

Drawing the Processing/Design Iterations blog to a close, I want to reflect on the whole unit and think about what I could have improved, as well as looking at what I thought were my strengths.

I will start by talking about my original idea. I wanted to include sound that would play whenever a participant moved in either direction, and I imported ‘minim’ as it was the advised library to download for this type of sketch. Personally, I think this would have been a superior interactive piece in comparison to my final completed sketch. Unfortunately, time constraints forced me to abandon the idea, because I just could not learn the code on my own within the time I had left.
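For the record, a rough sketch of how that abandoned idea might have looked is below. This is only a hypothetical outline, not code from the project: it assumes Processing’s video library and ‘minim’, a sound file named “move.mp3” in the sketch’s data folder, and a crude frame-differencing threshold that I have chosen arbitrarily.

```
// Hypothetical sketch of the abandoned idea: play a sound
// whenever enough pixels change between two camera frames.
import processing.video.*;
import ddf.minim.*;

Capture cam;
Minim minim;
AudioPlayer sound;
PImage prev;  // previous frame, for comparison

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  minim = new Minim(this);
  sound = minim.loadFile("move.mp3");  // assumed file name
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);
    if (prev != null && motionAmount(cam, prev) > 500000) {
      sound.rewind();
      sound.play();
    }
    prev = cam.get();
  }
}

// Crude motion measure: sum of per-pixel brightness differences.
float motionAmount(PImage a, PImage b) {
  float total = 0;
  a.loadPixels();
  b.loadPixels();
  for (int i = 0; i < a.pixels.length; i++) {
    total += abs(brightness(a.pixels[i]) - brightness(b.pixels[i]));
  }
  return total;
}
```

Even in outline form it is clear there is a fair amount to get right (thresholds, sound retriggering), which is partly why the idea did not fit into the time I had.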

Firstly, I admire the fact that Processing is open source, as well as its ability to create such detailed art pieces; it really is a well-made piece of software. As for myself, in terms of learning Processing, I think I understood the software fairly well. I had an issue with using the ‘minim’ library, as we hadn’t covered it in the workshop, but other than that there weren’t many issues that proved too time-consuming. If I were to do the unit again, I would certainly learn more of the external libraries like ‘minim’, ‘openCV’ and ‘blobDetection’, because they offer very good examples which can be developed further (as I did with facial detection).

Time management with my blogging is another area I could have been more focused on. Luckily, I made sure to log what code I had written and took screenshots of the sketches so I could use them for later reference, but in future I should focus more on the detail of my blogging and use my time more efficiently.

When the time came to set up the sketch in the foyer, I connected the laptop to the television without much issue, and the whole process of interaction within the space caused little trouble, other than the poor latency that occurred while connected. However, it did work and responded well to the environment I had placed it in, i.e. the lights and passers-by that could have disrupted the interaction.

As for the sketch itself, overall I am pleased with the outcome. It worked exactly the way I wanted it to, tracing every face visible to the camera and ‘censoring’ it out. As for improvements, I think adding sound would have created a better atmosphere, but this relates more to my previous sketch, where I wanted to create sound via movement, which unfortunately proved too difficult to develop within my time frame.

Processing – Feedback

Design Iterations, Processing

After I exhibited my interactive piece within the space of Weymouth House, I asked a few participants to give me a short paragraph about my idea and what they thought of it, as well as how it could have been improved.

“I feel like the blurring of the faces in its simplicity holds a powerful message, as it not only dehumanises individuals but also shows, in their interaction with the installation, that we are the ones dehumanising ourselves. As a media student and somewhat of an addict to digital technology, I find this concept very relatable.” – Rebecca Goodchild, Student.

“I found your interactive piece provides an interesting commentary on the current state of technology, and the way that it controls our identity.”

Reading this quote, the fact that one of the concepts and themes behind this piece has been conveyed clearly through my sketch, without any explanation, is a very good sign that I have executed my idea successfully, which I am pleased with.

“By blurring out my face, the piece made me consider how technologies create a new identity for me that may or may not be representative of my actual self.” – Lawrence Holmes, Student.

Again, this quote is interesting as it shows that my interactive piece gives the user an ambiguous view of identity through technology. I think this is actually a positive thing: it leaves the user questioning whether or not this is something good to have.

When asked what I could improve, people suggested developing the concept of identity further; one viewer suggested that blurring out the entire body would be the next step, which I have to agree with, and I would do this if I had more time to complete the sketch.

Processing – User Testing

Design Iterations, Processing

After I had finished my code, the next step was to take the sketch to the Weymouth House foyer. I connected my laptop to a TV that wasn’t in use and added an external webcam, which meant I had to recode where my camera would capture the video. I did this by entering the name of the camera and the frame rate at which it should capture. As well as the camera name, the screen size also had to be changed to fit the television, as it appeared extremely small.

This is the original code of the “void setup” function. There is no need to write the camera name, as it is chosen automatically; I had also used a small screen size so that the sketch could run more quickly.


As you can see, I have changed all the screen sizes to ‘displayWidth’ and ‘displayHeight’; this code automatically reformats the sketch to fit whatever screen it is being played through. The camera name for the external webcam was “HD Pro Webcam C920”, which caused me no further problems, so that was one less issue.
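To make the change concrete, here is a rough reconstruction of what the adjusted “void setup” might have looked like. This is a sketch under assumptions, not the project’s actual code: it assumes Processing’s video library, and the capture resolution and frame rate here are placeholder values.

```
import processing.video.*;

Capture cam;

void setup() {
  // fill whatever screen the sketch is displayed on,
  // instead of a fixed size like size(320, 240)
  size(displayWidth, displayHeight);

  // name the external webcam explicitly and give a capture
  // frame rate, rather than letting Processing pick a camera
  cam = new Capture(this, 320, 240, "HD Pro Webcam C920", 30);
  cam.start();
}
```

The named-camera constructor is what lets the sketch pick the external webcam over the laptop’s built-in one, while ‘displayWidth’/‘displayHeight’ handle the television’s larger screen.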



Fortunately, as the two images show, the interactive piece was successful within the foyer: the blur tracked directly over the face without issue. Depth did not prove to be a problem either, as you can see the viewers are a fair distance away from the camera, another fortunate aspect that I had been unsure about.


However, the increased size proved to be an obstacle for my code. I tried various approaches to combat this issue, but nothing seemed to work; the code runs, but at a much slower rate than expected. If I had more time, I would find out how to fix this. Shown below:

Because of this issue, I decided to record a version through my computer screen so that I could show, more or less correctly, how my interactive piece should look in the foyer. This is how I actually wanted it to look; as you can see, it tracks the face in real time without lag in any direction.

Processing – Successful Sketch

Design Iterations, Processing

After I rewrote the code on top of the ‘LiveCamTest’ example, the blurring successfully covered the face directly, as opposed to the previous sketch, where it was askew. In this blog, I am going to talk about the code that I wrote and how it successfully creates the blur over the face.

The reason the blur works over the face is due to these lines of code:

[Screenshot of the faceGrab code]

The way the code works is to use a ‘get’-type function to grab the square area of the face, then put that square (or ‘rect’) back over the detected face at a lower resolution, all in real time. The ‘faceGrab’ image ‘grabs’ the detected face and is then resized to a lower resolution (“int(faceGrab.width*0.1), int(faceGrab.width*0.1)”), so the square over the detection is output at 0.1 of the original resolution, creating the pixelated, blur-like image (shown above).
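The resize-to-10%-then-scale-back-up idea can be shown in isolation. The snippet below is my own plain-Java illustration of that technique, not the sketch’s code: it pixelates a grayscale image stored as a flat array using nearest-neighbour downscaling, so every 10×10 block ends up sharing one value, which is exactly the blocky “blur” seen over the faces.

```java
// Plain-Java illustration of the faceGrab pixelation idea:
// shrink a region to 10% of its size, then scale it back up,
// so each 10x10 block of pixels shares a single value.
public class Pixelate {
    // Nearest-neighbour downscale-then-upscale of a w x h grayscale image.
    static int[] pixelate(int[] px, int w, int h, double factor) {
        int sw = Math.max(1, (int) (w * factor)); // like int(faceGrab.width*0.1)
        int sh = Math.max(1, (int) (h * factor));
        int[] small = new int[sw * sh];
        for (int y = 0; y < sh; y++)
            for (int x = 0; x < sw; x++)
                small[y * sw + x] = px[(y * h / sh) * w + (x * w / sw)];
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y * w + x] = small[(y * sh / h) * sw + (x * sw / w)];
        return out;
    }

    public static void main(String[] args) {
        // gradient test image: every pixel is different before pixelation
        int[] img = new int[100 * 100];
        for (int i = 0; i < img.length; i++) img[i] = i % 256;
        int[] blurred = pixelate(img, 100, 100, 0.1);
        // after pixelation, the whole first 10x10 block is one flat value
        System.out.println(blurred[0] == blurred[9]
                        && blurred[0] == blurred[9 * 100 + 9]);
    }
}
```

In the actual sketch, Processing’s ‘get’, ‘resize’ and ‘image’ calls do this work on a PImage; the logic above is just the same operation written out by hand.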
Overall, I am extremely happy with the outcome, because the final design was exactly as I had imagined it and there were few issues when it came to designing the piece.

Here is the whole code that I used to create the sketch: