Final Changes made from User Testing

Feedback given

The feedback was broadly positive; the app was described as fun, well designed, and appropriate to the task. However, it was suggested that the vocabulary currently in use was too mature for the audience and needed simplifying. We had already removed the parts deemed too explicit for the age group during initial client feedback, so this was not an issue. It was also suggested that we add a timed transition effect for the factual information that displays when the correct answer is chosen, so that the user is more inclined to read the information instead of skipping through it.

Salient points to consider

Broadly speaking, we identified our current goals as making the content appropriate and readable for the age group, with a secondary goal of adding a transitional effect.

Implementation of new changes

We reworded the content to make it friendlier and more readable, with a simpler vocabulary. We removed some of the more technical details from the facts, which had the added benefit of shortening them overall, allowing a bigger font to be used and adding to the appropriateness for the age range. The salient points within the facts remained valid and relevant to the Magna Carta and the cathedral. We were unable to add the optional features of reminding users to respect times of prayer, and of transitional effects, due to time constraints and a need to prioritise project requirements.

The app was finished with final tweaks to the design and layout, and sounds were added to accompany the new content. A check against our MoSCoW analysis showed that all priority features had been completed successfully, alongside many preferred and suggested features, and that we had not included any features we had justified leaving out.

Start Screen vs End Screen

After a group meeting and a substantial amount of discussion, we all agreed on Soundbites as the name. We believed this suited our app because it is a soundscape-based game with a clear narrative built around our fictional character, a symbolic representation of the Magna Carta itself. In keeping with the empathy we were trying to create through the Magna Carta man, we included him in both the start and end screens of the game, as we considered him a fundamental part of the user's experience. With all this in mind, we decided to have the game enclosed within the start and end screens, with the character making appearances throughout. We incorporated the idea of 'bites' into the name because the sounds are short, which carries two meanings: firstly, from a data perspective, they consist of memory in the form of bytes; secondly, and more importantly, our target audience would recognise that the sound clips are short, comparable to bites in that the experience is made up of multiple small pieces rather than disappearing in one go. This also ties in with the fact that each correct answer is rewarded with a new slice of cake.

The start screen was designed so that the user instantly encounters the character in an interesting way. We used Illustrator's tools to create a 3D title that looks as though it is resting on a plate, in the same way the cake does on the end screen. The title was then made to look like it was being eaten by the character, to make the game seem more humorous and appealing in a friendly way.
start screen
The end screen, like the majority of the app, was created in Illustrator because it works with vectors instead of pixels, allowing the designs to be produced with responsiveness in mind. The concept of the end screen is an elated character: by completing the game, the user has got him a cake for his birthday. The birthday (marking the 800th anniversary of the Magna Carta) is indicated by the number 800 on top of the cake. The cake is also a continuous theme throughout, as each level completed is rewarded with a new slice, building up into an entire birthday cake at the end. Alongside the cake, the character wears a party hat to reinforce the fact that it is his birthday.

magna carta man end screen

Changes made in response to Client Feedback

Content changes from feedback

Age appropriateness

As per feedback from the client, we removed the references to the murder of Thomas Becket and to adultery within the monarchy, as these were inappropriate for the age range. We plan to follow up our assessment of age appropriateness with user testing and by gaining feedback from the education officer at the Cathedral and a primary school teacher, to ensure the content and mechanism will work well with the age range.

Storyline/Narrative structure (sectioning)

The introduction of more characters and more tangible features of the Cathedral, to replace previous ideological or historical concepts meant we could now section the content in the application into characters of the Magna Carta legacy, and features of the Cathedral. Introducing this change allowed us to improve upon the narrative of the app, which was supported by the mascot we had designed. We updated the dialogue of the mascot to support the new changes in content.

Content changes

Visiting the client for a meeting also meant visiting the cathedral, and doing so alongside the client offered the chance to look for more inspiration for content. Looking around, we found plenty of inspiration, which we built into the app as new questions and facts relevant to the cathedral as a historical building. As a rule, we ensured that this new content was less ideological (for example, the introduction of trial by jury influenced by the Magna Carta) and more tangible. This not only created a connection to something that could be looked at, found, or touched, unlike more ideological or historical concepts, but also meant it was better understood by the age range.

The expert knowledge of the client helped us to produce content that would have been harder to create relying solely on research; we were able to include lots of anecdotes and quirky stories relating to the Cathedral and the Magna Carta legacy. This primarily involved looking at architectural features with a story or rationale behind them, and we were able to forge a link between including these in the app and encouraging the user to go and search for these parts of the cathedral, directly creating a strong connection between the user's experience of the app and the building itself.

Sound style changes

We chose a selection of less ambiguous sounds that we feel can be instantly identified, returning the user to the main challenge of the mechanism: deciding how the sound provided in the app relates to the phrases provided as answers and explanations. Looking at new sounds, however, also required ensuring that the copyright was suitable for our planned usage. The sounds chosen were all available under a Creative Commons licence for non-commercial usage with attribution; however, we have submitted these details to the RedBalloon legal team for confirmation before we implement them in the project.

Proposed User Feedback

In order to ensure that our current prototype was appropriate for the age range we conducted some user testing by asking the education officer from Salisbury Cathedral, and a primary school teacher, for their thoughts on aspects of the app.

We submitted a video of the prototype in action and asked for feedback on the app as a whole. We explained what the video showed as well as a brief summary of our intentions, in order to make it possible to gauge the prototype against our intentions. We explained also that, since it was a prototype, there were small quirks in the design and layout at times, in order to avoid feedback on issues we were aware of already.

We chose the people we did for feedback as they were professionals in the field relevant to us, and had a broad understanding of the age group due to working with them frequently. We did not ask for feedback from the actual target audience due to logistical issues with being able to find and survey a group of 7-10 year olds as well as a potential inability to provide feedback for the higher level concepts we were still in the process of exploring.

We asked for feedback on how the idea would work in general for 7-10 year olds, and specifically on the mechanic of the idea, whether the user would be able to work out how the game operates, the content itself, and the general concepts and ideas explored. We also asked for any suggestions in terms of features or content.

We plan to use the feedback in order to inform the final design iteration of the application before we release it, and as a confirmation that the final build is appropriate for the audience we plan to have use it.

Client Feedback

Having evaluated that our workflow and development process were working well, and having discussed the current stage of the project with both RedBalloon and the rest of the team, we thought it would be a good idea for me to go to Salisbury and visit the client to gain feedback on the content and mechanism of the application, as part of the critical final evaluation suggested by RedBalloon. We considered the feedback given to us at this point crucial in realising the final release of the app in four weeks' time.

Feedback given regarding mechanism

Feedback given regarding the mechanism of the app was broadly similar to that given by RedBalloon; the client was happy with the proposed design regarding how the user would interact with the concept, and all three parties agreed that the concept was driven mostly by content – including the sounds, the answers, and the factual relevance.

Feedback given regarding content

The client suggested a variety of points of feedback relating to the way in which content had been produced:

  • Make the sound clues more literal and less ambiguous, for example by using animal sounds, action sounds, or emotional sounds, and be specific about the aims of the sounds used. We interpreted this as using sounds in a more direct manner: the user should not think “what sound is that?” but “how is that sound relevant?”, so as to avoid a breakdown in interaction between the user and the app caused by ambiguity in the sounds.
  • Link facts to the exhibition, the Magna Carta, or Salisbury Cathedral specifically. In order to continue close adherence to the initial brief, we needed to ensure that aspects of the app were closely linked to the subject matter. It was suggested that we achieve this by prompting the user to go and explore an aspect of the physical experience within the facts delivered by the app; for example, suggesting they search for a tomb upon correctly answering a question regarding the passing of a key figure.
  • Ensure the content is appropriate for the target age range by making sure it is not too explicit or violent, sanitising the aspects of history we had drawn upon with some discretion.

We took away new intentions for the final iteration of development from this feedback, which was very helpful in refining our final tasks for completing the project. Our new priorities included fostering a strong link between the general narrative of the concept and the exhibition, the cathedral, and the Magna Carta, in order to better meet the main points of the brief. In addition, we had to focus on the coherence of the sound effects and the content itself, both in its feasibility and relevance and in its appropriateness for the age group.

Testing on a device.

As a group we feel it is important to test our app on a physical device. The Xcode development environment allows the software to be tested on an iPhone simulator, but testing on a physical device gives a more accurate representation of how the app will ultimately function. Things such as how easy the user interface is to navigate are more easily assessed on a physical device, as the interaction with the software is more representative of the final desired outcome.

In order to get our code built to a physical device, we first had to jump through several administrative hoops regarding the Apple developer account system. This involved getting approved through the university, as well as setting up provisioning profiles and getting a specific device approved for application testing. This proved more difficult than expected at first, and we suffered some minor delays in being able to test our application.

When we did manage to get our software onto a physical iPhone, we were faced with additional problems. Our work on setting up the constraints in Xcode had mainly focused on an iPhone 6-sized canvas. Unfortunately, we had often fallen into the pitfall of using static, number-based measurements when laying out the application's elements. This meant that on the specific canvas size we were designing for, the application looked as intended, but on any smaller screen the elements were arranged incorrectly or sized too large for the display.

To fix this, we went through and redid much of the constraints work within our application. Using techniques such as proportional widths and heights, aspect ratios, and pinning elements to each other, we achieved a design that responds to whatever size of canvas it is set to fill. This allows the application to appear properly, and as we designed it, on a range of devices from the iPhone 4S to iPads.
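As an illustration of these techniques, the same constraints can be expressed with the NSLayoutConstraint API of the time. This is a hedged sketch rather than our exact code (most of our constraints were set up in Interface Builder), and `playButton` here is a hypothetical element:

```swift
import UIKit

class SketchViewController: UIViewController {
    let playButton = UIButton()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(playButton)
        // Required so Auto Layout uses our constraints, not the old autoresizing mask.
        playButton.setTranslatesAutoresizingMaskIntoConstraints(false)

        // Proportional width: the button is always 90% of the superview's width.
        view.addConstraint(NSLayoutConstraint(item: playButton, attribute: .Width,
            relatedBy: .Equal, toItem: view, attribute: .Width,
            multiplier: 0.9, constant: 0))

        // Aspect ratio: the button's height is always a quarter of its own width.
        playButton.addConstraint(NSLayoutConstraint(item: playButton, attribute: .Height,
            relatedBy: .Equal, toItem: playButton, attribute: .Width,
            multiplier: 0.25, constant: 0))

        // Pinning: centred horizontally, 20 points from the bottom edge.
        view.addConstraint(NSLayoutConstraint(item: playButton, attribute: .CenterX,
            relatedBy: .Equal, toItem: view, attribute: .CenterX,
            multiplier: 1, constant: 0))
        view.addConstraint(NSLayoutConstraint(item: view, attribute: .Bottom,
            relatedBy: .Equal, toItem: playButton, attribute: .Bottom,
            multiplier: 1, constant: 20))
    }
}
```

Because every constraint is expressed relative to another view rather than as a fixed frame, the same layout scales correctly from the iPhone 4S screen up to an iPad.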

As for the other functions of the application, we initially had issues with sounds not playing; however, this was fixed easily, as it was simply a file storage issue.

Now that the constraints have been redone, the application works well on a physical phone, and we are happy that its functionality and aesthetics transfer well to different device sizes.

A snapshot of the application running on an iPhone 4S, after constraints have been fixed.



Evaluating the Workflow

With four weeks left before the project deadline, evaluating our workflow and documentation strategy was key. A robust workflow system is important: in industry, if another team were to pick up development of a project from where it was delivered by a previous team, they would need to know how the old team worked in order to continue developing the project effectively. This includes having complete access to all original assets used in the project, as well as all notes, documents, and blueprints, in order to build a comprehensive picture of the original development strategy and build upon it.

Achieving this state of overall project comprehension is a key reason why documentation is so important. If every step of the development process is documented, for example why a decision was made, why a feature was not implemented, or how a feature is to be built, then a new team can easily build an overview of the process and save time by knowing what a previous team has already tried and explored. Additionally, documentation of prior development helps when planning further development, as additions can be designed to fit in easily with existing work. This also includes information such as program versions, since there are differences between older and newer versions of Swift, for example (cite?), file formats, and other format specifications, such as the resolution of images and videos.

Finally, the systems used for workflow and documentation need to be accessible and translatable for a new team to make use of them. A proprietary workflow system may have benefits in terms of flexibility and privacy, but it needs to be documented well enough for a new team to use, and it could be argued that such a system has the disadvantage of being difficult to migrate into a new setup. Documentation needs to be carried out in a systematic manner that is easy to read and access, through some form of hierarchy or signposting.


File/Asset management

We had originally opted to use our MediaWiki to host files as it was adaptable, permission-controlled, and already hosted the rest of our main content for the project. However, we eventually switched to Google Drive (cite) to host the project's asset files, as we found that the MediaWiki did not work well as a main channel for quickly sharing files: it was slow to access, supported a limited range of formats, and had a poor system for indexing and accessing uploaded content. Arguably this could have been remedied for longer-term usage with custom modifications and plug-ins, but that was not appropriate for the situation at hand. Google Drive allowed easy upload and access of files thanks to its well-designed interface, and worked natively on the variety of devices held by the team, allowing ubiquitous access to files and assets as needed. As a standard service, it was also already familiar to the team for the purpose of sharing files, unlike the wiki. We additionally employed other methods such as USB sticks, email, and direct file transfers for more acute, situational usages where appropriate.

Communication

Primarily we communicated in person through arranged working sessions; outside of these we used a private group chat on Facebook. For similar reasons to Google Drive, we opted for this because it was permission-controlled, offered many useful features, and, most importantly, was already ubiquitous: we all had Facebook integrated with our phones, laptops, tablets, and various other devices, being frequent users. We feel this perpetual contact offered significant benefits for the project in terms of frequent, short progress updates and the sharing of minute details, which helped greatly in minimising gaps in communication. However, due to the relatively un-indexed nature of communications on Facebook, we also made good use of a page on the Wiki to post important updates and synthesise meeting notes, providing a steadfast central place from which the most important issues could be made clear to the team. The vast majority of project-defining decisions were made through weekly meetings and group working sessions held multiple times a week.


Task management

We continued to use Trello as an online Scrum-board equivalent for the duration of the project to keep an overview of tasks in general, and made use of various pages on our Wiki to discuss specifics, such as what was happening on a day-by-day basis. We assigned tasks to ourselves and each other, which worked well as it fostered an element of responsibility and accountability, meaning tasks were completed on time as expected. This also helped us know which tasks were in progress, and how and when targets would come together.



Meeting notes

We documented meetings with the project manager using OneNote (cite) to quickly note what was said against each agenda item. This allowed indexed, hierarchical, and accessible notes to be produced, which were then synthesised onto the Wiki and distributed to any other necessary channels, with changes to layout and content to improve accessibility for other team members. Tasks for the week were distributed to a page on the Wiki and to each team member, and client feedback was delivered to the relevant areas (depending on where the feedback was directed) so that changes could be made.

Project notes

We used a system of making notes individually, on the task at hand, which would then be discussed at team meetings if need be, or developed into fuller documentation on the Wiki. The Project Manager would collate any notes and thoughts into relevant areas on the Wiki for easier access throughout the duration of the project. Notes were also developed into fuller documentation to keep a development log of blog posts regarding the project on WordPress.

Working documents

Working documents were developed using software such as Microsoft Word, Calligra Words, Word Online, OneNote, Google Docs, and similar packages, for their superior editing and proofing tools compared to the plain-text entry of the MediaWiki software, and then transferred to the Wiki when finished so that they were easily readable and accessible to all members of the team as a central part of the project. The Wiki became an invaluable cornerstone of the project's documentation: a structure soon developed listing all initial resources and assets from the client, the University, and individual team members, including working documents from which features would be built and assessed, documents showing content for the app, contact details for team members, roles, documentation for the use of the Wiki itself, and to-do lists.

Adding Design Features into the App

After a long period of structural development we reached a point where everything was broadly working and the skeleton of our code was complete, so we needed to start designing the look of the app. The design side of our team had made mock-ups of how they thought the app would best look in relation to the guidelines, and it was now our job to implement these designs into a functional app.

The first major part of the design was simply to add a background colour to all of the views. At first I thought this would be a simple task, as I assumed there would be a straightforward colour picker on the interface that would automatically change the colour of the view, but unfortunately there wasn't. After a bit of research I found that to set the background colour you need to set the backgroundColor property of the particular view to a UIColor; an example of the code used is below:

self.view.backgroundColor = UIColor.redColor()

The next task was to reformat the design of the buttons. The positioning of the buttons was already done, as it was included in the structural design of the app. The buttons in the mock-ups spanned the full width of the view, with a proportional height, and were coloured with the blue supplied in the design guidelines, as you can see below:


Achieving this style of button relied heavily on constraints, as the buttons had to be dynamic, changing size depending on the screen size. This was especially true in the case of the question view (seen below), where there is a cluster of four buttons: the spaces between the buttons had to stay relatively constant, while the height of the buttons had to change in proportion to the screen size without becoming too small. The rest of the button design was simple, consisting of colour and opacity, which are accessed and changed in the user interface.
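A sketch of how such a cluster could be constrained in code; we actually configured the equivalent constraints in Interface Builder, and the `answer…` outlets and multiplier values here are hypothetical:

```swift
import UIKit

class QuestionSketchViewController: UIViewController {
    @IBOutlet var answerA: UIButton!
    @IBOutlet var answerB: UIButton!
    @IBOutlet var answerC: UIButton!
    @IBOutlet var answerD: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()

        // The first button's height is a proportion of the screen, never a fixed
        // number, so the cluster scales from the iPhone 4S up to larger displays.
        view.addConstraint(NSLayoutConstraint(item: answerA, attribute: .Height,
            relatedBy: .Equal, toItem: view, attribute: .Height,
            multiplier: 0.12, constant: 0))

        // The other three buttons simply match the first one's height...
        for button in [answerB, answerC, answerD] {
            view.addConstraint(NSLayoutConstraint(item: button, attribute: .Height,
                relatedBy: .Equal, toItem: answerA, attribute: .Height,
                multiplier: 1, constant: 0))
        }

        // ...while the gaps between consecutive buttons stay a constant 8 points.
        let pairs = [(answerA, answerB), (answerB, answerC), (answerC, answerD)]
        for (above, below) in pairs {
            view.addConstraint(NSLayoutConstraint(item: below, attribute: .Top,
                relatedBy: .Equal, toItem: above, attribute: .Bottom,
                multiplier: 1, constant: 8))
        }
    }
}
```

Tying the button heights to the view's height, while keeping the inter-button gaps constant, is what lets the spacing "stay the same" while the buttons themselves grow and shrink with the screen.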

Finally, the last thing I had to do was add in content created externally, for example turning the play button into an image, and adding different variations of our 'Magna Carta man', such as an image of him with his thumbs up when the user gets the correct answer. This process also involved using constraints for positioning and for keeping the aspect ratio of the images. The image on the play button is below:



The final design of the app is below, which includes all of the design features originally intended, and is being previewed on an iPhone 4S simulator.


Implementing the JSON.

After establishing that we’d be using JSON to store data for our questions, it was time to actually implement this in the code.

The bulk of the code written to implement JSON data storage.


This is the main section of code that we developed to make data storage possible with JSON. As discussed elsewhere, due to limitations of the environment in which the app will be used, we are opting to store our data locally. This code therefore essentially locates the local JSON file and parses it as an array of objects of the question class we declared at the start of the project's development.

In the main ViewController for the quiz section of the app, an array is declared and set to the result of the loadQuestions() function above. The code in the main body of the app essentially remains the same, as it ultimately interfaces with the same array of objects, but more code has been implemented on the back end to create this array from our JSON file, a snippet of which can be seen here.
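Since the code itself appears only as an image, here is a hedged sketch of what a loadQuestions()-style function looks like using the pre-Swift 2 NSJSONSerialization API. The Question properties and the "questions.json" filename are assumptions for illustration, not our exact implementation:

```swift
import Foundation

// Assumed shape of the question class declared at the start of the project.
class Question {
    let sound: String
    let answers: [String]
    let correctIndex: Int
    let fact: String

    init(sound: String, answers: [String], correctIndex: Int, fact: String) {
        self.sound = sound
        self.answers = answers
        self.correctIndex = correctIndex
        self.fact = fact
    }
}

func loadQuestions() -> [Question] {
    var questions = [Question]()

    // Locate the bundled JSON file: data is stored locally, with no network fetch.
    if let path = NSBundle.mainBundle().pathForResource("questions", ofType: "json") {
        if let data = NSData(contentsOfFile: path) {
            // Parse the raw bytes into Foundation objects (Swift 1.x API shape).
            let parsed: AnyObject? = NSJSONSerialization.JSONObjectWithData(data,
                options: nil, error: nil)
            if let items = parsed as? [[String: AnyObject]] {
                for item in items {
                    // Only keep entries where every expected field is present.
                    if let sound = item["sound"] as? String,
                       answers = item["answers"] as? [String],
                       correctIndex = item["correctIndex"] as? Int,
                       fact = item["fact"] as? String {
                        questions.append(Question(sound: sound, answers: answers,
                            correctIndex: correctIndex, fact: fact))
                    }
                }
            }
        }
    }
    return questions
}
```

On failure (missing file, malformed JSON) the sketch simply returns an empty array, so the ViewController's array is always in a usable state.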

Snippet of the JSON file.


File storage, P-lists and JSON.

Having previously discussed here the benefits of separating the data for the app's questions from the functionality in the code, the question still stood of how best to achieve this goal.

Speaking to our tutors, and doing our own research as well, there appeared to be several different options to explore which could provide a suitable solution in the context of our work.

P-lists

P-lists, more formally referred to as property lists, are files used in OS X and iOS programming to store serialised objects. They are generally used to store a user's settings for an app, but can be leveraged to store various information about applications. In our current context, a property list file could be used to store information about each question contained within our app.

These files are most commonly formatted in either an XML or a binary form, and can be edited in a text editor. Additionally, the Xcode environment has built in support for editing property lists. These files can be viewed in a hierarchical manner and edited in a similar fashion.
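For illustration, a single question stored as an XML property list might look like this; the keys and values here are hypothetical, not our final structure:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
    <dict>
        <key>sound</key>
        <string>bells.mp3</string>
        <key>answers</key>
        <array>
            <string>The cathedral bells</string>
            <string>A dinner gong</string>
        </array>
        <key>correctIndex</key>
        <integer>0</integer>
    </dict>
</array>
</plist>
```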

An example of viewing a P-list file in Xcode.


JSON

JSON, standing for JavaScript Object Notation, is a commonly used, lightweight data interchange format (Introducing JSON, 2015). The format uses human-readable text to transmit data comprised of pairs of attributes and values. Its most common use is to transmit information between servers and web applications, similarly to the XML format. JSON was originally derived from JavaScript, a scripting language widely used on the web. Despite this, the JSON format itself is language-independent, meaning JSON data can be created and interpreted in many different programming languages.
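For illustration, a single question expressed in JSON might look like this; the field names and content are hypothetical:

```json
[
    {
        "sound": "bells.mp3",
        "answers": ["The cathedral bells", "A dinner gong"],
        "correctIndex": 0,
        "fact": "Salisbury Cathedral houses one of the oldest working clocks in the world."
    }
]
```

The structure is the same attribute-value pairing a P-list provides, but with noticeably less markup around each value.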

Usefully, one programming language with support for the JSON format is Swift. Even more usefully, we have already been given something of a head-start on using JSON in Swift, thanks to a tutorial workshop that covered some of the basics. Storing our data in the JSON format makes sense as it is something of a standard for data transfer, being easier to use than XML-based solutions, and after some trial and error I have had more success adapting the currently stored information to this format than to a property-list-based structure. The JSON format, like P-lists, has the benefit of being easily human-readable and understandable in its text-based representation. Unlike the property list format, there is no visual hierarchical editor built into Xcode, but I do not think this is too big a consideration, since part of the point of separating the data from the code is to make it easily editable by people with no knowledge of the application's code base, people who would likely not wish to use Xcode to make these changes anyway.


After deciding that our data will be stored in the JSON format, there is still the question of where exactly it will be stored. The format lends itself well to communicating with an external server to fetch data over the internet; this, however, would add an additional layer of complexity to the development process and come with its own set of problems and limitations. We are unsure at this moment whether to go down this road or to store our data on the local filesystem with the application, as both methods have their relative strengths and weaknesses. In either case, a JSON-based approach to providing the code with question data is likely to be feasibly achievable, and we have therefore decided to adopt this technology moving forwards. I will post the progress we make with implementing this functionality as it is made.



Introducing JSON [online]. Available from: [accessed 15 May 2015].