The Phone Box – A Collaborative Unit Project
Weeks 1 & 2
10 Jan – 23 Jan 2022
Aim
The aim for weeks 1 and 2 was to get in touch with cohorts from various other courses and find collaborators. I saw it as an opportunity to meet people with varying interests and find something in common we could work towards.
Personally, I wanted to do something I had not lingered on much before; the idea was to make something completely out of the box. I did not have a clear idea of what to do initially, but I definitely wanted to use this unit to learn about something new.
What happened?
A ‘Meet and Greet’ was facilitated through Padlet (a cloud-based software-as-a-service, real-time collaborative web platform where users can upload, organize, and share content on virtual bulletin boards).

We all used the Padlet to write a bit about ourselves, express our interests, share contact information and showcase some previous work. The goal was for people to be able to identify possible future team-mates at a quick glance.

Following the initial introduction to all these new people, an in-person meet and greet was organized in the following days. We met up at the park outside the Imperial War Museum. The event was like a makeshift group speed date: we formed small groups and started talking to each other, and every few minutes some of us would walk over to other groups and get new conversations going.
As a result of this meet and greet and the conversations that followed, a team was formed:
Abhimanyu Chattopadhyay (Me), Yantao Tang, Filip Norkowski and James Grey.
Weeks 3 & 4
24 Jan – 6 Feb 2022
Aim
The aim for weeks 3 and 4 was to flesh out the final version of the concept for the game/experience we would work towards making in the following weeks. I would take this time to pitch various ideas and discuss with my new team-mates the various possible outcomes for this unit.
Personally, I was interested in exploring the possibility of making a tangible installation with the help of micro-controllers like the Arduino (Arduino is an open-source hardware and software company, project and user community that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices).
What happened?
Having attended an introductory workshop about the Arduino at the Creative Technology Lab at LCC the previous week, I spent a few days ideating about a possible physical installation and talking to course instructors and team-mates about the feasibility of such an undertaking.
The more time I spent fleshing the idea out to get a clearer view of what kind of challenges we might face during the coming weeks, the clearer it became that my idea of a physical installation of a scaled-down London phone booth was out of scope for this unit. As a result we started having more detailed discussions about other prospects such as Virtual Reality (VR) and how the interesting parts of my physical installation idea could be exaggerated and made compelling in VR.
To gain a better understanding of VR and what it could contribute to our project, we decided to play some VR games. We were particularly impressed by Robo Recall: Unplugged and Half-Life: Alyx.
As a result of our positive experience with VR games and further discussions, we defined the goal for our project as follows:
A VR narrative game addressing the stigma around depression and raising awareness about reaching out to the people around us. The player calls up friends and family, making dialogue choices over the course of their conversations. The consequences of those choices visibly impact the environment around the player and direct how much they learn about the player character's current situation.
Workflow
Once we had a solidified vision of what we wanted to make, we set up a few tools to help streamline our workflow.
Confluence
We used Confluence to document and define the technicalities and design choices of our game.

Telegram
We used the instant messaging software Telegram to always stay in touch and continuously communicate about the project. Handily, Telegram groups have no size limit on file sharing, so we readily used them to share files and videos of our progress from time to time.

Unity
We used the cross-platform game engine Unity to develop our game. All four of us had varying levels of experience with Unity, but each of us had developed games with it before, so in comparison to other alternatives such as Unreal Engine it seemed like the right choice. An added benefit of choosing Unity was the possibility of support from the staff as and when we would require it.

Universal Render Pipeline
The Universal Render Pipeline (URP) is a pre-built Scriptable Render Pipeline made by Unity. URP provides artist-friendly workflows that let you quickly and easily create optimized graphics across a range of platforms, from mobile to high-end consoles and PCs. For these reasons we decided to use URP over Unity's Built-in Render Pipeline to achieve the desired performance for our game.
GitHub
GitHub is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

On account of the massive community support and extensive documentation available for GitHub and its integration with Unity, and the simple nature of the workflow, we decided to teach each other about GitHub and how best to use it to keep our development process simple.
My Tasks
Once our ideal workflow had been set in stone, I took up the task of creating the Dialogue System for the game.
Dialogue System
We had settled on a choice-based system where the player would be presented with a statement from the ‘invisible’ Non-Player Character (NPC) on the other end of the phone, along with a list of possible replies. The player could then choose one of these replies and the NPC would respond accordingly.
The plot for the narrative did not require the system to be extremely robust or have extensive branching capability. In spite of that, as the developer of this particular feature, I decided to make it as robust and designer-friendly as possible, keeping in mind what would be useful and save me time as a developer in the future if we decided to change how the player interacts with the dialogue in the game.
This led me to design a Dialogue and Response system using Scriptable Objects.
A Dialogue Object would hold the text to be displayed to the player as part of the conversation, along with the list of possible responses the player could choose from. Each response in this list could hold a reference to the next Dialogue Object that would appear on the player's screen if they were to choose that response.

I also added functionality to abruptly end the dialogue, keeping in mind that it is possible for players to hang up and walk away.
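A minimal sketch of what this kind of ScriptableObject-based layout could look like is below; the class and field names (DialogueObject, Response, npcLine and so on) are illustrative assumptions rather than our exact implementation.

```csharp
using UnityEngine;

// Illustrative sketch of a ScriptableObject-based dialogue asset.
// Designers create these assets in the Project window and chain them
// together by assigning a follow-up dialogue to each response.
[CreateAssetMenu(menuName = "Dialogue/Dialogue Object")]
public class DialogueObject : ScriptableObject
{
    [TextArea] public string npcLine;   // What the NPC says over the phone
    public Response[] responses;        // Choices shown to the player
}

[System.Serializable]
public class Response
{
    public string playerLine;           // Text shown on the response button
    public DialogueObject nextDialogue; // Next node; leave empty to end the call
    public bool endsConversation;       // Explicit "hang up" style exit
}
```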
As we held more discussions about the theme of the game and its moment-to-moment gameplay, we realized we would like to add the possibility of doing things like changing the environment, playing sounds or spawning new objects to make the scene more dynamic. This led me to create some Dialogue Response Events: essentially, these listen for when certain responses are chosen while a dialogue is being displayed and can invoke Unity Events at those moments.

The Unity Events can then be used to call functions across a variety of objects around the scene to change it as we desire in response to the player’s choices.
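As a rough illustration of the idea, building on the sketch above (again, names and signatures are assumed for the example), a response-event component might watch a particular dialogue node and fire a UnityEvent when a specific response is picked:

```csharp
using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch: listens for a specific response being chosen on a
// specific dialogue node and raises a UnityEvent so designers can wire up
// scene changes (lights, sounds, spawns) in the Inspector.
public class DialogueResponseEvent : MonoBehaviour
{
    [SerializeField] private DialogueObject dialogue;    // Which dialogue node to watch
    [SerializeField] private int responseIndex;          // Which choice triggers the event
    [SerializeField] private UnityEvent onResponseChosen;

    // Assumed to be called by the dialogue UI when the player picks a response.
    public void NotifyResponseChosen(DialogueObject source, int chosenIndex)
    {
        if (source == dialogue && chosenIndex == responseIndex)
            onResponseChosen.Invoke();
    }
}
```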
By the end of these two weeks the template for our dialogue system was fully functional, with some minor User Interface (UI) bugs. The only drawback was that this implementation was made with no consideration for VR and how UI works in VR. This was an intended outcome, as my team-mates were meanwhile working on figuring out development for VR using Unity and the Oculus Quest 2.
Weeks 5 & 6
7 Feb – 20 Feb 2022
Aim
The aim for weeks 5 and 6 was to further our development process by integrating each other's work into a unified scene and making it all work together, achieving a functional player with the basic interactions and physics of the game implemented.
What happened?
We started the week by presenting our game idea and our progress so far to our peers and instructors. After some feedback about what could be interesting for our project and how to scope it appropriately, we spent the rest of the week integrating the various components we had worked on individually over the past two weeks. As a result, we all made slight modifications to our work to try and get everything working together as well as possible.
In the following week we were informed about an upcoming meeting with students from the Sound Design course at LCC. Each team was allotted a few students from their class, who would work with us to add a variety of sound and music to the game to make it an enriching experience.
During our meeting with them, we talked about the variety of sounds we might need for the project and discussed the possibility of voice acting and how we would go about arranging it. We also discussed our backup plans for if and when the voice acting fell through for any of a multitude of reasons.
A short synopsis: we identified a list of environmental and ambient sounds we would like to have access to and include in the game, along with a variety of sound effects pertaining to the player's interaction with the phone. We also planned to arrange equipment and voice actors for our game once the script was ready. In the event of not having enough time to get all the voice acting done, we would fall back on alternative sound clips to fake the feeling of someone being on the other end of the line; we were looking at the voice clips from Animal Crossing: New Horizons and how characters talk there.
Finally we talked about how our workflows would combine and what would be the best way to integrate the sound samples into Unity. To this effect we decided to look into Wwise, Audiokinetic's software for interactive media and video games, which features an audio authoring tool and a cross-platform sound engine.
My Tasks
Integrating Dialogue System
As a result of the team’s progress on implementing a VR player controller and interactions with relevant objects in the scene, I was able to plug my Dialogue system into the scene and call it in response to a certain set of actions made by the player. In our case dialing the correct number on the phone would trigger the appropriate dialogue sequence.
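A simplified sketch of that hook-up might look like the following; the keypad script and its fields are assumptions for illustration, with a UnityEvent standing in for whatever actually starts the dialogue sequence:

```csharp
using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch: each keypad button calls EnterDigit(), and when the
// dialled string matches the target number a UnityEvent kicks off the
// appropriate dialogue sequence.
public class PhoneDialer : MonoBehaviour
{
    [SerializeField] private string targetNumber = "0123456789"; // Placeholder number
    [SerializeField] private UnityEvent onCorrectNumberDialled;  // e.g. starts the dialogue

    private string dialled = "";

    public void EnterDigit(string digit)   // Wired to each dial-pad button
    {
        dialled += digit;
        if (dialled.Length < targetNumber.Length) return;

        if (dialled == targetNumber)
            onCorrectNumberDialled.Invoke();

        dialled = "";                       // Reset for the next attempt
    }
}
```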

Looking into Wwise
On doing some research we found that Wwise was a fairly popular solution for integrating sound and sound designers into the Unity workflow. I spent the end of this period learning how to integrate Wwise into a Unity project. I followed a small tutorial to learn how sound from Wwise can be manipulated and used in our game.


While Wwise is a highly capable piece of software that provided all the tools we needed for our game and then some, we determined that the time spent teaching everyone the Wwise workflow and making sure nothing went wrong would be long enough that it wasn't worth it. It would be easier for us to make our own sound management solution that did exactly what we required, so we could quickly implement new sounds into the game without requiring any prior setup or adherence to new workflows.
Weeks 7 & 8
21 Feb – 6 Mar 2022
Aim
The aim for weeks 7 and 8 was to have a stable version of the game as it stood and extensively playtest it with as many people as possible, ideally players with varying levels of VR experience. This would give us a better understanding of what changes we would need to make to keep the player's first-time experience as smooth as possible.
We also wanted to update the visuals of the game in parallel with changing up the UI and UX, as until this point most of it existed in test scenes.
What happened?
We tested the game with a dozen or so players, our observations were as follows:
- Players that were new to VR had a hard time interacting with objects initially.
- UI exists in the world space in VR and making it diegetic didn’t always work in our favor.
- Once they figured out how to interact with the objects in the scene, players enjoyed doing things like throwing the phone around, picking it back up and pressing the buttons on the dial pad.
- Different players preferred different ways to control their characters. Most players felt disoriented because of the teleport mechanic.
- It was very easy for people to make mistakes while typing in the number, and the process of resetting the entry could quickly become tedious.

To tackle the issues that were made evident during the playtest we decided to do the following:
- We did not have the time at this point to make an interactive tutorial of any kind, so we decided not to include any tutorial rather than have one that does a poor job of explaining what the player must do.
- We needed the player to notice the Dialogue UI, so we placed it within the player's eye-line while they are interacting with the phone (see the sketch after this list).
- We spaced out interactable elements like the buttons on the dial pad so players would be less likely to make mistakes if they were not precise with their actions.
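One possible way to keep a world-space canvas in the player's eye-line is sketched below; it assumes the canvas simply follows and faces the headset camera at a fixed distance, and all names and values are illustrative rather than our exact setup.

```csharp
using UnityEngine;

// Illustrative sketch: keeps a world-space dialogue canvas floating a fixed
// distance in front of the VR camera and facing the player, so it always
// sits in their eye-line while they use the phone.
public class FollowHeadUI : MonoBehaviour
{
    [SerializeField] private Transform head;        // The VR camera / HMD transform
    [SerializeField] private float distance = 1.2f; // Metres in front of the player
    [SerializeField] private float smoothing = 5f;  // Higher = snappier follow

    private void LateUpdate()
    {
        Vector3 target = head.position + head.forward * distance;
        transform.position = Vector3.Lerp(transform.position, target, smoothing * Time.deltaTime);

        // Point the canvas's forward axis away from the camera so the UI reads correctly.
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```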

My Tasks
Making changes
Based on our observations, I made the above-mentioned changes to the UI and interactable objects.
Integrating Sound
Created a very simple system using Unity's built-in audio components to achieve simple effects like directional audio emitting from the handset. Added some background music to the scene. Created a small script that fades between two tracks on the basis of the player's position in the scene.
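A rough sketch of how such a positional crossfade could work is below, assuming two AudioSources and two reference points in the scene; the names and values are placeholders, not our exact script.

```csharp
using UnityEngine;

// Illustrative sketch: crossfades between two music tracks based on how far
// the player is between an "outside" point and the phone box.
public class PositionalCrossfade : MonoBehaviour
{
    [SerializeField] private Transform player;
    [SerializeField] private Transform outsidePoint;   // Where track A should dominate
    [SerializeField] private Transform phoneBoxPoint;  // Where track B should dominate
    [SerializeField] private AudioSource trackA;
    [SerializeField] private AudioSource trackB;

    private void Update()
    {
        float total = Mathf.Max(Vector3.Distance(outsidePoint.position, phoneBoxPoint.position), 0.001f);
        float toBox = Vector3.Distance(player.position, phoneBoxPoint.position);

        // t = 0 at the phone box, 1 at the outside point
        float t = Mathf.Clamp01(toBox / total);

        trackA.volume = t;        // Louder the further the player is from the box
        trackB.volume = 1f - t;   // Louder as the player approaches the box
    }
}
```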

Added an effect using Audio Mixer Snapshots where all other sounds in the scene are muffled when the player is holding the phone in their hands. This was done so that other sounds don't interfere when the player is on a call, and also to indicate the importance of the conversation in comparison to everything else in the scene.
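As an illustration of the snapshot approach, the sketch below assumes two snapshots (a normal mix and a muffled "on call" mix) have already been authored on the project's Audio Mixer; the component and method names are assumptions for the example.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Illustrative sketch: blends between two Audio Mixer snapshots so that
// ambient sounds are muffled while the player holds the phone handset.
public class PhoneAudioFocus : MonoBehaviour
{
    [SerializeField] private AudioMixerSnapshot defaultSnapshot; // Normal mix
    [SerializeField] private AudioMixerSnapshot onCallSnapshot;  // Muffled ambience
    [SerializeField] private float transitionTime = 0.5f;        // Seconds to blend

    // Assumed to be hooked up to the grab/release events on the handset.
    public void OnPhonePickedUp() => onCallSnapshot.TransitionTo(transitionTime);
    public void OnPhonePutDown()  => defaultSnapshot.TransitionTo(transitionTime);
}
```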
Trying Different lighting effects
Over the past few weeks, the team had been trying various visual styles and effects with the help of some custom shaders. These shaders really helped us get the effects we wanted, but we felt like we weren't 100% there yet. I spent the rest of my time learning about how these particular shaders work and how they interact with Unity's lighting system to try and achieve our desired visual aesthetic.






The vision was to make the player feel like they were stuck in a space of endless darkness, with the phone booth as their only refuge. This would prompt them to walk up to the most important object in the game. Once inside their almost prison-like safe haven, they could find some solace, as they could look outside and see at the very least silhouettes and outlines of where they were.
Weeks 9 & 10
7 Mar – 16 Mar 2022
Aim
The aim for weeks 9 and 10 was to test the game again and look for the following:
1. Did the changes address problems the players encountered last time?
2. Was the visual style pleasing for the players?
3. Do players like having the option to teleport?
We also wanted to show the game to as many people as possible.
What happened?
During the last few days we focused on polishing what we already had rather than hastily adding new layers to the experience and risking breaking things or making the experience unpleasant.
We made small changes on the fly during testing sessions to try and achieve the best possible experience for most players. Luckily we had a lot of different players during the second playtest; some of them were students from LCC's BA VR course and they were a great help. Their feedback was valuable for understanding how to make the experience more streamlined for new users, in case we decided to invest any time into that over the next few days.
My Tasks
A/B Testing
Making small changes during the playtest session to try different versions of UI alignment, fonts, enabling/disabling the ability to teleport, etc.

Submission
Making and testing builds to make sure they were stable and ready for submission.
Recording gameplay footage to be used for the purpose of the video submission.
Conclusion
Overall, I feel the team did well in challenging themselves to learn something new and make an attempt to build something tangible and playable from what we learned. Between the four of us, we learned about VR and VR player controllers, rope physics, lighting effects and shaders, and making modular pieces of code that you can literally plug and play between scenes and projects.
Players seemed to enjoy the experience of interacting with our game, playing around, and visually immersing themselves in the world. Unfortunately this came at the cost of players not paying much attention to the narrative, and the whole idea of spreading awareness about the stigma around mental health got lost in all the technical challenges we tried to conquer as a team. Giving the players the benefit of the doubt, and in their defense, the narrative is the one aspect of the game we spent the least time on, so this was an expected outcome.
All in all, I enjoyed working with like-minded peers and together building a VR experience, something I had never done before. Of all the things I explored and learned about, the most eye-opening for me was how varied simple interactions are in VR, especially in comparison to a standard first-person perspective game.