Categories
Blog posts

What is Social VR? – Essay blog

Being asked to write an essay about anything VR-related is probably one of the most exciting tasks I have been set. I love VR and have lived extensively in its world for two years, and if there's anything I can do, it's talk about VR. This is also why it actually turned out to be an incredibly challenging and complicated task.

Academic essay writing demands objective, unbiased, research-based content. That makes writing about VR really tough for me: I love VR and everything related to it so much that there are few things within the field I don't have some knowledge about. Picking a topic to get started with was easy and fun, but producing the work around it soon became very challenging, though interesting nonetheless.

I knew straight away that I wanted to write about the Proteus effect, a phenomenon where users of virtual environments are affected by the virtual avatars that represent their bodies, and where changing that avatar can change the user.

Experiencing VR in another body is something very important to me and a topic I have a lot of experience in. Personally, I have built a part of my life on the VR social platform VRChat over the past two years.

Here you can see part of my time spent in VRChat on SteamVR, though this is not all of the time I have spent in the game: I have also logged many hours on the Oculus PC client of VRChat and on the Quest standalone version. I would estimate my total is now approaching 1,000 hours over the past two years. Compared to a large percentage of my friends on VRChat, averaging 500 hours a year is on the low side; I have friends who average 1,000 to 2,000 hours a year on the platform. For perspective, 500 hours is roughly 21 continuous days out of 365.
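For anyone who wants to check that figure, the conversion is a quick bit of arithmetic (just a sketch, nothing more):

```python
hours_per_year = 500

days_continuous = hours_per_year / 24            # convert hours to continuous days
share_of_year = hours_per_year / (365 * 24)      # fraction of the whole year

print(round(days_continuous, 1))        # 20.8 days
print(round(share_of_year * 100, 1))    # 5.7 percent of the year
```

So 500 hours a year really is about 21 solid days, or just under 6% of all the hours in a year.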

A picture of my girlfriend and me in VRChat together; we met and dated remotely while living 4,000 miles apart. After we had been together for around seven months, we decided to move across the world to one another, and now live together.
A recent group photo, taken in VRChat, of me and my oldest and closest group of online and VRChat friends, when we all met up.

In VRChat you have an avatar to represent your body to other users on the platform, and having spent so long there, the interaction between myself and my digital identity has become quite intimate and important. It has shown me many things about myself and helped me learn about my values and my real-world identity too.

So I knew I wanted to write about the interaction between users of VR social platforms and their avatars, exploring the Proteus effect and gathering some users' perspectives on this area too. However, whilst doing my research I ended up watching a VRChat YouTube documentary written by Strasz, who discusses identity and VRChat:

The video covered the topics I wanted to talk about so well that I suddenly felt incapable of writing about the ideas. It was such an excellent video essay that I felt unable to address the same subject without constantly agreeing with everything they said, and without anything unique to discuss myself.

So I turned to an equally important discussion: the impact of social VR on behaviour. This is a relatively similar talking point, but it focuses more on the social norms of virtual spaces, compares them to real life, and discusses the impact this has on users. This was the topic I eventually began writing about, and little did I know, it was too much to discuss in 1,600 words.

To summarise the writing process, I am going to include the working document I drafted and revised the essay in, and then I will discuss how and why the essay ended up where it did.

Though the image has blurred quite a lot in the blog post, it still conveys the right information. I wrote out the essay starting with some rough ideas to cover, with references to relevant research material. I began writing under the title The impact of social VR interaction on an individual's behaviour. However, I quickly reached nearly 2,000 words before discussing a single impact on people's behaviour. I soon realised that I had essentially written two essays: What is social VR? and The positives and negatives of VRChat. To make a coherent, well-structured essay and to properly manage my time, I needed to write on one or the other. At the time of making this decision, I had only written about half to two thirds of the essay discussing VRChat's ins and outs, while the essay about social VR almost stood on its own. So, with a second and final change, I decided that the ultimate discussion I would make would be What is Social VR?

For research into this question, and to provide a range of references, I used data from various websites documenting the history of VR as a technology and the history of VRChat, along with a variety of YouTube documentaries covering various users' recollections and opinions of VRChat as a platform. I tried to learn from these sources primarily to back up my own experience and knowledge, so as to include information that could be verified from multiple sources. If I were to just write about my past in VR, it would be a horribly subjective documentation of one person's experiences, and would lack the objective framing required for such a piece of writing.

Before I had cut my full draft down, I included a number of various VR social platforms, talked extensively about each one on a technical level and about various VR hardware, companies and overall, went into far too much detail about subjects that weren’t valuable to the message of the piece. It was hard to cut down the content, but I eventually broke the essay down to the points that efficiently conveyed the discussion whilst still hitting on everything I wanted to talk about.

In the final discussion about VRChat as a social VR platform, I opened the research up to interviewing users on VRChat in-person about what they have experienced, and how they feel about the platform, as part of their lives. It was an important part to bring real peoples words into the essay, to show (even if most of them were close friends that I’ve known for a long time) what people on these platforms feel about them. It was a humbling and relieving moment to take a step back from the very analytical thought process I had been in, discussing this topic, to finally hear people talking naturally about the subject and enjoying reflecting on it.

Overall, I am really pleased with the end product, even if it was quite far from what I had originally intended to write about. The process taught me a lot about writing, about research, and about critical and unbiased analysis. If I had had more words, I would have loved to combine a few separate but linked ideas about behavioural impact and positives and negatives, but given the word count constraints, I like how the essay panned out.


Character design project- Ghosty

With the prompt "children's character, with target audience aged 10 and under, themed around a local holiday or festivity", I decided to pick Halloween in the UK. For me this seasonal festivity sparked the most creative potential, though I would have to be careful designing a "spooky" character, bearing in mind it has to be appropriate for a much younger audience.

Using the ideation exercises from class, I began sketching soft and squishy silhouettes, aiming to create a gentle character that would be friendly to a younger demographic.

Figure posing using motion lines
The first characters that came to mind when experimenting with soft shapes

Moving on from ideation, I decided to take one of my characters a little further; this is a little development of a skeleton design.

After some thought and expansion of this design, I felt like the character wasn’t quite kind enough, with the thin bony look being too harsh and perhaps a little too scary. I also felt it resembled the skeleton characters from the video game Undertale, and I wanted to get away from this.

Here is my exploration with a progression of this design, following a more ghost like idea.

Happy with these concept sketches, I decided to move on to digital work, to create a character sheet.

This is a finalised concept / character sheet of "Ghosty", a soft ghost character with disembodied boots and little hand nubs. Its main communication is through its eye shapes, as I think the character would not be able to talk or make noise. Therefore, its main expression will be in the form of movement: gestures, walking style and head tilts.

Here I have brought the two profiles of Ghosty from my character sheet into Maya to begin modelling.

Through modelling techniques learnt in class, I built a low-poly Ghosty, switching between poly and smooth mode while creating the sheet of their body and the boots.

Though the image above shows the final low-poly model, I actually went through a few developments to get to this point. Initially I didn't have any "gloves" for the character. In the original design I had planned to deform the sheet to give the impression of arms under the material, but decided against this due to complexity. I went with a "chunky space gloves" look, to match the large astronaut boots I had envisioned. Also, the original dimensions of the sheet and boots were a little too tall, and squashing it to be a little shorter improved the look a lot.

Here you can see the low-poly gloves and boots on the right compared to the smooth view on the left.

This is the finished mesh for Ghosty. I subdivided twice from the low poly, giving it the rounded, smoothed look in the actual geometry. In the end I didn't need to do any organic sculpting, but I could have added it here.

Then I had to UV unwrap the mesh to create textures. Here you can see the UV for the sheet of the body and the final texture for it.

Here are the final images of Ghosty with my finished textures. I used an iPad Pro and a graphics tablet to create the textures, and used the .psd shader creator in the Hypershade so I could update the textures in Photoshop and see them update live on the 3D model in Maya. This came in useful for positioning details correctly, such as the eyes, the lines on the boots and the markings on the gloves.

For the final renders, I gave the shaders a translucent appearance, so that things can be seen through Ghosty, making it more ghost like.

The next step in preparing my character for animation was to give it a skeleton. I used the quick rig to create guides and bones for the body, but in the end I didn't use the quick rig for skinning the mesh. I made the bones for the hands by hand and added them to the skeleton.

After going through many issues getting the skin to bind, then getting it to work, then weight painting, I realised that a lot of the bones had strange rotations, and all of the bones were facing the wrong axis. I had also not created the bind pose in a T-pose, which was going to be an issue when it came to adding mocap data. All of this essentially meant I had to start over, so I turned my model around and rebuilt the skeleton, then reskinned and weight painted it. It was worth it, though getting to that point and then starting again took a long time. I ended up learning a lot along the way, which improved the quality of the model considerably. Above is the final skeleton I made.

Here you can see some of the issues I was having with my original skeleton and weight paint. The bind pose is in a neutral position, and really needs to be in an A-pose or T-pose for animating.

As my model has separate geometry, using the quick rig skinning didn’t work, and so I used the bind skin function, which has a lot of distance issues with my character in this pose. The hand bones were getting weights affecting the legs and body, and these were very difficult to remove.

You can also see there are a lot of bones with rotations that would cause unnatural motion when animated, for example in the arms and legs.

This is what the original skeleton and bind made the model look like when animated by some example mocap data. The weights were a total mess and the bones were not following the same movements as the mocap data.

For my revised skeleton, as I still had to use the bind skin function, I used the component editor to fix the weight assignments. This was needed for the legs and body, as they were affecting each other. Thankfully, the standard weights it assigned for the hands were actually perfect.

Then I hand painted the weights for the body and legs and used a variety of techniques, while cross referencing the models behaviour when animated, to ensure it was moving correctly with the bones and not clipping. This whole process was very time consuming.

Here is the final model with the final corrected skeleton and weights. Getting to this point was a very time demanding process, but I am happy with how it turned out.

In order to animate the character with mocap data, as we had intended in the project, I needed to create a character definition for the skeleton. Here you can see the beginning of this assignment and the beginning of the mocap data transfer. To animate Ghosty with mocap data, I also needed to assign the bone definition to the skeleton of the imported mocap armature. However, as I later learned, the animation requires the source to have a rest pose matching your model on the first frame for the movements to be accurately matched.

This is what the model should have looked like with the example mocap data from earlier; now, with the fixed skeleton and weights, the model accurately follows the mocap's motion.

Eager to use mocap for this project, and with the lack of a functioning suit in school, I decided to again use my own mocap at home. This exploration ended up also being a very time consuming process. However, I am happy that I got to include this in the project somewhere, even if it is not in the final render. Above is a recording from inside Mocap Fusion, which uses consumer VR hardware to create mocap files, among other things. I used 10 point tracking, to capture feet, knee, waist, elbow, head and hand positions. I also used finger tracking to capture full finger motion. In the video the main view is what I saw in the VR headset, looking back at myself in a mirror. The view in the bottom right is a camera in the world, looking at myself from the outside.

As I later found out, this software is really aimed at game engines, and none of the file exports are easy to get into Maya.

Despite the fact that Mocap Fusion only really exports files supported by game engines, such as .anim files for Unity, it could be possible to embed these animations into my model in Unity and export it as an .fbx to Maya. However, every time I tried this, it corrupted the animation, so in the end I decided just to demonstrate the mocap working on my model in Unity, and to use a stock animation file from Mixamo for the Maya render.

To prepare my model for the Maya render, I needed to find a good animation. I used a slow walk that I edited with Mixamo's built-in gait effects, and brought this into Maya. However, as previously discussed, transferring mocap movements to another model requires both models to have the same rest pose, and all Mixamo files come without one. So when importing, you have to get an identical skeleton in the correct rest pose, assign its bone definition, and then import the same skeleton with the desired animation over the top. This gives you both the animation to copy to your model and the correct rest pose so that the motions are accurately transferred. Eventually I worked this all out with the help of some tutorials online, and I got my model to follow the motions of the walk animation I had chosen.
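The reason the rest poses must match can be shown with a toy example. The joint name and angles below are made up purely for illustration (this is not Maya's actual retargeting code): rotations are transferred as offsets from the rest pose, so copying absolute angles between skeletons bound in different poses skews every joint.

```python
# Hypothetical shoulder angles in degrees; 2D rotations compose by addition.
source_rest = 90.0    # mocap skeleton bound in a T-pose
target_rest = 0.0     # model bound in a neutral, arms-down pose

anim_frame = 110.0    # absolute angle on the mocap skeleton at some frame

# Naive copy of the absolute angle: the target arm ends up 90 degrees off.
naive = anim_frame

# Correct transfer: apply the offset from rest, not the absolute angle.
delta = anim_frame - source_rest          # 20 degrees of actual motion
retargeted = target_rest + delta

print(naive, retargeted)  # 110.0 20.0 — a 90-degree error from mismatched rest poses
```

With matching rest poses the two numbers agree, which is exactly why the imported skeleton needs the correct rest pose on its first frame.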

In the end I also baked the animation into the skeleton / control rig of Ghosty, so that the project runs without the original mocap data being needed.

Here you can see the graveyard scene I found online, that I picked for the setting for my character. I thought the low poly look matched the look of my character and it seemed a suitable setting.

Here I created a motion path and a camera and aiming system, to animate the camera to give a little motion to the render. I decided to have a simple walking animation for the render, so I had the camera move towards the character as they walk forwards and past.

Using two panels, I checked that the camera view was suitable, to make sure I was happy with the composition and motion for the final render.

I also added a skydome that gave the scene a dusk / dawn atmosphere, to give it a light but still eery feel. I added an area light to give a little ambient lighting in another colour, to add a little more depth to the scene.

As the final render is too large to upload to the blog, I have included the first and last frame from it. I made sure to match the frame rate of the character's animation, the scene and the render, and in the end I used 24fps. The animation was only five seconds long, which at 24fps is 120 frames, and at 1080p with a reasonable render quality it took around 3 hours to render. I then used Media Encoder to compile the frames into a video, and I had to really adjust the encoding settings to get a decent video quality out. I think this is because the scene is very dark and the encoder struggles with bitrate and quality in the darkness. However, I eventually got a really nice video of the render, and I am really pleased with the result.
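The numbers behind that render budget work out as follows (simple arithmetic, not tied to any particular renderer):

```python
fps = 24
seconds = 5
frames = fps * seconds                         # frames in the five-second clip

render_hours = 3
secs_per_frame = render_hours * 3600 / frames  # wall-clock time spent per frame

print(frames, secs_per_frame)  # 120 90.0 — i.e. a minute and a half per frame
```

Ninety seconds per 1080p frame is why even a short clip ties up a machine for hours.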

I am really, really happy with what I produced for this project. I put a lot of work into my character and outcomes, and though it is not very complicated (or at least it doesn't seem so on paper), it was a huge effort to get it all to work. I encountered numerous issues and setbacks, had to troubleshoot an immense number of things that threw curveballs at my plans, and ultimately spent a lot of time on things that shouldn't have taken so long. But I suppose that's what learning is all about, and I really enjoyed it!

Looking back at my plans at the early development of the character, there were a few things I wanted to include that I didn’t have time for, or the skill or knowledge to put together in time.

I wanted to have the body of the character simulated as cloth, so that it would hang down as though over someone’s head. This wouldn’t have been a hugely complicated thing to implement, but it’s something I don’t know how to do, and didn’t have the time to learn. I am still happy with the way the body turned out in the animation as it still conveys the idea I imagined.

I also wanted to animate the eyes of the character using 2D images, with a variety of different eye drawings and set keys, to hand animate the expressions changing. I did actually start this process, but ran out of time to fully implement and animate it, so instead picked a fixed expression and put this onto the body texture for the final render.

I would like to work out how to get my mocap data into Maya in the future, as I really wanted to make my own mocap for this animation; still, I am happy that I got to include it in the project. If I could have got my mocap into Maya in good time, I would definitely have gone for a longer and more ambitious animation, but I think the walk animation I used in the end shows off the project well, despite being quite short. In fact, the night before submission, I tried bringing mocap data into the project in a way that nearly broke my character irreversibly and could have meant spending a lot of time fixing things. However, the damage was not actually apparent in the render, so I fixed everything as best I could, made sure to stick with the stock animation, and got a render out before I broke anything else.

If I had had more time, I would have liked to have got more creative with the scene, to have included more hand made elements and more animated parts. Despite this, I think the scene I created works well with the character and looks good in the render.

As an accompanying piece, I would've liked to have 3D printed my character to have a real-world model, which would be really cool. I may still do this, even if it didn't make it into the project.

As a final accompanying idea, I also wanted to try importing my model into something like VRChat so that I could wear it in VR, which would have been really cool too. I will probably still do this, but I would have liked to have got it into the project.

Overall, I am really pleased with the outcome and quite proud of myself, especially considering all the issues I had along the way. It’s been a really valuable project and I hope to expand on it more.


My “dream holiday destination” 3D modelling project

Planning

When I started planning my Dream Holiday Destination 3D model, my initial thoughts were something along the lines of a desert island. When I began planning the project outcome, I had only just finished modelling my first few objects with Maya and wasn't feeling very confident in my ability, so I went with something I thought I could easily manage that fit the theme. But my dream destination wasn't anything to do with a desert island; it was a cliché, and I didn't feel really motivated by it. With encouragement I realised it would be more fitting to pick something closer to my own destination idea, something that would push me and challenge my skills more.

This is my initial mood board for ideas;

I was most interested in some sort of sci-fi design, something with a cyberpunk, neo-Tokyo atmosphere. I looked for inspiration to one of my favourite films with this aesthetic, Ghost in the Shell. I also pulled inspiration from video games such as Portal.

As the project developed, I decided to put a little spin on the idea of the "ideal / dream holiday destination". What if the dream holiday destination of my scene's subject is actually virtual? What if we can't see the destination itself at all, and the scene is instead a view of the subject experiencing that destination virtually? This seemed like a fitting idea, as I could design some sort of virtual reality headset and setup, taking inspiration from my real-life passion for technology, hardware and VR in general. So I came up with the idea of a view into the world of a character immersed in a "virtual dream destination", seen from the perspective of their real world, without ever seeing the destination they are in.

First concepts

After some planning I decided to do some test modelling of the scene and its composition. I was struggling with visualising what I wanted exactly just through sketches on paper, so I decided to use some 3D modelling sketching to help. To get something done quickly, I decided to use Gravity Sketch in VR, which would allow me to quickly place shapes with an accessible sense of the space. After some time working on designs, this is what I came up with;

These images show the finished model from Gravity Sketch, which I brought into Maya, with some lighting added for these renders. This gave me a nice original reference for my outcome. From here I could start modelling things for it: a VR headset, controllers, some sort of character, a robotic arm and lots of wires, a computer with monitors…

During planning, I drew some sketches for these items, including these which show the VR headset concept ideas;

I needed a subject for this scene. In my test model I just used a prefab mannequin, but I wanted something relevant and fitting. I decided to pick a premade model, as I wasn't super confident in my own abilities at the time and didn't think I would have time for an original character as well as the rest of the scene. For something VR-related and close to home, I decided to use my own personal VR avatar that I use online. It has a more cartoon, animated style, but I thought it would fit with the more low-poly look I was going for. This is the model I used;

Modelling

To get started modelling, I decided to go with the VR headset and controllers that would be on the character. I based the controllers on real-life ones that I happened to have in person, which helped me model them. I based the headset more loosely on a variety of real-life headsets, while more heavily incorporating some of my early test sketches.

These images are the final models in the finished scene, which is why they are shaded and posed and the headset is already connected to the robotic arm. I didn't have any images from earlier in the modelling of these elements. I also have the trackers, which are inspired by real-life full body tracking in VR and which I modelled based on real-life trackers too.

Here are all the VR elements for the character's body.

Then I had to scale and place the VR hardware on the character; here you can see the final pose of the character and the elements on their body. I used the model's armature to pose it into a position that fit the theme and idea of the scene. I put them in a stance with their arms raised, as if they are engaging with the world they are immersed in.

During the modelling process for the headset and other elements, I used the character's head and body to get the scale and shape right. This way, the headset and items like the trackers look like they are actually on the character and designed for them.
Here you can see one of the trackers on the body.
The controllers with the hands posed around them.

From here I added the robotic arm that plugs into the VR headset and connects to the ceiling. I used large blocks and photo references to position and scale the different parts for their final placement. I also don't have images from earlier in the arm's development, but this is the finished arm in relation to the character;

Then I created the computer, monitors and desk they are on;

Next was the complex mesh of wires strewn around the room. I wanted a lot of wires hanging from the ceiling and coming out of the back of the headset. To create and plan these wires, I used editable curves in orthographic views to place them in the scene. There ended up being a lot of wires, and it was a very time-consuming process to place all the curves, extrude them all, and make sure everything lined up and was placed realistically. There were three major groups: the wires coming out of the headset, the wires hanging from the ceiling, and those leading from the ceiling to the PC.

For all of the shading on the objects, I used the Hypershade to create my own materials. I ended up making a lot of custom materials, with differing properties for all the different objects. It was time consuming but fun, and really brought the models to life.

Scene building

The final steps were the room to contain the scene and some sort of exterior. In my original plan I wanted a futuristic cityscape framed by a large window in the room. It took a long time to work out how to pull this off; I tried flat images, 3D models of cities, and textured large cubes. In the end, given how much time was left for the assignment, I decided to use premade assets. This meant the only things I didn't model were the character and the view outside the window, which I was comfortable with as a compromise.

I modelled the room using a basic cube that I had been using as a scale reference for all of the wires and parts of the room. I fleshed it out with a few details like lights and shelves, and I put a lip on the window.

For the outside, I placed a few large cubes and used the framing of the window from inside the room to adjust their composition. I used some building fronts from models I found online to cover the cubes, and lit them with large area lights from various angles.

With all of this together, the final model was done;

A view of the buildings from the outside of the main room. You can also see their lighting and the physical sky that I used. I went with a very dark sky to make the lights stand out, as this reflects the cyber / sci-fi atmosphere I was going for.

Here is the final model with lighting;

Presenting the model

After finishing the model and lighting, I went on to animate a few things. I wanted an animation that plays in the Maya viewport, as I didn't have the time or PC power to render out a decent-length sequence, so for the moment I made a camera animation. I used a curve to create a rail for the camera to travel on, and a camera-and-aim rig to move the camera through the scene and aim it as it moves. I used the graph editor to animate the camera for a 45-second clip that shows off the model.

I also added some animation to some of the wires hanging from the ceiling using a curve deformation and infinite oscillation;

Here you can see the camera, view position and the curve I used to animate the camera through the scene.

Final renders

After the animation, I went on to finish the project with some final renders (these get heavily compressed by the blog);

Project overview

Reflecting on the project, there are a lot of things I would like to have done but made compromises for or ran out of time to do;

I wanted to have more detail in the room, like more objects on the shelves, posters on the walls, chairs and other things that busy a room. I just ran out of time to model more, and didn’t want to clutter the model with assets made by other people, so I decided it would be best to present what I have made with a more clean and less busy look.

More wires! I shiver at the thought, but I would have liked to have denser groups of wires, as the final look was a little sparse compared to what I originally envisioned. Still, I think the way it ended up being presented stands well on its own.

I would have loved to have rendered out a full animation, but due to time constraints and access to computing power, I just didn’t have the time to have one rendered for the deadline. Perhaps in the future I will come back and render out a few sequences demoing the model.

I also wanted to add more animated elements in the scene. For example perhaps a flying car going past the window, a flickering light or something displayed on the screens.

I also really wanted to animate the character in the scene, with the whole robotic arm linked to the VR headset following the character's movement. This ended up being too ambitious for me at the time, and I wouldn't have had time to do it for the deadline. In the future, when I am more accustomed to animating humanoid rigs in Maya and creating custom rigs for handmade objects, I would like to come back and try animating the character and the robot arm as I originally pictured it.

Finally I would have liked to have modelled the character in the scene myself, creating an original character for the theme and scene. I sadly just didn’t have the time or the skill at the start of the project to take on something like that, but I look forward to attempting it, and character design as a whole, in a future project.

I really enjoyed this project. It taught me a lot of new and really fun skills, not just in modelling but in a whole field of 3D practices, and also time management! I got a lot out of making my ideas come to life, in the high definition that the final renders provide. I am proud of the outcome, and pleased that I pushed myself, going out of my comfort zone with the project idea.

Categories
Blog posts

Examples of Animation principles

Exaggeration, Slow in & Slow out, Arcs, Anticipation;

In the two clips at the end of the video, an excerpt of Thumper from Bambi, exaggeration and arcs are used in Thumper's ears and facial expressions to amplify the emotions he experiences while talking to his mother.

All 12 principles of animation;

This is an excellent example of all 12 principles in specifically crafted animations to demonstrate each of their uses.

Categories
Blog posts

My VRMV

My VRMV Plan;

Song choice

For my VRMV I had a few songs in mind. I listened to them all and this was my final list;

Sarah Cothran – As The World Caves In

John K – ilym

Kodaline – All I Want

Alec Benjamin – Beautiful Pain

Jordan Suaste – Body

Corpse – Agoraphobic

Tommyinnit – CG5

In the end, I decided to go with the song Sarah Cothran – As The World Caves In (cover)

I chose this over the others because its lyrics tell a strong narrative, which will make the experience moving and also make it easier to plan the events of the MV.

Sarah Cothran – As The World Caves In – (cover)

The song tells the story of two lovers finding comfort and peace in one another, as the world ends.

I will most likely use a literal interpretation of the lyrics for the narrative. I will follow the scenes of the song, focusing on the couple in their home, moving through the rooms of the house.

I think the final scene should show the world ending and a blinding light coming from outside as it all fades to white.

It would be powerful to include an interactive element at the line "I weep and say goodnight, love – While my organs pack it in". The song would pause before the final chorus and a button would appear, which the user would have to press, launching the bombs mentioned in the song that end the world.

Locations in my scene

-The home of the couple in the song, and the many rooms inside.

-The street that the house is on, for panning shots / ending or intro.

Items / details in the scene (using lyrics for imagery)

-Bottles / glasses on a table

-Doomsday newspapers

-TV

-Fancy outfits for the characters

-Nail polish for painting nails

-Nuking animation and button to press

Why is VR useful in storytelling and in this type of MV?

When this sort of content is watched in VR, the scene is viewed in 180 or 360 degrees, which allows the experience to unfold around you as the song goes on.

A picture from when my girlfriend and I visited a VR world that immersed you in a music video.

This is one of the most immersive forms of communication: the story is not just communicated through a window into the universe, as it would be on a screen; the user is actually in the scene, with the characters, in their environment.

In the case of this music video, it will create a greater emotional connection to the story and the characters, and greater impact from the song's dark message. I can use audio prompts, physical interactions and events in the scene around the viewer to further the immersion and add detail that would otherwise not be possible.

Filming the characters' animations

To create the music video's character animations, I will be using Mocap Fusion. This is free motion-capture software, currently in beta testing, that lets you use consumer VR hardware to create animation files, along with a lot of other features.

It allows you to record many different VR events, but importantly, it can use VR headsets, controllers and trackers to record full humanoid pose data with proper inverse kinematics ready to be imported into projects and applied to models.

I will be using my own VR hardware to create the animations for the characters in the music video. The characters will have full-body presence and can include details such as finger tracking. This will bring lifelike realism to the models and reduce the production load, since I won't have to hand-animate them.

Character models

The models for the characters in the music video are the avatars that my girlfriend and I use online, and I thought it would be fitting to use them in a love-story song.

They are already rigged and ready for animation files from Mocap Fusion.

Scene arrangement

For the scene, I think I might have an apocalyptic home, where the environment is in disrepair. Fitting the theme of the song, the couple are at the end of the world, so the home will be a visualisation of their breakdown and grief.

I want the scene to have a dark and morbid tone, so I want to give the effect of lots of dust with low lighting.

Story plan

This is my plan for the scenes in the video. I broke the song down into narrative sections and planned what we would see in each one. I then used these plans to create the animations for the character models I have.

When I had finished the animation mocap, I also noted, next to each set of lyrics, which animation file goes with that section of the song, the duration of each animation, and the length of each stretch of music. This way, when I arrange the animations in my Unity project, I can reference each section of the song and know where my models need to be, which animation they are playing, and at what time.

Creating the animations

Some of my VR hardware

As the narrative is largely about the two main characters, it is important that the models representing them have animations to give them life. They are the main storytelling element, so I wanted to give them handmade animations to tell the narrative and convey their emotional significance and life.

I used a piece of software called Mocap Fusion, which allows you to use consumer VR hardware to record your own acting and create animation files. I used my own HTC Vive for head tracking, Valve Index controllers for hand and arm position using inverse kinematics, and also to provide finger tracking. I also used six Vive trackers for feet, knees, waist and chest position. I recorded the animations for each scene of the song as I had planned previously, and exported them ready to apply to my models in Unity.

The desktop view of Mocap Fusion

Here you can see the imported animations in my Unity project, and a freeze frame of the different events that I will use as applied to the models I am using in my video.

Creating the environment

I wanted to create an end-of-the-world atmosphere, but inside an everyday couple's home. I knew I wanted the home to be busy and full of belongings to show that it is lived in and make it feel more realistic. I knew my 3D modelling skills were not fast enough to create an entire realistic home interior, so I decided to use assets from online to build my scene. I started with a house that had a reasonable interior and then fleshed it out with objects and belongings you might find in a home. I went for a more vintage feel with the aesthetic of the house.

Here are some images of the inside of the house after adding lots of items and furniture, such as beds, chairs, a kitchen, sofas, and more.

I also included a variety of pictures that my girlfriend and I have taken together using our avatars, the models of which I am using as the characters in the video. I imported the pictures and scaled some picture-frame assets to make it seem as though the characters have hung up photos of themselves in their home.

I continued to add assets, including ones to be focussed on that parallel the lyrics of the song, such as some nail polish and newspapers. To finish the scene, I added a 3D model of a lamp together with an area light to bring natural light into the home. I also added clouds to the sky so that looking out the window makes the world outside seem more real and less like a void.

To finish the environment I just needed to create a moody tone, so I changed the directional light serving as the sun to a warmer colour and turned it away, darkening the scene.

Sequencing the character animations

The part of this project I spent the most time on was sequencing the character animations. I had many animations for two characters, and I needed them to play at specific times in the song and in specific locations in the house. I also needed the VR view to move the user to wherever the models went when the location changed.

To manage all of these changes and timings, I used scripting. It took me a long time to script all of the events, as there were many animations and locations, each with six unique values, and a lot of trial and error to get things to line up correctly. I got some coding help from Antoine, and it took me a while to write a script that would work, but eventually I had all of the animations, locations and camera arrangements properly lined up and in time.
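As an illustration, the sequencing logic could be sketched like this in a Unity C# script. This is only a minimal sketch, not my actual script: the component, field and state names here are all assumptions, and the idea is simply that each step pairs a timestamp in the song with an animation state, a character position and a camera anchor.

```csharp
using UnityEngine;

// Hypothetical sequencing component: fires each step once the song's
// playhead passes that step's start time.
public class SequenceDirector : MonoBehaviour
{
    [System.Serializable]
    public struct SequenceStep
    {
        public float startTime;        // seconds into the song
        public string animationState;  // state name in the Animator Controller
        public Transform location;     // where the character stands for this scene
        public Transform cameraAnchor; // where the VR rig snaps to
    }

    public AudioSource song;
    public Animator character;
    public Transform vrRig;
    public SequenceStep[] steps;       // filled in the Inspector, sorted by time

    private int next = 0;

    void Update()
    {
        while (next < steps.Length && song.time >= steps[next].startTime)
        {
            SequenceStep s = steps[next];
            // Move the character to this scene's spot and start its clip.
            character.transform.SetPositionAndRotation(s.location.position, s.location.rotation);
            character.Play(s.animationState);
            // Snap-cut the VR view to the matching camera anchor.
            vrRig.SetPositionAndRotation(s.cameraAnchor.position, s.cameraAnchor.rotation);
            next++;
        }
    }
}
```

Driving everything off `AudioSource.time` rather than real time keeps the animations and snap cuts in sync with the music even if a frame hitches.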

Final touches

Also using scripting, I activated a light change at the end of the song, running a pre-recorded animation of an area light getting brighter. I used this to represent the bombs going off outside, and it fittingly shows the outside world slowly get brighter and brighter as the song ends and the couple look on.
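A similar light change could also be driven purely from code, by lerping a light's intensity up over time. This is a hedged sketch, not the pre-recorded-animation approach I actually used; all names and values are illustrative, and note that Unity's area lights are baked-only, so a realtime light would have to stand in for the effect at runtime.

```csharp
using UnityEngine;

// Hypothetical end-of-song flash: ramps a realtime light's intensity
// up over fadeDuration seconds once Trigger() is called.
public class EndingFlash : MonoBehaviour
{
    public Light sceneLight;            // a realtime light washing out the scene
    public float targetIntensity = 50f; // bright enough to fade to white
    public float fadeDuration = 10f;    // seconds for the ramp

    private float elapsed = 0f;
    private float startIntensity;
    private bool running = false;

    public void Trigger()               // call when the final chorus begins
    {
        startIntensity = sceneLight.intensity;
        running = true;
    }

    void Update()
    {
        if (!running) return;
        elapsed += Time.deltaTime;
        // Linearly interpolate from the starting brightness to the target.
        sceneLight.intensity = Mathf.Lerp(startIntensity, targetIntensity,
                                          Mathf.Clamp01(elapsed / fadeDuration));
    }
}
```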

I also had to add the song. I imported it, added an AudioSource to an empty game object, enabled Play On Awake, and the music played as the events went on.

The last element was the VR camera. I wanted to use Unity's XR plugin to add headset compatibility and more open, user-friendly support, but I couldn't get the XR rig to recognise my headset, and nothing would happen when I hit start. So, in an effort to have my MV actually running in VR, I used the Oculus OpenXR rig instead. Because of the size of the project and the complexity and concentration of assets, I decided to leave the project running on PC as opposed to building for Android and putting it on a Quest. This let me retain the complex scene I had created and still get high frame rates, as a PC has far more rendering power than a Quest. I decided against animating my camera, as I was cautious of putting the user on rails and making the video less accessible to those who easily experience motion sickness, so I used snap cuts between the different areas of the music video instead.

Things I didn’t quite get to do

I wanted to add some interactive elements to the MV: perhaps having the song pause at the end while the user presses a big red button to launch the missiles that end the world.

There were also a few interesting issues with the animations on the characters. Because of the way I used mocap, I had to create the animations for both characters at different times, and in some scenes there is a little noticeable separation and misalignment of the two characters in relation to one another. In the future it would be interesting to see if I could use multiplayer mocap to record person to person interaction with another person at the same time.

I also wanted better jump cuts between the events in the MV, with the camera view briefly fading to black between jumps to make them feel less sudden. However, this was a little too complex and I was already low on time.

I wanted to have more animations and have the characters interact with the environment a little more: for example, one chopping food in the kitchen, or another putting the hi-fi on when they go to dance together. However, due to time constraints and the complexity of recording all the mocap in time, I ended up using fewer animations overall.

I also wanted to model the street outside their home, so that you could see more than just an endless expanse outside the windows. However, as I was low on time and couldn't find any good free models of residential streets that fit the aesthetic, I left the outside as a large dark plane, which does somewhat add to the creepy, dark tone.

Project overview

Overall, I am really proud of how the music video turned out. It took a lot of work and a few sleepless nights, but I really like the final outcome. I think I really hit my goals for the scope and concept of the project. I really enjoyed learning a little more about Unity, coding for Unity and using VR in Unity. It’s taught me a lot of new skills and definitely pushed me out of my comfort zone all for the better.

Categories
Blog posts

Adding Mocap animations to a 3D model in Unity

Here I document how I added animations to this 3D model from Mixamo, using animations pulled from Annie's live mocap recording. I imported the 3D model files and extracted the model's embedded textures and materials. I set the model's rig type to humanoid. I then imported the animation files and set their rig type to humanoid as well.

Then I created a new animation controller, added a new state and assigned the animation file to that state. Finally, I assigned this controller to the model's Animator component.

Then all I had to do was position the camera correctly and hit the play button, and the model began moving. Here is a screenshot from the model in motion.
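For reference, the same controller setup can also be done with a small editor script rather than by hand. This is only a sketch under assumed asset paths (the `.anim` path and menu name are made up), using Unity's `AnimatorController.CreateAnimatorControllerAtPathWithClip` helper, which creates a controller whose default state plays a given clip:

```csharp
using UnityEditor;
using UnityEditor.Animations;
using UnityEngine;

// Editor-only sketch: builds a controller for an imported mocap clip
// and assigns it to the currently selected model's Animator.
public static class MocapSetup
{
    [MenuItem("Tools/Create Mocap Controller")]
    static void Create()
    {
        // Load an imported animation clip (this path is an assumption).
        AnimationClip clip = AssetDatabase.LoadAssetAtPath<AnimationClip>(
            "Assets/Mocap/ExampleTake.anim");

        // Create a controller whose default state plays the clip.
        AnimatorController controller =
            AnimatorController.CreateAnimatorControllerAtPathWithClip(
                "Assets/Mocap/MocapController.controller", clip);

        // Hook the controller up to the selected model's Animator.
        Animator animator = Selection.activeGameObject.GetComponent<Animator>();
        animator.runtimeAnimatorController = controller;
    }
}
```

With the model's rig type set to humanoid, as above, the clip retargets onto the model automatically when the scene plays.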

Categories
Blog posts

AR detective game

This is our AR 3D detective game, which Macy and I built using Unity and Vuforia. The theme we used was Among Us, the game where you are astronauts on a spaceship and everyone must work together to find out which alien crewmate is the Imposter and finish the game before they kill every innocent player. Here you can see the 9 different images we trained the Vuforia Unity plugin to recognise. These are the different Crewmates. Each trained image has a Crewmate 3D model rendered over the top of it, as well as text identifying whether that Crewmate is Innocent or an Imposter. The Imposter also has a large knife, which Macy 3D modelled, that appears alongside the other 3D elements, just like in the game.

Here you can see I used Maya to create the two 3D text models that identify the characters as Crewmate or Imposter.

This is the knife that Macy modelled, displayed when you find the murderous Imposter.

Here you can see some of the final results using the Vuforia AR plugin. You have all of the Crewmate images, and you must find the Imposter by revealing each identity with the AR render.

The plugin can distinguish between all the Crewmates at once. Here is another Crewmate.

And finally, here is the Imposter, displayed with red text and the knife model.
Categories
Blog posts

3D modelling a chest

To begin my Maya modelling practice, I wanted to pick a design that would be reasonably simple but would nonetheless feature some geometry to push me, so I chose a treasure chest.
This was after a few hours of working with Maya. It was my third try, as I had used some questionable techniques that led to complex issues with the overall geometry, and I didn't like how the look was developing. With the knowledge I had gained, I started again and stretched a cube out into a tall cuboid. I almost exclusively used edge loops to add more data points (more vertices), and then the scale tool to scale different groups of vertices and slowly change the shape. I rounded the top of the chest along its depth and tapered the width slightly. As I gathered confidence, I added more detail, indenting the centre of each panel. I extruded faces from edge loops on the sides to create a lock shape and handles, and used scaling to indent around the lip of the lid.
Here I wanted to fix some broken geometry that came from adding vertical edge loops on the sides and then scaling down the width. This made the top of the sides of the lid come to a thin ledge instead of continuing the thick bevel. At the top, a lot of vertices came together very closely and in strange ways, which could have been avoided if I had planned the extrusions and scaling of the different areas at the right times. To fix this top section, I split the model in two, to make having to do the process on both sides more manageable.
Here you can see the strange thin ledge at the top of the side of the lid.
Here is the start of my attempt to heal some of the strange polys, and now the top of the sides has more structure and better geometry to allow me to finish the bevel around this area.
I had to remove all the faces in the area where the vertices were very close together. I used edge extrusions and vertex welding to move new faces over the deleted spaces, and adjusted the geometry to be more suitable for the model in that area.
Categories
Blog posts

Modular models in game engines

When using 3D models in game engines, the models can be used in an efficient manner to reduce the overall realtime rendering cost, beyond just keeping their poly counts low.

In efficient 3D game design, a single model can be loaded once but reused in a variety of ways across the scene, so that the overall impression is one of greater complexity while still using less rendering power.

This is a model kit of many modular components for building a futuristic city with many different building blocks. Kits like this allow designers to create complex and large environments more easily, and makes the rendering more efficient as the environment is broken down into pieces to be rendered in different ways and at different times.

Also, large models can be divided into smaller pieces to render individually and be used modularly, so the same models can be repeated over and over to give the impression of one larger model.

Here is an example of a set of models ready for rendering that lets a designer create many different types of similar buildings, all from the same library of simple building blocks. This reduces the overall modelling complexity and scale, allowing for simplicity yet versatility, all while giving the impression of a much more complex scene and collection of models and, importantly, reducing the rendering load too.
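The modular idea can be illustrated with a minimal Unity C# sketch: one building-block prefab is instantiated many times on a grid, so every instance shares the same mesh and material and the engine can batch or GPU-instance the repeated draws. The prefab reference and grid sizes here are illustrative assumptions.

```csharp
using UnityEngine;

// Hypothetical modular placement: one prefab, many instances.
public class ModularCity : MonoBehaviour
{
    public GameObject blockPrefab;  // one modular building block, reused everywhere
    public int width = 10;
    public int depth = 10;
    public float spacing = 4f;      // metres between blocks

    void Start()
    {
        for (int x = 0; x < width; x++)
            for (int z = 0; z < depth; z++)
                // Every instance shares the prefab's mesh and material,
                // so the engine can batch the repeated draw calls.
                Instantiate(blockPrefab,
                            new Vector3(x * spacing, 0f, z * spacing),
                            Quaternion.identity, transform);
    }
}
```

In practice a designer would vary the blocks and rotations rather than laying a uniform grid, but the rendering benefit is the same: one mesh in memory, many cheap instances on screen.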

Categories
Blog posts

Model poly count and its impact

This is a 3D model from Sketchfab, which has a very high poly count, and many many tiny polygons in areas of detail like the eyes, face and hands.
Here you can more clearly see the incredibly high level of detail that has gone into this model. The model has 165.7k quads and 331.3k total triangles. This would not be efficient to run and render in realtime in a game engine. The more economical the mesh density, the less taxing it is on the machine running it. This would be less problematic for an animation, where frames do not need to be rendered in realtime.
These are some other models I found, which have much lower complexity and mesh density. They are much more suitable for realtime rendering in a game engine, as the efficient modelling is less taxing on the machine.