subvrsteve

Blender VR Tutorial for v2.79

image

My old tutorial for rendering VR in Blender has turned out to be one of my most popular posts, and hopefully has helped other artists get into VR. I’ve learned a good few things in the intervening months (although I still consider myself a n00b) with Blender, and Blender 2.79 has brought some very useful new features too that have let me halve my render times, so I think this is overdue for an update.

1. Setting up Blender for VR

This tutorial will assume you’re already familiar with the basics of Blender’s user interface, although I’m still fairly new to it myself, so I’ll keep it simple. It will also assume you have Blender 2.79. Seriously it’s an awesome upgrade from previous versions so go download it now! Many artists have been using betas of it for months because of the new shaders and the very convenient denoising feature.

The great thing about Blender for VR is that its Cycles render engine offers full support for the most common VR rendering formats. You can render 180° or 360° stereoscopic videos using absolutely no software other than Blender, with a fairly simple workflow and a high level of control.

First, let’s put some of the basic render settings in place. We’ll come back to these when it’s time to render, but it’s much better to have them set up right from the start of your project to avoid headaches later.

image

The resolution here will be your resolution per eye. I usually go with 1920 x 1920 for 180-degree video because that’s the max resolution most live-action VR makers use, but you can go even higher if you want. For 180° video you want a square aspect ratio, so you can use any other resolution you like, but don’t go much lower than 1000 x 1000 or the VR effect will cease to work well.

If you’re planning to make a 360° video instead, your aspect ratio should be 2:1. Simply doubling the X resolution to 3840 will give you close to 8K total resolution (4K per eye). I don’t think many systems can play that well, although we’re starting to see content at that resolution. Personally I think 2000 x 1000 per eye is a reasonable resolution for 360°, but you won’t have the same visual sharpness as in a 180° video of course, so consider carefully whether the experience you want to deliver needs a 360° field of view.
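The per-eye resolutions above follow a simple rule: square for 180°, 2:1 for 360°. Here’s a small sketch of that rule as a helper (the function name is my own, not part of any Blender API):

```python
def per_eye_resolution(eye_width, fov):
    """Per-eye render resolution for equirectangular VR.

    180-degree video uses a square (1:1) frame per eye;
    360-degree video uses a 2:1 frame per eye.
    """
    if fov == 180:
        return (eye_width, eye_width)       # e.g. 1920 x 1920
    if fov == 360:
        return (eye_width, eye_width // 2)  # e.g. 3840 x 1920
    raise ValueError("fov must be 180 or 360")
```

So 1920 wide gives 1920 x 1920 for 180°, and doubling to 3840 gives 3840 x 1920 per eye for 360°.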

Very Important: Set the frame rate to 60 fps. Do this right from the start of your project if you can. Although I think the PSVR can’t play more than 30 fps, 60 fps is a must for comfortable VR viewing on other platforms. And it’s much easier to resample down to 30 fps later than it is to go the other way around.

If you’re working with an existing scene that’s been animated at a different frame rate, you can use time remapping. However, if you’ve got any particle effects or fluid sims in the scene, they will be irrevocably broken, with no easy way around that. It’s also very buggy if you want to make changes to your animation with time remapping activated.

For example, for a scene that’s originally 24 fps, change the FPS to 60, then under time remapping set Old: 24, New: 60. Then you need to change your start and end frames: just take the original frame numbers and multiply them by 2.5 (60/24). You can type the formula directly in the field.
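That frame conversion is just a ratio of the two frame rates. A quick sketch, if you’d rather not do the arithmetic in your head:

```python
def remap_frame(frame, old_fps=24, new_fps=60):
    """Convert a frame number from the original frame rate to the
    remapped one. Old: 24, New: 60 gives a factor of 60/24 = 2.5."""
    return frame * new_fps / old_fps

# A scene animated from frame 1 to 240 at 24 fps runs from
# frame 2.5 to 600 at 60 fps; round the start/end to whole frames.
```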

With that out of the way, let’s activate stereoscopic rendering:

image

Go to render layers, then tick Views and choose Stereo 3D. You now have a left and right view. 

Now select the camera you want to work with, set it as the scene camera and let’s set it up for VR.

image

First, under the Lens / Perspective settings, change the field of view to 90°. This won’t actually change anything in the final rendered images, but the camera previews in your viewport will be a lot closer to what you’d actually see in the goggles, so it helps a lot when positioning your camera later.


image

Next, change your lens to Panoramic; this is where all the really important settings are. My own usual settings are pictured above, but let’s review them one by one.

Type: Equirectangular is the most widely supported type of view for both 360 and 180 degree videos. Some 180 degree videos use fisheye instead (most SFM videos do) but I prefer equirectangular myself as it makes use of your full video window whereas fisheye leaves the corners dark, so you’ll get slightly sharper images overall for the same video resolution.

Latitude / Longitude: Set latitude to -90 / +90, and set longitude to the same for 180-degree video, or to -180 / +180 for 360° video. Make sure you’ve set up your resolution and aspect ratio accordingly in the previous steps.

Clipping: If you’ve got some stuff moving right under the “nose” of your camera, it might get clipped, so reduce the start value to 0.1 or even less. In VR you want to be pretty close to the action.

Stereoscopy: There are three modes you can use. Off-axis is what I use almost all the time. With off-axis, the two cameras converge along a plane at a set distance in front of them. It’s the default for most 3D applications, really.

Parallel is good if you’ve got some distant objects where you still want a 3D effect and nothing too close to the camera.

Toe-in is good, I think, if you’re doing a 360° video and you’ve got stuff happening very close to the camera in all directions. It works somewhat similarly to off-axis.

Convergence Plane: (Off-axis / Toe-in only) This is how far away the stereo cameras converge. This should almost always be 30x your interocular distance (see below).

Interocular Distance: The default value is the human average: 65mm. You do not need to change that unless your scene and models aren’t scaled to normal human size in Blender. Then you can use this to cheat: if your models are 3x too big, multiply this by 3 too. Artists rarely worry about sizing their models and scenes correctly because it has virtually no effect unless you render things in 3D.

I feel like cheating in this way makes some of the resulting scenes look slightly weird, but that might just be me. 

Spherical Stereo: This might be useful for 360 videos, but otherwise should be disabled.


Pole Merge: Usually disabled. This gradually reduces the eye separation (stereo effect) as you look up or down at extreme angles. You only need to use this if there’s stuff happening directly above or below the camera and you find that the eyes cannot focus on it properly no matter what you do.

Finally, at the very bottom of camera settings, make sure your depth of field is disabled: Aperture set to 0.

2. Setting the scene

Laying out a scene for VR is very different from traditional 2D work. This is hardly an exhaustive list, but here are some common considerations and pitfalls to avoid to make the best of your VR scenes.

  • FPS: I’ve said it before, but a high frame rate is hugely important for a good VR experience. Set this to 60 from the get-go if possible.

  • Size Matters: That’s what she said, eh.

    It’s a common pitfall to animate a whole scene, spend days rendering it for VR, put the goggles on, and only then figure out that your models are giants and you’re just a tiny ant observing the scene. While this can be fun to play with if you know what you’re doing, it’s disastrous if you don’t.

    Right as you start to set up your scene, make sure your models are human-sized. A quick way to do that is to spawn a human metarig and scale them so they’re roughly the same size. Even better, look up how tall your character is supposed to be, and scale her to exactly that.

    Models imported from SFM need to be scaled down to 0.027 in Blender. Grab them in Object mode, then hit Shift-C to center your cursor at 0, then [S], [.], [0], [2], [7] to scale down to the exact value. Finally use Ctrl-A to apply the scale transform.

    I’ve also seen some models that are exactly 3.28 times bigger than they should be. I suspect Imperial units are to blame.

    If all else fails and your scene is already set up with the wrong scale, you can cheat and adjust the interocular distance as indicated in the last chapter.
image

Here I have a scene with a 6-meter tall D.Va. She should be just 1.6 meters or so, so I’ve multiplied the interocular distance by the size ratio.
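The import scale factors mentioned above can be kept in one place. These numbers come from the rules of thumb in this section, not from any official importer documentation, and the names are my own:

```python
# Assumed scale factors to bring imported models to meters:
IMPORT_SCALE = {
    "sfm": 0.027,          # Source Filmmaker models
    "imperial": 1 / 3.28,  # models authored in feet, not meters
}

def corrected_height(raw_height, source):
    """Height in meters after applying the import scale factor."""
    return raw_height * IMPORT_SCALE[source]
```

For example, an SFM character that imports at roughly 59 Blender units ends up around 1.6 meters tall after scaling by 0.027.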

  • Camera Viewport: After you’ve enabled stereoscopy, your camera viewport may change to an ugly red-blue display. To get rid of it press N to bring up the side menu, and under Stereoscopy, choose either the Left or Right eye. You might want to change from time to time to make sure the field of view is unobstructed for both eyes.

    You can use the check boxes below to display both cameras, so you’ll see where each “eye” is, and to display the convergence plane described in the last part.  The slider is the opacity of the plane.
image
  • Positioning the camera: You need to position the camera a lot closer to the action than you’d do with a regular, narrow-angle camera. Both your video resolution and that of the HMD are limited, so if you want the viewer to catch details, you need to be fairly close to them. 

    If you’ve set the camera FOV to 90° in the previous chapter, your viewport should give you a decent preview of what the scene will look like with the goggles on.
image

Varren buddy seen through a regular 40° FOV camera.

image

Without moving the camera, I’ve changed the FOV to 90°. Not only does my buddy look tiny now, but I’m also seeing some objects that I wasn’t supposed to. I need to rework my lighting setup.

image

That’s a bit better. I’ve moved my camera forward to about 1 meter from the Varren’s head. I could get even closer for detail but he just ate a pyjack and I don’t want *that* much detail.

  • Keep in mind that what you see through the camera viewport is just what the viewer will see when looking straight forward. If he turns his head around, he’ll see more stuff - especially if you’re rendering in 360.

  • If you get closer than 50 cm to any object, the eyes will start to strain a bit to focus on it. You can balance that a bit by moving the camera’s convergence plane closer. Experiment on still renders to see what works.

    Done well, moving objects close to the camera so they appear to touch the viewer’s head can be extremely immersive; it just takes a lot of trial and error to get right.

  • Movement: It’s safest not to move your VR camera at all to avoid causing motion sickness. Rotation especially should be avoided - the viewer can usually just turn his head around.

    However a bit of motion can actually improve the immersion as long as you keep it slow and steady. 

    If you’re doing a POV scene where the viewer’s head is moving a bit you can use constraints to have the camera follow the head, but not rotate with it.

  • Keep your camera’s Z rotation at 0 if you can. I’ve had problems in the past when the camera was tilted sideways: it made objects impossible to focus on unless I also tilted my head sideways. It might be linked to some other settings, but just zero out the Z rotation to be safe.

  • Eye contact is one of your best tricks. Try to establish and keep it if your scene lends itself to it. It’s a staple of live-action VR erotica, and for good reason. Most Blender models have their eyes rigged with a view target. If they don’t, you can try using Track To constraints, but it’s a bit trickier.
image

Some cozy time with Ciri (credit: dudefree). Great use of lighting and eye contact there. Realistic movements of her hair and medallion add to the immersion.

  • Lighting: so, those 20 lights you added to make sure every piece of your scene was subtly enhanced… great artistic work, but it’s going to kill your render times. Keep your light setup reasonable. Getting the perfect backlight isn’t quite as important for VR.

    Lighting is often used to bring forward the geometry of a scene, and give the illusion of 3D. But here, you’ve got the real thing, or almost! You don’t necessarily need all the subtle shadow plays.

    On the other hand, setting the atmosphere is just as important in VR, and clever use of lighting can do just that.   
  • Be very careful with glossy and transparency shaders. They don’t seem to work quite right in VR: the reflections differ slightly between the two eyes, which makes it hard to focus on the object. Again, experiment and do test renders to see what works.

  • Preview Runs: Unfortunately, VR makes little details stand out, so you may find problems with your scene that you never saw before. Clipping issues you’d never have noticed otherwise can become huge, jarring problems. Do plenty of test renders before you commit to rendering the whole animation.

    Unfortunately OpenGL doesn’t let you do preview renders for VR, so you’re stuck with Cycles for now. What I usually do before starting the full render of an animation is a run at 60-70% resolution, a very low sample count, and just one frame out of 5. Or sometimes I’ll just render with the final settings but only one frame out of 10 or 20; this way I won’t have to re-render those frames when I do the final video.
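Picking every Nth frame for a preview pass is trivial but worth being precise about, since those same frames get skipped later if you render the preview at final settings with Overwrite off. A sketch (my own helper, nothing Blender-specific):

```python
def preview_frames(start, end, step=5):
    """Frames to render for a quick preview pass: every Nth frame,
    inclusive of the start frame. If these were rendered at final
    settings, the full render (with Overwrite off) won't redo them."""
    return list(range(start, end + 1, step))
```

So a 1-to-600 animation previewed at one frame out of 20 is only 30 renders.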

3. Optimizing

The good news is, this whole section from my previous tutorial is now almost entirely obsolete with Blender 2.79. Yes, that is good news. You can still expect render times in the days or weeks for some videos, but it’s a huge improvement over 2.78. As a consequence I’m cutting out a lot of settings from the old tutorial that don’t really matter so much anymore. Let’s focus on the essentials.

  • Turn on GPU rendering: If you can and haven’t already, this is a must for rendering moderately fast in Cycles. This is found under User Preferences / System: choose CUDA for Nvidia GPUs or OpenCL for AMD cards. The good news in 2.79 is that the once-terrible OpenCL support has been improved to the point where the newest AMD cards actually outperform their Nvidia peers by a large margin. Quite the reversal.
image

Select CUDA or OpenCL under User Preferences > System. Save User Settings

image

Go back to render settings and make sure GPU compute is enabled.

  • Denoising: That’s the big new feature of 2.79. It’ll smooth out all the fireflies and dark spots that used to mar Cycles renders in older versions. I’m no expert on image quality, but it looks to me like the loss of detail is minimal. That means we can now afford to render scenes at much lower sample counts and still have them free of noise.
image

To enable denoising, go to render layers and tick the corresponding box. Default settings have worked fine for me so far. If you have more than one render layer, apply it at least to your main layer.

  • Render settings: This part requires a lot less trial and error with denoising on. These days I render almost all scenes with 100 samples and default values for the other parameters. You can increase the sample count if you’ve got some very complex plays of light going on, but otherwise 100 samples will give you good detail and no noise.
image

Un-check Square Samples or your 100 samples will instead become 10,000.

4. Rendering

The last thing you want to set up before rendering is the output. You absolutely want to output as an image sequence. It takes me 5-6 minutes to render a frame, so at 60 fps even a short clip is likely going to take days, and you want to be able to interrupt and resume the render at will.

image

Still on the render panel, choose JPEG or PNG with minimal compression.

Un-check Overwrite: this way, if you’ve already rendered some frames, it won’t re-render those, so you can resume an aborted render.

Then, pick Stereo 3D (both images in one file). 180-degree videos are usually laid out side-by-side, while 360° ones are top-bottom. Either will work on most players; it’s just a matter of convention.
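The combined stereo frame size follows directly from those layout conventions: side-by-side doubles the width, top-bottom doubles the height. A quick sketch (my own naming):

```python
def stereo_frame_size(eye_width, eye_height, layout):
    """Final image size for the combined stereo frame.

    'sbs' (side-by-side, the usual 180-degree layout) doubles the width;
    'tb' (top-bottom, the usual 360-degree layout) doubles the height.
    """
    if layout == "sbs":
        return (2 * eye_width, eye_height)
    if layout == "tb":
        return (eye_width, 2 * eye_height)
    raise ValueError("layout must be 'sbs' or 'tb'")
```

So 1920 x 1920 per eye in side-by-side comes out as a 3840 x 1920 file.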

That’s all the settings done, so now you can hit that big Ctrl-F12 button you’ve been itching to press for the last hour, right? Right?

image

SAVE your scene first! You’re about to start rendering for a loong time, and a lot can go wrong. Imagine rendering half a scene over a couple of days, then your computer crashes. You reopen the file and resume rendering, except you didn’t save your latest changes, so you’ve rendered half of the video with different settings. Happened to me.

Ok. NOW you can go for it. See you in several days.

5. Encoding

Barring any disasters during the render process, you should now have a few hundred rendered stereo images. You can already load those up into a VR viewer and check them out one by one. You have several options to finish up the process. Again, the cool thing about Blender is that you can do it all in Blender’s own video sequencer: add a soundtrack, a few effects even, overlay a logo - Blender can do all that and much more.

However, I don’t use Blender for encoding anymore myself, as I found other solutions can give you better encoding quality, more control, and access to more formats. A few of your options are:

  • Output from Blender in uncompressed AVI then use Handbrake to re-encode in the format of your choice.

  • Load up the image sequence in VirtualDub and encode from there.

  • Use some pro / enthusiast video editing solution such as Premiere.

In any case, the questions you need to ask yourself are going to be the same: what formats should I output to so that my videos are viewable on as many platforms as possible?

Now, I do my high-quality encodes at either 4000x2000 or 3840x1920 (1920x1920 per eye), 60 fps. The latter is the resolution most live-action VR erotica makers release at. I encode using the H.264 codec, 2-pass encoding, min bitrate 30000, max 40000, and maximum render quality settings turned on. That will work fine on Vive, Oculus, GearVR, and other high-end HMDs. Finally the video is saved in MP4 format, with AAC audio.

I then do another encode at 2160x1080, 60 fps, as that is the maximum resolution iPhones can play, and it works well enough for most other platforms. This is also encoded in H.264, at a 10k bitrate.

From what I’ve read, PSVR should be able to play as high as 3200 x 1600 resolution, so it could be worth it to do an encode run at that size too.
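If you script your encodes, the targets above fit neatly in a preset table. This is my own sketch built around the ffmpeg CLI (which the tutorial doesn’t use; the preset names, PSVR bitrate, and the frame-file pattern are my assumptions):

```python
# name: (width, height, fps, video bitrate in kbps) - from the text
# above, except "psvr" whose bitrate is my guess:
ENCODE_PRESETS = {
    "high_quality": (3840, 1920, 60, 40000),  # Vive / Oculus / GearVR
    "mobile":       (2160, 1080, 60, 10000),  # iPhone-compatible
    "psvr":         (3200, 1600, 60, 30000),  # assumed, untested
}

def ffmpeg_args(preset, pattern="frame_%04d.png", out="out.mp4"):
    """Build an ffmpeg command for encoding an image sequence to
    H.264/AAC MP4 at one of the presets (single-pass sketch)."""
    w, h, fps, kbps = ENCODE_PRESETS[preset]
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-vf", "scale=%d:%d" % (w, h), "-c:v", "libx264",
            "-b:v", "%dk" % kbps, "-c:a", "aac", out]
```

Run the returned list through `subprocess.run` (or print and paste it into a shell) once your image sequence is complete.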

I usually loop short animations several times at this point because many VR video players don’t handle looping well.

6. Viewing

Nothing to do with Blender at this point, but it bears repeating. Lock the door before viewing VR erotica. You’ll be entirely unaware of your surroundings for an extended period. Viewing regular porn, someone can walk in and catch you at an awkward moment. Viewing VR, someone can walk in, film you and post it on your own Facebook before you’re aware of anything. Hm, not that this has ever happened to me.

If you’re new to this, Virtualrealporn has a pretty comprehensive guide on how to watch VR movies on every existing platform.

Personally I use Simple VR Player on my HTC Vive, a paid app but well worth the money. Otherwise the free version of Whirligig will do the job, but the UI is a mess.

I also use a no-name VR headset for smartphones that I got for free in a phone shop promo. It makes for a surprisingly comfortable (and wire-free) experience. Of all the VR players available, the three that seem to do the job best for me are AAA VR Cinema, VaR’s VR Player, and VR Player Free. None of them is perfect, but they get the job done and play smoothly.

That’s all! Major props to Likkezg for sharing some of his Blender wizardry with me, and to everyone else who came to me with feedback on my work, I’ve been learning something new every day thanks to you guys.