


Monday, 24 July 2017

Arecibo Reloaded

Several years ago, as long-term readers will recall, I made a CGI model of Arecibo Observatory because my then-boss told me to. This was then turned into a laser-etched glass cube, originally just as a nice present for the Observatory employees but subsequently sold at the visitor centre. The original model took about a week to make. It was based on the original telescope schematics plus careful site-walking. It had enough detail for the required glass cube, but not much more than that.




Late last year I bought myself a VR headset. I have a review post of that in draft, but suffice to say it's rather fun - though the technology is still immature. Naturally I wanted very much to convert my own content into VR format, because when it works, it really works. The sense of immersion is much, much greater than any other format.

There are different ways to produce VR content. One is to make an interactive, game-like format, where the user can walk around a virtual environment however they like. Of course this would be the most fun sort of VR, but you lose realism and detail. Well, more accurately, if I were to try this, my result would lack realism and detail because I don't have much experience creating interactive content. I'm much more familiar with another method that can be used for VR : pre-rendered video.

Pre-rendered content means you can apply all sorts of fancy lighting and material effects that give realistic results much more easily than with the interactive approach, and you don't have to worry too much about the vertex count. I was urged to try the Unity game engine, but in the end the burden of learning an entirely new interface was just too much. Upgrading the Arecibo model to VR standard was no small task in itself; the prospect of also learning new software with a radically different approach from Blender's turned "relaxing evening" into "yeah, after you finish work for the day, keep doing more work in the evening." So I went with what I knew : pre-rendered content in Blender.

This seems like a good opportunity to list some of the mistakes I made when learning how to get VR content working, but if you're really only here for the Arecibo stuff, feel free to skip the whole next section. What follows is a pseudo-tutorial on creating VR content; if anyone wants me to develop this into a full tutorial, then I will.


Creating VR : What Not To Do

Before I started playing with the Arecibo model I wanted to get VR content working with something simpler. The main guide I relied on was this excellent video tutorial :


I hate video tutorials and I cannot for the life of me understand why they're so popular, but this one is very good. A couple of things are worth adding/emphasising :
  • Blender's native 360 3D (spherical stereo) camera is only available using the Cycles rendering engine. I'll explain the reason for this in detail below; see the sketch after this list for what the setup looks like in Python.
  • Although it comes with various options for the type of VR content, e.g. top-bottom, side-by-side, these don't actually seem to do anything. You have to render the two images separately and join them together yourself (you can use Blender's sequencer for this, but you have to set things up manually - there's no automatic "side by side" button).
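To save anyone some hunting through menus, here's roughly what enabling the native approach looks like as a Python sketch. This assumes Blender 2.78 or later with Cycles; treat the exact property names as my best recollection rather than gospel :

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'             # spherical stereo needs Cycles
scene.render.use_multiview = True          # enable stereoscopy
scene.render.views_format = 'STEREO_3D'    # render left and right views

cam = scene.camera.data
cam.type = 'PANO'                               # panoramic camera
cam.cycles.panorama_type = 'EQUIRECTANGULAR'    # 360 projection
cam.stereo.use_spherical_stereo = True          # offset the eyes per viewing direction
```

You still have to combine the two rendered views into a top-bottom or side-by-side image yourself, as noted above.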
All of my content thus far has been made with Blender's traditional internal rendering engine, because it's fast and I know how to use it. Rendering speed doesn't seem to be getting much emphasis these days; the focus seems to be on ever-more realistic results. Which is fine, but I don't care for waiting hours and hours for a single still image. I want speed. So Cycles, which is generally much slower than the internal engine, isn't usually for me.

Unfortunately it's not as simple as changing which rendering engine you want and re-rendering an old scene. The two engines are radically different, and this means you have to remake all the old materials in a way that Cycles can understand. Initially this seemed so daunting I decided to try and find a workaround. I'd already done some standard side-by-side 3D content, so it seemed to me that the key was to figure out how to render 360 spherical content and just do this from two different positions. What you need for the headset is an image in equirectangular format, with longitude running along the horizontal axis and latitude along the vertical.

First I tried using the "panorama" option of the camera, experimenting with the field of view setting. Although it's possible to get something sort-of reasonable with this, it's not great - the image gets very distorted at the poles. This option is best avoided.

Fortunately I came up with what I thought was a clever solution. I'd render my old scenes using the internal engine with the classic "6x 90 degree F.O.V." images. Then I'd set up a skybox as normal, but I'd use Cycles materials for each face of the box so I could then use the Cycles spherical camera. Creating a shadeless Cycles material is pretty trivial, and avoids having to learn Cycles in any real depth - and more importantly, there's no need to convert any of the old materials. Plus this would be easy to animate.
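For the curious, a shadeless Cycles material is just an image texture wired straight into an emission shader. A minimal sketch of one skybox face, assuming Blender 2.7x (the file name is made up) :

```python
import bpy

# Image texture -> emission -> output : the face shows its baked render
# regardless of the scene lighting, i.e. it's effectively shadeless.
mat = bpy.data.materials.new("SkyboxFace")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//skybox_front.png")   # hypothetical path

emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")

links.new(tex.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```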

Part of a classic skybox. Each face was rendered in Blender internal using a 16 mm camera, giving a 90 degree field of view.
This actually works. When you render the above skybox (adding in the missing planes) using a Cycles panoramic, equirectangular camera, you get the following :


Seamless and perfectly distorted as an equirectangular image should be. Great ! If that's all you need - e.g. 360 degree but 2D panoramas - then there's nothing wrong with this method. You can test the images using, for example, this, or you can find whatever application you prefer for turning them into web pages. Google+ used to let you do this directly, a feature which got lost at the last update but is slowly being re-implemented.

You can also view this directly in Blender in realtime without needing a camera. What you do is join the faces of the skybox together (make sure your textures are UV mapped), subdivide the mesh a bunch of times, then use the "to sphere" tool. That turns your skybox into a skyball, which you can view just fine in Blender's viewport.
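In script form the whole conversion is only a few operator calls - a sketch, assuming the six UV-mapped faces are selected with one of them active :

```python
import bpy

bpy.ops.object.join()                    # merge the six faces into one mesh
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
for _ in range(4):                       # subdivide enough times for a smooth ball
    bpy.ops.mesh.subdivide()
bpy.ops.transform.tosphere(value=1.0)    # the "to sphere" tool
bpy.ops.object.mode_set(mode='OBJECT')
```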


That approach lets you skip the Cycles renderer altogether, as long as you just want a personal viewer. Another option is to use a Python script. After creating the skyball, knowing its radius and the position of each vertex, you could move each vertex to the position it should have on an equirectangular map (i.e. convert its Cartesian coordinates into spherical polar coordinates). That will get you an equirectangular map directly.
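Something like this is what I have in mind - a sketch rather than a tested script, assuming the skyball is the active object and centred on the origin :

```python
import bpy, math

obj = bpy.context.object                 # the skyball
for v in obj.data.vertices:
    x, y, z = v.co
    r = math.sqrt(x*x + y*y + z*z)       # radius of the ball
    lon = math.atan2(y, x)               # longitude, -pi..pi   -> image x
    lat = math.asin(z / r)               # latitude, -pi/2..pi/2 -> image y
    v.co = (lon, lat, 0.0)               # flatten onto a 2:1 plane
```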

I expected that since regular 3D content just consists of two side-by-side images, I could then render two skyboxes (or skyballs) from slightly different positions, and join the two equirectangular maps together. This does not work. Don't do it !

What you get if you try this is something very strange. From one viewing angle, everything looks great on the VR headset... but as you turn your head, the sense of depth changes. Turn your head 180 degrees and you realise the sense of depth is inverted... but if you then turn your head upside-down, everything works again !

This bizarre behaviour was not at all obvious to me, and I only understood it after a lot of Google searching. It turns out that you can't ignore the fact that your eyes move in space as you turn your head. Rendering from two fixed viewpoints is not good enough - you need to account for your eyes being at slightly different physical locations depending on your viewing angle. That's why the spherical stereo mode isn't supported in Blender internal. It requires the camera to render from a different location for each horizontal pixel of the image, which the internal renderer simply doesn't support.

Technically it might be possible to write a Python script to get around this problem. But it would be ugly and incredibly slow. You'd have to render each column of pixels of the two images from different locations, accounting for the rotation of the two cameras around their common centre, and then stitch them together. Since you're going to want your images to be at least 2k on a side, that means rendering 4,000 images per frame. Don't do that. Really, your only option is to go with Cycles.
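To see the scale of the problem, here's the basic geometry such a script would have to handle - just a sketch, with the eye separation as an assumed parameter :

```python
import math

WIDTH = 2048     # output image width in pixels ("2k")
IPD = 0.065      # eye separation in metres (assumed)

def eye_positions(column, centre=(0.0, 0.0, 0.0)):
    """Left and right eye positions for one column of the equirectangular image."""
    lon = 2.0 * math.pi * column / WIDTH - math.pi   # viewing longitude
    # Offset each eye perpendicular to the view direction at this longitude.
    ox = 0.5 * IPD * math.sin(lon)
    oy = -0.5 * IPD * math.cos(lon)
    cx, cy, cz = centre
    return (cx - ox, cy - oy, cz), (cx + ox, cy + oy, cz)

# 2048 columns x 2 eyes = 4096 separate camera positions for every frame.
```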


Arecibo is a complex mesh, so once I resigned myself to the need to use Cycles, I thought I'd start with something simpler. The ALFALFA animation seemed like a good choice : 22,000 galaxies, all with very simple, scriptable materials. That would look great in 360 3D VR, wouldn't it ? Being surrounded by a huge mass of galaxies floating past would look pretty shiny, eh ?


It would. And scripting these materials turned out to be extremely simple, which I was rather pleased with. But alas ! It didn't work. Cycles may be technically more capable than the internal render engine, but it has an extremely irritating hard limit on the number of image textures it supports : 1024. The only way around this is to edit the Blender source code. I'm told this would not be so difficult, but I didn't fancy trying that.
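The scripting really was the easy part - something along these lines, with hypothetical object and file names. The catch is that each galaxy sprite is its own image texture, so a catalogue of thousands sails straight past that 1024 limit :

```python
import bpy

for i, obj in enumerate(bpy.data.objects):
    if not obj.name.startswith("Galaxy"):     # hypothetical naming scheme
        continue
    mat = bpy.data.materials.new("GalaxyMat.%05d" % i)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//sprites/galaxy_%05d.png" % i)
    emit = nodes.new("ShaderNodeEmission")
    links.new(tex.outputs["Color"], emit.inputs["Color"])
    links.new(emit.outputs["Emission"], nodes["Material Output"].inputs["Surface"])
    obj.data.materials.append(mat)
```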

So I gave up and decided to do the thing properly. Arecibo at least didn't need a thousand different image textures.


Remodelling Arecibo Observatory

The original mesh wasn't in too bad a state. Not so long ago I'd done some tidying up to make an animation for the visitor centre, so I'd already added some details that weren't in the original model.


Of course it isn't perfect. Strange flickering besets the landscape (and to a lesser extent the trees); Blender's textures are not always as stable as they should be. The trees are rather too deciduous for the tropics, but those were the best tree sprites I could find - and overall, I rather like the forest effect.


Viewed closer, the model looks acceptable, though it lacks detail. The materials are decent, but of course they are far from perfect.



However, it looks best from below. The low resolution of the landscape is a problem - this was the highest resolution available from the USGS, but it's not really enough, and manually editing it would be quite a task. And while the rocky texture looks OK from a distance, it's not so great close up. All these problems disappear from a different viewing angle.


That starts to look halfway respectable - or at least, the top half of the image does; the lower half, not so much. Here's a reference photo for comparison.


I began the VR conversion with the existing materials. Unfortunately, I couldn't find an acceptably fast render solution with the trees, so they had to go. Much work went into creating landscape materials that the viewer could accept as representing rocks and trees. Learning how to distribute different textures using Cycles materials was one of the hardest parts of the process, but eventually something clicked and it started to make sense.


I dallied with getting the rocky areas to have some displacement, but I couldn't get this to work well, so I stopped. Certainly there's a lot of scope for improvement, but it's fast. With the plan being to have the animation take the user on a walking-pace tour of the telescope, rendering speed was all-important.

Fortunately, the telescope itself doesn't feature too many complex materials. Converting them to Cycles format was relatively painless.


The one major exception was the main white paint material. That went through many iterations before I got the balance right. Eventually I realised two things : 1) from reference photos, the material has different levels and types of dirt depending on where it is and when the photo was taken; 2) it's actually white. Not grey - bright white. Making things a brighter shade of white in Blender is one of those problems that's very simple once you know the answer, but unexpectedly difficult until you do : make the lights brighter ! Yes, you might then have to adjust all of your other materials, but that's what you gotta do.
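In script form the fix is tiny - a sketch, assuming Cycles sun lamps with node trees enabled (the factor is arbitrary) :

```python
import bpy

# Brighten every sun lamp rather than dirtying the white paint material.
for lamp in bpy.data.lamps:
    if lamp.type == 'SUN' and lamp.use_nodes:
        strength = lamp.node_tree.nodes["Emission"].inputs["Strength"]
        strength.default_value *= 2.0    # tune by eye
```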



To keep rendering times short, I basically disabled all of Cycles' fancy lighting effects. Light bounces were reduced to their minimum values, except for transparency since I needed a few transparent materials (the fence mesh material - sometimes you can see other fences through the fence, and you need multiple "bounces" for this to render correctly). I used "branched path tracing" rather than the regular "path tracing" to make sure everything was set on minimum. That completely eliminates the grainy look that often plagues Cycles renders, reducing it back to something approaching the internal render engine in look and speed. Render times were slashed from several minutes per frame (using the default settings) down to 30 seconds.
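For reference, those settings look something like this in Python - a sketch of the 2.7x-era properties, with the exact bounce counts as my assumptions :

```python
import bpy

cycles = bpy.context.scene.cycles
cycles.progressive = 'BRANCHED_PATH'    # per-type sample control
cycles.aa_samples = 1                   # minimal anti-aliasing samples
cycles.max_bounces = 0                  # no indirect lighting at all
cycles.diffuse_bounces = 0
cycles.glossy_bounces = 0
cycles.transmission_bounces = 0
cycles.transparent_max_bounces = 8      # so fences show through fences
```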

Of course the penalty is that the render isn't as realistic as it might be. An annoyance with Cycles is that it doesn't support hemi lights, which are useful for faking diffuse background light. I had to make do with crappy sun lights instead, but beggars can't be choosers.

One important decision that had to be made was the level of detail I was prepared to add. The original was very simple - fine for distance shots, but not suitable for close-ups. Also, while the telescope schematics contain everything you could ever want about the superstructure, they contain nothing about anything else : the walkways, the waveguide, the cables - none of these are included at all. And the real telescope is, in many places, ferociously complex.



Worse, I don't have many reference pictures of some of the most complex areas - you don't tend to take photos of those places.

You can see in the above that the major girders of the telescope are all complex features made of many sub-girders and supports. I had to neglect these. It would have made the modelling process incredibly tedious and the model impossible to work with. As it was, the final vertex count was a mere 650,000 : with the girders done in full detail it would probably have been tens of millions. I suppose it might be possible to use image textures, but that can be for the next iteration. So I went with "include all the major structures, but don't render them in full detail."

I also had to compromise on the cables. When you're actually up there, the platform is an even messier place than it looks from the photographs. Trying to track every single cable would have been absolutely impossible.


The waveguide - which carries the signal received from the telescope back to the instruments in the control room for analysis - also had to be compromised. You can see it in the above photograph, running vertically through the image right of centre and looking a bit like an air vent. Following its precise path wasn't possible, so I simply included it where I could, but it has quite a lot of gaps in the final model. Which means my virtual model wouldn't really function. Oh well.

What I did try to include as much of as possible was everything else : all those secondary details like railings, lights, boxes, signs, rivets. Great care was taken with each reference photo to include the unique features of every part of the platform, rather than inventing random industrial details. If this is art, then it's art without much creativity in it.




The lower section rotates, of course, which is why it looks different here than in the above reference photo.


Several mistakes here. I put the stairs leading down on the inside, whereas actually they're found on the outside of the azimuth arm. The building is slightly too tall. And the two separate sections should be connected. Oh well.



Again some differences here because the arm was rotated at a different angle in the reference photos.



Of course there are a lot of differences in this dense, complex area where I had few good reference photos. Eagle-eyed viewers will notice that the dirt pattern on the diamond plate flooring changes depending on which level you're standing on.


Pretty much all of the details shown above - and more besides - are new for the VR video. Not everything is visible in the VR display - including, unfortunately, the signs. The signs are where I allowed myself a creative outburst.








And so, without further ado, the tour. Starting from the catwalk, it proceeds at walking pace along the top of the triangle, then descends to the central pivot section. From there, look up to see the sky as seen using Arecibo at the 21 cm wavelength. Then it jumps to the upper section of the azimuth arm and walks from one end to the other. It lasts 3 minutes. The sound is a free industrial sound I found somewhere on the web (it sounds vaguely like the cooling system/motors of the telescope); the coquis are my own recording. I don't remember if you can hear those bloody stupid little frogs from the telescope platform or not, but my abiding memory of Puerto Rico is that the basic soundtrack is coquis wherever and whenever you are.

So grab a headset or Google Cardboard and enjoy. And if you don't have a headset, grab some 3D glasses and watch it on your PC, using your mouse to look around. And if you don't have any 3D glasses, just watch it in regular 2D 360 mode.


Final remarks : this isn't done. Eventually I want to extend the tour to the lower section of the azimuth arm, but this is quite a complicated place so it will take more time. I'll also probably try and fix some of the more serious known errors. Most irritatingly, I can't seem to find a codec that gives good quality on this, so I suspect you're losing quite a lot of detail. Tests on still images seemed to give rather better results than the animation, so hopefully the next version will look shinier. I'd like to render in 4k; this one took 80 hours and over 30 GB of rendered files though, so that requires a bit of logistical planning.

Final final remarks : a lot of work has gone into this - tens of hours, if not more - and it's already been used for one commercial product. So unless you're a) an Arecibo Observatory employee or b) someone I've known for many years and already trust, then no, you cannot "just have" a copy of the model. Please stop asking, it's rude. To end on a happier note though, if you ask Bathsheba Grossman very nicely, it's possible you can buy one of the glass cubes. They're very sparkly and make a nice conversation piece. You can't walk around inside them, but at least you don't need a headset.
