I'm working on creating some new works for an augmented reality exhibition in Paris that launches later this year. One of the pieces I'd like to include is a 3D model I created for a recent commission. This one:
It's pretty complex, and one of the requirements for AR models is that they don't exceed a certain file size - you have to trim them down by reducing detail, both on the wireframe and the texture. Luckily the limits on vertices are pretty high, so this model will be fine, but the texture limit is 1K. That's 1024 × 1024 pixels. For a complex model, that's pretty low.
Currently the texture is the same for all the models - a simple off-white non-reflective material. But for an AR engine, I have to try and export one combined model with as much texture info as possible to cover all the figures. And this has to fit within that 1K texture map. AR doesn't do any raytracing or clever indirect illumination calculations like a 3D renderer, so if I want the model to sit in its environment, I need to export it with one unified UV map that incorporates shadows, occlusion, diffuse light and so on. This is called "baking" a texture. Game artists do it all the time to make sure everything loads in real time.
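To make the idea concrete: baking means pre-computing lighting for every texel of the UV map and storing the result in the texture, so the AR engine just reads pixels instead of calculating light. Here's a toy sketch in Python - not Maya's actual baking code, and the function names (`bake_diffuse`, `normal_at`) are my own invention - that bakes simple Lambertian diffuse shading into a grayscale texture:

```python
import math

def bake_diffuse(size, normal_at, light_dir):
    """Bake Lambertian diffuse lighting into a size x size grayscale texture.

    normal_at(u, v) returns a unit surface normal for UV coords in [0, 1].
    Returns rows of values in [0, 255], ready to be written as an image.
    """
    # Normalise the light direction once, up front.
    mag = math.sqrt(sum(c * c for c in light_dir))
    light = tuple(c / mag for c in light_dir)

    texture = []
    for y in range(size):
        row = []
        for x in range(size):
            # Sample the centre of each texel in UV space.
            u, v = (x + 0.5) / size, (y + 0.5) / size
            n = normal_at(u, v)
            # Lambert's cosine law: brightness = max(0, N dot L).
            intensity = max(0.0, sum(a * b for a, b in zip(n, light)))
            row.append(round(intensity * 255))
        texture.append(row)
    return texture

# Toy "surface": a flat plane facing straight up.
flat = lambda u, v: (0.0, 0.0, 1.0)
lit = bake_diffuse(4, flat, (0.0, 0.0, 1.0))    # light from directly above
grazing = bake_diffuse(4, flat, (1.0, 0.0, 0.0))  # light skimming sideways
```

A real bake also accounts for shadows and ambient occlusion, but the principle is the same: the expensive lighting maths happens once, at export time, and the result lives in that 1K square forever after.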
Here's the cool thing - I need to essentially map all the surfaces of this model in order to bake it, and that map has to fit in the 1K square. I can do this lots of ways, but the simplest is to use the Automatic Projection option.
This lets the software (Maya in this case) do an analysis of the object, break it into pieces and pack those pieces into a square. It's a bit like when an aeroplane crashes and the crash investigation team lay all the salvaged wreckage out in an aircraft hangar. Here's the result of the automatic projection for the model I chose. It's fantastic. It's also really large, so I've cropped some parts I thought looked cool.
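The packing step - fitting all those flattened pieces into one square - is a classic bin-packing problem. Maya's packer is far more sophisticated, but a minimal sketch of the idea (a simple "shelf" packer; the function name and approach are my own, not Maya's) looks like this:

```python
def shelf_pack(rects, atlas_size):
    """Pack (width, height) rectangles into a square atlas, shelf by shelf.

    Returns (x, y) placements in input order, or None if something
    doesn't fit (at which point a real tool would scale the shells down).
    """
    # Place tallest pieces first so each shelf wastes less vertical space.
    order = sorted(range(len(rects)), key=lambda i: rects[i][1], reverse=True)
    placements = [None] * len(rects)
    x = y = shelf_height = 0
    for i in order:
        w, h = rects[i]
        if x + w > atlas_size:       # current shelf is full: start a new one
            y += shelf_height
            x = shelf_height = 0
        if w > atlas_size or y + h > atlas_size:
            return None              # doesn't fit in the atlas at all
        placements[i] = (x, y)
        x += w
        shelf_height = max(shelf_height, h)
    return placements
```

Run on a few UV shells sized for a 1K atlas, it lays them out left to right, row by row - a crude version of the wreckage-in-a-hangar arrangement the automatic projection produces.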
It's fascinating how software - led at every possible juncture by efficiency - creates these formal arrangements. It's museological and algorithmic at the same time. This is only one way that UV mapping and texture baking are wonderful. Here's another test I did for my film Soft Crash where I baked the lighting and shadows on the floor of an interior space to make the scene easier to render:
The appearance of this type of image, which I should restate is only of any real use to the machine and its calculations, is not only formally but also technically reminiscent of a very well-known art historical practice, most famously used by Man Ray. Here are a few of his "Rayographs", made by placing objects directly onto photosensitive paper.
I think the really interesting point here - and it's pretty speculative - is that there is a radical reading of art history that is yet to be made. Did the aesthetic experiments of 19th and 20th century artists prefigure a kind of machine vision that arose independently of art history?
Again - worth restating - baked UV texture maps and automatic UV mapping are primarily machinic modes of organising the visual. They emerge from software. Yes, software is created by people, but those people didn't think at all of Rayographs or museology when they were programming these tools. They thought of the best way of encoding visual data so it could be read by machines.
Worth thinking about, maybe?
Here's a couple of video tutorials about UV mapping and texture baking, in case your interest has been piqued.