
11 September, 2012

A hunch

Have you noticed how nice realtime raytracing looks? I was just watching this recent video by Sam Lapere done with Jacco Bikker's famous Brigade renderer. Yes it's noisy, and the lighting is simple, and the models are not complex. But it has a certain quality to it, for sure; it feels natural and well lit.

I have a hunch.

It's not (mostly) because of the accuracy of the visibility solution. Nor is it because of the accuracy of the BRDF and material representation (which, at least in that scene, does not even seem to be a factor). I think that with our skimpy realtime GPU rasterization hacks we are technically capable of producing materials and occlusions of good enough quality.

I suspect that where games often fail is in finding the right balance. Raytracing does not need this balancing at all: diffuse, ambient, specular, they all bounce around as a single entity, and light simply is. In the realtime rasterization world this unity does not exist; we have components, and we tune them and shape them. We start with a bit of diffuse, subtract shadows, add ambient, subtract its occlusion and sprinkle specular on top. Somehow... And sometimes this complex mix is just right; most often, it's plain wrong.

Is it the artists' fault? Is it too complex to get right? I think not; it should not be. In many cases it's not hard to take references, and meaningful ones: measurements that split a real-world scene into components, experiments devised to validate our effects. To know what we are doing...

It's that we, rendering engineers, often work on technical features and not on their quality. We do "SSAO", add some parameters, and if the artists say they're happy we move on. This is our relationship with the image: a producer-consumer one.

I think this is irresponsible, and we should be a little more responsible than that, work a bit more closely together. Observe, help, understand. Light is both a technical and an artistic matter, and we often underestimate how much complexity and technicality there is in things that are not strictly complex routines in code. If you think about it, until a couple of years ago we were still doing all the math on colors wrong, and the solution is a simple pow(col, 2.2) in code; still, we got it spectacularly wrong, and called ourselves engineers. We should understand far better what we do, both its physics and the perception of that physics.
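To make the "simple pow" point concrete, here is a minimal sketch (function names are mine, and 2.2 is the usual rough approximation of the sRGB curve, not the exact piecewise transfer function): treat stored colors as gamma-encoded, convert to linear light before doing math on them, and convert back for display.

```cpp
#include <cassert>
#include <cmath>

// Rough gamma 2.2 decode/encode, standing in for the exact sRGB curves.
float toLinear(float c)  { return std::pow(c, 2.2f); }
float toDisplay(float c) { return std::pow(c, 1.0f / 2.2f); }

// Averaging two stored pixel values directly vs. averaging in linear light.
float naiveBlend(float a, float b)   { return 0.5f * (a + b); }
float correctBlend(float a, float b) { return toDisplay(0.5f * (toLinear(a) + toLinear(b))); }
```

Blending black (0.0) and white (1.0) gives 0.5 naively, but roughly 0.73 when done in linear light and re-encoded, which is what a true 50/50 mix of light looks like on a display.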

10 September, 2012

Follow-up: Why classes.

Originally I planned to start writing something far more interesting, but I spent all Saturday and some of Sunday playing with Mathematica to refine my tone mapping function, so, no time for blog articles. But I still wanted to write down this little follow-up. I had some discussions about my previous article with some friends, and I hope this helps. It surely helps me. You don't really have to read this; you most probably know it already :)
So... Let's assume, reasonably, that we use our language constructs as we need them. We move across abstraction layers when we need to, in order to make our work easier and our code simpler. So we start:

"Pure" functions. Structured programming won many years ago, so this is a no-brainer: we start with functions. Note that I write pure here not in the functional sense of purity, as that's violated already with stack variables; we could talk about determinism here, but I don't think the formalism matters.

We need state > Functions and pointers to state. This is where we left off last time. We could use globals or local static data as well, at least if we need a single instance of state. Global state has a deserved bad reputation because it can cause coupling, if exposed, and both do not play well with threads. What is worse, though, is that it hinders readability: a call to a routine with static state looks like any other at the call site, but it behaves differently. For these reasons we usually avoid static state and pass it explicitly instead; it's a trade-off between being more verbose and being more readable.
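A minimal sketch of this level (names are illustrative, not from the article): the state is a plain struct, and every routine takes it explicitly, with nothing hidden in statics.

```cpp
#include <cassert>

// Explicit state: a plain struct passed to plain functions.
struct Counter { int value; };

Counter counterMake()                 { Counter c = {0}; return c; }
void    counterAdd(Counter* c, int n) { c->value += n; }
int     counterGet(const Counter* c)  { return c->value; }
```

At the call site, `counterAdd(&c, 1)` makes it obvious which state is being touched, unlike a call into hidden static data.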

We need many instances of state, with complex lifetimes > Destructors and classes. Here, this is when we _need_ classes. The inheritance and OOP things are better served, and it should not be news, by purely virtual interfaces, so we won't discuss them here (nor later; we'll leave the rest out and stop at this level). 
Having methods in a class, public and private visibility, const attributes: all these are not much more than typographic conventions, and they are not a very compelling reason to use classes. A function taking a "this" pointer and a method call are not dissimilar in terms of expressive power; there are some aesthetic differences between the two, but functionally they are the same. Methods do not offer more power, safety or convenience.
What we really gain from classes is lifetime control of their instances: constructors, destructors, copy constructors. We can now safely create instances on the stack, in local scopes, have collections and so on. 
The price we pay for this, in C++, is that even though classes are structures, we can't forward-declare their members, nor do we have a way of creating interfaces without paying the price of virtual calls. So, in order to get all the advantages of destructors, the entire class has to be made visible. Moreover, C++ has no way of telling the size of a class without seeing its full declaration, so even if we had a way of creating interfaces*, we would still need to disclose all the details of our class internals.

This is where all the evil lies. And to be clear, it's not because we're "shy" about the implementation internals, not wanting other programmers to see them, or similar aesthetic considerations. It's because it creates coupling and dependencies: everyone that sees the class declaration also has to know about the declarations of all the member types and so on, recursively, until the compiler dies instantiating templates.

Now, I know, we can pay a little price and do pimpl. Or we can cheat and use pure virtuals but make certain compile-time arrangements in our "release" version so the compiler knows that each virtual has only one implementation and resolves all calls statically. Yes, it's true, and here is where the previous article starts, if you wish.

The beauty of multiparadigm languages is that they offer you an arsenal of tools to express your computation, and of course, funny exercises and Turing tarpits aside, some map better to certain problems than others. Now, what does "map better" mean? It might seem trivial, but it's the reason people argue over these things. So right before starting, let me say again what I think is the most important quality metric: malleability. If your field or your experience calls for different metrics, fine, you can stop here.

Quickly now! Malleability = Simplicity / Coupling, roughly. Simplicity = Words * Ease_Of_Reading * Ease_Of_Writing. Some clarifying examples: it's often easy to create constructs that are compact but very foreign (think of most abuses of operator overloading), or that are readable but very hard to write or change (most abuses of templates fit here).
*Note: For the hacker: if you wanted to scare and confuse your coworkers, the debugger and the tools, you could achieve non-virtual interfaces in C++. Just declare your class with no members other than the public interface; then, in the new operator, you can allocate more space than the size of the class and use that extra space to store a structure with all your internals. This fails, of course, for classes on the stack or as members of other structures; it's a "valid" hack only if we disallow such uses...
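For the curious, a sketch of that hack (do not ship this; all names are hypothetical, and I route the allocation through a create function rather than operator new to keep it short): the visible class has no data members, and the internals live in extra memory allocated right past the object.

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

struct WidgetImpl { int value; };

// The public class: no members, only the interface. alignas makes sure the
// memory right after the object is suitably aligned for the hidden internals.
class alignas(alignof(WidgetImpl)) Widget {
public:
    static Widget* create() {
        void* mem = std::malloc(sizeof(Widget) + sizeof(WidgetImpl));
        Widget* w = new (mem) Widget;
        new (w->impl()) WidgetImpl();    // internals constructed past the object
        w->impl()->value = 0;
        return w;
    }
    void destroy() {
        impl()->~WidgetImpl();
        this->~Widget();
        std::free(this);
    }
    int  value() const   { return impl()->value; }
    void setValue(int v) { impl()->value = v; }
private:
    WidgetImpl*       impl()       { return reinterpret_cast<WidgetImpl*>(this + 1); }
    const WidgetImpl* impl() const { return reinterpret_cast<const WidgetImpl*>(this + 1); }
};
```

As the note says, this breaks the moment someone puts a Widget on the stack or inside another object, because nothing guarantees the extra storage is there.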

02 September, 2012

Doing some homework. C-Style and pain.

So, lately I've been doing some coding at home for a couple of projects, which is not as usual an occurrence as I would like. One of these involved creating a little testbed for some DX9 rendering, pretty standard stuff, and I would normally use one of the things I have in C#, but this time I had reasons to use C++, so I opened Visual Studio and created an empty application with the wizard. Around two in the night I had most of what I wanted and I closed the case.

What usually happens when I code is that I really don't like the coding itself, even more so if I'm not working in a particularly expressive framework. Writing code is a chore, but after I start a project I come to think about it a lot while I do all the other things, and that is where most of the improvements happen: in my head (usually while I walk back home from work).

This time was no different. What I found worth blogging about, though, is how I happened to structure the code itself. Sometimes I used classes, sometimes I used "C-style" objects (functions taking as their first parameter the "state", what would have been the this pointer in C++), and I even wrote a few templates.

I didn't code this with any particular stylistic goal or experiment in mind; the only difference between this and work is that I was not constrained by a preexisting framework and hundreds of thousands of lines of code around my changes.

So, what guides these decisions? Of course, while I code I work out of experience and a certain sense of aesthetics. Now, my aesthetic is mostly being lazy, mixed with a sense that what I do has to be readable by someone else. For some reason, one that has been with me since I was a kid building Lego, I want to create things that last, so even in my laziness I think I'm not too sloppy.

What I think happens is like speaking a language fluently: you don't reason in terms of rules. These do exist, of course, but they become an intuition in your mind; and indeed these rules are not arbitrary, they are there to codify hundreds of years of practice and evolution, guided by the very same logic that builds up in your brain after practice.

This evolution goes from rules (education) to practice, to new rules, and in a similar way I started thinking about my code and trying to understand whether some rules could be derived from my practice. Now, don't expect anything earth-shattering; with all the discussions for and against OO, I think there is really nothing new to be said. As with all my posts, I'm just writing down some thoughts.

So, where did I use C style and where did I use objects? Well, in this very small sample, it turns out all the graphics API I had to write was C-style.

Think something like gContext = Initialize() somewhere in the main application; then most calls require gContext, and there is of course an explicit teardown function. The type of this context is not visible from the application: it's just a forward declaration of a struct, and everything happens by passing pointers.
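A sketch of that pattern, with made-up names (the actual testbed API is not shown in the post): the "header" part only forward-declares the context, so callers hold a pointer to something they cannot inspect.

```cpp
#include <cassert>

// "Header" side: the context type is opaque to the application.
struct GfxContext;
GfxContext* GfxInitialize();
void        GfxDrawFrame(GfxContext* ctx);
int         GfxFrameCount(const GfxContext* ctx);
void        GfxTeardown(GfxContext* ctx);

// "Implementation" side: only this file knows the layout, so it can change
// freely without forcing callers to recompile.
struct GfxContext { int frameCount; };

GfxContext* GfxInitialize()                    { return new GfxContext{0}; }
void        GfxDrawFrame(GfxContext* c)        { ++c->frameCount; }
int         GfxFrameCount(const GfxContext* c) { return c->frameCount; }
void        GfxTeardown(GfxContext* c)         { delete c; }
```

The application sees only the four function declarations and a pointer; every internal of the subsystem stays behind them.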

Nothing really fancy, right? What would this achieve, other than some nostalgia for pre-C++ days? Well, indeed it does not achieve anything new; what I find remarkable is how many constructs we invented to do the very same things, wrapped into a C++ class. Let's see...

First of all, this entity I was coding, in this specific case the API for the graphics subsystem, has very few instances. Maybe one, or one per thread or process... I've already blogged about the uselessness of design patterns and about the king of useless patterns... you know what I'm talking about: the singleton. Once upon a time a lead programmer told me that in games singletons were not really about having a single instance, but about control over the creation and destruction times of certain global subsystems. Yes. Just like a pointer, with new and delete. It might sound crazy, but if you look around you'll find many "smart" singletons with explicit create and destroy calls. Here, done that!

Now, a second thing that happens with singletons, one that experts will tell you to avoid, is that once you include the singleton class with all the nice methods you want to call, you also have direct access to the singleton instance. So everybody can call your subsystem directly, everywhere, and you won't see said subsystem being "passed around"; an innocent-looking function can call all the evil in its implementation and you'll never know. So, you learn to practice "dependency injection". Here, C-style by default does not encourage that: you can include the API, but you'll still need a pointer to the context, and that pointer can be kept inaccessible outside the application that created it, so the application passes it around explicitly, only to the functions it wants. Done that, too!

Third? Well, what about pimpl, facades and such? Yes, decoupling implementation from interface, hiding implementation details. C++ does not provide any convenient way of doing so. You might be as minimal as possible, ban private methods (starting with private statics, which have no reason to exist) and use implementation-side functions for everything. You can include in your class only the minimal API needed for the object, and you should, but no matter how you slice it, you can't hide private member variables. This is bad not only "aesthetically", in the sense of hiding the internals; it hurts more concretely in terms of dependencies and how much you have to include just to let another file use a given API. I won't delve into the details of the template usage, but the C style allowed me to keep all templates implementation-side, which is a great thing. I even used a bit of STL :)
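For contrast, a minimal pimpl sketch (class and names are mine): the header-side class keeps a single pointer, and every member variable stays implementation-side, at the price of one indirection and a heap allocation.

```cpp
#include <cassert>
#include <memory>

// Header side: one opaque pointer, no member variables exposed.
class Renderer {
public:
    Renderer();
    ~Renderer();
    void draw();
    int  drawCallCount() const;
private:
    struct Impl;                   // defined only implementation-side
    std::unique_ptr<Impl> impl;
};

// Implementation side: the real state lives here.
struct Renderer::Impl { int drawCalls; Impl() : drawCalls(0) {} };

Renderer::Renderer() : impl(new Impl) {}
Renderer::~Renderer() {}           // defined where Impl is complete
void Renderer::draw()                { ++impl->drawCalls; }
int  Renderer::drawCallCount() const { return impl->drawCalls; }
```

Note that the destructor must be defined where Impl is complete, because std::unique_ptr needs the full type to delete it; that little trap is part of the "pay a little price" the previous post mentioned.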

More? Well, most coding guidelines and best practices teach you to always declare your class's copy constructor, and it turns out that most times, if you follow this practice, you will end up declaring it private with no implementation. You don't want subsystems to be cloned around, and here, this is done "automatically" too by this new invention. What about "mixins", or splitting your subsystem implementation across multiple files? What about all the functions which need more than one "context": in which class should they live? Here, you have functions; this new invention avoids all these questions.
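The copy-constructor guideline, sketched (class is illustrative; in 2012 terms, pre-C++11; today you would write `= delete` instead): declare the copy operations private and never define them, so any attempt to clone the subsystem fails to compile.

```cpp
#include <cassert>

class Subsystem {
public:
    Subsystem() : initialized(true) {}
    bool isInitialized() const { return initialized; }
private:
    bool initialized;
    // declared, never defined: copying fails at compile (or link) time
    Subsystem(const Subsystem&);
    Subsystem& operator=(const Subsystem&);
};
```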

So what? Am I dissing C++ again? Or OO in general? Well, not really, it turns out, not this time, not as my main point... It's just that it's incredible to observe how often we complicate things needlessly. I do understand why this happens, because I was once "seduced" by all this too. "Advanced" techniques, cutting-edge big words and hype. Especially when you have more "knowledge" than reasoning and experience; I mean, out of university, OO is the thing, right? Maybe even design patterns...

And we lose sight, so easily, of what really matters: being lazy, using concepts only when they actually happen to save us something, lowering our code complexity. Going back to said project, I can have a look at where I did use classes or structures with methods. Every time the object lifetime was more complex I started using constructors and especially destructors; I remember in one implementation needing to type the call to the teardown of a given private structure a few times, and moving that code into a destructor. I wrote templates for some simple data structures just to avoid having that code intermixed with other concepts in my implementation. I would also have used classes if I had needed virtual interfaces and multiple implementations of a number of methods; operator overloading can be useful when it's not confusing, and so on and on.

Really, the key is to be lazy. Don't reach for structures because they are fancy, or because you think you _might_ need them in some eventual future; predictions almost never work (that's why we all work with "agile" and similar methods). Certainly don't use things because of their hype; try to understand. It also helps if you consider coding a form of pain, and if you hate your language, I think*. Also, you might consider reading this from Nick Porcino on Futurist Programming.

I love Blogsy. And the week of Vancouver summer we get.

*Note: Many of the times one sees crazy things, wild operator overloads, templates that take hours to understand, compile or debug, and so on, it's because someone loved the language itself, and the possibility of doing such things, more than they loved their own time or other people's time.

P.S. What bit of STL did I use? std::sort, because it really is great. Until you need to iterate over your own weird data structures, which might happen because, at least in my line of work, STL implements only some of the least frequently used ones (before someone asks: fixed_vector, hashmap and, yes, I know about C++11, caches, pools and lists made of linked chunks. One, two, and maybe three are worth a look; avoid Boost). Yes, you can implement your own random access iterator, it's not a titanic job. Still, in this specific case it would easily have taken more time and code than the data structures themselves took. Clearly STL was not made by a lazy person :)