Sunday, October 7, 2012

ISMAR 2011

Having nice global illumination between real and virtual objects is not enough to get a plausible illusion. There are still some important points missing that make virtual objects easily distinguishable from real objects:
  • The tracking often jitters or, worse, loses the virtual objects entirely
  • The colors of virtual objects do not match the colors of the surrounding real objects
  • The virtual objects have a perfect appearance while the real objects don't
  • Camera artifacts distort the real objects, but normally not the virtual ones, since those are rendered on top of the video image
So in order to get a plausible illusion we have to deal with these problems. In our paper "Adaptive Camera-Based Color Mapping For Mixed-Reality Applications" we tried to find an automatic method that applies the color mapping characteristics of the camera to the virtual objects.
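To give a rough idea of the principle (this is a simplified sketch, not the paper's actual algorithm): if you have reference colors with known ideal values visible in the scene (e.g., a color chart) and their observed camera values, you can fit a simple per-channel gain/bias mapping and apply it to the rendered virtual colors so they pick up the camera's color characteristics.

```python
import numpy as np

def fit_channel_mapping(reference_rgb, observed_rgb):
    """Fit a per-channel linear model: observed ≈ gain * reference + bias.

    reference_rgb, observed_rgb: (N, 3) arrays of corresponding colors.
    Returns per-channel gains and biases (each of shape (3,)).
    """
    gains, biases = [], []
    for c in range(3):
        # Least-squares fit of [gain, bias] for this channel.
        A = np.stack([reference_rgb[:, c], np.ones(len(reference_rgb))], axis=1)
        gain, bias = np.linalg.lstsq(A, observed_rgb[:, c], rcond=None)[0]
        gains.append(gain)
        biases.append(bias)
    return np.array(gains), np.array(biases)

def apply_mapping(virtual_rgb, gains, biases):
    """Apply the camera-like color mapping to rendered virtual-object colors."""
    return np.clip(virtual_rgb * gains + biases, 0.0, 1.0)
```

A real camera response is of course non-linear and time-varying (auto exposure, white balance), which is what makes an adaptive method necessary; this sketch only shows the fit-then-apply structure.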

New Zealand

I was lucky enough to spend an awesome time in Christchurch, New Zealand at the famous HITLabNZ. During my three-month stay we developed a small user study that analyzed the influence of our differential instant radiosity method on task execution in mixed-reality scenarios. However, for the chosen tasks no significant differences could be measured.

A big thanks goes to all the people at the HITLabNZ for such a great time!!!

Writing more about New Zealand is pretty much useless - you should pack your stuff and go there: IT IS ABSOLUTELY AMAZING!!! :-)

ISMAR 2010

The first work we tried to publish was called "Differential Instant Radiosity for Mixed Reality". On this paper Christoph Traxler, Werner Purgathofer, Michael Wimmer and I worked on a real-time global illumination rendering system for mixed-reality scenarios. My master thesis had already dealt with real-time global illumination using a method called instant radiosity, which basically places virtual point lights at positions where light hits a surface.
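The core of instant radiosity can be sketched in a few lines (my implementations ran on the GPU; this is just the principle, with a hypothetical `trace_ray` callback standing in for the ray tracer):

```python
import random

def random_direction():
    """Uniform random direction on the unit sphere (rejection sampling)."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n = sum(x * x for x in v) ** 0.5
        if 1e-6 < n <= 1.0:
            return [x / n for x in v]

def generate_vpls(light_pos, light_intensity, trace_ray, num_paths):
    """Instant radiosity, first bounce: shoot random rays from the light
    source and place a virtual point light (VPL) wherever a ray hits a
    surface. trace_ray(origin, direction) is assumed to return
    (position, normal, albedo) for the nearest hit, or None.
    """
    vpls = []
    for _ in range(num_paths):
        hit = trace_ray(light_pos, random_direction())
        if hit is not None:
            pos, normal, albedo = hit
            # Each VPL carries a share of the light's power, tinted by
            # the surface albedo at the hit point.
            power = [light_intensity[c] * albedo[c] / num_paths for c in range(3)]
            vpls.append((pos, normal, power))
    return vpls
```

The scene is then lit by treating every VPL as a small point light, which approximates one bounce of indirect illumination; continuing paths from the VPLs yields further bounces.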
We submitted our work to the ISMAR 2010 conference held in Seoul, South Korea and were happy to hear that it got accepted. We also presented our method at a demo booth and were very happy again about the positive feedback we received. Then, to our total surprise, we won the "Best Paper Award" for our method - YEAH!!! :-)

Main Features:
  • Real and virtual objects cast shadows onto each other
  • Real and virtual objects may cause indirect illumination onto each other
  • Real and virtual spot light sources are supported
  • Multiple light bounces
  • Real geometry must be pre-modeled
  • Double shadowing artifacts may occur
  • Inconsistent color bleeding may occur
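The "differential" part of the method can be sketched per pixel: render the pre-modeled real scene twice, once with and once without the virtual objects, and add the difference to the camera image, so shadows and color bleeding caused by virtual objects carry over onto the video. A minimal single-channel sketch (not the actual GPU implementation):

```python
def differential_composite(camera, L_full, L_real, virtual_mask):
    """Per-pixel differential rendering (single-channel sketch).

    camera       -- the captured video image
    L_full       -- rendered radiance of the real + virtual scene
    L_real       -- rendered radiance of the real scene alone
    virtual_mask -- 1 where a virtual object covers the pixel, else 0
    """
    out = []
    for cam, lf, lr, m in zip(camera, L_full, L_real, virtual_mask):
        if m:
            # Virtual surface: show its full rendering directly.
            out.append(lf)
        else:
            # Real surface: add the illumination change caused by the
            # virtual objects (negative for shadows) to the video pixel.
            out.append(max(0.0, cam + (lf - lr)))
    return out
```

This also shows where the listed artifacts come from: if the modeled real geometry does not match reality, the difference term adds shadows or color bleeding in slightly wrong places.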

What happened so far....

Okay, so my master thesis was done and the blog - while never really alive - went completely dead.

However, since 2009 lots of stuff has happened. One big change was the start of my PhD (doctoral) studies at the Vienna University of Technology. I got the chance to work on a very interesting project called RESHADE, which stands for "Reciprocal Shading for Mixed Reality". The main goal of this project is to develop methods that make virtual objects indistinguishable from real objects in mixed reality scenarios. You can find a more detailed description here:

Since I started working on this project I have also managed to publish some work that I will present in follow-up posts...

Tuesday, July 28, 2009

Real Time Global Illumination Using Temporal Coherence

It's done!!! I've finished my master thesis!!! :)

Here's a video about the results of my work:

Saturday, April 4, 2009

Realtime Global Illumination using Temporal Coherence

I'm currently working on my master thesis, which computes global illumination in real time. The thesis builds on the "Imperfect Shadow Maps" approach. I also use temporal coherence to reuse as much information as possible from the previous frame. The work is not finished yet, but here's a video with the first results. It was captured on an Intel i7 CPU system with 6GB RAM and two GTX 295 graphics cards running in SLI mode. With 3 light bounces the application runs at 60fps.

Main Features:
  • No preprocessing necessary
  • Fully dynamic scenes and lighting
  • Multiple light bounces
  • Fast through exploiting temporal coherence
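The temporal coherence idea can be sketched as an exponential blend: instead of recomputing the indirect illumination from scratch every frame, each frame's new (noisy, cheap) estimate is blended with the previous frame's result. A minimal per-pixel sketch (the thesis does this on the GPU with reprojection; the blend weight `alpha` here is just an illustrative parameter):

```python
def temporal_blend(prev_indirect, new_indirect, alpha=0.1):
    """Blend this frame's indirect-light estimate with the previous frame's.

    A small alpha reuses more history: smoother results and less work per
    frame, at the cost of reacting more slowly to lighting changes.
    """
    return [(1.0 - alpha) * p + alpha * n
            for p, n in zip(prev_indirect, new_indirect)]
```

Over many frames the blended value converges to the true solution while each individual frame only contributes a small, cheap sample, which is where the speedup comes from.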

Sunday, August 31, 2008

Gruppenzwang - Realtime Graphics Demo

In the real-time rendering laboratory course my brother Wolfgang and I developed the graphics demo 'Gruppenzwang'. The fun part was that we could implement whatever effects we wanted, and I'm quite proud of the result and this list:
  • Shadow maps with soft shadows
  • Screen-space ambient occlusion
  • Parallax occlusion mapping
  • Swarm simulation
  • Spline meshes
  • Pixel-perfect spheres (via ray casting)
  • Volume renderer
  • Microscope effect
  • Normal-mapped microscope effect
  • Motion blur
  • Flow simulation
  • Dynamic cube map environment mapping
  • Screen distortion
In contrast to the other projects presented on this site, we developed a new rendering framework as the underlying basis for the demo. It has a better object-oriented design and is more flexible than the previous one. We used DirectX 9 for rendering and the fragment linker to compose the final material shaders - for example, ray casting (for the spheres) combined with three different surface shaders.
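The composition idea itself is simple and worth sketching (the real thing used the D3DX fragment linker on HLSL fragments; this Python sketch with made-up fragment names only shows the concept of assembling one shader from reusable pieces):

```python
def compose_shader(fragments, *names):
    """Concatenate named shader fragments into one shader source string.

    fragments -- dict mapping fragment name to its source code
    names     -- fragment names to combine, in order
    """
    return "\n".join(fragments[n] for n in names)

# Hypothetical fragment library: one geometry technique, several surfaces.
fragments = {
    "raycast_sphere": "// ray-sphere intersection, writes hit point + normal",
    "surface_phong":  "// Phong shading of the hit point",
    "surface_toon":   "// toon shading of the hit point",
}
```

This way the ray-casting code is written once and paired with any of the surface shaders, instead of duplicating it in every material.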

>>> Download Demo <<<

Project Members: