Posts

Dear ImGui - Interactive settings for ocean water

The last crucial piece of the project implementation relates to the GUI. It is tedious to restart the application to test different settings, so I wanted to be able to change the settings of the ocean in real time. Using a GUI, I wanted the following settings to be modifiable at runtime.

For the ocean:
- Wave length
- Amplitude
- Wind speed
- Wind direction
- Normal displacement amount
- Vertex count
- Tile count

For the application:
- Camera movement speed
- Camera rotation speed
- Enable/disable wireframe mode
- Enable/disable skybox
- Enable/disable skybox lighting
- Selecting skybox cubemap

A very convenient and handy tool for debugging graphics applications is Dear ImGui, or simply ImGui. Integrating ImGui is simple, and it has an implementation-agnostic interface for whichever window environment and graphics library you are using. When the ImGui library is properly linked to the application, dialogs and GUI widgets are easily drawn to the screen by initializing an ImGui context. When a context is runn...
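As a minimal sketch of what the per-frame widget code can look like, assuming a hypothetical OceanSettings struct holding a few of the values listed above (names and ranges are illustrative, not the project's actual code):

```cpp
#include "imgui.h"

// Hypothetical container for the tweakable values; names are illustrative.
struct OceanSettings {
    float amplitude  = 1.0f;
    float windSpeed  = 20.0f;
    float windDir[2] = { 1.0f, 0.0f };
    bool  wireframe  = false;
    bool  showSkybox = true;
};

// Called once per frame, between ImGui::NewFrame() and ImGui::Render().
void drawOceanSettingsWindow(OceanSettings& s)
{
    ImGui::Begin("Ocean settings");
    ImGui::SliderFloat("Amplitude", &s.amplitude, 0.0f, 10.0f);
    ImGui::SliderFloat("Wind speed", &s.windSpeed, 0.0f, 50.0f);
    ImGui::SliderFloat2("Wind direction", s.windDir, -1.0f, 1.0f);
    ImGui::Checkbox("Wireframe", &s.wireframe);
    ImGui::Checkbox("Skybox", &s.showSkybox);
    ImGui::End();
}
```

The window and rendering backend (GLFW, SDL, OpenGL, etc.) supplies the NewFrame/Render glue around these calls, which is what makes ImGui implementation-agnostic.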

The magic of skybox reflections

Skyboxes and environment maps are certainly among the more powerful tools in physically based rendering for giving a scene a touch of realism. A skybox is essentially a cube map rendered onto a cube positioned in the background of the scene. A cube map is a set of six textures, one for each internal side of a cube. A normal 2D texture is sampled using uv-coordinates, representing the "x" and "y" axes of the image. That is, each vertex is assigned a uv-coordinate, from which a point on the texture is sampled in the fragment shader. Cube maps, since they represent the inside of a cube, are not sampled with uv-coordinates. Instead, one selects a point using a 3D vector pointing from the center of the cube towards any of the textured cube faces. This makes cube maps very handy for lighting calculations in 3D. Reflections are then mathematically very simple to create using a cube map, as we just have to calculate the re...
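The reflection vector follows the standard formula R = I - 2(N·I)N. A small sketch using glm (names are illustrative; in practice this typically lives in the fragment shader, where the resulting vector is used to sample the cube map):

```cpp
#include <glm/glm.hpp>

// Illustrative helper: compute the cube-map lookup direction for a mirror
// reflection at a surface point, seen from the camera.
glm::vec3 reflectionLookup(const glm::vec3& cameraPos,
                           const glm::vec3& surfacePos,
                           const glm::vec3& normal)
{
    glm::vec3 incident = glm::normalize(surfacePos - cameraPos);
    return glm::reflect(incident, normal); // I - 2 * dot(N, I) * N
}
```

The returned vector is then used directly as the 3D lookup into the skybox, e.g. `texture(skybox, R)` in a GLSL fragment shader.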

Gradients and Choppy waves.

As mentioned in the last post, the focus of the last few days was to fix proper normal calculations using the gradient of the height field. The gradient, together with the displacement in the x- and z-directions, only required setting up four more CUDA buffers and performing an inverse FFT pass on each of them. Calculating the gradient and the x-z displacement is easy and comes from equations 20 and 29 in Tessendorf's paper, which you should be familiar with by now. Computing the x- and z-gradients simply means multiplying the input for the displacement map by the imaginary factor ik, for each wave vector k, where k is the propagation direction of the wave at that point.

Formula for gradient calculation for realistic surface normals (Tessendorf 2001)

Left: x-gradient normalized to range 0-1. Right: z-gradient normalized to range 0-1.

One problem that occurred when calculating the surface normals using the gradient values was that the result of the FFT is naturally rather small....
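A small CPU-side sketch of the idea (names and memory layout are assumptions; in the project this happens in CUDA buffers before the inverse FFT passes): each element of the height spectrum is multiplied by i*k_x and i*k_z to give the slope spectra, and by -i*k/|k| for the choppy x-z displacement.

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Sketch: build the frequency-domain inputs for the extra inverse-FFT passes.
// htilde is the height spectrum h~(k, t); kx/kz hold the wave-vector components
// per element. Layout and names are illustrative, not the project's actual code.
void buildSlopeAndDisplacementSpectra(const std::vector<std::complex<float>>& htilde,
                                      const std::vector<float>& kx,
                                      const std::vector<float>& kz,
                                      std::vector<std::complex<float>>& slopeX,
                                      std::vector<std::complex<float>>& slopeZ,
                                      std::vector<std::complex<float>>& dispX,
                                      std::vector<std::complex<float>>& dispZ)
{
    const std::complex<float> i(0.0f, 1.0f);
    for (std::size_t n = 0; n < htilde.size(); ++n) {
        // Gradient (slope) input: multiply by i*k (Tessendorf eq. 20).
        slopeX[n] = i * kx[n] * htilde[n];
        slopeZ[n] = i * kz[n] * htilde[n];

        // Choppy displacement input: multiply by -i*k/|k| (Tessendorf eq. 29).
        float kLen = std::sqrt(kx[n] * kx[n] + kz[n] * kz[n]);
        if (kLen > 0.0f) {
            dispX[n] = -i * (kx[n] / kLen) * htilde[n];
            dispZ[n] = -i * (kz[n] / kLen) * htilde[n];
        } else {
            dispX[n] = dispZ[n] = std::complex<float>(0.0f, 0.0f);
        }
    }
}
```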

Seamless displacement

An essential property of the algorithm proposed by Tessendorf in Simulating Ocean Water is that the result of the FFT actually tiles seamlessly. This means that an individual tile ends in the x-direction the same way it starts, so the values along one edge continue smoothly into the opposite edge of the next tile, and the same holds in the z-direction.

Left: One individual displacement texture rendered using Tessendorf's algorithm. Middle: 10x10 tiles placed side by side. Right: Seamless edges between tiles observed from a skewed angle.

This implies that to render a very large ocean, it is only necessary to render one single tile mesh of ocean and then tile copies of it seamlessly together. One problem with this approach is that if the ocean is observed from far above the surface, so that many tiles are visible simultaneously, one can spot the recurring patterns in the meshes.

Left: Mesh rendered by displacing pixels in the y-direction according to the displacement map. Right: Mesh with Blinn-Phong illumination after naively c...
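As a rough sketch (helper names are hypothetical, not the project's code), tiling can be as simple as drawing the same tile mesh repeatedly with a translated model matrix, relying on the periodicity of the FFT output for the edges to line up:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical helpers assumed to exist elsewhere in the renderer.
void setModelMatrix(const glm::mat4& model);
void drawOceanTileMesh();

// Draw tileCount x tileCount copies of the same tile; tileLength is the
// world-space size of one tile. Because the height field is periodic,
// the copies join seamlessly.
void drawTiledOcean(int tileCount, float tileLength)
{
    for (int tz = 0; tz < tileCount; ++tz) {
        for (int tx = 0; tx < tileCount; ++tx) {
            glm::mat4 model = glm::translate(
                glm::mat4(1.0f),
                glm::vec3(tx * tileLength, 0.0f, tz * tileLength));
            setModelMatrix(model);
            drawOceanTileMesh();
        }
    }
}
```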

First signs of success

Yesterday, I integrated CUDA into the project, which went rather easily, except that I had to update some graphics drivers on my dual-booted Ubuntu to get my Nvidia 1070 running. According to the cuFFT documentation, the result of the inverse cuFFT is un-normalized, meaning that the result is scaled by the input size; in my case, 512x512:

iFFT(FFT(A)) = length(A) * A

This means that the result of the cuFFT operation has to be normalized by dividing by the problem size. However, after normalization the result still did not look like I expected. It was as if every other pixel was a little darker. After scratching my head for a while, I finally found a master's thesis by Fredrik Larsson (2012) named "Deterministic Ocean Waves", where he intuitively and clearly explains the wave transformation using FFT. In equation 4.6, he clearly shows how the translation from the frequency domain h-tilde(k, t) to the spatial domain h(q, t) essentially alternates signs due to the definition of...
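A minimal CPU-side sketch of the post-processing this implies, assuming a row-major 512x512 float buffer (illustrative only): divide by N*N to normalize, and undo the alternating signs by multiplying each sample by (-1)^(x+z), which is the usual correction for the half-period shift between Tessendorf's spatial coordinates and the FFT's index convention.

```cpp
#include <vector>

// Sketch (not the project's actual code): normalize the raw inverse-FFT output
// and flip the sign of every other sample.
void normalizeHeightField(std::vector<float>& h, int N)
{
    const float scale = 1.0f / float(N * N);
    for (int z = 0; z < N; ++z) {
        for (int x = 0; x < N; ++x) {
            float sign = ((x + z) % 2 == 0) ? 1.0f : -1.0f;
            h[z * N + x] *= sign * scale;
        }
    }
}
```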

CUDA cuFFT vs FFTW

My initial thought was that I would perform the FFT on the GPU, as it intuitively sounds fast to use the massively parallel hardware the GPU provides. However, depending on the size of the FFT, GPU-based algorithms may actually be outperformed by running the FFT on the CPU. Transferring memory buffers to the GPU is a slow operation, and this transfer overhead noticeably affects the overall performance of running an FFT there. Comparing CUDA cuFFT to serial FFTW, it has been shown that for small N, approximately N <= 4096, the gain from running cuFFT is lost due to slow memory transfer rates.

Figure from the lecture "Fast Fourier Transforms (FFTs) and Graphical Processing Units (GPUs)" by Kate Despain, University of Maryland Institute for Advanced Computer Studies (data originally from University of Waterloo, 2007): FFTW vs. cuFFT, including and excluding memory transfers; y-axis: FLOPS, x-axis: size of FFT work set.

In my project, ...
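For reference, a minimal host-side sketch of what a 512x512 inverse transform with cuFFT looks like (error handling omitted; names are illustrative). The memory transfer the lecture measures is the upload/download around the transform itself:

```cpp
#include <cufft.h>
#include <cuda_runtime.h>

// Sketch: 512x512 complex-to-complex inverse FFT on the GPU with cuFFT.
void inverseFFT512()
{
    const int N = 512;

    cufftComplex* d_data = nullptr;
    cudaMalloc(&d_data, sizeof(cufftComplex) * N * N);
    // ... cudaMemcpy the frequency-domain data into d_data here ...

    cufftHandle plan;
    cufftPlan2d(&plan, N, N, CUFFT_C2C);
    cufftExecC2C(plan, d_data, d_data, CUFFT_INVERSE); // in-place, un-normalized

    // ... cudaMemcpy the spatial-domain result back (or map it into OpenGL) ...
    cufftDestroy(plan);
    cudaFree(d_data);
}
```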

Hello Blog! - DH2323 project initial post

Project: Simulating ocean water

For the final project in the course DH2323 - Computer Graphics and Interaction I will primarily focus on rendering water and simulating deep-water oceans.

Background

Simulating water is a complex topic, as the behaviour of water depends on many interrelated variables. How deep is the water? How does the wind blow, and at what speed? Are there other objects in the water? These questions make the topic of rendering water complicated, but they are also what makes it particularly interesting. In games and 3D simulations, however, the water itself is often not relevant; what actually matters is the shape of the surface. A water surface can be described as a complex set of waves of different frequencies, and such a surface can be described using Discrete Fourier Transforms (DFT) [1]. In particular, we can describe a height field over all points of the surface using an inverse FFT, given the spatial and temporal frequencies of the water; that is ho...
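For context, the height field in Tessendorf's formulation is a sum of complex sinusoids, which an inverse FFT evaluates for all surface points at once; a sketch of the expression (notation as in the paper, where h-tilde is the wave spectrum and k the wave vector):

```latex
h(\mathbf{x}, t) = \sum_{\mathbf{k}} \tilde{h}(\mathbf{k}, t)\, e^{i\,\mathbf{k}\cdot\mathbf{x}}
```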