
Topic awaiting preservation: Normal Chat (Page 1 of 1)

 
warjournal
Maniac (V) Mad Scientist

From:
Insane since: Aug 2000

posted 02-20-2004 05:28

Normals

A normal is a vector. It has a length of 1 unit. Yes, 1 unit. Just accept it. It points in the direction of the given face. That information can be encoded into an RGB document. Spiffy.
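If it helps to see the encoding as arithmetic, here's a minimal Python sketch of packing a unit normal into RGB bytes. The function name is made up for illustration; the idea is just the remap from [-1, 1] to [0, 255], with zero landing on 50% grey.

```python
import numpy as np

def normal_to_rgb(n):
    """Map a unit normal with components in [-1, 1] to RGB bytes in [0, 255]."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)  # enforce unit length
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

# A normal pointing straight at the camera encodes as
# R and G at 50% grey, B at full.
print(normal_to_rgb((0.0, 0.0, 1.0)))  # (128, 128, 255)
```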

Let's look at a simple 2d example.



That's a silly little slice of a scene.

Let's encode some 2d direction information.



In the upper-left we have the two axes shown in the first two circles. The third circle is both of them put together in R and G (B being the 50% that I'm so fond of). Can you see how the colours in those circles correlate to the colours that I quickly painted in? Shouldn't be too hard to see.

Now let's chat about occlusion for a bit. Occlusion is basically how much a point is blocked off from its surroundings. In 3d, it means darkening the areas that receive less light, which would be nooks and crannies and corners and things.

Now-a-days, a common way of rendering occlusion is with global illumination (GI). Photons are bounced around and some areas receive more or less light depending on the surrounding geometry.

For this, let's talk about a more manual way (I believe this algorithm was first used in a shader called DirtMap). There are some other ways that are tied to "modern" functions, but this way is just plain fun to talk about.

The renderer is busy going about its business. It comes to a pixel and finds its spot in the 3d scene. When it gets that point, a hemisphere is kind of created at that point with the "up" pole pointing in the same direction as the geometric normal. Gads.

Okay, then a bunch of rays are shot from the point, or center of the hemisphere, and checked for a collision with surrounding geometry. Then the ray data is used to determine the occlusion value.



Now, the hemisphere has two basic attributes: number of samples and distance. The more samples you have, the more rays are shot out from the point and the longer the render. Distance is basically the radius of the hemisphere, which can greatly affect the final occlusion render if it's not "scaled" decently to the scene.

The rays are shot and some hit geometry and some don't. The number of rays that make it out alive are counted, then that number is divided by the total number of samples to arrive at a final percentage. That final number is the occlusion value.

(You can take into account how far a ray goes before hitting surrounding geometry. Use that against the distance to get a percentage for the given ray. Then all ray percentages are averaged for an occlusion value. As far as I know, I'm the only one that has gone this route.)
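Here's a rough Python sketch of both counting styles described above. The `trace(point, dir, max_dist)` callable is a hypothetical stand-in for the renderer's ray query; assume it returns the hit distance, or None for a miss.

```python
import math
import random

def sample_hemisphere(normal):
    """Pick a random direction on the hemisphere around `normal` by
    rejection-sampling the unit sphere and flipping into the upper half."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        length = math.sqrt(sum(c * c for c in d))
        if 0 < length <= 1:
            d = tuple(c / length for c in d)
            dot = sum(a * b for a, b in zip(d, normal))
            return d if dot >= 0 else tuple(-c for c in d)

def occlusion(point, normal, trace, samples=64, distance=1.0):
    """Counting variant: fraction of rays that escape within `distance`."""
    escaped = sum(1 for _ in range(samples)
                  if trace(point, sample_hemisphere(normal), distance) is None)
    return escaped / samples  # 1.0 = fully open, 0.0 = fully occluded

def occlusion_weighted(point, normal, trace, samples=64, distance=1.0):
    """Distance-weighted variant: a blocked ray contributes hit_dist / distance,
    an escaped ray contributes 1.0, and everything is averaged."""
    total = 0.0
    for _ in range(samples):
        hit = trace(point, sample_hemisphere(normal), distance)
        total += 1.0 if hit is None else hit / distance
    return total / samples
```

More samples mean a smoother result and a longer render, exactly as described above; the `distance` parameter is the hemisphere radius that has to be scaled sensibly to the scene.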



Tada. A really crappy painting of a 2d occlusion rendering.

So, why talk about the long way? Surely not just because it's fun? Well, it makes explaining bent normals a little easier.

Remember all of those rays that were shot? Well, you can take that data to modify the geometric normals.
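One common way to do that modification, sketched in Python: average the directions of the rays that escaped and renormalise. The helper below is illustrative, not any particular renderer's implementation.

```python
import math

def bend_normal(normal, escaped_dirs):
    """Bent normal: average of the unoccluded sample directions, renormalised.
    Falls back to the geometric normal if every ray was blocked."""
    if not escaped_dirs:
        return normal
    s = [sum(d[i] for d in escaped_dirs) for i in range(3)]
    length = math.sqrt(sum(c * c for c in s))
    return normal if length == 0 else tuple(c / length for c in s)

# A wall on the +x side blocks those rays, so the surviving directions
# lean toward -x and the normal bends away from the wall.
print(bend_normal((0, 0, 1), [(0, 0, 1), (-0.7, 0, 0.7)]))
```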



Again, a quick-n-sleazy 2d painting.

How is this good? Not sure if I can explain this just yet.

More normal chat to come.


warjournal
Maniac (V) Mad Scientist

From:
Insane since: Aug 2000

posted 02-20-2004 23:14

Let's look at a regular 3d render of normals. Not too long ago I finally figured out how to do these with 3DS Max. It takes nested RGB Multiply and cool use of Fall-Off. Bit of a mindbender, but I managed it.

A regular normal render makes use of all of the shades in all 3 channels.



Notice that the R and G channels look very similar. For example, in the R channel, white will be on the left as geometry slopes that way from the perspective of the camera. So geometry left-to-right = shades of grey left-to-right. You can see the same kind of thing in G, except it's up/down.

Now, B is z-depth, except it's as geometry slopes away from the camera. Something like white-to-black in a towards/away fashion. I guess you can think of it as a height map.

This is all relative to the camera. If you move the camera around and rerender, the colours won't "stick" like a regular texture. All of those funky colours will shift.

Back when I first started researching normals, I hadn't figured out how to do them properly. So I had to hack together my own method. My method is in world space. That is, the colours will stick as the camera moves around.



These are my home-brew normals that I call "stroker normals". Named after the nick I use on other forums.

The good about stroker normals is that they are world-based. This makes certain things tons easier.
The bad is that 1/6 of the shades in RGB are useless for any given angle.

Both methods share a common downside, but I can't remember it right now.

Gotta go.

NoJive
Maniac (V) Inmate

From: The Land of one Headlight on.
Insane since: May 2001

posted 02-21-2004 03:23

Please put the tin foil back on your head.... the rays are getting to you! You are not "Normal."<lol>

Always amazing Warj.

InI
Paranoid (IV) Mad Scientist

From: Somewhere over the rainbow
Insane since: Mar 2001

posted 02-21-2004 03:37

The poster has demanded we remove all his contributions, lest he take legal action.
We have done so.
Now Tyberius Prime expects him to start complaining that we removed his 'free speech' since this message will replace all of his posts, past and future.
Don't follow his example - seek real life help first.

warjournal
Maniac (V) Mad Scientist

From:
Insane since: Aug 2000

posted 02-21-2004 07:08

InI, if you missed it in the other thread, you might want to give Creating Normals Map with C4D a quick glance. I suspect that is what you used in your spiffy java thing. Also, if you haven't heard of it, I think jrMan is right up your alley. The source comes with the download. Maybe you can do something interesting with some of the bits-n-pieces.

How can a normal render help in post production? Well, you can pull some pretty spiffy lighting tricks. For example, you can easily add rim lighting and other tweaks.

Take Eustace for example. On the left, a rather boring single light render. Blah. Well, using just the regular normal render, I was able to add some rim lighting down his right side.



Can do more tricks than that, but I'll leave it to your imagination.
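For the curious, the rim-light trick boils down to decoding the normal render back into vectors and doing a per-pixel dot product against a chosen light direction. A numpy sketch, with made-up function and parameter names, assuming a camera-space normal render like the Eustace one:

```python
import numpy as np

def rim_light(beauty, normal_rgb, light_dir, colour=(1.0, 0.9, 0.8), strength=0.6):
    """Add a fake light to `beauty` (float RGB, HxWx3, values in [0, 1])
    using a camera-space normal render (`normal_rgb`, uint8 HxWx3)."""
    n = normal_rgb.astype(float) / 255.0 * 2.0 - 1.0  # RGB bytes -> [-1, 1]
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    lambert = np.clip(n @ l, 0.0, 1.0)[..., None]  # N.L per pixel
    return np.clip(beauty + strength * lambert * np.asarray(colour), 0.0, 1.0)
```

Aiming `light_dir` from the side picks out the pixels whose normals face that way, which is exactly the rim down Eustace's right side.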

However, there is a kind of a major downside. That being: no shadows. Although, no shadows is not as bad as it sounds. Anybody that has done a lot of 3d lighting knows that all the lights don't always cast shadows. (Maybe someday all of those non-shadow casting lights will become obsolete due to cheap rendering of photometrics.)

In my fiddling, some tricks have to be pulled, and I haven't quite gotten them completely down to my satisfaction. With a normal render, you would think that Select > Colour Range would be good, but it's not. Seems to me that Colour Range is excessively weighted much like Desaturate. Not good.

1. Start with your normal render.
2. Pick a colour.
3. Start a new layer, fill it, and set to Difference.
4. Ad-Layer > Channel Mixer, set all channels to 33% and check Monochrome.
5. Ad-Layer > Levels, use the bottom slider to invert and the top sliders to taste.

You should end up with a selection mask. Just select all, copy merged, paste into an alpha, and do with it what you please.
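Those five layers work out to simple per-pixel arithmetic. A rough numpy sketch, assuming equal 33% channel weights and treating the top Levels sliders as a single `softness` value (the names are illustrative):

```python
import numpy as np

def colour_select_mask(normal_rgb, pick_rgb, softness=64):
    """Mimic the layer stack above: Difference against the picked colour,
    equal-weight Channel Mixer to mono, then inverted Levels.
    Returns a uint8 mask, 255 where the pixel matches pick_rgb."""
    diff = np.abs(normal_rgb.astype(int) - np.asarray(pick_rgb, dtype=int))  # Difference layer
    mono = diff.mean(axis=-1)                                # 33/33/33 Channel Mixer
    mask = 255 - np.clip(mono * (255.0 / softness), 0, 255)  # invert + Levels to taste
    return mask.astype(np.uint8)
```

A smaller `softness` tightens the selection to colours very close to the pick; a larger one feathers it out.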

A few times I have tried Ad-Layer > HSB with Sat: 0 instead of Channel Mixer. Sometimes it comes out fine, but most of the time it doesn't. I haven't quite figured this out.

If you want to fiddle, feel free to experiment with: Eustace Regular Normals (eregular.jpg).

Still more thoughts to come.

warjournal
Maniac (V) Mad Scientist

From:
Insane since: Aug 2000

posted 02-22-2004 15:23

I'll be mixing some stuff that I do know with some pretty wild speculation. Also currently without images.

Notice that Eustace is all by himself and that there is no environment around him. No walls, floors, or anything like that. This is basically because there is a difference between 2d and 3d math. You see, a normal map in Photoshop is only good for doing infinite directional-light stuff.

Let's say that you have a full-blown 3d scene with some omni lights. Say a floor, some walls, and two spheres. Well, if your main light is an omni somewhere between the spheres, using a normal render will be more work because you'll have to do everything separately. It's that difference between 2d and 3d.

Remember that occlusion hemisphere for doing bent normals and the regular normal render of Eustace? Well, if the normal render was a bent normal render, then tweaking his lighting in PS will have that hemisphere data to work with. It's like tweaking his lighting with the occlusion data built in. Well, you can kind of take that idea to help light an entire scene. This, however, is tricky.

You have your scene. No camera movement, and you do a bent normal render based on regular normals. Then you have to use your 3d program to do a mix of 2d and 3d compositing. Hmm... kind of weird, eh?

After you do the bent normal render, it is projected from the camera onto the scene. When the renderer is going about its business, it doesn't use geometric normals to calculate shading. Rather, it finds the appropriate pixel in the bent normal render and uses that for the given pixel. It's a perfect marriage. Why, it's even better than using an occlusion render to comp in PS. You can even move lights around, but you can't move the camera because the normals are camera-space. But even so, it's still better.

So, what if you want to move lights around and the camera? Seems to me that bent stroker normals can do the trick with some texture baking. If you render stroker normals and bend them with occlusion data, then bake them, you can theoretically move lights, the camera, and still have the benefit.

As far as I know, RenderMan is the only "software" out there capable of this. Unfortunately, I don't know enough about RM and SL to truly test. Also, I have the hardest time with vectors and transforms. I can do geometry and trig (bit rusty, though), but 3d transforms seriously twist my brain. Funny, that.

Time to cook a turkey.
Gotta go.

InI
Paranoid (IV) Mad Scientist

From: Somewhere over the rainbow
Insane since: Mar 2001

posted 02-23-2004 15:59

The poster has demanded we remove all his contributions, lest he take legal action.
We have done so.
Now Tyberius Prime expects him to start complaining that we removed his 'free speech' since this message will replace all of his posts, past and future.
Don't follow his example - seek real life help first.

hyperbole
Paranoid (IV) Inmate

From: Madison, Indiana, USA
Insane since: Aug 2000

posted 02-23-2004 18:53

Warjournal: 1) What do you mean by a bent normal? You use the term several times, but don't really explain what it means. Is it a term you explained in another discussion that I missed, or is it in common usage for an application like RenderMan?

2) How does this discussion of normals apply to Photoshop? I understand the use of normals to describe a surface, but I don't know of any place in Photoshop where you can describe a surface as having a normal that is not the screen normal.

3) Have you tried using homogeneous coordinates for doing 3d transforms? I find that a homogeneous 3d transform matrix makes all 3d transforms a lot easier. You can then pre-calculate your transform and apply it to many different objects without having to re-calculate the transform matrix for each object.
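A small Python/numpy sketch of that idea, assuming 4x4 homogeneous matrices acting on points with w = 1: build the transform once, then reuse it on any number of points.

```python
import math
import numpy as np

def rotate_z(deg):
    """4x4 homogeneous rotation about the z axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

def translate(tx, ty, tz):
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

# Compose once: rotate 90 degrees about z, then move +5 in x.
xform = translate(5, 0, 0) @ rotate_z(90)

p = np.array([1, 0, 0, 1.0])  # homogeneous point, w = 1
print((xform @ p)[:3])        # rotates to (0, 1, 0), then lands at (5, 1, 0)
```

Because translation lives in the matrix too, the whole chain collapses into one multiply per point instead of a rotate-then-add per object.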




-- not necessarily stoned... just beautiful.

warjournal
Maniac (V) Mad Scientist

From:
Insane since: Aug 2000

posted 02-24-2004 17:36

1. First post, second image. This is regular normals.
First post, last image. This is the scene with the normals bent to reflect surrounding geometry. Hemisphere occlusion is used to bend the normals.

2. This has a lot to do with compositing. With Eustace, I showed one way to use a regular normal render for comp tweaking in PS. I'm still working on other ways.

My big problem right now is that I can't do some of the renders, so I have to resort to pure speculation on some counts. I'm kind of hoping that someone out there will have an epiphany and help push things forward. For example, is Select > Colour Range really weighted as I suspect or isn't it? If it is, how to nullify the weighting?

Sorry about some of this, but my head hasn't been in the best place this past week.

3. RM has transform functions. However, I don't understand them and I can't seem to get them to work. I don't know why, but transforms turn my brain to mush. I understand trig and geometry. I understand the different coord systems. But I have the hardest time understanding transforms or even using them "blind". One of the cruelest ironies of my life. Maybe it has something to do with my dyslexia...?

I'll post some links later.

warjournal
Maniac (V) Mad Scientist

From:
Insane since: Aug 2000

posted 02-24-2004 22:58

Heh. Yes, I am dyslexic, but only in a few very specific situations. It's a kick in the head that gives me giggles.

Here are some links:
Depth Map Based Ambient Occlusion Andrew Whitehurst
Depth Map Based Ambient Occlusion ZJ
Ambient Occlusion
CS779 Final Project: Real-Time Ambient Occlusion

I'm not sure just yet, but I think I have a way of doing bent normals in Pixie. Take Andrew's idea of shadow maps, but use Pixie's irradiance cache instead.
