Notice: This mailing list archive has been frozen since May 2001; it will stay online but will not be updated.
P3D Re: "Too much depth"
- From: Chris Jones <c.jones@xxxxxxxxx>
- Subject: P3D Re: "Too much depth"
- Date: Thu, 4 Nov 1999 12:45:42 -0700
At 20:10 01/11/99 -0700, Gabriel wrote:
>I'm not sure why you think they don't match. In my second statement, the
>different mechanisms I'm referring to are not "stereoscopic" depth cues,
>but rather monocular depth cues.
Sorry, I didn't interpret your statement in that way. Your second statement
was in response to me saying:
>>There isn't a boundary in the sense that there is a switching from one mode
>>of stereopsis to another. But it seems that different mechanisms dominate
>>at different distances.
Nothing about monocular depth cues...
>>Yes but the distinction I was making was that the close-up example isn't
>>parallax anymore.
>
>I'm not following this. The parallax you refer to is a retinal disparity.
>At close distances it's still a retinal disparity, so distance doesn't
>come into play. Granted, at farther distances the parallax is probably more
>important, but the key point is that they're BOTH retinal disparities. This
>retinal disparity is simply the difference in perspective, regardless of
>whether the disparity is within the object or at its edges (as in parallax).
So you would say that the brain processes all retinal disparities through
the same mechanism? I realise evolution can achieve a lot, but even trying
to conceive of an image-processing sequence that could do that is
horrendous - not to mention wasteful in terms of "processing time".
>>But there is more than one kind of stereopsis. The distant case I described
>>is known as "quantitative" or "fine" stereopsis, and the close-up case is
>>"qualitative" or "coarse" stereopsis. It's a different process even if the
>>results are broadly similar. And can arise from different cues.
>
>That's interesting but I haven't seen any examples or proof of this yet
>(that it is a different process). Wouldn't it still be retinal disparity
>at play here?
It's always retinal disparity in stereopsis. The original 1971 reference
for this, by Bishop & Henry, is in the post just before this one.
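For what it's worth, here's a back-of-the-envelope sketch of how retinal disparity scales with viewing distance, which shows why near and far viewing might plausibly engage different regimes. The 65 mm interocular separation and the distances are my own illustrative numbers, not anything from the references:

```python
import math

# Assumed interocular separation (~65 mm is a common textbook figure)
IPD = 0.065  # metres

def convergence_angle(d):
    """Binocular convergence angle (radians) for a point at distance d metres."""
    return 2 * math.atan(IPD / (2 * d))

def relative_disparity(d_near, d_far):
    """Retinal disparity (radians) between two points at different depths."""
    return convergence_angle(d_near) - convergence_angle(d_far)

# The same 10 cm depth separation, seen near vs far:
near = relative_disparity(0.30, 0.40)  # e.g. pencils at arm's length
far = relative_disparity(5.00, 5.10)   # same separation at 5 metres

# The near disparity comes out orders of magnitude larger than the far
# one - roughly the "coarse" vs "fine" range of operation.
print(math.degrees(near) * 3600, "arcsec (near)")
print(math.degrees(far) * 3600, "arcsec (far)")
```

Nothing here proves the two ranges use different brain mechanisms, of course - it just shows how wildly different the disparity magnitudes are that each regime has to handle.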
>>However for your example, do you touch the pencils while keeping your eyes
>>still at constant focus? My guess would be no. But more than that - the
>>example you give isn't relevant to still stereo photography, which is what
>>I was (not explicitly, admittedly) referring to.
>
>I'm not sure why it wouldn't be relevant. Even a static stereo snapshot
>would yield much more depth information than viewing the real pencils
>with one eye. As for the focus, I'm not sure this is a significant factor
>in this particular case, even at close range.
But would the brain be able to make use of static, out-of-focus
information? Not anywhere near as well as it would be able to use dynamic
out-of-focus input, I think.
>> More on that below, but
>>consider still photography - would a "natural" snapshot of two pencils
>>being moved together at close range be useful, or make a "good" image?
>
>Yes, I think so.
What for? A snapshot of what I see would probably be the in-focus tip of
one pencil and then a big diplopic mess. That's natural but a pretty poor
image, IMHO.
>Unless there have been some new developments in perception research, all I
>know is that motion depth cues are indeed very strong depth-perception cues,
>but this is known as a monocular depth cue (since it works regardless of
>whether you're using one eye or two).
With one eye you couldn't resolve the motion in three dimensions - the change
in size of an object as it moves toward you isn't precise or dynamic enough
(I think) to be used for depth-motion perception. It's much more accurate to
use two eyes.
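To put rough numbers on that claim: under small-angle approximations, the looming cue (rate of angular size change) and the binocular cue (rate of disparity change) both fall off with the square of distance, but their magnitudes differ by roughly the ratio of the interocular separation to the object's width. The figures below are my own illustrative assumptions:

```python
# Back-of-the-envelope comparison of two depth-motion cues for an
# approaching object, under small-angle approximations.
# Illustrative numbers (my assumptions, not measured values):
IPD = 0.065    # interocular separation, metres
width = 0.008  # pencil width, metres
v = 0.5        # approach speed, metres/second
d = 0.5        # viewing distance, metres

# Small-angle rates of change (radians per second):
looming_rate = width * v / d**2    # monocular cue: angular size change
disparity_rate = IPD * v / d**2    # binocular cue: disparity change

# The ratio is just IPD / width, independent of distance and speed -
# about 8x in favour of the binocular cue for a pencil-sized object.
print(disparity_rate / looming_rate)
```

For small objects like pencils, then, the binocular signal is several times larger than the monocular one, which is at least consistent with two eyes doing better here.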
Studies show a big difference in the range over which stereopsis operates
between the static and dynamic cases. Dynamic stereopsis uses a wider
disparity range (presumably for objects whizzing close to the head!), a
shorter latency and better use of areas of the retina away from the fovea.
There are also chrominance and luminance differences.
>>If you're interested
>>however, I can pass on some references to it given in other articles.
>
>I am all ears, or should I say all eyes!
OK, here are some of the most promising-looking articles that my search
turned up. Good luck - and if you make better sense of them than I did (not
hard, I think!) then do let me know :)
C. W. Tyler and J. Torres, "Frequency response characteristics for sinusoidal
movement in the fovea and periphery", Percept. Psychophys., 12, pp. 232-236,
1972.
K. N. Ogle, "Spatial localisation through binocular vision", The Eye, vol. 4,
pp. 271-320.
A. Francisco and F. Bergholm, "On the importance of being asymmetric in
stereopsis - or why we should use skewed parallel cameras", Int. Jour.
Comp. Vis., 29 (3), pp. 181-202, 1998.
P. B. Hibbard, M. F. Bradshaw and B. DeBruyn, "Global motion processing is not
tuned for binocular disparity", Vision Research, 39 (5), pp. 961-974, 1999.
--
Chris Jones
http://www.c.jones.cwc.net
ICQ #41744518
DALNet nick trickydisko
PGP key available on request