P3D Re: "Too much depth"
- From: Chris Jones <c.jones@xxxxxxxxx>
- Subject: P3D Re: "Too much depth"
- Date: Thu, 4 Nov 1999 12:46:13 -0700
At 17:50 01/11/99 -0700, Abram wrote:
>This friendly chat almost seems to deteriorate into a scientific
>discussion (:-)).
Eeek! :)
BTW apologies for the delay in responding, I was out of the office for a
couple of days.
>First, depth and distance perception is not the same.
OK, it appears we may have been using different terminology. I have been
using "depth perception" as an umbrella term which covers all the various
ways in which the brain perceives depth along the z-axis (i.e. away from
the observer). This would include what you're defining as distance
perception as well. My definition includes both monocular and binocular cues.
I've been using "stereopsis" as a narrower term within depth perception,
restricted to the binocular cues involved in fusing the retinal disparities
between the left and right images.
I assume your definitions are correct, but it would be useful if someone
could confirm for me - I'd hate to have a discussion go round in circles
because we're talking about different things!
>Convergence perception still seems to be regarded as a factor
>in this scaling process, but this last decade it turns out
>that other factors, especially vertical disparity, are
>more important, see for example:
>Rogers BJ, Bradshaw MF "Vertical disparities, differential
>perspective and binocular stereopsis",
>Nature 1993 Jan 21;361(6409):253-5.
One thing this reminds me of is a worry I've had about some
computer-generated images - many seem to contain no vertical disparity at
all. Since any off-axis view of a real scene naturally has some, it's
perhaps not surprising that people get eyestrain looking at them.
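To illustrate, here's a little Python sketch (all the numbers - eye focal
length, interocular, fixation distance - are my own round assumptions). It
projects one off-axis point through two converged pinhole "eyes", and the
vertical image coordinates come out different; set the toe-in to zero, as
in a parallel-frustum CG render, and the difference vanishes.

    import math

    f = 0.017      # eye focal length in metres (~17 mm, assumed)
    b = 0.065      # interocular separation in metres (assumed)
    fix = 0.5      # fixation distance in metres (assumed)

    theta = math.atan2(b / 2, fix)   # toe-in of each eye onto the midline point

    def project(point, eye_x, toe_in):
        # Pinhole projection for an eye at (eye_x, 0, 0) whose optical
        # axis is rotated by toe_in about the vertical (y) axis.
        x, y, z = point[0] - eye_x, point[1], point[2]
        xr = x * math.cos(toe_in) - z * math.sin(toe_in)
        zr = x * math.sin(toe_in) + z * math.cos(toe_in)
        return f * xr / zr, f * y / zr   # (horizontal, vertical) image coords

    p = (0.15, 0.10, 0.5)                 # a point up and to the right of fixation
    uL, vL = project(p, -b / 2, +theta)   # left eye toes in towards the midline
    uR, vR = project(p, +b / 2, -theta)   # right eye likewise
    print(f"vertical disparity: {abs(vL - vR) * 1000:.2f} mm at the image plane")
    # With toe_in = 0 (parallel axes, as in most CG stereo pairs), vL == vR.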
>An abstract of this article, and also of the one quoted by
>Chris Jones, can be found online:
>http://www.ncbi.nlm.nih.gov/PubMed/
>If you followed the links [Related Articles] after you have
>found the first, you soon get buried in abstracts of
>similar research (PubMed is a great service to the world).
And a resource I hadn't found - thanks!
>In the research there is no indication in my opinion "that
>different mechanisms dominate at different distances", except
>for the obvious geometrical facts, such as the fact that
>stereopsis degrades quadratically with distance.
It was first suggested in 1971 (P. O. Bishop and G. H. Henry, "Spatial
vision", Ann. Rev. Psychol. 22, pp. 507-513) that the brain processes large
and small disparities differently, which broadly correspond to small and
large distances respectively. It seems to have been a popular theory,
partly through intuition - it would be very surprising if the brain had a
single mechanism that could cope across all scales. For example, at large
distances it would take very little neural processing to treat the image of
an object as identical in each eye and use only the parallax to perceive
depth. But for a close object that becomes impractical, as there are not
only disparities to cope with but also eye angle, diplopia etc. While the
brain obviously does do all this up close, it would be odd if it didn't
take a shortcut at larger distances, especially given the greater physical
distance and detail covered by a single gaze at that range.
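To put rough numbers on that quadratic degradation: taking a stereoacuity
of about 10 arcseconds and a 65mm interocular (round figures I've assumed),
the smallest depth step the eyes can resolve grows as distance squared:

    import math

    b = 0.065                            # interocular base in metres (assumed)
    delta = 10 / 3600 * math.pi / 180    # ~10 arcsec stereoacuity, in radians

    for z in (0.5, 1, 2, 5, 10, 50, 100):    # viewing distance in metres
        dz = z * z * delta / b               # smallest resolvable depth step
        print(f"z = {z:6.1f} m  ->  resolvable depth ~ {dz:8.4f} m")

Sub-millimetre resolution at arm's length, but getting on for two metres by
the time you look 50m away - fine stereopsis really is a close-range tool.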
>(Angular) stereoscopic acuity doesn't vary much at different
>distances.
>Close-up the fine stereopsis works too, fortunately!
For a small area - but can you use it to take a picture that is both
natural and useful? Let's imagine you photograph something at close range.
The natural way to do it would be to focus on the object, toe the cameras
in (at interocular separation) and snap. That gets you a great view of the
centre of the image, but if you look around the rest of the image the
vertical disparities from keystoning are going to give you a terrible
headache, even if you do manage to fuse it - which is unlikely, as the
backgrounds may be very different, and out of focus.
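To give a feel for the size of that problem, here's a first-order estimate
(my own derivation, with assumed round numbers: 35mm lenses, 65mm
separation, converged at 30cm). For a toe-in of theta per camera, the
keystone vertical parallax at film position (u, v) comes out at roughly
2*theta*u*v/f:

    import math

    f = 0.035                      # focal length in metres (assumed 35 mm lenses)
    b = 0.065                      # lens separation in metres
    d = 0.3                        # convergence (subject) distance in metres
    theta = math.atan2(b / 2, d)   # toe-in angle per camera

    u, v = 0.012, 0.010            # a film point out near the frame corner, metres
    dv = 2 * theta * u * v / f     # first-order keystone vertical parallax
    print(f"vertical parallax near the corner: {dv * 1000:.2f} mm on film")

That's about 0.7mm of vertical misalignment on film - plenty to make fusion
a struggle.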
So we then try parallel cameras at the interocular separation, converge
the image (to bring the object into the display plane) by sliding the film
relative to the lens (or vice versa), and snap with focus from the object
to infinity. Now we have a lovely image with no keystoning, but we're back
to Bruce's original point of too much depth! The disparities are simply too
large for everyone to view comfortably, if at all.
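Again with assumed round numbers (35mm lenses, 65mm separation, subject at
30cm): with parallel lenses a point at distance d lands f*b/d away from
where a point at infinity lands, so the near-to-infinity deviation on film
is:

    f = 0.035       # focal length in metres (assumed 35 mm)
    b = 0.065       # lens separation in metres (interocular)
    d_near = 0.3    # close-up subject distance in metres (assumed)

    deviation = f * b / d_near   # on-film deviation, subject vs. infinity
    print(f"near-to-infinity deviation: {deviation * 1000:.1f} mm on film")

That's about 7.6mm, against the roughly 1.2mm often quoted as the
comfortable maximum for the Realist 35mm format - hence "too much depth".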
So we reduce the lens separation to lessen the amount of depth (to 1 in
30, or 1 in 50, or whatever ratio we choose) and image as above. Now we
have a useful image: we can look all around it and fuse detail everywhere.
The only problem is that the object looks too big - either it seems
enlarged or we seem shrunk. This (I think) is the original issue - how do
we get a good close-up photo that looks natural?
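Putting numbers on that last step too (same assumed 30cm subject): the
1-in-30 rule fixes the base, and a common rule of thumb says a scene shot
with base b_cam but viewed with ~65mm eyes appears magnified by roughly
b_eye/b_cam:

    b_eye = 0.065          # interocular separation in metres
    d_near = 0.3           # nearest subject distance in metres (assumed)

    b_cam = d_near / 30    # the 1-in-30 rule -> a 10 mm base
    scale = b_eye / b_cam  # rough apparent magnification ("giantism")
    print(f"base: {b_cam * 1000:.0f} mm, apparent magnification ~{scale:.1f}x")

A 10mm base makes the subject look roughly six and a half times life size,
which is exactly the "seems enlarged" effect.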
--
Chris Jones
http://www.c.jones.cwc.net
ICQ #41744518
DALNet nick trickydisko
PGP key available on request