Notice
This mailing-list archive has been frozen since May 2001; it will stay online but will not be updated.

Re: John!!!


  • From: P3D John Bercovitz <bercov@xxxxxxxxxx>
  • Subject: Re: John!!!
  • Date: Tue, 15 Oct 1996 12:02:42 -0700

John B asks:
>> Would you please describe the tests, their methodology, and 
>> give their results?
And:
>> Can you tell us how the Rayleigh Criterion applies to binocular 
>> vision?  (I can see how it would apply to monocular vision.)

Bill C replies:
> I'll try "And" first, to see where it goes.  Let's put 
> "binocular vision" on hold. Think of that as a display 
> technology  for now.  I really want to get the root idea of a 
> single-lens-as-rangefinder across. I think Rayleigh, De Broglie, 
> Dawes, acoustic waves etc., all apply here...  Resolution is the 
> heart of it.

Here you're talking about physical optics resolution, not stereo 
resolution.  How does this apply to binocular depth resolution?

> As you know, some point not at the "object plane" will be 
> projected in part as the aperture onto the plane of focus. This 
> is generally referred to as being "out of focus" o-).  The 
> farther an object is from this object plane ("O" is farther than  
> "o"), the greater its diameter... Not!  Some shifts along this 
> Z-axis may be too small to be resolved.  What is that distance? 
> What sort of depth resolution can we expect? How does that 
> compare to more well known systems?

You're saying that once an image point shrinks below a certain 
size, geometric optics no longer applies and the point's size is 
no longer resolvable?  Just like a star's image: the diameter to 
the first minimum is a function of the telescope's aperture, not 
of the star's size.  But what does that have to do with anything 
here?  Certainly it bears on whether depth relative to the plane 
of focus in object space could be deduced by a microscopic 
(single-lens!) examination of the image points.  But it has 
nothing whatsoever to do with binocular vision, which is based on 
detecting the parallax of the centroids of image points and 
lines, and nothing to do with how these SL3D images will be 
viewed: in a two-lensed viewer or its equivalent, anaglyph 
glasses.
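
As an aside, the star-image point can be put in numbers.  Here is 
a minimal sketch with illustrative values; the formulas are just 
the textbook Airy/Rayleigh ones, not anything from Bill's tests:

```python
# Diffraction-limited image of a point source: the diameter of the
# first dark ring of the Airy pattern depends only on wavelength and
# f-number, never on the size of the source (e.g. a star).

def airy_diameter(wavelength, f_number):
    """Linear diameter of the first Airy minimum at the focal plane:
    2 * 1.22 * wavelength * N."""
    return 2.44 * wavelength * f_number

def depth_of_focus(wavelength, f_number):
    """Half-depth of focus: an image-plane shift smaller than about
    2 * wavelength * N**2 keeps the geometric blur inside the Airy
    disk, so the shift cannot be resolved."""
    return 2.0 * wavelength * f_number ** 2

lam, N = 550e-9, 8.0                        # green light, f/8 lens
print(airy_diameter(lam, N) * 1e6)          # ~10.7 micrometres
print(depth_of_focus(lam, N) * 1e6)         # ~70 micrometres
```

So at f/8 in green light, an image-plane shift of well under a 
tenth of a millimetre is lost in diffraction; that is the sort of 
"too small to be resolved" distance Bill's question is after.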

> Bull, if "flatties" have depth info, how come they're not 
> "3-D"?!  One reason flatties are 2-D is because a background 
> "O", "o", or "." looks just like a foreground "O", "o", or ".". 
> A difference between the two can be made by encoding the lens 
> aperture.  Since the cone of light from a lens is a projection 
> of the aperture of that lens by some object point, and since 
> this cone reverses at the focal point, so will the encoded 
> aperture reverse the encoding elements at that point.  For 
> example, we can bisect the aperture with red/left and cyan/right
> filters.  Now, as long as an object point is in the foreground, 
> it will be projecting the aperture "o" as red on the left and 
> cyan on the right. If it's in the background, it will have come 
> to a point and reversed itself so that now it appears as cyan on 
> the left and red on the right.

Now you've switched to geometric optics.  This sounds very much 
like the conventional explanation of SL3D (geometric optics plus 
two views) and doesn't appear to have anything to do with 
physical-optics resolution anymore.  I'm having trouble 
reconciling your two explanations with each other.  Perhaps you 
will cover this in a subsequent post?
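
For what it's worth, the geometric-optics half of Bill's argument 
does check out: the blur patch of a defocused point is a 
projection of the aperture, and its orientation flips when the 
point crosses the plane of focus.  A minimal thin-lens sketch 
(function name and numbers are mine, purely illustrative):

```python
# Thin-lens check of the red/cyan aperture-encoding argument: the
# signed blur diameter changes sign as an object point moves from in
# front of the plane of focus to behind it, because the light cone
# crosses its apex before reaching the film.

def blur_signed_diameter(f, focus_dist, obj_dist, aperture):
    """Signed blur-circle diameter on the film for a point at obj_dist,
    with the lens (focal length f, aperture diameter) focused at
    focus_dist.  Positive: foreground point, the cone has not yet
    converged, so the aperture projects upright (red-left/cyan-right).
    Negative: background point, the cone has crossed its apex and the
    encoding is mirrored (cyan-left/red-right)."""
    film  = 1.0 / (1.0 / f - 1.0 / focus_dist)   # film-plane distance
    image = 1.0 / (1.0 / f - 1.0 / obj_dist)     # where the point focuses
    return aperture * (image - film) / image     # similar triangles

f, D = 0.05, 0.05 / 2.8                  # 50 mm lens at f/2.8
focused_at = 2.0                         # focused on a plane 2 m away

print(blur_signed_diameter(f, focused_at, 1.0, D))   # foreground: positive
print(blur_signed_diameter(f, focused_at, 4.0, D))   # background: negative
```

The sign flip is exactly the foreground/background reversal Bill 
describes; what it leaves open is John's question of how this 
relates to resolution and to binocular viewing.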

> Well, I see I've wandered. And it's nap time!   

Hurrah for nap time!

In conclusion, you haven't touched on either of my questions yet.  
However, I look forward to your next post with enthusiasm.  And 
don't forget your tests; I'm eager to hear about them.  (All in 
good time, eh?)

John B


------------------------------