Re: TECH-3D digest 178
- From: T3D John Ohrt <johrt@xxxxxxx>
- Subject: Re: TECH-3D digest 178
- Date: Mon, 30 Jun 1997 15:50:15 -0400
T3D Eric Goldstein wrote:
>
> John O sez:
>
> > You have to compare density (dynamic range).
>
> Eric sez:
>
> If you are an engineer conducting an experiment, yes.
>
> If instead you are shooting a picture to be viewed with a pair of
> eyeballs in a particular context, no.
Well, I have failed to explain this so far, so I'll try again. In terms
of digital image processing and display, fstops have no meaning
whatsoever, because they convey nothing about either the response
function or the dynamic range.
> > For example, say we shoot a typical film which can record 12 fstops with
> > a density range of 3.0. Now we substitute a CCD back with 12 bits per
> channel. It too will record 12 fstops with a dynamic range of
> log10(2^12) = 3.6 density, ie linear.
>
> (Note: CCDs have non-linearities too, btw.)
The CCD device often has a slight deviation from gamma=1.0 in order to
maximize signal to noise ratio. You correct this external to the device
because the gamma varies slightly among chips and the external circuits
are more easily trimmed. In many cases CCDs are cooled, and extra
on-chip circuitry dissipates more heat and can also illuminate the CCD
through junction effects. CCD chips are inherently very close to
gamma=1.0; making a camcorder work with VCR tapes and TV monitors
requires a significant alteration to gamma = 0.45 (nominal). You usually
have far more difficulties with non-linearities and stability in the A
to D converter.
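To make this concrete, here is a rough sketch (Python, purely
illustrative; the per-chip gamma and the 12-bit scale are assumed
numbers, not taken from any real camera) of trimming that slight
deviation externally:

    # External gamma trim for a CCD whose measured response deviates
    # slightly from gamma = 1.0.  Values are assumptions for illustration.
    measured_gamma = 0.95      # per-chip figure found by calibration
    full_scale     = 4095.0    # 12-bit A to D converter

    def linearize(raw_code):
        # map a raw 12-bit code back to a linear-light value (0..1)
        v = raw_code / full_scale            # normalize
        return v ** (1.0 / measured_gamma)   # undo the chip's gamma

    print(linearize(2048))   # about 0.48 instead of the naive 0.50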
But all these non-linearities are truly trivial and mostly
reproducible, unlike the magnitude and uncertainty of those in film.
By comparison, a film recording 12 "fstops" inside a 3.0 density range
is grossly nonlinear. If the recording were linear, the minimum density
range would be log10(2^12) = 3.6.
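For anyone who wants to check the arithmetic: density is just log10 of
the contrast ratio, and a linear n-stop range is a 2^n : 1 ratio. A
short sketch (Python, illustrative only):

    from math import log10

    def stops_to_density(stops):
        # density range of a LINEAR recording spanning `stops` fstops
        return log10(2 ** stops)     # contrast ratio is 2**stops : 1

    print(stops_to_density(12))   # about 3.6 -- more than film's 3.0,
                                  # so film's 12 stops must be squeezed
                                  # in nonlinearly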
> > Obviously, all you need is a density of 3.0 for
> > the film to display what it recorded, ie 10 bits per channel, but
> > because the film is nonlinear you may miss subtle detail in certain
> > level ranges. And we just keep going in circles.
>
> Not from a creative users POV, we don't.
>
> There are non-linearities ...<much deleted> ...Photographers (and recording engineers,
> incidently) have been dealing with these issues since man began to
> record sights and sounds.
Not once in the foregoing are you discussing digital technologies. They
are conceptually different from chemistry. Lurkers, please notice I said
different, not better.
> Similarly, our eyes do not have a flat response to all frequencies of
> light.
Irrelevant in many cases, and within the limits of eye to eye
variability, they are well known and easily accommodated. BTW, have you
considered IR, UV, and radar imaging? IR photographic films are widely
used for special effects as well as technical work.
> Neither do photographic lenses and coatings.
Relative to what? You can make the response functions a whole lot
flatter than the human eye's.
> Neither do CCDs
They too can be much wider than the human eye. CCD sensors are readily
available which cover 200 to 1100 nm, as opposed to the nominal 400 to
700 or so of the eye. Consumer products usually filter CCDs down to an
eye-like response to keep the optics costs within the bounds of
affordability.
> Nothing translates perfectly, but there is typically a useful
> expression of contrast (in f/stops) between two visual media which
> represents a working equivalence. The human perceptual apparatus does
> not mirror the measurements of a densitometer or db meter, and
> perceived contrast or dynamic range is very situationally specific and
> "non-linear."
Not in a digital world; that is why "fstops" just doesn't work.
>
> > You have to decide whether the film is to be presented in its current
> > artistic transfer function, or in a linear transfer function. Only then
> > can you determine the dynamic range (density) appropriate for the
> > output.
>
> These decisions are made every instant of every day in the creative
> world. Because nature and our ability to perceive it is notably
> non-linear and has a vastly greater dynamic range than our recording
> media, we begin making those decisions with our first recording of the
> sight or sound, and continue making them for each subsequent media
> transfer. Computer imaging is in no way unusual in this regard.
CCD imaging is unusual because you can exceed the dynamic range of the
eye (both instantaneously and adaptively). But for sub $1000 cameras,
this is not yet true, to my knowledge.
I don't understand what you mean by computer imaging.
But let's consider an achievable hobbyist system that can run on a
386-class machine if need be.
It is no problem to purchase a digital camera with a gamma of 1.0
nominal. That is the default!
It is no problem to get a display subsystem with a gamma of 1.0
nominal. This is determined by the OS and the hardware. (And when that's
inconvenient you can easily fake it; see the sketch below.)
It is no problem to drive a printer at a gamma of 1.0 nominal; that is
the function of the RIP engine (RIP = Raster Image Processor). E.g.
PhotoShop has a built-in RIP engine, but there are much better (and
sometimes more expensive) stand-alone RIP engines, supported by color
matching technology, which are much faster and more accurate.
It is no problem for the eye to interpret the results, because the eye
is inherently gamma = 1.0.
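Here is a minimal sketch of the "fake it" trick mentioned above
(Python; it assumes an 8-bit video lookup table and a display whose
native response is gamma = 2.2, both illustrative assumptions):

    # Build an 8-bit LUT that pre-compensates a gamma = 2.2 display so
    # that linear (gamma = 1.0) data is shown correctly.
    display_gamma = 2.2   # assumed native response of the monitor

    lut = [round(255 * (i / 255.0) ** (1.0 / display_gamma))
           for i in range(256)]

    # Load `lut` into the video card's hardware LUT (or apply it in
    # software before display) and the overall path looks like gamma 1.0.
    print(lut[128])   # about 186: linear mid-grey gets boosted first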
Do not confuse a monitor gamma with a system gamma. The system gamma of
the TV camera through display chain is 1.0. The camera gamma is 0.45 and
the display gamma 2.25; note that 0.45 * 2.25 is about 1.0.
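A two-line check of that system gamma, using the nominal numbers above
(Python, illustrative only):

    camera_gamma, display_gamma = 0.45, 2.25   # nominal TV values

    def system_response(scene_luminance):
        # scene -> camera encode -> display decode, normalized 0..1
        signal = scene_luminance ** camera_gamma   # what the camera sends
        return signal ** display_gamma             # what the CRT shows

    print(system_response(0.18))   # about 0.18: overall gamma ~= 1.0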
The problem with Microsoft's various versions of Windows and with OS/2
is that they expect the data to have a gamma of 2.2, so that the OS
software can use a linear driver to control the display. In theory the
feeling was that gamma=2.2 displays would always be cheaper than
gamma=1.0 ones; that was once true. Virtually every other OS expects a
display system of gamma 1.0.
BTW, the 386 machine requires a minimum of 16M ram and the OS is Linux
(freeware). Linux runs all the bigtime UNIX image processing software
that is free for non-professional use.
Macs don't have access to software as powerful, but they still run
better software than PCs, and MacOS will run on more powerful platforms
than Microsoft or OS/2 systems.
> > Ideally, scan with a gamma correction that maximizes the linear
> > dynamic range of the data recorded, which is much less than that
> > suggested by the density. Clip and baseline clip to throw out the
> > data which lies outside the linear dynamic range. Use brightness and
> > contrast adjustments supported by histogram analysis to restore the
> > data to a 0 to max range of representation (ie. 0 to 255 for
> > Photoshop etc). Process as desired. In an ideal world, you have the
> > option of wysiwyg.
>
> But the gamma of the display device is different from that of the input
> device and the computer software manipulating it (which is probably the
> only linear part of the equation). That's how this whole thing started;
> out of the recognition that the monitor does *not* present a wysiwyg
> situation.
That simply does not have to be true if you do things correctly.
Typically all image input devices to a computer can have a gamma of
1.0. For example, the scanner should have gamma correction to maximally
linearize the image, if that is what you want. You can also just scan it
in by setting the gamma correction to 1.0, or go for special effects by
playing with the gamma correction. Once you are inside the computer, the
processing may require the data to be gamma=1.0 for predictable results,
or it may not. After processing, you can output it to a device which has
a gamma = 1.0 interface. As mentioned, the eye is gamma 1.0, so no
problem there.
There are always exceptions, e.g. video camera or VCR data may be
captured at gamma = 2.2 on some PCs using inexpensive capture
technology. Just pay attention to the specs.
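As a concrete (if simplified) sketch of the scan / clip / stretch
procedure quoted above, again in Python and again with made-up clip
points:

    # Simplified workflow: clip the data that falls outside the usable
    # range, then stretch what is left to the full 0..255 representation.
    # The clip points are illustrative, not recommendations.
    def prepare(scan, low_clip=8, high_clip=240):
        clipped = [min(max(p, low_clip), high_clip) for p in scan]
        span = float(high_clip - low_clip)
        return [round(255 * (p - low_clip) / span) for p in clipped]

    print(prepare([2, 8, 124, 240, 252]))   # -> [0, 0, 128, 255, 255]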
> Anyway, I'll try to find out what the equivalent f/stop range of
> computer imaging systems generally are and pass it along to the list if
> anyone out there cares
Like I said previously, in a linear system the concept of a range in
fstops could be likened to the channel depth in bits, ie 16M color is 3
channels each 8 bits deep, so you might say it is 8 fstops. But how
closely the output image matches the photo clutched in your hand is
strictly a function of dynamic range.
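Put numerically (same caveats as the sketches above): 8 bits per
channel is 256 levels, which in a linear system spans a 256:1 ratio,
i.e. about 8 "fstops" or a density range of roughly 2.4.

    from math import log10

    bits = 8
    levels = 2 ** bits        # 256 levels per channel
    print(log10(levels))      # about 2.4 density if the channel is
                              # linear; compare that to the 3.0 density
                              # quoted for film above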
Fstops are embedded in the world of chemistry and its attendant
nonlinearities, as fascinating as they are.
In short, dynamic range is the independent variable of the digital image
world. Specify it, and everything falls into place.
Regards
--
John Ohrt, Regina, SK, Canada
johrt@xxxxxxx