more on grayscale displays and human vision

more on grayscale displays and human vision

John T. Sharp
For Joachim, Michael, Jerome, Johannes, Jeff,

I think I might learn something from you. My simple thought was this: if you can't order a group of randomly arranged images of different gray levels when the difference between levels is less than 10-20 (on a 0-255 scale), then you can't tell the difference.

I have attached two grayscale sets, randomly arranged, 12 images in each. Can you rank them? Can you accurately rank every pair: A to B, B to C, C to D, etc.?
At what difference does the accuracy exceed 50%? Why isn't that a fair test?

It's hard for me to believe that the human eye can tell the difference at the 1% level, or the difference between 192 and 194.
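A macro along these lines can generate a comparable test set; the base level, step, and patch size below are arbitrary choices, not the values in the attachments, and the true order is printed to the Log window:

// Minimal sketch (ImageJ macro): 12 isolated gray patches separated by
// a fixed step, drawn in shuffled order on a black background.
n = 12; step = 5; base = 100;        // arbitrary illustrative values
levels = newArray(n);
for (i = 0; i < n; i++)
    levels[i] = base + i * step;
for (i = n - 1; i > 0; i--) {        // Fisher-Yates shuffle
    j = floor(random * (i + 1));
    t = levels[i]; levels[i] = levels[j]; levels[j] = t;
}
w = 60; gap = 20;                    // patch width and spacing in pixels
newImage("ranking test", "8-bit black", n * (w + gap) + gap, 100, 1);
for (i = 0; i < n; i++) {
    setColor(levels[i]);
    fillRect(gap + i * (w + gap), 20, w, 60);
    print("patch " + (i + 1) + " = " + levels[i]);  // answer key
}
updateDisplay();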

 -------------- Original message ----------------------
From: Joachim Wesner <[hidden email]>

> Hi there,
>
> JTS wrote
> ------------------------------------------------------------------------
>
> I'm not an authority on human vision, but I read somewhere that the human
> eye could distinguish approximately 16 different gray levels. This is far
> less than the 256 levels of 8-bit; in fact, it is only 4 bits.
>
> I tested this in a small demonstration with a group of colleagues who
> spend a lot of time reading x-ray films of the hands and feet, and 16
> levels appeared to be about what the best could do.
>
> Try it yourself. You can compose images with gray levels anywhere
> between 0 and 255. Code them, mix them up in random fashion, and ask
> your colleagues to tell you which is whiter (or blacker), comparing
> A to B, A to C, A to D, etc.
>
> ------------------------------------------------------------------------
>
> There seems to be some confusion here between the smallest difference
> in brightness one can distinguish (say, a change of x percent) and the
> total number of such small differences that can appear in ONE scene
> (dark adaptation would NOT count here, because then we would be
> comparing different scenes) without the brightness being too low to
> discern or too bright for the eye (which causes glare, etc.).
>
> I do not know what brightness differences you used, but only 16 levels
> within the one-scene dynamic range is definitely too low! Take any B/W
> picture into an image-processing program and reduce the number of gray
> levels to 64 and then to only 16; you will start to see a difference at
> 64 levels, and definitely at 16!
>
> (HOWEVER, trying to reliably identify a particular level may be a
> different thing!)
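
A minimal sketch of the level-reduction experiment above, as an ImageJ macro; it assumes an 8-bit image is already open, and the quantization arithmetic is the standard posterize formula rather than anything from the thread:

// Quantize the open 8-bit image to "levels" gray levels.
// Try levels = 64, then levels = 16, as suggested above.
levels = 16;
step = 256 / levels;                 // width of each quantization bin
w = getWidth(); h = getHeight();
for (y = 0; y < h; y++) {
    for (x = 0; x < w; x++) {
        v = getPixel(x, y);
        setPixel(x, y, floor(v / step) * step);
    }
}
updateDisplay();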
>
> The Gamma FAQ,
>
> http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.htm
>
> has a good discussion of this (I hope I will not get sued by some
> underemployed advocate for citing from it):
>
> <cite>
>
> 15. How many bits do I need to smoothly shade from black to white?
>
>
> At a particular level of adaptation, human vision responds to about
> a hundred-to-one contrast ratio of luminance from white to black. Call
> these luminance values 100 and 1. Within this range, vision can detect that
> two luminance values are different if the ratio between them exceeds about
> 1.01, corresponding to a contrast sensitivity of one percent.
>
>
> To shade smoothly over this range, so as to produce no perceptible steps,
> at the black end of the scale it is necessary to have coding that
> represents different luminance levels 1.00, 1.01, 1.02, and so on. If
> linear light coding is used, the "delta" of 0.01 must be maintained all the
> way up the scale to white. This requires about 9,900 codes, or about
> fourteen bits per component.
>
>
> If you use nonlinear coding, then the 1.01 "delta" required at the black
> end of the scale applies as a ratio, not an absolute increment, and
> progresses like compound interest up to white. This results in about 460
> codes, or about nine bits per component. Eight bits, nonlinearly coded
> according to Rec. 709, is sufficient for broadcast-quality digital
> television at a contrast ratio of about 50:1.
>
>
> If poor viewing conditions or poor display quality restrict the contrast
> ratio of the display, then fewer bits can be employed.
>
>
> If a linear light system is quantized to a small number of bits, with black
> at code zero, then the ability of human vision to discern a 1.01 ratio
> between adjacent luminance levels takes effect below code 100. If a linear
> light system has only eight bits, then the top end of the scale is only
> 255, and contouring in dark areas will be perceptible even in very poor
> viewing conditions.
>
> </cite>
>
> JW
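
For reference, the FAQ's 9,900 and roughly 460 figures follow from the quoted assumptions (a 100:1 contrast range and a 1% just-noticeable ratio); here is a minimal ImageJ-macro sketch of the arithmetic:

ratio = 100;     // white-to-black luminance ratio, from the quote
jnd = 0.01;      // 1% just-noticeable difference, from the quote
// Linear coding: the 0.01 step needed at black is kept all the way up
linearCodes = (ratio - 1) / jnd;                 // 9900 codes
// Nonlinear coding: each code is 1% above the last, compounding to white
nonlinearCodes = log(ratio) / log(1 + jnd);      // about 463 codes
print("linear: " + linearCodes + " codes, ~" + log(linearCodes) / log(2) + " bits");
print("nonlinear: " + nonlinearCodes + " codes, ~" + log(nonlinearCodes) / log(2) + " bits");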
Re: more on grayscale displays and human vision

Jeff Brandenburg
On Jul 10, 2006, at 8:49 PM, John T. Sharp wrote:

> For Joachim, Michael, Jerome, Johannes, Jeff,
>
> I think I might learn something from you. My simple thought was this:
> if you can't order a group of randomly arranged images of different
> gray levels when the difference between levels is less than 10-20 (on
> a 0-255 scale), then you can't tell the difference.

It depends on the nature of the trial.  If you're looking at two
isolated chips or areas, no, you can't tell.  If you're looking at two
adjacent areas, you'll likely be able to discern the edge between
them. (See below.)

> I have attached two grayscale sets, randomly arranged, 12 images in
> each. Can you rank them? Can you accurately rank every pair: A to B,
> B to C, C to D, etc.?
> At what difference does the accuracy exceed 50%? Why isn't that a fair
> test?

Well, first, because attachments aren't forwarded to the list. :-)

The important task for image perception usually isn't ranking of area
brightness; more often, it's patterns and contrast and edges that
people want to "see" (that is, perceive and notice).  The eye is very
poor at estimating absolute brightness, but very good at perceiving
discontinuities (edges), even between very similar brightness levels.

If you use Photoshop or an ImageJ macro to create a smooth gradation
from black to white, you probably won't be able to perceive steps in
brightness -- but you might, depending on your monitor and display
settings.  If you create a gradation from 0 to 255 with 64 steps, you
*will* be able to perceive those steps, at least across some of the
gradient.  (Depending on your gamma setting, you may not be able to see
them toward the dark or light end.)
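
A minimal macro sketch of those two gradients (the image sizes are arbitrary, and ImageJ's built-in "ramp" fill supplies the smooth version for comparison):

// Smooth ramp for reference, then a 64-step ramp from 0 to 255.
newImage("smooth ramp", "8-bit ramp", 512, 128, 1);
steps = 64;
newImage("64-step ramp", "8-bit black", 512, 128, 1);
for (x = 0; x < 512; x++) {
    band = floor(x * steps / 512);            // which of the 64 bands
    v = floor(band * 255 / (steps - 1));      // that band's gray level
    for (y = 0; y < 128; y++)
        setPixel(x, y, v);
}
updateDisplay();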

> It's hard for me to believe that the human eye can tell the difference
> at the 1% level, or the difference between 192 and 194.

The eye absolutely can discern an edge between two areas with a 1%
brightness difference, at least above a certain minimum brightness (and
below levels that would dazzle).  In fact, the figures that I've seen
for this JND (just-noticeable difference) range from 0.5% to 0.2%.  
(Perhaps the 0.2% number applies to radiologists. :-))

As it turns out, if I use ImageJ to create an image on my monitor with
an area of brightness 192 adjacent to an area of brightness 194, I
*can't* perceive the difference.  The reason:  my monitor (a Dell
2405FPW) apparently only resolves 64 gray levels, so 192 and 194 are
displayed as the same brightness!  If I move the same image over to my
outboard CRT monitor, the boundary between the areas is immediately and
clearly visible.
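
A minimal macro sketch of that 192-vs-194 test, with an arbitrary patch size:

// Two abutting patches differing by two gray levels.
newImage("192 vs 194", "8-bit black", 400, 200, 1);
setColor(192); fillRect(0, 0, 200, 200);
setColor(194); fillRect(200, 0, 200, 200);
updateDisplay();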

Remember, too, that the human visual system automatically intensifies
edges.  If you look at that 64-step gradient I mentioned before, going
from black at the left to white at the right, you'll see that each band
appears darker on the right and lighter on the left.  This illusion is
known as Mach banding, and is one of the reasons that smooth surfaces
need to be rendered at 8 bits/color or even more.
--
        -jeffB (Jeff Brandenburg, Duke Center for In-Vivo Microscopy)
Re: more on grayscale displays and human vision / camera linearity

Stuart Anderson
Colleagues,

Thanks for the wonderful discussion about human vision.  This is fantastic
material for a course I am in the midst of creating, entitled Physics for
the Fine Arts.

I noted in one of these messages (which I can no longer locate) a comment
that cheap video cameras (like the USB webcams one can get for $30-50 these
days) pre-correct for the non-linear nature of human intensity perception
discussed in these exchanges (i.e., they would have a roughly logarithmic
response?), while more expensive research cameras are linear (they generate
signals proportional to the number of photons received).

Since I am building some student CCD spectrometers based on a CD and a USB
camera (with frame capture and ImageJ analysis) to do introductory
absorption spectroscopy, this is important to sort out.  I can check the
linearity of the intensity scale from these cameras with ND filters, but if
anyone out there knows about, or recalls the comment about, the
non-linearity of the signals from consumer cameras, I would love to hear
more.
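
One quick way to reduce two such ND-filter measurements, sketched as an ImageJ macro with made-up numbers; the single power-law model V = c * T^g is an assumption (real camera curves also have offsets and clipping):

// If pixel value V scales as V = c * T^g for filter transmission T,
// two (T, V) pairs give the exponent g: near 1 for a linear sensor,
// near 1/2.2 (about 0.45) for a gamma-corrected consumer camera.
T1 = 1.0;  V1 = 200;    // no filter (made-up reading)
T2 = 0.1;  V2 = 71;     // ND 1.0 filter, 10% transmission (made-up)
g = log(V1 / V2) / log(T1 / T2);
print("estimated transfer exponent g = " + g);   // ~0.45 here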

Many thanks,

Stu

*************************************************
Stuart M. Anderson
Associate Professor of Physics
Augsburg College
2211 Riverside Avenue
Minneapolis, MN  55454

Tel: (612) 330-1012
Internet: [hidden email]

*********************************************************************
"So, you were surprised to find that the holey grail is a saxophone?"
*********************************************************************