Info:- The Pen F, E-P7, and the OM-3 adjust both targeted and global colors. Other brands rely on LUTs, which only make global adjustments.

May 4, 2022

How much light is reaching the sensor at ISO6400?

Last update:- 29th April 2023

Did you know your ISO does not add more light (image data) to the sensor?

I was listening to an astrophotographer talking about his camera settings. Most astrophotographers are comfortable with the technical aspects of digital cameras. He demonstrated how he shifts the histogram to the right (ETTR) to improve his image quality and record more tonal data.

Many photographers use the ISO and Exposure Compensation to apply ETTR. Those who follow this blog know the ISO amplifies the image signal from the sensor. If you want to control the light reaching the sensor, fix the ISO and use the aperture and/or shutter speed. More reflected light on the sensor means a higher SNR and a more saturated sensor. At a fixed ISO, the histogram shows the reflected light on the sensor. Consider this "managing" the balance between ISO amplification and SNR.
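
To make this concrete, here is a minimal Python sketch (an illustration, not a measurement: the photon counts are arbitrary and a pure shot-noise model is assumed) of why ISO gain cannot raise the SNR, while extra light on the sensor can:

```python
import math

# A 4x ISO push multiplies signal AND noise by the same gain:
photons = 1000                      # arbitrary photon count (assumed)
gain = 4                            # e.g. ISO1600 -> ISO6400
signal = photons * gain
noise = math.sqrt(photons) * gain   # shot noise is amplified too
print(signal / noise)               # ~31.6 -> same SNR as before the push

# 4x more actual light at a fixed ISO (aperture/shutter, i.e. ETTR):
print(math.sqrt(4 * photons))       # ~63.2 -> SNR doubled
```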

Study the illustration below and visualize the reflected light on the sensor at the different ISOs. What do we learn from it? Experienced M43 photographers think in terms of reflected light (luminance) on the sensor. At ISO12800, the sensor might receive almost no reflected light. We always need a light source to illuminate the subject and reflected light to expose the sensor. No reflected light means no recordable image information. The next time photographers quote ISO12800 or ISO20000, ask how much luminance (reflected light) was available at the image sensor.


This illustration shows what the actual scene looks like at each ISO amplification.

The key is to study the technical basics of image sensors. For example, all image sensors have a noise floor, and we control the final visibility of this noise floor. It helps to know the different types of image sensors and the importance of pixel area. Pixels capture photons, NOT the backplate housing them. The size of the sensor only determines the Lens Image Circle that drives the OPTICAL differences between digital cameras. It's important to know that Pixel Area is one of many Quantum Efficiency variables that determine Sensor Sensitivity.

Saturating the sensor with light means a higher SNR and less image noise. A higher SNR also means we record more tonal data. Digital photographers should know these basics and how to manage them with digital cameras. The technical design principles of the image sensor are the same for every camera. What are the main Optical and Technical differences between image sensors?

Why does the ISO shift the histogram? Because it's a variable in the exposure equation.
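
For reference, the reflected-light exposure equation is N²/t = L·S/K, where N is the aperture, t the shutter time, L the scene luminance, S the ISO, and K the meter calibration constant (typically around 12.5). A small sketch with illustrative numbers:

```python
K = 12.5  # typical reflected-light meter calibration constant (assumed)

def shutter_time(aperture_n: float, luminance: float, iso: float) -> float:
    """Solve the exposure equation N^2 / t = L * S / K for t (seconds)."""
    return aperture_n ** 2 * K / (luminance * iso)

# Same scene (L = 2000 cd/m^2, illustrative) at f/8: doubling the ISO
# halves the metered shutter time, so only half the light reaches the
# sensor while the amplified histogram lands in the same place.
print(shutter_time(8.0, 2000, 200))   # 0.002  (~1/500 s)
print(shutter_time(8.0, 2000, 400))   # 0.001  (~1/1000 s)
```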



It should be clear why we risk having noise at ISO6400. The above "Photons to Electrons" graph shows what happens with less light on the sensor. Too little reflected light means the sensor's SNR is lower. How do we manage noise at higher ISOs? We typically increase the ISO when we need higher shutter speeds. The key is light. It's safe to use higher ISOs with enough available light. Photographing a Formula One event in daylight is a good example. It's safe to change the ISO and shutter speed with enough light. Also, keep the sun behind you and always have a flash in your camera bag.
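
A hedged back-of-the-envelope example (the photon counts below are purely illustrative): at the same ISO6400 settings, the light actually reaching the sensor decides the SNR.

```python
import math

# Illustrative per-pixel photon counts at identical ISO6400 settings:
daylight_photons = 20_000   # bright Formula One daylight (assumed)
dim_photons      = 200      # dim indoor scene (assumed)

for name, n in [("daylight", daylight_photons), ("dim", dim_photons)]:
    print(f"{name}: SNR ~ {math.sqrt(n):.1f}")
# daylight: SNR ~ 141.4 | dim: SNR ~ 14.1 -> ten times noisier
```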

What happens at higher ISOs and shutter speeds? We reduce the light on the sensor and increase the image signal amplification (ISO) to maintain the camera's exposure level. 

TIP:- At a fixed ISO, the histogram follows the light on the sensor. It's incorrect to say the histogram shows jpeg data in Live View; that's a typical "fact" statement from undisclosed promoters. When do we have a jpeg file? There's no jpeg file in Live View before you record the image...

Study this basic example of how to apply this knowledge:-

Consider Bird in Flight (BIF) photography. We can safely increase our exposure by 2 stops (ETTR) when photographing birds against a blue sky. If your final adjustment is between ISO800 and ISO1250, which should you choose? ISO800 is the better option because it allows more light onto the sensor. More light increases the SNR and sensor saturation. The info in this article enables photographers. Compare that to those advising M43 photographers to "never go above ISO800"...
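
To put numbers on the ISO800 vs ISO1250 choice (a sketch under a pure shot-noise assumption):

```python
import math

iso_a, iso_b = 800, 1250
light_ratio = iso_b / iso_a        # ISO800 admits ~1.56x more light
stops = math.log2(light_ratio)     # ~0.64 stop difference
snr_gain = math.sqrt(light_ratio)  # shot-noise SNR improves ~25%
print(f"{light_ratio:.2f}x light, {stops:.2f} stop, SNR x{snr_gain:.2f}")
```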

For more information on ISO and exposure, see this article - link.

See the 7 Points each photographer should know about digital cameras - link.



Have you asked why forum promoters use Photons to Photos graphs when they roam photography forums? Why do they never talk about the basics in this article? Pointless presentations based on calculations do NOT improve image quality. Always ask this simple question: does the presented information enable photographers, or only the manufacturers selling full-frame cameras?

For example, technical aspects like Sensor Type and Effective Pixel Area could be important when selecting the right camera for low-light conditions. Micro Four Thirds cameras use Live MOS sensors, and most APS-C cameras use standard CMOS sensors. The Live MOS sensor has up to 75% less control wiring at each pixel. This means M43 cameras typically have a smaller noise floor. Live MOS sensors also have a larger Effective Pixel Area. This explains the Live MOS sensor's relatively good noise and low-light capabilities. Another interesting technical variable is Sensor Readout Speed.

The size of the sensor drives the optical differences between cameras. The technical aspects of the camera are the same for ALL sensor sizes. Why would some say, "Ignore the technical aspects discussed in this article"? Are they simply dumbed-down fanboys or undisclosed promoters?

Best and God Bless

Siegfried

Mar 1, 2022

The new OM-1 Stacked BSI with Quad Pixel AF...

Last updated:- 22nd January 2023

Introduction.

Studying the image sensor and how scientists spend R&D dollars shows us the main areas of improvement. That said, it's good to focus on all these components:-

  • Speed (new generation sensors are faster)
  • Resolution (the trend is to have more megapixels)
  • Sensitivity (Optical & Quantum efficiency - very important)
  • Firmware (Sensor and camera CPUs - Image Processors are crucial)
  • Sensor Noise Floor (a smaller noise floor with each new generation)

The video discusses the new OM-1 image sensor and why it's a critical development for Micro Four Thirds. We see how Olympus photographers benefited from the OM-1 sensor improvements. We also take a closer look at the new Stacked BSI image sensor and why the step to BSI technology matters.




Camera reviewers never discuss the losses associated with more pixels. For example, any improvements in sensor sensitivity, firmware, or image processing are used to offset the losses from adding more and smaller pixels. OMDS did the opposite and kept the OM-1 resolution the same at 20MP. This pixel count and the new BSI sensor technology made it possible to improve the OM-1 noise performance by up to +2EV and the DR by +1EV. The BSI sensitivity also improved the OM-1's ability to capture detail. These are the benefits of moving from a front-illuminated Live MOS to a Stacked BSI CMOS sensor.
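
A quick sanity check on what an EV figure means (simple unit conversion, not OMDS's test method): each EV is a doubling, so a +2EV noise improvement means comparable noise at four times the ISO.

```python
def ev_to_factor(ev: float) -> float:
    """Each EV is a doubling: n EV corresponds to a factor of 2**n."""
    return 2 ** ev

print(ev_to_factor(2))  # 4.0 -> e.g. old ISO1600 noise ~ new ISO6400
print(ev_to_factor(1))  # 2.0 -> +1EV DR is one extra stop of range
```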

As seen in the video, it's technically possible to explain why the BSI sensor is better. Looking for similar examples, study the Sony A7 II and A7 III. Like the EM1 and the OM-1, the A7 II and A7 III have the same sensor size and resolution. Like OMDS, Sony also achieved the "standard" BSI noise improvement of +1.5EV and the DR increase of +1EV with the A7 III.


Olympus EM1 III with 12-200mm lens and Pro Capture function.


In the following example, Sony used the improvements to the new A7 IV image sensor to offset the losses of adding 40% more megapixels. No matter how you view it, pixels come at a price. In other words, except for the additional pixels, the A7 IV image quality stayed similar to the A7 III. This is an example of how much sensors improve from one generation to the next...

It is crucial to challenge those saying stacked BSI sensors have no benefits. Ask for the same detailed information as in this short article and video. It has become too easy to randomly drop incorrect statements on social media.




The R&D on the new image sensor started below the surface. Pixels capture Photons, and pixels are the link to Sensor Sensitivity. For example, scientists will target the noise floor of the sensor, and they will focus on Optical and Quantum Efficiency. The stacked configuration improves the operation and speed of both pixels and the image sensor.


Olympus EM1 III with the 12-200mm lens and the Pro Capture function.

Stacked BSI Live MOS sensor with Quad Pixel AF


A big thank you to the forum poster who left positive feedback on my OM-1 video. Another forum poster asked for information on the "Quad Bayer AF" solution. The information in my video is enough to help photographers understand the Stacked BSI sensor. Obviously, some photographers like to have more, and that is good.


Source: OMDS

It is always better to rely on information from manufacturers. For example, see the OM-1 press release further down. You will see OMDS talking about their Cross Quad Pixel AF solution. This is something we can research. Having done that, we see the first Quad Pixel AF solution came from Canon. Quad Pixel AF is the next level up from the older Canon Dual Pixel AF solution, and Dual Pixel AF builds on the standard CMOS technology Canon has used for years.

It could be that OMDS decided to select a new sensor manufacturer to take this new Stacked BSI, Quad Pixel AF sensor with the more powerful TruePic X CPU to the next level. The main benefits of the Cross Quad Pixel AF sensor are speed, accuracy, and a 4D-type AF capability. This improves on Canon's uni-directional Dual Pixel AF solution with all its limitations.

3 aspects of the new OM-1 sensor should be discussed more:-
  1. Pixels capture Photons, and it is possible to improve image sensors...
  2. There is so much more to discover about this amazing new image sensor
  3. We are also seeing more excellent images and feedback from OM-1 users

The official OM-1 news release...

Interesting additional reading:-

- Quad Bayer Sensors - what are they and what are they not - link

- Bringing Backside Illumination to high-speed applications - link

- Interesting explanation of the Quad Bayer section and sensors - link

- Also see this info on Wikipedia (Fuji, Bayer, Quad Bayer, and more) - link

- Comparison between front- and back-illuminated sensors - link

- One more site with detail on the sensor - link

- See this discussion on image quality on DPReview - link

- Interesting book if you like to study more - link

- See the Sony A7 III description of the BSI improvements - link

- Here are some OM-1 test images for download from Image Resource - link

- Another article discussing OMDS introducing the Quad Pixel AF solution - link

- PetaPixel discussing the Quad Pixel AF tech with a typical Canon video - link

- One of the OM System's OM-1 launch videos - one of the better ones - link

- OM-1 Review, a great overview from an existing Olympus photographer - link

- "Size and capture" theory & counter-marketing. Do you trust undisclosed promotions? - link

Oct 4, 2021

ISO Low, L100, L64, and Flash Photography - Part 1

Last update:- 16th January 2023

While working on Part 2 of this article on ISO and Image Quality, I thought it was a good idea to set the stage with a few random thoughts and a basic challenge. Thinking about it, every photographer should develop the ability to analyze digital images. A good understanding of the digital camera and the ability to apply this knowledge benefits all digital photographers...


Taken at constant luminance with variable image signal amplification (ISO)

Taken at a constant image signal amplification (ISO3200)

You are welcome to try the following challenge. Place an A4-sized white paper against the wall and your camera on a tripod. The challenge is to recreate the above 2 illustrations. The info needed to create a basic plan, take the images, and build the final illustrations is all in this article.


Olympus Pen F with 25mm f1.4 Leica, ISO80 (Low), f3.5, 1/1600 - Edited in DxO PL-4 (See more info further down...)

Here are a few general questions for you:-

  • Prep a short explanation of what happens inside the camera for each illustration
  • Think of a few examples and list the benefits of knowing your digital camera...
  • Why do you think it's safe, or not safe, to use the ISO Low, L100, or L64 options?
  • Why do most social media experts tell us it's not OK to use ISO Low, L100, or L64?
  • Which of the 5 images in each of the above illustrations are 18% gray samples?
  • What is the link between the Zone system, 18% gray exposure, and the ISO setting?
  • Study the photons/electrons graph below. Does it apply to all or only some sensors?

For more on how to plan your own strategy, study these articles:
  • Start from basics and learn how to record more image data - link
  • A better way to control the camera is the 2 Step Exposure Technique
  • Why is sensor sensitivity so important? - article (Important info)


A few general thoughts...


One reason photographers should distrust sensor size references is that image noise is normal for all digital cameras. What determines this image noise? Most photographers are never told that all sensors come with a native noise floor. Should we trust reviewers who promote sensor size or write biased camera reviews? This is likely the main reason we don't see discussions about advanced digital photography techniques, like how to use ISO amplification correctly or how to manage the performance of the image sensor. (See this link)

For example, why was the old-school Exposure Triangle never improved, especially while it's still used to train photographers on digital photography? How will they ever master advanced digital camera skills like SNR, sensor saturation, or image signal amplification with an outdated triangle?

Is size a reasonable measure of IQ? We know pixel area (size) is only one of many variables that impact the Optical Efficiency of the image sensor. So why focus on just one of them? Looking for answers is like looking for a needle in a haystack. A more reliable way of rating image sensors is Sensor Sensitivity (Optical and Quantum Efficiency).




To illustrate the oversimplicity of the "size and capture" theory, study the illustration below. It offers more information about the image sensor, the noise elements in the sensor noise floor, and the effective dynamic range of the sensor. Unlike the "size and capture" theory, which cannot explain shadow noise, those who master the principles illustrated below will have a strong theoretical foundation and will improve their image analysis and sensor management skills.

For example, take a moment and consider the graph below. The horizontal axis is the reflected light or photons hitting the sensor. The vertical axis represents the converted electrons. The sensor's full saturation capacity is reached with a fully exposed sensor. Plot the saturation for shadows or low-light scenes. How does this impact the performance of the image sensor? What happens to the SNR in the shadows? What does the histogram look like for an under-exposed sensor? These are simple questions every digital photographer should be able to answer...
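
Here is a minimal sketch of that graph (the full-well capacity and quantum efficiency numbers are assumed for illustration): electrons rise linearly with photons until the pixel saturates, and the shot-noise SNR collapses in the shadows.

```python
import math

FULL_WELL = 30_000   # assumed full-well capacity (electrons)
QE = 0.60            # assumed quantum efficiency

def electrons(photons: float) -> float:
    """Photons-to-electrons curve: linear, then clipped at saturation."""
    return min(photons * QE, FULL_WELL)

for photons in (100, 1_000, 10_000, 50_000, 100_000):
    e = electrons(photons)
    snr = math.sqrt(e)  # shot-noise-limited SNR, read noise ignored
    print(f"{photons:>7} photons -> {e:>7.0f} e-  SNR ~ {snr:6.1f}")
# Shadows (few photons) sit near the noise floor; highlights reach
# saturation, where the right edge of the histogram lives.
```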




Does the size of the sensor backplate "capture" photons? The answer is NO! We know pixels capture photons, and pixels (photocells) convert photons into electrons. This is the main reason why scientists improve pixel (photocell) sensitivity rather than simply design bigger sensors. That said, the size of the sensor does play a role. Any idea how? Think of optical effects like background blur.
 
Olympus photographers are familiar with 12MP and 20MP (MFT) sensors. The pixels of the 12MP sensors are noticeably larger, with roughly 65% more area than those of the 20MP sensors. Yet we know the EM1 III has one of the most sensitive M43 sensors and delivers far superior IQ to any of the older 12MP MFT sensors. Ever wondered why? Could one of the reasons be that sensors with lower Temporal Noise produce cleaner images?
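
You can verify the pixel numbers yourself; a short sketch using the nominal 17.3mm MFT sensor width and typical horizontal resolutions:

```python
SENSOR_WIDTH_MM = 17.3  # nominal Micro Four Thirds sensor width

def pixel_pitch_um(h_pixels: int) -> float:
    """Approximate pixel pitch in micrometres from horizontal resolution."""
    return SENSOR_WIDTH_MM / h_pixels * 1000

pitch_12mp = pixel_pitch_um(4032)   # typical 12MP MFT width (assumed)
pitch_20mp = pixel_pitch_um(5184)   # typical 20MP MFT width (assumed)
print(f"12MP: {pitch_12mp:.2f} um, 20MP: {pitch_20mp:.2f} um")
print(f"area ratio: {(pitch_12mp / pitch_20mp) ** 2:.2f}x")  # ~1.65x
```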

Study the DxOMark results for the EM1 II sensor.


The more we learn, the more we see what happens with image quality...


Another illustration with info on how to manage the sensor at ISO3200.


Let's talk about the physical size of mirrorless cameras. The size of the image sensor influences the physical size of the camera because the lens image circle needs to cover the full sensor. This impacts the size of the lenses, the camera's energy needs, heat management, and the effectiveness of features like IBIS. Digital cameras are basically built around the image sensor. The penalty for cutting corners is overheating, lower efficiency, and less reliable cameras and lenses.

Separately from any fixed mechanical design criteria, scientists focus on materials and the electrical design aspects of creating more sensitive image sensors. This represents a better way of designing new cameras and improving Sensor Sensitivity. For example, typical improvements in image sensors include replacing older wired functions with modern software or AI solutions... 

As you know, Olympus and Panasonic were the first to introduce mirrorless cameras. Did they also establish the mechanical design benchmark for mirrorless cameras? For example, what is the built-in safety margin on M43 cameras? When you see similarly sized APS-C or FF cameras, does it mean the M43 camera is over-designed, or are these APS-C and FF cameras under-designed?


How much image noise is added to the noise floor for each 1-degree increase in temperature?

Try this quick experiment: point a light source at the illustration on your PC screen. Which of these sensors is receiving more light?

If someone says one sensor captures more light than the other, I cannot help thinking: is this statement theoretically correct? I was searching for information when I saw this review and could not help asking, is this just another undisclosed promotion? What if the "more light" benefit was only 0.0002% while those bigger sensors were 10% less efficient? One would like to think it's all about the efficiency of the sensor when converting photons into electrons, right?
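
A toy comparison using the percentages above (illustrative numbers only): the collected signal is the light gathered times the conversion efficiency, so a marginal "more light" advantage loses to a 10% efficiency deficit.

```python
def relative_signal(relative_light: float, efficiency: float) -> float:
    """Collected signal = relative light gathered x conversion efficiency."""
    return relative_light * efficiency

small = relative_signal(1.000000, 0.60)  # assumed 60% efficient sensor
big = relative_signal(1.000002, 0.54)    # +0.0002% light, 10% less efficient
print(f"small: {small:.4f}  big: {big:.4f}")  # the "bigger" sensor loses
```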

See this discussion. It's a great example of why photographers should push manufacturers for better information. Also, do a quick search on the implications of "Undisclosed Promotions"...



Final comments on the two images in this article


Take a look at the 1st image in this article. I set the exposure for the bright areas (sky) because I wanted a well-exposed sky with darker shadows. At home, I did a quick test to study the visible shadow noise when increasing the shadow brightness. Editing the raw file in PhotoLab 4, it was possible to extract cleaner image detail from those same shadows. Does that mean the image had enough information available in the shadows, or is PhotoLab simply doing a great job?

The above example shows the jpeg on the left and the edited raw version on the right. The image was exposed for the shadows, which over-saturated the sensor in the bright areas. Pushing the highlights hard did not clip them. I tried different editing techniques to get the most from this "data-rich" raw file. The most pleasing result came from editing the raw file into an HDR image with Aurora. Did I push the image sensor too hard, or is it OK to push the image sensor?

The selected images demonstrate the different technical aspects discussed in this article, plus they show it's safe to work with ISO Low on your Olympus Pen F. The same is true for ALL cameras. Don't we benefit more from working with a fully saturated sensor and resetting the final image "brightness" in Workspace? Why is there a link between the camera (Live View) and Workspace? Why promote sensor size and then push restrictions like "don't use the extended ISOs on your M43 camera"?

More about Managing your Image Sensor and ISO Amplification in Part 2...


Finally, what's better: exposing creatively or saturating the sensor?

VideoPic Blog Comments

Please add any comments to this article here.