Nov 2, 2021

Is the "Size and Capture" theory too basic?

Last updated:- 29th February 2024

Introduction.

I support traditional science and marketing values. That means accurate and mathematically correct data takes priority over commercial preferences. I studied electrical engineering, worked as a project engineer, and held several marketing positions during my career. My marketing training started at a global manufacturer of industrial automation solutions and electrical equipment, and my interest in consumer behavior developed through product launches: the first digital calculator, the original XT PC, automation solutions, large project sales, and the photography segment.



Please study my ISO Low, ISO100, and ISO64 series. My focus in this discussion is luminance (reflected light). We will review the image signal path from the subject to the sensor because our ability to optimally capture reflected light (image signal) depends on the sensitivity (efficiency) of the image sensor and our ability to manage the digital image-taking process. This article illustrates why many photographers question those promoting the oversimplified "size and capture" theory...

Study this article discussing the 7 points each photographer should know...

What are the main technical differences between sensors? We know sensor sensitivity is determined by the optical and quantum efficiencies of the image sensor, and pixel area (size) influences both. How much visible impact pixel area has on quantum and optical efficiency is a good question. For example, the Canon 6D and Olympus Pen-F are both 20MP cameras, and the pixel area of the 6D is 248% larger than that of the Pen F. How much does this benefit the Canon's image quality, and what should one look for? One option is the DxOMark IQ database; the more practical option is to study shadow detail. We also know each image sensor has a native noise floor that influences our IQ. The pixel's effective photon-sensitive area also differs between BSI, Live MOS, and standard CMOS sensors...
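As a sanity check, here is a minimal Python sketch (my own, not from any review site) of how such a pixel-area comparison is made from published sensor dimensions and pixel counts. The dimensions and megapixel figures below are approximate, assumed values for illustration, so the exact ratio will shift slightly with the specs you plug in.

```python
# Rough pixel-area comparison from sensor dimensions and pixel counts.
# Sensor dimensions and megapixel counts below are approximate, assumed values.

def pixel_area_um2(width_mm, height_mm, megapixels):
    """Average area per pixel in square micrometres."""
    sensor_area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return sensor_area_um2 / (megapixels * 1_000_000)

canon_6d = pixel_area_um2(35.8, 23.9, 20.2)   # full-frame sensor (assumed specs)
pen_f = pixel_area_um2(17.3, 13.0, 20.3)      # Micro Four Thirds sensor (assumed specs)

print(f"Canon 6D pixel area : {canon_6d:.1f} um^2")
print(f"Pen F pixel area    : {pen_f:.1f} um^2")
print(f"Area ratio (6D / Pen F): {canon_6d / pen_f:.2f}x")
```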

The technical characteristics of image sensors are, therefore, unique. The design specs of each image sensor, NOT its physical size, determine its technical characteristics. These characteristics include the saturation level, dynamic range, noise floor, and sensitivity of each sensor.
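To see why the design, and not the physical size, fixes these characteristics, consider how engineering dynamic range is commonly estimated: the saturation level (full-well capacity) divided by the noise floor (read noise), expressed in stops. The sketch below uses assumed, illustrative electron counts rather than measured values for any real sensor.

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2 of saturation level over noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Assumed, illustrative values (not measurements of any particular camera):
print(f"{dynamic_range_stops(full_well_e=50_000, read_noise_e=3.0):.1f} stops")  # large pixel
print(f"{dynamic_range_stops(full_well_e=25_000, read_noise_e=1.5):.1f} stops")  # smaller pixel, lower noise floor
```

Note how a smaller full well paired with a lower noise floor delivers the same dynamic range - exactly the kind of design trade-off the "size only" view ignores.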



What is the "size and capture" theory? The best place to learn more about it is the well-known "size and capture" authority, DPReview. Their camera reviews repeatedly explain the benefits of large sensors capturing more light than crop sensors. The "size and capture" theory is applied predominantly to sensors smaller than full frame; it is not applied equally to FF and MF cameras. These are the benefits you should expect from your new FF (large sensor) camera:

  • They capture more light...
  • Have better image quality...
  • Almost no image noise...
  • Much better low-light IQ...
  • DR with No highlight clipping...
  • Better Auto-focus & video...
  • The magical FF look...
  • Better background blur...
  • More and bigger bokeh...
  • The joy of perfect IQ...

Interestingly, "size and capture" theorists never mention the benefits of saturating the image sensor or of a higher SNR. They only discuss "shot noise" and never the sensor's noise floor. For example, instead of explaining the benefits of saturating the sensor, they use "ISO invariance" to describe the benefits of a higher SNR. ISO invariance and sensor size are presented as magical traits...
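For completeness, here is the half of the story they leave out, as a small sketch: per-pixel noise modeled as shot noise (the square root of the signal in electrons) added in quadrature to a fixed noise floor. The electron counts and the 3 e- read noise are assumed, illustrative values, not measurements.

```python
import math

def snr_db(signal_e, read_noise_e):
    """Per-pixel SNR: signal over shot noise plus a fixed noise floor, in quadrature."""
    total_noise = math.sqrt(signal_e + read_noise_e ** 2)   # shot noise = sqrt(signal)
    return 20 * math.log10(signal_e / total_noise)

read_noise = 3.0  # electrons, assumed noise floor
for signal in (25, 100, 1_000, 10_000, 40_000):  # deep shadow ... near saturation
    print(f"{signal:>6} e-  ->  SNR {snr_db(signal, read_noise):5.1f} dB")
```

The SNR climbs steadily as the pixel fills toward saturation, which is precisely the benefit of saturating the sensor that the "size and capture" crowd never spells out.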

This article discusses the 4 essentials that will improve your image quality in 2024...

Do you think the "size and capture" theory is oversimplified..?


Sony improved the Quantum and Optical efficiencies (BSI architecture), lowered the noise floor (fewer pixels), and upped the readout speed.


Here are the points we are reviewing in this article:-

  1. A better way of doing photography
  2. Testing the Pen F and the A7S III
  3. A quick review of the test results
  4. A few additional thoughts
  5. Conclusion

1. A better way of doing digital photography


The following hearsay theories/trends are associated with the "Size and capture" fanboys:
  • Your ISO function adjusts the sensor's sensitivity
  • Never use ETTR at higher ISOs because the DR is less
  • You don't need a flash because FF cameras have no noise
  • They never use a tripod because new cameras have IBIS
  • They need high-resolution cameras because they CROP
  • They always argue while using the analog exposure triangle
  • They depend 110% on FF sensors, AI, and the perfect AF
  • Crop sensor lenses suffer from high levels of diffraction...
  • They always hope for something new to have more IQ

Take a moment and study the exposure formula...


Everything starts by mastering the image sensor (Fig 1) and exposure. We control 4 of the variables in the Exposure Formula (a small worked example follows below the list). They are:
  • N - The aperture or f-stop
  • t - The Shutter speed
  • S - ISO setting (image brightness)
  • L - Avg. scene luminance (illumination or a flash)
Digital photographers use these 4 variables to expose (saturate) the image sensor or to create optical effects like background blur or bokeh. This is why experienced photographers have a flash or tripod. Some of the most creative photography is done with artificial lighting like LED panels or a flash.
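For reference, the standard reflected-light exposure equation ties these four variables together as N²/t = L·S/K, where K is the meter calibration constant (commonly about 12.5 for reflected-light meters). The sketch below simply solves it for the shutter time; the scene luminance of 400 cd/m² is an assumed example value.

```python
def shutter_time_s(f_number, iso, luminance_cd_m2, k=12.5):
    """Solve the reflected-light exposure equation N^2 / t = L * S / K for t."""
    return (f_number ** 2) * k / (luminance_cd_m2 * iso)

# Assumed example: average scene luminance of 400 cd/m^2, f4.0, ISO200
t = shutter_time_s(f_number=4.0, iso=200, luminance_cd_m2=400)
print(f"t = {t:.4f} s  (about 1/{round(1 / t)})")
```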




Reliable information digital photographers can trust. It all starts by walking away from "size and capture" fanboys. For example, focus on the following to master your image sensor:
  • Sensor pixel diameter influences sensitivity - fewer (larger) pixels are more sensitive
  • Higher pixel sensitivity improves the sensor's ability to capture shadow details
  • Each image sensor has a unique noise floor (noise floor size and types - Fig 1)
  • More megapixels add more noise to the noise floor (more pixel control circuitry)
  • There are two forms of noise: shot noise and the sensor's noise floor (Fig 1)
  • The sensor's sensitivity is fixed/set at the factory when the sensor is calibrated
  • High-sensitivity sensors mean less high-ISO noise (a lower calibration multiplier)
  • High-sensitivity sensors typically have a higher saturation point and more DR
  • The old analog exposure triangle is not the best choice for digital cameras

For those saying we don't need knowledge or a flash, see this video on flash photography - link.


Olympus E-P7, with the FL-300R flash and the 17mm f1.8 lens - ISO200, f4.0, 1/50th.


A quick way of improving your family photos is to consider the subject, the illumination (flash), and the luminance reaching the sensor. The secret is more luminance on the sensor and a flash to illuminate and freeze the subject. In fact, keep an Olympus FL-300R flash or the standard OM-D clip-on flash in your bag. Try the following camera settings for your next family event: use Manual Exposure Mode with a shutter speed of 1/125th and an aperture of f4.0, set ISO500, and set the exposure compensation for your flash between 0 and -0.7EV. Your flash is an exposure variable that illuminates the subject. You don't need the best flash or tripod; a bean bag with the clip-on flash or FL-300R is enough.
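If you want to sanity-check the flash side without relying purely on TTL, the classic guide-number rule of thumb is enough: full-power reach = GN × sqrt(ISO/100) / f-number. The GN of 20 (metres, ISO100) below is an assumed, illustrative value for a small clip-on flash; check your own flash's manual for its rated figure.

```python
import math

def full_power_reach_m(guide_number_iso100, f_number, iso):
    """Guide-number rule of thumb: distance = GN * sqrt(ISO / 100) / N."""
    return guide_number_iso100 * math.sqrt(iso / 100) / f_number

# Assumed GN of 20 (metres, ISO100) for a small clip-on flash, at the suggested f4.0 and ISO500
print(f"Full-power reach: {full_power_reach_m(20, 4.0, 500):.1f} m")
```

At the suggested f4.0 and ISO500 that works out to roughly 11 m of full-power reach, so in a normal living room the flash fires at a fraction of its power and a little negative flash compensation is all the fine-tuning you need.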

The focus on reflected light is a different way of planning and doing digital photography. This is how photographers used to capture photos before the days of AI and Photoshop. Thinking about it, the only new thing is digital photography itself and learning how to optimize your image sensor.

See this article discussing the 7 points each digital photographer should know...


Figure 1. This is the most critical illustration photographers should study to master image sensors.


2. Why test extremes like the Sony A7S III and Olympus Pen F?


Because the difference between these 2 sensors is BIG. What happens when we underexpose the shadows while correctly exposing the mid-tones to highlights? Will "size and capture" fanboys claim it's all about DR, sensor size, and smaller sensors capturing less light, or are there technical reasons why scientists invest their time and energy in designing more sensitive sensors? Are image sensors as basic as "size," or is there a technical explanation for sensor performance?


My challenge was to underexpose part of the subject and manage the performance of the Image Sensor.

I had the opportunity to test my own advice. My son was so kind as to lend me his A7S III for one day. What simple test could I do to push these two image sensors in one day? I decided to use my Pen F with the A7S III. I wanted to see if the SNR of each sensor changes when part of the subject is in the shadows. Will this prove that all image sensors have a noise floor and the SNR is lower in the shadows? The next step was to create a semi-controlled space to record these images.

A quick reminder:- Your ISO setting does not create noise. The ISO setting amplifies the image signal and the existing noise floor of the image sensor. The sensitivity of your image sensor and the SNR at each exposure will determine how much visible noise you see in your final image.
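Here is that reminder as a toy model, using assumed numbers and the simplification the reminder describes (the gain applied equally to the signal and the noise floor): the ratio between the amplified signal and the amplified noise floor does not change with the ISO setting; it was fixed at the moment of exposure.

```python
def apply_iso_gain(signal_e, noise_floor_e, gain):
    """Toy model: the ISO gain scales the captured signal and the noise floor together."""
    return signal_e * gain, noise_floor_e * gain

signal, noise = 200.0, 3.0            # electrons captured at the exposure (assumed values)
for gain in (1, 2, 4, 8):             # base ISO, then +1, +2, +3 EV of gain
    s, n = apply_iso_gain(signal, noise, gain)
    print(f"gain x{gain}:  signal {s:7.0f}  noise floor {n:5.1f}  signal/noise {s / n:5.1f}")
```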

That said, I wanted to test whether my thinking is correct, or whether I should repent and forever accept the "Size and Capture" theory and focus on that ONE variable, SIZE..?




3. Can we explain these results?


I have no doubt that the Sony A7S III is a fantastic camera. My son uses the Sony A7S III, his Sony A1, and RED video cameras professionally. His customers are happy with his work. My own experience with the A7S III is only positive. The Sony A7S III is a unique camera aimed at videographers.

The same is true for the Olympus Pen F. Against all odds, it has a loyal following, and many new creative enthusiasts are discovering this unique camera in 2023/24. Does it mean we should compete with the newest and most popular cameras? I really do not see any value in that..?

The reason for this test is NOT to show which camera is better or that my M43 sensor is super awesome. Each image was taken in a semi-controlled space. I increased the brightness so you can study the shadows. Olympus said the differences between M43 and FF cameras are tiny. Will we see that in this test?

The change in shadow detail between 0EV and +1EV demonstrates the changing sensor performance (saturation) between the two exposures. This level of control is only possible if you know your digital camera and how the performance of the camera/sensor works. (It is not only ETTR.)
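A small worked example, using the same toy noise model and assumed electron counts as earlier, shows why that extra stop matters most in the shadows: doubling the exposure helps a noise-floor-limited shadow patch by more than the sqrt(2) you would expect from shot noise alone.

```python
import math

def snr(signal_e, read_noise_e):
    """Signal over shot noise plus the noise floor, added in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

read_noise = 3.0                                  # assumed noise floor, electrons
for label, signal in (("deep shadow", 20), ("midtone", 2_000)):
    gain = snr(2 * signal, read_noise) / snr(signal, read_noise)
    print(f"{label:>12}: +1 EV improves the SNR by a factor of {gain:.2f}")
```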

This exercise was exciting. The SNR response is different for each camera, and the saturated and unsaturated parts of the sensor determine the final image look. "Size and capture" fanboys cannot explain these performance differences between the shadows and well-exposed areas. 


Normally (0EV) Exposed Images


The +1EV (Exposure) Images


4. A few more thoughts on the above test images

The two images below are the fully edited raw versions of the above +1EV images. The Sony A7S III has more shadow detail, and it took more effort to recover the Pen F shadow detail. The reason for this is the sensitivity difference between the A7S III BSI sensor and the Live MOS sensor of the Pen F. Another reason is that my Pen F recorded less tonal data in the shadows (see the histogram). I purposely left these final images slightly "flat" so you can study the "recovered" shadow details.

Here are a few final thoughts about these images:-

  • Sensor technologies - Live MOS versus BSI (both CMOS, but different architectures)
  • Technical differences - 2016 to 2020 (much development happened in these 4 years)
  • Sensor sensitivity - sensor evolution focuses on quantum and optical efficiencies...
  • Sensor sensitivity - Sony selected a super high-sensitivity BSI sensor for the A7S III.
  • Pixel size - it makes a difference, and the pixel-area difference is at its largest in this example
  • Sensor noise floor - the A7S III sensor benefits from having a smaller noise floor
  • Sensor noise floor - the BSI sensor plus four years of R&D improved the noise floor & efficiency

See the video I did for the OM-1 - link.

I asked the following question in a previous article: does the sensor backplate record photons? We see that the size of the backplate stayed the same while the pixel count increased with each new camera. Doesn't that mean the "size and capture" theory is hopelessly oversimplified?

Have you ever wondered if pixel diameter is one of many variables impacting IQ?
Then what are the other variables, and shouldn't we consider them?

The image below is best viewed on an iMac or large PC screen.

Don't you think the 17mm f1.8 three-dimensional "M43 look" from my Pen F is awesome? 😉

5. Conclusion


Why do I think "size and capture" promoters are not serious? Simply study their articles, reviews, and comments. For example, why use pixel pitch when referring to the pixel area? Pixel pitch is generally used with LED monitors. Why call BSI sensors "stacked CMOS sensors"? Why let people think standard CMOS sensors have a stacked or layered design? Why not call it a stacked BSI sensor..?

Why argue about the placement of components like A/D converters or about ISO "sensitivity" when photographers benefit more from learning the correct function of these components? Why do fanboys and promoters always fall back on unnecessary, oversimplified, or outright fake theories..?

Best Regards

Siegfried

VideoPic Blog Comments

Please add any comments to this article here.