Studying the image sensor and how manufacturers spend their R&D dollars shows us which areas are improving. That said, it's good to keep an eye on all of these components:-
Speed (new generation sensors are faster)
Resolution (the trend is to have more megapixels)
Sensitivity (Optical & Quantum efficiency - very important)
Firmware (Sensor and camera CPUs - Image Processors are crucial)
Sensor Noise Floor (a smaller noise floor with each new generation)
The video discusses the new OM-1 image sensor and why it's a critical development for Micro Four Thirds. We see how Olympus photographers benefited from the OM-1 sensor improvements. We also take a closer look at the new Stacked BSI Image Sensor and why the step to BSI technology matters.
Camera reviewers never discuss the losses associated with more pixels. For example, any improvements in sensor sensitivity, firmware, or image processing are used to offset the losses from adding more and smaller pixels. OMDS did the opposite and kept the OM-1 resolution the same at 20MP. This pixel count and the new BSI sensor technology made it possible to improve the OM-1's noise performance by up to 2EV and its DR by 1EV. The BSI sensitivity also improved the OM-1's ability to capture detail. These are the benefits of moving from a conventional Live MOS to a stacked BSI CMOS sensor.
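To put those numbers in perspective, here is a minimal Python sketch that converts the quoted EV gains into linear factors. It assumes the article's +2EV and +1EV figures and says nothing about how OMDS measured them.

```python
# Quick sanity check: what a gain quoted in EV (stops) means in linear terms.
# The +2EV noise and +1EV DR figures are the article's claims, not my measurements.

def ev_to_factor(ev: float) -> float:
    """Convert a gain in EV (stops) to a linear multiplier (each stop doubles)."""
    return 2.0 ** ev

noise_gain_ev = 2.0   # claimed OM-1 noise improvement over the previous sensor
dr_gain_ev = 1.0      # claimed dynamic-range improvement

print(f"+{noise_gain_ev:.0f}EV -> {ev_to_factor(noise_gain_ev):.0f}x cleaner shadows (linear SNR)")
print(f"+{dr_gain_ev:.0f}EV -> {ev_to_factor(dr_gain_ev):.0f}x more recordable tonal range")
```

In other words, a 2EV noise gain is a 4x improvement in linear terms, which is why it is such a big step for the same 20MP resolution.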
As seen in the video, it's technically possible to explain why the BSI sensor is better. For a similar example, study the Sony A7 II and A7 III. Like the EM1 and the OM-1, the A7 II and A7 III share the same sensor size and resolution. Like OMDS, Sony also achieved the "standard" BSI improvements with the A7 III: roughly +1.5EV in noise and +1EV in DR.
Olympus EM1 III with 12-200mm lens and Pro Capture function.
In the following example, Sony used the improvements to the new A7 IV image sensor to offset the losses of adding 40% more megapixels. No matter how you view it, pixels come at a price. In other words, except for the additional pixels, the A7 IV image quality stayed similar to the A7 III. This is an example of how much sensors improve from one generation to the next...
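As a rough illustration of that price, the sketch below estimates pixel pitch and per-pixel light for the A7 III and A7 IV, assuming nominal 24MP and 33MP counts on a 36 x 24 mm sensor; the exact effective pixel counts and pitches differ slightly.

```python
import math

# Rough per-pixel cost of adding megapixels on the same sensor area.
# Assumptions: full-frame area of 36 x 24 mm, A7 III ~24MP, A7 IV ~33MP
# (nominal figures used for illustration only).

SENSOR_AREA_MM2 = 36.0 * 24.0

def pixel_stats(megapixels: float):
    area_um2 = SENSOR_AREA_MM2 * 1e6 / (megapixels * 1e6)  # area per pixel in square microns
    pitch_um = math.sqrt(area_um2)                          # approximate pixel pitch
    return area_um2, pitch_um

a7iii_area, a7iii_pitch = pixel_stats(24)
a7iv_area, a7iv_pitch = pixel_stats(33)

light_per_pixel = a7iv_area / a7iii_area     # fraction of light each A7 IV pixel sees
loss_ev = abs(math.log2(light_per_pixel))    # the same loss expressed in EV

print(f"A7 III: ~{a7iii_pitch:.1f} um pitch, A7 IV: ~{a7iv_pitch:.1f} um pitch")
print(f"Each A7 IV pixel collects ~{light_per_pixel:.0%} of the light, about {loss_ev:.2f} EV less")
```

Roughly half a stop less light per pixel is the gap the newer sensor generation had to claw back just to stay level with the A7 III.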
It is crucial to challenge those saying stacked BSI sensors have no benefits. Ask for the same detailed information as in this short article and video. It has become far too easy to drop random, incorrect statements on social media.
The R&D on the new image sensor started below the surface. Pixels capture photons, and pixels are the link to Sensor Sensitivity. For example, scientists target the sensor's noise floor and focus on Optical and Quantum Efficiency. The stacked configuration improves the operation and speed of both the pixels and the image sensor.
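For readers who like numbers, here is a simplified, textbook-style SNR model showing where quantum efficiency and the noise floor fit in. The QE, read-noise, and dark-noise values are placeholders for illustration, not OMDS figures.

```python
import math

# A textbook-style signal model (my simplification, not any manufacturer's data):
# electrons = photons * QE, and the main noise sources add in quadrature.
# The "noise floor" here is read noise plus dark-current noise; shot noise
# scales with the signal itself.

def snr(photons: float, qe: float = 0.6, read_noise_e: float = 2.0,
        dark_e: float = 1.0) -> float:
    """Signal-to-noise ratio for a single pixel (all values in electrons)."""
    signal = photons * qe                                   # Quantum Efficiency: photons -> electrons
    shot = math.sqrt(signal)                                # photon shot noise
    floor = math.sqrt(read_noise_e**2 + dark_e**2)          # sensor noise floor
    return signal / math.sqrt(shot**2 + floor**2)

for photons in (50, 500, 5000, 50000):
    print(f"{photons:>6} photons -> SNR {snr(photons):6.1f}")
```

The floor dominates when few photons arrive (shadows) and shot noise dominates in the highlights, which is why improving QE and the noise floor pays off most in the shadows.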
Olympus EM1 III with the 12-200mm lens and the Pro Capture function.
Stacked BSI Live MOS sensor with Quad Pixel AF
A big thank you to the forum member who posted positive feedback on my OM-1 video. Another forum member asked for information on the "Quad Bayer AF" solution. The information in my video is enough to help photographers understand the Stacked BSI sensor. Naturally, some photographers like to have more detail, and that is good.
Source: OMDS
It is always better to rely on information from the manufacturers. For example, see the OM-1 press release further down, where OMDS talks about its Cross Quad Pixel AF solution. This is something we can research. Having done that, we see the first Quad Pixel AF solution came from Canon. Quad Pixel AF is the next step up from the older Canon Dual Pixel AF solution, which is built on the standard CMOS technology Canon has been using for years.
It could be that OMDS selected a new sensor manufacturer to take this Stacked BSI Quad Pixel AF sensor, paired with the more powerful TruePic X processor, to the next level. The main benefits of the Cross Quad Pixel AF sensor are speed, accuracy, and a 4D-type AF capability. This improves on Canon's uni-directional Dual Pixel AF solution and its limitations.
Three aspects of the new OM-1 sensor should be discussed more:-
Pixels capture Photons, and it is possible to improve image sensors...
There is so much more to discover about this amazing new image sensor
We are also seeing more excellent images and feedback from OM-1 users
The official OM-1 news release...
Interesting additional reading:-
- Quad Bayer Sensors - what are they and what are they not - link
- Bringing Backside Illumination to high-speed applications - link
- Interesting explanation of the Quad Bayer section and sensors - link
- Also see this info on Wikipedia (Fuji, Bayer, Quad Bayer, and more) - link
- Comparison between front- and back-illuminated sensors - link
While working on Part 2 of this article on ISO and Image Quality, I thought it was a good idea to set the stage with a few random thoughts and a basic challenge. Thinking about it, every photographer should develop the ability to analyze digital images. A good understanding of the digital camera and the ability to apply this knowledge benefits all digital photographers...
Taken at a constant luminance perspective and a variable image signal amplification
Taken at a constant image signal amplification (ISO3200)
You are welcome to try the following challenge. Place an A4-sized white paper against the wall and your camera on a tripod. The challenge is to recreate the above 2 illustrations. The info needed to create a basic plan, take the images, and build the final illustrations is all in this article.
Olympus Pen F with 25mm f1.4 Leica, ISO80(Low), f3.5, 1/1600 - Edited in DxO PL-4 (See more info further down...)
Here are a few general questions for you:-
Prepare a short explanation of what happens inside the camera for each illustration
Think of a few examples and list the benefits of knowing your digital camera...
Why do you think it's safe, or not safe to use the ISO Low, L100, or L64 options?
Why do most social media experts tell us it's not OK to use ISO Low, L100, or L64?
Which of the 5 images in each of the above illustrations are 18% gray samples?
What is the link between the Zone system, 18% gray exposure, and the ISO setting?
Study the photons/electrons graph below. Does it apply to all or only some sensors?
For more on how to plan your own strategy, study these articles:
Start from basics and learn how to record more image data - link
A better way to control the camera is the 2 Step Exposure Technique
Why is sensor sensitivity so important? - article (Important info)
A few general thoughts...
One reason photographers should question sensor-size claims is that every digital camera produces image noise. What determines this image noise? Most photographers are never told that every sensor comes with a native noise floor. Should we trust reviewers who promote sensor size or write biased camera reviews? This is likely the main reason we rarely see discussions of advanced digital photography techniques, like how to use ISO amplification correctly or how to manage the performance of the image sensor. (See this link)
For example, why was the old-school Exposure Triangle never improved, especially since it's still used to train photographers in digital photography? How will they ever master advanced digital camera skills like SNR, sensor saturation, or image signal amplification with an outdated triangle?
Is size a reasonable measure of IQ? We know pixel area (size) is one of many variables that impact the Optical Efficiency of the image sensor. So why focus on only one of many variables? Well, looking for answers is like finding a needle in a haystack. A more reliable way of rating image sensors seems to be Sensor Sensitivity (Optical and Quantum Efficiency).
To illustrate how oversimplified the "size and capture" theory is, study the illustration below. It offers more information about the image sensor, the noise elements in the sensor's noise floor, and the sensor's effective dynamic range. Unlike the "size and capture" theory, which cannot explain shadow noise, the principles illustrated below give photographers a strong theoretical foundation and improve their image-analysis and sensor-management skills.
For example, take a moment and consider the graph below. The horizontal axis represents the reflected light, or photons, hitting the sensor. The vertical axis represents the converted electrons. The sensor's full saturation capacity is reached with a fully exposed sensor. Now plot the saturation level for shadows or low-light scenes. How does this impact the performance of the image sensor? What happens to the SNR in the shadows? What does the histogram look like for an under-exposed sensor? These are simple questions every digital photographer should be able to answer...
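Here is a small numeric sketch of that photons-to-electrons graph, using assumed full-well, read-noise, and QE values (not measured OM-1 numbers), to show how saturation and the noise floor bound the usable range.

```python
import math

# A numeric sketch of the photons-in / electrons-out graph described above.
# The full-well capacity, read noise, and QE below are illustrative placeholders.

FULL_WELL_E = 20000     # electrons at full saturation (assumed)
READ_NOISE_E = 2.5      # noise floor in electrons (assumed)
QE = 0.6                # assumed quantum efficiency

def electrons_out(photons: float) -> float:
    """Linear response that clips at the full-well capacity (saturation)."""
    return min(photons * QE, FULL_WELL_E)

# Dynamic range in stops: how many halvings fit between the full well and the floor.
dr_stops = math.log2(FULL_WELL_E / READ_NOISE_E)
print(f"Effective DR ~ {dr_stops:.1f} stops")

# A shadow or low-light exposure uses only a small slice of the transfer curve,
# so the signal sits much closer to the noise floor (lower SNR, left-packed histogram).
saturating_photons = FULL_WELL_E / QE
for fraction in (1.0, 0.1, 0.01):
    e = electrons_out(saturating_photons * fraction)
    print(f"{fraction:>5.0%} of a full exposure -> {e:>7.0f} e-")
```

With these placeholder numbers, an exposure at 1% of saturation leaves only a couple of hundred electrons above a 2.5-electron floor, which is exactly where shadow noise becomes visible.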
Does the size of the sensor backplate "capture" photons? The answer is NO! We know pixels capture photons, and pixels (photocells) convert photons into electrons. This is the main reason why scientists improve pixel (photocell) sensitivity rather than simply design bigger sensors. That said, the size of the sensor does play a role. Any idea where? Think of image effects like background blur.
Olympus photographers are familiar with 12MP and 20MP (MFT) sensors. The pixels of the 12MP sensors are about 30% wider, with roughly 65% more area, than those of the 20MP sensors. Yet we know the EM1 III has one of the most sensitive M43 sensors and delivers far superior IQ to any of the older 12MP MFT sensors. Ever wondered why? Could one of the reasons be that sensors with lower Temporal Noise produce cleaner images?
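The numbers above come from a quick calculation like the one below, which assumes the nominal 17.3 x 13 mm Four Thirds imaging area and typical 4032x3024 (12MP) and 5184x3888 (20MP) resolutions.

```python
# Pixel pitch and area for the two common MFT resolutions, assuming the
# nominal 17.3 x 13 mm Four Thirds imaging area and square pixels.

MFT_WIDTH_MM = 17.3

def pitch_um(h_pixels: int) -> float:
    """Approximate pixel pitch in microns across the sensor width."""
    return (MFT_WIDTH_MM * 1000) / h_pixels

p12 = pitch_um(4032)   # typical 12MP MFT resolution
p20 = pitch_um(5184)   # typical 20MP MFT resolution

print(f"12MP pitch ~{p12:.2f} um, 20MP pitch ~{p20:.2f} um")
print(f"Pitch ratio ~{p12 / p20:.2f}x, area ratio ~{(p12 / p20) ** 2:.2f}x")
# The 12MP pixels are bigger, but nowhere near double the diameter; the newer
# 20MP sensors win through better sensitivity and a lower noise floor.
```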
Study the DxOMark results for the EM1 II sensor.
The more we learn, the more we see what happens with image quality...
Another illustration with info on how to manage the sensor at ISO3200.
Let's talk about the physical size of mirrorless cameras. The size of the image sensor influences the physical size of the camera because the lens image circle needs to cover the full sensor. This impacts the size of the lenses, the camera's energy needs, heat management, and the effectiveness of features like IBIS. Digital cameras are basically built around the image sensor. The penalty for cutting corners is overheating, lower efficiency, and less reliable cameras and lenses.
Apart from any fixed mechanical design criteria, scientists focus on the materials and electrical design aspects of creating more sensitive image sensors. This is a better way of designing new cameras and improving Sensor Sensitivity. For example, typical improvements in image sensors include replacing older hard-wired functions with modern software or AI solutions...
As you know, Olympus and Panasonic were the first to introduce mirrorless cameras. Did they also establish the mechanical design benchmark for mirrorless cameras? For example, what is the built-in safety margin on M43 cameras? When you see similarly sized APS-C or FF cameras, does it mean the M43 camera is over-designed, or are those APS-C and FF cameras under-designed?
How much image noise is added to the noise floor for each 1-degree increase in temperature?
Try this quick experiment: point a light source at your PC screen. Which of these sensors is receiving more light?
If someone says one sensor captures more light than the other, I can't help but ask whether the statement is theoretically correct. I was searching for information when I saw this review, and I could not help asking: is this just another Undisclosed Promotion? What if the "more light" benefit was only 0.0002% while those bigger sensors were 10% less efficient? One would like to think it's all about the efficiency of the sensor when converting photons into electrons, right?
See this discussion. It's a great example of why photographers should push manufacturers for better information. Also, do a quick search on the implications of "Undisclosed Promotions"...
Final comments on the two images in this article
Take a look at the 1st image in this article. I set the exposure for the bright areas (sky) and accepted darker shadows. At home, I did a quick test to study the visible shadow noise when I increased the shadow brightness. Editing the raw file in PhotoLab 4, it was possible to extract cleaner image detail from those same shadows. Does that mean the image had enough information available in the shadows, or is it just PhotoLab doing a great job?
The above example shows the jpeg on the left and the edited raw version on the right. The image was exposed for the shadows, which pushed the sensor close to saturation in the bright areas; pushing the highlights that hard did not clip them. I tried different editing techniques to get the most from this "data-rich" raw file. The most pleasing result came from editing the raw file with Aurora into an HDR image. Did I push the image sensor too hard, or is it OK to push the image sensor?
The selected images demonstrate the different technical aspects discussed in this article and show that it's safe to work with ISO Low on your Olympus Pen F. The same is true for ALL cameras. Don't we benefit more from working with a fully saturated sensor and adjusting the final image "brightness" in Workspace? Why is there a link between the camera (Live View) and Workspace? And why promote sensor size while pushing restrictions like "don't use the extended ISOs on your M43 camera"?
More about Managing your Image Sensor and ISO Amplification in Part 2...
Finally, what's better: exposing creatively or saturating the sensor?
We are studying the history and growth of Olympus Live View. It all started with the Olympus E-330 in 2006 and the E-3 in 2007. The E-3 was the first Pro DSLR with a fully articulating Live View display. The focus was functionality and the ability to compose an image while viewing the sensor's live data. The E-3 was also the first DSLR to display the sensor's RAW data and update the display as the photographer adjusted settings like the WB, ISO, auto and manual focusing, and exposure. The photographer could also monitor the camera's IBIS operation on the Live View display.
This was the start of the Olympus Live View function. The current Live View and Workspace (raw converter) combination has advanced to a level one would think is absolutely normal. Interestingly, other manufacturers don't offer a similar solution, except for the Fuji X-RAW-Studio. In this article, we review the Enhanced Raw Format and the integration of Olympus cameras with Workspace.
What does this mean? We can now replicate the sensor's raw data, the camera's final Live View display, and our camera settings in Workspace.
In January 2024, I wrote a new article discussing different options for creating profiles.
Also, see the 2nd article I wrote about the Enhanced Raw Format.
- See this article for details on how Live View works - link
- How to use the Olympus Color Creator and Workspace - link
1. Introduction
What would photographers typically expect from the camera's display:-
High-resolution LED or OLED screens with 1M-Dot or higher resolution
Visibility and functionality are critical aspects for most photographers
Fully Articulating 3" or larger touchscreen displays for video applications
Bright displays with good viewing and controls, similar to mobile phones
Large-magnification EVFs (2.3M-dot or higher, 120fps) with no blackouts
The new Fuji XT-5 display is one of the best for photography in 2023
The eyepoint on the EVF is important, especially for those wearing glasses
The Super Control Panel (SCP) on Olympus cameras is a great solution
The existing Olympus menu is great and easy to use for M43 photographers
Backward operational compatibility is a strength of the EM1 III UI & menu
The ability to recreate the camera's Live View display in the raw converter
The ability to develop and practice camera color profiles at home (software)
The EM1 III is the final Pro-level camera from Olympus with the familiar UI and menu. This menu system was developed and improved over many years. The biggest advantage of the EM1 III is its backward compatibility with older cameras. For example, I recently bought a 10-year-old Olympus EM1 MKI and had no problem applying my preferred Olympus configuration to the older EM1.
The above image illustrates the conversion process for the Enhanced Raw File. It starts in the camera, where the final Live View data and camera settings are added to the Enhanced Raw File. After uploading the file to our PC, we open it in Workspace. Only the sensor's raw data is visible at first. The next step is to "activate" the camera settings to recreate the camera's final Live View display. We can then adjust those camera settings, and apply more advanced editing, in Workspace. The final converted raw file is exported as a 16-bit Tiff to PS...
Tip:- Study the Live View Boost function from Olympus in the user manual.
Olympus E-3 with an Articulating Display (Competing with the Canon 40D and Nikon D300).
Olympus continued to develop the Live View function and the compatibility between the camera and the previous Olympus raw converter, Viewer 3. The next step was the Creative Color concept, which consists of functions like B&W Filters, Color Profiles (Pen F, EP-7), Color Filters, Adjust Color, and the Color Creator.
I discussed the Live View function in some of my other articles. My search for information on Live View and the histogram started in 2019. For example, I found more data about Live View in my older E-30 documentation. Older news releases from Olympus and user manuals are a treasure trove of "unfiltered" Olympus information on their cameras, lenses, and software...
Please study this terminology, as I use it throughout this article.
2. Live View and Olympus Cameras
Olympus photographers need to answer this: do you think Live View or the raw converter (Viewer 3 & Workspace) were only random ideas? Olympus introduced Live View in 2006, and the Olympus Imaging Division's marketing team never re-launched or advertised any of the improvements. They looked like the worst marketing team in the industry. The enormous progress made by the Imaging engineering team only becomes visible when you study the new "Working Space" from Olympus.
For example, have you ever asked yourself why they called it WorkSpace and Live View?
Any camera's Live View display should mirror the image sensor's response to camera adjustments and the reflected light reaching the sensor. This concept was part of Olympus's design criteria from day one. Combining the sensor's raw data with the functionality of Workspace was the next logical step for the Olympus Imaging engineering team...
But all cameras have Live View, right? Yes, and it's possible to list and evaluate the design criteria of all mirrorless cameras by reviewing the photography landscape promoted by camera reviewers and what photographers (or promoters) supposedly want from a camera and its Live View display.
Studying Olympus, we see the following:-
A live connection between the image sensor and the Live View display
The histogram with the same direct link to the sensor raw or image data
The ability to monitor the raw or image data while adjusting the camera
The ability to evaluate camera adjustments before capturing the image
Selecting and changing any color or creative adjustments in Live View
The ability to have an Enhanced Raw File with ALL the camera settings
Compatibility between the Live View data and supplier Editing Software
The ability to accurately apply & monitor exposure techniques like ETTR
The ability to edit the camera settings or practice with them in Workspace
This basic Live View flow diagram matured with M43 Olympus cameras.
How do we enhance our raw files in Live View? Your camera's Live View display or EVF replicates the sensor's Luminance Perspective. The only difference between the sensor's raw data perspective and the camera's Live View image is a layered "Display Profile" placed onto the raw data. Olympus created another layer to add the user's profile settings (Creative Data) to the sensor's raw data. This is how the Enhanced Raw Format enables Workspace to access the camera's layered Enhanced Raw data. In other words, we can now simulate the camera's final Live View display in Workspace. It also allows us to experiment with many camera settings or profiles in Workspace.
Regular raw converters are different because they only access the sensor's Raw Data Layer. Traditional editors like Photoshop, Lightroom, or PhotoLab cannot access or process the Creative Layers in the Enhanced Raw Data from Olympus cameras. That does not mean they are not good. WorkSpace has full access to both the sensor's Raw Data and the user's Creative Layer via the Enhanced Raw Format. OM-System uses the same "Advanced Raw Format" terminology on its official website and in its press releases.
Traditional Raw File = Sensor Raw Data
Live View Image = Sensor Raw Data + Display Profile
Enhanced Raw File = Sensor Raw Data + Camera Creative Layers
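To make the layering idea concrete, here is a minimal conceptual sketch in Python. The class and field names are hypothetical illustrations of the concept summarized above, not the actual ORF file structure or the Workspace API.

```python
from dataclasses import dataclass, field

# A conceptual model of the three definitions above. All names are illustrative.

@dataclass
class SensorRawData:
    width: int
    height: int
    bit_depth: int = 12              # typical MFT raw bit depth (assumed)

@dataclass
class CreativeLayer:
    picture_mode: str = "Natural"
    white_balance: str = "Auto"
    color_creator: tuple = (0, 0)    # (hue, saturation) style adjustment

@dataclass
class EnhancedRawFile:
    raw: SensorRawData                              # what a traditional converter sees
    creative: CreativeLayer = field(default_factory=CreativeLayer)

    def live_view(self) -> str:
        """Workspace 'activates' the creative layer on top of the raw data."""
        return f"{self.raw.width}x{self.raw.height} raw + {self.creative.picture_mode} profile"

orf = EnhancedRawFile(SensorRawData(5184, 3888))
print(orf.live_view())   # simulated Live View = raw data + creative layer
```

A traditional converter only ever reads the `raw` part of such a structure; Workspace reads both parts, which is the whole point of the Enhanced Raw Format.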
This is BIG news because the Enhanced Raw Format enables us to test different camera settings while Workspace simulates the camera's Live View display. This process also improves our experience of testing and developing new camera profiles in Workspace. A good example is the Color Creator from Olympus. It is difficult to familiarize yourself with this function on the camera display.
The above illustrations demonstrate the Enhanced Raw Format and Live View in Workspace. They also show how to activate your camera settings in Workspace. Camera settings that are not clearly marked, like the Picture Mode, can be found in the Exif data. The Color Creator is one example...
Older WorkSpace versions could only replicate the Creative Color settings of specific camera models. The anomaly was the EM1 II: it was possible to overlay a Pen-F color profile onto the EM1 II raw data. Workspace V1.5 and later versions opened up Color Profiles.
How should we edit Enhanced RAW Files? The first step is to activate your Camera Settings in Workspace. The camera's final Live View display will then appear on your computer. If you don't activate your Camera Settings in Workspace, you will only see the Sensor's RAW Data. See Tip 22 on my Workspace How-to Page.
Why RAW files and not JPEGs? For WorkSpace, the answer is simple: the editing space for jpeg and raw files is the same in WorkSpace. Considering only the available image data, you will find raw files carry more than double the data of a jpeg. These reasons alone should be enough to use raw files. The biggest reason, though, is the Enhanced Raw Format and Live View for Workspace. This changed everything for photographers and Olympus cameras...
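For a feel of the data gap, here is a rough back-of-the-envelope calculation assuming a 20MP sensor, 12-bit raw samples, and 8-bit JPEG output; real file sizes depend on compression.

```python
# Rough numbers behind "raw files hold more data", assuming a 20MP sensor,
# 12-bit raw samples, and 8-bit JPEG output. Actual on-disk sizes will differ
# because both formats are compressed.

MEGAPIXELS = 20.4e6          # assumed pixel count
RAW_BITS = 12                # tonal levels per photosite in a typical MFT raw
JPEG_BITS = 8                # tonal levels per channel in a JPEG

raw_levels = 2 ** RAW_BITS       # 4096 tonal steps before the tone curve
jpeg_levels = 2 ** JPEG_BITS     # 256 tonal steps after the tone curve

raw_mb = MEGAPIXELS * RAW_BITS / 8 / 1e6     # uncompressed raw payload, MB
print(f"Raw: {raw_levels} levels/photosite, ~{raw_mb:.0f} MB uncompressed")
print(f"JPEG: {jpeg_levels} levels/channel, typically a fraction of that on disk")
```

The extra tonal steps, not just the file size, are what give raw files their editing headroom in WorkSpace.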
Olympus Stylus SH50 Compact Camera - ISO125, f5.8, 1/200
The Live View mode allows us to simulate or test our camera settings in Workspace. Trying new camera settings is the biggest advantage of the Enhanced Raw Format and Workspace; building new color profiles is a good example. Workspace also makes it possible to fine-tune your camera settings at home. This is an advantage Olympus photographers shouldn't ignore...
Should we calibrate our cameras and PCs? It's possible to select an sRGB or Adobe RGB color space in the camera, and the chosen color space is embedded in the image Exif data. Color calibration is a complex subject and warrants a separate article. To keep it simple, I have been using Adobe RGB for all my gear.
This short paragraph reminds photographers to use the same color space for all their equipment. I selected my embedded PC profile (Adobe RGB) for Workspace (see below). These basic steps synchronize the camera, computer, and WorkSpace. Some forum "experts" promote the idea of using the sRGB color space. My biggest concern is that sRGB is the smaller of the two color spaces...
What are the benefits of discussing this information? The advantage of using the same Colorspace on all your equipment is compatibility and the ability to improve your Color Awareness Skills in the comfort of your home. This enables Olympus photographers to grow their creative ART photography skills by editing and practicing their Creative Color camera adjustments in Workspace.
The more you use the WorkSpace Live View mode, the easier it is to apply this experience in the field with your Olympus camera. Live View and WorkSpace were the two most significant developments in the modern history of Olympus digital cameras...
The Olympus histogram:- The Olympus histogram is as much a part of the Olympus Live View functionality as the image sensor's raw data in Live View; the same data-collection principles apply to both. You can only benefit from practicing with the different features of Olympus cameras at every opportunity. For example, what is the function of the green add-ons in the Olympus histogram? How do they help us?
It is critical to study and master the exposure techniques discussed in this article. They will help you improve your image sensor's performance and your exposure settings for creative photography and image quality. In particular, master your shutter speed and aperture versus the ISO function.
Final Comments:-
What would an Olympus workflow look like? One would typically convert the Enhanced Raw File in Workspace and post-process (edit) the 16-bit Tiff file in Photoshop. Photoshop post-processing includes the Adobe Raw Converter as a layered Smart Object with access to LR features...
Olympus EP-7 w 17mm f2.8 - ISO200, f5.6, 1/500 - Enhanced Raw file, Gradation High, Color Graded and converted in WS and edited in PS.
The above image is an example of using the computational features of Olympus for ETTR, protecting highlights, and improving the shadow SNR and tonal data. See this article.
Is OM-System a concern or a hope for the future? I bought my Olympus EM1 III from OM-System in 2021, and my inbox turned into a junk box. The OM-1 has a new menu because they couldn't manage the pressure from promoters (product reviewers). PCRAW Mode segregated the OM-1 from the rest of the Olympus Pro cameras. Are these decisions and the OM-5 simply inconsistent decision-making or part of a future product strategy? Does a photography DNA mean anything? For example, even my old Olympus Stylus XZ-2 works with Workspace and the Enhanced Raw Format.
I haven't used my Fuji XT-5 much because I am satisfied with the Olympus Pen-F, EP-7, and EM1 III. I even considered selling the XT-5 but decided to keep it until I make a final decision...
Why would competitors benefit from having promoters and a new OM-1 menu UI?
When you think about it, Olympus enabled photographers to "edit" the captured raw data before it reaches the TruePic image processor. In other words, we are dynamically altering the "sensor raw data" before we release the shutter. This is the purest form of digital photography...