
Nov 2, 2021

Is the "Size and Capture" theory too basic?

Last updated:- 29th February 2024

Introduction.

I support traditional science and marketing values. That means accurate and mathematically correct data takes priority over commercial preferences. I studied electrical engineering, worked as a project engineer, and held several marketing positions during my career. My marketing training started at a global manufacturer of industrial automation solutions and electrical equipment, and my interest in consumer behavior developed through product launches: the first digital calculator, the original XT PC, automation solutions, large project sales, and the photography segment.



Please study my ISO Low, ISO100, and ISO64 series. My focus in this discussion is luminance (reflected light). We will review the image signal path from the subject to the sensor because our ability to optimally capture reflected light (image signal) depends on the sensitivity (efficiency) of the image sensor and our ability to manage the digital image-taking process. This article illustrates why many photographers question those promoting the oversimplified "size and capture" theory...

Study this article discussing the 7 points each photographer should know...

What are the main technical differences between sensors? We know sensor sensitivity is the sum of the optical and quantum efficiencies of the image sensor, and pixel area (size) influences both. How visible is the impact of pixel area on quantum and optical efficiency? That is a fair question. For example, the Canon 6D and Olympus Pen F are both 20MP cameras, yet the pixel area of the 6D is roughly 248% larger than that of the Pen F. How much does this benefit the Canon's image quality, and what should one look for? One option is the DxOMark IQ database; the practical option is to study shadow details. We also know each image sensor has a native noise floor that influences our IQ. The pixel's effective photon-sensitive area also differs between BSI, Live MOS, and Standard CMOS sensors...
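If you want to check such figures yourself, here is a minimal sketch of how per-pixel area is usually estimated. The sensor dimensions and effective pixel counts below are approximations from published specs, so the exact percentage depends on the numbers you assume:

```python
def pixel_area_um2(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Average photosite area in square micrometres (ignores gaps and circuitry)."""
    return (width_mm * height_mm) / megapixels  # mm^2 per Mpx equals um^2 per pixel

# Approximate published specs (assumed values, check the datasheets):
canon_6d = pixel_area_um2(35.8, 23.9, 20.2)  # ~42 um^2
pen_f = pixel_area_um2(17.3, 13.0, 20.3)     # ~11 um^2
print(f"6D: {canon_6d:.1f} um^2, Pen F: {pen_f:.1f} um^2, ratio: {canon_6d / pen_f:.2f}x")
```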

The technical characteristics of image sensors are, therefore, unique. The design specs of each image sensor, NOT its physical size, determine its technical characteristics. These characteristics include the saturation level, dynamic range, noise floor, and sensitivity of each sensor.



What is the "size and capture" theory? The best place to learn more about this theory is the well-known "size and capture" authority DPReview. Their camera reviews repeatedly explain the benefits of large sensors capturing more light than crop sensors. The "size and capture" theory predominantly applies to sensors smaller than full-frame sensors. It does not equally apply to FF and MF cameras. These are the benefits you should expect from your new FF (large sensor) camera:

  • They capture more light...
  • Have better image quality...
  • Almost no image noise...
  • Much better low-light IQ...
  • DR with No highlight clipping...
  • Better Auto-focus & video...
  • The magical FF look...
  • Better background blur...
  • More and bigger bokeh...
  • The joy of perfect IQ...

Interestingly, "Size and capture" theorists never mention the benefits of saturating the image sensor or having a higher SNR. They only discuss "Shot Noise" and never the sensor's Noise Floor. For example, instead of explaining the benefits of saturating the sensor, they use "ISO Invariance" to discuss the benefits of using a higher SNR. ISO Invariance and sensor size are regarded as magical treats...

This article discusses the 4 essentials that will improve your image quality in 2024...

Do you think the "size and capture" theory is oversimplified..?


Sony improved the Quantum and Optical efficiencies (BSI architecture), lowered the noise floor (fewer pixels), and upped the readout speed.


Here are the points we are reviewing in this article:-

  1. A better way of doing photography
  2. Testing the Pen F and the A7S III
  3. A quick review of the test results
  4. A few additional thoughts
  5. Conclusion

1. A better way of doing digital photography


The following hearsay theories/trends are associated with the "Size and capture" fanboys:
  • Your ISO function adjusts the sensor's sensitivity
  • Never use ETTR at higher ISOs because the DR is less
  • You don't need a flash because FF cameras have no noise
  • They never use a tripod because new cameras have IBIS
  • They need high-resolution cameras because they CROP
  • They always argue while using the analog exposure triangle
  • They depend 110% on FF sensors, AI, and the perfect AF
  • Crop sensor lenses suffer from high levels of diffraction...
  • They always hope for something new to have more IQ

Take a moment and study the exposure formula...


Everything starts by mastering the image sensor (Fig 1) and exposure. We control 4 of the variables in the Exposure Formula. They are:
  • N - The aperture or f-stop
  • t - The Shutter speed
  • S - ISO setting (image brightness)
  • L - Avg. scene luminance (illumination or a flash)
Digital photographers use these 4 variables to expose (saturate) the image sensor or to create optical effects like background blur or bokeh. This is why experienced photographers have a flash or tripod. Some of the most creative photography is done with artificial lighting like LED panels or a flash.
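These four variables are tied together by the standard reflected-light exposure equation, N²/t = (L·S)/K. A minimal sketch, assuming the common meter calibration constant K ≈ 12.5:

```python
def shutter_time(N: float, L: float, S: float, K: float = 12.5) -> float:
    """Solve the reflected-light exposure equation N^2 / t = L * S / K for t.
    N: f-stop, L: average scene luminance (cd/m^2), S: ISO, K: meter constant."""
    return (N ** 2) * K / (L * S)

# Example: an average scene luminance of 400 cd/m^2 at f/4 and ISO 200
t = shutter_time(N=4.0, L=400.0, S=200.0)
print(f"t = 1/{round(1 / t)} s")  # ~1/400 s
```

Halve L (less reflected light) and the required shutter time doubles, which is exactly why a flash or tripod becomes part of the exposure plan.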




Reliable information digital photographers can trust. It all starts by walking away from "size and capture" fanboys. For example, focus on the following to master your image sensor (see the sketch after this list):
  • Pixel diameter influences sensitivity - fewer, larger pixels are more sensitive
  • Higher pixel sensitivity improves the sensor's ability to capture shadow details
  • Each image sensor has a unique noise floor (noise floor size and types - Fig 1)
  • More megapixels add noise to the noise floor (more pixel control circuitry)
  • There are two forms of noise: shot noise and the sensor's noise floor (Fig 1)
  • The sensor's sensitivity is fixed when the sensor is calibrated at the factory
  • High-sensitivity sensors mean less high-ISO noise (a low calibration multiplier)
  • High-sensitivity sensors typically have a higher saturation point and more DR
  • The old analog exposure triangle is not the best choice for digital cameras
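To make the shadow argument concrete, here is a toy SNR model. The read-noise and dark-current values are purely illustrative assumptions, not measurements from any specific sensor:

```python
import math

def snr_db(signal_e: float, read_noise_e: float = 3.0, dark_e: float = 0.5) -> float:
    """SNR at one pixel: photon shot noise plus a simplified noise floor.
    All values in electrons; the floor numbers are illustrative only."""
    shot = math.sqrt(signal_e)                 # shot noise grows as sqrt(signal)
    floor = math.hypot(read_noise_e, dark_e)   # simplified fixed noise floor
    return 20 * math.log10(signal_e / math.hypot(shot, floor))

for e in (50, 500, 5_000, 25_000):  # deep shadow -> near saturation
    print(f"{e:>6} e-: {snr_db(e):5.1f} dB")
```

The fixed noise floor dominates in the shadows, which is why shadow detail is where sensitivity differences become visible.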

To those saying we don't need knowledge or a flash: see this video on flash photography - link.


Olympus E-P7 with the FL-300R flash and 17mm f1.8 lens - ISO200, f4.0, 1/50th.


A quick way of improving your family photos is to consider the subject, the illumination (flash), and the luminance reaching the sensor. The secret is more luminance on the sensor and a flash to illuminate and freeze the subject. In fact, keep an Olympus FL-300R flash or the standard OMD clip-on flash in your bag. Try the following camera settings at your next family event: Manual exposure mode with a shutter speed of 1/125th and an aperture of f4.0. Use ISO500 and set the exposure compensation for your flash between 0 and -0.7EV. Your flash is an exposure variable that illuminates the subject. You don't need the best flash or tripod; a bean bag and the clip-on flash or FL-300R are enough.
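To sanity-check how far such a clip-on flash reaches at these settings, the classic guide-number relation helps. The guide number below (about 14 m at ISO 100) is an assumption; check your flash manual for the real value:

```python
import math

def max_flash_distance(gn_iso100_m: float, f_number: float, iso: float) -> float:
    """Full-power flash reach in metres: distance = GN * sqrt(ISO/100) / f-number."""
    return gn_iso100_m * math.sqrt(iso / 100.0) / f_number

# Assumed GN of ~14 m at ISO 100, with the suggested f/4 and ISO 500
print(f"~{max_flash_distance(14.0, 4.0, 500.0):.1f} m")  # roughly 7.8 m
```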

The focus on reflected light is a different way of planning and doing digital photography. This is how photographers captured photos before the days of AI and Photoshop. Thinking about it, the only new things are digital photography itself and learning how to optimize your image sensor.

See this article discussing the 7 points each digital photographer should know...


Figure 1. This is the most critical illustration photographers should study to master image sensors.


2. Why test extremes like the Sony A7S III and Olympus Pen F?


Because the difference between these 2 sensors is BIG. What happens when we underexpose the shadows while correctly exposing the mid-tones to highlights? Will "size and capture" fanboys claim it's all about DR, sensor size, and smaller sensors capturing less light, or are there technical reasons why scientists invest their time and energy in designing more sensitive sensors? Are image sensors as basic as "size," or is there a technical explanation for sensor performance?


My challenge was to underexpose part of the subject and manage the performance of the Image Sensor.

I had the opportunity to test my own advice. My son was so kind as to lend me his A7S III for one day. What simple test could I do to push these two image sensors in one day? I decided to use my Pen F with the A7S III. I wanted to see if the SNR of each sensor changes when part of the subject is in the shadows. Will this prove that all image sensors have a noise floor and the SNR is lower in the shadows? The next step was to create a semi-controlled space to record these images.

A quick reminder:- Your ISO setting does not create noise. The ISO setting amplifies the image signal and the existing noise floor of the image sensor. The sensitivity of your image sensor and the SNR at each exposure will determine how much visible noise you see in your final image.
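As a toy model of this reminder, assuming a simple linear gain applied after the charge is collected, the sketch below shows why amplification cannot change the SNR that was fixed at exposure time:

```python
import math

def snr_after_gain(signal_e: float, floor_e: float, iso_gain: float) -> float:
    """ISO gain multiplies the signal, the shot noise and the noise floor alike,
    so the ratio set at exposure time is unchanged (idealised model)."""
    noise = math.hypot(math.sqrt(signal_e), floor_e)
    return (iso_gain * signal_e) / (iso_gain * noise)

for gain in (1, 4, 16):  # e.g. base ISO, +2EV, +4EV of amplification
    print(f"gain {gain:>2}x: SNR = {snr_after_gain(1_000, 3.0, gain):.1f}")  # identical
```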

That said, I wanted to test whether my thinking is correct, or whether I should repent and forever accept the "Size and Capture" theory and focus on that ONE variable, SIZE..?




3. Can we explain these results?


I have no doubt that the Sony A7S III is a fantastic camera. My son uses the Sony A7S III, his Sony A1, and RED video cameras professionally. His customers are happy with his work. My own experience with the A7S III is only positive. The Sony A7S III is a unique camera aimed at videographers.

The same is true for the Olympus Pen F. Against all odds, it has a loyal following, and many new creative enthusiasts are discovering this unique camera in 2023/24. Does that mean we should compete with the newest and most popular cameras? I really do not see any value in that...

The reason for this test is NOT to decide which camera is better, or to claim my M43 sensor is super awesome. Each image was taken in a semi-controlled space. I upped the brightness so you can study the shadows. Olympus said the differences between M43 and FF cameras are tiny. Will we see that in this test?

The change in shadow detail between 0EV and +1EV demonstrates the changing sensor performance (saturation) between the two exposures. This level of control is only possible if you know your digital camera and how the performance of the camera/sensor works. (It is not only ETTR.)

This exercise was exciting. The SNR response is different for each camera, and the saturated and unsaturated parts of the sensor determine the final image look. "Size and capture" fanboys cannot explain these performance differences between the shadows and well-exposed areas. 


Normally (0EV) Exposed Images


The +1EV (Exposure) Images


4. A few more thoughts on the above test images

The two images below are the fully edited raw versions of the above +1EV images. The Sony A7S III has more shadow details, and it took more effort to recover the Pen F shadow details. The reason for this is the sensitivity differences between the A7S III BSI sensor and the Live MOS sensor of the Pen F. Another reason is my Pen F recorded less tonal data in the shadows (See the histogram). I purposely left these final images slightly "flat" so you can study the "recovered" shadow details. 

Here are a few final thoughts about these images:-

  • Sensor technologies - LiveMOS versus BSI (Both CMOS but different architectures)
  • Technical differences - 2016 to 2020 (Much development happened in these 4 years)
  • Sensor Sensitivity - Sensor evolution focuses on Quantum and Optical Efficiencies...
  • Sensor sensitivity - Sony selected a super high-sensitivity BSI sensor for the A7S III.
  • Pixel Size - It makes a difference, and the pixel-area delta is at its largest in this example
  • Sensor Noise Floor - The A7S III sensor benefits from having a smaller noise floor
  • Sensor Noise Floor - The BSI sensor plus four years of R&D improved the noise floor and efficiency

See the video I did for the OM-1 - link.

I asked the following question in a previous article. Does the sensor backplate record photons? We see the size of the backplate stayed the same while the pixel count increased with each new camera. Doesn't that mean the "size and capture" theory is hopelessly oversimplified? 

Have you ever wondered if pixel diameter is one of many variables impacting IQ?
Then what are the other variables, and shouldn't we consider them?

The image below is best viewed on an iMac or large PC screen.

Don't you think the 17mm f1.8 three-dimensional "M43 look" from my Pen F is awesome? 😉

Conclusion


Why do I think "size and capture" promoters are not serious? Simply study their articles, reviews, and comments. For example, why use Pixel Pitch when referring to the Pixel Area? Pixel pitch is generally used with LED monitors. Why call BSI sensors "stacked CMOS sensors"? Why let people think Standard CMOS sensors have a stacked or layered design? Why not call it a Stacked BSI..?

Why argue about the placement of components like A/D converters or ISO "sensitivity" when photographers benefit more from learning the correct function of these components? Why do fanboys and promoters always fall back on unnecessary, oversimplified theories..?

Best Regards

Siegfried

Oct 4, 2021

ISO Low, L100, L64, and Flash Photography - Part 1

Last update:- 16th January 2023

While working on Part 2 of this article on ISO and Image Quality, I thought it was a good idea to set the stage with a few random thoughts and a basic challenge. You are welcome to add your own thoughts in the comment section or the forum at Rob Trek's photography. Thinking about it, every photographer should develop the ability to analyze digital images. A good understanding of the digital camera and the ability to apply this knowledge benefits all digital photographers...


Taken at a constant luminance perspective and a variable image signal amplification

Taken at a constant image signal amplification (ISO3200)

You are welcome to try the following challenge. Place an A4-sized white paper against the wall and put your camera on a tripod. The challenge is to recreate the above 2 illustrations. The info needed to create a basic plan, take the images, and build the final illustrations is all in this article.


Olympus Pen F with 25mm f1.4 Leica, ISO80(Low), f3.5, 1/1600 - Edited in DxO PL-4 (See more info further down...)

Here are a few general questions for you:-

  • Prep a short explanation of what happens inside the camera for each illustration
  • Think of a few examples and list the benefits of knowing your digital camera...
  • Why do you think it's safe, or not safe, to use the ISO Low, L100, or L64 options?
  • Most social media experts tell us it's not OK to use ISO Low, L100, or L64. Why?
  • Which of the 5 images in each of the above illustrations are 18% gray samples?
  • What is the link between the Zone system, 18% gray exposure, and the ISO setting?
  • Study the photons/electrons graph below. Does it apply to all or only some sensors?

For more on how to plan your own strategy, study these articles:
  • Start from basics and learn how to record more image data - link
  • A better way to control the camera is the 2 Step Exposure Technique
  • Why is sensor sensitivity so important? - article (Important info)


A few general thoughts...


One reason photographers should distrust sensor-size references is that image noise is normal for all digital cameras. What determines this image noise? Most photographers are never told that all sensors come with a native noise floor. Should we trust reviewers who promote sensor size or write biased camera reviews? This is likely the main reason we don't see discussions about advanced digital photography techniques, like how to use ISO amplification correctly or how to manage the performance of the Image Sensor. (See this link)

For example, why was the old-school Exposure Triangle never improved, especially while it is still used to train photographers in digital photography? How will they ever master advanced digital camera skills like SNR, sensor saturation, or image signal amplification with an outdated triangle?

Is size a reasonable measure of IQ? We know pixel area (size) is one of many variables that impact the Optical Efficiency of the image sensor. So why focus on only one of many variables? Well, looking for answers is like finding a needle in a haystack. A more reliable way of rating image sensors seems to be Sensor Sensitivity (Optical and Quantum Efficiency).




To illustrate the oversimplification of the "size and capture" theory, study the illustration below. It offers more information about the image sensor, the noise elements in the sensor noise floor, and the effective dynamic range of the sensor. Unlike the "size and capture" theory, which cannot explain shadow noise, those who master the principles illustrated below will have a strong theoretical foundation. They will improve their analytical and sensor performance skills.

For example, take a moment and consider the graph below. The horizontal axis is the reflected light or photons hitting the sensor. The vertical axis represents the converted electrons. The sensor's full saturation capacity is reached with a fully exposed sensor. Plot the saturation for shadows or low-light scenes. How does this impact the performance of the image sensor? What happens to the SNR in the shadows? What does the histogram look like for an under-exposed sensor? These are simple questions every digital photographer should be able to answer...
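Here is a minimal sketch of such a photon-transfer response. The quantum efficiency and full-well capacity are illustrative values, not the specs of any particular sensor:

```python
def electrons(photons: float, qe: float = 0.6, full_well: float = 25_000) -> float:
    """Idealised photon-transfer curve: linear conversion at quantum efficiency
    `qe`, clipping once the full-well (saturation) capacity is reached."""
    return min(photons * qe, full_well)

for p in (1_000, 10_000, 40_000, 80_000):  # shadows -> beyond saturation
    print(f"{p:>6} photons -> {electrons(p):>7,.0f} e-")
```

Everything above the clipping point is lost highlight data, and everything near the origin sits close to the noise floor. Those are exactly the two regions the questions above ask you to reason about.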




Does the size of the sensor backplate "capture" photons? The answer is NO! We know pixels capture photons, and pixels (photocells) convert photons into electrons. This is the main reason why scientists improve pixel (photocell) sensitivity rather than design bigger sensors. That said, the size of the sensor does play a role. Any idea where? Think of optical effects like background blur.
 
Olympus photographers are familiar with 12MP and 20MP (MFT) sensors. The pixel area of 12MP sensors is roughly 1.6 times that of 20MP sensors. We know the EM1 III has one of the most sensitive M43 sensors and delivers far superior IQ to any of the older 12MP MFT sensors. Ever wondered why? Could one reason be that sensors with lower Temporal Noise produce cleaner images?

Study DxO Mark results for the EM1 II sensor.


The more we learn, the more we see what happens with image quality...


Another illustration with info on how to manage the sensor at ISO3200.


Let's talk about the physical size of mirrorless cameras. The size of the image sensor influences the physical size of the camera because the lens image circle needs to cover the full sensor. This impacts the size of the lenses, the camera's energy needs, heat management, and the effectiveness of features like IBIS. Digital cameras are basically built around the image sensor. The penalty for cutting corners is overheating, lower efficiencies, and less reliable cameras and lenses.

Apart from fixed mechanical design criteria, scientists focus on materials and the electrical design aspects of creating more sensitive image sensors. This represents a better way of designing new cameras and improving Sensor Sensitivity. For example, typical improvements in image sensors include replacing older hard-wired functions with modern software or AI solutions...

As you know, Olympus and Panasonic were the first to introduce mirrorless cameras. Did they also establish the mechanical design benchmark for mirrorless cameras? For example, what is the built-in safety margin on M43 cameras? When you see similarly sized APS-C or FF cameras, does it mean the M43 camera is over-designed, or are these APS-C and FF cameras under-designed?


How much image noise is added to the noise floor for each 1-degree increase in temperature..?

Try this quick experiment: point a light source at your PC screen. Which of these sensors is receiving more light?

If someone says one sensor captures more light than the other, I cannot help but wonder whether the statement is theoretically correct. I was searching for information when I saw this review. I could not help asking, is this just another Undisclosed Promotion? What if the "more light" benefit was only 0.0002% while those bigger sensors were 10% less efficient? One would like to think it's all about the efficiency of the sensor when converting photons into electrons, right?

See this discussion. It's a great example of why photographers should push manufacturers for better information. Also, do a quick search on the implications of "Undisclosed Promotions"...



Final comments on the two images in this article


Take a look at the 1st image in this article. I set the exposure for the bright areas (sky) because I wanted the sky correct and the shadows darker. At home, I did a quick test to study the visible shadow noise when I increased the shadow brightness. Editing the raw file in PhotoLab 4, it was possible to extract cleaner image details from those same shadows. Does that mean the image had enough available information in the shadows, or is it only PhotoLab doing a great job?

The above example shows the jpeg on the left and the edited raw version on the right. The image was exposed for the shadows, which over-saturated the sensor in the bright areas. It pushed the highlights hard without clipping them. I tried different editing techniques to get the most from this "data-rich" raw file. The most pleasing result was editing the raw file with Aurora into an HDR image. Did I push the image sensor too hard, or is it OK when we push the image sensor?

The selected images demonstrate the different technical aspects discussed in this article, and they show it's safe to work with ISO Low on your Olympus Pen F. The same is true for ALL cameras. Don't we benefit more from working with a fully saturated sensor and resetting the final image "brightness" in Workspace? Why is there a link between the camera (Live View) and Workspace? Why promote sensor size and then push restrictions like "don't use the extended ISOs on your M43 camera"..?

More about Managing your Image Sensor and ISO Amplification in Part 2...


Finally, what's better, exposing creatively, or saturating the sensor?

Aug 21, 2021

The Enhanced Raw Format and Live View

Last Updated:- 31st May 2023
  
We are studying the history and growth of Olympus Live View. It all started with the Olympus E-330 in 2006 and the E-3 in 2007. The E-3 was the first Pro DSLR with a fully articulating Live View display. The focus was functionality and the ability to compose an image while viewing the sensor's live data. The E-3 was also the first DSLR to display the sensor's RAW data and update the display as the photographer adjusted settings like the WB, ISO, auto and manual focusing, and exposure. The photographer could also monitor the camera's IBIS operation on the Live View display.

This was the start of the Olympus Live View function. The current Live View and Workspace (raw converter) combination has advanced to a level one would think is absolutely normal. Interestingly, other manufacturers don't offer a similar solution, except for Fuji's X RAW Studio. We are reviewing the Enhanced Raw Format and the integration of Olympus cameras with Workspace.

What does this mean? We can now replicate the sensor's raw data, the camera's final Live View display, and our camera settings in Workspace.

I wrote a new article discussing different options to create profiles in January 2024.

Also, see the 2nd article I wrote about the Enhanced Raw Format.

Also, see these articles:

- How I convert my Enhanced Raw Files - link
- Olympus Color and Creative Photography - link
- See this article for details on how Live View works - link
- How to use the Olympus Color Creator and Workspace - link



1. Introduction


What would photographers typically expect from the camera's display:-
  • High-resolution LCD or OLED screens with 1M-dot or higher resolution
  • Visibility and functionality are critical aspects for most photographers
  • Fully Articulating 3" or larger touchscreen displays for video applications
  • Bright displays with good viewing and controls, similar to mobile phones
  • Large magnification EVFs (2.3M-dot +, and 120fps) with no black-outs
  • The new Fuji XT-5 display is one of the best photography formats in 2023
  • The eyepoint on the EVF is important, especially for those wearing glasses
  • The Super Control Panel (SCP) on Olympus cameras is a great solution
  • The existing Olympus menu is great and easy to use for M43 photographers
  • Backward operational compatibility is a strength of the EM1 III UI & menu
  • The ability to recreate the camera's Live View display in the raw converter
  • The ability to develop and practice camera color profiles at home (software)

The EM1 III is the final Pro-level camera from Olympus with the familiar UI and menu. This menu system was developed and improved over many years. The best advantage of the EM1 III is its backward compatibility with older cameras. For example, I recently bought a 10-year-old Olympus EM1 MKI and had no problem applying my preferred Olympus configuration to the older EM1.



The above image illustrates the conversion process of the Enhanced Raw File. It starts with adding the final Live View data and camera settings to the Enhanced Raw File. After uploading to our PC, we open the Enhanced Raw File in Workspace. Only the sensor's raw data will be visible when we open the raw file. The next step is to "activate" the camera settings to enable the camera's final Live View display. We then adjust our camera settings in Workspace, where we can also apply more advanced editing. The final converted raw file is exported (16-bit Tiff) to PS...

Tip:- Study the Live View Boost function from Olympus in the Users Manual. 


Olympus E-3 with an Articulating Display (Competing with the Canon 40D and Nikon D300).

Olympus continued to develop the Live View function and the compatibility between the camera and the previous Olympus raw converter, Viewer 3. The next step was the Creative Color concept. The Creative Color concept from Olympus consists of functions like B&W filters, Color Profiles (Pen F, EP-7), Color Filters, Adjust Color, and the Color Creator.

I discussed the Live View function in some of my other articles. My search for information on Live View and the Histogram started in 2019. For example, I found more data about Live View in my older E30 documentation. Older News Releases from Olympus and User Manuals are a treasure trove of "unfiltered" Olympus information on their cameras, lenses, and software...


Please study this terminology, as I use it throughout this article.


2. Live View and Olympus Cameras


Olympus photographers need to answer this: do you think Live View or the raw converter (Viewer 3 & Workspace) were only random thoughts? Olympus introduced Live View in 2006, and the Olympus Imaging Division's marketing team never re-launched or advertised any improvements. They looked like the worst marketing team in the industry. The enormous progress by the Imaging Engineering team is only visible when you study the new Workspace from Olympus.

For example, have you ever asked yourself why they called it WorkSpace and Live View?




Any camera's Live View display should mirror the image sensor's response to camera adjustments and the reflected light reaching the sensor. This concept was part of Olympus's design criteria from day one. Combining the sensor's raw data with the functionality of Workspace was the next logical step for the Olympus Imaging engineering team...



But all cameras have Live View? Yes, and it's possible to evaluate the design criteria of all mirrorless cameras by reviewing the photography landscape promoted by camera reviewers and what photographers (promoters) supposedly want from a camera and its Live View display.

Studying Olympus, we see the following:-
  • A live connection between the image sensor and the Live View display
  • The histogram with the same direct link to the sensor raw or image data
  • The ability to monitor the raw or image data while adjusting the camera
  • The ability to evaluate camera adjustments before capturing the image
  • Selecting and changing any color or creative adjustments in Live View
  • The ability to have an Enhanced Raw File with ALL the camera settings
  • Compatibility between the Live View data and supplier Editing Software
  • The ability to accurately apply & monitor exposure techniques like ETTR
  • The ability to edit the camera settings or practice with them in Workspace

This basic Live View flow diagram matured with M43 Olympus cameras.

How to enhance your raw files in Live View? Your camera's Live View display or EVF replicates the sensor's Luminance Perspective. The only difference between the sensor's raw data perspective and the camera's Live View image is a layered "Display Profile" placed onto the raw data. Olympus created another layer to add user profile settings (Creative Data) to the sensor's raw data. This is how the Enhanced Raw Format enabled Workspace to access the camera's layered Enhanced Raw data. In other words, we can now simulate the camera's final Live View display in Workspace. It also allows us to experiment with many camera settings or profiles in Workspace.



Regular Raw Converters are different because they access the sensor's Raw Data Layer. Traditional editors like PhotoShop, Lightroom, or PhotoLab cannot access or process the Enhanced Raw Data from Olympus cameras. It does not mean they are not good. WorkSpace has full access to the sensor's Raw Data and the user's Creative Layer via the Enhanced Raw Format. OM-System uses the same "Advanced Raw Format" terminology on its official website and press releases.


Traditional Raw File = Sensor Raw Data

Live View Image = Sensor Raw Data + Display Profile

Enhanced Raw File = Sensor Raw Data + Camera Creative Layers
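One way to picture this layering is a small data model. This is a conceptual sketch only; the field names are hypothetical and do not describe the actual ORF container structure:

```python
from dataclasses import dataclass, field

@dataclass
class EnhancedRawFile:
    """Conceptual model of the layering described above (hypothetical fields)."""
    sensor_raw_data: bytes                                # untouched photosite values
    display_profile: dict = field(default_factory=dict)   # Live View rendering layer
    creative_layers: dict = field(default_factory=dict)   # Picture Mode, Color Creator, ...

    def live_view_image(self) -> dict:
        # Live View Image = Sensor Raw Data + Display Profile
        return {"raw": self.sensor_raw_data, **self.display_profile}

    def workspace_render(self) -> dict:
        # Workspace "activates" the creative layers on top of the Live View image
        return {**self.live_view_image(), **self.creative_layers}
```

A traditional converter only ever reads sensor_raw_data; Workspace can also read and re-apply the other two layers, which is the whole point of the format.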


This is BIG news because the Enhanced Raw Format enables us to test different camera settings while Workspace simulates the camera's Live View display. This process also improves our experience of testing and developing new camera profiles in Workspace. A good example is the Color Creator from Olympus. It is difficult to familiarize yourself with this function on the camera display.




The above illustrations demonstrate the Enhanced Raw Format and Live View in Workspace. They also show how to activate your camera settings in Workspace. Camera settings that are not clearly marked, like Picture Mode, can be found in the Exif data. For example, the Color Creator...

Older WorkSpace versions could only replicate the Creative Color settings of specific camera models. The anomaly was the EM1 II. It was possible to overlay a Pen-F color profile onto the EM1 II raw data. Workspace V1.5 and later versions opened Color Profiles. 




How should we edit Enhanced RAW Files? The first step is to Activate your Camera Settings in Workspace. The camera's final Live View display will be displayed on your computer. You will only see the Sensor's RAW Data if you don't activate your Camera Settings in Workspace. See Tip 22 on my Workspace How-to-Page.

Why RAW files and not JPEGs? With WorkSpace, the reason is simple: the editing space for jpeg and raw files is the same. Considering only the available image data, raw files hold more than double the data of jpegs. These reasons should be enough to use raw files. The biggest reason is the Enhanced Raw Format and Live View for Workspace. This changed everything for photographers and Olympus cameras...


Olympus Stylus SH50 Compact Camera - ISO125, f5.8, 1/200

The Live View display allows us to simulate or test our camera settings in Workspace. Trying new camera settings is the best advantage of the Enhanced Raw Format and Workspace. A good example is building new color profiles. Workspace also makes it possible to fine-tune your camera settings. This is an advantage Olympus photographers shouldn't ignore...

Should we calibrate our cameras and PCs? It's possible to select an sRGB or Adobe RGB color space on the camera. The color space is embedded in the image Exif data. Color calibration is a complex subject and warrants a separate article. To keep it simple, I have been using Adobe RGB for all my gear.

This short paragraph reminds photographers to use the same color space for all their equipment. I selected my embedded PC profile (Adobe RGB) for Workspace (see below). These basic steps synchronize the camera, computer, and WorkSpace. Some forum "experts" promote the idea of using the sRGB color space. My biggest concern is that sRGB is the smaller color space...




What are the benefits of discussing this information? The advantage of using the same Colorspace on all your equipment is compatibility and the ability to improve your Color Awareness Skills in the comfort of your home. This enables Olympus photographers to grow their creative ART photography skills by editing and practicing their Creative Color camera adjustments in Workspace.

The more you use the WorkSpace Live View mode, the easier it is to apply this experience in the field with your Olympus camera. Live View and WorkSpace were the two most significant developments in the modern history of Olympus digital cameras...



The Olympus histogram:- The Olympus Histogram is as much a part of the Olympus Live View functionality as the image sensor raw data in Live View. The same principles of collecting data apply to the histogram and Live View. You can only benefit from practicing at every opportunity with the different features of Olympus cameras. For example, what is the function of the green add-ons in the Olympus histogram? How do they help us?




It is critical to study and master the Exposure Techniques discussed in this article. This will help you improve your image sensor's performance and exposure settings for creative photography and image quality. It's critical to master your shutter speed and aperture versus the ISO function.



Final Comments:-


What would an Olympus workflow look like? One would typically convert the Enhanced Raw File in Workspace and post-process (edit) the 16-bit Tiff file in Photoshop. Photoshop post-processing includes the Adobe Raw Converter as a layered Smart Object with access to LR features...


Olympus EP-7 w 17mm f2.8 - ISO200, f5.6, 1/500 - Enhanced Raw file, Gradation High, Color Graded and converted in WS and edited in PS.

The above image is an example of using the computational features of Olympus for ETTR, protecting highlights, and improving the shadow SNR and tonal data. See this article.

Is OM-System a concern or a hope for the future? I bought my Olympus EM1 III from OM-System in 2021, and my Inbox turned into a junk box. The OM-1 has a new menu because they couldn't manage the pressure from promoters (product reviewers). PCRAW Mode segregated the OM-1 from the rest of the Olympus Pro cameras. Are these decisions and the OM-5 simply inconsistent decision-making or part of a future product strategy? Does a Photography DNA mean anything? For example, even my old Olympus Stylus XZ-2 works with Workspace and the Enhanced Raw Format.

I haven't used my Fuji XT-5 much because I am satisfied with the Olympus Pen-F, EP-7, and EM1 III. I even considered selling the XT-5 but decided to keep it until I make a final decision...

Why would competitors benefit from having promoters and a new OM-1 menu UI?



When you think about it, Olympus enabled photographers to "edit" the captured raw data before reaching the TruePic Image Processor. In other words, we are dynamically altering the "sensor raw data" before we release the shutter. This is the purest form of digital photography... 

VideoPic Blog Comments

Please add any comments to this article here.
