The features of computational photography raise many photographers’ hackles, but there are plenty of examples of these tools helping photographers bring their creative vision to life while making their lives easier. That sounds exciting to me, not scary.
It’s fair to say that there is often a deep distrust of new technology in photography, which is something of a paradox. Digital photography is a natural progression from film photography, yet at some point an engineer must have introduced the idea of pixels instead of film and probably come up against a fair amount of pushback. As an example of this distrust, look at the way we talk about image editing: we question whether a frame has been ‘Photoshopped’ as if it’s a bad thing, when in reality pretty much every photo seen online has been enhanced to some degree, whether through adjustments in raw conversion software such as Lightroom or simply adding a filter on Instagram.
The latest technology to cause division among photographers is computational photography. This fairly broad umbrella term covers many tasks that use computer-based processing to benefit your photography.
At one end of the spectrum sits in-camera noise reduction, which sees the camera working to battle the digital noise that can result from high ISO values or long exposures.
At the other end, you have features like high-resolution mode, which typically works by firing a burst of images and merging them into a single frame that boasts a much higher resolution than the sensor can capture in a single file. This mode can work with or without the camera being secured to a tripod and is particularly popular with photographers using the Micro Four Thirds system, where resolution is limited – currently the highest-resolution MFT cameras (the Panasonic G9 II and GH7) offer 25 megapixels. While high-resolution mode has many benefits, including the ability to create larger prints, there are drawbacks: it suits static subjects such as landscapes and architecture, but sports or wildlife photography will be difficult, because any subject that moves mid-burst ends up blurred or ghosted in the merged frame.
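To picture how a burst of frames can beat the sensor’s native pixel count, here is a minimal Python sketch of an idealised pixel-shift merge, where each capture is offset by half a pixel and dropped onto a double-resolution grid. To be clear, this is my own illustration, not any manufacturer’s actual pipeline – real cameras also handle demosaicing, alignment and motion rejection, and the function name is invented for the example.

```python
import numpy as np

def merge_pixel_shift(frames):
    """Merge four half-pixel-shifted captures onto a 2x resolution grid.

    `frames` maps a (row, col) offset in {0, 1} to an HxW capture.
    """
    h, w = frames[(0, 0)].shape
    out = np.zeros((h * 2, w * 2), dtype=np.float64)
    for (dy, dx), frame in frames.items():
        out[dy::2, dx::2] = frame  # each shifted capture fills its own sub-grid
    return out

# Toy usage: four "captures" of the same gradient scene.
scene = np.linspace(0.0, 1.0, 16).reshape(4, 4)
burst = {(dy, dx): scene for dy in (0, 1) for dx in (0, 1)}
print(merge_pixel_shift(burst).shape)  # (8, 8): double the resolution on each axis
```

The sketch also makes the drawback obvious: the four sub-grids are captured at different moments, so anything that moves between frames lands in the wrong place in the merged image.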
These days, though, computational features are perhaps best associated with functions such as built-in digital neutral density (ND) filters. OM System’s Live ND is particularly mind-blowing, because not only does the feature let you artificially extend the exposure time to capture long exposures in bright conditions, where this would normally be impossible, but you can also watch the effect come together and take shape before your eyes on the LCD, giving you far more control over getting that streaky sky or flowing waterfall looking exactly how you intended. Recently, the focus on computational features has turned to a new function: the Live Graduated ND Filter mode found on the flagship OM System OM-1 Mark II. As the name suggests, it helps the photographer balance the exposure levels in a scene by adding a graduated ND filter effect to a selected area of the frame – all completely in-camera.
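Live ND features of this kind are generally understood to work by stacking a burst of short exposures rather than by physically cutting light. A minimal sketch of that idea follows – a running average that can be previewed after every frame, which is exactly why the effect can build up live on the LCD. The function name and toy data are my own, purely for illustration.

```python
import numpy as np

def live_nd_preview(frames):
    """Yield a running average of the burst; each yield is a longer 'exposure'."""
    running = None
    for i, frame in enumerate(frames, start=1):
        frame = frame.astype(np.float64)
        running = frame if running is None else running + (frame - running) / i
        yield running

# Toy usage: 16 noisy short exposures of the same scene.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 0.5)
burst = (scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(16))
for preview in live_nd_preview(burst):
    pass  # in-camera, this is the moment the LCD preview would refresh
print(np.round(preview, 2))  # noise averages away, just as moving water would blur
```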
Now, photographers invest a lot of time in selecting the right filter system for their photography, and a lot of hard-earned money buying specialist optical glass, so it’s no wonder there would be a little skepticism when a digital version came along. But having used the technology myself, I can see its potential. The ability to digitally balance exposure levels in the frame without carrying additional kit can be a huge win, not only for photographers keen to shed the weight of filters and filter holders, but also for those perhaps starting out in genres like landscape photography who don’t yet have the budget for often expensive filters.
Of course, technology is only worth talking about if it really works; otherwise it can be dismissed as a gimmick. Well, OM System has hit a home run with the Live Graduated ND Filter feature on the OM-1 Mark II. The fine-tuning extends to precisely positioning the graduated area of the frame (and even setting its angle), with options to control the strength of the filtration and how hard or soft the graduation appears.
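Those four controls map neatly onto a simple image operation: a per-pixel gain map. Below is a hedged sketch of what such a mask could look like – the parameter names are mine, and OM System’s actual in-camera implementation is not public.

```python
import numpy as np

def grad_nd_mask(h, w, center=0.5, angle_deg=0.0, stops=2.0, softness=0.2):
    """Return an HxW gain map: up to `stops` of darkening above a transition
    line positioned at `center` (0..1) and rotated by `angle_deg`."""
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    # Signed distance of each pixel from the graduation line.
    d = (ys / h - center) * np.cos(theta) + (xs / w - 0.5) * np.sin(theta)
    t = np.clip(0.5 - d / max(softness, 1e-6), 0.0, 1.0)  # 1 = full filter
    return 2.0 ** (-stops * t)  # each stop halves the light

# Darken the top of a frame by 3 stops with a soft, slightly tilted edge;
# multiplying an image by this mask applies the effect.
gain = grad_nd_mask(480, 640, center=0.4, angle_deg=5.0, stops=3.0, softness=0.3)
print(gain.min(), gain.max())  # ~0.125 (-3 EV) at the top, 1.0 at the bottom
```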
Of course, the technology has its limitations; very large differences in light levels between a blazing sky and a dark foreground can push artificial filtration to its limits, but it must be remembered that this is just the beginning. As we’ve seen from advances in areas like in-body image stabilization and subject-detection autofocus, these technologies start small and then improve very quickly.
While the technology may have advanced, computational features will need more than that to be truly successful. They will need acceptance and ‘buy-in’ from photographers who incorporate these features into their everyday workflow. That is when attitudes will change, and it is my opinion that just a few years from now, computational features will not only be accepted but expanded and advanced, with greater ability to fine-tune exposure levels or cut light transmission, artificially extending the exposure time to replicate the look of a 10 or 20-stop ND filter.
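The arithmetic behind that claim is simple: each ND stop halves the light, so an n-stop filter multiplies the required exposure time by 2 to the power of n. A quick worked example (the function is just my illustration of the formula):

```python
def nd_exposure(base_seconds, stops):
    """Each ND stop halves the light, so exposure time doubles per stop."""
    return base_seconds * 2 ** stops

print(nd_exposure(1 / 60, 10))  # ~17 s from a 1/60 s base exposure
print(nd_exposure(1 / 60, 20))  # ~17,476 s, roughly 4.9 hours
```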
Another huge factor that could not only help computational technology but also increase its acceptance as a regularly used part of our cameras is the rise of AI. Imagine the camera using AI to read a scene and automatically apply the right level of exposure correction to instantly balance the lighting in your frame. And that is only the beginning; it’s time to embrace these computational functions and put them to work for us, streamlining our photography, speeding up our shooting, and helping us capture even better photos.