The next revolutions in photography?

Artificial Intelligence Versus Processing Power

Some recent developments in camera design have made me wonder whether photography is due for another revolution. First, there could be a major advance in the field of artificial intelligence, or AI. Such an advance could make in-camera processors far more efficient, and that efficiency could, at least in theory, make large cameras such as SLRs or mirrorless bodies obsolete. Faster, more efficient in-camera processing wouldn't just mean higher frame rates, better video bokeh, and better JPEG rendering. It would also increase the number of images that can be taken in a given time, which means that stacking algorithms built into a camera could process more frames per second, and tasks such as focus stacking or HDR would become much easier to achieve. Once a camera company focuses its effort on such features, small devices such as cell phones and compact cameras might be able to produce image quality at the level of the larger sensors found in mirrorless and SLR cameras. Stacking is usually only useful for scenes in which nothing moves, but in theory it could work on moving subjects if the required sequence of photos is captured quickly enough.
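To make the stacking idea a little more concrete, here is a minimal sketch (in Python, using OpenCV and NumPy) of the kind of naive focus stack an in-camera processor might run; the filenames are hypothetical and the frames are assumed to be already aligned. For each pixel, the frame with the strongest local detail (largest Laplacian response) wins.

import glob
import cv2
import numpy as np

# Load a hypothetical burst of focus-bracketed frames, all the same size.
frames = [cv2.imread(p) for p in sorted(glob.glob("focus_*.jpg"))]
grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]

# Sharpness map per frame: absolute Laplacian response.
sharpness = np.stack([np.abs(cv2.Laplacian(g, cv2.CV_64F)) for g in grays])
best = np.argmax(sharpness, axis=0)  # index of the sharpest frame at each pixel

stack = np.stack(frames)  # shape (N, H, W, 3)
h, w = best.shape
result = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
cv2.imwrite("focus_stacked.jpg", result)

A real in-camera implementation would add alignment and blending, but even this crude version shows why faster processors matter: every extra frame per second is another slice of the sharpness map.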

New Sensor Technology

Shortly before the time of this writing, I received interesting news from Panasonic. The company has decided to work on what’s known as an organic sensor, which is supposed to become available in new cameras sometime around 2021, perhaps a little later. Such a sensor can supposedly support a global shutter, which means that, especially in video mode, fast-moving subjects such as fan blades and vertical lines panning across the frame won’t become distorted; that distortion is the major downfall of a rolling shutter. The new sensor is also supposed to offer an electronic ND filter whose strength can be dialed in across an essentially infinite number of steps. It’s likely that such sensors will make their way into Fujifilm cameras as well. There has also long been the idea of curved sensors. The point of a curved sensor is to let the light entering a lens fall on the sensor without distortion, but so far the idea only works with a lens of a single focal length, also known as a prime lens. In addition to new sensors, there has also been the idea of getting rid of the lens altogether in what’s known as a lensless camera, about which I know very little. In my opinion, the lensless camera is probably the most intriguing of all of these inventions, matched, or at least closely matched, by the idea of an organic sensor. Even if all of these ideas become real in the near future, user experience is still likely to be a major factor in how well a photographer does. The photographer is already a major influence on how well the photographic process turns out, generally more important than the gear used, but perhaps emerging technologies might be able to tip the process back in favor of technology?

 


Can we recreate the look of film using digital photography?

Film photography is still a fairly interesting topic, even though digital photography has been around for a long time and most people would probably argue that digital is better than film in various respects: I’m one of them. While digitally editing my own photos, I have managed to recreate the look of film without actually setting out to do so; by “look” I mainly mean the color and tonality of a photo. Modern photo editing software has evolved to the point where you can adjust digital photos with great precision. Sure, if you open a photo in Photoshop or a RAW editor, the sliders are limited, because there’s a specific range of values you can enter into each one. But think outside the box and you can use the same software to adjust a photo’s look to a virtually unlimited degree of accuracy. For example, I could open a RAW or DNG file in Adobe Camera Raw and decide that a contrast value of four is almost perfect while five is already too much. So I set the contrast slider to four, make a digital copy of the file and set its contrast to five, then open the original and the copy in Photoshop as layers; by adjusting the opacity of the top layer, I can tune the contrast far more finely than the slider alone allows. I’m at least ninety-nine percent sure that you can, in theory, recreate the look of a photo shot on film using digital photography; and if there’s any difference between them, it will most likely go unseen, except perhaps by the highly trained eye of a master. Today’s photography software can fine-tune contrast at the local level (sometimes referred to as micro-contrast) as well as the global level. Digital processes are usually far quicker than film processes, the latter of which involve chemicals and specific lighting to develop properly. A single JPEG should be all it takes to recreate the attributes of a photo made on film, and I have seen proof of it myself, although it would be much better to use an uncompressed format. For one JPEG photo I had, I simply applied high pass sharpening to recreate the crispness film is known for. In another case, I attempted an HDR stack (the images didn’t align well) using three photos, one at -2 exposure compensation, one at normal exposure, and one at +2 exposure compensation, after pushing the highlight and shadow sliders to their maximum settings. The result? A photo with very pleasing transitions between dark and light tones, just as you would see on film. I’m not trying to be a fanboy, but as a Fuji user I don’t worry too much about simulating the actual color rendition of film, thanks to Fuji’s amazing film simulations.
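For anyone curious what that layer-opacity trick amounts to mathematically, here is a minimal sketch in Python (the filenames are hypothetical): export the same RAW twice, once at contrast four and once at five, then blend them so the effective setting lands anywhere in between.

import numpy as np
from PIL import Image

# The same frame exported twice from the RAW converter, at contrast 4 and 5.
low = np.asarray(Image.open("contrast_4.tif"), dtype=np.float64)
high = np.asarray(Image.open("contrast_5.tif"), dtype=np.float64)

opacity = 0.35  # top-layer opacity: 0.0 is pure "4", 1.0 is pure "5"
blend = (1.0 - opacity) * low + opacity * high  # normal-mode blend, like Photoshop layers

Image.fromarray(np.clip(blend, 0, 255).astype(np.uint8)).save("contrast_4_35.tif")

The same idea works for any slider, which is why the “limited” ranges in the RAW editor aren’t really a limit at all.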

Important insights and opinions wanted from my readers

Today, if things go right, leave a comment about what you’d like me to write about on this blog. So far I’ve been writing about photography, even though on my “about” page I wrote that I like to write about various topics, which admittedly conflicts with what I’ve written so far. However, I have been wanting to know your opinion. I’d like to make this blog successful, but if I write the wrong stuff (and it apparently takes almost nothing to do so), or if I take on a specific topic and write about it too much, you might not like my writing.

My Hopes To Get Into Extreme Lowlight Video

I might’ve already written about this; if so, this post will likely be deleted. In addition to photography, I’ve been pretty interested in video, even though I’ve barely done any of it. Particularly interesting to me are astro and extreme low-light video, capturing phenomena such as meteors, lightning, aurora, etc. I’ve wanted a camera that can record with a reasonably small amount of noise at extremely low light levels, so I think a Sony full frame would likely be best. I’ve also been considering a Panasonic full frame, but before deciding which one to purchase I’ve been waiting for more information about the upcoming Panasonic S1H, because it might have what I need. Oh yeah, and the lens itself needs to have a large entrance pupil, meaning it must pass a large amount of light through to the sensor; this isn’t simply a matter of the aperture or f-stop number. The entrance pupil can be calculated with the equation E = FL/A, where FL is the focal length and A is the f-number. Also, in astronomical applications, it can be important for the lens to have special characteristics such as near-zero distortion, and field curvature and coma need to be considered as well. I’d like to compare such a future setup, whenever I can gain access to it, against different setups: for example, a dedicated cooled astronomical camera (usually with a smaller sensor than a dedicated stills camera of about the same price), as well as mounting each type of camera to a telescope and/or astrograph.
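As a quick sanity check of that formula, here is a tiny Python snippet; the example lenses are just ones mentioned elsewhere on this blog.

def entrance_pupil_mm(focal_length_mm, f_number):
    # E = FL / A: entrance pupil diameter in millimetres.
    return focal_length_mm / f_number

print(entrance_pupil_mm(50, 1.2))  # ~41.7 mm for a 50mm f/1.2
print(entrance_pupil_mm(50, 1.0))  # 50.0 mm for a 50mm f/1.0

That difference in pupil diameter is why a fast normal prime gathers so much more light for meteors and aurora than a slower lens of the same focal length.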

My Dream Camera For Stills & Video

My dream camera is most likely one with a large sensor and a small lens that is great not just at photography but at video as well, since video is one of my passions. Admittedly, I do almost no video at all; I keep forgetting about it. The reason I’d like a large sensor is so that I could get more creative results with less risk of digital noise, for example being able to use a high ISO setting and/or a long exposure with very little noise. One example of when I’d like to, and probably need to, use high ISO is photographing a very dark scene, such as a dimly lit flower against a starry sky, while trying to keep everything from foreground to background in focus; I’d like to be able to use a small aperture, or take focus-bracketed photos, without also needing additional photos for what’s known as averaging. Averaging is a method used to reduce noise without destroying fine detail, and it’s done by digitally stacking multiple photos with a specific opacity assigned to each. Theoretically, almost any type of photo is possible with any size of camera sensor, but the smaller the sensor, the more improvisation is usually required. For example, take two identical photos with the exact same settings (which can actually be trickier than you might think), one with a one-inch-sensor camera and one with a full frame camera; the results will look almost identical at first, until heavy editing is applied. Perhaps the biggest difference between the photos is dynamic range: in software such as Photoshop you can brighten the shadows, and doing so on the full frame photo doesn’t amplify the noise in the dark areas nearly as much as it does on the one-inch photo. In fact, at high ISO it would probably take tens of photos from the one-inch camera, stacked, to match the lower noise visible in the full frame photo. The lens is crucial to the system as a whole, because the lens is what makes the photo, not so much the sensor. Achieving very high resolution is easy for camera companies, but putting that resolution to effective use requires a very high quality lens, and this is where I disagree with various brands’ decisions to push resolution so hard, because the lenses get expensive quickly as resolution climbs. Also, higher resolution makes noise easier to spot. My dream camera is a Sony medium format body with intelligent stacking abilities and a processor efficient enough to stack multiple photos quickly in various ways, whether focus stacking, HDR, or averaging. Some time after my interest in photography came my interest in video of the night sky, mainly recording meteors and aurora, and for that I’d like lenses to keep improving in the sense that maximum apertures get ever larger. With my X-T2 and Mitakon 50mm f/1.2 lens, it would be great if the per-frame shutter speed could be much slower than 1/24th of a second. I really wonder, then, what kind of astronomical video I could achieve with a Sony such as the A7R II. An A7R series camera should go well with a lens such as an adapted Canon 50mm f/1.0, and with just a slightly larger sensor and a slightly larger aperture, I think it would make a wonderful difference for what I’ve been trying to do.
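Since averaging came up above, here is a minimal sketch of it in Python (hypothetical filenames; the frames are assumed to be aligned and identically exposed): stack the burst and take the per-pixel mean, which knocks down random noise roughly with the square root of the number of frames while leaving real detail alone.

import glob
import numpy as np
from PIL import Image

# A hypothetical burst of identical exposures of the same scene.
frames = [np.asarray(Image.open(p), dtype=np.float64) for p in sorted(glob.glob("burst_*.jpg"))]
averaged = np.mean(frames, axis=0)

Image.fromarray(np.clip(averaged, 0, 255).astype(np.uint8)).save("averaged.jpg")

Stacking layers in Photoshop with opacities of 1, 1/2, 1/3, and so on from the bottom up gives the same mean, which is where the “specific opacity assigned to each” comes from.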
But since I started writing about video, keep in mind that cameras such as the Sony A7R series, or others such as the Panasonic S1, Nikon Z6, Canon R, etc., while comparable to camcorders, are mostly not designed or built to compete with cine cameras, which are basically camcorders on steroids. Video production can be just as complicated as photo work, given terms such as codec, compression (which means something different for video), T-stops, etc. I’d like a serious stills camera that’s also a serious cine camera, with built-in ND filters and the ability to record in RAW, 4:4:4, and H.265. At the time of this writing I don’t know of such a camera; I’m still waiting.

Truth about perfection in photography software

In photography, it’s generally the small stuff that needs improving: color correction, contrast, white balance, noise, and distortion each usually need to be at least slightly improved. Small improvements, but a large number of them, most of which go unnoticed by the casual, inexperienced viewer. And to reach perfection, the ideal software needs to be used, which is almost always the more expensive one. Cheaper software might do the same job, but generally only up to a point, after which not all of the information that makes up the photo is there anymore. In other words, with relatively cheap photography software you can only go so far before you can no longer preserve all of the quality in a photo. One way this quality can get lost is if the software starts to edit your photo destructively. But what I initially had in mind regarding cheaper software is that damage is essentially done to the quality of the initial RAW input after a certain amount of editing or enhancing. For example, from start to finish, my Photoshop Elements 15 can only keep a file in RAW form for so long, at least if extensive editing is required. It’s not as flexible as, say, Photoshop CC. That’s what I’ve noticed, but perhaps your experience has differed from mine. If that’s the case, let me know what your experiences with photography software have been, especially if they’ve involved editing RAW or DNG files.

The line between “Stills Camera” & “Cine Camera” has blurred more quickly in 2019; new cameras as evidence

Digital camera companies have been teasing new, upcoming camera releases. Camera releases aren’t a new thing, but this time around, in 2019, the game got amped up: still-photography features keep improving, but video features have started improving at an even faster pace. Examples include the RAW video capability expected in the new Sigma FP cinema camera, which is also supposed to offer 4:4:4 compression, which is lossless. The upcoming Panasonic S1H is another interesting camera for video. These cameras are supposed to be so focused on video production that it might be difficult to say whether they are still cameras or cinema cameras; the line between a camera dedicated to still photos and one dedicated to video production is becoming ever more blurred as such new cameras are developed. Sony hasn’t released comparable news about its cameras at the time of this writing, but it’s supposed to be working on new sensors, and the new sixty-one-megapixel A7R IV might be just a glimpse into what Sony has in store!

“Macro Supernova” Creative lighting example

Remember when I talked about multiple exposures, especially for macro? Here’s an example of one that I did, using various angles of light.

DSCF8301
Taken using a Mitakon 50mm f/1.2 lens on a Fuji X-T2, stopped down to f/2.8 for good detail, then heavily cropped. The source files got some slight adjustments in Adobe Camera Raw; the final image then had a slight levels adjustment and high pass sharpening applied, before I created a duplicate layer, gave it a Gaussian blur, and set the layer opacity to about 50%, and that’s about it!
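For anyone who wants to replicate that last step, here is a rough sketch in Python with Pillow (the filenames and blur radius are just placeholders): blur a copy of the finished image and blend it back over the original at about 50% opacity.

from PIL import Image, ImageFilter

base = Image.open("macro_final.jpg")
blurred = base.filter(ImageFilter.GaussianBlur(radius=8))  # duplicate layer, Gaussian blurred
glow = Image.blend(base, blurred, alpha=0.5)  # ~50% layer opacity
glow.save("macro_final_glow.jpg")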

Using a cheap option for extreme macro

I’ve finally gotten around to finding a truly interesting subject to photograph with my point-and-shoot camera, a Sony RX100 II. I used a microscope eyepiece with a magnification of at least five times life size (or at least that’s what it’s specified as). The actual magnification might be as low as 3x, but that’s still ample, especially because I zoomed in all the way with the lens. The camera was mounted to a telescope adapter from Celestron, and the eyepiece was carefully fitted inside the front end of the adapter. Mounting the camera also had to be done carefully and precisely. The eyepiece didn’t fail to deliver in terms of overall performance, whether in sharpness, vignetting, or distortion. How about the photos I took with the setup? Moderate RAW processing was applied, mainly vibrance (other settings were only pushed a little). Here are the ones that I’ve done so far:

As much as I liked the depth of field, I think a lens with much less magnification could be great as well, and one with a much greater working distance. At such high magnification, focus stacking is often a necessity to get the most detail; these photos were both at f/11, but because of the angle at which the camera was pointing at the flower, the depth of field was exceptionally small. Also, a great amount of light is needed in this type of photography, especially if the intent is to use a small aperture and/or a low ISO value. Around the start of the month I tried using Picolay for stacking, but with every setting I used the images were far from aligned; I also tried Helicon Focus (free trial), and just about all of its settings worked, even though the initial photos were moderately misaligned because I didn’t use a dedicated setup at the time. The subject matter wasn’t overly interesting, which is why I didn’t bother posting it here. And yes, I still intend to try macro photography with my Fuji as well, and combine it with UVIVF, basically using UV light to cause what’s known as ultraviolet-induced visible fluorescence!