FRAMES Photography Podcast

Eric Chan

FRAMES Magazine Episode 206

In today's episode, W. Scott Olsen speaks with Eric Chan, a Senior Principal Scientist on the Adobe Camera Raw team, where he develops techniques for editing photographs.

You can visit Eric's personal website here.

Find out more and join FRAMES here.


SPEAKER_04:

I think, especially in the Lightroom and Camera Raw case, so much of the design principles from the very beginning have been about everything can be undone. Everything is this parametric, non-destructive editing environment. It's really intended to encourage photographers to explore and try things, right?

SPEAKER_02:

This photography podcast is brought to you by FRAMES, a quarterly printed photography magazine. Here is today's host, W. Scott Olsen, with another fascinating conversation.

SPEAKER_05:

Well, hello, everyone, and welcome to another podcast from FRAMES Magazine. My name is Scott Olsen, and today, folks, today, this is gonna be so cool. I've got to tell you, I am really, really excited and curious and cannot wait to unpack what's gonna happen today, because we're talking with someone named Eric Chan. And you may not think you've seen his name, but I can almost guarantee you have seen his work on a daily basis. From one aspect, Eric is just a world-class, fantastic photographer. I mean, we're talking landscapes, urban stuff, travel stuff. I'm looking at his website right now. There's Japan, there's Canada, there's Nepal, there's Iceland, Greenland. The man's been, I don't think there's a country on the planet that he hasn't been in. And some of the best landscape and wildlife and travel photography that I have seen, equally at home in black and white and in color. The stuff just pops. It's just absolutely resonant. I hate to tell you, I was teasing him about this just a minute ago before we hit record. You know, Eric is also partially responsible for the number of cat photos out on the web. But I hate to admit they're actually really good as well. Now, beyond the fact that Eric is one of the photographers who I find absolutely thrilling and compelling with every single composition, he's got a side gig. He's got this little thing going over on the side where he is a Senior Principal Scientist for, wait for it, Adobe. This is the guy who's working on Camera Raw. And his projects include stuff that we use every single day: Highlights and Shadows, Clarity, my favorite slider of all time, Dehaze. I mean, this is the guy, or one of the guys, behind the work that makes our work possible. So we're talking about two things today. We're talking about some absolutely breathtaking photography and a coder's, a programmer's, a scientist's insight into how in the world do we make that vision really pop on our own screens?
Eric, welcome to the podcast, man. How are you doing today? I'm great, Scott. Thank you so much for having me. I'm really looking forward to this. Let's start, you know, let's use the Wayback Machine here for a minute. How did photography, either from the technical side or the artistic side, assuming they're slightly different, first come into your life? You know, running around with an Instamatic when you were seven, or how did this all start?

SPEAKER_04:

Well, it started with borrowing my dad's old film camera when I was young and just doing happy snaps. But it wasn't until I was in graduate school in the early 2000s, I was doing just a recreational trip up to Acadia National Park in Maine in October with three of my college classmates. This would have been the fall of 2002, I think, or 2003. And one of them was really excited because he said, Eric, I just got my hands on a Canon Digital Rebel. It was like the first digital SLR that was under $1,000, you know, something that a student might actually be able to afford. And I was like, at the time, great, but what's an SLR? Because I had no idea. I had just never used a single lens reflex camera before, never used anything of that caliber with interchangeable lenses. And he let me borrow it for an afternoon and I got hooked. And then I realized after I went back to school, after that weekend, that hey, my school library actually has a really cool photography section. And I started, you know, in my free time, borrowing all the books that I could and just kind of learning the basics. And then I learned that my graduate advisor, I was studying computer science, but I was studying computer graphics in particular, because I thought, you know, it was interesting. I always had kind of a visual interest. And it turns out my graduate advisor was really into photography as well. And so just kind of between my friends having gotten me into it, and learning that my school had a really good library, and even though all the books were about analog photography, because these were fairly old books, they were really good for learning just the basics and the foundation, right?
And getting inspired by some of these landscape photos that I was really into: David Muench, Jack Dykinga, and Eliot Porter, you know, a lot of the classic masters of the form. And then, you know, I finally got my own camera, and then eventually made my way in as a hobbyist, and then eventually, through work, to Adobe.

SPEAKER_05:

Okay, so when you were studying computer science, though, before Acadia, what did you think you were going to be doing with it?

SPEAKER_04:

Well, I was studying computer graphics, and computer graphics, which is a very broad field, there's a lot of ways that could go. That could be about making, you know, interactive, real-time game engines for video games, for example. That's what I thought I was gonna be doing way back then. That didn't work out. But I was also interested in, for example, maybe doing rendering pipelines for animation studios, or special effects, right? For TV series and movies and things like that. And I was very interested in just the interaction of light with a scene: how do you cast shadows on objects in a realistic way? Those are, I think, precursors to my interest in photography and understanding light and composition. At that time, I was looking at it more from the technical angle, in terms of how light interacts with a scene, in terms of reflections and light bouncing around and going through objects with translucency. But I wasn't really sure exactly to what use I was going to put all that one day.

SPEAKER_05:

Okay. Now help me set the scene a little bit, though. Because when you were in graduate school and you got your first digital camera, were we at Photoshop 1.0, or what was the state of the electronic world for computer graphics? I assume Industrial Light & Magic was already up and running. What was the milieu at that time? What was considered state of the art?

SPEAKER_04:

Yeah, so, you know, raw converters were kind of new to the scene. They'd been out; like, Adobe Camera Raw had been out for a couple of years. We didn't have Lightroom yet. We didn't have Aperture at the time, right? This was all kind of before all of that. There were a handful of ones that were out on the market, but they were all in their infancy, right? And people were still learning the basics of what it means to capture in raw and process those files digitally. Photoshop was, you know, well beyond version 1.0, but we didn't really have a lot of the editing tools that we have today. And it was still very much optimized around the idea of editing one picture at a time. If you were a busy wedding photographer and you came back with 3,000 pictures from a weekend, I mean, good luck going through all of them one by one with your editor of choice, right? It just wasn't a very great workflow at the time. And the tools that we did have were certainly a lot less sophisticated than what we have today in terms of what one is able to do with a photo. Of course, we had basic things like exposure and cropping adjustments and whatnot. But I think in general, just in the level of sophistication, both from the editing perspective as well as the organization and workflow perspective, these were really early days.

SPEAKER_05:

Oh man. I remember some of those with, you know, great repression, I think. So one of the things I've discovered over the years is that in the analog days, a lot of photographers fell into their love of photography through the shooting experience: being out in the field or the studio, holding the camera, pressing the shutter release, that kind of stuff. An equal number of photographers will say that their love for it, their passion, was found in the darkroom. You know, this magical moment where an image starts to appear on paper. And both situations really are kind of separate problem-solving environments. The problems you solve in the field are not the problems you have in the darkroom, et cetera. So when you started putting computer science together with photography, was there one that was sort of pushing the other?

SPEAKER_04:

Great question. I really love both angles of it personally. You know, I got into nature and landscape photography just through my love of being outdoors and wanting to try to capture some semblance of the experience of being there, right? A photograph is never really a substitute for the real experience, but that was kind of what inspired me to take those sorts of pictures. So that relates to that aspect of it. But the computer science angle of it, to your question, really tied into my love of tinkering with what you can do with a picture once you have it. And once I understood what raw meant, which is basically raw ingredients, so to speak: if one thinks about a baking or cooking analogy, you know, if you have a finished dish, you can always tinker with it with a bit of salt and pepper and spices and whatnot. But it's not as malleable or flexible as what you can do if you start from earlier in the process, right? And so once I learned, at least at the technical level, what goes on inside of a raw file, I realized that, oh, there's a lot we can do with the detail, with regards to sharpening and texture and edge and contrast enhancement and bringing out little nuances of an image, as well as kind of broader things, like how do we interpret certain colors, and what do we do with saturated colors that really pop and are true to the real-life view, but may not be representable on an average consumer display. You know, what do we do with these colors, right? Some of them are aesthetic decisions, some of them are more like technical decisions, right? We just can't represent them in this color space or on that display. We've got to do something about it. And so all of those kind of tie into, in my view, a classic left-brain, right-brain merging of aesthetic and scientific decisions to be made in post-processing.
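Eric's point about saturated colors that can't be represented on a given display comes down to a concrete choice in code. The sketch below is a toy illustration of two common strategies, not Adobe's actual method: hard-clip each channel independently, or desaturate the pixel toward gray just enough to fit the display's range. The function names and the Rec. 709 luma weights are assumptions made for the example.

```python
# Toy gamut-handling sketch (illustrative, not Adobe's algorithm).
# Pixels are (r, g, b) tuples in linear light; "in gamut" means [0, 1].

def luminance(rgb):
    """Rec. 709 luma weights; the weights sum to 1.0."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def hard_clip(rgb):
    """Clamp each channel to [0, 1] independently (simple, can shift hue)."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

def desaturate_into_gamut(rgb):
    """Blend toward the gray of equal luminance just enough to fit [0, 1].
    Assumes the pixel's luminance itself is already in range. Because the
    luma weights sum to 1, blending toward gray preserves luminance exactly.
    """
    y = luminance(rgb)
    t = 0.0  # smallest blend factor that brings every channel in range
    for c in rgb:
        if c > 1.0:
            t = max(t, (c - 1.0) / (c - y))
        elif c < 0.0:
            t = max(t, -c / (y - c))
    return tuple((1.0 - t) * c + t * y for c in rgb)
```

Comparing the two on an out-of-gamut red shows the trade-off Eric hints at: clipping throws away the channel relationship, while desaturating keeps the tone but mutes the color.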

SPEAKER_05:

You know, it might be left brain and right brain. It might be, you know, to use a technical term here, just pushing the cool factor a little bit. I'm looking at some of your images here, and this is just on the front page of your website. The top right, there's one from Iceland. Just below that, there's another one from Greenland. And, you know, ice and icebergs and all that. Really, really tough to do well sometimes. And I'm wondering if you didn't sit there one night looking at one of your images, going, yeah, there's actually more here. Or were you looking at the code thinking there's more here? You know, which problem is tastier for you to solve?

SPEAKER_04:

I think I now have enough experience, especially with some of those iceberg images, to know that there's always more data in the raw file that can be extracted. So even if the initial rendition of the file looks pretty flat and doesn't look too inspiring, I know there's more in there. Especially with something like ice that's photographed under overcast conditions, which I actually think is the strongest way to photograph it, because it really brings out the subtleties in the color. And in those situations, I feel that as long as the raw data was properly captured and exposed, so it's not, you know, wildly underexposed or things like that, there's just a lot of nuance in the shading of the ice that can be brought out with either global or local adjustments. Dehaze, as you mentioned earlier, is actually something that works pretty effectively when used locally in different regions of the ice. And so I think of it a little bit technically, in the sense that I know what the tools can do to it if it looks flat to begin with. And part of it is the experience of knowing I've done this before. I know what kind of details I'm looking to bring out of an image. And so all I worry about in the field when I'm capturing is to make sure it's properly exposed.

SPEAKER_05:

You know, listening to you, I'm suddenly wondering about the universe before and after your work. Because if you've got a bad lens, you can have some fun with it in post-production, but it's still a bad lens. And people are spending countless hours debating the merits of stacked sensors, partially stacked sensors, backside-illuminated sensors, all this kind of stuff, before the shutter release is ever pushed. And on the back end, you mentioned consumer monitors. There's a lot of people who are still looking at pictures just on their phone or on old-school monitors. Do you find yourself hamstrung, inspired, or working in tandem with those other steps in the process? You know, the people designing sensors and lenses, and the people designing monitors?

SPEAKER_04:

Yeah, it's a unique challenge because of just the great variety of devices, both on the capture and the display side. I think the upside is that as we've been working with a lot of our hardware partners out there, over time we've gotten better tie-ins into Lightroom and Camera Raw to improve certain things. For example, around 2010, one of the major advances that were made, around the advent of mirrorless cameras, was the idea that, well, we can make the package smaller by taking some of the things that we used to build into the lens and trying to do them in software. Now, a cynic might listen to that and say, well, you're just trying to make cheaper glass so that you don't have to put all the lens elements for fixing things like distortion inside the lens. But I do think that that's a reasonable trade-off, because what I think lens designers understood was that there are certain things, like sharpness, which are hard to fix in post, but there are other things, like light fall-off in the corners, which are relatively easy to fix in post. And so they started to make decisions like, how can we make this lens smaller, more compact, better for travel, for instance, as one of the options to offer, as opposed to, well, every lens has to have a big front element that makes it bulky and hard to store. And because we have these tie-ins now in Lightroom, where we can read metadata from a lens or the raw file that says, here's how the lens was designed, we can fix certain things automatically when you load the image into Lightroom. I use the term fix a little bit casually here, but what I really mean is that there are certain operations that they used to build all into the lens, and that made all the lenses much bigger and much more expensive. And I think it's actually really good now that there are different options for photographers, right?
You can still buy the super top-grade lenses that cost a lot more if you're really into that. But people also now have the option to get the more accessible, compact, travel-friendly lenses that may not be quite as good optically, but they know that the images will still look pretty good when they come into Lightroom by default, because of the technical tie-ins: things like, we know what the residual distortion and aberrations are, and we can try to do as much of that correction for the photographer automatically.
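Eric singles out light fall-off (vignetting) as something that's relatively easy to fix in software. A minimal sketch of the idea, under stated assumptions: each pixel is multiplied by a gain that grows with its distance from the image center, and the `k1`/`k2` coefficients stand in for values a real lens profile would supply from metadata; actual correction models are considerably more elaborate.

```python
# Hypothetical light fall-off correction: brighten pixels toward the
# corners by a radial polynomial gain. k1 and k2 are made-up stand-ins
# for per-lens profile coefficients.

def falloff_gain(x, y, width, height, k1, k2):
    """Brightness multiplier for pixel (x, y).
    r is the distance from the image center, normalized so that r = 0 at
    the center and r = 1 at the corners; gain = 1 + k1*r^2 + k2*r^4."""
    cx, cy = (width - 1) / 2, (height - 1) / 2
    corner = (cx ** 2 + cy ** 2) ** 0.5
    r = (((x - cx) ** 2 + (y - cy) ** 2) ** 0.5) / corner
    return 1.0 + k1 * r ** 2 + k2 * r ** 4

def correct_falloff(image, k1=0.30, k2=0.15):
    """image: list of rows of linear-light pixel values in [0, 1]."""
    h, w = len(image), len(image[0])
    return [[min(1.0, image[y][x] * falloff_gain(x, y, w, h, k1, k2))
             for x in range(w)] for y in range(h)]
```

The design point is the one Eric makes: because the gain is a smooth, low-frequency function, boosting the corners mostly amplifies signal rather than destroying detail, which is why fall-off is a good candidate for software while sharpness is not.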

SPEAKER_05:

This is so cool. Um I have to ask, so somebody comes out with a new lens. Do you run down to your local camera store, buy it, bring it back into the lab, put it in your machine and analyze it? Or is there a long-standing conversation between the manufacturers and you guys saying, you know, I know you're going to build a profile, you know, let's talk in advance.

SPEAKER_04:

Yeah, it's all of the above, Scott. So in the past, well, it keeps the job interesting, right? In the past, back before we had long-standing, ongoing partnerships with camera and lens vendors, back when we were starting to experiment with building lens profiles, we did tend to go either buy or rent lenses to try to understand what was possible technically. And at the time, we tended to have to profile the lenses in our lab ourselves, because the lenses were historically for film SLR systems, where there wasn't any real electronic communication with the body other than things like autofocus and focus position. Now with a mirrorless system, what's really interesting, and of course they're so prevalent now, is that the lens mounts have much richer metadata information that goes across them. So typically what happens is that a given lens, let's say a new 24-70 that comes out, will have some characteristics, and that information is actually stored digitally in the lens itself, in a little piece of computer memory that's in the lens. And when you capture a raw file with it, that information is sent across the lens mount to the camera, and then it's stored in the raw file, in a format that we on the Adobe side can read, so we can apply the processing in Lightroom corresponding to that lens. And not only to that lens, but specific to your particular capture condition. So you used that 24-70 lens at 35 millimeters, f/4. You know, we know that, right? Because of all the EXIF data. But all of this only works if we have a good long-term relationship with the lens vendor. So what we've been doing for the past 15 years is really building and establishing those relationships.
So these days, the way it works is a lot more like the latter of what you suggested, which is that a lens comes to market, it's announced, and we often will work with the vendor to try to get the profile built in advance, so that by the day you can go to your favorite store and pick it up, we should already have a profile inside of the library for you.
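Eric's description of corrections that are "specific to your particular capture condition" can be pictured as a lookup: the lens ID plus the focal length and aperture recorded in the raw file select, and interpolate between, calibration points measured when the profile was built. Everything below is a toy model with invented names and numbers, just to show the shape of that lookup.

```python
# Toy lens-profile lookup (all names and coefficients are hypothetical).
# A profile stores a distortion coefficient at a few calibrated focal
# lengths; the coefficient for the actual capture is linearly interpolated.

from bisect import bisect_left

PROFILE = {
    "Example 24-70mm F2.8": {24.0: -0.040, 35.0: -0.010, 70.0: 0.015},
}

def distortion_coeff(lens_id, focal_mm):
    """Interpolate the calibrated distortion coefficient at focal_mm,
    clamping to the nearest calibration point outside the measured range."""
    calib = PROFILE[lens_id]
    focals = sorted(calib)
    if focal_mm <= focals[0]:
        return calib[focals[0]]
    if focal_mm >= focals[-1]:
        return calib[focals[-1]]
    i = bisect_left(focals, focal_mm)           # first calibrated focal >= focal_mm
    f0, f1 = focals[i - 1], focals[i]
    t = (focal_mm - f0) / (f1 - f0)
    return (1 - t) * calib[f0] + t * calib[f1]
```

A real profile would key on aperture and focus distance too, and would carry vignetting and aberration tables alongside distortion, but the interpolation idea is the same.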

SPEAKER_05:

I was gonna say, because, you know, I want that profile there the afternoon I buy the lens.

SPEAKER_04:

That is the ideal we strive for. I can't say we do it all the time, but that is our ideal. Like, we want zero-day waiting. And I would say these days we're about as close to that as we've ever been. So I'm proud of our team for having done that. But it did take a while to get there.

SPEAKER_05:

Okay. From a technical standpoint, I mean, your credits include Highlights and Shadows, Clarity, Dehaze, profiles, lens corrections, all this kind of stuff. That's magic to me. You could show me a thousand lines of code and I would not understand one of them. Unless it was the old Computer Science 99 if-then command. That one I got. The rest of it's just beyond me. How do you work on highlights? And I don't mean from a making-your-picture-better standpoint. From a coding standpoint, how do you work on highlights? What are your goals?

SPEAKER_04:

Yeah, so something like Highlights is a really interesting one, because it's one of those controls that really affects the overall balance of tones in a photo. For earlier versions of something like Highlights, the initial drafts, the analogy I would make is something like, well, it's a bit like an exposure or curves control, except it only affects the upper range of the picture. When we experimented with that, the upside is that it's fast and relatively easy to implement from a coding perspective. But from an imaging and visual perspective, it was unsatisfying, because it tended to make the tones look too flat. It basically looked as if someone had compressed the tones in a way that you just can't see them that clearly anymore. So highlights end up just looking dark and muddy and gray. One of my photographic heroes, Charles Cramer, used to call that tonal constipation. Basically, all the tones pushed up together, and nobody likes the way it looks, you know? So when we asked what our guiding principles and our group goals were, one of them, kind of informally among the team, was: if we're gonna have highlight tone mapping and have the Highlights slider bring things down, we don't want tonal constipation. That's something to be avoided. And so that led us to exploring very different techniques, techniques that I think in the computer graphics and imaging literature would be broadly classified as local adaptation. The general way to think about it is, imagine that every pixel in the image had a different curve applied to it. So it's not one curve that's applied to every pixel; different parts of the image will get different curves.
And so then the guiding principle for developing the method behind Highlights is: how can we do that in a photographically coherent way? And so the techniques change a lot based on the visual goals. The technology behind Highlights, Shadows, and Clarity came from one of my teammates in Adobe Research at the time, who had developed a new method for doing tone compression, which I felt was state of the art at the time. This would have been 2011 or so. He was working on a paper that was eventually published at ACM SIGGRAPH, a very prestigious computer graphics venue, and I was working on the implementation for Camera Raw and Lightroom. And then once you have the basic idea in place, I think it shouldn't be underestimated that there's just a lot of time spent tuning. And by that I mean parameter tuning. You know, the slider goes from minus 100 to 100. Sure, but what does minus 50 do, exactly? How far does it go? And how does it work on images that are high contrast versus low contrast? How does it work on portraits versus landscapes versus urban scenes? There's just a lot of testing and tuning to do. I would say, you know, what is it, the Edison quote, 1% inspiration, 99% perspiration? I feel like it's 1% idea and 99% tuning.
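The "different curve per region" idea Eric describes can be made concrete with a tiny 1-D sketch. This is not Adobe's algorithm, just an illustration of why local adaptation beats a single global curve: the global version compresses highlight texture along with the highlight level, while the local version compresses only a smoothed "base" layer and adds the fine "detail" back, so texture in the highlights survives.

```python
# 1-D base/detail sketch of local-adaptation tone mapping (illustrative only).

def box_blur(signal, radius):
    """Simple moving average; stands in for a proper edge-aware blur."""
    out, n = [], len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def compress(v, amount=0.5):
    """A toy highlight curve: pull values above 0.5 toward 0.5."""
    return v if v <= 0.5 else 0.5 + (v - 0.5) * (1 - amount)

def global_highlights(signal, amount=0.5):
    """One curve applied identically to every sample."""
    return [compress(v, amount) for v in signal]

def local_highlights(signal, amount=0.5, radius=2):
    """Compress the smoothed base, then restore the local detail."""
    base = box_blur(signal, radius)
    detail = [v - b for v, b in zip(signal, base)]
    return [compress(b, amount) + d for b, d in zip(base, detail)]
```

On a bright, textured patch, the global curve halves the texture amplitude (the "tonal constipation" look), while the local version darkens the region yet keeps most of the sample-to-sample variation.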

SPEAKER_05:

Yeah. I have this vision, because, you know, debugging code is one thing, but having it actually look good. Is there a mission control in a basement somewhere with a super giant monitor? You tweak the code a little bit, then you look up at it, and, yeah, I don't like that so much. I mean, how do you judge success when you're working on these projects? Because the code can be clean.

SPEAKER_04:

Oh, it's so hard, Scott. It's so hard. Well, I think part of it, so reference monitors and reference environments are useful as a baseline. But to the point you brought up earlier, people have different consumer displays, and people are looking at things on their phones. The reality is, for better or worse, there is this very heterogeneous environment where our customers are on all these different devices, right? Old Windows laptops, shiny new phones with bright displays. Some people are lucky enough to have these high-end HDR displays, and so on. So I think our measure of success is, we actually have to evaluate across these devices, because that's how our customers, you know, photographers, are going to be looking at them. Not everyone has a reference studio environment with a recommended display, right? And the reality is a lot of photographers have multiple environments. Even those who do have a reference studio environment are probably also looking at photos on their phone, right? And those are very different. So our measure of success is really looking at a diversity of things. For every feature we have kind of a target, like, here's our top audience that we're going after. For example, a feature like Texture, which came out a few years ago. It's used for a lot of different things, but it was originally targeted specifically for skin smoothing, like portrait retouching for skin. You can use it for other things, like rocks and texture and foliage and grass, but it was really targeted for that. So we spent a lot of time evaluating how it worked on portraits, different skin tones, different lighting conditions. And then we tuned it secondarily for other subjects as well. But that's an example of where we identified a need, something that was heavily requested, and we tuned it specifically for that.
So when we looked on different displays in different environments, we focused primarily on portraits.

SPEAKER_05:

Going back to your own photography for a moment, you've got a picture from Portugal in 2013, an image that I just love. It's a pathway in a forest. There's a lot of fog, there's trees. Do you remember this picture? You know which one I'm talking about? Okay. That, I mean, is a beautiful, beautiful image. It's also kind of a challenge, if you're thinking of post-production, because you've got just about everything possible going on in that picture. So walk me through two things for that one image. Walk me through the field experience, taking the picture, and how you created it. But then, okay, you're back home, you've got a glass of wine by the side of the machine. It's like, now what the hell am I going to do to make this as beautiful as it becomes?

SPEAKER_04:

That was an interesting trip, Scott, because, first of all, it was done during kind of a spring break trip where we didn't know what to expect. It was my first trip there, and it ended up being really foggy on most of the days. It was also interesting from a capture perspective in the field, because this was my first time borrowing one of my teammates' Sony RX1 cameras. It has a fixed 35 millimeter lens, no zoom, right? Because it's just a fixed prime lens. It was my first time actually trying to use a prime lens for kind of a landscape shoot. And so it was actually my first experience training myself to photograph for an entire week only at 35 millimeters. It was kind of an interesting exercise, but this was actually the first image that I made in that mindset that kind of convinced me that, hey, you know, maybe life at 35 millimeters only is okay. It started to make me realize, oh, this is why maybe some people really gravitate to this focal length. Just the relationship between the branches and the foliage and the path and all of that. So trying to compose and get these elements in a somewhat pleasing arrangement was my main challenge in the field. And then once I got back to the desk, editing it was mostly about trying to preserve the feeling of contrast. Like, not that much contrast, just enough local contrast so you can see the distinction between the different elements, like the branches that are fading in the background and the path that's receding, but not so much contrast that you lose the feel of the fog. I think that's one of the pitfalls of low-contrast images: if you have a habit of stretching the histogram to fill the space, so you set your black point and your white point, it tends to destroy the feeling, the mood, of the image.
So for this one, I tried to be more judicious with that. You know, I would use a little bit of the brush or the radial filter to add a little bit of contrast in there, but I tried to do it subtly.

SPEAKER_05:

I was gonna ask if you were doing global changes or if you had a lot of masks in there.

SPEAKER_04:

Yeah, I would say in this case, global changes were really quite minimal, maybe just a little bit of a crop. Most of the changes were made with local tools, just to kind of rebalance the light a little bit. Quoting Charles Cramer again, he often used to talk about re-orchestrating the light. He was a musician, so he liked to talk about things from the point of view of someone who's trying to influence the balance of sound in an orchestra, right? Oh, this side is too loud, that side's too quiet. How do you rebalance things to be more harmonious? And I think for an image like this, where you have light coming through the fog, it does tend to naturally be too strong on one side. So a lot of it was about just trying to tamp down the edges a little bit, bring out some parts of the branches so that you can see them a bit better. It's really not a lot of major tweaks, but more like small tweaks here and there.

SPEAKER_05:

But so many of those small tweaks can ruin the image as well. Make me not believe it, make me not say, you know, this is something I want to fall into. Absolutely wonderful shot. You were making me think a second ago about the stuff that's coming off of sensors. Is there a lot of information that a sensor captures that we never see, that's not even part of our imagination of what the cameras can do? I know in at least a couple of manufacturers' cameras, the sensor is capturing infrared, which, unless you've got a filter, we don't see. But is there a lot of data that's just not part of our vocabulary yet?

SPEAKER_04:

Yeah, there's a lot of data. I would say there's data that is captured but that we never show directly to the photographer in Lightroom. Some of this is just kind of the details of how the profiles work. But for example, if you photograph with a Fujifilm X100 series camera, the sensor is really organized as this repeating six-by-six color filter array, which is what Fujifilm calls X-Trans. That pattern of recorded pixels is not something we directly show to the user; we never show that six-by-six version of the photo. It always goes through additional processing before we show something to the photographer. And part of that, I think, is just that from a workflow perspective, there's not a lot of advantage to showing that earlier-stage image. It would be a very strange thing, I think, for most people to see visually. It's like a six-by-six checkerboard, in black and white, where some of the checkerboard elements are bright and some of them are dark. It's like, what is this? It's not even my photograph. But from an image processing perspective, that is absolutely essential, because that's how we end up interpolating and getting the details and colors right for such images. We kind of have to know those details on the engineering side, but that happens almost under the hood, automatically, right? It's not something the photographer necessarily needs to be aware of in order to use the camera or edit the pictures successfully.
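The six-by-six mosaic Eric describes can be written down directly. The layout below follows the X-Trans tabulation used in open-source raw decoders such as dcraw; treat it as an assumption rather than Fujifilm's official specification. The points that matter for Eric's explanation hold either way: each pixel records only one color, the pattern repeats every six pixels, green sites outnumber red and blue, and every row and column of the tile contains all three colors, which is what the demosaicing step relies on to reconstruct full-color pixels.

```python
# An X-Trans-style 6x6 color filter array (tabulation as in open-source
# decoders like dcraw; illustrative, not an official spec).

XTRANS = [
    "GGRGGB",
    "GGBGGR",
    "BRGRBG",
    "GGBGGR",
    "GGRGGB",
    "RBGBRG",
]

def cfa_color(x, y):
    """Which single color channel the sensor records at pixel (x, y);
    the pattern tiles the sensor with period 6 in both directions."""
    return XTRANS[y % 6][x % 6]

def channel_counts():
    """How many of each filter color appear in one 6x6 tile."""
    from collections import Counter
    return Counter(ch for row in XTRANS for ch in row)
```

In one tile there are 20 green sites versus 8 red and 8 blue, so most pixels' red and blue values, and many pixels' green, exist only after interpolation, which is exactly the "six-by-six checkerboard" stage that never reaches the user.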

SPEAKER_01:

Let's take just a quick break. We hope very much that you are enjoying today's episode. The very fact that you are listening to this podcast suggests that photography means a lot to you. And if that's the case, you might want to have a look at Frames, quarterly printed photography magazine. We truly believe that excellent photography belongs on paper. Visit readframes.com to find out more about our publication and use the coupon code PODCAST to receive a recurring 10% discount on your new Frames magazine subscription. And now, back to today's conversation.

SPEAKER_05:

I've also wondered about the denoise features and how good they have gotten recently. Is there a kind of unintended economic effect upstream? Because if I've got a 20-megapixel camera, but I've got the denoise slider, why would I buy a 100-megapixel camera?

SPEAKER_03:

Well, I mean, people buy 100 megapixel devices for all sorts of reasons, don't they?

SPEAKER_05:

No, but in terms of image quality, that feature now is so cool.

SPEAKER_04:

I think it really opened up a lot of possibilities, right? From an imaging perspective, for photographers, in terms of what cameras they use and what ISO ranges they use, it's really given them more flexibility. For high-resolution sensors, which do tend to be noisier per pixel, the upside is that they have the option even at moderate ISOs. If you have one of these 60 to 100 megapixel sensor images, they do tend to have noise when you zoom in, even at 800 to 1600 ISO. And even used at those moderate levels, I think denoise is much more likely to be able to clean them up in a way that preserves the actual detail. As to whether the lenses themselves hold up to 100 megapixels, that's maybe a separate question, right? Obviously, if one is not using a really, really good lens with really, really good technique, one's probably not getting a full 100 megapixels' worth of data in the image, and then you just end up with soft noise that has to be cleaned up; it's not really any better than using a good 20 megapixel sensor. I personally think the sweet spot for full-frame sensors is still around 24 to 36, somewhere in that range. You can get cleaner results at 20 and below, but for a lot of photography that middle spot still remains a really good place to be, and denoise really works well in that range.
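
The trade-off Eric describes, noisier pixels at high resolution but recoverable detail, follows from a simplified noise model: averaging n independent noisy samples (as downsampling a high-resolution image effectively does) shrinks the noise standard deviation by a factor of the square root of n. A quick empirical sketch, assuming pure Gaussian noise (real sensors also have shot and read noise components, so this is only the textbook approximation):

```python
import random

def noise_std_after_averaging(n_samples, sigma=1.0, trials=20000, seed=0):
    """Empirically estimate the standard deviation of the mean of
    n_samples independent Gaussian draws; theory predicts it should
    come out near sigma / sqrt(n_samples)."""
    rng = random.Random(seed)
    means = [
        sum(rng.gauss(0.0, sigma) for _ in range(n_samples)) / n_samples
        for _ in range(trials)
    ]
    mu = sum(means) / trials
    var = sum((m - mu) ** 2 for m in means) / trials
    return var ** 0.5
```

Averaging four noisy photosites roughly halves the noise, which is why a binned or downsized 100-megapixel file can compare well per-area with a natively coarser sensor.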

SPEAKER_05:

Do you ever do, I want to say market studies, but that's not what I mean, real use studies, and find oddities out there? Like somebody who denoises the hell out of something and then puts in 50% grain that's really rough and big. I mean, do you find people's habits are sometimes in need of tweaking as well?

SPEAKER_04:

I think yes, absolutely. Well, I don't mean in terms of needing tweaking, but they use the tools in ways that we had not anticipated, right? Or they use them in orders we hadn't anticipated. Even at a very high level, for basic interactions: do people tend to use the exposure and contrast controls first, or do they do cropping first, and go back and forth between them? There's an interesting interaction between all of these things, because some of the controls have behaviors that depend on what the cropped result of the image is, right? So for example, vignettes pay attention to where your crop rectangle is, how the remove tool works depends on whether the image is cropped or not, and things like that. And so there are actual choices that affect how a user might apply the controls, based on the order in which they do things, right? That's one of the interesting interactions, even though we can encourage an order, maybe by the order in which we place the controls in the panel on the right.

SPEAKER_05:

I was gonna ask, is the order there on the right, is that your preferred order of processing?

SPEAKER_04:

For example, in the light and color panels, or in the basic panel in Lightroom Classic, the controls are generally placed in the top-down recommended order. But the reality is people bounce back and forth between them, and sometimes will jump to one at the bottom before they go back to the top, right? And those habits are very difficult to change, especially if you've found something that works for you. So we try really hard not to prescribe any particular order for the controls. But there are some, especially now with a lot of the more recent controls like denoise, ones that produce bitmap images, that have interesting and challenging interactions between them, right? Like if you were to denoise an image, but you've also done some removal, clone-and-heal type operations, well, you want to make sure that those spots are also properly denoised afterwards, right? And so it's a challenging interaction between these sophisticated controls that produce bitmap images, because if you change one, you kind of have to change the other.
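
The order-dependence Eric describes falls naturally out of the parametric model: edits are stored as a recipe of steps, and the pixels are re-rendered from the untouched original each time, so a bitmap-producing step like denoise is simply recomputed after any earlier change such as a heal. A toy sketch of that model, purely illustrative and not Lightroom's actual architecture:

```python
# Toy parametric, non-destructive edit pipeline: the original pixels are
# never modified; a recipe of named steps is re-applied from scratch on
# every render. Illustrative model only, not how Lightroom is built.

def render(original, recipe):
    """Apply each (name, fn) step in order to a fresh copy of the image."""
    img = list(original)  # never mutate the source
    for name, fn in recipe:
        img = fn(img)
    return img

original = [0.2, 0.9, 0.2, 0.9]  # a tiny 1-D "image"; index 1 is a dust spot

def heal(img):
    """Replace the 'dust spot' at index 1 with its left neighbor."""
    img = list(img)
    img[1] = img[0]
    return img

def denoise(img):
    """Crude smoothing: average each value with its neighbors."""
    out = []
    for i in range(len(img)):
        window = img[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

recipe = [("heal", heal), ("denoise", denoise)]
```

Because `render` always starts from `original`, reordering or removing a step just means re-running the recipe: the healed spot is guaranteed to pass through denoise too, which is exactly the consistency problem Eric says the real controls must manage.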

SPEAKER_05:

You know, I'm chuckling because I was just out this morning, and we had a really, really foggy morning, and took some shots. And you've gotta know, the first thing I went for when I called them up was Dehaze. Well down on the totem pole there.

SPEAKER_04:

Yeah, Dehaze, we actually have it in a couple of different places; there's an interesting story there, right? In Lightroom Classic, it's part of the basic panel. In the rest of the Lightroom apps, including Camera Raw, it's in the effects panel. And it's one of these interesting conversations where internally we debated: is Dehaze kind of a top-level tonal control, like exposure and contrast? That's an argument for having it in basic. Or is it more of something for creative effect, in which case it belongs in effects, like vignettes and things like that. We had people on both sides of that fence, and as you can tell, we didn't all agree, because we have different products that put it in different places. But ultimately, regardless of where you put it, people know what they're looking for when they try it out: the scene comes in a little bit obscured for whatever reason, right? Sometimes it's because there's just a little bit of atmosphere, sometimes because it really is foggy and it's very obvious that way. Sometimes they're photographing through an object, like some pane of glass, where there's just something reducing the visibility. And Dehaze runs a little analysis of the image to figure out kind of a mask of what is obscuring the image, and then tries to subtract it out. So in a way, visually, I think of it as a mix of the blacks slider with saturation and contrast.
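
A published technique with exactly this "estimate the veil, then subtract it" flavor is the dark channel prior (He, Sun, and Tang, 2009). Adobe has not published Dehaze's internals, so the per-pixel sketch below is only an illustration of that general idea, with an assumed airlight value rather than one estimated from the image:

```python
# Minimal sketch of dark-channel-prior dehazing, per-pixel for brevity
# (the published method uses local patches and refines the transmission
# map). Adobe's actual Dehaze algorithm is not public; this only
# illustrates the "estimate the veil, subtract it out" idea.

def dehaze(pixels, airlight=1.0, omega=0.95, t_min=0.1):
    """pixels: list of (r, g, b) values in [0, 1]. Returns dehazed pixels.

    Haze model: I = J*t + A*(1 - t), where A is the airlight and t the
    transmission. The darkest channel of a hazy pixel approximates the
    veil, so we estimate t from it and invert the model to recover J."""
    out = []
    for p in pixels:
        dark = min(c / airlight for c in p)   # haze veil estimate
        t = max(1.0 - omega * dark, t_min)    # estimated transmission
        out.append(tuple((c - airlight) / t + airlight for c in p))
    return out
```

On a flat, washed-out gray pixel the estimated veil is large, so the recovered value drops well below the input, which matches Eric's description of Dehaze behaving like blacks plus contrast plus saturation.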

SPEAKER_05:

When you're working on a project or an idea, are you working sort of cross-program? I mean, do you know that this is gonna be for Photoshop or Camera Raw or Lightroom? Or is the implementation, in which package, a kind of separate decision?

SPEAKER_04:

That's a very feature-by-feature decision, I think. Certain things, for example technologies to remove an object from an image or replace it with something else, tend to be cross-product, or even cross business units at Adobe, because there's a general need to do that kind of editing for different types of things. Whether it's photography or graphic illustration and design products, they all benefit from that type of technology in some form or another. But the way it manifests inside of a product tends to be very product-focused. So as an example, for the generative remove feature in Lightroom: there are versions of this in other products like Photoshop, but the version in Photoshop tends to be associated with a text prompt, right? The user can make a mask and say, I want to fill it with bubbles or something. And for photography, especially Lightroom and Camera Raw, we made a very conscious choice not to do that. We think, no, your photo itself should be the source of truth for what you're editing. And so we don't want people typing in, you know, insert some rabbit with fancy ears on this photo. So we very deliberately did not put a text prompt in Generative Remove. Even though the technology is shared, the way it manifests in a product is very case by case.

SPEAKER_05:

I'm chuckling because I'm looking at a picture of a rabbit, and there are no ears in this picture. You've got it cropped right around his face.

SPEAKER_04:

No edits done, you know. No rabbits were harmed in the making of that photo.

SPEAKER_05:

It's a great picture. You know, I'm really fascinated by this, because 99% of my work is in Lightroom, and I use Photoshop for two things: text boxes and canvas sizes. And if those two were moved over into Lightroom, I would probably never use Photoshop at all. For me, I mean; I'm sure I might develop another workflow based on a potential that's over there. But you're dead on. Lightroom really is photo-centric. Photoshop is photo-potential-centric; how about that? What next step can I do with it? What advice have you got for people, both beginners and pros, who are sitting down? I mean, a pro may sit down at any of these programs and say, I know what's here, and you're gonna shake your head and go, no, you don't. Is there advice across the board that you would give to users of your products?

SPEAKER_04:

I think, especially in the Lightroom and Camera Raw case, so much of the design principle from the very beginning has been that everything can be undone. Exactly, right? Everything is this parametric, non-destructive editing environment. It's really intended to encourage photographers to explore and try things, right? So even if you are a seasoned professional or very experienced photographer, I think because the tools are changing and improving all the time, it's a lot of fun to go back and revisit old photos and see: not only, if the tools have changed, might you be able to do more with them, but also, just because time has passed, your vision for that photo may have changed, right?

SPEAKER_05:

And so Eric, do you have any idea how many hours I've spent looking at pictures from five, 10 years ago thinking, oh, now I can do this?

SPEAKER_04:

I know, or: I remember how hard it was, how much of a pain in the rear it was to do that, and now this tool just makes it feel unfair, right? Yeah, I think there is a lot of temptation to just take new photos, take new photos, always looking forward to the next picture. But I think it's also worth looking back at existing photos. Part of it is fun because you like those photos and it's fun to spend more time with them, but also, from a practice point of view, it's about improving one's skills with the tools and figuring out whether there's a better way to do something. And there's nothing wrong with holding on to both versions of a photo, right? It's a little bit like looking at the number of times Ansel Adams printed his Monolith, the Face of Half Dome picture, right? There are just so many different versions of that from different years, on different papers, with different visions for how he wanted it to be printed. You can do that digitally as well. And I think that's one of the most fun things one can do besides taking new pictures.

SPEAKER_05:

Yeah, I'm looking at your website, and because of my own personal love for black and white, that's the part I'm looking at right now. And I'm wondering: in Lightroom at least, when I click on the little black and white thing up there, it's still an RGB file. I've still got all those color tones just hiding back there. From the coding side, from the technical side, is dealing with black and white any different than dealing with color, or is it a completely different ballgame?

SPEAKER_04:

Yeah, I think it is a little bit different, in that a lot of the cues that we have for bringing out contrast are really flattened into one dimension, right? So it puts a lot of burden onto controls like clarity, highlights, and shadows to do the heavy lifting. I think it also really increases the importance of the masking controls. Because if you rely only on global controls with a color image and you're not seeing enough differentiation in tone, maybe you can use one of the HSL mixer color controls to bring things out, right? And in black and white, there's just a lot less freedom to do that, because it's black and white. So being able to have a rich masking experience that works well with black and white is really important. And one of the things I was really happy with, I think it was a couple of years ago, is that we finally introduced curves as a masking adjustment, right? I think that really helped black and white in particular. Like, if you've got one cloud in the top right of the picture that's just not standing out enough from the sky, or from the mountain above it, or something like that, you can just go and tweak that one thing. And it's a lot easier to do that with a curve than with other tools.
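
Scott's "still an RGB file" observation is also why the B&W mix controls work at all: the gray value is computed as a weighted mix of the color channels, so two colors that land on the same gray under one set of weights can be pulled apart under another. A sketch with illustrative weights, not Lightroom's internal values:

```python
def bw_mix(pixel, weights=(0.3, 0.6, 0.1)):
    """Convert an (r, g, b) pixel to gray as a weighted channel mix.
    The weights play the role of the B&W mix sliders; these particular
    numbers are illustrative, not Lightroom's internal values."""
    r, g, b = pixel
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# Two colors that differ in hue but not much in overall brightness:
red_flower = (0.8, 0.2, 0.2)
green_leaf = (0.2, 0.8, 0.2)
```

Under equal weights the flower and the leaf collapse to the same gray, which is the "flattened into one dimension" problem; boosting the red weight separates them again, which is what the color-aware B&W mix (and, more locally, masking) buys you.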

SPEAKER_05:

How tough were the new landscape features that just came out, from a technical standpoint, from a coder's point of view?

SPEAKER_04:

Yeah, a lot of these scene-based things are challenging in a couple of ways. One is that there's just a great variety of content out there, in terms of the combinations of elements we're trying to tease out: foliage, water, mountains, and so forth. And at the same time, having a dedicated model for each one of these elements is not really tractable yet from a performance standpoint, because if you imagine running seven different models to find seven different landscape scene elements, it would just take way too long. I think the main challenge for us was to balance this: how can we keep the running time acceptable for most of the devices we know our customers have, but also produce a sufficiently high level of detection success, right? One of the measures of success for such a feature, from the technical side, is not whether the mask edge is perfect, but whether, when you do an adjustment to that mask, it actually looks like a good photograph. In other words, does it look photographically plausible? You were talking earlier about how doing a lot of masking-based adjustments can actually be a pitfall, right? It can make the image look very obviously fake, or no longer like a cohesive photograph. To me, that's actually the main challenge with a lot of these landscape, subject, sky, and people-based masks: they do tend to break down a little bit if you push the adjustments too far, because it doesn't look like it was all taken as one photograph. It starts to look like a composite. And that's a line we're always trying to balance.

SPEAKER_05:

Is there a holy grail out there for you guys these days? Is there an effect that's just beyond being released yet?

SPEAKER_04:

You mean like a visual editing control, in terms of what we're trying to build?

SPEAKER_05:

Yeah. What is a visual editing control?

SPEAKER_04:

Yeah, so Lightroom is quite mature at this point. We have a lot of the fundamentals there, and there are always things that can be improved and iterated on. I think a big area of focus now, looking ahead, is this: you were saying earlier how some photographers really gravitate to the capture process, and others really gravitate toward the editing process after the fact. Looking at the second part, I think there's a lot more we can do in terms of having the tools better assist in editing photos. I'm thinking even of busy professionals who have a lot of experience editing photos. Presets today, for example, are very helpful for setting controls to a fixed set of values, but they're not particularly adaptive to the underlying image content, right? They're just copy-and-pasting fixed slider values. We're very interested in expanding that capability to be more flexible: to be able to make something that reflects your particular style, the way you like to edit, and then apply it in the future to a group of related photos and have the result be similar to how you edited the first one. I think there are a lot of assisted ways to improve workflows and to improve editing, which is not really about a new tool that creates a kind of picture you couldn't have made before; it's really more about improving the workflow and making it easier.
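
The fixed-versus-adaptive distinction Eric draws can be made concrete with a toy example: a fixed preset applies the same slider value to every image, while an adaptive one derives the value from the image's own statistics. Everything here, including the mid-gray target, is a hypothetical illustration, not Adobe's design:

```python
from math import log2

TARGET_MEAN = 0.45  # hypothetical mid-gray target brightness

def apply_exposure(image, ev):
    """Scale pixel values by 2**ev, clipped to [0, 1]."""
    return [min(1.0, v * 2 ** ev) for v in image]

def fixed_preset(image):
    """A fixed preset: the same +1 EV regardless of content."""
    return apply_exposure(image, 1.0)

def adaptive_preset(image):
    """An adaptive edit: choose the EV that brings the image's mean
    to the target (assumes a nonzero mean)."""
    mean = sum(image) / len(image)
    ev = log2(TARGET_MEAN / mean)
    return apply_exposure(image, ev)
```

Run on a dark frame and a bright frame, the fixed preset produces very different results, while the adaptive one lands both at the same target brightness, which is the "similar to how you edited the first one" behavior Eric describes.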

SPEAKER_05:

Okay.

SPEAKER_04:

Part of that, I think, is because we hear from a lot of people that they like the capture part, but they want to reduce the amount of time they spend editing, right?

SPEAKER_05:

Yeah, that has always fascinated me, because I am not a fan of presets. I'm not a fan of even the recipes, the profiles that are in there. Because I love the ability, and I love the experience, of sitting here at my computer and playing with things. Workflow speed, making it faster, to me is just wrong. I don't want it to be too much slower, but I want to consider the image the way I would, old school, dodging and burning carefully and spending hours on a print. I don't want to spend 35 seconds on a file. I want to spend consideration on it; maybe not time, but consideration.

SPEAKER_04:

Yeah, I relate to that a lot, Scott. I've found in recent years that I myself, as a photographer, experience both sides of that. There are some of my images, especially the ones that you've been seeing on my website, that are in the former camp. Those are all done by hand; I didn't use presets on those, I considered those, right? Those were tinkered with, and I enjoyed the process of doing that. I'm not looking to speed that process up. But then I've also been on the other side of it, with images that I don't consider to belong in that category. They're maybe more like happy snaps in my life. Like, I like to collect pictures of funny signs that I've seen in cities I travel through, and just have them be part of my, hey, look at this funny thing that I saw. Those are not ones I'm gonna spend a lot of time doing masking and things like that on, right? But I do want them to have some consistent visual style of some sort, and I can imagine having something that goes through those more quickly. So I think, depending on the use case, there's kind of something in it for everyone.

SPEAKER_05:

Are we gonna get to the point, like with my phone, where I can be walking along and think, I wonder what kind of tree that is, whip it out, there's the little Google search, and it'll say, oh, that's a sycamore. Are we gonna get to the point where in Lightroom I can, almost like in that scene in Blade Runner where he's doing the voice commands on the pictures, say: computer, reduce all the sycamores 15% in contrast?

SPEAKER_04:

Yeah, like a voice- or prompt-based way of doing editing. Minority Report style, yeah. I think certainly a lot of the ongoing chatbot-based, large-language-model editing experiences are looking that way. I think the jury's still out on how that experience might ever show up in Lightroom, in terms of how it would look. But, what's that?

SPEAKER_05:

Will I be able to tell it to change the sycamores some degree and the evergreens some other degree?

SPEAKER_04:

Yeah, I don't know if it would ever be text- or voice-prompt based like that. But if you've seen how the landscape features work, right, it could be much more specific than just trees. It could be a lot more specific.

SPEAKER_05:

Okay, yeah, because you've got vegetation now.

SPEAKER_04:

Yeah, vegetation, right? It's just not as specific as the species or things like that. But you can totally imagine getting a lot more specific and having ways of editing like: take the top right part of the sky and make it a little bit more like the top left part of the sky. Or, like you say, I want the brightest leaves of the sycamore trees to be toned down 15%, or I want the shadows to be denoised 10%, or something like that. I feel like all of these sorts of methods, as long as the photographer retains creative control over how the image looks, are good directions for us to explore. I do think a lot of times, especially, new photographers have a vision for how they want the image to look, but they don't have the vocabulary yet to know how to describe it, and this would be a good way for them to get started. It maybe doesn't really help experienced photographers, who already know what they want to do and how to use the controls to do it. But for those who are more comfortable describing the intended outcome, but don't really know yet how to make it work tool-wise, this might be an interesting avenue.

SPEAKER_05:

That is so cool. Looking at all the images on your website here, is there one story in post-production, one image that says, this one I'm really proud of, or this one, no matter what I try, I can't get it the way I really like it?

SPEAKER_04:

Well, Scott, the interesting thing about working on all the edit tools in Lightroom, and this is going to sound very strange, is that I have this weird pattern where once I've worked on a tool for a long time, I actually try not to use it in my photos. I don't know whether this is a reactionary thing.

SPEAKER_05:

Come on, you're using contrast in here. I can tell. Yes, this is true.

SPEAKER_04:

So the basic stuff, of course, I use a lot. But the interesting thing is, for almost all the photos on the page, I use very little outside of the basic panel. The serious stuff that's in masking, of course, I use. But I think some of the ones I'm proudest of are the ones where I might have done a slight crop and a little bit of a contrast adjustment, but the image was already very close to how I wanted it to look. The ones that I have come back to recently are ones that typically have the sun in them; they're almost like backlit scenes. Like, I have an image from Nepal, of Fishtail Mountain with prayer flags in the foreground. Yeah, beautiful shot. And that's an interesting example of an image where, normally for a landscape image, I would set up the shot, try to anticipate a little bit, and be very considerate with it. And this is one of those cases where I felt like it broke the rules a little bit, in the sense that I was on a trek and I didn't have my tripod; I was trying to go light. This was not an image that was planned, but I knew the morning that we were trekking up that way that the sun was coming up behind me, and I saw these prayer flags and thought, maybe if I wait five minutes here, something will happen as it comes up behind the mountain. So this is an image that was found on the spot. And then in post-processing, at the time, it was very hard to convey the sense of light in the picture, just because one can't really convey the brightness of the sun.
But afterwards, in the past few years, with display technology getting that much better, we've been able to use the newer HDR output feature in Lightroom, at least for HDR-capable displays, to really edit this picture in a way that I think better shows off the feeling of being there and seeing the light shine through the prayer flags. That's right.

SPEAKER_05:

Yeah. I mean, and again, I'm looking at it right now; it's such a magnificent shot. And you mentioned HDR. We're gonna run out of time here in a second, but I want to tell everybody that you've got a wonderful blog where you talk about things like HDR, where you talk about some of the elements that go into post-production, in a really clear and instructive way. Kudos to you, man; that is absolutely great work out there. And I was listening to you chuckling, saying, once I work on a tool for so long, I tend not to use it. I absolutely believe that the better we get in post-production, learning what's possible, the better we get at capturing stuff in camera, because we know what's possible. We know where we're going. And if I can catch it there, that's the magic. That's the dance we're all looking for. Eric, thank you. This has been an absolutely magnificent conversation. I am impressed as all get-out, not only with your own photography, but with the work you're doing for every single one of us at Adobe. And as I mentioned at the beginning, we've all seen your name, because it's there in that list of names that goes by real quick at the very beginning. Thank you very much. I've enjoyed this.

SPEAKER_04:

Thank you so much, Scott, for having me. It's been great fun.

SPEAKER_00:

Frames. Because excellent photography belongs on paper. Visit us at www.readframes.com.