Color & Coffee
Color & Coffee is a film post-production podcast focused on the craft of color grading, hosted by colorist and finishing artist Jason Bowdach, CSI. Jason chats with a variety of post-production professionals for intimate discussions on their craft, their passions, and of course, their favorite beverage of choice.
Look Dev Is Not Color Grading with the HAL Pictures Team
Look development and color grading share tools, but they are not the same job. We sit down with the Hal Pictures team — Martin Roux, Olivier Patron, Paul Morin, and Antoine Mayette — to dig into how a group of French DOPs, a DIT, and a director built the tools they couldn't find anywhere else: Diachromie and Diaphanie.
The conversation starts where the frustration did. Post houses in France lacked proper look development infrastructure, and what existed elsewhere was either inaccessible or wrong for their workflow. So they built their own, rooted in a principle that any image can be decomposed into three parts: contrast, color tone curves, and color volume. Understand those, and you can describe a look. If you can describe it, you can build it.
We break down Diachromie's approach to shaping the color volume parametrically inside a color-managed pipeline, and Diaphanie's frequency separation engine for texture — MTF, halation, bloom, and grain as independent, composable layers. The team explains why they separated color from texture, why that opens up aesthetics beyond film emulation, and why algorithm-based tools and sample-based tools are different propositions — not better or worse, just different.
There's also a practical conversation about the limits of OpenFX, running multiple plugin instances to work around fixed order-of-operations, and how the preset system carries look development knowledge from project to project.
These are practitioners who built for their own needs, then released it. That lineage shows in every design decision.
Subscribe, share with a fellow color nerd, and leave a review. Find Hal Pictures and the demo version of both tools at hal-picture.com.
Guest Links:
IG - https://www.instagram.com/hal_picture/
Website - https://hal-picture.com/
PixelTools
Modern Color Grading Tools and Presets for DaVinci Resolve
FSI
High-Quality Reference Displays for Editors, Colorists, and DITs
DeMystify Color
Color Training and Color Grading Tools
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Like the show? Leave a review!
This episode is brought to you by FSI, DeMystify Color, and PixelTools
Follow Us on Social:
- Instagram @colorandcoffeepodcast
- YouTube @ColorandCoffee
Produced by Bowdacious Media LLC
Split Texture From Color
Speaker 3: Most modern productions want a certain part of film behavior, but also some digital behavior, and being able to split the texture from the color, and to build a node graph with a bit of texture and then the color, is a better way to explore new aesthetics. Because if you just want to make a good film look, there's a typical order, which is fine, and that's it, that's a film look. But the main idea is to be able to make pretty good film looks, and also, more importantly, to be able to move away from film looks toward new aesthetics, toward things that are newer. You can rely on a lot of behavior from film, but also have new things in your image. And that's why it's important to split the way you think between color behavior and texture behavior.
Speaker: Welcome to Color and Coffee, a podcast focused on the craft of color and the artists behind it. I'm your host, Jason Bowdach, and each episode we'll sit down with some of the most talented artists in the industry and have a casual chat from one artist to another. We'll share their stories, their insights, their tips, and maybe even a little gear talk. Whether you're a seasoned pro or just getting started, join us for some great color discussion. Sit back, relax, you're listening to Color and Coffee. Hello and welcome to another episode of Color and Coffee. I'm so happy to have you here. I have such a fun episode for you today: the Hal Pictures team, with Paul Morin, Martin Roux, Olivier Patron, and Antoine Mayette. Thank you guys for joining us today on the podcast.
Speaker 2: Hi, Jason. Thanks for having us.
From Set Work To Toolmaking
Speaker: First off, I have to thank all of you for joining us today. I think this is the largest group that we've had on the podcast, and I'm so excited to ask you about your tools, Diachromy and Diaphany. So, to start, I want to ask how you got into the tool creation business. Because if I'm not mistaken, you actually started out as practitioners, DOPs, DITs, and a director, correct?
Speaker 3: Yeah, that's right. Paul and I are DOPs, Olivier is a DIT, Antoine also directs films; all of us are practitioners. A few years ago we felt the need to develop our own tools, because we were a little bit disappointed by how our images were processed in French post houses. That was five or six years ago. A bit frustrating. There were no look development tools in any post house in France, and there were also very few DCTLs on the market, very few film look plugins, so we felt the need to start our own journey through color science and things like that. At the very beginning it was just for our own needs, but year after year the tools, which started as a small DCTL or a bunch of DCTLs and then an OFX plugin, became more and more complex. Friends first asked us to try it, to use it on their own feature films or TV shows, and so we had to finish the development to have something that is stable and robust enough. Because it takes a lot of time, the easiest way was to release something and sell it, to be able to continue working on it, because maintaining that more and more complex tool took a lot of time. So we released the first open version at the end of 2025, and we are very happy about that.
Speaker: I know. As a developer myself, I know creating your own internal tools and releasing a public tool that you're going to sell to others are two entirely different ball games. So I want to jump into the beginning. What were you not getting out of some of these post houses? What were your needs, and what were you not getting? Because color correction has been around, at least in its current form, digital color correction has been around for almost 25 or 26 years at this point. What were you feeling limited by?
Speaker 2: I would say there are two sides to this answer. Tools were missing, but also maybe a global understanding of what a digital image is and how it's supposed to be processed. I'm not saying there's only one way to process a digital image, but it felt sometimes, and we were doing TV shows and feature films back then already, even then you could feel that there was some kind of uncertainty about how to process the images, so you weren't sure exactly what the outcome of your image would be after being processed, or why it would be like that. I know it sounds like a huge statement, but it actually was true most of the time. And the expert houses that really had this control were not available for many productions, because they had a savoir-faire, and so it was expensive, hard to get. So the mainstream understanding of how to process an image, and of what to do to get a proper, correct result on screen and when broadcasting, was not very well diffused. And if you don't have the understanding, even if the tools exist, it doesn't work. But also the tools felt a bit limited, or again too expensive to be accessible for a medium French production, and we're not talking about the same scale of numbers in terms of production economy as in the US. So it was a mix of both these issues. As it happens, we all come from the same film school, in different graduating classes across different years; there's a 10-year span covered here, I think. And that's not for nothing, because this is a film school with a huge technical and scientific background, where art meets that: your craft is really founded on a lot of technical and scientific skills. So, with those skills, and also with the time we got during the first COVID lockdown, each of us on our own, alone at first, we felt that something had to be done with color science, whatever that meant at the time. Steve Yedlin had released his first video, I think, the Display Prep Demo. I think that was a starting point for a lot of people, or if not a starting point, at least something that really fed the bear inside. Speaking for myself, I spent a lot of time during the lockdown trying to figure out what this field of color science was. I could understand that there was something to be done with it in terms of image processing, but it felt so vast and so complex. And then Martin published a small paper on the AFC website, which is the French association of cinematographers, talking about the idea that there was something beyond color in digital, and that there needed to be an effort of research in terms of look development and color science. He had these hints that I could feel were common to mine. I think Olivier got in contact with Martin in the same fashion, and so it started with the three of us, and Antoine joined a bit later. It started with the idea that this field existed, it was unexplored, and there was something that could be done with it. With the three of us put together, we had enough skills combined that maybe we could crack enough of the case, and enough momentum, because we had to go somewhere.
Speaker 1: Because each of us alone had started something, looking for some answers, but on my own I was often asked by DOPs how to get to look creation and all that, and I was trying to craft some stuff here and there, crafting lookup tables from MATLAB, I remember. But you cannot go anywhere with that. So at some point you lose some courage. But when we found ourselves really getting somewhere together, it helped.
Look Development Versus Grading
Speaker 2: Yeah, there is strength in numbers, and also we did all that, and we still do all that, as our secondary work. My primary work is still DOP on TV shows and feature films. Same for Martin, and you're still doing a lot of DIT work, that's your main work. So being several of us allowed us to work during our free time, and to combine that spare time into enough time to actually get somewhere. If we'd been alone, it would have been too high a mountain to climb.
Speaker 3: To be a little more specific about your question of what was missing: the ability to develop looks in a color-managed workflow, and to develop looks only, which is different from color grading. A look alone is a color behavior that is not on a shot-by-shot basis, but something more global that you can apply to a full feature film or a full TV series, which is robust enough to be applied to different cameras, and which is just the color, the rules of how the color behaves. To be able to shape this behavior in a color-managed workflow, there were very few tools, and there still are very few, because there are tools dedicated to color grading, and there are a lot of them, and they are very good for that. There are a lot of pre-made looks and catalogs of LUTs and things like that, which are great also. But if you are looking for parametric tools to shape the color volume inside a color-managed workflow specifically, tools that behave coherently inside a color-managed workflow, which means in relation with the DRT, there are very, very few. There are some DCTLs. There is, of course, Chromogen from Baselight, but Chromogen was not available, or it was in a beta state, I think, when we started.
Speaker 2: And even if it had been available when we started, in France Baselight is really not the majority; most post houses do not have Baselight, it's not easily accessible, so it would still have been a blocking criterion, because we wouldn't have been able to access it on just any production. The idea was to have a tool that was not camera dependent and not post-production-software dependent. That's true for any kind of production, but we as filmmakers didn't always have a choice of where our projects would be post-produced. So we wanted to be able to say, okay, whichever tools you use, as long as they're production ready, we can embed our solution and it won't break your workflow; it will actually improve your workflow. At first it was about making sure people trusted the tool: it works, it plays along very nicely with the workflow, and it works with whichever camera we had access to, whichever grading suite we had access to. That was an important criterion at that moment.
Speaker: I love the approach you took to get there. For me, it falls perfectly in line with this evolving area of directors of photography and DITs taking more ownership of the image and saying, I don't have the tool set I need to have proper authorship over my image, and I need additional tools to do that. And I don't want to use color grading tools for it and make them fit in a box where they don't fit properly. So I love that you put the rubber to the road and said, if no one else is going to do this, we're going to do it. And even if it's being done somewhere else, we want to make it as accessible as possible, specifically for our productions. I love that, and I'd say you're part of this new generation of people saying, if nobody else is going to make these tools and give us what we need to have the authorship we want over our image, then we're going to do it ourselves. So let's jump into each tool individually, because instead of making a single tool, you made two different tools, correct?
Speaker 5: Yeah.
Speaker: We have two different tools, Diachromy and Diaphany. One of them is more of a color volume tool, and one is more of a texture management, MTF-style tool, correct? Yeah, exactly. So I'm going to ask you a probably loaded question: which one's your favorite? It depends in terms of what, you say.
Speaker 2: I think the question is, as a user, if you can only have one, which one do you take? Because if you ask me which one I prefer, it's hard. But if I have to take only one on a production, I think I'll take Diaphany, because the control over the texture is crazy. I wouldn't find a tenth as satisfying a solution elsewhere, whereas I can see how I could make my colorist work hard and find other ways to get where I want to go color-wise. So I think Diaphany would be a little bit more essential. But we're really talking about a small margin.
Diachromy Breaks Looks Into Three
Speaker: Yeah, they're both really unique tools, and I actually want to dive into Diachromy first, but I had to ask that, because to be honest, I think they're both very different tools, but I find that when you create something you tend to have an affinity toward one or the other, even if you don't want to, and even if they do very different things it's hard not to have a little bit of a favorite. So I thought that was an interesting question. So, Diachromy: we know what the purpose is. It's basically going to shape color volume, it's going to adjust hue and density, and you actually have some built-in presets there. But what was the mentality you went in with to differentiate it from some of the built-in tools? I'm sure Color Slice didn't exist in Resolve when you were working on this. What were you thinking when you were developing it? Because you have a very unique mentality in both of these plugins.
Speaker 3: I think the main idea is that any look, of any analog or digital image, can be broken down into three components: a contrast curve; color tone curves, which in fact means three contrast curves; and a specific shape of the color volume. The color tone curves are in fact the gray axis of the volume, and then there is how you shape the volume all around that neutral axis. So if you can verbally express how you want your contrast curve, how you want your color cast or your dominant colors, and how you want each part of your color volume (reds, blues, yellows, magentas, greens, etc.) to behave, if you can express it, then you can create it. The main idea is to break the look creation down into those three parts. So you can create your own look, of course, but you can also edit an already-made look very easily, touching just one of those three parts. That separation is the main idea, I think.
Speaker 2: Yeah, the idea is that it can be a bit overwhelming to say what a look should be, how your image should look. Describe your look, now. So our idea, and it was for us even primarily, was: let's break it down into singular characteristics that cover specific aspects of the look. That makes the conversation about the look simpler between the colorist, the DOP, the director, everybody involved in the creation process; it makes the creation of the look easier, and then its evolution easier as well, whether you're doing several versions of it during prep or even during the shoot. If you realize that something is not working, instead of, and I'm not exaggerating, panicking and just moving everything until you find a solution, you can, yourself, on set or in the lab, assess the situation and say, okay, your location actually has a red that's way stronger than we thought, so we have to work in the reds, and specifically maybe it's the way we compress the saturation, you're getting too much. You can narrow it down to one or a series of parameters.
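To make that three-part breakdown concrete, here is a minimal, hypothetical numpy sketch of a look applied in that order: contrast curve, then per-channel tone curves, then a hue-selective tweak to the color volume. The functions and parameters are illustrative, not Diachromy's actual math.

```python
import numpy as np

def contrast_curve(x, pivot=0.18, contrast=1.2):
    # One curve for overall contrast, pivoting around mid-gray.
    return pivot * (x / pivot) ** contrast

def color_tone_curves(rgb, lift=(0.0, 0.0, 0.01), gain=(1.0, 1.0, 0.98)):
    # Three per-channel curves: these bend the gray axis (shadow/highlight casts).
    return rgb * np.asarray(gain) + np.asarray(lift)

def shape_color_volume(rgb, hue_center=0.0, hue_width=60.0, sat_gain=0.8):
    # Hue-selective saturation change: reshaping one slice of the volume (here, the reds).
    gray = rgb.mean(axis=-1, keepdims=True)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    hue = np.degrees(np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)) % 360.0
    dist = np.minimum(np.abs(hue - hue_center), 360.0 - np.abs(hue - hue_center))
    weight = np.clip(1.0 - dist / hue_width, 0.0, 1.0)[..., None]
    return gray + (rgb - gray) * (1.0 + weight * (sat_gain - 1.0))

def apply_look(rgb):
    # Fixed order: contrast, then color tone curves, then color volume.
    rgb = contrast_curve(np.clip(rgb, 1e-6, None))
    rgb = color_tone_curves(rgb)
    return shape_color_volume(rgb)
```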
Speaker 1: Yeah, and that's why we also built the preset system in very early in the plugin development. You can bring a preset you crafted and liked to another show and just tweak it, maybe lighten the contrast curve; that's the main thing.
Speaker 2: There are several ideas in the design of the tools, but one was the one we just explained. Another was: look development is a complex operation, so you have to make it as simple as you can, but you don't want to make it too simple, because if you make it too simple you lose possibilities. We struggled over that balance for hours every time: should we remove this slider, should we keep this slider, how should we present the sliders? Even the order in which we present the sliders, and which ones are shown or hidden by default, is a reflection of what you use often and of the order in which you should think through your look. We think you should build your look by working first on the contrast, then on the color tone curves, then on the color volume. That doesn't mean you won't go back up, of course you will, but it's a rule-of-thumb path to follow because it makes life easier. And also to make it easier, the presets: the idea was that look development is hard, and it's actually a long-term game. The more look development you do, the better you get at it; I'm sure you have that experience as well. You never stop learning. So the idea was, what's the best way to keep learning from my previous experiences? It's to have the ability to start from them without being locked into my last look. Okay, I did that, it was okay, I liked it, I really liked the contrast, but the color tone cast was a bit too much, or it's just not going to fit this project. Well, I'm just going to load the contrast part of my previous preset, and then I'll start from scratch on the color tone curves, or not from scratch, but I know I'll have a lot of editing to do on them. The idea was that it can be incremental; it doesn't have to be starting from scratch every time, because that takes time, it takes skills, and we live in an environment where time is a constraint. You don't always have two days of prep for a look. Maybe you had this great feature film with a lot of money where you took a lot of time to prep the look, and that was great. Maybe you can reuse some of that time for the next project, which might be a short film or a documentary with less budget, and say, okay, I'll just start from the preset I built last time and edit it, because I know I took a lot of time on that preset, I know it's very robust, so I'm going to start from there. So when building the tools, a lot was built on the way we think they should be used: how do we make life simpler while preserving the power of the features we invented.
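To illustrate that partial-reuse idea, here is a hypothetical preset layout (not HAL's actual file format), with one section per pillar of the look, so you can pull in only the contrast of a previous project and start the rest fresh.

```python
import json

# Hypothetical preset with one section per pillar of the look.
last_project = {
    "contrast": {"pivot": 0.18, "contrast": 1.25, "mid_push": 0.05},
    "tone_curves": {"shadow_cast": [0.00, 0.00, 0.02], "highlight_cast": [0.01, 0.00, 0.00]},
    "color_volume": {"red_density": 0.30, "green_hue_shift": -4.0},
}

def load_sections(neutral, preset, sections):
    # Start from a neutral look and copy in only the requested sections.
    merged = dict(neutral)
    for name in sections:
        merged[name] = preset[name]
    return merged

neutral_look = {"contrast": {}, "tone_curves": {}, "color_volume": {}}
new_look = load_sections(neutral_look, last_project, ["contrast"])
print(json.dumps(new_look, indent=2))
```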
Speaker 1: We fixed the processing order, the order of operations. We fixed it to try to simplify the way you craft the look.
Presets That Keep Looks Flexible
Speaker: That was one of the first things I noticed, how much thought you've clearly put into it. First off, I don't see a lot of OFX plugins that let you import and export looks. Yes, you can obviously create a PowerGrade from it, but you literally let me export a preset right from the OFX plugin, which I think is genius, by the way, because you don't want to start from scratch each time. Look development is not black or white, it's sort of a gray zone. If you've put all this work into creating a nice look, you want to save it and, like you said, start from it next time and maybe continue adjusting it, because you might come up with something totally different, or you might just need to adjust, like you said, only the contrast curve or only the color volume for a specific hue. So it's really nice to start with something proven. It's also really nice for clients when you already know you have something robust and you don't need to start from scratch again. I thought that was a really smart part of the plugin. And talking about order of operations, that's the biggest difference between something you develop for internal use and something you send out into the world: you are almost teaching people how to use the tool through the order of operations. As I was going through it, I was basically being taught what you suggest using first. That's what I thought was so different about your tool: it feels both technical and not technical, and it's very clear you separated things out into these three areas for look development. I really loved that. I liked how I could easily set the preset, and what the preset was applied to, on the spot. It's a very neat mentality, and I can see how it comes in handy, especially if you already have a couple of favorites. Off the top of my head, there's a silvery bleach kind of look, and a lot of people get caught up with that and immediately go, well, how is bleach bypass done and how do I do that? You jump to the other side of that and say, here's what's involved; what do you want to keep? It doesn't really matter how bleach used to be done; this is what you want to keep and what works for your production. The other tool, Diaphany, I thought was really, really interesting. The first thing that made me go wow was the texture. You basically split the texture into five or six zones, if I'm not mistaken, and I just loved how easy it was to adjust it. It was so quick and so easy. Yes, we have a couple of these frequency separation tools in Resolve, but this was set up so nicely out of the box that it immediately let me create very different imagery. So honestly, fantastic job. For those who haven't checked out this tool yet, it focuses on MTF characteristics through frequency separation, halation, bloom, and grain, correct me if I'm mistaken.
Speaker 3: Yeah, for now, that's what's inside. And what's very cool about Diaphany is that we can add stuff later, because there are plenty of other textures, spatial operators, optical stuff we can put in there. But for now, that's what's inside.
Speaker 2: And so it's a sandbox, yeah.
Speaker 3: We have a few ideas for adding more sections to Diaphany. But you're right, the frequency separation part is one of my favorites, and for very specific reasons. There are several options for frequency separation out there, but I don't like the one in DaVinci Resolve that much; it behaves a little oddly. It can help when it's the only thing you have, but it's a bit odd. And of course there is Baselight's, which is kind of the standard one. The way it's done, I guess, is a Laplacian pyramid: a subtraction of blurred images to remove the details and get each frequency band. The main issue is that the smallest zone can only be so small; it cannot be one pixel, because the smallest zone is, how do you explain it, the minimum size of a three-by-three kernel, the minimum size of blur you can generate. So when it comes to removing very, very small details, that's a limit. When you just want a very digital image but want to remove only what's too sharp, it's not perfect. There's also the fact that it applies gain only on all the frequency selections, and when you apply gain only, you pretty quickly get halos, because gain alone is not enough. So we made our math a little different. We started with gain only, and in the end I think it's very powerful. You can adjust the frequency bands if you want to deep dive into that, you can create your own selection, and it's very, very powerful, in my opinion.
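For readers who haven't used these tools, here is a rough, illustrative sketch of the Laplacian-pyramid idea described above (not Diaphany's engine): split the image into frequency bands by subtracting successively blurred copies, apply a gain per band, and sum the bands back. Pushing the gains hard is exactly where the halos mentioned above come from.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur (kernel truncated at 3 sigma, 'same' convolution).
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    rows = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, rows)

def split_bands(img, sigmas=(1, 2, 4, 8, 16)):
    # Each band is the detail lost between two blur levels; the residual is the base layer.
    bands, current = [], img
    for sigma in sigmas:
        blurred = gaussian_blur(current, sigma)
        bands.append(current - blurred)
        current = blurred
    return bands, current

def recombine(bands, base, gains):
    # Per-band gain only: simple, but extreme gains ring and halo around edges.
    out = base.copy()
    for band, gain in zip(bands, gains):
        out = out + gain * band
    return out
```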
Speaker 1: Of every tool, I love Diaphany. I love MTF work, but I love Diaphany because it's very simple in its appearance, yet it addresses very complex stuff.
Speaker 2: That's true, yeah. And for some reason we started working on Diachromy first; the first product was Diachromy, because color seemed simpler. Then we thought, okay, we're going to have to do some texture work, and it was actually way simpler than we expected.
Speaker 1: But color, Diachromy, we keep coming back to again and again. We think it's done, and then some guy shows us, look at this gradient, and we're like, oh yeah, okay, we need to take that up again. Yeah, because the other one helped us a lot.
Diaphany For Halation Bloom Grain
Speaker 2: Something I meant to say when we were on Diachromy: maybe the final pillar on which we built Diachromy was robustness, an ideal of robustness, which is, you're going to do your look, and we're going to do whatever we can mathematically to try to keep it from breaking. As with every tool, if you tweak it hard enough it will break, no worries, but we really wanted it to be as smooth as possible, and we found so many issues with that. Luckily for us, Antoine joined the team at some point, and he was like, okay, you guys have been in your heads for years and years; I'm fresh, let me look at it. And he fixed so many issues that we had never had the time, and probably also the skills, to get into at the time.
Speaker 1: Yeah, the skills, yeah.
Speaker 2: And so, ironically, I think Diaphany was ready for release a bit before that.
Speaker 1: Some nice glitches too, that needed addressing.
Speaker 4: I think the real difficulty with color tools is how intertwined everything is. When you want a tool that has a lot of controls, you need to be really careful not to break things, not to break the interaction between two controls. So it takes a lot of fine-tuning time. Whereas for a texture tool, all the components are independent from one another, so it's easier to address problems one by one. That's true. That's the difficulty, I think.
Speaker: Yeah. I mean, those are really difficult problems, and everybody's using slightly different workflows. You also have your own internal color management. You're working in ACES, correct? That's the internal color model.
Speaker 2: The internal data is ACEScct, AP1, and that's where we work. We're actually looking to simplify the color management interface a bit, because the way we designed it in the first version of Diachromy, I think we overdid it a little; the interface is a bit too complicated for what it does, and some people are a bit worried about it. So we have an updated version coming in the next weeks or so that will make it a little easier, simpler: fewer checkboxes and so on. You'll just say which color space you want to work in, and we deal with the rest internally. But yeah, our core color space is ACEScct, AP1.
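As a rough sketch of what such a color-managed wrapper implies (illustrative only, not the plugin's code): the look runs in an ACEScct-encoded AP1 working space and the result is converted back, so the surrounding pipeline and DRT stay untouched. The lin/log constants below are the published ACEScct ones; the timeline-space matrix conversion is assumed to be handled elsewhere.

```python
import numpy as np

# ACEScct encoding constants (per the ACES specification).
A, B = 10.5402377416545, 0.0729055341958355
X_BRK, Y_BRK = 0.0078125, 0.155251141552511

def lin_to_acescct(x):
    return np.where(x <= X_BRK, A * x + B,
                    (np.log2(np.maximum(x, 1e-10)) + 9.72) / 17.52)

def acescct_to_lin(y):
    return np.where(y <= Y_BRK, (y - B) / A, np.exp2(y * 17.52 - 9.72))

def apply_look_color_managed(rgb_ap1_linear, look_fn):
    # The look only ever sees ACEScct/AP1; the IDT before and DRT after are untouched.
    graded = look_fn(lin_to_acescct(rgb_ap1_linear))
    return acescct_to_lin(graded)
```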
Speaker: Yeah, so you have to deal with going back and forth between your internal data and the DRT, and what the user is putting in there. There's a bunch of stuff that can go wrong there, and, as you've said, when you twist the knobs to 11, things can sometimes go a little wrong. I'm interested that this doesn't happen as much on the texture side, or that you found it easier to resolve those issues, because you can't really twist hues there, but you are adjusting contrast in ways that can get pretty ugly. So I found it really interesting that you had fewer problems with that one. I think your halation model is really, really pretty, especially along with some bloom, and I thought they sit really nicely with each other. How was it developing these two tools alongside each other? Meaning, how do you deal with the interaction between them? You touched on it a little, but how do these tools interact with each other?
Speaker 3: That's a good question. Making two tools was very obvious at the beginning, simply because of the number of sliders; that would have been too many sliders in one. But sometimes we would love for it to be just one tool, because some interactions are precisely very tricky to manage.
Speaker 1: For example, if you are expanding contrast and then you add diffusion right after, you can send the values to the roof with a very, very soft shoulder, and then the halation from Diaphany will have difficulty kicking in. So maybe you would like to put halation before. But most of the time, Diachromy into Diaphany is a very good combo.
Speaker 2: But sometimes you can find new ways to do things by breaking it down differently. I think this question of Diachromy and Diaphany as two plugins touches on a topic that comes up a lot for us, which is the limitations of OpenFX. OpenFX is a great framework; we wouldn't be here if it didn't exist. But the interface is limiting: it's mostly lists and lists of sliders. We felt at some point that if some people already feel there are a lot of sliders, and I completely understand it can be overwhelming at first, we had to split texture and color just for that, to give users a chance, if I may say so. The user experience would have been awful otherwise. And then we realized it was also helpful because sometimes you can put the halation before the color and the rest of the texture after. It gives you a way to play with the order of operations, which you wouldn't have if we had fixed everything in one plugin. It gives you back that freedom. Sometimes we also put two instances of Diachromy, because we want to do the look, then some texture, then maybe some tinting after; it depends on what you're doing. So you can get around the fact that we fixed the order of operations by using several instances. It is a limit, and I think this is the best compromise within the development environment, which is being a plugin inside the OpenFX constraints. But it's something we've talked about a lot, and we still feel a little frustrated, because we feel we've pushed the walls as far as they go, and now the question is, what do we do next?
Speaker 3: There's also another explanation, which is that we don't want to make only film looks. Most modern productions want a certain part of film behavior, but also some digital behavior, and being able to split the texture from the color, and to build a node graph with a bit of texture and then the color, is a better way to explore new aesthetics. Because if you just want to make a good film look, there's a typical order, which is fine, and that's it, that's a film look. But the main idea is to be able to make pretty good film looks, and also, more importantly, to be able to move away from film looks toward new aesthetics, toward things that are newer. You can rely on a lot of behavior from film, but also have new things in your image. That's why it's important to split the way you think between color behavior and texture behavior.
A Better Frequency Separation Model
Speaker: I love that, because, don't get me wrong, I love film emulation as much as the next guy, but I think our clients may be a little bit tired of it, and the demand during an actual session is more for clean, modern looks. The creation of a clean modern look and a film emulation look has a lot of crossover. You use almost the same tools, but you tweak them in a slightly different way. You're still using texture management tools, but instead of adding grain and heavy halation and heavy bloom, you might just be taking the digital edge off so you don't see every pore on your talent's face. You may not be adding intense grain or intense bloom, you may be keeping halation off, though I still try to sneak it in there, but you're trying to get this clean modern look that doesn't feel digital but at the same time doesn't invoke the nostalgia of film emulation. It's a fine line, but you're using the same tools. So I completely understand what you're saying, and I like that you separated them out in that way to encourage that experimentation.
Speaker 2: A good example is the grain module. If you only use the grain module of Diaphany, you can, in some situations, with some settings, end up with grain that is sharper than the actual image. That's something people have pointed out: oh, it shouldn't be like that. And it's true, it shouldn't be like that if you want to do a proper film print emulation, a proper film grain. In which case we say, okay, if you want that, just combine it with the texture section, and then you can tweak your grain and your definition properly. The idea is that, okay, maybe it's a bit less intuitive because it's two components instead of one, but you get extra freedom from splitting them, instead of automatically adding some loss of definition every time you add grain, which could have been one way to do it, but we didn't want that. It's actually more freedom to explore maybe new kinds of grain, because grain can look different if you want it to. Maybe it will look weird, but let's try it out; maybe it works for your project. I think that's a good example of these kinds of new zones, a small one, but sort of new realms to explore in terms of what an image can look like in a digital world.
Speaker: With great power comes great responsibility, right?
Speaker 1: That's it, that's it. Yeah, yeah.
Speaker: So I found your presets really interesting. For instance, your Vision preset: tell me a little about how you developed that. Were you sampling film, or were you going off the feel of what Kodak Vision film feels like after grading for years?
Speaker 3: No, it's not about sampling; it's about watching film prints. There's a big difference in how the tools are developed: it's algorithms, not sample data. The main idea was to have behavior that feels like Vision3, for example. We know we cannot be as accurate as sample data. But there's a big issue with sample data, which is that when you sample film, you're also sampling the whole chain of tools you used: you're sampling your scanner, and so on. So in my opinion there are systematic shifts in sample data, which mean that if you project a film print on a 35mm projector and then project the same shots with a very good sampled LUT, it doesn't match. That's something I've experienced several times. So we shot on film, but we watched the results with our own eyes on projectors, and we made a proposition. Honestly, I think it's just as inaccurate as sample data is, but it's another proposition, if you know what I mean.
Speaker: Yeah, I do know what you mean. It's funny: having developed my own film emulation product, you have to make a bunch of decisions, and not all of them are easy, because of course film doesn't go through a DRT or color management the way your image will, especially if you're trying to emulate a contact print. You have to make those decisions, especially if you're creating a product that's going to go through color management and a DRT, and especially if you're going to allow different DRTs. DRTs change things, there's roll-off, there are all these questions you don't have the answer to, because the artist is going to answer them. So how can you develop an emulation when you don't have the other parts of the equation? I think what you developed is a great middle ground, in the sense that you're offering a Vision-like preset, and I think it looks great. All your presets, I thought, were really interesting and great starting points, which is how I like to think of presets: not as the end game, but as great starting points to give you a bit of an idea, and then obviously the project dictates where you go from there.
Why Two Plugins Work Better
Speaker 2: Just to complement on the film presets: it's actually interesting, about the way we developed the tools, that pretty early on we did a shoot where we shot digital and film side by side, so we acquired some data pretty early. The thing with sample data is that once you've done your job correctly, you're kind of locked into the sample data; you can't really tweak it. And since that wasn't one of our development principles, we were more looking for the proper mathematical functions and algorithms to emulate some of the behaviors we really liked about film. For instance, the higher the saturation of a color, the higher the density gets in film. We really liked that. So we used our film data and our digital data to find the best mathematical function that gives that same behavior, but then you can parameterize it, so it's not all or nothing. And you can also try to apply that same behavior to other characteristics of your image, and maybe you find something that's actually interesting and a bit new; maybe you get some innovation in the way you manipulate your image. So basically the idea was: what inspires us from film, because that's decades of research and development and a lot of knowledge, so you have to learn from it, and then how can you iterate from it, maybe evolve from it, and get new results? We had a lot of things that didn't look good at all, so they're not in the plugin, but from time to time you get a good idea, you mix and match, and it works. I think it's interesting, because we hear from time to time: oh, but you guys don't scan anything, you have no idea what film is like, you just do math. Yeah, but we don't do math out of the blue. We don't just dream up functions in our sleep and apply them to images. We try to get the best of both worlds if we can, and that's an interesting point about how we built the tools.
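As a toy illustration of the behavior Paul describes (an invented function, not the one HAL fitted to their film data): the more saturated a pixel is, the more it is darkened, with a strength parameter so the effect is dialable rather than all or nothing.

```python
import numpy as np

def color_density(rgb, strength=0.4):
    # Saturation proxy: (max - min) / max, i.e. distance from the gray axis.
    mx = rgb.max(axis=-1, keepdims=True)
    mn = rgb.min(axis=-1, keepdims=True)
    sat = np.clip((mx - mn) / np.maximum(mx, 1e-6), 0.0, 1.0)
    # Film-inspired rule: more saturation means more density (a darker color),
    # scaled continuously by `strength` instead of being baked in.
    return rgb * (1.0 - strength * sat)
```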
Speaker 3: Yeah, and that's interesting as practitioners too, because we're forced to think about what we love in a given color behavior. For example, these days we're talking about maybe a way to have saturated colors converge toward the same hue past a certain point. That's something we've observed in autochromes; the autochrome is an old photographic process. At a certain point of saturation, all the reds look the same. We like that behavior, and we're going to try to make something out of that idea, but we're not going to sample an autochrome, because that's not the point. The point is the idea behind it. I love it.
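One possible way to sketch that autochrome-style idea (purely illustrative; HAL has not published how they will do it): above a saturation threshold, pull each pixel's hue toward the nearest of a few anchor hues, so heavily saturated reds all drift to the same red.

```python
import numpy as np

def converge_saturated_hues(rgb, anchors_deg=(0.0, 120.0, 240.0),
                            sat_threshold=0.6, strength=1.0):
    gray = rgb.mean(axis=-1)
    # Project onto an opponent-style chroma plane to get hue and chroma.
    a = rgb[..., 0] - 0.5 * (rgb[..., 1] + rgb[..., 2])
    b = (np.sqrt(3.0) / 2.0) * (rgb[..., 1] - rgb[..., 2])
    chroma = np.hypot(a, b)
    hue = np.arctan2(b, a)
    # Soft mask: only pixels above the saturation threshold are affected.
    sat = chroma / np.maximum(gray, 1e-6)
    mask = np.clip((sat - sat_threshold) / max(1e-6, 1.0 - sat_threshold), 0.0, 1.0) * strength
    # Rotate each hue toward its nearest anchor hue.
    anchors = np.radians(np.asarray(anchors_deg))
    diffs = (hue[..., None] - anchors + np.pi) % (2.0 * np.pi) - np.pi
    nearest = anchors[np.argmin(np.abs(diffs), axis=-1)]
    delta = (nearest - hue + np.pi) % (2.0 * np.pi) - np.pi
    hue = hue + mask * delta
    # Rebuild RGB from gray plus the rotated chroma offsets.
    a, b = chroma * np.cos(hue), chroma * np.sin(hue)
    r = gray + (2.0 / 3.0) * a
    g = gray - (1.0 / 3.0) * a + (1.0 / np.sqrt(3.0)) * b
    bl = gray - (1.0 / 3.0) * a - (1.0 / np.sqrt(3.0)) * b
    return np.stack([r, g, bl], axis=-1)
```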
Speaker: I mean, yeah, it's basically like you can S-curve the hues after a certain saturation level so that they start converging. That's fantastic, I love that idea. I was going to ask, and you basically gave me an example, but I'd love another one: what is another slider or feature you brought in based on that film research? I saw some of it, like the low saturation handling and the saturation roll-off that I assume came from those film prints. But what's a feature you discovered from those film prints that you parameterized?
Film Feel Without Scanning
Speaker 3: There's something that is very discreet, very small, but that I love: the mid-push slider in the contrast curve. It means you can push the middle of the curve, so it's not a three-part curve anymore, it becomes like a five-part curve, because sometimes on some sensitometric curves you have that break, a little break in the middle of the curve. But we implemented it in a way where you can push it in one direction and in the other, and you can make it very strong. In reality that behavior is never very strong; it's just a slight break in the middle of the curve. But in Diachromy you can push it a lot, and that becomes quite interesting, because it makes the whole upper part of the curve very flat, so you can get to new aesthetics. It starts from an observation, but the way we implemented it has no link with reality anymore. And we did that with a lot of these behaviors: once the math exists, we let the slider go one way and the other way, and the other way never exists in reality.
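A guess at what such a mid push could look like mathematically (not Diachromy's actual formula): a smooth, localized, signed bump added around mid-gray on top of an existing tone curve, which creates the break and, when overdriven, flattens the region above it.

```python
import numpy as np

def mid_push(curve_out, x, amount=0.15, center=0.5, width=0.12):
    # `x` is the curve input (log-encoded), `curve_out` the base tone curve output.
    # A Gaussian bump centered on mid-gray adds a local break; positive and negative
    # `amount` bend the middle in opposite directions, far beyond what film ever does.
    bump = np.exp(-0.5 * ((x - center) / width) ** 2)
    return curve_out + amount * bump
```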
Speaker 2: That's something we started doing almost constantly: oh, okay, we invented that, it goes from zero to one. Okay, what happens if it goes to two, or to minus one? Sometimes it doesn't look good, but then it starts sinking in: okay, it's actually not that bad.
Speaker 1: And if we adapt it. That's the beauty of math: if it goes in one direction, it can go in the other one, as long as everything is continuous and smooth, and there's some equilibrium between every part of it.
Speaker 3: It's not always the case. That's the tricky part in math.
Speaker 1: Sometimes you find a perfect curve that goes wonderfully, but below zero it goes to infinity. Yeah.
Speaker: Yeah, you get illegal values. But that's the cool part about having parameterized math as opposed to sampled data. With sampled data, all you can do is change its opacity or apply it inverted. But when you parameterize something, you can do what you're describing: you can double it, or go in the opposite direction. Obviously, you might end up with a result that isn't aesthetically what you want, but technically it should work, and maybe if you twist a different knob the right way, you get something pleasing. So I love that you keep an open eye. One door closes, another one opens; that's how I see it.
Speaker 1: Yeah, and sometimes we want to go in another direction, but the math won't allow it. So we blend toward a new function when you go negative, to get where we want to go, to invert the direction of the curve.
Speaker 3: Yeah, some sliders are in fact made of two totally different behaviors between the positive part and the negative part. For example, color density: on the positive side, and that's quite common nowadays, it adds density to the highly saturated colors. But when you go to the other side, it's not only brightening; it's brightening and desaturating, so it folds toward pastel colors.
Speaker 1: Yeah, yeah.
Speaker 3: It's different from a bleach; it's another behavior. But when you look at the volume, it seems to be the symmetric of the same behavior. Only when you look at the volume, though, because mathematically it's something totally different.
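A hypothetical sketch of a bipolar slider like that (illustrative only): the positive range reuses the density idea above, while the negative range switches to a different function that brightens and desaturates toward pastel.

```python
import numpy as np

def color_density_bipolar(rgb, amount):
    # Positive `amount`: darken saturated colors (more density).
    # Negative `amount`: a different function entirely, brighten AND desaturate.
    mx = rgb.max(axis=-1, keepdims=True)
    mn = rgb.min(axis=-1, keepdims=True)
    sat = np.clip((mx - mn) / np.maximum(mx, 1e-6), 0.0, 1.0)
    if amount >= 0:
        return rgb * (1.0 - amount * sat)
    gray = rgb.mean(axis=-1, keepdims=True)
    lifted = rgb * (1.0 - 0.5 * amount * sat)                    # amount < 0, so this brightens
    return gray + (lifted - gray) * (1.0 + 0.5 * amount * sat)   # and pulls toward gray (pastel)
```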
Speaker 1: That's inspired by film; a lot of inspiration comes from film behaviors, but sometimes we pushed it further. I'm thinking about bleach bypass, the effect at the end of the plugin that's just called bleach bypass. It does what it says, it looks like a bleach bypass, but it can go much stronger than an actual bleach bypass. So it's about going in those directions and finding a sweet spot.
Speaker: I was actually playing with the bleach bypass feature last night, and that was one of the first things I noticed, how I was able to push it in a sort of modern bleach bypass direction because of the control you gave us over it. I think that's really cool, because, don't get me wrong, I love a good classic bleach bypass as much as the next person, but I also like being able to put a little more fill in so I'm not getting such a crunchy bleach bypass, something a little more modern, because clients like really bright images now. It can give almost a silvery fashion vibe when you bump the fill up a little. I thought that was a really neat control you put in there, pretty inspired. Very, very neat.
When Sliders Go Beyond Reality
Speaker 3: What's funny with those specific tools like bleach bypass is that you can put several instances of Diachromy in a row, starting with a very small bleach bypass, then applying a global look, then putting something else. Because Diachromy works scene-referred, you can put several Diachromy nodes and choose your own order of operations if you want to make something very different; you can instantiate multiple Diachromy instances if you want to get around the order we imposed. For example, I often add a color tone curves module after the bleach bypass, so I put a second Diachromy node, because when you bleach bypass, by definition, you're removing color in the highlights and in the shadows, so there's less saturation for the color tone curves. I'm very happy to add some color back with the color tone curves. Because it behaves very predictably, you can add color in the highlights and in the shadows after a strong bleach bypass, and you get something that doesn't look like a bleach bypass at all, because a bleach bypass doesn't have color casts. That works because it's very reliable in terms of robustness; we worked a lot on robustness, yeah.
Speaker 1: But you don't want to look at this guy's node graphs. No, no, no.
Speaker 3: That's wrong. I mean, most of the time there are just two nodes, and that's it.
Speaker: So, as we get to the end of our time together, I wanted to ask: what can we look forward to in the future of Diachromy and Diaphany?
Speaker 2: Well, we're looking at a lot of things, but if we take it chronologically, and I don't know when this is going to be broadcast, we're finishing up a Diachromy update. We just released Diaphany 1.3 today, in which we introduced a couple of new features: we improved halation and diffusion, thanks to Antoine, a lot.
Speaker 1: We're reworking some of the algorithms to get more robustness and nicer results. More control also: halation is getting a few more sliders.
Speaker 2: Because we got a lot of great feedback from users these last two months, and there's nothing like releasing software to get some really good testing. You can test all you want, but then people will find new ways to push it; I think you know that as well as we do. So we addressed some of the issues and made the small improvements we could, and we also added a couple of new features.
Speaker 4: Yes, it's also an opportunity to add new features. Every time we fix a bug, we think, okay, we need to change the behavior, but what can we do to improve the behavior, and not just fix the bug but also find new creative tools? For halation, we added a control on the loss of definition in the midtones, and that's a new idea we had while fixing things based on the feedback. We're always trying not just to fix things but to take it as an opportunity for improvements.
Speaker 1: Yeah, and we improved some of the sliders, because now some of them are expressed in terms of exposure values, in stops: where halation starts and, especially, where it stops. We also added something about grain. But maybe we should talk about the further future? This is kind of the immediate future.
Speaker 3: Antoine is working on gamut compression these days, because that's a big issue. We hate negative values. He'll explain; by the way, Antoine is the real color scientist of the crew. When you are dealing with the DRT of a color-managed workflow, you have to accept how it behaves. When you are building a film look plugin, you can rely on the DRT made by Kodak, which is wonderful, but you have that limitation. If you rely on the DRTs of the color-managed workflow, sometimes out-of-gamut values are very difficult.
Speaker 1: Yeah, you are exploring an open domain of values that can be tricky to get around.
Speaker 4: Yeah, so we're working on being able to give a better rendition of unbounded data, imaginary colors created by the input device transform of the camera, for instance. Right now we have the ACES Reference Gamut Compress, which you may already know, and which is really great. But it introduces some hue shifts, so it's not always easy to get the rendition we want of the out-of-gamut values. So we are working on our own custom solution, to be able to propose an alternative. We will keep the ACES Reference Gamut Compress, of course, and we will propose an alternative.
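For context, here is a much-simplified, illustrative distance-based gamut compression in the spirit of the Reference Gamut Compress being discussed (not the actual RGC math, and not HAL's upcoming alternative): per-pixel component distances from the achromatic axis beyond a threshold are rolled off smoothly so negative components come back into range.

```python
import numpy as np

def simple_gamut_compress(rgb, threshold=0.8, rolloff=0.5):
    # Distance of each component from the achromatic axis; it exceeds 1.0
    # exactly when a component is negative (out of gamut).
    ach = rgb.max(axis=-1, keepdims=True)
    safe_ach = np.where(np.abs(ach) < 1e-6, 1e-6, ach)
    dist = (ach - rgb) / safe_ach
    # Distances beyond `threshold` roll off asymptotically toward 1.0,
    # while anything inside the threshold is left untouched.
    over = np.maximum(dist - threshold, 0.0)
    compressed = np.where(dist > threshold,
                          threshold + (1.0 - threshold) * (1.0 - np.exp(-over / rolloff)),
                          dist)
    return ach - compressed * safe_ach
```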
Speaker 1: We also realized that the ACES Reference Gamut Compress was designed with a lot of constraints, like being invertible, like trying to please everyone. So at some point we said, no, maybe we can get somewhere else with it, something that will be more pleasing to us, because we are the end point. That's the easy part: we are at the end point. Even though we sit just before the DRT, by the time you arrive in Diachromy or Diaphany, it's the end of your color grading, of your color correction.
Future Plans And Gamut Compression
Speaker: So ACES 2.0 dramatically improved the gamut compression, but it's not perfect, and like you said, it's trying to please everybody, and when you try to please everybody, you usually please nobody. So there are still a lot of adjustments you can make, especially tweaking for your own software, especially since you know where you sit in the order of operations.
Speaker 3: There's room in the gamut compression for something that's also aesthetically pleasing. The way you bend the high saturation, it can be something. And by the way, in ACES that's not their main goal; they want something that is mathematically clean. But we have another goal, which is to produce looks, so we can bend it a little harder. So Antoine is working on that, and that will probably be a separate feature in Diachromy, the way you choose how to compress the gamut. Then we are definitely going to add a new texture module to Diaphany. And we're also thinking about, how do you say it?
Try The Demo And Learn Online
Speaker 2: Since we started our development, what we've realized is that Diachromy and Diaphany are kind of targeted at high-end users, I would say. Hal Picture is now part of a big French post-production house, so we're close to the post production of feature films and TV shows; that's also what we do as filmmakers, and that's what we had in mind when we developed them. We wanted something that was industry compliant, industry proof, that could integrate well with the highest standards of the industry, and I think we're doing a pretty good job there. But when we released, we understood that some users felt it was priced for a high-end user, and that it was a big step for them, and that maybe there were too many tools and too many options for other users. So we're working on another solution, maybe simpler, that would combine the essentials of Diachromy and Diaphany into one plugin, with obviously less parametricity. Something easier to use, but that would also suit broadcast productions, content creators, people who make great stuff for YouTube and the platforms. The world is vast, and there are so many productions and so many different production profiles that we want to cover that segment of creation as well. They expect a product that is a bit less expensive, to be frank, and also maybe a bit simpler to use, fewer knobs and sliders. So we're looking, I think before the end of the year, to release something that addresses those needs. We started with the high end, we started with the north face, and we're kind of seeing the light now, and maybe we can open up to new users who were either a bit frightened by the complexity or just not able to afford it, because it is effectively aimed at high-end users right now.
Speaker 3: And definitely new looks as well. Always new looks. We're going to add new looks; we have plenty of ideas for them, and it's so easy to add looks in a new version.
Speaker 2: So yeah, we're going to try to keep that effort of adding new presets as a long-term effort, and whenever we can, add a couple of presets into the mix for people to enrich their library.
Speaker: Fantastic. And for those interested, where can we learn more about you, more about Diachromy, more about Diaphany?
Outro Reviews And Where To Follow
Speaker 2: So we have everything on our website, hal-picture.com. I'm pretty sure it's going to be displayed somewhere on this page as I speak. You can download the demo version, which is basically the plugins: all features are available, it's watermarked, but you have all the features. I think you just can't export a LUT, but other than that, you can play with it and take the measure of the tools' potential. There's also a lot of content on YouTube that we produced to help onboard you into the Diachromy and Diaphany journey.
Speaker 1: Yeah, and we're also publishing articles about some quite nerdy color science stuff. More of that will be out.
Speaker 2: One of the ideas when we founded Hal Picture was also to have a voice in the global conversation and share our knowledge, not just, I mean, obviously we have Diachromy and Diaphany in the back of our heads at all times, including at night, but we also publish papers from time to time about, say, how diffusion and halation should work, or DRTs; display rendering transforms are the subject of a big paper that Antoine is writing. We've shared two chapters so far. Three chapters, actually. So we publish a lot of material on our website. You can download the demo, you can contact us; there's an email and a contact form on the website as well. So you can learn about the tools and things like that. And, a detail, but one close to my heart: we have a toolbox online with a lot of small, open utility software, where you can actually browse our whole preset library. You can check out all our presets online, with a little bit of a story, their characteristics, and a lot of images to play around with. You can also convert presets from Diaphany and Diachromy from one version to another, because we're very focused on look continuity; we don't want people to be scared that updating might break their looks. We really take care of that so it's not an issue. So yeah, a lot of stuff on the website, basically.
Speaker: We will have links to all of that in the show notes. Thank you so much for joining me today, guys. I really appreciate you joining me for this chat; I learned so much about you, your tools, and the way you go about this. For this episode of Color and Coffee, I'm Jason Bowdach, and we'll see you in the next one. And that's a wrap. Be sure to follow us on Instagram, YouTube, and your podcast app of choice. Search for @ColorandCoffee or @colorandcoffeepodcast and join the conversation. If you're using Spotify or Apple Podcasts, please leave a review. Huge thanks to FSI, DeMystify Color, and PixelTools for sponsoring the show. Until the next episode.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
The Offset Podcast
DC Color