The Lattice (Official 3DHEALS Podcast)

Episode 86 | AI's Vital Role in Medical 3D Printing (Virtual Event Recording)

3DHEALS Episode 86

Artificial intelligence is transforming medical 3D printing and bioprinting. In this virtual event, hear from a panel of experts from across the globe. Our speakers showcase the practical applications of AI in creating personalized medical solutions that were previously impossible.

One size does not fit all. William Jung, Business Development Director of FITme in South Korea, explains how AI-driven customized silicone implants are revolutionizing cosmetic and reconstructive surgery. Using a design scripting engine that reduces design time from hours to minutes, FITme’s technology has supported over 30,000 surgical cases and captured 85% of the Korean market.

At the National University of Singapore, Dr. Gopu Sriram explores dental applications of 3D bioprinting. He discusses how an AI-optimized bioprinting process allows for the biofabrication of personalized gum tissue constructs. Gum disease is a worldwide public health burden that affects almost half of adults over age thirty. This approach not only addresses these patients but also dramatically accelerates experimental timelines.

Dr. Gregory Hayes brings the business perspective, sharing how EOS Additive Minds is implementing AI across multiple fronts to democratize access to additive manufacturing. Their systems incorporate advanced monitoring tools capable of making real-time adjustments during printing, allowing medical professionals to stay focused on patient outcomes rather than technical issues.

Focusing on women's health, Aye Nyein San from Cosm Medical shares how their AI-powered digital gynecologic devices are giving women their lives back, with patients describing the products as "magical" and "life-changing."

What makes this event so compelling to listen to is how our speakers illustrate AI’s ability to break down long-standing barriers to 3D printing adoption. By automating complex design decisions and enhancing process reliability, AI is turning specialized, expertise-heavy workflows into scalable, patient-centered solutions.

Ready to explore how AI and 3D printing could affect your medical practice or research? Subscribe to our podcast for more insights into this rapidly evolving field, and join our community of innovators.


Shownotes: https://3dheals.com/event-recap-artificial-intelligence-updates-for-3d-printing-and-bioprinting/


Send us a text

Support the show

Subscribe to our premium version and support the show.

Follow us:
Twitter
Instagram
Linkedin
3DHEALS Website
Facebook
Facebook Group
Youtube channel

About Pitch3D

Speaker 1:

Hello everyone, welcome to the Lattice Podcast, episode 86. This is an audio recording of our recent live event with an international speaker panel focusing on how artificial intelligence is influencing medical 3D printing and bioprinting. Our conversation addressed the long-standing but evolving role of artificial intelligence in additive manufacturing, especially in clinical applications. Across all presentations, AI is seen as a key enabler for scaling customization, optimizing complex multi-parameter processes, and reducing the expertise barrier in 3D printing. Listen to or watch the whole event to learn about current pain points and the future direction of this exciting space. For speaker bios and the on-demand recording, please refer to our event page link in the show notes. Enjoy. All right, good morning, good night, everybody, good morning.

Speaker 2:

Can you hear? Yes. Can you hear? Yes. Wonderful. Good night everybody, good morning.

Speaker 1:

Can you hear?

Speaker 2:

Yes, Can you hear?

Speaker 1:

Wonderful. Love your background, very relaxing.

Speaker 2:

To give that feel of day.

Speaker 1:

Right, to pretend that you're in daylight. I mean, we have an incredible panel today, truly international, almost, well, except for Africa; continent-wise we're always missing a couple, but this is about as comprehensive as it can be. So good morning, good night everybody. My name is Jenny Chen. Just a real quick intro about 3DHEALS, which you've probably heard a couple of times. I founded this company about 10 years ago, I guess; I can't even count now.

Speaker 1:

We have three missions. One mission is education. We really want to educate the public about what 3D printing can or cannot do in healthcare and what we're working on. So this is one of those events. Number two is networking. This is virtual networking as well. So for people who are savvy, you can put your LinkedIn or social media links in the chat box. Anything you want people to share or watch or read, put it in the chat box. Don't be shy, I know sometimes people can get really quiet. And also, if the speakers are doing a great job, give them kudos using the react button. Okay, so socialize. We also host in-person events sometimes, so stay tuned for announcements and just subscribe to our newsletters. And finally, we have a program called Pitch3D, through which we help with fundraising for early-stage startup companies, meaning from pre-seed to Series A. This is an entirely free program for everyone, and if you're an early-stage startup founder, reach out to me and see how that works for you.

Speaker 1:

We also have an industry focus, which is basically 3D technology in healthcare. That includes 3D printing, AR, VR, bioprinting, biofabrication and material science. So now, without further ado, I'd like to introduce today's panel. We host this almost every year, and last night I actually went through the sessions that we did previously, and the truth is, the 3D printing industry has been working on AI-related functionality or augmentation for a long time, so it's not like we're ignoring it. But the question we want to answer today is where we are, and whether AI or machine learning is really going to help 3D printing, the 3D technology industry, leap forward, because we have realized there has been a bit of stasis in the last couple of years, especially in the commercialization aspect of the industry.

Speaker 1:

So the first speaker I'd like to introduce is located in South Korea, so he's in the middle of the night, and I really appreciate William and also Gopu for being part of this program, because they're really donating their precious free time to talk to us. So the first speaker is William, if you can unmute and introduce yourself. William is actually a business developer for FITme, F-I-T-M-E, a startup, or a company, in South Korea focusing on personalized medical devices.

Speaker 3:

All right, let me share my screen first and then continue to introduce myself.

Speaker 1:

Yes.

Speaker 3:

All right, can you see my screen?

Speaker 1:

Yes.

Speaker 3:

All right, let me start. Good morning everyone. Actually, here in Korea it's midnight. I'm William Jung, Business Development Director at FITme. It's truly a pleasure to speak at 3DHEALS and to share our journey of reshaping both cosmetic and reconstructive surgery. Here's our content today.

Speaker 3:

Let's start with the introduction to FITme. Before that, I'd like to introduce myself. I have been working in the medical device and 3D printing field for nearly 12 years now, and my first experience with 3D printing was as a researcher on artificial joint implants, where we used it to test new implant geometries. Since then, I have had a chance to explore a wide range of technologies, and I still remain fascinated by how 3D printing continues to push boundaries, especially in the medical field. Today, I'd like to show you how FITme is driving this innovation forward, from design to device. At FITme, our vision is clear. We don't just print implants; we print precision powered by data. We combine advanced 3D printing and AI algorithms to deliver truly personalized, high-quality silicone implants. These implants have already been used in over 30,000 surgeries and, with an 85% market share in Korea, we are rapidly expanding globally, backed by FDA 510(k) clearance and active use in Japan, Thailand, Vietnam and several other countries.

Speaker 3:

All right, let's move to the next page: why we needed a new approach. Let's look at why we needed to rethink traditional implants. For decades, the industry has relied on off-the-shelf implants, the so-called one-size-fits-all solution. But here's the reality. First, poor fit: these pre-shaped implants rarely match each patient's unique anatomy. Second, tilting and misalignment: poor fit means implants can shift, rotate and create visible asymmetry. Third, contracture risk: inconsistent pressure points often cause fibrosis and capsular contracture. Finally, dead space: gaps between tissue and implant increase the risk of hematoma, inflammation and other complications. These are not minor inconveniences. They directly affect patient satisfaction, increase the revision rate and impact long-term safety.

Speaker 3:

This is the comparison between FITme customized silicone implants and off-the-shelf silicone implants. With FITme, we turned this challenge into an opportunity. We provide fully customized silicone implants designed directly from each patient's CT data. This ensures a precise fit for each unique anatomy, better symmetry, enhanced safety, and predictable, aesthetically satisfying results. In short, our goal was not just to make a better implant, but to build a better system for both surgeons and patients.

Speaker 3:

Now, how does 3D printing enable customization at scale? Let me walk you through our end-to-end workflow. Step one, CT image acquisition: surgeons upload anonymized CT or CBCT scans via our secure portal. Second, order submission: they fill in a simple patient-specific form detailing the desired projection height, angles, preferred durometer level, and so on. Third, preoperative planning and design: our engineers analyze the imaging data and generate the pre-op plan, which is reviewed and approved by the surgeon. The final step is manufacturing: we 3D print a customized mold and inject implant-grade silicone. Each implant is sterilized and quality checked before shipment. Our indications cover both craniofacial and body contouring applications, which is forehead, nasal, chin, midface and mandible implants, as well as pectoral implants for chest enhancement in men. We have a gluteal implant also, and various durometer options, from Shore A20 to A40, to match surgical goals.

Speaker 3:

Now, how do we handle this at scale? We tackled a big bottleneck: manual custom modeling is slow and labor intensive. So, together with MetalEyes, we developed a Python-based design scripting engine. This automates the modeling of the implant from CT data, cutting design time from hours to just minutes. The results are thousands of surgeries annually, rapid turnaround of only two to three days from order to surgery-ready delivery, and consistent clinical accuracy and symmetry. So we are not just printing parts, we are delivering automated, scalable personalization. You can see the scripting engine on the right side here. Let me underline this further: FITme is already the market leader in customized silicone implants in Korea, with an 85% market share and over 30,000 real-world cases backing our safety and efficacy. Now, with FDA clearance, we are actively expanding to more countries and building partnerships worldwide.
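
For listeners who want a concrete sense of what a design scripting engine of this kind does, here is a minimal Python sketch of the core idea: turning patient imaging data into an implant geometry without manual CAD work. It is purely illustrative and not FITme's actual code; the height-map representation, the function name and all numbers are assumptions made for this example.

import numpy as np

def implant_thickness_map(patient_surface_mm, target_surface_mm, min_thickness_mm=0.5):
    # Illustrative sketch only (not FITme's engine): given two height maps derived
    # from CT data, the patient's current surface and the surgeon-approved target
    # contour, compute how thick the custom implant must be at each point.
    thickness = np.clip(target_surface_mm - patient_surface_mm, 0.0, None)  # implant only adds volume
    thickness[thickness > 0] = np.maximum(thickness[thickness > 0], min_thickness_mm)
    return thickness

# toy example: a 4 x 4 mm patch of an augmentation sampled at 1 mm resolution
patient = np.zeros((4, 4))
target = np.array([[0.0, 1.0, 1.0, 0.0],
                   [1.0, 2.0, 2.0, 1.0],
                   [1.0, 2.0, 2.0, 1.0],
                   [0.0, 1.0, 1.0, 0.0]])
print(implant_thickness_map(patient, target))  # thickness map a mold could be generated from

In a production engine the same idea would of course be applied to full 3D meshes and combined with the order details (projection, durometer) described above; this sketch only shows why scripting the step removes hours of manual modeling.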

Speaker 3:

Let's move to the next step. The next step for us is an AI-driven simulation platform. We are developing FACEZONE, which stands for the FitMe Accessible Communication Engine. This is an AI-powered, fully digital consultation and planning platform. It includes an AI segmentation engine built on a cascaded 3D nnU-Net. It performs automated and highly consistent medical image segmentation. To ensure high accuracy, we've trained it on over 500 CT datasets, and we plan to expand this database further to improve accuracy. It also includes 3D simulation-based planning.
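
For context, the "cascade" in nnU-Net-style segmentation refers to a two-stage approach: a low-resolution network first localises the anatomy, and a full-resolution network then refines the result using the coarse output as extra input. The sketch below only illustrates that general idea with stand-in model functions; it is not FITme's segmentation engine, and the downsampling factor and function names are assumptions.

import numpy as np

def cascaded_segmentation(ct_volume, coarse_model, fine_model, downsample=4):
    # Stage 1: segment a downsampled copy of the CT to roughly localise the anatomy.
    low_res = ct_volume[::downsample, ::downsample, ::downsample]
    coarse_mask = coarse_model(low_res)
    # Upsample the coarse mask back to full resolution (nearest-neighbour repeat).
    up = coarse_mask
    for axis in range(3):
        up = np.repeat(up, downsample, axis=axis)
    up = up[:ct_volume.shape[0], :ct_volume.shape[1], :ct_volume.shape[2]]
    # Stage 2: the full-resolution model sees the CT plus the coarse prior as channels.
    return fine_model(np.stack([ct_volume, up]))

# stand-in "models" so the sketch runs: a threshold for stage 1, a pass-through for stage 2
ct = np.random.default_rng(0).normal(size=(32, 32, 32))
coarse = lambda v: (v > 0).astype(np.uint8)
fine = lambda x: (x[0] * x[1] > 0).astype(np.uint8)
print(cascaded_segmentation(ct, coarse, fine).shape)   # (32, 32, 32)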

Speaker 3:

Surgeons can interactively visualize the patient's anatomy and simulate different implant options, even before production. You can see this here: the post-op simulation for rhinoplasty. This supports precise pre-op planning and boosts confidence. Next, before-and-after visualization: one of the biggest patient pain points is uncertainty about results. With FACEZONE, patients can see a realistic comparison, improving understanding and trust. And the last part is order management and seamless communication. We are building an integrated system that connects surgeons and FITme engineers, and then surgeons and patients, and our ultimate goal is to evolve this into a virtual surgical planning assistant: predictive simulations for likely outcomes, AI recommendations for the best implant shape and size. This transforms surgical planning into a collaborative, data-driven process.

Speaker 3:

All right, let's wrap up here. Ready-made implants are limited; they simply cannot match unique anatomy. Customization is the future, and FITme is making it practical and scalable by combining 3D printing and automated scripting. We have delivered safely for over 30,000 patients, and now, with AI-powered simulation, we are reimagining consultation, communication and surgical planning. Where are we heading next? We are expanding into reconstructive surgery for trauma, oncology and congenital deformities, developing surgical guides to assist precision during surgical procedures, and innovating hybrid regenerative implants that merge structural support with bioactive healing. In summary, plastic surgery is evolving into a new era driven by precision, personalization and patient empowerment. FITme is proud to be at the forefront of that transformation. Thank you for your time and attention today.

Speaker 1:

Thank you, William, amazing presentation. I was not expecting that you guys were like 80% of the market. That's amazing. We have one question from the audience, but you kind of addressed it: it's about the materials you can use. Currently you said it's silicone, but what other materials are you considering in the R&D pipeline?

Speaker 3:

In our R&D pipeline, actually, we cannot mention the bioabsorbable materials.

Speaker 1:

Makes sense.

Speaker 3:

Such as PCL, PLGA, ELNA, that kind of thing. There are quite a lot of materials, but I cannot say the specific material, sorry.

Speaker 1:

Great. I mean, we actually have people who are working on various versions of these alternative fundamental materials, and they can reach out to you if they want to collaborate soon. There are a couple of things from your presentation I just want to unpack a little bit. You have one slide that had the engine that you guys were working with, the nnU-Net 3D or something like that. Can you just explain that a little bit? What is cascaded nnU-Net 3D? Because some people might not know, because I don't.

Speaker 3:

Actually, this is a common AI algorithm for when we do segmentation.

Speaker 1:

Okay.

Speaker 3:

It means that we label the anatomical structures automatically.

Speaker 1:

Okay.

Speaker 3:

But these days we try to combine two different AI algorithm engines to improve our quality.

Speaker 1:

Okay, great. See, I know nothing about AI, embarrassingly, but thank you for the explanation. Another question: I was very impressed with your presentation about the predictive value of your software. So do you actually take a post-op, after-recovery photo to use as a data point to train your engine? How do you know your prediction is going to be accurate? I mean, is the patient going to come back crying and say, you know what, I don't look like the photo you predicted?

Speaker 3:

Actually, our KOLs provide the post-op CT data and we train on it to make it accurate. So actually we have already been through 30,000 cases by now, because plastic surgery is booming in Korea.

Speaker 1:

Yes, it has been booming for the last two decades.

Speaker 3:

Yeah, so we trained it to make it accurate actually.

Speaker 1:

We have another question from the audience. What materials and machines do you use for your 3D printing? The mold?

Speaker 3:

Actually, we use Formlabs printers to make the mold.

Speaker 1:

You can mass produce using a lot of Formlabs, I'm guessing.

Speaker 3:

Yes.

Speaker 1:

Okay, fantastic. Well, thank you so much, William, for a fantastic presentation. I learned so much. I'm going to dig deeper into your company because it's so fascinating. Thank you, and we'll come back to you later for the panel discussion. I want to introduce our next speaker, also from Asia, in the middle of the night: Professor Gopu Sriram. Gopu is an assistant professor at the Faculty of Dentistry, National University of Singapore, and recently he published, or co-authored, a paper that I found fascinating, focusing on bioprinting, digital dentistry, 3D printing and, obviously, AI. So, Gopu, please take it away.

Speaker 2:

Yeah, thanks. Thanks, Jenny. Just give me a moment to share the screen. I guess it's on, right?

Speaker 1:

Yep. All right, yeah.

Speaker 2:

Yeah, thanks, Jenny, and thanks to 3DHEALS for this amazing platform for educating people on the latest developments across the 3D printing field. As Jenny introduced, I'm an assistant professor at the Faculty of Dentistry, National University of Singapore. By background, I'm a dentist and an oral pathologist; I did my PhD in cardiovascular tissue engineering and then a postdoc in dermatology and bioengineering, primarily in microfluidics and additive manufacturing. So a little bit of a detour from my dentistry background and then back to dentistry, now contributing these various technologies that are happening in other fields to dental applications. With that, I also hold affiliate positions in the Department of Biomedical Engineering, and I lead one of the thrusts related to dental applications in the NUS Centre for Additive Manufacturing. So today, what I'll be showcasing is, as Jenny introduced, one of our publications, in which we primarily show how 3D bioprinting and artificial intelligence can be integrated towards the biofabrication of oral soft tissue constructs.

Speaker 2:

If you take gum disease as such, it's one of the major public health burdens. Though not life-threatening, it does cause a significant healthcare burden, and as we age, this burden keeps increasing over time. But what do surgeons do in terms of treating this gum disease? If I go back to this particular video, what happens is that with the accumulation of plaque, the gum tissues slowly recede, exposing the root surfaces of the tooth. That's when the gum disease goes through different stages, from an early stage called gingivitis, which is inflammation of the gum tissues, to a later stage where it becomes periodontitis, in which the tooth starts to loosen and then eventually falls off. So in the early stages, when the gum recedes and the surgeon wants to replace it, what they typically do is take a small piece of tissue from the roof of the mouth, place it into the area where the gums have receded, suture it back and expect it to heal. It does work amazingly well, but obviously the patient has to go through two surgeries in two different locations, and the roof of the mouth is very painful to live with a wound in during this healing period. Alternatively, when the defect covers a larger area, what the surgeon does is put in synthetic collagen-based grafts, suture them in and expect things to heal.

Speaker 2:

But because there are no cells, the performance is typically quite inferior compared to that of grafts naturally obtained from the patient's own mouth. The other aspect is that the gum tissues differ for each individual; even within the same individual, when we look at the upper teeth versus the lower teeth, and then the teeth at the back of the mouth, the gum outline is quite different. You see much broader scalloping for the front teeth in the upper arch, while in the lower arch it's a much narrower outline. This variation exists within the same individual, and over time the gum outline is also going to change significantly. So that's where bioprinting provides the potential to customize the shape of the grafts. If the surgeon were to take it from the palate, typically the surgeon picks up a rectangular piece of graft, shapes it with their amazing skill set and then sutures it into the defect area. But with bioprinting we can precisely shape it to the defect area, and then that can be utilized.

Speaker 2:

But how do we enable that? What we did is develop a bioink which has multiple components. To break it down into simple words, fibrinogen is basically blood clot. So we use blood clot as our starting material, just like how any wound would heal, and then we mixed it with various thickeners so that we can print it. We started with four different bioink formulations, varying the thickeners which are used to enable bioprinting of fibrinogen: moving from bioink A to D, we have increasing concentrations of maltodextrin and xanthan gum, which are the thickeners.

Speaker 2:

For any bioink, to bioprint it, it's almost like taking the example of toothpaste: for the toothpaste to stay on the toothbrush, it needs a certain amount of thickeners so that it can stay there. Similarly, when we want to print the bioinks, we need a certain amount of viscosity for them to stay in place and be printable. So that's when we characterized the rheological properties of these bioinks, and what you can see here among all these graphs is basically that the more shear, meaning the more pressure or squeezing force you apply, the thinner the viscosity should become, so that the ink can get moving from its syringe, come out onto the plate and eventually get bioprinted. That's the summary of these graphs: all four bioinks exhibit viscoelastic and shear-thinning properties, suggesting that they can be bioprinted.
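
A common way to quantify the shear-thinning behaviour described here is the Ostwald-de Waele power law, viscosity = K * (shear rate)^(n - 1), where a flow index n below 1 means the ink thins as it is squeezed through the nozzle. The sketch below fits that model to a hypothetical viscosity sweep; the numbers are invented for illustration, and the published study may characterize the rheology with a different model.

import numpy as np

# Minimal sketch (not the paper's analysis): fit the power-law model on log-log axes.
shear_rate = np.array([0.1, 1.0, 10.0, 100.0])        # 1/s, hypothetical sweep
viscosity = np.array([250.0, 40.0, 6.5, 1.1])         # Pa*s, hypothetical readings

slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0           # flow index; n < 1 indicates shear thinning
K = np.exp(intercept)     # consistency index, Pa*s^n
print(f"n = {n:.2f} (<1 means shear thinning), K = {K:.1f} Pa*s^n")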

Speaker 2:

But then, if we were to print it, unlike other 3D printing technologies, one of the challenges with bioprinting is that we are printing hydrogels, which are water-based, and hence there are multiple parameters in play. If we break down these parameters, the ones which interact with each other are the nozzle diameter, print pressure, print speed and the bioink formulation, meaning the viscosity of the bioink. The more viscous the ink, the more pressure you need. Similarly, if you have a syringe with a needle that has a very large bore compared to another needle with a very narrow bore, obviously the pressure needed to squeeze out the liquid is going to be different between the two. So each of these parameters interacts with the others, but ultimately what comes out is this bioink through this nozzle, which then gets printed, and you can have different diameters of the extruded bioink. So what we did first is do it manually: we printed across the different nozzle diameters of 22, 25 and 27 gauge, different printing pressures, the five printing pressures you can see here, and different printing speeds.

Speaker 2:

With increasing printing speed, obviously the filament diameter drops, but with increasing printing pressure the filament diameter becomes bigger and bigger; that's what you expect. But to estimate what the filament diameter is going to be across these different parameters, for one bioink formulation we had to do 360 prints. Translated to four bioink formulations, this would mean 1,440 prints. That's a huge number of prints to be done, and a huge amount of manpower needed for it.

Speaker 2:

So that's where we embarked on this particular platform called IDentif.AI, which is a workflow developed by Prof Dean Ho from the NUS Institute for Digital Medicine, and N-of-1, meaning N equal to 1. The platform works on one single individual as the starting material, in contrast to other AI-based platforms where you require much larger N numbers. How the platform typically works is that you have certain experimental datasets, you get input and output parameters, and that is fed into an orthogonal array platform; from a few data points you can build an orthogonal array. The orthogonal array then gives you a large number of data points to play around with. Then imagine, if you have it for different data, different time points or other parameters which are interacting with each other, you are looking at not just N equal to one but millions of combinations. So we embarked on this particular platform.

Speaker 2:

What we did is use a training dataset with three different pressures, representing low, medium and high; similarly, three printing speeds, three nozzles and three bioink formulations representing the entire range from low to medium to high. Then we did 25 prints and measured the diameter of these individual bioprinted filaments, and these 25 prints were used to develop the orthogonal array. From these 25 datasets, what you get is individual data points across this whole array, but each array is looking at only two parameters at a time, while we are inputting four different parameters. That leads to multiple combinations, and eventually what we see here is six different ways of representing the interactions between the four parameters.

Speaker 2:

In the top left here, what you have is the interaction between the ink, meaning the viscosity of the ink, and the nozzle diameter. Similarly, you have nozzle diameter and printing speed, ink and printing speed, and so on. Each of these orthogonal arrays typically gives you the interaction between two parameters, but in reality all four parameters are interacting together. That is where IDentif.AI, a proprietary platform developed by Dean Ho, comes in: it reads through all these different datasets across the six orthogonal arrays, and eventually it ranks the top 10 or top 20, depending on what we ask it to provide. So from these 25 prints we have the six orthogonal arrays, and that gave us 13 top-ranked printing parameter sets for the printing speed, printing pressure, nozzle and ink type to be used. We then validated these 13 prints in terms of the actual diameter. What we asked IDentif.AI for was 13 top-ranked printing parameter sets that would give us a filament diameter of 0.3 to 0.6 mm, and then we validated whether these parameters really yield those diameters; as you can see, most of them actually do. So, having verified these interactions and optimized the printing parameters with very few experiments, we then wanted to know whether cells can really grow in these bioinks.
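
The general pattern here, a few characterised prints feeding a model that then ranks every untested parameter combination against the 0.3 to 0.6 mm diameter target, can be sketched as follows. This is not the IDentif.AI implementation, which is proprietary and built around interacting orthogonal arrays; it is a simplified stand-in using a toy linear surrogate, and every number below is hypothetical.

import itertools
import numpy as np

# Hypothetical training prints: [pressure_kPa, speed_mm_s, nozzle_mm, ink_level] -> filament diameter (mm)
X = np.array([[30, 5, 0.41, 1], [60, 5, 0.41, 2], [90, 5, 0.41, 3],
              [30, 10, 0.25, 2], [60, 10, 0.25, 3], [90, 10, 0.25, 1],
              [30, 15, 0.20, 3], [60, 15, 0.20, 1], [90, 15, 0.20, 2]], dtype=float)
y = np.array([0.55, 0.72, 0.95, 0.30, 0.45, 0.38, 0.22, 0.28, 0.41])

# Fit a simple linear surrogate d = b0 + b . x by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predict = lambda p: coef[0] + coef[1:] @ np.asarray(p, dtype=float)

# Enumerate the full grid and rank combinations by distance to the target-window centre (0.45 mm).
grid = itertools.product([30, 45, 60, 75, 90], [5, 10, 15], [0.20, 0.25, 0.41], [1, 2, 3])
ranked = sorted(grid, key=lambda p: abs(predict(p) - 0.45))
for params in ranked[:5]:
    print(params, f"predicted diameter {predict(params):.2f} mm")

The point of the sketch is only the workflow: a handful of measured prints replaces the 1,440 exhaustive ones, and the model proposes a shortlist of parameter sets to validate experimentally.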

Speaker 2:

If we take the gum tissue, it has a top layer made of cells called keratinocytes and an inner layer made of cells called fibroblasts. So what we did first is embed the fibroblasts within the bioink, print it and verify the viability of these cells. Whatever dots you see in green are the cells which are living, and if they die they are labeled red with propidium iodide. As you can see here, there are hardly any red dots, and it was quantified that we have more than 90% viability on the day of printing and over time, through day seven and beyond, across both the fibroblast and keratinocyte layers. So then we started off with a particular clinical scenario where the gums have receded.
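
As a side note on where viability percentages like "more than 90%" come from: in a live/dead assay, live cells fluoresce green and dead cells are stained red with propidium iodide, and the two populations are counted. The snippet below is only a crude pixel-level illustration of that ratio, assuming an RGB image array; the actual study would count individual cells with proper image analysis rather than raw pixels.

import numpy as np

def viability_percent(rgb_image, threshold=50):
    # Crude illustration: bright green pixels as live signal, bright red pixels as dead signal.
    green = rgb_image[..., 1] > threshold
    red = rgb_image[..., 0] > threshold
    live = np.count_nonzero(green & ~red)
    dead = np.count_nonzero(red)
    return 100.0 * live / max(live + dead, 1)

# synthetic example image: mostly green (live) with one row of red (dead) pixels
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:9, :, 1] = 200
img[9, :, 0] = 200
print(viability_percent(img))   # 90.0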

Speaker 2:

The original outline is supposed to be somewhere here, and these are the roots which have been exposed. Using the surgeon's input, we designed the size and shape of the graft. Over here it's basically just a two-dimensional outline, but that was then converted into a three-dimensional model and into G-code which the bioprinter can read; and with the bioink I already introduced, the cells, which were basically the fibroblasts and the keratinocytes, and the pattern we got from the 3D model, that was then fed to this extrusion-based bioprinter to print these custom-shaped gum grafts. Having said that, we then have these printed gum constructs, but the moment you print, the cells are not ready; they are just sitting there as rounded cells. They need time to populate the whole construct and then mature before it can be grafted onto the patient.

Speaker 2:

So we validated that: day zero is the day it was printed, and then over 18 days it was cultured at an air-liquid interface. What I mean by air-liquid interface is that, if we take the example of skin, the skin receives nutrition from the blood vessels beneath it, but on the other side it is exposed to the air or the external environment. Similarly, the gum tissues also receive nutrition from beneath, while being exposed to saliva or the external environment on the other side. That is what is meant by air-liquid interface. The moment you culture these tissues at this air-liquid interface, the tissue matures completely, and what we have at the end of this 18-day culture period is these mature gum tissue grafts.

Speaker 2:

As introduced initially, there were four bioink formulations, from bioink A being the thinnest to bioink D being the most viscous, as we go through the 18-day culture period. In a nutshell, all these bioprinted bioinks seem to retain their shape despite being cultured for 18 days. But if we really compare against the actual clinical model outline, the thinnest bioink does not cover these sharp interdental regions, and the ones which are thicker form bulbous margins. These again are not good: the more bulbous the gum margins are, the more they can hold food particles, bacteria can accumulate, and that is going to lead again to gum inflammation. We don't want that. So that's where we can see that bioink B has the optimal viscosity to form optimal gum tissue constructs. These were then validated against actual gum tissues in terms of various markers. I don't want to go into what these markers are, but to simplify, these different markers represent how young the particular gum tissue is, or whether it is more mature and resembles actual gum tissue.

Speaker 2:

So, in a nutshell, what we see is that we started off with a clinical problem where you have a gum defect, and, using the outline of the gum defect, we developed the three-dimensional pattern.

Speaker 2:

Using different bioink formulations, we incorporated the cells which are needed to form the gum tissues, and then the AI platform enabled us to quickly optimize the parameters needed for the bioprinting. Eventually, we can see that we can biofabricate gum tissues of these precise dimensions, and now we are doing some animal work to validate whether these gum tissues can survive and integrate with living tissue in the body. Having said that, whenever we are looking at grafts which are cell-based, it's a long pathway through the regulatory process before it can be applied to human beings. We start with small animal models, then larger animal models, before we start clinical trials. So there's still a long way ahead. With that, I would also like to thank all the funding sources, primarily the National Additive Manufacturing Cluster here in Singapore and the National University Hospital, and the range of collaborators and my students and research staff, both within dentistry and beyond.

Speaker 1:

Thank you, and happy to take questions. Thank you so much, Gopu. Such a great presentation. I'm so glad that you could join us. We have a couple of questions from the audience for you. One question, from Vijay: what is the rationale behind selecting a target filament diameter? And which parameter was

Speaker 2:

the most influential on the filament diameter?

Speaker 2:

Very specific question Mm-hmm.

Speaker 2:

Okay, so the purpose of having a target filament diameter is that if you were to print filaments of very large diameter, then the precision of the shape is difficult to maintain, and if it is going to be very thin, then it's going to take a long time to print, because it's going to take a long time to cover the entire area, and the longer it takes, the more the viability of the cells is at stake.

Speaker 2:

So you need to find the compromise, the right filament diameter, which ultimately makes it more efficient while at the same time giving shape retention. Ultimately, the most influential parameter is printing pressure: we want to have the least amount of printing pressure, because with too much pressure the cells are not going to be viable, and viability of the cells is one of the most crucial aspects if we want to clinically translate. In that way, printing pressure is one of the key aspects; all the other parameters are interrelated with it, and we want to keep the printing pressure as low as possible.

Speaker 1:

Makes sense. Okay, we have another question from Jade, curious how the air-liquid cell culture works. Can you explain? Also, I'm interested that you were able to achieve such high viability, having a very thick tissue sample, without using a perfusion bioreactor.

Speaker 2:

A wonderful question.

Speaker 1:

Many questions in one.

Speaker 2:

Yes.

Speaker 2:

I'll try to go back to one of the slides. Yeah, here, right here: this particular histology is a cross-section of the tissues. What we do is initially print only the fibroblasts, and the keratinocytes are seeded as a single layer on top. The keratinocytes, which are the cells which form the topmost layer, are seeded as just a single layer. Typically, all cells grow in a culture flask as a monolayer. When the cells are growing in a flask, they attach to the bottom of the flask and are submerged in culture medium, so it's all in the liquid phase. In that scenario, whenever the cells divide, they just divide in a horizontal fashion, so the daughter cells occupy the horizontal plane. But the moment you put them at an air-liquid interface, we are shifting the balance: the nutrition is coming only from one side, the bottom, and at the top there is nothing. That sort of establishes a mini gradient within the cell, and that polarizes the cell towards shifting its division polarity. So instead of the daughter cells being arranged in a horizontal fashion, they start to stack up; slowly it stacks up and stacks up and eventually builds the vertical layers you see here, and that's how the complete tissue forms.

Speaker 2:

In terms of the other question, the viability, right, yeah, two parts to it.

Speaker 2:

One is the horizontal dimension. Here, this is around 1.5, or almost up to two, centimeters in width, but width is generally not an issue; it's the thickness which matters, because the nutrition is coming from the bottom. The thickness of these tissues is on the order of around two millimeters, and it is absolutely right that you need perfusion bioreactors for thick tissues. But that is not really necessary when we are growing these tissues at an air-liquid interface, where the media is supporting them from the bottom. There is a trick in terms of how much media we add. There is a sort of passive flow, without having pumps: we can establish flow within the construct by adjusting the amount of media which is added beneath the tissues, not exactly at the bottom but slightly above it, so that you have media being perfused right to the top without submerging it. I hope I explained that in as simplified words as possible.

Speaker 1:

No, it makes sense, totally. I can imagine there are like a thousand other parameters you guys can use AI to optimize to make this thing work. One question is about moving beyond extrusion-based printing, but first, one question from the audience is: what kind of machine did you guys use to do the bioprinting?

Speaker 2:

Here we used the Cellink BIO X triple-head extrusion-based bioprinter, but the same thing should work with other extrusion-based bioprinters as well.

Speaker 1:

Okay. Another question from the audience says how is AI-assisted design different from using a design of experiments such as factorial design?

Speaker 2:

Okay, I think that's a perfect question. From the design of experiments, obviously you get the orthogonal array, but one particular orthogonal array only talks about how two factors are interacting; this orthogonal array and the adjacent orthogonal array are not talking to each other. So the design of experiments gives you the interaction between two parameters, but not the interactions across all the parameters, and the IDentif.AI platform basically helps each of these individual orthogonal arrays talk to the others. We start with a design of experiments, but eventually it takes the design of experiments to the next level.

Speaker 1:

Great. I hope that makes sense, everyone. This is a fantastic presentation. You've done extrusion-based bioprinting. Would there be any desire to move into another type of 3D printing process, like DLP? Things that can be faster, for example, but with different sets of parameters?

Speaker 2:

Good question.

Speaker 2:

What we are looking at, in a way, also answers one of the earlier questions in terms of the bioreactor part.

Speaker 2:

One way is that we can use extrusion bioprinters to print the tissue. The other way is that we can use DLP printers to form the mold, the external part. So what we are doing is using the DLP printers to print the mold and combining that with a bioreactor, where you have a chamber in which we can keep fluid actively flowing, and in that way we are not restricted by the thickness of the tissues. We can have even thicker tissues, or later on, if you want to incorporate even bone within it, you can have a hard tissue to soft tissue interface; that is feasible. So we are using DLP-based printers to fabricate the mold, and eventually we can cast these tissues by manually mixing all these different cell types within fibrinogen and, with a manual pipette, filling it into that space; or we can also combine it with extrusion-based bioprinting. So both are possible.

Speaker 2:

Yeah, that was a good question, which in a nutshell also touches on the other things we are doing.

Speaker 1:

Yeah, look forward to your updates soon. Thank you so much, Gopu. I want to introduce our next speaker so that we have time for a group discussion. The next speaker is Dr Gregory Hayes, from the US. He is the Senior Vice President of Global Additive Minds at EOS and is also himself a startup founder, of a company called Complex Materials, focusing on bespoke bioresorbable materials.

Speaker 5:

Thank you, Jenny. So awesome to be here with you guys. I'm going to talk today about some of the work that we do. What we make, however, is used by medical companies, and so it's very important to us that we can cater to the industry and that our machines continue to operate at the forefront. So let me bring up my presentation. Okay, on the fly, there's one video in here; hopefully we get a chance to show that. But, like I was saying, at EOS I think it's important to understand that we are not a medical company. We make capital equipment that works in the 3D printing industry, and therefore the equipment that we are making serves the aerospace industry, defense industries, the energy sector and also the biomedical and medical sector. So when we think about AI, it's not targeted at the medical industry per se; rather, we think about how we can use AI to improve our machines and to improve the processes that our customers, ultimately doctors and people that are making things for doctors, are going to be using for the benefit of the end patient. And you know, I think by now I don't need to educate the audience on how AI works, but in a few really brief introduction slides: of course, AI is based on datasets and examples, pattern recognition inside of those, and then generalization and prompting to find an answer. The really interesting thing from an EOS point of view is we have a large install base and a long history of generating 3D printing data in many different material sets across many different applications, and so we're able to pre-train a lot of these large language models and large datasets. We're able to specifically pre-train these with EOS data, with data that we've collected from machines that have been running for 15, 20 years, and with knowledge and databases that we've kept internally at EOS. So for the first time now we're able to utilize the strength of having been around in the 3D printing world for quite some time, finally thinking about transforming that data and the algorithms that can be built on top of it. We have also been building algorithms for more than 10 years, so although AI is a relatively new buzzword, it's of course not a new topic. We've been busy at EOS for about the last 10 years, even a little further back, with the topic, and so we're really excited that it's now starting to take off and we see it gaining traction. What I want to talk about today is what some of the applications of AI are that we see, and I'll show a few examples of how we are using it.

Speaker 5:

Fundamentally, we see AI affecting the process of additive manufacturing in three main areas; you see three columns on the slide. The first is large language model applications, from a chatbot to what we would really call knowledge management: understanding that an engineer located in Singapore has solved a problem, and another engineer located in Arizona faces a similar problem. The solution from one area can quickly be applied to another using a large language model, assuming that the question and its solution are incorporated in some sort of larger language model database. I'll show an example of that later on that I think makes a lot of sense.

Speaker 5:

The second way is anomaly detection. EOS machines, but not only EOS machines, a lot of our competitors' systems out there, Formlabs was mentioned earlier by one of the speakers, FDM printers, are all now equipping themselves with various types of sensors. This sensor data, of course, can be used to demonstrate a fingerprint of good design. You can use the sensor data to find patterns of anomalies, to recognize when something may go wrong and, in the most advanced sense, to think about corrections that can be made on the fly, or preventative changes to the process, depending on the sensors you happen to be monitoring. As it goes, EOS machines are fitted with many different sensors, and the systems can also be connected to a network, so you can do things on the fly as well; there's really a lot of power there in anomaly detection. And then the last one is image recognition. Image recognition is also a powerful tool, slightly different from anomaly detection on a dataset, but image recognition using AI has been getting better and better over the course of the last year, and taking pictures of the process as it goes, whether on an EOS printer or of a dental graft like the example we just saw, can really tell you a lot about how the process is evolving, give indications of things that are going well or maybe not going well, and give insights into changes that may need to be made.
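
To make the anomaly-detection idea concrete, here is a deliberately simple sketch of layer-wise flagging: build a statistical fingerprint from sensor readings of known-good builds, then flag layers of a new build that drift too far from it. This is not EOS's algorithm; the choice of sensor statistic, the threshold and the data are assumptions for illustration only.

import numpy as np

def flag_anomalous_layers(good_builds, new_build, z_limit=4.0):
    # good_builds: (n_builds, n_layers) array of a per-layer sensor statistic from known-good jobs,
    # e.g. mean melt-pool emission; new_build: (n_layers,) readings from the running job.
    mean = good_builds.mean(axis=0)
    std = good_builds.std(axis=0) + 1e-9
    z = np.abs(new_build - mean) / std
    return np.where(z > z_limit)[0]          # layer indices worth inspecting or correcting

rng = np.random.default_rng(0)
history = rng.normal(100, 2, size=(20, 500))       # 20 good builds, 500 layers each
job = rng.normal(100, 2, size=500)
job[250] += 15                                     # inject one overheated layer
print(flag_anomalous_layers(history, job))         # expect layer 250 to stand out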

Speaker 5:

At EOS we work in all three categories, and what I want to do now is show you just a glimpse of some of the things that we're prepared to talk about in a public way. Hopefully you can see some of the steps that we've made and it spawns a bit of a discussion. The first is a knowledge management tool. You see here a screenshot, in the center, of a knowledge management tool that we've created. This is a ChatGPT-style chatbot with an EOS interface, but of course it's trained with EOS manuals, with service reports, with reports that show how different machines have been fixed in the field, and, most importantly, it's able to maintain version control. EOS printers have been around for 35 years; like I've said, there are of course many different versions of an M290 out there, many different versions of an M400-4. How can you be sure that the correct maintenance procedure is provided via this tool for the correct machine?

Speaker 5:

This large language model interface really helps our service team do knowledge management in a professional and efficient way that used to take many, many calories. In terms of alignment, sharing and internal trainings, this has been a huge help and had a huge impact for our service team in particular. The second is image recognition, which we talked about. Our machines are equipped with multiple different types of cameras. One type is something we call OT: our OT cameras detect a certain wavelength of light coming off the process bed surface as the parts are being printed, and using image recognition we're able to capture these images layer by layer, which of course creates a database we're able to build on. We're able to infer from that data a fingerprint of a good design that's been done. We're also able to make changes on the fly. So if there happens to be a section of a part which is maybe not getting hot enough, or looks a bit cold, or a section of a part which looks like it may be overheating, there are on-the-fly changes the system can make to correct the printing process as it's happening. And where AI really comes into play here is, without measuring the images, can you predict the images and predict the future of the printing process? Here you get into using true AI algorithms to predict the printing process, and therefore you can very accurately simulate how things are going to progress. This is really advantageous if you have a critical part of a build, going in the direction of first-time-right rates, going in the direction of trying to democratize the complexities of 3D printing in general. It no longer needs to take an industry veteran who can look at a part and determine exactly how to print this component. We now have the knowledge and the software tools such that many different engineers should be able to get the same result looking at the same part with the same software interface. So we're really trying to bring the technology into a scalable manufacturing space and away from a specialized, bespoke, highly engineered process.
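
And for the optical tomography part, a bare-bones version of the per-layer check could look like the following: compare each pixel of a layer's OT image against the layer's own typical intensity and mark regions that look too cold or too hot, the kind of map an on-the-fly correction or a reviewer could act on. Again, this is a hedged illustration with assumed thresholds and synthetic data, not the EOS pipeline.

import numpy as np

def classify_layer(ot_image, cold_factor=0.8, hot_factor=1.2):
    # Reference intensity = median of the exposed (non-background) pixels in this layer.
    exposed = ot_image > 0
    reference = np.median(ot_image[exposed])
    cold = exposed & (ot_image < cold_factor * reference)   # under-melted candidates
    hot = ot_image > hot_factor * reference                 # overheating candidates
    return cold, hot

layer = np.full((64, 64), 100.0)
layer[10:14, 10:14] = 60.0      # a cold patch
layer[40:44, 40:44] = 150.0     # a hot patch
cold_map, hot_map = classify_layer(layer)
print(cold_map.sum(), hot_map.sum())    # 16 16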

Speaker 5:

We're starting on this journey. It's shown really good results, we're really happy with how far we've come up until now, and we're excited to see where it goes in the future. Here, Jenny, you have to tell me if you can hear this. Here's an example of how we use some different AI tools to also help with educational aspects. We do a lot of trainings as EOS; we've developed something that we call our Additive Minds Academy, which is part of our global Additive Minds group, where we try to interface not only with universities but with the users of our technology to make sure that they're up to date on the latest training options that we have. And we don't have a team of infinite people, so we use tools like this, as you'll see, to make our job easier.

Speaker 1:

Yeah, I can hear you.

Speaker 4:

That's good. Powder flow rates are reported as time per mass, or seconds per gram of powder. The particle size distribution, or PSD, testing method used in our lab is dynamic image analysis, which uses a larger test portion of powder than diffraction methods. This means the powder in the test is tens of millions of individual particles.

Speaker 6:

The method used in our laboratory to test the particle size distribution is dynamic image analysis, which uses a larger test portion of powder than diffraction methods. This means that the powder in the test consists of tens of millions of individual particles.

Speaker 5:

I think you see that was one of our lab engineers. She actually runs the testing and quality lab in North America. She does not speak German, although she very much looks like she speaks German, but you can imagine getting this type of information from the experts into a country's own language. We had a speaker earlier from Korea; you can imagine how useful this is as a tool, with a small team, to be able to reach a global audience.

Speaker 5:

Okay, I think it's really important, if you're going to talk about AI or about manufacturing, that you spend a few moments also talking about safety and security. We take this very seriously at EOS, not only from a government perspective in the defense industry, but even from a consumer perspective and for organizations that take security to the highest level. You really need to be cautious about what type of data you are using and what type of AI tool you are using, which then has access to your data, and you really should avoid things you would call a black-box AI solution. The tools are very powerful; the security can be very good, but it can also be very lacking. So please, as a general statement, keep both eyes open as you and your organizations, in your careers, venture into this space. It will have a profound impact on the manufacturing industry in general, I think we've seen that from the speakers thus far today, but we really have to keep our eyes and our ears open as we navigate these waters.

Speaker 5:

And then finally, to try to summarize things together: the additive manufacturing process is inherently a digital process. These processes take into account five or six, and you can add more, bubbles on the right-hand side of the slide, from images to material properties, to sensor data, to data logs, to the process conditions that are happening, and all of these factors need to be taken into account. Understanding how changes in one area affect the mechanical properties of a part, which is of course related to the chemistry of the material being printed, is a complex problem which has been very difficult to solve from an experimental point of view. Using AI tools allows us to harness and control that ecosystem more than we've ever been able to before, and it's very, very powerful. What we think really unlocks some of this powerful nature of AI is looking into multimodal embedding: here you're linking datasets and monitoring with images, all together in a metadata approach to the process that is happening. This is something that we've been working on for a while now, and we see that a lot of advancements are possible here. With that, we also realize that we don't have a monopoly on good ideas. We realize more and more that we need to enable the engineers out there in the world, and so we are actively approaching external organizations. One such organization that we've joined is the AI Alliance; you can Google it. It's a group of mostly software companies working on AI concepts, heavily into open research platforms, and I think they do really nice things over there. We're happy to be a part of it and happy to bring a manufacturing viewpoint to the application of AI in some modern industries.

Speaker 5:

I'm the face of the presentation, but, as we all know, we've been doing this for a long time, and I of course am not responsible for all of the work. There's a great team of people at EOS, and a great team of people spread around the university consortiums that we're a part of, who help us put all of this work together, and these are just a few of the names that have been involved up until now. So with that I'll take questions. Sorry about the presentation format. I mentioned a few companies here; Intraspectral is an interesting one that comes out of the medical world, but, like I said, we are trying to use the best tools that exist, in an open way, to make sure that we get the best technology into our customers' hands so that they can make the best parts for, ultimately, hopefully, the patient's improvement of life at the end. So thanks, Jenny.

Speaker 1:

Thank you, Greg. Yeah, it is unfortunate the presentation format didn't work as expected, but your presentation was fantastic. I learned so much about what you guys are working on. Since we have a pretty significant academic audience, do you guys have any data or algorithm sharing programs with people who are either watching this right now or on demand, that you could potentially collaborate with?

Speaker 5:

Yeah, yeah, we collaborate with many different universities. We share data that comes out of our systems in a relatively open way. We have something which we call our EDN network, our developers network, that people are able to join to get access to some of the inner workings of the software that controls our systems, which of course also helps with getting access to the data that the systems are generating. We even have virtual machines that can be run in test environments. We really have a lot with which we can enable engineers in academic environments to help us push the limits of what's possible. So sure, just reach out and we can start the conversation. That'd be awesome.

Speaker 1:

It's very exciting to see that you guys put so much effort into various AI initiatives. What is the commercialization aspect of these activities? For example, would you be able to generate some kind of software that you can sell, either as a license or a service, as part of the EOS offering?

Speaker 5:

Yeah, that's really interesting. So each of these is commercializable in its own flavor, maybe. We don't necessarily look to make money out of every improvement or aspect that we add to a machine; rather, if it can help additive manufacturing change how industry approaches manufacturing, we see that as a net benefit, right, and we'll ride that wave of success. So some things you will see as improvements to the EOS ecosystem, which will of course just be updates and improvements. Some things may be charged for, depending on how things are rolled out. For example, I put the image analysis on the screen here.

Speaker 5:

Going forward, all of our systems will be equipped with the OT hardware, so the sensors will be pre-built into all of our systems. It used to be an option in years past; it now comes fully integrated. Those sensors can then be activated with software subscriptions, and you can turn on different AI-based improvements to the process through a subscription model. That's one. On the training side, taking advantage of some of the content development, like the video we showed of Dana speaking German, that's something where training is already provided to our customers. You will see improvements to that, to the Additive Minds Academy, roll through basically without a price increase, but rather as a performance increase in the educational platform we're able to build. So it's case by case, and we're trying our best to make sense of the business that can be wrapped around these technologies.

Speaker 1:

Fantastic. Can't wait to see your next step. Okay, I'm going to introduce our final speaker, so we're going to be on time. Our final speaker is Aye Nyein San from Cosm Medical. Aye is currently the head of technology and operations at Cosm Medical Corp, a Toronto-based femtech startup.

Speaker 7:

Thank you, Jenny, very happy to be here. Going to share my screen. Okay, perfect, great. So I am the head of technology at Cosm. I am a systems design engineer by training and a mother of two who's passionate about applying AI, software and 3D printing to drive innovation in women's health. Every woman is different.

Speaker 7:

Cosm was founded by our CEO, Derek Sham, after witnessing his grandmother suffer from severe prolapse. It robbed her of mobility and independence, ultimately leaving her bedridden. His personal story sparked a professional mission to reimagine pelvic health care using the tools of precision medicine. Many of you may not know what prolapse is. I certainly didn't, until, you know, I saw a job posting from Cosm.

Speaker 7:

So pelvic floor disorders like prolapse and incontinence affect one in two women in their lifetime, which is very common. Prolapse occurs when pelvic organs, like the uterus, the bladder in the front, or the rectum in the back, drop from their normal position, often due to weakened muscles. Prevalence rises sharply with age and childbirth: 50% of women over the age of 80 and 33% of women with three or more vaginal deliveries are affected by this. How does it feel? Imagine a constant sensation of heaviness, pressure, or something falling out of your body, sometimes feeling a bulge or even seeing it protrude out of the body. It can cause pain, bleeding due to friction, and emotional distress. Many women also experience urinary and fecal incontinence, leaking when sneezing, laughing or even just standing. These symptoms erode physical confidence, intimacy and social participation. Yet most suffer in silence because current treatments are outdated and very hard to access. Gynecology devices have remained one-size-fits-all for decades. Patients often go through trial-and-error pessary fittings, with over 100 different shapes and sizes described. A pessary is a way of managing prolapse by providing support, inside the vagina, for the organs that are falling down. Clinicians describe this fitting as a clinical art, because they are relying on intuition and not data. The second type of treatment is physiotherapy, but it is underutilized, and surgery is risky. So pessaries are the most common conservative treatment, but the issue is that currently there's a 30% failure rate and 50% dropout.

Speaker 7:

At Cosm, we're building a product solution that makes women's health personalized and predictive, with AI and 3D printing at its core. We're developing the world's first digital gynecology platform, with image-based diagnosis, AI-generated personalization, and secure cloud software, like Greg mentioned, that enables 3D printing to manufacture patient-specific Gynothotic devices. A year ago, we achieved FDA approval and launched Gynothotics in Canada. Our digital platform has both a patient and a clinician portal. The clinician portal provides a platform to design and order Gynothotics and track patient outcomes. The patient app provides educational articles and videos, and aims to improve patient engagement and compliance and to track the journey.

Speaker 7:

We've spoken with many women who are now customers of Gynothotics, and they say that Gynothotics gave them their lives back. One told us, "I literally forgot it was in," that's how comfortable it was. Another said it's like night and day. These are women who struggled for years with pain and social isolation. Now they describe our product as magical, life-changing, and simply, "I can enjoy my life again." So this is the real impact of personalized care.

Speaker 7:

Now, to bring our product to life, we developed a proprietary manufacturing process that balances precision, flexibility and safety. This is very similar to the FITme process, I think, from what I saw William present. We start with patient-specific measurements, with data input coming from our clinician portal into our automated parametric modeling tools. Then, from that CAD and the STLs, we 3D print a high-resolution mold. Into this mold we inject a biocompatible silicone, ensuring safety and long-term comfort for the patient. Our in-house team performs rigorous quality control at key stages, from the initial CAD to the mold to the final product, to make sure that we are outputting exactly the products ordered. This process gives us the flexibility to adjust to each patient's order while keeping manufacturing agile and scalable.
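The talk stays at a high level on the "measurements in, printable geometry out" step, so purely as a hedged sketch (plain Python; the device shape, parameter names and numbers are invented for illustration and are not Cosm's design rules): the example below turns two patient-derived measurements into a simple parametric ring and writes it out as an ASCII STL that a mold-design or printing step could consume.

```python
# Toy sketch of the "measurements -> parametric model -> printable geometry" step
# (illustrative only; parameter names and formulas are hypothetical, not Cosm's).
import math

def ring_mesh(fitted_diameter_mm: float, body_thickness_mm: float,
              n_u: int = 64, n_v: int = 32):
    """Build a torus-shaped ring from two patient-derived measurements."""
    R = fitted_diameter_mm / 2.0      # major radius from the fitted diameter
    r = body_thickness_mm / 2.0       # cross-section radius of the ring body
    verts = []
    for i in range(n_u):
        u = 2 * math.pi * i / n_u
        for j in range(n_v):
            v = 2 * math.pi * j / n_v
            verts.append(((R + r * math.cos(v)) * math.cos(u),
                          (R + r * math.cos(v)) * math.sin(u),
                          r * math.sin(v)))
    faces = []
    for i in range(n_u):
        for j in range(n_v):
            a = i * n_v + j
            b = ((i + 1) % n_u) * n_v + j
            c = ((i + 1) % n_u) * n_v + (j + 1) % n_v
            d = i * n_v + (j + 1) % n_v
            faces.append((a, b, c))
            faces.append((a, c, d))
    return verts, faces

def write_ascii_stl(path: str, verts, faces):
    """Write the triangle mesh as ASCII STL; normals left for the slicer to recompute."""
    with open(path, "w") as f:
        f.write("solid parametric_ring\n")
        for a, b, c in faces:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for idx in (a, b, c):
                x, y, z = verts[idx]
                f.write(f"      vertex {x:.4f} {y:.4f} {z:.4f}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid parametric_ring\n")

# Example: measurements as they might arrive from a clinician portal (made-up numbers)
verts, faces = ring_mesh(fitted_diameter_mm=62.0, body_thickness_mm=9.0)
write_ascii_stl("ring_device.stl", verts, faces)
```

A real pipeline would of course add the mold shell, checks against anatomical limits and quality control; the point here is only how parametric modeling removes per-patient manual CAD work.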

Speaker 7:

Now I'm going to talk about some of our publications. To validate our approach, prior to launch we conducted a pilot clinical study with our patient-specific pessaries, and the results were published in a urogynecology journal. In this study, all eight patients had previously been using standard off-the-shelf pessaries, and each was transitioned to a Gynothotic designed using physician input, patient preferences and goals, and precise anatomical measurements. At the end of the trial period, we saw statistically significant improvement. Patients reported reduced prolapse distress and increased satisfaction, using validated questionnaires. Every patient either improved or maintained ease of use, comfort and support, with no adverse events reported, and every single one of them preferred the Gynothotic over the standard pessary. The journal also published an editorial alongside our paper, calling this a translational success and noting that the future of gynecology is bright. This highlights, even early on in our journey, the potential for personalization.

Speaker 7:

At Cosm, we know that measurement is everything when it comes to personalization. Our early work demonstrated an automated method for accurately identifying key pelvic floor landmarks, like the bladder and the rectum, as marked here, and for extracting the plane of minimal hiatal dimension from a 3D ultrasound, which is key to getting some of the features that go into our AI. Our publication in Medical Physics marks the first reported automated segmentation and biometric measurement system for the mid-sagittal plane of 3D transperineal ultrasound volumes. We were also able to automate pelvic anatomy segmentation in 2D videos. Our algorithm can improve efficiency by shortening the analysis time from 15 minutes to just slightly over 1 second, while increasing accuracy and reproducibility compared to manual methods. Building on this foundation, our ongoing R&D is advancing automated AI-driven 3D segmentation and dynamic 4D pelvic anatomy tracking, enabling real-time, patient-specific insights for personalized diagnosis and treatment planning. Sorry, just give me one second. Okay.
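The published pipeline itself is not reproduced here; as a hedged toy example only (NumPy; the landmark labels, pixel spacing and stand-in mask are invented, not the Medical Physics method), the sketch below shows the kind of biometric measurement that becomes trivial once an automated segmentation mask exists: locate two landmark centroids and convert the distance between them into millimetres.

```python
# Toy illustration of biometric measurement downstream of segmentation
# (numbers and labels are made up; not Cosm's published algorithm).
import numpy as np

PIXEL_SPACING_MM = 0.35            # hypothetical mm per pixel in the ultrasound plane
SYMPHYSIS, ANORECTAL = 1, 2        # label values in a hypothetical multi-class mask

def centroid(mask: np.ndarray, label: int) -> np.ndarray:
    """Centroid (row, col) of one labeled structure in a 2D segmentation mask."""
    ys, xs = np.nonzero(mask == label)
    return np.array([ys.mean(), xs.mean()])

def hiatal_ap_dimension_mm(mask: np.ndarray) -> float:
    """Anteroposterior distance between two landmark centroids, in millimetres."""
    d_px = np.linalg.norm(centroid(mask, SYMPHYSIS) - centroid(mask, ANORECTAL))
    return d_px * PIXEL_SPACING_MM

# Stand-in mask: two small blobs where a model would have segmented the landmarks
mask = np.zeros((256, 256), dtype=np.uint8)
mask[40:60, 100:120] = SYMPHYSIS
mask[200:220, 110:130] = ANORECTAL
print(f"AP hiatal dimension: {hiatal_ap_dimension_mm(mask):.1f} mm")
```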

Speaker 7:

The current standard of care for pelvic assessment is largely based on manual exams. At Cosm we have another product that we're introducing, a measurement tool called V-Caliper, which will improve data accuracy while keeping the process simple and easy to adopt into the clinical workflow. But we didn't stop there. One of the most under-explored aspects of pelvic floor dysfunction is the mechanical behavior of the vaginal canal itself: how it stretches, supports organs, and responds to pressure and movement. This matters deeply in childbirth, in surgery, and especially for prolapse care.

Speaker 7:

So we developed what we call colpodynamic imaging, or CDI, our novel and proprietary ultrasound technique to mold and scan the vagina under controlled distension. The last study I will discuss today is our feasibility study, where we recruited 16 patients with prolapse, distended the vagina using a water-inflated catheter bag, and performed a 3D transintroital ultrasound scan. From the imaging and pressure data we generated detailed 3D measurements of key anatomical planes. Not only were these measurements reproducible, but they also correlated strongly with physical exam findings, such as the POP-Q measurements, as well as with pessary dimensions. So CDI gives us a new way to objectively understand vaginal mechanics. Ultimately, it can guide how we personalize devices, evaluate surgeries, or even predict key risk indicators. Cosm is building the future of gynecology, using AI and 3D printing to transform pelvic health care. We create personalized, patient-matched devices, because one size does not fit all. Our clinical data shows real impact, with improved outcomes and higher patient satisfaction. That sets a new standard, where women get care that is designed for them. Thank you very much.
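The talk does not walk through the pressure-side analysis, so here is a deliberately minimal, hypothetical illustration (plain Python; all numbers are invented, not data from the 16-patient study) of one quantity such paired pressure and volume recordings make available: incremental compliance, the volume change per unit pressure change during controlled distension.

```python
# Toy compliance calculation from paired pressure/volume readings during
# controlled distension (made-up numbers; not the CDI study's actual analysis).
volumes_ml = [20.0, 40.0, 60.0, 80.0]        # catheter-bag fill volumes
pressures_cmH2O = [12.0, 20.0, 31.0, 45.0]   # corresponding intrabag pressures

# Compliance over each increment: C = dV / dP (mL per cmH2O)
for v0, v1, p0, p1 in zip(volumes_ml, volumes_ml[1:],
                          pressures_cmH2O, pressures_cmH2O[1:]):
    c = (v1 - v0) / (p1 - p0)
    print(f"{v0:>4.0f}->{v1:<4.0f} mL: compliance = {c:.2f} mL/cmH2O")
```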

Speaker 1:

Thank you very much. Thank you so much, such a great presentation. This is a field that's not commonly spoken about in conversation, but it's so impactful. It seems like your platform can provide multiple endpoints. Are the imaging capability and the diagnostic aspect of your devices sold separately? How are you currently commercializing this, and what can the public access?

Speaker 7:

What is commercialized currently is our personalized pessary, as a device. The diagnostic side is still in R&D and clinical studies. Okay, but you do want to commercialize that part? Yes, absolutely. The AI, you know, the hardware, it's a harder path to regulatory approval. Yeah, and so...

Speaker 1:

There's a lag there. Some of the homework I plan to do is on software. I mean, soft tissue segmentation, automating that is much harder, I would say, than hard tissue, like bones. How do you tackle that challenge? How do you tackle this dynamically changing morphology of soft tissues and incorporate that into your algorithm?

Speaker 7:

A lot of our AI algorithm development is because of that, because it is not as clear, it is not as simple. Though, you know, with our CDI platform we used, as mentioned, the water-filled catheter bag, and water shows up more clearly under ultrasound. So there are engineering and design ways of improving the data, to get cleaner data and make the analysis easier.

Speaker 1:

Sorry, I have one last question. Do you see this method of capturing soft tissue data translating into other organ systems?

Speaker 7:

I am not a clinician, but I really hope so, you know. I really hope that there is IP there that we could collaborate on for other indications.

Speaker 1:

Absolutely. Okay, great. We have one question from the audience, from Jade: great presentation, wondering about the materials you are printing with for the injection mold. Are you designing these as two-part molds, or are you using a sacrificial, water-soluble, dissolvable material?

Speaker 7:

Good question. I can show you this one. So these are not two-part molds. We made a decision really early on: two-part molds have a parting line, and we want the surface to be as smooth and as good as possible. We also want it to be biocompatible; that's the reason why we decided to do molding with a fully biocompatible, implant-grade silicone. From the eyes of the FDA, we are treated as an implant device because our wear time is longer; even though you can remove it like a menstrual cup, we are treated like an implant. So, you know, there's safety to consider, so we want to minimize the parting line as much as possible. So, yeah, we call this a cocoon mold. It's like an eggshell mold.

Speaker 1:

So how do you remove that mold once you finish injection?

Speaker 7:

Proprietary processes.

Speaker 1:

Okay.

Speaker 7:

It's very tricky.

Speaker 1:

Yeah, good question. All right, well, thank you, Aye, for the excellent presentation. We're actually going to do a podcast soon, so everyone can stay tuned with our podcast to discuss this further. And everybody who's still online, please join us for a panel discussion. I don't know who else is still here. Are we all here, or is William gone already? Okay, probably he's gone. So, thanks so much. My biggest question for you is, I mean, AI is an old but also new topic for me personally, because there are just new things happening all the time. What is specific to 3D printing right now that's ongoing, that you guys are watching? I learned something new today, obviously, this segmentation algorithm that everybody's using; I'm going to look closer at that. What other things are happening in the space that are specifically very impactful to our industry? Anything on your radar right now that you're watching or learning about? Anyone can start.

Speaker 7:

I'm watching direct 3D printing, because ideally, with direct 3D printing, we can make more complex shapes at a faster speed. But the challenge is still at the implant-grade level of regulatory approval and biocompatibility. To my knowledge, there are no approved devices. And then there's also the surface: we've tested a few prototypes, and the surface finish is not as good as what we're able to get with our current process.

Speaker 1:

For silicone specifically? Yes, yes.

Speaker 5:

You know, Jenny, from my point of view, I think 3D printing, additive manufacturing, used to be hard. What some of these software tools can do now is take the complexity out of the process, and they allow engineers who are working on an application or a problem in a medical field, for example, to focus on that application and that problem, and not have to simultaneously, in parallel, also solve a manufacturing process problem.

Speaker 5:

And I think you see that in some of the things that I talked about, and you see that with Dr Gopu as well, right? He's using tools to very quickly work his way through an advanced design of experiments, where he can just focus on getting that best application into the mouth of a patient and not have to worry about the colloidal chemistry happening in a tissue graft. And the same when you want to make something using an EOS printer: you don't need to be a 10-year veteran now, tweaking the process and material chemistry all around to get a good result. These tools are democratizing access to a powerful manufacturing technology. That, for me, is the biggest impact these types of tools are having at the moment.

Speaker 1:

And then the AI consortium that you were talking about, are they actually working on how to make 3D printers easier to use?

Speaker 5:

No, not directly, but I think they would very much like to, and they are thinking in that direction, so I think they can very easily go down that path. But yeah, I think the AI consortium is also looking into, and this is a question for the university people who happen to be on the call now as well:

Speaker 5:

What types of data sets are available out there, and which should be utilized to build and train future models? It's kind of like taking an inventory of what we have to train some of the tools that we're building. I think that's an interesting question as well that they would love to hear about. But yeah, there's a lot to do. There's a lot to do, but the promise, I think, is already starting to show itself in real applications.

Speaker 1:

Is there anything open source you can access as a start? As far as I know, though, there's no true open-source 3D printing resource that exists, like a foundation data set, a foundation algorithm, an AI foundation model, or something like that.

Speaker 5:

If that exists in an open source format, someone needs to tell me, because I'm not aware of it.

Speaker 1:

But it would be awesome. Yeah, Gopu, what's your input on this? What are you watching, and what are you really excited about right now?

Speaker 2:

Right. From the aspect of bioprinting, we are looking more at soft tissues, or, even if you're printing a hard tissue, it starts with soft hydrogels being printed, in contrast to, say, metal printing or ceramic printing or polymer-based printing, where the printed material is hard and so you can have support structures and so on. And that's where I can see why Aye or William are using these molds to eventually make a silicone part, which otherwise is soft. So printing soft materials is a challenge as such, and eventually, if you want to take bioprinting to complex organ systems, say the heart or kidneys and so on, it's going to be very difficult to print internal structures with complex shapes and architectures without the ability to support them. And that's where these things called volumetric printing come in, where you can print within a bath, like a support bath, which helps to keep the printed structure in its shape. So that's where the field is moving now, and that enables much larger or more complex organ systems to be printed without worrying about the shape being lost after printing.

Speaker 2:

But again, for that to be enabled, what is needed is, for example, what I showed: the various tool sets which are available to optimize the bioprinting process. But that is based on the assumption that everything is going to work once we start the printing. What if there's a small air bubble within the bioink? Then all these parameters are already off, right? And that's where you still need, like what Gregory had shown, real-time monitoring and a feedback system that can adjust the settings in real time and reduce the defects in the printing process. And that's where quite a lot of other technologies have been converging to enable this, and that's what will eventually be needed to move things forward.
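Neither speaker specifies how such a feedback loop would be wired up, so as a hedged, deliberately simple sketch (plain Python; the signal, thresholds and correction rule are invented, not any vendor's control scheme): the loop below watches a simulated extrusion-pressure signal, flags an anomaly such as an air bubble causing a pressure drop, and nudges the setpoint in response.

```python
# Toy closed-loop sketch of in-process monitoring for extrusion bioprinting:
# watch a pressure signal, flag anomalies (e.g. an air bubble), and nudge a
# setpoint. All numbers are made up; real systems are far more involved.
import random

TARGET_KPA = 80.0          # desired extrusion pressure
TOLERANCE_KPA = 8.0        # deviation that counts as an anomaly
GAIN = 0.3                 # proportional correction gain

def read_pressure(step: int) -> float:
    """Simulated sensor: nominal pressure with noise, plus a bubble-induced drop."""
    value = TARGET_KPA + random.gauss(0.0, 1.5)
    if 40 <= step < 45:    # transient air bubble in the bioink
        value -= 25.0
    return value

setpoint = TARGET_KPA
for step in range(100):
    measured = read_pressure(step)
    error = TARGET_KPA - measured
    if abs(error) > TOLERANCE_KPA:
        setpoint += GAIN * error              # proportional correction
        print(f"step {step:3d}: anomaly ({measured:.1f} kPa), "
              f"setpoint adjusted to {setpoint:.1f} kPa")
```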

Speaker 1:

Totally. In the back of my head, it's like maybe I should start this open-source effort. By the way, have you heard of this company called FluidForm 3D, out of Boston and CMU? I think the co-founder, Adam Feinberg, also wrote a couple of papers on using AI for bioprinting.

Speaker 5:

Yeah, they have the vat silicone-based printer, right?

Speaker 1:

They have FRESH bioprinting; I think that's the general concept they operate under. And recently, actually, I can send you the info later, there are a couple of light-based volumetric bioprinting companies that are further along in their commercialization. That could be useful for faster direct 3D printing, bioprinting with cells, so that's also something that's very interesting on the soft tissue side as well.

Speaker 5:

Yeah, but if these technologies can become easier to use, because they are being monitored with some sort of in-situ monitoring program that can do on-the-fly corrections, then Gopu's engineers and the engineers at some of these other startups can focus on the medical aspect of the part they're making. They don't also have to be part-time manufacturing engineers sorting out, you know, kind of an immature process. The manufacturing process can stop being a bottleneck in the advancement of healthcare.

Speaker 1:

Basically, oh yeah, totally. You know, just the other day I was thinking about whether 3D printing is back in its hype cycle again, which we're certainly not; we're the opposite of the hype cycle. Imagine if 3D printing were where AI is today. Every company right now wants to use AI; if you don't use AI, you're going to lose out. That's the scenario, it's like a FOMO experience. Imagine one day when every company, manufacturing facility, or innovating startup thinks that we cannot go forward without a 3D printer or a 3D printing process embedded in our business, that we cannot survive. Imagine that day comes; that will be the revival of 3D printing. And I think the huge bottleneck right now is 3D printing itself, because it requires tinkering, and I have tinkered for a couple of years and I gave up. That's my own personal experience.

Speaker 5:

Maybe if you tinker now with the helping hand of some AI algorithms, you may find yourself with a different outcome.

Speaker 1:

Yes, different outcome.

Speaker 5:

But you know, I'm also calling in to this from Galway in Ireland, which, for all intents and purposes, can be argued to be the kind of global hub for medical 3D printing, and I would argue that it has taken root, right? Additive in these large medical device companies: they have adopted it, no one is thinking about going backwards, it's more about how we work with it. I agree with that statement, but I don't know, do you think that some AI applications may also find themselves a victim of overhype?

Speaker 1:

Oh, for sure, absolutely. I would say more than 50% of the LLM-based companies will go away. That's my prediction. You can test it out five years later and see how that works out.

Speaker 5:

We'll come back to you.

Speaker 1:

Yeah, you know, I think the test of whether or not 3D printing needs to exist, that's past. I definitely believe 3D printing needs to exist; I mean, obviously we exist because of that. I think the question is, how fast can it grow? Can it grow faster? And I think that's the bottleneck right now, and hopefully AI could help.

Speaker 5:

Yeah, I agree. And in the kind of down-selection of LLM companies, you know, it's hard to be a platform company; only one can kind of exist at the end. But these tools keep getting better and better, so I think those who control the best data sets to teach the algorithms will hold the keys to future technology growth, and that's where I think we see the value coming in as EOS, anyway. And I hope everyone will still be bullish on AI five years from now.

Speaker 1:

I have a sense that most of you will. And thank you again for joining this webinar. We went over time a little bit, but this was a great discussion, and this webinar will be online on demand for free, so people can just register and watch it if you have colleagues who want to learn. And thank you again for your time. Thanks, Gopu, for staying up so late. Really appreciate it. It's quite insightful to learn from industry leaders.

Speaker 1:

I'm so excited about all this new information and these tools. I can't wait to read up on them. Thanks for the opportunity.

Speaker 5:

Yeah, awesome, thank you.

Speaker 1:

Goodbye everyone.

Speaker 5:

Bye.