Future of Selections in Photoshop with Adobe Sneaks
Hi everyone. This is a slightly unusual video, because we're going to sit down side by side and watch YouTube together. Why am I making you do this? You don't have to; you can skip ahead. But what I want to show you is a bit of the future of Photoshop, so that you're in the know. I love being in the know. It's all about masking things out of images, and it's something called Deep Fill, which Photoshop is developing in the background. It's not in the product yet, but it will be in the future, so this gives you a kind of secret backstage pass to what they're doing. They announced it at last year's Adobe MAX conference, which is the big conference they have every year, at something called Sneaks, on the last day. It's my favorite part of the whole conference: they just show you stuff that's going on behind the scenes, not available yet, but coming, and it's pretty amazing. All right, let's watch it together. You can skip ahead to the next video, or later on just go and check out Project Deep Fill. Otherwise, sit back, relax, skip the intro, and let's jump in.

From the video: "Please welcome Jiahui Yu." "Thank you. Hi, I'm Jiahui, and today I will introduce some image hole filling technologies. As you all know, Photoshop already has a very powerful image hole filling tool called Content-Aware Fill, which can be used to remove distracting objects or undesired regions in an image to make it nicer. It works well in most cases, but my case here is quite complicated: I have a selfie here, and I want to remove this region because of the annoying balcony. So I mask the region, and let's see what Content-Aware Fill can give us. Well... one, two, three, four, five... four eyes."

Now I'm just going to talk over this bit. The main reason for this failure is that Content-Aware Fill does not try to understand the image; it relies only on copying the surrounding areas, the surrounding pixels, into the hole. The argument is that a good image hole filling system should be able to understand the face and fill the nose with a nose, not with eyes or a mouth or something else. To bridge this gap and solve these very challenging image hole filling problems, they introduce Project Deep Fill: it leverages the power of Adobe Sensei and deep neural networks to build a tool that can actually understand the image. Let's see how it performs. He presses the Deep Fill button and... yes. Cool, huh? What if we mask the entire eyebrow? Can Deep Fill return us a new one? Let's try it: he masks the eyebrow and shows Content-Aware Fill first. This time it copies the mouth into the masked region. Now let's see Deep Fill: it can successfully hallucinate a new eyebrow for you. So we know that Deep Fill works on faces, but more broadly, people travel around the world, take photos, and find people in them they want to remove.
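If you're curious what "copying the surrounding pixels into the hole" actually means, here's a minimal sketch in pure NumPy. To be clear, this is not Adobe's algorithm (Content-Aware Fill is a far more sophisticated patch-based system, and Deep Fill is a neural network); the function name and the toy image are made up for illustration. It just fills a hole by repeatedly averaging neighbouring pixels, which shows the core limitation: a purely local fill has no idea what the image contains.

```python
import numpy as np

def naive_fill(img, mask, iters=400):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.

    This mimics the basic idea of classical hole filling: the hole is
    reconstructed purely from the surrounding pixels, with no notion of
    faces, eyebrows, or anything else in the scene.
    """
    out = img.astype(float).copy()
    hole = mask.astype(bool)
    out[hole] = 0.0  # unknown pixels start empty
    for _ in range(iters):
        # Average of the four neighbours. np.roll wraps at the border,
        # but the hole here is interior, so the wrap never matters.
        up = np.roll(out, -1, axis=0)
        down = np.roll(out, 1, axis=0)
        left = np.roll(out, -1, axis=1)
        right = np.roll(out, 1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[hole] = avg[hole]  # only hole pixels are updated
    return out

# Demo: a smooth horizontal gradient with a square hole punched in it.
img = np.tile(np.arange(64, dtype=float), (64, 1))
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True
filled = naive_fill(img, mask)
```

On a smooth gradient this recovers the missing region almost perfectly, because the surroundings predict the hole. On a face, local copying or smearing is exactly what produces results like the "four eyes" fill in the demo, which is the gap a learned, image-understanding model is meant to close.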
For example, this one: "In my case here, I took a photo in Bryce Canyon National Park. It's a wonderful arch, beautiful weather, but I find two people on the top." So what he's going to do is mask them out. Let me explain what's going on in the background: Adobe Sensei is Adobe's machine learning and artificial intelligence engine. Watch what happens; watch them disappear. The image is quite high resolution, too. What Adobe Sensei is doing is looking at other people's images that it has found online. That's crazy. It goes off and says: you've taken this photo of a popular spot, so Sensei will see if it can find the same location online, grab data from those images, and put it into your image, without you asking. Crazy, right?

"This one looks visually realistic, but it's not semantically correct. With our Deep Fill, by the way, we can mask multiple regions and hallucinate them all in one shot. Let's see it. And yes, this is Deep Fill. Wow. One more thing: so far, users still have no control over what Deep Fill will put in the masked region, so of course we can provide multiple solutions for users to choose from."

Another thing to mention: this is crossing over into the video world, where they're looking at doing the exact same thing, but in live action. People's videos online can be used to mask out live action: people walking in front of your video of the Eiffel Tower can be masked out, because there are enough videos online, enough data, to reconstruct what's behind them with the fill.

"Let's see how it performs. And yes, this is the AI-powered, user-guided image hole filling technology: Project Deep Fill." Powered by Sensei... that is awesome. They just have a bit of preamble and a little bit of chat afterwards.
So yeah, it's looking at data that exists online and trying to use it with your image. Photoshop dives into the internet, says "here are some images that are the same, or that we think are the same", and starts using those to build the fill, rather than relying on just what's in the image. They call it Adobe Sensei; it's their kind of background machine learning and artificial intelligence layer, and you'll see more and more of it across Adobe products. Content-Aware Fill is pretty amazing by itself, but when we get into this kind of reaching out, out of Photoshop and out of the image, things are going to get pretty cool.

Also note, that was Adobe MAX in Vegas last year; it was my first time, and it was so good. If you are going to this year's one in L.A., drop me a line. If any students are going and want to hang out, we'll just grab a beer one night, get a few students together, and have a little chat; it doesn't have to be anything too special. But if you are going to Adobe MAX, let me know. All right, that is enough YouTube watching together. Let's jump into the next group of videos.