Text-Guided Image Inpainting In Jupyter Notebook on Apple Silicon (MPS)

less than 1 minute read

For this one we need the Hugging Face Stable Diffusion inpainting model. Read and accept the licence on the model page, then download the repository and place it in your models folder. I suggest fetching it with git lfs install followed by git clone https://huggingface.co/runwayml/stable-diffusion-inpainting. This downloads the model checkpoint and the associated files needed to make the following code block work.
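If you prefer to stay in Python rather than shelling out to git, the same files can be fetched with the huggingface_hub library. This is a sketch, assuming huggingface_hub is installed and the licence has already been accepted on the model page; the helper name is my own:

```python
def fetch_inpainting_model(local_dir="models/stable-diffusion-inpainting"):
    """Download the inpainting checkpoint into the local models folder."""
    # Deferred import so the rest of the notebook runs without this package.
    from huggingface_hub import snapshot_download

    # Mirrors the git-clone approach: pulls the full repository,
    # including the LFS weight files, into local_dir.
    return snapshot_download(
        repo_id="runwayml/stable-diffusion-inpainting",
        local_dir=local_dir,
    )
```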

import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionInpaintPipeline

# Run the pipeline on the Apple-silicon GPU via the MPS backend.
DEVICE = 'mps'
pipe = StableDiffusionInpaintPipeline.from_pretrained("models/stable-diffusion-inpainting").to(DEVICE)
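The 'mps' device assumes an Apple-silicon Mac. If the notebook might also run elsewhere, a minimal fallback sketch is:

```python
import torch

# Prefer the Apple-silicon GPU (MPS); fall back to CPU if unavailable.
DEVICE = "mps" if torch.backends.mps.is_available() else "cpu"
```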

The following code downloads our initial image and mask image examples.

def download_image(url):
    # Fetch the image, failing loudly on HTTP errors rather than
    # handing a broken payload to PIL.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
init_image

(the initial image is displayed)

mask_image

(the mask image is displayed)
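The mask follows the pipeline's convention that white pixels are repainted and black pixels are kept. To inpaint your own photos, a mask can be sketched directly with Pillow; the box coordinates below are arbitrary placeholders:

```python
from PIL import Image, ImageDraw

def make_box_mask(size=(512, 512), box=(128, 128, 384, 384)):
    """Build a mask image: white pixels are repainted, black are kept."""
    mask = Image.new("L", size, 0)                 # start all black: keep everything
    ImageDraw.Draw(mask).rectangle(box, fill=255)  # white box: region to inpaint
    return mask.convert("RGB")
```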

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image

(the inpainted result is displayed)
