Easily CLONE Any Art Style With A.I. (MidJourney, Runway ML, Stable Diffusion)

Brief Summary

This video demonstrates three AI methods—Midjourney, Runway ML, and Stable Diffusion—for replicating art styles. It provides step-by-step instructions on how to use each platform, including setting up accounts, uploading style samples, and generating images. The video emphasizes ethical considerations, such as obtaining permission from living artists before replicating their work for profit.

  • Midjourney is used via Discord with image links and text prompts.
  • Runway ML involves training a custom model with uploaded images.
  • Stable Diffusion is implemented through a Google Colab notebook, requiring a Hugging Face account and specific settings for training and testing the model.

Introduction

The video introduces the concept of using AI to replicate art styles, presenting a scenario where Burning Man sculptures are reimagined in the style of Salvador Dalí. The creator outlines three methods for achieving this: Midjourney, Runway ML, and Stable Diffusion. A disclaimer is given, urging viewers to use these techniques ethically, particularly when replicating the work of living artists, and to consider obtaining permission or using the techniques for experimental purposes only.

Midjourney

The first method uses Midjourney, which is accessed through its Discord server. Users can start with a free trial, but a paid plan is needed for extensive use. To replicate a style, upload a photo to Discord, copy its link, and use the /imagine command followed by the link and a text prompt describing the desired image. The creator shares the prompts used to generate images of a zebra, lion, and cheetah in an abstract style, including parameters for 8K resolution, a 3:2 aspect ratio (--ar 3:2), Midjourney version 4 (--v 4), and quality 2 (--q 2).
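The Discord workflow above can be sketched as a small prompt builder. The flag syntax is Midjourney's, but the image link and animal description here are placeholders, not the creator's verbatim prompts:

```python
# Sketch: assemble a Midjourney /imagine command from an uploaded image link
# and a text prompt, using the parameters mentioned in the video
# (8K, 3:2 aspect ratio, version 4, quality 2).

def build_imagine_command(image_url: str, description: str) -> str:
    """Combine a style-reference link with a text prompt and parameters."""
    return f"/imagine prompt: {image_url} {description}, 8K --ar 3:2 --v 4 --q 2"

# Placeholder link and subject, for illustration only:
cmd = build_imagine_command(
    "https://cdn.discordapp.com/attachments/example/style.png",
    "a zebra in an abstract style",
)
print(cmd)
```

The resulting string is what would be pasted after typing /imagine in the Discord chat box.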

Runway ML

The second method focuses on Runway ML, where users need to create an account and navigate to the AI magic tools to select the custom generator option. Runway ML recommends uploading 15 to 30 sample images of the desired style to train the AI model. The video creator used 30 cropped images of their abstract art. Training the model costs $10. Once the model is trained, users can input prompts to generate images in the chosen style. The platform allows control over the number of image options generated, output size, resolution, style, medium, mood, and prompt weight. The video creator shares the prompts used to generate images of a zebra, lion, and cheetah in their abstract style, specifying brush strokes, drips, and vibrant colors.
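Runway ML's recommended sample-set size can be captured in a small helper. The 15-to-30 range comes from the video; the function itself is purely illustrative:

```python
# Sketch: check a training set against Runway ML's recommended 15-30 samples
# (range taken from the video; the helper name is illustrative).

RECOMMENDED_MIN, RECOMMENDED_MAX = 15, 30

def sample_count_ok(image_paths: list) -> bool:
    """Return True if the sample set falls inside the recommended range."""
    return RECOMMENDED_MIN <= len(image_paths) <= RECOMMENDED_MAX

# The creator used 30 cropped images of their abstract art:
print(sample_count_ok([f"crop_{i}.png" for i in range(30)]))  # True
```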

Stable Diffusion

The third method uses Stable Diffusion via a Google Colab notebook linked in the video description. Users must connect to the notebook, run the initial setup cells, and grant access to Google Drive. A Hugging Face account is required to create an access token, which is then copied into the notebook. The video creator uses the same 30 images from Runway ML to train the model but notes that a smaller set can also work. Training involves setting the number of training steps (the creator used 100 steps per image) and the text encoder steps (200-450). After training, users can test the model by uploading a base image and writing a prompt. The video creator used a zebra photo and adjusted settings like sampling steps, sampling method (DDIM), and resolution. The prompts used included details like zebra stripes, colorful abstract elements, graffiti, drips, and expressive brush strokes.
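The training-step arithmetic above is simple enough to sketch directly. The numbers (100 steps per image, 200-450 text encoder steps) are the settings the creator used in the video; the function name is illustrative:

```python
# Sketch: training-step arithmetic from the video's Stable Diffusion settings.
# 100 training steps per image; text encoder steps chosen from a 200-450 range.

STEPS_PER_IMAGE = 100
TEXT_ENCODER_STEP_RANGE = (200, 450)

def total_training_steps(num_images: int) -> int:
    """Total steps at the per-image rate used in the video."""
    return num_images * STEPS_PER_IMAGE

print(total_training_steps(30))  # 3000 steps for the creator's 30-image set
```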

Conclusion

In conclusion, the video summarizes the three methods—Midjourney, Runway ML, and Stable Diffusion—for replicating art styles using AI. The creator asks viewers to share their opinions on which method worked best for replicating their art style and encourages questions and suggestions for future in-depth tutorials. The video ends with a call to like and subscribe for more content.
