10 Tips for Using "Vary Region" in Midjourney

PLUS: news from Google, DeepMind, and Pika

In this newsletter, read about:

  • 🕵️‍♀️ Vary Region in Midjourney

  • 🗞 News and Top Reads

  • 📌 AI Art Tutorial: Creative Upscaler

  • 🎨 Featured Artist: Valentina

  • 🖼 AI-Assisted Artwork of the Week

  • 🤓 A Comprehensive Midjourney Guide

🕵️‍♀️ Vary Region in Midjourney

"Vary Region" is like your secret weapon in Midjourney, helping you tweak your generated images until they're spot on. But hey, let's be real—it's not always a smooth ride, and sometimes it takes a bit of time to nail that perfect replacement.

In this post, I've rounded up some handy tips for getting the most out of the "Vary Region" feature. They come straight from the #prompt-faqs guide on the Midjourney Discord channel and my own trial and error. So, let's jump right in!

1. Ensure that Remix mode is activated to be able to edit the prompt for the selected region.

When you have the Remix mode switched on, like in the image, you have the opportunity to edit the prompt for the specific region that you want to modify.

2. Choose a region slightly larger than the area you wish to modify.

If you fail to select the entire region that needs to be changed, you won't get the desired result. It's better to choose a slightly larger area to be safe.

For example, I liked the image below but wanted to add a bright spot – a red dress.

This is an over-the-shoulder cinematic shot of a couple, man and woman, arguing. The woman is sad, almost crying. The man and the woman are in their nice and expensive apartment. --ar 16:9 --stylize 300

So, I made sure to select the entire region relevant to this replacement, including the woman's neck, so there was enough space to change the color of the dress. If I had left even a little bit of the brown dress outside the selection, the inpainting wouldn't have worked.

With enough space selected, Midjourney was able to generate a nice image with a woman wearing a red dress.

3. Position the region to encompass some surrounding context, providing adequate size and scale for the new object.

When you want to add an object to the image, make sure its size is appropriate relative to the other objects in the scene.

In the image below, I can see something weird on the couch, below the glass of beer. I decided to replace it with a cat.

A cinematic shot of a casually dressed woman sitting on the big couch at her home. She is drinking beer and watching TV. --ar 16:9 --stylize 300

As you can see, I selected quite a large area, including the glass of beer, to give Midjourney a hint about what size of cat would be appropriate and to leave enough space to fit the cat.

The cat looks a little bit weird, but in general, the result is acceptable. A few more re-rolls might help if you want to get a perfect shot with a perfect cat.

4. Avoid excessive coverage of the surrounding area to prevent blending.

While it’s important to select a large enough region to provide space and context, it’s also crucial to avoid including too many other objects in the selection. Otherwise, those objects may take on features of the object you want to add.

For example, here I selected too big a region, covering the entire glass of beer and too much of the woman.

As a result, the glass of beer is entirely gone, and the woman’s hair starts looking similar to the cat’s fur.

5. Reinforce proper placement, scale, and size through the prompt that provides the general context.

As you have probably noticed, I usually keep most of the details from the initial prompt when using the “Vary Region” feature. This ensures that the prompt provides enough context for the new object to be placed correctly. However, if your initial prompt is very long and includes many details that are irrelevant to the region you want to change, feel free to remove them – this will help Midjourney focus on the details you want to change or add.

6. Refocus the prompt on what you want featured in the selected region.

In the examples above, I just added the new details in the middle of the prompt, where they fit best. In these simple examples, with relatively short prompts, that was enough. However, in many cases you will need to refocus the prompt: start it with the new details you want to change or add, and then provide the general context needed for their correct placement.

7. Keep style guidance in the prompt to ensure consistency with the entire image.

Also, it’s important that the new prompt you use with “Vary Region” includes any specific style guidance that you had in the original prompt (e.g., cinematic shot, illustration, coloring page, 1960s style, etc.). This will help the new details to follow the same style as the rest of the image.

8. Experiment with higher values of --chaos and --weird to challenge Midjourney's semantic rules.

If Midjourney persistently refuses to give you what you want in the selected region, try increasing the --chaos and --weird parameters a little. This will produce some new, very different results that might be closer to what you are looking for. But be careful: with these parameters set too high, Midjourney goes wild.

9. Reduce --stylize and consider using the --style raw parameter to enhance adherence to the prompt.

Another way to troubleshoot “Vary Region” is to reduce stylization and include --style raw. This should shift the resulting images away from what Midjourney thinks is perfect and toward what you ask for in the prompt.

10. Keep re-rolling and experimenting with the prompt until you get the desired outcome.

Finally, keep re-rolling and experimenting with the prompt, word order, and selected region. It often takes quite a few attempts to get the perfect image.

Hopefully, this brief post will help you use “Vary Region” more effectively. Happy prompting!

🗞 News and Top Reads

  • Google's Gemini image generator went too far in its attempt to incorporate diversity into generated images.

    • The tool faced accusations of being "anti-white" after portraying the Founding Fathers as Black and including almost no white individuals when prompted to generate images of American or Australian women.

    • Google acknowledged the issue and chose to temporarily halt the generation of images featuring people.

    • They aim to resolve the problem within a few weeks.

  • Google's AI venture DeepMind unveiled a live demonstration of Genie, a generative AI model capable of crafting playable games from basic prompts.

    • Genie, an abbreviation for Generative Interactive Environments, specializes in generating side-scrolling 2D platformer games using either user prompts or images.

    • The tool acquired knowledge of game mechanics through extensive analysis of hundreds of thousands of gameplay videos.

    • While Genie excels in constructing 2D environments from text or images, its capabilities extend beyond side-scrollers, potentially including the ability to instruct other AI models or "agents" about 3D worlds.

  • Pika has launched a new Lip Sync feature, further advancing the AI video space.

📌 AI Art Tutorial: Creative Upscaler

In this video, Matt introduces a new Creative Upscaler by OpenArt and Stability AI. It doesn't just increase the resolution of the original image; it can add lots of intricate detail while staying true to the original.

🎨 Featured Artist: Valentina

Valentina is an AI explorer creating outstanding dark fantasy scenes with AI. Her artwork is unique and instantly recognizable. Check @darkhour_ai to discover more AI tales from dark lands.

🖼 AI-Assisted Artwork of the Week

🤓 A Comprehensive Midjourney Guide

To get a link to a comprehensive Midjourney guide, please subscribe to this newsletter. The guide is a dynamic document, which I intend to keep up-to-date with the latest Midjourney updates.

Share Kiki and Mozart

If you enjoy this newsletter and know someone who might also appreciate it, please feel free to share it with them. Let's spread the word about AI art and introduce more people to this fascinating field!
