Midjourney's Style Tuner: From Basics to Advanced Features

PLUS: updates from Stability AI, Google, and Microsoft

In this newsletter, read about:

  • 🕵️‍♀️ Style Tuner: What You Need to Know

  • 🗞 News and Top Reads

  • 📌 AI Art Tutorial: 7-Step Style Tuning Process

  • 🎨 Featured Artist: Chris Fuller

  • 🤓 A Comprehensive Midjourney Guide

🕵️‍♀️ Style Tuner: What You Need to Know

Just as I sent out last week’s newsletter about crafting your own style in Midjourney, they released their Style Tuner, which takes this to a whole new level. It’s a fairly advanced tool, and to cover all its details, I’ll probably need to dedicate a few posts to the topic.

Today, I’ll go through the process of generating a Style Tuner, and then I’ll list all the important things to keep in mind when creating, sharing, and using Style Tuners in Midjourney. Some of these considerations will be discussed in more detail in my future posts.

What is a Style Tuner?

When Midjourney creates an image for you, it goes beyond following your prompt and applies its “house” style to the generated images: the higher you set the --s parameter, the more of this default style you get. If you specify --style raw instead, you get the “raw” style rather than the default one.

With a Style Tuner, Midjourney lets you create and then use your own styles instead of, or rather on top of, the default or raw ones. The style you create is encoded in a sequence of letters (a code) that you can add to your prompts with the --style parameter (e.g., --style jqKjdz7fxeZijfcH). When you specify this code in your prompt, the stylization parameter --s regulates how much of your style is added to the image.
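
To make this concrete, here is an illustrative comparison of the three options with the same subject; the style code is just the placeholder from above, so substitute your own:

  • a portrait of a woman --s 250 (Midjourney’s default style)

  • a portrait of a woman --style raw --s 250 (the “raw” style)

  • a portrait of a woman --style jqKjdz7fxeZijfcH --s 250 (your tuned style, dialed up or down by --s)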

Sounds pretty exciting! Now let’s see how we can actually build these Style Tuners.

Creating a Style Tuner

Start with typing /tune and then your prompt.

/tune prompt: a portrait of a woman with enchanting eyes, in light colors

The prompt does not directly affect your style; rather, it guides the content of the sample images generated at the next step, from which you will choose your preferred aesthetics. However, note:

  • Your created style will perform best with semantically or literally similar prompts.

  • Also, the prompt will include the phrase that unlocks, or activates, the corresponding style. Without this phrase in the prompt, the style will not be properly activated, even if you add the code. You can discover this triggering phrase experimentally, but usually these are the words that carry the most weight when you run your prompt through the /shorten command (see the example below).
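
For instance, you could run the /tune prompt from above through /shorten; the words it reports as carrying the most weight are the likely candidates for the triggering phrase (the exact output depends on your prompt):

/shorten prompt: a portrait of a woman with enchanting eyes, in light colors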

Next, you’ll see the following message.

First, you can choose here how many style, or visual, directions you want to explore (16, 32, 64, or 128). Each direction represents a behind-the-scenes configuration of Midjourney that will pull the style in one direction or another. So, with more visual directions, you might be able to create a more nuanced style, but it will also cost proportionally more GPU credits.

Second, you can specify whether you want the sample images to be rendered in default mode or raw mode. Select the latter if you plan to use the resulting --style in raw mode.

Next, you’ll be asked once again if you are ready to spend the estimated number of GPU credits on building the Style Tuner. After your confirmation, the generation will start, and in a few minutes, you’ll get the link to your personal Style Tuner.

On this page, you may:

  • compare pairs of 4-image sets, choosing one of them (or a “black square” if you don’t have a preference), or

  • just pick your favorite images from a big grid.

The recommendation is to pick only images that you strongly like; this can be just 5-10 images or image sets. Choosing fewer options results in a clearer and more recognizable style, whereas if you are looking for something more nuanced, you’ll need to select more images.

That’s it! When you are done with the selection, you just need to copy the code at the bottom of the Style Tuner page, and use it in your future prompts.

Using a Style Tuner

Let’s see how our newly created style will perform with different prompts.

We’ll start with the prompt we used to create our style, to see how Midjourney mixed our preferences and what the resulting style looks like.

a portrait of a woman with enchanting eyes, in light colors --style raw-6uo9BPU3K8ak1HJs --s 500

Then, we can experiment with other subjects. For example, this style should transfer well to a portrait of a man instead of a woman.

a red-haired man with enchanting eyes, in light colors --style raw-6uo9BPU3K8ak1HJs --s 500

It looks similar, but note that in my prompt I repeated the part “with enchanting eyes, in light colors”, because I’ve discovered that the words “enchanting” and “light” trigger this style. For comparison, see the result of a similar prompt without these words.

a red-haired man with beautiful eyes --style raw-6uo9BPU3K8ak1HJs --s 500

The resulting images still resemble our style, but not as closely as the ones we got with the triggering phrase.

Now, let’s see what happens if we try to transfer this style to a completely different subject.

an enchanting city street in light colors --style raw-6uo9BPU3K8ak1HJs --s 500

The first image is close, but the other three do not resemble our newly created style, even though we used the triggering words.

Things to Know about Style Tuners

There are a lot of nuances and details to know about Style Tuners, and I hope to cover most of them in my future posts. But here I briefly list a few things you need to know if you want to use this tool effectively:

  • You may combine multiple codes with --style code1-code2. To give a higher weight to one of the codes, repeat it several times (e.g., --style code1-code1-code1-code2); see the example prompts after this list.

  • If your style doesn’t transfer well to a new prompt, try higher values for the stylize (--s) parameter (e.g., 300, 500, 800).

  • If your prompt contains phrases guiding the image aesthetics (covered in my previous post), these aesthetics will blend or compete with the style encoded in your code.

  • You can take any style code you see and open its Style Tuner page by putting it at the end of this URL: https://tuner.midjourney.com/code/StyleCodeHere. On this page, you can continue tuning the style by selecting different images, and you’ll get a new code to use in your prompts. Such tuning won’t cost you any fast hours.

  • There is also --style random, which lets you generate random style codes without building a Style Tuner.

  • The official Midjourney documentation on Style Tuner is here.
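
To illustrate the first two tips, here are a few hypothetical prompts (code1 and code2 stand in for real style codes, and the --s values are just starting points to experiment with):

  • a portrait of a woman with enchanting eyes, in light colors --style code1-code2 --s 500 (blend two styles equally)

  • a portrait of a woman with enchanting eyes, in light colors --style code1-code1-code1-code2 --s 800 (weight the first style more heavily and push stylization higher)

  • a portrait of a woman with enchanting eyes, in light colors --style random (let Midjourney pick a random style)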

Happy prompting!

🗞 News and Top Reads

  • Google introduced new AI-powered features into Performance Max, a tool that helps customers generate and enhance their creative assets for Google Ads.

    • New asset generation capabilities let users create headlines, descriptions, and images.

    • Performance Max will also take performance data into consideration when suggesting or generating certain assets for the campaigns to help the corresponding ads perform well.

  • Stability AI has expanded its Stable Diffusion platform with advanced 3D capabilities and image fine-tuning features.

    • The most notable addition is the Stable 3D model, which empowers businesses to create high-quality 3D content for graphic design and even video game development.

    • The Stable Diffusion platform also now offers Stable Fine-Tuning, designed to help enterprises expedite the image fine-tuning process for specific use cases.

    • Additionally, the company will integrate an invisible watermark for content authentication in images generated by the Stability AI API.

  • Microsoft announced a partnership between Xbox and Inworld AI to empower game creators with the potential of generative AI.

    • The toolset they want to create together will include an AI design copilot that assists game designers by turning prompts into detailed scripts, dialogue trees, quests, and more.

    • In addition, they want to integrate an AI character runtime engine into the game client, enabling entirely new narratives with dynamically-generated stories, quests, and dialogue for players to experience.

📌 AI Art Tutorial

For those overwhelmed by the new Style Tuner feature, Future Tech Pilot breaks the workflow down into a simple 7-step process for you to follow – from the one specific thing you need to include in your /tune prompt to what you need to do as soon as you find a style you like.

🎨 Featured Artist: Chris Fuller

Chris Fuller, also known as ai.twofull, is an AI art prompt engineer whose unique style, combining colour, quality, and imagination, has proved incredibly popular. Check out his artwork on Instagram @ai.twofull.

🤓 A Comprehensive Midjourney Guide

To get a link to a comprehensive Midjourney guide, please subscribe to this newsletter. The guide is a living document that I intend to keep up to date with the latest Midjourney updates.

Share Kiki and Mozart

If you enjoy this newsletter and know someone who might also appreciate it, please feel free to share it with them. Let's spread the word about AI art and introduce more people to this fascinating field!
