How to Use Style Random in Midjourney

PLUS: OpenAI's coup, Stability AI's video generation, Runway's updates

In this newsletter, read about:

  • 🕵️‍♀️ Style Random

  • 🗞 News and Top Reads

  • 📌 AI Art Tutorial: Midjourney Style Sliders

  • 🎨 Featured Artist: David Szauder

  • 🖼 AI-Assisted Artwork of the Week

  • 🤓 A Comprehensive Midjourney Guide

🕵️‍♀️ Style Random

Today, I want to continue my series on Midjourney’s Style Tuner by exploring --style random. This feature gives you another way to play with different styles, though it only provides a limited level of control. Still, I believe most users are not aware that they can roughly control the variety of styles generated with --style random.

So, I’ll start with a brief introduction to this new feature. Then, we’ll experiment with different prompts and parameters to discover interesting styles, and finally, we’ll see how our newly discovered styles transfer to new prompts.

Intro to Style Random

If you are not yet familiar with Midjourney’s Style Tuner, start with my first post on this feature because, in this piece, I’ll assume you know the basics.

So, if you don’t want to spend more GPU credits on creating your own Style Tuners, or if you just want to experiment with new, interesting styles without any particular aesthetic in mind, you can have lots of fun with --style random.

Basically, when you type --style random at the end of your prompt, Midjourney replaces “random” with a style code. But this code doesn’t need to be totally random.

As you probably remember, when you create your Style Tuners, you choose the number of visual directions to explore (i.e., 16, 32, 64, or 128). Then, you choose a certain number of favorite images among the generated samples. If you choose just a few options, your style will be very bold and specific. If you choose a lot of samples, the style will be less distinct.

When using --style random, you can also define the number of visual directions to explore. The more visual directions in play, the larger the pool of possible styles. Furthermore, you can define the share of visual directions to be activated, thus controlling how distinct your “random” style is. To control both, you’ll need to specify the style in the following format: --style random-length-percentage.

For example, if you specify --style random-128-10, this means that you want 128 visual directions to be explored with 10% of these directions being activated. This will result in quite a distinct style.
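To make the format concrete, here is how a few variants look in full prompts (these values are illustrative; only random-64-50 and random-128-10 are the ones I actually test below):

a photo of a magical forest filled with enchanting creatures --style random --s 500

a photo of a magical forest filled with enchanting creatures --style random-64-50 --s 500

a photo of a magical forest filled with enchanting creatures --style random-128-10 --s 500

The first prompt leaves everything to Midjourney’s defaults, the second activates 50% of 64 directions (a subtle style), and the third activates 10% of 128 directions (a bold, distinct style).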

Let’s see how these parameters work in practice.

Discovering New Styles

For the first set of images, I specified the style parameter as --style random-64-50.

a photo of a magical forest filled with enchanting creatures --style 2If8tN72LShJCel3Yzzijh48L --s 500

The images look similar, but the style is still not very distinct, as we activated 32 out of 64 visual directions. Let’s now try --style random-64-10.

a photo of a magical forest filled with enchanting creatures --style 3T5r84swpvnBYVE0OWNqPoXf --s 500

Here, the style is very distinct and specific. Though it’s not exactly my aesthetic 😂

Let’s now try a different prompt. For the images below, I used --style random-128-75.

a portrait of a beautiful old black woman, colorful and sunny --style 3bRJc7JFQZ1ODrb7FUWVL9y7N6LpLDnqfDtJCmuixJ --s 500

Again, you can see that these beautiful images from the two grids share similar aesthetics, but the style is not so clear. Note that in this case, we activated 75% of 128 visual directions.

Now what if we activate only 10% of 128 visual directions? I.e., --style random-128-10.

a portrait of a beautiful old black woman, colorful and sunny --style dxSj7M5vSP7POsOD0SjTxbVByRo31sDfcyHYJmEDb --s 500

Again, the style looks very distinct. But in this case, I actually like it! Let’s try to use the generated style code for different prompts.

Leveraging Favorite Random Styles

As we know, the style codes usually transfer better to similar prompts and objects. So, let’s see how this style will look for the portraits of little black girls and black men.

a portrait of a beautiful little black girl, colorful and sunny --style dxSj7M5vSP7POsOD0SjTxbVByRo31sDfcyHYJmEDb --s 500

a portrait of a black man, colorful and sunny --style dxSj7M5vSP7POsOD0SjTxbVByRo31sDfcyHYJmEDb --s 500

The style looks very similar, but not identical. In particular, note how the men’s portraits are much less realistic. I ran the prompts several times and got about the same results. Still, using the style code, we were able to direct the aesthetics quite significantly.

Finally, you may have noticed that for all image generations, I used a high value of the stylization parameter (--s 500). I wanted to make sure we could observe the styles clearly, but in many cases, it makes sense to experiment with this parameter: raising it can produce more interesting looks, while lowering it makes Midjourney follow the prompt more closely.
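For instance, you could rerun the same prompt with a low and a high stylization value to see the effect for yourself (illustrative prompts; I haven’t shown these grids here):

a portrait of a black man, colorful and sunny --style dxSj7M5vSP7POsOD0SjTxbVByRo31sDfcyHYJmEDb --s 100

a portrait of a black man, colorful and sunny --style dxSj7M5vSP7POsOD0SjTxbVByRo31sDfcyHYJmEDb --s 750

The low-stylization version should stick more closely to the prompt, while the high-stylization version gives Midjourney more aesthetic freedom.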

Happy Prompting!

🗞 News and Top Reads

  • OpenAI had quite an adventure during the last weekend. If you missed it, here is a very brief recap:

The board, consisting mainly of independent directors, fired OpenAI’s CEO, Sam Altman, and removed the Chairman, Greg Brockman, from the board. OpenAI’s investors, including Microsoft, were not represented on the board.

    • Microsoft hired both Sam Altman and Greg Brockman to lead their new AI subsidiary.

Almost all of OpenAI’s 770 employees signed an open letter demanding the return of Sam and Greg as well as the resignation of the board. Otherwise, they threatened to leave the company for Microsoft, which promised to hire ALL OpenAI employees.

After successful negotiations, Sam was reinstated as the company’s CEO, and a new board was formed. The board will eventually be extended to up to nine members to give key investors a seat and avoid such “surprises” in the future.

    • That’s how a $90B company almost went to $0 over one weekend.

  • Stability AI released Stable Video Diffusion, their first foundation model for generative video.

    • As always, the code is open-sourced and available on GitHub.

    • The model can be easily adapted to various downstream tasks, including multi-view synthesis from a single image with finetuning on multi-view datasets.

  • Runway introduced several new features to help generate videos with more control, greater fidelity, and style expression.

    • Motion Brush is a unique interface that allows you to direct specific movements across your generation with a simple brush stroke.

    • Style Presets allow you to generate content using curated styles without the need for complicated prompting.

    • Director Mode’s advanced camera controls have been updated to allow for a more granular level of control.

  • Meta is rolling out two new AI-powered tools:

Emu Edit lets you tweak photos just by typing what you want, for example, “replace a dog with a panda” or “remove a human”.

Emu Video creates videos from text or images you provide.

Both features will be integrated into Facebook and Instagram.

📌 AI Art Tutorial: Midjourney Style Sliders

In this video, Nolan shares fascinating tools created by the Midjourney community. First, there is now a spreadsheet exploring all 128 visual directions and how each of them influences the final style. In addition, another user created a Style Slider: you use sliders to define how much of each visual direction you want in your style, and it generates the corresponding style codes for you. Again, you can experiment with these tools without spending GPU credits on your own Style Tuners.

🎨 Featured Artist: David Szauder

David Szauder is a media artist, curator, university lecturer, and art consultant with a national and international career. He has been working with digital tools for over twenty years. David has a lot of experience with animation, code-based art, VR and augmented reality, and now – with AI. After exploring in depth what happens in the AI’s brain when it gets an instruction, he is now able to create absolutely stunning and unique images with this tool. Check out his work on Instagram @davidszauder.

🖼 AI-Assisted Artwork of the Week

🤓 A Comprehensive Midjourney Guide

To get a link to a comprehensive Midjourney guide, please subscribe to this newsletter. The guide is a dynamic document, which I intend to keep up-to-date with the latest Midjourney updates.

Share Kiki and Mozart

If you enjoy this newsletter and know someone who might also appreciate it, please feel free to share it with them. Let's spread the word about AI art and introduce more people to this fascinating field!
