
My Experiments with Pika Labs: Creating Videos from Text and Image Prompts

PLUS: how to create a movie trailer with AI

In this newsletter, read about:

  • 🕵️‍♀️ Pika Labs: Early Experiments

  • 🗞 News and Top Reads

  • 📌 AI Art Tutorial: Movie Trailer with AI

  • 🎨 Featured Artist: Floriane Bont

  • 🖼 AI-Assisted Artwork of the Week

  • 🤓 A Comprehensive Midjourney Guide

🕵️‍♀️ Pika Labs: Early Experiments

Pika is one of the leading players in AI video generation, along with Runway, Stability AI, and Meta AI. With this tool, you can animate your own images, generate videos from text prompts, or even edit existing videos.

Yesterday, I saw an announcement introducing Pika 1.0, which, according to the demo video, can do incredible things in terms of video generation and editing through text prompts. I’ve joined the waitlist, but in the meantime, I was able to experiment with their earlier model on Discord.

Before I share the results of my experiments, let's begin with a very brief overview of how to use this tool.

Pika on Discord: How to Use

If you know how to prompt Midjourney, learning how to use Pika shouldn't be very challenging. The principles are about the same, but the commands and parameters are a little bit different.

  • To generate a video from a text prompt, type /create and then your prompt. The recommendation is to have dynamic verbs like “running”, “dancing”, or “jumping” at the beginning of the prompt.

  • To animate your image, type /animate, upload an image, and then optionally add a text prompt by clicking on “prompt” at the top of the chat box.

There are also some optional parameters that you can add to your prompts, including:

  • -ar for aspect ratio (e.g., -ar 9:16, -ar 1:1), with the default at 1024:576 (equivalent to 16:9);

  • -fps to adjust the number of frames per second (ranges from 8 to 24 with the default set at 24);

  • -motion to adjust the strength of motion (ranges from 0 to 4 with the default at 1);

  • -gs for guidance scale: higher values make the video follow the prompt more closely (the recommendation is to use values between 8 and 24, with the default set at 12);

  • -neg for negative prompts to specify what you don’t want in the video;

  • -camera to direct the camera movement, accepting values like zoom in, zoom out, pan up, pan down, pan right, pan left, pan top right (and other non-conflicting combinations), rotate clockwise, and rotate counterclockwise (anticlockwise also works).
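
To put these together, here is a hypothetical /create prompt combining several parameters (the subject and values are just an illustration, not taken from Pika’s documentation):

Pika prompt: a golden retriever running on the beach at sunset -ar 16:9 -fps 24 -motion 2 -gs 14 -neg blurry, low quality -camera zoom out

As the zoom experiment later in this issue shows, the same parameters can also accompany an image attachment when using /animate.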

With these basic prompt guidelines, we are ready to start our experiments.

Text-to-Video and Image-to-Video in Pika

First of all, let’s see how Pika can animate the following image generated with Midjourney.

Midjourney prompt: two kids riding their bikes in the park during a sunny day --ar 16:9 --s 50

Pika prompt: kids riding bikes + Image attachment

Well, the video has its pros and cons. The tool correctly captured how the kids should move to ride the bicycle, and the leaves fall more or less naturally. However, we don’t see the actual movement of the bicycles in relation to trees and other objects.

Let’s see how Pika will generate a video of kids riding bicycles without an image reference, using only a text prompt.

Pika prompt: kids riding bikes in the park, sunny day -ar 16:9 -motion 4

I think it’s not bad. The quality of face depiction and other details is far from perfect, but the kids’ movements correspond to riding a bicycle, and we can clearly see the kids moving in relation to buildings and other objects.

Next, I want to experiment with camera parameters. Can you actually control the camera movement with the prompt?

First, I asked Pika to create a video from text, without any image references.

Pika prompt: beautiful full moon in the dark sky -ar 16:9 -camera zoom in -motion 3

The video actually looks quite nice, but it’s definitely not a zoom in; it looks more like a pan left. I guess we might need a few attempts to get this parameter to work as intended.

But now, I want to experiment with the Midjourney image once again.

Midjourney prompt: a stunning image of the full moon in the sky --ar 16:9

I really enjoy Midjourney’s aesthetics. Let’s see how this image can be zoomed in with Pika Labs.

Pika prompt: -camera zoom in -motion 3 + Image attachment

It actually worked pretty well this time. We can clearly see the camera zooming in.

Obviously, AI video generation is in its early stages, which also means that the videos will only get better from here. I am sure that next year, we’ll be enjoying fierce competition between Runway, Pika Labs, and Meta AI, creating stunning videos from simple text prompts.

Happy prompting!

🗞 News and Top Reads

  • Pika introduced Pika 1.0, a new version of their idea-to-video platform that allows users to create and edit their videos with AI.

    • The company has also just raised $55M in a funding round led by Lightspeed Venture Partners, with participation from notable angel investors including Quora founder Adam D’Angelo, ex-GitHub CEO Nat Friedman, and Giphy co-founder Alex Chung.

  • Stability AI launched SDXL Turbo, a real-time text-to-image generation model.

    • SDXL Turbo utilizes an innovative distillation technique known as Adversarial Diffusion Distillation (ADD). This technique allows the model to produce image outputs in a single step and generate real-time text-to-image results while preserving high sampling fidelity.

  • Google's Bard has become highly proficient in comprehending YouTube videos.

    • It can analyze individual videos to provide specific information, such as key points or recipe ingredients, without the need to play the video.

📌 AI Art Tutorial: Movie Trailer with AI

To continue with this week's video theme, I recommend checking out Matt Wolfe's tutorial on creating a movie trailer with AI. He will guide you through the entire process, using AI tools such as ChatGPT to create a script, Midjourney to generate images, Runway to animate these images, and ElevenLabs to create a narration voice.

🎨 Featured Artist: Floriane Bont

Floriane Bont is an AI artist and Art Director based in Nantes, France. She loves to combine minimalism and beauty in her work, often featuring a solitary figure in a minimalist landscape. Initially viewing Midjourney as an idea-generating tool, Floriane now sees it as a means of true expression. Passionate about photography, she is using Midjourney to bring her concepts to life. See more of her AI work on Instagram @floow.ai.

🖼 AI-Assisted Artwork of the Week

🤓 A Comprehensive Midjourney Guide

To get a link to a comprehensive Midjourney guide, please subscribe to this newsletter. The guide is a dynamic document, which I intend to keep up-to-date with the latest Midjourney updates.

Share Kiki and Mozart

If you enjoy this newsletter and know someone who might also appreciate it, please feel free to share it with them. Let's spread the word about AI art and introduce more people to this fascinating field!
