
5-STEP AD CREATIVE WORKFLOW + RUNWAY RELEASES ACT 2.

The new weekly insanity.

Fam,

It’s been a few weeks…

Between speaking at a conference and a little vacation,

I feel like I’ve been gone for 9 months.

This is going to be a long one…but I promise…

There’s a lot of value in here.

HERE’S WHAT WE GOT TODAY:  

AFFILIATE TAKEOVER: AD CREATIVE WORKFLOW

Last week I went on-stage in Barcelona to chat about AI Ad Creative.

I presented on the rise of node-based workflows with tools like Flora. So I wanted to give you a behind-the-scenes look at the FLORA process we discussed at the conference.

FULL THREAD → CHECK IT OUT HERE

STEP 1: LET’S START WITH THE END:

First, we need to understand what we want.

  • To put the Converse shoes

  • In the composition of the black shoes

  • So we can reconstruct the image.

  • Like the example below...

STEP 2: NODE SELECTOR:

Now we’ll outline the nodes and tools we’ll need to build the workflow.

NODES:

  • IMG INPUT: Prod/Ref Images

  • LLM: To generate the prompts

  • IMG Module: Generate images

TOOLS:

  • LLM: ChatGPT 4

  • Image Gen: ChatGPT Img

STEP 3: SYSTEM PROMPT

So we drop all the nodes into the workspace and connect them (see image below).

BUT…we need a system prompt to run the whole thing (automated). The system prompt will analyze the image, create the prompt, and send it directly to the image generator to produce the final image.

(No human intervention needed)

LLM SYSTEM PROMPT: You’re a visual merchandiser. Describe this image’s composition specifically enough to prompt an image generator. Focus on the specific details of the composition (product placement, angle, perspective, lighting, aesthetic). Do not describe any features or product details of the reference image.
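If you want to see the data flow outside of Flora, here’s a minimal Python sketch of the same node chain. Everything here is a hypothetical stand-in (the function names and return strings are mine, not Flora’s API); the real nodes call hosted models.

```python
# Minimal sketch of the Flora node chain: IMG INPUT -> LLM -> IMG.
# All functions are hypothetical stand-ins for hosted-model calls.

SYSTEM_PROMPT = (
    "You're a visual merchandiser. Describe this image's composition "
    "specifically enough to prompt an image generator. Focus on product "
    "placement, angle, perspective, lighting, and aesthetic. Do not "
    "describe any features or product details of the reference image."
)

def llm_node(system_prompt: str, reference_image: str) -> str:
    """Stand-in for the LLM node: returns a composition-only prompt."""
    # A real node would send system_prompt + the image to an LLM here.
    return f"composition description of {reference_image}"

def img_node(composition_prompt: str, product_image: str) -> str:
    """Stand-in for the image-gen node: product rendered into the composition."""
    return f"render {product_image} using: {composition_prompt}"

def run_workflow(reference_image: str, product_image: str) -> str:
    """Wire the nodes as in the workspace: ref image -> LLM -> image gen."""
    prompt = llm_node(SYSTEM_PROMPT, reference_image)  # analyze composition
    return img_node(prompt, product_image)             # no human intervention

print(run_workflow("black_shoes.jpg", "converse.jpg"))
```

Swap in any reference image and the chain runs unchanged, which is exactly what makes it reusable in Step 4.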

STEP 4: MAKING IT A WORKFLOW

This workflow isn’t exclusive to just “these shoes”…it’s multi-use.

Now it's a "tool" that can...

  • Drop in ANY reference image

  • ANALYZE the composition

  • DROP in your product

  • PRODUCE an image of your product

  • In that COMPOSITION.

So now it's repeatable…which means it’s scalable.

STEP 5: CONTINUE BUILDING

We can also continue building this tool.

  • Attach the final image to more image nodes

  • Build out different background concepts

  • Or switch it to a poster style or a photography style

  • Take those and turn them into video.

  • This becomes unlimited.

SAMPLE RUNWAY REF PROMPT: Keep the exact product and composition. Change the background to New York City telephone wire so the shoes are dangling against the backdrop of a Bronx city street. The background is blurred. The style is UGC, like it was shot on an iPhone.

FLORA:

If this was a little confusing…I’m going to break down more from the presentation in the next few editions of the newsletter.

In the meantime, if you want to check out FLORA → HERE’S THE LINK.

MIDJOURNEY MASTERY: PRICE CHANGE

Making some BIG V7 updates to Midjourney Mastery.

(Already started working on some video generation content as well)

While I’m working on the updates…we’ve dropped the price.

(Will be raising the price once the updates are made)

If you haven’t joined the party yet…now’s the time.

WHAT’S INCLUDED:

Self Paced Video Course | (100+) Lessons | (9+) Hours of Video Lessons | (18) Modules | (6) Cheat Sheets | (40+) PDF Guides | Lifetime Updates

BEST POSTS OF THE WEEK

People seem to like these…so I’ll keep them coming.

Here’s some of the best I’ve found this week:

VEO3: LAUNCHES IMG-TO-VID WITH DIALOG

Of course, Veo3 launched this while I was away.

It’s pretty self-explanatory…but extremely useful.

It really changes a lot of things.

PLAYED AROUND WITH IT → CHECK IT OUT

DIALOG PROMPT STRUCTURE:

Because this is img-2-video…you don’t need to prompt much of the aesthetic.

So focus on the specific camera/subject motion/dialog/tone.

VEO3 PROMPT STRUCTURE: [SHOT TYPE], [CAMERA MOTION], [SUBJECT + ACTION]. CHARACTER says: ["DIALOG"]. HIS VOICE: [VOICE DESCRIPTION]. No subtitles.

VEO3 PROMPT EX: Handheld shot slowly pushing in captures a sloth doing UGC for Nyquil. The sloth rapidly holds a bottle of Nyquil with a snap movement. He says: “What's up everyone? I want to tell you about Nyquil Max 2.0. It's so good, I actually died...They buried me...I dug out...Now I can smell colors.” His voice: Comical, speaking really fast.
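Since the structure is just string assembly, you can template it. A quick sketch (the field names are my own labels for the slots above, not a Veo3 API):

```python
# Hypothetical template helper for the Veo3 dialog prompt structure.
# Field names are illustrative; Veo3 only sees the final string.

def veo3_prompt(shot_type: str, camera_motion: str, subject_action: str,
                character: str, dialog: str, voice: str) -> str:
    """Assemble: [SHOT TYPE], [CAMERA MOTION], [SUBJECT + ACTION]. ..."""
    return (
        f"{shot_type}, {camera_motion}, {subject_action}. "
        f'{character} says: "{dialog}". '
        f"His voice: {voice}. No subtitles."
    )

print(veo3_prompt(
    "Handheld shot",
    "slowly pushing in",
    "captures a sloth doing UGC for Nyquil",
    "The sloth",
    "What's up everyone? I want to tell you about Nyquil Max 2.0.",
    "Comical, speaking really fast",
))
```

Handy when you’re batch-testing variations of the same shot with different dialog or voice notes.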

Play around with this and have fun. It’s super powerful.

RUNWAY: RELEASES ACT 2

Last week Runway released Act 2 to all ENTERPRISE and CREATIVE PARTNERS.

Meaning…it’s NOT available to everyone yet…but…it will most likely be available this week.

So buckle up…because we got to test it…and it’s definitely improved.

IF YOU MISSED IT → FULL DEMO

WHAT IS ACT 2:

  • Advanced motion capture

  • For head/face/body/hand tracking

  • Upload a reference video

  • Upload a character

  • Push generate (that’s it)

QUICK THOUGHTS:

  • Facial expressions are great

  • Eyes/eyebrows map almost perfectly

  • Hand mocap is good (subtle errors)

  • The head movement is on point

  • Lip sync slightly misses

  • But mimicking Jack's mouth is very hard.

  • Because he’s making faces and gestures

  • And talking through his teeth

MIDJOURNEY: MICRO-MOODBOARD PROCESS

Sometimes text prompts aren't enough AND you want more control.

So create moodboards to guide the output…and get more custom results.

WHAT ARE MICRO-MOODBOARDS?: 

  • Built as “tokens” instead of a full style

  • More complex aesthetic

  • Pieces of an image vs the entire image

PROBLEM:

  • If you're going for a complex aesthetic

  • Sometimes it's token overload in text prompts

  • You won't get everything you want

  • Like in this example below.

Ex Prompt: Action-filled flawed extreme top-down angle vivid fashion image with blown highlights of an everyday black man looking directly upwards at the camera wearing bandana with braided hair. He is pointing at the camera with an aggressive face. He sits on a vivid red car with white racing stripes, warm tones, captured on an ultra-wide-angle lens for a fisheye effect. Lens distortion at the edges adds kinetic drama. Subtle imperfections: film grain, dust overlays, and faint scratches. Mood: raw, youthful, kinetic. Format: vertical, editorial crop. --chaos 5 --ar 1:2 --stylize 650

SOLUTION:

  • Create simple images with those tokens

  • Build moodboards specific for those tokens

  • (8-20 images each)

  • (1) MB dedicated to ultra wide angle fisheye

  • (1) MB dedicated to top-down perspective

EX PROMPT: extreme closeup fisheye lens photography of a flawed fashion model, vertical cropped, attitude and raw emotion, blown highlights, vivid colors, warm and sharp --ar 5:6

REGENERATE PROMPT OR REMIX:

  • Utilize the same prompt

  • Add the moodboards for effect

  • Or remix a previous image + add MBs

  • Aerial MB: --profile jw7xzrc

  • Fisheye MB: --profile ggnjmiy

EX PROMPT: Action-filled flawed extreme top-down angle vivid fashion image with blown highlights of an everyday black man looking directly upwards at the camera wearing bandana with braided hair. He is pointing at the camera with an aggressive face. He sits on a vivid red car with white racing stripes, warm tones, captured on an ultra-wide-angle lens for a fisheye effect. Lens distortion at the edges adds kinetic drama. Subtle imperfections: film grain, dust overlays, and faint scratches. Mood: raw, youthful, kinetic. Format: vertical, editorial crop. --chaos 5 --ar 1:2 --stylize 650 --profile jw7xzrc iujzx82 ggnjmiy
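The remix step above is just tacking profile codes onto the base prompt, so it’s easy to script. A tiny sketch (pure string assembly, not a Midjourney API; the codes are the moodboard profiles from this issue):

```python
# Small helper mirroring the remix step: append moodboard profile
# codes to a base Midjourney prompt as a single --profile flag.

def with_moodboards(base_prompt: str, profiles: list[str]) -> str:
    """Append one --profile flag listing every moodboard code."""
    if not profiles:
        return base_prompt
    return f"{base_prompt} --profile {' '.join(profiles)}"

base = ("extreme closeup fisheye lens photography of a flawed fashion model, "
        "vertical cropped, attitude and raw emotion --ar 5:6")
print(with_moodboards(base, ["jw7xzrc", "ggnjmiy"]))
```

Keep a dictionary of your moodboard codes and you can swap aesthetic stacks in and out without retyping the prompt.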

AI CREATIVE AT SCALE: DONE FOR YOU

BIG things are happening at SUPERSIDE.

We’ve now surpassed 4,000+ PROJECTS completed with AI.

RESULTS: 50-85% Reduction in Hours → $1.4 Million Saved for Clients.

For brands like Amazon, Reddit, Google, Coinbase, Booking.com, etc…

WHY USE AI?:

  • So we get from brief-to-first-draft as rapidly as possible.

  • With key visuals/copy/ux/ui trained and tailored to brand guidelines/standards

  • Fully automated with tools like OpenAi/Claude/Flux/Figma

  • So we cut the back-and-forth time on “does this fit the brand?”

  • And we focus on strategy, visual firepower, and iteration.

The best part?…If we can’t do it with AI…we have 500+ traditionally trained, world-class designers…who can do it by hand.

Have a design problem that needs solving? (We like those)

It’s worth a convo (at the very minimum).

AI NEWS ROUNDUP:

As always…the updates never end.

Here’s the hottest news of the week:

RUNWAY: REFERENCES GETS AN UPGRADE

Honestly, I think Runway References is one of the best image tools.

It’s simple…straightforward…and it works. (What else can we ask for?)

So I tested it with UGC.

PROCESS:

  • Take an iPhone selfie.

  • Upload or create a product photo

  • Runway Ref to combine them both.

PROMPT EX: ugc style photo of the exact person in (IMG1) in this exact pose. Holding the bag of (IMG2) in his hand.

These are raw outputs, no post or upscale. They would only need a few touch-ups.

Going to be doing a lot more with this moving forward.

HOW'D WE DO TODAY

[1 = SHIT…5 = AWESOME]


OFFERS:

$20 OFF Midjourney Mastery Course: (9+) Hours of Video Content

DFY Creative Services: Superside x Rory Flynn AI-Enhanced Creative.

AI IMAGE + VIDEO Corporate Training: Reply “workshop” to this email.