WOW RUNWAY GEN-3 + Industry Changing FIGMA Reveal
Prepare your brain to melt.
Fam,
The game has changed.
Because RUNWAY GEN-3 is starting to roll out.
And it will change what you think about AI.
Here’s what we got for today:
RUNWAY GEN-3 Tests + Thoughts
FIGMA’s New AI Play
MINDBLOWING AI-Enhanced Creative Systems
MIDJOURNEY Campaign Development
BEYOND MIDJOURNEY: OPENAI’s Curious Hire
MIDJOURNEY OFFICE HOURS
Let’s Boogie.
RUNWAY GEN-3 TESTS + THOUGHTS
Runway GEN-3 is wild. (It’s creativity on steroids)
I was lucky enough to test GEN-3 as part of their Creative Partners Program.
Typically, they give us access 24-48 hours before they release.
(So it’s coming)
Here was the first test: FULL LINKEDIN POST
RUNWAY GEN-3:
This was Runway’s answer to SORA → Mission accomplished.
It’s still developing…but…the product is really impressive.
INITIAL THOUGHTS:
HUGE improvement from GEN-2
Natural motion improvement is crazy
Only text prompting (at the moment)
Prompt coherence is strong
Small prompt details = big difference
Options for 5- or 10-second video output
Advanced camera motion/creative control
Gen time is surprisingly fast
Output resolution 720p (1280x768)
TESTING PROCESS:
Push tools with complex ideas
See where the rough edges are
Run prompt multiple times
Test same/different seeds
TEST PROMPT: low-fi handheld camera footage of a man transitioning into a black werewolf, set in the forest of the Pacific Northwest
OUTPUT THOUGHTS:
Really surprised at the output
Handled a complex visual well
(Can’t be much training data on this)
These were 4/10 generations
Using the same seed
So this wasn’t a 50-generation test (It handled it well…right away)
PUSHING IT FURTHER:
The prompt coherence is also impressive.
Considering you can only TEXT PROMPT at the moment.
(No image reference yet)
I’ve tested some seemingly complex concepts from a language perspective, and it responds well.
Complex Prompt: Starting from a ground-level view of a desert road leading towards a tunnel inside a mountain, the camera tracks smoothly along the road into a short dark tunnel. As it emerges on the other side, the camera rapidly ascends, revealing the road continuing to a beach dirt road emerald colored water on the left and palm trees on the right
ADDITIONAL THOUGHTS:
The human face is complex and we notice everything that’s wrong.
It impressively handles human motion (like a smile), as well as details like skin tone, pores, cracks, etc.
It’s super-defined and well done.
Prompt: selfie closeup shot of a woman's smiling face, she is spinning happily, 360 degree orbiting view, snow shimmers surrounding her.
MULTIPLE OBJECTS:
This was (again) eye-opening.
AI has typically had issues with crowded images.
It’s TOO MUCH stimulus for it to focus on.
(This applies to most video + image generation tools)
Below it handled → Multiple cars, shifting lanes, complex environment.
Prompt: the camera follows black sports car as it races down a busy city street, the car weaves through traffic, 35mm cinematic shot
QUICK TIPS:
Focus on the camera movements
Think start point + end point for the shot
Text prompting vid generators is harder
Because of 3-Dimensional space
Take that into consideration and describe the motion
Be specific with details → Very similar to Midjourney
If you’re having trouble…check the RUNWAY DISCORD
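To make those tips concrete, here’s a rough sketch (Python, purely illustrative, not a Runway API call) of how I force a video prompt to name the camera move, a start point, and an end point before anything else. The helper name and the example fields are just placeholders of my own.

```python
# Illustrative prompt-drafting helper (not a Runway API call).
# The idea: spell out the camera move plus a start point and an end point,
# so the model gets a clear path through 3D space.

def draft_video_prompt(camera: str, start: str, end: str, details: str) -> str:
    """Assemble a single Gen-3-style text prompt that describes motion."""
    return f"{camera}, starting from {start}, ending on {end}, {details}"

print(draft_video_prompt(
    camera="low-angle tracking shot",
    start="a rain-soaked city street at night",
    end="a close-up of neon reflections in a puddle",
    details="35mm cinematic look, shallow depth of field",
))
```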
Be on the lookout - I would expect GEN-3 to be released THIS WEEK.
And when it does…say goodbye to your day.
More on this to come.
FIGMA’s NEW AI PLAY:
Guessing there are more than a few FIGMA users on this NL.
I’m relatively new to the platform (16 months), but I realized how far behind I was.
They just unveiled some updates at CONFIG that are actually really useful.
(Looking at you FIGMA presentations)
Here’s the QUICK RECAP:
If you’re NOT a designer → These updates may not seem big.
If you ARE a designer → You know how helpful this stuff is.
If you manage a creative team → Think about how much this will help.
AI UPDATES:
Visual Search: Upload an image and find old designs instantly.
Asset Search: AI-powered search finds the most relevant components within your boards.
Content Generation: Generate realistic copy and images (the copy function allows you to adjust tone → Super helpful)
Background Removal: Remove backgrounds without switching tools.
Make Prototype: Turn static mockups into interactive prototypes.
Rename Layers: Rename layers with a single click using AI-generated contextual titles. (A personal favorite)
Make Designs: Generate UI layouts and components using text prompts, creating the first draft of a design for manual editing.
The MAKE DESIGN feature is really wild…it generates an entire layout, gives you a starting point (at a minimum), and it’s fully editable.
The most intriguing part…FIGMA gives you the option to “opt-in” to design training.
So YOU control your data/IP.
And I’m a fan of them being transparent about this.
Here’s the link to the FULL KEYNOTE
MINDBLOWING AI-ENHANCED CREATIVE SYSTEMS:
This week Phillip Maggs from SUPERSIDE took the stage at CONFIG.
(And he did not disappoint)
If you’re interested in the FUTURE OF AI-CREATIVE SYSTEMS…
PARTNERSHIP WITH SUPERSIDE:
I recently PARTNERED WITH SUPERSIDE to work on exactly this.
The Problem:
How can we scale an entire creative team with AI?
Making processes and systems more efficient
Taking all aspects of our clients’ “brand + data” into consideration
While providing a service-focused approach
(And not downsizing the team)
The Result:
Across 500+ client projects…
We saw 200% faster completion from brief-to-delivery
40-85% reduction in hours spent on projects
SAVING clients over $1.4 Million.
The VIDEO above explains the breakdown of HOW we’re doing it.
It’s not just prompting Midjourney and generating an image.
It’s a fully scaled (AI-enhanced) system that accounts for the hiccups in the design process.
I’ll be breaking down one of the processes on LinkedIn this week.
(Stay Tuned)
If you’re looking for a design team → WE’RE HERE.
MIDJOURNEY: CAMPAIGN DEVELOPMENT
Maybe you’re not in a position to implement a full AI system.
There are still things you can do to utilize AI in your workflows.
Check this mock campaign: FULL LINKEDIN POST.
THOUGHTS:
IMO, Midjourney is utilized best in small format/short shelf life channels.
Examples: Email Marketing + Social Media + Paid Media.
It works best when you need asset volume to continue production.
Think shots on goal vs. lasting message (like print/billboard)
If you’re going to do this…you need to operationalize MJ a bit.
CONSISTENCY IS THE GOLD STANDARD:
Utilize prompt structures or formulas to get consistent output.
Like this one below:
Base Prompt Structure:
Photo Type: [shot type]
Subject Focus: [subject + action]
Tone: [tonal balance]
Colors: [color schemes]
Texture: [texture description]
Mood: [emotional tone]
Environment: [setting details]
Lighting: [lighting details]
--ar 2:1 --personalize [your code] --stylize 200
The entire campaign can be built off (1) prompt structure + SREF.
Quick Thoughts:
(--p) parameter helps with consistency
Personalize + SREF is a powerful combo
Use Img prompts for keeping image structure
Use SREF for transferring aesthetics
Text prompting is still important
Structuring your prompts = fast iteration
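If it helps, here’s a rough sketch (Python, purely illustrative) of what “structuring your prompts = fast iteration” can look like in practice: the Base Prompt Structure above becomes a template, the campaign look lives in one place, and each new asset only swaps the field you’re testing. The field values below are placeholder examples of my own, and [your code] stays a placeholder for your --personalize code.

```python
# Illustrative only: turning the Base Prompt Structure above into a reusable
# template so one structure + SREF can drive a whole campaign.
# Field values are placeholders; "[your code]" is your --personalize code.

BASE = (
    "Photo Type: {photo_type} Subject Focus: {subject} Tone: {tone} "
    "Colors: {colors} Texture: {texture} Mood: {mood} "
    "Environment: {environment} Lighting: {lighting} "
    "--ar 2:1 --personalize [your code] --stylize 200"
)

campaign = {
    "photo_type": "wide-angle lifestyle photo",
    "subject": "runner tying trail shoes",
    "tone": "warm, natural",
    "colors": "earth tones with teal accents",
    "texture": "soft film grain",
    "mood": "quiet determination",
    "environment": "misty mountain trailhead at dawn",
    "lighting": "low golden-hour backlight",
}

# Fast iteration: change only the field you're testing, keep everything else fixed.
for environment in ("misty mountain trailhead at dawn", "rain-slicked city park at dusk"):
    print(BASE.format(**{**campaign, "environment": environment}))
```

From there, paste the outputs straight into Midjourney (or whatever batching setup you already use) and keep the SREF constant across the set.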
If you want to dive deeper into Storyboarding…check out this post
BEYOND MIDJOURNEY: OPENAI ADDS EX-NSA DIRECTOR TO BOARD
I hype up AI a lot…but…I also want to provide some alternative perspectives.
I’ve been vocal about this OpenAI + Apple integration. (I don’t like it)
Now…OPENAI has added a former Director of the NSA to their board.
Consider this the TIN FOIL HAT section of the NL.
Here’s the breakdown:
TIN FOIL THOUGHTS:
It could be nothing…it could be something.
I’m just not comfortable with OpenAI having access to my emails, texts, browsing history, photo library, health data, geo-location, apple-pay, etc.
I’m sure the former Director of the NSA has zero use for this type of data or knowledge.
Sorry for the “conspiracy mindset”…We’ve just seen Big-Tech overreach time and time again…it’s concerning.
I love AI for small/medium businesses and how it can exponentially level the playing field, but this is the side I don’t like.
More info here → AP Article.
MIDJOURNEY OFFICE HOURS (6/19):
Model Updates
Midjourney 6.5: In stage 2 of 3-4 stages.
Midjourney 7: Delayed for new dataset requirements.
Goals: better knowledge, improved prompts, faster performance.
3D and Video Development
Progress: 3D advancing faster, focusing on quality.
Recent Progress: Promising developments in 3D.
Demographic Surveys and Aesthetics
Data Analysis: From user surveys.
Future Features: Demographic aesthetics for gender, age, and countries.
Observations:
Current aesthetic is 75% male-dominated.
Male vs. female aesthetics differ greatly.
LGBTQ+ aesthetics are unique.
American and European aesthetics are similar.
Asian vs. Western aesthetics are distinct.
Future Plans: Balance default aesthetics, encourage personalization.
Personalization
Future Developments: Next-gen tests, possible reset option, and personalization profiles.
Style Exploration
Random Style Explorer: Under consideration.
User-Driven Pages: Potential "hot" and "top" style pages.
Have fun.
If you need help…DM me on LinkedIn.
Until next time…
Rory
HOW'D WE DO TODAY? (1 = SHIT...5 = AWESOME)
OFFERS:
$20 OFF Midjourney Mastery Course: (7) Hours of Video Content
DFY Creative Services: Superside x Rory Flynn AI-Enhanced Creative.