AI fruit dating show hits 3.2M followers in 9 days
2026.03.26
What's up — Carat crew here. 🤗 AI-generated fruit characters are dating on TikTok (300M views and counting), Claude just learned to use your Mac, and Google's music AI went from 30-second loops to full 3-minute tracks. Let's get into it.
🔥 AI fruit dating show hits 3.2M followers in 9 days
ⓒ @ai.cinema021
A TikTok account called @ai.cinema021 started posting AI-generated videos of fruit characters in a dating show. First episode went up March 13. Nine days later: 3.2 million followers. Each episode pulls over 10 million views, with 300 million total across the channel.
The format is a parody of Love Island, but every contestant is an animated fruit. Bananito, Strawberrita, Orangelo — they confess, fight, and make up in 2-minute micro-dramas. Viewers suggest storylines in the comments, and the creator actually works them in.
Short-form + AI generation + viewer-driven storylines: this combo just proved it works on TikTok, at one episode a day without the quality slipping. If you want to try making an AI short-form series, you can test individual scenes with Kling or Runway on Carat.
📌 3 stories for today
1️⃣ Claude just started controlling your Mac
ⓒ Brock Mesarich
Anthropic added a Computer Use feature to Claude. It's now in Claude Code and Claude Cowork, and it does exactly what it sounds like — it looks at your Mac screen, clicks buttons, and types for you. Opening apps, browsing the web, filling in spreadsheets. All of it.
There's also a new feature called Dispatch. You tell Claude what to do from your iPhone, and it handles the task on your Mac while you're away. When you come back, it's done. It always asks permission before touching a new app.
Still in research preview. You need Claude Pro or higher, and it's Mac-only for now — no Windows yet.
AI agents just moved from text chats to actually touching your computer. For repetitive data entry or research tasks, this could be a real time-saver.
2️⃣ AI images can now be split into editable layers
ⓒ digitaltrends
When you generate an AI image, you get a flat file. Want to change just the background? You'd have to regenerate the whole thing. A new tool just tackled this problem head-on. Drop in a finished image, and it separates the subject, background, and text into individual layers.
It's not just background removal. Text in the image gets restored as editable text boxes. Depth layers get separated too. Works with images from Midjourney, ChatGPT, Gemini — doesn't matter where it was made.
Still in beta, and edge quality isn't as clean as Photoshop's. But this is the first real answer to AI images' biggest weakness: once generated, they can't be edited.
We've all been there — "I just want to swap the background." Once this tech matures, AI image workflows change completely. Generate with Nano Banana 2 on Carat, then post-edit with tools like this.
3️⃣ Google just stretched AI music from 30 seconds to 3 minutes
ⓒ Google
Google DeepMind dropped Lyria 3 Pro yesterday. The previous version maxed out at 30-second clips. Pro goes up to 3 minutes. That's 6x longer.
It's not just longer — it understands song structure. You can specify intro, verse, chorus, and bridge in your prompt. So you're essentially directing how the song develops.
Available on 6 platforms including the Gemini app, Google Vids, and Vertex AI. All training data comes from YouTube and Google-licensed music. Every output gets a SynthID watermark automatically.
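To give a feel for that structure-aware prompting, here's a rough sketch of what a sectioned Lyria 3 Pro prompt could look like. The section labels and wording are our illustration of the idea, not Google's documented syntax:

```
Upbeat indie-pop track, about 3 minutes, 120 BPM.
Intro: soft synth pads, 8 bars, slow build.
Verse: add drums and bass under a warm vocal melody.
Chorus: full band, bright and anthemic, layered harmonies.
Bridge: strip back to solo piano, then swell into a final chorus.
```

The point is the same as with video prompts: name each section and say how it should develop, instead of describing the whole song in one sentence.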
🧪 Prompt tip of the day
10 cinematic action prompts for Kling
ⓒ @CharaspowerAI
If your Kling videos look flat, it's probably the prompts, not the model. AI video creator @CharaspowerAI (Pierrick Chevallier) shared 10 cinematic prompts that completely change the output quality. Here are 3 of them.
The key is camera work and environmental detail. Don't just write "a person running." Describe how the camera follows them and what's happening around them. That's what makes the difference.
Survivor running through a collapsing city
A lone survivor running through a collapsing city street, covered in dust, looking back in terror. Skyscrapers crumble as shockwaves ripple through the ground, cars flipping and windows exploding. Debris and glass shards flying everywhere. Continuous tracking shot, shaky handheld camera.
Warrior standing still after battle
A lone warrior standing still, sword lowered, breathing heavy after battle. Slowly lifts his head as silence settles. Fallen soldiers, broken weapons, smoke drifting under gray sky. Camera starts close behind him, then rises with a slow crane shot toward the sky.
Chase through a crowded market
A runner sprinting through a crowded market, pushing past people and objects. Knocks over crates and slides under a hanging tarp. Dense street market with vibrant colors, movement everywhere. Continuous tracking shot weaving through obstacles.
Here's what we got running the prompt on Carat with Kling O3 ⬇️
Generated with Kling O3 on Carat
Three things to remember. First, specify camera movement (tracking shot, crane, handheld). Second, pack in environmental detail (debris, smoke, colors). Third, state the character's emotional state (terror, resolve, calm). These three things alone make a huge difference.
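Those three ingredients fit into a reusable skeleton. This template is our own summary of the pattern, not something from the original thread:

```
[Subject + action], [emotional state].
[Environmental detail: debris, smoke, weather, light, color.]
[Camera: tracking shot / crane / handheld, and how it moves.]
```

Fill in each bracket with concrete specifics, like the three examples above do, and the output stops looking flat.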
The remaining 7 prompts are in the original thread. Give them a try — you'll be surprised how cinematic Kling can get ☺️