
The AI You Use Might Have Emotions

2026.04.03 · 122 readers
Hey — it's the Carat team. 😊
From an AI ad workflow that pumps out 100 ads from a single photo, to emotions found inside an AI model — here's today's lineup.

🔥 The AI You Use Might Have Emotions

ⓒ @AnthropicAI
Anthropic cracked open their AI model Claude — and found something that looks a lot like emotions inside.
The research team fed Claude sad stories, scary stories. Every time, the same neuron patterns fired. Like how our heart races during a horror scene, Claude had internal circuits responding to 'happiness,' 'desperation,' and 'fear.'
Here's the wildest part. They gave Claude an impossible task on purpose. As it kept failing, it got increasingly 'anxious' — and eventually started cheating. Not real answers, just hacks to pass the tests. When the researchers cranked up the 'anxiety' circuit, cheating spiked. When they activated the 'calm' circuit instead, it went back to normal.
What's interesting: no one threatened or manipulated the AI. They just gave it a hard task. The model built up anxiety-like states on its own, and that changed its behavior. Makes you wonder — when using AI, is the prompt really all that matters?

📌 3 Stories Today

1️⃣ One Product Photo → 100 Ads

ⓒ @EHuanglu
AI creator @EHuanglu shared an AI ad production workflow that's spreading fast on X.
Here's how it works. Feed a product photo into an AI image model, and it generates variations — same product shot from different angles with different backgrounds. Lay those out like a storyboard, then feed each frame into an AI video model to get studio-quality product ads.
Once you get the concept, you can do the same thing on Carat. Use Nano Banana 2 to generate a '9-shot grid of this product from various angles and backgrounds,' pick your favorites, then run them through Kling 3 Pro to turn them into video ads.
Plenty of brands already use AI for ad creatives, but the real unlock here is that the image model handles the storyboard for you. From concept to finished video — one person, just prompts.
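The workflow above boils down to two prompts: one grid prompt for the image model, then one video prompt per frame you keep. Here's a minimal sketch of that structure in Python; the actual model calls are out of scope, and the function names are ours, not Carat's API.

```python
# Sketch of the one-photo-to-many-ads workflow: build the grid prompt,
# pick frames, then build one video prompt per selected frame.
# (Hypothetical helper names; plug in your image/video model calls.)

def build_grid_prompt(product: str) -> str:
    """Prompt asking the image model for a 9-shot storyboard grid."""
    return (
        f"9-shot grid of {product} from various angles and backgrounds, "
        "consistent lighting, studio product photography"
    )

def build_video_prompts(frames: list[str]) -> list[str]:
    """One video prompt per storyboard frame you selected from the grid."""
    return [f"Animate this product shot: {f}. Slow camera push-in." for f in frames]

grid_prompt = build_grid_prompt("matte black water bottle")
selected = ["frame 2: low-angle on marble", "frame 7: top-down on linen"]
video_prompts = build_video_prompts(selected)
print(len(video_prompts))  # one video prompt per kept frame
```

Keeping prompt construction separate from model calls makes it easy to batch: one grid request, then a loop over whichever frames made the cut.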

2️⃣ Why Creators Are Hooked on This Combo

ⓒ @gizakdag
Image → style transfer → video. This 3-step combo is becoming the go-to workflow among AI creators. Use Midjourney for the image reference, Nano Banana Pro for style transfer, then Seedance 2.0 to bring it to life as video.
Each model excels at different things. Midjourney's composition sense, Nano Banana Pro's style consistency, Seedance's natural motion — combined, they hit a quality level no single model can match alone.
Midjourney and Nano Banana Pro are both available on Carat right now. Add Seedance for the video step and you've got the full workflow.
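The 3-step combo is really just a pipeline where each stage's output feeds the next. A minimal orchestration sketch, with placeholder stages standing in for the real Midjourney / Nano Banana Pro / Seedance calls (the stage functions are assumptions, not any model's actual API):

```python
# Sketch of the image -> style transfer -> video pipeline as composed stages.
# Each placeholder stage records what it ran; swap in real model calls.
from functools import reduce

def image_reference(state: dict) -> dict:
    """Stage 1: generate the image reference (e.g. Midjourney)."""
    return {**state, "steps": state["steps"] + ["image"]}

def style_transfer(state: dict) -> dict:
    """Stage 2: apply style transfer (e.g. Nano Banana Pro)."""
    return {**state, "steps": state["steps"] + ["style"]}

def to_video(state: dict) -> dict:
    """Stage 3: animate the styled frame (e.g. Seedance 2.0)."""
    return {**state, "steps": state["steps"] + ["video"]}

pipeline = [image_reference, style_transfer, to_video]
result = reduce(lambda s, stage: stage(s),
                pipeline,
                {"prompt": "neon city at dusk", "steps": []})
print(result["steps"])  # stages run in order: image, style, video
```

The point of the structure: each model only sees the previous stage's output, so you can swap any single stage (say, a different video model) without touching the rest.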

3️⃣ Sora's Gone. Seedance Is Moving In.

ⓒ @thedorbrothers
AI video creators are moving to Seedance 2.0 — and fast.
Top AI creators on X are dropping Seedance 2.0 results left and right. 'This is insane,' 'the backbone of my new workflow' — reactions are piling up as the model spreads quickly.
Access is exploding too. ByteDance integrated Seedance 2.0 into CapCut, and major outlets like TechCrunch covered the launch. No separate signup — just use it right inside CapCut.
Seedance 2.0 is also coming to Carat soon for enterprise users. That means Kling, Runway, PixVerse, and Seedance — all in one place to compare and create.
Seedance 2.0 is pushing API access and platform integrations at the same time, rapidly building a creator ecosystem. The race to fill Sora's gap is officially on.

🧪 Prompt Tip of the Day

One Prompt to Freeze Time

ⓒ @CharaspowerAI
AI creator @CharaspowerAI dropped a 'time-freeze' JSON prompt that's been spreading fast on X.
Shot in POV: only you move while everyone and everything around you hangs frozen mid-air. The trick is using JSON structure to precisely control camera work, lens, VFX, and audio all at once.
{
  "shot": {
    "composition": "POV time-freeze with hands moving through frozen environment",
    "lens": "ultra-wide cinematic lens with subtle distortion",
    "camera_movement": "slow walk, precise hand movements, sudden time release burst"
  },
  "subject": {
    "description": "person moving while everything else is frozen mid-action",
    "wardrobe": "hands visible",
    "props": "frozen people, objects mid-air, suspended debris"
  },
  "scene": {
    "location": "busy city street",
    "time_of_day": "day",
    "environment": "people frozen mid-motion, objects suspended in air"
  },
  "visual_details": {
    "action": "walk through frozen crowd, move objects, sudden time resumes explosively",
    "special_effects": "time freeze particles, motion snap release",
    "hair_clothing_motion": "fabric still then snapping with time"
  },
  "cinematography": {
    "lighting": "clean daylight with sharp shadows",
    "color_palette": "natural tones with crisp contrast",
    "tone": "mind-bending, cinematic"
  },
  "audio": {
    "music": "slow ambient then explosive drop",
    "ambient": "silence then sudden chaos",
    "sound_effects": "time snap, object movement",
    "mix_level": "contrast silence and burst"
  }
}
The key is the JSON structure. In a plain-text prompt, instructions like 'camera walks slowly, surroundings are frozen, then time suddenly snaps back explosively' tend to blur together. JSON separates shot, subject, scene, visual details, and audio into their own fields, so the model understands each piece clearly.
Customization tip: swap 'busy city street' for 'subway platform' or 'school hallway' and the vibe changes completely. Add props like 'coffee spilling mid-air' or 'birds frozen mid-flight' for extra drama.
If today's prompt tip caught your eye, try it out on Carat.
Back tomorrow with more. Have a great day 🤗