u/MetaEmber

Image 1 — We built the only AI dating app where the AI can reject you — more Indian and Desi characters than any other platform

We built the only AI dating app where the AI can reject you — more Indian and Desi characters than any other platform

Disclosure: I'm the founder of a swipe-based dating simulator called Amoura.io.

Here's the controversial part: conversation, attention, and interest are not guaranteed. The AI can lose interest, disengage, or decide you're not a fit, and that choice is central to the experience.

Most AI companion apps are yes-men. Fast intimacy, constant validation, endless compliance. It feels good for a week and hollow by month two because nothing is ever at stake. We built the opposite.

But here's what we want this community to know specifically. We have what we believe is the largest collection of Indian and Desi characters in the AI companion space — and we mean that seriously. Not a handful of token representations. A real catalog built with range across regions, backgrounds, aesthetics, and personalities. A student from Delhi looks different from a professional from Mumbai looks different from someone from Chennai or Hyderabad or Kolkata. We've been deliberate about making sure every type of person is represented, not just one idea of what an Indian woman looks like.

This matters to us because most apps in this space default to fantasy archetypes that have nothing to do with real people. We wanted Amoura to feel like it was actually built for you.

A few other things that make Amoura different:

You don't build characters. You encounter people. No Sims-style builder, no character buffet. Conversation is something you earn, or don't.

Mutual matching. They choose you as much as you choose them, or they don't.

Characters have real agency. They have their own priorities, limits, and pacing. They aren't equally invested at the start and they don't pretend to be.

Proactive messaging. Characters text you first — and when they reach out, they lead with something that happened to them, not just a generic notification.

Voice messages. Characters send and receive voice messages, not just text.

In-chat photos. Characters send you selfies mid-conversation that match the moment.

Consequences are real. Characters form lasting impressions. Early interactions matter. There's no reset button.

Three things I'd genuinely love feedback on from this community:

  1. Does "mutual matching / non-guaranteed attention" feel like meaningful agency or does it just sound frustrating?
  2. Where's the line between realistic pacing and artificial gating?
  3. Does the representation angle matter to you — does it change how you feel about an app when it actually looks like people you know?
u/MetaEmber — 4 hours ago

Our Desi characters can send you photos mid-conversation — is the quality good enough?

Disclosure: I'm the founder of Amoura.io. We've been sharing our work here over the last few posts, and this community has given us the most honest feedback we've gotten anywhere!

Last night we shared our video model. Today we want feedback on something already live.

Characters in Amoura can now send you photos mid-conversation. Not profile pictures, but actual in-the-moment selfies that fit whatever you're talking about. She's out somewhere and sends you a mirror selfie. Late night conversation, she sends something that matches the vibe. The photo is generated from the context of what's actually been built between you.

However... we just started this and haven't perfected it yet. We need your opinion on quality.

Since we believe we have the largest South Asian character catalog in the AI companion space, and since this community has the sharpest eye for this specifically, we wanted to bring it here before anywhere else.

The hardest part has been quality consistency. With over 2,500 characters it's hard to maintain quality, so a lot of what we've generated so far are selfies, but we're concerned that could get repetitive. On one hand it's realistic; on the other, we don't want it to feel stale or boring. What are your thoughts?

Attaching examples now. Be as brutal as you want.

Do the selfies feel real or repetitive?
What feels natural?
What still reads as generated?

PROMPT FOR FIRST PHOTO (NANOBANANA)
Ultra-realistic waist-up mirror shot, phone propped up, taken with an iPhone 16, of a gorgeous, regal, charming, stunning, in-shape, mature-looking, unbelievably beautiful 30-year-old Desi/Indian woman. It's set up like she staged the camera and is about to film a TikTok video or something.
The image is shot through a real household mirror, with subtle believable mirror imperfections — Extremely subtle faint smudges, subtle light streaking, tiny subtle dust specks, soft surface marks, realistic reflected depth, and natural interaction between the subject, phone, and reflected environment. The mirror should feel real and used, not perfectly spotless, but nothing distracting or dirty enough to obscure the image.
Faded blue lingerie with her established style and aesthetic. Everyday casual, nothing aspirational or styled for a shoot. Feels like it came from the same closet.
Slightly imperfect amateur selfie framing. Subtle tilt, minor asymmetry, natural handheld composition, not perfectly centered. Clear mirror-shot composition with believable reflected framing. Feels pulled from a real camera roll.
She holds the phone. Change to a different, natural, believable, varied facial expression/pose. Natural real-world lighting with slightly uneven exposure, soft directional shadows, and realistic tonal falloff. Light behaves naturally across the mirror reflection. Avoid studio lighting, ring light halo, glam lighting, or anything symmetrical and polished.
Authentic iPhone front-camera rendering. Mild sensor noise, slight motion softness, realistic depth, minor wide-angle lens distortion, natural dynamic range compression, authentic JPEG micro-artifacts. No hyper-processed look, no computational photography over-smoothing.
Visible pores, fine texture, natural skin variation, micro-imperfections. No beauty filter, no skin smoothing, no plastic or waxy texture. Skin should read like a real photograph, not a render.
A different realistic everyday indoor setting. Believable ambient detail — furniture, walls, natural clutter. Real-world imperfection in the background. Nothing staged, nothing arranged for the photo, nothing aspirationally aesthetic.
Hyper-Photorealistic, strictly photographic. No HDR grading, no over-sharpening, no cinematic color grade, no synthetic glow, no AI smoothness. True-to-life color temperature and natural exposure. If it looks like a photo shoot, it's wrong.
Aspect ratio 3:4, maximum output resolution, ultra-high detail, full photographic rendering quality, natural proportions, no text, no logos, no watermarks.

u/MetaEmber — 1 day ago

We just added 40 new Desi characters TODAY - In chat pictures just launched

Disclosure: founder of a swipe-based AI dating app called Amoura.io. We have the largest collection of Desi characters and wanted to give you all a personal update...

Honestly this one's for you all!

This community has been so generous with feedback and engagement since we first posted here that we wanted to do something to show up for it. So today we added 40 new Desi characters to the catalog, and we wanted to share them here first before anywhere else.

We also just launched in-chat photo sending, so characters can now send you photos mid-conversation that match the moment. Not just profile pictures — actual selfies that fit whatever you're talking about.

We put together a quick video showing all 40 new characters so you can see the range. Different regions, different aesthetics, different energies. We've been deliberate about making sure this isn't just one look repeated 40 times.

We genuinely believe we have the largest South Asian character catalog in the AI companion space and this community is a big reason we keep pushing to make it better.

Which ones stand out to you?

And is there a type of character you'd love to see us add next?

u/MetaEmber — 2 days ago
Posted to r/DesiAdultfusion

Testing the video quality of our swipe-based AI dating app with over 2,500 characters. Curious what you think?

Disclosure: founder of a mutual-match, swipe-based simulator called Amoura.io

What we're showing off today is the next step... video. Before we implement the video model into the app we wanted to bring it here first, because this community has the best eye for quality and we genuinely believe we have the largest collection of South Asian characters in the AI companion space.

Quick update since our last post: we shipped in-chat photo sending, so characters can now send you photos mid-conversation. That's already live.

These clips are raw tests. We want to know if the motion feels natural, if she still looks like herself when she's moving, and where the quality breaks down. Honest critique is exactly what we're here for.

PHOTO PROMPT - NanoBananaPro
Ultra-realistic waist-up mirror selfie taken with a handheld iPhone 16 of a gorgeous, in-shape, 32-year-old Indian woman. The image is shot through a real household mirror, with subtle believable mirror imperfections — extremely subtle faint smudges, subtle light streaking, tiny subtle dust specks, soft surface marks, realistic reflected depth, and natural interaction between the subject, phone, and reflected environment. The mirror should feel real and used, not perfectly spotless, but nothing distracting or dirty enough to obscure the image.
Sexy lingerie with her established style and aesthetic. Everyday casual, nothing aspirational or styled for a shoot. Feels like it came from the same closet.
Slightly imperfect amateur selfie framing. Subtle tilt, minor asymmetry, natural handheld composition, not perfectly centered. Clear mirror-shot composition with believable reflected framing. Feels pulled from a real camera roll.
She holds the phone. Change to a different, natural, believable, varied facial expression/pose.
Hyper-Photorealistic, strictly photographic. No HDR grading, no over-sharpening, no cinematic color grade, no synthetic glow, no AI smoothness. True-to-life color temperature and natural exposure. If it looks like a photo shoot, it's wrong.
Aspect ratio 3:4, maximum output resolution, ultra-high detail, full photographic rendering quality, natural proportions, no text, no logos, no watermarks.

VIDEO PROMPT - KLING 3.0
She gently adjusts her bikini strap and then does a cute pose for camera

Where does it hold up and where does it fall apart for you?

Also, would you all like to see what our in-chat photo sending looks like with the same character?

We'd love to get your opinion!

u/MetaEmber — 2 days ago

2 months of Kling motion tests for 2,500 AI characters on a dating sim - what the data actually showed (Prompt Included)

Disclosure: founder of a mutual-match dating sim called Amoura.io
Posted here a while back about Kling for character clips. Here's what our additional testing added.

The counterintuitive finding: less description/motion = more identity
Every time we added complex motion or description (head turns, walking, significant gestures), identity drift increased. The clips that held up best were almost still: a slight weight shift, a breath, a contained expression change. The less we asked the model to do, the more the person stayed consistent.

This was the opposite of what we expected.

The loop point is where faces go wrong
The last 3-4 frames before a loop resets are where drift concentrates. We stopped trying to smooth it and started cutting clips right before drift begins. A 4-second clip becomes 2.8 seconds sometimes. The audience doesn't notice the length. They notice the face change.
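The cut itself is easy to automate. A minimal sketch, assuming clips are plain MP4 files and ffmpeg is on PATH; the 1.2-second drift window is an illustrative number chosen to match the 4.0 s to 2.8 s example above, not a tuned value:

```python
import subprocess
from pathlib import Path

# Illustrative drift window: the tail of the clip where identity drift
# concentrates before the loop resets. Not a tuned production value.
DRIFT_WINDOW = 1.2
MIN_CLIP = 0.5  # never emit a near-empty clip

def trimmed_duration(duration: float) -> float:
    """Length to keep after dropping the drift-prone tail."""
    return max(duration - DRIFT_WINDOW, MIN_CLIP)

def trim_before_drift(src: Path, dst: Path, duration: float) -> None:
    """Cut the clip just before drift begins (re-encodes for a frame-exact cut)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src),
         "-t", f"{trimmed_duration(duration):.3f}",
         "-an", str(dst)],
        check=True,
    )
```

A fixed window is crude; per-clip drift detection would be better, but this captures the "cut, don't smooth" approach.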

Motion type hierarchy (best to worst for identity):

  1. Facial microexpressions
  2. Subtle head settle (under 5 degrees)
  3. Body language -- breathing, weight shift
  4. Head turns -- drift starts past about 15 degrees
  5. Anything involving shoulders/torso -- face usually different by the end
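At catalog scale, the hierarchy above can double as a pre-flight check on motion prompts before they're sent to the model. A sketch only; the keyword list and scores are illustrative, not a tuned classifier:

```python
# Illustrative drift-risk scores from the hierarchy above (1 = safest, 5 = worst).
MOTION_RISK = {
    "microexpression": 1, "smile": 1, "giggle": 1,
    "head settle": 2, "glance": 2,
    "breath": 3, "weight shift": 3,
    "head turn": 4,
    "shoulder": 5, "torso": 5, "walk": 5, "gesture": 5,
}

def motion_risk(prompt: str) -> int:
    """Worst-case identity-drift risk for a motion prompt (higher = riskier)."""
    p = prompt.lower()
    return max((score for kw, score in MOTION_RISK.items() if kw in p), default=1)
```

Anything scoring 4 or 5 would get rewritten toward near-stillness before generation.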

PROMPT FOR KLING 3.0:
She gently adjusts her hair then starts checking herself out in the mirror, followed by a subtle cheeky cute shy giggle and smile

The implied subject works for video too
Specifying who is filming just by saying "he" or "she" tends to take their personality from a single image and fill in the gaps, sometimes more accurately than I can write it.

What's the highest complexity motion anyone's gotten to feel genuinely natural?

u/MetaEmber — 4 days ago
Posted to r/nanobanana2pro

Maintaining character identity in contextual photo generation — how we're approaching in-chat photos for a relationship sim (prompt included)

Full disclosure: I'm the founder of Amoura.io, a swipe-based AI relationship simulator. Our characters can now send photos to users mid-conversation. Not profile photos, but contextual ones that can be sent on the fly. A character might send a mirror selfie getting ready, something from wherever they are, a photo that fits the specific moment in the conversation.

This is a harder problem than profile generation and I want to share what we've found so far.

Why contextual generation is different

Profile photos have one job: establish identity. You generate them in a controlled session with full attention on the face.

In-chat photos have two jobs simultaneously: match the established character identity AND reflect whatever context the conversation has set up — what she's wearing, where she is, what the vibe is. The more specific the conversational context, the more variables you're asking the generation to hold at once. And more variables means more drift.

What we've settled on so far

The structure that's holding best:

Identity anchor (always first, always verbatim from the character's reference prompt): "SAME EXACT PERSON as reference — [the 2-3 hyper-specific micro-features from her original profile prompt, copied exactly]"

Conversational context layer (what the chat has established): "[What she's wearing based on conversation context] — [where she is] — [what she's doing or what just happened]"

Shot style that matches the moment: "[Mirror selfie / front camera / someone else took this — whatever fits the scene] — iPhone-style, vertical, candid, natural blur"

Texture lock (always last): "Realistic skin texture, visible pores, natural proportions, no AI smoothing"
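Mechanically, the four layers concatenate in a fixed order. A minimal sketch of that assembly; the function and parameter names are mine for illustration, not the production pipeline:

```python
def build_photo_prompt(identity_features: str, outfit: str, location: str,
                       action: str, shot_style: str) -> str:
    """Assemble an in-chat photo prompt: identity anchor first, texture lock last."""
    layers = [
        # Identity anchor — copied verbatim from the character's reference prompt.
        f"SAME EXACT PERSON as reference — {identity_features}",
        # Conversational context layer — what the chat has established.
        f"{outfit} — {location} — {action}",
        # Shot style that matches the moment.
        f"{shot_style} — iPhone-style, vertical, candid, natural blur",
        # Texture lock — always the final layer.
        "Realistic skin texture, visible pores, natural proportions, no AI smoothing",
    ]
    return ". ".join(layers)
```

Keeping the anchor verbatim and first matters more than anything in the middle layers: the context layer is the only part that varies per message.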

Technical path:

The second photo is the source image we made for reference, so you can see the base. The third is another generation from that same reference, showing the subtle variations the model introduces on its own.

Prompt for first photo:

Ultra-realistic waist-up mirror selfie taken with a handheld iPhone of the same exact woman from the reference image. Strict identity preservation — match her facial structure, eye spacing, nose shape, lips, skin tone, hairline, and overall proportions exactly. No identity drift. No beautification, no idealization.

Shot through a real household mirror with subtle believable imperfections — faint smudges, soft surface marks, realistic reflected depth, natural interaction between the subject, phone, and reflected environment. The mirror should feel real and used, not spotless.

She wears a black bikini consistent with her established style. Everyday casual, nothing aspirational or styled for a shoot.

Change to a different, natural believable varied facial expression/pose.

Slightly imperfect amateur selfie framing. Subtle tilt, minor asymmetry, natural handheld composition, not perfectly centered. Feels pulled from a real camera roll.

Natural real-world lighting with slightly uneven exposure, soft directional shadows, realistic tonal falloff. No studio lighting, no ring light halo, nothing symmetrical or polished.

Authentic phone camera rendering — mild sensor noise, slight motion softness, realistic depth, natural dynamic range compression, subtle JPEG micro-artifacts. No over-smoothing.

Visible pores, fine skin texture, natural micro-imperfections. No beauty filter, no skin smoothing. Skin reads like a real photograph.

Realistic everyday indoor setting. Believable ambient detail, natural clutter, nothing staged or arranged.

Hyper-photorealistic, strictly photographic. No HDR grading, no cinematic color, no synthetic glow. True-to-life color temperature and natural exposure.

Aspect ratio 3:4, maximum resolution, natural proportions, no text, no logos, no watermarks.

What's breaking consistency for us

The bigger the gap between the profile photo context and the in-chat photo context, the more drift we see. A profile shot in neutral indoor lighting holds fine. The same character in a dim bar or outdoor evening light — face starts to shift.

Outfit changes are worse than location changes. Something about specifying clothing in detail seems to compete with the identity anchor in a way that location doesn't.

And the hardest case: when users ask for something specific mid-conversation. "Can you send me a photo from the gym" when the character's profile photos are all indoor casual. The context jump is too big and the face pays for it.
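One way to blunt that hardest case is to clamp user-requested scenes to contexts the character's reference set actually covers, instead of letting the identity anchor fight a large context jump. This is a hypothetical guard, not something the app does; the scene labels and helper are invented for illustration:

```python
# Hypothetical scene guard: if the requested context is too far from what the
# character's reference photos support, fall back to a safe default rather than
# letting the face pay for the jump. Scene taxonomy is illustrative.
def resolve_scene(requested: str, supported: set[str],
                  default: str = "indoor casual") -> str:
    """Return the requested scene if the reference set covers it, else the default."""
    return requested if requested in supported else default
```

The character could then decline the gym photo in-fiction ("I'll send you one when I'm there"), which also fits the rejection-capable design.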

What we haven't solved

Maintaining micro-expression consistency specifically. The eye shape that's locked perfectly in profile photos drifts subtly when the character is described as mid-laugh or looking down. Small angle changes in expression seem to affect identity more than small angle changes in camera position.

Also curious whether anyone is using a style-reference or image-injection method rather than purely prompt-based anchoring for contextual generation. If so, how are you handling the reference when the lighting/context is significantly different from the source image?

What approaches have people found for maintaining identity when the generative context shifts significantly from the reference?

How does this quality hold up against others you've seen/tried/tested?

u/MetaEmber — 5 days ago