As the dating ecosystem evolves, Bumble is focused on responsible uses of AI and addressing new challenges brought by disingenuous usage. In a recent Bumble survey, 71 percent of Gen-Z and millennial respondents felt there should be limits on using AI-generated profile pictures and bios on dating apps. In addition, 71 percent of those surveyed believed that using AI-generated photos of oneself doing things one has never done, or visiting places one has never been, qualifies as catfishing.
The new safety update comes as Bumble is building new safeguards to uphold its mission to foster healthy and equitable relationships and continue to put women at the centre of its experiences.
“An essential part of creating a space to build meaningful connections is removing any element that is misleading or dangerous. We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously, so our community feels confident in making connections,” says Risa Stein, VP of product, Bumble.
The new reporting option adds to Bumble’s existing features that use AI for good to help members stay safe while dating online:
Deception Detector: Rolled out earlier this year, this AI tool helps identify spam, scam and fake profiles.
Private Detector: An AI tool that automatically blurs a potential nude image shared within a chat on Bumble, before notifying you that you’ve been sent something that has been detected as inappropriate. You can then easily block or report the image.