AI Love You: when robots play cupid

 "There is a world where your dating concierge could go and date for you with other dating concierge ... and then you don’t have to talk to 600 people" - Whitney Wolfe Herd, Founder of Bumble

In a recent interview, Bumble founder and executive chair Whitney Wolfe Herd caused a bit of a stir when she talked about the company's plan to bring artificial intelligence (AI) to the dating space. Some even compared her idea to the Black Mirror episode "Hang the DJ", in which a dating system uses a "Coach" device to pair partners for set periods of time, guiding them through simulated realities to determine a near-perfect lifelong match. Ultimately, the characters realise that their repeated rebellion against the system is precisely what shows they are compatible.

 "There is a world where your dating concierge could go and date for you with other dating concierge ... and then you don’t have to talk to 600 people" she said, summarising Bumble's ambitious vision. This bold proposal to use AI to facilitate and manage relationships is a great opportunity to think about how AI could affect our relationships. By looking at what we know about how the mind works, we can see why it’s important to keep human connections real and strong in the digital world, and most importantly, in the real world.

Step by step: Social Penetration Theory

The Social Penetration Theory, developed in 1973 by psychologists Irwin Altman and Dalmas Taylor, suggests that intimacy is achieved through the deliberate exchange of personal information, which is progressively deepened. This exchange encompasses layers of both superficial and deeply private details. The theory posits that as partners share progressively deeper levels of self-disclosure, trust builds, facilitating closer and more intimate relationships.

The model is often shown as an onion, with the outer layers representing more accessible, surface-level information and the inner layers revealing more vulnerable, private aspects of a person’s life. Layer by layer, overcoming our small fears of sharing details of our lives, personalities and habits, and being accepted by the other person, builds a solid relationship over time.

When AI orchestrates or accelerates this disclosure process, it risks creating a veneer of intimacy that lacks the foundational trust typically built through spontaneous and reciprocal sharing - a hollow shell. This artificial pacing may leave individuals feeling disconnected or distrustful, as the relationship depth created may not accurately reflect their true emotional investment or readiness to disclose. This could result in relationships where emotional bonds are superficial, potentially leading to psychological distress when discrepancies between perceived and actual intimacy become apparent.

Social penetration theory in a nutshell, or rather, an onion. (Wikimedia Commons)

Cognitive Dissonance and AI-Directed Choices

Cognitive Dissonance Theory, first introduced in 1957 by psychologist Leon Festinger, examines the discomfort people feel when they hold conflicting beliefs, or when their actions don't match their beliefs and values. Festinger originally developed the theory after observing a cult whose end-of-the-world predictions failed to come true, noting the ways in which members tried to make sense of the contradiction between their beliefs and reality.

When AI suggests that a particular person is a "perfect match", it sets an expectation for the relationship to succeed. If the relationship fails, the user may experience cognitive dissonance from the discrepancy between the AI's assurance and the actual outcome. This could lead, amongst other things, to self-blame, where the individual feels like a failure for not making the "perfect" match work. Such experiences could damage self-esteem and influence future relationship approaches.

This dissonance also opens the door to potential abuse. Cognitive dissonance can create a "chasing losses" effect, where users double down before giving up - for example, by opting for a 'premium' plan, 'advanced AI analysis' or whatever other product the platform comes up with.

 

Media Richness Theory: The Need for Human Nuance

Media Richness Theory, developed in 1984 by Richard L. Daft and Robert H. Lengel, evaluates different communication media based on their ability to provide immediate feedback, use multiple cues, establish personal focus, and use natural language. The theory asserts that rich media are better equipped to handle complex, nuanced, or ambiguous messages effectively. It categorizes media from rich to lean based on these criteria, suggesting that tasks requiring more detailed explanations benefit from richer media. This theory emerged from organizational studies, particularly looking at how technology affects information processing and communication effectiveness within businesses.

Media Richness Theory (Wikipedia)

In dating contexts, non-verbal cues such as eye contact, gestures and emotional tone play a critical role in building attraction and understanding. Dating apps largely fail to open this avenue - although attempts have been made, with the possibility of sharing audio and video. AI may add another barrier to this process. Its inability to fully replicate or understand social cues can make interactions feel hollow or misunderstood, which is particularly detrimental in the early stages of a romantic relationship.

 

Preserving Human Agency and Autonomy

Autonomy, as defined by psychologists Edward Deci and Richard Ryan in the early 1980s, is about acting as your own person and making choices for yourself. This sense of being in control is really important for how you see yourself and your mental health. Autonomy is a key factor in human motivation and personality. It's essential for self-esteem, well-being and satisfaction with life. Most of the time, the more autonomy you have, the better you'll feel about yourself.

When AI makes decisions on behalf of individuals, such as selecting potential romantic partners or dictating the timing and style of interactions, it diminishes personal agency. This can lead to a reliance on algorithms for social and emotional guidance, potentially stunting personal growth in relationship skills, empathy and self-awareness.

Furthermore, this shift can lead to an over-reliance on technology, reducing resilience and adaptability in social situations without AI support. If you only meet or date people who "perfectly" match you, then the moment you meet someone who creates some friction, you might not have the tools to deal with the uncomfortable situation.

Behaviorism and the Pitfalls of Reinforcement

Behaviourism, a theory developed in the early 20th century by psychologists such as John B. Watson and later expanded by B.F. Skinner, emphasises the role of external actions and behaviours learned through conditioning. According to this perspective, actions that are followed by positive reinforcement are more likely to be repeated, forming patterns of behaviour. This theory focuses on observable and measurable aspects of human behaviour, arguing that all behaviours are acquired through interaction with the environment. Skinner further developed the concept by introducing the idea of operant conditioning, where behaviour is shaped by rewards or punishments.

In the context of AI-enhanced dating apps, behaviours such as swiping, messaging and frequent app interactions could be overly reinforced, leading to habits centred on app engagement rather than meaningful relational interaction. This can happen deliberately, because the apps are designed to encourage it, or unintentionally. How? Humans learn on the go, much faster than AI models can be retrained or tuned. If things are not done carefully, there is therefore a greater chance of human behaviour shaping the model than of the model shaping human behaviour.

This could condition users to focus on superficial rewards (e.g. matches, likes) rather than the deeper satisfaction of genuine relationship growth and connection, fostering an environment ripe for addictive behaviours rather than substantive emotional engagement.

 

Aligning with the Theory of Planned Behavior

Developed by psychologist Icek Ajzen in 1985, the theory posits that individuals' actions are guided by their intentions, which are influenced by attitudes, subjective norms and perceived behavioural control. It emphasises how personal agency and social approval guide people's actions in line with their intentions. This theory builds on Ajzen's earlier work with the Theory of Reasoned Action, adding the component of perceived control to account for factors outside of an individual's immediate control that may affect their ability to perform a behaviour.

When AI influences these intentions by setting norms or expectations, or by suggesting behaviours that may be inconsistent with personal desires, it can create a misalignment between one's actions and their intrinsic goals. This misalignment can lead to relationship choices that feel forced or inauthentic, potentially leading to frustration and dissatisfaction with the dating process. The influence of AI could subtly shift norms about how relationships should be formed, imposing a one-size-fits-all approach that neglects individual preferences and circumstances.

 

Opinion

The idea of optimisation and productivity has spread beyond the workplace and now affects other areas of human activity: writing, drawing, composing music and dating, to name a few. "There's an app for that" has become "there's an AI for that". The question is: should these areas of human activity be optimised at all? If we don't need to spend time dating, pursuing a hobby or learning a skill, what will we spend our time on?

👉 In this context, the only two activities that we don't need to worry about optimising are spending and consuming.

It's important to understand the economic motivations behind technology platforms, including dating apps, and to contrast them with the interests of society. These platforms are designed to grow their user base and keep users engaged and spending money - their business model runs on attention plus money. Even if an app such as Hinge says “Designed to be deleted”, it is safe to say that none of these companies is interested in committing business suicide by asking its users to delete the app and go touch grass.

This clash between what businesses want and what users need raises questions about what they're really up to when they bring AI into dating and other social or creative areas. If the goal is to keep users engaged and active on the platform for as long as possible, AI might not always put the users' best interests first. Instead, it may encourage addictive behaviours and dependencies, rather than fostering genuine and lasting human connections.

So far, it is more than questionable whether dating apps and their strategies have had a positive impact on the dating and relationship landscape. Did people have to talk to or swipe through 600 people to find a good match before these apps existed? I wonder why consumers should trust them now.

I'm currently reading The Anxious Generation by Jonathan Haidt. The book explores the rise in mental health problems among today's youth, attributing it to several key factors: the pervasive influence of digital technology and social media, and broader cultural shifts. Haidt argues that these changes have led to increased anxiety and depression in children and adolescents.

When I finish it, I'm pretty sure I'll have some new insights that I'd love to share with you.

So, what do you think? Would you trust AI to find your next life partner?