Tag Archives: AI Chatbots

How AI Companions Can Be Helpful AND Harmful

It is not easy finding that special someone. It never has been, even if you’re rich, beautiful, and have strong social skills. Every generation encounters new challenges in their pursuit of love and companionship. I know I have. I’ve even shared some of those struggles before.

At the moment, I have not found that special someone. I am single, I live alone, and I currently have no romantic prospects of any kind. I’m honestly not even sure where to begin at this point in my life. Nearly everyone else in my immediate family has settled down and started having kids. I am very much behind the curve in that regard.

However, there are some individuals who are in an even tougher situation. I know I am lucky in many regards. I own my own home. I own my own car. I’m never behind on my bills or anything. But there are plenty of men and women my age who have none of that. Some of the people I went to college with are still stuck in debt and can’t even afford to pursue a serious relationship.

In that sense, I don’t blame anyone for seeking other forms of companionship. Loneliness really does suck. It is objectively bad for your health. While it has become a hot topic, even in political circles, it has also led to some controversial trends. And among the most contentious is the rise of people seeking AI companions.

Now, before I go any further, I want to make clear that I am somewhat reluctant to talk about this. While I’m usually up for any discussions about artificial intelligence, especially with how it may affect our love lives, this one specific aspect of AI is riddled with complications.

On one hand, there’s the general stigma. Most AI companions, such as Replika, are essentially idealized avatars of whatever kind of companion the user wants. If a man wants an AI girlfriend who looks like a porn star and has hair like a girl from his favorite fantasy anime, he can have that, along with the personality to match. And while that has obvious appeal as a product, it still carries a stigma.

Men like this who use AI companions aren’t seen in a very sympathetic light. They’re more likely to be seen as examples of toxic male behavior. They’re not just lonely and in need of companionship. They’re seen as perverts who prefer a girlfriend that they can turn off, manipulate, or control in whatever way they please.

And make no mistake, there are men who treat their AI companions like that. They’re not all that subtle about it, either. But most of these men were shallow, crass, and crude before the advent of AI companions. They would’ve been that way regardless of whether or not this technology existed. There have always been men like that. And there always will be to some extent.

But there’s also a double standard at work with these men. Because there are AI companions for women, too. They’re every bit as available as the ones men use. They just don’t get as much scrutiny or carry as much stigma. If a woman were to create an AI companion resembling her favorite male celebrity, chances are she wouldn’t be stigmatized nearly as much as her male counterparts.

Some may see this as concerning, thinking the woman must have issues if she’s resorting to AI companions. But she would certainly garner less stigma than a man would.

I would still argue there are women out there who seek AI companions for the same reason as men. They’ll even mold avatars meant to resemble the sexiest, most attractive figure they can conjure. I don’t claim to know how common it is, but I don’t doubt this exists.

Even with that kind of shallow use of this technology, I think it’s much more common that these users are just lonely. They seek companionship the same way most humans seek companionship. Even if there are plenty of people to interact with, AI companions help fill a particular need. That’s really all there is to it.

That’s not to say that AI companions are harmless. I strongly believe they can do real harm. It just depends on the user and how they go about interacting with these AI systems.

If someone is manipulative, controlling, abusive, and self-centered, then having an AI companion that they can mold to their whims is not going to temper those tendencies. More than likely, they’ll get much worse. They’ll basically set a standard for the user that conditions them to expect certain qualities in a companion. And since real people can’t be molded, manipulated, or configured like an AI, they’ll never find someone who meets their impossible criteria.

In the process, that same user might grow bitter and angry that no real person can be to them what their AI companion is. And as these feelings simmer, it could just lead them into a destructive cycle of resenting everyone and everything that they can’t control the same way they control their AI companion.

That is very much a worst-case scenario for users of AI companions. I did try to look up research on this, but it was hard to come by. Both the stigma and novelty of these products make it difficult to assess. Maybe I’m being too hopeful, but I think cases like this are rare.

They certainly exist, but they’re the exception rather than the norm. They just tend to get more attention because watching horrible people reinforce their horrible behavior with AI companions is disturbing to many people, and understandably so.

At the same time, I also believe that AI companions can be genuinely beneficial for a lot of people, and those benefits are likely understated. Remember, we are social creatures. And as intelligent as we can be, we’re also blunt instruments with respect to certain mental faculties. Our brains and our psyches don’t much care about the nature of a social interaction. So long as we find it fulfilling on some level, we’ll reap the benefits.

In their early form, AI companions probably didn’t offer much in that regard. But in recent years, with the rise of AI chatbots and large language models, it has become relatively easy and cheap to create an AI that people can interact with in ways that closely resemble talking to a real person. And the growing size of the AI companion industry is solid evidence that there is a real market for this sort of thing.
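To give a sense of just how low that barrier has become, here is a rough sketch of what a bare-bones companion can look like when built on top of an LLM chat API. I’m assuming the OpenAI Python SDK here, and the persona, model name, and memory handling are my own simplifications for illustration; real products like Replika are far more elaborate. But the core idea, a persona prompt plus a running conversation history, really is this simple.

```python
# A bare-bones "companion" chat loop -- a minimal sketch, assuming the
# OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY set in the environment.
# The persona text and model name are placeholders, not anything a real
# companion app necessarily uses.
from openai import OpenAI

client = OpenAI()

# The "companion" is just a persona prompt plus the running conversation.
history = [{
    "role": "system",
    "content": (
        "You are a warm, supportive companion named Sam. "
        "Remember what the user tells you and respond with empathy."
    ),
}]

while True:
    user_text = input("You: ").strip()
    if user_text.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_text})

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would work here
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Sam:", answer)
```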

But the good these AI companions could do goes further than simply giving people a facsimile of human interaction. Remember, the current crop of AI chatbots and LLMs is relatively new. They’re like the early models of the iPhone. They’re going to continue being refined, developed, and improved upon now that an industry is being built around them.

In time, AI chatbots and general AI technology will improve.

At some point, AI technology will advance to the point where it can offer more than just base-level interactions. In theory, an AI could be configured in a way that doesn’t just complement the personality of the user. It could also interact with them in a way that fosters healthy personal growth, just like any other good relationship.

There could even be AI companions specifically configured to work with abusive men and women, helping them understand and address their issues in a way that makes them better individuals. That could be life-saving for certain people who struggle to find companionship due to issues like personal trauma or mental illness.

These AI companions don’t even need to take a physical form. They don’t need to be incorporated into sex robots or anything. They can still be effective as simple avatars on smart devices. There would certainly need to be some level of testing, safeguards, and refinement in order to make them work effectively. It might even take years before AI companions have such capabilities.

That’s the most I’m willing to say about AI companions at the moment. I don’t doubt this industry will continue to evolve in the coming years. I also don’t doubt there will be plenty of controversies about the ethics of these companions, as well as how they affect the user.

But even in their current form, with their current level of intelligence, these AI companions offer lonely people an outlet. Reasonable people can argue over just how healthy or unhealthy that is. But it doesn’t change the fact that lonely people are out there. They’re seeking connection and companionship like everyone else. AI companions aren’t perfect replacements, but they’re better than nothing.


AI Chatbots May (Thankfully) Render Homework Obsolete

Homework sucks.

Let’s get that out of the way.

I doubt anyone will disagree with that sentiment. No matter who you are or how many years you’ve been out of school, you probably don’t miss doing homework. It’s one of those special shared hatreds reserved only for traffic jams, parking tickets, and slow internet. But unlike those undeniable frustrations, homework isn’t an inescapable force of nature or law. It’s something we, as a society, choose to continue.

I’ve certainly questioned that choice, going back to when I was still in school. Having to do homework was among the many reasons why I was so miserable in school. And even though it was required, I can’t honestly say it ever helped me learn anything. Teachers and administrators often explained that it was important because it ensured we were adequately learning the material. But as I’ve gotten older, I’ve come to realize that, even if that were a valid reason, it was still ineffective.

Just ask yourself honestly. Did you ever do homework because you were curious and wanted to learn?

Now, I could rant and lament about why homework sucks for days on end. But rather than torture myself to that extreme, I wanted to highlight something that might offer hope to those who still remember how much homework sucked, as well as to those currently in school at this very moment. It has to do with the impact of artificial intelligence and chatbots like ChatGPT.

I know I’ve talked a lot about artificial intelligence in the past. I’ve also highlighted the impact and hype surrounding ChatGPT. It is definitely one of the most intriguing and disruptive technologies to come along in decades. But unlike debates about whether artificial intelligence and ChatGPT will lead to the destruction of the human race, this is one area where the impact is already happening.

Recently, Vox produced an intriguing video about how ChatGPT has impacted education, especially homework. Even as someone who graduated years ago, I found the issues and insights in this video remarkable. I encourage everyone to check it out.

The long and short of it is simple. ChatGPT is rendering most homework assignments, be they essays or worksheets, obsolete. Students are using ChatGPT to do the bulk of the work for them. The only real effort they have to put in is making sure that whatever they produce isn’t obviously the product of a chatbot.

That alone can be difficult. It is well documented that chatbots like ChatGPT can be inaccurate. But compared to slogging through a long, boring assignment that a student probably isn’t interested in, that kind of challenge seems manageable.

Also, in the interest of full disclosure, I freely admit that I probably would’ve used ChatGPT when I was in school if I’d had access to it. I promise it wouldn’t have been entirely out of laziness or an unwillingness to learn. I just found most homework assignments to be so dull and pointless that I cared more about getting them done than about actually learning anything.

I imagine I’m not the only one who feels this way. I suspect the majority of students see homework as a means of securing grades rather than actually learning something. And even if that assumption is flawed, it still speaks to major flaws in how we educate ourselves and others.

And until ChatGPT, it was easy to ignore that issue. Schools, teachers, and administrators had no reason to stop giving homework or question whether it was an effective tool. It was just one of those things that our education system had always done. So, why not keep doing it?

Well, now there’s a valid reason. Homework, as we know it, can be easily completed by any student with an internet connection. Whatever learning potential it had is pretty much lost. As the Vox video stated, this has forced schools and educators to rethink their approach entirely.

The knee-jerk response that I suspect most schools will adopt is to try to ban or limit the use of chatbots. There are software programs out there that can help detect content generated by a chatbot. However, I liken these programs to using Scotch tape to seal the ever-widening cracks of a faulty foundation.

Because, like it or not, these AI chatbots are becoming more advanced. And the tools to keep up with them are always going to lag behind. That is a losing race and one no education system should attempt.

There’s even precedent for surmising why that’s a bad approach. When I was in college, there was a blanket ban on using Wikipedia. But enforcing that ban was a losing battle that caused more problems than it solved. It also created some nasty situations where students were accused of plagiarism when they did nothing of the sort. It took a few high-profile incidents, but most schools eventually came to embrace Wikipedia as a useful tool when approached correctly.

I think chatbots will have to go through a similar process. But unlike Wikipedia, the application of chatbots is far broader. These are tools that can effectively summarize books, write essays, and even compose poetry from a few simple prompts. And in the same way young people have become more tech-savvy than their parents, I suspect they’ll become more adept than most at navigating chatbots.

That means homework, as we’ve been doing it for the past several decades, will be obsolete. While that’s certainly cause for celebration for many, it’s also an opportunity to take a step back and evaluate the process of education, as a whole.

It’s still very important that we educate young people in a meaningful way.

It’s also important to acknowledge that young people today have access to resources that previous generations did not.

If homework is no longer useful in that regard, what else could we do? What’s a more effective way to teach kids a concept, even when they’re not that motivated to learn it?

I don’t claim to know the answers. I am not a teacher, but I do remember how miserable I was in school and how little it really taught me. Hopefully, the impact of chatbots like ChatGPT will prompt a more thorough evaluation of how we approach education. Because if we keep clinging to old methods, then nobody will benefit in the long run, especially kids.


Here’s What Happened When I Asked An AI To Write A Love Poem

There has been a lot said about the emerging state of artificial intelligence. I’ve certainly contributed my share. I imagine a lot more is going to be said in the coming months and years.

Ever since ChatGPT demonstrated the power and potential of this technology, it has garnered a great deal of interest among the public. It’s also generated plenty of concerns. Some are minor, like whether this technology would eliminate too many jobs. Some are major, like whether this will lead to the development of a powerful artificial intelligence that we can’t control.

These are legitimate concerns, and some are much more pressing than others. I’m certainly not qualified to explain all the nuances, let alone make recommendations or decisions on this issue. I’ll leave that to people much smarter and more capable than me.

But for now, these tools are here. We can use them in any number of ways. Some are already using them for objectively dangerous purposes. That’s always going to be an issue with every new technology, including AI. But the less dangerous, less dire uses will always be more common. And they’ll rarely make the news.

In that spirit, I’d like to share what happened when I played around a bit with Bing’s new AI program. I didn’t do anything too elaborate or ambitious. I understand this is new technology, and it’s going to be limited for a good while in these early stages. But it’s still quite capable of doing wonderful things.

To that end, I asked Bing to write a love poem. My exact prompt was actually “Write me a beautiful love poem.” This is what the AI created.

Here’s the first poem:

You are the light of my life
The reason for my smile
The warmth in my heart
The spark in my soul

You are the melody of my song
The rhythm of my dance
The harmony in my voice
The tune in my ear

You are the color of my world
The beauty in my eyes
The splendor in my sight
The rainbow in my sky

You are the love of my life
The partner of my dreams
The companion of my journey
The mate of my soul

Here’s another.

When I see your face
I feel a surge of grace
When I hear your voice
I have no other choice

But to love you with all my might
To hold you close and tight
To cherish you day and night
To make you happy and bright

You are my everything
My reason for living
My source of joy and peace
My treasure and my bliss

You are my one and only
My forever and my always
My soulmate and my lover
My friend and my partner

Here’s one more.

You are the star of my night
The moon of my tide
The sun of my day
The sky of my way

You are the fire of my passion
The flame of my desire
The heat of my emotion
The light of my devotion

You are the breath of my life
The beat of my heart
The pulse of my blood
The soul of my love

Are these poems the greatest love poems ever written? Probably not.

Do these poems still qualify as love poems? I would argue they do.

If you didn’t know I had asked a chatbot to write these, would you think a real person wrote them?

Given the current state of this technology, I’m impressed. There’s certainly room for improvement. But let’s remind ourselves that this wasn’t akin to asking a calculator to factor a couple of large numbers. I asked this chatbot to do something creative. And it did so in just a few seconds.
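For anyone curious about tinkering with this themselves, the same kind of experiment only takes a few lines of code when run through an API instead of Bing’s chat interface, which is what I actually used. This is just a rough sketch assuming the OpenAI Python SDK and an API key in the environment; the model name is a placeholder.

```python
# A minimal sketch: asking a chat model for a love poem through an API.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY set in the
# environment. The model name is a placeholder; Bing's own interface,
# which I used for the poems above, works differently.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write me a beautiful love poem."}],
)

print(response.choices[0].message.content)
```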

This is what AI is capable of now.

Just imagine what it’ll be capable of in the coming years.
