Tag Archives: AI

Why AI SHOULD Replace Most CEOs

This is a video from my YouTube channel, Jack’s World.

In this video, I make the case that, of all the jobs artificial intelligence could replace, CEO should be at the top of that list. In recent years, CEOs in general have generated headlines for all the wrong reasons. With the ongoing advancements in AI, this is one job for which we should be actively trying to remove human flaws.


Filed under Artificial Intelligence, Jack's World, television, YouTube

Jack’s CreepyPastas: Revenge of the Y2K Bug

This is a video from my YouTube channel, Jack’s World.

This video is a CreepyPasta story I wrote and narrated myself about the Y2K bug. Enjoy!


Filed under CreepyPasta, horror, YouTube

Happy (Terminator) Judgment Day!

Today is August 29, 2024.

Officially, it’s just an ordinary day. It has no historical or legal significance of any kind.

But if you happen to be a serious movie buff, or a big fan of the Terminator franchise, it is the official canon date of Judgment Day. According to the events documented in “Terminator 2: Judgment Day,” Skynet became self-aware at 2:14 a.m., Eastern time, on August 29, 1997. Shortly afterwards, it launched a full-scale war on the whole of humanity.

While it is true that this date has fluctuated over the years due to numerous time travel anomalies and sub-par sequels, the August 29, 1997, date tends to be the most well-known and accepted. And in recent years, I think that date has become even more noteworthy and not just because it celebrates the greatest action movie of all time.

In recent years, artificial intelligence has emerged as something far greater than a standard movie trope. It is now a rapidly growing field, as well as a multi-billion-dollar industry. I’ve discussed the possibilities in the past and tried to put more recent advancements like ChatGPT into a proper context. But even I have been astonished at how quickly artificial intelligence has gone from an intriguing possibility to something that’s very real.

At this very moment, anyone with an internet connection can utilize what is essentially a limited AI for various tasks in a manner that would’ve seemed like a futuristic dream less than 10 years ago. It’s very likely that in the next 10 years, AI will advance in plenty of other ways, some of which we can’t possibly predict.

But these AI systems are nothing like Skynet. They’re very narrow in terms of their scope and capabilities. They’re not going to become self-aware or gain human-level intelligence. But make no mistake. At some point, we will have an AI that is as smart as an average human, if not smarter. This is not a technological feat that requires a huge breakthrough that defies or redefines physics. It’s largely an engineering and logistical challenge.

Whether it happens next year or several decades from now, it will happen. We will eventually create an AI that’s as capable and self-aware as Skynet. At that point, what happens next has been a huge point of contention and speculation.

Unless someone is stupid enough to give this future AI access to a nuclear weapons system, I don’t see it launching a full-scale nuclear war like Skynet did. Even if it wanted to, I doubt it would be given that level of access. That doesn’t mean it’ll be friendly. The first self-aware AI may very well be hostile in that it sees humanity as a threat. If it does choose to pursue conflict with us, that could result in some truly unprecedented dangers.

It’s unlikely it will manifest like the war we saw unfold in multiple Terminator movies. It’s also unlikely it’ll operate in any body that resembles Arnold Schwarzenegger. Whatever form it does ultimately take, it will still likely mark a turning point for technology and human history, as a whole.

Whether we call it Judgment Day, the technological singularity, Doomsday, or just another day that ends in Y, it’s a date we should take seriously. Because, as with all technology, there are risks and benefits. There are also best-case scenarios to go along with the worst case. “Terminator 2: Judgment Day” offered us a cinematic experience of how bad it can get if AI becomes hostile.

But we also shouldn’t let that obscure how good it could potentially be if utilized properly. I would even argue that, as the world continues to face unprecedented dangers and uncertainty, we need to advance AI. It might very well be our most important tool in solving critical issues having to do with climate change, medicine, ecological destruction, power generation, and surviving the eventual destruction of our planet.

As such, we should proceed carefully and remain cautious as we journey into the future. As Sarah Connor once said, there is no fate but what we make. And when it comes to developing an AI that will eventually become self-aware, let’s make that fate a good one for everyone, be they human or machine.


Filed under Artificial Intelligence, ChatGPT, technology

My Experience (And Plans) Using AI Writing Tools

Since I started this website, I wanted it to act as a hub for my writing, particularly my novels and my sexy short stories. I still want this site to serve that purpose. I see from the regular traffic this site gets that my sexy short stories do have an audience. And I’d like to continue providing that audience with new content, even if my efforts to write larger novels have stalled.

Now, if you’ve been following me over the years, you might have realized I haven’t been putting out as much writing as I used to. For that, I apologize. I promise, there are reasons for that, some of which are personal and I prefer not to share. Others simply amount to a lack of time and energy.

And once I started my YouTube channel, Jack’s World, my time and energy became even more divided. My channel has actually grown faster than expected, having eclipsed 1,000 subscribers in late 2023. And the videos I make for my YouTube channel generate considerably greater traffic than most of the content I put on this website. As such, expect me to continue giving my YouTube channel priority.

However, I still want to make time for writing sexy short stories. I don’t want to completely abandon that audience. It has just been incredibly difficult to make time to write those stories. And making that time has become even more difficult with each passing month, it seems.

Then, I discovered some new AI tools that could possibly help with that. And after spending the past month experimenting with them, I’d like to share my experience. I’d also like to present a disclaimer of sorts so that those who look forward to my sexy short stories aren’t caught completely off-guard.

Now, if you’ve been following me for any length of time, you probably know I have a keen interest in artificial intelligence and its many implications. Recently, that interest became a lot more real with the mainstreaming of major AI tools like ChatGPT. Like many others, I’ve experimented with ChatGPT and many other tools. It’s been quite an experience. But for the most part, I haven’t really found a use for them beyond simple novelty.

That changed when I discovered some AI tools that specialize in helping people write. These tools aren’t just spelling and grammar checkers like Grammarly. They actually “read” what you wrote and attempt to complete the next few sentences using a language model similar to the one behind ChatGPT. The main difference is that this AI is calibrated to help with writing.

It may sound like a fancier version of autocorrect. I promise you it’s much more than that. I know because in using it, I found it to be surprisingly helpful. It didn’t just supplement my usual writing. It felt like having a personal assistant who understood what I was trying to do and offered unlimited suggestions or ideas whenever I asked.

The AI tool I used in this case was called NovelAI. It’s actually not the best or most well-known AI of its kind. I picked it largely because it was one of the first tools I discovered and it was relatively cheap to experiment with. I only paid for one month of premium use.

But within that month, I actually managed to write more short stories than I had in the previous 10 months. On top of that, I did those over the course of the holidays, a good chunk of which I spent busy and very sick. So, under better circumstances, I probably could’ve written a lot more. But overall, I was impressed by what NovelAI allowed me to do.

In terms of actually using it, there was some adjusting involved. This AI program, and others like it, isn’t some magical tool where you write a few sentences and it completes the next 5,000 words with ease. That’s not how they work. You, the writer, still have to get things going. You still have to establish the characters, setting, and tone.

For me, that usually meant writing three to six paragraphs to get the story started. That part rarely took more than a half-hour. But once I got that setup in place, the AI really helped accelerate the rest. From there, just pushing a button got the AI to spit out several sentences. Sometimes, they were good. Sometimes, they were not. Sometimes, they were complete gibberish.

But that’s actually not a bad thing. Because, like most programs, the more you use it, the better you get at maximizing its strengths. On top of that, the AI gets better at recognizing and adapting to your style. NovelAI even has various settings you can adjust to get different kinds of output. It took a few days of experimenting to get a feel for it. But once I did, I got into a great rhythm.
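For anyone curious what that “push a button” step amounts to under the hood, here’s a rough sketch of the kind of continuation loop these tools run. To be clear, this is not NovelAI’s actual code or API; it’s just a minimal illustration using a general-purpose language model client, and the model name, prompt, and story text are placeholders I made up.

```python
# A rough sketch of the "continue my story" step that tools like NovelAI wrap
# in a button press. NOT NovelAI's actual API -- this uses the general-purpose
# OpenAI Python client purely as an illustration; the model name and story
# text are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment


def continue_story(text: str, sentences: int = 3) -> str:
    """Ask the model to pick up where the story left off."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    f"Continue the story in the author's voice. "
                    f"Write roughly {sentences} sentences. Do not summarize."
                ),
            },
            {"role": "user", "content": text},
        ],
        temperature=0.9,  # higher values give more varied (sometimes gibberish) output
    )
    return response.choices[0].message.content


# The writer still supplies the opening paragraphs; the tool only extends them.
story = (
    "The rain hadn't let up in three days, and Maya was starting to suspect "
    "it never would. She pulled her coat tighter and stepped off the last "
    "train of the night."
)
story += " " + continue_story(story)
print(story)
```

The settings I mentioned above, like how much text it generates at once or how “random” the output gets, roughly correspond to knobs like that temperature value.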

I managed to churn out multiple short stories in a fraction of the time it usually took me. Even with the complications of the holidays and being sick, I crafted over two dozen short stories. I’ll be sharing some of them in the coming weeks and months. I’ll be sure to disclose at the beginning that an AI helped me write them. But I’ll be very interested in seeing the response I get.

If the response is negative, then I’ll certainly take that into consideration with how I plan future short stories. But if it’s positive, then I’ll continue experimenting with other writing AI programs. As good as NovelAI was, it’s not the most well-known or well-regarded writing AI. Later this year, once I assess the impact of the stories I’ve written, I’ll try using another and see if that works better.

I’m also aware that there are those out there who don’t like anything made by AI on principle. Some of my friends and relatives have those views. To them, as well as anyone else who shares that sentiment, I only ask that you keep an open mind. The AI I used did not write these stories for me. I wrote large chunks of them myself. It just helped me fill in the details in a way that saved me time and energy. It supplemented my creative efforts. It did not usurp them.

And if, in the end, the result is still a good, enjoyable story, then I think that made this experience worthwhile. It’s an experience I think many others will explore as AI becomes more advanced and more mainstream. In time, maybe I’ll use it to help me write another novel. Whether or not it ends up being good remains to be seen.

For now, only time will tell. I have plenty more stories I’d like to tell, and if AI helps me get them out there, then I genuinely think that’s a good thing.


Filed under AI Art, Artificial Intelligence, technology

Mass Effect: The Danger And Soul Of Artificial Intelligence

This is a video from my YouTube channel, Jack’s World.

This video explores the inherent dangers and genuine hopes of artificial intelligence through one of my favorite video game franchises of all time, Mass Effect.

Artificial Intelligence is a big part of this galaxy-spanning saga. The main antagonists, the Reapers, represent a worst-case scenario. The synthetic race, the Geth, are also a recurring danger. However, in exploring the vast history and lore of Mass Effect, we gain some remarkable insights into what the future might hold for how we navigate AI. Enjoy!


Filed under Aliens, Artificial Intelligence, comic book reviews, Mass Effect, YouTube

Revealing Attitudes Towards AI Art (When People Don’t Know It’s AI Art)

When I was a kid, there was a general attitude towards any movie or TV show that was animated. Unless it was “The Simpsons,” animation was largely associated with kids. Even when there was a concerted effort to appeal to older demographics, animation was still seen as a less mature form of storytelling. Even when it was exceptionally well done, this stigma persisted.

Eventually, animation and animated content, in general, matured to a point where the stigma really doesn’t work anymore. Cartoons aren’t just for kids. We now have shows like “BoJack Horseman” and “Rick and Morty” that have helped change the popular perception of animation. On top of that, the added influence of Japanese anime, from “Dragon Ball Z” to “Ghost In The Shell,” has helped attitudes about animated content evolve even more.

This shift all happened within my lifetime. It’s a simple recognition that attitudes, perceptions, and trends are always changing. And they’re going to continue to change, regardless of how we might feel about certain issues now.

This brings me to AI generated art. It’s a topic that, depending on where you discuss it, can generate a lot of debate, disagreement, and consternation. I’ve been in more than one discussion with someone who claims AI generated art can only ever be a bad thing in the long run because it enables trolls, assholes, grifters, and bad actors.

I’ve also been in similar discussions with those who claim AI art isn’t even art. It’s, according to them, nothing more than a fancy calculator that crunches numbers on where to place certain pixels on a screen. While I think that’s a gross oversimplification, I don’t think it takes away from the end result. Whether it’s produced by a human or a computer, the end result is still the same. An artistic rendering is created. But regardless of what form it takes, we have certain attitudes about it.

As I write this, I think the prevailing sentiments toward AI art are negative. Show someone an AI generated image and they might concede that it looks nice. But they’ll still have an aversion to it, knowing it was created by AI. And if you show them a human-generated piece of art, even if it doesn’t look as nice, that same person will still ascribe more value to it than they would to an AI generated image. That’s just where we are right now.

But in a recent study published in Scientific Reports, an interesting insight was uncovered. When the researchers conducted a survey that included approximately 200 people, they found that when people weren’t aware an image was AI generated, their attitudes toward it were more positive. In many cases, they could not discern between artwork generated by humans and artwork generated by AI. But as soon as they were aware that something came from an AI, their sentiments changed.

It’s probably not too surprising, but it’s also revealing. It speaks to where we currently are in our perspectives on anything created by an AI. It still has this synthetic, uncanny valley feel to it. We still inherently ascribe more value to something that is created by a human over that of a machine. And while that is certainly understandable, given that human generated art requires more labor and passion, how much will that value persist in the coming years?

That’s not an unreasonable question because the quality of AI art has improved considerably in the past few years. And it’s likely to continue improving in ways we’re not ready for. Not too long ago, it was somewhat easy to discern when something was created by an AI. The issues it had with rendering hands and fingers are well-documented. However, those issues have been addressed and improved considerably, especially with newer models.

As such, you don’t need to be a wide-eyed utopian futurist to predict that AI art generators will improve to a point where it’s genuinely difficult to tell whether an image was created by a human. This study already showed how close we are. The participants weren’t able to tell, on a surface level, that they were dealing with a mix of AI- and human-generated art. With improved tools, even the most discerning and tech savvy among us might not be able to figure it out.

At that point, our attitudes towards AI art will have to change. Think what you want about AI and whether it’s capable of creating real art, let alone appreciating what goes into it. But if it’s capable of making something we can’t discern from its human-made counterparts, then those negative sentiments we have are entirely arbitrary, as well as subjective.

And those same attitudes might not be shared by younger people who have grown up in an environment surrounded by the internet, AI, and AI generated content. They might not have any issue with seeing AI generated art as real art. And anyone who just despises AI art as a concept might find themselves in the same group as those who still think all cartoons are just for kids.

It’s hard to know where these trends will lead as AI technology advances. It’s already affecting how we see art, knowledge, and what it means to be intelligent. And how we feel today might not be at all in line with how future generations will feel in the decades to come. Only time will tell.

I’ll simply note that humans, in general, are a visual species. We seek to create, admire, and cherish art, regardless of the source. For that reason, I think there will certainly be a place for any kind of art, whether it comes from a human or an AI.


Filed under AI Art, Artificial Intelligence, psychology, technology

How AI Art Could Upend/Destroy The Comic Book Industry

This is a video from my YouTube channel, Jack’s World.

This video covers some recent news surrounding accusations that major comic publishers are utilizing AI art, and the larger implications this new technology will have on the industry. Artificial Intelligence promises to impact a lot of industries, but the comic industry is uniquely vulnerable. And that vulnerability is definitely worth confronting as the technology continues to improve.


Filed under AI Art, Artificial Intelligence, DC Comics, Jack's World, superhero comics, YouTube

How AI Companions Can Be Helpful AND Harmful

It is not easy finding that special someone. It never has been, even if you’re rich, beautiful, and have strong social skills. Every generation encounters new challenges in their pursuit of love and companionship. I know I have. I’ve even shared some of those struggles before.

At the moment, I have not found that special someone. I am single, I live alone, and I currently have no romantic prospects of any kind. I’m honestly not even sure where to begin at this point in my life. Nearly everyone else in my immediate family has settled down and started having kids. I am very much behind the curve in that regard.

However, there are some individuals who are in an even tougher situation. I know I am lucky in many regards. I own my own home. I own my own car. I’m never behind on my bills or anything. But there are plenty of men and women my age who have none of that. Some of the people I went to college with are still stuck in debt and can’t even afford to pursue a serious relationship.

In that sense, I don’t blame anyone for seeking other forms of companionship. Loneliness really does suck. It is objectively bad for your health. While it has become a hot topic, even in political circles, it has also led to some controversial trends. And among the most contentious is the trend of people seeking AI companions.

Now, before I go any further, I want to make clear that I am somewhat reluctant to talk about this. While I’m usually up for any discussions about artificial intelligence, especially with how it may affect our love lives, this one specific aspect of AI is riddled with complications.

On one hand, there’s the general stigma. Most AI companions, such as Replika, are essentially idealized avatars of whatever kind of companion the user wants. If a man wants an AI girlfriend to look like a porn star and have hair like a girl from his favorite fantasy anime, he can have that and the personality to match. And while that is appealing, as a product, it still carries a stigma with it.

Men like this who use AI companions aren’t seen in a very sympathetic light. They’re more likely to be seen as examples of toxic male behavior. They’re not just lonely and in need of companionship. They’re seen as perverts who prefer a girlfriend that they can turn off, manipulate, or control in whatever way they please.

And make no mistake, there are men who treat their AI companions like that. They’re not all that subtle about it, either. But most of these men were shallow, crass, and crude before the advent of AI companions. They would’ve been that way regardless of whether or not this technology existed. There have always been men like that. And there always will be to some extent.

But there’s also a double standard at work with these men. Because there are AI companions for women too. They’re every bit as available as the ones men use. They just don’t get as much scrutiny and don’t carry as much stigma. If a woman were to create an AI companion to resemble her favorite male celebrity, chances are she wouldn’t be stigmatized as much as her male counterparts.

Some may see this as concerning, thinking the woman must have issues if she was resorting to AI companions. But she would certainly garner less stigma than the man.

I would still argue there are women out there who seek AI companions for the same reason as men. They’ll even mold avatars meant to resemble the sexiest, most attractive figure they can conjure. I don’t claim to know how common it is, but I don’t doubt this exists.

Even with that kind of shallow use of this technology, I think it’s much more common that these users are just lonely. They seek companionship the same way most humans seek companionship. Even if there are plenty of people to interact with, AI companions help fill a particular need. That’s really all there is to it.

That’s not to say that AI companions are harmless. I strongly believe they can do real harm. It just depends on the user and how they go about interacting with these AI systems.

If someone is manipulative, controlling, abusive, and self-centered, then having an AI companion that they can mold to their whims is not going to temper those tendencies. More than likely, they’ll get much worse. They’ll basically set a standard for the user that conditions them to expect certain qualities in a companion. And since real people can’t be molded, manipulated, or configured like an AI, they’ll never find someone who meets their impossible criteria.

In the process, that same user might grow bitter and angry that no real person can be to them what their AI companion is. And as these feelings simmer, it could just lead them into a destructive cycle of resenting everyone and everything that they can’t control the same way they control their AI companion.

That is very much a worst-case scenario for users of AI companions. I did try to look up research on this, but it was hard to come by. Both the stigma and novelty of these products make it difficult to assess. Maybe I’m being too hopeful, but I think cases like this are rare.

They certainly exist, but they’re the exception rather than the norm. They just tend to get more attention because seeing horrible people reinforce their horrible behavior with these AI companions is disturbing to many people, and understandably so.

At the same time, I also believe that AI companions can be genuinely beneficial for a lot of people and those benefits are likely understated. Remember, we are social creatures. And as intelligent as we can be, we’re also blunt instruments with respect to certain mental faculties. Our brains and our psyche don’t care about the nature of social interaction. So long as we find it fulfilling on some levels, we’ll incur the benefits.

In their early form, AI companions probably didn’t offer much in that regard. But in recent years, with the rise of AI chatbots and large language models, it has become relatively easy and cheap to create an AI that people can interact with in ways that closely resemble interactions with real humans. And the growing size of the AI companion industry is solid evidence that there is a growing market for this sort of thing.

But the good these AI companions could do goes further than simply giving people a facsimile of human interaction. Remember, the current crop of AI chatbots and LLMs are relatively new. They’re like the early models of the iPhone. They’re going to continue being refined, developed, and improved upon now that an industry is being built around them.

In time, AI chatbots and general AI technology will improve.

At some point, AI technology will advance to the point where it can offer more than just base-level interactions. In theory, an AI could be configured in a way that doesn’t just perfectly complement the personality of the user. It could also interact with them in a way that fosters healthy personal growth, just like any other good relationship.

There could even be AI companions specifically configured to deal with abusive men or women, helping them understand and deal with their issues in a way that makes them better individuals. That could be life-saving for certain people who struggle to find companionship due to issues like personal trauma or mental illness.

These AI companions don’t even need to take a physical form. They don’t need to be incorporated into sex robots or anything. They can still be effective as simple avatars on smart devices. There would certainly need to be some level of testing, safeguards, and refinement in order to make them work effectively. It might even take years before AI companions have such capabilities.

That’s the most I’m willing to say about AI companions at the moment. I don’t doubt this industry will continue to evolve in the coming years. I also don’t doubt there will be plenty of controversies about the ethics of these companions, as well as how they affect the user.

But even in their current form, with their current level of intelligence, they offer lonely people an outlet. Reasonable people can argue just how healthy or unhealthy that is. But it doesn’t change the fact that lonely people are out there. They’re seeking connection and companionship like everyone else. These AI companions aren’t perfect replacements, but they’re better than nothing.


Filed under Artificial Intelligence, futurism, men's issues, psychology, romance, sex in society, sex robots

Artificial Intelligence, The Entertainment Industry, And Their (Uncertain) Future

This is a video from my YouTube channel, Jack’s World.

This video is part speculation and part thought experiment on the impact of artificial intelligence on the entertainment industry. Since the WGA/SAG strikes of 2023, the impact of AI on entertainment has been impossible to ignore. While the technology is still in a very early stage, we’re already seeing it affect the course of multiple industries, and its effect on entertainment might be the most profound.

What could this mean for consumers?

What could it mean for the companies, studios, artists, and workers that produce our entertainment?

It’s difficult to determine at this early stage, but I make an effort to imagine what artificial intelligence could mean for the future of entertainment.


Filed under Artificial Intelligence, movies, Neuralink, technology, YouTube

AI Chatbots May (Thankfully) Render Homework Obsolete

Homework sucks.

Let’s get that out of the way.

I doubt anyone will disagree with that sentiment. No matter who you are or how many years you’ve been out of school, you probably don’t miss doing homework. It’s one of those special shared hatreds reserved only for traffic jams, parking tickets, and slow internet. But unlike those undeniable frustrations, homework isn’t an inescapable force of nature or law. It’s something we, as a society, choose to continue.

I’ve certainly questioned that choice, going back to when I was still in school. Having to do homework was among the many reasons why I was so miserable in school. And even though it was required, I can’t honestly say it ever helped me learn anything. Most teachers and administrators explained that it was important for ensuring we adequately learned the material. But as I’ve gotten older, I’ve come to realize that, even if that were a valid reason, it was still ineffective.

Just ask yourself honestly. Did you ever do homework because you were curious and wanted to learn?

Now, I could rant and lament on why homework sucks for days on end. But rather than torture myself to such an extreme, I wanted to highlight something that might offer hope to those who still remember how much homework sucked, as well as those currently in school at this very moment. It has to do with the impact of artificial intelligence and chatbots like ChatGPT.

I know I’ve talked a lot about artificial intelligence in the past. I’ve also highlighted the impact and hype surrounding ChatGPT. It is definitely one of the most intriguing and disruptive technologies to come along in decades. But unlike other discussions about whether artificial intelligence and ChatGPT will lead to the destruction of the human race, this is one issue in which the impact is already happening.

Recently, Vox produced an intriguing video about how ChatGPT has impacted education, especially homework. Even as someone who graduated school years ago, I found the issues and insights of this video remarkable. I encourage everyone to check it out.

The long and short of it is simple. ChatGPT is rendering most homework assignments, be they essays or worksheets, obsolete. Students are using ChatGPT to do the bulk of the work for them. The only real effort they need to make is ensuring that whatever they produce isn’t obviously the product of a chatbot.

That alone can be difficult. It is well-documented that chatbots like ChatGPT can be inaccurate. But when compared to having to do a long, boring assignment that a student probably isn’t interested in, that kind of challenge seems manageable by comparison.

Also, in the interest of full disclosure, I freely admit that I probably would’ve used ChatGPT when I was in school if I had access to it. I promise it wouldn’t be entirely out of laziness or an unwillingness to learn. I just found most homework assignments to be so dull and pointless that I cared more about just getting them done rather than actually learning anything.

I imagine I’m not the only one who feels this way. I suspect the majority of students simply see homework as a means of ensuring grades rather than actually learning something. And even if that assumption is flawed, it’s still an issue that speaks to major flaws in how we educate ourselves and others.

And until ChatGPT, it was easy to ignore that issue. Schools, teachers, and administrators had no reason to stop giving homework or question whether it was an effective tool. It was just one of those things that our education system had always done. So, why not keep doing it?

Well, now there’s a valid reason. Homework, as we know it, can be easily completed by any student with an internet connection. Whatever learning potential it had is pretty much lost. As the Vox video stated, this has led schools and educators to consider entirely new approaches.

The knee-jerk response that I suspect most will adopt is to try to ban or limit the use of chatbots. There are software programs out there that can help detect content that has been generated by a chatbot. However, I liken these programs to using scotch tape to seal the ever-widening cracks of a faulty foundation.

Because, like it or not, these AI chatbots are becoming more advanced. And the tools to keep up with them are always going to lag behind. That is a losing race and one no education system should attempt.

There’s even precedent for surmising why that’s a bad approach. When I was in college, there was a blanket ban on using Wikipedia. But enforcing that ban was a losing battle that caused more problems than it solved. It also created some nasty situations where students were accused of plagiarism when they did nothing of the sort. It took a few high-profile incidents, but most schools eventually came to embrace Wikipedia as a useful tool when approached correctly.

I think the impact of chatbots will have to go through a similar process. But unlike Wikipedia, the application of chatbots is a lot broader. These are tools that can effectively summarize books, write essays, and even write poetry with a few simple prompts. And in the same way young people have become more tech savvy than their parents, I suspect they’ll become more adept than most with respect to navigating chatbots.

That means homework, as we’ve been doing it for the past several decades, will be obsolete. While that’s certainly cause for celebration for many, it’s also an opportunity to take a step back and evaluate the process of education, as a whole.

It’s still very important that we educate young people in a meaningful way.

It’s also important to acknowledge that young people today will have access to resources that others have not.

If homework is no longer useful in that regard, what else could we do? What’s a more effective way to teach kids a concept, even when they’re not that motivated to learn it?

I don’t claim to know the answers. I am not a teacher, but I do remember how miserable I was in school and how little it really taught me. Hopefully, the impact of chatbots like ChatGPT will prompt a more thorough evaluation of how we approach education. Because if we keep clinging to old methods, then nobody will benefit in the long run, especially kids.


Filed under Artificial Intelligence, ChatGPT, technology