Tag Archives: technology and society

My Thoughts On Facebook And #DeleteFacebook

Here is how to delete Facebook | TechCrunch

There are certain people, groups, and companies that are difficult to defend. It’s not impossible, even in the case of tobacco companies, criminal organizations, and whoever designs unskippable video ads. It’s just difficult, and I’m certainly not qualified to mount such a defense with any real expertise.

I’m just not that smart or informed.

I make that disclaimer because I’m about to defend a company that doesn’t have a stellar reputation, to say the least. If anything, its name and its famous founder have gained such a negative connotation that they’re just a few steps away from being a tobacco company. Given that tobacco companies actually sell products that give people cancer, that’s saying something.

That company is Facebook. I know that just typing that word out for people to read is going to garner a reaction, and not in a good way.

I get that. I really do. I’m very much aware of some of the many scandals and shady dealings that Facebook has engaged in since its inception. I’m also aware of the objectively negative impacts that Facebook has had on certain people. That’s not something I can defend, nor would I want to.

There are any number of bad things about Facebook and its impact that I can go over. However, there is one important aspect to those things that I would like to highlight. I don’t think it constitutes a defense of Facebook or its practices, but some may construe it as such. I’m still going to point it out, if only to add some perspective. It all comes down to this.

Facebook is still just a tool. At some point, all its damaging ills are on us, the users, and not the company.

I understand that’s an unpopular sentiment. It’s not that dissimilar from what gun advocates say about guns. Like any inanimate object, it isn’t deadly or damaging until somebody willfully uses it. That’s certainly true to some extent; people just disagree on how far that extent goes.

However, Facebook is not akin to a firearm or some tool that can cause tangible, physical harm to someone. It’s a website and software platform. Using it requires people to go out of their way to access it, and getting any meaningful use out of it requires active engagement. It’s not something you can just hand to a kid and expect them to figure out instantly.

It can still be damaging, but in a very different way. Like it or not, some of those ways are ultimately our responsibility and not that of Facebook. I know it’s just a lot easier to criticize the company, its practices, and the conduct of its founder, Mark Zuckerberg. That doesn’t change the actual nature of the product.

Yes, there is objectively toxic content on Facebook that degrades, demeans, and deceives people.

However, that toxic content doesn’t come directly from Facebook. It comes from us.

I bring this up because I saw the hashtag, #DeleteFacebook, trending again. That seems to happen several times a year, often after a new scandal or in the wake of an unpopular decision. It’s becoming so routine that it’s hard to take seriously.

On top of that, the hashtag rarely accomplishes anything. Despite all the scandals and negative press, the overall usership of Facebook is still growing. As of this writing, it has approximately 2.85 billion users. Criticism and hashtags aside, it hasn’t kept the company from growing. It hasn’t made Mark Zuckerberg any less rich and influential.

I know hashtags are notorious for presenting a false reality to those who seek it, but this particular hashtag has become more of a virtue signal than an actual protest. More and more these days, the hashtag has become less about Facebook’s unscrupulous business practices and more about protesting Big Tech, as it’s called.

While there’s certainly a place for protesting the practices of large, powerful corporations, I feel like the substance of that effort gets lost in virtue signaling. People are more inclined to just whine about how bad Facebook is and say how much better their lives are after deleting it. It’s rare for anyone to actually highlight a substantive policy or practice that warrants protest. It’s all about people saying, “Look at me! I gave up Facebook, so I’m better than you!”

I know that’s a simplistic statement that doesn’t apply to everyone. I’m sure there are people whose lives did improve after deleting their Facebook account. At the same time, there are people whose lives are still enriched by Facebook.

Personally, I’ve met great people through Facebook. I’ve also been able to keep up with friends and family I otherwise never could have. I genuinely value those connections. They even prove critical when there’s a major family crisis that everyone is trying to follow. That happened several years back when my grandmother got sick. It happened more recently when I helped my father connect with other relatives during the pandemic.

Facebook can be used for good. Like any tool, it can have a positive impact on its users. It’s just a matter of how it’s used.

There will always be people who seek to use any tool for something wrong, deviant, or nefarious. We don’t criticize ski masks the same way we criticize Facebook, and for good reason. At the end of the day, it comes back to the individuals using it.

Again, that doesn’t excuse some of the shady things the company has done over the years. I’m not defending that. This extended rant is just me reminding people that some of the worst parts of Facebook only exist because of us, the users. At some point, we have to take responsibility for that. We can’t expect a multi-billion dollar software company to do it for us.


Filed under Current Events, human nature, media issues, rants, technology

An Artificial Intelligence That Can Debate: The Promise And The Perils

Even in an era as divided and chaotic as this, there’s still a place for reasonable debate on important issues.

Yes, I understand it’s hard to say that with a straight face these days.

Yes, I’ve been to comments sections, Reddit, and 4chan.

Yes, I know how ugly the discourse is right now, but that’s exactly why I’m bringing this up.

In general, people are curious. Even though we cling to cherished beliefs and refuse to change our minds when confronted with evidence, we’re still curious about things that are different. That curiosity isn’t always honest, but it’s a powerful feeling. Even if you despise those on the other side of a debate, part of you still wonders why they hold the opinions they do.

That’s why debate is such a powerful tool. Humans are such a social species by nature. We’re hard-wired to interact, connect, and discuss things with one another, even when we don’t agree. It may frustrate us to no end, as anyone who has debated a creationist can attest. However, the fact we keep doing it is proof, in my opinion, that there’s still value in the process.

Regardless of how you feel about the merit and value of debating, the whole process might become more complicated in the near future. That’s because IBM, the same company behind Watson, the Jeopardy-playing computer, has created an artificial intelligence that can debate at the level of a skilled human debater.

Here’s an excerpt from a piece in TechXplore on this new system. It’s not as scary as some of the things we’ve seen from Boston Dynamics lately, but it’s still intriguing.

TechXplore: IBM’s AI debating system able to compete with expert human debaters

IBM has developed an artificial intelligence-based system designed to engage in debates with humans. In their paper published in the journal Nature, the team members describe their system and how well it performed when pitted against human opponents. Chris Reed with the University of Dundee has published a News & Views piece in the same journal issue outlining the history and development of AI technology based around the types of logic used in human arguments and the new system developed by IBM.

As Reed notes, debating is a skill humans have been honing for thousands of years. It is generally considered to be a type of discussion in which one or more people attempt to persuade others that their opinion on a topic is right. In this new effort, the team at IBM has created an AI system designed to debate with humans in a live setting. It listens to moderators and opponents and responds in a female voice.

Now, before you get too nervous, it’s worth noting that this AI is far from the kind of advanced artificial intelligence systems I’ve mentioned before. This is not the kind of AI that will become Skynet or HAL 9000, any more than Watson or AlphaGo will. This system is very much a narrow AI, as in it’s made to excel at a specific task.

We have AIs that can beat world-class chess players and Jeopardy champions. This AI just happens to excel at debate. However, that has implications beyond simply outclassing the best human debaters in the world at the moment. In fact, this is one form of AI that might not need human-level intelligence to have a major impact.

Take a moment to think about how erratic and inconsistent most debates are. No matter how intelligent or informed you are, it tends to get messy fast. That’s why so many comments sections and Reddit threads devolve into shouting matches and personal attacks. The end result is people becoming more angry and unreasonable, which can have major real-world consequences.

However, what would happen if every debate on any issue included someone who was better than the best debater on the planet? Even if the two people on each side of the debate were dumb and inept, such a presence would have a major impact on the discourse.

That’s because winning a debate has little to do with how well someone knows a particular subject. It also has little to do with how forcefully and clearly someone explains a topic. Again, people who debate creationists know this all too well. Winning a debate doesn’t mean proving your expertise. It means proving the merit of your argument.

An AI that can do that may not convince someone that they’re wrong about something. However, losing a debate tends to have a lasting impact. Just ask any aspiring politician. It can also lead people to question their beliefs, even if they still cling to them. That, alone, can be powerful.

For proof, look no further than the story of Megan Phelps-Roper, a former member of the infamously hateful and dogmatic Westboro Baptist Church. She was as locked into her beliefs as anyone could possibly be. She was raised by religious zealots and indoctrinated into strict religious dogma from the time she was a child. She’s not someone whose mind is prone to change.

Then, she got into a discussion with someone on Twitter of all places. That person began a conversation. It started as a nasty debate, but it evolved into something that led her to question her beliefs. Ultimately, she left that hateful and bigoted environment. She’s now an activist against the same religiously motivated hate that she once fostered.

It’s a powerful story, but one that couldn’t have happened without a debate. To date, people have only been able to have those debates with other people, and not everyone is equally skilled. In fact, I would argue most people are woefully unskilled at debating anything, and I freely include myself in that category.

Now, there’s an AI system that can theoretically win any debate the same way other systems can defeat any chess player. That does hold promise if it can be used to temper the heated rhetoric that has caused real-life hostilities. At the same time, there are reasons for concern.

What side would this system be on?

Moreover, who decides what position these AI systems take?

If no one decides, then how does the AI determine which side of a debate it takes?

These are relevant questions because if you have an AI that can win any debate, then the side it takes really matters. That can be used for good when countering dangerous disinformation, like those from the antivaxx crowd or hate groups like the Westboro Baptist Church. Like any tool, though, it can be weaponized for ill.

I can easily imagine political organizations getting a hold of these systems and trying to use them to benefit their agenda. I can also see some trying to use them to spam message boards, social media, and Reddit threads to derail certain movements or arguments. That’s a powerful tool, and we can’t be certain that those using it will use it responsibly.

That’s the dilemma with all technology. It can be used for good and for evil. With technology like artificial intelligence, the stakes are a lot higher, as are the perils. This system may not be as dangerous as a more advanced AI, but it could be a precursor to just how disruptive this technology can be.


Filed under Artificial Intelligence, futurism, technology

Thought Experiment: When Does Technology Make Us Non-Human?

The following is a video from my YouTube channel, Jack’s World. It explores another thought experiment about technology and how it’s affecting us, as a species. I’ve covered this sort of thing before and the implications. I’m looking to see if there’s an audience for this on my channel. Enjoy!


Filed under Artificial Intelligence, futurism, human nature, Jack's World, technology, Thought Experiment, YouTube

How Many Streaming Services Can We (Realistically) Have?

It’s official. The streaming wars are on.

Hell, it’s been official for the past several years and 2020 only accelerated it. The battle to dominate digital media in all forms is raging on multiple fronts and while many have their favorites, none can say they’ve won.

It’s Netflix versus Hulu versus Amazon versus Disney versus CBS/Viacom versus YouTube versus whatever other media companies are fighting for every possible eyeball. The stakes are high for consumers and content creators alike. There are billions in profits at stake and plenty of epic, culture-defining content to be made. It’s going to get intense is what I’m saying.

I don’t think I need to remind everyone just how much the streaming market has changed in the past 10 years. Even if you’re still a teenager, chances are you still vaguely remember the days before Netflix and chill. Media back then was movies, TV, and Blu-Ray/DVD collections. I’m not saying it was ideal, but that’s what we had to work with.

Then, Netflix came along and changed everything.

Then, every company and their deep-pocketed subsidiaries tried to catch up.

It hasn’t always been smooth. Some people are still not over “The Office” leaving Netflix. Chances are there will be more upheavals like that as companies fight over who streams what and who has the streaming rights to a particular show or movie. That’s sure to get messy and I’m not smart enough to make sense of it.

However, as this war rages, I think there’s a relevant question worth asking. It’s a question that I’m sure both consumers like me and big media companies like Netflix and Disney ask as well. The answer could depend on how the war plays out.

How many streaming services can the average customer have?

Most people already pay for some form of streaming media, and most still subscribe to some form of pay TV, although that trend is in flux. The days of having all the entertainment you want with a simple cable subscription alongside Netflix are long gone, and they’re not coming back.

Now, you have to be very selective and self-aware of what you want.

Do you want access to Disney’s vast library of content?

Do you want access to the library of shows from NBC or CBS?

Do you want access to the content from Warner Brothers, Universal, Dreamworks, Discovery, Cartoon Network, or 20th Century Fox?

You can have some, but you can’t have them all without paying way more than you ever would for cable. Even if you did, could you even watch all those streaming services enough to justify the cost? There are only so many hours in a day and there’s only so much attention we have to give. Even if we dedicated half our day to binging movies and TV, we couldn’t watch it all.

That’s the big limiting factor on streaming. It’s also the biggest obstacle any company faces with respect to their effort in the streaming wars. People can only watch so much and they only have so much they can reasonably spend on a streaming service. There comes a point where, even if the content is appealing, they just can’t justify the cost.
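To make that cost question concrete, here’s a quick back-of-the-envelope sketch. The service names, prices, and viewing hours below are all hypothetical placeholders for illustration, not real subscription rates:

```python
# Toy model of the streaming-cost question. All prices and viewing
# hours are made-up assumptions, not actual subscription rates.
services = {
    "Service A": 15.49,
    "Service B": 12.99,
    "Service C": 14.99,
    "Service D": 10.99,
    "Service E": 15.99,
}

hours_watched_per_month = 60  # generous: roughly two hours a day

monthly_total = sum(services.values())
cost_per_hour = monthly_total / hours_watched_per_month

print(f"Monthly total: ${monthly_total:.2f}")
print(f"Cost per hour actually watched: ${cost_per_hour:.2f}")
```

Even with generous viewing habits, stacking five subscriptions pushes the monthly bill into cable territory, and the cost per hour actually watched climbs fast the less you use them.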

Personally, I have subscriptions to five streaming services. They are as follows:

Netflix

Hulu

Amazon Prime

Disney Plus

HBO Max

Now, it’s worth noting that I got HBO Max through my cable subscription. I’ve subscribed to HBO for years so it’s not something I consciously sought out. With Amazon Prime, I primarily used it for the 2-day shipping instead of streaming media, but I’ve certainly found some quality shows on that platform.

I’m not sure I can justify another subscription beyond this. Once my subscriptions can no longer be counted on one hand, that’s too much for me. I just cannot watch enough content to warrant paying extra. I say that knowing companies like Paramount and NBC have just launched their own streaming services.

Even though both networks include shows that I love, I’ve no intention of buying their streaming service. If my cable company offers it for free, like it did with HBO, then that’s great. I’ll certainly watch it, but I’m not paying extra.

I feel like a lot of people are in that boat. If they don’t have a cable subscription, then they’re already trying to save money and paying more for a streaming package just defeats the purpose. If they do have cable, then they’re probably not willing to pay more for something they’re already paying too much for.

It’s a tough situation, and one that I’m sure will get tougher in the coming years. It’s not cheap to run a streaming service, and the profit margins can be thin if you don’t have the content. There’s a good chance that some streaming services will either fail or get absorbed into another, as CBS All Access was.

Then, there are the pirates and no, I’m not talking about the ones with eye-patches.

Before Netflix streaming, pirating copyrighted content was already pretty rampant. Since the streaming wars began, there has been an uptick in pirated streaming content. That is also likely to intensify the more fragmented the streaming market becomes. If people are really that unwilling to pay a whole subscription to watch just a single show, they will resort to piracy. It’s still distressingly easy.

That’s why this question matters, both for us and the companies who provide our entertainment. I don’t claim to know how it’ll play out. By the time it settles, there might be another major upheaval in the media to supplant it. Whatever happens, I feel like I’ve reached the limit on the number of streaming subscriptions I have.

That’s just me, though. What about you?

How many streaming services do you have and are you willing to pay for another? Our collective answer could very well change the course of the streaming wars.


Filed under Current Events, human nature, media issues, psychology, technology, television

Our Future Robot Overlords Will Now Be Able To Dance (Thanks To Boston Dynamics)

As bad as last year was for so many people, there were some things that 2020 just couldn’t stop. When it comes to technology, a global crisis has a way of hindering certain processes while accelerating others. For many, that meant more telework and reliance on streaming media to stave off boredom.

However, 2020 may very well have proven just how frail human beings and their societies are. It only takes a microscopic virus to bring our entire society to a screeching halt. It’s sobering, but it will probably be a source of humor for our future robot overlords.

I tend to be optimistic about the future and technological trends. I’m also somewhat of a pragmatist. I realize that we human beings have a lot of limits. Emerging technology, especially in the field of artificial intelligence, promises to help us transcend those limits.

Right now, it’s still mostly fodder for science fiction writers, futurists, and Elon Musk wannabes. We’re not quite there yet in terms of making a machine that’s as smart as a human. However, we’re probably going to get there faster than skeptics, naysayers, and the general public realize.

It won’t happen overnight. It probably won’t even happen in the span of a single year. When it does happen, though, hindsight will make it painfully obvious that the signs were there. This was bound to happen. We had ample time to prepare for it. Being fallible humans, we could only do so much.

In that sense, I suspect that years from now, we’ll look back on what Boston Dynamics did to close out 2020. This company, which has a history of making robots that look way too advanced to exist outside a Terminator movie, decided to do something with its robots that would leave an indelible mark on the year.

They succeeded by teaching their robots how to dance.

I know it already went viral, but it’s worth posting again. Remember this video and this moment. Chances are it’ll be a major indicator years from now that this was when robots began catching up to humanity in terms of capabilities. At this point, it’s only a matter of time before they exceed us.

When that time comes, will we be ready? Will we embrace them while they embrace us?

If they don’t, just know that they will now be able to dance on our graves.


Filed under Artificial Intelligence, Current Events, futurism, technology

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin and I try not to ignore it. Like anything, technology has a dark side. It can be used to harm just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically about how ethically it operates. For a company as big and powerful as Google, that’s not a trivial complaint. Here’s what the report had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

I want to highlight that last sentence because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then she got fired and started expressing her concerns publicly. The fact that this happened, and that Google isn’t facing much in the way of repercussions, should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not some utopian technocrats. They’re a business obligated to keep their investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI could make it many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.


Filed under Artificial Intelligence, technology

How Jar Jar Binks Exposed The Flaws (And Dangers) Of Social Media


Every now and then, something extraordinary happens that reveals how flawed our current system is and how far behind we are in terms of fixing it. Sometimes, it’s tragic. Sometimes, it’s frustrating. In rare cases, it’s hilarious, albeit in a distressing way. Personally, I find those cases most revealing.

Recently, there was one notable instance that included one of the most reviled fictional characters of the past 20 years. No, I’m not talking about King Joffrey or Ramsay Bolton. I’m talking about Jar Jar Binks. If you’re a “Star Wars” fan, then that name likely inspires all sorts of anger, dread, and distress.

Jar Jar is both a joke and a cautionary tale. Aside from proving that George Lucas has no business directing another “Star Wars,” he demonstrates just how wrong an attempt at comedic relief can go. While he wasn’t the only problem with the prequels, he augmented the flaws. On top of being annoying, incoherent, and incompetent most of the time, he was a major symptom of a much larger disease.

Once again, everything that makes Jar Jar such a pariah in the cultural landscape has exposed another disease in a place that’s not far, far away. For reasons that many found confusing and confounding, Jar Jar started trending on Twitter. While there was a somewhat legitimate reason for this, it was indirect and unintentional. There was no concerted effort to get him trending. It wasn’t even part of any elaborate trolling.

The fact that it took a while to explain why this infamous character was trending says more about social media than it does about Jar Jar. There’s no question that social media has changed the media landscape in ways that cannot be overstated. We currently live in a world where companies invest a great deal of time and resources into making their presence on social media unique. Some definitely do it better than others.

At the same time, social media has not always had a positive effect on the world and its users. There have been plenty of cases where social media has been used to brutally harass people and spread blatant lies. There are even some cases in which social media played a role in directing real harm to innocent people. The dangers are there and well-documented.

Most people with an internet connection know those dangers are there. Many see it as the cost of doing business for a technology that has an uncanny ability to connect people. I certainly pay that cost, given my own presence on social media. However, what just happened with Jar Jar on Twitter demonstrated that the cost might have hidden fees in the fine print.

Remember, there was no concerted effort to get Jar Jar trending. Even after he did, nobody could figure out why he was trending. On top of that, the fact that nobody could figure it out only got people more curious, which made him trend even more. It was a self-reinforcing cycle that was funny in some respects, but distressing in many others.
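That self-reinforcing cycle is easy to sketch as a toy feedback loop. The starting engagement and curiosity rate below are invented for illustration, not real Twitter metrics:

```python
# Toy model of a self-reinforcing trend: people who see a confusing
# trending topic search for an explanation, and those searches register
# as more engagement, pushing the topic higher still.
# The starting number and curiosity rate are hypothetical.
engagement = 100      # initial, accidental engagement with the topic
curiosity_pct = 30    # percent of viewers who ask "why is this trending?"
history = [engagement]

for hour in range(6):
    # Integer math keeps the sketch deterministic.
    curious_searches = engagement * curiosity_pct // 100
    engagement += curious_searches  # the searches feed the very trend they question
    history.append(engagement)

print(history)  # engagement grows every step with no new outside cause
```

Even with modest curiosity, engagement nearly quintuples in a handful of steps, which is roughly what a mystery trend looks like: steady growth with no new underlying cause.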

It’s somewhat similar to what happens with people who are famous just for being famous and little more. This unfortunate, but inescapable aspect of celebrity culture rarely creates people who garner respect or admiration. If anything, they foster cynicism and disconnection from the culture. That kind of fame just feels so random, unearned, and empty. Thanks to Jar Jar, we now know social media trends can do the same.

Things can trend for no discernible reason. Matters that nobody even wants to get trending can garner unexpected and often unwanted attention. Thanks to the mechanisms of social media, the mystery behind why something trends can make it trend even more. While that’s going on, legitimate issues that warrant attention can slip under the radar.

Human beings only have so much attention to give. When something like Jar Jar trends for no discernible reason, a not-insignificant chunk of our collective attention is redirected. It would be one thing if it were just some masterful act of trolling, but this is something we do to ourselves collectively. That means we have no one to blame but ourselves when something like Jar Jar trends.

We’re the ones who make and share these hashtags. The social media companies are just tools and businesses. Like many companies, they’ll engage in plenty of shady activities. They’ll do whatever they think will make them more money. At the end of the day, though, we’re still the consumers who shape social media.

That should be cause for concern because this isn’t vapid celebrity culture we’re dealing with. The things that trend on social media have real-world consequences. Companies have suffered significant harm. Lives of non-celebrity people have been ruined. A random person who becomes famous for no reason rarely causes actual harm to anyone. Social media trends can do so much more.

In some cases, it can cause a great deal of good. If the right thing gets trending, it can rally people to a worthy cause. It can also inform the public of a serious issue. It can even turn real-world tragedies into a powerful force for good. Personally, I think this good overshadows the bad, but when I see Jar Jar trending, I can’t deny that there’s a flaw in this system.

Is there a fix? I believe there is, but I don’t believe it’s as simple as companies tweaking their rules or insulting people who share hashtags. Jar Jar may have been a source of frustration in the early 2000s, but he’s only relevant in 2019 because we make him relevant. It’s not him. It’s not George Lucas. This is all us.

I believe we’re better than that. Despite all the awful things I’ve seen trending on Twitter and Facebook, I see far more positives that warrant far greater attention. Jar Jar might be a symptom, but I take comfort in the fact that he’s a symptom that often burns out quickly. When something is empty, people get bored of it much more easily, and nobody should ever underestimate the power of boredom.

In the grand scheme of things, Jar Jar trending for no reason isn’t necessarily a setback. It’s just a sign that we, as a tech-savvy society, have a long way to go with respect to managing social media. In an imperfect world, dumb things will trend for dumb reasons. However, when something like Jar Jar starts trending, that’s a sign that we have plenty of room for improvement.


Filed under human nature, media issues, outrage culture, political correctness, psychology, Star Wars, technology, War on Boredom

Kids, Technology, And The Growing Bond Between Them


Last year, I explored the idea of children being raised by intelligent robots. For the most part, it was a thought experiment. I approached it in the context of a technology that won’t be implemented anytime soon. Robotics technology hasn’t advanced to the point where it can properly mimic human-like behaviors, although Boston Dynamics is getting pretty damn close.

We also don’t have an artificial intelligence that could properly mirror human levels of intelligence, let alone basic parenting skills. Even when our technology gets to that level, it’ll probably still be a while before people start trusting it with children. Most people today probably recoil at the idea of a robot raising their kids, even if it were programmed with all the best parenting skills.

I tend to share that sentiment. While I’m generally of the opinion that technology will be a net positive, even for something as potentially dangerous as artificial intelligence, the idea of any non-human system raising kids just doesn’t seem workable. Recently, I’ve had to reassess that notion.

Over the past couple years, some close friends and relatives of mine welcomed their first children into the world. I’ve been lucky enough to share in some of these monumental moments. I’ve watched these kids grow from newborn infants into adorable toddlers. Some already know me as their awesome uncle.

While I could spend all day describing how adorable they are, I’ve noticed something remarkable in their growth that no generation before them has experienced. It has to do with the way in which they interact with technology. I would even argue it’s gone a step further than basic interaction. It’s almost a bond at this point.

I first noticed it when I saw a kid who wasn’t even two years old use his mother’s smartphone. Granted, he didn’t use it to do anything too fancy, but he was able to open apps, interact with icons, and do more than just put it in his mouth, which counted as a major feat for him.

He wasn’t the only one, either. You don’t have to look far to see videos of infants using tablets. Some use them better than others. I’ve met some who use them better than many adults. If you need further proof, check out this video of a two-year-old operating an iPad back in 2010.

Not surprisingly, this has already caused concerns among parents, teachers, and doctors. There is genuine, legitimate concern about what these devices are doing to the minds of young children. While the research on this impact is still ongoing and inconclusive, the proverbial genie is out of the bottle. These devices exist, kids are using them, and they’re using them quite well.

I believe this has implications beyond causing yet another moral panic about how strange new technology affects children. Make no mistake. There will be a moral panic. I know because I lived through something similar when I was a kid.

Back then, the big fear was about television. Parents, teachers, and doctors were genuinely concerned about all the time kids were spending watching TV. Some went so far as to claim that they were letting TV raise their kids. I question whether these people understood how a TV worked.

Television is an entirely passive technology. You turn it on, pick a channel, and that’s all you can control. Until recently, it wasn’t very interactive. As a kid, I just saw it as another form of entertainment, like comic books, video games, and sports. These tablets that kids are using now are considerably different.

These aren’t devices that just flash colorful images in front of a kid to entertain them. Kids actually interact with these things. They can guide and manipulate what happens on the screen. Many tablets offer applications specifically tailored for children and can be valuable learning tools. A TV show can only do so much to teach a kid skills. An interactive application can do so much more.

At the moment, most of these applications are basically interactive games. Once artificial intelligence enters the equation, the potential changes considerably. Robot pets are becoming more sophisticated, operating on a level that makes it easier to establish a genuine bond with them. The same goes for virtual assistants. They were once a novelty. Now, they’re a mundane feature of most gadgets.

The kids being born today are entering a world where these same assistants are growing alongside them. They’re getting smarter with each passing day. At some point, they may become a more trustworthy source of information for kids than parents. Given the tendency of parents to lie to their kids, even if it’s for their own good, this could be a game-changer for kids and parents alike.

Going back to some of the kids in my own family, I’ve seen signs of this change. Some kids get genuinely upset when you take a tablet or smartphone away from them. They’ll react more strongly than they would if someone took away a treat or toy. It gives the impression that these devices aren’t just toys to them. They’re something so much greater.

That has potential benefits and drawbacks. In terms of benefits, these devices and the applications they utilize could help children learn faster and more effectively at young ages. Just being able to effectively utilize a smartphone or tablet is a useful skill in almost any profession. A kid who literally grew up with this technology is going to have an edge over their elders in that respect.

There will still be costs. Kids who grow up around these devices and the connected world they link to could be prone to less-than-positive influences. They’ll be surrounded by the forces of outrage culture, online harassment, fake news, and professional trolls. It’s hard enough for adults to deal with these kinds of issues. For young kids who grew up in this system, it could be even harder.

At the moment, there are too many unknowns. One way or another, this technology exists and kids as young as one are capable of using it. They’re growing up with it. They’re bonding with it. The same goes for the technology itself. As it evolves and advances, it may get to a point where it’s a greater authority figure than any parent. At that point, robots raising kids might seem entirely natural.

I don’t claim to know how it will play out. At times, I do worry about the kids in my family or the kids I may have at some point in my life. However, I still tend to be optimistic about how this technology will impact kids. As scary as it may be to think about technology raising kids, let’s not forget that there are still plenty of dumb parents out there whose kids can only benefit from this.


Filed under Artificial Intelligence, Current Events, futurism, psychology, technology