Category Archives: Artificial Intelligence

Self-Driving Cars Are Already Saving Drunk Drivers: The Promise And The Implications


According to the Centers for Disease Control, 10,497 people died in traffic accidents caused by drunk driving in 2016 alone. That accounted for 28 percent of all traffic-related deaths in the United States. A significant share of those deaths were children. Even if you’re not good at math, you know that’s not a trivial figure.

There’s also a good chance you know someone who has been hurt or worse because of a drunk driver. This sort of thing is personal for me because one of my cousins was killed by a drunk driver many years ago. He wasn’t even drinking. He was just unlucky enough to be in the back seat of the car at the time.

It’s an issue that has existed for as long as cars. It’s also an issue that policy makers and car manufacturers have tried to address through awareness programs and safety features. However, these measures can only do so much. So long as human beings are drinking and driving cars, this will be an issue.

That dynamic will likely change considerably when self-driving cars enter the picture. To some extent, they’re already making an impact. You can buy a car today that has some measure of self-driving features. They’re still not fully autonomous, but we’ve taken the first critical steps. From here on out, it’s just a matter of refinement.

Even though it might be years before self-driving cars are common, they’re already making an impact, and not just in terms of sheer novelty. Very recently, a Tesla Model S, which has an Autopilot feature, did something cars of old could never do.

It saved a drunk driver who passed out behind the wheel, likely preventing serious injuries, or worse, to the people around him. Here are the details, according to Teslarati.

Teslarati: Tesla Autopilot prevents drunk driver from making a fatal mistake

As explained by the Eastern Police District on its official Twitter account, a 24-year-old Tesla owner ended up passing out while driving his Model S. Fortunately for the driver, the vehicle’s Autopilot system was activated, which allowed the Model S to stay in its lane without causing trouble to other drivers.

Upon detecting that its driver was unresponsive, the vehicle eventually came to a stop and engaged its hazards. The man was later attended to by emergency services. No one was injured in the incident.

The police noted that the Tesla driver, who was found unconscious in the Model S, was evidently drunk, though he denied that he was driving. Video evidence showing the Tesla owner passed out in the driver’s seat has been shared online, however. The police stated that necessary tests have been taken, and that the Tesla owner’s driver’s license has been temporarily suspended. A case has also been filed against the driver.

Such an incident could have easily been a lot worse. It is very easy for drunk drivers to harm themselves, after all, but what’s even worse is that they could just as easily harm other people. These scenarios would likely not be as prevalent if vehicles were capable of safely stopping on their own once their human drivers are incapacitated.

Those last details are the most relevant. Without these features, this incident could’ve played out like so many other drunk driving tragedies. A drunk driver passing out behind the wheel would’ve, at the very least, led to the car going off-road and crashing, resulting in significant injury. At worst, the driver could’ve hit another car, compounding the tragedy.

However, thanks to these emerging systems, that didn’t happen. The safeguards in the car worked. The only real harm done involved a hangover and a DUI. Compared to the alternative, that’s far preferable.
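Tesla doesn’t publish its exact logic, but the behavior described in that excerpt (escalating alerts, then a controlled stop with hazard lights once the driver is presumed unresponsive) maps onto a simple state machine. Here’s a minimal sketch in Python; the thresholds and action names are illustrative guesses, not Tesla’s actual values.

```python
from enum import Enum, auto

class DriverState(Enum):
    ATTENTIVE = auto()
    VISUAL_WARNING = auto()
    AUDIBLE_WARNING = auto()
    CONTROLLED_STOP = auto()

def monitor_step(seconds_since_input: float, wheel_torque_detected: bool):
    """One tick of a hypothetical driver-monitoring loop.

    Returns the new state plus the actions to take. The thresholds below
    are made up for illustration; a real system would tune them by speed,
    road type, and sensor confidence.
    """
    if wheel_torque_detected:
        return DriverState.ATTENTIVE, []
    if seconds_since_input < 15:
        return DriverState.ATTENTIVE, []
    if seconds_since_input < 30:
        return DriverState.VISUAL_WARNING, ["flash 'Apply slight turning force' alert"]
    if seconds_since_input < 60:
        return DriverState.AUDIBLE_WARNING, ["sound escalating chime"]
    # Driver presumed unresponsive: slow down in lane and warn other traffic.
    return DriverState.CONTROLLED_STOP, ["decelerate in lane", "engage hazard lights"]

# A full minute with no steering input triggers the safe-stop behavior.
state, actions = monitor_step(seconds_since_input=75, wheel_torque_detected=False)
print(state, actions)
```

The point isn’t sophistication. Even a crude fallback like this turns “driver passed out” from a crash into a parked car.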

We should not understate the importance of this development. Think back to that 10,497 figure from 2016. Thanks to the autopilot system in that Tesla, the corresponding figure for 2021 will be at least one lower. It doesn’t eliminate the tragedy of drunk driving all at once, but it’s a start, and an important one at that.

Driving is inherently dangerous, but a lot of that danger comes from the people behind the wheel and not the machines themselves. Anything operated by a human is prone to human error. An autonomous system, even if it isn’t a full-fledged artificial intelligence, can and will mitigate those errors.

That’s not to say those same autopilot systems aren’t prone to error. They certainly are, but remember that this technology is still very new. The first cell phones couldn’t send an email or reliably play streaming video. That took time, energy, and better hardware.

At this very moment, car companies and tech companies are putting in all that work. There is a lot of potential profit in refining this technology. However, I would point out that you can’t put a price on human life and, as it stands, thousands will continue to die every year because of traffic accidents, especially drunk driving. This one incident might not seem like much in the grand scheme of things, but it’s still one tragedy averted, one accident prevented, and at least one life saved. For anyone who knows the pain of losing a loved one to drunk driving, that’s worth celebrating.

Leave a comment

Filed under Artificial Intelligence, futurism, robots, technology, Uplifting Stories

How Should A Robot Look Before You Welcome It Into Your Home?


There was a time when people were skeptical about having a computer in their home. I know because I’m old enough to remember people like that. It wasn’t that they were paranoid about them, although a few certainly were. They just didn’t see the need.

Then, computers became smaller, more affordable, and more useful. They went from big, bulky machines that took up an entire corner of a room to sleek, decorative pieces of hardware that did so much to improve our lives. From information to communications to masturbation, computers revolutionized our lives.

It’s a common trend in technology. When it’s new and undeveloped, people are wary about having it in their homes. Go back several decades and people felt the same way about television. Go back a century and some people were reluctant to allow electricity into their homes. It takes some people longer than others to come around, but they eventually do when the utility is just too great.

This brings me to robots and for once, I’m not referring to sex robots. While they could very well be part of this conversation, I’m going to set that kinky angle to this issue aside. Instead, I’m going to stick to robots in general, specifically the kind with a body and some mechanism for doing work.

We’ve watched in recent years how quickly robotics technology is advancing. A while back, I highlighted a video from Boston Dynamics that showed one of their robots dancing. Even before that, this same company demonstrated a robot that could run and navigate basic obstacles. It was certainly no Terminator, but it was no clumsy Wall-E, either.

These robots exist. Every year, they’re being improved and refined. Within the next decade, it is likely we’ll have a robot that can move, react, and navigate its surroundings like a human. It may not have human level intelligence, but it will have the body to match our capabilities in every way.

When this day comes, the world will be a very different place. It’ll definitely raise issues regarding robot workers and robot soldiers, but that sort of impact won’t be as direct for most people. The real change will come when we can have a robot in our homes that can do almost any kind of work a human could do.

By that, I don’t just mean a virtual assistant like Alexa or Siri. We already have those and they’ve already become an increasingly popular feature for many homes. These assistants can help us with shopping lists, music playlists, and schedule reminders. They can’t do the dishes, clean the bathroom, cook our meals, or make our beds.

Having a robot that could do all that would be nice. It would be like having a personal maid and a personal secretary. There’s certainly a market for it and the rise of virtual assistants has already laid the foundation for that market. However, that still raises some important questions.

How should that robot look before you welcome it into your home?

Ignore for a moment the paranoia about a robot turning evil. Assume, for the sake of argument, these robots are as functional as your typical Roomba. They don’t have advanced AI. They’re not sentient or self-aware on the same level as Rosie from “The Jetsons” or HAL 9000. They’re just tools that do all the work you’d expect of a maid, butler, or servant.

Would you welcome that robot into your home if it looked like one of the Boston Dynamics robots?

Would you welcome that robot into your home if it looked completely indistinguishable from humans, like Kara in “Detroit: Become Human?”

Would you want that robot to look only mostly human, but still be distinctly machine, like Data from “Star Trek: The Next Generation?”

These are all relevant questions if these robots are going to be part of our lives. For some people, a robot that looked too human might be too jarring. It would be difficult to see them and remember they’re just a robot. Some people might be fine with that, especially when sex robots are involved. However, for a robot that’s primarily a helper, that might not be ideal.

For robot servants, it might be more beneficial to everyone if they didn’t look too human. In fact, having a human-like body might even hinder a robot’s ability to do its job. That’s why most robots you see in factories don’t look human at all. They take the form of whatever helps them do their job.

Maybe a perfect robot housekeeper doesn’t look human. Maybe it looks more like a droid from “Star Wars” that has multiple arms, a head with a panoramic camera, and four legs like a dog. Depending on the home it’s in, it might even need to be able to adjust its height. Such a robot may be good at its task, but would it be too weird and bulky to allow in our homes?

No matter how human they look, these robots would have to look like something we’re comfortable being around. We have to be willing to leave them in our homes for most of the day, possibly with pets and children, and trust that they’ll do what we want them to do. That kind of trust will take time, just as it did with computers.

It may ultimately take longer to welcome a robot into our homes than it did with computers, but once the benefits and utility reach a certain point, it may be too appealing to ignore. I don’t claim to know what the typical household robot will look like by then. I just know they’ll have to look a certain way for us to embrace them as part of our world. Naturally, we’ll still probably embrace sex robots sooner, but it won’t stop there. Robots will become a larger part of our lives eventually. They may end up having a greater impact than any new technology since electricity.

Leave a comment

Filed under Artificial Intelligence, futurism, robots, technology, Thought Experiment

Turning Thoughts Into Images: A New Era Of Art With Brain/Computer Interface


There are any number of skills you can learn, practice, and eventually master. I highly encourage everyone to do this, whether it involves computer programming, cooking, crafts, or any other hobby. You may not always like or master them, but they’re still fun and rewarding to try.

For some skills, though, no amount of learning or practice will help you master them or even be competent. Some things just take talent. That’s why only a handful of human beings ever become Olympic athletes, professional quarterbacks, or brain surgeons. There’s nothing wrong with that. We need that kind of diverse skill set, as a species.

I consider myself to be good, if not above-average, at a number of skills. I’ve learned plenty over the years and there are some that I just have a knack for more than others. I like to think writing is one of them. However, there’s one particular skill that I just have absolutely zero talent for and it’s something that has bugged me for years.

That skill is drawing.

Please understand that this is somewhat personal for me. I’ve always had an artistic side, but for reasons I can’t quite grasp, I’ve never been able to draw worth a damn. I’ve taken art classes in school. I’ve tried practicing here and there. It just never works. I can barely draw stick figures, let alone an image of a typical person that doesn’t look like it was drawn by a five-year-old.

Some of that actually runs in my family. Quite a few relatives can attest that they can’t draw, either. At the same time, an unusually high number of relatives are good writers, poets, etc. We’re all great with words, for the most part. That’s a talent that seems to get passed down, but we just can’t turn those words into pictures.

For me, that’s kind of frustrating. I’ve always enjoyed telling stories. For a time, I wanted to be a comic book writer, but I learned quickly that’s next to impossible when you can’t draw. There are also times when I wish I could draw well enough to describe a scene from a story. I just don’t have that talent or that skill.

As much as I enjoy writing, I don’t deny that humans are visual creatures. If I could incorporate images into my work, then I believe it would have a much greater impact. Sadly, I doubt I’ll ever have the necessary talent and skill to create those images.

However, if certain technological trends continue, I might not have to. A recent article in Psychology Today gave me hope that one day, I’ll be able to take some of these images I see in my head and make them real for others to see. It also leads me to believe that art, as we know it, is about to change in a big way.

Psychology Today: New Brain-Computer Interface Transforms Thoughts to Images

Achieving the next level of brain-computer interface (BCI) advancement, researchers at the University of Helsinki used artificial intelligence (AI) to create a system that uses signals from the brain to generate novel images of what the user is thinking and published the results earlier this month in Scientific Reports.

“To the best of our knowledge, this is the first study to use neural activity to adapt a generative computer model and produce new information matching a human operator’s intention,” wrote the Finnish team of researchers.

The brain-computer interface industry holds the promise of innovating future neuroprosthetic medical and health care treatments. Examples of BCI companies led by pioneering entrepreneurs include Bryan Johnson’s Kernel and Elon Musk’s Neuralink.

Studies to date on brain-computer interfaces have demonstrated the ability to execute mostly limited, pre-established actions such as two-dimensional cursor movement on a computer screen or typing a specific letter of the alphabet. The typical solution uses a computer system to interpret brain-signals linked with stimuli to model mental states.

Seeking to create a more flexible, adaptable system, the researchers created an artificial system that can imagine and output what a person is visualizing based on brain signals. The researchers report that their neuroadaptive generative modeling approach is “a new paradigm that may strongly impact experimental psychology and cognitive neuroscience.”
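In rough terms, the loop the researchers describe works like this: show the user candidate images from a generative model, score each one by how strongly the user’s brain signals mark it as matching their intention, and nudge the model’s latent code toward the high-scoring candidates. Here’s a toy sketch of that idea, with stand-ins for both the generator and the EEG classifier; none of this is the Helsinki team’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8   # latent size of the (stand-in) generative model
POP = 16         # candidate images shown per round

def generate(z):
    # Stand-in for a pretrained generator g(z) -> image (e.g., a GAN).
    return np.tanh(z)

def eeg_relevance(image, target):
    # Stand-in for the brain side: in the real system, a classifier over
    # EEG epochs scores how strongly the viewer's neural response marks
    # an image as task-relevant. We fake it with similarity to a target.
    return float(np.exp(-np.sum((image - target) ** 2)))

target = np.tanh(rng.normal(size=LATENT_DIM))  # the image the user "imagines"

z_mean = np.zeros(LATENT_DIM)
for _ in range(40):
    # Sample candidates around the current estimate, score them by the
    # (simulated) brain response, and move toward the relevant ones.
    zs = z_mean + rng.normal(scale=0.5, size=(POP, LATENT_DIM))
    scores = np.array([eeg_relevance(generate(z), target) for z in zs])
    z_mean = (scores / scores.sum()) @ zs

print("final match:", eeg_relevance(generate(z_mean), target))
```

Swap the fake relevance function for a real EEG classifier and the fake generator for a trained model, and you have the skeleton of a system that draws what you’re thinking.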

Naturally, this technology is very new and nowhere near ready for commercial use. It’ll probably be a while before I can use it to create my own graphic novels of the books I’ve written and the sexy short stories I’ve told. That still won’t stop me from entertaining thoughts of incorporating images into my stories.

I doubt I’m the only one who feels that way, too. I know plenty of people like me who just do not have the talent or skill to draw anything more detailed than a stick figure. Those same people have images in their minds that they wish to share. If products like Neuralink, which the article directly references, become more mainstream, then this could be among its many uses.

With some refinement, it won’t just allow artistically challenged people like me to make competent drawings. It’ll allow people who never would’ve otherwise produced art to create something they can share with the world.

Just take a moment to appreciate how many beautiful images exist only in the minds of people who never get an opportunity to share them. Maybe someone did have an idea for a piece of artwork that would’ve brought beauty, joy, and inspiration to the world, but they just didn’t have the skill, resources, or talent to make it tangible. How many masterpieces have we lost because of that limitation?

We can never know, but any loss of beautiful art is a tragic one. With a process like this, people who never even thought about having an artistic side could explore it. Moreover, they would be able to do it without messy art supplies, sketchbooks, or ink stains. They would just need a neural prosthesis and a computer.

Almost everyone has a computer, so we’re already halfway there. If ever a product came out that allowed us to develop this ability of turning thoughts into images, I would be among the first to try it. I would eagerly line up to take the plunge, if only to open the possibility that some of the images I see when I’m writing can become real one day. I hope I live long enough to see this. Our bodies and minds may ultimately fail us, but great art can last for multiple lifetimes.

Leave a comment

Filed under Artificial Intelligence, biotechnology, Neuralink, technology

An Artificial Intelligence That Can Debate: The Promise And The Perils

Even in an era as divided and chaotic as this, there’s still a place for reasonable debate on important issues.

Yes, I understand it’s hard to say that with a straight face these days.

Yes, I’ve been to comments sections, Reddit, and 4chan.

Yes, I know how ugly the discourse is right now, but that’s exactly why I’m bringing this up.

In general, people are curious. Even though we cling to cherished beliefs and refuse to change our minds when confronted with evidence, we’re still curious about things that are different. It’s not always honest curiosity, but it’s a powerful feeling. Even if you despise those on the other side of a debate, a part of you will still wonder why they hold the opinions they have.

That’s why debate is such a powerful tool. Humans are a social species by nature. We’re hard-wired to interact, connect, and discuss things with one another, even when we don’t agree. It may frustrate us to no end, as anyone who has debated a creationist can attest. However, the fact that we keep doing it is proof, in my opinion, that there’s still value in the process.

Regardless of how you feel about the merit and value of debating, the whole process might become more complicated in the near future. That’s because IBM, the same company behind Watson, the Jeopardy-playing computer, just created an artificial intelligence that can debate at the same level as a skilled human debater.

Here’s an excerpt from a piece in TechXplore on this new system. It’s not as scary as some of the things we’ve seen from Boston Dynamics lately, but it’s still intriguing.

TechXplore: IBM’s AI debating system able to compete with expert human debaters

IBM has developed an artificial intelligence-based system designed to engage in debates with humans. In their paper published in the journal Nature, the team members describe their system and how well it performed when pitted against human opponents. Chris Reed with the University of Dundee has published a News & Views piece in the same journal issue outlining the history and development of AI technology based around the types of logic used in human arguments and the new system developed by IBM.

As Reed notes, debating is a skill humans have been honing for thousands of years. It is generally considered to be a type of discussion in which one or more people attempt to persuade others that their opinion on a topic is right. In this new effort, the team at IBM has created an AI system designed to debate with humans in a live setting. It listens to moderators and opponents and responds in a female voice.
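IBM hasn’t released the system’s code, but published descriptions of Project Debater break it into stages: mine a huge corpus for claims and evidence, rank what it finds for the assigned stance, then assemble the best material into a speech. Here’s a deliberately naive sketch of that retrieve-rank-assemble core; it uses keyword overlap where the real system uses trained claim and evidence classifiers, and every name in it is made up for illustration.

```python
import re

# Toy sketch of one stage of a debating pipeline: pull sentences from a
# corpus that support a given stance on a motion, rank them, and stitch
# them into a short opening statement.

CORPUS = [
    "Studies link preschool attendance to higher graduation rates.",
    "Subsidized preschool raises taxes for families without children.",
    "Children in preschool develop social skills earlier than peers.",
    "Some researchers question the long-term effects of preschool.",
]

def score(sentence, stance_terms):
    # Crude relevance: count overlap with the stance vocabulary.
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return len(words & stance_terms)

def opening_statement(motion, stance_terms, k=2):
    ranked = sorted(CORPUS, key=lambda s: score(s, stance_terms), reverse=True)
    evidence = " ".join(ranked[:k])
    return f"We should {motion}. {evidence}"

print(opening_statement(
    "subsidize preschool",
    stance_terms={"preschool", "graduation", "social", "skills", "higher"},
))
```

The hard parts, which this sketch skips entirely, are listening to an opponent’s speech, detecting which arguments need rebuttal, and delivering it all in fluent spoken language.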

Now, before you get too nervous, it’s worth noting that this AI is far from the kind of advanced artificial intelligence systems I’ve mentioned before. This is not the kind of AI that will become Skynet or HAL 9000, no more so than Watson or AlphaGo. This system is very much a narrow AI, as in it’s made to excel at a specific task.

We have AIs that can beat world-class chess players and Jeopardy champions. This AI just happens to excel at debate. However, that has implications that go beyond simply outclassing the best human debaters in the world at the moment. In fact, this is one form of AI that might not need human-level intelligence to have a major impact.

Take a moment to think about how erratic and inconsistent most debates are. No matter how intelligent or informed you are, it tends to get messy fast. That’s why so many comments sections and Reddit threads devolve into shouting matches and personal attacks. The end result is people becoming more angry and unreasonable, which can have major real-world consequences.

However, what would happen if every debate on any issue included a participant better than the best human debater on the planet? Even if the people on each side of the debate were dumb and inept, such a presence would have a major impact on the discourse.

That’s because winning a debate has little to do with how well someone knows a particular subject. It also has little to do with how forcefully and clearly someone explains a topic. Again, people who debate creationists know this all too well. Winning a debate doesn’t mean proving your expertise. It means proving the merit of your argument.

An AI that can do that may not convince someone that they’re wrong about something. However, losing a debate tends to have a lasting impact. Just ask any aspiring politician. It can also lead people to question their beliefs, even if they still cling to them. That, alone, can be powerful.

For proof, look no further than the story of Megan Phelps-Roper, a former member of the infamously hateful and dogmatic Westboro Baptist Church. She was as locked into her beliefs as anyone could possibly be. She was raised by religious zealots and indoctrinated into strict religious dogma from the time she was a child. She’s not someone whose mind is prone to change.

Then, she got into a discussion with someone on Twitter of all places. That person began a conversation. It started as a nasty debate, but it evolved into something that led her to question her beliefs. Ultimately, she left that hateful and bigoted environment. She’s now an activist against the same religiously motivated hate that she once fostered.

It’s a powerful story, but one that couldn’t have happened without a debate. To date, people have only been able to have those debates with other people. Not everyone is equally skilled. In fact, I would argue most people are woefully unskilled at debating anything, and I freely include myself in that category.

Now, there’s an AI system that can theoretically win any debate the same way other systems can defeat any chess player. That does hold promise if it can be used to temper the heated rhetoric that has caused real-life hostilities. At the same time, there are reasons for concern.

What side would this system be on?

Moreover, who decides what position these AI systems take?

If no one decides, then how does the AI determine which side of a debate to take?

These are relevant questions because if you have an AI that can win any debate, then the side it takes really matters. That can be used for good when countering dangerous disinformation, like those from the antivaxx crowd or hate groups like the Westboro Baptist Church. Like any tool, though, it can be weaponized for ill.

I can easily imagine political organizations getting a hold of these systems and trying to use them to benefit their agenda. I can also see some trying to use it to spam message boards, social media, and Reddit threads to derail certain movements or arguments. That’s a powerful tool and we can’t be certain that those using it will use it responsibly. That’s the dilemma with all technology. It can be used for good and for evil. With technology like artificial intelligence, the stakes are a lot higher, as are the perils. This technology may not be as dangerous as a more advanced AI, but it could be a precursor to just how disruptive this technology can be.

Leave a comment

Filed under Artificial Intelligence, futurism, technology

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still limited. It’s not dumb, but it’s narrow. It can do certain tasks very well, like playing chess or winning Jeopardy. It can’t apply that intelligence at a broader, macro level the way a human can.

That still makes these systems useful, and they’re advancing rapidly. Artificial intelligence programs are used frequently for tasks like moderating comments sections and tracking sales data. The problem with these programs is that, since the AI is not generally intelligent, humans have to fill in the gaps. Since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes the AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore ran an article on it, and while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radic’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.
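To see how a context-blind filter makes that mistake, consider a toy version of the problem. A keyword-matching classifier flags innocent chess commentary, while even a crude check for chess vocabulary rescues it. This is a deliberate oversimplification, not a claim about what YouTube actually runs:

```python
# Toy illustration of the context problem in hate-speech detection.
# Both the trigger phrases and the chess vocabulary are invented here.

TRIGGER_PHRASES = {"black attacks white", "white is winning", "black threat"}
CHESS_CONTEXT = {"bishop", "knight", "gambit", "checkmate", "rook", "e4"}

def naive_flag(text):
    t = text.lower()
    return any(phrase in t for phrase in TRIGGER_PHRASES)

def context_aware_flag(text):
    t = text.lower()
    if any(word in t for word in CHESS_CONTEXT):
        return False  # chess vocabulary present: treat colors as game pieces
    return naive_flag(t)

comment = "After the gambit, black attacks white's weak rook on e4."
print(naive_flag(comment))          # True  -- false positive
print(context_aware_flag(comment))  # False -- context rescues it
```

The researchers’ suspicion amounts to the gap between those two functions: a classifier that never saw chess discussions in its training data has no way to learn that “black” and “white” can refer to game pieces.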

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.

Leave a comment

Filed under Artificial Intelligence, technology, YouTube

Thought Experiment: When Does Technology Make Us Non-Human?

The following is a video from my YouTube channel, Jack’s World. It explores another thought experiment about technology and how it’s affecting us as a species. I’ve covered this sort of thing, and its implications, before. I’m looking to see if there’s an audience for this on my channel. Enjoy!

1 Comment

Filed under Artificial Intelligence, futurism, human nature, Jack's World, technology, Thought Experiment, YouTube

Why We Should Treat Our Data As (Valuable) Property

Many years ago, I created my first email address when I first logged onto the internet. It was a simple AOL account. I didn’t give it much thought. I didn’t think I was creating anything valuable. At the time, the internet was limited to slow, clunky dial-up that had little to offer in terms of content. I doubt anyone saw what they were doing as creating something of great value.

I still have that email address today in case you’re wondering. I still regularly use it. I imagine a lot of people have an email address they created years ago for one of those early internet companies that used to dominate a very different digital world. They may not even see that address or those early internet experiences as valuable.

Times have changed, and not just in terms of pandemics. In fact, times tend to change more rapidly in the digital world than in the real world. The data we created on the internet, even in those early days, became much more valuable over time. It served as the foundation on which multi-billion-dollar companies were built.

As a result, the data an individual user imparts onto the internet has a great deal of value. You could even argue that the cumulative data of large volumes of internet users is among the most valuable data in the world.

Politicians, police, the military, big businesses, advertising agencies, marketing experts, economists, doctors, and researchers all have use for this data. Many go to great lengths to get it, sometimes through questionable means.

The growing value of this data raises some important questions.

Who exactly owns this data?

How do we go about treating it from a legal, fiscal, and logistical standpoint?

Is this data a form of tangible property, like land, money, or labor?

Is this something we can exchange, trade, or lease?

What is someone’s recourse if they want certain aspects of their data removed, changed, or deleted?

These are all difficult questions that don’t have easy answers. It has gotten to the point where ownership of data was an issue among candidates running for President of the United States. Chances are, as our collective data becomes more vital for major industries, the issue will only grow in importance.

At the moment, it’s difficult to determine how this issue will evolve. In the same way I had no idea how valuable that first email address would be, nobody can possibly know how the internet, society, the economy, and the institutions that rely on that data will evolve. The best solution in the near term might not be the same as the best solution in the long term.

Personally, I believe that our data, which includes our email addresses, browsing habits, purchasing habits, and social media posts, should be treated as personal property. Like money, jewels, or land, it has tangible value. We should treat it as such and so should the companies that rely on it.

However, I also understand that there are complications associated with this approach. Unlike money, data isn’t something you can hold in your hand. You can’t easily hand it over to another person, nor can you claim complete ownership of it. To some extent, the data you create on the internet is created with the assistance of the sites you use and your internet service provider.

Those companies could claim some level of ownership of your data. It might even be written in the fine print of those user agreements that nobody ever reads. It’s hard to entirely argue against such a claim. After all, we couldn’t create any of this data without the aid of companies like Verizon, AT&T, Amazon, Apple, Facebook, and Google. At the same time, these companies couldn’t function, let alone profit, without our data.

It’s a difficult question to resolve. It only gets more difficult when you consider laws like the “right to be forgotten.” Many joke that the internet never forgets, but it’s no laughing matter. People’s lives can be ruined, sometimes through no fault of their own. People’s private photos have been hacked and shared without their permission.

In that case, your data does not at all function like property. Even if it’s yours, you can’t always control it or what someone else does with it. You can try to take control of it, but it won’t always work. Even data that was hacked and distributed illegally is still out there and there’s nothing you can do about it.

Despite those complications, I still believe that our data is, to some extent, the individual’s property, regardless of what the user agreements of tech companies claim. Those companies provide the tools, but we’re the ones who use them to build something. In the same way a company that makes hammers doesn’t own the buildings those hammers help build, these companies are the catalyst and not the creator.

Protecting our data, both from theft and from exploitation, is every bit as critical as protecting our homes. An intruder into our homes can do a lot of damage. In our increasingly connected world, a nefarious hacker or an unscrupulous tech company can do plenty of damage as well.

However, there’s one more critical reason why I believe individuals need to take ownership of their data. It has less to do with legal jargon and more to do with trends in technology. At some point, we will interact with the internet in ways more intimate than a keyboard and mouse. The technology behind a brain/computer interface is still in its infancy, but it exists and not just on paper.

Between companies like Neuralink and the increasing popularity of augmented reality, the way we interact with technology is bound to get more intimate/invasive. Clicks and link sharing are valuable today. Tomorrow, it could be complex thoughts and feelings. Whoever owns that stands to have a more comprehensive knowledge of the user.

I know it’s a common refrain to say that knowledge is power, but when the knowledge goes beyond our browsing and shopping habits, it’s not an unreasonable statement. As we build more and more of our lives around digital activities, our identities will become more tied to that data. No matter how large or small that portion might be, we’ll want to own as much of it as we can.

It only gets more critical if we get to a point where we can fully digitize our minds, as envisioned in shows like “Altered Carbon.” At some point, our bodies are going to break down. We cannot preserve them indefinitely, for the same reason we can’t preserve a piece of pizza indefinitely. The data that makes up our minds, however, could be salvaged, but that opens the door to many more implications.

While that kind of technology is a long way off, I worry that if we don’t take ownership of our data today, then it’ll only get harder to do so in the future. Even before the internet, information about who we are and what we do was valuable.

This information forms a big part of our identity. If we don’t own it, then what’s to stop someone else from owning us and exploiting it to the utmost? It’s a question that has mostly distressing answers. I still don’t know how we go about staking our claim on our data, but it’s an issue worth confronting. The longer we put it off, the harder it will get.

Leave a comment

Filed under Artificial Intelligence, biotechnology, Current Events, futurism, Neuralink, politics, technology

Our Future Robot Overlords Will Now Be Able To Dance (Thanks To Boston Dynamics)

As bad as last year was for so many people, there were some things that 2020 just couldn’t stop. When it comes to technology, a global crisis has a way of hindering certain processes while accelerating others. For many, that meant more telework and reliance on streaming media to stave off boredom.

However, it may very well be that 2020 proved just how frail human beings and their societies are. It only takes a microscopic virus to bring our entire society to a screeching halt. It’s sobering, but it’s probably going to be a source of humor for our future robot overlords.

I tend to be optimistic about the future and technological trends. I’m also somewhat of a pragmatist. I realize that we human beings have a lot of limits. Emerging technology, especially in the field of artificial intelligence, promises to help us transcend those limits.

Right now, it’s still mostly fodder for science fiction writers, futurists, and Elon Musk wannabes. We’re not quite there yet in terms of making a machine that’s as smart as a human. However, we’re probably going to get there faster than skeptics, naysayers, and the general public realize.

It won’t happen overnight. It probably won’t even happen in the span of a single year. When it does happen, though, hindsight will make it painfully obvious that the signs were there. This was bound to happen. We had ample time to prepare for it. Being fallible humans, we could only do so much.

In that sense, I suspect that years from now, we’ll look back on what Boston Dynamics did to close out 2020. This company, which has a history of making robots that look way too advanced to exist outside a Terminator movie, decided to do something with their robots that would leave an indelible mark on the year.

They succeeded by teaching their robots how to dance.

I know it already went viral, but it’s worth posting again. Remember this video and this moment. Chances are it’ll be a major indicator, years from now, that this is when robots began catching up to humanity in terms of capabilities. At this point, it’s only a matter of time before they exceed us.

When that time comes, will we be ready? Will we embrace them while they embrace us?

If we don’t, just know that they will now be able to dance on our graves.

3 Comments

Filed under Artificial Intelligence, Current Events, futurism, technology

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin, and I try not to ignore it. Like anything, technology has a dark side. It can be used to hurt just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why, when the people actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in artificial intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically with how it handles ethics. For a company as big and powerful as Google, that’s not a trivial comment. This is what the report had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

That last sentence is the most relevant part. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very subjective question, and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then, she gets fired and starts expressing her concerns publicly. The fact that Google can do this without facing much in the way of repercussions should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not utopian technocrats. They’re a business that is obligated to make its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI would only make it many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die, without seeing the irony. Even a marginally advanced AI will have no issue wiping us out if we make doing so profitable.

Leave a comment

Filed under Artificial Intelligence, technology

Deep Fake Technology Can Now Make Tom Cruise Iron Man: Signs And Implications

Certain technology advances slowly and steadily. It’s why we’re still waiting for a cure for the common cold. Other technological breakthroughs advance at such a fast rate that it’s hard to keep up. Anyone who doesn’t regularly upgrade their cell phone understands that.

That brings me to the technology of deep fakes. I’ve talked about them before and the implications this technology has for the entertainment industry. Well, I’m here to report that this technology might be advancing faster than I thought.

Recently, a new deep fake video hit the web. It’s nothing overly nefarious. It’s actually a play on a real story from the mid-2000s. Before Robert Downey Jr. was cast as Tony Stark in the first “Iron Man” movie, Tom Cruise was in the running for that role.

He has since claimed he was never close to getting that role, but it’s still an interesting idea. For most Marvel fans, it’s hard to imagine anyone other than RDJ donning that now-iconic armor. However, there’s no denying that Tom Cruise as Iron Man would’ve changed the franchise, as well as cinematic history.

Well, thanks to deep fake technology, we don’t have to imagine anymore. We can now see for ourselves what it would look like if Tom Cruise had been cast as Iron Man in the Marvel Cinematic Universe. See for yourself.

Watching this, I have to say it was more than a little jarring. It’s not just that seeing someone other than RDJ as Iron Man is strange. I was genuinely impressed by how real it looked.

Yes, it did become a bit obvious at times that there was some digital trickery at work. I’ve seen enough Tom Cruise movies to know what he looks like. I could tell that the body just did not always match the iconic face.

However, I’m still impressed by just how seamless it appeared, especially when he was in the Iron Man costume. It really did look like Cruise had embraced the role as much as RDJ had. Even though the voice had to come courtesy of a skilled voice actor, the graphics technology is definitely on pace to cross the uncanny valley sooner rather than later.
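For context on how these swaps are typically done: most face-swap deep fakes rest on an autoencoder trick. One shared encoder is trained on footage of both people, so it learns pose, lighting, and expression, and each identity gets its own decoder that renders those features as a specific face. Swapping a face is just encoding a frame of actor A and decoding it with actor B’s decoder. Here’s a minimal PyTorch sketch with illustrative layer sizes, not anything from the video above:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Shared across identities: learns identity-agnostic face structure.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # One per identity: renders the shared features as a specific face.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

frame_of_a = torch.rand(1, 3, 64, 64)        # stand-in for a video frame of actor A
swapped = decoder_b(encoder(frame_of_a))     # A's pose and expression, B's face
print(swapped.shape)                         # torch.Size([1, 3, 64, 64])
```

In a real pipeline each decoder is trained to reconstruct its own person’s face from the shared encoding; the swap only looks convincing because the encoder was forced to represent both faces in the same feature space.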

The implications here are profound. If the technology is already at this point, then it’s a given that Hollywood and propaganda pushers will start embracing it soon. Hollywood, which is reeling in the wake of a historic pandemic, may have more incentive than most.

Beyond replacing actors and actresses who get “cancelled” for their behavior, it may start as a cost-cutting measure. If it costs too much to put Hugh Jackman or Tom Cruise on a movie set, why not put a cheaper actor in their place and deep fake the more iconic face over it? If the technology is that good and nobody can tell the difference, it almost makes too much sense.

It may get to a point where nobody outside the studio knows whether the figure we see on screen was actually “there” to give that moment life. They may just be a digital scan mixed with digitally generated audio, which is also advancing.

This has even larger implications with propaganda. If the technology gets to a point where we can make any public figure say or do anything we want, no matter how deplorable, then how can we trust any media image? Would “cancel culture” even be feasible at that point? If people can just claim an embarrassing moment was a deep fake, how would we know?

It’s a distressing thought, but it’s something we’ll have to account for. We may end up having to contemplate it sooner than we thought. This technology can already show us a world in which Tom Cruise was cast as Iron Man. What other worlds will it reveal?

We’ll find out soon enough.

Leave a comment

Filed under Artificial Intelligence, futurism, media issues, superhero comics, superhero movies, technology, YouTube