Tag Archives: Artificial Intelligence

Turning Thoughts Into Images: A New Era Of Art With Brain/Computer Interface

BCI Technology: How does a Brain-Computer Interface Work?

There are any number of skills you can learn, practice, and eventually master. I highly encourage everyone to do this, whether it involves computer programming, cooking, crafts, or any other hobby. You may not always like or master them, but they’re still fun and rewarding to try.

For some skills, though, no amount of learning or practice will help you master them or even be competent. Some things just take talent. That’s why only a handful of human beings ever become Olympic athletes, professional quarterbacks, or brain surgeons. There’s nothing wrong with that. We need that kind of diverse skill set, as a species.

I consider myself to be good, if not above-average, at a number of skills. I’ve learned plenty over the years and there are some that I just have a knack for more than others. I like to think writing is one of them. However, there’s one particular skill that I just have absolutely zero talent for and it’s something that has bugged me for years.

That skill is drawing.

Please understand that this is somewhat personal for me. I’ve always had an artistic side, but for reasons I can’t quite grasp, I’ve never been able to draw worth a damn. I’ve taken art classes in school. I’ve tried practicing here and there. It just never works. I can barely draw stick figures, let alone an image of a typical person that doesn’t look like it was drawn by a five-year-old.

Some of that actually runs in my family. Quite a few relatives can attest that they can’t draw, either. At the same time, an unusually high number of relatives are good writers, poets, etc. We’re all great with words, for the most part. That’s a talent that seems to get passed down, but we just can’t turn those words into pictures.

For me, that’s kind of frustrating. I’ve always enjoyed telling stories. For a time, I wanted to be a comic book writer, but I learned quickly that’s next to impossible when you can’t draw. There are also times when I wish I could draw well enough to describe a scene from a story. I just don’t have that talent or that skill.

As much as I enjoy writing, I don’t deny that humans are visual creatures. If I could incorporate images into my work, then I believe it would have a much greater impact. Sadly, I doubt I’ll ever have the necessary talent and skill to create those images.

However, if certain technological trends continue, I might not have to. A recent article in Psychology Today gave me hope that one day, I’ll be able to take some of these images I see in my head and make them real for others to see. It also leads me to believe that art, as we know it, is about to change in a big way.

Psychology Today: New Brain-Computer Interface Transforms Thoughts to Images

Achieving the next level of brain-computer interface (BCI) advancement, researchers at the University of Helsinki used artificial intelligence (AI) to create a system that uses signals from the brain to generate novel images of what the user is thinking and published the results earlier this month in Scientific Reports.

“To the best of our knowledge, this is the first study to use neural activity to adapt a generative computer model and produce new information matching a human operator’s intention,” wrote the Finnish team of researchers.

The brain-computer interface industry holds the promise of innovating future neuroprosthetic medical and health care treatments. Examples of BCI companies led by pioneering entrepreneurs include Bryan Johnson’s Kernel and Elon Musk’s Neuralink.

Studies to date on brain-computer interfaces have demonstrated the ability to execute mostly limited, pre-established actions such as two-dimensional cursor movement on a computer screen or typing a specific letter of the alphabet. The typical solution uses a computer system to interpret brain-signals linked with stimuli to model mental states.

Seeking to create a more flexible, adaptable system, the researchers created an artificial system that can imagine and output what a person is visualizing based on brain signals. The researchers report that their neuroadaptive generative modeling approach is “a new paradigm that may strongly impact experimental psychology and cognitive neuroscience.”
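To make the idea of neuroadaptive generative modeling more concrete, here’s a toy sketch of that closed loop in Python. Everything in it is invented for illustration: the “brain response” function is a stand-in for the decoded EEG relevance signal the researchers used, and the latent vectors stand in for a generative model’s inputs. This is not the Helsinki team’s actual method, just the shape of the feedback loop.

```python
import random

def simulated_brain_response(image_vec, target_vec):
    # Stand-in for the decoded EEG relevance signal. In the real study,
    # event-related brain activity indicated which presented images matched
    # the user's intention. Here, "relevance" simply rises as an image gets
    # closer to the target the user is imagining.
    dist_sq = sum((a - b) ** 2 for a, b in zip(image_vec, target_vec))
    return 1.0 / (1.0 + dist_sq)

def neuroadaptive_search(target, dims=4, candidates=16, rounds=40, seed=0):
    # Toy closed loop: propose latent vectors (stand-ins for a generative
    # model's inputs), score each by the simulated brain response, then move
    # the estimate toward the high-scoring proposals.
    rng = random.Random(seed)
    estimate = [rng.uniform(-1, 1) for _ in range(dims)]
    for r in range(rounds):
        spread = 0.93 ** r  # explore broadly at first, then refine
        pool = [[e + rng.gauss(0, spread) for e in estimate]
                for _ in range(candidates)]
        scores = [simulated_brain_response(c, target) for c in pool]
        total = sum(scores)
        estimate = [sum(s * c[i] for s, c in zip(scores, pool)) / total
                    for i in range(dims)]
    return estimate

# The image the user is "thinking of", as a point in latent space.
target = [0.5, -0.3, 0.8, 0.1]
found = neuroadaptive_search(target)
error = sum((a - b) ** 2 for a, b in zip(found, target)) ** 0.5
print(round(error, 3))
```

The key design idea is the loop itself: the system proposes images, the brain’s involuntary reaction scores them, and the next round of proposals drifts toward whatever the user is imagining. No drawing skill is ever required.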

Naturally, this technology is very new and nowhere near ready for commercial use. It’ll probably be a while before I could use it to create my own graphic novels of the books I’ve written and the sexy short stories I’ve told. That still won’t stop me from entertaining thoughts of incorporating images into my stories.

I doubt I’m the only one who feels that way, too. I know plenty of people like me who just do not have the talent or skill to draw anything more detailed than a stick figure. Those same people have images in their minds that they wish to share. If products like Neuralink, which the article directly references, become more mainstream, then this could be among its many uses.

With some refinement, it won’t just allow artistically challenged people like me to make competent drawings. It’ll allow people who never would’ve otherwise produced art to create something that they can share with the world.

Just take a moment to appreciate how many beautiful images exist only in the minds of people who never get an opportunity to share them. Maybe someone did have an idea for a piece of artwork that would’ve brought beauty, joy, and inspiration to the world, but they just didn’t have the skill, resources, or talent to make it tangible. How many masterpieces have we lost because of that limitation?

We can never know, but any loss of beautiful art is a tragic one. With a process like this, people who never even thought about having an artistic side could explore it. Moreover, they would be able to do it without messy art supplies, sketchbooks, or ink stains. They would just need a neural prosthesis and a computer.

Almost everyone has a computer, so we’re already halfway there. If ever a product came out that allowed us to develop this ability of turning thoughts into images, I would be among the first to try it. I would eagerly line up to take the plunge, if only to open the possibility that some of the images I see when I’m writing can become real one day. I hope I live long enough to see this. Our bodies and minds may ultimately fail us, but great art can last for multiple lifetimes.

Leave a comment

Filed under Artificial Intelligence, biotechnology, Neuralink, technology

An Artificial Intelligence That Can Debate: The Promise And The Perils

Even in an era as divided and chaotic as this, there’s still a place for reasonable debate on important issues.

Yes, I understand it’s hard to say that with a straight face these days.

Yes, I’ve been to comments sections, Reddit, and 4chan.

Yes, I know how ugly the discourse is right now, but that’s exactly why I’m bringing this up.

In general, people are curious. Even when we cling to cherished beliefs and refuse to change our minds when confronted with evidence, we’re still curious about things that are different. It’s not always honest, but it’s a powerful feeling. Even if you despise those on the other side of a debate, a part of you will still wonder why they hold the opinions they have.

That’s why debate is such a powerful tool. Humans are a social species by nature. We’re hard-wired to interact, connect, and discuss things with one another, even when we don’t agree. It may frustrate us to no end, as anyone who has debated a creationist can attest. However, the fact we keep doing it is proof, in my opinion, that there’s still value in the process.

Regardless of how you feel about the merit and value of debating, the whole process might become more complicated in the near future. That’s because IBM, the same company behind Watson, the Jeopardy-playing computer, has created an artificial intelligence that can debate at the same level as a skilled human debater.

Here’s an excerpt from a piece in TechXplore on this new system. It’s not as scary as some of the things we’ve seen from Boston Dynamics lately, but it’s still intriguing.

TechXplore: IBM’s AI debating system able to compete with expert human debaters

IBM has developed an artificial intelligence-based system designed to engage in debates with humans. In their paper published in the journal Nature, the team members describe their system and how well it performed when pitted against human opponents. Chris Reed with the University of Dundee has published a News & Views piece in the same journal issue outlining the history and development of AI technology based around the types of logic used in human arguments and the new system developed by IBM.

As Reed notes, debating is a skill humans have been honing for thousands of years. It is generally considered to be a type of discussion in which one or more people attempt to persuade others that their opinion on a topic is right. In this new effort, the team at IBM has created an AI system designed to debate with humans in a live setting. It listens to moderators and opponents and responds in a female voice.

Now, before you get too nervous, it’s worth noting that this AI is far from the kind of advanced artificial intelligence systems I’ve mentioned before. This is not the kind of AI that will become Skynet or HAL 9000, any more than Watson or AlphaGo will. This system is very much a narrow AI, as in it’s made to excel at a specific task.

We have AIs that can beat world-class chess players and Jeopardy champions. This AI just happens to excel at debate. However, that has implications that go beyond simply outclassing the best human debaters in the world at the moment. In fact, this is one form of AI that might not need human-level intelligence to have a major impact.

Take a moment to think about how erratic and inconsistent most debates are. No matter how intelligent or informed you are, it tends to get messy fast. That’s why so many comments sections and Reddit threads devolve into shouting matches and personal attacks. The end result is people becoming more angry and unreasonable, which can have major real-world consequences.

However, what would happen if every debate on any issue included someone who was better than the best debater on the planet? Even if the two people on each side of the debate were dumb and inept, such a presence would have a major impact on the discourse.

That’s because winning a debate has little to do with how well someone knows a particular subject. It also has little to do with how forcefully and clearly someone explains a topic. Again, people who debate creationists know this all too well. Winning a debate doesn’t mean proving your expertise. It means proving the merit of your argument.

An AI that can do that may not convince someone that they’re wrong about something. However, losing a debate tends to have a lasting impact. Just ask any aspiring politician. It can also lead people to question their beliefs, even if they still cling to them. That, alone, can be powerful.

For proof, look no further than the story of Megan Phelps-Roper, a former member of the infamously hateful and dogmatic Westboro Baptist Church. She was as locked into her beliefs as anyone could possibly be. She was raised by religious zealots and indoctrinated into strict religious dogma from the time she was a child. She was not someone whose mind was prone to change.

Then, she got into a conversation with someone on Twitter, of all places. It started as a nasty debate, but it evolved into something that led her to question her beliefs. Ultimately, she left that hateful and bigoted environment. She’s now an activist against the same religiously motivated hate that she once fostered.

It’s a powerful story, but one that couldn’t have happened without a debate. To date, people have only been able to have those debates with other people, and not everyone is equally skilled. In fact, I would argue most people are woefully unskilled at debating anything, and I freely include myself in that category.

Now, there’s an AI system that can theoretically win any debate the same way other systems can defeat any chess player. That does hold promise if it can be used to temper the heated rhetoric that has caused real-life hostilities. At the same time, there are reasons for concern.

What side would this system be on?

Moreover, who decides what position these AI systems take?

If no one decides, then how does the AI determine which side of a debate it takes?

These are relevant questions because if you have an AI that can win any debate, then the side it takes really matters. That can be used for good when countering dangerous disinformation, like that spread by the anti-vaxx crowd or hate groups like the Westboro Baptist Church. Like any tool, though, it can be weaponized for ill.

I can easily imagine political organizations getting a hold of these systems and trying to use them to benefit their agenda. I can also see some trying to use them to spam message boards, social media, and Reddit threads to derail certain movements or arguments. That’s a powerful tool, and we can’t be certain that those wielding it will use it responsibly.

That’s the dilemma with all technology. It can be used for good and for evil. With technology like artificial intelligence, the stakes are a lot higher, as are the perils. This system may not be as dangerous as a more advanced AI, but it could be a precursor to just how disruptive this technology can be.

Leave a comment

Filed under Artificial Intelligence, futurism, technology

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still limited. It’s not dumb, but it can only do so much. It can do certain tasks very well, like play chess or win Jeopardy. It can’t apply that intelligence at a broader level the way a human can.

That still makes these systems useful, and they’re advancing rapidly. Artificial intelligence programs are used frequently for tasks like moderating comments sections and tracking sales data. The problem with these programs is that, since the AI is not generally intelligent, humans have to fill in the gaps. Since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes the AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore did an article on it, and while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radić’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.
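As a rough illustration of how a narrow, context-blind system produces a chess mix-up like the one above, consider this toy moderation filter. The phrase lists and function names are all invented for this example; real moderation systems use trained classifiers rather than keyword lists, but the failure mode is similar: they score the words without grasping the topic.

```python
# Toy content filter showing why context-blind moderation misfires on chess
# talk. All phrase lists here are made up for illustration purposes only.

FLAGGED_PHRASES = ("white destroys black", "threat", "attack")

CHESS_VOCABULARY = ("chess", "gambit", "bishop", "rook", "opening",
                    "endgame", "grandmaster")

def naive_flag(comment):
    # Flags anything containing a "dangerous" phrase, with no notion of
    # what the conversation is actually about.
    text = comment.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

def context_aware_flag(comment):
    # One crude mitigation: suppress the flag when the surrounding
    # vocabulary signals a benign domain like chess.
    text = comment.lower()
    if any(word in text for word in CHESS_VOCABULARY):
        return False
    return naive_flag(comment)

chess_comment = "In the Queen's Gambit, white's attack on black is brutal."
print(naive_flag(chess_comment))          # flagged on 'attack'; context ignored
print(context_aware_flag(chess_comment))  # 'gambit' signals a chess discussion
```

Even the “context-aware” version here is crude. It just swaps one blind list for another, which hints at why getting this right at YouTube’s scale is genuinely hard.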

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble, and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.

Leave a comment

Filed under Artificial Intelligence, technology, YouTube

Our Future Robot Overlords Will Now Be Able To Dance (Thanks To Boston Dynamics)

As bad as last year was for so many people, there were some things that 2020 just couldn’t stop. When it comes to technology, a global crisis has a way of hindering certain processes while accelerating others. For many, that meant more telework and reliance on streaming media to stave off boredom.

However, it may very well be that 2020 proved just how frail human beings and their societies are. It only takes a microscopic virus to bring our entire society to a screeching halt. It’s sobering, but it’s probably going to be a source of humor for our future robot overlords.

I tend to be optimistic about the future and technological trends. I’m also somewhat of a pragmatist. I realize that we human beings have a lot of limits. Emerging technology, especially in the field of artificial intelligence, promises to help us transcend those limits.

Right now, it’s still mostly fodder for science fiction writers, futurists, and Elon Musk wannabes. We’re not quite there yet in terms of making a machine that’s as smart as a human. However, we’re probably going to get there faster than skeptics, naysayers, and the general public realize.

It won’t happen overnight. It probably won’t even happen in the span of a single year. When it does happen, though, hindsight will make it painfully obvious that the signs were there. This was bound to happen. We had ample time to prepare for it. Being fallible humans, we could only do so much.

In that sense, I suspect that years from now, we’ll look back on what Boston Dynamics did to close out 2020. This company, which has a history of making robots that look way too advanced to exist outside a Terminator movie, decided to do something with their robots that would leave an indelible mark on the year.

They succeeded by teaching their robots how to dance.

I know it already went viral, but it’s worth posting again. Remember this video and this moment. Chances are it’ll be a major indicator years from now that this is when robots began catching up to humanity in terms of capabilities. At this point, it’s only a matter of time before they exceed us.

When that time comes, will we be ready? Will we embrace them while they embrace us?

If they don’t, just know that they will now be able to dance on our graves.

1 Comment

Filed under Artificial Intelligence, Current Events, futurism, technology

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin, and I try not to ignore it. Like anything, technology has a dark side. It can be used to hurt just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically about how ethically it operates. For a company as big and powerful as Google, that’s not a trivial comment. This is what she had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

I’d highlight that last sentence because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then, she gets fired and starts expressing concerns publicly. The fact that this happened, and that Google isn’t facing much in terms of repercussions, should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not some utopian technocrats. They’re a business that is obligated to make its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI would only make it many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.

Leave a comment

Filed under Artificial Intelligence, technology

Deep Fake Technology Can Now Make Tom Cruise Iron Man: Signs And Implications

Certain technology advances slowly and steadily. It’s why we’re still waiting for a cure for the common cold. Other technological breakthroughs advance at such a fast rate that it’s hard to keep up. Anyone who doesn’t regularly upgrade their cell phone understands that.

That brings me to the technology of deep fakes. I’ve talked about them before and the implications this technology has for the entertainment industry. Well, I’m here to report that this technology might be advancing faster than I thought.

Recently, a new deep fake video hit the web. It’s nothing overly nefarious. It’s actually a play on a real story from the mid-2000s. Before Robert Downey Jr. was cast as Tony Stark in the first “Iron Man” movie, Tom Cruise was in the running for that role.

He has since claimed he was never close to getting that role, but it’s still an interesting idea. For most Marvel fans, it’s hard to imagine anyone other than RDJ donning that now-iconic armor. However, there’s no denying that Tom Cruise being Iron Man would’ve changed a franchise, as well as cinematic history.

Well, thanks to deep fake technology, we don’t have to imagine anymore. We can now see for ourselves what it would look like if Tom Cruise had been cast as Iron Man in the Marvel Cinematic Universe. See for yourself.

Watching this, I have to say it was more than a little jarring. It’s not just that seeing someone other than RDJ as Iron Man is strange. I was genuinely impressed by how real it looked.

Yes, it did become a bit obvious at times that there was some digital trickery at work. I’ve seen enough Tom Cruise movies to know what he looks like. I could tell that the body just did not match the iconic face at times.

However, I’m still impressed at just how seamless it appeared, especially when he was in the Iron Man costume. It really did look like Cruise had embraced the role as much as RDJ had. Even though the voice had to come courtesy of a skilled voice actor, the graphics technology is definitely on pace to cross the uncanny valley sooner rather than later.

The implications here are profound. If the technology is already at this point, then it’s a given that Hollywood and propaganda pushers will start embracing it sooner rather than later. Hollywood, which is reeling in the wake of a historic pandemic, may have more incentive to embrace it than most.

Beyond replacing actors and actresses who get “cancelled” for their behavior, it may start as a cost-cutting measure. If it costs too much to put Hugh Jackman or Tom Cruise on a movie set, why not put a cheaper actor in their place and deep fake the more iconic figure over them? If the technology is that good and nobody can tell the difference, it almost makes too much sense.

It may get to a point where nobody outside the studio knows whether the figure we see on screen was actually “there” to give that moment life. They may just be a digital scan mixed with digitally generated audio, which is also advancing.
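For what it’s worth, the classic face-swap approach behind many deep fakes trains one shared encoder with a separate decoder per identity: encode a stand-in’s frame into identity-agnostic features (pose, expression, lighting), then decode with the target celebrity’s decoder. The toy sketch below fakes that data flow with strings instead of neural networks, purely to show the structure; every name and value in it is invented.

```python
# Structural sketch of the classic face-swap pipeline: one shared encoder,
# one decoder per identity. Strings stand in for neural networks here;
# the point is the data flow, not the math.

class FaceSwapSketch:
    def encode(self, frame):
        # A shared encoder learns identity-agnostic features: pose,
        # expression, lighting. Here we just strip off the identity tag.
        identity, _, performance = frame.partition(":")
        return performance

    def decode_as(self, identity, features):
        # Each identity gets its own decoder, trained to repaint those
        # features with that person's face.
        return f"{identity}:{features}"

model = FaceSwapSketch()
frame_from_set = "stand-in actor:raises the repulsor, smirks"  # what was filmed
features = model.encode(frame_from_set)               # keep the performance only
fake_frame = model.decode_as("Tom Cruise", features)  # re-render the face
print(fake_frame)  # → Tom Cruise:raises the repulsor, smirks
```

That separation is exactly why a cheaper stand-in works: the performance and the face are handled by different parts of the model, so one can be swapped without the other.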

This has even larger implications with propaganda. If the technology gets to a point where we can make any public figure say or do anything we want, no matter how deplorable, then how can we trust any media image? Would “cancel culture” even be feasible at that point? If people can just claim an embarrassing moment was a deep fake, how would we know?

It’s a distressing thought, but it’s something we’ll have to account for. We may end up having to contemplate it sooner than we thought. This technology can already show us a world in which Tom Cruise was cast as Iron Man. What other worlds will it reveal?

We’ll find out soon enough.

Leave a comment

Filed under Artificial Intelligence, futurism, media issues, superhero comics, superhero movies, technology, YouTube

Artificial Intelligence Is Learning Law: Is Government Next?

It’s inevitable. As technology advances, certain industries become obsolete. That’s why the horse-and-buggy industry is incredibly limited. It’s also why companies don’t make typewriters or LaserDisc movies anymore. Once better tech becomes available, an industry either adapts or disappears. Just ask Blockbuster.

Sometimes, it’s obvious that an industry is becoming obsolete. Again, just ask Blockbuster. As soon as something better, easier, and more convenient comes along, it’s only a matter of time before it takes over. However, it’s when things aren’t quite as obvious where more dramatic changes occur.

In terms of dramatic change, few things have the potential to generate more than artificial intelligence. I’ve highlighted that many times before, but a lot of that potential depends on advances that haven’t happened yet. They’re still likely to happen at some point, which may or may not be in my lifetime. They’re just not there yet.

That said, AI doesn’t have to become advanced on the level of Skynet or Hal 9000 to impact and/or disrupt major industries. The AI technology we have now is already having an impact. It may only be a narrow form of AI, which is AI that’s focused on performing a specific task, like playing chess. Its potential is still immense and some fields are feeling it more than others.

One industry that might feel it first is law. Now, at the risk of inspiring one too many lawyer jokes, I’m going to try and keep things general here. I’m also going to try and fit in some personal experience. I know some lawyers personally. I’ve been in law offices and I’ve seen how they work. You don’t have to be that much of a visionary to understand how AI could change this industry entirely.

Recently, TechNews did a story on how artificial intelligence is learning basic legal operations and doing it quite well. Given the massive amounts of data and technicalities included in American law, a narrow AI is ideally suited to handle such tasks. However, I don’t think the piece fully grasps the implications.

TechNews: Lawyers Beware: Artificial Intelligence Is Learning Law – And Doing Frighteningly Well

AI or artificial intelligence is starting to find its footing in the legal field. The world is now on the brink of revolution in legal profession spearheaded with the extensive use of AI in the entire industry, specifically by the in-house lawyers.

Just like how email greatly changed the way people conduct their business on a daily basis, AI is also expected to become an ever-present force and an invaluable assistant to almost all lawyers.

But the million-dollar question now is, what does the future look like for AI as far as the legal industry is concerned? A much bigger question is, will AI soon replace real life lawyers?

These are not unreasonable questions. What will happen to the current legal industry if much of the legal grunt-work can be handled by an AI? What will happen to the industry when it’s no longer necessary to have a huge team of overpaid lawyers to conduct competent legal operations?

As someone who has been in his share of law offices, I can make a few educated guesses. I can easily imagine firms shrinking their office space, but expanding their operations. Most of the legal offices I’ve gone to dedicate 80 percent of their office space to storing documents and secure research material. Very little is left or necessary for the actual people doing the work.

The recent pandemic has only revealed that plenty of this work can be done from home or remotely. Some legal proceedings are even unfolding through Zoom calls, albeit with mixed results. It’s a step forward in that it undermines and disrupts the traditional model for handling the law. It also raises a much larger question that the TechNews article didn’t ask.

Once AI learns the law, is government next?

It’s a natural progression. Governments make and administer laws. An AI that specializes in the law would also have to learn government, as well. A narrow AI might be able to process the general bureaucracy of a government, but what happens when those systems become more advanced?

I’m not just talking about a scenario where an AI becomes the government, which I’ve already speculated on. An AI that has perfect expertise in both law and government operations could have many less obvious effects. Inefficiencies that often go unnoticed in a bureaucracy are suddenly harder to overlook. Inconsistencies that rarely get fixed, due to that bureaucracy, can finally be remedied.

In theory, a sufficiently advanced AI, which need not be as intelligent as a human, could do more than just document legal and government proceedings. It could formulate new laws and policies on its own. Some may seem outrageous from a basic non-lawyer human perspective, but make perfect sense within a functioning legal system or government.

It may still seem like just another tool for lawyers to stay organized, but I think it could be more than that. If an AI makes both legal and government systems more efficient, then what will that mean for those in government? Would politicians be better able to implement their agenda if they have tools like AI at their disposal? Would that necessarily be a good thing?

This is where things get both tricky and political. No matter how confident you are in your political persuasions, the party you favor will not always be in power.

It may seem like politics is trending a certain way, but those trends change quickly. People who think their party is strong now can’t imagine a time when they’ll lose that strength. It happens regularly in any democracy.

Like it or not, your party will one day be out of power. When that happens, do you want the other party having a more efficient means of implementing their policies?

I’m sure everyone’s answer to that question will vary. What no one is certain of is how we’ll keep up with ever-improving AI systems, regardless of what industry they’re in. It’s one thing for a system to make it easier to stream movies or keep track of groceries. It’s quite another when it becomes intimately involved with our laws and our government.

The TechNews article expressed some concern, but only with respect to how it affects the current law industry. I believe AI, even if it’s focused only on law, will have a far larger impact. That’s not to say that AI will render law firms and governments obsolete.

If ever there was one domain in which foresight is critical, it’s this. Some industries can and should become obsolete. Others, like how we govern our society, need a more careful approach. We simply cannot afford for our laws and our government to end up like Blockbuster.

Filed under Artificial Intelligence, Current Events, human nature, Neuralink, politics, technology

Streaming Music Into The Brain With Neuralink: Why I Want To Try It

Say what you want about Elon Musk. He’s an eccentric billionaire. There’s a lot to say and not all of it is good. Whatever you think of him, though, you can’t deny he has some big, bold ideas. You don’t become a billionaire tech icon without plenty of those.

I’ve talked about some of his bolder ideas before, namely the potential impact of Neuralink and brain/machine interfaces. I contend those ideas are as bold as ever. It’s just a lot harder to explore and contemplate them when we’re in the middle of a global pandemic.

Despite the grim circumstances clouding our world now, Musk still finds a way to drop a new idea into the mix. This one is actually related to Neuralink and the world of brain augmentations. While this effort is still ongoing and very early, he did imply that the neural implants his company would offer might have another feature that hasn’t been highlighted. Specifically, it’ll allow you to stream music directly into your brain.

It wasn’t treated as groundbreaking. In fact, this topic came about during a Twitter conversation between Musk and an engineer of all things. Usually, Twitter conversations are about as productive as arguing with a creationist, but on rare occasions, something beautiful emerges. I say this is one of them.

Digital Trends: Elon Musk says Neuralink chip will let you stream music into your brain

Tesla CEO Elon Musk’s brain interface company, Neuralink, wants to let you stream music directly into your brain.

Musk recently said that Neuralink’s brain chip technology would allow people to stream music into their brains effortlessly. Musk confirmed the feature on July 19 over an exchange with a Twitter user who asked: “If we implement Neuralink – can we listen to music directly from our chips? Great feature.” Musk replied with a simple, “yes.”

Now, regardless of what you think of Musk’s claim or the technical feasibility of actually streaming music into the brain, I want to make one thing clear. I hope to leave no ambiguity.

I want to try this.

I really want to experience this at some point.

I love music as much as the next person, but my cumulative experience with headphones, stereo systems, and ear buds has been mixed at best. The idea of bypassing that entirely and streaming my favorite songs directly into my brain just has so much appeal and not just from a practical aspect.

Music can be a powerful influence. That’s not just an opinion. There’s real science behind it. I’ve certainly experienced that. There are songs on my playlist that can affect my mood, my focus, and my emotional state. Those effects can be pretty diverse. That should be a given. You’re not going to react to a Metallica song the same way you react to a Taylor Swift song.

It’s a testament to how impactful music can be. Now, there might be a way to stream it directly into our brains? Sign me up!

It’s not an incredibly radical idea when you break it down. In a sense, the music and all its powerful influences reach your brain already. It’s just indirect. The sound has to enter your ear, your ear has to process it, and the interpretations of those sounds have to travel to various parts of your brain. Neuralink is just offering a more direct path.

Imagine hearing something that makes no sound.

Imagine experiencing the emotions and excitement of music in a unique and intimate way.

It may not be the most groundbreaking use of neural implants, but I still want to try it. If being stuck in lockdown has taught us anything these past few months, it’s that we need a diverse range of experiences. There’s only so much we can get from binge-watching Netflix, playing video games, and Zoom chatting with family members.

We need those experiences to enrich our lives. We have no idea what kind of state the world will be in by the time this technology is refined. Who knows what kinds of experiences we’ll pursue? Hopefully, I’m around to stream my favorite playlist directly into my brain. It might not be the most profound use of this technology, but it will definitely rock.

Filed under futurism, Neuralink, Sexy Future, technology

The “Perfect” Sex Robot Thought Experiment

There’s a good chance that you’ve encountered someone who has a very strange kink. It’s probably not illegal, disgusting, or damaging. It’s just something that would make most people cringe if said out loud with a straight face. I won’t speculate on what that kink might be. I’ll just trust in the lurid imaginations of anyone reading this article to fill in the blanks.

With that in mind, I’d like to add another detail to that concept. Say you know this person’s kink. It rightly disgusts you. You believe it could be harmful to both the person and whoever they’re doing it with. However, you also know that they’ve never acted on this kink with anyone. On top of that, you know they’ll never act on it. Would you still trust them?

I know that last part is a bit of a stretch. We can never truly predict how anyone will act in the future. They could be the most disciplined person who ever lived, exercising restraint every day of their lives for years on end. A single lapse would still be enough to undermine others’ trust in them.

That’s why I’m framing it as a thought experiment. This is the sort of thing that just has no analog in the real world. It’s still important to contemplate because it can provide insights into who we are, who we trust, and how we conduct ourselves as a society.

Now, I want to throw sex robots into the mix. I promise there’s a legitimate point to that. This isn’t me speculating about the future of sex robots and other technology that’ll likely impact our sex lives. In fact, for this thought experiment to work, I’ll have to push the concept of sex robots to an extreme that is probably beyond any technology we’ll see in our lifetimes.

That’s because it requires that we envision the concept of a “perfect” sex robot. Now, I put “perfect” in quotes because perfection is subjective, especially when it comes to complex issues like human sexuality. It’s just a useful way to envision a form of sexual expression that goes beyond just sex with robots.

For the sake of the thought experiment, here’s a quick definition of what constitutes a “perfect” sex robot.

The robot is of a humanoid form and composed of universally malleable matter. It can effectively shape-shift into anyone, taking on any appearance the user desires, including that of celebrities, fictional characters, or private citizens. The robot can also take on inhuman forms. It can have fully functional sex organs of any gender or entirely new genders.

It also has an artificial intelligence that allows it to perfectly mimic any identity, role, or personality the user wishes. There are no restrictions or taboos. The robot is completely obedient, cannot be harmed, and never suffers.

In essence, the perfect robot is like Mystique from the X-Men combined with Rosie from “The Jetsons.” It can look any way a user wants. It does anything the user wants. It’s basically the ultimate sexual outlet. It doesn’t matter how tame or perverse your kink is. This robot will act it out with you whenever you want.

Why does that matter?

Well, it matters because horrible sex crimes and abuse still happen. As disgusting as it is to acknowledge, people do horrific things to other human beings to obtain sexual gratification. While most people aren’t like that, those deviant individuals still exist. These twisted desires still exist. There are those who don’t act on them, but if the desire is there, it’s still worthy of concern.

I think it’s relevant, given how prominent concerns over sexual assault and sexual abuse have become in recent years. On top of those concerns, there are other taboos and cultural attitudes that have been skewing our collective sexuality for centuries. From organized religion to sexy video game characters, there are many forces influencing our desires.

That brings me back to the essence of this thought experiment. This is where we have to both use our imaginations and speculate on how we conduct ourselves in a society.

Imagine that this perfect sex robot exists.

Now, imagine that everyone has one or several as soon as they reach an age at which they can consent to sex.

Everyone can carry out whatever depraved sex act they wish with this perfect sex robot, even if it’s illegal.

It doesn’t matter what their income is, where they live, or what their background is. Everyone has access to this perfect sex robot.

People can still form relationships with real people. They can still have children and raise families, like they always do.

What would change in this scenario? How would everyone conduct themselves in a world where they always had an outlet for whatever sexual desires they wanted? From decadent billionaires to working class people, they can all live out whatever fantasy they want with whoever they want.

Take it a step further. Imagine you met someone whose predilections you knew. Maybe they share it with you or you find out. Whatever it is, you find it abhorrent. You believe that, if they did this with anyone other than a sex robot, they’d be guilty of a horrific crime. However, they’ve never done it with anyone other than the robot and never would. Would you still associate with that person?

Even if you had a guarantee that nobody ever acted out their perverse desires on anyone other than a sex robot, would you still be comfortable around that person? Hell, flip the roles. Imagine you told someone about your kinks and they found it horrifying. How would you feel if they resented you, even if you never acted on them with real people and never would?

Keep following the possibilities.

Imagine someone uses their perfect sex robot to sleep with a replica of your spouse, parent, sibling, or child.

Imagine someone who claims to be heterosexual, but engages in homosexual acts with their sex robot.

Imagine someone who is never abusive with anyone, but horrifically abuses their sex robot.

I’ll stop short of adding more layers to this experiment. I think I’ve gotten my point across. For now, I encourage everyone to contemplate this. Think about how you would conduct yourself around people in this scenario. Think about what it would mean for society, as a whole.

There are no wrong answers, but the possibilities are as profound as they are kinky.

Filed under health, human nature, sex in society, sex robots, Sexy Future, Thought Experiment

Jack’s World: Why Neuralink Might Be The Most Important Venture Of All Time

The following is a video for my YouTube channel, Jack’s World. You may recognize the title from an article I wrote years ago in the before times when pandemics were still the subject of bad sci-fi movies. I miss those times too.

The news surrounding Neuralink is still of great interest to me. I still think it’s one of the most important technological advancements of the century. This video simply offers another general overview of why this technology is so important. Enjoy!

Filed under biotechnology, futurism, health, human nature, Jack's World, Neuralink, technology, YouTube