Tag Archives: Artificial General Intelligence

New Zealand Pushes For International Bans On Fully Autonomous Weapons (And Why Other Countries Should Follow Suit)


Whenever I discuss or follow issues surrounding artificial intelligence, it’s not long before the topic of killer robots comes up. That’s to be expected, and not just because most of us have seen “Terminator” one too many times. However, it’s no laughing matter.

At this stage in our technological development, killer robots and autonomous weapons aren’t restricted to science fiction. They already exist in many respects. We just call them drones or unmanned combat aerial vehicles. These are real things that operate in real war zones. They have killed people, including innocent civilians.

They may not look like your typical T-101, but make no mistake. They’re every bit as deadly. They don’t need to hide within the body of Arnold Schwarzenegger. They can just fly thousands of feet over your head and kill from a distance. That’s a scary thought for anyone in a war zone, but you can take some comfort in knowing that these machines still have human operators.

That could eventually change. As I’ve noted before, artificial intelligence is advancing to a point where it can match or exceed the skill of an ordinary human in specialized tasks. Even if we’re decades away from an AI that has the same cognitive capacity as an ordinary human, we already have AI systems that specialize in particular tasks, like chess.

It wouldn’t be that great a stretch to develop an advanced artificial intelligence that could specialize in flying combat drones without any human input. In principle, such a system wouldn’t be nearly as prone to error or hesitation once human pilots are taken out of the equation.

However, that also raises some serious ethical, as well as strategic questions. If humans are out of the loop in operating these autonomous weapons systems, then what happens to how we conduct warfare? What does this mean for both sides of an armed conflict?

Ideally, an advanced AI will be better at making decisions to limit civilian casualties. That is likely the ultimate goal in researching these systems. The problem is we’re still a long way from that goal, so much so that one government in the world is already trying to establish a precedent.

Fittingly, it’s a government from an area that is not in any war zone, nor is it near one. New Zealand, a country not known for frequent warmongering, recently pushed for a worldwide ban on autonomous weapons systems. It’s a bold and ambitious effort, but one I believe we should take seriously.

Stuff: Government to push for international ban of autonomous weapons, or killer robots

The Government will push for an international ban on fully autonomous weapons, or killer robots, that use artificial intelligence to target and kill people without any human decision-making.

New Zealand has for decades advocated for disarmament in international forums, after declaring the country a nuclear-free zone in the 1980s. Autonomous weapons are seen as a new frontier in the arms race between major military powers.

Disarmament Minister Phil Twyford on Tuesday said the Government had decided to take a “tough and uncompromising” stance on autonomous weapons, and seek a ban of fully autonomous weapons on the international stage.

Regardless of how you feel about New Zealand’s policies or intentions, this is one rare instance where politics might have to inject itself into science. Like it or not, the pace of artificial intelligence is accelerating. Few agree on how much time it’ll take to reach human level AI, but nobody denies that it’s an advancing field.

At some point in the very near future, we’ll have AI systems specifically tailored to fly combat drones with the same proficiency as a skilled Air Force pilot. That does not require human level AI. It just requires AI that can handle the various tasks associated with operating these systems.

When that time comes, it will be tempting to take flawed humans out of the loop. That means militaries with autonomous weapons systems won’t have to be nearly as hesitant about deploying them.

We can safely assume this because there’s a big difference between pushing a button that fires a gun and ordering another human being to fire that same gun. Even if that other human is trained and obedient, they’re still human. They can still make mistakes. They can still hesitate.

For once, that’s not a bad thing. Sometimes, we need humans to hesitate in the midst of war. Were it not for hesitation, the Cuban Missile Crisis could’ve ended very differently.

If waging war is too easy for those with access to these systems, then more war will follow. More war means more death, destruction, and violence. Too much of that and lasting peace becomes much more difficult. The whole of 20th century history is proof enough of that.

Like nuclear weapons, autonomous weapons systems are a Pandora’s Box that cannot be closed once opened. We’ve already opened it partially thanks to drone warfare. For that reason, I think New Zealand has the right idea in terms of mitigating the potential danger.

Even if autonomous weapons systems become so advanced that they operate better than any human, we still need at least one human behind the decision-making process. We still need a flesh-and-blood person to pull the trigger rather than entrusting that choice to a machine.

We, as a species, simply cannot and should not walk this path with our technology. It’s not just about limiting the use of dangerous weapons. Once we start teaching these advanced AIs to kill, we won’t be able to unteach them. If they eventually become too good at it, that does not bode well for humanity as a whole, regardless of which side they’re on in any war.


Filed under Current Events, technology

My (Non-Expert) Proposal For Automation And Greater Human Prosperity


I’m not an expert on much. I don’t consider myself exceptionally smart on matters that don’t involve superhero comics, NFL football stats, and quality romance stories. I make that disclaimer because I don’t want to give the impression that I know more than I know.

That kind of perspective is important, especially if you’re going to talk about complicated issues with not-so-clear solutions. I’ve attempted to talk about some of those issues on this site, some being much more complicated than others. I don’t claim to have shed new light on a subject or changed any minds, but I like to think I still contributed something.

To that end, I’d like to make another contribution to a subject I’ve also touched on before. In the past, I’ve highlighted both emerging and potential issues associated with the development of artificial intelligence, including the sexy kind. I’ve also highlighted the issues we may face in a future where so much traditional work has been automated.

Now, in the wake of a global pandemic that revealed just how much we can get done at home, I suspect that trend will accelerate. I also expect it to bring many problems, not the least of which involve people struggling to find the kind of good-paying blue collar jobs that have helped people rise out of poverty.

Turning back the clock or just trying to ban automation at a certain point is not a viable solution. There’s just no way to enforce that in the long term and it’ll only get more difficult once artificial intelligence gets to a point where it can match the capabilities of an ordinary human. At some point, we’ll have to adapt and that includes changing how we live, work, and play.

The living and playing parts have their own unique challenges, but I think the work aspect is more pressing. When most people express concerns about automation and AI, those concerns revolve largely around the economic impact, and understandably so.

Historically, people have had to work in order to earn money or resources to survive. Whether you’re on a farm in the 10th century or in a city in the 20th, this dynamic has remained fairly constant.

Automation, especially once supplemented by artificial intelligence, will likely upend that dynamic completely. It’s entirely possible that, at some point this century, we’ll develop machines that can do practically all the work humans have had to do in order to survive.

That work includes, but isn’t limited to, farming our food, mining raw materials, producing our goods, maintaining our streets, protecting our homes, and even governing our society. Since machines never tire and make fewer errors, what other jobs will there be? I don’t doubt there will be jobs, but what form will they take? More importantly, will they pay enough to sustain large swaths of people?

I don’t claim to know the answer, but I suspect they won’t. The dynamics of labor markets just can’t function when the machines are capable of doing so much more work than large swaths of people. Even if those people don’t work, they’re still going to need money and resources. How will they go about getting it?

Answering this question has often led to discussions about a universal basic income, which has actually become a more viable policy position in recent years. I’ve even touched on it a bit as well and while I think it’s a great idea, I think there’s also room for some supplementary policies.

For that reason, I’d like to submit one of those policies, which could be implemented with or without universal basic income. I call it the Individual Automation Matching Dividend, or IMAD for short. It would work like this:

  • All adult citizens within the borders of the country will have a piece of identifying information, such as a social security number, voter ID number, or driver’s license number, turned into a special digital token.
  • That token will be ascribed to a machine/robot/android that is currently active and conducting work that had been done by humans at some point in the past, be it manual labor, service roles, or something of that sort.
  • The productivity and wages of work done by these machines will be indexed to a minimum annual salary of approximately $78,000 in 2021, which will be adjusted for inflation on a yearly basis.
  • Any work done by these machines that exceeds the value of that salary will be diverted to a national welfare fund to provide extra support for those who are sick, disabled, or otherwise in need of resources beyond those of a healthy adult.
  • No citizen will be ascribed more machines than any other and any machine ascribed to them that is lost, damaged, or obsolete will be replaced in kind by the state.

I apologize if some of what I just described is confusing. I tried to write this out like a lawyer or someone proposing a new policy to a future government. For those who don’t care for legalese, here’s IMAD in a nutshell.

Once you become an adult, you get your own perfect worker robot. That robot may take many forms, but for the sake of simplicity, let’s just say it’s an android in the mold of those we saw in the “I, Robot” movie. They can work without rest, do everything a healthy adult can do, and have intelligence roughly equal to or greater than ours.

You’re given this robot by the government to basically act as your work avatar. So, instead of you going out to work every day to earn a living, this robot does it for you. The work that robot does will be compensated, but the money will go to you. Basically, you get paid for the work your android does. It’s more a dividend than a wage.

Remember, since the robot doesn’t age or get tired, it can work 24/7/365. In principle, you won’t even have to meet it. It just works all day and all night on whatever job requires labor, be it construction, assembly, shipping, farming, cooking, etc. You just get all the money, up to about $78,000 a year.
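
For readers who prefer code to legalese, here’s a minimal sketch of that payout rule. The flat 2 percent inflation rate and the example figures are my own illustrative assumptions, not part of the proposal itself.

```python
# Minimal sketch of the IMAD payout rule -- purely illustrative.

BASE_CAP = 78_000   # minimum annual salary, in 2021 dollars
BASE_YEAR = 2021

def inflation_adjusted_cap(year, annual_inflation=0.02):
    """Adjust the $78,000 cap for inflation (assumed flat 2% here)."""
    return BASE_CAP * (1 + annual_inflation) ** (year - BASE_YEAR)

def imad_payout(robot_output_value, year):
    """Split a robot's annual output value into the citizen's dividend
    and the surplus diverted to the national welfare fund."""
    cap = inflation_adjusted_cap(year)
    dividend = min(robot_output_value, cap)
    welfare_surplus = max(robot_output_value - cap, 0.0)
    return dividend, welfare_surplus

# Example: your robot produces $95,000 worth of labor in 2025.
dividend, surplus = imad_payout(95_000, 2025)
print(f"You receive ${dividend:,.2f}; ${surplus:,.2f} goes to the welfare fund.")
```

In practice, pinning the cap to a real cost-of-living index rather than a flat rate would be the obvious refinement.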

Now, why did I choose $78,000? I didn’t pick that number out of thin air. It’s adapted from a real-world case study of a company that started paying all employees a minimum of $70,000 a year back in 2015. The idea was that previous studies had shown that when people make more money beyond a certain point, their happiness doesn’t increase. This company just took that idea and ran with it.

The results, by and large, were overwhelmingly positive. With that kind of money, people could create more comfortable lives. They could buy homes, start families, plan for retirement, and make investments. It makes sense. When people have this kind of money to work with, they have the resources they need to create prosperous lives.

The idea behind IMAD is to mirror that by leveraging the added productivity afforded by automation. It’s not some large blanket package of money like a universal basic income. It starts with an individual, acknowledges the work that they have historically provided for a society, and supplements that with technology.

I’m not saying it’s a perfect proposal. I’m not even saying it’s smart. For one, it assumes that one human-like android is enough and that we can control the artificial intelligence necessary for them to operate on a large scale. That’s still an ongoing issue. I’m sure there are plenty more problems I haven’t thought of, but that’s exactly why I’m sharing it.

Surviving a future with intelligent machines is going to be challenging enough. However, we can’t just stop at survival. We want to prosper. We want to live, love, and build better futures for ourselves and our loved ones. Technology like automation and AI can help us get there, but only if we use it wisely. It’s a big if, but one that’s worth working towards.


Filed under Artificial Intelligence, futurism, technology

Would You Shop At A Store Run Entirely By Robots?


Recall the last time you went to the store. It doesn’t matter if it was your corner grocery store or some big box department store. All that matters is you went there to do some basic shopping, as we all end up having to do at some point. With that in mind, try and remember how many store clerks you saw.

Maybe some were working at cash registers.

Maybe some were stocking shelves.

Maybe some were sweeping floors or cleaning up messes.

Chances are you saw at least a few. I remember seeing at least three the last time I went to a grocery store. That’s fairly typical. I know I used to see more before the days of self-checkout lines, but I always saw people working at these stores, diligently doing the things necessary to keep them running.

For most of us, that’s a mundane sight. For every store we go to, we expect there to be human beings working there to keep it going. It’s part of the infrastructure that keeps these stores stocked. On top of that, seeing other human beings at work gives us a sense of comfort that the place is being run by real people with real faces.

Now, try and imagine a store that has no people working at it. You walk in the door and you never see another human being carrying out the various operations we expect of a functioning store. All that is now done by machines and robots. They’re the ones who stock the shelves, handle your money, and clean the messes.

Does that change the experience?

Does that make you more or less inclined to shop at that store?

These are relevant questions because, as I’ve noted before, robots and artificial intelligence are advancing rapidly. Automation is an ongoing trend that promises to have major economic ramifications. Some of those ramifications are already here. It’s one of the reasons coal mining jobs will never be as prevalent as they once were.

Other ramifications haven’t arrived yet, but they will eventually come. The technology is there. The incentives are there. It’s just a matter of investing, refinement, and scale. Eventually, it will reach retail work, a sector that employs nearly 10 million people. That will have a major economic impact for large swaths of people.

Unlike other forms of automation, though, it’ll be a lot more visible.

Most of us never set foot in a factory where cars are made, much of which is done by robots. Most will never set foot in an Amazon or Walmart warehouse, either, both of which already use robots at a significant scale. Just how much work is done by robots these days simply isn’t visible to most ordinary people.

That will not be the case with stores and retail work. Like I said, we all have to get out and shop every now and then. Even though online retail has become more prevalent, people still go to traditional brick and mortar stores. Even as online retail improves, that’s not likely to change.

However, how much will that experience change once robots start doing the jobs that humans have done for centuries?

How will that change the experience?

Will you, as a consumer, shop at a store that has no humans working there most of the time?

If you think this is still a long way off, think again. Below is a video from an AI channel on YouTube that shows a robot using a barcode scanner for the first time. The process is a bit cumbersome, but the robot is able to handle it. It is able to receive instructions. Given how robots improve and refine their programming, it’s not unreasonable to assume that future robots will be able to carry out retail tasks more efficiently than any human worker.

It may not happen all at once. You probably won’t just walk into a store one day and notice that everyone has been replaced by robots. Like self-checkout, it’ll likely happen gradually. Once it gets to a certain point, though, it’ll become mainstream very quickly. The incentives are just too strong.

You don’t need to be an economist to see those incentives. Robots don’t need to be paid. They don’t slack off on the job. They don’t get sick or tired. In theory, they could keep a store open 24/7 without ever paying overtime. For big box retailers like Walmart, the potential profits are just too large to ignore.

It won’t stop at stores, either. Restaurants will likely undergo a similar process. There are already working robots that can cook meals from scratch. Once they get refined and scaled, then it’s also likely you’ll one day eat at a restaurant entirely run by robots.

Would you be willing to eat at such a place?

Your answer will probably be similar to the one you gave to the question I asked earlier about whether you’d shop at a store run entirely by robots. Personally, I don’t think I’m ready to shop at a place that has no humans working in it, if only because robots sometimes break down. However, within my lifetime, it may get to a point where stores and restaurants run by humans become the exception rather than the norm.

Are we ready for that future?

I don’t know, but it’ll come whether we’re ready for it or not.


Filed under futurism, robots, technology

Turning Thoughts Into Images: A New Era Of Art With Brain/Computer Interface


There are any number of skills you can learn, practice, and eventually master. I highly encourage everyone to do this, whether it involves computer programming, cooking, crafts, or any other hobby. You may not always like or master them, but they’re still fun and rewarding to try.

For some skills, though, no amount of learning or practice will help you master them or even be competent. Some things just take talent. That’s why only a handful of human beings ever become Olympic athletes, professional quarterbacks, or brain surgeons. There’s nothing wrong with that. We need that kind of diverse skill set, as a species.

I consider myself to be good, if not above-average, at a number of skills. I’ve learned plenty over the years and there are some that I just have a knack for more than others. I like to think writing is one of them. However, there’s one particular skill that I just have absolutely zero talent for and it’s something that has bugged me for years.

That skill is drawing.

Please understand that this is somewhat personal for me. I’ve always had an artistic side, but for reasons I can’t quite grasp, I’ve never been able to draw worth a damn. I’ve taken art classes in school. I’ve tried practicing here and there. It just never works. I can barely draw stick figures, let alone an image of a typical person that doesn’t look like it was drawn by a five-year-old.

Some of that actually runs in my family. Quite a few relatives can attest that they can’t draw, either. At the same time, an unusually high number of relatives are good writers, poets, etc. We’re all great with words, for the most part. That’s a talent that seems to get passed down, but we just can’t turn those words into pictures.

For me, that’s kind of frustrating. I’ve always enjoyed telling stories. For a time, I wanted to be a comic book writer, but I learned quickly that’s next to impossible when you can’t draw. There are also times when I wish I could draw well enough to describe a scene from a story. I just don’t have that talent or that skill.

As much as I enjoy writing, I don’t deny that humans are visual creatures. If I could incorporate images into my work, I believe it would have a much greater impact. Sadly, I doubt I’ll ever have the necessary talent and skill to create those images.

However, if certain technological trends continue, I might not have to. A recent article in Psychology Today gave me hope that one day I’ll be able to take some of these images I see in my head and make them real for others to see. It also leads me to believe that art, as we know it, is about to change in a big way.

Psychology Today: New Brain-Computer Interface Transforms Thoughts to Images

Achieving the next level of brain-computer interface (BCI) advancement, researchers at the University of Helsinki used artificial intelligence (AI) to create a system that uses signals from the brain to generate novel images of what the user is thinking and published the results earlier this month in Scientific Reports.

“To the best of our knowledge, this is the first study to use neural activity to adapt a generative computer model and produce new information matching a human operator’s intention,” wrote the Finnish team of researchers.

The brain-computer interface industry holds the promise of innovating future neuroprosthetic medical and health care treatments. Examples of BCI companies led by pioneering entrepreneurs include Bryan Johnson’s Kernel and Elon Musk’s Neuralink.

Studies to date on brain-computer interfaces have demonstrated the ability to execute mostly limited, pre-established actions such as two-dimensional cursor movement on a computer screen or typing a specific letter of the alphabet. The typical solution uses a computer system to interpret brain-signals linked with stimuli to model mental states.

Seeking to create a more flexible, adaptable system, the researchers created an artificial system that can imagine and output what a person is visualizing based on brain signals. The researchers report that their neuroadaptive generative modeling approach is “a new paradigm that may strongly impact experimental psychology and cognitive neuroscience.”
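
To give a rough sense of how such a closed loop might work, here’s a toy sketch of the general idea. To be clear, this is my own illustration, not the Helsinki team’s actual method or code: the image generator and the decoded brain signal are both faked with stand-in functions, and the update rule is just a response-weighted average over candidate images.

```python
import numpy as np

# Toy sketch of a neuroadaptive generative loop -- an illustration of
# the general idea only. A real system would pair a pretrained image
# generator (e.g., a GAN) with a decoder for EEG responses; both are
# faked here with latent vectors and a similarity score.

LATENT_DIM = 16
rng = np.random.default_rng(0)

def brain_relevance(z, target):
    """Stand-in for decoded brain signals: how strongly the viewer
    'responds' to the image rendered from latent vector z. We cheat
    and measure cosine similarity to a hidden intended image."""
    return float(z @ target) / (np.linalg.norm(z) * np.linalg.norm(target))

intended = rng.normal(size=LATENT_DIM)   # the image the user imagines
estimate = rng.normal(size=LATENT_DIM)   # the system's current guess

for step in range(300):
    # Show the viewer slight variations of the current guess...
    candidates = estimate + 0.3 * rng.normal(size=(16, LATENT_DIM))
    # ...and weight each candidate by the brain's response to it.
    scores = np.array([brain_relevance(c, intended) for c in candidates])
    weights = np.exp(10 * scores)
    estimate = (weights[:, None] * candidates).sum(axis=0) / weights.sum()

# The similarity climbs well above its random starting point (~0).
print("similarity to intended image:", brain_relevance(estimate, intended))
```

Per the excerpt above, the real system adapts a generative model using neural activity; this loop is just the simplest caricature of that idea, namely propose, measure the response, and update.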

Naturally, this technology is very new and nowhere near ready for commercial use. It’ll probably be a while before I can use it to create graphic novels of the books I’ve written and the sexy short stories I’ve told. That still won’t stop me from entertaining thoughts of incorporating images into my stories.

I doubt I’m the only one who feels that way, too. I know plenty of people like me who just do not have the talent or skill to draw anything more detailed than a stick figure. Those same people have images in their minds that they wish to share. If products like Neuralink, which the article directly references, become more mainstream, then this could be among its many uses.

With some refinement, it won’t just allow artistically challenged people like me to make competent drawings. It’ll allow people who never would’ve otherwise produced art to create something they can share with the world.

Just take a moment to appreciate how many beautiful images exist only in the minds of people who never get an opportunity to share them. Maybe someone did have an idea for a piece of artwork that would’ve brought beauty, joy, and inspiration to the world, but they just didn’t have the skill, resources, or talent to make it tangible. How many masterpieces have we lost because of that limitation?

We can never know, but any loss of beautiful art is a tragic one. With a process like this, people who never even thought about having an artistic side could explore it. Moreover, they would be able to do it without messy art supplies, sketchbooks, or ink stains. They would just need a neural prosthesis and a computer.

Almost everyone has a computer, so we’re already halfway there. If ever a product came out that allowed us to develop this ability of turning thoughts into images, I would be among the first to try it. I would eagerly line up to take the plunge, if only to open the possibility that some of the images I see when I’m writing can become real one day. I hope I live long enough to see this. Our bodies and minds may ultimately fail us, but great art can last for multiple lifetimes.


Filed under Artificial Intelligence, biotechnology, Neuralink, technology

An Artificial Intelligence That Can Debate: The Promise And The Perils

Even in an era as divided and chaotic as this, there’s still a place for reasonable debate on important issues.

Yes, I understand it’s hard to say that with a straight face these days.

Yes, I’ve been to comments sections, Reddit, and 4chan.

Yes, I know how ugly the discourse is right now, but that’s exactly why I’m bringing this up.

In general, people are curious. Even though we cling to cherished beliefs and refuse to change our minds when confronted with evidence, we’re still curious about things that are different. That curiosity isn’t always honest, but it’s a powerful feeling. Even if you despise those on the other side of a debate, a part of you will still wonder why they hold the opinions they have.

That’s why debate is such a powerful tool. Humans are a deeply social species by nature. We’re hard-wired to interact, connect, and discuss things with one another, even when we don’t agree. It may frustrate us to no end, as anyone who has debated a creationist can attest. However, the fact that we keep doing it is proof, in my opinion, that there’s still value in the process.

Regardless of how you feel about the merit and value of debating, the whole process might become more complicated in the near future. That’s because IBM, the same company behind Watson, the Jeopardy-playing computer, just created an artificial intelligence that can debate at the same level as a skilled human debater.

Here’s an excerpt from a piece in TechXplore on this new system. It’s not as scary as some of the things we’ve seen from Boston Dynamics lately, but it’s still intriguing.

TechXplore: IBM’s AI debating system able to compete with expert human debaters

IBM has developed an artificial intelligence-based system designed to engage in debates with humans. In their paper published in the journal Nature, the team members describe their system and how well it performed when pitted against human opponents. Chris Reed with the University of Dundee has published a News & Views piece in the same journal issue outlining the history and development of AI technology based around the types of logic used in human arguments and the new system developed by IBM.

As Reed notes, debating is a skill humans have been honing for thousands of years. It is generally considered to be a type of discussion in which one or more people attempt to persuade others that their opinion on a topic is right. In this new effort, the team at IBM has created an AI system designed to debate with humans in a live setting. It listens to moderators and opponents and responds in a female voice.

Now, before you get too nervous, it’s worth noting that this AI is far from the kind of advanced artificial intelligence systems I’ve mentioned before. This is not the kind of AI that will become Skynet or HAL 9000, no more so than Watson or AlphaGo. This system is very much a narrow AI, meaning it’s made to excel at a specific task.

We have AIs that can beat world-class chess players and Jeopardy champions. This AI just happens to excel at debate. However, that has implications that go beyond simply outclassing the best human debaters in the world at the moment. In fact, this is one form of AI that might not need human-level intelligence to have a major impact.

Take a moment to think about how erratic and inconsistent most debates are. No matter how intelligent or informed you are, it tends to get messy fast. That’s why so many comments sections and Reddit threads devolve into shouting matches and personal attacks. The end result is people becoming more angry and unreasonable, which can have major real-world consequences.

However, what would happen if every debate on any issue included someone who was better than the best debater on the planet? Even if the two people on each side of the debate were dumb and inept, such a presence would have a major impact on the discourse.

That’s because winning a debate has little to do with how well someone knows a particular subject. It also has little to do with how forcefully and clearly someone explains a topic. Again, people who debate creationists know this all too well. Winning a debate doesn’t mean proving your expertise. It means proving the merit of your argument.

An AI that can do that may not convince someone that they’re wrong about something. However, losing a debate tends to have a lasting impact. Just ask any aspiring politician. It can also lead people to question their beliefs, even if they still cling to them. That, alone, can be powerful.

For proof, look no further than the story of Megan Phelps-Roper, a former member of the infamously hateful and dogmatic Westboro Baptist Church. She was as locked into her beliefs as anyone could possibly be. She was raised by religious zealots and indoctrinated into strict religious dogma from the time she was a child. She was not someone whose mind was prone to change.

Then, she got into a conversation with someone on Twitter, of all places. It started as a nasty debate, but it evolved into something that led her to question her beliefs. Ultimately, she left that hateful and bigoted environment. She’s now an activist against the same religiously motivated hate that she once fostered.

It’s a powerful story, but one that couldn’t have happened without a debate. To date, people have only been able to have those debates with other people, and not everyone is equally skilled. In fact, I would argue most people are woefully unskilled at debating anything, and I freely include myself in that category.

Now, there’s an AI system that can theoretically win any debate the same way other systems can defeat any chess player. That does hold promise if it can be used to temper the heated rhetoric that has caused real-life hostilities. At the same time, there are reasons for concern.

What side would this system be on?

Moreover, who decides what position these AI systems take?

If no one decides, then how does the AI determine which side on a debate it takes?

These are relevant questions because if you have an AI that can win any debate, then the side it takes really matters. That can be used for good when countering dangerous disinformation, like those from the antivaxx crowd or hate groups like the Westboro Baptist Church. Like any tool, though, it can be weaponized for ill.

I can easily imagine political organizations getting a hold of these systems and trying to use them to benefit their agenda. I can also see some trying to use them to spam message boards, social media, and Reddit threads to derail certain movements or arguments. That’s a powerful tool, and we can’t be certain that those using it will do so responsibly.

That’s the dilemma with all technology. It can be used for good and for evil. With artificial intelligence, the stakes are a lot higher, as are the perils. This system may not be as dangerous as a more advanced AI, but it could be a precursor that shows just how disruptive this technology can be.


Filed under Artificial Intelligence, futurism, technology

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still limited. It’s not dumb, but it’s narrow. It can do certain tasks very well, like play chess or win Jeopardy. It can’t apply that intelligence at a broader, macro level the way a human can.

That still makes these systems useful, and they’re advancing rapidly. Artificial intelligence programs are used frequently for tasks like moderating comments sections and tracking sales data. The problem is that, since the AI is not generally intelligent, humans have to fill in the gaps. Since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes the AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore did an article on it, and while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radić’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.
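
To make that failure mode concrete, here’s a toy sketch of a context-blind filter. This is my own illustration, not YouTube’s actual moderation system, whose internals aren’t public. It just shows how innocent chess commentary can trip a filter that matches phrases without any sense of the domain they appear in.

```python
# Toy sketch of a context-blind "hate speech" filter -- my own
# illustration, NOT YouTube's actual system. It flags phrases with
# no awareness of the domain they appear in.

FLAGGED_PATTERNS = [
    "black vs. white",
    "white attacks black",
    "black is dominating",
]

def naive_flag(comment: str) -> bool:
    """Flag a comment if any pattern appears, ignoring all context."""
    text = comment.lower()
    return any(pattern in text for pattern in FLAGGED_PATTERNS)

chess_talk = "Typical chess commentary: White attacks Black's kingside early."
print(naive_flag(chess_talk))   # True -- a false positive on chess talk

# Real moderation models are statistical rather than keyword-based,
# but the same thing happens when their training data includes few
# chess discussions: the innocent reading never gets learned.
```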

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble, and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.


Filed under Artificial Intelligence, technology, YouTube

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin, and I try not to ignore it. Like anything, technology has a dark side. It can be used to hurt just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically about the ethics of how it operates. For a company as big and powerful as Google, that’s not a trivial comment. This is what she had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

I bolded that last sentence because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relevant question, and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then she was fired in a dispute over a research paper and started expressing her concerns publicly. The fact that Google can push out a prominent ethics researcher without facing much in the way of repercussions should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not utopian technocrats. They’re a business that is obligated to make its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI could make it many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may present themselves as teams of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.


Filed under Artificial Intelligence, technology

Deep Fake Technology Can Now Make Tom Cruise Iron Man: Signs And Implications

Certain technology advances slowly and steadily. It’s why we’re still waiting for a cure for the common cold. Other technological breakthroughs advance so fast that it’s hard to keep up. Anyone who doesn’t regularly upgrade their cell phone understands that.

That brings me to the technology of deep fakes. I’ve talked about them before and the implications this technology has for the entertainment industry. Well, I’m here to report that this technology might be advancing faster than I thought.

Recently, a new deep fake video hit the web. It’s nothing overly nefarious. It’s actually a play on a real story from the mid-2000s. Before Robert Downey Jr. was cast as Tony Stark in the first “Iron Man” movie, Tom Cruise was in the running for that role.

He has since claimed he was never close to getting that role, but it’s still an interesting idea. For most Marvel fans, it’s hard to imagine anyone other than RDJ donning that now-iconic armor. However, there’s no denying that Tom Cruise being Iron Man would’ve changed a franchise, as well as cinematic history.

Well, thanks to deep fake technology, we don’t have to imagine anymore. We can now see for ourselves what it would look like if Tom Cruise had been cast as Iron Man in the Marvel Cinematic Universe. See for yourself.

Watching this, I have to say it was more than a little jarring. It’s not just that seeing someone other than RDJ as Iron Man is strange. I was genuinely impressed by how real it looked.

Yes, it did become a bit obvious at times that there was some digital trickery at work. I’ve seen enough Tom Cruise movies to know what he looks like. I could tell that the body just did not match the iconic face at times.

However, I’m still impressed at just how seamless it appeared, especially when he was in the Iron Man costume. It really did look like Cruise had embraced the role as much as RDJ had. Even though the voice had to come courtesy of a skilled voice actor, the graphics technology is definitely on pace to cross the uncanny valley sooner rather than later.

The implications here are profound. If the technology is already at this point, then it’s a given that Hollywood and propaganda pushers will start embracing it sooner rather than later. Hollywood, which is reeling in the wake of a historic pandemic, may have more incentive to embrace it than most.

Beyond replacing actors and actresses who get “cancelled” for their behavior, it may start as a cost-cutting measure. If it costs too much to put Hugh Jackman or Tom Cruise on a movie set, why not just put a cheaper actor in their place and deep fake the more iconic figure over them? If the technology is that good and nobody can tell the difference, it almost makes too much sense.

It may get to a point where nobody outside the studio knows whether the figure we see on screen was actually “there” to give that moment life. They may just be a digital scan mixed with digitally synthesized audio, a technology that is also advancing.

This has even larger implications with propaganda. If the technology gets to a point where we can make any public figure say or do anything we want, no matter how deplorable, then how can we trust any media image? Would “cancel culture” even be feasible at that point? If people can just claim an embarrassing moment was a deep fake, how would we know?

It’s a distressing thought, but it’s something we’ll have to account for. We may end up having to contemplate it sooner than we thought. This technology can already show us a world in which Tom Cruise was cast as Iron Man. What other worlds will it reveal?

We’ll find out soon enough.


Filed under Artificial Intelligence, futurism, media issues, superhero comics, superhero movies, technology, YouTube

Artificial Intelligence Is Learning Law: Is Government Next?

It’s inevitable. As technology advances, certain industries are going to become obsolete. That’s why the horse-and-buggy industry is incredibly limited. It’s also why companies don’t make typewriters or LaserDisc movies anymore. Once better tech becomes available, an industry either adapts or disappears. Just ask Blockbuster.

Sometimes, it’s obvious that an industry is becoming obsolete. Again, just ask Blockbuster. As soon as something better, easier, and more convenient comes along, it’s only a matter of time before it takes over. However, it’s when things aren’t quite as obvious where more dramatic changes occur.

In terms of dramatic change, few things have the potential to generate more than artificial intelligence. I’ve highlighted that many times before, but a lot of that potential depends on advances that haven’t happened yet. They’re still likely to happen at some point, which may or may not be in my lifetime. They’re just not there yet.

That said, AI doesn’t have to become advanced on the level of Skynet or HAL 9000 to impact and/or disrupt major industries. The AI technology we have now is already having an impact. It may only be a narrow form of AI, meaning AI that’s focused on performing a specific task, like playing chess. Its potential is still immense, and some fields are feeling it more than others.

One industry that might feel it first is law. Now, at the risk of inspiring one too many lawyer jokes, I’m going to try and keep things general here. I’m also going to try and fit in some personal experience. I know some lawyers personally. I’ve been in law offices and I’ve seen how they work. You don’t have to be that much of a visionary to understand how AI could change this industry entirely.

Recently, TechNews did a story on how artificial intelligence is learning basic legal operations and learning it quite well. Given the massive amounts of data and technicalities included in American law, a narrow AI is ideally suited to handle such tasks. However, I don’t think the piece fully grasps the implications.

TechNews: Lawyers Beware: Artificial Intelligence Is Learning Law – And Doing Frighteningly Well

AI or artificial intelligence is starting to find its footing in the legal field. The world is now on the brink of revolution in legal profession spearheaded with the extensive use of AI in the entire industry, specifically by the in-house lawyers.

Just like how email greatly changed the way people conduct their business on a daily basis, AI is also expected to become an ever-present force and an invaluable assistant to almost all lawyers.

But the million-dollar question now is, what does the future look like for AI as far as the legal industry is concerned? A much bigger question is, will AI soon replace real life lawyers?

These are not unreasonable questions. What will happen to the current legal industry if much of the legal grunt-work can be handled by an AI? What will happen to the industry when it’s no longer necessary to have a huge team of overpaid lawyers to conduct competent legal operations?

As someone who has been in his share of law offices, I can make a few educated guesses. I can easily imagine firms shrinking their office space, but expanding their operations. Most of the legal offices I’ve gone to dedicate 80 percent of their office space to storing documents and secure research material. Very little is left or necessary for the actual people doing the work.

The recent pandemic has only revealed that plenty of this work can be done from home or remotely. Some legal proceedings are even unfolding through Zoom calls, albeit with mixed results. It’s a shift that undermines and disrupts the traditional model for handling the law. It also raises a much larger question that the TechNews article didn’t ask.

Once AI learns the law, is learning government next?

It’s a natural progression. Governments make and administer laws, so an AI that specializes in the law would have to learn government as well. A narrow AI might be able to process the general bureaucracy of a government, but what happens when those systems become more advanced?

I’m not just talking about a scenario where an AI becomes the government, which I’ve already speculated on. An AI that has perfect expertise in both law and government operations could have many less obvious effects. Inefficiencies that often go unnoticed in a bureaucracy are suddenly harder to overlook. Inconsistencies that rarely get fixed, due to that bureaucracy, can finally be remedied.

In theory, a sufficiently advanced AI, which need not be as intelligent as a human, could do more than just document legal and government proceedings. It could formulate new laws and policies on its own. Some may seem outrageous from a basic non-lawyer human perspective, but make perfect sense within a functioning legal system or government.

It may still seem like just another tool for lawyers to stay organized, but I think it could be more than that. If an AI makes both legal and government systems more efficient, then what will that mean for those in government? Would politicians be better able to implement their agenda if they have tools like AI at their disposal? Would that necessarily be a good thing?

This is where things get both tricky and political. No matter how confident you are in your political persuasions, the party you favor will not always be in power.

It may seem like politics is trending a certain way, but those trends change quickly. People who think their party is strong now can’t imagine a time when they’ll lose that strength. It happens regularly in any democracy.

Like it or not, your party will one day be out of power. When that happens, do you want the other party having a more efficient means of implementing their policies?

I’m sure everyone’s answer to that question will vary. What no one is certain of is how we’ll keep up with ever-improving AI systems, regardless of what industry they’re in. It’s one thing for a system to make it easier to stream movies or keep track of groceries. It’s quite another when it becomes intimately involved with our laws and our government.

The TechNews article expressed some concern, but only with respect to how it affects the current law industry. I believe AI, even if it’s focused only on law, will have a far larger impact. That’s not to say that AI will render law firms and governments obsolete.

If ever there was one domain in which foresight is critical, it’s this. Some industries can and should become obsolete. Others, like how we govern our society, need a more careful approach. We simply cannot afford our laws and our government to end up like Blockbuster.


Filed under Artificial Intelligence, Current Events, human nature, Neuralink, politics, technology

Streaming Music Into The Brain With Neuralink: Why I Want To Try It

Say what you want about Elon Musk. He’s an eccentric billionaire. There’s a lot to say and not all of it is good. Whatever you think of him, though, you can’t deny he has some big, bold ideas. You don’t become a billionaire tech icon without plenty of those.

I’ve talked about some of his bolder ideas before, namely the potential impact of Neuralink and brain/machine interfaces. I contend those ideas are as bold as ever. It’s just a lot harder to explore and contemplate them when we’re in the middle of a global pandemic.

Despite the grim circumstances clouding our world now, Musk still finds a way to drop a new idea into the mix. This one is actually related to Neuralink and the world of brain augmentations. While this effort is still ongoing and very early, he did imply that the neural implants that this company would offer might have another feature that hasn’t been highlighted. Specifically, it’ll allow you to stream music directly into your brain.

It wasn’t treated as groundbreaking. In fact, this topic came about during a Twitter conversation between Musk and an engineer of all things. Usually, Twitter conversations are about as productive as arguing with a creationist, but on rare occasions, something beautiful emerges. I say this is one of them.

Digital Trends: Elon Musk says Neuralink chip will let you stream music into your brain

Tesla CEO Elon Musk’s brain interface company, Neuralink, wants to let you stream music directly into your brain.

Musk recently said that Neuralink’s brain chip technology would allow people to stream music into their brains effortlessly. Musk confirmed the feature on July 19 over an exchange with a Twitter user who asked: “If we implement Neuralink – can we listen to music directly from our chips? Great feature.” Musk replied with a simple, “yes.”

Now, regardless of what you think of Musk’s claim or the technical feasibility of actually streaming music into the brain, I want to make one thing clear. I hope to leave no ambiguity.

I want to try this.

I really want to experience this at some point.

I love music as much as the next person, but my cumulative experience with headphones, stereo systems, and ear buds has been mixed at best. The idea of bypassing that entirely and streaming my favorite songs directly into my brain just has so much appeal and not just from a practical aspect.

Music can be a powerful influence. That’s not just an opinion. There’s real science behind it. I’ve certainly experienced that. There are songs on my playlist that can affect my mood, my focus, and my emotional state. Those effects can be pretty diverse. That should be a given. You’re not going to react to a Metallica song the same way you react to a Taylor Swift song.

It’s a testament to how impactful music can be. Now, there might be a way to stream it directly into our brains? Sign me up!

It’s not an incredibly radical idea when you break it down. In a sense, music and all its powerful influences reach your brain already. The path is just indirect: the sound has to pass through your ear, your ear has to process it, and the interpretation of those sounds has to travel to various parts of your brain. Neuralink is just offering a more direct path.

Imagine hearing something that makes no sound.

Imagine experiencing the emotions and excitement of music in a unique and intimate way.

It may not be the most groundbreaking use of neural implants, but I still want to try it. If being stuck in lockdown has taught us anything these past few months, it’s that we need a diverse range of experiences. There’s only so much we can get from binge-watching Netflix, playing video games, and Zoom chatting family members.

We need those experiences to enrich our lives. We have no idea what kind of state the world will be in by the time this technology is refined. Who knows what kinds of experiences we’ll pursue? Hopefully, I’m around to stream my favorite playlist directly into my brain. It might not be the most profound use of this technology, but it will definitely rock.


Filed under futurism, Neuralink, Sexy Future, technology