Tag Archives: Advanced AI

The Exciting/Distressing World Of AI-Generated Art

Whenever I talk about artificial intelligence, I often talk about the possibilities and potential it has for the near and distant future. I admit I sometimes go a bit overboard with the speculation and the hyperbole. In case I haven’t made it abundantly clear already, I am not an expert. I do not consider myself exceptionally smart or well-informed on this topic. I just find it very interesting and quite tantalizing, given how much I’ve seen computer technology evolve over the course of my lifetime.

However, in talking about artificial intelligence, I rarely get a chance to talk about some actual tools and products powered by AI that we can use today. That’s just the nature of technology like this. It takes a while to develop and refine. It also takes a while to actually create a usable product with it that doesn’t require a master’s degree in computer programming.

But this past year has seen the rise of a new type of AI-powered product that is making its presence felt. It’s called AI-generated art and it’s exactly what it sounds like. It’s art entirely created by an artificial intelligence that uses massive amounts of data on art, shapes, and design to craft artwork based solely on text suggestions.

It’s not entirely new. For a number of years now, there have been AI systems that can essentially create photo-realistic depictions of people who don’t exist. That, alone, is an impressive feat and one that has some distressing implications for those worried about fake IDs, identity theft, or catfishing. However, these new AI-Generated art programs have the potential to do so much more.

While the mechanisms behind it are very complex, the interface itself is very simple. A user just enters a brief description of what kind of artwork they want. Then, the program processes that and crunches the data. Finally, it generates an image. Sometimes, it takes a few seconds. Sometimes, it takes a bit longer, especially if the prompt is more elaborate. If you want to see a good example of what it can create, just check out the brief, but hilarious skit John Oliver did. Just don’t watch it while eating cabbage.

Aside from the inherent comedy gold that can be mined from this technology, just take a step back and consider the larger implications of these tools. These are prompts being entered by people who probably don’t have much in terms of art skills. And as John Oliver noted, some are being entered by people who aren’t entirely sober. While the images they generate don’t exactly look like masterpieces or anything someone would mistake for photo-realism, it’s still remarkable they’re as good as they are.

In seeing some of this art, it actually reminds me somewhat of early video game consoles that began rendering 3D graphics. I’m old enough to remember the somewhat clunky transition from 2D to 3D graphics. Just look at early Playstation games or games like Super Mario 64. They weren’t exactly polished, but they were a step in that direction.

Now, compare that to a typical game on the Playstation 5. In the span of just 20 years, the graphics and renderings have become so realistic that they’re navigating uncanny valley territory. With that in mind, imagine what these AI art-generation programs will do with that kind of refinement. I don’t know if it’ll take 20 years or longer, but it does create some tantalizing possibilities.

Ordinary people could conjure detailed, photo-realistic backgrounds for games, portraits, or stock art.

Ordinary people could conjure elaborate scenes and illustrations for stories they wish to tell.

Ordinary people could create artistic depictions of elaborate fantasies, including the sexy kind.

This is especially intriguing for someone like me because, as I’ve noted in the past, I cannot draw worth a damn. I have practically no skills when it comes to creating visual artwork, be it with a pencil or a computer program. I’ve never had that skill. I’ve tried many times in the past to develop those skills. I’ve never succeeded. I’ve always been better with writing and words. And I’ve been perfectly content with that.

Now, this technology gives someone like me an opportunity to craft images to go along with my words. It opens the possibility that I could one day write a story, sexy or non-sexy, and supplement it with real, vivid depictions of the characters and scenes. That is definitely something I want to pursue. I have experimented a bit with the AI art programs, but they’re still somewhat limited. I won’t be incorporating them into my sexy short stories anytime soon.

But if these programs continue to improve, then it’s only a matter of time before I craft a story in that manner. Honestly, that really does excite me, more so than a lot of the promising news surrounding artificial intelligence. I understand there are aspects to the technology that may never happen or just won’t be happening within my lifetime. But these AI art programs are real. They exist now and they’re going to be refined, like most emerging technology. It remains to be seen how fast they’ll achieve a higher quality, but I will certainly be watching it closely.

If you want, you can even test these programs out yourself. This site lists 10 sites you can go to right now, but these are the sites I recommend.

Dall-E 2

Artbreeder

DeepAI

StarryAI

Please note that most of these services are limited and none allow anyone to create images that are overly pornographic or outright illegal. However, you can still create some legitimately good images, which you can save and use in whatever way you please. I’m already hoping to use some for my YouTube channel.

But even though this technology is especially intriguing to people like me with no art skills, I don’t deny it has actual artists very concerned. There has already been one instance where an AI-generated artwork won an art contest, which the real artists did not appreciate. It’s not just that an AI like this won without putting in the effort an artist usually would. In many cases, these programs used art other artists had created to refine their models. Over time, these programs could conceivably put those same artists out of work.

I can totally understand that concern. Who would hire a talented, but expensive, artist to create images if they could just use an AI program to create them in seconds and for free? Do you really think big companies like Disney, Warner Brothers, and Universal wouldn’t fire their entire art team if they could get the same results for a fraction of the cost? They’re billion-dollar profit-driven companies. You know they would.

Even if this technology doesn’t completely replace real-life artists, it’s still essentially doing most of the work. On some level, it dehumanizes the artistic process, even more so than a camera. A camera can only render the image in front of it. These programs could conceivably conjure images that nobody has ever seen or imagined, a feat that once belonged solely to artists.

What does that mean for the future of artists?

What does that mean for the future of art?

I don’t claim to know the answers. I’m not even sure how to speculate on something like this. Again, I have no art skills with respect to drawing or creating images from scratch. I’m the kind of person who will embrace this technology more than most, so I’m going to be somewhat biased in that regard.

But artists and governments are starting to take notice. China has already made waves by attempting to ban AI-generated media that isn’t appropriately marked. While that may temper some trends in this field, it’s not going to stop it. There’s just too much to be gained at this point. The genie is out of the bottle and there’s no putting it back.

It’s sure to cause more issues, especially as the technology becomes more refined. It probably won’t be long before a major problem occurs because someone used AI-generated art in some nefarious way. Some are already trying, but they can only achieve so much, given the limits of technology.

That will eventually change. If you’re reading this, you’re likely to see some AI-generated artwork that you’ll mistake for something real. At that point, even concerns about deep fakes will be minor in comparison. Only time will tell.

Until then, non-artistically inclined people like me can start contemplating what thoughts and ideas we can one day make real.

Leave a comment

Filed under Artificial Intelligence, futurism, technology

Would You Willingly Plug Your Brain Into The Matrix?


What if there was a virtual environment that was so real and so lifelike that it was completely indistinguishable from the real world?

What if you had an opportunity to upload the entire contents of your mind into that environment?

Would you do it? Even if you didn’t have a full measure of control over the environment, would you still venture into this virtual world?

I’m not just asking these questions as another thought experiment, nor am I asking it as an excuse to talk about “The Matrix: Resurrections.” Yes, the prospect of another movie in the mold of “The Matrix” did inspire me to pose these questions, but I also think these questions are worth seriously contemplating.

Back in 1999, the year “The Matrix” first came out, the idea of an entirely simulated world seemed like classic sci-fi tech, the likes of which we’d never see in our lifetimes. That’s understandable. In 1999, the most advanced simulations we knew could only be rendered by the original Playstation and those hardly looked realistic.

Since then, computing power and graphics technology have come a long way. These days, graphics in video game consoles are so realistic that they’re nearing Uncanny Valley territory. It won’t be that long before we have computer renderings that are so advanced, so lifelike, and so realistic that our brains can’t tell the difference.

At that point, creating an entirely simulated world is just a matter of computing power, scale, and interface. Since brain/computer interfaces are already being developed, it’s not unreasonable to think that we’ll have a Matrix-like simulation available within the next 40 years. Many people alive today who are under the age of 50 might very well live long enough to see that technology.

Once we have it, we’ll have some important decisions to make. Some of those decisions will be societal. If people suddenly have access to a virtual world where they can be anyone, do anything, and immerse themselves in any conceivable experience, then what does that do to society? What does that do to people, communities, nations, and social structures?

Those are far messier questions to contemplate, which is why I’m not going to belabor them too much at this point. Instead, I want to keep this question within the context of individuals. Everyone’s circumstances and beliefs are different. As a result, that may impact whether you’d take advantage of such an opportunity or what kind of environment you’d seek to create.

Personally, if I ever had an opportunity to upload my mind into a virtual environment on par with the Matrix, I would do it, but the extent and circumstances would vary. I suspect others may feel the same.

If I could create my own personal virtual environment before I uploaded my mind into it, then I would certainly be more willing. I think that’s an important factor. The humans in “The Matrix” didn’t have any measure of control over the environment they were in. I think that would complicate any experience that anyone would have in such a world.

It would also depend heavily on my physical state in the real world. If this technology became available and I was old, weak, and in poor health, then I would certainly be more inclined to use it. That assumes that any technology involving human enhancement hasn’t progressed significantly and people still age, get sick, and die.

Like it or not, our physical bodies in the real world will break down. If the technology to manage and reverse that isn’t available, then virtual environments might be the only way we can continue to live in any meaningful capacity. I certainly hope that isn’t my only option when I get to be that age, but if it is, then that simplifies my decision.

It’s hard to know what sort of options we’ll have. I still believe that technology involving human enhancement and creating virtual worlds will advance in parallel. One would, by default, need the other in order to properly interface with these environments. As such, it would complicate any decision about venturing into virtual environments.

Then, there’s the actual nature of those virtual environments. If we can control what environment we go into, then that opens the door to even more possibilities. Within these worlds, you could be a billionaire playboy, a medieval king, a famous celebrity, or super athlete. From your brain’s perspective, it would feel every bit as real as what you’re feeling right now.

Whether or not our brains would accept it is a different story. I suspect there may be some who, once they enter these worlds, would never want to leave. There may even be some who willingly erase their own memories of the real world so that this new virtual world is their new “reality.” That’s exactly what Cypher desired in “The Matrix” and I suspect others might share that desire.

It really does depend on the person, their situation, and what sort of virtual world they seek to create. We probably won’t know the full impact until we create our first true Matrix-like virtual world. I sincerely hope I live long enough to see that. If you’re reading this, hopefully you get to see it as well. It should give you plenty of time to contemplate these questions and whether you’ll venture into those worlds.

Leave a comment

Filed under Artificial Intelligence, biotechnology, futurism, Sexy Future, Thought Experiment

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin and I try not to ignore it. Like anything, technology has a dark side. It can be used to harm just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically in how it operates ethically. For a company as big and powerful as Google, that’s not a trivial comment. This is what she had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

I bolded that last sentence because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then, she was fired and started expressing her concerns publicly. The fact that this happened, and that Google isn’t facing much in terms of repercussions, should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not some utopian technocrats. They’re a business that is obligated to make its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions and people die because of them. An advanced AI will only make it many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.

Leave a comment

Filed under Artificial Intelligence, technology

Deep Fake Technology Can Now Make Tom Cruise Iron Man: Signs And Implications

Certain technology advances slowly and steadily. It’s why we’re still waiting for a cure for the common cold. Other technological breakthroughs advance at such a fast rate it’s hard to keep up with. Anyone who doesn’t regularly upgrade their cell phone understands that.

That brings me to the technology of deep fakes. I’ve talked about them before and the implications this technology has for the entertainment industry. Well, I’m here to report that this technology might be advancing faster than I thought.

Recently, a new deep fake video hit the web. It’s nothing overly nefarious. It’s actually a play on a real story from the mid-2000s. Before Robert Downey Jr. was cast as Tony Stark in the first “Iron Man” movie, Tom Cruise was in the running for that role.

He has since claimed he was never close to getting that role, but it’s still an interesting idea. For most Marvel fans, it’s hard to imagine anyone other than RDJ donning that now-iconic armor. However, there’s no denying that Tom Cruise being Iron Man would’ve changed a franchise, as well as cinematic history.

Well, thanks to deep fake technology, we don’t have to imagine anymore. We can now see for ourselves what it would look like if Tom Cruise had been cast as Iron Man in the Marvel Cinematic Universe. See for yourself.

Watching this, I have to say it was more than a little jarring. It’s not just that seeing someone other than RDJ as Iron Man is strange. I was genuinely impressed by how real it looked.

Yes, it did become a bit obvious at times that there was some digital trickery at work. I’ve seen enough Tom Cruise movies to know what he looks like. I could tell that the body just did not match the iconic face at times.

However, I’m still impressed at just how seamless it appeared, especially when he was in the Iron Man costume. It really did look like Cruise had embraced the role as much as RDJ had. Even though the voice had to come courtesy of a skilled voice actor, the graphics technology is definitely on pace to cross the uncanny valley sooner rather than later.

The implications here are profound. If the technology is already at this point, then it’s a given that Hollywood and propaganda pushers will start embracing it sooner rather than later. Hollywood, which is reeling in the wake of a historic pandemic, may have more incentive to embrace it than most.

Beyond actors and actresses who get “cancelled” for their behavior, it may start as a cost cutting measure. If it costs too much to put Hugh Jackman or Tom Cruise on a movie set, why not just put a cheaper actor in their place and just deep fake the more iconic figure over it? If the technology is that good and nobody can tell the difference, it almost makes too much sense.

It may get to a point where nobody outside the studio knows whether the figure we see on screen was actually “there” to give that moment life. They may just be a digital scan mixed with digitally generated audio, which is also advancing.

This has even larger implications with propaganda. If the technology gets to a point where we can make any public figure say or do anything we want, no matter how deplorable, then how can we trust any media image? Would “cancel culture” even be feasible at that point? If people can just claim an embarrassing moment was a deep fake, how would we know?

It’s a distressing thought, but it’s something we’ll have to account for. We may end up having to contemplate it sooner than we thought. This technology can already show us a world in which Tom Cruise was cast as Iron Man. What other worlds will it reveal?

We’ll find out soon enough.

Leave a comment

Filed under Artificial Intelligence, futurism, media issues, superhero comics, superhero movies, technology, YouTube

Artificial Intelligence Is Learning Law: Is Government Next?

It’s inevitable. As technology advances, certain industries are going to become obsolete. That’s why the horse-and-buggy industry is incredibly limited. It’s also why companies don’t make typewriters or LaserDisc movies anymore. Once better tech becomes available, an industry either adapts or disappears. Just ask Blockbuster.

Sometimes, it’s obvious that an industry is becoming obsolete. Again, just ask Blockbuster. As soon as something better, easier, and more convenient comes along, it’s only a matter of time before it takes over. However, it’s when things aren’t quite as obvious where more dramatic changes occur.

In terms of dramatic change, few things have the potential to generate more than artificial intelligence. I’ve highlighted that many times before, but a lot of that potential depends on advances that haven’t happened yet. They’re still likely to happen at some point, which may or may not be in my lifetime. They’re just not there yet.

That said, AI doesn’t have to become advanced on the level of Skynet or Hal 9000 to impact and/or disrupt major industries. The AI technology we have now is already having an impact. It may only be a narrow form of AI, which is AI that’s focused on performing a specific task, like playing chess. Its potential is still immense and some fields are feeling it more than others.

One industry that might feel it first is law. Now, at the risk of inspiring one too many lawyer jokes, I’m going to try and keep things general here. I’m also going to try and fit in some personal experience. I know some lawyers personally. I’ve been in law offices and I’ve seen how they work. You don’t have to be that much of a visionary to understand how AI could change this industry entirely.

Recently, TechNews did a story on how artificial intelligence is learning basic legal operations and learning it quite well. Given the massive amounts of data and technicalities included in American law, a narrow AI is ideally suited to handle such tasks. However, I don’t think the piece fully grasps the implications.

TechNews: Lawyers Beware: Artificial Intelligence Is Learning Law – And Doing Frighteningly Well

AI or artificial intelligence is starting to find its footing in the legal field. The world is now on the brink of revolution in legal profession spearheaded with the extensive use of AI in the entire industry, specifically by the in-house lawyers.

Just like how email greatly changed the way people conduct their business on a daily basis, AI is also expected to become an ever-present force and an invaluable assistant to almost all lawyers.

But the million-dollar question now is, what does the future look like for AI as far as the legal industry is concerned? A much bigger question is, will AI soon replace real life lawyers?

These are not unreasonable questions. What will happen to the current legal industry if much of the legal grunt-work can be handled by an AI? What will happen to the industry when it’s no longer necessary to have a huge team of overpaid lawyers to conduct competent legal operations?

As someone who has been in his share of law offices, I can make a few educated guesses. I can easily imagine firms shrinking their office space, but expanding their operations. Most of the legal offices I’ve gone to dedicate 80 percent of their office space to storing documents and secure research material. Very little is left or necessary for the actual people doing the work.

The recent pandemic has only revealed that plenty of this work can be done from home or remotely. Some legal proceedings are even unfolding through Zoom calls, albeit with mixed results. It’s a significant step in that it undermines and disrupts the traditional model for handling the law. It also raises a much larger question that the TechNews article didn’t ask.

Once AI learns the law, is learning government next?

It’s a natural progression. Governments make and administer laws. An AI that specializes in the law would also have to learn government, as well. A narrow AI might be able to process the general bureaucracy of a government, but what happens when those systems become more advanced?

I’m not just talking about a scenario where an AI becomes the government, which I’ve already speculated on. An AI that has perfect expertise in both law and government operations could have many less obvious effects. Inefficiencies that often go unnoticed in a bureaucracy are suddenly harder to overlook. Inconsistencies that rarely get fixed, due to that bureaucracy, can finally be remedied.

In theory, a sufficiently advanced AI, which need not be as intelligent as a human, could do more than just document legal and government proceedings. It could formulate new laws and policies on its own. Some may seem outrageous from a basic non-lawyer human perspective, but make perfect sense within a functioning legal system or government.

It may still seem like just another tool for lawyers to stay organized, but I think it could be more than that. If an AI makes both legal and government systems more efficient, then what will that mean for those in government? Would politicians be better able to implement their agenda if they have tools like AI at their disposal? Would that necessarily be a good thing?

This is where things get both tricky and political. No matter how confident you are in your political persuasions, the party you favor will not always be in power.

It may seem like politics is trending a certain way, but those trends change quickly. People who think their party is strong now can’t imagine a time when they’ll lose that strength. It happens regularly in any democracy.

Like it or not, your party will one day be out of power. When that happens, do you want the other party having a more efficient means of implementing their policies?

I’m sure everyone’s answer to that question will vary. What no one is certain of is how we’ll keep up with ever-improving AI systems, regardless of what industry they’re in. It’s one thing for a system to make it easier to stream movies or keep track of groceries. It’s quite another when it becomes intimately involved with our laws and our government.

The TechNews article expressed some concern, but only with respect to how it affects the current law industry. I believe AI, even if it’s focused only on law, will have a far larger impact. That’s not to say that AI will render law firms and governments obsolete.

If ever there was one domain in which foresight is critical, it’s this. Some industries can and should become obsolete. Others, like how we govern our society, need a more careful approach. We simply cannot afford our laws and our government to end up like Blockbuster.

3 Comments

Filed under Artificial Intelligence, Current Events, human nature, Neuralink, politics, technology

Jack’s World: Why Neuralink Might Be The Most Important Venture Of All Time

The following is a video for my YouTube channel, Jack’s World. You may recognize the title from an article I wrote years ago in the before times when pandemics were still the subject of bad sci-fi movies. I miss those times too.

The news surrounding Neuralink is still of great interest to me. I still think it’s one of the most important technological advancements of the century. This video simply offers another general overview of why this technology is so important. Enjoy!

2 Comments

Filed under biotechnology, futurism, health, human nature, Jack's World, Neuralink, technology, YouTube

Jack’s World: A Balanced Outlook On Artificial Intelligence

The following is a video I posted on my YouTube channel, Jack’s World. It’s my attempt to offer some perspective on artificial intelligence, a topic I’ve covered many times before. I hope you find it informative and engaging. Enjoy!

2 Comments

Filed under Artificial Intelligence, futurism, Jack's World, technology, YouTube

How Do We Regulate Artificial Intelligence? Seriously, How?

In general, I don’t pay much attention to doomsayers who claim the end is near and we should all cower, tremble, and give them our credit card number. Don’t get me wrong. I still believe there are serious existential threats facing humanity today. Some are worth taking seriously and dedicating resources to addressing. Others are not. Some simply require a more balanced perspective.

There’s a long list of failed apocalyptic predictions. The fact we’re surviving and thriving by most measures shows just how resilient, adaptable, and capable humanity is. There are some threats that I believe humanity will eventually resolve, thanks largely to our accelerating progress in science, technology, and logistics.

Others, however, have me more concerned. While some are more immediate than others, one in particular continues to confound me, as well as some of the smartest people in the world. It involves artificial intelligence, an emerging technology that’s as promising as it is unpredictable. Given the complexity of this technology, it’s difficult to explain in totality, but it can be best summed up by one simple question.

How do you regulate artificial intelligence?

That’s not a rhetorical question. It’s not a thought experiment either. It’s a serious, honest question that people far smarter and far more capable than me are just starting to ask.

Elon Musk is one of them. Very recently, he called for more regulation on artificial intelligence. That, alone, should be both telling and worrying. This man is a billionaire. Usually, billionaires are more inclined to advocate removing regulations. Whenever they make an exception, that's a sign they know it's serious.

Even though Musk is one of the top advocates for solving big problems with technology, he still has concerns about the problems associated with artificial intelligence. In AI circles, it’s often called the control problem. It’s not a very creative name, but it gets the point across.

How do you control something that is potentially as smart, if not smarter than a human?

How do you manage something that thinks, adapts, and evolves faster than any machine or living thing?

How do you regulate an artificial intelligence that was built by humans, but isn’t at all human?

These are all difficult questions to contemplate, let alone legislate. Even Musk doesn’t provide specifics. Chances are he doesn’t know any more than the rest of the non-billionaire population. That’s a problem because if we’re going to try and regulate this technology, we need to understand it. On top of that, politicians and lawmakers have a long and embarrassing history of failing to understand technology.

However, this isn’t just about writing laws that protect citizens from being exploited by tech companies. Artificial intelligence, especially the kind that exceeds human intelligence, has capabilities that go beyond sending text messages from bathroom stalls. If handled improperly, it wouldn’t just be disruptive. It could become an existential threat, destroying humanity in ways we literally cannot contemplate.

Now, I try to be an optimist in most things involving emerging technology. Humanity has found a way to manage dangerous technology before, namely with nuclear weapons. However, artificial intelligence is a different beast entirely. Regulating it isn’t as easy as simply controlling the materials that make it. The very concept of regulating this technology lacks precedent.

The closest we have to date is Isaac Asimov’s famous three laws of robotics, which were introduced in 1942. Asimov was a brilliant writer and very ahead of his time on some concepts, but this is one issue where we need more than just three simple tenets. We need to think bigger and bolder. If we don’t, then an advanced artificial intelligence will quickly leave us behind.

After that, it won’t matter what kind of regulations we try to pass. It’ll be smart enough to circumvent them. That doesn’t mean humanity is doomed at that point, but we’ll be distressingly vulnerable. I know it’s in our nature to procrastinate on things we don’t see as vital, but if ever there was an issue to make an exception, this is it.

2 Comments

Filed under Artificial Intelligence, biotechnology, futurism, philosophy, technology, Thought Experiment

Guest Post: 5 Highly Recommended Books on Machine Learning (ML) and Artificial Intelligence (AI)


The following is an article submitted by Harsh Arora, a freelance blogger and writer who shares a similar interest in artificial intelligence, futurism, and the future of the human race. To Mr. Arora, I sincerely thank you for this submission and your continued support.


We would first of all like to welcome all types of readers to our blog – newbies who are just curious about these buzzwords, as well as experts who would like to extend their existing knowledge of the subject. Having established that, it is imperative not only to define these two concepts (AI and ML) but also to differentiate between them. Although the terms are often used interchangeably, they are in fact different from one another. AI is the broader concept, in which we build machines that make decisions based on the data we feed them. ML, on the other hand, is a subset and application of AI in which we feed machines data and allow them to learn the decision rules on their own.
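To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the function names, the toy "message length" feature, the spam example) is invented for this post and not taken from any particular library; it simply contrasts a hand-written decision rule with a rule learned from examples.

```python
# "AI" in the broad sense: a hand-written decision rule applied to data.
def rule_based_spam_filter(message):
    # The rule is fixed by the programmer, not learned.
    return "free money" in message.lower()

# ML: the decision rule is *learned* from labeled examples instead of hand-coded.
# Here, a toy nearest-mean classifier over message length (a deliberately
# simplistic feature, chosen only to keep the example short).
def train_length_classifier(examples):
    spam_lengths = [len(m) for m, label in examples if label == "spam"]
    ham_lengths = [len(m) for m, label in examples if label == "ham"]
    spam_mean = sum(spam_lengths) / len(spam_lengths)
    ham_mean = sum(ham_lengths) / len(ham_lengths)

    def classify(message):
        # Predict whichever class mean the message length is closer to.
        if abs(len(message) - spam_mean) <= abs(len(message) - ham_mean):
            return "spam"
        return "ham"

    return classify

examples = [
    ("WIN FREE MONEY NOW CLICK HERE", "spam"),
    ("limited offer!!! claim your prize today", "spam"),
    ("hi", "ham"),
    ("lunch at noon?", "ham"),
]
classify = train_length_classifier(examples)
```

The point of the sketch is only this: in the first function a human wrote the rule, while in the second the rule emerges from the training data, which is the essence of the AI/ML distinction the books below explore in far more depth.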

Following are the books we recommend for you to learn more about them:

Machine Learning for Absolute Beginners: A Plain English Introduction – Oliver Theobald

It’s easy to see which part of our reader base this particular book is targeted towards. You may be a complete outsider to the world of ML and still be able to understand its granular technical aspects through this book. Oliver Theobald assumes no programming or coding background on the part of the reader and lets you learn right from scratch. It is not only the perfect book for newbies but also for experts in the field, because it explains the basic concepts in a very coherent and distinct manner. This book not only helps you learn the concepts of ML but also allows you to unlearn and then relearn them, something that is really important for such a subject.

The Hundred-Page Machine Learning Book – Andriy Burkov

This is once again a book that will interest not only beginners but also experts in the field. Andriy has managed to distill the concepts of ML into a basic and easily comprehensible set of notes. With just 100 pages at his disposal, he captures the over-arching essence of ML. It is, of course, not a deep dive into the subject like some of our other recommendations, but it is a wonderful summary of it. It is perfect for people who want to understand this technology and its implementations and implications in the real world.

Artificial Intelligence: A Modern Approach (3rd Edition) – Stuart Russell & Peter Norvig

Stuart Russell is one of the sharpest minds working in the field of AI and is a faculty member at the University of California, Berkeley. Additionally, he is an Oxford graduate and also holds a PhD from Stanford. For the third edition of the book, Stuart collaborated with Peter Norvig, the Director of Research at Google. Together, they have created a well-researched and well-defined approach to understanding modern AI. This book is perfect for undergraduate or graduate-level students, or even for laymen with a basic understanding of the fundamentals of AI. This long-anticipated edition of its best-selling predecessors offers the most up-to-date and comprehensive understanding of the theory and practice of artificial intelligence.

Machine Learning – Tom M. Mitchell

This is a classic book in which the author covers the techniques and concepts of numerous fields and unifies them to provide an in-depth view of ML. Some of the subjects covered include reinforcement learning, inductive logic programming, and genetic algorithms. Tom simplifies these complicated concepts through a clear and explanatory style of writing. At the same time, he uses tools such as case studies and relevant examples to provide a comprehensive overview. Lastly, he assumes no prior knowledge of these complex ideas on the part of the reader.

Superintelligence – Nick Bostrom

If you are familiar with the work of Mr. Nick Bostrom, you know you are in for a treat with this book. He takes a different approach, explaining not only artificial intelligence itself but also the effects it may have on our existence. Nick believes that self-aware machines are potentially a bigger threat to humanity than climate change. He has authored over 200 publications, and his writing forces you to take him seriously in this seemingly sci-fi piece of literature. He helps us understand how humans, the most intelligent form of life to date, have governed the fate of existence since the dawn of the species. However, faced with a species (of sorts) that has the potential to be smarter than us, what chance is there that it won’t dominate us?

Artificial Intelligence for Humans (Fundamental Algorithms: 1) – Jeff Heaton

If you are planning to build a career in artificial intelligence, this should be your starting point, and you should read it from cover to cover. Jeff Heaton covers several topics in depth, such as clustering, distance metrics, dimensionality, linear regression, error calculation, and hill climbing. The book takes you through the actual mathematical calculations, which you can compute yourself and then see applied in the real world. However, to build a career in this industry, you must understand not only the basic principles of AI but also algebra and computer programming. This book builds on those concepts through various programming languages, such as C, Java, C#, R, and Python.

These books are some of the best on the market and will be perfect for people at all knowledge levels of AI and ML. Given that Industry 4.0 is upon us and almost all technology is slowly being integrated with it, we suggest everyone learn more about it. However, it is completely up to you to form your own opinion about whether this technology will harm humans in the long run. We also suggest you read up on a few other technologies prevalent in this era, such as IoT, blockchain, and cloud computing.

About me: Harsh Arora is a proud father of four rescued dogs and a leopard gecko. Besides being a full-time dog father, he is a freelance content writer/blogger and a massage expert who is skilled in using the best massage gun.

2 Comments

Filed under Artificial Intelligence, biotechnology, futurism, technology

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and the son-in-law to Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they are noteworthy is when one of those strikes incurs heavy civilian casualties. They have also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million, while a single combat drone costs about $16 million. That makes drones nearly ten times cheaper, and you don’t need to be a combat ace to fly one.

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions based on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

To the credit of these highly trained men and women in the military, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States Military has gone on record saying they’re looking for ways to integrate advanced AI into combat drones. While they stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly-simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act on their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case-scenarios overlook both the strategic and practical aspect of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wanted to take a single person out, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to hit with traditional missiles in such a way that it had no hope of defending itself.

Granted, drones this advanced probably won’t be available on the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, then the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go even further than preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make war itself obsolete.

7 Comments

Filed under Artificial Intelligence, Current Events, futurism, technology