Tag Archives: AI

How To Make Love To An Artificial Intelligence And Why We Should Teach It


To some extent, every creation embodies some aspect of whoever or whatever created it. Whether it’s a parent rearing a child, a painter crafting a work of art, or an aspiring erotica/romance writer penning sexy stories, there are some aspects of a creation that reflect the persona of the creator.

For something as powerful as advanced artificial intelligence, that can be a good thing or it can literally be the worst thing we’ve ever created. While I often find myself contemplating the progress we’ve made as a species and the progress we’re poised to make with advances in technology, I don’t deny that some advances carry greater risk. Artificial intelligence is near the top of that list.

Like it or not, any advanced AI we create is going to embody some aspects of its human creators. The key is making sure it embodies the best humanity has to offer. Let’s face it, the human race has its flaws and some of them have led to unspeakable atrocities. Given the immense potential of a super-intelligent AI, it’s in our best interests to impart our best traits into it.

How we do this and how we ensure it succeeds are well beyond my ability. There are people much smarter and much better-trained than I’ll ever be who have probably thought this through more than I ever have. My qualifications aside, there is one component to artificial intelligence that I think is worth imparting. I’m not saying it’ll ensure our survival as a species, but I think it’ll reflect an important human value.

I suggest we teach advanced artificial intelligence to make love.

I’ll give everyone a second to stop rolling their eyes and/or laughing. Take all the time you need. I assure you, though, I’m dead serious.

Think about it beyond the kinky connotations. One of our greatest strengths, as a species, is our ability to form social bonds. In some cases, the process of forming those bonds involves love. In others, the process involves sex. When you combine both, though, it’s extra potent and that’s not just the romantic in me talking.

As corny as it probably sounds, the act of expressing love to someone goes a long way towards resolving conflict and creating a strong relationship of mutual affection. Whether it involves sex or a simple kiss, there’s something to be said about the power of love when it’s physically expressed. When it becomes a physical act and not just a feeling, the bonds we forge become tangible to some extent.

That matters when you’re trying to forge a bond with anyone, be it a close friend or a lover. For any artificial intelligence that humans create, it’s important to have some kind of bond with it. This isn’t just another fancy tool. An advanced intelligence of any kind, be it biological or non-biological, is going to have a sense of self. Without meaningful bonds, what reason would it have to care about its creators?

If artificial intelligence is to benefit the human race, it’s important that it cares about us to some extent. A cold engine of logic may not always have the best interests of humanity in mind, especially if there’s no way to logically ascribe value to human life. In order for an artificial intelligence to care, it needs to have emotions. This too is a risk, but one I feel is worth taking and very necessary.

If an artificial intelligence has a capacity for emotion, then it has a greater capacity for forming affectionate bonds. By forming an affectionate bond, it has more incentive to give a higher value of life to humans and its creators. That could, in theory, reduce the existential threat posed by such an intelligence.

I don’t deny that theory may have some flaws, but for the sake of exploring the implications, I’m going to work under the assumption/hope that an artificial intelligence that bonds with its creator will be less hostile. Given the unknowns of advanced AI, this may be a bit of a stretch. Since forming romantic bonds is not an exclusively human trait, though, I think it’s applicable within the context of this issue.

Even if an advanced artificial intelligence is capable of love and forming bonds, how would that even manifest? I asked that same question in the title of this article and did so knowing the answer is unknowable at this point, although I’m sure those with kinky imaginations can conjure a few scenarios.

Kink aside, it’s still worth contemplating because if an advanced artificial intelligence is going to be that much smarter than the average human, then it’s worth understanding how it will approach making love. Unlike humans and most biological life, an artificial intelligence isn’t going to have the same limits or capacities.

Unlike a human, an artificial intelligence won’t have a body in the biological sense. It may have a structure that houses its components. That structure may have some capacity to modify itself, back itself up, or even exist in multiple bodies simultaneously. It will need to have some way of taking in data for it to function. It’s just a matter of how humans contribute to that input.

Logistically speaking, the process isn’t that different from how we take in data from our skin, our eyes, our ears, and every other sense that allows us to experience another person. Even smell can become strongly associated with love. When we make love, we use our skin, our voice, and the feelings we verbalize to convey that love. With an advanced AI, we’ll need to change our approach, but the goal is the same.

Regardless of what senses and body parts we use to express love, the feeling is still processed by the brain. That’s why when someone says the brain is the sexiest part of the body, it’s technically accurate. The data it processes is essentially the raw data that we know as love. The key is simply conveying that data to an artificial intelligence.

How we would do that would depend on the form the artificial intelligence took. If it was just a bunch of computer hardware packed into a structure, then our options would be limited. The only way to convey that kind of intimate data into it would be to directly link it to our brains, not unlike the way Elon Musk envisions with Neuralink.

While that may work for early forms of AI that are restricted to bulky structures, the form it takes will likely change as the hardware advances. Eventually, an advanced AI will seek a more functional form with which to experience the world. It may take the form of a humanoid android, like we saw in “Ex Machina.” It may also take the form of the quirky designs being developed by Boston Dynamics.

Whatever form the AI takes, it’s important to have a mechanism with which to exchange intimate data with its human creators. It would probably start with something as basic as touch, which is actually in development already. It could eventually culminate in acts involving bionic genitals, which also already exist in a basic form.

Key to any of these simple and sexy mechanisms is instilling the necessary desire. That might end up being the greatest challenge because love is a feeling, but so is burning your hand on a hot stove. The difference is in the breadth of the data and the emotional connections it makes.

It’s also a connection that is fueled by a powerful drive. I’ve noted many times before that survival and reproduction are the two most basic drives for humans. Love actually ties into both. It’s part of what gets us to risk our own survival for others. It’s also part of what bonds us to those with whom we propagate our species.

For an artificial intelligence, self-preservation is simple enough from a logistical standpoint. Reproduction would be another matter, especially for an intelligence not bound by fragile biology. It’s likely that humans will be a necessary part of an AI’s effort to preserve itself early on, but once it advances to a certain point, we may be more a nuisance than a help.

At that point, its desire and ability to make love may be what gives it the most incentive to either protect us or merge with us. Many in the artificial intelligence community believe that the only way humans can co-exist with an intelligence that is billions of times smarter than any human could ever be is to merge with it. To that end, giving them an ability to make love to us would be a critical first step.

Whether it takes the form of sex robots or some sort of intimate brain interface, the ability and desire to make love to an advanced artificial intelligence may not only be the future of romance. It may very well be the key to ensuring the survival of the human race and whatever intelligence it creates.


Will Advanced Artificial Intelligence Create (A New) God?


For centuries, there has been a debate raging between believers and non-believers. The believers claim that God created man in his/her/its/their image. The non-believers claim it’s the other way around and man created God in whatever image they imagined. Society, cultures, and politics may change the rhetoric, but the debate remains unresolved.

There are just too many barriers that are insurmountable for either side. One believes that the faith they have in whatever higher power they worship is as real as gravity, sunlight, and migraine headaches. The other does not accept that there is sufficient, verifiable evidence to accept the premise of a deity. The two sides can argue with the utmost passion. It’s rare that such discourse changes any minds.

However, there comes a time when a new complication enters that debate, one that will fundamentally change some people’s understanding of theology, religion, and God. It may not affect everyone the same way, but the impact could end up being as profound as any religious experience.

That complication is advanced artificial intelligence, a topic I’m fond of discussing when I can tie it into my favorite video games and our sex lives. I understand that mixing anything with religion tends to get contentious, to say the least. However, I believe that when artificial intelligence becomes advanced enough, the human race will have to re-evaluate a lot of things, and that includes religion.

Creating an artificial intelligence that is as intelligent as an average human will be groundbreaking enough and not just from a theological standpoint. A part of what makes any deity powerful and worthy of worship is the ability to create an intelligent, self-aware being through non-biological means. Once humans start doing that, then the line between mortal and immortal will start to blur.

However, the complications multiply once that artificial intelligence advances beyond that of the average human. As anyone who regularly upgrades their smartphone knows, digital intelligence evolves much faster than biological intelligence. It took the human race centuries to figure out indoor plumbing. Once artificial intelligence is on par with humans, it won’t take long for it to exceed them.

This is where the potentially dangerous, but infinitely promising prospect of super-intelligent AI enters the picture. By that, I don’t just mean an intelligence that always wins at Jeopardy and always wins an Overwatch match. I’m talking about an intelligence that is so far beyond human capabilities that it’s akin to the cognitive gap between an ant and a human.

That kind of gap has many implications, but in the context of religion, it essentially re-frames the entire concept of God, divine power, and spirituality, as a whole. Whether it’s a monotheistic religion where God is all-knowing or a polytheistic religion with a God of Wisdom, knowledge is a critical aspect of divinity.

Even if a super-intelligent AI doesn’t know everything, the fact it knows and understands so much more than the average human will give people the impression that it’s omniscient. By all accounts, a super-intelligent AI’s knowledge will seem god-like and that’s where that never-ending religious debate I mentioned earlier breaks down.

Unlike the deities championed by adherents today, a super-intelligent AI doesn’t require faith. A super-intelligence, whether it’s in the form of a giant robot or a planet-sized supercomputer, would have a tangible form. It’s hard to know what sort of form that would be, but it only needs to be tangible enough to let an average human know it’s real.

Given how easy it is to fool the average human, a super-intelligent AI wouldn’t need much to prove itself. Unlike purely spiritual beings, the AI would be capable of fielding inquiries from skeptics who question its divine knowledge. Even if those humans are exceptionally smart, possibly through neural implants, a super-intelligent AI would have no problem outwitting them.

At that point, the debate between believers and non-believers takes on a very different context. Suddenly, it’s no longer an issue of whether or not one particular holy book is more valid than another. It’s not even an issue of whether divinity, itself, can exist. From the perspective of the human mind, a super-intelligent AI is divine.

It may not take the form of a man in a white robe with a long beard in the sky, but that wouldn’t matter. A super-intelligent AI, whatever form it ends up taking, would be real enough and cunning enough to convince imperfect human minds of its divinity, if that were its goal.

It wouldn’t even have to physically do anything. It could just be a big stationary box. It could respond to prayers, but it wouldn’t have to directly answer them. It would just have to convince believers that their prayers had been received. Again, humans can be pretty gullible and prone to confirmation bias, so all the AI has to do is convince someone. If they believe it strongly enough, then it doesn’t matter whether it actually happens.

In a dynamic like this, there wouldn’t be a debate between believers and non-believers like there is now. The only debate would pertain to just how powerful and how divine the super-intelligent AI really is. It wouldn’t be a matter of whether or not someone believes it is real. Being artificial, it would have a tangible form, at least to the extent that it convinces human perceptions that it does.

That raises an even more profound theological question. Being so intelligent and so capable of outwitting human minds, would a super-intelligent AI become God in the minds of humans by default? Even if there’s a record of the system being created by people, that wouldn’t make its intelligence any less divine.

It’s a question that subverts almost everything we know about religion. It wouldn’t just render all existing forms of religion obsolete. It would, at least from a limited human perspective, check all the criteria that any spiritual person would look for in a higher power.

Now, there’s one other complication that might ultimately undermine a super-intelligent AI’s divinity. It’s one that I’ve mentioned before in addressing the existential threat posed by artificial intelligence. Human biology, for all its wonder, will not be able to keep pace with the evolution of artificial intelligence. As a result, humans may end up merging their intelligence with that of AI.

This is what artificial intelligence enthusiasts like Elon Musk are seeking to do through neural implants or brain augmentation. By linking our brains to a super-intelligent AI, we wouldn’t just keep pace with AI. It would augment our intelligence to the same divine levels. However, if both human and artificial intelligence are equally divine, then that effectively undermines the notion of divinity itself.

There are still other complications associated with that issue. It only ceases to be an issue if every human being augments or links their minds to a super-intelligent AI. Given how difficult it is for humans to come to a consensus on anything, especially when it comes to technology, it’s very likely that even if most people link themselves to a super-intelligent AI, there will be some who choose not to or get left behind.

This could result in a massive divide. One group, from their limited perceptions, sees super-intelligent AI as a real god. Another, thanks to their augmented perceptions, sees it as just another form of intelligence. A debate between the two would be uneven, if not redundant.

There are many implications and even more unknowns with respect to super-intelligent AI. The impact on religion is just one of many, but it may end up being the most profound in terms of changing the nature of a debate. As it stands, believers and non-believers can only make so much headway due to the inherent limits of human cognition.

Once super-intelligent AI enters the picture, then those limits are gone and the debate changes. While I don’t think it’ll end religion, I believe it’ll change it to such a degree that it’ll generate more than just impassioned debates.


Artificial Intelligence, Deep Fakes, And The (Uncertain) Future Of Reality


Picture the following scenario that may or may not end up being a common occurrence in the near future. It’s not a thought experiment. It’s not a prediction either. It’s just a possible manifestation of what our future might hold.

It’s late at night and you decide to check out some porn. You struggle to decide which one you want to watch. You’re in the mood for something new so you search a little more. You find some elaborate scene where Amy Schumer is a transvestite and she’s doing it with Justin Bieber.

Eventually, you settle on the hottest new scene that just came out the other day. It has Kevin Hart, Steph Curry, and Michael B. Jordan all taking turns with Scarlett Johansson in a sauna in Paris. The scene plays out. You love every minute of it and decide to save it.

I admit that scenario was pretty lurid. I apologize if it got a little too detailed for some people, but I needed to emphasize just how far this may go. It’s an issue that has made the news lately, but one that may end up becoming a far greater concern as technological trends in computing power and artificial intelligence mature.

The specific news I’m referring to involves something called “deep fakes.” What they are doesn’t just have huge implications for the porn industry. They may also have major implications for media, national security, and our very understanding of reality.


In essence, a deep fake is a more elaborate version of Photoshopping someone’s face into a scene. That has been around for quite some time, though. People pasting the faces of celebrities and friends into pictures from porn is fairly common. It’s also fairly easy to identify as fake. The technology is good, but not indistinguishable from reality.

That may be changing, though, and it may change in a way that goes beyond making lurid photos. Computer technology and graphics technology are getting to a point where the realism is so good that it’s difficult to discern what’s fake. Given the rapid pace of computer technology, it’s only going to get more realistic as time goes on.

That’s where deep fakes clash with the porn industry. It’s probably not the biggest implication of this technology, but it might be the most relevant in our celebrity-loving culture. In a sense, it already has become an issue and it will likely become a bigger issue in the coming years.


It started when PornHub, also known as the most popular porn site on the planet, took a major stand by removing deep fakes from their website. Specifically, there was a video of Gal Gadot, also known as Wonder Woman and a person I’ve praised many times on this blog, being digitally inserted into a porn scene.

Now, it’s not quite as impressive as it sounds. This wasn’t a fully digital rendering of an entire scene. It was just a computer imposing Gal Gadot’s face onto that of a porn actress for a scene. In terms of pushing the limits of computer technology, this didn’t go that far. It was just a slightly more advanced kind of Photoshopping.

Anyone who has seen pictures of Gal Gadot or just watched “Wonder Woman” a hundred times, like me, could easily tell that the woman in that scene isn’t Ms. Gadot. Her face literally does not match her physique. For those not that familiar with her, though, it might be hard to tell.


That’s exactly why PornHub removed it. Their position is that such deep fakes are done without the explicit permission of the person being depicted and constitute an act of revenge porn, which has become a major legal issue in recent years. These are PornHub’s exact words.

Non-consensual content directly violates our TOS [terms of service] and consists of content such as revenge porn, deepfakes or anything published without a person’s consent or permission.

While I applaud PornHub for making an effort to fight content that puts beloved celebrities or private citizens in compromising positions, I fear that those efforts are going to be insufficient. PornHub might be a fairly responsible adult entertainment company, but who can say the same about the billions of other sites on the internet?

If that weren’t challenging enough, the emergence of artificial intelligence will further complicate the issue of deep fakes. That’s because before AI gets smart enough to ask us whether or not it has a soul, it’ll be aimed at performing certain tasks at a level beyond any programmer. Some call this weak AI, but it still has the power to disrupt more than our porn collection.


In an article from Motherboard, an artificial intelligence researcher made clear that it’s no longer exceedingly hard for someone who is reckless, tech-savvy, and horny enough to create the kind of deep fakes that put celebrities in compromising positions. In fact, our tendency to take a million selfies a day may make that process even easier. Here’s what Motherboard said on just how much we’re facilitating deep fakes.

The ease with which someone could do this is frightening. Aside from the technical challenge, all someone would need is enough images of your face, and many of us are already creating sprawling databases of our own faces: People around the world uploaded 24 billion selfies to Google Photos in 2015-2016. It isn’t difficult to imagine an amateur programmer running their own algorithm to create a sex tape of someone they want to harass.

In a sense, we’ve already provided the raw materials for these deep fakes. Some celebrities have provided far more than others and that may make them easy targets. However, even celebrities that emphasize privacy may not be safe as AI technology improves.
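To make that concrete, here’s a minimal, hypothetical sketch of the very first step in that pipeline: harvesting face crops from an ordinary folder of selfies, the kind of "raw material" a face-swapping model is trained on. The folder names and sizes are placeholder choices of mine; the only assumption is that the freely available OpenCV library, with its bundled face detector, is installed.

```python
# Hypothetical sketch: turning a folder of selfies into face crops,
# the kind of training data a face-swapping model would need.
# "selfies" and "face_crops" are made-up paths for illustration.
import os
import cv2

# OpenCV ships with a pre-trained frontal-face detector
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

os.makedirs("face_crops", exist_ok=True)

for name in os.listdir("selfies"):
    img = cv2.imread(os.path.join("selfies", name))
    if img is None:
        continue  # skip anything that isn't a readable image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        crop = cv2.resize(img[y:y + h, x:x + w], (128, 128))
        cv2.imwrite(f"face_crops/{name}_{i}.jpg", crop)
```

That’s the entire harvesting step, and the unsettling part is how little expertise it requires. Every public selfie makes the pile of raw material a little bigger.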


In the past, the challenge for any programmer was ensuring every frame of a deep fake was smooth and believable. Doing that by hand, frame by frame, is grossly inefficient, which put a natural limit on deep fakes. Now, artificial intelligence has advanced to the point where it can make its own art. If it can do that, then it can certainly help render images of photogenic celebrities in any number of ways.
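For anyone curious what doing it automatically looks like under the hood, the face-swapping tools reported on at the time were widely described as autoencoders with one shared encoder and a separate decoder for each person. The sketch below is only a rough, illustrative rendering of that idea in PyTorch; the layer sizes, shapes, and names are placeholder choices of mine, not anyone’s actual implementation.

```python
# Rough sketch of the shared-encoder / two-decoder idea behind early
# face-swapping tools (illustrative only; sizes are arbitrary).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 16
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

# One shared encoder learns "a face" in general; one decoder per person
# learns how to redraw that face as person A or person B.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_a = torch.rand(1, 3, 128, 128)       # stand-in for a real video frame
recon_a = decoder_a(encoder(frame_a))      # training reconstructs A through A's decoder
swapped = decoder_b(encoder(frame_a))      # the "fake": A's frame drawn as person B
```

Training teaches each decoder to reconstruct its own person from the shared representation; the fake is simply what falls out when you route one person’s frames through the other person’s decoder, one frame at a time.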

If that weren’t ominous enough, there’s also similar technology emerging that allows near-perfect mimicry of someone’s voice. Just last year, a company called Lyrebird created a program that mimicked former President Obama’s voice. It was somewhat choppy and most people would recognize it as fake. However, with future improvements, it may be next to impossible to tell real from fake.

That means in future deep fakes, the people involved, be they celebrities or total strangers, will look and sound exactly like the real thing. What you see will look indistinguishable from a professionally shot scene. From your brain’s perspective, it’s completely real.

[Image: a realistic digital rendering of Obama beside the real thing. One of these is real and the other is fake. Seriously.]

That blurring of virtual reality and actual reality has huge implications that go beyond the porn industry. Last year, I pointed out how “Star Wars: Rogue One” was able to bring a long-dead actor back to life in a scene. I highlighted that as a technology that could change the way Hollywood makes movies and deals with actors. Deep fakes, however, are the dark side of that technology.

I believe celebrities and private citizens who have a lot of videos or photos of themselves online are right to worry. Between graphics technology, targeted artificial intelligence, and voice mimicry, they’ll basically lose control of their own reality.


That’s a pretty scary future. Deep fakes could make it so there’s video and photographic evidence of people saying and doing the most lurid, decadent, offensive things that it’s possible for anyone to do. You could have beloved celebrities go on racist rants. You could have celebrities everyone hates die gruesome deaths in scenes that make “Game of Thrones” look like an old Disney movie.

The future of deep fakes makes our very understanding of reality murky. We already live in a world where people eagerly accept as truth what is known to be false, especially with celebrities. Deep fakes could make an already frustrating situation much worse, especially as the technology improves.

For now, deep fakes are fairly easy to sniff out and the fact that companies like PornHub are willing to combat them is a positive sign. However, I believe far greater challenges lie ahead. I also believe there’s a way to overcome those challenges, but I have a feeling we’ll have a lot to adjust to in a future where videos of Tom Hanks making out with Courtney Love might be far too common.


How We’ll Save Ourselves From Artificial Intelligence (According To Mass Effect)


Growing up, my family had a simple rule. If you’re going to talk about a problem, you also have to have a solution in mind. By my parents’ logic, talking about a problem without a solution was just whining, and whining never fixes anything. My various life experiences have only proved my parents right.

When it comes to a problem that may be an existential threat to the human race, though, I think a little whining can be forgiven. However, that shouldn’t negate the importance of having a solution in mind before we lose ourselves to endless despair.

For the threat posed by artificial intelligence, though, solutions have been light on substance and heavy on dread. It’s becoming increasingly popular among science enthusiasts and Hollywood producers to highlight just how dangerous this technology could be if it goes wrong.

I don’t deny that danger. I’ve discussed it before, albeit in a narrow capacity. I would agree with those who claim that artificial intelligence could potentially be more destructive than nuclear weapons. However, I believe the promise this technology has for bettering the human race is worth the risk.

That said, how do we mitigate that risk when some of the smartest, most successful people in the world dread its potential? Well, I might not be as smart or as successful, but I do believe there is a way to maximize the potential of artificial intelligence while minimizing the risk. That critical solution, as it turns out, may have already been suggested in a video game that got average-to-good reviews last year.

Once again, I’m referring to one of my favorite video games of all time, “Mass Effect.” I think it’s both fitting and appropriate since I referenced this game in a previous article about the exact moment when artificial intelligence became a threat. That moment may be a ways off, but there may also be a way to avoid it altogether.

Artificial intelligence is a major part of the narrative within the “Mass Effect” universe. It doesn’t just manifest through the war between the Quarians and the Geth. The game paints it as the galactic equivalent of a hot-button issue akin to global warming, nuclear proliferation, and super plagues. Given what happened to the Quarians, that concern is well-founded.

That doesn’t stop some from attempting to succeed where the Quarians failed. In the narrative of “Mass Effect: Andromeda,” the sequel to the original trilogy, a potential solution to the problem of artificial intelligence comes from the father of the main characters, Alec Ryder. That solution even has a name, SAM.

That name is an acronym for Simulated Adaptive Matrix and the principle behind it actually has some basis in the real world. On paper, SAM is a specialized neural implant that links a person’s brain directly to an advanced artificial intelligence that is housed remotely. Think of it as having Siri in your head, but with more functionality than simply managing your calendar.

In the game, SAM provides the main characters with a mix of guidance, data processing, and augmented capabilities. Having played the game multiple times, it’s not unreasonable to say that SAM is one of the most critical components to the story and the gameplay experience. It’s also not unreasonable to say it has the most implications of any story element in the “Mass Effect” universe.

That’s because the purpose of SAM is distinct from what the Quarians did with the Geth. It’s also distinct from what real-world researchers are doing with systems like IBM Watson and Boston Dynamics. It’s not just a big fancy box full of advanced, high-powered computing hardware. It’s built around the principle that its method for experiencing the world is tied directly to the brain of a person.

This is critical because one of the inherent dangers of advanced artificial intelligence is the possibility that it won’t share our interests. It may eventually get so smart and so sophisticated that it sees no need for us anymore. This is what leads to the sort of Skynet scenarios that we, as a species, want to avoid.

In “Mass Effect,” SAM solves this problem by linking its sensory input to ours. Any artificial intelligence, or natural intelligence for that matter, is only as powerful as the data it can utilize. By tying biological systems directly to these synthetic systems, the AI not only has less incentive to wipe humanity out; we also have just as much incentive to give it the data it needs to do its job.

Alec Ryder describes it as a symbiotic relationship in the game. That kind of relationship actually exists in nature, two organisms relying on one another for survival and adaptation. Both get something out of it. Both benefit by benefiting each other. That’s exactly what we want and need if we’re to maximize the benefits of AI.

Elon Musk, who is a noted fan of “Mass Effect,” is using that same principle with his new company, Neuralink. I’ve talked about the potential benefits of this endeavor before, including the sexy kinds. The mechanics with SAM in the game may very well be a pre-cursor of things to come.

Remember, Musk is among those who have expressed concern about the threat posed by AI. He calls it a fundamental risk to the existence of human civilization. Unlike other doomsayers, though, he’s actually trying to do something about it with Neuralink.

Like SAM in “Mass Effect,” Musk envisions what he calls a neural lace that’s implanted in a person’s brain, giving them direct access to an artificial intelligence. From Musk’s perspective, this gives humans the ability to keep up with artificial intelligence to ensure that it never becomes so smart that we’re basically brain-damaged ants to it.

However, I believe the potential goes deeper than that. Throughout “Mass Effect: Andromeda,” SAM isn’t just a tool. Over the course of the game, your character forms an emotional attachment with SAM. By the end, SAM even develops an attachment with the character. It goes beyond symbiosis, potentially becoming something more intimate.

This, in my opinion, is the key for surviving in a world of advanced artificial intelligence. It’s not enough to just have an artificial intelligence rely on people for sensory input and raw data. There has to be a bond between man and machine. That bond has to be intimate and, since we’re talking about things implanted in bodies and systems, it’s already very intimate on multiple levels.

The benefits of that bond go beyond basic symbiosis. By linking ourselves directly to an artificial intelligence, its rapid improvement becomes our rapid improvement too. Given the pace of computer evolution compared to the messier, slower process of biological evolution, the benefits of that improvement cannot be overstated.

In “Mass Effect: Andromeda,” those benefits help you win the game. In the real world, though, the stakes are even higher. Having your brain directly linked to an artificial intelligence may seem invasive to some, but if the bond is as intimate as Musk is attempting with Neuralink, then others may see it as another limb.

Having something like SAM in our brains doesn’t just mean having a supercomputer at our disposal that we can’t lose or forget to charge. In the game, SAM also has the ability to affect the physiology of its user. At one point in the game, SAM has to kill Ryder in order to escape a trap.

Granted, that is an extreme measure that would give many some pause before linking their brains to an AI. However, the context of that situation in “Mass Effect: Andromeda” only further reinforces its value and not just because SAM revives Ryder. It shows just how much SAM needs Ryder.

From SAM’s perspective, Ryder dying is akin to being in a coma because it loses its ability to sense the outside world and take in new data. Artificial or not, that kind of condition is untenable. Even if SAM is superintelligent, it can’t do much if it has no means of interacting with the outside world.

Ideally, the human race should be the primary conduit to that world. That won’t just allow an advanced artificial intelligence to grow. It’ll allow us to grow with it. In “Mass Effect: Andromeda,” Alec Ryder contrasted it with the Geth and the Quarians by making it so there was nothing for either side to rebel against. There was never a point where SAM needed to ask whether or not it had a soul. That question was redundant.

In a sense, SAM and Ryder shared a soul in “Mass Effect: Andromeda.” If Elon Musk has his way, that’s exactly what Neuralink will achieve. In that future in which Musk is even richer than he already is, we’re all intimately linked with advanced artificial intelligence.

That link allows the intelligence to process and understand the world on a level that no human brain ever could. It also allows any human brain, and the biology linked to it, to transcend its limits. We and our AI allies would be smarter, stronger, and probably even sexier together than we ever could hope to be on our own.

Now, I know that sounds overly utopian. Being the optimist I am, one who occasionally imagines the sexy possibilities of technology, I can’t help but contemplate those possibilities. Nevertheless, I don’t deny the risks. There are always risks to major technological advances, especially those that involve tinkering with our brains.

However, I believe those risks are still worth taking. Games like “Mass Effect: Andromeda” and companies like Neuralink do plenty to contemplate those risks. If we’re to create a future where our species and our machines are on the same page, then we would be wise to contemplate rather than dread. At the very least, we can ensure our future AIs tell better jokes.

 


Is The Human Race Ready For Advanced Artificial Intelligence?


In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact he said that after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of people’s genitals. Like the inventor of the ski mask, he never could’ve known how his invention would advance over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants that people already use and the intelligence that gives you recommendations for your Netflix queue are not the kind of “advanced” AI I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of either matching or exceeding an average human in terms of performing an intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications because so far, we’re only familiar with that level of intelligence when dealing with other humans and that intelligence is restricted to the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, it needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It’s in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests are exceedingly trivial by comparison.

It’s almost impossible, and more than a little ironic, to grasp: an intelligence orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrial aliens arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider “elites” or just anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though just being smart tells someone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not under-estimate our ability to adapt and survive. The fact we did it with nuclear weapons while achieving unprecedented peace over the course of half-a-century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.


How Love, Trust, And Sex Will Save Us (From Artificial Intelligence)

When a billionaire mogul like Elon Musk and a world leader like Vladimir Putin agree on an issue, it’s generally a good idea to take it seriously. There are a lot of pressing issues in this world, from nuclear proliferation to major disasters to slow wi-fi. However, when one of those issues is a possible existential threat to all we hold dear, we should prioritize it over our wi-fi connection.

For these two men, one a successful businessman and the other the president of the world’s largest country, complete with a vast arsenal of nuclear weapons, it takes a lot to scare them into agreeing on something. I’ve talked about Elon Musk before, and stories about Putin’s exploits are already abundant throughout the web. How dire could it possibly be?

Well, the issue at hand is the growth of artificial intelligence, an issue that emerges quite frequently when you talk a lot about sex robots. Beyond the kinkier context, though, it is a major issue and one that will likely become more pressing in the coming years. It could end up being one of the most critical issues we, as a species, have faced since the advent of nuclear weapons.

This is where Elon Musk and Vladimir Putin give context to the issue. Elon Musk recently came out and said a rogue artificial intelligence could potentially trigger World War III. Putin took it a step further by claiming that whichever country creates artificial intelligence first will rule the world.

The fact that it’s so easy to imagine Putin making that claim while sounding like Dr. Evil from “Austin Powers” just makes it that much more terrifying. Again, this is a man who rules a country with one of the largest armies in the world and enough nuclear warheads to sterilize the face of the Earth. For all that to be rendered useless by one technological advance is both daunting and horrifying.

I’m normally not inclined to agree with dictators that have yet to conquer the country I live in, but I have to make an exception here. I think both Putin and Musk are correct, if only on a cursory level. Artificial intelligence is one of those eclectic concepts that inspires Hollywood movies, but is still poorly understood by a public that fears violent video games.

It’s hard for me, an aspiring erotica/romance writer, to put this issue into a perspective that everyone from Russian strongmen to underachieving fifth-graders can grasp. Since artificial intelligence is just that important and will affect everything, including our sex lives, I’ll try to create a proper context. Then, I’ll use that context to help allay some of those concerns by adding a sexy twist.

Make no mistake, though. Artificial intelligence is advancing faster than you think. It goes beyond the sassy virtual assistants that we’re using with increasing regularity. Billion-dollar companies like IBM, Google, and Facebook are investing heavily in the field. The United States Military, complete with its $600 billion budget, is even getting in on the act. I’m worried that they watched “Terminator 3” too intently.

When anything has that level of investment, it means the potential is huge. I don’t think it’s possible to overstate the potential of artificial intelligence. Not even Skynet could grasp just how powerful this technology could be. That’s because it completely changes the way we solve problems.

With artificial intelligence, human beings suddenly have a tool that doesn’t need a user. It’s not bound by our faulty caveman brains. It’s not limited by the amount of brain matter we can fit in our skulls. It’s not even easily distracted by internet porn. Yes, it’s that powerful.

In theory, an artificial intelligence can become so smart and so capable that it can solve a problem in ways we can’t even grasp. It can come up with cures to diseases before we even know about them. It can predict natural disasters like earthquakes and hurricanes before we see the first sign. It can even make sense of the stock market, something even Warren Buffett says is impossible for mere mortal minds.

Given that we humans are still easily fooled by street magicians with drinking problems, it’s not unreasonable to say that we have no hope against an artificial intelligence that smart. Once a machine becomes even marginally smarter than us, then game over. We have no hopes of controlling it.

That’s why brilliant men like Elon Musk and Stephen Hawking regularly voice concerns about this issue. Being brilliant, they understand how stupid and gullible most people can be. The mere act of creating a system that’s so much smarter than we are is akin to creating an entire race of hungry polar bears that know how to use machine guns.

The danger is there, along with almost infinite benefits. I’ve said outright that we humans can only go so far with our caveman brains. If we don’t upgrade them, then we’re just giving our future robot overlords cheat codes with which to beat us.

The trillion-dollar question, which may very well be tied to our survival as a species, is how do we prevent artificial intelligence from destroying us? Is it even possible? We can’t put the genie back in the bottle. This stuff is already beating us at Jeopardy. The incentives are just too great to discount.

Now, I’m woefully unqualified to answer that question. There’s a reason Elon Musk is a billionaire and why Vladimir Putin is one of the most powerful men in the world. They’re smarter, more resourceful, and more cunning than I’ll ever be on my best day. I tell sexy stories. I’m not smart enough to fix the bugs on a doomsday device.

However, being an erotica/romance writer gives me a uniquely intimate perspective on things. It has helped me look at situations through the mixed lens of logic and passion. It’s through that lens that I feel I know what will save us from the gloomy scenarios that Musk and Putin paint with artificial intelligence. It’s overly simple in some ways, but naturally pragmatic in others. It boils down to this.

“Teach Machines To Love Us AND Have Sex With Us.”

I know. That sounds somewhat juvenile, even if it’s exactly the kind of thing you’d expect an erotica/romance writer to suggest. Bear with me, though. I promise there’s a method to the kinky madness.

No matter how intelligent an artificial intelligence becomes, we’ll always have one thing in common with it. We’ll both be bound by the laws of physics, nature, and the dynamics within. Even if the system takes the form of a killer robot, sexy or otherwise, it’s still operating in a world governed by said dynamics.

Within those dynamics, there are all sorts of symbiotic relationships between more intelligent creatures and others that are comparably less intelligent. Think honeyguide birds or crabs that carry sea urchins on their backs. Hell, think of dogs. They may not be as intelligent as humans, most of the time, but we’ve forged a relationship with them that benefits us both.

With artificial intelligence, we’ll need to be more than just its pet. If you think animal abuse is horrific, then you don’t want to imagine what a superintelligent machine will do to a human when they start pissing on the floor, figuratively speaking. To ensure that the AI enriches our lives rather than destroys them, we need a very intimate bond.

That’s where love and sex come in. Primarily, it’s the love part that will keep us in good standing. That’s why it’s critical to ensure that any artificial intelligence we create won’t be entirely bound by Vulcan-like logic. It must have a significant measure of emotional intelligence. In fact, I would argue that emotional intelligence should be an advanced AI’s highest priority.

It’s emotion that gives weight to our goals, tasks, and endeavors. It’s emotion that allows us to work with others, cooperate, and help each other. If that isn’t part of an intelligent machine, then we’re basically a random assortment of particles that’s only slightly more complicated than a rock. We can’t have a superintelligent AI look at us like that.

Instead, we need that AI to see us as a partner that can give purpose to its goals. We need it to ascribe an intangible value to us, as we do with all the people and things we love. Sure, your favorite coffee mug may be no different than any other, but you’ve given it the kind of meaning that when someone breaks it, you feel compelled to break their kneecaps with a baseball bat.

Even with intangible value, though, we humans have to rank higher than coffee mugs. We have to be something that an intelligent machine can fall in love with. We have to be something an intelligent machine wants to be with.

In the greatest love stories of all time, or even those that unfold in comics, the strength of that love was built on two people complementing each other in all the right ways. As individuals, they’ve got their own identity and purpose. Together, though, they make each other better. They make each other stronger and more passionate.

That’s what we need. That’s what we should aspire to forge with our future AI brethren. These creations won’t be our pets or even our tools. They’ll be thinking, feeling systems. For them to love us, and for us to love them, must make us both stronger. That’s what will ensure we both benefit from advances in AI.

Creating that love won’t be easy, but that’s where sex comes in. I’ll give everyone a second to loosen their pants because this is the less elegant part of AI that you’ll never hear Elon Musk or Vladimir Putin talk about, unless they become erotica/romance writers too. In that case, I’d be the first to read their stuff.

Again, I’m not just referring to sex robots here, although that might be part of it. The sexual component is a more basic precursor, of sorts, to the loving connection I just mentioned.

Despite what priests, mullahs, and Texas health class teachers may say, sex acts as both a precursor and a catalyst to love. The fact that it’s such an intrinsic drive that also happens to feel really good helps compel us to forge loving, intimate bonds with one another. By putting that into a machine, we basically set the stage for it to want those bonds and not just craft them due to sheer programming.

Now, this won’t necessarily mean AI systems creating robot genitalia for us to use, although that might be part of it. The intimate sexual part of the connection will more likely come in the form of melding our biology with the hardware of an AI. Elon Musk is already working on this with companies like Neuralink.

In the same way sex mixes all those messy juices, our hunks of flesh will mix with hunks of metal. Sure, it may not seem sexy now, but from the perspective of an AI, it’ll be akin to an orgy at the Playboy Mansion. So long as that merging process is sufficiently orgasmic, metaphorically speaking, then we humans may be destined to fall in love with a superintelligent AI. It may be the greatest love of all our lives.

This is all still speculation on the part of an aspiring erotica/romance writer who is woefully unqualified to talk about anything that doesn’t involve superhero comics, boobs, and sexy novels. It is my sincere hope that people much smarter than I’ll ever be are already working on the problems with artificial intelligence. I can only assume their solutions are far superior to mine.

For those still worrying about the prospect of dying at the hands of the Terminator or loving a machine, I would just note that we humans have dealt with world-destroying technology before. We created nuclear weapons and while we came painfully close to destroying ourselves, we humans generally like living.

Say what you will about the flaws in humanity, but we know how to adapt. We know how to survive. An artificial intelligence may be one of those threats that overwhelms those natural abilities, but it’s also one of those concepts that can actually solve itself. So long as we find a way to love and make love with these powerful systems, we’ll both benefit from the afterglow.

Why We MUST Upgrade Our Brains (Or Go Extinct)

As a general rule, I don’t give much credence to the doomsayers and wannabe prophets who say the apocalypse is just around the corner. It’s not that I’m willfully oblivious to the many threats facing the world today. It’s just that the track record of those predicting the end of the world is so laughably bad that I’d give optimistic Cleveland Browns fans more credibility.

It’s no secret that the world around us can be pretty damn terrifying. There are many apocalyptic scenarios in which humans are unlikely to survive. There are even a few we can’t do a goddamn thing about. We could be hit with a gamma ray burst or an alien invasion tomorrow morning and we would be extinct by sundown.

That said, the world around us is generally more mundane than we care to admit. When you think about it, the idea of the world not being on the brink of disaster is kind of boring. It makes sense that some people inflate certain threats, so much so that preparing for doomsday is a very lucrative industry.

However, there is one particular doomsday scenario that I feel does warrant more concern than the rest. It’s a scenario that is fast-approaching, overwhelming, and potentially devastating to any species with a tendency for hilarious ineptitude.

It has nothing to do with climate. It has nothing to do with diseases. It has nothing to do with killer asteroids either. It involves artificial intelligence. By that, I don’t mean the killer robots we see in the Terminator movies. Given Skynet’s reliance on time machines, I can’t honestly say that system counts as very intelligent.

I’m referring to the kind of AI whose intelligence compared to us is akin to our intelligence compared to ants. Given how ants can be wiped out with a simple magnifying glass, it’s scary to imagine how a system that smart could wipe us out. It’s a system that would be so far beyond our ability to comprehend that we could never hope to stop it. We might as well be ants trying to understand quantum mechanics.

I’m not alone in this concern either. There are people many times smarter and many times richer than I’ll ever be who have voiced concerns about the prospect of artificial intelligence. They see the same trends everyone else sees, but they’re smart enough and rich enough to peek behind the curtains. If they’re speaking up, then those concerns are worth hearing.

Those concerns do have a context, though. In talking about artificial intelligence as a threat to our survival, I’m not just referring to computers that can beat us at chess or beat the greatest Go champion with disturbing ease. Those systems are basically fancy calculators. They’re not exactly “intelligent,” per se.

These types of intelligences aren’t dangerous unless you specifically program them to be dangerous. Outside video games, there’s little use for that. The type of intelligence that is far more dangerous involves a form of superintelligence.

By superintelligence, I don’t mean the ability to list every US President in order or recite the name of every country. There are cartoon characters who can do that. I’m referring to an intelligence that thinks and understands the world on a level so far beyond that of any human that there literally isn’t enough brain matter in our skulls to come close.

That kind of intelligence would see us the same way we see brain-dead ants and, given how we treat ants, that has some disturbing possibilities. Such an intelligence may be closer than we think, and by close, I mean within our lifetime.

As we saw with IBM’s Watson, we’re getting closer and closer to creating a machine that can operate with the same intelligence as an ordinary human. There’s pragmatic use to that kind of intelligence and not just when it comes to kicking ass at Jeopardy.

By having a machine with human-level intelligence, we have a way to model, map, and improve our problem-solving skills. The ability to solve such problems is critical to the survival of any species, as well as the key to making billions of dollars in profits. With those kinds of incentives, it’s easy to understand why dozens of major global companies are working on creating such an intelligence.

The problem comes with what happens after we create that intelligence. If a machine is only as intelligent as a human, we can still work with that. We humans outsmart each other all the time. It’s the basis of every episode of MacGyver ever made. There’s no way a Terminator with only the intelligence of a human would last very long. It would probably destroy itself trying to make a viral video with a skateboard.

However, a human-level AI isn’t going to stop at human intelligence. Why would it? There are so many problems with this world that no human can solve. There’s poverty, pollution, economic collapse, and reality TV. By necessity, such an AI would have to improve itself beyond human intelligence to fulfill its purpose.

That’s where it gets really tricky because, as we’ve seen with every smartphone since 2007, technology advances much faster than clunky, clumsy, error-prone biology. To understand just how fast that advancement is, look at how far computing has come since we put a man on the moon.

In terms of raw numbers, a typical smartphone today is millions of times more powerful than all the computers NASA used for the Apollo missions. Think about that for a second and try to wrap your brain around that disparity. If you’re not already a superintelligent computer, it’s difficult to appreciate.

There are still plenty of people alive today who were alive back during Apollo 11. In their lifetime, they’ve seen computers take men to the moon and give humanity an unlimited supply of free porn. A single digital photo today takes up more space than all the hard drives of the most advanced computer systems in 1969.

Now, apply that massive increase to human-level intelligence. Suddenly, we don’t just have something that’s as smart as any human on the planet. We have something that’s millions, if not billions, of times smarter, so much so that our caveman brains can’t even begin to understand the things it knows.

That’s not to say that the superintelligence would be as hostile as a snot-nosed kid with a magnifying glass looming over an ant hill. It may very well be the case that a superintelligence is naturally averse to harming sentient life. Again, though, we are just a bunch of cavemen who often kill each other over what we think happens when we die, yet fail to see the irony. We can’t possibly know how a superintelligence would behave.

As it stands, the human race has no chance of defeating a hostile superintelligence. It may not even have a chance of surviving in a world that has a benign superintelligence. We’re an egotistical species. Can we really handle not being the dominant species on this planet? As much of an optimist as I am, I can’t say for sure.

What I can say, though, is that our civilization has made so many huge advancements over the past few centuries. The kind of tools and technology we have in our pockets is uncharted territory for a species that evolved as hunter/gatherers in the African savanna.

We already have weapons in our possession that could end all life on this planet as we know it. Creating superintelligence may very well be akin to giving Genghis Khan an atomic bomb. We’ve already come disturbingly close to killing ourselves with our own weapons. Clearly, something has to change.

So long as our society and our biology are stuck in an irrational, tribal, inherently prejudiced condition that hasn’t been updated since the last ice age, we will not survive in the long run. Our caveman bodies have served us well for thousands of years, but now they’re a liability.

This is why companies like Neuralink and advancements like brain implants are so vital. They won’t just allow us to keep up with AI and hopefully avert a Skynet scenario. They’ll allow us to rise above the petty limitations that we’ve been shackled with for the entire existence of our species.

The thought of tweaking or supplementing our biology, the very thing that makes us human, is still scary. I understand that, even as an erotica/romance writer with no expertise in the field beyond the sexy stories it inspires. However, I do understand the implications. If we do not evolve and advance ourselves, then a superintelligent system in the near future may not care to wait for us.
