Tag Archives: ethics in technology

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin and I try not to ignore it. Like anything, technology has a dark side. It can be used to hurt just as much as it can be used to help, if not more so. You could argue that we couldn't have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically about how ethically it operates. For a company as big and powerful as Google, that’s not a trivial matter. This is what the report had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

I bolded that last sentence because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a difficult, highly subjective question, and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position to answer it than I am, or than most other people with only a passing interest in this field.

Then she gets fired and starts expressing her concerns publicly. The fact that Google can push out a researcher like her without facing much in the way of repercussions should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not some utopian technocrats. They’re a business that is obligated to keep their investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI would only make that many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may present themselves as teams of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.



The First Genetically Modified Humans Have Been Born: Now What?


When the USSR launched Sputnik 1 on October 4, 1957, it didn’t just kick-start the space race. It marked a major technological paradigm shift. From that moment forward, venturing into space wasn’t just some futuristic fantasy. It was real and it had major implications for the future of our species.

On November 26, 2018, a Chinese scientist named He Jiankui announced that the first genetically modified humans had been born. Specifically, twin girls had their genetic code modified at the embryonic stage to disable the CCR5 gene, making them highly resistant to HIV. In the history of our species, this moment will likely exceed the importance of Sputnik.

This man may have just upstaged Neil Armstrong.

To appreciate why this is such a big deal, consider the full ramifications of what Dr. He achieved. The change he made to those girls’ genomes is one they could not have inherited naturally. The protective CCR5 variant it mimics arose from a mutation in a small population of Northern Europeans and is exceedingly rare outside of them. It is best known for providing significant immunity to common strains of HIV.

This is of significant interest to China, which has been dealing with a surge in HIV rates in recent years. Even though AIDS isn’t a death sentence anymore, the medication needed to manage it is costly and the regimen tedious. These two girls, who have not been publicly named thus far, may now have a level of resistance that they never would’ve had without genetic modification.

On paper, that’s an objective good. According to the World Health Organization, approximately 35 million people have died because of AIDS since it was first discovered and approximately 36.9 million people are living with the disease today. It’s in the best interest of society to take steps towards preventing the spread of such a terrible disease, especially in a country as large as China.

However, Dr. He has caused more consternation than celebration. Shortly after he announced the birth of the two children, Chinese authorities suspended his research activities. Their reasoning was that he had crossed ethical boundaries by subjecting humans to an untested and potentially dangerous treatment, one that could have unforeseen consequences down the line.

Those concerns have been echoed by many others in the scientific community. Even a co-inventor of CRISPR, the technology used to implement this treatment and one I’ve cited before as a game-changer for biotechnology, condemned Dr. He’s work. It’s one thing to treat consenting adults with this emerging technology. Editing embryos that will grow into children carries a whole host of additional risks.

That’s why there are multiple laws in multiple countries regulating the use of this technology on top of a mountain of ethical concerns. This isn’t about inventing new ways to make your smartphone faster. This involves tweaking the fundamental code of life. The potential for good is immense, but so is the potential for harm.

Whether or not Dr. He violated the law depends heavily on what lawyers and politicians decide. Even as he defends his work, though, there’s one important takeaway that closely parallels the launch of Sputnik. The genie is out of the bottle and there’s no going back. This technology no longer exists only on paper and in the minds of science fiction writers. It’s here and it’s not going away.

Like the space race before it, the push to realize the potential of genetic modification is officially on. Even as the scientific and legal world reacts strongly to Dr. He’s work, business interests are already investing in the future of this technology. The fact that this investment has produced tangible results is only going to attract more.

It’s impossible to overstate the incentives at work here. Biotechnology is already a $139 billion industry. There is definitely a market for a prenatal treatment that makes children immune to deadly diseases. Both loving parents and greedy insurance companies have many reasons to see this process refined to a point where it’s as easy as getting a flu shot.

Even politicians, who have historically had a poor understanding of science, have a great many reasons to see this technology improve. A society full of healthy, disease-free citizens is more likely to be prosperous and productive. From working class people to the richest one percent, there are just too many benefits to having a healthy genome.

The current climate of apprehension surrounding Dr. He’s work may obscure that potential, but it shouldn’t surprise anyone. During the Cold War, there was a similar climate of fear, albeit for different reasons. People back then were more afraid that the space race would lead to nuclear war, and given how close we came a few times, those fears weren’t completely unfounded.

There are reasons to fear the dangers and misuse of this technology. For all we know, the treatment given to those two girls could have serious side effects that don’t come to light until years later. However, it’s just as easy to argue that contracting HIV and having to treat it comes with side effects that are every bit as serious.

What will come of Dr. He’s research remains unclear. I imagine there will be controversy, lawsuits, and plenty of inquiries full of people eager to give their opinion. As a result, he may not have much of a career when all is said and done. He won’t go down in history as the Neil Armstrong of biotechnology, but he will still have taken a small step that preceded a giant leap.

Even if Dr. He’s name fades from the headlines, the breakthrough he made will continue to have an impact. It will likely generate a new range of controversy over the future of biotechnology and how best to manage it in an ethical, beneficial manner. Things may even get nasty at times, with protests on par with or greater than the opposition to genetically modified foods.

Regardless of how passionate those protests are, the ball is already rolling on this technology. There’s money to be made for big business. There’s power and prosperity to be gained by government. If you think other countries will be too scared to do what a science team in China did, then you don’t know much about geopolitics.

Before November 26, 2018, there were probably many other research teams like Dr. He’s that were ready and eager to do something similar. The only thing that stopped them was reservations about being the first to announce they’d done something so controversial with a technology that has been prone to plenty of hype.

Now, that barrier is gone. Today, we live in a world where someone has actually used this powerful tool to change the genomes of two living individuals. It may not seem all that different now, but technology tends to sneak up on people even as it advances rapidly. The huge network of satellites that now orbits our planet didn’t go up in the weeks after Sputnik 1, but it is up there today because someone took that first step.

There are still many unknowns surrounding biotechnology and the future of medicine, but the possibilities just became more real. Most people alive today probably won’t appreciate just how important November 26, 2018 is in the history of humanity, but future generations probably will, including two remarkable children in China.

