Tag Archives: Big Tech

My Thoughts On Facebook And #DeleteFacebook


There are certain people, groups, and companies that are difficult to defend. It's not impossible, even in the case of tobacco companies, criminal organizations, and whoever designs unskippable video ads. It's just difficult, and I'm certainly not qualified to mount such a defense with any real expertise.

I’m just not that smart or informed.

I make that disclaimer because I'm about to defend a company that doesn't have a stellar reputation, to say the least. If anything, the company and its famous founder have gained such a negative connotation that they're just a few steps away from being a tobacco company. Given that tobacco companies actually sell products that give people cancer, that's saying something.

That company is Facebook. I know that just typing that word out for people to read is going to garner a reaction, and not a good one.

I get that. I really do. I’m very much aware of some of the many scandals and shady dealings that Facebook has engaged in since its inception. I’m also aware of the objectively negative impacts that Facebook has had on certain people. That’s not something I can defend, nor would I want to.

There are any number of bad things about Facebook and its impact that I can go over. However, there is one important aspect to those things that I would like to highlight. I don’t think it constitutes a defense of Facebook or its practices, but some may construe it as such. I’m still going to point it out, if only to add some perspective. It all comes down to this.

Facebook is still just a tool. At some point, all its damaging ills are on us, the users, and not the company.

I understand that's an unpopular sentiment. It's not that dissimilar from what gun advocates say about guns. Like any inanimate object, it's not deadly or damaging until somebody willfully uses it. That's certainly true to some extent. People just disagree on the extent.

However, Facebook is not akin to a firearm or some tool that can be used to cause tangible, physical harm to someone. It's a website and software program. Using it requires people to go out of their way to access it. In addition, getting any meaningful use out of it requires active engagement. It's not something you can just hand to a kid and expect them to figure out on their own.

It can still be damaging, but in a very different way. Like it or not, some of those ways are ultimately our responsibility and not that of Facebook. I know it's just a lot easier to criticize the company, its practices, and the conduct of its founder, Mark Zuckerberg. That doesn't change the actual nature of the product.

Yes, there is objectively toxic content on Facebook that degrades, demeans, and deceives people.

However, that toxic content doesn’t come directly from Facebook. It comes from us.

I bring this up because I saw the hashtag, #DeleteFacebook, trending again. That seems to happen several times a year, often after a new scandal or in the wake of an unpopular decision. It's becoming so routine that it's hard to take seriously.

On top of that, the hashtag rarely accomplishes anything. Despite all the scandals and negative press, Facebook's user base is still growing. As of this writing, it has approximately 2.85 billion users. None of the criticism or hashtags has kept the company from growing. None of it has made Mark Zuckerberg any less rich and influential.

I know hashtags are notorious for presenting a false reality to those who seek it, but this particular hashtag has become more a virtue signal than an actual protest. More and more these days, the hashtag has become less about Facebook’s unscrupulous business practices and more about protesting Big Tech, as they’re called.

While there’s certainly a place for protesting the practices of large, powerful corporations, I feel like the substance of that effort gets lost in virtue signaling. People are more inclined to just whine about how bad Facebook is and say how much better their lives are after deleting it. It’s rare for anyone to actually highlight a substantive policy or practice that warrants protest. It’s all about people saying, “Look at me! I gave up Facebook, so I’m better than you!”

I know that’s a simplistic statement that doesn’t apply to everyone. I’m sure there are people whose lives did improve after deleting their Facebook account. At the same time, there are people whose lives are still enriched by Facebook.

Personally, I've met great people through Facebook. I've also been able to stay in touch with friends and family I otherwise would've lost track of. I genuinely value those connections. They even proved critical when there was a major family crisis that everyone was trying to keep up with. That happened several years back when my grandmother got sick. It happened more recently when it helped my father connect with other relatives during the pandemic.

Facebook can be used for good. Like any tool, it can have a positive impact on its users. It’s just a matter of how it’s used.

There will always be people who seek to use any tool for something wrong, deviant, or nefarious. We don’t criticize ski masks the same way we criticize Facebook and for good reason. At the end of the day, it comes back to the individuals using it.

Again, that doesn’t excuse some of the shady things the company has done over the years. I’m not defending that. This extended rant is just me reminding people that some of the worst parts of Facebook only exist because of us, the users. At some point, we have to take responsibility for that. We can’t expect a multi-billion dollar software company to do it for us.


Filed under Current Events, human nature, media issues, rants, technology

Why We Should Treat Our Data As (Valuable) Property

Many years ago, I created my first email address before logging into the internet. It was a simple AOL account. I didn’t give it much thought. I didn’t think I was creating anything valuable. At the time, the internet was limited to slow, clunky dial-up that had little to offer in terms of content. I doubt anyone saw what they were doing as creating something of great value.

I still have that email address today in case you’re wondering. I still regularly use it. I imagine a lot of people have an email address they created years ago for one of those early internet companies that used to dominate a very different digital world. They may not even see that address or those early internet experiences as valuable.

Times have changed and not just in terms of pandemics. In fact, times tend to change more rapidly in the digital world than they do in the real world. The data we created on the internet, even in those early days, became much more valuable over time. It served as the foundation on which multi-billion dollar companies were built.

As a result, the data an individual user imparts onto the internet has a great deal of value. You could even argue that the cumulative data of large volumes of internet users is among the most valuable data in the world.

Politicians, police, the military, big businesses, advertising agencies, marketing experts, economists, doctors, and researchers all have use for this data. Many go to great lengths to get it, sometimes through questionable means.

The growing value of this data raises some important questions.

Who exactly owns this data?

How do we go about treating it from a legal, fiscal, and logistical standpoint?

Is this data a form of tangible property, like land, money, or labor?

Is this something we can exchange, trade, or lease?

What is someone’s recourse if they want certain aspects of their data removed, changed, or deleted?

These are all difficult questions that don't have easy answers. It has already gotten to the point where data ownership became an issue among candidates running for President of the United States. Chances are, as our collective data becomes more vital for major industries, the issue will only grow in importance.

At the moment, it’s difficult to determine how this issue will evolve. In the same way I had no idea how valuable that first email address would be, nobody can possibly know how the internet, society, the economy, and institutions who rely on that data will evolve. The best solution in the near term might not be the same as the best solution in the long term.

Personally, I believe that our data, which includes our email addresses, browsing habits, purchasing habits, and social media posts, should be treated as personal property. Like money, jewels, or land, it has tangible value. We should treat it as such and so should the companies that rely on it.

However, I also understand that there are complications associated with this approach. Unlike money, data isn't something you can hold in your hand. You can't easily hand it over to another person, nor can you claim complete ownership of it. To some extent, the data you create on the internet is created with the assistance of the sites you use and your internet service provider.

Those companies could claim some level of ownership of your data. It might even be written in the fine print of those user agreements that nobody ever reads. It’s hard to entirely argue against such a claim. After all, we couldn’t create any of this data without the aid of companies like Verizon, AT&T, Amazon, Apple, Facebook, and Google. At the same time, these companies couldn’t function, let alone profit, without our data.

It's a difficult question to resolve. It only gets more difficult when you consider laws like the "right to be forgotten." Many joke that the internet never forgets, but it's no laughing matter. People's lives can be ruined, sometimes through no fault of their own. People's private photos have been hacked and shared without their permission.

In that case, your data does not at all function like property. Even if it’s yours, you can’t always control it or what someone else does with it. You can try to take control of it, but it won’t always work. Even data that was hacked and distributed illegally is still out there and there’s nothing you can do about it.

Despite those complications, I still believe that our data is the individual's property to some extent, regardless of what the user agreements of tech companies claim. Those companies provide the tools, but we're the ones who use them to build something. In the same way a company that makes hammers doesn't own the buildings they're used to build, these companies act as the catalyst and not the byproduct.

Protecting our data, both from theft and from exploitation, is every bit as critical as protecting our homes. An intruder into our homes can do a lot of damage. In our increasingly connected world, a nefarious hacker or an unscrupulous tech company can do plenty of damage as well.

However, there’s one more critical reason why I believe individuals need to take ownership of their data. It has less to do with legal jargon and more to do with trends in technology. At some point, we will interact with the internet in ways more intimate than a keyboard and mouse. The technology behind a brain/computer interface is still in its infancy, but it exists and not just on paper.

Between companies like Neuralink and the increasing popularity of augmented reality, the way we interact with technology is bound to get more intimate/invasive. Clicks and link sharing are valuable today. Tomorrow, it could be complex thoughts and feelings. Whoever owns that stands to have a more comprehensive knowledge of the user.

I know it's a common refrain to say that knowledge is power, but when the knowledge goes beyond just our browsing and shopping habits, it's not an unreasonable statement. As we build more and more of our lives around digital activities, our identities will become more tied to that data. No matter how large or small that portion might be, we'll want to own it as much as we can.

It only gets more critical if we get to a point where we can fully digitize our minds, as envisioned in shows like "Altered Carbon." At some point, our bodies are going to break down. We cannot preserve them indefinitely for the same reason we can't preserve a piece of pizza indefinitely. However, the data that makes up our minds could be salvaged, but that opens the door to many more implications.

While that kind of technology is a long way off, I worry that if we don’t take ownership of our data today, then it’ll only get harder to do so in the future. Even before the internet, information about who we are and what we do was valuable.

This information forms a big part of our identity. If we don't own that, then what's to stop someone else from owning us and exploiting that to the utmost? It's a question that has mostly distressing answers. I still don't know how we go about staking our claim on our data, but it's an issue worth confronting. The longer we put it off, the harder it will get.


Filed under Artificial Intelligence, biotechnology, Current Events, futurism, Neuralink, politics, technology

Censorship Vs. Accountability: Know The Difference

I hate talking about censorship.

I hate talking about “cancel culture,” a term I think has all the wrong connotations for all the wrong reasons.

I am a proud American and I believe in free speech to the utmost. I will gladly fight for that freedom and support those who do.

I say all that because, in the wake of the terrible events at the Capitol recently, there has been a lot of whining from certain people. I won't name names. I'll just note that they're whining and ranting about censorship and cancel culture. Watch the news for more than five minutes and you'll know who they are.

Now, I don’t want to go on an extended rant about why censorship is different from not being allowed to post on your favorite social media site. Make no mistake. There is a difference. There’s actually a lengthy legal framework behind what constitutes censorship in a purely legal sense.

I’m not a lawyer, nor am I smart enough to break down the various complexities about censorship, cancel culture, and the various cases attributed to both. I doubt any point I make will change anyone’s mind, especially if they’ve been affected by perceived censoring from either side. Even so, I still think a point is worth being made.

In watching this debate, I’ve noticed something that’s both peculiar and revealing about this issue. Whenever there’s a case of alleged censorship or cancel culture, those who bemoan it only complain when it’s used against them.

For instance, say your favorite politician, someone you vote for and vocally support, was kicked off Twitter, Facebook, and all major social media platforms for rhetoric that incited violence and spread damaging disinformation. Chances are, you're going to defend them more than you would if that politician was someone you vocally opposed.

Yes, I realize this is not a subtle example. It still gets the point across.

Those same people might not raise an eyebrow at a case of a gay couple being denied service at a bakery. They might also be perfectly fine with laws on the books that actually ban atheists from holding public office. Those are both actual cases of discrimination, but they happen to people they disagree with politically. As such, they don't care or actively support it.

On the other side of that coin, the same people who celebrate certain politicians getting kicked off social media will loudly support boycotts and bans against certain individuals speaking on campuses. They'll also whine about a certain scene in a TV show or certain media depictions of minorities.

The absurdity is the same. The only difference is the target and the political affiliation of those complaining about it. It’s just tribalism at its most basic. You support and make excuses for the people and things you support. You protest those you think are against you. There’s no logic to it. It’s all based on who you support or hate.

With respect to those who actually do get censored, there’s also another side of that coin. While I am a vocal proponent of free speech, I also believe in accountability, especially for people in positions of power. It’s one thing for me to say something stupid on social media. It’s quite another for someone who is an elected official.

Those words carry a lot more weight. When someone in that position encourages anger, outrage, and protest, it can lead to real world actions. We saw that at the Capitol. Real people suffered and died because of those actions. In that case, there are consequences.

When you incite violence against others, that’s not free speech. That’s a crime. When your words lead to tangible harm and destruction, that’s something that you are accountable for. It’s not censorship. It’s not cancel culture. It’s just accountability, plain and simple.

If you can’t understand that, then you don’t understand free speech and what it really means. Now, with respect to tech companies removing certain voices or having a bias, that’s another issue. It’s complicated and requires more nuance than simple whining.

However, when it comes to powerful people just wanting to use social media to exercise their power, that’s different. Powerful people must be held to a higher standard. If not, then people far less powerful will pay the price and we can’t have any kind of freedom in a society like that.

Also, blurring female nipples is still bullshit censorship. Can we at least agree on that?


Filed under censorship, Current Events, extremism, outrage culture, political correctness, politics

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin and I try not to ignore it. Like anything, technology has a dark side. It can be used to harm just as much as it can be used to help, if not more so. You could argue that we couldn't have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google's AI research, specifically about how ethically it operates. For a company as big and powerful as Google, that's not a trivial comment. This is what she had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

That last sentence is worth highlighting because I think it's the most relevant. It's also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then, she gets fired and starts expressing concerns publicly. The fact that she has to, and that Google isn't facing much in the way of repercussions, should be concerning. It may also be a sign of the larger challenges we're facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They're not some utopian technocrats. They're a business obligated to keep their investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions and people die because of them. An advanced AI will only make it many times worse.

Once an artificial intelligence system is as smart as a human, it's going to be capable in ways we don't expect and can't control. If its ethics and goals aren't aligned with ours, then what's to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.


Filed under Artificial Intelligence, technology