
The First CRISPR Patients Are Living Better: Why That Matters After 2020

It’s been a while since I’ve talked about CRISPR, biotechnology, and the prospect of ordinary people enhancing their biology in ways straight out of a comic book. In my defense, this past year has created plenty of distractions. Some have been so bad that my usual optimism about the future has been seriously damaged.

While my spirit is wounded, I still have hope that science and technology will continue to progress. If anything, it’ll progress with more urgency after this year. A great many fields are bound to get more attention and investment after the damage done by a global pandemic.

We can’t agree on much, but we can at least agree on this. Pandemics are bad for business, bad for people, bad for politics, and just objectively awful for everyone all around, no matter what their station is in life.

There’s a lot of incentive to ensure something like this never happens again is what I’m saying. While we’re still a long way from ending pandemics entirely, we already have tools that can help in that effort. One is CRISPR, a promising tool I’ve talked about in the past. While it wasn’t in a position to help us during this pandemic, research into refining it hasn’t stopped.

Despite all the awful health news of this past year, some new research has brought us promising results on the CRISPR front. In terms of actually treating real people who have real conditions, those results are in and they give us reason to hope.

One such effort involved using CRISPR to help treat people with Sickle Cell Disease, a genetic condition that hinders the ability of red blood cells to carry oxygen. It affects over 4 million people worldwide and often leads to significant complications that can be fatal.

Since CRISPR is all about tweaking genetics, it’s a perfect mechanism with which to develop new therapies. Multiple patients have undergone experimental treatments that utilize this technology. In a report from NPR, the results are exceeding expectations for all the right reasons.

NPR: First Patients To Get CRISPR Gene-Editing Treatment Continue To Thrive

At a recent meeting of the American Society for Hematology, researchers reported the latest results from the first 10 patients treated via the technique in a research study, including Gray, two other sickle cell patients and seven patients with a related blood disorder, beta thalassemia. The patients now have been followed for between three and 18 months.

All the patients appear to have responded well. The only side effects have been from the intense chemotherapy they’ve had to undergo before getting the billions of edited cells infused into their bodies.

The New England Journal of Medicine published online this month the first peer-reviewed research paper from the study, focusing on Gray and the first beta thalassemia patient who was treated.

“I’m very excited to see these results,” says Jennifer Doudna of the University of California, Berkeley, who shared the Nobel Prize this year for her role in the development of CRISPR. “Patients appear to be cured of their disease, which is simply remarkable.”

Make no mistake. This is objectively good news and not just for people suffering from sickle cell disease.

Whenever new medical advances emerge, there’s often a wide gap between developing new treatments and actually implementing them in a way that makes them as commonplace as getting a prescription. The human body is complex. Every individual’s health is different. Taking a treatment from the lab to a patient is among the biggest challenges in medical research.

This news makes it official. CRISPR has made that leap. The possible treatments aren’t just possibilities anymore. There are real people walking this planet who have received this treatment and are benefiting because of it. Victoria Gray, as referenced in the article, is just one of them.

That’s another critical threshold in the development of new technology. When it goes beyond just managing a condition to helping people thrive, then it becomes more than just a breakthrough. It becomes an opportunity.

It sends a message to doctors, researchers, and biotech companies that this technology works. Some of those amazing possibilities that people like to envision aren’t just dreams anymore. They’re manifesting before our eyes. This is just one part of it. If it works for people with Sickle Cell Disease, what other conditions could it treat?

I doubt I’m the first to ask that question. As I write this, there are people far smarter and more qualified than me using CRISPR to develop a whole host of new treatments. After a year like 2020, everyone is more aware of their health. They’re also more aware of why science and medicine matter. They can do more than just save our lives. They can help us thrive.

We learned many hard lessons in 2020, especially when it comes to our health. Let’s not forget those lessons as we look to the future. This technology is just one of many that could help us prosper in ways not possible in previous years. We cheered those who developed the COVID-19 vaccine. Let’s start cheering those working on new treatments with CRISPR.


Filed under biotechnology, CRISPR, futurism, health, technology

Vaccine Update: Making Sense Of The (Critical) Data On The Johnson & Johnson Vaccine

Last year did a lot to crush my usually optimistic outlook on the future. I experienced a level of cynicism I haven’t felt since high school, a time when I only ever assumed things would get worse and rarely made an effort to change that. It was bad. Global pandemics have that effect on people.

I still made it a point to note when positive things actually happened, especially when it came to news of the vaccine. That marked the ultimate turning point. A vaccine was always going to be our best weapon in terms of ending the pandemic, regardless of what the anti-vaxx crowd says. The fact that we now have two vaccines at our disposal is genuinely encouraging.

Yes, I know the distribution of these vaccines has been a mess, to say the least.

I also don’t deny that the emergence of new mutant strains of the virus could hinder their effectiveness.

Those are legitimate concerns. We should all be worried about how this will impact our ability to finally end this awful pandemic that has caused so much damage. At the same time, we should also be hopeful. Believe me, I’m trying.

That hope got another boost recently when it was announced that a third vaccine, developed by Johnson & Johnson, had completed its final round of trials. Having a third weapon against this virus can only help. In addition, this one has the advantage of being a one-shot vaccine, as opposed to the two shots required by the Moderna and Pfizer vaccines.

While that’s good news for those who hate needles, there is a trade-off. According to the research reported by the media, the vaccine is effective. However, the numbers aren’t quite as promising as what we got with the first two. This is what CNN reported.

CNN: Johnson & Johnson Covid-19 vaccine is 66% effective in global trial, but 85% effective against severe disease, company says

Johnson & Johnson’s Covid-19 single-shot vaccine was shown to be 66% effective in preventing moderate and severe disease in a global Phase 3 trial, but 85% effective against severe disease, the company announced Friday.

The vaccine was 72% effective against moderate and severe disease in the US, the company said.

It’s a striking difference from vaccines from Pfizer/BioNTech and Moderna, and it may give pause to people uncertain about which vaccine to get or when they can get one. The vaccines already on the market in the US are about 95% effective overall against symptomatic Covid-19, with perhaps even higher efficacy against severe cases.

But experts say the Johnson & Johnson vaccine will still be useful against the pandemic in the United States and around the world.

I know the numbers are the only thing that stands out in this piece. That seems to be the main sticking point for most reports about this virus.

Those numbers are still good. The 66% may not be as promising as the roughly 95% efficacy of the other two, but that’s still more effective than a standard flu vaccine. On top of that, being a one-shot vaccine that can be easily stored in a typical refrigerator will help even more. It means more of this vaccine can get to people, especially in places with less-than-ideal health infrastructure.
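To make sense of what a figure like 66% actually means, it helps to see how trial efficacy is computed: it’s the reduction in the attack rate of the vaccine arm relative to the placebo arm. Here’s a minimal sketch in Python; the trial counts below are hypothetical, chosen only to land in the same ballpark as the reported figure, not taken from the actual study.

```python
def vaccine_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate in vaccine arm / attack rate in placebo arm)."""
    rate_vaccine = cases_vaccine / n_vaccine
    rate_placebo = cases_placebo / n_placebo
    return 1 - rate_vaccine / rate_placebo

# Hypothetical trial: 10,000 people per arm, 340 cases among placebo
# recipients vs. 116 among vaccine recipients.
print(round(vaccine_efficacy(116, 10_000, 340, 10_000) * 100))  # → 66
```

The key point this makes plain: 66% effective doesn’t mean 34% of vaccinated people get sick. It means vaccinated people got sick at roughly a third the rate of the unvaccinated group.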

This will definitely help in terms of ending or at least mitigating this pandemic. However, there’s one other critical point of data that’s worth highlighting with this vaccine. It’s in some of the reports, but it’s often difficult to discern, mostly because the media’s track record with reporting science is not that great.

For this vaccine, it boils down to degrees within the data. It’s true the Johnson & Johnson vaccine won’t give you the same protection you’d get from the other two. There’s a good chance that, even after getting this vaccine, you could still become very sick with COVID-19. However, and this is the key, it will help ensure that you don’t become severely ill and die.

That’s not just a sales pitch. According to the research, there were no hospitalizations or deaths among people in the vaccine arm. That may mean some did get sick. Some might have even spread it. However, they didn’t get so sick that they ended up in the hospital. They all survived.

To me, at least, that’s the most important result. Getting sick is one thing. Getting so sick that you die in a hospital bed is something else entirely. One is a bad flu. The other is debilitating illness. That alone makes this vaccine a critical tool in the effort to end this pandemic.

I admit that if I had a choice between the three, I would choose Moderna or Pfizer. I actually know someone who got the Pfizer vaccine and their experience gives me great assurance that it works as intended.

However, if those two were not available and all I could get was the Johnson & Johnson vaccine, I’d take it in a heartbeat. It could mean the difference between being sick and being deathly ill. In a pandemic, that’s the only difference that matters. To all the doctors, nurses, participants, and health care workers who were part of this effort, I sincerely thank you. A lot of lives will be saved because of this vaccine. That makes you true heroes in a world that badly needs them.


Filed under Current Events, health, technology

Why We Should Treat Our Data As (Valuable) Property

Many years ago, I created my first email address so I could log onto the internet. It was a simple AOL account. I didn’t give it much thought. I didn’t think I was creating anything valuable. At the time, the internet was limited to slow, clunky dial-up that had little to offer in terms of content. I doubt anyone saw what they were doing as creating something of great value.

I still have that email address today in case you’re wondering. I still regularly use it. I imagine a lot of people have an email address they created years ago for one of those early internet companies that used to dominate a very different digital world. They may not even see that address or those early internet experiences as valuable.

Times have changed and not just in terms of pandemics. In fact, times tend to change more rapidly in the digital world than they do in the real world. The data we created on the internet, even in those early days, became much more valuable over time. It served as the foundation on which multi-billion dollar companies were built.

As a result, the data an individual user imparts onto the internet has a great deal of value. You could even argue that the cumulative data of large volumes of internet users is among the most valuable data in the world.

Politicians, police, the military, big businesses, advertising agencies, marketing experts, economists, doctors, and researchers all have use for this data. Many go to great lengths to get it, sometimes through questionable means.

The growing value of this data raises some important questions.

Who exactly owns this data?

How do we go about treating it from a legal, fiscal, and logistical standpoint?

Is this data a form of tangible property, like land, money, or labor?

Is this something we can exchange, trade, or lease?

What is someone’s recourse if they want certain aspects of their data removed, changed, or deleted?

These are all difficult questions that don’t have easy answers. It’s gotten to the point where data ownership has become an issue among candidates running for President of the United States. Chances are, as our collective data becomes more vital for major industries, the issue will only grow in importance.

At the moment, it’s difficult to determine how this issue will evolve. In the same way I had no idea how valuable that first email address would be, nobody can possibly know how the internet, society, the economy, and institutions who rely on that data will evolve. The best solution in the near term might not be the same as the best solution in the long term.

Personally, I believe that our data, which includes our email addresses, browsing habits, purchasing habits, and social media posts, should be treated as personal property. Like money, jewels, or land, it has tangible value. We should treat it as such and so should the companies that rely on it.

However, I also understand that there are complications associated with this approach. Unlike money, data isn’t something you can hold in your hand. You can’t easily hand it over to another person, nor can you claim complete ownership of it. To some extent, the data you create on the internet was done with the assistance of the sites you use and your internet service provider.

Those companies could claim some level of ownership of your data. It might even be written in the fine print of those user agreements that nobody ever reads. It’s hard to entirely argue against such a claim. After all, we couldn’t create any of this data without the aid of companies like Verizon, AT&T, Amazon, Apple, Facebook, and Google. At the same time, these companies couldn’t function, let alone profit, without our data.

It’s a difficult question to resolve. It only gets more difficult when you consider laws like the “right to be forgotten.” Many joke that the internet never forgets, but it’s no laughing matter. People’s lives can be ruined, sometimes through no fault of their own. People’s private photos have been hacked and shared without their permission.

In that case, your data does not at all function like property. Even if it’s yours, you can’t always control it or what someone else does with it. You can try to take control of it, but it won’t always work. Even data that was hacked and distributed illegally is still out there and there’s nothing you can do about it.

Despite those complications, I still believe that our data is, to some extent, the individual’s property, regardless of what the user agreements of tech companies claim. Those companies provide the tools, but we’re the ones who use them to build something. In the same way a company that makes hammers doesn’t own the buildings those hammers help build, these companies are the catalyst, not the owner of what gets built.

Protecting our data, both from theft and from exploitation, is every bit as critical as protecting our homes. An intruder into our homes can do a lot of damage. In our increasingly connected world, a nefarious hacker or an unscrupulous tech company can do plenty of damage as well.

However, there’s one more critical reason why I believe individuals need to take ownership of their data. It has less to do with legal jargon and more to do with trends in technology. At some point, we will interact with the internet in ways more intimate than a keyboard and mouse. The technology behind a brain/computer interface is still in its infancy, but it exists and not just on paper.

Between companies like Neuralink and the increasing popularity of augmented reality, the way we interact with technology is bound to get more intimate/invasive. Clicks and link sharing are valuable today. Tomorrow, it could be complex thoughts and feelings. Whoever owns that stands to have a more comprehensive knowledge of the user.

I know it’s a common refrain to say that knowledge is power, but when the knowledge goes beyond just our browsing and shopping habits, it’s not an unreasonable statement. As we build more and more of our lives around digital activities, our identities will become more tied to that data. No matter how large or small that portion might be, we’ll want to own it as much as we can.

It only gets more critical if we get to a point where we can fully digitize our minds, as envisioned in shows like “Altered Carbon.” At some point, our bodies are going to break down. We cannot preserve them indefinitely for the same reason we can’t preserve a piece of pizza indefinitely. However, the data that makes up our minds could be salvaged, but that opens the door to many more implications.

While that kind of technology is a long way off, I worry that if we don’t take ownership of our data today, then it’ll only get harder to do so in the future. Even before the internet, information about who we are and what we do was valuable.

This information forms a big part of our identity. If we don’t own that, then what’s to stop someone else from owning us and exploiting that to the utmost? It’s a question that has mostly distressing answers. I still don’t know how we go about staking our claim on our data, but it’s an issue worth confronting. The longer we put it off, the harder it will get.


Filed under Artificial Intelligence, biotechnology, Current Events, futurism, Neuralink, politics, technology

Why Starlink Is The Next Step In The Evolution Of The Internet

Say what you will about Elon Musk. Believe me, a lot can be said about a Tony Stark wannabe whose wealth is on par with Jeff Bezos’s. Not all of it is flattering, either. I know I’ve expressed a strong appreciation for him in the past. I genuinely believe some of the technology he’s working on will change the world.

I don’t deny that he can be eccentric.

I also don’t deny he says dumb things, often on Twitter.

The man has his faults, but thinking small isn’t one of them. You don’t get to be as rich or successful as Elon Musk by being careful. You also don’t create world-changing technology by being short-sighted. Love him or hate him, Musk has changed the world. He’ll likely change it even more in the coming years.

Some of those changes are years away. A product like Neuralink is probably not going to become mainstream in this decade. However, there is one that’s likely to change the world a lot sooner. In fact, it’s already up and running to some extent. It’s just in the beta phase. Some people can already use it and it’s already proving its worth.

That technology is called Starlink and I believe this will change the internet in a profound way.

Now, you can be forgiven for not keeping up with all of Elon Musk’s elaborate ventures. This one isn’t quite as sexy as brain implants or rockets, but it’s every bit as groundbreaking. If you value internet speeds that don’t suck or lag, then it should be of great interest.

In essence, Starlink is the name and brand of a new satellite-based internet service provider that Musk is creating through his other ambitious venture, SpaceX. The goal is simple on paper, but resource intensive. Instead of the messy network of ground-based hardware that most providers use to deliver the internet to our homes and businesses, Starlink will deliver it from space.

It’s actually not a new idea or product, for that matter. Satellite-based internet service has been around for years. In terms of speeds and utility, though, it just sucks. At most, you’d be lucky to get speeds on par with old school 3G wireless. For some people, that’s better than nothing. For most, it’s not nearly enough to maximize the full power of the internet.

Starlink is hoping to change that. Instead of expensive satellites with high latency and limited bandwidth, this new breed of low-Earth-orbit satellites promises to deliver speeds at or greater than the best 4G internet providers.
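The latency difference comes straight from orbital altitude. A geostationary satellite sits at about 35,786 km, while Starlink’s satellites orbit at roughly 550 km, so the physical floor on round-trip signal time differs by nearly two orders of magnitude. Here’s a back-of-envelope sketch in Python; it assumes straight vertical hops and ignores processing, routing, and queuing delays, so real-world figures will be higher.

```python
C_KM_PER_S = 299_792  # speed of light in vacuum

def min_round_trip_ms(altitude_km):
    """Physical floor on latency: the signal travels user -> satellite ->
    ground station -> satellite -> user, i.e. four altitude-length hops."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(round(min_round_trip_ms(35_786)))  # geostationary: ~477 ms round trip
print(round(min_round_trip_ms(550)))     # low-Earth orbit: ~7 ms round trip
```

Even before any engineering, physics caps a geostationary link at nearly half a second of lag, which is why old satellite internet always felt sluggish and why low-Earth orbit changes the equation.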

On top of that, you don’t need the same elaborate infrastructure of cables and cell towers to deliver it. You just need a constellation of satellites, a receiver no larger than a pizza box, and a clear view of the sky. If you have all that, you can get the full breadth of the internet. It doesn’t matter if you’re in the middle of the desert or at the top of the Empire State Building. It’s there for you to access.

Make no mistake. That’s a big deal for the 3.8 billion people in the world who don’t have internet access. Whether due to lack of infrastructure or funds, it’s just not an option for them. It’s not just underdeveloped third-world countries either. Even here in America, there are large swaths of the country that have little to no reliable internet access.

It’s a big factor in the ongoing divide between rural and urban areas. The people in a small rural community, however good, honest, and hard-working, are still going to struggle if they don’t have reliable internet. They’ll struggle economically, socially, and financially. To date, the efforts to expand the internet to their communities have been lackluster at best.

I can personally attest how bad it is. A few years back, I drove through a very rural part of West Virginia. For a good chunk of that drive, there was pretty much no reliable internet, be it Wi-Fi or cell phone coverage. The people there didn’t hide their frustration and I certainly sympathized with them.

There are many reasons for this, not all of which are because of how awful cable companies can be. A big part of that has to do with the tools we use to access the internet. As good as they are for urban areas, they don’t work on a global level. It’s one thing to wire a big, advanced city like New York with fiber optics. It’s quite another to wire an entire planet.

Starlink promises to change that. These satellites aren’t bound by those logistics. They just orbit overhead without us even realizing it. They’re small and easy to mass produce. They can be taken out of orbit easily and replaced with better models. In principle, they could easily deliver the same high level gigabit speeds that are currently at the top of the market.

In terms of opening the internet to the rest of the world, that’s a big deal.

In terms of disrupting the market for delivering the internet, that’s an even bigger deal.

That’s because, to date, the world wide web has struggled to be truly worldwide. When nearly half the world can’t access it, you can’t truly call it a global network. With Starlink, the internet can become truly global. People in rural India can have access to the same internet speeds as people in downtown Los Angeles. That promises to open the world up in ways we can’t predict.

It’ll also provide some badly needed competition to internet delivery. For most people in America, you don’t have much choice when it comes to internet service. Cable companies basically have a monopoly on the whole enterprise, which is a big reason why it’s so expensive compared to other countries. Starlink will be the first real competition they’ve had in years for many areas.

I don’t doubt those companies will complain, whine, and lobby, but they’re not going to stop something like Starlink. They’re also not going to muscle out someone like Elon Musk. You don’t become the world’s richest person by being a push-over. Musk has already made clear that Starlink is a big part of his business model for the future.

At the moment, Starlink is still in beta, but Musk himself proves the technology works. He even used it to send a tweet. There are people right now who are testing it and they can confirm its speeds are way better than the crappy DSL internet of yesteryear. Many others have also expressed a keen interest in buying into this service.

At the moment, it’s still expensive. It costs $99 a month to access Starlink and it also costs $500 to buy the necessary antenna to receive it. However, that’s not a whole lot more than what I pay for internet in a month. Once it’s refined, that cost will come down.

Remember, there are over 3 billion people in the world without internet who have no options to access it. Starlink could be their only option and it could be a damn good one. It could be the key to the rest of the world becoming truly connected. That has big implications for society, commerce, and governments. Some countries are already making it illegal for their people to access Starlink. Don’t expect that to stop it, though.

The promise of fast, reliable internet at all corners of the globe is too enticing for too many people. It will both connect the world and make Elon Musk even richer. However, for a man who connected the world and pissed off cable companies, I’d say he’ll have earned it.


Filed under Current Events, futurism, Neuralink, technology

The Future Of Telework: A Trend That Transcends Pandemics

In early 2020, which might as well be another lifetime, I speculated on the lasting impact of increased telework and distance learning due to the pandemic that uprooted our entire society. At the time, I didn’t know just how bad this pandemic would get. In my defense, my hopes were still intact. Now, they’re charred ashes, but that’s beside the point.

In essence, I speculated that once people got used to teleworking, they would not be eager to go back, even after the pandemic had passed. That wasn’t exactly a bold speculation. You don’t have to be a world class psychic to surmise that people will come to enjoy working in their underwear, not having to commute, and enjoying the general flexibility that telework affords.

I’ve been stuck in enough traffic jams to appreciate that kind of flexibility. I know I’m not the only one who might become too fond of telework.

Well, that all-too-obvious insight is starting to take hold in many sectors. It’s not just related to typical office work in cubicles. Everyone from the United States military to big tech companies to law firms is embracing this new normal for the workplace. Even though it’s more out of necessity than innovation or goodwill, it’s still happening and there may be no going back.

The pandemic has already forced a mentality shift among the workforce. According to research done by Pew, telework was mostly seen as an optional benefit reserved for an affluent few. That’s not surprising. That kind of flexibility just felt more like a luxury, one that someone had to earn by establishing trust and credibility within an organization.

Now, it’s not just a necessity. It’s unavoidable. The world we’re living in now just cannot accommodate the same professional environment we once knew. I’ve worked in many professional environments before. I can attest that some of them are not built with pandemics in mind.

At one point, I worked at a company in which my desk was crammed into a closet-sized space with three other people. If even one of us caught a cold, we’d all be sick by the end of the week. It was that bad.

I doubt that’s an isolated case. In some of the jobs I’ve had, I was able to work from home, but only as a last resort. The only times I actually did it involved an emergency that occurred on a Saturday morning and one instance where the office was being renovated. In both cases, I still got plenty of work done. I just did it in my underwear.

In that sense, I get why many organizations reserve telework as a luxury rather than a norm. There’s this underlying sentiment that people will abuse it. If they can work from home, they just won’t get as much done. They’ll be too tempted to just grab a bag of chips, lie down on the couch, and watch game shows.

While I don’t doubt there are people who do that, this pandemic has revealed that most people aren’t assholes on that level. In some cases, it’s increasing productivity. Apparently, when workers are comfortable and afforded flexibility, they can get more done. That shouldn’t be too surprising, but it’s still remarkable in its own way.

This has borne itself out in subsequent studies and surveys. For some industries, telework is probably more productive in the grand scheme of things and that shouldn’t be surprising. Anyone who has ever had a lousy commute knows why. If a good chunk of your day is spent waking up, putting on itchy clothes, and spending hours in traffic, you’re not going to be in a very productive mood.

That won’t be the case for certain industries. If you’re a doctor, a police officer, a fire fighter, or a trucker, you just can’t telework. The nature of the work doesn’t allow it. That’s still going to be the case, at least until we have robots capable of doing those tasks, which we are working on. However, there’s also a sizable chunk of work that could probably be done remotely.

I think the impacts of this emerging truth will extend far beyond the pandemic. I’ve already seen it with people I know. They enjoy teleworking. They don’t want to stop, even after the pandemic becomes a bleak footnote in history. Some are willing to still go into the office some of the time, but they would prefer to telework. I suspect that’s going to become the new normal.

Last year proved that people can, for the most part, be responsible with the flexibility afforded by telework. As long as they’re getting the work done, who cares if they do it in their underwear while Netflix plays in the background? Considering how costly commutes can be and how expensive office space is, it might just make more fiscal sense in the long run.

Like it or not, businesses and various organizations tend to err on the side of reducing operating costs. It may mean more employees waste time at home, but if the difference is made up by better productivity, then it’s a net gain overall.

That shift could have impacts that go far beyond business operations. If people have to commute less, then that makes living out beyond urban and suburban settings more feasible. Given how expensive it is to live in those areas, this could spread people out even more, which is an objectively good thing if you’re looking to prevent future pandemics.

It might even help those in depressed rural areas in need of human capital. I can easily imagine some people preferring the quiet, less crowded environment afforded by a rural setting. If they can live in that environment while still getting their work done via the internet, assuming they have a reliable connection, then that’s another big benefit that goes beyond the business itself.

This is likely to be a trend. That’s not a fanciful prediction. We’re already seeing it happen. The pandemic just forced it to accelerate. There will likely be other impacts. It may very well change how cities, suburbs, and rural areas are planned from here on out.

I don’t claim to know the specifics, but we’ll likely see it continue in the coming years. I, for one, welcome this change. If I can reduce the amount of time spent in traffic and increase the amount of time I spend in my underwear, then my overall well-being improves considerably.


Filed under Current Events, futurism, human nature, technology

How Many Streaming Services Can We (Realistically) Have?

It’s official. The streaming wars are on.

Hell, it’s been official for the past several years and 2020 only accelerated it. The battle to dominate digital media in all forms is raging on multiple fronts and while many have their favorites, none can say they’ve won.

It’s Netflix versus Hulu versus Amazon versus Disney versus CBS/Viacom versus YouTube versus whatever other media companies are fighting for every possible eyeball. The stakes are high for consumers and content creators alike. There are billions in profits to be made and plenty of epic, culture-defining content to be made. It’s going to get intense is what I’m saying.

I don’t think I need to remind everyone just how much the streaming market has changed in the past 10 years. Even if you’re still a teenager, chances are you still vaguely remember the days before Netflix and chill. Media back then was movies, TV, and Blu-Ray/DVD collections. I’m not saying it was ideal, but that’s what we had to work with.

Then, Netflix came along and changed everything.

Then, every company and their deep-pocketed subsidiaries tried to catch up.

It hasn’t always been smooth. Some people are still not over “The Officeleaving Netflix. Chances are there will be more upheavals like that as companies fight over who streams what and who has the streaming rights to a particular show or movie. That’s sure to get messy and I’m not smart enough to make sense of it.

However, as this war rages, I think there’s a relevant question worth asking. It’s a question that I’m sure both consumers like me and big media companies like Netflix and Disney ask as well. The answer could depend on how the war plays out.

How many streaming services can the average customer have?

Most people already pay for some form of streaming media. Most also subscribe to some form of pay TV, although that trend is in flux. The days of having all the entertainment you want with a simple cable subscription alongside Netflix are long gone, and they're not coming back.

Now, you have to be very selective and self-aware of what you want.

Do you want access to Disney’s vast library of content?

Do you want access to the library of shows from NBC or CBS?

Do you want access to the content from Warner Brothers, Universal, Dreamworks, Discovery, Cartoon Network, or 20th Century Fox?

You can have some, but you can’t have them all without paying way more than you ever would for cable. Even if you did, could you even watch all those streaming services enough to justify the cost? There are only so many hours in a day and there’s only so much attention we have to give. Even if we dedicated half our day to binging movies and TV, we couldn’t watch it all.

That’s the big limiting factor on streaming. It’s also the biggest obstacle any company faces with respect to their effort in the streaming wars. People can only watch so much and they only have so much they can reasonably spend on a streaming service. There comes a point where, even if the content is appealing, they just can’t justify the cost.

Personally, I have subscriptions to five streaming services. They are as follows:

Netflix

Hulu

Amazon Prime

Disney Plus

HBO Max

Now, it’s worth noting that I got HBO Max through my cable subscription. I’ve subscribed to HBO for years so it’s not something I consciously sought out. With Amazon Prime, I primarily used it for the 2-day shipping instead of streaming media, but I’ve certainly found some quality shows on that platform.

I’m not sure I can justify another subscription beyond this. Once my subscriptions cannot be counted on one hand anymore, I think that’s too much. I just cannot watch enough content to warrant paying extra. I say that knowing companies like Paramount and NBC have just launched their own streaming services.

Even though both networks include shows that I love, I’ve no intention of buying their streaming service. If my cable company offers it for free, like it did with HBO, then that’s great. I’ll certainly watch it, but I’m not paying extra.

I feel like a lot of people are in that boat. If they don’t have a cable subscription, then they’re already trying to save money and paying more for a streaming package just defeats the purpose. If they do have cable, then they’re probably not willing to pay more for something they’re already paying too much for.

It’s a tougher situation and one that I’m sure will get tougher in the coming years. It’s not cheap to run a streaming service. The profit margins can be thin if you don’t have the content. There’s a good chance that some streaming services will either fail or get absorbed into another, like CBS All Access did.

Then, there are the pirates and no, I’m not talking about the ones with eye-patches.

Before Netflix streaming, pirating copyrighted content was already pretty rampant. Since the streaming wars began, there has been an uptick in pirated streaming content. That is also likely to intensify the more fragmented the streaming market becomes. If people are really that unwilling to pay a whole subscription to watch just a single show, they will resort to piracy. It’s still distressingly easy.

That’s why this question matters, both for us and the companies who provide our entertainment. I don’t claim to know how it’ll play out. By the time it settles, there might be another major upheaval in the media to supplant it. Whatever happens, I feel like I’ve reached the limit on the number of streaming subscriptions I have.

That’s just me, though. What about you?

How many streaming services do you have and are you willing to pay for another? Our collective answer could very well change the course of the streaming wars.

2 Comments

Filed under Current Events, human nature, media issues, psychology, technology, television

The Messy/Glitchy Launch To “Cyberpunk 2077” (And Why It Shouldn’t Surprise Anyone)

I’ve been playing video games for most of my life. I’m old enough to remember the excitement surrounding “Super Mario Bros. 3,” “Legends of Zelda: Ocarina of Time,” and the first Pokémon craze. While I don’t consider myself an avid or hardcore gamer, I still have immense love and respect for gaming.

In the time I’ve been playing games and following the industry, I’ve seen many games that were heavily hyped. I vividly remember how games like “No Man’s Sky” and “Spore” were supposed to revolutionize the industry. Most of the time, the game was a letdown, relative to said hype. A few managed to deliver. Franchises like Zelda and Grand Theft Auto keep finding a way.

However, I’ve yet to see a game garner as much hype as “Cyberpunk 2077.” I’ve also yet to see a game garner such a mixed reaction in conjunction with such a chaotic launch. Now granted, some of that might be due to the general chaos of 2020, but the story surrounding this game has been a special kind of messy.

The long and short of that story is this.

  • The game was announced back in May 2012 by CD Projekt Red.
  • The first trailers came out in 2018 and 2019.
  • Keanu Reeves was announced to play a significant role in a memorable appearance at E3 2019.
  • The game was originally slated for release in April 2020, but was delayed twice.
  • Once the game finally did come out, it was found to be glitchy and buggy. Some older gaming systems, like the PlayStation 4, could not run it effectively, and even robust gaming PCs struggled to play it.
  • Due to the bugs and messy release, fans and critics alike were outraged. Some demanded refunds and Sony even removed the digital version of the game from its store.

Again, there’s a lot more to the story behind this game and how its release was handled, but those are the basics. I won’t get into the meaty details. Others more qualified than me have handled that far better and I’ll defer to them for more insight.

The reason I’m talking about “Cyberpunk 2077” has less to do with the game itself and more to do with the emerging trends behind it. This isn’t the first game to endure a messy, glitchy launch. It’s also not the first to invite massive backlash from fans and customers. I doubt it’ll be the last, either.

It all comes down to this.

Nobody should be surprised that a game as big, ambitious, and complex as “Cyberpunk 2077” had glitches at launch.

Nobody should be surprised when a game that runs on powerful, complex gaming systems isn't perfect from the beginning.

Everyone should expect patches and fixes that come out years after a game comes out. They’re practically unavoidable.

I know that sounds like a broad generalization. It may even sound like I’m making excuses for game developers like CD Projekt Red. I promise that’s not the case. This is just me sharing my perspective and I feel it’s worth sharing in the current era of AAA gaming.

Like it or not, the gaming industry has evolved a lot since the days of Nintendo, the Sega Genesis, and the first PlayStation. It's not just that the industry has become more consolidated and more impacted by games people play on their phones. That is also part of it, but let's take a moment to appreciate the bigger picture here.

A game like “Tetris” or “Super Mario Bros” is much less complex than a game like “Grand Theft Auto V” or “Elder Scrolls: Skyrim.” I’m not just talking about the story or gameplay, either. These games require a lot more in terms of development, polishing, and refinement to go from the drawing board to a finished product.

The hardware is more powerful.

The mechanics are more complex.

The logistics are far greater.

You didn’t used to have to hire top quality voice acting talent on the level of Keanu Reeves to develop a game. You just had text boxes and sound effects. That’s all games like “Legends of Zelda: Ocarina of Time” and the first Pokémon games needed.

However, those games couldn't come out now and be as successful. They were products of their time, limited by the hardware and software needed to develop them. It still took time and effort, but let's not lose perspective here. Those games, in their entirety, could comfortably fit on a $10 flash drive.

In essence, a game like "Cyberpunk 2077" is to "Super Mario Brothers" what a Saturn V rocket is to a standard wheel. It has far more moving parts, far more complexities, and far more investment needed to make it work.

When you have something that complex, things aren't always going to go smoothly. Patches and tweaks will be necessary. It's just a matter of extent. Even top-rated games like "Grand Theft Auto V" needed a few patches to get right. Other games, like "Destiny 2," required so many patches that the game was basically overhauled.

In both cases, the games were better because of this. Even if it wasn’t perfect on launch, it created the foundation from which a truly awesome experience could emerge. That’s the best way to approach games like “Cyberpunk 2077.” Regardless of what the release date says, assume that’s just the beginning and not the end.

That’s not to say we should overlook every glitch and flaw at launch. Some just cannot be fixed, no matter how many patches are thrown at it. Games like “Fallout 76” are an unfortunate testament to that.

At the same time, some games are so mired by their launch that nobody notices or appreciates it when the game is ultimately fixed. That’s what happened to “Mass Effect: Andromeda,” a game that was also plagued by a glitchy and messy launch. However, several patches helped fix many of the issues. Now, I can confirm that the game in its most updated form is a genuinely solid gaming experience.

Unfortunately, fans gave up on that game, and many like it, too quickly. I feel like others didn’t even give it a chance because they listened to those who made such a big deal about the glitches at launch. It would be like people avoiding cars for the rest of their lives because the first few crashed or didn’t run well enough.

For this reason, I’ve gotten into the habit of not buying any AAA game at launch. Unless it’s a remaster, I always wait at least three to four months before I consider investing in it. That usually affords enough time to work out the kinks and get the necessary patches in place for the game to realize its full potential.

Sometimes, it’s still a letdown. Games like “Anthem” have never really taken hold, no matter how many patches and tweaks they get.

For the most part, though, there’s a benefit to waiting until months after launch. The hardest part is not letting negative reviews from people bemoaning the early glitches color your opinion of the game. That’s what helped me enjoy “Mass Effect: Andromeda.” I never would’ve gotten that experience had I read all the complaints about the earlier version of the game.

Sometimes, you need to exercise a little patience to get the gaming experience you seek. That's not easy these days, especially as the gaming industry has grown into a multi-billion-dollar entertainment behemoth. I remember just how visceral some of the reactions were when "Cyberpunk 2077" was delayed. Now, some of those same people are whining about the game appearing to have been rushed.

It’s the kind of hypocrisy that makes you want to punch your computer screen.

On top of that, game development these days is subject to significant strain among developers. It’s what fuels a less-than-pleasant aspect of the industry called crunch. When a company is eager to get a product to the market or to meet a deadline, it’ll lean heavily on its workers. Many times, those workers will suffer as a result.

It’s a distressing part of the industry, but one I doubt will go away anytime soon. As long as there’s demand for AAA games on par with “Cyberpunk 2077,” we’re going to endure things like this. Games are going to be launched with bugs. Game developers are going to be overworked to death to meet a deadline rather than risk angering the consumer base.

Until these trends and dynamics change, it's likely to get worse before it gets better. In the meantime, I'm still going to be patient with "Cyberpunk 2077." I don't think I'll get it until several months have gone by, complete with patches, and I have a new PlayStation 5 to play it on.

Hopefully, it’ll be worth the wait. After all, where else am I going to play a game in which I can customize a character’s genitals?

2 Comments

Filed under Current Events, technology, video games

Our Future Robot Overlords Will Now Be Able To Dance (Thanks To Boston Dynamics)

As bad as last year was for so many people, there were some things that 2020 just couldn’t stop. When it comes to technology, a global crisis has a way of hindering certain processes while accelerating others. For many, that meant more telework and reliance on streaming media to stave off boredom.

However, 2020 may very well have proven just how frail human beings and their societies are. It only takes a microscopic virus to bring our entire society to a screeching halt. It's sobering, but it's probably going to be a source of humor for our future robot overlords.

I tend to be optimistic about the future and technological trends. I’m also somewhat of a pragmatist. I realize that we human beings have a lot of limits. Emerging technology, especially in the field of artificial intelligence, promises to help us transcend those limits.

Right now, it’s still mostly fodder for science fiction writers, futurists, and Elon Musk wannabes. We’re not quite there yet in terms of making a machine that’s as smart as a human. However, we’re probably going to get there faster than skeptics, naysayers, and the general public realize.

It won’t happen overnight. It probably won’t even happen in the span of a single year. When it does happen, though, hindsight will make it painfully obvious that the signs were there. This was bound to happen. We had ample time to prepare for it. Being fallible humans, we could only do so much.

In that sense, I suspect that years from now, we'll look back on what Boston Dynamics did to close out 2020. This company, which has a history of making robots that look way too advanced to exist outside a Terminator movie, decided to do something with its robots that would leave an indelible mark on the year.

They succeeded by teaching their robots how to dance.

I know it already went viral, but it's worth posting again. Remember this video and this moment. Chances are it'll be a major indicator years from now that this is when robots began catching up to humanity in terms of capabilities. At this point, it's only a matter of time before they exceed us.

When that time comes, will we be ready? Will we embrace them? Will they embrace us?

If they don’t, just know that they will now be able to dance on our graves.

Leave a comment

Filed under Artificial Intelligence, Current Events, futurism, technology

The First People Have Received The COVID-19 Vaccine (And We Should Celebrate)

It’s almost over. I’m sure I’m not the only one thinking that with each passing day.

This historically horrible year is almost over. We’re in the home stretch with the holidays approaching. A new year is almost upon us and the bar for improvement for 2021 is laughably low compared to previous years.

We can also say with a straight face that the COVID-19 pandemic is almost over. I say that knowing full-well that cases are still rising and people are still dying at a horrific pace. That’s still objectively terrible.

The reason there’s hope now is we actually have a working vaccine. Thanks to the heroic efforts of scientists, doctors, and those who volunteered to test this unproven treatment, the key to ending this pandemic is upon us.

It's also just the first. There are multiple vaccines in late stages of development. It's very likely that we'll have a second effective vaccine before New Year's. That's a powerful one-two punch against a pandemic that has killed so many and disrupted so many lives.

These aren’t folk remedies or something some shady health guru is trying to pawn for a quick buck. Contrary to what anti-vaxxers may claim, these vaccines will actually protect people. As of this writing, it’s being distributed to front line care workers and vulnerable populations.

Just this past week, the first individuals received the vaccine. It started with a British woman in Coventry. It continued with an ICU nurse in New York City. CNN even captured it in a live video feed.

CNN: ICU nurse in New York among the first people in the US to get authorized coronavirus vaccine

A critical care nurse was the first person in New York and among the first people in the United States to get a shot of the coronavirus vaccine authorized by the US Food and Drug Administration.

Sandra Lindsay, an ICU nurse at Long Island Jewish Medical Center in Queens, New York City, was administered the vaccine during a live video event at about 9:20 a.m. ET on Monday.

Dr. Michelle Chester, the corporate director of employee health services at Northwell Health, delivered the shot.

“She has a good touch, and it didn’t feel any different than taking any other vaccine,” Lindsay said immediately afterward.

This isn’t just a turning point in the fight against a deadly disease. This is something we should celebrate. Moreover, I believe this is the kind of celebrating we should learn from.

I admit I’ve celebrated some less-than-important things in my life. Hell, I celebrated the day when comics started coming out digitally the same day they came out in shops. I treated that like I won the Super Bowl.

People celebrate all sorts of events that they believe to be the most important thing in the world. Whether it’s their team winning a championship or a movie grossing $2 billion at the box office, we all have a different bar for what warrants celebrating.

For just once, let’s all re-think where we raise that bar. Let’s also let this be a prime example of something that’s truly worth celebrating and praising.

Make no mistake. Creating this vaccine this quickly is a remarkable achievement. We’ve endured pandemics in the past. Some of those pandemics have killed far more people. This disease could’ve definitely killed more. If we didn’t have this vaccine, or even if we had to wait a year to get it, thousands more would’ve died.

Now, going into 2021, countless lives will be saved because of this. It's a testament to the power of science, hard work, and human ingenuity. It's as heroic as we can be without the aid of superpowers or magic wands. As someone who loves superhero media, I say that's a beautiful thing indeed.

So, let's all take a moment to appreciate and celebrate this achievement. I also fully intend to get this vaccine once it's available. When that day comes, I'll gladly share that moment and encourage others to do the same.

1 Comment

Filed under Current Events, health, technology, Uplifting Stories

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin and I try not to ignore it. Like anything, technology has a dark side. It can be used to harm just as much as it can be used to help, if not more so. You could argue that we couldn't have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google's AI research, specifically with regard to its ethics practices. For a company as big and powerful as Google, that's not a trivial criticism. This is what she had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

I bolded that last sentence because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then, she was pushed out and started expressing her concerns publicly. The fact that this happened and Google isn't facing much in terms of repercussions should be concerning. It may also be a sign of the larger challenges we're facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They're not utopian technocrats. They're a business that is obligated to make its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI will only make it many times worse.

Once an artificial intelligence system is as smart as a human, it's going to be capable in ways we don't expect and can't control. If its ethics and goals aren't aligned with ours, then what's to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.

Leave a comment

Filed under Artificial Intelligence, technology