Category Archives: biotechnology

Obesity MIGHT Be Declining In The U.S. (And What The Implications Entail)

Ever since I was a kid, I’ve been hearing about the obesity epidemic in the United States. Every year, it seemed, the trends were getting worse. More and more people were becoming obese. As a result, many of those same people faced serious health issues. Those issues, in turn, put strain on families, health care systems, and numerous aspects of society.

It’s not unreasonable to call increasing rates of obesity a problem.

But calling it an epidemic? That may or may not be appropriate. The language often used to talk about obesity, and body image in general, is not very healthy. But the language used to talk about body positivity isn’t always healthy, either. Beauty might be in the eye of the beholder, but overall health is something that’s tangible and measurable. And by most measures, being obese is not good for your health.

Over the years, there have been plenty of efforts to raise awareness about obesity. There have been just as many efforts to encourage people to make healthier choices, either by eating healthy foods or exercising regularly. But the fact that obesity rates continued to increase year after year made clear that these efforts weren’t having much impact.

The reasons for that are many. But as someone who did not exercise or eat healthy for a good chunk of my adult life, I can attest that the primary reason is that it’s just hard. Eating right in a country where cheap, delicious, unhealthy fast food is available on every corner takes more than just willpower. Finding the time and energy to exercise regularly while covering basic living costs is fraught with challenges.

I was able to change my diet and my exercise habits to improve my overall health, but I was fortunate. My circumstances provided me the opportunities, the time, and the energy to pursue a healthier lifestyle. A lot of people don’t have that luxury. For them, obesity is almost impossible to avoid.

But recently, things might be changing in a profound way. According to new data from the US National Health and Nutrition Examination Survey, the obesity rate actually began to fall between 2020 and 2023. It’s the first time in my life that the rate has actually declined.

That, in and of itself, is remarkable. Obesity in America seemed like one of those things that would never decline, if only because unhealthy food will never stop being so delicious. But this data suggests that there is a limit to obesity within a population. It also hints at new variables that we’ve never dealt with before.

That same data made clear that this decline was not directly linked to the rise of new weight-loss drugs like Wegovy and Ozempic, but it’s not unreasonable to assume they had some influence. These drugs are unique in that they don’t change the food itself. They change how your body responds to food while also muting your cravings for it.

I can attest that one of the hardest things I had to do when getting into shape was changing my diet. I had gotten so used to eating junk that if I went too long without it, I would crave it and binge eat. But these new drugs effectively mute that response. That makes it a lot easier to eat less while ensuring what you do eat doesn’t trigger the cravings that lead to weight gain.

These drugs aren’t miracle drugs by any means. They just make eating less and managing your diet a lot easier. And that might very well be the most important ramification of this data.

For years, there has been a tendency for people to seek any alternative to diet and exercise when it comes to losing weight. I’ve seen many diet fads and fitness gimmicks that claim they’ll help you lose weight without actually changing your habits. None of them succeeded in the long run. That’s why they’re fads and not medical remedies.

And most of the time, they failed for the same reason. People wanted a “magic pill” to make weight loss and fitness easier. Companies and fraudsters were always eager to oblige, even if it meant making false promises and unsubstantiated claims. But at the end of the day, people still needed to burn off those calories and change what they put into their bodies. Anything that avoided that was doomed to fail, plain and simple.

Now, there are drugs that get to the actual root of obesity, namely our desire to overeat. That approach is very different from any fad or gimmick. And the massive, multi-billion-dollar success of Ozempic has officially set a precedent while creating a new market.

But how far will this effort go?

If we can make drugs that reduce people’s desire to eat, why not make drugs that will reduce other desires?

What does that mean for people with addiction?

What does that mean for people with behavioral disorders?

What does that mean for mental health, body image, or even sexuality?

It’s hard to say at this point. But I’ve seen enough health fads and exercise gimmicks to surmise what happens when something actually works. If a company finds a winning drug that makes hard efforts easier, then they will try to build on that success. They’ll try to make new drugs that utilize similar mechanisms. We saw it with erectile dysfunction medications. Now, we might see it with this new crop of drugs.

They won’t just affect our bodies.

They’ll affect our desires, as well.

How will we manage this? Will the good outweigh the bad?

Only time will tell. But if the obesity epidemic is finally in decline, then that means we’ll have to answer these questions sooner rather than later.


Filed under biotechnology, Current Events, exercise, human nature, psychology

Thought Experiment: What Is The Endgame For Human Civilization?

This is a video from my YouTube channel, Jack’s World.

This video is another thought experiment that contemplates where human civilization is heading and where it will ultimately end up. Humanity, as a whole, has undergone many upheavals, collapses, and advancements over the centuries. But as far as we’ve come, there’s still so much farther we have to go. That raises the question.

What’s the endgame?

What is human civilization’s ultimate form?

I try to explore that with this thought experiment.


Filed under Artificial Intelligence, biotechnology, futurism, Jack's World, YouTube

How Much Are You Willing To Spend/Risk On Emerging Longevity/Anti-Aging Treatments?

We’re all born with youthful energy. As kids and young adults, this energy fuels us as we build lives for ourselves and our families, be they our close relatives or any children we might have. But over time, that energy fades. Our health, our looks, and our overall energy for living all fade. It is an inescapable fact of life.

One of my old health teachers once summed it up with this endearing quote that I remember to this day.

“Once your body is done growing, it starts dying. The only part you can control is how rapidly that process unfolds.”

This has become more and more relevant for me, personally. I am no longer young by most measures. My teenage years and my early 20s feel like a lifetime ago. Who I was then is very different from who I am now. And while I have gotten much better at taking care of myself since I turned 30, I know that’s just slowing the aging process. It doesn’t stop it.

At some point, my body and mind will start to break down.

At some point, I’ll start succumbing to the many ailments often associated with age.

I am not looking forward to that. I prefer to delay that as much as possible. I’ve always been somewhat self-conscious about my looks and my health. I don’t deny that the prospect of aging is scary to me. That’s one of the reasons I often keep a close eye on advancements in biotechnology. And with each passing year, I also find myself paying more attention to advances in the fields of longevity.

I know there are many conflicting perspectives when it comes to body image, beauty standards, and the idea of aging gracefully. For everyone in human history up to this point, there wasn’t really a choice. You just had to accept that you were going to get older. Your looks, your energy, and your body were going to fade. But if this technology is able to mature, there might be other options in the future.

Whether I’ll live long enough to take advantage of those opportunities is hard to say. It may already be too late for someone my age. Even if new treatments emerge, there’s a good chance they’ll be reserved for the rich and well-connected. Unless I win the lottery, I doubt I’ll be in a position to utilize them.

But that might not be the case for my nieces and nephews, who are still young children at the moment. It might not even be the case for those just graduating college at the moment. In the same way artificial intelligence has had a sudden surge of advancement, longevity might experience a similar surge, thanks in no small part to AI.

As I write this, science has uncovered so much about the mechanics of aging. We know considerably more today than we did 20 years ago. We’re sure to uncover more in the coming years. At some point, we may even develop effective treatments that don’t just slow aging. We might find a way to actually reverse it.

This sort of technology isn’t some far-off sci-fi fantasy on par with a warp drive. Reversing aging doesn’t break the laws of physics. It doesn’t even break the laws of biology, given how some animals never seem to age. It’s just a matter of developing the right tools, the right treatments, and the right approach. I have no idea what form that will take. I doubt it will be something as simple as a pill, an injection, or something you could buy at a pharmacy.

But if such a treatment were available, it’s worth asking how much you’d be willing to risk in order to take advantage of it. Because, like any emerging medicine, there is risk early on. When something is unproven in the long term, you put your mind and body at risk by embracing it so quickly. Even if it’s tested to a point where every major health organization gives it the thumbs-up, there’s always a chance something could go wrong. That’s just how medicine and biology work.

Some people might not be willing to take that risk.

For me personally, I totally would. Even if my health and appearance are generally good, I would definitely take a chance on a treatment that would help preserve both for a longer period. I would certainly expect side effects. But if it delivers good results, I’ll endure those.

But there’s also the cost to consider. Even if a treatment is shown to be effective at keeping you feeling young, beautiful, and energetic, it doesn’t do you much good if it costs you every penny you have and then some. Sure, you’d have your youth and your looks, but you’d be broke and in debt. Is that worth it?

Personally, I wouldn’t be willing to spend everything or go that deep into debt, just to look young and remain healthy. Few good things ever come from indebting yourself to that extent.

But others might feel differently. Some might not want that kind of longevity, even if it were available. That’s perfectly fine. We should certainly respect anyone who makes such a choice. But we should also put real thought and effort into attacking aging the same way we attack any disease. Regardless of how we age or how we choose to approach it, we’re subject to the chaotic ravages of time.

Emerging technology will give us more options than we’ve ever had at any point in our existence as a species. How we choose to exercise those options remains to be seen. I might not get that choice. But I sincerely hope that some reading this do.


Filed under biotechnology, CRISPR, technology

Would You Willingly Plug Your Brain Into The Matrix?


What if there was a virtual environment that was so real and so lifelike that it was completely indistinguishable from the real world?

What if you had an opportunity to upload the entire contents of your mind into that environment?

Would you do it? Even if you didn’t have a full measure of control over the environment, would you still venture into this virtual world?

I’m not just asking these questions as another thought experiment, nor am I asking them as an excuse to talk about “The Matrix: Resurrections.” Yes, the prospect of another movie in the mold of “The Matrix” did inspire me to pose these questions, but I also think they’re worth seriously contemplating.

Back in 1999, the year “The Matrix” first came out, the idea of an entirely simulated world seemed like classic sci-fi tech, the likes of which we’d never see in our lifetimes. That’s understandable. In 1999, the most advanced simulations most of us had seen were rendered by the original PlayStation, and those hardly looked realistic.

Since then, computing power and graphics technology have come a long way. These days, graphics on video game consoles are so realistic that they’re nearing uncanny valley territory. It won’t be that long before we have computer renderings that are so advanced, so lifelike, and so realistic that our brains can’t tell the difference.

At that point, creating an entirely simulated world is just a matter of computing power, scale, and interface. Since brain/computer interfaces are already being developed, it’s not unreasonable to think we’ll have a Matrix-like simulation available within the next 40 years. Many people alive today who are under the age of 50 might very well live long enough to see that technology.

Once we have it, we’ll have some important decisions to make. Some of those decisions will be societal. If people suddenly have access to a virtual world where they can be anyone, do anything, and immerse themselves in any conceivable experience, then what does that do to society? What does that do to people, communities, nations, and social structures?

Those are far messier questions to contemplate, which is why I’m not going to belabor them too much at this point. Instead, I want to keep this question within the context of individuals. Everyone’s circumstances and beliefs are different. As a result, that may impact whether you’d take advantage of such an opportunity or what kind of environment you’d seek to create.

Personally, if I ever had an opportunity to upload my mind into a virtual environment on par with the Matrix, I would do it, but the extent and circumstances would vary. I suspect others may feel the same.

If I could create my own personal virtual environment before I uploaded my mind into it, then I would certainly be more willing. I think that’s an important factor. The humans in “The Matrix” didn’t have any measure of control over the environment they were in. I think that would complicate any experience someone would have in such a world.

It would also depend heavily on my physical state in the real world. If this technology became available and I was old, weak, and in poor health, then I would certainly be more inclined to use it. That assumes that any technology involving human enhancement hasn’t progressed significantly and people still age, get sick, and die.

Like it or not, our physical bodies in the real world will break down. If the technology to manage and reverse that isn’t available, then virtual environments might be the only way we can continue to live in any meaningful capacity. I certainly hope that isn’t my only option when I get to be that age, but if it is, then that simplifies my decision.

It’s hard to know what sort of options we’ll have. I still believe that technology involving human enhancement and creating virtual worlds will advance in parallel. One would, by default, need the other in order to properly interface with these environments. As such, it would complicate any decision about venturing into virtual environments.

Then, there’s the actual nature of those virtual environments. If we can control what environment we go into, then that opens the door to even more possibilities. Within these worlds, you could be a billionaire playboy, a medieval king, a famous celebrity, or a super athlete. From your brain’s perspective, it would feel every bit as real as what you’re feeling right now.

Whether or not our brains would accept it is a different story. I suspect there may be some who, once they enter these worlds, would never want to leave. There may even be some who willingly erase their own memories of the real world so that this new virtual world is their new “reality.” That’s exactly what Cypher desired in “The Matrix” and I suspect others might share that desire.

It really does depend on the person, their situation, and what sort of virtual world they seek to create. We probably won’t know the full impact until we create our first true Matrix-like virtual world. I sincerely hope I live long enough to see that. If you’re reading this, hopefully you get to see it as well. It should give you plenty of time to contemplate these questions and whether you’ll venture into those worlds.


Filed under Artificial Intelligence, biotechnology, futurism, Sexy Future, Thought Experiment

When Parents Look As Young As You: Speculation And Implications


A while back, I was sifting through some old pictures and I found a few of my parents when they were younger. Some of those pictures were a bit faded, but some held up remarkably well. A few in particular depicted my dad when he was in his 20s. It was fun, seeing how my parents looked in their youth. They certainly had plenty of stories behind each picture.

Beyond the stories, there was also the uncanny resemblance. My dad in his 20s looked a lot like my brother and I do now. I definitely have my dad’s facial structure. More than one relative has commented on how similar we look whenever I share a picture of us.

My brother definitely inherited my dad’s old hair style. There’s this one picture of my dad in a hammock with long, uncut hair and it looks eerily identical to how my brother styles his hair. Overall, you can definitely see the resemblance.

Naturally, people’s appearances change as they age. It’s a normal thing. We can all marvel at how our parents looked in their youth, but that doesn’t change how different they look now. Most people don’t have the luxury of looking like Keanu Reeves in their 50s. As they get older, age will affect their appearance, their energy levels, and their mental state.

With all due respect to my wonderful parents, their age does show. When we stand together for family pictures, you can tell who’s the parent and who are the kids, even though my brother and I are full adults. I don’t doubt my age will start showing soon enough. It already has in some respects.

However, what happens if we suddenly gain the ability to either stop aging at a certain point or completely reverse it?

What if our parents could look the same age as us when we turn 30?

How would that affect us personally?

How would that affect us as a society?

These are not entirely rhetorical questions. It may sound like something that requires futuristic technology, but it’s not as far-fetched as we think. Reversing or stopping the aging process in living things isn’t like breaking the speed of light. We know it can be done because there are animals that do it all the time.

Certain species of turtles never seem to age out of their adult prime. Other species basically age in reverse. In biology, it’s called negligible senescence and it’s a subject of significant interest for the treatment of aging. While humans do have a lifespan that seems built into our biology, we’re steadily developing the tools to hack that biology.

The technology is new and unrefined, but the incentives for developing it have never been greater. We already have an aging population. Helping people live into their 90s is nice, but what good is living that long if you can’t enjoy life as you did in your youth?

That technology is still a ways off, but like I said before, there’s no hard rule of biology or physics that prevents us from reversing the effects of aging. The research into the mechanisms of reversing aging altogether is ongoing, and anyone who develops effective treatments is sure to gain a chunk of the multi-billion-dollar anti-aging industry.

How and when this technology becomes mainstream is difficult to predict, but if and when it does, it raises some major implications. Setting aside the issues that come about from a population that doesn’t get weaker or less energetic with age, what does that do to how we carry ourselves around family?

That’s a personal impact of this technology that I don’t think enough people contemplate, mostly because they think it’s impossible. However, there are people alive today who may live long enough to see this technology mature. At that point, they’ll have to deal with having parents that look the same age as they do once they turn 30.

Imagine, for a moment, going to a restaurant with your parents. To you, they’re your parents and you know that. To everyone else, however, you’re just three people hanging out at a restaurant. If you look the same age, how can you tell the difference between a family getting dinner and a bunch of friends hanging out?

Things can easily get more complicated and awkward from there. Imagine you’re a guy meeting your mother for lunch or a girl meeting her father for coffee. From the outside, you don’t look like a parent and child. You might look like a couple on a date. I can only imagine how tense waiters might feel if they find out a cute couple are actually parent and child.

Add grandparents who don’t age to the equation and the complications only compound. When your family unit becomes indistinguishable from a co-ed dorm in college, how does that affect your perspective? Beyond the awkward realizations that the cute girl you’re hitting on is as old as your grandmother, how do parents and kids relate to one another when they look alike at a certain point?

As kids, we know our parents are our parents because they’re older than us. Even as adults, most of us reserve some level of respect and reverence for both our parents and elders. Just looking older will garner a certain reaction. What happens when technology removes appearance from the equation entirely?

We all know young people who are wise beyond their years and old people who are as dumb as a kid. When we all look the same age, those distinctions will become blurred and muddled. How that affects our personal perspectives, as well as our society in general, is difficult to fathom at the moment. Given the rapid pace of biotechnology and all the money at stake, that moment might come sooner than we think. As such, we should start preparing ourselves for the awkwardness that’s sure to follow.


Filed under biotechnology, futurism, technology

Turning Thoughts Into Images: A New Era Of Art With Brain/Computer Interface


There are any number of skills you can learn, practice, and eventually master. I highly encourage everyone to do this, whether it involves computer programming, cooking, crafts, or any other hobby. You may not always like or master them, but they’re still fun and rewarding to try.

For some skills, though, no amount of learning or practice will help you master them or even become competent. Some things just take talent. That’s why only a handful of human beings ever become Olympic athletes, professional quarterbacks, or brain surgeons. There’s nothing wrong with that. We need that kind of diverse skill set, as a species.

I consider myself to be good, if not above-average, at a number of skills. I’ve learned plenty over the years and there are some that I just have a knack for more than others. I like to think writing is one of them. However, there’s one particular skill that I just have absolutely zero talent for and it’s something that has bugged me for years.

That skill is drawing.

Please understand that this is somewhat personal for me. I’ve always had an artistic side, but for reasons I can’t quite grasp, I’ve never been able to draw worth a damn. I’ve taken art classes in school. I’ve tried practicing here and there. It just never works. I can barely draw stick figures, let alone an image of a typical person that doesn’t look like it was drawn by a five-year-old.

Some of that actually runs in my family. Quite a few relatives can attest that they can’t draw, either. At the same time, an unusually high number of relatives are good writers, poets, etc. We’re all great with words, for the most part. That’s a talent that seems to get passed down, but we just can’t turn those words into pictures.

For me, that’s kind of frustrating. I’ve always enjoyed telling stories. For a time, I wanted to be a comic book writer, but I learned quickly that’s next to impossible when you can’t draw. There are also times when I wish I could draw well enough to describe a scene from a story. I just don’t have that talent or that skill.

As much as I enjoy writing, I don’t deny that humans are visual creatures. If I could incorporate images into my work, then I believe it would have a much greater impact. Sadly, I doubt I’ll ever have the necessary talent and skill to create those images.

However, if certain technological trends continue, I might not have to. A recent article in Psychology Today gave me hope that one day, I’ll be able to take some of these images I see in my head and make them real for others to see. It also leads me to believe that art, as we know it, is about to change in a big way.

Psychology Today: New Brain-Computer Interface Transforms Thoughts to Images

Achieving the next level of brain-computer interface (BCI) advancement, researchers at the University of Helsinki used artificial intelligence (AI) to create a system that uses signals from the brain to generate novel images of what the user is thinking and published the results earlier this month in Scientific Reports.

“To the best of our knowledge, this is the first study to use neural activity to adapt a generative computer model and produce new information matching a human operator’s intention,” wrote the Finnish team of researchers.

The brain-computer interface industry holds the promise of innovating future neuroprosthetic medical and health care treatments. Examples of BCI companies led by pioneering entrepreneurs include Bryan Johnson’s Kernel and Elon Musk’s Neuralink.

Studies to date on brain-computer interfaces have demonstrated the ability to execute mostly limited, pre-established actions such as two-dimensional cursor movement on a computer screen or typing a specific letter of the alphabet. The typical solution uses a computer system to interpret brain-signals linked with stimuli to model mental states.

Seeking to create a more flexible, adaptable system, the researchers created an artificial system that can imagine and output what a person is visualizing based on brain signals. The researchers report that their neuroadaptive generative modeling approach is “a new paradigm that may strongly impact experimental psychology and cognitive neuroscience.”
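Out of curiosity, here’s what that kind of neuroadaptive loop might look like in code. To be clear, this is my own rough sketch, not the researchers’ actual system: the generator and the brain-signal scorer below are dummy stand-ins, and every number (latent size, step size, loop count) is an assumption for illustration only.

```python
# A rough, illustrative sketch of a neuroadaptive generative loop:
# sample candidate images from a generative model, score how strongly
# the viewer's brain "responds" to each one, and nudge the model's
# latent estimate toward the candidates that drew a response.
# Everything here (functions, sizes, constants) is an assumption.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512                          # assumed latent size of the generator
TARGET = rng.standard_normal(LATENT_DIM)  # hidden "imagined image" for this demo

def decode_image(z):
    # Stand-in for a pretrained generator (e.g., a GAN decoder).
    # A real system would render pixels; here the latent code passes through.
    return z

def brain_relevance(image):
    # Stand-in for an EEG classifier that scores the viewer's response (0..1).
    # This dummy simply prefers images closer to the hidden target.
    return float(np.exp(-np.linalg.norm(image - TARGET) / 50.0))

estimate = rng.standard_normal(LATENT_DIM)    # initial guess at the imagined image
for _ in range(30):
    # Show a batch of candidates sampled around the current estimate.
    candidates = estimate + 0.5 * rng.standard_normal((8, LATENT_DIM))
    scores = np.array([brain_relevance(decode_image(z)) for z in candidates])
    weights = scores / scores.sum()           # relevance-weighted average
    estimate = weights @ candidates           # drift toward what the brain "liked"

print("distance to imagined target:", np.linalg.norm(estimate - TARGET))
```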

Naturally, this technology is very new and nowhere near ready for commercial use. It’ll probably be a while before I can use it to create my own graphic novels of the books I’ve written and the sexy short stories I’ve told. That still won’t stop me from entertaining thoughts of incorporating images into my stories.

I doubt I’m the only one who feels that way, too. I know plenty of people like me who just do not have the talent or skill to draw anything more detailed than a stick figure. Those same people have images in their minds that they wish to share. If products like Neuralink, which the article directly references, become more mainstream, then this could be among its many uses.

With some refinement, it won’t just allow artistically challenged people like me to make competent drawings. It’ll allow people who never would’ve otherwise produced art to create something that they can share with the world.

Just take a moment to appreciate how many beautiful images exist only in the minds of people who never get an opportunity to share them. Maybe someone did have an idea for a piece of artwork that would’ve brought beauty, joy, and inspiration to the world, but they just didn’t have the skill, resources, or talent to make it tangible. How many masterpieces have we lost because of that limitation?

We can never know, but any loss of beautiful art is a tragic one. With a process like this, people who never even thought about having an artistic side could explore it. Moreover, they would be able to do it without messy art supplies, sketchbooks, or ink stains. They would just need a neural prosthesis and a computer.

Almost everyone has a computer, so we’re already halfway there. If ever a product came out that allowed us to develop this ability of turning thoughts into images, I would be among the first to try it. I would eagerly line up to take the plunge, if only to open the possibility that some of the images I see when I’m writing can become real one day. I hope I live long enough to see this. Our bodies and minds may ultimately fail us, but great art can last for multiple lifetimes.


Filed under Artificial Intelligence, biotechnology, Neuralink, technology

Why We Should Embrace Synthetic Meat (As Soon As Possible)


If you’re reading this, then there’s a good chance you drank milk at some point this year. You probably drank a lot more of it when you were a kid. The fact that you’re reading this proves that you didn’t die as a result. That may not seem like a big deal, but compared to 100 years ago, it counts as a noteworthy feat.

Between 1850 and 1950, approximately a half-million infants died due to diseases contracted by drinking milk. If you do the math, that’s about 5,000 deaths a year, just from drinking milk. Keep in mind, these were children. That’s a lot of death and suffering for drinking one of the most basic substances in the animal kingdom.
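If you want to double-check that figure, the math is as simple as it gets. Here’s a quick sketch using the rounded numbers above:

```python
# Quick check of the per-year figure, using the rounded numbers cited above.
infant_deaths = 500_000         # approx. milk-related infant deaths, 1850-1950
years = 1950 - 1850             # a 100-year span
print(infant_deaths / years)    # -> 5000.0 deaths per year, on average
```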

These days, death by drinking milk is exceedingly rare. Thanks to processes like pasteurization, milk is one of the safest substances you can drink. If anyone does get sick, it’s usually from drinking raw or unpasteurized milk. However, it’s so rare that most people don’t think about it. It’s just a normal part of how we manage our food and nourish ourselves.

I bring up milk because it nicely demonstrates what happens when we apply technology to improve the quality, safety, and abundance of our food. Despite what certain misguided critics may say, many of whom have probably never experienced extreme starvation, this has been an objective good for humanity, civilization, and the world as a whole.

Modern medicine and the Green Revolution, championed by the likes of Norman Borlaug, helped give us more efficient ways of producing massive quantities of food. Now, there’s another technological advancement brewing that might end up being more impactful. You’ve probably seen commercials for it already. It has many names, but for now, I’m just going to call it synthetic meat.

It’s almost exactly what it sounds like. It’s the process of producing meat through artificial processes, none of which involve the slaughtering of animals. For those concerned about animal welfare and environmental impacts, it’s the ultimate solution. At most, the animals contribute a few cells. The rest is grown in a laboratory. Nobody has to get hurt. Nobody has to go vegan, either.

It seems too good to be true and there are certainly aspects of synthetic meats that are overhyped. However, unlike other advancements like Neuralink or nanobots, this is already an evolving market. The first synthetic burger was made and consumed in 2013. It was the culmination of a long, laborious effort that cost upwards of $300,000.

Those costs soon came down and they came down considerably. By 2017, the cost of that same meat patty was around $11. People have paid much more for expensive caviar. That’s impressive progress for something that’s still a maturing technology with many unresolved challenges. With major fast food companies getting in on the game, the technology is likely to progress even more.
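To put that decline in perspective, here’s the back-of-the-envelope version, using the rounded costs above:

```python
# Rough scale of the cost decline for a lab-grown burger, per the figures above.
cost_2013 = 300_000     # approx. cost of the first cultured burger (USD)
cost_2017 = 11          # approx. cost of a comparable patty four years later (USD)
print(f"{cost_2013 / cost_2017:,.0f}x cheaper")   # -> roughly 27,273x cheaper
```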

It’s here where I want to make an important point about this technology. Regardless of how you feel about it or why it’s being developed, there’s one aspect to it that’s worth belaboring.

We should embrace synthetic meat.

In fact, we should embrace this technology faster than others because the benefits of doing so will only compound.

I say this as someone who has tried an Impossible meat burger. It’s not terrible. I wouldn’t mind eating them regularly if they were the only option available. That said, you can still tell it’s not traditional beef. That’s because this meat isn’t the kind of cultured meat that’s grown in a lab. It’s assembled from plant proteins and various other well-known substances.

Ideally, synthetic meat wouldn’t just be indistinguishable from traditional beef. It would actually be safer than anything you could get naturally. Meat grown in a lab under controlled conditions can be kept free of food-borne illnesses, which are still a problem in meat production. It can also more effectively remove harmful byproducts, like trans fats.

In theory, it might also be possible to produce meat with more nutrients. Imagine a burger that’s as healthy as a bowl of kale. Picture a T-bone steak that has the same amount of nutrients as a plate of fresh vegetables. That’s not possible to do through natural means, but in a lab where the meat is cultured at the cellular level, it’s simply a matter of chemistry and palatability.

Meat like that wouldn’t just be good for our collective health. It would be good for both the environment and the economy, two issues that are rarely aligned. Even if you don’t care at all about animal welfare, synthetic meat has the potential to produce more product with fewer resources. On a planet of over 7.6 billion people, that’s not just beneficial. It’s critical.

At the moment, approximately 70 percent of the world’s agricultural land is dedicated to meat production. In terms of raw energy requirements, meat takes considerably more energy to produce than plants, and the same goes for water consumption. Making meat in its current form requires a lot of resources, and with a growing population, the math is working against us.

Say what you want about vegetarians and vegans when they rant about the meat industry. From a math and resources standpoint, they have a point. However, getting rid of meat altogether just isn’t feasible. It tastes too good and it has too many benefits. We can’t make people hate the taste of burgers, but we can improve the processes on how those burgers are made.

Instead of industrial farms where animals are raised in cramped quarters, pumped full of hormones, and raised to be slaughtered, we could have factories that produce only the best quality meat from the best animal cells. It wouldn’t require vast fields or huge quantities of feed. It would just need electricity, cells, and the assorted cellular nutrients.

Perhaps 3D printing advances to a point where specific cuts of meat could be produced the same way we produce specific parts for a car. Aside from producing meat without having to care for and then slaughter animals, such a system would be able to increase the overall supply with a smaller overall footprint.

Needing less land to produce meat means more land for environmental preservation or economic development. Farming, both for crops and for meat, is a major contributor to deforestation. Being able to do more with less helps improve how we utilize resources, in general. Even greedy corporations, of which the food industry has plenty, will improve their margins by utilizing this technology.

Increased supply also means cheaper prices, and if the taste is indistinguishable from traditional meat, then most people are going to go with it, regardless of how they feel about it. There will still be a market for traditional, farm-raised meats from animals, just as there’s a market for non-GMO foods. However, as we saw with the Green Revolution in the mid-20th century, economics tends to win out in the long run.

It’s a promising future for many reasons. There are many more I could list relating to helping the environment, combating starvation, and improving nutrition. Alone, they’re all valid reasons to embrace this technology and seek greater improvements. If I had to pick only one, though, it’s this.

If we don’t develop this technology, then these delicious meats that we love could be exceedingly scarce or prohibitively expensive in the future.

Like I said earlier, the way we currently produce meat is grossly inefficient. At some point, the demand for meat is going to exceed the current system’s capacity to produce it in an economical way. At that point, this delicious food that we take for granted might not be so readily available and the substitutes might not be nearly as appetizing.

The issue becomes even more pressing if we wish to become a space-faring civilization, which will be necessary at some point. If we still want to enjoy burgers, chicken wings, and bacon at that point, we’ll need to know how to make it without the vast fields and facilities we currently use. Otherwise, we might be stuck dining on potatoes like Matt Damon in “The Martian.”

While the situation isn’t currently that urgent, this is one instance where a new technology deserves an extra push. You don’t have to be a major investor in companies like Beyond Meat or Impossible Foods. Just go out of your way to try one of these new synthetic meat products. Let the market know that there’s demand for it and the machinations of capitalism will do the rest.

I understand that our inner Ron Swanson will always have a craving for old fashioned burgers, steaks, and bacon. Those things don’t have to go away completely, just as traditional farming hasn’t gone away completely. However, when a particular technology already exists and has so many potential benefits, it’s worth pursuing with extra vigor.

The planet will benefit.

The people will benefit.

The animals will benefit.

Our society, as a whole, will benefit.


Filed under biotechnology, CRISPR, Current Events, Environment, futurism, health, technology

The First CRISPR Patients Are Living Better: Why That Matters After 2020

It’s been a while since I’ve talked about CRISPR, biotechnology, and the prospect of ordinary people enhancing their biology in ways straight out of a comic book. In my defense, this past year has created plenty of distractions. Some have been so bad that my usual optimism of the future has been seriously damaged.

While my spirit is wounded, I still have hope that science and technology will continue to progress. If anything, it’ll progress with more urgency after this year. A great many fields are bound to get more attention and investment after the damage done by a global pandemic.

We can’t agree on much, but we can at least agree on this. Pandemics are bad for business, bad for people, bad for politics, and just objectively awful for everyone all around, no matter what their station is in life.

There’s a lot of incentive to ensure something like this never happens again is what I’m saying. While we’re still a long way from ending pandemics entirely, we already have tools that can help in that effort. One is CRISPR, a promising tool I’ve talked about in the past. While it wasn’t in a position to help us during this pandemic, research into refining it hasn’t stopped.

Despite all the awful health news of this past year, some new research has brought us promising results on the CRISPR front. In terms of actually treating real people who have real conditions, those results are in and they give us reason to hope.

One such effort involved using CRISPR to help treat people with Sickle Cell Disease, a genetic condition that hinders the ability of red blood cells to carry oxygen. It affects over 4 million people worldwide and often leads to significant complications that can be fatal.

Since CRISPR is all about tweaking genetics, it’s a perfect mechanism with which to develop new therapies. Multiple patients have undergone experimental treatments that utilize this technology. In a report from NPR, the results are exceeding expectations for all the right reasons.

NPR: First Patients To Get CRISPR Gene-Editing Treatment Continue To Thrive

At a recent meeting of the American Society for Hematology, researchers reported the latest results from the first 10 patients treated via the technique in a research study, including Gray, two other sickle cell patients and seven patients with a related blood disorder, beta thalassemia. The patients now have been followed for between three and 18 months.

All the patients appear to have responded well. The only side effects have been from the intense chemotherapy they’ve had to undergo before getting the billions of edited cells infused into their bodies.

The New England Journal of Medicine published online this month the first peer-reviewed research paper from the study, focusing on Gray and the first beta thalassemia patient who was treated.

“I’m very excited to see these results,” says Jennifer Doudna of the University of California, Berkeley, who shared the Nobel Prize this year for her role in the development of CRISPR. “Patients appear to be cured of their disease, which is simply remarkable.”

Make no mistake. This is objectively good news and not just for people suffering from sickle cell disease.

Whenever new medical advances emerge, there’s often a wide gap between developing new treatments and actually implementing them in a way that makes them as commonplace as getting a prescription. The human body is complex. Every individual’s health is different. Taking a treatment from the lab to a patient is among the biggest challenges in medical research.

This news makes it official. CRISPR has made that leap. The possible treatments aren’t just possibilities anymore. There are real people walking this planet who have received this treatment and are benefiting because of it. Victoria Gray, as referenced in the article, is just one of them.

That’s another critical threshold in the development of new technology. When it goes beyond just managing a condition to helping people thrive, then it becomes more than just a breakthrough. It becomes an opportunity.

It sends a message to doctors, researchers, and biotech companies that this technology works. Some of those amazing possibilities that people like to envision aren’t just dreams anymore. They’re manifesting before our eyes. This is just one part of it. If it works for people with Sickle Cell Disease, what other conditions could it treat?

I doubt I’m the first to ask that question. As I write this, there are people far smarter and more qualified than me using CRISPR to develop a whole host of new treatments. After a year like 2020, everyone is more aware of their health. They’re also more aware of why science and medicine matter. It can do more than just save our lives. It can help us thrive.

We learned many hard lessons in 2020, especially when it comes to our health. Let’s not forget those lessons as we look to the future. This technology is just one of many that could help us prosper in ways not possible in previous years. We cheered those who developed the COVID-19 vaccine. Let’s start cheering those working on new treatments with CRISPR.


Filed under biotechnology, CRISPR, futurism, health, technology

Why We Should Treat Our Data As (Valuable) Property

Many years ago, I created my first email address before logging into the internet. It was a simple AOL account. I didn’t give it much thought. I didn’t think I was creating anything valuable. At the time, the internet was limited to slow, clunky dial-up that had little to offer in terms of content. I doubt anyone saw what they were doing as creating something of great value.

I still have that email address today in case you’re wondering. I still regularly use it. I imagine a lot of people have an email address they created years ago for one of those early internet companies that used to dominate a very different digital world. They may not even see that address or those early internet experiences as valuable.

Times have changed and not just in terms of pandemics. In fact, times tend to change more rapidly in the digital world than they do in the real world. The data we created on the internet, even in those early days, became much more valuable over time. It served as the foundation on which multi-billion-dollar companies were built.

As a result, the data an individual user imparts onto the internet has a great deal of value. You could even argue that the cumulative data of large volumes of internet users is among the most valuable data in the world.

Politicians, police, the military, big businesses, advertising agencies, marketing experts, economists, doctors, and researchers all have use for this data. Many go to great lengths to get it, sometimes through questionable means.

The growing value of this data raises some important questions.

Who exactly owns this data?

How do we go about treating it from a legal, fiscal, and logistical standpoint?

Is this data a form of tangible property, like land, money, or labor?

Is this something we can exchange, trade, or lease?

What is someone’s recourse if they want certain aspects of their data removed, changed, or deleted?

These are all difficult questions that don’t have easy answers. It’s gotten to a point where ownership of data has been an issue among candidates running for President of the United States. Chances are, as our collective data becomes more vital for major industries, the issue will only grow in importance.

At the moment, it’s difficult to determine how this issue will evolve. In the same way I had no idea how valuable that first email address would be, nobody can possibly know how the internet, society, the economy, and institutions who rely on that data will evolve. The best solution in the near term might not be the same as the best solution in the long term.

Personally, I believe that our data, which includes our email addresses, browsing habits, purchasing habits, and social media posts, should be treated as personal property. Like money, jewels, or land, it has tangible value. We should treat it as such and so should the companies that rely on it.

However, I also understand that there are complications associated with this approach. Unlike money, data isn’t something you can hold in your hand. You can’t easily hand it over to another person, nor can you claim complete ownership of it. To some extent, the data you create on the internet was done with the assistance of the sites you use and your internet service provider.

Those companies could claim some level of ownership of your data. It might even be written in the fine print of those user agreements that nobody ever reads. It’s hard to entirely argue against such a claim. After all, we couldn’t create any of this data without the aid of companies like Verizon, AT&T, Amazon, Apple, Facebook, and Google. At the same time, these companies couldn’t function, let alone profit, without our data.

It’s a difficult question to resolve. It only gets more difficult when you consider laws like the “right to be forgotten.” Many joke that the internet never forgets, but it’s no laughing matter. People’s lives can be ruined, sometimes through no fault of their own. People’s private photos have been hacked and shared without their permission.

In that case, your data does not at all function like property. Even if it’s yours, you can’t always control it or what someone else does with it. You can try to take control of it, but it won’t always work. Even data that was hacked and distributed illegally is still out there and there’s nothing you can do about it.

Despite those complications, I still believe that our data is the individual’s property to some extent, regardless of what the user agreements of tech companies claim. Those companies provide the tools, but we’re the ones who use them to build something. In the same way a company that makes hammers doesn’t own the buildings those hammers help build, these companies act as the catalyst and not the creator.

Protecting our data, both from theft and from exploitation, is every bit as critical as protecting our homes. An intruder into our homes can do a lot of damage. In our increasingly connected world, a nefarious hacker or an unscrupulous tech company can do plenty of damage as well.

However, there’s one more critical reason why I believe individuals need to take ownership of their data. It has less to do with legal jargon and more to do with trends in technology. At some point, we will interact with the internet in ways more intimate than a keyboard and mouse. The technology behind a brain/computer interface is still in its infancy, but it exists and not just on paper.

Between companies like Neuralink and the increasing popularity of augmented reality, the way we interact with technology is bound to get more intimate/invasive. Clicks and link sharing are valuable today. Tomorrow, it could be complex thoughts and feelings. Whoever owns that stands to have a more comprehensive knowledge of the user.

I know it’s a common refrain to say that knowledge is power, but when the knowledge goes beyond just our browsing and shopping habits, it’s not an unreasonable statement. As we build more and more of our lives around digital activities, our identities will become more tied to that data. No matter how large or small that portion might be, we’ll want to own it as much as we can.

It only gets more critical if we get to a point where we can fully digitize our minds, as envisioned in shows like “Altered Carbon.” At some point, our bodies are going to break down. We cannot preserve them indefinitely for the same reason we can’t preserve a piece of pizza indefinitely. However, the data that makes up our minds could be salvaged, but that opens the door to many more implications.

While that kind of technology is a long way off, I worry that if we don’t take ownership of our data today, then it’ll only get harder to do so in the future. Even before the internet, information about who we are and what we do was valuable.

This information forms a big part of our identity. If we don’t own that, then what’s to stop someone else from owning us and exploiting that to the utmost? It’s a question that has mostly distressing answers. I still don’t know how we go about staking our claim on our data, but it’s an issue worth confronting. The longer we put it off, the harder it will get.


Filed under Artificial Intelligence, biotechnology, Current Events, futurism, Neuralink, politics, technology

Appreciating Dr. Maurice Hilleman: The Man Who Saved Millions Of Lives (With Vaccines)

As someone who regularly consumes superhero media of all kinds, I try to appreciate the real heroes in the real world who regularly save countless lives. Most carry themselves without superpowers, flashy costumes, or charisma on par with Robert Downey Jr. or Christopher Reeve. They just do the work that needs doing to help people who will never know their name.

A couple years ago, I made a tribute to Dr. Norman Borlaug, the famed agricultural scientist who helped usher in an agricultural revolution. This man, who most have never heard of, has saved millions of lives by helping the world produce more food, reduce famine, and combat world hunger. The amount of good this man has done for the world cannot be overstated.

In that same spirit, I’d like to highlight another individual who I doubt most people have heard of. He’s another doctor who, through his work, has helped save millions of lives, many of them children. It’s because of this man that millions of children born today don’t become ill with diseases that had ravaged humanity for generations.

That man is Dr. Maurice Hilleman. While his notoriety is not on the same level as Dr. Borlaug, I have a feeling his profile will rise considerably after the events of 2020. That’s because Dr. Hilleman currently holds the record for developing the most vaccines of any doctor in history.

In total, he helped develop more than 40.

Of those vaccines, 8 are still routinely recommended by doctors today. They combat terrible diseases like measles, mumps, hepatitis A and B, and chickenpox.

It’s a level of productivity that is unparalleled today. As a result of these vaccines, approximately 8 million lives are saved every year. Even though he died in 2005, he continues to save lives with each passing year through his work. Like Dr. Borlaug, his heroism only compounds with time. Even Tony Stark can’t boast that.
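To get a sense of how that compounds, here’s a rough back-of-the-envelope calculation. The per-year figure is the one cited above; the 2020 endpoint is just an assumption for when this was written, and it pretends the rate holds perfectly steady:

```python
# Back-of-the-envelope: cumulative lives saved since Dr. Hilleman's death,
# assuming the ~8 million per year figure cited above holds steady.
lives_per_year = 8_000_000
years_since_death = 2020 - 2005            # assumed endpoint: "as of this writing"
print(f"{lives_per_year * years_since_death:,}")   # -> 120,000,000
```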

Most people alive today don’t realize just how bad it was before these vaccines were developed. Many diseases, some of which you’ve probably never heard of, were rampant. Before Dr. Hilleman helped develop the vaccine, measles alone infected between 3 and 4 million people every year in the United States, killing between 400 and 500 people annually.

Children and the elderly were especially vulnerable. It was once just a fact of life that these diseases would come around and kill a sizable number of children. It was as unavoidable as bad weather.

Take a moment to imagine life in those conditions. One day, you or your children would just get sick and there was nothing you could do to prevent it. That was life before these remarkable advances came along.

That’s why when people say that nothing has saved more lives than vaccines, they’re not peddling propaganda. They’re just sharing the results of basic math. It’s because of men like Dr. Maurice Hilleman that these numbers are what they are. However, his name is not well-known, even in a field that has become more prominent.

Most people know who Edward Jenner is and appreciate how many lives he saved by combating Smallpox.

Most people know who Jonas Salk is and appreciate how many people he helped by developing a polio vaccine.

Now, what these men did was remarkable. They certainly deserve the praise and admiration they receive for developing their respective vaccines. However, Dr. Maurice Hilleman still deserves to be in that same echelon. For the number of vaccines he helped develop and the legacy he left, he certainly put in the work and accomplished a great deal.

The diseases Dr. Hilleman battled might not have been as high-profile as Smallpox or polio, but they were every bit as damaging. That makes it all the more frustrating to see efforts like the anti-vaxx movement take hold, which has led to resurgences of diseases like measles in certain communities. That is not the direction we should be going right now.

In the midst of a historic pandemic, the work of medical science and those who practice it has never been more critical. This might be the best possible time to acknowledge the work of men like Dr. Hilleman. Even after this pandemic has passed, we should all appreciate work like his even more.

None of us have superpowers like Spider-Man or Superman.

Most of us will never be as rich, smart, or resourceful as Iron Man or Batman.

Dr. Hilleman had none of this. Just like Dr. Borlaug, he came from a poor family. At one point, he didn’t have enough money for college. He still persevered and he still managed to do the work that went on to save millions of lives. It might not be a hero’s story on par with the Marvel Cinematic Universe, but it’s still a special kind of heroism.

So, on this day as we anxiously wait for this pandemic to end, take a moment to appreciate the work of Dr. Maurice Hilleman. It’s because of him that such pandemics are so rare. It’s also because of him that this current pandemic won’t take nearly as many lives as it could’ve.


Filed under biotechnology, Current Events, health, technology