
Transhumanism, AI, Singularity - How far is too far?

Julie Reeder

Publisher

Life as we know it is changing, probably more quickly than at any time in history. That is just a hunch, based on how quickly technology is advancing and on how, culturally, we seem more distracted by controversies and entertainment on our phones and less concerned and informed about important issues that carry great consequences for our culture, our future, and our very existence.

We don’t want to be “political” or “religious,” so ignorance is bliss as to how corrupt our national political system, agencies, and leaders have become. We suspect something isn’t right, yet we remain uninformed about things like regulatory capture (industry controlling and funding government agencies that were once objective and served the public good), which actually brings harm to our people.

We were tested during COVID and found willing to give up our freedom of speech, along with other Constitutional freedoms, out of fear of being exposed to “disinformation.”

We are fighting for medical freedom over our own bodies, and our doctors are losing their licenses for holding educated, yet differing, politically incorrect opinions that depart from what the state and federal governments have mandated as truth. Now the U.S. is considering giving away our freedom to the World Health Organization when another “pandemic” happens.

We would all be under the governance and control of the WHO, whose leader was placed in his position by China and is not even a medical doctor. Regardless of who the leader is or who put him there, we should be in control of our own country through people we vote into office, because at the root it’s always about control. And very often the ends justify the means.

As things move quickly and leaders from the President and Congress to the WHO and the World Economic Forum scramble to control people, how much easier would that control be in a world where we are all transhuman?

Soon the fight may extend past freedom of speech and personal medical freedom into transhumanism, artificial intelligence (AI), and the singularity. These are hot, complex topics that have been at the forefront of scientific and philosophical discussion for decades, but never more so than right now. The idea of using technology to enhance human abilities and create intelligent machines has both advantages and dangers.

Have you ever heard of Human 2.0? Some believe it is the next phase of evolution, in which we create machines that are more intelligent than we are. Elon Musk, even as he invests in AI, has repeatedly warned that it could literally end our civilization and humanity.

Advantages of transhumanism and AI

Transhumanism is the belief that humans can and should use technology to enhance their physical, intellectual, and emotional capabilities. This philosophy promotes the use of genetic engineering, cybernetics, and other technologies to improve human lives. One of the primary advantages of transhumanism is the potential for increased lifespan and improved health. By using technology to enhance the human body, scientists could potentially eliminate many of the diseases and conditions that shorten human lifespans.

Similarly, the development of AI has the potential to revolutionize virtually every aspect of our lives. AI algorithms can analyze vast amounts of data and identify patterns that humans would not be able to detect. This technology can be used to improve healthcare, education, and many other industries. For example, AI-powered healthcare could help doctors diagnose and treat diseases more accurately and efficiently, potentially saving countless lives.

Dangers of transhumanism and AI

However, there are also many dangers. One of the most significant risks is that the development of these technologies could exacerbate existing social inequalities. For example, if only wealthy individuals have access to life-extending technologies, it could widen the gap between the rich and poor. Similarly, if AI-powered systems are primarily used by corporations and governments, it could lead to further concentration of power and decreased accountability.

There are also concerns about the potential for AI to become too intelligent and out of control. This scenario, known as "superintelligence," is often portrayed in science fiction as a catastrophic event that could lead to the extinction of humanity. While many experts believe that this outcome is unlikely, there is still a significant risk that AI could be used to carry out harmful actions, intentionally or unintentionally.

It makes me think of the Matrix movies where almost all the humans are hooked up to a pod living in a dream reality while serving as batteries for the matrix. Civilization and humanity are destroyed when humans no longer have the freedom to be human, to make decisions, to love, to travel, to experience our world, to serve and work for each other rather than a centralized power, and when we lose the ability to have a soul. It’s one thing we believe separates us from machines and animals. We have a soul. We are creative, spiritual beings and sometimes unpredictable.

Futurist Ray Kurzweil, author of the book The Age of Spiritual Machines, predicted in 1999 that machines with human-level intelligence would be available in affordable computing devices within a couple of decades, revolutionizing most aspects of life. He says nanotechnology will augment our bodies and cure cancer even as humans connect to computers via direct neural interfaces or live full-time in virtual reality.

Kurzweil predicts the machines “will appear to have their own free will” and even “spiritual experiences.” He says humans will essentially live forever as humanity and its machinery become one and the same. He predicts that intelligence will expand outward from Earth until it grows powerful enough to influence the fate of the universe, and goes on to say that the “Singularity” will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. There is disagreement about whether computers will one day be conscious.

Others believe that transhumanism, AI, the singularity, and Human 2.0 are the dreams and inventions of wealthy men who want to find a way to cheat death, live forever, and be their own God. It is also feared that, while there could be some advantages, people could be controlled through the internet, electricity, or other means.

How far is too far? We already have the knowledge of the world at our fingertips on our mobile devices. We already have technology that aids our hearing, eyesight, movement, and more. Neuralink takes things even further. We have to consider how far we can go before we become someone, or something, else entirely. And how easily can we be controlled? How compliant will we become by choice? What would be the implications for our society and our freedoms? What does it mean to be human?

These are complex issues that need to be researched and debated. People would do well to unhook from the matrix, ask questions, and be cautious about who is in control as we shape our technological future.
