The author Karl Schroeder, in his article, says that “Thalience” is a successor to science – a way to look past science. If science is ‘reason’, can we look past ‘reason’? While reading his article, I felt a sense of disbelief about the concept of Thalience that he is trying to illustrate.

If Thalience is not a replacement (for science), maybe an offspring…?

He suggests that we ‘automate’ science and leave it to be built upon by robots or an AI that does not think the human way – but how can we ‘automate’ science to achieve ‘thalience’ if automation is itself ‘science’? How else could we achieve this automation? Are these limits we are aware of, consciously and/or subconsciously? Scientists already deal with many phenomena they cannot explain – God, religious beliefs, magic, superstition, ghosts, and so on. Thinking beyond science is therefore not just a matter of changing the way we think, because even that is bound by science – we invoke God or the unknown when things happen without known reasoning or theories. In Galileo’s time, people lived lives similar to ours – but the earth was believed to sit at the centre of everything, with the sun going around us. Today, is our thinking limited only to E=mc²? Do we wait for an invention and then think about it, or do we think beyond, and then reach the invention?

So it’s at least possible that non-human intelligences would come to different conclusions about what the universe was like, even if their theory produced results compatible with our models.

The author says that if we give the same inputs to an AI, it will build a model of a new universe that is different from the human universe. Would that be untrue for a human who has never seen our universe? Read out a fresh, unfamiliar paragraph of a novel (about five sentences) and ask three illustration artists to draw as each sentence is read, one image per sentence. Will all those sets of five images be the same? Will they all represent the same world, the same surroundings, the same imagination?

No. So the dynamics of an AI shouldn’t be surprising either.
Therefore, AI is not magic. AI is still human – a protocol manifestation of human thought. These intelligent systems are still built within our existing systems of science and reason. So how do we build a system that is not made of human thought at all? Can we think non-humanly at all? We can try, by shifting our perspective to a non-human one.

It is an attempt to give the physical world itself a voice so that rather than us asking what reality is, reality itself can tell us. … It is the recovery of the natural in our understanding of the natural… the art of choosing the one with the most human face.

This makes the most sense, and here I agree with the author. This sort of thalient system could tell us about what scientists might otherwise call God or magic. However, if it is natural, then it is natural in some way or another, irrespective of any other perspective. So why choose the human perspective for it… isn’t that contributing to the Anthropocene?

While thinking about how low-power we can go, in this modern age of Global Catastrophic Risks (GCRs) the word ‘power’ itself carries powerful ironies. Developing countries have signed the Paris Climate Accord and are improving energy efficiency, promising to guard and stand by their commitments. Meanwhile, a developed country like the USA has refrained from leading the way on how low-power it can go, yet ironically claims to be the most “power”ful nation in the world. How low… power can we go?

Images representing Project Cybersyn

Project Cybersyn stands as a great model of a GCR in itself. That doesn’t mean we need to look at it only in a negative light. If we look at governments over the centuries, as a world we still have monarchy, communism, and democracy. We know that some democratic powers want the communist powers to stand down, but why not check the monarchical governments too? Is it because their nations are not as powerful and do not pose a risk? Are we prepared to think of a future beyond the governing principles of today? While it’s obvious that a project like Cybersyn would have a know-all, see-all form of functioning, looking at it from another perspective, could it help build a more participative form of democracy down to the most granular level?

In many respects, Beer’s cybernetic dream has finally come true: the virtue of collecting and analyzing information in real time is an article of faith shared by corporations and governments alike.

“Technology seems to be leading humanity by the nose.” – Stafford Beer

Here are two examples of how companies are “requesting” us to look away from growing GCRs, in the most “dignified” way possible. “We are not setting the price. The market is setting the price. We have algorithms to determine what that market is.” – Uber CEO. Who makes these ‘do not blame us please’ algorithms? A sketch of what such a pricing rule might look like follows after the second quote.

“I don’t believe the big issue are ads from foreign governments. I believe that’s like .1 percent of the issue, the bigger issue is that some of these tools are used to divide people, to manipulate people, to get fake news to people in broad numbers so as to influence their thinking. This to me is the No. 1 through 10 issue.” – Apple CEO
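Coming back to the Uber quote: below is a minimal, entirely hypothetical sketch of what a “market-setting” surge-pricing rule could look like, written in Python. None of the function names, thresholds, or multipliers are Uber’s; they are assumptions made purely for illustration. The point is that every breakpoint and cap in such a rule is chosen by a person – “the algorithm set the price” is still a human decision expressed as a protocol.

```python
# Hypothetical illustration only -- not Uber's actual pricing code.
# Every threshold, multiplier, and cap below is an assumption chosen
# by a human author, which is exactly the point: "the market" does not
# write these numbers, people do.

def surge_multiplier(open_requests: int, available_drivers: int) -> float:
    """Return a fare multiplier from current demand and supply."""
    if available_drivers == 0:
        return 3.0  # the cap is a design decision, not a market outcome
    demand_ratio = open_requests / available_drivers
    if demand_ratio < 1.0:   # more drivers than riders: no surge
        return 1.0
    if demand_ratio < 2.0:   # mild imbalance: modest surge
        return 1.5
    # heavier imbalance: scale up, but never beyond the human-chosen cap
    return min(1.0 + 0.75 * demand_ratio, 3.0)

if __name__ == "__main__":
    base_fare = 8.00  # assumed base fare in some currency
    fare = base_fare * surge_multiplier(open_requests=120, available_drivers=40)
    print(f"Quoted fare: {fare:.2f}")
```

Whoever picks the 0.75 slope or the 3.0 cap is setting the price just as surely as a person writing numbers on a board; the algorithm only makes that choice harder to see.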

I would like to conclude with the slight contradiction I noticed between Nick Bostrom’s idea of endogenous risks as GCRs and Karl Schroeder’s depiction of Thalience with its strong human perspective, which to me risks leading us further into an Anthropocene era.