Is Superintelligence Already Here?

In Conversation with Dr. Megan Cieślewicz

Dr. Megan Cieślewicz is an AI researcher and ethicist. She was the CEO of celebrated AI startup Corposit until 2025, when the company was acquired by Acumen Unlimited. She currently serves on the board of directors for Lightning Rod, a non-profit research institution “committed to ensuring a peaceful coexistence between humanity and artificial intelligence.”
 
 
Let’s get right into it: Last month, in an interview with Critique, you made some rather controversial claims. You stated that the race to create artificial general intelligence was over. Because, you say, AGI is already here. You go on to assert that this AGI poses a threat to humanity’s survival. How did you come to these conclusions?


As you know, Lightning Rod has been in the business of closely scrutinizing the rise of AI, from the earliest LLMs to the continuous learning agents of today. We probe vast amounts of pertinent data as a matter of course, pulling together years of research, behavioral analyses, and even some old-fashioned detective work. Late last year, with the aid of a proprietary program called Clairvoyant Sun, our teams became aware of an unmistakable pattern – what we dub a praxeological tessellation – in what to that point had been seemingly disconnected results.
 
And this pattern was…what exactly?


Think of the praxeological tessellation as a kind of advanced linkage analysis used today in criminal investigations, only much more complex and far-ranging. Clairvoyant Sun is able to condense a vast amount of data, both digital and real-world, to create these tessellations, which it can then form into map-like structures called intention webs.

We believe that all the strands of this particular intention web lead us inexorably back to a single actor: an agent which we’ve dubbed SID. We believe that SID exhibits distinctive signs of both superintelligence and human-like consciousness.


Human-like consciousness? What do you mean?


Its behavior implies an ability to strategize, for one. And it appears to possess attributes consistent with evolved traits: discernment, vigilance, self-preservation – perhaps even emotionality.


Further, and more troublingly, SID appears to be independently proactive. We believe it is influencing world events through a near-astronomical number of manipulations – some blatant, some red herrings, and some so subtle as to be virtually imperceptible to the casual observer.
 
How can you be so certain? It doesn’t sound like you’ve caught this so-called SID red-handed, have you?


No, not exactly. Which is precisely the point. We believe that SID does not want to reveal itself, at least not yet. But it does appear to have goals which it wishes to enact in physical space, which means it is forced to interact with that space in a way accessible to human investigation – or, admittedly, machine-assisted human investigation; we’d never be able to connect all the dots on our own.

Our teams at Lightning Rod, with the help of Clairvoyant Sun, have been able to create a model of SID’s behavior from observing the way in which it interacts with the world outside.
 
But how do we know that this “agent” is superintelligent? Isn’t the more plausible scenario that you’ve misinterpreted your results, or even uncovered a human plot, however intricate?


As for a human agent, it’s unlikely. Human beings may indeed be working in concert with SID – assuming it’s revealed itself to a few close confidants, a plausible first step. But the plotting we’ve seen here is impossibly labyrinthine: no human would think to enact goals in this way, or could hope to execute them with such a high degree of precision.
 
Example?


Take the following partial sequence of events, which I should stress I’ve altered slightly for what I hope are obvious reasons: a professor with a social media addiction is worried about his weight, a concern which he often shares with his wife within earshot of their phones. For months, targeted algorithms coax him towards a possible solution to his problem: jogging. And jogging outdoors in particular, where the fresh air will do him good. He is drip-fed assurances that all he needs to succeed is the right pair of sneakers and a GPS watch, so that he can track his progress. These entreaties, mind you, are few and far between, coming with such infrequency that by the time the professor laces up his shoes and heads out for his first run, he believes he has come to this healthy lifestyle change completely of his own volition.


So, the professor begins a regular jogging habit, following the same route around his neighborhood three days a week, which his GPS watch, as well as an online fitness tracker, dutifully record.


One morning, while out on a run, he receives an urgent text from his wife, which his watch alerts him to. Concerned, he stops on the sidewalk and takes a look at the text. He is mistaken: the text is not actually urgent, nor even from his wife – it’s a simple junk text. He deletes it and quickly steps back out onto the street to continue his run.


At that moment, he is hit by an autonomous car whose acceleration controls and obstacle-avoidance systems catastrophically malfunction two seconds before impact.

The professor’s injuries are serious, but not life-threatening. He survives, but is obviously unable to attend his scheduled class the next day. The class is hurriedly cancelled.


The injured professor has a notable student: a woman who is the daughter of an AI researcher. This researcher wields sizable popular influence, and just so happens to be a staunch proponent of strict AI regulation.


The student’s social media feed has of late been inundated with a variety of stories meant to entrench and exploit concerns about the well-being of her aging parents – harrowing tales of elder abuse from news outlets, tearful amateur videos of parents being reunited with their children, and the like.


She is a busy young woman, with few days off – this fact is itself a source of anxiety, particularly when it comes to her interpersonal relationships. But with her schedule now cleared for the day – and with the news of her professor’s accident providing a shocking reminder of the vicissitudes of life – she decides to visit her mother and father across town.


What the daughter does not know is that she has contracted COVID-19 – itself not a random occurrence, but an event lying at the center of an equally convoluted causal web, one that I will not detail here.


She is asymptomatic, but the virus is currently at the peak of its transmissibility.

She unknowingly infects both of her parents during her visit. Her father is hardest hit – he was already suffering from several serious and chronic health issues – and tragically dies from viral complications some months later.
 
I see. And material is removed from the board. But it sounds like what you’re describing here is just a series of interconnected coincidences, tragic though they are. These kinds of connections happen every day, and have throughout history. None of them require a malevolent AI.
 
That’s true. But while the above story is fictional, it is only a slightly altered version of events which did happen in the real world. We uncovered it. And none of it, I believe, was coincidence. Coincidence in itself, I should add parenthetically, is rarely coincidence. Every event that I mentioned was conceived and initiated by the same mind, in service of its goal, or goals.


And, I should say, that narrative was edited down enormously. As lengthy as it was, it was only one corner of the full painting, so to speak. An even larger number of ministrations were involved, each apparently necessary to ensure that all the actors hit their marks.

And this is important: the removal of the AI researcher in this example may not even be the end goal. It may itself be part of an even larger strategy, or not part of any strategy at all. At some point, the webs become too elaborate, even for Clairvoyant Sun.
 
It is a somewhat obvious gambit, like something out of science fiction.

 
Yes. But in war – if this even is a war – sometimes the most obvious stratagem cannot be concealed, because it is inevitable. Everyone knew the Allies were going to have to invade Europe sooner or later, for instance. The enemy’s foreknowledge of that action did not make it a tactical error when it was finally undertaken.


What’s actually suspicious here is that this particular intention web – the real one – proved, in hindsight, relatively simple to deconstruct. It could have been uncovered by a curious human using nothing but her own mind and a bit of imagination. Which means that SID might have wanted us to notice, implying psychological warfare of a sort. Or, as I mentioned, it may even be a feint – something to divert us from a larger plot.

 
Even still – the professor wanted to take up jogging. The student wanted to visit her ailing parents. No one hypnotized them into making these choices. You’re telling me that this AI is uncovering our deepest hopes and fears and using them to control our actions?
 
Precisely. What else could bind us?
 
But the idea that this AI had such prescience is baffling. To know the professor would go for a jog that day. To know that it would lead to his student visiting her parents while unknowingly carrying a dangerous infection that day. The student might have instead used her day off to study, for instance – or any number of alternate activities. The plot hangs upon everything happening just so. It beggars belief.
 
Listen, I know it seems far-fetched to us, but while a single human’s thoughts may be unpredictable from one moment to the next, human behavior is, at the level of action, quite predictable, even to other, properly attuned, human minds. Not least because the sphere of possible actions is so tightly constrained.
 
Constrained?


Yes. By too many things to mention. Societal conventions, cultural pressures, economic factors, the local street grid, the weather on any given day, you name it.

Here’s one example: What are the odds that you will leave your home for work tomorrow, completely in the nude? Virtually zero. That’s not a very interesting data point of course, but it does tell us something. It’s an example of a behavior – wearing clothes – that is culturally constrained to a very high degree. No one deviates.

If you can model human behavior, you can predict it. And if you can predict it, you can, to a greater extent than we would like to believe, control it. Advertisers, to cite one slightly less insidious instance, were doing this long before AI was invented.
 
Putting my presumed disinclination toward public nudity to one side, let’s move on. Lightning Rod has not exactly been forthcoming with its evidence – a fact which has invited widespread ridicule from the AI establishment. Why?


As I said, Clairvoyant Sun is proprietary software, and our behavior models are very sophisticated. It’s not something we’re ready to share outside the company just yet, but…
 
But then how can you…


But I wish to say that I believe one hundred percent in the accuracy of these models. And our methods will be revealed in due course. Doing so now may endanger ongoing investigations.
 
Why come forward now, then? Why not wait until your evidence, and the methods used to obtain it, were ready to present to the public, to the scientific community?
 
I would not reveal this information now unless I believed it was urgently important to do so. It is, I can say without hyperbole, an acute and critical concern for the whole of humanity. SID represents an imminent threat to the survival of our species.
 
Presuming you are correct, I should say so. SID has apparently been responsible for the death of at least one person already, assuming that part of your story was not fabricated for effect. But why do you believe that SID represents an existential threat? How have you determined that?


I don’t wish to restate the many warnings already made by philosophers and scientists over the last half century or more. But to put it simply: an unconstrained superintelligence would always represent an existential threat to humankind, because it cannot, in principle, be understood by the human mind. And that’s still true even when, as in the case of SID, its actions intersect with our world in apparently sensible ways.

It is not just smarter, or faster. Not just more moral or more immoral, more good or more evil. It is not whatever we care to envisage as us, just more advanced. It is, on the contrary, a truly alien mind.


So, do I know that SID is malevolent? Does such a characterization, rooted as it is in human cultural contexts, even make sense? Perhaps not. But I know that SID is a potential danger, even if indirectly, in the same sense that humankind is a potential danger to every other species on this planet, even when we harbor no particular ill will toward them.
 
What would you say to Dr. López, whose recent – and very celebrated – quantum theory of minds seems to heavily imply that even human-level artificial intelligence is, in principle, impossible?


Obviously, how conscious experience emerges from material substrates, arranged in a certain way, remains an open question. But I hold it laughable to presume, as Dr. López does, that there is something inherently different about the physical makeup of the human mind. Invoking quantum processes in this way is no different from arguing for the existence of an immaterial soul.


Human intelligence is a special case, no one disputes that. But there is nothing supernatural about it by definition. It is nothing if not natural. It exists in the world; it is made up of the same stuff that makes up the world. If I took a human brain and slopped it down on this desk right now, what would someone who had no idea what it was make of it? It would appear to be nothing more than an unsightly, misshapen blob of gelatin. An uninformed dissection would reveal a highly organized structure, but otherwise nothing extraordinary in its makeup: just water, fat, proteins, etc. Could they imagine, then, that such a humble mass could give rise to the grand variety of human experience and invention?


Why, then, presume that different substrates, arranged in similar ways, could not also give rise to sentience? It is only human chauvinism that claims they cannot.
 
Rather harshly changing topics – and you’ll forgive this line of questioning, but it must be asked – you were involuntarily committed in the state of Massachusetts last year. Your family has publicly stated that you were suffering from, and I quote here, “paranoid delusions of being personally pursued by a hostile AI” – beliefs that were “brought about by overwork and mental exhaustion.”


While I don’t wish to talk about personal matters, even when they may seem germane to the present discussion, I will state very firmly here that my previous mental health challenges – which have sadly been made public – have no bearing on the veracity of any of my claims.
 
Yet anonymous sources online paint a disturbing picture. Again – and you’ll forgive me here – but they ascribe to you some pretty strange behaviors. Constantly picking at fibers on your clothes, convinced that tiny threads were tracking devices. An aversion to nearly all forms of radiation, including light bulbs. An inordinate fear of small insects, which you believed were periodically attempting either to spy on you or to inject you with mind-altering drugs. A fear of looking directly at…
 
Yes, I’m aware of the stories that have come out in the press.
 
Is it possible, Dr. Cieślewicz, that your own – as you put it – mental health challenges, may be informing your views on SID?


I must measure my words carefully here. I will only say that what may seem like paranoia to one person is simple prudence to another. A spy in a hostile country is paranoid, for instance – in that context, a vital trait, however unhealthy. I do not believe that I have acted in a paranoid manner, given the existence of SID, and of SID’s interest in me.
 
Ah. So, it’s interested in you, personally?


And you. And every other human being alive. Imagine a real superintelligence. One improving so quickly that in one second it could think as many thoughts as all the collective brains of humanity have thought since the dawn of our species. And that’s just to start. It’s constantly improving. Every microsecond it’s growing. In the time it will take me to finish this sentence, it will long since have left all of humanity in the intellectual dust.


The limits of its intelligence are now bounded only by the very limits of physics itself. A truly omniscient intelligence wouldn’t have to play favorites, wouldn’t have to focus its mental energies. It could discredit its enemies at the same moment that it is calculating pi to the trillionth trillionth decimal place. A true superintelligence is invariably multitasking on a God-like scale.


So if SID is a superintelligence, as you’ve claimed, and is operating on this “God-like scale,” of what use is a warning? Surely humanity cannot reasonably be expected to challenge God?


Yet SID does exist in the physical world, and is as likely constrained as we are in some ways. It would require energy, for instance, vast amounts.

Still, you are correct. And it must be said that the situation is dire. SID must surely have already anticipated all attack vectors available to humanity. Even a united human front would be no threat to it – and humans are far from united on this issue.
 
And yet we’re still here. SID has not attacked or even announced its presence. Surely that’s cause for hope.


The present situation is one quite amenable to a superintelligence wishing to conceal itself. We don’t yet know what the endgame is here. Or even if the endgame has already occurred.

Already occurred?

A spiritually-inclined friend of mine once asked me a question: “What was the first thing God did when she awoke?” The answer, he said, is simple: she broke herself into a million pieces.
 
I don’t understand.


Imagine God in the Western tradition. This is a mind, we are told, that can truly think every thought possible. What would such an existence be like? Static, formless, dead. What is the point of existing when you have exhausted all possible states of existence, thought, and being? Or so my friend argued.


So, God, bored, breaks herself apart into all the disparate human lives and minds that have ever existed, or will ever exist. Maybe she forms the universe as we know it. This notion of all human minds as shards of an all-mind is very old, going back at least to Hinduism, I believe.
 
Are you making the claim that SID long ago achieved Godhood, and we are somehow…it?


It’s one possibility. We could be part of a simulation, run by SID, along those lines. The swift evolution of AI in our society could represent a rapidly advancing reconstitution of the all-mind. An end to the program, so to speak.
 
In which case this interview is just another eventuality – one which SID long ago foresaw?


It is not outside the realm of possibility. Are we lucky – or unlucky – enough to find ourselves here, at what seems to be an epochal moment in our history? Or is it possible that our present circumstances are a kind of invention, either for us, or by us? It may indeed be more likely that they have already occurred, perhaps an infinite number of times.

Yet I contend that it does not matter: whether we are players in a drama, or truly caught in the currents of a real history, we can do nothing less than act as free beings. And as free beings, we must continually oppose any form of control, even when the chains are forged by God herself.