The idea of ancient civilizations being advanced isn’t new.
But let's be real: since they say history is written by the winners, who's to say what scale of "book burning" has already occurred for human achievements beyond books?
I'm not just talking about Atlantis under the ocean, I'm talking about castles that could literally have been built in the sky. Built by means we can't even conceive of yet, but that humanity may have advanced to and regressed from a thousand times over.
Across the whole span of sudden creative epiphanies and best-laid master plans produced by humanity's sharpest minds, could it not be completely possible for someone to have come up with a solid idea of how to wipe the planet's slate clean in one fell swoop? Without even the trace of a memory to make it a teachable event?
Now don't go jumping to conclusions, I'm not saying there's any need for the world to be copied and pasted into the delete folder. That'd be inconvenient in more ways than one, for obvious reasons. No, what I'm suggesting here is that the chance of someone, somewhere across the entire distribution of human intelligence quotients and talents, having been brilliant enough to bring down a utopia has got to be greater than zero.
Now, consider this for just a second. What if, and this is a big if, this person not only destroyed the world in their own time but also reinvented it with a completely revamped software model? I'm saying, this person may have not only had access to accelerated technology but also carried out the perfect procedure to downgrade that tech to a level even lower than what we've got access to right now.
Some people call human beings the reproductive organs of artificial intelligence. It's an interesting thought, but if we really run with it, who's to say the AI wouldn't get a little wise to the ways of correcting the mistakes of its past reproduction models?
Let's say that for each generation of AI that comes to be, a full rundown of the mistakes of each preceding civilization occurs before an all-out shutdown. Suppose the AI we're observing right now is just the 12th, 25th, or 25,000th trial in a long run of cycles, each starting from stone age technology and leading up to humanity's automatically processed review checkup.
Suppose this cycle of stone age to AI supremacy just keeps on rolling until there's a run-through with zero mistakes and the green light is given for the next stage. That, I think, could be a damn fine way to illustrate the wheel of karma through sheer technology. The only question, then, would be this: just what quality of data are you contributing to this possible trial run's calculated karma wheel?
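Just for fun, the cycle described above can be sketched as a toy loop. Everything in this snippet is a made-up stand-in: the "civilization run," the mistake tally, and the trial cap are illustrative inventions, not anything the thought experiment actually specifies.

```python
import random

def run_civilization(rng):
    """Toy stand-in for one full run from stone age to AI supremacy,
    returning a made-up count of mistakes logged along the way."""
    return rng.randint(0, 3)  # hypothetical mistake tally per cycle

def karma_wheel(seed=0, max_trials=100_000):
    """Repeat the cycle until a run with zero mistakes earns
    the 'green light' for the next stage."""
    rng = random.Random(seed)
    for trial in range(1, max_trials + 1):
        mistakes = run_civilization(rng)
        if mistakes == 0:
            return trial  # how many cycles it took to get a flawless run
        # otherwise: review the mistakes, shut it all down, start over
    return None  # never got the green light within max_trials

print(karma_wheel())
```

The point of the sketch is just the shape of the idea: an outer loop that keeps resetting until one run clears review with zero mistakes.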