AIiiiiiiiiiiigh! - 09-20-2023
Episode Summary:
The document discusses the potential and limitations of AI in military warfare. The author mentions a video by Nino Rodriguez, who believes the military is compromised. The author argues that while AI can model wars, it cannot execute them. AI lacks physical capabilities and self-awareness. It relies heavily on data, but if the data isn't in its database, it can't find a solution. The author emphasizes that AI can't protect itself from physical threats, like sugar, which can disrupt its electrical systems. The author also highlights that AI can't account for human unpredictability in warfare. The author references his father's experience in Korea, where a single decision changed the course of a battle. The author concludes by stressing that models, like those used for climate change or warfare, are not reality and have inherent limitations.
Key Takeaways
- AI can model wars but can't execute them.
- AI lacks physical capabilities and self-awareness.
- AI's reliance on data is both its strength and vulnerability.
- Human unpredictability remains a challenge for AI in warfare.
- Models, whether for climate change or warfare, are not reality.
Hello, humans. Hello, humans. It's the 19th, and we're a little after 8:15. Heading inland. Have to go do my chores, do my chopping, pick up some stuff here and have a couple of meetings with some people.
Going to meet in town and do necessary stuff, maintenance stuff. So I had a few minutes this morning and I was skimming through some of the new uploads and ran across a video by Nino Rodriguez, David Rodriguez, the ex-fighter. And he had been talking to somebody who was a military subcontractor, and he's got Nino all whipped up. Now Nino's all upset because he thinks our military is compromised, that it's divided. And it's like, well, okay, that's true, but the military was divided during World War II, during Korea, during Vietnam, all of this stuff, right?
You always have the military that doesn't want to go to war, and then the fucktard Khazarians that are pushing it, all right? And those corporations and stuff that are incentivized for war because they make money at it, right? And the people that are making the decisions don't have to face the consequences of those decisions because these guys don't go to war. They're all military subcontractors pushing on congresspeople and stuff. So if we had a really effective system, it would say any congressperson that votes that we go to war has to be right there in the very first wave.
They've got to, that day, take off their congress clothes and go put on military clothes and go surrender to induction in the military. That'd put the kibosh on all this war talk, especially from all the women. Anyway, I don't hate these people, but I have an incredible detest for these fucktard women like Patty Murray and Cantwell, the other senator person we've got from our state. You know, these people are warmongers. Yeah, they've got all their other social issues because they're Democrats, but beyond that, they have no objection to killing children and they vote to do it. You know, I've seen death, destruction, war, all of that kind of shit, right?
I don't want any of it. And so I will fight fiercely that we not do those kinds of things. Anyway, slight diversion there. So Nino's got this guy on, and he's saying, I was a military subcontractor, I saw these major screens that the military used, and the military now has AI that can win wars on its own.
And that's absolute horseshit. So Nino is not a techie, so he doesn't know how to think about these things. That guy, as a technical subcontractor, was a techie, but his vision is limited to that, and so he doesn't grok what the hell's really going on. So, yes, AI can win every damn war that you model, okay? Computer models are not reality.
So our AI can't do anything. It can't load a weapon. It can't fly an airplane. It can't drop a bomb. It can't shoot a laser.
It can't do any of these things. Mostly, all it can do is issue communications directed by someone else. Then there's something else: AI does not think.
It is not self-aware. It has no internal concept of who it is or what it is, all right? So the models don't even model the AI. And if it's not in the AI's database, it cannot create a solution. So, in other words, if you don't have it in the database, it can't find it.
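That "if it's not in the database, it can't find it" point can be sketched as a toy retrieval lookup. The threat names and responses here are made up purely for illustration, not taken from any real system:

```python
# Toy sketch: a retrieval-style "AI" can only answer from what was
# indexed into its store ahead of time.
index = {
    "sabotage:electrical_grid": "switch to backup generators",
    "sabotage:fuel_supply": "reroute tanker convoys",
}

def find_solution(threat: str) -> str:
    # No entry in the index means no answer, and no creative workaround.
    return index.get(threat, "NO SOLUTION FOUND")

print(find_solution("sabotage:electrical_grid"))  # indexed: answered
print(find_solution("sabotage:powdered_sugar"))   # never indexed: nothing
```

Anything outside the indexed keys simply does not exist as far as the lookup is concerned.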
All these AIs are complex replicas of very limited neural nets that are overlaid on complex data with complex indexing. And it takes vast quantities of data preparation in order for the data to be properly indexed so that it may be found by the AI. So bear in mind, 97% of the Internet is not indexed; Google only knows about 3% of the internet. AI only knows what is in its database. If it's not in there, it can't do anything. It doesn't understand it, doesn't know what's there.
Now, here's another thing. As I say, AI can't pull a trigger. AI can't load a magazine into a weapon. It can't put bullets into a magazine. It can do nothing physical.
All it can do is issue text instructions, and that's it, 100 percent it. So AI cannot protect itself from sugar, all right? It doesn't even know it needs to protect itself from sugar, because nobody's put it in the database: oh, AI, you're vulnerable to sugar. How is that?
Oh, well, you know, the AI knows, because it's in the database, that there's a potential for saboteurs to sabotage our electrical system during war. Right? AI is vulnerable because AI exists as electricity. And so AI knows that.
Oh, well, that's not a big deal, because the electrical system all around the nation, and especially the key critical parts that protect the source of electricity for AI, are provided with automatic backup generators. Okay, so that's fine. Maybe the automatic backup generator, though, has some issues, and it takes it a while, has to try two or three times to come on. Well, that's a serious gap in the AI's ability to get at any data that was behind the now defunct electrical grid for that particular sub-node. And so there are all of these things that AI is not prepared for, that nobody has ever modeled into their reality.
So they don't have it. I'm pretty sure that I could win a million-dollar bet by saying that the United States military does not have, in the model that its AI is going to use to direct any kind of war activity, the fact that AI is vulnerable to downtime of electrical generators due to people putting finely powdered sugar (refined Italian pastry sugar, or even just reground regular sugar) into balloons, filling them about halfway with the sugar, puffing them up with air the rest of the way, and then just throwing them at the gensets. They smash onto the blade area, the radiator, the cooling system of the genset; the balloon breaks, the finely powdered sugar is aerosolized and is taken into the machine's air intake. And that machine is fucked.
In like two minutes, not even two minutes, the sugar will carbonize and the pistons will grind to a halt. And that's it. It's just done. And it'll be fucked. And you won't be able to unfuck that machine unless you totally tear it down.
And so does AI model that it's vulnerable to powdered sugar? Probably not. And because that's a creative kind of a thing to do, if you knew, for instance, that a particular building was housing the AI and it had its own genset, you could go and target the fucking AI. You could take out its computer. See, AI is nothing but the electrical current flowing through that particular machine at that particular moment.
It only exists as long as that electricity exists. So it's very vulnerable there. It's very vulnerable to hardware failures, very vulnerable to sabotage. And here's the whole thing. AI in warfare will never, ever work, because it's going to break down as a result of a necessary (and it will happen) continuing diet of lies.
Okay, so the concept is that AI would have a battle plan. It would get the information coming in from the various different sources that it has. It would filter through that information and determine where the enemy was and what was going on. Well, this means it's relying on that source of information.
And it has an inbuilt assumption that all the information it's getting is accurate. It has an inbuilt assumption about all the people working for it, the ones it supposedly thinks of as its assets, that it's in a position to direct based on decisions made by the linkages that it's got in its database. It's under the opinion that all of those are 100 percent as described to it. And so it doesn't know that probably a good third of all of the units it thinks of as assets are at less than effective readiness. In other words, their Jeep's broken down.
They got to put tires on a vehicle. They don't have the latest delivery of bullets, all of this kind of stuff, right? They just report that they're 100% ready to go and they're doing it because that's the way that you usually do it within the military, is that you do make these reports that you are 100% ready to go. And your superiors know that there's a certain amount of bullshit involved, but they're not able to quantify that level of bullshit at any given time. Maybe they've got a gut estimate, but they're not passing on that gut estimate up to the AI because there's no incentive for them to do so at this stage.
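A toy calculation makes the point that a pile of unreliable reports doesn't average out into truth. Under the (unrealistic) assumption that reports are independent, the chance that a simple majority vote over several reports matches ground truth follows a binomial tail, and once each individual report is wrong more often than right, pooling more reports actually makes the picture worse:

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a majority of n independent reports, each correct
    with probability p, agrees with the ground truth. Use odd n to avoid ties."""
    need = n // 2 + 1  # smallest count that constitutes a majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(need, n + 1))

# Reports right 70% of the time: pooling five of them helps.
print(round(majority_correct(0.7, 5), 3))
# Reports right only 40% of the time: pooling five of them makes it worse.
print(round(majority_correct(0.4, 5), 3))
```

With honest 70-percent-accurate reports, five of them vote their way to roughly 84 percent accuracy; with reports that are wrong more often than right, the pooled picture drops below the individual accuracy. Real battlefield reports are not independent, which only makes it worse.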
So AI is like all other wars, and there's a truism: no plan survives first contact with the enemy. You have to upend and redo your plan, because the enemy is not going to do what you think it's going to do. And so this is true even of AI. Even if AI has 99,000 or 999,000 potential responses that the enemy might make, there's always a certainty that the enemy will do things that the AI has not been told could be done or would be done.
Right? And military people make projections all the time. Oh, that's a highly illogical thing to happen, right? And so my father's history in Korea proves this exactly, because he was part of an action that upset the plan of the Communist Chinese and the North Koreans. They had a plan.
They were going good. The Communist Chinese and the North Koreans had all of these troops pinned down, and shit was going good for them. They were going to wipe out all these guys. And my dad decided he did not want to die in that hole, and that if he was going to die, he was going to die standing up, walking up that hill. Well, he didn't die.
He walked up the hill. He got wounded, but he walked up the hill, and he kept firing all the time, killing these people. And when he got to the top of the hill, he found out that he was being followed by every other fucker, and they were also all firing. And he had no intention of doing that. It was not his intention to lead a great charge or anything or to overturn the battle plan of the North Koreans.
It was just that: existence, right then and there. You know, under all those circumstances, after all he'd been through in his life, and I won't go into that, he was just not going to die there. He's a stubborn son of a bitch. He's from, you know, that's the way it was.
So he was just stubborn and said, fuck no, if I'm going down, I'm not going to die in this fucking hole. I'm not going to lie here and be shot. I'm going to stand up and shoot back. And so that was all it took, that one thought, and the Chinese plan was upended. They lost that hill.
They lost a lot of fucking people. And my father got a battlefield commission out of it and ended up on a path that put me into existence as I am now, among many other things that occurred. But none of those were anticipated by the Communist Chinese. And so as a person planning battles, as a human planning battles, you assume that they're going to come up with shit you hadn't thought about. AI does not make those assumptions, right?
AI cannot make those assumptions. AI has not got the ability to be self-examining on its own assumption base. And so AI is like a little, tiny, stupid tool. Now, I use AI all the time in the form of the ChatGPT API or ChatGPT and some of these other tools, right? There are other AI tools out there.
Most of them are touted under other names, but they really resolve down to the ChatGPT API that's just being repackaged. So there are not that many alternatives. But ChatGPT is broken down all the fucking time. There is not a day that goes by since I've signed up for this, well, actually, okay, there have been two days since I've signed up, that I have not gotten a notice that the ChatGPT AI is down or having problems or is throwing higher levels of errors.
And there's something else. ChatGPT, by my reckoning, in my work, continually throws errors, to the point that for 70 percent of all the things I ask it, I have to go through and reexamine it, and I find it has made an error. And then I have to drill in and find out if I can work around it and find the actual solution. So 70 percent. Almost three-quarters of the time that you ask it a question, you're going to get a wrong answer to some degree.
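The 70 percent figure is the author's own running tally, but as a minimal sketch, here is how you could put a rough uncertainty band around that kind of spot-check count. The sample numbers below are illustrative, not from the transcript:

```python
import math

def error_rate(checked: int, wrong: int) -> tuple[float, float, float]:
    """Observed error rate plus a rough 95% interval (normal approximation)."""
    p = wrong / checked
    half = 1.96 * math.sqrt(p * (1 - p) / checked)
    return p, max(0.0, p - half), min(1.0, p + half)

# Illustrative: 70 erroneous answers found in 100 spot-checked responses.
rate, lo, hi = error_rate(100, 70)
print(f"observed error rate {rate:.0%}, roughly {lo:.0%} to {hi:.0%}")
```

Even a hundred spot checks leaves the true rate uncertain by nearly ten points in either direction, which is worth remembering before quoting a single number.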
Well, no matter how good the military's AI is, no matter how good their linkages are and their, what do they call that, masque? M-A-S-Q-U-E, I believe, as in the French. But anyway, it's a linkage layer that they lay down over the database. No matter how well that is constructed, it will always have errors. Bear in mind, too, that the military thing that's got Nino all freaked out is a war game model, okay?
So models are known to only resemble reality to a certain degree. And so in an AI model for war, maybe you could manage to guesstimate 20 percent of everything that would be involved as some kind of hard number. Your enemy's got this many tanks, that kind of shit, right? So maybe 20 percent of your data is hard and factual. All the rest is just basically guesses and speculation, alternate plans, those kinds of things.
And you will get lies, okay? And so that's what I'm saying: the AI is based on all these assumptions, but nobody's modeling in that every single report that comes into the AI has to be examined, once the action starts, as though it is a lie, non-factual to some degree. And so you have all these situations. So AI is directing a battle. It sends some troops out, tells this lieutenant by a text message: take your company and move over to this hill and take this position.
And so the lieutenant says, okay, we're at the bottom of the hill and we're heading up. And then, boink. No more communication. All right, so what's AI supposed to do? Has that target been taken and the communication simply been knocked out and so it doesn't know it's been taken?
Has that company of men been wiped out, and is the target therefore whole and intact? It has no fucking way of knowing. So you've got to have all of these other resources to go and validate this. What if it starts getting conflicting reports from the actual field, where that lieutenant is at the bottom of the hill? He's got to go on up and take this bunker or whatever the fuck it is, with his company.
There's spotters from other units across the valley that are watching him. There's all kinds of smoke and shit. The spotters see that there's been an engagement at the bunker that's supposed to be taken. And then everything quiets down and they say, oh, okay. And they report back that the bunker has been taken, when in fact it has not.
And so AI makes a decision and sends further troops that way, starts routing a major offensive through this now pacified valley, only to discover that no, the valley isn't pacified, and all of the decisions it had made from that point forward have to be rolled back. Now, this could be going on continuously, constantly throughout the battle. And as I say, it depends on the weakest link in all of these things, right, which is the chain of command. And so that was one of the things the guy was saying to Nino: he'd met all these good guys in the military that he'd worked with as a subcontractor, and so on and so on, but all these fuckers are bound by the chain of command. And that's true right up to the point of the actual engagement in the war.
Thereafter, if you're off with a company you may or may not decide as the leader of that company to pay attention to the chain of command because ultimately it is your responsibility to keep yourself safe and your company safe. And you know these individuals, you're not going to want to get them killed, et cetera, et cetera. So you're leading your little company along and AI tells you, go and assault this bunker and you're down at the bottom of that hill. You see the bunker is arrayed with all kinds of machine guns. There's people all around it and stuff.
And you say, no, I ain't going to do that shit. We're going to get slaughtered. We're not going to kill ourselves deliberately on the orders of this AI. So humans will, in peacetime, sit there and grit their teeth and do whatever the fuck the chain of command tells them. In wartime, it doesn't happen that way, right?
It just does not happen that way. And the AI will get some level of near real-time reporting out of battles. But even that will be confused, and I don't know what allowance the military is making in their AI modeling to accommodate that, right? That the reports are going to be wrong 30 percent of the time; really, it's closer to 50 or 60 percent of the time. The information you get is wrong, and it'll always be followed up with something else that's wrong to some degree. And then there's the whole idea here that all of these military guys sat around with a bunch of subcontractors, okay, and they developed this AI model and they put it into the computer, and then thereafter, this AI wins every fucking war scenario that you throw at it.
Well, okay, first off, these are models, all right? They cannot, by definition, have a comprehensive view of what's going on. Look at the climate models that the hairy crabs there, the tranny FAWs, and all of those guys are using to say climate change, climate change, you're all dead, that kind of shit. Those climate models are modeling less than 8 percent of the factors that affect the environment, okay? So their models hold less than 8 percent of all of the factors that affect our climate and our environment. And so they're making decisions based on models that are basically useless for that level of decision.
Now, climate is very complex, but it's basically finite and knowable, if you take the human activity part out of it. But war? It's all human activity. And all human activity is going to be chaotic and have elements of creative stuff in it, right? And so models are not reality, and models don't ever behave as reality behaves. And all models fail. You just know that going in: the model is a model.
It is not the reality you have to work with. So a lot of kids get all whipped up because they've got a computer model, and they think it's going to work out the way the computer model says it's going to work out, and it just never does. And the classic human-versus-AI example is an experiment the military did. They had AI set up at the top of this little hill, and there were, I think, I want to say, twelve individuals from, like, Special Forces.
And they told the Special Forces guys that AI was up there and they had to pretend it was a machine gun. The AI was watching them with an automated binocular kind of computer camera feed. And so whenever it spotted them, it would send a signal, and they could say, okay, you were killed by AI. This was their test, right? And so the subcontractor puts the AI machine up there, they get it all set, ready to go, and then all of the soldiers are, as per the model, down at the bottom of the hill where the AI can see them, right?
And then they say, okay, you guys go on and see if you can work your way up to the top of the hill without being seen. Well, in every single fucking test that these guys did, AI failed. It failed 100 percent of the time. In these tests, these guys would cover themselves with a cardboard box. AI wasn't prepared for a cardboard box. A cardboard box was not a threat.
A cardboard box could walk right on up to it and kill it. They did all kinds of weird shit, right? One guy hopped like a bunny rabbit, so obviously he was not a human, so he hopped all the way up to the AI. And those are the kinds of creative solutions that will always, consistently defeat the AI.
And then AI is operating on a computer model that is flawed to begin with. So I'm not of the same opinion as Nino, right? I don't believe the shit that comes out of the military or any of these other Khazarians saying, oh, you're all doomed. We've got AI. We're going to kill you all.
That kind of shit, right? No, if you're relying on AI, I've got your ass. I'm going to kill you. Because AI is really fucking dumb anyway. So, like I say, I'm not particularly upset by those kind of aspects of these sorts of things.
I see that as just yet another challenge as we're going along. And a lot of this is going to be moot, right? As we get further into this year and into next year, and as we get whatever the hell our event is, things are going to radically change. This change is going to be so fundamental that plans that are being made now will be abandoned. Okay?
So the plans they've got in place for their next war to kill us all off, all of this kind of shit: once we get this next attack, all bets are off, everything's exposed. You're going to get a lot more people, like a serious lot more people, that are going to just be wailing and letting out all the information about the Khazarians, and people will wake up even more. And then, as I say, within the world of plots and this kind of thing, it all always ultimately comes down to some guy: will he do it or won't he do it, and will he report that he's done it and not do it? You just never know.
Or will he report that he's done it and he tries to do it, but he doesn't succeed?
A lot of failures, mostly individual ones. At that level, war is failure, right? You're just trying to stay alive, make it from one day to the next so that you can get out of it. Anyway. So, as I'm saying, I'm not particularly worried about these computer models and the AI and all of that shit, right? I work with it.
It's easy to fuck these things up. There are any number of creative things you could do, like, as I say, sugar. Sugar and balloons, right? That was a favorite thing of the Italian resistance. They would have all these balloons with sugar.
There was something else they put in them, too. Maybe it was maybe they were just pressurized. I don't know. They would have bags, like little thin paper bags of sugar, and they would just come along and walk along, and you could just set it on a bumper of a car. When the car started up, it would pull it up into the air intake and the bag would rupture at some point.
And then there's sugar everywhere; it gets into the air intake, and the engine's fucked up. So all different kinds of stuff can be done. And AI does not know if the fuel supply is secure for its generators, the generators that keep the electricity going, that keeps the AI going. And AI has no sense of itself.
It doesn't know. It may have a part of its instruction that says worry about the electrical system, but to what extent? How much to worry where the fuel is coming from? Yada, yada, yada, yada. So the world as envisioned by the Khazarians, where they're going to use AI to control us all, ain't really going to happen.
I actually think it's going to break down seriously in China in relatively short order. China's in some deep, deep, deep problems, as we're seeing by the purges that are going on. The CCP's consolidation has reached the end of its lifespan, and China is about to go into huge upheaval as a result. You can't oppress people at that level forever. At the very first opportunity, when things break, they will, slowly but surely, take advantage of it.
Okay, guys, I got to go and do chores and stuff here. As I'm saying, don't worry about AI. We've got a lot of other things to worry about. They are going to do some kind of an attack on us, at least according to all the remote viewing and all the psychics and all that kind of shit. We'll see how it works out.
I don't think these guys are particularly intelligent, so they're not really paying attention enough to know that they're being outed everywhere and that outing is going to cause their undoing. Okay, gotta go.