Artificial Intelligence is retarded - 12-13-2023
Episode Summary:
Clif High discusses artificial intelligence (AI), highlighting its capabilities and limitations. He begins by mentioning his routine and quickly transitions to the topic of AI. High notes that while AI can produce impressive results, like detailed images, it falls short of artificial general intelligence, the ability for an AI to learn on its own. He emphasizes that AI lacks memory and the ability to follow sequential instructions, using the example of creating cartoon characters that change between scenes. AI, in High's view, cannot retain or recognize its previous creations.
He further explains that AI's process involves large language models and neural nets, which require extensive training on data. However, this training isn't persistent, and AI doesn't possess a sense of self or internal integrity. High asserts that AI is not capable of mentation or cognition, and is unable to handle tasks involving repetition or mathematics accurately.
High discusses a breakthrough in AI, the concept of artificial general intelligence (AGI), which could train itself. He is skeptical about the realization of AGI, considering the current approach to neural nets. He believes AI is effective for analysis but not as a creative tool, due to its reliance on indices and databases, and lack of understanding of context or facts.
High then delves into practical applications of AI, like auditing and legal document preparation, stressing the importance of double-checking AI-generated work due to frequent inaccuracies. He shares anecdotes about AI's use in legal settings and its pitfalls, emphasizing that AI does not lie but often produces non-factual results due to the vast, unverified data it is trained on.
Towards the end, High discusses his involvement with groups investing in AI, describing different types of AI and their applications in various fields like law and accounting. He warns about the potential errors in AI's outputs and the necessity of human oversight. He also touches on the stock market and its future, predicting significant changes due to the collapse of artificial derivatives and a shift towards actual asset valuation.
High concludes by sharing his plans to use AI in his own work, acknowledging the challenges and time required to train AI effectively. He expresses a balanced view of AI, seeing it as a useful, yet not fully reliable tool, and not something to be feared.
Key Takeaways:
- AI has impressive capabilities but significant limitations in memory and sequential tasks.
- Artificial general intelligence (AGI) is currently unattainable with existing neural network methodologies.
- AI excels in analytical roles but struggles with creative tasks and mathematical accuracy.
- AI lacks self-awareness and a sense of internal integrity.
- AI in legal and investment fields requires careful human oversight due to frequent inaccuracies.
- AI's outputs often contain non-factual elements, making verification essential.
- AI's data training is extensive but not persistent, requiring continuous updates.
Predictions:
- Shift towards actual asset valuation in the stock market due to the collapse of artificial derivatives.
- Emergence of new AI technology from unpredictable breakthroughs, potentially leading to AGI.
- Long-term process of transitioning to real asset values in various markets.
- Increase in the use of AI for news analysis and tracking public statements.
Transcript:
Hello, humans. Hello, humans. The 13 December, early in the morning. Gotta go do chores.
One small little stop to make a payment to some soil engineers for some work, and just regular chores after that. Traffic stuff right off the bat here. Okay, so it's an interesting time. We've got our splits happening.
We got Alex Jones back on Twitter. So all the world's ending, there's no question about that.
Anyway, so how do I get across this idea? All right, so I've been playing with artificial intelligence ever since it's been generally available for civilians, not people inside the companies, to get involved with it.
It's impressive to a certain extent, and then it fails. So, for instance, you can get it to do a really good picture, right? You could describe it, you can tell it what you want. You can say, okay, and actually, I've done this repeatedly, trying to work around some of the problems with AI.
All right, so we don't have artificial general intelligence. That, in my opinion, in my definition, is where you can teach the artificial intelligence to learn on its own. Man, this gets into some real technical stuff. And let me see how I'm going to frame this.
All right, so here's the thing.
Let's look at the symptom, and then we'll look at the cause. All right, so the symptom is that you can do a picture with AI, an incredibly detailed picture. You can do it photorealistic. It'll do all kinds of cool stuff that way. And then you go back and you tell it, using that same character, to do the next thing.
So say you're going to develop a cartoon character. Okay, and say we were going to develop a version of Roadrunner and Coyote, and we'll call ours Woodpecker and Fox. All right, so we're going to do a cartoon, and we're going to have a character that's a woodpecker and a character that's a fox.
And we go to AI and we say, okay, AI, you do this for us, right? And you make this character for us, and it'll come out. You got just a beautiful character. It's just what you want. Maybe it takes you three or four or five iterations to get it to zero in on it, and it gets you this character that you want.
And then you put the character into a scene in an image. And then since you're going to be making a whole series of cartoons and stuff, you want to make the next image with that character. And so you tell the AI, take this character and make it do this now, instead of running away from the fox, you make it fly, okay? And so it's going to fly for you. And that's when you discover the real problem of AI.
That AI has no ability, it has no memory, and it has no ability to do sequential instructions.
What it ends up being, hang on, I got people and dogs and weird shit going on here. Hang on. Crawling in out of the woods, that's not good.
Maybe they're mushroom hunters or something. Anyway, because it doesn't have a memory, it does not hold, in its so-to-speak mind, the image that it just created. In fact, it can't see the image. All it can do is issue instructions to code, which then produces the image for you.
But the AI has no comprehension, it has no mentation at all, it doesn't cogitate, and it doesn't have any visual mechanisms, so it doesn't know what it actually created for you in the way of an image. Thus, when you go back and you tell it, take that woodpecker and make him fly up in the air away from the fox as the fox is chasing him, it will do that. It will create you a new scene with a woodpecker flying away from a fox, but it won't be the same woodpecker and it won't be the same fox. And by the way, every time you get an image out of AI, a cartoon image, a photorealistic image, any of this kind of stuff, there's shit in the image that you did not put in there and did not want in there, and this is going to happen on every image, okay?
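The character-drift problem High is describing comes down to call signatures: each image request carries only the prompt text, so nothing from the previous generation persists. A minimal sketch, with `generate_image` standing in for any stateless text-to-image endpoint (its output is modeled here as a hash, purely for illustration):

```python
# Sketch of a stateless text-to-image call. The key point is the
# signature: generate_image() receives ONLY the prompt. No character
# sheet, no prior image, no memory is carried forward between calls,
# so "that same woodpecker" is meaningless to the second call.
import hashlib

def generate_image(prompt: str) -> str:
    """Stand-in for a stateless generator: output depends on the
    prompt alone (modeled as a hash), nothing else."""
    return hashlib.sha256(prompt.encode()).hexdigest()[:12]

first_scene  = generate_image("a woodpecker running from a fox")
second_scene = generate_image("that same woodpecker, now flying from the fox")

# The same prompt reproduces; a new prompt is a fresh roll of the dice.
assert generate_image("a woodpecker running from a fox") == first_scene
assert second_scene != first_scene  # not "the same" woodpecker at all
```

Real systems work around this with seeds, reference images, or fine-tuned character models, but the base interaction is stateless in exactly the way High describes.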
And you could tell it, make an image of Joe Normie at a cocktail party on the fantail of a big yacht, okay? And it'll do that. And then you say, well, shit, why are there 45 people all around? And why is there this big giant person standing on the poop deck? And why are these two people hanging over the edge?
And basically why is there just this extraneous leg and a foot sticking out from the side of the boat?
So AI puts shit in there because it has no visual acuity, no memory, and no discrimination or control. And what happens is that this AI works on data. So in creating an image, you have to go through what's known as the large language model, where the AI sort of understands spoken language, or allows you to speak to it as though it were a personality, okay? You can tell the AI to do this, as opposed to actually having to write computer code. And so the AI then interprets your language to see what you want, and then it has resources, hooks into an image-generating program, et cetera, et cetera, that it uses.
And so it will then take all these various elements, and it will do its best to come up with some instructions that, when those instructions hit that image-generating program, will generate what you want. But AI is not a discrete, integral conjugation; it's not a mind. All right, so AI works by these things called neural nets, and you have to train it, and you train it on data. The more data you can get, the more trained it is. But that training is not persistent beyond a certain point, it has no mentation, and it is, in fact, an overlay, a network.
That's why they call it a neural net. It's a network of individual indices that are linked up to various levels of interpretive code, code to interpret whatever it has linked up to. So the neural net doesn't exist as a mind. It's not sitting there thinking when you're not asking anything of it. It's just there, right?
There's no there, there's no sense of self, there's no sensation at all. There's no internal point of integrity for the AI. So the AI does not think of itself. It has no concept of itself. It can't say I exist here.
It can say that, but only because you could ask it questions that would elicit that response out of the interpretation of the large language model.
Um, but it's not going to actually have mentation. It's not actually going to have thinking involved in the process at all.
It's not able to be repetitious, all right? It can't repeat something, and it can't repeat something with a variant. It can repeat general concepts with variations, but not any details that you may wish to carry forward. This is the same kind of limitation that prevents AI from being able to do math.
AI is terrible at math. It can't add shit worth a damn. It can't run an accumulator, so it can't count. So you can have it create an image, and then you can feed that image back into it and say, how many people are in this image? And it can go through and examine.
But how it's going to interpret the question is going to vary from what you think, because you have to be somewhat explicit, right? Don't count the extraneous legs sticking out of the yacht, put in there by the image-generation program, as people. So you've got to get into some level of specifics with these fuckers. And the AI, like I say, can't accumulate, it can't add.
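The accumulator High says AI can't reliably run is trivial in ordinary code: given structured scene data, counting is deterministic. A sketch (the scene format is invented for illustration) of the explicit counting a language model only approximates:

```python
# The accumulator in plain code: given structured scene data, counting
# is deterministic -- no guessing, no drift. The scene format here is
# invented purely for illustration.
scene = [
    {"kind": "person"},
    {"kind": "person"},
    {"kind": "extraneous_leg"},   # image-generator debris, not a person
    {"kind": "person"},
]

count = 0
for element in scene:
    if element["kind"] == "person":
        count += 1                # an explicit accumulator

print(count)  # 3
```

A language model answers "how many people?" by predicting plausible text, not by running a loop like this, which is why its counts wander on cluttered inputs.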
There was some sort of big breakthrough they thought they had at ChatGPT toward what they call an AGI, an artificial general intelligence. So an artificial general intelligence is one that you initially train with your neural net, but thereafter it has the capacity to continue training itself without you having to participate. And see, this is what scares everybody, all these people that are managers, funders, pundits, social analysts, that kind of thing, that are out there saying AI is going to come and harvest all humans and we're all toast. All right?
What scares them is the ability for AI to have mentation and cognition and to be able to train itself. In my opinion, that will never be done. It can't do that, especially not with this interlacing-indices approach of the neural nets. And so this will reach a dead end. It's a fun little toy, and we can use it for some really good stuff, right? So, getting us away from images:
You can use AI for analysis very effectively, because in analysis you're not telling it to do a task and then repeat the task, or accumulate around that task. You're not asking it to do anything a human could do in the sense of maintaining a focus in the moment and carrying thoughts forward from one moment to the next in their basic form, then altering them in the next moment, that kind of thing, right? Humans can accumulate, humans can do cognition at that level of thinking, and AI does not, just because of the nature of the neural nets and the fact that it is basically just forming all these indices. It's a giant database of indexes.
These indexes go to other databases and chunks of code and all different other kinds of stuff that allow it to function and to mimic speaking with a human. It does not mimic human intelligence; it does not mimic intelligence at all, right? What it is, is a display that basically understands articulation and is able to mimic articulation. It can speak to you, and they've got various little things in there for making you think it has a personality.
So when you interact with AI, you don't necessarily have to use any code at all. You can do that kind of thing, though. If you're at that level of interacting with the AIs, like if you're training them or that sort of thing, you can write code on the fly, and even have the AI write the code for you and then insert it into the process, reboot, and there you go. So AI provides all kinds of cool tools to us, and it can do incredible analysis.
So you can give it a photograph and ask it: is this photograph artificially generated? And it has ways of analyzing all of the photograph at levels that you could not compete with, just in terms of both speed and detail. And it can come back and say, yes, there are these artifacts within the photograph that suggest data was put in after the file itself was created and sealed. And so then you would know, aha, this photo had been tampered with. You could use it to analyze accounting.
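One concrete version of "data put in after the file was created and sealed" is checkable without any AI at all: a baseline JPEG ends with the End-Of-Image marker `FF D9`, and bytes trailing after the last one were appended post hoc, a classic tampering or hidden-payload red flag. A sketch only; real forensic tools check far more than this:

```python
# Sketch: detect bytes appended after a JPEG's End-Of-Image marker
# (FF D9). Trailing data is one simple, mechanical sign of post-hoc
# tampering. Real forensics goes much deeper than this check.
def trailing_bytes_after_eoi(data: bytes) -> int:
    """Return how many bytes follow the final JPEG EOI marker,
    or -1 if no EOI marker is present at all."""
    eoi = data.rfind(b"\xff\xd9")
    if eoi == -1:
        return -1                     # not a (complete) JPEG
    return len(data) - (eoi + 2)

clean    = b"\xff\xd8...image data...\xff\xd9"   # toy stand-in for a JPEG
tampered = clean + b"hidden payload"

print(trailing_bytes_after_eoi(clean))     # 0
print(trailing_bytes_after_eoi(tampered))  # 14
```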
It's really good at that, right? So you could use AI as an auditor, and it finds shit like you would not believe. So if I were an auditor, I would get into AI. Seriously, the reason I'm bringing all this AI shit up is that I've become involved with a couple of different groups here of people that are moving into AI, either as investors or as owners, right? Some people that want to own an AI for their own purposes, and I'm helping them out.
All right, so there are a couple of different kinds of AI, in a general sense, now. What we have is not artificial general intelligence. We don't have any AGI. In my opinion, if we're going to achieve that, it will be from a spectacular breakthrough that is not predictable.
And thereafter, we would be off on a totally different kind of AI technology. Now, that having been said, there are a couple of different kinds of AI out there. One of the AI kinds is where they train the AI, and they write the code for the artificial intelligence large language model interaction. And then they get it all set up and they actually build in the ability for it to train itself on specific data sets. So that's the kind of AI you could use for specific tasks, like writing law stuff, right?
Like writing suits, or responding to a suit, or writing a motion, something like this. These kinds of AIs can also be used to do accounting. So I've seen a couple of these AIs now that are what they call an API, right, an application programming interface, where you take somebody else's AI, after it's been trained and stuff, and you throw out all of the guts of it, the stuff it's actually been trained on. You make it, I guess I'm going to say, naked: it has no real data in it. It just understands how to train itself, given some data. Then you put in the data that you would like, and you can tell it to train itself on accounting, or on engineering analysis, or something like this. But you have to train it.
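The "put in the data that you would like" step usually means assembling a domain data set in whatever format the vendor expects. The exact format varies; JSON Lines of prompt/response pairs is a common shape, used here purely for illustration with invented accounting examples:

```python
# Sketch of preparing a domain data set for the kind of "naked" API
# model described above. Vendor formats vary; JSON Lines of
# prompt/response pairs is one common shape. Examples are invented.
import json

examples = [
    {"prompt": "Classify this ledger entry: 'Office chairs, $1,200'",
     "response": "Expense: furniture and fixtures"},
    {"prompt": "Classify this ledger entry: 'Client retainer received, $5,000'",
     "response": "Revenue: professional services"},
]

# One JSON object per line -- the JSONL shape many training pipelines take.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl.count("\n") + 1)  # 2
```

Breadth and depth of this data set matter: a thin one is exactly where the "huge mistakes" warned about below come from.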
You have to supply the data sets. And of course, there's potential for wonkiness there, because if you don't provide an adequately broad or deep data set for it to train on, it will make huge mistakes. Now, as I was saying earlier about the pictures, where you might get one giant guy on a boat and everybody else looks like a regular human, and then a couple of extra legs or an arm sticking through the side of the boat, those kinds of errors are continuous and constant with AI. So everything you do with AI, you've got to double-check if you're doing anything that's serious work, like an audit, or if you're going to do a court case. Now, it'll get the verbiage right, the pleading to the court; it'll get the appropriate proceeding format; it'll stay to the word limit you set on the document.
You're trying to create this sort of thing. But if it's going to give you a legal citation, you'd better damn well look that legal citation up yourself and make sure it actually says what the AI tells you it says, because frequently it does not. And this happens. All the AI guys that run these things will tell you: these fuckers are wrong a lot. Double-check every fucking thing, especially if it involves any of these kinds of elements, such as adding something up, or a specific that you're going to need to rely on. So I'm actually seeing court cases now that have been chucked out right at the beginning. This was, I think, a state prosecutor, not county. Anyway, this court case was thrown out right at the very beginning because the prosecutor used an AI to generate some forms, and the AI put in references to some legal cases in support of the case, and those cases didn't exist. They were bogus.
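The advice here, look each citation up yourself, can be partly mechanized: check every AI-cited case against a trusted index and flag anything not found. A sketch only; the "trusted index" below is a toy set of invented citations, where a real check would query an actual case-law database:

```python
# Sketch: flag AI-produced citations that can't be verified against a
# trusted index. All citations below are INVENTED for illustration;
# a real version would query an actual case-law database.
TRUSTED_INDEX = {
    "Smith v. Jones, 123 F.Ex. 456 (1990)",
    "Acme Corp. v. Doe, 789 F.Ex. 101 (2001)",
}

def flag_unverified(citations):
    """Return the citations that do NOT appear in the trusted index."""
    return [c for c in citations if c not in TRUSTED_INDEX]

ai_output = [
    "Smith v. Jones, 123 F.Ex. 456 (1990)",
    "Totally Real v. Case, 999 F.Ex. 1 (2019)",   # the kind that "didn't exist"
]
print(flag_unverified(ai_output))  # ['Totally Real v. Case, 999 F.Ex. 1 (2019)']
```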
The AI just made them up. So the thing is, they say that AI lies, right? But that's not true, because AI has no concept of what is factual and what is not. It just responds to what its indices find. And you, as the trainer, are shoveling in vast quantities of data, basically attempting to shove in so much data that you get this near-cogitation effect out of the indices.
You're not really sure; you can't actually validate that all of the data is factual and worth looking at. In fact, the approach taken is to say: well, we're going to assume that eight to ten percent of the shit we're shoveling into this is bogus, and we're basically hoping that the ninety percent we think is good and valid will swamp the bogus shit, so we don't have that many errors. And that's fundamentally how they're operating.
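The rough arithmetic behind "hope the good ninety percent swamps the bogus ten" cuts the other way for specifics: if a fraction p of the corpus is bogus and an answer leans on several independently drawn pieces of it, the chance that at least one piece is bogus, enough to poison one specific citation, grows with the number of pieces. A back-of-the-envelope sketch, assuming (unrealistically) uniform, independent sampling:

```python
# Back-of-envelope only: assume a fraction p of the corpus is bogus and
# an answer draws on k independent pieces of it. The chance at least
# one piece is bogus is 1 - (1 - p)**k, and it grows with k.
p = 0.10  # the assumed bogus fraction, per the 8-10% figure above

for k in (1, 3, 5):
    at_least_one_bad = 1 - (1 - p) ** k
    print(f"{k} sources: {at_least_one_bad:.0%} chance one is bogus")
```

With these assumptions the chance of touching bogus material goes from 10% for one source to roughly 27% for three and 41% for five, which is why "swamping" works better for broad trends than for any one fact you need to rely on.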
It's an interesting business. It's really cool in a lot of ways. I'm working with this one group that's going to be investing in the AI business, using AI to do it. And they're asking my assistance here in developing what are called prompt injections. I'll tell you about those in a second.
But they're asking me to help them develop the script, basically, that will instruct the AI in what to look for in the news and commentary about various different companies and their AI work, so that these guys can decide: okay, this AI company in Indonesia here is doing really good stuff, so we'll invest a little bit of money in that, this sort of thing, right? So they're using it at that level. They're using AI to analyze, in order to be able to invest in AI, in a long-term plan.
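The "prompt injections" meant here are prepared instruction scripts fed to the model (not the security-attack sense of the term). A sketch of what such a screening script might look like; the company name and criteria are placeholders, not anything from an actual client:

```python
# Sketch of a scripted screening prompt of the kind described above.
# "Prompt injection" here means a prepared instruction block, not the
# security-attack usage. Company name and criteria are placeholders.
TEMPLATE = """You are screening news for AI-sector investments.
Company: {company}
Task: summarize the last quarter's announcements, list shipped products
versus merely promised products, and rate credibility from 1 to 5.
Cite every source; do not invent sources."""

def build_prompt(company: str) -> str:
    return TEMPLATE.format(company=company)

prompt = build_prompt("ExampleAI (Indonesia)")
print("do not invent sources" in prompt)  # True
```

The last instruction line is there for exactly the reason discussed above: the model will cite nonexistent sources unless the output is independently verified anyway.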
These guys are going to be buying stock, and they know the stock market is just going to crash out. They know the stocks are going to just absolutely shit themselves and that most of them will be toilet paper when all this is done. These guys are buying stocks. But hey, get this, they are demanding delivery of the certificates, and boy, have they run into problems.
So when the system crashes, almost none of the stock you own will ever be able to be delivered to you. Okay, so you own a rehypothecated chunk of digitry. So say you buy AT&T stock: you don't actually own any AT&T stock. You've got some digits in a brokerage somewhere, where they say they're going to provide AT&T stock to you on demand if you ever demand it, but they're assuming you'll never, ever demand it. And they've sold that same chunk of stock to who knows how many other people.
One guy says it's quite likely that there are literally hundreds of thousands of claims on rehypothecated individual shares in any given company. So the brokerages buy one share of AT&T stock, and then they sell it over and over and over again, because nobody ever demands delivery of the actual item. They're always dealing with the derivative, the representation of that stock, with whoever they're dealing with, right, the broker, the dealer, or the exchange. So anyway, as this all unfolds, my clients know that the whole stock-derivative thing is going to crash out and will have to be replaced by actual stock ownership at some level. So they'll have to start delivering to you some form of a stock certificate, this sort of thing, right?
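The rehypothecation claim reduces to a toy ledger: more claims sold than real shares held means some customers can never take delivery. Numbers below are illustrative only, not market data:

```python
# Toy ledger for the rehypothecation scenario: several brokers sell
# claims against the same single real share. Numbers are illustrative.
claims = {"broker A": 3, "broker B": 2, "broker C": 2}

total_claims = sum(claims.values())   # 7 claims outstanding...
real_shares  = 1                      # ...backed by 1 actual share

undeliverable = total_claims - real_shares
print(undeliverable)  # 6 claims that could never be delivered at once
```

The arrangement holds only as long as nobody, or almost nobody, demands delivery, which is exactly the assumption the clients above are testing.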
Because we've got to get back to real goods, real value. We can't live in this artificial derivative world any longer. And the whole artificial derivative world is collapsing at all these different levels because it's all based on fiat. And so basically the stock exchange is a fiat version of a stock exchange, right? There's no real there there.
Things will operate entirely differently when we have a return to actual assets and value, as we go through this transition period, which will take many years to get all sorted out. But initially, most of the big troubles are going to be felt in like a six-to-eight-month period of time, and then there will be another 18 months after that, and another 36 months after that, of gradually getting shit worked out and damping down all the problems. Anyway, we'll be able to use AI in this process of cleaning all this shit up. So I expect that when we get conservatives back into positions of power within the constitutional republics, they'll start doing things like using AI to analyze news reports and track down all the statements that XYZ news anchor made, line them up with the events that were actually going on, and see where they were bribed, et cetera, et cetera.
You can use AI to suss out all different kinds of stuff as an analytical tool, as a creative tool, it ain't worth shit. But as an analytical tool, hey, I don't think you can beat it. Really cool. If you use it right. You have to be aware of the problems of it and so on.
The fact that they say AI lies to you: well, it's not lying. It's simply reporting, with the same level of confidence, on indices that happen to be non-factual as on any other indices it's got relative to reporting data to you.
Again, they're putting a personality, a human touch, on this that is not valid. That shouldn't be there. Yeah, I see that bastard. Oh, car people doing weird shit out here. We had a fatal accident in front of my house, and then, less than two weeks later, maybe ten days, we had another accident that led to two fatalities in our area. And then just yesterday we saw, I don't know what the hell it was, but boy, the staters were screaming north, local county guys going north, aid cars heading on up, just going like bats out of hell. Sirens, lights, all of this sort of thing. So I don't know what was going on up there, but we've got some nasty drivers around here.
So I give everybody a huge, oh, there we go again. There is another sheriff. Oh, he had lights and shit going, okay, all right.
He wasn't that serious about it anyway.
He just wanted to see if we'd all move. Anyway, AI: it's useful stuff. I enjoy playing with it. Getting into investigating the various companies is going to be interesting, and the various different approaches to this AGI are also going to be interesting. It's a good goal.
We've got real problems in getting there. And these guys, I think, in my opinion, that are doing the neural-net training with the structure they've got, they won't reach that goal, okay? They're not going to get an AGI out of the approach they're taking at this moment. I've got a lot of reasons for suggesting that, and I can go into them at some point. But I encourage everybody to go get a free trial on, like, ChatGPT or any of these other AIs.
I was offered an opportunity to deal with a couple of these AIs where it's a blank slate and you load your own data. So again, very tedious, right? Because in order to get quality in the indices and in the response out of it, you've got to have quality and quantity in your data going in. And so what I want to do with the one AI, where the access is being provided to me by a Russian corporation, is train the bugger to do, like, my ALTA reports, right?
To go out and analyze and this kind of thing. It's a hugely complicated task, training an AI to do this, but I believe it's worthwhile. I believe the AI could eliminate vast quantities of the tedium and the actual work that my process used to take. But it might take me six, eight months to train the thing. I don't even know if it's possible with this particular model, and you don't know until you get into it some distance and see if indeed it has the capacity to achieve what you want.
But I'm retired. It's not like a big investment for me to put four or five months into it and then have it crap out because I'll learn a lot in the process. Anyway. What the hell, dude?
He turned against the light. Just took a left into this. My God, no wonder we have so many accidents.
So anyway, I'm getting close to my first stop here. Gonna go over to Home Depot and get some of that paint you spray on the ground, and the little orange string and stuff, for surveying and laying out your building. Anyway, then a bunch of other crap.
So it's kind of a strange day here. Uh-oh. What have you done there, people? No, hold still.
Anyway.
Okay guys. Anyway, watch out for the AI. It always fucks up, makes mistakes. But it's a useful tool and I'm not particularly scared of it. I don't worry about AI, and I sure as fuck don't worry about alien AI floating through the air and taking over.
I shouldn't get on Kerry Cassidy's case. She's got fears about things she doesn't understand; she doesn't program. Anyway, AI is cool. It's very useful, and extremely useful if you're a programmer, so it's very worth pursuing. It's not really scary, and it's not very reliable.
Anyway.