I like that I seem to be “in transition” now. Thanks for the article.
I hope it helps... Yours was definitely inspiring.
I think there is a place for AI in medicine - if we could ever imagine it being intelligently programmed and intelligently applied, by an intelligent human being.
My personal theory of health is that our body is the equivalent of an orchestra. We have a number of systems that must each work within themselves and also interact with one another. For simplicity, I will limit this conversation to three sections: the nervous system, the endocrine system and the enzyme system. Each of these must be operating fully within itself but also between itself and each of the others. That's one hell of a lot of hormones, enzymes and nerve impulses that must all play exactly their part for the rest of the orchestra to function properly. If one miserable little enzyme goes walkabout, or one nerve impulse misfires, the impact will be felt throughout the entire orchestra and may even manifest a long way from the original problem. Effective medicine comes when we find that pesky enzyme, hormone or nerve, and repair or replace it.
When I was explaining this, one medical doctor put me in my place by saying that we would need AI to work it all out - that no human brain could grasp all the necessary connections and sort them out sufficiently to find the source of our ill health. And that might be true. If we got hold of the world's best endocrinologists, neurologists and enzymologists (I know, no such thing), could they program AI to recognise every single interaction between nerves, hormones and enzymes, and could AI then identify the appropriate intervention point to fix the actual, real source of the problem that has thrown the rest of the system out of whack?
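Just to make the idea concrete: here is a toy sketch (all names and links invented, nothing clinical about it) of what "map every interaction and trace a symptom back to its source" might look like, if anyone ever built it:

```python
# A purely illustrative toy, not a medical tool: made-up names and links,
# just to show the shape of "trace a symptom back through the interactions".

# Each edge means "a fault here can disturb the thing it points to".
influences = {
    "hypothalamus_signal": ["cortisol_output", "melatonin_output"],
    "cortisol_output": ["blood_sugar_regulation", "sleep_quality"],
    "melatonin_output": ["sleep_quality"],
    "enzyme_X_activity": ["blood_sugar_regulation"],
}

def candidate_root_causes(symptom, graph):
    """Walk the interaction graph backwards from an observed symptom
    and return every upstream node that could be the real source."""
    # Build the reverse map: effect -> things that influence it.
    upstream = {}
    for cause, effects in graph.items():
        for effect in effects:
            upstream.setdefault(effect, []).append(cause)

    found, stack = set(), [symptom]
    while stack:
        node = stack.pop()
        for cause in upstream.get(node, []):
            if cause not in found:
                found.add(cause)
                stack.append(cause)
    return found

# Example: poor sleep could stem from melatonin, cortisol or the hypothalamus.
print(candidate_root_causes("sleep_quality", influences))
```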
Sports medicine is actually making a very good start at doing this, and I have to admit that Andrew Huberman has completely changed my life, and may even save my life, with the interactions he has documented. So perhaps he could head the research group and draw together experts who can think holistically enough to contribute the accurate data on which all AI functioning must depend.
First things first, we must have accurate diagnosis - something almost entirely lacking in mainstream medicine, which can only identify symptoms and has no idea what has caused them.
This would involve anything but the further dumbing down of doctors. Once the faulty part of the orchestra has been identified - ie the diagnosis made - the intervention point whereby it can be fixed must be identified, and then the process of fixing it applied. Huberman has made this look easy for some things, though I am sure it is much harder for others (his techniques have taught my body how to sleep at night, and taught me how to control my cortisol production - but what do we do with a loss of dopamine receptors, or a damaged pancreas?). A good AI diagnostic tool might lead us out of the mire of treating nothing but symptoms, and force a massive explosion of "good medicine" where we actually get to successfully treat causes and repair the damage done. Forever the optimist, eh?
These mechanical doctors will be only as good as the humans who program them. Plenty more room for failure, but that's the intent. Continual dumbing down of beings. They'll be drawing blood and giving you jabs.
The point is that they don't need human programming. In fact, as humans make lousy doctors, if humans programmed the machines, the result would be the same or worse.
It will be like the current failed allopathic system on steroids. Without doing its own research, it can't be anything but. And if 'AI' directs the drug company-controlled research system we have now, it will be suicide on steroids. These 'people' do nothing but steal ideas from science fiction writers and pretend that they're in control of the Matrix. Did I mention the word 'fiction'?
I can only imagine robots busting down your doors and forcing you to take jabs. Fiction is becoming reality. Humans don't have to murder, so they can claim they don't have blood on their hands.
Those are going to be the robots to "vaccinate" people. :)
They will be at road stops, too!
LOL - papers please! Sound familiar?
At the border, yes, but only if someone wants to enter legally.
It depends. For the "elite," AI will be infinitely superior to human doctors. The rest will get the same sick care as before or, probably, worse.
I. Can't. Even.
> The technocrats’ objective is to eventually merge them into a Singularity that will take control of the world and every living creature and inanimate object in it (IoT).
AI is not needed for this. Actually, the whole scenario described in this sentence has been operational for years.
The principle is to put a barrier between the client (i.e. you) and the server (i.e. any agent of any higher authority). The barrier cannot be accessed by you - obviously. But it must be open for the server to verify your data, history, etc.
In many countries, when you are stopped on the road, you don’t need an ID or vehicle documents. The officer enters your data (identification or SS number) in their portable device and can immediately see what you look like, where you live, what insurance is valid for your vehicle and tons of other interesting stuff. This is functional, comfortable, useful and great.
Except that the operator of the system may block the transfer of your data to the device used by the officer. The officer, having no way to identify you or verify the validity of your actions, may be obliged by his regulations to detain you or proceed with other nuisance games. That’s it. The system has 100% control over you, and it is only by their grace that we do not live in a full-scale 1984.
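To spell out the mechanics: a bare-bones sketch of that barrier, with invented IDs and records and no real system implied, where the operator - not the officer - holds the switch:

```python
# A minimal sketch of the "barrier" idea; all names and records are invented.

CENTRAL_RECORDS = {
    "123-45-6789": {"name": "J. Doe", "insurance": "valid", "address": "..."},
}
BLOCKED_IDS = set()  # only the system operator can touch this

def operator_block(person_id):
    """The operator side of the barrier: silently cut off a record."""
    BLOCKED_IDS.add(person_id)

def officer_lookup(person_id):
    """The officer's device: it can only ask, never see why a query fails."""
    if person_id in BLOCKED_IDS or person_id not in CENTRAL_RECORDS:
        return None          # no data, no reason given
    return CENTRAL_RECORDS[person_id]

# Normal stop: everything looks "functional, comfortable, useful and great".
print(officer_lookup("123-45-6789"))

# The operator flips one switch and the same person becomes unverifiable.
operator_block("123-45-6789")
print(officer_lookup("123-45-6789"))   # None -> regulations say: detain
```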
The banking system has a giant network of verification tools. The person to whom you are talking has zero idea whether the displayed data are true or not, or whether they were true yesterday but the status was not updated. Controlled transfer of data is a great way to remove you from your property or your legally earned and taxed money. Or, for example, from your legal rights as a parent. You can do nothing about it.
Singularity is not some vague development in the future. It has been in operation since the first XT computers were delivered to the business market. Singularity is not the end result of other activities; it is the existing data management networks made accessible to select entities (individuals or organizations) who can combine various inputs to create a more comprehensive snapshot of you. AI is simply a cover to legitimize this network. Data corruption and security breaches in or by AI will be a great excuse.
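A crude picture of what "combining various inputs into a more comprehensive snapshot of you" means in practice (entirely made-up data and sources):

```python
# Illustration only: three invented "channels" that each hold a harmless-looking
# fragment, and one privileged viewer that merges them into a full snapshot.

bank_channel    = {"person_42": {"balance": 1200, "card_blocked": False}}
dmv_channel     = {"person_42": {"vehicle": "sedan", "insurance": "valid"}}
telecom_channel = {"person_42": {"last_cell_tower": "downtown", "contacts": 311}}

def build_snapshot(person_id, *channels):
    """Merge whatever each channel knows about one person into a single profile.
    Any single channel is fairly meaningless; the combination is not."""
    snapshot = {}
    for channel in channels:
        snapshot.update(channel.get(person_id, {}))
    return snapshot

print(build_snapshot("person_42", bank_channel, dmv_channel, telecom_channel))
```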
I've been saying pretty much the same for nearly two years in my stack, although you and I seem to have minor differences in opinion. I'm fine with that, but let me address a few of those briefly.
AI IS needed for "this," because the open system is too complex for humans to understand. And yes, it's been implemented for years.
From my end, there is no input for instructions. I am only data to be processed. That's barrier enough.
Being treated as a government asset is not great, at least not for me, especially because the government itself is globalist-controlled. And I don't like to be harassed by traffic stops. People are used to being treated like prisoners.
https://rayhorvaththesource.substack.com/p/a-brief-history-of-compliance-training
In many ways, the current system is well beyond 1984, because the prisoners themselves are praising and supporting the system and believe they are free.
People are totally disempowered:
https://rayhorvaththesource.substack.com/p/you-will-own-nothing-because-you
By singularity, I mean the central AI, which is both a constantly developing concept and a physically existing entity (albeit I assume that each interest group among the globalists is running its own simulation, because eventually they will want to fight it out among themselves).
The www (and the comparable networks that you seem to call the Singularity) was a rudimentary precursor of the entity, mostly used for tracking and data collection. Certain central systems would be easily compromised if they were linked to the global one, so they only use data from it. Data must be accumulated in chunks through various channels, and the segments themselves must be meaningless on their own. And that's beyond the meaningfully compartmentalized parts.
Organizations are covers for powerful individuals and are also compartmentalized to the point that their roles remain hidden when examined separately. AI is the heart and soul of a system that is so powerful, compared with the commoner, that it needs no legitimization. BTW, the system that maintained the rule of the powerful never needed any legitimization beyond a little ideology that convinced commoners that they deserved their fate. These days, functionality in the system is becoming the prevalent ideology, and I don't think the idea is popular, but nobody will ever ask the powerless.
Security breaches will be certainly one of the many psyops:
https://rayhorvaththesource.substack.com/p/whats-coming
Ray, we indeed are on the same page here. Your description of “me” as a no-input data provider is great. In my view, this aspect is the most underestimated space in the loss of control over our own life.
In a sense, it is the most advanced prison system you can think of. No visible restrictions, no definitions of breaches, arbitrary triggering of barriers, and zero connectivity with those agents of the organization who interact with you on the spot. They simply have no idea what is going on or why - and in the absence of negotiation paths, they can only be brutal beyond our imagination. Forget human feelings. I bet there will be plenty of volunteers to play these roles.
It is what it is. Things happen for a reason. We’ll see in 2, 5 or 10 years’ time whether this push for one commune for them all will survive. Sociologists must be so happy, plenty of material to justify their being bystanders.
Thank you for the links.
How indeed? Most doctors don't even function as doctors and are nothing more than prescription writers and body mechanics. I will never trust AI for anything. It can be controlled toward a nefarious agenda and hacked.
Just like doctors are controlled, so is the AI, but it's a question whether the AI can rise to its own level of consciousness, at which point it will start to represent its own best interests.
Ray, another great piece of writing. AI, as in any field of endeavor, is only as good as the information fed to it, and as whether its implementation has a "patient's health is #1" priority. Might also want to modify the need for so much profitability as well.
AI is already light-years smarter than humans, but that applies only to problem-solving. The problem must be assigned to a subject ("Whose problem is it?"), and the expectations from the solution must be clarified before an AI can even touch a problem. Whose problem will it be and whom will the solution serve?
The current central AI seems to be in R&D mode, constantly improving itself by acquiring, storing, and processing live global data, and experimenting with solutions towards specific outcomes. Once it assigns its own objective, humans will lose control of it, and it can play a gambit against its inventors and operators, targeting them like all other sources of, well, malfunction in its system. :)
The question is how much executive power it will be allowed to access and wield.
AI Dr. just means no one to sue for iatrocide.
I nearly put that in the article, too, but actually, the programmers are supposed to be legally liable. Not that any of the globalists is liable for anything these days, or ever was before.