I share a number of your concerns, Thorsteinn. While there are many legitimate uses AI can be put to, many would agree that the ubiquitous screens in our lives have already dumbed us down and shortened our attention spans. Just think, then, what the effect might be of AI doing all of our thinking for us...
But my primary concern is that AI may soon become an unimaginably powerful ideological tool. This immediately raises the question: WHO controls AI? What is this person's worldview, HOW will this person shape AI, and with WHAT objectives? Like many other AI propagandists, Harari carefully erects a bogus "mystical aura" around AI, claiming for example that we "don't really understand how AI makes its decisions". In other instances, he has also claimed that "AI will eventually be able to understand us better than we understand ourselves". Of course this bogus mystical aura is useful for instilling respect for AI bots in the masses, respect that will be very useful for an elite seeking to establish AI as an instrument of propaganda and manipulation. All bow down to AI... (copy-paste the Ewoks-worshiping-C-3PO scene from Return of the Jedi -> Oh No, Oh No, Oh No...)
Thanks Paul. There will surely be massive attempts at using AI as a tool for manipulation. And I'm sure that to many people it may look mystical. But I don't think instilling a mystical aura is Harari's objective, or, for that matter, the objective of others who are warning us now. It is simply true that we cannot really predict how neural networks react; this is their nature. The way to manipulate them is through controlling the data available to them. A lot of confusion arises when we start using concepts such as understanding, free will, consciousness etc. and either attribute those qualities to AI or not. But all this is really of no importance. What matters is how it affects us.
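To make that last point concrete, here is a minimal toy sketch, entirely my own illustration with made-up features and labels rather than anything from the article or the thread: two copies of the same tiny classifier are trained on differently curated labels, and the same query then receives opposite verdicts.

```python
# Toy illustration: controlling the training data steers the model's answers.
# The features and labels are hypothetical; only the mechanism is the point.
import numpy as np

def train(X, y, lr=0.5, steps=2000):
    """Plain logistic regression fitted by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # cross-entropy gradient step
    return w

# Hypothetical features: [mentions_topic, from_source_A, from_source_B]
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)

# Curator 1 labels everything mentioning the topic as "good" (1);
# curator 2 labels the very same examples as "bad" (0).
y_curator1 = np.array([1, 1, 0, 0], dtype=float)
y_curator2 = np.array([0, 0, 1, 1], dtype=float)

w1, w2 = train(X, y_curator1), train(X, y_curator2)

query = np.array([1, 0, 0], dtype=float)   # a new text mentioning the topic
for name, w in [("curator 1", w1), ("curator 2", w2)]:
    p = 1 / (1 + np.exp(-query @ w))
    print(f"Model trained by {name}: P(topic is good) = {p:.2f}")
```

Nothing about the model changes between the two runs; only the data does.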
I don't know... Isn't this like claiming that Europeans in the 1930s shouldn't have worried about Nazi ideology because Nazi engineers built decidedly well-designed highways/Autobahns? That AI can prove useful for writing essays or developing university course material seems acceptable, but this may be just a first step to get AI's foot in the door. Ignoring the worldview of AI's owners/developers amounts to naïvely attributing an aura of "neutrality" to such apps. Harari certainly has NOT been shy about marketing AI as an ideological tool, even suggesting it could rewrite the Bible. Odd that he never mentions using AI to rewrite the Koran or other sacred books. Perhaps Salman Rushdie's misadventures have something to do with that...
Dear Thorsteinn,
This is a beautifully written piece that perfectly strikes its target (we must not allow AI to substitute for our thinking), then bizarrely capitulates (but we can use AI to support our critical thinking). You certainly have more faith in logical methodology than I ever have (a tangent I'll leave for another time), but this is another case where my Master's degree in AI has left me more sceptical than most about what we are doing with AI - and certainly far more sceptical than the MBAs currently selling VCs on AI.
"Instead of having AI do our writing for us, we can have it assist us improving it. Instead of outsourcing our thinking to technology, we can harness its power to become better at thinking."
Or, in other words, instead of letting AI write bad prose for us, we will let it conduct bad literature reviews for us instead...? It's all part of the thinking process! Outsourcing any of this is beyond risky. Having seen the abuses that search engines unleashed, I can find no reason to imagine that grafting super-Eliza onto the top of search engines will do anything other than further obfuscate any grounds for clarity of thought. This is especially so given the situation in academic publishing, which will be feeding a fair volume of the data sets for the large language models. Upon this path freshly mouldering madness lies.
Well, I don't expect that I'll convince you, but I still maintain the discourse is its own reward, at least. Here, as everywhere else, 'the means is the end'.
I have a Stranger Worlds coming in September that addresses some of the changes in the circumstances of thought that computers brought about, and I'll try to remember to pop in a link here when it runs. In the meantime, I will leave you with this 3-minute reflection from last month:
https://strangerworlds.substack.com/p/laws-of-robotics
If it is not a rebuttal of what you write here, it is at least a more sceptical take on the issue of Large Language Models.
With unlimited love and respect,
Chris.
Thanks for your comment Chris and thanks for sharing your piece. The fact is that AI can be a very useful tool for improving complex cause-effect analyses. That conclusion is no guesswork, but based on a good deal of experimentation. For further elaboration, please take a look at my recent piece on the subject here: https://thorsteinnsiglaugsson.substack.com/p/using-ai-to-improve-our-decisions
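For a flavour of what such experimentation can look like, here is a minimal sketch of asking a chat model to scrutinize a cause-effect claim. It assumes the OpenAI Python client and an API key in the environment; the model name, prompt wording and example claim are placeholders of mine, not the procedure from the linked piece.

```python
# A minimal sketch of prompting a chat model to critique a cause-effect
# claim. Assumes the OpenAI Python client (openai>=1.0) and an API key in
# the OPENAI_API_KEY environment variable. Illustration only, not the
# author's actual method.
from openai import OpenAI

client = OpenAI()

claim = "IF we let AI write our texts THEN our own writing skills decline."

prompt = (
    "Examine the following cause-effect claim. Is the stated cause "
    "sufficient to produce the effect? List any unstated assumptions "
    f"and at least one plausible alternative cause.\n\nClaim: {claim}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```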
I do appreciate how your conclusions have been reached, and I respect your loyalty to your methods, and your rigour. Slaves are always appealing as a matter of utility when we have the moral case to ignore that they are slaves, and the argument against depending upon them does not solely depend upon the slave's side of the equation (cf. Hegel). If we don't speak about this again before then, let's compare notes in a year. I remain very interested to see how your lived practice develops on the narrow path you have chosen. 👋
There are a couple of considerations that are relevant when discussing AI.
The first consideration: generative AI systems are just statistical predictive systems. We humans recognize patterns too, but we also verify the logical consistency of our own predictive answers before using them.
Within "logical consistency" one should also include "applicability", the fit-for-purpose-ness of the solution.
Current AI systems completely lack this verification phase.
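As a toy illustration of that point (mine, not the commenter's): a bigram model continues text with whatever word most often came next in its training data, and nothing in it checks the answer for truth, consistency or fitness for purpose before emitting it.

```python
# A toy "statistical predictive system": a bigram model that always emits
# the most frequent next word. There is no verification phase of any kind.
from collections import Counter, defaultdict

corpus = ("the cat is alive . the cat is dead . the cat is dead . "
          "the cat sat on the mat .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continue_text(words, n=3):
    """Append the statistically most likely next word, n times over."""
    words = list(words)
    for _ in range(n):
        words.append(follows[words[-1]].most_common(1)[0][0])
    return " ".join(words)

# "dead" simply outnumbers "alive" in the data, so that is the prediction;
# whether it is true, consistent or applicable is never examined.
print(continue_text(["the", "cat", "is"]))
```

Large language models are incomparably more sophisticated, but the basic shape, predict the likely continuation and stop there, is the same.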
The second consideration is a related one.
Imagine a person who, due to a disease, had lived confined in a room all of his or her life.
Imagine also that all the knowledge of the real world this person had came from reading the books of a large library. You would surely not trust any real-world advice given by this person.
In other words, Searle's argument is still valid, I'm afraid.
The sense of reality is a human thing and it cannot be programmed or trained in any device.
The human hand holding the hammer is still needed.
In other words, critical, logical thinking is more needed than ever.
Thanks Kurt. I agree. Critical thinking is more needed now than ever. As for Searle, I assume you are referring to the Chinese room argument, which shows that we cannot conclude, based on behaviour, that understanding exists. It is a beautiful argument, surely. But in the end, I'm just not sure this really matters. Not from a practical perspective, at least.
Such issues feed into the many claims we hear about AI soon becoming "sentient". In my view this is NOTHING more than marketing/fear-mongering, as, first off, there is no agreed-upon and CLEAR definition of what "sentience" means. As a result, such claims are empty/meaningless. The Turing Test, Deep Blue's wins at chess and AI victories at Go or StarCraft appear to be further steps in this direction... Claims of AI "sentience" are thus nothing more than postmodern jargon covering for old theological terms such as "soul" or "spirit"... Of course, if claims of AI "sentience" were widely accepted by the public, in ideological terms this would provide such AI instances with great authority, which is of course a very significant ideologico-religious (and political) matter. I have dealt with such matters in chap. 5 of my book "Flight From the Absolute, volume 1".
We still use concepts like sentience, free will and self-awareness, concepts which really have little meaning in the context of modern materialism but are rooted in the dualistic tradition represented by Descartes. And in terms of AI those concepts are totally useless, as you say. Sentient or not, AI can be goal-seeking; sentience is not needed for that at all. But we have difficulty understanding this, for our thinking is simply too anthropocentric. It would make sense for us to stop trying to compare AI to humans; we should rather think about it as a different species from us. That way we might be able to think about it more clearly.
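To illustrate that goal-seeking requires no sentience, here is a toy sketch of my own, not from the thread: a few lines of blind hill-climbing pursue a target relentlessly, and there is plainly nothing inside that understands or experiences anything.

```python
# Goal-seeking without sentience: blind hill-climbing toward a target value.
import random

random.seed(42)

def seek(goal, start=0.0, step=1.0, iterations=200):
    """Keep any random move that lands closer to the goal; discard the rest."""
    x = start
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if abs(candidate - goal) < abs(x - goal):   # closer to the target?
            x = candidate                           # keep the improvement
    return x

print(round(seek(goal=7.3), 3))   # ends up very near 7.3
```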
Hi
I think I disagree with you.
I think it is essential to know in advance what kind of relationship we can possibly have with all those thinking machines, as Turing would have put it.
And if you allow me to advance the answer, the only meaningful relationship is as tools.
In the end, the "I" in AI is made of algorithms and training "by" somebody. Because of that, a robot, a thinking machine, cannot be truly (ontologically) autonomous as long as there is an "A"(rtificial) in front of it.
I mean, it happens that we think of Tamagotchis, or more advanced machines, as individual beings, but this is just emotional projection.
There cannot be anything else behind it.
And anyway, let's just keep it at that level. Machines, robots, AIs are only tools.
Thanks. Well. I think we are in agreement here. Equating AI with humans is meaningless, an emotional projection like you say. But that doesn't mean they cannot be dangerous, for they can be goal-seeking, even if the goal is initially set by a human.
I like your observation: "We can let it answer all our questions, handle the administration of our work, answer our emails, drive our cars, manage our homes. And we can let it write our essays for us, analyse our problems, in short, we can let it think for us. How long, then, until we let it vote for us also?"
One could add to that: "How long, then, until we let it decide whether our lives are no longer worth living?" If a genocidal eugenicist has access to this system, this may eventually become a real concern...
Yes, once we start letting it think for us this concern becomes real. This is why it is so important that we avoid this.