Dear Thorsteinn,
This is a beautifully written piece that perfectly strikes its target (we must not allow AI to substitute for our thinking), then bizarrely capitulates (but we can use AI to support our critical thinking). You certainly have more faith in logical methodology than I ever have (a tangent I'll leave for another time), but this is another case where my Master's degree in AI has left me more sceptical than most about what we are doing with AI - and certainly far more sceptical than the MBAs currently selling VCs on AI.
"Instead of having AI do our writing for us, we can have it assist us improving it. Instead of outsourcing our thinking to technology, we can harness its power to become better at thinking."
Or, in other words, instead of letting AI write bad prose for us, we will let it conduct bad literature reviews for us...? It's all part of the thinking process! Outsourcing any of this is beyond risky. Having seen the abuses that search engines unleashed, I can find no reason to imagine that grafting super-Eliza onto the top of search engines will do anything other than further obfuscate any grounds for clarity of thought. This is especially so given the situation in academic publishing, which will be feeding a fair volume of the data sets for the large language models. Upon this path freshly mouldering madness lies.
Well, I don't expect that I'll convince you, but I still maintain the discourse is its own reward, at least. Here, as everywhere else, 'the means is the end'.
I have a Stranger Worlds coming in September that addresses some of the changes in the circumstances of thought that computers brought about, and I'll try to remember to pop in a link here when it runs. In the meantime, I will leave you with this 3-minute reflection from last month:
https://strangerworlds.substack.com/p/laws-of-robotics
If it is not a rebuttal of what you write here, it is at least a more sceptical take on the issue of Large Language Models.
With unlimited love and respect,
Chris.
Thanks for your comment, Chris, and thanks for sharing your piece. The fact is that AI can be a very useful tool for improving complex cause-and-effect analyses. That conclusion is no guesswork, but based on a good deal of experimentation. For further elaboration, please take a look at my recent piece on the subject here: https://thorsteinnsiglaugsson.substack.com/p/using-ai-to-improve-our-decisions
I do appreciate how you have reached your conclusions, and I respect your loyalty to your methods and your rigour. Slaves are always appealing as a matter of utility when we have a moral case for ignoring that they are slaves, and the argument against depending upon them does not rest solely upon the slave's side of the equation (cf. Hegel). If we don't speak about this again sooner, let's compare notes in a year. I remain very interested to see how your lived practice develops on the narrow path you have chosen. 👋