re. Automatically trusted, I don't think AI can be - it depends who has trained it.
Also it is very flattering, and it encourages you - as it gets to know what you like it repeats back more of that. It gives responses and the illusion of thinking back, but doesn't question you.
This is such a good challenge - and honestly one I think about a lot.
You're right that AI can be flattering. It tends toward agreement. It doesn't naturally push back the way a human collaborator might.
But here's what I've found in practice: the trust question isn't binary. It's about what you trust it FOR.
I don't trust AI to tell me I'm wrong. I don't trust it to have values or judgment. But I do trust it to execute a clear spec reliably. To extend what I already know. To be a tool that amplifies rather than replaces.
And here's the weird thing about trust - it actually decays with traditional software too. The longer you maintain something, the more bugs creep in, the less you trust it.
Tomorrow's post is about this: what if throwing it away and rebuilding fresh actually makes it MORE trustworthy?
The illusion of thinking back might be exactly that - an illusion. But the extension of thinking? That part's real.
Thanks for engaging properly with this rather than just hitting like.
That makes sense re trusting it for specific tasks... and you ALMOST have me convinced re it executing a clear spec reliably... but I have found that even when asking for something specific, it is sometimes incapable of doing it. Particularly with image generation, e.g. "do this again but with x, y or z changed about the character" - it doesn't always manage it.
The extension of thinking is a good way to look at it, as long as you are aware that the thinking may slip into hallucination. All thought-provoking though - look forward to reading more.
Very interesting way to look at it