Taylor's issue: "Sometimes I get frustrated when AI doesn't just answer my question, but instead tries to apply a 'fix'. In this case, I don't want this fix. (Nothing is broken. I want no fix. The proposed change would introduce a really serious bug.)"
Could there be a better general guiding principle here than a literal constraint like “don’t generate code if the user is asking a question”? We probably don’t explain well enough that the assistant’s role is that of a general copilot rather than just a code generator, e.g. the importance of making the user aware of edge cases and of how the code works.