Ersatz Thinking
To learn and understand something you need to encounter challenge or friction just like lifting weights you should lift on a progressive overload to build strength.
That Large Language Models (llms) actively hinder this approach is no surprise. Therefore you are lying to yourself and to everyone else when you state that a chatbot is helping you learn a new subject or master unfamiliar material. I am stating that this is ersatz thinking/learning and you must temper your expectations as to what you think is possible for yourself and others to achieve using this technique.
I covered this in detail previously here.
When I think of uses for LLMs I start with the following guidelines
- Is accuracy important? (If yes, as in bank charges do not use an LLM it will fail)
- Would this issue follow a generalized template? (If no, do not use but most simple FAQs can be interpreted as applicable here)
- What is the worst action that could happen if a prompt runs? (If the agent is allowed to perform actions that cannot be rolled back or controlled never use an LLM)
If none of these are situations that could happen then the tool of the LLM can be used again it may not be the best and why the current atmosphere of defaulting to using LLMs first and everywhere is problematic and incorrect. It is a tool and suited to particular problems not all problems. Would you use in a chef knife for building a table, no unless you want to get cut badly and in the process destroy your knife, likewise LLMs have applicable and non applicable uses.
Why should you think about those questions before applying and LLM. The answer is that all LLMs
- Use prompt injection (It is the input for an action to occur and is fundamental to why data and actions are not separable with the current approaches)
- Hallucinate responses (many hallucinations are useful, but that is because we have the greatest general purpose inference tool in our brains available to find meaning from levels of vagueness that no LLM can infer)
β Frank Herbert, Dune