One of the problems with #langchain is that the LLM ~~sometimes~~ often produces unparseable output, which crashes the chain. I work with GPT-3 because I don't have GPT-4 access, and the problem is more pronounced there.
Wondering if a LangChain-like library could enforce some sort of pattern matching, e.g. by biasing the logits so that brackets and other paired & nested delimiters are closed before the context window ends.
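
Rough sketch of the kind of thing I mean, using Hugging Face transformers' `LogitsProcessor` hook on a local model (the OpenAI API only exposes a static per-request `logit_bias`, so this wouldn't work against GPT-3 directly; class name, boost value, and thresholds are just placeholders):

```python
import torch
from transformers import LogitsProcessor

class BracketClosingProcessor(LogitsProcessor):
    """Boost closing delimiters when the generation budget is nearly used up."""
    PAIRS = {"{": "}", "[": "]", "(": ")"}

    def __init__(self, tokenizer, max_length, prompt_length, boost=10.0):
        self.tokenizer = tokenizer
        self.max_length = max_length        # total budget (prompt + generation)
        self.prompt_length = prompt_length  # tokens to skip when decoding
        self.boost = boost

    def __call__(self, input_ids, scores):
        for i, seq in enumerate(input_ids):
            text = self.tokenizer.decode(seq[self.prompt_length:])
            # Track which closers are still owed, in nesting order.
            stack = []
            for ch in text:
                if ch in self.PAIRS:
                    stack.append(self.PAIRS[ch])
                elif stack and ch == stack[-1]:
                    stack.pop()
            remaining = self.max_length - seq.shape[-1]
            # Once the remaining budget is only enough to close what's open,
            # bias the innermost unclosed delimiter upward.
            if stack and remaining <= len(stack) + 1:
                for tok_id in self.tokenizer.encode(stack[-1], add_special_tokens=False):
                    scores[i, tok_id] += self.boost
        return scores
```

Usage would look something like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok('Return JSON: {"answer":', return_tensors="pt")
proc = BracketClosingProcessor(tok, max_length=60,
                               prompt_length=inputs.input_ids.shape[-1])
out = model.generate(**inputs, max_length=60,
                     logits_processor=LogitsProcessorList([proc]))
print(tok.decode(out[0]))
```

A soft bias like this only nudges the model; for a hard guarantee you'd mask every token except the required closer instead of adding a boost.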