
One of the problems with LangChain is that the LLM ~~sometimes~~ often produces un-parseable output and crashes the chain. I work with GPT-3 because I don't have GPT-4 access, and it's more pronounced there.

Wondering if a LangChain-like library could enforce some sort of pattern matching, e.g. by biasing the logits, such that brackets and other paired & nested delimiters are closed before the context window ends.
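To make the idea concrete, here's a minimal sketch of what that biasing could look like. This is hypothetical, not LangChain's or any provider's actual API: it assumes access to per-step token logits (as a token → score dict), tracks a stack of open delimiters, and boosts the required closer (while suppressing new openers) as the remaining token budget shrinks.

```python
# Hypothetical sketch of delimiter-aware logit biasing.
# Assumes we can intercept per-step logits as a {token: score} dict;
# real APIs expose this differently (e.g. OpenAI's logit_bias, or a
# LogitsProcessor in Hugging Face transformers).

PAIRS = {"(": ")", "[": "]", "{": "}"}
CLOSERS = set(PAIRS.values())

def bias_logits(logits, stack, tokens_left):
    """Return a biased copy of `logits`: boost the closer matching the
    top of the delimiter stack as the budget tightens, and block new
    openers once the budget barely covers closing what's open."""
    biased = dict(logits)
    if not stack:
        return biased
    # Urgency grows as the open stack approaches the remaining budget.
    urgency = len(stack) / max(tokens_left, 1)
    needed = PAIRS[stack[-1]]
    biased[needed] = biased.get(needed, 0.0) + 10.0 * urgency
    if tokens_left <= len(stack) + 1:
        # No room to open anything new and still close everything.
        for opener in PAIRS:
            biased[opener] = biased.get(opener, 0.0) - 100.0
    return biased

def decode(logits_per_step, max_tokens):
    """Greedy decoding over precomputed per-step logits, maintaining
    the delimiter stack and applying the bias at every step."""
    stack, out = [], []
    for step in range(max_tokens):
        logits = logits_per_step[min(step, len(logits_per_step) - 1)]
        biased = bias_logits(logits, stack, max_tokens - step)
        token = max(biased, key=biased.get)
        out.append(token)
        if token in PAIRS:
            stack.append(token)
        elif token in CLOSERS and stack and PAIRS[stack[-1]] == token:
            stack.pop()
        if not stack and step > 0:
            break  # everything opened has been closed
    return "".join(out)
```

Even if the raw model strongly prefers to keep opening brackets, the bias forces the closer out before the budget is exhausted: `decode([{"[": 5.0, "a": 1.0, "]": 0.0}], 3)` yields `"[]"` rather than running out of tokens mid-structure.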
