wonder how much leverage there is in combining test-driven development (TDD) with LLMs. you write the unit tests ahead of the functionality, and the LLM is asked to write code that passes the tests.
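roughly what that looks like as a minimal sketch — the slugify example, the module names, and the prompt wording are all made up for illustration, not anything specific I have in mind:

```python
# test_slugify.py -- the tests exist before the implementation does,
# so they fail with an ImportError until the model supplies solution.py.
# `slugify`, `solution`, and the slug rules are hypothetical placeholders.

def test_lowercases_and_hyphenates():
    from solution import slugify
    assert slugify("Hello World") == "hello-world"

def test_drops_punctuation():
    from solution import slugify
    assert slugify("Rock & Roll!") == "rock-roll"

# The tests double as the spec: hand them to the model verbatim.
PROMPT_TEMPLATE = (
    "Write solution.py defining slugify(text: str) -> str so that this "
    "pytest file passes:\n\n{tests}"
)
```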

suggested names for this: "target tests" (make you backchain to infer code that fits the tests) vs "maintenance tests" (designed to make sure stuff keeps working as you expand the codebase).

for even more abstract leverage: write the "unit tests" in plain English, ask an LLM to translate them into code (or explain why the behaviour is impossible), then ask an LLM to write code that passes the LLM's tests.
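a sketch of that two-step loop; llm() here is just a stand-in for whatever completion API you'd actually call, and the spec is invented:

```python
# Sketch of the plain-English-tests loop. `llm` stands in for whatever
# completion API you actually use; nothing here is a real library call.

ENGLISH_SPEC = """
- slugify("Hello World") gives "hello-world"
- slugify of an empty string is an empty string
- punctuation is dropped, never turned into hyphens
"""

def llm(prompt: str) -> str:
    """Placeholder: send the prompt to your model of choice, return its text."""
    raise NotImplementedError

def english_to_tests(spec: str) -> str:
    # step 1: the model turns English assertions into an executable pytest
    # file, or explains why a requested behaviour can't be satisfied
    return llm(
        "Translate these requirements into a pytest file, or explain why "
        "any of them is impossible:\n" + spec
    )

def tests_to_code(test_source: str) -> str:
    # step 2: a second call writes code that passes the generated tests
    return llm(
        "Write the module under test so that this pytest file passes:\n"
        + test_source
    )
```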


the question "when shud u write the tests before code, versus vice versa?" analogizes to "when shud u backchain vs forward-chain?" (respectively)

@rime "just asl the model" perhaps?

And then train on the times when the code breaks.
