Ideas wanted: Plan/Complete #8
Labels: enhancement, help wanted, question
The slowest part of a tool like Tabnine or Copilot is the nut behind the keyboard. With edits done, feedback loops are the next really powerful idea this tool probably needs, with building full-on apps as the goal.
https://twitter.com/ZackKorman/status/1599317547509108736
That tweet shifted a few things in my head. The big limitations of GPT-3 are the token limit (which is actually pretty generous) and bullshit, which is only a problem if you can't test things (like copywriting... boy is bullshitting a pain there).
We solve this with 3(ish) phases:
Plan
My thinking is a `plan` command, which transforms an app prompt into setup scripts, types, and a `plan.md` file in which GPT-3 details everything it thinks is required to fit the spec. This is the token limit I worry about most - the plan could get pretty long. The types could be built from the `plan.md` file though... with codex...
If someone is building a CLI in Python to do X we'll be fine, but if they want a fullstack app on Firebase etc., that could actually require expansion past the token limit... maybe.
Maybe a `--long` flag, which writes out a summary of the big moving pieces, each of which we could then expand? Either way, a human validates this before we move on. Add to the plan, subtract from it, whatever - just make sure the types are right!
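Something like this, very roughly (every name here is made up, and I'm assuming the plain old `openai.Completion` API with text-davinci):

```python
# Rough sketch of a hypothetical `plan` step - prompt wording, model choice,
# and file layout are all placeholders, not decisions.
import openai
from pathlib import Path

PLAN_PROMPT = """You are planning a software project.
Spec: {spec}

Write a plan.md that lists every file, type, and setup step needed
to build this. Be exhaustive but terse."""

def plan(spec: str, out_dir: str = ".") -> str:
    """Turn an app prompt into a plan.md for a human to review and edit."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=PLAN_PROMPT.format(spec=spec),
        max_tokens=2000,  # this is the part that eats the token budget
        temperature=0.2,
    )
    plan_md = resp["choices"][0]["text"]
    Path(out_dir, "plan.md").write_text(plan_md)
    return plan_md
```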
Stub
`stub` will take in a planning document and types, build the required files, and put the planning ideas into comments in the new files. I think it's appropriate to still use regular text-davinci here, because I don't plan to parse the written types and docs at any point - I'm letting GPT do that work.
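A sketch of how that could go - the "### path" header convention for splitting files is just an assumption to make the example concrete:

```python
# Hypothetical `stub` step: feed plan.md + types back in, ask for file
# skeletons, and write each one to disk.
import openai, re
from pathlib import Path

STUB_PROMPT = """Given this plan and these types, list every source file
we need. For each file output a header line "### <path>" followed by a
stub: imports, signatures, and the relevant plan items as comments.

PLAN:
{plan}

TYPES:
{types}"""

def stub(plan_md: str, types_src: str, out_dir: str = ".") -> None:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=STUB_PROMPT.format(plan=plan_md, types=types_src),
        max_tokens=2500,
        temperature=0.2,
    )
    text = resp["choices"][0]["text"]
    # Split on the "### path" headers and write each stub out.
    for match in re.finditer(r"### (\S+)\n(.*?)(?=\n### |\Z)", text, re.S):
        path, body = match.groups()
        dest = Path(out_dir, path)
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(body.strip() + "\n")
```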
Complete
`complete` should use the codex line of models - this is the point: we get here in half an hour and can use basically Copilot to write the whole-ass app for pennies (or free right now, actually). We use text-davinci to generate iffy stubs in a format codex likes, but let codex do the work it's good at. The kicker is we benefit from an 8k(!) token limit, which helps a ton.
And again, this is free today, so use the hell out of it.
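Roughly, per stubbed file (model name and whole-file-at-once approach are assumptions):

```python
# Hypothetical `complete` step: hand each stubbed file to codex and let it
# fill in the bodies; the stub comments double as the prompt.
import openai
from pathlib import Path

def complete_file(path: str) -> None:
    stub_src = Path(path).read_text()
    resp = openai.Completion.create(
        model="code-davinci-002",  # the 8k-context codex model
        prompt=stub_src,
        max_tokens=4000,
        temperature=0,
    )
    Path(path).write_text(stub_src + resp["choices"][0]["text"])
```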
The other thing is testing and iterating - davinci has a problem with bullshit, but tests can solve part of this. You still have to make sure the tests look reasonable, but having it generate tests and code, then passing errors back to it until they resolve, will at least avoid stuff like import/syntax errors. (It could also totally wreck the functionality, in which case you rewrite the tests and have it complete again - and you could use codegpt to rewrite the tests...)
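The loop I have in mind is roughly this (pytest, round count, and prompt wording are all placeholder choices):

```python
# Sketch of the test/fix feedback loop: run the generated tests, and if they
# fail, feed the traceback back to codex and ask for a fixed file.
import subprocess
import openai
from pathlib import Path

def fix_until_green(src_path: str, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        run = subprocess.run(["pytest", "-x", "-q"],
                             capture_output=True, text=True)
        if run.returncode == 0:
            return True  # tests pass, we're done
        resp = openai.Completion.create(
            model="code-davinci-002",
            prompt=(
                "The following file fails its tests.\n\nFILE:\n"
                + Path(src_path).read_text()
                + "\n\nPYTEST OUTPUT:\n" + run.stdout + run.stderr
                + "\n\nRewrite the full file so the tests pass:\n"
            ),
            max_tokens=3000,
            temperature=0,
        )
        Path(src_path).write_text(resp["choices"][0]["text"])
    return False
```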
Any ideas or comments are welcome!