
Ideas wanted: Plan/Complete #8

Open
morganpartee opened this issue Dec 19, 2022 · 1 comment
Labels
enhancement (New feature or request), help wanted (Extra attention is needed), question (Further information is requested)

Comments

@morganpartee
Owner

The slowest part of a tool like tabnine or copilot is the nut behind the keyboard. With edits done, feedback loops are the next really powerful idea this tool probably needs, with building full-on apps as the goal.

https://twitter.com/ZackKorman/status/1599317547509108736

That tweet shifted a few things in my head. The big limitations of GPT-3 are the token limit - which is actually pretty generous - and bullshit, which is only a problem if you can't test things (like copywriting... boy, is bullshitting a pain).

We solve this with 3(ish) phases:

  • Plan (and validate)
  • Stub
  • Complete (and test)

Plan

My thinking is a plan command, which transforms an app prompt into setup scripts, types, and a plan.md file in which GPT-3 details everything it thinks is required to fit the spec. This is where I worry most about the token limit - the plan could get pretty long.
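As a sketch, plan could be as thin as a prompt template plus one completion call. All the names here (PLAN_PROMPT, build_plan_prompt, plan) and the prompt wording are hypothetical, and the completion backend is injected rather than hard-coded, so it could wrap a text-davinci call or a fake for testing:

```python
from typing import Callable

# Hypothetical prompt template -- the real wording would live in the tool.
PLAN_PROMPT = (
    "You are planning a software project.\n"
    "Spec: {spec}\n\n"
    "Write a plan.md listing every file, type, and setup step "
    "needed to fit the spec."
)

def build_plan_prompt(spec: str) -> str:
    """Turn the user's app prompt into a planning prompt."""
    return PLAN_PROMPT.format(spec=spec)

def plan(spec: str, complete: Callable[[str], str]) -> str:
    """Generate plan.md text. `complete` is injected so the LLM
    backend (e.g. a text-davinci call) can be swapped or faked."""
    return complete(build_plan_prompt(spec))
```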

Types could be built from the plan.md file though... with codex...

If someone is building a cli in python to do X we'll be fine, but if they want a fullstack app on firebase etc, that could actually require expansion past the token limit... Maybe.

Maybe a --long flag, which writes out a summary of the big moving pieces, which we could then expand each of?
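A rough sketch of what that --long expansion could look like - first ask for the big moving pieces, then expand each one in its own request so no single completion hits the token limit. expand_plan and the prompt wording are made up for illustration, and complete again stands in for the model call:

```python
from typing import Callable, List

def expand_plan(spec: str, complete: Callable[[str], str]) -> str:
    """Two-pass planning for the --long idea: summarize the major
    components, then expand each component in a separate request."""
    summary = complete(f"List the major components needed for: {spec}")
    # Treat each non-empty line of the summary as one component.
    pieces: List[str] = [
        line.lstrip("- ").strip() for line in summary.splitlines() if line.strip()
    ]
    sections = [
        complete(f"Expand this component of '{spec}' into a detailed plan: {piece}")
        for piece in pieces
    ]
    return "\n\n".join(f"## {piece}\n{body}" for piece, body in zip(pieces, sections))
```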

Either way, a human validates this before we move on. Add to the plan, subtract from it, whatever. Make sure the types are right though!

Stub

stub will take in a planning document and types, build the required files, and put the planning ideas into comments in the new files.
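The mechanical half of that - turning plan text into commented-out stub files a completion model can finish - might look like this (write_stub is a hypothetical helper for illustration, not actual code from this repo):

```python
import os

def write_stub(path: str, plan_notes: str) -> None:
    """Create a stub file whose body is the plan's ideas as comments,
    ready for a completion model to fill in."""
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    # Every line of the plan becomes a comment in the new file.
    commented = "\n".join(f"# {line}" for line in plan_notes.splitlines())
    with open(path, "w") as f:
        f.write(commented + "\n\n\ndef main():\n    ...  # to be completed\n")
```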

I think it's appropriate to keep using regular text-davinci here, since I don't plan to parse the written types and docs at any point - I'm letting GPT do that work.

Complete

complete should use the codex line of models - this is the point: we get here in half an hour, and then we can use basically copilot to write the whole-ass app for pennies (or for free right now, actually). We use text-davinci to generate iffy stubs in a format codex likes, but let codex do the work it's good at.

The kicker is we benefit from an 8k token limit(!), which helps a ton.

And again, this is free today, so use the hell out of it.

The other thing is testing and iterating - Davinci has a problem with bullshit, but tests can solve part of this. You still have to make sure the tests look reasonable, but having it generate tests and code, then feeding it the errors until they resolve, will at least avoid stuff like import/syntax errors. (Or it'll totally wreck the functionality - in which case you can rewrite the tests and have it complete again, and you could use codegpt to rewrite the tests...)
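That generate-run-feed-errors-back loop could be sketched like this. Everything here is hypothetical (fix_until_green, the prompt wording, the round limit), and complete again stands in for a codex call:

```python
import subprocess
from typing import Callable, List

def fix_until_green(source: str, path: str, test_cmd: List[str],
                    complete: Callable[[str], str],
                    max_rounds: int = 5) -> bool:
    """Write the code, run the tests, and while they fail, hand the
    source plus the error output back to the model for another try."""
    for _ in range(max_rounds):
        with open(path, "w") as f:
            f.write(source)
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; stop iterating
        # Feed the failure back as commented context for the next completion.
        source = complete(
            source
            + "\n\n# The tests failed with:\n# "
            + result.stderr.replace("\n", "\n# ")
            + "\n# Rewrite the code so the tests pass.\n"
        )
    return False  # gave up; a human should look at the tests
```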

Any ideas or comments are welcome!

@morganpartee
Owner Author

@josephsdavid I want your input here! I'm sure you have friends with ideas on this too, it's basically "how would I scale junior developers if I could only talk to them through a script", which has to be interesting to more people than just me lol

@morganpartee added the enhancement (New feature or request), question (Further information is requested), and help wanted (Extra attention is needed) labels on Dec 19, 2022