18.0 fix xdo #971

Open

Xavier-Do wants to merge 89 commits into 18.0 from 18.0-fix-xdo
Conversation
Backport of d7a78f8 - `choice` is not a proper type - markers should be declared
In some cases, feedback to the PR author that an r+ is redundant went missing. This turns out to be due to the convoluted handling of approval on forward-port, and to the fact that the target PR is treated exactly like its ancestors: if the PR is already approved, the approval is not even attempted (and so there is no feedback if it's incorrect). Straighten up this bit and add a special case for the PR being commented on: it should get the usual feedback if the command is in error or redundant. Furthermore, update `PullRequests._pr_acl` to more or less work out of the box for forward-ports: if the current PR is a forward port, `is_reviewer` should check delegation on all ancestors; there doesn't seem to be any reason to split "source_reviewer", "parent_reviewer", and "is_reviewer". Fixes #939
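A minimal sketch of the unified reviewer check, assuming a `delegates` collection and a `parent_id` link on each PR (both names are assumptions, not the actual field names):

```python
def is_reviewer(pr, user) -> bool:
    # Hypothetical sketch: on a forward port, a delegation granted on any
    # ancestor PR should count, so walk the chain instead of keeping
    # separate "source_reviewer" / "parent_reviewer" / "is_reviewer" checks.
    node = pr
    while node:
        if user in node.delegates:
            return True
        node = node.parent_id
    return False
```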
The FW reminder is useful to remind people of the outstanding forward ports they need to process, as taking too long can be an issue. It is, however, not useful if the developer has already done everything, the PR is ready and unblocked, but the mergebot has fallen behind and has a hard time catching up. In that case there is nothing for the developer to do, so pinging them is not productive; it's only frustrating. Fixes #930
Reminding users that `r-` on a forward port only unreviews *that* forward port is useful, as `r+;r-` is not a no-op (all preceding siblings remain reviewed). However, the warning is useless if none of the siblings are approved or already merged, so avoid sending it in that case. Fixes #934
Rather than only setting `staging_end` on status change, set it whenever the staging gets deactivated. This way cancelling a staging (whether explicitly or via a PR update) also ends it, as does a staging timeout, etc., rather than keeping the counter running. Fixes #931
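A rough sketch of the idea, assuming an Odoo model override on the stagings model (the model name and field names are assumptions):

```python
from odoo import fields, models

class Stagings(models.Model):
    _inherit = "runbot_merge.stagings"  # assumed model name

    def write(self, vals):
        # Stamp staging_end whenever the staging gets deactivated, whatever
        # the cause: status change, explicit cancel, PR update, or timeout.
        if vals.get("active") is False and "staging_end" not in vals:
            vals["staging_end"] = fields.Datetime.now()
        return super().write(vals)
```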
This is an approximation under the assumption that stored computes update the `write_date`, and that there's not much else that will be computed on a batch. Eventually it might be a good idea for this to be a proper field, computed alongside the unblocking of the batch. Fixes #932
Today (or really a month ago) I learned: when giving git a symbolic ref (e.g. a ref name), if it's ambiguous then:

1. if `$GIT_DIR/$name` exists, that is what you mean (this is usually useful only for `HEAD`, `FETCH_HEAD`, `ORIG_HEAD`, `MERGE_HEAD`, `REBASE_HEAD`, `REVERT_HEAD`, `CHERRY_PICK_HEAD`, `BISECT_HEAD` and `AUTO_MERGE`);
2. otherwise, `refs/$name` if it exists;
3. otherwise, `refs/tags/$name` if it exists;
4. otherwise, `refs/heads/$name` if it exists;
5. otherwise, `refs/remotes/$name` if it exists;
6. otherwise, `refs/remotes/$name/HEAD` if it exists.

This means that if a tag and a branch have the same name and only the name is provided (not the full ref), git will select the tag, which gets very confusing for the mergebot as it then tries to rebase onto the tag (which, because that would not be fun otherwise, was not even on the branch of the same name). Fix by providing full refs to `rev-parse` when trying to retrieve the head of the target branches. And as a defense-in-depth opportunity, also exclude tags when fetching refs by spec: apparently fetching a specific commit does not trigger the retrieval of tags, but any sort of spec will see the tags come along for the ride even if the tags are not in any way under the fetched ref, e.g. `refs/heads/*` will very much retrieve the tags despite them being located at `refs/tags/*`. Fixes #922
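A minimal sketch of both fixes, using plain subprocess calls (the helper names and the remote layout are placeholders, not the mergebot's actual code):

```python
import subprocess

def branch_head(repo_dir: str, branch: str) -> str:
    # Use the fully qualified ref so an identically-named tag cannot shadow
    # the branch during rev-parse.
    return subprocess.check_output(
        ["git", "-C", repo_dir, "rev-parse", f"refs/heads/{branch}"],
        text=True,
    ).strip()

def fetch_heads(repo_dir: str, remote: str) -> None:
    # --no-tags keeps refs/tags/* from coming along for the ride when
    # fetching a wide refspec such as refs/heads/*.
    subprocess.check_call([
        "git", "-C", repo_dir, "fetch", "--no-tags", remote,
        f"refs/heads/*:refs/remotes/{remote}/*",
    ])
```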
Apparently, when I added these to trigger the corresponding cron, the lack of decorator caused them to not work at all, at least when invoked from the interface. Consequently, I notably wasn't able to force a forward port by creating one such task when trying to work around #953.
Logging PRs by id is unusual, unreadable, and inconvenient.
- Rather than enumerate states, forward-porting should just check whether the statuses are successful on a PR.
- For the same consistency reasons explained in f97502e, `skipchecks` should force the status of a PR to `success`: it is very odd that a PR would be ready without being validated...
The UX around the split of limit and forward-port policy (and especially moving "don't forward port" into the policy) was not really considered and caused confusion for both me and devs: after having disabled forward porting, updating the limit would not restore it, and there would be no indication of the issue. This caused odoo/enterprise#68916 not to be forward-ported at merge (despite looking for all the world like it should be), and while updating the limit post-merge did force a forward port, that inconsistency was just as jarring (also not helped by being unable to create a fw batch via the backend UI because reasons, see 10ca096). Fix this along most axes:

- Notify the user and reset the fw policy if the limit is updated while `fw=no`.
- Trigger a forward port if the fw policy is updated (from `no`) on a merged PR, currently only sources.
- Add a check & warning to the limit update process so it does *not* force a port (though maybe it should, under the assumption that we're updating the limit anyway? We'll see).

Fixes #953
And unconditionally unstage when the HEAD of a PR is synchronised. While a rebuild on a PR which was previously staged can be a false positive (e.g. because it hit a non-deterministic test the second time around), it can also be legitimate, e.g. an auto-rebase of an old PR to check it out. In that case the PR should be unstaged. Furthermore, as soon as the PR gets rebuilt it goes back into `approved` (because the status goes to pending and currently there's no great way to suppress that in the rebuild case without also fucking it up for the sync case). This would cause a sync on the PR to be missed (in that it would not unstage the PR), which is broken. Fix that by not checking the state of the PR beforehand: it seems to be an unnecessary remnant of older code, and not really an optimisation (or at least one not worth bothering with, especially as we then proceed to perform a full PR validation anyway). Fixes #950
- Switch to just the `default` ci.
- Autouse fixture to init the master branch.
- Switch to `make_commits`.
- Merge `test_reopen_update` and `test_update_closed_revalidate` into `test_update_closed`: the former did basically nothing useful and the latter could easily be folded into `test_update_closed` by just validating the new commit.
By updating the staging timeout every time we run `_compute_state` while there is still a `pending` status, we'd actually bump the timeout *on every successful CI* except for the last one, which was never the intention and can add an hour or two to the mergebot-side timeout. Instead, add an `updated_at` attribute to statuses (name taken from the webhook payload, even though we don't use that), read it when we see `pending` required statuses, and update the staging timeout based on it if necessary. That way, as long as we don't get *new* pending statuses, the timeout doesn't get updated. Fixes #952
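A sketch of the intended behaviour, assuming statuses are available as dicts with `state` and `updated_at` keys and the staging exposes a `timeout_limit` datetime (all assumed names, and the grace period is a placeholder):

```python
from datetime import timedelta

def maybe_bump_timeout(staging, statuses, grace=timedelta(hours=2)):
    # Only bump the staging timeout when a *new* pending status shows up,
    # keyed off the status's own updated_at, not on every recomputation.
    pending = [
        s["updated_at"] for s in statuses
        if s["state"] == "pending" and s.get("updated_at")
    ]
    if not pending:
        return
    candidate = max(pending) + grace
    if candidate > staging.timeout_limit:
        staging.timeout_limit = candidate
```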
Add warnings when trying to send comments / commands to PRs targeting inactive branches. This was missing, leading to confusion, as one warning is clearly not enough. Fixes #941
Given branch A, and branch B forked from it: if B removes a file which a PR to A later modifies, on forward port git generates a modify/delete conflict (as in, one side modifies a file which the other deleted). So far so good, except that while it does notify the caller of the issue, the modified file is just dumped as-is into the working copy (or commit), which essentially resurrects it. This is an issue, *especially* as the file is already part of a commit (rather than just an unmerged local file): if the developer fixes the conflict "in place" rather than re-doing the forward port from scratch, it's easy to miss the reincarnated file (and possibly the changes that were part of it too), which at best leaves parasitic dead code in the working copy. There is also no easy way for the runbot to check for it, as adding unimported standalone files, while rare, is not unknown, e.g. utility scripts (to say nothing of JS, where we can't really track the usages / imports at all). To resolve this issue, during conflict generation post-process modify/delete conflicts to insert artificial conflict markers: the file should then be syntactically invalid, so linters / checkers should complain, and the minimal check has a step looking for conflict markers which should fire and prevent merging the PR. Fixes #896
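A sketch of the post-processing step for a modify/delete conflict (the function name and side labels are placeholders):

```python
def mark_modify_delete(path: str, ours: str, theirs: str) -> None:
    # One side deleted the file, the other modified it: wrap the surviving
    # content in artificial conflict markers so the file is syntactically
    # invalid and the conflict-marker check blocks the merge until resolved.
    with open(path, encoding="utf-8") as f:
        modified = f.read()
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"<<<<<<< {ours} (file deleted)\n")
        f.write("=======\n")
        f.write(modified if modified.endswith("\n") else modified + "\n")
        f.write(f">>>>>>> {theirs} (file modified)\n")
```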
This is a dumb false positive, kill it.
- Fix staging reasons containing escaped quotes (would render as ` ` to the end user).
- Remove extra spacing around PR title @title/popovers.
- Simplify a few view conditionals through judicious use of `t-elif` and nesting.
- Make `staging_end` non-computed as it's not computed anymore, just set if and when the staging gets disabled (146564a).
Unstaged changes can be useful or necessary for some tasks, e.g. an absolute emergency (where even faking the state of a staging is not really desirable, if that's even possible anymore), or changes which are so broad they're difficult to stage (e.g. t10s updates). Add a new object which serves as a queue of patches to direct-apply, with support for either text patches (udiff style, out of git show or format-patch) or commits to cherry-pick. In the former case, the part of the show / format-patch output before the diff itself is used for the commit metadata (author, committer, dates, message), whereas in the commit case the commit itself is reused as-is. Applied patches are simply disabled, for traceability. Fixes #926
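For the text-patch case, a hedged sketch of splitting the `format-patch` / `show` output into metadata and diff, then applying and committing it (the helper name and the split heuristic are assumptions, not the actual implementation):

```python
import subprocess
from email.parser import Parser

def apply_text_patch(repo_dir: str, patch_text: str) -> None:
    # The part before the diff is RFC 2822-ish, so the commit metadata can
    # be pulled out with the email parser; the diff itself is applied and
    # committed with the recovered author, date and message.
    headers = Parser().parsestr(patch_text)
    author, date, subject = headers["From"], headers["Date"], headers["Subject"]
    diff = patch_text[patch_text.index("diff --git"):]
    subprocess.run(["git", "-C", repo_dir, "apply", "--index"],
                   input=diff, text=True, check=True)
    subprocess.run(["git", "-C", repo_dir, "commit",
                    "--author", author, "--date", date, "-m", subject],
                   check=True)
```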
And use it in test_trivial_flow instead of the kinda half-assed manual version.
Because of the false negatives due to github's reordering of events on retargeting, blocking on merge methods can be rather frustrating for the user, as what's happening and how to solve it isn't clear in that case. Keep the warnings and all, but remove the blocking factor: if a PR doesn't have a merge method and is not single-commit, just skip it on staging. This way, PRs which are actually single-commit will stage fine even if the mergebot thinks they shouldn't be. Fixes #957
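Roughly, the staging filter becomes something like this (the attribute names are assumptions):

```python
def stageable(pr) -> bool:
    # Instead of blocking the PR, leave it out of the staging until a merge
    # method is set; single-commit PRs don't need one.
    return bool(pr.merge_method) or pr.commit_count == 1
```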
This commit prevents people acting like cowboys from breaking the whole system by duplicating a Dockerfile without disabling the always_pull field.
When an image is built for the first time and the build fails, an ImageNotFound exception is raised when trying to push the image. With this commit, a warning is emitted instead of crashing in such a case.
If a PR is closed but part of an ongoing batch, changes in the status of the batch might still be reflected on the PR:

- if the PR is closed and the batch gets staged, the PR shows up as being staged;
- if the PR is merged then the batch gets merged, the PR shows up as merged.

Fixes #914. Also remove the unused `_tagstate` helper property.
The mergebot tests have always been pretty gentle on system load, which is nice; however, it's only by looking at the list of longest tests that I realised / remembered that the hook wait duration is 10 seconds for the benefit of github, which doesn't really matter locally. This means that on interaction- / cron-heavy tests, the test might only be using on the order of 10% CPU, which is a waste of time. To be fair this is easily compensated by increasing the concurrency of the test suite (e.g. from 16 to 32 when I switched machine), but it seems just as if not more sensible to lower the webhook wait delay to something more reasonable. 1s seems to be a good fit here: on my new computer at n=16 it leads to the test suite running in 15mn at 600% CPU (which is pretty good on a 6/12 CPU, as it loads the system heavily but doesn't completely bog it down). Reducing it to 0.5s, the test suite takes the same duration but CPU load increases to 770% and errors creep up, likely a mix of concurrency issues in the DB and dummy-central sending webhooks too slowly as we compete with it for CPU resources (it could actually make sense to restrict the number of threads tokio can use). Reducing concurrency could make this work better, but I think at this point we're in a pretty good state; it's even somewhat reasonable to run the test suite sequentially (taking about 1h10 but being functionally invisible in terms of load).
Since b45ecf0, forwardport batches which fail have a delay set in order to avoid spamming. However, that delay was not displayed anywhere, which made things confusing as the batch would not get run even after creating new triggers. Show the delay if it's set (to a value later than now), as a relative delta for clarity (as normally the delay is in minutes, a full-blown date is difficult to read / apprehend), and allow viewing and setting it in the form view. Fixes #982
Add a few views / view extensions set as custom on the production mergebot which I never remembered to implement in the actual source.
If a branch is created via SQL directly, one might forget to set its `write_date`, leading the computation of the `last_modified` to be *really* unhappy as it's asked to compare a bunch of datetime objects to `False` in order to compute the `max`. Substitute missing `write_date` with `datetime.min` so they are extremely unlikely to influence the computation until and unless the branch gets updated.
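The substitution amounts to something like this (the helper is illustrative, not the actual compute method):

```python
from datetime import datetime

def last_modified(records) -> datetime:
    # Branches created through raw SQL may lack a write_date, and max()
    # cannot compare datetime to False, so fall back to datetime.min.
    return max(r.write_date or datetime.min for r in records)
```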
`get_content` round-trips the text part through `ascii` with `error=replace`, so if the input is not ascii it screws up tremendously, which leads to either failing to apply patches (the more likely situation) or corrupting the patches. `get_payload`, if called without `decode`, pretty much just returns the payload unless it needs a decoding pass (e.g. because it contains raw surrogates, but that should not be an issue for us). So this is really what we want. While at it, increase `patch`'s verbosity in case it can give us more info.
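An illustration of the difference with the standard library email API (the file name is a placeholder, and the exact decoding behaviour depends on the message's headers):

```python
import email
from email import policy

raw = open("patch.eml", encoding="utf-8").read()  # placeholder input
msg = email.message_from_string(raw, policy=policy.default)

# get_payload() without decode=True hands the payload back essentially
# as-is, which is what patch application needs.
body = msg.get_payload()
# body = msg.get_content()  # applies a decoding pass that, in the case
#                           # described above, mangled non-ASCII patches
```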
Got a bunch of updates since the initial attempt to migrate the mergebot before the odoo days.
Update a bunch of `create` overrides to work in batch. Also fix a few `super()` calls that were unnecessarily in legacy style.
Got a bunch of updates since the initial attempt to migrate the mergebot before the odoo days.
Not sure why it didn't break tests in 16...
`read_tracking_value` likely never worked correctly in both branches, but worked in 15.0 because the failures to do anything useful happened to end in the right case?
Because of tracking, `test_patch_acl` needs access to message subtypes during patch creation. If the user *only* has `runbot_merge.group_patcher` or `runbot_merge.group_admin` they don't have any of the "core" user groups (public, portal, internal) and thus don't have access to mail subtypes. Fix that by having the runbot_merge groups imply being an internal user.
Apparently in odoo 17.0 tracking values are always ordered by field name.
Before this, if creating the DB failed, the next worker would find themselves with an empty `template-` file, so they would take the path to *create* a template database, which would fail when trying to `mkdir` the `shared-` directory as the directory would already exist. The problem with this is that the module would likely immediately fail and take down their worker, triggering a test suite failure for themselves and likely hiding the *true* failure cause (not sure why the originally failed worker isn't the first one to trigger a failure, but there you go). By skipping the tests instead, we provide a lot more opportunity for the "true" failure to be revealed.
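A sketch of the skip, assuming a session-scoped pytest fixture guards the shared template (the fixture name and marker path are assumptions):

```python
import pytest

@pytest.fixture(scope="session")
def template_db(tmp_path_factory):
    # If a previous worker failed while creating the template database, it
    # leaves an empty `template-` marker behind: skip instead of crashing on
    # the pre-existing `shared-` directory, so the real failure stays visible.
    marker = tmp_path_factory.getbasetemp().parent / "template-db"
    if marker.exists() and not marker.read_text().strip():
        pytest.skip("template database creation failed in another worker")
    # ... normal template creation / reuse would follow here ...
```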
Can't be arsed to rebase and re-resolve the conflicts, so remerge instead.
Co-authored-by: William Braeckman (wbr) <[email protected]>
Xavier-Do force-pushed the 18.0-fix-xdo branch from b5f8a9e to 17e36a4 on November 20, 2024 at 14:15.