cicd #30
This run and associated checks have been archived and are scheduled for deletion.
Annotations
6 errors (three distinct failures, each reported twice)
build
```
Path does not exist: D:\a\1\s\env.yml
Finishing: Publish Windows environment YAML to Azure
```

The rest of this annotation is a truncated dump of ibis-project/ibis issue data; the recoverable records are below.

First record (issue number not captured in this fragment; author matthewmturner; opened 2020-10-26, closed 2021-12-29). The log excerpt above is the tail of a comment by matthewmturner, to which datapythonista replied: "Thanks @matthewmturner, I think this was fixed in #2486. Can you rebase the PR where you found this, and let me know if it still happens (and you can close if it doesn't)."

Issue #2490: "bug: fix set_value behavior for scope when timecontext is None"
https://github.com/ibis-project/ibis/issues/2490 (label: bug; closed as completed; opened 2020-10-22 by LeeTZ, closed 2022-04-27)

When `timecontext` is `None`, the `set_value` method of the scope does not override the stored value. `set_value` currently reads:

```python
def set_value(
    self, op: Node, timecontext: Optional[TimeContext], value: Any
) -> None:
    if self.get_value(op, timecontext) is None:
        self._items[op] = ScopeItem(timecontext, value)
```

When `timecontext` is `None` and `op` is already in the scope, `self.get_value(op, timecontext)` returns the stored value, which is usually not `None`, so `set_value` never updates the scope. The correct behavior when `timecontext` is `None` is to overwrite whatever is already in the scope (a minimal sketch of this rule follows these records).

Comments: cpcloud: "@LeeTZ We'd happily accept a PR! Do you have a test case for this that you can paste here?" and later "Closing. Please reopen if it's still a problem."

Issue #2489: "Conda solver taking too long"
https://github.com/ibis-project/ibis/issues/2489 (label: ci; closed as completed; opened 2020-10-22 by datapythonista, closed 2021-03-05)

xref: https://github.com/ibis-project/ibis/pull/2476#issuecomment-714429790

The conda package build is taking too long (two hours in some cases). That build installs the dependencies of all the backends, which seems to be the cause of the problem. `pymapd` has historically been the main source of conda solving time, so #2356 (moving omnisci to a separate repo) should help. For that particular build, #2448 (independent conda packages for each backend) should also fix the problem. As a more immediate solution, #2486 (bumping the `pymapd` version) seems to decrease the solver time significantly.

Comments: matthewmturner: "@datapythonista @jreback tying back to the issues on #2435 with CI / conda. I can make a PR for this if you'd like. I assume we would be removing some of the run requirements from `ci/recipe/meta.yaml`. If that's the case, do you already have in mind what should be removed?"; jreback: "yes, remove spark, omnisci, impala, bigquery"; datapythonista: "Can we release next week with all backends for the last time, and soon after release with the entry points and each backend separately (with omnisci in a separate repo, and maybe others)? I think having separate conda packages that do and don't work with entry points will make things complicated."; and finally datapythonista: "Superseded by #2670".

Issue #2488: "BUG: Add support for pandas new StringDtype"
https://github.com/ibis-project/ibis/issues/2488 (label: bug; the rest of this record is truncated in the fragment)
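A minimal runnable sketch of the overwrite rule that #2490 asks for. The `Scope` and `ScopeItem` classes here are toy stand-ins, not ibis's actual implementation:

```python
from typing import Any, NamedTuple, Optional


class ScopeItem(NamedTuple):
    timecontext: Optional[tuple]
    value: Any


class Scope:
    """Toy stand-in for ibis's Scope, used only to illustrate the proposed rule."""

    def __init__(self) -> None:
        self._items: dict = {}

    def get_value(self, op: Any, timecontext: Optional[tuple] = None) -> Any:
        item = self._items.get(op)
        return None if item is None else item.value

    def set_value(self, op: Any, timecontext: Optional[tuple], value: Any) -> None:
        # Proposed behavior: a None timecontext always overwrites; otherwise
        # only set a value when nothing is cached for this op yet.
        if timecontext is None or self.get_value(op, timecontext) is None:
            self._items[op] = ScopeItem(timecontext, value)


scope = Scope()
scope.set_value("op1", None, 1)
scope.set_value("op1", None, 2)  # now overwrites, because timecontext is None
assert scope.get_value("op1") == 2
```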
|
build
```
Bash exited with code '2'.
##[section]Finishing: Build docs
```

The rest of this annotation is another fragment of the same issue dump; the recoverable records are below.

Docs-build issue (issue number not captured in this fragment; author xmnlab; opened 2019-11-09, closed 2019-11-16). The log excerpt above is the tail of its body. Comments:

- scottcode: "I'm trying to reproduce this locally (still in docker) but am getting a Timeout error (see [this gist](https://gist.github.com/scottcode/7f3354b6fa4085e986a671964c2d2de5) for the full error output): `nbsphinx.NotebookError: TimeoutError in notebooks/tutorial/8-More-Analytics-Helpers.ipynb: Cell execution timed out`"
- scottcode: "I increased the timeout limit to 60 seconds using the configuration described in the [nbsphinx docs](https://nbsphinx.readthedocs.io/en/0.3.5/timeout.html). It got further, but then I ran into `HiveServer2Error: Failed after retrying 3 times` (see the [full error](https://gist.github.com/scottcode/9c379c2093354fc18d22e0be3757c5ae)). I'm not sure yet what causes the `HiveServer2Error` or how to get past it. Given the original error message from the build pipeline about `semantic_version`, I wonder, when I get that far, whether the problem will be that the `semantic_version` package is being given a versioneer dirty version string and doesn't know how to handle it. Just a hypothesis until I can replicate the error locally."
- scottcode: "The `HiveServer2Error` is occurring in `notebooks/tutorial/8-More-Analytics-Helpers.ipynb`."
- scottcode: "Tracking the local `HiveServer2Error`: the exception it gets each time is `thriftpy2.transport.TTransportException: TTransportException(type=4, message='TSocket read 0 bytes')`."
- xmnlab: "@scottcode I am trying to debug that here too. I will post any update here."
- xmnlab: "Related issue: https://github.com/bitprophet/releases/issues/84"

A sketch of the nbsphinx timeout setting mentioned above follows these records.

Issue #2026: "ENH: Add VIEW functionality to sqlalchemy-based backends"
https://github.com/ibis-project/ibis/issues/2026 (labels: feature, sqlalchemy; closed as completed; opened and closed 2019-11-08 by scottcode)

I tried using `client.table(name_of_view)` to treat an existing view like a table in Postgres, but I got a `NoSuchTableError`. It appears that creating new views, or even referring to existing ones, is not supported in `sqlalchemy`-based backends, although it is supported in Impala. It would be nice to be able to refer to as well as create views within `ibis`. Interface ideas:

For using/referring to existing views:
* `client.view(name_of_view)`
* or just make `client.table(name_of_view)` work with views

For creating a new view:
* `client.create_view(expression)`

Comments:

- ian-r-rose: "I fairly regularly use ibis to connect to existing views with the PostgreSQL backend (both materialized and regular). So at least in some circumstances it works as is with `client.table`."
- scottcode: "Ack! I think I know what happened. The view was created after instantiating the client, so its cache of the schema info might have been old. I haven't verified this, just a hunch. Easy refresh/cache invalidation like suggested in #1768 would be helpful in such a case."
- scottcode: "Yeah, I was able to `client.table()` on the view using a newly-created client connection. Looks like it was something about ibis's cache of the DB state. I'll close this issue. Thanks @ian-r-rose for chiming in."

Issue #2025: "Documentation improvements" (the rest of this record, including its full URL, is truncated in the fragment)
|
build
```
Bash exited with code '1'.
```

The rest of this annotation is another fragment of the issue dump; the recoverable records are below.

First record (issue number not captured in this fragment; author xmnlab; no comments; opened 2019-08-21, closed 2019-08-22). The log excerpt above is the tail of its body.

Issue #1934: "OmniSci Histogram not working"
https://github.com/ibis-project/ibis/issues/1934 (no labels; closed as completed)

I am trying to do a histogram, to replicate [this geospatial analysis](https://docs.omnisci.com/latest/6_VegaAtaGlance.html):

```python
import ibis

conn = ibis.omniscidb.connect(
    host='metis.mapd.com', user='mapd', password='HyperInteractive',
    port=443, database='mapd', protocol='https'
)
t = conn.table("tweets_nov_feb")
x, y = t.goog_x, t.goog_y

WIDTH = 385
HEIGHT = 564
X_DOMAIN = [-3650484.1235206556, 7413325.514451755]
Y_DOMAIN = [-5778161.9183506705, 10471808.487466192]

t[(X_DOMAIN[0] < x) & (x < X_DOMAIN[1])].group_by(
    t.goog_x.histogram(WIDTH).name("x_bin")
).aggregate(t.count()).execute()
```

However it fails with:

```
Exception: Exception: Inconsistent return type for FLOOR: SELECT floor((t0."goog_x" - (t1."min_1a6124" - 1e-13)) / ((t1."max_1a6124" - (t1."min_1a6124" - 1e-13)) / 384)) AS x_bin,
  count(*) AS "count"
FROM tweets_nov_feb t0
  JOIN (
    SELECT min("goog_x") AS min_1a6124, max("goog_x") AS max_1a6124
    FROM tweets_nov_feb
  ) t1 ON TRUE
WHERE (t0."goog_x" > -3650484.1235206556) AND
  (t0."goog_x" < 7413325.514451755)
GROUP BY x_bin
```

Comments (all from saulshanabrook):

"This is the code that histogram generates: https://github.com/ibis-project/ibis/blob/b3d0400d092fcd93bc8da6bfe214afd9b88d4a29/ibis/sql/compiler.py#L218-L246"

"Instead of subtracting `EPS` can we just cast to a float? https://github.com/ibis-project/ibis/blob/b3d0400d092fcd93bc8da6bfe214afd9b88d4a29/ibis/sql/compiler.py#L235"

"OK it looks like that isn't it. If I specify the base, the EPS isn't used, but I get the same error:"

```python
t[(X_DOMAIN[0] < x) & (x < X_DOMAIN[1])].group_by(
    t.goog_x.histogram(WIDTH, base=X_DOMAIN[0]).name("x_bin")
).aggregate(t.count()).execute()
```

```
Exception: Exception: Inconsistent return type for FLOOR: SELECT floor((t0."goog_x" - -3650484.1235206556) / ((t1."max_e7eff2" - -3650484.1235206556) / 384)) AS x_bin,
  count(*) AS "count"
FROM tweets_nov_feb t0
  JOIN (
    SELECT min("goog_x") AS min_e7eff2, max("goog_x") AS max_e7eff2
    FROM tweets_nov_feb
  ) t1 ON TRUE
WHERE (t0."goog_x" > -3650484.1235206556) AND
  (t0."goog_x" < 7413325.514451755)
GROUP BY x_bin
```

"It looks like this issue is relevant: https://github.com/omnisci/omniscidb/issues/305"

"Ah I was able to get it working:"

```python
t[
    (X_DOMAIN[0] < x) & (x < X_DOMAIN[1])
].group_by(
    t.goog_x.histogram(
        WIDTH,
        base=ibis.literal(X_DOMAIN[0], 'float64').cast('float32')
    ).name("x_bin")
).aggregate(
    t.count()
).execute()
```

"The key is to get the constant to cast to a `FLOAT`, which you have to do `ibis.literal(X_DOMAIN[0], 'float64').cast('float32')` to get. Maybe it would be good to do this auto cast somehow for omnisci when it is calling things on floor?"

"Now I am getting another failure when I try to groupby both axis:" (the snippet is cut off at this point in the fragment; a hedged reconstruction follows these records)
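The final comment's snippet is not recoverable from this fragment. Purely as a hedged illustration, the two-axis grouping presumably applied the same float32-cast workaround to both `goog_x` and `goog_y`, roughly along these lines:

```python
# Hypothetical reconstruction of the truncated two-axis group_by; the exact
# original code is not recoverable from this fragment.
t[
    (X_DOMAIN[0] < x) & (x < X_DOMAIN[1]) &
    (Y_DOMAIN[0] < y) & (y < Y_DOMAIN[1])
].group_by([
    t.goog_x.histogram(
        WIDTH, base=ibis.literal(X_DOMAIN[0], 'float64').cast('float32')
    ).name("x_bin"),
    t.goog_y.histogram(
        HEIGHT, base=ibis.literal(Y_DOMAIN[0], 'float64').cast('float32')
    ).name("y_bin"),
]).aggregate(t.count()).execute()
```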
|