Hi, first of all thanks for the work done here, great Memcached client.
I would like to contribute a new feature to the repository called Autobatching. Before starting work on it, I would like to gather some feedback from you to understand how feasible it would be to land this feature, assuming the reviewers eventually find the PR acceptable.
Rationale
Autobatching is based on the idea of autopipelining for Redis [1], which tries to minimize TCP overhead by transparently sending multiple keys in a single command.
Following the same idea, Autobatching for a Memcached client implemented on a reactor-loop paradigm leverages the get_many support provided by Memcached: individual get (or gets) operations are piled up and all of them are sent together in the next loop iteration.
This has already been implemented in the Emcache [2] library, of which I am the author. Emcache is built on Asyncio, a framework that implements a reactor pattern similar to what Node.js provides. Laboratory benchmarks show that autobatching can perform 2x better than the traditional usage of the get and gets methods.
What would be implemented
The idea is to follow the current Emcache implementation, which has the following characteristics:
Get and gets operations are piled up during the same event loop iteration.
At the next event loop iteration, batches are sent to the appropriate nodes.
Oversized batches are split into multiple smaller batches to cap the size of any single batch.
Autobatching is enabled at client instantiation time.
The API remains unchanged, but once autobatching is enabled, get and gets are routed to the autobatching code path.
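To make the proposal concrete, here is a minimal asyncio sketch of the mechanism described above: get() calls issued during the same event loop iteration are piled up and flushed as one multi-get on the next iteration, with oversized batches split up. All names here are hypothetical illustrations, not the actual Emcache or client internals; `backend_get_many` stands in for whatever multi-get code path the client already has.

```python
import asyncio


class AutobatchingClient:
    """Sketch of autobatching on top of a hypothetical multi-get backend.

    Individual get() calls made during one event loop iteration are
    coalesced into a single backend_get_many() call that is flushed on
    the next iteration via loop.call_soon().
    """

    def __init__(self, backend_get_many, max_batch_size=32):
        self._backend_get_many = backend_get_many
        self._max_batch_size = max_batch_size   # cap on a single batch
        self._pending = {}                      # key -> list of waiting futures
        self._flush_scheduled = False

    def get(self, key):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self._pending.setdefault(key, []).append(fut)
        if not self._flush_scheduled:
            self._flush_scheduled = True
            # call_soon runs on the next loop iteration, after the current
            # iteration has piled up every get() issued so far
            loop.call_soon(self._flush)
        return fut

    def _flush(self):
        self._flush_scheduled = False
        pending, self._pending = self._pending, {}
        keys = list(pending)
        # split the pile into batches no larger than max_batch_size
        for i in range(0, len(keys), self._max_batch_size):
            batch = keys[i:i + self._max_batch_size]
            task = asyncio.ensure_future(self._backend_get_many(batch))
            task.add_done_callback(
                lambda t, batch=batch: self._resolve(batch, pending, t))

    def _resolve(self, batch, pending, task):
        results = task.result()
        for key in batch:
            for fut in pending[key]:
                fut.set_result(results.get(key))


async def demo():
    calls = []

    async def fake_get_many(keys):
        # stand-in for the real network round trip; records each batch
        calls.append(list(keys))
        return {k: k.upper() for k in keys}

    client = AutobatchingClient(fake_get_many)
    # both gets are issued in the same loop iteration...
    values = await asyncio.gather(client.get("a"), client.get("b"))
    # ...so only one backend_get_many call was made
    return values, calls
```

In this sketch the user-facing call stays `await client.get(key)`, matching the "API remains unchanged" point: the routing to the batched code path happens internally once autobatching is enabled.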
WDYT?
[1] https://github.com/mcollina/ioredis-auto-pipeline
[2] https://github.com/pfreixes/emcache