
bug?: autosharding getting enabled for clusters != 1 #2898

Closed · richard-ramos opened this issue Jul 11, 2024 · 5 comments
Labels: bug (Something isn't working), effort/hours (Estimated to be completed in a few hours)

richard-ramos (Member) commented Jul 11, 2024:

Problem

This might be intended behavior rather than a bug, but just in case: while testing store v3 interop between go-waku and nwaku, I noticed the following:

  1. I started nwaku with the following flags:
--relay --store --nodekey=1122334455667788990011223344556677889900112233445566778899001122 --cluster-id=99
  2. The logs displayed the following:
nwaku-1  | INF 2024-07-11 14:19:59.661+00:00 Created WakuMetadata protocol              topics="waku node" tid=1 file=protocol.nim:117 clusterId=99 shards="{3, 4, 2, 6, 5, 7, 0, 1}"
nwaku-1  | INF 2024-07-11 14:19:59.661+00:00 mounting sharding                          topics="waku node" tid=1 file=waku_node.nim:215 clusterId=99 shardCount=0

Notice that shards 0 to 7 are being added automatically.

Please clarify whether this is the intended behavior. The go-waku tests use content topics with an arbitrary format, so if autosharding is enabled by default for all cluster IDs, we may need changes to ensure the correct content topic format is used, and also to ensure the content topic format used by status-go is supported.
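For context, autosharding deterministically maps a content topic to a shard by hashing stable parts of the topic. The sketch below is a simplified Go illustration of the idea, not the exact nwaku/go-waku implementation: the precise hash input and reduction are assumptions here.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"strings"
)

// shardFor maps a content topic to a shard index by hashing the
// application and version fields and reducing modulo the shard count.
// Illustrative only; the real autosharding algorithm may differ in
// its hash input and truncation.
func shardFor(contentTopic string, shardCount uint64) (uint64, error) {
	// Expected structured format: /application/version/name/encoding
	parts := strings.Split(strings.TrimPrefix(contentTopic, "/"), "/")
	if len(parts) != 4 {
		return 0, fmt.Errorf("malformed content topic: %q", contentTopic)
	}
	h := sha256.Sum256([]byte(parts[0] + "/" + parts[1]))
	return binary.BigEndian.Uint64(h[:8]) % shardCount, nil
}

func main() {
	// A topic with a random, non-structured format would fail the
	// parse step above, which is the concern raised for the go-waku tests.
	shard, err := shardFor("/myapp/1/chat/proto", 8)
	if err != nil {
		panic(err)
	}
	fmt.Println("shard:", shard) // deterministic value in [0, 8)
}
```

This is why the content topic format matters: a topic that does not parse into the expected fields cannot be assigned a shard deterministically.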

@richard-ramos richard-ramos added the bug Something isn't working label Jul 11, 2024
@richard-ramos richard-ramos changed the title bug: autosharding getting enabled for clusters != 1 bug?: autosharding getting enabled for clusters != 1 Jul 11, 2024
fryorcraken (Collaborator) commented:

cc @gabrielmer who is doing some refactoring around that.

I am also aiming to improve this logic with #2859

chaitanyaprem (Contributor) commented:

I am wondering if this is happening because of the default config used when shards are not specified:

shards* {.
  desc:
    "Shards index to subscribe to [0..MAX_SHARDS-1]. Argument may be repeated.",
  defaultValue:
    @[
      uint16(0), uint16(1), uint16(2), uint16(3),
      uint16(4), uint16(5), uint16(6), uint16(7)
    ],
  name: "shard"
.}: seq[uint16]
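If that default is indeed the cause, it can be ruled out by passing shards explicitly, since the flag may be repeated per the config above. For example, to subscribe only to shards 0 and 1 (binary name assumed):

```shell
# --shard comes from the config definition above; repeat it once per shard.
./wakunode2 --relay --cluster-id=99 --shard=0 --shard=1
```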

gabrielmer (Contributor) commented:

We did enable autosharding for any cluster in #2505. It works by activating autosharding with the number of shards passed via the --pubsub-topic parameter.

The idea is that my PR deprecating the --pubsub-topic parameter will introduce a new flag, named something like --network-shards, with which users in clusters other than TWN set how many shards exist in the network; that number will then be used when running autosharding's algorithm.
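Under that proposal, a node in a custom cluster might be started as follows. Note this is hypothetical: --network-shards is only a suggested flag name from the comment above, not a released option.

```shell
# Hypothetical invocation: --network-shards is a proposed flag, not yet in any release.
# It would tell autosharding how many shards the custom cluster uses.
./wakunode2 --relay --cluster-id=99 --network-shards=8
```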

@gabrielmer gabrielmer self-assigned this Jul 12, 2024
@gabrielmer gabrielmer moved this to To Do in Waku Jul 22, 2024
@gabrielmer gabrielmer added the effort/hours Estimated to be completed in a few hours label Jul 22, 2024
gabrielmer (Contributor) commented:

@richard-ramos WDYT? Given that autosharding is enabled by default in every cluster, is there any bug here? I'm not sure whether to close this issue or if there's some action item.

richard-ramos (Member, Author) commented:

Let's close this.
I modified go-waku so that it mounts the metadata protocol regardless of whether clusterID is 0, to match nwaku's behavior.

@github-project-automation github-project-automation bot moved this from To Do to Done in Waku Jul 24, 2024