Problem
In games like CTF, unlike building-oriented Minetest servers, everything happens in real time, so moderation must also happen with minimal delay. The longer it takes for appropriate action to be taken, the more frustrated the players online become, as cheaters, spammers, and other rule violators kill the fun of the game.
Current approach
In the current approach, there is a group of game moderators who can ban and kick players and revoke privileges such as shout or interact. There is also a secondary group called "Guardians" who can only kick players and whose reports are given higher priority.
However, this approach has pitfalls. Keeping the game healthy 24/7 requires enough Guardians to cover every hour of every day of the week. Finding people who together cover a full day is already neither easy nor simple, and covering the whole week is even harder. Another disadvantage is that real-life events distract people from moderating. Finally, we need a dedicated group of people to find trustworthy players and grant them the Guardian role, and the entire process is manual rather than automatic, which makes it even worse.
Suggested approaches
Using machine learning to detect rule violations that happen in chat
This problem can be framed as a binary classification ML problem, and we already have a dataset on Discord: the server channel holds the chat history, and the reports channel tells us which messages, or sequences of messages, were considered violations.
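As a rough sketch of what such a classifier could look like, assuming the two channels have been exported to a CSV with hypothetical "message" and "violation" columns (the file name, export format, and column names are all assumptions, not part of the proposal), here is a TF-IDF plus logistic-regression baseline in scikit-learn:

```python
# Sketch of a binary chat-violation classifier, assuming the Discord
# history has been exported to a CSV with hypothetical columns
# "message" (text) and "violation" (0/1, derived from the reports channel).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("chat_dataset.csv")  # hypothetical export file
X_train, X_test, y_train, y_test = train_test_split(
    df["message"], df["violation"],
    test_size=0.2, stratify=df["violation"], random_state=42)

model = make_pipeline(
    # Character n-grams instead of word tokens, so obfuscated insults
    # (l33tspeak, spaced-out letters) still produce similar features.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    # Violations are rare relative to normal chat, so balance the classes.
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Character n-grams are a deliberate choice in this sketch: chat violations are often obfuscated in ways that slip past a word-level tokenizer.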
Automatically recruit trustworthy players and let a group of them perform some moderation tasks
Approach one
A list of core trusted players can be supplied. We can use the current game moderators plus some or all of the Guardians.
A fixed number of these trusted players can together recruit a new trusted player, e.g. 3 trusted players can vote to recruit another one.
A majority of the trusted players can de-recruit a trusted player, e.g. if we have a total of 10 trusted players, at least 6 of them must vote to remove one. A sketch of these rules follows below.
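Here is a minimal sketch of the voting rules above, using the thresholds from the examples (all names, such as TrustRegistry and RECRUIT_VOTES, are hypothetical and not taken from any existing mod):

```python
# Sketch of the recruit/de-recruit voting rules described above.
RECRUIT_VOTES = 3  # fixed number of trusted players needed to recruit

class TrustRegistry:
    def __init__(self, core_players):
        self.trusted = set(core_players)  # seeded with moderators/Guardians
        self.recruit_votes = {}           # candidate -> set of voters
        self.removal_votes = {}           # target -> set of voters

    def vote_recruit(self, voter, candidate):
        """Record a recruit vote; promote once RECRUIT_VOTES is reached."""
        if voter not in self.trusted or candidate in self.trusted:
            return False
        votes = self.recruit_votes.setdefault(candidate, set())
        votes.add(voter)
        if len(votes) >= RECRUIT_VOTES:
            self.trusted.add(candidate)
            del self.recruit_votes[candidate]
            return True
        return False

    def vote_remove(self, voter, target):
        """Record a removal vote; demote on a strict majority of all trusted."""
        if voter not in self.trusted or target not in self.trusted:
            return False
        votes = self.removal_votes.setdefault(target, set())
        votes.add(voter)
        if len(votes) > len(self.trusted) // 2:
            self.trusted.discard(target)
            del self.removal_votes[target]
            return True
        return False
```

With 10 trusted players, len(votes) > len(self.trusted) // 2 only passes at 6 or more votes, matching the example above.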
Approach two
TBD
The first approach is plain "type in whatever X typed to ban Y so you can ban Z". As far as I understood, your machine-learning whatever is based on the reports themselves, so the "smart" deus ex machina will never digest the report contents and compare them to the actual "happenings" on the server; it will ban exclusively on report syntax. Needless to say, it is trivial to enter the server with a new login and password, so you'll end up with a name-agnostic, fact-agnostic instant ban machine. And how are you going to separate cheaters from actual high-skill players, or combat the zero-skill crybabies? The interesting part is that you'll get actual feedback in only two cases: once you run out of players to ban, or once it bans someone "important", as I doubt many people will rush to your ban-dispensing Discord server inquiring about their ban reason. They'll just drop the server.
The second approach is abusable, albeit very inconvenient: votekicking by majority. Except the majority is pretty much a bunch of gatekeepers, so you might eventually end up with a bunch of idiots there (a.k.a. friends of friends of a moderator).